Sensorship is a machine that censors your words in real time and transmits them to another person. In an ironic twist, at a time when the youngest generations have drifted away from making phone calls, two people try to hold a conversation through two telephone handsets, objects that are now mostly obsolete.

In our scenario, one person is a scientist: a member of the UN’s Intergovernmental Panel on Climate Change (IPCC) trying to transmit information from the IPCC’s latest climate report, one that lays out just how dire, pressing, and worrying this issue is. Sensorship will censor and transmute phrases like “global warming” into something else. It won’t allow the data we ask the scientist to broadcast to reach the other person; it searches, finds, and replaces in real time. The other person is the emissary: they will be able to tell the world about the data they learn. Their task is to receive the correct message from the researcher, and they’ll have to beat the system in creative ways.

*project at ITP/IMA Winter Show 2018


Our relationships with those around us are increasingly carried out on screens. As digital communication grows in inverse proportion to face-to-face interaction, it raises the question: what happens when what we mean to say is not faithfully transmitted to the person we’re communicating with?

The censorship of thought, language, and expression has, of course, been carried out in past ages, in our current times, and will likely continue in future eras. Yet most of it has taken the form of blockage: a hand obstructing a printing press, a story cut from a manuscript, a forced deletion on social media…

What happens, though, when your message comes through? We trust that our machines will relay our messages in good faith. But what happens when your message is altered, warped, and effectively pre-packaged for its recipient?


Users will have to try to communicate—somewhat ironically, by speaking into two analog phones—through a system that twists their nouns, adjectives, and verbs, replacing them with others of similar phonemes and related terms, and thus subtly altered meanings.

Our ideas about the space in which the users communicate started out with two phone boxes. Yep, the red, British ones. They would obviously be quite complicated to fabricate, so we moved on to a more simplified environment for the participants.

The core of all the censorship happens inside a mysterious black box, which houses the two laptops that process the respective directions of the communication. The box is decorated with electronic parts, computer hardware, and some extravagant cables, so that users can tell some sort of processing is happening without being explicitly told so.

In this manner, the censorship of the voice occurs not after the fact, but during transmission. Furthermore, the denial of climate change—especially in light of such a frightening report on the state of natural life on this planet—and the rejection of facts as ‘fake news’ are worrying. (To play on this notion, we included a few sentences that may be fun to say with it in mind.)

*work in progress box


The system can be described as a black box that reads analog input (the user’s voice), transforms the voice into text, manipulates that text, and speaks the altered text to the other user—and vice versa. Thus the main components are speech recognition, natural language processing (computational literature), and text-to-speech. The p5.js code relies on two libraries for these respective tasks:

However, since it is virtually impossible to make logical yet deceptive real-time replacements for arbitrary user input, we ended up hardcoding a list of phrases and words closely related to the topic, along with their respective replacements. That way, aside from the phrases the machine refuses to let through, the conversation can take place and the machine can act imperceptibly.
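The hardcoded search-and-replace step might look something like the sketch below. The phrase list and the replacements here are illustrative stand-ins, not the project’s exact table.

```javascript
// Hypothetical replacement table: phrases the machine will not let through,
// each mapped to a plausible-sounding substitute.
const replacements = {
  'global warming': 'gradual blossoming',
  'climate change': 'climate exchange',
  'carbon emissions': 'carbon omissions',
};

// Replace every known phrase in the recognized transcript.
// Longer phrases are matched first, so "carbon emissions" is caught
// before any shorter entry that might overlap it.
function censor(transcript) {
  let out = transcript.toLowerCase();
  const phrases = Object.keys(replacements)
    .sort((a, b) => b.length - a.length);
  for (const phrase of phrases) {
    out = out.split(phrase).join(replacements[phrase]);
  }
  return out;
}
```

For example, `censor('We must act on global warming now')` yields `'we must act on gradual blossoming now'`, while sentences without a listed phrase pass through untouched.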


by R. Luke DuBois, NYU Ability Lab

The p5.speech library tackles the input and output of the project—speech recognition and text-to-speech—by providing an easy-to-use interface to the Web Speech API.
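A minimal sketch of that I/O wiring, assuming p5.speech’s `p5.SpeechRec` and `p5.Speech` objects; `censorText()` here is a hypothetical placeholder for the text-manipulation step, not the project’s actual function.

```javascript
let recognizer; // speech-to-text for this handset
let voice;      // text-to-speech toward the other handset

function setup() {
  noCanvas();
  voice = new p5.Speech();
  recognizer = new p5.SpeechRec('en-US', gotSpeech);
  recognizer.continuous = true;      // keep listening between phrases
  recognizer.interimResults = false; // only act on final transcripts
  recognizer.start();
}

function gotSpeech() {
  if (recognizer.resultValue) {
    // Alter the recognized transcript, then speak it to the other user.
    const altered = censorText(recognizer.resultString);
    voice.speak(altered);
  }
}

// Placeholder for the search-and-replace step described above.
function censorText(transcript) {
  return transcript.replace(/global warming/gi, 'gradual blossoming');
}
```

With two such sketches (one per direction), each laptop listens on one handset and speaks the altered text out the other.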


by Daniel C. Howe

We used the RiTa.js library to perform natural language processing of the intermediate text, namely replacing words.
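A sketch of the word-replacement step, assuming the RiTa v2 API (`RiTa.tokenize` and `RiTa.untokenize`). When RiTa is not loaded (e.g. outside the sketch), a naive whitespace tokenizer stands in so the logic still runs; the substitutions are illustrative, not the project’s exact list.

```javascript
// Use RiTa's tokenizer when available; otherwise fall back to a
// simple whitespace split so this sketch runs standalone.
const tokenize = (typeof RiTa !== 'undefined')
  ? (s) => RiTa.tokenize(s)
  : (s) => s.split(/\s+/);
const untokenize = (typeof RiTa !== 'undefined')
  ? (ws) => RiTa.untokenize(ws)
  : (ws) => ws.join(' ');

// Illustrative single-word substitutions.
const wordSwaps = { warming: 'warning', emissions: 'omissions' };

// Tokenize the intermediate text, swap any listed word, and
// reassemble the sentence.
function replaceWords(text) {
  const words = tokenize(text);
  const swapped = words.map((w) => wordSwaps[w.toLowerCase()] || w);
  return untokenize(swapped);
}
```

Tokenizing first (rather than running a raw regex over the string) keeps punctuation intact and makes it easy to extend the logic with per-word checks.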


Link to the p5.js online editor

The browser may ask for permission to use the microphone.


The reactions to our project were fairly consistent: people were intrigued (and often delighted or amused) by the word replacement, but starting and holding a conversation proved difficult. People either weren’t sure what to say, or the machine twisted the other party’s words so heavily that there was nothing to say at all.

This provided the impetus for us to narrow our project’s focus to a single topic and offer some sort of dialogue starter. We’ll provide a scenario and a goal to get both parties talking, choosing a meaningful topic that coincides with the thematic elements of our project.


The users’ voices are uploaded to Google for speech recognition and may be put to other uses.