
For the midterm I was happily paired with Kevin, who has very similar interests and ideas for projects. He had started this idea during the Synthesis lab and was looking to take it to the next level by turning it into a meaningful message. When he first showed me the project, it was a basic button press: each time the button was pressed, an ellipse would appear at a location based on the CSV file loaded into the sketch. Although I liked the idea of using the gun to show the staggering number of school shootings in the States, I didn’t want the focus to be so much about guns and shooting. I thought this was a great opportunity to explore how school shootings affect the perception other countries have of the U.S., and also how shootings are more about people than guns.

We worked on changing the sketch so that each time the button is pressed, a black rectangle grows down the y-axis. This creates a new background as the player shoots the gun. The school names are also set in white, which you might not notice at first, but as the background turns black, the words become more visible. This gives the impression that a person might not be aware of the game’s message until a few seconds in. My favorite aspect of this project is how the map evolves from the standard map of the U.S. to what looks like a new shape, or a new map drawn in the red “dots.” The message here is that even though we might see the map of the States as the original image, other countries might see this other map. Furthermore, this reflects how we have let ourselves be defined by tragedy, negligence, and the inability to acknowledge that we are all a part of the why and how of school shootings.
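The per-press logic behind this is small. Here’s a hypothetical JavaScript sketch of it (the names and numbers are mine, not the actual p5 code):

```javascript
// Hypothetical sketch of the midterm's core state logic.
const CANVAS_HEIGHT = 400;  // assumed canvas size
const PIXELS_PER_SHOT = 2;  // how far the black rectangle grows per press

// Each button press grows the black rectangle down the y-axis,
// capped at the bottom of the canvas.
function advanceBlackout(currentHeight) {
  return Math.min(currentHeight + PIXELS_PER_SHOT, CANVAS_HEIGHT);
}

// Each press also plots one red "dot" at the next shooting
// location loaded from the CSV file.
function nextDot(locations, shotCount) {
  return shotCount < locations.length ? locations[shotCount] : null;
}
```

In the real sketch, the draw loop would then fill a black rect from the top of the canvas down to the blackout height, with the school names drawn in white so they surface as the background darkens.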

Attaching the button to the gun was likely the most difficult part of the process. We tried double-sided tape, regular tape, hot glue, and super glue, and nothing seemed to work. Ultimately we soldered the wires from the board to the legs of the push button and then wrapped another wire around the handle of the gun a few times to lock it in place.

Final project ideas

I mentioned an idea in previous weeks about a device that would read an analog sensor, with a sketch changing color depending on the sensor’s readings. I’d like to take this basic idea of analog input and serial communication and combine it with my interest in language acquisition, making education more accessible, and finding a way to make both a more comfortable and private experience. I mean this in the sense that learning new things is incredible, but it also makes us incredibly vulnerable, and with vulnerability comes discomfort. We know that when we move out of our comfort zone, amazing things happen, and we’re open to an endless environment of learning, but that’s sometimes easier said than done. With that said, I want to explore finding the comfort in the discomfort.

So, how do we do this? One of my main goals as a teacher was to create an environment where my students felt so comfortable that they didn’t think of going to class as a daunting experience. In fact, if they didn’t think of my class as a class at all, I would consider that a success. In a three-hour lesson, I would begin with a 10-20 minute open conversation with my students. This let me learn about them through the things they would say, the topics they brought up, and inevitably something they’d show the class. We got lucky a few times and would learn that someone was an avid breakdancer or a famous actor in their country. Most importantly, though, this allowed them to speak freely in English without feeling judged, or worrying that they weren’t speaking in the right tense or using the correct gender pronoun.

Incredible things can happen within a class. There’s an undeniable bond that builds between teacher and students, and among the students themselves. If all the students feel comfortable, the environment can be so welcoming that the learning experience feels private. It’s the same idea as dancing out in a brightly lit public space the way we dance alone at home.

To explore this for my final project, I would like to focus on learning basic vocabulary and pronunciation in English. To do this, I would use p5 and a voice sensor. An image would appear when the sketch runs, and the person must say the name of the image. If it is correct, the background would change color. To build on this idea, I would like to associate the colors with a scale.






The scale would be tied to the pronunciation of a native English speaker. I would even include this information in the sketch: for example, that the reference is a native speaker from Chicago.
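One way the color scale could work: a match score from 0 to 1 (how close the learner’s word sounds to the native speaker’s) picks the background color. This is only a sketch of the idea; the thresholds, colors, and names here are placeholders, not a finished design:

```javascript
// Hypothetical color scale for pronunciation feedback.
// Bands are ordered from best match to worst; values are guesses.
const SCALE = [
  { min: 0.9, color: 'green' },   // near-native pronunciation
  { min: 0.6, color: 'yellow' },  // understandable, keep practicing
  { min: 0.0, color: 'red' },     // try again
];

// Return the background color for a given match score (0 to 1).
function scoreToColor(score) {
  return SCALE.find((band) => score >= band.min).color;
}
```

In the p5 sketch, `scoreToColor` would feed whatever value the voice sensor or recognition library reports into `background()`.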


Intro to Asynchronous communication lab


I used two FSRs for this lab. Although everything worked the way I wanted, my biggest takeaway is that I tend to copy and paste the code into Arduino and then into p5. This is an awful habit: I’m not engaging with the actual code, so I can’t truly understand why something isn’t working. The same was true for the second lab, Serial Input to the p5.js IDE. To combat this issue, I spent more time on related labs.

Below is the board setup for two labs I did with serial.


Using the board, I programmed p5 and Arduino together. When the ‘H’ key is pressed, it’s sent to the Arduino and turns on the light. When the light is on, the Arduino then sends a signal back to p5, which changes the color of the canvas background.
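The round trip can be sketched as two tiny handlers. Here the serial port is replaced by plain function calls so the flow is easy to follow; all the names and the background color are mine, not the lab’s code:

```javascript
// Arduino side: an 'H' byte turns the LED on and echoes a confirmation.
function arduinoOnSerial(incomingByte) {
  const ledOn = incomingByte === 'H';
  return { ledOn, reply: ledOn ? '1' : '0' };
}

// p5 side: when the confirmation byte comes back, change the background.
function p5OnSerial(reply, currentBackground) {
  return reply === '1' ? 'lightblue' : currentBackground;
}

// One key press, end to end:
const arduino = arduinoOnSerial('H');
const newBackground = p5OnSerial(arduino.reply, 'gray');
```

In the real labs, the two sides talk over the serial port instead of direct function calls, but the logic on each end is this small.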



To practice a bit more, I wanted to use my ICM sketch from last week. When the sketch begins, the light on the Arduino is off and the balloons begin to move. In p5, you must change the color of each balloon by clicking inside it. Once all of them are clicked, the light turns on.
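The “all clicked” check that decides when to signal the Arduino might look like this (a sketch with my own names, not the actual ICM code):

```javascript
// Hypothetical sketch of the balloon-clicking logic.
// Each balloon tracks whether it has been clicked inside.
function makeBalloon(x, y, r) {
  return { x, y, r, clicked: false };
}

// A mouse press inside a balloon marks it clicked (the real p5
// sketch would also change its color here).
function handleClick(balloon, mx, my) {
  const inside =
    (mx - balloon.x) ** 2 + (my - balloon.y) ** 2 <= balloon.r ** 2;
  if (inside) balloon.clicked = true;
  return balloon;
}

// Once every balloon is clicked, send '1' over serial to turn the LED on.
function serialByte(balloons) {
  return balloons.every((b) => b.clicked) ? '1' : '0';
}
```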

Preparing for serial



This week I spent some time redoing a few of the labs and getting more comfortable with other variable resistors. I mainly wanted to make sure that I understood the wiring of a circuit and how to code what I wanted to happen without referencing all of the code from the labs. My sub-aim of the week was to experiment with changing things around on my board in order to rev up some creativity for a bigger project.

The information about serial communication is a tad overwhelming at the moment, especially since I wasn’t able to attend the synthesis lab last Friday. However, I did get the digital and analog switches to work connecting p5 to Arduino. This gave me some hope for a project idea I’ve been thinking about for ICM that I would eventually like to combine with Arduino.

The readings were very helpful this week in terms of thinking about prototyping. I vow to start sketching ideas when they happen.

Here is a beginning sketch for a game that would use Arduino to control the color changes of moving balloons. Eventually the balloons would become the words for the colors, as a way to teach a child basic colors.

FullSizeRender 12

The reading by Igoe, Making Interactive Art: Set the Stage, Then Shut Up and Listen, spoke to me the most in terms of understanding the importance of feedback when making interactive art. It really highlights the importance of process over product when learning, as well as when teaching.

Getting more comfortable with circuits

This week I redid a few of the labs to practice setting up the board and coding. While practicing with variable resistors, I ran into some issues when I tried to include two photo sensors and two LEDs.

Below you can see that only the green LED is on when I plug in the Arduino. This was my first issue: I couldn’t figure out why the red light wasn’t turning on. I tried a few different LEDs, and it still didn’t come on.


The other issue was related to my code. I used the map function to change the range of the photo sensor so it could drive the LED output. But when I printed the values for the green LED, they still showed 400-900 instead of 0-255. Either I’m misunderstanding the map function, or something is wrong with my wiring.
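Arduino’s map() is plain integer re-scaling, so one likely culprit is printing the raw analogRead value instead of the variable holding map()’s result. Re-implementing it here in JavaScript, just to sanity-check the math, shows what the mapped values should look like:

```javascript
// Arduino's map() re-implemented in JavaScript to check the math.
// Arduino does this in integer arithmetic, so the result is truncated.
function arduinoMap(x, inMin, inMax, outMin, outMax) {
  return Math.trunc(((x - inMin) * (outMax - outMin)) / (inMax - inMin) + outMin);
}

// Scaling the photo sensor's observed range (400-900) to PWM (0-255):
const low = arduinoMap(400, 400, 900, 0, 255);   // 0
const mid = arduinoMap(650, 400, 900, 0, 255);   // 127
const high = arduinoMap(900, 400, 900, 0, 255);  // 255
```

If Serial.println still shows 400-900, the sketch is almost certainly printing the sensor reading rather than the mapped value.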

Servo Motor Lab

After trying out the Analog Output lab with tones (see previous post), I tried the Servo Motor lab.

It took me a few tries to understand where the analog input, power, and ground went in relation to the motor. It seems that the more I understand how the code and wiring function, the less creative I become in applying them to real applications.

Analog output tones (with flex sensors)

I decided to redo the tones lab this week using flex sensors as my variable resistors, rather than photo sensors. Although I like the idea of detecting light and using that to create an output, my results didn’t feel very accurate. And having finally realized that the shop has flex sensors to rent, I went to town.

A few strange things about this lab:

The flex sensor connected to Analog Input 1 didn’t register anything in the serial monitor. I swapped the flex sensor a few times, so it didn’t seem to be the sensor itself.

Also, a question about this lab: other than declaring the global variable speakerPin as 8, the code never mentions which analog inputs the flex sensors are connected to. How are the sensors reading the input? Why is that not important in the code?
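My best guess after rereading the lab: the code loops over the analog pins and passes each pin number straight to analogRead, so the inputs never need named global variables; only outputs like speakerPin need a name for pinMode. A JavaScript sketch of that pattern, with analogRead replaced by a stand-in that returns canned values instead of touching hardware:

```javascript
// Stand-in for Arduino's analogRead: returns canned sensor values
// instead of reading hardware.
const fakeReadings = [512, 300, 800];
function analogRead(pin) {
  return fakeReadings[pin];
}

// The loop passes pin numbers 0, 1, 2 directly to analogRead,
// so no per-sensor variable declarations are required.
function readSensors(sensorCount) {
  const values = [];
  for (let pin = 0; pin < sensorCount; pin++) {
    values.push(analogRead(pin));
  }
  return values;
}
```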




Sensing the surrounding


We are the sensors

This week in general was very overwhelming. There were days, moments actually, where I was feeling a lot of clarity and things would start to make more sense. And then suddenly, it felt as if I had lost everything and hadn’t understood anything to begin with. I’m of the belief that ITP is actually a language school. The languages we are learning aren’t ones we necessarily use to verbally communicate with one another. Rather, we are exploring ways in which we understand what’s happening around us; we’re exploring how to convey to others what we’re constantly thinking about; we’re exploring ways in which we want to see change in the world. We’re communicating through the things we create and build. And we’re using a variety of languages to do so: Arduino, p5.js, Photoshop, Illustrator, GitHub, Unity, C#. These are some of the languages I’m currently learning, the languages I’m attempting to use to create my ideas. And as with any language, the only way to learn one is to use it.

Learning about analog input and output has made more sense than anything we have done so far in PComp. I think that’s because it’s the most like how humans take in and convey information. In the workshop on Tuesday, the residents gave a physical representation of how analog input and output work.

“Let’s say I receive information. I am reading the information. analogRead. And then I want to tell someone else. analogWrite.”

Just like with humans, or possibly any living thing in the world, there’s no simple yes or no answer. We’re much more complicated than that. What’s my takeaway? We are the sensors here. We are the analog input, and what we create is the output.

Below was my favorite of the labs we did this week. I loved it because it inspired me the most. Although I didn’t build anything on top of it, I thought of ways to apply it to future projects.

I love the idea of sensing light. I also love the idea of helping people feel more comfortable, especially when learning. Here at ITP we are absorbing tons and tons of information. Sometimes after long days, when I try to go to sleep, I can’t turn my brain off. Sure, I could watch something mindless on Netflix, but that would still keep my brain busy. I want to create a device that senses the light in the room and plays a lullaby-type song. The sound can be altered and personalized for the person: the sounds of the ocean, rain, or even a calming song. I’d love to also adapt this for children.

Imagine a child’s mind whirling with constant wonder. They have so many thoughts going on at once, perceiving and sensing what they see, hear, or feel in the world. All they want to do is tell you about it. Ask questions. Get answers. Give their “output.” Then it’s time for bed. How do you calm their mind down? Other than reading, sometimes a child just wants to hear a song, which often results in an awkward lullaby. With this in mind, the device would sense the room getting darker and play a soft lullaby. It could even be personalized to be the voice of a parent or loved one when the child is sleeping at a friend’s house.
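The trigger logic for the device could be as small as a darkness threshold. The numbers here are guesses, not measured values; a real photo sensor would need calibrating to the room:

```javascript
// Hypothetical trigger logic for the lullaby device.
const DARK_THRESHOLD = 200; // sensor reading below this = dark room (a guess)

// Start the lullaby once the lights go out; once it's playing,
// keep playing so a passing shadow doesn't restart the song.
function shouldPlayLullaby(lightReading, alreadyPlaying) {
  if (alreadyPlaying) return true;
  return lightReading < DARK_THRESHOLD;
}
```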

Interactive Technology

I have a few ideas for interactive technology:


  • CVS self-checkout
  • Movie theater kiosk
  • Disney World Fast-Pass kiosk

For each of these, I can’t help but think of Bret Victor’s rant: using a screen isn’t as interactive as it really could be. Nothing changes shape or gives us clues that things are working the way they should.

I’m going to explore the interactive technology of self-checkout, specifically at CVS. I shop at CVS regularly. I have a personal reason for using self-checkout, and I’m curious whether my reason is similar to others’. Whenever I have to buy something I’m slightly embarrassed about, such as Gas-X, or an obscene amount of chocolate that’s obviously for me but that I pretend I will share, I go to self-checkout. This fits with Norman’s article on emotional design, as the experience is then more pleasing for the user.

However, funnily enough, the actual interaction most of us have with self-checkout kiosks is not very pleasurable. This ties in with Norman’s idea of affect and behavior. When you touch the screen but it registers a different button than the one you intended, it’s common to feel frustrated. You might even feel embarrassed. Perhaps this negative affect will cause you to hurry up and finish your transaction before too large a line piles up behind you. Conversely, you might be so excited that virtually no one (that you know of) saw you buy four types of candy bars and a bag of beef jerky that you breeze through the checkout process.

Something I always find funny about self-checkout kiosks is that there is typically an employee monitoring the process, either keeping an eye on the kiosks or helping customers figure out how to make the kiosk stop telling them to place their items in the bag.

Turning things on

I learned a lot doing the labs this week. I attended two Physical Computing workshops.

Workshop on 9/11:

  • What is a circuit?
  • What are these components?
  • I want to turn this LED on.


Workshop on 9/15:

  • Seriously, why do we use a multimeter?
  • No really, what are these components for?
  • I want to turn two LEDs on.


To apply what I learned about switches, I wanted to make a simple application that would ultimately encourage someone to finish a book.

When the book is completely closed, both sets of LEDs turn on:


When the book is halfway read, the yellow LEDs turn off, and only the red LEDs are on.


Wouldn’t you want those bright red lights to turn off? Me too! So, why not finish the book?
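The switch logic amounts to a small truth table. This is a sketch with my own names for the three book states, and it assumes finishing the book is what finally turns the red LEDs off:

```javascript
// Hypothetical truth table for the book's LED circuit.
// Closed: both sets on. Halfway: only red on. Finished: everything off.
function ledsForBook(state) {
  switch (state) {
    case 'closed':   return { yellow: true,  red: true  };
    case 'halfway':  return { yellow: false, red: true  };
    case 'finished': return { yellow: false, red: false };
    default:         return { yellow: false, red: false };
  }
}
```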


To demonstrate someone using the book:


Combining the readings from Crawford and Victor with the in-class exercise of creating a fantasy device, I see interactivity as a much more primal concept than I did before. I guess I had been identifying interaction with the digital age and disregarding how basic interaction can be.

Crawford’s definition of interaction as a conversation helps show that interaction not only needs two actors for the speaking and listening, but also requires time to process what each person is contributing. This makes me wonder how we, as producers, can control the result of a user’s interaction. Each time someone interacts with a device, person, etc., there is some unpredictability. Overall, Crawford’s definition highlights the importance of interaction for people to grow individually and expand their ideas.

Along these lines, this is what happened in class while creating fantasy devices. In many cases, initial ideas evolved after groups came together and expanded them to fit their shared vision. Having the devices be as unfeasible as possible opened up the possibility of creating something they might not have even thought about before class. I think that alone shows how powerful interaction is: not only were we interacting with our groups to expand our ideas and eventually settle on one, we were also interacting with the materials, and the materials were interacting with each other.

However, the idea that we were interacting with the junk materials goes against Crawford’s definition of interaction, as it would be impossible for a piece of paper to converse with metal and say, “Hey, this isn’t working.” Yet working with the materials in hand allowed me to understand how the idea would come together, which I think fits Victor’s rant about the importance of our hands.

Hands are such a basic tool, one I often take for granted. It’s true we’re always touching screens and pushing buttons, but what are we really interacting with? Just as we know when a conversation is good, we can also tell when something feels strange or good just with our hands. Victor shows that most of the physical, digital interaction we have today doesn’t return a physical reaction, which is an important part of the “conversation.”

In thinking about digital technology and interactivity, I would say that most of the apps we use to talk today, especially across the globe, are not truly interactive.