Final Project Concept

This project is all about how we “send messages.” We send messages intentionally, for example when we text someone, and unintentionally through our body language. The idea came from my interest in how we have grown to communicate in the digital age: as new technologies for communication are introduced, less and less of our bodies is involved. I want to explore what we lose when digital communication reduces sending messages to loved ones to our fingertips. What will happen to our bodies if technology continues to grow? How much control do we actually have when we send messages to people?

 

diagram_final-001

Final Project Proposal

I will be combining my final projects for Live Web and Interaction Design Studio. My goal is to make a physical communication device for people with loved ones who live far away. What I find fascinating about long-distance communication is that we miss something that cannot be simulated. I want to find a way to communicate with someone far away that is unobtrusive, private, and uses the body.

Questions:

How can we communicate using our bodies?

Can we use more of our bodies than our fingers to communicate?

Can I simulate what it feels like to communicate with someone who’s far away?

Initial idea:

fullsizerender-12

Above is an illustration of what this communication device would be like. Essentially, it would be a wearable that sends data about a person’s body movements through a socket server to another person. In the illustration above, I am proposing an accelerometer as the input: it sends data to the socket server, which relays it to the other person as a vibration output. That way, when person 1 sends a message, person 2 receives a vibration, giving the sensation of feeling someone else without them actually being there.
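To make the relay concrete, here is a minimal sketch of the server side, assuming a Node.js socket.io server; the port and the “movement”/“vibrate” event names are placeholders of my own, not a finished protocol:

```javascript
// Minimal relay sketch: person 1's wearable emits accelerometer data,
// the server forwards it to person 2's device as a vibration trigger.
// The port and event names ("movement", "vibrate") are placeholders.
var io = require('socket.io')(8080);

io.on('connection', function (socket) {
  socket.on('movement', function (data) {
    // Forward the reading to every other connected client.
    socket.broadcast.emit('vibrate', data);
  });
});
```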

fullsizerender-11

Live Mystery Dinner Party

For this week I worked with Skylar Jessen and Rebecca Leopold to create a live physical chat. Our concept was simple and straightforward: build a basic chat using PeerJS with our own Peer Server. The idea was to have two people simulate having dinner together without knowing who they were eating with, or even who they themselves were. To set this up, we decided there would be two teams, each consisting of a person having dinner and a group controlling what that person said during dinner.

We divided the class into teams and chose two people to be their Bots. The people chosen to have dinner together no longer know who they are, and furthermore don’t know who they are eating with. The AI Teams would stay inside the classroom, connected to their Bot via a PeerID. Each AI Team was given 10 questions and 10 answers, which they would send through their laptop webcams. Their Bot would receive a question and ask it to the other Bot, who would then wait for their own AI Team to show the answer.
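For reference, the PeerID pairing itself is small: each client registers with the shared PeerServer under a known ID and one side dials the other. A rough sketch, assuming a self-hosted PeerServer (the host, port, and IDs below are placeholders, not our actual setup):

```javascript
// Register this client (e.g. a Bot's phone) with the shared PeerServer.
// The host, port, and the IDs "bot-1" / "ai-team-1" are placeholders.
var peer = new Peer('bot-1', { host: 'our-peer-server.local', port: 9000, path: '/' });

// Call the AI Team by its PeerID and show their webcam (where the
// printed questions and answers are held up) on the phone.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function (stream) {
    var call = peer.call('ai-team-1', stream);
    call.on('stream', function (remoteStream) {
      document.querySelector('video').srcObject = remoteStream;
    });
  });
```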

Below is a diagram to show the interaction:

screenshot-2016-11-01-12-54-29

Technologies: JavaScript, HTML/CSS, PeerJS, WebRTC, Android phones, computers

Materials: Printed signs based on Twitter feeds, Dinner plates/silverware

Setup:

The class was split into the two AI Teams, and each was given a computer with the page loaded. We also secretly told each AI Team who their Bot was in order to direct the answers.

AI Team & Bot 1: Kanye West

AI Team & Bot 2: Leon Eckert

img_5484

Then the two Bots were sent outside of the classroom with their corresponding PeerIDs to connect to their AI Teams. The Bots, played by Lindsey Johnson and Aaron Montoya, had no idea who they were or who they were having dinner with. Aaron = Kanye; Lindsey = Leon.

img_5485

Screenshots from the mobile view:

screenshot-2016-11-01-13-31-12 screenshot-2016-11-01-13-31-32

We also wanted to be able to watch the interactions between the Bots as well as hear the questions and answers.

img_5490  img_5493

The AI Teams shuffled through their questions/answers to find the “right” thing to send over to their Bot. (There is no right answer to any of the questions.)

img_5487

img_5489     img_5496

After about 8 questions, we sent the Bots back to their AI Teams to decide who they had been and who the other Bot was.

img_5497

Week 6: Midterm

screenshot-2016-10-18-10-47-40 screenshot-2016-10-18-10-51-24 screenshot-2016-10-18-10-18-46

My Live Web midterm took me on quite a journey. I had a vision of creating a chat where you can’t see yourself except when you send a message.

Where this idea stemmed from:

When I video chat with people I typically use FaceTime. I have noticed that when I am chatting, the majority of the time I am just looking at myself. This tends to be distracting, as I pay more attention to how I look when I speak instead of listening and speaking in a meaningful way. Conversely, when I am not paying attention to how I look and am more engaged in the conversation, I occasionally notice how incredibly silly I look while actively listening.

Basic concept:

The default interface will be very plain, and the user will be prompted to send a message. When a message is sent, a picture is taken and displayed on the screen along with the text message.
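A rough sketch of this snapshot-on-send idea, assuming the local webcam is already playing into a hidden video element and that messages travel over a socket.io connection (the element ID, event name, and `socket` variable are placeholders):

```javascript
// When the user sends a message, grab the current webcam frame and
// send it along with the text. IDs and event names are placeholders.
function sendMessage(text) {
  var video = document.getElementById('me');
  var canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);

  socket.emit('chatmessage', {
    text: text,
    photo: canvas.toDataURL('image/png') // the "picture taken" on send
  });
}
```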

My first task was to get the general video chat working. It took some time, but I was able to get the chat working with multiple users. I also used Bootstrap to help with the styling, ideally putting the videos in a grid so they display nicely on the screen.

Test with 1 chatter:

screen-shot-2016-10-11-at-4-14-10-pm

screen-shot-2016-10-11-at-4-41-10-pm

Test with multiple users:

I sent out texts to a bunch of people, both in the States and abroad, saying: if you are on a computer, please go to this URL. And with that we had a little chat working.

screen-shot-2016-10-11-at-9-45-05-pm

screen-shot-2016-10-11-at-9-45-55-pm

What I would want to improve is the placement of the messages coming in and where the videos go.

I put this to the side and started to work on my other idea. I got everything set up but then started running into problems.

screenshot-2016-10-17-00-08-38

screenshot-2016-10-16-17-02-53

I asked a friend who is a backend developer to look at my code and find the issue. And alas! I was missing the Image and Text HTML tags. Now it’s at a point where it’s taking awkward photos; below is one of the first from my testing.
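Roughly what the fix amounted to: incoming messages need actual img and text elements to land in. A sketch with placeholder IDs and event names:

```javascript
// On the receiving end, build an <img> for the snapshot and a <p> for
// the text, then append both to the message list. Names are placeholders.
socket.on('chatmessage', function (msg) {
  var entry = document.createElement('div');

  var img = document.createElement('img');
  img.src = msg.photo;                // the snapshot taken on send

  var caption = document.createElement('p');
  caption.textContent = msg.text;     // the typed message

  entry.appendChild(img);
  entry.appendChild(caption);
  document.getElementById('messages').appendChild(entry);
});
```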

screenshot-2016-10-17-00-17-35

screenshot-2016-10-18-02-06-46

screenshot-2016-10-18-09-20-01

screenshot-2016-10-18-09-23-58

screenshot-2016-10-18-09-25-06

screenshot-2016-10-18-09-26-45

screenshot-2016-10-18-09-28-23

screenshot-2016-10-18-09-52-06

Week 5: PeerJS

screen-shot-2016-10-09-at-6-56-06-pm

Using PeerJS was a big challenge for me, and I am still not entirely sure what’s going on in this chat. Below shows when I got the example code from class working. Even after getting it to work, I wasn’t entirely sure why we call every other person in the chat, so eventually I took that part out of the code.
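For my own notes, the part I removed is the outgoing call; in a one-to-one chat only one side needs to place the call, and the other side can simply answer. A sketch of the answering side, assuming a `peer` object and a `localStream` from getUserMedia already exist (placeholder names):

```javascript
// Answering side: no outgoing call needed.
// `peer`, `localStream`, and `remoteVideo` are assumed to exist already.
peer.on('call', function (incoming) {
  incoming.answer(localStream);            // send our own video back
  incoming.on('stream', function (remote) {
    remoteVideo.srcObject = remote;        // show the caller's video
  });
});
```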

screen-shot-2016-10-09-at-7-21-33-pm

screen-shot-2016-10-09-at-9-14-19-pm

screen-shot-2016-10-09-at-9-37-07-pm

Once I got the video working, I decided

screen-shot-2016-10-09-at-10-45-37-pm

screen-shot-2016-10-09-at-10-52-03-pm

screen-shot-2016-10-10-at-4-30-33-pm

screen-shot-2016-10-10-at-4-44-37-pm

screen-shot-2016-10-10-at-4-44-13-pm

Week 4: WebRTC

Initial error :-/

screen-shot-2016-10-04-at-11-07-11-am

After five office hour sessions ===)

WORKING!

screen-shot-2016-10-03-at-5-27-19-pm

This week was a little bit better than last week. I officially completed week 2 (chat) and week 3 (canvas drawing). I wasn’t as successful with getUserMedia and WebRTC, but I’m working on it. It turns out my biggest issue was not matching the right IDs for the text I was sending and the text the other person was sending in the chat. I also wasn’t using broadcast emit in the server JavaScript; once I got that working, I could read incoming and outgoing messages in the terminal.
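The server-side piece I was missing, as a minimal sketch (the port and event name are placeholders): relay each incoming chat message to every other socket with broadcast emit, and log it so it shows up in the terminal.

```javascript
// Relay chat messages to everyone except the sender, and log them
// so incoming/outgoing traffic is visible in the terminal.
var io = require('socket.io')(8080);

io.on('connection', function (socket) {
  socket.on('chatmessage', function (msg) {
    console.log('message:', msg);
    socket.broadcast.emit('chatmessage', msg);
  });
});
```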

As for the WebRTC for week 4, I am able to run the server in the terminal, but when I go to the URL, nothing loads. Except, now this

screen-shot-2016-10-04-at-1-16-08-pm

Say whaaa?

Week 1: Portrait

For the first week’s assignment I was challenged to get back into coding after taking the summer off. I spent my entire summer heavily focused on design and hadn’t touched JavaScript since last spring. I wanted to keep my project simple but meaningful, and to write code that not only works but that I understand.

My concept for my portrait is to show my “unusual” talent of lip-syncing to songs that reflect my humor, passion, and love for simple design.

1. Load the page. See a looped video of me dancing. Hover and click the “Sing with me” button to go to the next page.

loadscreen

2. On the next page you see the title “Choose a lyric” with 6 lyrics/song titles to choose from.

screen-shot-2016-09-11-at-7-45-49-pm

3. When you select a title, the video associated with that song plays.

screen-shot-2016-09-11-at-7-38-31-pm

4. You can toggle between any of the 6 videos to sing along (a rough sketch of this toggle follows the list).

screen-shot-2016-09-11-at-7-38-41-pm

5. When a video ends, a new image appears.

screen-shot-2016-09-11-at-7-38-55-pm
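Here is a rough sketch of that toggle behavior: each title swaps the video source and plays it, and when a clip ends an image is shown. The element IDs, class name, and file paths are placeholders for my actual markup:

```javascript
// Swap the <video> source when a lyric/title is clicked; show an image
// when the clip ends. IDs, class names, and file paths are placeholders.
var player = document.getElementById('player');
var endCard = document.getElementById('end-card');

document.querySelectorAll('.lyric').forEach(function (button) {
  button.addEventListener('click', function () {
    endCard.style.display = 'none';
    player.src = button.dataset.video;   // e.g. "videos/song1.mp4"
    player.play();
  });
});

player.addEventListener('ended', function () {
  endCard.style.display = 'block';       // "a new image appears"
});
```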

Here is Week1 Portrait!

A live or synchronous site

It seems that live streaming opens a portal for anything you could possibly think of. The first live feed I will talk about is of stray cats in South Korea eating. Super interesting.

Why just watch cats when you can also spend your time watching other people’s dogs?

screen-shot-2016-09-13-at-11-48-26-am

http://livepuppycam.com/

———————————–

I need to revisit this question, but right now I would say a live site I find helpful and enjoyable is ZenHub. ZenHub is a project management tool built on top of GitHub, and it’s very similar to many other agile tools out there.

screen-shot-2016-09-13-at-9-16-03-am

What I like about ZenHub:

  1. Linked to GitHub
  2. Helps manage progress across an entire team
  3. Live updates are seen instantly

It’s a nice way to keep everyone informed about what each member of the team is working on.

More to come!