Music Interface Design

For this assignment we were asked to redesign the interface for a Max patch that Luke made. Since I am new to Max, I wanted to focus on understanding the basic elements of Max and how to connect things to make sound happen. Below is a screenshot of the basic patch I made, which controls the sound based on the light in the video. I am not 100% happy with it and hope to keep working on it after this class.

screenshot-2016-10-17-19-06-39

Week 6: Midterm

screenshot-2016-10-18-10-47-40 screenshot-2016-10-18-10-51-24 screenshot-2016-10-18-10-18-46

My Live Web midterm took me on quite a journey. I had a vision of a chat where you can't see yourself except when you send a message.

Where this idea stemmed from:

When I video chat with people, I typically use FaceTime. I have noticed that while chatting, the majority of the time I am just looking at myself. This tends to be distracting: I pay more attention to how I look when I speak than to listening and speaking in a meaningful way. Conversely, when I am not paying attention to how I look and am more engaged in the conversation, I occasionally notice how incredibly silly I look while actively listening.

Basic concept:

The default interface will be very plain, and the user will be prompted to send a message. When a message is sent, a picture is taken and displayed on the screen along with the text of the message.
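The concept above boils down to a few lines of browser JavaScript. This is a hedged sketch rather than my actual midterm code: the `chatmessage` event name, the `buildMessage` helper, and the socket setup are illustrative assumptions; `drawImage` and `canvas.toDataURL` are the real browser APIs for grabbing a frame from a video element.

```javascript
// Pure helper (illustrative): bundle the text and the snapshot together.
function buildMessage(text, imageDataUrl) {
  return { text: text, image: imageDataUrl, sentAt: Date.now() };
}

// Browser part: draw the current video frame onto a canvas and
// serialize it as a data URL.
function snapshotFrom(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png');
}

// On send: take the picture, package it with the text, emit both.
// `socket` is assumed to be a connected socket.io client.
function sendMessage(socket, video, text) {
  socket.emit('chatmessage', buildMessage(text, snapshotFrom(video)));
}
```

The key idea is that the camera stream stays hidden; only the frozen frame captured at send time ever reaches the screen.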

My first task was to get the general video chat working. It took some time, but I got the chat working with multiple users. I also used Bootstrap to help with the styling, with the goal of laying the videos out in a grid so they display nicely on the screen.

Test with 1 chatter:

screen-shot-2016-10-11-at-4-14-10-pm

screen-shot-2016-10-11-at-4-41-10-pm

Test with multiple users:

I sent texts to a bunch of people, both in the States and abroad, saying: if you are on a computer, please go to this URL. And with that, we had a little chat working.

screen-shot-2016-10-11-at-9-45-05-pm

screen-shot-2016-10-11-at-9-45-55-pm

What I would want to improve is the placement of the messages coming in and where the videos go.

I put this to the side and then started to work on my other idea. I got everything set up but then started running into problems.

screenshot-2016-10-17-00-08-38

screenshot-2016-10-16-17-02-53

I asked one of my friends who is a backend developer to look at my code and find the issue. And alas! I was missing the image and text HTML elements. Now it's at a point where it's taking awkward photos; below is one of the first, taken while I was testing.
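For anyone who hits the same wall: the messages were arriving over the socket, but nothing showed up because no `<img>` or text elements were ever created for them. A minimal sketch of that kind of fix, with the document passed in as a parameter (an illustrative choice so the function is easy to exercise with a stub):

```javascript
// When a message arrives, build the DOM for it: an <img> for the
// snapshot and a <p> for the text, wrapped in a <div>.
// `doc` is the page's `document`, passed in for testability.
function renderMessage(doc, msg) {
  const wrapper = doc.createElement('div');

  const img = doc.createElement('img');
  img.src = msg.image; // data URL of the photo taken on send

  const caption = doc.createElement('p');
  caption.textContent = msg.text;

  wrapper.appendChild(img);
  wrapper.appendChild(caption);
  return wrapper;
}

// In the page (names assumed): socket.on('chatmessage', (msg) =>
//   document.body.appendChild(renderMessage(document, msg)));
```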

screenshot-2016-10-17-00-17-35

screenshot-2016-10-18-02-06-46

screenshot-2016-10-18-09-20-01

screenshot-2016-10-18-09-23-58

screenshot-2016-10-18-09-25-06

screenshot-2016-10-18-09-26-45

screenshot-2016-10-18-09-28-23

screenshot-2016-10-18-09-52-06

Week 5: PeerJS

screen-shot-2016-10-09-at-6-56-06-pm

Using PeerJS was a big challenge for me, and I'm still not entirely sure what's going on in this chat. Below you can see the example code from class working. Even after getting it to work, I wasn't entirely sure why we were calling other people in the chat, so eventually I took that part out of the code.
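As I now understand it, the reason "calling other people" appears in the example at all is that every peer must both place calls and answer them; removing the call side means nobody ever initiates a media connection. A sketch of that symmetry, assuming the shape of the class example: `new Peer()`, `peer.call`, `peer.on('call')`, and `call.answer` are the real PeerJS API, while `showRemoteVideo` and how peer IDs get shared are assumptions here.

```javascript
// Pure helper: dial each known peer ID once, skipping ourselves.
function peersToCall(myId, alreadyCalled, knownIds) {
  return knownIds.filter((id) => id !== myId && !alreadyCalled.includes(id));
}

function startChat(localStream, showRemoteVideo) {
  const peer = new Peer(); // connects to the public PeerJS broker

  // Answer side: without this handler, callers get no stream back.
  peer.on('call', (call) => {
    call.answer(localStream);           // send our camera to the caller
    call.on('stream', showRemoteVideo); // display theirs when it arrives
  });

  // Call side: dial every other participant we learn about.
  const called = [];
  return function dial(knownIds) {
    for (const id of peersToCall(peer.id, called, knownIds)) {
      const call = peer.call(id, localStream);
      call.on('stream', showRemoteVideo);
      called.push(id);
    }
  };
}
```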

screen-shot-2016-10-09-at-7-21-33-pm

screen-shot-2016-10-09-at-9-14-19-pm

screen-shot-2016-10-09-at-9-37-07-pm

Once I got the video working, I decided

screen-shot-2016-10-09-at-10-45-37-pm

screen-shot-2016-10-09-at-10-52-03-pm

screen-shot-2016-10-10-at-4-30-33-pm

screen-shot-2016-10-10-at-4-44-37-pm

screen-shot-2016-10-10-at-4-44-13-pm

3 Minute Video Prototype

We are currently in the fourth week of Big Screens. My partner, Katie Temowski, and I have been working hard building our concept, collecting our visual inspirations, and beginning to craft a narrative. We have just completed our first 3-minute prototype for our experience.

Aim

Create an experience for the audience that encourages interaction, community, and love.

Our focus for this experience is really all about the audience. We want to send an overall message that says, "It's good to feel the feels." A few elements make our experience special: voiceovers, text, and color.

screen-shot-2016-10-07-at-12-53-27-pm

Week 4: WebRTC

Initial error :-/

screen-shot-2016-10-04-at-11-07-11-am

After five office hour sessions ===)

WORKING!

screen-shot-2016-10-03-at-5-27-19-pm

This week was a little bit better than last week. I officially completed week 2 (chat) and week 3 (canvas drawing). I wasn't as successful with getUserMedia and WebRTC, but I'm working on it. It turns out my biggest issue was not matching the right IDs for the text I was sending and the text the other person was sending in the chat. I also wasn't using broadcast emit in the server JavaScript; once I got that working, I could see incoming and outgoing messages in the terminal.
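The broadcast fix, factored into a handler so it can be exercised with a stub socket. The `chatmessage` event name is an assumption; `socket.broadcast.emit`, which relays to every connected client except the sender, is the real socket.io API, and it is the piece a chat needs since the sender already shows its own message locally.

```javascript
// Server-side message relay, as a standalone handler.
function attachChatHandlers(socket) {
  socket.on('chatmessage', (msg) => {
    // Log so traffic is visible in the terminal while debugging.
    console.log('message:', msg);
    // Relay to everyone else; plain socket.emit would only
    // echo the message back to its sender.
    socket.broadcast.emit('chatmessage', msg);
  });
}

// In the real server (names assumed):
//   io.on('connection', attachChatHandlers);
```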

As for the WebRTC for week 4, I am able to run the server in the terminal, but when I go to the URL, nothing loads. Except now, this:

screen-shot-2016-10-04-at-1-16-08-pm

Say whaaa?

Week 3: Connecting terms

WWND?

What would nature do?

For this assignment I was given four design terms to define and then connect into a narrative. The terms I was given were archetype, mimicry, scaling fallacy, and threat detection.

Definitions based on Lidwell’s Universal Design Principles:

Archetype:

Universal patterns of theme and form resulting from innate biases or dispositions; hardwired ideas and conceptions. For example, Harley-Davidson embodies the outlaw archetype.

Archetypes represent fundamental human motifs of our experience as we evolved; consequently, they evoke deep emotions.

http://www.soulcraft.co/essays/the_12_common_archetypes.html

Mimicry:

What would nature do? WWND

The act of copying properties of familiar objects, organisms, or environments in order to realize specific benefits afforded by those properties. Mimicry can be surface, behavioral, or functional, and it improves usability.

"I think the biggest innovations of the 21st century will be at the intersection of biology and technology. A new era is beginning." (Steve Jobs, referring to biomimicry)

Scaling Fallacy:

A tendency to assume that a system that works at one scale will also work at a smaller or larger one. The fallacy occurs when a designer assumes usability will be retained when a design is scaled up or down, i.e., in performance and interaction (e.g., Big Screens).

Threat Detection:

The natural ability to detect threatening stimuli more efficiently than non-threatening stimuli.

Examples: Smoking advertisements, animal protection

Each of these design principles relates to nature and how we naturally see things. When I think of archetype, I think of the brand of an object or design and who is going to use the product. Compare two types of vacuum design: manual and robotic. Looking at a Dyson, functionality drives the design. It's built for people who want to get things done and get their house cleaned; the design reflects its function by exposing how it moves across the floor and collects dirt. Looking at the Roomba, a robotic vacuum, we immediately know it doesn't need assistance; its form shows us that it works autonomously. As with threat detection, the design gives us information about what it does without our having to know much about the product itself.


screen-shot-2016-10-02-at-3-10-16-pm screen-shot-2016-10-02-at-3-13-14-pm

Considering that the original vacuum designs were quite large, I would assume the vacuum went through many design iterations to scale it down to its modern-day size (scaling fallacy).

The biggest takeaway from these design principles is that one of designers' main goals is to create products that are intuitive and visually compelling. To do this, it's best to pull from nature and from what people already know and are comfortable with. Looking back at last week's assignment, Good & Bad Design: BMW's focus is to combine the most modern car technology with the familiar. That way the user doesn't have to learn two new things at once and can truly take advantage of the technology without an overly complicated experience and a steep learning curve.

Examples of Mimicry:
(surface) software icons that look like the actual objects they represent, e.g., the cloud;
(behavioral) Tamagotchi
A super-popular gadget game in the '90s, the Tamagotchi mimicked the behavior of an actual pet. It was also ambiguous, letting children decide what kind of pet, realistic or not, they were taking care of.
screen-shot-2016-10-03-at-2-40-12-pm
(functional) Claw Machines
The infamous claw machine, seen at most arcades and amusement parks and likely the bane of millions of people, functionally mimics a human hand grabbing for something, making users believe it's not just about how many quarters you use.
screen-shot-2016-10-03-at-2-45-03-pm

Week 3: Future Scenarios & Artifacts

I am getting very excited about the development of this project now that I'm working with a topic I enjoy researching and exploring. I decided to study a future world where nobody sits, based on our growing concern with our deadly sedentary lifestyles. From this broad idea, I narrowed it down to a range that runs from a very realistic future to a very unlikely, sort of wacky world where everything is designed for standing.

Below are the slides I put together which include 10 prototype future scenarios based on this world, lifestyle, and artifacts.

screen-shot-2016-09-29-at-9-22-37-pm

screen-shot-2016-09-29-at-9-23-19-pm

screen-shot-2016-09-29-at-9-23-12-pm

screen-shot-2016-09-29-at-9-23-31-pm

screen-shot-2016-09-29-at-9-23-38-pm

screen-shot-2016-09-29-at-9-23-49-pm

screen-shot-2016-09-29-at-9-25-02-pm

screen-shot-2016-09-29-at-9-24-54-pm

screen-shot-2016-09-29-at-9-24-43-pm

screen-shot-2016-09-29-at-9-24-34-pm

screen-shot-2016-09-29-at-9-24-19-pm

screen-shot-2016-09-29-at-9-23-58-pm

screen-shot-2016-09-29-at-9-24-08-pm