Common Objects Used Uncommonly

Interactive Installation

2022

Our team’s research objective was to extend the screen into physical space, prototyping unconventional forms of analog input through computer vision and machine learning.

For this project, our research question asked how collapsing the distance between the digital (screen space) and the physical (the space inhabited by the user) might heighten the user’s sense of presence. As part of that investigation, our team explored embodied interactions with analog objects that communicated with the screen space through computer vision. We aimed to inspire users to connect their physical bodies with the digital avatar, engendering a sense of surprise and delight.

Above: exploration sketches with teammates Shipra Balasubramani and David Oppenheim.

Through an iterative process, each team member developed individual code sketches based on our research and ideation phase, which we then refined and combined into the prototype.

Technical Description from the team’s Project Documentation:

Our larger vision would be a geometric installation offering many surfaces to project onto, filled with multiple everyday objects. We would play with the affordances and conventional associations of each object to create micro-narratives stemming from users’ own histories and relationships to those objects.

For this initial prototype we focused on an interaction with a single object, although we did work in a surprise second object to test the logic of our programmatic approach (state machines) and our object recognition library.
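A minimal sketch of that state-machine approach (the state names and transition logic here are our hypothetical reconstruction, not the project's actual code, which is linked below):

```javascript
// Hypothetical reconstruction of the state machine: the installation moves
// between three states depending on whether the tennis ball is detected
// at rest (on the pedestal or the ground).
const STATES = {
  GRID: 'grid',               // idle grid of digital tennis balls
  BALL_PERSON: 'ballPerson',  // balls form the user's skeleton
  DECONSTRUCT: 'deconstruct'  // balls fall away after the ball is released
};
let state = STATES.GRID;

// Called once per frame with the latest object-detection result.
function updateState(ballAtRest) {
  switch (state) {
    case STATES.GRID:
      if (!ballAtRest) state = STATES.BALL_PERSON; // user picked up the ball
      break;
    case STATES.BALL_PERSON:
      if (ballAtRest) state = STATES.DECONSTRUCT;  // ball dropped or returned
      break;
    case STATES.DECONSTRUCT:
      // a timer (see the user-flow sketch further below) returns us to GRID
      break;
  }
}
```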

We focused on designing a space that would not require instructions, relying instead on the affordances of the physical design – it was important that the installation feel alive when the user first entered (the screen displayed the user’s camera image and a grid of moving tennis balls) and that the analog object and its position afford interaction (we chose a tennis ball and positioned it on a lit pedestal).

We used p5.js in conjunction with ml5.js, the PoseNet machine learning model, and an object recognition model trained on the COCO dataset.
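As a rough illustration of how these pieces fit together (the callback and variable names are ours, and this follows the ml5.js 0.x API rather than the team's exact code):

```javascript
let video, poseNet, detector;
let poses = [];
let objects = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();

  // PoseNet reports the user's skeleton as an array of keypoints.
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });

  // COCO-SSD recognizes everyday objects; COCO's 80 classes happen to
  // include both "sports ball" and "donut".
  detector = ml5.objectDetector('cocossd', detectLoop);
}

function detectLoop() {
  detector.detect(video, (err, results) => {
    objects = results || [];
    detectLoop(); // run detection continuously
  });
}

function draw() {
  image(video, 0, 0, width, height); // live mirror of the user
}
```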

We segmented our v1 prototype concept into features and created code sketches for each (object recognition and the state machine, the digital object rendered with GIFs, and the GIFs’ interactions with the skeleton), and then integrated our separate code bases for testing and debugging.
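A condensed illustration of the integration point between the skeleton and GIF features (continuing the sketch above; the function name and asset path are placeholders of ours): each digital tennis ball GIF is drawn at a PoseNet keypoint, so the Ball Person tracks the user's body.

```javascript
let ballGif;

function preload() {
  // p5.js 1.x plays animated GIFs loaded through loadImage();
  // 'tennis-ball.gif' is a placeholder asset name.
  ballGif = loadImage('tennis-ball.gif');
}

// Draw one tennis ball GIF per confidently tracked PoseNet keypoint.
function drawBallPerson() {
  if (poses.length === 0) return;
  for (const kp of poses[0].pose.keypoints) {
    if (kp.score > 0.2) { // skip joints PoseNet is unsure about
      image(ballGif, kp.position.x, kp.position.y, 40, 40);
    }
  }
}
```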

Final prototype v1 code: https://editor.p5js.org/tamikayamamoto/sketches/FKTt5dcfM

Fullscreen v1 code: https://editor.p5js.org/tamikayamamoto/full/FKTt5dcfM

 

User Experience Description + Feedback from the team's Project Documentation:

Above: User Flow Diagram of v1 prototype by David Oppenheim.

1. User enters the playspace and sees a grid of digital tennis balls on screen, a webcam capture that mirrors their movement, and a physical tennis ball displayed on a tripod in front of them.

2. User picks up the tennis ball. On screen, the ball grid deconstructs and forms the body of the user. The background fades to black and a parallax scene appears.

3. As the User moves, the Ball Person on screen mirrors their movement. The parallax background, which shifts with the user, suggests a 3D environment spanning both the digital screen and the physical playspace.

4. User releases the physical tennis ball, either dropping it to the ground or returning it to the pedestal (tripod). On screen, the Ball Person deconstructs and tennis balls fall to the ground.

5. End: after five seconds, the ball grid appears on screen once more (a sketch of this reset logic follows below).

*We partially prototyped a second object (a donut) but chose not to include it in the overall user experience, demoing it separately instead.
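The end of the flow (steps 4 and 5) can be expressed with simple gravity plus a timer; a hypothetical sketch reusing the state names from the earlier state-machine example:

```javascript
let balls = [];              // {x, y, vy} for each digital tennis ball
let deconstructStart = null;

function updateDeconstruct() {
  if (deconstructStart === null) deconstructStart = millis();

  // Step 4: the Ball Person breaks apart and the balls fall to the ground.
  for (const b of balls) {
    b.vy += 0.5;                      // simple gravity
    b.y = min(b.y + b.vy, height);    // rest at the bottom of the canvas
    ellipse(b.x, b.y, 30, 30);
  }

  // Step 5: after five seconds, return to the idle grid state.
  if (millis() - deconstructStart > 5000) {
    state = STATES.GRID;
    deconstructStart = null;
  }
}
```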

Feedback and Next Steps

Feedback was gathered during the critique rather than through formal user testing sessions.

The critique began with three volunteers who tested the installation before receiving any context from us as designers. We observed their sessions and took note of their body language and utterances. During the discussion that followed our verbal presentation, we asked for the testers’ observations. Additionally, we received feedback from individuals who had observed the three testers. Finally, a few more users tried the installation toward the end of the critique.

Our main takeaways from the session were:

  • Overall response to the experience was positive; users didn’t require instructions to move through the intended experience (pick up the tennis ball, play with it, and move their body);
  • Users seemed to enjoy the key moment of picking up the ball, watching themselves transform into a ‘tennis ball person’, moving around in that form, and then breaking apart (by dropping the ball or placing it back on the pedestal);
  • One user required prompting to pick up the ball; 
  • There was an acceptable moment of tension when users didn’t quite know what they were allowed (or supposed) to do with the tennis ball; however, all users quickly began to bounce it, throw it, or put it back on the pedestal;
  • Users did seem to want more complex behavior from the system; for example, they wanted one of the digital tennis balls to follow their analog tennis ball when they threw it in the air or against the wall;
  • Our demonstration of a second object (donut) seemed to be well-received, as was the larger vision of having multiple everyday objects available to play with.

Should we decide to further develop the project, we would begin by conducting formal user testing of this v1 prototype and then dive back into further research and ideation as part of a larger iterative design and build process.

 

This project was submitted in partial fulfillment of a Creation and Computation course at OCAD University’s Digital Futures graduate program, by Tamika Yamamoto, Shipra Balasubramani, and David Oppenheim. The project’s description and writing are a collaborative effort among all team members, taken from the primary project documentation.