Threads: Visualizing Body Electric in 12.4 Million Pixels

We visualized the body electric of three contemporary circus performers on a 120’ x 10’ screen in real time. These are the data considerations that most shaped the project.

Lisa Jamhoury
Dec 22, 2016
Françoise Voranger of Hybrid Movement Company performs Threads at IAC Headquarters. Photo by Andrew T. Foster.

When Aaron Montoya-Moraga and I found out we would have the opportunity to program the IAC media wall, we knew immediately we would create an experience working with data and live performance.

We had seen a lot of performances using motion capture — the performer moves and the visualizations respond to the movements — and began to consider how we could expose less obvious processes happening on a performer’s body. Our previous collaboration, Beats Exposed, which visualizes and sonifies a performer’s pulse in real time, was a first attempt at answering this question.

Threads is an interactive performance that continues to explore this question with the addition of EMG (muscle electricity) sensors, wearable lighting, and more complex visualizations.

Video excerpts of Threads performed by Hybrid Movement Company—Françoise Voranger, Marcus Anthony, and Rowen Sadlier—on Dec. 9, 2016 at the ITP Big Screens 10th Anniversary Show.

Throughout the project we asked ourselves a number of questions related to the data. What follows are the questions that most influenced our decisions and how we chose to answer them.

What data should we collect?

When working with data, arguably the most important—but often overlooked—decision is choosing what data to collect. For us, this presented a two-part question: what should we sense and from whom should we sense it?

Our original concept was built around "the head, the heart, and the hands." This idea represents the three human faculties used to create a piece of art: the heart is the passion needed to create the idea, the head is the mind that figures out how to make it happen, and the hands are the physical work that brings the art to fruition.

We decided we would represent these three ideas by measuring EEG (electroencephalography, or brain waves), EKG (electrocardiography, or pulse), and motion from accelerometers on the hands and feet.

We then asked Hybrid Movement Company, founded and led by Françoise Voranger, to collaborate on the project. The company displays a rare mastery of passion, focus, and execution that we thought would be a perfect fit. We were thrilled when Françoise agreed to collaborate with us and brought on her colleagues Marcus Anthony and Rowen Sadlier. The collaboration fundamentally shaped many aspects of the project.

Although our decision of "whom to sense" was spot on from the beginning, our decision of "what to sense" evolved significantly over our 16 weeks of development time.

In the early weeks of testing brain wave sensors, we discovered that the sensors we could afford (i.e., not medical grade) were disrupted too easily by movement. In the process we started to learn about the small voltages that signal changes in the body. As a result we decided to work with muscle electricity, or electromyography (EMG).

Early debug view of live data. From top to bottom: EMG, accelerometer, EKG.

When we eventually had all three of our sensors and our debug view working with the EKG, EMG, and accelerometers, we began to feel that the accelerometers were too close to motion capture and didn't fit the evolving concept of the project. We decided to cut them and were left sensing the body electric: EKG and EMG.

How do we ensure we have good data?

Ensuring we had trustworthy data was a big struggle at every step of the project.

On the physical side, we had a number of potential points of failure. We soldered each of our sensors to its own transceiver, which communicated with a unique receiver. Each wire and solder joint created an entry point for noise. In addition, the sensors were sensitive to hair, dirt, and fabric. We rebuilt the sensors several times to ensure the best readings possible. Some of the most important lessons we learned were to braid the wires to help fend off noise, and to always clean the sensor area with alcohol to avoid problems with sweat and dirt. We used a conductive gel with the EKG sensor to ensure a better reading.

For the performers wearing EMG sensors, we were sensing both the left and right trapezius muscles, and we would often lose one side. Eventually we realized this was because we were placing the sensor slightly off the muscle. Over time we got better at finding the perfect placement for the sensor and its ground. The stages of creating and placing the sensors were integral to having reliable data.

Aaron adjusts EMG sensors on Marcus’s back.

On the receiving end, we were using Max/MSP to process data coming in via the computer's serial port. This data was then sent over OSC to a Processing program. Despite the work we did to get clean data on the physical side, the range of the sensors still varied from day to day and from performer to performer. The nominal range of the EMG sensors is 0–1000, but we would find that on one performer, for example, we would only get a range of 0–600. To compensate, we built adjustable high and low bounds into our Processing code that could be set each time we ran the sketch.
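As a rough illustration, the receiving side might look something like the sketch below. The oscP5 library, the /emg address pattern, and the port number are assumptions for the example, not our exact setup.

```
// Receive an EMG value over OSC and normalize it against adjustable
// per-run bounds. oscP5, the "/emg" address, and the port are assumptions.
import oscP5.*;

OscP5 osc;
float emgLow  = 0;    // adjusted each run to match the day's observed range
float emgHigh = 600;  // e.g. a performer whose sensor only reached ~600
float emgNorm = 0;    // normalized 0–1 value used by the visualization

void setup() {
  size(640, 360);
  osc = new OscP5(this, 12000);  // listen on the port Max/MSP sends to
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/emg")) {
    float raw = msg.get(0).floatValue();  // nominal sensor range 0–1000
    emgNorm = constrain(map(raw, emgLow, emgHigh, 0, 1), 0, 1);
  }
}

void draw() {
  background(0);
  fill(255 * emgNorm);  // brightness tracks the normalized reading
  rect(0, 0, width * emgNorm, height);
}
```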

How obvious should the relationship between the data and its representation be?

The question of how direct the relationship between the data and its representation should be was an important one for us. On one hand, we felt that if the correlation were too literal, the experience would feel clinical. On the other hand, if we took too much "artistic license" in the representation and obscured the correlation, then we figured we shouldn't even bother working with live data. We worked hard to find that "goldilocks" position.

Our original idea was to create a generative piece of art that would be drawn on the screen over time. The sketch would be additive, and nothing would be erased from the screen during the three-minute performance. In this way, the data coming from the performers would create a unique piece of art with each performance. This approach had two problems: GPU performance at the screen's high resolution (11520 x 1080), and little visible correlation with the real-time performance.

Screenshots of early generative drawings made for the 11520 x 1080 screen.

In the end, we settled on a visualization that reflected what was happening in real time, but varied which aspects of the data we showed over the course of the piece. For example, at some points we showed only the range of the sensors through brightness, while at others we showed logic across the sensors by drawing different elements on screen.

How should it look?

The inspiration for the visual look and feel of the piece came from our writings based on “Visit to a Small Planet” by Elinor Fuchs. Through her questions about environment, mood and hidden spaces, we decided to create visualizations that represented a natural, human, “curved” world, and also showed the machine behind the project.

Screenshots of visualizations that begin to show the digital aspect of the project. Top is EMG, bottom EKG.

Our initial sketches tried to include the two sentiments in different scenes, but the feedback we received was that they felt too separate, and we needed to choose a direction. Eventually, we decided to let the dancers bring the natural movements to the piece, and allow the screen to represent the digital. This decision brought a fundamental change to the look and feel of the project.

The final visualization is based on a flow field, or vector field. The EKG is drawn on the grid of the flow field. For each pulse, a line is drawn on the grid, starting in the center of the screen and expanding over time to the edges. Each line pulses at the performer's heart rate at the moment it was placed on the screen. The brightness of a line has an inverse relationship to heart rate: a quicker heartbeat draws a darker line, and a slower heartbeat a brighter one.
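The brightness and pulsing behavior can be sketched in a few lines of Processing. The BPM range, brightness range, and scaled-down canvas below are assumptions for the example, not the values we used.

```
// One EKG line remembers the heart rate at the moment it was drawn,
// pulses at that rate, and is dimmer for faster heartbeats.
// The 50–150 BPM and brightness ranges are assumptions.
float bpmAtPlacement = 72;  // heart rate captured when the line was added

void setup() {
  size(1152, 108);  // scaled-down stand-in for the 11520 x 1080 wall
}

void draw() {
  background(0);

  // Faster heartbeats map to darker lines, slower ones to brighter lines.
  float baseBrightness = map(bpmAtPlacement, 50, 150, 255, 60);

  // Pulse the line at the same rate as the remembered heartbeat.
  float beatsPerSecond = bpmAtPlacement / 60.0;
  float pulse = 0.5 + 0.5 * sin(TWO_PI * beatsPerSecond * millis() / 1000.0);

  stroke(baseBrightness * pulse);
  line(width / 2, 0, width / 2, height);  // stand-in for one line on the flow-field grid
}
```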

Final visualization. From top: EKG visualization, EMG visualization, EMG visualization with logic across two performers.

The EMG is drawn starting at the left and right sides of the screen, moving toward the center. Each time a sample comes in, a new particle is added, and the particles move along the flow field. Brightness is mapped from the sensor's 0–1000 range: a higher voltage reading from the muscle is brighter, a lower one is dimmer. Aside from mapping intensity to brightness, certain conditions across all of the sensors trigger specific visualizations. For example, if one sensor on one performer reads high, it draws a white vertical line on screen; if both performers wearing EMG sensors top out both of their sensors, it draws a large triangle across the screen. In this way, the performers create the light in the room based on the intensity of their movements.
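The cross-sensor logic can be sketched like this. The threshold value and the simulated readings are placeholders; in the performance the readings arrived over OSC.

```
// Sketch of the cross-sensor logic: emgA and emgB stand in for the two
// performers' readings (0–1000). The 900 "topped out" threshold is assumed.
float emgA = 0, emgB = 0;
float HIGH_THRESHOLD = 900;

void setup() {
  size(1152, 108);
}

void draw() {
  background(0);

  // Simulated readings; in the performance these arrived over OSC.
  emgA = noise(frameCount * 0.01) * 1000;
  emgB = noise(1000 + frameCount * 0.01) * 1000;

  // Base mapping: more voltage from the muscle draws a brighter particle.
  noStroke();
  fill(map(emgA, 0, 1000, 0, 255));
  ellipse(width * 0.25, height / 2, 10, 10);
  fill(map(emgB, 0, 1000, 0, 255));
  ellipse(width * 0.75, height / 2, 10, 10);

  // One sensor reading high draws a white vertical line.
  if (emgA > HIGH_THRESHOLD || emgB > HIGH_THRESHOLD) {
    stroke(255);
    line(width / 2, 0, width / 2, height);
  }

  // Both performers topping out draws a large triangle across the screen.
  if (emgA > HIGH_THRESHOLD && emgB > HIGH_THRESHOLD) {
    noStroke();
    fill(255);
    triangle(0, height, width / 2, 0, width, height);
  }
}
```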

How should it sound?

Aaron Montoya-Moraga handled all of our signal processing in Max/MSP. In doing so, he also created a system to digitally affect music based on the incoming sensor data. For our first performance of the experience, the EMG sensors controlled the frequency cutoff of a lowpass filter, which affects the sensation of proximity to the sound source. The more the performers flexed their muscles, the more distant the music would sound. If they relaxed their muscles, the music source appeared to be closer.
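The filtering itself lived in Max/MSP, but the mapping it implied is simple enough to sketch. The cutoff range below is an assumption for illustration, not the values from the patch.

```
// More muscle activity lowers the lowpass cutoff, so the music sounds
// farther away; a relaxed muscle leaves it open and close-sounding.
// The 300 Hz to 18 kHz range is an assumption for illustration.
float emgToCutoffHz(float emg) {
  // emg: 0 (relaxed) to 1000 (fully flexed)
  return map(constrain(emg, 0, 1000), 0, 1000, 18000, 300);
}

void setup() {
  println(emgToCutoffHz(0));     // ~18000 Hz when relaxed: sound feels close
  println(emgToCutoffHz(1000));  // ~300 Hz at full flex: sound feels distant
}
```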

In the future we would like to compose a piece of music in real time entirely based on the performers’ incoming data.

“Threads” was presented on December 9, 2016 at IAC Headquarters as part of the ITP Big Screens 10th Anniversary Show.

A number of considerations outside of the decisions around data went into the creation of this piece. We’ll be writing about those in more detail over the coming weeks on the project website.

Our most heartfelt thanks go out to our collaborators at the Hybrid Movement Company, whose patience, hospitality, and positivity have continued to inspire us to work on this project, to Mimi Yin, Jer Thorp, Andrew Lazarow, Daniel Shiffman, Andy Sigler, Craig Pickard, Chang Liu, and Zach Lieberman for their awesome guidance, and to Sam Turner, Elizabeth White, Tiriree Kananuruk, Richard Lapham, Roxanne Kim, José Vega-Cebrián, Melissa Pamela Orozco Salazar, for volunteering their time to this project.
