Live Image Processing Final Proposal

For the final LIPP show, Anna and I will be working together to create a performance about the digitization of humanity and how it could look in an alternative or futuristic universe. Techniques for the performance will include OpenGL, face meshes, pre-recorded and live microscope feeds, and audio manipulation.

The concept combines ideas ranging from organic matter metamorphosing into a digital presence to artificial intelligence and the way those systems are becoming increasingly humanlike.

Below are a few examples of visual styles we are going to try to incorporate into the piece.

The Performance is broken up into four acts:

Act I:

1. Morgan and Anna are both at a table in the middle.

2. Computer Voice: “Baseline test has now begun” (something similar signifying test on organic life has taken place).

3. Morgan takes samples from both himself and Anna. Anna helps hand him petri dishes, etc.

4. Max visuals of organic life in petri dishes on projector.

Act II:

1. Start of distortion in cells (shown through visuals on screen).

2. Computer Voice: “Anomaly Detected”.

3. Cell distortion continues & increases.

4. Baseline-test narrative: Morgan and Anna repeat words and answer questions, each giving different answers. The audience is unsure which of them is the anomaly (or which answer is the right one for the test).

Act III:

1. Computer says one of them did not pass the test (reword later).

2. Computer Voice: “Warning, infected subject is highly contagious”.

3. Fade out of microscope (cell) visuals.

4. Anna and Morgan’s faces side by side, one on each screen. No distortions yet.

5. Anna and Morgan’s faces both start to distort (OpenGL face mesh and other manipulation techniques).

Act IV:

1. Anna and Morgan's faces start to merge together and distort into one abstract entity.

2. Abstract form/shapes show up on the screen (shaders).

3. Computer Voice: says something about the test being resolved (the abnormal/non-human won).

4. END.

Digital Meiosis, First Performance

Last week was our first performance for Live Image Processing. When I was thinking about the piece I wanted to perform I wasn’t really sure what direction I wanted to go in. For previous assignments I had made things that were visually interesting to me but not really meaningful.

After talking with Matt about my concerns with the performance, he gave me some good advice about experimenting and said the piece didn’t necessarily have to be meaningful. I began to experiment with a few different concepts. The first used stock political footage and voiceover from various politicians. This idea didn’t take me anywhere interesting, so I decided to switch to old home video footage. That idea eventually led me to footage of cells splitting under a microscope.

From here I started working on a piece that began as a very organic process and, over time, took the form of something extremely inorganic and geometric. Overall I was pretty happy with how the piece turned out given the short performance window. I enjoyed the element of performing the piece, and it made me more excited for the next two performances. Below is a test run-through of the piece. It varies from the final performance but follows most of the same timing and structure.

And some screenshots of the patch and presentation. The patch was broken into four different sections, and I crossfaded between each section except for one.

Screen Shot 2019-03-04 at 3.38.38 PM.png
Screen Shot 2019-03-04 at 3.38.50 PM.png

Additions to Playback System

This week I wanted to dig deeper into newer objects and see if I could create glitchy distortion effects. At this point I want to push myself to understand why certain effects and objects create such alarming visuals, and to work backwards from that so I can bring these kinds of visuals into my performances subtly, in a less immediately jarring way.

At this point, the thing I am struggling with most is keeping track of why objects have certain effects when I implement them farther down the patch. The more complex my patch has become, the less I understand why adding an effect that I think should manipulate the video in one way instead causes something unexpected and confusing. That, along with keeping track of the data so I can make sure I’m not missing planes, etc. These things seem like they will take time, so I’m just enjoying the learning process.
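The "missing planes" problem can be illustrated outside of Max. Below is a hypothetical numpy stand-in for a Jitter matrix (none of these names come from the actual patch): a 4-plane ARGB frame gets collapsed to a single luminance plane partway down the chain, and an effect later in the chain that assumes plane 1 is still "red" breaks.

```python
import numpy as np

# A Jitter-style 4-plane ARGB frame, modeled as (height, width, planes).
h, w = 4, 4
argb = np.zeros((h, w, 4), dtype=np.uint8)
argb[..., 0] = 255            # plane 0: alpha
argb[..., 1] = 200            # plane 1: red

# Collapsing to luminance (like a rgb2luma step) leaves only 1 plane.
luma = argb[..., 1:4].mean(axis=-1, keepdims=True).astype(np.uint8)

def tint_red(frame):
    """An effect that silently assumes a 4-plane ARGB frame."""
    out = frame.copy()
    out[..., 1] = 255         # boost what it believes is the red plane
    return out

# Feeding the 1-plane matrix into the 4-plane effect fails, because
# plane index 1 no longer exists.
try:
    tint_red(luma)
except IndexError:
    print("plane mismatch")   # prints "plane mismatch"
```

In Max the equivalent failure is usually quieter than an exception, which is part of why effects far down a patch behave in confusing ways.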

For my performance I’ve been thinking about the phrase “I am” and how it relates to the way I have been feeling since starting ITP. The way I have defined myself or just thought about who I am over the past few years seems to be changing constantly and over the past few months even more so. I’m not sure how I would convey that through the performance or even if it would be right for the performance but it’s something I’ve been thinking about. I also like the idea of just manipulating a bunch of videos/photo headshots I’ve taken of people.

My patch was broken up into three main parts. The middle and right parts were a modified version of what I had done last week, modified by adding in MSP signals and cleaning up a few of the less optimized areas.

The leftmost section was new. The idea was to make a sort of jumpy glitch effect that I could overlay between recorded video and screen caps of myself. I found a page online that showed how to make a nice glitch effect, but I honestly didn’t understand most of what was going on, so I broke it down and spent some time figuring it out piece by piece. I’m still not totally sure what is happening in it, but I figured out enough to make my own Frankenstein version. Finally, I took all the patches and made a presentation-mode overlay so I could work with them and blend them.
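One common way to get a "jumpy" glitch look is to shift random horizontal bands of a frame sideways, like a corrupted video signal. The sketch below is my own numpy approximation of that idea, not the Max patch from the post; the function name and parameters are assumptions.

```python
import numpy as np

def glitch(frame, bands=5, max_shift=20, seed=0):
    """Shift a few random horizontal bands of `frame` sideways."""
    rng = np.random.default_rng(seed)
    out = frame.copy()
    h = frame.shape[0]
    for _ in range(bands):
        top = int(rng.integers(0, h - 1))            # band start row
        height = int(rng.integers(1, max(2, h // 8)))  # band height
        shift = int(rng.integers(-max_shift, max_shift + 1))
        # Roll just this band horizontally; the rest of the frame stays put.
        out[top:top + height] = np.roll(out[top:top + height], shift, axis=1)
    return out

# A simple horizontal-gradient test frame stands in for a video frame.
frame = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
glitched = glitch(frame)
```

Re-running this every frame with a fresh seed is what produces the jumpy, unstable feel when it's overlaid on live video.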

Video Playback System

The assignment for this week was to begin building a video playback system. My goals were to get more comfortable with the objects and processes we had gone over in class, but also to experiment and see what else was possible.

Most of the issues I had came from still not being totally sure how a lot of objects work, such as chromakey and xfade, so understanding why the output acts the way it does is a bit difficult. I find the visual programming quite intuitive and fun, though. The major issue was that, even using window instead of pwindow, the framerate dropped significantly and there was a lot of lag when I loaded videos into the program. This is why I used live webcam footage for my documentation.

The patch that I created is broken up into four different parts.

The first part of the patch is responsible for unpacking the video’s values and sending the RGB planes out to the second portion. Once they return, they are repacked and sent into a gswitch for toggling between color and b&w. The second portion of the patch manipulates the zoom levels and anchor points of the first live video before sending it back; for this portion, Matt Romein’s sample patches from week 2 were used. Part 3 uses chromakey and rota to manipulate a second video feed. Finally, the fourth part uses xfade to fade the two videos together as the user likes.
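Conceptually, the chromakey and xfade steps above reduce to simple per-pixel operations. This is a minimal numpy sketch of those two ideas; the function names, tolerance value, and blending formula are my own assumptions, not the exact Jitter algorithms.

```python
import numpy as np

def chromakey(fg, bg, key_color, tol=30):
    """Where `fg` is within `tol` of `key_color`, show `bg` instead."""
    dist = np.linalg.norm(fg.astype(int) - np.array(key_color), axis=-1)
    mask = dist < tol          # pixels close enough to the key color
    out = fg.copy()
    out[mask] = bg[mask]
    return out

def xfade(a, b, t):
    """Linear crossfade between two frames, t in [0, 1]."""
    return ((1 - t) * a + t * b).astype(np.uint8)

# Two tiny solid-color "frames" to exercise both operations.
green = np.zeros((2, 2, 3), dtype=np.uint8); green[...] = (0, 255, 0)
blue  = np.zeros((2, 2, 3), dtype=np.uint8); blue[...]  = (0, 0, 255)

keyed = chromakey(green, blue, key_color=(0, 255, 0))  # every pixel keyed out
faded = xfade(green, blue, 0.5)                        # even 50/50 mix
```

Sweeping `t` from 0 to 1 over time is the crossfade behavior the fourth part of the patch exposes to the performer.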

Screen Shot 2019-02-14 at 8.18.22 AM.png

Part 1:

Screen Shot 2019-02-14 at 8.26.58 AM.png

Part 2:

Screen Shot 2019-02-14 at 8.25.54 AM.png

Part 3:

Screen Shot 2019-02-14 at 8.27.05 AM.png

Part 4:

Assignment 1: Short Video Clip Repository


For the first assignment we had to record 5-10 minutes of short video clips to use as a sample bank. Factors to think about were less about narrative and theme and more about visual elements like light, color, and shape. I come from a film photography background, and in that type of work I usually try to think about the visuals of the image I’m composing as well as theme and developing a narrative. Thinking in a different context proved challenging but interesting.

The videos I ended up taking were mostly clips from walking around my neighborhood in Bushwick, to work in Manhattan, and to and from ITP. During these walks I tried to pay attention to light, texture, and juxtaposition, both within the single video and between other clips I had taken. I tried to vary the clips in terms of scale of focus once I found a rhythm with the video taking process.


Below are a few of the clips I recorded: