Research Post 8

When I saw the display of Super Mario Clouds in the Whitney Museum, I immediately recognized it, but did not understand the significance of just having the clouds there. I thought it was perhaps a reference back to when gaming started taking off. After watching the video, it is surprising to see how much history lies behind this work, with the advent of ROM hacking and what it implies about the ownership of code.

The narrator pointed out the holes in Arcangel’s story of how he had used an original Super Mario Bros. cartridge to extract the clouds. What was strange to me throughout the video was how obvious these mistakes were. For example, the difference in the blue color between Arcangel’s version and the real cartridge is very noticeable. The narrator also pointed out that the way Arcangel removed the PRG chip with clippers would destroy the mask ROM, and that he used the number 37 instead of 38 to display the clouds. If Arcangel was making an entire how-to tutorial on the process, you would think he would have made sure there were no mistakes. There are also many references to Arcangel’s work on the Mario clouds, and from what I could gather from the video, they simply passed over these mistakes.

This video made me think about how things can be manipulated in this field when it is all about the delivery of a final product. It also made me wonder why it was so significant to be able to manipulate an original work rather than just make a copy of it. Overall, it seems ROM hacking can take someone’s work and use it in a way they had not intended, but it can also build on another’s creativity when inspired.

Sophie_Final Project

For my final project, I decided to make sketches that could then come to life through animation. The goal of each sketch is to convey an idea or message from my own experiences that will hopefully connect with the viewer.

The first sketch shows a girl with a balloon stemming from her head; as the balloon grows, her face tightens. In the second half of the animation she breathes out, which you can see in her changing facial expression, the air being exhaled, and her relaxing shoulders. It is meant to show how we can build up worries in our heads that are often just nothing (like how a balloon is filled with air). Her exhale is like letting the air out of the balloon.
The second sketch is of a young girl on a swing; when she jumps off, instead of falling to the ground she begins to fly. It is meant to capture the imagination of children and how important it is not to lose that optimism for what is possible as you get older.

When first beginning this project, I attempted to use both the p5.js library Scribble and the built-in shapes to create objects that could then be moved around. I struggled to make an accurate drawing with this method, so I moved on to hand-drawing each part and scanning it into Adobe Illustrator, where I made it look computerized. For my first sketch, featuring the girl with a balloon, I created the animation by varying the speed of the pieces of the drawing; with this method, I was limited in how the image could be moved. For the next sketch, featuring the girl swinging, I again hand-drew the parts of the image, but this time created each frame in Illustrator. With reference to Daniel Shiffman’s tutorials, I made a JSON array with the coordinates, width, and height of each frame. In the p5 editor, with the frames and JSON uploaded, I used the frame reference points to cycle through the animation and then loop back to the beginning, as sketched below. Now that I am familiar with these methods, I feel that with practice I could make these sketches with finer detail and fluidity.
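
A minimal sketch of that frame-cycling approach (the file names and JSON shape here are hypothetical, e.g. a frames.json like { "frames": [ { "x": 0, "y": 0, "w": 200, "h": 200 }, ... ] }):

let sheet;
let data;
let index = 0;

function preload() {
  sheet = loadImage('swing.png');   // hypothetical sprite sheet
  data = loadJSON('frames.json');   // hypothetical frame coordinates
}

function setup() {
  createCanvas(400, 400);
  frameRate(12); // slow enough that each drawing reads as a frame
}

function draw() {
  background(255);
  let f = data.frames[index];
  // draw only this frame's region of the sprite sheet
  image(sheet, 0, 0, f.w, f.h, f.x, f.y, f.w, f.h);
  index = (index + 1) % data.frames.length; // loop back to the start
}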

https://www.openprocessing.org/sketch/641828

https://www.openprocessing.org/sketch/641736

https://www.openprocessing.org/sketch/644313

Some sketches:

GIFs of work:

Resources:

https://www.youtube.com/user/shiffman

Instagram: @bymariandrew

Instagram: @alecwithpen

Research Post 7

This work was made by Lillian Schwartz in the 1970s; she was a leading figure in creating computer-mediated artwork. When I saw this display, I thought the color blocks were really cool and wondered how they were generated in a way that was both methodically changing and aesthetically working together. Schwartz made these at AT&T Bell Laboratories, at a time when this type of computer-generated artwork with color had not really been seen before. For one of these, called Enigma, she used a program called EXPLOR, which divided the screen into grids. In one part of the film, it alternates between black and white to create a strobe effect. In the second part, it explores the interaction between chromatic colors. The program randomly selected the areas and shapes, and this random function intrigued Schwartz.
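
Just as a rough homage (this is plain p5.js, not EXPLOR itself), here is how a grid with random cell selection and a black/white strobe might look:

const cols = 16;
const rows = 12;

function setup() {
  createCanvas(640, 480);
  frameRate(8); // slow enough to read each pattern
}

function draw() {
  // flip between white-on-black and black-on-white every few frames
  let flip = floor(frameCount / 4) % 2 === 0;
  background(flip ? 0 : 255);
  fill(flip ? 255 : 0);
  noStroke();
  const w = width / cols;
  const h = height / rows;
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      if (random() < 0.3) rect(i * w, j * h, w, h); // random cell selection
    }
  }
}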

Final Project Ideas

Idea 1:

A series of short animations in a hand-drawn, notebook style. I have old journal drawings and ideas that I’ve always wanted to make into full animations. They require some captions and talk bubbles, for which I would use what we recently learned about displaying JSON text (a rough sketch of this follows below). I would have to make individual elements of the drawing move very fluidly, and would also build on the walking-man project we worked on. I would want to present it in the specific hand-drawn style I have in mind.
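
A rough sketch of the caption idea, assuming a hypothetical captions.json like { "lines": ["hello!", "what a day"] }:

let captions;
let i = 0;

function preload() {
  captions = loadJSON('captions.json'); // hypothetical caption file
}

function setup() {
  createCanvas(400, 300);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255);
  // talk bubble: a rounded rectangle plus a small tail
  fill(255);
  stroke(0);
  rect(100, 60, 200, 60, 15);
  triangle(180, 120, 200, 120, 170, 150);
  fill(0);
  noStroke();
  text(captions.lines[i], 200, 90);
}

function mousePressed() {
  i = (i + 1) % captions.lines.length; // cycle through the captions
}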


Idea 2:

Go back to the time-visualization project. When I was working on it, I was struggling just to get the clock to work the way it was meant to, and didn’t get to fully explore the aesthetic side. I would want to go back and build up the idea the clock is based on. Because it is meant to be about the perception of time, I could put the plain clock I made inside someone’s head as they move around and go through different scenes. Because parts of the clock are sped up, I could have these scenes look sped up as well, like a time-lapse. To do this, I would need to work with objects and arrays to make up the elements of each scene, and create a sort of path that they move along at varied speeds (sketched below).
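
A minimal sketch of that idea, assuming each scene element carries its own speed multiplier so parts of the scene look time-lapsed:

class Element {
  constructor(y, speed) {
    this.x = 0;
    this.y = y;
    this.speed = speed; // time-lapse factor for this element
  }
  update() {
    this.x = (this.x + this.speed) % width; // wrap around the path
  }
  show() {
    circle(this.x, this.y, 20);
  }
}

let elements = [];

function setup() {
  createCanvas(600, 300);
  elements.push(new Element(100, 1)); // normal speed
  elements.push(new Element(150, 4)); // sped up
  elements.push(new Element(200, 8)); // time-lapse fast
}

function draw() {
  background(240);
  for (let e of elements) {
    e.update();
    e.show();
  }
}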


Idea 3:

There is an API called Houndify that is speech-enabling. I could create an animation of a character that tells you your schedule for the day. I would use a JSON file to list the schedule, using Houndify to read it off, and then would want to make a cute character design to “read” it, perhaps showing scenes of what happens within the schedule.
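
A rough sketch of the schedule reader, using p5.Speech (covered in Research Post 5 below) as a stand-in for Houndify, with a hypothetical schedule.json like { "events": ["9am class", "12pm lunch"] }:

let schedule;
let voice;

function preload() {
  schedule = loadJSON('schedule.json'); // hypothetical schedule file
}

function setup() {
  createCanvas(400, 200);
  voice = new p5.Speech(); // from the p5.speech library
  textAlign(CENTER, CENTER);
  text('click to hear your schedule', width / 2, height / 2);
}

function mousePressed() {
  // read the whole schedule aloud on click
  voice.speak('Your schedule today: ' + schedule.events.join(', '));
}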

Sophia_ResearchPost6

The OpenStreetMap API lets you retrieve and store raw geographical data in the OpenStreetMap database. While this API provides data, it is not meant for serving maps to web pages the way Google Maps is; it is intended more for map-editor software. You can still, however, request sections of map data to use for a map, as in the example below. It is open source, so anyone can use it to build what they want, and anyone can edit the data. The current version of the API is 0.6.
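
For example, a small bounding box of raw map data can be requested from the version 0.6 map endpoint (the coordinates here are just an illustration; the order is left,bottom,right,top):

const bbox = '-73.9980,40.7290,-73.9950,40.7310'; // a few NYC blocks
fetch('https://api.openstreetmap.org/api/0.6/map?bbox=' + bbox)
  .then((res) => res.text())                     // the API returns OSM XML
  .then((xml) => console.log(xml.slice(0, 500))) // peek at the response
  .catch((err) => console.error(err));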


Sophie_Generative Landscape

https://www.openprocessing.org/sketch/623524

For my generative landscape, I created an animation of a coral reef. I drew each object in the reef individually, but then found it more efficient to compile the objects into one drawing, still using the canvas background so I could add more things. Once the drawing was made, I experimented to arrive at what I hope is the correct zoom and canvas size. The image moves along, and as it reaches the end, another copy of it is displayed so that it looks like you are moving through a wide space (sketched below). As it moves, you can see fish swimming by. I didn’t draw the fish myself; I found them on https://www.kisspng.com/
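
A minimal sketch of the wraparound scrolling, assuming the compiled reef is one wide image (reef.png here is hypothetical):

let reef;
let x = 0;

function preload() {
  reef = loadImage('reef.png'); // hypothetical compiled reef drawing
}

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(20, 80, 140);        // open-water blue behind the drawing
  image(reef, x, 0);
  image(reef, x + reef.width, 0); // second copy fills the gap
  x -= 2;                         // scroll speed
  if (x <= -reef.width) x = 0;    // loop back seamlessly
}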

There are some bubbles at the start, but because the bubbles are rising, I had trouble getting them to appear later on without floating up too soon. I tried using if statements so that they would only begin at a later point, but am still working on this (one approach is sketched after the video links below). I referred to these videos from Shiffman throughout:

https://www.youtube.com/watch?v=LO3Awjn_gyU

https://www.youtube.com/watch?v=o9sgjuh-CBM
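
Building on the if-statement idea above, one possible approach is to give each bubble its own start frame and only update and draw it once frameCount passes that point (the Bubble class here is just a sketch):

class Bubble {
  constructor(x, startFrame) {
    this.x = x;
    this.y = height; // begin at the bottom of the scene
    this.startFrame = startFrame;
  }
  update() {
    if (frameCount < this.startFrame) return; // not born yet
    this.y -= 1.5; // rise
  }
  show() {
    if (frameCount < this.startFrame) return;
    noFill();
    stroke(255);
    circle(this.x, this.y, 12);
  }
}

let bubbles = [];

function setup() {
  createCanvas(600, 400);
  for (let i = 0; i < 10; i++) {
    // stagger start frames so bubbles appear throughout the scene
    bubbles.push(new Bubble(random(width), i * 60));
  }
}

function draw() {
  background(20, 80, 140);
  for (let b of bubbles) {
    b.update();
    b.show();
  }
}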

Sophia_ResearchPost5

p5.Speech was created by Luke DuBois to connect sketches to speech, so that they may talk and listen. It wraps the browser’s Web Speech API, giving you both speech synthesis and speech recognition. This type of technology is really interesting, as it speeds up and simplifies communication between us and a computer. It is also useful for those who have physical difficulty with typing.

http://ability.nyu.edu/p5.js-speech/

An example of this: http://ability.nyu.edu/p5.js-speech/examples/02speechbox.html

In this example, you can type out what you want it to say. You can then adjust the volume, speech rate, and pitch.
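
A minimal sketch of those controls, assuming the p5.speech library is loaded alongside p5.js:

let voice;

function setup() {
  noCanvas();
  voice = new p5.Speech();
  voice.setVolume(0.8); // 0.0 to 1.0
  voice.setRate(1.2);   // speaking speed
  voice.setPitch(1.5);  // voice pitch
}

function mousePressed() {
  voice.speak('Hello from p5.Speech!');
}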

Sophia_Generative Landscape Ideas


1. Show a coral reef growing in the ocean. Start with sand at the bottom of the ocean, and slowly grow each piece until it fills up the screen. It could have unexpected moments when fish come and interact with it.

2. A garden of flowers growing, as the background changes with rain clouds, sunlight, and night.

3. Show the CO2 cycle and how carbon builds up in the atmosphere from various sources.

Sophia_ResearchPost4

Robert Hodgin is a creative coder. He uses tools from animation and computer graphics to depict real-life concepts in physics and astronomy. He has a piece called Planet Earth, where he uses three nested spheres to show the earth, the clouds, and the atmosphere. The project developed into a deeply detailed depiction. NASA lent him textures to use so he could render these in high resolution. The ocean is lit differently to show its reflectiveness, and city lights are used as a texture to show population in different areas of the world. These texture choices combine into a highly realistic depiction of the earth and its surroundings. For the atmosphere, he drew on Sean O’Neil’s work on atmospheric scattering. Shadows behind the clouds (using a mask) help show the separation between the clouds and the earth, and the glow and coloring of sunrises and sunsets also show the space between the atmosphere and the surface. This project is a great way to help wrap our brains around the bigger picture of the earth, using accurate small details to build it up.
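
A tiny p5.js WEBGL sketch of the nested-sphere idea (the texture files here are hypothetical, and this is nowhere near Hodgin’s level of detail):

let earthTex, cloudTex;

function preload() {
  earthTex = loadImage('earth.jpg');  // hypothetical surface texture
  cloudTex = loadImage('clouds.png'); // hypothetical cloud layer
}

function setup() {
  createCanvas(600, 600, WEBGL);
}

function draw() {
  background(0);
  noStroke();

  push();
  rotateY(frameCount * 0.005); // slow spin for the planet
  texture(earthTex);
  sphere(150);
  pop();

  push();
  rotateY(frameCount * 0.007); // clouds drift faster than the surface
  texture(cloudTex);
  sphere(156);                 // slightly larger nested shell
  pop();
}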