Final Project Blog

Image Distortion

Project description

For my final project, I decided to make a program that would distort images. I used the webcam in my laptop to capture images and videos and distorted them in 5 different ways. For the photos, I had 4 distortions:

(1) Draw ellipses using the color of the pixel each one is located at to create a blurry effect. Moving the mouse across the screen can erase some of the pixels.

(2) Create a black and white image based on the difference in color between two pixels that are next to each other (there is a simplified sketch of this pixel math after the list of distortions). Moving the mouse toward the right of the screen will intensify the effect and make the image darker.

(3) Enlarged square pixels are drawn at random spots in the inverse of the original image’s colors. Moving the mouse across the screen can erase some of the pixels.

(4) Circles with a radius of 150 are drawn with the outer edge more transparent and the center more opaque. Moving the mouse across the screen will reveal the pixels.
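
Here is a simplified p5.js version of the pixel math behind distortion (2). My actual project was written in Processing with the video library, so this is just an illustration of the idea using a still image (the filename is a placeholder):

let img;

function preload() {
  img = loadImage("photo.jpg"); // placeholder filename
}

function setup() {
  createCanvas(img.width, img.height);
  pixelDensity(1);
}

function draw() {
  img.loadPixels();
  loadPixels();
  // mouseX controls how strongly the differences are amplified
  let intensity = map(mouseX, 0, width, 1, 10);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width - 1; x++) {
      let i = 4 * (y * width + x);
      let iRight = i + 4; // the pixel directly to the right
      let b1 = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
      let b2 = (img.pixels[iRight] + img.pixels[iRight + 1] + img.pixels[iRight + 2]) / 3;
      // the bigger the difference between neighbors, the brighter the output pixel
      let diff = constrain(abs(b1 - b2) * intensity, 0, 255);
      pixels[i] = diff;
      pixels[i + 1] = diff;
      pixels[i + 2] = diff;
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
}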

For the video, I had one distortion that creates a lagging effect. I manipulated the pixels so that each one gets replaced by its adjacent pixel if the difference in color between them is greater than a certain value. Moving the mouse onto the film camera icon on the top left switches the program into video mode, and placing the mouse anywhere else switches it back to photo distortion mode.
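
Again, this is only a rough p5.js sketch of the idea, not my actual Processing code; it uses p5's createCapture() as a stand-in for Processing's Capture class:

let cam;

function setup() {
  createCanvas(640, 480);
  pixelDensity(1);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
}

function draw() {
  cam.loadPixels();
  if (cam.pixels.length === 0) return; // camera not ready yet
  loadPixels();
  let threshold = 60; // how different two neighbors must be before the pixel gets replaced
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width - 1; x++) {
      let i = 4 * (y * width + x);
      let iRight = i + 4;
      let diff = abs(cam.pixels[i] - cam.pixels[iRight]) +
                 abs(cam.pixels[i + 1] - cam.pixels[iRight + 1]) +
                 abs(cam.pixels[i + 2] - cam.pixels[iRight + 2]);
      // if the neighboring pixel is very different, use its color instead
      let src = diff > threshold ? iRight : i;
      pixels[i] = cam.pixels[src];
      pixels[i + 1] = cam.pixels[src + 1];
      pixels[i + 2] = cam.pixels[src + 2];
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
}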

Originally, I wasn’t going to use a camera to capture images in real time, but I thought doing so would make my program more interactive and therefore more fun. Because of this change, I ran into a lot of problems during the development process. Initially I wanted to add physics to the pixels in my image so they would disperse whenever the mouse is near. That worked for still images that I imported, but not for the video my webcam was capturing. I had to abandon that idea and instead explored how to manipulate and distort images and video captured with the webcam, and was able to do so. For this project, I learned a lot of new tools like “import processing.video.*”, “PImage”, and “Capture”, and I was very content with my end product.

Process sketches

proposed way of distortion in initial sketches
developing the distortions
initial starting screen
building the start screen
final starting screen

Video demo:

Bibliography:

  1. Molitor, O. (n.d.). Fundamental Fortitude. Retrieved November 26, 2018, from http://doc.gold.ac.uk/compartsblog/index.php/work/fundamental-fortitude/
  2. Processing Foundation. (n.d.). Images and Pixels. Retrieved from https://processing.org/tutorials/pixels/

Claudia_ResearchPost8

Super Mario Clouds

I wrote my last research post on this work, as it captured my attention as soon as I walked into the Whitney exhibition. Before watching this documentary, I knew that this piece was said to be made by hacking the Super Mario game console. But after watching the video, I was shocked by the result of this reverse engineering.

Arcangel claims that there’s no generation loss because it’s the exact same image, but that it’s also not copying because the code was never copied or altered. Looking at the code, however, there are actually a lot of differences compared to the original code in the game. The color and saturation did not match the original either. Therefore, critics were suspicious of whether Arcangel actually hacked the console or coded it himself. This led to Lemieux reverse engineering Super Mario Clouds using the ROM hacking process described by Arcangel. He realized that the coin at sprite zero could not be erased, suggesting that the Super Mario ROM was not contained in Arcangel’s work.

I was genuinely shocked by the result because during the research process for my last post, all the websites described Arcangel’s work as “made by ROM hacking”. It sounds so believable since the process is described so thoroughly and seems very simple and doable. The reverse engineering of the game really made me rethink everything. I don’t quite understand why Arcangel would claim that he hacked the ROM when he didn’t. But I think reverse engineering is definitely a good way to test out what might be wrong.

Claudia_FinalProject

Final Project Ideas

  1. Variable Face Project – I can build on my existing variable face project by adding a weather API. I can change my figure’s clothing according to the weather, and give it new outfits and accessories. I’m also planning on organizing my code and adding some animated objects to it. Many computer programmers have been using weather APIs to make weather applications, but not a lot of them use them to make art. I did find a data visualization of weather at this website: http://w2w.meteo.physik.uni-muenchen.de/cca/visualization/index.html
  2. Generative Landscape – I can build another generative landscape using the WEBGL 3D library. This library is important because it lets me add dimension to my figures instead of just having a flat 2D canvas, but it’s very challenging, since it involves a lot of planning and sketching. I’ll have to imagine everything in 3D. Instead of having figurative figures, I want to make this landscape more abstract: I want to use dots and lines rather than filled shapes. I also want to combine it with the time project so the landscape changes with the time. I’m inspired by Fletcher Bach’s computational artwork. He works with 3D terrains and uses lines and points to form his art.  http://fletcherbach.com/COMPUTATIONAL-ART
  3. text/image – I want to load images from famous movie scenes that have nice color schemes. But instead of displaying the image, I want to display text from the script, with each character in a different color according to the color of the pixel at that position. I’m thinking about something like this https://www.moma.org/calendar/exhibitions/3863 but instead of having separate pixels, I want to have small text in the color of each pixel. I’ll have to use a function to break down the image into pixels, build a grid, and store a character in each grid cell (see the rough sketch after this list). If time allows, I also want to use a video, or something animated. I will probably need to use p5.dom.js and the loadPixels() function.
  4. distortion – I want to make a still pattern/image with lines or pixels. Moving the mouse across the screen will distort the pattern and move the lines around. Depending on where the mouse is, different sounds will be played and different special effects will show. When the image/pattern is distorted enough, the screen will refresh into a new pattern. I’m planning on coding 3 or 4 patterns with different themes. I’m inspired by https://patatap.com/ and other pixelated artworks.
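
For idea 3, the core of the effect would look something like this rough p5.js sketch (the image file and the line of dialogue are placeholders):

let img;
// placeholder line of dialogue; the real version would load the full script
let script = "Here's looking at you, kid.";

function preload() {
  img = loadImage("scene.jpg"); // placeholder movie still
}

function setup() {
  createCanvas(img.width, img.height);
  background(0);
  textSize(10);
  textAlign(CENTER, CENTER);
  noLoop();
}

function draw() {
  let step = 10; // one character per 10x10 grid cell
  let charIndex = 0;
  for (let y = 0; y < img.height; y += step) {
    for (let x = 0; x < img.width; x += step) {
      fill(img.get(x, y)); // color of the pixel at this grid position
      // cycle through the script, one character per cell
      text(script.charAt(charIndex % script.length), x + step / 2, y + step / 2);
      charIndex++;
    }
  }
}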

Claudia_ResearchPost7

Whitney Programmed: Rules, Codes, and Choreographies in Art, 1965–2018 

  1. Super Mario Clouds by Cory Arcangel

The bright sky blue captured my attention as soon as I walked into the exhibition. The white Super Mario clouds are displayed with black pixelated outlines, moving across the screen. It reminded me of the Mario games my friends used to play on their Nintendos, and it gave me strong nostalgic feelings that made me walk closer to see what was really going on. I stood in front of it for a solid 5 minutes, but I didn’t see any repeated pattern in the clouds. They are randomly placed and the screen is ever-changing. The whole art piece was displayed on an old bulky TV with a 4:3 aspect ratio to further transmit the nostalgic and retro feeling, and the TV is hooked up to a real Nintendo console, as shown in the photo.

I learned that the piece was first released as web art oriented toward the hacker community. The game’s ROM (Read-Only Memory) was hacked digitally to create this effect, and then physically instantiated by modifying a copy of its NES cartridge. Years later, when Arcangel was asked to exhibit Super Mario Clouds as an installation in a gallery setting, he set up a multi-channel projection, with the projectors hooked up to an NES console displaying the output from the actual cartridge. This piece represents a rare, early interaction between the disparate contexts of the art world, Web 1.0, video gaming fandom, and hacker culture. Arcangel’s code, tightly wedded to the NES’s software, takes advantage of its color palette limitations and its method of drawing on the screen using CRT scan lines. His aesthetic and conceptual decisions in the creation and exhibition of the work effectively emphasized the workings of the NES console and its programming. It did not break from the original design and technology, but as a result of that faithful replication, it raises questions about copyright issues in the necessary preservation of game culture.

  2. {Software} Structures by Casey Reas

This work took me a while to notice because it took up the whole wall, and the whole color palette is muted and dark compared to the other pieces around it. But when I looked closely at it, the figures were actually changing as time passed. The lines and dots are visible, but not too crowded or overpopulated, so it’s really comfortable to watch. I sat on the bench in front of it for a while to observe the changes. It seems to be continually changing, erasing, and redrawing while never repeating. The darkness of the black background really makes the white lines pop, and the simplicity and minimalism make it satisfying to watch.

This piece is inspired by Sol LeWitt’s wall drawings. Reas explores the relevance of conceptual art to the idea of software as art using JavaScript, and tries to directly address the rules and instructions used in the piece’s creation. He created “a surface filled with 100 medium to small circles. Each has a different size and direction, but moves at the same slow rate”, and tries to display the instantaneous intersections of the circles as well as the aggregate intersections of the circles. The lines I see on the screen connect the intersections of the overlapping circles. A rough sketch of how I understand that rule is below.
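
This is my rough p5.js attempt at that rule as I understand it (not Reas’s actual code): 100 slowly moving circles, with a line drawn between the two intersection points of every pair of circles that overlap.

let circles = [];

function setup() {
  createCanvas(800, 400);
  for (let i = 0; i < 100; i++) {
    circles.push({
      x: random(width),
      y: random(height),
      r: random(10, 60),    // medium to small circles
      angle: random(TWO_PI) // each circle has its own direction
    });
  }
}

function draw() {
  background(0);
  stroke(255);
  // move every circle at the same slow rate, wrapping around the edges
  for (let c of circles) {
    c.x = (c.x + 0.5 * cos(c.angle) + width) % width;
    c.y = (c.y + 0.5 * sin(c.angle) + height) % height;
  }
  // connect the two intersection points of every overlapping pair of circles
  for (let i = 0; i < circles.length; i++) {
    for (let j = i + 1; j < circles.length; j++) {
      let a = circles[i];
      let b = circles[j];
      let d = dist(a.x, a.y, b.x, b.y);
      if (d > 0 && d < a.r + b.r && d > abs(a.r - b.r)) {
        let l = (a.r * a.r - b.r * b.r + d * d) / (2 * d);
        let h = sqrt(max(0, a.r * a.r - l * l));
        let px = a.x + (l / d) * (b.x - a.x);
        let py = a.y + (l / d) * (b.y - a.y);
        line(px + (h / d) * (b.y - a.y), py - (h / d) * (b.x - a.x),
             px - (h / d) * (b.y - a.y), py + (h / d) * (b.x - a.x));
      }
    }
  }
}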

Casey’s website that shows the design process and concepts behind his piece: http://artport.whitney.org/commissions/softwarestructures/map.html

other works in the exhibition:


Claudia_ResearchPost6

API: Chute


Chute is a visual marketing platform that allows users to search for media such as photos and videos that align with their marketing needs. Its developer API provides access to these services so developers can integrate Chute’s functionality with other applications. It enables developers to add media capture, management, metadata, and publishing to applications or websites. Even though this API allows users to access many media files, I believe the sources of the media files might not be accessible. That information probably exists, since Chute has to collect data about the media files, but I don’t think users will be able to access specifics about who posted a media file or what kind of device it was uploaded from.

Examples of how you can use the Chute API include:

  • Count the number of likes for media assets such as photos and videos.
  • Explore photos and videos by location filters to see what’s trending in certain areas.
  • Programmatically import media assets from websites and social media services such as Instagram.

API overview

all API access for Chute

Claudia_GenerativeLandscape

Water

For this project, I made a generative landscape under water. I used WEBGL to make the canvas 3D, and used triangle strips to create a terrain. Then I used Perlin noise to create wave-like features to make the terrain look like water (a stripped-down sketch of just the wave terrain is below). I also have a fish class with a lot of fish that swim in the ocean, a bubble class with bubbles floating up, 4 jellyfish, and 2 submarines. I used vectors to manipulate the positions of the fish and bubbles, and I used random() to distribute them randomly. The bubbles’ sizes also change so they move more realistically. The water and the background change color according to the current second. The mountains and submarines in the back shift to the right while the fish swim to the right, to make it seem like the canvas is moving.
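
Here is a stripped-down p5.js sketch of just the wave terrain (not my full project code), using WEBGL, TRIANGLE_STRIP, and Perlin noise; the grid size and colors are arbitrary:

let scl = 20;        // size of each grid cell
let w = 1400;        // terrain width
let h = 1000;        // terrain depth
let cols, rows;
let flying = 0;      // offset that scrolls the noise field so the waves move

function setup() {
  createCanvas(1400, 700, WEBGL);
  cols = w / scl;
  rows = h / scl;
}

function draw() {
  background(10, 20, 60);
  flying -= 0.02;
  stroke(80, 160, 255);
  noFill();
  rotateX(PI / 3);            // tilt the plane so it reads as a sea surface
  translate(-w / 2, -h / 2);
  for (let y = 0; y < rows - 1; y++) {
    beginShape(TRIANGLE_STRIP);
    for (let x = 0; x < cols; x++) {
      // Perlin noise gives smooth, wave-like height values
      let z1 = map(noise(x * 0.2, y * 0.2 + flying), 0, 1, -40, 40);
      let z2 = map(noise(x * 0.2, (y + 1) * 0.2 + flying), 0, 1, -40, 40);
      vertex(x * scl, y * scl, z1);
      vertex(x * scl, (y + 1) * scl, z2);
    }
    endShape();
  }
}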

During the process of coding, I encountered a lot of problems, a major one being the 3D terrain. After I made my water waves, I realized that I couldn’t load images the regular way. Instead, I had to “shatter” the image and put it in as vertices. It was really hard to manipulate the images, and I couldn’t really find anything that fit my overall aesthetic, so I decided not to use any. Instead, I coded everything myself, which took me a really long time. Another problem I noticed was that the canvas size changes when it’s displayed on different screens, so instead of using windowWidth and windowHeight, I used fixed dimensions (1400 x 700).

If I had more time, I would probably populate the ocean with more creatures and plants. I would also make the jellyfish loop around and come back to the bottom. The reason I didn’t figure that out is because I’m moving them using translate(), as opposed to using a variable like I did for the fish, mountains, submarines, and bubbles.

landscape
original idea
the gif image I was going to use

project code on open processing

(the mount() function is commented out for a smoother-running program)

references:

Triangle Strip

3D WEBGL

Claudia_ResearchPost5

p5.js Library: Dimension

This add-on to p5.js is intended to extend the p5 vector functions to any number of dimensions. Users can use as many points and dimensions as they like, as long as it’s less than 52. Currently, I believe all the standard vector functions in p5 can be used in the same way with this add-on. The function used to calculate the sum of two vectors:

result = nAdd(pos1, pos2);

And the vectors can be created like this:

var pos1 = nVector(1, 2, 3, 4); // for example, a 4D vector

link to library: https://github.com/Smilebags/p5.dimensions.js

example of an animated rotating 4D hypercube: https://github.com/Smilebags/p5.dimensions.js/blob/master/libraries/p5.dimensions.js

Generative Landscape Ideas

1. An alien figure is sucked up into a UFO. It travels from an apartment to the rooftop, then to the UFO, and then the UFO travels out of Earth into space.

2. A person travels on a small boat over ocean waves. As he goes to different places, different sea creatures appear.

3. Create a mountain landscape and display it from a first-person perspective so it looks like viewers are flying above the earth and the earth is rotating.

Claudia_ResearchPost4

EYEO 2017 – Zach Lieberman


Zach Lieberman is an artist and computer programmer who wants his work to be fully human and to break down the boundary between the visible and the invisible. Lieberman is very interested in drawing, especially the feeling of drawing on a computer, so he decided to explore the intersection of drawing and code. His goal is to develop new tools that allow people to use their artistic abilities digitally, because the current tools available are limiting.

While exploring art and drawing, Lieberman focused on changing the direction and angle of lines in order to create patterns. He observed the pattern that often shows up on airplane walls, where each of the line segments seems to be pointing at something or somewhere. That led to the creation of his project “Play the World”.

Zach Lieberman thinks of radio as a mechanism for interacting with the world. He is interested in the visual language behind radio and radio devices. Play the World is essentially an instrument. When listening to live radio streams, you can pick out moments that sound like notes played on musical instruments. Lieberman programmed software that listens to radio streams, finds those notes, and isolates the specific moments that sound like musical notes, along with an interface that lets you play them. He then integrated the idea with maps, the World Cup, and so on, which allowed users to play sounds and broadcasts from all over the world on a single keyboard.

Lauren McCarthy’s Talk

Lauren McCarthy | the 2017 Gray Area Festival

Project: Follower 

Follower is a service that provides a real-life follower for a day. In order to be followed, you answer two questions: why should you be followed, and why should someone follow you? If your answer gets selected, at the end of the day you get a picture taken of you during the day by your follower, Lauren. One of the answers that really got to me was when one person said they believe their life has more online importance than real-life importance, and that having someone follow them would help them shift their presence from the online world to the real world. I feel like this answer is relatable to a lot of us. In this digital world, we lose a lot of real-life connections and are left with only intangible links like social media. Even though I think having a real-life follower is a little creepy, it’s a good idea that can actually make people more connected in real life. I thought it was interesting how Lauren McCarthy plays with the idea that we hate surveillance, but we also want to be seen.