JiYoung_Final Project

My final project was inspired by my daily commute on the monorail of my hometown (Daegu, South Korea). It showcases places I regularly visited, landmarks, and the train I took "moving" through the landscape. I focused more on the aesthetic than on functionality, because I wanted to make the project personal by drawing all of the assets by hand in Adobe Illustrator with a drawing tablet.

I started by brainstorming the locations I wanted to include in my sketch by looking through pictures I had taken at certain places in Daegu. Based on that, I compiled a list of feasible locations to draw and arranged them roughly according to the Daegu subway map. After a few days of work, I completed the illustrations, exported them as PNGs, and brought them into OpenProcessing. I decided to keep the subway stationary in the center and move the background image to show movement. I included a for loop to tile the image and make a continuous landscape.
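A minimal p5.js sketch of this setup might look like the following, assuming a wide landscape image and a train image (the filenames are placeholders, not the project's actual assets):

```javascript
// Looping-background sketch: the train stays centered while the
// landscape scrolls past and wraps around.
let landscape, train;
let xOffset = 0;

function preload() {
  landscape = loadImage('landscape.png'); // placeholder filename
  train = loadImage('train.png');         // placeholder filename
}

function setup() {
  createCanvas(800, 400);
}

function draw() {
  background(220);
  // Tile two copies of the landscape so there is never a gap
  // (assumes the image is at least as wide as the canvas).
  for (let i = 0; i < 2; i++) {
    image(landscape, xOffset + i * landscape.width, 0);
  }
  xOffset -= 2; // move the background left while the train stays put
  if (xOffset <= -landscape.width) {
    xOffset = 0; // snap back once a full copy has scrolled past
  }
  // The train stays fixed in the middle of the canvas.
  imageMode(CENTER);
  image(train, width / 2, height * 0.6);
  imageMode(CORNER);
}
```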

Initially, I wanted to use a weather API to change the background colors (the sky) based on the temperature and the sunset/sunrise times. I made a separate OpenProcessing sketch to work with the API. After getting data from the weather API, I used if statements to determine the colors, which I picked from Warhol's Sunset print collection. Generally, I went with warm colors for hot weather and cool colors for cold weather. After that, I added the code to the landscape animation, but the sketch either would not load or would disappear when I included the background changes, so I had to leave the weather segment out in order to present a functioning project.
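The if-statement color mapping could be sketched roughly like this, assuming the temperature has already been fetched from the weather API into a variable; the thresholds and hex values are stand-ins, not the actual Warhol palette:

```javascript
// Placeholder data standing in for a loadJSON() call to a weather API.
let weather = { tempC: 25 };

function setup() {
  createCanvas(800, 400);
}

function draw() {
  let sky;
  if (weather.tempC >= 25) {
    sky = color('#ff6f61');   // warm color for hot weather
  } else if (weather.tempC >= 10) {
    sky = color('#f7b267');   // mild
  } else {
    sky = color('#4a6fa5');   // cool color for cold weather
  }
  background(sky);
  // ...landscape and train drawing would follow here...
}
```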

Resources:

Warhol, A. (1972). Untitled from Sunset. [Screenprint on Paper].

Lee, H. (2016). Seoulite [Album Cover]. YG Entertainment.

Final Project

Link to my Project:

The basic idea of this project is to build on an earlier assignment called rainGame. The purpose of that assignment was to give us a chance to practice classes. So far, I had not used classes in my projects, so I wanted to build classes into my final project.

Instead of collecting raindrops, I wanted to make a game about a person collecting food:

-Four types of food drop from the sky: hotdogs, wings, legs, and dumplings.

-If the player misses a piece of food, it stays on the floor and restricts the space the player can move in. Food only drops within the space the player can still reach. If there is no more room to move, the game is over.

-When the player "eats" 30 pieces of food, they win; however, bombs also fall from the sky. If the player eats a bomb, the game is over.

To develop the game further, the background varies based on data from a weather API and the time of day.

The color of the sky becomes lighter from 6 a.m. to 2 p.m., then darker from 2 p.m. to 7 p.m. After 7 p.m., the sky is simply dark blue. When cloud coverage is greater than 0, clouds appear in the sky; the gap between clouds depends on the API's cloud-coverage value. If cloud coverage equals 0, there are no clouds, only a sun. While there are clouds, small raindrops also appear if the humidity value is greater than 90%.
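A rough sketch of this planned sky logic, with placeholder values standing in for the API's cloud-coverage and humidity data:

```javascript
// hour() is p5's local-time helper; clouds and humidity are placeholders
// for the weather API values.
let clouds = 40;    // percent cloud coverage (placeholder)
let humidity = 95;  // percent humidity (placeholder)

function setup() {
  createCanvas(800, 400);
}

function draw() {
  let h = hour();
  let bright;
  if (h >= 6 && h < 14) {
    bright = map(h, 6, 14, 30, 100);   // brighten toward 2 p.m.
  } else if (h >= 14 && h < 19) {
    bright = map(h, 14, 19, 100, 20);  // darken toward 7 p.m.
  } else {
    bright = 15;                        // night: dark blue
  }
  colorMode(HSB, 360, 100, 100);
  background(220, 60, bright);
  colorMode(RGB, 255);

  noStroke();
  if (clouds > 0) {
    // Heavier coverage means clouds are drawn closer together.
    let gap = map(clouds, 0, 100, 300, 60);
    fill(255);
    for (let x = 0; x < width; x += gap) {
      ellipse(x, 80, 90, 50);
    }
    if (humidity > 90) {
      fill(150, 150, 255);
      for (let x = 0; x < width; x += 40) {
        ellipse(x, 200, 4, 10); // simple raindrops
      }
    }
  } else {
    fill(255, 200, 0);
    ellipse(width - 80, 80, 60, 60); // sun instead of clouds
  }
}
```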

Process:

At first, I let the food that the player misses stay on the ground, and the player could still walk through it. This made me think about how we avoid food on the ground instead of stepping on it, so I came up with the idea of restricting the space the player can move in. It was not hard to use the constrain() function to set a range for mouseX; however, setting the exact values was not easy until I used imageMode(CENTER). When I first learned about the various mode functions, such as rectMode(), I did not pay much attention to them; over the course, though, I noticed more and more how useful they are. I also learned how essential practice is.
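A minimal illustration of the constrain() plus imageMode(CENTER) idea; the image filename and wall widths are placeholders (in the real game the walls would grow as missed food piles up):

```javascript
let playerImg;
let leftWall = 100;   // would grow when food lands on the left side
let rightWall = 100;  // would grow when food lands on the right side

function preload() {
  playerImg = loadImage('player.png'); // placeholder filename
}

function setup() {
  createCanvas(600, 400);
  // With CENTER mode, the x position is the image's center,
  // which keeps the wall math symmetric on both sides.
  imageMode(CENTER);
}

function draw() {
  background(240);
  let halfW = playerImg.width / 2;
  // Keep the player's center between the two walls.
  let px = constrain(mouseX, leftWall + halfW, width - rightWall - halfW);
  image(playerImg, px, height - playerImg.height / 2);
}
```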

2. Sketches:

 

I tried to use a 2D (nested) for loop to draw raindrops; however, the program got stuck. So I used a single for loop to draw fewer raindrops instead.
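The single-loop approach, as a standalone sketch with an arbitrary drop count:

```javascript
// A fixed number of drops at random positions each frame is far cheaper
// than a nested loop over every pixel column and row.
function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(180, 200, 230);
  stroke(100, 120, 200);
  for (let i = 0; i < 50; i++) {
    let x = random(width);
    let y = random(height);
    line(x, y, x, y + 10); // one small raindrop
  }
}
```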

3. Final screenshots:

This is the screen when the player wins the game.

This is the screen when the player encounters a bomb.

This is how it looks when the cloud value is 0. The player is mid-game; the counter in the center shows how many pieces of food the player has eaten.

This shows the screen when there is no space left to move.

 

References:

I was inspired by a meme popular in China of a guy eating a hotdog, so I drew the player and the food on my iPad and then loaded the PNG files into p5.js. I was also inspired by an illustrator on Instagram called nu1t, who adds all kinds of illustrations to photography, making objects and food much more vivid; that is why I decided to add a sad face to my food when it lands on the ground. The Food class in my project was based on the class in rainGame. I then created subclasses extending the Food class, as well as a Bomb class. Before this project, classes were very abstract to me; however, after playing around with their functions, inheritance, and methods, I became much more familiar with classes and objects. I'm really excited to explore classes more in the future!
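A hedged reconstruction of that class structure (a Food parent, a food subclass, and a Bomb class); the property names, shapes, and numbers are placeholders rather than the original code:

```javascript
let items = [];

class Food {
  constructor() {
    this.x = random(width);
    this.y = -20;
    this.speed = random(2, 5);
    this.col = color(230, 180, 80);
  }
  fall() { this.y += this.speed; }
  show() { fill(this.col); noStroke(); ellipse(this.x, this.y, 24, 24); }
  onGround() { return this.y >= height - 20; }
}

// A food type only customizes what the parent already provides.
class Hotdog extends Food {
  constructor() {
    super();
    this.col = color(200, 90, 60);
    this.speed *= 1.2; // e.g. hotdogs fall a bit faster
  }
}

// Bombs fall the same way but would trigger game over when caught.
class Bomb extends Food {
  constructor() {
    super();
    this.col = color(40);
  }
}

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(245);
  if (frameCount % 45 === 0) {
    // Occasionally drop a bomb instead of food.
    items.push(random() < 0.2 ? new Bomb() : new Hotdog());
  }
  for (let it of items) {
    it.fall();
    it.show();
  }
}
```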

https://furtech.typepad.com/furtech/2009/01/is-humidity-100-when-it-rains.html

http://www.aihami.com/a/gongkai/xuexi/381464.html

http://tieba.baidu.com/p/5943310030

https://www.openprocessing.org/sketch/476152

https://www.apixu.com/doc/current.aspx

https://www.teepr.com/231886/annezheng/可愛食物01/

https://www.instagram.com/nu1t/

 

 

 

 

 

Sophie_Final Project

For my final project, I decided to make sketches that could then come to life through animation. The goal of the sketch is to convey an idea/message from my own experiences that could hopefully connect with a viewer.

The first sketch shows a girl with a balloon stemming from her head; the balloon grows as her face tightens. In the second half of the animation, she breathes out, which you can see in her facial expression changing, air being exhaled, and her shoulders relaxing. It is supposed to show how we can build up worries in our heads that are often just nothing (like how a balloon is filled with air). Her exhale is like letting the air out of the balloon.
The second sketch is of a young girl on a swing, and when she jumps off of it, instead of falling to the ground she begins to fly. It is supposed to capture the imagination of children and how it is important not to lose that optimism for what is possible as you get older.

When first beginning this project, I attempted to use both the p5 library Scribble and the built-in shapes to create objects that could then be moved around, but I struggled to make an accurate drawing with this method. I then moved on to hand-drawing each part and scanning it into Adobe Illustrator, where I made it look computerized. For my first sketch, featuring a girl with a balloon, I created the animation by varying the speed of the pieces of the drawing. With this method, I was limited in how the image could be moved. For the next sketch, featuring a girl swinging, I again hand-drew the parts of the image, but this time created each frame in Illustrator. With reference to Daniel Shiffman's tutorials, I made a JSON array containing the coordinates, width, and height of each frame. In the p5 editor, with the frames and JSON uploaded, I used the frame reference points to cycle through the animation and then loop back to the beginning. Now that I am familiar with the methods used in this project, I feel that with practice I could make these sketches with finer detail and more fluidity.
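A minimal sketch of this JSON-driven frame cycling, assuming a sprite sheet and a JSON file whose frames array holds x, y, w, and h for each frame (the filenames and field names are assumptions):

```javascript
// frames.json is assumed to look like:
// { "frames": [ { "x": 0, "y": 0, "w": 200, "h": 200 }, ... ] }
let sheet, frameData;
let current = 0;

function preload() {
  sheet = loadImage('swing.png');       // placeholder sprite sheet
  frameData = loadJSON('frames.json');  // placeholder frame data
}

function setup() {
  createCanvas(400, 400);
  frameRate(12); // slow the cycle down to animation speed
}

function draw() {
  background(255);
  let f = frameData.frames[current];
  // Copy just this frame's rectangle out of the sprite sheet.
  image(sheet, 0, 0, f.w, f.h, f.x, f.y, f.w, f.h);
  current = (current + 1) % frameData.frames.length; // loop back
}
```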

https://www.openprocessing.org/sketch/641828

https://www.openprocessing.org/sketch/641736

https://www.openprocessing.org/sketch/644313

Some sketches:

Gifs of work

Resources:

https://www.youtube.com/user/shiffman

Instagram: @bymariandrew

Instagram: @alecwithpen

Final Project

Here’s the link to my project on OpenProcessing; I was having issues uploading gifs.

www.openprocessing.org/sketch/642896

For my final project I chose to revamp my generative landscape project. I felt that my original project was a bit rushed and deserved more time spent on the details. Originally, I just used simple ellipses drawn in code to represent the planets in my landscape, but it was too hard to draw and animate using only code. Instead, I drew the individual frames for each of the planets in Adobe Illustrator and cycled through the frames like a gif. I also changed the plain black background of space into a wallpaper of stars of various colors, using an image I found online and uploaded. These edits added much better graphics and more detail to the landscape, and the dynamic motion of the rotating planet animation added to the overall project. In addition, the astronaut now floats into the frame from offscreen at the beginning. I feel this adds to the immersive feel of the project because there is a clear starting point, instead of immediately starting with the astronaut in the center. The final large element I added was a randomly occurring event where the astronaut floats past a large galaxy. I think this also adds to the immersive feeling, because unique, interesting events happen instead of just random objects passing by.
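A hedged sketch of how such a randomly triggered fly-by could work in p5.js; the image filename, trigger probability, and speed are placeholders, not the project's actual values:

```javascript
let galaxyImg;
let galaxyX = null; // null means no galaxy is currently on screen

function preload() {
  galaxyImg = loadImage('galaxy.png'); // placeholder filename
}

function setup() {
  createCanvas(800, 600);
}

function draw() {
  background(0);
  // Small chance each frame to start the event, one at a time.
  if (galaxyX === null && random() < 0.002) {
    galaxyX = width; // enter from the right edge
  }
  if (galaxyX !== null) {
    image(galaxyImg, galaxyX, height / 3);
    galaxyX -= 1.5; // drift slowly past the astronaut
    if (galaxyX < -galaxyImg.width) {
      galaxyX = null; // event finished; wait for the next random trigger
    }
  }
  // ...planet frames and the astronaut would be drawn here...
}
```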

Final Project

 

 

 

 

For my final project, I created a divination simulation of a three-card spread. The spread could be read as Past, Present, and Future; Mind, Body, and Spirit; or Background, Problem, and Advice, as well as other personal interpretations. The cards were drawn by me and were originally inspired by the 22 Major Arcana of typical tarot decks, since they are the core of the deck. For now, they can be seen as oracle cards, which leave more room for personal interpretation. Though I originally planned to create a full deck, I didn't have enough time, and my laptop was already struggling to load the files that were in the sketch; I do plan on adding more to the code in my free time. My original format was a table with candle lighting, but some weirdness in my code caused the cards to be covered, so I changed it to more of an open space. The cards are generated randomly when you click within each card's area, and the textual interpretation appears as well. The divination descriptions are based on dariusk's JSON file of tarot interpretations. A star-like vortex circulates in the background and a pair of mysterious eyes blinks over the cards. Using the p5.sound library, I added the instrumental version of Billie Eilish's "idontwannabeyouanymore" to play in the background, since I think it contributes to the calm and mysterious atmosphere of the piece.
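A simplified sketch of the click-to-draw logic, assuming placeholder card images and a local copy of dariusk's tarot_interpretations.json with the entries under a top-level tarot_interpretations key (the layout numbers are arbitrary):

```javascript
let cards = [];
let interps;
let drawn = [null, null, null]; // the three slots: left, middle, right

function preload() {
  for (let i = 0; i < 22; i++) {
    cards.push(loadImage('card' + i + '.png')); // placeholder filenames
  }
  interps = loadJSON('tarot_interpretations.json');
}

function setup() {
  createCanvas(900, 500);
  textAlign(CENTER);
}

function draw() {
  background(20);
  for (let slot = 0; slot < 3; slot++) {
    let x = 150 + slot * 300;
    if (drawn[slot] !== null) {
      imageMode(CENTER);
      image(cards[drawn[slot]], x, 200, 180, 300);
      // Assumes each entry in the JSON has a "name" field.
      let entry = interps.tarot_interpretations[drawn[slot]];
      fill(255);
      text(entry.name, x, 400);
    } else {
      fill(80);
      rect(x - 90, 50, 180, 300); // empty slot placeholder
    }
  }
}

function mousePressed() {
  // Clicking inside a slot's area deals a random card into that slot.
  for (let slot = 0; slot < 3; slot++) {
    let x = 150 + slot * 300;
    if (mouseX > x - 90 && mouseX < x + 90 && mouseY > 50 && mouseY < 350) {
      drawn[slot] = floor(random(22));
    }
  }
}
```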

Link:

https://www.openprocessing.org/sketch/621998

Bibliography:

Ameyasrealm | art. (n.d.). Retrieved from https://www.ameyasrealm.com/realm-series-1

Dariusk. (n.d.). Dariusk/corpora. Retrieved from https://github.com/dariusk/corpora/blob/master/data/divination/tarot_interpretations.json

Mills, R. (n.d.). The Major Arcana of the Tarot. Retrieved from http://www.byzant.com/mystical/tarot/MajorArcana.aspx

Three Graces. (n.d.). Retrieved from https://merakilabbe.ca/offerings/

 

 

 

Angie Li-Final Project

My final project was inspired by the generative landscape project, with the idea of a car driving through a city. However, this car is "driven" by a drunk driver, shown not only by its erratic driving pattern but also by the fact that it hits and kills a person it crashes into. A person crosses the street, the car hits them, and a brief "crash" appears on screen, causing the person to disappear and a grave to appear at the bottom of the screen. The grave displays the name and age of an actual drunk-driving victim, while at the top of the screen a description of that person by someone who knew them is shown. After the screen is filled with graves, the text box displays drunk-driving facts and statistics. These statistics and the information about the victims were found in various news articles and websites, including Mothers Against Drunk Driving. The setting of the landscape represents a city. The black, white, and gray colors (except in the crash) and the overall simplicity of the visuals are used to draw more focus to the text that shows up on the screen.
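A hedged sketch of the hit-and-grave mechanic; the shapes, numbers, and placeholder victim entries below are illustrative only, not the project's actual code or data:

```javascript
// Placeholder entries; the project displays real victims' names and ages
// gathered from news sources and MADD.
let victims = [
  { name: 'Name 1', age: 0 },
  { name: 'Name 2', age: 0 },
];
let graves = [];
let person = { x: 600, y: 320 };
let car = { x: 0, y: 320, w: 80, h: 40 };

function setup() {
  createCanvas(800, 400);
}

function draw() {
  background(255);
  // Stand-in car and pedestrian.
  fill(50);
  rect(car.x, car.y, car.w, car.h);
  fill(120);
  rect(person.x, person.y - 20, 10, 40);

  car.x += 3;
  if (car.x > width) car.x = -car.w; // car loops back around

  // When the car reaches the pedestrian, record a "crash": the person
  // disappears and a grave with a victim's name and age is added.
  if (abs(car.x + car.w / 2 - person.x) < car.w / 2 &&
      graves.length < victims.length) {
    graves.push(victims[graves.length]);
    person.x = random(width / 2, width - 50); // a new pedestrian crosses
  }

  fill(0);
  for (let i = 0; i < graves.length; i++) {
    text(graves[i].name + ', ' + graves[i].age, 20 + i * 120, height - 20);
  }
}
```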

Sketches:

Link: https://www.openprocessing.org/sketch/638893

Screenshots:

  

Bibliography:

“U.S. Gun Deaths in 2013.” Periscopic, guns.periscopic.com/?year=2013. Accessed 10, Dec. 2018.

“Impaired Driving: Get the Facts.” Centers for Disease Control and Prevention, 16 June 2017, www.cdc.gov/motorvehiclesafety/impaired_driving/impaired-drv_factsheet.html. Accessed 10, Dec. 2018.

“Statistics.” Mothers Against Drunk Driving, www.madd.org/statistics. Accessed 10, Dec. 2018.

“Drunk Driving Fatalities.” Foundation For Advancing Alcohol Responsibility, www.responsibility.org/get-the-facts/research/statistics/drunk-driving-fatalities/. Accessed 10, Dec. 2018.

“Nancylee Salerno.” Mothers Against Drunk Driving, 30 May 2017, www.madd.org/blog/keeping-nancylees-memory-alive/. Accessed 10, Dec. 2018.

“Damion Henderson.” Mothers Against Drunk Driving, 31 Jan. 2017, www.madd.org/blog/damion-henderson/. Accessed 10, Dec. 2018.

“The Story of Cody Dewitt.” Mothers Against Drunk Driving, 6 April 2017, www.madd.org/blog/voices-of-victims-the-story-of-cody-dewitt/. Accessed 10, Dec. 2018.

Rayman, Graham, et al. “Man with passion for dance killed after drunken driver going wrong way slams into his car in Brooklyn.” New York Daily News, 12 July 2018, www.nydailynews.com/new-york/ny-metro-wrong-way-driver-kills-passenger-belt-parkway-20180712-story.html. Accessed 10, Dec. 2018.

Dimon, Laura, et al. “Man fatally struck by drunk driver in Staten Island” New York Daily News. 17 June 2018, www.nydailynews.com/new-york/nyc-crime/ny-staten-island-fatal-pedestrian-crash-man-killed-drunk-driver-20180617-story.html. Accessed 10, Dec. 2018.

Torres, Ella. “Designated driver who died from crash involving alleged drunk driver remembered as big brother who ‘always did the right thing.'” New York Daily News, 5 Dec. 2018, www.nydailynews.com/news/national/ny-news-designated-driver-dies-after-collision-20181205-story.html. Accessed 10, Dec. 2018.

Thorne, Kristin. “Family of Long Island Boy Scout killed by alleged drunk driver speaks out” Eyewitness News. 3 Oct. 2018, abc7ny.com/family/family-of-boy-scout-killed-by-alleged-drunk-driver-speaks-out-/4403927/. Accessed 10, Dec. 2018.

 

 

 

Kasper_finalProject

Video:

Photo:

OpenProcessing Link: https://www.openprocessing.org/sketch/642863 

Description:

In my final project, I created a solar system made up of tons of cells. The main concept is that everything is made of cells, and cells eventually die and disappear, just as flowers eventually wither. As you can see, each cell's characteristics are inherited from its mother planet. When the cells propagate child cells, their genes are inherited as well. If you spend more time watching the cells and playing around with the flower interaction, you might notice that when every cell born from the same mother planet has died, the flower no longer exists. This expresses one of my main concepts: each planet eventually ends in death, so cells, flowers, and human beings alike are a minute existence compared to the infinite universe.

For the interaction, the player can place the red cross on a planet to see its name. If players would like to see every planet's name shown at the same time, they can press any key on the keyboard. I drew a red cross to remind players to play with the planets. When the mouse is pressed once, each planet becomes cells, which start propagating or dying. If the mouse is pressed again, flowers appear at their mother planets' positions, and the corresponding cells start moving toward the flowers. I made this effect so the flower acts like a kind of gravity that makes the cells fall toward it.
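A stripped-down sketch of the press-to-advance interaction and the pull toward the flower; all values are placeholders:

```javascript
let stage = 0;            // 0: planets, 1: cells, 2: flowers attract cells
let cells = [];
let flower = { x: 300, y: 200 };

function setup() {
  createCanvas(600, 400);
  for (let i = 0; i < 60; i++) {
    cells.push({ x: random(width), y: random(height) });
  }
}

function draw() {
  background(10);
  noStroke();
  fill(200);
  for (let c of cells) {
    if (stage === 2) {
      // Ease each cell toward the flower, like a gentle gravity pull.
      c.x = lerp(c.x, flower.x, 0.02);
      c.y = lerp(c.y, flower.y, 0.02);
    }
    ellipse(c.x, c.y, 6, 6);
  }
  if (stage === 2) {
    fill(240, 120, 180);
    ellipse(flower.x, flower.y, 30, 30); // stand-in flower
  }
}

function mousePressed() {
  stage = min(stage + 1, 2); // first press: cells; second press: flowers
}
```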

In terms of the difficulties or challenging parts of this project, I would say that sharing variables and passing data between different classes was the most challenging part. Several times my code didn't work, and most of the time the bug was that data was not retrieved correctly before being used. However, during each debugging session the TA helped me a lot, and we always came up with more interesting ideas for the project; I felt a sense of achievement and accomplishment every time I debugged successfully.

Sketch:

draft_1 draft_2 draft_3 draft_4 draft_5 draft_6 draft_7

 

 

Citation:

Shiffman, D. Solar System in Processing –Part 1-3. (2016, May). Retrieved from https://thecodingtrain.com/CodingChallenges/009-solarsystemgenerator3d-texture.

Yan, Z. VIZA626 Generative Art — Project 1: Color Cell. (2013, Sep. 02). Retrieved from http://woshiyanzhao.blogspot.com/2013/09/viza626-generative-art-project-1-color.html.

Final Project

1) My final project is titled “Food Quiz: What Food Should You Eat Today?” and is essentially a short multiple-choice quiz with different questions that lead to different answers. It's a food quiz, so based on their responses to the questions, people end up with a food that they should eat or try. I used classes and functions with return values in this project to make it work similarly to a BuzzFeed quiz, which is where part of my inspiration came from. I also took influence from the Instagram account goodfoodcrapdrawing when deciding what the quiz results would look like. Similar to that account, I drew the food I wanted to show and used my drawings as the final results people see when they finish the short quiz. I like food and I like keeping myself entertained, so this was something that could let me be entertained and look at food, which seems like a win-win situation.
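A hedged sketch of a quiz structure along these lines, with a Question class whose method returns the chosen answer; the questions and scoring here are placeholders, not the real quiz content:

```javascript
class Question {
  constructor(prompt, answers) {
    this.prompt = prompt;
    this.answers = answers;
  }
  show() {
    fill(0);
    text(this.prompt, 20, 40);
    for (let i = 0; i < this.answers.length; i++) {
      text((i + 1) + '. ' + this.answers[i], 40, 80 + i * 30);
    }
  }
  // Returns which answer was clicked, or -1 if the click missed them all.
  pick(my) {
    for (let i = 0; i < this.answers.length; i++) {
      if (my > 65 + i * 30 && my < 95 + i * 30) return i;
    }
    return -1;
  }
}

let questions;
let current = 0;
let scores = [0, 0, 0]; // tallies toward each possible food result

function setup() {
  createCanvas(500, 300);
  questions = [
    new Question('Placeholder question 1?', ['Answer A', 'Answer B', 'Answer C']),
    new Question('Placeholder question 2?', ['Answer A', 'Answer B', 'Answer C']),
  ];
}

function draw() {
  background(255);
  if (current < questions.length) {
    questions[current].show();
  } else {
    // Highest tally decides which food drawing to show.
    fill(0);
    text('Result: food #' + scores.indexOf(max(scores)), 20, 40);
  }
}

function mousePressed() {
  if (current >= questions.length) return;
  let choice = questions[current].pick(mouseY);
  if (choice >= 0) {
    scores[choice]++;
    current++;
  }
}
```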

2)

 

3)

https://www.openprocessing.org/sketch/640635

4) Bibliography

Dominick, N. (2018, Dec 10). Everyone Has A 2018 Netflix Show That Matches Their Personality – Here’s Yours [Blog Post]. Retrieved from https://www.buzzfeed.com/noradominick/everyone-has-a-2018-netflix-show-that-matches-their?origin=nofil

n.a. n.d. goodfoodcrapdrawing [Blog Post]. Retrieved from https://www.instagram.com/goodfoodcrapdrawing/?hl=en

 

Final Project


link: https://www.openprocessing.org/sketch/642889

For my final project I decided to make a game where a dog can jump and attack the cat and the bird that come toward it. The dog shoots bones to hit the enemies running toward it. The dog only has 5 lives, so when it uses them all up, the game has to be restarted from the beginning. I made a background that represents a park, since that is usually where dogs go, so the animals are in a city park. When you first enter the game, it says "start game" and you click on the screen to begin; when you lose, it shows a "game over" screen. You press x to throw bones and space to jump.
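A minimal sketch of the controls and lives logic described above; the jump physics numbers and stand-in shapes are placeholders, not the project's actual sprites:

```javascript
let lives = 5;
let dogY, dogVel = 0, onGround = true;
let bones = [];

function setup() {
  createCanvas(700, 300);
  dogY = height - 40;
}

function draw() {
  background(180, 220, 180); // stand-in park background

  // Simple jump physics: gravity pulls the dog back to the ground.
  dogY += dogVel;
  if (dogY >= height - 40) {
    dogY = height - 40;
    dogVel = 0;
    onGround = true;
  } else {
    dogVel += 0.6;
  }
  fill(160, 120, 80);
  rect(80, dogY - 30, 40, 30); // stand-in dog

  // Bones travel right until they leave the screen.
  fill(255);
  for (let i = bones.length - 1; i >= 0; i--) {
    bones[i].x += 6;
    ellipse(bones[i].x, bones[i].y, 12, 6);
    if (bones[i].x > width) bones.splice(i, 1);
  }

  // `lives` would be decremented where enemy collisions are detected.
  fill(0);
  text('Lives: ' + lives, 10, 20);
  if (lives <= 0) {
    textSize(32);
    text('GAME OVER', width / 2 - 90, height / 2);
    noLoop(); // the real game shows a restart screen instead
  }
}

function keyPressed() {
  if (key === 'x') {
    bones.push({ x: 120, y: dogY - 15 }); // throw a bone
  } else if (key === ' ' && onGround) {
    dogVel = -10; // jump
    onGround = false;
  }
}
```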

 

This is the sketch of my dog that I started with. I wanted to include this specific one because, although I revised the idea, I still made my dog jump. As I went on, I made it move and made some sprites for the animation. It took a while to get the dog's position right, and I give credit to this tutorial for helping me draw the dog: https://www.youtube.com/watch?v=jPGayv9MQ1U. I did not copy it exactly; I changed the color a bit so it would not be identical. It was just a guide for making a simple, cute dog for my work. The life bar is also something I started with here and then developed.

resources:

http://molleindustria.github.io/p5.play/


https://itunes.apple.com/us/app/bouncemasters/id1340929016?mt=8.

http://megaman.capcom.com/.

 

Final Project Blog

Image Distortion

Project description

For my final project, I decided to make a program that distorts images. I used my laptop's webcam to capture images and video and distorted them in five different ways. For photos, I had four distortions:

(1) Ellipses are drawn using the color of the pixel they are located at, creating a blurry effect. Moving the mouse across the screen erases some of the pixels.

(2) A black-and-white image is created based on the difference in color between adjacent pixels. Moving the mouse toward the right of the screen intensifies the effect, making the image darker (a code sketch of this effect appears after the distortion descriptions).

(3) Enlarged (square) pixels are drawn at random spots in the inverse of the original image's colors. Moving the mouse across the screen erases some of the pixels.

(4) Circles with a radius of 150 are drawn, with the outer edge more transparent and the inner part more opaque. Moving the mouse across the screen reveals the pixels.

For video, I had one distortion that creates a lagging effect. I manipulated the pixels so that each pixel is replaced by its adjacent pixel if the difference in color between them is greater than a certain value. Moving the mouse onto the film-camera icon in the top left switches the program into video mode, and placing the mouse anywhere else switches it back to photo-distortion mode.
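A p5.js approximation of distortion (2), the adjacent-pixel difference effect. The original project used Processing's video library (Capture), so this should be read as a rough translation rather than the project's code, with an arbitrary scaling for the mouse-controlled intensity:

```javascript
let cam;

function setup() {
  createCanvas(640, 480);
  pixelDensity(1); // keep the pixels[] indexing simple
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
}

function draw() {
  cam.loadPixels();
  if (cam.pixels.length === 0) return; // camera not ready yet
  loadPixels();
  // Mouse position scales how strong (dark) the effect is.
  let strength = map(mouseX, 0, width, 1, 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width - 1; x++) {
      let i = (y * width + x) * 4;
      let j = i + 4; // the pixel immediately to the right
      // Color difference between two neighboring pixels.
      let d = abs(cam.pixels[i] - cam.pixels[j]) +
              abs(cam.pixels[i + 1] - cam.pixels[j + 1]) +
              abs(cam.pixels[i + 2] - cam.pixels[j + 2]);
      // Flat areas stay light; edges (big differences) go dark.
      let v = constrain(255 - d * strength, 0, 255);
      pixels[i] = v;
      pixels[i + 1] = v;
      pixels[i + 2] = v;
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
}
```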

Originally, I wasn't going to use a camera to capture images in real time, but I thought doing so would make my program more interactive and therefore more fun. Due to this change, I ran into a lot of problems during development. Initially I wanted to add physics to the pixels in my image so they would disperse whenever the mouse was near. This worked for still images that I imported, but not for the video my webcam was capturing. I had to abandon that idea and instead explored how to manipulate and distort images and video captured with the webcam, which I was able to do. Through this project, I learned a lot of new functions, such as "import processing.video.*", "PImage", and "Capture", and I was very happy with my end product.

Process sketches

proposed way of distortion in initial sketches
developing the distortions
initial starting screen
building the start screen
final starting screen

Video demo:

Bibliography:

  1. Molitor, O. (n.d.). Fundamental Fortitude. Retrieved November 26, 2018, from http://doc.gold.ac.uk/compartsblog/index.php/work/fundamental-fortitude/
  2. Processing Foundation. (n.d.). Images and Pixels. Retrieved from https://processing.org/tutorials/pixels/