Color Phoneography
A riff on color photography.
In 2012, I created a series called Results, where I would run Google searches, take screen captures, and then apply treatments to them.
How neural networks "see" and interpret images has evolved since then, but they are still fairly stupid (or simply biased) in their results. Take, for example, the images that come up when you search "beauty" or "truth". What does truth look like to an algorithm?
"Classification of Color" is one of the Human Universals, and it is an interesting way to organize images. With Color Phoneography, I am doing it manually: less effective than an algorithm at sussing out the predominant color from the pixels and metadata, but definitely more biased by cultural context.
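For contrast, here is a minimal sketch of the kind of pixel-based classification an algorithm might do: quantize each pixel into coarse color buckets and pick the most common one. The function name, bucket size, and sample pixel data are my own illustration, not part of the project.

```python
from collections import Counter

def dominant_color(pixels, bucket=64):
    """Quantize each RGB pixel into coarse buckets and return the
    most common bucket's representative color (the bucket center)."""
    def quantize(channel):
        return (channel // bucket) * bucket + bucket // 2
    counts = Counter(tuple(quantize(c) for c in px) for px in pixels)
    return counts.most_common(1)[0][0]

# Hypothetical pixel data: mostly reds with a few blues.
pixels = [(220, 30, 40)] * 7 + [(20, 40, 200)] * 3
print(dominant_color(pixels))  # → (224, 32, 32)
```

Such a procedure finds a statistically predominant hue, but it knows nothing about why a culture might read that hue as festive, mournful, or sacred.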
How does machine learning understand cultural context, and how does it affect the Results?