Sound & Vision
What I see happening in the future is a neural network that "sees" and "feels" what is happening spatially and temporally, with sound and music responding accordingly. (A FaceNet for sound.)
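To make the FaceNet analogy concrete, here is a minimal sketch of what such a network might look like. The architecture is my own illustrative assumption, not any published model: a small PyTorch net that maps audio clips (as mel-spectrograms) into an embedding space, trained with the same triplet loss FaceNet uses for faces, so that clips with a similar mood land near each other.

```python
# A minimal sketch of "a FaceNet for sound": map audio clips (as
# mel-spectrograms) into an embedding space with a triplet loss.
# The SoundEmbedder architecture and all shapes are illustrative
# assumptions, not a published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoundEmbedder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, spec):  # spec: (batch, 1, mel_bins, frames)
        x = self.conv(spec).flatten(1)
        # L2-normalize so distances live on the unit hypersphere, as in FaceNet
        return F.normalize(self.fc(x), dim=1)

model = SoundEmbedder()
triplet = nn.TripletMarginLoss(margin=0.2)

# Fake batch: anchor and positive would share a mood, negative would not
# (here all three are just random tensors for demonstration).
anchor, positive, negative = (torch.randn(8, 1, 64, 128) for _ in range(3))
loss = triplet(model(anchor), model(positive), model(negative))
loss.backward()
```

Once trained on real pairs, nearest-neighbor lookups in this space would be the "sound and music respond accordingly" step: a visual moment retrieves the sounds whose embeddings sit closest to it.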
Here are two examples that I think are interesting models for vision-driven neural networks for sound:
Temporal: Computers Watching Movies, an algorithmic system that draws on a canvas in response to what it "sees" while watching films.
Spatial: The Top Grossing Film of All Time, in which the frames of Titanic are arranged in rows from beginning to end. Visually we see color and luminance, which can theoretically be associated with moods. Through machine learning, neural nets could score films without human intervention; at best, it would remain a hybrid system, perhaps installed within synthesizers. This goes back to the idea that film formulas are very common and can be shown graphically. (If you did this with films that had car chases, they'd all sit at about the same level on the Y-axis.)
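The measurement behind this is simple enough to sketch. The following Python snippet (using OpenCV and NumPy; "titanic.mp4" is a placeholder path) reduces each frame of a film to its mean color and luminance, yielding exactly the kind of time signal you could plot on a Y-axis, stack into a frame-strip image, or hand to a neural net.

```python
# Sketch: reduce each frame of a film to its mean color and luminance,
# giving a 1-D "mood" signal over time. "titanic.mp4" is a placeholder
# path; cv2 (opencv-python) and numpy are assumed installed.
import cv2
import numpy as np

def frame_signals(path):
    cap = cv2.VideoCapture(path)
    colors, luminances = [], []
    while True:
        ok, frame = cap.read()  # frame is a BGR uint8 array
        if not ok:
            break
        mean_bgr = frame.reshape(-1, 3).mean(axis=0)
        colors.append(mean_bgr)
        # Rec. 601 luma from the mean B, G, R values
        luminances.append(0.114 * mean_bgr[0] + 0.587 * mean_bgr[1]
                          + 0.299 * mean_bgr[2])
    cap.release()
    return np.array(colors), np.array(luminances)

colors, luma = frame_signals("titanic.mp4")
# Plotting luma over time gives the Y-axis curve described above;
# stacking the mean colors row by row approximates the frame-strip image.
```

Run over several car-chase films, the luma/color curves from this function are what you would compare to test the claim that their chase sequences all sit at about the same level.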
Looking at the Pictures
I have a friend who is extremely knowledgeable about politics--all through TV news. But she doesn't read anything. When flipping through books, she likes to look at the pictures (akin to "liking" on social media). We may be moving from a text-based culture to one of mostly images and films as manifestations of spatiality and temporality. But while text and images each have unique capabilities and can stand alone as a code, each needs explanation from the other in order to stay accessible and contemporary. We are moving towards watching and scanning sequences of images rather than sequences of words, but the two cannot be separated: words suggest images as much as images can be made more profound through text. Simply watching something (as neural nets do) carries only half the meaning:
"A reader interested in eating disorders or divorce would probably read books written by doctors and psychiatrists, but the television viewer interested in such topics often watches people who have experienced the problems themselves."