Collision Control
After reading the excellent article in the Boston Globe, Can Elon Musk Succeed In Developing Generative AI ChatGPT Knockoff "TruthGPT"..., I was reminded of an audio art project I did about 20 years ago called Atmosphere Generation ("AG"). It involved making a universe of sounds, melodies, and textures, which I then burned onto a CD of approximately 50 tracks. I made three copies of it, put them in three different players with three different sets of speakers, and set them on shuffle play. So the first one might be on track 6, the second on 28, and the third on 14. Letting them run continuously would create interesting atmospheres. But what I had to think about before I made them was whether certain sounds clashed with other sounds, because eventually they would.

This has an interesting parallel with AI systems, because we're making similar "universes". With AG I had complete control of what the universe would produce, and would edit the tracks if I discovered certain collisions, pitches a minor second apart, for example. Would we want three minutes of a minor second? Perhaps we might simply experience that particular permutation. On one occasion when I was running AG, I walked into the room and all the players were on the same track. Watching how the tracks would shuffle was interesting in itself, and we're doing a similar "watching" with ChatGPT.
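To get a feel for how often those chance alignments might occur, here is a minimal back-of-the-envelope simulation. It is not part of the original project; it simply assumes exactly 50 tracks, three players, and that each player's current track is effectively uniform at random at any given moment.

```python
import random

# Toy simulation (illustrative assumption, not from the original AG project):
# three CD players, each shuffling the same 50-track disc independently.
# We estimate how often all three land on the same track at a given moment,
# and how often at least two coincide.
TRACKS = 50       # assumed, per "approximately 50 tracks" above
PLAYERS = 3
TRIALS = 100_000

all_same = 0
any_pair = 0
for _ in range(TRIALS):
    playing = [random.randrange(TRACKS) for _ in range(PLAYERS)]
    if len(set(playing)) == 1:
        all_same += 1
    if len(set(playing)) < PLAYERS:
        any_pair += 1

print(f"all three on the same track: ~{all_same / TRIALS:.4%}")    # roughly 1 in 2,500
print(f"at least two on the same track: ~{any_pair / TRIALS:.2%}")  # roughly 6%
```

Under those assumptions, all three players coinciding is rare (about 1 in 2,500 moments), so walking in on it is a genuinely lucky permutation, while some pairwise overlap happens fairly often.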
One of the major considerations with random sound generation is that you can't include rhythms: they are in different meters and tempos, and mixing them with the rest of the universe of sounds creates a train wreck of noise. Similarly, with data universes, we don't know where the collisions (disinformation) are, and as the universe gets larger and larger, it becomes impossible to control the outputs. With the AG project, I made a separate CD for that material (its own universe), which prevented the bad collisions, but generally, the whole project was about collisions.
What will be interesting (and scary) is that people will start to make their own AI universes, just as they made their own websites in 2000.
From the Boston Globe article:
"Each day that generative AI continues to spew out errors, falsehoods and AI hallucinations in the outputs is a bad day for just about everyone. The people using generative AI are bound to be unhappy with those fouled outputs. People that rely upon or need to use the fouled outputs are at risk of mistakenly depending upon something wrong or worse still going to guide them in an endangering direction."