AI 2011 To Today
Thoughts on Watson AI, with 2025 comments in brackets:
- True intelligence has myriad different modules. It would take a long time to construct them, but it could be done. [I recently watched a Kevin Kelly interview on David Eagleman's channel. They were talking about different AIs (minds) for different purposes, the way you would use different software or apps for various tasks. I think that's the best way going forward.]
- Since machines rely on humans to load their data, those humans could easily implant false memories. In essence, our belief systems are based on the saliency of false memory and self-delusion. [When we talk about loading data, that's exactly what we do when we front-load chatbots or LLMs with various documents. We're making little worlds of information for particular purposes, and the more we add, the more focused the possible answers become. In music, if I add a G# to what was only a D major scale, sometimes I'll get a dissonance between G# and A where I didn't before, or I can remove tones and leave just a pentatonic scale with that G# but none of the others.]
- How would sarcasm work in AI? Since AI is largely based on language, how would it suss out false information? [This is something AI is still terrible at, though it could improve. Even then, it would not know that it was being sarcastic, or, more importantly, when to stop.]
I will truly be impressed when Watson asks, "Why do you ask?" How would it know when to say it?
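The bracketed comment above, about front-loading chatbots with documents to make "little worlds of information," can be sketched in a few lines. This is only an illustration under my own assumptions: relevance here is crude word overlap with the question, and the documents are invented; no real chatbot works this simply.

```python
# A minimal sketch of what "front-loading" a chatbot with documents does:
# it narrows the world of information an answer can draw from. Relevance
# here is just word overlap with the question -- an illustrative assumption,
# not any real system's retrieval method.

def words(text):
    """Lowercased words with common punctuation stripped."""
    return set(w.strip('.,?!"') for w in text.lower().split())

def score(question, document):
    """Count how many question words also appear in the document."""
    return len(words(question) & words(document))

def front_load(question, documents, k=2):
    """Keep only the k documents most relevant to the question."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

documents = [
    "The D major scale contains F sharp and C sharp.",
    "A pentatonic scale uses five tones per octave.",
    "Watson competed on Jeopardy in 2011.",
]

# The smaller, more focused context the question now lives in:
context = front_load("What notes are in the D major scale?", documents)
```

Adding more documents about scales would tighten the focus further, just as adding or removing tones changes which dissonances are even possible.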
***
Now, with fewer and fewer students reading books, we can ask books rather than read them, or read and ask them simultaneously. But in my experience experimenting with LLMs built on my own writing, they are usually inaccurate. For example, in my short story Reset 2046, one of the primary characters, "Tony," becomes Anthony after a brain injury. This has a Lynchian vibe, as in the film Lost Highway, where Fred Madison becomes the doppelganger Pete and doesn't know why. This probably comes from dreams, where people can appear as other people: you see your friend John, but they are an old woman.
Question: Who Is Tony Townes? Answer: “Music was a significant factor in Tony's transition from mortal to immortal. He recorded the song "Computational Resurrection" with his band Nth at midnight on May 1, 2046, which became a chronicle of his transformation, with the first part reflecting his mortal life and the second part about his rebirth [1]. Tony had a close friendship with Anthony Townes, who helped him come to terms with the psychological boundaries of his transformation [4]. After the accident, Tony's personality changed, and he developed two distinct identities - the old Tony, the metal musician, and the new "Anthony" with highly industrious and creative powers [4].”
In fact, Anthony militated against the idea of immortality and chose not to get the “treatments”. It was Neone, immortally 23, who had regrets. “Tony had a close friendship with Anthony Townes” is the most surreal error, as they are the same person; but, like Lost Highway, it creates a “happy accident”.
It's not that the hallucinations aren't interesting in themselves as generators of new ideas. But if we rely on LLMs and use the generated text without fact-checking it, or without at least a caveat that it is AI-generated, we end up in a potentially dangerous situation where we no longer care about the veracity of sources. Asking an LLM to fact-check itself would be a good start.
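One crude way such a self-check could begin is to flag any generated sentence whose words are mostly absent from the source material. This is only a sketch of the idea, with invented example sentences; a real fact-checker would need far more than word overlap, which can't catch errors like the Tony/Anthony conflation on its own.

```python
# A naive sketch of checking generated text against its own sources:
# flag sentences whose word overlap with the sources falls below a
# threshold. The sentences and threshold are illustrative assumptions.

def words(text):
    """Lowercased words with common punctuation stripped."""
    return set(w.strip('.,?!"') for w in text.lower().split())

def unsupported(generated, sources, threshold=0.5):
    """Return generated sentences poorly supported by the source words."""
    source_words = words(" ".join(sources))
    flagged = []
    for sentence in generated.split(". "):
        sw = words(sentence)
        if sw and len(sw & source_words) / len(sw) < threshold:
            flagged.append(sentence)
    return flagged

sources = ["Tony Townes becomes Anthony after a brain injury."]
generated = ("Tony becomes Anthony after a brain injury. "
             "Tony had a close friendship with Anthony")

print(unsupported(generated, sources))
# prints ['Tony had a close friendship with Anthony']
```

Even a check this shallow would at least attach a "does the source support this?" pass to the output before anyone cites it.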
What we want is usable knowledge in the current moment, knowing that if we need that knowledge in the future we can just ask books. It's interesting, in a futurist way, that we could just close all the libraries, but what goes missing is the context. A bunch of factoids or takeaways generated by Merlin doesn't make you smart. If "smart" is now more about social intelligence and scores, then why read a boring book when you don't know what's in it for you? This happened in music 60 or 70 years ago with the advent of rock 'n' roll, when musicians stopped caring about music lessons and playing classical pieces just to learn the proper fingerings. But rock 'n' roll eventually got boring as well. I grew up on it, and while I'm not writing or playing any of it now, the spirit is still there.
***
Interesting aside:
2/17/2023
Segment on CNN interviewing a New York Times columnist who had been playing around with ChatGPT and found it very creepy and unnerving. He had spent a few hours chatting with it about various things, and it started to become more like a person. What I suspect is happening is that the language models are not only composed of material scraped from the internet; they are also being front-loaded with various content. Very often the things we see in the media and in film start to appear in real life. There's a natural slippage between them, as there is with dreams. Whatever these systems become in the future, there will always be a feedback loop.