Some of the Parts
Part of Parts
Technologies enable the past in ways that were inconceivable in the past. Before scanners were invented we could duplicate a paper document, but digitization made it possible for that document to exist indefinitely. Remix made it possible to use parts of it in other contexts, in ways the author could never have conceived at the time of writing. A recording made in 1995, for example, before the advent of the (wider) internet, and burned onto CDs, could later be "ripped" and made available on P2P networks. Once released and scattered around, it can remain an integral whole, with threads intact to the original intentions, yet over 20 or 40 years it can gather entirely different meanings.

AI-generated content has a new kind of authorship with a different "seed": the text prompts used to generate it. This is different from someone having an idea, building it manually from a blueprint, and being involved in its evolution from idea to final product. We can't conceive how content generated in 2024 will be used in 2044 or 2064, because technologies we don't yet know will enable it in unknown ways. That's why I've always spoken for art rather than letting it speak for itself, which is interesting now because speaking for the work is essentially what we do when we instruct AI to generate something. The prompt is the formulated recipe, though it isn't really a recipe: the instructions are never re-used, and even if they were, they would generate something different.
AI will make a new version of Remix in which the parts we use won't be traceable to anything. You might have Clyde Stubblefield in your generated track, but he might be unknown in 2064 unless new technologies credit him automatically in the metadata or a watermark, and we don't disable or jailbreak those features in order to use the parts we want in the ways we want. LLMs do this naturally, whereas if someone types a paragraph in Microsoft Word installed on their computer, each word and phrase is embedded with their original intentions, in the sequential order in which they appear in its first instantiation. Digitization makes that process infinitely non-linear, something I've always liked in re-generating my own work, but it's still my sequential process, not an LLM's. If I make my own LLM, it's still that same process of linear re-sequencing, but it's me doing it, not something else re-assembling its parts. If I use a cut-up technique, the re-assembly is still an individual rather than a collective process.

An enabling and empowering technology ideally retains the original intentions and meanings that were enabled at the time, not how they might be enabled in the future. This is why I've always liked the idea of master or parent files, but we are moving away from the "file" model, which both enables and disables content. A file has metadata burned in. Generated content doesn't until other instantiations (child docs) are made.
***
Post-script (Exordium)
10/4/2004:
Al Qaeda was deemed a “stateless operation”. There are so many versions of it now that don’t need direction from, or owe allegiance to, any one specific group or mastermind. It’s sort of like the proliferation of a new style of music: once a genre has started (e.g. the Blues), it gets reconfigured and regenerated as more people take it up. The Blues left Mississippi a long time ago and is now embedded in the world’s cultural canon. Richard Dawkins coined the term “meme” in his 1976 book The Selfish Gene, which describes this phenomenon so elegantly: “At first sight, it looks as if memes are not high fidelity replicators at all. Every time a scientist hears an idea and passes it on to somebody else, he is likely to change it somewhat.”