Containers of Music
I was thinking about the connection between instruments and identity, and the music that exists naturally within a particular instrument. For example, a violin has an identity and a music that exists within it; a guitar has a different life within it; a bass guitar has yet another. When I play them there's a resonance–the instrument is changing me. When I'm playing a guitar I'm a different person than when I'm playing bass or piano. The body adjusts to the instrument's form in a kind of reciprocity, a feedback loop: you have to surrender to its physical characteristics, which in turn change you cognitively.
Drummers understand music in a different way because something different is going on cognitively than with other instruments. What I like in a drummer is the ability to react to whatever is happening in the music rather than forcing something into it, all while multitasking independent rhythmic parts. There's a complexity there that doesn't exist in other instruments. Bass parts, even though married to what the drums are doing, can be underwhelmingly simple. In classical music, many instruments, each with its own individual sound world, are combined in a score, and a conductor has to interpret it. But each individual player also has to be listening to what's going on, because the conductor is bringing up the dynamic in one instrument group and maybe lowering it in another to get a blend, and the player has to adjust on their own to go for the totality of it. In terms of it being a collaboration or a team, a kind of “spirituality” can emerge as a byproduct.
Yesterday I read an article about quantum music. They seem to have found a correlation between subatomic particles and their effect on actual sound. I would assume that quantum music is more probable with symphonic music–not necessarily “classical,” because that label would tend to corrupt the whole idea: when you say “classical music,” people have a preconceived mental model. I would just say “polyphonic symphonic sound,” which can be an ensemble of traditional instruments combined with synthesizers, using quantum computing or not. Traditional acoustic instruments create sound in a different way than electronic music does. Electronic music relies on loudspeakers [and headphones], and there can be a lot of variation in them.
There are other videos I've done where I've talked about “scoring for sound.” This seems to be the new way we're making music, because sound is becoming disconnected from the instrument. The whole process started with synthesizers, which could approximate the sounds of other instruments; sampling then moved it further along to actually sound like the instruments. It's become so perfected that sometimes we can't tell the difference between sampled orchestral sounds and actual orchestras. In the post-pandemic world, where symphonic musicians have to think about how to evolve and adapt, I think we're moving towards music just being sound, not necessarily associated with particular instruments. There are both good and bad aspects, especially in the reciprocal relationship between the musician and the instrument. In some ways it's a sad prospect that we won't merge with instruments anymore. But I also like the idea of “miming” other instruments–playing a bass and having it sound like something else. I like the idea of playing the piano on my bass or playing a saxophone on the guitar. Of course, keyboards can be anything–like the keytar, which has been used for a long time. But a keytar can't exactly be a guitar because there's a different physicality to it. Many times musicians will simply play violin parts on a keyboard using a violin sound, and when it's transcribed it couldn't possibly be playable by an actual violin. But does it matter whether a violinist could play it? The audience isn't going to tell the difference–it just sounds like a violin. It's interesting because you can play music outside the instruments' natural ranges. A violin can't go below the G below middle C. All strings have their limitations based on the open strings, so it's nice that we can sometimes extend those and people can't tell the difference. Who cares if middle C is being played by a violin, a viola, a cello, a bass–or a guitar, for that matter?
What I've noticed in a lot of bands is that the bass and the guitar share a lot of the same range and it becomes muddy. Understanding the instruments' qualities, we can compose more contrapuntal lines between guitars playing in the low range of the instrument and bass players playing in basically the same range. They can shift roles. A flute is expected to play higher because of its place in the hierarchy, but all the woodwinds can dovetail and shift roles. That's one of the benefits of disconnecting the instrument from its [perceived] limitations.
Whether in a live performance or a recording, you're always reacting to whatever is there. Every musician understands that there has to be a balance between playing the music and listening to the music, which takes a lot of practice, because you have to keep reminding yourself to tune in to what's going on around you and play the appropriate part. In jazz, it becomes automatic over time. For the music to be listenable to yourself, you have to be playing the things that make it listenable overall. The problem is that your brain is so busy playing the music that you can't step outside of it to see it in context with everything else. This is probably a matter of neuroscience.
What happens when the brain is multitasking in music? Listening and playing are two different tasks, but they have to be done simultaneously for the music to work. Even recording a part in the studio is a live performance, even though you can do many takes. The final composite is an illusion of a spontaneous performance. It's the same illusion that films give us: a film appears seamless but is nonlinear, shot out of sequence over months or years. It takes on a different dimensionality.
“Dimensionality” was one of the terms used in a podcast I listened to this morning between Marianne Williamson and Jean Houston. It's a good way to describe it. It's a multi-dimensionality in which the brain has to move to a calm flow state in which the musician is being played more by the instrument. I've had that experience with a lot of live music, where it started to have a “spiritual” quality and it seemed like the musicians were playing on a different level–happening more with jazz–where it reached a higher plane. Jazz improvisation often relies on tension and release as an “engine” for emotional feedback between players and listeners–a feedback loop of expectation and satisfaction, or disappointment of expectation. It's something I've always wanted to bring from jazz into pop music: a combination of song structure and improvisation. I've always thought David Bowie did that well. He understood that there was a sweet spot between things that are composed, like a song form, and improvising within that space. This goes against the idea of the tribute band that plays music that sounds exactly like the record: there can't be any flow within that, even though the flow might have been in the creative process during the recording sessions. They're trying to duplicate a point in time, and I don't think it's really possible–which is the reason that when you see the real bands it sounds more like an improvisation, even though sometimes the guitar solos are played verbatim. David Gilmour would usually play more of a “scored” solo, which I think the audience expects. I think it works well because he makes them seem new every time.
It's nice if you can get to that level in an ensemble, where it's a combination of scored and improvised in terms of reacting to what's already there–which also relates to visual art: everybody has heard an artist talk about reacting to whatever is in front of them, figuring out what's next in the sequence and how it will move towards a resolution. But in terms of a live performance or a recording, it's about getting the music to a place where it's comfortable for the player to react to. Creating a good headphone mix for a singer makes a world of difference between what's being perceived by the ears and the way the performer performs. A while back I had been recording just myself talking–I had an idea and wanted to get it down–so I had my headphones on while listening to music or some other program, and the quality of my voice was so different. It was louder, because I was adjusting my voice based on what was in the headphones. When I listened back it had a different quality to it. Context matters a lot to sound and music because it's always reactive–either relaxed or anxious–and it affects the tone.

Guitar players and bass players are always talking about getting the right tone, because tone and feel make a big difference in the overall vibe. I've noticed there's a big difference between an instrument that is cold and an instrument that is warm. When the instrument is warmed up the action changes, and that makes a big difference in how it creates tone at the instrument level feeding into the amp. This would be a good use for artificial intelligence: have it determine what the best tone is–compile a training set of sounds that you like, and when you're playing, it finds those sounds and makes those adjustments. Or you can run it on final mixes from the master bus to recreate the tone learned from other music you've done. It creates a “characteristic”–it makes you “you” in sonic terms.
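The master-bus version of that idea doesn't even need machine learning to prototype–it's essentially classic spectral matching. Here is a minimal sketch (the function names and the NumPy-only approach are my own illustration, not any existing tool): average the magnitude spectrum of reference mixes whose tone you like, average the spectrum of the new mix, and apply the ratio between them as an EQ curve.

```python
import numpy as np

def spectral_profile(signal, frame=2048):
    """Average magnitude spectrum over frames -- a crude 'tone fingerprint'."""
    n = len(signal) // frame * frame          # drop the partial final frame
    frames = signal[:n].reshape(-1, frame)
    mags = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    return mags.mean(axis=0)

def matching_eq(source, reference, frame=2048, eps=1e-8):
    """Per-bin gain curve nudging `source`'s average spectrum toward `reference`'s."""
    gain = (spectral_profile(reference, frame) + eps) / (spectral_profile(source, frame) + eps)
    return np.clip(gain, 0.25, 4.0)           # cap boosts/cuts at roughly +/-12 dB

def apply_eq(signal, gain, frame=2048):
    """Apply the gain curve frame by frame (a real tool would use overlap-add)."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    out = np.fft.irfft(np.fft.rfft(frames, axis=1) * gain, n=frame, axis=1)
    return out.ravel()
```

A learned version would replace the simple averaged-spectrum "fingerprint" with a model trained on many of your mixes, but the feedback-loop shape is the same: measure the characteristic, compare, adjust.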
In terms of attention at the tactile level, I think we can perform better if our attention is diverted in some way–the mind is placed elsewhere and the music steps in on its own.
Years ago there was a craze with fidget spinners: you would spin something in your other hand, it would put something in the background, and it would help you think and perform better. I think that's how people use music to get into a flow: if they have music on in the background, the music becomes the spinner–the thing that runs in the background, a cognitive “white noise.” And it doesn't necessarily have to be sound–it's simply a background thing that comforts us, so we can go about our business and probably do a much better job at whatever we're doing, because it takes us out of our amygdala and into the [prefrontal cortex]. It's similar to the use of microdosing or nootropics as a background thing–a way to take our attention off the monkey mind and into the flow areas of the brain.
Everything in life is about getting “on a level.” Music is definitely like that: it's getting to the level where the chord changes are the “spinner”–you've learned them so well they just go on in the background and you can surf over them. That's the ideal situation, but it doesn't usually happen that way. Still, that's the sweet spot I want to be in. Flow experiences tend to be fairly rare because they require a lot of preparation, including a background of good health, a state of well-being, and being well-rested. There's a body-mind aspect that has nothing to do with artificial intelligence, which is now being sold as the cure-all for everything–we could apply AI to everything, but it's not necessary.
5/12/2021