AI, self-development & the wisdom of Bill Evans

by Ivo Witteveen

 

Like most of you, I have followed the rapid development of generative AI in the field of music with a mix of fascination and mild alarm. Numerous times I've planned to write about it - but each time, AI developed faster than my idea of what to write. So I was looking for a way to step back a bit from the day-to-day AI news cycle, and last week I got some unexpected help from my favourite jazz pianist, Bill Evans (1929-1980).

How so? I was rewatching an old interview with Evans, in which he talks about how to study complex phenomena. At one point he says something very insightful, illustrates it by playing an example on the piano, and it occurred to me: "he's talking about AI!". Of course, he wasn't - the interview was part of the documentary 'The Universal Mind of Bill Evans', taped in 1966. But something clicked, and I thought it might be interesting to see what happens when we apply his time-tested advice to our understanding of AI.

Here's the fragment I'm talking about, 3 minutes of highly recommended watching:


Here's the gist of it: Evans describes what he often sees in students who try to achieve quick success: "They tend to approximate the product rather than attacking it in a realistic true way, at any elementary level. (...) They would rather approximate the entire problem, than to take a small part of it and be real and true about it." He then brilliantly illustrates this at the piano by playing a tune followed by three improvisations on it: first at a professional level (the way he would normally do it), next at a simpler (but accurate) level that would be helpful when studying, and lastly, mockingly, a sort of virtuosic pseudo-jazz that's so vague it misses the point. It was at this last improvisation that I thought: 'hey, this sounds like AI!' After playing the examples, Evans adds: "You can't take the whole thing. To approximate the whole thing in a vague way gives one a feeling that they more or less touched the thing, but in this way, you just lead yourself towards confusion." And then I wondered - isn't this what AI does? 'Approximate music', 'more or less touch the thing'? I don't know, but it got me thinking - and I thought of two ways to apply Evans' insight to our dealings with AI.

Firstly, you could compare the way current generative AI works and sounds to Evans' 'approximating the whole thing in a vague way'. Generative AIs like Suno and Udio are trained on complete tracks and thus approach the music as a whole. At a conceptual level, diffusion models like these start from white noise and denoise it step by step until the audio resembles something recognizable, such as a song. While the most prominent parts of the song get better and better as training improves, there is often a certain vagueness in the background instrumentation. For instance, there might be a rather convincing lead vocal but, in the background, you hear something that 'sounds like guitar parts' - yet you couldn't really take it apart into separate guitar parts one could actually play. (Sidestep: maybe we should call this kind of 'vague AI sound' not music, but 'music-like substance', analogous to the term 'food-like substance' that food writer Michael Pollan coined for processed foods he doesn't consider real food). So, you could say that current generative AI 'approximates the whole thing in a vague way', as Evans would call it. But with the speed of AI development, I think it is likely that this vagueness will quickly disappear. There still will be no 'real understanding' of music by AI at any level, but it can be trained at all levels and will probably get more accurate.
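To make that 'approximating the whole thing' idea a little more concrete, here is a minimal code sketch of the general shape of a diffusion process. The predict_noise function and the numbers are placeholders I made up for illustration - this is an assumption about the general technique, not how Suno or Udio actually work:

```python
# Conceptual sketch of diffusion-based audio generation - illustration only.
# `predict_noise` is a hypothetical stand-in for a trained neural network;
# this is not Suno's or Udio's actual architecture.
import numpy as np

NUM_STEPS = 50  # arbitrary number of denoising steps for this sketch

def predict_noise(noisy_audio: np.ndarray, step: int) -> np.ndarray:
    # A real model would estimate the noise still present at this step.
    # Here we just return a small fraction of the signal as a dummy estimate.
    return noisy_audio * 0.05

# Start from one second of pure white noise at 44.1 kHz.
audio = np.random.randn(44100)

# Each step removes a little of the estimated noise from the WHOLE signal.
# The point for the argument above: the entire waveform is nudged toward
# "something song-like" at once - there are no separate, playable guitar
# or vocal tracks hidden inside it that you could pull apart afterwards.
for step in reversed(range(NUM_STEPS)):
    audio = audio - predict_noise(audio, step)
```

The details are invented, but the shape of the process is the point: the model refines one complete mixture rather than building up individual parts, which is exactly where that vague, inseparable background comes from.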

A second way to look at this is to apply the Evans quote to our own relationship with AI. Perhaps we shouldn't 'try to approximate the whole AI thing in a vague way' and would be better off looking at it at a more elementary level. For me personally, 'being real and true about it' means facing that the arrival of generative AI leads to hard questions about why I make music and how I like to make music. For instance, I have learned to make good mixes mainly by doing it. Of course I had help from teachers, peers and other mixes I could reference - but it wasn't until I had spent about ten years doing it that my mixes reached a professional standard. The same probably goes for my compositions. And I enjoyed learning these things: I like the process of trying new stuff, the artistic struggle, the satisfaction of succeeding, and the development resulting from it.

What would have happened if AI tools had given me a perfectly acceptable mix in seconds when I was just starting out with mixing music? Or 25 melodic themes to choose from, instantly? I could have 'finished the job' quicker, but then again: what exactly is my job? I think my job as a composer/producer is also to keep developing. And our development is guided by the things we choose to do. So, if I were to use AI to generate 25 musical themes to choose from, I would train myself in choosing between various musical ideas and thus get better at that - but not at generating new ones. Admittedly, you could also use AI-generated material as inspiration to create something yourself - but still, you would be changing your way of working and with it, the direction of your development (for better or worse).

Another thought: when I use an AI mix assistant, my current self can judge whether the AI-assisted mix is actually any good - but perhaps only because I have experience in mixing music myself. If I were starting out now, with no experience in mixing, what and how would I learn if I relied on AI to solve mix problems that were previously solved by humans? Or what about this: AI is trained on lots and lots of data, so its output might tend to steer you towards the common denominator, towards the most expected result. But isn't it part of being a musician to know the conventions of a musical style, only to break away from them at some point and do something fresh and unexpected? It's tempting to follow the path of AI-generated suggestions, but it makes it harder to lead.

"It's tempting to follow the path of AI-generated suggestions,
but it makes it harder to lead."

Of course, this isn't entirely new - there has always been a link between the technology we use and the skills we develop. Film composers of the older generation are often more accomplished at working with just paper and pencil than musicians of my generation, who have been aided by DAWs their entire lives. Producers who have become masters at manipulating pre-recorded beats might have a hard time recording a real drummer. You learn by doing - and my point here is that we have choices to make. In the discussion about AI, in addition to all the ethical, ecological, economic and legal issues, there might also be self-developmental aspects to consider. This is not to say AI can't be helpful for learning - used in the right way, AI can probably be tremendously helpful as an educational tool as well. But if you start using AI to do something that you used to do yourself, you will be changing the course of your own development. Let's make that change intentional.

"If you start using AI to do something that you used to do yourself, you will be changing the course of your own development. Let's make that change intentional."

To get back to Bill Evans:



"To approximate the whole thing in a vague way gives one a feeling that they more or less touched the thing, but in this way, you just lead yourself towards confusion. And ultimately, you're going to get so confused that you'll never find your way out. I think it is true of any subject that the person that succeeds in anything has the realistic viewpoint from the beginning and knowing that the problem is large and that he has to take it one step at a time, and he has to enjoy the step-by-step learning procedure."


To him it was clear: quick fixes don't lead to personal growth, and you must make sure you enjoy learning. 

What does that mean, enjoying the learning process, in the context of music technology? How people prefer to interact with technology is personal. There are film composers who can't imagine anyone else touching so much as one fader in their music - there are others who happily sketch out main ideas and leave the rest to assistants. Some people have no problem with using preset sounds and tweaking them a little, while others insist on creating all synth sounds from scratch. Personally, I have never liked working with more complex samples (phrases, beats) that contain too much pitch and rhythm information; but there are flourishing music genres built precisely on the creative mangling of samples like that. It's just 'not for me' - not because I have anything against it or can't appreciate it, but because my personal fascination, how I like to create music, has led me onto another path.

It might be the same with AI: whether and how AI fits into your workflow is probably highly personal and requires some introspection and figuring out. If you are the kind of creative mind who thrives on serendipity and experimentation, and who will use basically anything you come across to create something new, then being able to generate large chunks of music by prompting generative AI and playing around with that material could be fantastic. But if you're the kind of person who develops a whole piece in their head, generative AI can be more distracting than helpful - because its output will never exactly match what you have in mind. Maybe AI tools that assist with lower-level tasks are more useful to you. Or perhaps you'd be better off using AI for your social media or to keep track of your royalties, and not as a musical aid at all. My point is: if the question is 'as a composer, how do you engage with AI', it is worth taking some time to reflect on who you are as a composer, what fascinates you, why you like creating music and in what way you like to create music. Not because all things in life should always happen the way you like, or because you should bury your head in the sand and ignore the threats AI might pose. But because for your own development and happiness in the long term, it is imperative to keep enjoying what you do. Use AI in a way you like, discover if there's a way to use it that fits you - don't use it because you feel you must.

"If the question is 'as a composer, how do you engage with AI', it is worth taking some time to reflect on who you are as a composer, what fascinates you, why you like creating music and in what way you like to create music."


So, there they are, my current thoughts on the fast-developing realm of AI in relation to music composition. If, in a few weeks, months or years, I must conclude 'hmm, that didn't age well...', I'd like to pre-emptively hide behind the wise words of science (fiction) writer Douglas Adams:

  1. “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

  3. Anything invented after you’re thirty-five is against the natural order of things.”

Feel free to figure out your mental age based on your feelings about AI.

 
