Harmonizing AI and Human Creativity: Revolutionizing the Landscape of Media Music Composition

The content of this article/series on music & AI is written entirely by artificial intelligence. Read our introduction 'AI in Music - generated' to learn why and how we, as editors of Private Kitchen, create these articles.

For this article, links were added manually: at the end of the article there is a summary of several AI tools for media music components, with links to their websites.

Media music composition, the artistry of shaping music for films, TV shows, video games, and multimedia content, is an intricate synergy of melody, harmony, rhythm, orchestration, and sound design. With recent advancements in artificial intelligence (AI), we stand on the brink of a groundbreaking era in which AI and humans harmonize to compose music that resonates more deeply and flies higher.


The essence of media music lies in weaving emotions into narratives. AI, with its immense computational power, can automate various aspects of music composition, like generating melodies, harmonies, and rhythms. Yet it needs the human touch to add the nuances and emotional depth that kindle the listener's journey.

AI tools like OpenAI's Jukebox and MuseNet, Google's Magenta, and, more recently, MusicGen, Suno.AI and Stable Audio can generate music in a plethora of styles or even combine elements from different genres, ushering in an era of boundless creativity. In composition, orchestration and arrangement, AI technologies like AIVA and Orb Composer suggest instrumentations, create realistic simulations, and even generate music scores. But it's the human composer's curation, choice and arrangement of instruments that breathes life into these creations and crafts the unique sound that shapes the audience's emotional journey.
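To give an impression of how directly accessible text-to-music generation has become, here is a minimal sketch using Meta's open-source MusicGen through the audiocraft Python library. The checkpoint name, prompt and file names are illustrative assumptions, not a recommendation from this article.

```python
# A minimal text-to-music sketch with Meta's MusicGen (audiocraft library).
# pip install audiocraft -- checkpoint name and prompt are example choices.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint (larger variants exist).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # length of the clip in seconds

# Describe the cue in plain language, much like a brief to a composer.
descriptions = ['tense orchestral underscore, low strings, slow build']
wav = model.generate(descriptions)  # tensor of shape [batch, channels, samples]

# Write the result to disk with loudness normalisation.
for idx, one_wav in enumerate(wav):
    audio_write(f'cue_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```

A few lines of text in, a few seconds of audio out: the craft, as the article argues, lies in deciding whether that audio serves the scene.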

AI is also starting to excel at spotting and picture breakdown: identifying where, and what type of, music is needed in visual media. AI tools like the video summarizers on summarize.tech perform this task competently, but the ultimate decision on the mood and style of the music still relies on a human's conceptual understanding of the narrative.

Similarly, in sound design, AI tools like WaveNet, Stable Audio and AudioLDM can generate new sounds or manipulate existing ones to create effects. In mixing and mastering, AI tools like iZotope Ozone balance the levels of different elements and apply effects, but it's the human composer who carefully applies these sounds to create the desired atmosphere and mood, and who ensures the final product resonates with the intended emotional impact.
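For a concrete idea of what text-to-audio sound design looks like in practice, here is a minimal sketch using AudioLDM via Hugging Face's diffusers library. The checkpoint name, prompt and output file are assumptions made for the example.

```python
# A minimal text-to-sound-effect sketch with AudioLDM via Hugging Face diffusers.
# pip install diffusers transformers scipy -- checkpoint and prompt are examples.
import torch
from diffusers import AudioLDMPipeline
from scipy.io import wavfile

# Load a pretrained AudioLDM checkpoint (assumed name; other variants exist).
pipe = AudioLDMPipeline.from_pretrained('cvssp/audioldm-s-full-v2',
                                        torch_dtype=torch.float16).to('cuda')

# Describe the desired sound effect in plain language.
prompt = 'rain on a tin roof with distant thunder'
audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]

# AudioLDM generates 16 kHz mono audio; save it as a WAV file.
wavfile.write('effect.wav', rate=16000, data=audio)
```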

The synergy between AI and human creativity in media music composition is not about replacing one with the other. Instead, it's about how AI can amplify human creativity, fostering an environment where composers can focus more on the creative aspects of their work. AI brings efficiency and flexibility, but it's the composers who bring the emotional depth, making the music truly resonate with audiences.

Indeed, we are just at the dawn of this musical revolution. By harmonizing AI and human creativity, we hold the potential to create a symphony that's beyond what either could achieve alone, thus revolutionizing the landscape of media music composition. And, in this symphony, the human composer will always be the conductor, guiding the orchestra of AI tools to perform the beautiful composition of emotions and narratives.


A Human Afterthought

A note with this article: "The opinion of the AI does not necessarily reflect the opinion of Private Kitchen."
We believe that the creative aspect is about more than the concept and end result alone. The use of AI tools can put the 'creative process of making' (and the pleasure of it) under pressure.

For this article, the same question was asked of the Large Language Models (LLMs) Meta's Llama 2, OpenAI's GPT-4, Google Bard and Microsoft Bing. Based on the collection of follow-up questions and responses, Bard was asked to write an article. The best elements from three versions Bard generated were then used as inspiration for GPT-4 to write this final article.

The question was: "What are all components of music design for media composers, and which of these components can be realised by AI, or in collaboration with AI?" The entire conversation is available upon request.

It is worth noting that all LLMs stated that music design for media involves a complex process encompassing a variety of components. Some of these components can be accomplished by AI, but most are still more effectively handled by human media composers because of the creative and emotional nuances involved.

Furthermore, some of the LLMs were very good at speculating on the future use of AI in media composition, but that speculation often led to false examples of media music repertoire and AI tools. At the time of writing there do not seem to be any serious examples of media music in which AI played a significant role (whereas algorithmic music has in fact been used in earlier media compositions).

Although the rise of LLMs can help media composers with many tasks and chores, help with conceptualisation (via text, chat and melodic suggestions) seems to be their first and most important contribution for now.

Next up!

In a follow-up article we will share a list of smaller AI building blocks and examples of how tools can aid in music analysis, transcription of music to MIDI, generation, orchestration, mixing, mastering, and so on; the sketch below gives a small preview.
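As a taste of one such building block, audio-to-MIDI transcription can already be scripted in a few lines, for example with Spotify's open-source basic-pitch library. The file names are placeholders chosen for this example.

```python
# A minimal audio-to-MIDI transcription sketch with Spotify's basic-pitch.
# pip install basic-pitch -- file names are placeholders.
from basic_pitch.inference import predict

# Run the bundled note-detection model on a recording.
model_output, midi_data, note_events = predict('recording.wav')

# midi_data is a PrettyMIDI object; write it out for use in a DAW.
midi_data.write('recording.mid')
```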

For inspiration until then, https://www.futuretools.io/ is a recommended platform.


The images in this article were created with Stable Diffusion. Just like LLMs, Stable Diffusion is good at speculating about reality.

Overview of currently available AI tools for media music components

Move your mouse over the first column for an explanation of the media music component terminology.

