In 2021, technology's role in the generation of art is a subject of ongoing debate and discovery. From the rise of NFTs, to the proliferation of techno-artists who use generative adversarial networks to produce visual works, to smartphone apps that write new music, creatives and technologists are constantly experimenting with how art is produced, consumed, and monetized.
BT, the Grammy-nominated composer of 2010's These Hopeful Machines, has emerged as a world leader at the intersection of technology and music. Beyond producing and writing for acts such as David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and films, he has helped pioneer production techniques such as stutter editing and granular synthesis. Last spring, BT launched GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It features 15,000 individually sequenced audio and video clips he created from scratch, spanning different rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drums, and myriad other continuously playing sounds. And it lives on the blockchain. It is, in my opinion, the first composition of its kind.
Could ideas like GENESIS.JSON be the future of original music, with composers using artificial intelligence and the blockchain to create entirely new art forms? What does it mean to be an artist in the age of algorithms? I spoke with BT to find out more.
What are your core interests at the intersection of artificial intelligence and music?
I’m really fascinated with this idea of what an artist is. My native language, music, is a very small set of variables. We have 12 notes. There is a collection of rhythms that we typically use. There’s a kind of vernacular of instruments, of tones, of timbres, but when you start combining them, it becomes this really deep dataset.
On its surface, it makes you ask, “What’s so special and unique about an artist?” And that’s something I’ve been curious about my entire adult life. Seeing the research that was going on in artificial intelligence, my immediate thought was that music is low-hanging fruit.
Today, we can take the sum total of an artist’s output, we can take their artwork, and we can quantify the whole thing into a training set, a massive, multivariable training set. And we don’t even have to name the variables. RNNs (recurrent neural networks) and CNNs (convolutional neural networks) identify them automatically.
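To make the idea concrete, here is a minimal sketch of learning musical structure from a corpus without hand-naming any variables. As a deliberately simple stand-in for the recurrent networks BT mentions, it uses a Markov chain over note names; the corpus and all names here are hypothetical, for illustration only.

```python
from collections import defaultdict
import random

def train(corpus):
    """Build note-to-note transition statistics from a sequence (the 'training set')."""
    model = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Sample a new sequence that statistically resembles the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return out

# A toy "corpus": a short fragment encoded as note names.
corpus = ["C", "E", "G", "E", "C", "G", "C", "E", "G", "C"]
model = train(corpus)
print(generate(model, "C", 8))
```

A real system would swap the transition table for a trained neural network, but the shape of the workflow — quantify the output, fit a model, sample something new in the same style — is the same.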
So you’re describing a body of music that can be used to “train” an artificial intelligence algorithm, which can then create original music resembling the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart into a training set, and we can recreate their sound, how will musicians and music lovers respond?
I think as we get closer, it becomes this strange uncanny-valley territory. Some would say that things like music are sacred and touch on very basic parts of our humanity. It’s not hard to get into a kind of spiritual conversation about what music is as a language, what it means, what power it has, and how it transcends culture, race, and time. So the traditional musician might say, “That’s not possible. There’s so much nuance and feeling, your life experience and that kind of thing, that goes into making music.”
And the engineer side of me says, well, look at what Google has created. It’s a simple kind of MIDI-generation engine: they’ve taken the complete works of Bach, and it’s able to spit out [Bach-like] fugues. Because Bach wrote so many fugues, he’s a great example. He’s also the father of modern harmony. Musicologists listen to some of these fugues from Google Magenta and can’t tell them apart from Bach’s original works. Again, this makes us question what constitutes an artist.
I am both excited about and incredibly concerned about this space we’re expanding into. Maybe the question I want to ask is less “We can, but should we?” and more “How do we do this responsibly? Because it’s happening.”
Right now, there are companies using something like Spotify or YouTube to train their models on living artists, whose works are protected by copyright. But companies are allowed to take someone’s work and train models on it right now. Should we be doing that? Or should we first talk to the artists themselves? I think we need to establish protections for visual artists, programmers, and musicians.