At the beginning of July, following the AI and Music lab, an international group of artists gathered in St. Petersburg (RU) to perform music aided, infused, inspired, and generated by artificial intelligence. The stage was curated by Natalia Fuchs of Artypical and facilitated by Peter Kirn of CDM media. Peter wrote a nice follow-up and interviewed Natalia as well (linked below), so I’ll just talk a bit about my part in the picture.
I was playing drums and live electronics as part of the Improvisation group together with Symphocat, KMRU, and Ilya Selikhov. We worked almost exclusively with sound material that an AI generated from the training material we fed it (my drumming, too, was triggering AI samples). The input data were a large number of compositions by Morton Feldman, Cornelius Cardew, Mika Vainio, and Fela Kuti, along with a recording of KMRU’s grandfather. The initial idea was to use AI to momentarily resurrect our favourite dead composers as post-human digital sonic artefacts and play a concert together. Even though the training dataset was fairly ambient, the resurrection turned out extremely noisy. It seems that even the most powerful computers take time to learn how to interpret music, just as we humans do (we only trained for about four weeks, though). Nevertheless, I can humbly say all the performances were a blast! 🙂
Go and have a listen!
Symphocat uploaded our full performance to his SoundCloud:
Here is a short preview of how it all looked:
And this is how that impro-sition was structured:
Finally, grand thanks to everyone involved for making this all happen. Curious where AI will lead us next.