Pangalactic Graphic Score for Human-AI Interfaced Music Composition



For over 11 years, I've used a process I call V-A Synthesis (Visual-Aural Synthesis), in which images are broken down and converted into units of computer data, which are then interpreted and expressed in sonic form. Initially, I used algorithms to do this. More recently, I have been using artificial intelligence neural networks to reimagine the visual materials in aural form.
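For readers curious what an algorithmic image-to-sound conversion can look like, here is a minimal sketch in Python. To be clear, this is not the author's V-A Synthesis pipeline, which is not described in detail here; the column-scan spectral mapping, the parameter values, and the filename are illustrative assumptions. It scans an image left to right and lets each row's pixel brightness set the loudness of a sine-wave partial, so visual structure becomes audible structure. Requires numpy and Pillow.

import wave
import numpy as np
from PIL import Image

SAMPLE_RATE = 44100
SECONDS_PER_COLUMN = 0.05          # how long each image column is heard
FREQ_LO, FREQ_HI = 110.0, 3520.0   # pitch range mapped across image rows

def sonify(image_path, out_path="sonified.wav", height=64):
    """Scan an image left to right, turning each column into a moment of sound."""
    # Grayscale and downsample: each row of pixels drives one sine-wave partial.
    img = Image.open(image_path).convert("L").resize((256, height))
    pixels = np.asarray(img, dtype=np.float64) / 255.0   # (height, width), 0..1

    # Top of the image maps to the highest frequency, bottom to the lowest.
    freqs = np.geomspace(FREQ_HI, FREQ_LO, height)

    n = int(SAMPLE_RATE * SECONDS_PER_COLUMN)
    chunks = []
    for col in range(pixels.shape[1]):
        # Global sample offset keeps each partial's phase continuous across columns.
        t = (np.arange(n) + col * n) / SAMPLE_RATE
        bank = np.sin(2 * np.pi * freqs[:, None] * t)    # (height, n) oscillator bank
        # Pixel brightness sets the amplitude of each partial for this column.
        chunks.append(pixels[:, col] @ bank)
    audio = np.concatenate(chunks)
    audio /= np.max(np.abs(audio)) + 1e-9                # normalize to avoid clipping

    with wave.open(out_path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)          # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes((audio * 32767).astype(np.int16).tobytes())

sonify("webb_deep_field.jpg")      # hypothetical filename for the Webb image

Amplitude still steps abruptly from one column to the next in this sketch; a smoother result would crossfade adjacent columns or interpolate the brightness envelope per sample.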

Many of you will be aware of the new images captured by NASA's James Webb Space Telescope. The first of these is one I am using as a graphic score to create new music. Part of the beauty of using this image as a score for composing music is its sheer scale: in a spatial sense, it is pangalactic; in a temporal sense, it spans 13.5 billion years, as the galaxies are billions of light-years away and their light has taken billions of years to reach us. Looking at the image is literally looking back in time, to not long after the creation of the universe.

The AI I have chosen to work with has recently been updated with new features: instead of the AI doing most of the work, there are now knobs and dials that a human operator can adjust in real time, altering the parameters and playing the AI like an instrument. I experimented a little with this last night. I'll do some more work with it over the next few days and may incorporate some of the material into my soundtracks for Luis Buñuel and Salvador Dalí's film Un Chien Andalou. Incidentally, working on these soundtracks has also felt like a kind of time travel, communing with the writers, directors, cinematographers, and actors while creating the music. It has been one of the most profound connections I've felt working with material created by other artists who are no longer living. It makes me wonder about the plasticity, malleability, and porousness of time, and our capacity to move beyond it as a perceptual barrier.