SOUND OBSESSED
9. The Glad Scientist x Tom Guida
Solarpunk Migrations
THE STORY
Because goals represent a future mindset rooted in intention and planning, we imagine a world where humans, nature, and technology are unified in healing and advancing one another. The generative story travels between humans of different cultural backgrounds, in different natural environments, where together they communicate and connect on deep levels using these new and natural technologies.
We use poetry and the sonification of imaginative imagery of these futures to tell a story of a positive tomorrow, in the hope of igniting your own intention and imagination toward building it with us, in community.
THE TECHNIQUE
The process starts from AI-generated images selected over the course of three years and moves through a few iterations. First, the images are turned into video in ComfyUI (using LCM, if you are into it), at times with Steerable Motion techniques. The videos are then brought into our custom system built in TouchDesigner, which creates a series of sequencers out of different rows of pixels in the video, sequencing and looping over 1024 steps and using the RGB values of the pixels as MIDI notes on different channels.
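A minimal sketch of the pixel-row sequencer idea, outside TouchDesigner: one row of an RGB frame is resampled to 1024 steps, and each pixel's R, G, and B values are mapped to a MIDI channel, note, and velocity. The frame source, row index, and the exact R/G/B-to-MIDI mapping are illustrative assumptions, not the artists' actual network.

```python
# Sketch only: map one pixel row of a video frame to a 1024-step MIDI sequence.
import numpy as np

STEPS = 1024  # loop length described in the text

def row_to_sequence(frame: np.ndarray, row: int) -> list[tuple[int, int, int]]:
    """Map one pixel row of an RGB frame to (channel, note, velocity) steps."""
    pixels = frame[row]                          # shape: (width, 3), values 0-255
    # Resample the row to exactly 1024 steps so every loop has the same length.
    idx = np.linspace(0, len(pixels) - 1, STEPS).astype(int)
    steps = []
    for r, g, b in pixels[idx]:
        channel = int(r) % 16                    # R picks one of 16 MIDI channels (assumed mapping)
        note = int(g) // 2                       # G scaled into the 0-127 note range
        velocity = int(b) // 2                   # B scaled into the 0-127 velocity range
        steps.append((channel, note, velocity))
    return steps

# Example: a synthetic 640x360 frame stands in for a decoded video frame.
frame = np.random.randint(0, 256, size=(360, 640, 3), dtype=np.uint8)
sequence = row_to_sequence(frame, row=180)
print(sequence[:4])
```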
On the same network, we use RTP-MIDI to receive the notes in Ableton, where the different notes and channels are mapped to various instruments.
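A minimal sketch of the network MIDI leg, assuming an RTP-MIDI session (for example, macOS's "Network Session 1" in Audio MIDI Setup) is already enabled and reachable from the machine running Ableton; the port name, timing, and mido library are assumptions, not the artists' exact setup.

```python
# Sketch only: send sequencer steps as MIDI notes over an existing RTP-MIDI session.
import time
import mido

port = mido.open_output('Network Session 1')   # the RTP-MIDI session appears as a normal MIDI port

def play_step(channel: int, note: int, velocity: int, duration: float = 0.1) -> None:
    """Send one sequencer step as a note-on/note-off pair on the given MIDI channel."""
    port.send(mido.Message('note_on', channel=channel, note=note, velocity=velocity))
    time.sleep(duration)
    port.send(mido.Message('note_off', channel=channel, note=note))

# In Ableton, each channel can then be routed to a different instrument track.
for channel, note, velocity in sequence[:16]:   # `sequence` comes from the sketch above
    play_step(channel, note, velocity)
```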
As the generative performance unfolds, the sequencers sample the data over a varying time window: at times a sparser reading of the data's curve/trajectory (a window under 10 seconds), at others a near real-time reading (a window under 1 second), giving natural dynamics to the way the instruments are played by the data.
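A minimal sketch of the variable time window idea: the same stream of frames is read either sparsely (a long window skimming the overall curve) or nearly in real time (a short window tracking every change). The frame rate, window lengths, and step count are illustrative assumptions.

```python
# Sketch only: sample the recent frame history over windows of different lengths.
import numpy as np

FPS = 30  # assumed video frame rate

def sample_window(frames: np.ndarray, window_sec: float, steps: int = 64) -> np.ndarray:
    """Pick `steps` frame indices spread across the last `window_sec` seconds of frames."""
    window_frames = max(1, int(window_sec * FPS))
    start = max(0, len(frames) - window_frames)
    idx = np.linspace(start, len(frames) - 1, steps).astype(int)
    return frames[idx]

# 10 seconds of small synthetic frames stand in for the decoded video history.
frames = np.random.randint(0, 256, size=(300, 90, 160, 3), dtype=np.uint8)
sparse = sample_window(frames, window_sec=10.0)   # skims the broad trajectory of the data
tight = sample_window(frames, window_sec=1.0)     # near real-time, more reactive playing
print(sparse.shape, tight.shape)
```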
ARTIST BIOS:
The Glad Scientist
Daniel Sabio (b. Augusta, GA, 1988) is a non-binary Puerto Rican conceptual media artist based in Barcelona. Their diverse work spans music, poetry, VR performances, brain- and heart-controlled artworks, and 3D interactive game experiences. Their concepts range from the personal (non-binary identity, mental health) to the global and universal (quantum time, environmental futures), delving into human-technology relationships to invite deeper introspection. Their solo and collaborative work has been shown at LEV Festival, the Venice Biennale, ISEA, and Ars Electronica, and in galleries in Shanghai, Tokyo, New York, Basel, and Berlin.
Tom Guida
Tom Guida is a dynamic music producer and sound engineer who has evolved into a game audio artist and project manager. Over more than four years in the video game industry, he has delivered music composition and sound design. As a key member of Gin Tronic, an electronic live band, he contributed to innovative records and immersive stage productions. Blending formal and self-taught education, Tom is now at the forefront of interactive multimedia experiences, redefining how we engage with sound and music in digital realms.