Integrating 3D Experiences Into Electroacoustic Performances
Digital artist Daniel Iglesia shares thoughts about his work manipulating sound and video to put a new edge on traditional musical performances.
Daniel Iglesia creates music and media for humans, computers, and broad interactions of the two. He works with live manipulations of sound and video, with automation and algorithmic composition, the magnification of inherent chaos in sounds, and real-time media performance with traditional instruments. His works have taken the form of audio and video performance, instrumental works with live electronics, gallery installations, and collaborations with many disciplines such as theater and dance.
Daniel will be speaking about his work at the upcoming TEDxBrooklyn conference this weekend, and shares his thoughts with us below:
What other projects are currently inspiring your work?
The niche of electroacoustic/live performance is small enough in New York that a lot of people know each other, so I regularly see my friends’ and colleagues’ work up close, which is most inspiring. A few names in that orbit: Douglas Repetto (a lot of cool robotic installations), Sam Pluta, Luke Dubois, and Blake Carrington.
Some outside stuff I recently encountered that was relevant to me:
- Ryoji Ikeda: http://www.fiaf.org/crossingtheline/2010/2010-09-ctl-ikeda-datamatics.shtml
I am pretty interested in simultaneous audio/video synthesis on the same dataset, and this project did that in a very low-level and literal way that I enjoyed.
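To make the idea of "simultaneous audio/video synthesis on the same dataset" concrete, here is a minimal, purely hypothetical sketch in Python: one shared array of numbers drives both an audio buffer (as harmonic amplitudes) and an image (as pixel brightness). The function names and mappings are my own illustration, not how Ikeda's or Iglesia's actual systems work.

```python
import numpy as np

# Hypothetical sketch: one dataset drives both sound and image.
# make_audio / make_image and their parameters are illustrative only.

rng = np.random.default_rng(0)
data = rng.random(64)            # shared dataset: 64 values in [0, 1)

def make_audio(data, sr=44100, dur=1.0):
    """Treat each value as the amplitude of one harmonic of a 110 Hz tone."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    audio = sum(a * np.sin(2 * np.pi * 110 * (i + 1) * t)
                for i, a in enumerate(data))
    return audio / np.abs(audio).max()   # normalize to [-1, 1]

def make_image(data, ):
    """Treat the same values as pixel brightness via an outer product."""
    return np.outer(data, data)          # 64 x 64 grayscale frame

audio = make_audio(data)   # one second of sound from the data
frame = make_image(data)   # one video frame from the same data
```

Because both outputs come from the identical array, any change to the data is heard and seen at once, which is the low-level, literal coupling described above.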
While I don’t really follow the chiptunes/demoscene world in depth, I enjoy it whenever I encounter it: people making music and video on old pieces of game hardware (Nintendos, Ataris, Commodores, etc.), hacking and repurposing them beyond their original intended use. I make a lot of audio samples on a Game Boy with a custom synth/sequencer cartridge, and it’s a fun way to get sounds that are both idiosyncratic and unexpected, yet laden with recognizability and nostalgia.
What things are you looking forward to being able to incorporate with emerging or developing technologies?
There’s a lot of work out there that uses “new” technologies just for the sake of being the first to do so. I don’t really like that; those works, while perhaps conceptually attention-getting, are often not very interesting aesthetically, because the new tech hasn’t facilitated a new or interesting aesthetic experience.
A lot of my projects use either very simple or old technologies: 3D glasses and their predecessors, which have been around for a century; Game Boy synth chips; simple control boards (like the Arduino, a microcontroller platform that is popular in the artsy/hacker world); OpenGL graphics (the standard graphics API on all video cards, which is what I use to generate the 3D graphics); hacked radios; etc. Often it’s merely repurposing these things into the live art/music performance world that is interesting or different.
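For readers curious how century-old paper 3D glasses pair with modern graphics, here is a tiny hypothetical sketch of the classic red/cyan anaglyph trick: the left-eye view supplies the red channel and the right-eye view the green and blue channels. A real system (e.g. an OpenGL renderer) would draw the scene twice from two offset cameras; here two synthetic grayscale "views" stand in for that, and none of this reflects Iglesia's actual code.

```python
import numpy as np

def anaglyph(left, right):
    """Combine two grayscale views (H x W, values 0-1) into one RGB frame
    for red/cyan glasses: red <- left eye, green and blue <- right eye."""
    h, w = left.shape
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = left           # red channel: left-eye image
    rgb[..., 1] = right          # green channel: right-eye image
    rgb[..., 2] = right          # blue channel: right-eye image
    return rgb

# Two views of the same vertical bar, shifted by a small horizontal
# parallax so the bar appears to float in front of the screen.
left = np.zeros((4, 8)); left[:, 3] = 1.0
right = np.zeros((4, 8)); right[:, 5] = 1.0
frame = anaglyph(left, right)
```

The glasses filter each eye back to its own channel, and the brain fuses the horizontal offset into depth, which is all the "old" technology needs from the computer.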
The speed of technological development (and the desire to be on the tip of it) means a lot of technologies aren’t deeply explored as art-making tools before being abandoned for the newest fad. Also, the current wave of fetishized consumer electronics is extraordinarily closed to the hacker/inventor/artist/programmer community; these devices have been deliberately constructed so that the company, rather than the developer community it has always depended on, controls software and content distribution.
But then again, a lot of my performance software does indeed rely on contemporary computer speed and hardware, especially for live audio and video manipulation (see my project phase vidcoder for an example: http://vimeo.com/2290600 ). And pedagogically, there’s a lot of great current work in the free/open-source universe that has helped make tools more open and approachable to artists and musicians: programming languages like Processing, SuperCollider, and ChucK, and hardware tools like Arduino and MakerBot. One emerging technology that I’ve yet to fully explore, but look forward to, is 3D printing (with printers currently plummeting in price), which I’d use to make physical objects as both control tools and remnants of live performance.
What has been the most interesting or surprising response to any of your work or performances?
I once did an informal solo laptop performance while in residence at an institution dedicated to creating sustainable cities. There were some serious anti-technology people there who resented my presence. Apparently the only ideal music in their envisioned utopia was clichéd folk songs on acoustic guitar.
While that is an extreme example, there is still a divided audience out there on digital manipulation of human-created sound; a few musicians I’ve worked with haven’t liked that their instrumental sound has become fodder for a computer to mangle; many people are attached to the human/gestural importance of a tangible vibrating object, and distrust the perceived opacity of a computer. But this has started to change dramatically with newer crops of contemporary performers, where the computer is starting to be treated as a normal ensemble instrument with its own (vaguely defined) practice of musicianship.
An interesting response: when I first incorporated 3D glasses into some of my live performance systems, I was struck by their effect; they allowed an otherwise jaded, indifferent art-going crowd to momentarily regress into a more childlike state, where it’s okay to enjoy something on a visceral, non-conceptual aesthetic level. I liked that, and that’s been a big reason I’ve continued to add them to both solo performances and instrumental ensemble pieces.