My three creative principles and parametric model frame the development of a personal audio-visual instrument, which outputs acoustic sound, digital sound, and digital image. The instrument comprises a zither, that is, an acoustic multi-string instrument with a fretboard, and 3D software driven by amplitude and pitch detection of the zither input. Technically, the software would operate on any detected audio input, but the design specifications, mappings, parameterisations, and structural sections are made for a specific zither with aged strings, a personal tuning system, and personal zither playing techniques.
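The amplitude and pitch detection that drives the software can be sketched in minimal form as follows. This is an illustration of the general technique only, not the actual AG#1/AG#2 code: the function names, frame size, and detection method (RMS amplitude plus autocorrelation pitch estimation) are assumptions for the sake of the example.

```python
import math

def rms_amplitude(frame):
    """Root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def autocorrelation_pitch(frame, sample_rate, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency by picking the lag with the
    highest autocorrelation within the plausible pitch range."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic input frame: a 220 Hz sine wave at 0.5 peak amplitude.
sr = 8000
frame = [0.5 * math.sin(2 * math.pi * 220 * n / sr) for n in range(1024)]
amp = rms_amplitude(frame)            # RMS of a sine is peak / sqrt(2)
freq = autocorrelation_pitch(frame, sr)
```

In a real-time setting, values such as `amp` and `freq` would be computed per frame and mapped onto visual parameters; the mapping itself is where the instrument-specific design decisions described above come into play.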
The work stresses a distinction between the notions of 'play' in music and in gaming. The audience does not interact with the instrument, there are no allusive icons or player paradigms, and the performer does not face the screen. The interaction with the instrument is not simple, and the image creates a reactive stage scene without distracting the audience from the music.
A compositional language emerged with the first version of the instrument, which includes the AG#1 software. Clarifying artistic insights with the aid of cognition/attention research led to its further development. The subsequent version of the software, Arpeggio-Detuning, focuses on sound organisation; its creative strategies were later extended to the audio-visual software AG#2.
All software versions were developed in collaboration with John Klima, who kindly wrote the code according to my specifications.