The basic idea of my sketch is to take prerendered footage from software like Cinema 4D or After Effects into Processing and modify it through sound, with the goal of creating generative visuals which respond to the live audio input.

How does it work?

First, the unnecessary areas of the square-shaped source footage are masked out so that only a small “piece of cake” is left. In the next step the masked image is duplicated, mirrored, and rotated. On its own, however, that only creates a static kaleidoscope image.
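The duplicate-mirror-rotate step boils down to a little bit of geometry. The following is a minimal sketch of that math in plain Java, assuming a wheel of `n` wedges where every other copy is flipped so the edges line up; the helper names are my own, not from the original sketch:

```java
// Layout math for a kaleidoscope built from one masked wedge.
public class KaleidoscopeMath {
    // Angular width of one "piece of cake" when the circle is split into n wedges.
    static double segmentAngle(int n) {
        return 2 * Math.PI / n;
    }

    // Rotation applied to the i-th copy of the wedge.
    static double segmentRotation(int i, int n) {
        return i * segmentAngle(n);
    }

    // Every other copy is mirrored so adjacent edges match seamlessly.
    static boolean isMirrored(int i) {
        return i % 2 == 1;
    }

    public static void main(String[] args) {
        int n = 8; // example: eight wedges
        for (int i = 0; i < n; i++) {
            System.out.printf("segment %d: rotation %.3f rad, mirrored=%b%n",
                    i, segmentRotation(i, n), isMirrored(i));
        }
    }
}
```

In a Processing sketch the same idea would typically be drawn with `rotate()` for the per-segment angle and `scale(-1, 1)` for the mirrored copies.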

An FFT analysis of the incoming audio signal gives me access to the individual frequency bands. One band triggers the X and Y movement of the source image, another one triggers the rotation of the whole kaleidoscope, and a third band is responsible for the playback of the animation sequence.
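The band-to-parameter mapping can be sketched as a few linear remaps. This is an illustrative example only: the specific bands, output ranges, and the 120-frame sequence length are my assumptions, and in a real sketch a Processing FFT library (e.g. Minim or the Sound library) would supply the amplitudes:

```java
// Mapping three FFT band amplitudes (normalized 0..1) to visual parameters.
public class BandMapping {
    // Linear remap from one range to another, like Processing's map().
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        float bass = 0.8f, mid = 0.5f, treble = 0.2f; // example amplitudes

        float offsetXY = map(bass,   0, 1, 0, 50);                     // px shift of source image
        float rotation = map(mid,    0, 1, 0, (float) (2 * Math.PI));  // spin of the whole wheel
        float playhead = map(treble, 0, 1, 0, 119);                    // frame of a 120-frame sequence

        System.out.printf("offset=%.1f rotation=%.2f frame=%.0f%n",
                offsetXY, rotation, playhead);
    }
}
```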

To bring in still more variability, the kaleidoscope can additionally be controlled through three sliders on an iPad.

The first slider controls the number of subdivisions, the second slider picks one of the three source sequences, and the third one is a multiplier for the X and Y movement of the source image.
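One way to turn three normalized slider values into those parameters is sketched below. The transport from the iPad (commonly something like TouchOSC sending OSC messages, received in Processing with oscP5) and all the concrete ranges are assumptions on my part; only the value mapping is shown:

```java
// Hypothetical mapping of three iPad slider values (each 0..1) to sketch parameters.
public class SliderMapping {
    // Slider 1: number of kaleidoscope subdivisions, assumed range 2..16.
    static int subdivisions(float s) {
        return 2 + Math.round(s * 14);
    }

    // Slider 2: index of one of the 3 source sequences (0, 1 or 2).
    static int sourceIndex(float s) {
        return Math.min(2, (int) (s * 3));
    }

    // Slider 3: multiplier for the X/Y movement, assumed range 0.5..3.0.
    static float moveMultiplier(float s) {
        return 0.5f + s * 2.5f;
    }

    public static void main(String[] args) {
        System.out.println(subdivisions(0.5f));
        System.out.println(sourceIndex(0.99f));
        System.out.println(moveMultiplier(1f));
    }
}
```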