Infratonal Profile

About

INFRATONAL is an artistic project led by Louk AMIDOU, a Parisian multidisciplinary artist working at the intersection of AI creativity, digital arts, and electronic music. Using code, AI, and diverse technology, he explores new dialogues between algorithms and human beings. He is the founder of Artefunkt Studio, an entity dedicated to AI learning, experimentation, and the production of interactive AI art installations and AI-AV performances.

As an interactive designer, creative practitioner, musician, and coder, he has been immersed in creative and technological practices since his teenage years, starting out in the Atari Demoscene. He has created art involving sound and image, from visual productions and electronic music records to digital art projects. He holds degrees in Fine Arts, Computer Graphics, and Communication, and has spent several years working in agencies, helping brands and organizations navigate the digital landscape.

His recent artistic work explores the relationship between humans and algorithms, focusing on hybrid forms that blend the musical, the visual, the conversational and the gestural into new performative artworks. His technological approach experiments with the tension between AI and human input, seeking a creative gesture where the algorithm becomes an invisible instrument, a partner in creative dialogue rather than only a tool. At the same time, he takes a critical stance on the artist’s relationship to the algorithm, questioning how human imperfection remains a necessary element of the creative process in the age of AI.

Interview

To learn more about my process, you can read this interview on the TouchDesigner website.

Artist Statement

VISUAL AND MUSICAL ALGORITHMIC ARTWORKS

I use artificial intelligence and generative algorithms to create and control interactive visual and musical pieces. These pieces are abstract, poetic representations of the algorithms themselves, and they question how algorithms are used in our society. This hybrid, data-based artwork is designed to be performed in real time.

ALGORITHMS AS A NEW INSTRUMENT TO PERFORM

In my approach, the algorithm's objective is not to directly produce the artwork but to function as an invisible visual and musical instrument that responds to free human inputs such as gestures. I then seek to perform this intangible algorithmic instrument as a musician would play a physical one.

EXPRESSIVE GESTURE TO DIALOGUE WITH A BEHAVIOURAL INSTRUMENT

Removing the tangible forces me to develop a specific performative gesture and to search for a new language to communicate on an emotive level with this algorithmic instrument. And because of its algorithmic essence, this instrument can perceive the real world, look at the performer, and respond with its own vision: it becomes a sensitive dialogue between the artist and the machine.

RETHINKING THE ARTWORK’S NATURE

In this approach, the deterministic nature of the art piece is loosened: the work is shaped as much by the gesture as by the autonomous and unpredictable behavior of the algorithm. This hybrid visual and musical artwork is an instrument and, at the same time, the creative object produced by the performance.

HUMANIZED ALGORITHMIC ARTWORK

In our society, algorithms are not neutral and are not explicitly created for the well-being of humans; they are often used to constrain people and to optimize processing efficiency. Another concern is the gradual loss of control over the algorithmic process due to the high level of abstraction, the increased complexity, and the "black-box" nature of the algorithms themselves. The result could be massive, cold, dangerous, biased services that conform blindly to an economic ideology instead of to human needs.

My artistic experiment is meant to question the relationship between humans and algorithms. It tries to re-establish a dialogue by bringing embodied presence into the opaque and deterministic algorithmic system. In this vision, the algorithm is a creative instrument that only produces artistic form by connecting with imperfect and spontaneous human gestures and creative intention.

On the tech side, I've worked on:

  • Installations using a wide range of computer vision capabilities
  • Depth sensor integration (Kinect, RealSense, …)
  • Generative AI custom animation (ComfyUI, AnimateDiff)
  • Generative AI UI/UX for ideation (LLM implementation)
  • Real-time and interactive generative AI (StreamDiffusion)
  • Tools for TouchDesigner (Instant Probe, ...)
  • Visuals reactive to audio / audio reactive to visuals
  • Mapping interface for gesture-controlled audio
  • Real-time skeleton tracking for interaction
  • Particle systems for physical presence
  • LiDAR touch walls and touch surfaces
  • Object detection in HMI interfaces
  • Real-time immersive space projection (360° mapping)
  • UX for interactive design interfaces: touchscreen, controller, on-air
  • And much more ...
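As an illustration of the audio-reactive pattern listed above, here is a minimal, self-contained Python sketch — not the artist's actual pipeline (in TouchDesigner this kind of mapping is typically built with CHOP networks) — that maps the smoothed energy of an audio buffer onto a visual parameter. All class and parameter names, and the smoothing constants, are illustrative assumptions:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one audio buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

class AudioReactiveParam:
    """Maps smoothed audio energy onto a visual parameter range.

    `attack` and `release` are hypothetical smoothing factors: the
    envelope rises quickly on loud buffers and decays slowly, which
    keeps the driven visual from flickering on every buffer.
    """
    def __init__(self, lo=0.0, hi=255.0, attack=0.6, release=0.1):
        self.lo, self.hi = lo, hi
        self.attack, self.release = attack, release
        self.env = 0.0  # smoothed energy envelope in [0, 1]

    def update(self, samples):
        level = rms(samples)
        # Pick the faster coefficient when the signal is rising.
        k = self.attack if level > self.env else self.release
        self.env += k * (level - self.env)
        e = max(0.0, min(1.0, self.env))  # clamp for safety
        return self.lo + e * (self.hi - self.lo)

# A loud buffer pushes the parameter up; silence lets it decay slowly.
brightness = AudioReactiveParam()
loud = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(512)]
silent = [0.0] * 512
b1 = brightness.update(loud)    # jumps up with the loud buffer
b2 = brightness.update(silent)  # decays, but stays above zero
```

The same envelope-follower idea works in the other direction ("audio reactive to visuals") by feeding a visual feature, such as frame brightness or optical-flow magnitude, into `update` and routing the result to a synth parameter.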