Introducing the
LumaSonic Engine

Powering the future of multi-sensory experiences


Multi-Sensory Evolved

The LumaSonic Engine is a powerful suite of tools with a familiar workflow for authoring and distributing the next generation of immersive, multi-sensory content. Whether you are creating relaxing inner journeys, precise brainwave entrainment sessions, or high-framerate audiovisual art, or you are exploring the deep somatic benefits of adding vibration to light and sound, the LumaSonic Engine was designed for you.

Light painting (Sergey Katyshkin, Pexels)

Unified Workflow

Composing experiences for multiple senses can feel intimidating and cumbersome. Combining and synchronizing different technologies is possible, but rarely straightforward, which can impede the creative process and complicate distribution. Yet the benefits of well-crafted multi-sensory experiences are profound.

With a unified workflow and accessible encoding and decoding technology, the LumaSonic Engine lets you spend more time in creative flow and less time troubleshooting and configuring.

Features

Industry Standard

The LumaSonic Engine is designed around industry-standard audio technology, building on top of the audio skills, software, and hardware that you already know and love. Color control information is invisibly encoded into digital and analog audio signals, and efficiently decoded at playback by the LumaSonic Decoder. Choose your favorite Digital Audio Workstation and use our VST3 or Audio Units plug-ins to start creating multi-sensory content quickly on macOS and Windows.
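As a rough illustration of the idea (not the actual LumaSonic encoding scheme), the sketch below assumes a SpectraStrobe-style approach in which a brightness envelope amplitude-modulates a near-inaudible carrier tone that is mixed quietly into the audio. The carrier frequency, mix level, and function names are all hypothetical.

    # Conceptual sketch only: encode one color channel's brightness envelope
    # onto a high-frequency carrier and mix it into an audio track.
    import numpy as np

    SAMPLE_RATE = 44_100      # standard audio sample rate (Hz)
    CARRIER_HZ = 18_500       # hypothetical near-inaudible carrier frequency
    CARRIER_LEVEL = 0.05      # keep the control signal well below the music

    def encode_channel(audio: np.ndarray, brightness: np.ndarray) -> np.ndarray:
        """Mix a brightness envelope (0..1, one value per audio sample) into
        `audio` as an amplitude-modulated carrier a decoder can recover later."""
        t = np.arange(len(audio)) / SAMPLE_RATE
        carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
        return audio + CARRIER_LEVEL * brightness * carrier

    # Example: pulse a light at 10 Hz over a five-second 440 Hz test tone.
    t = np.arange(SAMPLE_RATE * 5) / SAMPLE_RATE
    tone = 0.3 * np.sin(2 * np.pi * 440 * t)
    pulse = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))    # 10 Hz brightness envelope
    encoded = encode_channel(tone, pulse)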

Author Once, Run Anywhere

The LumaSonic Encoder and Decoder are lightweight, cross-platform software modules that can run on anything from a $5 microcontroller to a Raspberry Pi, as well as all major operating systems: macOS, Windows, Linux, iOS, and Android, including Virtual Reality devices like the Meta Quest and Meta Quest 2. AudioVisual Stimulation (AVS) devices that support AudioStrobe or SpectraStrobe encoding, like the MindPlace Kasina, are also compatible.

Use Existing or Custom Lighting Hardware

The decoder is easy to port to nearly any device. Create your content once, and run it on the hardware that best fits your needs. No decoder? No problem: your composition still plays as a normal audio signal on any audio device or music player.
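To show how little a decoder needs, here is a matching sketch (again an illustration, not the LumaSonic Decoder itself) that estimates the carrier's amplitude over short frames, essentially a single-bin DFT, and outputs one brightness value per frame that could drive an LED's PWM duty cycle.

    # Conceptual decoder sketch: recover the brightness envelope by measuring
    # the carrier's amplitude in short frames of audio.
    import numpy as np

    SAMPLE_RATE = 44_100
    CARRIER_HZ = 18_500       # must match the (hypothetical) encoder carrier
    FRAME_SAMPLES = 176       # ~250 visual frames per second at 44.1 kHz

    def decode_frames(audio: np.ndarray) -> np.ndarray:
        """Return one brightness estimate per frame, proportional to the
        encoded envelope (scaled by the encoder's carrier level)."""
        n = np.arange(FRAME_SAMPLES)
        cos_ref = np.cos(2 * np.pi * CARRIER_HZ * n / SAMPLE_RATE)
        sin_ref = np.sin(2 * np.pi * CARRIER_HZ * n / SAMPLE_RATE)
        usable = len(audio) // FRAME_SAMPLES * FRAME_SAMPLES
        frames = audio[:usable].reshape(-1, FRAME_SAMPLES)
        i = frames @ cos_ref                  # in-phase correlation
        q = frames @ sin_ref                  # quadrature correlation
        return (2.0 / FRAME_SAMPLES) * np.sqrt(i * i + q * q)

    # brightness = decode_frames(encoded)     # `encoded` from the sketch above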

Exquisitely Fast

Achieve visual frame rates of up to 250 frames per second, tightly synchronized with the underlying audio. Precise, high-framerate visuals are critical for applications like brainwave entrainment, Virtual Reality, and animated, moving light. Light modulated at these frame rates can reveal exquisite closed- and open-eye visuals, producing bursts of fractal patterns across the viewer's visual field via the Ganzfeld effect.
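For a sense of the timing budget, assuming a standard 44.1 kHz audio stream, 250 visual frames per second leaves roughly 176 audio samples (4 ms) per frame:

    sample_rate = 44_100                            # audio samples per second
    frame_rate = 250                                # visual frames per second
    samples_per_frame = sample_rate / frame_rate    # 176.4 samples per frame
    frame_period_ms = 1_000 / frame_rate            # 4.0 ms per visual frame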

Adding a New Dimension to Audio

Since the color information is encoded into the audio signal, anything that you can do with audio, you can also do with light. Fade, pan, modulate, or animate light along with sound. Distribute your content as standard audio files such as .wav or .mp3, or create a live multi-sensory broadcast using established audio streaming technology like Icecast.
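Because the result is just audio, writing it out for distribution takes nothing more than a standard audio file writer. Here is a minimal sketch using Python's built-in wave module; the file name and helper are hypothetical.

    # Write an encoded signal out as an ordinary 16-bit mono .wav file,
    # playable on any audio device or music player.
    import wave
    import numpy as np

    def write_wav(path: str, audio: np.ndarray, sample_rate: int = 44_100) -> None:
        pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
        with wave.open(path, "wb") as wf:
            wf.setnchannels(1)                # mono
            wf.setsampwidth(2)                # 2 bytes = 16-bit samples
            wf.setframerate(sample_rate)
            wf.writeframes(pcm.tobytes())

    # write_wav("session.wav", encoded)       # `encoded` from the encoder sketch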


Our partners

Companies we are collaborating with to bring our vision to life.

Interested in Licensing or Beta Testing?