Powering the future of multi-sensory experiences
The LumaSonic Engine is a powerful suite of tools with a familiar workflow for authoring and distributing the next generation of immersive, multi-sensory content. Whether you are creating relaxing inner journeys, precise brainwave entrainment sessions, high-framerate audiovisual art, or exploring the deep somatic benefits of adding vibration to light and sound, the LumaSonic Engine was designed for you.
Composing experiences with multiple senses can feel intimidating and cumbersome. Combining and synchronizing different technologies is possible, but rarely straightforward. This can impede the creative process and complicate distribution. And yet, the benefits of well-crafted multi-sensory experiences are profound.
With a unified workflow and accessible encoding and decoding technology, the LumaSonic Engine lets you spend more time in creative flow and less time troubleshooting and configuring.
The LumaSonic Engine is designed around industry-standard audio technology, building on top of the audio skills, software, and hardware that you already know and love. Color control information is invisibly encoded into digital and analog audio signals, and efficiently decoded at playback by the LumaSonic Decoder.
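To give a feel for how color data can ride invisibly inside audio: one long-established approach (used by formats like AudioStrobe) is to amplitude-modulate an inaudible high-frequency carrier with the brightness signal and mix it into the audible program. The sketch below illustrates only that general principle; the carrier frequency, mix level, and function names are illustrative assumptions, not LumaSonic's actual encoding scheme.

```python
import math

SAMPLE_RATE = 48_000   # samples per second (a common studio rate)
CARRIER_HZ = 19_200    # illustrative near-ultrasonic carrier; real carriers may differ

def encode_brightness(audio, brightness):
    """Mix a brightness-modulated carrier into an audio buffer (hypothetical helper).

    `audio` holds samples in [-1, 1]; `brightness` holds per-sample
    light levels in [0, 1]. Returns the combined signal.
    """
    out = []
    for n, (a, b) in enumerate(zip(audio, brightness)):
        carrier = math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
        out.append(a + 0.05 * b * carrier)  # keep the carrier quiet in the mix
    return out

# One second of silence with brightness ramping from dark to full:
n = SAMPLE_RATE
signal = encode_brightness([0.0] * n, [i / n for i in range(n)])
```

Because the carrier sits above the range most listeners can hear, the same file remains an ordinary audio signal when no decoder is present.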
Choose your favorite Digital Audio Workstation and use our VST3 or Audio Units plug-ins to start creating multi-sensory content quickly on macOS and Windows.
The decoder is easy to port to nearly any device. Create your content once, and run it on the hardware that best fits your needs.
The LumaSonic Encoder and Decoder are lightweight, cross-platform software modules that can run on anything from a $5 microcontroller to a Raspberry Pi, as well as all major operating systems: macOS, Windows, Linux, iOS, and Android, including Virtual Reality devices like the Meta Quest and Meta Quest 2.
Audio-visual stimulation (AVS) devices that support AudioStrobe or SpectraStrobe encoding, such as the MindPlace Kasina, are also compatible.
No decoder? No problem: your composition still plays as a normal audio signal on any audio device or music player.
Achieve visual frame rates up to 250 frames per second, tightly synchronized with the underlying audio. Precise high-framerate visuals are critical for applications like brainwave entrainment, Virtual Reality, or moving animated light.
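A quick back-of-the-envelope calculation shows why frame timing can stay sample-accurate: at a common 48 kHz sample rate (an assumption for illustration), a 250 fps visual frame spans exactly 192 audio samples, so each frame boundary lands precisely on a sample.

```python
SAMPLE_RATE = 48_000   # Hz, illustrative studio sample rate
FRAME_RATE = 250       # visual frames per second

samples_per_frame = SAMPLE_RATE // FRAME_RATE  # 192 samples per visual frame
frame_duration_ms = 1000 / FRAME_RATE          # 4.0 ms per frame
```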
Light modulated at these frame rates can reveal exquisite closed- and open-eye visuals, producing bursts of fractal patterns over the viewer's visual field via the Ganzfeld effect.
Since the color information is encoded into the audio signal, anything that you can do with audio, you can also do with light. Fade, pan, modulate, or animate light along with sound.
Distribute your content as standard audio files such as .wav or .mp3, or create a live multi-sensory broadcast using established audio streaming technology like Icecast.
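Because the end product is ordinary audio, any tool that writes standard audio files can package a composition. As an illustration using only Python's standard library (the sample rate and file name are assumptions), a mono signal can be saved as a 16-bit .wav:

```python
import struct
import wave

SAMPLE_RATE = 48_000  # Hz, illustrative

def write_wav(path, samples):
    """Write a mono float signal in [-1, 1] to a standard 16-bit PCM .wav file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

write_wav("composition.wav", [0.0] * SAMPLE_RATE)  # one second of silence
```

The resulting file plays in any music player, and the same bytes can feed an mp3 encoder or an Icecast stream.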
Companies we are collaborating with to bring our vision to life.