Generative AI Meets Augmented Reality: Snap’s Real-Time Image Rendering

Snap Inc., the company behind the popular Snapchat app, unveiled a preview of its new real-time image generation model at the Augmented World Expo on Tuesday. Powered by generative AI, the technology can create vivid augmented reality (AR) experiences directly on users’ smartphones in real time.

During his keynote address, Snap co-founder and CTO Bobby Murphy demonstrated an early version of the on-device diffusion model. He highlighted its ability to render realistic AR frames guided by text prompts at impressive speeds. This breakthrough could revolutionize how AR content is imagined and created.

“While generative image models have been exciting, they need to be significantly faster for impactful augmented reality experiences,” Murphy explained. “Our teams have been accelerating machine learning models to achieve real-time performance on mobile devices.”

Snapchat users can expect to see the first AR Lenses utilizing this generative model in the coming months. By the end of 2024, Snap plans to roll out the technology to its creator community through the new Lens Studio 5.0.

The updated studio includes AI tools that can generate highly realistic face effects, stylized transformations of faces and environments, and even 3D assets based on text or image prompts. Creators will also gain access to an AI assistant to answer AR development questions.

Snap’s real-time generative AI model represents a significant leap for mobile AR experiences. As the technology evolves, it could open up new creative possibilities and reshape how we interact with the digitally enhanced world around us.
