Build immersive worlds with your voice: Generative AI on ENGAGE XR

If you’ve ever wished you could step into a holodeck-style space and simply say what you want to see, that future has arrived on ENGAGE XR. Our new Generative AI lets you build custom environments by talking to intelligent, in-world agents—no complex menus or 3D software required. It’s a natural, conversational way to create immersive scenes for lessons, workshops, onboarding, product demos, and more.

What “voice-first creation” means in practice

Inside ENGAGE, you describe the setting, objects, and mood you need—your AI agents get to work and assemble the world around you in real time. It’s fast, flexible, and ideal when you want to iterate live with a class or team. All content generated through AI is automatically stored in the ENGAGE Cloud File Manager so you can reuse it across sessions or export it for other applications.

Create VR-ready assets instantly

From first launch, Generative AI supports the kinds of assets spatial creators use every day: VR-ready 3D models, AI-generated images, and 360° skyboxes for immersive environments on VR devices. That means less time hunting for assets and more time teaching, training, or storytelling inside your own world.

Meet AI Builder—the engine behind your AI characters

Generative AI runs alongside AI Builder, our toolkit for crafting embodied, AI-powered characters. AI Builder blends models like Meta’s LLaMA and OpenAI’s ChatGPT so your characters can act as subject-matter experts, role-play partners, or customer assistants. Now they can help orchestrate your scenes, too.

Recent upgrades deepen what those characters can do, including better integration with ENGAGE features so they can trigger experiences and even perform live web searches while you’re in session. You can also set custom 3D model lists for characters to fetch on command, and take advantage of early multilingual support (v0.1) to build language learning scenarios or interact in your preferred language.

Designed for education and enterprise outcomes

Educators can spin up role-play training, historical walk-throughs, or science labs on the fly. Enterprise teams can prototype simulations, stage onboarding, or run sales demos that adapt to live questions. Our Athena AI assistant showcases this direction, combining verbal interaction, image and 360° background generation, and ad-hoc role-play to support real workflows.

Built for the devices and workflows you already use

ENGAGE runs across major spatial computing devices as well as desktop and mobile, so your teams can contribute from anywhere. Assets you upload or generate can be organized securely in your cloud file manager and shared into sessions when you need them.

How teams typically get started

Start a session with your stakeholders, describe the environment you want, and watch it appear. Iterate by asking your AI character to swap props or add context images and skyboxes. When you’re finished, save everything to your cloud files for reuse, or export assets to bring into external tools. It’s creation that keeps pace with your ideas.

Frequently asked questions

What can I generate today?
VR-ready 3D models, images, and 360° skyboxes suitable for use inside virtual reality environments on ENGAGE.

Where do my generated assets live?
They’re stored in your ENGAGE Cloud File Manager so they’re easy to reuse across projects or export for other programs.

How do AI characters fit into world-building?
Create characters with AI Builder using LLaMA- and ChatGPT-class models; they can fetch approved 3D models, trigger in-world experiences, perform live web lookups, and converse in early multilingual mode (v0.1).

Which devices are supported?
ENGAGE supports PCs, Macs, iOS, Android, and a wide range of VR/MR headsets, making it simple to collaborate across your organization.

Ready to build your first voice-generated world? Request a demo and we’ll show you how Generative AI and AI Builder can accelerate your next class, training, or event on ENGAGE XR.