VRFocus speaks to Michael Ludden, Program Director and Senior Production Manager of IBM Watson, about the VR Speech Sandbox and the future of virtual reality (VR) and augmented reality (AR) interactions.
Ludden was working on developer-facing solutions at IBM Watson when he noticed the large number of consumer VR headsets with built-in microphones coming to market in 2016. He realized that IBM Watson’s platform was a perfect fit for the wave of VR content that would soon follow, and that nobody was making good use of those headset microphones. Ludden says, “I consider a fourth dimension of immersion, if you can speak to a world and have it be altered around you.” IBM Watson have a robust speech interaction system, and Ludden wants developers to make full use of it with the VR Speech Sandbox.
The VR Speech Sandbox allows users in VR to ‘speak’ to their headset. In the video below you can see Nina use the VR Speech Sandbox on its own as well as inside Ubisoft’s Star Trek: Bridge Crew videogame. She creates objects and interacts with them, experimenting to discover which objects different commands will produce. At the moment 100 objects can be created. Ludden explains that the VR Speech Sandbox showcases the system’s ability to handle modifications and variations in what you say, and then act on the command.
Ludden says that integrating the VR Speech Sandbox into an existing or new application is surprisingly, almost ridiculously, simple: “you can have a sandbox with physics in five minutes and put on a HTC headset and walk around in it.” IBM Watson have built the VR Speech Sandbox as an easy drag-and-drop solution, much in the way Unity itself works, so developers barely need to touch scripts. The barrier to entry is so low, he says, that a voice interaction system can be added to an existing or new application within an hour.
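The article doesn’t include any code, but for readers curious about what sits underneath, the Sandbox chains two Watson cloud services: Speech to Text transcribes the player’s microphone audio, and the Conversation service maps the transcript to an intent the application can act on. The Sandbox itself ships as a Unity (C#) asset; the sketch below illustrates the same pipeline using IBM’s ibm-watson Python SDK instead, and the API key, service URLs, workspace ID and intent name are all placeholders.

```python
# Minimal sketch of the Watson pipeline behind the Sandbox:
# microphone audio -> Speech to Text -> Conversation intent -> in-app action.
# Credentials, URLs, workspace ID and intent names are placeholders;
# in practice each service has its own credentials.
from ibm_watson import SpeechToTextV1, AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')

stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url('https://api.us-south.speech-to-text.watson.cloud.ibm.com')

assistant = AssistantV1(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

def handle_voice_command(wav_path: str) -> None:
    # 1. Transcribe the recorded voice command.
    with open(wav_path, 'rb') as audio:
        stt_result = stt.recognize(audio=audio,
                                   content_type='audio/wav').get_result()
    transcript = stt_result['results'][0]['alternatives'][0]['transcript']

    # 2. Classify the transcript; the Conversation workspace is what
    #    absorbs phrasing variations ("make a ball" / "create a sphere").
    reply = assistant.message(workspace_id='YOUR_WORKSPACE_ID',
                              input={'text': transcript}).get_result()
    intent = reply['intents'][0]['intent']

    # 3. Act on the recognised intent; in the Sandbox this is where an
    #    object would be spawned into the scene.
    if intent == 'create_object':
        print('spawning object for:', transcript)
    else:
        print('unhandled intent:', intent)
```

In the actual Unity asset these service calls are wrapped in drag-and-drop components, which is what lets developers wire up voice interaction without writing much script.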
At the moment the VR Speech Sandbox has a Unity SDK, but the team are also working on support for Unreal Engine and other game engines used to build VR and AR content. The VR Speech Sandbox has ‘VR’ in its name but is also “perfectly usable for augmented reality use cases”, Ludden says. He describes a few examples, such as starting or stopping a demo, getting help when a user is stuck in a puzzle game and, most obviously, eliminating our dependence on rifling through menus.
“There’s 7 million self-identified AR and VR developers as of 2017, last year it was around 100,000. It’s exploding, it’s only going to get bigger from here and we want to be right there at the cutting edge, at the bleeding edge serving this community from the very start”, Ludden says.
Ludden believes that AR and VR are one platform: “in the future going forward it’s just limited by form factor at the moment but once we get full six degrees of freedom VR on a mobile headset that’s reasonably glasses sized – that’ll start the merge with AR and VR will simply be full screen.” It looks like IBM Watson are also working on non-voice interactions for AR. Ludden cannot name any examples, but IBM Watson will be making announcements between the end of this year and the beginning of next. In the video below he gives more tips and pointers on what developers should look to build in VR and AR.
You can find a link to the VR Speech Sandbox here, and VRFocus will keep you up to date on further developments.
via Mint VR