There’s a scene early in Stanley Kubrick’s sci-fi masterpiece, 2001: A Space Odyssey, where an ape discovers it can use a bone as a weapon (a tool) to conquer a competing group of primates. After the victory, the ape tosses the bone into the sky, where we watch it spin in slow motion — until the film cuts to a spaceship (another tool) floating in space millions of years in the future.
It’s one of the most iconic transitions (or “match cuts,” in filmmaking parlance) in the history of cinema. It’s also a commentary on the nature of tools — how much they have evolved, along with our civilization. The tools of one era don’t necessarily serve the needs of another era. Even the best bone will never transport humans to distant planets. Tools enable us to do remarkable things, but each has its limits. New, disruptive tools continue to emerge to meet our ever-evolving needs. Think gunpowder, the printing press, the internet.
It’s the same with the tools used by developers. Consider the tools designed for video game software development. There was a time when you literally had to build everything yourself. The most valuable asset for many early video game companies was their internally developed tools and technology. But then, in 2005, a startup called Unity disrupted the video game landscape. Unity transformed game development with an affordable, approachable software development environment, which continues to dominate the modern video game market to this day.
Because of Unity, 3D software development is more accessible than ever before. Video game developers have game engines and geometric modeling tools freely available to them, along with mountains of documentation and community support. It’s great news for these developers, and you might think it equally good news for augmented reality (AR) and virtual reality (VR) platforms.
But here’s where it gets tricky. AR/VR are radically different technologies, and we’re only just beginning to get a glimpse into their revolutionary potential. Fully realizing this potential will require radically different tools. Paradoxically, the advances in video game development tools may end up holding back the full promise of AR/VR technologies.
Why? Because so many developers are already using these game development workflows that AR and VR device platforms must support them in order to maximize the content available in their app stores. Supporting only these tools has an unintended consequence: it ends up excluding a very large population of creative people, including 2D designers — the very people who could discover and create uniquely valuable use cases for AR and VR. Limiting accessibility for this wider group of creators could mean a long, slow haul toward any meaningful market adoption. Worse, it could pave a fast track to mediocrity and obscurity.
The (Unfulfilled) Promise of AR/VR
We’ve been hearing about the massive potential of the AR/VR and 3D computing market for years. There is no shortage of imaginative ideas that show the unique strengths of these new technologies. Digi-Capital estimates that AR revenue could reach $85 billion to $90 billion and VR revenue $10 billion to $15 billion by 2022, as the technology moves from gaming into consumer products, eCommerce, financial services, automotive, real estate, travel, and other business markets.
So what is keeping these compelling use cases from coming to market? The main reason is that the individuals most capable of conceiving unique use cases find current development tools inaccessible.
I saw this firsthand as an early employee of Magic Leap, where I spent a lot of time thinking about, and working on, the future of immersive tech. I observed hundreds of smart, creative people working on this new future of spatial and immersive computing. But only a small percentage of them — those with engineering aptitude — actually had direct access to iterate with the technology.
In a recent poll of professional designers and developers my company conducted, nearly half reported that authoring and creating in 3D was their biggest challenge. Much of this difficulty is rooted in designers’ unfamiliarity with the tools currently available, as well as the fact that many of the tools designers are using aren’t made for AR/VR design.
When creative individuals are unable to work directly with the technology or lack a full understanding of it, they make assumptions and dream up ideas that cannot be accomplished with existing tools. At the other end of the spectrum are those who play it safe with uninspired ideas that don’t take full advantage of all the technical capabilities. There’s a delicate balance between brainstorming without limits and carefully considering what’s practical. The best way to achieve this balance is iterating with the technology to fully understand all of the constraints.
Built-in Biases
How did we get to this point? In lowering the barriers to entry for developing video games, Unity enabled small teams to create innovative and experimental experiences, and to think in interactive 3D. Logically, one would expect companies embracing new technologies, like AR and VR, to integrate these more accessible tools into their own workflows. After all, these companies lack any real alternative, since gaming engines have become the foundation for creating all sorts of non-gaming applications and experiences.
On one hand, this state of affairs is a testament to the flexibility of these gaming technologies. But on the other, this flexibility obscures the fact that these tools, no matter how feature rich, have limitations. And those limitations ultimately shape, and restrict, what can be built. Like all tools — from bones to gunpowder to spaceships — the tools designed for video game software development have built-in biases toward the tasks for which they were originally designed.
Even more concerning, the newer tools focused on AR — Sumerian, Snap Lens Studio, AR Studio and others — all riff on the Unity-inspired production workflow in one way or another. As such, they are reinforcing many problematic assumptions.
A Start: Google and Apple Design Guidelines
About a year ago, Google and Apple announced their mobile AR platforms (ARCore and ARKit, respectively), piquing the interest of 2D mobile application developers who wanted to move into 3D. Accompanying ARCore is Google’s Augmented Reality Design Guidelines (GARDG), which serve as a good entry point for anyone new to AR.
Evaluating GARDG, though, we made several observations:
- GARDG excels when it encourages designers to build applications that focus on motion and environmental engagement, drawing attention to the critical role movement, specifically user movement, plays in AR.
- GARDG reminds designers of one of the most overlooked aspects of AR in our experience: end-user mobility and how this shapes interactions with immersive designs.
- Intertwined with its guidance on mobility, GARDG stresses awareness of the surrounding environment and insists that designs never sacrifice user safety. Making users back up blindly, or encouraging them to move forward while the device is pointed in a different direction, is strongly discouraged.
- GARDG struggles to keep pace with the demands of designers and developers, revealing how both Google and Apple may well trail behind users in understanding the potential of their platforms for leveraging 3D to accomplish complex tasks or communicate complex experiences.
- GARDG is missing multi-scene use cases, which automatically excludes many modes of interactivity or conditional behavior that leads to transitions, complex or more interesting changes of state, personalization, and ultimately a deeper, more immersive experience.
- There is no discussion of animations (a common topic in our interviews with designers), either triggered or timed, nor of the notion of a shared or collaborative environment. (See the sketch after this list for what a triggered animation involves today.)
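To make concrete what a “triggered” or “timed” animation means in mobile AR today, here is a minimal sketch using Apple’s ARKit and SceneKit: tapping a detected surface places an object at the tap point and runs a short, timed animation on it. The class name and overall structure are illustrative assumptions, not something prescribed by ARKit or by any design guideline.

```swift
import UIKit
import ARKit
import SceneKit

// Minimal, illustrative sketch: tap a detected horizontal surface to place a
// cube and trigger a timed spin. Everything outside standard ARKit/SceneKit
// APIs (the class name, sizes, durations) is an arbitrary choice.
class TriggeredAnimationViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Track the world and detect horizontal planes.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        sceneView.session.run(config)

        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Raycast from the tap point against detected plane geometry.
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let hit = sceneView.session.raycast(query).first else { return }

        // Place a small cube where the user tapped...
        let cube = SCNNode(geometry: SCNBox(width: 0.05, height: 0.05,
                                            length: 0.05, chamferRadius: 0))
        cube.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(cube)

        // ...and trigger a timed animation: one full spin over two seconds.
        cube.runAction(SCNAction.rotateBy(x: 0, y: .pi * 2, z: 0, duration: 2.0))
    }
}
```

Even this tiny example shows how much of the experience (what triggers the animation, how long it runs, what it changes) lives in imperative code rather than in anything a designer could author directly.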
GARDG is a good starting point, but it takes too conservative an approach. That conservatism makes Google and Apple both an obstacle and an opportunity. The workflows currently available for their platforms add friction, cost time and money, and ultimately limit what projects can attempt. Designers are already devising clever workarounds, and companies are starting to build tools that fill the gaps.
Thinking Beyond the Screen
Developing for AR/VR is not merely an iteration on video game development; they are radically different technologies. The future experiences and applications enabled by AR/VR are contextually aware and responsive to our intent. As a result, when we consider the future of computing for AR and VR, we must integrate it more deeply within our physical world — beyond the confines of the screen, as GARDG suggests.
Achieving this level of natural interaction in software development requires combining layers of intelligence across multiple integrated, complex systems. We need methods, reusable patterns and constructs for controlling artificial intelligence (AI) and integrating services that can be managed at a higher level of abstraction than code.
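As a thought experiment, here is a purely hypothetical sketch of what such a higher-level construct might look like: an immersive scene and its behavior expressed as plain data (triggers, actions, service bindings) that a visual tool could author and a runtime could interpret. None of these types correspond to an existing framework; they are assumptions for illustration only.

```swift
import Foundation

// Hypothetical: scene behavior as data rather than imperative code, so a
// visual editor could create, inspect, and modify it without programming.
struct SceneDescription: Codable {
    var entities: [Entity]

    struct Entity: Codable {
        var name: String
        var model: String            // e.g. a 3D asset such as "robot.usdz"
        var behaviors: [Behavior]
    }

    struct Behavior: Codable {
        var trigger: Trigger         // what starts the behavior
        var action: Action           // what happens when it fires
    }

    enum Trigger: String, Codable {
        case onTap, onProximity, onVoiceCommand, onSceneLoad
    }

    enum Action: Codable {
        case animate(clip: String, duration: Double)
        case callService(url: String)      // e.g. an AI or recommendation service
        case transition(toScene: String)   // multi-scene support
    }
}

// Because the description is plain data, it can be serialized, versioned,
// shared between collaborators, and interpreted by a runtime on any device.
let scene = SceneDescription(entities: [
    .init(name: "greeter", model: "robot.usdz", behaviors: [
        .init(trigger: .onTap,
              action: .animate(clip: "wave", duration: 1.5)),
        .init(trigger: .onVoiceCommand,
              action: .callService(url: "https://example.com/intent"))
    ])
])
```

The point is not this particular schema, but the shift it represents: once behavior is described rather than coded, it becomes something a much wider group of creators can iterate on.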
If we are thinking beyond the screen for computing, why aren’t we thinking beyond the code for development?
This is not only possible, but necessary, if we are to realize the true potential of new digital reality platforms. Tools must be simultaneously more accessible and more powerful than the traditional software-based workflows that dominate today.
We must rethink how the new computing world will be conceived and built. Digital reality technologies like AR and VR, with 3D inputs and displays, could lead to visual workflows that are far more accessible, attracting a wider and more diverse user base. As an added bonus, they would also better support real-time collaboration. In other words: more power to more people, and more opportunity to make compelling AR/VR use cases a reality.
via Mint VR