Computer revolutions don’t happen that often. First there was the mouse (Mac), then multi-touch (iPhone), and now, according to Apple, we are ready for the next computer revolution: space, or rather, spatial computing. Everything you want to know about it, and whether it is really as revolutionary as Apple wants you to believe, can be found below. Apple has deliberately chosen the term spatial computing because it is broader than AR and VR. Apple has also deliberately avoided the term ‘metaverse’, which makes sense: Apple does not seem to have much confidence in it, and it is too closely associated with competitor Meta.
- Is spatial computing new?
- What is spatial computing?
- Apple and spatial computing
- This is how spatial computing benefits you (according to Apple)
- Can Apple make spatial computing a success?
Is spatial computing new?
Spatial computing seems like yet another marketing term from Apple, but it has been around for much longer. It was defined in 2003 by Simon Greenwold in a thesis called Spatial Computing. He describes it as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces”. With the rise of VR, AR and mixed reality, companies such as Microsoft have already started using the term for digital environments in a 3D space. On June 5, Apple announced its own spatial computing platform, based on the visionOS operating system. The first device that runs it is the Vision Pro headset, which also supports spatial audio.
What is spatial computing?
Spatial computing is thus a broad term that encompasses various technologies and forms of human-computer interaction: the computer understands the physical world and makes digital manipulation of physical space possible.
Spatial computing includes:
- 3D Vision: Systems can perceive and understand three-dimensional space. The system can detect depth and can recognize objects and surfaces (and track them as they move).
- Real-time interaction: Spatial computing systems respond in real time; interaction with the user and the environment takes place without noticeable delay.
- Contextual understanding: The systems understand the context in which they operate. When the user looks at a specific object or performs a certain gesture, the system must understand what this means (see the sketch after this list).
- Autonomy: Finally, spatial computing systems must often also be able to operate autonomously: to make decisions and perform tasks without direct input from the user. This is less the case with Apple’s Vision Pro.
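To make the “contextual understanding” point a bit more concrete, here is a minimal, hypothetical sketch of how this looks on visionOS: the system works out which 3D object you are looking at when you pinch and delivers the gesture to that object. The orange sphere and the printed message are illustrative assumptions, not Apple sample code.

```swift
import SwiftUI
import RealityKit

// Hypothetical visionOS sketch: a look-and-pinch gesture is resolved to the
// specific entity the user was looking at (contextual understanding).
struct TapTargetView: View {
    var body: some View {
        RealityView { content in
            // A simple sphere stands in for any digital object in the room.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            sphere.name = "sphere"
            sphere.components.set(InputTargetComponent())    // let the sphere receive input
            sphere.generateCollisionShapes(recursive: true)  // needed so gaze/pinch can hit-test it
            content.add(sphere)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()   // gaze selects the target, a pinch confirms it
                .onEnded { value in
                    print("User pinched while looking at \(value.entity.name)")
                }
        )
    }
}
```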
Various technologies fall under the umbrella of spatial computing. Some frequently mentioned ones are:
- Augmented Reality (AR): AR overlays digital information on the physical world, so that reality is not replaced but supplemented with extras (see the code sketch after this list).
- Virtual Reality (VR): VR ensures that you can fully immerse yourself in a digital world. Interaction with that world is also possible.
- Mixed Reality (MR): MR combines AR and VR by superimposing digital objects on the real world in a way that allows them to interact with the physical environment.
- Robotics and drones: Autonomous vehicles such as drones and robots require a high degree of spatial awareness to navigate in the real world. This doesn’t have much to do with Vision Pro.
- Internet of Things (IoT): Some IoT devices use spatial computing to understand the environment and enable interaction. However, these IoT devices are often not that fast.
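To illustrate the AR item above: on iPhone and iPad this kind of overlay is typically built with Apple’s ARKit and RealityKit. Below is a minimal, hypothetical sketch (an iOS view controller, with camera permission assumed to be configured) that pins a digital cube to a real table top; it only shows the idea of supplementing reality, not production code.

```swift
import UIKit
import ARKit
import RealityKit

// Hypothetical iOS sketch: overlay a virtual cube on a real horizontal surface,
// so reality is supplemented rather than replaced.
final class AROverlayViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // Let ARKit track the real world and look for horizontal planes (tables, floors).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.session.run(configuration)

        // Pin a small digital cube to the first suitable plane ARKit finds.
        let anchor = AnchorEntity(.plane(.horizontal,
                                         classification: .any,
                                         minimumBounds: [0.2, 0.2]))
        let cube = ModelEntity(mesh: .generateBox(size: 0.1),
                               materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])
        anchor.addChild(cube)
        arView.scene.addAnchor(anchor)
    }
}
```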
Apple is not the only one working on spatial computing. Google, Microsoft and Facebook (now Meta) have also invested heavily in it, especially in AR and VR. Meta has the Quest headsets (formerly Oculus Quest), Microsoft has developed the HoloLens and Google has ARCore, the counterpart of Apple’s ARKit.
By comparison, the latest Meta Quest 3 headset costs only 500 euros.
Apple and spatial computing
In Apple’s vision, spatial computing has three fundamental elements: windows, volumes, and spaces. You can use these to build an inviting, immersive experience. With tools from Apple (particularly Xcode and the new Reality Composer Pro tool), you can get started creating apps yourself.
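Roughly speaking, those three building blocks map onto scene types in a visionOS SwiftUI app: a WindowGroup for a window, a volumetric WindowGroup for a volume, and an ImmersiveSpace for a full space. The sketch below is a minimal, hypothetical example; the “Globe” and “Orbit” identifiers and the “Earth” model are illustrative assumptions.

```swift
import SwiftUI
import RealityKit

// Hypothetical sketch of the three visionOS building blocks Apple describes:
// a window (2D content), a volume (bounded 3D content) and a space (immersive).
@main
struct SpatialSketchApp: App {
    var body: some Scene {
        // Window: a familiar 2D SwiftUI scene placed in the user's room.
        WindowGroup {
            Text("Hello, spatial computing")
        }

        // Volume: a window style that gives content a bounded 3D box.
        WindowGroup(id: "Globe") {
            Model3D(named: "Earth") // assumes an "Earth" model in the app bundle
        }
        .windowStyle(.volumetric)

        // Space: a fully immersive scene that can surround the user.
        ImmersiveSpace(id: "Orbit") {
            RealityView { content in
                // Build or load RealityKit entities here.
            }
        }
    }
}
```

From the plain window, the volume and the immersive space can then be opened with SwiftUI’s openWindow and openImmersiveSpace environment actions.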
Apple CEO Tim Cook presents it as a new revolution in computing. “Just as the Mac introduced us to personal computing and the iPhone introduced us to mobile computing, Apple Vision Pro introduces us to spatial computing.” Cook thus gives the impression that Apple has invented something completely new, different from existing headsets, and that it will be the pioneer of this new revolution, even though spatial computing has been around for much longer, as you read above.
“Vision Pro builds on decades of Apple innovation and features a revolutionary new input system and thousands of breakthrough innovations. It unlocks incredible experiences for our users and exciting new possibilities for our developers.”
Apple did NOT use the following terms in the announcement:
- “Virtual Reality” (VR)
- “Mixed Reality” (MR)
- “Extended Reality” (XR)
- “Metaverse” (a term Meta mainly uses)
- “Artificial Intelligence” (a term that was used extremely often during Google I/O)
Instead, it used precisely these terms:
- “First spatial computer” (Vision Pro)
- “Spatial computing”
- “Augmented reality” (AR)
- “Machine learning” (ML)
- “Immersive environment”
So it is clear that Apple does not call the Vision Pro an AR/VR/XR headset or a mixed reality headset, but a spatial computer. The word “spatial” was even mentioned 51 times in the press release, even though what it describes comes very close to VR and XR.
This is how spatial computing benefits you (according to Apple)
Apple is mainly trying to make clear what the benefits are for users and how spatial computing can enrich your life. The focus is on four elements:
- It replaces an external screen
- It offers impressive entertainment
- It offers new ways to collaborate and connect with other people
- It gives you the opportunity to capture and relive memories
Can Apple make spatial computing a success?
Apple is relatively late with such a headset, but has an advantage in other areas. Here are five assets that Apple can take advantage of.
#1 Existing ecosystem
A big advantage for Apple is that there is already an impressive range of apps. Thanks to the close similarities between visionOS, iOS, iPadOS and macOS, you can try hundreds of apps on day one. These are apps that you already know and that you can now use in a new, spatial dimension.
This applies, for example, to FaceTime and Zoom: being able to look around at your conversation partners feels much more natural. From now on you can watch films and sports matches on an extra-large screen and play games as if you were in the middle of the action. Apple can also enrich all kinds of existing services with spatial computing, such as Fitness+ for doing workouts or Apple TV+ for watching series.
#2 Attention to the user experience
Apple has always paid close attention to the user experience. To prevent you from walking around like a zombie in your own world, others can see your eyes and facial expression through the front of the headset. You can also see your surroundings through the headset, and if someone walks towards you, they gradually become visible, so that you are not startled.
#3 Attention to privacy
As we have come to expect from Apple, a lot of attention is paid to privacy and security. With Optic ID you can unlock your headset, authorize transactions and log in to services. Apps also cannot see into your living room. Google Glass users were given the unflattering nickname “glassholes” because people disliked being constantly filmed. It remains to be seen whether it will become normal to see a colleague walking around with a headset all day; it is somewhat reminiscent of people who wear headphones all day long to avoid talking to others. But at least your personal data is safe, because it stays local on the device.
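For developers, Optic ID slots into the same LocalAuthentication pattern already used for Face ID and Touch ID. Below is a minimal sketch of that general pattern, assuming a hypothetical “private notes” screen that should only open after a successful biometric check.

```swift
import LocalAuthentication

// Minimal sketch: gate a sensitive action behind the device's biometric check.
// Assumption: on Vision Pro, Optic ID is exposed through the same
// LocalAuthentication flow that Face ID and Touch ID use on other devices.
func unlockPrivateNotes(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Bail out if no biometric method is available or enrolled.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        completion(false)
        return
    }

    // Ask the system to authenticate the user; the reason string is shown in the prompt.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your private notes") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```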
#4 Attention to design
Everything about the Vision Pro signals that it is an expensive product, made of high-quality materials. The close-ups of the headband, Digital Crown and other parts immediately show that this is a premium gadget of a completely different order than the plastic headsets from other manufacturers. This justifies the price somewhat. Apple also tries to justify the price by making clear that the Vision Pro can replace your widescreen TV and your high-end camera, just as it did previously with the iPhone and iPod.
#5 Attention to developers
The announcement at WWDC 2023 came at a perfect moment: not only could Apple show the headset, there were also plenty of sessions on spatial design, spatial input and the like. If you want to know more about visionOS, Apple also has an explanation page with references to all relevant sessions. Modifying existing apps will be relatively easy for developers who have been working in the iOS ecosystem for some time, as the sketch below suggests.
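To give an idea of what such a modification can look like: an existing SwiftUI view from an iOS app can usually be reused as-is, with small visionOS-specific touches behind a compile-time check. The view below and the glass background it adds are illustrative assumptions, not a recipe from Apple.

```swift
import SwiftUI

// Hypothetical example of reusing an existing iOS SwiftUI view on visionOS,
// with one small platform-specific adjustment.
struct NowPlayingView: View {
    var body: some View {
        VStack(spacing: 12) {
            Image(systemName: "music.note")
                .font(.largeTitle)
            Text("Now Playing")
                .font(.headline)
        }
        .padding()
        #if os(visionOS)
        // On the headset, give the panel the translucent "glass" look
        // that suits content floating in the user's room.
        .glassBackgroundEffect()
        #endif
    }
}
```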