On Immersal’s AR Cloud SDK and localization in AR

The AR Cloud

After Super Ventures’ Ori Inbar uttered the term “AR Cloud” in September 2017, the AR industry hasn’t been the same. There are now committees and foundations defining and refining its actual meaning, but the bottom line is this: it’s a 3D point cloud representation of the real world. Several startups (e.g., 6D.ai, Ubiquity6, Blue Vision, Scape, and not least us at Immersal) are now tackling this problem, working on real-time software that lets mobile devices accurately learn their position and persistently display the same AR content across all devices and platforms. This is often accompanied by multi-user features, so that several users can interact with the same AR content in real time. For all this to work, the real-world space needs to be accurately mapped beforehand; the result of this mapping is the aforementioned point cloud, or the AR Cloud. For indoor spaces, GPS is not a solution, which is why the AR Cloud is needed.

Standards?

These days I’m hearing a lot of different terminology for the process of finding out a user’s location: localization, Visual Positioning System/Service (VPS), Indoor Visual Positioning, or even just “tracking”. Internally we talk about our SDK as a “mapping and localization SDK”, but since even one of our main investors, a Silicon Valley-based VC firm focused on AR/VR startups, thought we had radically pivoted into language localization (l10n), I think “visual positioning” is the better term. 🙂

Google is also using the VPS term for their upcoming technology. Even though they have basically photographed the whole world for Google Maps’ Street View, I think it’s still safe to say that building a global AR Cloud is a task no single company can tackle on its own. Maybe in the future there will be several AR Clouds where you can roam, or you’ll be able to select which cloud you want to use. This calls for standardization, and that’s why Immersal is also a proud member of the VR/AR Association’s AR Cloud committee.

Immersal AR Cloud SDK

Immersal’s AR Cloud SDK has an early-access waiting list here, and we hope to get the first closed beta out to developers in a few weeks. I thought I’d write about its advantages compared to some AR Cloud SDKs from other players in the mobile AR ecosystem, as well as what to expect and how to use it in your projects.

So, what makes our SDK stand out from the rest? First and foremost, the accuracy. In our ‘laboratory tests’ at the office we can localize the user’s device to within 1-3 cm, which is pretty crazy for mobile (or any!) AR; consider that ARKit/Core easily drift more than that when conditions (lighting, the number of feature points, etc.) aren’t perfect. Our approach actually DOES utilize ARKit/Core, but it extends them and can refine the pose every second, eliminating much of the drift caused by ARKit/Core’s SLAM.

Another advantage is our low-level massively multi-user (MMU) server, which allows up to 1,000 simultaneous users to share the same AR space, where they can interact with AR objects and each other (we’re not releasing the MMU technology yet; it will be included with the SDK at a later date).

And last but not least, our approach is ‘battery friendly’: it doesn’t consume any more battery than a typical ‘vanilla’ ARKit/Core session. It does (currently) require a constant network connection, but it uses minimal bandwidth, so it works just fine over 3G/4G. And your apps can run at 60 fps (iOS) or 30 fps (Android), because the SDK doesn’t eat up all your precious CPU and GPU time.
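To make the drift-correction idea a bit more concrete, here is a rough Unity C# sketch of how a visual positioning result could be applied on top of an ARKit/Core session. The class, method and parameter names are my own illustration, not the actual Immersal SDK API; the idea is simply that whenever a new localization result arrives, a content root is re-aligned so that content authored in map coordinates lines up with the live session.

using UnityEngine;

// Illustrative sketch only; these names are not the Immersal SDK API.
// Whenever a fresh localization result arrives (e.g. roughly once per second),
// re-align a content root so that objects authored in map (AR Cloud)
// coordinates appear in the right place in the ARKit/ARCore session.
public class MapAlignmentSketch : MonoBehaviour
{
    // Parent of all AR content that was authored in map coordinates.
    public Transform contentRoot;

    // sessionPos/sessionRot: the camera pose in session (ARKit/ARCore) space.
    // mapPos/mapRot: the same camera pose in map (AR Cloud) space,
    // as returned by visual positioning.
    public void OnLocalized(Vector3 sessionPos, Quaternion sessionRot,
                            Vector3 mapPos, Quaternion mapRot)
    {
        // Transform taking map-space coordinates into session space.
        Matrix4x4 mapToSession =
            Matrix4x4.TRS(sessionPos, sessionRot, Vector3.one) *
            Matrix4x4.TRS(mapPos, mapRot, Vector3.one).inverse;

        // Move the content root instead of the camera, so accumulated
        // SLAM drift gets corrected every time a new result comes in.
        contentRoot.SetPositionAndRotation(
            (Vector3)mapToSession.GetColumn(3),
            mapToSession.rotation);
    }
}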

Just plug it in!

Our SDK is 100% native C++ code (well, not quite, there’s some arm64 assembler involved where needed 🙂). The initial release will be a Unity plug-in, as we use Unity ourselves and do most of our internal demos with it. This also means we can push out native iOS/Android and Unreal Engine plug-ins shortly after. And although we’re targeting mobile AR initially, this could in theory be used with e.g. Varjo, Magic Leap or HoloLens headsets as well, if the SLAM provided by ARKit/Core were substituted with another implementation.
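For the curious, this is roughly what exposing a native C++ core to Unity scripts looks like in general. The library name and function signatures below are made up for illustration and are not the SDK’s actual interface; the real bindings will ship with the plug-in.

using System;
using System.Runtime.InteropServices;

// Illustrative only: a hypothetical P/Invoke binding layer between Unity C#
// and a native C++ mapping/localization core. Names are not the Immersal SDK.
public static class NativeLocalizerBindingsSketch
{
#if UNITY_IOS && !UNITY_EDITOR
    // On iOS, native plug-ins are statically linked into the app binary.
    const string Lib = "__Internal";
#else
    // On Android (and in the editor), the plug-in is a shared library.
    const string Lib = "examplelocalizer";
#endif

    // Feed one camera frame (pixel buffer + intrinsics) to the native core.
    [DllImport(Lib)]
    public static extern int example_process_frame(
        IntPtr pixels, int width, int height,
        float fx, float fy, float cx, float cy);

    // Ask the native core for the latest pose in map coordinates;
    // returns non-zero on success.
    [DllImport(Lib)]
    public static extern int example_get_pose(
        out float px, out float py, out float pz,
        out float qx, out float qy, out float qz, out float qw);
}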

We want our SDK to be a nice and easy add-on that can be dropped into any existing ARKit/Core project to provide the extra functionality a full-blown AR experience needs: visual positioning and anchoring to the real world, persistence, multi-user features and cross-platform support. Because of this, we’ve built all the example Unity scenes on Unity’s AR Foundation (still in preview, but installable through Unity’s Package Manager in Unity 2018.1 and newer). The AR Foundation API makes it possible to build AR apps for both ARKit and ARCore from the same codebase, which is just nice. We also don’t want to depend on third-party plug-ins, so the example scenes bundled with our SDK compile after installing AR Foundation alone; no other packages or plug-ins are needed. The initial release will use Unity Networking to demonstrate the multi-user functionality, but at a later stage it will be replaced by our proprietary technology.
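As a rough sketch of what the AR Foundation side of an example scene boils down to, the snippet below assembles the minimal components from script. In practice these are set up in the Unity editor, and the exact component set varies a bit between AR Foundation preview versions, so treat this purely as orientation rather than as our example scenes’ actual code.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal AR Foundation structure: a session driver plus a session origin
// with an AR camera. Normally this is set up in the editor; the exact
// components differ slightly between AR Foundation versions.
public class MinimalArFoundationSceneSketch : MonoBehaviour
{
    void Awake()
    {
        // Drives the underlying platform session (ARKit on iOS, ARCore on Android).
        var sessionGo = new GameObject("AR Session");
        sessionGo.AddComponent<ARSession>();

        // Maps the device/session coordinate space into Unity world space.
        var originGo = new GameObject("AR Session Origin");
        var origin = originGo.AddComponent<ARSessionOrigin>();

        // The AR camera renders the device camera image behind the 3D content.
        var cameraGo = new GameObject("AR Camera");
        cameraGo.transform.SetParent(originGo.transform, false);
        var cam = cameraGo.AddComponent<Camera>();
        cameraGo.AddComponent<ARCameraBackground>();
        origin.camera = cam;
    }
}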

Mapping the world

In the first chapter I mentioned mapping. To build an AR Cloud, you first have to ‘map’, or 3D-scan, the environment to construct a point cloud representation of the real world. Our SDK comes with a mapping example scene, which you can use to map your surroundings in a couple of minutes with your iPhone or Android device. You can also select an ‘anchor’ point: a never-changing location with enough meaningful feature points. This can be used when completely re-mapping an existing space, so the AR objects that were placed there before stay where they were, in other words persistence. If you have a large space to map, you can also keep mapping the same space and grow the point cloud incrementally. With unlimited resources, you could map the whole world. 🙂 We’re actually planning some incredible crowdsourcing features around this: once enough users are running apps powered by our SDK, those end-users will silently contribute to the AR Cloud while using the apps and keep it up to date. If a threshold of users are getting zero poses or ‘hits’ in a known environment, the SDK will re-map those areas and automatically update the cloud.
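To illustrate the kind of threshold the crowdsourced re-mapping could be based on, here is a small, purely hypothetical C# sketch. The window size, failure ratio and the notion of an ‘area id’ are my own assumptions, not the SDK’s actual logic.

using System.Collections.Generic;

// Hypothetical sketch of a crowdsourced re-mapping trigger: track recent
// localization attempts per mapped area and flag the area for re-mapping
// when too many of them return no pose. Thresholds are illustrative only.
public class RemapHeuristicSketch
{
    const int WindowSize = 50;            // recent attempts tracked per area
    const float FailureThreshold = 0.6f;  // flag if more than 60% fail

    readonly Dictionary<string, Queue<bool>> attemptsByArea =
        new Dictionary<string, Queue<bool>>();

    // Record one localization attempt for an area; returns true when the
    // area should be queued for re-mapping.
    public bool RecordAttempt(string areaId, bool gotPose)
    {
        Queue<bool> window;
        if (!attemptsByArea.TryGetValue(areaId, out window))
        {
            window = new Queue<bool>();
            attemptsByArea[areaId] = window;
        }

        window.Enqueue(gotPose);
        if (window.Count > WindowSize)
            window.Dequeue();

        if (window.Count < WindowSize)
            return false; // not enough samples yet

        int failures = 0;
        foreach (bool success in window)
            if (!success) failures++;

        return failures > WindowSize * FailureThreshold;
    }
}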

What about occlusion?

Well, here’s one AR buzzword I haven’t mentioned yet. Based on their videos, it looks like 6D.ai are now heavily focused on tackling AR’s occlusion problem in real time. Our SDK supports occlusion for pre-mapped environments; if you want occlusion in your retail store, museum, airport or whatever large space, it can easily be calculated after mapping the space. For us, real-time occlusion is in the pipeline and will get there sooner or later, of course with some trade-offs (more CPU usage and battery drain). But for the time being, I think this pre-calculated approach works for most spaces.
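As an illustration of the pre-computed approach, the sketch below simply renders a mesh reconstructed offline from the mapped space with a depth-only material (for example a shader that writes depth with ColorMask 0), so real-world geometry hides the virtual objects behind it. The asset and field names are assumptions for illustration, not part of the SDK.

using UnityEngine;

// Sketch of static, pre-computed occlusion: render an occluder mesh built
// offline from the mapped point cloud with a material that writes depth but
// no color, so virtual objects behind real-world geometry get hidden.
public class StaticOccluderSketch : MonoBehaviour
{
    public Mesh occlusionMesh;         // mesh generated offline from the map
    public Material depthOnlyMaterial; // e.g. a shader using "ColorMask 0"

    void Start()
    {
        var filter = gameObject.AddComponent<MeshFilter>();
        filter.sharedMesh = occlusionMesh;

        var meshRenderer = gameObject.AddComponent<MeshRenderer>();
        meshRenderer.sharedMaterial = depthOnlyMaterial;

        // The occluder lives in map coordinates, so it should sit under the
        // same content root that gets re-aligned after each localization,
        // keeping it registered with the real space.
    }
}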

Pricing and licensing

We haven’t completely decided on our pricing model yet, but we’re offering the SDK for free during a 3-month trial period. After that you can choose between various monthly subscription options, with the price depending on whether you’re an indie developer or part of a bigger studio, how many simultaneous users you need to support, and so on. Please do contact us for more information.

Where can I see it?

Here are some videos of our tech in action:

If you have any questions, please do not hesitate to contact me.

Mikko is the EVP Engineering at Immersal. He's a tech geek and also does electronic music.
