Our services portfolio for augmented reality, mixed reality, and virtual reality
As a service provider in the areas of AR, MR, and VR, we offer the following services:
- Design and planning of custom solutions
- Application-oriented hardware selection
- Development with agile methods in the front end and back end
- Analysis of IT infrastructure and connection to existing systems (cloud, ERP, MES, DMS etc.)
- Training courses, workshops, and presentations on all relevant topics
- Support in the formulation of a business model
- Optimization of existing applications
- Creation and preparation of 3D content
The benefits for you
Benefit from our sound knowledge in the field of immersive technologies and our all-round expertise in IT. We’ll guide you through an agile development process, ensuring that these emerging technologies are integrated into your company with a value-creating business model. To do so, we use methods such as design thinking, Business Model Canvas, and Scrum. These methods allow us to find an efficient solution to each problem. With our experts from all specialist areas, we can support you in mastering any challenge.
Utilize the tools of the future today, and benefit from the advantages that augmented, mixed, and virtual reality can bring you:
- Optimization of work processes
- Increased productivity
- Acceleration of learning processes
- Improved customer experiences
- Shorter downtimes
- Minimization of errors
Not convinced by the benefits alone? Here you can read about specific use cases that these technologies open up.
Make AR, MR, and VR a firm part of your company. Contact us, and we’ll help you become a pioneer of digital transformation!
In addition, below you’ll find out about the following:
- What augmented, mixed, and virtual reality mean
- How AR, MR, and VR systems work
- Which hardware and software components are used
- Why starting to use these solutions pays off and what opportunities are on offer
- What a typical project process might look like
- What we can expect in the future
What are AR, MR, and VR?
Close your eyes and imagine being able to project everything that you want to see and know into your field of vision. Your natural environment merges with digital information to create a new world that can present you with what you’re looking for in a flash. This new horizon is the universe of mixed and augmented reality. Now let’s go one step further: Cut your ties to your natural environment and immerse yourself in a brand-new reality in which absolutely anything is possible – virtual reality.
For a long time, scenarios like this sounded like science fiction. Today, these technologies hold previously unimagined potential that can be tapped for the first time. They do more than just offer us new methods: They also allow us to enhance traditional processes, both by superimposing digital information over our real environment and by giving us access to a completely new, virtual world.
But what exactly lies behind the mixed, augmented, and virtual reality technologies?
Definition of AR, MR, and VR
At present, there are no generally valid definitions of AR, VR, and MR that are recognized by all involved parties. These terms arose along with the emerging technologies, and there tends to be significant overlap between them. Despite this, certain characteristics that largely differentiate the areas can be identified. A technology does not necessarily need to be uniquely assigned to a single type. Due to the shifting nature of the spectrum between the real and the virtual world, mixed forms can quickly emerge.
Essentially, the terms AR, MR, and VR give an indication of the extent to which the digital world is merged with the real world, and of how the user perceives this. The spectrum that we move through here starts with the “real” world and ends with the “virtual” world. The two end-points of this spectrum are merely ideals that are not represented by any of the technologies. This is because you do not need a technical interface to help you experience reality. Conversely, it is impossible to fully enter the virtual world and experience a total immersion in which humans become entirely digital.
As can be seen in the graphic, AR (augmented reality) is located fairly close to the real world. In fact, AR is a technology that merely enhances the physical world of the user with a virtual element. The enhancements might take the form of displayed information or newly generated virtual objects and structures.
VR (virtual reality) is at the other end of the scale, close to the virtual world. Unlike AR, VR denotes an environment created completely virtually. Users enter the digital world and are separated from the real world around them through the appropriation of their sight and hearing. So AR enhances the user’s reality whereas VR replaces it. This immersive experience is made possible by the use of VR headsets or VR head-mounted displays (HMDs).
Mixed reality describes the part of the spectrum where the AR limits of simple hand-held devices have been surpassed but the user is not yet immersed in the way that is typical of VR. MR has a significant overlap with AR but unites the physical and virtual worlds more intensely. As a rule, smart phones are no longer sufficient; holographic devices such as Microsoft’s HoloLens are required instead. Data glasses of this kind allow users to greatly enhance their real environment with digital tools and to interact intuitively with this new environment. A large field of vision and free hands mean that gesture-based control is possible, too. The MR part of the spectrum ends where digital immersion starts and the user is separated from the real world.
How augmented, mixed, and virtual reality work
AR, MR, and VR are developed and used for different purposes. Accordingly, interaction between the hardware, the human, and the environment differs across the systems. Below, we give a rough overview of how a system for augmented, mixed, and virtual reality works, how the sensors and peripheral devices interact with the user and their environment, and which devices the technologies can be used with. The setups shown in the diagrams below can vary depending on the manufacturer and model. However, the technology in question always works according to a similar principle.
Smart glasses, which are also called data glasses, denote a whole range of glasses for the fields of augmented reality and mixed reality. These devices enable the superimposition of the virtual world on the real world. The specifications differ depending on the manufacturer and usage area. For example, smart glasses for augmented reality are often more slender than those for mixed reality, containing less sensor technology. This is because AR smart glasses merely show additional information in the field of vision of the user, and do not require any reference to the surrounding space. Everything remains two-dimensional. However, mixed reality smart glasses need equipment that can establish a comprehensive reference to the user’s surroundings. This is necessary so that virtual 3D objects can be generated and positioned, and so that the user can interact with them.
There are differences between how AR and MR data glasses are controlled, too. AR data glasses are often controlled via voice commands and a touch sensor. To enable touch control, part of the side piece of the glasses is made touch-sensitive. MR glasses also use voice commands, but in this case, they have more of a supportive role. Most of the interaction with MR data glasses takes place via hand gestures.
Purpose: To display information and elements in the user’s field of vision
Quality factors: Mobility, optics, slimness, affordability
The generic name for augmented reality hardware is “AR smart glasses” or “AR data glasses”. These glasses enable the enhancement of the real world with digital information. In the case of AR data glasses, the virtual information is not anchored spatially; instead, it is merely displayed in the user’s field of vision.
AR data glasses do not place excessively high requirements on the hardware. They have no spatial reference and do not need to perform tracking, which means that the sensor technology is simpler than that of MR and VR systems. The way they work is most similar to a smart watch – with the difference that the display is transparent and sits directly in front of the user’s eyes.
As in smart phones, an SoC (system-on-a-chip) is used for the computing power. This coordinates the internal components. The sensors, too, are similar to those of a smart phone: The integrated devices include an acceleration sensor, a geomagnetic sensor, a rate sensor (which takes on the function of a gyroscope), and a color camera. These inertial measurement units enable the detection of the user’s head tilt, whereas the color camera allows pictures to be taken and QR codes to be read. The data glasses can be controlled with voice commands or via a touch-field on the side piece of the glasses. A connection with a smart phone can also be set up, allowing users to control the displayed content from their phone. Communication with other infrastructures takes place in the normal way using Bluetooth or WiFi.
The exact design of the hardware differs depending on the manufacturer and model. The main distinction is between glasses with a monoscopic display and glasses with a stereoscopic display. Monoscopic glasses only display content to one of the user’s eyes. Stereoscopic glasses allow both eyes to view the content, which allows virtual elements to be given a degree of depth.
As well as AR data glasses, there are glasses that have a non-transparent monoscopic display. However, these data glasses do not allow the superimposition of information over the user’s real field of vision, which means that from a technical point of view, they cannot really be seen as augmented reality devices.
Purpose: Generation of dynamic virtual structures and objects that are anchored in the user’s real environment
Quality factors: Convenience, performance, display quality, mobility
MR data glasses are faced with significantly higher requirements than data glasses in the field of AR. For mixed reality, the virtually generated content is given a spatial reference. This means that the glasses detect the room in which the user is standing and give objects fixed positions within it.
Like AR data glasses, MR data glasses (or “smart glasses”) are mobile. A powerful SoC enables location-independent work and the smooth interaction of the integrated components. The desire to display and anchor digital objects in as real a way as possible in space requires complex sensor technology. This is a prerequisite, for example, for the reconstruction and simulation of real machines.
All three inertial sensors can be found in MR data glasses, too: an acceleration sensor, a rate sensor, and a magnetometer. Sometimes, people talk about a 9-axis sensor, since each of the three covers all three spatial axes in its own right. A 9-axis sensor not only helps to determine the head tilt of the user; it also detects the head’s relative movement in space. A ToF (time-of-flight) camera sends infrared beams, captures their reflections, and uses them to calculate the depth of the observed environment. The spatial recognition features are supported by several monochrome cameras that detect static points in the room using a SLAM algorithm (for more information, see “Tracking”) and triangulate their own position. Microsoft’s HoloLens 2, for example, has four of these grayscale cameras. The inner two are responsible for the actual triangulation, and the outer two record the peripheral environment.
MR smart glasses are mainly controlled by hand gestures. As well as calculating the depth of space in the field of vision of the user, the depth sensor is responsible for tracking the user’s hands. Some models only detect certain pre-programmed hand gestures, whereas further developed MR smart glasses can interpret the individual movements of each finger. Voice commands for the fast and direct execution of actions are also commonly used to control mixed reality hardware.
Eye-based control, which aims to make the usability of MR data glasses even more intuitive, is a relatively new feature. Infrared sensors detect the eyes and can tell, for example, if the user wants to scroll through a window. In the case of the HoloLens 2, these infrared sensors also play an important role in the adjustment of the visible content, since the user’s eyes always serve as a point of reference for the lasers that generate the image on the display. This makes the HoloLens 2 extremely tolerant of the anatomical differences between individual users, meaning that no mechanical IPD controller is required.
IPD stands for “interpupillary distance”. It describes the distance between the user’s eyes. An IPD controller adjusts the VR headset or data glasses in line with the individual distance between the current wearer’s eyes. This is important to ensure that the content is displayed clearly and is not blurred or distorted. VR headsets normally have a mechanical IPD controller on the bottom of the headset so that the user can make this adjustment. However, modern MR data glasses such as HoloLens 2 by Microsoft have an integrated IPD control feature that automatically adjusts the image in line with the distance between the user’s eyes as measured by the infrared sensors.
Purpose: The generation of completely virtual, immersive worlds
Quality factors: Display quality, performance, tracking
VR glasses and VR headsets are often simply called VR HMDs (head-mounted displays). Unlike AR and MR, the aim of virtual reality is not to enhance the reality of the user but to replace it. VR HMDs are designed to seal the user off as much as possible, completely appropriating their sight and hearing. Light that manages to penetrate through small gaps in the headset (called “light bleed”, frequently in the nose area) is seen as a disruptive factor that reduces the user’s sense of immersion.
HMDs can be mobile or wired. In the case of the wired version, the headset is connected to an external computer, which means that the computing power is housed externally. The advantage is this: The hardware in the computer is considerably more powerful than anything that could be built into the headset. However, many users find the pull of the cables around their head disturbing (the number of cables depends on the model). For this reason, there’s a certain trend towards increased mobility and freedom of movement for VR headsets, too. This freedom has a price: Mobile headsets have to provide all of the computing power themselves, which means that they cannot offer the performance achievable with a connected computer.
Even though the display is not transparent and the application areas differ greatly, the sensors in a VR headset are similar to those in MR data glasses. A 9-axis sensor with an integrated acceleration sensor, rate sensor, and magnetometer takes on the task of rotation tracking and – to a large extent – position tracking. The rest of the position tracking task is performed either (as in the case of MR data glasses) by monochrome wide-angle cameras with the help of triangulation and a SLAM algorithm, or by a combination of infrared sensors and base stations (see “Tracking”). Some models also have a small proximity sensor inside the headset. This detects whether the glasses are currently being worn. If not, they can be placed into idle mode, for example.
A VR system is operated practically entirely using controllers that are held in both hands. With these, the user can point to digital content as with a laser pointer, and can interact using buttons. Operation using finger tracking is increasingly being developed further, too. In this case, the movements of the user’s hands and fingers are translated into the virtual world. For example, the Valve Index VR Kit contains a controller that can track the movement of individual fingers. This does not give the user complete freedom, since a controller is still strapped to their hand. However, controllers of this kind at least enable more intuitive gripping and the realization of finger-sensitive actions. In the field of MR, Microsoft has demonstrated with the HoloLens 2 that completely free hand tracking is also possible. The first scalable attempt to implement free hand tracking for virtual reality came at the end of 2019 from Oculus VR.
Free hand tracking was enabled for the users of Oculus Quest as an experimental feature. Unlike in the case of HoloLens 2, which uses its depth sensor for hand tracking, Quest uses its integrated wide-angle grayscale camera. The position of the user’s hands is determined with the help of a deep learning model and model-based tracking. VR gloves (also called “data gloves”) present another way of controlling such applications with one’s hands. These are gloves with integrated sensors that enable precise finger tracking and, depending on the design, haptic feedback.
Tracking describes the determination of the location and position of the headset and, if applicable, the controllers. It plays a decisive role in both MR and VR. Unlike MR, VR can draw on significantly more resources for tracking if a certain amount of mobility is sacrificed. VR systems can do this because, in their case, mobility is merely a question of convenience. For MR applications, however, mobility and reference to the user’s environment are practically indispensable. For this reason, MR data glasses tend to rely upon the same mobile tracking method, whereas different approaches have developed in the VR market.
When it comes to the question of what should actually be tracked, a distinction is made between two types of freedom in 3D space: 3DOF and 6DOF. DOF stands for “degree of freedom”. 3DOF is also called “rotation tracking”. It denotes the tracking of the headset or controller in three rotational degrees of freedom: pitch, yaw, and roll.
The tracking is performed by small integrated rate sensors (often called gyroscopes in colloquial terms) like those that can also be found in smart phones. They detect the current tilt or turn of the user’s head or the controller.
6DOF includes the first three degrees of freedom of rotation tracking but enhances these with three further degrees of freedom in the form of position tracking. Thus, in addition to enabling the tilt of the headset to be determined, the position tracking functions of 6DOF enable the movement of the user’s head (and controller) along the three axes to be detected, too.
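The relationship between the two tracking scopes can be sketched as a simple data structure. The following is a hypothetical illustration in Python; the class and field names are our own and do not come from any real SDK:

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Rotation tracking only: how the head or controller is oriented."""
    pitch: float  # tilt up/down, in radians
    yaw: float    # turn left/right
    roll: float   # tilt sideways

@dataclass
class Pose6DOF(Pose3DOF):
    """Rotation plus position tracking: adds movement along the three axes."""
    x: float = 0.0  # left/right, in meters
    y: float = 0.0  # up/down
    z: float = 0.0  # forward/backward

# A 6DOF system knows not only that the head is tilted slightly down,
# but also that it sits 1.7 m above the origin of the tracked space.
head = Pose6DOF(pitch=-0.1, yaw=0.0, roll=0.0, x=0.0, y=1.7, z=0.0)
print(head)
```

A 3DOF system can answer “where is the user looking?”, but only a 6DOF system can also answer “where is the user standing?” – which is what makes room-scale VR and spatially anchored MR content possible.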
Several approaches have been developed for tracking in six degrees of freedom. All of these have their advantages and disadvantages, but the underlying principle of all of the tracking methods is always the same: Acceleration sensors (translation sensors) that are integrated into the hardware measure the acceleration when the user’s head or the controller is moved. Integrating this acceleration over time allows the mathematical determination of velocity. If you then integrate the velocity over time, you obtain the relative change in position.
The problem with this approach is that acceleration sensors – like all other sensors – are subject to self-noise, and the double integration of the acceleration amplifies any error quadratically. For this reason, pure tracking using acceleration sensors only works for a brief moment before the errors accumulate to such an extent that the calculated position becomes too imprecise. To solve this problem, a correction mechanism that promptly detects the sensor errors and compensates for them is required. For this reason, all tracking approaches aim to regularly correct the acceleration sensor data. Thus, attempts are made to combine the extremely high rate of the acceleration sensors, which work at around 1,000 Hz (1,000 position updates per second), with the consistency of a slower external tracking system.
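This drift problem and its correction can be illustrated with a small simulation. This is a toy sketch, not real headset code; the noise level and correction rate are invented for illustration:

```python
import random

DT = 0.001     # 1,000 Hz sensor rate, i.e. one sample per millisecond
NOISE = 0.05   # assumed accelerometer self-noise, in m/s^2
STEPS = 5000   # simulate 5 seconds of a headset lying perfectly still

def drift(correct_every=None):
    """Double-integrate pure sensor noise; the true motion is zero.

    If correct_every is set, a slower external tracking system (such as
    lighthouse base stations) reports the true position and velocity at
    that interval, and the accumulated error is reset.
    """
    vel = pos = 0.0
    for step in range(1, STEPS + 1):
        accel = random.gauss(0.0, NOISE)  # measurement = true value (0) + noise
        vel += accel * DT                 # first integration: velocity
        pos += vel * DT                   # second integration: position
        if correct_every and step % correct_every == 0:
            pos = 0.0                     # external fix: true position
            vel = 0.0                     # external fix: true (zero) velocity
    return abs(pos)

def avg(correct_every=None, runs=10):
    """Average the final drift over several seeded runs."""
    total = 0.0
    for seed in range(runs):
        random.seed(seed)
        total += drift(correct_every)
    return total / runs

print(f"pure double integration:    {avg():.5f} m of drift")
print(f"with ~16 Hz external fixes: {avg(correct_every=60):.5f} m of drift")
```

Even though the headset never moves, the pure double integration wanders away from the true position, while periodic corrections from a much slower external system keep the error negligible – exactly the division of labor described above.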
In the case of constellation tracking, the hardware carries a predefined constellation of a large number of LEDs that is known to the system. These emit either visible light (e.g. Sony PlayStation) or infrared signals (e.g. Oculus Rift). Two external cameras record images of the LEDs and send them to a connected computer. The computer knows the exact constellation of the LEDs and can therefore calculate the relative changes in the hardware’s position.
In the field of VR, only Sony PlayStation has used tracking with visible light. As a result, the company was able to reuse the Move controller launched in 2010 and needed to develop little new hardware. The result was an extremely cost-effective tracking method, but the quality was mediocre at best.
In the Oculus Rift, constellation tracking was used to design a cost-effective yet competitive tracking model. The approach of Oculus VR was significantly more ambitious than that of Sony for its PlayStation. For example, the cameras of the Oculus Rift are synchronized with the flashing of the LEDs. To further increase accuracy, each LED in the Oculus Rift flashes at a frequency that can be uniquely identified by the software. As a result, the Oculus Rift was able to achieve a comparatively high tracking quality in a cost-effective way. However, many computers struggle with the fact that both cameras need to be constantly connected via USB and use a lot of the available bandwidth. For this reason, constellation tracking in this pure form is unusual. However, the basic principle is still used for controller tracking.
Lighthouse tracking/outside-in tracking/tracking with base stations
Lighthouse tracking is based on the installation of small base stations (“lighthouses”) in the upper corners of the room. These base stations are only a few centimeters in size and are not, themselves, sensors – they do not communicate with the hardware or with a computer. They are connected only to a power supply and send infrared signals into the room in accordance with a very specific pattern and exact timing. Sensors on the headset and controllers receive and interpret the signals so that they can calculate their own location and position in relation to the base stations. Lighthouse tracking, too, constitutes only a correction of the data provided by the acceleration sensors. Whereas the internal sensors calculate a new position every millisecond (1,000 Hz, at least for the headset), the lighthouse system corrects this position at a significantly lower frequency. This means that the acceleration sensors deliver the faster data, but the lighthouse system delivers the more consistent data.
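The core timing trick can be sketched as follows. This is a deliberately simplified model with an assumed rotor speed; real base stations use more elaborate signal encoding:

```python
import math

SWEEP_HZ = 60.0           # assumed rotor speed of the base station
PERIOD = 1.0 / SWEEP_HZ   # duration of one full 360-degree laser sweep

def sweep_angle(t_sync, t_hit):
    """Angle of a headset photodiode as seen from the base station.

    The station first flashes an omnidirectional sync pulse (t_sync),
    then sweeps a laser plane through the room at a constant speed.
    The sensor only needs to measure how long after the sync pulse the
    plane passes over it (t_hit) to know its angular position relative
    to the station.
    """
    elapsed = (t_hit - t_sync) % PERIOD
    return 2.0 * math.pi * elapsed / PERIOD

# A hit a quarter of a sweep period after the sync pulse means the
# sensor sits a quarter turn into the sweep.
angle = sweep_angle(t_sync=0.0, t_hit=PERIOD / 4)
print(f"{math.degrees(angle):.1f} degrees")  # → 90.0 degrees
```

With one horizontal and one vertical sweep, each sensor obtains two angles per station; combined with the known layout of the sensors on the headset, this is enough to correct the pose calculated from the acceleration sensors.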
Theoretically, a single base station is sufficient for clean tracking, but usually two are used, with the second installed in the opposite corner of the room. With one base station, a high level of precision is already achieved for two of the three spatial axes. However, the axis pointing from the device to the base station is subject to somewhat poorer tracking performance. A second base station helps to increase the precision along this axis but above all serves to prevent occlusion problems.
Lighthouse tracking has proven to be extremely precise and reliable as a tracking method. However, the advantages for the user are associated with higher costs and require the installation of fixed devices in the room. With the trend towards more freedom and mobility, many manufacturers are increasingly using tracking methods that are more convenient for users, despite the good precision and easy scalability of the lighthouse method.
Inside-out tracking/SLAM tracking
Inside-out tracking is one of the most common tracking methods and constitutes the benchmark for any MR and mobile VR system. No additional hardware is required, since position tracking takes place via monochrome cameras that are integrated into the headset. The camera sensor data is evaluated by means of a SLAM algorithm. SLAM stands for “Simultaneous Localization and Mapping”, an active field of research in robotics. A SLAM algorithm allows robots to orient themselves in an unfamiliar environment by scanning their surroundings, creating a virtual map, and determining their own position on this map. SLAM works in a similar way for MR and VR systems, too. The algorithm (Oculus calls theirs Insight, whereas Google calls theirs WorldSense) uses the evaluation of image data to identify distinctive points in the environment such as edges and corners or even whole rugs, pieces of furniture, tables, and pictures on the wall. When the user moves, the headset compares the change in these points with the data of the acceleration sensors and rate sensor and uses this information to calculate the relative change in position and the rotation of the head (visual-inertial odometry). To track the controllers, the constellation method is used, with a fixed constellation of LEDs installed on the controllers.
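The triangulation step at the heart of such a system can be illustrated in two dimensions. This is a deliberately simplified sketch; real SLAM systems work in 3D with many points and statistical filtering, and the function name here is our own:

```python
def locate_camera(landmark_a, dir_a, landmark_b, dir_b):
    """2D toy triangulation: the camera sees two known map points.

    dir_a and dir_b are the measured directions from the camera toward
    each landmark, so the camera must lie on the line running backward
    from each landmark along that direction. Intersecting the two lines
    yields the camera position.
    """
    ax, ay = landmark_a
    bx, by = landmark_b
    ux, uy = -dir_a[0], -dir_a[1]  # backward along the first viewing ray
    vx, vy = -dir_b[0], -dir_b[1]  # backward along the second viewing ray
    det = vx * uy - ux * vy
    if abs(det) < 1e-12:
        raise ValueError("lines are parallel; these landmarks give no fix")
    # Solve landmark_a + t*u = landmark_b + s*v for t (Cramer's rule)
    t = (-(bx - ax) * vy + vx * (by - ay)) / det
    return (ax + t * ux, ay + t * uy)

# A camera at (1, 1) sees landmark (0, 3) in direction (-1, 2) and
# landmark (4, 0) in direction (3, -1) (directions left unnormalized
# for clarity). Intersecting the rays recovers the camera position.
pos = locate_camera((0, 3), (-1, 2), (4, 0), (3, -1))
print(pos)  # → (1.0, 1.0)
```

In a real headset, this geometric fix from the cameras is what periodically corrects the fast but drifting estimate coming from the inertial sensors.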
Inside-out tracking offers mobile, cost-effective, efficient tracking without extra effort on the part of the user, which is why this method is becoming increasingly popular. With regard to MR systems, the only disadvantage is that the tracking system finds it difficult to identify fixed points in poorly lit or very bare, featureless environments. Apart from that, the disadvantages mainly relate to the use of controllers, which only affects VR systems. Inside-out tracking has to fall back upon constellation tracking for controllers, and consequently these are tracked only if they are in the field of view of the headset’s cameras. If, for example, the user moves their hand behind their head, too near to their face, or otherwise masks the controller LEDs, the system loses track of their current location and position.
Cameras, depth sensors, geomagnetic and infrared sensors, algorithms, and neural networks – AR, MR, and VR systems are packed with complex, sophisticated technologies. We’re now at the point where the level of maturity of all of these micro and macro systems means that more and more use cases can be accommodated, and this increases further with each generation of new hardware. However, in addition to data glasses and HMDs, there are other devices that are capable of using augmented, mixed, or virtual reality features.
Smart phones and tablets
When it comes to work in the fields of augmented and mixed reality, data glasses and VR headsets are superior to smart phones and tablets in practically all aspects. Despite this, our hand-held devices have one important advantage: We all own one. Smart phones and tablets are heavily integrated into our everyday lives. With their advanced hardware, they open up lots of opportunities, particularly for AR and MR applications. Users are restricted in that they cannot operate their devices in a hands-free manner, but smart phones and tablets are a manageable size and always at hand. In addition, there are some really powerful developer tools that can be used to create sophisticated AR and MR applications.
The three most frequently used developer tools in this field are Apple’s software development kit ARKit, Google’s equivalent ARCore, and Unity’s AR Foundation.
In June 2017, Apple launched the ARKit software development kit (SDK) together with iOS 11. This kit is a collection of various tools that simplify the programming of augmented reality applications for iOS. ARKit allows iOS devices to detect and track surfaces, images, objects, human bodies, faces, light, and user movements through the use of the integrated camera along with various algorithms such as the SLAM algorithm (see “Tracking”).
In the same year, Google published its equivalent to Apple’s augmented reality SDK – ARCore. The SDK simplifies the development of AR applications for Android devices as of Android 7 and for iOS devices as of iOS 11. For the kit, Google uses similar algorithms to offer the detection and tracking of user movements, surfaces, light, faces, and images. However, unlike Apple’s ARKit, the detection and tracking of objects and human bodies is not included from the start.
AR Foundation by Unity is a cross-platform tool for developers that enables the seamless programming of augmented reality applications for the platforms HoloLens, Android, and iOS. The framework acts as an interface between the ARCore and ARKit SDKs, with developers being able to access the functionalities of both when programming an AR app.
Frequently, a mixed scenario is conceivable, where VR headsets, AR/MR smart glasses, and smart phones/tablets are used together and complement each other’s functions. As well as opening up new collaboration and interaction opportunities, combinations of this kind also enable easy scaling with regard to the number of users.
The world of augmented and mixed reality devices is not restricted to data glasses, smart phones, and tablets. With increasing technological progress, we continue to find more devices that allow us to enhance our natural environment with a layer of virtuality. For example, an augmented reality mirror allows users to try on virtual items of clothing. In mirrors of this kind, a combination of a computer, depth camera, and the mirror itself track the body of the person standing in front of the mirror and superimpose virtual items of clothing in real time. The user can flexibly change their clothes using gestures without having to go to the effort of actually getting changed. The technology itself is really versatile and can be transferred to numerous other concepts. This means that in the future, we can anticipate completely new, unexpected devices and systems that use augmented reality.
Hardware and software go hand in hand when it comes to offering users a smooth experience and meeting their expectations in all aspects. Real added value can only be extracted from the hardware if the software is also just right.
The development of applications for AR, MR, and VR platforms generally takes place in the runtime and development environments Unreal and Unity or in Amazon’s browser variant Sumerian. These offer developers all of the main tools required to quickly and flexibly design versatile content. Ultimately, the best development environment depends on the choice of AR, MR, or VR platform and the complexity of the application.
Thanks to a clear focus on cross-platform, mobile development, the use of Unity has become extremely common in the field of AR, MR, and VR for mobile end-devices. Particularly in the mobile AR area, this is due to the AR Foundation framework developed by Unity and the support of further frameworks and SDKs such as the Mixed Reality Toolkit for Microsoft HoloLens. In addition, the development environment supports more than 25 platforms, delivers regular updates and optimizations for the mobile field, and enables programming in the widespread C# programming language.
In the demo videos of the Epic Games Unreal Engine, one thing immediately catches the eye: The impressive graphical depiction of the content. This is one of the core strengths of the Unreal Engine. As a result, it is particularly appealing for stationary VR applications in which content needs to be displayed as realistically as possible. Unreal supports 15 different platforms and is programmed in C++. An interesting feature used by Unreal is its Blueprints Visual Scripting system, which enables the interactive design of virtual worlds without extensive programming experience.
“Content is king”: Bill Gates told us so in an essay back in 1996. Many technologies exist only to transport content and display it well for users. When we talk about AR, MR, and VR content, we generally mean 3D objects. Such objects are created by 3D artists who specialize in the design of three-dimensional figures, objects, and artwork.
However, the aesthetic aspect of a 3D model is not the only important issue. From a technical point of view, there are many subtleties to take into account. The format landscape for 3D objects is vast: each format has specific properties and was designed for a different purpose. This raises questions of compatibility and convertibility, since each system supports only a certain set of file formats.
A further technical aspect is the resolution of models, measured by the number of polygons from which a model is formed. This has a major effect on display quality and performance. For example, high-resolution models with a high polygon count must be scaled down to a smaller number of polygons for low-performance devices in order to ensure a clean display. This downscaling need not come at the cost of quality: in the case of a CAD model used for demonstration purposes, for example, a lot of redundant information in the form of inner polygons can be removed without impairing the external appearance of the model.
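As an illustration, one simple downscaling approach is vertex clustering: nearby vertices are merged into a single representative, and triangles that collapse in the process are dropped. The following Python sketch shows the idea in its most basic form; production tools use far more sophisticated decimation algorithms:

```python
from collections import defaultdict

def decimate(vertices, triangles, cell_size):
    """Reduce a triangle mesh's polygon count by vertex clustering:
    all vertices falling into one grid cell collapse to their average."""
    # Map each vertex to the grid cell it falls into.
    cell_of = [tuple(int(c // cell_size) for c in v) for v in vertices]

    # Accumulate per-cell coordinate sums and counts.
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for v, cell in zip(vertices, cell_of):
        s = sums[cell]
        s[0] += v[0]; s[1] += v[1]; s[2] += v[2]; s[3] += 1

    cells = list(sums)
    index_of = {cell: i for i, cell in enumerate(cells)}
    new_vertices = [
        (s[0] / s[3], s[1] / s[3], s[2] / s[3])
        for s in (sums[c] for c in cells)
    ]

    # Re-index triangles; drop those that collapsed to a line or point.
    new_triangles = []
    for a, b, c in triangles:
        ia, ib, ic = index_of[cell_of[a]], index_of[cell_of[b]], index_of[cell_of[c]]
        if len({ia, ib, ic}) == 3:
            new_triangles.append((ia, ib, ic))
    return new_vertices, new_triangles
```

The larger the cell size, the more aggressively the polygon count drops, so the parameter trades visual fidelity against performance on the target device.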
For many purposes, the creation of 3D models is surprisingly easy. Frequently, a high level of detail is not required, and models can be created more quickly than you might expect. In addition, many AR and MR applications can be implemented with very simple 3D objects such as markers, arrows, and circles.
Why starting to use AR, MR, and VR pays off
The trend towards more immersive, reality-enhancing technologies has been evident among companies in diverse sectors for a while. Some of these companies were early adopters who started experimenting with MR data glasses very early on. However, most were initially disappointed with early headset models: the devices turned out to be too heavy, too uncomfortable, and not robust enough for daily use. Despite this, the potential of the maturing technology was not dismissed.
Today, analyses and forecasts concur: AR, MR, and VR are on the rise. Companies can either adapt and make the most of the benefits, or they can be left behind. For example, IDC forecasts “strong growth” for AR and VR with regard to the sales figures of AR data glasses and VR HMDs, with a compound annual growth rate of 66.7% through 2023. From around 5.9 million units in 2018, that means an increase to 68.6 million units in 2023. Market research carried out by BIS Research Inc. between 2017 and 2018 also identified huge potential. According to the resulting forecasts, the market for augmented and virtual reality, worth 3.5 billion USD in 2017, is expected to grow to 198 billion USD by 2025. A more recent investigation by ARtillery Intelligence in 2019 came to similar findings: in the field of augmented reality alone, global turnover of 1.96 billion USD in 2018 is expected to rise to 27.4 billion USD by 2023.
All of the analyses paint a uniform picture, and other metrics support the growth forecast. The number of patent registrations for AR and VR grew by 125% between 2015 and 2018, reaching 32,083 patents. In addition, a quickly growing number of startups are riding this trend and developing the technologies further. The startup portal AngelList lists almost 2,000 augmented reality startups, with an average valuation of 5.2 million USD.
This trend is not escaping the attention of observant companies: in a survey published by Harvard Business Review in 2018, 87% of 394 managers stated that they had use cases for mixed reality on their radar in one way or another. 20% of them seized the opportunity early and already use MR profitably at their companies.
The companies hope that professional usage will lead to:
- Optimization of work processes
- Increased productivity
- Competitive advantages
- Acceleration of learning processes
- Improved customer experience
- Shorter downtimes
- Minimization of errors
- Improved ability of the company to adapt to changing circumstances
- More effective decision making by employees in areas that are not knowledge-intensive
- Faster market launch of products and services
- Improved employee satisfaction
A study by Capgemini provides specific examples. Boeing engineers use AR technology to display circuit diagrams in their field of vision. Working intuitively, they are 25% faster, and productivity has improved by 40%. An increase in productivity and improved collaboration was also achieved by field workers from Toms River Municipal Utilities Authority (a municipal utility company in New Jersey). There, AR and VR technology is used to display hidden supply lines in roads in real time. At Ford, VR is used in combination with movement sensors to record human movements during the assembly process. As a result, new movement patterns have been formulated, reducing the risk of injury by 70% and bringing about a 90% reduction in ergonomic problems.
Possible uses of AR, MR, and VR
How do you discover possible uses for these technologies at your company? How can you best link augmented, mixed, and virtual reality with your specialist knowledge and expertise so that you, too, can benefit from increased productivity, reduced downtimes, and the minimization of errors?
We use design thinking and can work with you in productive workshops to find out which options are open to you. A few examples illustrate the potential that can be exploited through the clever use of AR, MR, and VR systems:
- Interactive production instructions guide the user through step-by-step instructions for assembling devices. This allows even complex components and machines to be assembled without prior knowledge or training.
- 3D maintenance support displays the inside of machines and all relevant operating parameters to service engineers in real time. This results in much faster diagnostics and repair work.
- Virtual prototyping allows designers to benefit from a fast, collaborative design process and brings about considerable cost savings since there is no need for clay models to be used.
- Interactive training allows trainers to work from practically any location, helps to improve the speed of learning, and increases the scalability of training courses.
- 3D room/layout plans enable real-time planning in rooms using any scale. They make purchasing decisions easier and accelerate the planning process for production halls, fair stands, and office buildings.
- Indoor and outdoor navigation without WiFi or Bluetooth beacons allows everyone to orient themselves intuitively in unknown environments.
- 3D defect management in construction and buildings management allows users to place virtual tickets in a room and navigate to these tickets. As a result, matters such as construction defects can be found, handled, and properly documented more quickly.
- Spatial building information modeling (BIM) can be used to superimpose an object with its digital twin, allowing a target/actual comparison. Deviations are detected more quickly and the planning/realization of the project is made much easier.
- 3D product displays enable any product to be presented in its actual size. They help customers to make purchasing decisions and therefore improve customer satisfaction.
- Surgery support and anatomical displays allow X-ray images to appear directly on the bodies of patients, giving surgeons insight into the actual bodily make-up of their patients. This improves the depiction of procedures, allows for more individual planning, and helps to explain procedures to patients.
AR, MR, and VR are multifaceted and have diverse possible uses. You’ve probably already tentatively thought about how you might be able to profitably use these technologies in your company. But what’s the best way to proceed? What steps are required during the development? The following example shows how you can sensibly approach and model a project.
Let’s assume that you want to help new employees to find their way around your office complex as quickly as possible. The large area and sheer number of meeting rooms are often problematic for new arrivals, causing them to be late. Recently, you happened to hear that augmented reality enables a kind of navigation system that does not require Bluetooth beacons. It sounds promising, and you’ve decided to seek some advice.
You contact Novatec. Our job is to explain the topic to you more clearly and work with you to design a possible implementation. In several design thinking workshops, we work hand in hand to analyze your existing infrastructure and possible users. We create initial implementation concepts and select suitable hardware. After a thorough check of feasibility and the added value that the project would bring for your company, you decide upon the development of an AR navigation app for smart phones.
The idea is that your employees can use a mobile app to navigate to certain destinations on your premises, both inside and outside buildings. In order to ensure the efficient realization of the project, you agree with Novatec that agile development in accordance with Scrum principles should be used.
In an iterative process of planning, implementation, and testing, the product owner (PO) guides you through the development and presents the current state of the application. Gradually, a first version of your application emerges that allows for hands-on feedback. This step is particularly important, since it is the only way to reliably ensure that the planned app is easy to use, covers all required functions, and achieves the desired goal. At the end of this iterative process, the minimum viable product (MVP) is created. It consists of two applications: an admin app and a user app.
The admin app allows you to freely set waypoints throughout your company premises and to define destinations. To do this, the application scans its immediate surroundings, detects distinctive points such as corners and edges, and groups them into clusters. When a user of the admin app sets a waypoint, a certain part of the previously created clusters is recorded as a reference for the waypoint and is saved in a cloud as a spatial anchor. Multiple waypoints or spatial anchors set along a route are placed into a positional relationship with each other and are saved in a database.
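To make this concrete, the stored data might look something like the following Python sketch. The class and field names are purely illustrative and not taken from any particular SDK; real cloud anchor services store considerably richer data:

```python
import math
from dataclasses import dataclass

@dataclass
class SpatialAnchor:
    """Hypothetical record for one waypoint saved to the cloud."""
    anchor_id: str   # ID under which the cloud stores the anchor
    position: tuple  # (x, y, z) in the local session's coordinates

class WaypointStore:
    """Saves waypoints and the positional relationships between them."""
    def __init__(self):
        self.anchors = {}
        self.edges = {}  # anchor_id -> {neighbor_id: distance}

    def add_waypoint(self, anchor: SpatialAnchor):
        self.anchors[anchor.anchor_id] = anchor
        self.edges.setdefault(anchor.anchor_id, {})

    def connect(self, a_id: str, b_id: str):
        # Record the relationship symmetrically, weighted by the
        # Euclidean distance between the two anchors.
        pa = self.anchors[a_id].position
        pb = self.anchors[b_id].position
        d = math.dist(pa, pb)
        self.edges[a_id][b_id] = d
        self.edges[b_id][a_id] = d
```

Connecting consecutive waypoints along a route this way yields exactly the weighted graph that the user app later needs for navigation.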
Spatial anchors are a core element of emerging AR/MR technology: fixed points in the real environment that the AR/MR system identifies and “anchors”. Even when the user moves through space, spatial anchors maintain their position and orientation. They serve as reference points and coordinate systems for virtual content, which can thus be fixed in space.
On the other side of the equation is the user app, which detects the previously created waypoints and guides your employees to defined destinations using them. To do this, the user app also permanently scans its environment, creating its own clusters from information about distinctive points in the room. Since these are detected in the same way as for the admin app, the data currently being scanned is easy to compare with the data in the cloud. If the app finds similar data records, it can work out where users are and which way they are facing. If a user is next to a waypoint, the app uses a Web service to access the positional relationships of the spatial anchor previously saved in the database and can then guide the user to the next waypoint along a graph. This allows the user to select one of the destinations you created and navigate to it easily using a smart phone.
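The matching of scanned clusters against stored ones can be pictured with a deliberately crude sketch. Real systems compare feature descriptors; this illustration only compares the spatial spread of the point sets, and the names and threshold are assumptions:

```python
import math

def centroid(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def match_cluster(scanned, stored_clusters, threshold=0.3):
    """Find the stored cluster whose feature points best resemble the
    currently scanned ones, using a crude spread-based score.
    A real system would match feature descriptors instead."""
    def spread(points):
        # Average distance of the points from their centroid.
        c = centroid(points)
        return sum(math.dist(p, c) for p in points) / len(points)

    scanned_spread = spread(scanned)
    best_id, best_score = None, float("inf")
    for cluster_id, points in stored_clusters.items():
        score = abs(spread(points) - scanned_spread)
        if score < best_score:
            best_id, best_score = cluster_id, score
    # Reject matches that are not similar enough.
    return best_id if best_score < threshold else None
```

Once a scanned cluster matches a stored one, the app knows which waypoint the user is standing next to and can estimate position and heading from the anchor's coordinate system.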
To sum up, the apps provide the following functions:
- Scanning of surroundings to collect spatial data
- Creation of waypoints and destinations
- Collection and saving of positional relationships between spatial anchors
- Recognition of created spatial anchors
- Retrieval of positional relationships
- Creation of a graph from the saved positional relationships
- Guidance to destination
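The last three functions, retrieving positional relationships, building a graph, and guiding the user, can be sketched as a shortest-path search over the waypoint graph. The adjacency-map format and the waypoint IDs below are illustrative assumptions:

```python
import heapq

def shortest_route(edges, start, goal):
    """Dijkstra's algorithm over the waypoint graph: edges maps each
    anchor ID to {neighbor ID: distance}. Returns the sequence of
    waypoints to traverse, or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in edges.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return None
```

The user app would then render an arrow or path overlay towards each waypoint in the returned sequence until the destination is reached.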
On this basis, the application is further developed iteratively in an agile manner in accordance with Scrum principles. Test-driven development should ensure the targeted improvement of the application through the further development of existing functions and the implementation of new ones. Regular close contact with users also allows high quality standards to be achieved and an efficient end product to be created without unnecessary detours along the way. We perform all of these steps in order to bring the previously defined product vision to life and to ensure a smooth development process.
A glimpse into the future
When computers were first introduced into private homes and the everyday lives of normal people, practically nobody could really grasp what this actually meant. Many people simply couldn’t imagine that the average family would get much use out of a PC. Today, we can look back and learn from our lack of vision. The PC was the first big step towards a life where we are accompanied by a digital reflection of our reality. The computer brought a new dimension of our world right into our own homes. The success of the smart phone was the second major step, releasing our connectivity from its stationary bonds. Today, many people are digitally connected at all times. In what direction will the technology develop further?
Famous faces such as Tim Cook, Satya Nadella, and Mark Zuckerberg are certain that the next step is the merging of the digital world with the real world. Above all, AR and MR are expected to form the next platform for the use of virtual content, following directly in the footsteps of the computer and smart phone. In addition, VR systems, which still cost around 250,000 USD in the times of Jaron Lanier and were completely unwieldy, will increasingly penetrate our everyday life as technology continues to progress. Analysts agree that the development of technologies of this kind will increasingly gain steam in the years to come, releasing an as yet unanticipated wealth of possibilities, as our PCs once did.
We, too, think that AR, MR, and VR have massive potential, and we’re certain that they will greatly influence and enrich our working lives in future years.
So what’s your decision? Technological progress waits for no man. Don’t delay. Shape your digital future right now!