Giving voice to augmented reality

Combining voice recognition with augmented reality reduces errors and improves productivity.

A GE Aviation jet engine mechanic checks torque values in Skylight while performing assembly.

The aerospace maintenance, repair, and overhaul (MRO) market is soaring, growing by 4.2% annually. By 2028, it is expected to reach $115 billion, according to research from management consulting firm Oliver Wyman. Although this is good news for MRO providers, it also means that the talent gap will become even wider – Boeing predicts that as many as 118,000 new technicians will be needed in North America alone by 2035. That is a great many workers to train – all without compromising throughput or quality during a period of rising demand.

In “Smart glasses transform aircraft production” (Aerospace Manufacturing and Design, Jan/Feb 2017), I discussed how enterprise augmented reality (AR) delivered on wearables is helping aerospace manufacturers and MROs close that skills gap by delivering step-by-step instructions and workflows to workers right in their lines of sight. With instant access to real-time information, workers of all skill levels can efficiently complete complex tasks with minimal formal training.

As more and more aerospace companies and MROs incorporate AR into their digital transformation strategies, it’s important to evaluate the device options available and the underlying technology that will drive the overall experience for the workforce using those devices day-to-day.

AR finds a voice

Interaction paradigms for mobile devices and wearables have come and gone. Gestures, such as pinch-and-zoom, were popularized by the iPhone and other consumer devices. However, they aren't ideal in industrial settings. Imagine trying to service landing gear shock struts and wheel bearings while juggling equipment in one hand and swiping at a tablet with the other to follow work instructions. It is awkward and distracting, and it creates inefficiencies for workers who need to keep their hands on equipment and their eyes focused on the job. Fortunately, a new interaction paradigm has emerged: voice.

Voice-assistant devices – Apple’s Siri, Amazon’s Alexa, Google Home – are taking the consumer realm by storm. We may not find Alexa on a shop floor, but the speech recognition and natural language processing (NLP) technologies that power it are solidifying a spot in the enterprise. Gartner Research predicts that by 2020, natural-language generation and artificial intelligence (AI) will be standard on 90% of modern business intelligence platforms. Research and advisory firm Forrester says demand for developers who know how to build AR and NLP-based experiences will increase.

Voice in action

Boeing is one aerospace giant combining the power of voice with AR. With Upskill’s Skylight enterprise AR platform, technicians assembling complex wiring harnesses interact with the software on smart glasses using voice commands, remaining hands-free to perform their task. Uniting voice and AR has led to a 25% improvement in productivity and effectively reduced errors to zero.

Similarly, GE Aviation leverages voice to interact with Skylight on Glass – integrated with a Wi-Fi-enabled torque wrench – to properly tighten B-nuts on jet engines. In one pilot program, GE Aviation experienced 8% to 12% efficiency improvement.

Voice recognition technology can also quickly initiate a call to an expert who can provide guidance directly through AR devices. This resolves issues more quickly and helps new and less-experienced workers rapidly get up to speed – closing the skills gap a little tighter.

A Boeing wiring technician receives instructions while assembling a wire harness.

What’s next?

While voice recognition technology is already making a tangible impact, we haven't tapped its full potential. Voice is still limited in the vocabulary of words and phrases it can accurately recognize, and the required commands can feel stilted rather than natural. We are now moving toward context recognition – the technology needs to recognize not only what you are saying but the context in which you are asking it. For example, a technician may ask the system to bring up a manual on smart glasses, but the system doesn't necessarily know that the worker needs the manual for a Boeing 747, not a 767.

Improving accuracy comes down to context – what’s happening in the environment, and what’s the user’s intent? This will also require advances in sensor technology, which will help detect the user’s environment. With the right blend of NLP and sensors, additional use cases will emerge for AR-powered wearables.
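To make the idea concrete, here is a minimal sketch of context-aware command resolution. All names here (WorkContext, resolve_command, the manual catalog) are hypothetical illustrations, not part of Skylight or any real AR platform's API: the point is simply that the session's context supplies the qualifiers the spoken command leaves out.

```python
# Hypothetical sketch: resolving an ambiguous voice command using
# session context inferred from sensors or the active work order.
from dataclasses import dataclass

@dataclass
class WorkContext:
    """Context the system already knows about the current job."""
    aircraft_model: str   # e.g. "747" vs. "767"
    task: str             # e.g. "wiring harness assembly"

# A tiny catalog standing in for a document repository.
MANUALS = {
    ("747", "wiring harness assembly"): "747-wiring-manual.pdf",
    ("767", "wiring harness assembly"): "767-wiring-manual.pdf",
}

def resolve_command(utterance: str, ctx: WorkContext) -> str:
    """Map a recognized utterance plus context to a concrete document.

    On its own, "bring up the manual" is ambiguous; the aircraft model
    and task from the work context supply the missing qualifiers.
    """
    if "manual" in utterance.lower():
        key = (ctx.aircraft_model, ctx.task)
        doc = MANUALS.get(key)
        if doc is None:
            raise LookupError(f"No manual found for {key}")
        return doc
    raise ValueError(f"Unrecognized command: {utterance!r}")

ctx = WorkContext(aircraft_model="747", task="wiring harness assembly")
print(resolve_command("Bring up the manual", ctx))  # 747-wiring-manual.pdf
```

The same spoken phrase resolves to a different document when the context changes – which is exactly the disambiguation that sensor-driven context recognition promises to automate.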

As the aerospace industry turns to AR technology to close the talent gap and drive toward Industry 4.0, it must consider which methods of interaction will make AR deployments most impactful. Voice is currently the loudest interaction paradigm, especially for workers who need to operate swiftly while keeping their hands free. And, with more seamless interactions, voice will not only drive broader adoption of AR but also lay the foundation for the adoption of other disruptive technologies, such as virtual reality (VR), mixed reality (MR), and AI. Together, these solutions will best prepare MROs and their workforces for a bright future.

Upskill
www.upskill.io

About the author: Brian Ballard is Upskill CEO and co-founder. He can be reached at 703.436.9283 or info@upskill.io.

July 2018