2016 was arguably one of the biggest years for hardware and software innovation in recent history. Through techniques like machine learning, our devices are finally beginning to understand us on a more fundamental level than ever before. Though true artificial intelligence is still not here quite yet, contextual data storage combined with the simplicity of almost perfected speech recognition has changed the way we interact with our devices, and these systems will keep being iterated upon until technology is so seamless that we forget we are even using it.

It seems quite clear that AI is the future, but what about AR? Will augmented reality integrate with artificial intelligence to make our lives as simple as possible? Let’s take a look at a few possible scenarios, along with technologies on the market today that seem to be headed toward this transition.

What is AI?

Artificial intelligence describes technology that can make decisions on its own, with implementations varying widely in sophistication. They can be low level (such as an easy-mode game AI) or high level, such as AlphaGo, the Google DeepMind project that defeated top Go professional Lee Sedol in March 2016. In the consumer space, AI has looked a bit different. Though not true AI, which could theoretically do everything humans can do (even write more AI software), the assistive technologies currently thriving in the market can help make our lives just a little bit easier through automation and voice recognition.
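
To make the low-level end of that spectrum concrete, an easy-mode game AI is often little more than a handful of hand-written rules. Here is a minimal sketch in Python (every name and threshold is invented purely for illustration):

```python
import random

def easy_mode_ai(enemy_health, player_distance):
    """Pick the enemy's next action from simple hand-written rules."""
    if enemy_health < 20:
        return "retreat"  # survival rule takes priority
    if player_distance < 5:
        return "attack"   # the player is within striking range
    # deliberately imperfect: an "easy mode" opponent sometimes idles
    return random.choice(["advance", "idle"])

print(easy_mode_ai(enemy_health=80, player_distance=3))  # -> "attack"
```

High-level systems like AlphaGo sit at the other extreme, replacing hand-written rules with deep neural networks and massive amounts of training.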

See also: Google Home review – the future of the home? (November 18, 2016)

The two current leaders in the assistive technology space are Google (Google Home) and Amazon (Amazon Echo, powered by the Alexa assistant), smart home devices that can play music, look up facts, and control things like lighting through voice commands. While these products may seem like magic to some, they are still what many would consider the first generation of what assistive tech could become. Machine learning has arguably evolved more in the last five years than it did in the previous twenty, largely due to huge developments in computing that allow us to index and search labeled data almost instantly. Thanks to these developments, assistive technology can now parse a query and return a result faster than ever before, allowing for near-instantaneous answers and actions based on nothing but your voice.
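
As a rough mental model of that query-to-result loop, consider the sketch below. It is purely illustrative – real assistants run large-scale speech recognition and ranking systems, not a hard-coded dictionary of phrases:

```python
# Hypothetical toy pipeline: a transcribed utterance is matched
# against known "intents" and mapped to an action or a search.
INTENTS = {
    "play music": lambda q: "Playing your playlist...",
    "turn on the lights": lambda q: "Lights on.",
    "what is": lambda q: "Searching for: " + q,
}

def handle_utterance(text):
    """Match a transcribed utterance against known intent triggers."""
    query = text.lower().strip()
    for trigger, action in INTENTS.items():
        if query.startswith(trigger):
            return action(query)
    return "Sorry, I didn't catch that."

print(handle_utterance("What is the tallest mountain?"))
```

The leap of the last few years is that trained models have replaced this kind of brittle phrase matching, which is why a query no longer has to hit an exact trigger to get an answer.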

See also: Why the three laws of robotics won’t save us from Google’s AI – Gary explains (September 29, 2016)

But how can these technologies be integrated into our lives even more seamlessly? Many think the next big step is AR, or augmented reality.

What is AR?

Augmented reality is technology that overlays images or holograms onto the real world. There are multiple tiers of this technology: lower-level implementations use a phone’s camera to add images to your surroundings, while higher-level versions like Microsoft’s HoloLens map the space around you in enough detail to allow for interactive overlays. The tech is still in its infancy, though many companies have attempted more capable and interactive versions over the years. Quite a few have come and gone, even in just the last couple of years, and it seems many of these products were simply too early to market.

Google Glass was a simple, interactive way to integrate AR overlays into our everyday lives. It connected to smartphones to project images in front of your eye and help you get things done with less effort. Though relatively basic in its first generation, it integrated technology into daily life in an almost completely seamless way. Want to record a video? You could do that in an instant. Need directions? Glass showed your next turn right in front of your eyes, without you having to take your attention off the road. Glass was a huge first step toward making AR part of our daily lives, as it let us use technology whenever we wanted with almost no effort at all. The project’s biggest problems were its timing and its feature set. The public was not quite ready for technology that could record their every move at any moment, especially when there was no indication it was doing so. Though the project is now effectively dead, its ideas seem to be coming back in multiple projects from various companies looking to take its place.

HoloLens is a holographic overlay headset developed by Microsoft. The technology can create images and objects over the real world that are interactive in various ways. Through apps like Skype, Minecraft, and others, users can project screens onto walls, manipulate 3D objects, and more. The headset is still at a relatively early, developer-focused stage, but it shows an iteration on what Google Glass was originally designed to do. Check out this post over at VRSource for more information on the project.

How can AI and AR work together?

AI and AR are both technologies designed to augment our lives with helpful information and tools in as seamless a way as possible. One aims to automate our routines through speech and other simple forms of input; the other is developing new ways to display and manipulate information to make our work, entertainment, and everyday lives better. So how might the two work together? There are a few possibilities.

The Iron Man ‘Jarvis’ scenario

In the Iron Man series, Tony Stark has an AI system he developed called J.A.R.V.I.S. This AI is so advanced that it knows the ins and outs of Tony’s behavior better than he knows himself. It automates almost all of his processes and provides accurate information the moment he asks for it. Inside the Iron Man suit, Stark has a heads-up display showing all sorts of real-world metrics that might be of interest. Whether it’s a rocket flying at him at high speed or his suit’s power levels dropping, Tony Stark can ask Jarvis for up-to-date information that helps him get things done more efficiently.

The closest version of this AR technology we currently have is HoloLens, which, like Iron Man’s suit, can overlay images and let us interact with them in real time. HoloLens doesn’t have AI built in today, but could it feasibly in the future? Assistive technologies like Google Home and Amazon Alexa already give us fast access to information and can control things like our homes, and if evolved enough, they could be far more useful in this space. Even mobile assistants like Siri and Google Assistant can remember our individual preferences, so an integration of the two technologies would essentially be a low-level form of the assistant Stark’s stories have imagined since the comics of the 1960s.

A technology like this could have a lot of potential use cases in our everyday lives. In fact, if information were available right in front of our eyes, many of the wearable devices we use today could be completely replaced. Need to know your heart rate? It’s in the top-right corner of your vision. Need an Uber to pick you up? One simple voice command. Want to know someone’s name? Their identity can be displayed above them at any time. In this way, the need to interact with a handheld device could be removed entirely. Or could it?

Could this technology replace our mobile devices?

The Google Glass V2 scenario

To make a product like this feasible for the consumer market, it would need to shrink to a much smaller size and a more wearable form factor. Sound familiar? That’s a lot like what later generations of Google Glass could have become, had consumer and media fear of its recording capabilities not shut the project down while it was still in its infancy. Still, there are ways for a technology like this to resurface in the market. Other companies like Snapchat have worked around fears of covert recording by adding a light that shows when their camera glasses are recording, and they seem to be doing all right so far.

Imagine Microsoft’s HoloLens technology mixed with an AI that can answer all of your questions and helps you do things before you even need them done. What once sounded like far-off sci-fi from a ’60s comic is now a feasible possibility, and one that could be only a few years away. Though Microsoft has not announced any consumer version of its HoloLens headset, an iteration of similar technology could very well be in the works, whether from Microsoft itself, Google, or another equally capable developer.

Is this something you would like to see?

As with any technology, there are some substantial downsides to all this, privacy being one of the biggest concerns. Do we want to live in a world where we are so dependent on assistive technology that we can’t imagine life without it? Putting that much power in the hands of one company is a risky move, especially when every move and metric of our daily lives is being measured to the furthest extent. You could argue that all of this data collection is for the good of the end user, but sometimes it is better to be cautious about how much data we give any one entity. I’m not going to rant here about why you should be careful with your data, but as our technologies become more and more essential to our daily lives, we need to consider who we are placing this reliance in, and what we are willing to give up to use it.


As we barrel headfirst into the future of computing, there are amazing technologies coming to the forefront of our culture every day. They say technology is on an exponential curve, and based on the change we’ve seen in the last 5 years alone, it’s hard not to agree.

How long do you think it will be before this technology hits the market? There’s no doubt it is an interesting and futuristic venture, and there are probably loads of companies attempting to perfect it as we speak.

David Imel
David Imel is a 21-year-old technology enthusiast hailing from Smartville, California. He moves a lot, so he's probably not really living in any one place. David loves Android, writing, computer hardware, mechanical keyboards, Super Smash Bros. Melee, and many other geeky things. He attended the University of California, Santa Cruz from 2013-2017 and now writes articles like there's no tomorrow.
  • mark

    “Though true artificial intelligence is still not here quite yet”

    If by “true AI” you mean human level AI, I’d say things are still a fair way off (unless by “quite yet” you mean on the scale of 20 years).

    I’d say it’s pretty much true that each year is the biggest year in innovation – there’s an exponential advancement rate in technology.

    “almost perfected speech recognition has changed the way we interact with our devices”

    And been a total failure in terms of PCs and phones – most people aren’t talking to their computers, after years of claiming this would happen real soon now, outside of some special cases like hands-free driving. Home devices might see a better use for this (so you can activate something when you’re not even holding a device).

    Also, whilst speech recognition itself is an application of AI, there’s no understanding – the interpretation of commands, including the “assistive technology” you mention, is no more AI than using any non-voice UI. There was a good article on XDA a few days ago which pointed out how companies are incorrectly using “AI” to mean anything that’s “intuitive” or useful. Though some of the features might end up being things we should consider AI or use machine learning behind the scenes – e.g., Google search (especially image search), or Google’s public transport directions – but these can be used directly also without using some fancy overhyped “assistant”.

    • David Imel

      Really appreciate your response, Mark.

      When I mentioned ‘not quite here yet’, I do agree that human level AI is probably at least 20 years away. I used the term because while that is most likely the case, you never really know what can happen in a couple of years, especially with the exponential trajectory that technology takes.

      When I said ‘nearly perfected’ speech recognition, I was referring more to our devices’ ability to interpret speech, even if that just means translating it to text and querying it. I agree, no one is talking to their devices yet. Alexa is certainly a huge jump forward in consumer use of speech recognition, but it’s still going to be quite a while before interacting with devices by voice becomes that much more common.

      When I said ‘understanding’ I was referring more to contextual understanding – looking for key words and instances and using that data in subsequent interactions. “What is Obama’s last name? How tall is he?”, things like that. The same goes for home automation technologies that set certain values and settings based on different circumstances. Again, this obviously isn’t a machine ‘understanding’ anything – it’s just a dressed-up if-then statement – but I wanted to simplify it a bit, because to the general consumer that’s what it looks like: understanding.
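
      To put that ‘dressed up if-then’ in concrete terms, here’s a toy sketch (purely illustrative – all names and data are made up, and no real assistant is built this way):

      ```python
      # Remember the last entity mentioned so that a follow-up like
      # "How tall is he?" can be resolved against stored context.
      FACTS = {"obama": {"last name": "Obama", "height": "6 ft 1 in"}}
      last_entity = None

      def answer(question):
          global last_entity
          q = question.lower()
          if "obama" in q:
              last_entity = "obama"  # keyword spotted: remember context
          if last_entity is None:
              return "Who do you mean?"
          if "last name" in q:
              return FACTS[last_entity]["last name"]
          if "tall" in q:
              return FACTS[last_entity]["height"]
          return "I don't know."

      print(answer("What is Obama's last name?"))  # -> Obama
      print(answer("How tall is he?"))             # -> 6 ft 1 in
      ```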

      Thanks again for your thoughts!

      • mark

        Yeah, I agree things like “What is Obama’s last name?” are good examples that seem to show understanding – I just mean it’s not something tied only to assistants (indeed, Google search returns the same answers). I agree that more recent things like Google Assistant behave more like an AI bot and handle things like context, which is much more than the rules-based checking I saw with things like Cortana or the Google Now commands (where it works if you type/say the specific known commands, but any trivial variation goes unrecognised – not a criticism of those tools, as natural language processing is a hard thing).

        • Daggett Beaver |dBz|

          That kind of natural language processing (context, follow-up questions) has been around since the 1980s, if not before. So it must not be THAT hard.

          Speech recognition is what has improved dramatically, and text-to-speech sounds much more natural now. But the natural language processing hasn’t improved much since the ’80s.

  • Vivek Rameses

    God I hope not. These technologies are intrusive. Hopefully they’ll die down soon. VR might have a shot, but that will probably go the way of 3D TVs.

    • Scr-U-gle

      The biggest issue is the users.

      How many times were there headlines about arsehole drones demanding to use them in inappropriate places?

  • Major_Pita

    I don’t think AI will become a thing until it becomes capable of having a conversation-driven exchange. Like:
    A guy driving down the street Saturday afternoon…
    Bob thinking out loud – I really like dogs
    AI (acquires car audio system) – they drool…
    Bob – how many dogs do you think are in the world right now?
    AI – Somewhere around 450 million, including feral…
    Bob – Wow, how many of those are undomesticated?
    AI – approximately 23 percent.
    Bob – Amazing, how many of those are here in the U.S?
    AI – 17.
    Bob – Huh?
    AI – Kidding Bob, about 12 million
    Bob – (wearing Oakley’s latest AR wrap-arounds, gazing at a theater up the block) Hey look – the new Star Wars flick is playing there. I wonder when the next showing is…
    AI – (querying GPS, firing up optical recognition and acquiring the video stream from Bob’s Oakleys) In… 45 minutes…but it’s already sold out. Should I buy a ticket for tomorrow?
    Bob – Uh…yeah. Like in the afternoon – after 2 o’clock?
    AI – Wait one…done. 2:25. Got a seat in first row, second section, like last time.
    Bob – Sweet…
    AI – PEDESTRIAN! – there! Flashes a citron-colored highlight in Bob’s Oakleys around a figure walking out from between two parked cars.
    Bob – Ah…thanks.
    AI – When do I get to drive?

    Once the conversation starts, the topic remains dogs until the subject is changed. The AI is running in ‘always on’ mode, has authorized access to devices like Bob’s Oakleys, the car’s nav system, and Bob’s phone. Running personality type: Helpful/Snarky.
    Bob has put a permanent NO ACTION on keyword: Hookers after last Saturday night.


  • The future is ‘Thinness’.

  • Great post. Yes, the future of mobile is AI, AR, and also VR.

  • Scr-U-gle

    I find it hilarious that Google Glass mainly failed because of its users being arseholes.

    You pricks made Google Glass untenable.

  • Daggett Beaver |dBz|

    Amazon Echo and Google Home aren’t examples of AI. The only thing they can do that even comes close to AI is understanding follow-up questions. And software has been doing that since Lotus HAL and Symantec Q&A, products from the 1980s that had natural language processing.

    What Amazon Echo and Google Home add to the picture is speech recognition and text-to-speech. And that’s been around since forever, too. It’s just improved a lot.

  • Jackson Bailey

    I personally would love to see a Google Glass v2.
