
Always-listening devices and the question of privacy vs safety

Alexa is already embroiled in a murder trial, and now she has been breaking up domestic disputes. Could always-listening devices be a force for good? Or evil?
July 21, 2017

Was Harrison Ford really a replicant in Blade Runner?

Turns out it might not matter. There’s possibly already an AI detective living in your home. Her name is Alexa.

Alexa has been busy lately, solving murders and breaking up domestic disputes. But do these heroics represent a breach of our privacy? Or should these super-sleuthing skills be considered a good thing?

In the smart home, with the rope

Alexa’s first brush with the law came in the high-profile case of Victor Collins, who was found dead in a hot tub on November 22, 2015. Victor’s wounds suggested he had been strangled and drowned, and the hot tub was not his own, but rather belonged to his friend James Bates, who had been hosting a hot tub party that night. Bates was considered the chief suspect in the murder and now awaits his bond hearing at Benton County Jail. But in a science-fiction twist of fate, it may be Amazon’s Alexa who holds the key to the whole trial.

The prosecution determined that James’ Echo device might be able to shed some light on the situation. If someone had issued a command to Alexa during the night, then background noises picked up during recording might help to prove his guilt or innocence.

Amazon, however, refused to hand over the data:

Amazon will not release customer information without a valid and binding legal demand properly served on us. Amazon objects to overbroad or otherwise inappropriate demands as a matter of course.

Before the matter could be taken further, Bates himself gave permission for his data to be used in court. As of yet, we do not know if any useful information has been gleaned from the device. Either way though, it could mark a significant change in the way that we view the smart devices in our homes.

“Alexa, call the cops”

More recently, Alexa has also played a role in a domestic abuse case. One Eduardo Barros reportedly struck a woman in the face with a handgun at a home in Bernalillo, New Mexico. According to the victim, Eduardo then demanded to know whether she had called the sheriff, at which point the nearby Echo took matters into its own hands and called 911. In the 911 recordings, the victim can also be heard shouting ‘Alexa, call 911’. Eduardo was arrested, and the victim refused medical attention despite some injuries to her face. All in a day’s work for Alexa, it would seem.

The twist this time is that Amazon says Alexa does not have the capability to call 911. Despite this, the sheriff’s department claims that the recording itself, along with the victim’s statement, leads them to believe otherwise.

So, what happened this time? Is Amazon lying? Is this a case of a unique Echo gone rogue? Or is the sheriff’s department mistaken (which, let’s be honest, is the most likely scenario)?

Does Alexa record everything you say?

The key thing to keep in mind is that Alexa does not record everything you say. Neither do other smart assistants, such as Google Home or Cortana. Instead, they ‘listen’ for the wake word using on-device keyword detection and only begin recording once they have been called to action. This is handled using voice recognition, but your day-to-day conversations are not stored.

But when you do give a command, these devices start recording and send that audio off to a remote server. This is where the data is interpreted, before a response is formulated and sent back to your device. The devices themselves don’t have the processing power, so they outsource it to the cloud, and that’s where the potential breach of privacy comes in.
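
To make that flow concrete, here is a minimal, purely illustrative sketch in Python. Nothing here is Amazon’s actual code: the wake word check, the handle_utterance() function and the cloud_history list are invented for the example, and plain text stands in for audio. The point is simply that only utterances containing the wake word get ‘uploaded’ and stored; everything else is dropped on the device.

```python
# Purely illustrative: text stands in for audio, and a local list stands in
# for the remote server. None of this is Amazon's actual implementation.

WAKE_WORD = "alexa"
cloud_history = []   # stands in for the recordings you can review in the app


def handle_utterance(utterance: str) -> None:
    """Simulate 'on-device' keyword detection: upload only if the wake word is heard."""
    if WAKE_WORD in utterance.lower():
        # This clip (including any background noise caught with it) is sent
        # to the server, interpreted there, and kept in your history.
        cloud_history.append(utterance)
        print(f"Sent to the cloud for processing: {utterance!r}")
    # Anything without the wake word is simply discarded and never stored.


if __name__ == "__main__":
    for heard in [
        "So how was the hot tub party?",
        "Alexa, what's the weather tomorrow?",
        "Don't tell anyone about this.",
    ]:
        handle_utterance(heard)
    print("Recordings now stored on the server:", cloud_history)
```

A real assistant runs acoustic keyword spotting on a rolling audio buffer rather than matching text, but the privacy-relevant behaviour is the same: record and upload only after the trigger.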

Further concern might be raised by the fact that this data is then kept on Amazon’s servers. In fact, you can listen to it yourself by heading over to the Alexa app and going to Settings > History. Listen to some of these recordings and you can hear voices in the background, ambient music and more. It is actually quite creepy, especially when you realise that you’re recording other people who never gave any kind of consent. This is what the prosecution in Bates’ case hopes will provide some insight, though if Bates was smart (and guilty), he could easily have tapped the ‘delete’ button to remove any suspicious recordings. (You can do that too, to remove any recordings you aren’t too happy with.)

Is Amazon right? What does the law say?

It’s a fair question to ask why this data needs to be permanently stored in the first place and isn’t just deleted. Presumably it’s to support Alexa’s machine learning (in which case, deleting the data may not really delete it). Or perhaps it’s to help with more nefarious schemes – such as to assist with shopping recommendations. Amazon itself would probably say it’s a feature for our benefit.

Regardless, the company is happy to record and store all this information, but not to use it to help with a crime. Is that a double standard? Are they valiant defenders of personal privacy or are they standing in the way of justice? Would they have handed over the data if they didn’t think it would lead to consumer privacy concerns and potentially hurt sales?

This opens a huge can of worms but let’s remember that it wasn’t the police who were requesting the data. Amazon’s statement reinforces that the company would have handed over the data had the request gone through the proper legal channels.

And there are legitimate concerns regarding customer privacy. After all, that data could include all manner of other information. It might contain personal information about innocent friends and relatives, for example.

What’s worse is that data such as this could actually be used to incorrectly incarcerate someone. In fact, one other piece of evidence from the same case came from a smart water meter. The meter registered that James had used 140 gallons of water between 1am and 3am (despite James claiming that he went to sleep at 1am). Could this water have been used to clean up evidence of a crime?

This might have been quite damning, had it not transpired that the timer on the meter was incorrect and the water had actually been used earlier – presumably to fill the hot tub. Technology does not see shades of grey but it is prone to error.

So, while you might consider using Alexa recordings as evidence in court to be no different from searching someone’s home with a warrant, there is certainly more to consider here. More worrying is that something like this could be seen to set a precedent. If police and government officials are allowed access to devices that record our everyday activities, can we ever consider a conversation to be truly ‘private’? Is this not the future that George Orwell warned us against? Ever since Edward Snowden’s leaks, this has been a hot topic of debate, but with the IoT, it takes on a whole extra dimension.

Who owns this data?

The other question, of course, is who legally owns this data. This is something of a blurry line and will vary from one device to the next. Law firm Taylor Wessing told ZDNet that legally we cannot claim to own all the data collected by our IoT devices. Conversely, a company that has invested money in building a database of user information can lay claim to owning that data.

Most important here is to read the fine print. Companies are required to inform users of what data they intend to collect and how they are going to use it, so if you want to know precisely what Amazon is doing with your data, you can find out by reading the privacy policy. The EU General Data Protection Regulation likewise states that consent must be obtained before data is collected and that this must include information about how the data will be used.

In short, when you set up your device, you are handing over any data that may be collected. It’s scary, but no more invasive than Facebook’s policies when it comes to using your photos in pretty much any way it likes.

So what does Amazon say about how it handles data collected by Alexa? One relevant passage states that it will release your information to comply with the law:

“Protection of Amazon.com and Others: We release account and other user information when we believe release is appropriate to comply with the law; enforce or apply our Terms of Use and other agreements; or protect the rights, property, or safety of Alexa, our users, or others. Obviously, however, this does not include selling, renting, sharing, or otherwise disclosing personally identifiable information from users for commercial purposes in violation of the commitments set forth in this Privacy Policy.”

It goes into more detail about what kind of information is shared with third parties and why, but suffice it to say that Amazon doesn’t sell your information to third parties and only shares it in order to supply specific services or to help you buy products through the site (though it’s easy to see this as a potential loophole). You can read the full Privacy Policy here or learn more about the law and your rights here.

Privacy vs safety

But what about allowing a device to call the police when prompted? Surely that is less harmful?

And how about fitness trackers that could alert an ambulance if your heart stops? What about letting a smart device try to listen out for signs of danger too? If such a technology could save lives, then it might be considered a good thing. And let’s not forget that similar technologies already exist: many home security devices can call a security service or even the police in the case of a break-in. And many elderly people carry personal alarms in case they fall, some of which are linked up to their heart rate.

The biggest concern here, of course, is false call-outs, which could waste police time or even be subject to intentional abuse. Likewise, once you give your device a direct line to the police, it instantly becomes possible for it to be activated without your consent, whether intentionally or accidentally.

Who wants to own a device that could dob them in to the police? No matter how unlikely? No matter whether or not they have any intention of breaking the law? Other protestations might focus on the possibility that such devices could prove to be a ‘gateway drug’, providing a useful way of making surveillance more ‘normal’. Is this just a slippery slope?

Is it already too late?

Ultimately, using any device like Alexa is likely to involve some compromise of your personal privacy. Even if Amazon protects your data, there’s nothing to say it couldn’t fall into the wrong hands. Hacking and cyberattacks are big subjects at the moment, and the thought of ransomware attacking a smart device is particularly frightening.

Of course, the best defense against something like this is to turn off your device. Want to engage in a secretive discussion? Then just switch off your digital assistants. We’ve been doing the equivalent of this for years by sticking tape over our webcams.

But turning a smart device off somewhat neuters its capabilities; the whole point is that you don’t have to turn anything on in order to get the information you need. So you need to weigh up the pros and cons of having such a device always listening.

Likewise, many fitness trackers collect more than enough data to make frightening inferences, such as when you’re likely to be home and when you’re likely to be out. By using a device with GPS, you are putting your trust in the company providing the service: its policies and its security measures.
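
To illustrate the sort of inference involved, here is a small, self-contained Python sketch with fabricated location samples; the coordinates, the is_near_home() helper and the 0.01-degree radius are all invented for the example, not taken from any real tracker. Given nothing more than timestamped GPS points, it works out which hours of the day the wearer tends to be away from home.

```python
# Toy example with fabricated data: infer the hours someone is usually out,
# given only timestamped location points like those a fitness tracker logs.
from collections import Counter
from datetime import datetime

HOME = (51.5074, -0.1278)  # hypothetical home coordinates


def is_near_home(lat: float, lon: float, radius: float = 0.01) -> bool:
    """Crude proximity check: within about 0.01 degrees (roughly 1 km) of home."""
    return abs(lat - HOME[0]) < radius and abs(lon - HOME[1]) < radius


# (timestamp, latitude, longitude) samples, invented for the example
samples = [
    ("2017-07-17 08:10", 51.5210, -0.1000),  # morning commute
    ("2017-07-17 13:00", 51.5300, -0.0900),  # at work
    ("2017-07-17 20:30", 51.5075, -0.1279),  # home in the evening
    ("2017-07-18 08:05", 51.5210, -0.1000),
    ("2017-07-18 12:45", 51.5300, -0.0900),
]

away_hours = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    for ts, lat, lon in samples
    if not is_near_home(lat, lon)
)
print("Hours of the day this person is usually out:", sorted(away_hours))
```

That is only five data points; a year’s worth paints a far more detailed picture, which is exactly why who holds that data, and how well they secure it, matters.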

Most of us take a relatively relaxed approach to this. In fact, the majority of us will happily share huge amounts of personal data online. It would be pretty easy to create a script to scrape scary amounts of information about a given person. Every time you give your email and a reused password to a new website, you are trusting that publisher not to go ahead and try using that combination to log into your PayPal…

The real defense that most of us have is anonymity. There is so much data out there, and there are so many people using these devices, that the risk of being targeted is relatively minor unless you’re in the public eye. The data collected is generally quantitative rather than qualitative, and often it is anonymous in itself. And most of us don’t do anything much worth eavesdropping on anyway!

In conclusion

Let’s not forget that we take similar risks every single day in the ‘real world’. It’s probably not hard to pry open your window but you are ‘safe’ because you are one house among thousands and most people aren’t criminals. Every time you hand over your debit card, the shop attendant could be making a note of the details under the counter for a bit of online shopping later.

And with all that in mind, most of us are happy to continue using smart devices, despite the large amount of information they seemingly collect. Not only that, but many of us might see the merit of having a device that can call the police and potentially save our lives. But likewise, there will always be those people who prefer to live off the grid and eat squirrels.


Ultimately then, what’s most important is that element of choice, and the forewarning necessary for that choice to be meaningful. Companies and governments need to be transparent about the data they collect, and from there we can make informed decisions about whether we’d rather opt in or out. And no, that doesn’t mean hiding key details in a huge ream of small print.

No one thinks of a personal alarm as invasive because it is purchased specifically for that purpose. The issues arise when something you bought to set timers calls the emergency services without your permission because ‘feature bloat’ has happened and you never read the small print.

So far, Amazon has given us every reason to trust it with our data. Google recently announced that it was going to stop scanning emails for similar reasons. But these recent court cases outline the need for clear laws and accountability regarding the technology and the companies that provide it. Otherwise: who watches the watchmen?

In the meantime: check your recorded instructions from time to time, turn off the mic when you want some privacy and always read the small print!