It has been reported that a couple's conversation, held within "earshot" of an Amazon Echo (or "Alexa") device, was recorded without their knowledge. The device then sent the audio to one of their contacts, who was able to listen back to the whole thing.
For those who don't pay attention to the latest technological trends, the Amazon Echo is an example of the latest breed of smart assistants, a category that also includes Google Home. These are devices that you place around your home and that listen out for questions or commands vocalised in their vicinity.
Up until now, most examples of these machines "going rogue" have involved children ordering things via their parents' Amazon account, or the device overhearing and reacting to something happening on a TV. A couple of months ago, there were also reports of Alexa devices laughing to themselves, seemingly unprompted. It turned out the devices were mishearing general chatter and interpreting it as "Alexa, laugh". While possibly annoying, these sorts of things are usually filed under "amusing glitches". This latest example seems a lot more serious.
Fortunately, the conversation in question didn't contain anything of note: just a chat between a couple about hardwood floors. It could have been much worse. The first they knew of the problem was when the contact the device had sent the recording to rang them and told them what they'd just been talking about.
So, is this the first sign that the robots are starting to fight back? Reminding us who's boss? Well... probably not. It's more likely evidence that these devices aren't quite as smart as they appear. The reason they seem so good at responding to commands is that they're programmed to listen for them with a wide margin of error. The software actively *wants* to hear these commands. Amazon would receive far more complaints if the device failed to react half the time because it was "playing it safe" and ignoring genuine requests.
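To get a feel for why a "wide margin of error" produces false triggers, here is a minimal, entirely hypothetical sketch. None of this is Amazon's real code; the wake word check and the permissive threshold are illustrative assumptions only.

```python
import difflib

# Hypothetical sketch: a deliberately permissive wake-word matcher.
# A low threshold means fewer ignored genuine requests, but also
# more false wakes from ordinary background chatter.
WAKE_WORD = "alexa"
THRESHOLD = 0.6  # permissive by design (illustrative value)

def sounds_like_wake_word(word: str) -> bool:
    """Return True if a heard word is 'close enough' to the wake word."""
    similarity = difflib.SequenceMatcher(None, word.lower(), WAKE_WORD).ratio()
    return similarity >= THRESHOLD

# Chatter that merely resembles the wake word slips through:
for heard in ["alexa", "alex", "hardwood"]:
    print(heard, "->", sounds_like_wake_word(heard))
```

Tighten the threshold and the device ignores real requests; loosen it and it wakes up mid-conversation. The incident above is the second failure mode.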
Amazon have given an explanation of what happened in this case, and it falls in line with the above. The device did exactly what it thought it was being told to do. Something in the conversation caused the device to wake up and start recording. It then interpreted another part of the conversation as "send message", asked out loud where it should send it (although the couple did not register it doing this), and then interpreted yet another part of the conversation as the name of a contact to send it to. Nothing nefarious. Just a smart device being a little... dumb. Looks like everyone can push the rise of the machines back a couple of years in their calendars.
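The chain Amazon described can be pictured as a simple state machine, where each step only needs a rough match, so three coincidences in a row are enough to send a recording. This is a toy reconstruction under assumed names; the states, phrases, and the contact "dave" are all made up for illustration.

```python
# Toy state machine, loosely mirroring the reported sequence:
# wake word -> "send message" intent -> contact name -> message sent.
# Every name and phrase here is hypothetical, not from Amazon.

CONTACTS = {"dave"}  # illustrative contact list

def run_dialogue(heard_phrases):
    state = "IDLE"
    for phrase in heard_phrases:
        if state == "IDLE" and "alexa" in phrase:
            state = "RECORDING"            # device wakes and starts recording
        elif state == "RECORDING" and "send message" in phrase:
            state = "AWAITING_CONTACT"     # device asks (out loud) who to send to
        elif state == "AWAITING_CONTACT" and phrase in CONTACTS:
            return f"message sent to {phrase}"  # recording goes out
    return "nothing sent"

# Three misheard fragments of an innocent chat are all it takes:
print(run_dialogue(["alexa?", "we should send message...", "dave"]))
```

No single step is outlandish on its own; it is the run of near-matches, each waved through by permissive matching, that turns a chat about flooring into an outgoing voice message.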