Hidden Voice Commands Can Hijack Your Smart Devices

Researchers have demonstrated how hidden voice commands that are unintelligible to humans can be used to control and hijack smart devices through their virtual assistants.

Hidden Voice Commands Take Over Smart Devices via Virtual Assistants

Security researchers have long warned of the dangers posed by the virtual assistants that are popular on consumer devices. Their main concerns so far have been data collection and privacy. However, a research team has now showcased a much more dangerous scenario, demonstrated in several experiments. The primary function of a virtual assistant is to listen to its users and carry out their voice commands. It turns out, however, that a variety of sounds that are not recognizable speech can also transmit instructions to the devices: there is a gap between the sounds that people can distinguish and understand and those that computers can.

A malicious user can thus hijack a device simply by transmitting the necessary sounds while in close proximity to the target device. The team carried out several demonstrations and experiments, and according to their results the attacks are effective at a distance of up to 12 feet. It is also possible that such messages could be encoded in background audio sources or even music. By design, most virtual assistants include some kind of security measures that defend them against malicious commands; however, these can be defeated if a potential weakness is identified by the criminals. It has been demonstrated that hidden voice commands can cause serious damage to their victims: linked bank account apps can be ordered to send money to criminal hackers, and instructions can be sent to download files which may contain malware or ransomware.

The researchers crafted special commands specifically targeting Google’s voice assistant. The investigation was carried out using two different approaches:

  1. White-Box Model – Under this model the attacker has full knowledge of the speech recognition system. The researchers created attack commands which, as their user testing demonstrated, are not understandable by ordinary humans.

  2. Black-Box Model – The potential attacker uses the speech recognition system as an opaque oracle, without any prior knowledge of the actual algorithm it uses (a minimal sketch of this approach follows the list).
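
To make the black-box idea concrete, the following Python sketch shows the kind of trial-and-error search loop such an attacker might run. Every function here is a hypothetical placeholder rather than a real assistant API: transcribe stands in for querying the recognizer as an opaque oracle, and human_clarity stands in for the listener tests the researchers used. The point is only the structure of the search, which keeps mutations that still transcribe to the target command while becoming harder for humans to understand.

    import random

    TARGET_COMMAND = "ok google browse to evil.com"

    def transcribe(audio):
        # Black-box oracle. In a real attack this would play the audio to
        # the assistant (or submit it to a speech-to-text service) and read
        # back the transcription. Toy stand-in so the sketch executes.
        return TARGET_COMMAND

    def human_clarity(audio):
        # Proxy for how intelligible the clip is to a listener; the actual
        # study relied on human transcription tests. Toy stand-in: the
        # noisier the waveform, the lower the score.
        return -sum(abs(s) for s in audio) / len(audio)

    def mutate(audio):
        # Slightly perturb the waveform; a placeholder for the real audio
        # transformations (added noise, feature warping, etc.).
        return [s + random.gauss(0.0, 0.01) for s in audio]

    def black_box_attack(seed, rounds=100):
        # Keep mutations that the recognizer still hears as the target
        # command but that humans find harder to understand.
        best = seed
        for _ in range(rounds):
            candidate = mutate(best)
            if (transcribe(candidate) == TARGET_COMMAND
                    and human_clarity(candidate) < human_clarity(best)):
                best = candidate
        return best

    if __name__ == "__main__":
        seed_audio = [0.0] * 16000  # one second of "audio" at 16 kHz
        adversarial = black_box_attack(seed_audio)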

The results of the attacks show that this is a dangerous attack mechanism which could inflict serious damage on its victims. The researchers propose several defense mechanisms to safeguard against attacks initiated using this technique:

  • Device alerts that notify the user when a voice command is received and initiated.

  • Various audio CAPTCHA systems and additional confirmation requests.

  • Speaker recognition to verify that commands come from the device’s owner.

  • Filtering out artificial sounds (a simple filtering sketch follows this list).

  • Machine learning algorithms that detect whether a command comes from a live human speaker.
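
As a rough illustration of the filtering idea, the sketch below band-limits incoming audio to the typical human voice range before it would reach the recognizer, stripping out-of-band components that an artificial command might rely on. This is a generic band-pass filter built with NumPy/SciPy under assumed cutoff frequencies, not the researchers’ actual defense.

    import numpy as np
    from scipy.signal import butter, lfilter

    def band_limit(audio, fs, low_hz=300.0, high_hz=3400.0, order=4):
        # Keep only the telephone-quality voice band (~300-3400 Hz) so a
        # command hidden in out-of-band or heavily distorted audio loses
        # the components the recognizer keyed on. The cutoff values are
        # illustrative assumptions, not taken from the research.
        nyquist = fs / 2.0
        b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
        return lfilter(b, a, audio)

    if __name__ == "__main__":
        fs = 16000
        t = np.arange(fs) / fs
        # Toy input: a 1 kHz "voice" tone plus a 7 kHz artificial
        # component that the filter should suppress.
        noisy = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 7000 * t)
        cleaned = band_limit(noisy, fs)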

For more information about the issue, you can view the researchers’ dedicated project site.


Author: Martin Beltov

Martin graduated with a degree in Publishing from Sofia University. As a cyber security enthusiast, he enjoys writing about the latest threats and mechanisms of intrusion.

