Voice Assistant | Breaking Cybersecurity News | The Hacker News

Hackers Can Use Ultrasonic Waves to Secretly Control Voice Assistant Devices

Mar 02, 2020
Researchers have discovered a new way to target voice-controlled devices: propagating ultrasonic waves through solid materials in order to interact with and compromise them using inaudible voice commands, without the victims' knowledge. Called "SurfingAttack," the attack leverages the unique properties of acoustic transmission in solid materials such as tables to "enable multiple rounds of interactions between the voice-controlled device and the attacker over a longer distance and without the need to be in line-of-sight." In doing so, the researchers outline in the paper, an attacker can inconspicuously interact with a victim device through its voice assistant, hijack SMS two-factor authentication codes, and even place fraudulent calls. The research was published by a group of academics from Michigan State University, Washington University in St. Louis, the Chinese Academy of Sciences, and…
Hackers Can Silently Control Your Google Home, Alexa, Siri With Laser Light

Nov 05, 2019
A team of cybersecurity researchers has discovered a clever technique to remotely inject inaudible, invisible commands into voice-controlled devices, all just by shining a laser at the targeted device instead of using spoken words. Dubbed 'Light Commands,' the hack relies on a vulnerability in the MEMS microphones embedded in widely used voice-controllable systems, which unintentionally respond to light as if it were sound. According to experiments by a team of researchers from universities in Japan and Michigan, a remote attacker standing several meters away from a device can covertly trigger the attack simply by modulating the amplitude of laser light to produce an acoustic pressure wave. "By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio," the researchers said in their paper [PDF].
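The quoted sentence captures the core of the technique: the attacker's audio waveform becomes the intensity envelope of the laser beam. A minimal sketch of that mapping, assuming illustrative bias and depth values (not figures from the paper), since a real light source cannot emit negative intensity:

```python
import numpy as np

def laser_drive(audio: np.ndarray, bias: float = 0.5, depth: float = 0.4) -> np.ndarray:
    """Map an audio waveform onto a laser-diode intensity signal in [0, 1].

    Light intensity cannot go negative, so the audio rides on a DC bias;
    a MEMS microphone responds to the intensity variation as if it were
    sound pressure.
    """
    norm = audio / np.max(np.abs(audio))  # normalize to [-1, 1]
    drive = bias + depth * norm           # keep within [bias - depth, bias + depth]
    return np.clip(drive, 0.0, 1.0)

# Toy "spoken command": a 1 kHz tone standing in for recorded speech.
t = np.arange(4800) / 48_000
drive = laser_drive(np.sin(2 * np.pi * 1000 * t))
```

The bias/depth split is the design choice to notice: the quieter the bias headroom, the less of the audio waveform survives in the light, which is why the researchers describe modulating an electrical signal into the beam's intensity rather than switching it on and off.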
Code Keepers: Mastering Non-Human Identity Management

Apr 12, 2024 | DevSecOps / Identity Management
Identities now transcend human boundaries. Within each line of code and every API call lies a non-human identity. These entities act as programmatic access keys, enabling authentication and facilitating interactions among systems and services; they are essential for every API call, database query, or storage account access. Just as we depend on passwords and multi-factor authentication to safeguard human identities, a pressing question arises: how do we guarantee the security and integrity of these non-human counterparts? How do we authenticate, authorize, and regulate access for entities devoid of life but crucial to the functioning of critical systems? Let's break it down. The challenge: imagine a cloud-native application as a bustling metropolis of tiny neighborhoods known as microservices, all neatly packed into containers. These microservices function like worker bees, each performing its designated task, be it processing data, verifying credentials, or…
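To make the idea concrete, here is a minimal sketch of one common pattern for non-human identity: a short-lived, signed credential issued to a workload rather than a person. The service name and shared secret here are hypothetical; a real deployment would fetch keys from a secrets manager or use a platform identity service, never a hard-coded value.

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

# Hypothetical shared secret; in practice this comes from a vault, not source code.
SECRET = b"demo-shared-secret"

def issue_token(service: str, ttl_s: int = 300) -> str:
    """Issue a short-lived signed credential for a workload (not a human)."""
    claims = {"sub": service, "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str) -> Optional[dict]:
    """Return the claims if the token is authentic and unexpired, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged signature
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired: short lifetimes limit the damage of a leaked key
    return claims
```

A calling microservice would present such a token on every request, and the receiving service would verify it before acting, mirroring the way a password plus MFA gates a human login.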
Hackers Can Silently Control Siri, Alexa & Other Voice Assistants Using Ultrasound

Sep 07, 2017
What if your smartphone started making calls, sending text messages, and browsing malicious websites on its own, without ever asking you? This is not imagination: hackers can make it happen using your smartphone's personal assistant, such as Siri or Google Now. A team of security researchers from China's Zhejiang University has discovered a clever way of activating voice recognition systems without speaking a word, by exploiting a security vulnerability that is apparently common across all major voice assistants. DolphinAttack (Demo): How It Works. Dubbed DolphinAttack, the technique works by feeding the AI assistants commands at ultrasonic frequencies, which are too high for humans to hear but perfectly audible to the microphones on your smart devices. With this technique, cybercriminals can "silently" whisper commands into your smartphone to hijack Siri and Alexa, forcing them to open malicious websites and even…
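The standard way to place a voice command at ultrasonic frequencies is amplitude modulation onto a high-frequency carrier; the microphone's own nonlinearity then demodulates the envelope back into the audible band. A minimal sketch of that modulation step, with the carrier frequency and modulation depth chosen as illustrative values:

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000  # above human hearing (~20 kHz) but within mic bandwidth

def am_modulate(audio: np.ndarray, fs: int = FS, carrier_hz: int = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier."""
    t = np.arange(len(audio)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # The DC offset keeps the envelope non-negative (modulation index < 1),
    # so the original waveform survives as the transmitted signal's envelope.
    envelope = 1.0 + 0.8 * audio / np.max(np.abs(audio))
    return envelope * carrier

# Toy "voice command": a 1 kHz tone standing in for recorded speech.
t = np.arange(FS // 10) / FS
voice = np.sin(2 * np.pi * 1000 * t)
ultrasonic = am_modulate(voice)
```

Everything radiated sits around 25 kHz, inaudible to people, yet the envelope that a nonlinear microphone recovers is the original command, which is what lets the assistant hear a "voice" no human in the room can.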