Voice Assistant | Breaking Cybersecurity News | The Hacker News

Hackers Can Use Ultrasonic Waves to Secretly Control Voice Assistant Devices

Mar 02, 2020
Researchers have discovered a new way to target voice-controlled devices by propagating ultrasonic waves through solid materials, allowing them to interact with and compromise the devices using inaudible voice commands without the victims' knowledge. Called "SurfingAttack," the attack leverages the unique properties of acoustic transmission in solid materials, such as tables, to "enable multiple rounds of interactions between the voice-controlled device and the attacker over a longer distance and without the need to be in line-of-sight." In doing so, an attacker can interact with the device through its voice assistant, hijack SMS two-factor authentication codes, and even place fraudulent calls, the researchers outlined in the paper, all while controlling the victim device inconspicuously. The research was published by a group of academics from Michigan State University, Washington University in St. Louis, the Chinese Academy of Sciences, and the Un…
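The core of the inaudible-command idea is ordinary amplitude modulation: the spoken command becomes the envelope of an ultrasonic carrier that humans cannot hear. Below is a minimal, illustrative Python sketch of that step; the carrier frequency, modulation depth, output sample rate, and file names are assumptions for illustration, not values from the SurfingAttack paper, and actually delivering the signal through a tabletop additionally requires a piezoelectric transducer driven by this waveform.

```python
# Illustrative sketch: amplitude-modulate a recorded voice command onto an
# ultrasonic carrier. All parameters and file names are assumed for the example.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

CARRIER_HZ = 25_000   # assumed ultrasonic carrier, above human hearing
MOD_DEPTH = 0.8       # assumed modulation depth
FS = 96_000           # output rate high enough to represent the carrier

in_rate, command = wavfile.read("voice_command.wav")     # hypothetical recording
command = command.astype(np.float64)
if command.ndim > 1:                                     # mix stereo down to mono
    command = command.mean(axis=1)
command /= np.max(np.abs(command))                       # normalize to [-1, 1]
command = resample(command, int(len(command) * FS / in_rate))

t = np.arange(len(command)) / FS
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)

# Standard AM: the inaudible carrier's envelope traces the voice command.
ultrasonic = (1 + MOD_DEPTH * command) * carrier
ultrasonic /= np.max(np.abs(ultrasonic))
wavfile.write("ultrasonic_payload.wav", FS, (ultrasonic * 32767).astype(np.int16))
```
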
Hackers Can Silently Control Your Google Home, Alexa, Siri With Laser Light

Nov 05, 2019
A team of cybersecurity researchers has discovered a clever technique to remotely inject inaudible and invisible commands into voice-controlled devices, all by shining a laser at the targeted device instead of speaking to it. Dubbed "Light Commands," the hack relies on a vulnerability in the MEMS microphones embedded in widely used voice-controllable systems, which unintentionally respond to light as if it were sound. According to experiments by a team of researchers from universities in Japan and Michigan, a remote attacker standing several meters away from a device can covertly trigger the attack simply by modulating the amplitude of laser light to produce an acoustic pressure wave. "By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio," the researchers said in their paper [PDF]. Doesn't this so…
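The physical principle here is intensity modulation: the light's brightness, rather than air pressure, carries the audio waveform. A minimal sketch of that mapping is below, assuming a hypothetical laser-diode driver with a DC bias current and a roughly linear current-to-power response; the current values and file names are illustrative and not taken from the Light Commands paper.

```python
# Illustrative sketch: map a voice command onto a laser drive current so the
# beam's intensity varies the way sound pressure would. Values are assumptions.
import numpy as np
from scipy.io import wavfile

BIAS_mA = 200.0     # assumed DC operating point of the laser diode
SWING_mA = 150.0    # assumed peak current swing around that bias

rate, audio = wavfile.read("voice_command.wav")   # hypothetical recording
audio = audio.astype(np.float64)
if audio.ndim > 1:                                # mix stereo down to mono
    audio = audio.mean(axis=1)
audio /= np.max(np.abs(audio))                    # normalize to [-1, 1]

# Intensity modulation: drive current (and hence optical power) follows the
# audio waveform, so the light hitting the MEMS microphone mimics the command.
drive_current_mA = BIAS_mA + SWING_mA * audio
np.save("laser_drive_current.npy", drive_current_mA)
```
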
GenAI: A New Headache for SaaS Security Teams

Apr 17, 2024 | SaaS Security / AI Governance
The introduction of OpenAI's ChatGPT was a defining moment for the software industry, touching off a GenAI race with its November 2022 release. SaaS vendors are now rushing to upgrade tools with enhanced productivity capabilities driven by generative AI. Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams with mundane email writing, help marketers produce unique content at low cost, and enable teams and creatives to brainstorm new ideas. Recent significant GenAI product launches include Microsoft 365 Copilot, GitHub Copilot, and Salesforce Einstein GPT. Notably, these GenAI tools from leading SaaS providers are paid enhancements, a clear sign that no SaaS provider wants to miss out on cashing in on the GenAI transformation. Google will soon launch its Search Generative Experience (SGE) platform, which offers premium AI-generated summaries rather than a list of websites. At this pace, it's just a matter of a short time befo…
Hackers Can Silently Control Siri, Alexa & Other Voice Assistants Using Ultrasound

Sep 07, 2017
What if your smartphone started making calls, sending text messages, and browsing malicious websites on its own, without ever asking you? This is not just a hypothetical: hackers can make it happen through your smartphone's personal assistant, such as Siri or Google Now. A team of security researchers from China's Zhejiang University has discovered a clever way of activating voice recognition systems without speaking a word, by exploiting a security vulnerability that is apparently common across all major voice assistants. DolphinAttack (Demo): How It Works. Dubbed DolphinAttack, the attack technique works by feeding the AI assistants commands at ultrasonic frequencies, which are too high for humans to hear but are perfectly audible to the microphones on your smart devices. With this technique, cyber criminals can "silently" whisper commands into your smartphone to hijack Siri and Alexa, forcing them to open malicious websites and even…
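Why does an ultrasonic signal the microphone "hears" turn back into an audible command inside the device? The usual explanation is nonlinearity in the microphone and amplifier: even a small quadratic term acts as an AM demodulator, and the hardware's own low-pass filtering then recovers the baseband command. The sketch below models that recovery; the nonlinearity coefficient and filter cutoff are assumptions for illustration, not measured device parameters.

```python
# Rough model of square-law demodulation in a microphone front end.
# Coefficients, cutoff, and file names are assumptions for illustration.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

fs, ultrasonic = wavfile.read("ultrasonic_payload.wav")   # hypothetical AM input
x = ultrasonic.astype(np.float64)
x /= np.max(np.abs(x))

# Model the front end as y = a1*x + a2*x^2; the quadratic term shifts a copy
# of the hidden command down to baseband.
a1, a2 = 1.0, 0.1
y = a1 * x + a2 * x ** 2

# A low-pass filter well below the ultrasonic carrier (cutoff assumed 8 kHz)
# keeps only the demodulated baseband, i.e. the recovered voice command.
b, a = butter(4, 8_000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, y)
recovered /= np.max(np.abs(recovered))
wavfile.write("recovered_command.wav", fs, (recovered * 32767).astype(np.int16))
```
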