Voice Assistant | Breaking Cybersecurity News | The Hacker News

Hackers Can Use Ultrasonic Waves to Secretly Control Voice Assistant Devices

Mar 02, 2020
Researchers have discovered a new way to target voice-controlled devices by propagating ultrasonic waves through solid materials, letting an attacker interact with and compromise them using inaudible voice commands without the victim's knowledge. Called "SurfingAttack," the attack leverages the unique properties of acoustic transmission in solid materials, such as tables, to "enable multiple rounds of interactions between the voice-controlled device and the attacker over a longer distance and without the need to be in line-of-sight." In doing so, an attacker can inconspicuously interact with the device's voice assistant, hijack SMS two-factor authentication codes, and even place fraudulent calls, the researchers outline in the paper. The research was published by a group of academics from Michigan State University, Washington University in St. Louis, the Chinese Academy of Sciences, and the University of Nebraska-Lincoln.
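To illustrate the signal-processing idea at the heart of attacks like SurfingAttack, the sketch below amplitude-modulates a stand-in "voice command" onto an ultrasonic carrier so the transmitted signal sits above the range of human hearing. This is a minimal illustration only: the 96 kHz sample rate, 25 kHz carrier, and synthetic 1 kHz tone are assumptions made for the example, not parameters from the paper, and the real attack drives a piezoelectric transducer attached to the table rather than an ordinary speaker.

```python
import numpy as np

# Minimal sketch of the modulation idea behind inaudible-command attacks:
# a baseband "voice command" is amplitude-modulated onto an ultrasonic
# carrier, so the transmitted signal sits above human hearing range.
# The sample rate, carrier, and 1 kHz stand-in tone are illustrative values.

fs = 96_000                                     # high enough to represent the carrier
t = np.arange(0, 1.0, 1 / fs)

command = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # stand-in for a recorded voice command
carrier_hz = 25_000                             # above ~20 kHz, inaudible to humans

# Classic AM: offset the baseband so it never goes negative, then multiply
# by the ultrasonic carrier. A microphone's nonlinearity later recovers the
# baseband component (see the DolphinAttack entry further down).
modulated = (1.0 + command) * np.sin(2 * np.pi * carrier_hz * t)
modulated /= np.abs(modulated).max()            # normalize before sending to a transducer

print(f"{len(modulated)} samples, peak amplitude {modulated.max():.2f}")
```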
Hackers Can Silently Control Your Google Home, Alexa, Siri With Laser Light

Nov 05, 2019
A team of cybersecurity researchers has discovered a clever technique to remotely inject inaudible and invisible commands into voice-controlled devices, all by shining a laser at the targeted device instead of using spoken words. Dubbed 'Light Commands,' the hack relies on a vulnerability in the MEMS microphones embedded in widely used voice-controllable systems, which unintentionally respond to light as if it were sound. According to experiments by a team of researchers from universities in Japan and Michigan, a remote attacker standing several meters away from a device can covertly trigger the attack simply by modulating the amplitude of laser light to produce an acoustic pressure wave. "By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio," the researchers said in their paper [PDF].
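The quoted principle can be sketched in a few lines: an audio waveform is added to a DC bias and used to drive a laser's intensity, so the beam flickers in step with the command. This is a rough illustration only; the bias and modulation-depth values below are assumptions made for the example, not figures from the Light Commands paper, and no driver hardware is modeled.

```python
import numpy as np

# Rough sketch of the "Light Commands" modulation idea: the audio waveform of
# a command is added to a DC bias and used as the laser drive current, so the
# beam's intensity varies with the audio. All values here are illustrative.

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
audio = 0.3 * np.sin(2 * np.pi * 800 * t)        # stand-in for a spoken command

bias_ma = 200.0          # assumed DC operating current for the laser, in mA
depth_ma = 150.0         # assumed modulation depth, in mA

# Intensity (current) signal: DC bias plus the normalized audio, clipped so
# the drive current never drops below zero.
drive_current = np.clip(bias_ma + depth_ma * audio / np.abs(audio).max(), 0, None)

print(f"current range: {drive_current.min():.1f} to {drive_current.max():.1f} mA")
```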
AI Solutions Are the New Shadow IT

Nov 22, 2023 | AI Security / SaaS Security
Ambitious employees are touting new AI tools while ignoring serious SaaS security risks. Like the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot. Employees are covertly using AI with little regard for established IT and cybersecurity review procedures. Considering ChatGPT's meteoric rise to 100 million users within 60 days of launch, especially with little sales and marketing fanfare, employee-driven demand for AI tools will only escalate. As new studies show some workers boost productivity by 40% using generative AI, the pressure on CISOs and their teams to fast-track AI adoption, and to turn a blind eye to unsanctioned AI tool usage, is intensifying. But succumbing to these pressures can introduce serious SaaS data leakage and breach risks, particularly as employees flock to AI tools developed by small businesses, solopreneurs, and indie developers.
Hackers Can Silently Control Siri, Alexa & Other Voice Assistants Using Ultrasound

Sep 07, 2017
What if your smartphone started making calls, sending text messages, and browsing malicious websites on its own, without ever asking you? This is not imaginary: hackers can make it happen using your smartphone's personal assistant, such as Siri or Google Now. A team of security researchers from China's Zhejiang University has discovered a clever way of activating voice recognition systems without speaking a word, by exploiting a security vulnerability that is apparently common across all major voice assistants. Dubbed DolphinAttack, the technique works by feeding the AI assistants commands in ultrasonic frequencies, which are too high for humans to hear but are perfectly audible to the microphones on smart devices. With this technique, cybercriminals can "silently" whisper commands into your smartphone to hijack Siri and Alexa, forcing them to open malicious websites.
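The reason those ultrasonic commands become "audible" to the device is the nonlinearity of the microphone's front end, which demodulates the high-frequency signal back down to baseband. The toy simulation below sketches that effect with a simple square-law term and a low-pass filter; the coefficients and frequencies are illustrative assumptions, not measured device values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy simulation of the DolphinAttack demodulation effect: a microphone's
# nonlinearity (modelled as a square-law term) turns an ultrasonic AM signal
# back into an audible baseband command. All parameters are illustrative.

fs = 192_000
t = np.arange(0, 0.5, 1 / fs)

command = 0.5 * np.sin(2 * np.pi * 1_000 * t)            # stand-in voice command
ultrasonic = (1.0 + command) * np.sin(2 * np.pi * 30_000 * t)

a1, a2 = 1.0, 0.5                                        # assumed nonlinearity coefficients
mic_output = a1 * ultrasonic + a2 * ultrasonic ** 2       # nonlinear microphone response

# Low-pass at 8 kHz: the audio front end discards everything above this,
# leaving the recovered baseband command.
b, a = butter(4, 8_000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, mic_output)

# The 1 kHz "command" tone should now dominate the recovered spectrum.
spectrum = np.abs(np.fft.rfft(recovered - recovered.mean()))
freqs = np.fft.rfftfreq(len(recovered), 1 / fs)
print(f"strongest recovered tone: {freqs[spectrum.argmax()]:.0f} Hz")
```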