
artificial intelligence | Breaking Cybersecurity News | The Hacker News

Facebook uses Artificial Intelligence to Describe Photos to Blind Users
Apr 06, 2016
Today the Internet is dominated by images, and they are the major feature that got Facebook to a billion daily users. We cannot imagine Facebook without photos, but for millions of blind and visually impaired people, Facebook without photos has been the reality since its launch. Not anymore. Facebook has launched a system, dubbed Automatic Alternative Text, which describes the contents of pictures by telling blind and visually impaired users what appears in them. Blind and visually impaired people use sophisticated navigation software known as screen readers to make their computers usable. The software turns the contents of the screen into speech, but it cannot "read" pictures. However, Facebook's Automatic Alternative Text, or AAT, uses object recognition technology powered by artificial intelligence to decode and describe photos uploaded to the social network, and then provides the description in a form that a screen reader can speak.
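Conceptually, a system like AAT takes the labels an object-recognition model emits for a photo and turns the confident ones into a sentence a screen reader can speak. The sketch below illustrates that last step only, with hypothetical classifier output and a made-up confidence threshold; it is not Facebook's actual pipeline.

```python
def build_alt_text(detections, threshold=0.8):
    """Turn (label, confidence) pairs from an object-recognition model
    into a screen-reader-friendly description, keeping only labels the
    model is confident about. Illustrative only, not Facebook's AAT."""
    labels = [label for label, conf in detections if conf >= threshold]
    if not labels:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(labels)

# Hypothetical classifier output for one uploaded photo.
detections = [("two people", 0.95), ("smiling", 0.91),
              ("outdoor", 0.88), ("car", 0.40)]
print(build_alt_text(detections))
# → Image may contain: two people, smiling, outdoor
```

The resulting string would be attached as the image's alternative text, which is exactly the channel screen readers already know how to vocalize.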

Microsoft says It's Deeply Sorry for Racist and Offensive Tweets by Tay AI Chatbot
Mar 26, 2016
After Microsoft's Twitter-based Artificial Intelligence (AI) chatbot 'Tay' went badly wrong earlier this week, Microsoft has apologized and explained what happened. For those unaware, Tay is a millennial-inspired artificial intelligence chatbot unveiled by Microsoft on Wednesday that is supposed to talk with people on social media networks like Twitter, Kik and GroupMe and learn from them. However, less than 24 hours after its launch, the company pulled Tay down, following incredibly racist comments, Holocaust references, and tweets praising Hitler and bashing feminists. In a blog post published Friday, Corporate Vice President Peter Lee of Microsoft Research apologized for the disturbing behavior of Tay, though he suggested that malicious users had influenced the AI teenager. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee wrote.

Code Keepers: Mastering Non-Human Identity Management
Apr 12, 2024 | DevSecOps / Identity Management
Identities now transcend human boundaries. Within each line of code and every API call lies a non-human identity. These entities act as programmatic access keys, enabling authentication and facilitating interactions among systems and services, and they are essential for every API call, database query, or storage account access. Just as we depend on multi-factor authentication and passwords to safeguard human identities, a pressing question arises: how do we guarantee the security and integrity of these non-human counterparts? How do we authenticate, authorize, and regulate access for entities devoid of life but crucial to the functioning of critical systems? Let's break it down. The challenge: imagine a cloud-native application as a bustling metropolis of tiny neighborhoods known as microservices, all neatly packed into containers. These microservices function like diligent worker bees, each performing its designated task, be it processing data or verifying credentials.
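One common pattern for giving such a non-human identity a credential is a short-lived, scoped machine token that one service mints and another verifies. The sketch below uses a hypothetical HMAC-signed bearer token, with made-up service names and scopes, rather than any specific product's scheme; real deployments would typically use a workload-identity system or OAuth client credentials with keys pulled from a secrets manager.

```python
import hashlib
import hmac
import time

# Assumption: in production this key would come from a secrets manager
# and be rotated; it is hard-coded here only for illustration.
SIGNING_KEY = b"demo-signing-key"

def issue_token(service_name, scope, ttl=300):
    """Mint a short-lived credential for a non-human identity."""
    expiry = int(time.time()) + ttl
    payload = f"{service_name}|{scope}|{expiry}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token, required_scope):
    """Authenticate the calling service (signature check) and
    authorize it (scope and expiry check)."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    _service, scope, expiry = payload.split("|")
    return scope == required_scope and int(expiry) > time.time()

token = issue_token("billing-service", "db:read")
print(verify_token(token, "db:read"))   # → True
print(verify_token(token, "db:write"))  # → False: scope was never granted
```

The point of the sketch is the shape of the problem: the token is the microservice's "password", so its lifetime, scope, and signing key all have to be managed just as carefully as a human credential.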

Microsoft's Artificial Intelligence Tay Became a 'Racist Nazi' in less than 24 Hours
Mar 24, 2016
Tay, Microsoft's new Artificial Intelligence (AI) chatbot on Twitter, had to be pulled down a day after it launched, following incredibly racist comments and tweets praising Hitler and bashing feminists. Microsoft had launched the millennial-inspired artificial intelligence chatbot on Wednesday, claiming that it would become smarter the more people talked to it. The real-world aim of Tay is to allow researchers to "experiment" with conversational understanding, as well as learn how people talk to each other and get progressively "smarter." "The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said. "It is as much a social and cultural experiment as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."


Mark Zuckerberg Plans to Build Iron Man's JARVIS like Artificially Intelligent Assistant
Jan 04, 2016
What's the coolest part of the Iron Man movies? The hyper-intelligent Artificial Intelligence that helps Tony Stark by doing data analysis, charging his armor, presenting information at crucial times and handling other operations. That's right: we are talking about J.A.R.V.I.S., Iron Man's personal assistant. We all dream of having one of its kind, and even Facebook's founder and CEO Mark Zuckerberg has ambitions to live a little more like Iron Man's superhero Tony Stark. While disclosing his 2016 resolution via a Facebook post on Sunday, Zuckerberg revealed that he is planning to build his own Artificial Intelligence to help him run his home and assist him at the office, similar to Iron Man's digital butler Edwin Jarvis. "You can think of it kind of like Jarvis in Iron Man," Zuckerberg wrote in his Facebook post. "I'll start teaching it to understand my voice to control everything in our home — music, lights, temperature and so on."