In September 2019, the National Institute of Standards and Technology released its first-ever advisory for an attack on a commercial artificial intelligence algorithm.
Security researchers had devised a way to attack a Proofpoint product that uses machine learning to identify spam. The system produced email headers that included a “score” for the likelihood that a message was spam. By analyzing these scores, along with the content of the messages, the researchers built a clone of the machine learning model and crafted spam messages that escaped detection.
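To make the mechanics concrete, here is a minimal, hypothetical sketch of that style of model-stealing attack: query a filter, record the scores it leaks, and train a surrogate model that can be used offline to search for wording that evades detection. The `get_spam_score` stand-in, the probe messages, and the surrogate choice are assumptions for illustration, not details of the Proofpoint system.

```python
# Hypothetical sketch of cloning a spam filter from the scores it leaks.
# Nothing here describes the actual product; names and data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

def get_spam_score(message: str) -> float:
    # Stand-in for the score the real system attaches to email headers;
    # a toy heuristic so the sketch runs end to end.
    spammy = ["cheap", "prize", "offer", "won"]
    return sum(word in message.lower() for word in spammy) / len(spammy)

# 1. Collect (message, score) pairs by sending probe messages.
probe_messages = ["cheap meds now", "meeting moved to 3pm", "you won a prize"]
scores = [get_spam_score(m) for m in probe_messages]

# 2. Fit a surrogate model that mimics the target's scoring behavior.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(probe_messages)
surrogate = LinearRegression().fit(X, scores)

# 3. Use the surrogate offline to look for wording that scores low enough
#    to slip past the real filter.
candidate = "limited offer, please review the attached invoice"
print(surrogate.predict(vectorizer.transform([candidate])))
```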
The vulnerability notice may be the first in a long series. As AI is used more and more, new opportunities arise to exploit weak points in the emerging technology. This has spawned companies that probe AI systems for vulnerabilities, with the aim of catching malicious inputs before they can wreak havoc.
The startup Robust Intelligence is one of those companies. Over Zoom, Yaron Singer, its cofounder and CEO, demonstrates a program that uses AI to outsmart check-reading AI, an early application of modern machine learning.
Singer’s program automatically adjusts the intensity of a few of the pixels that make up the numbers and letters written on the check. This changes how a widely used commercial check-scanning algorithm perceives them. A crook equipped with such a tool could empty a target’s bank account by modifying a legitimate check to add several zeros before depositing it.
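A hedged sketch of the general technique follows: a gradient-based perturbation nudges a few pixel values just enough to push a classifier toward a different answer. The tiny random-weight network below is an assumed stand-in for a commercial check reader, and this is not Robust Intelligence’s actual tool.

```python
# Minimal targeted-perturbation sketch against a toy digit classifier.
# The model is a random-weight stand-in, assumed for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy "check reader"
model.eval()

image = torch.rand(1, 1, 28, 28)     # a digit cropped from the check
target_digit = torch.tensor([9])     # the digit the attacker wants read

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), target_digit)
loss.backward()

# Nudge each pixel slightly in the direction that pushes the model toward
# the attacker's target, keeping the change small enough to look unremarkable.
epsilon = 0.05
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```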
“In many applications, very, very small changes can lead to drastically different results,” says Singer, a Harvard professor who runs his company while on sabbatical in San Francisco. “But the problem is deeper; it’s just the very nature of how we perform machine learning.”
Robust Intelligence’s technology is used by companies such as PayPal and NTT Data, as well as a large ride-sharing company; Singer says he can’t describe exactly how it’s used for fear of tipping off potential adversaries.
The company sells two tools: one that can be used to probe an AI algorithm for weaknesses, and another that automatically intercepts potentially problematic inputs – a kind of AI firewall. The probing tool can run an algorithm many times, examining its inputs and outputs and looking for ways to trick it.
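One plausible way such a firewall could work, sketched under assumptions, is to wrap the model and reject inputs whose prediction is suspiciously unstable under tiny random perturbations. The `guarded_predict` heuristic below is purely illustrative and is not the company’s described method.

```python
# Rough sketch of the "AI firewall" idea: screen inputs before the model sees them.
# The instability check is one simple heuristic, assumed for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def guarded_predict(x: torch.Tensor, trials: int = 8, noise: float = 0.02):
    """Flag inputs whose prediction flips under tiny random perturbations."""
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        flips = sum(
            (model(x + noise * torch.randn_like(x)).argmax(dim=1) != base).item()
            for _ in range(trials)
        )
    if flips > trials // 2:
        raise ValueError("input rejected: prediction is suspiciously unstable")
    return base.item()

print(guarded_predict(torch.rand(1, 1, 28, 28)))
```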
These threats are not just theoretical. Researchers have shown how adversarial algorithms can trick real-world AI systems, including autonomous driving systems, text-mining programs, and computer-vision code. In one oft-cited case, a group of MIT students 3D-printed a turtle that Google software recognized as a rifle, thanks to subtle marks on its surface.
“If you’re currently developing machine learning models, you really have no way of doing any kind of red teaming, or penetration testing, for your machine learning models,” Singer says.
Singer’s research focuses on perturbing the inputs of a machine learning system to make it misbehave, and on designing systems that are safe in the first place. Tricking AI systems exploits the fact that they learn from examples and pick up on subtle patterns in ways humans don’t. By trying several carefully chosen inputs – for example, showing altered faces to a facial recognition system – and seeing how the system responds, an “adversarial” algorithm can infer what adjustments need to be made to produce a particular error or result.
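The black-box version of that process can be sketched as a simple query-and-keep search: perturb the input at random and keep only the changes that push the system’s output toward the desired answer. The `query_model` stand-in and its hidden weights are assumptions for illustration; real systems expose far less information and require far more queries.

```python
# Hedged sketch of a black-box adversarial search: the attacker only sees
# the target's scores and keeps perturbations that move them the right way.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(28 * 28, 10))   # hidden model the attacker cannot see

def query_model(x: np.ndarray) -> np.ndarray:
    # Stand-in for the target system: returns per-class scores only.
    return x.reshape(-1) @ weights

def black_box_attack(x: np.ndarray, target: int, steps: int = 500, eps: float = 0.01):
    best = x.copy()
    best_score = query_model(best)[target]
    for _ in range(steps):
        candidate = np.clip(best + eps * rng.choice([-1, 1], size=x.shape), 0, 1)
        score = query_model(candidate)[target]
        if score > best_score:             # keep changes that help the target class
            best, best_score = candidate, score
    return best

original = rng.random((28, 28))
doctored = black_box_attack(original, target=3)
print(query_model(original).argmax(), query_model(doctored).argmax())
```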
Alongside the check-fooling demonstration, Singer shows a way to outsmart an online fraud-detection system as part of the search for weaknesses. The fraud system looks for signs that a person making a transaction is in fact a bot, based on a wide range of characteristics including browser, operating system, IP address, and timing.
Singer also shows how his company’s technology can trick commercial image-recognition and face-recognition systems with subtle adjustments to a photo. The facial recognition system concludes that a subtly doctored photo of Benjamin Netanyahu actually shows basketball player Julius Barnes. Singer makes the same pitch to potential customers who are concerned about how their new AI systems might be subverted and what that could do to their reputation.
Some large companies that use AI are starting to develop their own defenses against adversarial attacks. Facebook, for example, has a “red team” that tries to hack its AI systems to identify weak points.