Fending off AI Armageddon: Threats to Our Data, Our Safety, Our Country 

Bruce Schneier is scary smart. The things he talks about – AI weaponization, remote hacking of commercial airliners and self-driving cars, malicious alteration of medical records – are scarier.

The author of 13 books – including the cleverly titled Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World – and hundreds of articles and academic papers, Schneier is a public intellectual whom The Economist called a "security guru." He has testified before Congress; he is a Harvard fellow and lecturer, a member of the boards of two technology organizations, and a successful entrepreneur. You get the idea. Schneier knows his stuff.

Schneier asserts that advances toward AI give us plenty to worry about – but not from Elon Musk's robots-will-kill-us perspective. Long before we get to “super-intelligent” AI, machine learning in its current state is edging toward major dangers.

“I don’t worry about the risks of AIs taking over the world,” Schneier said during a Q&A discussion at the AI World conference in Boston this week. “I worry about risks much sooner – the near-term precursor risks.”

Schneier discussed a range of cybersecurity issues, painting a good guys vs. bad guys picture that is both alarming and – assuming the good guys stay ahead of the innovation curve – encouraging. His core point: as AI and its associated technologies evolve to protect information assets and networks, so do the opportunities to use AI to attack those same systems.

Bruce Schneier at AI World, Boston

One of his themes, echoed by the U.S. Department of Defense, is the emerging “AI arms race,” the competition among countries and non-state actors to “leapfrog” each other militarily by adopting AI and machine learning for cybersecurity and weaponry. We wrote about this earlier in the year when Russian President Vladimir Putin declared that "the one who becomes the leader in this sphere will be the ruler of the world." We’ve also written about “algorithms at war,” DoD’s work in AI-based military systems.

The same day Schneier spoke in Boston, a federal government IT publication, MeriTalk, published an unsettling story that casts doubt on whether the DoD, or Congress, is “putting enough of its money where its mouth is,” citing a recent Govini report.

“The U.S. military can either lead the coming revolution, or fall victim to it,” declared the report’s author, former Deputy Secretary of Defense Robert Work. “This stark choice will be determined by the degree to which (DoD) recognizes the revolutionary military potential of AI and advanced autonomous systems…, advanced computing, artificial neural networks, computer vision, natural language processing, big data, machine learning, and unmanned systems and robotics….”

At AI World, Schneier cited work spearheaded by DoD’s DARPA, the future-looking military R&D organization, on automated cyber defenses: systems that “discover, prove and fix software flaws in real-time, without any assistance.”

Last year, DARPA sponsored the Cyber Grand Challenge at two hacking conventions, DEF CON and Black Hat USA. Schneier called this contest “your biggest harbinger” of when AI-based military systems will be smarter than humans. He said there were machine-vs.-machine contests and also teams of computers and humans mixed together, “and the amazing thing was that the best computer didn’t come in last, the best computer beat the worst human team. That will continue to improve.”
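
The Grand Challenge machines went much further – they proved and patched the flaws they found – but the “discover” step can be sketched in a few lines. Everything below is invented for illustration (the length-prefixed parse_record target especially): a blind random fuzzer that hammers a deliberately buggy parser and keeps the inputs that crash it.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target under test: the first byte declares the
    payload length. (Real CGC targets were full binaries.)"""
    if not data:
        return 0
    length = data[0]
    payload = data[1:]
    # Bug: trusts the declared length, so a short payload triggers
    # an out-of-range read (IndexError), the crash a fuzzer looks for.
    return sum(payload[i] for i in range(length))

def fuzz(trials: int = 10_000) -> list[bytes]:
    """Throw random inputs at the target; collect the ones that crash."""
    crashes = []
    for _ in range(trials):
        data = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
        try:
            parse_record(data)
        except IndexError:
            crashes.append(data)
    return crashes

print(f"{len(fuzz())} crashing inputs found in 10,000 trials")
```

Real systems add coverage feedback, symbolic execution and automated patching; the point is only that flaw discovery can run as an unattended loop.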

That said, Schneier asserted that machine learning, not real AI, is the current state of the art.

“It’s still very early… a lot of the companies say they use AI, but it’s not really AI, it’s a machine learning thing, or it’s some AI technique being brought to bear in a normally conventional product.” He sees machine learning at work in fuzzy-logic and pattern-matching techniques for cybersecurity tasks such as spam detection and spotting anomalous patterns on a network – tasks with a good feedback loop. But “I don’t see anyone using what I would consider to be AI.”
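
To make the distinction concrete, here is a minimal sketch of the kind of pattern matching he means – a naive Bayes spam filter whose “good feedback loop” is users correcting its mistakes. The class and its toy training data are invented for illustration; production filters use far richer features and models.

```python
import math
from collections import Counter

class SpamFilter:
    """Naive Bayes over word counts, improved by a feedback loop
    in which user corrections become new training data."""

    def __init__(self) -> None:
        self.words = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.words[label].update(text.lower().split())
        self.docs[label] += 1

    def _score(self, text: str, label: str) -> float:
        total = sum(self.words[label].values()) + 1
        prior = (self.docs[label] + 1) / (sum(self.docs.values()) + 2)
        score = math.log(prior)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero the score.
            score += math.log((self.words[label][word] + 1) / total)
        return score

    def classify(self, text: str) -> str:
        return "spam" if self._score(text, "spam") > self._score(text, "ham") else "ham"

    def feedback(self, text: str, correct_label: str) -> None:
        # The feedback loop: every user correction retrains the model.
        self.train(text, correct_label)

f = SpamFilter()
f.train("win free money now", "spam")
f.train("meeting notes attached", "ham")
print(f.classify("free money offer"))      # "spam"
f.feedback("free lunch on friday", "ham")  # correct a false positive
```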

Schneier has read an advance copy of Army of None: Autonomous Weapons and the Future of War, a book coming out next year in which Pentagon defense expert Paul Scharre explores giving machines authority over the ultimate decision of life and death.

“The issue is less about AI and more about autonomy – how quickly can you turn a weapon off when it starts doing something you don’t want it to do. Is it a second, a minute, an hour, a day, or never? These are really big differences and things we have to worry about when you start using algorithms to make decisions…  The issue is going to be speed, scale, scope... whether they make targeting decisions, whether they keep firing and there’s no off switch. Some of this could be malice, some could be just a mistake.”
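
His latency question can be read as a design parameter. The sketch below is purely illustrative, not from the talk: an autonomous loop whose worst-case shutdown time is one off-switch check interval – a second, a minute, an hour, a day – and where “never” is a loop that doesn’t consult the switch at all.

```python
import threading
import time

class KillSwitch:
    def __init__(self) -> None:
        self.engaged = False  # flipped by a human operator

def autonomous_loop(switch: KillSwitch, act, check_interval_s: float) -> None:
    # Damage is bounded by how often this check runs; omit the
    # check entirely and you have Schneier's "never".
    while not switch.engaged:
        act()
        time.sleep(check_interval_s)

switch = KillSwitch()
t = threading.Thread(target=autonomous_loop,
                     args=(switch, lambda: None, 0.1))
t.start()
time.sleep(0.5)        # the system runs autonomously...
switch.engaged = True  # ...until a human throws the switch
t.join(timeout=1.0)    # it stops within ~one check interval
```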

“Human in the loop,” in which a person is in a position to approve or disapprove of a machine’s decision, is a key concern. As machines get smarter, Schneier said, humans will increasingly defer to machines – not just weapons systems, which have the potential “to do things before we can stop them,” but other types of machines as well.

“Once you start moving humans out of the loop then all things can fail,” he said. “A lot of our systems have a human in the loop, but not really.”

In the case of military systems that make targeting decisions, “there’s an Army officer (in the loop) who’s saying, ‘Yep, that’s right.’ That officer just knows they’re not going to override (the machine), so it doesn’t really count. You need to have a meaningful human in the loop.”

Another example: medical diagnosis, a “great win” for machine learning. Schneier said there are programs that detect certain cancers better than people. “A human being gets that result from the machine and then approves it. But if that person knows the machine is right more often than he or she is, they’re not going to override the machine. So while there’s a human in the loop, there isn’t really.”
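
One way to make that deference visible – a hypothetical sketch, not anything Schneier prescribed – is to audit how often the human actually overrides the machine; an override rate near zero across thousands of cases suggests the loop isn’t meaningful.

```python
from dataclasses import dataclass

@dataclass
class ReviewLog:
    """Audit trail for a human-approval step. A near-zero override
    rate at scale is the rubber-stamping failure mode: a human in
    the loop who isn't really."""
    total: int = 0
    overrides: int = 0

    def record(self, machine_says: str, human_says: str) -> str:
        self.total += 1
        if human_says != machine_says:
            self.overrides += 1
        return human_says  # the human's call is what gets acted on

    @property
    def override_rate(self) -> float:
        return self.overrides / self.total if self.total else 0.0

log = ReviewLog()
final = log.record(machine_says="malignant", human_says="malignant")
print(f"override rate: {log.override_rate:.1%}")  # flag if ~0% at scale
```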

Security Trumps Privacy

While we look ahead to AI systems that ensure the privacy of our personally identifiable information (PII) and our privacy in general, Schneier raised the interesting notion that security may at times be at loggerheads with privacy – and that security may be the higher value.

“Yes, AI will protect the security of our data, which will protect our privacy,” Schneier said. “But we’re moving into a world where it’s not that privacy matters less, it’s that everything else matters more.”

Take personal medical records.

“They’re online and I’m concerned someone will hack in on them and steal my blood type. That’s a privacy violation. But I’m way more concerned that they’re going to change it, so when I get a blood transfusion, it won’t work.”

Likewise, in a world with more connected and autonomous cars, “I’m worrying about someone hacking into the car and listening in through the Bluetooth to what I’m saying. That’s a privacy concern. But I’m much more concerned with them disabling the brakes. We’re moving to a world where availability and integrity threats matter more than privacy. Not because you don’t care about privacy, but because security threats are so much greater.”
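
That shift from confidentiality to integrity is exactly what keyed integrity checks address. A minimal sketch follows, with an invented record format and a hard-coded key standing in for real key management (encryption, the privacy side, is omitted):

```python
import hashlib
import hmac

KEY = b"hypothetical-hospital-signing-key"  # assumption: real key management omitted

def sign_record(record: bytes) -> str:
    """Attach a keyed MAC so any alteration is detectable on read."""
    return hmac.new(KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str) -> bool:
    """Constant-time comparison; False means the record was altered."""
    return hmac.compare_digest(sign_record(record), tag)

record = b"patient: J. Doe; blood type: O-"
tag = sign_record(record)
assert verify_record(record, tag)                                   # intact
assert not verify_record(b"patient: J. Doe; blood type: AB+", tag)  # tampered
```

Stealing the record breaks privacy; silently changing it defeats the transfusion. The MAC does nothing about the former and everything about the latter.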

Borderless AI Innovation

In the emerging good guys/bad guys AI wars, Schneier warns against trying to wall off the bad guys from AI innovation. Restricting research was easier in the pre-Internet world, he said, “when information flowed slower through books that were in local languages that were hard to read outside of the country. But in the world of the translatable Internet, that fails very quickly.

“For a lot of these problems I think we need to tech our way out of it, rather than limit our way out of it... Even if we could impose export controls on technology, that will only save us for a couple of years, if that.”

“Today’s top secret military program is tomorrow’s Ph.D. thesis and the next day’s hacker tools,” he said. “You put a limit on a technology in 2018 and by 2021 it’s a high school science project. Any three kids who feel like taking the world down with them can do it. I’m not going to save myself that way. I really need to work on defensive technologies; it’s how can we build the defensive technologies in advance of the offensive technologies.”

Asked if, without AI regulations and government policy, we are “marching toward a cliff,” Schneier said, “I think we are in many areas of technology policy, because we have none. We did that with social media. We invented a system where totalitarian governments can impose their will on societies, and it was like an offshoot of showing people ads. Most of our technology policies have no foresight, no planning. There’s this near-libertarian myth that somehow the market will do the right thing, rather than the near-term profitable thing. That’s a fundamental misunderstanding of markets, I think."

The problem, he said, is an inability or an unwillingness to anticipate, to defend ourselves against bad future outcomes.

“In many areas of our society we are marching toward any number of cliffs,” he said. “Whether we will go over them remains to be seen. We’re pretty good as a species at dealing with the (threat) right in front of us. We’re terrible at dealing with something 20 years down the line, such as climate change. We’re just incapable of dealing with those risks.”

But despite his concerns, Schneier insists he is optimistic in the long run.

“I may sound pessimistic, but I’m not. I actually think we will solve this. I don’t think this is the thing that will destroy our species. I believe we’ll be able to innovate our way out of this. The fact that there is so much research means there will be new ideas… That is the bright side of population explosion – there are a lot more smart people working on hard problems. There will be solutions coming from unexpected spaces.”

EnterpriseAI