<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1977396509252409&amp;ev=PageView&amp;noscript=1">


THE SCHELLMAN ADVANTAGE BLOG


Rolling the Dice on AI

Written by DOUGLAS BARBIN on Jul 23, 2018

Fear can be a great motivator. If you are afraid that a human cannot make a decision fast enough to stop a cyberattack, you might opt for an artificial intelligence (AI) or machine learning system. But while the fear, uncertainty and doubt (the FUD factor) of not responding quickly enough might push you toward automation, the equally strong fear that your automated system will take the wrong action argues against employing the technology at all. Welcome to this year’s Catch-22.

In the 1983 sci-fi classic WarGames, a computer was brought in to replace the soldiers who manned the intercontinental ballistic missile silos because, it was believed, the computer could launch the missiles dispassionately and would not be swayed by indecision in the event of a nuclear attack. A teenager then hacked the system, thinking it was an unreleased video game. Even someone who hasn’t seen the film can guess the plot: the machine starts running World War III scenarios and prepares a multitude of real counter-assaults, driving the military IT experts crazy.

Those same fears surround machine learning today. Just as in WarGames, IT can configure today’s security software not only to determine whether a cyberattack is occurring, but to empower a server to decide on its own to try to halt the attack, often by logging the suspected attacker off the network or taking more aggressive action.

The fear among opponents of the “let the software do its job” approach is that only humans should decide on an action, because the risks of autonomous software are too great. These are the experts who argued the soldiers should stay in the silos to turn the launch keys. After all, an “attack” might be false.

In fact, that very scenario played out some 35 years ago. On Sept. 26, 1983, the Soviet Union’s early-warning system detected an incoming missile strike from the United States. Protocol called for a retaliatory strike if such a launch was detected, but Soviet duty officer Stanislav Petrov chose to dismiss the readout as a false alarm despite electronic warnings to the contrary. Petrov was correct: there were no U.S. missiles headed for Moscow. It can be argued that Petrov personally stopped World War III. It is a classic argument against using machine learning as part of a missile defense system, where the human element would not have had the opportunity to interpret the data and make a decision.

 

Man vs Machine

The counterargument among autonomous IT systems advocates is that there is no choice. In short, cyberattacks happen so quickly that only an algorithm’s speed is enough to even have a shot at thwarting an attack before substantial damage is done.

“There is an unwillingness on the part of many security people to fully trust machine learning,” says Wade Baker, a professor at Virginia Tech’s College of Business for the MBA and Master of IT programs; he also serves on the advisory board for the RSA Conference.

“They think ‘Only a human can make this decision.’ Many have an emotional response,” he continues. “There is a strong belief that what we do in the security industry is so hard and so nuanced. A decision needs to be made very, very quickly. There is an emotional kind of irrational thing going on there” and it is compounded by a fear of bad software decisions.

James Hendler is director of the Institute for Data Exploration and Applications (IDEA) and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute, as well as a member of the U.S. Homeland Security Science and Technology Advisory Committee. Hendler agrees that speed is a concern, but argues that as long as the algorithm is not reliable, handing it the decision remains a real risk, and for now perhaps an unjustifiable one. “We do have these very fast decisions to make,” Hendler says, “but technology is still not at the point where it’s trustworthy to say ‘let’s trust it.’”

Richard Rushing, the CISO for Motorola Mobility, says focusing on the nature of attackers — as opposed to attacks — is key to leveraging machine learning properly as a defense tool.

“Let’s understand the tradecraft of the attackers. If you look at protection tools, they are set up to block based on data, seen at one time. The attackers figured this out so they change the data every time — kind of like address or ports or information and they usually hide in plain sight,” Rushing says.

“What they do not change are things like time, size, process, activity, [and] steps,” he continues. With artificial intelligence and machine learning, systems can look for these patterns. “This is what computers are great at doing. You just need to know what to look for but also have that specific visibility to make it happen.”

Rushing adds that “layers of detection are bidirectional so you can follow the data in any direction, versus the classic outbound or inbound.”
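
Rushing’s description points at a concrete technique: rather than blocking on fixed indicators such as addresses or ports, a model can be trained on behavioral features like time, size, process activity and steps, and can flag sessions that deviate from the baseline. The sketch below is a minimal, hypothetical illustration using scikit-learn’s IsolationForest; the feature names, values and thresholds are assumptions for the example, not a description of any vendor’s product.

```python
# Minimal sketch: flag sessions whose behavioral pattern (timing, size,
# process activity, steps) deviates from the learned baseline, instead of
# matching signatures such as IPs or ports. All features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline sessions: [hour_of_day, bytes_out, processes_spawned, steps_in_session]
baseline = np.column_stack([
    rng.normal(14, 3, 5000),     # activity clustered around business hours
    rng.normal(2e5, 5e4, 5000),  # typical outbound bytes per session
    rng.poisson(3, 5000),        # processes spawned per session
    rng.poisson(10, 5000),       # discrete steps (actions) per session
])

# Learn what "normal" behavior looks like
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A session at 3 a.m. that moves 50 MB in two long, scripted steps
suspect = np.array([[3, 5e7, 1, 2]])
print(model.predict(suspect))        # -1 = anomalous, 1 = normal
print(model.score_samples(suspect))  # lower score = more anomalous
```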

One of the almost universally accepted truths about machine learning is that it is the subject of vast amounts of hype, both from vendors trying to sell it and from analysts encouraging its use. That buzzword status causes machine learning to be portrayed, inaccurately, as the ideal fix for almost any security problem, when in fact its value is narrower. It is very good at dealing with massive amounts of unstructured data, but its effectiveness quickly diminishes for many other security tasks.

“A lot of folks are trying to throw something like machine learning at a problem where it’s not necessary,” says Bryce Austin, an IT security consultant and author of the book Secure Enough?: 20 Questions on Cybersecurity for Business Owners and Executives. Many of these companies look to advanced efforts like machine learning when they have yet to tend to routine security matters such as multi-factor authentication, the elimination of default vendor-issued passwords and “reasonable network segmentation,” he notes.

Michael Oberlaender is the former CISO for Tailored Brands (which owns Men’s Wearhouse, Jos. A. Bank and Moores Clothing for Men) and author of the book CISO and Now What? How to Successfully Build Security by Design. “Machine learning is completely overhyped. I would not spend a dime on it,” Oberlaender says, adding that demonstrations he saw at Black Hat 2017, in which the tested machine learning algorithm failed to deliver, convinced him that the technology was not close to ready for the enterprise.

But Austin says that practical security concerns should be paramount. After all, the essence of technology exploration is trying new systems, in a secure sandbox with no ability to affect live systems, and seeing how well they perform.

“We have to allow the machine to make the decision to see how many false positives we get,” Austin says. “We need to let the computers try these things in real time.”
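
A hedged sketch of the sandboxed, shadow-mode trial Austin describes might look like the following: the model makes its “decisions” against labeled historical events, nothing is actually blocked, and the team counts how many false positives it would have generated. The classifier, features and labels below are synthetic placeholders, not a specific product or dataset.

```python
# Minimal sketch of "shadow mode": evaluate what the model would have blocked
# on labeled historical events and measure the false-positive cost before it
# is ever allowed to act on live systems. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic labeled history: one feature vector per event, 1 = real attack, 0 = benign
X = rng.normal(size=(10_000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 10_000) > 2).astype(int)

# Hold out a "shadow" set the model never trained on
X_train, X_shadow, y_train, y_shadow = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Decisions the system *would* have made; nothing is actually blocked
would_block = clf.predict(X_shadow)
tn, fp, fn, tp = confusion_matrix(y_shadow, would_block).ravel()

print(f"would have blocked {would_block.sum()} of {len(X_shadow)} events")
print(f"false positives: {fp} ({fp / (fp + tn):.2%} of benign events)")
print(f"missed attacks:  {fn}")
```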

Rushing’s concern is that humans are not perfect. “There is this idea about some crazy bias against machines making decisions. People make mistakes on a regular basis,” Rushing says. “Why do machines have to be perfect?”

Rushing argues the pragmatic security position, namely that many of today’s mass attacks on enterprises are so large and so fast that waiting for a person to make a decision can never be an effective defense. “With these attacks, a human could not stop it. They are so quick and affect so many machines so quickly. The only thing that would have saved [the enterprise] is orchestration.”

Rushing points to the 2013 Target breach, in which attackers used the credentials of a heating, ventilation and air-conditioning (HVAC) contractor to gain access to the internal network and ultimately to the point-of-sale network. One of Target’s problems was attributed to the massive number of potential breach alerts its systems generated, which overloaded the security staff. Ultimately, the staff overlooked the valid alerts.

“That SIEM (security information and event management) [system] shouldn’t be giving me a million events,” Rushing says. It should only be alerting true security attacks that merit human attention, Rushing says. “You’re going to get overwhelmed because your people are missing the simple stuff.”

Using machine learning to dramatically reduce the number of alerts, thereby making real threats more apparent and therefore actionable, is an excellent use of the technology, Rushing says.
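
As one illustration of that triage use, a SIEM pipeline could score incoming alerts with a model trained on past analyst dispositions and escalate only the small fraction above a threshold. The field names, model choice and threshold in this sketch are assumptions for the example, not a reference implementation.

```python
# Minimal sketch: rank raw SIEM alerts by a model trained on past analyst
# dispositions (1 = confirmed incident, 0 = noise) and escalate only the
# highest-scoring ones. Field names and threshold are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Historical alerts: [severity, asset_criticality, failed_logins, bytes_out, off_hours]
history = rng.normal(size=(20_000, 5))
confirmed = (history @ np.array([1.0, 0.8, 0.6, 0.9, 0.4])
             + rng.normal(0, 1, 20_000) > 2.5).astype(int)

triage = GradientBoostingClassifier().fit(history, confirmed)

# Today's raw alert stream: thousands of events, most of them noise
todays_alerts = rng.normal(size=(5_000, 5))
risk = triage.predict_proba(todays_alerts)[:, 1]

escalated = np.where(risk > 0.8)[0]  # hypothetical escalation threshold
print(f"{len(todays_alerts)} raw alerts -> {len(escalated)} escalated to analysts")
```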

“Machine learning, in whatever form it takes today or tomorrow, is the only way to support a manageable workload of tickets — the unit of work for a SOC analyst — based on a timely and actionable event.” – Doug Barbin, principal and cybersecurity practice leader, Schellman & Company

Read the full article at SC Magazine.

MEET THE WRITER

DOUGLAS BARBIN

Doug Barbin is a Principal at Schellman & Company, LLC. Doug leads all service delivery for the western US and also oversees firm-wide growth and execution for security assessment services including PCI, FedRAMP, and penetration testing. He has over 19 years of experience. A strong advocate for cloud computing assurance, Doug spends much of his time working with cloud computing companies and has participated in various cloud working groups with the Cloud Security Alliance and the PCI Security Standards Council, among others.
