Is It Time for Your Organization to Form an AI Ethics Committee?
Do you need to set up an artificial intelligence ethics committee if you are using this technology? Google certainly thought it did — until it changed its mind. Of course, Google is one of the leaders in this space, while most other companies are merely experimenting with AI or using a variation of it in a vendor product. Still, artificial intelligence is quite different from other technologies and software applications given its ability to learn and make decisions in ways that mimic human reasoning. It is no overstatement to say there are ethical considerations with its use — even in seemingly benign business operations. Indeed, Deloitte's second annual State of AI in the Enterprise survey found that 32% of executives ranked ethical issues as a top-three risk of AI, but most don't yet have specific approaches in place to address this risk.
Google’s Brief Flirtation With an Ethics Committee
Yet it appears that nothing with AI is easy, including establishing an ethics committee, as Google recently found out. At the end of March, Google announced it had established an AI ethics panel to guide the “responsible development of AI” at the company. The panel was to have eight members and would meet four times over 2019 to consider various concerns about Google’s AI endeavors. It lasted just a little over a week.
From the beginning it was controversial, with thousands of Google employees calling for the removal of Kay Coles James, head of the Heritage Foundation, because the institution has voiced skepticism about climate change and because of her comments about trans people. Other members’ credentials or beliefs were similarly challenged, with one member resigning and another tweeting about James that “Believe it or not, I know worse about one of the other people.” Soon enough Google pulled the plug, declaring it was going back to the drawing board.
With this fiasco as background, it is fair for companies to wonder whether an AI ethics committee or panel is for them after all. There is no resounding consensus on the matter, and not surprisingly, opinions range from ‘yes you do’ to ‘no you don’t’ and all points in between.
Yes You Need One
Manoj Saxena, advisor to the London Stock Exchange, first GM of IBM Watson and currently executive chairman of CognitiveScale, is resolute that companies need oversight in this area, especially those that will be adopting AI to build solutions. “Unlike traditional rules-based systems, AI systems are self-learning systems that need to be designed carefully so they reflect the company’s core values, comply with industry regulations, provide audit trails on how the AI is learned and finally, act as a means of remediation for AI damages or harm,” he said.
Even companies that are just beginning on their AI journey should be thinking about this, according to B12 co-founder and CEO Nitesh Banta. “With technology as powerful as AI, this is particularly true. There's so much unknown about the future of AI and it has the potential to both positively and negatively impact all aspects of society.” Companies should not only talk about the implications internally but should look for opportunities to learn from others, he added.
Perhaps what confuses companies is the fact that these discussions usually start at the societal level — such as debates over whether the technology should be sold to authoritarian regimes or whether robots will replace human jobs. Simply put, these are not issues at the adopter level, said Doug Barbin, principal and Cybersecurity and Emerging Technologies Practice leader of Schellman.
“Adopters of AI need to consider the sources and uses for AI technologies as they should with any other,” Barbin said. “For example, users of ML technology need to understand the quality, quantity, and especially the limitations of source data. As such, the old saying of garbage in garbage out applies especially when business decisions are made based on the outputs of the ML technology.” Some questions to consider, he said, include:
Does the ML technology take data from one source, such as a sales system, or from multiple sources?
Are there any glaring omissions like customer satisfaction or retention?
Are geographic, demographic, market, or other factors accounted for, or do the results tilt positively or negatively toward a specific segment?
“And when dealing in personal data, a whole additional host of issues come into play. In some cases, systematic actions are applied based on the analysis that occurs,” Barbin said.
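Barbin's questions amount to a basic audit of source data before trusting an ML system's outputs. As a minimal sketch of what such a check might look like in practice — the record structure, field names, and skew threshold here are all hypothetical, not anything Barbin or Schellman prescribes:

```python
# Hypothetical source-data audit: flag glaring omissions (missing fields)
# and results that tilt toward one segment, per the questions above.
from collections import Counter

def audit_records(records, required_fields, segment_field):
    """Return a list of data-quality issues found in the records."""
    issues = []
    # Glaring omissions: required fields with no value
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        if missing:
            issues.append(f"{missing} record(s) missing '{field}'")
    # Segment tilt: does one segment dominate the data?
    counts = Counter(r.get(segment_field) for r in records)
    if counts:
        segment, count = counts.most_common(1)[0]
        if count / len(records) > 0.7:  # arbitrary illustrative threshold
            issues.append(f"data tilts toward segment '{segment}'")
    return issues

# Illustrative records drawn from a single (hypothetical) sales system
records = [
    {"region": "NA", "satisfaction": None, "revenue": 100},
    {"region": "NA", "satisfaction": 4, "revenue": 250},
    {"region": "NA", "satisfaction": 5, "revenue": 90},
    {"region": "EU", "satisfaction": 3, "revenue": 120},
]
print(audit_records(records, ["satisfaction", "revenue"], "region"))
# → ["1 record(s) missing 'satisfaction'", "data tilts toward segment 'NA'"]
```

The point is not this particular script but the habit it represents: before business decisions rest on ML outputs, someone should be able to answer Barbin's questions with evidence rather than assumptions.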
About Douglas Barbin
As Chief Growth Officer and firmwide Managing Principal, Doug Barbin is responsible for the strategy, development, growth, and delivery of Schellman’s global services portfolio. Since joining in 2009, his primary focus has been to expand the strong foundation in IT audit and assurance to make Schellman a market-leading diversified cybersecurity and compliance services provider. He has developed many of Schellman's service offerings, served global clients, and now focuses on leading and supporting the service delivery professionals, practice leaders, and business development teams. Doug brings more than 25 years’ experience in technology-focused services, having served as a technology product management executive, mortgage firm CTO/COO, and fraud and computer forensic investigations leader. Doug holds dual bachelor's degrees in Accounting and Administration of Justice from Penn State as well as an MBA from Pepperdine. He has also taken postgraduate courses on Artificial Intelligence at MIT and maintains multiple CPA licenses in addition to most of the major industry certifications, including several he helped create.