The Ethics of Robots in the Workplace
It is predicted that, by 2025, robots and machines driven by artificial intelligence (AI) will perform half of all productive functions in the workplace. Companies already use robots across many industries, but operating at that scale is likely to prompt new moral and legal questions. Machines currently have no protected legal rights but, as they become more intelligent and act more like humans, will the legal standards at play need to change? To answer this question, we need to take a good hard look at the nature of robotics and at our own system of ethics, tackling a situation unlike anything the human race has ever known.
Robotics is still at such an early stage of development that most of these questions remain hypothetical and nearly impossible to answer. Can, and should, robots be compensated for their work? Could they be represented by unions – and, if so, could a human union truly stand up for robot working rights, or would there always be an inherent tension? Would robots, as workers, be eligible for holiday and sick leave? If a robot harms a co-worker, who is responsible? If a robot invents a new product in the workplace, who or what owns the intellectual property?
Can robots be discriminatory and, if so, how should that be dealt with? Amazon was developing a recruitment engine to find top talent and make hiring more efficient, but the company found that the AI system had developed a bias against female candidates. The system was trained to screen applications by observing patterns in old CVs, and one of those patterns was that the CVs were mostly submitted by men – so the machine taught itself to screen out female applicants. This was certainly not Amazon’s intention, but it shows how robots can learn negative attitudes simply from the data they are trained on. And if a robot were sexist towards a co-worker, how should it be disciplined?
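The mechanism behind this kind of bias is easy to reproduce in miniature. The toy screener below (entirely hypothetical data and scoring, not Amazon’s actual system) rates each CV token by how often it appeared in past hires versus rejections – so a gendered word that merely correlated with past rejections ends up penalising an otherwise identical candidate:

```python
from collections import Counter

# Hypothetical historical CVs: labels reflect past hiring decisions,
# which in this toy dataset skew heavily toward male applicants.
history = [
    (["python", "chess club"], 1),
    (["java", "football captain"], 1),
    (["python", "debate team"], 1),
    (["python", "women's chess club"], 0),
    (["java", "women's coding society"], 0),
    (["sql", "rowing"], 1),
]

# Score each token: +1 every time it appears in a hired CV, -1 otherwise.
scores = Counter()
for tokens, hired in history:
    for token in tokens:
        scores[token] += 1 if hired else -1

def screen(cv_tokens):
    """Recommend a CV when its tokens score positively overall."""
    return sum(scores[t] for t in cv_tokens) > 0

# Two CVs with identical skills: the gendered token alone flips the outcome.
print(screen(["python", "chess club"]))          # → True (recommended)
print(screen(["python", "women's chess club"]))  # → False (screened out)
```

No one programmed a rule against women here; the bias is an emergent property of optimising against skewed historical data, which is essentially what was reported in the Amazon case.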
One of the key questions linked to AI is whose intelligence we’re talking about. Avani Desai, the principal and executive vice president of independent security and privacy at Schellman & Company, uses the example of autonomous cars: “We have allowed computers to drive and make decisions for us, such as if there is a semi coming to the right and a guard rail on the left, the algorithm makes the decision what to do.” But things, she suggests, may not be that simple. “Is it the car that is making the decision or a group of people in a room that discuss the ethics and the cost, and then provided to developers and engineers to make that technology work?”
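Desai’s point can be sketched in a few lines of code. In this deliberately simplified, hypothetical example, the “intelligence” at the wheel is just a lookup: a committee’s ranking of outcomes, fixed in advance by people and handed to engineers, which the car executes at runtime:

```python
# Hypothetical severity scores agreed in advance by an ethics review
# (lower = preferred outcome). The values are illustrative, not real.
POLICY = {
    "brake_hard": 1,               # preferred when there is room to stop
    "swerve_into_guard_rail": 2,   # risks the occupant
    "continue_toward_semi": 9,     # risks a head-on collision
}

def choose_action(available):
    """Pick whichever available action the pre-agreed policy scores lowest."""
    return min(available, key=POLICY.__getitem__)

# The scenario Desai describes: a semi to the right, a guard rail to the left.
print(choose_action(["swerve_into_guard_rail", "continue_toward_semi"]))
# → "swerve_into_guard_rail"
```

The algorithm makes no moral judgment of its own; it executes one that humans made in a room beforehand – which is exactly the ambiguity Desai is pointing at.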
About Avani Desai
Avani Desai is the President at Schellman. She has more than 15 years of experience in IT attestation, risk management, compliance and privacy, and her primary focus is on emerging healthcare issues and privacy concerns for organizations. Named one of the 2017 Global Leaders in Consulting by Consulting Magazine, she has also been featured and published in the ISSA Journal, ITSP Magazine, ISACA Journal, Information Security Buzz, Healthcare Tech Outlook, and many more. Avani also sits on the board of Catalist, a not-for-profit that empowers women by supporting the creation, development and expansion of collective giving through informed grantmaking. In addition, she is co-chair of 100 Women Strong, a female-only venture philanthropic fund that addresses problems affecting women and children in the community.