(Article originally published in Rx Data News, Issue 5 Vol. 1)
Even when its early developments might seem primitive by modern standards, technological progress has always been a defining characteristic of humanity. Like any new tool, technology has enormous capacity to be used in all the wrong ways—atomic and biological weapons come to mind. And even with better intentions, technology's impact can still skew negative, such as when society's immense reliance on it harms our environment, health, or thought patterns.
As president of Tampa accounting firm Schellman & Company, Bizwomen Headliner Avani Desai deals with topics like emerging healthcare issues and privacy concerns for organizations, so distractions come with the territory.
A panel of security professionals discusses three tips for how CISOs and risk officers can improve board communication around security.
Prevention and detection aren't enough. To better defend against future intrusions, you need a strong digital forensics team that can analyze attacks. In a world where enterprises are embracing the fact that breaches are a matter of "not if, but when," it is becoming increasingly important to develop internal and external resources to investigate and assess the impact of attacks after they have happened.
The cybersecurity risk landscape is constantly evolving, and regulations like GDPR are making it even more crucial for organizations to protect their customers' and users' privacy. By failing to implement adequate security protections, companies risk not only the loss of sensitive data but also reputational damage, regulatory penalties, and fines. From ransomware attacks to insider threats, businesses face threats from every conceivable angle, making comprehensive, foolproof cybersecurity protection an increasingly difficult feat.
It is predicted that, by 2025, robots and machines driven by artificial intelligence (AI) will perform half of all productive functions in the workplace. Companies already use robots across many industries, but deployment at that scale is likely to prompt new moral and legal questions. Machines currently have no protected legal rights but, as they become more intelligent and act more like humans, will the legal standards at play need to change? To answer this question, we need to take a good hard look at the nature of robotics and our own system of ethics, tackling a situation unlike anything the human race has ever known. Robotics today is still so comparatively underdeveloped that most of these questions remain hypotheticals that are nearly impossible to answer.

Can, and should, robots be compensated for their work, and could they be represented by unions (and, if so, could a human union truly stand up for robot working rights, or would there always be an inherent tension)? Would robots, as workers, be eligible for holiday and sick leave? If a robot harms a co-worker, who would be responsible? If a robot invents a new product in the workplace, who or what owns the intellectual property?

Can robots be discriminatory, and how should that be dealt with? Amazon was developing a recruitment engine to find top talent and make hiring more efficient, and the company found that the AI system had developed a bias against female candidates. The system was trained to screen applications by observing patterns in old CVs; one of those patterns was that the CVs were mostly submitted by men, and so the machine taught itself to vet out female applicants. This was certainly not Amazon's intention, but it shows how robots can learn negative attitudes based simply on their training.
And if a robot were sexist to a co-worker, how should that be dealt with?
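The mechanism behind the recruitment-engine story can be illustrated with a toy sketch. This is a hypothetical example, not Amazon's actual system: a trivially simple scoring model, trained on a historical pool of "hired" CVs that is mostly male, ends up penalizing a female-associated token even though nothing in the code mentions gender.

```python
# Toy illustration of how a screening model can absorb historical bias.
# The data, tokens, and scoring scheme are invented for this sketch --
# this is NOT Amazon's system, just the general pattern it exemplifies.

from collections import Counter

# Historical "hired" CVs: mostly male, so male-associated tokens
# dominate the positive examples.
hired = [
    "led engineering team captain chess club",
    "men's rugby captain software developer",
    "men's debate society systems engineer",
    "software developer led backend team",
]
rejected = [
    "women's chess club software developer",
    "women's coding society systems engineer",
]

def token_weights(pos, neg):
    """Weight each token by how much more often it appears in hired CVs.

    Real systems learn coefficients by optimization; a raw frequency
    difference is enough to show the effect.
    """
    pos_counts = Counter(t for cv in pos for t in cv.split())
    neg_counts = Counter(t for cv in neg for t in cv.split())
    tokens = set(pos_counts) | set(neg_counts)
    return {t: pos_counts[t] - neg_counts[t] for t in tokens}

def score(cv, weights):
    """Sum the learned weights of a CV's tokens."""
    return sum(weights.get(t, 0) for t in cv.split())

weights = token_weights(hired, rejected)

# Two CVs identical except for one gendered token:
cv_a = "men's chess club software developer"
cv_b = "women's chess club software developer"
print(score(cv_a, weights) > score(cv_b, weights))  # prints True
```

Nothing here encodes gender explicitly; the skew comes entirely from the historical labels, which is exactly why such bias is easy to introduce and hard to spot.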