As the AI revolution keeps accelerating, companies need to consider the ethical consequences.
As AI continues its rapid advancement, the need for ethical considerations becomes increasingly urgent. Thomas Telving, an expert in AI ethics, emphasizes the importance of acknowledging the ethical implications of AI’s capabilities. He explains,
“AI performs an increasing number of tasks that used to require human abilities. It often does that very well, but it also makes mistakes, and that – among other things – has initiated a lot of important ethical and legal discussions.”
Telving, who holds a master’s degree in philosophy and political science, highlights a critical issue he calls the responsibility gap.
“The responsibility gap becomes a problem when AI models make decisions with far-reaching consequences. Lawsuits until now have shown us that it is far from simple, and often counterintuitive, who is blamed – and praised, for that matter – for the outputs of AI models. It is relevant for everything from chatbots to AI-based financial tools and self-driving vehicles. Companies should be aware of this.”
Actions have consequences, and the more AI acts, the more consequences will follow. The need to discuss the ethical dimensions of AI therefore grows accordingly.
Another urgent question raised by AI concerns employment. Telving reflects on the technology’s impact on the workforce, stating,
“Companies must consider what their policy is if individual employees or entire groups of employees are made redundant by AI. Should they just fire them? Should they offer them outplacement? Or should they offer them other jobs in the organization? Being open about these questions gives employees a greater sense of security about a question that concerns many people today.”
The importance of companies taking a stance on how AI affects employment, and treating affected individuals with dignity, should not be underestimated. As to whether the outcome will favor man or machine, Thomas Telving is unsure.
“My best guess is that we will see a sort of compromise, where the human workforce in many jobs will have their tasks altered into overseeing that the AI-automated processes are carried out correctly. But in some companies, if pressured enough by competition, parts of the staff will most likely be replaced by machines.”
The responses companies may adopt will be diverse, ranging from workforce reductions to enhancing human-AI collaboration. Regardless of the approach, Thomas Telving stresses the significance of ethical considerations.
“We need to be careful not to judge companies for doing one thing or another. In the end, it is likely to be seen as a substantial societal question that should be discussed and solved politically. But for now, I believe companies must have the courage to take a position and be open about it.”
Regulatory and legislative frameworks often lag behind technological advancements, although the EU has been taking a leading role. Thomas Telving warns against hasty implementations by tech companies and advocates for a balanced approach that prioritizes ethical considerations alongside innovation.
“I certainly hope companies will take advantage of AI technology in a broad sense, but I also urge everybody to take the ethical considerations seriously. I believe this should be done for the sake of ethics, but you can find plenty of financial reasons for doing it as well.”
Thomas Telving has written the book “Killing Sophia” about human-robot interaction, as well as several articles on the ethics of artificial intelligence. He regularly features as a keynote speaker on the subject.
If you want to know more about the ethical dilemmas corporations may face with the coming of AI, don’t hesitate to sign up for our global webinar. It features Thomas Telving, takes place on May 30th, and is free for everyone. Sign up here.