August 6, 2019

The key ethical dilemmas that surround artificial intelligence

BY NICK BOYLAN

By 2025, the artificial intelligence (AI) software market is expected to grow to almost $60 billion, showcasing how prevalent this technology is becoming across all industries.

Whether you’re a copywriter who needs to know about AI or you work in one of the many industries set to be changed by machine learning, it’s important to understand the key ethical dilemmas surrounding the technology and its use.

Bias in datasets that inform AI

One issue facing the use of AI is bias: specifically, bias in the human-created data fed into any AI system. AI relies on techniques such as word embeddings, which can absorb and reproduce prejudice found in their source text. As powerful as machine learning – and AI in general – can be at identifying patterns, those patterns can carry cultural, religious, gender and racial bias.
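To make the word-embedding point concrete, here is a minimal Python sketch. It assumes the gensim library and its downloadable word2vec-google-news-300 vectors; the occupation words are chosen purely for illustration. Comparing how close each occupation sits to “she” versus “he” is a crude but telling probe of the gender skew inherited from the training text.

    # Sketch only: probe a pretrained word embedding for gender skew.
    # Assumes gensim is installed and the pretrained vectors can be downloaded.
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")  # embeddings trained on news text

    for occupation in ["nurse", "engineer", "receptionist", "programmer"]:
        to_she = vectors.similarity(occupation, "she")
        to_he = vectors.similarity(occupation, "he")
        print(f"{occupation}: she={to_she:.3f}, he={to_he:.3f}, gap={to_she - to_he:+.3f}")

The exact numbers matter less than the principle: whatever skew exists in the source text ends up baked into the embedding, and from there into any system built on top of it.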

The application of AI, such as its utilisation by data science teams, can also involve bias and raise a number of ethical questions, such as:

  • what is a ‘good’ or right outcome?
  • who does this affect?
  • is it ‘good’ against some measures but not others?

While AI can run automated processes, it’s up to those creating and inputting the data to ensure a lack of bias and a high standard of ethics. If you input biased data, you’ll be left with biased AI. The social impact of biased AI should not be underestimated.

Unethical processes affecting AI users’ trust

Another ethical dilemma centred on AI is the current lack of consumer trust. When you consider the questionable tactics of apps like FaceApp, there are fair arguments as to why you might not trust AI. However, those building AI can increase trust by being more transparent about how their systems work and by delivering valuable, consistent outcomes to users.

For example, AI is already particularly good at analysing images. The broader implications this could have for, say, the medical industry could be huge. In theory, someone could use an app on their phone to take a photo of a suspicious mole and upload it to an AI system trained to identify skin cancers.

Not only could this save lives, but what about the cost and time benefit to the user who doesn’t have to leave their home for a skin check-up? Not to mention the amount of time and resources this could free up for advancements in the area of dermatology and skin cancer research.
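In code, that kind of workflow could look something like the sketch below. Everything in it is hypothetical: the model file, the photo name and the single-probability output stand in for whatever a real, clinically validated system would actually use.

    # Sketch only: running a phone photo through a trained skin-lesion classifier.
    # The model file and image path are hypothetical placeholders.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("skin_lesion_classifier.h5")  # hypothetical trained model

    image = tf.keras.utils.load_img("mole_photo.jpg", target_size=(224, 224))
    batch = np.expand_dims(tf.keras.utils.img_to_array(image) / 255.0, axis=0)

    probability = float(model.predict(batch)[0][0])  # assumes a single sigmoid output
    print(f"Estimated probability the lesion is malignant: {probability:.1%}")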

Fear of automation

One consistent fear among employees across industries is that AI is coming to replace their jobs through automation. However, this is a misconception: AI exists to make various roles easier for the people working in them.

AI’s power of automation can allow for better connections between colleagues, their teams and their projects. It can also allow those in executive roles to investigate what parts of their job can be performed by AI, freeing up time to focus on more human-based decisions.

At present, those decisions cannot be replaced by AI, which leaves an important balance to strike between human judgement and automation when creating business change. While AI can be powerful and make certain tasks easier, at the end of the day humans are still responsible for the ethical choices that business change involves.

What about a robot’s right to life?

Another tough question posed by AI: as these systems become more complex, with structures of reward and aversion, should we be reconsidering the way humans interact with them?

AI continues to show more and more human-like qualities, raising questions about how different AI-related decisions could breach a machine’s right to life.

Take genetic algorithms, for example. These create a mass of candidate cases, keep the “survivors” and eliminate those with errors. While this may improve a system, do we consider these algorithms a display of murder?
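For readers unfamiliar with the technique, here is a minimal, illustrative Python sketch of that survive-or-be-eliminated loop. The fitness function and every parameter are invented purely for illustration.

    # Sketch only: a toy genetic algorithm showing the "survivors" idea.
    # The fitness function and all parameters are invented for illustration.
    import random

    def fitness(candidate):
        # Toy objective: candidates closer to 42 score higher.
        return -abs(candidate - 42)

    population = [random.uniform(0, 100) for _ in range(20)]

    for generation in range(50):
        # Rank the population and keep only the fittest half (the "survivors").
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]

        # The rest are discarded; survivors produce slightly mutated offspring.
        offspring = [s + random.gauss(0, 1.0) for s in survivors]
        population = survivors + offspring

    print(f"Best candidate after evolution: {max(population, key=fitness):.2f}")

Every pass through the loop discards half the population outright, which is exactly the step the right-to-life question targets.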

During the 2018 Australian Engineering Conference, Professor Mary-Anne Williams, Director of The Magic Lab, Centre for Artificial Intelligence, shared her insight into distinguishing between a human and a robot when making decisions.

“You’ll see people attacking a very human-like creation and maybe you won’t know the difference,” she said. With processes like genetic algorithms, do we consider the elimination of failed candidates to be an attack on AI? Because AI can display human-like qualities, it raises questions about how we should be treating it. Should we be casting aside cases from genetic algorithms so dismissively?

These ethical debates are sure to progress alongside the use of AI across industries.

The impact AI will have on the future of work

With the AI software market booming and its use estimated to enhance business productivity by up to 40 per cent by 2035, organisations should be ready for how AI will impact the future of work.

It could also be a hot topic at the upcoming Top Tech Trends Debate, held by Churchill Club.

This event showcases five industry-leading professionals, who each put forward their vision of the next big thing before an audience vote. More than an exciting debate, the event will highlight emerging technologies and the minds investing in our future. Could AI be involved in the winning top trend this year, or is it already old news?

Being held on Tuesday 3 September 2019, this is an event not to be missed.
