
How worried should we be about artificial intelligence?

TechTalk

AI is everywhere, transforming lives. But let's not forget that we need to keep its ethics in check.

Artificial intelligence (AI) is making headlines around the world every day. Understandably so, when you can see it changing so many facets of daily life, from healthcare and education all the way to which social media posts appear in your feed. With its exponential growth comes the fear that AI will become an uncontrollable monster that takes over the world with its intelligence (as many fictional movies depict).

This may sound a little overly dramatic - especially when you look at how AI can be used for good. In healthcare, we’re seeing AI help with the early detection of breast cancer without biopsies. We’re seeing AI detect fraud by analysing large amounts of transactional data in real time, saving banks and their customers huge amounts of money. We’re even seeing AI predict natural disasters — systems are analysing seismic patterns and predicting the location of earthquakes and aftershocks.

As a company that’s built on AI, we believe in its potential to do good. We also think that its dangers should be taken seriously. This is why we believe ethics need to be top of mind when creating and regulating these amazing AI creations.

Before we dive into the need for ethics in AI, let’s gather a basic understanding of what AI is and how it’s impacting the world today.

What is artificial intelligence?

There are many definitions of AI, but let’s break it down to its basic form, as Max Tegmark does in his book Life 3.0: Being Human in the Age of Artificial Intelligence.

He defines 'intelligence' as the ability to accomplish complex goals, 'artificial intelligence' as non-biological intelligence, 'narrow intelligence' as the ability to accomplish a narrow set of goals (e.g. playing chess or driving a car), and 'general intelligence' as the ability to accomplish virtually any goal, including learning.

Humans haven’t achieved perfect general intelligence (we still sometimes struggle to solve problems, even if we lock ourselves away in a room for three days), but we’re far further along than our artificial counterparts. If an AI ever achieves human-level general intelligence, it will likely be able to design and improve AI systems better than humans can. It’s expected that this will lead to what’s called an intelligence explosion, taking AI to a level of general intelligence far beyond our own (called 'super intelligence').

The arrival of the intelligence explosion is said to be anywhere from 50 to 500 years away, yet it’s feared far more than its older sibling, artificial narrow intelligence (ANI). That makes sense - there’s no chance that the chess program on your computer will take over the world - but we must be careful not to neglect the real-world harm ANI can cause right now. There’s an increasing need for ethics and regulation within the AI that’s being created every day. Let’s take a look at why.

Artificial narrow intelligence is the AI that surrounds us today. You can see it in Spotify customising your playlists, Google suggesting searches as you type, and your Tesla learning where to adjust its suspension for the speed bumps on your way home.

To explain how these bots or agents work, let’s think of artificial intelligence as a black box: it takes a set of inputs from the user and produces its own output based on the various decision-making mechanisms it has learnt.

In simpler terms, we’ve created many of these boxes, each able to perform a different task. Give some of them a picture of a dog and out pops the label “dog”. Speak to others and they'll spit out a transcription of what was said. Still other boxes can take the same speech as input and produce a translation of it in almost any language.
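
To make the metaphor a little more concrete, here’s a minimal sketch of three such boxes. The choice of the open-source Hugging Face transformers library, and the placeholder file names, are our own assumptions for illustration; the post itself doesn’t prescribe any specific tooling.

```python
# Three "black boxes": input goes in, output comes out, and the learnt
# decision-making in between stays hidden from the user.
# Assumes the Hugging Face transformers library (pip install transformers)
# plus placeholder files dog.jpg and speech.wav on disk.
from transformers import pipeline

image_box = pipeline("image-classification")             # picture in, label out
print(image_box("dog.jpg"))                               # e.g. [{'label': 'golden retriever', 'score': 0.97}, ...]

speech_box = pipeline("automatic-speech-recognition")     # speech in, transcript out
transcript = speech_box("speech.wav")["text"]
print(transcript)                                         # e.g. 'turn left at the next corner'

translate_box = pipeline("translation_en_to_fr")          # text in, French out
print(translate_box(transcript))                          # e.g. [{'translation_text': 'tournez à gauche ...'}]
```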

What impact has AI had on the world?

Beyond the simple tasks these black boxes can do, they have also allowed the responsibility for some significant business decisions to shift to algorithms — especially in conservative areas like the financial industry.

Robo-advisors, for instance, are being favoured thanks to higher promised prediction accuracy and/or lower costs. In fact, according to the business data platform Statista, around 70.5 million people worldwide are entrusting $1.44 trillion worth of assets to robo-advisors in 2020, and Statista estimates that number will rise to 147 million people by 2023.

Emulating the services of financial advisors, these robo-advisors recommend and manage the investment portfolios of their customers. For example, you can supply basic information about your investment goals to your robo-advisor through an online questionnaire. From there the model crunches the data you provide and invests your assets accordingly. Once your funds are invested, the software automatically rebalances your portfolio regularly — making the changes needed to align with your investment targets.
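
Here’s a deliberately simplified sketch of that questionnaire-to-portfolio loop. The risk scoring, asset classes and rebalancing rule are all invented for illustration; a real robo-advisor’s model is far more sophisticated.

```python
# A toy robo-advisor: questionnaire answers in, target allocation out, then
# periodic rebalancing. Every number and rule here is invented for illustration.

def risk_score(answers: dict) -> float:
    """Turn questionnaire answers into a 0-1 risk appetite."""
    horizon = min(answers["years_to_goal"], 30) / 30   # longer horizon -> more risk
    tolerance = answers["loss_tolerance"] / 10         # self-reported, scale of 0-10
    return 0.5 * horizon + 0.5 * tolerance

def target_allocation(score: float) -> dict:
    """Map risk appetite to an equity/bond split."""
    equities = round(0.3 + 0.6 * score, 2)             # between 30% and 90% in equities
    return {"equities": equities, "bonds": round(1 - equities, 2)}

def rebalance(holdings: dict, target: dict) -> dict:
    """Work out the trades needed to bring the portfolio back to target."""
    total = sum(holdings.values())
    return {asset: round(target[asset] * total - holdings.get(asset, 0), 2)
            for asset in target}

answers = {"years_to_goal": 20, "loss_tolerance": 6}
target = target_allocation(risk_score(answers))
print(target)                                          # {'equities': 0.68, 'bonds': 0.32}
print(rebalance({"equities": 8000, "bonds": 2000}, target))
# {'equities': -1200.0, 'bonds': 1200.0} - sell some equities, buy some bonds
```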

This is another prime example of the black box in action — the user inputs data (the investment goals) to which the black box creates an output (the suggested asset allocation) based on the decision-making tools it’s learnt.

The finance industry is not the only area succumbing to the rise of AI. In the automotive sector, several AI applications — from vehicle design to sales support — are being implemented. More specifically, AI is the reason we’re moving towards driverless cars.

Most technical experts, when talking about autonomous cars, will refer to levels of autonomy. This describes how much of the driving is being done by the car’s black box and how much is being done by humans. Many cars manufactured today place things like cruise control, automatic braking and blind spot detection in the control of the computer. Various sensors, coupled with GPS navigation and pre-programmed logic around road rules, feed the vehicle’s environment into the AI “black box”, which responds with the appropriate combination of steering, accelerating or braking.
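
As a rough sketch of that loop, the rule-based snippet below turns a snapshot of (hypothetical) sensor readings into a driving command. Real systems fuse many sensors with learned models; this only shows the shape of the input-to-output loop.

```python
# A toy driver-assist loop: sensor readings in, a driving command out.
# All sensor fields and thresholds are invented purely for illustration.

def decide(sensors: dict) -> dict:
    command = {"steer": 0.0, "accelerate": 0.0, "brake": 0.0}

    # Automatic braking: slow down if the car ahead is too close.
    if sensors["distance_to_car_ahead_m"] < 20:
        command["brake"] = 0.8
    # Cruise control: hold the set speed when the road ahead is clear.
    elif sensors["speed_kmh"] < sensors["cruise_set_kmh"]:
        command["accelerate"] = 0.3

    # Lane keeping: steer gently back towards the centre of the lane.
    command["steer"] = round(-0.1 * sensors["lane_offset_m"], 2)

    # Blind spot detection: warn if the driver indicates into an occupied lane.
    if sensors["blind_spot_occupied"] and sensors["indicator_on"]:
        command["warn_driver"] = True

    return command

reading = {"distance_to_car_ahead_m": 45, "speed_kmh": 96, "cruise_set_kmh": 100,
           "lane_offset_m": 0.4, "blind_spot_occupied": True, "indicator_on": True}
print(decide(reading))
# {'steer': -0.04, 'accelerate': 0.3, 'brake': 0.0, 'warn_driver': True}
```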

As for Teslas and Waymos, these cars already sit at level 2 or higher on this scale. Teslas on the road have a feature called Autopilot. Simply put, “autopilot enables your car to steer, accelerate and brake automatically within its lane.” A human driver is still required for safe operation and emergency procedures.

By contrast, the T-pod by Einride, an all-electric, driver-free truck, is transporting goods around an industrial zone in Sweden. Einride, a Swedish startup, launched the first fully self-driving truck allowed on public roads at the beginning of 2019. The T-pod, remotely supervised by an operator, is set to run for Schenker (a German logistics giant) until the end of 2020.

Where could it all go wrong?

“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.” — Max Tegmark

AI is rapidly infiltrating every aspect of our consumer-driven lives and is considered the main driver of growth, productivity, innovation, competitiveness and job creation for the 21st century. This is evident in the examples above of AI being used for the betterment of society. But, as we’re so often shown in life, there are two sides to every coin.

If you’re on Facebook or Twitter, then you know that the data you give to these networking giants (including your purchasing and browsing history) is bought and used by companies that want to sell you things. But the extent to which this data harvesting has been used to influence our world (and not in a good way) has only recently come to light.

One such scandal that has had its fair share of the spotlight is that of Cambridge Analytica, which unfolded in 2018. Cambridge Analytica was a British political consulting firm that specialised in what’s called ‘psychographic’ profiling — it used data collected online to create personality profiles of voters. In short, Facebook exposed the data of over 87 million of its users to Cambridge Analytica. By harvesting this data, Cambridge Analytica was able to identify those who could easily be persuaded and, in turn, sway public opinion ahead of significant political elections.

Here, private user data was combined with algorithms, and targeted ad campaigns came out of the black box — campaigns used to influence opinions in elections and referendums.

Another tech giant that’s been no stranger to scandal is Google. A study by researchers from the University of Washington and the University of Maryland found that gender bias works its way into web searches when people look for images representing careers and jobs.

For example, when searching for images of doctors, Google’s black box spits out mostly male doctors, whereas searches for nurses return mostly women. The research also found that across professions, women, and black women in particular, are underrepresented on average.

That said, Google does not deliberately prioritise images that match these patterns. The results reflect industries and technologies that have historically been dominated by men, and that dominant demographic is what shows up in the search results.

This is harmful as it perpetuates and reinforces old stereotypes and has the potential to inhibit social progress.

As long as it’s legal, it’s ok, right?

Something being legal doesn't necessarily make it right - think of billionaires who manage to legally avoid paying tax. But who’s to say it’s wrong and who’s going to regulate the wrongdoing in the future — especially when it comes to AI?

To help answer that, the Institute for Ethical AI and Machine Learning — a UK-based research centre — carries out highly technical research into processes and frameworks that support the responsible development, deployment and operation of machine learning systems. It has formulated eight principles that serve as a sounding board for the fairness and ethics of AI systems.

There’s one specific principle that clearly points out the wrongdoing in the Cambridge Analytica scandal.

Responsible Machine Learning Principle No 7: Trust by privacy

The Institute’s seventh principle commits practitioners to build and communicate processes that protect and handle the data of everyone who may interact with their system, whether directly or indirectly.

In the Cambridge Analytica scandal, Facebook distributed the data of about 320,000 Facebook profiles, along with all of their friends’ profiles — amounting to over 87 million people’s data. This may seem sound, since those 320,000 people consented to their data being collected by completing a personality test, but they never consented to that information being sold, nor did their friends consent to their data being used at all. Regardless of what followed this harvesting of data, what Facebook did was ethically wrong according to the Institute for Ethical AI and Machine Learning.

That said, what followed highlighted what data in the wrong hands has the potential to do. Like persuading a nation to favour one presidential candidate over the others. So much for the land of the free.

Mark Zuckerberg’s response was underwhelming - he only addressed the scandal five days after it made headlines. That said, he has since committed to changing how the platform protects users’ data, as well as to auditing any apps that can access large amounts of information.

As for Google and its biased algorithm, it’s even harder to pinpoint the blame. Enter the Institute for Ethical AI.

Responsible Machine Learning Principle No 2: Bias evaluation

The Institute’s second principle commits practitioners to continuously develop processes that allow them to understand, document and monitor bias in development and production.

Here’s what we mean by “it’s even harder to pinpoint the blame”: we’re quick to blame the creators of the algorithm, but this bias runs much deeper. As said above, the bias in the Google image search is a perpetuation of social biases that already exist. We know that women are underrepresented in fields that were male-dominated for decades. So if you train a black box on real-world data, it’s going to pick up on these real-world biases.
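
A small, self-contained way to see this in action: feed a “black box” a deliberately skewed toy dataset and watch it hand the skew straight back. The numbers below are fabricated purely to illustrate the mechanism.

```python
# A toy "image search ranker" trained on deliberately skewed, made-up data.
# The point: the model simply reproduces whatever pattern history hands it.
from collections import Counter

# Fabricated "historical" records of (profession, gender) - skewed on purpose:
# 90% of doctor records are men, 90% of nurse records are women.
data = ([("doctor", "man")] * 90 + [("doctor", "woman")] * 10 +
        [("nurse", "woman")] * 90 + [("nurse", "man")] * 10)

def ranked_results(query: str):
    """Rank groups by how often they appear with the query in the training data."""
    counts = Counter(gender for prof, gender in data if prof == query)
    total = sum(counts.values())
    return [(gender, count / total) for gender, count in counts.most_common()]

print(ranked_results("doctor"))  # [('man', 0.9), ('woman', 0.1)] - the skew comes straight back out
print(ranked_results("nurse"))   # [('woman', 0.9), ('man', 0.1)]
```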

Google has not taken this lightly. They have since adopted their own set of AI principles to instil in their employees and have also incorporated machine learning fairness into their algorithms to “prevent their technology from perpetuating negative human bias.” But this is a complex problem, and some people argue that Google has overcorrected. In Douglas Murray’s 2019 book The Madness of Crowds, the author eloquently shines a light on what he sees as a ridiculous consequence of Google’s new policies:

“If you search on Google Images for ‘Gay couple’, you will get row after row of photos of happy gay couples. They are handsome, gay people. Search for ‘Straight couple’ by contrast and at least one to two images on each line of five images will be of a lesbian couple or a couple of gay men. Within just a couple of rows of images for ‘Straight couple’ there are actually more photographs of gay couples than there are of straight ones, even though ‘Straight couple’ is what the searcher has asked for.

...It seems that to strip computers of the sort of bias that human beings suffer from, they have been set off to create a new type of non-bias.”

When you take a step back, you can see that there’s no silver bullet for detecting biases within algorithms — even for a tech giant like Google.

So, where to from here?

“When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and AI, we don’t want to learn from our mistakes. We want to plan ahead.” — Max Tegmark

Tegmark says we should plan ahead rather than fix the wrongdoing after it’s done. We at Naked believe that, realistically, there’s room for both. By improving our current ANI, we protect ourselves from getting general intelligence wrong. You’re helping with that right now — by reading these very words, you’re educating yourself on why there’s a need for ethics in AI. And as the Facebook and Google scandals above show, once you the consumer can see how AI can negatively impact you, you have the power to call out its makers and influence real change.

Similarly, companies should build their AI with the end user’s best interests in mind. At Naked, we believe — just as AI is at the core of many businesses — ethics should be at the core of any AI system.

That said, we need to keep in mind that not everyone has the same goals — not everyone wants to create good AI. Just as with anything in life, there’s a fundamental need for boundaries. While AI regulation is still finding its way and lacks comprehensive government oversight, we believe it’s a step in the right direction: a step towards ensuring that AI makes the world better, and a step towards changing the world as we know it.

The Responsible Machine Learning Principles
