THE DARK SIDE OF ARTIFICIAL INTELLIGENCE
Humanity needs to be careful lest fanciful “Sophia” becomes “Frankenstein’s monster”.
Artificial intelligence is the simulation of human
intelligence processes by machines, especially computer systems. Specific
applications of AI include expert systems, natural language processing, speech
recognition and machine vision.
As the hype around AI has accelerated, vendors have been
scrambling to promote how their products and services use it. Often, what they
refer to as AI is simply a component of the technology, such as machine
learning. AI requires a foundation of specialized hardware and software for
writing and training machine learning algorithms. No single programming
language is synonymous with AI, but Python, R, Java, C++ and Julia have
features popular with AI developers.
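To make that concrete, here is a minimal sketch of what writing and training a machine learning model can look like in Python, one of the languages mentioned above. The library (scikit-learn), the built-in dataset and the choice of model are illustrative assumptions only, not a recommendation.

```python
# A minimal sketch of training a machine learning model in Python.
# scikit-learn, the iris dataset and logistic regression are arbitrary
# illustrative choices; any comparable stack would make the same point.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # small built-in example dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)        # this step is the "training"

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```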
With all the hype around Artificial Intelligence - robots, self-driving cars and the like - it can be easy to assume that AI doesn’t yet touch our everyday lives. In reality, most of us encounter Artificial Intelligence in some way or another almost every single day. From the moment you wake up and check your smartphone to watching another Netflix-recommended movie, AI has quickly made its way into our daily routines. According to a study by Statista, the global AI market is set to grow by up to 54 percent year on year.
From a bird’s-eye view, AI gives a computer program the ability to think and learn on its own. It is a simulation of human intelligence (hence, artificial) in machines, built to do things that we would normally rely on humans for. There are three main types of AI, classified by capability - Weak AI, Strong AI and Super AI.
Weak AI - Focuses on one task and cannot perform beyond its limitations (common in our daily lives).
Strong AI - Can understand and learn any intellectual task that a human being can (researchers are striving to reach strong AI).
Super AI - Surpasses human intelligence and can perform any task better than a human (still a concept).
ChatGPT is considered to be one of the most remarkable tech
innovations of recent times. Capable of generating text on almost any topic or
theme, it is viewed as just about the most powerful AI chatbot around.
But with the ChatGPT data input scandal hitting the news
headlines globally, generative AI has come under scrutiny, raising broader
questions surrounding the ethics of AI, its application, and how these
escalating problems can be dealt with.
Concerns have been raised about bias and discrimination in AI algorithms, as these systems can inadvertently perpetuate existing societal biases. This has significant implications for hiring practices, where AI-powered resume screening algorithms may discriminate against certain groups.
Although AI was originally positioned as a way to remove the threat of personal bias from a range of decision-making processes, AI bias has been the centre of a huge amount of attention in recent years.
Embarrassingly high-profile cases of AI bias have hit global headlines, from Amazon’s sexist recruitment AI to an American healthcare algorithm, used to make decisions about more than 200 million people, that was subsequently found to discriminate against Black patients. Because AI relies upon human-labelled data, all AI systems are at risk of becoming biased.
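To see how human-labelled data can poison a model, consider the hedged sketch below. The data is entirely synthetic and the feature names (“skill”, “group”) are invented for illustration: the historical labels penalise one group, and a model trained on them reproduces that prejudice even though the protected attribute says nothing about ability.

```python
# Sketch: bias in human-labelled training data propagates into a model.
# All numbers and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)             # genuine qualification signal
group = rng.integers(0, 2, size=n)     # protected attribute (0 or 1)

# Historical labels: human reviewers systematically under-hired group 1.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# A model fitted to those labels learns the same prejudice.
X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two candidates with identical skill, differing only in group:
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
# the group-1 candidate scores markedly lower despite identical skill
```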
Right now, the only way to combat this is through the introduction of explainable AI (XAI), which enables decision-making processes to be questioned and faulty processes to be identified and corrected. The problem is that this approach has yet to be widely adopted. And it’s not the only concern.
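As a flavour of what XAI-style questioning can look like in practice, one simple and widely used probe is permutation importance, available in scikit-learn. The sketch below reuses the invented data from the previous example and asks the trained model which inputs its decisions actually depend on; heavy reliance on the protected attribute is exactly the kind of faulty process XAI is meant to surface.

```python
# Sketch: probing a model's decisions with permutation importance.
# Data and feature names are invented, echoing the previous example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
X = np.column_stack([skill, group])
y = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0  # biased labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops.
# A noticeable drop for "group" flags reliance on the protected attribute.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["skill", "group"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

A probe like this does not fix a biased model, but it makes the bias visible and contestable, which is the first step the case for XAI rests on.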
AI is becoming ever more advanced. In 2022, The Lancet reported that AI could determine a person’s race from an X-ray - something that even the most experienced doctor would be unable to do. But how can we ensure that such data is used properly and ethically?
If this advanced AI were combined with the faulty AI of the previously mentioned US healthcare algorithm, we could find ourselves in a position where a Black patient is discriminated against before they even meet a doctor. Lives could be put at risk.
Even moving away from healthcare and bias, AI carries a whole range of ethical concerns. If I use a bot to conduct the first phase of interviews, but one interviewee has a speech impediment or a heavy accent, there would be an ethical duty to interview that candidate in a different way. A human could make that decision. A bot, programmed to expect ‘normal’, would simply dismiss that candidate as unsuitable.
Artificial General Intelligence (AGI) has always been seen as the ultimate aim.
Fortunately, it is still a long way off. But we are at a point where our interactions with AI carry some semblance of sentience, and that raises questions about where we should allow the technology to go.
If “dumb” ChatGPT has the potential to be entirely good or
entirely evil, how do we prevent a dystopian future of rogue machines operating
for their own good, and not humanity’s?
These are questions that science fiction writers have raised for a long time, most notably Isaac Asimov with his Three Laws of Robotics, but we seem to be diving headlong into a shark tank of science fact.
You can’t build a nuclear bomb at your kitchen table, but anyone can purchase the components to create highly sophisticated AI. It doesn’t take much to create something for nefarious purposes, and to do so with a significant degree of anonymity. That’s something we have to consider when thinking about AI’s future.
Regulation is the obvious first step, but it is difficult to know how it can be managed. There have been some tentative movements, such as GDPR’s requirement that all automated decisions be explainable, and the new EU AI Act aimed at regulating “high-risk” AI. But comprehensive, potentially intrusive regulation - the active monitoring of data centres, forced compliance and the intervention of tech producers - is still a considerable distance away.
AI tech is out there now. No matter how scary it might be, there’s no putting the genie back in the bottle. And there are many reasons why we wouldn’t want to. Technologies like natural language processing (NLP) are saving companies billions through fraud detection while supporting compliance and identifying the vulnerable. Endless labour-saving processes are in place across sectors, thanks to AI and intelligent automation.
To secure a future that is both safe and productive, we need not only to be aware of AI’s limitations but also to be wary of its dark underbelly, and to make changes as we move forward.
Humanity needs to be careful lest fanciful “Sophia” becomes
“Frankenstein’s monster”.