Artificial intelligence technologies are developing at an astonishing speed. It seems that barely a day goes by without AI “revolutionising” another aspect of our lives. From healthcare to space exploration to writing blog posts (no, we didn’t use ChatGPT!), AI is already far-reaching in its applications. And it’s only going to get more so.
So what does that mean for us? Is AI a force for good? What, if any, are its downsides?
Let’s take a look.
What is artificial intelligence?
In the 1950s, scientists started programming computers to mimic human decision-making.
This developed into research around ‘machine learning’. Computers were taught to learn for themselves. Solve problems. And remember their mistakes when deciding what to do next.
In more recent years, scientists have added ‘machine perception’ to the mix. This involves giving machines and robots sensors that help them see, hear, feel and taste things, then adjust how they behave based on what they sense. The idea is that the more this technology develops, the more robots will be able to ‘understand’ and read situations, and choose a response based on the information they pick up.
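To make that sense-and-respond loop concrete, here is a purely illustrative Python sketch. Everything in it is invented for this post: the “sensor” is simulated with a random number rather than real hardware, and the function names are our own.

```python
import random

def read_sensor():
    """Stand-in for a real sensor: returns a simulated distance reading in metres."""
    return random.uniform(0.0, 5.0)

def decide(distance, safe_distance=1.0):
    """Choose an action based on what was sensed."""
    return "stop" if distance < safe_distance else "keep moving"

# The perceive -> decide -> act loop described above, run a few times.
for step in range(5):
    reading = read_sensor()   # perceive: take in information from a sensor
    action = decide(reading)  # decide: adjust behaviour based on the reading
    print(f"step {step}: sensed {reading:.2f} m -> {action}")
```

A real robot does essentially this, just with far richer sensors and far more sophisticated decision-making in the middle.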
Why is artificial intelligence good?
It helps us
Let’s start with the big one: artificial intelligence technologies can do incredible things that make life better. From helping detect disease to monitoring illegal fishing, AI can have a profound positive impact.
Take the United Nations. They are increasingly using AI to help achieve their Sustainable Development Goals (which Charles Sturt University has signed up to in order to guide our research). Their AI-assisted projects span everything from testing engineering products for disaster resistance and helping ships avoid collisions to verifying images of human rights abuses and informing climate change adaptation measures.1
Closer to home, Charles Sturt academics are exploring how artificial intelligence technologies can improve patient outcomes through medical imaging, and how they can help farmers harvest rice when the crop is at its best quality.
It takes us out of danger
Artificial intelligence technologies can take on tasks that put people in danger. That could be defusing a bomb. Going to space. Exploring the deepest parts of the ocean. Mining. Repairing power lines. Operating in infected or polluted locations, or in adverse weather conditions.
It’s always available
AI can work endlessly. AI technologies can often think much faster than humans and perform several tasks at a time. They don’t need breaks. Don’t get tired. And they are “happy” to undertake routine, tedious and repetitive tasks (which should leave us more time to be creative and innovative).
It can reduce human error
Poet Alexander Pope wrote that “To err is human”. Basically, we all make mistakes. But that’s not very comforting if the mistake is, say, a missed symptom that delays treatment for a disease. When AI can detect skin cancer better than experienced dermatologists can, that has to be a good thing for getting people the healthcare they need.
Why artificial intelligence is bad
It can be biased
AI is built on the data we provide it. If that data is biased, the technology replicates the bias. Take ChatGPT. It belongs to the sphere of generative AI: essentially, tools that can generate original text, images and sound in response to conversational text prompts. ChatGPT is trained on data sources such as books, websites and other text-based materials. If those sources carry bias, so can its output, and if we rely on something like generative AI to give us “answers” we risk reinforcing that bias.
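This is not how ChatGPT actually works under the hood (real models are vastly more complex), but a toy sketch makes the mechanism visible: skewed data in, skewed “answers” out. The four-line “corpus” below is entirely made up for illustration.

```python
from collections import Counter

# A made-up "training corpus" with a deliberate skew:
# one occupation is almost always paired with one pronoun.
corpus = [
    "the doctor said he",
    "the doctor said he",
    "the doctor said he",
    "the doctor said she",
]

def complete(prompt, texts):
    """Return the word that most often follows the prompt in the training data."""
    position = len(prompt.split())
    next_words = Counter(
        text.split()[position] for text in texts if text.startswith(prompt)
    )
    return next_words.most_common(1)[0][0]

# The model's "answer" simply echoes the imbalance in its data.
print(complete("the doctor said", corpus))  # -> "he"
```

Nothing here is malicious. The program just counts what it has seen, which is exactly why bias in the data becomes bias in the output.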
It can be used nefariously
Some people use AI to distort facts. Many of us have seen seemingly innocuous fakes, such as the image of the Pope in a puffer jacket. But AI is also being used to create more dangerous things, like fake stories that rewrite history. And the more fake (and often biased) stories and images we feed into ChatGPT’s “data set” (basically, the internet), the more it will draw on them to generate its responses.
It can allow cheating
ChatGPT is at the forefront of discussions in the academic world about how AI should or should not feed into students’ work. It’s a tricky question. Most of us would baulk at the idea of a medical student using AI to pass an exam. However, it can be a useful tool to start discussions or conduct research.
Charles Sturt’s Deputy Vice-Chancellor (Academic), Professor Graham Brown, acknowledges this dichotomy and sees the need to “authenticate” learning somehow in the age of generative AI.
“Assessment design should be driven by the learning outcomes we are seeking to assess. And those learning outcomes should be relevant to the practical realities of the discipline or profession we teach. Generative AI may be integral to some of those learning outcomes and assessments. But for others, it has the potential to undermine academic integrity and needs to be responded to accordingly.
“One option I’m particularly keen on – largely predicated on its use for the PhD thesis – is the oral defence: establishing the person presenting the thesis was indeed the author. It should be uncontroversial to extend this principle to all assessment items. If you submit an assessment, you need to be willing to show up and demonstrate that it was indeed your work.”
It may take our jobs
A report by the World Economic Forum found that companies are looking to accelerate their adoption of artificial intelligence and related technologies in the near term.
The report states that: “These new technologies are set to drive future growth across industries, as well as to increase the demand for new job roles and skill sets. Such positive effects may be counter-balanced by workforce disruptions. A substantial amount of literature has indicated that technological adoption will impact workers’ jobs by displacing some tasks performed by humans into the realm of work performed by machines. The extent of disruption will vary depending on a worker’s occupation and skill set.”2
This represents a very real conundrum at the heart of AI adoption. Yes, artificial intelligence technologies will replace some jobs. And that is, of course, very problematic for people reliant on those jobs. But equally, the fact that AI will take over some of the more dangerous and repetitive tasks could create opportunities for people to undertake more meaningful work. It will take investment to make that happen, but it should be a goal of the use of AI – to help people lead more fulfilling lives.
The rise of the machines?
The great mind of Stephen Hawking saw potential pitfalls with AI. He said that although the AI we’ve made so far has been very useful and helpful, if we teach machines too much, they could become smarter than humans and cause problems for their creators. That’s us.
As such, we must not forget that artificial intelligence technologies are human inventions. Created by – and controlled by – humans. Yes, they “learn” and “evolve”, but ultimately we decide how they will be used. Government regulation needs to foreground people, whether that means protecting jobs and retraining people for new opportunities, preserving access to accurate information, or protecting people from nefarious forms of surveillance and intrusion.
Artificial intelligence technologies can be an incredible benefit to us. We just have to be good custodians of AI’s abilities.
Harness the power of artificial intelligence technologies
Find the best way for you to learn how to put AI to use. Explore our information technology, computer science and mathematics courses.
1. https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2022-PDF-E.pdf
2. https://www3.weforum.org/docs/WEF_Future_of_Jobs_2020.pdf