Introduction
Today we're going to talk about something that has been in the headlines for a while now: artificial intelligence, which has surged in popularity thanks to new internet services that use machine learning at scale, such as ChatGPT.
We're also going to cover common fears and misconceptions about artificial intelligence and explain why some opinions should carry more weight than others.
As usual, we won't use any academic references in this blog, as it is not exactly an academic piece of work.
The Dangers
Logically, it seems natural to start with the "looming dangers" of artificial intelligence. Are there even any? What do we need to consider when developing and operating or even using these systems?
Looking at systems such as AI chatbots, which let users interact directly with a machine learning model through a simple interface, we need to consider the motives behind their development. In our example, AI chatbots are built to answer as many questions as possible, as accurately as possible.
Where is the danger in this? Misinformation, or a lack of understanding. People often emphasise the dangers of AI in completely the wrong way, imagining it taking over the world or destroying humanity. The true danger lies in trust: just as with a typical politician, many people will blindly believe the first thing they read or hear. It is no different with AI chatbots; people researching a new topic or asking a question will genuinely assume the answer is correct without doing any further research.
Additionally, there have been popular public petitions calling for a "freeze" on AI chatbot services such as ChatGPT until we "truly understand" the risks of such services. The people signing them are not necessarily uneducated or misinformed members of the public; many are simply scared, cautious, and proactive.
If you sign such a petition simply because you are scared and think that AI will take over the world and destroy civilisation, that is a slightly ridiculous reason. If you sign it because you understand the real dangers of artificial intelligence (especially at large scale), then perhaps your signature carries more weight.
Misconceptions
Can we really say for certain that AI won't one day take over the world and eradicate humans? No. Can we say that people often like hopping onto the latest train to conspiracy theory town? Yes. Are systems that utilise AI usually built to serve humans, with a very human-focused objective? Of course.
There are several public opinions I do take issue with, such as the claim that services like ChatGPT are capable of "abstract thought" or are close to being "sentient". These systems and services are still a LONG way from either. People often hype ChatGPT because it seems incredibly human and insanely intelligent.
In reality, these systems are still quite basic in how they actually operate behind the scenes, or at least more simplistic than most would believe. As someone who has worked extensively with AI and has even built my own chatbots, I can tell you these claims are completely bogus.
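To give a sense of how mundane the underlying idea is: large language models are, at their core, trained to predict the next word (token) given the words so far. As a toy sketch of that idea only (nothing here reflects ChatGPT's actual architecture, which uses a large neural network), here is a tiny Markov-chain text generator:

```python
import random
from collections import defaultdict

def train(text):
    """Build a table mapping each word to the words observed after it."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Produce text by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no known continuation, stop here
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the model predicts the next word and the next word follows the model"
chain = train(corpus)
print(generate(chain, "the"))
```

A real large language model replaces the lookup table with a neural network trained on enormous amounts of text, but the "guess what comes next" framing is essentially the same, which is part of why the output can feel fluent without any abstract thought being involved.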
I personally believe that our models of artificial intelligence need to improve thousands of times over before we achieve anything that even remotely resembles sentience or the ability of abstract thought. That could take hundreds of years, or just a few; technology has been developing at an unprecedented rate, and it's hard to say for certain what certain companies have been working on.
People often look at the small picture when discussing AI and develop completely irrational fears of the technology. Before you believe something outright, do your own research first and make sure your opinion is backed by science.
Conclusion
So, what is the bottom line? Do we need to fear AI and "lock it up" until we know what to do with it? I would argue not. Developers of these platforms are well aware of the ethical and legal issues surrounding artificial intelligence. Responsibility is taken very seriously when developing these systems, and they are built with positive human intent, because that is why they exist in the first place.
Do we need to stop relying on services such as ChatGPT to tell us everything, and stop believing everything they say? Yes. Don't be lazy; stay curious and do your own research. There is simply no reason to use these systems exclusively for your research: they are mere tools built to assist you in many different ways, not a replacement or substitute.
Understanding something, regardless of how hard it is, can lead us to higher places. If there is no cost in the quest for knowledge, have we truly learned anything at all? If your brain doesn't feel like it's about to explode when writing in lower-level languages, is there really any point in doing it at all? Without the pain there may still be gain, but there will be consequences.