By Evans WOHEREM, Ph.D
As Artificial Intelligence (AI) continues to evolve, it is expected to have a significant impact on how we live and work. The integration of AI is a complex issue that requires a thorough understanding of the potential benefits and risks of this technology.
In this serialized piece, DR. EVANS WOHEREM throws open an honest conversation about the implications of this technology and the need for collaboration to ensure that its development and use align with human values and promote the well-being of all individuals.
- Introduction
I have always believed that Artificial Intelligence (AI) is a Promethean technology that can be used for good or evil. I therefore feel that while we should welcome its positive uses, its negative impacts could be too deleterious for humanity to control or endure; as such, international treaties and controls should be put in place to regulate the technology.
With the arrival of ChatGPT and the promise of Artificial General Intelligence (AGI) and possibly Artificial Super Intelligence (ASI) systems, I am even more convinced that some internationally agreed controls and laws guiding the behaviour of such systems, hardwired into the AI systems themselves, should be implemented as soon as possible.
- My Early Years in AI
I started studying Artificial Intelligence (AI) in 1983 when I enrolled in the Master’s degree program in Cognition, Computing, and Psychology at the University of Warwick in England. The course was all about AI – how to build AI systems by studying humans to determine what makes them intelligent and then applying that knowledge to develop intelligence-based systems. Since then, I have been keeping track of the development of artificial intelligence and its uses in various industries.
The knowledge and understanding of what makes humans intelligent have been applied to the development of AI systems capable of performing tasks previously thought to be unique to human intelligence. It is crucial to consider any possible disadvantages of these intelligence-based systems as we continue to push the limits of AI. That is why it is important to develop internationally agreed-upon sets of controls and laws to govern the development and use of AI or intelligence-based systems.
Intelligence-based systems, also known as cognitive systems, imitate human intelligence and execute activities that typically require human intelligence, such as interpreting spoken language, identifying objects in photographs, making judgments, and learning from prior experience. When such systems are built, those who believe in the strong AI hypothesis refer to them as having human intelligence or intentionality.
On the other hand, those who believe in the weak AI hypothesis view intelligence-based systems as a tool or technology used to automate or enhance specific jobs rather than as an end in itself. They see AI as a tool for better decision-making, increased productivity, and the provision of new insights and information. However, to those who believe in the strong hypothesis, if a system can display all the cognitive traits of people, it should be considered human.
During that course of study, I came to believe that humans are very creative and capable of building such systems over time. However, I did not believe in the strong hypothesis; rather, I believed, and still believe, in the weak hypothesis of AI, as did people like Professor John Searle. He is widely known for his “Chinese Room” thought experiment.
In his thought experiment, Prof. Searle imagines a person who does not speak Chinese locked in a room with a rule book, written in English, for manipulating Chinese symbols. Chinese speakers outside the room pass in questions written in Chinese; by mechanically following the rules, the person assembles the appropriate Chinese symbols and passes the responses back out. To the Chinese speakers outside, it appears that whoever is in the room understands Chinese fluently. However, the person in the room does not understand Chinese at all; he or she is simply manipulating symbols according to a set of rules.
Searle argues that this thought experiment illustrates that a machine or algorithm can simulate an understanding of a natural language without actually understanding it. He further argues that this is true of AI systems in general: they can display traits associated with human intelligence, but this does not mean they possess genuine human intelligence or consciousness. He suggests that the true objective of AI should be to produce machines that can perform specialized tasks efficiently rather than trying to build machines that can fully comprehend and have consciousness like humans.
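Searle’s point can be illustrated with a toy program. The sketch below, with entirely invented rules and phrases, “answers” Chinese questions by looking up symbols in a rule book, just as the person in the room does; at no point does any understanding of Chinese enter into the process.

```python
# A toy illustration of the Chinese Room: the "room" replies by pure
# symbol manipulation, following a rule book it does not understand.
# The rules and phrases here are invented for illustration only.

RULE_BOOK = {
    "你好": "你好！",            # a greeting maps to a greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" maps to "Yes."
}

def room_reply(symbols: str) -> str:
    """Look the input symbols up in the rule book and return the
    prescribed output symbols; no meaning is involved at any point."""
    # Unrecognized input gets a stock reply: "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍。")
```

To an observer who only sees the inputs and outputs, the room appears to converse in Chinese, yet the program is nothing more than a lookup table.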
John Searle’s case for the weak AI hypothesis has been widely studied and is considered significant in AI. The philosopher Hubert Dreyfus shared Searle’s scepticism about machines achieving genuine understanding, while others, such as the cognitive scientist Daniel Dennett, have been prominent critics of the argument. It is important to note that the thought experiment continues to attract objections: some argue that it is not a fair representation of how AI systems actually work, and that it does not take account of the complexity of the human mind and consciousness.
I believe that Prof. John Searle is right: AI systems are merely powerful symbol manipulators. However, that they are merely manipulating symbols does not detract from the fact that humans can judge their capabilities to be human-like, and so can easily declare them to be human, or even super-human.
- Motivations of the AI Intelligentsia
The motivations of the AI intelligentsia, consisting of researchers, engineers, and scientists who work in the field of AI, include scientific curiosity, technical challenges, commercial opportunities, social impact, national security, and ethical considerations. They are thus playing a significant role in shaping the future of humanity by advancing AI technology and developing Artificial General Intelligence (AGI) systems that have the potential to revolutionize many aspects of daily life for people.
The AI intelligentsia is taking humanity into a new epoch – an age of intelligence-based systems that will co-exist with humans at home, in workplaces and in social settings, or that may eventually rule over humans. It is a new dawn, a brave new world about to be revealed, as we redefine the origins and nature of Homo sapiens and move to a new stage of human evolution, in which humans evolve into cybernetically and genetically engineered “Homo Deus”, as proposed or mentioned by thinkers such as Yuval Noah Harari, Dan Brown, Michio Kaku and Ray Kurzweil.
Ray Kurzweil believes that AGI will surpass human intelligence in a wide range of tasks and could be used to solve problems like curing diseases, terraforming other planets, and overcoming death. Kurzweil’s forthcoming book, “The Singularity Is Nearer”, will expand on his previous ideas and predictions about AI. He has previously stated that he believes computers will be able to pass the Turing test, which measures a machine’s ability to exhibit intelligent behaviour comparable to or indistinguishable from that of a human, by 2029.
He also predicts that by 2045, AGI will be achieved and will be able to improve itself at an exponential rate, leading to a rapid acceleration of technological progress, and perhaps to the AGI turning into an ASI (Artificial Super Intelligence) system.
Also, Yuval Noah Harari suggests in his book “Homo Deus: A Brief History of Tomorrow” that advanced brain-computer interfaces (BCIs) could be a step towards the development of artificial general intelligence (AGI). He claims that by directly connecting human brains to computers and other machines, we can improve our cognitive abilities and eventually create AGI systems that match or even exceed human intelligence.
It is critical to remember that AGI is a speculative topic, and the notion that BCIs would be a step toward its development is purely theoretical. There are many different approaches to creating AGI, and the relationship between BCI and AGI is not yet well understood. However, the idea that BCIs could significantly improve human cognitive abilities is intriguing and warrants further investigation.
In contrast, Martin Ford argues in his book “The Rise of the Robots: Technology and the Threat of a Jobless Future” that the development of AGI could upend the job market, because machines could potentially take over many tasks currently performed by humans. In particular, he asserts that low-skilled and repetitive jobs will suffer significantly from widespread automation and the rise of AGI.
He also suggests that AGI could lead to greater inequality as the people who own and control these technologies will become increasingly wealthy and powerful while a growing number of people may become unemployed and left behind. He believes society should start preparing for these changes now by investing in education and training programs and enacting new policies to help people adjust to a rapidly changing job market.
It is worth noting that this is a speculative topic and that there are many different opinions about the potential impacts of AGI on the job market and the economy as a whole. Some experts believe that AGI has the potential to create new job opportunities and boost economic growth, while others believe it will lead to job displacement and widen the gap between the rich and the poor.
Artificial General Intelligence (AGI) refers to systems with human-level general intelligence – systems that can comprehend or learn any intellectual task that humans can. AGI systems would be able to carry out a variety of tasks, such as problem-solving, making decisions, and learning, without being explicitly programmed for each one. Most AI systems in use today fall into the category of weak or narrow AI due to their focus on narrow task domains.
The advancement of AGI has ethical and societal implications, as it may alter our interactions with technology and our understanding of intelligence. For example, in the mid-1960s, Professor Joseph Weizenbaum created an AI-based system called ELIZA, one of the first natural language processing programs, which mimicked a Rogerian therapist. His system showed the shallowness of human-computer interaction and the risks associated with an over-reliance on AI technologies. He was astonished to discover that his secretary formed an emotional connection to the system while using it, as if it were human, despite knowing that he had written the code for it.
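The mechanism behind ELIZA-style systems is surprisingly shallow: a handful of patterns that reflect the user’s own words back as questions. The sketch below is in the spirit of Weizenbaum’s program, though these particular patterns and responses are illustrative, not his originals.

```python
import re

# A minimal ELIZA-style responder: regex patterns capture part of the
# user's utterance and reflect it back inside a templated question.
# These patterns and templates are invented for illustration.

PATTERNS = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching reflection, or a stock prompt."""
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            # Strip a trailing full stop so the reflection reads naturally.
            return template.format(match.group(1).rstrip("."))
    return "Please tell me more."
```

A user who types “I am sad.” receives “Why do you say you are sad?” – which can feel like empathy, even though the program has no model of sadness at all. That gap between appearance and mechanism is exactly what alarmed Weizenbaum.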
Weizenbaum used this experience to openly criticize the field of AI after he realized that people might treat AI systems like they were people. He argued that the hype surrounding the technology was unjustified and that it was unlikely that machines would ever be able to fully understand human thoughts or emotions.
Weizenbaum’s viewpoint on AI raises the question of whether AGI is truly possible and whether the concept of AGI is even well-defined. It also emphasizes the importance of being realistic about what AI can or cannot do and avoiding overhyping the technology. It is important to note that the field of AI has evolved significantly since Weizenbaum’s criticism, and some researchers have developed new methods, techniques, and theories that address some of AI’s limitations, such as Symbolic AI, Connectionist AI, and Hybrid AI. Some AI researchers are working on developing AI systems capable of demonstrating human-like intelligence and consciousness, such as creating AI systems capable of passing the Turing test.
It is crucial to remember that Weizenbaum’s worries about AI remain valid today, and the discipline of AI continues to raise ethical and societal implications. As AI systems advance and become more capable, it is critical to consider the implications of their development and application. For example, as AGI systems become competent at performing tasks previously thought to be unique to humans, it raises concerns about the future of work and the role of AI in society. Furthermore, as AI systems improve their ability to understand and interpret human behaviour, issues about privacy, autonomy, and the possibility of AI being used negatively by individuals or society as a whole arise.
In light of these concerns, AI researchers and developers must think about the moral and societal implications of their work and develop AI systems that are open, auditable, accountable, and consistent with human values. Furthermore, society as a whole must engage in informed discussions about the future of AI and its potential impacts on our lives.
This includes involving stakeholders from diverse backgrounds and perspectives, such as ethicists, philosophers, sociologists, policymakers, and members of the public, in the development and governance of AI systems. Additionally, there should be ongoing efforts to ensure that AI systems are developed and used responsibly and ethically, with measures in place to prevent unintended consequences and negative impacts on society.
***Woherem, a highly respected industry professional and alumnus of Harvard Business School, wrote in from Abuja, Nigeria