By Evans WOHEREM, Ph.D
As Artificial Intelligence (AI) continues to evolve, it is expected to have a significant impact on how we live and work. The integration of AI is a complex issue that requires a thorough understanding of the potential benefits and risks of this technology.
In this second part of the serialized piece, DR. EVANS WOHEREM continues to throw open an honest conversation about the implications of this technology and the need for collaboration to ensure that its development and use align with human values and promote the well-being of all individuals.
- Divergent Viewpoints on the Prospects of AGI Morphing into ASI
While some researchers believe that AGI is possible, others think it unlikely and argue that the very concept of AGI is not well defined. The question of whether AGIs will eventually “morph” into ASIs divides AI researchers and experts, for several reasons.
One argument in favour of this viewpoint is that, thanks to advances in computing and machine learning, AGIs will continue to develop quickly and may one day outperform human intellect in several fields. On this view, AGIs would keep improving exponentially over time and eventually transform into ASIs.
However, since there are still many unresolved issues and unknowns surrounding AGI and ASI, many researchers are less optimistic about the timeline for their development. In addition, some specialists believe that the term “superintelligence” is ill-defined and too ambiguous, making it difficult to say whether or when AGIs will develop to that level.
Despite the difficulties, many researchers are still working on developing AGI and ASI because they think the potential advantages outweigh the risks. Improvements in decision-making, increased output, and the ability to solve problems that are currently insurmountable for humans are potential benefits.
However, to benefit from these advantages, the risks associated with AGI and ASI must be decreased by implementing the proper safety measures. In addition to addressing moral and ethical dilemmas, this requires developing strategies for monitoring and controlling AGI behaviour.
As pointed out earlier, the majority of AI systems in use today are referred to as “Narrow AIs” because they are designed to carry out specific tasks. Examples include IBM’s Watson and Deep Blue, expert systems, and AlphaGo, all of which exhibit intelligent behaviour within a specific domain. Some of these systems may be highly proficient in their particular areas, but none possesses the broad, general intelligence that humans have.
It is important to note that this is often a starting point in the effort to create a general AI. Researchers typically begin by developing AI systems that are particularly skilled in one area and then use the insights and knowledge gained from these systems to advance the development of more general ones.
Organizations such as the AGI Society, Berkeley Artificial Intelligence Research, CSAIL at MIT, Facebook AI Research, Google DeepMind, and the Human-Level Artificial Intelligence (HLAI) Conference demonstrate that there is ongoing work in the field of AGI and that many experts are working to create more general AI systems.
I am among those who are nervous about the unleashing of AGI systems in our society. I, therefore, believe that when designing and implementing AGI, it is critical to proceed with caution and care. Technologies are amoral; they can be used for good, such as increased efficiency and productivity in various industries, but they can also pose risks, such as job displacement, security threats, and ethical concerns, depending on the motivations of those who create them.
Before proceeding with AGI development and implementation, it is critical to consider the potential risks and benefits, as well as the consequences of AGI morphing into ASI and the impact such systems could have on human jobs and on our sense of uniqueness in the world.
Even though we still do not have AGIs today, Ray Kurzweil, among other AI experts, predicts that AGI will be developed by 2045, citing the Law of Accelerating Returns, which holds that the rate of technological growth is exponential. It is crucial to keep in mind that these projections are based on current trends and advancements in the field of AI, and that developing AGI is a complex, ongoing process that may not unfold on any fixed timetable.
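The force of the “exponential growth” claim behind such predictions can be made concrete with a toy calculation. The sketch below is purely illustrative: the doubling period and the capability measure are hypothetical parameters, not empirical estimates.

```python
# Illustrative sketch of the Law of Accelerating Returns: if a capability
# doubles every fixed interval, its growth is exponential, not linear.
# The doubling period here is a made-up parameter for illustration only.

def capability(years: float, doubling_period: float = 2.0) -> float:
    """Relative capability after `years`, assuming it doubles every
    `doubling_period` years (capability at year 0 is defined as 1)."""
    return 2 ** (years / doubling_period)

print(capability(10))  # 32.0  -> five doublings in ten years
print(capability(20))  # 1024.0 -> ten doublings in twenty years
```

The point of the sketch is only that under a constant doubling period, progress compounds: the second decade contributes far more absolute growth than the first, which is why exponential projections reach striking conclusions from modest-seeming assumptions.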
It is also worth noting that AGI does not necessarily imply replicating all human capabilities; rather, it could refer to systems advanced enough that humans perceive them as AGI, even though the consequences of having such systems are unknown. It could be compared to opening a Pandora’s Box or attempting to construct a new Tower of Babel, with unknown and potentially negative consequences for humanity and the world. I believe that, in the long run, the net effect will be negative, perhaps even catastrophic, for our earth, unless we act to regulate such systems appropriately.
- Implications of AI
Humans have long sought to expand the forms, types, and reach of communication, resulting in the emergence of many modes of communication, including written language, oral language, sign language, and, more recently, digital communication. Through books, letters, and other written documents, for example, written language has made it possible for humans to communicate across great distances and long spans of time.
The printing press accelerated written communication from the fifteenth century onward, and the telegraph and telephone enabled long-distance communication in the nineteenth century. The development of radio, television, and the internet significantly expanded the reach of communication and information during the twentieth century. Thanks to the internet and mobile technology, communication with anyone at any time is now possible, and social networking and instant messaging have emerged as new forms of communication.
Most machines developed during the agrarian, industrial, and post-industrial eras have ended up deskilling and displacing humans from their traditional vocations, whether in crafts, blue-collar, or clerical work. However, they have also increased production and opened up new areas of labour for those who were displaced. As a result, job gains have outpaced job losses overall. Many stakeholders believe this will always be the case, even for artificial intelligence systems.
However, AI can replace not only monotonous administrative and physical tasks but virtually every other kind of work, including that of artists, programmers, teachers, doctors, researchers, lawyers, accountants, and managers; indeed, everyone’s work. Managers have long believed that 1,000 employees mean 1,000 headaches, so they will adopt whatever machines or methods allow them to shed workers.
However, it is important to note that the impact of AI on the workforce will likely be more complex than simple job replacement. AI has the potential to augment human capabilities, increase productivity, and create new jobs. Additionally, the rate at which AI affects different industries and job types will vary, and some jobs may prove more resilient to automation than others. It is also important to consider the ethical and societal implications of AI and its impact on the workforce.
For example, there may be concerns about income inequality and the displacement of certain groups of workers. It is crucial for policymakers and industry leaders to carefully consider these issues and develop strategies to mitigate negative impacts while harnessing the potential benefits of AI. Moreover, there is a need to think about retraining programs, education and upskilling of the workforce, and to ensure that the benefits of AI are shared equitably across society.
- The Ethical Implications of Giving AGI a Human-Like Brain
Are we trying to give AGI a human-like brain and make it self-aware? This seems to be what we are doing, advertently or inadvertently. The question of whether to give AGI self-awareness and consciousness is a contentious issue. Some argue that replicating and understanding human intelligence is a crucial step for AGI to perform tasks such as creativity, empathy, and moral reasoning. Others argue that it is unnecessary and even dangerous, as the actions of a self-aware AGI are uncertain, and it could lead to unintended consequences.
It is important to consider the ethical and moral concerns that arise from the development of AGI with a human-like brain, including the entity’s rights and obligations and society’s treatment of it. Isaac Asimov, a science fiction author and biochemist, was one of the first to explore these ethical issues in his famous “Three Laws of Robotics,” in which he proposed guidelines for the safe and ethical use of robots and AI. These laws include the prohibition on robots harming humans, the requirement that robots obey human orders, and the obligation of robots to protect their own existence so long as doing so does not contradict the first two laws.
Asimov’s laws provide a useful framework for considering the ethical implications of AGI, and his work continues to be relevant today as we grapple with the ethical challenges posed by the development of AGI. It is important for researchers, policymakers, and industry leaders to carefully consider these ethical implications as AGI technology continues to advance and to ensure that AGI systems are developed with a clear understanding of their limitations and potential risks.