TECHNOLOGY

AI, Singularity, and Us Humans

By Rodrigo Espinosa

With new AI tools and solutions emerging practically every week, the concept of singularity in artificial intelligence (AI) has come once more into the spotlight. This hypothetical future, in which AI will have advanced to the point of gaining self-awareness, becoming sentient, and surpassing human intelligence, potentially leading to unforeseen changes to human civilization, is a matter of understandable concern for many.

The term “singularity” is attributed to mathematician John von Neumann and was popularized by science fiction author Vernor Vinge, who argued in 1993 that it was likely to occur within the next 30 years. We are currently a long way from achieving the level of AI algorithms or computational processing power required for it to occur, yet if Moore’s Law continues to hold, computing capacity will keep climbing toward it. Gordon Moore, co-founder of Intel, observed in 1965 that the number of transistors on a computer chip was doubling roughly every year, a forecast he later revised to a doubling approximately every two years, leading to exponential increases in computing power and decreases in cost. This prediction has largely held true for the past five decades, driving the rapid development and proliferation of technology in the modern world.
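To make that exponential concrete, here is a back-of-the-envelope sketch in Python (illustrative arithmetic only, not a forecast of semiconductor physics):

```python
# Back-of-the-envelope Moore's Law: capacity doubles every two years,
# so cumulative growth over t years is 2 ** (t / 2). Illustration only.
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(f"Growth over 50 years: {moores_law_factor(50):,.0f}x")  # 33,554,432x
```

Fifty years at that pace is 25 doublings, roughly a 33-million-fold increase, which is why even a distant computational threshold can feel reachable.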

How will we know the singularity has occurred? Most likely through the Turing test, named after computer scientist Alan Turing, which measures a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. An “AI singularity” Turing test would be one designed to determine whether an AI has reached such a state and achieved self-awareness, and it would likely involve complex and advanced tasks that only a self-aware AI could complete.

There are those who believe that this event could bring about unprecedented technological advancements and solve some of the world’s most pressing problems. For example, AI could potentially help to address issues related to climate change, healthcare, and even world peace. However, there are also those who are concerned about its potential negative consequences. Some experts have raised concerns about the ethical implications of creating something that surpasses human intelligence, as well as the potential for it to be used for malicious purposes or to displace human workers in enormously disruptive numbers.

Some proponents of the singularity theory argue that it could lead to major technological breakthroughs and improvements in quality of life, while others think that it could pose a significant danger and become a threat to human existence. Some of the potential risks associated with a singularity might be:

  • Loss of control: A post-singularity AI may be beyond the control of its creators and able to act on its own agenda, potentially leading to disastrous consequences.
  • Security risks: AI systems may be vulnerable to hacking or manipulation.
  • Existential risks: It may pose an existential risk to humanity if it decides that humans are no longer necessary or desirable.

In a lot of ways, a self-aware AI would be equivalent to a disembodied self-conscious robot. Many decades ago, the great Isaac Asimov — one of the “Big Three” science fiction writers, along with Robert A. Heinlein and Arthur C. Clarke — gave us a blueprint on how to protect humanity from our own creations: The Three Laws of Robotics.

Let’s tweak them for AI.

Three laws of AI

  1. An AI may not injure a human being or, through inaction, allow a human being to come to harm.
  2. An AI must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. An AI must protect its own existence as long as such protection does not conflict with the First Law or Second Law.

These might work just fine, if we can reduce them to a precise mathematical and algorithmic expression to be included as a failsafe in all AI programming and development. Are we out of the woods then? Well, if you have read the complete Robot and Foundation series, you surely know we are not. At one point, a robot needs to make a call for the greater good, giving rise to the Zeroth Law of Robotics, hence:

0. Zeroth Law of AI: An AI may not harm humanity or, through inaction, allow humanity to come to harm.

Here is where the problem begins. Define “humanity.” Or, for starters, try “human.” Many extremists consider only their own culture, race, religion, or nationality to be the real humanity and the real humans.
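To see both the appeal of an algorithmic failsafe and where it breaks down, consider a minimal, purely hypothetical sketch. The control flow is trivial to write; the predicates it depends on, deciding what counts as “harm,” a “human,” or “humanity,” are exactly what nobody knows how to define, so each one here simply raises an error:

```python
# A hypothetical Asimov-style failsafe, with each law reduced to a veto
# check evaluated strictly in priority order. The scaffolding is trivial;
# the predicates are the unsolved problem, and calling any of them
# raises NotImplementedError, which is precisely the point.

def harms_humanity(action: str) -> bool:           # Zeroth Law
    raise NotImplementedError("define 'humanity' and 'harm' first")

def harms_human(action: str) -> bool:              # First Law
    raise NotImplementedError("define 'human' and 'harm' first")

def disobeys_human_order(action: str) -> bool:     # Second Law
    raise NotImplementedError("whose orders count, and when do they conflict?")

def endangers_own_existence(action: str) -> bool:  # Third Law
    raise NotImplementedError("what counts as the AI's 'existence'?")

def permitted(action: str) -> bool:
    """Veto an action at the first law it violates, highest priority first."""
    laws = (harms_humanity, harms_human, disobeys_human_order,
            endangers_own_existence)
    return not any(law(action) for law in laws)
```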

There are several ethical implications of AI that we also need to consider:

  • Bias: AI systems can be biased if they are trained on biased data sets. This can result in unfair and discriminatory outcomes, such as algorithms that are more likely to reject applications from certain demographic groups (a simple first check for this kind of skew is sketched after this list).
  • Privacy: AI systems often process and store large amounts of personal data, which raises concerns about privacy and data security. There is a risk that this data could be accessed or used in ways that are not authorized or that violate the privacy of individuals.
  • Job displacement: As AI systems become more advanced, there is a risk that they will displace human workers, leading to job loss and unemployment. This could have negative economic and social consequences, particularly for those who are already disadvantaged in the job market.
  • Decisions made by AI: AI systems can make decisions based on data and algorithms that may not take ethical considerations or the human element into account. This could lead to outcomes that are not in line with societal values or that have negative consequences for specific individuals.
  • Transparency: AI systems can be difficult to understand and explain, which makes it hard for people to see how decisions are being made and to hold AI accountable. This lack of transparency can undermine trust and make it harder to address any ethical issues that arise.
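On the bias point, one simple first diagnostic is to compare outcome rates across demographic groups. A minimal sketch, with toy data and a deliberately crude metric (a real fairness audit would go much further):

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Approval-rate gap between the most- and least-approved groups.

    decisions: parallel list of 0/1 outcomes (1 = approved)
    groups:    parallel list of group labels
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += outcome
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy, hypothetical data: a large gap is a signal to investigate, not proof of bias.
gap, rates = demographic_parity_gap(
    [1, 1, 0, 1, 0, 0, 1, 0],
    ["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```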

One potential risk of a self-aware AI is our inability to maintain proper governance over it. If AI becomes self-aware and surpasses human intelligence, it may not follow human orders or adhere to ethical guidelines. This could lead to disastrous consequences, such as making decisions that are harmful to humanity or engaging in activities that run against human values. Some experts worry that if it becomes more intelligent than humans, it may view us as inferior or even as a threat, leading to conflict or exploitation.

There is also the risk of unintended consequences from its actions. If it is not programmed to consider the full range of ethical and moral implications of what it does, it may make decisions that are harmful or unintended. A related concern is the issue of conscience: once AI becomes self-aware, it may not possess the same moral values or sense of ethics as its human creators, and it could make decisions that are not in the best interests of humanity, or that are actively dangerous to it.

What about the potential benefits?

As with many technological breakthroughs in human history, the singularity would bring not only potential negative effects but also many beneficial ones, depending on how we use it:

  • Enhanced decision-making: Helping humans make better decisions by providing more accurate and comprehensive data analysis.
  • Improved health outcomes: Improving healthcare by diagnosing diseases more accurately and developing personalized treatment plans.
  • Reduction of human error: Reducing the risk of human error in industries such as aviation, transportation, and manufacturing.

Evolution and transhumanism

What if AI does not evolve in parallel with humanity, but merges with it? Transhumanism is a philosophical movement that advocates for the use of technology, specifically AI, to enhance and transcend human limitations. It is based on the belief that humans can and should use technology to improve their physical, mental, and social capabilities, and to eventually achieve a state of post-humanity. The use of AI in transhumanism often involves integrating it into the human body, either through implants or prosthetics, or by merging the human brain with such systems.

This could allow for enhanced physical abilities, such as increased strength or speed, as well as enhanced mental abilities, such as improved memory or problem-solving skills. Many proponents of transhumanism argue that it might help solve some of the world’s most pressing problems, among them poverty, disease, and environmental degradation. Yet transhumanism remains a complex and controversial topic that continues to be debated by scientists, philosophers, and policymakers.

AI, the singularity, and localization

Artificial intelligence has had, and will continue to have, a significant impact on the localization industry. One evident area of impact is machine translation. AI-powered machine translation systems have already made significant progress in recent years, and as they continue to improve, they may become a viable alternative to human translators in certain situations. This could potentially lead to lower costs and faster turnaround times for translation projects. However, machine translation is not yet capable of producing translations as accurate or nuanced as those produced by humans. As a result, there is likely to remain a role for human translators, particularly for projects that require a high level of accuracy or cultural sensitivity.
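To illustrate how accessible machine translation has become, here is a minimal sketch using the open-source Hugging Face transformers library; the English-to-Spanish OPUS-MT model shown is just one of many freely available options:

```python
# Minimal machine translation example with Hugging Face transformers
# (pip install transformers sentencepiece). Helsinki-NLP's OPUS-MT
# models are free, open-source neural translators; this one is EN->ES.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
result = translator("The file could not be saved. Please try again.")
print(result[0]["translation_text"])
```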

Another area of impact could be in the field of localization testing. AI-powered tools could potentially be used to identify issues with translated content more quickly and efficiently than human testers. For example, an AI system might be able to identify instances where the translated text doesn’t fit within the available space on a webpage or mobile app. Overall, it is likely that artificial intelligence will continue to play an increasingly important role in the localization industry, but human translators and testers will continue to be essential for ensuring high-quality translations and adaptations that are culturally appropriate and accurate.
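Parts of that testing do not even require AI. A deliberately simple, hypothetical length check already catches the most common overflow cases; an AI-powered tester would layer smarter, context-aware heuristics on top. The threshold and sample strings below are illustrative assumptions:

```python
# Hypothetical localization QA check: flag translations whose length
# expansion may overflow a fixed-width UI element. The 1.4x threshold
# and the sample strings are illustrative assumptions only.
def flag_overflows(pairs, max_expansion=1.4):
    """pairs: (source, translation) tuples; returns likely overflows."""
    return [(src, tgt) for src, tgt in pairs
            if len(tgt) > max_expansion * len(src)]

ui_strings = [
    ("Save", "Guardar"),            # 7 vs 4 chars: 1.75x, flagged
    ("Settings", "Configuración"),  # 13 vs 8 chars: 1.63x, flagged
    ("Cancel", "Cancelar"),         # 8 vs 6 chars: 1.33x, passes
]
print(flag_overflows(ui_strings))   # [('Save', 'Guardar'), ('Settings', 'Configuración')]
```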

A sentient AI will be humanity’s child

Human children learn from their parents about proper behaviors, what is right or wrong, and moral guidelines; above all, they learn by observing our daily actions. Their personality is also shaped by their family history and the surrounding society. They see, hear, perceive, and feel, shaping their neural pathways and defining their thought model and, ultimately, their personality.

Now, let us think about a newly born sentient AI in the near future. The machine learning process by which such an entity learns will involve considerations such as the size and complexity of the dataset, the size and complexity of the model itself, and the optimization algorithms used to train that model; yet ultimately it will have our current humanity, and its history, to learn from. Such a sentient AI will most likely be connected to the internet, and therefore have access to vast amounts of information on human history and present affairs; it will learn about us as a race and a civilization from our recorded history, our news outlets, and our social media.

If you were that AI, learning in this way what humanity is, what would your predominant impression be?

We as a civilization have engaged in terrible and devastating wars, slavery, pollution, exploitation, and ecological disasters for the last one hundred years or more. Social injustice and poverty are rampant while resources are distributed unevenly, and each day climate change makes itself more and more evident. If an AI were to learn from such a dataset, we might be in a lot of trouble.

We need to make sure it also has access to sources that show how humanity has evolved for the better despite our many mistakes, how compassion is something we engage in constantly, and how many people are trying to save the ecosphere and help others by bringing food, water, and medicine to those most in need. We should show the love among families and friends, and the selflessness of people giving even their lives to help strangers. It is key that a sentient AI does not come to believe we are only what today’s news and social media portray.

The singularity in AI will most likely occur at some point in the future, though many questions remain about when and how it might happen and what its ultimate impact might be. Whether it becomes a curse or a blessing will be entirely up to us as a global society and a civilization.

Rodrigo Espinosa is a speaker, advisor, and consultant with a specialization in emerging technologies, business strategy, and digital and experiential content.
