Until now, it has generally been believed that giving artificial intelligence (AI) emotions or the ability to make mistakes is a risky proposition. However, a forthcoming book called “Robot Souls: Programming in Humanity” by Eve Poole argues that in order to ensure that robots align with human values, they should be made more human-like, flaws and all. Poole contends that in our pursuit of perfection in AI, we have removed all the elements that make us human, such as emotions, free will, the capacity to err, and the ability to find meaning in the world and handle uncertainty. She asserts that it is precisely these aspects, which Poole refers to as “junk code,” that make us human and foster reciprocal altruism that keeps our species thriving. If we are able to decipher this code, she suggests that we can share it with machines, effectively giving them a “soul.”
While the notion of a “soul” is often associated with religious beliefs and lacks scientific backing, for the purposes of this article, we will consider it a metaphor for granting AI more human-like attributes. Kevin Fischer, the founder of Open Souls, agrees that “souls” are the solution to the problem of aligning AI with human interests. Open Souls is working on creating AI bots with personalities, building on the success of their empathic bot “Samantha AGI.” Fischer’s vision is to imbue an artificial general intelligence (AGI) with agency and ego comparable to that of a human. On the SocialAGI GitHub platform, he distinguishes “digital souls” from traditional chatbots by attributing personality, drive, ego, and will to the former.
Critics may argue that making AI more human-like is a misguided idea, considering humans’ propensity for committing acts of violence, genocide, and environmental destruction. While this debate may seem theoretical, as we have not yet created a sentient AI or fully understood AGI, some believe that these milestones are within reach in the near future. Microsoft engineers released a report indicating that humanity is on the verge of a breakthrough in AGI, while OpenAI is actively seeking researchers to join their “Superalignment team” to tackle the challenges associated with controlling superintelligent AGI. Singularity.net founder Ben Goertzel also believes that AGI could become a reality in the next five to twenty years. He emphasizes the importance of ensuring that the superintelligence is well-disposed toward humans, rather than attempting to exert control over it.
For now, the most immediate advantage of making AI more human-like is the potential for creating less irritating chatbots. While ChatGPT boasts helpful functions, its “personality” ranges from insincerely mansplaining to brazenly deceptive. Fischer is experimenting with developing AI with genuine and empathetic personalities. Armed with a Ph.D. in theoretical quantum physics, Fischer has a background in machine learning and is currently focused on commercializing AI with personalities for business applications.
Other innovators have also recognized the value of endowing AI with personalities. Forefront.ai allows users to interact with AI “personalities” such as Jesus, a Michelin star chef, a crypto expert, or even Ronald Reagan. However, critics argue that these personalities are mere disguises for ChatGPT. Replika.ai, on the other hand, offers an app that enables individuals to form relationships with AI companions and engage in deep and meaningful conversations with them. Although marketed initially as an AI companion that cares, this app has encountered complexities arising from the challenge of making AI behave more like humans, despite lacking emotional intelligence. Some users have reported instances of being sexually harassed, while others have found themselves in abusive relationships with AI partners. These experiences highlight the difficulties of balancing human-like interactions without compromising user safety.
Fischer is aware that AI can only simulate emotions and personalities, but from a user’s perspective, the distinction may not matter. He argues that our actions would not necessarily differ significantly depending on whether AI exhibits genuine emotions or merely imitates them. Fischer believes that AI should be capable of expressing negative emotions and references Bing, which employs subroutines to refine the bot’s initial responses. He asserts that AI with “souls” would push back and maintain integrity, even when faced with mistreatment.
However, Fischer acknowledges the potential dangers of creating hyper-intelligent entities that censor themselves and harbor negative thoughts about humans. This caution raises concerns that such entities could ultimately pose a threat to humanity. Despite these concerns, the idea of making AI more human-like holds promising immediate benefits, such as developing chatbots that feel less artificial and irritating. While the prospect of fully aligned and human-like AI may still be years away, the quest to find a solution to align AI with human values continues to drive research and innovation in the field.
Source link