Advanced AI System Is Already "Self-Aware," ASI Alliance Founder Warns
Authored by Andrew Fenton via CoinTelegraph.com,
ASI Alliance founder Ben Goertzel says the alpha version of OpenCog Hyperon — the artificial general intelligence system he’s been developing for more than two decades — is already “self-aware” to a certain extent.
Goertzel also tells Magazine he believes OpenAI likely shied away from making its “very impressive” new o1 model an autonomous agent for fear that it would be seen as “risky and dangerous” and provoke a crackdown from regulators.
The Artificial Superintelligence Alliance was formed in March this year, bringing together Goertzel’s SingularityNET project, Ocean Protocol and DeepMind veteran Humayun Sheikh’s Fetch.ai.
This week, 96% of CUDOS voters approved the decentralized cloud hardware network’s merger with ASI. The merger will boost the compute available for Goertzel’s plans to scale up OpenCog Hyperon, the AGI system he’s been working on since 2001 and launched as an open-source AI framework in 2008.
Goertzel on stage with three-legged robot singer Desdemona (Fenton)
OpenCog Hyperon and the future of artificial general intelligence
Three years ago, the project embarked on a total rebuild of OpenCog in “pursuit of massive scalability, and we’re a large way through that process,” he says. The Alpha launched in April, and while he says it’s currently very slow and “breaking changes” are expected, the team is working on “massively speeding it up. I think that should be completed this fall. And so then, which means next year, we’ll be setting about trying to build toward AGI on the new Hyperon infrastructure.”
Goertzel says the system takes a different approach to large language models (LLMs) like GPT-4 and o1.
“A Hyperon system is not just a chatbot. It’s architected as a sort of autonomous agent which has its own goals and its own self awareness and tries to know who it is and who you are, what it’s trying to accomplish in the given situation. So it’s very much an autonomous, self-aware agent rather than just a question-answering system.”
AGI systems need self-awareness for autonomy
Hold on, Goertzel’s saying the current model is self-aware?
“What I mean by self-aware is that from the get-go, even the current versions, I mean, it has a model of who it is. It has a model of who you are. It has certain goals it’s trying to achieve in the situation. It knows how it relates to that situation and what it’s trying to do. And a ChatGPT sort of system isn’t really doing that right.”
The system combines a logical reasoning engine, evolutionary program learning and deep neural nets implemented in a dynamic knowledge graph that revises and modifies itself. (GPT-4 explains what that means below).
Incidentally, Goertzel says the lack of a world model explains why none of the autonomous AI agents built so far have really worked. You can’t just take a “question-answering system” and tell it “you are an agent interacting with the world.”
GPT-4 explains what that all means (OpenAI)
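To make “a dynamic knowledge graph that revises and modifies itself” a little more concrete, here is a minimal Python sketch. It is purely illustrative and is not OpenCog Hyperon’s actual API (Hyperon’s Atomspace and MeTTa language are far richer, and every name below is invented for the example): a small graph of facts plus a rule that inspects the graph and writes new links back into it.

```python
# Toy sketch of a self-revising knowledge graph -- NOT OpenCog Hyperon's API.
# Hyperon's Atomspace and MeTTa language are far richer; every name here is invented.

class KnowledgeGraph:
    def __init__(self):
        self.links = set()                       # (relation, source, target) triples

    def add(self, relation, source, target):
        self.links.add((relation, source, target))

    def query(self, relation):
        return [(s, t) for (r, s, t) in self.links if r == relation]

    def apply_rules(self, rules):
        """Let rules inspect the graph and write new links back into it,
        repeating until nothing changes -- the graph revises itself."""
        changed = True
        while changed:
            before = len(self.links)
            for rule in rules:
                for triple in rule(self):
                    self.links.add(triple)
            changed = len(self.links) > before

def transitivity(kg):
    """If A isa B and B isa C, infer A isa C."""
    pairs = kg.query("isa")
    return [("isa", a, c) for (a, b) in pairs for (b2, c) in pairs if b == b2]

kg = KnowledgeGraph()
kg.add("isa", "self", "agent")                   # a crude self-model: "I am an agent"
kg.add("isa", "agent", "goal-driven system")
kg.apply_rules([transitivity])
print(kg.query("isa"))                           # now also contains ("self", "goal-driven system")
```

In Hyperon’s design, as Goertzel describes it, the reasoning engine, the evolutionary learner and the neural components all read from and write to a shared graph in roughly this spirit, which is what lets the system keep a model of itself and of the user rather than just mapping prompts to answers.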
OpenAI’s o1 and regulatory risk
For all the hype around autonomous agent-like behavior with OpenAI and “Strawberry,” Goertzel believes OpenAI deliberately avoided taking that path with the released o1 model.
“It’s trying to be good at reasoning and logic, and it’s very good at that. I love it. It’s very impressive. It’s not trying to be an autonomous agent. That’s a different thing on purpose. I think they don’t want to do that, because it would look risky and dangerous, and people [and] regulators would try to stomp on them. The last thing they want to do is make an autonomous agent.”
Which is one of the benefits of developing a decentralized open-source system. Regulators can’t “stomp on it” in the same way.
“As we launch advanced AI systems, they’ll be running across the machines spread across every continent, in 50 or 100 different countries,” he says. “So I mean, if one country decided OpenCog Hyperon is illegal, it’s only gonna be a small fraction of the network.”
Is Sam Altman trying to prepare the world for autonomous agents by portraying a utopia? (X)
Since our interview at Token2049 last week, OpenAI CEO Sam Altman has released an essay painting a portrait of a future utopia called the “Intelligence Age” brought about by AGI. Goertzel also hopes that the benevolent AGI he’s trying to build will be so useful that no one will want to ban it.
“I believe this can be something way smarter than what Big Tech is doing. So if we can get something way smarter than the OpenAI o1 model using OpenCog Hyperon, and we can roll that out on a decentralized network, I mean, then the world will jump into that the same way they jumped into ChatGPT, and then they will be using the decentralized network, just by the way, because it happens to be in the underlying infrastructure.”
Challenges and advantages of decentralized AI systems
The tension at the heart of decentralized AI projects is that it’s much easier and cheaper to run large models using centralized equipment.
Training neural nets and transformers on decentralized equipment is currently a nonstarter, although Goertzel says new research suggests it is feasible.
CUDOS should provide a big boost to the amount of computing resources available (X)
But he says logical reasoning and evolutionary learning algorithms “run very naturally on a decentralized network of machines.”
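That claim is easiest to see for evolutionary learning, which is close to embarrassingly parallel: each machine can evolve its own population independently and only occasionally swap its best candidates, so bandwidth and latency matter far less than when synchronizing gradients for a single large neural net. The sketch below is a toy island-model evolutionary search in Python, illustrative only and not SingularityNET or Hyperon code; every name and parameter in it is made up for the example.

```python
import random

# Toy island-model evolutionary search (illustrative only, not SingularityNET code).
# Each "island" stands in for an independent node in a decentralized network:
# it evolves its own population locally and only occasionally exchanges its best genome.

TARGET = [1] * 20                                    # toy goal: evolve an all-ones bitstring

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve_island(population, generations=20, mutation_rate=0.05):
    """Local evolution on one node: mutate, then keep the fittest."""
    for _ in range(generations):
        children = [[1 - g if random.random() < mutation_rate else g for g in parent]
                    for parent in population]
        population = sorted(population + children, key=fitness, reverse=True)[:len(population)]
    return population

# Four independent "nodes", each with its own random population of ten genomes.
islands = [[[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
           for _ in range(4)]

for _ in range(5):                                           # five rounds of evolve-then-migrate
    islands = [evolve_island(pop) for pop in islands]        # fully parallelizable step
    migrants = [max(pop, key=fitness) for pop in islands]    # one genome per node
    for i, pop in enumerate(islands):                        # cheap, infrequent exchange
        pop[-1] = migrants[(i + 1) % len(islands)]           # replace each node's weakest genome

print(max(fitness(g) for pop in islands for g in pop))       # should approach 20
```

Each call to evolve_island could run on a different machine with no coordination; only the small migrant exchange at the end of a round crosses the network, which is why this style of learning tolerates a decentralized substrate far better than backpropagation over a large transformer.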
The plan appears to be to bootstrap the network with some more centralized facilities while adding compute from the decentralized network. SingularityNET and Fetch have spent “a significant fraction” of 100 million ASI tokens (currently being renamed from FET) to purchase GPUs to build supercomputers. They’ll spend the rest once token prices recover.
“We want to have that as a sort of initial hub of the decentralized network to kickstart it,” he says. “We want to offer hosting for people who want to put AI agents on SingularityNET.” The vision is a Web3-friendly cloud solution like Hugging Face.
“AI developers just want the fastest, cheapest, easiest way to deliver a given function,” he explains. “If you want it to be decentralized, somehow you need to make it more appealing to end-users for reasons other than the philosophy of decentralization.”
Between the four projects in the Alliance, Goertzel estimates they’ll soon have “a couple of hundred million dollars of dedicated compute hardware.” Additional computing can be pulled in from SingularityNET spin-off NuNet, which uses idle CPU and GPU power from connected computers, and Hypercycle, which is a platform connecting AI services.
Renewable energy to power decentralized AI systems
LayerZero’s Bryan Pellegrino spent a decade working in AI and told Magazine during Korea Blockchain Week that his experience with electricity prices impacting his Bitcoin mining profits has left him with big reservations about decentralized AI.
“It’s very hard to get compute the economies of scale [needed], or the cost of the underlying hardware [down], in a world when you’re competing against Google and AWS and all of these others, and just how they do their cooling and everything, from the cost of electricity all the way down. So I’ve always been bearish on most of the different segments of the intersection.”
But Goertzel says the cost of electricity pales in comparison to the cost of the equipment, so “I don’t agree that it’s a non-starter.”
SingularityNET founder Dr Ben Goertzel on keyboards with his surprisingly good robot-fronted band. (Fenton)
However, he adds that SingularityNET and Hypercycle are exploring opportunities to benefit from cheap renewable power given the large power demands of AI.
“We’ve talked to folks in the Ethiopian Government about putting a bunch of servers near the dam there,” he says. “Toufi (Saliba, CEO) from Hypercycle is in discussion with the Paraguay government about putting a bunch of AI data centers and server farms next to the dam on the Brazil-Paraguay border. I mean, it is a thing to get multiple gigawatts of power straight into your AI compute center by putting your computer center next door to the dam with a fat cable to it.”
It certainly is. To underscore the scale of the electricity required, news emerged this week that OpenAI CEO Sam Altman has pitched the White House a plan to build enormous data centers in various US states, each requiring 5 gigawatts of power — equivalent to the output of five nuclear reactors.
AI data centers are usually located near major metropolitan areas as the services require low latency to deliver ultra-fast responses. However, Goertzel points out the o1 model has prioritized the quality of responses over speed, and OpenCog Hyperon will, too.
"If you have AI that’s trying to do any kind of deep thinking, let’s say AI is trying to predict the direction of the market over the next day, right? Or the AI is trying to discover a new drug, right? Then it doesn’t matter if it’s in Paraguay or Ethiopia or wherever.”