A lot of talk about super-intelligent machines has been circulating in social media and news reports in the past few months, fueled by recent advances in applied genAI technologies. One of the most trusted and revered sources of discussion on this topic is MIT Technology Review. In the March issue of its German edition, reporters dive into questions surrounding this hot topic, including whether general machine intelligence – also called AGI – is anywhere on the horizon. To answer this question they contacted several respected AI researchers, including Dr. David Chalmers of New York University, Dr. Jürgen Schmidhuber of IDSIA, Dr. Katharina Zweig of TU Kaiserslautern, and IIIM Director Dr. Kristinn R. Thórisson.
In the issue, Dr. Thórisson says: “A valid AGI test would need to measure an AI’s capacity to learn autonomously, innovate, and pursue new objectives, while also being able to explain, predict, create, and simulate various phenomena.” These are capabilities demonstrated by his team’s AGI-aspiring system, AERA (Autocatalytic Endogenous Reflective Architecture). AERA learns from experience, is capable of what Dr. Thórisson calls ‘machine understanding,’ and corrects its own understanding when it gets things wrong. Thórisson continues: “[when learning from experience] we can misunderstand things. When a piece of a puzzle is missing, we seldom choose to start [learning] from scratch – instead, we adjust our existing knowledge based on what we’ve identified as incorrect.”
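To give a flavor of the kind of targeted knowledge revision described here – adjusting only the piece of knowledge identified as incorrect rather than relearning from scratch – the following is a minimal, hypothetical Python sketch. It is not based on AERA’s actual implementation; the names `Rule`, `KnowledgeBase`, and `revise`, and the confidence-update scheme, are invented purely for illustration.

```python
# Hypothetical illustration of targeted knowledge revision; not AERA's actual code.

class Rule:
    """A simple predictive rule: if `condition` holds, expect `prediction`."""
    def __init__(self, condition, prediction, confidence=1.0):
        self.condition = condition
        self.prediction = prediction
        self.confidence = confidence

class KnowledgeBase:
    def __init__(self):
        self.rules = []

    def predict(self, observation):
        """Return predictions from all rules whose condition matches."""
        return [(r, r.prediction) for r in self.rules if r.condition(observation)]

    def revise(self, observation, actual_outcome):
        """Adjust only the rules that predicted wrongly; leave the rest intact."""
        for rule, predicted in self.predict(observation):
            if predicted != actual_outcome:
                # Weaken the faulty rule instead of discarding all knowledge.
                rule.confidence *= 0.5
                if rule.confidence < 0.1:
                    self.rules.remove(rule)

# Usage: a rule that wrongly assumes every ball bounces is weakened, not the whole model.
kb = KnowledgeBase()
kb.rules.append(Rule(lambda obs: obs["object"] == "ball", prediction="bounces"))
kb.revise({"object": "ball", "material": "clay"}, actual_outcome="does not bounce")
print(kb.rules[0].confidence)  # 0.5 – only the offending piece of knowledge changed
```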
Dr. Thórisson explains that a general machine intelligence must be adept at formulating and managing its goals and subgoals, methodically crafting a roadmap to achieve its objectives. He also believes that any real AI must be capable of conducting experiments within its environment: “True understanding comes from interaction and experimentation,” he noted, meaning that an AGI must be able to pose and test hypotheses and learn from the outcomes, much like a scientist in the lab or a child making sense of the real world. “[AERA] starts with a small amount of designer-specified code—a seed—that autonomously evolves” as the system learns, explains Dr. Thórisson.
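As a rough illustration of this experiment-driven loop – starting from a small seed of knowledge, posing hypotheses, testing them through interaction, and keeping what survives – here is a hedged Python sketch. The toy environment, the hypothesis format, and the `seed_hypotheses` name are assumptions made for the example; they do not represent AERA’s actual seed language.

```python
# Illustrative sketch of hypothesis-driven learning from a small "seed" of knowledge.
# The toy environment and all names are invented; this is not AERA's actual seed.

def environment(action):
    """Toy world: pushing an object moves it; other actions change nothing."""
    return "moved" if action == "push" else "no_change"

# The seed: a tiny amount of designer-specified knowledge to bootstrap from.
seed_hypotheses = [("push", "moved"), ("pull", "moved"), ("wait", "moved")]

learned_model = []
for action, expected in seed_hypotheses:
    # Pose the hypothesis, test it by acting in the world, observe the outcome.
    outcome = environment(action)
    if outcome == expected:
        learned_model.append((action, expected))   # knowledge that survived the test
    else:
        # A failed experiment is informative too: record the corrected expectation.
        learned_model.append((action, outcome))

print(learned_model)  # the system's revised model of how its actions affect the world
```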
The framework has shown impressive results by applying a single, unified learning mechanism to a variety of different, complex tasks, without resorting to classic machine learning techniques or reinforcement learning. It can transfer knowledge from one task to another without outside help – for example, controlling a robot arm to grasp objects of different sizes and shapes, and constructing semantically and syntactically correct sentences on the subject of materials recycling.
At the core of AERA lies the principle of reflectivity: the system can inspect its own learning content and mechanisms. Thórisson outlines AI’s future potential to address some of humanity’s most pressing challenges – disease, poverty, climate change, and the flaws within our social and economic structures. His vision extends to transforming education, refining market economies, and combating fraud. After all, an AGI would be an expert jack of all trades that anyone could wield to optimize their own tasks and pursuits, hopefully helping to change the world for the better.
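As a loose analogy for the reflectivity principle mentioned above – a system that can inspect and act on its own learned content – here is a small, hypothetical Python sketch. The model registry, its reliability statistics, and the `inspect`/`prune` methods are invented for illustration and do not reflect AERA’s internals.

```python
# Hypothetical sketch of reflectivity: the system inspects its own models and
# decides which ones to trust, refine, or discard. Not AERA's actual design.

from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    successes: int = 0
    failures: int = 0

    @property
    def reliability(self):
        total = self.successes + self.failures
        return self.successes / total if total else 0.0

@dataclass
class ReflectiveSystem:
    models: list = field(default_factory=list)

    def inspect(self):
        """Self-inspection: report on the system's own learned content."""
        return {m.name: m.reliability for m in self.models}

    def prune(self, threshold=0.3):
        """Act on that inspection: drop models the system itself judges unreliable."""
        self.models = [m for m in self.models if m.reliability >= threshold]

# Usage: the system examines its own knowledge and removes a weak model.
system = ReflectiveSystem([Model("grasp-cylinder", 8, 2), Model("grasp-cube", 1, 9)])
print(system.inspect())   # {'grasp-cylinder': 0.8, 'grasp-cube': 0.1}
system.prune()            # only the reliable model remains
```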
Dr. Thórisson does not consider super-intelligent or generally intelligent machines to be “just around the corner,” as building them requires answering numerous fundamental questions about the nature of intelligence. While recent progress in applied AI, such as Large Language Models (LLMs), has opened the world’s eyes to the potential of AI applications, it has also made the limitations of contemporary artificial intelligence technologies increasingly clear. For superintelligence, the technology must move far beyond the current replication of human responses through statistical acrobatics, toward a deeper, genuine, abstract understanding of the world.