Why Industry 5.0 Needs Artificial General Intelligence

By: Bas Steunebrink, Co-founder and Director of Artificial General Intelligence; Eric Nivel, Lead AGI Engineer; and Jerry Swan, Research Scientist at NNAISENSE.

We take automation for granted in our modern world, benefiting each day from supply chains which span the globe, delivering a vast selection of goods to our shelves. But behind the scenes, the production and movement of goods generate many optimization challenges, such as packing, scheduling, routing, and assembly-line automation. These optimization challenges are dynamic, changing constantly in tandem with the real world. Expected supply routes may suddenly become compromised by unforeseen circumstances: the Suez Canal may be blocked, air routes may change due to volcanic eruptions, and entire countries may become inaccessible because of conflict. Changes in legislation, currency collapses and scarce resources are further examples of supply-side variables in constant flux.

To take another example, sometimes a novel component must be incorporated into a machine or workflow (users may want different materials or colors, for instance). Currently, expert human labour is required to make changes to the system, or, in the case of machine learning, to retrain and redeploy the solution as well. In a similar manner, the “digital twins” of Industry 4.0 remain heavily dependent on the notion that the problem description and the distribution of inputs can be specified once and for all at the point of initial system design.

The recent pandemic has highlighted the fragility of “just-in-time” supply chain planning. In an increasingly complex and uncertain world, industry can no longer afford such inflexibility. At present, manufacturing must make a fixed choice between “Low-Mix High-Volume” (LMHV) and “High-Mix Low-Volume” (HMLV). Industry 5.0 anticipates the prospect of “High-Mix High-Volume” (HMHV), in which the workflow can be reconfigured at low cost to meet fluid requirements. Achieving this means “automating automation”: eliminating the need for human intervention and/or system downtime when the problem or the environment changes. This requires systems that “work on command,” reacting to such changes whilst still having a reasonable prospect of completing their assigned tasks within real-world time constraints. Consider, as an example, instructing an assembly-line robot, currently engaged with task X, as follows:

“Stop assembling X immediately: here’s a specification of Y, and here are most of your old and a few new effectors. Now start assembling Y, avoiding such-and-such kinds of defects and wastage.”

Despite widespread recent talk of the imminent arrival of “Artificial General Intelligence” (AGI) via so-called Large Language Models such as GPT-3, none of the proposed approaches is genuinely capable of “work on command.” That is, they cannot be tasked with something completely outside their training set without the downtime of offline re-training, verification, and redeployment.

It is surely clear that any real-world notion of intelligence is inextricably associated with responsiveness to change. A system that remains unchanged—no matter how many unexpected events it is exposed to—is neither autonomous nor intelligent. This is not to detract from the undoubted strengths of deep learning (DL) approaches, which have enjoyed great success as a means of synthesising programs for problems which are difficult to explicitly specify.

So what kind of system functionality might enable AI to move beyond this “train, freeze, and deploy” paradigm, toward one capable of uninterrupted adaptive learning? Consider the need to replace a defective component in a manufacturing workflow with one from a different vendor, which may have different tolerances. With the end-to-end black-box modelling of contemporary AI, the digital twinning process must be done anew. Addressing the limitations of contemporary approaches requires a radical change: a model that can directly reason about the consequences of a component change, and indeed about more general counterfactual “what if” scenarios. Decomposing a workflow into components with known properties and recombining them as needed requires what is known as “compositionality.”

Compositionality has so far eluded contemporary AI, where it is often confused with the weaker notion of modularity. Modularity is concerned with the ability to ‘glue’ components together, but this fails to capture the essence of compositionality, which is the ability to reason about the behaviour of the resulting workflow in order to determine and ensure the preservation of some desired property. This ability is vital for verification and safety: for example, the ability of the system to reason that “adopting an engine from an alternative manufacturer will increase the overall plant's power output while all its other components stay within temperature margins.”
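To make the distinction concrete, here is a minimal sketch, in Python, of compositional reasoning over the engine example above. The Component and Plant classes, their properties, and the numbers are illustrative assumptions rather than an existing API: each component declares the properties relevant to verification, the composite derives its own properties from those of its parts, and a counterfactual component swap can be checked before anything changes on the factory floor.

```python
# Hypothetical sketch: components declare verification-relevant properties,
# and a composite plant derives its own properties from its parts, so a
# counterfactual swap can be reasoned about rather than re-measured.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    power_kw: float       # power the component contributes to the plant
    max_temp_c: float     # highest temperature the component tolerates
    heat_rise_c: float    # temperature rise it induces downstream

@dataclass(frozen=True)
class Plant:
    components: tuple

    @property
    def power_kw(self) -> float:
        # Composite property derived from the parts.
        return sum(c.power_kw for c in self.components)

    def within_temperature_margins(self, ambient_c: float) -> bool:
        # Each component must stay inside its tolerance given the heat
        # added by everything upstream of it.
        temp = ambient_c
        for c in self.components:
            if temp > c.max_temp_c:
                return False
            temp += c.heat_rise_c
        return True

# Counterfactual "what if": swap in an engine from an alternative manufacturer.
engine_a = Component("engine_A", power_kw=90.0,  max_temp_c=120.0, heat_rise_c=35.0)
engine_b = Component("engine_B", power_kw=110.0, max_temp_c=120.0, heat_rise_c=45.0)
pump     = Component("pump",     power_kw=5.0,   max_temp_c=80.0,  heat_rise_c=5.0)

current  = Plant((engine_a, pump))
proposed = Plant((engine_b, pump))

print(proposed.power_kw > current.power_kw)                  # power output increases
print(proposed.within_temperature_margins(ambient_c=25.0))   # margins still respected
```

The point is not the arithmetic but the shape of the reasoning: the property of the whole is computed from the declared properties of the parts, so the effect of replacing one part can be established without rebuilding the model of the whole.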

Although contemporary neural network approaches excel at learning rules from data, they lack compositional reasoning. As an alternative to hoping that compositional reasoning will emerge from within neural network architectures, it is possible to make direct use of the constructions of category theory, the mathematical study of compositionality. In particular, its subfield categorical cybernetics is concerned with bidirectional controllers as fundamental representational elements. Bidirectionality is the ability to perform both forward and inverse inference: prediction-making from causes to effects and vice versa. Compositional inverse inference is particularly important because it allows the incorporation of feedback from the environment at any scale of structural representation—this facilitates rapid learning from a small number of examples.
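As a rough illustration of bidirectionality, the sketch below pairs a forward map (from cause to predicted effect) with a backward map (from feedback on the effect to feedback on the cause), and composes two such pieces so that prediction runs left to right while environment feedback propagates right to left. This is only a toy, lens-like rendering of the idea under assumed names and toy stages, not the formalism of categorical cybernetics itself.

```python
# Toy bidirectional building block: a forward map plus a backward map,
# composed so that feedback on the final effect is carried back through
# every intermediate stage.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Bidirectional:
    forward: Callable[[float], float]           # cause -> predicted effect
    backward: Callable[[float, float], float]   # (cause, effect feedback) -> cause feedback

    def compose(self, other: "Bidirectional") -> "Bidirectional":
        def fwd(x: float) -> float:
            # Forward inference runs left to right through both stages.
            return other.forward(self.forward(x))
        def bwd(x: float, fb: float) -> float:
            # Feedback runs right to left, revisiting the intermediate value.
            mid = self.forward(x)
            return self.backward(x, other.backward(mid, fb))
        return Bidirectional(fwd, bwd)

# Two toy controllers: a gain stage and an offset stage.
gain   = Bidirectional(lambda x: 2.0 * x, lambda x, fb: fb / 2.0)
offset = Bidirectional(lambda x: x + 1.0, lambda x, fb: fb)

pipeline = gain.compose(offset)
print(pipeline.forward(3.0))        # forward inference: 2 * 3 + 1 = 7.0
print(pipeline.backward(3.0, 0.5))  # feedback on the output mapped back: 0.25
```

Because composition preserves the backward direction, feedback from the environment can in principle be attached at any level of the composed structure, rather than only at the outermost input and output.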

Given some desired system behaviour, the learning task is then to build an aggregate control structure which meets it. Initially learned structures act as a skeleton for subsequent learning.

As the system’s knowledge increases, this skeleton can be decorated with learned compositional properties, much as an H2O molecule has properties distinct from those of its constituent atoms. In addition, just as “throwing a ball” and “swinging a tennis racket” can be seen as related musculoskeletal actions for a human, so related tasks can share a skeletal controller structure which is embellished in a task-specific manner via feedback from the environment, as sketched below. This decoupling of causal structure from task-specifics can facilitate learning new tasks without the catastrophic forgetting that plagues contemporary approaches. Hence, a hybrid numeric-symbolic approach of the form described above can combine the strengths of both neural and symbolic methods, by having both an explicit notion of structure and the ability to learn adaptively how properties are composed. Reasoning about compositional properties is grounded on an ongoing basis by the work the system is currently commanded to perform.
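A minimal sketch of that decoupling, assuming a deliberately crude proportional-correction skeleton and two toy stand-in “environments” for throwing and swinging: the control structure is shared, each task keeps only its own small parameter, and learning a new task never overwrites what an earlier task learned.

```python
# Hypothetical sketch: one shared control skeleton, per-task parameters
# adjusted from environment feedback. Learning a new task only touches
# that task's parameter, so earlier tasks are left intact.

def skeleton(param: float, target: float, observe, rounds: int = 20) -> float:
    """Shared skeleton: propose, observe the error, apply a proportional correction."""
    for _ in range(rounds):
        error = observe(param) - target
        param -= 0.5 * error
    return param

# Task-specific toy environments (stand-ins for "throw a ball" / "swing a racket").
def throw_distance(effort: float) -> float:
    return 3.0 * effort

def swing_speed(effort: float) -> float:
    return 1.5 * effort + 2.0

task_params = {}
task_params["throw"] = skeleton(param=1.0, target=12.0, observe=throw_distance)
task_params["swing"] = skeleton(param=1.0, target=8.0, observe=swing_speed)

print(round(task_params["throw"], 2))  # ~4.0, since 3.0 * 4.0 = 12.0
print(round(task_params["swing"], 2))  # ~4.0, since 1.5 * 4.0 + 2.0 = 8.0
```

Because the skeleton is fixed and only per-task parameters are updated, adding the “swing” task cannot degrade the “throw” task, which is precisely the behaviour that catastrophic forgetting undermines in monolithic end-to-end models.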

In conclusion, it is clear that a new approach is required to create truly autonomous systems: systems capable of accommodating significant change and/or operating in unknown environments. This requires uninterrupted adaptive learning and generalising from what is already known. Despite their name, deep learning approaches have only a shallow representation of the world, one that cannot be manipulated at a high level by the learning process. In contrast, we propose that the AGI systems of the next generation will incorporate deep learning within a wider architecture, equipped with the ability to reason directly about what they know.

The ability of a system to reason symbolically about its own representation confers significant benefits for industry: with an explicitly compositional representation, the system can be audited—whether by humans or internally by the system itself—to meet vital requirements of safety and fairness. While there has been much academic concern about the so-called x-risk of AGI, the appropriate focus is rather the concrete engineering problem of re-tasking a control system while retaining these vital requirements, a process which we term interactive alignment. It is only through the adoption of such control systems, which are trustworthy and efficient continual learners, that we will be able to realize the next generation of autonomy envisioned by Industry 5.0.

From an early age, Bas has asked how intelligence allows one to perform competently despite inevitably insufficient resources. To better understand natural bounded rationality, his research initially focused on artificial emotions before moving to silicon-friendly approaches to general intelligence as an IDSIA postdoc, where he received several best paper awards and a grant from the Future of Life Institute. At NNAISENSE, Bas heads up the effort to develop general-purpose AI.