Notes on Intelligence for the ‘AI’ Era

Author

Mariana Emauz Valdetaro

Published

January 2, 2025

Abstract

The aim is to draft an informal overview of recent developments in research efforts tied to intelligence, set against the ongoing and pivotal social, technological, and ecological shift of the artificial intelligence age. Standing at the cusp of a new era for disciplines, the internet, knowledge, and humans, it seems pertinent to examine the convergence of various takes on intelligence - from computer science to biology, neuroscience, philosophy, and even cybernetics - so that we may knowingly shape the tools of today into the world and systems of tomorrow. This is an open note, and a work in progress.

1 Introduction

“What is a friend? A single soul dwelling in two bodies.” - Aristotle

We begin with some apropos foundational concepts and distinctions relating to intelligence. As per convention, we first must propose a working definition for it. What is intelligence? There are many proposals (Legg and Hutter 2007) and few certainties; thus, it is truly important to recognize that, as of now, we do not have a single definition for it. While intelligent individuals may simply smile if prompted with such a question, it is nonetheless relevant to attempt a functional and empirically tractable answer, one flexible enough to hold up across domains. In practice, having a working definition of intelligence means exercising the bridge that connects theory to observation, and biological to engineered scopes: one that allows us to make predictions, move toward potentially more effective relational strategies, and get closer to the bottom of its meaning.

Chalmers, David J. 1996. “Facing up to the Problem of Consciousness.” https://consc.net/papers/facing.pdf.

Something may be perceived as intelligent, but is that thing’s awareness of its own intelligence a determinant of it? The arguable ambiguity between intelligence and self-awareness points to open questions in our understanding of cognition, and even to the hard problem of consciousness (Chalmers 1996).

The difficulty in defining intelligence is not merely a semantic one, as it reflects this concept’s multidimensional nature. When asked to define intelligence, two dozen prominent theorists provided two dozen somewhat different definitions (Neisser 1996), which isn’t in itself a failure to define it, nor is it a matter deemed forever open to interpretation. Rather, it seems to be evidence that however lightly we may use the term intelligence, namely in artificial intelligence, we do use it, even if not fully aware of its meaning, or in agreement about what it means. There are several semantic cases of this nature, and it does not impede conveying the idea of intelligence, or any idea, intelligently.

The misconception that intelligence refers to human-only or human-like abilities, such as self-awareness or reason, is intriguingly informative of a deeper insight: the relationship a thing has to itself, to its identity, and to what differs from it.

When intelligence is broadly conceptualized as the ability to acquire and apply knowledge and skills (Neisser 1996), there is an implicit extension to adapting effectively to the environment and learning from experience. Here, said entity must have some sort of embodiment, one that allows for a dual relationship with its internal states and external events. In computational contexts, intelligence is then related to goal-directed behavior (Duncan et al. 1996), where something acts so as to maximize performance towards a goal or task, based on past experience. Conversely, conventional methods to assess intelligence coefficients in humans, regardless of arguments about their efficiency, contradict the premise (Ree and Earles 1998), as higher scores have not been found to translate systematically into job performance (Steel and Fariborzi 2024). Yet, as an information-processing capability, a useful working and general definition may be intelligence as the competency to achieve goals / solve problems in a given space (Legg and Hutter 2007; Levin 2021).[^1]

[^1]: I should expand this at some point to include material, formal, ideal, structural and post-structural debates.
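To make the computational reading concrete, here is a minimal sketch of an agent that acts so as to maximize performance towards a goal based on past experience. It is an epsilon-greedy bandit; the class name, parameters, and reward setup are illustrative assumptions, not part of any cited framework.

```python
import random

class GoalDirectedAgent:
    """Toy agent that improves goal-directed performance from past experience
    (an epsilon-greedy bandit; all names here are illustrative)."""

    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions    # how often each action was tried
        self.values = [0.0] * n_actions  # running mean reward per action

    def act(self):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def learn(self, action, reward):
        # Incremental mean update: past experience shapes future choices.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Action 1 secretly pays more; over time the agent's estimates reflect that.
random.seed(0)
agent = GoalDirectedAgent(n_actions=2)
for _ in range(2000):
    a = agent.act()
    reward = random.gauss(1.0 if a == 1 else 0.2, 0.1)
    agent.learn(a, reward)
print(agent.values)  # the learned per-action value estimates
```

The point of the sketch is only that "performance towards a goal" here is nothing but a statistic accumulated from interaction history, which is exactly the narrow, behavioral sense of intelligence the computational literature tends to adopt.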

Neisser, Ulric, Gwyneth Boodoo, Thomas J. Bouchard, et al. 1996. “Intelligence: Knowns and Unknowns.” American Psychologist 51 (2): 77–101. https://doi.org/10.1037/0003-066X.51.2.77.
Duncan, J., H. Emslie, P. Williams, R. Johnson, and C. Freer. 1996. “Intelligence and the Frontal Lobe: The Organization of Goal-Directed Behavior.” Cognitive Psychology 30 (3): 257–303.
Ree, Malcolm James, and James A. Earles. 1998. “Intelligence Is the Best Predictor of Job Performance.” Personnel Psychology 1 (3): 431–42. https://doi.org/10.1111/j.1744-6570.1998.tb00770.x.
Steel, Piers, and Hadi Fariborzi. 2024. “A Longitudinal Meta-Analysis of Range Restriction Estimates and General Mental Ability Validity Coefficients: Better Addressing Overcorrection Amid Decline Effects.” Intelligence 88: 101703.
Legg, Shane, and Marcus Hutter. 2007. “A Collection of Definitions of Intelligence.” arXiv Preprint arXiv:0706.3639.
Levin, Michael. 2021. “Technological Approach to Mind Everywhere (TAME): An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds.” Frontiers in Systems Neuroscience 15: 709301.

This perspective allows us to recognize various forms of intelligence across species and systems. An interdisciplinary approach seems relevant here precisely because, while a single discipline may offer many perspectives, across disciplines the fragmentation and ambiguity hinder any chance of consensus. While the study of intelligence across humans, animals, and artificial systems reveals similarities and profound differences, looking at intelligence as a competency spectrum with expression in multi-dimensional spaces places the ability to acquire, process, and apply information to achieve goals, adapt to environments, and solve problems as mere manifestations of said expression. Here, we understand spaces as including physical, transcriptional, anatomical, physiological, and other abstract domains, combining Legg and Hutter’s survey of intelligence definitions (Legg and Hutter 2007) with Michael Levin’s developmental and biological perspective (Michael Levin 2021).

In regards to competency expression, we may still be tied to the limitations our own competencies allow us to observe; however, this framing also offers the possibility of developing new means to expand our competencies, the same way we have developed tools to see beyond the spectrum of visible light. In this context, the hypothesis of an interdisciplinary and meta-entity Turing test could assess intelligence through a lens that transcends anthropomorphic mimicry. Traditional Turing tests focus on linguistic deception: “Can a machine imitate human conversation?” Yet this approach, while historically pivotal, needs some adjustments to address a competency-based understanding of intelligence. A meta-entity Turing test would instead evaluate how systems recognize and interact with intelligence across domains, and allow competency mapping of problem-solving in physical, transcriptional, or abstract spaces (e.g., optimizing metabolic pathways, or negotiating ethical dilemmas). When slime molds exhibit problem-solving competency through gradient navigation and resource optimization (Reid et al. 2012), and systems like AlphaFold demonstrate competency in protein-folding spaces (Jumper et al. 2024) previously considered exclusive to evolutionary processes, challenging traditional notions of problem-solving and adaptability across temporal scales (Michael Levin and Martyniuk 2018), while the latter has been said to be “aware but not conscious” (Wang 2023), where does the former stand?

Reid, C. R., T. Latty, A. Dussutour, and M. Beekman. 2012. “Slime Mold Uses Externalized Spatial ‘Memory’ to Navigate in Complex Environments.” Proceedings of the National Academy of Sciences 109 (43): 17490–94. https://doi.org/10.1073/pnas.1215037109.
Jumper, John M., et al. 2024. “Accurate Structure Prediction of Biomolecular Interactions with AlphaFold 3.” Nature 630 (8016): 493–500. https://doi.org/10.1038/s41586-024-07487-w.
Wang, Jinchang. 2023. “Self-Awareness, a Singularity of AI.” Philosophy Study 13 (2): 68–77. https://doi.org/10.17265/2159-5313/2023.02.003.
Walsh, Toby. 2022. “The Meta-Turing Test.” arXiv:2205.05268.

It is said that conventional Turing tests have a design flaw of asymmetry, and the Meta-Turing test (Walsh 2022) resolves this asymmetry through a peer-grading approach where both entities, the tester and the testee, are assessed mutually through interaction, evaluating their abilities to identify and critique flawed problem-solving approaches. This, of course, is useful in a conversational human-machine setting, but we could take the premise further to assess intelligence profiles across competency spaces.
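The peer-grading premise can be sketched in a few lines of Python. This is not Walsh's protocol: the `Solver` class, its `solve`/`grade` interface, and the arithmetic tasks are hypothetical stand-ins, chosen only to show the symmetry where each entity both solves tasks and grades the other's solutions.

```python
class Solver:
    """Hypothetical entity that can both solve tasks and grade answers."""
    def __init__(self, error=0):
        self.error = error

    def solve(self, task):
        x, y = task
        return x + y + self.error           # a flawed solver drifts by `error`

    def grade(self, task, answer):
        x, y = task
        return 1 if answer == x + y else 0  # grading = spotting flawed solutions

def peer_grade(agent_a, agent_b, tasks):
    """Symmetric evaluation: tester and testee roles are mutual,
    so each agent's score depends on the other's grading competency."""
    score_a = sum(agent_b.grade(t, agent_a.solve(t)) for t in tasks)
    score_b = sum(agent_a.grade(t, agent_b.solve(t)) for t in tasks)
    return score_a / len(tasks), score_b / len(tasks)

tasks = [(1, 2), (3, 4), (5, 6)]
print(peer_grade(Solver(), Solver(error=1), tasks))  # → (1.0, 0.0)
```

Even in this toy form, the symmetry is visible: a grader that cannot recognize a correct solution would drag down the other agent's score, which is exactly the mutual-assessment property the Meta-Turing test exploits.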

Here, we would be faced with obvious challenges of intelligibility between entities, and moreover of levels of competency within entity configurations.

If instead we keep the asymmetry, we overcome the intelligibility issue, but we remain lacking the tools to correctly identify competencies across multi-scale embodiments and temporal scales. From energy-allocation “decisions” to dynamic reallocation of resources, outside a conversational scenario we must rethink whether we actually have the means to assess how intelligent current AI systems are.

The meta-Turing test reintroduces the Chinese Room argument with a new perspective: “Could a system that explains its symbol manipulation processes (meta-cognition) cross the understanding threshold?” And while early experiments with large language models (LLMs) posited some degree of self-reflection capability, these capabilities and the premise of the meta-Turing test remain rooted in pattern recognition rather than experiential understanding, and rely on our ability to comprehend their explanations and identify underlying patterns.

It would be a considerable improvement to find an entity-agnostic method, perhaps one not so focused on testing but on identifying or translating competencies from the testee to the tester and vice versa.

So, how intelligent is the AI of today?

While Artificial Intelligence (AI) and Machine Learning (ML) are often conflated, their meanings differ. AI is a broad concept, encompassing systems designed to solve problems or perform tasks, which can use rule-based or learning-based models in their components. Machine Learning (ML), by definition, expresses the engineered intent to learn by design. Not all artificial intelligence systems are designed to learn after training; thus, Machine Learning can be seen as a subset of AI, aiming for systems that improve through exposure. Almost as if AI were the pursuit of creating “minds”, while ML is about creating “learning devices”.
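The distinction can be caricatured in a few lines of Python. The spam-filtering scenario and all names are illustrative assumptions, not any real library's API: the rule-based system's behavior is fixed by hand-written logic, while the learning-based one induces its behavior from exposure to examples.

```python
# Rule-based "AI": behavior fixed by hand; it never changes with exposure.
def rule_based_spam(message):
    return "free money" in message.lower()

# Learning-based (ML): behavior induced from examples, improving with exposure.
class CountingSpamFilter:
    """Minimal word-frequency classifier (illustrative, not a real library)."""
    def __init__(self):
        self.spam_counts = {}
        self.ham_counts = {}

    def train(self, message, is_spam):
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in message.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def predict(self, message):
        words = message.lower().split()
        spam_score = sum(self.spam_counts.get(w, 0) for w in words)
        ham_score = sum(self.ham_counts.get(w, 0) for w in words)
        return spam_score > ham_score

filt = CountingSpamFilter()
filt.train("win free money now", is_spam=True)
filt.train("lunch meeting at noon", is_spam=False)
print(filt.predict("free money offer"))  # → True
```

The rule-based function will classify the same way forever; the learning-based filter's judgments are a function of its training history, which is the ML sense of "improving through exposure".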

Proposed by Joscha Bach in his thesis “Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition” (Bach 2009), building on Dietrich Dörner’s Psi theory, the PSI architecture offers a comprehensive and philosophically grounded approach towards the creation of true intelligence, while at the same time attempting to understand it. Touching on enactive theories of agency (De Jesus 2018), and on intelligence, autonomy, behaviour, purpose, and teleology (Rosenblueth, Wiener, and Bigelow 1943), it suggests that embodiment, composition, and a variety of features would provide the systemic preconditions for learning and perceiving, having goals and motivations, and the ability to navigate complex, uncertain environments.

De Jesus, P. 2018. “Thinking Through Enactive Agency: Sense-Making, Bio-Semiosis and the Ontologies of Organismic Worlds.” Phenomenology and Cognitive Sciences 17: 861–87. https://doi.org/10.1007/s11097-018-9562-2.
Rosenblueth, Arturo, Norbert Wiener, and Julian Bigelow. 1943. “Behavior, Purpose and Teleology.” Philosophy of Science 10 (1): 18–24. https://doi.org/10.1086/286788.

When combining computational and biological views, a technological approach to mind everywhere, TAME (M. Levin 2021), offers experimental evidence of a truly diverse spectrum of intelligent entities, and experimental means of interacting with them in spaces that are not our own.

Drawing from evolutionary biology, the challenges faced by early life forms in surviving and adapting to their environments may actually be quite significant when aiming for truly adaptive AI (M. Levin, Pezzulo, and Finkelstein 2017; Baluška and Levin 2016).

Levin, M., G. Pezzulo, and J. M. Finkelstein. 2017. “Novosphingobium Sp. PP1Y as a Model for Studying Adaptive Decisions.” Frontiers in Microbiology 8: 2571.
Baluška, F., and M. Levin. 2016. “On Having No Head: Cognition Throughout Biological Systems.” Frontiers in Psychology 7: 902.
Bach, J. 2009. Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition. Oxford University Press.
Levin, M. 2019. “The Computational Boundary of a "Self": Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition.” Frontiers in Psychology 10: 2688.

Moreover, the views of embodied motivated cognition (Bach 2009) and the computational boundaries of a “self” (M. Levin 2019) also seem to suggest that architectures may vary, which challenges static notions of intelligence as a property, recasting it as a process subject to interference, where internal and external states interplay, creating diminished or augmented competency-state chains.

As such, and in reduced form, intelligence seems less connected to correct outputs in an abstract setting and more deeply intertwined with the interaction-based products of things and their environment.

Seeing intelligence as a process also allows for a substrate-independent view of what constitutes a mind, meaning that a mind could be the physical mapping of said process in a configuration space, rather than a brain or any particular bodily part. Associated with a mind are often mental representations and memories, as is “self-reflection”, and the mind-body distinction has long been an accepted convention. This notion is challenged (not to say outdated) by the finding that memories of previously faced challenges persisted through complete brain regeneration in planarian flatworms (Shomrat and Levin 2013). As for the mapping of the process of intelligence in anatomical space, the above suggests that a mind would better fit the description of a network contributing to a form of body-wide information system (Michael Levin and Martyniuk 2018).

Shomrat, Tal, and Michael Levin. 2013. “An Automated Training Paradigm Reveals Long-Term Memory in Planarians and Its Persistence Through Head Regeneration.” The Journal of Experimental Biology 216 (20): 3799–3810. https://doi.org/10.1242/jeb.087809.
Levin, Michael, and Christopher J. Martyniuk. 2018. “The Bioelectric Code: An Ancient Computational Medium for Dynamic Control of Growth and Form.” BioSystems 164: 76–93. https://doi.org/10.1016/j.biosystems.2017.08.009.

1.0.1 References