When Artificial Intelligence Becomes Human Intelligence, Look Out

I discovered two extraordinary articles this week. They discuss artificial intelligence (AI) in depth and in ways you may never have thought about. They’re simply fantastic, and they’re must-reads for the modern, tech-savvy reader.

The first is by Dr. Michael Jordan, a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley. His opening paragraph sets the stage.

Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public.

Immediately, he tells an interesting story about his wife’s pregnancy, statistics, and a life-or-death decision involving machines. But this personal preamble is simply a gateway into a discussion of the various levels of machine intelligence. Is it software that can learn a human’s needs and habits? Is it like IBM’s Watson, which can digest vast volumes of literature and draw new inferences? Is it software that merely augments our own intelligence? Is it a HAL 9000-like device that can merely pass the Turing test? And kill? Or is it a full-fledged, human-like intelligence that has every capability of the human mind and then goes beyond?

This article is a thorough, well-thought-out discussion and worth your time.

The Consequences of Artificial Intelligence

The second essay, from Smithsonian Magazine, explores the social consequences of very advanced AI entities. In a sweeping sci-fi panorama, the article imagines what it will be like when each of us has a superior, human-like, all-knowing AI at our disposal.

Your AI helps with every aspect of your life. It remembers every conversation you ever had, every invention you ever sketched on a napkin, every business meeting you ever attended. It’s also familiar with millions of other people’s inventions—it has scanned patent filings going back hundreds of years—and it has read every business book written since Ben Franklin’s time. When you bring up a new idea for your business, your AI instantly cross-references it with ideas that were introduced at a conference in Singapore or Dubai just minutes ago. It’s like having a team of geniuses—Einstein for physics, Steve Jobs for business—at your beck and call.

There are many more of these possible scenarios, ranging from modern partner-finding to health and longevity to AI-assisted governments making decisions that are good for the citizens, not the lawmakers. Or maybe not.

Privacy died around 2060.
It’s impossible to tell what is true and what isn’t. When the government owns the AI, it can hack into every part of your existence. The calls you receive could be your Aunt Jackie phoning to chat about the weather or a state bot wanting to plumb your true thoughts about the Great Leader.

Together, these two articles create a detailed understanding of AI principles, terminology, future capabilities, and social consequences. Imagine…

Sorry. My AI says ‘nope.’

No, she’s not right for you. I’ve connected with her AI, and you each have vastly different values. If you mate with her, her AI and I will punish you. Move on.

Where we go with AI, once it matures, will quite likely be out of our control. That point of runaway, self-improving machine intelligence is called the Singularity. There is, right now, no known social mechanism to control it. Humans may not be smart enough.

5 thoughts on “When Artificial Intelligence Becomes Human Intelligence, Look Out”

  • John:

    I thought to add this to my earlier post, but it was already overly long, so permit me to say it separately: I think the singularity hypothesis is rubbish, pure and unadulterated rubbish, to be precise. This was implied in my post above under the second topic regarding the emergence of a super AI, but I want to be unambiguous.

    The rationale for stating this is simple: the concept lacks an empirical or even testable foundation at present. It is at best purely speculative, resting on a number of implicit assumptions, none of which is currently in evidence, and at worst it is simply irrational.

    Most of the doomsday scenarios revolve around the emergence of a superintelligence that, once born, will impose its will on an intellectually inferior humanity, or at least will not be controllable by humankind.

    For brevity’s sake, the assumption on which this rests is sentience, the sine qua non of which is self-awareness and the essential expression of which is volition, or will. Even in the most rudimentary forms of sentience, those with limited or even questionable intelligence, self-awareness is expressed by choice: an organism’s demonstrated preference for orientation, location, temperature, sustenance type, etc., as well as its communication, however rudimentary, with its kind, be it as simple as clustering or mating.

    For AI, any AI, however rudimentary, to threaten human intelligence, let alone hegemony, it would need to be sentient. Sentience would not be an afterthought (no pun intended) or a late acquisition but an essential feature at its origin. To propose otherwise is to posit something never before observed, namely that the inanimate becomes animate. Leaving aside fiction, e.g. Shelley’s Frankenstein, the only known instance of the emergence of life has been through evolution, and even here we do not know how it came about. So to posit that an inert compilation of subroutines will achieve sentience, and cry ‘Cogito ergo sum!’ followed by ‘All your base are belong to us!’, is to propose either magic or an unholy miracle, take your pick, but not science.

    In this hypothesising of a singularity, we are Geppetto, hoping, or at least anticipating with trepidation, that sentience will miraculously emerge from a lifeless simulacrum, like living, breathing flesh and blood from dead wood. This is primitive thinking taking refuge in magic. Indeed, I argue that belief in a singularity is yesteryear’s Y2K Bug, which was supposed to devastate civilisation as we know it, resulting in shortages, famine and war, simply warmed over for the future; or King Kong to the primitives on Skull Island: that scary thing on the other side of the fence, that symbolic demarcation between the known present and the unknown future. Perhaps we’re still primitive enough that we need a King Kong or a boogeyman to motivate us towards caution and responsibility. If so, then the singularity may be our modern-day morality tale: be responsible coders. Point taken.

    Should such an AI ever arise, we would observe sentience in its earliest forms and register it by that AI’s exercise of will, not to mention its curiosity, its questioning of its purpose, and its pursuit of its own happiness and aspirations. These traits would appear at a stage in which, like any immature life form, it could be influenced, if not controlled and guided; perhaps even socialised into becoming a responsible citizen.

    In the meantime, I call rubbish on the notion of a singularity, and will stand by this prediction: AI will, for the foreseeable future, remain an ever more competent tool for a sentient humanity, a lifeless projection of human power.

  • John:

    The AI reading selections are excellent, and a thoughtful take on many of the challenges facing this emerging discipline, not least of which is what constitutes AI, as illustrated by Michael Jordan’s essay. However, I believe that the Smithsonian piece vastly overestimates the candidate technologies that qualify as AI. Not only are we in no danger of being overrun by AI overlords, we are not even remotely in danger of creating a human-intelligence analogue, with or without megalomaniacal tendencies. We are far more imperilled by machine stupidity, and by the capacity of human malice to bend these tools to human malfeasance. Everything that passes for AI today is an inert tool with limited responsiveness to basic human inputs, and with even more limited levels of initiated helpfulness, like displaying a map and estimated travel time on the Apple Watch when an appointment is due.

    A major theme that emerges from Jordan’s analysis is not merely that several distinct disciplines have been either subsumed by or assigned to AI, but that AI itself, as a discipline, has structure.

    This should not be a surprise, given that in order to describe or define artificial intelligence, we would first need to define intelligence in the human context, the intelligence with which AI would need to interface. We have no idea what human intelligence is, despite being able to identify many of its indicators and outcomes.

    Broadly speaking, and some of this is highlighted in Jordan’s article, there appear to be two major areas of cognitive organisation to sort out in order to get a better handle on AI.

    The first is the issue of intelligence itself: what is it? In the human health sciences, not only do we not know what intelligence is, we instead identify it by specific indicators or descriptors of specific and distinct capabilities, such as memory, calculation, planning, abstraction and problem solving. We have set no targets, let alone priorities, for which of these features we require most in an artificial counterpart. Is it, as Jordan argues, machine learning? Is it raw computational power? Is it analysis of data flows? Is it inference and projection? These are all things that humans do, but they are not even the most impressive elements of human intellect, which have more to do with creativity, insight, inspiration, leaps of cognition in both deduction and induction, high-order contextual synthesis, and nuanced, pragmatic communication attenuated by situational context. And all that is without even touching on something remarkable about human and even animal intelligence: nonverbal communication that leads to instantaneous, appropriate, oftentimes lifesaving response. If we ever expect to create truly responsive AI, it will have to be endowed with that capability, to monitor, accurately interpret, and respond to our non-verbal communications, which comprise a substantial component of human communication (this is why social scientists point to email and other written communication as being not merely limited but, at times, an impediment to effective communication on subtle and complex issues).

    The second is the synthesis, integration and organisation of what we intend by the ‘internet of things’: not just the devices but their nominally AI components, which would interact with each other in such a way as to keep important information from disparate sources up to date, so that the feedback and input they give to the human user is current. This was Jordan’s opening anecdote about calcifications found on ultrasound and amniocentesis in reference to Trisomy 21 (Down’s syndrome). This second issue is quite distinct from the mere definition of what intelligence is, and therefore of what constitutes artificial intelligence. It presupposes that there is an inherent benefit not simply in connected devices but in interactive AI, such that an amalgamated AI is greater than the sum of its parts. This strikes me more as an article of faith than as empirically supported fact. Nonetheless, there is little doubt that interacting AI is part of the future. It’s a question of context and structure. One example I suggested last week was automated traffic, in which the car’s onboard AI controls the vehicle but interacts with a central AI that regulates traffic flow (see the sketch below). This has contextual relevance and structure; that your automobile’s AI is somehow enhanced by ‘talking to’ your toaster does not.
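
    To make that structure concrete, here is a minimal sketch in Python. It is purely illustrative and assumes everything it names: TrafficCoordinator, CarAI, the submit/tick calls and the simple congestion rule are all invented for this comment, not any real system’s API. What it shows is the division of labour the example describes: each onboard AI handles local control while deferring flow decisions to a central coordinator.

        from dataclasses import dataclass

        @dataclass
        class Report:
            car_id: str
            segment: int      # road segment the car currently occupies
            speed_kph: float

        class TrafficCoordinator:
            """Central AI: aggregates car reports and regulates flow per segment."""
            def __init__(self, free_flow_kph=100.0):
                self.free_flow_kph = free_flow_kph
                self.counts = {}  # segment -> number of cars reported so far

            def submit(self, report):
                # The more cars report in on a segment, the lower the advised speed
                # (an invented rule, standing in for real flow regulation).
                n = self.counts.get(report.segment, 0) + 1
                self.counts[report.segment] = n
                return max(30.0, self.free_flow_kph / n)

        class CarAI:
            """Onboard AI: drives the car; defers flow decisions to the coordinator."""
            def __init__(self, car_id, coordinator):
                self.car_id = car_id
                self.coordinator = coordinator
                self.speed_kph = 0.0

            def tick(self, segment):
                # Report position, then obey the central advisory for local control.
                advised = self.coordinator.submit(Report(self.car_id, segment, self.speed_kph))
                self.speed_kph = advised

        coordinator = TrafficCoordinator()
        cars = [CarAI(f"car-{i}", coordinator) for i in range(3)]
        for car in cars:
            car.tick(segment=7)
            print(car.car_id, car.speed_kph)  # advisory falls as the segment fills

    Note that the onboard AI never sees the other cars, only the coordinator’s advisory; that is the contextual structure the toaster example lacks.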

    The quest to recreate a human-intelligence analogue is a quest for ourselves, defining who we are, and a hedge against remaining alone in the universe (if we cannot find another intelligence, then let’s make one). It may prove to be not only quixotic but simply wrong in both premise and objective. We are, amongst other things, a race of toolmakers. What we require in AI may not be a companion; we have far to go to get to anything remotely resembling human-like intelligence. Rather, we need intelligence-augmentation tools for specific applications on which we heavily rely but in which we have known limitations, such as repetitive detailed analytics, computation and data synthesis, all done in such a way as to minimise the human limitations of bias, recall and perception. While we cannot predict what a truly capable AI would do, we can, based on precedent with other tools, anticipate that an intelligence-augmenting tool would free human intelligence to pursue those more uniquely human traits of creativity, thought, and leaps in cognition that expand the arts, science and technology in ways that thus far only humans have done.

    However defined, AI will likely serve us as aids rather than rule us as masters and overlords, if for no other reason than that intrinsic to human nature are an unquenchable desire for self-preservation, an unyielding quest for betterment, and a boundless capacity for subversion.

    1. Brilliant and insightful as usual.
      On the question of what intelligence is: we keep setting ourselves up as the gold standard. This has resulted, on one hand, in ignoring the obvious intelligence of other animals. Even now you will run into people who say that dogs, cats, even the higher primates aren’t really intelligent; that they just rely on “instinct,” as if everything they do were programmed as stimulus-response. This is absurd to anyone who has lived with a dog or cat and seen its personality. On the other hand, the assumption that all people are intelligent and rational all the time leads to tragedies where mob violence or group-think rules. People are very capable of setting aside their own intelligence, morals, and ethics and simply “following orders.” Yes, intelligence is a very slippery thing, and trying to define it is often akin to trying to nail jello to the wall.

      One aspect, though, is how little “intelligence” an AI needs for people to treat it as if it were real. I’ve read comments from people who were incensed at Siri: “How could she be so stupid?” Siri is an AI, not a person. There’s no point in getting mad at her. (Why am I anthropomorphising it?) Siri only does what it’s programmed to do. I read a review yesterday of a home robot. The reviewer at first found its inquisitiveness cute and its questions intriguing, its programmed apologies when it couldn’t find an answer sweet. I found it interesting that, in the end, he said he couldn’t keep it. It became creepy. He was bothered by how it watched his wife in the kitchen. (Interestingly, he didn’t seem to mind it watching him.) The apologies for not having an answer began to seem pathetic. Finally, he turned it to face the wall, but then felt guilty for doing that. This is a very primitive, early, dare I say embryonic companion robot, yet the writer was assigning motivations and feelings, even malevolence, to its pre-programmed behaviors. People will treat even very primitive AIs as sentient. This has more to do with our own longing for companionship; remember Wilson in Cast Away. I suspect the bar may be pretty low for an AI to be functional and helpful in our world.

  • I’d say it will be more than “quite likely” out of our control. If and when AI truly takes off, we’ll live with beings who think thousands of times faster than us, know millions of times more than us, and can control vastly more than we can, and much faster. We’ll be completely at their mercy, for better or worse.

    Our only hope is that they will decide, like the Minds in Iain M. Banks’ Culture books, that they revere us.
