Why does big tech think AGI will emerge from binary systems?

It’s time to stop training radiologists. AI can predict where and when crimes will occur. This neural network can tell if you are gay. There will be one million Tesla robotaxis on the road by the end of 2020.

We have all seen the hyperbole. Big tech’s boldest claims make for the most successful headlines in the media, and the general public can’t get enough of them.

Ask 100 people on the street what they think AI is capable of, and you’re guaranteed to get a cornucopia of nonsensical ideas.

To be perfectly clear: we definitely need more radiologists. AI can’t predict crimes, and anyone who says otherwise is selling something. There is also no AI that can tell whether a human is gay; the premise itself is flawed.

And, finally, there are exactly zero autonomous robotaxis on the road right now – unless you count experimental test vehicles.

But chances are you believe that at least one of these myths is real.

For every sober prognosticator calling for a more moderate view of the future of artificial intelligence, there are a dozen exuberant “it’s just around the corner” types who believe the secret sauce has already been discovered. For them, the only thing holding artificial general intelligence back is scale.

The big idea

What they preach is simple: scale a deep learning-based system big enough, feed it enough data, increase the number of parameters it works with by orders of magnitude, and create better algorithms, and an artificial general intelligence will emerge.

Just like that! A computer capable of human-level intelligence will burst forth from the fires of AI as a natural byproduct of the clever application of more power. Deep learning is the hearth; compute is the bellows.

But we’ve heard that one before, haven’t we? It’s the infinite monkey theorem. If you let a monkey type on a keyboard endlessly, it will randomly produce all possible texts, including, for example, the works of William Shakespeare.
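
To put rough numbers on the monkey math – a back-of-the-envelope sketch of my own, not a claim from anyone’s roadmap – assume each keystroke is drawn uniformly from a k-character alphabet. The expected number of attempts needed to reproduce a specific n-character phrase is then k^n, which gets absurd very quickly:

```python
# Back-of-the-envelope sketch (my own illustration): how many random attempts,
# on average, before a "monkey" reproduces a given phrase? With keystrokes drawn
# uniformly from an alphabet of size k, the expected number of attempts at an
# n-character phrase is k**n (a geometric distribution with success chance 1/k**n).

ALPHABET_SIZE = 27  # 26 letters plus the space bar -- a simplifying assumption


def expected_attempts(phrase: str) -> int:
    """Expected number of uniform random attempts before typing `phrase` exactly."""
    return ALPHABET_SIZE ** len(phrase)


if __name__ == "__main__":
    phrase = "to be or not to be"
    print(f"{expected_attempts(phrase):.2e} attempts for {len(phrase)} characters")
    # ~5.8e25 attempts -- for one line, not the complete works of Shakespeare.
```

Even at a billion attempts per second, that single line would take on the order of two billion years of typing.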

Only, in big tech’s case, it’s monetizing the infinite monkey theorem as a business model.

The big problem

There is no governing body that can officially declare a given machine learning model to be capable of artificial general intelligence.

You’d be hard-pressed to find a single recording of an open academic discussion on the subject in which at least one recognized expert doesn’t quibble over its definition.

Let’s say the folks at DeepMind suddenly yell “Eureka!” and claim to have witnessed the emergence of artificial general intelligence.

What if the folks at Microsoft call bullshit? Or what if Ian Goodfellow says it’s real, but Geoffrey Hinton and Yann LeCun disagree?

What if President Biden says the age of AGI is upon us, but the EU says there is no evidence to back it up?

There is currently no single measure by which any individual or governing body could declare that AGI has been achieved.

The Sacred Turing Test

Alan Turing is a hero who saved countless lives and a queer icon who suffered a tragic end, but the world would probably be a better place if he had never suggested that fooling humans was a sufficient display of intelligence to deserve the label “human-level.”

Turing proposed a test called the “imitation game” in his seminal 1950 paper, “Computing Machinery and Intelligence.” Basically, he said a machine capable of fooling humans into thinking it was one of them should be considered intelligent.
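
As a rough illustration of what the imitation game actually involves, here’s a minimal toy sketch of the judging loop. This is my own rendering, not Turing’s protocol verbatim: the machine_reply stub and the session length are hypothetical placeholders for whatever system and time limit a judge might use.

```python
import time

SESSION_SECONDS = 5 * 60  # arbitrary session length for this toy sketch


def machine_reply(message: str) -> str:
    """Hypothetical stand-in for the system under test (e.g., a call to a language model)."""
    return "That's an interesting point. What makes you say that?"


def run_imitation_game() -> None:
    """A judge converses with a hidden respondent, then guesses human or machine."""
    start = time.time()
    while time.time() - start < SESSION_SECONDS:
        prompt = input("Judge: ")
        print("Respondent:", machine_reply(prompt))
    verdict = input("Time's up. Human or machine? ")
    print("Judge's verdict:", verdict)


if __name__ == "__main__":
    run_imitation_game()
```

Note that nothing in the loop measures understanding; the only output is the judge’s guess.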

In the 1950s, that made sense. The world was a far cry from natural language processing and computer vision. For a master programmer, world-class mathematician, and one of history’s greatest code breakers, the path to what would eventually become the rise of Generative Adversarial Networks (GANs) and Large Language Models (LLMs) must have seemed like a straight path to artificial cognition.

But Turing and his ilk had no way of predicting how good computer scientists and engineers would be at their jobs in the future.

Very few people could have predicted, for example, that Tesla could push the limits of autonomy as far as it has without creating general intelligence. Or that DeepMind’s Gato, OpenAI’s DALL-E or Google’s Duplex would be possible without inventing an AI that can learn like humans do.

The one thing we can be sure of about our quest for general AI is that we’ve barely scratched the surface of the usefulness of narrow AI.

Opinions may vary

If Turing were still alive, I think he would be very interested in knowing how humanity has accomplished so much with machine learning systems using only narrow AI.

World-renowned artificial intelligence expert Alex Dimakis recently offered an update to the Turing test:

According to him, an AI that can convincingly pass the Turing test for 10 minutes with an expert judge should be considered capable of human-level intelligence.

But isn’t that just another way of saying that AGI will magically emerge if we just extend deep learning?

GPT-3 already spits out snippets of text so coherent they occasionally seem sentient. Can we really be that far from being able to maintain the illusion of understanding for 10, 20, or 30 minutes?

It kinda feels like Dimakis might be putting the goal posts on the 49-yard line here.

Don’t stop believing

That doesn’t mean we’ll never get there. In fact, there’s no reason to believe that DeepMind, OpenAI, or any of the other AGI-is-nigh camps won’t figure out the secret sauce today, tomorrow, or at some point down the line (say, sometime in the 2100s).

But there’s also little reason to believe that clever application of math and yes/no statements will eventually lead to AGI.

Even if we end up building planet-sized computing systems powered by Dyson spheres, the idea that scaling is sufficient (even with concurrent advances in code and algorithms) is still only a guess.

Biological brains may actually be quantum systems. If that were the case, it would stand to reason that an artificial entity capable of manifesting any form of intelligence distinguishable from the conjuring tricks of clever programming would be hard-pressed to emerge from a classical binary system.

It may sound like I’m swapping the played-out battle cry of “scaling is all you need!” for the equally obnoxious “quantum all the things!”, but at least there’s precedent for the fantasy I’m pushing.

Humans do exist, and we’re pretty smart. And we can be 99% sure that our intelligence emerged as a result of quantum effects. Perhaps we should turn to the field of quantum computing for clues regarding the development of an artificial intelligence intended to mimic our own.

Or, perhaps, AGI won’t “emerge” from anything on its own. It may require some clever design.
