Will Computers Catch Humans?

Famed inventor and Google futurist Ray Kurzweil predicts a singularity by around the year 2030: the idea that, thanks to Moore’s Law (an observation by Intel co-founder Gordon Moore that the number of transistors that can be packed into a given unit of space roughly doubles every two years), humans and machines will merge into one, indistinguishable being.

Ray Kurzweil

You may be wondering if Ray is somewhat like a modern-day Nostradamus, but that would be rather insulting to Ray.

Where Nostradamus made predictions so generic they could be attributed to just about anything, which people then correlated to very specific events and called hits, Kurzweil has predicted very specific things to occur in very specific time periods, and has a success rate of about 86%.

So much so that Google hired him as their futurist, to help guide their corporate endeavors in the direction Ray predicts the future is going.

Ray’s singularity prediction is rather interesting, because what he’s ultimately arguing is that, thanks to advances in memory technology, computers will match the human brain’s computing power in this time frame.

While I don’t profess to have the knowledge Ray has, one thing I would like to point out is that humans are not just a product of our memory; we are also a product of our intellect. Let’s look at how we’re different from computers, as an example.

Kim Peek – autistic savant; the man the movie Rain Man was based on.

Imagine a Microsoft Excel spreadsheet 1,000 rows by 1,000 columns in size. Your computer remembers it flawlessly, every single character, like a mechanical Rain Man.

But ask a human to perform the same feat, and nearly no one can. So the argument that computers haven’t caught humans yet is somewhat misleading.
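To make that contrast concrete, here’s a quick Python sketch (the grid values are arbitrary, generated just for the example) of the kind of flawless, mechanical recall a computer gets for free:

```python
# Toy sketch: a computer "remembering" a 1,000 x 1,000 grid of values flawlessly.
# The values themselves are arbitrary; the point is that recall is exact.
import random

rows, cols = 1000, 1000

random.seed(42)
sheet = [[random.randint(0, 9) for _ in range(cols)] for _ in range(rows)]

# Regenerate the same grid and compare: every one of the million cells matches,
# with no decay, no gist, no forgetting.
random.seed(42)
replay = [[random.randint(0, 9) for _ in range(cols)] for _ in range(rows)]

assert sheet == replay
print(f"{rows * cols:,} cells stored and recalled without a single error")
```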

The average human brain has about 100 billion neurons, and many more glial cells. If we think of neurons as computer bits (the smallest unit of computer memory, the thing that is actually a one or a zero), then we can extrapolate how much memory a computer must have to match the human brain.

Eight computer bits make a byte, and 1,024 bytes make a kilobyte (KB). This successive factor-of-1,024 pattern then progresses as follows: megabyte (MB), gigabyte (GB), terabyte (TB), petabyte (PB)…and the list goes on.

This means that the human brain has about 12.5 gigabytes of memory in the neurons alone. Add in the glial cells, and that number at least doubles, since there are more of them. Opinions vary wildly about the real storage capacity of the human brain, with estimates ranging from 1 to 1,000 terabytes, the latter of which seems awfully high to me based on the number of neurons.
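To make the back-of-the-envelope arithmetic explicit, here’s the same calculation in Python, treating each neuron as a single bit (the simplification used above, not established neuroscience):

```python
# Back-of-the-envelope: ~100 billion neurons, one bit each (a big simplification).
neurons = 100_000_000_000        # ~10^11 neurons
bits_per_byte = 8

total_bytes = neurons / bits_per_byte       # 12.5 billion bytes
gigabytes = total_bytes / 1_000_000_000     # decimal gigabytes

print(f"~{gigabytes:.1f} GB of 'memory' from neurons alone")   # -> ~12.5 GB
# Using 1,024-based units instead, the same figure works out to roughly 11.6 GiB;
# either way it's on the order of ten gigabytes, not terabytes.
```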

But the point I think is missed in Ray’s hypothesis is that where computer memory is virtually flawless, the human brain seems to have mastered what it should and shouldn’t forget in a rather advantageous way. Where a human can’t remember the aforementioned massive spreadsheet, the brain makes up for it with its ability to infer things not provided to it. This is the difference between memory (or knowledge) and computing power (intellect).

It’s this human ability to forget that actually makes us better at processing information. For instance, you might talk to a co-worker all day and entirely forget what color their shirt and pants are. Why? Because your brain has learned that this isn’t important information, and immediately dumps it into its recycle bin.

But if your co-worker misspells a word in an email, your brain doesn’t crash and end its comprehension of the data like a computer might. Instead, you quickly infer what was meant.
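As a rough analogy in code (the word list and typo are invented; difflib is part of Python’s standard library), an exact lookup simply fails on the typo, while fuzzy matching recovers what was meant:

```python
# Toy contrast: strict, computer-style lookup vs. a more forgiving "human" reading.
import difflib

vocabulary = ["schedule", "meeting", "tomorrow", "project"]

word = "tommorow"  # a co-worker's typo in an email

# An exact, literal lookup gets nothing from the typo:
print(word in vocabulary)                                 # False

# A fuzzy match still infers the intended word:
print(difflib.get_close_matches(word, vocabulary, n=1))   # ['tomorrow']
```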

The fact is that there are already computers with 1,000 terabytes, nearly a petabyte, of storage; they have the brain’s memory power. And one look at IBM’s Watson on Jeopardy! shows you that computers can already beat humans quite easily on knowledge alone.

IBM’s Watson

So how is it that a computer can beat Jeopardy!’s best competitors, yet still cannot replicate human behavior?

One point to remember is that computers are digital, whereas the human brain is analog.

For instance, think of today’s digital cameras, which capture a massive number of megapixels. We marvel at how much detail they can store, yet an analog camera from 50 years ago effectively stores more, because it isn’t storing the image digitally, as ones and zeros, but as one big picture on film. Effectively, each grain of film is one pixel, and that’s a significantly higher amount of data.

Blow up a digital picture, and eventually you will see it break down into its smallest constituents (pixels).

Example of a normal digital picture, when blown up, showing the individual pixels.

But blow up an analog picture and it never pixelates; you simply reach an area so small that you can no longer make out what it is.
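Here’s a minimal sketch of that pixelation effect, using a made-up 3×3 “image” of brightness values and simple nearest-neighbour repetition (no real image or camera involved):

```python
# Enlarging a digital image just repeats its discrete samples: each original
# pixel of this tiny 3x3 "image" becomes a visible 4x4 block of identical values.
tiny = [
    [0, 5, 0],
    [5, 9, 5],
    [0, 5, 0],
]

factor = 4
blown_up = [
    [tiny[r // factor][c // factor] for c in range(len(tiny[0]) * factor)]
    for r in range(len(tiny) * factor)
]

for row in blown_up:
    print("".join(str(v) for v in row))
```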

It’s this difference between analog and digital that makes Ray’s prediction so uncertain for me. While he may be right, as long as computers rely on digital memory, I’m not convinced they’ll ever be on the level of humans. Instead, machines and natural life will always remain somewhat separate.

A complete overhaul of the way computers store and process information will be needed, not just a Moore’s Law doubling of memory in the digital realm.

But it is also worth noting that Moore’s Law is inappropriately named. It is not in fact a scientific law, nor even a scientific theory; it is simply a pattern Moore observed, one that has repeated over the last 50 years but is not by any stretch going to continue for eternity.

As the journal Nature reports, after fifty years it may indeed be starting to break down. Unlike actual scientific laws, such as gravity or Newton’s laws of motion, Moore’s “Law” almost invariably must fail at some point, once a transistor has been shrunk as small as it can go.

Speaking theoretically, a transistor has two states, on and off. If it were shrunk down to a single atom, with one or two electrons depending on whether it’s “on” or “off,” making it any smaller would likely prove impossible, and in that moment Moore’s Law is no more.
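As a rough illustration of where that limit might lie, here’s a quick calculation built on toy assumptions (a ~5 nm feature size today, a silicon atom roughly 0.2 nm across, and a halving of feature size every two years, which is looser than Moore’s actual transistor-count formulation):

```python
# How many more halvings of feature size before a "transistor" is one atom wide?
# Toy assumptions only; modern process-node names are marketing labels anyway.
import math

feature_nm = 5.0          # assumed current feature size
atom_nm = 0.2             # rough diameter of a silicon atom
years_per_halving = 2.0   # loose reading of the Moore's Law cadence

halvings = math.log2(feature_nm / atom_nm)
print(f"~{halvings:.1f} halvings left, roughly {halvings * years_per_halving:.0f} years, "
      "before the one-atom wall")
# -> about 4-5 halvings, i.e. on the order of a decade, under these assumptions
```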

Do I believe Kurzweil is crazy? Heck no, the man’s a genius. Do I believe he’s wrong? Not necessarily. More than anything, I would love to ask him about the things I pointed out, and just have an amazing discussion with an amazing man.

Instead, what I’m offering is that you should always be skeptical and question everything, whether it comes from someone you respect and consider more brilliant than you, or from someone you suspect is more likely to be wrong than you. It’s how you learn, and occasionally, it’s how they learn as well. Even the smartest of people can over-analyze something and miss a simple key aspect a lesser mind might have caught.

Drop some genius on me here.
