Tuesday, January 31, 2017

The machines are here

Every time the machines beat humans at something new, there is a big kerfuffle.  It was a huge deal when Deep Blue beat Kasparov at chess years ago, and last year Go was finally dominated by a computer.  Now it is poker's turn.

There are many kinds of poker, and various kinds have been beaten at various times, but no-limit heads-up Hold 'Em was taken down just now, and to me that feels like the proper benchmark for computer dominance.

I recall that when chess finally fell, many people said Go was simply too big, that machines would never beat humans at it.  They laughed at the idea of machines defeating the best poker players, because poker is so much about reading human expression and mood.  They were wrong.  The machines will beat us at everything.

And I do mean everything.  Not just mathematical games like Go, but things that are far more difficult to quantify, like reading facial expressions and predicting behaviour.  They are even going to beat us at sex.  They aren't there now, not by a long shot, but eventually the robots will come for sex just the way cars came for carrying goods, computers came for chess mastery, and factories came for making shaped pieces of iron.  It is just a matter of time.

I am reading the Ancillary series of books by Ann Leckie, science fiction set in the distant future.  I recommend the series both as science fiction and as commentary on gender and sex, though the actual story and characters were never that compelling to me.  In it there are extremely powerful AIs that watch people all the time and learn their patterns through speech, facial expressions, and more.  They know when people are upset before the people themselves know.  Given enough time this is going to be a reality, because every time we have set ourselves the challenge of building a tool that is better at something than a raw human, we have succeeded or are on the way to succeeding.  Reading and predicting human behaviour will be no different.

The idea of AIs watching me through myriad cameras and knowing what I think before I do isn't frightening to me.  Maybe it should be, but I honestly just shrug at the thought.  The machines already do pretty much everything better than me, and my life is shaped by the rest of the world already, so somehow it doesn't feel like it will be all that different.  Would machines watching me, trying to predict me, really be that different from humans trying to do the same thing?  The only difference I see is one of efficiency.


  1. Machines are here but for now they are a *super* long way off from being able to beat humans for real. Deep Blue beat Kasparov. Do you know who Deep Blue didn't beat? A team of hackers who had six months to toy with different strategies to mess with Deep Blue's logic.

    In a recent post on superheroes you mentioned facial recognition software. It turns out you can make a pair of glasses that makes facial recognition algorithms think you are whoever you want them to think you are. Researchers printed a pair of glasses that convinced recognition systems that a 24-year-old South Asian female researcher was Colin Powell, and that a man was Milla Jovovich. The glasses cost approximately $0.22 to produce.

    That "efficiency" thing is a big problem, because the "inefficiencies" that we use are things like double checking that someone isn't Milla Jovovich by looking at their beard instead of just at a narrow band of colours around their eyes.

    Computers are hackable, and they'll have to become far more intelligent to stop being hackable. Part of that will be picking up a lot of inefficiencies along the way.
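The attack on a narrow-band recogniser can be sketched with a toy model. Everything below is made up purely to show the shape of the trick: a linear "recogniser" scoring a handful of numbers standing in for the colours around the eyes, and a "glasses" perturbation pushed along the model's own weights, which for a linear model is exactly the adversarial direction the printed glasses exploit.

```python
import numpy as np

# Toy "recogniser": a linear score over a narrow band of features (ten
# numbers standing in for eye-region colours). Hypothetical throughout;
# no real system works on exactly ten numbers.
rng = np.random.default_rng(0)
w = rng.normal(size=10)            # the model's learned weights

def is_target(eye_features):
    # The model claims "this is the target person" when the score > 0.
    return float(w @ eye_features) > 0

face = -w                          # a face the model firmly rejects
assert not is_target(face)

# "Glasses": a perturbation of the eye region only, aligned with the
# model's weights. Nothing outside the narrow band (the beard, say)
# is ever consulted, so nothing else needs to change.
glasses = 2 * w
assert is_target(face + glasses)   # with the glasses on, it is fooled
```

The point of the sketch is the one made above: the model's "efficiency" is its undoing, because an attacker who knows the narrow band it looks at can rewrite just that band.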

  2. Edit (since I can't edit): I'm not saying hackers *did* beat Deep Blue. I'm saying they could, easily. Among the myriad openings available in chess, I'm completely confident that some particular bizarre set of moves would send Deep Blue spiralling down a very exploitable path.

    1. I don't know about that. Deep Blue, when it first beat Kasparov, was highly exploitable, since it was tuned specifically against Kasparov rather than against other players. A similar machine today would be many times more powerful. Is it possible that some combination of moves would fool it? Maybe, but that is true of every human player too, and the computer is vastly more difficult to fool.

      You are right though that facial recognition isn't perfect, by any means. But that just means it isn't perfect today. The computers are going to get better at it until eventually they are far better than we are. It will be possible to fool them, just like we can fool each other, but I am not predicting perfection, simply superiority.

    2. I was really just trying to point out that computers aren't already better than us at a wide variety of things. Chess and Go are *easy* problems.

      The facial recognition thing is a great example of where we are with computing. What people do is take a massive amount of information and cram it through some heuristic that's not bad at recognizing faces. What computers do is only pay attention to the most relevant information. That gives them a better result. But it also allows us to fool them very easily, because we understand their process.

      If you built a computer lie detector, it wouldn't be hard to do better than humans (who are very bad at detecting lies), but while people are right maybe 60% of the time for any given lie (I'm just making this number up), the computer will be near 100% for a broad class of lies, or a broad class of liars, and then near 0% for another broad class.

      If human systems worked that way we'd all have died out a long time ago. We need our immune system to stop every virus 98% of the time, not stop 98% of viruses every time.

      So the leap from where we are to where we would need to be before we could say computers are better than us isn't just a vertical climb of adding more processing power; it's an entirely different way of structuring things. Our whole approach to machine learning produces deeply flawed processes that are better on average for the set of data they learned with, but which are vulnerable to epidemic failure.

      I'm not convinced machines will become better than us within our lifetimes. I'm worried about machines watching us and trying to predict us because people overestimate how much they can rely on the machines, not because the machines are good at it.
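The lie-detector arithmetic above can be made concrete with a tiny sketch. The numbers are as made-up as the 60% figure in the comment: a population of lies split into two broad classes, a human who is right 60% of the time on any lie, and a machine that is near-perfect on one class and useless on the other.

```python
# Hypothetical numbers, following the made-up 60% figure above:
# 1000 lies split into two broad classes of liars.
lies_class_a = 900   # lies the machine's narrow heuristic catches
lies_class_b = 100   # lies that slip past that same heuristic
total = lies_class_a + lies_class_b

# Human: right about 60% of the time on any lie, regardless of class.
human_accuracy = 0.60

# Machine: near 100% on class A, near 0% on class B.
machine_accuracy = (1.00 * lies_class_a + 0.00 * lies_class_b) / total

print(f"human:   {human_accuracy:.0%}")    # 60%
print(f"machine: {machine_accuracy:.0%}")  # 90% on average
# The machine looks better on average, yet for class B it fails every
# single time -- the "epidemic failure" mode described above, the
# opposite of an immune system that stops every virus 98% of the time.
```

The design point is that a single averaged accuracy number hides exactly the failure mode the comment warns about: being better on average is compatible with being catastrophically wrong for an entire class.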