Night has fallen by the time you get off work. Machine learning helps keep you safe as you walk to your car, monitoring the video feed from the surveillance camera in the parking lot and alerting off-site security staff if it detects suspicious activity. On your way home, you stop at the supermarket, where you walk down aisles that were laid out with the help of learning algorithms: which goods to stock, which end-of-aisle displays to set up, whether to put the salsa in the sauce section or next to the tortilla chips. You pay with a credit card. A learning algorithm decided to send you the offer for that card and approved your application. Another one continually looks for suspicious transactions and alerts you if it thinks your card number was stolen. A third one tries to estimate how happy you are with this card. If you’re a good customer but seem dissatisfied, you get a sweetened offer before you switch to another one.

An algorithm is not just any set of instructions: the instructions have to be precise and unambiguous enough to be executed by a computer. For example, a cooking recipe is not an algorithm because it doesn’t exactly specify what order to do things in or exactly what each step is. Exactly how much sugar is a spoonful? As everyone who’s ever tried a new recipe knows, following it may result in something delicious or a mess. In contrast, an algorithm always produces the same result. Even if a recipe specifies precisely half an ounce of sugar, we’re still not out of the woods, because the computer doesn’t know what sugar is, or an ounce. If we wanted to program a kitchen robot to make a cake, we would have to tell it how to recognize sugar from video, how to pick up a spoon, and so on. (We’re still working on that.) The computer has to know how to execute the algorithm all the way down to turning specific transistors on and off. So a cooking recipe is very far from an algorithm.

One if by land, two if by Internet.
Machine learning also has a growing role on the battlefield. Learners can help dissipate the fog of war, sifting through reconnaissance imagery, processing after-action reports, and piecing together a picture of the situation for the commander. Learning powers the brains of military robots, helping them keep their bearings, adapt to the terrain, distinguish enemy vehicles from civilian ones, and home in on their targets. DARPA’s AlphaDog carries soldiers’ gear for them. Drones can fly autonomously with the help of learning algorithms; although they are still partly controlled by human pilots, the trend is for one pilot to oversee larger and larger swarms. In the army of the future, learners will greatly outnumber soldiers, saving countless lives.

P and NP are the two most important classes of problems in computer science. (The names are not very mnemonic, unfortunately.) A problem is in P if we can solve it efficiently, and it’s in NP if we can efficiently check its solution. The famous P = NP question is whether every efficiently checkable problem is also efficiently solvable. Because of NP-completeness, all it takes to answer it is to prove that one NP-complete problem is efficiently solvable (or not). NP is not the hardest class of problems in computer science, but it’s arguably the hardest “realistic” class: if you can’t even check a problem’s solution before the universe ends, what’s the point of trying to solve it? Humans are good at solving NP problems approximately, and conversely, problems that we find interesting (like Tetris) often have an “NP-ness” about them. One definition of artificial intelligence is that it consists of finding heuristic solutions to NP-complete problems. Often, we do this by reducing them to satisfiability, the canonical NP-complete problem: Can a given logical formula ever be true, or is it self-contradictory?
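The asymmetry behind that question can be sketched in a few lines of code. In this toy illustration (the three-variable formula and its encoding are made up for the example), checking whether a given assignment makes a formula true takes time linear in the formula, while finding a satisfying assignment may, in the worst case, mean trying every combination:

```python
from itertools import product

# A formula in conjunctive normal form: a list of clauses, each clause a
# list of literals. A positive integer means a variable, a negative one
# its negation. This example encodes (hypothetically):
# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]

def satisfies(assignment, formula):
    # The NP part: verifying a candidate solution is fast.
    # A formula is true if every clause has at least one true literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, n_vars):
    # The hard part: finding a solution may take 2^n tries.
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(assignment, formula):
            return assignment
    return None  # self-contradictory: no assignment makes it true

print(brute_force_sat(formula, 3))  # → {1: False, 2: False, 3: True}
```

Real SAT solvers are far cleverer than this exhaustive search, but the gap between the quick `satisfies` check and the exponential search loop is exactly the gap between checking a solution and finding one.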
If we invent a learner that can learn to solve satisfiability, it has a good claim to being the Master Algorithm.

[Image: pic_10.jpg]

Since perceptrons can only learn linear boundaries, they can’t learn XOR. And if they can’t do even that, they’re not a very good model of how the brain learns, or a viable candidate for the Master Algorithm.

We can measure a program’s fitness (or lack thereof) by the distance between its output and the correct one on the training data. For example, if the program says an Earth year is three hundred days, that would subtract sixty-five points from its fitness. Starting with a population of random program trees, genetic programming uses crossover, mutation, and survival to gradually evolve better programs until it’s satisfied.

Bayes’ theorem is useful because what we usually know is the probability of the effects given the causes, but what we want to know is the probability of the causes given the effects. For example, we know what percentage of flu patients have a fever, but what we really want to know is how likely a patient with a fever is to have the flu. Bayes’ theorem lets us go from one to the other. Its significance extends far beyond that, however. For Bayesians, this innocent-looking formula is the F = ma of machine learning, the foundation from which a vast number of results and applications flow. And whatever the Master Algorithm is, it must be “just” a computational implementation of Bayes’ theorem. I put “just” in quotes because implementing Bayes’ theorem on a computer turns out to be fiendishly hard for all but the simplest problems, for reasons that we’re about to see.

[Image: pic_17.jpg]

Siri aside, you use an HMM every time you talk on your cell phone. That’s because your words get sent over the air as a stream of bits, and the bits get corrupted in transit.
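A toy sketch of that recovery, with all the probabilities invented for illustration: treat each intended bit as a hidden state, each received bit as an observation that was flipped in transit with probability 0.2, assume the intended bit repeats with probability 0.7, and run Viterbi decoding to find the most likely intended sequence.

```python
import math

P_FLIP = 0.2   # assumed chance a bit is corrupted in transit
P_STAY = 0.7   # assumed chance the next intended bit repeats the last

def viterbi(received):
    """Most likely sequence of hidden bits given the received bits."""
    # best[s]: log-probability of the best path ending in hidden bit s;
    # paths[s]: that path itself. Start from a uniform prior over bits.
    best = {0: math.log(0.5), 1: math.log(0.5)}
    paths = {0: [0], 1: [1]}
    for t, obs in enumerate(received):
        # Emission: the observed bit matches the hidden one unless flipped.
        emit = {s: math.log(1 - P_FLIP if s == obs else P_FLIP)
                for s in (0, 1)}
        if t == 0:
            best = {s: best[s] + emit[s] for s in (0, 1)}
            continue
        new_best, new_paths = {}, {}
        for s in (0, 1):
            trans = {p: math.log(P_STAY if p == s else 1 - P_STAY)
                     for p in (0, 1)}
            prev = max((0, 1), key=lambda p: best[p] + trans[p])
            new_best[s] = best[prev] + trans[prev] + emit[s]
            new_paths[s] = paths[prev] + [s]
        best, paths = new_best, new_paths
    return paths[max((0, 1), key=best.get)]

# A lone corrupted bit in a run of zeros gets smoothed back out:
print(viterbi([0, 0, 1, 0, 0]))  # → [0, 0, 0, 0, 0]
```

With these made-up numbers, the decoder corrects the stray 1 because one transmission error is a cheaper explanation than the intended bit changing twice, which is the sense in which decoding succeeds "as long as not too many bits got mangled."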
The HMM then figures out the intended bits (hidden state) from the ones received (observations), which it should be able to do as long as not too many bits got mangled.

Nearest-neighbor is the simplest and fastest learning algorithm ever invented. In fact, you could even say it’s the fastest algorithm of any kind that could ever be invented. It consists of doing exactly nothing, and therefore takes zero time to run. Can’t beat that. If you want to learn to recognize faces and have a vast database of images labeled face/not face, just let it sit there. Don’t worry, be happy. Without knowing it, those images already implicitly form a model of what a face is. Suppose you’re Facebook and you want to automatically identify faces in photos people upload as a prelude to tagging them with their friends’ names. It’s nice to not have to do anything, given that Facebook users upload upward of three hundred million photos per day. Applying any of the learners we’ve seen so far to them, with the possible exception of Naïve Bayes, would take a truckload of computers. And Naïve Bayes is not smart enough to recognize faces.

War is not for humans. The next step in the insidious progression of AI control is letting machines make all the decisions because they’re, well, so much smarter. Beware. They may be smarter, but they’re in the service of whoever designed their score functions. This is the “Wizard of Oz” problem. Your job in a world of intelligent machines is to keep making sure they do what you want, both at the input (setting the goals) and at the output (checking that you got what you asked for). If you don’t, somebody else will. Machines can help us figure out collectively what we want, but if you don’t participate, you lose out, just like democracy, only more so. Contrary to what we like to believe today, humans quite easily fall into obeying others, and any sufficiently advanced AI is indistinguishable from God.
People won’t necessarily mind taking their marching orders from some vast oracular computer; the question is who oversees the overseer. Is AI the road to a more perfect democracy or to a more insidious dictatorship? The eternal vigil has just begun.

Chapter Ten.