Enter the learner. In politics, as in business and war, there is nothing worse than seeing your opponent make moves that you don't understand and don't know what to do about until it's too late. That's what happened to the Romney campaign. They could see the other side buying ads in particular cable stations in particular towns but couldn't tell why; their crystal ball was too fuzzy. In the end, Obama won every battleground state save North Carolina and by larger margins than even the most accurate pollsters had predicted. The most accurate pollsters, in turn, were the ones (like Nate Silver) who used the most sophisticated prediction techniques; they were less accurate than the Obama campaign because they had fewer resources. But they were a lot more accurate than the traditional pundits, whose predictions were based on their expertise.

The National Security Agency (NSA) has become infamous for its bottomless appetite for data: by one estimate, every day it intercepts over a billion phone calls and other communications around the globe. Privacy issues aside, however, it doesn't have millions of staffers to eavesdrop on all these calls and e-mails or even just keep track of who's talking to whom. The vast majority of calls are perfectly innocent, and writing a program to pick out the few suspicious ones is very hard. In the old days, the NSA used keyword matching, but that's easy to get around. (Just call the bombing a "wedding" and the bomb the "wedding cake.") In the twenty-first century, it's a job for machine learning. Secrecy is the NSA's trademark, but its director has testified to Congress that mining of phone logs has already halted dozens of terrorism threats.

Clearly, there's no single factor that correctly predicts the answer: on some weekends she likes to go out, and on some she doesn't; sometimes she likes to go clubbing, and sometimes she doesn't, and so on. What about a combination of factors? Maybe she likes to go clubbing on weekends? No, occasion number 4 crosses that one out. Or maybe she only likes to go out on warm weekend nights? Bingo! That works! In which case, looking at the frosty weather outside, tonight doesn't look promising. But wait! What if she likes to go clubbing when there's nothing good on TV? That also works, and that means today is a yes! Quick, call her before it gets too late. But wait a second. How do you know this is the right pattern? You've found two that agree with your previous experience, but they make opposite predictions (the first sketch at the end of this section makes the predicament concrete). Come to think of it, what if she only goes clubbing when the weather is nice? Or she goes out on weekends when there's nothing to watch on TV? Or…

One of Holland's more remarkable students was John Koza. In 1987, while flying back to California from a conference in Italy, he had a lightbulb moment. Instead of evolving comparatively simple things like If… then… rules and gas pipeline controllers, why not evolve full-blown computer programs? And if that's the goal, why stick with bit strings as the representation? A program is really a tree of subroutine calls, so better to directly cross over those subtrees than to shoehorn them into bit strings and run the risk of destroying perfectly good subroutines when you cross them over at a random point (the second sketch at the end of this section shows such a subtree swap).

As in the nature versus nurture debate, neither side has the whole answer; the key is figuring out how to combine the two.
The Master Algorithm is neither genetic programming nor backprop, but it has to include the key elements of both: structure learning and weight learning. In the conventional view, nature does its part first, evolving a brain, and then nurture takes it from there, filling the brain with information. We can easily reproduce this in learning algorithms. First, learn the structure of the network, using (for example) hill climbing to decide which neurons connect to which: try adding each possible new connection to the network, keep the one that most improves performance, and repeat. Then learn the connection weights using backprop, and your brand-new brain is ready to use (the third sketch at the end of this section runs this recipe in miniature).

And so we have traveled through the territories of the five tribes, gathering their insights, negotiating the border crossings, wondering how the pieces might fit together. We know immensely more now than when we started out. But something is still missing. There's a gaping hole in the center of the puzzle, making it hard to see the pattern. The problem is that all the learners we've seen so far need a teacher to tell them the right answer. They can't learn to distinguish tumor cells from healthy ones unless someone labels them "tumor" or "healthy." But humans can learn without a teacher; they do it from the day they're born. Like Frodo at the gates of Mordor, our long journey will have been in vain if we don't find a way around this barrier. But there is a path past the ramparts and the guards, and the prize is near. Follow me…

All this power comes at a cost, however. In an ordinary classifier, such as a decision tree or a perceptron, inferring an entity's class from its attributes is a matter of a few lookups and a bit of arithmetic. In a network, each node's class depends indirectly on all the others', and we can't infer it in isolation. We can resort to the same kinds of inference techniques we used for Bayesian networks, like loopy belief propagation or MCMC, but the scale is different. A typical Bayesian network has perhaps thousands of variables, but a typical social network has millions of nodes or more. Luckily, because the model of the network consists of many repetitions of the same features with the same weights, we can often condense the network into "supernodes," each consisting of many nodes that we know will have the same probabilities, and solve a much smaller problem with the same result (the last sketch at the end of this section condenses a toy network this way).

CanceRx spends most of its time querying the model with candidate drugs. Given a new drug, the model predicts its effect on both cancer cells and normal ones. When Alice is diagnosed with cancer, CanceRx instantiates its model with both her normal cells and the tumor's and tries all available drugs until it finds one that kills the cancer cells without harming the healthy ones. If it can't find a drug or combination of drugs that works, it sets about designing one that will, perhaps evolving it from existing ones using hill climbing or crossover. At each step in the search, it tries the candidate drugs on the model. If a drug stops the cancer but still has some harmful side effect, CanceRx tries to tweak it to get rid of the side effect. When Alice's cancer mutates, it repeats the whole process. Even before the cancer mutates, the model predicts likely mutations, and CanceRx prescribes drugs that will stop them dead in their tracks. In the game of chess between humanity and cancer, CanceRx is checkmate.
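Here, as promised, are a few sketches that make the earlier ideas concrete. First, the friend's clubbing predicament in miniature. The four past occasions and the attribute names below are invented stand-ins (the book describes the occasions but gives no table); the point is only that several conjunctive rules can fit the same history and still disagree about tonight.

    from itertools import product

    ATTRS = ["weekend", "warm", "good_tv"]

    # Invented stand-ins for the four past occasions: attribute values
    # plus whether she went out. "good_tv" means something good was on TV.
    past = [
        ((True,  True,  False), True),
        ((False, True,  True),  False),
        ((True,  False, True),  False),
        ((False, False, True),  False),
    ]
    tonight = (True, False, False)  # a frosty weekend night, nothing on TV

    def matches(rule, x):
        # A rule requires each attribute to be True, False, or None (ignored).
        return all(r is None or r == v for r, v in zip(rule, x))

    # Try all 3^3 = 27 conjunctive rules; keep those consistent with the past.
    consistent = [r for r in product([True, False, None], repeat=3)
                  if all(matches(r, x) == went_out for x, went_out in past)]

    for rule in consistent:
        desc = " AND ".join(f"{a}={v}" for a, v in zip(ATTRS, rule)
                            if v is not None) or "always"
        print(f"go out if {desc}  ->  tonight: {matches(rule, tonight)}")

Run it and the surviving rules split between yes and no for tonight; the data alone cannot choose between them.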
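Second, the subtree crossover Koza proposed, sketched in a few lines. Here a "program" is just an arithmetic expression encoded as nested tuples; the encoding and the random choices are my own illustration, not Koza's actual system.

    import random

    # A leaf is a variable or number; an internal node is (operator, child, child).

    def subtrees(tree, path=()):
        # Yield (path, subtree) for every node, so we can pick one at random.
        yield path, tree
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def graft(tree, path, new):
        # Return a copy of tree with the subtree at path replaced by new.
        if not path:
            return new
        i = path[0]
        return tree[:i] + (graft(tree[i], path[1:], new),) + tree[i + 1:]

    def crossover(mom, dad, rng=random):
        # Pick a random subtree in each parent and swap them whole,
        # instead of cutting bit strings at an arbitrary point.
        mpath, msub = rng.choice(list(subtrees(mom)))
        dpath, dsub = rng.choice(list(subtrees(dad)))
        return graft(mom, mpath, dsub), graft(dad, dpath, msub)

    random.seed(0)
    mom = ("+", ("*", "x", "x"), 3)             # x*x + 3
    dad = ("-", ("+", "x", 1), ("*", 2, "x"))   # (x+1) - 2*x
    print(crossover(mom, dad))

Because whole subtrees move as units, a useful subroutine survives crossover intact instead of being sliced in half.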
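Third, the structure-then-weights recipe in miniature, using numpy. The task (XOR of two inputs plus two noise inputs), the network size, and the scoring are all invented for illustration; real structure learning would score candidates on held-out data and use a smarter search.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented toy task: y is the XOR of the first two inputs; the other
    # two inputs are noise, so a good structure learns to ignore them.
    X = rng.integers(0, 2, size=(200, 4)).astype(float)
    y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)
    H = 4  # hidden units

    def train(mask, epochs=500, lr=0.5, seed=1):
        # Backprop on a one-hidden-layer net, using only the input->hidden
        # connections where mask == 1. Returns the final training loss.
        r = np.random.default_rng(seed)
        W1 = r.normal(size=(4, H)) * mask
        b1 = np.zeros(H)
        W2 = r.normal(size=(H, 1))
        b2 = np.zeros(1)
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                  # forward pass
            p = 1 / (1 + np.exp(-(h @ W2 + b2)))
            g = (p - y) / len(X)                      # d(loss)/d(output logit)
            gW2, gb2 = h.T @ g, g.sum(0)              # backward pass
            gh = (g @ W2.T) * (1 - h ** 2)
            gW1, gb1 = (X.T @ gh) * mask, gh.sum(0)   # absent connections stay absent
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2
        return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Structure learning by hill climbing: try each possible new connection,
    # keep the one that most improves performance, and repeat until none helps.
    mask = np.zeros((4, H))
    best = train(mask)
    while True:
        tries = []
        for i in range(4):
            for j in range(H):
                if mask[i, j] == 0:
                    m = mask.copy(); m[i, j] = 1
                    tries.append((train(m), i, j))
        if not tries:
            break
        loss, i, j = min(tries)
        if loss >= best:
            break
        mask[i, j], best = 1, loss
        print(f"kept connection input {i} -> hidden {j}, loss now {best:.3f}")

    print(f"final structure: {int(mask.sum())} connections, training loss {best:.3f}")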
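Last, a toy version of the condensation trick. The network below (two identical rings of six people, with "smoker" as the local evidence) is invented; the grouping step, repeatedly merging nodes whose own evidence and neighbors' colors match, is the compression idea behind lifted inference, not a full inference engine.

    from collections import Counter, defaultdict

    # Invented toy network: two identical rings of six people.
    n = 12
    adj = {v: set() for v in range(n)}
    for v in range(n):
        w = (v + 1) % 6 + 6 * (v // 6)
        adj[v].add(w); adj[w].add(v)
    smoker = {v: v % 3 == 0 for v in range(n)}  # same evidence pattern in both rings

    # Color refinement: start from the local evidence, then repeatedly fold in
    # the multiset of neighbor colors. When the coloring stabilizes, nodes that
    # share a color are symmetric (same features, same weights, same kind of
    # neighborhood), so they must get the same probabilities: one supernode.
    color = {v: int(smoker[v]) for v in range(n)}
    for _ in range(n):
        sig = {v: (color[v],
                   tuple(sorted(Counter(color[u] for u in adj[v]).items())))
               for v in range(n)}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in range(n)}
        if new == color:
            break
        color = new

    supernodes = defaultdict(list)
    for v, c in color.items():
        supernodes[c].append(v)
    print(f"{n} nodes condensed into {len(supernodes)} supernodes:")
    for c in sorted(supernodes):
        print(f"  supernode {c}: {sorted(supernodes[c])}")

Instead of running loopy belief propagation over twelve nodes, you would run it over two supernodes and copy the answers back, and the saving grows with the number of repetitions.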
Robot armies may indeed make wars more likely, but they will also change the ethics of war. Shoot/don't shoot dilemmas become much easier if the targets are other robots. The modern view of war as an unspeakable horror, to be engaged in only as a last resort, will give way to a more nuanced view of war as an orgy of destruction that leaves all sides impoverished and is best avoided but not at all costs. And if war is reduced to a competition to see who can destroy the most, then why not compete instead to create the most?

One Algorithm to rule them all,
One Algorithm to find them,

I'm lucky to work in a very special place, the University of Washington's Department of Computer Science and Engineering. I'm also grateful to Josh Tenenbaum, and to everyone in his group, for hosting the sabbatical at MIT during which I started this book. Thanks to Jim Levine, my indefatigable agent, for drinking the Kool-Aid (as he put it) and spreading the word; and to everyone at Levine Greenberg Rostan. Thanks to TJ Kelleher, my amazing editor, for helping make this a better book, chapter by chapter, line by line; and to everyone at Basic Books.

Chapter One

"Evolutionary robotics," by Josh Bongard (Communications of the ACM, 2013), surveys the work of Hod Lipson and others on evolving robots. Artificial Life, by Steven Levy (Vintage, 1993), gives a tour of the digital zoo, from computer-created animals in virtual worlds to genetic algorithms. Chapter 5 of Complexity, by Mitch Waldrop (Touchstone, 1992), tells the story of John Holland and the first few decades of research on genetic algorithms. Genetic Algorithms in Search, Optimization, and Machine Learning,* by David Goldberg (Addison-Wesley, 1989), is the standard introduction to genetic algorithms.

Chapter Seven