
How to break the Internet, destroy democracy and enslave the human race (or not)

From The Cory Doctorow Wiki

Quote

Content

Download

Haven't found a place to download this. Please add it if you do.

Metadata

Published

March 17, 2014

Website

Verbraucherzentrale Bundesverband's YouTube

About

There are only two sides to the fight over technology's destiny: either you believe that you should never, ever design computers to betray their owners, or you believe that some problems are important enough to build "I Can't Let You Do That Dave" into our gadgets. If you're on the "I Can't Let You Do That Dave" side, you're dangerously wrong.

Keynote by Cory Doctorow, science fiction author and journalist, held at World Consumer Rights Day 2014 in Berlin. Filmed and produced on behalf of the Federation of German Consumer Organizations (vzbv).

License

Creative Commons Attribution 3.0

Summary

Transcript

[01:11] As I came into this event today someone asked me what the connection was between my work and consumer rights, and it struck me that this connection may not be obvious on its face. My history is working in civil liberties and traditional freedoms and their importation into the digital world. This has increasingly become a matter of product design and of product regulation: things like privacy are inherent to the design of our products. Products that are designed to leak your privacy have both a civil liberties dimension and a consumer rights dimension. Products that are designed, by intention, not to allow you to secure them have both a consumer rights dimension and a civil liberties dimension. Now, things are very bad today in the world of civil liberties and consumer rights. They're very bad in the world of product design. And this talk is really a quick history of how we got to this very bad place: this place where a single wiretap on a fiber-optic line can compromise the privacy of everyone in Europe; where the NSA can contemplate surveillance of millions of people at once through infections of their computers carried out on an automated basis; where single companies can – through their compromise – harm hundreds of millions of people through the disclosure of their financial information; and so on. But it's also a talk about how we can make it better.

[02:48] I believe that the story starts with the transition of policing into the digital world. When the police investigate crimes, they often want to listen in on a suspect's telephone, and for understandable reasons. Now, before digital phone switches came along, this was a very difficult undertaking. You would have to get a court order, and then you would have to bring that court order to the phone company and ask them to attach a special piece of apparatus to a switch that would allow the police to listen in on the phone conversation of one person at the phone company. After digital switches, things changed radically. All it took to reroute a phone call from anywhere to anywhere else was a few keystrokes at a keyboard. Now that sounds like good news for the police, but the bad news is that digital telephones ushered in the era of digital encryption. Digital encryption is an amazing thing, and a difficult one to get your head around. We sometimes see techno-thrillers in which someone has encoded a message and the good guys need to decode it, so they bring in the codebreakers, and the codebreakers get some really big computers. They do everything they can, and after some suitably dramatic interval the message is decoded. But that's not how cryptography works. There is something amazing underway in the world of mathematics. We have discovered, over the last fifty or so years, that it seems the universe wants us to have secrets. If we scramble a message using average technology in a realistic amount of time, we can make it so hard to descramble that our adversaries could convert every atom of hydrogen extant into a computer that did nothing until the heat death of the universe but labor to crack our code, and they would never, ever be able to crack it. There is something amazing at work in the universe: we can keep secrets from any adversary, provided that our technology does what it's supposed to do.
And this was very bad news for the police.
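As a rough back-of-the-envelope illustration of the claim above (the figures here are round-number assumptions for the sake of scale, not numbers from the talk), even an absurdly powerful brute-force attacker cannot search a modern key space:

```python
# Rough illustration: why brute-forcing modern encryption is hopeless.
# All figures below are round-number assumptions, chosen only for scale.

KEY_BITS = 128                  # a common symmetric key size (e.g. AES-128)
TOTAL_KEYS = 2 ** KEY_BITS      # number of possible keys

GUESSES_PER_SECOND = 10 ** 18   # a wildly generous, planet-scale attacker
SECONDS_PER_YEAR = 3.15 * 10 ** 7

# On average, a brute-force attacker must try half the key space.
years = (TOTAL_KEYS / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

AGE_OF_UNIVERSE_YEARS = 1.38 * 10 ** 10

print(f"expected search time: {years:.2e} years")
print(f"ratio to the age of the universe: {years / AGE_OF_UNIVERSE_YEARS:.0f}x")
```

Even with these deliberately generous assumptions, the expected search time comes out hundreds of times the age of the universe, which is the point of the hydrogen-atoms image above.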

[05:15] It's a weird irony: at the moment it became vastly simpler and cheaper to listen in on anyone, it also became impossible to find out what they were saying. So in the early 1990s governments around the world took action on this, and in particular the administration of the Democratic President of the United States, Bill Clinton, brought two bills before Congress. The first one was called the Communications Assistance for Law Enforcement Act of 1994, or CALEA. What CALEA said was that if you made a phone switch, you would have to put a backdoor in it. That backdoor would allow the police, from the police station, to remotely access the phone company's equipment, type a few commands, enter a password and listen in on any conversation underway on the phone company's equipment. A compromise that allowed CALEA to pass in 1994 was that it would only impact switches that carry voice calls, but by the early 2000s voice over IP was a reality, and so the mandate was eventually extended to data switches as well as voice switches. And it is present in virtually every switch available commercially on the market, because if you want to serve the American market, or any other market that has a similar initiative, you have to build this backdoor into it.

[06:44] The other law that they tried, unsuccessfully, to pass was a law regarding something called the Clipper Chip. The Clipper Chip was supposed to solve the 'understanding what people are saying' problem. The idea was that there would be a legal mandate requiring anyone who made a scrambling system to give the keys to the government. The government would keep those keys in escrow, and if the police ever needed to listen in on an intercepted message, they could go to the key escrow service, retrieve your keys and descramble your message, so they wouldn't have to try to brute-force the keys to your message. The state would maintain a set of keys. It's like giving a set of keys to every door of every house to a centralized authority. And then, if the police wanted to come in without you knowing it, they could get a warrant, get your key, walk into your house, search your belongings and leave again without you ever knowing that they were there. But the Clipper Chip bill did not pass.

[07:52] Now years have gone by since these two legislative fights. It's been a little more than a decade, and in that time the world has been completely transformed by digital computers. Everything we do today involves a networked computer, and everything we do tomorrow will require a networked computer. And so the risks of bad computer code have become more and more clear. For example, in November 2012 a young man named Barnaby Jack went to Australia and gave a presentation at a security conference on his research into implanted defibrillators. Implanted defibrillators have probably saved the life of someone you know or love. If you have a heart problem – your heart doesn't keep to its rhythm – you can see your doctor, and she will cut you open, spread your ribs, reach into your chest and attach a computer with a powerful battery to your heart. It will listen to your heart beating, and if your heart stops beating, the battery will give you a shock that brings you back to life. Now, doctors want to be able to monitor your implanted defibrillator. They want to know how often the shock is taking place. They want to be able to update the software. And since this defibrillator is lodged inside your chest cavity, attaching a USB cable to it is impractical and messy. So it has a wireless interface, because everything has a wireless interface. This room is basically a microwave oven.

[09:24] Barnaby Jack's demonstration showed that, from ten meters away, he could hijack your defibrillator. He could reprogram it. He could cause it to seek out other defibrillators – say, when you went to the hospital, to the ward where they check up on patients who have defibrillators – and reprogram all of those other defibrillators, and then deliver lethal shocks to everyone whose defibrillator had been compromised. Whether or not your computer works has ceased to be an issue of whether or not you can get a memo to your boss and has become an issue of life and death. In a million ways we can be compromised by our computers.

[10:07] Consider that your laptop, your tablet and your phone are devices with cameras and microphones that know who all of your friends are, that know everything you say to them, and that know every private moment of your life. And you keep them in your bedroom, you take them with you into the toilet, and they're always on.

[10:27] The beauty queen who won the Miss Teen USA pageant last year, Cassidy Wolf, called the FBI when she started getting mysterious e-mails from someone who had hijacked her computer using what's called a remote-access Trojan. He had covertly operated the camera on her computer, and he'd used it to photograph her in the nude as she moved around her room. He had hijacked the passwords for her social media. And he sent her e-mails saying 'unless you perform live sex acts on your camera for me, I will put your naked photos on the Internet.' So she called the FBI and they arrested him, and he had hundreds of victims, including children – including children in the EU. And he is just one of many people who have been arrested for a practice called "RATing", remote-access Trojaning. Creepily enough, RATers call their victims their "slaves". They gather on forums in which they trade tips on how to trick their "slaves" into installing the remote-access Trojans.

[11:33] We have our computers in our bodies, but we also put our bodies in computers. The night before last, I flew back from SXSW in a Boeing 747 jet. A Boeing 747 is a flying Sun Solaris workstation in a very fancy aluminium case, connected to some very badly secured SCADA controllers, that happens to hurtle through the air at hundreds of kilometers per hour. Cars are computers that hurtle down the motorway at hundreds of kilometers an hour with us trapped inside them. Most modern buildings are computers in fancy cases. If you take the computers out of the buildings, they become immediately uninhabitable. And in many cases, if you leave the computers out of a building for any length of time, that building will never be habitable again. It will be cheaper to knock it down and start over than it will be to try to move back in.

[12:27] We have our bodies in computers, and we have computers in our bodies. It's not just implanted defibrillators. If you're my age and you grew up with the Walkman, or you're a little younger and you grew up with the iPod, you will have logged enough punishing earbud hours by the time you reach a certain age that you will almost certainly need a hearing aid. And it is vanishingly unlikely that that hearing aid will be a retro, beige, hipster, analog transistor hearing aid. It will be a computer that you put in your body, and depending on how it works, it can make you hear things that aren't there, it can make you not hear things that are there, and it can tell other people what you've heard. So we've got to get this right. And over the last couple of decades, we've had a steady drip of stories that have shown us what happened with CALEA and Clipper and how they've set us wrong.

[13:27] So, I mentioned before that it doesn't matter whether or not your government has a law that allows for the kind of wiretapping that the American government does: all the switches have the facility for American-style wiretapping built in, and you either turn it on or don't turn it on when you install the switch. That is much simpler than making different switches with different software for different markets. Now, in Greece they don't have a lawful interception law like the American one, and there is no reason for their switches to have a backdoor that the police can access. So when they installed the switches at their major national phone companies, they did not activate the wiretapping backdoors. During the 2005 and 2006 Olympic bidding process, someone broke into the Greek phone companies, turned the wiretapping on, and listened in on the Prime Minister, the President, the cabinet, the Olympic Committee and captains of industry. And then they turned it back off again. And it's only because they forgot to erase the log file that we know that it happened. When you design devices with insecure backdoors, it should not surprise you to find that people go in and out of those backdoors without permission.

[14:46] But that isn't the end of the CALEA and Clipper story. As we've heard from Snowden, it's much worse than that. But we've known for a lot longer than we've known about Snowden that things were getting bad. In 2005 an AT&T technician named Mark Klein walked into the offices of an NGO in San Francisco called the Electronic Frontier Foundation – this is my former employer, for whom I was the European director. And Mr. Klein said: 'I am a retired AT&T engineer, and when I was working for AT&T my boss ordered me to build a secret room at our Folsom Street switching center' – which is their main switching center on the American West Coast – 'to install a beam splitter into our fiber-optic trunk and to make a copy of all of our Internet traffic, directed into a secret room operated by the National Security Agency.' And since 2005, the Electronic Frontier Foundation has been suing three successive presidential administrations for access to information about this program. This information has only become clear in the last year, though, because since the Snowden revelations we've been able to actually find out what's going on without the American government saying it is a state secret. It's very hard to claim that something is a secret when it's on the front page of the New York Times.

[16:09] So there's a connection here between mass surveillance and mass decryption. And it's the companies that supply the lawful interception equipment to governments. Increasingly those are companies from Europe – companies like the French firm [???], which operates out of the UK, and many others. These firms are in the business of selling malicious software – software like the software that was used to compromise the beauty queen Cassidy Wolf – to governments, so that they can implant it on the computers of people they don't like.

[16:48] In Germany you have a very intimate experience with this sort of technology: you have the "Bundestrojaner". The "Bundestrojaner" was a not very sophisticated version of this kind of technology, and since then it has only ramped up. Now, all of these companies, in order to implant their remote-access Trojans, must seek out vulnerabilities in commonly used systems. They search for them, and when they find them, they use them as the leverage for their attack. A vulnerability becomes the kind of chink in the armor through which they can slide their attacks. And so they all have a stake in keeping these vulnerabilities secret. Because the longer these vulnerabilities are secret and not patched – at the infrastructural level, in our switches, and also at the personal level, in our phones and our computers – the more criminals and other persons of interest there are whom states can infect with the Bundestrojaner and other Trojans.

[17:48] In 2011 an American activist named Jacob Appelbaum – who now lives here in Berlin, in exile because of his activities with WikiLeaks – bought a ticket to the major trade show for this industry, the so-called lawful interception industry. They have a trade show in Washington DC every year that they call 'the wiretappers' ball.' And somehow these superspies didn't notice that someone from WikiLeaks had bought a ticket for their show under his own name, after publicly announcing himself to be a spokesman for WikiLeaks. And so they let him in. He walked around gathering information about their products, and then the British newspaper the Independent published it all. What he found was that common software like iTunes and operating systems like Windows and Mac OS were widely exploited by these companies. They hoarded enormous caches – troves – of vulnerabilities in these commonly used programs, which, rather than disclosing and repairing, they kept as secret as possible so they could use them to deploy their products.

[18:59] Now, this is alarming, because security is a very different kind of discipline to most others. It is unlike most engineering trades in that it is a process and not a product. In engineering we have experimental methodologies for testing an engineering hypothesis. If I say 'I plan on putting a structural steel beam in this room, of such-and-such a dimension, so many centimeters, and of the following impurities and purities,' we can test whether or not this beam is going to hold up the ceiling. Generally we get it right; the ceilings don't fall down. But in security there is no way to test the hypothesis, except to invite other people to invalidate it. Anyone can design a security system that works so well that that person can't think of a way of breaking it. But all that tells you is that you've developed a security system that works against people who are stupider than you. Unless you're the smartest person in the world, you have to tell everybody else how you do it and wait for them to come forward with flaws in your thinking. Security is a process that requires continuous repair in order to keep us secure. As new vulnerabilities are discovered, they are patched, and we all become more secure. In this regard security is a lot more like public health than it is like engineering. Now, it's true that engineers have to contend with powerful adversaries: hurricanes, frost, winds, floods. But those adversaries are indifferent to the engineers. You don't experience a flood because the water is angry with you or wants to defeat you. You experience a flood because of physics.

[20:41] But you experience an attack because your attacker is looking at what you've done and trying to figure out a way around it. And in that regard an attacker is much like a pathogen in a public health context. An attacker is someone who is always trying new techniques to slide past your defenses, in the way that germs are always trying new ways of sliding past our health defenses. And when governments, and the firms that serve them, hoard information about vulnerabilities it's like hoarding information about dangerous pathogens in our water, in our air, and weaponizing them as part of our military-industrial efforts instead of endeavoring to cure them as part of our public health efforts.

[21:30] Now, last spring, Edward Snowden came out of the cold, and we have had one revelation after another. It is the Mark Klein story on steroids. Last September we got, to my mind, the most shocking of all of the revelations from Edward Snowden. And that was the revelation that the American spy agency, the NSA, and the British spy agency, GCHQ, have been spending a quarter of a billion dollars a year on programs called Bullrun and Edgehill, whose objective is to sabotage the security of our information technology. They have infiltrated standards bodies in order to undermine the standards. It is as though they had gone into a standards body that specifies the characteristics of structural steel and ensured that there were weak points in it that they could attack to make buildings fall down. We can understand on its face why this is a bad idea. They've infiltrated companies and had them deliberately weaken their products. And again, it's as though you bought a lock only to discover that that lock had deliberately been made vulnerable to lockpicks so that the police could get into your house if they needed to.

[22:44] CALEA and Clipper have come to mean that insecurity is a feature, not a bug, from a national security perspective. Our spies considered the risks of making technology insecure – the risks to all of us – and decided, on all of our behalf, that those risks are something we should bear in order to help them catch their enemies. Now, a year ago I gave a talk like this while I was touring with a novel called Homeland, which is all about many of these subjects. The first stop on the tour was in Seattle, at the library there. And I gave a talk much like this one – about defibrillators that can be hijacked to give lethal shocks to their owners, about beauty queens being spied on in their bedrooms by computers that betrayed them – and at the end a woman put her hand up and said: 'Well, you've scared me. Now what do I do?' And I said: 'You know, you wouldn't ask this if I had just given you an hour-long talk about waterborne parasites. You're not a parasitologist, and I'm not a parasitologist, but we both understand that if our government decided that weaponizing waterborne parasites was more important than addressing them and curing them, we would need a new government.'

[24:03] I was a systems administrator fifteen years ago. Today that barely qualifies me to plug in a Wi-Fi router. I can't make my systems secure, and neither can you. Because no matter how many steps you take to secure your personal computer, it exists in an ecosystem – an epidemiological ecosystem of devices, networks and users who all have the power to compromise your security. If the water that came out of your tap was drinkable, but the water that came out of your neighbor's tap was full of cholera, you would get cholera. If your e-mail is secure and private, but your neighbor's e-mail is being stored on Google's servers, and all the messages passing in and out of Google's servers are being tapped on undersea fiber-optic cables, your e-mail will be read by spies.

[25:05] When we attack information technology systems with the intention of keeping long-lived vulnerabilities in them, instead of improving them, we attack the health of our whole society. And increasingly our cyberwar efforts are aimed at some of the most critical of our computers: our embedded systems, the kinds of systems that were attacked by Stuxnet. These are the controllers that sit at the interface of the material world and the digital world – the controllers that run our phones, our cars, our airplanes, our nuclear power plants. And when we find vulnerabilities in these controllers, it is especially important that those vulnerabilities are reported and patched – and not weaponized. Weaponizing a vulnerability in an embedded system makes it less secure. So if you fly, if you live near a nuclear power plant, if you drive, your life depends – literally – on the security of those embedded controllers. The problem is that any zero-day – any vulnerability that is discovered by the good guys and kept a secret – will also be independently discovered by the bad guys and weaponized. It doesn't matter if you trust your spies. A vivid example of this came with the Edward Snowden revelations just before Christmas, when a leaked catalog was published of all the attacks that the NSA has for all of its agents. So if you want to attack someone who has an iPhone, there is a catalog that lists all of the different ways they can attack an iPhone. If you want to attack someone who is using a Cisco switch, there is a catalog that shows you all the ways to attack a Cisco switch. Each one of these builds on some vulnerability, some defect, in the underlying code.

[26:52] So, every year in Hamburg there's a conference called the Chaos Communication Congress. It's a very good technology and security conference. And one morning two programmers stood up and said: 'We have discovered this fatal flaw in the iPhone, and here's how it works. We've independently discovered it, and we expect Apple to fix it now, because it compromises everyone with an iPhone.' The next day Jacob Appelbaum stood on the stage and said: 'I have the new catalog of the NSA's exploits, and in it is the vulnerability that was presented yesterday.' It was independently discovered. We don't know how many times it was independently discovered. We don't know what other entities independently discovered it. We know that smart people work for criminals. We know that smart people work for other governments. And by hoarding that vulnerability instead of responsibly disclosing it to Apple, the NSA put every single iPhone user in the world at risk of exploitation.

[28:00] So security is a public health issue, and the willingness to trade a moment's instrumental convenience for a long-term existential risk is endemic and affects every realm of our computer use. It's not just about security. Many of us are willing to put up with anti-copying technology – what's sometimes called Digital Rights Management technology – in order to gain access to some movie or book or game that we want. So we buy a book for the Kindle, even though we know it's locked and will only play on a Kindle device, and even though we know that our Kindle device will probably be landfill in two years, and by then we might have bought another computing device. But we say: 'It's good enough for now, it works fine.' It works fine, but it fails very, very badly. And over and over again we have been willing to make that trade-off. But the trade-off is worse than it appears on its face. Digital Rights Management is a very fraught proposition, because Digital Rights Management by definition treats the owner of the computer as its adversary. The only reason you need Digital Rights Management in a computer is to stop the owner of the computer from doing something she wants to do with it. If all you want to do is keep honest users honest, you don't need it at all. Honest users don't need a lock on something that belongs to them. The only reason to put it on there is to stop the owner from doing what she wants to do.

[29:29] Now, we don't have a theoretical model in computer science for a computer that can run every instruction except for an instruction we don't like. All computers can run every instruction; that's what it is to be a general-purpose computer. And so you can't make a computer that can run all the programs except for the program that exports a Kindle book to a non-Kindle device. You can't make an iPhone that can run every program except for the ones that didn't come from the App Store. In order for these devices to work, they have to be designed so that there is some program that watches while you use them and waits for you to ask them to do something that's on the forbidden list. And when you try to do the thing that's on the forbidden list, that program swims to the fore and says: 'I can't let you do that, Dave.' And none of us want the 'I can't let you do that, Dave' program on our computers. And if we find it, we delete it. So in order for an 'I can't let you do that, Dave' program to be sustainable, it has to be hidden from us. The operating system and the file system need to be designed so that there are programs and files that aren't revealed to you under normal circumstances. If you ask your computer, 'Is there a file here called "I can't let you do that, Dave"?', your computer has to say 'no', even if there is such a file, otherwise you'll delete it. And if you ask your computer, 'Is there a program running called "I can't let you do that, Dave"?', your computer has to say 'no', even if there is such a program running, otherwise you'll delete it. But in order for that to be sustainable, you have to make it illegal to tell people how the 'I can't let you do that, Dave' program works. Otherwise, they'll go to Google and type in 'how do I delete the "I can't let you do that, Dave" program?', and they will find a page that says: 'It's here, it's called this, and here's the command you enter.'
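The mechanism described here can be caricatured in a few lines of code. This is purely an illustrative toy, not how any real DRM system is implemented; every name, command, and the forbidden list itself are invented for this sketch. It shows the two behaviors the talk describes: vetoing a request on a forbidden list, and lying about the watcher's own existence.

```python
# Toy sketch of the "I can't let you do that, Dave" pattern.
# All names and commands here are invented for illustration only.

WATCHER = "i_cant_let_you_do_that_dave"   # the hidden enforcement program
FORBIDDEN = {"export_book"}               # hypothetical restricted operation

def run(command, files):
    """Dispatch a user command through the watcher layer."""
    if command in FORBIDDEN:
        # The watcher swims to the fore and vetoes the request.
        return "I can't let you do that, Dave."
    if command == "list_files":
        # The watcher conceals itself, or the owner would delete it.
        return [f for f in files if f != WATCHER]
    return f"ran {command}"

disk = ["book.azw", WATCHER]
print(run("export_book", disk))   # the request is vetoed
print(run("list_files", disk))    # the watcher is hidden from the listing
```

The point of the toy is that the concealment in `list_files` is not an optional extra: without it, the owner finds and removes the watcher, which is exactly why the laws described next make disclosing it illegal.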

[31:34] And so, under the European Union copyright directive of 2000, under the American Digital Millennium Copyright Act of 1998, under Canada's Bill C-11 of 2012, and under laws that have spread like an infection all over the world, it is illegal to tell people information that can be used to remove a Digital Rights Management product. And this means that it's illegal to tell people about long-lived vulnerabilities in the programs and services and devices that they use every day, which have the power to compromise them in a thousand ways, large or small. This is terrible, terrible, terrible policy.

[32:27] This is a recurring problem in the way that we evaluate technology, and it's especially relevant when we talk about consumer rights and consumer reviews. Because when we review products, we tend to discount the potential harms arising from this kind of failure to about zero, and we tend to elevate other values, like elegance and ease of use, more or less to infinity. Now, I'm the first person to say that we need to make our secure, owner-respecting technologies easier to use and more elegant. But the problem is that the magazines and services that review consumer technology tend to ignore any potential risks arising from using devices where it's illegal to tell people about bad things going on in them. And because there's no way the manufacturer can be comprehensively certain that there's nothing bad in a device at the time of manufacture, that means we tend to put our users, and the people who put their trust in us, at risk. We need to fix that.

[33:34] Information and communications technology is not just another example of an area where we underweight a future risk for a present value. That happens all the time. We do it with environmental costs, we do it when we eat cheesecake, we do it when we smoke a cigarette. We're always underrating some future risk for some present value. But the problem with information and communications technology is that it is the very infrastructure of our world. It is the nervous system of the 21st century. And underweighting those risks, at a policy level as well as an individual level, puts the entirety of our civilization at risk. It is an existential risk to our species.

[34:16] So this is a problem I struggle with every day, and I'm involved in lots of initiatives to try to make the secure stuff better and easier to use. But we need your help. We need the help of consumer agencies and consumer rights agencies and review agencies. Because we need you to help rate and review these things appropriately, to make people aware of the dire risks that they take on when they use software and hardware where it's illegal to tell them about the vulnerabilities. We need you to address states on our behalf, as respected advocates for consumers, and tell them that it is desperately inappropriate, and a huge moral hazard, for them to rely upon the "Bundestrojaner" and its progeny as part of their policing strategies. The amount of vulnerability that we should deliberately insert into the communications technology of the world is zero. Anything else puts us all in terrible danger.

[35:14] Thank you, I'll take your questions now. And thank you to the interpreters who have been typing from abroad; you have my sympathy.

[35:31] [Host] Thank you, Cory.

Questions and Answers[edit | edit source]

[35:51] [Host] You've been describing a world whose communications systems, we can all say, are broken at every level. Do you see some way to really fix the problem you described?

[36:02] [Cory] I do. I think that it requires a shift in how we view these things, from a kind of instrumental view to an environmental view: the shift I talked about, looking at this not as an engineering problem but as a public health problem. I think the underlying issue, in part from a policy perspective, is that before general-purpose computers we were accustomed to thinking of complicated things as being special-purpose. So a car, for example, is relatively special-purpose, while a wheel is very general-purpose. And you might say to an auto manufacturer, 'You have to design a car that doesn't have a phone in it, because when people talk on the phone they're distracted and they have car accidents.' And the manufacturer would say, 'Yes, that sounds okay.' But you would never say to a wheel manufacturer, 'Every bank robber uses a car with four wheels on it, so can you design a wheel that can't be used by bank robbers?' Because we understand that wheels are general-purpose, and they are not tractable by that kind of regulatory intervention, while cars, having a special-purpose nature, can have features added and removed.

[37:17] But computers have this special nature, this thing that computer scientists call "Turing completeness", named for Alan Turing, the codebreaker and computer scientist. And Turing completeness is the property of a computer that can execute any program that can be expressed symbolically. And that is really the only model for a computer we have. Maybe somewhere out there, lurking in potentia, is the computer that runs all programs except one. But we don't know about that computer. We only have this one computer, and I think regulators are accustomed to thinking of computers' features as being like their software. So if we say 'make me a computer that doesn't have spreadsheets on it', they think of that as 'you just go in and you delete the spreadsheet program.' But that doesn't stop the computer from being able to run spreadsheets. It just means that the spreadsheet is gone for now. The only way to keep spreadsheets off the computer is to put spyware on it. And we've treated that as functionally equivalent to a computer that doesn't have a spreadsheet function, but it's not the same at all.
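The spreadsheet example can be made concrete with a toy sketch (this code is not from the talk; it assumes nothing beyond the Python standard library, and the three-cell "sheet" is invented for illustration): even on a machine with no spreadsheet application installed, a general-purpose interpreter can recreate spreadsheet-style evaluation in a few lines, because the capability lives in the machine, not in the app you deleted.

```python
# Toy illustration of Turing completeness in practice: a general-purpose
# computer can always "run spreadsheets", because deleting the spreadsheet
# program does not remove the machine's ability to compute one.
def evaluate(sheet):
    """Resolve every cell in a tiny sheet; strings are formulas over other cells."""
    def resolve(cell):
        value = sheet[cell]
        if isinstance(value, str):  # a formula like "A1 + B1"
            # Evaluate the formula with the other cells' resolved values in scope.
            return eval(value, {}, {k: resolve(k) for k in sheet if k != cell})
        return value                # a plain number
    return {cell: resolve(cell) for cell in sheet}

sheet = {"A1": 2, "B1": 3, "C1": "A1 + B1"}
print(evaluate(sheet))  # {'A1': 2, 'B1': 3, 'C1': 5}
```

The point of the sketch is that the "spreadsheet" here is just a dozen lines of ordinary code; any policy that tries to remove the capability, rather than one program, has to watch everything the machine does.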

[38:22] [Host] Do we have another question?

[38:31] [Question] So there's this post-privacy movement that I've heard about. People that put everything online, all their data. They check in everywhere they are so anyone can follow anything they do. What do you think about that reaction?

[38:47] Well, there are also people who don't like wearing clothes. That's fine, right? I think that it's a danger to observe that some people don't care about their privacy and therefore decide that nobody deserves to have privacy. The statement that privacy is dead is often a demand and not an observation. And it's often a demand by companies that would make a lot more money if privacy were dead. You rarely hear people who don't have a financial stake in your willingness to kill your privacy announcing that privacy is dead. I think particularly when it comes to young people, we tend to overestimate how intentional their actions are. danah boyd, an amazing American sociologist who studies how marginalized young people use the Internet, has just published a brilliant open-access book on this from Yale University Press called "It's Complicated". She talks about the hazard of the phrase 'digital natives': it ascribes to adolescents a kind of ninja-like perfection in their use of the Internet. And she says when we see young people using the Internet in a way that compromises their privacy, we assume, because they must be 'digital natives', that they are doing so because they have a near-mystical understanding of the future of privacy and that it doesn't matter anymore. And she says that there is another hypothesis that's a lot more plausible, and that hypothesis is that they're kids who made a mistake. And in fact when you look closely, what you find is that young people are extremely jealous of their privacy, but in ways that reflect a narrow understanding of privacy that's quite immature. They're very worried about their privacy from their parents, and they're very worried about their privacy from their teachers, and they're very worried about their privacy from bullies, and they're very worried about their privacy from people they're romantically interested in.
But they're not worried about privacy from the police, and they're not worried about privacy from governments, and they're not worried about privacy from news agencies who periodically find children drinking beer or something, put it in the newspaper and say, 'Look at how horrible our children are.' I think it's much more plausible to think that kids are not post-privacy, but rather just not very good at it yet.

[41:14] Privacy is one of those things where the consequences are separated from the action by a lot of time and space. And those are the kinds of things that we have a hard time getting good at. Smoking is a really good example of this. If you smoke, you won't get a tumor right away, but if you smoke long enough, you'll get a tumor. And the tumor will come fifty years after you start smoking. It's very hard to make a better decision the next time around when fifty years have already gone by and you've got cancer. Cheesecake. If every time you took a forkful of cheesecake you got an ounce of cellulite, we wouldn't eat a lot of cheesecake. There's this huge gap between cause and effect that makes us bad at counting our calories. And privacy is another one of those things. When I was eight years old I had an elementary school teacher who went to the hospital to have his first baby with his wife, as you did in Canada in those days. And when the baby was born, as was common in those days, a representative from a marketing firm came up to him and his wife and their newborn and said, 'I have a basket of gifts for you from companies that sell products to parents of newborns. I have some nappies and some cream and so on, and all I want from you is your baby's date of birth and name and your address, and we'll stay in touch with you.' And it's a very small privacy bargain for something that is clearly valuable, so they took the deal. Now, as happens sometimes, a few weeks later their baby unexpectedly died, and every year on their baby's birthday they got a package in the mail from this marketing company and its partners. And it's hard to say that they should have just understood better what the privacy consequences were when they made the trade. It is somewhat monstrous, really, to say, 'Well, why didn't you think about your baby dying as you sat there in the maternity ward with your newborn? Why wasn't that on your mind before you gave your information to a marketing company?'
And even having had that very hard lesson, it's hard to imagine how they might generalize it and use it to get better.

[43:21] In the old days, before digital cameras, we would take about two rolls of pictures a year on film. We'd do one on our annual vacation, and we'd do one on Christmas and birthdays. We'd get them back from the lab, and some would be good and some would be terrible. But unless you took notes with each picture, you wouldn't know what you did to make the good ones good. But now we take a picture and right away we see what happened. We've gotten so good at taking pictures, through no deliberate action of our own, just by closing the cause-and-effect loop, that we now buy filters to make our photos look like they were taken in the 1980s by someone using a film camera, because otherwise they look too posed, like they came out of a news bureau and not a family vacation. I don't know how we do that for privacy, but until we can close that cause-and-effect loop more tightly, I think that it's far too early to declare privacy dead.

[44:20] In particular, I think that we struggle with this when it comes to kids, because we want to spy on our children. Our schools and our libraries and even parents want to put filters on the network that capture every click, to make sure they're not looking at bad pages. Those filters don't work very well. There are several billion webpages on the Internet. If the filters make a mistake one percent of the time, that means tens of millions of pages that shouldn't be blocked do get blocked, and tens of millions of pieces of eye-watering pornography get through. So they are by no means an effective remedy. But what they do, by definition, is capture every click to make sure it's not a click for one of the prohibited websites. And these filters are operated by companies that we have no insight into; they're often offshore, and often their major customers are repressive governments in the Middle East, and they just repackage this for schools and libraries. They're really some of the worst people in the world, and we give them our kids' entire click streams. But the worst part of this is that it puts us in a position where, if our kids do anything to protect their privacy, we have to punish them. Because we have decided that in order to keep them safe they must have no privacy at all, that we must know every click they make to make sure they don't look at the bad webpages. If we really cared about whether or not kids cared about privacy, our school curriculum would consist of teaching kids to jailbreak every device, break through every censorwall, use proxies… I mean, that's how you stay private! And instead we're like the parent who stubs out a cigarette, lights another one and says, 'You mustn't ever smoke!'
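The "tens of millions" figure follows from simple base-rate arithmetic. A quick sketch (not from the talk; the three-billion-page count is an invented stand-in for the speaker's "several billion"):

```python
# Base-rate arithmetic behind the filter claim: even a small error rate,
# applied to billions of pages, misclassifies tens of millions of them
# (both overblocking innocent pages and letting prohibited ones through).
pages = 3_000_000_000   # stand-in for "several billion webpages"
error_percent = 1       # a filter that is wrong 1% of the time
misclassified = pages * error_percent // 100
print(f"{misclassified:,} pages misclassified")  # 30,000,000 pages misclassified
```

The same arithmetic is why the talk treats the filters as ineffective: the error volume scales with the size of the web, not with the diligence of the filter vendor.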