Our world is becoming more and more connected, and with that it also becomes less safe. Following a talk he gave at the 2017 ITBN Conference in Budapest in September, F-Secure’s Mikko Hyppönen agreed to also give us an interview.
Known for the Hyppönen law of IoT, which states that if a device is connected, it’s also vulnerable, Hyppönen has had a long career fighting malware and other cyber threats, even getting credit for improving the security of that little tool many of us use each day – Twitter.
With his hair slicked back in a ponytail, still smiling after the round of applause the room rewarded him with following his presentation, Hyppönen opened up about the cyber threats we live with, the dangers to our privacy, Artificial Intelligence and more. Without keeping you waiting any longer, here is our interview.
Q: Machine learning will help us detect new threats, but will AI ever be able to find new threats, without human input?
A: That’s going to be in the very far future. Machines are good at understanding trends. But trends are based on what’s been happening so far, so detecting something completely new is hard. I can give you an example. 2010 – Stuxnet – one of the most famous pieces of malware in history. Stuxnet was spreading in the wild, infecting real user systems, for two years. No one detected it. Everyone was using automation. We were using automation, all our competitors were using automation. Every single learning system missed Stuxnet because Stuxnet didn’t look like malware. It looked nothing like malware.
Normal malware, especially at the time, was obfuscated – encrypted, packed, compressed – really small, really tight, hard to understand. Stuxnet was huge – no encryption, out in the open; there was nothing malware-like in Stuxnet. If you looked at what it did, it looked like an installer that would install different device drivers on a system. But it wasn’t, it was malware. And every automated system which had been taught to look for small, encrypted malware completely missed Stuxnet for two years. So that’s one challenge.
There’s another way of looking at it, which is – forget malware, let’s think about vulnerabilities. Let’s find the root cause. Malware can only infect your system if you’ve got some sort of vulnerability. The vulnerability can be the operator, the human, like clicking on links, but quite often, it’s a technical vulnerability – some kind of a bug in the code which allows a remote exploit to run code on your system, which in the end is a programmer mistake. And programmers are people – people make mistakes. We can’t change that.
But in my talk (NB: ITBN speech) I mentioned the idea of programming programs. I believe that’s going to happen. I believe we will eventually have all the programs written by programs. And they are so good – a million times better than we are – they will not write bugs, there will be no vulnerabilities. Or, if there is a bug, it’s so goddamn complicated we will never be able to exploit it. So, in that sense, machine learning and Artificial Intelligence, especially self-programming programs, might fix all the technical vulnerabilities we have. They won’t fix people; people will still be clicking on wrong places, but they will fix technical vulnerabilities.
That might lead to an arms race – good AIs write programs with no vulnerabilities and then “bad AIs” try to find vulnerabilities in those programs. Humans can’t find them, but maybe “bad AI” could. If that’s going to happen, I hope I am retired by then.
Q: So is this the future?
A: I gave a talk at Black Hat last month and they asked me to speak about the next twenty years because it was the 20th Black Hat. I made some educated guesses, but I started my talk by telling them that the best part is that in 20 years I’m going to be 67, hopefully still alive and retired, but if I’m still in good shape, I’d be happy to come back to Black Hat in 2037 and we can look at my talk from today and laugh at all the forecasts I made which went completely wrong. If someone had tried forecasting 2017 in 1997, I think they would have got it wrong.
I remember, 1997 was the time when everyone had a homepage. You had your own server and your own homepage, or a homepage on someone else’s server, with your picture there, your hobbies and stuff. Homepages were becoming a huge thing and I remember thinking that one day everybody would have their own server and their own homepage, and you could find everybody’s photo, everybody’s face. That’s almost exactly what happened – except that not everyone has a homepage; everybody went to one single website instead. I would never have believed that something like Facebook would become the norm.
TCSF: Since we’re talking about what we tell others about our lives, what do you think the regular Internet user, not necessarily tech-savvy, needs to do to stay safe and protect their cyber secrets?
MH: The problem of security and the problem of privacy are totally different problems and solved with different mechanisms. Many of the problems we have with our privacy probably cannot be solved anymore at all. If you’re using the Internet, you have to make tradeoffs with your privacy – you can’t be completely private. On stage, I said “privacy is dead,” which is a big statement to make. I don’t think it’s completely dead, but clearly, we are in a much worse state.
I tried living my life without Google and I failed. You can replace some of the services, you can avoid using Google Maps and use Nokia Maps. You can avoid using Gmail and use your own email server. You can avoid using Google Search and use DuckDuckGo, but none of these are really that good compared to Google.
Google products are really, really good. And the worst part is that someone sends you a funny video – a link to YouTube – and what are you gonna do? Click it! And you can’t watch it anywhere other than Google’s services. So I gave up.
Practical guidance – what can you do about your privacy? Well, the first step is awareness – understanding that there are no “free lunches” on the Internet. All these free services are collecting information about you, and you don’t have to give them everything; you don’t have to give them unnecessary information. One thing which is fairly easy for normal users to do and understand is to use different browsers for different things on your computer. You can have Edge for Facebook, and Chrome for Google services. Facebook would really like to profile which websites you visit, but it can’t: on Edge you search with DuckDuckGo, and in Chrome you never log into Facebook, so they never get your cookie. They can’t combine your life across different services.

Another thing I recommend is a VPN, especially on the laptops you travel with, on your tablets, on your smartphones – especially when you’re connected to WiFi. Without a VPN it’s very easy for websites to track you, and it’s easy for anyone else on the same WiFi to see everything you do and every website you visit. VPNs are fairly easy to use.
TCSF: I think the problem with many VPNs, including the one your company offers, is that they cost money, while people associate the Internet with the notion of “free of charge”.
MH: Why should the Internet be free?
TCSF: Well, it’s just the concept – you are given free stuff, things that you don’t pay money for.
MH: But you pay with your data.
TCSF: Yes, you pay with your data, but not physical money, which is something more tangible.
MH: Would you rather pay money or would you rather pay with your privacy?
TCSF: Well, I call Google my “pact with the devil” – I know what I’m signing up for and how my data is used, but the tools are too good to pass up. I tried switching off from Google too, and failed miserably.
MH: But there are big companies out there that are willing to protect your privacy. I think Apple is a good example. Apple’s product is not the user – Apple’s product is hardware. They’re selling you things so they don’t need to monetize you and your data. They are totally different in the way they handle user data. Google has great security and it does a great job securing your data against anyone else, except themselves.
TCSF: I was actually in a conversation earlier about online advertising – more specifically, the huge amount of data these companies have on you, even if it’s anonymous. The question was whether some hacker somewhere could find a way to trace back the unique identifier each of us carries, the one used to serve us ads based on the sites we visit. Even worse, what would it mean for us if such data were exposed?
MH: Our Freedome app, when you turn it on, has a visualization mode where it shows you all the cookies it stops and draws a map, and you can see how your NY Times cookie links you through five different cookies to your YouPorn cookie. It’s very interesting to see the links between different cookies.
TCSF: There’s been a study recently about AI and how one specific AI was trained to scan the facial features of some 35,000 people and tell whether they were gay or not. Now, the study is said to have been flawed, but it doesn’t mean that it can’t be duplicated in the future with proper data. Do you think there are things that AI shouldn’t be used for from an ethical point of view, especially given the context that there are some 70 countries where it’s illegal to be gay and a dozen of them where you can die because of this?
MH: There are definitely lots of ethical dilemmas with Artificial Intelligence, and there definitely are lines we should not cross with AI, and that’s one of them – profiling people in malicious ways is problematic. I had a slide in my talk (NB: for ITBN) which I didn’t show, but I’ll show it to you as another example of ethical lines which are being crossed with today’s technology and shouldn’t be. It’s a Russian website which has a very efficient face-recognition algorithm. They have plenty of demos where they are in the subway, take a photo of a pretty girl, go to the system and find out who she is on Facebook. It gets worse. They have a 15-year-old photo of a small girl with her mother and sister – they look at the small girl in the photo and find her today.
If we don’t want to think about AI, let’s talk about drones and robots, which aren’t that far away. One thing that I feel strongly about is that we should not give privacy rights to robots. I don’t think robots should get privacy. Practical example – a drone flying there, shooting video of us, is a robot. It shouldn’t have any privacy. We humans should always be able to point at the drone and get information about whose it is, why it is here, who owns it.
TCSF: Do we need to redefine the term privacy? Because it doesn’t mean the same today as it did ten years ago.
MH: That’s what I was going to say – it doesn’t mean the same. When you speak to 20-year-olds, they don’t remember a time before the Internet. For them Google has always been there, Wikipedia has always been there, YouTube has always been there. They don’t remember a time when you paid money for media; it’s always been free. In that sense, the idea of privacy has already changed. Another example: you go to a service like Facebook, and Facebook says you can send private messages between two users. Private? The word private used to mean a conversation between two people when no one else is around, but this is giving your messages to Facebook, and Facebook gives them to the other person. How is this private? They are redefining the words as well.
TCSF: Do you think the interest in end-to-end encryption will increase?
MH: I can only hope. Sometimes people ask what changed after Snowden. I tell them that before Snowden – and it’s not just Signal or WhatsApp, just look at the number of websites you use – Google was HTTP, LinkedIn was HTTP, Amazon the same, except for the purchase part. Now everything is HTTPS. That’s what Snowden changed. Awareness.
TCSF: But has anything changed at the level of the NSA and the intelligence services in terms of what they collect? Do you think they comply?
MH: I assume the NSA complies with the new rules about tracking private information of US citizens. I’m assuming they stopped doing that, but that doesn’t help you and me because we’re not US citizens.