Security Now 346
Topic: Q & A
Recorded: Wednesday 28 March 2012
Security Now 346: Q & A #140
=== Stories that made the news ===
Law enforcement tools can bypass the iPhone passcode in under two minutes
Study finds major flaws in single sign-on systems
Flawed sign-in services from Google and Facebook imperil user accounts
Loose-lipped iPhones top the list of smartphones exploited by hackers
Stalking iPhones at Starbucks Follow-Up
Is Microsoft Challenging Google on HTTP 2.0 with WebSocket?
New Java Attack Rolled into Exploit Packs
Do I have Java?
Just for Reading...
Coffee Section of the Podcast: Funranium Labs - http://www.funraniumlabs.com/the-black-blood-of-the-earth/
Toddy T2N Cold Brew System - http://preview.tinyurl.com/88c728t
Cold Drip Coffee and Tea Maker, 8-Cup - http://preview.tinyurl.com/7ec9djw
"I am a computer maintenance freak. I had been experiencing a problem which turned out to be a software glitch. However, I was at the time afraid my drives were going to go. I learned about SpinRite while reading up on SmartComputing." He has those capitalized, so maybe that's a site. And he said, "Double-checked with my office computer guy, who highly recommended SpinRite. Purchased and downloaded it today. It took a few hours to run through my drives. Seems like things are running better and faster than ever. So thanks. This was a great investment, and I will add SpinRite to my maintenance schedule." So he's got the right idea. Get it before your drives die, and they probably never will.
Questions & Answers
Question: [ 01 ]
Question: Doubling key size is security theater ...
Remember we had a question two weeks ago, somebody says, well, why don't we just - we went from 1024 to 2048. Why don't we just go to 5096 or something? If your key is large enough, he says, or she says, to make a brute force attack infeasible, a longer key doesn't add security. Beyond that point, a determined bad guy will try to exploit a weaker link, and there are plenty of those, like buggy software, spear phishing, social engineering to get a keylogger installed. Bad guys know the old story: If you're hiking in the woods, you don't need to outrun the bear as long as you can outrun the other hikers.
So it's the weakest link. And at some point we know - in fact, true crypto failure is almost unknown. I say "almost" because, historically, there have been some flaws found in older technology. We no longer use MD4, for example. But even RC4, the crypto used in the very first WiFi, the WEP WiFi - it wasn't the fault of the crypto, it was the fault, again, of the implementation wrapper that contained the crypto. And that's what we continue to see, just like we were talking about with OpenID and other things. It was like, hey, all of this is super secure and signed with keys and all that, but then they forgot to check whether the signature was valid.
But the one thing I did want to remind us of, sort of as a counterpoint to this notion that doubling key size is security theater, is the notion of future-proofing, because that's something to keep in mind. There's this spectre of quantum computers hovering out there that are sort of going to just instantaneously try all possible keys at once. We're a long way from there. And lord knows what happens when those exist, because it's the end of the world as we know it.
But for now, it is significant, I think, that that NSA facility is not attempting to crack things today. They're going back and going to crack things that they've been recording for the last 50 years, back when the underlying security technology was strong enough for then, but not for now. So there is this notion of the future. At the same time, 128 bits is plenty for connection-oriented things. Carbonite, for example, is using 128-bit encryption. That's a session key used on a point-to-point link which is regenerated and changed every time you reconnect. And sometimes you're also able to renegotiate a key on the fly, on a running basis, on these connections.
So 256 bits is plenty for data at rest as opposed to data in motion. So you want to choose key lengths properly. But the anonymous listener who wrote this question is certainly correct that, once you are future-proof, then all you're doing is wasting space and time and processor cycles.
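The "infeasible beyond a point" argument can be made concrete with some back-of-the-envelope arithmetic. This sketch assumes a hypothetical attacker testing one trillion keys per second (an assumption for illustration, not a figure from the episode) and estimates the expected years to search half of a keyspace:

```python
# Brute-force infeasibility, back of the envelope.
# KEYS_PER_SECOND is a hypothetical attacker capability, assumed for illustration.

KEYS_PER_SECOND = 10**12            # one trillion key tests per second (assumed)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(bits: int) -> float:
    """Expected years to find a key by trying half of a bits-bit keyspace."""
    return (2 ** (bits - 1)) / KEYS_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_search(bits):.2e} years")
```

Even under that generous assumption, 128 bits already works out to billions of billions of years, which is why going further up mostly buys future-proofing rather than present-day security.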
By: An Anonymous Listener
Question: [ 02 ]
Question: I found some buffer bloat
SN-345 equals fantastic. I ran the test and got "Network buffer measurements" - he's talking about the Netalyzr test. In fact, bit.ly/sn345 will take you to that Netalyzr test. I gave it to Russell, our IT guy, said this is a great thing to have in your toolkit because we learned a lot. So this guy had an uplink of - I guess an uplink latency of 490ms, downlink of 2,000ms. Yikes. That's what we call a bloated buffer. So what can I do about it? Is there any way to tell where the bloated buffer is? My router, a D-Link DI-602, is about eight years old. Could that have too much buffer?
By: Al Kraybill in Arlington Heights, Illinois
And this is the problem, that I saw some amazing measurements from our listeners. One guy was at 7.5, wait, not minutes, 7.5 seconds. So, I mean, 7,500ms in one direction. And the problem is that we're in that awkward place where something is getting a lot of attention, yet as the stickers say on the back of our televisions, there are no user-serviceable parts inside. Once upon a time you had tubes, and you'd take the back off the TV set, and remember you'd not want to use one of those cheater cables that allowed you to keep the thing fired up with the back off because you wouldn't want to electrocute yourself. And you'd pull the tubes out and take them down to the drug store and run them through the tube tester. I'm sure you remember those days, Leo.
And now we've just got boxes that are closed. And at the moment, while this issue is still so new, there just isn't anything for us to tune. There is, like, the newer version, the newest version of Linux, 3.3, is beginning to address this. Hopefully that will make it into some router firmware, like the Tomato or the DD-WRT stuff, where we'll begin to get this addressed. At the moment, right now, I don't think there's much that anyone can do. And, I mean, this is where I guess I'm glad that I'm as busy and backlogged as I am with existing projects because, I mean, I could just go off on this and never return.
I would love to do a utility that would tell you where in your link the problem was, and it's possible. But no, don't worry, I'm not going to let myself get distracted by that. So Al, I just, unfortunately, it's useful to know we have the problem. There's not much we can do except to work to minimize the buffering, which is to say, if you know you've got a - when something is saturating your bandwidth, when you know that in addition to saturation you also have delay, then the only thing you can do, if you're unable to find the delay and remove it, is work on whatever it takes not to allow that buffer to get full. Which, for example, means being careful not to be uploading a big file when other people in the household are trying to be interactive on their computers, do that some other time. And so at least now we understand what's going on, which is a big step forward, although it also creates some frustration in people who want to fix it.
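One way to get intuition for numbers like Al's 490ms is that a buffer adding D seconds of delay on a link running at R bits per second is holding roughly D times R bits of queued data. A small sketch of that arithmetic, assuming a hypothetical 1 Mbps DSL uplink (the episode doesn't state Al's actual link speed):

```python
# Rough estimate of how much data sits queued in a bloated buffer
# at full saturation: delay (seconds) times link rate (bits/sec), in bytes.

def buffered_bytes(delay_ms: float, link_mbps: float) -> float:
    """Approximate bytes queued when a link with this much buffering saturates."""
    return (delay_ms / 1000.0) * (link_mbps * 1_000_000) / 8

# Al's reported 490ms of uplink buffering, on an assumed 1 Mbps uplink:
print(f"~{buffered_bytes(490, 1.0) / 1024:.0f} KiB queued")

# The 5,700ms case from the next question, same assumed link:
print(f"~{buffered_bytes(5700, 1.0) / 1024:.0f} KiB queued")
```

The point of the exercise is that these delays imply hundreds of kilobytes of queued packets in a device's memory, far more buffering than a link that slow can usefully drain.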
Question: [ 03 ]
Question: I found excessive router buffering
I ran the Netalyzr utility you mentioned on Security Now! - again, bit.ly/sn345. And besides excessive buffering, I got a lot of strange errors about DNS not working correctly. It doesn't look like I can change the DNS server in my Qwest Actiontec Q1000 DSL modem/router. It's set to some IPs ... I then changed it to 184.108.40.206 and then ran the test again. This time all the odd DNS errors went away, and it only found two problems. The first one, network packet buffering may be excessive. We estimate your uplink as having 5,700ms of buffering. Yeah. 5,700ms of buffering. And we estimate your downlink as having 450ms of buffering. Wow. 5.7 seconds is a long time. Can anything be done about that? Can I even tell where it's happening? This is kind of like the previous question. The second problem, DNS resolver properties lookup latency was 340ms. That doesn't seem so bad.
By Steve Coakley in Phoenix, Arizona
So I did want to mention that that test tests a lot of other things. And many users found, just as Steve Coakley did, found other problems with their network that they were unaware of. My sense is that 340ms is a little slow for DNS lookups. I don't know why it would be so slow. Maybe it's just the DSL connection that he's got. I wanted to remind people that my own, GRC's DNS Benchmark, exists, and that that might be a good thing to use. There may be some solid, publicly available DNS servers other than the Level 3 servers, although those generally do perform up near the top of the list.
But the DNS Benchmark from GRC - in fact, I think you can just put "DNS Benchmark" into Google, and I pretty much claim that territory now because the Benchmark is a good one. It is Windows-only, but it is friendly with Linux under WINE, and you can run it on a Mac with WINE, as well. And again, we've discovered huge buffers. One of the other problems that we have is that it could be ISPs buffering in their routers, so those buffers are completely inaccessible. And we also know that later model network adapters have large ring buffers in the kernel, so that's introducing delay. And we may never have access to that.
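For a quick sanity check on lookup latency like Steve Coakley's 340ms, you can time resolutions yourself with nothing but the standard library. This is only a rough sketch using the system's configured resolver, not a substitute for GRC's DNS Benchmark, and the hostname here is just an example:

```python
# Minimal DNS lookup timing sketch: repeatedly resolve a hostname through
# the system resolver and report the average latency in milliseconds.
# Note: OS-level caching can make repeat lookups unrealistically fast.

import socket
import time

def avg_lookup_ms(hostname: str, tries: int = 5) -> float:
    """Average wall-clock time, in ms, for getaddrinfo() on hostname."""
    total = 0.0
    for _ in range(tries):
        start = time.perf_counter()
        socket.getaddrinfo(hostname, 80)   # resolve via the system resolver
        total += time.perf_counter() - start
    return total / tries * 1000.0

if __name__ == "__main__":
    print(f"avg lookup: {avg_lookup_ms('example.com'):.0f} ms")
```

Anything consistently in the hundreds of milliseconds suggests it's worth benchmarking alternative resolvers rather than living with the ISP's default.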
So again, the best thing we can do is, like, ask everybody, make noise, jump up and down. Hopefully this is a problem which - well, we know that it just sort of crept up on us. Nobody was really paying attention to it. Now a lot of attention is coming to it because people are downloading large content, not just being interactive. This was never a problem when everything was just web surfing, clicking on links and being interactive. This became a problem when some member of the family wants to watch TV over their Internet connection, which was crazy five years ago.
Question: [ 04 ]
Question: Comment about our server security conversation
I listened with horror, Steve, as you agreed with Leo's supposed expert. You did not agree, by the way. I want to make this clear. You sat there and nodded, but you didn't necessarily agree.
I maintain websites with in excess of 14 million unique visitors a day. We have never been compromised, yet we see hundreds of attempts per hour. It's not PHP that is the problem. It is the code that is written in PHP - well, okay, thank you, master of the obvious - and the willingness of a system administrator not to correctly set file and directory permissions. And that one I might agree with. Bad code can be written in any language, as can good code. The difference is bad coders are tempted toward the use of toy languages such as PHP. I don't think he understands what I was talking about.
There's no excuse for the injections that have happened and the placement of code on Leo's system. I was especially horrified by your acquiescence, which again, you did not do, to my comments that in the good old days we had a cgi-bin directory. We still have a cgi-bin if we want, he says.
By Magic John in Colorado, USA
And so I went over, and I said, you know, guys, I don't - we don't have all the facts. We don't know what's going on. I'm unwilling to pile on someone who I don't know, and it just doesn't seem fair to me. And I said, so we just don't have enough information. And Bear, someone must have said hey, you know, this is being talked about o