Security Now 346
Topic: Q & A
Recorded: Wednesday 28 March 2012
Security Now 346: Q & A #140
=== Stories that made the news ===
Law enforcement tools can bypass the iPhone passcode in under two minutes
Study finds major flaws in single sign-on systems
Flawed sign-in services from Google and Facebook imperil user accounts
Loose-lipped iPhones top the list of smartphones exploited by hacker
Stalking iPhones at Starbucks Follow-Up
Is Microsoft Challenging Google on HTTP 2.0 with WebSocket?
New Java Attack Rolled into Exploit Packs
Do I have Java
Just for Reading...
Coffee Section of the Podcast: Funranium Labs - http://www.funraniumlabs.com/the-black-blood-of-the-earth/
Toddy T2N Cold Brew System - http://preview.tinyurl.com/88c728t
Cold Drip Coffee and Tea Maker, 8-Cup - http://preview.tinyurl.com/7ec9djw
"I am a computer maintenance freak. I had been experiencing a problem which turned out to be a software glitch. However, I was at the time afraid my drives were going to go. I learned about SpinRite while reading up on SmartComputing." He has those capitalized, so maybe that's a site. And he said, "Double-checked with my office computer guy, who highly recommended SpinRite. Purchased and downloaded it today. It took a few hours to run through my drives. Seems like things are running better and faster than ever. So thanks. This was a great investment, and I will add SpinRite to my maintenance schedule." So he's got the right idea. Get it before your drives die, and they probably never will.
Questions & Answers
Question: [ 01 ]
Question: Doubling key size is security theater ...
Remember we had a question two weeks ago; somebody says, well, why don't we just - we went from 1024 to 2048. Why don't we just go to 4096 or something? If your key is large enough, he says, or she says, to make a brute force attack infeasible, a longer key doesn't add security. Beyond that point, a determined bad guy will try to exploit a weaker link, and there are plenty of those, like buggy software, spear phishing, social engineering to get a keylogger installed. Bad guys know the old story: If you're hiking in the woods, you don't need to outrun the bear as long as you can outrun the other hikers.
So it's the weakest link. And at some point we know - in fact, true crypto failure is almost unknown. I say "almost" because, historically, there have been some flaws found in older technology. We no longer use MD4. But even RC4, which was the crypto used in the very first WiFi, WEP - it wasn't the fault of the crypto, it was the fault of, again, the implementation wrapper that contained the crypto. And that's what we continue to see, just like we were talking about OpenID and other things. It was like, hey, all of this is super security and signed and keys and all that, but then they forgot to check to see if the signature was valid.
But the one thing I did want to remind us of, sort of as a counterpoint to this notion that doubling key size is security theater, is the notion of future-proofing, because that's something to keep in mind. There's this spectre of quantum computers hovering out there that are sort of going to just instantaneously try all possible keys at once. We're a long way from there. And lord knows what happens when those exist, because it would be the end of the world as we know it.
But for now, it is significant, I think, that that NSA facility is not attempting to crack things today. They're going back to crack things that they've been recording for the last 50 years, back when the underlying security technology was strong enough for then, but not for now. So there is this notion of the future. At the same time, 128 bits is plenty for connection-oriented things. Carbonite, for example, is using 128-bit encryption. That's a session key used on a point-to-point link which is regenerated and changed every time you reconnect. And sometimes you're also able to renegotiate the key on the fly, on a running basis, on these connections.
So 256 bits is plenty for data at rest as opposed to data in motion. So you want to choose key lengths properly. But the anonymous listener who wrote this question is certainly correct that, once you are future-proof, then all you're doing is wasting space and time and processor cycles.
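The arithmetic behind "infeasible" is easy to sketch. Here is a back-of-the-envelope calculation in Python, assuming a deliberately generous hypothetical attacker testing 10^18 keys per second; the numbers are illustrative assumptions, not a claim about any real hardware:

```python
# Expected brute-force search time for a k-bit symmetric key.
# The 10**18 guesses/second figure is an intentionally generous assumption.
GUESSES_PER_SECOND = 10**18
SECONDS_PER_YEAR = 31_556_952  # average Gregorian year

def years_to_search(bits):
    """Expected years to find a key after searching half the keyspace."""
    return (2 ** bits / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits:3d}-bit key: {years_to_search(bits):.2e} years")
```

At 128 bits the expected time already comes out on the order of 10^12 years, so going to 256 bits changes nothing an attacker can actually do by brute force - which is exactly the listener's point - though, as discussed above, longer keys can still matter for data at rest and for future-proofing.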
By: An Anonymous Listener
Question: [ 02 ]
Question: I found some buffer bloat
SN-345 equals fantastic. I ran the test and got "Network buffer measurements" - he's talking about the Netalyzr test. In fact, bit.ly/sn345 will take you to that Netalyzr test. I gave it to Russell, our IT guy, said this is a great thing to have in your toolkit because we learned a lot. So this guy had an uplink of - I guess an uplink latency of 490ms, downlink of 2,000ms. Yikes. That's what we call a bloated buffer. So what can I do about it? Is there any way to tell where the bloated buffer is? My router, a D-Link DI-602, is about eight years old. Could that have too much buffer?
By: Al Kraybill in Arlington Heights, Illinois
And this is the problem, that I saw some amazing measurements from our listeners. One guy was at 7.5, wait, not minutes, 7.5 seconds. So, I mean, 7,500ms in one direction. And the problem is that we're in that awkward place where something is getting a lot of attention, yet as the stickers say on the back of our televisions, there are no user-serviceable parts inside. Once upon a time you had tubes, and you'd take the back off the TV set, and remember you'd not want to use one of those cheater cables that allowed you to keep the thing fired up with the back off because you wouldn't want to electrocute yourself. And you'd pull the tubes out and take them down to the drug store and run them through the tube tester. I'm sure you remember those days, Leo.
And now we've just got boxes that are closed. And at the moment, while this issue is still so new, there just isn't anything for us to tune. There is, like, the newer version, the newest version of Linux, 3.3, is beginning to address this. Hopefully that will make it into some router firmware, like the Tomato or the DD-WRT stuff, where we'll begin to get this addressed. At the moment, right now, I don't think there's much that anyone can do. And, I mean, this is where I guess I'm glad that I'm as busy and backlogged as I am with existing projects because, I mean, I could just go off on this and never return.
I would love to do a utility that would tell you where in your link the problem was, and it's possible. But no, don't worry, I'm not going to let myself get distracted by that. So Al, I just, unfortunately, it's useful to know we have the problem. There's not much we can do except to work to minimize the buffering, which is to say, if you know you've got a - when something is saturating your bandwidth, when you know that in addition to saturation you also have delay, then the only thing you can do, if you're unable to find the delay and remove it, is work on whatever it takes not to allow that buffer to get full. Which, for example, means being careful not to be uploading a big file when other people in the household are trying to be interactive on their computers, do that some other time. And so at least now we understand what's going on, which is a big step forward, although it also creates some frustration in people who want to fix it.
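The multi-second figures these listeners are seeing fall out of simple arithmetic: a FIFO buffer adds, at worst, a delay equal to its own size divided by the link rate. A minimal sketch, where the 256 KB buffer and 1 Mbps uplink are illustrative numbers rather than Al's actual hardware:

```python
def buffer_delay_ms(buffer_bytes, link_bits_per_sec):
    """Worst-case queuing delay added by a completely full FIFO buffer, in ms."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A 256 KB buffer sitting in front of a 1 Mbps DSL uplink:
print(f"{buffer_delay_ms(256 * 1024, 1_000_000):.0f} ms")  # ~2097 ms
```

That is also why an eight-year-old router isn't automatically the culprit: a modest buffer on a slow link adds as much delay as a much larger buffer on a fast one.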
Question: [ 03 ]
Question: I found excessive router buffering
I ran the Netalyzr utility you mentioned on Security Now! - again, bit.ly/sn345. And besides excessive buffering, I got a lot of strange errors about DNS not working correctly. It doesn't look like I can change the DNS server in my Qwest Actiontec Q1000 DSL modem/router. It's set to some IPs ... I then changed it to 22.214.171.124 and then ran the test again. This time all the odd DNS errors went away, and it only found two problems. The first one, network packet buffering may be excessive. We estimate your uplink as having 5,700ms of buffering. Yeah. 5,700ms of buffering. And we estimate your downlink as having 450ms of buffering. Wow. 5.7 seconds is a long time. Can anything be done about that? Can I even tell where it's happening? This is kind of like the previous question. The second problem, DNS resolver properties lookup latency was 340ms. That doesn't seem so bad.
By Steve Coakley in Phoenix, Arizona
So I did want to mention that that test tests a lot of other things. And many users, just as Steve Coakley did, found other problems with their network that they were unaware of. My sense is that 340ms is a little slow for DNS lookups. I don't know why it would be so slow. Maybe it's just the DSL connection that he's got. I wanted to remind people that my own, GRC's DNS Benchmark, exists, and that that might be a good thing to use. There may be some solid, publicly available DNS servers other than the Level 3 servers, although those generally do perform up near the top of the list.
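For a rough spot-check without running the full Benchmark, you can time a lookup directly. A minimal sketch using only the Python standard library (note that resolver caching will make repeated runs unrealistically fast, which is exactly the sort of thing a real benchmark has to account for):

```python
import socket
import time

def dns_lookup_ms(hostname):
    """Time a single forward DNS lookup, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

# Anything consistently in the hundreds of milliseconds, like the
# 340ms reported above, is worth investigating.
print(f"localhost: {dns_lookup_ms('localhost'):.1f} ms")
```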
But the DNS Benchmark from GRC, you just - in fact, I think you can just put "DNS Benchmark" into Google, and I pretty much claim that territory now because the Benchmark is a good one. It is Windows-only, but it is friendly with Linux under WINE, and you can run it on a Mac with WINE, as well. And again, we've discovered huge buffers. One of the other problems that we have is that it could be ISPs buffering in their routers, so those buffers are completely inaccessible. And we also know that later model network adapters have large ring buffers in the kernel, so that's introducing delay. And we may never have access to that.
So again, the best thing we can do is, like, ask everybody, make noise, jump up and down. Hopefully this is a problem which - well, we know that it just sort of crept up on us. Nobody was really paying attention to it. Now a lot of attention is coming to it because people are downloading large content, not just being interactive. This was never a problem when everything was just web surfing, clicking on links and being interactive. This became a problem when some member of the family wants to watch TV over their Internet connection, which was crazy five years ago.
Question: [ 04 ]
Question: Comment about our server security conversation
I listened with horror, Steve, as you agree with Leo's supposed expert. You did not agree, by the way. I want to make this clear. You sat there and nodded, but you didn't necessarily agree.
I maintain websites with in excess of 14 million unique visitors a day. We have never been compromised, yet we see hundreds of attempts per hour. It's not PHP that is the problem. It is the code that is written in PHP - well, okay, thank you, master of the obvious - and the willingness of a system administrator not to correctly set file and directory permissions. And that one I might agree with. Bad code can be written in any language, as can good code. The difference is bad coders are tempted toward the use of toy languages such as PHP. I don't think he understands what I was talking about.
There's no excuse for the injections that have happened and the placement of code on Leo's system. I was especially horrified by your acquiescence, which again, you did not do, to my comments that in the good old days we had a cgi-bin directory. We still have a cgi-bin if we want, he says.
By Magic John in Colorado, USA
And so I went over, and I said, you know, guys, I don't - we don't have all the facts. We don't know what's going on. I'm unwilling to pile on someone who I don't know, and it just doesn't seem fair to me. And I said, so we just don't have enough information. And Bear, someone must have said hey, you know, this is being talked about over at GRC. So he, to his immense credit, came over and said, hey, it's me, I have a thick skin, so let's talk about this.
And what I had posited was that TWiT was in transition, and this is from things that you had said, Leo. I mean, you were quite literally a cottage industry for quite a while, just down the street. And Bear commented in the newsgroup, for example, that he had found the problem and turned it off, but somebody else turned it back on, and that that was really the way this came to notice and was the problem that it was. And what that really said was that you guys are growing, and that there's a need for policy.
LEO: But let's put that aside because of course that's a bad mistake. But the larger question is, and this was the thing that I really would love to get to the bottom of, is that as long as a website is being changed in any way, you're going to have security flaws, and that breaches are not - this is my real question. Are breaches uncommon or common?
STEVE: And I'm not an expert. I cannot say.
LEO: Bear is of the opinion - and he runs a very big site and knows others who run other well-known sites - he's of the opinion that, as we have become more and more high-profile, we are certainly getting more attacks; and that it is very, very difficult, if not nigh impossible, to prevent breaches of some kind. The question is really how quickly you see them and remedy them. But on the other hand, people like our commentator here do raise the point - well, Magic John says you shouldn't allow file systems to be written to and shouldn't be able to do - but I think that that's the point of an exploit, is that it somehow allows access at a higher level. I don't know. I'm not an expert. I don't know.
STEVE: Well, and we don't know, none of us know except whoever is your, I mean, the gurus of your web server, for example...
LEO: There's the guy, he's the guy who runs our server, so...
STEVE: Are all of the directories read-only except, like, very carefully tuned so that the equivalent of cgi-bin, I mean, for example, at GRC there's only one location where anything can be run. Every...
LEO: And this was my problem with PHP. PHP, unlike cgi-bin, can be put in any directory and run from any directory. So, I mean, admittedly, all directories should be write-proof except that you have to have some directories that are going to be written to; right?
STEVE: Right. This is all server-side.
So what we can definitively say is it is really hard. There are old curmudgeons out there who want to say, oh, no, this is really easy. It's not easy. I mean, my site is easy because I don't have any server-side scripting stuff like that. I haven't had to deal with this. I do have static HTML. And then my own stuff generates dynamic pages like the ShieldsUP! page and so forth. So that's why I really can't speak to the challenges, because I haven't faced them myself. But we know that security is difficult. And so I think, I would imagine, that the lesson that TWiT has learned is it's time to really focus on security. I mean, there probably shouldn't be a situation where Bear could disable something which is now a known problem, and somebody else could turn it back on again. So...
LEO: Well, yeah. And that was a miscommunication, and that certainly, you know, that's not going to happen again.
STEVE: Yeah. And so there needs to be a single point of responsibility, somebody who really has that job, and sort of like moving...
LEO: Well, that's a problem because we have web developers that are working on the site. Now, I think a big problem was, with this previous TWiT.tv development, we didn't have a production and a development site. We were doing stuff - code went live, live. And that was a big mistake. We're not going to do that anymore.
STEVE: So also there's some live and learn.
Question: [ 05 ]
Question: Spidey - or SPDY
I understand why there's a scaled-down version of server push which hints to the client what they should be asking for next. But that appears to hinder the speeding-up of the bandwidth as the client would still have individual requests. I would think the parallelization of requests and content returns would have a better payoff. Let the browser request all it wants, after filters like NoScript or image block, and fill the bandwidth with only desired content. All in all, SPDY sounds good. But from the little I've heard and read it seems focused on delivering all content, a business perspective, rather than just desired content, a user perspective. I'd love to see a more middle-of-the-road perspective. I hypothesize adoption would be much faster on both sides. Thanks, G. Sveddin
By G. Sveddin in Southern California
Well, it certainly changes the model from a client-side request to one which does offer some server push. Now, in fairness, the server push side is regarded as an advanced feature. It isn't part of the base SPDY spec. Both of those things, the server hints and the server push, are sort of more on the experimental fringe. It's like, well, this is in the spec. We didn't want to, like, not design it in so that we wish we had it later because maybe it would be a good thing. My sense is it isn't something which is being actively used and deployed at this point, probably for much the same reason. And I really do, I mean, as a proponent of only getting what we ask for and of things like NoScript, he's right. Some of these scripting libraries are just big blobs which someone gives you the URL to it, and your browser's going to suck it down and take all the time to do that, which is crazy if you've got scripting turned off.
So, no, I think his point is well taken. For what it's worth, my sense is it isn't happening now. And I would argue that the business perspective is giving the user the most responsive page possible, and that you could, for example, give them images. Well, of course, here he was saying he wants to be able to turn images off. So that's a problem. Anyway, so I guess it's certainly a tradeoff, and I thought he raised a good point, which is why I wanted to include it.
Question: [ 06 ]
Question: Wondering about SPDY and CDNs
By Brian M. in Edmonton, Canada
So what he's assuming is that this - we talked about the notion of a single connection being made between the browser and the server, and sort of had to get over the idea that that could be faster than multiple separate connections. And as long as we've got a single connection highly used, a fully utilized single connection, which is what SPDY for the first time allows because we can overlap things, that single connection can be overall much more efficient than making separate connections. But that absolutely doesn't keep you from establishing additional SPDY connections to entirely different domains.
So the idea would be you would have one connection per domain or per server from the browser out to that remote asset, and a different connection to the CDN. So you're still going to get the benefit of SPDY. In fact, the CDN might not support SPDY, so the browser would seamlessly use a non-SPDY connection to the CDN, yet a SPDY connection to the server in order to pull all of those assets from one place. So there definitely is no assumption or presumption on the part of SPDY that it will only be making a single connection to a remote location. It'll make a single connection per server. But if you're pulling content from 36 different places, it'll set up 36 connections, and those that are SPDY can run faster than they would otherwise.
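The connection planning described above can be sketched as simple grouping: one (ideally multiplexed) connection per origin host, however many assets each host serves. The function name and URL-grouping representation here are illustrative, not anything from the SPDY spec:

```python
from urllib.parse import urlparse

def plan_connections(asset_urls):
    """Group asset URLs by origin host: one connection per host,
    with all of that host's requests sharing (multiplexing over) it."""
    by_host = {}
    for url in asset_urls:
        by_host.setdefault(urlparse(url).netloc, []).append(url)
    return by_host

page = [
    "https://example.com/index.html",
    "https://example.com/app.js",
    "https://cdn.example.net/logo.png",
]
# Two connections: one to example.com (carrying two requests), one to the CDN.
print(plan_connections(page))
```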
Question: [ 07 ]
Question: Just wondering about the future readability of computer media
Mr. Gibson, sir. My father's church is getting ready to celebrate its 100th anniversary pretty soon. They plan on opening the time capsule under the cornerstone - that's neat - and adding a CD with photos on it. It hasn't been opened since the '60s. I was thinking about this. What would be better for them to use, figuring the technology still exists to read them, a CD/DVD or a thumb drive? Thanks for your time.
By Jon in Kentucky
Because, for example, I have around here some MFM hard drives with data on them, yet no MFM controllers. Or if I have an MFM controller, it's got an ISA interface - ISA being the original bus from the PC - and I have no motherboards with ISA buses anymore. It is a really good point that we think in terms of the technology we have and are using. But this whole issue of whether we will be able to read it in the future - not just whether the medium itself will hold up - I mean, that's a question: will the writable CD be readable even if you had the technology? If I had an eight-track tape from back in the day, I don't have any eight-track tape players. Of course vinyl has come back into vogue, so there's an exception to the problem of needing a turntable to play the disks that you and I were listening to in college, Leo.
Well, you know what you should do, you should print prints. Because those will be good.
That's exactly right.
And better yet, put it on papyrus. It's the only thing we know lasts thousands of years.
Put it on acid-free paper.
You want it on acid-free paper so it will not yellow. All of my DEC manuals from the early 1970s are just...
They're crispy, yeah.
...incredibly yellow because they were not on acid-free paper. I printed all of the "Passion for Technology" books that I published on art-grade, acid-free paper, not that I particularly thought they'd be in any time capsules for any reason, but I just thought, well, I want them to look the same way a hundred years from now.
But music, if you're talking about music, a vinyl record is going to be very simple to reverse-engineer because you can look at it, and you say, oh, I see, these are waveforms. I just need something to read these waveforms. Maybe it's possible a CD will be as easy to reverse-engineer. Certainly there will have been a lot of CDs around, and presumably any future archeologists a hundred years from now will certainly know how to read CDs. Hard drives, a little more opaque, if you ask me.
Yeah, I was thinking maybe put the CD and a USB CD drive in. That is...
Ah. A reader. Put a reader in.
Put a reader in. Now, the problem, of course, is the USB interface. We're already moving...
...to USB 3.0. And so a hundred years from now we're not going to have - well, I don't know when they're going to open this again. But 50 years from now we're not going to have USB. We'll be onto some Twilight Zone technology. I mean, Intel's already pushing stuff that's, like, can it even work, it is so fast. It's like, okay.
Make prints. The human eye has not changed its interface in tens of thousands of years.
That really is the answer is print these things out. Instead of doing a CD, print them out. That's what you need.
But this is actually a very big topic. And what I find fascinating is it is now the province of librarians because librarians have become information specialists and archivists. And they are at the forefront of this. It's fascinating stuff. It's something I talk about on the radio show a lot because this is something real people do want to think about and really don't know what to do about.
Well, especially photos that are sort of, by definition, for the future.
But if the future can't look at them, then you've sort of defeated your purpose.
Question: [ 08 ]
Question: Listening to your discussions on the show about how SpinRite works and how it shows the drive's bad sectors and induces the drive to map out weakened sectors before they become terminal is all well and good. But I've never heard you discuss what happens if the drive runs out of spare sectors and cannot map any more bad sectors? Oh, you're in trouble if that happens [laughing]. You said that sectors are mapped out as bad at the factory. So when the drive is new, its capability to map out additional sectors would already be somewhat compromised. Is there any way for us to know where the drive stands? Can you run out of places to put stuff?
By: Jared in Western Australia
Oh, yeah. You can. Well, what happens is that drives generally store extra spare sectors at the end of each track. And I've talked about mapping sectors out. Well, the way that actually happens is they will move all of the sectors from the bad spot to the end of the track down by one. They shift that whole block of data down so that the drive can continue to read at normal speed. It just sort of reads past the bad sector and finds the one it was looking for, the one it expected to see in the bad spot, a little bit further down. So they actually just shuffle the balance of the track downwards toward the end of the track.
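This "shuffle toward the end of the track" is often called sector slipping, and the bookkeeping can be modeled in a few lines. This is a toy model of the general idea, not any particular drive's firmware:

```python
def slip_map(track_len, spare_count, defects):
    """Map logical sectors to physical slots on one track, slipping each
    logical sector past any defective slot. Entries become None once the
    track has run out of spares -- the failure case Jared asks about."""
    mapping = []
    physical = 0
    total_slots = track_len + spare_count
    for _logical in range(track_len):
        while physical < total_slots and physical in defects:
            physical += 1  # slip past the bad slot
        mapping.append(physical if physical < total_slots else None)
        physical += 1
    return mapping

# 8 data sectors, 2 spares at track end, physical slot 3 is bad:
print(slip_map(8, 2, {3}))  # [0, 1, 2, 4, 5, 6, 7, 8]
```

With more defects than spares, the tail of the mapping comes back as None: data that the track simply has no place left to put.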
Well, there is a limit to how far they can go. And this is the one place where SMART, S.M.A.R.T., the Self-Monitoring, Analysis, and Reporting Technology - SMART is the acronym - will show you something. There are different parameters that SMART has, and one is reallocated sectors. And what you don't want is for that one to run down to zero. The problem with SMART is that manufacturers never wanted to tell us anything about what was going on inside the drive. They wanted it to be a black box that we buy and are happy with.
But Compaq, back in the old days, said no, we insist - Compaq was like a major IBM clone manufacturer, for those who don't remember the name. They insisted, and they had enough purchasing power to force drive manufacturers, the big ones at the time - Western Digital and Seagate, Micropolis and Maxtor - to force them to give them a means of asking the drive how it feels. And Compaq actually famously used SpinRite on their dock where they were accepting drives. They would over-order drives and use SpinRite to prequalify them before putting them into their machines, and would send back the drives that SpinRite said were weaker, less good, than the majority of the drives. And manufacturers didn't like that they were doing that.
So they said, okay, fine, we'll - and that's where this whole SMART system came from. It is a sadly weak specification. It's not something that anyone should be proud of. Manufacturers were compelled to do it, or they would lose a major vendor in the form of Compaq. And so they added this. The point is that it doesn't give you a lot of data. It sort of gives you a happiness indication. And when that runs to zero, then you're really in trouble. So that's one of the things that SpinRite monitors while it's doing its work. There's a screen there that monitors the SMART parameters on the fly and also is able to show you the rate at which error correction is occurring, which allows you to get some sense for the relative health of the drives.
So they are black boxes. The manufacturers don't want us to see what's really going on. There is no specification for asking a drive for its bad block tables, for how many spare sectors it has remaining, for which tracks are almost out of spares. There's no interface like that, that is available to the outside world. So we do the best we can. And of course running SpinRite on the drive from time to time allows the drive to at least know what's going on with it, for what good that does.
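What we can see from the outside is the SMART attribute table, for example via `smartctl -A` from the smartmontools package. Here is a sketch of pulling the reallocated-sector attribute out of that kind of tabular output; the sample row below is made up for illustration, and real column layouts vary by tool and drive:

```python
def reallocated_sectors(smart_table):
    """Extract (normalized value, threshold, raw count) for attribute 5,
    Reallocated_Sector_Ct, from smartctl-style '-A' tabular output.
    Returns None if the attribute isn't present."""
    for line in smart_table.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[3]), int(fields[5]), int(fields[9])
    return None

# Hypothetical sample row in the usual smartctl column order:
sample = "  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0"
value, threshold, raw = reallocated_sectors(sample)
print(value, threshold, raw)  # 100 36 0 -- healthy: value well above threshold
```

The normalized value counting down toward the threshold is the "happiness indication" described above; the raw count is as close as the interface gets to telling you how many spares have actually been consumed.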
Question: [ 09 ]
Question: I found news of Astaro and Stochastic Fairness Queuing - what does this mean?
Guys, I thought it worth a mention, I've been playing with the Astaro product, the home ISO download. Good for you. I think this is a really great product. A few weeks ago I enabled the QoS (Quality of Service) settings. The Astaro Security Gateway (ASG) allows you to enable an Upload Optimizer and a Download Equalizer.
They are described as follows by Astaro: The download equalizer: If enabled, Stochastic Fairness Queuing (SFQ) - this is becoming the acronym winner of the week - and Random Early Detection (RED) queuing algorithms will avoid network congestion. In case the configured downlink speed is reached, packets from the most downlink-consuming stream will be dropped. That's the download equalizer. The upload optimizer, if enabled, will automatically prioritize outgoing TCP connection establishments, that is, TCP packets with the SYN flag set; acknowledgment packets of TCP connections, that's the ACK flag set and a packet length between 40 and 60 bytes; and DNS lookups, UDP packets on port 53. I thought you might find this interesting. Thanks for the great podcast. Bob. Talk about dumping this in your lap. So there. What is that?
By: Bob Lindner in La Crosse, Wisconsin
What's really interesting - okay, we talked about the first part, the download equalizer that uses this Stochastic Fairness Queuing and Random Early Detection, the idea being that, as a buffer is filling, the router will begin discarding packets statistically more often to prevent the buffer from ever getting completely full. And that means that, if a particular stream is hogging the bandwidth, the chances are its packets will get dropped, which will send its endpoints the message that they need to back off and slow down. So RED, Random Early Detection, is the most often used form of Active Queue Management - there's another acronym, AQM - and that's what we're going to be dealing with for the next decade: smarter, active queue management.
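RED's core is just a drop probability that ramps up linearly between two queue-depth thresholds, instead of waiting for the buffer to overflow. A minimal sketch, where the threshold and max-probability numbers are illustrative defaults, not Astaro's settings:

```python
def red_drop_probability(queue_len, min_th, max_th, max_p=0.1):
    """Random Early Detection: probability of dropping an arriving packet
    as a function of (averaged) queue depth."""
    if queue_len < min_th:
        return 0.0          # queue is short: never drop
    if queue_len >= max_th:
        return 1.0          # queue is effectively full: always drop
    # In between, ramp linearly from 0 up to max_p.
    return max_p * (queue_len - min_th) / (max_th - min_th)

for depth in (5, 20, 30):
    print(depth, red_drop_probability(depth, min_th=10, max_th=30))
```

Because the heaviest stream contributes the most packets to the queue, it is also statistically the most likely to take the early drops - which is how RED signals that stream, specifically, to back off.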
But the upload optimizer is really interesting. This is a move to the front algorithm. It says, if enabled, this option will automatically prioritize outgoing TCP connection establishments, that is, TCP packets with a SYN flag, as in synchronized set; acknowledgment packets of TCP connections with the ACK flag set and short packets, that is, between 40 and 60 bytes; and DNS lookups. So this is neat because it means that the little tiny packets which we need in order to keep our connections running move to the front of the queue. They don't have to wait behind a long delay of blob that's being uploaded or downloaded. The Astaro technology gets them out right away. That doesn't delay the blob because these are necessarily very small packets. They're going to be 40 bytes rather than 1,500 bytes. So in terms of the packet delivery time, there isn't much cost, but they do allow the system to act as if there is no bloat in the buffer. So bravo, Astaro. They clearly gave this some thought, like before the rest of the industry had.
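The classification rules as quoted can be sketched as a simple predicate plus a stable move-to-front sort. The packet-dictionary representation here is an assumption for illustration, not Astaro's actual data model:

```python
def is_priority(pkt):
    """True for the packets Astaro's upload optimizer is said to expedite:
    TCP SYNs, short TCP ACKs (40-60 bytes), and DNS lookups (UDP port 53)."""
    if pkt["proto"] == "tcp":
        if "SYN" in pkt["flags"]:
            return True
        if "ACK" in pkt["flags"] and 40 <= pkt["length"] <= 60:
            return True
    return pkt["proto"] == "udp" and pkt.get("dport") == 53

queue = [
    {"proto": "tcp", "flags": {"ACK"}, "length": 1500},           # bulk upload data
    {"proto": "udp", "dport": 53, "flags": set(), "length": 48},  # DNS lookup
    {"proto": "tcp", "flags": {"SYN"}, "length": 60},             # new connection
]
# Stable sort: priority packets jump ahead, bulk data keeps its relative order.
queue.sort(key=is_priority, reverse=True)
print([p["length"] for p in queue])  # [48, 60, 1500]
```

Since the expedited packets are tiny compared to full-size data frames, sending them first costs the bulk transfer almost nothing while keeping the interactive traffic responsive.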
Question: [ 10 ]
Question: Just wondering about private email
By: Mr. G., from Kyle D. in B.C.
STEVE: So that's really one for you, Leo. I have my own servers at GRC, so I've never needed to think about the repository. But for people who, like with Gmail, have all of their mail living somewhere, Kyle is becoming a little nervous about that. And so I wondered if you had any suggestions for probably smaller providers who maybe are a little more honorable.
LEO: I do. But I would say, first and foremost, that any service that you're going to use is - you're trusting them. And maybe we've all lost trust in Google, Yahoo!, and AOL; maybe not. But most services have similar access to your content and, if they provide antispam services, are doing similar scanning of the content. So, I mean, that's just the nature of antispam. They're looking at keywords in the same way that Google does. Google does it for both antispam and for advertising, so maybe you don't like that.
Another solution that you might want to look into - I think I'm moving my email to them - is a U.S.-based company called Island Email. I know about them because they support MailRoute, and so they have very good antispam filtering, but that means that MailRoute is scanning your email. Your Internet service provider is probably similar to one of these services in the sense that they also may do some scanning of your mailbox. Unless you host your own email server, I think that's the only way to know for sure that only you control that content.
Finally, I'll tell you what I am currently using for my IMAP. They were recently purchased by Opera, so that may be a nonstarter. It's called FastMail. I like FastMail a lot because they are, I think, the most sophisticated IMAP server out there. And Opera has not changed them in any way. In fact, the only thing Opera has done, as far as I can tell, is give them more resources.