I’ve experienced a lot of cognitive dissonance over the Basecamp disclosure and exploit tools release these last few months. I might as well explain more of the thinking behind why what we’ve done is, in the end, a good idea.
I’ll repeat Dale first: PLCs are vulnerable. EOL.
This next bit is speculation, but I suspect that ICS-CERT caved to vendor requests when it redefined the term ‘vulnerability’ at Weisscon to exclude designed-in issues. The D20, for example, is configured using TFTP. Part of the configuration process requires executing commands on the command shell, and this is implemented via TFTP in a quite interesting way (I’ll just say “the code documents itself” — read the metasploit module d20tftpbd for amusement).
The D20ME is therefore insecure by design, and their desktop configuration software uses this protocol to set up the device. ‘Fixing’ it means breaking their software. According to the Weisscon version of ICS-CERT (we’ll call it ICS-CERT 2.0), they don’t treat this as a vulnerability. In a sense, my tools can’t even be defined as ‘exploits’ under the DHS definition — an exploit can’t exist unless there is a vulnerability. My ‘tools’ are just using designed-in ‘features’ of the D20 to let a user retrieve accounts. They can even be used for positive purposes (for example, emergency access if an operator doesn’t know the password).
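To make “insecure by design” concrete: TFTP (RFC 1350) has no notion of credentials at all, so any host that can reach UDP port 69 can request files by name. The sketch below is a minimal illustration of the protocol itself, not the actual Basecamp tooling; the `fetch_file` helper and the `config.bin` filename are made up for illustration and are not real D20 artifacts.

```python
import socket
import struct

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request (RRQ) packet per RFC 1350.

    Opcode 1 = RRQ, followed by the NUL-terminated filename and
    transfer mode. Note what is absent: there is no username,
    password, or session token anywhere in the protocol.
    """
    return struct.pack("!H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"

def fetch_file(host: str, filename: str, timeout: float = 2.0) -> bytes:
    """Request a file from a TFTP server (first data block only, for brevity)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_rrq(filename), (host, 69))
    data, _ = sock.recvfrom(516)   # DATA packet: 2-byte opcode + 2-byte block number + up to 512 bytes
    return data[4:]                # strip the 4-byte TFTP header
```

Anyone who can emit those few bytes gets the file; “authentication” is simply network reachability.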
DHS was completely wrong on this issue, whatever the motivation. I like the DHS guys, they’re a good group and I hope to work with them in the future some day. I think that they’re stuck in a hard spot, paid by vendors to test their products, unable to release results of those tests. So they’re in the best position to do anything about crappy security in control systems because they have access to this expensive equipment and software, but they can’t help put pressure on vendors. They have us researchers beating on them from the north, vendors beating on them from the south, Congress beating on them from the east, and utilities beating on them from the west. This analogy lacks the 3rd dimension — airstrikes — I’m not sure who that is, but I’ll bet that there is someone else. Still, they’re dead wrong on this issue. I feel a bit like Rorschach here, but it’s a view on which there is no compromise. Using these vulnerabilities, I can cause misoperation of a control system. I can’t hammer this issue any longer, so I’ll stop. ICS-CERT appears to have backpedaled, at least: their announcements clearly call the design issues we uncovered in Basecamp ‘vulnerabilities.’
Basecamp was, to me, partly about calling ICS-CERT out on this. So that appears to have worked, although there’s still no explicit acknowledgment from them that they were wrong. So we keep up the pressure.
Publishing tools is kind of mean. I totally agree. We didn’t inform the affected vendors that this was coming. No doubt I’ve lost a few friends at my former employer over the disclosure method. But it’s time to move on.
There is zero doubt in my mind that attackers have been looking at PLCs and RTUs for years, if not a decade. There is zero doubt in my mind that various world governments have research projects identifying these vulnerabilities. I don’t think that any .gov security researcher who has looked at any of these devices would think that the Basecamp disclosures were even interesting. They’d probably say, “Yeah, we figured that out in one hour instead of the 16 that it took these jokers. Go check the file server for PoC,” or something.
The trouble is, utilities and industrial control facilities have no idea how bad the equipment that they buy is. Even the best pieces of equipment — imnsho things that I beat to death while employed by SEL — have some terrible security practices — plaintext protocols, inconsistent documentation, and the like. In 2012, my smartphone is more featureful and far more secure than the equipment that controls the grid. Sure, smartphones are produced in far greater quantities, which pads their profit margins, but they’re also 10-100x less costly than a typical PLC. Oh, and I don’t pay for a support contract for my smartphone — a few years after I bought it, I can still get firmware updates.
Personally? I think that these tools are going to be used for bad at some point soon. Does it suck? Yes. I’m sure I’ll lose sleep over it. Seriously. Actually, I already do lose sleep over it. Breaking stuff in my basement is a lot of fun. The idea of someone else breaking stuff in a brewery is not a lot of fun to me.
But you know what? If the tools are used for bad now, the attacks will be quite lame. The big, bad, dangerous skr1pt k1dd1ez won’t know what the heck they’re doing. Stuxnet taught us a very important lesson here — where control systems are concerned, the intelligence-gathering portion of an attack is as important as, if not more important than, the exploit itself.
If we don’t release the tools now, vendors have no incentive to change their ways. Attacks will be way more effective in ten years if the controllers operate the same way that they do now. The Stuxnet cat has been out of the bag for a while, and hacker groups and foreign governments are stepping up their games looking at control systems. In another five or ten years they may have the intelligence required to cause actual harm. The exploits are trivial, so basically attackers will have no impediment to causing real harm. Full stop.
If we suffer another lost decade, we’re screwed. This isn’t a, “the sky is falling,” street-preacher exercise. It’s just reality. Better to put tools in the hands of script kiddies now, when the tools are less effective, than to only let really bad guys be the only ones that have them, waiting for the right time. If an incident happens, a metric boat-ton of press will result, and something will have to give. That something will be fixing basic security problems.
It’s weird, but I think that we’re at the right moment defensively — we have the right combination of control systems fragility and public awareness. Hopefully this disclosure will make the impact that is needed. I don’t know if the outcome is going to be better controllers via media pressure, client pressure, or public policy shift. I do get the feeling that this is going to spark a trend in more secure controllers, though, whatever the secondary catalyst is. More secure controllers exist, that much is for sure — Basecamp only touched upon the controllers that we had on hand. Some really great products exist from my old company, and from talking to other big companies at S-4 I can tell that they have secure controllers out now, and some way better products in the works.
Personally I hope that pressure comes from end users, and that vendors start being more honest about their product spectrum. Vendors often say that security costs money, and end users won’t pay for it. End users are paying for it already, though, via firewalls, data diodes, and a boatload of work, nail-biting, and ulcers to separate their control networks from their corporate networks as best they can. If a controller costs more money but means that inter-network security can be a little more relaxed and carry less risk, end users save money in the end.
This is also good practice for vendors. I agree, the Basecamp vulnerability disclosure was not ideal for vendors. It’s not the worst that could happen, though. The worst that could happen is a forensics call for incident response (sidenote: many Basecamp systems don’t log the attacks that were discovered). I view Basecamp as a nice wakeup call to vendors — even the good vendors — that there are going to be security incidents and that they had better be prepared to deal with them responsibly and openly. Frankly, I’ll be happy if a vendor puts a more mature face on the disclosure than Digital Bond has. A vendor could do this by simply testing the issue, acknowledging it, and then providing their customers with information about the vulnerabilities (all of them), temporary mitigation, and a timeline for future patches and security features. Having the discoverer run a validation would be pretty swell, too.
Image by papalars
There seems to be a great deal of confusion as to what Marty Edwards meant when he said that they wouldn’t treat design issues as vulnerabilities.
Clearly, protocols such as Modbus are vulnerable. I don’t think ICS-CERT should treat vulnerabilities in them as if they were just another patch cycle. Likewise, TFTP is vulnerable. DHS isn’t going to say much about that either. These are structural issues, not coding flaws. These problems cannot be treated as if they were bugs. There is no easy patch that will fix Modbus, nor one that will fix an insecure boot loader designed back in the early 1990s.
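To spell out why Modbus is structural rather than a bug: the open Modbus/TCP specification defines no credential field anywhere in the message. A “Write Single Coil” request is just twelve bytes, sketched below from the public spec (this is an illustration of the wire format, not anyone’s tooling); whoever can reach TCP port 502 can command the device.

```python
import struct

def write_single_coil(transaction_id: int, unit_id: int, coil_addr: int, on: bool) -> bytes:
    """Build a Modbus/TCP 'Write Single Coil' (function 0x05) request.

    The message is just the MBAP header plus the PDU. There is no
    field for a username, password, or signature: possession of a
    TCP route to port 502 is the only 'authentication'.
    """
    value = 0xFF00 if on else 0x0000                        # per spec: FF00 = ON, 0000 = OFF
    pdu = struct.pack("!BHH", 0x05, coil_addr, value)       # function code, output address, value
    mbap = struct.pack("!HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

No patch changes that; fixing it means changing the protocol, and everything deployed that speaks it.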
What would you have DHS do about this? You’re proving that we all live in a town filled with houses that are highly flammable and you propose leaving instructions to build Molotov Cocktails in a local school yard to prove that point. And somehow we’re all supposed to use this as an excuse to improve building standards. Riiight.
I’m not going to argue with those who are going to publish this stuff without coordination. You have your philosophy, I have my reality. The reality is that even if there were a will, there is no way we can keep up with the pace that you guys will have finding this stuff. Structural design issues cannot be patched as if they were just another damned software flaw. Furthermore, these devices are embedded in the middle of some very high-energy processes. Fixing them will require the participation of many experienced and certificated people who simply do not exist right now. The post-upgrade testing alone will be VERY expensive.
Even if the standards existed for writing secure, stable firmware, there aren’t enough people nor is there enough time to do this right. You have already posted the scripts.
I will criticize DHS for many things, but I feel they have struck a realistic balance here. Stop being so philosophical and start looking at the reality around you. This is not and it never was about the vendors. If you’re wondering about GE and the D20, I have it from several sources that they’ve been trying for over a decade to wean people off of that old hardware because they have newer, more serviceable, and better performing equipment to sell.
Again, this is not about the vendors. It’s about the public’s expectations of their utilities. It’s about the need to legislate who is responsible for what, and it’s about setting standards. The vendors are merely building what the utilities and ultimately the Public Service Commissions are asking for.
You want to know who is at fault? We all are. Yes, we know the emperor is buck naked. However, there comes a point at which further laughing and poking him with a stick will not bode well for you. What happened at S4 was not a good response to this situation.
Jake,
>Even if the standards existed for writing secure, stable firmware, there aren’t enough people nor is there enough time to do this right.
The problems of fixing the vulnerable devices out in the field and the backward compatibility issue is one thing.
But to assume that the vendors can’t do better is a big mistake by the end users. We’re talking about types of vulnerabilities that are ridiculously easy to avoid. Even ignorant vendors have known about this for at least five years. I have no sympathy when we find buffer overflows, instabilities, and backdoors in a 2011 firmware version of a brand-new device generation. During a security FAT last year we recommended that a client not accept such flaws (otherwise they would have to live with them for 10-15 years). They followed our advice and put a $$$-deal on hold. Guess what happened…
I like the “Naked Emperor” analogy – not talking about the problem will not change anything.
Jake – I’ve been very upfront that we are all responsible, see http://www.digitalbond.com/2012/01/23/project-basecamp-vigilante-hopes/ , including consultants and “SCADA security guru’s”.
Basecamp is our plan to do something about it. I understand you and others disagree with this approach.
BIG QUESTION: What are you going to do about the problem?
and the answer can’t be some little tweak of what you’ve been doing for the last five or ten years. People listen to you, and it’s time to step up with a different approach. It doesn’t have to be Basecamp/making the problem abundantly clear to all, but what is your answer?
Dale
I think I’ve explained my opinions in the past. I do work with others; and because of promises of confidentiality, I can’t disclose any more than that.
I think many are seeking a technical solution to a social problem. Problems such as protocol flaws and old firmware in service don’t start from nowhere. They happen because we do not have the correct motivations, criteria, and mandates.
Yes, the system is quite broken. We need a group to get together to request political help to assign responsibilities and perhaps certification requirements. We need to set goals and practices.
To make matters more difficult, I have ethical obligations to my employer, like many in the utilities, such that I cannot lobby for regulation that might improve my standing in the company. Anything I say could be misconstrued as such; thus I am limited in what I can say and to whom.
By the way, I have an increasingly busy day job, and an increasing number of obligations for various agencies. Incidentally, I’m honored to report that last week I was nominated and elected to Chair the DNP User Group.
Sleep is overrated.
does no one else find it quite hilarious that the security monkeys are as or more prone to spewing hyperbolic BS as their vendor counterparts? “Firesheep moment” … really?
This was all you were able to find? And then try to pronounce some revolutionary discovery?
It is not … and has never been … a secret that the industrial protocols — with their origins in non-Ethernet fieldbuses — do not have inherent security as part of their design.
How can someone then stand up and claim to have discovered this? Pure BS.
Dear anon,
I believe I have used the highly technical term bullshit more than most people in presentations, when talking to the press, or when discussing stuff with peers at a bar, so I think I am qualified to weigh in here. I believe that you don’t understand the courage that the Digital Bond team demonstrated with Project Basecamp. And I don’t believe that you have any right to such cheap criticism without identifying yourself.
@anon
Firesheep didn’t do anything revolutionary, either…it just showed that unencrypted communication could be intercepted. I think that the comparison of the Basecamp findings to Firesheep is pretty accurate. Everybody knew that using non-SSL login forms was bad back in the day, yet most major websites did it (facebook, gmail, etc). The Firesheep tool caused a lot of websites to switch to using https for their form submission, because it made demonstrating that vulnerability incredibly easy.
Likewise, everybody knows that this legacy SCADA stuff is vulnerable — we’re just making easy-to-use tools to demonstrate those vulnerabilities, and highlighting those vulnerabilities to hopefully spark some changes.
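The mechanics Firesheep relied on are worth spelling out: an unencrypted protocol puts credentials on the wire verbatim, so “interception” amounts to a substring search over captured bytes. A minimal sketch (the captured payload below is fabricated for illustration; in real use it would come from a packet capture):

```python
from typing import Optional

# A captured TCP payload from an unencrypted HTTP session. These bytes
# are fabricated for illustration; a real tool would read them from a
# packet capture off the wire.
captured = b"GET /app HTTP/1.1\r\nHost: example.com\r\nCookie: session=deadbeef\r\n\r\n"

def extract_cookie(payload: bytes) -> Optional[str]:
    """Pull the Cookie header out of a plaintext HTTP request, if present."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"cookie:"):
            return line.split(b":", 1)[1].strip().decode()
    return None
```

That session token is all a browser needs to impersonate the victim, which is exactly what Firesheep automated, and why plaintext SCADA protocols deserve the same comparison.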
I’d love to buy you a cup of coffee (or tea, or soda, or whatever your favorite beverage is) and gab about security research sometime. Any interest?
Reid