The Null Device
Posts matching tags 'bruce schneier'
Bruce Schneier has a writeup of the facts we know about the Stuxnet worm, the sophisticated and unusual-looking Windows worm that has been speculated to have been designed by the intelligence agencies of the USA/Israel/Germany (delete as appropriate) to attack Iran's nuclear facilities. Or possibly not:
Stuxnet doesn't act like a criminal worm. It doesn't spread indiscriminately. It doesn't steal credit card information or account login credentials. It doesn't herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn't threaten sabotage, like a criminal organization intent on extortion might.
Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There's also the lab setup--surely any organization that goes to all this trouble would test the thing before releasing it--and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They're hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.
None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory "highly speculative," and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates--India, Indonesia, and Pakistan--are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.
Schneier also looks at strings found in the Stuxnet worm's code, some of which suggest, somewhat tenuously, either that it's of Israeli origin or that the authors wish to give the impression that it is.
Basically, all that's definitely known is that Stuxnet was elaborately expensive to create (containing not only zero-day vulnerabilities but stolen driver certificates) and was designed to attack Siemens plant control computers. It also has been around for a while, possibly having gone undetected for a year, and has updated itself remotely during that time.
Security ninja Bruce Schneier was recently recognised by an airport screener who presumably reads his blog:
TSA Officer: A beloved name from the blogosphere.
Me: And I always thought that I slipped through these lines anonymously.
TSA Officer: Don't worry. No one will notice. This isn't the sort of job that rewards competence, you know.
As reported elsewhere, Bruce Schneier, the Chuck Norris of computer security, leaves his home wireless network open:
To me, it's basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it's both wrong and dangerous.
I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.
I'm also unmoved by those who say I'm putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers. This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it's on, then it simply doesn't matter.
The UK's terror threat level has been downgraded from "critical" to "severe". It is not clear whether this is a result of confidence that the worst threat is over, or because airports have been unable to cope with the new security measures.
And it now emerges that the attack may not have been imminent (the suspects had not purchased tickets and some didn't even have passports), but the timing of the arrests was forced by US officials. And this (somewhat more sensationalistic) article (via jwz) claims that the timing was "nothing more than political fabrication". And here is the Independent's roundup of what we know and don't know.
And Bruce Schneier has weighed in, on the subject of effective security and "security theatre":
None of the airplane security measures implemented because of 9/11 -- no-fly lists, secondary screening, prohibitions against pocket knives and corkscrews -- had anything to do with last week's arrests. And they wouldn't have prevented the planned attacks, had the terrorists not been arrested. A national ID card wouldn't have made a difference, either.
The new airplane security measures focus on that plot, because authorities believe they have not captured everyone involved. It's reasonable to assume that a few lone plotters, knowing their compatriots are in jail and fearing their own arrest, would try to finish the job on their own. The authorities are not being public with the details -- much of the "explosive liquid" story doesn't hang together -- but the excessive security measures seem prudent.
But only temporarily. Banning box cutters since 9/11, or taking off our shoes since Richard Reid, has not made us any safer. And a long-term prohibition against liquid carry-ons won't make us safer, either. It's not just that there are ways around the rules, it's that focusing on tactics is a losing proposition.
The goal of a terrorist is to cause terror. Last week's arrests demonstrate how real security doesn't focus on possible terrorist tactics, but on the terrorists themselves. It's a victory for intelligence and investigation, and a dramatic demonstration of how investments in these areas pay off.
Bruce Schneier has a post about an interesting way to beat buffer overrun attacks:
Fortunately, buffer-overflow attacks have a weakness: the intruder must know precisely what part of the computer's memory to target. In 1996, Forrest realised that these attacks could be foiled by scrambling the way a program uses a computer's memory. When you launch a program, the operating system normally allocates the same locations in a computer's random access memory (RAM) each time. Forrest wondered whether she could rewrite the operating system to force the program to use different memory locations that are picked randomly every time, thus flummoxing buffer-overflow attacks.
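The idea can be illustrated with a toy Python simulation; this is not Forrest's actual implementation, and the address space, base addresses and offsets below are invented purely for illustration. A buffer-overflow exploit hard-codes a memory address; if the operating system picks the program's load address at random on every launch, that hard-coded guess almost never lands where it needs to:

```python
import random

MEMORY_SIZE = 2 ** 16      # toy address space (hypothetical)
FIXED_BASE = 0x8000        # where a non-randomising OS always loads the program

def load_program(randomize=False):
    """Return the base address the toy 'OS' assigns to the program."""
    if randomize:
        # Pick a fresh, page-aligned base address on every launch.
        return random.randrange(0, MEMORY_SIZE - 0x100, 0x10)
    return FIXED_BASE      # same location every time, as in the article

def attack(hardcoded_base, actual_base):
    """The exploit only works if the attacker's hard-coded address is right."""
    return hardcoded_base == actual_base

# Without randomisation, the attacker's hard-coded address works every time.
assert all(attack(FIXED_BASE, load_program()) for _ in range(100))

# With randomisation, the same hard-coded address almost never matches.
hits = sum(attack(FIXED_BASE, load_program(randomize=True)) for _ in range(100))
print(hits)  # rarely anything but 0
```

Real implementations of this idea (address space layout randomisation) randomise the stack, heap and library locations rather than a single base number, but the principle is the same: the attacker's precise knowledge of memory layout is destroyed.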
Memory scrambling isn't the only way to add diversity to operating systems. Even more sophisticated techniques are in the works. Forrest has tried altering "instruction sets", commands that programs use to communicate with a computer's hardware, such as its processor chip or memory.
This produces an elegant form of protection. If an attacker manages to insert malicious code into a running program, that code will also be decrypted by the translator when it is passed to the hardware. However, since the attacker's code is not encrypted in the first place, the decryption process turns it into digital gibberish so the computer hardware cannot understand it.
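The instruction-set trick can also be sketched in a few lines of toy Python; the XOR encoding, key size and "instructions" here are illustrative stand-ins, not how any real processor or Forrest's prototype works. Trusted code is encoded with a secret per-machine key when loaded, and the translator decodes everything before execution; injected code, never having been encoded, decodes to garbage:

```python
import secrets

KEY = secrets.token_bytes(16)  # secret, per-machine encoding key

def xor(data: bytes, key: bytes) -> bytes:
    """Simple repeating-key XOR, standing in for instruction-set encoding."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def load(program: bytes) -> bytes:
    """The loader encodes trusted instructions with the secret key."""
    return xor(program, KEY)

def execute(encoded: bytes) -> bytes:
    """The translator decodes every instruction before the 'hardware' runs it."""
    return xor(encoded, KEY)

trusted = b"ADD R1, R2"        # hypothetical trusted instruction
memory = load(trusted)

# Trusted code round-trips through encode/decode and runs correctly...
assert execute(memory) == trusted

# ...but code injected directly into memory was never encoded, so the
# decoding step turns it into gibberish the hardware cannot interpret.
injected = b"JMP shellcode"
assert execute(injected) != injected
```

Because XOR is its own inverse, decoding the loader's output recovers the original instructions exactly, while decoding anything else scrambles it, which is the asymmetry the article describes.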
Bruce Schneier looks at the question of whom your computer's loyalties really belong to, with not only crackers and criminals competing for them but also rightsholders, software vendors and other companies, whose behind-the-scenes deals often mean that the software they sell you serves other masters:
Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs -- the same kind of software that crackers use to own people's computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn't approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers' machines.
Antivirus: You might have expected your antivirus software to detect Sony's rootkit. After all, that's why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can't.
Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn't just selling software to you; it sells Internet advertising as well. It isn't in the company's best interest to offer users features that would adversely affect its business partners.
Schneier warns that the present situation could have dire consequences:
If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.
You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don't honestly serve their customers, that don't disclose their alliances, that treat users like marketing assets. Use open-source software -- software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.
How to win a basketball game: go online before the game, pretending to be an attractive young woman, chat up one of the opposing team's players and agree to meet him after the game to "party"; then, at the game, get your team's supporters to chant her name and flash her (purported) phone number:
On Saturday, at the game, when Pruitt was introduced in the starting lineup, the chants began: "Victoria, Victoria." One of the fans held up a sign with her phone number. The look on Pruitt's face when he turned to the bench after the first Victoria chant was priceless. The expression was unlike anything ever seen in collegiate or pro sports. Never did a chant by the opposing crowd have such an impact on a visiting player. Pruitt was in total shock.
The chant "Victoria" lasted all night. To add to his embarrassment, transcripts of their IM conversations were handed out to the bench before the game: "You look like you have a very fit body." "Now I want to c u so bad."
Via Bruce Schneier, who called this the cleverest social engineering attack he has read about in a long time. And coming from someone who comments on the various ATM skimming/phishing scams as they come out, that means something.
After alleged British spies were caught in Russia using a wireless receiver hidden inside a rock to communicate with recruits (though it has been suggested that the story was partly if not wholly made up by Russian government agencies to justify a crackdown on non-government organisations), security guru Bruce Schneier's blog discusses the possibility of wireless "dead drops"; and, if anything, there would be less easily detectable ways of doing it than hiding a device in a rock:
Even better, hide your wireless dead drop in plain sight by making it an open, public access point with an Internet connection so the sight of random people loitering with open laptops won't be at all unusual.
To keep the counterespionage people from wiretapping the hotspot's ISP and performing traffic analysis, hang a PC off the access point and use it as a local drop box so the communications in question never go to the ISP.
And various commenters propose other suggestions for undetectable ways of passing spy information to otherwise innocent-looking WiFi access points, and receiving it afterwards:
Replace one access point at a support provider for Starbucks and then have someone figure out which one it is after it's up. Use an ASIC MAC filter to send traffic to a special part of the access point itself.
Port knocking on that dangling PC. The PC stays in stealth mode and only replies (briefly) when knocked upon.
Even better, how about hacking one's wireless configuration manager to hide the contraband data in unused header fields, passing it to a similarly hacked access point that would be an otherwise functional dead end. The spy's laptop wifi antenna could be accidentally left activated and innocently trying to associate with whatever WAP it sees (like my wife's does in our neighborhood). Hit the right WAP(s) and the data is passed.
And then there is this suggestion:
All that spam you get in your in-box is merely steganography. The word "viagra" isn't mis-spelled to get around the spam filters, it's a complicated encoding allowing the spammers and their prospective recipients to exchange messages without anyone suspecting that there are people who want the message in the message. That's why spammers don't care if they send it to people who don't want it, their goal is to make people think of their communications as discardable trash, rather than something that may have a value.
Security guru Bruce Schneier turns his professional paranoia to the Papal election, and looks at how vulnerable it is to fraud or rigging. The answer: not very. There are a few minor flaws, though much of the mechanism is quite robust.
What are the lessons here? First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything. Second, small and simple elections are easier to secure. This kind of process works to elect a Pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems work is through a pyramid-like scheme, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.
And a third and final lesson: when an election process is left to develop over the course of a couple thousand years, you end up with something surprisingly good.