EternalBlues: The cyberwar is afoot, funded by US taxpayers against themselves

For nearly three weeks, Baltimore has struggled with a cyberattack by digital extortionists that has frozen thousands of computers, shut down email and disrupted real estate sales, water bills, health alerts and many other services.
But here is what frustrated city employees and residents do not know: A key component of the malware that cybercriminals used in the attack was developed at taxpayer expense a short drive down the Baltimore-Washington Parkway at the National Security Agency, according to security experts briefed on the case.

In Baltimore and Beyond, a Stolen N.S.A. Tool Wreaks Havoc


They are not capable of controlling their own tools, which is a basic reason why it's a bad idea to give governments master password keys or backdoors to encrypted products. Do that and you can be certain criminals and terrorists will get access.

Cory Doctorow explains further:

https://boingboing.net/2017/06/04/theresa-may-king-canute.html

It’s impossible to overstate how bonkers the idea of sabotaging cryptography is to people who understand information security. If you want to secure your sensitive data either at rest – on your hard drive, in the cloud, on that phone you left on the train last week and never saw again – or on the wire, when you’re sending it to your doctor or your bank or to your work colleagues, you have to use good cryptography. Use deliberately compromised cryptography, that has a back door that only the “good guys” are supposed to have the keys to, and you have effectively no security. You might as well skywrite it as encrypt it with pre-broken, sabotaged encryption.
There are two reasons why this is so. First, there is the question of whether encryption can be made secure while still maintaining a “master key” for the authorities’ use. As lawyer/computer scientist Jonathan Mayer explained, adding the complexity of master keys to our technology will “introduce unquantifiable security risks”. It’s hard enough getting the security systems that protect our homes, finances, health and privacy to be airtight – making them airtight except when the authorities don’t want them to be is impossible.
What Theresa May thinks she's saying is, "We will command all the software creators we can reach to introduce back-doors into their tools for us." There are enormous problems with this: there's no back door that only lets good guys go through it. If your Whatsapp or Google Hangouts has a deliberately introduced flaw in it, then foreign spies, criminals and crooked police (like those who fed sensitive information to the tabloids implicated in the hacking scandal -- and like the high-level police who secretly worked for organised crime for years) will eventually discover this vulnerability. They -- and not just the security services -- will be able to use it to intercept all of our communications. That includes everything from the pictures of your kids in your bath that you send to your parents to the trade secrets you send to your co-workers.
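To see why a "master key" is such a liability, here's a minimal sketch in Python. The cipher is a deliberate toy (a SHA-256 keystream XOR, not real cryptography) and all the key names are made up, but the structure is the point: once every session key is escrowed under one master key, whoever steals that single key reads everything.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: hash key||counter. NOT real cryptography."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Normal end-to-end encryption: only the session key can decrypt.
session_key = os.urandom(32)
ciphertext = xor_cipher(session_key, b"pictures of the kids")

# The mandated backdoor: every session key is also stored ("escrowed")
# encrypted under one master key the authorities hold.
master_key = os.urandom(32)
escrowed_key = xor_cipher(master_key, session_key)

# Anyone who obtains master_key -- spy, crooked cop, ransomware crew --
# recovers the session key and, with it, the plaintext.
recovered = xor_cipher(master_key, escrowed_key)
print(xor_cipher(recovered, ciphertext))  # b'pictures of the kids'
```

The weak point isn't the math; it's that master_key now protects everyone's traffic at once, exactly the kind of single secret the NSA just demonstrated it cannot keep.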



Also: Baltimore should be updating its Microsoft security patches more often than once every two years.   


dave said:
Also: Baltimore should be updating its Microsoft security patches more often than once every two years.   

Having worked in a corporate environment, I found that applying a patch can be difficult and expensive. Technically it's easy.

The problem is the risk people and the lawyers. These days applying a patch requires testing every product on the various machine configurations, with tons of CYA documentation to "prove" the patch will not affect the business. Everyone gets involved: programming, QA, operations, IT security, product management, etc.

It could be justified. Look what happened to many when they updated to Win 1809. That update wiped out their data in the Documents folder.

Heaven knows how updates work in the Baltimore govt bureaucracy. Probably no resources to do CYA patches.

ps - I don't keep my data in Microsoft designated folders like the Documents folder. I don't keep them on the system C: drive.


BG9 said:


dave said:
Also: Baltimore should be updating its Microsoft security patches more often than once every two years.   
Having worked in a corporate environment, I found that applying a patch can be difficult and expensive. Technically it's easy.
The problem is the risk people and the lawyers. These days applying a patch requires testing every product on the various machine configurations, with tons of CYA documentation to "prove" the patch will not affect the business. Everyone gets involved: programming, QA, operations, IT security, product management, etc.
It could be justified. Look what happened to many when they updated to Win 1809. That update wiped out their data in the Documents folder.
Heaven knows how updates work in the Baltimore govt bureaucracy. Probably no resources to do CYA patches.
ps - I don't keep my data in Microsoft designated folders like the Documents folder. I don't keep them on the system C: drive.

If the company you work (or worked) for required this for every patch, they're not very bright.

We apply patches on a regular basis - at least once a month. 99.9999% of the time the patch works correctly on all servers. Consequently, you don't need to fully regression test every application for every patch (doing so would be kind of nutty). All that's needed is some very basic functionality testing. If a problem is found with the patch, you just roll back the patch on the affected system and figure out the problem.

There's no excuse for large organizations to not patch on a regular basis.

BTW, Win 1809 was not a "patch". It was a product update, and they're not the same thing. Product updates are far less frequent and do need to be treated differently: in a corporate environment there is no need to accept product updates as they come out, and they should be tested more rigorously. Security patches, on the other hand, need to be applied regularly.
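The patch-then-smoke-test-then-rollback cycle described above can be sketched roughly like this in Python. Everything here is a hypothetical stand-in: apply_patch, smoke_test and rollback would really be whatever your shop uses (WSUS/SCCM jobs, Ansible plays, etc.), and in real life the first few machines would soak for a while before the rest of the fleet is touched.

```python
def staged_rollout(servers, apply_patch, smoke_test, rollback):
    """Patch servers one at a time with basic functionality checks.

    apply_patch(s) and smoke_test(s) return True on success;
    rollback(s) undoes the patch on one machine.  On any failure the
    rollout halts so the problem can be investigated.
    """
    patched = []
    for server in servers:
        if not apply_patch(server):
            return patched, server          # deployment itself failed
        if not smoke_test(server):
            rollback(server)                # undo only the affected box
            return patched, server          # halt and investigate
        patched.append(server)
    return patched, None                    # clean rollout

# Demo: pretend one box out of ten misbehaves after patching.
fleet = [f"srv{i:02d}" for i in range(10)]
done, failed = staged_rollout(
    fleet,
    apply_patch=lambda s: True,
    smoke_test=lambda s: s != "srv07",
    rollback=lambda s: print("rolled back", s),
)
print(failed)      # srv07
print(len(done))   # 7 -- the machines patched before the failure
```

The key property is that a bad patch costs you one rollback and an investigation, not a fleet-wide outage.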



drummerboy said:
If the company you work (or worked) for required this for every patch, they're not very bright.
We apply patches on a regular basis - at least once a month. 99.9999% of the time the patch works correctly on all servers. Consequently, you don't need to fully regression test every application for every patch (doing so would be kind of nutty). All that's needed is some very basic functionality testing. If a problem is found with the patch, you just roll back the patch on the affected system and figure out the problem.

They do patches in batches. 99.9999% is not good enough considering the cost of failure, both regulatory and financial. That company has never failed. If it lost just one day's data, the whole world would know and talk about it. We don't do basic testing.

Many, many years ago I worked on systems that had Tandem computers. Disk space was very expensive then, and that company had six disk drives. I had the bright idea of using a compression option on our data files, saving us the expense of a seventh drive.

The inevitable happened: we lost data to a programming bug. It was data that could have been retrieved from our nightly partial tape backup, so at worst we faced one day's partial loss. We did a restore from tape and got error 59, file is bad. No restore. OK, we went to the previous night's tape. Same thing. We went back a whole week, including the full weekly backup. No go.

We called Tandem. They looked and blamed us: our tape drive heads were supposedly dirty, so bad data had been written to tape. They cleaned the heads. I tested by backing up some other files, no errors. OK, I guess it was that. So we were screwed. Then we did a full backup which included the compressed key-sequenced files. I said, do a restore (I was operations manager then) and verify. Error 59.

We called Tandem back. They knew the drives had just been cleaned, so they huffed and puffed and told us we must have done something wrong. I told them: you do a backup and restore. Error 59. Well, it seems the tape heads could be misaligned. Not really. I looked at the backup listing, hundreds of files, and noticed that the file getting error 59 on restore had lost 4 bytes: the listing showed 999,996 bytes where the file was 1,000,000 bytes. Tandem, what's this? They grabbed the tapes and took them to their office. They came back and said, "You compress your key-sequenced files?" Yes, it saves space. Oops. When the index of a compressed key-sequenced file crossed a cylinder (a group of tracks), data was lost, a bug introduced in some previous release. The problem is that error 59, even when rare and affecting just one byte, causes the whole restore to terminate.

They extracted the raw data onto new tapes, which we then used to load (not restore) into the needed files. They immediately patched their system and distributed the new system release worldwide. A new system release required a sysgen install of the whole system.

Nothing is simple. Nothing is basic.
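The moral of that story, verify restores against an independent record, is easy to automate today. A sketch in Python (all names made up): keep a manifest of size and SHA-256 for every file before backup, then check restored files against it. Even a silent 4-byte truncation like the one behind error 59 shows up immediately.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Digest a file in chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(paths):
    """Record size and digest of each file before it goes to backup media."""
    return {p: (os.path.getsize(p), sha256_of(p)) for p in paths}

def verify_restore(manifest, paths):
    """Return the files whose restored copy differs from the manifest."""
    return [p for p in paths
            if (os.path.getsize(p), sha256_of(p)) != manifest[p]]

# Demo: simulate a restore that silently drops 4 bytes.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.dat")
    with open(path, "wb") as f:
        f.write(b"x" * 1_000_000)
    manifest = make_manifest([path])        # taken at backup time
    with open(path, "wb") as f:
        f.write(b"x" * 999_996)             # the "restored" file is short
    print(verify_restore(manifest, [path])) # flags data.dat
```

None of this existed on a 1980s Tandem, of course; the point is that the check is cheap now, and the only reason it gets skipped is process, not technology.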


the typewriter, carbon paper?


mtierney said:
the typewriter, carbon paper?

There you have a point. At times I think we were better off in the typewriter-and-carbon-paper era.

When computers got started, many said we'd have a golden age of no paper: the paperless office. If anything, we now drown in paper, made possible by the easy generation of reports on anything and everything.

And we have the added benefit of a new age, the Golden Age of Surveillance: an age where everything known about you is retained eternally and never forgiven.


