Equifax Breach

So do you also fire the entire team that designed a data system that kept critical data on storage accessible directly to the web server?

Firing CEOs is boring. They'll have a job offer paying more than the entire development team makes, in less than a year.

And how about the chief security officer? The CEO leaves but the CSO doesn't? The largest and most information-packed data breach in history (so far... there will be a bigger one, these won't stop happening) and the top security person still has a job?

The reality is, the code all sucks. Building any modern system is layer on layer of bugs. Mitigating data theft has to be designed in up front. The code should always be assumed to have holes in it the size of a barn door.
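
To make the "holes the size of a barn door" point concrete, here's a minimal sketch of designing for theft up front (Python with the third-party cryptography package; the key handling is a placeholder, not a claim about any real system): if PII is encrypted before it ever hits storage, and the key lives somewhere the web tier can't reach, a web-server compromise yields ciphertext instead of Social Security numbers.

```python
from cryptography.fernet import Fernet

# Placeholder key handling: in practice the key would sit in a KMS/HSM
# or a separate hardened service, never on the web-facing box itself.
key = Fernet.generate_key()
vault = Fernet(key)

# What the web-accessible store actually holds: ciphertext.
stored = vault.encrypt(b"123-45-6789")
print(stored)                          # useless to whoever steals the disk

# Decryption only works where the key lives.
print(vault.decrypt(stored).decode())  # 123-45-6789
```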

Or don't collect the data in the first place. Best security of all.

Every server admin out there knows of at least one big security hole somewhere in a system they maintain. If they don't push hard enough to patch it before it's exploited, but they did push, did they do their job? Should they be fired if the timing of the rain dance doesn't go their way?

When the Dev team says they don't have enough people to upgrade some module they pulled from the net and didn't write, because the upstream security fix completely changed its API, should they be fired if they can't get that change through QA and out to Production before the Production side is hacked?

I don't know the answers. I just know that code quality goes down as the code base grows -- and nobody can afford to fix all the bugs. Nobody puts in 80-hour weeks auditing trusted modules. That's how OpenSSL had a hole the size of a Mack truck for years that nobody noticed. And that's a team that does audit code. Most dev teams simply don't. Not to that level.

The warning of my wise old CS prof keeps echoing. She said, "Be careful what you put in databases." Nobody's been careful about that, ever.

And in this particular case, you can't opt out of your data being in there. You're literally not allowed to say no. Blocks and freezes aren't the fix for this type of data breach. Being able to say, "No, I don't want you putting data about me in a database in the first place," is.
 
Well you know, someone had to be on the chopping block. Most companies these days have been very quick to fire top-level leadership when ANYTHING major happens to disrupt their image. And, quite honestly, they should, in addition to anyone else who was negligent.

It was only a matter of time.
You'd expect that it would be lower-level executives, but the CEO probably got so much from cashing out, plus the golden parachute on top of it, that he'll be laughing until he has to show up to testify.

Expect Joe Barton to apologize to him.
 
Right but again, that just goes right to executive incompetence. If your business involves that kind of information you have to demand more. Honestly they should axe the whole C level as a matter of course as well as anyone associated directly with the breach itself. Johnny engineer is only going to do what he's told to do. Getting breached due to a KNOWN vulnerability has stupidity written all over it from the IT team responsible for that patch all the way up to everyone who ignored it.

Perhaps it's just me, but I'm sick of leaders who refuse to lead. I hate those people who just ride on the shoulders of overworked, underpaid staff and contribute ZERO to the company aside from taking credit for the successes of others.

I'll get off my soapbox now :)
Executives can only know or retain so much -- they're only human. Leading doesn't mean being intimately involved in the actions of every one of your subordinates. We'd be bitching if the issue was them micromanaging, too. Sometimes leading is letting folks do their job while you steer the ship.
 
Wayyyyy too lenient when the corporate culture is so far down the tubes that critically sensitive data about people is stored ON webservers. That's a fireable offense for anyone who ever sat through ANY meeting and allowed that, in the medical world and most of the finance world.

An exec can't really get away with that one anymore. Maybe in the late '90s or even the early 2000s, but even then there were well-published security experts saying that was a bad idea. The "outer" and "inner" firewall concept was well known by the early '90s. SANS.org was founded in 1989 and has been offering publications and training ever since.

I took an Internet network security course from a former NSA computer and network security expert who worked for Network Solutions back when they were their own entity and handled all the domain registrations for the .com, .net, and .org TLDs. The guy not only showed us the same techniques crackers use today to break into stuff, but demonstrated a real-life social-engineering break-in to the entire class. That class was in 2000, I think, since they'd just been or were about to be acquired by VeriSign.

Nothing has really changed since then, so any exec saying they don’t have a clue about data security or that they’ll “just let the staff deal with it” without oversight is looney tunes.
 
It depends on how you define oversight. If your folks and your auditors are able to attest to SSAE16 compliance, or you can get somebody to sign a SOC Type 2, or whatever else you're looking for, I'm not sure leadership is really at fault. The CxO isn't the one who goes out and validates these things personally, and we all have framed stories before. My point really is that I'm not going to armchair-quarterback it or pass judgement without knowing what happened in their meetings, and what the CxO did or didn't know. Somebody is responsible, even if the chiefs are the ones being held accountable.
 
I get what you're saying, but security "certifications" are often also an excuse for a poor security culture. "We have the cert, and we got hacked, so it's the certification or the auditor that's at fault." That's problem number one with security certs. CxOs think that because they spent money and LOTS of people's time on them, their security is assured.

That leads to the second point. Those certs cost $$$$. If an organization is doing them, the CxOs certainly know about it. Not cheap. So any place that isn't doing them has probably at least seen a quote for one, if they have any security sense at all. Again, hard to excuse the CxO who decided security wasn't worth the price tag.

Normally I'd agree with you on the armchair quarterbacking, but in the case of a company entrusted with a whole crap-ton of personal data that no one explicitly authorized them to collect, I will happily slam their asses and even make up crap about them if needed to pressure them into allowing an opt-out. I didn't want their products, I don't use their services, and I find it more than a little disconcerting that I can't make them stop.

When doing the blocks for our stuff, their database asked me questions about my NEPHEW. They're data mining the crap out of that data. Our nephews have absolutely no connection to our financial lives whatsoever, but they've obviously made the family connection in their database, enough to ask me personal questions about those kids in order to AUTHENTICATE that I am who I say I am, to block them from giving their reports to whatever fiscal entities ask for them.

They can go straight to Hell for not allowing a full and complete opt-out. Delete all the data you’ve collected on me and my family and never add any more to your database. You have proven you do NOT deserve to use it for your own fiscal gain.

Money is the only thing that will get their attention. No data, no money. A full opt-out option (which could put them out of business if enough people opted out) is demanded. Until then, I can gladly hold every person there who allows that data to reside in an insecure location, fully responsible. Because I didn’t have a business relationship with them in the first place, nor did I want one.

This pressure, of course, has to come from people asking their banks if they do business with Equifax and asking for an opt-out from credit reporting at the banking level. I know, good luck with that, right? But the banks have to be held accountable for choosing a bad vendor as their clearing house simply for the convenience of not having to do the underwriting legwork themselves on credit decisions. Lazy bastards.
 
The problem is that there are too many people who "could be" responsible. How many people work in the company's INFOSEC departments? Probably most or all of them "could be" responsible, at least theoretically. But how many juries would be savvy enough to rule on the question of who actually was responsible if a case ever went to trial?

I'm starting to think that the only way to stop these kinds of breaches is to hold the senior management of companies that stockpile personal data, any geeks who can be proven to have screwed up, and the companies themselves criminally responsible for any data breach that could have been prevented by proper data hygiene. Nothing will change at these outfits until the people running them (and their shareholders, because the companies themselves could be criminally charged) have some of their own skin in the game.

Rich
 
I've been saying that for a long time, Rich. And the natural progression from criminal liability leads to standards and what I call "building codes"... stuff where, if you "do it this way," your liability is significantly lowered. Insurers get involved, like in any other big business.

This stuff ain't rocket science. Coders like to pretend it is, but if someone slapped the possibility of criminal negligence on them, you know what would stop? Undisciplined use of questionable code. "I'm not going to jail for that mess..." after the coder looks at a library someone else wrote that some boss said to use because it's the new hotness. "You put it in writing that I have to use that dog's breakfast, or I walk... your insurance company won't even allow that, I bet. It doesn't 'meet code'."

It's a lot like a building. When the foundation is bad, the whole rest of it constantly has problems. Even having operating system remote exploits at this point in the history of computing is an embarrassment to the industry at large. Not enough discipline to see that the OS is bad news and to find something better to base products on?

Imagine if, like in any other business, Microsoft were liable for damages caused by security holes in their product? And their staff could go to jail for it? It would be a very different computing industry in that world. More expensive, and only used for things a computer is actually needed for. Too expensive to use for frivolous stuff.

Of course, most real servers run on open source and community built software, and that's a whole different kettle of fish, legally. Your warranty is essentially, "If it breaks, you get to keep both pieces." It's all essentially in a constant state of brokenness. Only geek CxOs really understand that.

Thus, why Apache wasn't patched. If you understand Apache is NEVER done and always broken, you never manage it like it's a product. You handle it like it's a constant disaster waiting to happen. It's better than some alternatives, but you never fully trust it. You always cast about for the best thing from the open source world... maybe Nginx is significantly better for your environment... you don't know unless you continually evaluate it all.

Unless you continually evaluate and kick out the worst of the buggy software, it never really gets better.

The industry doesn't want standards. Once a data breach or denial-of-service attack (the latter is more likely, being easier to do) is big enough and harmful enough to get people killed (it'll happen eventually -- it already has in the intelligence world), it'll have discipline forced on it by regulators who will be clueless. And it won't be pretty. Avoiding that is pretty easy if the industry figures out it needs its own quality standards before that happens. I doubt it will.
 
The CISO (chief information security officer) is typically the one holding the baton, whoever that has been defined as.
 
Open source doesn't mean no warranty. My employer is incredibly risk averse but still loves open source software when possible. As such, we only use open source software that we can purchase support for -- and that support contract has to hold them liable for the product.

There is already criminal liability in some realms of infosec (HIPAA, for example, 42 U.S. Code § 1320d–6).
 
The CISO is still there. Weak security background, with only one "security" job on her résumé and all the rest scrubbed from her public info.
 
Understand the HIPAA thing.

Good luck getting damages money out of most of those open source contracts. Go read one sometime. You'll find they're worded in such a way that if an upstream project screws up, the vendor won't be paying you a dime.

Doesn't matter really... commercial vendors won't be paying anyone a dime for their bad code either.
 
Class action?
At least one was already filed, the next day, I think.

It took forever for OPM to fire their incompetents; I'm not sure the VA ever did... A friend had his identity stolen. Great attitude: "Not my problem - it's an issue for the unfortunate enterprises who get taken in by it." He had a form letter drafted and mailed it to whoever called looking for payment on the new RV, unpaid credit cards, etc. Got a car loan during the mess as well, from his credit union - they just asked for some documentation on the theft and went ahead and made the loan.
 
Our legal team refuses to OK contracts with those kinds of stipulations. Procurement of software can be a bit of a pain for us, lol. There is some software we've purchased where a VAR actually ate the liability to make the sale when the publisher wouldn't.
 
Pretty impressive. Ever seen a real claim though? I haven't seen a successful one yet. I've seen insurers pay off with no admission of guilt, is all I've seen in the real world. Nobody is insane enough to put the software dev industry experts on the stand.
 
As far as I know, all liability issues have been settled without the need to go to court. Whether or not insurance was footing the bill was irrelevant to me, as would be the admission of guilt, so long as we were made whole. A prime example: it's why we still use Symantec for our TLS certs -- they were the only ones willing to agree to our terms. Sometimes it just means we can't always get the cheapest, but they like being able to forecast exact risk.
 
That's interesting. I knew that about Symantec but wouldn't ever pay their price tag for what little protection it provides. Not when OpenSSL itself had a hole anyone could drive a Mack truck through for years and nobody noticed. The cert is the least risky part of the entire SSL end-to-end process.

We've actually gone to using the renewable free certs from Let's Encrypt for anything non-Production that needs to be out on the public side of things, and automated the renewals. $0 doesn't suck. We're still using (cough, gag, ack) GoDaddy certs on the Production stuff and I want to deep six them, but inertia and more important things to do...
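
For anyone curious, "automated the renewals" usually just means a cron or systemd timer running certbot's renew command. Here's a minimal sketch of the idea, assuming certbot is the ACME client in use (the Python wrapper is purely illustrative; a one-line cron entry does the same job):

```python
import subprocess
import sys

def renew_certs() -> None:
    # 'certbot renew' only touches certs nearing expiry, so it's safe
    # to run daily from cron or a systemd timer.
    result = subprocess.run(
        ["certbot", "renew", "--quiet"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Surface failures so monitoring catches them well before expiry.
        sys.stderr.write(result.stderr)
        raise SystemExit(result.returncode)

if __name__ == "__main__":
    renew_certs()
```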

I haven't considered SSL 100% secure, ever. Too many changes and screw ups over the years. Lately I consider it barely better than the cheap lock on the front door next to the breakable window. Nobody bothers trying to break SSL much unless they're an entity with massive compute power. They just get in through much easier means.

Heck, most of the web database attacks can come in *through* SSL and still work just fine.
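
A toy illustration of that last point (hypothetical table and lookup functions, nothing from the actual breach): TLS encrypts the pipe, but it doesn't sanitize what travels through it. An injection payload arrives over the encrypted channel just as happily as a legitimate query, so the fix has to live at the application layer, e.g. parameterized queries.

```python
import sqlite3

# Hypothetical in-memory table standing in for a store of PII.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

def lookup_unsafe(name: str):
    # Vulnerable: attacker-controlled input spliced straight into SQL.
    # The TLS connection it arrived over does nothing to stop this.
    return conn.execute(
        f"SELECT ssn FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT ssn FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # [('123-45-6789',)] -- dumps every row
print(lookup_safe(payload))    # [] -- payload matches nothing
```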
 
Nobody should ever consider any one technology to be 100% secure. Defense in depth is a must-have regardless. While there have been lots of TLS issues, from OpenSSL bugs to the Debian PRNG issue of a few years back, we just have to roll with them and fix them as we find them. TLS1.2+PFS makes for a decent setup so long as you have securely managed keys, and TLS1.3 is just around the corner to close it up even more.

https://cipherli.st/ makes it easy for a layman to get an A+ Qualys SSL Labs rating, if nothing else.
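
For the curious, here's roughly what "TLS1.2+PFS" translates to in code, sketched with Python's stdlib ssl module (the cert/key paths are placeholders; cipherli.st offers equivalent ready-made snippets for web server configs):

```python
import ssl

# Refuse anything older than TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# ECDHE-only suites provide forward secrecy: a server key stolen later
# can't retroactively decrypt traffic captured earlier.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5")

# Placeholder paths -- point these at your real cert chain and key.
ctx.load_cert_chain("cert.pem", "key.pem")
```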
 
The day I read that security engineers are criminally responsible if their company is hacked is definitely the day I find a new career.

It's fun to demand **** like that - but if you worked in the industry, you'd know how insanely ridiculous that would be.

For those of you demanding that things should be on a “separate server” with some “firewall” in-between are grossly grossly grossly oversimplifying things.

A setup like that is how you pass an audit. But in function, it actually doesn't do jack **** in 99 percent of implementations. If your application server can access the data, so can the hacker once he takes over your application server. You need a lot smarter people implementing a lot smarter architecture to actually make any real gains there. The reality is, there are only handfuls of folks capable of work this good. The huge majority of the professionals out there are absolutely awful at their job when it comes to building "actually secure systems."

How the hell is any exec supposed to know if his guys are good or not? At scale, most of them are bad. Even the talented guys rarely have the time or authority to design the architecture as securely as they’d want.

You'll notice I didn't post any solutions. That's because I would need to know a lot more about what actually happened, and I'd need to interview a lot of parties, before I could recommend anything that would make a meaningful gain. These are complex issues, and the little information that's been given to the public isn't enough to understand them.

Anyhow... just some thoughts from a guy who makes his living protecting PII.
 
I think we should speculate the snot out of it, because that's how we can learn from it. ;)

Nauga,
and the matter of whose ox is gored.
 
Like I said, I'll speculate until they take my ******ned information out of their computer. Eff them. I didn't ask for, nor want, any business relationship with them in any way, and I sure as hell don't need one. My banks can get off their lazy asses and ask me what assets and liabilities I have and do their own legwork. They have a vested interest in protecting my information or they know I'll walk. I have no such option with Equifax.

Like Jesse, if asked for my professional opinion I'd need more information. Asked my opinion of their stupid asses as far as a consumer product goes, I'll demand they be regulated away, completely irrationally, until they are. Screw them. Like bend them over a table and use a telephone levels of screw them.

No data, no money, no business... go under and go away Equifax. Die. Die. Die.

Are my feelings on the topic of them collecting and mining fiscal information about me clear enough, or should I expound on them more? LOL.

For effs sake. They still have “fax” in their name. Talk about a company name you don’t want handling anything related to computer security duty. Hahaha.
 
Speculation is always bad, because no one can ever get any value out of it, right? ;)

Richard,
who can build straw men as well as anyone.
 