We've had these numbers and facts for a while, as well. It hasn't changed things much.
For example, since at least 2007, it has been widely known that certain Autonomous Systems belonging to telecoms in Eastern Europe have been a principal source of malware (without providing anything of appreciable value). So why do ISPs in the US still have peering agreements with them?
According to the "numbers and facts" of the 7Safe report
http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=5255
the problem is the U.S. and Vietnam.
(just to underline that maybe numbers and facts are not as accurate as they could be, or as I wish they were)
No doubt. In fact, in a breach investigation that we handled, we developed evidence that while the attack appeared to come from Europe, it actually originated in the Western US. However, this does not change my original point that the information is out there but nobody is using it. In fact, as the paper you referenced mentions, part of the problem is that something as simple as network monitoring may be turned off precisely because it generates too much information.
Unless, of course, all the bad Russian guys managed to get hold of the U.S. ISPs, but if that is the case, then why get control of the Vietnamese facilities too?
jaclaz
The name Russian Business Network was a bit tongue-in-cheek. Much of the traffic associated with the RBN was routed through or hosted by various states that were formerly part of the Soviet Union but are not Russia.
However, it is generally accepted that the RBN has probably been reconstituted using different networks. In addition, there are others that are much less discussed but well known.
How many people are still using one of the vulnerable versions of Acrobat? My guess is a lot.
The issue is that in business, everything is going to hinge on either cost-benefit or usability. A former colleague of mine used to say that the only safe firewall was an air gap, and he was right. Connectivity by its very nature involves risks, and not all risks can be avoided.
I know from discussions that I've had with people in a CERT that, from a technology point of view, the enemy has already planned multiple generations ahead of what we are seeing now. It's like the Christmas bomber: the security guys are always adjusting their security based on the last attack, when the next attack might not look anything like it, and the myriad security techniques only serve to slow traffic without providing any real security.
Simply put, a business requires accessibility to do business; if it didn't, it probably wouldn't have internet-accessible computers on its network. That access creates a risk, and the question then becomes how much the business should be doing to manage that risk. A multibillion-dollar business can do quite a bit before security becomes a large percentage of its running costs, but it is also a bigger target, because a bigger footprint on the internet means the potential to be hit with a higher level of finesse.
The smaller a business gets, the more costly effective security becomes, and also the smaller the risk from being compromised. At this point, cost-benefit analysis becomes a simple formula:

probability of compromise × cost of compromise vs. cost of security
It reminds me of Tyler Durden's car recall analogy.
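As a back-of-the-envelope sketch of that formula (every number below is an illustrative assumption, not from any real engagement):

# Rough sketch of the cost-benefit comparison above.
# All figures are assumptions for illustration only.
p_compromise = 0.05        # assumed: 5% chance of compromise per year
cost_compromise = 250_000  # assumed: cleanup, downtime, notification, etc.
cost_security = 20_000     # assumed: annual spend on controls and monitoring

expected_loss = p_compromise * cost_compromise  # $12,500/year with these numbers

if expected_loss > cost_security:
    print(f"Security pays for itself: ${expected_loss:,.0f} > ${cost_security:,.0f}")
else:
    print(f"On paper, security 'costs' more: ${expected_loss:,.0f} <= ${cost_security:,.0f}")

With these particular numbers the comparison already tips against spending, and shrinking all three for a small business only makes it worse, which is exactly the point.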
Whilst I think we all need to encourage a reasonable level of security consciousness, some threats are simply not preventable. As President Eisenhower said:
"We will bankrupt ourselves in the vain search for absolute security."
The problem may be, as in the case of the December 25th attempted bombing of the Northwest flight, that we have too much data but not enough resolve to do anything about it.
Yep.
I think we ourselves also cause the lackadaisical attitude by overfeeding senior management with data, warnings, and such.
I am not trying to be crude, but . . . senior management isn't the swiftest when it comes to security. Big pictures, a few buzzwords, and the stuff is sold to them. Try to explain it in a board meeting, and they fall asleep and can't even remember your name.
Yes, allow them to follow up and get extra details, but a four-slide PPT presentation with pretty pictures and vague ideas does much more to sell the project than 20 charts, spreadsheets, and such proving how truly beneficial it is. Go figure.
Just my personal experience.
I think we ourselves also cause the lackadaisical attitude by overfeeding senior management with data, warnings, and such.
😯
Thank you very much, I just learned a new adjective. :)
Now I have something to keep in the same drawer as "palimpsestuous" for future use. :D
jaclaz
I agree that many businesses tend to follow the cost-benefit analysis model that Patrick pointed out, but I also agree with jhup's thoughts on the C-level executive's approach. The fact is that the equation is misunderstood:

probability of compromise × cost of compromise vs. cost of security
First…apparently, things such as not using communal admin accounts with easy-to-guess passwords, not using "password" as a password, and putting a password on the MS SQL Server 'sa' account all have a steep "cost" associated with them.
Second, the "cost of compromise" is very often miscalculated…not just in the sense of "clean up", but also in the sense of fines, the PCI forensic assessment (if appropriate), etc. Most folks don't count "loss of customer confidence", because history shows us that it really doesn't matter. However, what has happened following notification is civil suits being filed (another cost), and then there's the cost to the consumer of having to deal with identity theft and other issues.
The fact is, this stuff isn't a priority. None of these organizations feels that it has a "sacred trust" in processing the type of PCI/PII/PHI information that it does.
There's no such thing as "absolute security"…we all know that, so mentioning it or striving for it is irrelevant. But what can happen is that organizations responsible for storing/processing sensitive data, as well as those that comprise the critical infrastructure, can, in fact, with some small cost and effort, raise the bar such that at least alarms are triggered and footprints are left, and LE can go after the bad guys.
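A minimal sketch of the kind of cheap tripwire I mean, something that triggers an alarm and leaves a footprint (the log format, names, and threshold here are all assumptions for illustration):

# Count failed logins per source and alarm past a threshold.
# Log lines and threshold are made up for this example.
from collections import Counter

sample_log = [
    "Jan 10 03:12:01 sshd: Failed password for sa from 203.0.113.7",
    "Jan 10 03:12:03 sshd: Failed password for sa from 203.0.113.7",
    "Jan 10 03:12:05 sshd: Failed password for admin from 203.0.113.7",
    "Jan 10 09:30:44 sshd: Accepted password for alice from 198.51.100.2",
]

THRESHOLD = 3  # assumed: alert after 3 failures from one source

failures = Counter(
    line.rsplit(" ", 1)[-1]   # source address is the last token
    for line in sample_log
    if "Failed password" in line
)

for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALARM: {count} failed logins from {source}")

Nothing about that requires a big budget; it just requires that somebody decide the footprints are worth keeping and the alarms worth answering.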
The absolute security issue is relevant. I once knew an information security manager who would lock down absolutely everything because, instead of managing risk, she was trying to avoid risk. Her approach was just as wrong as that of an ISM who doesn't lock down anything, because there were times when her policies so substantially reduced the benefit of using IT that it would have been easier to go back to paper or fax machines.

When your policies get sufficiently draconian as to be counterproductive, your client base will start ignoring your rules, including the effective ones, in an effort to get the job done. And your client base doesn't generally know which rules are the effective ones, so the net result is a lower level of security than if you had balanced access with security properly in the first place.
Second, the "cost of compromise" is very often miscalculated…not just in the sense of "clean up", but also in the sense of fines, the PCI forensic assessment (if appropriate), etc. Most folks don't count "loss of customer confidence", because history shows us that it really doesn't matter. However, what has happened following notification is civil suits being filed (another cost), and then there's the cost to the consumer of having to deal with identity theft and other issues.
In my experience, "customer" confidence is not truly the issue because, at least in the case of many financial services, the customer is not the individual to whom the PCI belongs but the institution for which I am managing the information. Many of these businesses know that the consumer will continue to come back because of price and convenience. Many consumers also firmly believe that if their data is compromised, they'll have a remedy.
The bigger concern is the companies, credit unions, etc., for which the compromised business is managing the data, and here is where you have the problem: the business managing the data does not want to expose itself to potential liability and loss of business, so there is an incentive not to be honest about the problem and its root causes.
I can't go into details, but having been involved in a couple of data breach cases, in my experience the compromised business is often looking for a scapegoat to which liability can be shifted rather than an answer to the question of how this happened. In fact, in a perverse way, there may be reasons not to try to understand (or admit publicly) what the actual cause of the breach was.
In a sense, by not having too much control, you allow yourself some plausible deniability: some guy used a zero-day exploit to hack into my system and get my data, and my firewall/AV didn't detect it. As embarrassing as that would be to admit, it offers greater protection than saying that one of my trusted employees made off with the data or, worse, that the nature of the business is such that it is impossible to guarantee that unauthorized access to PCI will never happen.
Part of this goes back to the advice that these businesses get from legal counsel. While I have great respect for the lawyers I worked with on these cases, and pleaded with them to advise their clients to come forward with details of what happened, in every case legal counsel advised against it. From their perspective, there was nothing their clients could gain from such admissions, and their clients' business was not about helping other companies deal with their security issues.
Even Heartland's unusual public admissions regarding their breach had an air of "not my fault". By their own admission, they had been "hacked into" sometime in May of 2008, did not hire forensic teams until October of 2008 and did not detect the nature of the problem until January of 2009.
In other words, the miscreants were so clever that they managed to infiltrate a system and operate, undetected, for five to six months and continue to evade detection and identification for an additional five to six months. The implication is that these were some sort of "uber" criminals using technology so advanced that it took two teams of investigators months to uncover.
I don't buy it but I wonder if, even after the lawsuits, we'll ever know what really happened.
In other words, the miscreants were so clever that they managed to infiltrate a system and operate, undetected, for five to six months and continue to evade detection and identification for an additional five to six months. The implication is that these were some sort of "uber" criminals using technology so advanced that it took two teams of investigators months to uncover.
I don't buy it either. I've been involved in several cases where Eastern European groups (as far as we could tell) were trying to get in, do their thing, and never come back again.
When they succeeded, they made a lot of money. In almost every case, they used tactics and tools that were years old. Even the vulnerabilities were months, if not years, old.
In most cases, monitoring, perimeter defense, and/or incident response was either non-existent or hadn't changed since the '90s. Organisations still try to make security into a product instead of a process.
Ueber criminals? No way.
Bad security practices? Oh yes..
Roland