Artifacts and classification thread

9 Posts
4 Users
0 Reactions
603 Views
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
Topic starter  

Gents,

I felt that given how the discussion was going, it's time to separate it from the "Scanning Images" thread and start a new one…

H


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
Topic starter  

Okay, at this point, my thought process is that we're looking at a medical taxonomy. It's already been mentioned that we have a classification process for malware, and at this point, I'd agree.

I'll use an example to illustrate a point…I had an issue where I was told that a remote access program had been hacked. The client told me that there was a vulnerability in the program and that they believed the program had been compromised. Some of the behavior they'd observed was that the remote access program wasn't working correctly, or was at times unavailable to users, even though the port used by the program was found to be open.

The problem was that the remote access program had been installed in '03 and was the same version (2.1.3) that was originally installed. The vulnerability that the client insisted had been exploited was to version 1.0 of the application.

In this case, the client had observed a limited range of symptoms and made a diagnosis…one that was incorrect, again, due to a lack of information. Discussing the issue, it became clear that the IT staff had no idea how to diagnose an issue…rather, their diagnosis technique was "limited observation, no testing, then make an assumption". This seems to be a pretty popular route to go.

If the IT staff knew more about how Windows systems operated, then there might be artifacts that they would look for based on their assumption of what was going on. Hopefully, failing to locate those artifacts, they would then pursue a more diagnostic mode.

The thing is that many folks will read these words and say, "it takes too long…we don't have time for that." My answer is…the first time you bake cookies, it takes a while to prepare the dough. The first time you do anything, it will take a while. However, that is not an excuse for NOT doing IR. From a business perspective, with many of the regulatory bodies putting forth standards for "security", saying that is simply irresponsible and potentially exposes your organization to liability.


   
hogfly
(@hogfly)
Reputable Member
Joined: 21 years ago
Posts: 287
 

In this case, what seems to be missing is a definition of normal. "We observed abnormal behavior in the remote access program." We must define normal in order to define abnormal, e.g., how frequently does the remote access system fail or suffer from accessibility issues?

Using the medical taxonomy, or method of diagnosis, the problem should be approached from broad to narrow scope.
Please excuse my overly simplistic approach to this. In reality it's much more complex, but this is the gist of a possible approach to problem diagnosis.

Top Level - Identify the issue at hand.
A remote access issue.
Symptoms - failed login attempts, service unavailable to users, port open.

Potential reasons - Identify potential reasons for the failures. As you point out, had they known their system, they could have done a better job at diagnosis.
Branch 1 - Compromise
Branch 2 - System Malfunction
Branch 3 - Normal Behavior

Diagnostic phase - Get background information.

Branch 1A - Identify attack surface (What is running or available on the system that could lead to compromise)
Branch 2A - Identify Components of the remote access system.
Branch 3A - Does this problem occur frequently?

Observation and Testing - Make a hypothesis, and test it. When I go to the doctor with a respiratory problem, he doesn't just say "oh, you have lung cancer". He runs diagnostic tests (listens to my breathing, asks me questions, etc.).

Branch 1B - A direct observation of the abnormal behavior. Testing: ask a few users to try to log in to the remote system; create a threat model and attempt to replicate.

Branch 2B - Check to see if any individual system components have failed or show signs of failure.

Branch 3B - Is there a histogram of failures for this particular system - a knowledgebase to be searched or some other database of past events.

Final diagnosis
Make a declaration: the cause, the symptoms that led you to the declaration, and the next actions.
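The branching structure above could be captured as a simple nested mapping. This is only a hedged sketch: the branch names and questions are taken from the outline above, but the field layout is invented for illustration.

```python
# A minimal sketch of the broad-to-narrow diagnostic tree outlined above.
# The structure is illustrative, not a fixed taxonomy.
diagnosis_tree = {
    "issue": "Remote access failure",
    "symptoms": ["failed logins", "unavailable to users", "port open"],
    "branches": {
        "compromise": {
            "background": "identify the attack surface",
            "test": "replicate against a threat model",
        },
        "malfunction": {
            "background": "identify the system's components",
            "test": "check components for signs of failure",
        },
        "normal": {
            "background": "does this problem occur frequently?",
            "test": "search a knowledge base of past events",
        },
    },
}

def hypotheses(tree):
    """Return the candidate explanations still to be tested."""
    return sorted(tree["branches"])

print(hypotheses(diagnosis_tree))  # ['compromise', 'malfunction', 'normal']
```

The point of the structure is that no branch is discarded until its background questions and tests have actually been run.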

So yes, a procedural approach to incident response can and should be used, lest we make mistakes and be held liable.


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
Topic starter  

Okay, I can see now that I'm communicating in a manner that some folks are understanding…

You seem to have picked up on the issue right away, so maybe the medical taxonomy isn't such a bad idea.

To take the example we're using a bit further, the system in question was Win2K. The IT staff confirmed that the remote access port was still in LISTENING mode, via netstat. However, they never bothered to find out *what* was listening on that port.

The issue, as I see it, is that most responders don't know what information is available to them, and therefore they have no idea how to get it. If the OS doesn't show me something with the use of native tools, then where else can I go to get the info? A doctor can feel your pulse, but can also use a sphygmomanometer to get more detailed information about you…and a sphygmomanometer isn't a "native" tool that we're all born with.
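To make the "what is listening" gap concrete: on later Windows versions, `netstat -ano` ties each listening port to a process ID, which is exactly the question the IT staff never asked. A hedged sketch follows; the sample output and column layout are assumed to follow the `netstat -ano` format (Win2K's native netstat lacked the -o switch, which is why third-party tools were needed there).

```python
def pid_for_port(netstat_output, port):
    """Find the PID bound to a local port in `netstat -ano`-style output.

    Assumes lines shaped like:
      TCP    0.0.0.0:3389    0.0.0.0:0    LISTENING    1040
    """
    for line in netstat_output.splitlines():
        fields = line.split()
        # proto, local addr, foreign addr, state, PID
        if len(fields) >= 5 and fields[3] == "LISTENING":
            if fields[1].rsplit(":", 1)[-1] == str(port):
                return int(fields[4])
    return None

sample = """
  TCP    0.0.0.0:135     0.0.0.0:0    LISTENING    212
  TCP    0.0.0.0:3389    0.0.0.0:0    LISTENING    1040
"""
print(pid_for_port(sample, 3389))  # 1040
```

From the PID, the responder can then pull the process name and image path, rather than assuming the expected service is what answered.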

Okay, so that takes us back to formulating classes…I'll have to think about the structure. For right now, though, I think I'd like to stay away from a flow-chart decision-making process, as it's too much like a signature-based approach, and I'm afraid folks would get too locked into it. I'd rather teach folks to fish than just give them fish.


   
hogfly
(@hogfly)
Reputable Member
Joined: 21 years ago
Posts: 287
 

I don't know…is it that they don't know how to get the information using a non-native approach, or that they simply don't think to, because they a) forgot their training or b) have already jumped to a conclusion? I guess I've seen a bit of both.

A new responder will most likely only be as good as the courses he/she studied and the books they've read. A good responder will conduct his or her own research and tests and will gain diagnostic experience in doing so. Therefore, like any good incident responder, we know that preparation is the key. It really all boils down to the 6P principle.

I'd almost say a step back is needed before classes are derived. An index of terms and definitions should be collected and then organized. I agree that a flowchart is not the right way to go; rather, what's needed is a hierarchical structure that someone can refer to.

My example structure, while "flowchartish", should be dynamic and can only be filled out by someone with in-depth operational knowledge of the system(s) in question. I, as an incident handler, might understand more about Windows than the system administrator, but the system administrator should understand how Windows is implemented in their environment.

Unfortunately, everything does in fact have a signature, and we arrive at a declaration because our diagnosis has led us to the creation of said signature. When we compare against, say, an index of artifacts, we are in fact checking a signature. That is not to say that a signature-based approach to response is what we're after; rather, a dynamic structure or method that can lead to a declaration of identity or match (as I outlined in my post in the other thread). The signature is the identifier that something is in fact what we say it is. When we identify a new signature, we add it to our index of artifacts and can compare against it when we use the dynamic structure in another case. Thereby we may have an empirical approach.
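The "index of artifacts" idea in the preceding paragraph can be sketched in a few lines. This is a hedged illustration only: the signature names and indicator strings are fabricated, and a real index would carry far richer artifact descriptions.

```python
# Sketch of an artifact index: signatures accumulate as cases are worked,
# and observations from later cases are checked against the index.
artifact_index = {}  # signature name -> frozenset of indicators

def add_signature(name, indicators):
    """Record a newly identified signature in the index."""
    artifact_index[name] = frozenset(indicators)

def match(observed):
    """Return names of known signatures fully contained in the observations."""
    observed = set(observed)
    return [name for name, sig in artifact_index.items() if sig <= observed]

# Fabricated example entry and lookup.
add_signature("rdp_bruteforce", {"failed_logins", "port_3389_open"})
print(match({"failed_logins", "port_3389_open", "service_restart"}))
# ['rdp_bruteforce']
```

An empty match result is itself useful: it tells the responder they are outside the index and back in the diagnostic, hypothesis-testing mode.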


   
skip
(@skip)
Trusted Member
Joined: 20 years ago
Posts: 57
 

Interesting!
In the previous thread I suggested that classifying artifacts is as difficult as classifying vulnerabilities…

and here you are, running into the same problems.

And by the sound of it, you think that you are making headway…
try and implement it…

And just so this thread isn't totally pessimistic, some constructive advice:

Lock the thread and throw away the key.
Skip


   
hogfly
(@hogfly)
Reputable Member
Joined: 21 years ago
Posts: 287
 

Harlan,
I've been doing some thinking offline, and if this is something you'd like to collaborate on, I'd be willing to participate. If not, and you just want to throw ideas out there for response, that's cool too.

Skip,
I don't think any problems were run into. I've said this before…if we are to be taken seriously as a science, then we must treat it as one. Science comes with debates and disagreements. Theories are proven or disproven. Fact is, and I think Harlan has tried to stress this…we have nothing yet, so an honest effort is needed to prove, or attempt to prove, that it can be done. We can't just be naysayers.


   
az_gcfa
(@az_gcfa)
Estimable Member
Joined: 19 years ago
Posts: 116
 

I apologize for jumping into this thread; however, this subject is very interesting.

Based upon the discussion, if I were to classify this system, I would be forced to choose between an IDS and a vulnerability scanner. A better classification would be hybrid - a system-level IDS scanner.
No matter how you peel this onion, you are going to be returning to the signature discussion. A signature by any other name is still a signature.

Personally, I'm an advocate for baselines and configuration management policies and protocols. I always process any system I've worked on through a baseline filter (known good, known bad, other [basic file/data classification categories]). Yes, this is not practical in the middle of an IR scenario.

Now, is the purpose of this tool (methodology and repository) to be an IR tool, a forensic tool, or both? IDSs and vulnerability scanners all depend upon "a signature", regardless of the construct of the signature or the interrogation of the particular dataset. As such, every signature goes through a period of validation: positive, false-positive, negative, and false-negative testing.

If the basis for this tool is IR, then I think a different level of probability protocols would be acceptable. The reason for an increase in probability would be due to the availability of data and the timeframe for analysis of the data.

Now, why not treat this as a hybrid system by taking the strengths from both systems? By utilizing a combination of signature and detection protocols (a different form of signature), you can produce reports similar to most vulnerability reports:
signature identification, activation protocol, and probability. Then cross-reference information concerning the signature and the possibilities.

This would be an aid to IR personnel and forensic analysts. Also, the tool could be the basis for an expert system for help desk and training purposes.
Couple the above capability with hash databases for file system object validation and verification, and you have one heck of a tool.
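The baseline filter mentioned earlier (known good, known bad, other) combined with hash databases could look something like the following sketch. The hash sets here are fabricated for illustration; a real deployment would draw on reference sets such as the NIST NSRL.

```python
import hashlib

def classify(data, known_good, known_bad):
    """Place a file's contents into one of three baseline buckets by hash."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_good:
        return "known good"
    if digest in known_bad:
        return "known bad"
    return "other"

# Fabricated example hash databases.
good = {hashlib.sha256(b"calc.exe contents").hexdigest()}
bad = {hashlib.sha256(b"dropper contents").hexdigest()}

print(classify(b"dropper contents", good, bad))      # known bad
print(classify(b"unseen file contents", good, bad))  # other
```

The "other" bucket is where the analyst's time actually goes; the point of the filter is to shrink that bucket before the tedious manual work starts.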

I much prefer my computer to do all the boring tedious work – finding the golden needle in the haystack.


   
skip
(@skip)
Trusted Member
Joined: 20 years ago
Posts: 57
 

hogfly wrote:
Harlan,
I've been doing some thinking offline, and if this is something you'd like to collaborate on, I'd be willing to participate. If not, and you just want to throw ideas out there for response, that's cool too.

Skip,
I don't think any problems were run into. I've said this before…if we are to be taken seriously as a science, then we must treat it as one. Science comes with debates and disagreements. Theories are proven or disproven. Fact is, and I think Harlan has tried to stress this…we have nothing yet, so an honest effort is needed to prove, or attempt to prove, that it can be done. We can't just be naysayers.

I understand. And it is because I understand that I am "naysaying."

I have approached the issue of application and system vulnerability categorization in the same way. Taking a scientific approach, testing my thoughts, and then revising my thoughts based on test results….

So, I am not "JUST" a naysayer. But regardless, the best of luck to you. Sometimes the journey is better than the destination. And you may just need to see for yourself…which isn't so bad, because you will learn something along the way.

— And because you are so full of hope and the will to discover, I will share the one positive result of my efforts (which ultimately failed to find a good way to categorize application and system vulnerabilities).

You must base your categories and criteria for an artifact/signature (or whatever you want to call it) so that they fit a desired end result. Do not just give a base foundation.
For example, "This categorization is designed to prevent data disclosure"
"This categorization is designed to prevent data destruction"
"This categorization is designed to prevent future unauthorized access"

Skip
PS. Try not to digress into mnemonics.


   