
IEF False Positives

9 Posts
5 Users
0 Likes
456 Views
4Rensics
(@4rensics)
Posts: 255
Reputable Member
Topic starter
 

Hi,

I'm just wondering if somebody could help me understand some of the garbage IEF is pulling out.

I have an export that is claiming (after a keyword search) xx number of Google search hits (which look correct, as they are showing as queries), but then nearly the same amount again in a Chrome/360/Opera profile, even though 360 and Opera are not installed.
On closer inspection, they are the same Google queries. Why is it reporting this? It's confusing matters with pointless false positives and false reporting of browsers/hits! (I know it says "Chrome"/360… but surely it should be under Chrome Browser History?)

Please tell me it's not just me seeing this constantly? I'd hate to leave it out of the initial search for fear of missing something, but I don't want it in when it's duplicating data in browsers that aren't there!

Thanks,
4R

 
Posted : 22/01/2016 3:52 pm
Chris_Ed
(@chris_ed)
Posts: 314
Reputable Member
 

As far as I understand it, it's to do with provenance; certain deleted SQLite databases have a similar structure, so IEF "hedges its bets" rather than guessing which app the data came from.

 
Posted : 22/01/2016 6:39 pm
4Rensics
(@4rensics)
Posts: 255
Reputable Member
Topic starter
 

Thanks. That kinda makes sense. Some research showed that they are "loosely" based on IE (from the good old days of browsers), which would fit.

It's a little annoying, but knowing about it helps me (almost) ignore it!

 
Posted : 22/01/2016 7:13 pm
(@mcman)
Posts: 189
Estimable Member
 

Hey 4Rensics,

Chris is correct. That artifact for Chrome/360/Opera is carved, and all three of those apps have the same structure when we carve it, so there's no way for us to tell with 100% certainty which app it came from. Usually it's pretty easy for the examiner to tell: like you said, the other two aren't installed on the computer, or if you look at the source it may still be under the Chrome folder location, which makes it more than likely Chrome data.

From an automation standpoint, we can't be certain and we definitely don't want to guess and be wrong so we will always err on the side of caution.
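The "look at the source" tie-breaker Jamie describes could be sketched as a simple heuristic, roughly like this (hypothetical code, not how IEF works internally; the profile-folder fragments are illustrative and real installs vary):

```python
def guess_browser(source_path):
    """Heuristically attribute a carved record to a browser based on the
    folder it was recovered from. Returns None when the path gives no
    hint (e.g. unallocated space), in which case all candidates
    (Chrome/360/Opera) should be reported, as IEF does."""
    path = source_path.lower().replace("\\", "/")
    hints = {  # illustrative profile-folder fragments
        "Chrome": "/google/chrome/",
        "Opera": "/opera software/",
        "360": "/360chrome/",
    }
    for browser, fragment in hints.items():
        if fragment in path:
            return browser
    return None  # can't tell -- report all candidates
```

If the carved record came out of unallocated space with no recognisable path, the honest answer is "could be any of the three", which is exactly the behaviour being discussed.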

Feel free to reach out if you need any more help.

Jamie
jamie.mcquaid@magnetforensics.com

 
Posted : 22/01/2016 7:18 pm
4Rensics
(@4rensics)
Posts: 255
Reputable Member
Topic starter
 

That's great, thanks.

Personally I'm OK with it; however, we have some new viewers, and it's tricky explaining it to them (and being certain and confident that what I'm telling them is correct).

From the forensic side, we can quickly check which browsers are installed, like you say, and eliminate the rest.

Thanks again.

4R

 
Posted : 22/01/2016 7:35 pm
keydet89
(@keydet89)
Posts: 3568
Famed Member
 

Fascinating thread…

 
Posted : 22/01/2016 7:58 pm
steve862
(@steve862)
Posts: 194
Estimable Member
 

Hi,

I know I'm late joining this thread, but I have noted irregularities in IEF for quite a long time. I find IEF a great 'heads up' tool, but I do my provenance checking by other means.

In the wake of ISO 17025 and software validation, I've started to think of my software tools as being in two distinct categories: finding/previewing tools and evidencing tools.

I was then going to suggest that spending time validating finding/previewing tools is a complete waste of time, whereas validating evidencing tools is only a moderate waste of time!

Steve

 
Posted : 22/09/2016 6:20 pm
Chris_Ed
(@chris_ed)
Posts: 314
Reputable Member
 

I think IEF is an almost impossible tool to validate for ISO 17025, sadly. There would be too many things to test.

Out of interest Steve, what do you use to validate findings? Do you check it against the offset? Do your own DB queries? Use NetAnalysis *shudder*?
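For anyone wondering what "do your own DB queries" looks like in practice, here is a minimal sketch for an intact (non-carved) copy of Chrome's History file. The urls table and its WebKit-epoch last_visit_time column are Chrome's actual schema; treat the rest as illustrative:

```python
import sqlite3
from datetime import datetime, timedelta

def chrome_history(db_path, limit=10):
    """Cross-check a tool's browser-history output by querying a *copy*
    of Chrome's History SQLite database directly. Chrome stores
    last_visit_time as microseconds since 1601-01-01 (WebKit epoch)."""
    rows = []
    conn = sqlite3.connect(db_path)
    for url, title, us in conn.execute(
            "SELECT url, title, last_visit_time FROM urls "
            "ORDER BY last_visit_time DESC LIMIT ?", (limit,)):
        visited = datetime(1601, 1, 1) + timedelta(microseconds=us)
        rows.append((url, title, visited))
    conn.close()
    return rows
```

Always work on a forensic copy, never the original: opening the live file can touch its WAL/journal state.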

 
Posted : 22/09/2016 8:54 pm
(@mcman)
Posts: 189
Estimable Member
 

That's really a problem with any tool that presents analyzed data rather than just the raw data or file system structure (even the file system is an analyzed result, just a less volatile one).

We do extensive testing across our own data, but there are an infinite number of possibilities out there; there's no way tools can account for every one. Certifications would be even more difficult: they don't scale to data and apps that don't also follow the same certification for their own testing. I have 14 versions of WhatsApp for Android that were released in August 2016 alone. Testing and validation take a lot of time and are still not perfect, which is why the onus has always been on the examiner to test/validate their own data/evidence.

The best thing the tools can do is make it easy for you to validate by telling you exactly where it found the data so that you can verify with another tool or by manually testing yourself.
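Jamie's point, that a tool reporting the exact source offset makes independent verification easy, could look like this in practice (a minimal sketch; the image name, offset, and byte pattern below are made up for illustration):

```python
def verify_at_offset(image_path, offset, expected_bytes):
    """Read raw bytes at the offset a tool reported and confirm the
    artefact really exists there, independent of the tool's parsing."""
    with open(image_path, "rb") as img:
        img.seek(offset)
        return img.read(len(expected_bytes)) == expected_bytes

# Hypothetical usage against a raw image:
# verify_at_offset("evidence.raw", 0x1A2B3C40,
#                  b"http://www.google.com/search?q=")
```

This only confirms the bytes are where the tool said; interpreting them (live record vs. slack vs. coincidental pattern) is still the examiner's job.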

Even if there was a standard set of data to compare against, it would be out of date the following week.

At the end of the day, a raw hex dump is reliable and can be easily certified, but no examiner wants to wade through it manually for every case, nor do they have the time given most caseloads.

Jamie
jamie.mcquaid@magnetforensics.com

 
Posted : 22/09/2016 9:49 pm