
RAID Metadata  

aandroidtest
(@aandroidtest)
Junior Member

Is there any specific metadata present on RAID disks to signify they are part of a RAID system?

I saw something like "LVM" (Logical Volume Manager) in some sectors. But other than that, what other metadata is present on the disk to prove that it is part of a RAID system?

Forensic tools can rebuild the RAID if all the disk images are available. If one is missing, can they still detect it? How can I tell what the RAID config is from the disks, if possible?

Quote
Posted : 06/06/2018 3:12 am
athulin
(@athulin)
Community Legend

Is there any specific metadata present on RAID disks to signify they are part of a RAID system?

Yes and no. Yes: otherwise a particular RAID 'device' (hardware or software) would not be able to say 'this drive is not initialized' or 'this drive is not part of the RAID'. No: RAID in general is not specified down to the level of individual fields that could be presumed unique across all possible RAID implementations.

I saw something like "LVM" (Logical Volume Manager) in some sectors.

As far as I understand, there's nothing that prevents LVM and RAID from coexisting. Assuming that you are identifying an LVM setup correctly, of course – without further details you may just have a false positive.

Forensic tools can rebuild the RAID if all the disk images are available.

No. They can only rebuild RAIDs they recognize, not all RAIDs. They can also rebuild the RAIDs that their user can identify and specify in a manner that makes sense to the tool.

How can I tell what the RAID config is from the disks, if possible?

A particular RAID implementation may do it in a similar way to how LVM identifies that a disk is LVM: not by looking for 'LVM' in some sectors, but by looking for the exact metadata structures that LVM uses, such as the LVM metadata header. (You can find some info about that – I think – here: http://people.redhat.com/agk/talks/linuxtag_2006/LVM2-LinuxTag2006.html , particularly the part about Metadata. However, it's old, so there could easily be later modifications, and source code is always preferable.)

Decide which RAID implementation you're interested in, then research that one.

ReplyQuote
Posted : 06/06/2018 5:31 am
JaredDM
(@jareddm)
Active Member

Most often there will be RAID metadata recorded on the disks which, if properly interpreted, can tell you a lot about the array. However, the metadata of hardware RAID is almost always in a proprietary format that is difficult to read directly. We've done a fair bit of examination of various metadata formats so we can read out certain settings in our data recovery cases, but there's still often a lot of guesswork.

Some RAID cards will have in plain text the drive number in the array. Others will actually have serial numbers of all RAID member drives stored in the metadata (albeit not always in an easy to read format).

If you can determine the original RAID card type and acquire one of them, you can often just connect a clone of the drive to the card and see all the original settings through the RAID BIOS. Occasionally we've had to do that to expedite recovery for larger arrays where drive order starts to have too many possibilities.

If it's any sort of Linux/Unix-based RAID, you can probably get most of the metadata out by simply running mdadm --examine against the drive.

ReplyQuote
Posted : 10/06/2018 10:50 pm
passcodeunlock
(@passcodeunlock)
Senior Member

There are quite a few tools with auto-detection features that can rebuild hardware or software RAIDs from raw images.

We had great success with ReclaiMe RAID Recovery, Runtime's RAID Reconstructor, R-Studio, etc.

For software RAIDs, as JaredDM already posted, mdadm is your friend )

ReplyQuote
Posted : 10/06/2018 11:00 pm
jaclaz
(@jaclaz)
Community Legend

Is there any specific metadata present on RAID disks to signify they are part of a RAID system?

I saw something like "LVM" (Logical Volume Manager) in some sectors. But other than that, what other metadata is present on the disk to prove that it is part of a RAID system?

Forensic tools can rebuild the RAID if all the disk images are available. If one is missing, can they still detect it? How can I tell what the RAID config is from the disks, if possible?

It depends on the type of RAID, of course.

A "recoverable" RAID (such as an example 5 or 6 ) can always be rebuilt even if one (or more) images are missing or corrupted, after all that is the whole pooint of a "recoverable" RAID.

And it is not like there are thousands of millions of possible configurations; all in all there are just a bunch of them, so even testing them all blindly won't take forever.

See also
https://www.forensicfocus.com/Forums/viewtopic/t=12274/
https://www.forensicfocus.com/Forums/viewtopic/p=6583245/

jaclaz

ReplyQuote
Posted : 11/06/2018 12:22 pm
JaredDM
(@jareddm)
Active Member

And it is not like there are thousands of millions of possible configurations, all in all there are just a bunch of them so even testing them all blindly won't take forever.

If you're talking about 2-4 drives in a RAID 5 then yes, you are correct. However, if we're talking about large arrays then I'd beg to differ. Here are some numbers for you on the possible number of drive-order combinations:

4 Drives = 24 possible combinations of drive order
6 Drives = 720 possible combinations
8 Drives = 40,320 possible
10 Drives = 3,628,800 possible
12 Drives = 479,001,600 possible

As you can see, the number of possible drive orders grows factorially. So by the time we're working on a 16-drive array we're into the tens of trillions of possibilities and brute force becomes impractical.
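The counts above are simply n! (the permutations of n drives), which is easy to verify:

```python
import math

# Number of possible drive orders for an n-drive array: n! permutations.
for n in (4, 6, 8, 10, 12, 16):
    print(f"{n:2d} drives: {math.factorial(n):,} possible orders")
# 4! = 24, 12! = 479,001,600, and 16! is already about 2.1e13.
```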

That's just the drive order. Then there are the other factors, which all multiply the complexity. For RAID 5 it's not that many: just the parity rotation scheme (4 different possibilities), block size (around a dozen possibilities), and parity delay (not used too often). But if it's RAID 6, there are literally hundreds of ways it's implemented, and that's assuming you know the drive order. Is it Reed-Solomon or double XOR? Does the XOR parity block include the RS block? Is the RS block before or after the parity block? Is it single-step or wide-step parity rotation?

That's why we've got to spend time reverse engineering this stuff so we can figure out some of it and only have to brute force the final bits. I've had to handle RAID cases where there were literally trillions of possible ways it could be combined. The only way we get it done is by reverse engineering the metadata, analyzing the layout of file system structures, and often writing custom software to brute-force what we can't figure out easily.
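The brute-force part of that workflow can be sketched very simply: try every drive order, reassemble, and test the result against something recognizable such as a file-system signature. This toy version works on one stripe per disk and takes a hypothetical caller-supplied plausibility check; real tools obviously interleave full stripes and also iterate over block size and parity rotation.

```python
from itertools import permutations

def find_drive_order(stripes_by_disk, looks_valid):
    """Try every drive order for a striped set and return the first
    permutation whose reassembled data passes a plausibility check
    (e.g. a recognizable file-system signature).

    `stripes_by_disk` maps a disk id to one stripe-sized chunk read
    from that disk; `looks_valid` is a hypothetical predicate supplied
    by the caller."""
    disks = list(stripes_by_disk)
    for order in permutations(disks):
        candidate = b"".join(stripes_by_disk[d] for d in order)
        if looks_valid(candidate):
            return order
    return None
```

This is exactly why the factorial numbers above matter: `permutations` over 16 disks is the whole 16! search space unless metadata or file-system analysis pins some drives down first.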

ReplyQuote
Posted : 11/06/2018 3:11 pm
jaclaz
(@jaclaz)
Community Legend

If you're talking about 2-4 drives in a RAID 5 then yes, you are correct. However, if we're talking about large arrays then I'd beg to differ. Here are some numbers for you on the possible number of drive-order combinations:

4 Drives = 24 possible combinations of drive order
6 Drives = 720 possible combinations
8 Drives = 40,320 possible
10 Drives = 3,628,800 possible
12 Drives = 479,001,600 possible

As you can see, the number of possible drive orders grows factorially. So by the time we're working on a 16-drive array we're into the tens of trillions of possibilities and brute force becomes impractical.

Sure ) I am talking about what the OP is likely to find in the real world.
I.e. IMHO, 4-6 disks represent (outside enterprise datacenters) 95% to 99% of the RAIDs the OP will ever see in his life.

And I don't consider drive order a common "variable".

I mean, before taking the images it is normal to try to understand which order the disk drives are in, and name the images image_01, image_02, etc.

And usually these drives/images come from a given (hardware) RAID system that only has a bunch of options.

I will have to confess however that I have rarely (like in "never" 😯 ) myself seen a RAID with more than 6 disks; or, if you prefer, if you have something more complex than a 4- or 6-disk RAID, you won't call me (nor the OP) for imaging or recovery. wink

My earlier post is to be read in the context of this one-week-old thread by the same OP:
https://www.forensicfocus.com/Forums/viewtopic/t=16687/
No need to scare him more than the bare minimum …

jaclaz

ReplyQuote
Posted : 11/06/2018 6:12 pm
passcodeunlock
(@passcodeunlock)
Senior Member

Real-life practice shows that "collapsed" or faulty RAIDs have about a 50% chance of recovery, no matter the RAID type or the number of disks used.

Instead of relying on complicated RAIDs, which can still fail (even if the chances are low), always have a RAID plus an external (physically separated) backup!

ReplyQuote
Posted : 11/06/2018 9:17 pm
jaclaz
(@jaclaz)
Community Legend

Real-life practice shows that "collapsed" or faulty RAIDs have about a 50% chance of recovery, no matter the RAID type or the number of disks used.

Instead of relying on complicated RAIDs, which can still fail (even if the chances are low), always have a RAID plus an external (physically separated) backup!

Yep ) , those are lessons #1 and #2 in RAID class 😯
1) DO NOT mistake a RAID setup for a proper backup strategy.
2) RAID can (and WILL) fail (sooner or later), and backups will also fail (sooner or later), which is why you should always have more than one backup, before and besides any RAID setup you may have.

Nowadays (with the rather common occurrence of crypto-malware), "offline" has to be added to those backups.

jaclaz

ReplyQuote
Posted : 12/06/2018 8:02 am