
Forensics workstations and supercomputers?

13 Posts
8 Users
0 Likes
1,410 Views
(@qassam22222)
Posts: 155
Estimable Member
Topic starter
 

Hey folks…
What is the best company that sells forensics workstations (towers)?
And how can I build a supercomputer with a large number of GPUs to perform brute-force attacks?
I found this on the internet: https://sagitta.pw/hardware/gpu-compute-nodes/brutalis/

Is it effective? And how can I link 4 or 5 Brutalis machines to each other?

 
Posted : 07/08/2018 6:47 am
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

Is it effective? And how can I link 4 or 5 Brutalis machines to each other?

For 21 K bucks EACH, it must be.

Basically, you convince your boss (or organization, colleagues, financial advisor, etc.) to invest some 120,000+ dollars in a Magistos + 5 Brutalis'[1].

Speed comes at a price.

The problem (the financial one) with fast password crackers is that they are fast. 😯

Let's say that using your cluster of 5 Brutalis it takes you 1/40 (since 5*8=40 GPUs) of the time needed with a single GPU to crack the same password.

If you crack one password in one week on a normal, single-GPU system (with a cost/investment of roughly 1,000-1,200 bucks) and have a volume of cases (number of similar passwords to be cracked) of 26 per year (a little more than two per month, using the machine at 50% of its capability), each password cracked will cost roughly 1200/(3*26)≈15 US$/password (setting aside energy consumed for the moment, and assuming a three-year life for the computer).
If you happen to have only 3 passwords per year to crack, the cost is 1200/(3*3)≈133 US$/password, a lot of money, but still very reasonable.

With the cluster, you have the capability of cracking 52*40=2080 passwords per year (at 100% usage), or 1040 at 50% usage, that is, a password every 7*24/40=4.2 hours[2].
Do you have 1000 password cases per year?
If yes, 120,000/(1000*3)=40 US$/password, very good.

But if you have the same 26 cases per year, that will be 120,000/(26*3)≈1,538 US$/password, and if you have only 3 cases, that will be 120,000/(3*3)≈13,333 US$/password.
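The amortization above boils down to a one-line cost model. A minimal Python sketch using the figures from this post (3-year hardware life, energy and maintenance ignored, as in the discussion):

```python
# Amortized hardware cost per cracked password, using the figures above.
# Energy and maintenance are ignored, as in the post.

def cost_per_password(hardware_cost: float, cases_per_year: int, life_years: int = 3) -> float:
    """US$ per password: hardware cost spread over total cases in its lifetime."""
    return hardware_cost / (cases_per_year * life_years)

print(round(cost_per_password(1200, 26)))       # single-GPU box, 26 cases/yr -> 15
print(round(cost_per_password(1200, 3)))        # single-GPU box, 3 cases/yr -> 133
print(round(cost_per_password(120_000, 1000)))  # cluster, 1000 cases/yr -> 40
print(round(cost_per_password(120_000, 26)))    # cluster, 26 cases/yr -> 1538
```

The takeaway is the same as in the post: the cluster only pays off if the case volume is high enough to keep it busy.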

jaclaz

[1] And you will additionally need a very good (and redundant) air conditioning/cooling system, as you are essentially putting a 5-10 kW electric stove in the room.
[2] BTW, this also means that the machine needs to be attended or semi-attended 24 hours a day, 7 days a week.

 
Posted : 07/08/2018 7:39 am
Wardy
(@wardy)
Posts: 149
Estimable Member
 

Why not build your own supercomputer?

https://www.techradar.com/uk/news/best-mining-motherboards

 
Posted : 07/08/2018 8:08 am
minime2k9
(@minime2k9)
Posts: 481
Honorable Member
 

The best workstations we have found are built by Lenovo; we go to them directly to create the specification.
Their P920 machines will take about 10 hard disk drives and dual Xeon CPUs.

Ours came in at about 6k with dual quad-core Xeons, 128 GB of RAM, a basic graphics card to provide multiple displays, 4 x 8TB hard disks, 4 x 6TB hard disks, and 2 x SSDs (one 1TB OS drive and one 2TB working drive).

We use them 24 hours a day and have had next to no failures over the past 6 years.

 
Posted : 07/08/2018 9:56 am
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

The best workstations we have found are built by Lenovo; we go to them directly to create the specification.
Their P920 machines will take about 10 hard disk drives and dual Xeon CPUs.

Ours came in at about 6k with dual quad-core Xeons, 128 GB of RAM, a basic graphics card to provide multiple displays, 4 x 8TB hard disks, 4 x 6TB hard disks, and 2 x SSDs (one 1TB OS drive and one 2TB working drive).

We use them 24 hours a day and have had next to no failures over the past 6 years.

Hmmm, I believe you have some dates (or disk capacities) off. 😯

6 years ago (circa 2012) there were NO 6 TB hard disks (those were announced at the end of 2013 and actually available in 2014), let alone 8 TB ones (announced in the second half of 2014, actually shipped in late 2014 or at the beginning of 2015).
Same goes for SSDs; AFAICR the first 1 TB ones were 2011-2012, but 2 TB ones were 2013, possibly a bit later on the market.

jaclaz

 
Posted : 07/08/2018 12:01 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

I'm pretty sure workstations are not meant to run 24/7; they sell servers for that, right? Just install all your forensic gadgets on a server and use RDP.

Yep, that's the idea 🙂 , but I have (and have had) "normal" workstations running 24h, 7/7 (not specifically forensic workstations or GPU cracking machines) that, in their total ignorance of this fact 😉 , ran (some are still running) smoothly for several years, additionally with no reboots[1].

jaclaz

[1] Exception made for some periodic ones, for cleaning dust, and for the occasional replacement of hard disks and PSUs.

 
Posted : 07/08/2018 12:07 pm
minime2k9
(@minime2k9)
Posts: 481
Honorable Member
 

Hmmm, I believe you have some dates (or disk capacities) off. 😯

6 years ago (circa 2012) there were NO 6 TB hard disks (those were announced at the end of 2013 and actually available in 2014), let alone 8 TB ones (announced in the second half of 2014, actually shipped in late 2014 or at the beginning of 2015).
Same goes for SSDs; AFAICR the first 1 TB ones were 2011-2012, but 2 TB ones were 2013, possibly a bit later on the market.

jaclaz

So, more specifically: we have used Lenovo machines for 6 years, in various incarnations, over that period. We work on a 3-4 year refresh cycle, so we started with one machine (not a P920, possibly a D30), then acquired some more machines (for new staff etc.) and had some P900s and some P910s. Originally the machines had 2TB and 4TB disks with 1TB SSDs. These have been upgraded/refreshed as we replaced old hard disks and ordered new machines. At some point we were using 4TB and 6TB drives before upgrading to 6TB and 8TB ones. In most cases the larger drives became the smaller drives in the new configuration. Specifically, though, I am speaking about the reliability of the machines rather than of the disks, as disk reliability varies by brand etc.

Overall, we have had few issues with all the Lenovo builds we have had over this 6-year period. All of the original machines are still working in our unit, although most have been 'retired' to other processing tasks rather than serving as main machines for investigators.

 
Posted : 07/08/2018 1:23 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

You love mocking people and hiding it between technical details, don't you? 😀
Some computers you have there; we should get some of them too.

Naah, I am not hiding at all, only stating facts out of my personal experience.

Surely the people making (and selling) servers love to hint that non-servers (including the workstations they themselves sell) are not designed for continuous use (and very likely this is accurate); still, actual hardware is sometimes much better and lasts longer than what the designers expect.

Many, many years ago (circa 2002/2003), I had this problem to solve: a couple of machines that needed to operate 24h, 7/7 (think of something like a POS; no actual need for "computing power") with little or no maintenance.
Previous attempts (using normal motherboards) had failed mainly because of one thing: dust (stops/crashes due to processor overheating).
So I bought a bunch of VIA Epia motherboards with passive cooling and, voilà, no more issues with machines stopping because dust crippled the processor heatsink and its fan.
OS: Windows NT 4.0 (and later Windows 2000). We never (and I mean never) had any BSODs, except due to some hardware issue, and those were mainly of two kinds:
1) failed hard disks (they were PATA/IDE), let's say one failure per machine every 4 or 5 years;
2) failed PSUs (I foolishly originally chose a smallish case that used non-standard form factor PSUs), let's say one failure per machine every 3-4 years, until I bought a few 1U (rack server 😉 ) PSUs and *somehow* fitted them, and never had another PSU issue after that.

These machines were decommissioned recently (2017) after 14 or 15 years of continuous service. Maybe I have been lucky, and surely those Epia mainboards were not, at the time, as cheap as "normal" motherboards + CPU, but they were not that much more expensive AFAICR, maybe some 10-15% more.

jaclaz

 
Posted : 07/08/2018 2:25 pm
UnallocatedClusters
(@unallocatedclusters)
Posts: 577
Honorable Member
 

I recommend considering the following setup:

1) A server with as much RAM as one can afford, an SSD for the operating system, and dual Xeon processors with as many cores as one can afford. As many USB 3.0 or eSATA ports as possible.

2) An external SATA hard drive enclosure/docking station (USB 3.0) connected to the server. The docking station will hold the raw SATA drives holding your forensic images.

3) A Synology DiskStation DS1815+ 8-bay diskless NAS server with 8 x 1TB SSDs to hold databases.

One can very easily connect the Synology NAS to Amazon Glacier storage for offsite disaster recovery.

 
Posted : 07/08/2018 4:47 pm
MDCR
(@mdcr)
Posts: 376
Reputable Member
 

Is password cracking your only concern? Can't high-performance GPU boxes be rented from some cloud provider?

If I built a new forensics workstation today, I'd get a Xeon-based box with PCI Express SSDs for excellent I/O performance, at least 64 GB of RAM (more if the budget allows), lots of hard drive space (in RAID), lots of expansion/connectivity options, drive analysis software, VMware Workstation, PCAP parsing abilities, and a backup solution/separate storage server.

But that's just me thinking about versatility. I used to write multithreaded applications that could use Xeon-based processors quite effectively; it is far from normal for forensics programs to utilize Xeon-based systems well, though.
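On whether the GPU count matters for cracking specifically: a minimal back-of-the-envelope sketch in Python. The hash rates below are assumed, illustrative figures (not benchmarks of any real hardware), and the cluster is assumed to scale linearly:

```python
# Worst-case exhaustive-search time for a given keyspace and hash rate.
# All rates are assumed, illustrative figures, not real benchmarks.

def crack_time_hours(keyspace: int, hashes_per_sec: float) -> float:
    """Hours needed to try every candidate at the given rate."""
    return keyspace / hashes_per_sec / 3600

keyspace = 36 ** 8          # 8-char lowercase+digit password: ~2.8e12 candidates
single_gpu = 1e6            # assumed 1 MH/s for a slow hash on one GPU
cluster = 40 * single_gpu   # 5 boxes * 8 GPUs, ideal linear scaling

print(f"single GPU: {crack_time_hours(keyspace, single_gpu):.0f} h")   # ~784 h
print(f"40-GPU cluster: {crack_time_hours(keyspace, cluster):.1f} h")  # ~19.6 h
```

For fast hashes the same keyspace falls in seconds on a single GPU, which is why the case volume (per the cost discussion earlier in the thread) matters more than raw speed.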

 
Posted : 07/08/2018 10:14 pm