No digital domain, whether server or endpoint, can be 100% protected from attack. For this reason we internally developed a new approach called RWD (Regularly Wiping Devices). Think of a process that starts with a clean device; over time, unnoticed, the device gets infected. We then forensically wipe the device in depth and flash back a clean image, including OS, apps, and config, to restart.
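In rough outline, one RWD pass looks like the minimal Python sketch below. The two helpers are hypothetical stubs standing in for whatever imaging/deployment tooling is actually used; only the orchestration of the cycle is shown.

```python
# Minimal sketch of one RWD pass. Both helpers are hypothetical stubs
# standing in for real imaging/deployment tooling.

def wipe_in_depth(device: str) -> None:
    print(f"[{device}] forensically wiping all writable storage")  # stub

def flash_golden_image(device: str, image: str) -> None:
    print(f"[{device}] flashing clean image {image} (OS + apps + config)")  # stub

def rwd_cycle(device: str, golden_image: str) -> None:
    """One Regularly-Wiping-Devices pass: return the endpoint to a known-clean state."""
    wipe_in_depth(device)
    flash_golden_image(device, golden_image)

rwd_cycle("client-042", "golden-2017Q3.wim")  # device name and image are examples
```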
But the approach has a big problem: how do you disentangle the data from the rest? Today it is unclear what is user data, what is config, and what is application.
Do you have a good idea of how to split the data off during use (deeply cleaned) into a trustroom?
Protecting the trustroom is a demanding task; we focus on strong encryption and a manual key-exchange handshake process.
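To make the idea concrete, here is one possible shape of such a split as a minimal Python sketch. The path classification and trustroom location are assumptions, not a finished design; the encryption uses the real `cryptography` package (Fernet), with the key exchanged manually out-of-band as described above.

```python
# Minimal sketch: user data (under assumed roots) is encrypted into the
# trustroom, so everything outside it stays wipeable. The Fernet key is
# generated once and exchanged manually/out-of-band. Flat output naming
# is kept naive for brevity.
from pathlib import Path
from cryptography.fernet import Fernet

USER_DATA_ROOTS = [Path.home() / "Documents", Path.home() / "Desktop"]  # assumption

def sync_to_trustroom(trustroom: Path, key: bytes) -> None:
    """Encrypt user files into the trustroom before a wipe."""
    f = Fernet(key)
    trustroom.mkdir(parents=True, exist_ok=True)
    for root in USER_DATA_ROOTS:
        for src in root.rglob("*"):
            if src.is_file():
                (trustroom / f"{src.name}.enc").write_bytes(f.encrypt(src.read_bytes()))

# Key generated once, then exchanged by hand (paper, hardware token, ...):
# sync_to_trustroom(Path("/mnt/trustroom"), Fernet.generate_key())
```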
How do you keep your systems clean?
An old problem, if you ever used Norton Ghost to deploy clients in the late '90s. Basically: wipe everything and keep the user profile and data on a server share. Wipe as needed.
(In my case it was more about wiping after a course had left and preparing for new students.)
Nowadays people can use USB sticks (which can be encrypted) and cloud storage (even internal ones).
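The server-share pattern is easy to script; a minimal Python sketch, assuming a Windows client and a hypothetical share path:

```python
# Minimal sketch of the Ghost-era pattern: the profile lives on a server
# share so the client itself is always wipeable. Share and profile paths
# are hypothetical examples.
import shutil
from pathlib import Path

def stash_profile(user: str, share: Path = Path(r"\\fileserver\profiles")) -> None:
    """Copy the local profile to the share before the machine is wiped."""
    src = Path(r"C:\Users") / user
    shutil.copytree(src, share / user, dirs_exist_ok=True)  # survives the wipe
```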
Yup, familiar with Norton Ghost; it was outstanding in the '90s. Today MS SCCM is a good tool, but Office 365, which runs hybrid local and cloud, is difficult. Replicating from our own Exchange server works well, but UCC towards other units fails. Recovering one profile is OK, but en masse it is time-consuming and needs large resources server-side. We tested with 20 clients and it took us 10 hours, which is difficult to run overnight.
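For what it's worth, the back-of-the-envelope arithmetic behind that figure (assuming throughput scales linearly, which it may not):

```python
# 20 clients in 10 hours = 2 clients/hour effective throughput.
# Fleet size and the overnight window are hypothetical examples.
clients, hours = 20, 10
rate = clients / hours                 # 2.0 clients/hour as measured
fleet, window = 200, 8                 # e.g. restore 200 clients in one night
needed = fleet / window                # 25.0 clients/hour required
print(f"measured {rate:.1f}/h, need {needed:.1f}/h -> {needed / rate:.1f}x capacity")
```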
Any revolutionary idea out in the wild for handling this issue better?
But the approach has a big problem: how do you disentangle the data from the rest? Today it is unclear what is user data, what is config, and what is application.
Given multiple devices and data sharing, I would suggest that your wiping approach is redundant in 2017. What's to stop your shiny new device from immediately getting reinfected the minute the user starts syncing their data back onto it or plugs that USB stick back in?
Focus on endpoint protection - yes, it won't catch everything - in conjunction with network sensors/probes/AV/antimalware/good credentials management and so on. Good security needs multiple layers; frequently wiping devices that might not even be infected just seems like overkill to me.
Yup, familiar with Norton Ghost; it was outstanding in the '90s. Today MS SCCM is a good tool, but Office 365, which runs hybrid local and cloud, is difficult. Replicating from our own Exchange server works well, but UCC towards other units fails. Recovering one profile is OK, but en masse it is time-consuming and needs large resources server-side. We tested with 20 clients and it took us 10 hours, which is difficult to run overnight.
Any revolutionary idea out in the wild for handling this issue better?
* How much security does the certificate solution actually add to the situation?
* How many problems does it create?
Balance.
The most important thing is to keep track of what is going on, with logs and other means, so you can plug the hole if an attacker gets in. Sprinkle some whitelisting, system hardening, and segmentation on top of that and you're ahead of the game. There is generally no need for clients to be able to talk to each other, and if you read up on what more qualified attackers do… well.
(It is more important to stop bad guys than pentesters simulating some theoretical scenario.)
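To illustrate the client-to-client point: on a Linux gateway, dropping forwarded traffic between clients is a one-liner. A minimal sketch, assuming the clients sit on a routed segment so their traffic actually crosses the gateway; the subnet is a hypothetical example:

```python
# Drop forwarded client-to-client traffic on a Linux gateway. Assumes a
# routed client segment (same-LAN traffic never reaches FORWARD); the
# subnet is a hypothetical example. The iptables flags shown are standard.
import subprocess

CLIENT_NET = "10.10.20.0/24"

def isolate_clients() -> None:
    subprocess.run(
        ["iptables", "-A", "FORWARD", "-s", CLIENT_NET, "-d", CLIENT_NET, "-j", "DROP"],
        check=True,
    )

isolate_clients()
```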
Very good points - of course you are both right. With PAN FW, Splunk, Paessler, and Traps from PAN we have multiple layers and segmentation. But to trust in your security is the beginning of the end. Starting each day from 'we ARE infected' keeps better track and behaviour. People get sick and tired of being constantly sensitised by all the cybersec measures, and that's the problem. A user is limited in how responsible they can feel for security; the tech should keep the level of security high.
There must be an approach that lets us be 100% sure that, at a certain moment in time, we are clean.
Thinking outside the box and searching for a new game-changing approach is my goal. Cyber deception I don't like, and moving from prevention to detection only is unacceptable for us here.
A new way of prevention is needed.
Whether wiping is effective depends on the threat model, and effectiveness cannot be expected against the typical undetected compromise; there is no stateless hardware anymore (firmware in UEFI, disk controllers, and peripherals can survive a disk wipe).
Therefore, a resource-intensive wiping approach against a merely suspected attack is an economic shot in the dark. Re-establishing operations after a compromise, or any sort of incident response, usually costs huge amounts of money compared to tight security budgets.
Routinely taking these measures will affect your ability to counter an actual compromise. In particular, you significantly shorten your look-back period for investigating an incident, unless you image and archive every system that is reset.
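Imaging before the reset can be as simple as streaming the disk to an archive and recording a hash for integrity; a minimal sketch, with hypothetical device and archive paths:

```python
# Stream a block device to an archive file and record a SHA-256, so the
# look-back trail survives the wipe. Paths are hypothetical examples;
# this needs read access to the raw device.
import hashlib
from pathlib import Path

def archive_disk(device: str, archive: Path) -> str:
    digest = hashlib.sha256()
    with open(device, "rb") as src, open(archive, "wb") as dst:
        while chunk := src.read(1024 * 1024):   # 1 MiB chunks
            digest.update(chunk)
            dst.write(chunk)
    return digest.hexdigest()

# e.g. archive_disk("/dev/sdb", Path("/srv/forensics/client-042.img"))
```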
In my opinion, resources would be better spent on developing the ability to detect a suspected attack and then determine whether action is needed. This quite certainly exceeds the reach of a wiping approach, because an attack that is successfully countered through wiping is also relatively easy to detect.
Nonetheless, I agree that in certain contexts there is a need to start with a fresh installation at some point. To speed things up, I find it unavoidable to swap in pre-installed storage media instead of relying on network-based deployment solutions. Sometimes it can be necessary to start all over again within minutes, simply due to human error or whatever.
Today MS SCCM is a good tool, but Office 365, which runs hybrid local and cloud, is difficult.
OneDeploy does something rather like that. Don't know how good it is with Office 365, though.
Thank you! - Will check this out