Magnet AUTOMATE Enterprise and How It Can Streamline Workflow for the Corporate World

Trey: Hey everyone. Thanks for tuning in today to the Magnet Summit 2022. Today…my name is Trey Amick and with me I have Tim McAnnany from Qualcomm. Thanks for joining in today, Tim, and doing this with us. But we’re going to be talking today about Magnet AUTOMATE Enterprise, which we launched earlier this year.

And we’re going to really talk about how to streamline your workflow, your investigations within that corporate setting, really kind of dig into that…really want to kind of talk about what Qualcomm does and go on from there. So to kick things off today, let me go ahead and pass the floor over to Tim so he can introduce himself.

Tim: Thank you very much, Trey. My name’s Timothy McAnnany, I’m a cyber investigator at Qualcomm; I work on the cyber investigations team. Before that I was a counterintelligence special agent with the US Army: did a lot of work in the US, did a deployment to Afghanistan. But I’ve been in IT most of my adult life. Yeah.

Trey: Awesome. And if you need to reach out to Tim afterwards, there is his LinkedIn and email as well. And then quickly I’ll introduce myself. So, name’s Trey Amick, I’m the director of the forensics consultants team here at Magnet. There’s my email, LinkedIn, Twitter, however you might want to reach out.

I’ve been with Magnet now since 2018. My background is a little varied. I spent some time in corporate investigations. Did education awareness training as well. Through college I worked at Apple, so definitely love my Apple products, and then prior to going into corporate investigations, I worked in law enforcement, working up through patrol and handling my fair share of everything from white collar to crimes against people, and ICAC investigations. So please don’t hesitate to reach out if you ever need anything. We’re more than happy to help. But with that, let’s go ahead and jump into it.


So to start with, let’s talk about the problem and really why Magnet said, “hey, there’s a need here, and let’s see if we can fill that need with Magnet AUTOMATE Enterprise.” What we were seeing were the challenges faced by enterprise labs.

We wanted to reduce the number of touch points and, quite frankly, the waste of having an investigator simply hit “next”, or check whether an endpoint was online, waiting to come in and start their shift when maybe an incident had started hours earlier.

Which really kind of rolls into responding to security events faster and being able to mitigate some of that damage, and trying to work through how we could absolutely do that, but also give the forensics team a tool that they can work into their workflows and not say, “hey, we’re going to change your entire ecosystem and your entire SOP for how you work investigations,” but show how you can use additional tools to really help expedite some of that.

We want to be able to get the most out of the DFIR tools as well. And everyone at Magnet, all of our examiners, we all talk about the toolbox approach. There’s not just one tool out there. We all use a variety of tools, and we wanted that to be front and center when we start talking about additional tools and having automation and orchestration tools to help with your computing and your analysis for investigations.

And really, we’ve got to manage growth from data volumes that are getting increasingly larger. The data sources are very diversified, from IoT devices to doing forensics in the cloud and forensics of the cloud. So, whether you’re collecting from S3 or maybe a Microsoft Azure VM, you need a tool that can process all of that information and do it in a timely manner.

And really, lastly, giving insights to lab management to enable better resourcing decisions about your lab: the amount of throughput and data that you’re processing, your return on investment for the hardware that you have in your lab currently, and also how you can easily scale with newer technologies.

So, those are some of the challenges that we identified when we started talking about how we can meet some of these challenges for our enterprise and corporate customers. And really that’s where Magnet AUTOMATE Enterprise comes in. We want to automate and orchestrate your digital forensics tasks and really have a purpose-built tool for enterprises.

So concurrent collection, processing evidence from multiple targets and sources, to really get to the why, and being able to limit and mitigate damage if you are encountering a breach or an insider threat investigation, and being able to do that in a timely manner.

So, talking about just a couple of the key features, and we’re going to dig into these a little bit more, and we’re going to have Tim talk about how Qualcomm has done this as well. Responding to those security events faster. We want to reduce that downtime.

We really want to automate as much of this process as we can, and that’s not automating the analysis, that’s automating the clicking “next, next, next”, and sitting waiting for your target endpoint to come online, or, “oh, they’re not connected to the internal network via VPN”. What are the processes there that you have in place to still be able to get that acquisition?

We want to be able to collect data at scale. So, once again, having multiple target endpoints, and specifying the data that you want to collect and process off each one of those at a large scale versus the standard ad hoc approach that AXIOM Cyber would be, which would be kind of a one-off instance.

And ultimately, focusing in on those high-value tasks. Having examiners actually do the data analysis, not clicking “next” and waiting for processing to happen, or once again, trying to track down where log files are and trying to figure out how best to complete analysis on those.

But have a tool that you can build and grow with and continue adapting your technology with as a part of all this process. And that’s where, once again, we would love to get Tim’s thoughts on this, but definitely something that we wanted to focus on when we were talking about AUTOMATE Enterprise.

Tim: Yeah. So, I mean, one of the benefits here that actually isn’t listed is the fact that AUTOMATE Enterprise is an off the shelf tool from Magnet. Because anyone who’s used a homegrown tool knows the headaches involved in that: updating for phones, adapting it for the future, making those little small changes that always happen.

I mean, in fact at Qualcomm, we used to have a tool that did something sort of similar. It would help us with pulling memory dumps and volatile information for incident response. And that was back when memory dumps were 4GB, maybe a big memory dump with 16GB.

But we had that and it was a nice little automated process. Wasn’t very comprehensive, but it worked. But then the person who was maintaining it left and it just kind of died out there on the vine.

So what AUTOMATE Enterprise can bring you is that…it’s basically their job to maintain that tool. They’re keeping an eye on it, they’re updating it, addressing any vulnerabilities. They’re going in and keeping an eye out on those future problems and always addressing, “okay, what’s next? What are we thinking of?” So AUTOMATE Enterprise is a…there is a significant value to that.

Trey: Yeah, absolutely. That’s, I mean…I think a lot of corporations have that “build versus buy” mentality and, it tends to sway back and forth on what you do there, but I mean, you hit the nail right on the head with the build mentality, which often times works great for a lot of organizations.

But there comes a time where the core team that initially built something, or the core person for that matter, who wrote the script or built the product that the organization’s been relying on gets transferred or gets promoted or just leaves and retires. And then you’re kind of stuck with like, “well, who in the past…?” And everybody’s kind of looking at each other because they never had to build or maintain that.

And that’s where having that buy mentality of like, “hey, we can have an off the shelf product that is being updated.” And you do have that direct support to assist when you need to change your tools, change your workflows, and kind of build that out. So, yeah. Great point there.

So, talking about some of the other benefits of AUTOMATE Enterprise and what we can do. We want to be able to automatically process and create exports for all different types of evidence, and we want to do that in parallel.

So, essentially we want to…if your investigation kicks off and you’ve got multiple endpoints, maybe from the same user, maybe you have cloud data, you’ve got maybe their mobile phone, you’ve got their SharePoint and their OneDrive, and then you also have their endpoint.

Being able to process all that at the same time versus putting it in a queue, essentially, how we’ve normally done forensics for a very long time, where you have AXIOM or your product of choice, and you start with one piece of evidence, and after that’s done, you go on to the next one and the next one.

And yeah, it can absolutely get done, but being able to really process and do analysis in parallel so that it’s all getting done at the same time is a huge, huge benefit. And essentially having computing and forensic tools that can run 24 hours a day, 7 days a week is fantastic.

Because I know, both from the law enforcement perspective, but also the corporate perspective, I would come in on the weekends and at night, because I knew “hey imaging was done”, or there was a big processing case that I knew I just needed to simply hit next to kick off the next part of the process, and being able to have a tool that will orchestrate and do all that for you.

So once again, when your examiners come in, they’re just left with, “hey, here’s the analysis. Let’s get to work on that.” And being able to leverage the workstations, whether you’re talking the physical workstations that you already have or cloud-based resources, and being able to scale really at will: if you have a large security incident and you need to scale your lab quickly, having a tool that can automate and orchestrate a lot of that is pretty phenomenal there.

Tim: So, let me just go ahead and add to that cloud deployment part. It can also give you a lot of flexibility as well. So, as you may know, Qualcomm’s a global company, we have locations all over the place.

We’re also in the process of setting up some infrastructure for collections in the cloud (AWS), but the basic concepts are pretty simple: instead of having the computer on site connected to your network, you have it connected up through Amazon’s network, and you can create an AMI (Amazon Machine Image) or what have you and just spin up as many as you need to do your collections.

It also gives you the opportunity to do off network collections, which, since a lot of folks have shifted to a work from home or a hybrid environment, really, really is important, because otherwise you’re waiting for them to get on VPN or come back to the office. And it just adds so much flexibility to be able to do that.

Trey: Yeah, no, totally. With people being remote now I know…pre-COVID, it was always a challenge to get the collection done while people were bouncing from meeting to meeting in different meeting rooms all day, every day.

And now the challenge has migrated to, “hey, maybe they’re not on VPN or connecting every day all day…” and having the ability to do off-network collections on top of parallel processing is a huge benefit. And being able to do that within a cloud environment, whether it’s AWS or Azure, also really helps magnify the amount of data that you can process when you need to be able to scale at will, for sure.

And as we keep looking at stuff, being able to build a streamlined workflow as well, and having different workflows for different types of incidents is really important here. And I love the workflow builder that AUTOMATE Enterprise has where simply you can just create the workflow.

And really, I like to tell people, “use your imagination here, what would make your life easier? And how can you either write the scripts or find some Python scripts that are doing what you wanted to do?” Use your imagination on how you can create the best workflows for your environment. Because you and your team are going to know what would work best for your environment.

And I definitely want to get Tim’s thoughts here, but being able to seamlessly integrate lots of your forensic tools, but also think about your EDR tools. Essentially having alerts kick off from those platforms and then those alerts automatically starting workflows within AUTOMATE Enterprise.

And I think that really creates a very quick cycle for being able to do an acquisition and have that processing done so that the examiners can jump straight in to work on mitigation and root cause analysis, versus having to tell the higher ups, “hey, by the way, we’re still trying to get access. We haven’t been able to process the image yet or acquire it”.

Having all that automated, so that when the initial incident kicks off from the EDR it’s already built out in a workflow, can really pay dividends for a lot of organizations.
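To make that EDR-to-workflow handoff concrete, here’s a minimal sketch of translating an incoming alert into a workflow-start request. The payload shape, field names, and workflow name are all illustrative assumptions for this sketch, not AUTOMATE Enterprise’s documented API:

```python
# Hypothetical sketch: an EDR alert arrives and is mapped onto the
# parameters a workflow-start call would need. Every field name here
# (workflow, target, case_name, severity threshold) is an assumption.
import json

def build_workflow_request(alert: dict) -> dict:
    """Map an incoming EDR alert onto a workflow-start request:
    which workflow to run, against which target, under which case."""
    return {
        "workflow": "ir-triage",                    # pre-built triage workflow
        "target": alert["hostname"],                # endpoint named in the alert
        "case_name": f"EDR-{alert['alert_id']}",    # one case folder per alert
        "priority": "high" if alert.get("severity", 0) >= 7 else "normal",
    }

if __name__ == "__main__":
    alert = {"alert_id": "A-1042", "hostname": "WKSTN-22", "severity": 9}
    print(json.dumps(build_workflow_request(alert), indent=2))
```

In a real deployment, this dictionary would be POSTed to whatever workflow-start endpoint your orchestrator or SOAR exposes; the mapping step is the part that stays the same regardless of vendor.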

Tim: Yeah. And one of the best things about AUTOMATE Enterprise is how easy it makes it. I mean, you can see the workflow builder there. It gives you a really good way to visualize how it’s organized. So, you don’t need a computer science degree or anything like that to actually use it.

I will admit that knowing some scripting or coding really can be a benefit (almost a force multiplier), but as long as you can call it via the command line, it’ll do it. I mean, heck, you can even use batch scripts if that’s what you’re comfortable writing, or just call anything on the command line.
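As a rough illustration of that “anything you can call on the command line” idea, here’s a small standard-library Python sketch that wraps one CLI invocation as a workflow step; the demo command is just an example:

```python
# Minimal sketch of wrapping any command-line tool as a workflow step:
# run it, capture the exit code and output, hand both to the next step.
import subprocess
import sys

def run_step(argv: list) -> tuple:
    """Run one command-line step and return (exit_code, stdout)."""
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.returncode, result.stdout

if __name__ == "__main__":
    # Demo: invoke the current Python interpreter as if it were any tool.
    code, out = run_step([sys.executable, "-c", "print(6*7)"])
    print(code, out.strip())  # → 0 42
```

The same wrapper works for batch files, forensic tool CLIs, or anything else with a command-line interface, which is exactly why scripting is a force multiplier here.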

Trey: Yep. Absolutely. I’m definitely not a coder, but I know some basic scripting and understand some Python. And if you have CLI access to tools, or APIs and all, we can absolutely work with that to build out those customized workflows for your environment. Absolutely.

And then a couple years ago we launched AXIOM Cyber. We had the ad hoc deployment, one to one, essentially. We’ve been evolving, obviously, with AUTOMATE Enterprise: we want to be able to do multiple endpoints in parallel and still keep that ad hoc approach where we’re creating and deploying agents, but in addition to that, be able to collect from multiple endpoints in parallel.

So, think about…you have your initial orchestration tool, you’re able to deploy to multiple endpoints, have those processing across all the various nodes that you’ve stood up for your AUTOMATE Enterprise environment, and that’s all being run in parallel.

So, once again, at the end, you’ve got your case file. You can go ahead and start your investigation versus saying, “well, I processed the hard drive in AXIOM Process, now I want to process it in X-Ways and run it through some custom scripts before I can get started.”

Do it all in parallel and have that collection capability where we can go ahead and deploy the agent via AUTOMATE Enterprise, we can collect it, we can begin the processing for you. You’ve got all these built-out workflows, so it’s as simple as selecting and entering your IP information for those targets and having that all kick off.

And once again, as Tim talked about, having some basic scripting and being able to script out some of this as well, so that if you have an EDR platform that has alerted on multiple endpoints, we can call that EDR to get that information and then deploy those agents to do those collections, which can really be a huge benefit here as well.

Tim: Yeah. So let me go ahead and bring up a small anecdote about Qualcomm. So, at Qualcomm we tend not to pre-deploy stuff because we have a lot of security infrastructure in place. And a lot of security infrastructure tends to have those endpoint agents. And every time we want to put on an endpoint agent, we have to, for lack of a better term, fight with the desktop admin folks, because they’re always complaining it slows everything down.

So, when we deploy things, it tends to be as necessary. So, when we were working with the Magnet folks on some of the early AUTOMATE work, we were talking about how we could go about deploying it, and about how we use our EDR tool to deploy our endpoint agent.

And that was a very interesting concept, because we ended up working with them to leverage that kind of workflow to deploy the agent via our EDR tool, because that EDR tool gave us a live response capability where we weren’t forced to use our credentials to log in to run the agent. It might give you some OPSEC help as far as not disclosing that you’re on that computer.

And plus you’re just, even if it’s an incident related item, you don’t want to necessarily burn your credentials and potentially allow an attacker to get those credentials because you logged in, because you were on that system trying to deploy and get information on it.

Trey: Yep, absolutely. And that’s where…so I wanted to cover some of the benefits and how you can use AUTOMATE Enterprise, but really I want to now take it…Tim and I are going to dig a little bit deeper here into what you all have done and some of the benefits you’ve seen, just dig into that a little bit more, so take it away.

Tim: So, a lot of what I wanted to talk about was how you use AUTOMATE, and how we at Qualcomm are looking at automating a lot of our processes. So, when you’re thinking about automation, you want to think about, “okay, what are those processes that are repetitive? What are those processes that have simple decisions based on known criteria?” Like Trey said, it’s when do you hit that “next” button?

Everything needs to be…I mean, it works better if you have consistency, so you know what the outcomes are going to be. Granted, there are going to be times where you want to keep processing human, when you have complex or important decisions. When you’re looking at the whole thing, when you’re trying to communicate complex ideas.

So, if you have to bring something up to management, it’s not necessarily something you want an automation process around. But those are generally pretty easy to identify, because you generally want that human element in there. (Next.)

So, the easiest thing I found as far as trying to break down processes and trying to figure out what works to automate is just take a process and figure out what it is at its smallest piece. You have your input, you have your process, your output.

So, in my head, I kind of have: you get a thing, you do something to that thing, and then you use the results of that. So, when you can break that down, it makes it a lot easier to figure out where your processes are, where you can benefit that, and especially where you can bring in that parallelism.

That parallelism can be something like, for example, on the slide here: you have things like collecting volatile data and collecting data from your SIEM. And you’re talking about deploying agents, and then you’re talking about collecting memory, maybe getting specific files. So, which of those require something to start?

So, collecting volatile data: if you’re using your EDR tool, maybe that’s the way you get your volatile data. Does that depend on the agent being deployed? No. So, it can be kind of off on its own. Same thing with collecting data from your SIEM. That doesn’t necessarily need the agent, so that can be off on its own. That can be a parallel process.

However, when you talk about collecting memory, collecting files and such, that’s where you start depending on different things. So, that’s when the parallel processing, I don’t want to say breaks down, but becomes less possible. You start becoming more serial when that happens. And when that happens, you’re thinking about things that are depending on external factors or just one result is another input.
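That serial-versus-parallel decomposition can be sketched as a toy scheduler that groups steps into “waves”: every step whose dependencies are already done can run side by side, and anything that needs an earlier result waits for a later wave. The step names and dependencies below are illustrative, not AUTOMATE’s actual task model:

```python
# Toy sketch of dependency-driven scheduling: independent steps (SIEM
# pull, volatile data) land in the first wave; dependent steps (memory
# needs the agent; analysis needs the dump) fall into later waves.

STEPS = {
    "collect_siem":     [],                # independent: no agent needed
    "collect_volatile": [],                # independent: via the EDR tool
    "deploy_agent":     [],
    "dump_memory":      ["deploy_agent"],  # serial: agent must exist first
    "analyze_memory":   ["dump_memory"],   # serial: needs the dump
}

def schedule(steps: dict) -> list:
    """Group steps into waves; each wave is safe to run concurrently."""
    done, waves = set(), []
    pending = dict(steps)
    while pending:
        ready = sorted(s for s, deps in pending.items() if set(deps) <= done)
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(ready)
        done.update(ready)
        for s in ready:
            del pending[s]
    return waves

if __name__ == "__main__":
    for i, wave in enumerate(schedule(STEPS), 1):
        print(f"wave {i}: {', '.join(wave)}")
```

In a real orchestrator each wave would be dispatched concurrently (threads, processes, or separate nodes); the grouping logic is what makes “where can I parallelize?” mechanical instead of guesswork.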

So, breaking those down really helps kind of organize things. So, if we go to the next slide, I’ll show you what we did with one of our workflows, and it’s basically an IR triage workflow. So, you can see, going across from left to right, consider that like a serial operation. So, those have to happen one right after another.

Going down, you have parallel processes. So, those can happen independently, more or less, of anything else that’s going on, or it’s depending on a separate workflow. But you can see how, when you start, one of the first things that we want to do is run some scripts, do some basic data collection on the information we already have that is already in our SIEM tool, already in logs.

And that can… a lot of SIEM tools have an API, so you can use REST, Python, whatever, to run searches, collect those, put them in whatever format you want, and then toss it in the case folder, so it’s ready for the analyst or the responder to review when they want to review it.
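As a hedged sketch of that SIEM-to-case-folder step: SIEM APIs differ by vendor, so rather than make a live REST call, this only shows the shaping part (flattening search hits into CSV text ready to drop into the case folder), with sample hits standing in for a real API response:

```python
# Sketch: format SIEM search results for the case folder so the analyst
# finds them ready to review. The result fields are assumed for the demo.
import csv
import io

def results_to_case_csv(results: list) -> str:
    """Flatten SIEM search hits into CSV text for the case folder."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["timestamp", "host", "event"])
    writer.writeheader()
    writer.writerows(results)
    return buf.getvalue()

# Sample hits standing in for a real SIEM API response.
SAMPLE = [
    {"timestamp": "2022-04-01T09:00:00Z", "host": "WKSTN-22", "event": "logon"},
    {"timestamp": "2022-04-01T09:02:11Z", "host": "WKSTN-22", "event": "usb_insert"},
]

if __name__ == "__main__":
    print(results_to_case_csv(SAMPLE))
```

The live query itself would be one REST call against your SIEM’s search endpoint (with whatever auth it requires) feeding its results into a function like this before writing the file into the case folder.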

And then while that’s happening, you can also kick off a live data collection. Part of that is, for example with our EDR tool, we want to do that first because when you’re talking about forensic collection on an active device, you want to use the least intrusive means possible to get that information: the less heavy-handed you are on it, the better your evidence is.

And that depends on the organization, et cetera. Sometimes it can be done differently depending on the needs of the organization. But generally you want to get your volatile data and send it to the case folder, and then, also using the EDR, we run that script to deploy the agent to the system in question. We can kick off a memory dump, and that memory dump gets saved off to the case folder.

And while that is going on, we can start running Volatility. You can use almost whatever plugins you want, whatever tools you want to analyze that memory dump.
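A small sketch of fanning that memory dump out to several Volatility 3 plugins, each as its own command so an orchestrator can run them in parallel; the image path and plugin list are examples, and Volatility 3’s CLI shape is `vol -f <image> <plugin>`:

```python
# Sketch: build one Volatility 3 command line per requested plugin so
# each can run as its own (potentially parallel) analysis step.

def vol_commands(image: str, plugins: list) -> list:
    """Return one Volatility 3 argv per plugin for the given memory image."""
    return [["vol", "-f", image, plugin] for plugin in plugins]

if __name__ == "__main__":
    cmds = vol_commands("case42/memory.raw", ["windows.pslist", "windows.netscan"])
    for c in cmds:
        print(" ".join(c))
```

Each argv could then be handed to the same kind of command-runner shown earlier, with every plugin’s output landing in the case folder alongside the dump.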

And while all that is happening, all that analysis is taking place back on your own systems, you can kick off a process that goes out and grabs specific files, or even a full disk image if you want. But you get those files, you bring them back, and then you can kick off processes that will put them through, say, AXIOM Process, or RegRipper, or even just run strings on them, or put them in your sandbox and see what results you get out of that.

And all of those different pieces can happen kind of at the same time. So, you have your memory collection, and then while all the memory is being analyzed, you can do your disk collection, and then that can get analyzed.

So, the whole idea is the analyst or the responder doesn’t even have to do anything until the evidence hits the case folder and it’s ready. And even using AUTOMATE Enterprise, you can use a small script to send you an email or some sort of other notification, a page, what have you, to say, “hey, stuff’s ready”. You can build that into almost any part of the process.
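A done-notification like the one Tim describes can be as small as this sketch. It only builds the message; the sender, recipient, and server are placeholders, and actually sending it would be one extra `smtplib` call:

```python
# Sketch: build the "evidence is ready" email a workflow's final step
# could send. Addresses are placeholders; nothing is sent here.
from email.message import EmailMessage

def build_done_email(case: str, evidence_path: str) -> EmailMessage:
    """Compose the notification for a finished processing run."""
    msg = EmailMessage()
    msg["Subject"] = f"[AUTOMATE] Evidence ready for case {case}"
    msg["From"] = "automate@example.com"    # placeholder sender
    msg["To"] = "dfir-team@example.com"     # placeholder recipient
    msg.set_content(f"Processing finished. Case folder: {evidence_path}")
    return msg

if __name__ == "__main__":
    msg = build_done_email("A-1042", "/cases/A-1042")
    print(msg["Subject"])
```

Sending would then be something like `smtplib.SMTP(host).send_message(msg)` against your mail relay, and the same pattern works for a pager or chat webhook instead of email.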

One of the things we’re looking at…because we like to know where things are in the process is just building in some status triggers that say, “okay, this is done, go and do this”, or “this is done, we’re now working on this”. So, you can see how you can take a complex entity like IR triage and kind of break it down to those smaller pieces.

And you can do this with pretty much almost any process, any procedure, just breaking it down, running it. Trey, did you have anything to comment?

Trey: Yeah, I mean, I love this and, like, I mean, you just said it: you can do this pretty much with anything. And I mean, at the end of the day, especially, when we’re talking about having certifications and SOPs and all that in place and being able to say, “hey, this is how we work every one of our cases with the different types, whether it’s data loss prevention”, or you’re working a breach, whatever the case may be, or some sort of HR violation.

Having that workflow and being able to say, “we can adjust as necessary, but this is the basis for all of that”, and we can…like you said, I love that, you have scripts that alert you when everything’s done. It’s basically kind of like the kitchen timer going off saying, “hey, everything’s ready to go, ready for you to come in and do your thing now, but we’ve done it all”. And that’s awesome. I think…I really like this triage workflow to kind of walk through that. And that’s really cool.

And that’s…kind of moving over to the next slide. This is another example, not as detailed as what Tim just highlighted, but with data loss prevention and that old workflow versus new workflow. Essentially, if you have 10 employees who have access to a file or data that might be leaving, and you’re trying to figure out the source of the DLP event, what can you do here? And there are a lot of manual touch points here.

With the old workflow, you’ve got each device with a lot of downtime in between each one of those, having to deploy agents and collect from those, not to mention then processing all of it and then completing your analysis and reporting, which, once again, involves lots of manual touch points.

And with that new workflow, you’ve got your targets, you put in your target information, you’re able to collect all that evidence in parallel, and it’s all being processed and then dumped into an MFDB file, or database file, that you can open up and actually examine to go ahead and start doing analysis and reporting.

So, that front-loaded part of the workflow, I tell you, was the most time-intensive part of most of our investigations. It wasn’t the “who did what when?” It was getting to that [unintelligible] getting all the different [unintelligible] the number of endpoints that some people might have, [unintelligible] and being able to consolidate all that down very quickly, I think, is awesome. I – Oh, go ahead.

Tim: Oh, I was just going to say, basically, you’re trying to reduce that wasted time and use your investigators where they’re used best. I mean, use them for the important stuff, not to press buttons. You can get a lot of folks to press buttons.

And one side benefit of it is you’re also increasing the consistency. So, if it comes up in court and they say, “well, how do you do this?” Well, it’s all automated to do this, this, this. This is exactly what it does every time. You get that repeatability of that evidence, should that be necessary.

Trey: Absolutely. Yeah. I love that, and I think that’s something that might get overlooked initially when you’re thinking about automation and orchestration, but having a repeatable workflow for every case really helps. And especially, like I mentioned, if your lab is certified, things like that, you need that repeatable process, and this can do that for you very easily.

I mean, this is another one, and we’ll kind of end with this one: an EDR/SOAR sort of integration. Essentially, the EDR tool has detected malware on an endpoint and reported an event to the SIEM tool. What can you do from there?

Well, essentially with the old workflow, there’s an alert going from one tool to another, going to the SOAR; the alerts are coming in, whether from dashboards or emails, but then there’s that downtime of, “hey, now we gotta get to the target endpoint. We gotta do the collection. Now we’ve gotta wait for that processing.”

Let’s have all of that kind of connected where the EDR might kick it off, but then it’s getting moved through that process very quickly with APIs, including with AUTOMATE Enterprise and having that automatic collection and processing.

So, once again, you’re basically getting an alert saying, “hey, there was an alert from the EDR platform, we’ve already got the evidence for you, it’s already been processed, and you can sit there and do the analysis”, versus playing catch up on all these other tools that are connected and piecing everything together, have the automation, orchestration for your forensic tools as well, so that when you walk in, you can have that done and be ready to go for your analysis and reporting.

Because at the end of the day, you’ve gotta brief the stakeholders and the higher ups on how it’s been mitigated, and on the root cause analysis, to make sure everything’s been shored up for an investigation.

So, there’s a lot you can do with automation and orchestration. AUTOMATE Enterprise is here to work with the tools you’re already using. And I think that’s the big piece to…just like Tim was talking about in the slide before with your repeated workflows and really you can have a workflow for just about anything and for any type of investigation.

And with a little bit of scripting knowledge, some basic scripts, you can amplify what you’re able to do very, very quickly and easily with that.

So, I know we’re right at time, but we’re going to be around for questions. If anyone has any questions, please don’t hesitate to reach out. I want to thank Qualcomm and Tim for joining us today. But with that, Tim do you have anything…last words of wisdom?

Tim: Nope. That pretty much covers it.

Trey: Awesome. Well, thank y’all, and enjoy the Magnet Summit 2022. Hopefully we’ll see you in person very soon. Thanks.
