Unmanned aerial vehicles (UAVs), or drones, are as fraught as they are fascinating. For every hobbyist flying a drone around the neighborhood on a sunny weekend, there is a neighbor concerned that the drone’s onboard camera is spying on their family.
UAVs also pose far more serious threats when it comes to commercial air travel, correctional facilities, and homeland security. Counter unmanned aerial systems (CUAS) have appeared in the market as a result. Just like the systems they’re designed to counteract, they can offer a wealth of data about their mission — as well as how well they work.
In a blog series at the Unmanned Robotics Systems Analysis (URSA) Inc. website, renowned UAS forensics expert David Kovar offers insights “relating to extracting, organizing and analyzing data from counter UAS systems,” based on experience from test and evaluation exercises relying on the URSA platform.
The series’ introductory post, Counter UAS Test and Evaluation Series, outlines its audiences: CUAS evaluators, system acquisition teams, investigators and attorneys, and government regulators. “Anyone using CUAS systems or data should understand how the data is generated and what external and internal factors affect the system’s results before they can effectively use any data produced by these systems,” Kovar wrote.
Digital forensics on a CUAS begins long before the forensic process itself: first, the system’s effectiveness must be defined. “… [E]ven if you can kill the inbound UAV,” Kovar explained, “detection range, false positive rates, and tracking accuracy in a variety of environments are all important characteristics to know before you trust your assets to the system.”
To that end, he offered some examples of why vendors, evaluators, and operators all need clear definitions of detection, classification, location, tracking, and mitigation:
If we say “The system detected a UAS” we likely expect everyone to understand what we mean. But where is that event presented – in a log file, on the user interface, or via an audible alert? What was detected – was it really a UAS, implying that some discrimination occurred prior to the alert? Is it friendly or malicious? Is the detection part of an existing track or a new track?
Another pre-analysis necessity: ground truth data. For a CUAS, this depends on a Time, Space, Position Information (TSPI) device, “a high end GPS tracker.” In CUAS: TSPI Devices and Ground Truth, Kovar describes how “[a]n accurate and reliable TSPI device is critical for accurate CUAS (and BVLOS, see-and-avoid, etc) data analysis.”
Fault tolerance, accuracy, size, weight, power, and situational awareness are all mission-critical aspects of a good TSPI device. URSA relied on these criteria to develop its own TSPI prototype, and is at work, wrote Kovar, on a production version with enhancements.
“The telemetry and CUAS systems data will tell one story,” wrote Kovar in CUAS: The Need for Human Observers, “and the humans will provide a different perspective.” Data validation requires this balance. “However,” he continued, “like digital data collection, errors or lack of standards and consistency in human data collection may create more confusion than it eliminates.”
Because CUAS data is so volatile, Kovar recommended coming up with a pre-event plan “to determine what data needs to be collected, implement and test the process,” among other steps.
Only at this point can the digital forensics process itself commence. CUAS: Data Sources and Their Strengths and Weaknesses outlines the four major methods for extracting CUAS data: vendor log files, vendor API, vendor user interface (UI), and standard or proprietary C2 or integration layer.
“Which one you choose will depend on your use cases as well as your technical ability and relationship with the vendor,” Kovar wrote, adding that log, API, and integration-layer data are valuable because these systems bear responsibility for life-safety decisions. At the same time, he added, “…the easier it is to obtain data, the further you are away from the unvarnished truth.”
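Whichever source is chosen, the raw output usually needs to be parsed into a common record format before analysis. As a minimal sketch, the snippet below parses a hypothetical vendor detection log; the CSV layout and field names are illustrative assumptions, not any real vendor’s format.

```python
import csv
import io
from datetime import datetime

# Hypothetical vendor detection log. Real CUAS vendors use their own
# (often proprietary) formats; this CSV layout is illustrative only.
RAW_LOG = """timestamp,event,track_id,lat,lon
2024-05-01T14:03:22Z,detection,T-001,38.8895,-77.0353
2024-05-01T14:03:25Z,track_update,T-001,38.8897,-77.0350
"""

def parse_vendor_log(text):
    """Parse a hypothetical detection log into normalized dict records."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            # Convert the ISO 8601 "Z" suffix into an explicit UTC offset
            # so the resulting datetime is timezone-aware.
            "time": datetime.fromisoformat(row["timestamp"].replace("Z", "+00:00")),
            "event": row["event"],
            "track_id": row["track_id"],
            "lat": float(row["lat"]),
            "lon": float(row["lon"]),
        })
    return records

records = parse_vendor_log(RAW_LOG)
print(len(records), records[0]["event"])  # 2 detection
```

Parsing every source into the same record shape up front is what makes the later comparison and validation steps tractable.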
Building on his points in “The Need for Human Observers,” Kovar, in CUAS Data: The Hardest Part – Collection, Organization, and Validation, wrote: “Similar to ediscovery projects and digital forensics investigations, investing time up front to collect, organize, normalize and validate your data will save significant time later, help the project stay on schedule, and produce more accurate results.”
Offering a diagram of an URSA data collection process, Kovar argued for collecting “as much data as you possibly can as soon as you can and preserve it in multiple locations” because doing so will likely not be possible later on. With so much data in hand, organizing it is necessary, as is validating it. Kovar offered an example of how this process might proceed.
After collection, analysis can only proceed once the data has been normalized, the subject of CUAS: Data Normalization. “To accurately compare data from different sources that all relate to a common event – a UAS flight in this case – we must use common frames of reference,” Kovar wrote. “At a bare minimum, all of your data should use the same timezone and reference model for the physical location of all participating systems.”
From there, CUAS: Data Visualization argues for a departure from Excel. “We are human, we need visual data, and quickly,” Kovar wrote, offering several depictions from URSA’s telemetry analysis platform. “Appropriate near real time visualization capability should dramatically improve test and evaluation effectiveness while also supporting deep dive analysis post-event.”
In what Kovar called an “evolving” post in need of additional contributions, Comparing CUAS Track Data outlines some of the complications in analyzing data from multiple CUAS vendors. Because these systems lack standards, the analyst must come up with a common frame of reference to compare data in a useful way.
Position information — various pieces of latitude, longitude, altitude, azimuth, elevation, and/or range — from hypothetical vendors is presented, along with what coordinate system to use, idiosyncrasies among vendors, and methods of comparison. “There are a number of steps that have to be taken to compare different vendors to ground truth and each other,” Kovar wrote, “but it can be done in an automated and defensible manner.”
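Once positions share a coordinate system, comparing a vendor’s reported track point to TSPI ground truth reduces to a distance calculation. The sketch below uses a standard haversine great-circle distance on hypothetical WGS84 coordinates; the coordinates, and the spherical-Earth simplification, are assumptions for illustration rather than URSA’s actual method.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points,
    using a spherical-Earth approximation (mean radius 6,371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical positions: TSPI ground truth vs. a vendor's track point.
truth = (38.88950, -77.03530)
vendor = (38.88960, -77.03540)

# Horizontal error of the vendor's report, in meters.
error_m = haversine_m(*truth, *vendor)
print(round(error_m, 1))
```

Repeating this over every time-aligned pair of points yields an error distribution per vendor, which is the kind of automated, defensible comparison Kovar describes.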
The final post in the series, CUAS Test and Evaluation: URSA’s Journey, describes URSA’s efforts to develop its unmanned systems telemetry analysis platform. Having recently deployed the platform in CUAS test and evaluation, Kovar wrote about applying lessons learned to the product, including automating some steps and improving data ingestion and analysis.
In his introductory statement, Kovar concluded:
There is an enormous amount of work to be done on this topic, by URSA, by vendors, by standards committees, and by governments. We can collaboratively create an ecosystem where the effectiveness of CUAS systems against a variety of targets and in a variety of conditions is known in advance rather than after acquisition. But we need to talk, share data, conduct exercises where the results are made available to acquisitions teams and potential customers, and feed lessons learned back into the ecosystem for all to benefit from. This will meet some, perhaps significant, resistance but ultimately it is necessary for national security and the protection of life and property in general.
To that end, he wrote in his final post: “There is much work to be done to catch up with the current state of CUAS test and evaluation. And the work will never cease – as UAS and CUAS evolve, and their testing regimes, so will URSA’s platform.
“We look forward to this journey and to working with the community to ensure that UAS and CUAS are properly tested, evaluated, and audited.”