TurbineOne, a US intel analytics software company, has been awarded a multi-year contract by the Department of Defense’s (DoD) Defense Innovation Unit (DIU) to prototype machine learning (ML) capabilities with the Office of the Under Secretary of Defense for Research and Engineering’s (OUSD R&E’s) FutureG and 5G Office. The use-cases are diverse, covering improvements to force protection, geolocation of industrial equipment, and modernising the experience inside intelligence operation centres.
Aligned with this new contract, TurbineOne is announcing a recent feature of its frontline perception system, called Sidekick, that can automatically generate data to build a detection model. The technology leverages a computer science method called zero-shot learning: the ability of a computer to identify what it is looking at despite never having seen that exact object before. Similar experiences include the image search feature within google.com or any video search on Facebook, but those search engines rely on extensive web infrastructure, metadata standards, and previously labeled training data.
In contrast, TurbineOne will deliver productive, zero-shot computer vision ML model generation on unstructured, unlabeled data that its software has never seen before. Building on previous work with the military in the Pacific area of operations, TurbineOne is modernising the ingestion of images from video cameras and satellites as well as non-visual sensor data feeds. This practice is known as “multi-domain operations” and “sensor fusion” within the DoD, and it aligns well with the “multi-modality” of TurbineOne’s zero-shot detection capability.
Normally, computer vision applications are strictly constrained to training data types (e.g. labeling still images helps identify similarities in other still images). But with TurbineOne, users can manage multiple data types from numerous sensor vendors to build new models from a single do-it-yourself platform.
“I’m familiar with our nation’s best intelligence capabilities. I’ve never seen an intel tool like this one from TurbineOne. It can find the needle in a haystack and increase the effectiveness of our nation’s intel analysts,” says Bob Ashley, former director of the Defense Intelligence Agency.
From a technology perspective, TurbineOne’s Sidekick feature is only indirectly related to generative AI and large language models (LLMs). Instead, TurbineOne leverages the architecture underlying those models, ML transformers, to produce text-conditioned object detection models. The search experience is revolutionary because queries are instantaneously translated into semantically matching visual objects, even within fully isolated data enclaves (i.e. no connection to the public internet or cloud), which are common on the DoD’s secure networks. From a user perspective, an intel analyst can point to a data folder, type something like, “Show me all the ships from an aerial perspective,” and get highly accurate results.
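The text-conditioned search described above can be sketched in miniature: a vision-language transformer embeds both image regions and text queries into a shared vector space, and detection then reduces to a similarity search over those embeddings. The embeddings, region names, and threshold below are illustrative placeholders, not TurbineOne’s actual model or API; in a real system, both vectors would come from trained encoders.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in embeddings for detected image regions; a vision encoder
# would produce these from raw pixels in a real pipeline.
region_embeddings = {
    "region_1": [0.9, 0.1, 0.2],   # e.g. a ship seen from above
    "region_2": [0.1, 0.8, 0.3],   # e.g. an unrelated object
}

# Stand-in embedding for the analyst's query; a text encoder would
# produce this from, say, "ships from an aerial perspective".
query_embedding = [0.95, 0.05, 0.15]

def zero_shot_detect(regions, query, threshold=0.9):
    """Return regions whose embedding is semantically close to the query."""
    return {
        name: round(cosine_similarity(vec, query), 3)
        for name, vec in regions.items()
        if cosine_similarity(vec, query) >= threshold
    }

matches = zero_shot_detect(region_embeddings, query_embedding)
```

Because matching happens in embedding space rather than against labeled categories, the same comparison works for objects the system was never explicitly trained to name, which is what makes the approach “zero-shot”.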
“Industry is still in its early days of AI-driven perception. We are grateful to be partnering with DIU to advance these capabilities. In the near term, these tools will let analysts make straightforward queries like, ‘Show me all the Chinese ships near Taiwan,’ and receive fast, accurate results. Longer term, analysts will be able to query more complex, predictive scenarios like, ‘Which indicators have strong signals that China has started its initial phase of a Taiwanese invasion?’” says Ian Kalin, TurbineOne’s CEO.
The initial area of work for the DIU collaboration with OUSD R&E’s FutureG and 5G Office and TurbineOne will be Naval Air Station Whidbey Island. “We are eager to prototype new ML-driven geolocation and threat detection technologies with TurbineOne. The software platform is widely applicable for intelligence, surveillance, and reconnaissance use-cases because it empowers users with a do-it-yourself toolkit that is uniquely suited for military environments,” says Kurt Andrews, principal investigator for OUSD R&E’s FutureG and 5G Office.