DOD SBIR 24.4 Annual

Active
Yes
Status
Open
Release Date
October 3rd, 2023
Open Date
October 3rd, 2023
Due Date(s)
March 31st, 2025
Close Date
March 31st, 2025
Topic No.
A244-016

Topic

Autonomous Optical Sensors

Agency

Department of Defense

Program

Type: SBIR
Phase: BOTH
Year: 2024

Summary

The Department of Defense (DOD) is seeking proposals for the topic "Autonomous Optical Sensors" as part of its SBIR program. The objective of this project is to develop a portable optical sensor that can capture high-quality real-time imagery data during missile tests. The sensor will be positioned near a missile launcher or target to analyze the terminal phase of the flight. It will incorporate high-speed imaging cameras with advanced artificial intelligence and machine learning capabilities, allowing it to calibrate and manage itself and operate autonomously for an extended period, and it will wirelessly receive setup and calibration data from a centralized command center. In Phase I, the awardee will research and define an integrated configuration of the Autonomous Optical Sensor (AOS) that includes various types of optical sensors and an AI framework. Phase II will involve creating a prototype of the AOS based on the Phase I analysis, refining the integrated system design, and conducting functional testing in an operational context. Potential applications of this technology include collecting real-time imagery for air traffic management at airports and surveillance of sensitive areas: it can help track flights, assist in airspace coordination, and alert operators to potential safety or security concerns. The topic is currently open for proposals, with a closing date of March 31, 2025. More information can be found on the DOD SBIR website.

Description

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Integrated Sensing and Cyber; Trusted AI and Autonomy; Integrated Network Systems-of-Systems

OBJECTIVE: This project aims to develop a portable optical sensor that can capture high-quality real-time imagery data during missile tests. The sensor will be positioned near a missile launcher during the launch or near the target to analyze the terminal phase of the flight. The missile tests will occur in remote locations where proper test infrastructure is unavailable. The Autonomous Optical Sensor (AOS) system will incorporate several high-speed imaging cameras with advanced artificial intelligence and machine learning capabilities. These features will enable the sensor to calibrate and manage itself and to position itself accurately. The system will be designed to operate autonomously for an extended period on either a battery or a renewable energy source. The sensor will wirelessly receive setup and calibration data from a centralized command and control center, which will also provide guidance or cueing data for the AOS to initiate its track of a System Under Test (SUT). The AOS system's cutting-edge technology will make it possible to collect accurate and reliable data, even in the most challenging test conditions.
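To make the cue handoff concrete, the sketch below shows one possible shape for a cueing message and the conversion from a cued target position to sensor pointing angles. The message fields, the shared local east-north-up frame, and all numbers are illustrative assumptions, not part of the solicitation.

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackCue:
    """Cue from the command and control center (hypothetical schema)."""
    timestamp_s: float                             # validity time, UTC seconds
    target_enu_m: Tuple[float, float, float]       # target (east, north, up), meters,
                                                   # in a local frame shared with the sensor
    velocity_enu_mps: Tuple[float, float, float]   # target velocity estimate, m/s
    position_sigma_m: float                        # 1-sigma position uncertainty, meters

def cue_to_pointing(sensor_enu_m, cue):
    """Convert a cue into azimuth/elevation pointing angles in degrees."""
    de = cue.target_enu_m[0] - sensor_enu_m[0]
    dn = cue.target_enu_m[1] - sensor_enu_m[1]
    du = cue.target_enu_m[2] - sensor_enu_m[2]
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0    # clockwise from north
    elevation = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return azimuth, elevation

# Example: sensor at the local origin, target 5 km north at 2 km altitude
cue = TrackCue(0.0, (0.0, 5000.0, 2000.0), (0.0, -300.0, -50.0), 25.0)
print(cue_to_pointing((0.0, 0.0, 0.0), cue))  # -> (0.0, ~21.8)
```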

DESCRIPTION: The sensor is designed to operate with minimal or no intervention from an operator. Once deployed, it will capture imagery data of a System Under Test (SUT) using advanced geospatial and optical sensor auto-calibration technologies. The sensor will be equipped with organic computing, distributed networking, and power systems to manage its positioning and the collection, processing, and transport of real-time imaging data. This eliminates the need to transport raw data to a centralized location for processing and analysis. Furthermore, setup and calibration effort will be minimal, since the sensor will self-align and self-calibrate before test operations. The results of the computing work done at the edge, such as real-time imagery, sensor calibration updates, or other actionable information, will be transmitted to the main data center for review and analysis after the test.
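As a minimal sketch of the edge-processing idea, the snippet below reduces each raw frame to a small derived record for the downlink instead of shipping imagery off-sensor. The detector, field names, and drift value are hypothetical placeholders for whatever AI/ML pipeline a proposer would field.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class EdgeProduct:
    """Derived product sent to the data center instead of raw frames."""
    frame_id: int
    timestamp_s: float
    sut_centroid_px: Optional[Tuple[int, int]]  # detected SUT centroid, if any
    calibration_drift_px: float                 # self-calibration residual

def detect_sut(frame) -> Optional[Tuple[int, int]]:
    """Placeholder detector; a real AOS would run its AI/ML model here."""
    return (412, 309)  # hypothetical centroid

def process_frame(frame_id: int, frame) -> EdgeProduct:
    # All heavy computation happens at the edge; only the result leaves the sensor.
    return EdgeProduct(frame_id, time.time(), detect_sut(frame), calibration_drift_px=0.4)

# Each high-speed frame becomes a compact JSON record for post-test review.
print(json.dumps(asdict(process_frame(1, frame=None))))
```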

PHASE I: In Phase I, the goal is to research and define an integrated AOS configuration that includes various types of optical sensors, such as visible and electro-optical/infrared, as well as data processing, networking, and power systems. Additionally, an analysis will be conducted to determine how the system will be managed by an AI framework that employs specialized algorithms and techniques to facilitate positioning, calibration, real-time management, and control of the overall design. The awardee will also define the control method, including the feasibility of the sensor learning different support configurations through adaptive learning, and will design a process for training the algorithms to adapt to changing conditions or new datasets. By the end of Phase I, the awardee will have defined the optimal AOS configuration and AI framework necessary to satisfy AOS requirements.
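One way to capture the Phase I deliverable is as a structured configuration record tying the sensor suite, compute, networking, power, and AI framework together. Everything below (field names, sensor parameters, endurance figures) is a hypothetical sketch of what such a definition might contain, not a required schema.

```python
# Hypothetical Phase I AOS configuration record (all values illustrative).
aos_config = {
    "sensors": [
        {"type": "visible", "fps": 1000, "resolution": [1920, 1080]},
        {"type": "EO/IR", "band": "MWIR", "fps": 500, "resolution": [640, 512]},
    ],
    "compute": {"edge_gpu": True, "detector_model": "detector-v0"},   # placeholder names
    "network": {"uplink": "wireless", "telemetry_rate_hz": 10},
    "power": {"source": "battery+solar", "endurance_hours": 72},
    "ai_framework": {
        "tasks": ["positioning", "calibration", "track_management"],
        "adaptive_learning": {"retrain_trigger": "scene_drift",
                              "dataset": "field_captures"},
    },
}

print(aos_config["ai_framework"]["tasks"])
```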

PHASE II: In Phase II, the awardee will create a prototype of the AOS based on the analysis conducted during Phase I. Integrating AI-enabled or cognitive systems into existing operations can be challenging, and adapting the AOS to current T&E infrastructures may require refining the integrated system design (AI software/hardware) to achieve optimal performance, accuracy, and reliability. The AI is expected to be iteratively refined and optimized from the Phase I designs. Functional testing in an operational context is a crucial part of system development and will facilitate the AI-optimization process, since this type of system involves an ongoing learning approach to development. The prototype should be able to achieve self-localization and alignment, obtain cueing or positioning data for an SUT from an external sensor, and maintain track of the SUT. Both self-localization and alignment are critical for AI-enabled systems to understand and navigate within their environment effectively. By accurately determining their position and aligning their measurements and actions with a common reference frame, these systems can interact with other devices, objects, or entities and perform tasks such as mapping, object recognition, navigation, or coordination.
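As a minimal sketch of the "maintain track of an SUT" requirement, the snippet below initializes a simple alpha-beta tracker from an external cue and updates it with simulated azimuth measurements. A fielded AOS would use a richer filter (for example, a Kalman filter over the full 3D state); the gains and measurement values here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class AlphaBetaTrack:
    """Minimal single-axis alpha-beta tracker for holding an SUT track after cue handoff."""
    x: float            # estimated position (e.g., azimuth, degrees)
    v: float            # estimated rate (degrees/s)
    alpha: float = 0.85
    beta: float = 0.005

    def update(self, z: float, dt: float) -> float:
        x_pred = self.x + self.v * dt       # predict forward by dt
        r = z - x_pred                      # innovation (measurement residual)
        self.x = x_pred + self.alpha * r    # blend prediction with measurement
        self.v = self.v + (self.beta / dt) * r
        return self.x

track = AlphaBetaTrack(x=10.0, v=0.5)       # initialized from the external cue
for z in [10.6, 11.1, 11.7, 12.2]:          # simulated azimuth measurements
    print(round(track.update(z, dt=1.0), 2))
```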

PHASE III DUAL USE APPLICATIONS:

Primary commercial dual-use potential is tied to collecting real-time imagery supporting air traffic management (ATM) at airports or surveillance of defined sensitive areas.

Monitoring and managing air traffic flow: Help track flights in real time using radar data or other surveillance systems, primarily to identify incursions by small UAS.
Assisting in airspace coordination: Provide information about airspace restrictions, temporary flight restrictions (TFRs), and other limitations in the defined sensitive areas. This can help ensure aircraft stay within designated airspace and avoid potential conflicts.
Alerting operators of potential safety or security concerns: Notify operators of any unusual behavior, deviations from flight plans, or potential security threats (a minimal geofence check is sketched after this list). This can help maintain the safety and security of the defined sensitive areas.
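A minimal sketch of the alerting idea, assuming a circular restricted area and a flat-earth range approximation that is adequate at short range; the coordinates and radius are invented for illustration.

```python
import math

def inside_circular_tfr(lat_deg, lon_deg, center_lat, center_lon, radius_m):
    """Flat-earth range check of a track position against a circular TFR."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(center_lat))
    dn = (lat_deg - center_lat) * m_per_deg_lat   # north offset, meters
    de = (lon_deg - center_lon) * m_per_deg_lon   # east offset, meters
    return math.hypot(dn, de) <= radius_m

# Hypothetical sensitive area: 2 km radius around a launch pad
if inside_circular_tfr(34.512, -117.401, 34.500, -117.400, 2000.0):
    print("ALERT: track inside restricted area")
```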

REFERENCES:

Trajectory Analysis and Optimization Software (TAOS) by Sandia National Labs: Describes a tool for three-degree-of-freedom or six-degree-of-freedom trajectory simulation, with possible application to sensor placement and calibration. [https://www.sandia.gov/taos/]
Reinforcement Learning Applications in Unmanned Vehicle Control: A Comprehensive Overview by Kiumarsi, B. et al. (2019): This paper addresses research in reinforcement learning techniques in control systems, providing insights into their potential applications and challenges. [https://www.researchgate.net/publication/361572362_Reinforcement_Learning_Applications_in_Unmanned_Vehicle_Control_A_Comprehensive_Overview]
How to train your robot with deep reinforcement learning: lessons we have learned by Levine, S. et al. (2021): This research paper delves into applying deep learning algorithms for control tasks, showcasing their capabilities and discussing their limitations. [https://journals.sagepub.com/doi/epub/10.1177/0278364920987859]
Model Predictive Control with Artificial Neural Networks by Scokaert, P. O., et al. (2005): This paper investigates the integration of artificial neural networks with model predictive control techniques, presenting a novel approach for control system design. [https://link.springer.com/chapter/10.1007/978-3-642-04170-9_2]

KEYWORDS: Artificial Intelligence; Adaptive Learning; Autonomous Control; Self-Alignment and Localization; Intelligent Instrumentation

Similar Opportunities

DOD SBIR 24.4 Annual - Robust Computer Vision for Better Object Detection with Limited Training Data
Department of Defense
The Department of Defense (DOD) is seeking proposals for the topic "Robust Computer Vision for Better Object Detection with Limited Training Data" as part of their SBIR 24.4 Annual solicitation. The goal of this topic is to experiment with innovative AI/ML approaches to object identification and imagery scene analysis. The increasing availability of digital imagery requires automated methods to process and analyze vast amounts of multi-modal data efficiently. One critical application is the identification of objects of interest (OoI) within imagery data or the scene generated by the imagery, which can provide valuable insights and facilitate decision-making processes in various fields such as military intelligence, environmental monitoring, transportation management, and security surveillance. The solicitation is open for Direct to Phase II (DP2) proposals with a cost of up to $2,000,000 for an 18-month period of performance. Proposers interested in submitting a DP2 proposal must provide documentation to substantiate that the scientific and technical merit and feasibility equivalent to a Phase I project has been met. The focus of this SBIR topic is robust AI/ML object detection techniques for computer vision that do not rely on extensive availability of labeled training data. The use of foundational knowledge and methods, such as handcrafted features, evolutionary algorithms, and newer techniques based on transformers, can be leveraged for this topic without requiring a feasibility study. During DP2, firms should develop and implement novel or hybrid AI/ML models for object detection that do not rely on extensive training data and train models in Project Linchpin's AI Unclassified Operations Environment using Linchpin data for DOD use cases. The Phase III dual-use applications include autonomy, retail, public safety, traffic management, enhanced security, and agriculture. Computer vision solutions in the private sector encompass a wide range of applications, and companies like Amazon, Google, and Microsoft offer cloud-based object detection and recognition services. The solicitation is open until March 31, 2025. For more information and to submit a proposal, visit the DOD SBIR website: [link](https://www.defensesbirsttr.mil/SBIR-STTR/Opportunities/).
DOD SBIR 24.4 Annual - Lightweight AI-enabled image processing for Soldier-borne thermal imagers
Department of Defense
The Department of Defense (DOD) is seeking proposals for lightweight AI-enabled image processing for Soldier-borne thermal imagers. The objective of this solicitation is to leverage advances in artificial intelligence and other image processing algorithms to generate higher quality longwave thermal and fused thermal and near-infrared imagery suitable for use on embedded hardware systems for Soldier-borne use. The technology should reduce cognitive burden during long duration missions and improve user acceptance of systems that employ LWIR and NIR sensors. The algorithms should be capable of generating high-quality imagery under various illumination and ambient conditions and provide feedback to the system to adjust camera settings. The proposed processing schema should be capable of running on low size, weight, power, and cost (SWAP-C) embedded hardware. The project will be conducted in three phases: Phase I involves generating a detailed description of the proposed solution, Phase II focuses on completing the image processing pipeline, and Phase III involves instantiating the image pipeline on relevant low SWAP-C embedded hardware. The project duration is from the release date (October 3, 2023) to the close date (March 31, 2025). For more information, visit the [solicitation link](https://www.sbir.gov/node/2651327).