Advanced Perception & Cognition

Accelerating Awareness

At Scientific Systems, we approach sensing from a perception point of view: we do not simply collect data; we create actionable information through an active, adaptive process. Our method of active perception enables systems to sense, react to, adapt to, and predict events in the environment.

Advanced perception combines sensor technologies, signal processing, classifiers, state filters, and environmental models to create context-driven smart sensing. It couples tightly with our AI/ML and autonomy technologies to create mission capabilities that address complex problems in challenging environments, from detecting and localizing hostile fire in dense urban spaces, to identifying and tracking targets of interest, to fast obstacle avoidance and navigation in underground tunnels.

Closed Loop Sensing

Closed loop sensing is the key technology that brings sensing into a control loop, enabling sensors to control their perspective and path within the environment to resolve ambiguities and improve performance. For mobile systems, this means (1) sensor-driven course and pose adjustments that optimize detection, classification, and identification of targets; (2) faster, more efficient area searches informed by what has already been searched and detected; and (3) fast reactive capabilities to avoid obstacles or intercept targets.
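The loop structure can be made concrete with a toy example. The Python sketch below is illustrative only, not SSCI code: a sensor repeatedly interrogates the cell where a hidden target is currently believed most likely, and each noisy observation feeds a Bayesian update that drives the next look. All parameters (grid size, detection and false alarm rates) are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy world: a target hides in one of N cells. Pointing the sensor
    # at a cell yields a noisy detection (P_D hit rate, P_FA false
    # alarm rate). All numbers here are illustrative assumptions.
    N, P_D, P_FA = 20, 0.8, 0.1
    target = rng.integers(N)
    belief = np.full(N, 1.0 / N)          # uniform prior over cells

    def observe(cell):
        return rng.random() < (P_D if cell == target else P_FA)

    for step in range(50):
        # Close the loop: point the sensor at the cell where the target
        # is currently believed most likely, then fuse the result.
        cell = int(np.argmax(belief))
        z = observe(cell)
        # Bayesian update of the belief given detection / non-detection
        likelihood = np.where(np.arange(N) == cell,
                              P_D if z else 1.0 - P_D,
                              P_FA if z else 1.0 - P_FA)
        belief = likelihood * belief
        belief /= belief.sum()
        if belief.max() > 0.99:           # ambiguity resolved
            break

    print(f"estimated cell {int(belief.argmax())}, true cell {int(target)}")

Real systems replace the grid belief with target-state filters and the greedy look with route planning, but the feedback structure is the same: observations change the belief, and the belief chooses the next observation.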

An example of our closed loop sensing technology is the Squad X FOCUS Technology [Finding Objects thru Closed Loop Understanding of the Scene]. FOCUS is a platform- and sensor-agnostic software infrastructure, demonstrated on autonomous drone systems, that performs optical area searches for individuals in cluttered urban environments and determines whether those individuals are armed or unarmed.

Onboard Reactive Navigation

Reactive navigation enables autonomous systems to respond to rapid changes in the environment, augmenting other forms of navigation such as GPS and pre-built 3D maps. This can include avoiding obstacles at speed, maneuvering in GPS-denied environments, re-routing around obstacles absent from pre-loaded maps or encountered in unmapped novel environments, and intercepting moving targets.

An example of our reactive navigation technology is the Fast Lightweight Autonomy (FLA) R-ADVANCE program [Rapid Adaptive preDiction for Vision‐based Autonomous Navigation, Control, and Evasion]. The R-ADVANCE technology is a vision-based reactive navigation system for drones moving rapidly through environments with little to no prior knowledge of potential obstacles. The system uses expansion-rate information in the visual field to create a steering field that avoids obstacles at speed.
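The expansion-rate idea can be sketched in a few lines. The function below is a toy illustration under assumed conventions (bearings in radians, positive = left of the flight axis; expansion rate = image growth rate divided by image size, which approximates inverse time-to-contact), not the R-ADVANCE implementation:

    import numpy as np

    def steer_from_expansion(bearings, expansion_rates, gain=1.0):
        """Toy expansion-rate steering. Each tracked feature repels the
        heading away from its bearing, weighted by its expansion rate
        (s_dot / s, roughly 1 / time-to-contact) and by how close to the
        flight axis it sits. Returns a signed turn-rate command
        (positive = turn left)."""
        b = np.asarray(bearings, dtype=float)
        e = np.clip(np.asarray(expansion_rates, dtype=float), 0.0, None)
        centrality = np.cos(b) ** 2      # obstacles dead ahead dominate
        # Note: a feature exactly on-axis (b == 0) contributes no turn;
        # a real system must break this symmetry.
        return gain * np.sum(-np.sign(b) * e * centrality)

    # A fast-expanding feature just right of center outweighs a slowly
    # expanding one on the left, so the command turns left (positive).
    print(steer_from_expansion(bearings=[-0.1, 0.6],
                               expansion_rates=[2.0, 0.5]))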

Cognitive Sensing

Cognitive sensing uses sensor information, in the context of a mission, environment, and platform, to generate better sensor data, infer additional information about targets of interest, and suggest actions (such as smart route planning) that continue to improve the information picture. This use of context classifiers not only improves detections and expands information about targets; it also creates a more semantic representation of targets of interest for more advanced decision-making. Applications include advanced optical search using environmental context and complex acoustic scene analysis for target association.

Our 3D Visibility Aware technology is an example of our cognitive sensing capabilities. The Visibility Aware algorithms use three-dimensional representations of the environment to reason about fields of view in complex spaces such as urban environments. When performing active searches of complex environments, a 3D Aware system takes into account obstructions and camera pose to determine whether a point on the ground, or on a building, can be seen by a camera on a mobile platform, and intelligently updates its path to ensure full coverage of the search area.
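As a simplified illustration of the underlying geometry (not the Visibility Aware algorithms themselves), the sketch below marches a ray through a 3D occupancy grid to decide whether a point is visible from a camera position; grid size, resolution, and the sample scene are all assumptions:

    import numpy as np

    def line_of_sight(occupancy, cam, point, step=0.25):
        """Illustrative visibility check over a 3D occupancy grid.
        occupancy: boolean array indexed [x, y, z] at 1 m resolution,
        True = blocked. cam, point: 3D coordinates in meters. Marches
        along the ray and reports whether the view is unobstructed."""
        cam, point = np.asarray(cam, float), np.asarray(point, float)
        direction = point - cam
        dist = np.linalg.norm(direction)
        direction /= dist
        t = step
        while t < dist:
            x, y, z = (cam + t * direction).astype(int)
            if occupancy[x, y, z]:
                return False    # an obstruction blocks the camera's view
            t += step
        return True

    # A 40 m cube containing a wall: points behind it are not visible.
    grid = np.zeros((40, 40, 40), dtype=bool)
    grid[20, 5:15, 0:10] = True                               # the wall
    print(line_of_sight(grid, cam=(5, 10, 8), point=(35, 10, 1)))  # False
    print(line_of_sight(grid, cam=(5, 30, 8), point=(35, 30, 1)))  # True

A search planner can run this check over candidate viewpoints to find poses that cover ground points still unseen, which is the path-update behavior described above.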

Multimodal Sensor Integration

Using multiple sensor modalities to resolve a single target type dramatically improves detection, identification, and localization of targets, while simultaneously reducing false alarms and the clutter and noise issues of any single modality. Multimodal sensor integration can include cross-cueing: using wide field of view sensors for initial detections and more accurate narrow field of view sensors for detailed interrogation of targets. Any number of modalities can be integrated, from radar to RF and electric field sensors, EO/IR and acoustics, chemical sensing, and seismic detectors. This multimodal integration is not simply looking at the same target in multiple sensing dimensions; it also uses context about events related to a target to improve understanding of the target and its actions.
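The cross-cueing pattern can be summarized in a small sketch. Everything below (function names, thresholds, scores) is an illustrative assumption rather than a description of a fielded system: a cheap wide-field-of-view sensor cues liberally, and a costly narrow-field-of-view sensor confirms conservatively.

    WIDE_CUE_THRESHOLD = 0.3   # wide FOV is cheap: cue liberally
    CONFIRM_THRESHOLD = 0.8    # narrow FOV is costly: report conservatively

    def narrow_fov_inspect(bearing_deg):
        # Stand-in for slewing a telephoto EO/IR sensor to the cued
        # bearing and re-scoring the candidate; a real system re-images.
        return 0.9 if abs(bearing_deg) < 45 else 0.1

    def cross_cue(wide_detections):
        """wide_detections: list of (bearing_deg, score) pairs from the
        wide-FOV sensor. Returns bearings the narrow-FOV sensor confirms."""
        confirmed = []
        for bearing_deg, score in wide_detections:
            if score >= WIDE_CUE_THRESHOLD:          # worth a closer look
                if narrow_fov_inspect(bearing_deg) >= CONFIRM_THRESHOLD:
                    confirmed.append(bearing_deg)
        return confirmed

    print(cross_cue([(-30.0, 0.5), (10.0, 0.2), (120.0, 0.9)]))  # [-30.0]

The asymmetric thresholds reflect the cost structure: false cues only waste narrow-FOV dwell time, while false reports waste operator attention.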

The SWIFT system [Sense Warn Investigate Find/Fix Track/Target] is an example of SSCI’s multimodal sensor integration, using acoustic sensors and EO/IR cameras to help resolve active shooter or hostile fire situations in urban spaces. The system uses a small array of omnidirectional, non-line-of-sight acoustic sensors to create a persistent and pervasive sensor network able to detect and localize weapon fire. Cognitive sensing uses environmental information to mitigate multipath, narrow the search area, and provide ballistic information about the source weapon.
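The acoustic localization step rests on time-difference-of-arrival (TDOA) geometry. The sketch below shows that core idea with a simple grid search; SWIFT's actual processing, including the multipath mitigation described above, is considerably more involved, and all positions and numbers here are made up:

    import numpy as np

    C = 343.0                               # speed of sound, m/s
    sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
    source = np.array([62.0, 37.0])         # true (unknown) shot location

    # Simulated arrival times, referenced to the first sensor.
    toa = np.linalg.norm(sensors - source, axis=1) / C
    tdoa = toa - toa[0]

    # Grid search: pick the point whose predicted TDOAs best match
    # the measured ones (least-squares residual).
    xs = ys = np.arange(0.0, 100.5, 0.5)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
            err = np.sum(((d - d[0]) / C - tdoa) ** 2)
            if err < best_err:
                best, best_err = (x, y), err

    print("estimated shot location:", best)  # recovers (62.0, 37.0)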

Learn More About Advanced Perception

Are you interested in learning more about our advanced perception capabilities? Contact a member of the Scientific Systems team to find out how we can develop solutions to meet your particular mission requirements.