
ORION

A hybrid learning framework that delivers resilient, real-time UAV localization in indoor environments where GPS is unavailable.

Project ORION develops a lightweight, hybrid learning-based localization system that lets UAVs and mobile robots navigate reliably in GPS-denied environments. By fusing semantic terrain perception, inertial sensing, and magnetic field signatures under an adaptive arbitration layer, ORION maintains accurate, real-time pose estimation even in darkness, under fast motion, or in visually degraded conditions.

01


Multi-Modal Sensor Processing Layer

ORION begins with a distributed sensor processing pipeline that synchronizes and cleans data from three complementary sources:


  • Camera (visual frames): Used for semantic segmentation and visual odometry.

  • IMU (accelerometer, gyroscope): Provides high-frequency motion cues, drift modeling, and temporal dynamics.

  • Magnetometer: Captures ambient magnetic signatures that remain informative even when vision is degraded.


This layer performs bias correction, noise filtering, timestamp alignment, and feature extraction to convert raw sensor data into stable, model-ready signals.
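As a rough illustration of what this layer does, the sketch below applies constant-bias correction, moving-average noise filtering, and timestamp alignment to a simulated IMU stream. The function names and parameters are illustrative only, not ORION's actual API.

```python
import numpy as np

def correct_bias(samples: np.ndarray, stationary: np.ndarray) -> np.ndarray:
    """Subtract a constant bias estimated from a stationary calibration window."""
    return samples - stationary.mean(axis=0)

def smooth(samples: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average filter applied per channel to suppress sensor noise."""
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(ch, kernel, mode="same") for ch in samples.T], axis=1
    )

def align_to(ref_t: np.ndarray, t: np.ndarray, samples: np.ndarray) -> np.ndarray:
    """Resample a sensor stream onto reference (e.g. camera) timestamps."""
    return np.stack(
        [np.interp(ref_t, t, samples[:, c]) for c in range(samples.shape[1])],
        axis=1,
    )

# Example: align 200 Hz IMU samples onto 30 Hz camera timestamps.
cam_t = np.arange(0, 1, 1 / 30)
imu_t = np.arange(0, 1, 1 / 200)
imu = np.random.randn(len(imu_t), 6)        # [ax, ay, az, gx, gy, gz]
imu = smooth(correct_bias(imu, imu[:20]))   # first 20 samples as stationary window
imu_aligned = align_to(cam_t, imu_t, imu)   # shape (30, 6): model-ready signal
```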

02

Hybrid Learning-Based Localization Core

IRNN-DNN Fusion Module

A lightweight integrated recurrent neural network (IRNN) models temporal motion dynamics from IMU data, while a deep neural network extracts spatial features from camera frames. The module fuses these representations to predict short-term pose changes and suppress IMU drift.
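A minimal PyTorch sketch of this fusion pattern is shown below: a recurrent branch summarizes an IMU window, a convolutional branch encodes the current frame, and a shared head regresses a short-term pose delta. The layer sizes, the GRU stand-in for the IRNN, and the 6-DoF output parameterization are assumptions for illustration, not ORION's published architecture.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, imu_dim=6, hidden=64, feat=128):
        super().__init__()
        self.rnn = nn.GRU(imu_dim, hidden, batch_first=True)  # temporal IMU branch
        self.cnn = nn.Sequential(                              # spatial visual branch
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat),
        )
        self.head = nn.Sequential(                             # fused pose regressor
            nn.Linear(hidden + feat, 64), nn.ReLU(),
            nn.Linear(64, 6),                                  # [dx, dy, dz, droll, dpitch, dyaw]
        )

    def forward(self, imu_seq, frame):
        _, h = self.rnn(imu_seq)       # h: (layers, B, hidden) temporal summary
        v = self.cnn(frame)            # v: (B, feat) spatial features
        return self.head(torch.cat([h[-1], v], dim=1))

# Example: a 100-sample IMU window plus one 64x64 RGB frame per batch item.
model = FusionModule()
delta_pose = model(torch.randn(2, 100, 6), torch.randn(2, 3, 64, 64))  # (2, 6)
```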


Semantic Terrain Perception Module

A visual backbone (CNN/ViT) performs semantic segmentation to identify floors, walls, corners, and spatial landmarks. These semantics provide context—helping the system maintain stable localization even in repetitive or texture-poor environments.
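To make this concrete, the sketch below runs an off-the-shelf DeepLabV3 backbone from torchvision as a stand-in segmenter; the specific backbone and class set ORION uses are not detailed here, so this model choice is purely an assumption. The per-pixel entropy computed at the end is the kind of segmentation-uncertainty signal consumed by the arbitration layer described in section 04.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in segmentation backbone (pretrained weights downloaded on first use).
model = deeplabv3_resnet50(weights="DEFAULT").eval()

frame = torch.randn(1, 3, 480, 640)      # one normalized RGB camera frame
with torch.no_grad():
    logits = model(frame)["out"]         # (1, num_classes, 480, 640)
labels = logits.argmax(dim=1)            # per-pixel class map (floors, walls, ... in ORION)

# Per-pixel softmax entropy doubles as a visual-uncertainty metric (section 04).
probs = logits.softmax(dim=1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (1, 480, 640)
```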


Together, these components form ORION’s main “visual-inertial intelligence.”


03

Magnetic Fallback & Drift-Resilient Estimation

When vision becomes unreliable (darkness, smoke, glare, occlusion), ORION automatically activates a magnetic inference module. A 1D-CNN or temporal transformer interprets magnetometer sequences to estimate position deltas and stabilize heading.
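The sketch below shows one plausible shape for such a module: a small 1D-CNN that reads a window of magnetometer samples and regresses a planar position delta plus a heading correction. The window length, channel widths, and output parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MagNet(nn.Module):
    """Toy magnetic inference module: magnetometer window -> pose correction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, 7, padding=3), nn.ReLU(),   # 3 channels: [mx, my, mz]
            nn.Conv1d(32, 64, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 3),                            # [dx, dy, dyaw]
        )

    def forward(self, mag_seq):                          # mag_seq: (B, 3, window)
        return self.net(mag_seq)

mag_window = torch.randn(1, 3, 128)   # last 128 magnetometer samples
delta = MagNet()(mag_window)          # position delta + heading stabilization
```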


This fallback pathway provides a lifeline in visually degraded conditions, maintaining continuity where conventional SLAM collapses.

04

Confidence-Based Arbitration & Sensor Fusion

A meta-layer continuously evaluates uncertainty metrics — entropy of visual segmentation, IMU variance, magnetic stability — and dynamically shifts responsibility between modules.


This arbitration logic controls:

  • Fusion weights inside the EKF

  • Mode switching between visual-inertial vs. magnetic-inertial estimation

  • Adaptive damping of drift when sensory confidence drops


Instead of assuming perfect sensing, ORION actively interrogates its own trust in each modality. This is the piece that ultimately enables graceful degradation and real-time adaptability.
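A toy sketch of this arbitration policy is shown below, assuming scalar confidence scores derived from the three metrics above. The exponential mappings, mode-switch threshold, and damping rule are illustrative choices, not ORION's published logic.

```python
import numpy as np

def arbitrate(seg_entropy: float, imu_var: float, mag_stability: float):
    # Map each raw metric to a confidence in [0, 1]
    # (lower entropy / lower variance = higher confidence).
    conf = np.array([
        np.exp(-seg_entropy),   # visual confidence
        np.exp(-imu_var),       # inertial confidence
        mag_stability,          # magnetic confidence, assumed already in [0, 1]
    ])
    weights = conf / conf.sum()  # fusion weights fed to the EKF update

    # Mode switch: fall back to magnetic-inertial estimation when vision degrades.
    mode = "visual-inertial" if weights[0] > 0.2 else "magnetic-inertial"

    # Adaptive damping: trust the motion model more when overall confidence drops.
    damping = 1.0 - conf.max()
    return weights, mode, damping

# Example: high segmentation entropy (degraded vision) triggers the fallback.
weights, mode, damping = arbitrate(seg_entropy=2.5, imu_var=0.4, mag_stability=0.8)
print(mode, weights.round(2), round(damping, 2))   # magnetic-inertial ...
```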


Publications

In progress.


Acknowledgements


Gallery


Department of Electrical and Computer Engineering

University of Washington
185 E Stevens Way NE
Seattle, WA 98195

Connect With Us

  • LinkedIn

© 2025 by ARC Lab. All Rights Reserved.
