RescueScope: Adaptive Multi-Target Vision for Emergency Response

RescueScope delivers a resilient perception stack purpose-built for emergency crews navigating chaotic flood, storm, or quake zones. It fuses rapid object discovery with durable trajectory management so that operators always know where survivors, responders, and obstacles are moving.

The pipeline ingests video from drones, elevated rigs, or mobile units, stabilises the feed, and hands detection candidates to a tracking core that balances speed with identity reliability. Real-time overlays and structured telemetry can be relayed back to command dashboards, providing an actionable map when visibility on the ground collapses.
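
To make that flow concrete, here is a minimal sketch of the per-frame loop. The detect and tracker objects, and the tuple layouts they exchange, are illustrative assumptions rather than the repository's actual API.

```python
# Illustrative per-frame dataflow; detect() and tracker are placeholders,
# not the repository's actual API.
import cv2

def run_pipeline(source, detect, tracker):
    """Read frames, detect, track, and yield colour-coded overlays."""
    cap = cv2.VideoCapture(source)           # file path, RTSP URL, or webcam ID
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect(frame)           # list of (x1, y1, x2, y2, score)
        tracks = tracker.update(detections)  # list of (x1, y1, x2, y2, track_id)
        for x1, y1, x2, y2, tid in tracks:
            # Derive a stable per-identity colour from the track ID.
            colour = ((37 * tid) % 255, (17 * tid) % 255, 255)
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), colour, 2)
        yield frame, tracks                  # frame for display, tracks for telemetry
    cap.release()
```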

Field Footage Snapshot

This capture shows the tracker maintaining IDs through intense rainfall and reflections. Bounding boxes are colour-coded per identity so coordinators can spot diverging paths in seconds.

MOT.mp4

Key Capabilities

  • Multiclass awareness for people and vehicles with prioritisation hooks for mission-specific objects.
  • Seamless switch between fast SORT-style motion models and appearance-aware Deep SORT for dense scenes.
  • Lightweight Hungarian association implementation tuned for low-latency edge deployment.
  • Configurable alert rules: dwell time inside geofences, crowding thresholds, and loss-of-tracking escalation (a sketch follows this list).
  • Export-ready tracks that slot into GIS systems or timeline reports for incident reviews.
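
As referenced above, a dwell-time geofence rule could be wired up along these lines. The DwellAlert class, its thresholds, and the track tuple layout are hypothetical, not code from this repository.

```python
# Hypothetical alert-rule sketch: escalate when a track dwells inside an
# axis-aligned geofence longer than a threshold. All names illustrative.
from collections import defaultdict

class DwellAlert:
    def __init__(self, fence, max_dwell_s, fps=30):
        self.fence = fence                 # (x1, y1, x2, y2) in pixels
        self.max_frames = int(max_dwell_s * fps)
        self.dwell = defaultdict(int)      # track_id -> consecutive frames inside

    def update(self, tracks):
        """tracks: iterable of (x1, y1, x2, y2, track_id); returns newly alerted IDs."""
        fx1, fy1, fx2, fy2 = self.fence
        alerts = []
        for x1, y1, x2, y2, tid in tracks:
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2    # box centre
            if fx1 <= cx <= fx2 and fy1 <= cy <= fy2:
                self.dwell[tid] += 1
                if self.dwell[tid] == self.max_frames:
                    alerts.append(tid)     # fire once when the threshold is crossed
            else:
                self.dwell[tid] = 0        # reset dwell count on exit
        return alerts
```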

System Blueprint

1. Acquisition & Normalisation

Background_subtraction.py denoises, equalises exposure, and highlights motion pockets while maintaining a fall-back static background estimate for sudden lighting shifts.
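
For orientation, here is a compact OpenCV sketch of this stage's ideas, using MOG2 as a stand-in background model; Background_subtraction.py's actual method and parameters may differ.

```python
# Minimal sketch of the acquisition stage: equalise, denoise, then model
# the background. Not the repository's actual implementation.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def motion_mask(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = clahe.apply(gray)                   # equalise exposure locally
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # denoise before modelling
    mask = subtractor.apply(gray)              # adaptive background model
    # Morphological opening suppresses rain streaks and sensor speckle.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```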

2. Proposal Generation

The detector stage supplies bounding boxes for each frame. It is detector-agnostic, so feeds from YOLO models, classical cascades, or embedded sensors can plug in without altering downstream code.
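
A plug-in contract along these lines might look as follows; the Detection fields and Detector protocol are assumptions about a reasonable adapter shape, not the module's real interface.

```python
# Sketch of a detector-agnostic adapter contract; all names are assumed.
from dataclasses import dataclass
from typing import Protocol, Sequence
import numpy as np

@dataclass
class Detection:
    box: tuple          # (x1, y1, x2, y2) in pixel coordinates
    score: float        # detector confidence in [0, 1]
    label: str          # e.g. "person", "vehicle"

class Detector(Protocol):
    def __call__(self, frame: np.ndarray) -> Sequence[Detection]: ...

def from_yolo(results) -> list[Detection]:
    """Example adapter: convert one detector's raw (box, score, class)
    triples into Detections. Exact fields depend on the wrapper you use."""
    return [Detection(box=tuple(b), score=float(s), label=str(c))
            for b, s, c in results]
```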

3. Tracking Core

SORT.py keeps the loop lean with a Kalman prediction step and IOU-based assignment so small processors can keep up with 30 FPS feeds. Custom_DeepSORT.py layers appearance embeddings on top of motion cues, giving the system persistent identities even during deep occlusions.
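
The essentials of that loop, sketched with the track internals elided; SORT.py's real implementation will differ in detail.

```python
# IOU helper plus the predict -> associate -> correct skeleton that a
# SORT-style loop follows; Track internals are elided for brevity.
import numpy as np

def iou(a, b):
    """IOU of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def step(tracks, detections, match_fn, iou_min=0.3):
    """One frame: Kalman-predict each track, associate, then correct."""
    predicted = [t.predict() for t in tracks]       # motion model forward
    matches, new_dets, lost = match_fn(predicted, detections, iou_min)
    for t_idx, d_idx in matches:
        tracks[t_idx].update(detections[d_idx])     # Kalman correction
    return matches, new_dets, lost
```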

4. Global Association

Hungarian.py wraps a clean implementation of the linear sum assignment routine. Cost matrices can mix IOU, centre distance, and appearance similarity, letting teams tailor trade-offs between responsiveness and stability.
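
Blended-cost association can be sketched with SciPy's solver for brevity (Hungarian.py ships its own); the weights, gating threshold, and appearance term below are illustrative, and iou() is the helper from the tracking sketch above.

```python
# Illustrative blended-cost association; weights and gate are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(pred_boxes, det_boxes, app_sim=None, w_iou=0.7, w_app=0.3):
    """Return (track_idx, det_idx) pairs minimising the blended cost."""
    n, m = len(pred_boxes), len(det_boxes)
    cost = np.ones((n, m))
    for i in range(n):
        for j in range(m):
            c = 1.0 - iou(pred_boxes[i], det_boxes[j])      # motion term
            if app_sim is not None:                          # appearance term
                c = w_iou * c + w_app * (1.0 - app_sim[i, j])
            cost[i, j] = c
    rows, cols = linear_sum_assignment(cost)
    # Reject weak pairings so distant detections spawn new tracks instead.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 0.7]
```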

5. Identity Maintenance

Recovered trajectories are smoothed, aged, and tagged with confidence scores. Ageing logic retires stale tracks swiftly while allowing short occlusions to resolve organically.
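
Ageing of this kind can be expressed compactly; the field names and thresholds below are assumptions, not the repository's values.

```python
# Illustrative ageing logic: tracks survive short occlusions but retire
# once unmatched too long. Thresholds and field names are assumed.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    box: tuple
    hits: int = 1                  # consecutive matched frames
    time_since_update: int = 0     # frames since a detection last matched

def age_tracks(tracks, matched_ids, max_age=30, min_hits=3):
    alive = []
    for t in tracks:
        if t.track_id in matched_ids:
            t.hits += 1
            t.time_since_update = 0
        else:
            t.time_since_update += 1
        if t.time_since_update <= max_age:   # tolerate brief occlusion
            alive.append(t)                  # else: stale track retires
    # Only report tracks past the hit streak to filter one-frame flickers.
    return alive, [t for t in alive if t.hits >= min_hits]
```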

Change Detection Playground

The background subtraction sandbox visualises how adaptive median modelling copes with camera shake, rain streaks, and moving foliage before candidates head into the tracker.

Bg.mp4
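
The core of approximate median modelling fits in a few lines; the update below is the classic one-step-toward-the-frame scheme, with the foreground threshold assumed.

```python
# Approximate-median background update: nudge each background pixel one
# step toward the current frame, so bg converges to the temporal median.
import numpy as np

def update_median_bg(bg, frame):
    bg = bg.astype(np.int16)
    bg += (frame.astype(np.int16) > bg).astype(np.int16)
    bg -= (frame.astype(np.int16) < bg).astype(np.int16)
    return bg.astype(np.uint8)

def foreground(bg, frame, thresh=25):
    """Boolean mask of pixels deviating from the background estimate."""
    return np.abs(frame.astype(np.int16) - bg.astype(np.int16)) > thresh
```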

Assignment Visuals

These diagrams walk through how the cost matrix evolves before the Hungarian solver snaps detections to legacy IDs. They help calibrate IOU thresholds and diagnose identity swaps.

Running the Toolkit

  1. Create a Python environment (3.9+ recommended) and install the dependencies listed in requirements.txt, plus whatever your chosen detector package requires.
  2. Provide video sources via file path, RTSP stream, or webcam ID. Multiple feeds can be queued through a dispatcher script.
  3. Choose SORT.py for rapid prototyping or Custom_DeepSORT.py when appearance features are available.
  4. Tune IOU and feature similarity thresholds to match the density of the operational scene.
  5. Enable telemetry export to persist track histories for post-incident analysis (a sketch follows this list).
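
For step 5, telemetry could be persisted as simply as one JSON line per track per frame; the schema below is hypothetical.

```python
# Hypothetical telemetry export: append one JSON record per track per
# frame so post-incident tools can replay the scene. Schema is assumed.
import json
import time

def export_tracks(path, frame_idx, tracks):
    with open(path, "a", encoding="utf-8") as f:
        for x1, y1, x2, y2, tid in tracks:
            f.write(json.dumps({
                "t": time.time(),          # wall clock for incident timelines
                "frame": frame_idx,
                "id": int(tid),
                "box": [float(x1), float(y1), float(x2), float(y2)],
            }) + "\n")
```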

Evaluation Checklist

  • Track continuity across 500+ frames without identity resets.
  • Latency budget below 120 ms per frame on mid-tier GPUs and sub-250 ms on CPU-only deployments.
  • False-positive filtering through minimum hit streaks and maximum unmatched-age logic.
  • Configurable metrics (MOTA, MOTP, IDF1) surfaced for quick regression testing after model updates; a sketch follows this list.
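
As referenced above, one way to surface those metrics is the py-motmetrics package (an assumption: the toolkit may compute them natively).

```python
# Sketch of MOTA/MOTP/IDF1 computation with py-motmetrics; whether the
# toolkit uses this package is an assumption. Boxes are (x, y, w, h).
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)
# per_frame_results: (gt_ids, gt_boxes, hyp_ids, hyp_boxes) tuples you
# assemble from ground-truth annotations and exported track telemetry.
for gt_ids, gt_boxes, hyp_ids, hyp_boxes in per_frame_results:
    dist = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
    acc.update(gt_ids, hyp_ids, dist)

mh = mm.metrics.create()
print(mh.compute(acc, metrics=["mota", "motp", "idf1"], name="regression"))
```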

Operational Playbooks

  • Flood rescues: follow vehicles swept along by floodwater and flag when occupants exit or signals disappear.
  • Storm response: keep continuous visibility on power crews and overhead hazards while communicating safe approach vectors.
  • Mass gatherings: count and monitor crowd flow to guide evacuation corridors when weather or structural threats emerge.

Roadmap

  • Expand to 3D localisation using stereo rigs mounted on UAVs.
  • Integrate semantic segmentation for debris categorisation.
  • Add plug-ins for thermal sensors to improve low-light performance.
  • Deliver packaged dashboards that render tracks, heatmaps, and dwell summaries in the browser.

Full Deep SORT Run

Below is a longer deployment clip highlighting how identity preservation survives overlapping pedestrians and vehicles while the environment shifts dramatically.

MOT.mp4
