Role details
Company Overview
We are a rapidly growing technology startup focused on delivering next-generation drones for security and safety applications. Our company vertically integrates hardware and software to create leading-edge capabilities in the UAV space, with a focus on saving lives.
As a Computer Vision & Autonomy Engineer, you will be joining the team responsible for the design, development, and implementation of high-speed perception and autonomy stacks capable of identifying and tracking highly dynamic objects. You will solve the unique challenges of high-dynamic sensing, where relative velocities are extreme and the margin for error is zero.
Responsibilities
Perception Pipeline Development: Develop robust, real-time Deep Learning and Classical CV algorithms (e.g., YOLO, Transformer-based architectures) for the classification and tracking of highly dynamic objects.
High-Speed State Estimation: Implement Visual-Inertial Odometry (VIO) and filtering techniques to estimate target 3D trajectories and "Time-to-Go" under high-G maneuvers.
GPS-Denied Perception: Create GPS-denied navigation solutions and anti-jamming vision pipelines that maintain autonomy when external signals are compromised.
Guidance Logic: Design "Vision-Based Pursuit" laws and Proportional Navigation (PN) enhancements that translate visual target states into actionable steering commands (see the illustrative sketch after this list).
Real-time Deployment: Optimize algorithms for ultra-low latency execution on low-power devices, ensuring the "sensor-to-actuator" delay is minimized.
Deterministic Benchmarking: Profile and eliminate "long-tail" latency spikes in the autonomy stack to ensure a deterministic sensor-to-actuator response time.
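By way of illustration only, and not a prescribed implementation: the sketch below shows the proportional-navigation idea referenced in the Guidance Logic item, in Python, turning a relative target state (of the kind a perception stack would estimate) into a lateral-acceleration command plus a crude time-to-go. The function name, the navigation gain, and the planar constant-velocity assumptions are hypothetical.

import numpy as np

def pn_lateral_accel(own_pos, own_vel, tgt_pos, tgt_vel, nav_gain=4.0):
    """Classic planar PN: a_cmd = N * Vc * LOS_rate."""
    r = tgt_pos - own_pos                    # relative position (line-of-sight vector)
    v = tgt_vel - own_vel                    # relative velocity
    rng = np.linalg.norm(r)
    closing_speed = -np.dot(r, v) / rng      # Vc > 0 while closing on the target
    los_rate = (r[0] * v[1] - r[1] * v[0]) / rng**2  # planar LOS angular rate (rad/s)
    a_cmd = nav_gain * closing_speed * los_rate      # lateral acceleration command
    t_go = rng / max(closing_speed, 1e-6)            # crude time-to-go estimate (s)
    return a_cmd, t_go

# Example: a target crossing from the right while the vehicle closes head-on.
a_cmd, t_go = pn_lateral_accel(
    own_pos=np.array([0.0, 0.0]),    own_vel=np.array([60.0, 0.0]),
    tgt_pos=np.array([300.0, 40.0]), tgt_vel=np.array([-20.0, -5.0]),
)
print(f"lateral accel cmd: {a_cmd:.2f} m/s^2, time-to-go: {t_go:.1f} s")

The navigation gain of 4.0 and the closing-speed floor are tuning guesses for the example, not values taken from this listing.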
Qualifications
Education: Master’s or PhD in Robotics, Computer Science, or Aerospace Engineering with a focus on Computer Vision or Autonomous Systems.
Dynamic Vision Skills: Expert knowledge of object tracking (KCF, SORT, DeepSORT) and the geometry of moving camera platforms.
Real-Time Software: Proficiency in C++20 and CUDA for high-throughput image processing, and Python for training ML models.
Mathematics: Deep understanding of 3D geometry, Kalman Filtering (EKF/UKF), and the physics of relative motion (a minimal filtering sketch follows this list).
EO/IR Cameras: Experience working with Long-Wave Infrared (LWIR) or Mid-Wave Infrared (MWIR) sensors.
Embedded Systems: Experience deploying models on NVIDIA Jetson Orin or FPGA-based vision processing.
High-Fidelity Simulation: Proficiency in NVIDIA Isaac Sim, Unreal Engine 5, or Gazebo to generate synthetic data for rare "corner-case" scenarios.
Control Integration: Understanding of how perception latency affects the stability of flight control loops.
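Purely as an illustration of the state-estimation background mentioned under Mathematics, and not this team's actual filter: a minimal linear constant-velocity Kalman filter over a target's image-plane position, the kind of predictor that sits underneath trackers such as SORT. The frame rate, noise covariances, and initial values below are all assumptions.

import numpy as np

dt = 1.0 / 60.0                            # assumed camera frame period (s)
F = np.array([[1, 0, dt, 0],               # state transition, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only pixel position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                       # process noise (tuning guess)
R = np.eye(2) * 2.0                        # measurement noise (pixels^2, guess)

x = np.array([100.0, 80.0, 0.0, 0.0])      # initial state guess
P = np.eye(4) * 10.0                       # initial covariance guess

def kf_step(x, P, z):
    """One predict + update cycle for a pixel measurement z = [u, v]."""
    x = F @ x                              # predict state forward one frame
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # corrected state
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([102.0, 81.0]), np.array([104.1, 82.2])]:
    x, P = kf_step(x, P, z)
print("filtered state [x, y, vx, vy]:", np.round(x, 2))

An EKF or UKF, as named in the listing, generalizes this same predict/update loop to the nonlinear camera and motion models the role involves.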
Location
Oakland