Sr. Software Engineer, ML Edge Inference
Serve Robotics
Location
Remote (US)
Type
Full-time
Posted
Aug 05, 2025
Compensation
USD 180,000 – 205,000
Mission
What you will drive
- Own the full lifecycle of ML model deployment on robots—from handoff by the ML team to full system integration.
- Convert, optimize, and integrate trained models (e.g., PyTorch/ONNX/TensorRT) for Jetson platforms using NVIDIA tools (see the sketch after this list).
- Develop and optimize CUDA kernels and pipelines for low-latency, high-throughput model inference.
- Profile and benchmark existing ML workloads using tools such as Nsight, nvprof, and the TensorRT profiler.
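To illustrate the kind of model-conversion work described above, here is a minimal sketch of building a TensorRT engine from an ONNX export, assuming a TensorRT 8.x-style Python API on a Jetson device. The file names (model.onnx, model.plan), the FP16 flag, and the workspace limit are illustrative assumptions, not details from this posting.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, engine_path: str) -> None:
    """Parse an ONNX model and serialize a TensorRT engine for on-robot inference."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            errors = [str(parser.get_error(i)) for i in range(parser.num_errors)]
            raise RuntimeError("ONNX parse failed:\n" + "\n".join(errors))

    config = builder.create_builder_config()
    # Cap builder scratch memory; Jetson boards share RAM between CPU and GPU.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 is usually a safe latency win on Jetson

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("Engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    build_engine("model.onnx", "model.plan")
```

In practice the serialized engine would be loaded by a TensorRT runtime on the robot and fed by a CUDA-accelerated preprocessing pipeline; NVIDIA's trtexec tool offers an equivalent command-line path for quick conversion and benchmarking experiments.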
Impact
The difference you'll make
This role enables advanced ML models to run efficiently on robotic platforms, helping transform robotic delivery from a novelty into an efficient, ubiquitous service that takes deliveries off congested streets and makes them available to more people.
Profile
What makes you a great fit
- Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, or equivalent field.
- 5+ years of experience deploying ML models on embedded or edge platforms (preferably robotics).
- 3+ years of experience with CUDA, TensorRT, and other NVIDIA acceleration tools.
- Proficiency in Python and C++, especially for performance-sensitive systems.
Benefits
What's in it for you
Base salary range (U.S. – all locations): $180,000 – $205,000
About
Inside Serve Robotics
Serve Robotics is reimagining how things move in cities through personable sidewalk robots designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses.