Planning and Control Algorithm Engineer (Autonomous Driving)
Develop advanced models for predicting and planning multi-agent motion in dynamic environments using learning-based, uncertainty-aware approaches.
11 April 2025
[Job Order ID: 976422]
Responsibilities
Trajectory Prediction Models: Develop advanced learning-based models for predicting the motion of vehicles, pedestrians, and cyclists using architectures such as transformers, diffusion models, or graph neural networks.
Multi-Agent Interaction Modeling: Integrate end-to-end neural networks that can jointly predict the motion of multiple agents, enabling the modeling of complex interactions in dynamic environments.
Uncertainty-Aware Forecasting: Enhance probabilistic prediction with self-supervised learning techniques, focusing on robustness in scenarios with low-frequency events or long-tail distributions.
Social and Cooperative Behavior Modeling: Apply game-theoretic and reinforcement learning approaches to better simulate social and cooperative behaviors in autonomous systems.
Reinforcement and Imitation Learning for Decision-Making: Design decision-making systems using reinforcement learning (RL) or imitation learning (IL) that adapt to complex real-world driving situations and handle uncertainty effectively.
Multi-Agent Decision-Making: Implement multi-agent decision frameworks that allow autonomous vehicles to interact and navigate through dense, uncertain, and highly interactive environments.
Hybrid Motion Planning Systems: Develop motion planning systems combining classical planning methods (graph search, optimization) and learning-based approaches (RL, IL) for improved safety, efficiency, and comfort.
Differentiable Motion Planning: Explore differentiable motion planning architectures that allow gradients to be backpropagated through the planner, improving end-to-end optimization of real-time trajectory generation.
Closed-Loop Simulation & Self-Supervised Learning: Create closed-loop simulation frameworks for seamless integration of prediction, decision, and planning, and apply multi-modal self-supervised learning techniques to improve system robustness.
Real-World Deployment and System Performance: Deploy models in real time on embedded platforms using techniques such as model quantization and pruning, and drive continuous performance improvement through online learning and adaptation from fleet data.
Requirements
Strong knowledge of behavior prediction, decision-making, and motion planning for autonomous systems.
Proficiency in deep learning frameworks (PyTorch/TensorFlow) with experience in transformers, graph neural networks, diffusion models, and reinforcement learning.
Experience with planning and control algorithms (A*, RRT, iLQR, MPC, PID, model-free RL).
Strong programming skills in C++ and Python, with experience in high-performance computing and real-time embedded system optimization.
Familiarity with robotics middleware (ROS/ROS2, Apollo, Autoware) and real-time embedded systems.
Understanding of probabilistic decision-making, POMDPs, and Bayesian inference for uncertainty-aware driving strategies.
Experience in end-to-end autonomous driving, differentiable planning, or foundation models for driving.
Publications in top conferences (CVPR, NeurIPS, ICRA, CoRL, ICCV) related to behavior prediction, RL-based decision-making, or motion planning.
Background in game theory, multi-agent reinforcement learning (MARL), or traffic simulation-based planning.
To apply, please email your updated resume to cv_gary@goodjobcreations.com.sg
Please refer to the Privacy Policy of Good Job Creations: https://goodjobcreations.com.sg/en/privacy-policy/
We regret that only shortlisted candidates will be notified. However, rest assured that all applications will be added to our resume bank for future opportunities.
EA Personnel Name: Gary Ho Cheng Xuan
EA Personnel Reg. No.: R1549767
EA License No.: 07C5771