Our client is building the next generation of autonomous vehicle systems—ones that don't just detect the world, but understand it. Inspired by neuroscience and built on advanced AI, they're developing a cognition-first approach to perception that allows their vehicles to reason about complex urban environments.

As a Perception Engineer, you'll contribute directly to the real-time perception stack, building systems that transform raw sensor data into a deep understanding of the driving scene.

Responsibilities:
• Design, develop, and optimize real-time perception algorithms for autonomous driving using data from LiDAR, radar, cameras, and ultrasonic sensors.
• Implement advanced sensor fusion pipelines combining multi-modal data for robust object detection and classification.
• Build and fine-tune deep learning models for semantic segmentation, instance segmentation, and object tracking (e.g., YOLO, Mask R-CNN, DeepSORT).
• Process and analyze 3D point cloud data for spatial reasoning and environmental understanding.
• Work with tracking and filtering methods such as Kalman filters and Extended Kalman Filters (EKF) for dynamic object tracking.
• Integrate and calibrate perception sensors with high-precision requirements (camera, radar, LiDAR).
• Simulate and test perception systems in virtual environments (e.g., Carla, AirSim) and validate them in diverse real-world conditions (night, rain, fog).
• Collaborate closely with SLAM, mapping, and planning teams to ensure consistent scene representation and performance.

Requirements:
• Solid background in computer vision and deep learning (CNNs, RNNs, 3D CNNs), with a focus on real-time image and point cloud processing.
• Experience with sensor fusion, tracking, and object detection frameworks (YOLO, SSD, Mask R-CNN, etc.).
• Skilled in Python/C++ and tools like OpenCV, PCL, TensorFlow, PyTorch, and CUDA.
• Familiarity with ROS/ROS2, Carla, SUMO, or other AV simulation frameworks.
• Proven ability in calibration and integration of perception sensors; understanding of HD Maps and environmental feature extraction.
• Knowledge of SLAM and localization techniques is a strong plus.
• Experience with testing and validation of perception systems in safety-critical environments.

Nice to Have:
• Experience with reinforcement learning or decision-making algorithms in unstructured environments.
• Hands-on work with parallel processing (CUDA/OpenCL) and real-time optimization.
• Familiarity with HD maps, OpenStreetMap integration, and high-resolution semantic mapping.

Why You Should Join Us:
• Play a central role in shaping how the vehicles see and interpret the world.
• Join an ambitious, science-driven team that values deep collaboration and continuous learning.
• Thrive in a flat hierarchy with fast decision-making and real ownership.
• Work at the intersection of cutting-edge AI and real-world engineering in the heart of Berlin’s vibrant Kreuzberg.