Who we are
Humanising Autonomy is a London-based startup building the global standard for how autonomous systems interact with people.
We were founded on the premise of enabling a safer, more human-centred implementation of autonomous technology, and called ourselves Humanising Autonomy simply because that is what we want to do.
We have developed prediction software that combines AI-powered computer vision and behavioural psychology to understand the full, nuanced range of human behaviour, predicting the intent of pedestrians, cyclists, and other vulnerable road users in real time to improve mobility systems in cities worldwide.
As a critical perception technology, the software integrates with all levels of autonomy, from human-driven to fully autonomous vehicles, to improve safety, efficiency, and pedestrian interactions.
Our customers include multinational leaders in the automotive and mobility space across Europe, the US, and Japan. Having recently closed a $5M+ fundraise, we are looking to expand our lean team with more creative and practical problem solvers to help us develop this key technology for automated mobility, and to realise our vision of better interactions between people and autonomous technology worldwide.
The role
You will be part of the core technical team of engineers, data scientists, and behavioural scientists, designing and implementing cutting-edge computer vision models that understand and interact with people around the world.
Your main responsibilities will include:
● Designing, training, and testing state-of-the-art computer vision and deep learning models to understand and predict the intent of vulnerable road users (pedestrians, cyclists, etc.) for automated vehicles
● Developing evaluation metrics and validating models through hypothesis testing
● Working closely with other vision engineers, embedded engineers, data scientists, and behavioural scientists to understand how behavioural insights can be applied to the deep learning models, in both offline systems and real-time inference systems
The role carries a high degree of responsibility and autonomy within the company, and requires the ability to work both independently and as part of a creative core team of engineers, scientists, and commercial talent.
What we are looking for
● Hands-on experience in designing, training and implementing computer vision models – both classical approaches as well as deep learning approaches using standard deep learning frameworks (eg TensorFlow, PyTorch)
● Excellent programming skills in Python and/or C++
● Ability to work in a fast-paced environment, collaborate across teams and disciplines, and remain open to evaluating new and creative ideas and solutions
● Expertise and hands-on experience in at least one of the following areas (please highlight which area(s) in your application): 3D Computer Vision, Training Neural Networks (CNNs, RNNs), GPU programming in CUDA/OpenCV
Nice to Have
● Experience working on pedestrian or autonomous vehicle problems in Perception and Prediction
● Experience working in any of the following areas: Graphical models, SLAM, Generative models, Parametric model fitting, Bayesian modelling, Continuous optimisation (eg Ceres, g2o)
Qualifications
PhD in Computer Vision, Robotics, or Artificial Intelligence, or appropriate industry experience (4+ years) showing a similar level of thinking.
What’s in it for you
● Working with a small but exciting team of engineers, social scientists and commercial talent to shape the future!
● Have a big say in how the company is run
● Competitive salary and share options
● Pension Scheme
● 25 days holiday per annum + bank holidays
● Flexible working hours
● Cycle to Work scheme
● Beautiful offices in Somerset House in Central London
● Guaranteed lots of plants in the office
To apply please follow the link below: