Oxbotica was spun out of Oxford University five years ago and now offers a suite of autonomous vehicle and fleet management software. Mike Vousden sat down with VP of Technology Ben Upcroft at the Move 2019 conference to hear about the latest developments.

Could you give us a brief overview of Oxbotica?

Oxbotica sells software for autonomy to industries across the world. That includes mining, ports, quarries, airports and, of course, in urban environments with OEMs or taxi operators, for example.

With our partners, we continue to develop our autonomy and, as they improve, our autonomy improves with them. The solution we provide them is the ability to localise – to understand where you are in the world, to understand what’s around you – and to make intelligent decisions using that information. Now, as we grow, the places in which we can do that grow as well.

So far, has most of your testing been UK based?

No, we’re in Asia, North America, Australia and Europe, so we’ve got many autonomy systems all around the world. That’s both deployed products and for testing – we’re testing, evaluating and deploying into public domains – the whole spectrum.

Oxbotica has two products – Caesium and Selenium – can you briefly explain what they are?

Selenium is the brains for a single vehicle – so it’s the vehicle that makes all the intelligent autonomous decisions to drive through the world. Caesium is what you might think of as the management system for a fleet of autonomous vehicles. It gives you the ‘stubs’ or the interface for each vehicle to get information on and off that vehicle but also to pass it to a third party – that could be an operator, someone who wants to know what kind of data comes from those vehicles.

Are Caesium and Selenium usually sold as a pair?

We’re kind of an open shop – you can get the whole thing, altogether as a full stack from Selenium to Caesium. You can have either just by itself but, even cooler, is that you can take components from each of those as well – we’ll give you ‘hooks’ or interfaces into each of those types of components all the way down to our safety stack.

Looking at Selenium, how comparable is it to something like Baidu’s Apollo platform?

I don’t think it compares. Apollo’s program is about deploying onto roads – they’re doing that full vertical sector. We’re trying to accelerate all types of industries – rather than become an expert in a particular type of vertical, we want to be an expert across autonomy in a horizontal sense. So accelerating each of those industries, so they can be better and leverage our expertise more effectively. That’s the same with companies like Cruise or Waymo – they’re looking at a vertical so it’s a very different business model.

Considering the products you sell are entirely software based, do you specify hardware from other manufacturers?

We’re sensor agnostic and hardware agnostic. However, we have preferences and a certain threshold that those sensors and hardware need to get over to be able to achieve what we want to do. So we use a whole bunch of different sensors – LiDAR, radar and vision – and we can mix and match within that for our sensor suite if the customer has a particular specification.

Say, for on-road vehicle autonomy, is it necessary to have a range of radar, LiDAR and vision systems?

Because we go into many different domains, we have that complementarity of each of the sensors. Radar is able to operate in really difficult weather conditions – through dust, through snow, through rain – but it’s not as accurate as LiDAR. Vision systems are much better with visual information than geometric information, so they’re all really complementary. The domain you work in might change how much you rely on one or all of those sensors.

One of the key takeaways from your presentation was how your system doesn’t rely on GPS – could you expand on that a little?

So let me flip that question – how do you know where you are? You don’t have GPS in your head, you don’t actually know where you are in the world to within a centimetre, but you are still able to navigate.

We do something similar – we recognise the place through visual information and we recognise where we might have been through the odometry that we gather. So the route we travel is done through vision or LiDAR, and we recognise the places, rather than pinpoint their location through GPS.
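The idea described above – dead-reckoning with odometry between visual place recognitions, with no GPS involved – can be illustrated with a minimal sketch. The names here (`known_places`, `match_place`, `localise`) are purely illustrative, not Oxbotica’s actual API, and the 2D pose and signature-lookup “recognition” are deliberate simplifications.

```python
# Hypothetical sketch of GPS-free localisation: accumulate odometry,
# then snap to a known place whenever one is visually recognised.
# All names and data here are illustrative assumptions.

# A tiny "map" of previously visited places, keyed by a visual signature.
known_places = {
    "junction_a": (0.0, 0.0),
    "depot_gate": (10.0, 5.0),
}

def integrate_odometry(pose, dx, dy):
    """Dead-reckon: add the relative motion measured since the last update."""
    return (pose[0] + dx, pose[1] + dy)

def match_place(signature):
    """Stand-in for visual place recognition: look up a known signature."""
    return known_places.get(signature)

def localise(pose, dx, dy, signature=None):
    """Advance the pose by odometry; a recognised place overrides drift."""
    pose = integrate_odometry(pose, dx, dy)
    fix = match_place(signature) if signature else None
    return fix if fix is not None else pose

# Odometry alone drifts; recognising "depot_gate" corrects the estimate.
pose = localise((0.0, 0.0), 1.0, 0.5)              # odometry only
pose = localise(pose, 9.2, 4.3, "depot_gate")      # place recognised
```

The key point the sketch makes is that absolute position comes from recognising somewhere you have been before, while odometry only carries you between recognitions.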

So does the software constantly query the environment around it? Almost as though it thinks “I see a rock in front of me, so I’ll need to plan a route around it”.

Yeah, so if there’s an obstacle, we’ll look at all possible paths around it. We’ll check if they’re feasible, if they’re safe, and take that path if it makes sense given all the information we have around us.
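That “enumerate the options, keep the feasible and safe ones, take the best” loop can be sketched as follows. The path representation, cost function, and safety check are assumptions for illustration only, not Oxbotica’s planner.

```python
# Illustrative sketch of obstacle avoidance as filter-then-select:
# consider all candidate paths, discard infeasible or unsafe ones,
# and take the cheapest remainder. Everything here is a toy assumption.

def plan_around(paths, is_feasible, is_safe, cost):
    """Return the lowest-cost path name that is feasible and safe, else None."""
    candidates = [p for p in paths if is_feasible(p) and is_safe(p)]
    if not candidates:
        return None  # no acceptable path: the vehicle would stop and replan
    return min(candidates, key=cost)

# Toy example: a rock sits at (2, 0) on the straight-ahead route.
obstacle = (2, 0)
paths = {
    "straight": [(0, 0), (1, 0), (2, 0), (3, 0)],  # drives through the rock
    "left":     [(0, 0), (1, 1), (2, 1), (3, 0)],  # small detour
    "wide":     [(0, 0), (1, 2), (2, 2), (3, 0)],  # large detour
}

def is_feasible(name):
    return True  # assume all candidates are drivable in this toy example

def is_safe(name):
    return obstacle not in paths[name]

def cost(name):
    return sum(abs(y) for _, y in paths[name])  # prefer the smallest detour
```

Here `plan_around(paths, is_feasible, is_safe, cost)` rejects the straight path as unsafe and picks the small detour over the wide one on cost, mirroring the decision process described above.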

This all requires processing power – have you specified particular processing hardware, or had it custom built?

A bit of both. Our visual localisation system takes 30W – so there’s not a huge requirement for computing power. We don’t have a server rack in the back of the vehicle – it’s a small computer that does the whole job. We really take pride in how we’ve developed that and made sure it’s efficient both in terms of power use and processing.

What sort of customers are you thinking of when you’re developing these products?

So, because we operate in all these different domains, we’re looking at OEMs – the companies who build vehicles, and tier 1s – who help build those vehicles. We’re also looking at operators, so taxi companies or mobility-as-a-service operators. Even mine operators or those running airports or quarries. We’re interested in how we can help all of them, more at a partner level than a customer level at the moment.

Are there any parallels with Volvo Trucks’ recent agreement to supply five autonomous trucks to Norway’s Bronnoy Kalk mine?

A little bit – we don’t bend metal, we don’t build the vehicles. We can deploy our software onto any vehicle that particular mine wants, or retrofit onto any vehicle they want to change in the future – so we’re not tied to a particular OEM if we wanted to deploy into a mine. But, if a particular OEM wanted to deploy into that mine, absolutely it’d be something similar.

The difference and uniqueness of Oxbotica is that we’re flexible – we’re not trying to own that particular vertical, we’re trying to deploy into these industries so we can accelerate each of these verticals.

What does Oxbotica’s future product roadmap look like?

We want to be in more places, more of the time. Because Caesium and Selenium are flexible and modular, we’re able to have diverse deployment strategies in terms of domains, industries and verticals.

So, you are a UK based company grown out of Oxford University – are there any unique benefits that come with that?

Absolutely, the UK has backed us from a government, policy, regulation and funding point of view. When I say ‘us’ I mean the technical community – absolutely, the government has been amazing. We also really benefit from being next to one of the best universities in the world – we’re able to hand pick homegrown talent in this space.

So is most of your work focused on level 4 autonomy or level 5 autonomy?

Both. It can be a little difficult to separate those two. Up until 12 February this year, the government had mandated that, for level 4, a human needed to be able to take over if required. They have now changed those rules so it’s really interesting to see how we could get to a point where there’s no driver in the driver’s seat, and we’re really excited to get there.

Have you been part of the conversation with the government in terms of how rules and regulations are set out?

Yeah, the government’s been really open. We work with them and they work with us to jointly come up with something that makes sense, rather than doing it in isolation.

Looking at all the exhibitors at Move 2019 today, have you seen anything that’s really impressed you?

What’s impressed me is the energy and excitement around here, and the drive to make this happen. You can see it in all parts of the conference and the companies presenting here. It’s amazing to be a part of.

A lot of attendees have stressed the importance of collaboration in building a cohesive real-world autonomous system. Do you see collaboration as essential?

It’s about the right kind of collaboration. To partner with the right companies that allow us to work together to accelerate that particular industry makes sense, no question about it. But we’re not keen on setting up a consortium or collaboration just for the sake of it.

Can you give us an idea of when we might see a collaboration between Oxbotica and an OEM or tier 1?

I’d love to be able to say who we’re working with but, unfortunately, I can’t at this stage. But, as you know, we’re working with [private hire firm] Addison Lee as an operator and you’ll definitely hear something coming out over the next few months. Let’s put it this way: you’ll see some pretty amazing things coming out this year.