SLAM is invading your local neighbourhood

Nii Yemo-Quarshie
4 min read · Jul 12, 2021

If you live in Milton Keynes or Northampton, you may have noticed one of these brilliant delivery robots moving around. I am aware they operate in many other cities, but I have first-hand experience of seeing them in action in Milton Keynes.

There are many companies in this race; I know Amazon is planning to release its own version of a mobile delivery robot. I have to say I am thoroughly excited about what is happening in this field.

As someone with a keen interest in robotics, I am curious about how these robots work. During my undergraduate degree, I was introduced to the fundamental concepts that drive them. In this article, I will talk about SLAM (Simultaneous Localisation and Mapping), a core feature of such robots, and detail a project I undertook with a mobile robot that used the concept.

SLAM aims to estimate the location of a robot within a given environment whilst also building a map of its surroundings. In an unknown location, how do you know where you are on a map when there is no map yet? That is the classic chicken-or-egg problem at the heart of SLAM.

Armed with a Lego Mindstorms mobile robot from my computer science department, I undertook the challenge of programming a SLAM solution from scratch. I used a Dexter BrickPi, which is a Raspberry Pi attached to a board that allows the Pi to control Mindstorms EV3 components.

Dexter BrickPi

As an additional component, I used a Raspberry Pi camera module V2.

To map my location, I used an occupancy grid: a grid of probabilities, where each cell holds the chance of that location being occupied by an obstacle. Programmatically, I needed to decide on a threshold above which I would identify a location as occupied. I settled for equal to or greater than 0.7.
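As a minimal sketch of the idea (the grid size and names are my own; only the 0.7 threshold comes from the project), the data structure is just an array of probabilities with a cut-off test:

```python
import numpy as np

OCCUPIED_THRESHOLD = 0.7  # the project's cut-off for "occupied"

# A 20x20 grid is an illustrative size; every cell starts at
# 0.5, i.e. maximum uncertainty about whether it is blocked.
grid = np.full((20, 20), 0.5)

def is_occupied(row, col):
    """Treat a cell as an obstacle once its probability reaches the threshold."""
    return grid[row, col] >= OCCUPIED_THRESHOLD
```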

Occupancy grid

How are these probabilities calculated? Well, I use three occupancy grids. One holds the number of times a location has been observed; another holds scores that are incremented when a location is observed as occupied and decremented when it is observed as unoccupied. Finally, there is the output occupancy grid, which the robot makes decisions on.

To calculate the probability of a location: probability = (number of times observed + score from the second grid) / (2 × number of times observed).
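A hedged sketch of how those three grids might fit together (the array names and update function are mine; the formula is the one above). Note that after n observations the score stays within [-n, n], so the probability always lands in [0, 1]:

```python
import numpy as np

SHAPE = (20, 20)
observations = np.zeros(SHAPE)       # grid 1: times each cell was observed
scores = np.zeros(SHAPE)             # grid 2: +1 if seen occupied, -1 if seen free
probabilities = np.full(SHAPE, 0.5)  # grid 3: the output grid the robot acts on

def record_observation(row, col, occupied):
    observations[row, col] += 1
    scores[row, col] += 1 if occupied else -1
    n = observations[row, col]
    # probability = (observed + score) / (2 * observed);
    # all-occupied readings give 1.0, all-free readings give 0.0.
    probabilities[row, col] = (n + scores[row, col]) / (2 * n)
```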

So how did I solve my chicken-and-egg problem? I start off by exploring my environment. To build a map of my student dorm room, my start location is always treated as unoccupied (if the robot is at a location, there is no obstacle there). The robot scans with its infrared sensor to the north, west, south, and east, logs the locations that are free, and then selects a target location (with a preference for going forward), roughly as in the sketch below.
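Here is one way that exploration step could look. The `robot.scan_ir()` interface and the cell size are hypothetical; only the four-direction scan and the forward preference come from the project:

```python
CELL_SIZE_CM = 10  # assumed size of one grid cell; not stated in the article
DIRECTIONS = ["north", "west", "south", "east"]

def choose_target(robot, heading):
    """One exploration step: scan all four directions, keep the free ones,
    then pick a target with a preference for continuing forward."""
    free = [d for d in DIRECTIONS if robot.scan_ir(d) > CELL_SIZE_CM]
    if heading in free:
        return heading                # prefer going forward
    return free[0] if free else None  # otherwise any free direction, or stop
```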

While exploring, in order to localise, we know the robot can only be at locations it has visited, not merely observed. I log the locations the robot has physically moved through. Every visited location gets an object with an infrared reading and at least one image of the view from that location; whenever the robot stopped at a location, it would take readings for all four directions. Localising then depends on the values for the direction the robot is currently facing (north, east, south, or west).
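A plausible record structure for those visits, reconstructed from the description above (the class and function names are mine):

```python
from dataclasses import dataclass

@dataclass
class DirectionReading:
    ir_distance: float  # IR value recorded while facing this direction
    image_path: str     # photo of the view from this cell in this direction

# Only cells the robot has physically driven through are logged here, which
# is what restricts localisation candidates to visited cells.
# Layout: {(x, y): {"north": DirectionReading(...), ...}}
visited = {}

def log_visit(cell, heading, ir_distance, image_path):
    visited.setdefault(cell, {})[heading] = DirectionReading(ir_distance, image_path)
```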

I must clarify that my sensor data for localisation comes from the IR sensor and the camera. The IR sensor gives me a distance value, which should be more or less the same whenever I visit that location (assuming a static world). With the camera image, I use edge detection and a hit-or-miss algorithm to determine how alike two images are, producing a score which I use as a sensor reading.
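The dissertation's exact scoring isn't given here, but a simple version might compare Canny edge maps pixel by pixel, counting shared edges as hits and unmatched edges as misses. The thresholds and the ratio below are my guesses:

```python
import cv2
import numpy as np

def image_similarity(img_a, img_b):
    """Score how alike two views are via their edges; returns a value in [0, 1]."""
    edges_a = cv2.Canny(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), 100, 200)
    edges_b = cv2.Canny(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), 100, 200)
    hits = np.count_nonzero((edges_a > 0) & (edges_b > 0))    # edge in both images
    misses = np.count_nonzero((edges_a > 0) ^ (edges_b > 0))  # edge in only one
    return hits / (hits + misses) if (hits + misses) else 0.0
```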

In a scenario where the robot wants to find out where it is, it takes an IR reading and a picture. The probability of every visited location with a matching IR score (within ±5) is increased. The hit-or-miss algorithm is then run against the pictures stored for those locations, and their probabilities are increased according to their scores. A historic record of previous readings is kept for all locations. After all that processing, one belief will be higher than the rest, and that is the location the robot will think it is at.
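Putting the pieces together, one localisation step over the visited cells could look like this. The ±5 IR window is from the project; the exact weighting factors are my own simple choice, and `image_similarity()` is reused from the earlier sketch:

```python
import cv2

IR_TOLERANCE = 5  # the +/-5 window on IR readings

def localise(belief, current_ir, current_image, visited, heading):
    """Update the belief over visited cells and return the most likely one."""
    for cell, readings in visited.items():
        if heading not in readings:
            continue
        stored = readings[heading]
        if abs(stored.ir_distance - current_ir) <= IR_TOLERANCE:
            belief[cell] *= 2.0  # an IR match boosts this cell's belief
        stored_image = cv2.imread(stored.image_path)
        belief[cell] *= 1.0 + image_similarity(current_image, stored_image)
    total = sum(belief.values())
    if total:
        for cell in belief:
            belief[cell] /= total  # renormalise to a probability distribution
    return max(belief, key=belief.get)  # the cell the robot believes it is at
```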

I must mention that I rely on the Markov assumption, which here amounts to assuming a static world: I assume I would get the same sensory readings from a particular location every time I arrive there.

It goes without saying that commercial products are more complex. They may implement localisation with ROS or SLAMcore, or they may have their own in-house localisation software. The point of this article is to show how some of the things taught to students are implemented in the real world. I have only gone through the concepts at a very basic level here (I had to write a whole dissertation on them).

I hope this article has given you an idea of how these brilliant robots work.
