Improving LIDAR: Calibrate and Scan

LIDAR, or Light Detection and Ranging, is perhaps the predominant form of machine vision system used by larger robots. It is essentially the optical equivalent of radar: the LIDAR emitter sweeps a laser beam (typically at a wavelength outside the range visible to the human eye) across the space in front of it in a wide arc. The laser reflects off nearby objects, and the system reads the returning scattered light to determine the location of everything nearby. As with radar, the range to an object is determined by measuring the time delay between transmission of a pulse and detection of the reflected signal.
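
As a rough illustration of that timing relationship (a minimal sketch; the constant and function names here are illustrative, not taken from any particular system), in Python:

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def range_from_pulse(round_trip_s):
        """Estimate the distance to a target from a pulse's round-trip delay."""
        # The pulse travels out to the target and back, so halve the round trip.
        return SPEED_OF_LIGHT * round_trip_s / 2.0

    # A reflection arriving 66.7 nanoseconds after emission is roughly 10 m away.
    print(range_from_pulse(66.7e-9))  # ~10.0

The nanosecond scale of these delays is why LIDAR units need very fast timing electronics.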

The result is somewhat grainy, but workable; the robot is able to build up a rough approximation of the area it is in and visualise it internally as a 3D map. It can be thought of as analogous to the way the human eye works, except that the 'eye' is providing its own light source as well.
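
Each sweep reading is an angle plus a range, so building that internal map amounts to converting polar measurements into Cartesian points. A minimal sketch, with assumed parameter names rather than any real LIDAR API:

    import math

    def sweep_to_points(readings, elevation_rad=0.0):
        """Convert one sweep of (azimuth, range) readings into 3D points.

        readings: (azimuth_rad, range_m) pairs from a single horizontal sweep.
        elevation_rad: the vertical angle of this sweep plane.
        """
        points = []
        for azimuth, dist in readings:
            x = dist * math.cos(elevation_rad) * math.cos(azimuth)
            y = dist * math.cos(elevation_rad) * math.sin(azimuth)
            z = dist * math.sin(elevation_rad)
            points.append((x, y, z))
        return points

Stacking many such sweep planes at different elevations gives the grainy 3D approximation described above.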

A new technology currently being trialled by the EU seeks to substantially improve LIDAR's accuracy. One of the major problems with the system is the accurate detection of moving objects. When the beam sweeps out and hits a human, for example, it encounters an obstacle and marks that on its map. However, when it scans again, the human has moved; the original obstacle is no longer there, and a new one is in its place. Thus, the robot is forced to redraw the whole scene on every scan. It does not know what is fixed and what is moving, so each update throws away all the data gathered in the last.
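
In sketch form, the naive update cycle looks something like this (purely illustrative; the sweep() method is a hypothetical stand-in for whatever interface the sensor exposes):

    def naive_rescan(lidar):
        """Plain LIDAR mapping: every sweep rebuilds the map from scratch."""
        world_map = set()
        while True:
            world_map.clear()               # everything learned so far is discarded
            for point in lidar.sweep():     # hypothetical sweep() yielding 3D points
                world_map.add(point)        # fixed walls and passing humans look alike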

Rather than leaving LIDAR as a brute-force approach to machine vision and then applying a more intelligent sorting algorithm on top of it, it makes sense, wherever possible, to allow LIDAR itself to differentiate between what is moving and what is not.

Enter 3D LIMS, or 3D LIDAR Imaging and Measurement System. The idea is simple, yet profound. Before any humans are allowed into a given area, a robot wanders through it. Nothing movable is permitted, save the robot. It scans the walls, ceiling, floor and steps: fixed items that are not likely to move. This information is then stored as a basic template for all robots using that facility; a map, if you will, of where things are expected to be.
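
One simple way such a template could be stored is as a set of occupied grid cells, quantised from the calibration scan. This is a sketch under assumed names, not the actual 3D LIMS data structure; a real system would likely use something more compact:

    def build_baseline(calibration_points, cell_size=0.1):
        """Quantise a scan of the empty facility into occupied grid cells.

        calibration_points: 3D points gathered while nothing movable is present.
        Returns a set of (i, j, k) cell indices marking fixed structure.
        """
        baseline = set()
        for x, y, z in calibration_points:
            cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
            baseline.add(cell)
        return baseline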

Armed with this map, and a system of position detection relative to the structure (GPS or another such system), a robot's LIDAR sweeps are pre-calibrated. If it scans a table in front of a wall section, for example, it knows the object in front of the wall is not on the template and may be moved at any time, while the wall behind it is a fixed object. It need not continue to scan the wall on each sweep, but the table bears watching. The system thus narrows down what it does and does not need to sweep. Likewise, if there is an empty space on the pre-LIDAR template, any robot moving into it knows that any obstacles it encounters may move at any time.
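
At runtime, each sweep point can then be checked against the stored template. Another minimal sketch, assuming the same cell quantisation as the hypothetical build_baseline above:

    def classify_sweep(points, baseline, cell_size=0.1):
        """Split a live sweep into fixed structure and objects that bear watching."""
        fixed, movable = [], []
        for x, y, z in points:
            cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
            if cell in baseline:
                fixed.append((x, y, z))      # matches the template: safe to skip
            else:
                movable.append((x, y, z))    # not on the map: may move at any time
        return fixed, movable

Only the movable list needs re-scanning on every sweep, which is the saving the system is after.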

A side-effect is that if, for whatever reason, a hole is blown in a wall or other fixed feature, the 3D LIMS system will read that hole as a temporary feature and watch it for changes, just as it would the table from earlier, because it does not match the supplied data. This has enormous potential benefits as well: should a pipe burst in a factory complex with a patrolling 3D LIMS robot, the moving water will set up a series of obstacles that should not be there, potentially flagging an alert.
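
Detecting the hole itself is the reverse comparison: a return arriving from behind a cell the template marks as fixed means the beam passed through where a wall should be. One way to sketch this, by sampling along each beam against the same assumed baseline structure (the source does not describe the actual detection method):

    import math

    def missing_fixed_cells(origin, points, baseline, cell_size=0.1, step=0.05):
        """Flag template cells that a live beam has passed straight through.

        origin: sensor position (x, y, z); points: this sweep's returns.
        """
        flagged = set()
        ox, oy, oz = origin
        for px, py, pz in points:
            dx, dy, dz = px - ox, py - oy, pz - oz
            dist = math.sqrt(dx * dx + dy * dy + dz * dz)
            # Sample along the beam, stopping short of the return itself.
            for n in range(1, int(dist / step)):
                t = n * step / dist
                x, y, z = ox + t * dx, oy + t * dy, oz + t * dz
                cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
                if cell in baseline:
                    flagged.add(cell)  # beam penetrated expected fixed structure
        return flagged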

An additional advantage is that, unlike camera-based robotic vision systems, it is not affected by shadows, rain or fog, and it provides angular and distance information for each pixel, making it suitable for use in virtually any environment.

References

Light-based localisation for robotic systems

IRPS project
