Scan Matching Experiment

 

Introduction

 

What is scan matching?

 

    Scan matching is the process of registering two sets of 2D or 3D point cloud data in a common reference frame. Given a point cloud representation of a 3D surface or a 2D plane, and another representation of the same surface or plane that is translated and rotated relative to the first, scan matching seeks to find the transformation between the two point cloud representations. The transformation (a rotation and a translation) is obtained by minimizing an energy function. See the figure below for an example of 2D point cloud data.
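As a concrete instance, one common energy, used by point-to-point variants of the Iterative Closest Point (ICP) algorithm (the exact energy minimized by a particular scan matcher may differ), is the sum of squared distances between corresponding points p_i of one cloud and q_i of the other:

    E(R, t) = \sum_{i=1}^{N} \lVert R p_i + t - q_i \rVert^2

The rotation R and translation t that minimize E together constitute the sought transformation.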

 

   

 

[Figure: the real map (left) and the corresponding 2D point cloud (right)]

  The figure on the left shows the real map, with the robot (in black) and the obstacles marked; the figure on the right shows the 2D point cloud, also in black. The point cloud is a set of (r, θ) readings plotted in an XY plane as (r cos θ, r sin θ), where r is the range to an obstacle along the direction θ with respect to the robot's orientation.
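As an illustrative sketch of that conversion (the function and its names below are invented for this example), each (r, θ) reading maps to a Cartesian point as follows:

    import numpy as np

    def scan_to_points(ranges, bearings):
        # ranges:   distances r to the obstacle along each ray
        # bearings: ray angles theta relative to the robot's heading
        ranges = np.asarray(ranges, dtype=float)
        bearings = np.asarray(bearings, dtype=float)
        # (r, theta) -> (r cos theta, r sin theta), one XY row per ray
        return np.column_stack((ranges * np.cos(bearings),
                                ranges * np.sin(bearings)))

    # Example: a 180-degree scan at 1-degree resolution, every range 2 m.
    bearings = np.radians(np.arange(-90, 91))
    points = scan_to_points(np.full(bearings.shape, 2.0), bearings)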

 

    In general, the point cloud representations of the same surface or planar object appear translated and rotated with respect to each other because they are expressed in two different reference frames, corresponding to two different viewpoints. This is shown again in the figure below.

[Figure: point clouds of the same object expressed in two different reference frames]

They may also appear translated or rotated due to an error, or noise, in the origin of the reference frame. This case is the more relevant one in a robotic setting. For example, let the predicted reference frame of the robot be P (in red) and the actual reference frame be A (in black), as shown in the figure on the left below. A laser range finder oriented along the actual frame samples the obstacles in front of it; the rays emanating from the range finder (shown dashed) sample the obstacle at the points where they meet it.

If frame A is known with respect to a global frame G, then these ray-obstacle meeting points, represented in G, appear as the black points in the figure on the right below. (Likewise, rays sent from P and represented in G would coincide with the red points.) However, when frame A is unknown and only P is known, the rays actually sent from A by the range finder are assumed to have been sent from P; represented in G, the sampled points then appear shifted from the true ones, as shown in the figure on the right below. Scan matching can then find the transformation from P to A, thereby correcting the error in the robot's predicted pose. This error arises precisely from errors in the robot's motion.

[Figure: predicted frame P and actual frame A with laser rays (left); the sampled points represented in the global frame G (right)]
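The core computation behind such a correction can be sketched as follows. Assuming point-to-point correspondences between the two scans are already known (a real scan matcher such as ICP alternates between guessing correspondences, e.g. by nearest neighbours, and solving this step), the least-squares energy above has a closed-form minimizer via the singular value decomposition (the Kabsch/Umeyama method). The 2D version below is illustrative only:

    import numpy as np

    def best_rigid_transform(P, Q):
        # Least-squares (R, t) minimizing sum_i ||R p_i + t - q_i||^2,
        # where P and Q are (N, 2) arrays of corresponding 2D points.
        p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_bar).T @ (Q - q_bar)        # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection (det = -1) masquerading as a rotation.
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = q_bar - R @ p_bar
        return R, t

    # Example: recover a known rotation of 10 degrees and translation (0.5, -0.2).
    theta = np.radians(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    P = np.random.default_rng(0).uniform(-2.0, 2.0, size=(50, 2))
    Q = P @ R_true.T + np.array([0.5, -0.2])
    R, t = best_rigid_transform(P, Q)          # R ~ R_true, t ~ (0.5, -0.2)

With noisy correspondences the recovered (R, t) is the least-squares best fit rather than an exact match.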

 

What are the causes of the error that leads the robot to identify its pose with P rather than with A?

 

The reasons are quite a few, some of which are enumerated below:

 

a) The left and right wheels do not move with exactly the same velocity while the robot moves along a straight line, leading to a drift in the robot's motion even with the best PID controller on board.

 

b) The left and right wheel diameters are not precisely equal, which is one of the causes of a).

 

c) The problem is accentuated when the robot rotates, for the robot is never able to rotate precisely by the commanded value. For example, when asked to rotate 10 degrees it may turn by 9 or 11 degrees. An error in rotation followed by a translation takes the robot to wrong locations rather than to the desired/predicted poses, as the sketch below illustrates.
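To get a feel for how quickly such errors compound, the following dead-reckoning sketch (the noise magnitudes are hypothetical, chosen only for illustration) commands a robot around a 1 m square and compares the pose it predicts with the pose it actually reaches:

    import numpy as np

    rng = np.random.default_rng(0)

    # Commanded motion: four sides of a 1 m square (drive 1 m, turn 90 degrees).
    commands = [(1.0, np.pi / 2)] * 4

    pred = np.zeros(3)   # predicted pose (x, y, theta): assumes perfect execution
    act = np.zeros(3)    # actual pose: every command is corrupted by noise

    for dist, turn in commands:
        # Predicted: the robot believes it executed the command exactly.
        pred[:2] += dist * np.array([np.cos(pred[2]), np.sin(pred[2])])
        pred[2] += turn

        # Actual: ~2% distance error and ~1 degree heading error per step
        # (illustrative values, not measured on any particular robot).
        d = dist * (1.0 + rng.normal(0.0, 0.02))
        act[:2] += d * np.array([np.cos(act[2]), np.sin(act[2])])
        act[2] += turn + rng.normal(0.0, np.radians(1.0))

    print("predicted pose:", pred)
    print("actual pose:   ", act)
    print("position error: %.3f m" % np.linalg.norm(pred[:2] - act[:2]))

The gap between the two poses is exactly what scan matching corrects, by registering the current scan against a previous scan or against a map.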

 
