LESSON

Summary of inertial sensors and navigation

Transcript

Let's summarize some of the points that we've talked about in this lecture.

Today we take for granted that we can use GPS to determine our position anywhere on the surface of the planet, but it is a relatively recent technology. It only became operational in 1994, and only became available at reasonable accuracy for the general public in the year 2000.

Before the advent of GPS, navigating precisely on the surface of the planet or in outer space was very difficult. The technology that was used is called inertial navigation, and it involves measuring the acceleration and angular velocity of a vehicle and integrating them over time to determine the position.

The key components of an Inertial Navigation System are an inertial measurement unit, which contains the sensors that measure acceleration, magnetic field and angular velocity, and what's called dead-reckoning software, which takes those signals and integrates them over time to continually update the pose of the vehicle.
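The dead-reckoning step can be sketched in a few lines of code. This is a minimal 1-D illustration (Euler integration of a single acceleration signal), not the full six-degree-of-freedom pipeline a real INS uses; the function name and sample data here are made up for illustration.

```python
def dead_reckon(accels, dt, v0=0.0, p0=0.0):
    """Naive 1-D dead reckoning: integrate an acceleration signal twice.

    accels : sequence of acceleration samples (m/s^2)
    dt     : sample period (s)
    Returns the final (velocity, position).
    """
    v, p = v0, p0
    for a in accels:
        v += a * dt    # first integral: velocity
        p += v * dt    # second integral: position
    return v, p

# constant 1 m/s^2 for 10 s of samples at 100 Hz
v, p = dead_reckon([1.0] * 1000, 0.01)
```

Because each step compounds the last, any sensor bias also integrates over time, which is why pure dead reckoning drifts and why an INS needs very accurate sensors.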

At a simplistic level we can consider that an accelerometer is a mass on a spring inside a little box. If we do the algebra we can show that the extension of the spring is linearly proportional to the sum of the device's acceleration and the gravitational acceleration. An accelerometer at rest on earth indicates an acceleration of 1g in the vertical, upward direction. This is a little bit counter-intuitive, because when we drop something it accelerates in the downward direction, yet the accelerometer at rest measures 1g upwards. This is a consequence of the equivalence principle: an accelerometer at rest on the surface of the earth and an accelerometer mounted in a rocket accelerating upwards at 1g both indicate an acceleration of 1g.
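In one dimension this reading can be modelled as specific force: the true acceleration minus the (downward) gravity vector. A minimal sketch, with up taken as positive and the function name ours for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def accelerometer_reading(true_accel, g=G):
    """Ideal 1-D accelerometer, up positive: it measures specific force,
    i.e. true acceleration minus the (downward, so -g) gravity vector."""
    return true_accel - (-g)   # equivalently true_accel + g

assert accelerometer_reading(0.0) == G     # at rest: 1 g, upward
assert accelerometer_reading(-G) == 0.0    # in free fall: reads zero
```

The free-fall case is the flip side of the same counter-intuitive fact: a falling accelerometer reads zero, while one sitting still reads 1g upwards.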

Many devices, for example my phone, contain three accelerometers which measure acceleration in the x, y and z directions of a coordinate frame B attached to my phone. I can use the components of acceleration measured in the phone in coordinate frame B to determine the roll and pitch of the phone with respect to the blue coordinate frame 0.
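A common way to recover roll and pitch from those body-frame components, assuming the phone is otherwise stationary so the accelerometers sense only gravity (the angle convention here, ZYX Euler angles with the z-axis out of the screen, is an assumption, not necessarily the one used in the lecture):

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Roll and pitch (radians) from body-frame accelerometer components.
    Valid only when the device is not accelerating, so the sensed
    specific force is gravity alone."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# level device: gravity reads +1 g along z, so roll = pitch = 0
roll, pitch = roll_pitch_from_accel(0.0, 0.0, 9.81)
```

Note that yaw cannot be recovered this way: rotating the phone about the vertical axis does not change how gravity projects onto its axes, which is why the magnetometer is needed next.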

The earth behaves like a gigantic but fairly weak magnet, and the poles of that magnet are not constant; they change with time. A compass has a needle which points towards the north magnetic pole, but the north magnetic pole is different to the north geographic pole.

At any particular point on the surface of the earth we can establish a right-handed coordinate frame where one axis points towards the north geographic pole, another axis points vertically downwards towards the centre of the planet, and the other axis points in the geographic east direction. The earth's magnetic field vector is shown here by a red arrow, and it can be described with respect to this coordinate frame by two angles. Magnetic declination is the difference between two directions: the direction of true north and the direction of magnetic north, the direction that the compass points in. The other angle is magnetic inclination, and that's the angle that the magnetic field vector makes with respect to a local horizontal surface. It's also referred to as magnetic dip.

In order to determine the yaw angle - that’s the direction that the body is facing with respect to magnetic north - we need to combine information from two sensors. We determine the roll and pitch angle of the body using the accelerometers, and then we use the components of the magnetic field sensed onboard the body, combine that with what we know about the local magnetic inclination angle, and the local magnetic field strength to determine the yaw angle.
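A sketch of that combination along the lines of the standard tilt-compensated compass (this is the common NED textbook formulation, not necessarily the exact algorithm from the lecture; roll and pitch come from the accelerometer as described above):

```python
import math

def yaw_from_magnetometer(mx, my, mz, roll, pitch):
    """Tilt-compensated heading (radians, relative to magnetic north).
    Rotates the body-frame magnetic field components into the local
    horizontal plane, then takes the heading from the horizontal part.
    NED axes and ZYX Euler angles are assumed."""
    sr, cr = math.sin(roll), math.cos(roll)
    sp, cp = math.sin(pitch), math.cos(pitch)
    xh = mx * cp + my * sr * sp + mz * cr * sp   # horizontal north component
    yh = my * cr - mz * sr                       # horizontal east component
    return math.atan2(-yh, xh)

# level device, field lying in the north-down plane: heading is zero
psi = yaw_from_magnetometer(0.2, 0.0, 0.4, roll=0.0, pitch=0.0)
```

The vertical (dip) component of the field carries no heading information, which is why the measured components must first be de-rotated into the horizontal plane before the heading is computed.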

A spinning disc has some interesting dynamic properties. One of these is that we can use it to convert an angular velocity - omega - about the blue axis, into a torque - tau - about the red axis.

We can measure that torque by measuring the forces that are exerted on the axle that supports the rotating disc. If we know parameters such as the rotational inertia of the disc and the rotational velocity of the disc, then we can determine omega, the angular velocity being measured by this gyroscopic sensor.
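The relationship can be checked numerically: the gyroscopic reaction torque is omega cross L, where L is the disc's angular momentum, the rotational inertia J times the spin rate, along the spin axis. The axis assignments below are ours for illustration:

```python
import numpy as np

def gyro_torque(J, spin_rate, omega):
    """Gyroscopic reaction torque tau = omega x L, where L = J * spin_rate
    is the disc's angular momentum, taken here along the body x-axis."""
    L = np.array([J * spin_rate, 0.0, 0.0])   # angular momentum vector
    return np.cross(omega, L)

# disc spinning about x; rotating the assembly about z produces a torque
# about y, with magnitude J * spin_rate * |omega|
tau = gyro_torque(J=0.01, spin_rate=500.0, omega=np.array([0.0, 0.0, 2.0]))
```

Since J and the spin rate are known constants of the instrument, dividing the measured torque by J times the spin rate recovers the applied angular velocity.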

Interestingly there are biological equivalents of accelerometers and gyroscopes in our bodies. In each of our ears we have two accelerometers and three gyroscopes which measure the motion of our body, and that's integrated with visual information in our brain to understand how we are moving with respect to the world.

Finally we looked at how to compute the derivative of a rotation matrix. It's the product of two matrices: a skew-symmetric matrix multiplied by the original rotation matrix. The skew-symmetric matrix is computed from the angular velocity of the body; that's the vector whose direction is the axis about which the body is instantaneously rotating, and whose magnitude is the rate of rotation about that axis.

The skew-symmetric matrix has a number of interesting properties. In particular it's singular; that is, its determinant is always equal to 0, and its transpose is the negative of itself. In the case of three dimensions, as shown here, we can clearly see a zero diagonal, and the fact that the matrix contains only three unique elements, x, y and z, each of which appears once with a positive sign and once with a negative sign.
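These properties are easy to verify numerically. A minimal sketch of the skew-symmetric (cross-product) matrix, with checks of the zero determinant and negative-transpose properties; the derivative relationship from above is then skew(omega) multiplied by the rotation matrix:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix [w]x, with skew(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

S = skew([1.0, 2.0, 3.0])
assert abs(np.linalg.det(S)) < 1e-12   # singular: determinant is zero
assert np.allclose(S.T, -S)            # transpose is the negative of itself
assert np.allclose(S @ [4.0, 5.0, 6.0],
                   np.cross([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```

The zero diagonal and the three unique off-diagonal elements, each appearing once positive and once negative, are visible directly in the matrix layout.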

We recap the important points from this masterclass.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of undergraduate-level mathematics; for example, linear algebra - matrices, vectors, complex numbers - vector calculus and MATLAB programming.
