Let's have a look at determining the inverse kinematics of our simple two link robot using a numerical approach.
We're now going to load into the workspace a model of our two link robot. The new robot is held in the workspace variable p2, and we can plot it in the configuration with zero joint angles. The script that creates the robot also defines a workspace variable qz, which contains the value (0, 0), and that's what the robot looks like in the zero angle configuration.
Now I'm going to define a desired end effector position for this robot and I'm going to define simply the translation and it's going to be one in the X direction, one in the Y direction and zero in the Z direction.
Remember that this robot has only got two joints and it can only move in the XY plane, so it doesn't make sense for me to put in a value for the Z co-ordinate that's anything but zero; the robot is incapable of achieving any position where Z is not equal to zero. And here is the homogeneous transformation representing the desired position.
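In the Toolbox this pose is built with a call like transl(1, 1, 0). As a rough sketch of what that object contains, here is a hypothetical numpy equivalent; the function name mirrors the Toolbox, but this is only an illustration, not the Toolbox itself:

```python
import numpy as np

# Hypothetical stand-in for the Toolbox's transl(1, 1, 0): a 4x4
# homogeneous transformation with identity rotation and the given translation.
def transl(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T = transl(1, 1, 0)
# The top-left 3x3 block is the (identity) rotation; the last column
# holds the translation (1, 1, 0) plus the homogeneous 1.
```

The rotational part is the identity because we specified only a translation.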
Now, in order to determine the inverse kinematics numerically, I take our robot object and use the method ikine, which stands for inverse kinematics. This particular method computes a numerical solution. The first argument I pass in is the desired end effector pose T that we've just defined, and then I pass in the initial joint angles from which the search for the joint angles that achieve that pose will start.
The last argument is a little bit more complex. It's what I call a "mask matrix": it's a six-element matrix whose first two elements are ones and whose last four elements are zeros.
What this is telling the ikine function is: only try to match the end effector X and Y position; don't bother trying to match the Z co-ordinate, and don't bother trying to match the orientation. Because this robot has only got two degrees of freedom, it can only achieve two constraints on its end effector pose, and I've chosen those constraints to be its X position and its Y position. I don't care about any other elements of the end effector pose. I hit Enter and the solution has come back: pi/2 and minus pi/2. Well, let's plot that, plot q, and there we see the desired configuration of the robot to achieve the particular end effector position that I asked for.
Let's just have a look at that robot from above and we can see very clearly, the end effector has moved to the position one in the X direction and one in the Y direction.
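To make the idea concrete, here is a minimal numerical inverse kinematics sketch in Python/numpy for a planar two link arm with assumed unit link lengths. This is not the Toolbox's ikine; it is a damped least-squares iteration that, like the mask [1 1 0 0 0 0], considers only the X and Y components of the error:

```python
import numpy as np

L1 = L2 = 1.0  # assumed unit link lengths for the two link arm

def fkine(q):
    """End effector (x, y) position of a planar two link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """2x2 position Jacobian of the same arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ikine(target_xy, q0, max_iter=500, tol=1e-12):
    # Only the x and y rows of the pose error appear here, which has the
    # same effect as the mask [1 1 0 0 0 0] in the Toolbox call.
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        e = np.asarray(target_xy) - fkine(q)
        if e @ e < tol:
            break
        J = jacobian(q)
        # Damped least-squares step: the damping term keeps the update
        # finite even when the starting configuration is singular
        # (the arm stretched out at q = [0, 0] is exactly such a case).
        q += np.linalg.solve(J.T @ J + 1e-2 * np.eye(2), J.T @ e)
    return q

q = ikine([1, 1], [0, 0])   # same target and initial guess as in the lesson
```

From this starting guess the iteration settles on one of the two valid joint solutions; which one you get depends on the seed, as the lesson goes on to show.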
Now let's see if we can find another configuration of the robot that has the same end effector pose. What we're going to do is go back to our inverse kinematics and give a different initial condition. I'm going to start the search with q1 equal to minus one radian instead of zero radians, and run the solver. Once again, we come back to exactly the same solution that I had before.
Let's try a different initial condition: let's try setting this one to minus one, and hey, now we have a different solution. Let's plot what that solution looks like. We can see that once again it has achieved the desired end effector position, one in the X direction and one in the Y direction, but the configuration of the robot is now different. You can see that determining the initial joint angles that would lead to this second configuration was not straightforward; it required a little bit of trial and error to guide the algorithm to this alternative solution.
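For this planar arm, the two configurations the solver can find by trial-and-error seeding are exactly the two analytic solutions, elbow-up and elbow-down. A small sketch, again assuming unit link lengths:

```python
import numpy as np

# The two analytic solutions of a planar two link arm (unit link lengths
# assumed): for any reachable point there is an elbow-up and an
# elbow-down configuration.
x, y = 1.0, 1.0
c2 = (x**2 + y**2 - 2.0) / 2.0          # law of cosines gives cos(q2)
solutions = []
for q2 in (np.arccos(c2), -np.arccos(c2)):   # the two elbow choices
    q1 = np.arctan2(y, x) - np.arctan2(np.sin(q2), 1.0 + np.cos(q2))
    solutions.append((q1, q2))

# Both configurations place the end effector at (1, 1).
```

The numerical solver will land on one or the other of these depending on which basin of attraction the initial guess falls into, which is why finding the second one took some trial and error.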
Now let's look at performing numerical inverse kinematics for this six degree of freedom robot, which moves in three dimensions.
The first thing we're going to do is load a model of the Puma robot; we now have a variable in our workspace, and I'm going to plot the robot in its zero angle configuration using the workspace variable qz, which was defined by the mdl_puma560 script. And here we see the robot in its zero angle pose. The next thing I'm going to do is define the pose that I want the end effector to adopt. It's going to have a translational component of 0.6 in the X direction, 0.2 in the Y direction and zero in the Z direction, and I'm going to change the orientation of the end effector: it's got a roll angle of zero, a pitch angle of 180 degrees and a yaw angle of 180 degrees, with degrees specified. And this is the homogeneous transformation representing the pose I'd like the robot to adopt.
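As a sketch of what that transformation contains, here is a numpy construction assuming the common ZYX convention R = Rz(yaw) Ry(pitch) Rx(roll); note that the Toolbox's rpy2tr default angle order has differed between versions, so this is an illustrative assumption:

```python
import numpy as np

# Elementary rotations about the x, y and z axes.
def rotx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def roty(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rotz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, yaw = np.deg2rad([0.0, 180.0, 180.0])
R = rotz(yaw) @ roty(pitch) @ rotx(roll)   # one common RPY convention

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [0.6, 0.2, 0.0]   # the translational component from the lesson
```

With this pitch and yaw the rotation works out to diag(1, -1, -1): the end effector's Z axis points along minus Z, that is, the tool points down towards the table.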
I'm going to enable plot hold and now I'm going to plot that particular pose, so we can see where the robot is right now and the pose that I would like it to adopt.
Now I'm going to compute the inverse kinematics, and this time I'm going to do it numerically. I use the method ikine on the p560 object: I pass in the pose I'd like the robot to adopt, and I pass in the initial set of joint angles for the numerical algorithm, which I'm just going to make all zero. It's not particularly imaginative, but we'll see what happens. Okay, the algorithm has completed and it's computed a set of joint angles. If I now plot the robot in that configuration, we can see that the position of the robot's end effector is at the origin of the blue co-ordinate frame that I drew, and that the X, Y and Z axes of the robot's end effector are parallel to the X, Y and Z axes of the co-ordinate frame that I desire.
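Under the hood, methods like ikine iterate a Jacobian-based update until the pose error becomes negligible. One common scheme is a damped least-squares step (the Toolbox's exact implementation may differ):

```latex
q_{k+1} = q_k + J(q_k)^{\top}\bigl(J(q_k)\,J(q_k)^{\top} + \lambda I\bigr)^{-1} e_k
```

where $e_k$ is the error between the current and the desired end effector pose, $J$ is the manipulator Jacobian (the subject of the coming lectures), and $\lambda$ is a small damping constant that keeps the step finite near singular configurations.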
The robot's configuration looks a little bit awkward; that's because, with the numerical technique, I cannot explicitly control the configuration that the robot adopts. I would need to try some different initial conditions in order to perhaps get a configuration where the robot's elbow is up. You'll notice an error message here in red; that's nothing to be worried about. It's because the initial set of joint angles that I chose, all zeros, results in a robot configuration that is singular, and singular configurations are something that we're going to discuss, along with Jacobians, in the next two lectures.
Solving the inverse kinematics numerically looks like a pretty easy thing to do, are there any drawbacks? Well there are a few.
First of all, it's important to choose a good initial set of joint co-ordinates. If they are too far away from the joint co-ordinates actually required to achieve the pose, then the numerical solution may not converge. Another problem is that we can't guarantee a particular robot configuration. The algorithm starts with the initial joint angles you give it and performs the optimization until the desired end effector pose is matched; the particular configuration it ends up with depends on that initial choice. Importantly, you cannot control the robot configuration in the way that you can with an analytic solution.
And finally the numerical solution can be computationally expensive. It's an iterative algorithm and the number of iterations depends in some way on how good your initial joint co-ordinates are. So if you wanted this to run in real time, to be able to compute inverse kinematics say in a thousandth of a second, then a numerical algorithm may not be the most appropriate way to go.
Let’s look at numerical approaches to inverse kinematics for a couple of different robots and learn some of the important considerations.
For RTB10.x please note that the mask value must be explicitly preceded by the ‘mask’ keyword. For example:
>> q = p2.ikine(T, [-1 -1], 'mask', [1 1 0 0 0 0])
This content assumes an understanding of undergraduate-level mathematics; for example, linear algebra (matrices, vectors, complex numbers), vector calculus, and MATLAB programming.