Here we have an image of a redundant robot, a concept that we introduced in the previous lecture. This is a robot with a very large number of degrees of freedom, or joints. We also introduced the concepts of task space and configuration space. Now, the task space of this robot is a subset of SE(3), exactly the same task space as the Puma robot has; that is, within a working volume it can move to a particular point and achieve a particular end-effector orientation. However, the configuration space of this robot is much, much bigger. If we assume that this robot has a hundred joints, then its configuration space is R to the 100, whereas the Puma robot, with only 6 joints, has a configuration space of R to the 6. The task space has a dimension of 6; the configuration space has a dimension of 100.
So we could consider that six of those degrees of freedom are used to control the position and orientation of the end effector. The other 94 degrees of freedom are used to define an arbitrary shape for the robot arm, so that we can independently control the position and orientation of the end effector and the path that the arm follows in order to get from the base of the robot to the end effector. When it comes to determining the inverse kinematics, then, we have a bit of a problem: there are in fact an infinite number of sets of joint angles, an infinite number of robot configurations, that lead to exactly the same end-effector pose. In the case of the Puma robot there were, in general, eight configurations that gave the same end-effector pose; for this robot there are an infinite number. To solve the inverse kinematics for a redundant robot, then, we have to rely on a numerical solution.
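As a minimal sketch of what such a numerical solution looks like (this is not the Toolbox's actual `ikine` algorithm, and it uses a made-up planar 10-joint arm with a 2-dimensional task space rather than the robots above), we can iterate a Jacobian-pseudoinverse update until the task-space error vanishes:

```matlab
% Iterative numerical IK for a redundant planar arm (illustrative only).
n = 10;                 % 10 revolute joints; task space is just (x, y)
L = 0.1;                % each link 0.1 m long
fk = @(q) [L*sum(cos(cumsum(q))); L*sum(sin(cumsum(q)))];  % forward kinematics
xd = [0.5; 0.3];        % desired end-effector position
q  = zeros(n, 1);       % initial joint angles

for k = 1:500
    th = cumsum(q);     % absolute link angles
    % 2-by-n Jacobian of the planar arm: dx/dq_i = -L*sum_{j>=i} sin(th_j), etc.
    J = [-L*cumsum(sin(th), 'reverse')'; L*cumsum(cos(th), 'reverse')'];
    e = xd - fk(q);     % task-space error
    if norm(e) < 1e-8, break; end
    % pinv picks the minimum-norm joint update out of the infinite
    % possibilities -- this is where the redundancy is resolved
    q = q + 0.5 * pinv(J) * e;   % damped step for stability
end
```

The pseudoinverse step is one common way to resolve the redundancy: of all the joint-angle updates that reduce the error, it chooses the smallest one.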
We're going to do an example now with a redundant robot, using a Toolbox script called `mdl_hyper3d`. This creates a hyper-redundant robot in three dimensions, and I'm going to ask for a robot with 20 joints. We can see that there is a new SerialLink object in the workspace called `h3d`, and if we look at the parameters of the robot we can see that it has 20 joints, each link only 0.05 m long. Let's compute the inverse kinematics numerically. We apply the `ikine` method to that object, and I'm going to pass in the pose that I want the robot to adopt, which I have already defined, along with the initial joint angles; for those I'm going to use the workspace variable `qz`, which was defined by the script that created the robot for me.
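The steps just described look like this at the MATLAB prompt (the variable names `h3d` and `qz`, and the desired pose `T2`, are as used in the lecture; `T2` is assumed to have been defined beforehand):

```matlab
>> mdl_hyper3d(20)        % script creates a 20-joint SerialLink object h3d (and qz)
>> h3d                    % display the robot's kinematic parameters
>> q = h3d.ikine(T2, qz)  % numerical inverse kinematics: pose T2, initial guess qz
```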
And here we have the solution: a 20-element vector of the joint angles necessary for this 20-joint robot to adopt the pose `T2`. I can test the correctness of this by computing the forward kinematics for the set of joint angles I just computed; that's what it looks like, and this is the original pose that I asked for, so we can see that they are identical. I can plot the configuration of the robot at the set of joint angles computed by the inverse kinematics, and it looks something like that. Using `hold on`, I can overlay the original pose that I asked for, which is in the workspace variable `T2`. And there we see that the end effector is in the correct position and has the correct orientation.
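The verification and plotting steps can likewise be sketched as (again assuming the lecture's workspace variables):

```matlab
>> h3d.fkine(q)   % forward kinematics of the IK solution; should match T2
>> T2             % the originally requested pose, for comparison
>> h3d.plot(q)    % draw the robot in the computed configuration
>> hold on
>> trplot(T2)     % overlay the requested end-effector pose as a coordinate frame
```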
For a redundant robot the inverse kinematics can be easily solved using a numerical approach.
This content goes beyond high-school mathematics and requires an understanding of undergraduate-level mathematics; for example, linear algebra (matrices, vectors, complex numbers), vector calculus, and MATLAB programming.
I am Sunandan from Japan.
Thank you for the brilliant and unparalleled lectures. Is it allowed to discuss the course material here?
Please feel free to discuss in the comments section.
Building inverse kinematic functions for redundant robots seems like a task suited for neural networks. It just takes a lot of input and output data for training, especially for a high-capacity neural network. The nice thing is that the input/output data can be computed “offline” using the numerical approach, so building a massive dataset would be as simple as picking a random Ksi and a random initial configuration, using the numerical approach to get the output, and repeating until sufficient data is acquired.
An NN could even be trained to accept coordinates of locations the arm is to avoid (while also specifying the output joint configuration that avoids those coordinates). Similarly, one could add inputs that hint at the general shape the arm should take, e.g. a parabolic curve vs. linear with the fewest rotations.
Getting a bit “meta”, but the NN concept is interesting since training uses a numerical approach as well: network training takes data (built with a numerical approach) and itself uses a numerical approach to generate the NN’s parameters, so that in the final algorithm (inference) the computation becomes a set of matrix operations and the iterative numerical approach is eliminated. The NN simply front-loads this computation into training; the downside is that the up-front computation is far higher, but that is totally acceptable since it would be done before the robot is in use.
It could be, and has been, done. You are right that all the training data can be created offline. The answer will not be exact, which may or may not be an issue. It could be good enough to seed an optimisation-based IK solution.