Analyzing a 3-joint planar robot arm


Now we are going to add a third link to our robot, so now we have a 3-degree-of-freedom robot with 3 revolute, or rotational, joints. This robot has a much larger working volume: it can reach any point within this very large circle here. But because it has 3 degrees of freedom, it can not only reach any particular position within its workspace, it can also achieve any orientation at that position.

If, for example, the robot's end-effector position is here, then the robot with 3 degrees of freedom is able to achieve any arbitrary orientation of the last link of the robot. You can see that I can move that link through quite a large range of angles, and this is possible because the robot has 1 extra degree of freedom.

Let us determine the pose of the end effector of our 3-link robot. The first thing we are going to do is annotate the robot, introducing the lengths of the various links and then the various joint angles. Q1 and Q2 in this diagram are both positive, and Q3 here is negative; note that its direction of rotation is opposite to that of Q1 and Q2. When we did this for the 1- and 2-joint robots, I showed an animation of the coordinate frame moving from the reference coordinate frame along the links and ending up at the end effector. I am not going to do that animation this time; I think you have got the general idea, so we are just going to write down the transform expression by inspection.

The first thing we are going to do is rotate by the angle Q1, translate in the X-direction by A1, rotate by the angle Q2, translate along the 2nd link in its X-direction by the amount A2, then rotate by the angle Q3 and translate in the X-direction by the amount A3.
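This chain of elementary transforms can be sketched numerically. The snippet below uses Python with NumPy rather than the MATLAB Toolbox from the lesson, and the link lengths and joint angles are arbitrary example values (with Q3 negative, as in the diagram):

```python
import numpy as np

def rot2(theta):
    """2D homogeneous transform: rotation by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def transl2(x, y=0.0):
    """2D homogeneous transform: translation by (x, y)."""
    return np.array([[1.0, 0.0, x],
                     [0.0, 1.0, y],
                     [0.0, 0.0, 1.0]])

# Example link lengths and joint angles (not from the lesson)
a1, a2, a3 = 1.0, 1.0, 0.5
q1, q2, q3 = np.deg2rad(30), np.deg2rad(40), np.deg2rad(-20)

# Rotate by Q1, translate A1, rotate by Q2, translate A2,
# rotate by Q3, translate A3 -- composed left to right
T = rot2(q1) @ transl2(a1) @ rot2(q2) @ transl2(a2) @ rot2(q3) @ transl2(a3)
print(T)  # 3x3 homogeneous transform: the pose of the end effector
```

The translational part of `T` (first two rows of the last column) is the end-effector position, and the rotational part encodes the orientation, which for this chain is simply Q1 + Q2 + Q3.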

Once again, I can expand out these matrices and multiply them together to come up with a homogeneous transformation which represents the pose of the end effector of this 3-joint robot. To do this by hand for the 3-link robot is a little bit tedious, and I might make an error.

So we are going to go straight to MATLAB and do it there. Now we are going to use MATLAB to compute the pose of the end effector of this 3-link planar robot, and we are going to do it just in symbolic form this time. I create some symbols to represent the 3 link lengths, A1, A2 and A3, and to represent the 3 joint angles, Q1, Q2 and Q3.

Once again we use the trchain2 function, and the string I pass in describes the way the coordinate frame moves from the base of the robot to the tip. So I am going to do a rotation by the angle Q1, a translation by the distance A1, a rotation by the angle Q2, a translation in the X-direction by the distance A2, another rotation, this one by the angle Q3, and another translation by the distance A3, moving along the 3rd link of the robot. I also pass in the joint angles Q1, Q2 and Q3, and here we have a symbolic expression which represents the pose of the end effector of this 3-link robot.

You see this is now quite a complex expression as we need to scroll sideways to see the end of it.

The x-component, for example, is given by the element in the 1st row and 3rd column of the result that I just computed, and that is the expression for the X-coordinate of the end effector of the 3-link robot.
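The same symbolic result can be sketched in Python with SymPy, standing in for the MATLAB Symbolic Math Toolbox used in the lesson. Extracting the element in row 1, column 3 and simplifying recovers the familiar closed form for the X-coordinate:

```python
import sympy as sp

a1, a2, a3, q1, q2, q3 = sp.symbols('a1 a2 a3 q1 q2 q3')

def rot2(q):
    """Symbolic 2D homogeneous rotation."""
    return sp.Matrix([[sp.cos(q), -sp.sin(q), 0],
                      [sp.sin(q),  sp.cos(q), 0],
                      [0, 0, 1]])

def transl2(x):
    """Symbolic 2D homogeneous translation along X."""
    return sp.Matrix([[1, 0, x],
                      [0, 1, 0],
                      [0, 0, 1]])

T = rot2(q1) * transl2(a1) * rot2(q2) * transl2(a2) * rot2(q3) * transl2(a3)

# Row 1, column 3 (index [0, 2]) is the X-coordinate of the end effector
x = sp.simplify(T[0, 2])
print(x)  # equivalent to a1*cos(q1) + a2*cos(q1 + q2) + a3*cos(q1 + q2 + q3)
```

Note the pattern: each link contributes its length times the cosine of the sum of all joint angles up to and including that link.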

I can also import a model of a planar 3-link robot: the toolbox function mdl_planar3 creates a SerialLink object in our workspace, in a variable called p3. There are some methods that I can apply to this object, such as the teach method that we have looked at previously. I can apply it to this 3-link robot, and here it is. I can increase joint angle number 1, decrease joint angle number 2, and increase joint angle number 3, and we can see the robot moving and the position of the end effector changing as I adjust these sliders.


There is no code in this lesson.

We consider a robot with three joints that moves its end-effector on a plane.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of high school level mathematics; for example, trigonometry, algebra, calculus, physics (optics) and experience with MATLAB command line and programming, for example workspace, variables, arrays, types, functions and classes.






  1. Sachin Nath says:

    Professor, should we give the angle q3 as negative in MATLAB too, to get the orientation of the robot as at 1:27?

    1. Peter Corke says:

      Yes, any of the angles can be positive or negative. So, at the point you mention, q3 would be negative. But remember that with angles, a negative angle is the same as a big positive angle, i.e. -90 deg is the same as +270 deg.

  2. andressb says:

    I guess that it can reach any point at any orientation if the point is not at the limit of the robot's working area.

    Is there any concept for the areas in which that statement applies?

    1. Peter Corke says:

      You raise an important point. At the limit of the robot’s reach we can achieve a position but lose the ability to achieve arbitrary orientation. We refer to this as a singularity, it’s where some degrees of freedom in the robot’s working (or task) space become unreachable. Singularity can be tested by examining the robot’s Jacobian matrix. Search the academy for “singularity” and “Jacobian”.
