Position Control

In this example, we create a position control system to move the robot using end effector sets. These sets define the motion constraints on the end effector: whether we constrain all six DOF or underconstrain it, which allows more possible solutions for the end-effector motion.


The goal of this example is to familiarize the developer with more advanced concepts, such as creating a position control system and working with EcEndEffector and EcEndEffectorSet. This example also shows how to set the desired placement for an end effector and call the calculateState method on the EcPositionControlSystem. It also introduces a simple renderer, which is useful for visualizing the motion of the model robot.

Code Review

Code Explanation: example2.cpp

Most of the functions in example2 are the same as in example1, with these exceptions:

generatePositionController() Creates a position controller and an end effector set with one end-effector frame. It also creates the velocity controller that the position controller requires. A more complete explanation follows later in the text (see Generating the Position Controller).

generateRenderer() Creates a render window, then pulls the visual system state from the window and passes it to the renderer. The visual system state contains the mesh and texture information needed to render the robot manipulator(s). After this, the robot can be rendered by just passing in the system state (see renderSystemState() next).

renderSystemState() Using an updated system state, in which the joint angles and link positions have changed, this function renders the robot manipulators in their current state.

run() This function runs a simulation in two parts, with a different number of time steps for each part.

Part 1 of the simulation: The position controller first sets the desired target pose of the end effector (position and orientation) and then runs in a loop, calculating the system state (named dynamicState here) each iteration. This updated system state is passed to the renderer.

Part 2 of the simulation: The loop generates a series of desired positions along a circular path based on the elapsed time. The robot moves the end effector in a circle in the XY plane, completing three loops.

Additional Classes Referenced

  • EcRenderWindow
  • EcPositionControlSystem
  • EcManipulatorSystemState
  • EcCoordinateSystemTransformation
  • EcManipulatorLinkConstPointerVector
  • EcFrameEndEffector
  • EcEndEffectorSet
  • EcControlExpressionCore
  • EcExpressionScalarConstant
  • EcControlExpressionJointRateFilter
  • EcControlExpressionEndEffectorErrorFilter
  • EcExpressionGeneralColumn
  • EcControlExpressionContainer
  • EcIndividualVelocityControlDescription
  • EcVelocityControlSystem
  • EcVector

Reference: Actin Class List

Running Example 2

This assumes that you have already set up Example 1 and created an environment variable pointing to where you installed Actin.

    $ cd $ACTIN_HOME
    $ cd examples/build/bin
    $ ./trainExample2

Running the example will create a rendering window with the robot first moving to the target position, and then moving the end effector in a circular path.

Generating the Position Controller

This is perhaps the most complicated part of example2. We require a position controller so that we can find the pose of the robot, set the desired pose, and call calculateState each cycle. The function Example2::generatePositionController() performs this work.

The figure below shows that the Position Controller requires a Stated System, an End Effector Set, and a Velocity Controller. See the right side of the object model. You can double-click on the image to enlarge it.

Another view, showing the object hierarchy and the associated types:

Further Exercises

In the interest of learning Actin, it would be educational to attempt a few modifications. Try generating a different type of constraint such as a pointEndEffector, to see how the motion changes. Try creating multiple control descriptions for different motion behaviors. Another useful modification would be to print out state information each timestep, to get joint position and velocity information. This can provide basis for a simple hardware interface if you were to send that information to a physical robot system.