Learning collaborative manipulation tasks by ... - LASA - EPFL
Fig. 6. Illustration of the changes of correlations between force F and velocity ẋ along the task. The trajectories in black and grey represent respectively the cases where the robot is leading and following.

Fig. 5. Demonstrations of the collaborative task to the robot (first two rows) and the associated HMM model (last row). The 5 trajectories in black and the 5 trajectories in grey represent respectively the demonstrations of the robot acting as a leader and as a follower. The points mark the beginnings of the motions.

… shows the evolution of the adaptive gain κP defined in (9) along the task.

IV. EXPERIMENTAL RESULTS

This experiment aims at validating that the proposed model can distinguish stereotypical following and leading behaviors (i.e., where the user is explicitly told to adopt one or the other behavior along the task) and that the model can lead to different controllers during reproduction. This is a first step toward determining whether the proposed model could, in further work, address more complex types of behaviors (and switching across them). We assessed the robustness of the proposed system in a series of simulations where different possible input force profiles are fed into the system to modulate the kinematic behavior of the robot.

Fig. 5 shows the demonstrations provided to the robot (first and second rows) and the associated HMM models (last row). The dataset and model of the robot acting as a leader (conversely, the user acting as a follower) are drawn as black lines. The dataset and model of the robot acting as a follower (conversely, the user acting as a leader) are drawn as grey lines. In the fourth graph, we see that the correlations between ẋ and F change along the motion. In the two situations (leading and following), the correlations can be roughly decomposed into three parts corresponding to the beginning of the motion (user/robot initiating the task), the middle of the motion (user and robot lifting the object together) and the end of the motion (user/robot signaling the end of the task); see also Fig. 6. The first and last datapoints are characterized by a force and velocity close to zero (or moving towards zero). The non-linearities observed along the task show that approximating the collaborative behavior with a system of constant damping factor (i.e., a linear relation between force and velocity) would be inadequate to model the collaborative behaviors. We see in the last graph that HMMs can encapsulate these different correlations along the motion compactly and efficiently (two HMMs with 5 states each were used here for the leading and following cases); a minimal sketch of such an encoding is given below.

Fig. 7. Reproduction attempts in the case of perturbed force signals (first three graphs) and by starting from different initial positions (last graph).

Fig. 7 shows reproduction attempts highlighting the robustness of the system to temporal and spatial variability. To highlight the generalization capabilities of the system in terms of temporal variations, the force signal recorded during one of the demonstrations (when the robot acts as a follower) is used to simulate the force input during a reproduction attempt. These results are represented in solid …
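As an illustration of this encoding strategy, the following is a minimal sketch, not the authors' implementation, of training one 5-state Gaussian HMM per behavior on [x, ẋ, F] demonstrations and recognizing the behavior of a new interaction by comparing log-likelihoods. It assumes the hmmlearn library; the synthetic data, variable names, and helper function are placeholders.

```python
# Minimal sketch (assumed setup, not the paper's code): encode the
# leader/follower demonstrations with two 5-state Gaussian HMMs and
# classify a new interaction window by log-likelihood comparison.
import numpy as np
from hmmlearn import hmm

def stack_demos(demos):
    """Concatenate demonstrations of [x, xdot, F] samples for hmmlearn."""
    X = np.vstack(demos)                 # (sum of T_i, 3) observation matrix
    lengths = [len(d) for d in demos]    # per-demonstration lengths
    return X, lengths

# Five demonstrations per behavior, each a (T, 3) array of [x, xdot, F].
# Synthetic placeholder data; the paper uses recorded haptic demonstrations.
rng = np.random.default_rng(0)
demos_leader = [rng.standard_normal((100, 3)) for _ in range(5)]
demos_follower = [rng.standard_normal((100, 3)) + 1.0 for _ in range(5)]

# One 5-state HMM per behavior. Full covariances let each state capture
# the local correlation between velocity xdot and force F, so no single
# constant damping factor is imposed over the whole motion.
models = {}
for name, demos in [("leader", demos_leader), ("follower", demos_follower)]:
    X, lengths = stack_demos(demos)
    m = hmm.GaussianHMM(n_components=5, covariance_type="full", n_iter=100)
    m.fit(X, lengths)
    models[name] = m

# Recognition: score an observed window under both models and pick the
# more likely behavior.
window = demos_follower[0][:50]
scores = {name: m.score(window) for name, m in models.items()}
print(max(scores, key=scores.get))  # expected here: "follower"
```

In this sketch the state-wise full covariance matrices play the role of the varying ẋ–F correlations described above: each hidden state models one phase of the motion (initiation, joint lifting, termination) with its own local force-velocity coupling.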
