Learning collaborative manipulation tasks by ... - LASA - EPFL


The first three graphs of Fig. 7 show in solid line the generated force input (first graph), the retrieved velocity (second graph) and the position (third graph). Three different perturbations of this force signal are then simulated by distorting the original force signal non-homogeneously in time (shown in dashed, dotted, and dash-dotted lines). Distorting the signal in this way simulates situations where the force applied by the user appears with some temporal variability across the different reproductions (a minimal sketch of this distortion procedure is given at the end of the section). We see that the system successfully adapts to these changes by adjusting the motion behaviors accordingly. For example, the force input drawn in dash-dotted line can represent the behavior of a user who is initially slow to initiate the task (e.g. the user may not be ready yet), which is reflected by a quasi-constant force of 4 N sensed by the robot (first graph). The user then suddenly tries to lift the object; that is, after the simulated steady phase, a force signal similar to the demonstration but stretched in time is applied. We see in the second and third graphs that the robot correctly handles this perturbation by first staying still, and then helping the user lift the beam as soon as the user is ready (null velocity for the first third of the motion). The task is thus collaboratively achieved.

The last graph of Fig. 7 presents reproduction attempts (in black) highlighting the spatial generalization capabilities of the system in the case of the robot acting as a leader. Starting from different initial positions, the system is able to retrieve an appropriate motion to lift the object.

V. CONCLUSIONS AND FURTHER WORK

In this paper we proposed an approach to teach a robot collaborative tasks through a probabilistic model based on Hidden Markov Models and Gaussian Mixture Regression. We emphasized the role that teleoperation plays in transmitting both dynamic and communicative information about the collaborative task, which classical methods of Programming by Demonstration have so far overlooked. We then showed, through an experiment consisting of lifting an object collaboratively with a humanoid robot, that the proposed model can efficiently encapsulate typical communication patterns between different dyads of users acting with stereotypical collaboration behaviors. Finally, reproduction attempts in simulation were presented.

Here we used only stereotypical behaviors, predefined before the interaction between the two users, to validate the use of the haptic interface and of the proposed probabilistic model to record and encode physical collaborative skills. Further work will first concentrate on extending the framework to more natural interactions where the users are not told explicitly to behave with predetermined roles, thus extending the complexity of the haptic communication cues to transfer to the robot. We are currently working on reproducing the learned skill on the real robot, which will be used to evaluate the performance of the proposed controller. We finally plan to contrast the data collected in this experiment with direct recordings of the same dyads of users performing a similar task, by endowing the object to lift with force sensors at both extremities, i.e. without passing through the robot and haptic interface to record haptic/kinematic information.
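To make the perturbation procedure concrete, here is a minimal sketch of a non-homogeneous time distortion, assuming the force profile is stored as a uniformly sampled array. The specific warping function and toy force profile below are illustrative assumptions, not the exact distortions used in the experiment:

import numpy as np

def distort_in_time(force, warp_fn, n_out=None):
    """Resample a force profile along a monotonic time warp.

    force   : uniformly sampled force profile, shape (T,)
    warp_fn : monotonic map from normalized output time in [0, 1]
              to normalized input time in [0, 1]
    """
    T = len(force)
    n_out = n_out or T
    t_out = np.linspace(0.0, 1.0, n_out)
    t_in = np.clip(warp_fn(t_out), 0.0, 1.0)
    # Non-homogeneous distortion: sample the original signal at warped times.
    return np.interp(t_in * (T - 1), np.arange(T), force)

# Toy profile standing in for the demonstrated force signal (values in N).
demo = 4.0 + 4.0 * np.sin(np.linspace(0.0, np.pi, 200))

# Mimics the dash-dotted perturbation described above: the warp holds the
# initial value for the first third of the motion (user not ready yet,
# quasi-constant 4 N), then replays the demonstration stretched in time.
slow_start = lambda t: np.maximum(0.0, (t - 1.0 / 3.0) * 1.5)
perturbed = distort_in_time(demo, slow_start)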
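As a complement to the conclusions above, the sketch below illustrates the Gaussian Mixture Regression step that underlies the reproduction: conditioning a joint model of (force, velocity) on the currently sensed force to retrieve a velocity command. This is a generic one-dimensional GMR sketch with assumed toy parameters; the model proposed in the paper additionally relies on the HMM's state transition structure, which is omitted here:

import numpy as np

def gmr(priors, means, covs, f):
    """E[velocity | force = f] under a joint Gaussian mixture.

    priors : (K,) mixture weights
    means  : (K, 2) component means, ordered as [force, velocity]
    covs   : (K, 2, 2) joint covariances of (force, velocity)
    """
    K = len(priors)
    h = np.empty(K)
    for k in range(K):
        mu_f, var_f = means[k, 0], covs[k, 0, 0]
        # Responsibility of component k for the observed force.
        h[k] = priors[k] * np.exp(-0.5 * (f - mu_f) ** 2 / var_f) \
               / np.sqrt(2.0 * np.pi * var_f)
    h /= h.sum()
    # Blend the per-component conditional means by the responsibilities.
    v = 0.0
    for k in range(K):
        mu_f, mu_v = means[k]
        v += h[k] * (mu_v + covs[k, 1, 0] / covs[k, 0, 0] * (f - mu_f))
    return v

# Toy two-component model: one "wait" component (force around 4 N, zero
# velocity) and one "lift" component (higher force, positive velocity).
priors = np.array([0.5, 0.5])
means = np.array([[4.0, 0.0], [8.0, 0.05]])
covs = np.array([[[1.0, 0.0], [0.0, 1e-4]],
                 [[1.0, 0.004], [0.004, 1e-4]]])
print(gmr(priors, means, covs, 4.0))  # near-zero velocity: robot stays still
print(gmr(priors, means, covs, 8.0))  # positive velocity: robot helps lift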
