
VIENNA UNIVERSITY OF TECHNOLOGY - AUSTRIA
ROYAL MILITARY ACADEMY - BELGIUM

International Workshop
Robotics and Mechanical Assistance in Humanitarian Demining and Similar Risky Interventions

Brussels-Leuven, Belgium
16-18 June 2004

PROCEEDINGS

WITH THE SUPPORT OF THE 'FONDS NATIONAL DE LA RECHERCHE SCIENTIFIQUE' and the General Direction of Scientific Research of the FRENCH COMMUNITY OF BELGIUM


CONTENT OF THE HUDEM'2004 PROCEEDINGS

HUDEM4-1  The Mine Ban Convention: past, present and future – M. Lint, Ambassador, Ministry of Foreign Affairs, Belgium, President of the 4th MSP
HUDEM4-2  The Mine Ban Convention and the Mine Action technologies – Prof. M. Acheroy, RMA
HUDEM4-3  The Outdoor Robotics: challenges and requirements, promising tools and dual-use applications – D. Caltabiano, D. Longo, G. Muscato, M. Prestifilippo, G. Spampinato, Università degli Studi di Catania, Chairman WG CLAWAR
HUDEM4-5  Robotics Systems for unstructured Outdoor Environments – Prof. V. Gradetsky, M. Knyazkov, L. Meshman, M. Rachkov, Institute for Mechanical Problems, University of Moscow, and Ministry of the Russian Federation for Civil Defence
HUDEM4-6  MAXML and PARADIS: Mine Action software management tools – Sébastien Delhay, Vinciane Lacroix, Mahamadou Idrissa, SIC, RMA
HUDEM4-7  Remote Sensing Systems for robotics and aquatic related Humanitarian Demining and UXO Detection – Dr Ch. Bostater, T. Ghir, L. Bassetti, Florida Institute of Technology
HUDEM4-8  Air/Ground Robotic Ensembles for Risky Applications – S. Lacroix, R. Chatila, LAAS Toulouse
HUDEM4-9  Light, Low-cost, Hybrid Robotic Platform for Humanitarian Demining – Prof. G. Genta, S. Carabelli, Polytech Torino, Italy
HUDEM4-10 Ground Adaptive Manipulation of GPR for a Mine Detection System – H. Yabushita, Kazuhiro Kosuge, Yasuhisa Hirata, IRS Tohoku, Japan
HUDEM4-11 A system for monitoring and controlling a climbing and walking robot for landslide consolidation – Dr L. Steinicke et al., SAS, Brussels
HUDEM4-12 Sensor Head and Scanning Manipulator for Humanitarian Demining – Dr P. Gonzalez-de-Santos, E. Garcia, J. Combano, J. Estremera, IAI-CSIC, Madrid
HUDEM4-13 Life Driving Video System (LDV): command, control and communicate with the robotic device – P.T. Gscwind, M. Stuber, Innosuisse Corp., Geneva
HUDEM4-15 Robotic systems for Humanitarian Demining: modular and generic approach. Cooperation under IARP and ITEP – Yvan Baudoin, RMA
HUDEM4-18 EUDEM-2, a useful tool for the Humanitarian Demining Researchers/End-Users Community – Mrs K. Debruyn, Prof. H. Sahli, Prof. J. Cornelis, VUB Brussels
HUDEM4-19 A concept of implementing technology to encourage economic growth in de-mining – R. Dixon, Demining Systems, UK
HUDEM4-20 A concept for a humanoid demining robot – Peter Kopacek, Man-Wook Han, Bernhard Putz, Edmund Schierer, Markus Würzl, Vienna University of Technology, Austria
HUDEM4-21 Towards a Semi-Autonomous Vehicle for Landmine Neutralisation – Kym M. Ide, Brian J. Jarvis, Bob Beattie, Paul Munger and Leong Yen, Australian Govt, Dept of Defence, Defence Science and Technology Organisation
HUDEM4-22 Results of Open-Air Field Trials on Chemically-Specific Identification of UXO Fillers and Buried AT Landmines using a Portable Non-Pulsed Non-Directed Fast Neutron Stoichiometer – B.C. Maglich, T-F Chuang, M.Y. Lee, Ch. Druey, HiEnergy Technologies, Inc., California, USA; J.W. Price, G. Miller, U. California L.A.
HUDEM4-23 Feature-Level Sensor Fusion for a Demining Robot – Svetlana Larionova, Lino Marques and Anibal T. de Almeida, University of Coimbra
HUDEM4-24 European Project of Remote Detection SMART – Dr Yann Yvinec, RMA
HUDEM4-25 Modelling of a Metal Detector – Dr Pascal Druyts, RMA
HUDEM4-26 Extraction of Landmine Signatures from Ground Penetrating Radar Signals – I. van den Bosch, S. Lambot, M. Acheroy, UCL, Belgium - RMA
HUDEM4-27 NQR-based detection: an overview – K. Althoefer et al., King's College, London
HUDEM4-28 The PMAR Lab in the Humanitarian Demining Effort – M. Zoppi, R. Molfino, E. Cepolina, PMAR Lab, University of Genova, Italy
HUDEM4-30 Robosoft's advanced robotic solutions for outdoor risky intervention – P. Pomiers, V. Dupourqué, Robosoft, France
HUDEM4-31 Behaviour-based motion control for off-road navigation – M. Proetzsch, T. Luksch, Prof. K. Berns, University of Kaiserslautern
HUDEM4-32 Planning Walking Patterns for a Biped Robot using a Fuzzy Logic Controller – Dr A. Pajaziti, Ahmet Shala, Bujar Pira, Technical University of Prishtina, Kosovo
HUDEM4-33 Adaptive Neuro-fuzzy Control of AMRU-5, a six-legged walking robot – Dr J.C. Habumuremyi, Y. Baudoin, Royal Military Academy; P. Kool, VUB, Belgium
HUDEM4-34 Multi-Agent Systems: an efficient approach for Sensor and Robotic Systems used in Humanitarian Demining – E. Colon, Royal Military Academy
HUDEM4-35 4-D GPR Imaging Based on an FDTD Parallel Technique – Baikunth Nath, Jing Zhang, University of Melbourne, Australia
HUDEM4-36 Robotic Agents for dangerous tasks: Features and Performances – Stefan Havlik, Slovak Academy of Sciences, Slovakia
HUDEM4-37 How to design a Haptic Telepresence System for the Disposal of Explosive Ordnances – B. Petzold, M.F. Zaeh, A. Kron, G. Schmidt, B. Deml, B. Färber, T.U. München – IfA, University of the Armed Forces
HUDEM4-38 Detection of material and structure of mines by acoustic analysis of mechanical drilling noise – G. Holl, B. Schwark-Werwach, C. Becker


Past, present and future of the Anti-Personnel Mine Ban Convention

HUDEM'04 - RMA - 16th June 2004

Speech by Ambassador Jean Lint, President of the 4th Meeting of the States Parties

Since its entry into force on 1st March 1999, the Anti-Personnel Mine Ban Convention, crafted in Vienna, launched in Brussels, negotiated and adopted in Oslo and finally signed in Ottawa on 3rd December 1997, has been a true success story, as almost three-quarters of the States of the world have accepted the responsibility never to use, produce or transfer anti-personnel mines and to cooperate in addressing the devastating impact of those mines.

This success is due to the widespread recognition of this international norm and to the spirit of cooperation between all States Parties, the International Campaign to Ban Landmines (ICBL) and the International Committee of the Red Cross (ICRC).

This success is also due to mechanisms foreseen in the Convention, such as

• the annual Meetings of States Parties and
• the obligation to provide international cooperation and assistance for mine clearance, stockpile destruction and victim assistance.

It is also due to those mechanisms created by the States Parties to facilitate coordination and exchange of information:

• the Intersessional Work Program, where four Standing Committees are active on:
  • the General Status and Operation of the Convention,
  • Mine Clearance, Mine Risk Education and Mine Action Technologies,
  • Victim Assistance and Socio-Economic Reintegration and
  • Stockpile Destruction.
  The next Intersessional meeting will be held in Geneva from 21 to 25 June 2004, where all participants are welcome.
• the Coordinating Committee, responsible for the organization of meetings, and
• the Implementation Support Unit, taking care of the secretariat of the Convention.

Finally, it is due to those mechanisms that have emerged on an informal basis:

• a sponsorship program for mine-affected and developing countries;
• a Contact Group designed to promote universal adherence to the Convention (initiative of Canada);
• a Contact Group on transparent reporting by States Parties (Belgium) and
• a Contact Group on the mobilization of resources for mine action (Norway).

Thanks to that informal exchange of information and to the formal provision of information through compulsory annual transparency reporting, we now have a clear view of our progress in the pursuit of the Convention's core humanitarian objectives and of the challenges ahead.

With respect to universalization, the international norm established by the Convention is now consolidated, as 142 States have formally accepted the Convention, including almost every country in Europe, the Americas and Sub-Saharan Africa. Additionally, 9 signatory States are expected to ratify the Convention in the foreseeable future.

In Europe, all countries are parties to the Convention, with the exception of Finland, Latvia and Poland. I hope that those States will be able to join us soon. We have received encouraging signals in this respect, as Latvia and Poland submitted Article 7 reports on a voluntary basis.

We are particularly concerned by those States remaining outside of the Convention which still use and/or produce anti-personnel mines, as well as by those that have huge stocks of anti-personnel mines.

We urge them to stop using and producing and to destroy their stockpiles. We need to increase our efforts to stress that no conceivable utility of anti-personnel mines could possibly justify the devastating human costs of these weapons. In this context, I have called repeatedly not only on all States but also on armed non-state actors to abide by the principles of the Convention and to comply with them.

In the field of mine clearance, we know that 50 States Parties suffer from the impact of landmines, and we are working effectively to know the extent of the problem and to establish and support national mine action programs with a view to respecting each State's 10-year deadline to clear mines.

As for stockpile destruction, 117 States Parties have declared that they no longer possess stockpiles. Together they have destroyed more than 30 million landmines. What is important is that even States Parties with few resources took full ownership of this obligation and that some destroyed huge numbers of mines. In addition, by taking decisive action to destroy these weapons, the States Parties have clearly demonstrated that their armed forces can continue to fulfill their responsibilities without anti-personnel mines. 15 States Parties still have stockpiles to destroy before the four-year deadline prescribed by the Convention.

In the field of victim assistance, up to 40 States Parties may require assistance to meet the care, rehabilitation, and social and economic reintegration needs of landmine survivors. Even if that responsibility rests with each affected State Party, we also know that the countries with the greatest numbers of mine victims are amongst the poorest of the world. In addition, the commitment to assist survivors is not expressed as a time limit in the Convention but extends over the lifetime of the victims.

Today, we need to ensure that we sustain our efforts to truly achieve our aim of an anti-personnel landmine-free world through a more sophisticated approach to humanitarian demining.

The Nairobi Summit for a Mine-Free World, the name given to the Convention's first Review Conference, which will take place in Kenya from 29 November to 3 December 2004, will constitute a crucial opportunity to review our achievements and our remaining challenges, as well as to envision the way ahead.

I could not conclude this speech without congratulating the Royal Military Academy for the role it has been playing for more than five years in the field of Mine Action Technologies.

It has been a pleasure for me to work with the Academy and especially with Professor Marc Acheroy. Together, that is with politicians, diplomats, the military, NGOs and victims, experts in Mine Action Technologies have an important role to play in the noble cause of putting an end to the human suffering and casualties caused by anti-personnel mines, which are all indiscriminate, cruel and inhumane and which destroy the lives of thousands of innocent civilian victims each year.


Mine action technologies: building a roadmap to bring appropriate and improved technologies into operational use

Marc Acheroy, Royal Military Academy, Brussels, Belgium

The present document aims to reflect the outcome of three expert hearings on mine action technologies, chaired by the author, which took place on the margins of the Standing Committee on Mine Clearance, Mine Risk Education and Mine Action Technologies in February 2003, May 2003 and February 2004.

1. Introduction

In 1997, at the workshop which accompanied the signing of the Ottawa Convention, concern was expressed at the lack of international coordination and cooperation in mine action technology. It was noted that there were no universal standards for technology, no common view on where resources should be directed, and that inadequate dialogue and understanding existed both within the research and development community and with other actors in mine action.

Even if there is still a lack of international coordination and cooperation in mine action technologies, especially between the end-users, the donors and the R&D communities, a lot of work has been done and some success stories can be reported. Significant progress has been made:

- in the performance of metal detectors and handheld dual sensors, which combine metal detectors with ground penetrating radar (GPR),
- in the development and use of mechanical devices,
- in the development of applications based on information technologies (IMSMA is a good example),
- in manufacturing personal protective equipment and prosthetic feet,
- in the training of rodents to detect landmines,
- in the suitability and cost of personal protective equipment.

Thanks to the International Test and Evaluation Programme (ITEP), much work has been undertaken to test and evaluate equipment, systems and methods against agreed standards (e.g. the CEN Workshop Agreement CWA 14747:2003 "Humanitarian Mine Action - Test and Evaluation - Metal Detectors", published by CEN in July 2003, and the CEN Workshop Agreement for Test and Evaluation of Demining Machines (CWA12), approved on April 20, 2004).

Nevertheless, efforts must continue, especially to initiate and increase the coordination and the cooperation between users, donors and technologists in order to develop and bring to the field equipment and tools based on real needs and not assumed needs.

2. Mine action technologies: a very difficult problem

Several factors slow down real progress in the development and fielding of new technology, the most significant of these being the fact that mine action solutions are not simple and that no "silver bullet" is available. It can be said that finding all mines in the ground without a false alarm is a challenge comparable to sending a person to the moon, but with much less money. Some of the significant challenging factors include:

- The lack of a procurement path makes fielding a technology very difficult. Consequently, developers can face a dead end once research and development, prototyping and test and evaluation / validation (if any) are achieved.
- Mine action solutions are not universal but rather often country- or region-specific (e.g., related to specific soil type, climate, vegetation, socio-cultural environment, level of education, etc.). A systems approach needs to be used.
- Mine action technologies are diverse; e.g. ITEP recognizes six different categories: survey, detection, mechanical assistance, manual tools, personal protection and neutralisation.
- Requirements for technologies are not easily set, nor satisfied.
- Some major advances have not been well appreciated (e.g. the very significant improvements in metal detectors, personal protective equipment and information technology support tools).
- It is now clear that the market for mine action equipment is not large enough to support bringing products to market.
- Both donors and demining organizations are naturally conservative, especially regarding safety.
- Donors are reluctant to insist on new and more efficient technologies, and deminers often do not change successful clearance methods (even if not efficient) as long as donors accept the status quo.
- Some of the problems of new mine action technologies are not technical (e.g. computer staff in field offices leaving once they are trained).

These challenges emphasize the responsibilities of each of the actors of the demining community: the donors, the end-users and the technologists.

a. Donors' responsibilities

Clearly, donors have a key role to play, especially in supporting the fielding of appropriate technologies in order to optimize the funding of technology in the long term (e.g., by supporting the introduction of new technologies on the condition that they will lead to faster operations, saving lives and saving money).

b. End-users' responsibilities

End-users need to play a pro-active role and to be understanding and open regarding the process of bringing appropriate technologies into the field.

c. Technologists' responsibilities

Technologists need to understand the importance of bringing appropriate technologies to the field. They should visit the field to truly understand the real needs of end-users, avoid building technologies based on assumed needs and work interactively with end-users. They should understand that technologies must be affordable, adaptable and manageable in the field, and must fit into existing procedures (IMAS). They must be aware that ergonomic and physiological aspects strongly influence the efficiency of mine clearance activities. Technologists should also increase their understanding of the fact that, in addition to technologies related to detection, technologies related to area reduction (knowing where the mines are not), strategic planning and programme management are also important.

3. Need for a roadmap

The Convention states that "each State Party undertakes to facilitate and shall have the right to participate in the fullest possible exchange of equipment, material and scientific and technological information concerning the implementation of (the) Convention." This implies that such an exchange is an important underpinning to assisting States Parties in the fulfilment of their obligations. It is in the spirit of this provision of the Convention that all actors are urged to apply the recommendations in this document. Donors need to understand that technologists need their support to establish a sound procurement process for fielding appropriate technologies in order to make mine action more cost-effective. For their part, end-users need to be pro-active, understanding and open towards the process of introducing new technologies in the field and to make use of existing tools. End-users need to understand that appropriate technologies could save human lives and increase mine action efficiency. Finally, technologists must accept that nothing is more important than understanding the working environment.

The co-chairs of the ad hoc Standing Committee decided to mandate an informal expert group, meeting on the margins of the Standing Committee and including end-users, donors and technologists,

- to define a coherent roadmap to field effective mine action technologies as soon as possible, taking into account the real needs of end-users, the priorities of donors and mine-affected countries, as well as the state of maturity of the technologies;
- to identify means to establish a sound procurement process for fielding appropriate technologies in order to make mine action more cost-effective;
- to investigate means to encourage and organise a close dialogue between mine action actors.

4. A possible roadmap

It is a matter of urgency to define a roadmap to secure rapid fielding of appropriate and improved mine action technologies. This roadmap can only be defined by achieving the following sub-objectives:

- to establish a list of prioritized statements of operational needs (SON), based on the political priorities and constraints;
- to establish a portfolio of priority technological projects, based on an inventory and an analysis of existing and potential technologies as well as on the determination of their gaps with respect to real operational needs;
- to establish technology action plans in conjunction with end-users;
- to facilitate procurement processes, thereby getting appropriate and improved mine action technologies into operational use;
- to identify means to secure maximum support to enhance technologies: from donors, in terms of understanding the benefits of appropriate and improved technologies; from end-users, in terms of understanding and openness; and from technologists, in terms of understanding end-users' real needs.

The main topics of this roadmap and their interconnections are sketched in Fig. 1 and described below.

[Figure 1 (diagram): the roadmap links "Defining & Prioritizing Operational Needs (SON)", "Analysing technologies & Classifying technology needs", "Portfolio", "Technology action plans" and "Appropriate technologies into operational use", supported by GICHD, ITEP and JMU lessons learned, and by securing maximum support from Donors, End-Users and Technologists.]

Fig. 1: A roadmap for getting appropriate and improved technologies into operational use


a. Statement of operational needs

It is expected that organizations (such as GICHD, the UN and NGOs) which are close to end-users continue to facilitate the definition of statements of operational needs (SON). They should, when appropriate, increase the quality of the knowledge regarding end-users' real needs and enhance the transfer of this knowledge towards technological organizations such as the International Test & Evaluation Programme (ITEP).

b. Portfolio of prioritized projects: end-users' involvement

It is expected that organizations (such as the International Test & Evaluation Programme (ITEP)) which are close to technologists continue to facilitate the testing and assessment of mine action technologies. They should increase the involvement of end-users in the test & evaluation process in order to improve the confidence of end-users in appropriate and improved mine action technologies, include cost / effectiveness considerations in their test and evaluation reports and share this information with end-users, and welcome direct requests issued by end-users and donors.

c. Portfolio of prioritized projects: importance of a cost / benefit analysis

Equipment testing should be subject to an estimated cost / benefit analysis, taking into consideration:

- a description of the costs of the technology (including ALL the extras such as training, down-time, transport costs, etc.);
- a description of the potential scenarios where the technology will be useful, including a financial analysis of how much is currently being spent in these regions on mine clearance;
- a realistic delivery date for the first small batch for testing, and a contractually agreed price;
- a detailed cost-effectiveness analysis, which shows a comparison with existing methods when appropriate (a minimal calculation sketch follows this list).
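As a purely illustrative aid to point c, the short Python sketch below shows one way such a cost-effectiveness comparison could be laid out. Every number and cost category in it is an assumption invented for the example, not a figure from the expert hearings.

```python
# Hypothetical cost-effectiveness comparison of two clearance methods.
# All figures are illustrative placeholders, not data from the proceedings.

def cost_per_m2(capital_cost, lifetime_days, daily_running_cost,
                training_cost, transport_cost, m2_cleared_per_day):
    """Total cost of ownership per square metre of cleared ground."""
    total_cost = (capital_cost + training_cost + transport_cost
                  + daily_running_cost * lifetime_days)
    total_area = m2_cleared_per_day * lifetime_days
    return total_cost / total_area

# Baseline: manual demining team (assumed figures).
manual = cost_per_m2(capital_cost=5_000, lifetime_days=600,
                     daily_running_cost=300, training_cost=2_000,
                     transport_cost=1_000, m2_cleared_per_day=150)

# Candidate: dual-sensor equipped team (assumed figures).
dual_sensor = cost_per_m2(capital_cost=60_000, lifetime_days=600,
                          daily_running_cost=350, training_cost=8_000,
                          transport_cost=3_000, m2_cleared_per_day=400)

print(f"manual:      {manual:.2f} EUR/m2")
print(f"dual sensor: {dual_sensor:.2f} EUR/m2")
print("candidate is cheaper per m2" if dual_sensor < manual else "baseline is cheaper per m2")
```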

d. Technology action plans to bring appropriate technology into operational use

Donors should consider increased support for appropriate or improved technologies. Possible provisions could include:

- a contractual obligation for donor-supported mine clearance to make available, at a fair price, the facilities, staff, access, etc. for large-scale field trials of appropriate or improved tools and equipment;
- an emphasis on contracts based on IMAS using the best available technology for clearance, by insisting that demining organisations demonstrate an objectively measurable, steady improvement in efficiency rather than simply retaining existing methods;
- funding to secure end-users' participation in meetings and trials.

Donors must be willing to invest in emerging and improved technologies now, for the best results in the near future!

The informal expert group has adopted this roadmap. To support its objectives, it is mandatory to have a close collaboration of end-users, donors and technologists, and a good exploitation of lessons learned by making use of the structure which exists at JMU / MAIC.

The roadmap process must be a living, interactive process, as statements of operational needs (SON) evolve and depend on variable mechanisms and scenarios. End-users must be involved in the whole process, from the definition of the SONs to the testing of equipment under operational conditions. A cost / benefit analysis is a key element in the process of bringing appropriate and improved technologies into operational use.

5. Building a test case

Since technologists have long been promising to bring appropriate and improved technologies into operational use and have created a structure to test and evaluate mine action technologies (ITEP), end-users strongly suggest proving by a "test case" that the proposed roadmap works and could be an effective process. Further, it is felt that results need to be presented in Nairobi at the end of 2004, during the Meeting of the States Parties. Otherwise, the credibility of technologists will be dramatically affected, and end-users as well as donors will definitively fail to believe in the possibility of bringing new, appropriate or improved mine action technologies into operational use. Hence, it is particularly important to demonstrate the effectiveness of the proposed roadmap by processing "test cases" through it.

Therefore, dual sensors (metal detector - GPR) have been proposed as the test case: the HSTAMIDS system from the US and the system of ERA Technologies (UK) - VALLON (Germany); a toy illustration of the dual-sensor idea is given after the list below. The objectives to achieve include:

- to present at the Nairobi meeting (beginning of December 2004) the results of the process of bringing, according to the proposed roadmap, a dual sensor (metal detector + ground penetrating radar (GPR)) into operational use;
- to involve GICHD in the definition of the Statements of Operational Needs corresponding to the region selected for the operational tests;
- to involve ITEP in performing the technical testing and the comparison of test & evaluation results against end-users' requirements;
- to involve end-users in all the steps of the process and especially in the organisation of operational tests in Bosnia.
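For readers unfamiliar with the dual-sensor concept referenced above, the short Python sketch below illustrates, with purely hypothetical confidence values and thresholds, the kind of decision logic a metal detector + GPR combination is meant to enable: the GPR channel is consulted to discard alarms raised by the metal detector on harmless metallic clutter. It is a toy illustration, not the HSTAMIDS or ERA-VALLON algorithm.

```python
# Toy decision-level fusion of a metal detector (MD) and a GPR channel.
# Thresholds and confidence values are illustrative assumptions only.

def dual_sensor_decision(md_confidence, gpr_confidence,
                         md_threshold=0.5, gpr_threshold=0.4):
    """Return a coarse decision for a single alarm.

    md_confidence  -- metal detector alarm strength in [0, 1]
    gpr_confidence -- GPR target likelihood in [0, 1]
    """
    if md_confidence < md_threshold:
        return "no alarm"
    # MD alarm present: GPR is consulted to confirm a buried structure.
    if gpr_confidence >= gpr_threshold:
        return "investigate (possible mine)"
    return "reject clutter (metal fragment)"

if __name__ == "__main__":
    for md, gpr in [(0.9, 0.8), (0.9, 0.1), (0.2, 0.7)]:
        print(md, gpr, "->", dual_sensor_decision(md, gpr))
```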


End-users (Norwegian People's Aid - NPA) will select a region in Bosnia (to be confirmed), and GICHD will define the SON for this region in close collaboration with end-users.

The process described in the roadmap will include:

- the establishment of the SON for the region of interest by GICHD in close collaboration with end-users;
- the technical testing and evaluation as well as the comparison against the SON by ITEP (dual sensors must be included in the ITEP work plan - Portfolio of technology projects) with the participation of end-users;
- the operational testing by end-users in the selected region of interest.

6. Conclusion

Even if a lot of progress has been made in technological development, especially in close-in detection, mechanical clearance and demining campaign management, there is still a lot of work to be done in order to convince donors and end-users of the possible increase in effectiveness thanks to mine action technology. It is precisely the aim of the proposed roadmap to give a coherent framework, accepted by all actors, for bringing appropriate technologies into the field.

Acknowledgement

The author wishes to thank all the participants in the informal group of experts on mine action technologies, meeting on the margins of the Standing Committee on Mine Clearance, Mine Risk Education and Mine Action Technologies.

References

1. http://www.gichd.ch
2. http://www.itep.ws
3. http://maic.jmu.edu
4. M. Acheroy, "Mine Action Technologies: Problems and Recommendations", Journal of Mine Action 7.3, Dec 2003.
5. M. Acheroy and I. van den Bosch, "Humanitarian Demining: sensor technology status and signal processing aspects", CNRS - GDR Ondes, Dec 2003, Marseille, France (invited paper).
6. M. Acheroy, "Humanitarian demining: sensor technology status and signal processing aspects", IEEE-ICRA conference 2003 (invited paper).
7. M. Acheroy, A. Sieber, "Defining in Europe a possible methodology to field faster improved Mine Action Technologies (MAT)", Dubrovnik workshop on humanitarian demining, Oct 2002.


The Outdoor Robotics: challenges and requirements, promising tools and dual-use applications

Daniele Caltabiano, Domenico Longo, Giovanni Muscato, Michele Prestifilippo, Giacomo Spampinato
DIEES Università degli Studi di Catania
viale A. Doria 6, 95125 CATANIA, ITALY
e-mail: gmuscato@diees.unict.it

Abstract

In recent years many new robots have been designed and built specifically for outdoor applications. Within Workpackage 3 of the CLAWAR Thematic Network, several results of outdoor robotic projects have been collected and analysed. This paper gives a short overview of some innovative solutions, innovative locomotion strategies and their requirements. Finally, some projects developed at DIEES, University of Catania, are presented, with particular reference to possible applications to humanitarian demining.

1. Introduction

The main aim of CLAWAR WP3 is to formulate the requirements for CLAWAR machines in specific sectors that have already been identified as having good potential for automation [1]. Two tasks are carried out per year, focusing on application sectors that are felt to offer good potential for exploiting CLAWAR technology. In specifying the requirements, the concepts of modularity and the modular components that have been formulated are taken into account. In Year 2 (2003/04) one of the tasks concerned natural/outdoor applications. The activities of the task consisted in analysing some application sectors in dangerous environments, in evaluating the capabilities of innovative locomotion systems and in studying methods for autonomous navigation [2].

In the following sections some dangerous applications of outdoor robots are presented, and then several locomotion architectures are discussed. Finally, some projects for outdoor robotics developed at DIEES are briefly shown.

2. Robots for Dangerous Environments

The following sectors have been analysed in some detail and are briefly reported:

• Fire-fighting
• Exploration of volcanoes
• Demining
• Quarrying and mining

2.1 Fire-fighting

Mobile robots are becoming an option of choice where there is a need to remove people from danger. An example of the new commercial thrust has been to develop, from existing military technology, a range of remotely operated robotic fire-fighting vehicles for use in a variety of commercial operations. The financial and human cost of fire fighting is substantial, and robotic systems have the potential to reduce this figure quite dramatically. For natural disasters such as forest fires and 9/11-type disasters, it may not be possible to solve these issues completely, but there will be opportunities where levels of assistance may be appropriate.

As a risk-reducing measure, robotic machines are a valuable addition to any fire-fighting team. Robotic fire fighters are well equipped to handle extreme temperatures and hazardous atmospheres. As well as the ability to contain the blaze, the machines are fitted with a low-cost thermal imager and other onboard sensor options to allow them to investigate operational risk and provide statistical monitoring by assessment.

Two of the most prominent vehicles are 'Firespy' and 'Carlos', developed by QinetiQ. As might be expected, given the company's history, the systems developed by QinetiQ are not aimed at the fire service's bread-and-butter deployments but are intended for use in emergency situations. The robots have been designed for robustness and reliability in a variety of hazardous situations such as large-scale petroleum fires, aircraft engine fires, hazardous chemical leaks, and fires with a risk of building collapse. QinetiQ's two fire-fighting robots can therefore cope with a range of applications. A detailed analysis of the requirements of a fire-fighting robot has been carried out by Ulrich Schmucker of the Fraunhofer IFF (Magdeburg, Germany) and is summarised in the scheme reported in Fig. 1 [3].

Fig. 1. Fire-fighting robot requirements.

2.2 Exploration of volcanoes

Today there are 1500 potentially active volcanoes on Earth; 500 of them have been active during the last 100 years and about 70 are presently erupting. Ten percent of the world population lives in areas directly threatened by volcanoes, without considering the effects of eruptions on climate or air traffic, for example. About 30,000 people have died from volcanic eruptions in the past 50 years, and billions of euros of damage have been incurred. As a consequence, it is important to study volcanoes and to develop technologies that support volcanologists in this process. In the last decade alone, due both to the unpredictable timing and to the magnitude of volcanic phenomena, several volcanologists have died surveying eruptions. A major aim of a robotic system is therefore to minimise the risk for volcanologists and technicians involved in working close to volcanic vents during eruptive phenomena. It should be noted that observations and measurements of the variables relating to volcanic activity are of greatest interest during the paroxysmal phases of eruptions, which unfortunately are also the time of greatest risk for humans. Among the projects for volcano exploration we can mention the Dante II walking robot developed by Carnegie Mellon University and NASA, the RMAX helicopter developed by YAMAHA and the recently EC-funded ROBOVOLC project [4].

2.3 Demining

There is a big difference between military and civilian mine clearance. Military demining operations accept low rates of Clearance Efficiency (CE); for these purposes it is often sufficient to punch a path through a minefield. For humanitarian demining purposes, on the contrary, a high CE is required (a CE of 99.6% is required by the UN). This can only be achieved through a 'keen carding of the terrain and an accurate scanning of the infested areas', which implies the use of sensitive sensors and their slow, systematic displacement over the minefields according to well-defined procedures or drill rules. Robots carrying the mine detectors could play an important role here. Obviously, the automation of an application such as the detection, marking and removal of anti-personnel mines implies the use of autonomous or teleoperated mobile robots following a predefined path, sending the recorded data to an expert system (in charge of processing the collected data), marking the ground when a mine is detected with a probability above a predefined level and/or possibly removing the detected mine. Such complete automation is at present unrealistic and may surely not be entrusted to a single robot: the technologies allowing it exist, but the integration of those technologies in mobile robotic systems moving in unpredictable outdoor environmental conditions is not yet mature.
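To make the 99.6% figure concrete, the following small Python sketch estimates how many mines would statistically remain in a field after clearance at a given CE. The minefield size and mine density are invented for illustration only.

```python
# Illustration of what a Clearance Efficiency (CE) requirement implies.
# Field size and mine density are invented numbers, not survey data.

def expected_residual_mines(area_m2, mines_per_1000_m2, clearance_efficiency):
    """Expected number of mines left after clearing the whole area."""
    total_mines = area_m2 * mines_per_1000_m2 / 1000.0
    return total_mines * (1.0 - clearance_efficiency)

area = 100_000      # 10 hectares, assumed
density = 2.0       # 2 mines per 1000 m2, assumed
for ce in (0.90, 0.996):
    left = expected_residual_mines(area, density, ce)
    print(f"CE = {ce:.1%}: about {left:.1f} mines expected to remain")
```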

Prof. Y. Baudoin of the RMA, Belgium, has proposed a Humanitarian Demining catalogue intended to (1) disseminate the results of research activities related to the development of mobile roboticized carriers of detection sensors, with the support of the GICHD (Geneva International Centre for Humanitarian Demining), (2) assist current and future T&E activities of ITEP (International Test and Evaluation Programme), and (3) help the research centres focusing on this kind of solution to refine and continuously adapt and update generic modules that may be transferred to useful robotic systems. This catalogue includes existing robotic systems but also current projects focusing on the possible use of roboticised solutions.

A detailed classification of the requirements for a robot for humanitarian demining has also been prepared, subdivided into three levels:

- the system-level requirements,
- the robot-level requirements,
- the mobility-level requirements.

2.4 Quarrying and mining

Quarries are an important economic resource for many countries. Working in quarries is a good job opportunity, but it is very hard and dangerous. Mineworkers work in the open air, subject to all weather conditions, in a dangerous environment. Quarries change aspect every day because, as mining proceeds, the ground and walls change shape. Infrastructure such as energy supply cables and access paths is therefore temporary and unreliable. The problem of safety in quarries urgently needs to be solved: especially in poorer countries, where protection for workers is not provided as the value placed on life is low, although regulations to prevent accidents exist; but in richer countries as well, because, although protections are higher, accidents happen every day, since satisfactory technology able to guarantee mineworker safety does not yet exist. Some recent projects on quarrying and mining are ROBOCLIMBER [5], Microdrainage, and the applications developed by Caterpillar.

3. Next-stage locomotion systems

Innovative robots for outdoor robotic applications require new locomotion systems. The analysis performed within WP3 concentrated on the following typologies:

• Hybrid locomotion
• Peristaltic locomotion
• Self-reconfigurable modular systems
• Aerial systems

In the following sections a brief overview of each architecture is given.


3.1 Hybrid Locomotion

The design of a new mobile robot is a process that entails satisfying several requirements with respect to the environment in which the robot will be asked to work. Most classical solutions adopt wheels as the locomotion technique. It is, however, clear that wheeled robots have some limitations in highly unstructured environments or in difficult structured environments (such as stairs). On the other hand, the current technology for walking robots is not yet so advanced as to represent a valid alternative. Most current walking robots are very slow, have a low payload capability, are difficult to control and, due to the high number of sensors and actuators, have low fault-tolerance capabilities.

A way to find a compromise between these two solutions is to choose a hybrid system adopting wheels and legs together. Several examples of hybrid robots exist in the literature and belong to two main categories.

The first category includes articulated-wheeled robots, with the wheels mounted at the end of the legs. This category comprises the WorkPartner robot [6],[7],[8] with its particular kind of locomotion called rolking, the Roller Walker [9], where the wheels are not actuated and the robot simply skates, the biped-type leg-wheeled robot developed by Dr. Matsumoto of AIST [10],[11], which is able to climb stairs, the Walk'n Roll with four legs and a wheel attached at the end of each leg [12], and the mini-rover prototype developed by LRP [13], just to name a few examples. Most of these systems have three types of locomotion: wheel mode, where they act as a conventional wheeled mobile robot; step mode, where the wheels are locked and they act as a simple legged system; and hybrid mode, with cooperation of legs and wheels.
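The three locomotion modes just listed lend themselves to a very simple supervisory rule. The Python sketch below shows one plausible way such a mode selector could be written; the terrain-roughness and slope thresholds are chosen arbitrarily for illustration and are not parameters of any of the cited robots.

```python
# Illustrative supervisory mode selection for a hybrid wheel/leg robot.
# Threshold values are arbitrary assumptions, not parameters of the cited systems.
from enum import Enum

class LocomotionMode(Enum):
    WHEEL = "wheel mode (wheels only)"
    STEP = "step mode (wheels locked, legs only)"
    HYBRID = "hybrid mode (legs and wheels cooperate)"

def select_mode(terrain_roughness, slope_deg):
    """Pick a locomotion mode from coarse terrain estimates.

    terrain_roughness -- normalised roughness estimate in [0, 1]
    slope_deg         -- local ground slope in degrees
    """
    if terrain_roughness < 0.2 and slope_deg < 10:
        return LocomotionMode.WHEEL    # flat, structured ground: fastest mode
    if terrain_roughness > 0.7 or slope_deg > 30:
        return LocomotionMode.STEP     # obstacles or steep ground: legs only
    return LocomotionMode.HYBRID       # intermediate terrain: combine both

if __name__ == "__main__":
    for rough, slope in [(0.1, 5), (0.5, 20), (0.9, 35)]:
        print(rough, slope, "->", select_mode(rough, slope).value)
```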

The second category consists of robots with wheels and legs separated, but always acting together to move the system. This category includes, for example, the Chariot II robot [14],[15], which has four articulated legs and two central wheels, the RoboTrac robot [16],[17] with two front legs and two rear articulated wheels, the hybrid wheelchair proposed by Krovi and Kumar [18] and the ALDURO [19],[20]. This category of robots usually offers only the hybrid type of locomotion, but the machines are mechanically simpler and consequently lighter.

Legged locomotion is very effective in extreme ground conditions but poor at high speed; wheeled locomotion is good at high speed but poor on natural terrain. Helsinki University of Technology introduced a way to combine legged and wheeled locomotion to gain effective natural-terrain mobility. This mode of locomotion is called rolking (rolling-walking). In addition to effective mobility, the rolking mode also makes it possible to measure the shape and unevenness of the ground simply by touching it with the wheel. WorkPartner, as illustrated in Fig. 2, is a futuristic service robot designed to be used mainly in urban outdoor environments. Its mobility is based on a hybrid locomotion system, which combines the benefits of both legged and wheeled locomotion to provide good terrain-negotiating capability and a large velocity range at the same time.
good terrain negotiating capability and large velocity range at the same time.


Fig. 2. The WorkPartner robot (HUT).

The Wheeleg robot (Fig. 3) was designed and built by the University of Catania in order to investigate the capabilities of hybrid wheeled/legged structures on rough terrain [21-27]. Possible applications that have been envisaged are humanitarian demining and the exploration of unstructured environments such as volcanoes.

Fig. 3. The Wheeleg robot.

3.2 Peristaltic locomotion

Peristalsis has always been studied in depth because of its diffusion in nature. This mechanism is often used when a fluid has to be pushed along in contact with a natural surface, such as blood in veins and other biological fluids, and some lower-order animals use it for their locomotion. All these animals are invertebrates, but there are differences between them according to their different anatomies and behaviours. The most studied, from the point of view of locomotion, are snails and earthworms. The main applications of peristaltic robots are for diagnostic and surgical purposes, but several prototypes have also been designed and built for disaster relief.
prototypes have been designed and built also for disaster-relief.


3.3 Self-reconfigurable modular systems

In recent years, many research efforts have been devoted to modular robotics and especially to self-reconfigurable systems (SRS), i.e. systems which are able to change their topology dynamically. Such systems are made of sets of mechatronic building blocks which have the capacity to connect to and disconnect from each other. By manipulating their own building blocks, those systems can change their topology (a minimal data-structure sketch of this connect/disconnect idea is given after the lists below). The advantages of modular systems are several:

• Easy and fast to deploy: the robot does not need to be designed and constructed; the modules are already available and can be quickly and easily assembled manually. The required topology for a task is automatically generated by software and the control algorithm is downloaded to the robot.
• Easy and fast to maintain: a deficient module can be easily and quickly replaced.
• High versatility with little equipment: a set of modules can be assembled in various topologies optimised for various tasks.
• Adaptability: it is possible to perform tasks which were not originally foreseen.
• Low cost: the genericity of the modules allows mass production.

Furthermore, a self-reconfigurable system has even more advantages:

• Autonomy: the robot can reconfigure and repair itself.
• Increased versatility: the robot can adapt its topology during a task, and not only before it.
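As referenced before the lists, the following Python sketch is one minimal, hypothetical way to represent such a set of modules and their connect/disconnect operations as a graph. It is only meant to make the idea of a dynamically changing topology concrete and does not correspond to any specific SRS cited here.

```python
# Minimal, hypothetical representation of a self-reconfigurable modular system:
# modules are graph nodes, physical connections are undirected edges.

class ModularSystem:
    def __init__(self, module_ids):
        # adjacency sets: module id -> set of connected module ids
        self.links = {m: set() for m in module_ids}

    def connect(self, a, b):
        """Dock module a to module b (undirected connection)."""
        self.links[a].add(b)
        self.links[b].add(a)

    def disconnect(self, a, b):
        """Undock module a from module b."""
        self.links[a].discard(b)
        self.links[b].discard(a)

    def topology(self):
        """Return the current set of connections as sorted pairs."""
        return sorted({tuple(sorted((a, b)))
                       for a, nbrs in self.links.items() for b in nbrs})

# Example: a chain of four modules reconfigures into a 'T' shape.
srs = ModularSystem(["m1", "m2", "m3", "m4"])
for a, b in [("m1", "m2"), ("m2", "m3"), ("m3", "m4")]:
    srs.connect(a, b)
print("chain:", srs.topology())
srs.disconnect("m3", "m4")
srs.connect("m2", "m4")      # m4 re-docks onto m2
print("T-shape:", srs.topology())
```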

The applications of SRS are numerous, e.g. industrial manipulation, locomotion on rough and difficult terrain, pipe inspection, space station construction, etc.

Because of their versatility and robustness, self-reconfigurable systems are very interesting for space applications. A set of robotic modules could carry out a very wide range of tasks, which would otherwise need numerous big and heavy non-modular robots, including tasks which were not originally foreseen. Moreover, a low-gravity context tremendously reduces the mechanical constraints and permits much more efficient SRS: the system can move more economically, carry heavier loads and form bigger structures. All candidate sites in space, such as orbital stations, asteroids, moons and non-gaseous planets, have less gravity than the Earth.

The mechatronic design of self-reconfigurable robotic modules is quite complex because it involves a high degree of technology for energy storage or production, efficient and low-energy-consumption connection mechanisms, distributed control systems, etc.

The optimisation of the module characteristics (such as connectivity, mobility, geometry) which give the best versatility is a non-trivial problem, because the modules are only components of a modular system which should be able to form an undefined number of different topologies, depending on the task and on the context, such as the local shape of the terrain. Therefore the module design must be optimised for some main and general functions which must be defined first.

3.4 Aerial systems

Over recent years the use of low-cost Unmanned Aerial Vehicles (UAVs) for civilian applications has evolved from imagination to actual implementation. Systems have been designed for fire monitoring, search and rescue, agriculture and mining. In order to become successful, the cost of these systems has to be affordable for the civilian market, and although the cost/benefit ratio is still high, there have been significant strides in reducing it, mainly in the form of platform and sensor cost. Many new research projects are currently being developed with the aim of building totally autonomous systems and groups of cooperating UAVs, and of studying the cooperation between ground vehicles and UAVs.
vehicles and UAVs.


4. Overview of some prototype robots for outdoor applications developed at DIEES

4.1. The Wheeleg Robot

The Wheeleg robot is a hybrid system with two pneumatically actuated front legs, each with three degrees of freedom, and two rear wheels independently actuated by two distinct DC motors. The main idea was to use the rear wheels to carry most of the weight of the robot and the front legs to improve the grip on the surface and to climb over obstacles [27].
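Since the two rear wheels are driven independently, their speed commands follow standard differential-drive kinematics. The sketch below shows this computation with an assumed track width and wheel radius, purely as an illustration; it is not the actual Wheeleg controller or its geometry.

```python
# Differential-drive wheel speed computation for an independently driven wheel pair.
# Geometric parameters are assumptions for illustration, not Wheeleg specifications.
import math

TRACK_WIDTH_M = 0.5   # distance between the two rear wheels (assumed)
WHEEL_RADIUS_M = 0.1  # rear wheel radius (assumed)

def wheel_speeds(v, omega):
    """Convert body velocity commands into wheel angular speeds.

    v     -- forward speed of the robot body [m/s]
    omega -- yaw rate of the robot body [rad/s]
    Returns (left, right) wheel angular speeds in [rad/s].
    """
    v_left = v - omega * TRACK_WIDTH_M / 2.0
    v_right = v + omega * TRACK_WIDTH_M / 2.0
    return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

if __name__ == "__main__":
    # drive forward at 0.3 m/s while turning at 20 deg/s
    print(wheel_speeds(0.3, math.radians(20.0)))
```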

From a historical point of view there are several examples of systems with both wheels and legs. What can be observed from all these systems is that the legs, having better adhesion to the soil, perform most of the traction work, while the wheels support most of the weight of the structure. The robot's dimensions are W × L × H = 66 cm × 111 cm × 40 cm and its total weight is 25 kg. This robot has been tested experimentally both in the laboratory and in outdoor environments, including very unstructured volcanic terrain (Fig. 4).

As a general consideration, it should be remarked that the proposed robot was built as a research prototype and is not suitable for work in real outdoor applications. Several modifications to the mechanical structure and to the electrical connections are required to increase the reliability of the system in real applications. In particular, the kinematics of the legs is very simple and low-cost, but it is not reliable enough for potential applications. Spherical joints should be inserted in the feet to allow the robot to adapt itself better to different surfaces.

It is the authors' opinion that this hybrid robot combines not only the advantages of wheeled and legged robots, but also some of their drawbacks. For example, some of the slow-speed and stability problems of legged systems are also present in the Wheeleg. Moreover, some of the traction problems typical of wheeled robots have also been experienced. However, in many situations, such as on surfaces with rocks, the help of the legs has been demonstrated to greatly improve the locomotion capabilities of the robot, without excessively reducing its speed. Such a situation is probably common to other hybrid architectures as well and is mainly due to the current limitations of the technology for designing and building more efficient mechanical legs.

The design of the control architecture was an important aspect, since a strong interaction between the many different sensors and actuators was needed. The solution developed has proved suitable for the intended purposes and has not revealed any particular problem. In any case, the experience gained from this robot was considered very useful and will be exploited in the design of the next generation of hybrid and legged systems.

Figure 4. The Wheeleg on Etna volcano.


4.2. The ROBOVOLC system and the small-scale prototypes

The main objective of this EC-funded project was the development and demonstration of an automatic robotic system to perform measurements in a volcanic environment. The partners of the project were the University of Catania, the University of Leeds, INGV, IPGP, ROBOSOFT and BAE SYSTEMS.

The selection of the most appropriate locomotion technique for the ROBOVOLC system was carried out by evaluating the most promising techniques and comparing them against certain criteria, e.g. reliability, rough-terrain performance, suitability for purpose, etc.
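Purely as an illustration of how such a criteria-based comparison can be organised, the short Python sketch below scores a few hypothetical locomotion concepts against weighted criteria. The weights, candidates and scores are invented and do not reproduce the actual ROBOVOLC trade-off study.

```python
# Illustrative weighted-criteria comparison of candidate locomotion concepts.
# Weights and scores are invented for illustration; they are not project data.

criteria_weights = {          # higher weight = more important (assumed)
    "reliability": 0.4,
    "rough_terrain_performance": 0.35,
    "payload": 0.15,
    "cost": 0.10,
}

candidates = {                # scores on a 0-10 scale (assumed)
    "articulated six-wheel rover": {"reliability": 8, "rough_terrain_performance": 7,
                                    "payload": 8, "cost": 6},
    "legged walker":               {"reliability": 4, "rough_terrain_performance": 9,
                                    "payload": 3, "cost": 3},
    "tracked vehicle":             {"reliability": 7, "rough_terrain_performance": 6,
                                    "payload": 9, "cost": 7},
}

def weighted_score(scores):
    """Weighted sum of the criterion scores for one candidate."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranking:
    print(f"{name}: {weighted_score(scores):.2f}")
```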

The selection was also based on trials of the M6 and P6W platforms. The M6, shown in Fig. 5, has an articulated chassis capable of adapting to very rough terrain [3]. However, not having a substantial central body, the payload of this system was low. Some improvement was obtained with the P6W system, a small-scale prototype with reduced mobility but with the capability to carry a more substantial payload (Fig. 6). In particular, this prototype was used to test traction control strategies in the laboratory. Following this testing, also performed on volcanic sites, the final locomotion concept chosen was a six-wheeled rover with an adaptive chassis, a large-range semi-active suspension and differential steering.

The final ROBOVOLC system is shown in Figs. 7 and 8. The system is composed of a communication system, a power supply system, a control system, a navigation system, a user interface and a science package.

Communication system: The telecommunications system is based on redundancy. It uses a standard high-power wireless LAN interface for most of the data exchange, at a high data rate (10 Mbps), and a low-speed radio modem (100 kbps) for essential tele-operation, e.g. when the robot is out of the line of sight of the base station. Other radio links comprise two analogue video channels and a differential correction channel for the GPS.
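A redundant link of this kind is typically handled by a small supervisory rule that falls back to the low-rate modem whenever the high-rate link is lost. The Python sketch below is a hypothetical illustration of such a selection rule; the timeout and the fall-back behaviour are assumptions and do not describe the actual ROBOVOLC communication software.

```python
# Hypothetical link-selection rule for a redundant telemetry system:
# a high-rate wireless LAN link with a low-rate radio modem as fall-back.

WLAN_RATE_BPS = 10_000_000    # 10 Mbps, from the system description
MODEM_RATE_BPS = 100_000      # 100 kbps, from the system description

def select_link(wlan_alive, modem_alive, last_wlan_packet_age_s, timeout_s=2.0):
    """Return the link to use for tele-operation traffic."""
    wlan_usable = wlan_alive and last_wlan_packet_age_s < timeout_s
    if wlan_usable:
        return ("wlan", WLAN_RATE_BPS)          # full telemetry and video control
    if modem_alive:
        return ("radio_modem", MODEM_RATE_BPS)  # essential tele-operation only
    return ("none", 0)                          # no link: stop the rover and wait

if __name__ == "__main__":
    print(select_link(True, True, 0.1))    # normal operation
    print(select_link(True, True, 5.0))    # WLAN stale: fall back to modem
    print(select_link(False, False, 9.9))  # no link at all
```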

Fig. 5. The M6 robot.
Fig. 6. The P6W prototype platform.

Power supply: The chosen power supply solution employs lead-acid batteries; however, to extend the mission duration, a hybrid internal combustion (IC) engine generator has been designed and manufactured. The optimum mode of operation uses the IC generator in combination with the batteries, as a back-up and recharging system. This arrangement greatly extends the mission duration and enhances the capability of the system.

Control: The infrastructure design is based on the use of general-purpose embedded computer boards, with a combination of COTS and bespoke software running on these platforms. The various tasks are distributed over three PC104-based computers located on the rover to execute the main tasks of manipulator control, locomotion and communication control, and sensor control. The low-level motion controllers selected were RoboSoft RMPC-555 units. These motor controllers are used for both rover and manipulator motion and are interfaced to the PCs through a CAN network.
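To give a feel for what "interfaced to the PCs through a CAN network" involves in practice, the sketch below sends a hypothetical wheel-speed command as a CAN frame using the python-can package. The arbitration ID, payload layout and channel name are invented for illustration; they are not the ROBOVOLC or RMPC-555 message format.

```python
# Hypothetical example of sending a wheel-speed command over CAN with python-can.
# The arbitration ID, payload encoding and channel are illustrative assumptions only.
import struct
import can

def send_wheel_speed(bus, left_rpm, right_rpm):
    """Pack two signed 16-bit wheel speeds [rpm] into one 4-byte CAN frame."""
    payload = struct.pack(">hh", int(left_rpm), int(right_rpm))
    msg = can.Message(arbitration_id=0x200,      # invented ID for the drive node
                      data=payload,
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # 'socketcan' on channel 'can0' is an assumption about the host setup.
    with can.interface.Bus(channel="can0", interface="socketcan") as bus:
        send_wheel_speed(bus, left_rpm=120, right_rpm=100)
```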

Navigation: This covers localisation and path planning based on multisensor data fusion algorithms and map data. For most operations the decisions are made by a human operator (tele-operation), but automatic obstacle avoidance has been incorporated.
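The obstacle avoidance mentioned here can be as simple as vetoing or scaling the operator's speed command when a range sensor reports an obstruction. The sketch below illustrates that kind of guard logic with invented distance thresholds; it is not the algorithm actually fielded on ROBOVOLC.

```python
# Illustrative obstacle-avoidance guard wrapped around a tele-operation command.
# Distance thresholds are invented values, not ROBOVOLC parameters.

STOP_DISTANCE_M = 1.0   # closer than this: refuse to move forward (assumed)
SLOW_DISTANCE_M = 3.0   # closer than this: scale the speed down (assumed)

def guard_speed(commanded_speed, min_front_range_m):
    """Limit the operator's forward speed command using a front range reading."""
    if commanded_speed <= 0.0:
        return commanded_speed                 # reversing is not restricted here
    if min_front_range_m < STOP_DISTANCE_M:
        return 0.0                             # obstacle too close: stop
    if min_front_range_m < SLOW_DISTANCE_M:
        scale = (min_front_range_m - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)
        return commanded_speed * scale         # ramp the speed down smoothly
    return commanded_speed                     # clear path: pass the command through

if __name__ == "__main__":
    for rng in (5.0, 2.0, 0.5):
        print(rng, "->", guard_speed(0.8, rng))
```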

User Interface: A graphical man-machine interface was designed to allow the volcanologists to<br />

operate the robot via two joysticks and a touch screen during the missions without specialist<br />

knowledge of the system. Specific functions have been developed to allow teach and repeat<br />

operations, intervene preset missions to explore manually and to set additional waypoints.<br />

Science tasks: The Science package comprises three main subsystems: the Manipulator, the Pan-tilt turret and the gas sampling system. A 5-degree-of-freedom SCARA manipulator has been specifically designed for the Robovolc system to allow the collection of rock samples, the placing and picking up of instruments at specific locations, and the collection of gas samples in the proximity of fumaroles. A three-finger gripper has been designed with force sensors to pick up rocks with a maximum diameter of 15 cm. The Pan/tilt turret carries the following sensors: a digital video-camera recorder, a high-resolution still-image camera, an infrared camera and a Doppler radar for gas speed measurement. The user can orient the turret and can tele-operate all the installed sensors. Gas sampling is a specific and demanding operation required by the Robovolc project. The main problems for gas sampling are that gas temperatures can rise above 600 °C, that extremely corrosive acid components are present, and that contamination of the gases with the surrounding atmosphere must be avoided at all costs. To perform such precise gas sampling a new system has been designed, comprising a probe for sampling the gas, a gas collection system and specific gas collection bottles.

Several laboratory, indoor and outdoor tests were carried out to assess the functionality of the final system. In addition, three on-site test campaigns on the Etna volcano were performed, in September 2002, June 2003 and August 2003, to demonstrate locomotion, rock and gas sampling, terrain reconstruction, tele-operation capabilities, operational life and telemetry. Figures 7 and 8 show some pictures from these test campaigns. More details concerning the final tests and results are reported on the project web site [4].

Fig.7. The final ROBOVOLC system tested inside the ETNA crater formed during the January 2003 eruption.


Fig. 8. ROBOVOLC system collecting a lava rock sample and tested on rocky terrain.
4. Conclusions

In this paper some of the activities carried out within WP3 of the CLAWAR network have been briefly presented. They show that many research projects are being carried out within Europe to find outdoor robotic solutions to different kinds of problems. In many cases the resulting prototypes are valid solutions that could also be tested effectively for humanitarian demining. It is the authors' opinion that, even if up to now most of the robots do not appear as reliable and useful as they should, by continuing the common research effort and by testing new solutions the distance between the research laboratories and the real adoption of these systems in minefields will be rapidly reduced.

5. Acknowledgement<br />

This work has been prepared using contributions received from many CLAWAR members. In particular we thank H. Warren and N. Heyes (QinetiQ), Y. Baudoin (RMA), R. Molfino (University of Genova), G. Pezzuto (Dappolonia), A. Halme and S. Ylonen (Helsinki University of Technology), P. Bidaud (LRP) and B. Bell (BAE Systems). Their help is gratefully recognized and acknowledged.

6. References<br />

[1] The CLAWAR Network, http://www.clawar.net.<br />

[2] G. Muscato, “CLAWAR 2 WP3 Application sectors Year 2 Report”, GIRT-CT-2002-05080, May 2004.<br />

[3] U. Schmucker, Fire fighting robots requirements, notes from CLAWAR meeting Catania 09/2003 and Karlsruhe<br />

11/2003. See also [1].<br />

[4] The Robovolc project homepage http://www.robovolc.diees.unict.it<br />

[5] The Roboclimber web pages http://www.t4tech.com/roboclimber<br />

[6] A. Halme, I. Leppanen and S. Salmi , "Development of Workpartner Robot - Design of actuating and motion control<br />

system", CLAWAR 99, Portsmouth 1999.<br />

[7] A. Halme, I, Leppanen, S.Salmi, S, Ylonen, "Hybrid locomotion of a wheel-legged machine", CLAWAR 2000<br />

<strong>International</strong> Conference on Climbing and Walking Robots, Madrid, Spain , 2-4 October 2000.<br />

[8] I. Leppanen, S. Salmi and A. Halme, "Workpartner - HUT Automations new hybrid walking machine", CLAWAR<br />

98, Brussels 1998.<br />

[9] G. Endo and S. Hirose, "Study on Roller-Walker (Multi-mode steering Control and Self-contained Locomotion),<br />

ICRA 2000, <strong>International</strong> Conference on Robotics & Automation, San Francisco, April 2000.<br />

[10] O. Matsumoto, S. Kajita, M. Saigo and K. Tani, "Dynamic Trajectory Control of Passing Over Stairs by a Biped<br />

Leg-Wheeled Robot with Nominal Reference of Static Gait", Proc. of the 11th IEEE/RSJ <strong>International</strong> Conference on<br />

Intelligent Robot and Systems, pp. 406-412, 1998.<br />

[11] O.Matsumoto, S.Kajita, M.Saigo and K.Tani, "Biped-type leg-wheeled robot", Advanced Robotics, Vol.13, No.3,<br />

pp.235-236, 1999.<br />

[12] H. Adachi, N. Koyachi, T. Arai, A. Shimuzu, Y. Nogami, "Mechanism and Control of a Leg-Wheel Hybrid Mobile<br />

Robot", Proc. of the IEEE/RSJ <strong>International</strong> Conference on Robotics and Automation, pp. 1792-1797, 1999.


[13] F. Benamar, P. Bidaud, F. Plumet and G. Andrade, "A high mobility redundantly actuated mini-rover for self<br />

adaptation to terrain characteristics", CLAWAR 2000 <strong>International</strong> Conference on Climbing and Walking Robots,<br />

Madrid, Spain , 2-4 October 2000.<br />

[14] Y.J. Dai, E. Nakano, T. Takahashi, H. Ookubo, “Motion Control of Leg-Wheel Robot for an Unexplored Outdoor<br />

Environment”, Proceedings of the 1996 IROS Intelligent Robots and systems, Nov 4-8, 1996.<br />

[15] E. Nakano, S. Nagasaka, “Leg-Wheel Robot: A Futuristic Mobile Platform for Forestry Industry”, Proceedings of<br />

the 1993 IEEE/Tsukuba <strong>International</strong> <strong>Workshop</strong> on Advanced Robotics”, Tsukuba, Japan, Nov. 8-9, 1993.<br />

[16] K. Six., A. Kecskeméthy, "Steering Properties of a Combined Wheeled and Legged Striding Excavator", In:<br />

Proceedings of the Tenth World, Congress on the Theory of Machines and Mechanisms, Oulu, Finland, June 20-24, vol.<br />

1, pp. 135-140. IFToMM (1999)<br />

[17] C. Schuster, T. Krupp, A. Kecskemèthy, "A non-holonomic tyre model for non-adhesive traction and braking<br />

manouvres at low vehicle speed", Proceedings of the NATO Advanced Study Institute on Computational Methods in<br />

Mechanisms, Varna, Bulgaria, June 1997.<br />

[18] V Krovi, V. Kumar, "Modeling and Control of a Hybrid Locomotion System, " In ASME Journal of Mechanical<br />

Design, Vol. 121, No. 3, pp. 448-455, September 1999.<br />

[19] M. Hiller, J. Müller, U. Roll, M. Schneider, D. Schröter, M. Torlo, D. Ward ,"Design and realization of the<br />

anthropomorphically legged and wheeled Duisburg robot ALDURO." Proc. of the 10th World Congr. of theory of<br />

Machines and Mechanisms. IFToMM, Oulu, Finnland, 1999.<br />

[20] J. Müller, M. Hiller, “Design of an energy optimal hydraulic concept for the large-scale combined legged and<br />

wheeled vehicle ALDURO”, Proceedings of the 10th World Congress on the Theory of Machines and Mechanisms,<br />

Oulu, Finland, June 1999.<br />

[21] S.Baglio, G. Muscato, G. Nunnari, N. Savalli, “Robots and sensors for HUDEM at the University of Catania”,<br />

European Journal of Mechanical and Environmental Engineering, Vol. 44, N.2,pp.81-84, 1999.<br />

[22] S. Guccione, G. Muscato, "Experimental outdoor and laboratory tests with the hybrid robot WHEELEG",<br />

Proceedings of the 3rd <strong>International</strong> Conference on Field and Service Robotics, Helsinki (Finland) June 11-13, 2001<br />

[23] S. Guccione, G. Muscato,"Test of the hybrid robot WHEELEG on a Volcanic Environment", Video Proceedings of<br />

the 2001 IEEE/ASME <strong>International</strong> Conference on Advanced Intelligent Mechatronics, Como, Italy July 2001, p.3.<br />

[24] G. Lami, S. Guccione and G. Muscato, "Control strategies and architectures for the robot WHEELEG", CLAWAR<br />

2000 <strong>International</strong> Conference on Climbing and Walking Robots, Madrid, Spain , 2-4 October 2000.<br />

[25] G. Muscato and G. Nunnari, “Leg or Wheels ? WHEELEG a hybrid solution”, CLAWAR 99 <strong>International</strong><br />

conference on climbing and Walking Robots, Portsmouth, U.K. , 14-15 September 1999.<br />

[26] M. Lacagnina, G. Muscato, R. Sinatra, "Kinematics of a hybrid robot Wheeleg", 5th IEEE <strong>International</strong><br />

Conference on Intelligent Engineering Systems 2001, Helsinki (Finland), 16-18 Sep 2001.<br />

[27] S. Guccione, G. Muscato, "Control Strategies Computing architectures and Experimental Results of the Hybrid<br />

Robot Wheeleg", IEEE Robotics and Automation Magazine, (IEEE Piscataway, U.S.A.), Vol.10, N.4, pp.33-43,<br />

December 2003.


“Robotic System for Unstructured Outdoor Environments”<br />

V.Gradetsky*, M.Knyazkov*, L.Meshman**, M.Rachkov***<br />

*The Institute for Problems in Mechanics Russian Academy of Science<br />

Tel.: (7095) 434-41-49 ; Fax: (7095) 938-20-48 ; e-mail: gradet@ipmnet.ru<br />

**Federal State Establishment All-Russian Research Institute for Fire Protection<br />

Ministry of the Russian Federation for Civil Defence, Emergencies and Elimination of<br />

Consequences of natural Disasters<br />

(FGU VNIIPO EMERCOM OF RUSSIA)<br />

12, VNIIPO, Balashikha, Moscow Region, 143903<br />

Fax: (095) 529-82-52<br />

***The Moscow Industrial University<br />

Tel.: (095) 277-28-39 ; Fax: (095) 274-63-92 ; e-mail: rachk-v@mail.msiu.ru<br />

Abstract<br />

The paper presents some results of a study of mobile robot motion in emergency conditions, for operations such as fire-fighting prevention, mine search and demining, and rescue operations after natural catastrophes and explosions.

Mobile robot motion in dangerous environments requires a special design of the mechanical and sensory control systems, providing the important qualities of action: reliable working operations, a high level of manoeuvrability, sufficient cross-country capability, autonomous motion over some parts of the route, a supervisory mode of operation through a man-machine interface, cooperative control between the transport system and the on-board tools, fast exchange of measurement information, and the ability to make decisions in unpredicted situations arising from the environment.

The survey emphasises some key problems of mobile robot motion in unstructured environments.

1. Introduction<br />

Robots are very attractive when it is necessary to work in dangerous environments instead of people. Such environments may be characterised by high temperatures, explosive conditions, high levels of radiation, or the risk of mechanical trauma. To operate in such conditions, robots have to be equipped with special protective materials, and with transport, sensor, detector and control systems that guarantee the achievement of the assigned task.

The problem of demining automation was considered in [1], where it was suggested that robots can perform well-defined tasks. Navigation and inspection methods [2, 3, 5] were developed to address demining problems, and mine detection equipment has been developed in many countries, for example [4, 6].

Robots for fire-fighting operations have been developed to date by several organisations [7]. It was necessary to control the mobile robot's motion along a planned trajectory in order to perform reliable operations in dangerous environments [8, 9]. The problems of mine clearance robots were presented in the Proceedings of the IARP Workshop on Robots for Humanitarian Demining and other conferences [10–13].

The mobile robot’s motion in dangerous environments is distinguished by special<br />

design of mechanical and sensory control systems, providing the important qualities of<br />

actions including the reliability of working operations, high level of maneuverability,<br />

necessary capacity for cross-surface travel, possibilities for autonomous motions over<br />

some parts of the route, availability for supervision mood of operations using manmachine<br />

interface, cooperative control between transport system and on-board tools, the<br />

fast exchange of measuring information and decision making ability in some arising<br />

nonpredicted situations of environments [10-14].<br />

The survey emphasises some key problems of mobile robot motion in dangerous environments: motion control and trajectory planning for obstacle avoidance, detection methods for the automation of demining using mobile robots for horizontal motion and wall-climbing robots, fire-fighting robotic operations, and some examples of mobile robots.

The paper presents some results of a study of mobile robot motion in emergency conditions, for operations such as fire-fighting prevention, mine search and demining, and rescue.

2. Main requirements for mobile robot’s motion in unstructured environments.<br />

Dangerous environments are characterised by indeterminacy, dynamic change and unpredictability.

To satisfy the demands of reliable motion, a high level of manoeuvrability and inspection of the operations performed, robots have to be equipped with a multifunctional or fused sensory system. The multifunctional sensory system supplies the robot control system with detection information about the route, used for navigation, for identifying obstacles, for searching for the objects to be handled and for inspecting the technological operations.

On the basis of the sensory information, the control system has to realise not only pre-programmed motion but also special algorithms, including pattern recognition algorithms for obstacle avoidance and simple decision-making algorithms, to allow motion in an uncertain environment. In some cases the environment changes dynamically, for example in a fire-fighting situation or with moving obstacles. The control system has to perform trajectory planning with obstacle avoidance and realise simple decision making; decision making for intelligent motion can be based on fuzzy logic or neural networks.

In this case the control system has to be rather sophisticated and consists of navigation, estimation and decision-making blocks. The structure of such a system may include local and global feedback loops, to control the robot motion not only along the cruise-control curve but also in the zone near obstacles or working objects, such as a mine. Special blocks are responsible for the manoeuvrability and cross-country capability of the robot. One solution can be the combination of a robot for horizontal motion with a wall-climbing robot in the design of the transport mechanical system.

Another solution can be a multilink biped locomotion robot [7] combining legs, pads and wheels. Combining a climbing transport system with ordinary wheels significantly improves robot motion in dangerous conditions. Fast reaction and a small response time of the control and mechatronic robot systems are needed for high-quality mobile robot motion.

Adaptation to changing conditions is an important quality of such robots. Thanks to the ability of the pedipulators of a climbing robot to adapt to various obstacles, for example stones, small hills, tree stumps, or artificial obstructions and barriers, the robot can keep the demining, navigation and other sensors in their working position during searching operations [13, 14].

The decision-making production rules of the control system have to select the appropriate action, for example overcoming or avoiding an obstacle, depending on the identified obstacle parameters.

3. Demining mobile robots motion in unstructured environments.<br />

The robot that has a task to move across an unknown minefield should have four sensor<br />

blocks (Fig. 1).<br />

Fig. 1. Sensor blocks of the robot<br />

The internal sensors allow feedback control of the transport motion and identification of the robot position; they include optical sensors and accelerometers. The navigation sensor block has an electronic compass, inertial sensors and a global triangulation system that uses a position-sensitive detector. Obstacle sensing is provided by a sonar system. A metal detector, an IR detector and a chemical sensor are applied as the mine sensors [1].

A minefield on rough, sloping terrain, with mines in both metal and plastic cases, was chosen as the target environment for the system to be developed. The minefield can have stones up to 150 mm high, the angle of the slope can be up to 50 degrees and the diameter of the mines can be up to 200 mm. The sensor block can have a weight of up to 30 kg.


Walking robots have high adaptability on rough terrain and are well suited to overcoming stones and moving on sloping surfaces. A pedipulator (legged) robot can achieve stable, and also active, footing. If the robot mistakenly detonates a landmine, the damage will be limited to the end of the leg, although this approach requires lightweight, inexpensive, replaceable legs; by replacing the broken leg, the robot can be restored.

Mines with both metal and plastic bodies can be detected by means of a combined sensor block that contains devices based on different principles. An analysis of the combined sensor block information provides high reliability of mine detection.
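The combined analysis of the metal detector, IR detector and chemical sensor outputs can be sketched as a simple weighted vote. The weights and threshold below are hypothetical assumptions; the actual fusion rule used by the system is not specified in this paper.

# Minimal sketch of combining several mine sensors (weights/threshold assumed).
def fuse_mine_sensors(metal_conf, ir_conf, chem_conf,
                      weights=(0.5, 0.3, 0.2), threshold=0.45):
    """Each input is a detection confidence in [0, 1] from one sensor.
    Returns (detected, score) where detected is True when the weighted
    combination exceeds the threshold."""
    score = (weights[0] * metal_conf +
             weights[1] * ir_conf +
             weights[2] * chem_conf)
    return score >= threshold, score

# Example: strong metal response, weak IR signature, no chemical trace.
detected, score = fuse_mine_sensors(metal_conf=0.9, ir_conf=0.3, chem_conf=0.0)
print(detected, round(score, 2))   # True 0.54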

The design of the transport module of the robot is shown in Fig. 2. It consists of longitudinal and latitudinal pneumatic cylinders, whose bodies are connected symmetrically and which have a 200 mm stroke in order to cover the maximum mine size in one stroke. Each pneumatic transport cylinder has two pedipulators fixed at the ends of the piston rods. A pedipulator consists of a lifting cylinder with a 150 mm stroke, sufficient to overcome the maximum stone obstacle, and a foot with a toothed contact surface to improve the robot's climbing ability. The mine detection block is connected to the front part of the robot. The linear position sensor of longitudinal motion is placed on the body of the longitudinal cylinder; the linear position sensor of latitudinal motion and the detection block are placed on the body of the latitudinal cylinder.


Fig. 2. Design of the transport module of the robot.<br />

1, 2 - longitudinal pneumatic cylinders; 3, 4 - latitudinal pneumatic cylinders; 5 -<br />

lifting cylinder; 6 – foot; 7 - metal detector; 8 - IR detector; 9 - chemical sensor; 10 -<br />

linear position sensor of longitudinal motion; 11 - linear position sensor of latitudinal<br />

motion; 12 – valve units; 13 - supply rotation block; 14 - electronic compass; 15 - onboard<br />

control computer.<br />



The valve units are placed at both sides of the robot providing minimum tubing<br />

length. They are supplied from a supply rotation block connected to the main air<br />

pressure line. The rotation block allows free rotation of the robot in relation to the<br />

umbilical cord with the air supply pipe. An electronic compass is installed in the front<br />

of the platform. The on-board control computer is placed in the centre of the platform.<br />

The pneumatic cylinders can be actuated in two modes:<br />

- the first mode is the transport mode. In this case, the longitudinal cylinder moves its pedipulators at maximum velocity, using the full length of the piston rods. During this motion the robot is connected to the ground by means of the latitudinal cylinder pedipulators, which serve as support cylinders for motion in this direction, while the longitudinal cylinder pedipulators are lifted. After the first step the longitudinal cylinder pedipulators are placed on the ground and the latitudinal cylinder pedipulators are lifted; in this position the sensor block can be moved one step towards the working zone, and so on (a minimal sketch of this gait sequence is given after this list). The robot can change its motion direction by 90° by actuating the latitudinal cylinders as transport cylinders instead of the longitudinal ones. Rotation of the robot can be carried out by simultaneous motion of the longitudinal and/or latitudinal cylinders in opposite directions while all their feet are in contact with the motion surface;
- the second mode is the searching mode. In this mode the sensor block carries out its searching functions and is moved along a scanning trajectory. This trajectory is produced by the latitudinal and longitudinal cylinders, which are actuated with a nominal searching velocity. The value of the searching velocity is determined by the characteristics of the sensor block.
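The alternating transport gait described above (latitudinal pedipulators support the body while the longitudinal cylinder strides, then the roles swap) can be summarised by a small sequence generator. The command names below are hypothetical labels, not the robot's real valve interface.

# Minimal sketch of the alternating pneumatic gait (command names assumed).
def transport_gait(steps):
    """Yield valve commands for 'steps' strides along the longitudinal axis."""
    for _ in range(steps):
        # latitudinal pedipulators carry the robot, longitudinal ones swing
        yield ("LIFT", "longitudinal"), ("EXTEND", "longitudinal")
        # swap support: plant the longitudinal feet, lift the latitudinal ones
        yield ("LOWER", "longitudinal"), ("LIFT", "latitudinal")
        # pull the body (and the sensor block) forward by one 200 mm stroke
        yield ("RETRACT", "longitudinal"), ("LOWER", "latitudinal")

for phase in transport_gait(steps=2):
    print(phase)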

The robot motion and mine sensing are controlled by means of a distributed control<br />

robotic system. This system provides remote control from a safe distance in an<br />

automatic mode or in a teleoperated mode by an operator. The distributed control<br />

robotic system architecture is shown in Figure 3.


Fig. 3. Structure of the distributed control robotic system (blocks shown in the diagram: location sensor, detection block, linear sensor block, compass, optical sensor, operator, central computer, on-board computer, interface, robot drive system, end-effector signals, navigation program, covering program, data fusion program).

A search head of the ATMID metal detector [4] is used in the detection block of the robot. Metal detectors are based on the fact that variable electromagnetic fields produce responses in metallic objects. The transmitting coil, embedded in the search head of the metal detector, generates such a field. As the search head is swept over the ground, the receiving coil in the search head detects the variations in the electromagnetic field caused by metallic objects. The variations are then processed to generate a signal indicating the presence of metal in the ground beneath the search head. The metal detector is installed on the front part of the robot as shown in Fig. 4.
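A minimal processing sketch for the detector output is shown below: the received signal is compared with a background level estimated over mine-free ground, and a metal alarm is raised when the deviation exceeds a threshold. The background window and threshold values are assumptions for illustration; they are not taken from the ATMID documentation.

# Minimal sketch: flag metal when the detector output deviates from the
# background level measured over mine-free ground (window/threshold assumed).
def detect_metal(samples_hz, background_window=20, threshold_hz=150.0):
    """samples_hz: detector output along the scan; returns (baseline, hit indices)."""
    baseline = sum(samples_hz[:background_window]) / background_window
    hits = [i for i, f in enumerate(samples_hz)
            if abs(f - baseline) > threshold_hz]
    return baseline, hits

# Example scan: flat background around 500 Hz with a response near the mine.
scan = [500.0] * 30 + [700.0, 1500.0, 2100.0, 1400.0, 700.0] + [500.0] * 30
baseline, hits = detect_metal(scan)
print(baseline, hits)   # 500.0 [30, 31, 32, 33, 34]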



Fig. 4. Position of the metal detector on the robot<br />

1 - metal detector, 2 – robot platform, 3 – support, 4 – height adjusting unit, 5 –<br />

pedipulator, 6 – mine, 7 – metal part of the mine, 8 – transmit field, 9 – receive field.<br />

The detector was tested on a real mine without explosives: a Portuguese mine of the MAPS type, with a plastic case of 85 mm diameter containing a 1 g metal part. A picture of the mine is shown in Fig. 5.

Fig. 5. Mine of MAPS type on the surface of the test field.<br />

Figure 6 shows the influence of the ground type on the mine detection signal for<br />

sand, soil and stones at the mine depth of 10 cm.<br />

[Plot: metal detector output frequency (Hz, 0-2500) versus scanning distance (0-200 mm), for a mine at 10 cm depth in sand, soil and stones.]

Fig. 6. Influence of the ground type on the mine detection signal


Sand gives the strongest signal from the mine. Stones reduce the output signal by about 5% and compact soil leads to a reduction of about 2%. While scanning, the mine in sand was detected at a 20 mm greater distance than in the other grounds.

A new infrared detection sensor for plastic or metallic landmines was developed to be integrated in the robot. The mine sensing is based on infrared image analysis during microwave soil heating and subsequent cooling. The detector prototype contains a microwave klystron emitting 1 kW at a frequency of 2.45 GHz and two infrared sensors sensitive in the range of 8-14 µm. Depending on the soil dielectric properties, the emitted radiation is absorbed, reflected or transmitted: common plastic materials are transmissive, metals reflect the microwaves, and wet soil absorbs the radiation and converts it to heat. Using this sensor, it is possible to image thermal gradients at the soil surface and to detect different rates of temperature change depending on the soil content [5].

The results shown in Fig. 7 were obtained after heating of the simulated minefield; the figure is a cut through a volume of soil with a plastic landmine in the interior. For the simulation, an initial temperature of 290 K was assumed in all materials.

The simulation is composed of two phases. In the first phase, the workspace is exposed to homogeneous electromagnetic radiation of 2.45 GHz, 1 kW over 0.1225 m², and the temperature rises according to the material properties. The initial values of the second phase are the final values of the first phase; in this phase the material is not exposed to radiation, so only diffusion by heat conduction occurs.
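The cooling phase of the simulation can be sketched as a one-dimensional explicit finite-difference heat-conduction step; the thermal diffusivity, grid spacing and time step below are illustrative assumptions, not the parameters used by the authors.

# Minimal sketch of the cooling (conduction-only) phase as 1-D explicit
# finite differences; diffusivity, grid and time step are assumed values.
def cool_step(temps_k, alpha_m2_s, dx_m, dt_s):
    """One explicit update of dT/dt = alpha * d2T/dx2 on a 1-D profile."""
    r = alpha_m2_s * dt_s / dx_m ** 2
    assert r <= 0.5, "explicit scheme unstable: reduce dt or refine dx"
    new = temps_k[:]                       # boundary values kept fixed
    for i in range(1, len(temps_k) - 1):
        new[i] = temps_k[i] + r * (temps_k[i + 1] - 2 * temps_k[i] + temps_k[i - 1])
    return new

# Soil column at 290 K with a heated layer (e.g. above the mine) at 313 K.
profile = [290.0] * 5 + [313.0] * 3 + [290.0] * 5
for _ in range(50):                        # let the hot spot diffuse away
    profile = cool_step(profile, alpha_m2_s=5e-7, dx_m=0.01, dt_s=60.0)
print([round(t, 1) for t in profile])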

Fig. 7. Heat distribution (cross-section of air, mine and soil; axes in cm, temperature in K). Fig. 8. IR sensor output after scanning of the heated mine location (mine of MAPS type; temperature in °C).

If a mine is laid at ground level and heated up to 40 °C, the output at the mine location after the robot has scanned the ground with the sensor corresponds to Fig. 8.


Fig. 9. Example of the scanning trajectory to avoid an obstacle or a mine. Fig. 10. Multisensor demining robot on a test field.

The robot starts to check for an obstacle or a mine at the working distance of the corresponding sensor from the detection point and uses this information for avoidance. The developed multisensor robot searching for a mine on a test field is presented in Fig. 10. Thanks to the adaptability of the pedipulators to obstacles (a stone in the picture), the robot can keep the demining sensors in their working position during searching operations.

4. Examples of mobile robot motion in dangerous conditions<br />

The basic concept of the control system intended for mobile robot motion in dangerous environments makes provision for navigation and estimation (Fig. 11). The control system includes local control loops for ordinary feedback control and a global control loop for estimation and simple decision making. The local control loops operate during the whole motion. The navigation subsystem performs sensory information processing, image and pattern recognition, task and path planning, and control of drives and transducers.

The estimation subsystem analyses the prescribed tasks and unpredicted situations, as well as the motion itself, including manoeuvrability.

Wall-climbing robots with on-board manipulators (Fig. 12 and Fig. 13) are controlled by means of this kind of control system. An appropriate path generation algorithm (Fig. 19) is intended for obstacle avoidance in dynamically changing environments.

A prototype of a wall-climbing robot for fire-fighting applications was designed in cooperation with the All-Russian Research Institute for Fire Protection (Fig. 14). The robot has a two-arm on-board manipulator with a load capacity of 120 kg. Cutting and fire-fighting equipment in the arm end-effectors is intended for various fire-fighting operations or for cutting holes in petroleum storage tanks under fire. A diagram of the heat shielding and a plot of the temperature change are shown in Fig. 15 and Fig. 16.

The structure of the control system for trajectory planning with obstacle avoidance (Fig. 17) permits the generation of the necessary actions for the robot motion, with simple decision making based on computer simulation. An example of the robot motion is given in Fig. 18, in which the robot performs obstacle avoidance by manoeuvring via trajectory planning; the picture reflects the computer simulation.

One way of increasing the mobility and reliability of mobile robots is the combination of a mobile robot for horizontal motion with an on-board wall-climbing machine (Fig. 19). The simple manipulator of the mobile robot attaches the wall-climbing machine to the wall so that the machine can then move over the wall.

Examples of a mechanical leg system (Fig. 21) and a mechanical multilink system (Fig. 22) show the possibility of overcoming the angles between crossing surfaces in space.

Conclusion.<br />

The results of R&D on mobile robots intended for motion in dangerous, unpredicted environments have been presented. Mobile robot motion in emergency conditions requires a special design of the mechanical, sensory and control systems, providing a high level of manoeuvrability, motion along irregular surfaces in space, and the possibility of autonomous motion and a supervisory mode of operation, in order to perform working operations such as demining, fire-fighting and rescue.


[Figs. 11-22: uncaptioned photographs and diagrams referenced in the text above.]


References<br />

1. M. Rachkov, L. Marques, A. T. de Almeida, Automation of demining, Textbook,<br />

University of Coimbra, Portugal, 2002, 281 p.<br />

2. Hofmann, B., Rockstroh, M., Gradetsky, V., Rachkov, M., 1998. Acoustic<br />

navigation and inspection methods for underwater mobile robots, In Proc. 1st Int.<br />

<strong>Workshop</strong> on Autonomous Underwater Vehicles for Shallow Waters and Coastal<br />

Environments, Lafayette, USA, S. IV: pp. 1-13.<br />

3. Moita, F., Nunes, U., 2001. Multi-echo technique for feature detection and<br />

identification using simple sonar configurations, IEEE/ASME Int. Conf. On Advanced<br />

Intelligent Mechatronics, pp. 389-394.<br />

4. ATMID mine detecting set, Operating manual, Schiebel Elektronische Geraete<br />

GmbH, Austria, 2000.<br />

5. D. Noro, N. Sousa, L. Marques and A.T. de Almeida, Active Detection of<br />

Antipersonnel Landmines by Infrared, Annals of Electrotechnical Engineering<br />

Technology, Portuguese Engineering Society, 1999.<br />

6. Explosives detection equipment, ARLI SPETSTECHNIKA, Catalogue 2001,<br />

Moscow, 2001.<br />

7. V.Gradetsky, V.Veshnikov, S.Kalinichenko, L.Kravchuk “Mobile Robot’s<br />

Control Motion Over Arbitrarily Oriented Surfaces in the Space”, Moscow, “Nauka”<br />

publ., 2001, pp. 361 (in Russian).<br />

8. Y.Baudoin “Improving the Safety and the Quality assurance with Robotics<br />

Tools”, ISSC Conference, April 18-19, 2002.<br />

9. Y.Baudoin, M.Acheroy “Robotics systems for Humanitarian Demining: Modular<br />

and Generic Approach and Cooperation Under IARP/ITER/ERA Networks, IARP<br />

<strong>Workshop</strong> on Robotics for Humanitarian Demining, HUDEM’02, Vienna, Austria,<br />

November 3-5, 2002, pp.1-6.<br />

10. Proceedings of VR-MECH’01 <strong>Workshop</strong>, BSME, Royal Military Academy,<br />

Brussels, Belgium, November 22-23, 2001, pp.241.<br />

11. Proceedings of IARP <strong>Workshop</strong> on Robots for Humanitarian Demining<br />

HUDEM’02, Vienna, Austria, November 3-5, pp. 110.<br />

12. A.Almeida, L.Marques, M.Rachkov, V.Gradetsky “On-Board Demining<br />

Manipulator”, Proceedings of IARP <strong>Workshop</strong> on Robots for Humanitarian Demining,<br />

Vienna, Austria, November 3-5, pp. 45-50.<br />

13. Ch.Grand, F.Ben Amar, F.Plumet and Ph.Bidaud “Simulation and Control of<br />

High Mobility Rovers for Rough Terrains Exploration”, Proceedings of IARP<br />

<strong>Workshop</strong> on Robots for Humanitarian Demining, Vienna, Austria, November 3-5, pp.<br />

51-56.<br />

14. V.Gradetsky, V.Veshnikov, S.Kalinichenko, G.G.Rizzotto, F.Italia “Fuzzy<br />

Logic Control for the Robot Motion in Dynamically Changing Environments,<br />

Proceedings of 4 th CLAWAR 2001 Conference, September 24-26, 2001, pp. 377-386.


PARADIS: Focusing on GIS Field Tools for<br />

Humanitarian Demining<br />

Sébastien Delhay Vinciane Lacroix Mahamadou Idrissa<br />

Signal And Image Center (SIC)<br />

Royal Military Academy (RMA)<br />

30, avenue de la Renaissance 1000 Bruxelles<br />

sdelhay@elec.rma.ac.be Vinciane.Lacroix@elec.rma.ac.be idrissa@elec.rma.ac.be<br />

1. Introduction<br />

PARADIS stands for a Prototype for Assisting Rational<br />

Activities in Demining using Images from Satellites. The<br />

aim of this project is to improve the planning of<br />

Humanitarian Demining campaigns using Remote<br />

Sensing data and GIS (Geographic Information System)<br />

techniques. In this context, a user interface has been<br />

developed in ArcView GIS to integrate the tools needed<br />

for the management of the campaigns. The interface is<br />

built upon four scales (global, regional, local and<br />

advancement) and really follows the evolution of the<br />

campaign from global to local scale by providing the user<br />

with scale-dedicated tools. The system uses high and very<br />

high resolution satellite images and their interpretation,<br />

onto which one can overlay vector data taken from the<br />

IMSMA (Information Management System for Mine<br />

Action) database, which is used as data repository. The<br />

tool is also compatible with the Belgian EOD<br />

Champassak database, aimed at gathering information<br />

related to the collection of UXO’s on the field. The<br />

PARADIS project is funded by the Belgian Defense; the<br />

developed system will be used by the SEDEE-DOVO<br />

(Belgian Armed Forces Bomb Disposal Unit) during their<br />

campaigns in mine- and UXO-affected countries.<br />

Several problems are inherent to the process of<br />

supporting Humanitarian Demining campaigns with<br />

useful data. Within the framework of a demining<br />

campaign, contaminated areas are often very large and<br />

represent a lot of information. This huge amount of data<br />

has to be compiled and safely stored in a central<br />

repository to avoid loss or corruption of data. Also, these<br />

data need to be represented in an explicit manner, in order<br />

for the user to work with them as effectively as possible.<br />

Even when a system has been developed to solve this<br />

first problem, errors of encoding may appear when the<br />

user enters new data into that system; in order to ensure<br />

the integrity of the data, these errors must be eliminated.<br />

Maps are key elements for campaign management and<br />

field work; however, as demining campaigns generally<br />

take place in developing countries, there is often a lack of<br />

accurate and recent maps for the zone of work.<br />

Information systems have been developed in order to<br />

solve those problems and are now used in countries<br />

affected by Explosive Remnants of War (ERW). IMSMA,<br />

the UN-standard information system for Mine Action,<br />

addresses the first problem. It consists mainly of a<br />

database located at the Mine Action Centre (MAC) of a<br />

specific country or region, into which all Mine Action<br />

related data are centralized. It also contains management<br />

and planification tools that process the information in the<br />

database in order to help decision makers.<br />

EOD IS (Explosive Ordnance Disposal Information<br />

System) focuses on the second problem – data integrity.<br />

This system provides a GIS (Geographic Information<br />

System) based interface to the field worker to ensure that<br />

the data collected on the field is safely conveyed to the<br />

IMSMA database.<br />

This paper explains how PARADIS addresses these<br />

problems, then focuses on the tools being developed for<br />

the field work.


2. Overview of the system<br />

2.1. Original ideas<br />

PARADIS addresses the first problem by compiling<br />

and organizing the data related to the campaign in a<br />

geographic database (GeoDb). As PARADIS is built on a<br />

GIS, it presents the data to the user on a map rather than<br />

on a form. The map representation is more contextual than<br />

the form representation, and enables the user to really<br />

visualize the information, as shown in Figure 1:

Fig. 1: Perimeter of a minefield - GIS representation (map) versus database representation (form), the form listing the X/Y coordinates of the turning points (e.g. 574,659 / 1,608,404; 574,700 / 1,608,450; 574,602 / 1,608,360; 574,563 / 1,608,306).

IMSMA is database-centered, thus it mainly presents<br />

the data on forms. In order to fill the IMSMA database<br />

using a map, the user will be able to enter the data in the<br />

PARADIS system, then export them to IMSMA using the<br />

maXML (Mine Action eXtended Markup Language, [2])<br />

protocol.<br />

Concerning the second problem, because it is<br />

GIS-based, the PARADIS interface intrinsically enables<br />

the user to verify that the data have been correctly entered.<br />

For example, when defining the perimeter of a minefield,<br />

if the user inverts the X and Y coordinates of a turning<br />

point, the latter appears at a totally different spot than<br />

expected. The wrong location of the turning point on the<br />

map warns the user about the encoding error he just made,<br />

and he then is able to correct the error.<br />

Finally, in order to provide the user with useful maps of<br />

the zone of interest, PARADIS uses satellite imagery (see<br />

[1]).<br />

2.2. Designing two dedicated interfaces<br />

The needs of the campaign manager differ from those<br />

of the field operator. While the manager needs tools to<br />

help him make decisions at country and regional levels,<br />

the field operator needs an easy-to-use, lightweight tool to<br />

bring data to the field and collect new information while<br />

there. The PARADIS interface as it is described in [1] has<br />

been split into two separate interfaces in order to better fit<br />

the needs of both the campaign manager and the field<br />

operator.<br />

The manager interface is located in the Mine Action<br />

Center (MAC); it is built on a full-featured GIS. As it<br />

needs to access the central GeoDb and may run big<br />

processes while manipulating the data, it is based on a<br />

desktop or laptop computer. It is hence referred to as the

Desktop Interface.<br />

The field operator will work on a lightweight GIS<br />

interface called the Field Interface. The latter is running<br />

on a Personal Digital Assistant (PDA), whose small size and light weight make it the ideal tool for field work.

Fig. 2: A Personal Digital Assistant<br />

The following figure shows the procedure that a field<br />

activity involves:


Fig. 3: Field activity procedure - data exchange between the Desktop Interface (GeoDb at the MAC) and the Field Interface (PDA, field data), following the points 1, 2 and 3 described below.

Desktop Interface (Point 1):<br />

As the field operator needs geographic data such as maps,<br />

satellite imagery and its interpretation to support his<br />

activities, the first step is to export these data to the PDA.<br />

Note that the PDA user only needs data for the zone where<br />

he will be sent, so only a reduced set of geographic data<br />

needs to be exported. In the Desktop Interface, the<br />

manager reduces the set of data to the region of interest by<br />

drawing a bounding rectangle on the map. He then exports<br />

the enclosed data set to the PDA.<br />
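The export step at Point 1 amounts to clipping the GeoDb contents to the manager's bounding rectangle. A minimal sketch of such a filter is given below; the feature representation (a dictionary with x/y coordinates) is an assumption for illustration and not the actual PARADIS or ArcView data model.

# Minimal sketch (assumed feature format): keep only point features that fall
# inside the bounding rectangle drawn by the manager before export to the PDA.
def clip_to_rectangle(features, x_min, y_min, x_max, y_max):
    return [f for f in features
            if x_min <= f["x"] <= x_max and y_min <= f["y"] <= y_max]

geodb = [
    {"name": "bridge A", "x": 574_700, "y": 1_608_450},
    {"name": "UXO 12",   "x": 574_602, "y": 1_608_360},
    {"name": "hospital", "x": 580_100, "y": 1_612_000},   # outside the zone
]
for_pda = clip_to_rectangle(geodb, 574_500, 1_608_300, 575_000, 1_608_500)
print([f["name"] for f in for_pda])   # ['bridge A', 'UXO 12']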

Field Interface (Point 2):<br />

On the field, the PDA is used to collect new data<br />

such as: location of UXOs, bridges, audio/video files of<br />

people interviews, etc. The user may also edit the existing<br />

set of data, which is very useful to enter information that<br />

cannot be known a priori. In particular, the only way to<br />

know the state of the roads is to go to the field.<br />

Desktop Interface (Point 3):<br />

When the field operator comes back to the office the<br />

manager gets the data from the PDA, verifies their<br />

integrity and puts them into the central GeoDb.<br />

As more and more data comes into the GeoDb, the whole<br />

region gets covered by the data brought in by the different<br />

field operators, which enables the manager to make<br />

decisions at regional level.<br />


3. Description of the Field Interface<br />

The process described above will be used during three<br />

separate phases of the demining campaign: Impact Survey,<br />

Technical Survey (Clearance) and Quality Assessment. In<br />

the following we describe the tools that are being<br />

developed for the Field Interface. Some tools are<br />

dedicated to one phase and will only be used during that<br />

specific phase, while other tools are of a more general use.<br />

3.1. Impact Survey – Road Map tool<br />

During this phase, surveyors are sent to the field to<br />

collect the information that will be used to evaluate the<br />

socio-economic impact of ERWs on the population. In<br />

order to interview the population about the ERW situation,<br />

they go from village to village along the roads of the<br />

region. In this context, the Road Map tool helps manage the itineraries that the different teams of investigators will follow.

Desktop Interface:<br />

This tool allows a road map to be created for each team of<br />

surveyors. In the Desktop Interface, the manager draws<br />

the itinerary to be followed by the team; the resulting road<br />

map is then exported to the PDA.<br />

Field Interface:<br />

Each segment of the road is given:

- a name;<br />

- a status (not practicable, poor or good);<br />

- seasons of practicability (if any);<br />

- a type (tarred road, track);<br />

- a mean speed depending on season of<br />

practicability;<br />

- a mined status reflecting the estimated presence<br />

of ERWs along the road and the types of ERWs.<br />

As one goes along the road, the status of the road may<br />

change (e.g. from poor to good). In order for the Field<br />

Interface to reflect this information, the road segment can<br />

be easily split up into several segments of different status.<br />

Conversely, two segments can get merged into one single<br />

segment.
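A minimal sketch of the split operation is given below. The attribute set follows the list above, but the data structure itself is an assumption for illustration, not the actual Field Interface implementation.

# Minimal sketch (assumed data model): a road segment and a split operation.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass
class RoadSegment:
    name: str
    points: List[Tuple[float, float]]   # polyline vertices
    status: str                         # "not practicable" | "poor" | "good"
    road_type: str                      # "tarred road" | "track"
    mean_speed_kmh: float
    mined_status: str

def split_segment(seg: RoadSegment, at_index: int,
                  new_status: str) -> Tuple[RoadSegment, RoadSegment]:
    """Split a segment at a vertex; the second part takes the new status."""
    first = replace(seg, points=seg.points[:at_index + 1])
    second = replace(seg, name=seg.name + " (cont.)",
                     points=seg.points[at_index:], status=new_status)
    return first, second

road = RoadSegment("R12", [(0, 0), (1, 0), (2, 0), (3, 0)],
                   "poor", "track", 20.0, "suspected AP mines")
a, b = split_segment(road, at_index=2, new_status="good")
print(a.status, len(a.points), "|", b.status, len(b.points))   # poor 3 | good 2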


Fig. 4: The road color reflects the status of the road<br />

(brown= unknown; red= not practicable; orange= poor;<br />

green= good). The user can toggle labels to show the road<br />

name and its mined status.<br />

When the team goes back to the MAC, these data are<br />

imported into the central GeoDb; after grouping the road<br />

maps of the different teams, a regional map of roads is<br />

obtained.<br />

3.2. Technical Survey - Grid tool<br />

This tool will allow the user to follow the work on a<br />

minefield. After delineating the minefield, the user will be<br />

able to draw a grid on the minefield. Each cell in the grid<br />

represents the small area that a deminer works on at a time.<br />

Then, each cell will be edited to show the ammunition<br />

found in each cell and the status of the cell (not cleared,<br />

ongoing, cleared). After a certain number of cells have<br />

been edited, the system will be able to compute the time<br />

for complete clearance of the minefield, given an<br />

estimated future number of detectors on the minefield.
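A minimal sketch of such an estimate is shown below: the average clearance rate per deminer-day is derived from the cells already marked as cleared and extrapolated to the remaining cells for a given number of deminers. The formula and the variable names are assumptions for illustration; the Grid tool's actual computation is not specified in this paper.

# Minimal sketch (assumed formula): extrapolate minefield clearance time from
# the cells already cleared and the planned number of deminers.
def estimate_clearance_days(total_cells, cleared_cells,
                            days_spent, future_deminers, past_deminers):
    cells_per_deminer_day = cleared_cells / (days_spent * past_deminers)
    remaining = total_cells - cleared_cells
    return remaining / (cells_per_deminer_day * future_deminers)

# 400 cells in the grid, 60 cleared by 4 deminers in 10 working days;
# 6 deminers planned for the rest of the campaign.
print(round(estimate_clearance_days(400, 60, 10, 6, 4), 1))   # ~37.8 days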

3.3. General tools<br />

These tools are of a general use and can be used in<br />

whichever phase of the demining campaign (Impact<br />

Survey, Technical Survey or Quality Assessment). The<br />

Layers tool makes the process of introducing new data<br />

into the system simple and powerful; the GeoNote tool<br />

enables the user to quickly associate different types of<br />

data to the same location on the map.<br />

3.3.1. Layers tool<br />

Using this tool the user may select dedicated layers<br />

from a list and insert them in the map (ex: airports, gas<br />

stations, hospitals, towns, UXO’s, etc). The layers are<br />

then loaded with the appropriate pre-defined symbology.<br />

Fig. 5: Layers tool form (lists of possible layers, the layers to load in the map, and a "Layer to edit" box).

The user may also choose to edit one layer directly,<br />

using the “Edit layer” box.<br />

Note that the way of editing a layer changes from one<br />

layer to another. For example, the Bridges and Minefields<br />

layers differ in that a bridge is represented as a point in the<br />

interface, while a minefield is represented as a polygon.<br />

Hence, to define the bridge point the user simply taps on<br />

the screen at the appropriate location. As the minefield is<br />

a polygon, the system behaves differently: when the user<br />

taps on the screen, the benchmark gets defined, then a<br />

form opens and lets the user define the other points. This<br />

is shown in the following figure:


Fig. 6: The form used to define the turning points of a<br />

minefield; these points can be defined by bearing and<br />

distance to the previous point, or by their X and Y<br />

coordinates.<br />

The advantage of the Layers tool is that the system<br />

activates the appropriate editing method depending on the<br />

layer the user has chosen to edit.<br />

3.3.2. GeoNote tool<br />

Sometimes it is useful to associate heterogeneous<br />

information to a single point on the map, such as text,<br />

voice recordings, photos, or videos; in the interface, this<br />

point is called a GeoNote. As an example, if the PDA user<br />

interviews a mine victim, he would create a GeoNote by<br />

tapping the screen at the location of the interview. He<br />

would then associate to the GeoNote the audio file of the<br />

conversation, along with a sketch and a textual<br />

description of where the accident happened.<br />
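A GeoNote can be thought of as a geo-referenced container of heterogeneous attachments; a minimal sketch of such a structure is shown below. The field names are illustrative assumptions, not the PARADIS data model.

# Minimal sketch (assumed fields): a GeoNote grouping mixed media at one point.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeoNote:
    x: float
    y: float
    text: str = ""
    attachments: List[str] = field(default_factory=list)  # file paths

    def attach(self, path: str) -> None:
        self.attachments.append(path)

note = GeoNote(x=574_659.0, y=1_608_404.0,
               text="Interview with mine victim near the bridge")
note.attach("interview_012.wav")    # audio recording of the conversation
note.attach("accident_sketch.png")  # sketch of where the accident happened
print(len(note.attachments))        # 2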

4. Conclusions and future prospects<br />

The current state of the Field Interface has been<br />

presented; the two interfaces are still being developed<br />

based on the ideas and methods of work of the end-users –<br />

the deminers of the SEDEE-DOVO. Hands-on training<br />

sessions will be put in place in order to teach the end-users<br />

how to use the Desktop and Field interfaces. The whole<br />

system will then be validated in real field conditions during test campaigns.

In particular, the validation of the Field Interface will focus on practical details such as the possibly difficult use of the PDA in outdoor conditions (because of the sun, dust, etc.), and the handiness of the interface for the field operator.

5. References<br />

1. S. Delhay, V. Lacroix, M. Idrissa, PARADIS : a<br />

GIS Tool for the Management of Humanitarian<br />

Demining Campaigns, In <strong>International</strong><br />

Conference on Requirements and Technologies<br />

for the Detection, Removal and Neutralization of<br />

Landmines and UXO (EUDEM2-SCOT), VUB,<br />

Brussels, Belgium, September 2003<br />

2. N. Klemm and R. Bailey, Sharing Mine<br />

Action Information,<br />

http://www.hdic.jmu.edu/conference/casualty/maxml_files/frame.htm


Remote Sensing System for Robotics and Aquatic<br />

Related Humanitarian Demining and UXO Detection<br />

Dr. Charles R. Bostater Jr. 1 , Teddy Ghir, Luce Bassetti<br />

Marine & Environmental Optics Lab and Remote Sensing Center<br />

College of Engineering, Florida Institute of Technology<br />

Melbourne, Florida, USA<br />

Abstract<br />

It is becoming increasingly important to understand the remote sensing systems and associated autonomous or semi-autonomous methodologies (robotics and mechatronics) that may be utilized in freshwater and marine aquatic environments. This need comes not only from advances in our scientific understanding and technological capabilities, but also from the desire to ensure that the risks associated with UXO (unexploded ordnance), related submerged mines, and submerged targets and debris left from previous human activities are identified and reduced through detection and removal. This paper describes (a) remote sensing systems and (b) platforms (fixed and mobile), and demonstrates (c) the value of thinking in terms of scalability as well as modularity in the design and application of new systems now being constructed within our laboratory, as well as of future systems. New systems - sensing systems as well as autonomous or semi-autonomous robotic and mechatronic systems - will be essential to secure domestic preparedness for humanitarian reasons. These same systems hold tremendous value, if thoughtfully designed, for other applications, which include environmental monitoring in ambient environments.

I. Background
I.1. Basic Concepts

The engineering talent, effort and economic costs involved with the design of<br />

remote sensing systems aboard robotic and mechatronic systems are not inconsequential.<br />

Typically, the design of remote sensing systems first involves considerations with respect<br />

to which part of the electromagnetic spectrum will be used. Second, one must consider<br />

the “medium” the target is embedded in. Both of these considerations are necessary to<br />

understand which part of the electromagnetic spectrum and associated sensor system may<br />

be appropriate for the detection problem. In essence, most targets are thus embedded inside an environmental medium (water, air, soil or submerged land). In most cases, the "ambient" (outdoors) environment is the location we consider when thinking of UXOs and mines in water or in submerged land underlying the water column.

1 Dr. Charles R. Bostater Jr., Associate Professor, Physical Oceanography and Environmental Sciences;<br />

Director Marine and Environmental Optics Lab & Remote Sensing Center, College of Engineering,<br />

Florida Institute of Technology, Melbourne, Florida, US, 32937 email: bostater@probe.ocn.fit.edu Ph:321-<br />

258-9134, Fax: 321-600-9412. Paper submitted to Proceedings and to be presented at the HUDEM04<br />

Conference, June 16, 2004, Brussels Belgium http://www.ihrt.tuwien.ac.at/hudem04/.


This is what we call “ambient environmental monitoring” with respect to dual uses of<br />

sensing systems for environmental monitoring applications. Figure 1 describes the remote<br />

sensing selection process with respect to the sensing system to be used as described<br />

above.<br />

[Figure 1 diagram text: Remote sensing systems measure a portion of the electromagnetic spectrum; targets or objects are located in an "ambient" environmental medium; the medium and the target together determine (a) the portion of the EM spectrum to sense and (b) the remote sensing system to apply.]

Figure 1. Schematic decision tree describing the selection of the sensing system for object<br />

detection in an “ambient” environmental medium (air, water, soil, submerged land).<br />

I.b. Remote Sensing Platforms<br />

Next, when designing a remote sensing or object recognition system, after<br />

considering the above, one needs to consider the platform of choice for the detection and<br />

localization (spatial location detection) of the object(s) or targets to be identified or<br />

discriminated from the background medium. Generally speaking, we consider two types<br />

of platforms, (a) fixed or (b) moving. In aquatic environments, the fixed platform could<br />

be mounted to a (1) pier, (2) piling, (3) fixed platform (bridge, tower, or other fixed<br />

platform away from or extending from the shore, e.g. an oil rig platform or other such<br />

specially designed platform – e.g. breakwaters) and a (4) moored bottom, subsurface or<br />

in-situ (at depth) or water surface moored platform (e.g. buoy). The moving platform<br />

with a remote sensing system can be conceived of as either a (1) drifting-autonomous, (2) semi-autonomous (tethered) or (3) autonomous-controlled platform. We would commonly call the latter two moving platforms "vehicles". Each of these moving platforms can be classified as either (a) a surface vehicle or (b) a sub-surface and/or bottom-roving vehicle. Following the terminology of the CLAWAR network, we can consider the vehicles as either (1) walking or (2) crawling types, both of which may change their elevation with the land or submerged land surface, in which case we may say the vehicle is a "climbing" vehicle. The vehicles can next be classified as either

manned or unmanned (another way to infer autonomous). Figure 2 below is a schematic<br />

depicting the categorical types of remote sensing platforms described above with respect<br />

to object detection and localization for objects that are spatially fixed or not moving with<br />

the environmental medium (water, soil or submerged land).


Another important aspect of remote sensing systems to keep in mind is that they are designed either (a) to detect targets within a medium or (b) to detect the characteristics of the medium itself with respect to its properties (absorption, scattering, transmittance and/or reflectance). In some cases, one may need to consider both aspects, especially in the event that the object's recognition is dependent upon correction or removal of these selected processes from the remote sensing signal in order to detect the target or object's characteristics and location within the aquatic environment, as depicted below in Figure 3.

[Figure 2 diagram text: Remote sensing platforms - fixed (piers, pilings, docks, bridges, towers, rig platforms, breakwaters, moored) or moving; surface, bottom or sub-surface; manned or unmanned; drifting, semi-autonomous (tethered) or autonomous (controlled); flying, swimming, floating, walking, crawling, or climbing/roving (changing elevation with terrain).]

Figure 2. Schematic of remote sensing platforms for detection of objects that are spatially fixed<br />

(not moving) within the environmental medium (water, soil or submerged land).<br />

[Figure 3 diagram text: Remote sensing systems perform object detection, recognition and discrimination by detecting the properties of the medium (scattering, absorption, transmission, reflectance, homogeneous/non-homogeneous layering) and the properties of the object or target (scattering, absorption, transmission, reflectance, size and shape).]

Figure 3. Schematic depiction of the use of remote sensing systems as being used to detect or<br />

characterize (a) the properties of the medium, and/or (b) the properties of the object or target. These<br />

two considerations in the remote sensing system design and operation, including data analysis or<br />

algorithm development and use leads to (3) the detection and recognition of the object and its<br />

discernment or discrimination from the background medium.


I.c. Remote Sensing Systems<br />

After considering platforms, one needs to consider the basic remote sensing systems in terms of the portion of the electromagnetic spectrum they sense in order to detect an object. For water environments one classifies the systems as UV, visible, near-infrared, mid-infrared, far-infrared, microwave and acoustical systems. Each of these systems can be either a passive sensor system or an active-passive system. Passive systems include spectroradiometers, spectrographs, cameras (film or digital), hyperspectral imagers, passive acoustic hydrophones and microwave radiometers. Active-passive systems include laser-based Lidar (light detection and ranging) or ladar, Radar (microwave) imaging, Sonar (acoustic) imaging, and fluorescence systems, which may include Raman spectrometers and imaging systems that utilize the Raman scattering signal. All of these systems may include the use of multiple-wavelength excitation and emission detection systems, magnetic field and acoustic or Sonar devices. Typically, all of the active-passive systems utilize a passive detector component which measures the medium or object's response to an active continuous-wave (CW) or pulsed radiation source.

[Figure 4 diagram: remote sensing systems measure a part of the EM spectrum (X-ray, UV, visible, NIR, mid-IR, far-IR, microwave, acoustic). Passive systems: spectroradiometers, spectrographs, radiometers, hydrophones, cameras (film, digital), deployed at the surface, bottom, sub-surface or in-situ. Active-passive systems: lasers (CW), lasers (Lidar), multi-wavelength excitation and emission (fluorescence, Raman), microwave (Radar), acoustic (Sonar), magnetic (field induction). Data types: point, line imaging, 2-D imaging, 3-D spectral.]

Figure 4. Schematic describing the different remote sensing systems and categories such as passive and active-passive systems, the spatial categories of data obtained from these systems and their data types.
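As a minimal illustration of how the active-passive systems above localize a target, the sketch below (with purely illustrative numbers) converts the measured round-trip time of a pulse into a range, using the propagation speed of the carrier (light for Lidar, sound in water for Sonar).

```python
# Hypothetical illustration: range from round-trip pulse travel time
# for an active-passive sensor (Lidar or Sonar).

def pulse_range(round_trip_time_s: float, propagation_speed_m_s: float) -> float:
    """Range to the target: the pulse travels out and back, hence the factor 1/2."""
    return 0.5 * propagation_speed_m_s * round_trip_time_s

C_LIGHT_WATER = 2.25e8   # approx. speed of light in water (m/s)
C_SOUND_WATER = 1500.0   # approx. speed of sound in water (m/s)

print(pulse_range(2.0e-8, C_LIGHT_WATER))  # Lidar echo after 20 ns -> ~2.25 m
print(pulse_range(3.0e-3, C_SOUND_WATER))  # Sonar echo after 3 ms  -> ~2.25 m
```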

II. Detection of Objects in Water<br />

II.1 Methods of Risk Assessment & Risk Reduction<br />

The goal of detecting objects such as mines, unexploded ordnance and related debris using robotic systems may ultimately provide a unique method of securing a safe environment for human activities. With the increased humanitarian need of many nations around the world to secure a safe environment, especially safe waterways, ports and associated coastal areas such as bridges, piers, breakwaters, etc., the detection of objects in water is an essential aspect of national preparedness. The discrimination of dangerous objects from other natural objects (such as rocks and submerged bottom types, e.g. submerged vegetative areas) and man-made debris first needs to be demonstrated before a remote sensing system is routinely utilized in a robotic or unmanned system. The risk associated with false identification of dangerous objects must also be determined, estimated and then minimized. This minimization process or function is a necessary element in the demonstration and improvement of any remote sensing system utilized aboard a robotic autonomous or semi-autonomous platform.

Conceptually, there are three major methods useful to minimize this risk of false identification of objects. The first risk assessment methodology is the use of sensor system modeling and simulations; the second is system simulations and modeling of the robotic and mechatronic systems; and the third is conducting field trials in which actual integrated remote sensing systems and associated robotic and mechatronic systems are tested in an artificial environment (e.g. a test minefield with various target objects or debris to be classified as to their potential risk), as depicted below.

[Figure 5 diagram: methods for minimizing risks in the detection of dangerous objects — remote sensing systems modeling and associated detection algorithm simulations; robotic-mechatronic systems modeling and motion control simulations; field trial tests in artificial object environments. Estimation of risk reduction through systems hardware modifications, data analysis and detection algorithm modifications, and changes to field trial environments and object characteristics.]

Figure 5. Systematic process for risk assessment and risk reduction in the development, testing and utilization of possible UXO and mine detection systems through modeling, simulations, and resulting system modifications.
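To make the notion of risk from false identification concrete, the following minimal sketch scores a detector over a field trial by its detection rate and false alarm rate computed from counted outcomes; the counts shown are hypothetical, not results from any of the systems discussed here.

```python
# Hypothetical field-trial scoring: how often dangerous objects are found
# and how often harmless objects (rocks, debris) are flagged by mistake.

def detection_rate(true_positives: int, missed_targets: int) -> float:
    """Fraction of dangerous objects correctly detected."""
    return true_positives / (true_positives + missed_targets)

def false_alarm_rate(false_positives: int, true_negatives: int) -> float:
    """Fraction of harmless objects wrongly flagged as dangerous."""
    return false_positives / (false_positives + true_negatives)

# Illustrative counts from a simulated trial (not real data):
print(detection_rate(47, 3))      # 0.94
print(false_alarm_rate(12, 188))  # 0.06
```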

II.b. Example Applications: Shallow Water Remote Sensing Systems<br />

In the following discussion we present design concepts for shallow water operations. The designs incorporate three concepts we believe are useful design considerations: (1) utilization of the "sensor payload" concept, (2) the design of both the sensor and the robotic-mechatronic elements using the concept of "size scalability", and (3) the use of "modular" components. All three of these design elements have been used by the main author as a guidepost in the systems constructed to date, and in those currently being built. Figure 6 shows a moored or tethered shallow water remote sensing system which was built by the author. This system is designed as a tethered shallow water system and was designed around the concept of a "sensor payload". This particular example contains an internationally patented non-contact scalable backscatter probe (Bostater, 2000) within the payload sensor well, which is essentially a hyperspectral remote sensing system. This particular system was designed to measure, identify and quantify chemicals in a shallow water environment, and the payload and probe hold an active electromagnetic energy source and a passive multi-wavelength sensor. Next to the photograph are shown the side and top perspectives of the moored or fixed remote sensing platform, which is scalable in size with respect to the flotation collar as well as with respect to the sensor payload holding area and the probe.

Figure 6. Example of a coastal shallow water observation and surveillance system which utilizes a scalable payload area and a scalable chemical probe for a multi-wavelength EM source and detector system, such as a Raman scattering system or a pulsed laser line fluorescence based hyperspectral sensor.

Following the design and fabrication of this system we developed the concept of an autonomous system which can hold several payload sensor wells. Figure 7 presents the floating autonomous vehicle which is currently being built for wireless-controlled active motion and for active-passive (LIDAR) imaging, hyperspectral imaging, Raman imaging, acoustic (SONAR) and magnetic imaging systems. This system is planned for mapping surveys of shallow water areas around bridges, waterways, canals, channels and ports in order to map the submerged land or bottom feature types and to identify and localize natural and man-made objects and debris, as well as unidentified objects such as UXO and/or mine-like objects. This system is "scalable" in size, accommodates payload areas or "sensor wells", and is made from off-the-shelf components that are modular for construction purposes. Its thrusters are scalable in size to power a 3-axis motion control system, and it carries wireless radio-based fast Ethernet communications for fast maneuvering control.

Figure 7. Figures showing the scalable, modular structural and motion control components and the scalable sensor payload or sensor well for shallow water mapping and object detection. The vessel is currently under construction and utilizes wireless Ethernet communications and an onboard sensor system for use around bridges, canals, waterways, piers and enclosed seaport areas as well as in lakes and coastal lagoons. The system is battery and solar powered and uses WAAS GPS for motion control.
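As a hedged sketch of the kind of computation a GPS-based motion controller performs, the code below derives the distance and bearing from the vessel's current fix to the next survey waypoint using a flat-earth approximation, which is adequate over the short distances of a port or lagoon survey; the coordinates and function names are illustrative assumptions.

```python
import math

# Illustrative flat-earth waypoint guidance for a small GPS-steered survey vessel.
EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Approximate distance (m) and bearing (deg, clockwise from north)
    from the current fix (lat1, lon1) to a waypoint (lat2, lon2)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    north = dlat * EARTH_RADIUS_M
    east = dlon * EARTH_RADIUS_M
    distance = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return distance, bearing

# Hypothetical fix and waypoint near a coastal inlet:
print(distance_and_bearing(27.8600, -80.4480, 27.8610, -80.4470))
```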


In addition, this robotic shallow surface water vessel is designed for easy assembly and disassembly, making it ideal for transport in a small, lightweight container along with selected, optional, scalable and modular remote sensing systems for object detection and discrimination. The above two examples are next contrasted with futuristic versions of the concepts embodied above. In Figure 8, below, we apply the concepts described above to "submerged walking robotic systems". These "walker robots" were conceived and motivated based upon recent examination of underwater flagellate organisms found in a European freshwater lake, Lake Balaton, Hungary. As will be noted, these robotic concepts contain the design elements used above, in addition to the concept of utilizing acoustic sensors in the feet of the multi-legged robotic system, surrounding the center payload area.

[Figure 8 diagram labels: sensor payload; robotic operating system (ROBOos); object buried in sediments.]

Figure 8. Designs of submerged walking robots hold unique possibilities and advantages for mapping and detecting objects buried in submerged land or bottom types. The center of the walking robot holds one suite of sensors, and the feet (which help support the robot in soft bottom types) contain acoustic and magnetic detectors which receive the returned signal from the central robotic EM impulse generator.

II.c. Example of Sensor Imaging System Modeling and Algorithm Simulation<br />

In order to demonstrate the value of designing security-related and homeland preparedness systems, an example is provided of simulations for detecting targets in different water types (clear water versus turbid water types). Analytical and Monte-Carlo based radiative transfer models designed to quantitatively describe, simulate and predict the electromagnetic transport properties in an environmental medium such as water provide powerful tools for educating future engineers and remote sensing scientists as to the design of remote sensing systems as well as the possible robotic systems which contain and move the sensors. Over the last decade, the Marine Environmental Laboratory and Remote Sensing Center at Florida Institute of Technology has developed a suite of instruments and associated models which help describe the optimal design of remote sensing instruments. Below are shown selected results of model simulations regarding the detection of an object. In this case it is a line target with 80% and 20% constant reflectance lines across the spectrum. This target could have a complex shape and unique property characteristics in terms of its EM reflective/transmissive characteristics. In addition, the medium (in this case water) is composed first of clear water and then of water with different chemical properties (dissolved organic matter, chlorophyll-a pigment concentrations and suspended sediment concentrations) which make optical detection difficult unless active-passive systems are employed. Figure 9 shows the line target simulated on the water bottom at a specified depth (≈2 m). We also use a spectral water surface wave model demonstrating the combined effects of wind on the water surface and of water quality influences on the target at three different wavelengths, with the resulting RGB image composed of simulated channels at 480 nm, 530 nm and 650 nm as shown.

Figure 9. Synthetic image sensor simulations of a hyperspectral imaging system with continuous source illumination using a radiative transfer model (Bostater et al., 2000, 2004). A line target is simulated with 80% and 20% reflectance lines. The line target (upper left) is simulated as a ≈140 cm² target. The water wave surface is generated using the JONSWAP water wave spectrum (upper middle) modified for shallow water use in the Sebastian coastal inlet area (upper right) of the Atlantic Ocean, Florida. The wind speed is 5 m s⁻¹ from the NNE direction (30°). The solar zenith angle is set to 20 degrees off nadir (late morning) and the sensor viewing angle is near nadir. The bottom reflectance type in the background of the target is a seagrass or vegetative bottom type. A resulting synthetic image is presented from the clear water simulation (lower left), and the zoomed region of the panel for the target in this area (lower middle) shows the influence of the water surface and clear water column properties on the target. The lower right zoomed line target area shows the influence of wind and water quality with suspended matter of ≈30 mg L⁻¹, 20 mg L⁻¹ C as dissolved organic matter (DOM) and a chlorophyll-a pigment concentration of 20 µg L⁻¹.
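Since the caption above refers to the JONSWAP water wave spectrum, a minimal sketch of the standard deep-water JONSWAP spectral density is given below; the shallow-water modification used for the Sebastian inlet simulations is not reproduced, and the parameter values are purely illustrative.

```python
import math

# Minimal deep-water JONSWAP spectral density S(f) [m^2/Hz].
# alpha, peak frequency fp and peak enhancement gamma are inputs;
# the values used below are purely illustrative.
G = 9.81  # gravitational acceleration (m/s^2)

def jonswap(f: float, fp: float, alpha: float = 0.0081, gamma: float = 3.3) -> float:
    sigma = 0.07 if f <= fp else 0.09
    r = math.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm_shape = alpha * G ** 2 * (2.0 * math.pi) ** -4 * f ** -5 \
               * math.exp(-1.25 * (fp / f) ** 4)
    return pm_shape * gamma ** r

# Spectral density around an assumed 0.3 Hz peak (short wind sea):
for f in (0.2, 0.3, 0.4, 0.6):
    print(f, jonswap(f, fp=0.3))
```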


Similar sensor simulation results also demonstrate that the above-water detection of an object will be influenced by the scattering and absorption effects within the atmosphere if the sensor is carried aboard an airborne platform. The atmosphere will decrease the observed contrast of a submerged target further than in the results shown above. Therefore, detection of UXO, mine-like objects and debris in shallow waters would best utilize sensors flown at low altitude (less than 3,000 meters, preferably lower) or sensors used aboard vessels as described above. The models developed by Bostater et al., 2003 and 2004 also demonstrate subsurface imaging results from the model simulations by examining the upwelling EM energy from just below the water surface, without the water wave effects. The lower middle and left panels shown above result from simulations after image enhancements have been performed upon the synthetic images in the zoomed region. Image algorithm detection modeling may improve the results for detection, as shown in Figure 5 above.
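The loss of target contrast through the water column described above can be roughed out with a simple two-way diffuse attenuation estimate; the sketch below is a back-of-the-envelope illustration, not the layered radiative transfer model of Bostater et al., and the attenuation coefficients are assumed values.

```python
import math

# Back-of-the-envelope estimate of how the water column dims the contrast
# between a target and the surrounding bottom (two-way diffuse attenuation).
def apparent_contrast(r_target, r_background, kd_per_m, depth_m):
    """Target/background reflectance contrast seen from just above the surface,
    attenuated over a two-way path through the water column."""
    attenuation = math.exp(-2.0 * kd_per_m * depth_m)
    return (r_target - r_background) * attenuation

# Illustrative values: 80% target line over a 10% vegetative bottom at ~2 m depth.
print(apparent_contrast(0.80, 0.10, kd_per_m=0.15, depth_m=2.0))  # clearer water
print(apparent_contrast(0.80, 0.10, kd_per_m=1.20, depth_m=2.0))  # turbid water
```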

II.d. Detection of Objects in Water Using Other Sensor Systems

The above sensor model results and simulations were conducted to demonstrate the need for these types of sensor simulations in developing risk reduction information regarding the detection of objects in water. These types of modeling results are useful not only in sensor and platform design considerations but also in training applications for future engineers, scientists, and command & control operations personnel. All of these types of professionals involved in the detection of dangerous objects need to work together and share a common scientific understanding for successful preparedness and detection of dangerous objects in ambient environments. In the following discussion are selected results of applications of sensors and robotic systems with potential use for detection of objects in shallow water types. Figure 10 shows three different robotic systems designed and built at Florida Tech. The upper left system shows the application of a semi-autonomous underwater vehicle used to collect water samples in ice covered regions. The middle panel below shows a system constructed for use as an autonomous camera inspection vehicle. The lower right system shows the application of a surf zone or littoral zone robotic crawler type system. These systems are just examples of possible future vessels that could be outfitted with more modern remote sensing systems for object detection applications.

Figure 10. Examples of a subsurface tethered semi-autonomous underwater vehicle used in sea ice cover conditions and developed at Florida Tech. The middle image above shows the near-surface autonomous vehicle which was developed to move around in shallow water environments.

The last image above on the right shows a littoral or surf zone crawling robotic system for entering the high energy surf zone or beach area. Additional examples of imagery obtained from remote sensing systems, including sonar images, as well as robotic systems for roving in shallow waters, are shown below.

Figure 11. Example of a crawling underwater robot with an active-passive acoustic (SONAR) system. The middle image is an object identified with an acoustic system (see http://marinesonic.com) reported by Scott and Wilcox (undated). The image on the right shows an image derived from a small vessel carrying ground penetrating radar in a shallow freshwater study reported by Versteeg and White (undated manuscript).

Many other examples exist of remote sensing systems and resulting data for detecting objects and bottom characteristics in shallow water environments. The above examples of robotic and sensing systems are described to demonstrate that technological advances are producing many possible systems for humanitarian UXO detection and demining applications, for related training in homeland security and preparedness activities, and for environmental monitoring needs.

III. Summary & Conclusions<br />

Remote sensing systems and robotic systems utilizing advanced mechatronics concepts hold a unique capability and thus require careful evaluation and training for future applications and activities related to the detection of mines and UXO in aquatic environments. The above discussion summarizes a practical approach for conceptualizing the design of such integrated waterborne sensing systems. We have presented an example of how to model and simulate the phenomenology related to detection of targets and objects in a shallow coastal inlet, with application of the results to sensor design and to sensor data analysis or algorithm development. Examples of existing, under construction, and futuristic remote sensing systems and robotic systems have been described.

We have also described those types of systems which have great potential for future detection, removal, deterrence and preparedness activities related to securing risk-free and safe aquatic environments. The systems also have a dual-use utility in environmental monitoring of aquatic environments. All of the applications described above need always to consider the sensor system requirements in terms of spatial, temporal, spectral, radiometric and digital resolution in order to be successful.

IV. Acknowledgements

The research and applications described in this paper have been supported in part<br />

by the following organizations: KB Sciences; Northrop Grumman Corporation,<br />

Melbourne, Florida; NASA; S&C Services; The Link Foundation; US Department of<br />

State. Graduate students Teddy Ghir and Luce Bassetti are acknowledged: Teddy for putting on paper the designs we discussed, and Luce for running the model simulations requested to demonstrate the use of our radiative transfer models in the context described in this paper.

V. References<br />

Bostater, C., 2000, “Buoy Instrumented for Spectral Measurement of Water Quality”,<br />

NASA Tech Briefs, Vol. 24, No. 11, p. 69.<br />

Bostater, C., et al., 2004, Synthetic Image Generation Using an Iterative Layered<br />

Radiative Transfer Model With Realistic Water Waves, SPIE, Vol. 5233, pp. 253-268.<br />

Bostater, et al., 2004, Developing & Testing a Pushbroom Camera Motion Control<br />

System: Using a Lidar-Based Streak Tube Camera for Studying the influence of Water<br />

Waves on Underwater Light Structure Detection, SPIE Vol. 5233, pp.1-16.<br />

Scott, D.M., Wilcox, T., Side Scan Sonar Suitable for AUV Applications, http://www.marinesonic.com/documents/SuitableAUVApplications.PDF, Marine Sonic Technology, Ltd., 5508 George Washington Memorial Highway, P.O. Box 730, White Marsh, Virginia 23183-0730 USA, 4 pp.

Versteeg, R., White, E., Rittger, K., (undated), Ground-Penetrating Radar and Swept-<br />

Frequency Seismic Imaging Of Shallow Water Sediments In The Hudson River<br />

http://water.usgs.gov/ogw/bgas/publications/SAGEEP01_133/SAGEEP01_133.pdf, 11<br />

pp.<br />

Meyers, R., Smith, D., 1998, Development of an underwater GPR system. In: Seventh International Conference On Ground-Penetrating Radar, University of Kansas, Lawrence, Kansas, USA: Radar Systems and Remote Sensing Laboratory, University of Kansas, 2291 Irving Hill Road, Lawrence, KS 66045-2969, USA.


Charles R. Bostater Jr. is Associate Professor in Physical Oceanography & Environmental Sciences,<br />

College of Engineering, at the Florida Institute of Technology. He is the Director of the Marine<br />

Environmental Optics Laboratory and the Remote Sensing Center. His research interests include remote<br />

sensing systems, remote sensing platforms, shallow water research, coupled modeling of marine<br />

environmental systems and sensor system modeling. Dr. Bostater is the named inventor on four<br />

international patents concerning scalable non-contact remote sensing probes. He has won awards from<br />

NASA for outstanding research and serving in the role of a Principal Investigator, he has managed over<br />

four million dollars in research grants & contracts.<br />

bostater@probe.ocn.fit.edu


Air ground robotics ensembles for risky applications<br />

Simon Lacroix and Raja Chatila<br />

LAAS-CNRS<br />

7, Av. du Colonel Roche<br />

31077 Toulouse Cedex 4 France<br />

Simon.Lacroix@laas.fr, Raja.Chatila@laas.fr<br />

Abstract<br />

Numerous projects deal with the development of aerial robots, aiming at endowing them with<br />

the functional and decisional abilities required by the autonomous execution of exploration<br />

or monitoring missions. The cooperation of such robots with robots operating on the ground<br />

is very relevant in various risky applications. In this paper, the development of cooperation<br />

schemes in which the complementarity of air and ground robots is exploited to achieve<br />

missions in risky contexts is considered. The potential advantages of air/ground robotics<br />

ensembles are discussed. Some research directions are sketched, and preliminary results<br />

obtained in the context of projects in which LAAS is involved are briefly presented.

1 Introduction<br />

In recent years, the robotics community has paid more and more attention to the development of aerial robots (see e.g. [3, 14, 16, 15, 9]). As opposed to drones that execute pre-programmed missions, aerial robots exhibit decisional autonomy: they are supposed to achieve high-level missions, such as mapping or monitoring a given area, with little human interaction. Aerial robots raise a wide range of robotics research issues, from the study of innovative flying concepts and the associated flight control algorithms (especially for micro drones) to high-level mission planning, via real-time environment mapping. In the context of risky applications, aerial robots are very well suited for exploration and monitoring missions: they can easily gather detailed information on the environment, without exposing themselves or the operators to any danger.

But gathering data by aerial means might not be sufficient in most risky applications: the returned information is not always complete, especially in cluttered urban areas, some sensors can only be effective when they are close to the ground, and aerial robots can hardly physically intervene on the environment. Actually, in all the application contexts where the development of exploration and intervention robotics is considered, air/ground robotic ensembles bring forth several opportunities from an operational point of view. Be it environment monitoring and surveillance, demining or reconnaissance, a variety of scenarios can be envisaged. For instance, aircraft can operate in a preliminary phase, in order to gather information that will later be used in both mission planning and execution for the ground vehicles. But one can also foresee cooperative scenarios, where aircraft would support ground robots with communication links and global information, or where both kinds of machines would cooperate to fulfill a given mission.

In this paper, we discuss the interest of developing air/ground robotics ensembles in risky application contexts, and we outline the main research issues raised by such systems. The paper is organized as follows: the next section describes the main missions for which robots can be employed in the context of risky applications, and describes the benefits air/ground robotics ensembles can bring for their achievement. Section 3 presents the main research issues to tackle in order to develop effective air/ground robotics ensembles, and Section 4 describes some current work and preliminary results obtained at LAAS.

2 Robotic missions in risky applications<br />

Whatever the actual application context is (fire monitoring, fire fighting, mine detection,<br />

demining, . . . ), the functions that can benefit from robotised systems in risky applications<br />

can be summarized as follows:

• Exploration or reconnaissance. This function consists in gathering and structuring<br />

data related to a given area, in order to build a map of the environment. The term<br />

“map” is here understood in its most general sense: a map can be a realistic 3D

model of the environment dedicated to the operators, or a simple representation that<br />

exhibits traversable areas, or a representation in which the position of particular<br />

elements (e.g. mines, fire alarms, chemical hazards, . . . ) is stored. Exploration<br />

essentially relies on perception functionalities, but decisional processes are of course<br />

also involved when it is performed autonomously.<br />

• Monitoring or surveillance. This consists in checking that any specified event does not<br />

occur in a given area, or in continuously assessing the evolution of given events (e.g.<br />

monitoring active fires). This function also relies on perception functions and calls<br />

for decisional processes. It is quite similar to the mapping function, except that time<br />

plays an important role: monitoring an area requires that it is regularly perceived,<br />

and monitoring a given event requires a continuous perception of the event.<br />

• Intervention. The exploration and monitoring functions are passive, in the sense that<br />

they do not modify the environment. On the contrary, by “intervention” we mean<br />

all the functions that actually modify the environment, e.g. fire extinguishing or mine

neutralization.<br />

These three basic functions can be combined and assembled in various mission scenarios, depending on the application context. For instance, “search and rescue” missions call for an exploration phase (finding the position of victims) and an intervention phase (reaching the victims to provide them with assistance).

While each of these generic functions can be achieved by a single robot, be it terrestrial or aerial, it can be more efficiently achieved by an ensemble of aerial and ground robots, which exhibits a wider spectrum of complementary capacities. Various scenarios that exhibit a synergy between the two kinds of robots can be foreseen. For instance, for mapping and monitoring functions, the complementarity of aerial and ground robots from the point of view of perception is quite obvious: aerial robots can provide a global view of the environment, while ground robots can provide more detailed information. Typically, an aerial robot can build hypotheses on the presence of particular elements, which are afterwards confirmed by a ground robot equipped with more reliable sensors.

3 Air ground robotics ensembles: research issues<br />

3.1 Some cooperation schemes<br />

Several cooperation schemes can be foreseen between aerial and ground robots, from simple<br />

assistance to actual tight on-line cooperation.<br />

• Aerial robots assist ground robots, by providing them information on the environment.<br />

The aerial robots build beforehand a map of the environment (either a 3D<br />

representation, a traversability map or a map that exhibits the presence of given elements),<br />

that is used afterwards by the ground robots as an initial information. Such<br />

schemes can be envisaged for mapping missions, as the robots operate in sequence<br />

(i.e. the aerial robots are used in a mission preparation phase).

• Aerial robots assist ground robots, by providing them a communication link with the<br />

operator station. This assistance scheme requires an on line operation of both kinds<br />

of robots.<br />

• Aerial robots assist ground robots, by estimating their localization with respect to

the environment. Ground robot localization is a key problem that is essential to<br />

solve: aerial views of the ground robots can help to estimate their absolute position<br />

- this scheme also requires an on line operation of both kinds of robots.<br />

• In on line cooperation schemes, both kinds of robots cooperate on line to fulfill one of<br />

the three functions mentioned above. For instance, the map being built by the aerial<br />

robot is exploited to drive the ground robot, that actively completes it by perceiving<br />

the areas not seen by the aerial robot. Such schemes are the most complex to achieve,<br />

as they require synchronization and coordination between the two kinds of robots.

To achieve any of these schemes, some robotics problems have to be tackled, both from the functional and the decisional points of view.


Figure 1: A schematic view of the “multi-source” environment information integration problem among<br />

air and ground devices.<br />

3.2 Air/ground cooperative functionalities<br />

Cooperative mapping. One of the most important issues to address in order to foster the development of such ensembles is the building and management of common environment representations using data provided by all possible sources. Indeed, not only can each kind of machine (terrestrial, aerial) benefit from the information gathered by the other, but, in order to plan and execute cooperative or coordinated actions, both kinds of machines must also be able to build and manipulate shared, coherent environment representations.

Basically, the problem is to integrate the information acquired from the rovers and aircraft into a globally coherent representation (figure 1). For that purpose, one must define algorithms and representations that support the following characteristics of the data:

• Data types: the acquired data can be images, either pan-chromatic or color, from which geometric 3D information can be recovered, or directly 3D, as provided by a SAR or a LIDAR for instance.

• Data resolution: the resolution of the gathered information can significantly change,<br />

depending on whether it has been acquired by a ground or an aerial sensor.<br />

• Uncertainties: similarly, there are several orders of magnitude of variation in data uncertainties between ground and aerial data.


• Viewpoint changes: besides the resolution and uncertainty properties of the sensors, which can influence the detection of specific environment features, the difference of viewpoints between ground and aerial sensors generates occlusions that considerably change the effectively perceived area, and therefore the detectable features.

Integrating all these data is a multi-sensor fusion problem. Depending on the considered context, the difficulties may vary: for instance, for aerial sensors, occlusions are likely to occur because of overhanging vegetation or buildings that make some areas unperceivable from the air. Also, the aircraft altitude of course strongly influences the properties of the perceived data in terms of precision and resolution.
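One simple way to picture the fusion of aerial and ground data with very different uncertainties is a per-cell, inverse-variance weighted merge of two grids covering the same area; the sketch below is a minimal illustration under that assumption, not the representation actually used in the projects described here.

```python
# Minimal inverse-variance fusion of two grid maps covering the same area,
# e.g. a coarse/uncertain aerial elevation grid and a precise but partial
# ground grid. None marks a cell not observed by a sensor.

def fuse_cell(z_air, var_air, z_ground, var_ground):
    """Fuse two height estimates of one cell; either estimate may be missing (None)."""
    if z_ground is None:
        return z_air, var_air
    if z_air is None:
        return z_ground, var_ground
    w_air, w_ground = 1.0 / var_air, 1.0 / var_ground
    z = (w_air * z_air + w_ground * z_ground) / (w_air + w_ground)
    return z, 1.0 / (w_air + w_ground)

# Illustrative cell: aerial estimate 1.0 m +/- 0.5 m^2, ground estimate 1.3 m +/- 0.02 m^2.
print(fuse_cell(1.0, 0.5, 1.3, 0.02))   # result dominated by the ground observation
print(fuse_cell(1.0, 0.5, None, None))  # cell occluded from the ground -> aerial value kept
```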

The localization problem. But before tackling the problem of integrating ground and aerial data into a globally consistent environment representation, one must consider the registration issue: in order to merge this information, a way to establish the spatial correspondence between the two kinds of data is required. This would be straightforwardly solved if sensor positions were always precisely known, which is far from being a realistic hypothesis. Designing algorithms that solve the registration problem is therefore a key prerequisite to addressing cooperative environment modeling. Moreover, such algorithms can help to localize the rover with respect to initial maps of the environment, as provided by an orbiter for instance¹.

Localization is also an essential problem to tackle, even if the two kinds of robots do<br />

not cooperate to build a complete environment map. Besides solutions developed in single<br />

robot contexts, the possibility to use aerial views to localize the ground robots with respect<br />

to their environment is also a solution to investigate.<br />
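A classical building block for this registration problem is the least-squares estimation of a rigid transform between matched features seen in an aerial map and in a ground map; the sketch below implements the standard SVD-based solution in 2D, with made-up point correspondences.

```python
import numpy as np

# Least-squares 2D rigid registration (rotation R, translation t) between
# matched points: classic SVD-based closed-form solution.
def register_2d(ground_pts: np.ndarray, aerial_pts: np.ndarray):
    """Find R, t such that R @ ground + t ~= aerial, given matched N x 2 arrays."""
    cg, ca = ground_pts.mean(axis=0), aerial_pts.mean(axis=0)
    H = (ground_pts - cg).T @ (aerial_pts - ca)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cg
    return R, t

# Hypothetical matched landmarks (same points, rotated 30 deg and shifted):
g = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [1.0, 3.0]])
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
a = g @ R_true.T + np.array([10.0, -5.0])
print(register_2d(g, a))
```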

3.3 System architecture and decisional issues<br />

Multi-robot applications call for the development of a specific system architecture, one that embraces more concerns than a single-robot architecture: in particular, designing multi-robot architectures requires defining the decision-making scheme and specifying the interaction framework among the different robots of the system, which in turn influences the design of the individual robot architecture.

Within a multi-robot system, the “decision” encompasses several notions:<br />

• Supervision and execution: The executive is a passive, reactive management<br />

of task execution, whereas supervision is an active process that manages all the

decisional activities of the robot, whatever their extent.<br />

• Coordination: It ensures the consistency of the activities within a group of robots.

It defines the mechanisms dedicated to avoid or solve possible resource conflicts that<br />

may arise during operational activities. This is especially related to trajectories and<br />

multi-robot cooperative task execution (e.g. simultaneous perception of the same

target by several robots).<br />

1 A problem often referred to as the drop-off problem


• Mission planning and scheduling: These decisional activities are dedicated to<br />

plan building, taking into account on one side models of missions and tasks, and on<br />

the other side models of the world: robot’s perception and motion abilities, current<br />

knowledge related to the environment, etc.<br />

• Task allocation: This deals with the way to distribute tasks among the robots. It<br />

requires establishing a task assignment protocol in the system, and defining metrics to assess the relevance of assigning a given task to a given robot (a minimal sketch is given below).

These decisional components can be implemented according to different configurations:<br />

they can be gathered within a Central Decisional Node, or be partially (or even totally)<br />

distributed among the robots.<br />
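As a toy illustration of the task allocation component mentioned above, the sketch below greedily assigns observation tasks to the robot with the lowest estimated cost, here simply the straight-line distance from its current position; the robot names, positions and cost metric are assumptions made for illustration, and actual systems would typically rely on richer, possibly distributed, negotiation protocols.

```python
import math

# Toy greedy task allocation: each task goes to the robot with the lowest
# estimated cost (here: straight-line distance from its current position).
def allocate(robots: dict, tasks: dict) -> dict:
    assignment = {}
    for task_name, task_pos in tasks.items():
        best = min(robots, key=lambda r: math.dist(robots[r], task_pos))
        assignment[task_name] = best
        robots[best] = task_pos  # the robot is assumed to move to the task it won
    return assignment

# Hypothetical scenario: one blimp and two rovers, three areas to observe.
robots = {"blimp": (0.0, 50.0), "rover_1": (0.0, 0.0), "rover_2": (100.0, 0.0)}
tasks = {"area_A": (10.0, 5.0), "area_B": (90.0, 10.0), "area_C": (50.0, 60.0)}
print(allocate(robots, tasks))
```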

But one of the most important features of the system architecture is the consideration of the operators, who should be able to intervene at any time and at any level in the system. Indeed, in an actual operational context the robots are always operating under human requests, and operators should have the possibility to choose the robots' autonomy level, depending on the current execution context.

4 Current work at LAAS<br />

We briefly present here some work carried out at LAAS in the context of air/ground

robotics ensembles.<br />

4.1 The autonomous airship Karma<br />

To tackle the various issues raised by the deployment of heterogeneous autonomous systems,<br />

in the context of exploration, surveillance and intervention missions, we initiated the<br />

development of an autonomous blimp project. The ongoing developments in a wide spectrum of technologies, ranging from actuators, sensors and computing devices to energy and materials, will ensure lighter-than-air machines a promising future. There is undoubtedly a renewed interest in this domain, as shown by the recent industrial developments on heavy-load transportation projects and on stratospheric telecommunication platforms. As for small-size unmanned radio-controlled models, whose size is of the order of a few tens of cubic meters, their domain of operation is currently essentially restricted to advertising or aerial photography. But their properties make them a very suitable support for developing heterogeneous air/ground robotics systems: they are easy to operate, they can safely fly at very low altitudes (down to a few meters), and especially their dynamics is comparable with that of ground rovers, as they can hover for a long time over a particular area while being able to fly at a few tens of kilometers per hour, still consuming little energy. Their main and sole enemy is the wind (see [7] for a detailed and convincing review of the pros and cons of small-size airships with regard to helicopters and planes). Let us also note that some specific applications of unmanned blimps are more and more seriously considered throughout the world, as shown by numerous contributions in the AIAA Lighter Than Air conferences and European Airship Conventions for instance [1, 2]. In the context of demining applications, the Mineseeker project is of course worth mentioning [5].

Besides long-term developments related to the coordination and cooperation of heterogeneous air/ground robots, our research work on autonomous blimps is currently twofold: we concentrate on the navigation problem on the basis of automatic control, and on environment modeling issues using low altitude imagery. For that purpose, we have designed and realized Karma, a 9.5 m long blimp (figure 2).

Figure 2: The autonomous airship Karma. Note the two cameras mounted on the front and rear of the<br />

gondola.<br />

4.2 The Comets project<br />

Comets is an EEC-funded project that involves various academic and industrial partners (see [12] for more information). Although it only deals with UAVs, it is worth mentioning here, as the objective of COMETS is to design and implement a distributed control

system for cooperative activities using heterogeneous UAVs, in the context of forest fire<br />

applications. Particularly, both helicopters and airships are considered in COMETS. In<br />

order to achieve this general objective, a new control architecture has been designed and<br />

implemented. COMETS also involves cooperative environment perception including fire<br />

detection and monitoring, and terrain mapping.<br />

The heterogeneity of the UAVs is twofold. On one hand, complementary platforms are<br />

considered: helicopters have high maneuverability and hovering ability, and are therefore<br />

suited to agile target tracking tasks and to inspection and monitoring tasks that require maintaining a position and obtaining detailed views. Having much less maneuverability, airships can be used to provide global views or to act as a communications relay; they also offer graceful degradation in case of failure. On the other hand, the considered

UAVs are also heterogeneous in terms of on board processing capabilities, ranging from<br />

fully autonomous aerial systems to conventional radio controlled systems with minimal<br />

on-board capabilities required to record and transmit information. Thus, the planning,


Figure 3: The global architecture of the Comets system.<br />

perception and control functionalities of the UAVs can be either implemented on-board<br />

the vehicles, or on ground stations for the low cost and light aerial vehicles without enough<br />

on-board processing capabilities. The global architecture of the COMETS system is shown<br />

in figure 3. It involves remotely piloted UAVs and UAVs with on-board controllers that<br />

have the ability to perform planning and perception activities [8]. Both kinds of UAVs are<br />

equipped with radio transmitters and receivers in order to communicate with the ground

Control Center and with other UAVs.<br />

4.3 The Aerob project<br />

This project is supported by the French national research institute (CNRS), in the context<br />

of the Robea research program [13]. Aerob essentially deals with the building of environment<br />

maps that integrate data acquired by aerial and terrestrial means: we present here

some results of terrain mapping with low altitude imagery.<br />

Besides 3D data acquisition, the main difficulty to build a terrain model that gathers<br />

a set of data acquired during motion is to have a precise estimation of the sensor position:<br />

if this position is not precisely known, the built model is eventually distorted and contains<br />

discrepancies. A huge amount of work has of course been devoted to the localization<br />

problem in robotics. In the absence of any external absolute reference, the only way to<br />

guarantee a sound position estimate during motion is to rely on environment features that

are detected and localized as they are perceived by the robot. This approach is known as<br />

“simultaneous localization and mapping” (“SLAM”, see e.g. [6, 17]): the robot position

is concurrently estimated with the position of landmarks that are detected by the robot<br />

exteroceptive sensor.<br />

Most SLAM achievements have been made in the context of indoor robots moving on 2D ground, which perceive the environment with a laser range finder scanned along a plane parallel to the ground. Recently, various contributions that deal with robots evolving in three dimensions and that use vision to detect and map landmarks have been proposed [4].

We developed an approach that uses only stereovision to estimate both the robot

motions (prediction) and the landmark positions (observation) [11]. Landmarks are interest<br />

points, i.e. visual features that can be matched when perceived from various positions,<br />

and whose 3D coordinates are provided by stereovision. We use an extended Kalman filter<br />

(EKF) as the recursive filter: the state vector of the EKF is the concatenation of the<br />

stereo bench position (6 parameters) and the landmark’s positions (3 parameters for each<br />

landmark). The key algorithm that allows both motion estimation between consecutive<br />

stereovision frames (prediction) and the observation and matching of landmarks (data<br />

association) is a robust interest point matching algorithm [10].<br />
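A hedged sketch of the state bookkeeping implied by this approach is given below: the EKF state stacks the 6-parameter sensor pose with 3 parameters per landmark and grows as new interest points are memorized. Only the state and covariance augmentation is shown; the actual prediction and update equations of [11] are not reproduced, and the uncorrelated landmark initialization is a simplification.

```python
import numpy as np

# Sketch of the EKF-SLAM state vector layout described above:
# 6 pose parameters (x, y, z, roll, pitch, yaw) followed by 3 parameters
# (x, y, z) per memorized landmark.
class SlamState:
    def __init__(self, pose: np.ndarray, pose_cov: np.ndarray):
        self.x = np.asarray(pose, dtype=float)      # state vector, starts with the pose
        self.P = np.asarray(pose_cov, dtype=float)  # full covariance matrix

    @property
    def n_landmarks(self) -> int:
        return (self.x.size - 6) // 3

    def add_landmark(self, position: np.ndarray, position_cov: np.ndarray):
        """Augment the state with a newly matched interest point (simplified,
        uncorrelated initialization)."""
        n = self.x.size
        self.x = np.concatenate([self.x, position])
        P_new = np.zeros((n + 3, n + 3))
        P_new[:n, :n] = self.P
        P_new[n:, n:] = position_cov
        self.P = P_new

state = SlamState(np.zeros(6), np.eye(6) * 0.01)
state.add_landmark(np.array([2.0, 1.0, -30.0]), np.eye(3) * 0.05)
print(state.n_landmarks, state.x.shape, state.P.shape)  # 1 (9,) (9, 9)
```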

Figure 4: Interest points matched in two aerial images taken from different positions. All the detected points are denoted with red crosses; the ones that have been matched are surrounded by a square. The green squares indicate the point matches that are used to estimate the motion between the two positions, the blue ones are the points that are memorized as landmarks.

With this approach, it is possible to estimate the six airship position parameters on line as it flies, with an accuracy on the translations as good as 0.1%. Thanks to these precise position estimates, a digital elevation map is incrementally built by merging the 3D points provided by dense stereovision every time a stereoscopic image pair is acquired. Figures 5 and 6 show the 5 cm resolution digital terrain maps built after a flight of Karma over the parking lot of the lab at an altitude of about 30 m, with a 2.2 m stereovision bench.
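The incremental digital elevation map construction mentioned above can be pictured as binning localized 3D points into a regular grid and keeping a simple per-cell height statistic; the sketch below uses a running mean and the 5 cm cell size of the maps shown, but is otherwise an assumption rather than the LAAS implementation.

```python
import numpy as np

# Toy incremental DEM: 3D points (already expressed in a fixed frame thanks to
# the SLAM position estimates) are binned into a 5 cm grid; each cell keeps a
# running mean of the heights it has received.
class ElevationMap:
    def __init__(self, cell_size: float = 0.05):
        self.cell_size = cell_size
        self.sum_z = {}    # (i, j) -> sum of heights
        self.count = {}    # (i, j) -> number of points

    def add_points(self, points: np.ndarray):
        """points: N x 3 array of (x, y, z) in the map frame."""
        for x, y, z in points:
            key = (int(np.floor(x / self.cell_size)), int(np.floor(y / self.cell_size)))
            self.sum_z[key] = self.sum_z.get(key, 0.0) + z
            self.count[key] = self.count.get(key, 0) + 1

    def height(self, x: float, y: float):
        key = (int(np.floor(x / self.cell_size)), int(np.floor(y / self.cell_size)))
        return self.sum_z[key] / self.count[key] if key in self.count else None

dem = ElevationMap()
dem.add_points(np.array([[1.02, 0.51, 0.30], [1.03, 0.52, 0.34], [2.00, 2.00, 1.10]]))
print(dem.height(1.025, 0.515), dem.height(5.0, 5.0))  # 0.32 None
```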

4.4 The Acrobate project<br />

Acrobate (“Algorithmes pour la Coopération entre Robots Terrestres et Aériens”, i.e. algorithms for the cooperation of ground and aerial robots) is another project funded by the CNRS Robea program, initiated in late 2003, that explicitly tackles air/ground ensemble issues. Both functional and decisional aspects related to this context are considered. Up to now, most of the work has consisted in specifying a simulator that will allow the evaluation of distributed mapping and decisional algorithms; no tangible results can yet be presented here.

Figure 5: A DTM computed with 240 stereoscopic image pairs: orthoimage and 3D view of the bottom-left area. The map covers an area of about 3500 m².

5 Conclusions<br />

We sketched in this paper the potential benefits of air/ground robotics ensembles in the<br />

context of risky applications, and presented some preliminary work achieved at LAAS.<br />

Such systems are now beginning to be seriously considered in the robotics community, and it appears clearly that they will mobilize more and more attention in the near future, as they definitely bring a lot of capabilities in a wide spectrum of application contexts.

Among the research issues to be tackled, it seems that the overall system architecture and the associated decisional functionalities are the most demanding. In particular, the role of the operators calls for the definition of a versatile, open and reconfigurable architecture that allows decisions to be shared at any level in the system.

References<br />

[1] AIAA. 14th Lighter-Than-Air Convention and Exhibition, Akron, OH (USA), July 2001.


Figure 6: A DTM computed with 400 stereoscopic image pairs. The map covers an area of about 6000 m².

[2] The Airship Association. 4th International Airship Convention and Exhibition, July 2002.


[3] O. Amidi, T. Kanade, and R. Miller. Vision-based autonomous helicopter research at Carnegie Mellon Robotics Institute - 1991/1997. In Heli Japan '98, Gifu (Japan), April 1998.

[4] A. Davison. Real-time simultaneous localisation and mapping with a single camera. In IEEE International Conference on Computer Vision, Nice (France), pages 1403–1410, Oct. 2003.

[5] S. Christoforato and P.K. Bishop. Mineseeker deployment to Kosovo for mine survey. In 14th AIAA Lighter-Than-Air Conference and Exhibition, Akron, Ohio (USA), July 2001.

[6] G. Dissanayake, P. M. Newman, H-F. Durrant-Whyte, S. Clark, and M. Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3):229–241, May 2001.

[7] A. Elfes, S.S. Bueno, M. Bergerman, J.G. Ramos, and S.B Varella Gomes. Project AURORA:<br />

development of an autonomous unmanned remote monitoring robotic airship. Journal of the<br />

Brazilian Computer Society, 4(3):70–78, April 1998.<br />

[8] J. Gancet and S. Lacroix. Embedding heterogeneous levels of decisional autonomy in multi-robot systems. In 7th International Symposium on Distributed Autonomous Robotic Systems, Toulouse (France), June 2004.

[9] E. Hygounenc, I-K. Jung, P. Soueres, and S. Lacroix. The autonomous blimp project at LAAS/CNRS: achievements in flight control and terrain mapping. To appear in International Journal of Robotics Research, 2004.

[10] I-K. Jung and S. Lacroix. A robust interest point matching algorithm. In 8th International Conference on Computer Vision, Vancouver (Canada), July 2001.

[11] I-K. Jung and S. Lacroix. High resolution terrain mapping using low altitude aerial stereo imagery. In International Conference on Computer Vision, Nice (France), Oct 2003.

[12] The official Comets project site. http://www.comets-uavs.org.<br />

[13] The CNRS Robea program. http://www.laas.fr/robea.<br />

[14] Georgia Tech Aerial Robotics. http://controls.ae.gatech.edu/gtar.<br />

[15] S. Saripalli, D.J. Naffin, and G.S. Sukhatme. Multi-Robot Systems: From Swarms to Intelligent Automata, Proceedings of the First International Workshop on Multi-Robot Systems, chapter Autonomous Flying Vehicle Research at the University of Southern California, pages 73–82. Kluwer Academic Publishers, 2002.

[16] S. Sukkarieh, E. Nettleton, J-H. Kim, M. Ridley, A. Goktogan, and H.F. Durrant-Whyte. The ANSER project. International Journal of Robotics Research, 22(7-8):505–540, 2003.

[17] S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent mapping and<br />

localization for mobile robots. Autonomous Robots, 5:253–271, 1998.


Light Hybrid Robotic Platform for Humanitarian Demining<br />

N. Amati, B. Bona, A. Canova, S. Carabelli, M. Chiaberge, G. Genta<br />

Laboratorio Interdipartimentale di Meccatronica (www.lim.polito.it)<br />

Politecnico di Torino – Italy<br />

Introduction<br />

The combination of legs and wheels, the so-called hybrid solution, seems to be a promising way to move in a number of environments where purely wheeled machines may be ineffective and humanoid robots are too complex (and costly).

Hybrid and light robots should move autonomously, they should communicate with others within a fleet and with a fixed base, they should know their exact position, and they should allow remote teleoperation of their specific equipment (modular payload).

In order to keep their cost as low as possible, the concept is to use a simple and reliable mechanical construction based on two frames, six legs and four/six wheels, driven by compact electric motors and controlled by a rather general-purpose control unit based on available digital platforms.

Walking robots for space applications are a long-term project at Laboratorio Interdipartimentale di Meccatronica (LIM) that has resulted in a number of working prototypes ([4], [5], [7]).

Digital platforms for mechatronic applications are another long-term project of LIM that has resulted in a number of working applications.

A number of other basic technologies have to be combined in order to build an effective mobile platform, namely wireless communication and positioning, but these are increasingly available as modules that can be easily integrated into the digital platform controlling the machine.

Work-in-Progress<br />

Vehicle architecture<br />

Among the many configurations of moving robots suggested in the past ([1], [2], [3], [6]), a number had both wheels and legs, or appendages which rotate like wheels and are shaped like legs (or even some hybrid wheel-track-leg devices). Very seldom were they actually built and extensively tested, and they were often discarded in favour of more zoomorphic legged configurations or standard wheels ([9]). However, in many applications such as humanitarian demining, it could be advantageous to have wheels to travel on level ground and legs to deal with obstacles, both from the viewpoint of the average speed and to increase the reliability of the machine. The leg mechanisms, which are usually highly stressed and are critical for fatigue and wear, are required to operate only when needed, while wheels supply mobility in easier conditions.
when needed, while wheels supply mobility in easier conditions.


Actually, there is a wide range of solutions which combine wheels with levers to increase mobility. The most conventional types are vehicles (military vehicles or machines for open-air mining, construction works, etc.) in which regular wheels are suspended using long-stroke trailing arm suspensions, which can be active or at least supplied with load-levelling devices. They are here considered as wheeled vehicles. The suspensions can be made of two articulated levers, more similar to a leg than to a trailing arm. Here the ability to walk of what is essentially a wheeled vehicle depends on the actuators and the control system used.

On the other hand, there are walking machines with small wheels either attached at the end of the legs ([10]) or under the body ([8]), which can be put on the ground by raising the legs. Wheels at the end of the legs have the advantage of allowing the body of the vehicle to ride high above the ground, clear of obstacles, but they either require that the legs behave as active suspensions or that some sort of elastic and damped suspension is placed in the feet, unless no suspension at all is used, which is possible only at very low speeds. Wheels under the body make it easier to use a more or less conventional suspension, but the body rides very low and the legs must be able to "get out of the way" in a somewhat unnatural position, which in some cases, particularly when a mammalian configuration is used, can be altogether impossible.

Generally speaking, to perform well as a walking machine the device must have feet as light as possible, so the use of large wheels at the end of the legs indicates that walking has been considered as an auxiliary locomotion method to allow a wheeled vehicle to extend its capabilities to very rough ground. On the contrary, very small wheels show that the primary locomotion mode is walking. In any case, the presence of wheels not only allows a walking machine to increase its speed on level ground and to be operated in a simpler way, but also allows a limited performance in case of failure of some actuators or of the walking control system.

A hybrid robot with wheels at the feet can operate in the "wheeled" or in the "legged" mode, the first being better suited to smooth ground and the second to rough surfaces, but it can also operate in a mixed mode. The legs supporting the robot have locked wheels and behave as legs. The advancing legs are not raised from the ground, but roll on it, maintaining a certain support and perhaps supplying a certain traction. The stability of the vehicle is thus much improved, and fully static stability can be obtained even in the case of a quadruped machine with a high value of the duty factor, at the expense of added complexity in the control system.
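A minimal sketch of the static stability test underlying this mixed mode is given below: the machine is statically stable when the vertical projection of its centre of mass lies inside the support polygon formed by the feet (or locked wheels) in contact with the ground; the geometry and foot positions used are illustrative.

```python
# Minimal static stability test: the projection of the centre of mass must lie
# inside the convex support polygon formed by the supporting feet / locked wheels.
def is_statically_stable(com_xy, support_feet_xy):
    """support_feet_xy: vertices of the support polygon in order (convex, >= 3 feet)."""
    n = len(support_feet_xy)
    if n < 3:
        return False
    sign = 0.0
    for i in range(n):
        x1, y1 = support_feet_xy[i]
        x2, y2 = support_feet_xy[(i + 1) % n]
        cross = (x2 - x1) * (com_xy[1] - y1) - (y2 - y1) * (com_xy[0] - x1)
        if cross != 0.0:
            if sign == 0.0:
                sign = cross
            elif sign * cross < 0.0:   # COM is on the other side of this edge
                return False
    return True

# Illustrative hexapod tripod support: three supporting feet, COM near the body centre.
print(is_statically_stable((0.0, 0.0), [(0.4, 0.3), (-0.4, 0.3), (0.0, -0.5)]))  # True
print(is_statically_stable((0.5, 0.5), [(0.4, 0.3), (-0.4, 0.3), (0.0, -0.5)]))  # False
```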

Rigid-frame machines are particularly suitable for building wheel-leg hybrid vehicles. The wheels can be located under the body, all on one of the two frames (in which case a steering system is required) or two on one frame and two (or one) on the other, so that steering is provided by the rotation of the frames with respect to each other. The wheels can easily be provided with a suspension system, but some of the throw of the legs is lost. As an alternative, the wheels can be located under some of the legs. In the case of a hexapod similar to WALKIE 6 (Figure 1, Figure 2) the wheels can be located under two legs of one frame and two of the other one, so that no purpose-designed steering system is required.


Figure 1: Mechanical subsystem of Walkie 6.2 prototype.<br />

Figure 2: Walkie 6.2 walking on outdoor rough terrain. Mount Etna, September 2002.


However, either the speed is very low and the legs act as a load-levelling suspension, or<br />

some compliant connection is required between the feet and the wheels. For low speed<br />

operation on fairly level ground the suspension system can be dispensed with even without

using the legs as an active suspension system if three wheels are used. Since there are<br />

many possible layouts (number and location of wheels, presence of a dedicated<br />

suspension and steering system, control strategy of the body and legs while rolling), a<br />

vehicle of this type needs to be designed according to its tasks, the type of terrain expected

and other operational constraints.<br />

Digital platform<br />

The objective of combining an embedded actuator control unit with enough software and<br />

firmware power to seamlessly integrate communication, positioning and payload needs can

be achieved by means of a modular digital platform featuring a processing unit and a<br />

logic device.<br />

The processing unit is based on a combination of a general purpose processor (e.g. ARM9) with integrated communication devices (802.11), and a digital signal processor (DSP) with mixed-signal I/O (A/D and PWM drives). The general purpose processor should run a real-time operating system, for instance the Linux-based RTAI.

The presence of a field programmable gate array (FPGA) allows substantial freedom for the later integration and management of digital devices needed by the system or its payload, e.g. a stereoscopic vision subsystem.

The proposed digital platform is to be intended as a prototyping system that can be used<br />

in real terrain conditions without the need to go through a full engineering phase, at least

for small series production.<br />

The use of Open Source software as well as firmware and hardware is extensively<br />

adopted in order to use what is already available and, at the same time, leave the project<br />

fully open to specific modifications and adaptations.<br />

Wireless communication and positioning<br />

A wireless communication system is thought to be a key feature for almost any<br />

application of mobile robotics. A solution based on a widely known standard such as<br />

IEEE 802.11 should lead to a low cost solution that makes use of available hardware<br />

devices and software drivers and applications.<br />

The relatively short range, less than 1 kilometer in open field, may be conveniently<br />

extended by means of multihop connections within a fleet of mobile robots that “keep in<br />

touch” with a central operating center via an ad hoc network. During the search phase of operation, data streaming is used to convey detection sensor information to the central unit, to be added to the location map. On the other hand, during the actual demining phase teleoperation may be needed; a transport protocol for real-time communication, such as the Real-time Transport Protocol (RTP), is to be adopted.
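As an illustration of the search-phase data streaming described above, the following minimal Python sketch sends detection-sensor samples, tagged with the robot's estimated position, to the central operating centre over a UDP socket. The message format, port and address are hypothetical illustrations and are not part of the proposed platform; the actual system would rely on the 802.11 link and, for teleoperation, on RTP.

    # Minimal sketch (assumptions: message format, address and port are invented).
    import json
    import socket
    import time

    CENTRAL_UNIT = ("192.168.1.10", 5005)   # hypothetical address of the operating centre

    def stream_detection(sock: socket.socket, x: float, y: float, intensity: float) -> None:
        """Send one detection sample as a small JSON datagram."""
        sample = {"t": time.time(), "x": x, "y": y, "detector": intensity}
        sock.sendto(json.dumps(sample).encode("utf-8"), CENTRAL_UNIT)

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # Example: report one reading taken at position (12.3, 4.5) with intensity 0.82
        stream_detection(sock, 12.3, 4.5, 0.82)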

The combination of autonomous moving capabilities and positioning on field is another<br />

topic of current research. Global Positioning System (GPS) may be used to locate the


mobile robot on a map in order to produce a reasonable reference route, leaving it to its autonomous capabilities to deal with local obstacles.

Vision<br />

The stereoscopic configuration of the proposed vision system helps to find the distance of possible “visible” obstacles on the trajectory of the rover. It is composed of two CMOS

cameras that are interfaced with the digital platform to perform camera handling and<br />

configuration on FPGA and image management, analysis and compression on the DSP.<br />

The vision system algorithm is based on the principle that the two images acquired by the<br />

two CMOS sensors are very similar (the distance between the two CMOS sensors will be<br />

10cm). This similarity is used to optimize the compression (scientific stereoscopic<br />

images) and for distance extraction of visible and near obstacles.<br />

This approach is cost effective and simplifies the whole control architecture of the mobile robot, introducing the vision system as a payload that is also used for navigation and obstacle avoidance tasks.
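As an illustration of the distance extraction mentioned above, the following minimal sketch applies the classical pinhole-stereo relation to the 10 cm baseline quoted in the text; the focal length (in pixels) and the example disparity are assumed values, not parameters of the actual cameras.

    # Minimal sketch (assumptions: focal length and example disparity are invented).
    BASELINE_M = 0.10          # distance between the two CMOS sensors (from the text)
    FOCAL_LENGTH_PX = 700.0    # assumed focal length expressed in pixels

    def depth_from_disparity(disparity_px: float) -> float:
        """Classical pinhole stereo relation: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible obstacle")
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    # Example: a feature shifted by 35 pixels between the two images lies at about 2 m.
    print(round(depth_from_disparity(35.0), 2))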

Group cooperation<br />

It is agreed that a number of complex tasks in mobile robotics are better carried out by a team of several simple mobile platforms than by a single complex unit. Since the various tasks involved in humanitarian demining depend heavily on the capacity to thoroughly explore an unknown mine-filled terrain and to react autonomously to sensor measurements related to mine threats, a team of autonomous or semi-autonomous mobile robots is the best choice in this case.

The number of team members may depend on the extent of the area to be searched, as

well as on the accepted level of a-posteriori probability of leaving a mine undiscovered.<br />

A team can in principle be composed of homogeneous or non-homogeneous robots: in

the first case, each team member carries the same payload and is functionally<br />

interchangeable with other members, while in the second case there can be different<br />

specialized robots: for example, while some robots are devoted to mine recognition, one<br />

robot can be in charge of localization and mapping, or take care of the medium range<br />

radio link with the coordinator.<br />

The hybrid platform proposed in this paper suits the mobility aims of the demining team well, since it can perform both high-speed deployment over a wide area and low-speed tasks, such as obstacle avoidance on difficult terrain or mine identification and neutralization.
The platform can, in principle, carry homogeneous or heterogeneous payloads, with minor modifications of its layout and power consumption.

The control system can be partitioned into two levels: a low level behaviour-based<br />

control, assuring fast and prompt action with minimal computing power, and a high-level<br />

layer, performing model-based tasks, such as map reconstruction and safe areas<br />

recognition.


Conclusions<br />

A number of areas of expertise and research topics currently developed at the Mechatronics Lab of Politecnico di Torino are thought to be well suited to the design and prototyping of a hybrid robotic platform to be used in the technical effort toward an effective humanitarian demining application.
The main objective of the proposed hybrid solution for light mobile robots is to make them available at reasonable cost, so that they can be applied in a number of demining scenarios, both for the main removal task and in support of it.

References<br />

[1] D.J. Todd, 1985, Walking Machines: an Introduction to Legged Robots, Kogan<br />

Page Ltd., London.<br />

[2] J. Peabody, H. B. Gurocak, 1998, Design of a Robot that Walks in Any Direction, Journal of Robotic Systems, pp. 75-83.

[3] M. E. Rosheim, 1994, Robot Evolution: The Development of Anthrobotics, Wiley,

New York.<br />

[4] Amati N., Chiaberge M., Genta G., Miranda E., Reyneri L.M, 2000, WALKIE 6–A<br />

Walking Demonstrator for Planetary Exploration, Space Forum, Vol. 5, N° 4 pp.<br />

259-277.<br />

[5] N. Amati, G. Genta, L.M. Reyneri, 2002, Three Rigid Frames Walking Planetary<br />

Rovers: a New Concept, Acta Astronautica, Vol. 50, N°.12, pp. 729-736.<br />

[6] S. M. Song, K. J. Waldron, 1989, Machines that Walk: the Adaptive Suspension<br />

Vehicle, Cambridge-MIT.<br />

[7] G. Genta and N. Amati, 2001, Planar motion hexapod walking machines: a new<br />

configuration, Proceedings of the Fourth Int. Conf. on Climbing and Walking<br />

Robots, London, pp. 619-629.<br />

[8] D.S. Goldin, S.L. Venneri, A.K. Noor, 2000, The great out of the small, Mechanical<br />

Engineering, Vol. 122. N°. 11, pp. 70-79.<br />

[9] G. Genta and N. Amati, 2002, Non-Zoomorphic Versus Zoomorphic Walking<br />

Machines and Robots: a Discussion, European Journal of Mechanical and<br />

Environmental Engineering, Vol. 47. N°. 4, pp.223-237.<br />

[10] A. Halme, I. Leppanen, M. Montonen, S. Ylonen, 2001, Robot motion by simultaneously wheel and leg propulsion, 4th Int. Conference on Climbing and Walking Robots, pp. 1013-1020.


Ground Adaptive Manipulation of GPR for Mine Detection System<br />

Hidenori Yabushita, Kazuhiro Kosuge and Yasuhisa Hirata<br />

Department of Bioengineering and Robotics, Tohoku University,<br />

Aoba-yama 01, Sendai 980-8579, Japan,<br />

email:hidenori,kosuge,hirata@irs.mech.tohoku.ac.jp<br />

Abstract<br />

In this paper, we propose a mine detection system which consists of a ground penetrating radar (GPR) and an arm for operating the GPR. The GPR is expected to be one of the most effective mine detection sensors, since it can detect both metal and plastic mines. Many researchers have studied the GPR for sensing underground conditions; however, they have not considered rough ground surfaces when using the GPR. In actual mine fields, a rough ground surface influences the GPR signal, and the underground information would be buried in noise caused by the ground surface. Manipulating the GPR along the ground surface would reduce the effect of the ground surface and yield more precise underground information. The proposed system therefore has a high capability for mine detection.

Keywords: Ground penetrating radar, Mine detection, Robotics, Ground scanning

1 Introduction<br />

In more than 70 countries, 120 million anti-personnel mines have been buried [1]. These mines are located not only in battlefields but also in residential areas, community roads, gardens and so on. It is necessary to detect landmines in a variety of situations. The purpose of this study is to realize the ground adaptive manipulation of a sensor head under various environmental conditions. The ground adaptive manipulation is a motion control for improving the GPR signal.
A GPR is expected to be one of the most effective mine detection sensors, since it can detect both metal and plastic mines. However, the GPR is not perfect, because the GPR signal includes unexpected signals such as the ground reflection and clutter reflections. It is very difficult to eliminate these unexpected signals from the received signal in order to obtain the expected signals from mines.

Most research topics concerning the GPR deal with signal processing to extract underground characteristics [2][3][4]. Those studies try to solve the above problems and have succeeded. However, these conventional approaches are not sufficient, because they assume only a flat ground surface. If we use the GPR in an actual sensing area, we should consider the effect of the shape of the ground surface.
Figure 1: Concept model of mine detection robot.

An uneven ground surface produces complex reflections, and the ground reflection makes it difficult to detect mines from the GPR signal. The effect of the ground reflection could be reduced by signal processing; however, the signal processing also affects the underground information. To reduce the effect of the ground reflection, position control of the GPR is important [5].
If position control of the GPR were available, the relative distance between the ground surface and the GPR could be kept constant, so that the ground reflection would be detected as a constant-intensity signal and the characteristics of underground objects could be found from the received signal. In this paper, this position control of the GPR, which we call the ground adaptive manipulation, is shown to be useful for mine detection.

2 Concept of proposed mine detection<br />

robot system<br />

Robot technologies are expected to be applied to hazardous fields such as space, disaster sites, nuclear power plants and so on. Minefields are also dangerous areas to which we hope to apply robot technologies. By applying robot technologies to the demining process, we could execute dangerous tasks safely.
In this section, we introduce the concept model of our mine detection robot, illustrated in Fig.1. The concept model has three characteristics. The first characteristic is the low-pressure tire. The low-pressure tire has great flexibility and makes a broad contact surface. A broad contact surface decreases the ground pressure below the actuation force of landmines. The concept robot with these tires can carry out the demining process on undemined fields without exploding the mines. The ability to drive over undemined areas enables multiple robots to execute mine detection in parallel.

The second characteristic is the small-reaction manipulator for realizing high-speed and precise sensor head maneuvers. High-speed sensing will reduce the working time and improve the cost effectiveness. The small-reaction manipulator counteracts the reaction force of the sensor head motion by using counterweights. This functionality is important for the concept robot: since its low-ground-pressure tires are extremely flexible, fast sensor head motion would cause oscillations of the robot. Oscillations of the robot impair the accuracy of the sensor position and also make the GPR signal complex, so that a signal from a buried object is hidden in complex noise caused by the oscillations. By using the small-reaction manipulator, we can execute high-speed and precise detection motions without oscillations. The small-reaction manipulator has been developed in our laboratory, and its functionality has already been validated in our research [6].

The third characteristic is the ground adaptive manipulation of the sensor head, which is described in the following sections. The purpose of the ground adaptive manipulation is to reduce the effect of the ground reflection. The ground reflection accounts for a large proportion of the received signal, and a complex ground reflection hides the signal from a buried object. The ground adaptive manipulation keeps the distance between the GPR and the ground surface constant in order to keep the ground reflection constant. It is easy to extract a target signal from a received GPR signal which has a constant ground reflection.

3 Sensor head maneuver along ground<br />

surface<br />

The ground adaptive manipulation is a sensing motion which traces the ground surface. By keeping the distance between the sensor head and the ground surface constant, it is possible to reduce the effect of the ground surface and to emphasize the characteristics of buried objects.
Figure 2: Cross-section diagram of a mine field with GPR (showing the buried object reflection and the effect of the ground surface; depth axis in [m])
Data processing of the sensing results can also decrease the effect of the ground surface. However, the data processing also affects target signals, and as a result a target signal will be deformed. On the other hand, the ground adaptive manipulation can obtain underground information without such deformation.

The sensing system uses the GPR for the detection of landmines, which are made not only of metal but also of plastic. However, conventional GPR systems have a defect in their operation, because the position control of the GPR has not taken the scanning conditions into account. It is difficult to extract the target signal from the received signal, because the received signal consists of several complex components such as the target reflection, the ground-surface reflection, and clutter reflections. The greater part of the received signal comes from the ground-surface reflection; in particular, the signal reflected from shallow landmines is buried in the ground-surface reflection. A major research topic in GPR signal processing is to extract the target signal from this complex received signal.

Fig.2 shows the cross-section diagram of a flat mine field measured by the GPR. From Fig.2 we can see that the ground surface reflection has a large intensity in the received signal. An uneven ground surface generates a complex signal depending on the shape of the ground surface. If the sensor head traces the ground surface, the ground surface reflection has a constant intensity. In this case, we can eliminate the ground reflection and recognize the target characteristics from the received signal. We call this tracing of the ground surface the ground adaptive manipulation. By applying the ground adaptive manipulation to the GPR system, we can obtain clear underground information which emphasizes the characteristics of buried objects. From the next section, we show the details of the proposed mine detection system.


Figure 3: Laser range finder (LMS 200) for ground<br />

scanning<br />

4 Mine detection system<br />

The ground adaptive manipulation is promising for improving the GPR signal by reducing the ground reflection. The proposed mine detection system consists of the GPR, the surface capturing system, and the manipulator. The surface capturing system obtains a 3D map of the ground surface, and the manipulator realizes an effective path control of the GPR. Details of these functions are discussed below.

4.1 GPR performance<br />

The GPR is utilized as the landmine detector in the system. In this subsection, we briefly introduce the GPR performance. The GPR is a step-frequency radar which uses microwaves from 0 [GHz] to 2 [GHz]. The GPR measures the reflection intensity from underground objects at each frequency step. By using the inverse Fourier transform, we obtain the time-domain signal from the measured information. A step-frequency radar has several advantages compared with an impulse radar. For example, the time-domain signal measured by the step-frequency radar is sharper than that of an impulse radar, and the feed power required by the step-frequency radar is smaller than that of the impulse radar.
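The following minimal sketch illustrates the inverse Fourier transform step described above, turning step-frequency reflection measurements into a time-domain trace; the number of frequency steps and the synthetic single-reflector data are assumptions used only for illustration.

    # Minimal sketch (assumptions: number of steps and synthetic data are invented).
    import numpy as np

    def time_domain_trace(complex_reflection: np.ndarray) -> np.ndarray:
        """complex_reflection[k] is the measured reflection coefficient at the k-th
        frequency step (0 .. 2 GHz). The inverse FFT yields the time-domain response."""
        return np.fft.ifft(complex_reflection)

    # Example with synthetic data: a single reflector appears as a peak in time.
    n_steps = 256
    freqs = np.linspace(0.0, 2.0e9, n_steps)
    delay = 3.0e-9                                   # 3 ns two-way travel time (assumed)
    measured = np.exp(-2j * np.pi * freqs * delay)   # ideal point reflector
    trace = np.abs(time_domain_trace(measured))
    print(int(np.argmax(trace)))                     # index of the peak in the trace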

Table 1: Performance of laser range finder
Type: LMS 200
Max range: 8 [m]
Resolution: 10 [mm]
Statistical error: ±15 [mm]
Scan speed: 200 [ms/scan]
Angular range: 180 [deg]
Angular resolution: 0.5 [deg]

Figure 4: Example of ground scanning (showing the laser range finder, the measurable area and a blind spot)

Figure 5: 3D captured ground surface<br />

4.2 Ground surface capturing system<br />

In order to operate the GPR along the ground surface, it is necessary to measure the shape of the ground surface. We use the SICK LMS 200 laser range finder, whose specification is shown in Table 1. Using the laser range finder, we obtain the distance between the laser range finder and the ground surface with respect to a plane. The laser range finder scans the ground surface from two different views, because a scanning path from a single position may have blind spots, as shown in Fig.4. To avoid blind spots, we superpose two sets of scanning data measured from the two different views.
The superposition of the measured data produces the 3D map of the ground surface. For the superposition of the data it is necessary to collect several slices of ground scanning data. To collect the ground scanning data, the laser range finder is mounted on the 2nd joint of the manipulator, as shown in Fig.3. The height of the laser range finder is 60 [cm] above the ground surface.
To create a 3D map of the ground surface, we digitize the scanning data along the x axis and the y axis. In this study we choose 1 [cm] for the grid interval of


the 3D map. An example of the 3D ground surface<br />

is shown in Fig.5. From Fig.5 we can see that the<br />

map has no blind spot.<br />

The laser range finder is connected to a PC through RS-232C. The scanning process takes 200 [ms] per slice image. For scanning a ground surface which is 700 [mm] in width and 900 [mm] in length, the total scanning process takes around 30 [s] at a 1 [cm] interval. By fitting a 3rd-order function to the 3D ground surface map, we can generate a smooth sensing path for the GPR.
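As an illustration of this path generation, the following minimal sketch fits a 3rd-order polynomial to one row of the 1 [cm]-grid surface map and offsets it by a constant stand-off to obtain the GPR sensing path; the 4 [cm] stand-off reflects the value used later in the experiments, while the sample surface heights are invented.

    # Minimal sketch (assumptions: the sample surface row below is invented).
    import numpy as np

    GRID_INTERVAL_M = 0.01   # 1 cm grid interval from the text
    STAND_OFF_M = 0.04       # GPR kept about 4 cm above the surface in the experiments

    def sensing_path(surface_row: np.ndarray) -> np.ndarray:
        """Return the GPR height profile along one scan row of the 3D map."""
        x = np.arange(len(surface_row)) * GRID_INTERVAL_M
        coeffs = np.polyfit(x, surface_row, deg=3)    # 3rd-order fit of the surface
        smooth_surface = np.polyval(coeffs, x)
        return smooth_surface + STAND_OFF_M           # constant stand-off above ground

    # Example: a gentle 10 cm mound sampled on the 1 cm grid.
    row = 0.10 * np.exp(-((np.arange(90) - 45) ** 2) / 400.0)
    print(sensing_path(row)[:5].round(3))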

4.3 Mine detection manipulator<br />

The proposed system is designed and developed in order to realize the ground adaptive manipulation for high-performance mine detection. On uneven ground surfaces the ground adaptive manipulation is useful for obtaining a clear sensing image.
In this research we assume that the height of the unevenness of the ground surface is less than 30 [cm]. Conventional mine-detection systems could not be applied to such situations, because those systems exclude uneven ground conditions. The manipulator operates the sensor head along the 3D ground surface map which is made by the ground surface capturing system.

The developed mine detection manipulator is shown in Fig.6. The manipulator has 3 degrees of freedom. The movable range is 900 [mm] along the x axis, 700 [mm] along the y axis, and 400 [mm] along the z axis, as shown in Fig.7. The maximum speed is 0.6 [m/s] for the x and y axes and 0.4 [m/s] for the z axis, as shown in Table 2. These axis speeds enable mine detection over a wide area. The area conventionally covered by a human deminer is about 3-6 [m²] per day, whereas the developed manipulator can scan 1 [m²] in 15 minutes; if the proposed robot is used for 8 hours, it can cover 32 [m²].

Each axis of the manipulator is driven by an AC motor and a ball screw. The ball screws provide straight movement and precise position control. The GPR is mounted on the end-effector of the manipulator. Fig.8 shows the GPR system which we use in this research. We apply the ground adaptive manipulation to the manipulator; Fig.10 shows the result of the motion.

5 Experiments<br />

To evaluate the proposed system, we apply the control system to the mine detection system. In the following subsections we explain the experimental conditions and show the experimental results.

5.1 Experimental conditions<br />

We made a sand pool, illustrated in Fig.9, for testing the proposed mine detection system. The volume of the sand pool is 1.0 [m] × 1.0 [m] × 0.42 [m]. The sand pool has electromagnetic wave absorbers on each wall and on the bottom in order to prevent unnecessary artificial reflections. With these electromagnetic wave absorbers, the sand pool has electromagnetic properties almost the same as those of an actual field.
Figure 6: Mine detection manipulator
Figure 7: Movable range of the sensor head (x: 900 [mm], y: 700 [mm], z: 400 [mm])

For evaluating the proposed system, we made two ground surface forms as shown in Fig.11. The small mountain and the small valley each have a height of 10 [cm]. In these experiments we use two dummy mines as shown in Fig.12. These dummy mines have a similar shape and similar electromagnetic properties to real landmines. The size of the M-14 is 55 [mm] in diameter and 40 [mm] in height, and the size of the TYPE 72 is 77 [mm] in diameter and 40 [mm] in height.
The dummy mines are buried 10 [cm] deep at the top of the mountain or at the bottom of the valley. In the experiments, we scan the detection area at a 5 [mm] horizontal interval. When the ground adaptive manipulation was applied to the manipulator, the height of the GPR was kept at about 4 [cm] above the ground surface.

Table 2: Specification of mine detection manipulator
Range of movement: x: 900 [mm], y: 700 [mm], z: 400 [mm]
Maximum working speed: x: 0.6 [m/s], y: 0.6 [m/s], z: 0.4 [m/s]


Figure 8: Ground penetrating radar system (panels (a)-(e): control unit, PC and antenna)
Figure 9: Sand pool (wood board, electromagnetic wave absorber, dry sand; 1.0 [m] × 0.42 [m])
Figure 10: Ground adaptive manipulation

5.2 Results of mine detection<br />

In the experiments, we made two types of ground surface: a mountain and a valley. In each condition, we manipulate the GPR with two types of operation: one is constant height manipulation of the GPR, and the other is the ground adaptive manipulation. Fig.13 shows the result of mine detection with constant height manipulation of the GPR on the small mountain, and Fig.14 shows the result of mine detection with the ground adaptive manipulation.

Figure 11: Experimental ground surface ((a) mountain, (b) valley)
Figure 12: Imitation models of landmines ((a) M14, (b) TYPE 72)

From these results we can see that Fig.13 includes the effect of the ground surface, whereas Fig.14 shows a constant intensity of the ground reflection. From Fig.14 we can find the feature of a landmine easily.

Fig.15 shows the result of mine detection with constant height manipulation on the valley form, and Fig.16 shows the result of mine detection with the ground adaptive manipulation. From these results we can see that the ground adaptive manipulation keeps the intensity of the ground reflection constant. From Fig.16 we can find the feature of a landmine more easily than from Fig.15. Through these experimental results, the proposed mine detection system is validated.

6 Conclusion<br />

In this paper we have proposed the ground adaptive manipulation, which realizes a high landmine detection capability. By using the proposed system, we can emphasize the features of buried objects by keeping the ground reflection constant. The experimental results illustrated the validity of the proposed mine detection system.


Figure 13: Constant height manipulation on mountain surface ((a) sensor path, (b) without mine, (c) M14, (d) TYPE 72)
Figure 14: Ground adaptive manipulation on mountain surface ((a) sensor path, (b) without mine, (c) M14, (d) TYPE 72)
Figure 15: Constant height manipulation on valley surface ((a) sensor path, (b) without mine, (c) M14, (d) TYPE 72)
Figure 16: Ground adaptive manipulation on valley surface ((a) sensor path, (b) without mine, (c) M14, (d) TYPE 72)
References

[1] Jacqueline MacDonald et al.: Alternatives For<br />

LANDMINE DETECTION. RAND, 2003.<br />

[2] Andria van der Merwe, Inder J. Gupta: A Novel Signal Processing Technique for Clutter Reduction in

GPR Measurements of Small, Shallow Land Mines<br />

, IEEE TRANSACTIONS ON GEOSCIENCE AND

REMOTE SENSING, vol.38, No.6, pp.2627-2637,<br />

2000.<br />

[3] Nada Milisavljevic, Isabelle Bloch: Sensor Fusion

in Anti-Personnel Mine Detection Using a Two-<br />

Level Belief Function Model, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS-PART

C:APPLICATIONS AND REVIEWS, vol.33, No.2,<br />

pp.269-283, 2003.<br />

[4] Guangyou Fang, Motoyuki Sato: GPR detection<br />

of landmine by wavelet transform. Proceedings of<br />

the 6th SEGJ International Symposium, pp.449-454,

2003.<br />

[5] Claudio Bruschini, Bertrand Gros, Frederic Guerne,<br />

Pierre-Yves Piece, Oliver Carmona: Ground penetrating<br />

radar and imaging metal detector for antipersonnel<br />

mine detection, Journal of Applied Geophysics,<br />

vol.40, pp.59-71, 1998.<br />

[6] Hidenori Yabushita, Yasuhisa Hirata, Kazuhiro Kosuge:

Small Reaction Manipulator for GPR Sensing<br />

Head Maneuver. SSR2003 Proceedings of the First<br />

International Symposium on Systems & Human Science,

pp.339-344,2003.


A SYSTEM FOR MONITORING AND CONTROLLING A CLIMBING AND<br />

WALKING ROBOT FOR LANDSLIDE CONSOLIDATION<br />

Leif Steinicke<br />

David Dal Zot<br />

Thierry Benoist<br />

Space Applications Services<br />

Leuvensesteenweg 325<br />

B-1932 Zaventem<br />

Belgium<br />

www.spaceapplications.com


1. INTRODUCTION
2. CLAWAR TECHNOLOGY USED FOR SLOPE/LANDSLIDE CONSOLIDATION
2.1 Slope consolidation scenario
2.1.1 Setting the anchor
2.1.2 Positioning
2.1.3 Drilling
2.1.4 Iterating
3. REQUIREMENTS ON ROBOT TECHNOLOGY IN SLOPE/LANDSLIDE CONSOLIDATION
3.1 Requirements on the robot
3.1.1 Moving and positioning
3.1.2 Power supply
3.1.3 On-board equipment
3.1.4 Other system features
3.2 Requirements on the robot HMI
3.3 Robot HMI design
3.3.1 General considerations
3.3.2 Tablet as a command console
3.3.3 HMI Operator Mode
3.3.4 HMI Super-User Mode
4. CONCLUSION



1. Introduction<br />

This paper describes the application of space robotics control technology to the slope/landslide consolidation sector. The description is made from the point of view of monitoring and controlling the walking/climbing machine used in the Roboclimber project.

Figure 1a.<br />

Figure 1a shows an early prototype of the Roboclimber, used to test the concept,<br />

operating on a near vertical rock face. The process of stabilising a risky slope is initiated<br />

with a geological survey, after which a series of holes are made to consolidate the wall.<br />

The objective of a tele-operated climbing robotic system for the maintenance and<br />

consolidation of mountain slopes is to drill deep holes for the insertion of 20-metre

long rods. This saves time and operating costs and is less dangerous. Roboclimber is<br />

made by a consortium of European SMEs and builds upon the technology used to control<br />

space robotics on-board satellites or spacecraft such as the International Space Station.

Figure 1b.<br />

Figure 1b shows the robot in an early phase of development without drilling equipment.



2. CLAWAR Technology Used for Slope/Landslide<br />

Consolidation<br />

2.1 Slope consolidation scenario<br />

This section describes a typical use of monitoring and control technology in the field of<br />

slope consolidation. Figure 2 below shows a rendering of a slope to be consolidated.<br />

Figure 2. Rendering of a slope to be consolidated<br />

The part of the slope needing consolidation is indicated in light green.<br />

The following describes the different stages involved in consolidation of a slope, using a<br />

climbing and walking robot. The general approach is that the robot climbs down from an<br />

anchoring point at the top of the slope, and then is controlled to walk to a number of<br />

positions on the slope where it will drill rods into the slope. The typical steps are thus:<br />

• setting the anchor on the top of the slope;



• climbing down the slope from the anchoring point<br />

• positioning the robot on the slope at a drilling location;<br />

• drilling a rod (or several rods) into the slope at drilling location;<br />

• moving to a new drilling location and repeating the drilling, until all drilling locations reachable from the current anchor point have been consolidated with rods;
• moving the robot back up the slope and walking it to the next anchoring position;
• iterating the process until the entire slope is consolidated with rods.

During the whole process, at all times the monitoring and control system will keep the<br />

operator aware of the situation through a Human-Machine Interface (HMI) providing him<br />

with necessary information about the robot, such as its position, its speed, leg position,<br />

and various relevant telemetry for each particular task.<br />

2.1.1 Setting the anchor<br />

The first step to be completed is for the robot to anchor itself to the top of the slope at the<br />

edge of the zone to be drilled. The robot may feature a so-called tirfor which will allow it<br />

to use two ropes attached to two anchor points to climb down the cliff.<br />

At this stage, the operator can control the robot and move it around so that it can go by<br />

itself to the first designated anchor point, see figure 3.



Figure 3. Positioning the robot before anchoring<br />

The robot is positioned on the edge of the drilling zone, and is ready to anchor itself.<br />

Individual drilling sites are shown in red. The operator can give directions to the robot at<br />

a high level and does not have to “worry” about individual components - legs, joints - to<br />

achieve this, though he is capable of controlling the robot at this level if deemed<br />

necessary. When the robot reaches the designated anchor point, the operator orders the<br />

robot to anchor itself. The robot will in effect drill a hole and insert an anchor attached to<br />

the extremity of one of its tirfor ropes.<br />

The operator repeats the process to move and anchor the robot to the second anchor point.<br />

The robot is now properly anchored, with its two ropes attached to their corresponding anchor points as shown in figure 4, and is ready to start its descent.



2.1.2 Positioning<br />

Figure 4. Anchoring<br />

With its ropes secured on the top of the cliff, the robot must now position itself above the<br />

drilling sites. In an ideal drilling strategy, the robot will drill holes some 2 meters apart in<br />

every direction. Owing to stability issues, it is not possible to drill more than approximately three holes horizontally for each anchor point without making the lateral deviation too big (refer to figures 5a and 5b).

To achieve this, the operator will send high level commands to the robot to climb down,<br />

up, left or right. The robot will automatically use its tirfors and legs<br />

in a joint effort to manoeuver horizontally and laterally, climb up and down, avoid<br />

obstacles, and so on.



Fig 5.a The robot in a stable drilling position.<br />

Fig 5.b The robot at a drilling location, at the limit of its reach envelope.<br />

These figures illustrate the stability issue when moving the robot laterally on the slope.<br />

2.1.3 Drilling<br />

Before the drilling can actually commence, the robot needs to be as stable as possible.<br />

Therefore, the operator will manually fine-tune the position of each leg to ensure proper<br />

contact with the ground. Fine-tuning the position of a leg involves manually trying different (rotation, extension, elevation) triplets until the telemetry is satisfactory. When

the four legs are properly positioned, the operator specifies a depth and instructs the robot<br />

to start drilling. The robot will automatically load and use drilling rods and will



automatically recover them when the specified depth is reached. Of course, should an<br />

anomaly occur, manual control will immediately be returned to the operator. The operator

will be kept informed via a progress indicator displaying relevant information (drilling<br />

speed, current depth, heuristic analysis from robot, etc.)<br />

2.1.4 Iterating<br />

After the drilling is complete, the robot will be moved to the next drilling site as<br />

explained previously. If the robot has reached the bottom of the drilling zone, the<br />

operator will order it to climb all the way back up to the top of the cliff, back at the<br />

anchor site, using high level commands to climb up and walk the final distance to an<br />

anchor point.<br />

Recovering and setting up an anchor is performed using high level commands. After the<br />

new anchor point has been chosen, the process is then iterated until all the drilling<br />

locations have been processed.



3. Requirements on Robot Technology in<br />

Slope/Landslide Consolidation<br />

From the typical slope or landslide consolidation scenario described in the previous<br />

section, a number of requirements can be derived for the robot technology applied in this<br />

sector, and the associated monitoring and control.<br />

3.1 Requirements on the robot<br />

3.1.1 Moving and positioning<br />

A tele-operated climbing robot shall be able to move vertically, laterally and horizontally<br />

during slope consolidation.<br />

The robot structure should be provided with an ad hoc base plate to act as a skid support<br />

for quick vertical movement and as a protection against protruding rocks.<br />

The robot shall be able to move laterally and vertically by a combination of the ropes by<br />

which it is suspended from the top of the slope and its own legs.<br />

The positioning of the robot shall be provided by the robot/rope system.
Due to the working conditions, all on-board equipment should be well anchored to the robot frame and work independently of the position on the slope.

3.1.2 Power supply<br />

A climbing robot for slope consolidation is likely to require three types of power sources:<br />

• pneumatic: compressed air (at e.g. 12-20 bar) for a drilling unit (for the operations<br />

of drilling and flushing debris from the drilling);<br />

• hydraulic: oil (at e.g. 200 bar) for a drilling unit (to rotate and advance), for the<br />

leg movements and other services;<br />

• electric: for sensors, control system, cameras and lighting.



3.1.3 On-board equipment<br />

A robot for slope consolidation is likely to require the following types of on-board<br />

equipment:<br />

• drilling rig;<br />

• drilling rods;<br />

• drilling bit;<br />

• 1 camera (with pan/tilt/zoom) for monitoring leg positions;

• 1 fixed camera for monitoring the drilling area;<br />

• electronics unit for performing the on-board control.<br />

3.1.4 Other system features<br />

The robot shall incorporate features that actively and passively ensure that the robot will

not become disconnected from the ropes that support it from the top of the slope.<br />

3.2 Requirements on the robot HMI.<br />

The HMI shall allow the operator to move the robot as a whole or component by<br />

component depending on the operational context (climbing, pre-drilling positioning,<br />

walking to anchor points, etc.).<br />

The HMI shall incorporate a simple to use and easily recognizable emergency stop<br />

button.<br />

The HMI shall allow the operator to manage and monitor the drilling, using high level<br />

macros or low level primitives.<br />

The HMI shall be able to operate in a harsh environment: wide temperature range,<br />

humidity, precipitation, dust, etc.<br />

The HMI shall be able to operate in a bright daylight environment, leading to special<br />

needs of screens to provide high contrast, and provide features to reduce reflections of<br />

light.<br />

The HMI shall be ruggedized to sustain being dropped and/or treated roughly.<br />

The HMI shall be user friendly and accessible to non computer-literate users. This<br />

requirement translates into employing devices such as joysticks and buttons, rather than<br />

keyboard and mouse.<br />

The HMI should follow a layered approach allowing the operator to access high level or<br />

low level commands depending on the situation.<br />

The HMI should support different modes, each mode being tailored and adapted to a

particular phase of the overall operation.



These modes are likely to include:<br />

• Prestart mode: used to specify terrain-specific data.
• Positioning mode:
- Moving/Climbing/Anchoring: the operator uses the joystick to indicate to the robot where to move (high level control).
- Fine-tuning: for a given leg, the operator will send directives such as "move forward 0.5 metre". Alternatively, the operator can adjust its rotation, extension and elevation.
• Drilling mode:
- High level operations: the operator specifies a depth and the HMI automatically has the robot drill a hole of the specified depth (high level command).
- Medium level operations: the operator controls the robot on a rod by rod basis and sends commands such as "insert rod", "remove rod" and so on.
- Low level operations: likely to be directly accessed only for testing and debugging. At this level, the operator would control the drilling subsystem at a very low level and would send commands such as "rotate joint x by n degrees".



3.3 Robot HMI design<br />

3.3.1 General considerations<br />

The command console will be PC-based. It must be light so that it can be carried around<br />

and easily strapped around the neck of the operator.<br />

As the robot has to be operated outdoors, the command console has to be resistant to harsh conditions (rain, dust) and the screen must have outdoor-display capabilities.
Finally, the console must provide remote control capabilities, to prevent the user from being too close to the slopes, and the display must be large enough for displaying camera views.
The camera system is network based; TCP/IP cameras typically broadcast video in the form of MJPEG, MPEG-1 or MPEG-4 streams.

3.3.2 Tablet as a command console<br />

The hardware solution that has been chosen is a Fujitsu-Siemens tablet PC.
Fig. 6. The Roboclimber command console
The 4121 model has an outdoor screen and a harsh-environment casing that enable operation even with dust or rain, and the screen is large enough to display all the relevant information for controlling the robot.

The control station also has an embedded 802.11b wireless Ethernet card.



3.3.3 HMI Operator Mode<br />

When running in Operator Mode, Roboclimber will be controlled using one joystick and<br />

a set of buttons (provided by the joystick).<br />

These buttons will be used to change the operational mode of Roboclimber.<br />

One button is used to cycle through the different modes.<br />

All the commands are sent via a TCP/IP connection.<br />

Fig. 7. The control software main display<br />

Figure 7 is an example of what the operator sees when he is controlling the robot in “high<br />

level mode”. With a single button, he can switch to the desired sub-system he wishes to<br />

control.



Fig. 8. The upper left leg is selected Fig. 9. The upper right leg is selected<br />

Operating the robot is done by selecting the desired move and setting the value or increment to reach.
Figures 8 and 9 show how the user fine-tunes the rotational angle of the upper left leg, by setting the angle to 42 degrees.
An arrow indicates the direction of the leg after the command has been confirmed, and a real-time display is provided during the move.
It is possible to send emergency stop signals during a move.
Also, for safety reasons, the server running on the robot pings the client regularly and immediately stops any move if the client does not answer (for instance, if the console runs out of battery power or if it breaks down).
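The following minimal sketch illustrates this robot-side safety behaviour: the robot periodically pings the console and halts any on-going move when no answer arrives in time. The message strings, port and timing values are hypothetical; only the ping-then-stop logic is taken from the text.

    # Minimal sketch (assumptions: address, port, message strings and timeout are invented).
    import socket

    CONSOLE = ("192.168.0.20", 6000)   # hypothetical address of the tablet console
    PING_TIMEOUT_S = 0.5               # assumed time allowed for a reply

    def console_alive(sock: socket.socket) -> bool:
        """Send a ping datagram and report whether the console answered in time."""
        try:
            sock.sendto(b"PING", CONSOLE)
            sock.settimeout(PING_TIMEOUT_S)
            reply, _ = sock.recvfrom(64)
            return reply == b"PONG"
        except socket.timeout:
            return False

    def safety_check(sock: socket.socket, stop_all_moves) -> None:
        """Call periodically from the robot control loop."""
        if not console_alive(sock):
            stop_all_moves()   # immediately halt any on-going move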

Fig. 10. Sending a command to rotate a joint to a specified angle value



3.3.4 HMI Super-User Mode<br />

Super-User mode provides access to low-level functionalities for testing or for resolving<br />

contingencies.<br />

Fig 11. The low level console<br />

Fig 12. A dialog displaying messages sent by<br />

the robot<br />

Figure 11 shows the low level console that is used to send commands to the robot. The user can select a command from a pull-down menu and modify the parameters prior to sending it.
Figure 12 shows the messages that are sent by the robot in response to the command.

“ACKNOWLEDGE $70 2” states that the command “INITIALIZE” has been properly<br />

received and queued for processing.<br />
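As an illustration, the following minimal sketch parses acknowledgement messages of the form shown above (e.g. “ACKNOWLEDGE $70 2”); since the meaning of the individual fields is not documented here beyond this example, the field names used below are illustrative guesses.

    # Minimal sketch (assumption: field names are guesses; only the message layout
    # "ACKNOWLEDGE <code> <argument>" is taken from the example in the text).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Acknowledge:
        command_code: str   # e.g. "$70" in the example above
        argument: str       # e.g. "2" in the example above

    def parse_robot_message(line: str) -> Optional[Acknowledge]:
        """Return an Acknowledge for ACKNOWLEDGE lines, None for any other message."""
        tokens = line.strip().split()
        if len(tokens) == 3 and tokens[0] == "ACKNOWLEDGE":
            return Acknowledge(command_code=tokens[1], argument=tokens[2])
        return None

    # Example with the message quoted in the text:
    print(parse_robot_message("ACKNOWLEDGE $70 2"))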

4. Conclusion<br />

Stabilising risky slopes is a dangerous job. Today it is done manually by workers who<br />

have to build and climb tall scaffolding and who remain exposed to siliceous dust and<br />

loud penetrating noise.<br />

Maybe the most important aspect of the Roboclimber is that it will make risky jobs safer.<br />

Thanks to automated interventions and the use of remote control, accidents related to<br />

operating on high scaffolding can be eliminated; additionally, any sudden soil movement<br />

will endanger nobody as the operators will all be working at a safe distance.


Sensor head and scanning manipulator for humanitarian de-mining<br />

Abstract<br />

P. Gonzalez de Santos, E. Garcia, J. Cobano and J. Estremera<br />

Industrial Automation Institute-CSIC<br />

Ctra. Campo Real, Km. 0,200- La Poveda<br />

28500 Arganda del Rey, Madrid, Spain<br />

Detecting and removing antipersonnel landmines is a significant humanitarian activity that must be accomplished all over the world. More than 100 million mines have been scattered over many countries during the last twenty years, and full de-mining would take several more decades, assuming no more mines were deployed in the future. New technologies such as improved sensors, efficient manipulators and mobile robots can help to clear these fields. This paper

presents both the configuration of a sensor head and a manipulator for detecting and locating<br />

antipersonnel landmines. The sensor head can detect certain landmine types, while the<br />

manipulator moves the sensor head over large areas. This paper describes the main features of<br />

these two subsystems that are a part of a whole system under development.<br />

1 Introduction<br />

Detection and removal of antipersonnel landmines is at present a serious political, economic,<br />

environmental and humanitarian problem. There exists a common interest in solving this<br />

problem, and solutions are being sought in several engineering fields. The best solution,<br />

although not the quickest, would be to apply a fully automatic system to this important task.<br />

However, any such solution still appears to remain a long way from succeeding. First of all,<br />

efficient sensors, detectors and positioning systems would be needed to detect, locate and, if<br />

possible, identify different mines. Next—and this is of paramount importance—adequate<br />

vehicles would have to be provided to carry the sensors over the infested fields. The case<br />

posited above would require simple sensor arrays or terrain-scanning manipulators using just<br />

one simple sensor. During de-mining operations, human operators would have to stay as far<br />

away as possible for safety; thus, tele-operation is also an important issue in this application.

Walking robots exhibit many advantages as vehicles for humanitarian de-mining. Legged<br />

mechanisms for such application have been under development for at least the last five years,<br />

and some prototypes have been already tested. TITAN VIII, a four-legged robot developed for<br />

general purposes at the Tokyo Institute of Technology, Japan (Hirose and Kato 1998), was one<br />

of the first walking robots adapted for de-mining tasks. AMRU-2, an electropneumatic hexapod<br />

developed by the Free University of Brussels and the Royal Military Academy, Belgium<br />

(Baudoin et al. 1999), and RIMHO2, a four-legged robot developed at the Industrial<br />

Automation Institute-CSIC, Spain (Gonzalez de Santos et al., 1999), are two more examples of<br />

walking robots used as test beds for humanitarian de-mining tasks. COMET-1 was perhaps the<br />

first legged robot developed on purpose for de-mining tasks. It is a six-legged robot developed<br />

by a Japanese consortium, and it incorporates different sensors and location systems (Nonami et<br />

al. 2000). The COMET team is currently engaged in developing the third version of its robot.<br />

These four robots are based on insect configurations, but there are also different legged robot<br />

configurations, such as sliding-frame systems, being tested as humanitarian de-mining robots<br />

(Habumuremyi 1998, Marques 2002). Summarising, there is an increasing activity in<br />

developing walking robots for this specific application field.<br />

2 The DYLEMA project<br />

The DYLEMA project is devoted to configuring a semi-autonomous system for detecting and locating antipersonnel landmines, and it has been conceived around a mobile robot based on



legs. The overall system is thus broken down into the following subsystems, illustrated in<br />

Figure 1.<br />

1. Sensor head. This subsystem contains the mine detector and additional elements for<br />

detecting the ground and objects in the way.<br />

2. Scanning manipulator. The sensor head is basically a local sensor. That means it is<br />

able to sense just one point. The efficiency of such a device can be improved by moving<br />

the sensor head through a large area. A manipulator seems to be the ideal device for this<br />

task.<br />

3. Locator. After detecting a suspect object, the system has to mark the object’s exact<br />

location in a database for subsequent analysis and deactivation. We considered that an<br />

accuracy of about ±2 centimetres is adequate for locating landmines. This accuracy can<br />

be obtained with commercial systems such as DGPS (Differential Global Positioning<br />

Systems).<br />

4. Mobile robot. It is a mobile platform to carry the different subsystems across infested fields, which is of vital importance for thorough de-mining.

5. Controller. The global control system will be distributed into two main computers, the<br />

onboard computer and the operator station. The onboard computer is in charge of<br />

controlling and co-ordinating the manipulator and leg joints, and also communicating<br />

with the DGPS, the detector and the operator station via radio Ethernet. The operator<br />

station is a remote computer in charge of defining the mobile robot’s main task and<br />

managing the potential-alarm database.<br />

3 The sensor head<br />

There are different sensor technologies for detecting mines. The simplest sensors are metal detectors; they are lightweight and easy to use. However, they only detect mines that have

metal parts, and they are inefficient with non-metallic mines (plastic mines). Other sensor types<br />

are required in such cases, such as sensors based on ground-penetrating radar (GPR) (Gader et<br />

al. 2000), chemical sensors (Albert et al. 1999) or artificial noses (Rouhi 1997).<br />

An efficient detection system, it is commonly thought, should blend different technologies.<br />

The DYLEMA project, however, is devoted to the development of mobile-robotics techniques<br />

for landmine identification and location. The project’s scope does not include any sensor development.
Figure 1. DYLEMA system (operator station, radio Ethernet aerial, DGPS antenna, sensor head, scanner/manipulator, onboard computer, walking robot)
Therefore, the simplest way would be to select a metal detector as the de-mining

sensor, just to help in the detection and location of potential alarms. After a suspect object is<br />

detected, its location must be marked in the system database for further analysis and possible<br />

deactivation.<br />

The Schiebel AN-19/2 commercial mine-detecting set is used for the DYLEMA project's purposes. This detector is in service in the US Army as well as in several NATO countries. It has been designed to detect very small metallic objects, typically mines with a very small metal content. This detector and the full sensor head are shown in Figure 2. The sensor head consists of a support holding the metal detector plus additional range sensors (infra-red sensors) for detecting the ground and controlling sensor-head height and attitude. These sensors are located in pairs defining the upper and lower limits of the band in which the sensor head works. This array allows the controller to estimate the ground plane and thus to adapt the sensor head to terrain irregularities (see Figures 2 and 3).
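As a rough illustration of the ground-plane estimation step, the sketch below fits a plane to a few height readings such as the infra-red pairs could supply; the sensor layout and readings are invented, and the controller's actual algorithm is not described in this paper.

```python
# Illustrative sketch only: estimating the local ground plane z = a*x + b*y + c
# from a few range readings, as an infra-red array around the detector might
# provide. Sensor positions and readings here are invented.
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit; points is an (N, 3) array of x, y, z samples."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # a, b, c

def head_attitude(coeffs):
    """Pitch and roll (radians) needed to keep the detector parallel to the plane."""
    a, b, _ = coeffs
    return np.arctan(a), np.arctan(b)

# Four samples under the sensor head (x, y in metres, z = measured ground height).
samples = [(0.10, 0.10, -0.31), (0.10, -0.10, -0.30),
           (-0.10, 0.10, -0.33), (-0.10, -0.10, -0.32)]
pitch, roll = head_attitude(fit_ground_plane(samples))
print(f"pitch={np.degrees(pitch):.1f} deg, roll={np.degrees(roll):.1f} deg")
```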

A sensor ring based on a set of flex sensors is located around the sensor head as an obstacle detector. This sensor ring alerts the controller to the position of objects in the sensor head's trajectory, enabling the controller to steer around them. A flex sensor is a flexible-strip device whose electrical resistance changes with its shape. These sensors are located around the sensor head as shown in Figure 4. When a flex sensor touches an object it is deformed, so its electrical resistance changes, and by measuring the current through the sensor it is possible to infer the contact. The flexibility of these devices allows a soft interaction between the sensor head and objects in its path.
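A minimal sketch of that contact-inference idea follows, assuming a simple voltage divider read by an ADC; the component values and the read_adc() placeholder are illustrative, not the DYLEMA electronics.

```python
# Sketch of the contact-inference idea, not the actual DYLEMA electronics:
# a flex sensor in a voltage divider; bending changes its resistance, so a
# change in the measured voltage (via an ADC) signals a contact.
V_SUPPLY = 5.0          # volts across the divider (assumed)
R_FIXED = 10_000.0      # ohms, fixed divider resistor (assumed)
R_REST = 25_000.0       # ohms, flex-sensor resistance when straight (assumed)
CONTACT_DELTA = 5_000.0 # ohms of change treated as a contact (assumed)

def flex_resistance(v_measured):
    """Back out the flex-sensor resistance from the divider voltage."""
    return R_FIXED * (V_SUPPLY - v_measured) / v_measured

def contact_detected(v_measured):
    return abs(flex_resistance(v_measured) - R_REST) > CONTACT_DELTA

def read_adc():
    return 1.02  # placeholder reading corresponding to a bent sensor

if contact_detected(read_adc()):
    print("Obstacle touching the sensor ring: steer the head around it")
```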

4 The scanning manipulator

The DYLEMA project uses a sensor head based on a metal detector, which is a device that senses a single point or a very small area. A scanning device is therefore needed that can sweep the sensor across large areas, and the easiest solution is a manipulator tailor-made for this task. Such a manipulator requires three DOFs for positioning the sensor in a 3D area: assuming that the system is scanning a non-flat area, motions in the x, y and z components are required. The sensor head must also adapt to small terrain inclinations, so two additional DOFs are needed at the wrist to control detector attitude. The mine detector has radial symmetry, so no additional DOF is needed for orientation control. To sum up, a manipulator with at least five DOFs is needed to accomplish the task.
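The sweeping itself can be pictured as a simple boustrophedon pattern of sensor-head targets that the five-DOF arm then tracks point by point; the following sketch uses invented workspace limits and step sizes and is not the project's actual scan planner.

```python
# Hedged illustration (not the project's scan planner): generating a simple
# boustrophedon sweep of sensor-head target points over the strip in front of
# the robot. Workspace limits and step sizes are invented numbers.
def sweep_targets(x_min=-0.4, x_max=0.4, y_min=0.5, y_max=0.9,
                  lateral_step=0.05, forward_step=0.10):
    """Yield (x, y) sensor-head positions: sweep laterally, then advance."""
    n_cols = int(round((x_max - x_min) / lateral_step)) + 1
    xs = [x_min + i * lateral_step for i in range(n_cols)]
    y, direction = y_min, 1
    while y <= y_max + 1e-9:
        for x in (xs if direction > 0 else reversed(xs)):
            yield x, y
        y += forward_step
        direction = -direction

targets = list(sweep_targets())
print(len(targets), "scan points")  # each point is fed to the arm's inverse kinematics
```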

Figure 2. Sensor head: a) solid model and b) prototype (metal detector and infra-red sensors).

Figure 3. Infra-red sensors in the sensor head, defining the ground estimation plane.

The manipulator is designed to carry the sensor head, so the design is optimised to carry just this load. First, the load is balanced so that the detector can be moved ±45º about its pitch and roll wrist axes with the lowest torque. This is accomplished by placing the detector in a configuration in which no torque is required in the normal position (detector levelled at rest). An RRR arm configuration is good enough for this application: mobility is adequate and, because the links lie along a single vertical plane, there will be fewer collisions with the environment (assuming the robot's body is levelled). Another key design point is to mount the joint motors at the positions required to balance the loads and decrease the required torques.

The motors have been integrated inside the mechanical structure to decrease the volume swept by the manipulator when moving, which reduces the number of potential collisions between the manipulator and the environment. Joints 2 and 3 are based on a spiroid reducer, which also turns the output shaft through 90 degrees, thus allowing the integration of the motor inside the manipulator structure (see Figure 5).

Figure 5a shows a detailed design of the scanning manipulator taking into account the aforementioned design requirements and Figure 5b shows the prototype. Table 1 lists the manipulator's features. Some of these features, such as manipulator-link lengths, depend on the robot's dimensions (body height and leg span).

Figure 4. Flex sensors around the sensor head.

Figure 5. Scanning manipulator: a) detailed design and b) prototype (attachment to robot's body, shoulder motor and joint, elbow motor and joint, pitch motor, roll motor, pitch-and-roll joint).

Table 1. Main scanning manipulator features

Joint/Link    Link length (mm)   Motor power (W)   Gearing   Mass (kg)
1             60                 14                246:1     1.5
2             341                72                287:1     2.1
3             341                26                489:1     1.9
4             --                 12                246:1     0.2
5             200                12                246:1     --
Manipulator                                                  5.7
Sensor head                                                  2.2
Total                                                        7.9

5 Conclusions

Detection and location of antipersonnel landmines is being performed mainly by human operators using manual equipment.


There is worldwide interest in eradicating deployed landmines, and solutions are coming from new, emerging engineering fields. The robotisation of this task would bring many benefits to communities in many countries. Although new sensor technologies are required to detect landmines efficiently, mobile robots must also be improved to carry those sensors safely. Legged locomotion offers important advantages for moving over natural terrain and appears to be a good solution for carrying mine sensors efficiently over infested fields.

Some preliminary work has been done to study the potential of walking robots for de-mining. The DYLEMA project is one more attempt to use walking robots for humanitarian de-mining tasks. This paper addresses the development of a manipulator and a sensor head that are part of the DYLEMA project, which is also briefly introduced.

The system has to be completed with the tools needed for building databases of potential alarms and providing the operator with adequate images and graphs. The incorporation of new sensors, detectors and software for signature analysis will be addressed in the second phase of this project.

Acknowledgement

This work has been funded by the Spanish Ministry of Science and Technology under Grant CICYT DPI2001-1595.

References

Albert, K.J., Myrick, M.L., Brown, S.B., Milanovich, F.P. and Walt, D.R. (1999). “High-speed fluorescence detection of explosives vapour”, SPIE, 3710, pp. 308-314.

Baudoin, Y., Acheroy, M., Piette, M. and Salmon, J.P. (1999). “Humanitarian de-mining and robotics”, Mine Action Information Center Journal, Vol. 3, No. 2.

Gader, P.D., Nelson, B., Frigui, H., Vaillette, G. and Keller, J. (2000). “Landmine detection in ground penetrating radar using fuzzy logic”, Signal Processing, Special Issue on Fuzzy Logic in Signal Processing (Invited Paper), Vol. 80, No. 6, pp. 1069-1084.

Gonzalez de Santos, P., Armada, A., Estremera, J. and Jimenez, M.A. (1999). “Walking machines for humanitarian de-mining”, European Journal of Mechanical and Environmental Engineering, Vol. 44, No. 2, pp. 91-95.

Habumuremyi, J.C. (1998). “Rational designing of an electropneumatic robot for mine detection”, Proceedings of the 1st International Conference on Climbing and Walking Robots, pp. 267-273, Brussels, Belgium.

Hirose, S. and Kato, K. (1998). “Quadruped walking robot to perform mine detection and removal task”, Proceedings of the 1st International Conference on Climbing and Walking Robots, pp. 261-266, Brussels, Belgium.

Marques, L., Rachkov, M. and Almeida, A.T. (2002). “Control system of a de-mining robot”, Proceedings of the 10th Mediterranean Conference on Control and Automation, Lisbon, Portugal, July 9-12.

Nonami, K., Huang, Q.J., Komizo, D., Shimoi, N. and Uchida, H. (2000). “Humanitarian mine detection six-legged walking robot”, Proceedings of the 3rd International Conference on Climbing and Walking Robots, pp. 861-868, Madrid, Spain.

Rouhi, A.M. (1997). “Land Mines: Horrors begging for solutions”, Chemical & Engineering News, Vol. 75, No. 10, pp. 14-22.



Live Driving Video System (LDV): command, control and communicate with the robotic device

P.T. Gschwind, M. Stuber, Innosuisse Corp, Murten, Switzerland

The LDV system took 15 years of development, and it is now time to integrate it into applications.

The LDV system

The LDV system (Live Driving Video system) makes it possible to control or manipulate an unmanned vehicle or object in real time, even when it is miles away. Such activity normally induces so-called simulator sickness; with the LDV system there is no such problem, and the operator can work with it for hours on end. Operations over large distances are possible. The LDV technology replaces the usual remote control based on a standard screen and joystick. With the LDV system the operator feels, after only a few seconds, as if he were inside the moving object, or part of it when manipulating a robot. The advantages of this approach show up on the one hand in the response time (milliseconds) and on the other hand in the improved range of manoeuvres (speed and precision). For instance, it is enormously difficult to steer an unmanned vehicle around a stone or a hole in the road by means of a joystick and screen, particularly when driving at 70 kilometres an hour. With the LDV system, however, the pilot feels and steers the remote vehicle as if he were in it: all head, hand and foot movements are replicated instantly in the vehicle. The controller/pilot reads the instrumentation in the remote vehicle via the video system. The driver, seated in a following vehicle (or at a stationary location), controls the remote vehicle by radio transmission.

Radio transmission is the means of remote control. In the unmanned vehicle, a camera is installed which replicates the head movements of the human pilot. Through this camera system the “simulator driver” reads the instruments in the car and can also look towards the rear-view or side-view mirrors. The camera image is transferred to the human driver through the simulator helmet or eyeglasses. After 15-30 minutes of training, the human driver controls and monitors the remote vehicle with an acute human touch, despite the absence of the centrifugal forces present in ordinary motoring.
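The head-to-camera coupling described above can be pictured with the following minimal sketch; the tracker and servo interfaces (read_head_orientation, send_servo) and the angle limits are assumptions for illustration, not the actual LDV implementation.

```python
# Purely illustrative sketch of the head-to-camera coupling described above;
# the real LDV implementation is not detailed in this paper. The sensor and
# servo interfaces used here are assumptions.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def head_to_camera_command(head_yaw_deg, head_pitch_deg,
                           pan_limit=90.0, tilt_limit=45.0):
    """Map the pilot's head yaw/pitch onto the remote camera's pan/tilt servos."""
    return (clamp(head_yaw_deg, -pan_limit, pan_limit),
            clamp(head_pitch_deg, -tilt_limit, tilt_limit))

def control_step(read_head_orientation, send_servo):
    """One iteration: read the helmet tracker, command the vehicle camera."""
    yaw, pitch = read_head_orientation()          # degrees, from the helmet tracker
    pan, tilt = head_to_camera_command(yaw, pitch)
    send_servo("camera_pan", pan)                 # sent over the radio link
    send_servo("camera_tilt", tilt)

# Example with stand-in callbacks.
control_step(lambda: (12.0, -5.0), lambda name, angle: print(name, angle))
```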

The main aim was to develop a remote control that allows the user to manipulate an object in a natural way: to drive a car as you always drive a car, to pilot a helicopter as you always do, to carry out demining as you always do.

LDV transmits all desired movements to servo-motors or robots and, most importantly, transmits head movements instantly, so that the operator feels virtually present at the other location. All this is possible without getting sick: LDV does not cause nausea, the so-called simulator sickness.

LDV and Demining: The Auto-Robot

Why an Auto-Robot?

Today NGOs and peacekeeping forces are deployed in various crisis areas, and international UN or NATO troops can be found around the world. It is on the roads that many relief and support vehicles fall prey to undetected landmines. Over time, seasonal heavy rainfall, erosion and landslides uncover such traps, and in unsecured areas mines are positioned at short notice. Yet the time for a relief operation is usually limited, so it is often not worthwhile to carry out a dedicated mine survey in advance of such a mission; certain roads are also not driven on at all because of other uncertainties. This type of situation is typical in countries like Angola, where certain villages can only be supplied by air because the roads are dangerous.


LARS can be used in these crisis situations to open relief roads for NGOs and peacekeeping forces in a simple way.

Principle

Our goal is the development of a system to patrol, examine and detect the presence of landmines on roads and bordering surfaces, and thus to enable the safe supply and support of field (non-combat) operations. The “LDV-Auto-Robot System” (LARS) is simply installed in a regular road vehicle, which is then steered remotely via the LARS system. A LARS simulator is installed in a following vehicle (convoy) and can run at the same speed as the unmanned vehicle to control it safely. The unmanned vehicle detects mines using scanning equipment and special installations.

How does it work?

The “LDV-Auto-Robot System” (LARS) can be installed simply. INNOSUISSE would specify a vehicle that is available worldwide: a model with power steering (e.g. a Toyota pick-up) and an automatic gearbox. The flexibility inherent in the installation of the robotic device makes it suitable for a variety of vehicles, and the appropriate vehicle is considered with each application proposal. Installing LARS requires the driver's seat to be removed and replaced by rails to position LARS; the steering wheel is then removed, and the appropriate robot arms are connected to the steering post and the pedals. The vehicle is started through the LDV simulator and the driver takes over control. The instructions are transferred to servomotors; there are three sets of servomotors, installed in fallback or failsafe mode, to mitigate or avoid a malfunction. If the vehicle detonates a mine, the camera and the robot installation are protected from the usual landmine impact by virtue of the INNOSUISSE installation materials and techniques.

To protect the following vehicles optimally, the lead car carries so-called mine scanning and detonation installations, defined below. These should bring both anti-personnel and anti-tank mines to explosion and be simple to mount on the vehicle.

Vehicle Requirements

A pick-up vehicle with a pulling and braking capacity of around 3.5 tonnes is used; power steering and an automatic gearbox are required. The scanned width is normally 3 metres (expansion possible) and is marked.

Vehicle attachments (interior and exterior) include:

> LDV-Auto-Robot
> additional observation camera (optional) on the passenger seat for a second man in the convoy

At the vehicle:

> fork construction for wire and stick release of camouflaged mines
> magnet unit for the release of sensor-triggered anti-tank mines (optional)
> “jammers” for remote-controlled mines, disrupting the control signal (optional)
> ground-following trailers with “floating” disks for the release of near-surface and surface pressure-activated mines
> marking unit for the secured track (the marking material disappears after one hour)

Advantages/disadvantages:

+ high speed possible (we already run the LDV at a maximum speed of 140 km/h)
+ simply applicable worldwide
+ the trailer can be equipped with additional rubber wheels (pull-down possibility), so that it can simply be driven to the place of work (lateral mounting of the axle)
+ also applicable as a forward observation and image-transmission vehicle
+ favourable cost
+ only short training necessary
- damage to the de-mining aids and the vehicle on mine contact
- no guarantee that all anti-tank mines are brought to explosion.

Price

The estimated small-series price for the LDV-Auto-Robot and simulator is below US$ 80,000. Installation costs for the car depend on the sensors used.

The future

Innosuisse Corp. is working on a demining robot that incorporates the LDV system for head movements and touch-sensitive robot arms, controlled by the operator's own hands, to perform demining manipulations.

Contact:

Innosuisse Corp.
Hans-Christian Stuber, President
Ryf 21, CH-3280 Murten, Switzerland
Phone: +41 (0)26 670 75 76
Fax: +41 (0)26 670 75 79
Mobile: +41 (0)79 250 47 44
Direct Mail: info@innosuisse.com
Webpage: www.innosuisse.com


Robotics systems for Humanitarian Demining: modular and generic approach. Cooperation under IARP and ITEP

Yvan Baudoin
Royal Military Academy, Brussels, Belgium
Chairman IARP/WG HUDEM
Yvan.baudoin@rma.ac.be

1. Introduction

Robotisation of humanitarian demining (ref 1) means the use of teleoperated, semi-autonomous or autonomous robots with mobile platforms. Robotic solutions that are properly sized, with a suitable modularised mechanical structure, and well adapted to the local conditions of minefields can greatly improve the safety of personnel as well as work efficiency, productivity and flexibility. Robotic research requires the successful integration of a number of disparate technologies, with a focus on developing:

- flexible mechanics and modular structure;
- mobility and behaviour-based control;
- human support functionalities and interaction (HMI);
- integration of sensors (including the control of interferences) and data fusion;
- different aspects of autonomous or semi-autonomous navigation;
- planning, coordination and cooperation among multiple robots (MAS);
- machine intelligence;
- wireless connectivity and natural communication with humans;
- virtual reality and real-time interaction to support planning and logistics of robot service.

In the current situation, there is a need to correctly define the usefulness of and needs for robotics solutions, essentially in pre- and post-mine detection (minefield delineation and quality assurance), to develop a network of research centres focusing on this kind of solution, and to define standardised modules for the robotics systems used. Besides the correct orientation of research activities deduced from such definitions, it will be necessary to develop test methods and procedures in order to assess the performance of the 'System' in a highly cost-effective and generic way. A (European) network, with teams focusing on work packages related to the modules defined in Figure 1, will help to clarify the role of robotics systems (or mechanical assistance) and assist the current T&E activities of ITEP and the R&D to be pursued under European or nationally funded projects.

2. The Programmes and Networks

IARP (the International Advanced Robotics Programme) currently focuses on three major topics: (1) robot dependability, including reliability, cost-effectiveness, safety and human-machine interaction, for industrial as well as service and/or environmental applications; (2) humanitarian demining, including the gathering of information on the sensor systems that will become available in the near future, in order to improve the usefulness of robotics systems; (3) robotics systems aiming at the reinforcement of security/safety in societal applications. The three corresponding working groups are open to the scientific community: www.eng.nsf.gov/roboticsorg

ITEP (the Test and Evaluation Programme) allows truly independent testing of products produced under either national or European funding. ITEP gives an international dimension to the test and evaluation programmes as well as to the standardisation initiatives. Furthermore, the European Commission issued a mandate to the European Committee for Standardization (CEN) for humanitarian mine action, incorporating the request to coordinate these efforts with the International Mine Action Standards (IMAS) through close cooperation with GICHD (Geneva centre) and UNMAS (UN coordination) (ref 2).

Under ITEP the following task has been defined, which will be supported by IARP:

Project No.: 3.1.4
Title: Robotics systems for the detection of mines
Description: Information on the current research activities aiming at the introduction of automatisation techniques in Humanitarian Demining
Aim: Repertory of existing experimental robots (and projects); collection of satisfied requirements and potential usefulness through IARP workshops (a.o.)
Request:
Category: Mechanical Assistance
Type: Automatisation
Time frame: End June/July 2004 (*)
Place:
Lead nation: BE
Partners: Members of WG IARP/HUDEM (www.eng.nsf.com/roboticsorg) and European networks (Clawar-2, Euron, Eudem-2, ...)
Point of contact: Yvan.baudoin@rma.ac.be
Web site: http://mecatron.rma.ac.be, http://www.mat.rma.ac.be
Comments: (*) CD-ROM with requirements and descriptions of robots in progression (example, Fig. 2)

Example of descriptive sheet (ref 3)

a. PICTURE

b. DESCRIPTION

SYSTEM: Intelligent modular, open and upstream-compatible architecture (in revision). Controllable by just one operator at the Command and Control Station. Robot equipment not yet rugged enough for all kinds of outdoor usage; tracked platform to be replaced by end December 2004 (obsolete).

COMMAND/CONTROL (CC station or CCS): Operator pre-mission planning (motion and lateral scanning along preprogrammed corridors according to pre-chosen detection levels). Direct commands. Robot monitoring from the CCS. Small control unit; integration into existing manned vehicles possible.

MODE OF OPERATION: Remote controlled.

AUTONOMOUS MOBILITY: Way-point navigation under visual control on off-road terrain: speed up to 2 km/h; velocity adaptation with respect to terrain structure; restricted obstacle detection and obstacle avoidance, or vehicle stop.

COMMUNICATION BETWEEN UGV AND CCS: Range 50 m, wire-guided (radio link optional).

PAYLOAD: 3D lateral scanning device, 850x850 mm². Digital metal detector Vallon (MD), RMA ultra-wide-band radar (UWB) (interface acquisition programme CORODE). Digitised pictures from stereo camera, or electrical switches for scanning. Localisation colour camera on fixed (CCS) pan & tilt platform. Sufficiently modular for reconfiguration with alternate payloads.

TRIALS: Indoor and outdoor (RMA dummy minefields, BE). Localisation resolution: 0.5%. Performance: systematic (quality control) scanning with MD and UWB, 6 m²/h to 12 m²/h. Not satisfactory on gravelly terrain (platform/tracks to be replaced).

C. SOME RESULTS (Hardware/Software)

LOCALISATION (Ref: VUB, Tracking with colour camera – P. Hong, G. De Cubber)

SENSOR INTERFACE (Ref: RMA, Development of a Control Software for the Demining CORODE, E. Colon)


Figure 2. Descriptive sheet of the RMA HUNTER

EUDEM-2 is a European network that may ease the exchange of information among the members of the scientific community and with the end-users. More details will be given by Karin De Bruyn during this IARP WS HUDEM'2004.

CLAWAR-2 (Climbing and Walking Robots and Associated Technologies): the consortium includes all the stakeholders needed to develop and promote a European robotics industry able to exploit robotics technology. CLAWAR comprises 5 contractors (UoLeeds, QinetiQ, UNICT, CSIC and Robosoft, who reflect the different viewpoints of industry, academe and research centres) and 28 members from 12 countries. Many of the organisations were involved in the initial CLAWAR TN and are heavily involved in developing the area of robotics. The consortium includes a good balance of 10 universities, 6 research centres, 15 industrial members (SMEs and larger industrial organisations) and 2 members from newly Associated States. A number of end-users and standards and professional organisations are also involved as associate members and/or observers, to assist in the development of guidelines and best-practice solutions for robotic systems in a range of application areas. Internationally renowned Third State partners (non-funded) are also involved as observers, to give added value as well as to create a world network for this area of technology. Prof. G. Muscato gave, during this IARP WS HUDEM'04, some information on the work package focusing on outdoor robotics.

The members of the EN CLAWAR attach major importance to the modular character of a robotics system. If the modular approach of an application may be defined by the scheme of Figure 1, Figure 3 describes the modular definition of the robot itself. Every component has been studied in detail and may lead to specific guidelines; those standard requirements will be defined by the end of the current contract (May 2005).


Figure 3. CLAWAR modular description of a robotics system: a generic specification of modules (ACT1/ACT2 actuator and intelligent actuator, SEN1/SEN2 analogue and intelligent sensors, CHW1 communications system, POW1 power and intelligent power supply, MECH1 mechanics), each with its associated software, connected through a databus (analogue/digital) and interacting with the environment via the “interaction space”.
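As a purely illustrative sketch of how such a generic module specification might look in software (the interface names below are assumptions, not a CLAWAR standard), interchangeable sensor and actuator modules can share a common interface on a databus:

```python
# A minimal sketch of how the generic module specification of Figure 3 could be
# expressed in software; the interface names are illustrative assumptions.
from abc import ABC, abstractmethod

class Module(ABC):
    """Any module that can be attached to the robot's databus."""
    @abstractmethod
    def identify(self) -> str: ...

class Sensor(Module):
    @abstractmethod
    def read(self) -> float: ...

class Actuator(Module):
    @abstractmethod
    def command(self, setpoint: float) -> None: ...

class MetalDetectorSensor(Sensor):
    def identify(self) -> str:
        return "SEN1: metal detector"
    def read(self) -> float:
        return 0.0  # placeholder for a real acquisition call

class ScannerActuator(Actuator):
    def identify(self) -> str:
        return "ACT1: lateral scanning device"
    def command(self, setpoint: float) -> None:
        print(f"moving scanner to {setpoint:.2f} m")

# A databus is then just a registry of interchangeable modules.
databus = [MetalDetectorSensor(), ScannerActuator()]
for module in databus:
    print(module.identify())
```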

The projects on (robotics in) humanitarian demining, in the context of European or international cooperation or information exchange (under IARP, ITEP, EUDEM-2 and CLAWAR, for example), aim at making humanitarian demining safer, faster and more cost-effective by developing:

• a global system approach to Mine Action Technologies (MAT), while building, when possible, on the results of previous and ongoing national and EU framework programmes. The end-users and the Standing Committee for Mine Risk Education, Mine Clearance and MAT, established to implement, monitor and control the Mine Ban Treaty, request that MAT be affordable, simple, effective and manageable (see the roadmap defined by Prof. Acheroy in the current proceedings);

• an effective methodology to assess the real performance of the developed tools in the field.

Under ITEP (ref 3), the MACE T&E WG (Mechanical Assistance Clearance Equipment, Test and Evaluation Working Group) is chaired by Dr Chris Weickert and Geoff Coley (CCMAT) and focuses on mechanical mine clearance, not on mechanical assistance to detection. Five objectives have been defined, namely:

• Inventory of existing equipment and T&E reports
• Develop best practices
• Conduct T&E of equipment
• Develop standards
• Populate the repository

The aspects of best practices and standards have been entrusted to CEN WS 12 and will inspire the test and evaluation methodologies developed by the group.

The next table summarises the current inventory.


The next picture illustrates one of those mechanical clearers in test progression.

Figure 4. Armtrak

Figure 1. Modular approach: the vehicle control network (vehicle status, motion control, sensor deployment, robot positioning sensor, cutter control), the sensor suite (sensors 1 to n with their data acquisition processes) and data network (sensor data processing, sensor data transceiver), and the control station (transmit/receive, sensor data fusion, mission management, HMI, vehicle host, mission data repository, backplane), all linked through the communication link and control transceiver.

Ref 1. IARP BE report 2003 (www.eng.nsf.gov/roboticsorg)
Ref 2. Mine Action Technologies: building a roadmap to bring appropriate technologies into operational use (IARP WS HUDEM'04, June 2004)
Ref 3. ITEP MACE T&E WG, G. Coley, ITEP Meeting, London, Jan 2004 (geoff.coley@drdcrddc.gc.ca)


EUDEM2: A useful tool for the Humanitarian Demining Researchers and End-Users Community

Karin De Bruyn, Mike Barais, et al.

1. Introduction

The EUDEM2 project came into existence as a follow-up to EUDEM, which was carried out in 1999. EUDEM 1999 was a small-scale and limited study carried out by the Vrije Universiteit Brussel (VUB-ETRO), as coordinator, and the Ecole Polytechnique Fédérale de Lausanne (EPFL-LAMI) on behalf of the European Commission. The study lasted 6 months and resulted in a web-based database of contact persons and actors active in Europe in the domain of humanitarian demining, and a report on the “State of the Art in the EU related to Humanitarian Demining technology, products and practice”.

The EUDEM report [1] also provided an overview of technologies developed or under development for humanitarian de-mining in different research centres, university laboratories and some commercial companies. Each technology was analysed and presented taking into account its cost and effectiveness. The report also gave some insight into the personal conceptions of the actors by providing a set of 50 face-to-face interviews [2]. These interviews explained the views of industrial companies developing equipment for humanitarian de-mining, research institutes, academic researchers from several European universities, some end-users (represented by European non-governmental organisations) and some donors of funding for research and mine action assignments, or national governments of some European countries. For details on the types of organisations interviewed we refer to the report [3].

At the end of 2001 the new EUDEM2 project was launched by the European Commission as a support measure in the 5th Framework Programme. The new project started at the beginning of 2002 and runs to its end in December 2004. After a first and very intensive effort to elaborate the web pages, a second intensive focus was placed on visits to the European players in humanitarian demining and on studying technologies currently on the market, under development or with potential for future development, in order to produce a catalogue of equipment currently used in the field.

2. EUDEM2: aims

The EUDEM2 team has expanded: next to the two previous academic partners, a third one was added, namely the Gdansk University of Technology (GUT-MEED). In addition to the new partner, the EUDEM2 team was reinforced by an advisory panel that follows the developments of the team and guides and assists it by providing advice and information. The advisory board consists of representatives of research, commercial companies, governments or donors, end-users and the military. The advisory panel meets with the EUDEM2 team several times per year and also assists and supports the workshops organised by the EUDEM2 team.

EUDEM2 supports ongoing research and the development of technologies that could assist the research community, the European Commission and technology developers by offering them an up-to-date overview of results and achievements in the domain, although the focus of the data provided is larger than just Europe. Bridging the gaps between the research world, military expertise, actual practice in the field and the commercial developers by providing information, so that duplication of efforts is avoided, is the key to the success of the project. Next to this central and most important aim, the EUDEM2 team wishes to bring the aforementioned players together to create synergy and cooperation possibilities.

[1] Bruschini C., De Bruyn K., Sahli H. and Cornelis J. (1999) EUDEM1 – Final Report, 65 p.
[2] Bruschini C., De Bruyn K., Sahli H. and Cornelis J. (1999) Annexes to the EUDEM1 – Final Report.
[3] De Bruyn K., Barais M., Sahli H., Bruschini C. and Wtorek J. (2003) EUDEM2: Overview and Some Early Findings, Issue 7.2, pp. 92-95.

3. Working Methodology of EUDEM2

3.1. EUDEM2 web pages

In order to fulfil its aims a new web-based database was created. Initially this database was furnished with updated information from the EUDEM 1999 report and database of contacts, but since March 2002 a completely new and much more elaborate version has been online at http://www.eudem.vub.ac.be. Data for this new database were gathered from the scattered information available on the World Wide Web, from recent key publications and from other scientific material. All data on the web pages have been analysed before integration into the database. The web pages are presented in a user-friendly format and are constantly updated whenever new valuable information is available. The information is structured under 6 different categories, as shown in the conceptual view below.

Each class of information is linked with all the others through a set of indexes defining the relationships between the objects, as described in the scheme depicted below. The database is not only used to store flat data structures and facts; it also supports more complex relationships between them and explicitly incorporates general knowledge about the objects described. A given Organization is, for example, active in a given Technical Activity and participating in several Projects whose Publications can be listed. The figure shows the link classes that build up the relationships between the classes and hence the navigational structure of the application.

EUDEM2 Data Base Conceptual View: Organizations, Projects, Publications, Events and Activities/Products/Services/Practices, linked to one another.
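To make the link-class idea concrete, here is a minimal sketch of such a linked schema using an in-memory SQLite database; the table and column names are assumptions for illustration and do not describe the actual EUDEM2 implementation.

```python
# Hedged sketch of the kind of linked schema the conceptual view suggests;
# table and column names are assumptions, not the actual EUDEM2 database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organizations (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE projects      (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE publications  (id INTEGER PRIMARY KEY, title TEXT,
                            project_id INTEGER REFERENCES projects(id));
-- Link class: a many-to-many relation between two of the main classes.
CREATE TABLE org_project   (org_id INTEGER REFERENCES organizations(id),
                            project_id INTEGER REFERENCES projects(id));
""")
conn.execute("INSERT INTO organizations VALUES (1, 'VUB-ETRO')")
conn.execute("INSERT INTO projects VALUES (1, 'EUDEM2')")
conn.execute("INSERT INTO org_project VALUES (1, 1)")

# Navigate the links: which projects is a given organization participating in?
rows = conn.execute("""
    SELECT p.title FROM projects p
    JOIN org_project op ON op.project_id = p.id
    JOIN organizations o ON o.id = op.org_id
    WHERE o.name = 'VUB-ETRO'
""").fetchall()
print(rows)  # [('EUDEM2',)]
```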


All topics are interrelated in order to provide the users of the EUDEM2 site with a clear and complete overview of the available information. Next to these easy navigation tools, a search engine to facilitate quick and easy retrieval of the sought information and a “New” section were added, in order to easily spot new items without browsing through all the pages.

3.2. Face-to-face interviews

One has to understand that face-to-face interviews guarantee more effective information gathering than questionnaires, and open the road for bi-directional information exchange [4]. Coupled to a face-to-face interview is an on-site visit, giving a better understanding of the real-life situation within the organisation and the possibility to collect extra information (brochures, presentations, data on CD, etc.) in person [5].

Although the cost of interviewing face-to-face is significantly higher than that of video- or teleconferencing, it is in most cases necessary to speak to representatives of organisations in person [6]. Setting up appointments for video or teleconferencing is as time-consuming as for face-to-face interviews, and such appointments are less reliable, since people often have unforeseen issues that require urgent attention. Coupled to that, not all organisations involved are technically equipped for these kinds of interview techniques. Most importantly, people tend to give more details when you are in front of them than when they are called up. Documentation is also collected in an easier and much more efficient way in person, and is not forgotten or delayed.

In order to facilitate analysis of the data collected during the interview, the skeleton of the general EUDEM2 interview consists mainly of a fixed set-up.

When specific projects – not necessarily directly related to humanitarian demining – are discussed, the interviewer tries to identify (i) the project aims, (ii) the maturity of the different technologies involved and the corresponding cost estimates, (iii) the testing procedures, (iv) the “transferability” of the developed techniques to different aspects of humanitarian demining, (v) the technical specifications of the equipment, its performance in certain circumstances, the compatibility between different techniques and the degree of success in the field, and (vi) R&D activities and strategies, research funding and commercial perspectives [7].

In order to make it easier to analyse an interview, a rather rigid structure is necessary. The interview content also needs to be approved by the interviewee prior to publishing; this needs to be done fairly quickly (one week, or 10 days at most) after the interview. The questionnaires have been adapted according to the type of organisation interviewed, shifting the focus to the topics of relevance for each while following the same line of thinking and structure. Three different categories and questionnaires have been drafted for:

• Industrial companies/equipment manufacturers
• Research centres/university labs/governmental agencies – MOD, Foreign Affairs, Development Aid (R&D related)
• Operators: NGO, MAC, commercial/governmental agencies – MOD, Foreign Affairs, Development Aid (mine action related)

[4] Loosveldt, G. (1995) The profile of the difficult-to-interview respondent. Bulletin de Méthodologie Sociologique, 48, pp. 68-81.
[5] Loosveldt, G. (1997) Interaction characteristics of the difficult-to-interview respondent. International Journal of Public Opinion Research, 9, pp. 386-394.
[6] Groves, R. (1989) Survey Errors and Survey Cost. New York: John Wiley and Sons.
[7] Sudman, S., Bradburn, N. and Schwarz, N. (1996) Thinking about questions: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.

Although the set-up of the questionnaire is open in several places, a certain number of questions can be evaluated as quantitative data, which greatly facilitates the analysis and leads to more objective results. The open questions will be evaluated but will be presented only as summaries.

3.3. The Technology Survey

Another main aim of EUDEM2 is to carry out a broad-scope and in-depth technology survey. The technology survey activity was carried out worldwide, through literature analysis, direct contacts, participation in international conferences and, foremost, a large number of visits to the field and to some organisations.

The study of individual technologies emphasises the status of past and current R&D projects (an overview of ongoing research), in particular the EC-financed ones, and some national projects.

A clear terminology and a proper classification are being established, distinguishing between Research (product/system 5-10 years away), Development (…)


(2) Soil conductivity study: soil parameters are often not sufficiently considered at present in HD-related scientific publications, although knowledge of them is really needed by the end-user to be able to predict a detector's performance in situ. This is true not only for electromagnetic properties but also, for example, for neutron or vapour sensors. This could lead to combined soil maps, which could allow assessing a priori on which fraction of contaminated land a given sensor is likely not to work satisfactorily.

(3) Furthermore, Research & Development (R&D) on humanitarian-demining-related items, or on technologies with the potential to be transferred to humanitarian demining, carried out in Central/Eastern Europe, was also covered.

During the second and third years of the project, the gathering and analysis of information on the status of past and current R&D projects was carried out. Overviews were generated of the RTD projects funded by the European Commission and of the national projects of the Netherlands, the UK and Germany. During the EUDEM2-SCOT conference, which took place in September 2003, many interesting ideas were discussed, and the findings and conclusions of the conference have been published on the EUDEM2 web pages [12] and in the Journal of Mine Action of JMU [13].

It is generally known that, without help and input from representatives of the humanitarian demining communities, most studies are not valuable at all. Therefore the EUDEM2 findings will at all times be confronted with the opinions of representatives of the end-user, military, research and technological communities.

4. Overview of the work done up to May 2004 and ongoing

Interviews with some of the key players in Europe have been carried out, but as the analysis of the interview results is being done at this very moment, it is too early to communicate results.

Collaboration with other initiatives: the EUDEM2 team is keen on collaborating with other organisations that are involved in the collection of technology-related research for humanitarian demining. Active collaboration with a German query on technologies currently under development resulted in a preliminary set of structured lists of equipment being developed at the moment. Further elaboration of these lists will result in a well-documented catalogue, which is foreseen as the final result of the EUDEM2 technology survey. In addition, further collaboration with other information-collecting initiatives was launched and resulted in a recent joint study with the Geneva International Centre for Humanitarian Demining (GICHD, Switzerland) on manual mine clearance and costing issues.

Help Desk: this support activity consists mostly of replies to mail enquiries, phone contacts and meetings. The Help Desk has been up and running since the start of the project, and is advertised on the EUDEM2 website. The Help Desk provides the user with a direct reply when necessary; otherwise the queries are redirected to relevant persons/web pages. During the first year of the project the help desk service received 5-10 requests per month; since January 2003 the use has gone up slightly. The complexity of the queries varies a lot, but the satisfaction of the users is rather high: more than 50% of the users give as feedback that they are satisfied with the replies received.

[12] EUDEM2-SCOT conference conclusions: http://www.eudem.vub.ac.be/eudem2-scot/
[13] McLean, I.G. (2003) Scientific Contributions to Demining Technology: Beliefs, Perceptions and Realities, Journal of Mine Action, Issue 7.3, pp. 40-42.

Bringing people together: one of the aims of EUDEM2 is to share information and bridge gaps between the different players in humanitarian demining. The first initiative here was a joint venture between the Society of Counter Ordnance (SCOT-USA) and EUDEM2. The International Conference on Requirements and Technologies for the Detection, Removal and Neutralization of Landmines and UXO was very successful: about 300 international guests from nearly 40 different countries gathered in Brussels for a 4-day conference, and more than 120 presentations were made.

5. Some findings and Conclusions

Call for collaboration: although the humanitarian demining market is, as has been stated, very small and shrinking [14], and activities are mainly focused on research rather than on development, a lot of activities take place in research labs all over the world. It seems that most of these activities have no connection with other, similar initiatives. It is therefore necessary to join efforts and work together. This would lead in the first place to savings in the scarce funding available; duplication of activities would also be avoided and results would be obtained faster, and in the end, most importantly, it would also save LIVES.

Future research needs to focus more on the integration of what has been done in research for humanitarian demining up to now. How can what has been researched and developed as prototypes be taken into the field? We should focus on distilling the best items of all systems and merging them into new concepts or systems that can be fielded soon. It is also time to move to the final stages of sensor development; there is no need to research further new sensor systems.

Emphasis should be placed on technologies and applications, currently available or under development, from within the information technology sector. Current equipment needs to be elaborated further but should ideally adopt some IT solutions into its systems. Our daily use of high-tech equipment such as GPS, satellites, palmtops, etc. should be visible in the research and development of technological solutions to the mine problem, as they could provide radical changes in humanitarian demining practices today!

[14] Assessment of the International Market for Humanitarian Demining Equipment and Technology. Prepared for the Government of Canada by GPC International, p. 30.


HUDEM04 Brussels

A concept of implementing technology to encourage economic growth in de-mining

During this conference we have seen and will see some wonderful and exciting new technologies and concepts. However, as de-mining techniques become more successful and fewer people are affected, donors are starting to look at other, more pressing causes to sponsor. It has been estimated that landmines were killing or maiming perhaps 28,000 people each year; thanks to the hard work and success of many organisations, some of them represented here today, this is thankfully now dropping to an estimated 15,000 people each year. So the de-mining industry has to compete for donations and funding against ever-increasing competition from other causes, for example AIDS: people are becoming infected with HIV at a rate of about 15,000 each day.

Another major factor must be considered: whether we use today's or tomorrow's technologies will become irrelevant if we do not change the method in which these technologies are used. Why? Because the de-mining industry can only expand in proportion to the funding that is available to it. So, when we consider the previous point of funding now being focused on other, more pressing needs, we could start to see the de-mining industry peaking at this point and starting to decrease in size: effectively a premature casualty of its own success, even though the job is far from finished. If we do not comprehend this point and take action to stop it from happening, it will be a terrible shame for humanity.

If the de-mining industry fails to become financially self-sustainable, then its size can only be relative to the funding that is available. However, the industry could accelerate its expansion by considering ways of grasping technology and implementing it in the development of new de-mining methods: not just new, bigger, better machines, but a completely new way of thinking, i.e. a method of generating its own funds, along with wider issues such as agricultural and community development and, most important, self-sustainability.

De-Mining Systems were once nearly involved with a potential project in Asia, in an area that had been good agricultural land originally owned by small farmers. (Unfortunately, at the time we did not have the technology to produce the necessary machinery.) An organisation using investor and development funding was going to take a vast area of the now mine-ridden and overgrown land from the government on a six-year lease. We were to develop and build the MDM (Modular De-Mining) machines that would simultaneously clear the vegetation, clear the anti-personnel mines, detect and map the location of the tank mines and larger ordnance for later removal by hand, cultivate the ground and plant a crop.

The idea was to develop some of the infrastructure and to harvest, export and sell these crops on the open market for five years. This would have paid for the de-mining and given a return to the original investors, or the same funding could have carried on working and multiplying elsewhere. In the fifth year the families who owned the land before the war, and before the landmines were laid, were to undergo an agricultural training programme, as some of these people were now second generation. In the sixth year, when they return home and take their land over again, there is a crop in their fields and an already established market. (They win, we win, and the investors win.)



It is basically a matter of bringing business principles into humanitarian works to create all-round sustainability.

One of the reasons this has never been done before on such a large scale is that, using yesterday's technologies, the commercial cost of removing landmines was higher than any potential agricultural return. However, by reducing this cost through the novel Modular De-Mining system, which also has the ability to simultaneously plant crops for food or other modern crops, we can now tip the scales in favour of large-scale mine removal.

New crops could be considered, such as starch crops for industry, or crops to help the economy, such as oilseed rape to produce bio-diesel as mineral oil reserves diminish or become too expensive for developing nations. A situation often occurs in non-oil-producing developing nations where there is no money to remove the landmines because they are having to pay out to rich nations for their oil, whereas if they removed the mines and planted bio-oil crops this money would keep circulating in their own economy. Many countries have now shown a commitment to remove landmines; the potential of this is huge.

There are many spin-offs to using these business principles in humanitarian works to create economic sustainability. Another example is to assist with the logistical difficulties and expense of providing accommodation for the de-miners. One possible way round this is to develop a low-cost method of portable accommodation; if we can also generate revenue from selling it commercially, this would then allow us to provide it at a subsidised rate for the de-mining industry. Here is one example:

QuickSpace animation video.



In trying to define the main problems in de-mining, it has been found that, in general, most mines are laid in developing nations and most developing nations are in warm climates. So, after the local farmers have moved away because of the mines, mother nature takes over, and within a couple of years these often vast, mostly vital agricultural areas can become heavily overgrown with dense vegetation. Although mine detection systems work, vegetation removal systems work and mine removal systems work, they often do not work together, causing a perpetual catch-22 situation for de-mining operations: most mine detection systems cannot be used close to the ground because of the dense bush/vegetation, we cannot go in and clear the bush because of the mines, and we are then unable to remove the mines because we cannot detect where they are. It is not just a matter of developing new technologies but also of the working methodology, or how that technology is implemented in the industry.

(PowerPoint presentation)

Slide: the MDM machine with mulch blower and seeder; remote control version (De-Mining Systems, PO Box 73, Hexham, NE47 0YT, England, tel +44 (0)870 126 9121, www.deminingsystems.co.uk).

Over the last seven years we have self-funded the development of a basic machine to prove a new theory of soil grinding and simultaneous vegetation separation. The test machine worked beyond expectations, with many spin-off benefits. Now that the basic mechanical theory has been proven, it is time to implement some of the new technologies that are now available, such as detection, GPS plotting and machine control systems.

The Modular De-Mining System is built around a high-speed agricultural tractor. It comprises two Ground Claimer 3000 units fitted to the multi-purpose vehicle/tractor. This high-speed tractor allows the machine to travel without the need for a transporter and in places with no road infrastructure.



In the working position the tractor is lifted up off the ground by the two Ground Claimer 3000 units, which are mounted at the front and rear of the tractor. This elevated position protects the driver, the tractor unit and the tyres from the effects of blast detonation. Each GC3000 has two hydraulically driven traction rollers that provide the forward motion for the whole machine. A hydraulically driven grinding drum, counter-rotating at high speed against a shear bar, is located between the two traction rollers on each GC3000. This has the effect of digging up and grinding/pulverizing any material it encounters, such as soil, stones, vegetation and anti-personnel mines.

The positions of the two GC3000 units are controlled through intelligent 3-point linkage units. These devices allow the GC3000s to move independently, accurately following the contours of the ground while keeping the tractor unit suspended on a straight and level course. The hydraulic drive control system, working through the workload governor, ensures that power is efficiently channelled to where it is needed most. The forward speed is automatically reduced when difficult working conditions are encountered, such as dense vegetation and/or hard soil.

Detection equipment may be used independently, but it is hoped that a new system<br />

could eventually be mounted and protected inside the front rollers of both GC3000s to<br />

identify the presence of larger devices that could not be safely destroyed in the<br />

grinding drum. The on-board computer, using information from the detection systems, would be able to determine the size statistics of an object. If the object's statistics exceed the pre-set parameter in the software, the system automatically stops the forward movement of the machine before the object is disturbed. The driver is then able to look at the object's statistical outline on a monitor<br />

in the cab and decide whether to carry on and grind it up or to leave it behind for later<br />

removal by hand. If the object is left behind then the size statistics and location<br />

coordinates are registered and mapped into the on-board computer using a global<br />

positioning system.<br />
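As an illustration of this decision logic, the sketch below shows how such a size-threshold rule and GPS logging could look in software. It is a hypothetical sketch, not the MDM's actual on-board program; all names and the threshold value are invented.<br />

```python
# A minimal sketch (not the MDM's actual on-board software) of the decision rule
# described above: objects whose size statistics exceed a pre-set parameter stop
# the machine; if the operator leaves them behind, their size and GPS position
# are registered for later removal by hand. Names and threshold are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    size_cm: float   # size statistic estimated by the detection system
    lat: float       # GPS latitude
    lon: float       # GPS longitude

SIZE_THRESHOLD_CM = 20.0          # hypothetical pre-set parameter
left_for_manual_removal: List[Detection] = []

def process_detection(d: Detection, operator_grinds: bool) -> str:
    """Return the action the machine should take for one detection."""
    if d.size_cm <= SIZE_THRESHOLD_CM:
        return "continue"                     # safely destroyed in the grinding drum
    # Object too large: forward motion stops before the object is disturbed.
    if operator_grinds:
        return "grind"                        # operator decides to carry on
    left_for_manual_removal.append(d)         # log size and coordinates
    return "skip"

# Example: a large object at a given position, operator chooses to leave it.
print(process_detection(Detection(35.0, 14.90, 35.95), operator_grinds=False))
```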

The MDM system can be used in a variety of working modes to tackle different<br />

problems:<br />

(Slide 5: MDM System, 3-metre working position, ground level view. Patent De-Mining Systems UK.)<br />

The 3-metre wide position is used in overgrown areas. In this working mode the front GC3000 unit is tilted back so that only the rear traction roller is in contact with the ground. With the mulch blower fitted, this then acts as a vegetation-clearing device. The rear unit then follows, pulverizing the soil and safely destroying any anti-personnel mines not detonated by the front machine.<br />



(Slide 6: MDM System, 5.75-metre 'Crabwise' de-mining position and transport position, ground level view. Patent De-Mining Systems UK. The 5.75-metre 'Crabwise' position seen from above: brown areas have been cleared, green areas have not yet been cleared.)<br />

Another operating mode is the 5.75<br />

metre wide position. The front mulch<br />

blower is replaced by a secondary<br />

seeding unit and the machine is<br />

operated in the crab steer mode.<br />

Slide 7<br />

This is for mine clearing in areas<br />

with little or no vegetation, thus<br />

allowing the greater operating width.<br />

Slide 8<br />

View auto running animation sequence<br />

of slides 9 to 97<br />

The MDM system is unique in its<br />

ability to simultaneously:<br />

• Travel to the place of work<br />

independently and at a relatively<br />

high speed.<br />

• Once at the place of work, it is able<br />

to protect the tractor unit and<br />

operator by lifting them up off the<br />

ground.<br />



• It will be able to detect unexploded ordnance automatically, stopping the machine if the ordnance detected is above a pre-set parameter, again protecting the operator and machine.<br />

• It is able to remove the vegetation and bush while keeping it separate from the<br />

soil-processing operation.<br />

• It will be able to grind up and detonate anti-personnel mines while registering<br />

the size statistics via the detection system and location co-ordinates via the<br />

GPS system of the larger UXOs for later removal by hand.<br />

That was a small piece of vegetation, but the prototype is able to deal with trees up to 15 cm in diameter or more. Most anti-personnel mines are too small to detect while moving quickly like this, and they are either detonated by the pressure of the traction roller or ground up, often detonating inside the counter-torque blast suppression unit. The whole machine is able to leave the land in a more valuable condition for agricultural viability and can even seed, fertilise and roll (plant a crop), all carried out simultaneously with the one unit.<br />

Because the de-mining system is modular in design, it can help tackle the wider picture of global de-mining. For example, after an area has been cleared, the front and rear machines can be taken off and any other implement with the standard agricultural three-point linkage can be attached, such as a digger on the back and a cement mixer on the front, to help with small construction and infrastructure development works.<br />

We are designing a double-acting forklift mast that can be attached to the back of the tractor. This can be used to unload seed or fertiliser when crop planting, or to handle building materials. We are also designing another unit, called the QuickDrill system: the forklift backs into it, it clips on, and the whole unit then becomes a water-well drilling rig.<br />

(Figure: remote control version with mulch blower and seeder)<br />

Another use for the MDM system is in humanitarian work such as crop planting. The machine is able to re-establish the crop cycle as part of a famine prevention programme following wars, droughts, floods etc. It can very efficiently plough up, cultivate, seed, fertilise and roll at a width of 6 metres. (Slides 98 to 99 show the remote control version.)<br />

This may seem futuristic, but we are going to have to come up with something better than spending more money on clearing the land than the land is currently worth. Our objective has to be to develop and produce machinery and systems that increase the safety and efficiency of the landmine removal industry while facilitating a reduction in operating costs and an increase in revenues.<br />



With it now being possible to send video and other data back via satellite, there could be many other benefits, such as in the transparency of charitably supported humanitarian demining. For example, we hope to implement a method where donors who donate on a direct debit basis, and so have a sense of ownership, or people who buy a share in or sponsor a particular machine, could go on the internet and see their machine actually working, or a video clip of part of that day's work. With the GPS data, donors could see a model of the area that their machine has cleared. This system would also assist in persuading companies to take on the sponsorship of a machine: it is good public relations, as their employees and customers could go on the internet and actually see their machine working.<br />

Clearing landmines for humanitarian reasons is extremely worthwhile, but if we don't start to compete more effectively for the limited funds against what many funders consider to be more pressing humanitarian problems, such as AIDS, malaria and TB, then the de-mining industry will starve to death.<br />

(Slide 100: with mulch blower and seeder)<br />

We have to diversify and invest now in these new technologies and put into practice ways of reducing the cost of de-mining, while increasing any potential returns, such as simultaneous crop planting for bio-diesel, to eventually tip the economic scales in favour of de-mining.<br />

If we can all work together to develop something such as a Modular De-Mining system that could start gaining a financial return for the de-mining funders and contractors, then this could pave the way for dramatic growth within the industry. Let's grasp these technologies and use them to turn back the hands of time and return the land to its original purpose: supporting humanity.<br />

Contact Info;<br />

Roy Dixon<br />

De-Mining Systems UK Ltd<br />

PO Box 73<br />

Hexham<br />

NE47 0YT<br />

United Kingdom<br />

www.deminingsystems.org<br />

E-mail: roydixon@deminingsystems.co.uk<br />

Tel: +44 (0)870 126 9120<br />

Fax: +44 (0)870 126 9121<br />

Mobile +44 (0)797 124 8857<br />



A concept for a humanoid demining robot.<br />

Peter Kopacek, Man-Wook Han, Bernhard Putz, Edmund Schierer, Markus Würzl<br />

Institute for Handling Devices and Robotics<br />

Vienna University of Technology<br />

Favoritenstr. 9-11, A-1040 Vienna, Austria<br />

Tel.: +43-1-58801 31801, Fax. +43-1-58801 31899<br />

Email: e318@ihrt.tuwien.ac.at<br />

Abstract: Two categories of two-legged humanoid robots are currently available worldwide. "Professional" humanoid robots are developed by large companies with huge research capacities, either with the idea of assisting humans in everyday work or with the aim of serving mostly for entertainment, leisure and hobby, or in the future as personal robots. "Research" humanoid robots: a lot of such robots are currently available or in the development stage; approximately more than 500 university institutes and research centres worldwide are active in this field. The robots of this category are usually prototypes. In this paper a concept for a new humanoid robot combining the features of both categories mentioned before (professional for a reasonable price) for demining applications is described and discussed.<br />

Keywords: Mobile robots, two legged robots, humanoid robots, demining robots.<br />

1 Introduction<br />

Removal of landmines, mostly by hand, is today a very inhuman, dangerous, time-consuming and expensive task. On the other hand, the number of landmines worldwide is increasing dramatically. For several years there have been trials of unintelligent and very heavy robots for demining. There are no standard equipment requirements for humanitarian demining, but the following requirements for systems that should improve demining work can be summarized:<br />

• The system (robot, vehicle or equipment) should be low-cost.<br />

• Most of the parts and components should be commercially available in order to reduce maintenance costs and availability problems.<br />

• Appropriately small and lightweight for transportation.<br />

• The system should be of a robust design and capable of working in rugged environments.<br />

• Operation should be simple and failsafe (user-friendly) for all skill levels to reduce training problems.<br />

• Operation time should be at least 2 hours before re-fueling or re-charging, and refuelling or recharging should not take more than 30 minutes.<br />

• Ability to distinguish mines from false alarms like soil clumps, rocks, bottles and tree roots.<br />

• Ability to detect a variety of different mine types and sizes.<br />

• Operation in vegetated ground cover.<br />

This list is a huge challenge for robot designers, and some experts doubt that the requirements can be fulfilled in the short run. But considering especially the first two points, a tool kit could offer notable advantages in meeting these requirements.<br />

In the last decade a new generation of mobile, intelligent and cooperative robots was developed and introduced. One of the reasons for this development was the availability of reasonably cheap<br />


sensors and increasing computer power. This offers the possibility of using robots of this new generation for humanitarian demining. Since mines are likely to be placed in many different terrains, particular attention should be paid to the locomotion system.<br />

2 Locomotion Systems<br />

The most widespread type today is the wheeled locomotion system. Typically the number of wheels is three, four or six. The extensive use of wheeled locomotion systems has many practical reasons: the underlying mechanics are simple and easy to assemble, and the ratio of admissible payload to the vehicle's own weight is quite acceptable.<br />

The problem is that wheeled systems are not as efficient in rugged terrain as on flat ground. As a rule of thumb, wheeled vehicles have problems overcoming obstacles higher than the radius of their wheels.<br />

The advantage of using only three wheels is high manoeuvrability, at the expense of balance. Four-wheeled systems are well developed in the car industry, and all-wheel drive would obviously be desirable for off-road applications. But the best solution seems to be six wheels, since this configuration is the most suitable for off-road use, and minefields are mostly not flat and are additionally scattered with small obstacles like stones and branches. A six-wheeled robot nearly achieves the same performance in off-road terrain as a robot with a chain locomotion system.<br />

Chain (tracked) locomotion systems are successful in rough, muddy and sandy terrain. The comparatively large tread allows robots with such a system to pass relatively big obstacles. Besides the heavier hardware compared to wheeled robots, the main disadvantage is the low efficiency of chains: energy is dissipated by friction in the chain and by relative movement between the chain and the ground. The robot therefore needs a more powerful driving system, which in turn increases the total weight. Yet especially in the case of demining, a robot has to stay below a weight limit to avoid triggering the mines. On the other hand, a chain locomotion system enables the designer to decrease the ground pressure of the robot simply by increasing the length of the chains instead of reducing the weight of the robot.<br />
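A small worked example (with assumed values, not figures from the paper) illustrates this trade-off: the mean ground pressure of a tracked robot falls as the track contact length grows, without any change in mass.<br />

```python
# Illustrative calculation (assumed values) of the point made above: for a
# tracked robot, ground pressure can be reduced by lengthening the track contact
# area rather than by reducing the robot's mass.

G = 9.81  # gravitational acceleration, m/s^2

def ground_pressure_kpa(mass_kg: float, contact_len_m: float, track_width_m: float,
                        n_tracks: int = 2) -> float:
    """Mean pressure under the tracks in kPa (weight / total contact area)."""
    area = n_tracks * contact_len_m * track_width_m
    return mass_kg * G / area / 1000.0

mass = 120.0            # hypothetical robot mass in kg
width = 0.15            # hypothetical track width in m
for length in (0.4, 0.8, 1.2):          # progressively longer track contact
    p = ground_pressure_kpa(mass, length, width)
    print(f"contact length {length:.1f} m -> {p:.1f} kPa")
# Doubling the contact length halves the mean ground pressure for the same mass.
```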

Walking locomotion systems are in general the better solution for rugged terrain compared to wheel<br />

and chain locomotion systems. But the development of these complex devices is only at the<br />

beginning. Nevertheless, many mobile-robot demining projects are using this concept. Legged<br />

vehicles can be omni-directional and can turn in place. A unique characteristic of walking vehicles<br />

is the ability to move with discrete foot placements, thus allowing the robot to avoid stepping on<br />

landmines or other delicate objects. Also, the legs of a walking robot cause much less damage to the<br />

terrain than a standard tracked vehicle. Additionally, due to the freedom of leg placement, a legged<br />

robot can stop walking and then choose suitable stable footing to work on a mine, even on very<br />

uneven ground. Furthermore, its body posture can be changed while keeping the feet on the ground,<br />

thus essentially adding another degree of freedom for performing the task. An unexpected explosion damages a legged robot to a much lesser degree than a tracked robot if the posture is such that the legs are stretched out from the centre of the robot's body like the legs of a spider. If the robot does mistakenly detonate a mine, the damage is confined to the end of the leg. In this way the body can be<br />

protected, although this approach does require lightweight, inexpensive, replaceable legs.<br />

Another advantage of using legged robots in demining actions is that the legs may also be used as manipulators. In principle, such legs are robot arms used for locomotion purposes; they only have to be equipped with a tool to be used for other purposes. Disadvantages of legged robots are the relatively low movement speed and of course the complexity of the system compared to tracked vehicles, with regard to both the mechanical parts used and the control of the movement.<br />


Therefore it could take a long time until legged robot platforms suitable for real outdoor applications are competitive with standard tracked robots.<br />

3 Two legged – humanoid - robots<br />

It is an old dream to have a personal robot that looks like a human. The main features of a humanoid robot are biped walking, voice communication (speech recognition) and facial communication. For demining, only the first is currently important.<br />

The defining feature of a human is two-legged movement and the two-legged way of walking. Much research has been conducted on biped walking robots because of their greater potential mobility. On the other hand, they are relatively unstable and difficult to control in terms of posture and motion.<br />

Stability during the walking process decreases as the number of legs is reduced. Therefore, 8-, 6- and 4-legged robots copied from nature (insects, swarms, …) were developed in the past. With new technologies and new theoretical methods, two-legged robots can be realised that are capable of human tasks such as service applications, dangerous tasks, tasks on the production level, support of humans in everyday life, and also demining actions.<br />

Static balance is a simple variant, but it limits the speed and the range of possible movements so much that the walking does not look human. Therefore, for a two-legged robot we have to use quasi-dynamic balance, which is today standard for larger, efficient projects. This kind of balance requires a different method for controlling stability, because the robot will sometimes be in statically unstable positions. Because of its high computing demands, this method can only be realized with very powerful and fast computer hardware.<br />
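For reference, the static-balance criterion mentioned above can be illustrated with a small sketch: a biped is statically stable when the ground projection of its centre of mass lies inside the support polygon of the feet. The sketch is generic and not ARCHIE's controller; quasi-dynamic balance requires a more elaborate criterion evaluated at every step (for example keeping the zero-moment point inside the support polygon).<br />

```python
# A minimal, generic sketch of the static-balance criterion: the robot is
# statically stable when the ground projection of its centre of mass lies inside
# the convex support polygon formed by the feet. All numbers are illustrative.

from typing import List, Tuple

Point = Tuple[float, float]

def inside_convex_polygon(p: Point, poly: List[Point]) -> bool:
    """True if point p lies inside the convex polygon given as CCW vertices."""
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # The cross product must stay non-negative for every edge of a CCW polygon.
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

# Hypothetical double-support polygon (both feet on the ground), in metres.
support = [(0.00, 0.00), (0.25, 0.00), (0.25, 0.35), (0.00, 0.35)]
com_projection = (0.12, 0.18)   # ground projection of the centre of mass
print(inside_convex_polygon(com_projection, support))  # True -> statically stable
```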

Walking and running by two-legged robots is only possible by means of appropriate actuators and sensors. Currently there is no real alternative to classical electrical drives; one step towards humanoid walking, as far as actuators are concerned, could be pneumatic or artificial muscles. Concerning sensors, and taking into account the huge development of sensors in the last ten years, there should be no problem in using standardized sensors; the only problems could be the size and the price of such sensors, and installing reasonable computer hardware to process all the signals from the different sensors online in the software.<br />

A humanoid robot can only move efficiently if the mechanical construction parameters are optimized. First, a good mass distribution can minimize the complexity of the control as well as the energy consumption. Currently, energy consumption is more or less an unsolved problem in mobile robots, but some technologies are available to solve this problem in the next few years. Approximately 40% of the total weight of the robot is batteries for the power supply. The recharging time of currently available batteries is approximately between one and two hours, guaranteeing an operation time of about 1.5 hours.<br />

An open problem of humanoid walking is currently the shifting of the weight onto the other leg at each step by means of a small roll movement of the pelvis.<br />

4 The new concept for a demining robot<br />

We are currently working on a humanoid, two-legged robot called ARCHIE. The goal is to build a humanoid robot which can simulate a human being in some situations. Therefore ARCHIE needs a head, a torso, two arms, two hands and two legs, and it will have the following features:<br />


Height: 80 - 100 cm<br />

Weight: less than 40 kg<br />

Operation time: minimum 2 hrs<br />

Walking speed: minimum 1 m/s<br />

Price: low selling price, using commercially available standard components<br />

Intelligence: “on board” intelligence<br />

Hands: hands with three fingers<br />

Degrees of freedom: minimum 24<br />

Other features: capable of cooperating with other robots to form a humanoid Multi Agent System (MAS) or a “Robot Swarm”<br />

Technical details – according to the currently available technologies:<br />

Components: 1 head, 2 legs, 2 arms, 1 torso<br />

CPU + main microcontroller: PDA module; more than 32 MB SDRAM; 400 MHz; Windows CE or Linux; serial interface<br />

Sensors: ultrasonic, temperature, acceleration, pressure, force<br />

Image processing + audio control: PDA module which communicates with the main processor<br />

Image processing: 2 small CMOS camera modules<br />

Audio: 2 small microphones for voice recognition; 1 loudspeaker for communicating with the environment; 1 amplifier<br />

Clock: system time of the PDA module<br />

Joints:<br />

• Leg, six DOF: two for the ankle (left/right and up/down of the foot), one to lift the leg, one for the knee (to bend the leg), two for the hip (one for rotating and one for moving the leg left or right)<br />

• Arm, four DOF: two for the shoulder section (one for lifting the arm and one for rotating), two for the elbow (for bending the arm and for rotating the forearm)<br />

• Head, two DOF: one for rotating the head, one for the up and down movement of the head<br />

• Torso, two DOF: one for the shoulder section, one for the hip section<br />

• Hand, six DOF: one fixed finger and two fingers with 3 DOF each<br />

Drives: for each DOF a mini motor with a gear unit (up to 70 kg moment of force) or a servo<br />

Joint control (control of the drives): for each component at least one microcontroller, e.g. a Basic Stamp<br />

Material of the body: aluminium, plastics<br />
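The joint layout in the table lends itself to a compact data representation. The sketch below encodes the component and DOF figures from the table and checks them against the stated minimum of 24 degrees of freedom; the representation itself is only illustrative.<br />

```python
# A small sketch that encodes the joint layout from the table above as data and
# checks it against the "minimum 24 degrees of freedom" figure. The component and
# DOF numbers come from the paper; the representation is only illustrative.

COMPONENTS = {            # component name -> (count on the robot, DOF each)
    "leg":   (2, 6),      # ankle (2), lift, knee, hip (2)
    "arm":   (2, 4),      # shoulder (2), elbow (2)
    "head":  (1, 2),      # rotate, up/down
    "torso": (1, 2),      # shoulder section, hip section
    "hand":  (2, 6),      # one fixed finger plus two fingers with 3 DOF each
}

def total_dof(components: dict, include_hands: bool = False) -> int:
    return sum(count * dof for name, (count, dof) in components.items()
               if include_hands or name != "hand")

print(total_dof(COMPONENTS))                     # 24 (legs, arms, head, torso)
print(total_dof(COMPONENTS, include_hands=True)) # 36 including both hands
```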

Sensors: proximity sensors for measuring distances and creating primitive maps; temperature, acceleration, pressure and force sensors for feeling and social behaviour; two CMOS camera modules for stereoscopic vision; two small microphones for stereo hearing; and one loudspeaker to communicate with humans in natural language.<br />

The control system is realised as a network of processing nodes (a distributed system), each consisting of relatively simple and cheap microcontrollers with the necessary interface elements. With currently available technologies the main CPU is, for example, a PDA module, with one processor for image processing and audio control and one microcontroller for each structural component, e.g. a Basic Stamp from Parallax.<br />
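By way of illustration, the sketch below shows one possible command framing between the main CPU and a per-component joint controller over such a serial link. The frame layout (start byte, node id, joint id, angle, checksum) is hypothetical; the paper only specifies the distributed architecture and the serial interface.<br />

```python
# A minimal sketch of how the main CPU could address the per-component joint
# controllers over the serial links mentioned above. The framing is purely
# hypothetical; the paper only states that each component has its own
# microcontroller connected to the main processor via a serial interface.

import struct

START_BYTE = 0xAA

def encode_set_angle(node_id: int, joint_id: int, angle_deg: float) -> bytes:
    """Build one command frame for a joint controller."""
    payload = struct.pack("<BBf", node_id, joint_id, angle_deg)
    checksum = sum(payload) & 0xFF               # simple 8-bit checksum
    return bytes([START_BYTE]) + payload + bytes([checksum])

def decode_frame(frame: bytes):
    """Inverse of encode_set_angle; raises ValueError on a corrupted frame."""
    if frame[0] != START_BYTE or (sum(frame[1:-1]) & 0xFF) != frame[-1]:
        raise ValueError("bad frame")
    node_id, joint_id, angle = struct.unpack("<BBf", frame[1:-1])
    return node_id, joint_id, angle

frame = encode_set_angle(node_id=1, joint_id=3, angle_deg=12.5)  # e.g. a knee joint
print(decode_frame(frame))
# On the real robot the frame would be written to the serial port of the
# corresponding microcontroller (for example with pyserial's Serial.write).
```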

5 Summary<br />

The humanoid robot described in this paper is situated between expensive "professional" robots and cheap "amateur" robots. It should be used for demining, but it can also support humans in everyday life and serve for entertainment, leisure and hobby. For working with children and handicapped persons, social behaviour of the robot is necessary.<br />

In addition, it should also be able to work, like a human, in an industrial environment, probably in factory automation: transporting workpieces and tools between machines, production cells and storage devices; cooperating with a human at a common workplace; operating in hazardous environments, such as landmine detection and removal, especially in rough terrain; or working inside nuclear facilities. The applications will multiply in the future.<br />

6 References<br />

• Coiffet, Ph. (1998): New Role of Robotics in the Next Century, in: Proceedings of the 7th Intl. Workshop on Robotics in Alpe-Adria-Danube Region RAAD'98, June 26-28, 1998, Smolenice Castle, Slovakia, pp. 261-266.<br />

• De Almeida, A.T., Khatib, O. (1998): Lecture Notes in Control and Information Sciences – Autonomous Robotic Systems, Springer, London, 1998.<br />

• Han, M.-W., Kopacek, P. (1998): Intelligent Autonomous Agents – Robot Soccer, in: Proceedings of the 7th Intl. Workshop on Robotics in Alpe-Adria-Danube Region RAAD'98, June 26-28, 1998, Smolenice Castle, Slovakia, pp. 261-266.<br />

• Han, M.-W., Kopacek, P., Novak, G., Rojko, A. (2000): Adaptive Velocity Control of Soccer Mobile Robots, in: Proceedings of the Intl. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD '00), Maribor, Slovenia, pp. 41-46.<br />

• Hirochika, I. (1999): Humanoid and Human Friendly Robots, in: Proceedings of the 30th Intl. Symposium on Robotics, Tokyo, Japan, Oct. 27-29, 1999.<br />

• Kopacek, P. (2003): Humanoid Robots for Demining, in: Preprints of the On-Site IARP Workshop on Robots for Humanitarian Demining, HUDEM 2003 – Vision or Reality, June 19-20, 2003, Prishtina, Kosovo.<br />

• Kopacek, P., Han, M.-W., Putz, B., Schierer, E., Würzl, M. (2004): New Concepts for Humanoid Robots, in: Proceedings of RAAD'04, 13th International Workshop on Robotics in Alpe-Adria-Danube Region, Brno, June 2-5, 2004 (to be published).<br />

• Wörn, H., Dillmann, R., Henrich, D. (1998): Autonome Mobile Systeme 1998, Springer Verlag, 1998.



Towards a Semi-Autonomous Vehicle<br />

for Mine Neutralisation<br />

Kym Ide, Brian J. Jarvis, Bob Beattie, Paul Munger and Leong Yen †<br />

Weapons Systems Division<br />

Systems Sciences Laboratory<br />

† Land Operations Division<br />

Systems Sciences Laboratory<br />

ABSTRACT<br />

Rapid Route and Area Mine Clearance Capability (RRAMCC) is a Capability Acquisition program conducted<br />

by the Department of Defence in Australia. Initial capability requirements and risk mitigation research is being<br />

undertaken within the Defence Science and Technology Organisation. This paper discusses progress of the<br />

mine neutralisation research component of the RRAMCC system concept.<br />

An earthmoving Bobcat is in the process of being converted to semi-autonomous operation for the purpose of<br />

delivering a mine neutralisation charge. The vehicle navigation system consists of several components. High<br />

quality video cameras housed in tilt adjustable pods give a 360-degree field of view. They are connected to a<br />

multi-input frame grabber card that outputs a composite image configured by the remote operator. The<br />

composite image is converted to MPEG-2 video and streamed with very low latency over a wireless local area<br />

network (WLAN) to a remote ground control station (GCS).<br />

Significant mechanical modifications have been made to the Bobcat to provide a platform for teleoperation. The<br />

original hydraulic steering system was replaced with a system of hydraulic actuators and an electronic<br />

proportional valve block connected to a “Hetronic” radio control unit. Transmissions to and from the radio<br />

control handset are multiplexed onto the WLAN allowing the neutralisation vehicle to be driven at distances up<br />

to 1 km from the ground control station.<br />

Development and programming of a robotic arm for placement of the mine neutralisation charge has taken<br />

place in a full-scale mock-up within a simulated environment. This facility including a reconfigurable road<br />

surface provides a controlled setting for testing all aspects of semi-autonomous operation in parallel with<br />

vehicle development.<br />

There is a program of planned upgrades to enhance vehicle capability. These activities are expected to<br />

lead to a field trial in the fourth quarter of 2004.<br />


INTRODUCTION<br />

Issues associated with designing a teleoperation system for a skid-steer vehicle have been studied<br />

previously 1 . The Australian Army supplied DSTO with an earthmoving Bobcat model 843, which was<br />

surplus to requirements. This vehicle is being converted for use as a mine neutralisation vehicle (MNV)<br />

for the purpose of a concept technology demonstration (Figure 1 shows a concept drawing of the MNV).<br />

The integration and testing of the various sub-systems added to the vehicle will be discussed in this paper.<br />

Figure 1 Concept Drawing of Semi-Autonomous Mine Neutralisation Vehicle.<br />

A mine detection vehicle will mark the position of a landmine buried in a road or route with a paint spot,<br />

magnetic or radio frequency tag. The MNV will work after the detection vehicle. The MNV will be<br />

driven by an operator sitting at a remote GCS to a position close to the marker using global positioning<br />

system (GPS) coordinates as a guide (supplied by the mine detection vehicle). A visual aid indicating the<br />

operating range of the manipulator (robotic arm) mounted on the MNV is overlaid onto the camera<br />

image at the ground control station display. The vehicle is stopped when the operator is confident that the<br />

marker is within range of the manipulator. The manipulator autonomously places a mine neutralisation<br />

charge on the ground with high precision. The MNV then retreats to a safe distance and the mine<br />

neutralisation charge is remotely detonated ensuring that the buried landmine is disrupted.<br />

MINE NEUTRALISATION VEHICLE SUB-SYSTEMS<br />

1 Mechanical Conversion<br />

A significant amount of work was required to prepare the Bobcat for teleoperation. Firstly, all extraneous<br />

parts were removed, i.e. bucket, arms, canopy, seat, rear jacks and engine cover. This left a platform with<br />

plenty of capacity, in both load bearing and “real estate”, for mounting the various sub-systems. The<br />


centre of gravity was maintained between the axles by addition of the robotic arm (discussed in Sub-<br />

Section 6). A counterweight of welded steel plate and bar functioned as a substitute for the robotic arm<br />

during radio control set-up and testing (as seen in Figure 2).<br />

The main manual hydraulic control block was replaced with an electronic proportional valve block<br />

allowing connection to a radio controller.<br />

2 Radio Controller<br />

The “Hetronic” Nova-L type radio controller (Figure 3) is a well proven commercial off-the-shelf (COTS)<br />

product often used in the operation of cranes, gantries and other material handling equipment. The<br />

controller is well suited to replace the Bobcat’s skid steering mechanism as the transmitter unit’s paddles<br />

are fully programmable with respect to the corresponding movement of the spools in the valve block.<br />

Differences between the left hand and right hand side hydraulic systems can be electronically<br />

compensated to provide precise steering control.<br />

The original radio controller units have a range of 200m and operate in the 400-470MHz band. The<br />

transmitter unit was modified to allow data to be captured prior to radio transmission. Connected by cable<br />

to the GCS PC’s serial port, the data is packetised in software and transmitted over the LAN between the GCS and MNV. Software on the MNV PC converts the TCP/IP data back to serial data, which is sent to the modified receiver unit by a cable connected to the serial port. By multiplexing all controller and video transmissions, the range of teleoperation is limited only by the maximum range of the WLAN. This reduces the likelihood of mutual interference from having several radio frequency transmitters onboard: the range of each subsystem is limited by the range of the WLAN rather than by its own transmitter.<br />
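The serial-over-LAN bridging can be illustrated with a short sketch. The port names, baud rate and addresses below are assumptions, not DSTO's software; the sketch only mirrors the described data path (handset serial output, to TCP over the WLAN, back to the receiver's serial input).<br />

```python
# A minimal sketch (assumed port names, baud rate and addresses) of the
# serial-over-LAN bridging described above: bytes captured from the Hetronic
# transmitter's serial output at the GCS are forwarded over a TCP connection, and
# a matching bridge on the MNV writes them back out to the receiver's serial port.

import socket
import serial  # pyserial

def gcs_bridge(serial_port="COM1", baud=9600, mnv_addr=("192.168.0.2", 5000)):
    """Run at the ground control station: serial in -> TCP out."""
    ser = serial.Serial(serial_port, baud, timeout=0.05)
    with socket.create_connection(mnv_addr) as sock:
        while True:
            data = ser.read(256)          # whatever the handset produced
            if data:
                sock.sendall(data)        # packetised onto the WLAN

def mnv_bridge(listen_port=5000, serial_port="COM1", baud=9600):
    """Run on the MNV PC: TCP in -> serial out to the modified receiver."""
    ser = serial.Serial(serial_port, baud)
    with socket.create_server(("", listen_port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(256)
                if not data:
                    break
                ser.write(data)           # receiver sees the original byte stream
```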

Figure 2 Current configuration of Bobcat (inset: original state)<br />

3 Camera Pod<br />

Figure 3 Hetronic Nova-L type radio controller<br />

The vision system for teleoperation consists of a number of components. Miniature cameras were selected<br />

with low blemish Sony Super-HAD colour CCDs capable of 440 lines resolution. Two camera pods were<br />

designed and manufactured for placement on the Bobcat. One will be installed facing forward and one<br />

rearward but the design allows for placement in any configuration. Each pod is fitted with three CCD<br />

cameras, as seen in Figure 4. The lenses have a horizontal field of view of slightly greater than 60<br />

degrees. When aligned such that the field of view of each camera is slightly overlapping, two pods<br />

installed back-to-back provide a 360 degree field of view. The housing provides protection against the<br />

elements and shades the CCDs from direct sunlight.<br />

Figure 4 Computer model of camera pod showing internals<br />


To ascertain the best position for placement of the camera pods the view from a number of positions was<br />

determined by modelling. The modelling results showed (Figure 5) that the most effective teleoperation<br />

of the MNV would be achieved when the cameras are pointed at or just below the horizon. The modelling<br />

also showed that to operate the manipulator for placement of the mine neutralisation charge the cameras<br />

needed to be pointed downwards with a view of the working area. To account for these requirements the<br />

cameras were mounted on a motorised platform within the pod, which can be remotely adjusted to any<br />

position between horizontal and a lookdown angle of 45 degrees.<br />

Figure 5 Model of camera views<br />

4 Video Transmission<br />

Images from the cameras placed on the MNV are captured and transmitted to the GCS by a PC based<br />

system described below. The remote PC contains a multi-input framegrabber card (manufactured by<br />

Zandar Technologies) capable of capturing images from all six cameras simultaneously. The associated<br />

software stitches these images together to form a “composite” view of any configuration desired by the<br />

operator. Each operator may position, size, crop and overlap any of the six camera images according to<br />

personnel preferences and save them to a configuration file. Different configurations may be preferred for<br />

driving the MNV and placement of the mine neutralisation charge. The application is also capable of<br />

displaying overlays to the composite view and background bitmaps, which can be used to provide<br />

navigational aids or relevant information.<br />

The composite view output from the Zandar card is input to an audio/visual (A/V) encoder card<br />

(manufactured by Vwebcorp). This card encodes the input video to MPEG-2 format in hardware in<br />

accordance with the bit rate and resolution specified by the operator. The associated server software then<br />

packetises the MPEG-2 data and sends it over the LAN using the user datagram protocol (UDP). At the<br />

GCS PC the client software (player) decodes the incoming data stream and displays it.<br />
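As an illustration of the streaming step, the sketch below shows a generic way of packetising an already-encoded MPEG-2 byte stream into UDP datagrams and receiving them at the GCS. It is not the Vwebcorp server software; the addresses, port and packet size are assumptions.<br />

```python
# A generic sketch (not the actual server software) of the UDP streaming step
# described above: an already-encoded MPEG-2 byte stream is cut into datagrams
# small enough to avoid IP fragmentation and sent to the GCS, where a player
# would reassemble and decode them. Because UDP has no retransmission, lost
# packets simply show up as dropped frames, which keeps latency low.

import socket

PACKET_SIZE = 1400                       # stay under a typical Ethernet MTU

def stream_udp(encoded_stream, gcs_addr=("192.168.0.1", 6000)):
    """Send chunks of an iterable of MPEG-2 bytes to the ground control station."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    for chunk in encoded_stream:
        for i in range(0, len(chunk), PACKET_SIZE):
            payload = chunk[i:i + PACKET_SIZE]
            header = seq.to_bytes(4, "big")      # sequence number to spot loss
            sock.sendto(header + payload, gcs_addr)
            seq += 1

def receive_udp(listen_port=6000):
    """Yield (sequence, payload) in arrival order; the decoder tolerates gaps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    while True:
        datagram, _ = sock.recvfrom(PACKET_SIZE + 4)
        yield int.from_bytes(datagram[:4], "big"), datagram[4:]
```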

For communications a WLAN conforming to the IEEE 802.11b standard (manufactured by Proxim Corp)<br />

was installed. This LAN connection is capable of transmitting data at 11Mbps in ad-hoc (or peer-to-peer)<br />

mode between the MNV PC and the GCS PC. In practical terms this data rate will never be achieved and<br />

our tests have shown a data rate of approximately 6Mbps can be attained. UDP has no facility for error<br />

correction (data being simply unicast or broadcast across the network) hence a very low latency of<br />

approximately 120 msec to the client software at the GCS has been achieved. Studies 2 have shown that a<br />

maximum latency of 300 msec is acceptable for a teleoperated vehicle travelling at 20 km/h and this<br />

increases to 800 msec at a speed of 8 km/h. The MNV has a top speed of 10 km/h.<br />


Range of the video transmission system has been extended by addition of external antennas and<br />

amplifiers. In tests a stable, low latency video image was received over a distance of up to 1000m.<br />

5 Ground Control Station<br />

An example of the video images transmitted in tests of the system is shown in Figure 6. The image shows the composite view from six cameras stitched together to form forward and rearward looking panoramas. The A/V encoder card supports a maximum video resolution of 720×576 pixels at 25 frames<br />

per second (fps). However, the frame rate of the video displayed at the GCS was approximately 15 fps.<br />

Dropped frames were expected due to the lack of error correction present in the UDP used to transmit<br />

video data.<br />

The bandwidth of the 802.11b WLAN limited the maximum bit rate for the MPEG-2 stream to 3Mbps. It<br />

is envisaged that by upgrading the WLAN to the 802.11g standard a practical limit for the MPEG-2 bit rate should be 6-7 Mbps, which is typical of bit rates used to encode commercial DVD-Video. The A/V<br />

encoder card supports a maximum video bit rate of 15Mbps. It also supports audio encoding and it is<br />

envisaged that sound captured by a microphone will be transmitted to aid the driver’s situational<br />

awareness.<br />

Other features of the GCS graphical user interface (GUI) may include vehicle diagnostics and GPS data<br />

on a moving map display providing location and orientation. While the manipulator operates an alternate<br />

GUI displaying video provided by cameras mounted on the arm and gripper as well as information from<br />

sensors will allow the operator to monitor and control the manipulator during placement of the mine<br />

neutralisation charge.<br />

Figure 6 Images from video transmitted over WLAN<br />

6 Robotic Arm<br />

An industrial robotic arm (UP-20, manufactured by Motoman Robotics) was selected for integration onto<br />

the Bobcat. This arm has a maximum carrying capacity of 20 kg. The UP-20 was preferred to an “in-house” designed and manufactured manipulator because it offered mature technology with great<br />

flexibility owing to its programming simplicity.<br />

Shown below (Figure 7) is a full-scale mock-up of the MNV constructed in DSTO’s Robotic<br />

Development Laboratory. This environment allows development and validation of control code and safe<br />

function testing of the robotic arm. The laboratory permits parallel development of the mechanical<br />

elements of the Bobcat prior to full integration of the manipulator.<br />

The laboratory features a simulated road surface comprising gravel and sand, which can be reshaped to<br />

form features such as potholes and bumps. Hence, the effect of these elements on the sensors and robotic<br />

arm can be studied. A fully developed manipulator can be readied for installation onto the Bobcat for final<br />

integration.<br />

Figure 7 Full-scale neutralisation vehicle mock-up and simulated road surface<br />

7 Manipulator Sensors<br />

At the GCS, video and other sensor information will be displayed on a GUI developed for use during<br />

operation of the manipulator. A camera mounted in the gripper provides video of the area under the<br />

manipulator (see Figure 8). A marker designating the location of a buried mine will be visible on the GUI<br />

display. The operator, by means of a mouse or other input device, clicks on the centre of the mine marker.<br />

The manipulator then proceeds to autonomously place a mine neutralisation charge on the ground. Firstly,<br />

it moves along a vector, keeping the marker in the centre of the video image, until the infrared range<br />

finder senses the ground. Calculation of the exact position in three-dimensional space is sent to the XRC<br />

(robotic arm controller) relative to the robotic arm’s base coordinate system. The manipulator proceeds to<br />


scan over the ground near the marker and the range finder takes multiple measurements from which a<br />

plane of best fit can be calculated through the surface.<br />

The manipulator picks up a mine neutralisation charge in the gripper jaws. Using the plane of best fit a<br />

movement step for the robotic arm can be calculated so that the mine neutralisation charge can be put<br />

down perpendicular to the ground. This movement is sent to the XRC and the manipulator proceeds to<br />

slowly put the mine neutralisation charge down on the marker. One or more of the gripper pressure<br />

sensors will indicate when the bottom of the mine neutralisation charge has come into contact with the<br />

ground. The manipulator rotates the mine neutralisation charge around the point of contact until all<br />

pressure sensors indicate that the mine neutralisation charge is in full contact with the ground. The<br />

operator manually fine tunes the position of the mine neutralisation charge and/or releases it. The<br />

manipulator returns to a rest position.<br />
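The plane-of-best-fit step lends itself to a short illustration. The sketch below (assumed data and frame conventions, not DSTO's code) fits z = ax + by + c to range-finder samples by least squares and derives the surface normal along which the charge would be lowered so that it sits perpendicular to the ground.<br />

```python
# A minimal numpy sketch of the "plane of best fit" step described above: given
# ground points sampled by the infrared range finder (expressed in the arm's
# base frame), fit z = a*x + b*y + c by least squares and derive the surface
# normal. Sample values are invented for illustration.

import numpy as np

def fit_plane(points: np.ndarray):
    """points: (N, 3) array of x, y, z samples. Returns (a, b, c) and unit normal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    normal = np.array([-a, -b, 1.0])
    return (a, b, c), normal / np.linalg.norm(normal)

# Hypothetical scan of a gently sloping patch of road surface (metres).
samples = np.array([[0.00, 0.00, 0.000],
                    [0.10, 0.00, 0.012],
                    [0.00, 0.10, -0.005],
                    [0.10, 0.10, 0.008],
                    [0.05, 0.05, 0.004]])
coeffs, normal = fit_plane(samples)
print("plane coefficients:", coeffs)
print("approach direction (unit normal):", normal)
```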


Figure 8 Gripper showing pressure sensors, infrared range finder, LEDs, claws (retracted) and internally mounted camera<br />

CONCLUSIONS<br />

Integration and testing of sub-systems for conversion of a Bobcat to teleoperation for mine neutralisation<br />

is progressing well. Programming the Hetronic radio controller will compensate for mechanical and<br />

hydraulic variations between each side of the skid steering mechanism.<br />

The gripper sensors and manipulator require integration with subsequent programming of the robotic arm<br />

for autonomous pick and place of the mine neutralisation charges. Once fully developed in the simulated<br />

environment, the robotic arm will be installed along with its controller, PC and hydraulic generator set<br />

onto the MNV.<br />

These activities are expected to lead to a field trial in the fourth quarter of 2004.<br />


REFERENCES<br />

1. Barton, D., Connell, T., Tamblyn, T., Thorburn, J. and Wickham, T., “Advanced Landmine Proofing System –<br />

Teleoperation Control System Design”, Proceedings of Level IV Design & Research Project Seminar Program,<br />

Department of Mechanical Engineering, University of Adelaide, Adelaide, September 2001, pp108-113.<br />

2. “Remote Operation of Off-Highway Vehicles” CSIRO Robotics & Automation Team, CSIRO Report No. CMIT-<br />

B C#1 2003, Dec 2002 (Revised February 2003), CSIRO.<br />

ACKNOWLEDGEMENTS<br />

Mr Bob Beattie and Mr Paul Munger performed the bulk of the mechanical modifications to the Bobcat.<br />

Design and manufacture of sub-systems was performed by Scientific and Engineering Services personnel,<br />

particular thanks to: Mr Adriano Tranfa, Mr Frederico Lorenzin and Mr Peter Richards.<br />



Results of Open-Air Field Trials on Stoichiometric<br />

Identification of UXO Fillers and Buried AT landmines<br />

using Portable Non-Pulsed Non-Directed Fast-Neutron<br />

Stoichiometer (Models 3B2 and 3AT2)<br />

Mu Young Lee, Tsuey-Fen Chuang, Christian Druey, Bogdan C. Maglich,<br />

HiEnergy Technologies, Inc.,<br />

1601 Alton Parkway, Irvine, California 92606<br />

www.hienergyinc.com<br />

John W. Price<br />

Department of Physics and Astronomy, University of California, Los Angeles, California<br />

90095<br />

George Miller<br />

Department of Chemistry, University of California, Irvine, California 92697<br />

Abstract<br />

Stoichiometer™ is a remote online decipherer of the quantitative empirical chemical formulas of unknown substances, through steel, soil and other barriers. It retrieves the formulas in the form CaNbOc,<br />

where a, b, c are the atomic proportions of carbon, nitrogen and oxygen respectively, which are determined<br />

to an accuracy of 5-10%. It also graphically displays on the operator’s push button command, each formula<br />

as a point on a triangular screen while flashing a message of “This is explosive”, or “Not a known<br />

explosive” or “Marginal, double check!” The contents of other chemical elements whether or not related to<br />

explosives can also be displayed on screen. The top-of-the-line version of Stoichiometer is SuperSenzor; it<br />

uses electronically directed neutrons and it is currently in the development stage.<br />

We report here the results with the already developed simpler, slower, shorter range and less expensive<br />

version of Stoichiometer named MiniSenzor, which uses non-directed 14 MeV neutrons. In open-air field<br />

UXO filler identification tests, MiniSenzor Model 3B2 scored 100% in answering "yes/no" to the question<br />

"Is it explosive?” It scored 80% in answering "What explosive is it?” The trials were conducted at the<br />

Navy's EOD Technology Division (NAVSEA), Indian Head, MD. A separate series of trials with a similar<br />

MiniSenzor, 3AT2, was carried out at a University of California test site in Irvine. Samples of 5 kg of TNT or RDX simulant AT (anti-tank) landmines were chemically identified and discriminated from non-explosives with no false alarms through 2.5 cm of wet soil.<br />

The decision time varied from 2-20 minutes, depending on the sample size. The time is projected to drop<br />

to 1-5 minutes with the improved non-directed model 3B3; and to 12-60 seconds with the directed-neutron<br />

model, SuperSenzor 7AT7, which is being developed under an SBIR Phase II contract with the U.S.<br />

Army’s RDECOM Night Vision and Electronic Sensors Directorate.<br />

1. Introduction<br />

The Stoichiometer analyzes the gamma rays emitted from substances that are probed with<br />

fast neutrons. For the detection of explosives, the key elements are carbon, nitrogen and<br />

oxygen. The Stoichiometer’s analysis results in the calculation of the relative proportions of those three elements present in the substance.<br />

Figure 1. A graphical representation of the three-element chemical formulae for several substances.<br />

A simple way of depicting this three-element empirical formula is in a plot such as the one shown in Figure 1. The location of<br />

the markers which are associated with various substances shown in the figure depends on<br />

the CNO ratios for that substance. The distances from a point within the triangle to the<br />

three sides are proportional to the elemental ratios for that substance. The operator does<br />

not need to actually see the results depicted this way since an automatic alert message<br />

will be generated if a threat is detected but the triangular plot is instructive in<br />

understanding how the Stoichiometer works. Notice that explosives are generally<br />

clustered around a certain region of the plot. Although some substances have fixed<br />

chemical formulas such as TNT, which is a specific molecule, there are some explosive<br />

substances which may consist of two or more other explosives (and possibly nonexplosive<br />

additives as well as inadvertent impurities) in a mixture of inexact proportions.<br />

For this reason, if the results of the interrogation fall within the region of explosives then<br />

the Stoichiometer will report to the operator that a threat is present. This ability to


calculate the empirical chemical formula obviates the need for the creation and storage of<br />

a database of gamma ray spectral signatures against which the experimentally acquired<br />

spectra must be compared. Instead, the Stoichiometer’s analysis resides in a<br />

parametrized space where the dimensional axes are the quantitative carbon, nitrogen and<br />

oxygen content.<br />
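To make the decision rule concrete, the following is a simplified sketch, not HiEnergy's algorithm, of how a measured C/N/O formula could be normalised into the triangle of Figure 1 and compared with a few reference explosive compositions. The reference list and tolerance are illustrative assumptions; the paper does not publish the Stoichiometer's actual decision region.<br />

```python
# A simplified sketch of the triangular C/N/O classification described above.
# Measured atomic proportions are normalised to barycentric coordinates (points
# in the triangle of Figure 1) and compared against a few reference explosive
# compositions; the tolerance and reference list are illustrative assumptions.

import math

REFERENCES = {                      # name -> (C, N, O) atomic proportions
    "TNT":           (7, 3, 6),     # C7H5N3O6
    "RDX":           (1, 2, 2),     # C3H6N6O6 reduced to C1N2O2
    "Composition B": (1.0, 1.2, 1.3),
}

def barycentric(c: float, n: float, o: float):
    s = c + n + o
    return (c / s, n / s, o / s)

def classify(c: float, n: float, o: float, tol: float = 0.08) -> str:
    p = barycentric(c, n, o)
    for name, ref in REFERENCES.items():
        if math.dist(p, barycentric(*ref)) < tol:
            return f"This is explosive (closest match: {name})"
    return "Not a known explosive"

# Example: the formula C1N1O1.2 reported for one of the Indian Head targets.
print(classify(1.0, 1.0, 1.2))
```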

2. Key Components<br />

MiniSenzor Models 3AT2 and 3B2 consist of five main components (plus an optional battery):<br />

1) Portable (34 lb, 75 Watt) fast neutron generator & control unit.<br />

2) Solid state gamma ray detectors (less than 25 lbs each) made of high purity<br />

germanium crystals.<br />

3) Detector interface, control, and digital signal processing module.<br />

4) Shielding materials.<br />

5) Portable computer with Windows-XP operating system.<br />

6) Optional battery for system power.<br />

The neutron generator produces up to 10^8 fast neutrons per second at 14 MeV, emitted isotropically. Shielding materials between the gamma ray detector and the neutron source are used to protect the detector from neutron damage as well as to mitigate neutron-induced noise signals. The fast neutrons that reach the target excite the nuclei in the<br />

target which, in turn, decay to the lower and ground states via gamma ray emission. The<br />

gamma ray detector receives the gamma ray signals and transmits them to a multichannel<br />

analyzer for digitization and processing. The digitized data then are transferred<br />

to a remote laptop computer for analysis.<br />

At first glance, the PELAN system from SAIC appears very similar to the Minisenzor in that it also utilizes a 14 MeV neutron source and gamma ray detector for UXO detection. PELAN utilizes the PFTNA (Pulsed Fast Thermal Neutron Analysis) technique instead of the Minisenzor's continuous fast neutron technique. The gamma detector used in competing<br />

devices is BGO (Bismuth Germanate) which cannot perform stoichiometry because of its<br />

much poorer energy resolution (7%) compared to the HPGe detector (0.2%) used in the<br />

Minisenzor. Without stoichiometric capability, such systems must rely on pattern<br />

recognition algorithms to determine the presence of explosives. Another consequence of<br />

the poor resolution is that the fast-neutron-induced nitrogen gamma rays are basically<br />

indiscernible from the background noise signals. Since nitrogen is the key element found<br />

in explosives, a system which is incapable of detecting nitrogen cannot conclusively<br />

detect explosives.<br />

3. Test Results<br />

In the photograph shown in Figure 2, a rocket warhead was placed in front of an<br />

experimental prototype of the Minisenzor 3B2 system. This object and a total of 14 other<br />

objects were interrogated during a blind test of the Minisenzor conducted by the EOD<br />

Technology Division (NAVSEA) at Indian Head in Maryland in January of 2003 as part


of a UXO filler identification program. Among the targets, an explosive was identified as<br />

Composition B (C1N1.2O1.3) with an empirically determined chemical formula of<br />

C1N1O1.2 in a 5 minute interrogation. The result for the target of ammonium nitrate in a<br />

cardboard box was identified in a 5 minute interrogation. A bottle of gasoline was also<br />

identified in a 5 minute interrogation. The range of interrogation times was between 5<br />

and 20 minutes for the various objects. Although the performance score of the<br />

Minisenzor on an object by object basis was not disclosed to HiEnergy by Navy<br />

personnel, it was revealed that the device had achieved an overall score of correctly identifying 80% of the filler materials.<br />

More recent tests of the Minisenzor conducted at HiEnergy’s labs indicate that under similar test conditions a minimum quantity of 0.114 kg of TNT simulant (chemical ratios: C=2, N=1, O=2) can be detected in 20 minutes.<br />

Figure 2. Rocket warhead being interrogated by a Minisenzor system.<br />

To first order, the detection time, T, depends on the mass of the target, M, and on the distances from the target to the neutron source, L1, and to the detector, L2, as described by the equation below:<br />

T = c · L1² · L2² / M<br />

Here c is an empirically determined constant which depends on the setup configuration, target and background. Depending on the particular configuration, there is also a minimum mass threshold below which a conclusive<br />

stoichiometric interrogation is not achievable. Different masses of TNT simulant from<br />

2.18 kg down to 0.114 kg were detected by using the MiniSenzor at HiEnergy’s labs in a<br />

geometry similar to the one shown in Figure 2. Examples of the sizes of the interrogated<br />

targets are as follows: a 4.8 lb (2.18 kg) target sample was placed in a cylindrical metal<br />


canister, 17cm in diameter and 13.5 cm in height. A 1 lb (454g) target sample was placed<br />

in a cylindrical can 9.5 cm in diameter and 13 cm in height. A 0.5 lb (227g) target<br />

sample was placed in a cylindrical can 6.5 cm in diameter and 12 cm in height. A 0.25 lb<br />

(114g) target sample was placed in a cylindrical can 6.5 cm in diameter and 12 cm in<br />

height.<br />

In order to make a conclusive determination of explosive content, the Stoichiometer must<br />

collect a statistically sufficient number of nitrogen gamma rays in addition to those from<br />

carbon and oxygen. Since TNT has a lesser nitrogen content compared to equal masses<br />

of RDX, HMX and Composition B, the detection time for these other explosives with<br />

higher nitrogen concentration is shorter. In situations where the target is not on a table, the minimum detectable mass of TNT simulant so far is 5 lb (2.3 kg) on the ground and 11 lb (5 kg) buried under 2.5 cm of soil. The latter results were from tests conducted on<br />

the campus of the University of California at Irvine using the Minisenzor. The detection<br />

of buried explosives is more difficult due to the large amount of background gamma ray<br />

signals which originate from the soil. In order to properly compensate for the effect of<br />

this background, a careful measurement of the soil must be made. This particular result<br />

with Minisenzor is not representative of what might happen in other types of soil that are<br />

different from that of the Irvine test site. Tests conducted on rainy days did indicate<br />

however that similar results were achievable, at least for this soil with 5 kg of TNT<br />

simulant buried 2.5 cm below the surface.<br />

It is important to note once again, that the device relies on calculation of an empirical<br />

chemical formula of the object that is being interrogated. The scientific soundness of this<br />

approach was validated in the fact that prior to the testing conducted at Indian Head, the<br />

Minisenzor had never acquired any gamma ray spectra from actual explosives yet was<br />

still very successful in identifying the UXO fillers that were presented during the Navy’s<br />

blind test. Up until that point, the research, development and testing had been conducted<br />

only with various mixtures of chemical reagents that had served as simulants for real<br />

explosives.<br />

In addition to the stoichiometric evaluation of CNO content, certain other elements can also be detected. Stoichiometric results for some of these elements are possible with the Minisenzor, whereas certain other elements are merely detectable. It is expected that with the directional-neutron Supersenzor device many more elements can be evaluated stoichiometrically compared to the Minisenzor. A partial list of other<br />

detectable elements includes hydrogen, phosphorous, sulfur, fluorine, chlorine, calcium,<br />

silicon, aluminum and iron. An example of the typical computer printout from the Indian<br />

Head tests is shown in Figure 3. The interrogated object that generated those results is<br />

shown in Figure 4.<br />

In this case, the red circular dot on the triangle plot shows that the content of the box is<br />

close to TNT or perhaps smokeless powder. In either case, it was identified by the<br />

computer as being explosive. The bar graph in Figure 4 shows that the aluminum from<br />

the box was also detected. The reason some of these other elements were displayed in<br />

the analysis window is that their detection was required in order to properly identify some


of the non-explosive targets that we knew might be placed before us. Examples include<br />

the detection of chlorine in order to identify bleach and the detection of sulfur to<br />

distinguish gasoline from diesel fuel.<br />

Figure 3. Computer display window from the Indian Head blind tests of Minisenzor. An<br />

automatic response message of either explosive or chemical weapon (as opposed to<br />

“inert”) is shown in the upper right. The stoichiometric ratios of CNO are prominently<br />

displayed at the top. The numerical suffix associated with Hydrogen is a nonstoichiometric<br />

result.<br />

Figure 4. An ammunition box being interrogated by Minisenzor. The corresponding data<br />

are shown in Figure 3.


4. Minisenzor vs. PELAN: quantitative analyzer vs. qualitative analyzer.<br />

Minisenzor is a quantitative analytic tool that performs ‘stoichiometric’ chemical<br />

analysis of unknown materials. Stoichiometry is the scientific term for deciphering the<br />

quantitative chemical formulas.<br />

Minisenzor measures the atomic proportions of all 3 chemical elements always<br />

present in military explosives: Carbon, Nitrogen and Oxygen, as well as those of other elements, if present. Obtaining accurate proportions amounts to obtaining the empirical chemical formula of the substance investigated; thus, MiniSenzor performs definite chemical

identification of explosives as well as non-explosives. For example, it gets the formula of<br />

RDX, C1N2O2 to an accuracy of 5% (Figures 1 and 2).<br />

In contrast, PELAN (“Pulsed Elemental Analysis with Neutrons”) performs ‘Elemental

Analyses.’ Elemental analysis generally establishes only what chemical elements are<br />

present but cannot determine the amounts of each of the 3 elements which are required<br />

for explosive identification. E.g. RDX elemental analysis will tell only that this material<br />

contains Carbon, Nitrogen and Oxygen; since dozens of common materials, too, contain<br />

Carbon, Nitrogen and Oxygen, it requires a subsequent inspection to confirm what the<br />

object really contains. An elemental analysis of 3 elements is not a ‘confirmation sensor’ but, provided it can detect all 3 elements, an anomaly sensor or something halfway between an anomaly sensor and a confirmation sensor.

The problem is that PELAN cannot perform even a 3-element analysis.<br />

Figure 5. HiEnergy’s core technology, STOICHIOMETER, chemically identifies explosives,<br />

contraband, and other illicit substances through steel, online, in real time with no operator<br />

intervention or interpretation.


Figure 6. What the operator sees on screen.<br />

• STOICHIOMETRIC ANALYSIS = art & science of deciphering<br />

quantitative empirical chemical formulas of unknown substance.<br />

• ELEMENTAL ANALYSIS: deciphering qualitative presence of some<br />

elements.<br />

Table I. Operating parameters of MiniSenzor 3UXO3 compared to PELAN:

STOICHIOMETER: continuous 14 MeV neutrons; FAST and AUTO-THERMAL; γ energy resolution 0.2%; deciphers CaNbOc, quantitatively obtaining a, b and c to ±5%.

PELAN: pulsed 14 MeV neutrons; fast and thermal; γ energy resolution 6%; measures only the Carbon:Oxygen ratio.

5. PELAN cannot detect Nitrogen.

Detecting nitrogen in the fast neutron induced gamma spectrum is a difficult task.


First, one cubic meter of air contains nearly one kilogram (925 g) of Nitrogen. Second,<br />

the 5.11 MeV Nitrogen peak is inseparable from the 5.10 MeV Oxygen peak (known as the ‘second escape oxygen peak’); they are on top of one another. This is illustrated in Figures 7 and 8, in which the gamma spectra observed by the stoichiometric MiniSenzor and SuperSenzor are compared to those observed by PELAN. It is clear that PELAN cannot observe even the combined Oxygen plus Nitrogen 5.1 MeV peak, let alone measure what fraction of it comes from N and what from O. Third, to observe the first escape N peak at 4.6 MeV, a gamma energy resolution of at least 0.5% is required to separate it from the broad Carbon peak at 4.4 MeV (Figures 7 and 8). The gamma energy resolution of PELAN is 6%, i.e. 1000% poorer.

Figure 7. Gamma spectra from PELAN (2002 and 2003) compared to the STOICHIOMETER SuperSenzor 7UX7: PELAN cannot detect Nitrogen (nitrogen signal-to-noise ratio 1/10, versus 10/1 for the SuperSenzor).

Figure 8. Gamma spectra (gamma energy in keV) from PELAN (2000) compared to the STOICHIOMETER MiniSenzor 3UX3: PELAN cannot detect Nitrogen (nitrogen signal-to-noise ratio 1/10, versus 2/1 for the MiniSenzor).

6. PELAN cannot differentiate explosive from non-explosive in a realistic, non-engineered environment.

PELAN claims to be able to detect explosives from the Carbon-to-Oxygen ratio alone. A confirmation detector for military explosives that does not detect nitrogen is not feasible in a realistic environment: there are about 4,000 known materials that have a C/O ratio similar to that of the explosives (a short numerical illustration follows the list below).

For example, for RDX, C/N = 0.5 while for alcohol C/N = 0.42.<br />

> TNT has C/O = 1.17, while starch C/O = 1.2.<br />

> 362 INORGANIC COMPOUNDS; 414 ORGANIC, 1,000s MIXED<br />

MATERIALS have C/O in the ballpark between TNT and RDX.<br />

> Hence, C/O alone cannot tell explosives from non-explosives<br />

> The knowledge of the percentage of NITROGEN IS CRITICAL. One can<br />

unambiguously determine an explosive only by deciphering quantitatively the<br />

empirical chemical formula involving at least 3 elements.<br />

• CaNbOc<br />

• ALL 3 must be known: a, b, c
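The following minimal Python sketch (not part of the original text) makes the point numerically: it computes the C/O ratio and the full C:N:O atomic proportions for TNT, RDX and starch using their standard chemical formulas, showing that C/O alone is ambiguous while the three-element proportions are not.

def ratios(formula):
    # formula: dictionary of atom counts, e.g. {"C": 7, "H": 5, "N": 3, "O": 6}
    c, n, o = formula.get("C", 0), formula.get("N", 0), formula.get("O", 0)
    total = c + n + o
    return {
        "C/O": round(c / o, 2) if o else float("inf"),
        "C:N:O": tuple(round(x / total, 2) for x in (c, n, o)),
    }

substances = {
    "TNT (C7H5N3O6)": {"C": 7, "H": 5, "N": 3, "O": 6},
    "RDX (C3H6N6O6)": {"C": 3, "H": 6, "N": 6, "O": 6},
    "starch ((C6H10O5)n)": {"C": 6, "H": 10, "O": 5},
}

for name, formula in substances.items():
    print(name, ratios(formula))
# TNT and starch have nearly the same C/O (1.17 vs 1.2), but their C:N:O
# proportions differ because starch contains no nitrogen at all.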


7. PELAN cannot identify UXO filler on the ground<br />

Any soil that contains both C and O would be declared by PELAN as “explosive”.<br />

Hence, PELAN cannot be used for UXO lying on the ground. The only way to analyze UXO with it is to place the UXO at least 3 ft above the ground. Since UXO is found on the ground or in the ground 99% of the time, PELAN can be used in only 1% of the cases.

8. How to Make an Explosive Non-Detector into an “Explosive Detector.”<br />

Tests can be engineered, however, so as to create an impression that a C/O ratio detector can differentiate between explosives and non-explosives.

In his recent article ‘One in a Million’, Freeman Dyson elaborately described two practices by which a methodology that is intrinsically incapable of getting correct information can produce the ‘right answer’: inadequate controls and biased sampling [New York Review of Books, March 25, 2004, p. 4].

An example of such biased sampling would be to compare explosives, which always contain both C and O, with a selected, heavily biased sample of non-explosives that contain only O (no C) or only C (no O), that is, by excluding over 4,000 common materials that contain both C and O. Then ANYTHING that contains both C and O must be explosive by definition.

EXAMPLE: Use the explosive TNT (contains both C and O, C/O ≈ 1) and use only the following non-explosives:

> Water, H2O: no C, C/O = 0
> Cement, CaSiO3: no C, C/O = 0
> Plaster of Paris, CaSO4: no C, C/O = 0
> Sand, SiO2: no C, C/O = 0
> Bleach, NaOCl + H2O: no C, C/O = 0
> Gasoline/Diesel, hydrocarbons CnHm: no O, C/O = ∞

This is exactly what was done in the competitive test on UXO detection between PELAN<br />

and MiniSenzor conducted by the U.S. Navy Explosive Ordnance Disposal Technology Center at Indian Head, Maryland, in the period December 2002 - January 2003. The tests were artificially tailored as if to circumvent the shortcomings of PELAN. This is summarized in Figure 9.

Moreover, the UXO and other containers examined in the test were placed on a table 3 ft<br />

above ground, a scenario that has nothing to do with the real world in which the UXO is<br />

either on the ground or in the ground.


Figure 9. ‘This was exactly the case during the Navy UXO detection tests at EODTECH DIV Indian Head, Jan. 2003.’ The C-N-O triangle diagram shows that only C-deficient substances (water, cement, Plaster of Paris, sand, bleach) or O-deficient substances (gasoline/diesel) were contrasted with the C- and O-rich explosives RDX and TNT, so that PELAN could pass as an explosive detector: ‘tailoring the tests by the client to fit the client.’ Some 4,000 common substances containing both C and O were excluded.

9. U.S. Navy Explosive Ordnance Disposal Technology Center. Testing Provided the<br />

Venue for Proving the Effectiveness of Our Technology Over the Competing<br />

PELAN.<br />

Prior to the competitive test runs between PELAN and MiniSenzor, the Navy’s EODTECH had given PELAN a two-week preparation period (May 13-24, 2002) to test 164 cases of UXO. Although EODTECH gave MiniSenzor only one hour of preparation time on January 9, 2003, MiniSenzor scored 100% in the yes/no explosive determination and 80% in the exact identification of the type of explosive and non-explosive tested. PELAN cannot

approach this.


Parameter / Function | PELAN (Pulsed Elemental Analysis with Neutrons) | MiniSenzor 3C3 | Advantage of 3C3 over PELAN

1. Can it decipher the chemical formulas of substances? | NO | YES | ∞
2. Can it definitely answer “explosive yes or no” without secondary inspection (opening)? | NO | YES | ∞
3. Can it detect Nitrogen quantitatively? (Can it tell how much Nitrogen is present?) | NO | YES | ∞
4. Can it detect the presence of Nitrogen qualitatively? (Does it have a chance of detecting any explosives?) | NO | YES | ∞
5. Range in materials (penetration depth) | 2-3 feet | 2-3 feet | NONE
6. Emitter/receiver timing synchronization* | 10% (necessary) | 100% (not needed) | 1000% (represents simplified electronics)
7. Time required for STOICHIOMETRIC detection of a car bomb from a distance of 6 inches / 1 ft / 2 ft | incapable / incapable / incapable | 15 sec / 1 min / 2 min | ∞ / ∞ / ∞
8a. Interrogation time to detect explosives at a distance of 6 inches | untested | 12 sec | unknown
8b. False alarm rate | untested | 1% | unknown
8c. Probability of detection (PoD) (likelihood of detecting explosives) | untested | 99.5% | unknown
9. High-precision (solid-state) gamma detector; degree of precision (resolution) of gamma energy | NO; ±6% | YES; ±0.2% | ∞; 3000%

Table 2. Performance of the Stoichiometer Model MiniSenzor 3C3 (CarBomb Finder 3C3) compared to the Pulsed Fast Neutron Device “PELAN.”


Summary<br />

MiniSenzor 3B2 was able to identify a variety of UXO fillers on a table top 75 cm above<br />

the ground. The minimum detectable TNT simulant on the table is 0.114 kg. The<br />

minimum detectable TNT simulant is 2.3 kg on the ground and 5 kg buried under 2.5 cm<br />

of soil. The results show that MiniSenzor 3AT2 is able to detect an AT landmine<br />

simulant buried below 2.5 cm of soil. Tests and development with Minisenzor 3B3 and<br />

3AT3 to detect smaller amounts of material, in shorter times and under deeper burial<br />

depths are ongoing.<br />

References<br />

B.C. Maglich et al, “Demo of Chemically-Specific Non-Intrusive Detection of Cocaine Simulant by Fast<br />

Neutron Atometry,” Proceedings of the 1999 <strong>International</strong> Technology Symposium, pp. 9-12 to 9-22, Mar<br />

8-10, 1999, Washington D.C., Office of National Drug Control Policy of the Executive Office of the<br />

President of the United States.<br />

E. Rhodes, C.E. Dickerman, C.W. Peters, “Associated-Particle Sealed-Tube Neutron Probe for<br />

Characterization of Materials,” SPIE vol. 2092, Substance Detection Systems, pp. 288-300, Oct. 5-8, 1993,

Innsbruck, Austria.


Feature-Level Sensor Fusion for a Demining Robot<br />

Svetlana Larionova, Lino Marques and A.T. de Almeida<br />

Institute of Systems and Robotics,<br />

University of Coimbra, Polo II,<br />

3030-290 Coimbra, Portugal<br />

{sveta, lino, adealmeida}@isr.uc.pt<br />

May 28, 2004<br />

Abstract<br />

This paper presents a practical implementation of landmine sensor fusion techniques in a pneumatic demining<br />

robot. The proposed method is based on a two-step strategy for the sensor fusion. This strategy allows separating<br />

two different classification tasks: Object/Background and Mine/Another Object. The first step - Regions-Of-<br />

Interest (ROIs) extraction and objects association - provides collections of ROIs extracted from several sensors.<br />

Then the second step performs classification of the extracted objects using specific landmine features. The<br />

ROIs extraction algorithm uses fewer parameters, is reliable in different environmental conditions and can be

implemented online.<br />

The proposed strategy allows the creation of a database of landmine signatures which can be used for classifier training, combining data from different experiments. Results of the experimental implementation and conclusions

are presented.<br />

1 Introduction<br />

The problem of humanitarian demining is of well-known importance nowadays. This task is very dangerous and has very strict requirements on landmine clearance. As the manual demining process causes many victims, the development of automatic demining devices becomes more important. ISR (Coimbra, Portugal) is developing a demining robot able to autonomously explore a specified area searching for antipersonnel landmines situated in it [8].

One of the main problems of such a robot is the lack of landmine sensing devices able to provide enough information to guarantee the required clearance rate with a low false alarm rate. Using the current sensing technologies, it is considered that a combination of several different sensors and sensor fusion techniques is required to provide

acceptable results.<br />

Sensors which are used for landmine detection are based on very different physical principles, which do not allow applying the deep fusion provided by signal-level or pixel-level fusion techniques. Thus, there are basically two types of sensor fusion which can be used for demining: feature-level and decision-level. Feature-level fusion is a deeper process that uses particular features from each sensor to make a general decision. Both decision-level and feature-level approaches are currently being researched by different groups, and the experiments have not yet led to a conclusion about which approach should be considered the best (for a comparison see, for instance, [7]). Under these conditions we consider feature-level fusion to be more promising for demining.

2 General structure<br />

Feature-level sensor fusion can be divided into the following steps: registration, filtering and mapping; ROIs extraction<br />

(segmentation); objects association; features extraction; classification.<br />

Two sets of classes are usually used in demining: Mine/Background or Mine/Clutter/Background. This strategy<br />

provides information about landmine existence without paying attention to other existing objects (considered as<br />

clutter and background). In contrast, the method proposed in this paper takes into account all the objects, which might improve the results of classification and sensor fusion in general.

Landmine detecting sensors do not provide conclusive information about presence or absence of mines. They<br />

only allow sensing a heterogeneity of some physical characteristic against a background. This heterogeneity can be

caused by a landmine, by other artificial objects (e.g., clutter), by natural objects (e.g., stones) or by changes in the<br />

environmental conditions (e.g., sun shining and humidity). In order to distinguish a landmine from other objects,


Figure 1: a - Demining robot equipped with two active IR sensors with bandwidths 6.5-14 µm and 8-14 µm and a Schiebel Atmid metal detector; b - path followed by the scanning device (X- and Y-direction movements over a regular grid of measurement points).

its particular fingerprint in spatially mapped sensor readings should be identified. Accordingly,

the following sensor fusion methodology is proposed: the process is divided into two steps, the first step separates<br />

all objects from background, then during the second step a feature-level fusion allows to separate landmines from<br />

other objects (using classes Mine and Another Object). In other words the first step is a ROIs extraction and the<br />

second step is a classification which uses features extracted from the ROIs. The criterion for selecting ROIs consists<br />

in separating all possible objects from the background.<br />

The advantages of this methodology are:<br />

Unification of the classification process for classes Mine/Another object. Classifiers always need<br />

experimental data for training and evaluation. But resources for the experiments (fields and mines) are usually<br />

unavailable in most research labs. Furthermore the size of the training set required by the classifier ([9]) is much<br />

higher than the size usually employed in the current experiments (about 20-40 samples and leave-one-out evaluation<br />

method [5]). The methodology proposed in this paper helps to mitigate the influence of current experimental<br />

conditions by providing a more unified way of using landmine signature data and allowing the integration of<br />

different experiments. Databases of experimental data are already freely available through the Internet for public<br />

usage [2, 1]. This data is used to create a database of landmine signatures stored as collections of ROIs obtained

from the ROIs extraction and objects association algorithms.<br />

Focusing the classifier on “landmine features”. The proposed methodology allows focusing a classifier only on the important features which distinguish a landmine from other objects. The classifier does not need to pay

attention to features which reflect the difference between background and any object (including landmines): mean<br />

value, minimum, maximum etc.<br />

Controlling the algorithm failures. Requirements for humanitarian demining are very strict and it is not certain that an automatic system will achieve the required levels, so it is important to consider the possibility of misclassification. Selection of ROIs provides visual information that can help to recognize failures, leaving to the operator the responsibility for the final decision about checking the identified region by other means, for instance manually.


3 Separation of objects against the background (Step 1)<br />

3.1 Registration, filtering and mapping<br />

These tasks provide the necessary preprocessing of the sensor data for the ROI extraction algorithm. The data

is gathered by a scanning device (our demining robot) that performs a Cartesian movement similar to the one<br />

presented in Figure 1(b). Due to difficulties in positioning our pneumatic scanning platform, the acquired data is<br />

usually non regular. Thus the sensed data should first be mapped into a regular grid-map in order to build an<br />

image where ROIs might be found.<br />

In the case of the demining robot represented in Figure 1(a) this task can be simplified by considering the following<br />

assumption: X coordinate does not change during movements in Y-direction, Y coordinate does not change during<br />

movements in the X-direction. Then a 1D interpolation, instead of a 2D one, can be done. This algorithm does not require

significant processing time, thus a grid-map for each sensor can always be updated online. Following the same<br />

principle, 1D filtering is used instead of 2D filtering and a grid-map is formed from the already filtered data.<br />
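For illustration only, the sketch below (not the authors' code) resamples one irregularly sampled scan line onto a regular grid with 1D linear interpolation using NumPy; the function name, grid spacing and test data are assumptions.

import numpy as np

# Sketch of per-line 1D interpolation onto a regular grid (assumed names and spacing).
# x_samples: irregular X positions measured along one scan line (metres)
# values:    sensor readings at those positions
def map_scan_line_to_grid(x_samples, values, x_min=0.0, x_max=1.0, step=0.01):
    x_samples = np.asarray(x_samples, dtype=float)
    values = np.asarray(values, dtype=float)
    order = np.argsort(x_samples)              # np.interp needs increasing x
    grid_x = np.arange(x_min, x_max + step, step)
    grid_values = np.interp(grid_x, x_samples[order], values[order])
    return grid_x, grid_values

# Example: a noisy, irregularly spaced line is resampled every 1 cm.
x = np.sort(np.random.uniform(0.0, 1.0, 50))
v = np.sin(6 * x) + 0.05 * np.random.randn(50)
gx, gv = map_scan_line_to_grid(x, v)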




Figure 2: ROIs extraction process: a - sample grid-map, b - encountering of interesting points (two iterative filters<br />

with Q = 40 and Q = 0.004), c - Segmented map, d - extrema searching in the Segmented map, e - selected ROI<br />

3.2 ROIs extraction<br />

3.2.1 Overview<br />

The task of ROI extraction is addressed by several works on feature-level sensor fusion. The main idea of these algorithms is to use computer vision techniques for the processing of the grid-mapped sensor readings (which can here be referred to as grey-level images). Techniques usually used for this task are: thresholding ([5]),

Tophat filter and Hough transform ([10],[5]), mathematical morphology ([6]), texture segmentation and Gabor<br />

transform ([3]).<br />

The main problems of the considered approaches are:<br />

• A region of a grid-map for processing is required before the processing can be started, in some cases this<br />

region is used for the selection of parameters which does not allow performing the process online<br />

• Tuning of the required parameters is made heuristically by experts and is hard to implement in automatic systems (the result depends strongly on the selection of thresholds, kernel sizes, etc.)
• The computer vision based algorithms mentioned above search for objects with certain shapes (in particular, circles); this requires images obtained under well controlled conditions, which is not always possible

3.2.2 Proposed Approach<br />

The proposed algorithm intends to address the problems pointed out above. The main idea of the algorithm is: ROIs can be found in places where heterogeneities in the sensor readings are situated. To make the explanation of the algorithm clearer we consider a ROI extraction from a sample grid-map presented in Figure 2(a).

The proposed approach has the following steps:<br />

• Detecting interesting points. Heterogeneity in sensor readings can be detected by analysing changes in their values. For this purpose two iterative filters with different parameters, based on the principles and equations of the Kalman filter, are used. By selecting different values for the process noise covariance Q it is possible to obtain filters with different behaviours: the larger Q is, the more closely and quickly the filter follows the signal (Figure 2(b)). An interesting point is detected if the difference between the slow filter and the fast filter exceeds a threshold (which is equal to the Q of the fast filter); a minimal sketch of this two-filter detection is given after this list.

• Forming a 2D segmented map. The segmented map is a grid-map which represents interesting points spatially. It is formed as follows:
1. An initial integer current value is chosen (it can be any number, for example 100)
2. If Value_fast_filter − Value_slow_filter > threshold, the current value is incremented; if Value_slow_filter − Value_fast_filter > threshold, the current value is decremented
3. The current value is mapped into the segmented map according to the interesting point coordinates
This process forms a map with homogeneous segments that differ from each other by an integer value (Figure 2(c)).

• 1D maximum/minimum searching in the segmented map. An important property of the segmented map is:<br />

the centre of the ROI along X coordinate corresponds to the local maximum/minimum on the segmented<br />

map along X coordinate. A maximum/minimum searching is performed to detect all local extrema for each


value of Y coordinate (Figure 2(d)). The nature of the sensor determines which type of extremum should be<br />

used: for metal detector only maxima are appropriate, while for IR sensor both maxima and minima can be<br />

considered.<br />

• Region growing in the segmented map. Each extremum encountered on the previous step starts a region<br />

growing process:<br />

1. Selecting a segment to which the extremum point belongs - start segment<br />

2. Detecting all segments adjacent to the start segment<br />

3. For any detected segment, its HW ratio and size (see Features extraction) are tested according to the rule: the HW ratio of the ROI cannot exceed 4, and the size cannot be bigger than a certain threshold. If the segment passes the test it is joined to the start segment
4. If no new segment has been joined to the start segment then the region growing is finished; otherwise repeat from step 2.

• Association with the initial map. The final ROI is a rectangle which contains all the selected segments.<br />

This rectangle is brought in correspondence with the initial grid-map. The final result of ROI extraction is<br />

presented in Figure 2(e).<br />
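The sketch below illustrates the two-filter idea and the segmented-map update described above under simplifying assumptions: a scalar, first-order Kalman-style filter stands in for the paper's iterative filters, and the default threshold follows the paper's choice of the fast filter's Q. It is a minimal illustration, not the authors' implementation.

import numpy as np

class ScalarKalman:
    """Very small 1-D Kalman-style filter; a larger Q tracks the signal faster."""
    def __init__(self, q, r=1.0, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z):
        self.p += self.q                       # predict
        k = self.p / (self.p + self.r)         # gain
        self.x += k * (z - self.x)             # correct
        self.p *= (1.0 - k)
        return self.x

def interesting_points(signal, q_fast=40.0, q_slow=0.004, threshold=None):
    """Flag samples where fast and slow filters disagree by more than a threshold
    (default: the Q of the fast filter, following the paper) and build one row of
    the segmented map by incrementing/decrementing an arbitrary initial value."""
    threshold = q_fast if threshold is None else threshold
    fast, slow = ScalarKalman(q_fast), ScalarKalman(q_slow)
    flags, segments, seg_value = [], [], 100   # 100: arbitrary initial segment value
    for z in signal:
        xf, xs = fast.update(z), slow.update(z)
        if xf - xs > threshold:
            seg_value += 1
        elif xs - xf > threshold:
            seg_value -= 1
        flags.append(abs(xf - xs) > threshold)
        segments.append(seg_value)
    return np.array(flags), np.array(segments)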

3.3 Objects association<br />

The ROIs extraction algorithm selects for each sensor a set of regions which contain some objects. The ROIs from<br />

different sensors which represent the same object should be combined together for the further feature extraction and classification (in other words, each ROI should be associated with some object). A one-to-one or a one-to-many association can be made. One-to-one association provides a certain result but can cause problems due to the position errors in the sensor data. On the other hand, one-to-many association can provide uncertain results. Thus the proposed algorithm for the objects association is as follows: ROIs are associated together if the distance between them is less than 200 mm, one ROI can be associated with different objects, and the value of the distance is included in the feature set used for classification. This algorithm passes to the classifier the burden of handling possible mistakes in the objects association caused by uncertainty.
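A minimal sketch of the association rule just described (not the authors' code); the ROI centre coordinates used below are made up for illustration.

from math import hypot

def associate_rois(rois_a, rois_b, max_dist_mm=200.0):
    """rois_a, rois_b: lists of (x_mm, y_mm) ROI centres from two sensors.
    ROIs closer than max_dist_mm form one candidate object; one ROI may appear
    in several objects, and the centre distance is kept as an extra feature."""
    objects = []
    for ca in rois_a:
        for cb in rois_b:
            d = hypot(ca[0] - cb[0], ca[1] - cb[1])
            if d < max_dist_mm:
                objects.append({"roi_a": ca, "roi_b": cb, "distance_mm": d})
    return objects

# Example: two metal-detector ROIs and two IR ROIs.
print(associate_rois([(100, 100), (900, 400)], [(180, 150), (905, 420)]))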

4 Detecting landmines among all objects (Step 2)<br />

4.1 Features extraction<br />

Selection of the features which reflect the difference between landmines and other objects is a complex task because<br />

the landmine detection sensors do not provide specific information about landmines, thus, statistical characteristics<br />

are usually used as features ([11],[5],[6],[4]). The proposed methodology does not consider features which depend on<br />

the current conditions of the experiment: mean value, minimum value, maximum value, etc. Information provided by these features is not significant at this step and would only confuse the classifier.

For each collection of ROIs obtained by the objects association algorithm a set of features is calculated. Two<br />

types of features are used: ROI features (calculated for each ROI) and collection features (reflect the relationships<br />

between the ROIs).<br />

ROI features: Size S = height × width; HW ratio HW = height/width if height > width, and HW = width/height otherwise; Standard deviation σ = √µ2; Skewness γ1 = µ3 / µ2^(3/2); Kurtosis β2 = µ4 / µ2^2; Contrast C = max(C_ij), where C_ij is the local contrast at the point (i, j).

Collection features: Distance between the centres of the ROIs; Correlation.

4.2 Classification

The lack of experimental data does not yet allow an investigation into which classifier provides better results for demining. There are many types of classifiers used in the current works about feature-level and decision-level sensor fusion for demining: rule-based, Bayesian, Dempster-Shafer, fuzzy probabilities, neural networks, etc. For this work a decision-tree classifier was implemented: it is simple to implement and makes it easy to analyse the system behaviour in different situations. The classes used in this step are: Landmine, Another object and No object. The existence of the No object class reflects possible mistakes in the objects association.
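Purely as an illustration of this step, the sketch below trains a decision tree on the feature set named above using scikit-learn (an assumption; the paper's own implementation is not reproduced) with synthetic placeholder values.

from sklearn.tree import DecisionTreeClassifier

# Feature order follows the ROI and collection features listed above.
FEATURES = ["size", "hw_ratio", "std", "skewness", "kurtosis", "contrast",
            "centre_distance", "correlation"]

X_train = [
    [120.0, 1.1, 4.2, 0.1, 2.9, 8.0,  30.0, 0.80],   # landmine-like collection
    [400.0, 3.5, 9.0, 1.4, 6.0, 3.0, 150.0, 0.20],   # clutter-like collection
    [ 15.0, 1.0, 0.5, 0.0, 3.0, 0.5, 190.0, 0.05],   # probably a false association
]
y_train = ["Landmine", "Another object", "No object"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict([[110.0, 1.2, 4.0, 0.2, 3.1, 7.5, 40.0, 0.75]]))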


Number of test:      1          2                    3            4
Training set:        A1 top     A1 top + A5 bottom   A1 top + A5  A1 top + A5 + A3 top
Evaluation set:      A1 bottom  A1 bottom            A1 bottom    A1 bottom
N. of landmines:     4          4                    4            4
Detected landmines:  3          1                    2            3
False alarms:        6          7                    4            4

Table 1: Classification results


Figure 3: ROIs extraction results obtained from MsMs experimental data (subplot A1): pulsed metal detector (a),<br />

continuous metal detector (b) and IR camera (c). Failure tests (d,e). Results of applying two different thresholds<br />

to the pulsed metal detector data (f ). ROIs extraction result obtained from demining robot IR sensor (g)<br />

5 Algorithms evaluation<br />

A project maintained by Joint Research Centre in Ispra provides a database (MsMs database [1]) of data collected<br />

from different sensors over different test lanes populated with surrogate mines and clutter.<br />

Evaluation of the proposed ROI extraction algorithm was carried out using data from the MsMs database. The results obtained for the pulsed metal detector, continuous metal detector and IR camera are presented in Figure 3.

To show the reliability of the proposed ROI extraction algorithm two failure cases are simulated using a sample<br />

segment of data: failure of positioning system (Figure 3(e)) and changes in environmental conditions (Figure 3(d)).<br />

A thresholding technique was applied to the database data obtained from the pulsed metal detector to compare<br />

it with the proposed algorithm (Figure 3(f)). It can be noticed that two objects selected by white rectangles are<br />

never detected together (they require different thresholds), whereas the proposed algorithm detects both of them.<br />

A simple experiment was made to show the performance of the proposed ROI extraction algorithm when working with the demining robot. The data from one IR sensor were processed; one cold and one hot object were used to simulate the presence of possible landmines. The result is presented in Figure 3(g).
The performance of the whole system was tested using experimental data from the MsMs database (subplots A1, A3

and A5). Data obtained from pulsed metal detector and IR camera were processed to analyse the results of sensor<br />

fusion. The bottom half of the subplot A1 is used for evaluation. Other data are processed by ROIs extraction and<br />

objects association algorithms; the created collections of ROIs are used to train a decision-tree classifier. Different<br />

sizes of the training sets are tested. The results of the classification are presented in Table 1.<br />



6 Conclusions<br />

A two-step sensor fusion method for a demining robot was proposed and implemented. The developed algorithms are included in the control software of the demining robot, allowing its automated functioning. A database containing the collections of ROIs is implemented using MySQL. A special ROIs extraction algorithm was implemented. Its main

features are:<br />

• Automated online operation. The nature of the algorithm allows online implementation and selection of a<br />

ROI right after an object being fully scanned by the landmine sensing devices<br />

• Reduced number of parameters<br />

• The criterion for separating objects from the background is general, so the algorithm does not require special<br />

shapes for the objects (e.g., circular)<br />

• Reliability tested with experimental and public database data<br />

The evaluation of the algorithm over database data showed promising results (e.g., in the images of Figure 3 all objects were correctly identified for the metal detector data; processing of the IR images still requires improvements). Experiments carried out with the demining robot show that the algorithm is able to provide reliable results in real conditions. The results of classification show the nature of the decision-tree classifier: during test 1 it is overtrained because the training and evaluation data are from the same subplot; test 2 then shows that the training set is too small; the results become more stable as a bigger training set is used (tests 3 and 4). The passive IR data used for the tests provided only a little useful information for the classifier. In contrast, it seems that GPR data could significantly improve the classification process; their integration into the algorithms is part of future work.

Acknowledgement<br />

This research was supported by the FCT (Portuguese Foundation for Science and Technology) under the project<br />

DEMINE - POSI/36498/SRI/2000.<br />

References<br />

[1] Joint multi-sensor mine signature database (joint research centre - ispra). http://demining.jrc.it/msms/.<br />

[2] Uxocoe database. http://www.uxocoe.brtrc.com/default.asp.<br />

[3] G. A. Clark, J. E. Hernandez, N. K. DelGrande, R. J. Sherwood, S. Lu, P. C. Schaich, and P. F. Durbin.<br />

Computer vision for locating buried objects. In Asilomar Conference on Signals, Systems, and Computers,<br />

pages 1235–1239, 1991.<br />

[4] G. A. Clark, S. K. Sengupta, M.R. Buhl, R. J. Sherwood, P. C. Schaich, N. Bull, R. J. Kane, M. J. Barth,<br />

D. J. Fields, and M. R. Carter. Detecting buried objects by fusing dual-band infrared images. In Asilomar<br />

Conference on Signals, Systems, and Computers, volume 1, pages 135–143, 1993.<br />

[5] F. Cremer, W. Jong, K. Schutte, A. G. Yarovoy, and V. Kovalenko. Feature level fusion of polarimetric infrared<br />

and gpr data for landmine detection. In Int. Conf. on Requirements and Technologies for the Detection, Removal<br />

and Neutralization of Landmines and UXO, 2003.<br />

[6] H. Frigui, P. Gader, and J. M. Keller. Fuzzy clustering for land mine detection. In NAFIPS, 1998.<br />

[7] A. Gunatilaka and B. Baertlein. Comparison of pre-detection and post-detection fusion for mine detection. In<br />

Detection and Remediation Technologies for Mines and Minelike Targets - SPIE, volume 3710, pages 1212–1223,<br />

1999.<br />

[8] L. Marques, M. Rachkov, and A.T. de Almeida. Mobile pneumatic robot for demining. In IEEE Int. Conf. on

Robotics and Automation (ICRA 2002), pages 3508–3513, 2002.<br />

[9] D. W. McMichael. Data fusion for vehicle-borne mine detection. In Int. Conf. on the Detection of Abandoned

Land Mines, pages 167–171, 1998.<br />

[10] W. Messelink, K. Schutte, A. Vossepoel, F. Cremer, J. Schavemaker, and E. Breejen. Feature-based detection<br />

of landmines in infrared images. In Detection and Remediation Technologies for Mines and Minelike Targets -<br />

SPIE, volume 4742, pages 108–119, 2002.<br />

[11] M. Roughan and D. W. McMichael. A comparison of methods of data fusion for land-mine detection. In Int.<br />

<strong>Workshop</strong> on Image Analysis and Information Fusion, 1997.


European Project of Remote Detection: SMART in a nutshell

Yann Yvinec
Signal and Image Centre - Royal Military Academy,
Av. de la Renaissance 30, Brussels, Belgium,
yvinec@elec.rma.ac.be

Abstract

This paper briefly describes the principles and ideas behind the project SMART, a European project intended

to help Mine Action Centres (MAC) or Mine Action Authorities (MAA) in their task of area reduction<br />

by providing a GIS-based environment with specific tools to ease the interpretation work of the<br />

operator. Using multi-spectral optical data as well as SAR data obtained during a flight campaign in<br />

Croatia and satellite data from before the conflict, the tools will help the land-cover classification and the<br />

detection of indicators of presence or absence of mine-suspected areas. The results of these tools will be<br />

given to a data fusion module that will summarise all data and contextual information available to facilitate<br />

the creation of maps of indicators from which maps of danger can be derived.

A more detailed and technical description of SMART has been given – and some results described – in<br />

[13]. This paper focuses on the principles of the project and shortly presents some new results.<br />

1 Area reduction: a key process<br />

Area reduction has been recognized as a mine action activity where reduction in time and resources could<br />

help a lot. Long-term empirical data from CROMAC, the Croatian Mine Action Center, show that we can<br />

estimate that around 10% to 15% of the suspected area in Croatia is actually mined. The minefield records<br />

alone do not contain enough information for the proper allocation of limited de-mining resources to really mined areas. Their completeness and reliability are not high enough. Decision makers need additional

information. SMART is intended to provide some of this additional information that would help in two<br />

ways: it can reduce the suspected area on some places and reinforce the suspicion of others. The goal of<br />

the SMART project is to provide a GIS-based system – the SMART system – augmented with dedicated<br />

tools and methods designed to use multispectral and radar data in order to assist the human analyst in the<br />

interpretation of the mined scene during the area reduction process.<br />

The usefulness of such image processing tools to help photo-interpretation has already been studied:<br />

the possibility to process automatically a large amount of data and help a visual analysis is among their<br />

advantages [6] [7].<br />

The use of SMART includes a short field survey in order to collect knowledge about the site, a flight<br />

campaign to record the data – multispectral with the Daedalus sensor and SAR with the E-SAR –, and the<br />

use of the SMART system by an operator to detect indicators of presence or absence of mine-suspected<br />

areas. With the help of a data fusion module based on belief functions [1][9][10][11][12] the operator will<br />

prepare thematic maps that will synthesise all the knowledge gathered with these indicators. These maps<br />

of indicators can be transformed into danger maps showing how dangerous an area may be according to<br />

the location of known indicators. These maps are designed to help the area reduction process as described<br />

in the present paper.


Figure 1: A: part of the speckle-reduced polarimetric L-band image (R:HH, G:HV, B:VV). B: "detection image" for abandoned land (in bright). C: SAR classification results (classes: abandoned land; fields in use without vegetation; fields in use with vegetation; residential areas; roads; pastures; forests, hedges or shrubs; water; radar shadows).

2 Reducing the suspected area with SMART<br />

The use of SMART can help technical surveys and reduce the clearing of non-risky or non-hazardous areas.<br />

Experience shows that, in parts of the country that are considered to be suspected, there are areas<br />

actually in agricultural use. Some farmers cannot wait for the official reduction or clearing of their fields<br />

and take the responsibility to clear them by themselves or have them cleared unofficially – leading sometimes<br />

to incomplete clearing or casualties. These behaviours lead to discrepancies between reality and<br />

CROMAC’s records of suspected areas. By using multi-spectral and SAR data and processing them to<br />

provide a classification of the areas, an operator of SMART can quickly have an objective point of view<br />

of the real land cover and land use of a large, theoretically-suspected area. Once the use of SMART has<br />

updated the MAC’s records and identified cultivated fields inside suspected areas, a short field survey –<br />

shorter than what would have been needed without SMART – can be organised to determine if these fields<br />

can be officially declared reduced. Figure 1 presents a result of classification performed on SAR data. See<br />

[4] for more details including a quality assessment of this classification by confusion matrix and [13] for<br />

results on Daedalus classification.<br />



Figure 2: Left: a 400m x 400m area in Croatia as seen on optical data. Right: Detection of hedges (green)<br />

and trees (red) from SAR data. Images courtesy of DLR<br />

Using satellite data from before the war can help determine if a field that is abandoned now was already abandoned before the war. If this is the case the neglected state of the field cannot be attributed to mine contamination, although it does not mean that the field is not mined.

3 Suspicion reinforcement<br />

The use of SMART can also help to detect abandoned fields in suspected areas, thus reinforcing the suspicion<br />

about these fields. For instance before the contamination by landmines, agricultural fields and pastures<br />

in Croatia were enclosed by hedges but rarely with trees. If some fields are abandoned and not used, their<br />

borders change; hedges become mixed with smaller trees; bushes grow inside the field borders; low bushes<br />

become small trees. The trees and bushes inside the field or at its borders are significant indicators that the<br />

field is abandoned.<br />

Detecting abandoned areas can be done by classification of multi-spectral data. By making it possible to distinguish between trees and bushes, SAR also provides very valuable information for this analysis. Since hedges were often used as hiding places and may therefore be mined, they are important indicators of mine-suspected areas. See Figure 2 for an example of the use of SAR data to automatically distinguish between trees and hedges.

If a field is abandoned now, it may be because the soil is simply not suited for agriculture. Using<br />

satellite data from before the war can help to determine if the field was cultivated then. If it was, then it<br />

reinforces the suspicion.<br />

Multi-spectral data can also be used to detect locations where creating a minefield would have made<br />

sense: river shores, forest borders, crossroads, bridges and any other places that are better located on images<br />

than on old and obsolete maps. See also [13] for results on change detection focusing on roads and paths.<br />

As a general rule all information gained from the use of SMART must be appraised by the operator.<br />

4 From indicator maps to danger maps<br />

The indicator maps show the locations of the various indicators of presence or absence of mine-suspected<br />

areas that have been gathered. They synthesise the knowledge that has been accumulated during the area

reduction process. The danger maps show how likely it is that a location may be mined.<br />

To create danger maps from indicator maps, danger zones are defined near indicators based on expertise on the mine situation. For instance, borders of forests in Croatia are suspect, so danger zones around forest borders are drawn. The size of these zones is defined from mine action expertise. See Figure 3 for an example of what a danger map will look like.

Figure 3: Discrete ‘danger map’ of Glinska Poljana (preliminary). Red: danger (buffers). Orange: danger (areas no longer in use). Green: no danger (residential areas, cultivated areas, ...). Other: no status (forests). Source: ULB.
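For illustration only, the sketch below (not part of SMART) derives a danger zone by buffering an indicator geometry with an expert-chosen distance; it assumes the shapely package and uses made-up coordinates.

from shapely.geometry import LineString, Polygon

# A danger zone is obtained by buffering an indicator (here a forest border
# polyline) by a distance chosen from mine-action expertise.
forest_border = LineString([(0, 0), (120, 10), (250, 40)])       # made-up coordinates (m)
cultivated_field = Polygon([(60, -80), (200, -80), (200, -20), (60, -20)])

danger_buffer_m = 50.0                                            # expert-chosen distance
danger_zone = forest_border.buffer(danger_buffer_m)              # red "danger" area
no_danger = cultivated_field.difference(danger_zone)             # green "no danger" area

print(round(danger_zone.area), round(no_danger.area))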

5 Limitations<br />

The general knowledge used in SMART is strongly context-dependent. It has currently been derived from the study of three different test sites in Croatia chosen to be representative of the country. For another context, a new field campaign is needed in order to derive and implement new general rules. Before using SMART the list of indicators must be re-evaluated and adapted. For instance, it has been noted that the assumption that a cultivated field is not mined, although quite valid in Croatia, may not apply in other countries such as South Africa or Colombia. It must also be checked whether the indicators can be identified in the data and whether the new list is sufficient to reduce the suspected areas.

6 Conclusion and acknowledgment<br />

Despite these expected limitations the ideas presented here make us confident that SMART has the technical<br />

potential to be a working solution for an airborne general survey applied to area reduction.<br />

This work is performed in the scope of the European project SMART: Space and Airborne Mined<br />

Area Reduction Tools (IST-2000-25044). It is co-funded by the European Commission. The project is<br />

coordinated by TRASYS (BE) and Renaissance/RMA (BE). The project partners are CROMAC (HR),<br />

DLR (DE), ENST (FR), ixl (DE), Renaissance/RMA (BE), RST (DE), TRASYS (BE), ULB (BE) and<br />

Zeppelin (DE). For more information see:<br />

http://www.smart.rma.ac.be/<br />

In addition to the author, points of contact for SMART include:<br />

at CROMAC Milan Bajic (milan.bajic@zg.htnet.hr) as representative of the end-users<br />

at DLR Helmut Süß (Helmut.Suess@dlr.de) and Martin Keller (Martin.Keller@dlr.de) for the data

collection, pre-processing and SAR processing<br />

at ENST Isabelle Bloch (isabelle.bloch@enst.fr) for data fusion<br />



at RMA Marc Acheroy (Marc.Acheroy@elec.rma.ac.be) for the technical management, Dirk Borghys

(Dirk.Borghys@elec.rma.ac.be) for SAR processing, and Nada Milisavljević (nada@elec.rma.ac.be)<br />

for data fusion<br />

at TRASYS Jacques Willekens (Jacques.Willekens@trasys.be) for project management and Olivier<br />

Damanet (olivier.damanet@trasys.be) for the integration<br />

at ULB Eléonore Wolff (ewolff@ulb.ac.be), Sabine Vanhuysse (svhuysse@ulb.ac.be) and Florence<br />

Landsberg (Florence.Landsberg@ulb.ac.be) for field surveys, classification, change detection and<br />

danger maps<br />

References<br />

[1] I. Bloch, “Some Aspects of Dempster-Shafer Evidence Theory for Classification of Multi-Modality<br />

Medical Images Taking Partial Volume Effect into Account,” Pattern Recognition Letters, No 8, vol<br />

17, pages 905-919, 1996.<br />

[2] D. Borghys and V. Lacroix and C. M. Acheroy, “A Multi-Variate Contour Detector for High-Resolution<br />

Polarimetric SAR Images,” Proc. ICPR 2000, Barcelona, vol 3, pages 650-655, Sep 2000.<br />

[3] D. Borghys and V. Lacroix and C. Perneel, “Edge and Line Detection in Polarimetric SAR Images,”<br />

Proc. ICPR 2002, Quebec, vol 2, pages 921-924, Aug 2002.<br />

[4] D. Borghys, Y. Yvinec, C. Perneel, A. Pizurica and W. Philips, “Hierarchical Supervised Classification<br />

of Multi-Channel SAR Images,” Proc. 3rd Int. <strong>Workshop</strong> on Pattern Recognition in Remote Sensing<br />

(PRRS’04), Kingston-upon-Thames, Aug. 2004.<br />

[5] S. Cloude and E. Pottier, “An Entropy Based Classification Scheme for Land Applications of Polarimetric<br />

SAR,” IEEE-GRS, Vol.35, No.1, January 1997.<br />

[6] P. Druyts, Y. Yvinec, M. Acheroy, “Usefulness of semi-automatic tools for airborne minefield detection,<br />

” Clawar 98 -First <strong>International</strong> Symposium, pages 241-248, Brussels, 1998.<br />

[7] P. Druyts, Y. Yvinec, M. Acheroy, “Image processing tools for semi-automatic minefield detection,”<br />

ORS99, Second <strong>International</strong> Symposium on Operationalization of Remote Sensing, Enschede, Netherlands,<br />

August 1999.<br />

[8] V. Lacroix and M. Acheroy, “Feature extraction using the constrained gradient,” ISPRS Journal of<br />

Photogrammetry & Remote Sensing, No 2, vol 53, pages 85-94, 1998.<br />

[9] S. Mascle and I. Bloch and D. Vidal-Madjar, “Application of Dempster-Shafer Evidence Theory to<br />

Unsupervised Classification in Multisource Remote Sensing,” IEEE Transactions on Geoscience and<br />

Remote Sensing, No 4, vol 35 , pages 1018-1031, 1997.<br />

[10] N. Milisavljevic and I. Bloch, “Fusion of Anti-Personnel Mine Detection Sensors in Terms of Belief<br />

Functions, a Two-Level Approach”, IEEE-SMC, 2003.<br />

[11] G. Shafer, “A Mathematical Theory of Evidence”, Princeton University Press, 1976.<br />

[12] P. Smets, “The Combination of Evidence in the Transferable Belief Model,” IEEE-PAMI, No 5, vol<br />

12, pages 447-458, 1990.<br />

[13] Y. Yvinec, D. Borghys, M. Acheroy, H. Süß, M. Keller, M. Bajic, E. Wolff, S. Vanhuysse, I. Bloch,<br />

Yong Y., and O. Damanet, “SMART: Space and Airborne Mined Area Reduction Tools - Presentation,”<br />

EUDEM2-SCOT-2003 <strong>International</strong> Conference on Requirements and Technologies for the Detection,<br />

Removal and Neutralization of Landmines and UXO, Brussels, Belgium, September 2003.<br />



Metal Detector Modeling<br />

Pascal Druyts<br />

Royal Military Academy<br />

Brussels - Belgium<br />

Pascal.Druyts@elec.rma.ac.be<br />

Risk assessment<br />

• Risk assessment is crucial in HD<br />

– permanent task<br />

– take into account many factors:<br />

• background information (kind of mines, etc.)<br />

• detector capability<br />

• operation procedure<br />

• accidents, QA,...<br />

Detection capability of MD<br />

• Good knowledge mandatory for risk assessment<br />

• Detection capability includes:<br />

– a) Max depth at which a mine may be found<br />

– b) Max depth at which mines are reliably found. PD<br />

close to 100%<br />

• The two criteria are not equivalent

– detector ranking may change with criteria (a or b)<br />

– Deminers are mainly concerned by (b)


Evaluation of MD detection<br />

capability<br />

• On the field:<br />

– put most difficult mine expected at required clearing<br />

depth<br />

– tune sensitivity to ensure a clear alarm<br />

• Controlled tests<br />

– count number of detected mines<br />

• binary test (detection/no detection)<br />

• practical constraints -> limited number of mines in limited configurations

– More advanced measurement<br />

• detection margin<br />

• curve detection depth/metal content<br />

Discussion<br />

• Deminers claim near 100% detection<br />

– clear alarm on calibration mine<br />

– conditions may change (soil answer, mine oxidation, …) but if the margin is sufficient, mines are still detectable
• Claim seems reasonable but not formally proven. A definition of the ‘clear alarm’ is lacking.

Discussion<br />

• Accidents are recorded in database<br />

– the accident rate is low:<br />

• gives a high lower bound on PD<br />

• OP may also be responsible for accidents:<br />

– area not scanned (marking)<br />

– detector not used properly: bad calibration …<br />

– Supports the deminer claim but:
• no formal proof
• not proactive:
– OPs are evolving. If an OP is bad, too much time passes before feedback from

accident records


Discussion
• Quality assurance

– not efficient to guarantee PD<br />

• mine density very low<br />

• area to re-check to guarantee near 100% detection<br />

prohibitively large
• usually the same sensor is used; this only proves that no more detectable mines remain

– can efficiently check full coverage:<br />

• no detectable metal may be left<br />

• density of metal much higher than mines<br />

Discussion<br />

• Controlled tests:<br />

– show unsatisfactory detection rates (much lower than 100%) in difficult soils, even above 10 cm
– seem to show the deminer claim (close to 100% detection) is false
• Seen by deminers as casting doubt on their work -> criticisms:

– usually not performed by experienced deminers<br />

– not enough time to get used to all detectors<br />

• Seen by scientists as a proof that 100% detection is:

– not reached by MD<br />

– should not be requested for new technologies (GPR,..)<br />

– This is a WRONG interpretation<br />

Bad interpretation of tests<br />

• tests may not be fully realistic; nevertheless they provide very useful information:

– 50% detection depth<br />

• they DO show that in some circumstances detectors<br />

will not find all mines to an acceptable depth<br />

• they DO NOT show that mines are missed during real<br />

demining:<br />

– calibration would show no clear alarm on reference mine<br />

– other demining techniques would be used (prodding, dogs, …)


Can we prove near 100% detection?<br />

[Gasser]<br />

• Using crude PD (detection/no detection)<br />

– NO<br />

– prohibitive number of cases needed<br />

• Demining is not the only safety-critical<br />

application<br />

– nuclear plants, airplanes, …
– reliability CAN be proved but is NEVER done by counting the number of occasional failures
– other approaches, such as the detection margin, are needed

How to prove near 100% PD<br />

• One measurement:<br />

– detection margin bigger than the expected variation (soil, mine, …)
– difficult to reliably predict variability
– security coefficient -> very high margin needed
• Measure the margin in a representative number of configurations:

– residual variability lower<br />

– lower margin needed<br />

Modeling to prove near 100% PD<br />

• Model to predict detection margin as function of<br />

relevant parameters<br />

• mine depth and orientation<br />

• scan path<br />

• soil relief<br />

• mine and soil EM properties (conductivity, electrical<br />

permittivity)<br />

• measurements to estimate error on model<br />

• measurement to estimate distribution of<br />

parameters


Modeling to prove near 100% PD<br />

• Statistical tools to compute confidence intervals on<br />

estimations<br />

• Monte Carlo simulation to estimate PD
– model must be fast enough
– possible to use a much larger number of samples than in real tests (a minimal sketch follows this slide)
• Check the conclusions of the model on:
– test scenarios where the model predicts 50% detection (50% -> reasonable number of tests)

– test a reasonable sample number a bit above the 100% depth limit<br />

(objective stat. relevance)<br />

– test a reasonable sample number a bit below the 0% depth limit<br />

(objective stat. relevance)<br />
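A minimal sketch of the Monte Carlo idea on this slide, using an invented detection-margin model and invented parameter distributions purely for illustration; a real study would use the validated detector model and measured distributions.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Invented parameter distributions (illustration only).
depth_cm    = rng.uniform(0.0, 15.0, N)             # mine burial depth
soil_factor = rng.normal(1.0, 0.15, N)              # soil response variability
metal_mg    = rng.lognormal(np.log(50.0), 0.4, N)   # mine metal content

# Invented margin model: signal decays with depth, scales with metal content.
margin_db = 20.0 * np.log10(metal_mg / 10.0) - 1.2 * depth_cm * soil_factor

pd = np.mean(margin_db > 0.0)                        # detection when margin is positive
ci = 1.96 * np.sqrt(pd * (1 - pd) / N)               # ~95% confidence interval
print(f"Estimated PD = {pd:.4f} +/- {ci:.4f}")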

Modeling allows to<br />

• Define objectively what is a reasonable detection<br />

margin obtained from:<br />

– a realistic parameter distribution obtained by<br />

measurement<br />

– an accurate prediction of the precise effect on detection<br />

margin (model)<br />

• With a reasonable number of measurements<br />

– measurements are more informative than detection/no<br />

detection<br />

• Define objectively constraints on scanning path<br />

Conclusion

• A PD close to 100% is plausible in demining
• It was never formally proven
• This can be done using a model and well-chosen measurements
• The model must be:
  – accurate
  – simple
• Development is ongoing and promising


Extraction of Landmine Signature<br />

from Ground Penetrating Radar Signal<br />

Idesbald van den Bosch<br />

Microwaves UCL<br />

3, Place du Levant<br />

1348 Louvain-la-Neuve<br />

Belgium<br />

vandenbosch@emic.ucl.ac.be<br />

Abstract— With its detection principle based upon inhomogeneities of the electromagnetic (EM) wave propagation parameters within the medium under consideration, ground penetrating radar (GPR) is poised to be a very valuable tool in the field of humanitarian demining, especially for the detection of plastic landmines. In this regard, however, its performance strongly depends upon the EM properties of the medium surrounding the target, and upon the dielectric contrast between the target and the surrounding medium.
After a short introduction to the world landmine problem, this article presents the extraction and modeling of buried-target signatures from a stepped-frequency continuous-wave GPR signal. The signal is decomposed into its soil and target-in-soil contributions. The mine signature extraction process is detailed, and the outline of the GPR model used to simulate these signatures is also presented. More importantly, the reasons for mine signature extraction and simulation are discussed. This development is completed by the validation of the numerical model, which shows encouraging results for this line of research.

I. MINE FACTS AND FIGURES<br />

The following figures can be found at the following addresses: www.state.gov/t/pm/wra/ and www.cirnetwork.org/info/fact_sheet.cfm.

A. Mines by the numbers<br />

Landmines
• are estimated to number 45–50 million, infesting at least 12 million km² of land
• affect more than 100 countries
• heavily affect 20 countries: Angola, Afghanistan, Croatia, Egypt, Cambodia, . . .
• kill or maim a reported 10,000 people annually (UNICEF: 30–40% of victims are children of 15 or less)
• cost US$3–30 per unit.

B. Heavy economical impact<br />

Landmines
• create millions of refugees
• prevent hundreds of thousands of km² of agricultural land from being used
• deny thousands of km of roads for travel

Sébastien Lambot<br />

Department of Geotechnology<br />

Delft University of Technology<br />

Mijnbouwstraat 120<br />

2628 RX Delft<br />

The Netherlands<br />

s.lambot@citg.tudelft.nl<br />

Marc Acheroy<br />

Signal and Image Centre<br />

Royal Military Academy<br />

Av. de la Renaissance, 30<br />

1000 Brussels<br />

Belgium<br />

acheroy@elec.rma.ac.be<br />

• create food scarcities, causing malnutrition and starvation<br />

• interrupt health care, increasing sickness and disease<br />

• inflict long-term psychological trauma on landmine survivors<br />

• hinder economic development<br />

• undermine political stability.<br />

C. Demining: a slow/costly process<br />

Demining process<br />

• is extremely dangerous: 1 accident per 2,000 mines destroyed
• costs US$300–1,000 per mine
• is slow: ∼100,000 mines/year cleared.
So there is a real need for a faster, cheaper and safer demining process. This includes, but is not limited to:
• improving currently used detectors
• increasing research on new technologies
The list of currently available detectors and prototypes is presented in Table I.

II. RADAR SYSTEM DESCRIPTION<br />

We use a stepped-frequency continuous wave (SFCW) radar because it possesses several advantages over the time-domain technology [1], among which:
• the signal can be controlled over a very large bandwidth (0.75–4 GHz)
• ease of characterization of the elements of the system (cables, antenna)

TABLE I
LIST OF CURRENTLY AVAILABLE DETECTORS AND PROTOTYPES.

sensor type            sensor     maturity      effectiveness
prodders & acoustics   prodder    in use        high
                       acoustic   R&D           high (wet soil)
EM                     MD         in use        very high
                       GPR        R&D/in use    high (dry soil)
                       MWR        R&D           medium
biosensor              dog        in use        medium–high
                       rodent     in devel.     medium–high
nuclear                all        prototype     —
chemical               all        prototype     —


• better signal-to-noise ratio (SNR) than for time-domain GPRs.
The antenna is used off-ground and has the following features:
• TEM horn (high directivity)
• used in monostatic mode (simpler modeling, shorter round-trip path).

The radar system is emulated by a vector-network analyzer<br />

(VNA). The measured signal is S11(ω).<br />
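For illustration, the sketch below shows one conventional way such a stepped-frequency sweep can be turned into a time-domain trace (windowing followed by an inverse FFT); the 0.75–4 GHz band is taken from the text, while the synthetic S11 samples and the two reflection delays are assumed placeholders standing in for VNA measurements.

```python
# Sketch: converting a stepped-frequency S11(f) sweep (0.75-4 GHz) into a
# time-domain trace by windowing and inverse FFT. The S11 samples here are
# synthetic placeholders standing in for VNA measurements.
import numpy as np

f = np.linspace(0.75e9, 4.0e9, 651)          # frequency grid [Hz]
df = f[1] - f[0]

# Synthetic response: two point reflections at 1.5 ns and 2.8 ns round-trip delay
tau1, tau2 = 1.5e-9, 2.8e-9
S11 = 0.8 * np.exp(-2j * np.pi * f * tau1) + 0.3 * np.exp(-2j * np.pi * f * tau2)

# Window to reduce sidelobes, then zero-pad down to DC so the IFFT time axis is simple
w = np.hanning(len(f))
spectrum = np.zeros(int(round(f[-1] / df)) + 1, dtype=complex)
spectrum[int(round(f[0] / df)):] = S11 * w

s_t = np.fft.irfft(spectrum)
t = np.arange(len(s_t)) / (len(s_t) * df)    # time axis [s]

peak = t[np.argmax(np.abs(s_t))]
print(f"strongest reflection at ~{peak*1e9:.2f} ns round-trip")
```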

III. RADAR SYSTEM MODELING<br />

The signal measured by the radar is the sum of two<br />

contributions:<br />

$S_{11}^{\mathrm{total}} = S_{11}^{\mathrm{soil}} + S_{11}^{\mathrm{target}}$ (1)

which can be rewritten using a transfer-function formalism as (Fig. 1(b); $H_m$ is neglected):

$S_{11}^{\mathrm{total}} = H_i + H\,(G_s + G_t)$ (2)

where

• Gt is the “signature” of the target<br />

• Gs is the Green’s function of the soil, and can be<br />

computed once (ε1, µ1) , . . . , (εN, µN) are known<br />

• Hi, H are the transfer functions of the antenna.<br />

Gt depends upon many parameters, among others the depth of<br />

the target in the medium and the soil EM parameters. From<br />

(2) we have that<br />

$G_t = \dfrac{S_{11}^{\mathrm{total}} - H_i - H\,G_s}{H}.$ (3)

Gt can be computed thanks to a proper soil-target system<br />

modeling, for which we used the method of moments for<br />

objects embedded in stratified media [2] (not developed here).<br />
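A minimal numerical sketch of the inversion in (3) is given below; the transfer functions and Green's functions are synthetic placeholders (in practice Hi and H come from antenna characterisation and Gs from the layered-soil model), and the example only checks that the target contribution is recovered from the forward model of (2).

```python
# Sketch of the signature-extraction step in eq. (3): given the antenna transfer
# functions Hi and H, the soil Green's function Gs and the measured total S11,
# recover the target contribution Gt. All arrays below are synthetic placeholders.
import numpy as np

freqs = np.linspace(0.75e9, 4.0e9, 331)              # SFCW frequency grid [Hz]

# Placeholder transfer functions / Green's functions
Hi = 0.05 * np.exp(-2j * np.pi * freqs * 0.4e-9)
H  = 0.90 * np.exp(-2j * np.pi * freqs * 0.8e-9)
Gs = 0.60 * np.exp(-2j * np.pi * freqs * 1.0e-9)
Gt_true = 0.10 * np.exp(-2j * np.pi * freqs * 1.6e-9)  # "signature" we want back

S11_total = Hi + H * (Gs + Gt_true)                  # forward model, eq. (2)
Gt_est = (S11_total - Hi - H * Gs) / H               # inversion, eq. (3)

print("max reconstruction error:", np.max(np.abs(Gt_est - Gt_true)))
```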

IV. MINE SIGNATURE EXTRACTION AND SIMULATION<br />

A. Measured signature extraction process<br />

The extraction of the mine signature is done in three steps:<br />

1) determination of the characteristics Hi, H = HtHr of the

antenna by measurements<br />

2) extraction of the soil EM parameters<br />

(ε1, µ1), . . . , (εN, µN) (Fig. 1(a)) by inversion of

the radar model for a soil without a target<br />

3) subtraction of the soil contribution from the total radar<br />

signal and division of the difference by H (see (3)).<br />

The advantage of this method w.r.t. a classical extraction by<br />

moving average window is that once the soil EM parameters<br />

are known, the background can be subtracted from the total<br />

signal, for any height of the antenna, as Gs can be computed<br />

once the soil EM parameters are known [3]–[6].<br />

B. Example of measured signatures<br />

Measurements for a soil at different moistures and containing<br />

different targets have been made (Fig. 2). Fig. 2(a) shows<br />

the time-domain S11(t) for a 4-layered propagation medium<br />

containing an AP PMN Russian mine, without filtering out<br />

the soil response; Fig. 2(b) shows the time-domain signature<br />

of the mine after extraction of the soil response in the<br />

frequency domain, which had been previously measured. The<br />

abscissa “configuration” refers to the water content of the layer<br />

containing the objects (1 is 0%, 9 is 25%). In configurations<br />

1–6, the filtered signal shows much more clearly the presence of the mine, and the time position permits accurate retrieval of the target depth, which corresponds very well to the theoretical value. The reason why configurations 7–9 led to less satisfactory results has still to be explained.

C. Signature simulation<br />

Simulating mine signatures allows one to:<br />

1) get an understanding of the physics of real experiments<br />

2) set an upper limit for the performances of a GPR given<br />

a physical environment.<br />

The first point is demonstrated at Fig. 3, where a virtual B<br />

scan in free space and in a layered medium are shown. The<br />

characteristic time domain hyperbola appears twice in the right<br />

figure, as the last layer is a PEC metal sheet. One can also<br />

see that the multiple reflections between the top and bottom<br />

of the cylinder are decaying faster in the layered medium, as<br />

the layer containing the target is lossy.<br />

The second point can be inferred from the observation that<br />

in the simulations, no statistical imperfections such as soil<br />

roughness or clutter for example, are taken into account. So,<br />

if the modeling has been validated in quasi-ideal conditions,<br />

we can use it as an upper limit as follows: if, for a given soil<br />

and mine, the simulator gives a “not detected” answer, it is<br />

highly probable that in real field conditions the detector will<br />

also miss the target.<br />

V. RADAR AND SOIL-TARGET MODELS VALIDATION<br />

In this scope experiments involving a conducting sphere<br />

embedded in a stratified medium have been led; the resulting<br />

extracted signatures are compared to simulations at Fig. 4.<br />

The PEC sphere has a 5 cm radius and the 4-layers medium<br />

is constituted as follows: air, sand with 5% moisture and 15.4<br />

cm thick, sand with 10% moisture and 13.5 cm thick, metal<br />

plane.<br />

The reason why the computed signature is smaller in amplitude than its measured counterpart (see left figure) may

be due to:<br />

• directive radiation pattern of the TEM antenna not accounted<br />

for in the radar model<br />

• remove-and-replace process necessary to introduce the<br />

mine in the soil which introduces air in the sand and<br />

lowers the value of its EM parameters.<br />

The right figure shows these computed and measured signatures<br />

of the sphere in the time-domain. The time dependence


Fig. 1. Left: physical modeling of the reality; right: conceptual transfer function model.
Fig. 2. Left: measured raw GPR signal; right: measurement-extracted mine signature (abscissa: configuration 1–9).
Fig. 3. Left: signature simulation of a PEC cylinder in free space. Right: signature simulation of a PEC cylinder in a 4-layer medium. Both images are B-scans (time vs. relative position [m]).
Fig. 4. Measured and computed PEC sphere signature Gt in a 4-layer medium show good agreement. Left: frequency domain; right: time domain.

is correctly predicted; only the amplitude of the first reflection<br />

seems underestimated, probably for the reasons exposed<br />

above. There are two reflections: the first one is due to the<br />

sphere, while the second one is due to the lowest layer, a<br />

PEC plane. This validation process is very important, as the<br />

modeling will serve to establish an upper limit of detection<br />

capabilities for a given soil and mine.<br />

VI. SUMMARY<br />

Landmines cause heavy problems in affected countries, and demining is costly and slow. It is necessary to improve the

existing demining tools and methods w.r.t. speed, security,<br />

effectiveness and cost. Research has been focused on the GPR,<br />

in measurement information retrieval—that is, target signature<br />

extraction—as well as in modeling.<br />

From the measurement point of view, mine signature extraction<br />

appears promising for GPR signal SNR enhancement<br />

(noise = soil reflections) and is important in view of target<br />

classification. It is based upon the knowledge of the soil EM<br />

parameters, which we can extract from soil GPR signal.<br />

From the modeling point of view, mine signature simulation<br />

could be used for establishing an upper limit of<br />

the GPR detection capabilities for a given soil and mine.<br />

Furthermore it can be used to make “virtual” experiments,<br />

which yield a good “feeling” of what would happen in reality.<br />

A simulated signature has been compared to its measured counterpart and has shown good agreement.

VII. FUTURE WORK<br />

It includes, but is not limited to:


• better modeling of the antenna radiation properties<br />

• wider testing of the model<br />

• study of the degradation of the signature extraction w.r.t.<br />

soil roughness.<br />

ACKNOWLEDGEMENT<br />

This work has been partially funded by the Belgian Ministry<br />

of Defence (HuDem project) and is partially funded by the<br />

FRIA as well as by the Royal Military Academy of Brussels.<br />

The authors are grateful to Ir. Pascal Druyts for the many<br />

discussions involving this subject.<br />

REFERENCES<br />

[1] I. van den Bosch, S. Lambot, and A. Vander Vorst, “A Unified Method<br />

for Modeling Radar and Radiometer Measurements,” in Proceedings of<br />

the EUDEM2 SCOT 2003 conference, H. Sahli, A. M. Bottoms, and<br />

J. Cornelis, Eds., vol. 2, VUB, Brussels, September 2003, pp. 523–528.<br />

[2] K. A. Michalski and J. R. Mosig, “Multilayered Media Green’s Functions<br />

in Integral Equation Formulations,” IEEE Transactions on Antennas and<br />

Propagation, vol. 45, no. 3, pp. 508–519, March 1997.<br />

[3] S. Lambot, E. C. Slob, I. van den Bosch, B. Stockbroeckx, B. Scheers, and<br />

M. Vanclooster, “GPR design and modeling for identifying the shallow<br />

subsurface dielectric properties,” in Proceedings of the 2nd <strong>International</strong><br />

<strong>Workshop</strong> on Advanced Ground Penetrating Radar, A. Yarovoy, Ed., TU<br />

Delft, the Netherlands, May 2003, pp. 130–135.<br />

[4] ——, “Estimating Soil Dielectric Properties from Monostatic GPR Signal<br />

Inversion in the Frequency Domain,” Water Resources Research, 2003,<br />

in Press.<br />

[5] S. Lambot, E. C. Slob, I. van den Bosch, B. Stockbroeckx, and M. Vanclooster,<br />

“Accurate modeling of GPR signal for an accurate characterization<br />

of the subsurface dielectric properties,” IEEE Transactions on<br />

Geoscience and Remote Sensing, 2003, accepted with minor revisions.<br />

[6] S. Lambot, I. van den Bosch, and E. Slob, “Dielectric characterization<br />

of the shallow subsurface using ground penetrating radar for supporting<br />

humanitarian demining,” in Proceedings of the EUDEM2 SCOT 2003<br />

conference, H. Sahli, A. M. Bottoms, and J. Cornelis, Eds., vol. 2, VUB,<br />

Brussels, September 2003, pp. 535–541.


Detection of Landmines Using Nuclear Quadrupole Resonance (NQR): An Overview<br />

S.D. Somasundaram, J.A.S. Smith, K. Althoefer, L.D. Seneviratne; King’s College London, Strand, London WC2R 2LS<br />

{Samuel.Somasundaram, John.Smith, K.Althoefer, Lakmal.Seneviratne}@kcl.ac.uk<br />

Abstract: Estimates show that there are more than 100 million landmines worldwide and at present rates it will take more<br />

than 500 years and US$33billion to clear them. This has led to an increase in research into new techniques for demining.<br />

Nuclear Quadrupole Resonance (NQR) is one of the few detection methods that is able to detect the explosives found<br />

within landmines. This paper provides an overview of the theory and practical aspects of NQR and its advantages and<br />

limitations. A discussion on recent improved hardware and signal processing techniques that have been proposed to try<br />

and improve the sensitivity of detection is provided.<br />

1 Introduction<br />

1.1 The problem<br />

Demining is an issue affecting both the military and the<br />

general public. According to the “<strong>International</strong><br />

Campaign to Ban Landmines”, there are more than 100<br />

million land mines buried in over 80 countries and more<br />

are still being laid. They estimate that if the demining<br />

rate remains at the current rate, it will take over 500<br />

years and US$33 billion to clear them. This problem<br />

has led to an increase in demining research.<br />

1.2 Current demining technologies<br />

The military usually only require a small path through a<br />

minefield to be cleared. One method is one of “brute<br />

force”; landmines are triggered using explosives and/or<br />

heavy machinery. Such a method would not be suitable<br />

for clearing an entire minefield. A common method<br />

used in civilian demining is to detect the mines using<br />

metal detectors or ground penetrating radar (GPR).<br />

Once the mine has been found, it can be carefully<br />

disarmed or destroyed. Unfortunately, many mines are<br />

cased, not in metal, but in plastic or wood and may<br />

contain only a relatively small amount of metal. This<br />

means the sensitivity of the metal detector has to be<br />

increased; consequently lots of other metal objects<br />

(clutter) are detected. GPR has a similar drawback.<br />

Every suspicious object detected has to be carefully<br />

investigated, which usually involves probing the ground<br />

with a “prodder”, a slow, laborious and very dangerous<br />

task. These problems have led to a worldwide drive

into researching new mine detection methods, which<br />

may be used to detect mines accurately, quickly and<br />

safely within the hostile and varying environments in<br />

which mines may be found.<br />

1.3 Emerging demining technologies<br />

New mine detection methods generally fall into one of<br />

two categories. The first category is where the method<br />

seeks to detect the mine casing or other anomalies in

the ground introduced by the buried mine (table 1). In<br />

the second category, it is the actual explosive within the<br />

mine or its vapour that is being detected (table 2).<br />

Demining Technology: Ground penetrating radar (GPR)
  Description: Images the dielectric properties of the soil; discontinuities such as mines can be seen.
  Main Limitations: Soil is generally inhomogeneous, therefore it is not necessarily easy to see mines, particularly near the surface.

Demining Technology: Infra-Red/Thermal Imaging
  Description: Images the temperature of the ground. Soil above mines cools at a slower rate than the surrounding soil.
  Main Limitations: Use is limited to certain times of the day.

Table 1 – Methods for detecting anomalies in the ground (caused by the mine) or the mine casing.

Demining Technology: Nuclear Quadrupole Resonance (NQR)
  Description: After excitation, detects a frequency signature specific to the explosive in the landmine.
  Main Limitations: Received signal is very weak.

Demining Technology: Thermal Neutron Activation (TNA)
  Description: Detects the presence of nitrogen in a sample by irradiating with slow neutrons and then measuring the energy levels of the returned gamma rays.
  Main Limitations: Uses ionising radiation and requires heavy equipment.

Demining Technology: Mine Detecting Dogs
  Description: Dogs trained to "sniff" out landmines.
  Main Limitations: Easily tired.

Table 2 – Methods for detecting the explosives within landmines.


One point worth noting is that the equipment needed for<br />

NQR detection of landmines is relatively light and<br />

simple compared to GPR, TNA and IR detection<br />

systems and could therefore be easily mounted on some<br />

sort of robotic device. Researchers at King’s are<br />

currently investigating the feasibility of integrating<br />

various mine detection methods, including NQR, onto<br />

robotic vehicles.<br />

2 NQR<br />

2.1 Theory<br />

Nuclear quadrupole resonance is a radio frequency (RF)<br />

technique that may be used to detect the actual<br />

explosives found within landmines. Many explosives<br />

used in the manufacture of landmines are nitrogen-rich<br />

(e.g. TNT, PETN, and RDX). The 14 N nitrogen nucleus<br />

has spin quantum number I = 1, and thus behaves as<br />

though it has a non-spherical charge distribution and<br />

therefore possesses an electric quadrupole moment, Q.<br />

When such a nucleus is placed in a non-zero electric<br />

field gradient (EFG) different quadrupole energy levels<br />

arise as some nuclear orientations will be at lower<br />

energy levels than others. The EFG at the nucleus is<br />

produced by neighbouring charges, both electrons and<br />

nuclei, and is therefore directly related to the structure<br />

of the compound. The symmetry of the EFG, the<br />

nuclear electric quadrupole moment and the spin<br />

quantum number of the nucleus determine how the<br />

energy levels are split. In NQR, the applied RF<br />

radiation drives transitions between the quadrupole<br />

energy levels. As 14 N has spin-quantum number I = 1,<br />

there are three energy levels -1, 0 and +1. The EFG is a<br />

tensor quantity and its three principal components can<br />

be expressed in Cartesian coordinates as qxx, qyy, and qzz.<br />

As the sum of these three components is zero it is usual<br />

to define qzz or q as the maximum principal component;<br />

the deviation from axial symmetry with respect to the zaxis<br />

of the EFG is given by the asymmetry parameter η<br />

a positive number varying between zero and unity and<br />

defined as<br />

qxx − qyy<br />

η = .<br />

(1)<br />

qzz<br />

The energy of interaction between the electric<br />

quadrupole moment of the 14 N nucleus and the EFG of<br />

the surrounding charges is given by solving the<br />

quadrupolar Hamiltonian for a spin-1 nucleus [2]. In the<br />

general case, three transitions are allowed (figure 1)<br />

with frequencies:<br />

$\nu_x(0 \to +1) = \dfrac{E_y - E_z}{h} = \dfrac{3}{4}\dfrac{e^2 qQ}{h}\left(1 + \dfrac{\eta}{3}\right),$

$\nu_y(0 \to -1) = \dfrac{E_x - E_z}{h} = \dfrac{3}{4}\dfrac{e^2 qQ}{h}\left(1 - \dfrac{\eta}{3}\right),$

$\nu_z(-1 \to +1) = \dfrac{E_y - E_x}{h} = \dfrac{1}{2}\dfrac{e^2 qQ}{h}\,\eta,$ (2)

where e is the electronic charge, h is Planck's constant and $e^2 qQ/h$ is known as the "quadrupole coupling constant"

[1]. These frequencies lie in the radio frequency region;<br />

hence electromagnetic radiation in this range interacts<br />

with the quadrupolar nuclei. It is worth noting that the<br />

NQR signals are observed only in the solid state.<br />
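For illustration, the short sketch below evaluates the three transition frequencies of equation (2) for a spin-1 nucleus given the quadrupole coupling constant e²qQ/h and the asymmetry parameter η; the numerical values in the example are hypothetical and are not tabulated constants for any particular explosive.

```python
# Sketch of equation (2): the three spin-1 transition frequencies from the quadrupole
# coupling constant e^2 qQ/h and the asymmetry parameter eta. The numerical values
# below are illustrative placeholders, not tabulated constants for a real explosive.
def nqr_frequencies(coupling_constant_hz, eta):
    """Return (nu_x, nu_y, nu_z) in Hz for a spin-1 nucleus, per eq. (2)."""
    nu_x = 0.75 * coupling_constant_hz * (1.0 + eta / 3.0)   # 0 -> +1
    nu_y = 0.75 * coupling_constant_hz * (1.0 - eta / 3.0)   # 0 -> -1
    nu_z = 0.50 * coupling_constant_hz * eta                 # -1 -> +1
    return nu_x, nu_y, nu_z

if __name__ == "__main__":
    # Hypothetical example: e^2 qQ/h = 5.6 MHz, eta = 0.6
    for name, nu in zip(("nu_x", "nu_y", "nu_z"), nqr_frequencies(5.6e6, 0.6)):
        print(f"{name} = {nu/1e3:.1f} kHz")
```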

Figure 1 - Quadrupole energy levels in axially<br />

symmetric (top) and non-axially symmetric (bottom)<br />

fields.<br />

2.2 Experimental methods<br />

Currently, NQR experiments are conducted using<br />

pulsed RF techniques. The sample is subjected to bursts<br />

of RF radiation at or near the resonant frequencies and<br />

the resulting signals monitored. Two types of signals<br />

that are commonly monitored are free induction decays<br />

(FIDs) and echoes.<br />

2.2.1 Free Induction Decays (FIDs)<br />

When an RF pulse at the resonant frequency, vQ, is<br />

applied to the NQR sample at the correct geometry, the<br />

magnetic component of the RF radiation couples with<br />

the nuclear magnetic moment and alters the orientation<br />

of the 14 N nucleus within the EFG. This effectively<br />

excites the nucleus, as the nuclear magnetisation<br />

precesses about B1 away from its equilibrium<br />

orientation.


Figure 2 – Interaction of RF radiation with nuclear<br />

magnetic moment<br />

Once the RF pulse stops, the nucleus returns to a lower<br />

energy level via various relaxation processes. This is<br />

observed as a free induction decay (FID) immediately<br />

following the end of the pulse, which consists of a<br />

damped sinusoidal oscillation that is observed to decay<br />

with a time constant T2*, known as the FID or spin-phase memory decay time.

The amount by which the nuclear magnetic moment is<br />

“tipped” away, by the radiation, from its equilibrium<br />

position in the EFG, is given by the flip angle α (figure<br />

2), which is defined by,<br />

$\alpha = \gamma B_1 t_w,$ (3)

where γ is the nuclear gyromagnetic ratio, B1 is the<br />

magnitude of the oscillating RF field and tw is the time<br />

for which the pulse is applied.<br />

The FID intensity depends on the flip angle and is<br />

maximum when α =119°, and not at 90° as in pulsed<br />

NMR. This is due to the NQR sample being<br />

polycrystalline, which means that the EFG at a given<br />

nitrogen site can assume all possible orientations with<br />

respect to the B1 field. It is customary to refer to a 119° pulse as "90°eff". Equation (3) shows that the

hardware must be configured to deliver an RF pulse of<br />

sufficient magnitude and for long enough to produce a<br />

90ºeff pulse and get the maximum FID intensity.<br />
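As a worked example of equation (3), the sketch below estimates the pulse width needed for a 90°eff (119°) pulse; the 14N gyromagnetic ratio is the standard literature value (about 3.077 MHz/T) and the B1 amplitude is an assumed figure for illustration only.

```python
# Sketch of equation (3): pulse width needed for a "90-degree-effective" (119 deg)
# pulse given the gyromagnetic ratio and the B1 amplitude. The B1 value is a
# hypothetical placeholder; gamma is the standard 14N value (~3.077 MHz/T).
import math

GAMMA_14N = 2.0 * math.pi * 3.077e6      # 14N gyromagnetic ratio [rad s^-1 T^-1]

def pulse_width_for_flip(alpha_deg, b1_tesla, gamma=GAMMA_14N):
    """t_w = alpha / (gamma * B1), from alpha = gamma * B1 * t_w."""
    return math.radians(alpha_deg) / (gamma * b1_tesla)

b1 = 2e-4                                # assumed 0.2 mT RF field amplitude
print(f"90-deg-effective (119 deg) pulse: {pulse_width_for_flip(119.0, b1)*1e6:.1f} us")
```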

One of the processes by which the nuclei lose energy is<br />

governed by the spin-lattice relaxation time, T1, by<br />

which the nucleus returns to its equilibrium orientation<br />

in the EFG by losing energy to the thermal motions of<br />

the solid. Following a pulse, a time of 5T1 is required to<br />

produce a fully relaxed system.<br />

2.2.2 Spin Echoes<br />

Another set of signals that are usually monitored are<br />

spin echoes, which are also useful as they can be used<br />

to sustain the NQR signal for longer than the FIDs<br />

described earlier. The long "echo trains" that are produced can be summed to improve the signal-to-noise ratio (SNR).

There are several relaxation processes that contribute to<br />

the decay of the FID. One of these processes is caused<br />

by the inhomogeneous nature of the sample, which<br />

means that the EFG at each of the 14 N sites can vary<br />

slightly, so that their respective NQR frequencies also<br />

vary slightly. This means that the signals from the 14 N<br />

nuclei become out of phase with each other and<br />

therefore lose coherence. It is, however, possible to use<br />

certain pulse sequences to refocus the signals and bring<br />

them back into phase. This is the basis for the formation<br />

of echoes. An echo can be produced by applying a<br />

second pulse a time τ after the initial pulse, often with

its phase shifted by 90° (with respect to the first pulse).<br />

This has the effect of refocusing the de-phased signals;<br />

consequently, a time τ after the second pulse, the 14 N<br />

signals are all back in phase and the peak of the echo<br />

corresponds to this point. If we keep applying<br />

successive pulses, then an echo train is produced, a<br />

sequence known as pulsed spin locking (PSL). The<br />

echo train cannot be sustained indefinitely as the nuclei<br />

dephase; the time constant for this process following<br />

two pulses is T2, the spin-spin relaxation time.<br />

However, in a PSL sequence for example, an even<br />

longer decaying echo train is produced with a time<br />

constant T2e. Once the amplitude of the echoes has<br />

decreased below noise, an experimental wait time of<br />

5T1 must be allowed before the sample can be excited<br />

again.<br />

2.3 Landmine NQR Data<br />

85% of landmines are manufactured using RDX and/or<br />

TNT. Table (3) tabulates some of their NQR properties<br />

at room temperature. Note that the last three columns of<br />

table (3) list average values.<br />

Compound                 νx (kHz)                        νy (kHz)                   T1 (ms)   T2 (ms)   Line width (kHz)
RDX                      5047, 5192, 5240                3359, 3410, 3458           13        7         0.64
TNT (monoclinic form)    837, 843, 844, 848, 859, 870    714, 739, 743, 751, 769    5000      –         1.2

Table 3 – NQR data


2.4 Signal to Noise Ratio (SNR)<br />

In NQR, the SNR can be expressed by the equation:<br />

$\mathrm{SNR} = \dfrac{A\,\zeta\,h^{2}\,\gamma\,N_0\,\nu_Q^{3/2}\,(\mu_0 V_C Q)^{1/2}}{30.08\,kT\,(kTBF)^{1/2}}$ (4)

which needs to be multiplied by a factor of 0.436 for polycrystalline samples. VC is the RF coil volume, Q is the loaded Q factor, A is a constant close to unity, ζ is the coil filling factor, h is Planck's constant, γ is the nuclear gyromagnetic ratio, N0 is the number density of the 14N nuclei, νQ is the NQR frequency, μ0 is the permeability of free space, k is Boltzmann's constant, T is temperature, B is the receiver bandwidth and F is the noise factor.

The formula was derived assuming a solenoid type coil<br />

and the assumed noise is the thermal noise of the<br />

hardware. Two points worth noting are that the greater<br />

the NQR frequency the greater is the SNR and the<br />

higher the Q factor the greater is the SNR.<br />
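The sketch below illustrates just these two scaling relations (SNR proportional to νQ^(3/2) and to Q^(1/2)), holding all other factors of equation (4) fixed; the frequencies are representative RDX and TNT lines from Table 3, while the Q values are assumed for illustration.

```python
# Sketch of the two scaling relations noted above: SNR grows as nu_Q^(3/2) and as
# Q^(1/2), all other factors in eq. (4) held fixed. Frequencies are representative
# RDX/TNT lines from Table 3; the Q values are assumed for illustration.
def relative_snr(nu_hz, q_factor, nu_ref_hz, q_ref):
    return (nu_hz / nu_ref_hz) ** 1.5 * (q_factor / q_ref) ** 0.5

# TNT line near 840 kHz vs RDX line near 3.41 MHz, same coil (Q = 100)
print("TNT vs RDX (same Q):", relative_snr(840e3, 100, 3.41e6, 100))    # ~0.12, i.e. ~8x worse

# Same TNT line, but a cooled/HTS coil with Q = 10000 vs Q = 100
print("Q = 10000 vs Q = 100:", relative_snr(840e3, 10000, 840e3, 100))  # 10x gain
```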

2.5 Limitations of current NQR detection<br />

methods<br />

The main problem with NQR is that the signals detected<br />

are often weak compared to the ambient noise floor.<br />

The main sources of noise are:<br />

1. Thermal (Johnson) noise of the RF antenna<br />

and sample noise.<br />

2. Radio frequency interference (RFI).<br />

3. Spurious responses e.g. piezoelectric and<br />

magnetoacoustic responses from sand in the<br />

soil.<br />

In the laboratory, sample noise at these low<br />

radiofrequencies is less than the random thermal noise<br />

of the RF coil [8] which then becomes the limiting<br />

factor, as the sensor can be shielded from RFI and<br />

substances which produce spurious responses can be<br />

removed from the sample. Equation (4) shows that<br />

SNR ∝ $\nu_Q^{3/2}$. The data in table (3) show that the TNT frequencies are lower by a factor of at least four than those of RDX, so their SNR is lower by a factor of about $4^{1.5} \approx 8$. Since the sensitivity of detection is poor,

the SNR must be improved by repeating the experiment<br />

and averaging the signals obtained. In section (2.2) we<br />

mentioned that a wait time of 5T1 needs to be observed<br />

between the last excitation pulse in one experiment and<br />

the first excitation pulse in the next. The T1 values for<br />

TNT and RDX, shown in table (3), indicate that this<br />

quantity is around 13ms for RDX, whilst for TNT its<br />

value rises to 5 seconds. For landmine detection using<br />

NQR, we would like to be able to determine whether a<br />

mine exists (in the area being investigated by the<br />

antenna) in less than a minute, which means we may<br />

only be able to collect 2 echo trains from a TNT-containing

mine, insufficient to give an acceptable SNR<br />

for many anti-personnel mines.<br />
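The timing argument can be made concrete with the short sketch below, which counts how many acquisitions separated by a 5T1 wait fit into a one-minute dwell and the corresponding square-root averaging gain; the T1 values are those of Table 3 and the 60 s budget is the figure quoted above (the per-acquisition time itself is neglected for simplicity).

```python
# Worked example of the timing argument above: with a wait of 5*T1 between
# excitations, how many averaged acquisitions fit in a one-minute dwell, and what
# SNR gain (~sqrt(N)) does that buy? T1 values are the Table 3 figures.
import math

def acquisitions_in_budget(t1_s, budget_s=60.0, acquisition_s=0.0):
    """Number of repeats when each repeat costs one acquisition plus a 5*T1 wait."""
    return int(budget_s // (acquisition_s + 5.0 * t1_s))

for name, t1 in (("RDX", 13e-3), ("TNT", 5.0)):
    n = acquisitions_in_budget(t1)
    gain = math.sqrt(n) if n > 0 else 0.0
    print(f"{name}: T1 = {t1*1e3:.0f} ms -> {n} acquisitions/min, averaging gain ~ {gain:.1f}x")
```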

The problem of RFI is not included in equation (4) and<br />

this further reduces the SNR in field experiments where<br />

it is not possible to fully shield the sample under<br />

investigation. Automobile ignition, radio station<br />

transmissions and lightning are just a few of the<br />

possible sources of RFI. These are commonly found in<br />

the AM band, which is specifically a problem for the<br />

detection of TNT, which has NQR frequencies lying<br />

between 700 and 900kHz [1]. These problems show<br />

that the detection of TNT presents one of the toughest<br />

challenges for NQR based landmine detection systems.<br />

In addition, the presence of certain substances in the<br />

soil, e.g. quartz, can generate strong piezoelectric<br />

signals that can completely mask the weak NQR<br />

response [1]. Magnetic materials and shrapnel in the<br />

terrain may also generate strong magnetoacoustic<br />

responses.<br />

Another problem is not being able to detect the<br />

FID/echo immediately following the excitation pulse<br />

due to “ringing” of the coil and associated hardware.<br />

Once the excitation pulse has stopped, the coil<br />

continues to ring for a time of the order of 23 time<br />

constants before NQR signal can be detected.<br />

Unfortunately, valuable NQR signal is lost during this<br />

time, known as the dead time.<br />

3 Dealing with NQR limitations<br />

There are several approaches to dealing with the<br />

limitations discussed, but they can be broken down into<br />

the two following broad categories,<br />

1. Improved antenna/hardware design<br />

2. Improved signal processing techniques.<br />

A discussion on recent technologies follows.<br />

3.1 Improved antenna/hardware design<br />

Gradiometer coils have been used to reduce the<br />

problems of RFI. A gradiometer coil has the property<br />

that it is a poor antenna for distant sources, but not for<br />

nearby sources. Unfortunately, with simple<br />

gradiometers, some signal loss occurs. This problem is<br />

partly solved by the use of asymmetric gradiometers.<br />

Here the dimensions of the antenna are optimised so<br />

that the field received in a particular inspection volume<br />

is optimised, whereas fields from outside this region are<br />

rejected. Suits et al. [6] have devised a computational<br />

method for optimising single and two-layer surface


coils. The general approach is to find the solution to<br />

Maxwell’s equation, which optimises a certain<br />

parameter. Then the current densities that produce such<br />

a field are calculated allowing the construction of an<br />

optimised coil.<br />

Equation (4) shows that SNR∝ Q 1/2 . This and the advent<br />

of high temperature superconducting (HTS) ceramics<br />

are the main reasons for investigating the use of cooled<br />

coils with high quality factors, which we are studying at<br />

the present time. To produce a high Q coil the<br />

resistance of the coil must be reduced, which also<br />

reduces the noise term in the denominator of equation<br />

(4), achieved by either cooling the coil or using<br />

superconducting materials. In magnetic resonance<br />

imaging (MRI) experiments, SNR enhancements by a<br />

factor of 3 or more have been reported from the use of<br />

HTS antennae cooled to 77K [8]. However, additional<br />

obstacles must be overcome. Equation (5) relates the<br />

quality factor to the dead time td.<br />

( πvtd<br />

)<br />

Q = (5)<br />

ln( A / A )<br />

0 d<br />

A0 is the initial circuit voltage immediately following<br />

the pulse, which could be as high as 10kV, and Ad a<br />

value less than the signal (typically < 1µV).<br />

Thus the higher the Q factor, the greater the dead-time<br />

required as the coil rings for longer. For high Q values<br />

this could pose a significant problem as the NQR signal<br />

could have decayed to negligible levels by the time the<br />

hardware could be set to receive signals. Another<br />

problem is caused by the fact that the bandwidth of the<br />

system is inversely related to the Q factor. The NQR<br />

frequency is temperature dependent, and under some

soil conditions may move outside the range of the<br />

instrument.<br />
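As a worked example, equation (5) can be rearranged for the dead time, t_d = Q ln(A0/Ad)/(πν); the sketch below evaluates it for the ring-down amplitudes quoted above (10 kV down to 1 µV), a representative TNT frequency, and assumed Q values.

```python
# Sketch of equation (5) rearranged for the dead time: t_d = Q * ln(A0/Ad) / (pi * nu).
# The ring-down amplitudes (10 kV down to 1 uV) are the figures quoted in the text;
# the Q values are assumed for illustration.
import math

def dead_time(q_factor, nu_hz, a0_volts=10e3, ad_volts=1e-6):
    return q_factor * math.log(a0_volts / ad_volts) / (math.pi * nu_hz)

nu_tnt = 840e3   # a representative TNT line [Hz]
for q in (100, 1000, 10000):
    print(f"Q = {q:5d}: dead time ~ {dead_time(q, nu_tnt)*1e6:.0f} us")
```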

The use of a circularly polarised RF magnetic field has<br />

been investigated as a way to increase the amount of<br />

signal detected, reduce the problems of spurious<br />

responses and improve multiple pulse sequences [5]. In<br />

standard NQR experiments, RF is applied along one<br />

axis only, hence only nuclei whose EFGs are correctly<br />

aligned with this axis will see the RF field. If circularly<br />

polarised RF is used, say in the x-y plane, then nuclei<br />

aligned with the x and y axes will see the RF and hence<br />

more nuclei will be excited. Suits et al. [5] reported a<br />

net gain in the SNR of 21%. The use of circularly<br />

polarised RF also leads to a significantly more<br />

homogenous RF field, which can lead to improvements<br />

in the effectiveness of multiple pulse sequences. The<br />

technique may also be used to distinguish interfering<br />

signals, such as piezoelectric responses, from the NQR<br />

signal. The NQR signal from a powdered sample will<br />

have the same polarisation as the excitation pulse,<br />

whereas the interfering signal will be linearly polarised<br />

along an axis dependent upon its orientation, hence the

signals can be distinguished. The main problem with<br />

implementation of such a technique in landmine<br />

detection is the fact that the sample must be surrounded by

at least two mutually orthogonal coils.<br />

Three-frequency NQR has been suggested as a way of<br />

alleviating the problem of “coil ringing” [5]. Here, the<br />

NQR sample is excited at two out of the three NQR<br />

frequencies and the third NQR frequency is observed.<br />

Unfortunately, this requires three coils which are<br />

mutually orthogonal to one another and thus poses<br />

practical problems for implementation in a landmine<br />

detector.<br />

4 Signal Processing<br />

Essentially, the aim is to decide whether a mine is<br />

present or not, using the data collected in one minute or<br />

less. Signal processing is a vast field and there are many<br />

ways of tackling the problem.<br />

As the characteristic NQR frequencies are known, one<br />

approach is to Fourier transform the time domain signal<br />

into the frequency domain and obtain a measure of the<br />

energy of the signal over the different frequencies. A<br />

threshold is then applied at the NQR frequencies to<br />

determine whether explosive is present or not. This is<br />

known as peak energy detection. This method is fine<br />

when the signal to noise ratio is relatively high, and the<br />

NQR signal is significantly higher than any spurious<br />

responses. However, if the energy of the noise at the<br />

NQR frequencies is of the same magnitude (or greater)<br />

than the energy of the NQR signal then this method is<br />

not very useful. Work has been done in the field of<br />

spectral estimation. Classical spectral estimators based<br />

on periodogram methods suffer when noisy truncated<br />

data sets are used as they do not assume any knowledge<br />

about the signal whose spectrum is being estimated.<br />

However, parametric spectral estimators which assume<br />

some knowledge of the signal to be estimated can be<br />

used to exploit the different characteristics of the noise<br />

and the NQR signal. In laboratory experiments, where<br />

the sample can be shielded from RFI and substances<br />

that produce spurious responses can be removed, the<br />

observed NQR signals can be modelled as a set of<br />

sinusoids buried in white noise. Parametric methods for<br />

line spectra, such as the matrix pencil method and the<br />

multiple signal classification (MUSIC) method, can<br />

then be employed. We have shown that matrix pencil<br />

methods can be used to improve the sensitivity of<br />

detection [9]. L. Collins et al. showed that improved<br />

detection performance could be achieved by using the


MUSIC method instead of periodogram based methods<br />

[4].<br />
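For reference, a minimal sketch of the simple peak-energy detector described at the start of this section is given below: a periodogram of the received trace, the energy read off in a narrow band around the known NQR line, and a comparison against a noise-floor-based threshold. The synthetic FID, the noise level, the band width and the threshold rule are all assumed for illustration.

```python
# Minimal sketch of a peak-energy detector: periodogram of the received trace,
# energy at the known NQR line, compared against a crude threshold.
# The signal, noise level and threshold are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
fs = 50e3                      # sampling rate after mixing to baseband [Hz]
n = 4096
t = np.arange(n) / fs

f_nqr = 2.5e3                  # offset of the expected NQR line from the receiver LO [Hz]
t2_star = 1.5e-3               # FID decay constant [s]
signal = 0.5 * np.exp(-t / t2_star) * np.cos(2 * np.pi * f_nqr * t)
x = signal + rng.normal(0.0, 0.1, n)         # FID buried in white noise

# Periodogram (magnitude-squared FFT) and energy in a small band around the line
spec = np.abs(np.fft.rfft(x)) ** 2 / n
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
band = (freqs > f_nqr - 100) & (freqs < f_nqr + 100)
line_energy = spec[band].sum()

threshold = 5.0 * np.median(spec) * band.sum()   # crude noise-floor based threshold
print("line energy:", line_energy, "threshold:", threshold, "-> mine?", line_energy > threshold)
```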

One approach to reduce the RFI is by using adaptive<br />

noise cancellation algorithms. These use two sets of<br />

antennae. One main antenna set measures the NQR<br />

signal and the RFI, whilst the reference antenna set<br />

measures the RFI only. Algorithms are applied to the<br />

reference set and are used to obtain an estimate of the<br />

RFI. This estimate is then subtracted from the main<br />

antenna data. L. Collins et al. used a 2-tap least mean<br />

squares algorithm to reduce the RFI power by more<br />

than 50dB [3].<br />
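The sketch below illustrates the idea with a 2-tap least-mean-squares filter, mirroring the set-up cited from [3]: the reference channel sees only the interference, the filter output estimates the RFI reaching the main antenna, and the estimate is subtracted. All signals, the channel and the step size are synthetic placeholders, and no particular suppression figure is claimed.

```python
# Sketch of adaptive RFI cancellation: a reference antenna sees only the interference,
# an LMS filter estimates it, and the estimate is subtracted from the main-antenna data.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
rfi_source = rng.normal(0.0, 1.0, n)

# Main antenna: weak NQR tone + RFI arriving through an unknown 2-tap channel
t = np.arange(n)
nqr = 0.01 * np.sin(2 * np.pi * 0.05 * t)
main = nqr + 0.8 * rfi_source + 0.3 * np.roll(rfi_source, 1)

# Reference antenna: RFI only (plus its own receiver noise)
ref = rfi_source + rng.normal(0.0, 0.05, n)

# 2-tap LMS filter
w = np.zeros(2)
mu = 0.01
cleaned = np.empty(n)
cleaned[0] = main[0]
for i in range(1, n):
    x = np.array([ref[i], ref[i - 1]])      # current and previous reference samples
    e = main[i] - w @ x                     # error = main minus interference estimate
    w += mu * e * x                         # LMS weight update
    cleaned[i] = e

before = 10 * np.log10(np.mean((main - nqr) ** 2))
after = 10 * np.log10(np.mean((cleaned[n // 2:] - nqr[n // 2:]) ** 2))
print(f"residual interference power: {before:.1f} dB -> {after:.1f} dB")
```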

Y. Jiang et al. [10] consider the case of array signal<br />

processing, where both spatial and temporal<br />

information is available. They have used maximum<br />

likelihood (ML) and Capon methods for signal<br />

amplitude estimation in the presence of temporally<br />

white, but spatially correlated noise. Both estimators are<br />

asymptotically statistically efficient for large data<br />

lengths, but not when the SNR is high. The Capon<br />

estimate outperforms the ML estimate when the SNR is<br />

low. They also consider the more general case, where<br />

the noise is both temporally and spatially correlated,<br />

which they model as an autoregressive random process.<br />

They present an alternating least squares method for<br />

parameter estimation and show this to be superior to the<br />

ML method.<br />

The problem of piezoelectric responses generated by<br />

quartz has been alleviated using phase cycling<br />

techniques [1]. These rely on the fact that the phase of<br />

an NQR signal depends on at least two of the preceding<br />

pulses, whereas that of the piezoelectric response is<br />

determined entirely by the phase of the preceding pulse<br />

which generated it. Unfortunately, since the<br />

characteristics of the piezoelectric responses change<br />

with time, they cannot be entirely mitigated using these<br />

techniques.<br />

5 Conclusions<br />

This paper presented the basic theory behind NQR-based landmine detection and highlighted some of the

problems with this approach. Hardware and software<br />

methods that have been proposed to improve the<br />

sensitivity of detection have been presented.<br />

6 Acknowledgements<br />

We thank DSTL at Fort Halstead for supporting this<br />

project.<br />

7 References<br />

1. J.A.S. Smith and M.D. Rowe, “Mine detection<br />

by nuclear quadrupole resonance”, 1996,<br />

Proceedings of the Eurel <strong>International</strong><br />

Conference on the Detection of Abandoned<br />

Landmines. IEE Conference Publication No.<br />

431. 62-66.<br />

2. J.A.S. Smith, “Nuclear Quadrupole<br />

Resonance Spectroscopy”, 1971, J. Chem.<br />

Educ., 48, pp.39, A77, A147 and A243.<br />

3. S. Tantum, L. Collins and L. Carin, “Signal<br />

Processing for NQR discrimination of Buried<br />

Landmines”, 1999, Proceedings of SPIE<br />

Vol.3710.<br />

4. Y. Tan, S.L. Tantum and L.M. Collins,<br />

“Landmine Detection with Nuclear<br />

Quadrupole Resonance”, 2002, Geoscience<br />

and Remote Sensing Symposium, IEEE<br />

<strong>International</strong>, Vol.3, pp1575-1578.<br />

5. B.H. Suits, A.N. Garroway, J.B. Miller and<br />

K.L. Sauer, “ 14 N magnetic resonance for<br />

materials detection in the field”, 2003, Solid<br />

State Nucl. Magn. Reson. 24, 123-136.<br />

6. B.H. Suits and A.N. Garroway, “Optimising<br />

surface coils and the self-shielded<br />

gradiometer”, 2003, Journal of Applied<br />

Physics, Vol.94, 4170-4178.<br />

7. Pound, Phys. Rev, 1950, 79, 685.<br />

8. L. Darasse and J.C. Ginefri, “Perspectives with<br />

cryogenic RF probes in biomedical MRI”,<br />

2003, Biochimie, 85, 915-937.<br />

9. Y.Y. Lin, P. Hodgkinson, M. Ernst and A.<br />

Pines, J. Magn. Reson., 1997, 128, 30-41.<br />

10. Y. Jiang, P. Stoica and J. Li, “Array Signal<br />

Processing in the Known Waveform and<br />

Steering Vector Case”, IEEE Transactions on<br />

Signal Processing, 2004, Vol.52, No.1, 23-35.


THE PMAR LAB IN HUMANITARIAN DEMINING EFFORT<br />

1 Introduction<br />

E.E. Cepolina, R.M. Molfino, M. Zoppi<br />

PMAR Robot Design Research Group<br />

University of Genova - Dept. of Mechanics and Machine Design<br />

{emacepo, molfino, zoppi}@dimec.unige.it

www.dimec.unige.it/PMAR<br />

According to current estimates, up to 90 countries throughout the world are currently affected by landmines [1]. Land affected by mines is inaccessible for a long time (basically until the fields have been cleared) and, despite mine risk education efforts, the number of injuries remains very high long after landmine placement.

In order to give their contributions in the field, researchers in engineering disciplines can direct their<br />

efforts into two different fields: effective techniques for mine localization and clearance and<br />

mechanical tools for victims care.<br />

Some premises are necessary. As manual mine clearing is very slow, costly and quite dangerous for<br />

personnel, the benefit of a possible introduction of automatic systems and devices is remarkable.<br />

Landmines are disseminated along ex-confrontation lines that divided military factions, such as river banks, abandoned industrial sites and residential areas, as well as in strategically important sites or resources such as cultivable ground. Due to this large variety of operative environments, the field is open to specific environment-oriented conventional techniques as well as to unconventional solutions, i.e., new locomotion principles and new localization strategies.

People living in affected areas often have no choice other than entering minefields to sustain their families, e.g. for harvesting fruit, cultivating land, or crossing the border to work in the richer neighbouring country, and the majority of accidents related to landmines occur when people deliberately enter such areas. For example, in Cambodia in 2002, 33% of incidents happened while the victim was tampering with a mine to extract explosive for sale, 13% of incidents happened when the victim was farming and another 13% while collecting wood [2]. The need for prosthetics for upper limbs and legs in mine-affected countries is high.

The effectiveness of any technique or tool aimed at developing countries depends on its acceptability<br />

by the local people who will use it. Therefore, taking into account the expectations and capabilities of<br />

local end-users, from all technical, psychological, and cultural (anthropological) points of view, is a<br />

fundamental premise to successful solutions.<br />

This paper aims at presenting the efforts of the researchers of the PMARlab of the University of Genoa in the development of technical solutions to landmine-related problems that are friendly to local users.

2 The PMARlab profile<br />

The PMARlab is the laboratory of Design and Measurements for Automation and Robotics of the<br />

University of Genoa, Italy. The laboratory has large research interests. The main research areas are:<br />

Robotics, Intelligent Automation and Measurements. The aim of the lab is to constantly support a<br />

didactic and research environment to face the requirements of analysis and design of mechatronic<br />

systems during their whole life-cycle, with a particular attention to the development of theoretical<br />

methods. Moreover, the lab has specific skills in modeling and simulation, CAD and virtual<br />

prototyping.<br />

Since 2000, the research interests of the lab have broadened to humanitarian issues. In particular, the lab is engaged in the design of machines and systems for humanitarian demining in areas that are not easily accessible and are

covered by thick vegetation [3-5], in development of prostheses for upper limbs [6] and in design of<br />

rescue robots using unconventional locomotion, e.g., peristaltic [7].<br />

The research work inside the laboratory is shared between researchers and students in order to reach<br />

research results and, at the same time, to train the students. Each year, within the courses on Robot

Mechanics and Industrial and Service Robotics a specific subject is proposed to the students and<br />

interdisciplinary teams are composed: after a deep discussion about user needs, literature proposals<br />

and market search, the design work starts. Each team works in parallel performing the design of the


specific module it is in charge of, taking into account the pre-defined skeleton layout and the needed

interfaces.<br />

3 Design methodology<br />

Hereafter the service robots design methodology [8-9] adopted within the PMAR laboratory is shortly<br />

outlined. During the first part of the project, the combined knowledge, expertise and experience of the<br />

researchers and of the local field experts and practitioners are used to know user needs [10], to derive<br />

specifications and eco-requirements [11], and to set robotic devices performances. A comprehensive<br />

gathering of information from scientific literature, as well as from expert humans knowledge is done<br />

to guide in the definition of the technologies and methodologies for system design and development.<br />

The large use of physical phenomena modelling [12], computer simulation and virtual reality testing is<br />

then scheduled, to provide a thorough characterization of the artefact's life-cycle behaviour [13]; this will allow testing, at the design phase, of competing architectures (of the mechanical structure, of the controller, of the sensorial system) and finding out those improving the overall figure of merit. Simulation, in fact, after a thorough investigation of achievements and drawbacks, offers affordable commitment, making it possible to rank competing robotic solutions (Fig. 1a). The computer simulation,

moreover, offers, during the system utilization phase, an important aid about the overall work<br />

management and tasks allotment; taking into account the environment-robot interactions<br />

characteristics and the system’s current statics and dynamics, the simulation will be helpful to define<br />

the operating tasks. Kinematics and dynamics models are very useful for control laws setting, mainly<br />

if high performances are required and non-linear, model based control systems have to be<br />

implemented. Static models give important information about low motion intrinsic stability.<br />

Figure 1 - Design approach and methodology (a) and relationship between form, function and behaviour of a system (b).

The mathematical models are defined and implemented in suitable software modules to be interfaced<br />

and included within the general purpose CAD/CAE packages. These modules will be useful for<br />

motion planning and mission execution purposes. Based on the selected configuration, a reduced<br />

performance digital prototype with basic control features is implemented in order to assess the system<br />

performances and to guide in further development work, while reducing risks and relevant cost of<br />

wrong physical prototypes.<br />

Throughout all the design phases mechatronics methodology and modularity issues [14] are used to<br />

achieve an optimal design of the e-mechanical product. As a design philosophy, mechatronics serves<br />

as an integrating approach to engineering design [15]. It is important for the designer to be able to


simulate the behaviour of the current state of the design: as the design evolves its form, behaviour and<br />

function should be consistent with each other (Fig. 1b). Modularity allows economical and easy set-up<br />

and maintenance of several configuration: a modular robot is built from physically separate sub-units,<br />

each contributing to the operation of the robot.<br />

Digital mock-up and graphic rendering, directly connected with CAD tools, enhance concept design: moving life-cycle assessments (Fig. 1c) forward by means of virtual prototypes thus allows one to devise the optimal layout and the best mechanical architecture of the robotic system [16] subject to specific cost and performance constraints. Preliminary investigation by digital mock-up is becoming the engineering means to rank the technological appropriateness and leanness of competing solutions.

4 Robots designed for rescue and landmine localization<br />

Most of the realized and working systems for landmine localization are designed for terrains that are clear and, frequently, sufficiently even and consistent. So far, effective solutions for difficult terrains that are very uneven, rocky and covered by thick vegetation are lacking.

For such difficult terrains and environments traditional machines are precluded due to size or shape<br />

and appendages such as wheels or legs may cause entrapment and failure.<br />

The PMARlab studies robotic solutions based on biological locomotion suggested by the nature. The<br />

aim is to single out the locomotion mechanical principle and laws to be used in the design of bio<br />

inspired mechanisms.<br />

The main applications of these bio-inspired robots are the mines localization in difficult terrains and<br />

rescue. The robots are thought as mobile devices equipped with suitable sensors in order to fulfill<br />

specific inspection tasks.<br />

This approach led the PMARlab researchers to consider and study unconventional locomotion systems<br />

and propose innovative robotic solutions by exploiting the recalled principles of mechatronic and<br />

modular design. This section is devoted to the presentation of some of these solutions.<br />

Figure 2 [4] shows some tele-operated robotic modules based on different locomotion principles.<br />

a<br />

b)<br />

c)<br />

Figure 2 – Virtual mock-ups of smart crawling robots for landmine localization in thick vegetation using REST (Remote<br />

Explosive Scent Tracing).


The locomotion of the module of Fig. 2a is performed by means of peristaltic movements that provide thrust, while grip is provided by needles that are pushed in and out of the external envelope. Figure 2b presents a module thrust by counter-rotating screws, fit for muddy and sandy grounds. The module in Fig. 2c is thrust by four needle crawlers; it is equipped with suitable detaching wheels for foliage

entangled in the needles of the crawlers. All these modules are powered by means of an umbilical that<br />

also includes wires for signals data. It is noteworthy that these modules have been designed in close<br />

collaboration with students involved in robotic courses.<br />

A further activity recently launched at the PMARlab is the simultaneous design of a modular worm-like robot<br />

using the Pro/Intralink facility by PTC. Different student teams work on the same project: each one is charged with

the design of a specific module. The different types of modules can be combined to satisfy different mission<br />

requirements. Typically modules are designed to perform an elementary function, such as one or more degree<br />

of freedom of motion or a service: e.g. thrust modules (generating the peristaltic contractions-relaxations); steering

modules (bending the snake to steer it in a desired direction); sensor modules (segments that host sensors,<br />

eventually endowed with some degree of freedom); head modules (with a camera, eventually carrying tools<br />

such as grippers or sampling tools).<br />

Figure 3 – Robotic modules for steering and peristaltic thrust.

Figure 3 shows the motion cycle of a worm segment composed of three 3-dof modules actuated by SMA

springs. Each module can contract and relax or can bend in any direction to steer the snake. The central steel<br />

spring gives shear and torque rigidity to the module. The dark rubber bellows on each module preserve the<br />

inside from any dirtiness coming from the environment, such as water or ground. Moreover, these bellows<br />

provide grip by swelling and contracting depending on the distance between the two bases of the segment<br />

(that is a function of the length of the system of springs). It is possible to arrange plug&play connectors for<br />

signals and power on the interfaces in order to get smart interfacing of the modules.<br />

Figure 4 shows some preliminary physical prototypes of the module.<br />

Figure 4 – Robotic modules for steering and peristaltic thrust.

Figure 5 – Conceptual design of other new peristaltic thrust modules.


Finally, Figure 5 shows the conceptual design of some innovative peristaltic modules. Small elastomeric umbrellas are opened and closed by suitable inner mechanisms with flexible joints. These umbrellas provide grip both by enlarging beyond the external diameter of the snake and, mechanically, by biting the ground with the edge of the umbrella. The result is a one-way movement (a preferential direction of motion, from left to right in the pictures). When the snake has to be recovered, all umbrellas of the thrust modules are closed and the snake is pulled back by the umbilical. The actuation is again by SMA elements placed inside the modules.

Figure 6 – Simulation of the snake robot thrust by a pushing machine; this robot has been designed for minefield area reduction by REST (Remote Explosive Scent Tracing).

Figure 6 shows the virtual mock-up of a worm robot (co-bot) [5] thrust by a pushing machine. The idea is to exploit the bending stiffness of the worm to move it forward regardless of the nature of the ground and independently of the presence of obstacles that would make any distributed locomotion difficult. The body of the worm does not contain locomotion mechanics, so it offers a lot of space to host sensors. This is one of the reasons why this robot can be fruitfully used for landmine area reduction by REST (Remote Explosive Scent Tracing). The sampling filters can be rolled up inside the segments and, once the geometry of the robot and the number of inserted segments are known, it is simple to compute the location of each sampling filter with respect to the pushing machine. In this way it is possible to scan an area (by inserting the snake many times on parallel paths, collecting the filters each time and registering their location at the moment of sampling). Once again, the research and the design have been carried out involving students of robotics courses and one master thesis.
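As an illustration of the filter-location computation described above, the following minimal Python sketch assumes a straight insertion path, a known segment length and a filter at the mid-point of each segment; all names are hypothetical and it is not the PMARlab code:

# Minimal sketch (hypothetical names): locate REST sampling filters with respect
# to the pushing machine, assuming a straight insertion path for the worm robot.

def filter_locations(n_segments, segment_length, insertion_depth, lateral_offset):
    """Return (x, y) of the filter carried by each inserted segment.

    x: distance along the insertion direction, measured from the pushing machine.
    y: lateral offset of the current insertion path (parallel-path scanning).
    Each filter is assumed to sit at the mid-point of its segment.
    """
    locations = []
    for i in range(n_segments):
        x = insertion_depth - (i + 0.5) * segment_length  # segment i counted from the tip
        locations.append((x, lateral_offset))
    return locations

# Example: scan an area with parallel insertions spaced 0.5 m apart.
if __name__ == "__main__":
    for path_index in range(4):
        y = path_index * 0.5
        for loc in filter_locations(n_segments=10, segment_length=0.2,
                                    insertion_depth=2.0, lateral_offset=y):
            print(loc)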

5 Design of prostheses for third world countries<br />

Prostheses for people with disabilities who live in developing countries should be designed with requirements different from those for developed countries. The main characteristics of such prostheses shall be: *inexpensive to produce, purchase and maintain (locally available or easy-to-purchase materials and technical skills), *easy to use, *effective and *designed in consultation with users in a way well suited to the users' diverse social and physical environments [17]. Other important concerns are the use of human body energy for the actuation and the use of underactuated mechanisms to simplify the mechanics.

Figure 7 – Prototype of underactuated hand, with mechanics and actuation integrated in an elastomeric structural matrix.<br />

Figure 7 gives an idea of the outcomes of this approach. The figure shows the underactuated hand prototype realized at the PMARlab in 2003 [6]. The main characteristics of this prototype are a very low cost, thanks to the shrewd use of widely available materials, and the very simple skills needed to realize it. The working principle is simple and there are no constraints on the type of actuation and control; in the first prototype body energy has been considered for actuation. This hand can fully adapt to every object shape, but it is not fit to perform manipulation tasks. In fact, dexterity is not a leading requirement compared to others such as cost, local production and time to use. The elastomeric polymer chosen for the structure, the TST MS 939 from Locktite (probably to be imported), is soft to the touch and its consistency is reminiscent of that of the natural hand; this improves the acceptability to the users. The main drawbacks are a low grasping force (but this depends on the implant of the prosthesis and can be enhanced) and a somewhat high weight. The design of prostheses for third world countries is now the subject of a PhD thesis.

6 Conclusions<br />

The PMAR lab carries on branched research efforts in humanitarian demining and associated fields. In particular the PMAR lab is collaborating with the IARP HUDEM working group, EUDEM2 and GICHD. At the moment we are actively involved in the study “Providing demining technology end-users need”: an assessment of end-users' needs for humanitarian demining technologies through extensive collection of data in the field.

Acknowledgement<br />

We would like to thank EUDEM2 very much for the support they are providing within the collaboration on the study “Providing demining technology end-users need”. We would like to thank GICHD for constantly giving us advice of a high technical level. The teams of the 2004 Industrial and Service Robotics course: F. Canonica, A. Ferrara, E. Micheli, L. Rimassa and M. Sawusch are kindly acknowledged for their help in the development and design of part of the robotic modules.

References<br />

[1] Landmine Monitor Report: Toward a Mine-Free World, International Campaign to Ban Landmines (http://www.icbl.org), 2002.

[2] National Census of the Victims and Survivors of Landmines and Unexploded Ordnance in Cambodia, Cambodian Red Cross, 2002.

[3] E.E. Cepolina, Low cost robots for landmine location in thick vegetation. On-site IARP Workshop on Robots for Humanitarian Demining: HUDEM 2003, Prishtina, June 19, 2003.

[4] E. Cepolina, M. Zoppi, Cost-effective robots for mine detection in thick vegetation. CLAWAR03, September 17-19, Catania, Italia, 2003.

[5] E.E. Cepolina, Worm: a simple robotic solution to landmine location. Int. Conf. on Requirements and technologies for the Detection, Removal, and Neutralization of Landmines and UXO, Brussels, September 15-18.

[6] E. Ullman, R. Molfino, M. Zoppi, A smart and low-cost hand-like prosthesis for amputees. Poster at 35th Intl. Symposium on Robotics, ISR 2004, Paris, Nord Villepinte, March 23-26, 2004.

[7] F. Cotta, F. Icardi, Non conventional strategies for peristaltic locomotion, Tesi di Laurea, University of Genova, April 2004.

[8] R.D. Schraft, M. Hagele, R. Dillmann (1997): Technologies, state-of-the-art and challenges in service robot technology, design and applications, IARP International Advanced Robotics Programme service and personal robots: technologies and applications, Genova, Italy, 23-24 October 1997.

[9] P. Anthoine, L.E. Bruzzone, F. Cepolina, R. Molfino: Metodologia di progettazione meccatronica per robotica di servizio, 1a Conferenza Nazionale “Sistemi Autonomi Intelligenti e Robotica Avanzata”, Frascati, Italia, 29-31 Ottobre 2002.

[10] E.E. Cepolina, C. Bruschini and K. De Bruyn, Providing demining technology end-users need: Survey on minefields – Methodology & Aims, EUDEM2, March 2004.

[11] S.B. Billatos, N.A. Basaly: Green technology and design for the environment, Taylor & Francis, 1997.

[12] Oelen, W., Modeling as a tool for design of mechatronic systems – design and realization of the Mobile Autonomous Robot Twente, Ph.D. thesis, University of Twente, 1996.

[13] C. Luttropp, J. Lagerstedt: Customer benefits in the context of life cycle design, Eco-Design '99: 1st Intl. Symp. on Environmental Conscious Design and Inverse Manufacturing, Tokyo, Feb. 1999.

[14] S. Farritor, S. Dubowsky, N. Rutman, J. Cole, A Systems-Level Modular Design Approach to Field Robotics, IEEE International Conference on Robotics and Automation, Minneapolis, MN, 1996, Vol. 4, pp. 2890-5.

[15] P. Lambeck, B. Bertsche, G. Lechner, Creating Size Ranges Easily Using An Integrated Design Tool, The Sixth International Conference on Mechatronic Design and Modeling, Cappadocia, Turkey, September 4-6, 2002.

[16] R. Molfino, M. Zoppi, Virtual engineering techniques: application to the design of a prehensor for picking up limp pieces. EVEN Workshop on Virtual Eng. Appl. for Design and Product Development: the Footwear case, September 11-12, Gallipoli, Italia, 2003.

[17] Kejlaa, G.H., Consumer concerns and the functional value of prostheses to upper limb amputees. Prosthetics & Orthotics International 17:157-163, 1993.


Robosoft's Advanced Robotic Solutions for outdoor risky<br />

interventions<br />

Pierre Pomiers, Vincent Dupourqué<br />

Robosoft, Technopole d'Izarbel, 64210 Bidart, France<br />

Tel: +33 5 59 41 53 66<br />

Fax: +33 5 59 41 53 79<br />

E-mail: pierre@robosoft.fr<br />

Abstract<br />

Tomorrow's advanced all-terrain robotic applications will have to cope with various situations and perform many different tasks in a dynamic and changing environment. Furthermore, to find concrete applications in a wide range of user-oriented industrial products, such systems, embedding several computing units, have to cope with an increasing demand for interactivity and to support a number of non-critical pieces of hardware and software. This whole set of different capabilities needs to be performed reliably and safely over long time periods. To this aim not only advanced programming techniques but also appropriate control architectures are required. For these reasons, Robosoft proposes a set of hardware and software components developed from our own experience in the field of automatic transportation of people and goods, which can be easily adapted to robotic solutions for outdoor risky interventions.

1 Overview<br />

Most papers concerned with real-time embedded application design present experimental tests showing that theoretical results, obtained from formal analysis, match the real-time behaviour of the embedded system. But they do not consider the critical aspects of integrating, or interfacing, with other non-real-time-compatible processes such as a complex high-level control system or end-user applications. To manage the complexity of computing such application algorithms, the control software is built using a dedicated software environment: iCORE. iCORE relies on the SynDEx 1 data flow graph formalism (introduced by INRIA) whose objective is to provide rapid prototyping and error-free implementation procedures for distributed and heterogeneous applications.

Starting from this flexible and reliable development approach, we present how our advanced mobile systems can easily be used and customized for implementing outdoor applications while guaranteeing integrity during risky interventions. Each point of the method is mainly discussed through one of the Robosoft all-terrain products: the robuCAR TT. It is an all-terrain car-like mobile platform designed specifically for exploring and for performing both measurements and actions in hazardous environments. Last, in order to illustrate the possible robot operating modes, we will focus on software modules covering various needs: autonomous navigation, fleet management, remote control, specific HMI...

2 iCORE development environment<br />

The robotic solutions we describe here make use of custom control architectures (composed of one Intel x86 Linux/RTAI machine and from one to eight Motorola MPC555-based control boards 2) with CAN buses as the communication medium. This section covers both the development environment and the embedded targets.

1 Synchronized Distributed Executive<br />

2 cb555 boards manufactured by Robosoft as part of its own control system products


2.1 iCORE: an approach based on SynDEx methodology<br />

The application development method discussed here makes use of both SynDEx tools and Robosoft kernels. Developed by INRIA, SynDEx V6 is an interactive graphical software tool (see Fig. 1) with on-line documentation (refer to [3]), implementing the AAA 3 methodology. Here is the list of services offered by the combination of SynDEx and the Robosoft proprietary kernels:

• specification of an application algorithm as a conditioned data-flow graph (or interface with the<br />

compiler of one of the Synchronous languages ESTEREL, LUSTRE, SIGNAL through the common<br />

format DC)<br />

• specification of a multicomponent as a graph<br />

• heuristic for distributing and scheduling the algorithm on the multicomponent with response time<br />

optimization<br />

• visualization of predicted realtime performances for the multicomponent sizing<br />

• generation of dead-lock free executives for real-time execution on the multicomponent with optional<br />

real-time performance measurement. These executives are built from a processor-dependent executive<br />

kernel. SynDEx presently comes with executive kernels for various digital signal processors,

microprocessors and micro-controllers.<br />

The distributing and scheduling heuristics as well as the predicted real-time diagram, help the user to<br />

parallelize his algorithm and to size the hardware while satisfying real-time constraints. Moreover, as the<br />

executives are automatically generated with SynDEx, the user is relieved from low level system<br />

programming and from distributed debugging. This allows optimized rapid prototyping and dramatically<br />

reduces the development cycle of distributed real-time applications.<br />

3 Algorithm Architecture Adequation<br />

Fig. 1: Application design example using SynDEx CAD


The SynDEx development system described above is able to generate executive binaries for various types of target, including the ones that compose the Robosoft control platform. The Robosoft control architecture typically embeds one or more MPC555-based boards and an Intel x86 real-time Linux computer.

2.2 Robosoft cb555 control board<br />

The Robosoft cb555 [4] board (see Fig. 2) is a stand-alone four-axis controller designed for critical industrial process handling. Built around a 32-bit PowerPC architecture, it provides high computation performance (refer to Table 1 for a detailed description of the board's I/O capabilities).

Fig. 2: The cb555 Robosoft control board<br />

2.3 Robosoft emPC and wsPC computers<br />

Table 1: Robosoft control board connectors description

The context of real-time embedded application programming is quite different from the classical one a user usually meets. The notion of "real-time" is not present in a standard Linux kernel. Such real-time dedicated mechanisms can be added by installing an RTOS 4 on top of the standard Linux kernel [1] [2]. Robosoft based both the emPC and wsPC product ranges (respectively embedded and workstation computers) on RTAI, which is widely used in the embedded industry for prototyping and which is supported by very active companies.

RTAI basic principle is rather simple. RTAI provides deterministic and preemptive performance<br />

in addition to allowing the use of all standard Linux drivers, applications and functions. To this aim, RTAI<br />

decouples the mechanisms of the real-time kernel from the mechanisms of the general purpose Linux<br />

kernel so that each can be optimized independently and so that the real-time kernel can be kept small and<br />

simple. In our case, the primary function of RTAI kernel is to provide real-time tasks with direct access to<br />

the raw hardware, so that they can execute with minimal latency and maximal processing resource, when<br />

required.<br />

3 The robuCAR TT implementation<br />

The robuCAR TT platform (see figure 3) is a mobile robot offering great all-terrain capabilities, able to be driven over very hazardous landscapes (such as forests, military sites, building sites...). Consequently, this platform perfectly fits a wide range of missions: autonomous exploration, all-terrain tele-operation, supervised demining operations, ... Table 2 gives the main electromechanical specifications of the robuCAR TT. The platform control is handled by a heterogeneous architecture composed of both cb555

4 Real-Time Operating System


boards and an embedded PC. Figure 4 shows how the hardware units are organized: two cb555 controllers are dedicated to low-level critical loops, while the embedded PC focuses more on advanced algorithms and interfaces with asynchronous devices such as wireless network, GPS, laser scanner... Real-time communication sequences between the cb555 boards and the PC rely on a CAN bus, while higher level communications (toward supervisors, network databases, web server, as well as any other non-real-time devices) are realized through classical Ethernet links or serial lines.

Fig. 3: The robuCAR TT platform<br />

Fig. 4: robuCAR TT control architecture<br />

Table 2: robuCAR TT electromechanical<br />

specifications<br />

Thanks to a very modular hardware and software structure, the robuCAR TT can handle a wide set of applications. As an illustration, let us focus on figure 5. It represents the structure of an all-terrain supervision application; an illustrative sketch of this multi-rate organization is given after the list below. Several software levels are depicted, from the most critical 1 kHz task to the fully aperiodic web-oriented processes:

• 1kHz control loops driving the four independent motors and the two independent steering jack servos<br />

• a 100Hz loop dedicated to speed and steering profile computing<br />

• an asynchronous level (running under Linux) handling both DGPS and LMS 5<br />

• aperiodic supervision processes with both web page and database update (running over one or more<br />

computers connected to a network)<br />
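To make the layering above concrete, here is a minimal Python sketch of a multi-rate task organization similar in spirit to the one described. It is not the iCORE/SynDEx implementation; all names, rates and functions below are illustrative assumptions only:

# Illustrative sketch only: a multi-rate task organization echoing the layering above
# (fast motor loops, a slower profile loop, asynchronous sensor handling).
# It is NOT the iCORE/SynDEx implementation; all names here are hypothetical.
import time

class PeriodicTask:
    def __init__(self, name, period_s, work):
        self.name, self.period_s, self.work = name, period_s, work
        self.next_run = 0.0

    def maybe_run(self, now):
        if now >= self.next_run:
            self.work()
            self.next_run = now + self.period_s

def motor_control():            # stands in for the 1 kHz wheel/steering servo loops
    pass

def speed_steering_profile():   # stands in for the 100 Hz profile computation
    pass

def async_sensors():            # stands in for DGPS / laser scanner handling
    pass

tasks = [
    PeriodicTask("motors", 0.001, motor_control),
    PeriodicTask("profile", 0.010, speed_steering_profile),
    PeriodicTask("sensors", 0.050, async_sensors),
]

start = time.monotonic()
while time.monotonic() - start < 0.1:    # run the toy scheduler for 100 ms
    now = time.monotonic()
    for t in tasks:
        t.maybe_run(now)
    time.sleep(0.0005)                   # crude pacing; a real RTOS enforces deadlines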

Relying on the iCORE environment (merging the best of the SynDEx programming methodology and the Robosoft modular proprietary kernels), the implementation of these software levels leads to highly predictable and safe application execution, as well as safe (non-blocking) interactions between software levels.

5 LMS SICK laser scanner

Fig. 5: robuCAR TT application structure<br />

4 iCORE approach: a decisive step to both hardware and software modularity<br />

With the robuCAR TT example, we have shown that fairly complex and exhaustive software applications may be implemented, covering all the needs required for all-terrain missions (supervision, exploration, ...). The iCORE approach we propose appears to be very well adapted to robotics software development and, moreover, brings a very interesting modular concept. With the iCORE approach, modularity is twofold.


First, relying on the SynDEx methodology, iCORE provides users with hardware modularity. This means that once an application is written for a given architecture (composed of a set of cb555 boards, PCs and CAN buses), it is able to run on an extension of this architecture without any modification. Hence, extending the computing capabilities of a robotic platform is totally effortless: the application is automatically redistributed in order to handle the new architecture resources.

Secondly, the iCORE approach also provides users with software modularity. As shown in figure 1, in our context an application is designed as a block diagram. Each block contains either a basic feature (from the iCORE kernels) or another set of blocks implementing a more complex feature. Thus, the iCORE approach makes software parts easily reusable, simply by copying and pasting block subsets.

Fig. 6: Example of possible platform modularity<br />

Figure 6 gives a striking illustration of the possibilities offered by iCORE modularity. On the left side, a four-wheeled platform is shown, composed of two pods. Each pod is driven by the same piece of control software (running on its own cb555). Hence, programming the control for a six-wheeled version (with three pods) is nothing but duplicating a diagram subset and adding a new cb555 into the architecture graph. Code distribution and execution will require no other step. In the same way, assuming a 6-DOF robot arm control has previously been realized using the iCORE approach, adding such an arm to the six-wheeled platform is nothing but merging the two application diagrams together and modifying the architecture description in order to fit the right set of cb555 boards and PCs.

Conclusion<br />

Relying on this flexible and reliable development methodology, the iCORE approach presented here brings efficient help to users and researchers interested in mastering the complexity of all-terrain application implementations. Thanks to its own experience in the field of automatic transportation of people and goods, Robosoft has adapted robotic solutions for outdoor risky interventions, leading to a dedicated product range: rugged hardware components (cb555 and various types of embedded PC), as well as a set of real-time software components (control loops, I/O primitives, laser and wire guidance, obstacle detection, ...). As shown by the given examples, the iCORE approach allows users to easily implement and customize outdoor applications. Finally, making use of the programming methodology introduced by SynDEx, iCORE guarantees application execution integrity during risky interventions.

References<br />

[1] E. Bianchi, L. Dozio, P. Mantegazza, DIAPM RTAI – Real Time Application Interface, Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano.

[2] RTLinux – The Realtime Linux.

[3] Thierry Grandpierre, Christophe Lavarenne, Yves Sorel, Modèle d'exécutif distribué temps réel pour SynDEx, 1998.

[4] MOTOROLA SEMICONDUCTOR TECHNICAL DATA, MPC555 Product Preview PowerPC TM Microcontroller, MOTOROLA INC., 1998.


Behaviour-based Motion Control for Offroad<br />

Navigation<br />

Martin Proetzsch † , Tobias Luksch ‡ , Karsten Berns<br />

AG Robotersysteme, Fachbereich Informatik, TU Kaiserslautern, Germany<br />

† m_proetz@informatik.uni-kl.de, ‡ luksch@informatik.uni-kl.de

Abstract— Many tasks examined for robotic application, like rescue missions or humanitarian demining, require a robotic vehicle to navigate in unstructured natural terrain. This paper introduces a motion control for a four-wheeled offroad vehicle trying to tackle the problems arising. These include rough ground, steep slopes, wheel slippage, skidding and others that are difficult to grasp with a physical model and often impossible to acquire with sensory equipment. Therefore a more reactive approach is chosen using a behaviour-based architecture. This way a certain generalisation in unknown environments is expected. The resulting behaviour network is described and initial experiments performed in a simulation environment are presented.

Index Terms— outdoor navigation, behaviour-based robotics,<br />

wheeled vehicles<br />

I. INTRODUCTION<br />

Locomotion in rough natural environments is a basic precondition for applications like humanitarian demining or rescue and monitoring after natural disasters. The field of application is normally restricted to only a few kilometres. Therefore, it is necessary to have a UGV able to surmount medium-sized obstacles as well as to cover a distance of some kilometres in an adequate time. In the literature there are several robotic systems like walking machines, snake-like robots, and wheel- and chain-driven vehicles for different types of natural terrain. As a good compromise we examine a highly flexible wheel-driven machine for locomotion in rough natural terrain.

A. Offroad Locomotion<br />

When aiming at locomotion in rough, natural terrain with a wheel-driven vehicle one has to consider several problems. For one thing the characteristics of the environment have to be examined:

Climate These characteristics include extreme temperature or humidity.

Physical features of the terrain One has to consider the slope, which can range from flat to near vertical and can cause the vehicle to stop or even tip over; surmountable or non-surmountable obstacles like rocks, holes, trees, buildings or even dynamic obstacles like animals or other vehicles; and the soil consistency, which can affect the possible torque the wheels can create as well as the vehicle's stability.

Furthermore, different classes of outdoor terrain should be taken into consideration. These include partially structured terrain with roads or paths, relatively even open land, uneven terrain with medium hills, difficult terrain with more obstacles and steeper slopes, or even extreme to near vertical terrain. This work will focus on uneven or even more difficult terrain with low obstacle density. Other activities in this area include [1], [2], and [3]. Efforts focusing on more structured terrain like [4] are not subject of this paper.

Keeping these constraints in mind, several problems arise when addressing the control side of outdoor locomotion. These can be divided into navigation and motion control [5]. For the navigation part, questions like environment modelling, localisation including relative and absolute position estimation, and trajectory generation have to be faced. The motion control has to handle the lower level control and cope with difficulties imposed by e.g. the character of the terrain or certain kinds of obstacles in a reactive manner. Motion control is the main subject of this paper whereas the navigation problems will not be addressed.

B. The Test Platform Ravon<br />

The test platform used to verify the motion control is a wheel-driven outdoor vehicle by Robosoft (see figure 1). It has a size of about 1.4 metres in width and 2.4 metres in length and wheels of 76 cm in diameter. Each wheel has an individual DC motor, and the front and rear steering are independent. The robot is capable of driving at up to 3 m/s and of climbing slopes with 30% inclination.

Fig. 1. The four-wheeled outdoor vehicle Ravon<br />

II. KINEMATIC MODEL<br />

This section introduces an abstract single track kinematic<br />

model to describe the essential features for the motion control.<br />



This model is extended to four wheels to calculate the single<br />

wheel velocities and to analyse the errors imposed by the<br />

coupled steering.<br />

A. Single Track Model<br />

The motion of a four wheel steering vehicle can be described<br />

using a single track model [6] as illustrated in figure 2.<br />

In this model the front wheels of a four wheel vehicle are<br />

merged together to one wheel in the centre line of the vehicle.<br />

The same action is applied to the rear wheels. This model<br />

describes the features being essential for the motion of a<br />

four wheel steering vehicle, i.e. translational and rotational<br />

movement.<br />

The description of the vehicle position and motion always<br />

refers to a point C representing the kinematic centre of the<br />

vehicle. The yaw angle ψ describes the heading of the vehicle<br />

body. The vehicle moves with a velocity v to the direction β<br />

(called “side-slip angle”) which is measured relatively to the<br />

vehicle body. At the front reference point the velocity is called<br />

vf , at the rear point vr.<br />

The length between C and the position of the merged front<br />

wheels is called lf , the length to the rear point lr. Thus, the<br />

vehicle length l = lf + lr.<br />

When moving, the vehicle motion has a centre called<br />

“instant rotating centre” (IRC) which, in case of parallel<br />

steering of the front and rear wheel, can reach to infinity. The<br />

distance from C to IRC is called rc, the distances from the<br />

front and the rear wheel rf and rr.<br />

In order to force the vehicle to follow a given track the<br />

steering angles (δf and δr) relative to the vehicle body must<br />

be set so that the lines perpendicular to the front and rear<br />

wheel meet in the intended IRC.<br />

There is the assumption that the steering angles of the<br />

wheels are limited (|δf| < π/2 and |δr| < π/2) so that the IRC

never lies at the vehicle body line. In most practical cases<br />

this special case need not be considered due to the constraints<br />

imposed by the mechanical structure restricting the steering<br />

angles.<br />

Fig. 2. Single Track Model<br />

Using these assumptions the side-slip angle can be calculated as follows:

β = arctan[ (lf · tan(δr) + lr · tan(δf)) / (lf + lr) ]   (1)

The distances to the IRC can be calculated as follows:

rc = lf · sin(π/2 − δf) / sin(δf − β) = lr · sin(π/2 + δr) / sin(β − δr)   (2)

rf = lf · sin(π/2 + β) / sin(δf − β)   (3)

rr = lr · sin(π/2 − β) / sin(β − δr)   (4)

Assuming that the mechanical constraints of the vehicle do not allow turning around the kinematic centre C, the vehicle can be controlled by providing the vehicle velocity v and the front and rear steering angles δf and δr. In this case the velocities of the two virtual wheels are of interest:

vf = v · rf / rc   (5)

vr = v · rr / rc   (6)

The movement of a vehicle in outdoor terrain consists of the change of the pose in three dimensions having the following elements:

• coordinates for x, y, and z

• angles for roll (ρ), pitch (φ), and yaw (ψ)

To determine the components of a movement the angle λ is introduced which denotes the angle between the x-y-plane and the vehicle velocity vector:

λ = φ · cos(β) − ρ · sin(β)   (7)

Using this angle the change of the position in three dimensions can be calculated as follows:

ẋ = v · cos(λ) · cos(ψ + β · cos(ρ))   (8)

ẏ = v · cos(λ) · sin(ψ + β · cos(ρ))   (9)

ż = v · sin(λ)   (10)

The change of the heading angle is the following:

ψ̇ = arctan[ (vf · sin(δf) − vr · sin(δr)) / (lf + lr) ]

The changes of the roll and pitch angle in three dimensions are determined as follows:

φ̇ = ψ̇ · sin(−ρ)   (11)

ρ̇ = ψ̇ · sin(φ)   (12)

B. Four Wheel Model

The vehicle this work is based on is constructed so that<br />

the front wheels and the rear wheels respectively are steered<br />

in pairs. This is realised in a way that—like in the case of<br />

normal car steering—the steering centre of the front wheels<br />

always lies on the extension of the rear axis (see figure 3).<br />

Accordingly the steering centre of the rear wheels lies on the<br />

extension of the front axis.<br />

This constraint determined by the mechanical realisation on<br />

the one hand simplifies the kinematic calculation. However,<br />

on the other hand it introduces errors in comparison with the<br />

correct case that the wheels are steered independently so that<br />

all wheels’ perpendicular lines meet in one point.


Fig. 3. Four Wheel Model<br />

The radii of the wheel tracks determined by the position of<br />

the IRC can be calculated using the law of cosines:<br />

rfl = sign(rf) · sqrt( rf² + (w/2)² − 2 · rf · (w/2) · cos(δf) )

rfr = sign(rf) · sqrt( rf² + (w/2)² − 2 · rf · (w/2) · cos(π − δf) )

rrl = sign(rr) · sqrt( rr² + (w/2)² − 2 · rr · (w/2) · cos(δr) )

rrr = sign(rr) · sqrt( rr² + (w/2)² − 2 · rr · (w/2) · cos(π − δr) )

Using the radii calculated above the single wheel velocities can be calculated as follows:

vfl = v · rfl / rc

vfr = v · rfr / rc

vrl = v · rrl / rc

vrr = v · rrr / rc
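For illustration, a minimal Python transcription of these kinematic relations (equations (1)-(6) and the law-of-cosines radii above) might look as follows; parameter names and the example dimensions are assumptions, and this is not the authors' implementation:

# Illustrative transcription of the single-track / four-wheel kinematics above
# (eqs. (1)-(6) and the law-of-cosines radii); not the authors' implementation.
# Note: straight driving (delta_f = beta) makes the IRC distance infinite and is
# not handled in this sketch.
import math

def wheel_velocities(v, delta_f, delta_r, l_f, l_r, w):
    """Return (v_fl, v_fr, v_rl, v_rr) for velocity v and steering angles delta_f, delta_r."""
    # side-slip angle, eq. (1)
    beta = math.atan((l_f * math.tan(delta_r) + l_r * math.tan(delta_f)) / (l_f + l_r))
    # distances to the instant rotating centre, eqs. (2)-(4)
    r_c = l_f * math.sin(math.pi / 2 - delta_f) / math.sin(delta_f - beta)
    r_f = l_f * math.sin(math.pi / 2 + beta) / math.sin(delta_f - beta)
    r_r = l_r * math.sin(math.pi / 2 - beta) / math.sin(beta - delta_r)
    # single wheel radii via the law of cosines
    def radius(r, delta):
        return math.copysign(1.0, r) * math.sqrt(
            r * r + (w / 2) ** 2 - 2 * r * (w / 2) * math.cos(delta))
    r_fl, r_fr = radius(r_f, delta_f), radius(r_f, math.pi - delta_f)
    r_rl, r_rr = radius(r_r, delta_r), radius(r_r, math.pi - delta_r)
    # wheel velocities, scaled by the ratio of each radius to the kinematic-centre radius
    return tuple(v * r / r_c for r in (r_fl, r_fr, r_rl, r_rr))

# Example (assumed dimensions): v = 1 m/s, small opposite front/rear steering angles.
print(wheel_velocities(1.0, 0.1, -0.1, 1.2, 1.2, 1.4))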

III. BEHAVIOUR-BASED MOTION CONTROL<br />

The classical control approach requires a complete physical<br />

model of the robot and the environment, which cannot be<br />

acquired with the given sensors and the examined terrain. This<br />

is the case due to possible wheel slippage, skidding and other<br />

problems implied by the vehicle-terrain interaction. Therefore<br />

this paper introduces a behaviour-based motion control not<br />

requiring a complete knowledge of the environment to achieve<br />

robust locomotion. The general behaviour architecture, the developed<br />

behaviours, their purpose and specific implementation<br />

as well as the interaction in the resulting behaviour network<br />

are described in this section.<br />


A. Behaviour Architecture<br />

The behaviour architecture used for this work is based on the one developed by Albiez and Luksch (see e.g. [7] or [8]), which in turn is loosely based on Brooks' subsumption architecture [9]. It was originally used for controlling walking machines and is inspired by the activation patterns in the brain and the spinal cord of animals. Individual behaviours are structured in layers in a hierarchical network; the coordination problem is solved by using special meta signals generated by each behaviour. Only the interaction of the behaviours and their placement in the network results in the desired actions of the overall system.

Fig. 4. The layout of a single behaviour B<br />

Each individual behaviour B = (r, a, F) is a module of the form outlined in figure 4. The input vector e can be composed of sensor information, data processed by other behaviours or coordination signals of other behaviours. The activation or motivation value ι ∈ [0..1] is used by higher level behaviours to influence the behaviour's degree of activity; the inhibition input i ∈ [0..1] allows lower level behaviours to repress its activity. The output vector u = F(e, ι, i) is composed of control signals for the robot or lower level behaviours and is calculated by evaluating the transfer function F. This function describes the behaviour's functionality and can range from a simple P-controller to complex state machines or AI algorithms.

To solve the coordination problem each behaviour generates two meta information signals a and r. The activity a(e, ι, i) ∈ [0..1] states the degree of action the behaviour is producing. The target rating r ∈ [0..1] indicates the behaviour's own evaluation of the current situation based on the input vector e. A target rating of Zero denotes that the behaviour is "content" with the situation, a rating of One represents maximum discontentedness. The target rating can conveniently be separated into an absolute and a relative fraction with r = rabs · rrel. The combination of a and r can provide all necessary information on the state of the behaviour (see figure 5).
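As an illustration of this interface, a behaviour module could be sketched in Python as below. This is an illustrative reading only, not the authors' framework; the simple proportional transfer function and all names are assumptions:

# Minimal sketch of a behaviour module B = (r, a, F) as described above.
# Illustrative only; the proportional transfer function and names are hypothetical.

class Behaviour:
    def __init__(self, gain=1.0):
        self.gain = gain

    def step(self, e, iota, i):
        """e: input vector, iota: activation in [0..1], i: inhibition in [0..1].

        Returns (u, a, r): output vector, activity and target rating."""
        error = e[0]                                  # e.g. deviation from the behaviour's goal
        u = [self.gain * error * iota * (1.0 - i)]    # simple P-controller as F(e, iota, i)
        a = min(1.0, abs(u[0]))                       # activity: degree of action produced
        r_abs = min(1.0, abs(error))                  # absolute target rating
        r_rel = 1.0                                   # relative fraction (progress not modelled here)
        return u, a, r_abs * r_rel                    # r = r_abs * r_rel as in the text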

Behaviours can further be classified into target-based, progress-based and activation behaviours. Target-based behaviours try to reach a certain goal, e.g. a mobile robot driving to a point on a map or a walking robot stabilising itself. Progress-based behaviours perform specific movements without having a target state, e.g. an exploration strategy. Finally, activation behaviours control lower level behaviours

using their activation ι. They are not directly controlling hardware.

Fig. 5. The state of a (target-based) behaviour can be deduced from its activity a and its target rating r

As already mentioned, all behaviours are arranged in a hierarchical network where a flow of meta information (a, r and ι) controls the coordination. For example, by using the activity of one behaviour as the inhibition input of another, exclusion can be realised. If more than one behaviour tries to influence a lower level behaviour, a fusion node is inserted where the weighting of the inputs is deduced from the influencing behaviours' activities.
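A minimal sketch of such an activity-weighted fusion (an assumption about the weighting scheme, not the original code) could be:

# Illustrative activity-weighted fusion of behaviour outputs: each input is weighted
# by the activity of the behaviour that produced it. Names are hypothetical.

def fuse(outputs, activities):
    """outputs: list of control values, activities: matching list of a in [0..1]."""
    total = sum(activities)
    if total == 0.0:
        return 0.0                      # no behaviour is active: neutral output
    return sum(u * a for u, a in zip(outputs, activities)) / total

# Example: two behaviours suggest steering values with different activities.
print(fuse([0.2, -0.4], [0.9, 0.3]))    # dominated by the more active behaviour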

B. Motion Control<br />

The constraints introduced in section I highly influence the design of the motion control functionality. First of all the vehicle's abilities, i.e. vehicle velocity and front and rear steering, form the basis of this approach. Following a bottom-up strategy, behaviours handling these controls are added. The next layer consists of behaviours generating an intended vehicle velocity and a translatory or rotatory movement. Finally, high level behaviours are introduced following a top-down strategy, i.e. defined by high level targets, and use the low level behaviours to fulfil them.

As a main challenge of the movement in outdoor terrain is<br />

handling the occurrence of slopes, the behaviours introduced<br />

below react on influences concerning the pose of the robot.<br />

These behaviours are embedded in a framework which is<br />

introduced in the following section.<br />

1) Basic Functionality:<br />

a) Unit Conversion: To unify the units used in controlling<br />

and sensing modules the values received from and sent<br />

to the user interface are adapted, i.e. all length values are<br />

converted to metres.<br />

b) Drive Input Conversion: To convert different inputs<br />

to the standard control procedure, i.e. steering angles for front<br />

and rear axis and vehicle velocity, a group which contains<br />

modules for each provided steering mode is implemented.<br />

A multiplexer is introduced for selecting the currently active<br />

control mode. These are:<br />

• The standard control mode including front and rear<br />

steering angle as well as the vehicle velocity. This mode<br />

can be used for manual steering or for the behaviourbased<br />

control.<br />

• A steering mode receiving the reciprocal value of the<br />

radius of the robot motion, the side-slip angle, and the<br />

vehicle velocity.<br />

c) Kinematics: The kinematics of a four wheel steering vehicle is described in section II. These considerations are used to form

a module having two tasks: On the one hand it receives vehicle<br />

motion control data, i.e. vehicle velocity and steering angles,<br />

and calculates single wheel velocities. On the other hand data<br />

acquired from sensors specifying single wheel velocities and<br />

steering angles is converted to the assumed vehicle velocity<br />

and the steering angles of all four wheels.<br />

d) Kinematic and Dynamic Constraints: The automatic<br />

control of a vehicle using a behaviour-based approach is independent of its mechanical characteristics. The control output

of behaviours describes intended values ignoring kinematic<br />

and dynamic constraints of the vehicle’s structure. Therefore<br />

a central unit controlling the vehicle constraints is introduced<br />

regarding the following restrictions:<br />

• maximum vehicle velocity<br />

• maximum vehicle acceleration and deceleration<br />

• maximum front and rear steering angle<br />

• maximum steering velocity<br />

• maximum steering acceleration and deceleration<br />

e) Wheel Velocity Calculation: Encoder values delivered<br />

by the robot hardware are converted to velocity values used<br />

in the control structure.<br />

f) Odometry: For position estimation using odometry, this module calculates the movement due to wheel velocities and steering angles (an illustrative sketch follows the lists below). The following input values are used:

• the vehicle velocity v<br />

• the velocities at the front and rear reference point of the<br />

vehicle (vf and vr)<br />

• the steering angles δf and δr<br />

• the current vehicle orientation consisting of roll (ρ), pitch<br />

(φ), and yaw (ψ)<br />

With the calculations from section II the following output<br />

values are generated:<br />

• the change of the position (ẋ, ẏ, and ż)

• the change of the orientations (ρ̇, φ̇, and ψ̇)
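A minimal odometry update following equations (7)-(12) might be sketched as follows; this is a straightforward Python transcription under the stated kinematic assumptions, not the authors' module, and all names are assumed:

# Minimal odometry sketch based on equations (7)-(12) above; illustrative only.
# Angles in radians, velocities in m/s.
import math

def odometry_step(v, v_f, v_r, delta_f, delta_r, rho, phi, psi, l_f, l_r, dt):
    """Return the pose increments (dx, dy, dz, d_rho, d_phi, d_psi) over dt."""
    # side-slip angle, eq. (1)
    beta = math.atan((l_f * math.tan(delta_r) + l_r * math.tan(delta_f)) / (l_f + l_r))
    # angle between the x-y-plane and the velocity vector, eq. (7)
    lam = phi * math.cos(beta) - rho * math.sin(beta)
    # position change, eqs. (8)-(10)
    dx = v * math.cos(lam) * math.cos(psi + beta * math.cos(rho)) * dt
    dy = v * math.cos(lam) * math.sin(psi + beta * math.cos(rho)) * dt
    dz = v * math.sin(lam) * dt
    # heading rate and roll/pitch changes, eqs. (11)-(12)
    psi_dot = math.atan((v_f * math.sin(delta_f) - v_r * math.sin(delta_r)) / (l_f + l_r))
    d_psi = psi_dot * dt
    d_phi = psi_dot * math.sin(-rho) * dt
    d_rho = psi_dot * math.sin(phi) * dt
    return dx, dy, dz, d_rho, d_phi, d_psi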

g) Position Determination: As position estimation must<br />

never rely on only one measurement, information from diverse<br />

measurement techniques is merged in this group. Both absolute

and relative position information can be used.<br />

The functionality introduced previously provides the frame for the behaviour-based motion control comprised of the behaviours described in the following section. First the target-based behaviours, beginning with low-level velocity and steering control, are presented following a bottom-up strategy. Afterwards some supporting progress-based behaviours are introduced. Finally a higher level activation behaviour is introduced following a top-down strategy for reaching a point.

2) Target-Based Behaviours:<br />

a) Velocity: The control of the movement of a vehicle starts with setting a velocity. Behaviours suggesting a velocity do so by providing an intended normed absolute value v ∈ [0, 1] and the desired direction vsgn (i.e. vsgn = +1 for forward movements, vsgn = −1 for backward movements, and vsgn = 0 if a behaviour wants to perform movements without caring about the direction). These values are weighted separately by a fusion node. The output of this fusion node is fed into the velocity behaviour, which has the following properties:

The intended velocity of the behaviour is defined as<br />

vintended = vmax · v · sign(vsgn) (13)<br />

where vmax is the maximal velocity to be generated, v and<br />

vsgn are the fused values of the input behaviours. For maximal

response the internal activity aint is set to 1. The absolute and<br />

relative target rating rabs and rrel are calculated as follows:<br />

rabs = abs (vintended − vcurrent)<br />

� � �<br />

∆v<br />

max 0.5, 1 −<br />

rrel =<br />

2·∆t if ∆v > 0<br />

1 else<br />

where ∆v = abs(vintended − vprevious) − abs(vintended − vcurrent)<br />

refers to the change of velocity in respect to the intended<br />

velocity. Here vcurrent is the current vehicle velocity and vprevious<br />

the vehicle velocity measured one time step ∆t before.<br />
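A minimal sketch of this velocity behaviour, an illustrative Python transcription of equation (13) and the target ratings above with assumed names, could be:

# Illustrative sketch of the velocity behaviour: intended velocity (eq. 13) and
# absolute/relative target ratings as described above. Names are assumptions.

def velocity_behaviour(v, v_sgn, v_max, v_current, v_previous, dt):
    sign = (v_sgn > 0) - (v_sgn < 0)            # sign(v_sgn) in {-1, 0, +1}
    v_intended = v_max * v * sign               # eq. (13)
    r_abs = abs(v_intended - v_current)         # absolute target rating (as written in the text)
    dv = abs(v_intended - v_previous) - abs(v_intended - v_current)
    if dv > 0:                                  # velocity is approaching the intended value
        r_rel = max(0.5, 1.0 - dv / (2.0 * dt))
    else:
        r_rel = 1.0
    a = 1.0                                     # internal activity set to 1 for maximal response
    return v_intended, a, r_abs * r_rel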

b) Steering: The low-level steering behaviour (used<br />

both for the front and the rear steering) transforms translatory<br />

and rotatory movement into a steering angle. The target angle<br />

δintended is calculated the following way (for front and rear<br />

steering):<br />

δf,intended = (δtrans · atrans + sign(v) · δrot · arot) / (atrans + arot)

δr,intended = (δtrans · atrans − sign(v) · δrot · arot) / (atrans + arot)

The internal activity and the target rating are calculated<br />

similar to the velocity behaviour using the current and previous<br />

steering angle.<br />

c) Translatory Movement to a Target Point: Tasks a robot<br />

has to achieve are often expressed as a movement to a given<br />

position. The target of this behaviour is to reach a target point<br />

by a translational movement. Depending on the current pose<br />

of the robot and the target position the intended translation<br />

(i.e. the side-slip angle) is generated. Additionally, depending<br />

on the distance to the target the velocity is set.<br />

3) Orientate to a Target Point: While the previous behaviour<br />

handled the translatory movement to a target point this<br />

behaviour has the goal to rotate the vehicle so that it points to<br />

the target. The rotational movement and the velocity are set<br />

depending on the deviation between the vehicle orientation and<br />

the target direction. The direction of the movement depends<br />

on the distance to the target point: If the vehicle is far away<br />

a forward movement is executed. If it is near the target a<br />

backward movement is performed in order to minimise the<br />

deviation. For avoiding thrashing a transition zone is used<br />

where the previously set direction is continued.<br />

a) Minimise Roll: The purpose of the Minimise Roll<br />

behaviour is to reduce the absolute value of the roll angle<br />

of the vehicle. This behaviour can be used to stabilise the<br />

vehicle’s posture.<br />

The behaviour receives the roll and pitch angle of the<br />

vehicle. Depending on the roll angle a rotational movement is<br />

performed, assuming that the near environment only changes<br />

in small amounts.<br />

b) Minimise Pitch: Similar to the Minimise Roll behaviour<br />

this behaviour tries to reduce the pitch angle by setting<br />

a rotational movement. The purpose of this behaviour is to try<br />

and orientate the vehicle along an isoline so that a movement<br />

is not hindered by steep slopes.<br />

4) Progress-Based Behaviours:<br />

a) Follow Absolutely Minimal Gradient: A problem to<br />

be considered in outdoor terrain is the elevation that has to be<br />

overcome. A movement along isolines has the advantage of<br />

generating a smooth path concerning height. For this purpose<br />

this behaviour has the goal to follow the absolutely minimal<br />

gradient, i.e. not to change height.<br />

The intended translation is calculated considering the minimal slope:

τ = 0 if φ = 0 and ρ = 0

τ = π/2 if φ = 0 and ρ ≠ 0   (14)

τ = arctan(ρ/φ) otherwise

The velocity is determined using the change in height—the<br />

higher this value the faster the behaviour wants to move. As<br />

the direction of the movement does not influence the goal of<br />

this behaviour, vsgn is set to 0.<br />
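A minimal sketch of the gradient-following direction of equation (14) (illustrative only, with assumed names):

# Illustrative computation of the intended translation direction tau from roll (rho)
# and pitch (phi), following equation (14) above. Angles in radians.
import math

def minimal_gradient_direction(phi, rho):
    if phi == 0.0 and rho == 0.0:
        return 0.0                 # level ground: no preferred direction
    if phi == 0.0:
        return math.pi / 2.0       # pure roll: move along the isoline
    return math.atan(rho / phi)    # general case: arctan(rho / phi)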

b) Descend: Especially for resolving situations where<br />

the robot movement is stagnating due to a too high slope<br />

a behaviour moving the vehicle downward is introduced. By<br />

using the current information about pitch (φ) and roll (ρ) the<br />

desired direction of the movement τ is calculated similar to<br />

equation 14.<br />

5) Activation Behaviours: Finally a behaviour with the goal to reach a given target point is introduced, controlling the behaviours mentioned above by influencing their activation ι. The algorithm used can automatically detect degrees of progress or stagnation and adjusts the activations of the controlled behaviours using Gaussians to resolve the problems.

IV. EXPERIMENTS

A. Simulation Environment

Fig. 6. The user interface with the simulated terrain

For simulating the four wheel steering robot a user interface<br />

(see figure 6) on the one hand provides a 2D and 3D view of<br />



the robot and its environment and on the other hand gives the<br />

user the ability to control the robot manually or by activating<br />

behaviours.<br />

B. Results<br />

One of the experiments carried out compares the performance<br />

when driving towards a target point with most<br />

supporting behaviours activated and without. In the latter case<br />

the vehicle will move on a straight line and possibly even stop<br />

at a slope too steep instead of searching a more energy efficient<br />

way. Figures 7 and 8 plot the energy used during both runs over time. One can observe higher peaks and an energy sum nearly twice as high at the end of the run with most behaviours turned off.

Fig. 7. Energy (top) and accumulated energy (bottom) required for reaching a target point using all supporting behaviours

Fig. 8. Energy (top) and accumulated energy (bottom) required for reaching a target point without supporting behaviours

Figure 9 shows the activation, activity and target rating of the behaviour generating translatory movement

towards the target point. The activation ι is reduced by the<br />

activation behaviour in case the target rating is low enough to<br />

increase the activation of the supporting behaviours. As soon<br />

as the movement of the vehicle is diverging too much from the<br />

goals of the behaviour (high target rating), its activity rises to

get the robot back on track. The activation behaviour increases<br />

ι again to give the behaviour a higher weighting. The target<br />

rating drops towards Zero as soon as the target point is getting<br />

close.<br />


0 10 20 30 40 50 60<br />

Fig. 9. Activation (ι), activity (a), and target rating (r) of the translatory<br />

point access behaviour while reaching a target point<br />

V. CONCLUSION AND OUTLOOK<br />

This paper introduced a behaviour-based motion control for a mobile outdoor robot. Its kinematics has been modelled using a simplifying single track model to deduce translational and rotational movements and the odometric reckoning. It is then extended to a four wheel model to calculate e.g. single wheel velocities. Based on this information a behaviour network has been designed to allow the vehicle safe locomotion towards a target point in uneven terrain. This is achieved by the cooperation of a set of behaviours reacting to sensor information like the pose of the robot. Initial experiments in a simulation environment indicate the approach's feasibility and show promising results.

Next steps will be further experiments under real world conditions with the mobile robot to verify the results from the simulation. Additional behaviours will be included to provide higher level functionality and to incorporate sensor information to a greater extent. Localisation and navigation algorithms using natural landmarks will be some of the next problems to be addressed.

REFERENCES<br />

[1] A. Hait, T. Siméon, and M. Taïx, “Robust motion planning for rough terrain navigation,” in IEEE International Conference on Intelligent Robots and Systems, 1999.

[2] M. Cherif, C. L. J. Ibañez Guzmán, and T. Goh, “Motion planning for an all-terrain autonomous vehicle,” in International Conference on Field and Service Robotics, 1999.

[3] S. Singh, R. Simmons, T. Smith, A. Stentz, V. Verma, A. Yahja, and K. Schwehr, “Recent progress in local and global traversability for planetary rovers,” in Proceedings of the IEEE International Conference on Robotics and Automation, 2000.

[4] D. Coombs, K. Murphy, A. Lacaze, and S. Legowik, “Driving autonomously offroad up to 35 km/h,” in Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, USA, 2000.

[5] A. Singhal, “Issues in autonomous mobile robot navigation,” http://www.cs.rochester.edu/research/mobile/docs/areapaper.ps.gz, 1997.

[6] D. Wang and F. Qi, “Trajectory planning for a four-wheel-steering vehicle,” 2001.

[7] J. Albiez, T. Luksch, K. Berns, and R. Dillmann, “A behaviour network concept for controlling walking machines,” in 2nd International Symposium on Adaptive Motion of Animals and Machines (AMAM), 2003.

[8] J. Albiez, T. Luksch, K. Berns, and R. Dillmann, “An activation-based behavior control architecture for walking machines,” The International Journal on Robotics Research, Sage Publications, 2003.

[9] R. Brooks, “A robust layered control system for a mobile robot,” vol. RA-2, no. 1, pp. 14–23, Apr. 1986.



PLANNING WALKING PATTERNS FOR A BIPED ROBOT USING<br />

FUZZY LOGIC CONTROLLER<br />

Arbnor Pajaziti, Ismajl Gojani, Ahmet Shala, Bujar Pira<br />

arbnorpajaziti@hotmail.com , ismajlgojani@hotmail.com , shalaa@un.org , b_pira@hotmail.com<br />

Mechanical Engineering Faculty, University of Prishtina<br />

Prishtina - KOSOVA<br />

Abstract<br />

As biped robots assume more important roles in applicable areas, they are expected to perform difficult tasks like mine detection and to adapt quickly to unknown environments. Therefore, biped robots must quickly generate the appropriate gait based on information received from the visual system. In this paper a conventional PD controller and a Fuzzy Logic Controller for planning walking on flat ground of a planar five-link biped robot are presented. Both single support and double support phases are considered. The joint profiles have been determined based on constraint equations cast in terms of step length, step period, maximum step height and so on. When the ground conditions and the stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. The input torque of the biped robot is obtained using the Computed Torque Method. The locomotion control structure is based on the integration of the kinematics and dynamics models of the biped robot. The proposed control scheme and Fuzzy Logic algorithm could be useful for building an autonomous non-destructive testing system based on a biped robot. The Fuzzy Logic rule base is optimized using a Genetic Algorithm. The effectiveness of the method is demonstrated by a simulation example using Matlab software.

Key words: Biped Robot, Conventional Controller, Fuzzy Logic, Genetic Algorithm, Planning Walking.<br />

1. INTRODUCTION<br />

Research on biped humanoid robots is currently one of the most exciting topics in the field of robotics, and many projects are ongoing [1]. From the viewpoint of control and walking pattern generation, these works can be classified into two categories. The first group requires precise knowledge of the robot dynamics, including the mass, the location of the centre of mass and the inertia of each link, to prepare walking patterns; it therefore relies mainly on the accuracy of the models. This group is called the Zero-Moment Point (ZMP) based approach, since it often uses the ZMP for pattern generation and walking control [7], [8].

In contrast, the second group uses only limited knowledge of the dynamics, e.g. the location of the total centre of mass, the total angular momentum, etc. Since the controller knows little about the system structure, this approach relies heavily on feedback control. It can be called the inverted-pendulum approach, since it frequently uses an inverted pendulum model.

Further, because the environments where humanoid robots operate are unpredictable, the robot must generate the appropriate gait in real time based on the information received from the visual system [2]. The robot must walk with different step lengths, overcome obstacles, etc. Until now, the humanoid robot gait has for the most part been prescribed based on human motion. However, measuring the angle trajectories during human walking for a wide range of step lengths and step times is difficult and time consuming.

In order to generate a human-like motion, we propose a control scheme and fuzzy logic algorithm that could be useful for building an autonomous non-destructive testing system based on a biped robot. When the humanoid robot is constrained to walk with a particular velocity and step length, the inputs to the FLC are the desired position and velocity. The inputs to the FLC are first labelled, depending on their value, as PO (positive), ZE (zero) or NE (negative). The GA generates the optimal gaits for a wide range of parameters, such as the step length and step time during walking, which are used to create the FLC structure.

The paper is organized as follows. The kinematic and dynamic modeling of the biped robot is given in Section 2. Section 3 discusses the FLC-GA approach in detail. The results of computer simulations are presented and discussed in Section 4. Some concluding remarks and the scope for future work are given in Section 5.

2. KINEMATICS AND DYNAMICS MODELING<br />

For modeling the biped with four degrees of freedom (DOFs) we have used the rotation angles of the joints: θ1, θ2, θ4 and θ5.


Fig. 1. Full gait cycle of a five-link biped walking in the sagittal plane<br />

From Figure 1 it follows that:

x_h = x_a1 + l1·cos(θ1) + l2·cos(θ2)
x_h = x_a2 + l5·cos(θ5) + l4·cos(θ4)
y_h = H_h = y_a1 + l1·sin(θ1) + l2·sin(θ2) ................................................ (1)
y_h = H_h = y_a2 + l5·sin(θ5) + l4·sin(θ4)

where:
x_a1, x_a2, y_a1 and y_a2 are the trajectories of the lower limbs (feet) on the x and y axes respectively;
x_h = x_hS + x_hD and y_h = y_hS = y_hD;
x_hS and y_hS are the trajectories of the hip (body) on the x and y axes during single support;
x_hD and y_hD are the trajectories of the hip (body) on the x and y axes during double support.

The solution of equations (1) for the joint angles v = [θ1 θ2 θ4 θ5]^T represents the inverse kinematics of the biped.
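The paper does not spell out how equations (1) are solved, so the short Python sketch below shows only one standard way to do it: a two-link (law-of-cosines) inverse-kinematics solution for one support leg. The ankle/hip positions are made-up numbers and the link lengths are used purely for illustration.

```python
import numpy as np

def leg_ik(xa, ya, xh, yh, l_lower, l_upper):
    """One standard two-link IK solution for a leg of the planar biped.

    Solves  xh = xa + l_lower*cos(th_low) + l_upper*cos(th_up)
            yh = ya + l_lower*sin(th_low) + l_upper*sin(th_up)
    for the absolute link angles (th_low, th_up); only one of the two
    possible knee configurations is returned.
    """
    dx, dy = xh - xa, yh - ya
    d2 = dx * dx + dy * dy
    # Law of cosines gives the relative knee angle between the two links.
    cos_knee = (d2 - l_lower**2 - l_upper**2) / (2.0 * l_lower * l_upper)
    knee = np.arccos(np.clip(cos_knee, -1.0, 1.0))
    th_low = np.arctan2(dy, dx) - np.arctan2(l_upper * np.sin(knee),
                                             l_lower + l_upper * np.cos(knee))
    th_up = th_low + knee
    return th_low, th_up

# Illustrative call (ankle at the origin, hip 0.55 m above and 0.2 m ahead).
print(leg_ik(0.0, 0.0, 0.2, 0.55, l_lower=0.332, l_upper=0.302))
```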

The Lagrange formulation is used to derive the dynamic equations of the biped. After calculation of the Lagrange function, the dynamic equations of the biped in Figure 1 can be expressed in matrix form:

D(v)·v̈ + H(v, v̇) = τ ............................................................ (2)

where the dimension of the matrix D(v) is 4x4 and the dimensions of H(v, v̇) and τ are 4x1.

3. CONTROL DESIGN<br />

A conventional PD controller is used both as an ordinary feedback controller, to guarantee asymptotic stability during the motion period, and as a reference model for the responses of the controlled biped.

The proposed control scheme for the biped is presented in Figure 2.

The computed torque τ(t) is:

τ(t) = D(v)·[ v̈_d + K_V·(ė + Y_V) + K_P·(e + Y_P) ] ................................. (3)

where K_P and K_V are the PD gains, while Y_P and Y_V are the additional angle and velocity corrections, respectively.
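As a minimal numerical sketch (not the authors' implementation), the control law (3) can be written as a one-line function; the matrix D, the gains and the FLC corrections Y_p, Y_v below are placeholder values only.

```python
import numpy as np

def control_torque(D, v_ddot_des, e, e_dot, Yp, Yv, Kp, Kv):
    """Control law of Eq. (3): tau = D(v) [ v_ddot_des + Kv (e_dot + Yv) + Kp (e + Yp) ]."""
    return D @ (v_ddot_des + Kv @ (e_dot + Yv) + Kp @ (e + Yp))

# Placeholder numbers, not the robot parameters of Table I.
D = np.eye(4) * 2.0                       # stand-in for D(v)
Kp, Kv = np.eye(4) * 100.0, np.eye(4) * 20.0
e, e_dot = np.array([0.05, -0.02, 0.01, 0.0]), np.zeros(4)
Yp = Yv = np.zeros(4)                     # FLC corrections switched off in this toy example
print(control_torque(D, np.zeros(4), e, e_dot, Yp, Yv, Kp, Kv))
```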


Fig. 2. Control Scheme for planning of walking pattern of biped using Fuzzy Logic Controller<br />

3.1. Design of optimal FLC using GA<br />

The controller has a neural-network (NN) structure in which the neurons of successive layers are fully connected through their respective weights. The inputs to the FLC are the desired position and velocity. These inputs are first labelled, depending on their value, as PO (positive), ZE (zero) or NE (negative). The weights V_ij between the input (fuzzy) layer and the hidden layer and the weights W_jk between the hidden layer and the output layer are calculated using the back-propagation algorithm [3]. The weights depend on the trajectory tracking error e = v_d - v, where v_d are the desired values and v the estimated values. The hidden layer uses a sigmoidal activation function (extended sigmoidal for the hidden and output layers [4], sigmoidal-linear [5]). Figure 3a shows the structure of the FLC with three layers (fuzzy-sigmoid-linear) after optimization by the Genetic Algorithm (GA). The general rule base between the inputs and the hidden layer is shown in Figure 3b.

[Figure 3 content: (a) the FLC structure, with a fuzzy input layer (labels PO, ZE, NE for the desired position v_d and desired velocity v̇_d), a sigmoid hidden layer (nodes S1-S9, input weights V_ij) and a linear output layer (nodes R11, R22 producing Y_p and Y_v, weights W_jk); (b) the general rule base between the input and hidden layers:

v_d \ v̇_d | PO | ZE | NE
PO         | S1 | S2 | S3
ZE         | S4 | S5 | S6
NE         | S7 | S8 | S9

and between the hidden and output layers: W11 -> R11, W12 -> R12, W21 -> R21, W22 -> R22.]

Fig. 3. a) Structure of FLC optimized with GA and b) general rule base between layers.<br />

The Genetic Algorithm used for the optimization is the same as in [5], with a small adaptation. The quadratic error is used as the fitness function [6]. The main GA parameters are: population size 80, crossover probability 0.95, mutation probability 0.02. Regarding the number of generations, the first run used 200 generations and the second run of the GA 100 generations. After running the GA the best solution was found.

Only the rules 2xS1, 2xS4, 2xS9, R11 and R22 (8 rules in total) are selected as good rules. The other rules either do not contribute, or their contribution is detrimental to the minimization of the respective quadratic error.
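To make the procedure concrete, the sketch below shows a generic GA loop with the parameters stated above (population 80, crossover 0.95, mutation 0.02); the structure is an assumption, and the fitness function is a dummy quadratic stand-in because evaluating the real quadratic tracking error would require the full biped simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
POP, N_GENES = 80, 22                      # 22 candidate rules, as stated above
P_CROSS, P_MUT, N_GEN = 0.95, 0.02, 200

def fitness(weights):
    # Stand-in for the quadratic tracking error of the biped simulation.
    return float(np.sum(weights ** 2))

def select(pop, fit):
    # Tournament selection: the smaller error wins.
    i, j = rng.integers(0, len(pop), 2)
    return pop[i] if fit[i] < fit[j] else pop[j]

pop = rng.uniform(-1.0, 1.0, size=(POP, N_GENES))
for _ in range(N_GEN):
    fit = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[np.argmin(fit)].copy()]          # elitism: keep the best
    while len(new_pop) < POP:
        a, b = select(pop, fit).copy(), select(pop, fit).copy()
        if rng.random() < P_CROSS:                   # one-point crossover
            cut = rng.integers(1, N_GENES)
            a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
        for child in (a, b):
            mask = rng.random(N_GENES) < P_MUT       # mutation
            child[mask] += rng.normal(0.0, 0.1, int(mask.sum()))
            new_pop.append(child)
    pop = np.array(new_pop[:POP])

print("best rule weights:", np.round(pop[np.argmin([fitness(p) for p in pop])], 3))
```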

4. SIMULATION RESULTS<br />

In this section, a joint profile for a five-link biped walking on flat ground with both single and double support phases is determined using the method discussed in Section 3. The values of the parameters m_i, I_i, l_i and d_i of the five-link biped robot are listed in Table I, and the walking speed is chosen as 1.06 m/s with step length SL = 0.72 m, TS = 0.6 s, TD = 0.1 s, Hm = 0.05 m and Sm = 0 m [7].


Table I. Parameters of the bipedal robot.

Link        | Mass (kg) | Moment of inertia (kg·m²) | Length (m) | Location of centre of mass to lower joint (m)
Torso (3)   | 14.79     | 3.30 x 10⁻²               | 0.486      | 0.282
Thigh (2,4) | 5.28      | 3.30 x 10⁻²               | 0.302      | 0.236
Leg (1,5)   | 2.23      | 3.30 x 10⁻²               | 0.332      | 0.189

Figure 4 shows a stick diagram of the five-link bipedal model walking on flat ground. From this diagram,<br />

one can observe the overall motion of the biped during the single and double support phases. The dashed lines<br />

represent the movements of the hip and of the upper and lower left and right limbs. The stance limb propels the upper body while the upper body is maintained in the upright position (θ3 = 0). The posture of the biped at the end of

each step is close to that at the beginning of each step, which indicates that the repeatability condition is<br />

satisfied.<br />

Figure 5 shows the motion of the lower limb joint angles during the single and double support phases for two<br />

steps.<br />

Fig. 4. Stick diagram of bipedal walking for two steps, first 0.7 m and second 0.6 m.
Fig. 5. Designed joint angles, simulation time 1.4 s, two steps, first 0.7 m and second 0.6 m.

Figure 6 a) and b) shows the horizontal displacement and the velocity of the hip during the single and double support phases of one step. It can be seen that the hip displacement remains approximately at the centre of the ZMP stability region, which ensures the greatest stability [7].

Fig. 6. Designed: a) trajectory and b) velocity of the hip for the first step of 0.7 m and second step of 0.6 m.

Figure 7 shows the displacements along the x axis over three step lengths for the first and second lower limbs and for the desired and designed hip trajectories, respectively. All the trajectories are smooth, i.e. all the velocities are continuous. The hip trajectory designed with the FLC-GA shows better performance than the designed hip trajectory.

The quadratic errors "desired - designed FLC-GA" and "desired - designed" for the three step lengths 0.7, 0.6 and 0.5 m are given in Figure 8, respectively.


Legend (Fig. 7): trajectory of the first lower limb (x axis); trajectory of the second lower limb (x axis); desired hip trajectory; designed hip trajectory; hip trajectory designed with FLC-GA.
Legend (Fig. 8): quadratic error "desired - designed FLC-GA"; quadratic error "desired - designed".

Fig. 7. Displacements along the x axis for three step lengths: 0.7, 0.6 and 0.5 m.
Fig. 8. Quadratic errors for three step lengths: 0.7, 0.6 and 0.5 m.

5. CONCLUSIONS<br />

In this paper an FLC-GA based approach to the optimization of biped gait synthesis has been presented. We considered an important task for the biped robot: walking with different step lengths. Our method can be applied to a wide range of step lengths. The performance evaluation was carried out by simulation. Based on the simulation results, we conclude:

• Fuzzy Logic Controllers and Neural Networks involve a high number of mathematical operations.
• To reduce the number of mathematical operations to a minimum, this paper presents the fuzzification of the FLC and the optimization of its rule base using GAs.
• Using GAs we have found a minimal number of rules: 8 rules out of 22 in total. The optimal FLC decreased the error with 63.6% fewer mathematical operations.
• The FLC needs a general rule base; after optimization with the GA an optimal (minimal) number of rules can be found.
• From the stability point of view, it is better for the robot body not to be fixed during motion.
• Future work includes using the GA for gait optimization of the biped robot when going up stairs, down stairs, etc.

REFERENCES<br />

[1] Sh. Kajita et al.: "Biped Walking Pattern Generation by using Preview Control of Zero-Moment Point", Proceedings of the 2003 IEEE International Conference on Robotics & Automation, Taipei, Taiwan, pp. 1620-1626, September 14-19, 2003.
[2] G. Capi et al.: "Real time gait generation for autonomous humanoid robots: A case study for walking", Robotics and Autonomous Systems 42, pp. 107-116, 2003.
[3] S. Jung, T.C. Hsia: "A new Neural Network Control Technique for Robot Manipulators", Robotica, Volume 13, pp. 477-484, 1995.
[4] G. Campa: "Adaptive Neural Networks" toolbox for MATLAB implementation, available at www.mathworks.com, last updated June 22, 2003.
[5] C.R. Houck, J. Joines and M. Kay: "Genetic Algorithm Optimization Toolbox", a genetic algorithm for function optimization: a MATLAB implementation. Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. www.matlab-exchange.com, last updated 2003.
[6] A. Shala, R. Likaj, A. Geca, A. Pajaziti, F. Krasniqi: "Trajectory tracking by using Fuzzy Logic Controller on Mobile Robot", International Workshop HUDEM 2003, Prishtina, Kosova, 2003.
[7] X. Mu, Q. Wu: "Synthesis of a complete sagittal gait cycle for a five-link biped robot", Robotica, Volume 21, pp. 581-587, 2003.
[8] Q. Huang, et al.: "Planning Walking Patterns for a Biped Robot", IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, pp. 280-289, 2001.


Adaptive Neuro-Fuzzy Control of AMRU5, a six-legged walking robot

J-C Habumuremyi, P. Kool and Y. Baudoin
Royal Military Academy - Free University of Brussels
08 Hobbema Str, box: MRTM, 1000 Brussels, Belgium
E-mail: Jean-Claude.Habumuremyi@rma.ac.be; Pkool@vub.ac.be; Yvan.Baudoin@rma.ac.be

Abstract

Due to the complexity of walking robots, which in general have a great number of degrees of freedom, cognitive modelling controllers such as Fuzzy Logic and Neural Networks seem to be a reasonable choice for the design of adaptive control of such robots. The Fuzzy Logic Controller is widely used because it lets you describe the desired system behaviour with simple "if-then" relations. But it has a major limitation: in many applications, the designer has to derive the "if-then" rules manually, by trial and error. On the other hand, Neural Networks can approximate the behaviour of a system, but we can neither interpret the solution obtained nor check whether that solution is plausible. The two approaches are complementary: combining them, Neural Networks provide the learning capability while Fuzzy Logic brings the knowledge representation (Neuro-Fuzzy). In this paper, we show an original method to design an adaptive Neuro-Fuzzy controller which consists of five steps: the initial design of an ANFIS controller, the identification of the dynamic model of the leg joints, the estimation of the parameters of the dynamic model, the calculation of an ideal torque and the updating of the parameters of the controller, and finally the design of the supervisory control.

1 Introduction<br />

Many robots (manipulator and mobile robots), until now, are controlled using linear controllers (PID) which are<br />

independent for each joint. It can be proven that those controllers are fairly effective. The two main reasons are [1]:<br />

- The large reduction ratios between the actuators and the link mechanism (non-linearities and coupling terms<br />

become less important)<br />

- The large feedback gains in the control loops (they enlarge the domain where the complete robot dynamics is<br />

locally equivalent to a linear model).<br />

These controllers operate over a small range in which the dynamics of the system can be considered linear. They limit the use of such robots to slow-motion applications and a fixed payload. However, the normal operational range of a robot may be large, and its payload may also change. A controller that works over different operational ranges and takes into account changes of payload, of the environment and of the uncertainties (friction, flexibility, ...) requires some kind of on-line parameter estimation scheme (an adaptive controller). Most classical adaptive controllers are based on the well-known dynamic property of robots which states that the dynamic model of the system is linear with respect to the dynamic parameters (mass, moments of inertia, link lengths, ...). Even for simple cases, it remains difficult to obtain the relation which expresses this linearity of the dynamic parameters in the dynamic model. The problem becomes more complex for a walking robot, which in general has a large number of degrees of freedom (we have 18 just for the robot to walk) and which requires changing internal parameters depending on the environment that it explores. It also seems practically difficult to build a representative model of a walking robot, due to the problem of obtaining accurate internal parameters (distances between joints, moments of inertia, ...) and of accurately modelling some complex phenomena such as backlash and friction. In this case, cognitive modelling such as Fuzzy Control and Neural Networks seems to be a reasonable choice.

The Fuzzy Logic Controller (FLC) is widely used because it lets you describe the desired system behaviour with simple "if-then" relations. In many applications, this gives you a simpler solution in less design time. In addition, you can use all available engineering know-how to optimise the system performance directly. While this is certainly the beauty of fuzzy logic, at the same time it is a major limitation. In many applications, the knowledge that describes the desired system behaviour is contained in data sets. The designer has to derive the "if-then" rules from the data sets manually, which requires a major effort with large data sets. This is often done by trial and error. Without adaptive capability, the performance of FLCs relies on two factors: the availability of human experts, and knowledge acquisition techniques to convert human expertise into appropriate fuzzy "if-then" rules and membership functions. These two factors substantially restrict the application domain of FLCs. Changing the shapes of the membership functions can drastically influence the quality of the FLC. Thus methods for tuning fuzzy controllers are necessary.

Artificial neural networks are highly parallel architectures consisting of simple processing elements which communicate through weighted connections. They are able to approximate or to solve certain tasks by learning from examples. When the data sets contain knowledge about the system to be designed, a neural net promises a solution because it can train itself from the data sets. However, only a few commercial applications of neural nets exist, due to the lack of interpretability of the solution, the prohibitive computational effort and the difficulty of selecting the appropriate net model. It becomes obvious that a clever combination of the two technologies delivers the best of both. Neuro-Fuzzy [2] is a combination of the explicit knowledge representation of fuzzy logic with the learning power of neural nets. In this paper, we show how to design an adaptive Neuro-Fuzzy controller in order to track given trajectories in different situations. Five steps have been considered in the design of such a controller.

2 ANFIS 1 Architecture<br />

Figure 1: Walking Robot AMRU5<br />

There are many approaches to combining fuzzy logic and Neural Networks; well known among them are:

- Cooperative Neuro-Fuzzy systems, where an ANN learning mechanism determines the FIS membership functions or fuzzy rules from the training data, after which the ANN moves to the background.
- Concurrent Neuro-Fuzzy systems, where the ANN assists the FIS continuously to determine the required parameters.
- Fused Neuro-Fuzzy systems, where FL and ANN share data structures and knowledge representations. In the above methods FL and ANN are separate; in fused systems, ANN learning algorithms are used to determine the parameters of the FIS. For that, the fuzzy system is represented in a special ANN-like architecture. Some of the major works in this area are NEFCON, ANFIS [3], FALCON, GARIC, FINEST and many others.

In our application, we have used the ANFIS [3] architecture. It keeps the structure of the fuzzy controller that is<br />

determined by the fuzzy rules as depicted in Figure 2.<br />

At layer 1, every node is adaptive (premise parameters) with a node function which is the membership function. The node output at layer 2 represents the firing strength of a rule; in our application it is the product of all the incoming signals, but in general it can be any T-norm operator that performs the fuzzy AND. At layer 3, normalised firing strengths are obtained by taking the ratio of each rule's firing strength to the sum of all rules' firing strengths. The nodes at layer 4 are adaptive (consequent parameters) with a node function which can be a first-order Sugeno, zero-order Sugeno, Mamdani or Tsukamoto fuzzy model. A small numerical sketch of this forward pass is given after Figure 2.

1 Adaptive Neuro-Fuzzy Inference Systems


Figure 2: ANFIS Architecture<br />
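As a small numerical illustration of the layer-by-layer computation just described (not of the AMRU5 controller itself), the following Python sketch evaluates an ANFIS forward pass with two inputs, two Gaussian membership functions per input and first-order Sugeno consequents; all parameter values are arbitrary.

```python
import numpy as np

def gauss(x, c, sigma):
    """Layer 1: membership degree (premise parameters c, sigma)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x1, x2, premise, consequent):
    mu1 = [gauss(x1, c, s) for (c, s) in premise["x1"]]               # layer 1
    mu2 = [gauss(x2, c, s) for (c, s) in premise["x2"]]
    w = np.array([m1 * m2 for m1 in mu1 for m2 in mu2])               # layer 2: product T-norm
    w_bar = w / w.sum()                                               # layer 3: normalisation
    f = np.array([p * x1 + q * x2 + r for (p, q, r) in consequent])   # layer 4: Sugeno consequents
    return float(np.dot(w_bar, f))                                    # output: weighted sum

premise = {"x1": [(-1.0, 1.0), (1.0, 1.0)], "x2": [(-1.0, 1.0), (1.0, 1.0)]}
consequent = [(0.5, 0.1, 0.0), (0.2, -0.3, 0.1), (-0.4, 0.2, 0.0), (0.1, 0.1, -0.2)]
print(anfis_forward(0.3, -0.2, premise, consequent))
```

For the zero-order Sugeno case used later in this paper, the consequents reduce to constants, i.e. tuples of the form (0, 0, r_i).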

3 Step 1: Design of an initial zero-order Sugeno Fuzzy Logic controller<br />

It is important to have an initial controller which works properly over a limited range. One of the reasons is that during on-line learning an optimization algorithm will be used, and the solution cannot converge to a good one if the parameters of the controller are set far away from the true ones. We thus have to design a fuzzy controller which works properly over a limited range and then make it adaptive to take into account the uncertainties of the model. Many controller designs avoid this problem by first building a classical controller (usually a PD) and then adding another controller (fuzzy or neural network) to deal with the uncertainties; this makes the controller more complex. In our method, we design an initial Neuro-Fuzzy controller which behaves like a classical one and afterwards make it adaptive to deal with the uncertainties. Finding good parameters for a fuzzy logic controller is not an easy task, because such controllers generally have many parameters. If we have n inputs, m triangular membership functions per input (2 parameters to adjust per membership function) and a zero-order Sugeno FIS is used, we have to fix 2mn + m^n parameters. To illustrate the method, we will use a fuzzy system (equivalent to a discrete PID controller) with 2 triangular membership functions (N, P), 3 inputs (e(n), e(n-1) and e(n-2)) and a zero-order Sugeno FIS, as shown in Figure 2, but the method can be generalised. In this case, we have to fix 20 parameters; if we fix the parameters of the membership functions, only 8 parameters remain to be fixed. A typical rule of such a system has the form:

If e(i) is N, e(i-1) is P and e(i-2) is N then the output equals

z1 = p1·e(i) + q1·e(i-1) + r1·e(i-2) + s1   (1)

where {p1, q1, r1, s1} is the parameter set of one node. Equation (1) becomes z1 = s1 (2) for a zero-order Sugeno-Takagi FIS.

Figure 2: Zero-order Sugeno FIS with two triangular MFs and three inputs

The first method to fix the parameters of a controller is simple trial and error. Unfortunately, intuitive tuning procedures are difficult to develop in the case of a Sugeno FIS, because a change in one tuning constant tends to affect the contribution of the other terms in the controller's output; moreover, the great number of parameters makes this method practically impossible. The second method is the analytical approach to the tuning problem; it involves a mathematical model of the process. This method cannot be used here, because the advantage of fuzzy logic is precisely that it is used on complex processes for which establishing a reliable model is unimaginable. The third approach to the tuning problem is something of a compromise between purely self-teaching trial-and-error techniques and the more rigorous analytical techniques. It was originally proposed by John G. Ziegler and Nathaniel B. Nichols [5] and remains popular today because of its simplicity and its applicability to processes which can be described by a "gain", a "time constant" and a "dead time" (which is the case for the joints of robots actuated by DC motors). Ziegler and Nichols came up with a practical method for estimating the proportional, integral and derivative parameters of a PID controller. In this paper, we show how these techniques can be applied to the design of a fuzzy controller.

3.1 How the method was developed

Many of the techniques used to tune a Mamdani fuzzy model (which has fewer parameters than a Sugeno fuzzy model) are intuitive. Many papers and books indicate which parameters to increase or decrease by considering the rise time, the overshoot and the steady-state error [4]. These techniques seem more like art than engineering, and they are difficult to apply to a Sugeno fuzzy model. The best solution for tuning the parameters of a Sugeno fuzzy model is to fuse Neural Networks with Fuzzy Logic systems, but we then need data to train the system. A first solution could be to collect the data from a classical controller implemented on the real robot by giving random trajectories to the actuators. We noticed, however, that we cannot cover all possible operating regions, that the time available to read the data (from encoders, actuators, ...) and to write them to a storage device is too short (the microcontroller sometimes does not have enough time to write all the values), and that the data are noisy. The error obtained after training with these data remains large. Another, original, solution is the use of the Ziegler-Nichols rules originally developed for PID controllers. The analogue PID controller is expressed by the equation:
by the equation:<br />

de<br />

u( t)<br />

= K pe( t)<br />

+ K i ∫ e(<br />

t)<br />

dt + K d (3)<br />

dt<br />

Where e: is the difference between the set point and the process output and u the command signal. and K<br />

are controller parameters.<br />

Two practical methods can be used to have a first estimate of the PID controller parameters:<br />

- the step-response method<br />

- and the frequency response method (only this method will be considered in this paper)<br />

K , d<br />

p K i<br />

3.1.1 The step-response method<br />

This method is based on a registration of the open-loop response of the system, which is characterized by two<br />

parameters. The parameters (a and L) are determined from a unit step response of the process, as shown in Figure 3.<br />

When those parameters are known, the controller parameters are obtained from Table 1.<br />

Figure 3: Step-response method
Figure 4: Frequency-response method

3.1.2 The frequency-response method<br />

The idea of this method is to determine the point where the Nyquist curve of the open-loop system intersects the negative real axis. This is done by connecting the controller to the process and setting the parameters so that a pure proportional control loop is obtained. The gain of the controller is then increased until the closed-loop system reaches the stability limit. When this occurs, the ultimate gain Ku and the period of oscillation Tu shown in Figure 4 are determined. The controller parameters are then given by Table 2.


Controller type |   Kp    |    Ki      |   Kd
P               |  1/a    |     -      |    -
PI              |  0.9/a  |  0.3/(aL)  |    -
PID             |  1.2/a  |  0.6/(aL)  |  0.6L/a

Table 1: Parameters obtained from the step-response method

Controller type |   Kp      |     Ki       |     Kd
P               |  0.5 Ku   |      -       |      -
PI              |  0.45 Ku  |  0.54 Ku/Tu  |      -
PID             |  0.6 Ku   |  1.2 Ku/Tu   |  0.075 Ku·Tu

Table 2: Parameters obtained from the frequency-response method

3.1.3 Method of tuning a UFLC based on the frequency-response method

If the ultimate gain Ku and the ultimate period Tu of the process have been determined by experiment or simulation, equation (3) can be written as follows:

u(t) = 0.6·Ku·e(t) + (1.2·Ku/Tu)·∫ e(t) dt + 0.075·Ku·Tu·de/dt   (4)

There exist different methods to convert equation (4) into discrete form for digital implementation, such as the Tustin (trapezoidal) approximation, ramp invariance, rectangular approximation, ... When the sampling time T is short, all these methods have nearly the same performance. We use the rectangular approximation. Equation (4) becomes:

u(n) = 0.6·Ku·e(n) + (1.2·Ku/Tu)·Σ_{i=1..n} e(i)·T + 0.075·Ku·Tu·[e(n) - e(n-1)]/T   (5)

u(n-1) = 0.6·Ku·e(n-1) + (1.2·Ku/Tu)·Σ_{i=1..n-1} e(i)·T + 0.075·Ku·Tu·[e(n-1) - e(n-2)]/T   (6)

(5) - (6) gives:

Δu(n) = u(n) - u(n-1) = K1·e(n) + K2·e(n-1) + K3·e(n-2)   (7)

where K1 = (3·Ku/40)·(Tu² + 8·T·Tu + 16·T²)/(T·Tu), K2 = -(3·Ku/20)·(Tu + 4·T)/T and K3 = (3·Ku/40)·(Tu/T).

Equation (7) can now be used to build an FLC controller that we call a UFLC (Unit FLC). The UFLC is determined by the equation:

Δu_u(n) = K1·e_u(n) + K2·e_u(n-1) + K3·e_u(n-2)   (8)

where e_u is between -1 and 1. If we define a step t (t equal to 0.001, for example), we can define a set A of numbers between -1 and 1 as follows:

A = {-1, -1+t, -1+2t, ..., 1-2t, 1-t, 1}

Then we form all possible sets {e_u(n), e_u(n-1), e_u(n-2)} with numbers which belong to the set A. From each set, we calculate Δu_u(n) using equation (8). Finally, we can use the sets {e_u(n), e_u(n-1), e_u(n-2), Δu_u(n)} to train the Neuro-Fuzzy controller.
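A short Python sketch of this data-generation step is given below; it uses equation (8) with the gains K1-K3 of equation (7) and, to keep the example tiny, a coarse step t = 0.25 instead of the much finer value (e.g. 0.001) suggested above. The numbers Ku = 8 and Tu = 3.63 are the values used in Section 3.1.5; the sampling time T = 0.05 s is an assumption.

```python
import itertools
import numpy as np

def zn_increment_gains(Ku, Tu, T):
    """Gains K1, K2, K3 of the incremental law (7), as reconstructed above."""
    K1 = 3.0 * Ku * (Tu**2 + 8*T*Tu + 16*T**2) / (40.0 * T * Tu)
    K2 = -3.0 * Ku * (Tu + 4*T) / (20.0 * T)
    K3 = 3.0 * Ku * Tu / (40.0 * T)
    return K1, K2, K3

K1, K2, K3 = zn_increment_gains(Ku=8.0, Tu=3.63, T=0.05)

t = 0.25                                   # coarse step, for a small example
A = np.arange(-1.0, 1.0 + t / 2, t)        # A = {-1, -1+t, ..., 1-t, 1}
training = [(e0, e1, e2, K1*e0 + K2*e1 + K3*e2)          # Eq. (8)
            for e0, e1, e2 in itertools.product(A, repeat=3)]

print(len(training), "training tuples, e.g.", training[0])
```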

Using a hybrid learning paradigm (a least-squares error algorithm for the consequent parameters, which are linear, and backpropagation for the premise parameters), we noticed that the initial membership functions did not change (the premise parameters remain the same); only the consequent parameters change. With the 2 triangular membership functions chosen for our illustration, we obtain the analytical expressions shown in Table 3.

The same procedure can be applied to the step-response method, to derive the rules of a controller which depends on the parameters a and L.

3.1.4 Use of the UFLC on a real process<br />

In practice, the error will not always lie between -1 and 1. We need a transformation to use the UFLC design on a real process. If the minimum error of the system is a and the maximum is b (a and b are determined by the limits of each joint), the reduced error e_u(n) (an error between -1 and 1) can be expressed as follows:

e_u(n) = (2/(b - a))·( e(n) - (b + a)/2 )   (9)

and

e(n) = ((b - a)/2)·e_u(n) + (b + a)/2   (10)

Rule | e(n) | e(n-1) | e(n-2) | Δu_u(n)
1    | N    | N      | N      | -6·Ku·T/(5·Tu)
2    | N    | N      | P      | -3·Ku·(8·T² - Tu²)/(20·Tu·T)
3    | N    | P      | N      | -3·Ku·(2·T + Tu)²/(10·Tu·T)
4    | N    | P      | P      | -3·Ku·(8·T² + 8·Tu·T + Tu²)/(20·Tu·T)
5    | P    | N      | N      | 3·Ku·(8·T² + 8·Tu·T + Tu²)/(20·Tu·T)
6    | P    | N      | P      | 3·Ku·(2·T + Tu)²/(10·Tu·T)
7    | P    | P      | N      | 3·Ku·(8·T² - Tu²)/(20·Tu·T)
8    | P    | P      | P      | 6·Ku·T/(5·Tu)

Table 3: Rules of the system used for illustration

Substituting equation (10) into (7) gives:

Δu(n) = ((b - a)/2)·( K1·e_u(n) + K2·e_u(n-1) + K3·e_u(n-2) ) + (K1 + K2 + K3)·(b + a)/2   (11)

After simplification, equation (11) becomes:

Δu(n) = ((b - a)/2)·Δu_u(n) + (6·Ku·T/(5·Tu))·(b + a)/2   (12)

Figure 5 shows how the UFLC is used on a real process.

Figure 5: Use of the UFLC on a real process
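A hedged sketch of the scaling above is given below: the unit increment of equation (8) is mapped to a real joint error range [a, b] with equations (9) and (12). The joint limits and gains are placeholder numbers, and in the real controller the unit increment would come from the trained Neuro-Fuzzy approximation rather than directly from equation (8).

```python
def uflc_increment(e, e_nm1, e_nm2, a, b, K1, K2, K3, Ku, Tu, T):
    """One control increment for a real error range [a, b] (Eqs. 9, 8 and 12)."""
    def reduce(err):                                  # Eq. (9)
        return 2.0 / (b - a) * (err - (b + a) / 2.0)

    du_unit = (K1 * reduce(e) + K2 * reduce(e_nm1)    # Eq. (8) on reduced errors
               + K3 * reduce(e_nm2))
    # Eq. (12): rescale the unit increment back to the real range.
    return (b - a) / 2.0 * du_unit + 6.0 * Ku * T / (5.0 * Tu) * (b + a) / 2.0

# Placeholder joint limits and gains (illustrative only).
print(uflc_increment(0.20, 0.25, 0.30, a=-0.5, b=0.5,
                     K1=30.0, K2=-15.0, K3=5.0, Ku=8.0, Tu=3.63, T=0.05))
```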


3.1.5 Application to a known transfer function

To allow a comparison between a classical PID controller and a UFLC, we have applied the method to a process with the transfer function

G(s) = 1/(s + 1)³   (13)

This process has the ultimate gain Ku = 8 and the ultimate period Tu = 2π/√3 ≈ 3.63. From Table 2 and Table 3 we can easily design a PID controller and a UFLC. Figure 6 shows the Matlab schematic used to compare the two controllers with a step function as the input.

Figure 6: Comparison between PID controller and UFLC<br />

The outputs of the two controllers and the error between them are shown in Figures 7 and 8. The error is less than 0.0063. The two controllers behave in almost the same way, and the small difference between them is mainly due to numerical truncation.

Figure 7: Output of the PID controller and the UFLC
Figure 8: Error between the PID controller and the UFLC



4 Step 2: Identification of the dynamic model of the legs<br />

ANFIS will be used to identify the dynamic model of each leg of the robot AMRU5. The dynamic model of an AMRU5 leg (or of a robot in general) is formulated as:

τ = A(θ)·θ̈ + C(θ, θ̇)·θ̇ + F(θ, θ̇) + G(θ) - J_F^T(θ)·F_RF   (14)

where:
τ = [τ1, τ2, τ3] is the vector of forces/torques;
θ = [θ1, θ2, θ3] is the vector of the position coordinates;
A(θ) is the inertia matrix;
C(θ, θ̇) is the vector of centrifugal and Coriolis terms;
F(θ, θ̇) are the friction forces acting on the joints;
G(θ) is the vector of the gravitational forces/torques;
J_F^T is the transpose of the Jacobian matrix;
F_RF is the vector of the reaction forces that the ground exerts on the robot feet.

Equation (14) can be written in a compact way as follows:

τ = A(θ)·θ̈ + H(θ, θ̇)   (15)

with H(θ, θ̇) defined as a whole, without distinguishing among the different terms: H(θ, θ̇) contains the centrifugal, Coriolis and gravitational forces, the viscous and Coulomb friction, and the reaction force terms.

We have used a 2D pantograph mechanism for each leg. This mechanism, shown in Figures 9 and 10, has the particularity that the joint θ3 is decoupled from the joints θ1 and θ2. Equation (15) can therefore be split into two parts, as follows:

[τ1; τ2] = [A11 A12; A12 A22]·[θ̈1; θ̈2] + [H1(θ1, θ2, θ̇1, θ̇2); H2(θ1, θ2, θ̇1, θ̇2)] = A(θ)·θ̈ + H(θ, θ̇)   (16)

τ3 = A33·θ̈3 + H3(θ3, θ̇3)   (17)

Figure 9: 2D pantograph mechanism
Figure 10: General structure of the AMRU5 robot leg

We will only consider the system of equations (16) to explain the ANFIS joint control; equation (17) is a particular case. To identify the parameters of the dynamic model described by equation (16), we need its equivalent discrete-time version, defined by nonlinear difference equations. To approximate θ̇_i and θ̈_i (i = 1, 2), we have used Taylor series and we find:

θ̇_i(k) = [ θ_i(k+1) - θ_i(k-1) ] / (2·Δt)   (18)

θ̈_i(k) = [ θ_i(k+1) - 2·θ_i(k) + θ_i(k-1) ] / Δt²   (19)

where Δt is the sampling time. In discrete time, the dynamic model (16) then becomes:

τ(k) = A(θ(k))·θ̈(k) + H(θ(k), θ̇(k))   (20)
) (20)<br />

The procedure to have an Offline (Online is also possible) ANFIS dynamic model of the coupled joints 1 and 2 is as<br />

follows:<br />

1. Collect ( θ( k + 1)<br />

, τ(<br />

k)<br />

) ≡ ( θ1<br />

(k + 1), θ2<br />

(k + 1), τ 1(k),<br />

τ 2 (k) ) from trajectories of the actuators on the<br />

joints 1 and 2. These trajectories are chosen in such a way to cover all the possible movements of the two<br />

joints.<br />

2. Constitute from these collected data sets<br />

3. The sets defined above will be used in the parallel identification model shown in Figure 11. Trial and error<br />

have to be used to find the best type of membership function to use, the number of linguistic variables and the<br />

type of Takagi-Sugeno FIS. The final RMSE (Root Mean-Square Error) of each trial gives a comparative<br />

evaluation between different ANFIS models.<br />


Figure 11: Offline ANFIS parallel Identification model<br />
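The following skeleton illustrates steps 1-3 with synthetic logs. The input vector fed to the model (θ(k), θ̇(k), τ(k)) is an assumption based on Figure 11, and the trained ANFIS predictor is replaced by a trivial stand-in, so that only the data handling and the RMSE comparison are shown.

```python
import numpy as np

DT = 0.01                                   # assumed sampling time

def collect_pairs(theta_log, tau_log):
    """Steps 1-2: build (input, target) pairs, with target theta(k+1)."""
    pairs = []
    for k in range(1, len(theta_log) - 1):
        dtheta_k = (theta_log[k + 1] - theta_log[k - 1]) / (2.0 * DT)   # Eq. (18)
        x = np.concatenate([theta_log[k], dtheta_k, tau_log[k]])
        pairs.append((x, theta_log[k + 1]))
    return pairs

def rmse(model, pairs):
    """Step 3: root mean-square error used to compare candidate ANFIS models."""
    err = [np.linalg.norm(model(x) - y) for x, y in pairs]
    return float(np.sqrt(np.mean(np.square(err))))

# Synthetic logs for the two coupled joints (placeholders for real measurements).
theta_log = np.cumsum(0.001 * np.random.randn(200, 2), axis=0)
tau_log = np.random.randn(200, 2)
pairs = collect_pairs(theta_log, tau_log)
stand_in_model = lambda x: x[:2]            # 'prediction' = current angles
print("RMSE of the stand-in model:", rmse(stand_in_model, pairs))
```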

5 Step 3: Estimation of the parameters of the dynamic model<br />

It is necessary to estimate the elements of the symmetric inertia matrix A(θ(k)) and the elements of the matrix H(θ(k), θ̇(k)) because they will be used in the control strategy. To estimate these elements, we use the principle that the elements of the inertia matrix depend only on θ(k), while the elements of the matrix H(θ(k), θ̇(k)) depend on θ(k) and θ̇(k); i.e. if we change θ̈(k) while keeping θ(k) and θ̇(k) at the same values, the elements of the inertia matrix remain the same.

Suppose we have the same θ(k), θ̇(k) for 3 different angular accelerations θ̈_j(k) (j = a, b and c); then

τ_j(k) = A(θ(k))·θ̈_j(k) + H(θ(k), θ̇(k))   (21)

By applying the principle stated above, we have:


τ1a(k) - τ1b(k) = A11·(θ̈1a(k) - θ̈1b(k)) + A12·(θ̈2a(k) - θ̈2b(k))
τ1a(k) - τ1c(k) = A11·(θ̈1a(k) - θ̈1c(k)) + A12·(θ̈2a(k) - θ̈2c(k))   (22)

τ2a(k) - τ2b(k) = A12·(θ̈1a(k) - θ̈1b(k)) + A22·(θ̈2a(k) - θ̈2b(k))
τ2a(k) - τ2c(k) = A12·(θ̈1a(k) - θ̈1c(k)) + A22·(θ̈2a(k) - θ̈2c(k))   (23)

The resolution of the systems of equations (22) and (23) gives the estimated values of the elements of the inertia matrix. There are two ways to determine A12 (by using the system of equations (22) or the system of equations (23)); these two ways can give an idea of the precision of the estimated parameters.
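In code, the two systems (22)-(23) amount to a small linear least-squares problem in the unknowns A11, A12 and A22; the sketch below checks this with synthetic torques generated from a known inertia matrix (the values are invented, not identified AMRU5 parameters).

```python
import numpy as np

def estimate_inertia(tau, acc):
    """Solve Eqs. (22)-(23) for A11, A12, A22.

    tau, acc: arrays of shape (3, 2) with tau_j(k) and theta_ddot_j(k) for the
    three experiments j = a, b, c taken at the same theta(k), theta_dot(k).
    """
    rows, rhs = [], []
    for j in (1, 2):                         # differences a-b and a-c
        d_acc, d_tau = acc[0] - acc[j], tau[0] - tau[j]
        rows.append([d_acc[0], d_acc[1], 0.0]); rhs.append(d_tau[0])   # Eq. (22)
        rows.append([0.0, d_acc[0], d_acc[1]]); rhs.append(d_tau[1])   # Eq. (23)
    return np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]

# Synthetic check: known symmetric inertia matrix and a fixed H(theta, theta_dot).
A_true = np.array([[2.0, 0.3], [0.3, 1.5]])
H_true = np.array([0.4, -0.1])
acc = np.array([[1.0, 0.5], [0.2, -0.3], [-0.6, 0.8]])   # three accelerations
tau = acc @ A_true + H_true                               # Eq. (21) for each j
print(estimate_inertia(tau, acc))                         # ~ [2.0, 0.3, 1.5]
```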

6 Step 4: Calculation of the ideal torque to control the joints and updating of the parameters<br />

of the controller<br />

From equation (20), we obtain:

θ̈(k) = A⁻¹(θ(k))·τ(k) - A⁻¹(θ(k))·H(θ(k), θ̇(k))   (24)

If we suppose that the matrices A and H are known exactly, we can define a control law as follows:

τ(k) = A(θ(k))·[ A⁻¹(θ(k))·H(θ(k), θ̇(k)) + θ̈_d(k) + k·e ]   (25)

where θ̈_d(k) ≡ [θ̈_1d(k)  θ̈_2d(k)]^T is the vector of desired angular accelerations of joints 1 and 2,
e = [e1  ė1  e2  ė2]^T is the tracking error vector, with e_j = θ_jd - θ_j and ė_j = θ̇_jd - θ̇_j (j = 1, 2), so that ë_j = θ̈_jd - θ̈_j, and

k = [ k1  k2  0   0
      0   0   k3  k4 ]

is chosen such that all roots of the polynomials s² + k2·s + k1 and s² + k4·s + k3 lie in the open left-half plane.

Introducing (25) into (24), we have:

ë1 + k2·ė1 + k1·e1 = 0
ë2 + k4·ė2 + k3·e2 = 0   (26)

We can see that lim_{t→∞} e_j(t) = 0, which is the main objective of the control. Since A and H are not known exactly, we replace them respectively by Ã and H̃, obtained from the ANFIS model. The resulting control law is:

τ_c(k) = Ã(θ(k))·[ Ã⁻¹(θ(k))·H̃(θ(k), θ̇(k)) + θ̈_d(k) + k·e ]   (27)

τ_c(k), obtained from an approximated model of the process, is called the certainty-equivalent controller [6] in the adaptive control literature. This torque will be used to update the parameters of the initial controller. If τ_a is the output of the zero-order Takagi-Sugeno controller, it can be expressed as:

τ_a = ( Σ_{i=1..n} ω_i·r_i ) / ( Σ_{i=1..n} ω_i )   (28)

where n is the number of rules, ω_i is the product of the outputs of the membership functions which belong to rule i, and r_i is the output weight of rule i. We update only the output weights of the rules, by minimizing the error E = (τ_c - τ_a)² (backpropagation algorithm). For the l-th rule, with 1 ≤ l ≤ n:

∂E/∂r_l = -2·( ω_l / Σ_{i=1..n} ω_i )·( τ_c - τ_a )   (29)


The update law for the weight of the l-th rule is

r_l(k+1) = r_l(k) + 2·η·( ω_l / Σ_{i=1..n} ω_i )·( τ_c - τ_a )   (30)

where η is the learning rate.
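The update (30) is a single gradient step; a few lines of Python make the bookkeeping explicit (the firing strengths and torques below are placeholder numbers, not values from the AMRU5 controller).

```python
import numpy as np

def update_rule_weights(r, w, tau_c, tau_a, eta):
    """Eq. (30): move each output weight towards the certainty-equivalent torque,
    in proportion to its normalised firing strength."""
    return r + 2.0 * eta * (tau_c - tau_a) * (w / np.sum(w))

r = np.array([0.10, -0.20, 0.05, 0.30])     # current output weights (one per rule)
w = np.array([0.60, 0.10, 0.20, 0.10])      # firing strengths omega_i
tau_a = float(np.dot(w / w.sum(), r))       # Eq. (28): current controller output
tau_c = 0.25                                # certainty-equivalent torque, Eq. (27)
print(update_rule_weights(r, w, tau_c, tau_a, eta=0.05))
```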

7 Step 5: Design of the supervisory control<br />

For simplicity of writing, we will omit the bracketed arguments. Equation (24) can be written as follows:

θ̈(k) = Ã⁻¹·τ - Ã⁻¹·H̃ + (A⁻¹ - Ã⁻¹)·τ - (A⁻¹·H - Ã⁻¹·H̃)   (31)

Introducing (27) into (31) and defining

φ = [  0    1    0    0
      -k1  -k2   0    0
       0    0    0    1
       0    0   -k3  -k4 ]   ,   b = [ 0  0
                                       1  0
                                       0  0
                                       0  1 ]

and e as previously, we obtain:

ė = φ·e + b·[ (Ã⁻¹ - A⁻¹)·τ_c + (A⁻¹·H - Ã⁻¹·H̃) ] = φ·e + b·J   (32)

where J = (Ã⁻¹ - A⁻¹)·τ_c + (A⁻¹·H - Ã⁻¹·H̃).

We define a Lyapunov function V = (1/2)·e^T·P·e, where P is a positive definite symmetric 4x4 matrix which satisfies the Lyapunov equation φ^T·P + P·φ = -Q (Q is an arbitrary 4x4 positive definite matrix). The derivative of the Lyapunov candidate function gives:

V̇ = -(1/2)·e^T·Q·e + e^T·P·b·J   (33)

In order for θ_d - θ and θ̇_d - θ̇ to be bounded, we require V to be bounded, which means V̇ ≤ 0 whenever V is greater than a large constant V̄. However, from equation (33), it is difficult to design τ_c such that e^T·P·b·J is less than zero. To solve this problem we add another term τ_s to τ_c, called the supervisory control term. The control torque becomes τ = τ_c + τ_s. With this new definition of τ, we obtain:

ė = φ·e + b·[ (Ã⁻¹ - A⁻¹)·τ_c - A⁻¹·τ_s + (A⁻¹·H - Ã⁻¹·H̃) ]   (34)

Substituting (34) into (33), we have:

V̇ = -(1/2)·e^T·Q·e + e^T·P·b·[ (Ã⁻¹ - A⁻¹)·τ_c - A⁻¹·τ_s + (A⁻¹·H - Ã⁻¹·H̃) ]   (35)

Using the known properties of the dynamic model of robots in general, which stipulate that the inertia matrix and its inverse are positive definite and bounded (i.e. there exist 0 < α ≤ β < ∞ such that α·I ≤ A(θ) ≤ β·I for all θ ∈ Rⁿ), and if we suppose that we know (or estimate with large values) the upper bound δ of the matrix H̃, we obtain:

V̇ ≤ -(1/2)·e^T·Q·e + |e^T·P·b|·[ ‖θ̈‖₂ + (1/α)·‖τ_c‖₂ + (1/α)·δ ] - e^T·P·b·A⁻¹·τ_s   (36)

If we define

τ_s = sgn(e^T·P·b)·β·[ ‖θ̈‖₂ + (1/α)·‖τ_c‖₂ + (1/α)·δ ]   (37)

where sgn(x) equals 1 if x ≥ 0 and -1 if x < 0, then substituting equation (37) into equation (36) we obtain

V̇ ≤ -(1/2)·e^T·Q·e ≤ 0

8 Illustration of the proposed NF Controller

Implementing this controller on the real robot (which is not a negligible task, and which can prove to be difficult because of the interaction between the software and the hardware) would not by itself give us the quantitative and qualitative measurements needed to compare this controller with a range of other controllers in various situations. For that reason we recognized the need for software to simulate the dynamics of the robot leg. We have used the SimMechanics toolbox of The MathWorks, Inc. Figure 12 shows the block diagram of the considered controller; this diagram contains the leg model of the AMRU5 (block legModel) and the initial PD-like Fuzzy Logic Controller as subsystems. Figure 13 shows the comparison between the proposed adaptive controller and the initial (PD-like) controller when tracking a square signal. One can see in this figure how fast the adaptive controller reaches the set point and how small the final error is. The SimMechanics legModel has an input through which we can change the external force acting on the robot (this can represent the robot being on a slope, or the payload being changed by adding devices to the robot, such as batteries or sensors). Doing so (5 N acting horizontally at the foot), we can see in Figure 14 that the response of the non-adaptive controller becomes worse, while the adaptive controller adapts to the new situation.

Figure 12: The blocks of the ANFIS controller designed


Figure 13: Response of the adaptive controller (solid line) and of the initial controller (dotted line) to a square signal
Figure 14: Response of the adaptive controller (solid line) and of the initial controller (dotted line) to a square signal with a change of the load

9 Conclusions

In this paper, we have shown how to derive the parameters of a zero-order Sugeno Fuzzy Logic Controller from the Ziegler-Nichols method. This method makes a Fuzzy Logic Controller behave like a classical controller (PID, PI, PD, P). As zero-order Sugeno fuzzy logic is a particular case of the known fuzzy reasoning methods (Mamdani, first-order Sugeno, Tsukamoto), we can conclude that the performance comparisons between PID and Fuzzy Logic controllers found in some papers are debatable: a Fuzzy Logic Controller includes the classical PID controller, but it is a non-linear controller and can cope with more complex situations, such as a variable payload. We have also developed a method for finding the model and the parameters of the process using an adaptive Fuzzy Inference System. We have tested this method on a two-link planar manipulator because we have a mathematical model of it; the comparison between the outputs of the developed method and of the mathematical model shows its validity. Our method is very general because it is based on the properties of the equations describing the dynamic model of robots. Finally, we have presented a way to adapt the parameters of the initial controller design. The Lyapunov method has been used to prove that the designed controller is bounded. This method is based on the model of the process obtained, on the estimation of the lower and upper bounds of the inertia matrix, and on the upper bound of the matrix containing the Coriolis, centrifugal, gravitational and friction vectors.

10 References<br />

[1] T. Yoshikawa, "Foundations of Robotics: Analysis and Control", Massachusetts Institute of Technology, USA, 1990.
[2] D. Nauck, F. Klawonn and R. Kruse, "Combining Neural Networks and Fuzzy Controllers", FLAI'93, Linz, Austria, Jun. 28 - Jul. 2, 1993.
[3] J.-S. R. Jang, C. T. Sun and E. Mizutani, "Neuro-Fuzzy and Soft Computing", Prentice-Hall (UK), 1997.
[4] B. Subudhi, A. S. Morris, "Fuzzy and Neuro-Fuzzy approaches to control a flexible single-link manipulator", IMechE 2003, 29 May 2003.
[5] J.G. Ziegler and N.B. Nichols, "Optimum settings for automatic controllers", Trans. ASME, 64, 759, 1942.
[6] S. Sastry and M. Bodson, "Adaptive control: Stability, Convergence and Robustness", Englewood Cliffs, NJ: Prentice-Hall, 1989.
Prentice-Hall, 1989.


Evaluation of CORBA communication models for the<br />

development of a robot control framework<br />

Eric Colon<br />

Unmanned Ground Vehicles Centre<br />

Royal Military Academy<br />

Avenue de la Renaissance 30<br />

B-1000 Brussels, Belgium<br />

ABSTRACT<br />

This paper reports on ongoing work in the specification and development of a modular<br />

software control framework for mobile robots. In order to allow flexibility, a systematic<br />

analysis of requirements has been conducted and their consequences on the framework<br />

architecture are summarised in this paper. We show that the free CORBA implementation<br />

library ACE_TAO fulfils the communication requirements of the framework. We present in detail the communication models defined by CORBA and compare the results obtained by applying each of them to a typical distributed application.

Keywords: software modularity, distributed control, control framework, CORBA<br />

1 INTRODUCTION<br />

Many researchers in robotics are confronted with the same problem: they have at their disposal many excellent algorithms, but due to the lack of standards it is almost impossible to easily reuse those building blocks in new applications. Existing programs have to be modified, translated, ported, or simply (!) completely rewritten from scratch when changing or updating the robotic platform. What is needed is a software framework that enables agile, flexible, dynamic composition of resources and permits their use in a variety of styles to match present and changing computing needs and platforms. In the last few years, some researchers have begun to work in this direction.

At the Unmanned Ground Vehicle Centre (UGV-C), we are dealing with command and control applications including tele-monitoring, tele-operation (including shared, traded and supervised control) and collaboration (between users), for single and multiple robots (of the same or different models). The present effort is justified by the fact that most of the tools developed by the research community (MCA, DCA, MIRO, GeNoM, ...) deal with autonomous robots. Note that we are not dealing with hard real-time control like the Orocos project (www.orocos.org); our framework addresses higher-level components such as planning and user interaction.

In the next section, the requirements of the control framework are summarized and related to the properties of the ACE_TAO library. Afterwards, we examine the communication models defined by the CORBA specifications and relate them to application needs. Finally we conclude with some guidelines on the selection of a communication model.


2 FRAMEWORK REQUIREMENTS AND CORBA<br />

The requirements for the control framework have been presented in [1]. These requirements were inferred from a systematic analysis of typical tele-robotic applications [2]. In the first reference we also justify the choice of CORBA, and more particularly of the ACE_TAO CORBA implementation among other middleware technologies, for our future developments. We show here that the communication requirements of the framework are fully compatible with the free CORBA implementation library ACE_TAO.

• Integration of different robotic systems: use of native libraries (C/C++): ACE_TAO is<br />

mainly written in C++, which is the de facto language of most robot libraries.<br />

• Concurrent control of several robots: distributed and multi-threaded processes, easy communication and process synchronisation. This is the very reason ACE exists.

• Universal GUI, flexible programming language and implementation solution: the<br />

choice of ACE_TAO does not restrict the GUI development. The GUI can be based on C++ toolkits or on an interpreted language. Furthermore, CORBA communication is

straightforward in Java.<br />

• Shared control between several users requires management of access and use policy as<br />

well as coordination between control modules: this is not directly linked to the choice<br />

of ACE_TAO. However, the network capabilities could facilitate the implementation<br />

of such functions.<br />

• Integration of user algorithms: requires run-time configuration capabilities, portability.<br />

By modifying the server registration data in the naming/trading services different<br />

servers (functionalities) can be selected at run-time.<br />

• Flexibility:<br />

− Distribution: This is the core function of CORBA.<br />

− Modularity: by using Object Oriented programming and interfaces using the<br />

Interface Definition Language, capabilities can be divided among several<br />

components.<br />

− Configurability: the naming/trading services could contribute to solve the<br />

configurability issues.<br />

− Portability: ACE_TAO is a universal library that can be used on almost all<br />

platforms.<br />

− Scalability: ACE_TAO provides many components suited for applications<br />

involving many processes.<br />

− Maintainability: ACE_TAO is based on many standard design-patterns.<br />

CORBA is an industry standard.<br />

• Performance and efficiency: ACE_TAO has been developed for time-critical<br />

applications used in aviation and medicine. Many years of development and<br />

improvement have led to a very efficient software implementation. ACE_TAO provides many RT components (RT Event communication, RT-CORBA, ...).


3 CORBA COMMUNICATION MODELS<br />

CORBA offers different methods to implement communication and data transfer between<br />

objects. The basic communication models provided by CORBA are synchronous two-way,<br />

one-way and deferred synchronous 1 . To alleviate some drawbacks of these models<br />

Asynchronous Method Invocation (AMI) has been introduced. The Event Service and the<br />

Notification Service provide additional communication solutions. The remainder of this

section briefly describes all these models and discusses their benefits and drawbacks.<br />

Synchronous two-way<br />

In this model, a client sends a two-way request to a target object and waits for the object to<br />

return the response. The fundamental requirement is that the server must be available to<br />

process the client’s request.<br />

While it is waiting, the client thread that invoked the request is blocked and cannot perform<br />

any other processing. Thus, a single-threaded client can be completely blocked while waiting<br />

for a response, which may be unsatisfactory for certain types of performance-constrained<br />

applications.<br />

The advantage of this model is that most programmers feel comfortable with it because it<br />

conforms to the well-known semantics of method calls on local objects.
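To make the model concrete, the following minimal sketch shows what such a blocking call looks like with TAO's C++ mapping; the Robot interface, its move() operation and the IOR file name are invented for illustration and are not part of the framework described here.

// Minimal sketch, assuming TAO's C++ mapping.
// Hypothetical IDL:  interface Robot { short move(in short translation, in short rotation); };
#include <iostream>
#include "RobotC.h"              // client stub generated by tao_idl from Robot.idl

int do_move(CORBA::ORB_ptr orb)
{
  // Obtain the object reference, e.g. from a stringified IOR.
  CORBA::Object_var obj = orb->string_to_object("file://robot.ior");
  Robot_var robot = Robot::_narrow(obj.in());

  // Synchronous two-way call: the calling thread blocks here until the server
  // has processed the request and the reply has come back.
  CORBA::Short status = robot->move(10, 0);
  std::cout << "move() returned " << status << std::endl;
  return 0;
}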

One-way<br />

A one-way invocation is composed of only a request, with no response. One-way is used to<br />

achieve “fire and forget” semantics while taking advantage of CORBA’s type checking,<br />

marshalling/unmarshalling, and operation demultiplexing features. They can be problematic,<br />

however, since application developers are responsible for ensuring end-to-end reliability.<br />

The creators of the first version of CORBA intended ORBs (Object Request Broker) to<br />

deliver one-way requests over unreliable transports and protocols such as UDP. However, most
ORBs implement one-way over TCP, as required by the standard Internet Inter-ORB Protocol
(IIOP), which provides reliable delivery and end-to-end flow control. At the TCP level, these
features collaborate to suspend a client thread as long as the TCP buffers on its associated server
are full. Thus, one-way calls over IIOP are not guaranteed to be non-blocking. Consequently,
using one-way may or may not have the desired effect. Furthermore, CORBA states that one-way
operations have “best-effort” semantics, which means that an ORB need not guarantee
their delivery. Thus, if you need end-to-end delivery guarantees for your one-way requests,
you cannot portably rely on one-way semantics.
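For comparison, a one-way invocation looks identical at the call site; the sketch below is purely illustrative (the Logger interface and the log_event() operation are made up), and only the oneway keyword in the IDL and the best-effort semantics are standard.

// Sketch only: Logger and log_event() are invented.
// Hypothetical IDL:  interface Logger { oneway void log_event(in string msg); };
#include "LoggerC.h"             // client stub generated by tao_idl from Logger.idl

void report(Logger_ptr logger)
{
  // One-way request: no reply is expected and delivery is only "best effort".
  // Over IIOP the call may still block if the TCP buffers towards the server are full.
  logger->log_event("entering minefield sector 3");
}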

Deferred synchronous<br />

In this model, a client sends a request to a target object and then continues its own processing.<br />

Unlike the way synchronous two-way requests are handled, the client ORB does not explicitly<br />

block the calling thread until the response arrives. Instead, the client can later either poll to<br />

see if the target object has returned a response, or it can perform a separate blocking call to<br />

wait for the response. The deferred synchronous request model can only be used if the<br />

requests are invoked using the Dynamic Invocation Interface (DII).<br />

1 All specification documents over CORBA are available on the OMG web site: http://www.omg.org.


The DII requires programmers to write much more code than the usual method (Static<br />

Invocation Interface, or SII). In particular, the DII-based application must build the request

incrementally and then explicitly ask the ORB to send it to the target object. In contrast, all of<br />

the code needed to build and invoke requests with the SII is hidden from the application in the<br />

generated stubs. The increased amount of code required to invoke an operation via the DII<br />

yields larger programs that are hard to write and hard to maintain. Moreover, the SII is type-safe

because the C++ compiler ensures the correct arguments are passed to the static stubs.<br />

Conversely, the DII is not type-safe. Thus, the programmer must make sure to insert the right<br />

types into each Any or the operation invocation will not succeed.<br />
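As an illustration of this verbosity, the following hedged sketch shows a deferred synchronous call built through the DII; the operation name "move" and its arguments are hypothetical, and header paths may differ between TAO versions.

// Sketch of a deferred synchronous call through the DII.
#include <tao/DynamicInterface/Request.h>

void deferred_move(CORBA::Object_ptr robot)
{
  // Build the request dynamically instead of using a generated stub.
  CORBA::Request_var req = robot->_request("move");
  req->add_in_arg() <<= CORBA::Short(10);   // translation
  req->add_in_arg() <<= CORBA::Short(0);    // rotation
  req->set_return_type(CORBA::_tc_short);

  req->send_deferred();                     // returns without waiting for the reply

  // ... do other useful work here ...

  if (req->poll_response())                 // non-blocking check for the reply
  {
    req->get_response();                    // collect it
    CORBA::Short status;
    req->return_value() >>= status;
  }
}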

Of course, if one cannot afford to block waiting for responses on two-way calls, one needs to

decouple the send and receive operations. Historically, this meant the programmer was stuck<br />

using the DII. A key benefit of the CORBA Messaging specification is that it effectively<br />

allows deferred synchronous calls using static stubs (automatically generated communication<br />

methods hiding CORBA complexities), which alleviates much of the tedium associated with<br />

using the DII.<br />

CORBA Messaging<br />

The CORBA Messaging specification introduces the Asynchronous Method Invocation<br />

(AMI) model. As we saw in the preceding section, standard CORBA does not define a

truly asynchronous method invocation model using the SII. A common workaround for the<br />

lack of asynchronous operations is to use separate threads for each two-way operation.<br />

However, the complexity of threads makes it hard to develop portable, efficient, and scalable<br />

multi-threaded distributed applications. Moreover, since support for multi-threading is<br />

inadequately defined in the CORBA specification there is significant diversity among ORB<br />

implementations.<br />

Another common workaround to simulate asynchronous behaviour in CORBA is to use one-way

operations. For instance, a client can invoke a one-way operation to a target object and<br />

pass along an object reference to itself. The target object on the server can then use this object<br />

reference to invoke another one-way operation back on the original client. However, this<br />

design incurs all the reliability problems with one-way operations described in previous<br />

section. To address these issues, CORBA Messaging defines the AMI specification that<br />

supports a polling and a callback model. Only the callback model of the CORBA Messaging<br />

specification has been implemented in ACE_TAO.<br />

The internal mechanism is actually based on two normal synchronous invocations in both<br />

directions. Remarkably, adding asynchrony to the client generally does not require any<br />

modifications to the server since the CORBA Messaging specification treats asynchronous<br />

invocations as a client-side language mapping issue.<br />

The Events Service<br />

There are many situations where the standard CORBA (a)synchronous request/response<br />

model is too restrictive. For instance, clients have to poll the server repeatedly to retrieve the<br />

latest data values. Likewise, there is no way for the server to efficiently notify groups of<br />

interested clients when data change.<br />

The OMG COS Events Service provides delivery of event data from suppliers to consumers<br />

without requiring these participants to know about each other explicitly. A Supplier is an


entity that produces events, while a Consumer is one that receives event notifications and<br />

data. The central abstraction in the COS Events Service is the Event Channel, which plays the<br />

role of a mediator between Consumers and Suppliers and supports decoupled communication<br />

between objects. Events are typically represented as messages that contain optional data<br />

fields.<br />

Suppliers and Consumers can both play an active or a passive role. A PushSupplier object can<br />

actively push an event to a passive PushConsumer object. Likewise, a PullSupplier object can<br />

passively wait for a PullConsumer object to actively pull an event from it.<br />

By combining the different possible roles for consumers and producers, we obtain the four<br />

canonical models of component collaboration in the OMG COS Events Service architecture.<br />

While Push type streams are preferred when the supplier/consumer work at the same pace,<br />

Pull type streams are best suited when data processing is slower than possible data production<br />

or when it is requested at random.<br />
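As a hedged sketch of the push style, the fragment below connects a supplier to a COS Event Service channel and pushes one event; the channel reference, the pushed value and the function name are our own, and only the CosEventChannelAdmin / CosEventComm calls are standard.

// Hedged sketch of a push-style supplier.
#include <orbsvcs/CosEventCommC.h>
#include <orbsvcs/CosEventChannelAdminC.h>

void push_motion_command(CosEventChannelAdmin::EventChannel_ptr channel,
                         CORBA::Long translation)
{
  // Obtain a proxy consumer from the channel and connect as a supplier
  // (a nil PushSupplier reference is allowed if we do not need
  // disconnect notifications).
  CosEventChannelAdmin::SupplierAdmin_var admin = channel->for_suppliers();
  CosEventChannelAdmin::ProxyPushConsumer_var proxy = admin->obtain_push_consumer();
  proxy->connect_push_supplier(CosEventComm::PushSupplier::_nil());

  // Pack the command into an Any and push it; the channel forwards it to all
  // registered consumers without the supplier knowing who they are.
  CORBA::Any event;
  event <<= translation;   // a real application would marshal a complete command structure
  proxy->push(event);
}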

Benefits:<br />

• Producers do not receive callback registration invocations and therefore need not
maintain any persistent storage for such registrations.

• The event channel ensures that each event is distributed to all registered Consumers.<br />

• The symmetry underlying the Events Service model might also be considered as a<br />

benefit. It simplifies application development and allows Event channels to be chained<br />

together for bridging or filtering purposes.<br />

Drawbacks:<br />

• A complicated consumer registration (multiple interfaces, bi-directional object<br />

reference handshake,...),<br />

• The lack of persistence, which can lead to events and connectivity information being lost,

• The lack of filtering that leads to increased system network utilisation especially when<br />

multiple suppliers are involved.<br />

The Notification Service<br />

Two serious limitations of the event channel defined by the OMG Event Service are that it<br />

supports no event filtering capability and no ability to be configured to support different<br />

qualities of service. Thus, the choice of which consumers connected to a channel receive<br />

which events, along with the delivery guarantee that is made to each supplier, is hard-wired<br />

into the implementation of the channel. Most Event Service implementations deliver all<br />

events sent to a particular channel to all consumers connected to that channel on a best-effort<br />

basis.<br />

A primary goal of the Notification Service is to enhance the Event Service by introducing the<br />

concepts of filtering, and configurability according to various quality of service requirements.<br />

Clients of the Notification Service can subscribe to specific events of interest by associating<br />

filter objects with the proxies through which the clients communicate with event channels.<br />

These filter objects encapsulate constraints which specify the events the consumer is<br />

interested in receiving, enabling the channel to only deliver events to consumers which have<br />

expressed interest in receiving them. Furthermore, the Notification Service enables each


channel, each connection, and each message to be configured to support the desired quality of<br />

service with respect to delivery guarantee, event aging characteristics, and event<br />

prioritisation.<br />
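A hedged sketch of how a consumer-side proxy could be given such a filter is shown below; the event domain, type name and constraint expression are invented for illustration, and only the CosNotifyFilter / CosNotifyChannelAdmin interfaces are standard.

// Hedged sketch of attaching a content filter to a structured-event proxy.
#include <orbsvcs/CosNotifyChannelAdminC.h>
#include <orbsvcs/CosNotifyFilterC.h>

void add_speed_filter(CosNotifyChannelAdmin::EventChannel_ptr channel,
                      CosNotifyChannelAdmin::StructuredProxyPushSupplier_ptr proxy)
{
  // Create a filter object using the default constraint grammar.
  CosNotifyFilter::FilterFactory_var factory = channel->default_filter_factory();
  CosNotifyFilter::Filter_var filter = factory->create_filter("EXTENDED_TCL");

  // Let through only "MotionCommand" events whose speed field exceeds 50.
  CosNotifyFilter::ConstraintExpSeq constraints(1);
  constraints.length(1);
  constraints[0].event_types.length(1);
  constraints[0].event_types[0].domain_name = CORBA::string_dup("Robotics");
  constraints[0].event_types[0].type_name   = CORBA::string_dup("MotionCommand");
  constraints[0].constraint_expr            = CORBA::string_dup("$speed > 50");

  CosNotifyFilter::ConstraintInfoSeq_var added = filter->add_constraints(constraints);
  proxy->add_filter(filter.in());
}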

The Notification Service attempts to preserve all of the semantics specified for the OMG<br />

Event Service, allowing for interoperability between basic Event Service clients and<br />

Notification Service clients. The Notification Service supports all of the interfaces and<br />

functionality supported by the OMG Event Service.<br />

The TAO implementation does not support Pull interfaces and Typed Event style<br />

communication. Work is underway to implement the TAO Real-Time Notification Service.<br />

This is an extension to TAO's CORBA Notification Service with Real-Time CORBA support.<br />

Dead or unresponsive consumers and suppliers are detected and automatically disconnected<br />

from the Notification Service.<br />

4 EVALUATION OF CORBA COMMUNICATION MODELS<br />

A comparative example using different communication models has been implemented. It<br />

provides CORBA wrapping to serial communication.<br />

Standard two-way communication model<br />

A serial server has been developed using the TAO library. A client program reads inputs from<br />

the console and sends corresponding commands to the serial server. This server<br />

communicates through the serial port to another program simulating a micro-controller<br />

controlling a robot (Synchro). It reads translation and rotation inputs and adapts these values<br />

to the robot kinematics. The right and left speeds values preceded by the command code are<br />

sent to the serial server. The next figure illustrates this application.<br />

[Figure: test set-up — the Client is connected to the serial server over Ethernet (CORBA); the serial server communicates with the Synchro micro-controller simulator over RS232; the numbered arrows 1–4 indicate the request/response path.]

The time sequence and screen captures are shown in the next figure.<br />

It has been verified that the serial server can process method calls and transfer data to/from<br />

the serial port concurrently. TAO provides several concurrency models and in our tests the<br />

default configuration has been used, namely a single-threaded, reactive model. One thread<br />

handles requests from multiple clients via a single Reactor [3].


[Figure: time sequence (steps 1–4) of a request travelling from the Client through the SerialServer to Synchro and back, with screen captures.]

It is appropriate when the requests take a fixed, relatively uniform amount of time and are<br />

largely compute bound. This is the case in this application where the response of the remote<br />

serial client program is immediate and is determined by a 50 ms timer. The single thread<br />

processes all connection requests and CORBA messages. Application servants need not be<br />

concerned with synchronizing their interactions since there is only one thread active with this<br />

model. [4]<br />

AMI callback model<br />

AMI helps solve the problems with waiting efficiently for long latency calls to complete.<br />

With this technique, long-running calls do not interfere with other calls. It allows single-threaded

applications to avoid blocking while waiting for responses.<br />

One of the advantages of this model is that existing CORBA servers need not be changed at<br />

all to handle AMI requests. Furthermore, the TAO IDL compiler automatically generates<br />

methods that implement the AMI communication model. A reference to a handler object is<br />

passed as an additional parameter to the invocation method and the response is received in<br />

this handler object. The class handler declaration and implementation are also automatically<br />

generated.<br />

The serial server has been modified to generate a one second timeout. Requests are sent in a<br />

loop, and the responses arrive in the correct order (counter value) with a time difference of one
second.

The AMI client callback model requires client programmers to write more code than with the<br />

synchronous model. In particular, client programmers must write the Handler servant, as well<br />

as the associated client event loop code to manage the asynchronous replies.<br />
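The following sketch illustrates that extra code, assuming the hypothetical Robot interface from before with a two-way operation short move(in short t, in short r); the sendc_move() stub and the AMI_RobotHandler skeleton are what the TAO IDL compiler generates when AMI support is enabled, while the handler class and its behaviour are our own.

// Sketch of the AMI callback model for the hypothetical Robot interface.
#include <ace/Log_Msg.h>
#include "RobotC.h"
#include "RobotS.h"

class Move_Handler : public POA_AMI_RobotHandler
{
public:
  // Called by the ORB when the asynchronous reply arrives.
  void move(CORBA::Short ami_return_val)
  {
    ACE_DEBUG((LM_DEBUG, "move() completed with status %d\n", ami_return_val));
  }

  // Called instead of move() if the server raised an exception.
  void move_excep(::Messaging::ExceptionHolder *holder)
  {
    holder->raise_exception();
  }
};

void async_move(Robot_ptr robot, AMI_RobotHandler_ptr handler)
{
  // Returns immediately; the reply is later delivered to handler->move().
  robot->sendc_move(handler, 10, 0);
  // The client must keep running the ORB event loop (orb->run() or
  // orb->perform_work()) so that the callback can be dispatched.
}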



Events and notification model<br />

While the above communication methods are based only on the CORBA core implementation, the event
and notification communication models rely on additional services. They allow the creation of
event or notification channels that are used by producers and consumers.
In these models, producer and consumer are decoupled. If we keep the same data flow,
the client now plays the role of producer pushing data, and the serial server the role of
consumer. The data are pushed to the consumer by the event channel.
The data flow throughput is in this case limited by the serial communication speed, which is far
slower than the TCP/IP one. In the case of slow data production (manual input), the application
behaves like the original one. But if the data production runs faster, this can rapidly lead to
buffer overflow and lost data. If the communication is broken, or the application on the other
side is slow, or response times are variable, we cannot absorb the data flow.

In the previous implementation, if there is no response from the system connected to the serial
port, the caller blocks and returns after a timeout of 1 s. This means that if we use a push
model, the push period should be larger than 1 s. Consequently, it would be better for the
serial server to pull the data (PullConsumer) and for the client to provide them on request
(PullSupplier).

In the case of a mobile robot, a joystick could produce motion commands at regular intervals
(typically 50 ms), so a push model is best suited for the Supplier. We then get a
hybrid Push/Pull model in which the Event Channel acts as a queue. Another solution is to modify
the serial server implementation to use non-blocking (serial) communication.
What happens if a PushConsumer blocks on a push() invocation?
If we use a single-threaded Event Channel, all communications will also be blocked. It could
therefore be better to use a thread-per-connection or thread-per-client model for the ORB.

One of the advantages of the synchronous two-way communication model is to return the<br />

state of the communication on the serial port to the client. By using an event model, we lose<br />

this capability. It means that we need to use another method if we want to monitor the serial<br />

communication state.<br />

We see that using models with higher capabilities also requires more effort to program and to<br />

maintain, as well as more resources from the system.

Which model for which task?<br />

According to the task to be accomplished, different communication models can be selected.<br />

From model properties and tests described above, we derive the following guidelines.<br />

Configuration<br />

Synchronous two-way is best suited for light operations. It can be used for configuration tasks<br />

or to evaluate the availability of resources.<br />

Control


An Event model is best suited for components which continuously produce/consume data and<br />

for processes requiring RT capabilities.<br />

Motion commands are generated continuously (periodic or not): mouse click, joystick,<br />

manual commands, file, path generator, ... and are best propagated as events.<br />

Data processing<br />

Asynchronous Method Invocation can be used for components requiring long computation<br />

times (planning, localisation, stereovision,...).<br />

Visualisation<br />

Data are continuously produced (converted by intermediate components and forwarded to the<br />

next one) and finally arrive at the consumers. An event model (Notification) is best suited in

this case.<br />

5 CONCLUSION

Developing modular control software requires a systematic and detailed analysis of the
application requirements. Furthermore, adopting programming standards like CORBA and
choosing open-source software is, in our view, the only way to reach this software
modularity.

CORBA provides different communication models that suit different user needs. By using
more sophisticated models, like the Events model, we can develop more flexible software. On
the other hand, this generally requires writing more code and modifying the data flow models.

While using the simple CORBA synchronous model is no more complicated than writing socket-based
programs, the other models require assimilating new communication paradigms and
thinking differently. CORBA certainly has a steep learning curve, but it offers many benefits for
writing large distributed applications.

REFERENCES<br />

[1] Software Modularity for Mobile Robotic Applications, Eric Colon, Hichem Sahli, Clawar<br />

conference, September 2003, Catania, Italy.<br />

[2] Telematics Applications in Automation and Robotics - TA2001, July 2001, Weingarten,<br />

Germany<br />

[3] Pattern Languages of Program Design, Jim Coplien and Douglas C. Schmidt, Addison-<br />

Wesley, 1995, ISBN 0-201-6073-4

[4] Configuring TAO's components, Douglas C. Schmidt, http://www.cs.wustl.edu/<br />

~schmidt/ACE_wrappers/TAO/docs/configurations.html.


4-D GPR Image Based on FDTD Parallel Technique<br />
Baikunth Nath, Jing Zhang*
Department of Computer Science & Software Engineering, University of Melbourne,
Melbourne, VIC 3010, Australia
baikunth@unimelb.edu.au, jzhang@cs.mu.oz.au
* visiting from Computer Science and Technology Institute, Harbin Engineering University, China

ABSTRACT

Ground penetrating radar (GPR) has become an
established technology that provides valuable
information to aid the detection and identification
of landmines through image processing, so research
continues on advanced algorithms for the processing
of GPR data, especially for 4-D GPR imaging. The
FDTD method is useful for 4-D GPR imaging in
landmine detection, but it demands a great amount
of memory and computer time. In this paper we
describe our experience with 4-D GPR imaging and
discuss some steps taken to improve the speed of
our parallel FDTD algorithm.

1 Introduction<br />

Image processing has long been used to create
images for landmine detection. One of the recent
developments is the four-dimensional (4D) imaging
of landmines [1]. A 4D (three space dimensions plus
time) measurement is a geophysical measurement
that is repeated in time in the same three-dimensional
(3-D) configuration, and it has the following physical
connotation: two dimensions are related to the spatial
surface coverage of the experiment; one dimension is
related to the recording time of one measurement and
is later, during image formation, transferred to the
depth dimension. The sampling interval in the
repetition dimension is larger than the duration of
one coverage; moreover, the number of instants on
the repetition axis will be sparse for economic
reasons. The sampling rates of the common 3D
measurement are selected such that the 3D
heterogeneity is sufficiently captured. In addition,
the repetition interval has to be chosen so that the
localized change of 3D heterogeneity is sufficiently
captured.

Ground-penetrating radar (GPR) has been used
for several years as a non-destructive method of
detecting and locating landmines. Just as seismic
reflections are generated when a seismic wave
hits a layer in the subsurface with different
material properties, GPR reflections are
generated when a pulse hits an object or layer
with different electromagnetic characteristics.
From the different reflected signal data, a
landmine image can be built.

For solving electromagnetic problems, Yee
proposed the finite-difference time-domain
(FDTD) technique in 1966. In the beginning,
because it needed substantial computing resources,
there was little interest in the FDTD method [2].
With the advent of low-cost, powerful computers
and advances to the method itself, the FDTD
technique has since become popular. However,
the demand for high-performance computing in
engineering is endless. Many fields such as
complex object modelling, engineering design and
automation, electronics, aeronautics and medicine
pose computing challenges for which a single PC
is insufficient, even though PC performance is
much higher than before. Research on parallel
FDTD algorithms has already progressed a lot
[3, 4], and as an application of it, 4D GPR image
processing has become fast.

2 FDTD Parallel Technique<br />

Since the mid-1990s, the finite-difference time-domain
(FDTD) method has been successfully
applied to solving problems in ground-penetrating
radar systems for landmine detection.

FDTD computation involves a time-based<br />

leapfrog method [5, 6] where updating equations<br />

alternate between electric field and magnetic<br />

field calculations. In order to realize it,<br />

Maxwell’s equations are transformed from their<br />

vector, differential form into difference


equations.<br />

In the normal FDTD method, the electromagnetic
(EM) field values of the last time step and other
variables are stored in the memory of a computer.
The larger the electrical size of the analyzed object,
the more memory the computer needs. For example,
a computer with 32 MB of memory can only handle
a space of about 80×80×80 grid cells. We therefore
often use a more powerful workstation or even a
supercomputer to analyze the EM problems of an
electrically large object. But if we only have a PC
or a workstation without enough memory and still
want to handle an electrically large object, we meet
a serious problem. An efficient way to solve it is to
implement the FDTD algorithm on a parallel
computer. Several researchers have already
developed strategies for parallel FDTD computing
[7, 8, 9, 10], and this research area is flourishing.
This paper introduces a strategy for parallel
implementation of the FDTD algorithm on a COW
(Cluster of Workstations) parallel computing system
built with the Linux operating system and the PVM
parallel software. Some computing examples are
given to demonstrate the feasibility, correctness and
high efficiency of this strategy.

The main advantages of this method lie in three
aspects. Firstly, it is a time-domain method, which
means that data over the whole frequency band can
be obtained from a single time-domain calculation.
Secondly, the method can easily model complex
objects. Thirdly, the necessary memory is relatively
small compared with other low-frequency numerical
techniques such as the Moment Method [11, 12].

The details of the FDTD algorithm have already
been discussed in many references such as [13, 14].
The precondition of the parallel FDTD algorithm is
the division of the whole computation task, so that
every node of the COW computes one part of it.
The basic principle of the FDTD algorithm makes
the division of the computational space possible:
the electromagnetic field value at a certain position
is determined by the value at this position at the
last time step and by the field values of this time
step at nearby positions. The field value has no
direct relation to the values at positions far from
this point. So the whole computational space can
be divided into sections that can be computed on
separate nodes of the parallel computing system,
and the exchange of field values between nodes
needs to be executed only at the interfaces between
sections. Based on this observation, relay computing
between parallel nodes can be executed to reproduce
the serial computation of a single PC or workstation.

According to the software structure described
below, we built a test COW system with two PCs
and 10 Mbit/s Ethernet. The two PCs are a
Pentium II 266 MHz / 64 MB and a Pentium II
200 MHz / 64 MB machine, running Red-Flag
Linux 2.4 and PVM 3.4.2. To prove the feasibility,
correctness and high efficiency of this system,
several computing problems have been executed
with our parallel FDTD code.

A COW system [15] consists of two parts: the
workstations and the interconnection network, as
shown in Fig. 1. The Master-Slave programming
style, which is the key point of our parallel FDTD
algorithm, has been used. A Master-Slave program
normally has three steps. Firstly, the slave programs
are created by the master program, and the task
information and computing parameters are delivered
from the master to every slave. Secondly, the
parallel computation is executed on every node of
the COW, and within every time step the
synchronization and communication between the
nodes are maintained. Finally, the computing results
are transferred from every node back to the master
program and the computation is terminated. Taking
a two-node COW as an example, the computing
task is divided into two parts along the Y axis in one
dimension. The segmentation is shown in Fig. 2,
which also shows the data that need to be transferred
between the nodes. Taking the iterative computation
of the Hx and Ez components in one dimension as an
example, the difference formulas for j = m − 1/2 and
j = m are given below.
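In the standard lossless formulation these leap-frog updates read (our notation, with Δt the time step, Δy the cell size and μ, ε the material constants; the authors' exact normalisation may differ):

H_x^{\,n+1/2}\!\left(m-\tfrac12\right) = H_x^{\,n-1/2}\!\left(m-\tfrac12\right) - \frac{\Delta t}{\mu\,\Delta y}\left[E_z^{\,n}(m) - E_z^{\,n}(m-1)\right]

E_z^{\,n+1}(m) = E_z^{\,n}(m) - \frac{\Delta t}{\varepsilon\,\Delta y}\left[H_x^{\,n+1/2}\!\left(m+\tfrac12\right) - H_x^{\,n+1/2}\!\left(m-\tfrac12\right)\right]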

From Fig. 2 we know that we must deliver the
value of Ez(m) from section 2 (No.2) to section 1
(No.1) before the value of Hx(m-1/2) is calculated,
and, in the same way, we must deliver the value of
Hx(m-1/2) from No.1 to No.2 before the value of
Ez(m) is calculated.
The main loop proceeds by computing data and then
sending the updated data values to the processors
responsible for the adjacent sub-domains.


Synchronization across all nodes is required<br />

before this exchange can occur. Two cycles of<br />

computation, synchronization and data exchange<br />

are required for each time step.<br />
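Purely as an illustration of this per-time-step exchange, the sketch below shows what the loop on node No.1 could look like using the PVM calls mentioned above; the array layout, the update routines and the message tags are our own placeholders, not the authors' code.

// Sketch of the main loop on node No.1 (which owns the cells j < m).
#include <pvm3.h>

const int TAG_EZ = 1;   // message carrying Ez(m) from node No.2
const int TAG_HX = 2;   // message carrying Hx(m-1/2) from node No.1

void run_node1(double *Ez, double *Hx, int m, int nsteps, int tid_node2)
{
  for (int n = 0; n < nsteps; ++n)
  {
    // 1. Receive Ez(m) from node No.2 (blocking: also synchronises the nodes).
    double ez_m = 0.0;
    pvm_recv(tid_node2, TAG_EZ);
    pvm_upkdouble(&ez_m, 1, 1);

    // 2. Update the local magnetic field, including the boundary value Hx(m-1/2).
    //    update_Hx(Hx, Ez, ez_m, m);

    // 3. Send Hx(m-1/2) to node No.2 so that it can update Ez(m).
    pvm_initsend(PvmDataDefault);
    pvm_pkdouble(&Hx[m - 1], 1, 1);   // index m-1 stands in for the half cell m-1/2
    pvm_send(tid_node2, TAG_HX);

    // 4. Update the local electric field for the interior cells j < m.
    //    update_Ez(Ez, Hx, m);
  }
}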

Going through this FDTD processing, we obtain
processed GPR data from the raw GPR data, which
means that the image built from the processed GPR
data becomes even clearer.

Fig. 1. Structure of the parallel system: workstation PCs connected through a hub and interface to a router and the Internet.

Fig. 2. The one-dimensional segmentation (sections No.1 and No.2 meeting at j = m) and the data (Ez, marked *, and Hx, marked #) that need to be transferred between the nodes.

3 4D GPR Image Processing

GPR (ground-penetrating radar) data are obtained
as follows. If a GPR pulse hits a layer or object
with a different dielectric constant, the pulse is
reflected back and picked up by the receiving
antenna, and the time and magnitude of that pulse
are recorded; in many cases the transmitting and
receiving antennas are the same.

Essentially, a reflection occurs when there is an
increase in the dielectric constant of the materials in
the subsurface. The dielectric constant is defined as
the capacity of a material to store a charge when an
electric field is applied, relative to the same capacity
in a vacuum, and can be computed using Equation 1:

εr = (c / v)²                (1)

where
εr = the relative dielectric constant,
c = the speed of light (30 cm/nanosecond), and
v = the velocity of electromagnetic energy passing through the material.
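As a quick worked example of Equation 1 (our numbers, not taken from the paper): for a material in which the electromagnetic energy travels at v = 15 cm/ns,

\varepsilon_r = \left(\frac{c}{v}\right)^2 = \left(\frac{30}{15}\right)^2 = 4,

which is roughly the value often quoted for dry sand.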

GPR data is divided into three types [16, 17, 18,<br />

19], that is, A-scan, B-scan, C-scan. A-scan is a<br />

1D representation of a single GPR profile (trace),<br />

B-scan is a 2D representation of a series of GPR<br />

traces, and C-scan is a 3D representation of a<br />

series of 2D traces, they are shown in Fig.3 and<br />

Fig.4.<br />

After the GPR data have been processed by the
FDTD technique, we still need to create a 4D image
from them if we want to visualise the course of the
landmine detection. The 4-D representation of the
GPR data was obtained from the processed C-scan
GPR data by adding a time axis using the 3D-Doctor
trial software; it is shown in Fig. 5. The landmine
type is a T72 APL buried 5 cm under the sand. The
sample picture in Fig. 5 is a concatenation of four
3D images.

4 Conclusion<br />

In the parallel computation we need not change the
normal settings of the FDTD algorithm, but we must
make sure that the computed object is exactly the
same in the parallel run and in the reference serial
run. Taking a GPR data calculation as an example,
the whole computing space consists of 76×76×76
grid cells and is divided into two equal parts along
the Y axis. The recorded results are shown in Fig. 4
and Fig. 5, together with the results obtained from
the normal FDTD. According to this computation,
the speed-up ratio is 1.6 and the parallel efficiency
is about 0.8, which means the parallel FDTD system
is efficient. We can therefore conclude that this
parallel FDTD strategy is successful, its precision is
exactly the same as that of the normal serial FDTD,
and the system is feasible.

Fig.3 A-scan (Left) and B-scan (Right) Representations<br />

of GPR Data; T72 APL in 5 cm Sand<br />

References

[1]-- “Block iterative techniques for fast 4D<br />

reconstruction using a priori motion models in<br />

gated cardiac SPECT,” Phys. Med. Biol., vol.43,<br />

pp. 875–886, April 1998.<br />

[2]Amir Fijany, Michael A.Jensen, Yahya<br />

Rahmat-Samii, Jacob Barhen, “A massively<br />

Parallel Computation Strategy for FDTD: Time<br />

and Space Parallelism Applied to<br />

Electromagnetic Problems”,IEEE Trans. on AP,<br />

pp 1441-1449, 1995<br />

[3] Cheng Lifeng. “Red Hat Linux 6.0 Network<br />

and Communication”, China People’s Posts &<br />

Telecommunication<br />

Publishing House, 2000.<br />

[4] Chen Jiu, Long Hao, “Red Hat Linux 7 Self-<br />

Culture Guide”, Publishing House of National<br />

Defence Industry, 2001.<br />

Fig.4 C-scan (Left) Representations of GPR<br />

Data; T72 APL in 5 cm Sand<br />

Fig.5 4D Representations of GPR Data; T72 APL<br />

in 5 cm Sand<br />

[5] U Oguz, L. Gurel, “Frequency Responses of<br />

Ground-Penetrating Radars Operating Over<br />

Highly Lossy Grounds”, IEEE Trans. On<br />

Geoscience and Remote Sensing, Vol. 40, No. 6,<br />

pp 1385-1394,2002<br />

[6] W.P. Pala, A. Taflove, M.J. Piket, R.M.<br />

Joseph, “Parallel Finite Difference Time Domain


Calculations”, IEEE Trans. Antennas &<br />

Propagation, Vol. 30, No. 3, pp 83-85, 1991<br />

[7] A Taflove, S.C. Hagness, “Computational<br />

Electrodynamics – The Finite Difference Time<br />

Domain method”, Artech House, 1995<br />

[8] V. Varadarajan, R. Mittra, “Finite-Difference<br />

Time-Domain Analysis using Distributed<br />

Computing”, IEEE Microwave and Guided<br />

Wave Letters, Vol. 4, pp. 144-145, 1994<br />

[9] Q. Chu, K. Chan, C. Chan, “Parallel FDTD<br />

Analysis of Active Integrated Antenna Array”,<br />

Antennas and Propagation Society International

Symposium, Vol. 1, pp 628-631, 2002<br />

[10] Z.M. Liu, A.S. Mohan, T.A. Aubrey, W.R<br />

Belcher, “Techniques for Implementation of the<br />

FDTD Method on a CM-5 Parallel Computer”,<br />

IEEE Antennas & Propagation Magazine, Vol.<br />

37, No. 5, pp 64-71, 1995<br />

[11] T.V. Pistor, “Electromagnetic Simulation<br />

and Modeling with Applications in Lithography“,<br />

Ph.D Dissertation, University of California<br />

Berkeley, 2001<br />

[12] A.D. Tinniswood, P.S. Excell, M.<br />

Hargreaves, S. Whittle, D. Spicer, “Parallel<br />

Computation of Large-Scale FDTD problems”,<br />

Third International Conference on Computation

in Electromagnetics, Vol. 1, pp 7-12, 1996<br />

[13] M. Sypniewski, J. Rudnicki, M. Celuch-<br />

Marcysiak, “Investigation of multithread FDTD<br />

schemes for faster analysis on multiprocessor<br />

PCs”, IEEE Antennas & Propagation<br />

International Symposium, Vol. 1, pp 252-255,

2000<br />

[14] G.A. Schiavone, J. Codreanu, R.<br />

Palaniappan, P. Wahid, “FDTD speedups<br />

obtained in distributed computing on a Linux<br />

Workstation Cluster”, IEEE Antennas &<br />

Propagation International Symposium, Vol. 3, pp

1336-1339, 2000<br />

[15] C. Guiffaut, K. Mahdjoubi, “A Parallel<br />

FDTD Algorithm using the MPI Library”, IEEE<br />

Antennas & Propagation Magazine, Vol. 43, No.<br />

2, pp 94-103, 2001<br />

[16]T.-S. Pan, D.-S. Luo, and M. A. King,<br />

“Design of an efficient 3-D projector and<br />

backprojector pair for SPECT,” in Proc. Fully-<br />

3D Image Reconstruction Radiol. Nucl. Med.,<br />

1995.<br />

[17]Ma Jifu, “Generalized Computation of 3-<br />

Dimensional EM Field of Resonant Cavities and<br />

Its Applications”, Thesis for Ph. D of China<br />

Institute of Electronics, 1998<br />

[18]. Narayanan, M. A. King, E. J. Soares, C. L.<br />

Byrne, P. H. Pretorius,and M. N. Wernick,<br />

“Application of the Karhunen–Loève transform<br />

to 4D reconstruction of cardiac gated SPECT<br />

images,” IEEE Trans. Nucl. Sci., vol. 46, pp.<br />

1001–1008, Aug. 1999.<br />

[19]Manoj V.Narayanan,”Improved Image<br />

Quality and Computation Reduction in 4-D<br />

Reconstruction of Cardiac-Gated SPECT<br />

Images” IEEE Trans. Nucl. Sci., VOL.<br />

19,NO.5 ,MAY 2000.
Robotic Agents For Dangerous Tasks
Features and Performances

Štefan HAVLÍK
Institute of Informatics
Slovak Academy of Sciences, Severná 5, 974 01 Banská Bystrica, Slovakia
E-mail: havlik@savbb.sk


Abstract<br />

The paper deals with applications of robots performing
risky tasks in environments dangerous for humans,
or tasks where the robot should prevent the loss of
human life or large economic and ecological
damage. These application fields require specific
robotic systems that exhibit specific features and some
limited level of autonomy as to mobility and task-oriented
functions, which directly corresponds to the
availability of sensory information from the environment.
Besides its functional performance, any agent working
in a risky environment should exhibit
three features: self-recovery capabilities, minimal
risk assessment and maximal reliability in all actions.
The required performance is discussed on examples
of robotic agents for demining, searching /
cleaning dangerous environments and some surveillance
tasks. The concept of modular construction is
briefly presented.

Key words: service mobile robots / agents, hazardous
environment, demining, detection, self-recovery

1. Introduction<br />

Dangerous tasks, or work in hazardous environments,
are those tasks or situations that would endanger the
safety of human beings and / or of the valuable equipment
used, or of the nearby environment in which the emergency
management application takes place. Examples of such
environments are: physical catastrophes (earthquakes,
sea or river debacles, etc.), fire-extinguishing missions in
uneven terrain, nuclear accidents, clearing terrain of
landmines, etc.
As human safety is the highest priority, the interest is
to remove the operator from the scene of the hazardous
environment and either totally substitute him
by an on-board “intelligent” agent, which is expected
to provide the same or similar functionality, or to
provide the operator with such means as would enable
him to perform the same mission safely.

Performing these tasks by a “robot” is a big challenge<br />

for research in all domains of robotics. There<br />

are several task-oriented vehicles or mobile systems
for inspection, for searching dangerous terrain or for
fighting forest fires. But the actual need of
humanitarian demining has become one of the most
challenging worldwide drivers for the development of
new demining technologies that could be fast, safe and
reliable. A great research effort has been devoted
especially to the development of sensory systems and
signal processing techniques for the detection of mines
and unexploded ordnance [1,2,3,4].

Besides these, by now quite well established, applications,
the occurrence of terrorist attacks gives rise to a new
field for the application of robotic technologies: solving
situations caused by terrorist attacks, or by the threat
of their occurrence.

2. Analysis of needs for some dangerous tasks<br />

The general requirement is that such a task should be
performed quickly and safely in order to reduce any risk
for humans as well as any material, economic or
environmental damage. Let us analyze some tasks from
the point of view of robotic research and robotic technology.

Humanitarian demining<br />

A lot of research work has resulted in the design of several
demining technology concepts as well as in the development
of new machines and especially detection systems.
But it seems that, despite this effort, several sophisticated
solutions and systems will not find the acceptance
in practical use that could be expected. Let us
mention some main rules that should be taken into
account before starting new development of any tool:

− Minefields are not laboratories. Robust and reliable
constructions as well as control techniques
should correspond to the harsh working conditions and
environment. This includes solving so-called “self-recovery
strategies” for the most crucial situations that
could arise (occasional explosions, errors in systems
/ by operators, loss of communication, etc.).

− The cost and availability of detection as well as<br />

neutralization technologies is a very important factor<br />

that could limit their mass use in conflict areas.<br />

Automatic / robotic clearance should be faster (as to
productivity in m²/hour) and cheaper (as to total
cost/m²) compared to standard methods, as well as reliable
and safe.


− Any new demining technology should be easily<br />

accepted by users, as well as local authorities /<br />

people. The robotic system should satisfy specific<br />

conditions related to its local applications (people /<br />

country, cost / salaries for deminers, infected terrains,<br />

climatic conditions, mines, maintenance,<br />

etc).<br />

− There are no universal solutions. Robotic technologies
will not totally replace standard manual
searching / neutralization methods but will be
applied alongside them. Automatic approaches are especially
suited for primary detection and for clearing
large areas under reasonably homogeneous conditions
(obstacles, mines, vegetation, etc.).

− The reliable detection and localization of mines<br />

(UXO) as targets is the task of primary importance.<br />

It can be said: “ When a mine is found and<br />

localized about 90% of problems are solved”.<br />

− Any new solution should minimize the risks for
people as well as the risk of damage to the relatively expensive
technology. This risk of damage, or the expected
lifetime of the new technology, should be
factored into the expected, comparable total cost of
demining a unit of surface.

Fire fighting<br />

Fire fighting represents another dangerous task, for
several reasons. There are harsh conditions in the
operation space: toxic gases, high temperatures, the possibility
of explosions or of collapsing structures
or trees, poor or no visibility, obstacles and poor
accessibility, etc. An intended robotic system should

exhibit some principal functions as follows:<br />

- Mobility in complicated terrain with unknown obstacles.<br />

The vehicle can move on wheels, belts, or<br />

depending on obstacles, on legs.<br />

- Sensory equipment. The principal requirement is positioning
the vehicle in a global / local reference system.
For measuring the actual position in global world
coordinates, GPS devices able to ascertain the
position with a resolution of 1 m or less can be used. The other
problem is to navigate the vehicle according to the actual
situation (obstacle avoidance, thermal navigation to the
source of the fire, solving recovery situations, etc.). To
do this, besides visual feedback, an additional thermal
imaging system is needed for navigation under poor
visibility conditions.

- The vehicle should carry an
active system for fire intervention (water / foam
/ sand gun, placement of explosives to extinguish the
fire, removal of objects / obstacles, etc.).

- Control and communication should reliably work<br />

and such an agent should exhibit a given degree of<br />

autonomy to solve some unexpected situations.<br />

Anti-terrorists interventions<br />

This field covers a broad range of possible situations
corresponding to particular threats and
tasks. It should be noted that the attack on the WTC
means that a new strategy should be adopted in order
to prevent or resolve such situations. Until then,
terrorist actions were oriented more or less towards
provoking public opinion. Economic damage and the
number of people killed, including the attackers' own lives,
were not so significant, or were not the main goal of
terrorist activities. Unfortunately, since that time, the
purpose and strategy of such attacks has been to cause
maximal damage and to kill the maximal number
of people. This fact naturally calls for a different
approach to prevention as well as to protection
against such actions.

3. Some specific performance features for<br />

agents performing dangerous tasks

Performing operations in a risky environment requires
some principal features to be satisfied by the intervention

agent.<br />

Self-recovery<br />

This is an important and specific feature directly
related to particular tasks. Its main purpose is to prevent
or avoid losses or self-destruction of the agent,
and to finish a given action in a risky environment
without serious damage. The self-recovery strategies
should start especially in unwanted situations such as
the following:
- in case of any technical failure (communication,
engine, control system, sensory system,
etc.),
- in case of a faulty decision made by the operator,
- when there is no, or not enough, information for further
action, so that continuing seems risky for the agent and it
could be destroyed.
The general requirement is to build the agent and all its
functional parts to be as reliable as possible. A typical
requirement is implementing the communication as two
independent systems.
Let us consider an agent performing a demining operation
in a minefield. Any of its motions should be carefully
considered, since any wrong movement
could result in the explosion of a mine and the destruction of the agent.
The principal strategy should be based on the following
rule: any motion of the vehicle may only be made in
a direction where no mines were reliably detected,
or where all mines have been destroyed / removed. In
practice this means that the vehicle can move in the
direction in which its detection or destruction systems are
mounted.

Minimal risk<br />

The other situation arises in case of a failure after which
the vehicle cannot perform the desired activities. The
problem is then to remove it from the dangerous terrain
without any risk for persons. One of the simplest ways
to solve this situation is to use a cable and pull the vehicle
out with a winch mechanism. Another possibility
is to use a second vehicle, which helps to remove
the first one from the minefield.
Solving any such situation confronts the operator / operating
system with a decision problem: to decide on the next
action when an unexpected situation has arisen. The general
rule is: the operator decides on the next paths of the
agent so as to minimize any risk of damage to
the agent itself. This procedure follows the standard
decision algorithms [6] according to the scheme
for risk assessment in Fig. 1.

Fig. 1. The general risk assessment and decision procedure: state the criteria and acceptable risk levels; estimate the risk and level of hazard; if the risk is not acceptable, look for a new approach / option to reduce the risk (cost? time?), evaluate the options against the criteria, choose an approach / option and re-estimate the risk.

It is obvious that some decisions can be represented
by relatively simple routines working over a given
set of options, i.e. rules of the form “Action: <option>
if <condition: sensor>”. On the other hand,
other operator decisions require a much more complex
assessment of the possible risks with respect to the
given criteria. A typical situation of this kind arises during
an automatic demining operation when the mine detection
systems do not give reliable information about
the presence of mines and there is only a suspicion that
“something is inside”. The operator has to decide whether
to continue searching the terrain with the vehicle /
agent carrying the detection systems, or to send in
the destruction (flailing) vehicle directly. It is clear that the
operator will choose the destruction tools, for reasons
of minimal risk.
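Purely as an illustration (the options, the sensor quantity and the thresholds below are invented and do not correspond to any particular system described here), such a sensor-conditioned rule could be coded as follows:

// Illustrative rule of the form "Action: <option> if <condition: sensor>".
enum Action { MOVE_FORWARD, STOP_AND_WAIT, RETREAT };

// Pick the next action of the agent from a fixed set of options, based on the
// confidence reported by the mine-detection sensor.
Action decide_next_action(double detection_confidence)
{
  if (detection_confidence < 0.1)   // terrain ahead reliably reported clear
    return MOVE_FORWARD;
  if (detection_confidence < 0.5)   // only a suspicion: hand the decision to the operator
    return STOP_AND_WAIT;
  return RETREAT;                   // probable mine: back off along the already cleared path
}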

Reliability<br />

The reliability of performing a task should be considered<br />

with respect to criteria given for a particular<br />

operation. To compare the different requirements and
criteria, some example features are given in the
table below.

Task / Task description (example) / Description of the action / Criteria

Task: Humanitarian demining
Task description (example): Steps of demining: cleaning vegetation, if any exists; detection and localization of mines; destruction of mines in place or their removal.
Description of the action: Robotic agents performing particular tasks are operated from the centre. The agents are equipped with detection systems and tools for destruction / neutralization [1,2,3].
Criteria: reliability of cleaning (up to 100%); cost / effectivity.

Task: Nuclear power plant actions
Task description (example): Surveillance and manipulations with nuclear materials.
Description of the action: The robot hand executes manipulations with dangerous materials; the detection / measuring system verifies the level of radiation. Equipment: remotely-operated transport platform equipped with a robot manipulator; portable operator's console with a radio command communication channel; vision system with a radio communication channel; gamma detector comprising a static detection unit (gamma locator, SDU), a dynamic detection unit and an information-processing unit.
Criteria: reliability; minimum time.

Task: Anti-terrorist action
Task description (example): Airport action: remove an unattended baggage suspected of containing explosives or other dangerous materials. During the action all persons / passengers should move away into a protected / remote zone. The agent should remove the dangerous object from the area as soon as possible.
Description of the action: The remotely-operated mobile robotic agent approaches the object, takes it with an arm and inserts it into a container on its platform. The object is then carried away to a safe place for further examination. The environment is quite well defined as to terrain, visibility and weight of objects. Equipment: mobility system, robotic arm, vision, hand-held camera, remote control (~200 m), other sensors / detectors.
Criteria: prevention of accidental explosion.


4. Modular concept<br />


As follows from the study and analysis of particular
applications, the agents for all these applications exhibit
some “general purpose” parts that perform common
functions as well as some specific “task oriented”
parts. The modular concept of a robotic agent is
based on the separation of these functional parts. Such
an open-architecture approach can
minimize the cost of the general-purpose system and of the
whole agent as well.
The general-purpose system includes the mobility system
– the vehicle – and the on-board manipulation equipment.
The mobility system is usually a remotely controlled
vehicle moving on wheels, belts or legs. Particular
applications differ in their requirements on speed,
weight and mechanical protection against the environment.
As regards the control and communication systems,
including the sensors / detectors related to motion and
manoeuvring capabilities, the main parts of these systems
can be practically unified.
The on-board manipulation equipment comprises
a heavy-load manipulator / platform and a robot
arm.
Task-oriented parts are the special tools and sensory
equipment dedicated to performing a given task.

Such a modular approach and open-architecture solution
enables standardization of the majority of the functional
parts. This concept of modularity can be seen
in Fig. 2, with the following functional blocks:
− the mobile platform / vehicle (on wheels / belts),
− the heavy-load manipulator (2-3 d.o.f.),
− the long-reach robot arm (5-6 d.o.f.),
− control and communication systems.

Fig. 2. A modular concept of the multi-purpose robotic vehicle (robotic arm with the set of tools, camera, 4-DoF heavy-load manipulator).


The pre-project proposal based on this modular concept<br />

with some performance characteristics is given

below:<br />

Fig.3. The vehicle with the platform<br />

Fig.4. The long reach robot arm<br />

The vehicle:<br />

− speed 0-5 km/h<br />

− remote vision (optionally IR)<br />

− global / local positioning (GPS)<br />

− remote control and communication ~ 5 km<br />

− control panel<br />

The 3-4 d.o.f. heavy load manipulator:<br />

− max. load ~ 500kg<br />

The long reach arm:<br />

− 5-6 d.o.f.<br />

− payload capacity 20-30kg<br />

− reach 3m<br />

− the set of primary tools (grippers)<br />

Additional equipment attached to this base structure<br />

gives a task oriented functionality required for the<br />

agent.<br />

5. Robotic agent for demining<br />

Let us start from the modular concept of agents.
The open architecture makes it possible to build several
modifications performing specific tasks related to
demining operations.

The agent for searching dangerous terrain and target
localization.
This agent is in practice a general carrier of several
detection systems, performing mapping of dangerous
terrain and localization of dangerous targets. The detection
systems are mounted on the frontal platform fixed on the
manipulator, or on the robot arm.

Neutralization / destruction vehicle.<br />

The neutralization vehicle is a general carrier of
various destruction systems (flails, guns, laser gun,
explosives, etc.). This vehicle, with similar manoeuvring
capabilities, should be more robust and be protected
against the pressure waves from the explosions
of mines. The principal configuration is shown

in Fig. 5.<br />

Fig. 5. Vehicle with flailing destruction system<br />

The realization of this vehicle with flailing destruction<br />

tool can be seen in Fig.6.<br />

Fig.6. The “DIANA” flailing vehicle for demining<br />

Robotic arm with set of tools.<br />

The long-reach arm will be used for fine work such
as: removal / deployment of neutralization explosives,
probing, fine cutting of the vegetation, localization
of mines using additional detectors, etc.
The set of tools consists of probes, cutters, various
grippers, additional sensors for detecting explosives,
etc. At the end of the arm there will be a small camera
that allows detailed views of the mine and its
vicinity. Control is assumed to be in local world or
tool reference coordinates.

Besides the flailing system in Fig. 5, other principles that trigger the explosion of mines can be used: directed-energy systems, a laser gun or a sniper rifle. One of the most reliable methods of destroying a mine is to place an additional explosive charge beside it; this is a task for the robotic arm with its set of tools (grippers, sensors, probes, etc.). The input data for all these systems are the positions / coordinates of objects recognized as mines.

The modular concept in Fig. 7 thus consists of common functional blocks and a special attachment that defines the purpose of such an agent.

[Fig. 7 blocks and functions: supervising and control centre; GPS; GIS (maps); mobility system (navigation, obstacle avoidance, self-recovery, tracking terrain, maneuvering); perception and recognition (scanning, target positioning); robot arm (fine motions, scanning motions, handling of detectors and small tools); neutralization tools (flail, laser, etc.); mechanical tools for removing obstacles (digger, shovel, cutter, gripper, saw); minefield]
Fig. 7. A modular concept and parts of the robotic vehicle for demining operations

6. Conclusion

The general concept of mobile robotic systems for performing tasks in hazardous environments, especially demining operations, was presented. The real need for and importance of developing such robotic systems is a major challenge for researchers working in robotics, control, sensing technologies, signal / information processing and communication.
As briefly shown, any agent working in a risky environment should, besides its functional performance, exhibit at least three features: self-recovery capability, minimal-risk assessment and maximal reliability in all actions.

A concept of modularity in the construction of these mobile agents is outlined. It is based on the fact that, for the majority of similar tasks, it is possible to define general and common functionalities that a multi-purpose mobile system can satisfy. The specific features of an agent are then given by task-oriented tooling and sensory equipment. This concept is illustrated with two example vehicles for demining operations.

References<br />

[1] Proc. EUDEM2-SCOT 2003 Int. Conf. on Requirements and Technologies for Detection, Removal and Neutralization of Landmines and UXO, Sept. 15-18, 2003, Brussels, Belgium.
[2] Havlik, S.: Some concepts of mine clearance robots. Proc. Int. Conf. on Robotics and Automation, Taipei, Taiwan, Sept. 2003.
[3] Proc. HUDEM'02 IARP Workshop on Robots for Humanitarian Demining, Nov. 3-5, 2002, Vienna.
[4] www.gichd.ch
[5] Salvall, A. Avello, L. Briones: Two Compact Robots for Remote Inspection of Hazardous Areas in Nuclear Power Plants. Proc. International Conference on Robotics and Automation, 1999.
[6] Kaminski, L. et al.: The GICHD Mechanical Application in Mine Clearance Study. Proc. EUDEM2-SCOT 2003 Int. Conf. on Requirements and Technologies for Detection, Removal and Neutralization of Landmines and UXO, Sept. 15-18, 2003, Brussels, Belgium, pp. 335-341.


How to Design a Haptic Telepresence System for the Disposal of<br />

Explosive Ordnances<br />

Bernd Petzold<br />

and Michael F. Zaeh<br />

Institute for Machine Tools and<br />

Industrial Management (iwb)<br />

Technische Universität München<br />

Germany<br />

bernd.petzold@iwb.tum.de<br />

Alexander Kron<br />

and Günther Schmidt<br />

Institute of Automatic Control<br />

Engineering (LSR)<br />

Interactive Systems & Control Group<br />

Technische Universität München<br />

Germany<br />

alexander.kron@ei.tum.de<br />

Abstract<br />

Barbara Deml<br />

and Berthold Färber<br />

Human Factors Institute (lfa)<br />

University of the Armed Forces<br />

Germany<br />

barbara.deml@unibw-muenchen.de<br />

At the Technische Universität München (TUM) an experimental telepresence system is developed enabling<br />

the disposal of explosive ordnances (EOD) without any risk for the user. Most current EOD-systems enabling<br />

user control of remote environments are single-armed configurations and display only visual feedback. In<br />

contrast to this, the presented telepresence system consists of an intuitive bimanual interface and displays

visual as well as force feedback to the user. In order to generate guiding design principles, two operator<br />

consoles were set up and evaluated experimentally.<br />

1 Introduction<br />

The threat that explosive devices pose to military personnel and the civilian population is more present these days than ever. Disposing of explosive ordnance still requires direct human interaction most of the time.

Though robots are available for basic operations during the explosive ordnances disposal (EOD), they do not<br />

suffice for all required actions [3]. This is mainly due to the poor interaction between the human operator<br />

(HO) and the robot. Usually, the operator controls the robot by supervising the robot’s actions via a monitor<br />

screen remotely (see Fig. 1). As the monitor screen does not provide any depth cues, it is very difficult to<br />

determine the exact spatial positions of remote objects as well as of the robot end-effector. Besides, current<br />

control panels do not provide an intuitive interface. Smooth movements of the manipulator require a lot of training, and even expert users are not able to accomplish all the tasks needed for EOD, for the

following reasons: First, most manipulators are designed to be single-armed configurations. As almost all<br />

human interactions are two-handed they cannot be mapped onto a single manipulator in an optimal manner.<br />

Second, present systems usually do not display force feedback to the HO which in consequence diminishes<br />

a sensitive interaction in the remote environment. Especially for visually hidden operating areas this turns out<br />

to be a major drawback. To improve current EOD-systems, a more sophisticated bimanual interaction,<br />

experiencing haptic (force and touch) feedback, seems to be necessary [5]. Telepresence technology is therefore a highly promising approach for developing EOD-systems [1, 3].

Figure 1: EOD with the commercially available single-armed TeleRob vehicle [7].<br />

Telepresence is characterized by a high-fidelity human-system interface which enables the HO to feel<br />

‘present’ in a remote environment. To guarantee familiar and intuitive manipulations, force feedback devices<br />

play a major role for the design of such an interface. As the human hand is a very complex organ, the<br />

devices vary extremely in size, shape, and function. Some devices track finger motions and display grasping<br />

forces to the user (e.g. CyberGrasp, from Immersion, Corp. [2]). Other devices present contact forces or the<br />

weight of objects (e.g. PHANToM-Series from Sensable Technologies [6]).


At the TUM two telepresence EOD-systems for the disposal of a remote bounding fragmentation mine are<br />

developed (Section 2). In order to gain a deeper insight in how to design a haptic telepresence EOD-system,<br />

both setups were evaluated experimentally (Section 3, 4).<br />

2 Telepresence Setup<br />

2.1 Teleoperator<br />

The teleoperator consists of two manipulator arms each providing 4 degrees of freedom (DoF) (3<br />

translational and 1 rotational) in motion [3, 4]. For the manipulation task the end-effectors are equipped with<br />

two-jaw grippers; one is arranged vertically for task execution by the dominant hand, while the other is arranged horizontally for the non-dominant hand's operations (see Fig. 2). The shared workspace of both arms is box-shaped, measuring 60 x 20 x 30 cm. The maximum gripper opening is 9 cm. Both

arms are equipped with force/torque sensors that measure contact forces as well as the gripping forces while<br />

picking up and holding objects.<br />

2.2 Operator Side<br />

Figure 2: Bimanual teleoperator system.<br />

For the operator side, two different setups are realized: setup A (see Fig. 3) resembles the manipulator kinematically and provides 4 DoF of motion and active force feedback through the same SCARA configuration as

the teleoperator [4]. The grippers of the telemanipulator are replaced by clamps by which the wrists of both<br />

arms are linked to the haptic display. Thus, the operator perceives appearing contact forces at his/her wrist.<br />

A force/torque sensor which is integrated below the clamps enables the implementation of a dual hybrid<br />

control architecture [4] displaying teleoperator forces one-to-one. For the grasping procedure a CyberGrasp<br />

exoskeleton [2] for each hand is applied so that the user can perceive gripping forces. Consequently, the<br />

overall force transmission is done in a parallel manner by a combined finger-wrist display.<br />
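The following simplified Python sketch (our illustration, not the dual hybrid controller of [4]; the read_*/command_* callables stand for hypothetical device drivers) shows the basic idea behind setup A's force display: operator motion is forwarded to the teleoperator, and the measured contact forces are reflected back unscaled, i.e. one-to-one.

    import numpy as np

    FORCE_SCALE = 1.0   # setup A displays teleoperator forces one-to-one

    def telemanipulation_step(read_master_pose, read_slave_wrench,
                              command_slave_pose, command_master_force):
        """One cycle of position-forward / force-feedback telemanipulation."""
        pose = read_master_pose()                    # operator wrist pose (4 DoF)
        command_slave_pose(pose)                     # forward the motion to the teleoperator
        wrench = np.asarray(read_slave_wrench())     # measured contact force/torque
        command_master_force(FORCE_SCALE * wrench)   # reflect it unscaled at the wrist display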

For setup B (see Fig. 3) two PHANToMs (Desktop and 1.5 / 6DoF) are used as input devices enhanced by<br />

two-finger gripping masters so that the operator can perform grasping procedures. As PHANToM devices do<br />

not provide integrated force measurement, a force-pose control architecture is implemented to display the<br />

weight of objects as well as the occurring contact and gripping forces. In order to display hard environmental<br />

contacts, the PHANToM devices are made compliant by local impedance control [3]. Despite the adjusted

manipulator compliance, the achieved force capability is still less than the magnitude of the measured<br />

contact forces. Therefore, the displayed forces are scaled appropriately and in consequence the maximum<br />

forces are lower compared to setup A. Force transmission occurs exclusively at the fingertips. As the

PHANToMs are desktop devices, the operator performs the teleoperation sitting in front of the system<br />

console.<br />
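A minimal sketch of the force scaling described for setup B (our reconstruction; the numerical limits below are assumptions for illustration, not device values taken from the paper): measured contact forces are scaled down and saturated so that they stay within what the desktop devices can display.

    DEVICE_FORCE_LIMIT_N = 6.0      # assumed maximum force the input device can display
    EXPECTED_MAX_CONTACT_N = 20.0   # assumed largest contact force expected at the gripper

    SCALE = DEVICE_FORCE_LIMIT_N / EXPECTED_MAX_CONTACT_N

    def displayed_force(measured_force_n):
        """Scale and saturate a measured contact force before display at the fingertips."""
        f = SCALE * measured_force_n
        return max(-DEVICE_FORCE_LIMIT_N, min(DEVICE_FORCE_LIMIT_N, f))

    print(displayed_force(15.0))    # 4.5 N displayed for a 15 N contact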

In order to ensure natural motions, users are encouraged to reconfigure their workspace during the teleoperation: by pressing a pedal, the communication between display and teleoperator is disconnected and the input devices can be repositioned; a subsequent press reconnects the user and the teleoperation continues. During this operating mode the telemanipulator might reach the limits of its workspace. Therefore, manipulator motions coming close to constraints in joint space are augmented by mapping virtual arrows into the real stereo video images. In addition, the user is supported by augmented force feedback pushing him/her back into the workspace smoothly [3]. The same kind of augmentation is applied to avoid any collision of the two manipulator arms.
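The pedal-based workspace reconfiguration and the haptic workspace augmentation can be sketched as follows (an illustrative reconstruction, not the authors' controller; joint limits, margin and gain are placeholder values).

    import numpy as np

    Q_MIN = np.radians([-90.0, -60.0, -45.0, -180.0])   # assumed joint limits (4 DoF)
    Q_MAX = np.radians([ 90.0,  60.0,  45.0,  180.0])
    MARGIN = np.radians(10.0)    # start augmenting 10 degrees before a limit
    GAIN = 5.0                   # strength of the repulsive augmentation

    def augmentation(q):
        """Repulsive term that pushes the operator back when joints approach their limits."""
        low = np.maximum(0.0, (Q_MIN + MARGIN) - q)    # intrusion into the lower margin
        high = np.maximum(0.0, q - (Q_MAX - MARGIN))   # intrusion into the upper margin
        return GAIN * (low - high)

    def master_command(master_pose, slave_pose_at_engage, master_pose_at_engage, pedal_pressed):
        """While the pedal is pressed the connection is open and the master can be
        repositioned; on re-engagement the slave continues from where it stopped."""
        if pedal_pressed:
            return None                                   # connection open: no slave command
        return slave_pose_at_engage + (master_pose - master_pose_at_engage)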


Figure 3: Operator side for setup A and for setup B.<br />

Instead of a conventional visualisation by video-streaming, a stereoscopic view is implemented in both<br />

setups. The scene is recorded by a stereo camera setup with fixed view position and focus. On the operator side

the stereo images are displayed by means of a Head Mounted Display (HMD). The three-dimensional<br />

appearance by stereo-view ensures a reliable depth estimation of the scene.<br />
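As a brief worked illustration of why the stereo pair yields usable depth cues (the camera parameters below are assumptions chosen for the example, not values from the paper): for a rectified stereo rig the depth of a scene point follows directly from the pixel disparity between the two images, Z = f * b / d.

    FOCAL_LENGTH_PX = 800.0    # focal length in pixels (assumed)
    BASELINE_M = 0.12          # distance between the two cameras in metres (assumed)

    def depth_from_disparity(disparity_px):
        """Depth Z = f * b / d for a rectified stereo camera pair."""
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    print(depth_from_disparity(40.0))   # 2.4 m for a 40-pixel disparity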

3 Experimental Evaluation<br />

It is obvious that both systems differ in more than one dimension. Though both setups are able to display<br />

contact forces as well as gripping forces, they vary at least in three features: Setup A corresponds better to<br />

the kinematical configuration of the teleoperator and is distinguished by providing higher force feedback, whereas setup B is designed without a tight system linkage and therefore enables a more natural activation of the limbs. As these variations could not be held constant or controlled during the experiment, much

emphasis was put on a structured qualitative interview with which user requirements were explored<br />

thoroughly. The experimental evaluation was guided by the following hypotheses:<br />

H1 Higher force feedback will reduce the applied contact forces in the remote environment and will<br />

decrease the distance error of the gripper when grasping objects. In consequence users who<br />

prefer setup A will experience this system to be more transparent and sensitive for the task at<br />

hand (amount of haptic feedback).<br />

H2 By a tight system linkage a natural presentation of force feedback is diminished. In consequence<br />

users who prefer system B will experience that interface to be more realistic and classify those<br />

interactions to be more sensitive (presentation of haptic feedback).<br />

H3 The kinematical correspondence of teleoperator and operator is less important than the amount<br />

or the presentation of the haptic feedback. For this reason the commanded trajectories will not<br />

differ and both setups will reveal the same degree of motion efficiency.<br />

H4 A haptic augmentation of occurring constraints in the workspace will be perceived as useful and<br />

will not increase the operator’s mental load.<br />

3.1 Experimental Procedure<br />

The disposal of a fragmentation mine of the type PROM (see Fig. 4) served as experimental scenario. The<br />

task required the following operations: First, the mine had to be gripped by the non-dominant hand and a<br />

retaining pin had to be picked up by the dominant hand. Next, the retaining pin had to be put on the<br />

detonator so that the mine was covered and the detonator could be unscrewed without any risk. Finally, both<br />

the detonator and the corpus of the mine were to be stored separately. The whole experimental procedure<br />

requires bimanual activity: While the non-dominant hand held and stabilized the gripped object, the dominant<br />

hand performed all sensitive procedures. According to the participants, unscrewing the detonator was judged

to be the most demanding operation followed by inserting the retaining pin; correspondingly longer<br />

completion times were recorded for these two subtasks.


Mainly, experienced EOD-experts from the German Armed Forces were recruited as participants. As the<br />

participants were not accustomed to telepresence technology, a sufficient training period was provided. The<br />

experiment was conducted as a within-subject design, so that each of the 20 participants performed the

defusing task with both setups; serial and positioning effects were avoided by a balanced design.<br />

Figure 4: Assembly parts of a fragmentation mine of the type PROM served as experimental<br />

scenario - original (left) and experimental reproduction (right).<br />

3.2 Experimental Results<br />

When asked to decide for one setup, two-thirds (67%) of the users chose setup A, whereas one third (33%) preferred to work with setup B. The subjective choice could be backed up by objective criteria, as the participants required less reconfiguration of the workspace when teleoperating with their preferred system. As supposed, both groups justified their decision by judging their preferred setup to be more sensitive for the task at hand. While according to the first group this was due to the displayed gripping and contact forces, the second group appreciated the available gripping masters and the way input movements were performed (see Fig. 5).

[Figure 5 charts: workspace reconfiguration (%) and judged sensitivity (%) for setup A vs. setup B, split by preferred setup, plus median importance rankings of the feedback features (gripping forces, contact forces, way of movement, presentation of forces, gripping masters, weight of objects, teleoperator correspondence) for each setup; lower medians indicate higher importance]

Figure 5: Participants reconfigured the dominant workspace less often when working with the setup that<br />

they preferred, and which they judged to be more sensitive. While for system A the displayed forces were regarded as important, for system B the way movements were executed was decisive.

H1: In order to evaluate the objective value of higher displayed contact and gripping forces, both systems<br />

were compared according to the applied contact forces in the remote environment. Besides, the occurring<br />

error between gripper distance input at the operator side and the measured distance at the end-effectors was<br />

assessed (see Tab. 1). The assumption that contact forces which are displayed one-to-one to the user would<br />

reduce the applied contact forces in the remote environment was supported by a t-test. Furthermore, the<br />

percentage distance error, both for the non-dominant and the dominant hand, turned out to be significantly<br />

lower when working with system A. As a lower distance error indicates that the gripped object was felt more<br />

precisely, it is comprehensible when two-thirds of the participants classified system A to be more sensitive.


                                   mean A    mean B    std. error A   std. error B   t-test (sign. level)
non-dominant contact forces [N]    6.0436    6.7716    0.1136         0.5367          1.327 (0.196)
dominant contact forces [N]        2.7388    4.2984    0.0974         0.2002          7.004 (0.001)**
non-dominant distance error [%]    34.80     84.36     3.52           3.79            9.581 (0.001)**
dominant distance error [%]        39.44     67.96     3.51           3.67            5.614 (0.001)**

Table 1: Comparison of appearing contact forces and gripping error for system A and B; highly significant<br />

results are marked by asterisks.<br />
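The following short sketch shows how we read these two measures; it is our reconstruction of the analysis, not the authors' code, and the sample values are invented for illustration (they are not the recorded data of Table 1). The percentage distance error relates the commanded gripper opening to the one measured at the end-effector, and a paired t-test compares the two setups, matching the within-subject design.

    import numpy as np
    from scipy import stats

    def pct_distance_error(commanded_mm, measured_mm):
        """Percentage error between commanded and measured gripper opening."""
        c, m = np.asarray(commanded_mm), np.asarray(measured_mm)
        return np.abs(c - m) / c * 100.0

    # Invented per-participant dominant-hand contact forces [N], for illustration only
    forces_A = np.array([2.5, 2.9, 2.6, 3.0, 2.7])
    forces_B = np.array([4.1, 4.4, 3.9, 4.6, 4.2])
    t, p = stats.ttest_rel(forces_A, forces_B)      # paired comparison of setup A vs. setup B
    print(f"t = {t:.3f}, p = {p:.4f}")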

H2: Whether the activation of the limbs is natural or not can only be judged in comparison with an identical<br />

direct interaction. For this reason a further 20 participants were recruited and asked to perform the defusing

task in a real physical environment. The manual task was executed twice whereby the participants interacted<br />

once similar to setup A and once similar to setup B. In both conditions the participants were instructed to<br />

concentrate on their haptic sensation so that they would be able to assign the appropriate percentage values<br />

to the demanded limbs (see Tab. 2). When the participants were interviewed after the teleoperation they<br />

were also asked to express the perceived activation by percentages. As setup A provides a less natural<br />

stimulation, one would expect the gathered percentage contributions to differ from the manual benchmark data set, whereas there would be no difference for the percentage contributions of setup B.

Actually, a chi-square test revealed that the participants favouring setup A did not differ from the manual<br />

benchmark data set, neither for setup A nor for setup B (I, III). In contrast to this, the participants preferring<br />

setup B differed in both teleoperation settings from the manual benchmark data set (II, IV). Thus the<br />

participants favouring setup A seemed to be more robust towards the perception of haptic feedback: they<br />

were not aware of the altered presentation of haptic feedback displaying interaction forces in a parallel<br />

manner at wrist and finger and tended to fuse sensed force feedback into realistic perceptions. In contrast,<br />

the participants favouring setup B recognized a different haptic stimulation in setup A. Though their<br />

recordings for setup B also differed from the manual benchmark data set, they demonstrated sensation of<br />

the higher stimulation of the fingertips that has been especially characteristic for this setup. In consequence<br />

the participants who preferred setup B turned out to be particularly sensitive towards the presentation of<br />

force feedback at the fingertip performing precision grasps and for this reason it is comprehensible that they<br />

decided for setup B.<br />

Where did you sense contact forces in ...     benchmark (%)   teleoperation, A preferred (%)   teleoperation, B preferred (%)
setup A:  fingertip                           46.75           46.39                            38.89
          phalanx                             31.50           34.17                            46.67
          palm                                 8.50            8.33                             4.78
          wrist                               13.25           11.11                             9.66
          chi-square test                                     I) 0.58 (0.90)                   II) 11.23 (0.01)*
setup B:  fingertip                           50.56           38.06                            53.33
          phalanx                             30.22           41.67                            36.67
          palm                                 6.11            6.10                             0.56
          wrist                               13.11           14.17                             9.44
          chi-square test                                     III) 7.51 (0.06)                 IV) 27.55 (0.01)*

Table 2: Comparison of a manual benchmark data set with the two teleoperation settings according to<br />

the activation of the limbs.<br />
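As a check of how the reported statistic can be reproduced, the sketch below feeds the percentage contributions of Table 2 directly into a chi-square goodness-of-fit test; treating the percentages as frequencies is a simplification of the original analysis, but it reproduces value (I) of the table.

    from scipy import stats

    observed = [46.39, 34.17, 8.33, 11.11]   # setup A, participants preferring setup A
    expected = [46.75, 31.50, 8.50, 13.25]   # manual benchmark for setup A
    chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # ~0.58 (0.90), as reported in Table 2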

H3: For gaining a robust mental model of the telepresence system it might be favourable when the operator<br />

console corresponds to the kinematical configuration of the telemanipulator. To assess the motion efficiency<br />

both setups were compared in terms of their telemanipulator's trajectories (see Tab. 3). As assumed, on average a t-test revealed no significant difference between the two setups. Only the motorically demanding unscrewing operation profited from a similar kinematical configuration and was performed more efficiently

when working with setup A. In consequence, the hypothesis has to be specified: Whereas a kinematical<br />

correspondence does not seem to be relevant in general, it becomes more important when the task difficulty<br />

increases and when rotational movements are demanded. The participants themselves ranked the<br />

kinematical correspondence to be rather unimportant (see Fig. 5).


manipulator trajectory [m]     mean, setup A   mean, setup B   std. error, setup A   std. error, setup B   t-test
1) gripping mine               0.0055          0.0229          0.0053                 0.0117                 1.356 (0.187)
2) gripping safety pin         0.6553          0.5364          0.0457                 0.0546                -1.670 (0.103)
3) inserting safety pin        1.2275          1.4756          0.2134                 0.3335                 0.627 (0.535)
4) unscrewing detonator        2.0896          5.7882          0.1804                 0.5253                 6.659 (0.001)**
5) storing mine                0.1333          0.1519          0.0281                 0.0294                 0.459 (0.649)
6) storing detonator           0.6663          0.7711          0.0621                 0.0872                 0.979 (0.335)

Table 3: Comparison of setup A and setup B in terms of motion efficiency for the dominant input.<br />
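The motion-efficiency measure is, as we read it, the length of the telemanipulator trajectory per subtask; the minimal sketch below (our reconstruction, not the authors' code) sums the Euclidean distances between successive recorded end-effector positions.

    import numpy as np

    def trajectory_length(positions_m):
        """positions_m: (N, 3) array of end-effector positions in metres."""
        diffs = np.diff(np.asarray(positions_m), axis=0)
        return float(np.sum(np.linalg.norm(diffs, axis=1)))

    print(round(trajectory_length([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.2, 0.0]]), 3))  # 0.3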

H4: All participants classified the reconfiguration of the workspace as helpful. Besides, the majority of the users (81%) appreciated the haptic augmentation of occurring teleoperator constraints, and only a minority (19%) classified the force display as an additional cognitive burden. Likewise, two-thirds (67%) of the participants stated that they made use of the additionally provided forces, although it has to be mentioned that one third (33%) concluded that they could not interpret the augmentation deliberately. Their interpretation difficulty is also mirrored in the recorded data, as these participants required slightly more reconfiguration procedures than the others. A closer look at the input behaviour shows that these users also tended to apply higher contact forces with the telemanipulators in the remote environment and therefore seemed to be less sensitive towards haptic feedback (contingency coefficient: 0.71). The augmented force display might also be less useful when working with setup B, as higher contact forces were applied there, too. In addition, because setup B provides a lower amount of haptic feedback than setup A, participants demonstrated less sensitivity to the augmented force feedback.

4 Conclusion<br />

Two haptic telepresence systems were compared in terms of their usability for EOD whereby especially<br />

setup A was accepted by expert users. In order to state the reasons for the acceptance more generally, the<br />

following design principles could be derived: (A) Though favourable, it is not strictly necessary that force feedback activates the limbs in the same way a direct manipulation does. The majority of the users were

not even aware of the different haptic presentation. (B) The telepresence system should be able to display<br />

high contact and gripping forces. Thus, the distance error when gripping objects can be diminished and<br />

lower contact forces will be applied when interacting with the remote environment which in consequence<br />

prevents damage to the telepresence system. (C) A kinematical correspondence of operator console and telemanipulator should not be overestimated and will only yield a benefit when difficult tasks have to be

executed (e.g. rotations). (D) In most cases it is recommendable to augment occurring limitations of the<br />

teleoperator’s workspace not only visually but also haptically. The design principles (A-D) are proposed for<br />

novel designs of haptic telepresence systems employed to remote controlled EOD. Presently, at the TUM<br />

bimanual haptic devices and telemanipulators with full 6-DoF movability are under development, enabling more advanced teleoperated task execution in the future.

5 Acknowledgments<br />

The presented work is funded as part of the ‘SFB 453’ Collaborative Research Center ‘High-Fidelity<br />

Telepresence and Teleaction’ of the DFG (Deutsche Forschungsgemeinschaft).<br />

References<br />

[1] Hirose, S.; Kato, K.: Development of Quadruped Walking Robot with the Mission of Mine Detection and Removal – Proposal<br />

of Shape-Feedback Master-Slave Arm. In Proc. of the IEEE Int. Conf. on Robotics and Automation, pp. 1713-1718, 1998.<br />

[2] Immersion, Corp. Available at: http://www.immersion.com [25.05.2004].<br />

[3] Kron, A.; Schmidt,G.; Petzold, B.; Zäh, M.F.; Hinterseer, P.; Steinbach, E.: Disposal of Explosive Ordnances by Use of a<br />

Bimanual Haptic Telepresence System. In Proc. of the IEEE Int. Conf. on Robotics and Automation, pp. 1968-1973, 2004.<br />

[4] Kron, A.; Schmidt, G.: Bimanual Haptic Telepresence Technology Employed to Demining Operations. To appear in Proc. of<br />

the EuroHaptics conference, June 2004.<br />

[5] Lederman, S.; Klatzky, R.L.: Designing Haptic and Multimodal Interfaces. A Cognitive Scientist’s Perspective. In Proc. of the<br />

Workshop on Advances in Interactive Multimodal Telepresence Systems, pp. 71-80, München, 2001.

[6] SensAble Technologies, Inc. Available at: http://www.sensable.com [25.05.2004].<br />

[7] Telerob Gesellschaft für Fernhantierungstechnik. tEODor – telerob Explosive Ordnance Disposal. Available at<br />

http://www.telerob.de [25.05.2004].


Detection of material and structure of mines by acoustic<br />

analysis of mechanical drilling noise<br />

G. Holl, B. Schwark-Werwach, C. Becker
Wehrwissenschaftliches Institut für Werk-, Explosiv- und Betriebsstoffe (WIWEB)
Großes Cent
53913 Swisttal-Heimerzheim
Institut für Neue Basistechnologien (INBT)
Alte Leipziger Str. 50
D-99735 Bielen

Abstract:

The Wehrwissenschaftliches Institut für Werk-, Explosiv- und Betriebsstoffe (WIWEB), in cooperation with the Institut für Neue Basistechnologien (INBT), has built a computer-controlled drilling and acoustic detection system, DIMIDAC (Detection of Mines by Drill Acoustic). The system can distinguish known samples of different material and shape during the drilling process. The vibration spectrum of the sample interacting with the drilling process is measured, analysed and then compared with known spectra. The position of the drilling rod is also recorded, so the drilling rate is known.
The analysed characteristic parameters typically lie within limits that depend on the material and shape of the sample.

1. Introduction:<br />

Because mine clearance takes much more time than mine laying, the number of emplaced mines is still increasing world-wide. For humanitarian reasons there is a great demand for fast detection systems that reduce the risk of a mine accident to nearly zero. Automatic detection systems appear to be a good solution.
Such systems need one, or better several complementary (see [1]), very sensitive sensors and good signal-processing tools to reduce the false-alarm rate.
One idea for obtaining information about a possible mine is to use a mechanical drilling device and to analyse the vibration spectrum, a procedure reminiscent of probing with a prodder.
Measuring vibration spectra is a common method in process control (see [2], [3] and [4]) and is very sensitive, as we know from our own sense of hearing.
and is very sensitive as we know from our sense of hearing.


2. Experimental Device:<br />

The Wehrwissenschaftliches Institut für Werk-, Explosiv- und Betriebsstoffe (WIWEB), in cooperation with the Institut für Neue Basistechnologien (INBT), built a computer-controlled drilling and acoustic detection system, DIMIDAC (Detection of Mines by Drill Acoustic) (Pic. 1).

Picture 1: Photo of the experimental device<br />

A high-speed drilling machine can be moved pneumatically up and down in a guide rail and pressed against a solid sample with a given force. This force is measured by a weighing machine on which the container for samples is placed. The signal of a position sensor fixed to the sliding carriage, as well as the signals of two vibration sensors, one for low frequencies and one for high frequencies, are recorded during the measuring process.
A computer with a data-acquisition and control board controls the whole measuring and detection procedure. Programming and signal processing are done with the aid of the DASYLAB software, which allows the hardware to be programmed directly within a graphical environment.

The detection procedure starts with the drilling machine running at its highest position, the drilling rod rotating in air. Data acquisition starts and a ground-level signal is recorded. Then the downward movement starts. If the signal of the vibration sensors or of the weight sensor exceeds a certain level, contact with a solid body has been made. After 2 or 3 seconds a stable signal can be analysed which is characteristic of each type of sample.
If the sample is detected, or if a time or position limit is exceeded, data acquisition stops, a message describing the detected sample is shown on the computer display, and the drilling machine moves up to its start position.
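A minimal sketch of this measuring cycle (our reconstruction; the thresholds, timing and the classify callable are illustrative placeholders, not parameters taken from the paper):

    import time

    VIBRATION_THRESHOLD_V = 0.5   # assumed rise above the ground-level signal
    WEIGHT_THRESHOLD_N = 2.0      # assumed contact force on the weighing machine
    SETTLING_TIME_S = 3.0         # "after 2 or 3 seconds ... a stable signal"

    def detection_cycle(read_vibration, read_weight, step_down, move_up, classify):
        """Lower the running drill until contact, wait, classify, then retract."""
        baseline = read_vibration()                        # ground-level signal in air
        while (read_vibration() - baseline < VIBRATION_THRESHOLD_V
               and read_weight() < WEIGHT_THRESHOLD_N):
            step_down()                                    # feed downwards with the given force
        time.sleep(SETTLING_TIME_S)                        # let the drilling signal stabilise
        label = classify()                                 # compare the spectrum with known ones
        move_up()                                          # return to the start position
        return label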


3. Analysis of Signals

Picture 2 shows a typical signal of a vibration sensor; sample contact occurs just before 5 seconds on the time scale.
A Fast Fourier Transformation of the time signals yields the frequency spectra. Picture 3 presents typical spectra of samples made of different materials. It is obvious that they differ significantly from each other.
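The spectrum computation can be sketched as follows (our example; the synthetic signal and sampling rate stand in for real sensor data):

    import numpy as np

    fs = 20000.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 1200 * t) + 0.3 * np.random.randn(t.size)   # synthetic sensor signal

    spectrum = np.abs(np.fft.rfft(signal)) / t.size   # magnitude spectrum
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)       # frequency axis in Hz
    print(freqs[np.argmax(spectrum)])                 # dominant frequency, ~1200 Hz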

[Picture 2 plot: NF-sensor signal (V) versus time (s), file plot-V10_D15_HF5_U_AXT-FE1.DDF]
Picture 2: Typical signal of a vibration sensor
[Picture 3 plot: spectral amplitude versus frequency (0-10000 Hz) for samples FE, B-WOOD, STONE, K-WOOD, PLASTIC and BRICK]
Picture 3: Overview of spectra of samples consisting of different materials


To obtain characteristic parameters for the detection procedure, we calculate an effective value in selected frequency intervals (see Picture 4) that characterise the samples. The effective value in the frequency interval of the upper diagram in Picture 4 is nearly equal for every sample, while the effective value in the frequency interval of the lower diagram differs from sample to sample.
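A small sketch of this effective value as we understand it (our reconstruction; the band limits below are placeholders, while the paper selects the intervals in which the samples differ most):

    import numpy as np

    def band_effective_value(freqs_hz, spectrum, f_low_hz, f_high_hz):
        """RMS of the spectral magnitudes inside the chosen frequency interval."""
        mask = (freqs_hz >= f_low_hz) & (freqs_hz <= f_high_hz)
        return float(np.sqrt(np.mean(np.square(spectrum[mask]))))

    # Example with the spectrum computed in the previous sketch:
    # band_effective_value(freqs, spectrum, 2500.0, 5000.0)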

[Picture 4 plots: two frequency intervals of the spectra (0-10000 Hz) for samples FE, B-WOOD, STONE, K-WOOD, PLASTIC and BRICK]
Picture 4: Selection of frequency intervals to get characteristic parameters for detection


Another detection parameter is the drilling rate, which can be calculated from the position signal after contact with the sample.
With this procedure we can discriminate not only samples consisting of different materials but also samples of identical material but different shape, e.g. a steel plate and a steel tube.
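The drilling rate can be sketched as the slope of a straight line fitted to the position signal after contact (our example; the sample data are invented for illustration):

    import numpy as np

    t_s = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # time after contact [s]
    depth_mm = np.array([0.0, 0.8, 1.7, 2.4, 3.3])   # drill-rod position [mm]

    rate_mm_per_s = np.polyfit(t_s, depth_mm, 1)[0]  # slope of the linear fit
    print(f"drilling rate = {rate_mm_per_s:.2f} mm/s")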

4. Discussion and outlook<br />

The results are encouraging; nevertheless, we cannot be sure that an object which sounds like a mine really is a mine, and we will only recognise mines whose spectra are known.
How can this detection system be used?
First, all objects that could be mines have to be found so that the drilling rod can make contact with them. This can be done with other systems such as ground penetrating radar (GPR), or by setting a grid of drilling positions that have to be checked.
The next step is to drill into the found material and identify it, possibly detecting a specific type of mine with a single drilling. More information can be gained by placing several drillings near the first contact with an interesting object; this yields a contour of the object and the distribution of spectra, which change along the object's surface.
As a further step it is possible to drill into the suspected mine and take a sample of the material inside. As a last step it is conceivable to destroy the mine, or to reduce the risk of detonation by taking special mechanical measures at the surface of the mine.
An advantage of this detection system is its simple, robust and inexpensive setup built from industry-standard production-technology components.

Literature:<br />

[1] M. Acheroy, M. Piette, Y. Baudoin, J.-P. Salmon: Belgian project on humanitarian<br />

demining (HUDEM), Sensor design and signal processing aspects. Royal Military<br />

Academy (RMA), 2000.<br />

http://www.sic.rma.ac.be/~acheroy/worshp_Ispra_2000/workshp_Ispra_2000.html<br />

[2] Strache, Wolfgang: Multisensorielle Überwachung des Stanzprozesses. Düsseldorf :<br />

VDI- Verl., 2000<br />

[3] Lindemann, M.: Erkennung von Verbrennungsaussetzern mit Hilfe von<br />

Klopfsensoren Düsseldorf : VDI Verl., 2001<br />

[4] Brandmeier, Thomas: Verschleißerkennung durch Schwingungsanalyse an der<br />

Drehmaschine. Dissertation Universität Karlsruhe, 1992
