The Draper Technology Digest
2011 Volume 15 | CSDL-R-3028

Front cover photo: (from left to right) Troy B. Jones, Autonomous Systems Capability Leader; Nirmal Keshava, Group Leader for Fusion, Exploitation, and Inference Technologies; Sungyung Lim, Senior Member of the Technical Staff in Strategic and Space Guidance and Control; and Amy E. Duwel, Group Leader for RF and Communications

The Draper Technology Digest (CSDL-R-3028) is published annually under the auspices of The Charles Stark Draper Laboratory, Inc., 555 Technology Square, Cambridge, MA 02139. Requests for individual copies or permission to reprint the text should be submitted to:

Draper Laboratory
Media Services
Phone: (617) 258-1887
Fax: (617) 258-1800
Email: techdigest@draper.com

Editor-in-Chief: Michael J. Matranga
Artistic Director: Pamela Toomey
Designer: Lindsey Ruane
Editor: Beverly Tuzzalino
Writers: Jeremy Singer, Amy Schwenker, Alicia Prewett
Illustrator: William Travis
Photographer: James Thomas
Photography Coordinator: Drew Crete

Copyright © 2011 by The Charles Stark Draper Laboratory, Inc. All rights reserved.
Table of Contents

Introduction
by Dr. John Dowdle, Vice President of Engineering

2011 Charles Stark Draper Prize

PAPERS

Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity
Paul A. Ward and Amy E. Duwel

In Vitro Generation of Mechanically Functional Cartilage Grafts Based on Adult Human Stem Cells and 3D-Woven poly(ε-caprolactone) Scaffolds
Piia K. Valonen, Franklin T. Moutos, Akihiko Kusanagi, Matteo G. Moretti, Brian O. Diekman, Jean F. Welter, Arnold I. Caplan, Farshid Guilak, and Lisa E. Freed

General Bang-Bang Control Method for Lorentz-Augmented Orbits
Brett J. Streetman and Mason A. Peck

Tactical Geospatial Intelligence from Full Motion Video
Richard W. Madison and Yuetian Xu

Model-Based Design and Implementation of Pointing and Tracking Systems: From Model to Code in One Step
Sungyung Lim, Benjamin F. Lane, Bradley A. Moran, Timothy C. Henderson, and Frank A. Geisel

Detection of Deception in Structured Interviews Using Sensors and Algorithms
Meredith G. Cunha, Alissa C. Clarke, Jennifer Z. Martin, Jason R. Beauregard, Andrea K. Webb, Asher A. Hensley, Nirmal Keshava, and Daniel J. Martin

Requirements-Driven Autonomous System Test Design: Building Trusting Relationships
Troy B. Jones and Mitch G. Leammukda

List of 2010 Published Papers and Presentations

PATENTS

Patents Introduction

Systems and Methods for High Density Multi-Component Modules
U.S. Patent No. 7,727,806; Date Issued: June 1, 2010
Scott A. Uhland, Seth M. Davis, Stanley R. Shanfield, Douglas W. White, and Livia M. Racz

List of 2010 Patents

AWARDS

The 2010 Draper Distinguished Performance Awards

Design and Demonstration of a Guided Bullet for Extreme Precision Engagement of Targets at Long Range
Laurent G. Duchesne, Richard D. Elliott, Robert M. Filipek, Sean George, Daniel I. Harjes, Anthony S. Kourepenis, and Justin E. Vican

Development of an Ultra-Miniaturized, Paper-Thin Power Source
Stanley R. Shanfield, Albert C. Imhoff, Thomas A. Langdo, Balasubrahmanyan “Biga” Ganesh, and Peter A. Chiacchi

The 2010 Outstanding Task Leader Awards

COTS Guidance, Navigation, and Targeting
Ian T. Mitchell

MK6 System Test Complex
Daniel J. Monopoli

The 2010 Howard Musoff Student Mentoring Award
Sarah L. Tao

The 2010 Excellence in Innovation Award

Navigation by Pressure
Catherine L. Slesnick, Benjamin F. Lane, Donald E. Gustafson, and Brad D. Gaynor

List of 2010 Graduate Research Theses
Introduction by Dr. John Dowdle, Vice President of Engineering
This publication, the 15th issue of the Draper Technology Digest, presents a collection of publications, patents, and awards representative of the outstanding technical achievements by Draper staff members. Seven technical papers are presented in this issue to showcase work associated with Draper’s capabilities and technologies. These publications represent long-standing Draper core capabilities in the areas of guidance, navigation, and control, autonomous systems, and information systems, as well as emerging strengths in biomedical systems and multimodal sensor fusion.

This issue also recognizes the winners of several Draper awards for technical excellence, leadership, and mentoring. The Distinguished Performance Award is the most prestigious technical achievement award that the Laboratory bestows upon its employees. This year's award was presented to two teams. Laurent Duchesne, Richard Elliott, Robert Filipek, Sean George, Daniel Harjes, Anthony Kourepenis, and Justin Vican were acknowledged for “Design and Demonstration of a Guided Bullet for Extreme Precision Engagement of Targets at Long Range,” work that resulted in the development of a guidance system for a 50-caliber bullet. Stanley Shanfield, Albert Imhoff, Thomas Langdo, Balasubrahmanyan “Biga” Ganesh, and Peter Chiacchi were recognized for “Development of an Ultra-Miniaturized, Paper-Thin Power Source,” which represents a dramatic breakthrough in miniature portable energy.

Exceptional technical efforts require outstanding leadership. Two individuals were awarded the Outstanding Task Leader Award this year: Ian Mitchell was recognized for his leadership of “COTS Guidance, Navigation, and Targeting,” while Daniel Monopoli was acknowledged for directing the “MK6 System Test Complex.” Student leadership and mentoring remain
a priority at Draper. The Howard Musoff Student Mentoring Award recognizes exceptional student mentoring while simultaneously honoring a former Draper mentor who devoted much time and energy to many students. Sarah Tao was the sixth recipient of the Howard Musoff Student Mentoring Award.

Innovation is a key element of Draper’s success. Two awards acknowledge the innovation of our technical staff. The 2010 Excellence in Innovation Award recognized a team effort by Catherine Slesnick, Benjamin Lane, Donald Gustafson, and Brad Gaynor for “Navigation by Pressure.” The second team recognized for innovation was Seth Davis, Stanley Shanfield, Douglas White, and Livia Racz, who received the 2010 Vice President’s Best Patent Award for “Systems and Methods for High Density Multi-Component Modules.”

The Vice President’s Best Paper Award recognizes an original publication that represents Draper’s high standards of professionalism, originality, and creativity. This year’s recipients of the Best Paper Award were Paul Ward and Amy Duwel, who authored the paper “Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity.” Their paper, which provides a straightforward approach for trading off oscillator design parameters, is the first paper in this digest.

The Draper Prize, endowed by Draper Laboratory and awarded by the National Academy of Engineering, honors individuals who have developed a unique concept that advances science and technology while promoting the welfare and freedom of society. Since its inception in 1988, the Draper Prize has recognized the developers of the integrated circuit, the turbojet engine, FORTRAN, the Global Positioning System, and the World Wide Web, to name a few. This year, the Draper Prize was awarded to Frances H. Arnold and Willem P.C. Stemmer for “directed evolution,” a process that mimics natural mutation and selection to guide the creation of desirable properties in proteins and cells in an accelerated laboratory environment.

On behalf of Draper Laboratory, I would like to congratulate both recipients for their achievements, which are highlighted in greater detail on the following pages.
The 2011 Charles Stark Draper Prize
The Charles Stark Draper Prize was established in 1988 to honor the memory of Dr. Charles Stark Draper, “the father of inertial navigation.” The Prize was instituted by the National Academy of Engineering and endowed by Draper Laboratory. The Prize is recognized as one of the world's preeminent awards for engineering achievement, and honors individuals who, like Dr. Draper, developed a unique concept that has made significant contributions to the advancement of science and technology, as well as the welfare and freedom of society.

For information on the nomination process, contact the Public Affairs Office at the National Academy of Engineering at 202.334.1237.

The 2011 Charles Stark Draper Prize was awarded on February 22 at a ceremony in Washington, D.C. to Frances H. Arnold and Willem P.C. Stemmer, who individually contributed to a process called “directed evolution.” This process, now used in laboratories worldwide, allows researchers to guide the creation of certain properties in proteins and cells.

Directed evolution is predicated on the idea that the mutation and selection processes that occur in nature can be accelerated in the laboratory to obtain specific, targeted improvements in the function of single proteins and multiprotein pathways. Arnold showed that randomly mutating genes of a targeted protein, especially enzymes, would result in some new proteins with more desirable traits than they had before. Selecting the best proteins and repeating this process multiple times, she essentially directed the evolution of the proteins until they had the desired properties.

Stemmer concentrated on a different natural process for creating diversity, recombining preexisting natural diversity in a process he called “DNA shuffling.” Rather than causing random mutations, he shuffled the same gene from diverse but related species to create clones that were as good as or better than the parental genes in a given targeted property.
An important aspect of directed evolution is that it provides a practical and cost-effective way to improve protein function. Previous efforts, especially those that involved a design based on enzyme structures and the predicted effects of mutations, were often unsuccessful, expensive, and labor-intensive.

According to George Georgiou, a professor at the University of Texas at Austin, “Arnold and Stemmer’s joint development of directed protein evolution was a milestone in biological research. It is impossible to overstate the impact of their discoveries for science, technology, and society; nearly every industrial product and application involving proteins relies on directed evolution.”
Arnold is the Dick and Barbara Dickinson Professor of Chemical Engineering and Biochemistry at the California Institute of Technology. She is listed as co-inventor on more than 30 U.S. patents and has served as science advisor to more than 10 companies. In 2005, Arnold cofounded Gevo Inc., which develops new microbial routes to produce fuels and chemicals from renewable resources. She is among the few individuals who are members of all three membership organizations of the National Academies: the National Academy of Engineering (2000), the Institute of Medicine (2004), and the National Academy of Sciences (2008). She holds a B.S. in Mechanical and Aerospace Engineering from Princeton University (1979) and a Ph.D. in Chemical Engineering from the University of California, Berkeley.
Stemmer is founder and CEO of Amunix Inc., which creates pharmaceutical proteins with extended dosing frequency. In 2008, Amunix joined with Index Ventures to create Versartis Inc. for the purpose of clinical development of three specific products for the treatment of metabolic diseases. Stemmer has invented other technologies that have led to other successful companies and products. In 1993, he invented DNA shuffling and co-founded Maxygen to commercialize the process. Prior to 1993, he was a Distinguished Scientist at Affymax and a scientist at Hybritech. In 2001, he invented the Avimer technology and founded Avidia in 2003 to commercialize it; he was chief scientific officer of the company until 2005. Stemmer has 68 research publications and 97 U.S. patents, and is a recipient of the Doisy Award, the Perlman Award, and the NASDAQ VCynic Award. He received his Ph.D. from the University of Wisconsin-Madison in 1985.
Recipients of the Charles Stark Draper Prize

2009: Robert H. Dennard for the invention and development of Dynamic Random Access Memory (DRAM)
2008: Rudolf Kalman for the development and dissemination of the Kalman Filter
2007: Timothy Berners-Lee for creation of the World Wide Web
2006: Willard S. Boyle and George E. Smith for the invention of the charge-coupled device (CCD)
2005: Minoru S. Araki, Francis J. Madden, Don H. Schoessler, Edward A. Miller, and James W. Plummer for their invention of the Corona reconnaissance satellite technology
2004: Alan C. Kay, Butler W. Lampson, Robert W. Taylor, and Charles P. Thacker for the development of the Alto computer at Xerox's Palo Alto Research Center (PARC)
2003: Ivan A. Getting and Bradford W. Parkinson for their technological achievements in the development of the Global Positioning System
2002: Robert Langer for bioengineering revolutionary medical drug delivery systems
2001: Vinton Cerf, Robert Kahn, Leonard Kleinrock, and Lawrence Roberts for their individual contributions to the development of the Internet
1999: Charles Kao, Robert Maurer, and John MacChesney for spearheading advances in fiber-optic technology
1997: Vladimir Haensel for the development of the chemical engineering process of “Platforming” (short for Platinum Reforming), which used a platinum-based catalyst to efficiently convert petroleum into high-performance, cleaner-burning fuel
1995: John R. Pierce and Harold A. Rosen for their development of communication satellite technology
1993: John Backus for his development of FORTRAN, the first widely used, general-purpose, high-level computer language
1991: Sir Frank Whittle and Hans J.P. von Ohain for their independent development of the turbojet engine
1989: Jack S. Kilby and Robert N. Noyce for their independent development of the monolithic integrated circuit
Engineers building new communications, navigation, and radar systems seek to minimize phase noise, which can harm performance. In a navigation system, phase noise can make it take longer for the device to acquire the GPS satellite signal, which also drains power. In communications and radar systems, phase noise can diminish range and disrupt low-level signals.

This paper provides a general model for engineers studying design trades who are seeking to understand how various noise sources, as well as environmental disturbances, manifest as phase noise in oscillators, which produce electronic signals.

This work could lead the way to improved oscillators that support defense and intelligence customers’ communications and navigation needs with more capable systems in smaller packages.
Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity

Paul A. Ward and Amy E. Duwel

Copyright © 2011 by the Institute of Electrical and Electronics Engineers (IEEE); published in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, Vol. 58, No. 1, January 2011.
Abstract

This paper offers a derivation of phase noise in oscillators resulting in a closed-form analytic formula that is both general and convenient to use. This model provides a transparent connection between oscillator phase noise and the fundamental device physics and noise processes. The derivation accommodates noise and nonlinearity in both the resonator and feedback circuit, and includes the effects of environmental disturbances. The analysis clearly shows the mechanism by which both resonator noise and electronics noise manifest as phase noise, and directly links the manifestation of phase noise to specific sources of noise, nonlinearity, and external disturbances. This model sets a new precedent, in that detailed knowledge of component-level performance can be used to predict oscillator phase noise without the use of empirical fitting parameters.
I. Introduction

This paper provides a predictive model for phase noise that does not require fitting parameters, but instead is rigorously derived from the fundamental dynamics of an oscillator loop. We build on the work and insight of predecessors, including Leeson [1], Hajimiri [2], and Rubiola [3] in particular. Section IIA briefly summarizes key concepts from the literature that make our work possible. This section also introduces a phase criterion that is new to oscillator analysis and enables our modeling approach; specifically, the phase criterion requires that the phase differences around a closed loop sum to zero instantaneously. Leveraging the insight of the linear time variant (LTV) effect [2], we add a rigorous derivation that provides a closed-form expression for the LTV gain function and clearly shows the associated frequency translation of the additive noise as it manifests in phase noise (Sections IIB and IIC). By capturing the LTV behavior in a general analytical model, one can further appreciate the elegance of topologies such as Colpitts, in which the feedback is periodically applied, to provide low-phase-noise oscillators. Section IID shows how oscillator phase noise is obtained from injected phase noise using the phase criterion. In this section, we benchmark our approach by using it to derive Leeson’s expression for phase noise. In Section IIE, we briefly address noise that is derived directly from the resonator itself. Finally, Sections IIF and IIG discuss the role of nonlinearity and time variance in phase noise. Although it is already well known that nonlinearity in active devices can degrade phase noise, we provide a formalism for exactly how the mapping to phase noise occurs. In particular, we show how the measurable property of voltage-to-phase conversion in an amplifier can parameterize the resulting oscillator phase noise.

Our formalism also offers new results. We can predict the 1/f³ corner frequency of the phase noise spectral density from the amplifier properties, and we discuss why this frequency is usually much lower than the 1/f corner of the individual amplifier. We also provide an analytical expression for the phase noise resulting from resonator nonlinearity and time variance, and offer new insight into an earlier finding that a nonlinear resonator can actually improve the phase noise of an oscillator [4]. In particular, we show that nonlinearity in the resonator reduces the phase noise that would be predicted by the Leeson equation; however, we explain how nonlinearity can add a new term to the phase noise because of the coupling between frequency and amplitude. The derivation focuses on mechanical or lumped-element-based resonators. Though our approach is quite general, future work should explore the broader applicability of these results, e.g., to photonic resonators.
II. Modeling of Phase Noise

A. Conceptual Basis of Model

The analysis relies heavily on two key insights. The first insight was articulated by Hajimiri and Lee [2]. They recognized that phase noise is an LTV function of the additive noise. Building on that insight, the present work provides a closed-form solution to the LTV gain function that is valid for small noise but can be extended for arbitrarily large noise. The analysis leverages the concept of an analytic signal that possesses the same phase (and amplitude) as the oscillator signal at each respective node. This technique and its application to oscillators are also described nicely in [3].

The second key to analyzing this system is a requirement that at an instant in time, the phase at any point in the loop is well-defined and single-valued with respect to a reference phase. The loop topology then constrains the system such that the phase shifts around the loop sum to zero. This includes phase shifts caused by noise processes.

The constraint appears reminiscent of the Barkhausen criterion, which describes the condition for steady-state oscillation. The Barkhausen criterion, however, captures the condition that the closed-loop transfer function has a resonance, so that there is a finite response even with zero input. The statement is often made that any disturbances in the loop not meeting the Barkhausen phase criterion will decay in time. In the present analysis, a physically realizable system must meet the zero-loop-sum phase condition at all times and instantaneously because of the topology of being in a loop. This seemingly intuitive condition has not been stated explicitly in this context before. It allows one to use feedback-theory-based models, elegantly presented in [3], when the system is fully linearized.
B. Determination of Injected Phase Noise

We consider the case in which a random signal n(t) is added to a sinusoid of amplitude A, as shown in Figure 1. The only restriction placed on n(t) is that it is wide-sense stationary (along with, by extension, its Hilbert transform n̂(t)), and thus we can define its power spectrum. The noise phasor has random amplitude and random phase. Thus, referring to Figure 1, it is clear that the resultant phasor representing c(t) possesses both random amplitude modulation and random phase modulation.
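The analytic-signal construction behind Figure 1 can be made concrete with a short numerical sketch (not from the paper): the analytic signal of a real waveform is obtained by zeroing the negative-frequency half of its discrete spectrum, and its magnitude and angle then separate the amplitude and phase modulation. The sampling rate, tone frequency, and noise level below are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal: zero negative frequencies, double positive ones."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Illustrative signal: a 1-kHz tone of amplitude A plus small additive noise.
fs, A, f0 = 100_000.0, 1.0, 1_000.0
t = np.arange(0, 0.01, 1 / fs)                  # exactly 10 carrier cycles
x = A * np.cos(2 * np.pi * f0 * t)
x = x + 0.01 * np.random.default_rng(1).standard_normal(t.size)

c = analytic_signal(x)                          # c(t) = x(t) + j x_hat(t)
amplitude = np.abs(c)                           # random amplitude modulation
phase = np.unwrap(np.angle(c))                  # carrier ramp + random phase
```

The instantaneous amplitude stays near A while the unwrapped phase advances at roughly 2πf₀ rad/s, which is exactly the amplitude/phase split the paper reads off the phasor diagram.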
Figure 1. Graphic representation of an analytic signal. (The figure shows, in the complex plane with axes Re{c(t)} and Im{c(t)}, the carrier phasor Ae^{jω_o t} plus a noise phasor summing to the resultant c(t).)
It is customary with phase noise analysis to ignore amplitude noise on the oscillator output. We shall follow this custom here. However, we will consider the effects of amplitude noise within the loop on the oscillator phase noise. As we shall see later, the amplitude noise can excite nonlinear effects that can increase phase noise.

The phasor c(t) is represented as an analytic signal. In Appendix A, we show how the amplitude and phase of this signal are related to n(t). Figure 2 represents the analytic signal at different nodes of an oscillator block diagram at a snapshot in time. We consider that the noise n(t) is referred to a single node and is due to the feedback electronics. An explicit phase shift Ψ_n is introduced to identify the noise-derived phase shift introduced through the LTV mapping of n(t) into the loop. We use the analytic signal formalism to show how Ψ_n(t) is derived from n(t).
Figure 2. Simplified block diagram for an oscillator. (The loop consists of a resonator H·e^{jΦ} and an amplifier G, with the additive noise n(t) entering at a summing junction through the LTV phase mapping Ψ_n(n(t)); θ_O(t) and θ_R(t) mark the oscillator and readout phases, and φ = θ_O − θ_R.)
From our analytic signal representation, we can express the injected phase noise as

\Psi_n(t) = \tan^{-1}\left(\frac{A\sin(\omega_o t) + \hat{n}(t)}{A\cos(\omega_o t) + n(t)}\right) - \tan^{-1}\left(\frac{\sin(\omega_o t)}{\cos(\omega_o t)}\right),    (1)

where ω_o is the time-average oscillator frequency. Equation (1) is exact, but not particularly convenient. To obtain a working equation for phase noise, it can be expanded in a Taylor series. Considering that the noise is small compared with the signal amplitude, we can keep only the first-order terms of the expansion, resulting in

\Psi_n(t) \cong \frac{\hat{n}(t)}{A}\cos(\omega_o t) - \frac{n(t)}{A}\sin(\omega_o t).    (2)

We see that the phase noise is an LTV function of the additive electronics noise n(t). Furthermore, the zero-loop-sum criterion requires that φ = -Ψ_n. Equation (2) is important because it forms the basis for the mapping of additive noise to feedback phase noise.
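A quick numerical sanity check of this first-order mapping can be made by comparing the exact injected phase against the small-noise expansion for a single small noise line; the carrier and noise parameters below are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

# Compare the exact injected phase (first expression above) with its
# first-order approximation for an assumed, illustrative noise waveform.
A = 1.0
w0 = 2 * np.pi * 1.0e3                       # carrier frequency (rad/s)
t = np.linspace(0.0, 1e-2, 10_000)

# A single small noise line; for cos the Hilbert transform is sin, so the
# pair (n, n_hat) can be written down analytically.
wn = 2 * np.pi * 3.3e3
n = 1e-3 * np.cos(wn * t)
n_hat = 1e-3 * np.sin(wn * t)

# Exact: phase of the analytic signal with the carrier ramp removed.
c = (A * np.cos(w0 * t) + n) + 1j * (A * np.sin(w0 * t) + n_hat)
psi_exact = np.angle(c * np.exp(-1j * w0 * t))

# First-order LTV mapping of the additive noise into phase.
psi_approx = (n_hat / A) * np.cos(w0 * t) - (n / A) * np.sin(w0 * t)

err = np.max(np.abs(psi_exact - psi_approx))  # second order in n/A
```

With the noise three orders of magnitude below the carrier, the residual is second order in n/A, i.e., around 10⁻⁶ rad, confirming that the linearized mapping is accurate for small noise.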
It should be noted that this analysis implicitly assumes that there is no parametric modulation in the feedback electronics. This is not a linear time invariant (LTI) effect and is addressed in more detail later in the paper.

In addition, all oscillators possess some nonlinearity that keeps the amplitude from growing without bound. This nonlinearity may be intrinsic or may be added as part of the design. In Section IIG, we address the exacerbation of phase noise for the specific case in which active amplitude stabilization is used and the resonator possesses coupling between amplitude and frequency.
C. Spectrum of Injected Phase Noise

Because we are dealing with random signals, we wish to work in terms of frequency spectra. To that end, we wish to compute the spectrum of the injected phase noise. Note that the spectrum of the phase noise is not a signal power spectrum per se, but instead represents the distribution of phase power as a function of frequency, having units of Hz⁻¹.
The derivation of the injected phase spectrum is rather lengthy, and therefore has been included as Appendix B. The result is

S_{\Psi_n}(\omega) \cong \frac{1}{A^2}\left[S_n(\omega - \omega_o) + S_n(\omega + \omega_o)\right],    (3)

where S_n(ω) is the double-sided spectrum of the additive electronics noise (in V²/Hz) and ω is the baseband frequency. In words, the double-sided injected phase noise spectrum is proportional to the double-sided additive noise spectrum shifted toward positive frequency by ω_o plus the double-sided electronics noise spectrum shifted toward negative frequency by ω_o. Equation (3) also shows that the injected phase noise is given by the ratio of additive amplifier noise power to signal power, as expected. For the case in which the amplifier noise is attributed to white noise [S_n(ω) = S_nw], this ratio can be put into more fundamental units by dividing the double-sided available noise power density by the available signal power:

S_{\Psi_n}(\omega) \cong \frac{2 S_{nw}}{A^2} = \frac{2 F k_B T}{P_s},    (4)

where F is the amplifier noise factor and P_s is the signal power.
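To put this white-noise floor in familiar units, the snippet below evaluates it for an assumed 3-dB noise figure and 0-dBm signal power (both values are illustrative, not from the paper); halving the double-sided density gives the conventional single-sideband L(f) in dBc/Hz.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 290.0                 # standard noise temperature, K
F = 10 ** (3.0 / 10.0)    # noise factor for an assumed 3-dB noise figure
P_s = 1e-3                # assumed signal power: 0 dBm

S_psi = 2.0 * F * k_B * T / P_s             # double-sided floor, rad^2/Hz
L_ssb_dBc = 10.0 * math.log10(S_psi / 2.0)  # single-sideband floor, dBc/Hz
print(f"{L_ssb_dBc:.1f} dBc/Hz")
```

This reproduces the familiar rule of thumb: a -174 dBm/Hz thermal floor degraded by the noise figure and referenced to the signal power, roughly -171 dBc/Hz for these assumed values.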
D. Oscillator Phase Noise and the Leeson Formula

The oscillator phase noise, which we will also refer to as the readout phase noise θ_R(t), is the sum of two terms: the phase noise of the resonator output θ_o(t) and the injected phase noise Ψ_n(t). This can be seen by walking through the loop in Figure 2.

The phase noise of the resonator output is also the time integral of the frequency noise of the resonator output. That is,

\theta_R(t) = \Psi_n(t) + \int_{-\infty}^{t} \omega_o(t')\,dt'.    (5)
In the case in which the resonator is LTI, the frequency noise of the resonator output signal is given by

w_o(t) = (∂φ(w_o)/∂w)^−1 φ(t).   (6)
Thus, the general expression for the phase noise of an oscillator employing an LTI resonator is

θ_R(t) = Ψ_n(t) + (∂φ(w_o)/∂w)^−1 ∫_{−∞}^{t} φ(t)dt,   (7)

where the transfer function of the resonator is

H(w) = |H(w)|e^{jφ(w)}.   (8)
Very often, the resonator is a second-order LTI system. In this case, the phase slope at resonance is

∂φ(w_o)/∂w = −2Q/w_o,   (9)
Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity<br />
where Q is the loaded quality factor of the resonator. Considering<br />
also the phase criterion that φ = -Ψ n , the phase noise is given by<br />
θ_R(t) = Ψ_n(t) + (w_o/2Q) ∫_{−∞}^{t} Ψ_n(t)dt.   (10)
This equation is a time-domain representation of the phase noise<br />
from the oscillator. It leads directly to the Leeson equation because<br />
it shows how the output phase noise is derived from the sum of<br />
injected phase noise plus its integral.<br />
Note that determination of the time-domain oscillator phase noise<br />
involves evaluation of a running integral of the injected phase<br />
noise. We run into a difficulty when attempting to integrate white<br />
noise because the integral tends to grow without bound. We can<br />
circumvent this difficulty by considering the integration time to be<br />
finite. To accommodate this restriction in the frequency domain,<br />
we will restrict validity of the spectrum of the integrated feedback<br />
phase noise, so that w = 0 is excluded.<br />
<strong>The</strong> spectrum of the oscillator phase noise is found by taking the<br />
power spectral density (PSD) of (10) subject to the integration time<br />
restriction, given by<br />
S_θR(w) ≅ S_Ψn(w)[1 + (∂φ(w_o)/∂w)^−2 (1/w²)], |w| > 0.   (11)
For an oscillator having a second-order LTI resonator, the phase<br />
noise spectrum is given by<br />
S_θR(w) ≅ S_Ψn(w)[1 + (w_o/2Qw)²].   (12)
It is clear from these expressions that injected phase noise is a<br />
critical determinant of oscillator phase noise, particularly in the<br />
case in which the resonator has a finite phase slope at resonance.<br />
In the case in which only white additive noise with a PSD of S_n(w) = S_nw is considered, the injected phase noise becomes S_Ψn(w) = 2S_nw/A², and (12) parallels the familiar Leeson result.
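The behavior of (12) is easy to explore numerically. The sketch below evaluates the Leeson-style spectrum for assumed, illustrative values (a 10-MHz oscillator with Q = 10⁴ and a flat injected phase PSD); none of these numbers come from the paper:

```python
import math

def leeson_psd(dw, S_psi, w_o, Q):
    """Oscillator phase-noise PSD of Eq. (12): S_psi * (1 + (w_o/(2*Q*dw))^2)."""
    return S_psi * (1.0 + (w_o / (2.0 * Q * dw)) ** 2)

# Assumed, illustrative values: 10-MHz carrier, Q = 1e4, flat injected phase noise.
w_o, Q, S_psi = 2 * math.pi * 10e6, 1e4, 1e-16
half_bw = w_o / (2 * Q)                 # resonator half-bandwidth (the Leeson corner)

far  = leeson_psd(100 * half_bw, S_psi, w_o, Q)   # far outside the corner: flat floor
at   = leeson_psd(half_bw,       S_psi, w_o, Q)   # at the corner: 3 dB above the floor
near = leeson_psd(half_bw / 10,  S_psi, w_o, Q)   # inside the corner: 1/f^2 region
```

At the corner the PSD is exactly twice the floor, and a decade inside the corner it is roughly 101 times the floor, reproducing the familiar 20 dB/decade close-in slope.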
E. Inclusion of Resonator Noise<br />
Refer to Figure 3 on the following page, which shows an oscillator<br />
loop having electronics noise and resonator noise. We can use the<br />
results obtained earlier for mapping of additive noise to phase noise<br />
and for generating the corresponding phase noise spectrum. <strong>The</strong><br />
resonator noise will contribute to injected phase noise and injected<br />
amplitude noise, but only the phase noise is important in the case of<br />
a linear resonator. If we assume that the resonator input is a sinusoid<br />
of amplitude B plus additive white noise n r (t) having a PSD of S rw ,<br />
the injected phase spectrum including both electronics noise and<br />
resonator noise is given by<br />
S_Ψ(w) ≅ (1/A²)[S_n(w − w_o) + S_n(w + w_o)] + 2S_rw/B².   (13)
For simplicity, we took the resonator noise to be white, though it<br />
can be any stationary noise process. Section IIF will show how a<br />
nonwhite noise process in the electronics (1/f noise in this case)<br />
contributes to oscillator phase noise, and identical steps can be<br />
applied to analyze the impact of nonwhite resonator noise on phase<br />
noise.<br />
By separating out the electronics phase noise from resonator phase<br />
noise, it is possible to use material-based models for the spectra<br />
of each component and see how the noise propagates through a<br />
given system. For the case in which the resonator is mechanical, a<br />
fundamental noise source is the white Brownian force associated<br />
with the finite mechanical loss in the resonator [5]. If given an<br />
equivalent circuit for the mechanical resonator, the model for this<br />
noise term is exactly like Johnson noise in a resistor [6]:<br />
2S_rw = 4k_B T R_x.   (14)

Figure 3. Simplified block diagram for an oscillator. This model explicitly includes electronics noise, resonator noise, and parametric phase shifts in the amplifier.

R_x is the equivalent circuit resistance at resonance and depends on both the resonator Q and the design-specific electromechanical coupling.
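Equation (14) can be evaluated directly once R_x is known. The sketch below uses an assumed motional resistance of 100 kΩ (an illustrative value, not from the paper) to get the familiar Johnson-like voltage noise density:

```python
import math

# Sketch of Eq. (14): the resonator's Brownian noise referred to the electrical
# port looks like Johnson noise of the motional resistance R_x.
# R_x = 100 kOhm is an assumed, illustrative value.
k_B, T = 1.380649e-23, 290.0
R_x = 100e3

S_rw_2 = 4.0 * k_B * T * R_x          # 2*S_rw in V^2/Hz, per Eq. (14)
v_density = math.sqrt(S_rw_2)         # equivalent voltage noise density, V/sqrt(Hz)

print(round(v_density * 1e9, 1))      # nV/sqrt(Hz)
```

For 100 kΩ at 290 K this gives about 40 nV/√Hz, matching the usual Johnson-noise rule of thumb.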
F. Effects of Nonlinearities and Parametric Sensitivities in<br />
the Amplifier<br />
For an LTI resonator with a feedback network free of parametric<br />
modulation, the expressions developed thus far for the phase noise<br />
spectrum, (12) and (13), are as accurate as (and, without resonator<br />
noise, equate to) the Leeson formula. Oscillator phase noise<br />
predictions based on Leeson’s equation typically underestimate<br />
the phase noise below about 1 kHz, in part because of the effect of<br />
flicker noise.<br />
In general, the excess phase noise will be caused by non-LTI<br />
effects. In the electronics, we refer to these effects as parametric<br />
modulation, which can be driven by additive noise, amplitude noise,<br />
or effects such as temperature variation and vibration.<br />
We will model the noise caused by parametric modulation by<br />
including an additive term S_Φ(w) in the expression for the feedback electronics phase. Important components of S_Φ(w) are the terms produced by amplitude noise, power supply variations,
and temperature. <strong>The</strong> feedback amplifier will generally include<br />
transistors, and the transistor parameters are signal-dependent.<br />
<strong>The</strong>refore, the phase shift imparted by the transistor amplifier will<br />
change with bias point or signal amplitude, as well as temperature.<br />
Low-frequency additive noise (such as flicker) produces a low-frequency variation in the bias point, which in turn produces a low-frequency parametric modulation and a corresponding low-frequency phase modulation. This is responsible for the propagation of additive flicker noise to flicker noise in feedback phase, which produces a 1/f³ oscillator phase noise PSD. Considering the effects of additive noise-to-phase conversion and amplitude noise-to-phase conversion, the spectral density caused by parametric modulation is given by

S_Φ(w) ≅ (∂Φ(w)/∂n)² S_n(w) + (∂Φ/∂A)² S_A(w).   (15)
Note that the parametric modulation coefficients of (15) are<br />
easy to determine either from circuit simulation or by empirical<br />
measurement at the circuit level.<br />
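Once those two coefficients are in hand, evaluating (15) is a one-line combination. In the sketch below, both coupling coefficients and both noise PSDs are assumed, illustrative numbers of the kind one might extract from simulation or bench measurement:

```python
# Sketch of Eq. (15): feedback-phase PSD produced by parametric modulation.
# All four quantities below are assumed, illustrative values (not from the paper).
dPhi_dn = 1e-2      # phase sensitivity to additive (bias) noise, rad/V
dPhi_dA = 5e-3      # phase sensitivity to amplitude variation, rad/V
S_n     = 1e-15     # additive-noise PSD at the offset of interest, V^2/Hz
S_A     = 4e-14     # amplitude-noise PSD, V^2/Hz

# Each coupling path converts a voltage PSD into a phase PSD (rad^2/Hz).
S_Phi = dPhi_dn**2 * S_n + dPhi_dA**2 * S_A
```

With these numbers the amplitude path dominates by a factor of ten, which is exactly the kind of budgeting the coefficient-based form of (15) makes convenient.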
To capture the phase noise caused by parametric modulation in the<br />
electronics, Figure 3 explicitly identifies a parametric phase shift in<br />
the amplifier, Φ(t). In this case, the spectrum of the feedback phase<br />
becomes:<br />
S_θf(w) = S_Ψ(w) + S_Φ(w) ≅ (1/A²)[S_n(w − w_o) + S_n(w + w_o)] + 2S_rw/B² + S_Φ(w).   (16)
Note that (16) makes it clear that feedback phase noise is composed<br />
of the sum of injected phase noise (the phase produced by the<br />
LTV mapping of additive noise) and parametric phase noise. <strong>The</strong><br />
associated oscillator phase noise of (11) generalizes to<br />
S_θR(w) ≅ S_θf(w)[1 + (∂φ(w_o)/∂w)^−2 (1/w²)], |w| > 0.   (17)
To elaborate on the propagation of additive flicker noise to flicker<br />
feedback phase, note that additive flicker noise does not convert<br />
to appreciable phase noise without a nonlinear element coupling<br />
additive noise to phase noise (i.e., without parametric modulation).<br />
For example, let the additive flicker noise PSD be<br />
Kfe S (w) n ≅ .<br />
|w|<br />
(18)<br />
In the absence of direct conversion of additive noise to phase noise,<br />
the corresponding feedback phase noise spectrum is

S_θf(w) ≅ (K_fe/A²)[1/|w − w_o| + 1/|w + w_o|] ≅ 0, 0 < |w| « w_o.   (19)
Thus, in this case, we are insensitive to additive flicker noise because<br />
of the effect of the LTV mapping of additive noise to phase noise<br />
(the up-modulation).<br />
However, in the case in which the feedback electronics possesses parametric modulation that results in direct coupling between additive noise voltage and phase (for example, because of bias-dependent semiconductor devices in the electronics), there will be a flicker component in the feedback phase noise spectrum given by

S_Φ(w) ≅ K_fp/|w|.   (20)
At very low frequency, the phase noise spectrum becomes

S_θR(w) ≅ (∂φ(w_o)/∂w)^−2 K_fp/|w|³.   (21)
Thus, the presence of direct coupling between voltage (or current) and phase will result in a sensitivity to flicker noise, producing a 1/f³ close-in slope to the phase noise spectrum. The corner frequency of the oscillator phase noise, using the variables introduced in this analysis, becomes

w_c.osc ≅ (∂Φ(0)/∂n)² A² w_c.amp,   (22)
where w_c.amp is the corner frequency of the electronics noise, at which the flicker component (K_fe/(A²|w|)) equals the white noise component (2S_nw/A²). Equation (22) represents a significant deviation from convention: the corner frequency at which 1/f³ oscillator noise begins to dominate is not equal to the amplifier corner frequency, but is scaled by the amplifier nonlinearity. Though many have noted that the oscillator corner can be significantly lower than the amplifier corner, this analysis offers a specific closed-form model. It should be noted that K_fp can be reduced by several techniques that amount to linearization through the use of degenerative feedback in the electronics. In addition, the electronics flicker coefficient K_fe is process-dependent but, for a given process, can be reduced through device scaling.
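The corner itself is just the crossing of the 1/f feedback-phase term with the white injected-phase floor, and can be found numerically. In the sketch below, K_fp, S_nw, and A are assumed, illustrative values:

```python
# Find the oscillator 1/f^3 corner: the offset at which the parametric flicker
# feedback-phase term K_fp/|dw| falls to the white injected-phase floor 2*S_nw/A^2.
# K_fp, S_nw, and A are assumed, illustrative values (not from the paper).
K_fp = 1e-15       # flicker feedback-phase coefficient, rad^2
S_nw = 1e-17       # white additive-noise PSD, V^2/Hz
A    = 1.0         # carrier amplitude, V

white_floor = 2.0 * S_nw / A**2            # rad^2/Hz
w_c_osc = K_fp / white_floor               # offset where K_fp/dw crosses the floor

# Sanity check: at the corner the two terms are equal by construction.
assert abs(K_fp / w_c_osc - white_floor) < 1e-30
```

Because K_fp carries the amplifier-nonlinearity coupling, weakening that coupling (e.g., via degenerative feedback) moves this crossing, and hence the oscillator 1/f³ corner, toward lower offset frequencies.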
<strong>The</strong> phase shift in the amplifier section can also fluctuate because<br />
of temperature, vibration, and other effects. Device models for<br />
fluctuation in the amplifier components and their resulting phase<br />
shifts can be inserted into (15)-(17) to identify the impact on phase<br />
noise.<br />
G. Effects of Nonlinearities and Parametric Sensitivities in the<br />
Resonator<br />
In its simplest form, the oscillator phase noise includes only the<br />
additive electronics noise mapped into phase noise through the<br />
LTV transformation and passed through the resonator in a closed<br />
loop. This is expressed in (11). Equations (16) and (17) generalize<br />
this to include more noise sources feeding into the loop.<br />
<strong>The</strong> derivations have intentionally left the resonator phase slope<br />
at resonance as a general function. For mechanical resonators that<br />
have nonlinear response characteristics, the slope near w_o can
become higher than that of a linear device and even bifurcate into<br />
a multivalued function [7]. It has been shown that some types of<br />
nonlinearity can actually improve the phase noise of the device as<br />
long as the operating point can be well-defined [4].<br />
In addition to being affected by feedback phase, the phase noise of<br />
an oscillator can also be affected by variations in natural frequency.<br />
<strong>The</strong> natural frequency can be affected by environmental influences,<br />
such as temperature and vibration, or by a nonlinear coupling<br />
between oscillator amplitude and natural frequency.<br />
To define the parametric noise caused by shifts in the resonator natural frequency, we must make a distinction between the resonator oscillation frequency noise w_o(t) and the resonator natural frequency noise w_n(t). The natural frequency noise is the resonator oscillation frequency noise in the absence of feedback phase noise. The resonator oscillation frequency noise can be expressed as (see (6))

w_o(t) ≅ w_n(t) + (∂φ(w_o)/∂w)^−1 φ(t).   (23)
We can express the readout phase noise in a form that is explicit in<br />
feedback phase noise and natural frequency noise as follows:<br />
θ_R(t) = Ψ_n(t) + (∂φ(w_o)/∂w)^−1 ∫_{−∞}^{t} φ(t)dt + ∫_{−∞}^{t} w_n(t)dt.   (24)
Notice that this reduces to the expression for the LTI resonator<br />
when there is no natural frequency noise.<br />
With this effect, together with the resonator noise, the Leeson<br />
equation (12) is more generally written as<br />
S_θR(w) ≅ S_θf(w)[1 + (w_o/2Qw)²] + S_wn(w)/w².   (25)
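The structure of (25) — a Leeson-shaped feedback-phase term plus an integrated natural-frequency term — can be sketched numerically. All values below are assumed, illustrative numbers for a 10-MHz, Q = 10⁴ oscillator:

```python
import math

def osc_psd(dw, S_thf, S_wn, w_o, Q):
    """Generalized Leeson expression of Eq. (25): the feedback-phase term shaped
    by the resonator, plus natural-frequency noise integrated into phase."""
    return S_thf * (1.0 + (w_o / (2.0 * Q * dw)) ** 2) + S_wn / dw ** 2

# Assumed, illustrative values (not from the paper).
w_o, Q = 2 * math.pi * 10e6, 1e4
S_thf, S_wn = 1e-16, 1e-10     # rad^2/Hz and (rad/s)^2/Hz, both taken flat

dw = 2 * math.pi * 1.0                         # 1-Hz offset, rad/s
with_wn = osc_psd(dw, S_thf, S_wn, w_o, Q)     # both terms active
without = osc_psd(dw, S_thf, 0.0,  w_o, Q)     # feedback-phase term alone
```

At a given offset the natural-frequency term simply adds S_wn/Δw² on top of the Leeson term, so flat frequency noise appears as an additional 1/f² phase contribution.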
Mechanical resonators often exhibit amplitude-frequency sensitivity at high amplitudes, which can be due to material, geometric, or even electrostatic effects [8]-[10]. Thus, stochastic variation of the resonator amplitude can convert directly to phase noise. In general, the phase noise contribution caused by amplitude-frequency coupling can be written as

S_wn(w) ≅ (∂w_n/∂X)² S_X(w),   (26)

where in this case X is the amplitude of the resonator. Typical values for mechanical nonlinearity are often quoted in terms of power and range from 10⁻⁹/μW for AT- and BT-cut quartz to 10⁻¹¹/μW for SC-cut quartz [8].
Resonator amplitude noise will depend, in part, on the amplitude<br />
control approach. In the case in which an amplitude control loop<br />
is designed to control the oscillator output amplitude to a specific<br />
value, the amplitude noise is given by the amplitude noise injected<br />
by the oscillator feedback electronics, provided that we neglect<br />
noise added by the amplitude control circuitry. In this case, we<br />
can determine oscillator amplitude noise using the analytic signal<br />
formulation in a way that parallels our derivation of injected phase<br />
noise. Doing so results in injected amplitude noise spectrum given<br />
by<br />
S_An(w) = (1/2)[S_ne(w − w_o) + S_ne(w + w_o)].   (27)
The resulting natural frequency noise spectrum is given by

S_wn(w) ≅ (∂w_n/∂X)² (∂X/∂A)² S_An(w).   (28)
Therefore, amplitude noise will increase oscillator phase noise in the case of a resonator with amplitude-frequency coupling, and white amplitude noise will increase 1/f² phase noise in this case.
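The chain in (28) and its 1/f² consequence can be checked with a short sketch. The coupling coefficients and amplitude-noise level below are assumed for illustration only:

```python
import math

# Sketch of Eq. (28): injected amplitude noise reaching oscillator phase through
# amplitude-frequency coupling. All coefficients are assumed, illustrative values.
w_o = 2 * math.pi * 10e6
dwn_dX = 1e-9 * w_o      # natural-frequency pull per unit resonator amplitude, rad/s
dX_dA  = 1.0             # resonator amplitude per unit loop amplitude
S_An   = 1e-13           # injected amplitude-noise PSD, taken white, 1/Hz

S_wn = (dwn_dX * dX_dA) ** 2 * S_An    # (rad/s)^2/Hz, flat because S_An is flat

# A flat S_wn enters the phase spectrum as S_wn/dw^2: a 1/f^2 (20 dB/decade) term.
phase_1Hz  = S_wn / (2 * math.pi * 1.0) ** 2
phase_10Hz = S_wn / (2 * math.pi * 10.0) ** 2
```

The two evaluated offsets differ by exactly a factor of 100 in PSD, confirming the 1/f² behavior stated above.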
Finally, external influences on the resonator frequency, such as those<br />
described by [11], can be inserted into the phase noise predictions<br />
using (25). For example, if random vibration induces changes in the<br />
natural frequency of the resonator, we may write:<br />
S_wn(w) ≅ (∂w_n/∂g)² S_g(w),   (29)
where S_g(w) is the vibration PSD in g²/Hz. This expression is consistent with well-established results [8], [11], [12]. Typical frequency sensitivities (Δw/w_o) are in the range of 10⁻¹⁰/g, and typical rms vibration levels inside a quiet building are on the order of 20 mg [11], resulting in substantial phase noise that cannot be neglected in the interpretation of real test data.
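The magnitude of this effect is easy to estimate from (29) using the typical numbers just quoted; the carrier frequency and the bandwidth over which the 20-mg rms vibration is spread are assumptions added for the sketch:

```python
import math

# Sketch of Eq. (29) with the typical sensitivity (1e-10/g) and quiet-building
# vibration (20 mg rms) quoted above. The 10-MHz carrier and the assumed 2-kHz
# vibration bandwidth are illustrative choices, not values from the paper.
w_o   = 2 * math.pi * 10e6
gamma = 1e-10                       # fractional frequency sensitivity, per g
dwn_dg = gamma * w_o                # rad/s of natural-frequency shift per g

g_rms, bw = 0.020, 2000.0           # 20 mg rms spread flat over 2 kHz
S_g = g_rms**2 / bw                 # vibration PSD, g^2/Hz

S_wn = dwn_dg**2 * S_g              # natural-frequency noise, (rad/s)^2/Hz
# Phase contribution at a 1-Hz offset, via the S_wn/dw^2 term of Eq. (25):
L_1Hz = 10 * math.log10(S_wn / (2 * math.pi * 1.0) ** 2)
```

For these assumptions the vibration term alone sits near −127 dB at a 1-Hz offset, tens of decibels above a typical thermal floor, which is why it cannot be neglected when interpreting bench data.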
H. Single-Sideband Noise Spectral Density<br />
Phase noise is typically expressed in terms of decibels per hertz with<br />
respect to the carrier power (dBc/Hz) using the single-sideband<br />
noise spectral density, defined per IEEE Standard 1139–2008 [13] as<br />
L(w) = (1/2) S_θR,SSB(w),   (30)

where S_θR,SSB(w) is the single-sideband spectrum and S_φ,SSB(w) = 2S_φ(w), w > 0. Appendix C discusses the definitions in more detail.
Hereafter, we replace w by Δw in the phase noise expression to<br />
emphasize the fact that the frequency to which we refer is the offset<br />
from the carrier frequency.<br />
In the case in which the resonator is LTI, the electronics do not include parametric modulation, and we neglect resonator noise and consider only the electronics noise, the single-sideband noise spectral density (in dBc/Hz) is given by

L(Δw) ≅ 10 log{(1/A²)[S_n(Δw − w_o) + S_n(Δw + w_o)][1 + (∂φ(w_o)/∂w)^−2 (1/Δw²)]}.   (31)
I. General Expression for Single-Sideband Noise<br />
Spectral Density<br />
For the general case, the readout phase spectrum is given by

S_θR(w) ≅ S_θf(w)[1 + (∂φ(w_o)/∂w)^−2 (1/w²)] + S_wn(w)/w²,   (32)
where S_θf was defined in (16). Assuming additive white resonator noise, stationary electronics noise, a non-LTI resonator, and feedback electronics with parametric modulation, the single-sideband noise spectral density is given by (33), where w_o is the oscillator frequency and 0 < |Δw| « w_o.
[Figure 4 appears here: plots of L(Δw), roughly −110 to −160 dBc/Hz, versus offset frequency from 1 to 10³. Panel (a) overlays the amplifier white noise floor, L = 10·log(2S_nw/A²), and the oscillator noise due to amplifier white noise, L = 10·log[(2S_nw/A²)(1 + (w_o/2QΔw)²)], with the corner w_3dB marked. Panel (b) overlays the amplifier 1/f noise, L = 10·log(K_fe/(A²Δw)), and the oscillator noise due to amplifier 1/f noise, L = 10·log[(K_fp/Δw)(1 + (w_o/2QΔw)²)], with the corners w_c.amp and w_c.osc marked.]

Figure 4. Plots of the total oscillator phase noise spectral density (red), overlaid with plots emphasizing the role of white noise (a) and 1/f noise (b) in the total phase noise. Plot (a) focuses on the role of white noise in the feedback phase. The noise floor of the amplifier is given, and the term in the total L caused by white noise in the feedback phase is plotted as the solid blue line. Plot (b) focuses on the role of the amplifier nonlinearity in response to 1/f noise. The amplifier L caused by 1/f noise is shown as a dashed line, together with the amplifier white noise floor. The solid blue line shows how amplifier 1/f noise contributes to feedback phase and plots oscillator phase noise caused by this term. It is clear from the plot that the 1/f contribution dominates the oscillator's total response at low frequencies.
Cases can be run that show further increases in close-in noise if we<br />
introduce resonator nonlinearity. Finally, even though vibration<br />
was neglected, we note that even a small amount of vibration has a<br />
significant effect on the phase noise.<br />
III. Conclusion<br />
<strong>The</strong> past two decades have seen substantial progress in the<br />
understanding of fundamental phase noise processes and modeling<br />
tools. Although many basic mechanisms were identified in the early<br />
days of radio, there have been outstanding recent contributions<br />
in advanced numerical tools and insightful analytical modeling.<br />
This work adds to the analytical modeling literature, providing an<br />
intuitive model for oscillator phase noise that allows convenient<br />
generalization to cases where the resonator is non-LTI or the<br />
electronics include parametric modulation.<br />
This paper began by postulating that the instantaneous incremental<br />
phase shifts around an oscillator loop sum to zero. This led to the<br />
assertion that the instantaneous resonator phase noise is equal<br />
in magnitude and opposite in sign to the feedback phase noise.<br />
Oscillator phase noise was then expressed as the sum of the<br />
feedback phase noise and the running integral of resonator output<br />
frequency. Resonator output frequency for the case of an LTI<br />
resonator was found by multiplying the feedback phase noise by the<br />
reciprocal of the resonator phase slope evaluated at the resonator<br />
oscillation frequency.<br />
The work showed that electronics flicker noise impacts close-in phase noise only if the electronics includes parametric modulation, in which case the feedback phase noise consists of both injected phase noise (noise passed through the LTV mapping) and parametric phase noise. The flicker noise results in a 1/f³ component in the close-in phase spectrum. Parametric phase noise may also
have contributions from variations in amplitude, power supply, and<br />
temperature. Additional phase noise can result if the resonator is<br />
non-LTI. This additional phase noise is associated with perturbation<br />
of the resonator natural frequency. <strong>The</strong> natural frequency can be<br />
disturbed by resonator amplitude variations (in the case of a nonlinear<br />
resonator), and by effects such as vibration, temperature, and drift.<br />
Appendix A<br />
Analytic Signal Representation of Oscillator Signal<br />
Fundamental to our approach for determining phase noise is the<br />
recognition that any real signal s(t) has a complex counterpart<br />
(called its analytic signal) that possesses the same instantaneous<br />
amplitude and the same instantaneous phase as s(t). <strong>The</strong> analytic<br />
signal will be denoted c(t) and is given by<br />
c(t) = s(t) + js^(t), (A1)<br />
where sˆ(t) is the Hilbert transform of s(t). For our purposes, the<br />
Hilbert transform is best viewed as the output of a linear filter that<br />
produces a phase shift of -90 deg at all positive frequencies, is<br />
Hermitian, and has unity gain. <strong>The</strong> transfer function of the Hilbert<br />
transformer is given by<br />
H (w) = -j sgn(w), (A2)<br />
H<br />
where

sgn(w) = −1, w < 0; 0, w = 0; +1, w > 0.   (A3)
Consider the case in which a random noise signal n(t) is added to a<br />
sinusoid of amplitude A. <strong>The</strong> only restriction placed on n(t) is that it<br />
is wide-sense stationary, and thus we can define its power spectrum.<br />
<strong>The</strong> noisy sinusoid is given by<br />
s(t) = A cos(w t) + n(t). (A4)<br />
o<br />
The analytic signal is given by

c(t) = A cos(w_o t) + jA sin(w_o t) + n(t) + jn̂(t).   (A5)

The analytic signal can be expressed in polar form as

c(t) = Ae^{jw_o t} + √(n²(t) + n̂²(t)) e^{j tan⁻¹(n̂(t)/n(t))}.   (A6)
We see that the analytic signal is the sum of two phasors, one<br />
corresponding to the signal and another corresponding to the noise.<br />
The signal phasor has amplitude A and rotates counterclockwise at a constant rate of w_o. The noise phasor has random amplitude and random phase. The resultant phasor representing c(t) possesses both random amplitude modulation and random phase modulation.
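The analytic-signal construction is easy to reproduce numerically: zeroing the negative-frequency half of the FFT is the discrete analog of the Hilbert filter in (A2). The signal below (amplitude 3, 32 cycles) is an illustrative choice:

```python
import numpy as np

# Minimal sketch of the analytic signal c(t) = s(t) + j*s_hat(t), built by
# zeroing negative FFT frequencies (the discrete analog of Eq. (A2)).
def analytic_signal(s):
    N = len(s)                      # assumed even here, for simplicity
    S = np.fft.fft(s)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0          # DC and Nyquist bins kept as-is
    h[1:N // 2] = 2.0               # positive frequencies doubled, negative zeroed
    return np.fft.ifft(S * h)

t = np.arange(1024) / 1024.0
s = 3.0 * np.cos(2 * np.pi * 32 * t)       # noise-free sinusoid of amplitude 3
c = analytic_signal(s)

# The instantaneous amplitude |c(t)| recovers the envelope A = 3.
assert np.allclose(np.abs(c), 3.0, atol=1e-9)
```

Adding noise to s would make |c(t)| and arg c(t) fluctuate, which is exactly the amplitude- and phase-modulation picture described above.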
Appendix B<br />
Derivation of Feedback Phase Spectrum<br />
The injected phase is given by

Ψ_n(t) ≅ (n̂(t)/A) cos(w_o t) − (n(t)/A) sin(w_o t).   (B1)
The autocorrelation function (ACF) of the injected phase is given by

R_Ψn(τ) = 〈[(n̂(t)/A) cos(w_o t) − (n(t)/A) sin(w_o t)] [(n̂(t + τ)/A) cos(w_o (t + τ)) − (n(t + τ)/A) sin(w_o (t + τ))]〉,   (B2)
where the angle brackets denote the expected value operator. The ACF can be expanded as

R_Ψn(τ) = (1/A²)〈n̂(t)n̂(t + τ) cos w_o t cos w_o (t + τ) + n(t)n(t + τ) sin w_o t sin w_o (t + τ) − n(t)n̂(t + τ) sin w_o t cos w_o (t + τ) − n̂(t)n(t + τ) cos w_o t sin w_o (t + τ)〉.   (B3)
We can eliminate terms that average to zero, yielding

R_Ψn(τ) = (1/2A²) cos w_o τ 〈n̂(t)n̂(t + τ)〉 + (1/2A²) cos w_o τ 〈n(t)n(t + τ)〉 + (1/2A²) sin w_o τ 〈n(t)n̂(t + τ)〉 − (1/2A²) sin w_o τ 〈n̂(t)n(t + τ)〉.   (B4)
It can be shown that [15]

〈n(t)n̂(t + τ)〉 = −〈n̂(t)n(t + τ)〉.   (B5)
Therefore, the cross-correlation terms add, and we have

R_Ψn(τ) = (1/2A²) cos w_o τ 〈n̂(t)n̂(t + τ)〉 + (1/2A²) cos w_o τ 〈n(t)n(t + τ)〉 + (1/2A²) sin w_o τ 〈n(t)n̂(t + τ)〉 + (1/2A²) sin w_o τ 〈n(t)n̂(t + τ)〉   (B6)

= (1/2A²) cos w_o τ 〈n̂(t)n̂(t + τ)〉 + (1/2A²) cos w_o τ 〈n(t)n(t + τ)〉 + (1/A²) sin w_o τ 〈n(t)n̂(t + τ)〉.   (B7)

It can also be shown that [15]
S_n̂n(w) = −jS_nn(w), w > 0; jS_nn(w), w < 0,   (B8)

where S_n̂n(w) is the cross-spectrum corresponding to 〈n(t)n̂(t + τ)〉 and S_nn(w) is the PSD of n(t).
Now, note the following Fourier transform property applied to the autocorrelation function:

R_1(τ)R_2(τ) ↔ (1/2π) S_1(w) * S_2(w),   (B9)
where the asterisk denotes the convolution operator. From (B7), we have

R_Ψn(τ) = (1/2A²) R_n̂(τ) cos w_o τ + (1/2A²) R_n(τ) cos w_o τ + (1/A²) R_n̂n(τ) sin w_o τ.   (B10)

Using (B9) and (B10), we obtain

S_Ψn(w) = (1/4πA²) FT{R_n̂(τ)} * FT{cos w_o τ} + (1/4πA²) FT{R_n(τ)} * FT{cos w_o τ} + (1/2πA²) FT{R_n̂n(τ)} * FT{sin w_o τ}.   (B11)
Because the Hilbert transformer has unity gain, FT{R_n̂(τ)} = S_n(w), and using (B8), this equation can be rewritten as

S_Ψn(w) = (1/4πA²) S_n(w) * FT{cos w_o τ} + (1/4πA²) S_n(w) * FT{cos w_o τ} − (1/2πA²) jS_n(w) sgn(w) * FT{sin w_o τ}.   (B12)

Note that

FT{cos w_o τ} = π[δ(w − w_o) + δ(w + w_o)]   (B13)
and<br />
FT{sin w_o τ} = jπ[δ(w + w_o) − δ(w − w_o)].   (B14)
Simplifying (B12), we obtain

S_Ψn(w) = (1/2πA²) S_n(w) * FT{cos w_o τ} − (j/2πA²) S_n(w) sgn(w) * FT{sin w_o τ}.   (B15)
Finally, we obtain the injected phase noise spectrum:

S_Ψn(w) = (1/2A²) S_n(w − w_o)[1 − sgn(w − w_o)] + (1/2A²) S_n(w + w_o)[1 + sgn(w + w_o)].   (B16)

Because we are concerned with lower frequencies, we know that for our frequencies of interest

0 < |w| < w_o.   (B17)
Thus, the injected phase noise spectrum can be approximated as

S_Ψn(w) ≅ (1/A²)[S_n(w − w_o) + S_n(w + w_o)].   (B18)
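The mapping behind (B1)-(B18) can be sanity-checked by direct simulation: for white additive noise of variance σ² on a carrier of amplitude A, the injected phase Ψ_n(t) of (B1) should carry total power σ²/A². The sample count, amplitude, and carrier rate below are illustrative choices:

```python
import numpy as np

# Numerical check of (B1): for white noise n(t) with variance sigma^2 and
# carrier amplitude A, psi(t) = (n_hat/A)cos(w_o t) - (n/A)sin(w_o t)
# has variance sigma^2/A^2, consistent with integrating (B18) over frequency.
rng = np.random.default_rng(0)
N, A = 1 << 16, 2.0
n = rng.standard_normal(N)

# Discrete Hilbert transform: imaginary part of the analytic signal,
# obtained by zeroing negative FFT bins and doubling positive ones.
h = np.zeros(N)
h[0] = h[N // 2] = 1.0
h[1:N // 2] = 2.0
n_hat = np.imag(np.fft.ifft(np.fft.fft(n) * h))

w_o_t = 2 * np.pi * 0.1 * np.arange(N)        # carrier phase ramp (0.1 cycles/sample)
psi = (n_hat / A) * np.cos(w_o_t) - (n / A) * np.sin(w_o_t)

assert np.isclose(np.var(psi), np.var(n) / A**2, rtol=0.1)
```

The check works because n(t) and n̂(t) each carry variance σ², the cos² and sin² weights each average to 1/2, and the cross term averages to zero (R_n̂n(0) = 0 for white noise).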
Appendix C<br />
Noise-Carrier Ratio from Phase Spectrum<br />
We can define the noise-carrier ratio (NCR) as

NCR(w) ≡ single-sideband PSD / carrier power.   (C1)
This definition is useful when attempting to measure phase noise<br />
using a spectrum analyzer. Although NCR has been replaced with<br />
the more general single-sideband noise spectral density denoted<br />
by L(w) [13] (also explained in [3]), the authors believe it may<br />
be useful to provide the derivation relating the phase spectrum to the noise-carrier ratio, because NCR(w) and L(w) are identical for
Consider a sinusoid with small phase modulation<br />
e(t) = cos(w o t + φ(t)). (C2)<br />
<strong>The</strong> signal can be approximated (using a Taylor expansion) as<br />
e(t) ≅ cos(w o t) – φ(t) sin(w o t). (C3)<br />
The sideband term can be approximated as

e_SB(t) ≅ −φ(t) sin w_o t.   (C4)

The sideband spectrum is found from

S_SB(w) ≅ (1/2π) S_φ(w) * FT{sin w_o t}   (C5)

or

S_SB(w) ≅ (−j/2π) S_φ(w) sgn(w) * jπ[δ(w + w_o) − δ(w − w_o)].   (C6)

Therefore,

S_SB(w) ≅ (1/2)[S_φ(w + w_o) + S_φ(w − w_o)].   (C7)
Because the phase spectrum will always be a low-pass function, we will treat it here as a strictly band-limited low-pass function for simplicity. Then from (C7) and Figure C1, it is clear that

S_SB(w_o + w) ≅ (1/2) S_φ(w).   (C8)

Figure C1. Phase spectrum and corresponding power spectrum for a phase-modulated sinusoid.
Therefore,

NCR(w) ≡ S_SB(w_o + w)/P_o ≅ S_φ(w), w > 0.   (C9)

Note that because the signal amplitude in (C2) was normalized to unity, P_o = 1/2.
Finally, we note that, to avoid any confusion, because the frequency w in the NCR expression represents the frequency deviation from the carrier frequency, we will replace w by Δw in the NCR expression, so that

NCR ≅ S_φ(Δw).   (C10)
Figure C1 shows a representative baseband phase spectrum and the<br />
corresponding spectrum for a phase-modulated sinusoid having<br />
normalized amplitude. For analytical purposes, double-sided<br />
spectra are used in this manuscript unless specifically mentioned<br />
otherwise. IEEE Standard 1139 reports phase noise spectra in terms of the single-sided spectrum as [13]

L(Δw) = (1/2) S_φ,SSB(Δw) = S_φ(Δw), Δw > 0,   (C11)

where, per the usual definition, the single-sided spectrum is defined only for Δw > 0 and is given by S_φ,SSB(Δw) = 2S_φ(Δw).
Acknowledgments<br />
The authors wish to thank The Charles Stark Draper Laboratory, Inc. for supporting this work, and especially Jeff Lozow of Draper Laboratory for verifying the derivation in Appendix B.
References<br />
[1] Leeson, D.B., “A Simple Model of Feedback Oscillator Noise<br />
Spectrum,” Proc. IEEE, Vol. 54, No. 2, 1966, pp. 329-330.<br />
[2] Hajimiri, A. and T.H. Lee, "A General Theory of Phase Noise in Electrical Oscillators," IEEE J. Solid-State Circuits, Vol. 33, No. 2, 1998, pp. 179-194.
[3] Rubiola, E., Phase Noise and Frequency Stability in Oscillators,<br />
Cambridge University Press, New York, NY, 2009.<br />
[4] Greywall, D.S., B. Uurke, P.A. Busch, A.N. Pargellis, and R.L.<br />
Willett, “Evading Amplifier Noise in Nonlinear Oscillators,”<br />
Phys. Rev. Lett., Vol. 72, No. 9, 1994, pp. 2992-2995.<br />
[5] Gabrielson, T., “Mechanical-<strong>The</strong>rmal Noise in Microma-<br />
chined Acoustic and Vibration Sensors,” IEEE Trans.<br />
Electron. Dev., Vol. 40, No. 5, 1993, pp. 903-909.<br />
[6] Nguyen, C. “Micromechanical Resonators for Oscillators<br />
and Filters,” Proceedings, IEEE Int. Ultrasonics Symp., Seattle,<br />
WA, 1995, pp. 489-499.<br />
[7] Nayfeh A.H. and D. Mook, Nonlinear Oscillations, Wiley, New<br />
York, NY, 1995.<br />
[8] Walls F. and J. Gagnepain, “Environmental Sensitivities of<br />
Quartz Oscillators,” IEEE Trans. Ultrason. Ferroelectr. Freq.<br />
Figure, Vol. 39, No. 2, 1992, pp. 241-249.<br />
[9] Agarwal, M., K. Park, B. Kim, M. Hopcroft, S.A. Chandorkar,<br />
R.N. Candler, C.M. Jha, R. Melamud, T.W. Kenny, and B.<br />
Murmann, “Amplitude Noise-Induced Phase Noise in<br />
Electrostatic MEMS Resonators,” Solid-State Sensor,<br />
Actuator, and Microsystems Workshop, Hilton Head, SC,<br />
2006, pp. 90-93.<br />
[10] Kusters, J., “<strong>The</strong> SC Cut Crystal - An Overview,” Proceedings,<br />
Ultrasonics Symp., 1981, pp. 402-409.<br />
[11] Vig. J., Available [Online]: “Quartz Crystal Resonators and<br />
Oscillators,”nhttp://www.ieee-uffcorg/frequency_control/<br />
teaching.asp, February 2010.<br />
[12] Filler, R., “<strong>The</strong> Acceleration Sensitivity of Quartz Crystal<br />
Oscillators: A Review,” IEEE Trans. Ultrason. Ferroelectr. Freq.<br />
Control, Vol. 35, No. 3, 1988, pp. 297-305.<br />
[13] IEEE Standard Definitions of Physical Quantities for Funda-<br />
mental Frequency and Time Metrology–Random Instabilities,<br />
IEEE Standard 1139 –2008, 2008, pp. c1–35.<br />
[14] Ellinger, F., Radio Frequency Integrated Circuits and Technologies,<br />
Springer, New York, NY, 2007.<br />
[15] Papoulis, A., Probability, Random Variables, and Stochastic Processes,<br />
McGraw-Hill, New York, NY, 1965.<br />
Oscillator Phase Noise: Systematic Construction of an Analytical Model Encompassing Nonlinearity
Paul A. Ward is Laboratory Technical Staff (highest technical tier) at The Charles Stark Draper Laboratory. He has<br />
extensive experience in the development of high-performance electronics for a wide array of systems. He has been<br />
with Draper for 25 years and has developed innovative circuits and signal processing to support precision signal<br />
references, fiber-optic gyroscopes, Microelectromechanical System (MEMS) gyroscopes and accelerometers,<br />
strategic radiation-hard inertial instruments, and other instruments and systems. He received Draper’s<br />
Distinguished Performance Award in both 1994 and 1997, as well as Draper’s Best Patent Award in 1996, 1997,<br />
and 1998. Mr. Ward has managed Draper’s Microelectronics group, Analog and Power Systems group, and Mixed-<br />
Signal Control Systems group. He currently holds 22 U.S. patents, with several applications pending, and has coauthored<br />
numerous papers. Mr. Ward holds B.S. and M.S. degrees in Electrical Engineering from Northeastern University.<br />
Amy E. Duwel is Group Leader for RF and Communications at The Charles Stark Draper Laboratory after many<br />
years managing Draper’s MEMS Group. Her technical interests focus on microscale energy transport and on the<br />
dynamics of MEMS resonators in application as inertial sensors, RF filters, and chemical detectors. Dr. Duwel<br />
received a B.A. in Physics from the Johns Hopkins University and M.S. and Ph.D. degrees in Electrical Engineering<br />
and Computer Science from the Massachusetts Institute of Technology (MIT).<br />
Due to the limited lifespan of artificial joints and the ineffectiveness of current<br />
drug regimens, patients below the age of 65 with end-stage osteoarthritis<br />
often live with severe pain and disability. Researchers from Cytex<br />
Therapeutics, Draper, and MIT are collaborating on the project under a grant<br />
from the National Institutes of Health, with the goal of developing treatments<br />
that replace damaged tissue at the joint surface with a living cartilage tissue<br />
substitute. Cytex and MIT began working on the project in 2007, and Draper<br />
joined in 2010.<br />
Current cell-based cartilage repair procedures are limited to small defects.<br />
The researchers believe that using a mechanically functional biomaterial<br />
scaffold may enable repair of the entire joint surface while also allowing the<br />
load-bearing associated with normal daily activities.<br />
The researchers also expect that using a live cell component may allow the<br />
new cartilage tissue to maintain itself. This may enable improvement over<br />
current artificial joint prostheses, which tend to wear over time, resulting in<br />
an effective life span of approximately 10 to 20 years.<br />
Young patients with end-stage osteoarthritis stand to benefit the most from<br />
this work, which could also bring down the high cost associated with treating<br />
end-stage osteoarthritis by reducing the number of revision surgeries and<br />
the need for drugs to treat pain. The ability to postpone replacement of an<br />
osteoarthritic joint for 5 to 10 years would be welcomed by both patients<br />
and payers.<br />
The next step is to demonstrate the long-term efficacy of this type of implant<br />
in animal models. Cytex Therapeutics expects that the work will be ready for<br />
human clinical trials in the 2015 to 2020 time frame.<br />
In Vitro Generation of Mechanically Functional Cartilage<br />
Grafts Based on Adult Human Stem Cells and 3D-Woven<br />
Poly(ε-caprolactone) Scaffolds<br />
Piia K. Valonen, Franklin T. Moutos, Akihiko Kusanagi, Matteo G. Moretti, Brian O. Diekman,<br />
Jean F. Welter, Arnold I. Caplan, Farshid Guilak, and Lisa E. Freed<br />
Copyright ©2009 by Elsevier Ltd. All rights reserved. Published in Biomaterials, Vol. 31, January 19, 2010, pp. 2193-2200.<br />
Abstract<br />
Three-dimensionally (3D) woven poly(ε-caprolactone) (PCL) scaffolds were combined with adult human mesenchymal stem cells<br />
(hMSC) to engineer mechanically functional cartilage constructs in vitro. The specific objectives were to: (1) produce PCL scaffolds<br />
with cartilage-like mechanical properties, (2) demonstrate that hMSCs formed cartilage after 21 days of culture on PCL scaffolds, and<br />
(3) study the effects of scaffold structure (loosely vs. tightly woven), culture vessel (static dish vs. oscillating bioreactor), and medium<br />
composition (chondrogenic additives with or without serum). Aggregate moduli of 21-day constructs approached normal articular<br />
cartilage for tightly woven PCL cultured in bioreactors, were lower for tightly woven PCL cultured statically, and lowest for loosely woven<br />
PCL cultured statically (p < 0.05). Construct DNA, total collagen, and glycosaminoglycans (GAG) increased in a manner dependent on<br />
time, culture vessel, and medium composition. Chondrogenesis was verified histologically by rounded cells within a hyaline-like matrix<br />
that immunostained for collagen type II but not type I. Bioreactors yielded constructs with higher collagen content (p < 0.05) and more<br />
homogeneous matrix than static controls. Chondrogenic additives yielded constructs with higher GAG (p < 0.05) and earlier expression of<br />
collagen II mRNA if serum was not present in the medium. These results show the feasibility of functional cartilage tissue engineering from<br />
hMSC and 3D-woven PCL scaffolds.<br />
Introduction<br />
Degenerative joint disease affects 20 million adults with an<br />
economic burden of over $40 billion per year in the U.S. [1].<br />
Once damaged, adult human articular cartilage has a limited<br />
capacity for intrinsic repair [2], and hence injuries can lead to<br />
progressive damage, joint degeneration, pain, and disability. Cell-based<br />
repair of small cartilage defects in the knee joint was first<br />
demonstrated clinically 15 years ago [3]. Many cartilage tissue<br />
engineering studies use chondrocytes as the cell source [4], [5];<br />
however, this approach is challenged by the limited supply of<br />
chondrocytes, their limited regenerative potential due to age,<br />
disease, dedifferentiation during in vitro expansion, and the<br />
morbidity caused by chondrocyte harvest [6]. Therefore, other<br />
studies use mesenchymal stem cells (MSC) as the cell source [7],<br />
[8], as these stem cells can be harvested safely by marrow biopsy,<br />
readily expanded in vitro, and selectively differentiated into<br />
chondrocytes [9].<br />
Clinical translation of tissue engineered cartilage is currently<br />
limited by inadequate construct structure, mechanical function,<br />
and integration [2], [10]. Currently, most tissue engineered<br />
constructs for articular cartilage repair possess cartilage-mimetic<br />
material properties only after long-term (e.g., 1-6 months) in vitro<br />
culture [5], [11], [12]. This lack of early construct mechanical<br />
function implies a need for new tissue engineering technologies<br />
such as scaffolds and bioreactors [13], [14]. For example, the<br />
stiffness and strength of previously used scaffolds were several<br />
orders of magnitude below normal articular cartilage, particularly<br />
in tension [12], [15], [16]. Likewise, mechanical properties of<br />
engineered cartilage produced using these scaffolds and hMSC<br />
were at least one order of magnitude below values reported for<br />
normal cartilage despite prolonged in vitro culture [17], [18].<br />
The goal of the present study was to produce mechanically<br />
functional tissue engineered cartilage from adult hMSC and<br />
3D-woven PCL scaffolds in 21 days in vitro. Effects of (1)<br />
scaffold structure (loosely vs. tightly woven PCL); (2) culture<br />
vessel (static dish vs. oscillating bioreactor); and (3) medium<br />
composition (chondrogenic additives with or without serum) on<br />
construct mechanical, biochemical, and molecular properties<br />
were quantified. A 3D weaving method [19] was applied to<br />
multifilament PCL yarn to create scaffolds with cartilage-mimetic<br />
mechanical properties. PCL was selected because it is an<br />
FDA-approved, biocompatible material [20], [21] that supports<br />
chondrogenesis [22] and degrades slowly (i.e., less than 5%<br />
degradation at 2 years, as measured by mass loss) into byproducts<br />
that are entirely cleared from the body [23], [24].<br />
The 3D-woven PCL scaffolds were seeded with hMSC mixed<br />
with Matrigel® such that gel entrapment enhanced cell seeding<br />
efficiency [25] and also helped to maintain spherical cell<br />
morphology for the promotion of chondrogenesis [26]. The<br />
hMSC-PCL constructs were cultured either in static dishes or in an<br />
oscillatory bioreactor that provided bidirectional percolation of<br />
culture medium directly through the construct [27]. Bioreactors<br />
were studied because these devices are known to enable functional<br />
tissue engineering through the combined effects of enhanced mass<br />
transport and mechanotransduction [14], [28]-[34]. Bidirectional<br />
rather than unidirectional perfusion was selected because the<br />
latter yielded different conditions at the opposing upper and lower<br />
construct surfaces, resulting in spatial concentration gradients<br />
and inhomogeneous tissue development [35], [36].<br />
Three different culture media were tested as follows. Differentiation<br />
medium 1 (DM1) containing serum and chondrogenic<br />
additives (TGFβ, ITS+ Premix, dexamethasone, ascorbic acid,<br />
proline, and nonessential amino acids) was selected based on<br />
our previous work [8], [17], [37]. Differentiation medium 2<br />
(DM2) containing chondrogenic additives but not serum was<br />
selected based on reports that serum inhibited chondrogenesis by<br />
synoviocytes [38], [39] and caused hypertrophy of chondrocytes<br />
[40]. Control medium (CM) without chondrogenic additives was<br />
tested to assess spontaneous chondrogenic differentiation in<br />
hMSC-PCL constructs.<br />
Materials and Methods<br />
All tissue culture reagents were from Invitrogen (Carlsbad, CA)<br />
unless otherwise specified.<br />
Poly(ε-caprolactone) (PCL) Scaffolds<br />
A custom-built loom [19] was used to weave PCL multifilament<br />
yarns (24 µm diameter per filament; 44 filaments/yarn, Grilon<br />
KE-60, EMS/Griltech, Domat, Switzerland) in three orthogonal<br />
directions (x-warp, y-weft, and a vertical z-direction) (Figure 1A).<br />
A loosely woven scaffold was made with widely spaced warp yarns<br />
(8 yarns/cm), closely spaced weft yarns (20 yarns/cm), and two<br />
z-layers between each warp yarn (Figure 1B). A tightly woven<br />
scaffold was made with closely spaced warp and weft yarns (24<br />
and 20 yarns/cm, respectively) and one z-layer between each warp<br />
yarn (Figure 1C). These weaving parameters, in conjunction with<br />
fiber size and the density of PCL (1.145 g/cm³) [41], determined<br />
scaffold porosity and pore dimensions. The loosely woven scaffold<br />
had a porosity of 68 ± 0.3%, approximate pore dimensions of 850<br />
µm × 1100 µm × 100 µm, and an approximate thickness of 0.9 mm.<br />
The tightly woven scaffold had a porosity of 61 ± 0.2%, approximate<br />
pore dimensions of 330 µm × 260 µm × 100 µm, and an approximate<br />
thickness of 1.3 mm. Prior to cell culture, scaffolds were immersed<br />
in 4 N NaOH for 18 h, thoroughly rinsed in deionized water, dried,<br />
ethylene oxide sterilized, and punched into 7-mm diameter discs<br />
using dermal punches (Acuderm Inc., Ft. Lauderdale, FL).<br />
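As an illustrative cross-check (not part of the study's methods), the quoted porosities are consistent with a simple gravimetric estimate based on the PCL density given above; the disc mass below is synthetic, chosen to reproduce the tightly woven scaffold's 61% porosity:

```python
import math

PCL_DENSITY = 1.145  # g/cm^3, solid PCL (from the text)

def gravimetric_porosity(mass_g, diameter_cm, thickness_cm):
    """Porosity = 1 - (fiber volume / bulk volume); fiber volume is
    inferred from scaffold mass and the density of solid PCL."""
    bulk_volume = math.pi * (diameter_cm / 2.0) ** 2 * thickness_cm
    fiber_volume = mass_g / PCL_DENSITY
    return 1.0 - fiber_volume / bulk_volume

# Synthetic mass for a 7-mm diameter, 1.3-mm thick disc at 61% porosity
bulk = math.pi * 0.35 ** 2 * 0.13            # cm^3
mass = (1.0 - 0.61) * bulk * PCL_DENSITY     # ~0.022 g
print(round(gravimetric_porosity(mass, 0.7, 0.13), 2))  # → 0.61
```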
Human Mesenchymal Stem Cells<br />
The hMSC were derived from bone marrow aspirates obtained<br />
from a healthy middle-aged adult male at the Hematopoietic Stem<br />
Cell Core Facility at Case Western Reserve University. Informed<br />
consent was obtained, and an Institutional Review Board-approved<br />
aspiration procedure was used [42]. Briefly, the bone marrow<br />
Figure 1. The 3D-woven PCL scaffold. (A) schematic; (B-C)<br />
scanning electron micrographs of (B) loosely and (C) tightly woven<br />
scaffolds. Scale bars: 1 mm.<br />
sample was washed with Dulbecco’s modified Eagle’s medium<br />
(DMEM-LG, Gibco) supplemented with 10% fetal bovine serum<br />
(FBS) from a selected lot [9]. The sample was centrifuged at 460×g<br />
on a preformed Percoll density gradient (1.073 g/mL) to isolate<br />
the mononucleated cells. These cells were resuspended in serum-supplemented<br />
medium and seeded at a density of 1.8 × 10⁵ cells/<br />
cm² in 10-cm diameter plates. Nonadherent cells were removed<br />
after 4 days by changing the medium. For the remainder of the cell<br />
expansion phase, the medium was additionally supplemented with<br />
10 ng/mL of recombinant human fibroblast growth factor-basic<br />
(rhFGF-2, Peprotech, Rocky Hill, NJ) [43], and was replaced twice<br />
per week. The primary culture was trypsinized after approximately<br />
2 weeks, and then cryopreserved using Gibco Freezing medium.<br />
Tissue Engineered Constructs<br />
The hMSC were thawed and expanded by approximately 10-fold<br />
during a single passage in which cells were plated at 5500 cells/<br />
cm² and cultured in DMEM-LG supplemented with 10% FBS, 10<br />
ng/mL of rhFGF-2, and 1% penicillin-streptomycin-fungizone.<br />
Medium was completely replaced every 2-3 days for 7 days.<br />
Multipotentiality was verified for the expanded hMSC by inducing<br />
differentiation into the chondrogenic lineage in pellet cultures<br />
of passage 2 (P2) cells [44] and into adipogenic and osteogenic<br />
lineages in monolayer culture [45]. The PCL scaffolds (a total of<br />
n = 15-20 per group, in three independent studies) were seeded<br />
with P2 hMSC by mixing cells in growth factor-reduced Matrigel®<br />
(B&D Biosciences) while working at 4°C, and pipetting the cell-gel<br />
mixture evenly onto both surfaces of the PCL scaffold. Each<br />
7 mm diameter, 0.9 mm thick loosely woven scaffold was seeded<br />
with a cell pellet (1 million cells in 10 µL) mixed with 25 µL of<br />
Matrigel®, whereas each 7 mm diameter, 1.3 mm thick tightly<br />
woven scaffold was seeded with a similar cell pellet mixed with 35<br />
µL of Matrigel®. Freshly seeded constructs were placed in 24-well<br />
plates (one construct per well), placed in a 37°C humidified,<br />
5% CO₂/room air incubator for 30 min to allow Matrigel® gelation,<br />
and then 1 mL of medium was added to each well.<br />
After 24 h, constructs were transferred either into 6-well plates<br />
(one construct per well containing 9 mL of medium) and cultured<br />
statically, or into bioreactor chambers as described previously<br />
[27]. Briefly, each construct allocated to the bioreactor group<br />
was placed in a custom-built poly(dimethyl-siloxane) (PDMS)<br />
chamber that was connected to a loop of gas-permeable silicone<br />
rubber tubing (1/32-in wall thickness, Cole Parmer, Vernon<br />
Hills, IL). Each loop was then mounted on a supporting disc, and<br />
medium (9 mL) was added, such that the construct was submerged<br />
in medium in the lower portion of the loop and a gas bubble was<br />
present in the upper portion of the loop [27]. Multiple loops were<br />
mounted on an incubator-compatible base that slowly oscillated<br />
the chamber about an arc of ~160 deg. Importantly, bioreactor<br />
oscillation directly applied bidirectional medium percolation and<br />
mechanical stimulation to the upper and lower surfaces of the<br />
discoid constructs.<br />
Three different medium compositions (DM1, DM2 and<br />
CM) were studied. Differentiation medium 1 (DM1) was<br />
DMEM-HG supplemented with 10% FBS, 10 ng/mL hTGFβ-3<br />
(PeproTech, Rocky Hill, NJ), 1% ITS+ Premix (B&D Biosciences),
10⁻⁷ M dexamethasone (Sigma), 50 mg/L ascorbic acid, 0.4 mM<br />
proline, 0.1 mM nonessential amino acids, 10 mM HEPES, 100<br />
U/mL penicillin, 100 U/mL streptomycin, and 0.25 μg/mL of<br />
fungizone. Differentiation medium 2 (DM2) was identical to DM1<br />
except without FBS. Control medium (CM) was identical to DM1<br />
except without chondrogenic additives (TGFβ-3, ITS+ Premix, and<br />
dexamethasone). Media were replaced at a rate of 50% every 3-4<br />
days, and constructs were harvested after 1, 7, 14, and 21 days.<br />
Mechanical Testing<br />
Confined compression tests [46] were performed on 3-mm<br />
diameter cylindrical test specimens, cored from the centers of<br />
21-day constructs or acellular (initial) scaffolds, using an<br />
ELF-3200 materials testing system (Bose-Enduratec, Framingham,<br />
MA). Specimens (n = 5-6 per group) were placed in a 3 mm diameter<br />
confining chamber within a bath of phosphate-buffered saline<br />
(PBS), and compressive loads were applied using a solid piston<br />
against a rigid porous platen (porosity of 50%, pore size of<br />
50-100 μm). Following equilibration of a 10 gf tare load, a step<br />
compressive load of 30 gf was applied to the sample and allowed<br />
to equilibrate for 2000 s. Aggregate modulus (H_A) and hydraulic<br />
permeability (k) were determined numerically by matching the<br />
solution for axial strain (ε_z) to the experimental data for all creep<br />
tests using a two-parameter, nonlinear least-squares regression<br />
procedure [47], [48]. Unconfined compression tests were done by<br />
applying strains, ε, of 0.04, 0.08, 0.12, and 0.16 to the specimens<br />
(n = 5-6 per group) in a PBS bath after equilibration of a 2% tare<br />
strain. Strain steps were held constant for 900 s, which allowed<br />
the specimens to relax to an equilibrium level. Young’s modulus<br />
was determined by linear regression on the resulting equilibrium<br />
stress-strain plot.<br />
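The Young's modulus determination described above reduces to a linear fit of the equilibrium stress-strain points; a minimal sketch with synthetic data (the 0.41 MPa slope is chosen only to echo the tightly woven 21-day constructs, not a re-analysis of the study's data):

```python
import numpy as np

def youngs_modulus(strains, eq_stresses_mpa):
    """Equilibrium Young's modulus (MPa): slope of the least-squares
    line through the equilibrium stress-strain points."""
    slope, _intercept = np.polyfit(strains, eq_stresses_mpa, 1)
    return slope

# The four strain steps from the protocol, with synthetic stresses
strain = np.array([0.04, 0.08, 0.12, 0.16])
stress = 0.41 * strain + 0.002  # MPa, exactly linear toy data
print(round(youngs_modulus(strain, stress), 3))  # → 0.41
```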
Histology and Immunohistochemistry<br />
Histological analyses were performed after specimens (n = 2<br />
constructs per time point per group) were fixed in 10% neutral<br />
buffered formalin for 24 h at 4°C, post-fixed in 70% ethanol,<br />
embedded in paraffin, and sectioned both en-face and in cross<br />
section. Sections 5 μm thick were stained with safranin-O/fast<br />
green for proteoglycans. For immunohistochemical analysis, 20<br />
μm thick sections were deparaffinized in xylene and rehydrated.<br />
To efficiently expose epitopes, the sections were incubated with<br />
700 U/mL bovine testicular hyaluronidase (Sigma) and 2 U/mL<br />
pronase XIV (Sigma) for 1 h at 37°C. Double immunostaining for<br />
collagen type I (mouse monoclonal antibody, ab6308, Abcam Inc.,<br />
Cambridge, MA) and collagen type II (mouse monoclonal antibody,<br />
CII/CI, Hybridoma Bank, University of Iowa) was performed using<br />
Table 1. The Sequence of PCR Primers (Sense and Antisense, 5’ to 3’).<br />
an Avidin/Biotin kit (Vector Lab, Burlingame, CA). Control sections<br />
were incubated with PBS/1% bovine serum albumin (Sigma)<br />
without primary antibody.<br />
Biochemical Analyses<br />
Standard assays for DNA, ortho-hydroxyproline (OHP, an index<br />
of total collagen content), and GAG (an index of proteoglycan<br />
content) were performed (n = 3-4 bisected constructs per time<br />
point per group). Values obtained for all 1-day constructs produced<br />
from tightly woven PCL scaffolds in DM1, DM2, and CM groups<br />
were pooled, averaged, and used as a basis for comparison for<br />
subsequent (7, 14, and 21-day) constructs produced from tightly<br />
woven scaffolds. After measuring wet weight, constructs were<br />
diced and digested in papain for 12 h at 60°C. DNA was measured<br />
using the Quant-iT™ PicoGreen® dsDNA assay (Molecular Probes,<br />
Eugene, OR). GAG was measured using the Blyscan™ sulphated GAG<br />
assay (Biocolor, Carrickfergus, Northern Ireland). To measure total<br />
collagen, papain digests were hydrolyzed in HCl at 110°C overnight,<br />
dried in a desiccating chamber, reconstituted in acetate-citrate<br />
buffer, filtered through activated charcoal, and OHP was quantified<br />
by Ehrlich’s reaction [49]. Briefly, hydrolysates were oxidized with<br />
chloramine-T, treated with dimethylaminobenzaldehyde, read at<br />
540 nm against a trans-4-hydroxy-L-proline standard curve, and<br />
total collagen was calculated by using a ratio of 10 μg of collagen<br />
per 1 μg of 4-hydroxyproline. <strong>The</strong> conversion factor of 10 was<br />
selected since immunohistochemical staining showed that type<br />
II collagen represented virtually all of the collagen present in the<br />
constructs [50], [51].<br />
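The collagen calculation above is a single conversion from the measured OHP mass; a minimal sketch (hypothetical function name; the hydrolysis and standard-curve steps are omitted):

```python
def total_collagen_ug(ohp_ug, collagen_per_ohp=10.0):
    """Estimate total collagen from ortho-hydroxyproline (OHP) using
    the 10 ug collagen per 1 ug OHP ratio adopted in the text."""
    return collagen_per_ohp * ohp_ug

print(total_collagen_ug(39.5))  # → 395.0
```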
Reverse Transcriptase Polymerase Chain Reaction (RT-PCR)<br />
The presence of two cartilage biomarkers was tested: Sox-9,<br />
one of the earliest markers for MSC differentiation toward the<br />
chondrocytic lineage, preceding the activation of collagen II [52],<br />
and collagen type II, a chondrocyte-related gene. Collagen type I<br />
provided a marker for undifferentiated MSC, and GAPDH provided<br />
an intrinsic control [53]. Total RNA was isolated from hMSC prior to<br />
and after culture on PCL scaffolds (n = 3-4 bisected constructs per<br />
group per time point) using a Qiagen RNeasy mini kit. DNase-treated<br />
RNA was used to make first-strand cDNA with the SuperScript<br />
III First-Strand Synthesis System for RT-PCR. The cDNA was amplified in<br />
an iCycler Thermal Cycler 582BR (Bio-Rad, Hercules, CA) using<br />
primer sequences given in Table 1. The cycling conditions were as<br />
follows: 2 min at 94°C; 30 cycles of (30 s at 94°C, 45 s at 58°C, 1<br />
min at 72°C); and 5 min at 72°C. The PCR products were analyzed<br />
by means of 2% agarose gel electrophoresis containing ethidium<br />
bromide (E-Gel® 2%, Invitrogen).<br />
Primer Sense Antisense Product size<br />
Collagen type II (Col II) atgattcgcctcggggctcc tcccaggttctccatctctg 260 bp<br />
Sox-9 aatctcctggaccccttcat gtcctcctcgctctccttct 198 bp<br />
Collagen type I (Col I) gcatggccaagaagacatcc cctcgggtttccacgtctc 300 bp<br />
Statistical Analysis<br />
Data were calculated as mean ± standard error and analyzed<br />
using multiway analysis of variance (ANOVA) in conjunction with<br />
Tukey’s post hoc test using Statistica (v. 7, Tulsa, OK, USA). Values<br />
of p < 0.05 were considered statistically significant.<br />
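All quantitative results in this paper are summarized as mean ± standard error; for reference, that summary statistic is computed as follows (illustrative values, standard SEM = s/√n definition):

```python
import statistics

def mean_sem(values):
    """Mean and standard error of the mean (SEM = sample stdev / sqrt(n)),
    the summary form used for all values reported here."""
    n = len(values)
    return statistics.fmean(values), statistics.stdev(values) / n ** 0.5

m, s = mean_sem([0.35, 0.37, 0.39])  # e.g., replicate moduli in MPa
print(f"{m:.2f} ± {s:.2f}")  # → 0.37 ± 0.01
```

The ANOVA and Tukey post hoc tests themselves require a statistics package (the authors used Statistica) and are not reproduced here.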
Results<br />
Effects of Scaffold Structure<br />
Scaffold structure did not have any significant effect on the<br />
amounts of DNA, total collagen, or GAG in constructs cultured<br />
statically for 21 days in DM1 (Table 2, Group A vs. B). In contrast,<br />
scaffold structure significantly impacted aggregate modulus<br />
(H_A) and Young’s modulus (E) of initial (acellular) scaffolds and<br />
cultured constructs (Figure 2). Acellular loosely woven scaffolds<br />
exhibited lower (p < 0.05) mechanical properties (H_A of 0.18 ±<br />
0.011 MPa and E of 0.042 ± 0.004 MPa) than acellular tightly<br />
woven scaffolds (H_A of 0.46 ± 0.049 MPa and E of 0.27 ± 0.017<br />
MPa). Likewise, 21-day constructs based on loosely woven<br />
scaffolds exhibited lower (p < 0.05) mechanical properties (H_A of<br />
0.16 ± 0.006 MPa and E of 0.064 ± 0.004 MPa) than constructs<br />
based on tightly woven scaffolds (H_A of 0.37 ± 0.030 MPa and E of<br />
0.41 ± 0.023 MPa) (Figure 2). As compared to acellular scaffolds,<br />
the 21-day constructs exhibited similar aggregate modulus and<br />
higher (p < 0.05) Young’s modulus.<br />
Effects of Culture Vessel<br />
Aggregate modulus of 21-day constructs based on tightly woven<br />
PCL and cultured in bioreactors was higher (p < 0.05) than that<br />
measured for otherwise similar constructs cultured statically<br />
(Figure 2, Table 2). Construct amounts of DNA, total collagen, and<br />
GAG increased in a manner dependent on time, culture vessel,<br />
and medium composition. DNA and GAG contents were similar in<br />
21-day constructs cultured in bioreactors and statically (Figure<br />
3A and C). Total collagen content was 1.5-fold higher (p < 0.05)<br />
in bioreactors compared to static cultures (Figure 3B; Table 2,<br />
Group B vs. C). Bioreactors yielded more homogeneous tissue<br />
development than static cultures based on qualitative histological<br />
appearance of cross sections (Figure 4, row I). Chondrogenesis<br />
was demonstrated histologically by rounded cells within a hyaline-like<br />
matrix that immunostained strongly and homogeneously<br />
positive for collagen type II. Bioreactors yielded constructs in<br />
which Coll-II immunostaining was more pronounced than static<br />
cultures (Figure 4, row IV). Immunostaining for Col I was minimal<br />
under all conditions tested. The RT-PCR analysis showed that the type of<br />
culture vessel did not affect the temporal expression of mRNAs for<br />
collagen type II (Figure 5A), Sox-9 (Figure 5B), collagen type I (not<br />
shown), and GAPDH (not shown).<br />
Effects of Medium Composition<br />
DNA content was 1.4-fold higher (p < 0.05) in 21-day constructs<br />
cultured in DM1 compared to DM2 (Figure 3A, Table 2, Group B<br />
vs. D). Also, total collagen content was 1.8-fold higher (p < 0.05)<br />
in 21-day constructs cultured in DM1 compared to DM2 (Figure<br />
3B). Conversely, GAG content was lower (42% as high, p < 0.05)<br />
in 21-day constructs cultured in DM1 compared to DM2 (Figure<br />
3C). Likewise, the GAG/DNA ratio was lower (30% as high, p
Table 2. Mechanical and Biochemical Properties of hMSC-PCL Constructs after Short-Term Culture.<br />
Parameter Culture<br />
time<br />
(days)<br />
Aggregate Modulus<br />
(H , MPa, n = 5-6)<br />
A<br />
Young’s Modulus<br />
(E, MPa, n = 5-6)<br />
DNA<br />
(μg/construct, n = 3-4)<br />
Collagen<br />
(μg/construct, n = 3-4)<br />
Collagen per DNA<br />
(mg/mg, n = 3-4)<br />
Glycosaminoglycans<br />
(GAG, μg/construct, n<br />
= 3-4)<br />
GAG per DNA<br />
(mg/mg, n = 3-4)<br />
Construct wet weight<br />
(mg/construct, n =<br />
3-4)<br />
Group A Group B Group C Group D Group E<br />
Loosely woven<br />
PCL Static, in<br />
DM1 AVG ± SEM<br />
Tightly woven<br />
PCL Static, in<br />
In Vitro Generation of Mechanically Functional Cartilage Grafts . . .<br />
[Table 2: AVG ± SEM values of construct mechanical moduli and biochemical contents at culture<br />
days 1, 7, 14, and 21 for tightly woven PCL scaffolds cultured in bioreactors in DM1, statically<br />
in DM1, statically in DM2, and statically in CM (numeric entries not recoverable from extraction).]<br />
Static = culture in petri dish; Bioreactor = culture in gas-permeable loop with slow, bidirectional oscillation; DM1 = differentiation medium #1; DM2 = differentiation<br />
medium #2; CM = control medium; n = number of samples tested; NM = not measured.<br />
Multiway ANOVA for Groups A-C for the culture time of 21 days showed significant effects of scaffold structure and culture vessel.<br />
Multiway ANOVA for Groups A-D for culture times of 1-21 days showed significant effects of time, culture vessel, and culture medium composition.<br />
ᵃSignificant effect due to scaffold structure.<br />
ᵇSignificant effect due to culture vessel.<br />
ᶜSignificant effect due to presence of FBS in culture medium.<br />
ᵈSignificant effect due to presence of chondrogenic additives in culture medium.<br />
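The footnote markers above reflect multiway ANOVA over scaffold structure, culture vessel, time, and medium composition. As a minimal illustration of how a single factor's effect is tested, the sketch below computes a one-way ANOVA F-statistic in plain Python; this is a simplified special case of the paper's multiway analysis, and the group values are hypothetical, not Table 2 entries.

```python
# One-way ANOVA F-statistic from raw group data (illustrative only; the
# study itself used multiway ANOVA across several factors).

def f_oneway(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical GAG measurements (ug/sample) for two culture vessels:
F, df_b, df_w = f_oneway([100, 108, 112, 105], [130, 141, 138, 135])
print(round(F, 2), df_b, df_w)  # a large F indicates a significant vessel effect
```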
positive matrix staining for GAG and Coll-II, Figure 4, rows III and<br />
IV). Culture in DM2 yielded earlier expression of collagen type<br />
II mRNA (Figure 5A) such that this biomarker was present by<br />
day 7 in contrast to DM1. Constructs cultured in CM contained<br />
substantially lower amounts of collagen and GAG compared to<br />
DM1 (Table 2, Group B vs. E). Chondrogenic differentiation was<br />
virtually absent in constructs cultured in CM with respect to<br />
rounded cell shape, matrix staining for GAG and collagen type II,<br />
and measured GAG and collagen contents (Figures 3 and 4).<br />
[Figure 3 bar charts: DNA (μg/sample), collagen (μg/sample), and GAG (μg/sample) at days<br />
1, 7, 14, and 21 for the Static dish DM1, Bioreactor DM1, Static dish DM2, and Static CM<br />
groups, with significance markers a, b, and c as defined in the caption.]<br />
Figure 3. Amounts of (A) DNA, (B) total collagen, and (C)<br />
glycosaminoglycans (GAG) in constructs produced from tightly<br />
woven scaffolds and hMSC cultured for up to 21 days, statically<br />
or in bioreactors in DM1, statically in DM2, and statically<br />
in CM. ᵃSignificant difference due to type of culture vessel;<br />
ᵇsignificant difference due to serum; ᶜsignificant difference due to<br />
chondrogenic additives.<br />
Figure 4. Histological appearance of constructs produced from<br />
tightly woven scaffolds and hMSC cultured for 21 days statically<br />
in DM1 (column 1), in bioreactors in DM1 (column 2), statically<br />
in DM2 (column 3) or statically in CM (column 4). Rows I-III:<br />
full cross-section (I) or en-face (II-III) sections stained with<br />
safranin-O/fast green (GAG appears orange-red, cell nuclei<br />
black, and PCL scaffold white); Row IV: en-face sections double<br />
immunostained for collagen types I and II (Coll-II appears green,<br />
Coll-I was stained red and was not seen, DAPI-counterstained cell<br />
nuclei appear blue). Scale bars: 200 μm (Rows I and II); 20 μm<br />
(Row III); 100 μm (Row IV).<br />
Figure 5. Electrophoresis gels for RT-PCR products of collagen<br />
type II and of Sox-9. Lane 1 = day 0 hMSCs; Lanes 2 and 3 = 7 days<br />
and 21 days static culture in DM1; Lanes 4 and 5 = 7 days and 21<br />
days bioreactor culture in DM1; Lanes 6 and 7 = 7 days and 21 days<br />
static culture in DM2; Lane 8 = control for DNA contamination.<br />
The DNA ladder shown is TrackIt™ 100 bp.<br />
Discussion<br />
The findings of this study demonstrate the ability to produce<br />
functional tissue engineered cartilage starting from hMSC and a<br />
tightly woven PCL scaffold within 21 days in vitro. Importantly,<br />
aggregate and Young’s moduli of hMSC-PCL constructs cultured<br />
statically and in bioreactors (Table 2, Groups B and C) approached<br />
the values reported for normal articular cartilage (H_A of 0.1-2.0<br />
MPa; E of 0.4-0.8 MPa) [54]-[56]. Young’s modulus was higher<br />
for 21-day constructs than initial acellular scaffolds, which may<br />
be due to accumulation of cell-derived cartilaginous extracellular<br />
matrix within the 3D-woven scaffold and associated increase<br />
in shear modulus. These effects would be expected to reduce<br />
relative PCL yarn movement and cross-sectional shape distortion
during compressive testing and have a more pronounced effect<br />
during mechanical testing in the unconfined configuration (i.e.,<br />
where scaffolds are not laterally constrained) than the confined<br />
configuration, thereby affecting E more than H_A. Although short-term<br />
maintenance of mechanical properties of constructs and<br />
scaffolds was demonstrated, further studies of constructs and<br />
acellular scaffolds are warranted to assess mechanical properties<br />
over longer time periods. Long-term maintenance can be<br />
reasonably expected due to the slow biodegradation of PCL [23],<br />
[24] in concert with continued accumulation of cell-derived<br />
cartilaginous matrix.<br />
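The contrast between confined and unconfined testing can be made concrete with the standard isotropic linear-elasticity relation between the aggregate (confined-compression) modulus and Young's modulus; this is a simplification of the biphasic framework cited in the paper, and the numeric values below are illustrative choices within the reported cartilage range, not study data.

```python
# Illustrative sketch (not the paper's biphasic analysis): for an isotropic,
# linearly elastic solid, the confined-compression (aggregate) modulus is
#     H_A = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu)),
# so H_A approaches E as Poisson's ratio nu approaches zero, and lateral
# constraint matters more as nu grows.

def aggregate_modulus(E, nu):
    """Confined-compression modulus from Young's modulus E and Poisson's ratio nu."""
    return E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))

E = 0.6    # MPa, within the 0.4-0.8 MPa range cited for normal cartilage
nu = 0.15  # hypothetical low Poisson's ratio, the order reported for cartilage
print(aggregate_modulus(E, nu))  # slightly above E: confined is stiffer than unconfined
```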
Efficient hMSC seeding and cartilaginous matrix deposition<br />
were observed for loosely and tightly woven PCL scaffolds and<br />
can be attributed to the combination of Matrigel®-enhanced cell<br />
entrapment [25] and large, homogeneously distributed pores<br />
in the scaffold (i.e., in-plane pores of 250-1000 μm, Figure 1).<br />
Consistently, scaffolds with 250-500 μm pores were found to<br />
enhance GAG secretion as compared to smaller pores [57],<br />
and 380-405 μm pores were found suitable for chondrocyte<br />
proliferation [58]. Culture in an oscillating bioreactor yielded<br />
constructs with higher aggregate modulus, higher total collagen<br />
content, and more homogeneous tissue development, especially<br />
at the upper and lower surfaces, than otherwise identical static<br />
cultures (Figures 2-4). Constructs from the oscillating bioreactor<br />
exhibited strongly positive immunostaining for Coll-II and<br />
virtually negative staining for Coll-I, although collagen type II<br />
as a fraction of the total collagen was not measured explicitly.<br />
We previously showed that constructs from rotating bioreactors<br />
contained more total and type II collagen than statically cultured<br />
controls [12], and in these constructs, Coll-II represented 92-<br />
99% of the total collagen [51], [59]. Consistently, others showed<br />
hydrodynamic shear increased total collagen, type II collagen,<br />
and tensile modulus of multilayered chondrocyte sheets [60],<br />
and bidirectional perfusion yielded more homogeneous tissue<br />
engineered cartilage than unidirectional perfusion [35], [36].<br />
Culture medium composition significantly impacted construct<br />
amounts of DNA and GAG, intensity of safranin-O staining and<br />
Coll- II immunostaining, and the temporal profile of chondrogenic<br />
differentiation by hMSC. Specifically, chondrogenic additives<br />
without serum (DM2) yielded constructs with higher GAG,<br />
higher GAG/DNA ratio, earlier expression of collagen II mRNA,<br />
and more homogeneous immunostaining for Coll-II as compared<br />
to chondrogenic additives with serum (DM1) (Figures 3-5).<br />
Consistently, serum was recently reported to inhibit chondrogenic<br />
differentiation of synoviocytes [38], [39], and this finding was<br />
attributed to an enhanced proliferation of the cells that hindered<br />
their differentiation capacity. Interestingly, the GAG/DNA ratio<br />
measured in the present study at culture day 21 exceeded the<br />
GAG/DNA ratios previously reported after prolonged in vitro<br />
cultivation [17], [39], [61].<br />
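The GAG/DNA ratio discussed here is a per-construct normalization; when only group means and SEMs are available, a first-order (delta-method) propagation gives an approximate SEM for the ratio. A minimal sketch with hypothetical values, not Table 2 entries:

```python
import math

def ratio_with_sem(gag_mean, gag_sem, dna_mean, dna_sem):
    """GAG/DNA ratio with first-order (delta-method) SEM propagation.

    Assumes the GAG and DNA measurements are independent, which is an
    approximation; per-sample ratios are preferable when available.
    """
    r = gag_mean / dna_mean
    rel_var = (gag_sem / gag_mean) ** 2 + (dna_sem / dna_mean) ** 2
    return r, r * math.sqrt(rel_var)

# Hypothetical day-21 values (ug/sample):
r, sem = ratio_with_sem(110.0, 4.6, 9.2, 0.3)
print(round(r, 1), round(sem, 2))
```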
Conclusion<br />
In this work, a 3D-woven PCL scaffold combined with adult human<br />
stem cells yielded mechanically functional tissue engineered<br />
cartilage constructs within only 21 days in vitro. Scaffold structure<br />
significantly impacted construct aggregate and Young’s moduli<br />
(i.e., tightly woven scaffolds yielded constructs with higher moduli<br />
than more loosely woven scaffolds). Importantly, compressive<br />
moduli of 21-day constructs based on tightly woven scaffolds<br />
approached values reported for normal articular cartilage.<br />
Production of constructs with robust mechanical properties was<br />
accelerated by culture in oscillating bioreactors as compared to<br />
static dishes (i.e., the bioreactor yielded constructs with higher H_A,<br />
higher total collagen content, more immunostaining for collagen<br />
type II, and more spatially homogeneous tissue development).<br />
Chondrogenic differentiation of hMSC was observed only if<br />
culture medium was supplemented with chondrogenic additives<br />
(TGFβ, ITS+ Premix), and was accelerated if this medium did not<br />
contain serum (i.e., lack of serum yielded constructs with higher<br />
GAG content, higher GAG/DNA ratio, earlier expression of collagen<br />
type II mRNA, and more pronounced matrix staining for GAG and<br />
collagen type II).<br />
Acknowledgments<br />
This work was supported by the Academy of Finland and the<br />
Finnish Cultural Foundation (PKV), NIH AR055414-01 (LEF),<br />
NASA NNJ04HC72G (LEF), NIH AR050208 (JFW), NIH P01<br />
AR053622 (AIC, JFW), and NIH AR48852 (FG). We thank EMS/<br />
Griltech (Domat, Switzerland) for donating the multifilament PCL<br />
yarn, A. Gallant for expert help with the oscillating bioreactor, and<br />
C.M. Weaver for help with manuscript preparation. One of the<br />
authors (FG) owns equity in Cytex Therapeutics, Inc. The other<br />
authors have no known conflicts of interest associated with this<br />
publication.<br />
Notes<br />
Figures with essential color discrimination, Figures 1, 2, and 4 in<br />
this article, are difficult to interpret in black and white. The full<br />
color images can be found in the on-line version, at doi:10.1016/j.<br />
biomaterials.2009.11.092.<br />
References<br />
[1] Praemer, A., S. Furner, D.P. Rice, Musculoskeletal Conditions<br />
in the United States, American Academy of Orthopaedic Surgeons,<br />
Rosemont, IL, 1999.<br />
[2] Hunziker, E.B., “The Elusive Path to Cartilage Regeneration,”<br />
Advanced Materials, Vol. 21, 2009, pp. 3419-24.<br />
[3] Brittberg, M., A. Lindahl, A. Nilsson, C. Ohlsson, O. Isaksson, L.<br />
Peterson, “Treatment of Deep Cartilage Defects in the Knee with<br />
Autologous Chondrocyte Transplantation,” New England Journal of<br />
Medicine, Vol. 331, 1994, pp. 889-95.<br />
[4] Freed, L.E., J.C. Marquis, A. Nohria, J. Emmanual, A.G. Mikos, R.S.<br />
Langer, “Neocartilage Formation in Vitro and in Vivo Using Cells<br />
Cultured on Synthetic Biodegradable Polymers,” J. Biomed. Mater.<br />
Res., Vol. 27, 1993, pp. 11-23.<br />
[5] Byers, B.A., R.L. Mauck, I.E. Chiang, R.S. Tuan, “Transient Exposure<br />
to Transforming Growth Factor Beta 3 Under Serum-Free Conditions<br />
Enhances the Biomechanical and Biochemical Maturation of Tissue-<br />
Engineered Cartilage,” Tissue Engineering Part A, Vol. 14, 2008, pp.<br />
1821-34.<br />
[6] Lee, C.R., A.J. Grodzinsky, H.P. Hsu, S.D. Martin, M. Spector, “Effects<br />
of Harvest and Selected Cartilage Repair Procedures on the Physical<br />
and Biochemical Properties of Articular Cartilage in the Canine<br />
Knee,” J. Orthop. Res., Vol. 18, 2000, pp. 790-9.<br />
[7] Caplan, A.I., “Tissue Engineering Designs for the Future: New Logics,<br />
Old Molecules,” Tissue Engineering, Vol. 6, 2000, pp. 1-8.<br />
[8] Martin, I., V.P. Shastri, R.F. Padera, J. Yang, A.J. Mackay, R.S. Langer, et<br />
al., “Selective Differentiation of Mammalian Bone Marrow Stromal<br />
Cells Cultured on Three-Dimensional Polymer Foams,” J. Biomed.<br />
Mater. Res., Vol. 55, 2001, pp. 229-35.<br />
[9] Lennon, D., S. Haynesworth, S. Bruder, N. Jaiswal, A.I. Caplan,<br />
“Human and Animal Mesenchymal Progenitor Cells from Bone<br />
Marrow: Identification of Serum for Optimal Selection and<br />
Proliferation,” In Vitro Cell Dev. Biol. Anim., Vol. 32, 1996, pp. 602-11.<br />
[10] Hunziker, E.B., “Articular Cartilage Repair: Basic Science and Clinical<br />
Progress. A Review of the Current Status and Prospects,” Osteoarthr.<br />
Cartil., Vol. 10, 2002, pp. 432-63.<br />
[11] Freed, L.E., R.S. Langer, I. Martin, N. Pellis, G. Vunjak-Novakovic,<br />
“Tissue Engineering of Cartilage in Space,” Proceedings of the<br />
National Academy of Science, U.S.A., Vol. 94, 1997, pp. 13885-90.<br />
[12] Pei, M., L.A. Solchaga, J. Seidel, L. Zeng, G. Vunjak-Novakovic,<br />
A.I. Caplan, et al., “Bioreactors Mediate the Effectiveness of Tissue<br />
Engineering Scaffolds,” FASEB J, Vol. 16, 2002, pp. 1691-4.<br />
[13] Freed, L.E., G.C. Engelmayr, Jr., J.T. Borenstein, F.T. Moutos, F. Guilak,<br />
“Advanced Material Strategies for Tissue Engineering Scaffolds,”<br />
Advanced Materials, Vol. 21, 2009, pp. 3410-8.<br />
[14] Wendt, D., S.A. Riboldi, M. Cioffi, I. Martin, “Potential and Bottlenecks<br />
of Bioreactors in 3D Cell Culture and Tissue Manufacturing,”<br />
Advanced Materials, Vol. 21, 2009, pp. 3352-67.<br />
[15] LeRoux, M.A., F. Guilak, L.A. Setton, “Compressive and Shear<br />
Properties of Alginate Gel: Effects of Sodium Ions and Alginate<br />
Concentration,” J. Biomed. Mater. Res., Vol. 47, 1999, pp. 46-53.<br />
[16] Mauck, R.L., M.A. Soltz, C.C.B. Wang, D.D. Wong, P.G. Chao, W.B.<br />
Valhmu, et al., “Functional Tissue Engineering of Articular Cartilage<br />
Through Dynamic Loading of Chondrocyte-Seeded Agarose Gels,”<br />
Journal of Biomechanical Engineering, Vol. 122, 2000, pp. 252-60.<br />
[17] Marolt, D., A. Augst, L.E. Freed, C. Vepari, R. Fajardo, N. Patel, et al.,<br />
“Bone and Cartilage Tissue Constructs Grown Using Human<br />
Bone Marrow Stromal Cells, Silk Scaffolds and Rotating Bioreactors,”<br />
Biomaterials, Vol. 27, 2006, pp. 6138-49.<br />
[18] Mauck, R.L., X. Yuan, R.S. Tuan, “Chondrogenic Differentiation and<br />
Functional Maturation of Bovine Mesenchymal Stem Cells in Long-<br />
Term Agarose Culture,” Osteoarthr. Cartil., Vol. 14, 2006, pp. 179-89.<br />
[19] Moutos, F.T., L.E. Freed, F. Guilak, “A Biomimetic Three-Dimensional<br />
Woven Composite Scaffold for Functional Tissue Engineering of<br />
Cartilage,” Nat. Mater., Vol. 6, 2007, pp. 162-7.<br />
[20] Pitt, C.G., “Poly-Epsilon-Caprolactone and Its Copolymers,”<br />
Biodegradable Polymers as Drug Delivery Systems, M. Chasin and<br />
R.S. Langer, editors, Marcel Dekker, New York, 1990, pp. 71-120.<br />
[21] Sinha, V.R., K. Bansal, R. Kaushik, R. Kumria, A. Trehan, “Poly-<br />
Epsilon-Caprolactone Microspheres and Nanospheres: An<br />
Overview,” Int. J. Pharm., Vol. 278, 2004, pp. 1-23.<br />
[22] Li, W.J., K.G. Danielson, P.G. Alexander, R.S. Tuan, “Biological<br />
Response of Chondrocytes Cultured in Three-Dimensional<br />
Nanofibrous Poly(Epsilon-Caprolactone) Scaffolds,” J. Biomed. Mater. Res.,<br />
Vol. 67A, 2003, pp. 1105-14.<br />
[23] Huang, M.H., S.M. Li, D.W. Hutmacher, J. Coudane, M. Vert,<br />
“Degradation Characteristics of Poly(Epsilon-Caprolactone)-Based<br />
Copolymers and Blends,” Journal of Applied Polymer Science, Vol. 102,<br />
2006, pp. 1681-7.<br />
[24] Sun, H., L. Mei, C. Song, X. Cui, P. Wang, “The in Vivo Degradation,<br />
Absorption and Excretion of PCL-Based Implant,” Biomaterials, Vol.<br />
27, 2006, pp. 1735-40.<br />
[25] Radisic, M., M. Euloth, L. Yang, R.S. Langer, L.E. Freed, G. Vunjak-<br />
Novakovic, “High Density Seeding of Myocyte Cells for Tissue<br />
Engineering,” Biotechnol. Bioeng., Vol. 82, 2003, pp. 403-14.<br />
[26] Watt, F.M., Dudhia J., “Prolonged Expression of Differentiated<br />
Phenotype by Chondrocytes Cultured at Low Density on a<br />
Composite Substrate of Collagen and Agarose that Restricts Cell<br />
Spreading,” Differentiation, Vol. 38, 1988, pp. 140-7.<br />
[27] Cheng, M., M. Moretti, G.C. Engelmayr, L.E. Freed, “Insulin-Like<br />
Growth Factor-I and Slow, Bi-Directional Perfusion Enhance the<br />
Formation of Tissue-Engineered Cardiac Grafts,” Tissue Engineering<br />
Part A, Vol. 15, 2009, pp. 645-53.<br />
[28] Vunjak-Novakovic, G., I. Martin, B. Obradovic, S. Treppo, A.J.<br />
Grodzinsky, R.S. Langer, et al., “Bioreactor Cultivation Conditions<br />
Modulate the Composition and Mechanical Properties of Tissue<br />
Engineered Cartilage,” J. Orthop. Res., Vol. 17, 1999, pp. 130–8.<br />
[29] Pazzano, D., K.A. Mercier, J.M. Moran, S.S. Fong, D.D. DiBiasio, J.X.<br />
Rulfs, et al., “Comparison of Chondrogenesis in Static and Perfused<br />
Bioreactor Culture,” Biotechnol. Prog., Vol. 16, 2000, pp. 893-6.<br />
[30] Davisson, T.H., R.L. Sah, A.R. Ratcliffe, “Perfusion Increases Cell<br />
Content and Matrix Synthesis in Chondrocyte Three-Dimensional<br />
Cultures,” Tissue Engineering, Vol. 8, 2002, pp. 807-16.<br />
[31] Darling, E.M., K.A. Athanasiou, “Articular Cartilage Bioreactors and<br />
Bioprocesses,” Tissue Engineering, Vol. 9, 2003, pp. 9-26.<br />
[32] Raimondi, M.T., M. Moretti, M. Cioffi, C. Giordano, F. Boschetti, K.<br />
Lagana, et al., “<strong>The</strong> Effect of Hydrodynamic Shear on 3D Engineered<br />
Chondrocyte Systems Subject to Direct Perfusion,” Biorheology, Vol.<br />
43, 2006, pp. 215-22.<br />
[33] Moretti, M., L.E. Freed, R.F. Padera, K. Lagana, F. Boschetti, M.T.<br />
Raimondi, “An Integrated Experimental-Computational Approach<br />
for the Study of Engineered Cartilage Constructs Subjected to<br />
Combined Regimens of Hydrostatic Pressure and Interstitial<br />
Perfusion,” Biomed. Mater. Eng., Vol. 18, 2008, pp. 273-8.<br />
[34] Concaro, S., F. Gustavson, P. Gatenholm, “Bioreactors for Tissue<br />
Engineering of Cartilage,” Adv. Biochem. Eng. Biotechnol., Vol. 112,<br />
2009, pp. 125-43.<br />
[35] Wendt, D., A. Marsano, M. Jakob, M. Heberer, I. Martin, “Oscillating<br />
Perfusion of Cell Suspensions Through Three-Dimensional Scaffolds<br />
Enhances Cell Seeding Efficiency and Uniformity,” Biotechnol.<br />
Bioeng., Vol. 84, 2003, pp. 205-14.<br />
[36] Mahmoudifar, N., P.M. Doran, “Tissue Engineering of Human<br />
Cartilage in Bioreactors Using Single and Composite Cell-Seeded<br />
Scaffolds,” Biotechnol. Bioeng., Vol. 91, 2005, pp. 338-55.<br />
[37] Augst, A., D. Marolt, L.E. Freed, C. Vepari, L. Meinel, M. Farley, et<br />
al., “Effects of Chondrogenic and Osteogenic Regulatory Factors on<br />
Composite Constructs Grown Using Human Mesenchymal Stem<br />
Cells, Silk Scaffolds and Bioreactors,” J. R. Soc. Interface., Vol. 5, 2008,<br />
pp. 929-39.<br />
[38] Bilgen, B., E. Orsini, R.K. Aaron, D.M. Ciombor, “FBS Suppresses<br />
TGF-Beta1-Induced Chondrogenesis in Synoviocyte Pellet Cultures<br />
While Dexamethasone and Dynamic Stimuli Are Beneficial,” J. Tissue<br />
Eng. Regen. Med., Vol. 1, 2007, pp. 436-42.<br />
[39] Lee, S., J.H. Kim, C.H. Jo, S.C. Seong, J.C. Lee, M.C. Lee, “Effect of Serum<br />
and Growth Factors on Chondrogenic Differentiation of Synovium-<br />
Derived Stromal Cells,” Tissue Engineering Part A, 2009.
[40] Bruckner, P., I. Horler, M. Mendler, Y. Houze, K.H. Winterhalter, S. Bender,<br />
et al., “Induction and Prevention of Chondrocyte Hypertrophy in<br />
Culture,” Journal of Cell Biology, Vol. 109, 1989, pp. 2537-45.<br />
[41] Van De Velde, K., P. Kiekens, “Biopolymers: Overview of Several<br />
Properties and Consequences on their Applications,” Polymer Test,<br />
Vol. 21, 2002, pp. 433-42.<br />
[42] Haynesworth, S.E., J. Goshima, V.M. Goldberg, A.I. Caplan,<br />
“Characterization of Cells with Osteogenic Potential from Human<br />
Bone Marrow,” Bone, Vol. 13, 1992, pp. 81-8.<br />
[43] Solchaga, L.A., K. Penick, J.D. Porter, V.M. Goldberg, A.I. Caplan, J.F.<br />
Welter, “FGF-2 Enhances the Mitotic and Chondrogenic Potentials of<br />
Human Adult Bone Marrow-Derived Mesenchymal Stem Cells,”<br />
Journal of Cell Physiology, Vol. 203, 2005, pp. 398-409.<br />
[44] Penick, K.J., L.A. Solchaga, J.F. Welter, “High-Throughput Aggregate<br />
Culture System to Assess the Chondrogenic Potential of<br />
Mesenchymal Stem Cells,” Biotechniques, Vol. 39, 2005, pp. 687-91.<br />
[45] Pittenger, M.F., A.M. Mackay, S.C. Beck, R.K. Jaiswal, R. Douglas, J.D.<br />
Mosca, et al., “Multilineage Potential of Adult Human Mesenchymal<br />
Stem Cells,” Science, Vol. 284, 1999, pp. 143-7.<br />
[46] Mow, V.C., S.C. Kuei, W.M. Lai, C.G. Armstrong, “Biphasic Creep and<br />
Stress Relaxation of Articular Cartilage in Compression: <strong>The</strong>ory and<br />
Experiments,” Journal of Biomechanical Engineering, Vol. 102, 1980,<br />
pp. 73-84.<br />
[47] Cohen, B., W.M. Lai, V.C. Mow, “A Transversely Isotropic<br />
Biphasic Model for Unconfined Compression of Growth Plate and<br />
Chondroepiphysis,” Journal of Biomechanical Engineering, Vol. 120,<br />
1998, pp. 491-6.<br />
[48] Elliott, D.M., F. Guilak, T.P. Vail, J.Y. Wang, L.A. Setton, “Tensile<br />
Properties of Articular Cartilage Are Altered by Meniscectomy in a<br />
Canine Model of Osteoarthritis,” J. Orthop. Res., Vol. 17, 1999, pp. 503-8.<br />
[49] Awad, H.A., M.Q. Wickham, H.A. Leddy, J.M. Gimble, F. Guilak,<br />
“Chondrogenic Differentiation of Adipose-Derived Adult Stem Cells<br />
in Agarose, Alginate, and Gelatin Scaffolds,” Biomaterials, Vol. 25,<br />
2004, pp. 3211-22.<br />
[50] Hollander, A.P., T.F. Heathfield, C. Webber, Y. Iwata, R. Bourne, C.<br />
Rorabeck, et al., “Increased Damage to Type II Collagen in<br />
Osteoarthritic Articular Cartilage Detected by a New Immunoassay,”<br />
J. Clin. Invest., Vol. 93, 1994, pp. 1722-32.<br />
[51] Freed, L.E., A.P. Hollander, I. Martin, J.R. Barry, R.S. Langer, G. Vunjak-<br />
Novakovic, “Chondrogenesis in a Cell-Polymer-Bioreactor System,”<br />
Exp. Cell Res., Vol. 240, 1998, pp. 58-65.<br />
[52] Robins, J.C., N. Akeno, A. Mukherjee, R.R. Dalal, B.J. Aronow, P.<br />
Koopman, et al., “Hypoxia Induces Chondrocyte-Specific Gene<br />
Expression in Mesenchymal Cells in Association with Transcriptional<br />
Activation of Sox9,” Bone, Vol. 37, 2005, pp. 313-22.<br />
[53] Barry, F., R.E. Boynton, B. Liu, J.M. Murphy, “Chondrogenic<br />
Differentiation of Mesenchymal Stem Cells from Bone Marrow:<br />
Differentiation-Dependent Gene Expression of Matrix Components,”<br />
Exp. Cell Res., Vol. 268, 2001, pp. 189-200.<br />
[54] Mow, V.C., X.E. Guo, “Mechano-Electrochemical Properties of<br />
Articular Cartilage: Their Inhomogeneities and Anisotropies,” Annu.<br />
Rev. Biomed. Eng., Vol. 4, 2002, pp. 175-209.<br />
[55] Athanasiou, K.A., M.P. Rosenwasser, J.A. Buckwalter, T.I. Malinin, V.C.<br />
Mow, “Interspecies Comparisons of in situ Mechanical Properties of<br />
Distal Femoral Cartilage,” J. Orthop. Res., Vol. 9, 1991, pp. 330-40.<br />
[56] Jurvelin, J.S., M.D. Buschmann, E.B. Hunziker, “Optical and<br />
Mechanical Determination of Poisson’s Ratio of Adult Bovine<br />
Humeral Articular Cartilage,” J. Biomech., Vol. 30, 1997, pp. 235-41.<br />
[57] Lien, S.M., Ko L.Y., Huang T.J., “Effect of Pore Size on ECM Secretion<br />
and Cell Growth in Gelatin Scaffold for Articular Cartilage Tissue<br />
Engineering,” Acta Biomater, Vol. 5, 2009, pp. 670-9.<br />
[58] Oh, S.H., I.K. Park, J.M. Kim, J.H. Lee, “In Vitro and in Vivo<br />
Characteristics of PCL Scaffolds with Pore Size Gradient Fabricated<br />
by a Centrifugation Method,” Biomaterials, Vol. 28, 2007, pp. 1664-71.<br />
[59] Riesle, J., A.P. Hollander, R.S. Langer, L.E. Freed, G. Vunjak-Novakovic,<br />
“Collagen in Tissue-Engineered Cartilage: Types, Structure and<br />
Crosslinks,” Journal of Cell Biochemistry, Vol. 71, 1998, pp. 313-27.<br />
[60] Gemmiti, C.V., R.E. Guldberg, “Fluid Flow Increases Type II Collagen<br />
Deposition and Tensile Mechanical Properties in Bioreactor-Grown<br />
Tissue-Engineered Cartilage,” Tissue Engineering, Vol. 12, 2006,<br />
pp. 469-79.<br />
[61] Hu, J., K. Feng, X. Liu, P.X. Ma, “Chondrogenic and Osteogenic<br />
Differentiations of Human Bone Marrow-Derived Mesenchymal<br />
Stem Cells on a Nanofibrous Scaffold with Designed Pore Network,”<br />
Biomaterials, Vol. 30, 2009, pp. 5061-7.<br />
Piia K. Valonen is currently a Postdoctoral Researcher at the University of Eastern Finland. Her research is<br />
focused on Alzheimer’s disease, concentrating on how different cell types (e.g., mesenchymal stem cells) differ<br />
between healthy and AD patients. She is also involved in a Clean Room Training Center project. In November<br />
2007, she joined Dr. Lisa Freed’s group at the Harvard-MIT Division of Health Sciences & Technology as a<br />
Postdoctoral Associate for 13 months. Dr. Valonen holds M.Sc. and Ph.D. degrees from the University of Kuopio<br />
(now the University of Eastern Finland).<br />
Franklin T. Moutos is currently a Research Scholar at Duke University Medical Center and is working to develop<br />
new technologies for the treatment of degenerative joint diseases. He has broad experience in the application of<br />
textile and biomedical engineering principles to the development of biomaterial scaffolds for tissue regeneration.<br />
Prior to completing his graduate studies, he spent three years in a startup company that designed and produced<br />
high-performance textiles and composite materials for aerospace, industrial, and biomedical applications. Dr.<br />
Moutos received B.S. and M.S. degrees in Textile Material Science from North Carolina State University and a Ph.D.<br />
in Biomedical Engineering from Duke University.<br />
Akihiko Kusanagi is an Assistant Professor at Shiga University of Medical Science in Japan. Previously, he was a<br />
Postdoctoral Fellow at the Langer Research Laboratory, MIT (2006-2010). He has extensive background in cartilage<br />
tissue engineering and stem cell research, including human embryonic stem cells, iPS cells, and mesenchymal<br />
stem cells. Prior to joining MIT, he was Director of Research and Development at the Histogenics Corporation, a<br />
leading company for cartilage tissue engineering for clinical applications. He was in charge of several orthopedic-based<br />
tissue engineering technologies, and one product, NeoCart, is in an FDA phase III clinical trial in the U.S. Dr.<br />
Kusanagi earned B.S. and D.V.M. degrees from Azabu University and a Ph.D. in Veterinary Medicine from University<br />
of Tokyo.<br />
Matteo G. Moretti is head of the Cell and Tissue Engineering Laboratory at IRCCS Galeazzi Orthopaedic<br />
Institute, Milan, Italy. His main research interests are osteochondral and cardiovascular tissue engineering and<br />
multiscale bioreactor systems aimed at developing microfluidic and traditional tissue bioreactor technologies as<br />
a key to more viable and accessible tissue and cell therapies. He holds B.Eng. and M.Sc. degrees in Bioengineering<br />
from Politecnico di Milano and Trinity College Dublin, respectively. He carried out part of his doctoral studies in<br />
Prof. I. Martin’s Lab at Basel University and obtained a Ph.D. from Politecnico di Milano. Dr. Moretti then worked as<br />
Postdoctoral Fellow at the Harvard-MIT Division of Health Sciences and Technology, supervised by Dr. Lisa Freed.<br />
Brian O. Diekman is a Ph.D. student in Biomedical Engineering at Duke University. His research investigates<br />
human and mouse stem cell populations for use in cellular therapy and cartilage tissue engineering. He was<br />
awarded a Fulbright Student Grant to perform stem cell research at the Regenerative Medicine Institute in Galway,<br />
Ireland. Mr. Diekman earned a B.S. in Biomedical Engineering from Duke University.<br />
Jean F. Welter is a Research Associate Professor at the Skeletal Research Center, Department of Biology, Case<br />
Western Reserve University, Cleveland, OH. Current research interests include tissue engineering, primarily of<br />
cartilage, but also bone and skin; quality control of mesenchymal stem cells and tissue engineered products,<br />
focusing on nondestructive testing methods for the latter; bioreactor design and development; bone grafting;<br />
and intra-articular drug delivery for OA/RA. He has published 32 papers, 6 book chapters, and has one patent<br />
application pending. Dr. Welter holds a Doktor der gesamten Heilkunde (M.D.) from the School of Medicine,<br />
Leopold Franzens Universität, Innsbruck, Austria, an M.Sc. in Experimental Surgery from McGill University,<br />
Montréal, Québec, Canada, and a Ph.D. in Physiology and Biophysics from Case Western Reserve University,<br />
Cleveland, OH.<br />
Arnold I. Caplan is Professor of Biology and the Director of the Skeletal Research Center at Case Western Reserve<br />
University. His research has involved understanding the development, maturation, and aging and regeneration of<br />
cartilage, bone, skin, and other mesenchymal tissues and pioneering research on mesenchymal stem cells (MSCs).<br />
He and his collaborators have helped define the immunoregulatory and trophic activities of MSCs as manifested<br />
by the secretion of a complex array of bioactive molecules at sites of tissue injury or inflammation. Dr. Caplan<br />
received a B.S. in Chemistry at the Illinois Institute of Technology and a Ph.D. from The Johns Hopkins University<br />
School of Medicine. Dr. Caplan did a Postdoctoral Fellowship in the Department of Anatomy at The Johns Hopkins<br />
University, followed by Postdoctoral Fellowships at Brandeis University.<br />
Farshid Guilak is the Laszlo Ormandy Professor of Orthopaedic Surgery and Director of Orthopaedic Research<br />
at Duke University Medical Center. Dr. Guilak’s research focuses on the study of osteoarthritis, a painful and<br />
debilitating disease of the synovial joints. His laboratory has used a multidisciplinary approach to investigate the<br />
role of biomechanical factors in the onset and progression of osteoarthritis, as well as the development of new<br />
tissue engineering therapies for this disease. His work in this area has focused on the use of adult stem cells in<br />
combination with novel 3D biomaterial scaffolds for the regeneration of articular cartilage to treat osteoarthritis.<br />
He received a B.S. in Biomedical Engineering from Rensselaer Polytechnic Institute, and a Ph.D. in Mechanical<br />
Engineering from Columbia University.<br />
Lisa E. Freed is a Senior Member of the Technical Staff in the Biomedical Engineering Group at <strong>Draper</strong> <strong>Laboratory</strong><br />
and an MIT-Affiliated Research Scientist in the Harvard-MIT Division of Health Sciences and <strong>Technology</strong>. She<br />
has been a Principal Investigator at MIT since 1993 and at <strong>Draper</strong> since 2009. Dr. Freed’s research focuses on<br />
the development and implementation of novel tools for tissue engineering, including biomaterial scaffolds with<br />
tissue-mimetic properties and cell culture bioreactors that apply physiologic biophysical stimuli to accelerate<br />
tissue growth. With collaborators, she has engineered functional tissues that resemble heart muscle and cartilage<br />
(i.e., muscle that withstands tensile loading and propagates electrical signals, and skeletal tissue constructs that<br />
withstand compressive loading). She received an S.B. in Biology from MIT, a Ph.D. in Applied Biological Sciences<br />
from MIT, and an M.D. from Harvard Medical School.<br />
In Vitro Generation of Mechanically Functional Cartilage Grafts . . .<br />
Most vehicles can be refueled repeatedly over the course of their<br />
lifetime, but gas stations don’t exist in space today, nor will they in the<br />
near future, so a satellite is no longer useful once its tank is empty. Draper<br />
engineers may solve this problem by replacing the fuel that runs the<br />
spacecraft’s propulsion system, which keeps the satellite in the correct<br />
position and maneuvers it when necessary, with magnetic forces.<br />
In addition to enabling satellites to stay in place and maneuver<br />
indefinitely, relying on the push from the Earth’s magnetic field would<br />
enable far more frequent orbit changes, which today are made only<br />
when absolutely necessary, in order to conserve fuel.<br />
Taking this approach would shake up the way that satellites have<br />
traditionally been designed. Fuel takes up a significant portion of a<br />
spacecraft’s mass today; that space would be filled by a larger power<br />
system to harness the magnetic forces, which are less powerful than<br />
rocket fuel, leading to slower, though unlimited, orbit changes.<br />
While this approach could be used with any satellite in low Earth orbit,<br />
it would be particularly useful for missions that benefit from repeated<br />
orbit changes, including the Pentagon’s Operationally Responsive<br />
Space Initiative.<br />
Illustration Credit: NASA
General Bang-Bang Control Method for Lorentz<br />
Augmented Orbits<br />
Brett J. Streetman and Mason A. Peck<br />
Copyright © 2009 by Brett Streetman. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.<br />
Abstract<br />
An orbital control framework is developed for the Lorentz augmented orbit. A spacecraft carrying an electrostatic charge moves through<br />
the geomagnetic field. <strong>The</strong> resulting Lorentz force is used in the general control framework to evolve the spacecraft’s orbit. <strong>The</strong> controller<br />
operates with a high degree and order spherical-harmonic magnetic field model by partitioning the space of latitude in a meaningful way.<br />
<strong>The</strong> partitioning reduces the complexity of the problem to a manageable level. A successful maneuver developed within this bang-off control<br />
framework results in a combined orbital plane change and orbit raising. <strong>The</strong> cost of this maneuver is in electrical power. Reductions in the<br />
power usage, at the expense of longer maneuver times, are obtained by using information about local plasma density.<br />
Nomenclature<br />
a = semimajor axis, m<br />
B = Earth’s magnetic field, T<br />
C = capacitance, F<br />
E = specific energy, m²/s²<br />
e = eccentricity<br />
F_L = Lorentz force, N<br />
h = specific angular momentum magnitude, m²/s<br />
h = specific angular momentum vector, m²/s<br />
i = inclination, rad<br />
L = length of cylindrical capacitor, m<br />
n̂ = Earth spin axis unit vector<br />
n_e = electron number density, 1/m³<br />
q = net spacecraft charge, C<br />
q/m = charge-to-mass ratio, C/kg<br />
R = stocking radius, m<br />
r = radial coordinate, m<br />
r = spacecraft position vector, m<br />
r_s = wire sheath radius, m<br />
u = argument of latitude, rad<br />
v = spacecraft velocity, m/s<br />
ε₀ = permittivity of free space, F/m<br />
θ = azimuth angle, rad<br />
μ = gravitational parameter, m³/s²<br />
ν = true anomaly, rad<br />
φ = colatitude, rad<br />
Ω = spacecraft right ascension of the ascending node, rad<br />
ω = argument of perigee, rad<br />
ω_E = Earth’s spin rate, rad/s<br />
Introduction<br />
Propellantless propulsion opens up new possibilities for spacecraft<br />
missions. One form of propellantless propulsion is the Lorentz<br />
augmented orbit (LAO), which was first presented by Peck [1], and<br />
further examined through our later work [2], [3]. A body with net<br />
charge q moving in a magnetic field B experiences a Lorentz force<br />
F_L = q(v − ω_E n̂ × r) × B (1)<br />
where r is measured in an Earth-centered, inertial reference frame.<br />
The velocity correction (−ω_E n̂ × r) is required because the magnetic<br />
field B is constant only in an Earth-fixed frame. The charge-to-mass<br />
ratio q/m of the spacecraft determines the magnitude of the Lorentz<br />
acceleration. <strong>The</strong> direction of this acceleration is fixed by the<br />
velocity of the spacecraft and the magnetic field at the spacecraft<br />
location. Because the charge on the spacecraft can be maintained<br />
solely with electrical power and because the Lorentz force acts<br />
externally, LAO technology represents propellantless propulsion. If<br />
q/m is varied as a control input, an LAO can achieve novel orbits and<br />
enable new missions.<br />
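The acceleration in Eq. (1) is straightforward to evaluate numerically. A minimal sketch, using an axis-aligned dipole as a stand-in for the paper’s IGRF field model (the dipole moment and the sample orbit state below are illustrative assumptions, not values from the paper):

```python
import numpy as np

W_E = 7.2921e-5                      # Earth's spin rate, rad/s
N_HAT = np.array([0.0, 0.0, 1.0])    # Earth spin axis

def lorentz_accel(q_over_m, r, v, B):
    """Eq. (1) per unit mass: a = (q/m) (v - w_E n x r) x B."""
    v_rel = v - W_E * np.cross(N_HAT, r)   # velocity relative to the co-rotating field
    return q_over_m * np.cross(v_rel, B)

def dipole_field(r, m_dip=8.0e15):
    """Axis-aligned dipole model of B in tesla (illustrative; the paper uses
    a 10th-degree-and-order IGRF spherical-harmonic expansion instead)."""
    rmag = np.linalg.norm(r)
    r_hat = r / rmag
    return (m_dip / rmag**3) * (3.0 * np.dot(N_HAT, r_hat) * r_hat - N_HAT)

# Sample state: 600-km circular equatorial orbit, q/m = 0.007 C/kg
r = np.array([6.978e6, 0.0, 0.0])    # m
v = np.array([0.0, 7557.0, 0.0])     # m/s
a = lorentz_accel(0.007, r, v, dipole_field(r))
```

On the equator this dipole field is perpendicular to the orbit plane, so the resulting acceleration is purely radial and of order 10⁻³ m/s², small compared with gravity, which is why the effect is treated as a perturbation.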
Orbit perturbations on charged particles due to the Lorentz force<br />
have been observed in nature. Schaffer and Burns [4], [5] and<br />
Hamilton [6] have studied these effects and derived various<br />
perturbation equations. <strong>The</strong>y have shown that the Lorentz force<br />
acting on micron-sized, naturally charged dust grains creates<br />
significant changes in their orbits. This effect explains features<br />
seen in the ethereal rings of Jupiter. <strong>The</strong> dynamics of these charged<br />
dust grains is well understood in the context of naturally occurring<br />
systems.<br />
Perturbations have also been examined in the context of the natural<br />
charging of Earth-based spacecraft. Early studies include Sehnal<br />
[7] in 1969. Others have tried to explain orbital deviations of<br />
the LAGEOS spacecraft using naturally occurring Lorentz forces,<br />
including work by Vokrouhlicky [8] and later Abdel-Aziz [9]. In<br />
this work, we wish to not only examine the perturbations caused<br />
by the Lorentz force, but to expand the available orbits and add<br />
controlled charging to exploit the Lorentz dynamics for engineering<br />
applications through LAOs.<br />
Many other propellantless propulsion systems have been proposed.<br />
<strong>The</strong> electrodynamic tether system is closely related to LAO. Tethers<br />
force current through a long conductor [10]. <strong>The</strong> current in this<br />
tether moving with the satellite creates a Lorentz force. By using<br />
a current in a wire rather than a space charge on the spacecraft, a<br />
tether can produce forces in directions an LAO spacecraft cannot.<br />
However, the direction of the tether must be controlled, whereas<br />
LAO is attitude-independent. LAO and tethers (along with other<br />
propellantless propulsion systems) differ in where they harvest<br />
energy. LAO does work on a satellite by using the rotation of Earth’s<br />
magnetic field. In a perfect vacuum, an LAO system would require<br />
only enough power to charge up and discharge the spacecraft.<br />
A tether system is essentially a device for converting between<br />
electrical energy and kinetic energy. Solar sails and magnetic sails<br />
harvest energy from the sun to perform propellantless maneuvers.<br />
In addition to LAO, a charged spacecraft architecture has been<br />
proposed for formation flight [11], [12]. <strong>The</strong> Coulomb Spacecraft<br />
Formation (CSF) concept makes use of the coulomb force acting<br />
between two charged satellites, rather than the Lorentz force.<br />
Whereas LAO uses an external force, CSF can produce only forces<br />
internal to the formation. <strong>The</strong> CSF system performs better at higher<br />
altitudes like geosynchronous Earth orbit, whereas LAO produces<br />
more useful formation forces in low Earth orbit (LEO) [13].<br />
Our earlier studies [1]-[3] present the dynamics of LAOs under<br />
simplified conditions, including greatly simplified magnetic field<br />
models. This study expands that analysis to include spherical<br />
harmonic magnetic fields of arbitrary complexity. <strong>The</strong> following<br />
section, “Lorentz Perturbations,” gives an overview of the effect of<br />
the Lorentz force on an orbit, drawn from previous work. <strong>The</strong> next<br />
section, “Geomagnetic Field,” discusses the general properties of<br />
the geomagnetic field. “Space-Vehicle Design” gives new material<br />
on possible LAO system architectures. “Lorentz Augmented Orbit<br />
Maneuvers and Limitations” presents a discussion of the maneuver<br />
limitations introduced by the Lorentz force along with a compelling<br />
mission enabled by the LAO concept. “Lorentz Augmented Orbit<br />
Power Consumption and Plasma-Density-Based Control” considers<br />
the effects of ionospheric conditions on performance and power<br />
usage of an LAO spacecraft.<br />
Lorentz Perturbations<br />
<strong>The</strong> effects of the Lorentz force on an orbit are studied using<br />
perturbation methods. We have previously shown that the change<br />
in orbital energy E of a charged spacecraft affected by an arbitrary<br />
magnetic field is given by [2]<br />
Ė = (q/m) ω_E [(v · n̂)(B · r) − (v · r)(n̂ · B)] (2)<br />
To describe this position r and velocity v, we use an Earth-centered,<br />
inertial reference frame with spherical coordinates: radius r,<br />
colatitude φ, and azimuth angle θ, as shown in Figure 1. In these<br />
coordinates, Eq. (2) can be expressed as<br />
Ė = (q/m) ω_E [(r v · n̂ − cos φ r · v)(B · r̂) + sin φ (r · v)(B · φ̂)] (3)<br />
<strong>The</strong> r^ and φ ^ unit vectors are shown in Figure 1.<br />
Figure 1. Spherical coordinates and unit vectors used.<br />
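Because the Lorentz force is the only perturbation considered, Ė must equal the dot product of the Lorentz acceleration with the inertial velocity. A quick numerical check of Eq. (2) against that identity, with randomly drawn (illustrative) state and field vectors:

```python
import numpy as np

W_E = 7.2921e-5
N_HAT = np.array([0.0, 0.0, 1.0])

def edot_eq2(q_over_m, r, v, B):
    """Eq. (2): Edot = (q/m) w_E [(v.n)(B.r) - (v.r)(n.B)]."""
    return q_over_m * W_E * ((v @ N_HAT) * (B @ r) - (v @ r) * (N_HAT @ B))

def edot_power(q_over_m, r, v, B):
    """Specific-energy rate computed directly as perturbing power, a_L . v."""
    a_L = q_over_m * np.cross(v - W_E * np.cross(N_HAT, r), B)
    return a_L @ v

rng = np.random.default_rng(1)
for _ in range(100):
    r = rng.normal(size=3) * 7.0e6    # m
    v = rng.normal(size=3) * 7.5e3    # m/s
    B = rng.normal(size=3) * 3.0e-5   # T
    assert np.isclose(edot_eq2(0.01, r, v, B), edot_power(0.01, r, v, B))
```

The two expressions agree to rounding error for arbitrary vectors, since Eq. (2) follows from a vector identity applied to the triple product in Eq. (1).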
Change in vector angular momentum is also found from<br />
perturbation methods [2]:<br />
ḣ = (q/m)(B · r)v − (q/m)(r · v)B − (q/m) ω_E (B · r)(n̂ × r) (4)<br />
Equations (2) and (4) are used to obtain the derivatives of other<br />
orbital elements. Equation (4) leads to an expression for the<br />
derivative of inclination i [14]:<br />
di/dt = (ḣ cos i − ḣ · n̂) / (h sin i) (5)<br />
In the spherical coordinates, Eq. (5) becomes<br />
di/dt = −Ė / (h ω_E sin i) + (q cos i) / (m h² sin i) [ω_E r² (r v · n̂ − cos φ r · v)(B · r̂)<br />
+ (h cos i / sin φ)(r · v)(B · φ̂) + h sin i cos (θ − Ω)(r · v)(B · θ̂)] (6)<br />
<strong>The</strong> first term in Eq. (6) shows that changes in inclination are<br />
coupled to changes in orbital energy, especially for orbits that are<br />
near circular (where r . v goes to zero) or polar (where cos i goes<br />
to zero). This coupling does not arise from some fundamental<br />
relationship between energy change and inclination change,<br />
but rather from the particulars of the Lorentz force. <strong>The</strong> energy<br />
change is driven by the radial component of the magnetic field<br />
and the apparent velocity induced by the rotation of the field. <strong>The</strong><br />
inclination change is generally driven by the radial component of<br />
the magnetic field and the in-track velocity of the spacecraft. <strong>The</strong>se
velocities, magnetic field components, and perturbation equations<br />
happen to line up such that they depend on the same dynamic<br />
quantities in the same relationships.<br />
An expression for the change in eccentricity e is [14]<br />
ė = √(a/μ) (1 − e²)^(1/2) {(F_L · r̂) sin ν + [F_L · (ĥ × r̂)][cos ν + (e + cos ν)/(1 + e cos ν)]} (7)<br />
This expression makes use of the Lorentz force F_L explicitly.<br />
Equations (3), (4), (6), and (7) are greatly simplified if we restrict<br />
our discussion to circular (or near circular) orbits, where the term<br />
(r · v) vanishes. Applying this simplification to Eq. (3) yields<br />
Ė = (q/m) ω_E r² √(μ/r³) sin i cos u (B · r̂) (8)<br />
<strong>The</strong> argument of latitude is defined as the angle from the point of<br />
right ascension of the ascending node (RAAN) to the spacecraft’s<br />
position, measured around the orbit. Only the radial component of<br />
the magnetic field affects the orbital energy of a circular LAO. If the<br />
same simplification is applied to Eq. (4), we find that the change<br />
in scalar angular momentum is simply a multiple of the change in<br />
energy:<br />
ḣ = √(r³/μ) Ė (9)<br />
The inclination change in a circular orbit follows the same pattern:<br />
di/dt = [(ω_E r² cos i)/h − 1] Ė / (h ω_E sin i) (10)<br />
Equation (10) implies that, in circular orbits, orbital energy and<br />
inclination are not independently controllable with the Lorentz<br />
force. For every increase in energy, there is a corresponding<br />
decrease in inclination. (This fact also holds true for any polar orbit,<br />
eccentric or not.) This correlation limits the maneuvers that can be<br />
performed with LAO-based propulsion.<br />
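The circular-orbit coupling can be checked numerically. A sketch, writing Eq. (9) as ḣ = √(r³/μ) Ė and Eq. (10) as di/dt = (ω_E r² cos i/h − 1) Ė/(h ω_E sin i), as reconstructed here (the 600-km, 51.6-deg sample orbit is an illustrative assumption):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
W_E = 7.2921e-5       # Earth's spin rate, rad/s

def hdot_circular(edot, r):
    """Eq. (9): the angular-momentum rate is a fixed multiple of the energy rate."""
    return math.sqrt(r**3 / MU) * edot

def didt_circular(edot, r, i):
    """Eq. (10): inclination rate tied to the energy rate in a circular orbit."""
    h = math.sqrt(MU * r)
    return (W_E * r**2 * math.cos(i) / h - 1.0) * edot / (h * W_E * math.sin(i))

# 600-km circular orbit at i = 51.6 deg: raising energy necessarily lowers
# inclination, because w_E r^2 / h ~ 0.07 << 1 makes the leading factor negative.
r, i = 6.978e6, math.radians(51.6)
assert hdot_circular(+1.0, r) > 0.0
assert didt_circular(+1.0, r, i) < 0.0
```

This is the numerical face of the text’s statement that, for circular LEO orbits, every increase in energy comes with a corresponding decrease in inclination.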
<strong>The</strong> circular-orbit assumption simplifies Eq. (7), resulting in<br />
ė = (q/m) √(a/μ) [2 ω_E r sin i sin φ cos (θ − Ω) cos ν (B · r̂)<br />
+ sin ν (ω_E r sin φ − (h cos i)/(r sin φ))(B · φ̂) − (h/r) sin i cos (θ − Ω) sin ν (B · θ̂)] (11)<br />
<strong>The</strong> change in eccentricity depends on all three components of the<br />
magnetic field, making for more complicated analysis. Each term in<br />
Eq. (11) involves the true anomaly ν. This relationship shows the<br />
importance of radial velocity, which is also explicitly related to ν.<br />
Changes in eccentricity are driven by small deviations from the<br />
circular-orbit assumption.<br />
Geomagnetic Field<br />
<strong>The</strong> simplest model of the Earth’s magnetic field is a dipole aligned<br />
with Earth’s spin axis. However, this simple model fails to describe<br />
two important features for an LAO: that the dipole component<br />
is not aligned with the Earth’s spin axis and that terms higher in<br />
degree than the dipole are significant components of the field. <strong>The</strong><br />
Earth’s magnetic field is best described as a full spherical-harmonic<br />
expansion [15]. Here, spherical-harmonic coefficients released as<br />
the International Geomagnetic Reference Field (IGRF) are used<br />
[16], in particular, the IGRF95 (or IGRF-7) model. All simulations in<br />
this study use coefficients up to 10th degree and order. An important<br />
note on the magnetic field is that it is represented in Earth-fixed<br />
coordinates. <strong>The</strong> field itself is locked in step with the rotation of the<br />
Earth [17]. One must be careful to distinguish between Earth-fixed<br />
longitudes and inertial longitudes.<br />
<strong>The</strong> effect of the Lorentz force on an orbit is conveniently broken up<br />
into the components of the magnetic field in spherical-coordinate<br />
unit vectors: the radial direction r^ , the colatitude direction φ^ , and<br />
the azimuthal direction θ^ . <strong>The</strong> magnetic field B is studied as the<br />
three components (B . r^), (B . φ^ ), and (B . θ^ ). Figure 2 shows a<br />
contour plot of (B . r^) over (Earth-fixed) latitude and longitude at an<br />
altitude of 600 km. Positive values are represented by dotted gray<br />
contours, and negative contours are dashed gray. <strong>The</strong> black contour<br />
is referred to as the magnetic equator and indicates where the radial<br />
component is zero. In the traditional lexicon, the magnetic equator<br />
is where the field has no inclination (or “dip”). For an axis-aligned<br />
dipole model, the magnetic equator would lie on the latitudinal<br />
equator, but the additional higher-degree terms modify its location<br />
significantly.<br />
Figure 2. Contour plot of the radial component of the geomagnetic<br />
field over latitude and longitude.<br />
Figure 3 shows a contour plot of (B . φ ^ ). Again, dashed gray contours<br />
are negative and dotted gray positive, with black being zero. <strong>The</strong> φ ^<br />
component of the field is generally negative, except for small polar<br />
regions. <strong>The</strong> φ ^ component is small near these polar regions and is<br />
largest near the magnetic equator.<br />
Figure 3. Contour plot of the component of the geomagnetic field in<br />
the φ ^ direction over latitude and longitude.<br />
Figure 4 shows a contour plot of (B • θ ^ ). <strong>The</strong> contour colors are as<br />
previously described. Figure 4 shows distinct regions of positive<br />
and negative values. <strong>The</strong> zero contour represents the line of zero<br />
declination (or zero difference between true north and magnetic<br />
north). <strong>The</strong> dipole component of the field (and all other zero-order<br />
terms) contributes nothing to the θ ^ component of the field.<br />
Figure 4. Contour plot of the component of the geomagnetic field in<br />
the θ ^ direction over latitude and longitude.<br />
<strong>The</strong> three orthogonal components of the field can be used to<br />
divide the space of latitude and longitude into eight distinct zones.<br />
<strong>The</strong> zones are defined by whether each component is positive or<br />
negative and are bounded by the zero contours depicted in Figures<br />
2-4. <strong>The</strong> zones are numbered I-VIII and are depicted graphically<br />
in Figure 5 with the properties shown in Table 1. Figure 5 shows<br />
each of the zones superimposed on a map of the Earth. Because<br />
of distortion due to the map projection, zones I, II, VII, and VIII<br />
are shown larger than their actual sizes. In a three-dimensional<br />
(3D) view, they appear in a small region near each pole. <strong>The</strong> large<br />
southward swing of the zero declination contour over eastern Africa<br />
actually crosses the magnetic equator, causing zones III and V to<br />
have noncontiguous regions. Table 1 lists the differences among the<br />
zones. A ‘+’ in the table refers to a quantity greater than zero and a ‘-’<br />
denotes less than zero.<br />
Figure 5. Eight distinct zones of the geomagnetic field, numbered<br />
I-VIII.<br />
Table 1. Zone Properties.<br />
Zone (B · r̂) (B · φ̂) (B · θ̂)<br />
I + + +<br />
II + + -<br />
III + - +<br />
IV + - -<br />
V - - -<br />
VI - - +<br />
VII - + -<br />
VIII - + +<br />
In each zone, the geomagnetic field has a certain sign for a particular<br />
component of the field. Each zone creates different effects on the<br />
orbit of a charged satellite. We use these differences to create a<br />
control sequence to perform a desired maneuver. <strong>The</strong> zones are<br />
defined with respect to Earth-fixed latitude and longitude as the<br />
geomagnetic field rotates with the Earth. Figures 2-4 are for a<br />
representative altitude (600 km) because the relative strength<br />
of each order of field terms depends on this altitude. Although<br />
the actual zone boundaries depend on altitude, they are easily<br />
calculated at any particular location by the simple sign definitions<br />
shown in Table 1.
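The sign-triple definitions of Table 1 map directly to a small lookup; a sketch (zone labels as in the table):

```python
# Map the signs of (B.r_hat, B.phi_hat, B.theta_hat) to the zones of Table 1.
ZONES = {
    (+1, +1, +1): 'I',    (+1, +1, -1): 'II',
    (+1, -1, +1): 'III',  (+1, -1, -1): 'IV',
    (-1, -1, -1): 'V',    (-1, -1, +1): 'VI',
    (-1, +1, -1): 'VII',  (-1, +1, +1): 'VIII',
}

def zone(b_r, b_phi, b_theta):
    """Classify a field sample (tesla) by the sign of each spherical component."""
    key = tuple(1 if c > 0 else -1 for c in (b_r, b_phi, b_theta))
    return ZONES[key]

# Example: a sample with signs (+, -, +) falls in zone III
assert zone(2.1e-5, -1.3e-5, 4.0e-6) == 'III'
```

As the text notes, this classification is independent of altitude: the zone boundaries move, but the sign test itself needs only the local field components.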
Space-Vehicle Design<br />
This section offers a brief overview of possible architectures for<br />
LAO-capable spacecraft. It considers three competing, interrelated<br />
parameters: capacitance, power, and space-vehicle mass. <strong>The</strong>re are<br />
also implementation issues, such as deployability of the capacitor,<br />
technology readiness of the power system, thermal implications of<br />
high power, and interactions among various subsystems (notably<br />
attitude control). <strong>The</strong>se issues are minimized here. For the present,<br />
maximizing the q/m metric is taken to be the only goal of LAO space-vehicle<br />
design. Furthermore, we consider this metric only in terms<br />
of a constant-mass spacecraft. Six hundred kilograms is chosen as a<br />
somewhat arbitrary constraint for this mass optimization. <strong>The</strong> mass<br />
is given some contingency.<br />
Capacitance<br />
High q/m implies high charge, which requires high capacitance.<br />
Known technologies for self-capacitance store charge on the surface<br />
of a conductor with no sharp local features or high curvature. So,<br />
a successful design realizes high surface area to volume in flat<br />
structures or long, thin ones. Such a capacitor likely encounters a<br />
limit associated with the minimum thickness of thin films or the<br />
minimum feasible diameter of long filaments. That limit ultimately<br />
leads to a minimum mass for the capacitor. <strong>The</strong> capacitor is also<br />
designed to exploit plasma interactions. Based on work by Choinière<br />
and Gilchrist [18], we have baselined a cylindrical capacitor<br />
constructed of a sparse wire mesh. This stockinglike arrangement<br />
of appropriately spaced thin wires develops a plasma sheath due to<br />
ionospheric interactions that raises the capacitance of the cylinder<br />
well above what it would be in a pure vacuum. We emphasize that<br />
such self-capacitance is not available from off-the-shelf electronics<br />
components, which merely hold equal amounts of positive and<br />
negative charge.<br />
In this model, the capacitance C is taken to be that of a solid cylinder<br />
of the stocking’s radius R, but with a concentric shell (due to the<br />
plasma sheath) equal to the thickness of an individual wire’s sheath r_s:<br />
C = 2πε₀L / log[(R + r_s)/R] (12)<br />
where ε₀ is the permittivity of free space. The sheath radius<br />
increases with potential and is calculated as described by Choinière<br />
and Gilchrist [18]. In the architecture described in the section<br />
“Space-Vehicle Design: Power,” R takes on values of tens to hundreds<br />
of meters. The sheath thickness r_s depends on the temperature<br />
and density of the plasma and on capacitor potential, ranging<br />
from millimeters to meters in Earth orbit. We space these wires so<br />
that one wire is a sheath’s thickness away from its neighbor. This<br />
spacing ensures overlap between individual wires’ sheaths, but<br />
keeps the structure sparse. Occasional structural elements, such<br />
as thin conductive bands, would be necessary to maintain the<br />
spacing along the capacitor because of coulomb repulsion that acts<br />
among the wires. This repulsion would also serve as a useful means<br />
for deploying the capacitor without heavy trusses or actuators. A<br />
schematic of this design is shown in Figure 6.<br />
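Eq. (12) can be evaluated with the numbers from the Table 3 example (20-km stocking, R = 1 km, sheath thickness about 1.764 m); a sketch:

```python
import math

EPS_0 = 8.854187817e-12   # permittivity of free space, F/m

def stocking_capacitance(length, R, r_s):
    """Eq. (12): C = 2 pi eps_0 L / log((R + r_s) / R)."""
    return 2.0 * math.pi * EPS_0 * length / math.log((R + r_s) / R)

# Table 3 example values
C = stocking_capacitance(20e3, 1000.0, 1.764)

# A thin sheath around a large-radius cylinder gives a capacitance near 6e-4 F,
# far above what a bare wire of the same length would provide in vacuum.
assert 5.5e-4 < C < 7.0e-4
```

The logarithm is tiny when r_s << R, which is exactly the plasma-sheath effect the design exploits: the sparse stocking behaves like a solid cylinder of radius R with a very closely spaced outer shell.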
Figure 6. LAO spacecraft with cylindrical stocking capacitor.<br />
Power<br />
We consider two fundamentally different approaches to the<br />
power subsystem. <strong>The</strong> classical approach depends on solar power.<br />
Energy from solar panels is used directly to power the capacitor,<br />
countering the plasma currents, or is stored in batteries or some<br />
sort of efficient ultracapacitor to be used in a periodic-charging<br />
scheme. Some assumptions about the specific power (W/kg) must<br />
be made. Although the power density of current systems is about 40<br />
W/kg [19], a far-term power density of 130 W/kg is used here,<br />
consistent with DARPA’s Fast Access Spacecraft Testbed (FAST)<br />
program [20].<br />
In the case of this solar-power approach, the charge is maintained<br />
by modifying the current collection scheme proposed by Sanmartin<br />
et al. [21]. A power supply onboard the spacecraft establishes a<br />
potential between two conductive surfaces exposed to the plasma<br />
environment. <strong>The</strong> positive end attracts the highly mobile electrons,<br />
while the negative end attracts the far less mobile ions (such as<br />
O+). <strong>The</strong> substantial imbalance in electron and ion currents leads<br />
the negative end to accumulate a nonzero charge while the positive<br />
end is almost electrically grounded in the plasma. So, with the<br />
wire capacitor on the negative end, the spacecraft would achieve<br />
a net charge roughly equal to the product of the capacitance of the<br />
wires and the potential across the power supply [18]. This charge is<br />
accomplished without the use of particle beams.<br />
A more unusual approach exploits alpha-particle emission from<br />
an appropriate radioactive isotope [22], such as Po 210. <strong>The</strong>se<br />
emissions are not converted to electrical power thermionically<br />
as in a radioisotope thermoelectric generator or via fission in a<br />
nuclear reactor; instead, the isotope is spread thinly enough on the<br />
capacitor’s surface that up to half of the emitted alpha particles<br />
carry charge away from the spacecraft. <strong>The</strong> electrical current<br />
of these particles is proportional to their charge (two positive<br />
fundamental charges), their kinetic energy (roughly 5.3 × 10^6 eV), and the isotope’s decay rate. If the maximum potential can<br />
be achieved despite currents from the surrounding ionospheric<br />
plasma, this approach offers as much as 42 kW/kg of Po 210 after<br />
1 year of alpha decay. Maintaining this charge requires no power<br />
supply. <strong>The</strong> spatially distributed nature of the current from the thin<br />
film suggests that the current does not approach any sort of beam-density<br />
limit due to space charge.<br />
We focus on the prospects for the solar-panel approach because<br />
launching an isotope is likely to encounter a variety of technical<br />
and nontechnical roadblocks. In all cases, the capacitor maintains<br />
negative charge. The ion currents are then given by the orbital-motion-limited<br />
estimate [18]. We use the International Reference<br />
Ionosphere (IRI) [23] to provide the necessary plasma number<br />
density and temperature. We also account for the photoelectric<br />
current emitted from the surface of the conductive capacitor. In<br />
the case of the solar panel approach, all this power is subject to<br />
resistive losses as the power supply drives current through the many<br />
thin wires. Assuming that the current is uniform to all parts of the<br />
capacitor, we average the losses along the length of wire that the<br />
current has to travel.<br />
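The resistive-loss bookkeeping starts from the per-wire resistance ρL/(πr²). A sketch using the Table 3 material values and a 5-μm wire radius (an assumed value, chosen because it reproduces the table’s per-wire resistance and stocking mass):

```python
import math

RHO_AL = 2.82e-8     # aluminum resistivity at 20 deg C, ohm-m (Table 3)
L = 20e3             # wire length, m
r_wire = 5.0e-6      # wire radius, m (assumed; consistent with Table 3)

# Resistance of one wire: rho * L / cross-sectional area
R_wire = RHO_AL * L / (math.pi * r_wire**2)

# Mass of one wire: aluminum density x volume
mass_per_wire = 2700.0 * math.pi * r_wire**2 * L

assert 7.0e6 < R_wire < 7.4e6           # ~7.2 Mohm, as in Table 3
assert 0.003 < mass_per_wire < 0.005    # ~4.2 g; ~7000 wires -> ~30-kg stocking
```

With megohm-scale wires in parallel, the driving currents per wire are small, but averaging the ohmic loss over the wire length, as the text describes, is still needed because the collected ion current must traverse kilometers of very thin conductor.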
Space-Vehicle Mass<br />
<strong>The</strong> charge-to-mass ratio depends on the mass of the entire<br />
space vehicle. We model this mass coarsely as the sum of discrete<br />
components with interrelated dependencies. Table 2 summarizes<br />
this mass model.<br />
An example of the power calculation is shown in Table 3. Table 4<br />
uses this power calculation to arrive at the 600-kg space-vehicle<br />
mass requirement.<br />
Performance Estimates<br />
Figure 7 summarizes the results of these calculations for a 600-kg<br />
spacecraft that charges for 50% of the time over a 600-km orbit.<br />
Figure 7 shows the FAST power design, which yields q/m = 0.0070<br />
C/kg for a 20-km stocking at a 7-kV potential. The efficiency (force<br />
per power) increases with lower potential. For example, the optimal<br />
value of 5 C in a 600-km polar orbit produces about 2.3 N, for 1.6<br />
× 10^-5 N/W when the capacitor is charged. However, at only 1 kV,<br />
the resulting 3.1 C represents 2 × 10^-5 N/W. So, if the speed of the<br />
maneuver is unimportant, lower-potential designs may be better.<br />
As the capacitor potential increases beyond the optimum for<br />
q/m, more mass of the fixed 600 kg must be devoted to the power<br />
subsystem, which comes at the expense of capacitor mass. <strong>The</strong><br />
accuracy of these performance measures depends on the accuracy<br />
of the simplified sheath model and should eventually be verified by<br />
a more complex 2D algorithm such as that developed by Choinière<br />
and Gilchrist [18].<br />
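The headline performance number follows from q = CV; a sketch using the Table 3 and Table 4 example values:

```python
C = 6.0e-4    # stocking capacitance, F (Table 3 example)
V = 7000.0    # capacitor potential, V (Table 4 example)
m = 600.0     # total space-vehicle mass, kg

# Charge-to-mass ratio of the whole vehicle
q_over_m = C * V / m   # C/kg

assert abs(q_over_m - 0.0070) < 2e-4   # matches the quoted design point
```

This is why capacitance, potential, and total mass are the three competing parameters: q/m improves linearly with each of the first two and inversely with the third.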
Table 2. Space-Vehicle Mass Model.<br />
Subsystem or Component | Value | Units<br />
Payload | 50 | kg<br />
Bus (w/ payload power) | 3.33 | (kg bus)/(kg payload)<br />
LAO solar power | 130 | W/kg of orbit-average power<br />
LAO isotope power | 42 | kW/kg of polonium after 1 yr of decay<br />
Power mass contingency | 14 | kg<br />
Capacitor | 2700 πR²nL | kg, for n aluminum wires of length L and radius R<br />
Capacitor mass contingency | 1.1 m | kg, where m is the sum of the wires’ masses<br />
Table 3. Example of Power Calculation for a Spacecraft in a 600-km<br />
Altitude LEO Circular Orbit at an Inclination of 28.5 deg.<br />
Parameter | Value | Units<br />
Wire material | Aluminum |<br />
Wire radius | 5.00 × 10^-6 | m<br />
% overlap sheath diameter | 0% |<br />
Length of stocking, L | 20 | km<br />
Stocking radius as a % of stocking length | 5.00% |<br />
Stocking mass sandbag | 3 | kg<br />
Intermediate calculation | Value | Units<br />
Material resistivity at 20°C | 2.82 × 10^-8 | Ω·m<br />
Radius of stocking, R | 1 | km<br />
Material density of wire | 2700 | kg/m³<br />
Sheath thickness | 1.764 | m<br />
Resistance per wire | 7.198 | MΩ<br />
Number of wires | 7142 |<br />
Mass of stocking | 30.36 | kg<br />
Mass of capacitor | 33.40 | kg<br />
Average cylinder-as-body capacitance, C | 6.00 × 10^-4 | F<br />
Result | Value | Units<br />
Average body charge-to-mass ratio | -0.0070 | C/kg<br />
Exposed wire area | 2548 | m²<br />
Photoelectron current | 0.122 | A<br />
Orbit-average power required (~50% duty cycle) | 53.54 | kW<br />
Table 4. Example of Mass Calculation.<br />
| Parameter | Value | Units |<br />
| Potential | -7000 | V |<br />
| Orbit-average power for LAO | 46.28 | kW |<br />
| LAO power system mass dependency | 130 | W/kg |<br />
| LAO power system mass | 466 | kg |<br />
| Power system mass contingency | 14 | kg |<br />
| Payload | 20 | kg |<br />
| Bus mass (including any propellant) | 100 | kg |<br />
| Total space-vehicle mass | 600 | kg |<br />
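As a quick sanity check, the subsystem masses quoted in Table 4 sum exactly to the 600-kg vehicle total:

```python
# Check that the Table 4 mass budget closes: the component masses
# (kg, as printed in the table) sum to the quoted total vehicle mass.
components = {
    "LAO power system": 466,
    "power system mass contingency": 14,
    "payload": 20,
    "bus (including any propellant)": 100,
}
total = sum(components.values())
print(total)  # 600
```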
Figure 7. Orbit-average power and q/m vs. capacitor potential<br />
(axes: approximate orbit-average power, W, and approximate orbit-average<br />
q/m, C/kg, each vs. capacitor potential, 0-40,000 V).<br />
Lorentz Augmented Orbit Maneuvers and Limitations<br />
Maneuver Limitations<br />
A Lorentz augmented orbit cannot experience arbitrary changes for<br />
all initial orbital elements. In certain regimes, as evidenced by Eq.<br />
(10), changes in orbital elements are tightly coupled. This coupling<br />
stems from the basic physics of the Lorentz force. <strong>The</strong> direction<br />
of the force is set by the magnetic field and the velocity of the<br />
spacecraft with respect to that magnetic field, neither of which can<br />
be altered by the spacecraft control system.<br />
A further limiting factor is that the best system architectures<br />
provide only one polarity of charge (negative). Because electrons<br />
in the ionosphere are far more mobile than ions, significantly less<br />
power is required to maintain a negative charge than a positive<br />
charge. <strong>The</strong> single-polarity system limits what changes can be made<br />
to the RAAN Ω and the argument of perigee ω. For a given charge<br />
polarity, Ω and ω evolve only in a single direction (in LEO). For a<br />
negative charge in LEO, Ω always decreases and ω always increases.<br />
Table 5 summarizes some of the abilities and limits of LAO for a<br />
single polarity of charge. <strong>The</strong> first column of the table shows the<br />
net effect of a constant charge on a spacecraft. <strong>The</strong> second column<br />
shows the available directions of change for each orbital element for<br />
a variable (but single polarity) charge. <strong>The</strong> final column summarizes<br />
some of the special cases and coupling within the dynamics. Some of<br />
these special cases are addressed more explicitly in our earlier work<br />
[2], [3].<br />
Table 5. LAO Effects for q/m &lt; 0 in LEO.<br />
| Element | Net Effect of Constant Charge | Signs of Possible Changes | Notes |<br />
| a | 0 | ± | a/i coupled for e = 0 or i = 90 deg |<br />
| e | 0 | ± | ȧ = 0 for i = 0 deg and e = 0 |<br />
| i | 0 | ± | ė &gt; 0 for e = 0 |<br />
| Ω | − | − | a/i coupled for e = 0 or i = 90 deg |<br />
| ω | + | + | Ω undefined for i = 0 deg |<br />
| ν | | ± | ω undefined for i = 0 deg and e = 0 |<br />
<strong>The</strong> Lorentz force is at its strongest in LEO. <strong>The</strong> strength of the<br />
dipole component of the magnetic field drops off with the cube<br />
of radial distance. Additionally, spacecraft velocities with respect<br />
to the magnetic field tend to be larger in LEO. A geostationary<br />
spacecraft has no velocity with respect to the magnetic field and<br />
thus experiences no Lorentz force.<br />
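The cube-law falloff of the dipole term can be made concrete by comparing the field strength at geostationary radius with that at a 600-km LEO; the radii below are round illustrative numbers, not values from the paper.

```python
# Relative strength of the dipole field term at GEO vs. a 600-km LEO,
# using B ~ 1/r^3 (dipole term only; approximate geocentric radii).
r_leo = 6378.0 + 600.0   # km, radius of a 600-km-altitude orbit
r_geo = 42164.0          # km, geostationary radius
ratio = (r_leo / r_geo) ** 3
print(round(ratio, 4))   # the GEO dipole field is under 0.5% of its LEO value
```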
Example Maneuver: Low-Earth-Orbit Inclination Change and<br />
Orbit Raising<br />
<strong>The</strong> minimum inclination a spacecraft can be launched into is equal<br />
to the latitude of its launch site. For a U.S. launch, this minimum<br />
inclination is generally 28.5 deg, the latitude of Cape Canaveral,<br />
FL. However, for certain missions, equatorial orbits are desirable.<br />
<strong>The</strong> plane change between i = 28.5 deg and i = 0 deg is expensive<br />
in terms of ΔV and requires either a launch vehicle upper stage or<br />
a significant expenditure of spacecraft resources. We develop a<br />
control algorithm to use the Lorentz force to perform this inclination<br />
change without the use of propellant, while simultaneously raising<br />
the orbital altitude.<br />
This maneuver is primarily concerned with inclination change in<br />
circular orbit. Equation (10) describes the relevant dynamics. As<br />
energy change and inclination change are coupled in this situation,<br />
Eq. (8) describes both the energy and plane changes. In this circular<br />
case, only the radial component of the magnetic field affects the<br />
energy and inclination. For the inclination to decrease, the energy<br />
must increase. With these facts, we develop a bang-off controller<br />
based on the argument of latitude and the sign of the radial<br />
component of the field. Using q/m &lt; 0, the term cos u (B · r̂) must<br />
be negative. We know that (B · r̂) is positive below the magnetic<br />
equator (zones I, II, III, and IV) and negative above the magnetic<br />
equator (zones V, VI, VII, and VIII). Thus, for northward motion of<br />
the satellite (cos u > 0), the charge should be nonzero within zones<br />
V-VIII. For southward satellite motion (cos u < 0), nonzero charge<br />
is applied in zones I-IV. In other words, the charge should be off for<br />
the first quadrant of the orbit, on for the second quadrant, off for<br />
the third, and on for the fourth. This control can be represented as<br />
\[
\frac{q}{m} =
\begin{cases}
-\left(\dfrac{q}{m}\right)_{\max} & \text{if } \cos u > 0,\ (\mathbf{B}\cdot\hat{\mathbf{r}}) < 0 \\
0 & \text{if } \cos u > 0,\ (\mathbf{B}\cdot\hat{\mathbf{r}}) > 0 \\
-\left(\dfrac{q}{m}\right)_{\max} & \text{if } \cos u < 0,\ (\mathbf{B}\cdot\hat{\mathbf{r}}) > 0 \\
0 & \text{if } \cos u < 0,\ (\mathbf{B}\cdot\hat{\mathbf{r}}) < 0
\end{cases}
\tag{13}
\]
where -(q/m)<sub>max</sub> is the largest available negative charge-to-mass ratio.<br />
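All four branches of Eq. (13) reduce to a single sign test on the product cos u (B · r̂). A minimal sketch in Python; the function name and the scalar B · r̂ input are our own illustrative choices, not from the paper:

```python
import math

def quadrant_charge(u, B_dot_rhat, qm_max):
    """Bang-off charge command of Eq. (13).

    u          -- argument of latitude, rad
    B_dot_rhat -- radial component of the magnetic field, B . r-hat
    qm_max     -- magnitude of the largest available (negative) q/m, C/kg
    Returns the commanded charge-to-mass ratio (<= 0).
    """
    # Charge on only when cos(u) and (B . r-hat) have opposite signs,
    # i.e., when the term cos(u) * (B . r-hat) is negative.
    if math.cos(u) * B_dot_rhat < 0:
        return -qm_max
    return 0.0
```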
However, when this simple quadrant control is used, the eccentricity<br />
of the orbit tends to grow undesirably large. Maintaining an identically<br />
zero eccentricity is impossible, though. Any charge on a<br />
circular-orbiting spacecraft causes an increase in the eccentricity. However,<br />
if the oblateness of the Earth is considered, the eccentricity remains<br />
bounded by a small value. Figure 8 shows this result, plotting a short<br />
simulation of an orbit under the quadrant controller. <strong>The</strong> blue line<br />
shows the growth of eccentricity with J2 absent, while the green<br />
line shows the bounding of e under the influence of J2. <strong>The</strong> effect of<br />
J2 on the eccentricity of the orbit is larger than that of the Lorentz<br />
force. <strong>The</strong> J2 perturbation does not affect the overall performance<br />
of the maneuver, though. <strong>The</strong> Lorentz force depends on the velocity<br />
of the spacecraft, which changes only by a small amount due to the<br />
presence of J2. <strong>The</strong> presence of J2 only creates a small periodic<br />
disturbance to both a and e.<br />
Figure 9 shows the results of a simulation using the e-limiting<br />
quadrant method. <strong>The</strong> simulation begins with a 600-km altitude<br />
circular orbit. <strong>The</strong> charge-to-mass ratio is q/m = -0.007 C/kg. A full<br />
model of J2 is used. <strong>The</strong> simulation lasts until an equatorial orbit is<br />
reached. <strong>The</strong> IGRF95 magnetic field model is used to 10th degree<br />
and order. Figure 9a shows the increase in semimajor axis given<br />
by the quadrant method. <strong>The</strong> initial 600-km orbit is raised to a<br />
724.0-km circular orbit, an increase of 124 km. Figure 9b shows the<br />
desired decrease in inclination. Because the magnetic equator does<br />
not align with the true equator, the inclination can be brought to<br />
exactly zero. Zero inclination is reached in about 340 days with this<br />
value of charge. Figure 9c shows the eccentricity. <strong>The</strong> eccentricity<br />
is bounded by the J2 perturbation to a small value.<br />
Figure 8. Effect of Earth oblateness on the eccentricity under the<br />
quadrant control (eccentricity vs. time, with J2 present and absent).<br />
Figure 9. Orbital elements for the LEO plane change and<br />
orbit-raising maneuver: a) semimajor axis, b) inclination,<br />
c) eccentricity, d) right ascension Ω, each vs. time in days.<br />
Finally, Figure 9d shows the RAAN. For a negative q/m in LEO, the RAAN always<br />
decreases; however, in this simulation, the effect of J2 on RAAN<br />
dominates. If the aforementioned simulated maneuver is performed<br />
using conventional impulsive thrust, it requires a ΔV of 3.75 km/s.<br />
Thus, using LAOs could significantly increase the payload ratio of a<br />
spacecraft that needed such a maneuver. However, this mass savings<br />
comes at a cost of time spent, the mass of the capacitor and power<br />
system, and electrical power consumed during the maneuver.<br />
Lorentz Augmented Orbit Power Consumption and<br />
Plasma-Density-Based Control<br />
<strong>The</strong> preceding simulations use a code that does not include a<br />
model of the Earth’s ionosphere. <strong>The</strong> spacecraft design process<br />
is carried out initially using the IRI model, and then that design<br />
is used in simulation. <strong>The</strong> simulation assumes the spacecraft<br />
maintains its design charge-to-mass ratio, regardless of the local<br />
plasma conditions. In this section, we explore the use of a more<br />
in-depth LAO simulation by revisiting the LEO inclination-change<br />
maneuver. This simulation uses a code that takes into account<br />
local ionospheric conditions and their effect on the instantaneous<br />
charge-to-mass ratio and power consumption of the spacecraft.<br />
<strong>The</strong> high-fidelity, plasma dynamics simulation is based on the<br />
Global Core Plasma Model (GCPM) [24]. <strong>The</strong> GCPM model is<br />
a framework for blending multiple empirical plasma-density<br />
models and extending the IRI model to full global coverage. For<br />
the next simulations, the GCPM model at one particular time is<br />
used. This time corresponds to mean solar conditions. Although<br />
there is a strong correlation between plasma conditions and time<br />
of day, this effect is averaged out by simulating over the course of<br />
multiple days.<br />
This simulation functions in a different fashion from the results<br />
presented in the “Lorentz Augmented Orbit Maneuvers and<br />
Limitations” section. <strong>The</strong> earlier simulations assume that q/m<br />
is either zero or constant at a value of -0.007 C/kg. <strong>The</strong> GCPM<br />
simulation assumes that the spacecraft maintains a constant<br />
potential on the capacitor. Because of local variations in plasma<br />
density, a constant potential results in varying values of<br />
charge-to-mass ratio and varying power required to hold the constant<br />
potential. Although the mean q/m and orbit-average power are<br />
consistent with those predicted in our earlier analysis in the<br />
“Space-Vehicle Design” section, they have peak and minimum<br />
values that depend on the local plasma environment.<br />
<strong>The</strong> local electron number density n<sub>e</sub> is a strong predictor of power<br />
usage and is readily available from the GCPM model. A higher n<sub>e</sub><br />
corresponds to a denser plasma, which, in turn, results in more<br />
current collection for a stocking at a given potential. Thus, high n<sub>e</sub><br />
values correlate to high power usage. In a gross sense, n<sub>e</sub> is larger<br />
in the low to mid-latitudes on the daytime side of the Earth. <strong>The</strong><br />
density of the plasma also drops sharply as a function of altitude.<br />
Assuming the spacecraft has knowledge of its local plasma<br />
conditions, significant power savings can be realized by limiting<br />
the charge-on time when n<sub>e</sub> is high. <strong>The</strong> spacecraft simply follows<br />
its normal control law, but turns off the charge whenever n<sub>e</sub><br />
exceeds a particular value. A sample of this power savings and<br />
cost in time is shown in Table 6. This table lists the results of four<br />
simulations performed with the constant-potential code. Each<br />
simulation integrates over three days and begins with the same<br />
initial conditions. Reported for each run is the mean q/m achieved<br />
(during times of nonzero charging), the average power used (over<br />
the entire simulation), the peak instantaneous power used, the<br />
total inclination change over the simulation, and an efficiency<br />
in the form of degrees of inclination change per day divided by<br />
the average power used. <strong>The</strong> first simulation uses the e-limiting<br />
quadrant controller discussed earlier with no modification based<br />
on electron density. <strong>The</strong> other three runs superimpose an<br />
n<sub>e</sub>-based, density-limited control, turning off the charge when n<sub>e</sub> is<br />
greater than some value.<br />
<strong>The</strong> average q/m achieved by each successive simulation is<br />
slightly lower, as seen in the first row of Table 6. In regions of high<br />
plasma density, a tighter plasma sheath increases the capacitance<br />
of the stocking, and hence the magnitude of q/m. However,<br />
this increase in q/m requires significantly more power to maintain,<br />
as the denser plasma greatly increases the current collected by the<br />
stocking. <strong>The</strong> power reduction due to density-limited control is<br />
shown in the second and third rows of Table 6. Without<br />
density-based control, the average power usage over the simulation is<br />
53.54 kW, but with a peak instantaneous power usage of 418.57<br />
kW. When charge is only applied for an n<sub>e</sub> of less than 1.1 × 10<sup>11</sup><br />
m<sup>-3</sup> (the mean electron density in this orbit), the power usage<br />
drops to a mean of 12.94 kW, with a peak of 89.59 kW. Of course,<br />
Table 6. Limiting Power Usage via n<sub>e</sub> (in m<sup>-3</sup>) Sensing for 3-day Simulations at an Initial Inclination of i = 28.5 deg at a 600-km Altitude.<br />
| | No n<sub>e</sub> Control | n<sub>e</sub> &lt; 2 × 10<sup>11</sup> | n<sub>e</sub> &lt; 1.5 × 10<sup>11</sup> | n<sub>e</sub> &lt; 1.1 × 10<sup>11</sup> |<br />
| (q/m)<sub>mean</sub>, C/kg | -0.0060 | -0.0057 | -0.0054 | -0.0048 |<br />
| P<sub>mean</sub>, kW | 53.54 | 35.88 | 24.33 | 12.94 |<br />
| P<sub>peak</sub>, kW | 418.57 | 220.01 | 140.64 | 89.59 |<br />
| Δi, deg | 0.3764 | 0.3292 | 0.2653 | 0.1747 |<br />
| deg/day/kW<sub>mean</sub> | 0.0023 | 0.0031 | 0.0036 | 0.0045 |<br />
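The efficiency row of Table 6 is simply Δi per day divided by mean power; a quick reproduction from the other two rows (3-day runs):

```python
# Reproduce the deg/day/kW efficiency row of Table 6 from the
# inclination-change and mean-power rows of the same table.
runs = [  # (delta_i_deg over 3 days, P_mean_kW, printed efficiency)
    (0.3764, 53.54, 0.0023),
    (0.3292, 35.88, 0.0031),
    (0.2653, 24.33, 0.0036),
    (0.1747, 12.94, 0.0045),
]
for delta_i, p_mean, printed in runs:
    efficiency = (delta_i / 3.0) / p_mean   # deg/day per kW
    assert abs(efficiency - printed) < 1e-4  # matches to table precision
```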
41
the decreased power usage is coupled with a lengthening of the<br />
maneuver time. Row 4 of Table 6 shows the inclination change<br />
achieved over 3 days for each level of density control. <strong>The</strong><br />
unlimited control changes inclination at a rate about 2.2 times<br />
higher than the n<sub>e</sub> &lt; 1.1 × 10<sup>11</sup> m<sup>-3</sup> case. However, the density-limited<br />
controllers achieve inclination changes in a more efficient way.<br />
<strong>The</strong> fifth row of Table 6 displays an efficiency metric for each<br />
simulation, namely degrees of inclination change achieved per day<br />
per average kilowatt used. Charging only at low values of n<sub>e</sub> uses<br />
the available power more efficiently to effect inclination change.<br />
<strong>The</strong> profile of electron densities experienced by a spacecraft varies<br />
greatly depending on its orbit. In the 28.5-deg inclination-change<br />
example, both the change in inclination and the change in altitude<br />
during the maneuver cause no one limit on n<sub>e</sub> to be appropriate.<br />
However, recreating this entire maneuver using the GCPM<br />
constant-voltage simulation is impractical in its computational demands. A<br />
reasonable approximation is a hybrid simulation in which a constant<br />
charge-to-mass ratio is used, but the electron density is calculated<br />
at each step in the integration to superimpose the density-limited<br />
control strategy. To take advantage of the orbit raising that occurs<br />
during the maneuver, the n<sub>e</sub> cutoff value is made a linear function<br />
of the spacecraft altitude. This line is defined by two points: n<sub>e</sub> equal<br />
to 2.0 × 10<sup>11</sup> m<sup>-3</sup> at an altitude of 600 km, and n<sub>e</sub> equal to 1.6 ×<br />
10<sup>11</sup> m<sup>-3</sup> at an altitude of 700 km. <strong>The</strong>se values are chosen to give<br />
a reasonable tradeoff between power savings and maneuver time.<br />
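The altitude-dependent cutoff is a straight line through the two quoted anchor points. A minimal sketch of the cutoff and the density-limited override; the function names are illustrative, not from the paper:

```python
def ne_cutoff(altitude_km):
    """Linear n_e cutoff (m^-3) through (600 km, 2.0e11) and (700 km, 1.6e11)."""
    return 2.0e11 + (altitude_km - 600.0) * (1.6e11 - 2.0e11) / (700.0 - 600.0)

def density_limited_qm(nominal_qm, n_e, altitude_km):
    """Zero the commanded charge whenever the local n_e exceeds the cutoff;
    otherwise pass the nominal control law's q/m command through."""
    return 0.0 if n_e > ne_cutoff(altitude_km) else nominal_qm
```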
Figure 10 shows the results of this hybrid simulation. <strong>The</strong> top plot<br />
of this figure shows the semimajor axis, while the lower plot gives<br />
the inclination, both versus time in days. <strong>The</strong> solid green lines are<br />
the results of the hybrid constant q/m, density-limited simulation.<br />
For comparison, the dashed blue lines show the results of the<br />
constant charge-only simulation. <strong>The</strong> hybrid strategy completes the<br />
inclination-change maneuver in 380 days compared with 340 days<br />
for the original strategy.<br />
To provide insight into the power saved by using the density-limited<br />
hybrid strategy, short-duration simulations are run using the full<br />
GCPM, constant voltage code. <strong>The</strong>se simulations are run for three<br />
points in the trajectory of both the hybrid simulation and the original<br />
inclination-change maneuver. When each trajectory reaches 28.5,<br />
10, and 1 deg of orbital inclination, its state is retrieved and used<br />
as the initial conditions for a 3-day simulation using the full GCPM<br />
code. <strong>The</strong> results of these simulations are summarized in Table 7.<br />
This table gives the mean achieved charge-to-mass ratio, average<br />
and peak power consumptions, and inclination change over the<br />
3-day simulation for each control strategy for each inclination<br />
considered. <strong>The</strong> addition of density-limited control reduces both<br />
the mean and peak power usage, but also decreases the speed of the<br />
inclination change.<br />
Figure 10. Comparison of hybrid simulation of constant charge-to-mass<br />
ratio with plasma-density-limited control to constant q/m-only<br />
control: a) semimajor axis and b) inclination, each vs. time in days.<br />
Table 7. Comparison of Power Usage During both the Hybrid Simulation and the Original, Constant Charge Simulation.<br />
| Inclination | Quantity | Constant Charge | Hybrid, Density-Limited |<br />
| 28.5 deg | (q/m)<sub>mean</sub>, C/kg | -0.0060 | -0.0057 |<br />
| | P<sub>mean</sub> (P<sub>peak</sub>), kW | 53.54 (418.57) | 36.50 (217.15) |<br />
| | Δi, deg | 0.3764 | 0.3308 |<br />
| 10 deg | (q/m)<sub>mean</sub>, C/kg | -0.0055 | -0.0053 |<br />
| | P<sub>mean</sub> (P<sub>peak</sub>), kW | 56.79 (259.47) | 43.64 (208.58) |<br />
| | Δi, deg | 0.1806 | 0.1622 |<br />
| 1 deg | (q/m)<sub>mean</sub>, C/kg | -0.0055 | -0.0053 |<br />
| | P<sub>mean</sub> (P<sub>peak</sub>), kW | 58.39 (235.89) | 43.54 (208.39) |<br />
| | Δi, deg | 0.1447 | 0.1330 |<br />
Conclusions<br />
Lorentz augmented orbits use the Earth’s magnetic field to provide<br />
propellantless propulsion. Although the direction of the Lorentz<br />
force is fixed by the velocity of the spacecraft and the local field,<br />
varying the magnitude of the charge-to-mass ratio of the satellite<br />
can produce novel and useful changes to an orbit. A simple on-off<br />
(or bang-off) charging scheme is sufficient to perform most<br />
available maneuvers and can create large ΔV savings.<br />
A preliminary evaluation of some possible architectures leads us<br />
to the tentative conclusion that up to 0.0070 C/kg can be reached<br />
by a negatively charged LEO spacecraft of 600-kg mass. <strong>The</strong>se<br />
designs use cylindrical mesh “stocking” capacitive structures that<br />
are shorter than most proposed electrodynamic tethers and offer<br />
the important benefit that their performance is independent<br />
of their attitude in the magnetic field. That simplicity largely<br />
decouples attitude control from propulsion, a consideration that<br />
can complicate the operation of tether-driven spacecraft.<br />
<strong>The</strong> Earth’s magnetic field is a complex structure. Accurate analytical<br />
expressions for orbital perturbations are difficult to obtain. <strong>The</strong><br />
proposed control method accommodates this complexity by<br />
breaking the geomagnetic field into distinct zones based on its<br />
sign in three orthogonal directions, leading to eight zones. Within<br />
each zone, an LAO tends to evolve in certain directions for certain<br />
orbital elements. Understanding how the orbital evolution relates to<br />
the zone the spacecraft is in allows us to develop control strategies<br />
to execute complex maneuvers. A simple, but effective strategy is<br />
to operate a bang-off control scheme that switches only at zone<br />
boundaries. This scheme allows for the execution of a sample<br />
maneuver of a LEO plane change without the use of propellant,<br />
saving a ΔV of 3.75 km/s required for a conventional propulsive<br />
maneuver. However, this maneuver lasts for 340 days and requires<br />
about 53 kW of power on average. A controller that limits charging in<br />
response to local plasma-density measurements reduces this power<br />
requirement to an average of 40 kW, but increases the maneuver<br />
time to 380 days.<br />
References<br />
[1] Peck, M.A., “Prospects and Challenges for Lorentz-Augmented Orbits,”<br />
Proceedings of the AIAA Guidance, Navigation, and Control Conference,<br />
AIAA Paper 2005-5995, August 2005.<br />
[2] Streetman, B. and M.A. Peck, “New Synchronous Orbits Using the<br />
Geomagnetic Lorentz Force,” Journal of Guidance, Control, and<br />
Dynamics, Vol. 30, No. 6, 2007, pp. 1677-1690.<br />
[3] Streetman, B. and M.A. Peck, “Gravity-Assist Maneuvers Augmented<br />
by the Lorentz Force,” Proceedings of the AIAA Guidance, Navigation,<br />
and Control Conference, AIAA Paper 2007-6846, August 2007.<br />
[4] Schaffer, L. and J.A. Burns, “<strong>The</strong> Dynamics of Weakly Charged Dust:<br />
Motion Through Jupiter’s Gravitational and Magnetic Fields,” Journal of<br />
Geophysical Research, Vol. 92, No. A3, 1987, pp. 2264-2280.<br />
[5] Schaffer, L. and J.A. Burns, “Charged Dust in Planetary Magnetospheres:<br />
Hamiltonian Dynamics and Numerical Simulations for Highly<br />
Charged Grains,” Journal of Geophysical Research, Vol. 99, No. A9,<br />
1994, pp. 17211-17223.<br />
[6] Hamilton, D.P., “Motion of Dust in a Planetary Magnetosphere: Orbit-<br />
Averaged Equations for Oblateness, Electromagnetic, and Radiation<br />
Forces with Applications to Saturn’s F Ring,” Icarus, Vol. 101, No. 2,<br />
February 1993, pp. 244-264 (Erratum: Icarus, Vol. 103, p. 161).<br />
[7] Sehnal, L., <strong>The</strong> Motion of a Charged Satellite in the Earth’s Magnetic<br />
Field, Smithsonian Institution Technical Report, Smithsonian<br />
Astrophysical Observatory Special Report No. 271, June 1969.<br />
[8] Vokrouhlicky, D., “<strong>The</strong> Geomagnetic Effects on the Motion of<br />
Electrically Charged Artificial Satellite,” Celestial Mechanics and<br />
Dynamical Astronomy, Vol. 46, 1989, pp. 85-104.<br />
[9] Abdel-Aziz, Y., “Lorentz Force Effects on the Orbit of a Charged<br />
Artificial Satellite: A New Approach,” Applied Mathematical Sciences<br />
[online], Vol. 1, Nos. 29-32, 2007, pp. 1511-1518,<br />
http://www.m-hikari.com/ams/ams-password-2007/ams-password29-32-2007/index.html.<br />
[10] Cosmo, M.L. and E.C. Lorenzini, Tethers in Space Handbook, 3rd ed.,<br />
NASA Marshall Spaceflight Center, Huntsville, AL, 1997, pp. 119-151.<br />
[11] King, L.B., G.G. Parker, S. Deshmukh, J. Chong, “A Study of Inter-<br />
Spacecraft Coulomb Forces and Implications for Formation Flying,”<br />
Journal of Propulsion and Power, Vol. 19, No. 3, 2003, pp. 497-505.<br />
[12] Schaub, H., G.G. Parker, L.B. King, “Challenges and Prospects of<br />
Coulomb Spacecraft Formations,” Proceedings of the AAS John L.<br />
Junkins Symposium, American Astronautical Society Paper 03-278,<br />
May 2003.<br />
[13] Peck, M.A., B. Streetman, C.M. Saaj, V. Lappas, “Spacecraft Formation<br />
Flying Using Lorentz Forces,” Journal of the British Interplanetary<br />
Society, Vol. 60, July 2007, pp. 263-267,<br />
http://www.bis-spaceflight.com/sitesia.aspx/page/358/id/1444/l/en-us.<br />
[14] Burns, J.A., “Elementary Derivation of the Perturbation Equations of<br />
Celestial Mechanics,” American Journal of Physics, Vol. 44, No. 10,<br />
1976, pp. 944-949.<br />
[15] Roithmayr, C.M., Contributions of Spherical Harmonics to Magnetic and<br />
Gravitational Fields, NASA, TR TM-2004-213007, March 2004.<br />
[16] Barton, C.E., “International Geomagnetic Reference Field: <strong>The</strong> Seventh<br />
Generation,” Journal of Geomagnetism and Geoelectricity, Vol. 49, Nos.<br />
2-3, 1997, pp. 123-148.<br />
[17] Rothwell, P.L., “<strong>The</strong> Superposition of Rotating and Stationary Magnetic<br />
Sources: Implications for the Auroral Region,” Physics of Plasmas, Vol.<br />
10, No. 7, 2003, pp. 2971-2977.<br />
43
[18] Choinière, E. and B.E. Gilchrist, “Self-Consistent 2D Kinetic<br />
Simulations of High-Voltage Plasma Sheaths Surrounding Ion-<br />
Attracting Conductive Cylinders in Flowing Plasmas,” IEEE<br />
Transactions on Plasma Science, Vol. 35, No. 1, 2007, pp. 7-22.<br />
[19] Wertz, J.R. and W.J. Larson, Space Mission Analysis and Design,<br />
Microcosm Press, El Segundo, CA, 1999, pp. 141-156.<br />
[20] “Fast Access Spacecraft Testbed (FAST),” Defense Advanced Research<br />
Projects Agency Broad Agency Announcement, BAA-07-65,<br />
November 2007.<br />
[21] Sanmartin, J.R., M. Martinez-Sanchez, E. Ahedo, “Bare Wire Anodes for<br />
Electrodynamic Tethers,” Journal of Propulsion and Power, Vol. 9, June<br />
1993, pp. 353-360.<br />
[22] Linder, E.G. and S.M. Christian, “<strong>The</strong> Use of Radioactive Material for<br />
the Generation of High Voltage,” Journal of Applied Physics, Vol. 23, No.<br />
11, 1952, pp. 1213-1216.<br />
[23] Bilitza, D., “International Reference Ionosphere 2000,” Radio Science,<br />
Vol. 36, No. 2, 2001, pp. 261-275.<br />
[24] Gallagher, D.L., P.D. Craven, R.H. Comfort, “Global Core Plasma Model,”<br />
Journal of Geophysical Research, Vol. 105, No. A8, 2000, pp. 18,819-<br />
18,833.<br />
Brett J. Streetman is currently a Senior Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong> working primarily<br />
in space systems guidance, navigation, and control (GN&C). At <strong>Draper</strong>, he has worked on the Talaris Hopper, a<br />
joint MIT and <strong>Draper</strong> lunar and planetary hopping rover GN&C testbed, performed control system analysis for<br />
the International Space Station, and worked on the GN&C system for the guided airdrop platform. Dr. Streetman<br />
received a B.S. in Aerospace Engineering from Virginia Tech and M.S. and Ph.D. degrees in Aerospace Engineering<br />
from Cornell University.<br />
Mason A. Peck is an Associate Professor in Mechanical and Aerospace Engineering at Cornell University. His<br />
research focuses on spaceflight dynamics, specifically, the discovery of new behaviors that can be exploited for<br />
mission robustness, advanced propulsion, and low-risk GN&C design. He holds 17 U.S. and European patents in<br />
space technology. Dr. Peck earned B.S. and B.A. degrees from the University of Texas at Austin, an M.A. from the<br />
University of Chicago, and Ph.D. and M.S. degrees from UCLA.<br />
<strong>The</strong> U.S. military’s unmanned aircraft systems are constantly<br />
gathering an enormous amount of video imagery, but much of it is<br />
not useful to tactical forces due to a shortage of analysts who are<br />
needed to process the information.<br />
This paper examines four automated methods that address<br />
the military’s requirements for turning full motion video into a<br />
functional tool for a wide variety of tactical users.<br />
<strong>The</strong> authors have demonstrated the feasibility of these methods,<br />
and could complete the development and testing needed for<br />
operational use within 3 years if funding is made available.<br />
Tactical Geospatial Intelligence from Full<br />
Motion Video<br />
Richard W. Madison and Yuetian Xu<br />
Copyright © by IEEE. Presented at Applied Imagery Pattern Recognition 2010: From Sensors to Sense (AIPR 2010), Washington D.C.,<br />
October 13–15, 2010<br />
Abstract<br />
<strong>The</strong> current proliferation of Unmanned Aircraft Systems provides an increasing amount of full-motion video (FMV) that, among other<br />
things, encodes geospatial intelligence. But the FMV is rarely converted into useful products, thus the intel potential is wasted. We have<br />
developed four concept demonstrations of methods to convert FMV into more immediately useful products, including more accurate<br />
coordinates for objects of interest; timely, georegistered, orthorectified imagery; conversion of mouse clicks to object coordinates;<br />
and first-person-perspective visualization of graphical control measures. We believe these concepts can convey valuable geospatial<br />
intelligence to the tactical user.<br />
Introduction<br />
Geospatial intelligence, which includes maps, coordinates, and<br />
other information derived from imagery [1], can address many of<br />
the intelligence needs of a tactical user [2], [3]. A potentially rich<br />
source of imagery to inform this geospatial intelligence is the Full<br />
Motion Video (FMV) from the U.S. military’s thousands [4], [5] of<br />
fielded Unmanned Air Systems (UASs). Current programs promise<br />
to dramatically increase the number of FMV feeds in the near future<br />
[6], [7]. However, there are too few analysts to process that flood<br />
of FMV [8], thus much of it goes unused. At the tactical echelons,<br />
raw FMV simply is not used to generate geospatial intelligence [9].<br />
We have developed four concept demonstrations to show how<br />
FMV could be shaped into potentially useful forms of geospatial<br />
intelligence. This paper describes the four demonstrations in more<br />
detail.<br />
In the first demonstration (“Object-of-Interest Geolocation” section),<br />
we used the contents of Predator FMV to improve the accuracy of<br />
telemetered locations of objects by an order of magnitude, averaged<br />
over 4000 image frames. Our contributions include one-time user-assisted<br />
frame alignment, telemetry extraction, altitude telemetry<br />
correction, target tracking, and roll estimation.<br />
In the second demonstration (“Orthorectified Imagery” section),<br />
we combined image stitching with extracted telemetry to generate<br />
orthorectified and georegistered imagery of an area overflown by a<br />
Predator. This imagery could be produced in short order by brigade-level<br />
air forces to allow ground assets to navigate an area that has no<br />
existing up-to-date maps or imagery.<br />
In the third demonstration (“Metric Video” section), we used<br />
transforms between FMV frames and orthorectified imagery to<br />
recover the ground coordinates of objects clicked on in the FMV.<br />
This involved automatically detecting, monitoring, and updating<br />
the coordinates of moving objects.<br />
Tactical Geospatial Intelligence from Full Motion Video<br />
In the fourth demonstration (“Video Markup” section), we used the<br />
same transforms to project graphical control measures drawn over<br />
the orthorectified imagery back into the FMV, allowing a user to<br />
see how objects in the video move relative to the control measures,<br />
facilitating rehearsal and/or after-action review.<br />
Object-of-Interest Geolocation<br />
One particularly useful form of geospatial intelligence is the<br />
coordinate of an object seen in FMV. This could be used, for instance,<br />
to call in fire, dispatch forces, cue a sensor, or retrieve location-relevant<br />
video from an archive. Previous object geolocation work<br />
at Draper Laboratory [10] focused on sensor-disadvantaged,<br />
small UASs triangulating object coordinates from multiple looks.<br />
Larger vehicles, such as Predators, could combine accurate Global<br />
Positioning System (GPS)-Inertial Navigation System (INS), laser<br />
ranging, onboard Digital Terrain Elevation Data (DTED), etc., to<br />
identify target coordinates from a single look. Coordinates of the<br />
ground “target” at the center of the Predator’s camera reticule can<br />
be calculated and overlaid in real time on the camera feed. However,<br />
are those coordinates sufficiently accurate that one could call in fire<br />
on the correct target or extract archive video of just the target and<br />
not the whole neighborhood? We assert that the content of the FMV<br />
could be used to improve the accuracy.<br />
In the first concept demonstration, we showed how the accuracy of<br />
Predator target telemetry can be improved by an order of magnitude<br />
with a little operator intervention and image processing. Table 1<br />
shows the relative magnitude of error in object geolocation that we<br />
observed with raw and improved Predator target telemetry.<br />
We began our pursuit of better geolocation with a simple experiment<br />
to evaluate the accuracy of Predator’s target telemetry. We obtained<br />
unclassified video from a Predator camera following a truck as it<br />
winds through a small town. A sample frame is shown in Figure 1.<br />
The targeting telemetry is easily good enough to call up a Google<br />
Table 1. Relative Error in Target Lat, Lon at Stages of Improvement<br />
Condition | Observed Error | Improvement<br />
Target lat, lon from CC stream | 100% | 1×<br />
Target lat, lon from image overlay | 115% | 0.87×<br />
Plus GUI-based altitude correction | 22% | 4.6×<br />
Plus image processing | 7% | 13.4×<br />
Figure 1. Example of a frame of the video sequence as a truck drives<br />
through the town.<br />
Earth [11] map of the town. Watching the video, we observed the<br />
path taken by the truck and measured the coordinates of that path<br />
on the Google Earth map. This formed the “ground truth.” At each<br />
of approximately 4000 frames of video, we compared the ground<br />
location of the truck (per the telemetry overlaid on the video)<br />
against the nearest point on the truck’s ground truth trajectory.<br />
We declare the mean distance to be the Predator’s target telemetry<br />
error. Similarly, we compared the ground locations reported in<br />
the video’s closed caption telemetry stream against the ground<br />
truth over the same interval. The closed caption stream provided<br />
our “baseline” error. The mean error based on the video overlay<br />
was approximately 115% of baseline error, reflecting some errors<br />
in the automatic optical character recognition we used to extract<br />
telemetry from the screen.<br />
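The error metric described above (the mean distance from each telemetered fix to the nearest point on the piecewise-linear ground-truth path) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names are ours, and coordinates are assumed to be in a flat local metric frame.<br />

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to line segment a-b (all 2D tuples, metres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(px - ax, py - ay)
    # Parameter of the projection of p onto the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def mean_track_error(telemetered, truth_path):
    """Mean distance from each telemetered fix to the nearest point
    on the piecewise-linear ground-truth trajectory."""
    errors = []
    for p in telemetered:
        d = min(point_segment_dist(p, truth_path[i], truth_path[i + 1])
                for i in range(len(truth_path) - 1))
        errors.append(d)
    return sum(errors) / len(errors)
```

The same function can score both the overlay-derived and the CC-derived tracks against the Google Earth ground truth.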
Figure 2 shows the telemetered (and processed) paths of the<br />
observed truck overlaid on the Google Earth map. At first glance, the<br />
locations given by the raw telemetry may seem shifted left relative<br />
to the map. However, the shift actually varies with camera azimuth.<br />
The error is best explained by an offset in the camera’s estimated<br />
height above target, such as might arise from inaccuracy of the<br />
GPS altitude [12], [13], barometric altimeter, or DTED. To find the<br />
ground location of a target, as shown in Figure 3, one begins at the<br />
camera location and extends the camera’s line-of-sight (LOS) until<br />
its height matches the camera’s estimated height above target. If the<br />
vehicle operates at any reasonable standoff, the LOS has only slight<br />
depression, and a few meters of error in the UAS’ altitude estimate<br />
corresponds to many meters of lateral error in the target coordinate.<br />
The telemetry contains an accurate camera azimuth, so if we know<br />
the altitude error and if the terrain is flat near the target, we can<br />
calculate the lateral error in the target coordinate.<br />
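The Figure 3 geometry can be worked as a short numerical example. This is our own sketch, assuming flat terrain and treating the depression angle as known; the function names are illustrative.<br />

```python
import math

def lateral_error_from_altitude_error(alt_err_m, depression_deg):
    """Lateral target-coordinate error caused by an error in the camera's
    estimated height above target, assuming flat terrain. Since
    ground_range = height / tan(depression), a height error of dh shifts
    the LOS/ground intersection laterally by dh / tan(depression)."""
    return alt_err_m / math.tan(math.radians(depression_deg))

def altitude_error_from_lateral_shift(shift_m, depression_deg):
    """Converse relation: the altitude offset implied by an observed
    lateral shift of the projected image along the camera azimuth."""
    return shift_m * math.tan(math.radians(depression_deg))
```

For example, at a 5-deg depression angle, a 10-m altitude error produces roughly 114 m of lateral error, which is why long-standoff geometries amplify altitude inaccuracy so strongly.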
Figure 2. Path followed by a truck observed in Predator video,<br />
according to, in order of increasing accuracy, telemetry overlay<br />
(yellow), closed-caption telemetry (brown), altitude-corrected<br />
telemetry (pink), and fully corrected telemetry (orange).<br />
Figure 3. Lateral distance from aircraft to target is a function of<br />
camera depression angle and altitude above target. For slight<br />
depression angles, a small inaccuracy in altitude produces a much<br />
larger error in lateral distance.<br />
Conversely, we can calculate altitude error given lateral offset of the<br />
target. We implemented a Matlab script that projects a single frame<br />
of Predator video onto the ground plane based on the telemetry<br />
overlaid on that image. It saves the projection as an image and a<br />
KML file. A user imports the projection into Google Earth, shifts it<br />
to align visually with the map, and saves it. In theory, this could be<br />
done automatically, but the differences in camera modality (electro-optical<br />
(EO) vs. infrared (IR)), perspective (low oblique vs. nadir pointing), and<br />
capture time (potentially many seasonal and illumination changes)<br />
make the task difficult to automate, yet comparatively easy for a<br />
human. A second script uses the observed shift and the Predator<br />
camera pointing angles (given in the telemetry) to calculate the<br />
corresponding altitude offset. This offset is subsequently applied<br />
to all telemetry to calculate revised target coordinates, improving<br />
mean accuracy by about a factor of 4.6. The resulting path is shown<br />
in pink in Figure 2.<br />
We can do even better with some image processing and filtering.<br />
First, we track the 2D location of the target in the video. Predator<br />
target telemetry is based on camera pointing, so when the operator<br />
does not hold the camera on target, the telemetry is inaccurate.<br />
From information such as the 2D location of the target in each<br />
image, the camera pointing angles, field of view, and the corrected<br />
altitude yielded by the process described in the paragraph above,<br />
we can calculate the ground location of the target wherever it moves<br />
in the image. Figure 2 shows the impact where the orange and pink<br />
lines diverge at the center and bottom left of the figure.<br />
The calculation requires an estimate of camera roll. This does not<br />
explicitly appear in either the telemetry overlay or the closed-caption<br />
(CC) telemetry stream. However, it can be inferred from the<br />
vehicle orientation and camera azimuth and elevation recorded in<br />
the CC telemetry stream. Those values are not synchronized with<br />
each other or the video, but we can roughly synchronize the CC and<br />
video streams by finding the time offset that best aligns the contents<br />
of the target location and camera location fields, which appear in<br />
both streams.<br />
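The rough synchronization could be implemented as a brute-force search over integer time offsets, scoring each offset by how well a field shared by both streams (e.g., target latitude) agrees. This is our own sketch with hypothetical names, not the authors' implementation.<br />

```python
def best_time_offset(video_series, cc_series, max_offset):
    """Find the integer time offset (in samples) that best aligns two
    noisy copies of the same telemetry field (e.g., target latitude),
    one extracted from the video overlay and one from the CC stream.
    Scores each candidate offset by mean absolute disagreement."""
    best, best_cost = 0, float("inf")
    for off in range(-max_offset, max_offset + 1):
        pairs = [(video_series[i], cc_series[i + off])
                 for i in range(len(video_series))
                 if 0 <= i + off < len(cc_series)]
        if not pairs:
            continue
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best, best_cost = off, cost
    return best
```

In practice one would align the target-location and camera-location fields jointly, but the principle is the same.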
Next, we filter the telemetry extracted by optical character<br />
recognition to eliminate sharp jumps in target location. Such a jump<br />
occurs in the center of Figure 2, where the trajectory jumps to the<br />
left briefly as the truck turns a corner. Here, the ground falls away<br />
into a ravine to the left; the LOS from the camera, far to the right,<br />
roughly parallels the ground slope and thus intersects the terrain far to the left.<br />
Our altitude correction cannot overcome this much error. However,<br />
filtering drastically reduces this overshoot. After image processing<br />
and filtering, the trajectory shown in Figure 2 by the orange line<br />
represents a 13.4× improvement in mean error in the location of<br />
the tracked truck compared with the raw Predator target telemetry.<br />
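The paper does not specify the filter used, but a sliding median is one simple way to reject short-lived jumps such as the ravine-induced overshoot; the sketch below is only illustrative and would be applied separately to latitude and longitude.<br />

```python
def median_filter_track(track, window=5):
    """Sliding-median filter over a sequence of scalar coordinates.
    Sharp, short-lived excursions (shorter than half the window) are
    replaced by the local median, while smooth motion passes through."""
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        win = sorted(track[lo:hi])
        out.append(win[len(win) // 2])
    return out
```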
This demonstration shows good accuracy improvement for a single<br />
image sequence. It is limited by its assumption of locally level terrain,<br />
but this limitation could be removed by incorporating a DTED into<br />
the correction process. Observed performance improvement will<br />
vary to the extent that the Google Earth map coordinates deviate<br />
from truth and as the altitude error varies over time and across<br />
videos.<br />
Orthorectified Imagery<br />
Another potentially useful form of geospatial intelligence is timely,<br />
orthorectified imagery. A tactical end user, for instance at the<br />
company or platoon level, embarking on a mission may appreciate<br />
intelligence about his area of operations, such as the fact that a<br />
particular bridge is out, a river is full, trees block intervisibility in<br />
certain key places at this time of year, and he will be observed by the<br />
shepherds whose herd of goats is grazing along his intended route.<br />
A recent survey shows that he will often be given maps 15 years<br />
out of date [9]. Nor does he likely have real-time access to satellites<br />
or photogrammetrists to generate up-to-date, georegistered,<br />
orthorectified imagery of his small area of operations. However, he<br />
may warrant time from brigade-level air assets. Perhaps a Predator<br />
can provide FMV, which can be used to generate orthorectified,<br />
map-registered, up-to-date imagery to support his mission.<br />
To test this theory, we orthorectified imagery from the Predator<br />
video used in the previous demonstration. We used the same<br />
graphical user interface (GUI) to correct the telemetered altitude,<br />
latitude, and longitude of the ground at the center of the image. These<br />
and the telemetered camera pointing angles define a transform<br />
between image coordinates and ground coordinates, assuming flat<br />
ground. We selected key frames (every 50th frame) from the video<br />
and extracted interest points based on SIFT descriptors [14]. We<br />
used the algorithm of [15] to automatically detect reliable sets of tie<br />
points matched across images. We retained tie points that appeared<br />
in at least four images and projected them to the ground using their<br />
images’ image-to-ground transforms. The transforms are predicated<br />
on flat terrain and perfect telemetry, neither of which was available,<br />
so the projections of matching tie points are imperfect and form<br />
clusters in the neighborhood of their correct location.<br />
Next, we used an Expectation Maximization (EM) approach to<br />
determine a single coordinate for each cluster of matched tie points.<br />
The approach consists of a loop wherein each iteration (1) finds<br />
the center (mean location) of each cluster and (2) modifies each<br />
image’s image-to-ground transform to better align the tie points to<br />
the corresponding cluster centers. This loop continues until the<br />
calculated changes in cluster centers and transforms are all small.<br />
In both phases, weighting favors tie points that appear in more<br />
frames (better for enforcing consistency), are better aligned (less<br />
likely to be outliers), or come from images with few tie points (to<br />
avoid ignoring these images). The weights gradually evolve to favor<br />
tie points in images with better telemetry. The approach is inspired<br />
by Google PageRank [16], which solves the analogous problem of<br />
evolving weights to favor more authoritative web pages connected<br />
by hyperlinks.<br />
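A much-simplified, unweighted version of the translation-only pass might look like the following. This is our sketch of the idea, not the authors' code: the weighting scheme and the later full-homography pass are omitted, and the data layout is hypothetical.<br />

```python
def em_align_translations(projections, n_iters=20):
    """Simplified EM-style alignment. `projections[img][cluster]` is the
    (x, y) ground projection of a tie point from image `img`. Each
    iteration (1) recomputes cluster centers as the mean of their member
    points under the current shifts, then (2) re-estimates each image's
    translation as the mean offset from its raw points to those centers."""
    shifts = {img: (0.0, 0.0) for img in projections}
    for _ in range(n_iters):
        # E-step: cluster centers from the currently shifted points
        sums = {}
        for img, pts in projections.items():
            sx, sy = shifts[img]
            for c, (x, y) in pts.items():
                acc = sums.setdefault(c, [0.0, 0.0, 0])
                acc[0] += x + sx; acc[1] += y + sy; acc[2] += 1
        centers = {c: (a[0] / a[2], a[1] / a[2]) for c, a in sums.items()}
        # M-step: per-image translation toward the cluster centers
        for img, pts in projections.items():
            dx = dy = 0.0
            for c, (x, y) in pts.items():
                dx += centers[c][0] - x
                dy += centers[c][1] - y
            shifts[img] = (dx / len(pts), dy / len(pts))
    return shifts
```

With two images whose projections disagree by a constant offset, the loop converges to equal and opposite shifts that close the gap.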
The EM loop runs twice, the first time adjusting image-to-ground<br />
transforms by only translation along the ground and the second<br />
time finding a full homography for each image to best align tie<br />
points. The second pass solves for more parameters and would be<br />
poorly conditioned if not for the elimination of bulk translation<br />
from the first pass.<br />
The EM approach was chosen because it aligns images yet respects<br />
the global shape of the image set provided by the telemetry. This<br />
compares favorably with three other potential approaches. Simply<br />
projecting images based on their telemetry would provide an<br />
image set with good global accuracy, but the individual images<br />
would not align, so it would be unclear how to combine them into<br />
a single mosaic. Conversely, a pure image-stitching algorithm would<br />
align the image content, including error in the image content. This<br />
error would compound as a map grew around the single frame<br />
that was manually aligned in the GUI, such that the geometry of<br />
image content would rapidly deviate from reality with distance<br />
from that single image. An obvious compromise is a weighted least<br />
squares solution that attempts to find the set of image transforms<br />
that minimizes both the size of tie point clusters and the distance<br />
from the telemetered transform. But it is unclear how to define a<br />
meaningful distance in image transform space or how to weight<br />
error there relative to pixel distance. The chosen algorithm avoids<br />
these problems by minimizing error purely in image space.<br />
After the EM process converges to a set of consistent image-to-ground<br />
transforms, we use the transforms to combine the images. We<br />
normalize the intensity of the images so that frames throughout the<br />
sequence have comparable mean intensity. We project each image<br />
into ground coordinates and resample into a common grid of pixels.<br />
For each pixel in the grid, we identify the set of projected images<br />
that overlap at that pixel, extract the intensities at that pixel, and<br />
record the median intensity. Thanks to the earlier intensity scaling,<br />
the set of intensities at a pixel tends to be a tight cluster except for<br />
outliers caused by transient objects moving through the pixel over<br />
the course of a few frames. Thus, median filtering excludes most<br />
moving objects from the mosaic. Slowly or infrequently moving<br />
objects, e.g., cows, may remain in the mosaic, especially in areas<br />
built from a small number of images.<br />
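The per-pixel median compositing step can be sketched directly. This is illustrative only; here `stacks` is a hypothetical mapping from each output-grid pixel to the list of intensities of the projected images covering it.<br />

```python
def median_composite(stacks):
    """Per-pixel median over the projected images that cover a pixel.
    `stacks` maps pixel -> list of (intensity-normalized) samples, one
    per overlapping image. Because transient movers appear in only a
    few frames at a pixel, the median rejects them as outliers."""
    mosaic = {}
    for px, vals in stacks.items():
        v = sorted(vals)
        n = len(v)
        mosaic[px] = v[n // 2] if n % 2 else 0.5 * (v[n // 2 - 1] + v[n // 2])
    return mosaic
```

For a pixel briefly crossed by a vehicle, the outlier sample is simply discarded by the median.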
Figure 4 shows the resulting imagery overlaid on an actual map.<br />
It has several valuable traits. First, it shows physical objects<br />
that ground troops can compare visually to objects seen in the<br />
environment. A building on the image looks like a building that is<br />
truly in the scene. This compares favorably to a contour map, which<br />
probably does not depict the building, or a year-old image where<br />
the building may have a different shape. Second, because the data<br />
come from oblique-view video, the buildings are rendered from<br />
oblique view, which may be easier for a human to parse than an<br />
overhead view. Third, objects seen in video can be located precisely<br />
on this imagery. This may be more intuitive than GPS coordinates<br />
as a way to convey a precise location to a soldier. Fourth, the new<br />
imagery overlays existing imagery and is georegistered moderately<br />
well based on corrected telemetry. The existing imagery provides<br />
context and archival data where current data are not available.<br />
The image stitching implementation we used presumes locally<br />
flat terrain and requires overlap between images. Our imagery<br />
violated both conditions, so the orthorectified image is distorted<br />
in several places. We have begun work to reduce or eliminate these<br />
dependencies to produce less distorted imagery. However, this<br />
may be unnecessary, because even if orthorectification is distorted<br />
and/or georegistration is inaccurate, and even if GPS is denied,<br />
a coordinate-seeking infantry platoon can navigate directly to a<br />
target marked on the up-to-date and easy-to-understand imagery.<br />
Metric Video<br />
Yet another potentially useful type of geospatial intelligence is metric<br />
video. Metric video allows one to obtain an object’s coordinates<br />
by simply clicking on it. This capability is being developed [17] in<br />
hardware and will be available to some users of some air vehicles. For<br />
everyone else, could software and regular video from an arbitrary<br />
UAS act as a poor man’s metric sensor? Such a solution could also<br />
avoid costs of retrofitting existing systems.<br />
We used the same video and the mosaic generated in the previous<br />
concept demonstration. Each video key frame used to make the<br />
mosaic already has a coordinate transform that warps it into mosaic<br />
Figure 4. Imagery orthorectified from Predator video (gray) overlaid<br />
on an existing map (color) [10]. Pseudo-oblique view and up-to-date<br />
contents may make such maps valuable even with imperfect<br />
orthorectification and georegistration.<br />
coordinates, which are roughly aligned with the GPS coordinates from<br />
the corrected telemetry. If the user clicks in one of these key frames,<br />
the transform converts click coordinates into GPS coordinates. For<br />
images between key frames, we reuse components of the mosaicing<br />
algorithm to locate tie points in the image and the nearest key<br />
frame automatically, and fit a camera rotation and translation that<br />
best explains the motion of the tie points. This projects the clicked<br />
coordinates from an arbitrary image to a key frame, whence they can<br />
be converted to GPS coordinates.<br />
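The chaining of transforms from a pixel click to ground coordinates can be illustrated with composable 2D maps. This is a sketch under our own simplifying assumptions: affine maps stand in for the fitted frame-to-key-frame motion and the key frame's mosaic homography, and the names are hypothetical.<br />

```python
def make_affine(a, b, tx, c, d, ty):
    """Build a 2D affine map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)."""
    return lambda p: (a * p[0] + b * p[1] + tx, c * p[0] + d * p[1] + ty)

def click_to_gps(click_xy, frame_to_keyframe, keyframe_to_mosaic,
                 mosaic_to_gps):
    """Chain the fitted frame-to-key-frame motion with the key frame's
    mosaic transform and the mosaic's georegistration to turn a pixel
    click in an arbitrary frame into ground coordinates."""
    p = frame_to_keyframe(click_xy)
    p = keyframe_to_mosaic(p)
    return mosaic_to_gps(p)
```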
In addition to detecting the coordinate of the clicked point, we<br />
report whether the click represents a moving or stationary object.<br />
We project the clicked image onto the mosaic and compare intensity<br />
in the area of the click. If the mosaic and projected image have<br />
similar intensity, then the point probably represents a stationary<br />
object. If the two have differing intensity, the likely cause is that the<br />
mosaic shows the typical intensity at that location and the clicked<br />
image shows a transient object. We identify the area covered by<br />
the transient object (the area where intensity does not match the<br />
mosaic), and report the click as a moving object.<br />
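The moving/stationary test reduces to an intensity comparison around the click. The minimal sketch below is our 1D simplification of the 2D region comparison described above; the threshold and names are illustrative.<br />

```python
def classify_click(frame_patch, mosaic_patch, threshold=20):
    """Compare intensities around a click in the projected frame against
    the mosaic. A large mismatch means the frame shows a transient
    (moving) object over the mosaic's 'typical' background. Returns the
    classification and the indices of mismatching pixels, which
    approximate the transient object's extent."""
    mismatch = [i for i, (f, m) in enumerate(zip(frame_patch, mosaic_patch))
                if abs(f - m) > threshold]
    label = "moving" if mismatch else "stationary"
    return label, mismatch
```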
Figure 5 shows an example frame from a metric video. As the video<br />
plays, clicking on the video causes a square to appear around the<br />
click location. GPS coordinates of the clicked location appear<br />
above the square. Green squares represent stationary objects. Their<br />
coordinates are fixed, and their boxes are back-projected onto each<br />
new video frame using the frame-to-mosaic transform. Red squares<br />
represent moving objects. They are sized automatically to match<br />
the extent of the moving object. They are visually tracked through<br />
consecutive frames so that the red box remains on the object, not its<br />
original location. Their changing ground coordinates are determined<br />
from their tracked 2D coordinates using each new frame’s frame-to-mosaic<br />
transform. The mosaic, frame-to-mosaic transforms, and<br />
the locations of moving objects are all determined in preprocessing<br />
so that the application runs for the user at full video speed. This is<br />
suitable for operating on archived video where the user does not see<br />
the preprocessing time. Additional work would be required to operate<br />
on real-time, streaming video.
Figure 5. Poor man’s metric video. Clicking on an object generates a<br />
box that locks onto the object through later video frames, indicates<br />
whether the object is stationary (green) or moving (red), and<br />
reports its GPS coordinates.<br />
Video Markup<br />
A final useful type of geospatial intelligence reverses the concept<br />
of the metric video. A user marks up an image mosaic, and the<br />
markings are projected into the video. This could provide a useful,<br />
nonoverhead perspective of a battlefield with control graphics for<br />
mission rehearsal or after-action review. Or, if air assets were available<br />
during a battle and the software were dramatically accelerated, it<br />
could potentially provide real-time observation of how units move<br />
relative to intended controls, as an audience currently expects for<br />
televised sports [18].<br />
Figure 6 shows an example. Battlefield control graphics [19] are<br />
drawn over the mosaic from the previous demonstrations using a<br />
paint program that supports layers. The markup layer is saved as an<br />
image with the same coordinate system as the original mosaic. When<br />
displaying each video frame, the frame’s frame-to-mosaic transform<br />
is used to project the markup layer into the frame’s coordinates, and<br />
the markings are overlaid on the image. The result is a perspective<br />
on the control graphics that may be more intuitive to a human.<br />
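The markup projection amounts to a per-pixel lookup through the frame-to-mosaic transform. The sketch below is our own, representing images and layers as sparse dicts for brevity; a real implementation would warp raster layers instead.<br />

```python
def overlay_markup(frame, markup, frame_to_mosaic):
    """For each frame pixel, look up the corresponding mosaic pixel via
    the frame-to-mosaic transform; where the markup layer is nonzero
    there, paint the marking over the frame pixel. `frame` and `markup`
    are dicts mapping (x, y) -> intensity."""
    out = {}
    for (x, y), value in frame.items():
        mx, my = frame_to_mosaic((x, y))
        mark = markup.get((round(mx), round(my)), 0)
        out[(x, y)] = mark if mark else value
    return out
```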
Conclusion<br />
Tactical echelons require geospatial intelligence, such as maps and<br />
coordinates, which could be derived from the FMV of the UASs that<br />
are now ubiquitous on the battlefield. They simply need tools to<br />
convert the video into a more immediately useful format. We have<br />
shown four possible tools in concept demonstrations that convert<br />
FMV into, respectively: accurate coordinates of objects-of-interest;<br />
intuitive, timely, orthorectified, and georegistered imagery of an<br />
area of interest; object coordinates extractable by clicking directly<br />
on video; and battlefield control graphics projected into the video.<br />
We believe these applications can provide valuable geospatial<br />
intelligence to the tactical user.<br />
Figure 6. Video markup. Control graphics are drawn over orthorectified<br />
imagery using a paint program, then projected into video for<br />
comparison against activities shown by the video.<br />
Acknowledgment<br />
This work was funded under <strong>Draper</strong> <strong>Laboratory</strong>’s Internal Research<br />
and Development Program.<br />
References<br />
[1] 10 U.S.C. S 467: U.S. Code-Section 467: Definitions.<br />
[2] “AGC Brochure,” http://www.agc.army.mil/publications/AGCbrochure.pdf,<br />
July 28, 2010.<br />
[3] Field Manual 34-130, Headquarters, Department of the Army,<br />
July 8, 1994.<br />
[4] “Too Much Information: Taming the UAV Data Explosion,”<br />
http://www.defenseindustrydaily.com/uav-data-volume-solutions-06348,<br />
March 16, 2010.<br />
[5] Drew, C., “Drones Are Weapons of Choice in Fighting Al Qaeda,”<br />
The New York Times, March 16, 2009.<br />
[6] “ARGUS-IS,” http://www.darpa.mil/ipto/programs/argus/argus.<br />
asp, July 28, 2010.<br />
[7] “Gorgon Stare Update,” Air Force Magazine, Vol. 98, No. 5, May 2010.<br />
[8] Baldor, L., “Air Force Develops New Sensor to Gather War Intel,” The<br />
Seattle Times, July 6, 2009.<br />
[9] Richards, J.E., Integrating the Army Geospatial Enterprise:<br />
Synchronizing Geospatial-Intelligence to the Dismounted Soldier,<br />
Master of Science in Engineering and Management Thesis, System<br />
Design and Management Program, Massachusetts Institute of<br />
Technology, June 2010.<br />
[10] Madison, R., P. DeBitetto, A.R. Olean, M. Peebles, “Target Geolocation<br />
from a Small Unmanned Aircraft System,” IEEE Aerospace<br />
Conference, 2008.<br />
[11] Google, “Google Earth,” http://earth.google.com, July 22, 2010.<br />
[12] Control Vision Corp., “GPS Altimetry,” http://docs.controlvision.com/pages/<br />
gps altimetry.php, 2004.<br />
[13] Mehaffey, J., “GPS Altitude Readout > How Accurate?” http://<br />
gpsinformation.net/main/altitude.htm, February 10, 2001.<br />
[14] Lowe, D.G., “Distinctive Image Features from Scale-Invariant<br />
Keypoints,” Int. J. Comput. Vision, Vol. 60, No. 2, 2004, pp. 91-110.<br />
[15] Xu, Y. and R. Madison, “Robust Object Recognition Using a Cascade<br />
of Geometric Consistency Filters,” Proc. Applied Imagery and<br />
Pattern Recognition, 2009.<br />
[16] Page, L., S. Brin, R. Motwani, T. Winograd, The PageRank Citation<br />
Ranking: Bringing Order to the Web, Stanford InfoLab Technical<br />
Report.<br />
[17] DARPA, “Standoff Precision ID in 3D (SPI-3D),” http://www.darpa.<br />
mil/ipto/programs/spi3d/spi3d.asp, 2010.<br />
[18] Sportvision, Inc., “SportVision,” http://www.sportvision.com/, 2008.<br />
[19] U.S. Department of Defense, “Common Warfighting Symbology,”<br />
MIL-STD-2525C, November 17, 2008.<br />
Richard W. Madison is a Senior Member of Technical Staff in the Perception Systems group at Draper Laboratory.<br />
His work is in vision-aided navigation with forays into related fields, such as tracking, targeting, and augmented<br />
reality. Before joining Draper, he worked on similar projects at the Jet Propulsion Laboratory, Creative Optics, Inc.,<br />
and the Air Force Research Laboratory. Dr. Madison holds a B.S. in Engineering from Harvey Mudd College and M.S.<br />
and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University.<br />
Yuetian Xu is a Member of Technical Staff at Draper Laboratory. His current research interests include computer<br />
vision, robotic navigation, GPU computing, embedded systems (Android), and biomedical imaging. Mr. Xu holds<br />
B.S. and M.S. degrees in Electrical Engineering and Computer Science from MIT.<br />
The rapid development of guidance, navigation, and control (GN&C)<br />
systems for precision pointing and tracking spacecraft requires a set<br />
of tools that leverages common architectural elements and a model-based<br />
design and implementation approach.<br />
The paper presents an approach that can accelerate the speed of<br />
development while reducing the cost of GN&C flight software. It uses a<br />
spacecraft’s pointing and tracking system as an example, and describes<br />
the detailed models of elements such as gyros, reaction wheels, and<br />
telescopes, as well as GN&C algorithms and the direct conversion of<br />
the models into software for software-in-loop and hardware-in-loop<br />
testing.<br />
Model-based design and software development is slowly being adopted<br />
in the aerospace industry, but Draper is more flexible and is able to<br />
adopt these types of time-saving techniques more quickly. Draper is<br />
applying this approach today as a member of the Orbital Sciences-led<br />
team competing for NASA’s Commercial Orbital Transportation<br />
Services (COTS) program, as well as in the development of the ExoplanetSat<br />
planet-finding CubeSat in partnership with MIT.<br />
Model-Based Design and Implementation of Pointing<br />
and Tracking Systems: From Model to Code in One Step<br />
Sungyung Lim, Benjamin F. Lane, Bradley A. Moran, Timothy C. Henderson, and Frank A. Geisel<br />
Copyright © 2010 American Astronautical Society (AAS), Presented at the 33rd AAS Guidance and Control Conference,<br />
Breckenridge, CO, February 6 - 10, 2010<br />
Abstract<br />
This paper presents an integrated model-based design and implementation approach of pointing and tracking systems that can shorten<br />
the design cycle and reduce the development cost of guidance, navigation, and control (GNC) flight software. It provides detailed models of<br />
critical pointing and tracking system elements such as gyros, reaction wheels, and telescopes, as well as essential pointing and tracking GNC<br />
algorithms. This paper describes the process of developing models and algorithms followed by direct conversion of the models into software<br />
for software-in-the-loop (SWIL) and hardware-in-the-loop (HWIL) tests. A representative pointing system is studied to provide valuable<br />
insights into the model-based GNC design.<br />
Introduction<br />
Pointing and tracking (P&T) systems are very important<br />
elements of surveillance, strategic defense applications, optical<br />
communications, and science observations, including both<br />
astronomical and terrestrial targets. A P&T system is generally<br />
required to provide both agility (the ability to rapidly change<br />
the pointing line-of-sight (LOS) vector over large angles) and jitter<br />
suppression; the design challenge comes from trying to achieve<br />
both in a cost-effective manner.<br />
The Operationally Responsive Space (ORS) field seeks to develop<br />
and deploy a constellation of small and low-cost yet customized<br />
P&T systems in a short period of time [1]. In this situation, certain<br />
traditional practices in engineering, development, and operation<br />
may become stumbling blocks. Reusability of heritage engineering<br />
tools and flight software is seemingly attractive but often conceals a<br />
high price tag. Even relatively straightforward efforts to customize<br />
or adapt heritage flight software are often time-consuming and<br />
costly.<br />
A novel approach to address these challenges is model-based design<br />
and implementation [2]. This approach can be summarized with<br />
three steps: (1) analysis and design of algorithms with simulation<br />
and a user-friendly language such as Matlab/Simulink, (2) automatic<br />
code generation of flight software from algorithms written in the<br />
modeling language, and (3) continuous validation and verification<br />
of flight software against source models and simulation.<br />
Although this approach does not generate all vehicle and GNC flight<br />
software (e.g., mission manager, scheduler, and data management<br />
are generally not included), it can significantly reduce the cycle<br />
of the design and implementation of core GNC algorithms by<br />
streamlining the design and implementation process. In particular,<br />
iterative design processes are simplified, as a single design iteration<br />
only requires modification of the algorithm blocks. Also, critical<br />
implementation issues can be detected early, possibly even in the<br />
algorithm development stage. They encompass accommodation of<br />
processor speed and memory limitations, numerical representation<br />
(floating or fixed point), and incorporation of real-time data<br />
management such as buffering, streaming, and pipelining.<br />
This paper presents a model-based design and implementation<br />
approach for P&T systems. First, potential P&T elements are<br />
surveyed. Then detailed models of key P&T elements and core GNC<br />
algorithms are provided. Models such as reaction wheels, telescopes,<br />
focal plane sensors, and several others are presented, together with<br />
typical parameter ranges. Next, the GNC algorithms for spacecraft<br />
attitude and telescope LOS stabilization are provided. Although<br />
detailed analysis and design techniques are not addressed, critical<br />
rules-of-thumb are provided to guide gain parameter selection.<br />
Within the model-based design approach, the models and algorithms<br />
are developed using Matlab/Simulink blocks and Embedded Matlab<br />
Language (EML) so that they may be autocoded directly to flight<br />
model and software [3]. They are partitioned into flight model<br />
and flight software groups, respectively. This partition simplifies<br />
implementation of SWIL and HWIL tests. Each model or algorithm<br />
is connected to others using “rate transition blocks” [4]. Rate transition<br />
blocks bring two advantages: each block can be executed at a designated<br />
rate in simulation, and the autocoded software is grouped by rate.<br />
Integrating the autocoded flight software into the main flight software is<br />
therefore much easier, since only a few rate groups, rather than every<br />
function of the autocoded software, need to be identified. At the<br />
end of the paper, a GNC design example for a representative P&T<br />
system is provided with some interesting plots and discussions.<br />
Pointing and Tracking Systems<br />
A P&T system can be roughly grouped into spacecraft elements and<br />
payload elements. The spacecraft elements may include actuators<br />
(reaction wheels, control momentum gyros, magnetic torque rods,<br />
etc.), sensors (star tracker, gyro, fine guidance sensor, etc.), and<br />
GNC flight software. Since they have been standardized with unique<br />
roles, spacecraft elements are omitted in the discussion of this<br />
section, but will be addressed in great detail in subsequent sections.<br />
Payload elements may consist of a variety of components,<br />
including imaging systems, focal plane sensors, steering mirrors,<br />
and references. Figure 1 illustrates potential candidates for each<br />
functional group. Since some elements have unique roles and<br />
others have partially overlapping roles, they need to be downselected<br />
carefully in order to derive a specific P&T architecture. The<br />
functional groups and their associated options are briefly discussed<br />
as follows:<br />
• Mount mechanism: strapdown, (active or passive) vibration-isolated<br />
optical bench, or gimbaled optical bench.<br />
A strapdown system is simplest; the payload is mounted rigidly<br />
to the spacecraft. A vibration-isolation optical bench eliminates<br />
vibration coupling between the spacecraft and the payload;<br />
such isolation can be active or passive. A gimbal mechanism<br />
provides stable and/or agile tracking capability at the cost of<br />
additional complexity.<br />
• Pointing reference signal source: payload mission signal,<br />
inertially-stable reference signal, or an independent<br />
observation signal.<br />
A pointing reference for a tracking system is generally required.<br />
It can come from the mission payload itself (e.g., from a target-tracking<br />
algorithm applied to the instrument focal plane data),<br />
from a separate reference source (e.g., an inertially stabilized<br />
laser [5]), or from a separate tracking sensor.<br />
• Moving elements: A steering mirror may be placed as the first<br />
optical element in the payload (“siderostat”) or later in the<br />
optical beam-train (“fast-steering mirror” or FSM).<br />
A siderostat provides beam agility for moving object tracking<br />
and rapid repointing, but does not generally have the high-bandwidth<br />
motion capability required for vibration rejection. It<br />
is also comparatively massive and costly. An FSM by contrast can<br />
provide sufficient high-bandwidth beam steering for jitter<br />
rejection at modest cost and complexity. However, an FSM is<br />
generally limited by optical constraints to a relatively small<br />
angular operating range.<br />
• Sensing elements: focal plane array (FPA) sensor, wavefront<br />
sensor, inertial sensors (Microelectromechanical System<br />
(MEMS) accelerometer or gyro, high-frequency angular<br />
rate sensor).<br />
An FPA sensor can be used to locate a target or reference source<br />
such as a star. <strong>The</strong> sensor generally provides very high-accuracy<br />
information, and in particular, is not subject to significant low-frequency<br />
drift error. However, due to the often limited update<br />
bandwidth (e.g., guide stars are faint, requiring long exposures),<br />
it is often desirable to augment the FPA sensor with a high-bandwidth<br />
inertial sensor.<br />
• Moving-element actuators: brushless DC motor, stepper motor,<br />
piezo device (PZT).<br />
Motors are generally used to actuate the gimbaled elements,<br />
while PZTs are used for short-stroke high-bandwidth<br />
applications such as FSM steering. A DC motor is typically used<br />
for high-frequency actuation, while a stepper motor is preferred<br />
for low-frequency actuation.<br />
• Moving-element sensors: encoder, gyro, or reference object.<br />
An encoder or gyro directly senses the relative gimbal angle.<br />
Observations of known reference objects can be used to correct<br />
the measurement uncertainty and drift of the encoder and gyro,<br />
enhancing sensing accuracy.<br />
A specific P&T system can be designed to meet a set of system<br />
requirements by choosing among the menu of available options<br />
and constructing models using the available elements. Two<br />
representative examples are depicted in Figures 2 and 3. Figure 2<br />
is an example of a “passive strapdown” P&T system in which all<br />
pointing and tracking capability is provided by the host spacecraft.<br />
In this case, the payload is rigidly mounted to the spacecraft and<br />
has no active elements to control its LOS. A variant of this approach<br />
uses the payload instrument to provide pointing information, e.g.,<br />
by tracking a guide star in the instrument focal plane; such an<br />
approach can eliminate the effect of misalignments between the<br />
instrument and the spacecraft [6].<br />
Figure 3 is an example of an “active strapdown” P&T system. The<br />
P&T capability is shared by both the spacecraft and the payload; the<br />
spacecraft provides large-angle agility and coarse pointing stability,<br />
while an FSM in the payload is used to reject high-frequency and<br />
small-amplitude pointing jitter [7], [8]. In this configuration, it is<br />
important to manage the interaction between the spacecraft and the<br />
payload carefully. This paper will focus mainly on this P&T system.<br />
Note that the passive strapdown P&T system is a simplified version<br />
of the active one, and thus the discussion in this paper applies<br />
directly to that system as well.<br />
Figure 1. Elements of P&T system.<br />
Figure 2. Functional block diagram of strapdown passive P&T system.<br />
Modeling of Pointing and Tracking Systems<br />
The spacecraft of interest in this paper are small spacecraft with<br />
masses in the 5- to 500-kg range. This class encompasses 6U<br />
CubeSats as well as Small Explorer (SMEX) missions [6], [8], [9].<br />
The key characteristics and approximate parameter ranges are<br />
summarized in Table 1. This class of spacecraft typically implements<br />
one of the two P&T system architectures introduced in the previous<br />
section.<br />
Spacecraft Attitude Dynamics<br />
The spacecraft is assumed to be a rigid body with small flexible<br />
appendages such as solar panels and structural modes at high<br />
frequencies. The flexibility is modeled as a series of 2nd-order mass-spring-damper<br />
systems, a simplified version of the Craig-Bampton or<br />
Likins approach [10]. The effective masses and natural frequencies<br />
are estimated by standard NASTRAN analysis, and the damping<br />
ratio is typically assumed to be between 0.1% and 1%. This mass-spring-damper<br />
system can also be used to model a vibration<br />
isolation mechanism or an optical bench.<br />
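The response of one such mode is easy to sketch. The fragment below (an illustration in Python, not the paper's Simulink model) integrates the free response of a single mode with assumed values of 20 Hz natural frequency and 0.5% damping, released from a 1-mm deflection:<br />

```python
import math

def simulate_mode(freq_hz=20.0, zeta=0.005, x0=1e-3, dt=1e-4, t_end=5.0):
    """Free response of one 2nd-order mass-spring-damper appendage mode,
    x'' + 2*zeta*w*x' + w^2*x = 0, integrated with semi-implicit Euler.
    Values (20-Hz mode, 0.5% damping, 1-mm release) are illustrative."""
    w = 2.0 * math.pi * freq_hz
    x, v = x0, 0.0
    for _ in range(int(t_end / dt)):
        a = -2.0 * zeta * w * v - w * w * x  # modal acceleration
        v += a * dt                          # update velocity first,
        x += v * dt                          # then position (symplectic)
    return x
```

With 0.5% damping the envelope decays as exp(-ξωt), so after 5 s the deflection has fallen to a few percent of its initial value.<br />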
Table 1. Spacecraft Parameters.<br />
Parameters Range<br />
Mass (kg) 5 ~ 500<br />
Moment of Inertia (kg-m²) 0.05 ~ 100<br />
Dimension (m²) 0.2 × 0.3 ~ 1.5 × 3.0<br />
Power (W) 30 ~ 350<br />
Pointing Accuracy (arcsec, 3σ) 0.2 ~ 60<br />
Figure 3. Functional block diagram of strapdown active P&T system.<br />
Spacecraft Disturbance<br />
The spacecraft experiences known internal and environmental<br />
disturbances during operation. The internal disturbances<br />
encompass reaction wheel (RW)/control momentum gyro (CMG)<br />
torque noise, cryocooler disturbance, reaction torques of any<br />
moving parts and/or subsystems, and thermal snap during solar<br />
eclipse ingress/egress. For small spacecraft, reaction torques and<br />
any dynamic interaction between spacecraft structure and periodic<br />
internal disturbances are most important. Thermal snap must<br />
be addressed through system design, e.g., use of a body-fixed or stiffened<br />
solar array.<br />
External torques that act on spacecraft stem from gravity<br />
gradients, solar radiation pressure, residual magnetic dipoles, and<br />
aerodynamic drag. Solar torque is often coupled to orbit rate and<br />
must be considered when sizing the momentum capacity of RWs.<br />
Since it varies seasonally, at least four seasonal values need to be<br />
assessed. <strong>The</strong> frequency of gravity gradient variations can vary from<br />
the orbit rate to the slew rate. The secular component of the gravity<br />
gradient is another factor to be considered in assessing the required<br />
momentum capacity of RWs. It is typically calculated during an<br />
inertial pointing where the gravity gradient is maximal (45-deg<br />
rotation along one axis that has a nonminimum moment of inertia).<br />
The importance of aerodynamic drag decreases with increasing<br />
altitude and is often ignored beyond 400 km [11].<br />
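For illustration, the gravity-gradient torque discussed above follows the standard rigid-body expression T = 3n² (r̂ × I r̂). The sketch below (the inertia values and orbit rate are illustrative assumptions, not from the paper) reproduces the 45-deg maximum mentioned above:<br />

```python
import math

def gravity_gradient_torque(inertia_diag, nadir_body, n_orbit):
    """Gravity-gradient torque 3*n^2 * (r_hat x (I*r_hat)) for a
    principal-axis inertia matrix; nadir_body is the unit nadir
    vector expressed in body axes, n_orbit the orbit rate (rad/s)."""
    ix, iy, iz = inertia_diag
    rx, ry, rz = nadir_body
    irx, iry, irz = ix * rx, iy * ry, iz * rz  # I * r_hat
    cx = ry * irz - rz * iry                   # r_hat x (I * r_hat)
    cy = rz * irx - rx * irz
    cz = rx * iry - ry * irx
    k = 3.0 * n_orbit ** 2
    return k * cx, k * cy, k * cz

def pitch_torque(theta_rad, inertia_diag=(6.0, 8.0, 10.0), n_orbit=1.1e-3):
    """x-axis torque after rotating the nadir vector by theta about x
    (illustrative small-spacecraft inertias and LEO orbit rate)."""
    nadir = (0.0, math.sin(theta_rad), math.cos(theta_rad))
    return gravity_gradient_torque(inertia_diag, nadir, n_orbit)[0]
```

The torque about the rotation axis is 3n²(Iz − Iy) sinθ cosθ, which peaks at the 45-deg rotation noted above.<br />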
Reaction Wheels<br />
RWs provide the necessary torque for slews and disturbance compensation<br />
via the exchange of angular momentum with the spacecraft.<br />
Figure 4 shows the implementation of the mathematical model<br />
developed in [12]; Table 2 gives the key parameters with a range of<br />
typical values based on RWs for small spacecraft of interest.
Figure 4. Reaction wheel Simulink block diagram.<br />
Table 2. Reaction Wheel Parameters.<br />
Parameters Range<br />
Inertia (g-m²) 0.01 ~ 30<br />
Max Speed (rpm) 1000 ~ 10000<br />
Max Momentum (mN-m-s) 1 ~ 8000<br />
Max Torque (mN-m) 0.5 ~ 90<br />
Process Delay (ms) 50 ~ 250<br />
Coulomb Friction (mN-m) 0.2 ~ 2<br />
Quantization Error (mN-m) 0.002 ~ 0.005<br />
Random Noise (mN-m, 1σ) 0.03 ~ 0.05<br />
Angular Torque Noise (deg) ~10<br />
Static Imbalance (mg-cm) 2 ~ 500<br />
Dynamic Imbalance (mg-cm²) 20 ~ 5000<br />
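A minimal sketch of the torque path through such a model (command quantization, torque limiting, Coulomb friction, and random torque noise) is shown below. This is a simplification of the Figure 4 diagram, with parameter values picked from the ranges in Table 2 for illustration only:<br />

```python
import math
import random

def rw_torque(cmd, wheel_speed, max_trq=25e-3, quant=5e-6,
              coulomb=1e-3, noise_1sigma=4e-5, rng=random):
    """One sample of delivered reaction-wheel torque (N-m): command
    quantization, saturation at the max torque, Coulomb friction opposing
    the current wheel speed, and Gaussian torque noise. Values are
    illustrative picks from the Table 2 ranges."""
    t = round(cmd / quant) * quant                # command quantization
    t = max(-max_trq, min(max_trq, t))            # max torque limit
    if wheel_speed != 0.0:
        t -= math.copysign(coulomb, wheel_speed)  # Coulomb friction
    t += rng.gauss(0.0, noise_1sigma)             # random torque noise
    return t
```

Process delay and the speed-dependent imbalance forces of Figure 4 are omitted here; they would wrap this single-sample model in a delay line and a wheel-speed integrator.<br />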
The literature [13] primarily focuses on static and dynamic<br />
imbalances in RWs and their influence on spacecraft pointing.<br />
However, for small spacecraft with pointing accuracy down to<br />
arcsec levels, the effects of processor delay, quantization error,<br />
angular torque noise, and RW speed zero-crossing are equal to or<br />
more important than those of static and dynamic imbalance. At<br />
that level of performance, due to unfavorable inertia ratios and<br />
inexpensive components, everything matters.<br />
Processor delay is determined by the GNC computer command<br />
output rate and the rate of the RW motor processor. As pointed<br />
out in the literature [12], angular torque noise is a critical RW<br />
noise source as its frequency is often close to the bandwidth of<br />
the spacecraft GNC controller. However, the effect of this torque<br />
can be mitigated significantly using a high-bandwidth spacecraft<br />
GNC controller.<br />
Control Momentum Gyros<br />
Miniature CMGs have been developed for small spacecraft [14]. One<br />
example produces a torque of 52.25 mN-m, sufficient to generate an<br />
average slew of 3 deg/s. It also consumes 20-70% less power than<br />
RWs of the same weight. However, the static imbalance torque at<br />
the fundamental frequency is larger by an order of magnitude than<br />
that of RWs with similar maximum torque capability. Furthermore,<br />
bearing lifetime is an important issue since CMGs are typically<br />
spinning as fast as 11 krpm. These reasons tend to make RWs the<br />
preferred primary actuators for P&T systems as long as pointing<br />
stability is more important than fast slewing capability.<br />
Simultaneous pointing stability and fast tracking are difficult to<br />
achieve with either RWs or CMGs alone. The use of both actuators may<br />
be required, at the cost of increased mass and complexity. Alternative<br />
solutions use FSMs, gimbals that articulate the<br />
entire payload, or a siderostat to actuate the payload LOS<br />
vector [15], [16].<br />
Star Tracker<br />
The star tracker (ST) is one of the major attitude determination<br />
instruments for spacecraft. Miniaturized STs with low mass and<br />
power consumption are now preferred for small spacecraft.<br />
Even a low-end, compact ST with limited accuracy (e.g., 18-90<br />
arcsec) is in high demand for autonomous attitude determination<br />
of CubeSat-class spacecraft.<br />
Gyro<br />
A gyro measures the spacecraft angular rate at a high output rate,<br />
typically 100-200 Hz (Table 4). However, the output is typically<br />
reduced (averaged) to an effective rate of<br />
20 Hz or less in order to reduce the effect of angle white noise. Bias<br />
stability, which is a random process, is generally a more important<br />
factor in determining navigation filter performance than is pure<br />
bias, especially when the time constant of the bias stability is<br />
relatively short.<br />
A gyro can also be used to measure the local attitude of the payload,<br />
which can be different from spacecraft attitude determination.<br />
Examples encompass attitude measurement of packaged P&T<br />
elements such as the “inertial pseudostar reference unit” [5]<br />
and attitude measurement of a gimbaled siderostat. In the latter<br />
situation, the gyro effectively replaces an encoder.<br />
Figure 6. Gyro Simulink block diagram.<br />
Table 4. Gyro Parameters.<br />
Parameters Range<br />
Output Rate (Hz) 100 ~ 200<br />
Angle Random Walk (deg/√h) 0.00015 ~ 0.1<br />
Angle White Noise (arcsec/√Hz) >0.0035<br />
Rate Random Walk (arcsec/√s³) >9.495E-5<br />
Bias Stability (deg/h) 0.0045 ~ 3.3<br />
Bias Stability Time Constant (s) ~300<br />
Scale Factor (ppm) 1 ~ 100<br />
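A single sample of such a gyro error model can be sketched as follows. This is a simplified scalar version of the Figure 6 block diagram (scale factor, first-order Gauss-Markov bias, and angle-random-walk noise), with illustrative values chosen inside the Table 4 ranges:<br />

```python
import math
import random

def gyro_step(true_rate, bias, dt, arw_dph=0.01, bs_dph=0.1, tau=300.0,
              sf_ppm=50.0, rng=random):
    """One gyro rate sample (rad/s): scale-factor error, Gauss-Markov bias
    with time constant tau, and rate white noise derived from the angle
    random walk. Values are illustrative picks from Table 4."""
    arw = arw_dph * (math.pi / 180.0) / 60.0       # deg/sqrt(h) -> rad/sqrt(s)
    sigma_b = bs_dph * (math.pi / 180.0) / 3600.0  # deg/h -> rad/s
    phi = math.exp(-dt / tau)                      # Gauss-Markov propagation
    bias = phi * bias + math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, sigma_b)
    white = rng.gauss(0.0, arw / math.sqrt(dt))    # rate white noise from ARW
    meas = (1.0 + sf_ppm * 1e-6) * true_rate + bias + white
    return meas, bias
```

Propagating `bias` from sample to sample reproduces the bounded random drift described above; the rate-random-walk and nonorthogonality terms of Figure 6 are omitted for brevity.<br />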
Telescope<br />
A telescope generally compresses beams of light and focuses them<br />
onto an FPA detector. A model based on geometric ray-trace optics<br />
was developed [15] and implemented using Matlab/Simulink<br />
blocks and EML. This formalism provides a way to derive (to linear<br />
order) the effect of motion of optical system components on<br />
the focal plane image. Thus, it becomes possible to model effects<br />
ranging from deliberate actuation or pointing of a body-fixed large-aperture<br />
telescope, through the motion of a siderostat, to the motion<br />
of a small FSM. It is also possible to include rigid-body motions<br />
of the optical elements themselves due to effects such as<br />
spacecraft vibration.<br />
The model is initialized with an optical prescription specifying the<br />
parameters of the imaging system, including mirror dimensions,<br />
conic constants, placement, and orientation. Some key parameters<br />
of a representative Ritchey-Chretien telescope with f/D = 6.9<br />
are listed in Table 5 [22]. The model can directly calculate the<br />
combined focal plane spot position from input beam angle and<br />
position in simulation. Furthermore, it can also derive sensitivity<br />
matrices relating input beam angle and position to focal plane spot<br />
position. For example, the chief ray of the representative telescope<br />
model has the following relationship:<br />
\begin{bmatrix} dU \\ dV \end{bmatrix} = \begin{bmatrix} 3.45 & 0 \\ 0 & 3.45 \end{bmatrix} \begin{bmatrix} d\varphi \\ d\theta \end{bmatrix} + \begin{bmatrix} -0.08 & 0 \\ 0 & -0.08 \end{bmatrix} \begin{bmatrix} dx \\ dy \end{bmatrix} \qquad (1)<br />
where [dU, dV] are focal plane spot position changes in meters, [dφ,<br />
dθ] are input beam angle changes in radians, and [dx, dy] are FSM<br />
angles in radians. This is essentially a simple pinhole model of a star<br />
tracker or camera, modified to include an FSM; however, higher-fidelity<br />
models can be incorporated with ease.<br />
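The linear relation (1) also shows how the FSM can null a small beam-angle error. The sketch below (hypothetical helper names; the 3.45 and -0.08 gains come from (1)) evaluates the relation and inverts the FSM sensitivity:<br />

```python
def spot_motion(d_phi, d_theta, dx, dy, efl=3.45, fsm_gain=-0.08):
    """Focal-plane spot displacement (m) from beam-angle changes (rad)
    and FSM angles (rad), per the diagonal sensitivity relation (1)."""
    dU = efl * d_phi + fsm_gain * dx
    dV = efl * d_theta + fsm_gain * dy
    return dU, dV

def fsm_command(d_phi, d_theta, efl=3.45, fsm_gain=-0.08):
    """FSM angles that null the spot motion caused by a beam-angle error
    (simply the inverted diagonal sensitivity; hypothetical helper)."""
    return (-efl / fsm_gain) * d_phi, (-efl / fsm_gain) * d_theta
```

Feeding the commanded FSM angles back through the forward relation drives the residual spot motion to zero, which is exactly the jitter-rejection role the FSM plays in the active strapdown architecture.<br />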
A particularly useful feature of the adopted model is that it<br />
accurately accounts for optical effects such as beam walk as well<br />
as certain types of optical aberrations of an active P&T system. In<br />
particular, consider a strapdown active P&T system: a small FSM<br />
is used at high bandwidth to correct for small-angle errors, while<br />
the spacecraft is operated so as to keep the FSM centered. This<br />
configuration was used in the Joint Astrophysics Nascent Universe<br />
Satellite (JANUS) and James Webb Space Telescope (JWST) [7], [8].<br />
However, such an FSM is sometimes limited by the fact that it<br />
cannot always be placed at the system pupil. As a consequence, as the<br />
field of view of an imaging system is increased, the nonideal FSM<br />
location results in additional blurring, which is a function of the<br />
magnitude of the spacecraft pointing error. By accurately modeling<br />
the optical system in its entirety, it is possible to accurately derive<br />
Table 5. Representative Ritchey-Chretien Telescope Parameters.<br />
Parameters Value<br />
Dimension (m²) 0.5 × 0.9<br />
Primary Focal Length (m) 1.0<br />
Effective Focal Length (m) 3.45<br />
Distance from Primary Mirror to System Focal Point (m) 0.6<br />
Distance between Primary Mirror and Secondary Mirror (m) 0.73<br />
Eccentricity of Primary Mirror 1.23<br />
Eccentricity of Secondary Mirror 1.74<br />
the coupling between spacecraft body pointing stability and image<br />
quality, and thus perform better system-level trades during the GNC<br />
design process.<br />
Focal Plane Array Sensor<br />
A simple FPA model was developed using Matlab/Simulink blocks<br />
and EML to simulate the effects of realistic detector integration,<br />
pixelation, and detector noise. In addition, the effects of diffraction,<br />
while not modeled accurately in the geometric ray-trace approach<br />
outlined above, can be accounted for by convolving the resulting<br />
detector image with a suitable point-spread function (PSF). Some<br />
key parameters of a representative FPA detector are listed in Table<br />
6 [22].<br />
Table 6. Representative Focal Plane Array Sensor Parameters.<br />
Parameters Value<br />
Star Flux at Magnitude Zero Point (photons/cm 2 /s) 1.26E+6<br />
Dark Current & Detector Background Noise (e/p/s) 5.0<br />
Detector Readout Noise (electrons) 20<br />
Pixel Size (μm) 10<br />
Effective Noise (arcsec, 1σ) 0.1<br />
The detector noise model includes terms for detector dark current,<br />
read noise, and scattered light background as well as photon noise.<br />
Signal levels (photon count rates) are determined by integrating<br />
suitable stellar spectral templates multiplied by detector response<br />
functions and mirror coating reflectivity [22].<br />
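That signal-to-noise bookkeeping can be sketched with the standard CCD equation, using the Table 6 detector values; the aperture area and optical throughput below are assumptions for illustration, not values from the paper:<br />

```python
import math

def star_snr(mag, t_int, aperture_cm2=1960.0, throughput=0.5,
             n_pix=9, flux_m0=1.26e6, dark_bg=5.0, read_noise=20.0):
    """CCD-equation SNR estimate. flux_m0, dark_bg, and read_noise are the
    Table 6 values; the ~0.5-m aperture (~1960 cm^2), 50% throughput, and
    3x3-pixel star image are illustrative assumptions."""
    # detected photoelectrons from a star of the given magnitude
    signal = flux_m0 * 10.0 ** (-0.4 * mag) * aperture_cm2 * throughput * t_int
    # shot noise + per-pixel dark/background and read-noise contributions
    noise_var = signal + n_pix * (dark_bg * t_int + read_noise ** 2)
    return signal / math.sqrt(noise_var)
```

The model reproduces the trade discussed above: faint guide stars force long exposures, which is what limits the FPA update bandwidth.<br />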
A new class of FPA sensor has recently become available from<br />
Teledyne [23], the HAWAII-2RG detector, which is designed to allow<br />
simultaneous multiple readouts of different locations on the chip<br />
at different rates. Thus, it is possible to read out a small “guide box”<br />
of 10-20 pixels on a side, typically centered on a bright star, at a<br />
rapid rate (~10 Hz) while reading out the remaining pixels of the<br />
4096 × 4096 array at a much slower rate for increased sensitivity.<br />
62<br />
0.6<br />
0.73<br />
Eccentricity of Primary Mirror 1.23<br />
Eccentricity of Secondary Mirror 1.74<br />
Model-Based Design and Implementation of Pointing and Tracking Systems<br />
Thus, it becomes possible to use the same focal plane both as a fine<br />
guidance sensor and as a science detector, greatly simplifying the<br />
optical system and eliminating the need for a second focal plane<br />
array devoted entirely to instrument guiding [23].<br />
The post-detection signal processing is also modeled. Such<br />
processing takes the simulated image and passes it to a star<br />
detection algorithm that compares the magnitude of the candidate<br />
star to the detector noise level; once the signal-to-noise ratio<br />
exceeds a set threshold, a “star present” flag is set to “ON,” and the<br />
position of the detected star is derived using a simple centroiding<br />
algorithm. This information is then passed to the spacecraft and/or<br />
the payload GNC loop.<br />
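The detection-and-centroiding step just described can be sketched as follows (a minimal version of the processing; the peak-pixel threshold convention is an assumption):<br />

```python
def detect_and_centroid(image, noise_1sigma, snr_threshold=5.0):
    """Compare the brightest pixel with the noise floor; if the SNR
    exceeds the threshold, set the 'star present' flag and return an
    intensity-weighted centroid (x, y) in pixel coordinates."""
    peak = max(v for row in image for v in row)
    if peak < snr_threshold * noise_1sigma:
        return False, None                       # flag OFF: no star detected
    total = sum(v for row in image for v in row)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    return True, (cx, cy)
```

In the full model the flag and centroid would be handed to the spacecraft and/or payload GNC loop, exactly as described above.<br />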
Angular Rate Sensor<br />
An angular rate sensor (ARS) senses high-frequency angular<br />
vibration. It accurately detects vibrations with frequencies above 10<br />
Hz as well as vibrations in the 1- to 10-Hz range with some degraded<br />
performance. In this range, a simple logic to compensate for known<br />
gain and phase loss may improve the controller performance.<br />
Recently, some efforts have been made to replace and/or enhance an<br />
ARS with MEMS accelerometers for the detection of high-frequency<br />
vibration [24]. The ARS can be represented simply by a 2nd-order<br />
high-pass filter:<br />
H(s) = \frac{s(s + 10)}{s^{2} + 4\pi s + (4\pi)^{2}} \qquad (2)<br />
<strong>The</strong> typical random noise of ARS is 8 μrad/s (1σ) and the range is<br />
10 rad/s.<br />
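The magnitude response of (2) can be checked numerically; the sketch below simply evaluates |H(jω)| with complex arithmetic:<br />

```python
import math

def ars_gain(freq_hz):
    """|H(j*2*pi*f)| for the ARS model of Eq. (2),
    H(s) = s(s + 10) / (s^2 + 4*pi*s + (4*pi)^2)."""
    s = 1j * 2.0 * math.pi * freq_hz
    h = s * (s + 10.0) / (s * s + 4.0 * math.pi * s + (4.0 * math.pi) ** 2)
    return abs(h)
```

The gain is small well below the ~2-Hz corner and near unity above roughly 10 Hz, consistent with the degraded 1- to 10-Hz performance noted above.<br />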
Others<br />
We have also developed models for such elements as MEMS<br />
accelerometers, magnetic torque rods and magnetometers, gimbal<br />
mechanisms, motors, and FSM mechanisms.<br />
GNC Algorithms for Pointing and Tracking Systems<br />
The choice of a P&T GNC algorithm depends on the specific<br />
architecture under investigation. Here we focus on the strapdown<br />
active P&T system described above. In this case, the algorithms can<br />
be grouped into the spacecraft GNC algorithm and the payload<br />
GNC algorithm. The spacecraft GNC algorithm slews the spacecraft<br />
toward a designated target and stabilizes the spacecraft attitude<br />
around the target using reaction wheels, star tracker, and gyro. The<br />
payload GNC algorithm precisely stabilizes the LOS vector of the<br />
payload to the target using the FSM, FPA detector, and the ARS.<br />
Note that only essential GNC algorithms are addressed in this paper.<br />
For example, less critical algorithms such as the RW momentum<br />
control loop using magnetic torque rods are omitted for brevity.<br />
Spacecraft GNC Algorithm<br />
The spacecraft GNC algorithm consists of four major components:<br />
slew maneuver planner, navigation filter, sensor processing, and<br />
control law, each implemented using Matlab/Simulink blocks and<br />
EML. The Attitude Determination and Control System (ADCS)<br />
Mission Manager and FSW Scheduler are not parts of typical GNC<br />
algorithms, but rather are functions of an upper-level program that<br />
integrates and executes these tasks. They are typically developed by<br />
GNC software engineers directly in C or C++ [25]. However, since<br />
they are prerequisites for implementing GNC algorithms, simplified<br />
versions are modeled to a limited extent. The ADCS Mission<br />
Manager can read predefined time-tagged mission profile data and<br />
command slew instructions and control modes; the FSW Scheduler<br />
is not modeled explicitly but implicitly, using “rate transition<br />
blocks” that allow the GNC algorithms to be executed in specific<br />
execution cycles.<br />
The Slew Maneuver Planner processes slew information and<br />
provides smooth spinup-coasting-spindown eigenrotation attitude<br />
profiles along slew directions defined in the slew command. The<br />
attitude profile includes commanded quaternion and angular rate.<br />
The Slew Maneuver Planner can employ an advanced slew algorithm<br />
that can provide the shortest slew profile even when Earth, Sun, and<br />
Moon are in the path of eigenrotation slewing.<br />
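The planner's spinup-coasting-spindown profile can be sketched for a single eigenaxis as follows; this is a simplified illustration that ignores the Earth/Sun/Moon avoidance logic and assumes the commanded angle is large enough to reach the coast rate:<br />

```python
def slew_profile(angle, rate_max, accel, t):
    """Angle and rate (rad, rad/s) at time t along a spinup-coast-spindown
    eigenaxis slew: constant acceleration up to rate_max, coast, then a
    mirror-image deceleration to rest at the commanded angle."""
    t_ramp = rate_max / accel                        # spinup/spindown duration
    ramp_angle = 0.5 * accel * t_ramp ** 2           # angle covered per ramp
    t_coast = (angle - 2.0 * ramp_angle) / rate_max  # coast duration
    if t <= t_ramp:                                  # spinup phase
        return 0.5 * accel * t * t, accel * t
    if t <= t_ramp + t_coast:                        # coast phase
        return ramp_angle + rate_max * (t - t_ramp), rate_max
    te = 2.0 * t_ramp + t_coast - t                  # time left in spindown
    if te <= 0.0:                                    # slew complete
        return angle, 0.0
    return angle - 0.5 * accel * te * te, accel * te
```

Sampling this profile at the GNC rate yields the commanded quaternion (after mapping the eigenaxis angle back to a rotation) and angular rate described above.<br />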
The Navigation Filter consists of three major components:<br />
compensation, extended Kalman navigation filter (KF), and error<br />
calculation. The Compensation removes the gyro bias from the<br />
measured gyro data using the estimated value from the navigation<br />
filter and compensates for deterministic time delay in the measured<br />
attitude quaternion using the measured gyro rate data. The Error<br />
Calculation calculates the error state of attitude and rate for the<br />
Figure 7. Extended Kalman navigation filter Simulink block diagram.<br />
Control Law and estimates the gyroscopic term, which is not<br />
negligible in fast slewing.<br />
The most complicated component is the extended Kalman<br />
navigation filter depicted in Figure 7. This block implements a<br />
standard extended KF that processes the measured rate and attitude<br />
quaternions and produces the rate and attitude quaternion states<br />
[26]. It has three components: quaternion propagation, Kalman gain<br />
propagation (eKFprop), and filter state update. As shown in Figure<br />
7, three “rate transition blocks” are specially employed around the<br />
eKFprop block. The main purpose is to allow the eKFprop block<br />
to be executed at a different rate from the others (e.g., 1 Hz). The<br />
eKFprop block executes standard matrix calculations, including<br />
matrix inversion, which are often computationally expensive<br />
elements. By running this block at a lower rate, the computational<br />
load on the flight computer may be reduced, albeit at a cost of<br />
reduced performance of the extended KF.<br />
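The rate-splitting idea can be illustrated with a scalar filter in which the covariance and gain computation (the analogue of the expensive eKFprop step) runs only once per slow cycle, while the cheap state correction runs every fast cycle. All values below are illustrative, not the paper's:<br />

```python
def run_filter(meas, dt_fast=0.02, gain_every=50, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a multirate split: the covariance/gain
    step executes once every `gain_every` fast cycles (e.g., 1 Hz vs.
    50 Hz), and the gain is held constant in between."""
    x, p, k = 0.0, 1.0, 0.0
    for i, z in enumerate(meas):
        if i % gain_every == 0:
            p = p + q * gain_every * dt_fast  # slow-rate covariance growth
            k = p / (p + r)                   # Kalman gain (held between runs)
            p = (1.0 - k) * p                 # covariance measurement update
        x = x + k * (z - x)                   # fast-rate state correction
    return x
```

Holding the gain between slow cycles is exactly the performance trade noted above: the estimate still converges, but the gain is slightly stale relative to a full-rate filter.<br />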
<strong>The</strong> Control Law implements a simple proportional and derivative<br />
(PD) controller. <strong>The</strong> gains are parameterized in terms of damping<br />
ratio and bandwidth as follows:<br />
K_w = 2ξw_N J, K_p = w_N^2 J (3)<br />
Here, J is the moment of inertia. <strong>The</strong> damping ratio (ξ) is 0.707 in<br />
most applications, but a high damping ratio (e.g., 0.995) is<br />
recommended in the P&T application to minimize undesired<br />
overshoot. <strong>The</strong> bandwidth (w_N) is typically gain-scheduled as a<br />
function of the active mode (i.e., fast slew mode or fine tracking<br />
mode) and the active status of the payload GNC system. For example,<br />
a high bandwidth is used for the spacecraft GNC controller during fast<br />
slew mode, while a low bandwidth is used during fine tracking mode.<br />
When the payload GNC system is active, the bandwidth for the<br />
spacecraft GNC controller tends to be lowered further to prevent<br />
the two GNC loops from fighting each other.<br />
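As a worked instance of Eq. (3) (all numbers here are illustrative, not from a specific mission), the gains follow directly from the chosen damping ratio, bandwidth, and inertia, and the closed-loop poles then sit at the requested natural frequency:<br />

```python
import numpy as np

def pd_gains(zeta, w_n, J):
    """PD attitude-control gains per Eq. (3):
    derivative gain K_w = 2*zeta*w_n*J, proportional gain K_p = w_n**2 * J."""
    return 2.0 * zeta * w_n * J, w_n**2 * J

# Illustrative values: high damping for P&T, ~0.06-Hz bandwidth,
# hypothetical inertia of 50 kg*m^2
zeta, w_n, J = 0.995, 2 * np.pi * 0.06, 50.0
K_w, K_p = pd_gains(zeta, w_n, J)

# Closed-loop characteristic polynomial s^2 + (K_w/J)s + (K_p/J)
poles = np.roots([1.0, K_w / J, K_p / J])
```

With ξ just below 1, the poles are a lightly underdamped complex pair of magnitude w_N, which is why the high damping ratio suppresses overshoot.<br />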
<strong>The</strong> bandwidth is a critical element to pointing accuracy. A higher<br />
bandwidth yields better pointing accuracy, or equivalently, smaller<br />
pointing error. <strong>The</strong>refore, two questions typically arise: what is the<br />
nominal bandwidth, and what bandwidth is achievable? This is<br />
another reason why we prefer to employ the simple PD controller<br />
characterized by bandwidth instead of more advanced controllers<br />
such as linear quadratic Gaussian (LQG) and H-infinity.<br />
Without detailed analysis such as stability and Monte Carlo<br />
analysis, a typical range of bandwidth can be estimated by a<br />
rule-of-thumb waterfall effect: an order-of-magnitude reduction<br />
at each step. In particular, reduction in bandwidth happens at the<br />
step from GNC process cycle to GNC navigation filter bandwidth,<br />
and another reduction follows at the step from GNC navigation filter<br />
bandwidth to GNC controller bandwidth. Of course, the magnitude<br />
of reduction may vary around 10% as a function of the quality of<br />
the GNC subsystems. For example, using a very high-end gyro like<br />
an SIRU makes it possible for GNC controller bandwidth to be 2/10<br />
of navigation filter bandwidth.<br />
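The waterfall rule of thumb reduces to simple arithmetic (the factors below are illustrative; as the text notes, they vary with subsystem quality, e.g., roughly 2/10 with an SIRU-class gyro):<br />

```python
def bandwidth_waterfall(process_hz, nav_factor=0.1, ctrl_factor=0.1):
    """Rule-of-thumb cascade: roughly an order-of-magnitude drop from
    GNC process rate to navigation-filter bandwidth, and another drop
    from navigation-filter bandwidth to controller bandwidth."""
    nav_bw = nav_factor * process_hz
    ctrl_bw = ctrl_factor * nav_bw
    return nav_bw, ctrl_bw

# Hypothetical 5-Hz GNC process cycle with a high-end (SIRU-class) gyro
nav_bw, ctrl_bw = bandwidth_waterfall(5.0, ctrl_factor=0.2)
```
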
Payload GNC Algorithm<br />
<strong>The</strong> payload GNC algorithm is typically simple and effectively<br />
consists of only the control algorithm as shown in Figure 8. <strong>The</strong><br />
main reason for making the payload GNC flight software as simple<br />
as possible is that it often runs at a high rate (particularly if it<br />
incorporates a high-rate FSM) on a field-programmable gate array<br />
(FPGA) with limited computational resources.<br />
Figure 8. Simplified P&T payload GNC Simulink block diagram.<br />
<strong>The</strong> control algorithm stabilizes the location of the image of a<br />
guidance target (e.g., photons from a star) on the focal plane with<br />
the FSM tilt angle modulated by measurements of the FPA detector<br />
and the ARS. <strong>The</strong> FSM tilt angle loop consists of an ARS-to-FSM loop<br />
and an FPA-to-FSM loop. <strong>The</strong> former compensates for the image<br />
jitter with frequency higher than 5 Hz and the latter compensates<br />
for the image jitter with frequency lower than 5 Hz. To ensure that<br />
there is no frequency overlapping, the ARS-to-FSM loop employs<br />
washout filters.<br />
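A washout filter here is simply a high-pass filter whose DC gain is zero, so the ARS path contributes nothing in the low-frequency band handled by the FPA-to-FSM loop. A minimal first-order discrete sketch (5-Hz corner from the text; the 200-Hz sample rate is an assumption):<br />

```python
import numpy as np

def washout(x, fc=5.0, fs=200.0):
    """First-order washout (high-pass) filter: passes jitter above fc,
    rejects the low-frequency band handled by the FPA-to-FSM loop."""
    a = np.exp(-2.0 * np.pi * fc / fs)   # pole set by the corner frequency
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = a * (y[i - 1] + x[i] - x[i - 1])
    return y
```
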
<strong>The</strong> FPA-to-FSM loop is assumed to be a 200-Hz process. This loop<br />
employs a proportional and integral (PI) controller. <strong>The</strong> integrator<br />
is used for rejecting any bias component. <strong>The</strong> proportional gain is<br />
selected to be a fraction of the ratio of FPA sensor output rate (e.g.,<br />
10 Hz) with respect to the GNC process cycle (e.g., 200 Hz). This<br />
is one way to generate a high-rate command signal from a low-rate<br />
measurement. Another way is to employ a low-pass filter. <strong>The</strong> output<br />
from the PI controller, which is the desired relative FSM tilt angle<br />
with respect to the current one, is integrated before being combined<br />
with the FSM command from ARS-to-FSM loop.<br />
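A minimal sketch of the FPA-to-FSM logic described above (rates from the text; the gain fraction, integrator gain, and units are hypothetical): the 10-Hz FPA error is zero-order-held to 200 Hz, passed through a PI law whose proportional gain is a fraction of the rate ratio, and the resulting relative tilt command is integrated into an absolute one:<br />

```python
import numpy as np

def fpa_to_fsm(err_fpa, kp_frac=0.5, ki=0.02, fpa_hz=10, gnc_hz=200):
    """PI controller at gnc_hz driven by a zero-order-held fpa_hz error.
    The proportional gain is a fraction of fpa_hz/gnc_hz; the PI output
    (a relative FSM tilt) is integrated into an absolute command."""
    kp = kp_frac * fpa_hz / gnc_hz
    hold = gnc_hz // fpa_hz                 # 20 control ticks per measurement
    integ, cmd, out = 0.0, 0.0, []
    for e in np.repeat(err_fpa, hold):      # zero-order hold to 200 Hz
        integ += ki * e                     # integrator rejects bias
        cmd += kp * e + integ               # integrate relative tilt command
        out.append(cmd)
    return np.array(out)
```

Scaling the proportional gain by the rate ratio keeps the per-tick increments small, which is the smoothing effect the text attributes to this design choice.<br />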
We have focused primarily on the pointing GNC algorithm in this<br />
paper. However, the tracking GNC algorithm presents uniquely<br />
challenging issues, including coordinated tracking among multiple<br />
actuation elements of the spacecraft, payload gimbal mechanisms,<br />
and FSM; and reconstruction of the true LOS vector from the payload<br />
or spacecraft to a target from multiple sensor measurements such<br />
as encoders, gyros, FPA detectors, and known reference object<br />
locations. <strong>The</strong> development of precision tracking GNC algorithms is<br />
one of our next research topics.<br />
Automatic Code Generation and Implementation<br />
We have discussed models of critical P&T elements and GNC<br />
algorithms. Here, we shift our focus to autogeneration of flight<br />
models and GNC flight software from the models and algorithms<br />
discussed previously. <strong>The</strong> code generation requires a few<br />
prerequisite conditions in the framework of model-based design.<br />
First, the model-based design tool is constructed with appropriate<br />
partitions that can accommodate the goal of code generation. For<br />
example, consider a SWIL/HWIL test of spacecraft GNC algorithms.<br />
An ideal partition consists of two main components: one for<br />
spacecraft GNC algorithms and the other for the remaining models<br />
Figure 9. Spacecraft non-GNC flight software block.<br />
and algorithms. <strong>The</strong>se blocks are connected by input/output signals<br />
with “rate transition blocks” (See Figure 9). As mentioned before,<br />
a “rate transition block” permits the algorithms to execute at<br />
different rates at both the design and implementation levels. Second,<br />
models and algorithms are developed using autocodable Matlab/<br />
Simulink blocks and EML. Third, all parameters, including static<br />
and tunable parameters such as GNC control gains, are defined as<br />
external constant inputs to the GNC algorithm blocks and therefore<br />
can be provided by the GNC mission manager during each specific<br />
mission phase.<br />
With these prerequisite conditions, code generation is rather<br />
straightforward using the Real-Time Workshop code generator [27].<br />
<strong>The</strong> generated code is pre-tested with a set of regression tests before<br />
SWIL and HWIL tests. First, it is validated and verified against the<br />
original model or algorithm within the same model-based design<br />
framework, using the Matlab/Simulink Verification and Validation<br />
toolbox [28]. Second, it is tested for runtime errors using Matlab/<br />
PolySpace [29], and further tested on a virtual processor that mimics<br />
the major functionality of the real target processor, i.e., with the help<br />
of a third-party vendor’s MULTI [30].<br />
<strong>The</strong> code that passes the above regression tests is now ready for SWIL<br />
and HWIL tests. For a SWIL test, the flight model code and the GNC<br />
flight software run separately on two PowerPC processors<br />
via internet communication, using the Matlab/xPC Target operating<br />
system [31]. This test is useful to check communication between<br />
flight software and the outside world and for further code validation.<br />
For the HWIL test, the flight models can be replaced by actual flight<br />
hardware, and the GNC flight software runs on the actual flight<br />
computer. This test is useful to evaluate the computational speed<br />
and memory limitations of the flight processor and data interfaces<br />
between hardware and flight software. Furthermore, it can evaluate<br />
network delays and communication data drop-offs. As a result, the<br />
development, validation, and verification of flight software may be<br />
sped up using model-based design and implementation.<br />
Examples<br />
Some results are presented for the strapdown active P&T system<br />
that we have focused on, with particular attention to GNC system<br />
design and analysis. <strong>The</strong> gains for the spacecraft and payload GNC<br />
algorithms are<br />
selected according to the guidelines described in previous sections.<br />
<strong>The</strong> simulation executed a mission plan that sequentially stepped<br />
through the initial tracking, slew maneuver, and fine tracking modes.<br />
<strong>The</strong> results are plotted in Figures 10-13. Figure 10 plots the position<br />
error of the image with respect to the center of the FPA detector.<br />
For convenience, the position error is converted to the LOS error by<br />
dividing it by the effective focal length of the telescope. As shown, the<br />
image position error is reduced significantly, to within the threshold<br />
of 0.35 arcsec (3σ), during fine tracking mode when the FPA<br />
detects a guide star and provides the FSM loop with the tracking<br />
information of that guide star. By contrast, large errors occur when<br />
the FPA detector is in a loss-of-lock situation due to a large slew or<br />
small repointing of the instrument LOS (“dither,” done to provide a<br />
background calibration for the science instrument being modeled).<br />
<strong>The</strong> large error between 100 and 250 s is due to a large spacecraft<br />
slew. Two smaller errors with magnitudes of 5 arcsec are due to<br />
dithering the FSM. During each dither, the FPA acquires a new guide<br />
star and tracks it. Dithering is clearly shown in Figure 11.<br />
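The conversion used in Figure 10 is the small-angle relation between focal-plane displacement and LOS angle (the focal length and error magnitude below are hypothetical examples, not values from the study):<br />

```python
import numpy as np

ARCSEC_PER_RAD = 180.0 / np.pi * 3600.0   # ~206265 arcsec per radian

def fpa_error_to_los_arcsec(dx, focal_len):
    """Convert an image position error on the FPA to a LOS error by
    dividing by the effective focal length (small-angle approximation)."""
    return dx / focal_len * ARCSEC_PER_RAD

# e.g., a 30-um image error with a 10-m effective focal length
los_err = fpa_error_to_los_arcsec(30e-6, 10.0)
```
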
Figures 12 and 13 both show power spectral densities of spacecraft<br />
pointing LOS error, FSM tilt angle, and FPA image LOS error. In Figure<br />
12, the spacecraft pointing error is shown to be much larger than<br />
the requirement. This implies that a “passive P&T system” with the<br />
same spacecraft GNC system cannot meet the P&T LOS requirement.<br />
However, as the FSM tilt angle is modulated to compensate for the<br />
spacecraft LOS error, the image LOS error on the FPA detector is<br />
controlled within the requirement of 0.35 arcsec (9% margin). <strong>The</strong><br />
same figure also shows the bandwidths of important subsystems:<br />
the bandwidth of the spacecraft GNC system is about 0.06 Hz; the<br />
bandwidth of the telescope GNC system is 2 Hz; the RW speed is<br />
nominally 1200 rpm (or 20 Hz); and first solar array bending and<br />
cryocooler frequencies are 35 Hz and 50 Hz, respectively.<br />
Figure 13 shows how the FPA-to-FSM loop and the ARS-to-FSM loop<br />
contribute to the power spectral density of the FSM tilt angles shown<br />
in Figure 12. <strong>The</strong> FPA-to-FSM loop is shown to compensate for the<br />
spacecraft pointing error with frequencies of up to 0.3 Hz (i.e.,<br />
spacecraft response due to RW and ST noises), whereas the ARS-to-<br />
FSM loop is shown to compensate for the spacecraft pointing error<br />
with frequencies higher than 10 Hz (i.e., spacecraft vibration). This<br />
is consistent with expectations based on the FPA detector and ARS<br />
sensor bandwidths.<br />
Conclusions<br />
This paper has presented a model-based design and implementation<br />
approach for pointing and tracking systems. <strong>The</strong> GNC models and<br />
algorithms developed are comparable to heritage counterparts in<br />
terms of accuracy and performance. Furthermore, development<br />
and modification/adaptation of GNC flight software are time<br />
and cost efficient, which is critical to meeting the new demands<br />
arising in the operationally responsive space field.<br />
Figure 10. FPA image LOS error during initial tracking, slew maneuver, and fine tracking modes.<br />
Figure 11. FSM tilt angle during initial tracking, slew maneuver, and fine tracking modes.<br />
Figure 12. Power spectral density distribution of spacecraft LOS error, FPA image LOS error, FSM tilt angle, and image rms.<br />
Figure 13. Power spectral density distribution of spacecraft LOS error, combined FSM tilt angle, FPA contribution to FSM tilt angle, and ARS contribution to FSM tilt angle.<br />
References<br />
[1] Wegner, P., Operationally Responsive Space, http://www.<br />
responsivespace.com/ors/reference/ORS%20Office%20Overview_<br />
PA_Cleared%20notes.pdf.<br />
[2] Barnard, P., “Software Development Principles Applied to Graphical<br />
Model Development,” AIAA Modeling and Simulation Technologies<br />
Conference and Exhibit, AIAA-2005-5888, 2005.<br />
[3] Embedded MATLAB, http://www.mathworks.com/products/featured/<br />
embeddedMatlab/.<br />
[4] Rate Transition Block, http://www.mathworks.com/access/helpdesk/<br />
help/toolbox/simulink/slref/ratetransition.html.<br />
[5] Gilmore, J., S. Feldgoise, T. Chien, D. Woodbury, M. Luniewicz, “Pointing<br />
Stabilization System Using the Optical Reference Gyro,” Institute of<br />
Navigation Conference, Cambridge, MA, June 1993.<br />
[6] Dorland, B. and R. Gaume, “<strong>The</strong> J-MAPS Mission: Improvements to<br />
Orientation Infrastructure and Support for Space Situational<br />
Awareness,” AIAA SPACE 2007 Conference & Exposition, Long Beach,<br />
California, September 2007.<br />
[7] James Webb Space Telescope, http://www.jwst.nasa.gov/scope.html.<br />
[8] Joint Astrophysics Nascent Universe Satellite (JANUS), A SMEX Mission<br />
Proposal Concept Study Report, December 16, 2008.<br />
[9] Grocott, S., R. Zee, J. Matthews, “<strong>The</strong> MOST Microsatellite Mission: One<br />
Year in Orbit,” 18th Annual AIAA/USU Conference on Small Satellites,<br />
Salt Lake, Utah, 2004.<br />
[10] Wie, B., Space Vehicle Dynamics and Control, AIAA Education Series,<br />
1998.<br />
[11] Psiaki, M., “Spacecraft Attitude Stabilization Using Passive<br />
Aerodynamics and Active Magnetic Torquing,” AIAA Guidance,<br />
Navigation, and Control Conference and Exhibit, AIAA 2003-5420,<br />
Austin, Texas, August 2003.<br />
[12] Bialke, B., “High-Fidelity Mathematical Modeling of Reaction Wheel<br />
Performance,” 21st Annual American Astronautical Society Guidance<br />
and Control Conference, AAS 98-063, February 1998.<br />
[13] Masterson, R., D. Miller, R. Grogan, “Development of Empirical and<br />
Analytical Reaction Wheel Disturbance Models,” AIAA 99-1204.<br />
[14] Lappas, V., W. Steyn, C. Underwood, “Design and Testing of a Control<br />
Moment Gyroscope Cluster for Small Satellites,” Journal of Spacecraft<br />
and Rockets, Vol. 42, No. 4, July-August 2005.<br />
[15] Redding, D. and W. Breckenridge, “Optical Modeling for Dynamics and<br />
Control Analysis,” Journal of Guidance, Navigation, and Control, Vol.<br />
14, No. 5, September-October 1991.<br />
[16] Sugiura, N., E. Morikawa, Y. Koyama, R. Suzuki, Y. Yasuda,<br />
“Development of the Elbow Type Gimbal and the Motion Simulator<br />
for OISL,” 21st International Communications Satellite Systems<br />
Conference and Exhibit, AIAA 2003-2268, 2003.<br />
[17] Brady, T., S. Buckley, M. Leammukda, “Space Validation of the Inertial<br />
Stellar Compass,” 21st Annual AIAA/USU Conference on Small<br />
Satellites, Salt Lake, Utah, 2007.<br />
[18] ComTech AeroAstro Miniature Star Tracker, http://www.aeroastro.<br />
com/components/star_tracker.<br />
[19] Liebe, C.C., “Accuracy Performance of Star Trackers–A Tutorial,” IEEE<br />
Aerospace and Electronic Systems, Vol. 38, No. 2, 2002.<br />
[20] Cemenska, J., Sensor Modeling and Kalman Filtering Applied to<br />
Satellite Attitude Determination, Masters <strong>The</strong>sis, University of<br />
California at Berkeley, 2004.<br />
[21] Jerebets, S., “Gyro Evaluation for the Mission to Jupiter,” IEEE<br />
Aerospace Conference, Big Sky, Montana, March 2007.<br />
[22] Schroeder, D., Astronomical Optics, 2nd ed., Academic Press, 2000.<br />
[23] Teledyne Imaging Sensors HAWAII-2RG, http://www.teledyne-si.com/<br />
imaging/H2RG.pdf.<br />
[24] ARS-12A MHD Angular Rate Sensor, http://www.atasensors.com/.<br />
[25] Hart, J., E. King, P. Miotto, S. Lim, “Orion GN&C Architecture for<br />
Increased Spacecraft Automation and Autonomy Capabilities,” AIAA<br />
Guidance, Navigation & Control Conference, Honolulu, Hawaii, 2008.<br />
[26] Lefferts, E.J., F.L. Markley, M.D. Shuster, “Kalman Filtering for<br />
Spacecraft Attitude Estimation,” 20th AIAA Aerospace Sciences<br />
Meeting, AIAA-82-0070, Orlando, Florida, 1982.<br />
[27] Real-Time Workshop 7.4, http://www.mathworks.com/products/rtw/.<br />
[28] Simulink Verification and Validation, http://www.mathworks.com/<br />
products/simverification/.<br />
[29] PolySpace Client C/C++ 7.1, http://www.mathworks.com/products/<br />
polyspaceclientc/.<br />
[30] MULTI Integrated Development Environment, http://www.mathworks.<br />
com/products/connections/product_detail/product_35473.html.<br />
[31] xPC Target 4.2, http://www.mathworks.com/products/xpctarget/.<br />
[32] Hecht, E., Optics, 4th ed., Addison Wesley, 2002.<br />
[33] Rodden, J., “Mirror Line of Sight on a Moving Base,” American<br />
Astronautical Society, Paper 89-030, February 1989.<br />
[34] Weinberg, M., “Working Equations for Piezoelectric Actuators and<br />
Sensors,” Journal of Microelectromechanical Systems, Vol. 8, No. 4,<br />
December 1999.
Sungyung Lim is a Senior Member of the Technical Staff in the Strategic and Space Guidance and Control group<br />
at <strong>Draper</strong> <strong>Laboratory</strong>. Before joining <strong>Draper</strong>, he was a Senior Engineering Specialist at Space Systems/Loral. His<br />
work there involved the analysis of spacecraft dynamics, on-orbit anomaly investigation of spacecraft control<br />
systems, and the design and analysis of spacecraft pointing systems. At <strong>Draper</strong>, his work has extended to analysis<br />
and design of GN&C algorithms and software in the strategic and space area. His current interests include model-based<br />
GN&C design and analysis and the design of high-precision pointing systems for small satellites. Dr. Lim received<br />
B.S. and M.S. degrees in Aerospace from Seoul National University and a Ph.D. in Aeronautics and Astronautics<br />
from Stanford University.<br />
Benjamin F. Lane is a Senior Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong> and is currently the Task<br />
Lead for the Guidance System Concepts effort. Expertise includes the development of advanced algorithms<br />
for image processing and real-time control systems, including adaptive optics and spacecraft instrumentation.<br />
He has developed instrument concepts, requirements, designs, control software, integration, testing and<br />
commissioning, and operations, debugging, and data acquisition. He helped design, build, and operate a multiple-aperture<br />
telescope system (the Palomar Testbed Interferometer) for extremely high-angular resolution (picorad)<br />
astronomical observations, and also designed and built high-contrast imaging payloads for sounding rocket<br />
missions and spacecraft. He has published more than 45 peer-reviewed papers in his area of expertise and is a<br />
recipient of the 2010 <strong>Draper</strong> Excellence in Innovation Award. Dr. Lane holds a Ph.D. in Planetary Science from the<br />
California Institute of <strong>Technology</strong>.<br />
Bradley A. Moran is a Program Manager for Space Systems at <strong>Draper</strong> <strong>Laboratory</strong>. With 26 years of professional<br />
experience in both academic and industry settings, he has developed and implemented GN&C algorithms<br />
for a number of platforms ranging from undersea to on-orbit. Recent experiences include mission design and<br />
analysis for rendezvous and proximity operations and systems engineering for NASA, DoD, and other government<br />
sponsors. Since 2009, he has been the Mission System Engineer for the Navy’s Joint Milli-Arcsecond Pathfinder<br />
Survey program.<br />
Timothy C. Henderson is a Distinguished Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong>. He has over<br />
35 years of experience leading projects in structural dynamics, GN&C flight software, fault-tolerant systems,<br />
robotics, and precision pointing and tracking. He served as Technical Director and the Attitude Determination<br />
and Control System (ADCS) Lead for the Joint Astrophysics Nascent Universe Satellite (JANUS) Small Explorer<br />
satellite program. He holds B.S. and M.S. degrees in Civil Engineering from Tufts University and MIT, respectively.<br />
Frank A. Geisel is currently Program Manager for Strategic Business Development at <strong>Draper</strong> <strong>Laboratory</strong> and is<br />
responsible for the identification, capture, and management of programs that are focused on developing leading-edge<br />
solutions for DoD, NASA, and the Intelligence Community. He has worked on various aspects of systems<br />
engineering, networking, and communications architectures at <strong>Draper</strong> since 2000, and has held management<br />
positions at <strong>Draper</strong> in both the programs and engineering organizations. His early career was spent in the offshore<br />
industry, developing and fielding deep-water integrated navigation systems and closed-loop robotic control<br />
systems for subsea inspection, operation, and recovery applications. In the early 1980s, he was the Program<br />
Manager for 13 major expeditions to the Arctic and Antarctic in support of oil and gas exploration. Mr. Geisel is a<br />
Member of the AIAA, IEEE, and the Society of Naval Architects and Marine Engineers (SNAME). He received a B.S.<br />
in Ocean Engineering from MIT.<br />
Law enforcement and other security-related personnel could benefit<br />
greatly from the ability to objectively and quantitatively determine whether<br />
or not an individual is being deceptive during an interview.<br />
<strong>Draper</strong> engineers, working with MRAC, have been developing ways<br />
to detect deception during interviews using multiple, synchronized<br />
physiological measurements. <strong>The</strong>ir work attempts to bring engineering<br />
rigor and approaches to the collection and analysis of physiological<br />
measurements during a highly controlled psychological experiment. Most<br />
previous research has been performed with fewer sensing modalities<br />
in more academic environments, primarily with university students<br />
as participants in a mock crime. This effort was the first of its kind at<br />
<strong>Draper</strong> and represents an early step forward in exploiting physiological<br />
measurements. Future efforts would build on this one and aim toward<br />
developing and testing a usable tool.<br />
<strong>The</strong> impact of this work could be valuable in any environment where two<br />
persons interact and where the assessment of credibility of information<br />
is important. This includes law enforcement, homeland security, and<br />
intelligence community applications. Such knowledge could ultimately<br />
enable better ways to elicit and validate the information from people<br />
during interviews and could have applications in a wide variety of contexts.<br />
<strong>The</strong> researchers continue to investigate how to quantify the physiological<br />
responses associated with human interactions in order to make useful<br />
inferences about intent and behavior.<br />
Detection of Deception in Structured Interviews<br />
Using Sensors and Algorithms<br />
Meredith G. Cunha, Alissa C. Clarke, Jennifer Z. Martin, Jason R. Beauregard, Andrea K. Webb, Asher A.<br />
Hensley, Nirmal Keshava, and Daniel J. Martin<br />
Copyright © 2010 by the Society of Photo-Optical Instrumentation Engineers (SPIE). Presented at SPIE Defense, Security, and<br />
Sensing 2010, Orlando FL, April 5-9, 2010<br />
Abstract<br />
<strong>Draper</strong> <strong>Laboratory</strong> and Martin Research and Consulting, LLC * (MRAC) have recently completed a comprehensive study to quantitatively<br />
evaluate deception detection performance under different interviewing styles. <strong>The</strong> interviews were performed while multiple<br />
physiological waveforms were collected from participants to determine how well automated algorithms can detect deception based on<br />
changes in physiology. We report the results of a multifactorial experiment with 77 human participants who were deceptive on specific<br />
topics during interviews conducted with one of two styles: a forcing style that relies on more coercive or confrontational techniques, or<br />
a fostering approach that relies on open-ended interviewing and elements of a cognitive interview. <strong>The</strong> interviews were performed in a<br />
state-of-the-art facility where multiple sensors simultaneously collect synchronized physiological measurements, including electrodermal<br />
response, relative blood pressure, respiration, pupil diameter, and electrocardiogram (ECG). Features extracted from these waveforms<br />
during honest and deceptive intervals were then submitted to a hypothesis test to evaluate their statistical significance. A univariate<br />
statistical detection algorithm then assessed the ability to detect deception for different interview configurations. Our paper will explain<br />
the protocol and experimental design for this study. Our results will be in terms of statistical significances, effect sizes, and receiver<br />
operating characteristic (ROC) curves and will identify how promising features performed in different interview scenarios.<br />
* MRAC is a veteran-owned research and consulting firm that specializes in bridging the gap between empirical knowledge and corporate or government applications. MRAC<br />
conducts human subject testing for government agencies, academic institutions, and corporate industries nationwide.<br />
Introduction<br />
Motivation<br />
A significant amount of current research has been focused on<br />
detecting deception based on changes in human physiology, with<br />
the obvious benefits to military operations, counterintelligence,<br />
and homeland defense applications, which must optimally collect<br />
and use human intelligence (HUMINT). Progress in this area has<br />
largely focused on how individual sensors (e.g., functional magnetic<br />
resonance imaging (fMRI), video, ECG) can reveal evidence of<br />
deception, with the hope that a computerized system may be able<br />
to automate deception detection reliably. For practical applications<br />
outside the laboratory environment, however, an equally important<br />
and complementary aspect is the way that information can best<br />
be “educed” during elicitations, debriefings, and interrogations.<br />
Thus far, however, there has been little investigation into how<br />
sensing technologies can complement and improve on educing<br />
methodologies and traditional observer-based credibility<br />
assessments.<br />
In 2006, the Intelligence Science Board published a comprehensive<br />
report on educing information [1]. A major finding of the report<br />
is that there has been minimal research on most methods of<br />
educing information, and there is no scientific evidence that the<br />
techniques commonly taught to interviewers achieve the intended<br />
results. Indeed, it appears that there have not been any systematic<br />
investigations of educing methodologies for almost 50 years.<br />
According to the report, most educing tactics and procedures<br />
taught at the various service schools have had little or no validation<br />
and are frequently based more on casual empiricism and tradition<br />
than on science.<br />
In this paper, we report quantitative results of a comprehensive<br />
study whose objective was to determine how well measurements of<br />
physiology from multiple sensors can be collected and fused to detect<br />
deception and to explore how two distinctly different interviewing<br />
styles can affect deception detection (DD) performance. As part of<br />
a broader research goal at <strong>Draper</strong> <strong>Laboratory</strong> to understand how<br />
contact and remote sensors can be employed to infer identity and<br />
cognitive state, the experiment was conducted in a state-of-the-art<br />
facility that hosts multiple sensors in a monitored environment that<br />
enables highly calibrated and synchronized measurements to be<br />
collected and fused. Typical research facilities in this field do not<br />
have the resources to collect synchronized data from the number of<br />
sensors we have utilized.<br />
Psychological Basis for Investigation<br />
Our work builds on longstanding efforts in the area of<br />
psychophysiology that have attempted to translate cognitive<br />
processes attributed to stress and deception to physiological<br />
observables [2]-[10] whose attributes have been quantified using<br />
statistics. A key contribution of our effort has been to implement<br />
psychophysiological features identified by different researchers<br />
into a common, flexible analytical framework that allows their<br />
efficacy to be impartially compared and scrutinized.<br />
Over the past century, credibility assessment and deception<br />
detection have been investigated in various scientific disciplines,<br />
including psychiatry, psychology, physiology, philosophy,<br />
communications, and linguistics [11], [12]. Areas of specific<br />
exploration include nonverbal behavioral cues to deception [13],<br />
[14], verbal cues including statement validity analysis [15], [16],<br />
psychophysiological measures of lie detection [17], [18], and the<br />
effectiveness of training programs in detecting deception [19],<br />
[20]. While much progress has been made in determining specific<br />
measures that are helpful in detecting deception, research has<br />
consistently shown that “catching liars” is a difficult task and that<br />
people, including professional lie-catchers, often make mistakes<br />
[12], [21].<br />
Traditional psychophysiological measures used to detect deception<br />
(collectively referred to as the polygraph or lie detector test) have<br />
changed little since they first became available. Psychophysiological<br />
assessment involves fitting an individual with sensors that are then<br />
connected to a polygraph machine. <strong>The</strong>se sensors measure sweating<br />
from the palm of the hand or fingers (referred to as electrodermal<br />
response or galvanic skin response), relative blood pressure<br />
(measured by an inflated cuff on the upper arm), and respiration.<br />
Recently, however, other sensors have been proposed [22] as<br />
possible alternatives to the polygraph, such as thermal imaging, eye<br />
tracking, or a reduced set of only two sensors (electrodermal activity<br />
and plethysmograph, referred to as the Preliminary Credibility<br />
Assessment Screening System (PCASS)). It now remains to be seen<br />
to what extent these sensors will improve on current credibility<br />
assessment methodologies.<br />
Methods<br />
Experimental Design<br />
Participants were recruited primarily through an advertisement<br />
in the Boston Metro, a free newspaper distributed in major metro<br />
stations. Seventy-eight participants ultimately completed the study;<br />
they were on average 42 years old and had an average of 14 years<br />
of education. Participants were informed that they would be paid<br />
$75 for the successful completion of the research session and an<br />
additional $100 if they were deemed by the interviewer to be<br />
truthful throughout the interview. This bonus was intended to<br />
motivate each participant to convince the interviewer of his or her<br />
honesty. In reality, all of the participants who successfully completed<br />
the study were paid the full $175, regardless of the interviewer’s<br />
determination.<br />
This study was a 2 (deception) × 2 (concealment) × 2 (interview<br />
style) factorial design. In the interview, participants were asked first<br />
about their current residence, followed by their religious beliefs<br />
and their employment. Participants were either instructed to tell<br />
the truth about their current residence, but lie about their religious<br />
beliefs and their employment status, or to tell the truth about all<br />
three topics. <strong>The</strong> concealment aspect involved a final portion of the<br />
interview and is not discussed in the results presented here.<br />
Eligible participants were randomly assigned to one of eight<br />
conditions. <strong>The</strong>re were no significant differences in gender, race, or<br />
age between these different groups, ps > 0.05. <strong>The</strong> frequencies of<br />
participants in each experimental condition are presented in Table 1.<br />
Participants were randomized to be interviewed in one of the two<br />
styles described below.<br />
Forcing<br />
<strong>The</strong>re are several sources currently available that provide<br />
information about intelligence interviewing techniques, including<br />
the U.S. Army Intelligence and Interrogation Handbook [23],<br />
the Central Intelligence Agency’s KUBARK Counterintelligence<br />
Interrogation Manual [24], and Gordon and Fleisher’s Effective<br />
Interviewing & Interrogation Techniques [25]. Despite the breadth<br />
of the Army handbook’s suggestions for educing information,<br />
many sources note that the more coercive or confrontational<br />
approaches contained in the handbook have often received<br />
emphasis during training and have been overused in the field. In the<br />
forcing interview, the interviewer tightly controls the course of the<br />
conversation, and frequently challenges the participant’s motives<br />
and responses through open skepticism or accusations of deceit.<br />
<strong>The</strong> interviewer assesses the participant’s honesty, in part, on how<br />
the participant reacts to these accusations. This style establishes<br />
a comparatively adversarial relationship between the interviewer<br />
and the participant.<br />
Fostering<br />
<strong>The</strong> fostering interview includes elements of motivational<br />
interviewing and the cognitive interview. <strong>The</strong> cognitive interview<br />
[26], [27] was originally developed to improve the ability of the<br />
police to acquire the most accurate information possible from a<br />
Table 1. Frequencies of Participants in the Experimental Conditions.<br />
                    Forcing                          Fostering<br />
Lie Condition   Conceal   No Conceal   Total     Conceal   No Conceal   Total<br />
Lying             13          11         24        10          8          18<br />
Truthful           8           8         16        12          8          20<br />
Total             21          19         40        22         16          38<br />
witness. <strong>The</strong> fostering style interview aims to establish a collaborative<br />
relationship between the interviewer and the participant. In this<br />
style, the interviewer adopts a friendlier demeanor. <strong>The</strong> interviewer<br />
does not openly accuse the participant of lying, and his questions<br />
never presume deceit. He asks open-ended questions designed to<br />
elicit a wealth of reported information that the interviewer could<br />
use to assess the participant’s honesty. <strong>The</strong> fostering interview<br />
questions aim to establish a cooperative tone.<br />
<strong>The</strong> following research questions, generated for this study, are the<br />
focus of this report:<br />
1. Does the type of interview affect the interviewer’s ability to<br />
detect deception?<br />
2. How well do the physiologic sensors predict deception when<br />
analyzed individually?<br />
3. Does the type of interview influence the accuracy of physiologic<br />
sensors in detecting deception?<br />
Facilities Used for Experimentation<br />
<strong>The</strong> facility used to conduct the experiments consisted of an<br />
integrated, sensor-centric testing space consisting of a waiting area,<br />
an assessment room, a noise-insulated testing room, an operations<br />
room, and a data management room. <strong>The</strong> research staff executed<br />
the research protocol in the waiting area and in the assessment<br />
and testing rooms. <strong>The</strong>se rooms were equipped with the different<br />
physiological sensors that were used to collect participant data<br />
during the execution of the research protocol. Temporal protocol<br />
execution, sensor control, and data collection were remotely<br />
controlled from the operations room. Finally, the collected<br />
electronic sensor and experiment data were processed and stored<br />
in the data management room.<br />
Sensors Used for Data Collection<br />
In the current study, we evaluated 14 features from 5 physiological<br />
signals. Several nonverbal behavioral cues were assessed with a<br />
Tobii eye tracker, including pupil size and blink rate. <strong>The</strong> LifeShirt<br />
System (commercially available through VivoMetrics) measured<br />
ECG and respiration. <strong>The</strong> electrodermal activity sensor from the<br />
Lafayette polygraph was used to measure changes in the electrical<br />
activity of the skin surface. <strong>The</strong>se changes in electrical activity can<br />
be thought of as indicators of imperceptible sweating that signify<br />
sympathetic arousal. In addition, the plethysmograph from the<br />
Lafayette polygraph was used. This photoelectric plethysmograph<br />
measures rapidly occurring relative changes in pulse blood volume<br />
in the finger.<br />
Features of Interest<br />
From the 5 signals collected, 14 features were analyzed as described<br />
in Table 2. Some features have been reviewed sufficiently in the<br />
literature that a direction of change can be hypothesized when<br />
comparing feature values from baseline data to those gathered while<br />
the subject was deceiving. In some cases, the direction of change is<br />
not known with certainty. <strong>The</strong> features that consistently performed<br />
better than chance included: interbeat interval, pulse area, pulse<br />
width, peak-to-peak interval, and left and right pupil diameter. This<br />
feature group comprises cardiac-related features as well as pupil<br />
diameter features, and they will be discussed further here.<br />
Interbeat interval was calculated by first decomposing the ECG<br />
signal into peaks. A method for locating the R-peak (the highest<br />
peak) in the ECG signal was adapted from an algorithm implemented<br />
in the ECGTools package by Gari Clifford, which in turn was<br />
inspired by the literature [28]. <strong>The</strong> highest and lowest points in<br />
the difference signal were found via filtering and segmentation.<br />
Each pair contained an R-peak, located at the time of the maximum<br />
signal value in the interval. Once the peaks were located, the R-to-R<br />
interval was calculated by taking the difference between times at<br />
which successive peaks occurred.<br />
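The R-to-R (interbeat interval) computation described above can be sketched as follows. This is a deliberately simplified stand-in (thresholded local-maximum picking on a toy spike train), not the Clifford/ECGTools algorithm the study adapted; the sampling rate and waveform are invented for illustration.

```python
# Minimal sketch of R-peak picking and interbeat-interval computation.
# Hypothetical sampling rate; real ECG would need filtering first.
fs = 250.0  # samples per second (assumed)

def find_r_peaks(signal, threshold):
    """Return sample indices of local maxima exceeding `threshold`."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

def interbeat_intervals(peaks, fs):
    """R-to-R intervals in seconds: differences between successive peak times."""
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Toy waveform: unit spikes every 200 samples (0.8 s at 250 Hz, i.e., 75 bpm).
sig = [0.0] * 1000
for k in range(100, 1000, 200):
    sig[k] = 1.0

peaks = find_r_peaks(sig, 0.5)          # [100, 300, 500, 700, 900]
ibis = interbeat_intervals(peaks, fs)   # four 0.8-s intervals
```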
In order to calculate the photoplethysmograph (PPG) features, it<br />
was necessary to find the peaks and valleys that defined the signal.<br />
This was done according to mixed state feature extraction derived<br />
from an object tracking algorithm used to track fish movements in<br />
video [29].<br />
Pulse area was calculated as the sum of the PPG signal over one full cycle.<br />
Pulse width was calculated at half of the maximum pulse height,<br />
according to the full-width half-max (FWHM) norm. Peak-to-peak<br />
interval was calculated as the difference between the times at<br />
which two consecutive peaks occurred. Each peak must be part of a<br />
complete cycle, which begins and ends with a valley.<br />
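The three PPG features defined above can be illustrated on a synthetic cycle. The sampling interval and triangular pulse below are invented, and this discrete FWHM simply counts samples above half the maximum rather than interpolating; it is a sketch of the definitions, not the study's implementation.

```python
def pulse_area(cycle):
    """Sum of the PPG signal over one full valley-to-valley cycle."""
    return sum(cycle)

def pulse_width_fwhm(cycle, dt):
    """Duration (s) for which the cycle exceeds half its maximum height."""
    half = max(cycle) / 2.0
    return sum(1 for v in cycle if v > half) * dt

def peak_to_peak(peak_times):
    """Differences between the times of consecutive peaks."""
    return [b - a for a, b in zip(peak_times, peak_times[1:])]

dt = 0.01  # 100 Hz sampling interval (assumed)
# Triangular pulse: rises 0 -> 0.9 over 10 samples, then falls 1.0 -> 0.1.
cycle = [i / 10 for i in range(10)] + [1 - i / 10 for i in range(10)]

area = pulse_area(cycle)             # 10.0 for this triangle
width = pulse_width_fwhm(cycle, dt)  # samples strictly above 0.5, times dt
intervals = peak_to_peak([0.0, 0.8, 1.65])
```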
<strong>The</strong> right and left pupil diameter features were read directly from<br />
the data reported by the Tobii eye tracker.<br />
Measures of Performance<br />
Significance<br />
<strong>The</strong> analysis invokes the following signal model for the measurement<br />
of sensor data from multiple sensors to investigate whether<br />
deception can be discerned through the observation of feature<br />
values.<br />
H0 : r = s0 + n<br />
H1 : r = s1 + n<br />
Here, H0 is the hypothesis that the subject is completely truthful<br />
and H1 is the hypothesis that the subject is deceptive. <strong>The</strong> received<br />
signal, r, is a vector of features in R^N, where N is the number of<br />
features. <strong>The</strong> signal vectors for each hypothesis, s0 and s1, are the<br />
vectors of feature values generated under each hypothesis and were<br />
assumed to possess Gaussian distributions, although this assumption<br />
was not tested statistically. <strong>The</strong> additive noise, n, was assumed to be<br />
additive white Gaussian noise (AWGN) that is identically distributed<br />
under each hypothesis.<br />
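The two-hypothesis model can be simulated as follows for a single scalar feature. The means, noise level, and threshold rule are illustrative assumptions, not values from the study; for equal-variance Gaussians with equal priors, the likelihood-ratio test reduces to thresholding at the midpoint of the two means.

```python
import random

random.seed(0)

# Hypothetical signal model parameters: feature mean s0 under H0 (truthful),
# s1 under H1 (deceptive), plus AWGN identically distributed under both.
s0, s1, sigma_n = 0.0, 1.0, 1.0

def observe(hypothesis):
    """r = s + n for one scalar feature."""
    s = s1 if hypothesis else s0
    return s + random.gauss(0.0, sigma_n)

# Midpoint threshold: the likelihood-ratio test for this symmetric case.
threshold = 0.5 * (s0 + s1)

trials = 20000
correct = 0
for _ in range(trials):
    h = random.randint(0, 1)
    decision = 1 if observe(h) > threshold else 0
    correct += (decision == h)

# Roughly Phi(0.5) ~ 0.69 in expectation for a 1-sigma mean separation.
accuracy = correct / trials
```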
Data for the H0 distribution were gathered from the interview on<br />
residence, which was the first topic discussed in the interview.<br />
<strong>The</strong>se data were gathered either from the entire topical interview<br />
or from immediate post-question periods. Data collected from the<br />
entire topical interview were collected from several seconds prior to<br />
the first question to 20 s after the last question, even though a brief<br />
orienting response to the first question of the interview is expected.<br />
Data collected from post-question intervals were collected from all<br />
questions except the first question of the topical interview, for 20<br />
s beginning at the end of the question. Data for the H1 distribution<br />
were gathered from the deception interview on employment/<br />
religion in one of the two manners described above.<br />
Table 2. Sensor, Description, and Anticipated Change Under Deception for Each Feature Calculated.<br />
Feature                   Sensor       Feature Description                                Expected Change Direction Under Deception<br />
Pupil Diameter (Right)    EyeTracker   Subject right eye pupil diameter, in millimeters   Up<br />
Pupil Diameter (Left)     EyeTracker   Subject left eye pupil diameter, in millimeters    Up<br />
Blink Rate                EyeTracker   Subject eye blink frequency, in hertz              Down<br />
<strong>The</strong> metrics used to describe significance were the t-test and effect<br />
size. Two-tailed t-tests were used with an assumption of equal<br />
variance and an alpha value of 0.05. Cohen’s d measure of effect size<br />
was used. <strong>The</strong> general trend of a feature was assessed by looking at<br />
the median effect size for that feature across a number of subjects.<br />
d = (μ1 − μ0) / sp,  where  sp = √( ((n1 − 1)σ1² + (n0 − 1)σ0²) / (n1 + n0 − 2) )<br />
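Cohen's d with the pooled standard deviation above can be computed as follows; the two samples here are invented for illustration and are not data from the study.

```python
import math

def cohens_d(x1, x0):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n0 = len(x1), len(x0)
    mu1 = sum(x1) / n1
    mu0 = sum(x0) / n0
    var1 = sum((v - mu1) ** 2 for v in x1) / (n1 - 1)  # unbiased variances
    var0 = sum((v - mu0) ** 2 for v in x0) / (n0 - 1)
    pooled = math.sqrt(((n1 - 1) * var1 + (n0 - 1) * var0) / (n1 + n0 - 2))
    return (mu1 - mu0) / pooled

# Hypothetical feature samples: means one pooled-SD apart give d = 1.
deceptive = [1.0, 2.0, 3.0]  # e.g., feature values while deceiving
truthful = [0.0, 1.0, 2.0]   # e.g., baseline (truthful) feature values
d = cohens_d(deceptive, truthful)  # 1.0 for these samples
```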
Detection<br />
Test statistics were calculated as the z-score shown below, where t is<br />
the test statistic, θ(x) is the test data point, μ0 is the mean of the H0<br />
distribution, and σ0 is the standard deviation of the H0 distribution. In<br />
this way, test statistics from individual subjects were appropriately<br />
comparable.<br />
t = (θ(x) − μ0) / σ0<br />
<strong>The</strong>se were used to create ROC curves that encapsulated the three<br />
important quantities associated with any detection algorithm that<br />
indicate how well it is able to detect deception over an ensemble<br />
of subjects: Probability of Detection (PD), Probability of False<br />
Alarm (PFA), and Area Under Curve (AUC). A z-score was used<br />
to compare the AUC to 0.5, the AUC under a curve generated by<br />
random guessing between classes [30]. A similar measure was used<br />
to compare two ROC curves and to judge whether their difference<br />
was statistically significant [31].<br />
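The per-subject z-scoring and the AUC summary can be sketched together. The AUC here is computed by the rank (Mann-Whitney) formulation, which equals the area under the empirical ROC curve swept over all thresholds; the baseline parameters and test values are invented.

```python
def z_score(x, mu0, sigma0):
    """Standardize a test value against the subject's own H0 distribution."""
    return (x - mu0) / sigma0

def auc(deceptive_stats, truthful_stats):
    """P(random deceptive stat > random truthful stat), ties counted half."""
    wins = 0.0
    for d in deceptive_stats:
        for t in truthful_stats:
            wins += 1.0 if d > t else (0.5 if d == t else 0.0)
    return wins / (len(deceptive_stats) * len(truthful_stats))

# Hypothetical pooled test statistics after z-scoring (mu0 = 0, sigma0 = 1):
h1_stats = [z_score(x, 0.0, 1.0) for x in [1.2, 0.8, 2.0, 1.5]]   # deceptive
h0_stats = [z_score(x, 0.0, 1.0) for x in [0.1, -0.3, 0.9, 0.4]]  # truthful
roc_auc = auc(h1_stats, h0_stats)  # 15/16 = 0.9375 for these values
```

A detector at chance scores 0.5 on this measure, which is why the study compares each AUC against 0.5.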
Results<br />
Statistical Analyses<br />
Does the type of interview affect the interviewer’s ability to detect<br />
deception? Interviewer assessment accuracy was analyzed with<br />
binomial tests to ascertain if accuracy was better than 50%.<br />
Pulse Area                          Photoplethysmograph (PPG)                                  Area of signal in one beat                                              Down<br />
Pulse Amplitude                     PPG                                                        Difference between peak amplitude and trough amplitude                  Down<br />
Pulse Width                         PPG                                                        FWHM of peak                                                            Down<br />
Peak-to-Peak Interval               PPG                                                        Time between successive peaks                                           Up<br />
Electrodermal Activity (EDA) Amplitude   EDA Finger Electrode                                  Peak amplitude of skin resistance                                       Up<br />
EDA Duration                        EDA Finger Electrode                                       Time for skin resistance to return to preresponse level                 Up<br />
EDA Line Length                     EDA Finger Electrode                                       Length of skin resistance line from peak to recovery                    ---<br />
Interbeat Interval                  ECG (LifeShirt)                                            Time between successive R-peaks of cardio signal                        Up<br />
Respiratory Inhale/Exhale Ratio     Inductive Plethysmograph (LifeShirt Respiratory Sensor)    Ratio of the trough-to-next-peak time interval to the peak-to-next-trough time interval   Up<br />
Respiratory Cycle Time              LifeShirt Respiratory Sensor                               Time between successive peaks in respiratory signal                     Up<br />
Respiratory Amplitude               LifeShirt Respiratory Sensor                               Difference between peak amplitude and trough amplitude                  Down<br />
Analyses were performed separately for each interview type.<br />
Seventy-seven participants were included in the analyses. Data<br />
from one participant were not included because the person was<br />
deemed ineligible. <strong>The</strong> results are shown in Table 3. Interviewers<br />
were able to detect participant deception significantly better than<br />
chance when interviewing in the fostering style (n = 38, accuracy<br />
= 71%, p = 0.01), but not the forcing style (n = 39, accuracy = 62%,<br />
p = 0.20).<br />
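One plausible form of the binomial tests reported above is the exact two-sided test against chance (p = 0.5), sketched below on the fostering result (27 correct calls out of 38); the function is an illustration, not necessarily the exact procedure the study used.

```python
from math import comb

def binom_test_two_sided_p50(successes, n):
    """Exact two-sided binomial test against p = 0.5 (symmetric case)."""
    upper_tail = sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n
    return min(1.0, 2.0 * upper_tail)  # doubling is valid only for p = 0.5

# Fostering style: 27 correct of 38. Well under 0.05, consistent with
# the reported p = 0.01 (forcing, 24 of 39, is not significant).
p = binom_test_two_sided_p50(27, 38)
```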
Table 3. Interviewer Accuracy at Deception Detection.<br />
            Forcing        Fostering       All<br />
Accuracy    62% (24/39)    71%* (27/38)    66%* (51/77)<br />
Interviews were conducted by two different interviewers, and<br />
there were no significant differences in accuracy by interviewer,<br />
p > 0.05. Interviewer 2’s accuracy in detecting lies about religion/<br />
employment was significantly better than chance (accuracy = 70%,<br />
p = 0.01). <strong>The</strong>re was no significant difference in detecting deception<br />
between interviewers on the basis of interview style.<br />
Participant anxiety was assessed for changes due to the interview.<br />
Participants were significantly more anxious during the interview<br />
(M = 35.69, SD = 11.22) than they were before (M = 30.22, SD =<br />
8.73) (t(77) = -4.63, p < 0.05). <strong>The</strong>re were no significant differences<br />
in anxiety change scores by interview type (fostering M = -4.13, SD<br />
= 11.44; forcing M = -6.75, SD = 9.38), or lie condition (deceptive M<br />
= -6.83, SD = 11.43; honest M = -3.89, SD = 9.07).<br />
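The anxiety comparison above is a paired t-test on per-subject change scores, which can be sketched as follows; the data here are invented (the study reports t(77) = -4.63 on its own data), and the negative t arises because anxiety rose from before to during the interview.

```python
import math

def paired_t(before, after):
    """Paired t-test: mean of per-subject differences over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1  # (t statistic, degrees of freedom)

# Hypothetical anxiety scores for six subjects, before and during interview.
before = [30, 28, 33, 31, 29, 30]
during = [36, 33, 37, 38, 34, 33]
t, df = paired_t(before, during)  # t is negative: anxiety increased
```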
Feature Analyses<br />
Significance Testing<br />
How well do the physiologic sensors predict deception when analyzed<br />
individually? Test statistics generated using residence interview<br />
post-question interval data for background and the mean of the<br />
employment/religion interview post-question interval data were<br />
correlated with dichotomous criteria:<br />
• Interview Type (forcing coded 0, fostering coded 1).<br />
• Deception State (deceptive coded 0, truthful coded 1).<br />
<strong>The</strong> results can be seen in Table 4. Interbeat interval, peak-to-peak<br />
interval, pulse area, and pulse width were significantly correlated<br />
with deception state; this is indicated by the bold type in the table.<br />
Features that did not have significant correlations are not listed<br />
in the table. Pulse amplitude showed a positive correlation with<br />
interview type but not with deception state.<br />
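The correlations with dichotomous criteria above are point-biserial correlations, which equal the ordinary Pearson correlation between the test statistic and the 0/1 coding. A minimal Pearson sketch follows; the test statistics and codings are invented.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data with deception state coded 0 = deceptive, 1 = truthful,
# as in the study. Deceptive subjects here have larger test statistics,
# so the correlation comes out strongly negative.
test_stats = [2.1, 1.8, 2.5, 0.4, 0.6, 0.2]
state = [0, 0, 0, 1, 1, 1]
r = pearson_r(test_stats, state)
```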
Detector ROC Curves<br />
Deception detectors were built from distributions garnered from<br />
each interval option for each feature. <strong>The</strong> AUC for detectors that<br />
performed significantly better than chance at the α = 0.05 level are<br />
reported in Table 5 along with the sample size for the given ROC<br />
curve. LifeShirt data were not recoverable for 2 out of 77 subjects,<br />
lowering the sample size for features from the LifeShirt sensors.<br />
Further, one subject had poor quality ECG data during a portion of<br />
Table 4. Deception Post-Question Test Statistic Correlations with<br />
Interview Type and Participant Deception State. Positive correlation<br />
indicates that participants in the given state have smaller test<br />
statistics. *p &lt; 0.05 **p &lt; 0.01; ns = not significant<br />
Feature                       Interview Type    Deception State<br />
Interbeat Interval, ECG            ns               0.289*<br />
Peak-to-Peak Interval, PPG         ns               0.243*<br />
Pulse Amplitude                  0.233*               ns<br />
Pulse Area                         ns               0.324**<br />
Pulse Width                        ns               0.225*<br />
Table 5. Significant Deception Detector Results for Each of 8 Features<br />
Measured with Both Intervals. Detectors are statistically different<br />
from chance at the α = 0.05 level. Nonsignificant (ns) detectors are not<br />
shown. All test statistics were generated with a mean test value.<br />
Feature                                  Post-Question      Entire Topical Interview<br />
Interbeat Interval, ECG                  0.703 (N = 75)     0.794 (N = 74)<br />
Right Pupil Diameter                          ns            0.720 (N = 77)<br />
Left Pupil Diameter                           ns            0.691 (N = 77)<br />
Respiratory Inhale/Exhale (I/E) Ratio    0.647 (N = 75)          ns<br />
Respiratory Cycle Time                        ns            0.304 (N = 75)<br />
Pulse Area                               0.693 (N = 77)     0.778 (N = 77)<br />
Pulse Width                              0.663 (N = 77)     0.697 (N = 77)<br />
Peak-to-Peak Interval, PPG               0.673 (N = 77)     0.762 (N = 77)<br />
one of the interviews; this prevented analysis of the interview as a<br />
whole, but did not disrupt the post-question analysis, for the interbeat<br />
interval feature. In all cases where a ROC curve was significant for<br />
both entire topical interview intervals and post-question intervals,<br />
the AUC for the entire topical interview-generated ROC curve was<br />
higher. <strong>The</strong> two best performing detectors for both interval options<br />
were from the interbeat interval and peak-to-peak interval features.<br />
For the entire topical interview data interval, these both produced<br />
curves with AUC higher than 0.75. For both interval options, both of<br />
these curves performed significantly better than chance, but they<br />
were not significantly different from each other.<br />
Does the type of interview influence the accuracy of physiologic sensors<br />
in detecting deception? Correlations between deception post-question<br />
interval test statistics and deception state were computed. Three<br />
features had positive correlations with deception state under the forcing<br />
interview style. <strong>The</strong>se were interbeat interval (0.355, p < 0.05), pulse<br />
area (0.401, p < 0.05), and peak-to-peak interval (0.410, p < 0.01). Only<br />
under the forcing interview style did feature values from the deception<br />
interview on employment/religion correlate with the deception state.<br />
Deception detectors were generated for all features using data<br />
from the entire topical interview. <strong>The</strong> detectors for all features are<br />
compared for the two different interview styles via their area under<br />
curve in Figure 1. <strong>The</strong> maximum area under curve possible is one,<br />
and a detector that performs at chance will have an area of 0.5. <strong>The</strong><br />
pulse-based features (interbeat interval, pulse width, peak-to-peak<br />
interval, and pulse area) as well as the pupil diameter features in the<br />
forcing interview style performed the best, all with an area above<br />
0.7. All of these ROC curves were significantly better than chance. In<br />
the fostering interview style, only the pulse area feature performed<br />
significantly better than chance.<br />
[Figure 1: bar chart of area under the curve for detector families,<br />
plotted per feature for the forcing and fostering interview styles.]<br />
Figure 1. AUC for ROC curves generated from entire topical interview<br />
intervals for deception detection, comparing results for forcing and<br />
fostering interview styles.<br />
Table 6. Forcing Interview Style Detector Results Summary.<br />
Detectors were generated for both conditions and both interval<br />
types. For each combination, the features producing significant<br />
detectors are reported with the AUC. All detectors were built with<br />
test statistics garnered from mean test values.<br />
Condition   Interval                   Features Producing Significant Detectors<br />
Deception   Post-question              Interbeat Interval, ECG (0.730); Respiratory I/E Ratio (0.790);<br />
                                       Pulse Area (0.775); Pulse Width (0.775); Peak-to-Peak Interval (0.785)<br />
Deception   Entire topical interview   Interbeat Interval, ECG (0.863); Right Pupil Diameter (0.720);<br />
                                       Respiratory Cycle Time (0.250); Pulse Area (0.846); Pulse Width (0.751);<br />
                                       Peak-to-Peak Interval (0.855)<br />
Significant detectors for the forcing interview style are summarized<br />
in Table 6. Detectors that performed significantly better than<br />
chance are listed along with their AUC for each condition, interval<br />
type, and feature. <strong>The</strong> detector with the highest area is the interbeat<br />
interval deception detector that operates on the entire topical<br />
interview distributions. It has an area of 0.863. For both interval<br />
types, interbeat interval, pulse area, pulse width, and peak-to-peak<br />
interval make significant deception detectors.<br />
Only the pulse area feature yielded a significant detector (AUC =<br />
0.649) for the fostering interview style, and only when entire topical<br />
interview intervals were used for deception detection.<br />
<strong>The</strong> best performing feature from all combinations of interval<br />
choice and interview style was interbeat interval. <strong>The</strong> deception<br />
detection performance of this feature was enhanced when only<br />
data from the forcing style interview were used. In Figure 2, the<br />
entire topical interview interval detector for interbeat interval<br />
from forcing interview participants is shown in comparison with the<br />
comparable detector from the fostering interview participants.<br />
<strong>The</strong> forcing interview style detector performed significantly better<br />
than chance; the detector from the fostering participants was not<br />
statistically significantly better than chance.<br />
[Figure 2: ROC curves, probability of detection versus probability of<br />
false alarm: InterBeatInterval - Forcing (0.863); InterBeatInterval -<br />
Fostering (0.658).]<br />
Figure 2. ROC curves comparing interview styles for entire topical<br />
interview interval data for interbeat interval, ECG. <strong>The</strong> fostering<br />
detector was not significantly better than chance.<br />
Discussion<br />
We have demonstrated the promise of certain sensor signals,<br />
features, and their analysis parameters in detecting lies. Further,<br />
we have demonstrated the effects of interview style on the<br />
detection of deception in an interview recorded along with<br />
physiological signals.<br />
Sensor signal analysis results included higher successful<br />
classification and more significant results among participants who<br />
underwent the forcing interview style than the fostering interview<br />
style. This was apparent through several different analysis<br />
approaches. While five features produced significant detectors for<br />
the forcing interview style subset, there was only one feature with<br />
a significant detector for the fostering subset. <strong>The</strong> highest area<br />
under the curve was 0.863, produced by the interbeat interval<br />
feature calculated from the ECG signal of forcing interview<br />
participants. When deception test statistics were correlated with<br />
deception state for participants who underwent each interview<br />
style, significant correlations only occurred in the subset that<br />
underwent the forcing interview style.<br />
<strong>The</strong> features that consistently performed better than chance<br />
included: interbeat interval, pulse area, pulse width, peak-to-peak<br />
interval, left pupil diameter, and right pupil diameter. This feature<br />
group comprises cardiac-related features measured from both the<br />
ECG and PPG signals as well as pupil diameter features. Significant<br />
detectors were also produced by the respiratory I/E ratio and<br />
respiratory cycle time features. <strong>The</strong>se will not be discussed as they<br />
were not significant in the correlation analysis, nor did they show<br />
a moderate or large effect size. <strong>The</strong> cardiac and pupil features will<br />
be discussed further here.<br />
A number of cardiac features showed significant correlations with<br />
deception state, had moderate-to-high effect size differences, or<br />
generated significant detectors. Interbeat interval and peak-to-peak<br />
interval decreases were significant as measured by effect<br />
size on both short (20 s) and long (entire 5-min interview) time<br />
scales. <strong>The</strong>se features were also significantly correlated with<br />
deception state and they produced detectors with the highest<br />
area for both data interval definitions. Although the differences<br />
between the two features were not significant, interbeat interval<br />
had consistently higher correlations, effect sizes, and areas under<br />
curve. <strong>The</strong> strong results shown by the interbeat interval and<br />
peak-to-peak interval features are indicators that deception can<br />
be measured effectively by an increase in heart rate. This is in<br />
contrast to previous studies that have found a heart rate decrease<br />
when participants are deceptive.<br />
<strong>The</strong>re has been some debate regarding the heart rate (HR)<br />
response to deception. Some have found HR responses to<br />
deception to be biphasic [8], [32]. <strong>The</strong>re is an initial increase in<br />
HR for the first 4 s following question onset, followed by a decrease<br />
until approximately the 7th poststimulus second, after which the HR<br />
returns to baseline. Others have found HR deceleration to be an<br />
indicator of deception [33], [34]. <strong>The</strong>re also has been discussion<br />
in the literature as to the nature of the HR response to different<br />
types of deception tests [5], [32], [10]. The authors have noted<br />
that the direct and often accusatory questions that comprise<br />
the comparison question test may produce defensive responses,<br />
evidenced in part by HR acceleration, whereas the stimuli used<br />
in a guilty knowledge test may produce orienting responses,<br />
evidenced in part by HR deceleration. The HR increase found<br />
in the present study could be indicative of a defensive response<br />
since the test format is similar to that of the comparison question<br />
test. A change in time window does not explain the discrepancy<br />
between the HR decrease reported in the literature and the HR<br />
increase measured in this study. The responses recorded here also<br />
do not follow the biphasic theory, as an HR increase was observed<br />
both in the 4-s window immediately after question onset as well as<br />
in the window from 6 to 12 s after question onset.<br />
Other cardiac measures also showed promise. Both pulse area and<br />
pulse width had significant correlations with deception state as<br />
well as significant detectors. Pulse amplitude, however, was a less<br />
promising feature. The pulse amplitude feature is normalized with<br />
respect to baseline signal value, while the pulse area feature was<br />
not implemented in this way. This may be why pulse amplitude<br />
was not a significant predictor of deception while pulse area was.<br />
A decreasing DC signal component could cause an erroneously<br />
significant result for the pulse area feature, and this possibility<br />
should be avoided in future implementations by eliminating the<br />
DC component of the signal or by subtracting a baseline or valley<br />
value from each measure of pulse area.<br />
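A baseline-subtracted pulse area of the kind suggested above can be sketched as follows; the function name, sampling rate, and idealized pulse shape are illustrative assumptions, not the study's implementation:

```python
from math import pi, sin

def pulse_area(samples, fs, subtract_baseline=True):
    """Trapezoidal area of one PPG pulse. When subtract_baseline is True,
    the pulse's valley (minimum) value is removed first, so a drifting DC
    component does not inflate the area."""
    base = min(samples) if subtract_baseline else 0.0
    corrected = [s - base for s in samples]
    return sum((a + b) / 2.0 for a, b in zip(corrected, corrected[1:])) / fs

fs = 100.0                                      # assumed sampling rate, Hz
pulse = [sin(pi * i / 99) for i in range(100)]  # idealized single pulse
drifted = [s + 0.3 for s in pulse]              # same pulse on a DC offset
```

Subtracting each pulse's valley value makes the area invariant to a slowly varying DC component, which is the failure mode described above.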
Pupil diameter measures in the left and right eye did not show<br />
significant correlations with deception state, but they exhibited<br />
moderate mean effect size differences, and on the entire topical<br />
interview interval, they generated detectors that performed better<br />
than chance. Detectors generated for entire topical interview<br />
background intervals and tested with data from post-question<br />
intervals were also not significant. Significance on entire topical<br />
interview interval detectors may indicate that participants’ pupil<br />
diameter was more likely to increase in response to a question<br />
that required an answer more detailed than ‘yes’ or ‘no.’ The post-question<br />
interval data distributions are drawn from select yes/no<br />
questions in the experiment.<br />
Two data intervals were considered here, and there were several<br />
instances in which the entire topical interview data intervals<br />
were more informative than the 20-s post-question intervals.<br />
(For an example, see Table 5.) The benefits of using entire topical<br />
interview intervals for data collection were not observed when<br />
the background distribution was gathered from the entire topical<br />
interview if the test data came only from post-question intervals.<br />
This suggests that because there is more dialogue involved in<br />
an interview as compared with a more traditional detection of<br />
deception test in which each question is answered with a simple<br />
yes/no, there may be more physiological activity and more<br />
information that can be extracted from an entire topical interview<br />
as opposed to a 20-s post-question interval. This is the case even<br />
when those questions bear directly on the matter at hand<br />
(e.g., “Are you being truthful when you tell me that you work as a<br />
retail salesperson?”).<br />
Study design may have also impacted the utility of these data<br />
intervals. Although the wording of the questions asked during the<br />
interviews was similar to those asked in a traditional deception<br />
detection test, the test structure was different. In one type of<br />
deception detection test, the comparison question test, each<br />
relevant question is preceded by a comparison question, and<br />
decisions regarding veracity are made by comparing responses to<br />
the two question types [35]. In the present study, the questions<br />
used for comparison were asked early in the interview. The<br />
deception questions were presented toward the middle of the<br />
interview. It may be difficult to see a change in post-question<br />
autonomic responding between questions that are presented far<br />
apart during the interview.<br />
Conclusions and Future Work<br />
Interview style impacts interviewer assessment accuracy.<br />
Interviewer accuracy at detecting deception was better than<br />
chance in the fostering interview style. Rapport developed<br />
during a fostering interview may facilitate the interviewer’s<br />
ability to detect deceit. In the forcing interview style, interviewer<br />
assessment accuracy was not statistically different from chance.<br />
A forcing style interview amplifies physiological signals indicative<br />
of deception. When the forcing interview style was used, sensor<br />
signals yielded detectors that operated significantly better than<br />
chance, an advantage over interviewer assessment, whose accuracy<br />
in this style was not better than chance.<br />
Physiologic information elicited during topical interviews may<br />
be more indicative of deception than physiologic information<br />
gathered from periods of structured yes/no questions, although<br />
the sources for physiologic changes may be more difficult to<br />
identify. This trade-off should be a topic of further study.<br />
Heart rate and other pulse-based features show good capability<br />
in deception detection. Our results indicate a need for better<br />
understanding of the orienting and defensive responses and when<br />
to expect each.<br />
With regard to pupil diameter, our results are suggestive, but not<br />
as strong as the evidence that others have shown.<br />
Interview-based deception detection techniques should be<br />
pursued further in cases where deception detection with<br />
physiological sensors is desired. Placement of comparison<br />
questions should be reevaluated.<br />
Draper Laboratory continues to expand its facilities, resources,<br />
and expertise to pursue important challenges in this area,<br />
including unstructured interview analysis, remote sensing of<br />
physiology, and contextual factors in educing information.<br />
References<br />
[1] Educing Information: Interrogation: Science and Art: Foundations for<br />
the Future: Phase 1 Report, Intelligence Science Board, Washington,<br />
DC, National Defense Intelligence College, 2006.<br />
[2] Allen, J., “Photoplethysmography and Its Application in Clinical<br />
Physiological Measurement,” Physiol. Meas., 2007, pp. R1-R39.<br />
[3] Bell, B.G., D.C. Raskin, C.R. Honts, J.C. Kircher, “The Utah Numerical<br />
Scoring System,” Polygraph, Vol. 28, 1999, pp. 1-9.<br />
[4] Dionisio, D.P., E. Granholm, W.A. Hillix, W.F. Perrine, “Differentiation<br />
of Deception Using Pupillary Responses as an Index of Cognitive<br />
Processing,” Psychophysiology, 2001, pp. 205-211.<br />
[5] Elaad, E. and G. Ben-Shakhar, “Finger Pulse Waveform Length<br />
in the Detection of Concealed Information,” International Journal of<br />
Psychophysiology, 2006, pp. 226-234.<br />
[6] Handler, M. and D.J. Krapohl, “The Use and Benefits of the<br />
Photoelectric Plethysmograph in Polygraph Testing,” Polygraph,<br />
2007, pp. 18-25.<br />
[7] Kircher, J.C., S.D. Kristjansson, M.K. Gardner, A.K. Webb, Human<br />
and Computer Decision-Making in the Psychophysiological Detection of<br />
Deception, University of Utah, Salt Lake City, 2005.<br />
[8] Podlesny, J.A. and D.C. Raskin, “Effectiveness of Techniques and<br />
Physiological Measures in the Detection of Deception,”<br />
Psychophysiology, Vol. 15, 1978, pp. 344-358.<br />
[9] Siegle, G.J., N. Ichikawa, S. Steinhauer, “Blink Before and After You<br />
Think: Blinks Occur Prior to and Following Cognitive Load Indexed<br />
by Pupillary Responses,” Psychophysiology, 2008, pp. 679-687.<br />
[10] Verschuere B., G. Crombez, A. De Clercq, E.H.W. Koster,<br />
“Autonomic and Behavioral Responding to Concealed Information:<br />
Differentiating Orienting and Defensive Responses,” Psychophysiology,<br />
Vol. 41, 2004, pp. 461-466.<br />
[11] Granhag, P.A. and L.A. Strömwall, The Detection of Deception in Forensic<br />
Contexts, Cambridge University Press, Cambridge, UK, 2004.<br />
[12] Vrij, A., Detecting Lies and Deceit: The Psychology of Lying and the<br />
Implications for Professional Practice, Wiley, Chichester, England,<br />
2000.<br />
[13] DePaulo, B.M., J.L. Lindsay, B.E. Malone, L. Muhlenbruck, K. Charlton,<br />
H. Cooper, “Cues to Deception,” Psychological Bulletin, Vol. 129,<br />
2003, pp. 74-118.<br />
[14] DePaulo, B.M. and W.L. Morris, “Discerning Lies from Truth:<br />
Behavioural Cues to Deception and the Indirect Pathway of<br />
Intuition,” The Detection of Deception in Forensic Contexts, P.A.<br />
Granhag and L.A. Strömwall, eds., Cambridge University Press,<br />
Cambridge, UK, 2004.<br />
[15] Köhnken, G., “Statement Validity Analysis and the Detection of the<br />
Truth,” The Detection of Deception in Forensic Contexts, P.A. Granhag<br />
and L.A. Strömwall, eds., Cambridge University Press, Cambridge,<br />
UK, 2004.<br />
[16] Vrij, A., “Criteria-Based Content Analysis: A Qualitative Review of<br />
the First 37 Studies,” Psychology Public Policy and Law, Vol. 11, 2005,<br />
pp. 3-41.<br />
[17] Ben-Shakhar, G. and E. Elaad, “The Validity of Psychophysiological<br />
Detection of Information with the Guilty Knowledge Test: A Meta-<br />
Analytic Review,” Journal of Applied Psychology, Vol. 88, 2003, pp.<br />
131-151.<br />
[18] Honts, C.R., “The Psychophysiological Detection of Deception,”<br />
The Detection of Deception in Forensic Contexts, P.A. Granhag and L.A.<br />
Strömwall, eds., Cambridge University Press, Cambridge, UK, 2004.<br />
[19] Bull, R., “Training to Detect Deception from Behavioural Cues:<br />
Attempts and Problems,” The Detection of Deception in Forensic<br />
Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge<br />
University Press, Cambridge, UK, 2004.<br />
[20] Frank, M.G. and T.H. Feeley, “To Catch a Liar: Challenges for Research<br />
in Lie Detection Training,” Journal of Applied Communication<br />
Research, Vol. 31, 2003, pp. 58-75.<br />
[21] Vrij, A., “Guidelines to Catch a Liar,” The Detection of Deception in<br />
Forensic Contexts, P.A. Granhag and L.A. Strömwall, eds., Cambridge<br />
University Press, Cambridge, UK, 2004.<br />
[22] Vendemia, J.M., M.J. Schilliaci, R.F. Buzan, E.P. Green, S.W. Meek,<br />
“Credibility Assessment: Psychophysiology and Policy in the<br />
Detection of Deception,” American Journal of Forensic Psychology,<br />
Vol. 24, 2006, pp. 53-85.<br />
[23] U.S. Army Intelligence and Interrogation Handbook: The Official Guide<br />
on Prisoner Interrogation, Department of the Army, The Lyons Press,<br />
Guilford, CT, 2005.<br />
[24] KUBARK Counterintelligence Interrogation, Central Intelligence<br />
Agency, Washington, DC, 1963.<br />
[25] Effective Interviewing and Interrogation Techniques, 2nd ed., Gordon,<br />
N.J. and W.L. Fleisher, eds., Academic Press, Burlington, MA, 2006.<br />
[26] Colwell, K., C.K. Hiscock, A. Memon, “Interviewing Techniques and<br />
the Assessment of Statement Credibility,” Applied Cognitive<br />
Psychology, Vol. 16, 2002, pp. 287-300.<br />
[27] Colwell, K., C. Hiscock-Anisman, A. Memon, A. Rachel, L. Colwell,<br />
“Vividness and Spontaneity of Statement Detail Characteristics as<br />
Predictors of Witness Credibility,” American Journal of Forensic<br />
Psychology, Vol. 25, 2007, pp. 5-30.<br />
[28] Pan, J. and W.J. Tompkins, “A Real-Time QRS Detection Algorithm,”<br />
IEEE Transactions on Biomedical Engineering, Vol. 32, No. 3, 1985, pp. 230-236.<br />
[29] Schell, C., S.P. Linder, J.R. Zeider, “Tracking Highly Maneuverable<br />
Targets with Unknown Behavior,” Proceedings of the IEEE, Vol. 92,<br />
No. 3, 2004, pp. 558-574.<br />
[30] Hanley, J.A. and B.J. McNeil, “The Meaning and Use of the Area under<br />
a Receiver Operating Characteristic (ROC) Curve,” Radiology, 1982,<br />
pp. 29-36.<br />
[31] Hanley, J.A. and B.J. McNeil, “A Method of Comparing the Areas<br />
Under Receiver Operating Characteristic Curves Derived from the<br />
Same Cases,” Radiology, 1983, pp. 839-843.<br />
[32] Raskin, D.C., “Orienting and Defensive Reflexes in the Detection of<br />
Deception,” The Orienting Reflex in Humans, H.D. Kimmel, E.H. van<br />
Olst, and J.F. Orlebeke, eds., Erlbaum Associates, Hillsdale, NJ, 1979,<br />
pp. 587-605.<br />
[33] Patrick, C.J. and W.G. Iacono, “A Comparison of Field and Laboratory<br />
Polygraphs in the Detection of Deception,” Psychophysiology, Vol.<br />
28, 1991, pp. 632-638.<br />
[34] Podlesny, J.A. and C.M. Truslow, “Validity of an Expanded-Issue<br />
(Modified General Question) Polygraph Technique in a Simulated<br />
Distributed-Crime-Roles Context,” Journal of Applied Psychology,<br />
Vol. 78, 1993, pp. 788-797.<br />
[35] Raskin, D.C. and C.R. Honts, “The Comparison Question Test,”<br />
Handbook of Polygraph Testing, M. Kleiner, ed., Academic Press, San<br />
Diego, CA, 2002, pp. 1-47.<br />
Detection of Deception in Structured Interviews Using Sensors and Algorithms<br />
Meredith G. Cunha is a Member of the Technical Staff in the Fusion, Exploitation, and Inference Technologies<br />
Group. She has experience with data analysis and pattern classification of hyperspectral, biochemical sensor, and<br />
physiological data. Her recent work is in physiological and psychophysiological signal processing. Mrs. Cunha<br />
received the Bachelor of Science and Master of Engineering degrees from the Electrical Engineering and Computer<br />
Science Department at MIT.<br />
Alissa C. Clarke is a research consultant at MRAC LLC. She has worked in collaboration with Draper Laboratory on<br />
several studies in the area of deception detection, supporting the development of research protocols, coordinating<br />
the recruitment and testing of participants, and aiding in the preparation of final reports. Ms. Clarke received an<br />
A.B. in Psychology and Health Policy from Harvard University.<br />
Jennifer Z. Martin is an Advisor and Senior Research Scientist with MRAC. She is currently involved with several<br />
research projects, including work on intelligence interviewing and cues to deception or malintent (the intent<br />
or plan to cause harm). She is an author of the Theory of Malintent, which drives the Department of Homeland<br />
Security (DHS) Future Attribute Screening Technologies (FAST) program, and helped devise the malintent research<br />
paradigm. Prior to her work with MRAC, she excelled in both corporate and academic settings. Dr. Martin received<br />
a Ph.D. in Experimental Social Psychology from Ohio University.<br />
Jason R. Beauregard is a Research Associate with MRAC. Since joining the firm, he has managed several research<br />
projects spanning a variety of topics and utilizing unique protocols. His responsibilities include supervising protocol<br />
planning, development, implementation, and operation of human subject testing. Prior to his employment with<br />
MRAC, he served as a Case Manager, Intervention Specialist, and Assistant Program Director of a Court Support<br />
Services Division (CSSD)-sponsored diversionary program in the state of Connecticut. Mr. Beauregard received a<br />
B.A. in Psychology from the University of Connecticut.<br />
Andrea K. Webb is a Psychophysiologist at Draper Laboratory. She has an extensive background in<br />
psychophysiology, eye-tracking, deception detection, quantitative methods, and experimental design. Her work at<br />
Draper has focused on security screening, interviewing, autonomic specificity, and post-traumatic stress disorder<br />
(PTSD). She is currently Principal Investigator for a study examining autonomic responses in people with PTSD<br />
and is the data analysis lead for a project funded by DHS. Dr. Webb earned a B.S. in Psychology from Boise State<br />
University and M.S. and Ph.D. degrees in Educational Psychology from the University of Utah.<br />
Asher A. Hensley is a Radar Systems Engineer at the Telephonics Corporation. His background includes work in<br />
sea clutter modeling, detection, antenna blockage processing, and tracking. His primary research interests are in<br />
machine learning and computer vision. Mr. Hensley received a B.Sc. in Electrical Engineering from Northeastern<br />
University and is currently pursuing a Ph.D. in Electrical Engineering from SUNY Stony Brook.<br />
Nirmal Keshava is the Group Leader for the Fusion, Exploitation, and Inference Technologies group at Draper<br />
Laboratory. His interests include the development of statistical signal processing techniques for the analysis of<br />
physiological and neuroimaging measurements, as well as the fusion of heterogeneous data in decision algorithms.<br />
He received a B.S. in Electrical Engineering from UCLA, an M.S. in Electrical and Computer Engineering from Purdue<br />
University, and a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University.<br />
Daniel J. Martin, Ph.D., ABPP, is the Director of MRAC LLC, a research and consulting firm that specializes in bridging<br />
the gap between empirical knowledge and corporate or government applications. He is also the Director of Research<br />
for the DHS’s FAST program and serves as experimental lead on several research studies investigating security<br />
screening and interviewing. His research interests include the effectiveness of different interviewing methodologies<br />
in eliciting information and the psychological and physiological cues to deception and malintent. Dr. Martin joined the<br />
faculty at Yale University in 1999. There he conducted several multisite clinical trials in substance abuse treatment.<br />
He also has over 10 years of experience training hundreds of individuals in motivational interviewing with resistant<br />
populations. Dr. Martin is board certified in Clinical Psychology by the American Board of Professional Psychology.<br />
[Figure: Planner complexity with operator interaction and increasing environment uncertainty, with example systems ranging from teleoperated and active military robots to aircraft autopilots and Urban Challenge vehicles.]<br />
Remotely operated robotic systems have demonstrated life-saving<br />
utility during U.S. military operations, but the Department of Defense<br />
(DoD) has also seen the limitations of ground and aerial robotic systems<br />
that require many people for operations and maintenance. Over time,<br />
the DoD envisions more capable robotic systems that autonomously<br />
execute complex missions with far less human interaction. To enable<br />
this transition, the DoD needs to clearly understand the trade-offs that<br />
must be made when choosing to develop an autonomous system. There<br />
are many circumstances where actions that are straightforward for a<br />
manned system to accomplish are enormously difficult, and therefore<br />
costly, for machine systems to handle.<br />
This developmental paper addresses the need to define understandable<br />
requirements for performance and the implications of those requirements<br />
on the system design. Instead of attempting to specify a “level” of<br />
“autonomy” or overall “intelligence,” the authors propose a starting<br />
set of quantifiable, and testable, requirements that can be applied<br />
to any autonomous robotic system. These range from the dynamics<br />
of the operating environment to the overall expected assertiveness of<br />
the system when faced with uncertain conditions. We believe a solid<br />
understanding of these expectations will not only benefit the system<br />
development, but also be a key component of building trust between humans<br />
and robotic systems.<br />
Requirements-Driven Autonomous System Test<br />
Design: Building Trusting Relationships<br />
Troy B. Jones and Mitch G. Leammukda<br />
Copyright © 2010 by the Instrumentation Test and Evaluation Association (ITEA). Presented at the 15th Annual Live-Virtual-<br />
Constructive Conference, El Paso, TX, January 11 - 14, 2010. Sponsored by: ITEA<br />
Abstract<br />
Formal testing of autonomous systems is an evolving practice. For these systems to transition from operating in restricted (or completely<br />
isolated) environments to truly collaborative operations alongside humans, new test methods and metrics are required to build trust<br />
between the operators and their new partners. There are currently no general standards for performance and safety testing of autonomous<br />
systems. However, we propose that there are several critical system-level requirements to consider for an autonomous system that can<br />
efficiently direct the test design to focus on potential system weaknesses: environment uncertainty, frequency of operator interaction, and<br />
level of assertiveness. We believe that by understanding the effects of these system requirements, test engineers and systems engineers will<br />
be better poised to develop validation and verification plans that expose unexpected system behaviors early, ensure a quantifiable level<br />
of safety, and ultimately build trust with collaborating humans. To relate these concepts to physical systems, examples will be related to<br />
experiences from the Defense Advanced Research Projects Agency (DARPA) Urban Challenge autonomous vehicle race project in 2007 and<br />
other relevant systems.<br />
Introduction<br />
The adoption of autonomous systems in well-defined and/or<br />
controlled operational environments is common; commercial and<br />
military aircraft routinely rely on advanced autopilot systems for<br />
the majority of flight duties, manufacturing operations around<br />
the world employ vast robotic systems, and even individuals rely<br />
on increasingly “active” safety systems in automobiles to reduce<br />
injuries from collisions.<br />
On the surface, based on these trends, adding levels of autonomy to<br />
any of these existing systems and deploying new, even more capable<br />
systems seems not only inevitable but also a straightforward extension of<br />
existing development, testing, and deployment methods. However,<br />
until fundamental changes in social, legal, and engineering practice<br />
are made, the remarkable autonomous system advances being<br />
demonstrated at universities and research laboratories will remain<br />
educational experiments. We see at least three challenges:<br />
1. People must trust an autonomous system in situations where<br />
it may harm them: Arguably, people already trust complex<br />
autonomous systems, such as aircraft autopilots, but passengers<br />
know that a human is supervising those systems constantly.<br />
2. Legal ramifications of injuries or deaths resulting from the<br />
actions of autonomous systems must be clearly defined:<br />
When an autonomous system causes a death (which certainly<br />
will happen), what party is held liable for that injury or death?<br />
3. There must be well-defined standards for testing that an autonomous<br />
system operates in the required environments with<br />
“acceptable” performance: Defining what is or is not “acceptable”<br />
performance for an autonomous system ties directly into how well<br />
people will ultimately trust that system and will ease the definition<br />
of fair legal responsibilities.<br />
In this paper, we examine perhaps the easiest of these topics:<br />
proposed methods for specifying and ultimately testing the<br />
performance of autonomous systems. As engineers, we are<br />
responsible for supplying the supporting evidence to the<br />
customer that new autonomous systems will meet expectations of<br />
performance and safety.<br />
Draper Laboratory has worked in autonomous system evaluation<br />
for many years [1]. This paper describes a new approach for<br />
autonomous system requirements development and test design<br />
based largely on experiences gained during our participation in the<br />
DARPA Urban Challenge autonomous vehicle race held in 2007.<br />
These concepts are still in development and will be refined as we<br />
evaluate more systems and collaborate with other members of the<br />
engineering community.<br />
Autonomous System Characteristics<br />
There are as many definitions of “autonomous system” as there<br />
are papers that define the term. Instead of creating yet another incomplete<br />
definition, we propose that there is a common set of traits<br />
that can be specified for any automated/autonomous/robotic/<br />
intelligent system. These traits help establish what performance is<br />
expected of the system (thereby providing a basis for system-level<br />
requirements), and effectively point out the most critical areas for<br />
test and evaluation.<br />
These characteristics are intentionally structured in easily<br />
comprehended terms with the goal of improving how operators and<br />
observers understand the actions of an autonomous system. The<br />
following sections explain these characteristics and include<br />
examples of how they drive autonomous system requirements<br />
and testing. Unfortunately, these characteristics are highly coupled,<br />
and not necessarily linearly, but understanding their<br />
interrelationships is a key area of ongoing work.<br />
Environment Uncertainty<br />
We live in an uncertain world: Our perceptual abilities (visual,<br />
auditory, olfactory, touch) are constantly (and unconsciously) at<br />
work keeping us informed about changes in our environment. When<br />
designing an autonomous system to function in this uncertain<br />
world, we need to carefully understand the environment in which<br />
we expect the system to operate. Furthermore, we propose that<br />
environmental uncertainty can be classified adequately<br />
by answering the question: “What reaction time do we expect<br />
from the system to detect and avoid collisions with objects in the<br />
environment?”<br />
Above all other characteristics, environmental uncertainty is the<br />
primary driver for how much perceptual ability an autonomous<br />
system requires to do its job. How well does the system need to<br />
“see” the environment in order to react to potential hazards and<br />
accomplish its mission?<br />
This discussion of environment uncertainty is restricted to visual<br />
types of perception, but we believe autonomous systems will need<br />
to take advantage of other “senses” to eventually meet our (human)<br />
expectations of performance.<br />
Perception Coverage<br />
We define the perception coverage as the percentage of spherical<br />
volume around a system that is pierced by a perceptual sensing<br />
system. For an easy-to-understand example, we begin by estimating<br />
the perception coverage for a human visual system.<br />
Human Visual Perception Coverage<br />
Since we desire a nondimensional metric, we will choose an arbitrary<br />
radius, in this case 100 m, for the spherical volume and project<br />
how much of the volume is seen by human eyes. This graphical<br />
construction, shown in Figure 1, indicates that human vision<br />
at a given instant of time covers about 40% of the volume<br />
around the head. Of course, we can rapidly scan our environment<br />
by rotating our heads and bodies, thus providing a complete visual<br />
scan in seconds.<br />
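Because the coverage metric is defined by rays emanating from the system, the fraction of spherical volume pierced equals the fraction of solid angle covered. A sketch of that calculation follows; the 180-deg horizontal by ±53-deg vertical field of view is an assumed illustrative value chosen to reproduce the roughly 40% figure, not a measured property of human vision:

```python
import random
from math import pi, radians, sin

def coverage_fraction(az_extent_deg, el_half_deg):
    """Analytic fraction of the sphere covered by a field of view spanning
    az_extent_deg of azimuth and +/- el_half_deg of elevation."""
    return (radians(az_extent_deg) / (2 * pi)) * sin(radians(el_half_deg))

def coverage_monte_carlo(az_extent_deg, el_half_deg, n=200_000, seed=0):
    """Cross-check by sampling directions uniformly over the sphere:
    z uniform in [-1, 1] and azimuth uniform in [-pi, pi]."""
    rng = random.Random(seed)
    half_az = radians(az_extent_deg) / 2
    z_max = sin(radians(el_half_deg))   # |elevation| <= e  <=>  |z| <= sin(e)
    hits = sum(1 for _ in range(n)
               if abs(rng.uniform(-pi, pi)) <= half_az
               and abs(rng.uniform(-1, 1)) <= z_max)
    return hits / n

# Assumed illustrative field of view, chosen to reproduce ~40%:
frac = coverage_fraction(180.0, 53.0)
```

The Monte Carlo version generalizes to fields of view that are not simple azimuth-elevation rectangles.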
What does this mean with regard to environmental uncertainty?<br />
Certainly humans are very adept at operating in highly uncertain<br />
conditions and do so with a high degree of success. Therefore, we<br />
propose that the human instantaneous perceptual coverage (visual<br />
in this case) is an intuitive upper bound on the same metric for an<br />
autonomous system.<br />
Having this large amount of input perceptual information at all<br />
times gives us excellent awareness of changes in our environment.<br />
It has been shown [2] that humans see, recognize, and react to a<br />
visual stimulus within 400-600 ms of its onset. This range<br />
then is a practical lower limit on how quickly we should expect an<br />
autonomous system to react to changes in the environment.<br />
Figure 1. Human visual perceptual coverage, approximately 40%.<br />
Autonomous System Perception Coverage<br />
We now select an example of an autonomous system that operates<br />
in a high uncertainty environment, the MIT Urban Challenge LR3<br />
autonomous vehicle, Talos, shown in Figure 2, and estimate the same<br />
metric. This system completed approximately 60 mi of completely<br />
autonomous driving in a low-speed race with human-driven and<br />
other autonomous vehicle traffic. To do this, Talos has a myriad of<br />
perceptual sensor inputs [3]:<br />
• 1 x Velodyne HDL-64 360-deg 3D scanning LIDAR.<br />
• 12 x SICK planar scanning LIDAR.<br />
• 5 x Point Grey Firefly MV cameras.<br />
• 15 x Delphi automotive cruise control radars.<br />
Figure 2. MIT Urban Challenge vehicle Talos, with sensors labeled: ACC radar (15), skirt SICK LIDAR (7), Velodyne HDL, pushbroom SICK LIDAR (5), and cameras (6).<br />
The most useful of these perceptual inputs for detecting obstacles<br />
and vehicle tracking [3] is the Velodyne HDL-64. It contains an array<br />
of 64 lasers mounted in a head unit that covers a 26-deg vertical<br />
range [4]. Motors spin the entire head assembly at 15 revolutions<br />
per second, generating approximately 66,000 range samples per<br />
revolution, or about 1 million samples per second of operation.<br />
Each full revolution of the Velodyne returned a complete 3D point<br />
cloud of distances to objects all around the vehicle, and it was by far<br />
the most popular single sensor to have in the Urban Challenge (if<br />
your team could afford it).<br />
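The stated data rates can be checked with simple arithmetic; the per-laser azimuth spacing at the end is an implied figure derived from these numbers, not a manufacturer specification:

```python
lasers = 64
rev_per_s = 15
samples_per_rev = 66_000   # combined across all 64 lasers (from the text)

samples_per_s = samples_per_rev * rev_per_s        # 990,000, "about 1 million"
samples_per_laser_rev = samples_per_rev / lasers   # ~1,031 ranges per laser sweep
azimuth_res_deg = 360.0 / samples_per_laser_rev    # ~0.35 deg between samples
```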
A single sensor that returns continuous data around the entire<br />
vehicle eliminates the need to construct a 3D environment model<br />
out of successive line scans from multiple planar LIDAR (such as<br />
the SICK units), which is a computationally intensive and error-prone<br />
process.<br />
Performing the same calculation of perception coverage for a<br />
Velodyne HDL-64 LIDAR involves representing the geometry of<br />
the sensing beams. For the human vision system, we assumed the<br />
resolution of the image data is practically infinite, but the LIDAR is<br />
restricted to 64 discrete beams of distance measurement that are<br />
swept around a center axis. To perform the calculation, we assumed<br />
that each beam has a nominal diameter of 1/8 in and does not<br />
diffract, and that each revolution of a beam sweeps a continuous disk<br />
of range data, when in fact each revolution is a series of laser pulses.<br />
Based on those (generous) input assumptions, we created a<br />
graphical construction of perception coverage for the Velodyne,<br />
which is shown in Figure 3. We discovered that a single scan is<br />
approximately 0.1% coverage, that is, 400 times less than a single<br />
instant of human visual information. Despite the large disparity<br />
with human ability, the Velodyne proved to be an adequate primary<br />
sensor in the context of the Urban Challenge.<br />
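As a rough cross-check on that graphical construction, the coverage can also be approximated analytically. The Python sketch below treats each beam's revolution as a thin ring of solid angle, evaluated at an assumed radius of 100 m near the sensor's maximum range; the exact elevation limits of the beams are likewise our assumption:

```python
import math

N_BEAMS = 64                       # discrete laser beams [4]
BEAM_DIAMETER_M = 0.125 * 0.0254   # assumed 1/8-in beam that does not diffract
FOV_DEG = 26.0                     # vertical field of view [4]
EVAL_RANGE_M = 100.0               # assumed evaluation radius near max range

# Spread the beams evenly over the vertical field of view (the exact
# elevation limits are an assumption made for this sketch)
elevations = [math.radians(-24.0 + i * FOV_DEG / (N_BEAMS - 1))
              for i in range(N_BEAMS)]

# Each revolution sweeps a thin ring of solid angle; the ring's angular
# thickness at the evaluation radius is beam diameter / range
ring_width_rad = BEAM_DIAMETER_M / EVAL_RANGE_M
total_solid_angle = sum(2.0 * math.pi * math.cos(e) * ring_width_rad
                        for e in elevations)

coverage = total_solid_angle / (4.0 * math.pi)   # fraction of the full sphere
print(f"perception coverage ~ {coverage:.2%}")
```

Under these (generous) assumptions the fraction of the surrounding sphere touched by the beams comes out near the 0.1% figure quoted above.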
Talos had several methods to detect and avoid collisions with<br />
objects that reduced its effective reaction time [3]. However, for this<br />
example, we limit the reaction time estimate based on the rules of<br />
the DARPA Urban Challenge, which placed a 30-mph speed limit on<br />
all vehicles [5]. If we consider the case of two vehicles in opposing<br />
lanes of travel, we have a maximum closing speed of 60 mph (27<br />
m/s). Talos used approximately 60 m of the Velodyne’s 100+ m<br />
range [4] for perception, and therefore would have just over 2 s in<br />
which to react to an oncoming vehicle in the wrong lane.<br />
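The arithmetic behind this reaction window is simple enough to sketch; the 60-m perception distance and 30-mph limit come from the text, and the rest is unit conversion:

```python
MPH_TO_MPS = 0.44704                 # exact miles-per-hour to m/s conversion
speed_limit_mps = 30 * MPH_TO_MPS    # DARPA 30-mph limit [5]
closing_speed = 2 * speed_limit_mps  # two vehicles approaching head-on

perception_range_m = 60.0            # portion of Velodyne range used by Talos
reaction_window_s = perception_range_m / closing_speed

print(f"closing speed {closing_speed:.1f} m/s, "
      f"reaction window {reaction_window_s:.2f} s")
```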
Clearly, there is a relationship between the operating environment<br />
of a system and the perceptual capabilities needed to operate in that<br />
environment, and we illustrate this by using the two examples given<br />
and the addition of a third point: We assume that in order to react<br />
to uncertainty instantly, a system would need 100% perceptual<br />
coverage (and the ability to process and decide actions instantly).<br />
These data points and a qualitative relationship between them are<br />
shown in Figure 4.<br />
It is logical that decreasing the uncertainty in the environment<br />
should reduce the need for perception, and conversely that increasing<br />
uncertainty should increase it.<br />
While not the final answer to how to specify requirements for an<br />
autonomous system perception system, we believe it is a start that<br />
leads to metrics for testing the perception coverage of the system.<br />
In fact, it is very compelling that vehicles in the Urban Challenge<br />
were able to safely complete the race with so little perceptual<br />
information overall.<br />
Figure 3. Perception coverage for Velodyne HDL-64 LIDAR<br />
system, ~0.1%.<br />
Figure 4. Variation in perception coverage as driven by the environment uncertainty. (Plot: perception coverage (%) versus potential time to collision (s), from 0 to 2 s, with points for human visual perception at the human reaction time and the Velodyne HDL 360-deg LIDAR at the Urban Challenge required reaction time, rising toward 100% coverage at zero time to collision.)<br />
Environmental Uncertainty Test Concepts<br />
Clear requirements on perception coverage and potential time-to-collision<br />
will bound the test design for environmental stimuli. It is<br />
up to the test engineering team to design experiments that validate<br />
the ability of the system to meet performance goals and also stress<br />
the system to find potential weaknesses.<br />
If a system is designed to operate in a low uncertainty environment<br />
all the time (e.g., on a factory floor welding metal components), the<br />
perception-related tests required are limited to proper detection of<br />
the work piece and the welded connections. If an operator enters<br />
a work zone of the system as defined by a rigid barrier, the system<br />
must shut down immediately [6].<br />
On the other extreme, an autonomous system operating in a dynamic<br />
environment, be it on the ground or in the air, needs the perceptual<br />
systems stressed early and as often as possible. As we discovered<br />
during the DARPA Urban Challenge experience, changing the<br />
operating environment of the system always revealed flaws in the<br />
assumptions made during the development of various algorithms.<br />
Perception systems should be tested thoroughly against many kinds<br />
of surfaces moving at speeds appropriate to the environment, as<br />
both surface properties and velocity will impact the accuracy of the<br />
detection and classification of objects [3]. In addition, test cases for<br />
tracking small objects, if applicable, can be very challenging due to<br />
gaps in coverage and latency of the measurements.<br />
Frequency of Operator Interaction<br />
When developing any system, it is critical to understand how the<br />
users interact with it. This information can be captured in “Concept<br />
of Operations” documents that specify when users are expecting to<br />
input information into or get information out of a system. This same<br />
concept must be applied to autonomous systems with a slight shift<br />
in implications.<br />
When we are developing an autonomous system, we need to<br />
establish expectations on how much help a human operator is<br />
expected to provide during normal operations. Ideally, an entirely<br />
autonomous system would require a single mission statement and<br />
it would execute that mission without further assistance. However,<br />
just as people sometimes need additional inputs during a task, an<br />
autonomous system requires the same consideration.<br />
On the other end of the spectrum, an autonomous system can<br />
degenerate into an entirely remotely-controlled system. The human<br />
operator is constantly in contact with the system, providing direct<br />
commands to accomplish the task.<br />
In this section, we explore the impact of specifying the required<br />
level of operator interaction. This characteristic in particular has<br />
far-reaching implications, and unlike environmental uncertainty,<br />
is fully controlled by the customer and developer of the system.<br />
A customer can choose (for example) to require an autonomous<br />
system to need only a single operator interaction per year, but that<br />
requirement will significantly impact development time and cost.<br />
Planner Complexity<br />
If the autonomous system is intended to operate with very little<br />
operator interaction, then that system must be able to effectively<br />
decide what to do on its own as the environment and mission evolve.<br />
We will refer to this capability generically as “planning” rather<br />
than “intelligence.” The planner operation is central to how well<br />
autonomous systems operate in uncertain environments. We will<br />
review some examples of planning complexity and how it relates<br />
to operator inputs. Additionally, when ranking complexity, we need<br />
to consider the operating environment of the system. A planning<br />
system that operates in a highly uncertain environment must adapt<br />
quickly to changes in that environment, whereas low uncertainty<br />
environments can be traversed with perhaps only a single plan for<br />
the entire mission.<br />
Aircraft Autopilot<br />
Everyday autopilot systems in commercial and military aircraft<br />
perform certain planning tasks based on pilot commands. Modern<br />
autopilot systems have many potential “modes” of operation, such<br />
as maintaining altitude and heading or steering the aircraft to follow<br />
a set course of waypoints [7]. Even though the pilot must initiate<br />
these modes, once activated, the autopilot program can make<br />
course changes to follow the desired route and therefore is planning<br />
vehicle motion. However, an aircraft autopilot program will not<br />
change the course of the aircraft to avoid a collision with another<br />
aircraft [8]. Instead, the pilot is issued an “advisory” to change<br />
altitude and/or heading.<br />
With this basic understanding of what an autopilot is allowed to<br />
do, we rank the planner complexity of these systems as low. Since<br />
most aircraft with autopilot systems operate in air traffic controlled<br />
airspace, we also believe the environmental uncertainty is low,<br />
placing the autopilot planner complexity on a qualitative graph as<br />
shown in Figure 5.<br />
Figure 5. Variation in planner complexity as a function of required frequency of operator interaction. (Plot: planner complexity versus frequency of operator interaction, from 1/mission to teleoperated, with points for human, Urban Challenge, aircraft autopilot, and active military robots on curves of increasing environment uncertainty.)<br />
Urban Challenge Autonomous System<br />
Vehicles that competed in the Urban Challenge were asked to<br />
achieve a difficult set of goals with a single operator interaction<br />
with the system per mission. After initial setup, the vehicle was<br />
required to negotiate an uncertain environment of human-driven<br />
and autonomous traffic vehicles without assistance.<br />
Accomplishing this performance implied a key top-level design<br />
requirement for the planning system: it must be capable of<br />
generating entirely new vehicle motion plans and executing them<br />
automatically. For example, both the 4th place finisher, MIT, and<br />
the 1st place finisher, Carnegie Mellon (Boss), relied on planning<br />
systems that were constantly creating new motion plans based on<br />
the perceived environment [9], [10]. In the case of Talos, the vehicle<br />
motion planning system was continuously searching for safe plans<br />
that would achieve the goal of the next mission waypoint, while also<br />
maintaining plans for bringing the vehicle to a safe stop at all<br />
times. This strategy was flexible and allowed the vehicle to execute<br />
sudden stops if a collision was anticipated.<br />
This type of flexibility, however, comes at a high complexity cost, at<br />
least when compared with traditional systems that are not allowed<br />
to replan their actions automatically without human consent. The<br />
motion plans were generated continuously at a 10-Hz rate and<br />
could represent plans up to several seconds into the future [9].<br />
The dynamic nature of the planning was founded on incorporating<br />
randomness in the system, meaning that there was no predefined<br />
finite set of paths from which the system was selecting. Instead, it<br />
was constantly creating new possible plans and selecting them<br />
based on the environmental and rule constraints.<br />
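As an illustration of this idea only (not the actual Talos planner, which used a sophisticated sampling-based motion planner [9]), a toy one-dimensional replanning cycle with a guaranteed safe-stop fallback might look like:

```python
import random

def is_safe(plan_end, obstacles, standoff=1.0):
    """Hypothetical 1-D stand-in for a collision check: reject any plan
    that ends within the standoff distance of an obstacle."""
    return all(abs(plan_end - obs) > standoff for obs in obstacles)

def replan(position, goal, obstacles, n_samples=200):
    """One planning cycle (Talos ran such cycles at 10 Hz): sample random
    candidate motions, keep only safe ones, and always retain a
    stop-in-place fallback plan."""
    candidates = [position]                     # guaranteed safe-stop plan
    for _ in range(n_samples):
        end = position + random.uniform(-2.0, 2.0)
        if is_safe(end, obstacles):
            candidates.append(end)
    # Choose whichever safe plan makes the most progress toward the goal
    return min(candidates, key=lambda end: abs(goal - end))
```

Because new candidates are drawn at random every cycle, there is no predefined finite set of paths; each call can discover a different safe plan, and if none is found the vehicle simply stops in place.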
We feel this adaptive type of planning system is the evolutionary<br />
path to greater autonomous vehicle capability and it represents<br />
a high level of complexity. But the Urban Challenge systems still<br />
operated in a controlled environment with moderate levels of<br />
uncertainty, so we rank the planner complexity well above the<br />
autopilot case and on a higher environment uncertainty curve.<br />
Human Planning<br />
For the upper bound of the relationship, we rank human planning<br />
processes as extremely adaptable and highly complex, assigning them<br />
the highest complexity ranking for the most uncertain environments,<br />
and likely off the notional planner complexity scale<br />
as shown.<br />
Remotely Operated Systems<br />
The lowest end of the planning complexity curve is occupied by<br />
remotely operated systems. These systems depend on a human<br />
operator to make all planning decisions. For this ranking, we consider<br />
only the capabilities of the base system without the operator. We<br />
understand that indeed a great advantage of remotely operated<br />
systems is the planning capability of the human operator. Currently,<br />
most active robots used by the military fall into this classification.<br />
Verification Effort<br />
The frequency of interaction with an autonomous system is a<br />
powerful parameter expressing how independent we expect the system<br />
to be over time (and is also tightly related to the level of assertiveness<br />
discussed in the next section). Intuitively, we expect that the more independent<br />
a system is, the more time must be spent performing testing to verify<br />
system performance and safety. The following examples will help<br />
create another qualitative relationship between verification effort<br />
and the required frequency of operator interaction.<br />
Aircraft Autopilot<br />
As an example, we first consider the very formal verification process<br />
performed for certification of aircraft autopilot software (and other<br />
avionics components), as recommended by the Federal Aviation<br />
Administration (FAA) [11]. Autopilots are robust autonomous systems<br />
flying millions of miles without incident [12]. Organizations are<br />
required to meet the guidelines set forth in DO-178B [13] (and the<br />
many ancillary documents) in order to achieve autopilot software<br />
certification. The intent of these standards is to provide a rigorous<br />
set of processes and tests that ensure the safety of software products<br />
that operate the airplane. The process of achieving compliance<br />
with DO-178B and obtaining the certification for new software is<br />
so involved that entire companies exist to assist with/perform the<br />
process or create software tools to help generate software that is<br />
compliant with the standards [14]-[16]. Therefore, we classify<br />
aircraft avionics software as being a “very high” level of verification<br />
effort, not the highest, but certainly close. And remember, we<br />
classified the complexity of the planning software as low.<br />
For this example, we will quantify an aircraft autopilot as needing<br />
input from the human operators several to many times in a<br />
given flight. The pilots are responsible for setting the operational<br />
mode of the autopilot and are required to initiate complex<br />
sequences like autolanding [7]. Therefore, we place the aircraft<br />
autopilot on a chart of verification effort versus frequency of<br />
operator interaction as shown in Figure 6.<br />
Figure 6. Verification effort and communications bandwidth as a function of operator interaction. (Plot: verification effort and bandwidth of comm link versus frequency of operator interaction, from 1/mission to teleoperated, with points for aircraft avionics*, the Urban Challenge, and active military robots.)<br />
Urban Challenge Autonomous System<br />
DARPA required all entrants in the Urban Challenge to have only a<br />
single operator interaction per mission in order to compete in the<br />
race. The operators could stage the vehicle at the starting zone,<br />
enter the mission map, arm the vehicle for autonomous driving, and<br />
walk away. At that point, the operators were intended to have no<br />
further contact, radio or physical, with the vehicle until the mission<br />
was completed [5].<br />
Due to the highly experimental nature of the Urban Challenge<br />
and the compressed schedule, most, if not all, teams performed a<br />
tightly coupled “code → code while testing → code” iterative loop<br />
of development. This practice was certainly true of the MIT team<br />
and left little room for evaluating the effects of constant software<br />
changes on the overall performance of the system. In other words,<br />
the team was continuously writing software with no formal process<br />
for updating the software on the physical system. Therefore,<br />
while the vehicles met the goals set forth by DARPA for operator<br />
interaction, we estimate the level of verification on each vehicle<br />
was very low, as shown in Figure 6. This highlights the large gap that<br />
exists between a demonstration system that drives in a controlled<br />
environment and a deployable mass-market or military system.<br />
As described in the previous section, the software on Urban<br />
Challenge vehicles was constantly creating and executing new<br />
motion plans. However, this capability implies that tests must<br />
adequately verify the performance of a system that does not have<br />
a finite set of output actions based on a single set of inputs. This<br />
verification discussion is beyond the scope of this paper, but is of<br />
great interest to Draper Laboratory and will continue to be an area<br />
of research for many.<br />
Remotely Operated Systems<br />
Most, if not all, currently deployed military and law enforcement<br />
“robots” or “unmanned systems” are truly operated remotely. A<br />
human operator at a terminal is providing frequent to continuous<br />
input commands to the system to accomplish a mission. While<br />
it is certainly required to verify that these systems perform their<br />
functions, that verification testing process can focus on the accurate<br />
execution of the operator commands. We therefore consider<br />
remotely operated systems at the lowest end of the verification<br />
effort scale, certainly nonzero, but far from the aircraft avionics<br />
case. It is possible, however, that some unmanned aircraft systems<br />
will execute automated flights back to home base on certain failure<br />
conditions. Therefore, those systems would likely need verification<br />
levels commensurate with aircraft autopilot systems.<br />
Communications Bandwidth<br />
The expected interactions of the operator with the system also<br />
have a direct effect on how much data must be exchanged between<br />
the operator and the system during the mission. Higher operator<br />
interaction drives higher bandwidth requirements, while lower<br />
interaction saves bandwidth but increases the required<br />
verification effort.<br />
Urban Challenge Autonomous System<br />
As the minimum case, we have the Urban Challenge type<br />
autonomous vehicles, which were required to have only a dedicated<br />
“emergency-stop” transceiver active during the race. This radio<br />
allowed the race monitoring center to remotely pause or completely<br />
disable any vehicle on the course, as well as give those same options<br />
to the dedicated chase car drivers that were following each vehicle<br />
around the course [5]. This kind of link did not exchange much<br />
information; the GPS coordinates of the vehicle and some bits to<br />
indicate current operating mode were sufficient. Therefore, we can<br />
locate the bandwidth requirements for these vehicles on the very<br />
low end of the scale as shown in Figure 6.<br />
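For a sense of scale, a minimal status packet of the kind described, GPS coordinates plus a mode field, fits in a few tens of bytes. The layout below is our assumption for illustration, not the actual DARPA e-stop protocol:

```python
import struct

# Hypothetical minimal e-stop status packet: latitude and longitude as
# doubles plus a one-byte operating mode, in network byte order
lat, lon, mode = 33.676, -117.731, 2
packet = struct.pack("!ddB", lat, lon, mode)

print(len(packet), "bytes per update")
# Even at a 10-Hz update rate this is only ~170 B/s of telemetry
```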
Remotely Operated Systems<br />
At the opposite end of the scale, we have systems that are<br />
representative of all the actively deployed “robotic” or “unmanned”<br />
systems used in military operations. These systems are remotely<br />
operated, requiring a constant high-bandwidth data link to a<br />
ground station that allows an operator to see live video and other<br />
system data at all times. These types of links are required to satisfy<br />
the human operator’s need for rapidly updating data to operate<br />
the system safely. Therefore, we place these systems highest on the<br />
bandwidth requirement scale.<br />
Operator Interaction Test Concepts<br />
With an understanding of how the operators are expected to<br />
interact with the system, the performance of the system with regard<br />
to this metric can be measured directly. At all times during any<br />
system-level tests, the actions of the operators must be recorded<br />
and compared against the expected values.<br />
During the Urban Challenge, we observed that many teams had<br />
dedicated vehicle test drivers. Over months of involvement, these<br />
drivers had become comfortable with the level of help their vehicle<br />
would require in many scenarios. A practiced vehicle<br />
test driver would allow the autonomous system to proceed in<br />
situations a less experienced test driver would deem dangerous<br />
and take control of the vehicle. This observation is an example of<br />
how different drivers trusted the systems they interacted with and it<br />
highlights the need to understand this relationship.<br />
To transition more autonomous systems into daily use, the time<br />
for developing that trust must be shortened from months or weeks<br />
into hours, or perhaps even minutes. Every day, we routinely estimate<br />
the actions of others around us and trust that they will execute tasks<br />
much as we would. When driving on a road, we all<br />
assume that others around us are following the rules of that road as<br />
expected; we routinely trust our lives to complete strangers.<br />
Indeed, it is a daunting task to conjecture what will be required to<br />
ever achieve the verification of a completely autonomous vehicle<br />
driving in a general city setting. Aircraft avionics benefit from a very<br />
strict operating set of conditions and intentional air traffic control to<br />
mitigate the chance of collisions, but ground vehicles have no such<br />
aids and operate in a far more complex and dynamic environment.<br />
Level of Assertiveness<br />
Finally, we discuss the idea of an autonomous system being<br />
assertive: How much leeway should the system be given in<br />
executing the desired mission? This is another characteristic that is<br />
entirely controllable by the customer and the development team.<br />
It is inversely related to the previously discussed frequency of<br />
operator interaction in that a system intended to operate for long<br />
periods without assistance must necessarily be assertive in mission<br />
execution.<br />
The intent of specifying assertiveness is to give the operators and<br />
collaborating humans a feel for how persistent a given system will<br />
be in completing the mission. This “feel” may be a time span over<br />
which the system “thinks” about different options for continuing<br />
a mission in the face of an obstacle. It may also include physical<br />
behaviors that let the system scan the situation from a different<br />
viewpoint with its perceptual system in order to resolve a perceived<br />
ambiguity in what the system is seeing.<br />
Object Classification Accuracy<br />
We feel that for an autonomous system to be assertive in executing<br />
a mission, it must not only see obstacles in the path of<br />
the vehicle but also classify what those obstacles are. For<br />
example, if a truck-sized LIDAR-equipped ground vehicle encounters<br />
a long row of low bushes, it will “see” these bushes as a point cloud of<br />
distance measurements with a regular height. Those bushes, for all<br />
practical purposes, will look exactly like a concrete wall to a LIDAR,<br />
and the vehicle will be confronted with a planning decision: find a<br />
way around this obstacle or drive over it. To an outside observer, this<br />
decision is trivial (assuming the property owner is not around), but<br />
it is a real and difficult problem in autonomous system deployment.<br />
The DARPA Urban Challenge mitigated the issue of object<br />
classification by carefully selecting the rules to make the problem<br />
more tractable. For example, rules dictated that the race course<br />
would only contain other cars and static objects such as traffic<br />
cones or barrels. The distinction between static objects and cars<br />
was important due to different standoff distance requirements for<br />
the two types of objects. Vehicles were allowed to pass much closer<br />
to static objects (including parked cars) than moving vehicles.<br />
In the case of Talos, the classification of objects was performed<br />
based solely on the tracked velocity of the object. This type of<br />
classification avoided the need to attempt to extract vehicle<br />
specific geometry from LIDAR and camera data, but also<br />
contributed to a low-speed collision with the Cornell system [17].<br />
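In that spirit, a velocity-only classifier reduces to a single threshold test; the sketch below is illustrative, and the threshold value is our assumption rather than the one used in Talos:

```python
def classify(tracked_speed_mps, moving_threshold_mps=1.0):
    """Velocity-only object classification in the spirit of Talos: anything
    tracked above the (assumed, illustrative) speed threshold is treated as
    a vehicle; everything else, including parked cars, is treated as static."""
    return "vehicle" if tracked_speed_mps > moving_threshold_mps else "static"

# A parked car and a traffic barrel are indistinguishable under this scheme
print(classify(0.0))   # static
print(classify(8.0))   # vehicle
```

The appeal of this scheme is that it needs no geometry extraction from LIDAR or camera data; its weakness, as the Cornell collision showed, is that a briefly stationary vehicle is misclassified as a static object.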
Unlike the previous system characteristics, we have only the Urban<br />
Challenge example, but we feel qualitative curves can still<br />
be constructed to show a relationship between classification<br />
accuracy and assertiveness as shown in Figure 7. Notice that we<br />
also feel that the need to increase levels of classification accuracy<br />
is a function of the environment uncertainty: Systems that operate<br />
in a low uncertainty environment can be very assertive with a low<br />
level of classification accuracy.<br />
At the lowest end of the scale, we place a zero assertiveness<br />
system: It will never change the operating plan without interaction<br />
from an operator because the operator is making all classification<br />
decisions. Examples of zero assertiveness systems are remotely<br />
operated robots and aircraft autopilots, both of which require<br />
operator interaction to change plans.<br />
We estimate most Urban Challenge systems have low classification<br />
accuracy in a moderately uncertain environment. Based on<br />
experience with the Talos system, we estimate that it classified<br />
objects correctly around 20% of the time, and the assertiveness<br />
was intentionally skewed toward the far end of the scale, but the<br />
system would eventually stop all motion if no safe motion plans<br />
were found.<br />
Finally, we include a not quite 100% rating for human classification<br />
accuracy for the most uncertain environments at the “never ask<br />
for help” end of the scale.<br />
As shown, we feel that there is much work remaining to achieve<br />
practical autonomous systems that can complete missions in<br />
uncontrolled environments without a well-defined method for<br />
operators to assist the system.<br />
Assertiveness Test Concepts<br />
Object classification was an important part of the Urban<br />
Challenge testing processes. Test scenarios were developed to<br />
intentionally provide a mixed set of vehicle and static obstacles<br />
during development. Other team members (and even other Urban<br />
Challenge systems) provided live traffic vehicles in a representative<br />
Figure 7. Variation of classification accuracy as a function of assertiveness. (Plot: classification accuracy (%) versus assertiveness, from “stop & wait for input” to “never ask for help,” with points for aircraft autopilot, the Urban Challenge, and human, approaching 100%, on curves of increasing environment uncertainty.)<br />
“urban” environment at the former El Toro Marine Corps Air Station<br />
in Irvine, California.<br />
Another valuable source of classification data came from the<br />
human-driven commutes to and from test sites. The software<br />
architecture of the Talos system allowed real-time recording of all<br />
system data that could be played back later. This allowed algorithm<br />
testing with real system data on any developer computer [3]. The<br />
vehicle perception systems were often left operating in many<br />
types of general traffic scenarios that were later used to evaluate<br />
classification performance.<br />
For the MIT team, testing for assertiveness did not happen until<br />
near the end of the Urban Challenge project, as it was a concept<br />
that grew out of testing just prior to the race. The Talos team and<br />
others [10], [3] implemented logic in their systems designed to<br />
“unstick” the vehicles and continue on the mission. In order to test<br />
these features, the test designer must have a working knowledge<br />
of how the assertiveness of the system should vary as a function of<br />
operating conditions.<br />
When the Talos vehicle failed to make forward progress, a chain of<br />
events would start increasing the assertiveness level of the system<br />
incrementally. This was done by relaxing planning constraints that<br />
the vehicle was maintaining, such as staying within the lane or road<br />
boundaries and large standoff distances to objects. This gave the<br />
planning system a chance to recalculate a plan that may result in<br />
forward progress. These relaxations of constraints would escalate<br />
until eventually, if no plan were found, the system would reboot all<br />
the control software and try again [3].<br />
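The escalation described above can be sketched as a simple ladder of relaxations; the specific levels and the `try_plan` callback are illustrative, not the Talos implementation:

```python
# Illustrative escalation ladder; the levels paraphrase the behavior
# described in [3], but the structure and callback are our own sketch
ESCALATION = [
    "baseline constraints",
    "relax lane-keeping constraint",
    "relax road-boundary constraint",
    "shrink obstacle standoff distances",
    "restart control software and retry",
]

def unstick(try_plan):
    """Escalate through the relaxations until one yields a motion plan."""
    for level, relaxation in enumerate(ESCALATION):
        plan = try_plan(relaxation)
        if plan is not None:
            return level, relaxation, plan
    return None
```

For example, a `try_plan` that only succeeds once standoff distances shrink would return level 3, leaving the final software-restart rung untried.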
Conclusions<br />
We have proposed and given examples for how to categorize the top-level<br />
requirements for the performance of an autonomous system.<br />
These characteristics are intended to apply to any automated/<br />
intelligent/autonomous system by describing expected behaviors<br />
that in turn specify the required performance on lower level system<br />
capabilities, thereby providing a basis for testing and analysis.<br />
Environmental uncertainty is the primary driver for the overall<br />
perceptual needs of the system. Systems that operate in highly<br />
uncertain environments must be able to recognize and react to<br />
objects from any direction at a response time sufficient to avoid<br />
collisions. Estimating a metric of perception coverage reveals<br />
that current state-of-the-art LIDAR systems provide far less<br />
perception information than human vision and allow only seconds<br />
or less of collision detection time, yet may be sufficient depending<br />
on the required operational environment. Testing the system for<br />
varying levels of environment uncertainty must be a focus of any<br />
autonomous system verification; experience indicates that ground-based<br />
systems in particular are very sensitive to environmental<br />
uncertainty.<br />
The frequency of operator interaction is a controlling parameter<br />
that has a direct effect on several key system abilities: planner<br />
complexity, verification effort, and communications bandwidth.<br />
Motion planning systems capable of continuously creating new plans<br />
in response to environment changes have an inherently nonfinite<br />
state space and therefore need new types of verification testing and research.<br />
A system intended to operate with minimal operator input<br />
allows the communications bandwidth to be reduced, whereas<br />
teleoperated systems with constant operator interaction require<br />
more robust links.<br />
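The bandwidth gap between the two operating modes can be made concrete with a hedged estimate (all figures below are illustrative assumptions): constant teleoperation needs a compressed video downlink, while supervisory control needs only occasional short command messages.

```python
# Illustrative comparison (assumed numbers) of link bandwidth for constant
# teleoperation (streaming video) vs. minimal supervisory input (waypoints).

def teleop_bw_bps(width_px, height_px, fps, bits_per_px, compression):
    """Video downlink rate a teleoperator needs, after compression."""
    return width_px * height_px * fps * bits_per_px / compression

def supervisory_bw_bps(msgs_per_min, bytes_per_msg):
    """Uplink rate for occasional waypoint/command messages."""
    return msgs_per_min / 60.0 * bytes_per_msg * 8

video = teleop_bw_bps(640, 480, 15, 24, compression=50)      # ~2.2 Mb/s
cmds = supervisory_bw_bps(msgs_per_min=2, bytes_per_msg=64)  # ~17 b/s
```

Even with aggressive compression, the teleoperation link in this sketch carries roughly five orders of magnitude more data than the supervisory link — the direct effect on communications requirements noted above.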
Finally, the level of assertiveness of the system, which is tied to<br />
the desired frequency of operator interaction, will have an impact<br />
on how accurately the autonomous system must be able to classify<br />
objects in the environment. Systems that are intended to operate<br />
with little supervision must make safe decisions about crossing<br />
perceived constraints of travel in the environment, which drives the<br />
need to classify objects around the system. Object classification is a<br />
complex topic that requires much research to create robust systems.<br />
Specifying an assertive autonomous system also requires a planning<br />
system that is allowed to change motion plans automatically during<br />
the mission, driving up the planner complexity and the associated<br />
verification efforts.<br />
Draper Laboratory will continue efforts to refine these characteristics<br />
(and expand them if needed) and is interested in collaborating with<br />
other institutions in developing requirements and test metrics for<br />
autonomous systems. We believe it will take widespread agreement<br />
among different organizations to arrive at an understandable set of<br />
guidelines that will help move advanced autonomous systems into<br />
fielded use domestically and in military operations. These systems,<br />
even with limitations of current perception and planning, can be<br />
useful right now in reducing threats to U.S. military forces. We must<br />
focus efforts on specifying and testing systems that can be trusted<br />
by their operators to succeed in their missions.<br />
References<br />
[1] Cleary, M., M. Abramson, M. Adams, S. Kolitz, “Metrics for Embedded<br />
Collaborative Systems,” Charles Stark <strong>Draper</strong> <strong>Laboratory</strong>, Performance<br />
Metrics for Intelligent Systems, National Institute of Standards &<br />
Technology (NIST), Gaithersburg, MD, 2000.<br />
[2] Sternberg, S., “Memory Scanning: Mental Processes Revealed by<br />
Reaction Time Experiments,” American Scientist, Vol. 57, 1969, pp.<br />
421-457.<br />
[3] Leonard, J., D. Barrett, T. Jones, M. Antone, R. Galejs, “A Perception<br />
Driven Autonomous Urban Vehicle,” Journal of Field Robotics, DOI<br />
10.1002, 2008. [PDF]: http://acl.mit.edu/papers/LeonardJFR08.pdf.<br />
[4] Velodyne HDL-64E Specifications [HTML]: http://www.velodyne.com/lidar/products/specifications.aspx.<br />
[5] DARPA Urban Challenge Rules [PDF]: http://www.darpa.mil/grandchallenge/docs/Urban_Challenge_Rules_102707.pdf.<br />
[6] “Preventing the Injury of Workers by Robots,” National Institute<br />
of Occupational Safety and Health (NIOSH), Publication No.<br />
85-103, [HTML]: http://www.cdc.gov/niosh/85-103.html.<br />
[7] Advanced Avionics Handbook, U.S. Department of Transportation,<br />
Federal Aviation Administration, FAA-H-8083-6, 2009. [PDF]:<br />
http://www.faa.gov/library/manuals/aviation/media/FAA-H-8083-6.pdf.<br />
[8] Introduction to TCAS II Version 7, ARINC, [PDF]: http://www.arinc.<br />
com/downloads/tcas/tcas.pdf.<br />
[9] Kuwata, Y., G. Fiore, E. Frazzoli, “Real-Time Motion Planning<br />
with Applications to Autonomous Urban Driving,” IEEE<br />
Transactions on Control Systems <strong>Technology</strong>, Vol. XX, No. XX,<br />
January 2009 [PDF]: http://acl.mit.edu/papers/KuwataTCST09.pdf.<br />
[10] Baker, C., D. Ferguson, J. Dolan, “Robust Mission Execution for<br />
Autonomous Urban Driving,” 10th International Conference<br />
on Intelligent Autonomous Systems (IAS 2008), July, 2008,<br />
Carnegie Mellon University, [PDF]: http://www.ri.cmu.edu/pub_<br />
files/pub4/baker_christopher_2008_1/baker_christopher_2008_1.pdf.<br />
[11] FAA Advisory Circular 20-115B [PDF]: http://rgl.faa.gov/<br />
Regulatory_and_Guidance_Library/rgAdvisoryCircular.nsf/0/<br />
DCDB1D2031B19791862569AE007833E7?OpenDocument.<br />
[12] Aviation Accident Statistics, National Transportation Safety<br />
Board, [HTML]: http://www.ntsb.gov/aviation/Table2.htm.<br />
[13] Software Considerations in Airborne Systems and Equipment<br />
Certification, RTCA DO-178B, [PDF]: http://www.rtca.<br />
org/downloads/ListofAvailableDocs_December_2009.htm#_<br />
Toc247698345.<br />
[14] Donatech Commercial Aviation DO-178B Certification Services<br />
Page [HTML]: http://www.donatech.com/aviation-defense/commer<br />
cial/commercial-tanker-transport-planes.html.<br />
[15] HighRely, Reliable Embedded Solutions [HTML]: http://<br />
highrely.com/index.php.<br />
[16] Esterel Technologies [HTML]: http://www.esterel-technologies.<br />
com/products/scade-suite/.<br />
[17] Fletcher, L., I. Miller, et al., “<strong>The</strong> MIT-Cornell Collision and Why<br />
It Happened”, Journal of Field Robotics, DOI 10.1002, 2008 [PDF]:<br />
http://people.csail.mit.edu/seth/pubs/FletcherEtAlJFR2008.pdf.<br />
Requirements-Driven Autonomous System Test Design: Building Trusting Relationships
Troy B. Jones is the Autonomous Systems Capability Leader at Draper Laboratory. He joined Draper in 2004<br />
and began working in the System Integration and Test division on the TRIDENT MARK 6 MOD 1 inertial guidance<br />
system. Current duties focus on strengthening Draper’s existing autonomous technologies in platform design and<br />
software by adding new testing methods and incorporating concepts from Human System Collaboration. Draper’s<br />
ultimate goal is to produce autonomous systems that are trusted implicitly by our customers to perform their<br />
critical missions. In 2006, he joined with students and faculty at MIT to build an entry for the 2007 DARPA Urban<br />
Challenge. The team’s fully autonomous Land Rover LR3 used a combination of LIDAR, vision, radar, and GPS/INS<br />
to perceive the environment and road, and safely completed the Urban Challenge in fourth place overall. Mr. Jones<br />
earned B.S. and M.S. degrees at Virginia Tech.<br />
Mitch G. Leammukda is a Member of the Technical Staff in the Integrated Systems Development and<br />
Test group at Draper Laboratory. For the past 7 years, he has worked on navigation systems for naval aircraft,<br />
space instruments, and individual soldiers. He has also led the system integration for a robotic forklift and an<br />
RF instrumentation platform. He is currently developing a universal test station platform for inertial guidance<br />
instruments. Mr. Leammukda holds M.S. and B.S. degrees in Electrical Engineering from Northeastern University.<br />
List of 2010 Published Papers and Presentations<br />
Abrahamsson, C.K.; Yang, F.; Park, H.; Brunger, J.M.; Valonen, P.K.;<br />
Langer, R.S.; Welter, J.F.; Caplan, A.I.; Guilak, F.; Freed, L.E.<br />
Chondrogenesis and Mineralization During In Vitro Culture of<br />
Human Mesenchymal Stem Cells on 3D-Woven Scaffolds<br />
Tissue Engineering: Part A, Vol. 16, No. 7, July 2010<br />
Abramson, M.R.; Kahn, A.C.; Kolitz, S.E.<br />
Coordination Manager - Antidote to the Stovepipe Anti-Pattern<br />
Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010.<br />
Sponsored by: AIAA<br />
Abramson, M.R.; Carter, D.W.; Kahn, A.C.; Kolitz, S.E.; Riek, J.C.;<br />
Scheidler, P.J.<br />
Single Orbital Revolution Planner for NASA’s EO-1 Spacecraft<br />
Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010.<br />
Sponsored by: AIAA<br />
Agte, J.S.; Borer, N.K.; de Weck, O.<br />
Simulation-Based Design Model for Analysis and Optimization of<br />
Multistate Aircraft Performance<br />
Multidisciplinary Design Optimization (MDO) Specialist’s Conference,<br />
Orlando, FL, April 12-15, 2010. Sponsored by: AIAA<br />
Ahuja, R.; Tao, S.L.; Nithianandam, B.; Kurihara, T.; Saint-Geniez, M.;<br />
D’Amore, P.; Redenti, S.; Young, M.<br />
Polymer Thin Films as an Antiangiogenic and Neuroprotective<br />
Biointerface<br />
Materials Research Society (MRS) Fall Meeting, Boston, MA, November<br />
29-December 3, 2010. Sponsored by: MRS.<br />
Ahuja, R.; Nithianandam, B.; Kurihara, T.; Saint-Geniez, M.; D’Amore, P.;<br />
Redenti, S.; Young, M.; Tao, S.L.<br />
Polymer Thin-Films as an Antiangiogenic and Neuroprotective<br />
Biointerface<br />
Graduate Student Award Appreciation, Materials Research Society,<br />
Boston, MA, November 2010<br />
Barbour, N.M.; Hopkins III, R.E.; Kourepenis, A.S.; Ward, P.A.<br />
Inertial MEMS System Applications (SET116)<br />
NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal,<br />
March 15-26, 2010. Sponsored by: NATO Research & Technology<br />
Organization<br />
Barbour, N.M.<br />
Inertial Navigation Sensors (SET116)<br />
NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal,<br />
March 15-26, 2010. Sponsored by: NATO Research & Technology<br />
Organization<br />
Barbour, N.M.; Flueckiger, K.<br />
Understanding Commonly Encountered Inertial Instrument<br />
Specifications<br />
Missile Defense Agency/Deputy for Engineering, Producibility (MDA/<br />
DEP), June 2010<br />
Bellan, L.; Wu, D.; Borenstein, J.T.; Cropeck, D.; Langer, R.S.<br />
Microfluidics in Hydrogels Using a Sealing Adhesion Layer<br />
(poster)<br />
Biomedical Engineering Society/Annals of Biomedical Engineering,<br />
May 5, 2010<br />
Benvegnu, E.; Suri, N.; Tortonesi, M.; Esterrich III, T.<br />
Seamless Network Migration Using the Mockets Communications<br />
Middleware<br />
Military Communications Conference (MILCOM), San Jose, CA,<br />
October 31-November 3, 2010. Sponsored by: IEEE<br />
Bettinger, C.J.; Borenstein, J.T.<br />
Biomaterials-Based Microfluidics for Tissue Development<br />
Soft Matter, Vol. 6, No. 20, October 2010<br />
Billingsley, K.L.; Balaconis, M.K.; Dubach, J.M.; Zhang, N.; Lim, E.;<br />
Francis, K.; Clark, H.A.<br />
Fluorescent Nano-Optodes for Glucose Detection<br />
Analytical Chemistry, American Chemical Society (ACS), Vol. 82, No. 9,<br />
May 1, 2010<br />
Bogner, A.J.; Torgerson, J.F.; Mitchell, M.L.<br />
GPS Receiver Development History for the Extended Navy Test Bed<br />
Missile Sciences Conference, Monterey, CA, November 16-18, 2010.<br />
Sponsored by: AIAA<br />
Borenstein, J.T.; Tupper, M.M.; Mack, P.J.; Weinberg, E.J.; Khalil, A.S.;<br />
Hsiao, J.C.; García-Cardeña, G.<br />
Functional Endothelialized Microvascular Networks with Circular<br />
Cross-Sections in a Tissue Culture Substrate<br />
Biomedical Microdevices, Vol. 12, No. 1, February 2010<br />
Borer, N.K.<br />
Analysis and Design of Fault-Tolerant Systems<br />
DEKA Lecture Series, Manchester, NH, August 12, 2010. Sponsored by:<br />
DEKA Research and Development.<br />
Borer, N.K.; Cohanim, B.E.; Curry, M.L.; Manuse, J.E.<br />
Characterization of a Persistent Lunar Surface Science Network<br />
Using On-Orbit Beamed Power<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE
Borer, N.K.; Claypool, I.R.; Clark, W.D.; West, J.J.; Odegard, R.G.;<br />
Somervill, K.; Suzuki, N.<br />
Model-Driven Development of Reliable Avionics Architectures for<br />
Lunar Surface Systems<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Bortolami, S.B.; Duda, K.R.; Borer, N.K.<br />
Markov Analysis of Human-in-the-Loop System Performance<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Brady, T.M.; Paschall II, S.C.<br />
Challenge of Safe Lunar Landing<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Brady, T.M.; Paschall II, S.C.; Crain, T.<br />
GN&C Development for Future Lunar Landing Missions<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA<br />
Carter, D.J.; Cook, E.<br />
Towards Integrated CNT-Bearing Based MEMS Rotary Systems<br />
Gordon Research Conference on Nanostructure Fabrication, Tilton,<br />
NH, July 18-23, 2010. Sponsored by: Tilton School<br />
Clark, T.; Stimpson, A.; Young, L.R.; Oman, C.M.; Duda, K.R.<br />
Analysis of Human Spatial Perception During Lunar Landing<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Cohanim, B.E.; Cunio, P.M.; Hoffman, J.; Joyce, M.; Mosher, T.J.; Tuohy, S.T.<br />
Taking the Next Giant Leap<br />
33rd Guidance and Control Conference, Breckenridge, CO, February<br />
6-10, 2010. Sponsored by: AAS<br />
Collins, B.K.; Kessler, L.J.; Benagh, E.A.<br />
Algorithm for Enhanced Situation Awareness for Trajectory<br />
Performance Management<br />
Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010.<br />
Sponsored by: AIAA<br />
Copeland, A.D.; Mangoubi, R.; Mitter, S.K.; Desai, M.N.; Malek, A.M.<br />
Spatio-Temporal Data Fusion in Cerebral Angiography<br />
IEEE Transactions on Medical Imaging, Vol. 29, No. 6, June 2010<br />
Crain, T.; Bishop, R.H.; Brady, T.M.<br />
Shifting the Inertial Navigation Paradigm with MEMS Technology<br />
33rd Guidance and Control Conference, Breckenridge, CO, February<br />
6-10, 2010. Sponsored by: American Astronautical Society (AAS)<br />
Cuiffi, J.D.; Soong, R.K.; Manolakos, S.Z.; Mohapatra, S.; Larson, D.N.<br />
Nanohole Array Sensor Technology: Multiplexed Label-Free<br />
Protein Binding Assays<br />
26th Southern Biomedical Engineering Conference, College Park,<br />
MD, April 30-May 2, 2010. Sponsored by: International Federation for<br />
Medical and Biological Engineering (IFMBE)<br />
Cunha, M.G.; Clarke, A.C.; Martin, J.; Beauregard, J.R.; Webb, A.K.;<br />
Hensley, A.A.; Keshava, N.; Martin, D.J.<br />
Detection of Deception in Structured Interviews Using Sensors<br />
and Algorithms<br />
International Society for Optical Engineers (SPIE) Defense, Security<br />
and Sensing, Orlando, FL, April 5-9, 2010. Sponsored by: SPIE<br />
Cunio, P.M.; Lanford, E.R.; McLinko, R.; Han, C.; Canizales-Diaz, J.;<br />
Olthoff, C.T.; Nothnagel, S.L.; Bailey, Z.J.; Hoffman, J.; Cohanim, B.E.<br />
Further Development and Flight Testing of a Prototype Lunar<br />
and Planetary Surface Exploration Hopper: Update on the<br />
TALARIS Project<br />
Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010.<br />
Sponsored by: AIAA<br />
Cunio, P.M.; Corbin, B.A.; Han, C.; Lanford, E.R.; Yue, H.K.; Hoffman, J.;<br />
Cohanim, B.E.<br />
Shared Human and Robotic Landing and Surface Exploration in<br />
the Neighborhood of Mars<br />
Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010.<br />
Sponsored by: AIAA<br />
Davis, J.L.; Striepe, S.A.; Maddock, R.W.; Johnson, A.E.; Paschall II, S.C.<br />
Post2 End-to-End Descent and Landing Simulation for ALHAT<br />
Design Analysis Cycle 2<br />
International Planetary Probe Workshop, Barcelona, Spain, June 14-18,<br />
2010. Sponsored by: Georgia Institute of Technology<br />
DeBitetto, P.A.<br />
Using 3D Virtual Models and Ground-Based Imagery for Aiding<br />
Navigation in Large-Scale Urban Terrain<br />
35th Joint Navigation Conference (JNC), Orlando, FL, June 8-10, 2010.<br />
Sponsored by: Joint Services Data Exchange (JSDE)<br />
Dorland, B.N.; Dudik, R.P.; Veillette, D.; Hennessy, G.S.; Dugan, Z.; Lane,<br />
B.F.; Moran, B.A.<br />
Automated Frozen Sample Aliquotting System<br />
European Laboratory Robotics Interest Group (ELRIG) Liquid<br />
Handling & Label-Free Detection Technologies Conference,<br />
Whittlebury Hall, UK, March 4, 2010. Sponsored by: ELRIG<br />
Dorland, B.N.; Dudik, R.P.; Veillette, D.; Hennessy, G.S.; Dugan, Z.; Lane,<br />
B.F.; Moran, B.A.<br />
The Joint Milli-Arcsecond Pathfinder Survey (JMAPS):<br />
Measurement Accuracy of the Primary Instrument when Used as<br />
Fine Guidance Sensor<br />
33rd Guidance and Control Conference, Breckenridge, CO, February<br />
6-10, 2010. Sponsored by: AAS<br />
Dubach, J.M.; Lim, E.; Zhang, N.; Francis, K.; Clark, H.A.<br />
In Vivo Sodium Concentration Continuously Monitored with<br />
Fluorescent Sensors<br />
Integrative Biology: Quantitative Biosciences from Nano to Macro,<br />
November 2010<br />
Duda, K.R.; Johnson, M.C.; Fill, T.J.; Major, L.M.; Zimpfer, D.J.<br />
Design and Analysis of an Attitude Command/Hover Hold plus<br />
Incremental Position Command Blended Control Mode for Piloted<br />
Lunar Landing<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA<br />
Duda, K.R.; Oman, C.M.; Hainley Jr., C.J.; Wen, H.-Y.<br />
Modeling Human-Automation Interactions During Lunar Landing<br />
Supervisory Control<br />
81st Annual Aerospace Medical Association (ASMA) Scientific<br />
Meeting, Phoenix, AZ, May 9-13, 2010. Sponsored by: ASMA<br />
Effinger, R.T.; Williams, B.; Hofmann, A.<br />
Dynamic Execution of Temporally and Spatially Flexible Reactive<br />
Programs<br />
24th Association for the Advancement of Artificial Intelligence (AAAI)<br />
Conference on Artificial Intelligence, Atlanta, GA, July 11-15, 2010.<br />
Sponsored by: AAAI<br />
Epshteyn, A.A.; Maher, S.P.; Taylor, A.J.; Borenstein, J.T.; Cuiffi, J.D.<br />
Membrane-Integrated Microfluidic Device for High-Resolution<br />
Live Cell Imaging Fabricated via a Novel Substrate Transfer<br />
Technique<br />
Materials Research Society (MRS) Fall Meeting, Boston, MA, November<br />
29-December 3, 2010. Sponsored by: MRS<br />
Fallon, L.P.; Magee, R.J.; Wadland, R.A.<br />
Centrifuge Technologies for Evaluating Inertial Guidance Systems<br />
81st Shock and Vibration Symposium, Orlando, FL, October 24-28,<br />
2010. Sponsored by: Shock and Vibration Information Analysis Center<br />
(SAVIAC)<br />
Feng, M.Y.; Marinis, T.F.; Giglio, J.; Sherman, P.G.; Elliott, R.D.; Magee, T.;<br />
Warren, J.<br />
Electronics Packaging to Isolate MEMS Sensors from Thermal<br />
Transients<br />
International Mechanical Engineering Congress, Vancouver, Canada,<br />
November 12-18, 2010. Sponsored by: ASME<br />
Fill, T.J.<br />
Lunar Landing and Ascent Trajectory Guidance Design for<br />
the Autonomous Landing and Hazard Avoidance Technology<br />
(ALHAT) Program<br />
Space Flight Mechanics Conference, San Diego, CA, February 14-17,<br />
2010. Sponsored by: AAS and AIAA<br />
Fritz, M.P.; Zanetti, R.; Vadali, S.R.<br />
Analysis of Relative GPS Navigation Techniques<br />
Space Flight Mechanics Conference, San Diego, CA, February 14-17,<br />
2010. Sponsored by: AAS and AIAA<br />
Frohlich, E.; Ko, C.W.; Tao, S.L.; Charest, J.L.<br />
Fabrication of Cell Substrates to Determine the Role of Mechanical<br />
Cues in Tissue Structure Formation of Renal Epithelial Cells<br />
Science and Engineering Day Symposium, Boston, MA, March 30,<br />
2010. Sponsored by: Boston University<br />
Frohlich, E.; Ko, C.W.; Zhang, X.; Charest, J.L.; Tao, S.L.<br />
Fabrication of Cell Substrates to Determine the Role of<br />
Topographical Cues in Differentiation and Tissue Structure<br />
Formation<br />
Tech Connect Summit, Anaheim, CA, June 21-24, 2010. Sponsored by:<br />
TechConnect World<br />
Geisler, M.A.<br />
Expedition MCM-D Layout for Multi-Layer Die<br />
User2User (U2U) Mentor Graphics Users Conference, Westford, MA,<br />
April 14, 2010. Sponsored by: U2U<br />
Grant, M.J.; Steinfeldt, B.A.; Braun, R.D.; Barton, G.H.<br />
Smart Divert: A New Mars Robotic Entry, Descent, and Landing<br />
Architecture<br />
Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 3, May-June 2010<br />
Guillemette, M.D.; Park, H.; Hsiao, J.C.; Jain, S.R.; Larson, B.L.; Langer,<br />
R.S.; Freed, L.E.<br />
Combined Technologies for Microfabricating Elastomeric Cardiac<br />
Tissue Engineering Scaffolds<br />
Journal of Macromolecular Bioscience, Vol. 10, No. 11, November 2010<br />
Guo, X.; Popadin, K.Y.; Markuzon, N.; Orlov, Y.L.; Kraytsberg, Y.;<br />
Krishnan, K.J.; Zsurka, G.; Turnbull, D.M.; Kunz, W.S.; Khrapko, K.<br />
Repeats, Longevity, and the Sources of mtDNA Deletions:<br />
Evidence from “Deletional Spectra”<br />
Trends in Genetics, Vol. 26, No. 8, August 2010, pp. 340-343<br />
Hammett, R.C.<br />
Fault-Tolerant Avionics Tutorial for the NASA/Army Forum on<br />
“Challenges of Complex Systems”<br />
NASA/Army Systems and Software Engineering Forum, Huntsville, AL,<br />
May 11-12, 2010. Sponsored by: University of Alabama
Harjes, D.I.; Dubach, J.M.; Rosenzweig, A.; Das, S.; Clark, H.A.<br />
Ion-Selective Optodes Measure Extracellular Potassium Flux in<br />
Excitable Cells<br />
Macromolecular Rapid Communications, Vol. 31, No. 2, January 2010<br />
Herold, T.M.; Abramson, M.R.; Kahn, A.C.; Kolitz, S.E.; Balakrishnan, H.<br />
Asynchronous, Distributed Optimization for the Coordinated<br />
Planning of Air and Space Assets<br />
Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010.<br />
Sponsored by: AIAA<br />
Hicks, B.; Cook, T.; Lane, B.F.; Chakrabarti, S.<br />
OPD Measurement and Dispersion Reduction in a Monolithic<br />
Interferometer<br />
Optics Express, Vol. 18, No. 16, August 2, 2010, pp. 17542-17547<br />
Hicks, B.; Cook, T.; Lane, B.F.; Chakrabarti, S.<br />
Progress in the Development of MANIC: a Monolithic Nulling<br />
Interferometer for Characterizing Extrasolar Environments<br />
Astronomical Telescopes and Instrumentation, San Diego, CA, June<br />
27-July 2, 2010. Sponsored by: SPIE<br />
Hoganson, D.M.; Anderson, J.L.; Weinberg, E.J.; Swart, E.F.; Orrick, B.;<br />
Borenstein, J.T.; Vacanti, J.P.<br />
Branched Vascular Network Architecture: A New Approach to<br />
Lung Assist Device Technology<br />
Journal of Thoracic and Cardiovascular Surgery, Vol. 140, No. 5,<br />
November 2010<br />
Hopkins III, R.E.<br />
Contemporary and Emerging Inertial Sensor Technologies<br />
Position Location and Navigation Symposium (PLANS), Indian Wells,<br />
CA, May 4-6, 2010. Sponsored by: IEEE/Institute of Navigation (ION)<br />
Hopkins III, R.E.; Barbour, N.M.; Gustafson, D.E.; Sherman, P.G.<br />
Integrated Inertial/GPS-Based Navigation Applications<br />
NATO SET Lecture Series, Turkey, Czech Republic, France, Portugal,<br />
March 15-26, 2010. Sponsored by: NATO Research & Technology<br />
Organization<br />
Hsiao, J.C.; Borenstein, J.T.; Kulig, K.M.; Finkelstein, E.B.; Hoganson, D.M.;<br />
Eng, K.Y.; Vacanti, J.P.; Fermini, B.; Neville, C.M.<br />
Novel In Vitro Model of Vascular Injury with a Biomimetic Internal<br />
Elastic Lamina<br />
TERMIS-NA Annual Conference & Exposition, Orlando, FL, December<br />
5-10, 2010. Sponsored by: Tissue Engineering International and<br />
Regenerative Medicine Society<br />
Hsu, W.-M.; Carraro, A.; Kulig, K.M.; Miller, M.L.; Kaazempur-Mofrad,<br />
M.R.; Entabi, F.; Albadawi, H.; Watkins, M.T.; Borenstein, J.T.; Vacanti, J.P.;<br />
Neville, C.M.<br />
Liver Assist Device with a Microfluidics-Based Vascular Bed in an<br />
Animal Model<br />
Annals of Surgery, Vol. 252, No. 2, August 2010<br />
Huxel, P.J.; Cohanim, B.E.<br />
Small Lunar Lander/Hopper Navigation Analysis Using Linear<br />
Covariance<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Irvine, J.M.<br />
ATR Technology: Why We Need It, Why We Can’t Have It, and How<br />
We’ll Get It<br />
Geotech Conference, Fairfax, VA, September 27-28, 2010. Sponsored<br />
by: American Society of Photogrammetry and Remote Sensing<br />
(ASPRS)<br />
Irvine, J.M.<br />
Human Guided Visualization Enhances Automated Target<br />
Detection<br />
SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010.<br />
Sponsored by: SPIE<br />
Jackson, M.C.; Straube, T.<br />
Orion Flight Performance Design Trades<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA<br />
Jackson, T.R.; Keating, D.J.; Mather, R.A.; Matlis, J.; Silvestro, M.; Ting, B.C.<br />
Role of Modeling, Simulation, Testing, and Analysis Throughout<br />
the Design, Development, and Production of the MARK 6 MOD 1<br />
Guidance System<br />
Missile Sciences Conference. Monterey, CA, November 16-18, 2010.<br />
Sponsored by: AIAA<br />
Jang, D.; Wendelken, S.M.; Irvine, J.M.<br />
Robust Human Identification Using ECG: Eigenpulse Revisited<br />
SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010.<br />
Sponsored by: SPIE<br />
Jang, J.-W.; Plummer, M.K.; Bedrossian, N.S.; Hall, C.; Spanos, P.D.<br />
Absolute Stability Analysis of a Phase Plane Controlled Spacecraft<br />
20th Spaceflight Mechanics Meeting, San Diego, CA, February 14-17,<br />
2010. Sponsored by: AAS/AIAA<br />
Jang, J.-W.; Alaniz, A.; Bedrossian, N.S.; Hall, C.; Ryan, S.; Jackson, M.<br />
Ares I Flight Control System Design<br />
2010 Astrodynamics Specialist Conference, Toronto, Canada, August<br />
2-5, 2010. Sponsored by: AAS/AIAA<br />
Jones, T.B.; Leammukda, M.G.<br />
Requirements-Driven Autonomous System Test Design: Building<br />
Trusting Relationships<br />
International Test and Evaluation Association (ITEA) Live Virtual<br />
Constructive Conference, El Paso, TX, January 11-14, 2010<br />
Kahn, A.C.; Kolitz, S.E.; Abramson, M.R.; Carter, D.W.<br />
Human-System Collaborative Planning Environment for<br />
Unmanned Aerial Vehicle Mission Planning<br />
Infotech at Aerospace Conference, Atlanta, GA, April 20-22, 2010.<br />
Sponsored by: AIAA<br />
Keating, D.J.; Laiosa, J.P.; Ting, B.C.; Wasileski, B.J.; Vican, J.E.; Silvestro,<br />
M.; Foley, B.M.; Shakhmalian, C.T.<br />
Using Hardware-in-the-Loop Simulation for System Integration of<br />
the MARK 6 MOD 1 Guidance System<br />
Missile Sciences Conference, Monterey, CA, November 16-18, 2010.<br />
Sponsored by: AIAA<br />
Keshava, N.<br />
Detection of Deception in Structured Interviews Using Sensors<br />
and Algorithms<br />
SPIE Defense, Security and Sensing, Orlando, FL, April 5-9, 2010.<br />
Sponsored by: SPIE<br />
Keshava, N.; Coskren, W.D.<br />
Sensor Fusion for Multi-Sensor Human Signals to Infer Cognitive<br />
States<br />
National Symposium Sensor Data Fusion, Las Vegas, NV, July 26-29,<br />
2010. Sponsored by: Military Sensing Symposium<br />
Kessler, L.J.; West, J.J.; McClung, K.; Miller, J.; Zimpfer, D.J.<br />
Autonomous Operations for the Next Generation of Human Space<br />
Exploration<br />
SpaceOps, Huntsville, AL, April 25-30, 2010. Sponsored by: AIAA<br />
Kim, K.H.; Burns, J.A.; Bernstein, J.J.; Maguluri, G.N.; Park, B.H.; De Boer, J.F.<br />
In Vivo 3D Human Vocal Fold Imaging with Polarization Sensitive<br />
Optical Coherence Tomography and a MEMS Scanning Catheter<br />
Optics Express, Vol. 18, No. 14, July 5, 2010<br />
King, E.T.; Hart, J.J.; Odegard, R.<br />
Orion GN&C Data-Driven Flight Software Architecture for<br />
Automated Sequencing and Fault Recovery<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Kniazeva, T.; Hsiao, J.C.; Charest, J.L.; Borenstein, J.T.<br />
Microfluidic Respiratory Assist Device with High Gas Permeability<br />
for Artificial Lung Applications<br />
Biomedical Microdevices, Online First, November 26, 2010<br />
Ko, C.W.; McHugh, K.J.; Yao, J.; Kurihara, T.; D’Amore, P.; Saint-Geniez, M.;<br />
Young, M.; Tao, S.L.<br />
Nanopatterning of Poly(e-caprolactone) Thin Film Scaffolds for<br />
Retinal Rescue<br />
4th Military Vision Symposium on Ocular and Brain Injury, Boston,<br />
MA, September 26-30, 2010. Sponsored by: Schepens Eye Research<br />
Institute<br />
Ko, C.W.<br />
Micro and Nanostructured Polymer Thin Films for the<br />
Organization and Differentiation of Retinal Progenitor Cells<br />
Materials Research Society Fall Meeting, Boston, MA, November<br />
29-December 3, 2010. Sponsored by: MRS<br />
Kourepenis, A.S.<br />
Emerging Navigation Technologies for Miniature Autonomous<br />
Systems<br />
Autonomous Weapons Summit and GNC Challenges for Miniature<br />
Autonomous Systems Workshop, Fort Walton Beach, FL, October 25-27,<br />
2010. Sponsored by: ION<br />
Lai, W.; Erdonmez, C.K.; Marinis, T.F.; Bjune, C.K.; Dudney, N.J.; Xu, F.;<br />
Wartena, R.; Chiang, Y.-M.<br />
Ultrahigh-Energy-Density Microbatteries Enabled by New<br />
Electrode Architecture and Micropackaging Design<br />
Advanced Materials, Vol. 22, No. 20, May 2010<br />
Larson, D.N.; Slusarz, J.; Bellio, S.L.; Maloney, L.M.; Ellis, H.J.; Rifai, N.;<br />
Bradwin, G.; de Dios, J.<br />
Automated Frozen Sample Aliquotter<br />
International Society of Biological and Environmental Respositories<br />
(ISBER) Annual Meeting and Exhibits, Rotterdam, Netherlands, May<br />
11-14, 2010. Sponsored by: ISBER<br />
Larson, D.N.; Fiering, J.O.; Kowalski, G.J.; Sen, M.<br />
Development of a Nanoscale Calorimeter: Instrument for<br />
Developing Pharmaceutical Products<br />
Innovative Molecular Analysis Technologies (IMAT) Conference, San<br />
Francisco, CA, October 25-26, 2010. Sponsored by: National Cancer<br />
Institute (NCI)<br />
Larson, D.N.; Miranda, L.; Dederis, J.<br />
Innovations in Biobanking-Related Engineering and Design: A<br />
Novel Automated Methodology for Optimizing Banked Sample<br />
Processing<br />
ISBER Annual Meeting and Exhibits, Rotterdam, Netherlands, May 11-<br />
14, 2010. Sponsored by: ISBER<br />
Larson, D.N.<br />
Nanohole Array for Protein Analysis<br />
26th Southern Biomedical Engineering Conference, College Park, MD,<br />
April 30-May 2, 2010. Sponsored by: IFMBE<br />
Larson, D.N.<br />
Nanohole Array Sensing<br />
Biomedical Optics Workshop, Boston, MA, April 13, 2010. Sponsored<br />
by: IEEE and Boston University<br />
Larson, D.N.<br />
New Method for Processing Banked Samples<br />
Biospecimen Research Network (BRN) Symposium, Bethesda, MD,<br />
March 24-25, 2010. Sponsored by: NCI<br />
Larson, D.N.<br />
Optimizing the Processing and Augmenting the Value of Critical<br />
Banked Biological Specimens<br />
Biorepositories Conference, Boston, MA, September 27-29, 2010
Larson, D. N.<br />
Transitioning Research into Operations: A View from Healthcare<br />
NASA Human Research Program Investigators’ Workshop, Houston,<br />
TX, February 3-5, 2010. Sponsored by: NASA/NASA Space Biomedical<br />
Research Institute (NSBRI)<br />
Lim, S.; Lane, B.F.; Moran, B.A.; Henderson, T.C.; Geisel, F.A.<br />
Model-Based Design and Implementation of Pointing and<br />
Tracking Systems: From Model to Code in One Step<br />
33rd Guidance and Control Conference, Breckenridge, CO, February<br />
6-10, 2010. Sponsored by: AAS<br />
Lowry, N.C.; Mangoubi, R.S.; Desai, M.N.; Sammak, P.J.<br />
Nonparametric Segmentation and Classification of Small Size<br />
Irregularly Shaped Stem Cell Nuclei Using Adjustable Windowing<br />
7th International Symposium on Biomedical Imaging: From Nano to<br />
Macro, Rotterdam, the Netherlands, April 14-17, 2010. Sponsored by:<br />
IEEE<br />
Madison, R.W.; Xu, Y.<br />
Tactical Geospatial Intelligence from Full Motion Video<br />
Applied Imagery Pattern Recognition Workshop, Washington, D.C.,<br />
October 13-15, 2010. Sponsored by: IEEE<br />
Magee, R.J.<br />
Shock and Vibration Information Analysis Center (SAVIAC) Video<br />
81st Shock and Vibration Symposium, Orlando, FL, October 24-28,<br />
2010. Sponsored by: SAVIAC<br />
Major, L.M.; Duda, K.R.; Zimpfer, D.J.; West, J.J.<br />
Approach to Addressing Human-Centered Technology Challenges<br />
for Future Space Exploration<br />
Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010.<br />
Sponsored by: AIAA<br />
Manolakos, S.Z.; Evans-Nguyen, T.G.; Postlethwaite, T.A.<br />
Low Temperature Plasma Sampling for Explosives Detection in a<br />
Handheld Prototype<br />
Chemical and Biological Defense Science and Technology Conference,<br />
Orlando, FL, November 15-19, 2010. Sponsored by: Defense Threat<br />
Reduction Agency (DTRA)<br />
Marchant, C.C.<br />
Ares I Avionics Introduction<br />
AIAA Webinar, Huntsville, AL, February 11, 2010. Sponsored by: AIAA<br />
Marchant, C.C.<br />
Ares I Avionics Introduction<br />
NASA/Army Systems and Software Engineering Forum, Huntsville, AL,<br />
May 11-12, 2010. Sponsored by: University of Alabama<br />
Marinis, T.F.; Nercessian, B.<br />
Hermetic Sealing of Stainless Steel Packages by Seam Seal Welding<br />
43rd International Symposium on Microelectronics, Raleigh,<br />
NC, October 31-November 4, 2010. Sponsored by: International<br />
Microelectronics and Packaging Society (IMAPS)<br />
List of 2010 Published Papers and Presentations<br />
Mather, R.A.<br />
Development and Simulation of a 4-Processor Virtual Guidance<br />
System for the MARK 6 MOD 1 Program<br />
Missile Sciences Conference, Monterey, CA, November 16-18, 2010.<br />
Sponsored by: AIAA<br />
Matlis, J.<br />
Application of Instruction Set Simulator Technology for Flight<br />
Software Development for the MARK 6 MOD 1 Program<br />
Missile Sciences Conference, Monterey, CA, November 16-18, 2010.<br />
Sponsored by: AIAA<br />
Matranga, M.J.<br />
Draper Multichip Modules for Space Applications<br />
ChipSat Workshop, Providence, RI, February 18, 2010. Sponsored by:<br />
Brown University<br />
McCall, A.A.; Swan, E.E.; Borenstein, J.T.; Sewell, W.F.; Kujawa, S.G.;<br />
McKenna, M.J.<br />
Drug Delivery for Treatment of Inner Ear Disease: Current State of<br />
Knowledge<br />
Ear & Hearing, Vol. 31, January 2010<br />
McHugh, K.J.; Teynor, W.A.; Saint-Geniez, M.; Tao, S.L.<br />
High-Yield MEMS Technique to Fabricate Microneedles for Tissue<br />
Engineering Applications<br />
National Institute of Biomedical Imaging and Bioengineering Training<br />
Grantees Meeting, Bethesda, MD, June 24-25, 2010. Sponsored by:<br />
National Institutes of Health (NIH)<br />
McHugh, J.; Tao, S.L.; Saint-Geniez, M.<br />
Template Fabrication of a Nanoporous Polycaprolactone Thin-<br />
Film for Retinal Tissue Engineering<br />
Materials Research Society (MRS) Fall Meeting, Boston, MA, November<br />
29-December 3, 2010. Sponsored by: MRS<br />
McLaughlin, B.L.; Wells, A.C.; Virtue, S.; Vidal-Puig, A.; Wilkinson, T.D.;<br />
Watson, C.J.E.; Robertson, P.A.<br />
Electrical and Optical Spectroscopy for Quantitative Screening of<br />
Hepatic Steatosis in Donor Livers<br />
Physics in Medicine and Biology, Vol. 55, No. 22, November 2010<br />
Mescher, M.J.; Kim, E.S.; Fiering, J.O.; Holmboe, M.E.; Swan, E.E.; Sewell,<br />
W.F.; Kujawa, S.G.; McKenna, M.J.; Borenstein, J.T.<br />
Development of a Micropump for Dispensing Nanoliter-Scale<br />
Volumes of Concentrated Drug for Intracochlear Delivery<br />
33rd Association for Research in Otolaryngology (ARO) Midwinter<br />
Meeting, Anaheim, CA, February 6-11, 2010. Sponsored by: ARO<br />
Middleton, A.; Paschall II, S.C.; Cohanim, B.E.<br />
Small Lunar Lander/Hopper Performance Analysis<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Miotto, P.; Breger, L.S.; Mitchell, I.T.; Keller, B.; Rishikof, B.<br />
Designing and Validating Proximity Operations Rendezvous and<br />
Approach Trajectories for the Cygnus Mission<br />
Astrodynamics Specialist Conference, Toronto, Canada, August 2-5,<br />
2010. Sponsored by: AAS/AIAA<br />
Mitchell, M.L.; Werner, B.; Roy, N.<br />
Sensor Assignment for Collaborative Urban Navigation<br />
35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010.<br />
Sponsored by: JSDE<br />
Mohiuddin, S.; Donna, J.I.; Axelrad, P.; Bradley, B.<br />
Improving Sensitivity, Time to First Fix, and Robustness of GPS<br />
Positioning by Combining Signals from Multiple Satellites<br />
35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010.<br />
Sponsored by: JSDE<br />
Muterspaugh, M.W.; Lane, B.F.; Kulkarni, S.R.; Konacki, M.; Burke, B.F.;<br />
Colavita, M.M.; Shao, M.; Wiktorowicz, S.J.; Hartkopf, W.I.; O’Connell, J.;<br />
Williamson, M.; Fekel, F.C.<br />
The PHASES Differential Astrometry Data Archive: Parts I-V<br />
Astronomical Journal, AAS, Vol. 140, No. 6, December 2010<br />
Nelson, E.D.; Irvine, J.M.<br />
Intelligent Management of Multiple Sensors for Enhanced<br />
Situational Awareness<br />
Applied Imagery Pattern Recognition Workshop, Washington, D.C.,<br />
October 13-15, 2010. Sponsored by: IEEE<br />
Nothnagel, S.L.; Bailey, Z.J.; Cunio, P.M.; Hoffman, J.; Cohanim, B.E.;<br />
Streetman, B.J.<br />
Development of a Cold Gas Spacecraft Emulator System for the<br />
TALARIS Hopper<br />
Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010.<br />
Sponsored by: AIAA<br />
Olthoff, C.T.; Cunio, P.M.; Hoffman, J.; Cohanim, B.E.<br />
Incorporation of Flexibility into the Avionics Subsystem for the<br />
TALARIS Small Advanced Prototype Vehicle<br />
Space 2010 Conference, Anaheim, CA, August 30-September 3, 2010.<br />
Sponsored by: AIAA<br />
O’Melia, S.; Elbirt, A.J.<br />
Enhancing the Performance of Symmetric-Key Cryptography via<br />
Instruction Set Extensions<br />
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol.<br />
18, No. 11, November 2010<br />
Okerson, G.; Kang, N.; Ross, J.; Tetewsky, A.K.; Soltz, J.; Greenspan, R.L.;<br />
Anszperger, J.C.; Lozow, J.B.; Mitchell, M.R.; Vaughn, N.L.; O’Brien, C.P.;<br />
Graham, D.K.<br />
Qualitative and Quantitative Inter-Signal Correction Metrics for<br />
On Orbit GPS Satellites<br />
35th Joint Navigation Conference, Orlando, FL, June 8-10, 2010.<br />
Sponsored by: JSDE<br />
Perry, H.C.; Polizzotto, L.; Schwartz, J.L.<br />
Creative Path from Invention to Successful Transition<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Polizzotto, L.<br />
Creating Customer Value Through Innovation<br />
Technology &amp; Innovation, Vol. 12, No. 1, January 2010<br />
Putnam, Z.R.; Barton, G.H.; Neave, M.D.<br />
Entry Trajectory Design Methodology for Lunar Return<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Putnam, Z.R.; Neave, M.D.; Barton, G.H.<br />
PredGuid Entry Guidance for Orion Return from Low Earth Orbit<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Rachlin, Y.; McManus, M.F.; Yu, C.C.; Mangoubi, R.S.<br />
Outlier Robust Navigation Using L1 Minimization<br />
35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010.<br />
Sponsored by: JSDE<br />
Roy, W.A.; Kwok, P.Y.; Chen, C.-J.; Racz, L.M.<br />
Thermal Management of a Novel iUHD-Technology-Based MCM<br />
IMAPS National Meeting, Palo Alto, CA, September 28-30, 2010.<br />
Sponsored by: IMAPS<br />
Schaefer, M.L.; Wongravee, K.; Holmboe, M.E.; Heinrich, N.M.; Dixon,<br />
S.J.; Zeskind, J.E.; Kulaga, H.M.; Brereton, R.G.; Reed, R.R.; Trevejo, J.M.<br />
Mouse Urinary Biomarkers Provide Signatures of Maturation,<br />
Diet, Stress Level, and Diurnal Rhythm<br />
Chemical Senses, Vol. 35, No. 6, July 2010<br />
Serna, F.J.<br />
Systems Engineering Considerations in Practicing Test and<br />
Evaluation<br />
26th Annual National Test and Evaluation Conference, San Diego,<br />
CA, March 1-4, 2010. Sponsored by: National Defense Industrial<br />
Association (NDIA)<br />
Sherman, P.G.<br />
Precision Northfinding INS with Low-Noise MEMS Inertial Sensors<br />
Joint Precision Azimuth Sensing Conference (JPASC), Las Vegas, NV,<br />
August 2-6, 2010<br />
Sievers, A.; Zanetti, R.; Woffinden, D.C.<br />
Multiple Event Triggers in Linear Covariance Analysis for<br />
Spacecraft Rendezvous<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA
Silvestro, M.<br />
Time Synchronization in Closed-Loop GPS/INS Hardware-in-the-<br />
Loop Simulations<br />
35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010.<br />
Sponsored by: JSDE<br />
Smith, B.R.; Kwok, P.Y.; Thompson, J.C.; Mueller, A.J.; Racz, L.M.<br />
Demonstration of a Novel Hybrid Silicon-Resin High-Density<br />
Interconnect (HDI) Substrate<br />
60th Electronic Components and Technology Conference (ECTC), Las<br />
Vegas, NV, June 1-4, 2010. Sponsored by: IEEE Components, Packaging<br />
and Manufacturing Technology (CPMT) Society<br />
Sodha, S.; Wall, K.A.; Redenti, S.; Klassen, H.; Young, M.; Tao, S.L.<br />
Microfabrication of a Three-Dimensional Polycaprolactone Thin-<br />
Film Scaffold for Retinal Progenitor Cell Encapsulation<br />
Journal of Biomaterials Science - Polymer Edition, Vol. 22, No. 4-6,<br />
January 2011<br />
Stanwell, P.; Siddall, P.; Keshava, N.; Cocuzzo, D.C.; Ramadan, S.; Lin, A.;<br />
Herbert, D.; Craig, A.; Tran, Y.; Middleton, J.; Gautam, S.; Cousins, M.;<br />
Mountford, C.<br />
Neuro Magnetic Resonance Spectroscopy Using Wavelet<br />
Decomposition and Statistical Testing Identifies Biochemical<br />
Changes in People with Spinal Cord Injury and Pain<br />
Neuroimage, Vol. 53, No. 2, November 2010<br />
Steedman, M.R.; Tao, S.L.; Klassen, H.; Desai, T.A.<br />
Enhanced Differentiation of Retinal Progenitor Cells Using<br />
Microfabricated Topographical Cues<br />
Biomedical Microdevices, Vol. 12, No. 3, June 2010<br />
Steinfeldt, B.A.; Grant, M.J.; Matz, D.A.; Braun, R.D.; Barton, G.H.<br />
Guidance, Navigation, and Control System Performance Trades<br />
for Mars Pinpoint Landing<br />
Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 1, 2010<br />
Steinfeldt, B.A.; Braun, R.D.; Paschall II, S.C.<br />
Guidance and Control Algorithm Robustness Baseline Indexing<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA<br />
Streetman, B.J.; Peck, M.A.<br />
General Bang-Bang Control Method for Lorentz Augmented Orbits<br />
Journal of Spacecraft and Rockets, AIAA, Vol. 47, No. 3, May-June 2010<br />
Streetman, B.J.; Johnson, M.C.; Kroehl, J.F.<br />
Generic Framework for Spacecraft GN&C Emulation: Performing a<br />
Lunar-Like Hop on the Earth<br />
Guidance, Navigation, and Control Conference and Exhibit, Toronto,<br />
Canada, August 2-5, 2010. Sponsored by: AIAA<br />
Swan, E.E.; Borenstein, J.T.; Fiering, J.O.; Kim, E.S.; Mescher, M.J.; Murphy,<br />
B.; Tao, S.L.; Chen, Z.; Kujawa, S.G.; McKenna, M.J.; Sewell, W.F.<br />
Characterization of Reciprocating Flow Parameters for Inner Ear<br />
Drug Delivery<br />
33rd Midwinter Meeting, Association for Research in Otolaryngology,<br />
Anaheim, CA, February 6-11, 2010. Sponsored by: ARO<br />
Tamblyn, S.; Henry, J.R.; King, E.T.<br />
Model-Based Design and Testing Approach for Orion GN&C Flight<br />
Software Development<br />
Aerospace Conference, Big Sky, MT, March 6-13, 2010. Sponsored by:<br />
IEEE<br />
Tao, S.L.<br />
Polycaprolactone Nanowires for Controlling Cell Behavior at the<br />
Biointerface<br />
Popat, K., ed., Nanotechnology in Tissue Engineering and Regenerative<br />
Medicine, Chapter 3, CRC Press, Taylor & Francis Group, Boca Raton,<br />
FL, November 22, 2010<br />
Tepolt, G.B.; Mescher, M.J.; LeBlanc, J.; Lutwak, R.; Varghese, M.<br />
Hermetic Vacuum Sealing of MEMS Devices Containing Organic<br />
Components<br />
Photonics West-MOEMS-MEMS, San Francisco, CA, January 22-27,<br />
2010. Sponsored by: SPIE<br />
Torgerson, J.F.; Sherman, P.G.; Scudiere, J.D.; Tran, V.; Del Colliano, J.;<br />
Sokolowski, S.; Ganop, S.<br />
Collaborative Soldier Navigation Study<br />
35th Joint Navigation Conference, Orlando, FL, June 7-10, 2010.<br />
Sponsored by: JSDE<br />
Tucker, J.; Boydston, T.E.; Heffner, K.<br />
Closing the Level 4 Secure Computing Gap via Advanced MCM<br />
Technology<br />
Department of Defense Anti-Tamper Conference, Baltimore, MD, April<br />
13-15, 2010. Sponsored by: DoD<br />
Tucker, B.; Saint-Geniez, M.; Tao, S.L.; D’Amore, P.; Borenstein, J.T.;<br />
Herman, I.M.; Young, M.<br />
Tissue Engineering for the Treatment of AMD<br />
Expert Reviews in Ophthalmology, Vol. 5, No. 5, October 2010<br />
Valonen, P.K.; Moutos, F.T.; Kusanagi, A.; Moretti, M.; Diekman, B.O.;<br />
Welter, J.F.; Caplan, A.I.; Guilak, F.; Freed, L.E.<br />
In Vitro Generation of Mechanically Functional Cartilage Grafts<br />
Based on Adult Human Stem Cells and 3D-Woven Poly(ε-caprolactone)<br />
Scaffolds<br />
Biomaterials, Vol. 31, January 2010<br />
Varsanik, J.S.; Teynor, W.A.; LeBlanc, J.; Clark, H.A.; Krogmeier, J.; Yang,<br />
T.; Crozier, K.; Bernstein, J.J.<br />
Subwavelength Plasmonic Readout for Direct Linear Analysis of<br />
Optically Tagged DNA<br />
Photonics West-BIOS, San Francisco, CA, January 23-28, 2010.<br />
Sponsored by: SPIE<br />
Wen, H.-Y.; Duda, K.R.; Oman, C.M.<br />
Simulating Human-Automation Task Allocations for Space<br />
System Design<br />
Human Factors and Ergonomic Society Student Conference, New<br />
England Chapter, Boston, MA, October 22, 2010<br />
Wang, J.; Bettinger, C.J.; Langer, R.S.; Borenstein, J.T.<br />
Biodegradable Microfluidic Scaffolds for Tissue Engineering<br />
from Amino Alcohol-Based Poly(Ester Amide) Elastomers<br />
Organogenesis, Vol. 6, No. 4, 2010, pp. 1-5<br />
Yoon, S.-H.; Cha, N.-G.; Lee, J.S.; Park, J.-G.; Carter, D.J.; Mead, J.L.; Barry,<br />
C.M.F.<br />
Effect of Processing Parameters, Antistiction Coatings, and<br />
Polymer Type when Injection Molding Microfeatures<br />
Polymer Engineering & Science, Vol. 50, Issue 2, February 2010<br />
Yoon, S.-H.; Lee, K.-H.; Palanisamy, P.; Lee, J.S.; Cha, N.-G.; Carter, D.J.;<br />
Mead, J.L.; Barry, C.M.F.<br />
Enhancement of Surface Replication by Gas Assisted<br />
Microinjection Moulding<br />
Plastics, Rubber and Composites, Vol. 39, No. 7, September 2010<br />
Young, L.R.; Oman, C.M.; Stimpson, A.; Duda, K.R.; Clark, T.<br />
Flight Displays and Control Modes for Safe and Precise Lunar<br />
Landing<br />
81st Annual Aerospace Medical Association Scientific Meeting,<br />
Phoenix, AZ, May 9-13, 2010. Sponsored by: ASMA<br />
Young, L.R.; Clark, T.; Stimpson, A.; Duda, K.R.; Oman, C.M.<br />
Sensorimotor Controls and Displays for Safe and Precise Lunar<br />
Landing<br />
61st International Astronautical Congress, Prague, Czech Republic,<br />
September 27-October 1, 2010. Sponsored by: International<br />
Astronautical Federation (IAF)<br />
Zanetti, R.<br />
Multiplicative Residual Approach to Attitude Kalman Filtering<br />
with Unit-Vector Measurements<br />
Space Flight Mechanics Conference, San Diego, CA, February 14-17,<br />
2010. Sponsored by: AAS and AIAA<br />
Zanetti, R.; DeMars, K.J.; Bishop, R.H.<br />
On Underweighting Nonlinear Measurements<br />
Journal of Guidance, Control, and Dynamics, AIAA, Vol. 33, No. 5,<br />
September-October 2010, pp. 1670-1675
Patents<br />
Introduction<br />
Draper Laboratory is well known for integrating diverse technical capabilities and technologies into<br />
innovative and creative solutions for problems of national concern. Draper encourages scientists and<br />
engineers to advance the application of science and technology, expand the functions of existing<br />
technologies, and create new ones.<br />
The disclosure of inventions is an important step in documenting these creative efforts and is required under<br />
Laboratory contracts (and by an agreement with Draper that all employees sign). Draper has an established<br />
patent policy and understands the value of patents in directing attention to individual accomplishments.<br />
Pursuing patent protection enables the Laboratory to advance its strategic mission and to recognize its<br />
employees’ valuable contributions to advancing the state of the art in their technical areas. An issued<br />
patent is also recognition by a critical third party (the U.S. Patent Office) of innovative work of which the<br />
inventor can be justly proud.<br />
On average, Draper’s Patent Committee recommends seeking patent protection for 50 percent of the<br />
disclosures received. Millions of U.S. patents have been issued since the first numbered patent in 1836. Through<br />
December 31, 2010, 1,468 Draper patent disclosures had been submitted to the Patent Committee since<br />
1973, 757 of which were approved for further patent action. As of December 31, 2010, a total of 552 patents<br />
had been granted for inventions made by Draper personnel. Nineteen patents were issued in calendar year 2010.<br />
THIS YEAR’S FEATURED PATENT IS:<br />
Systems and Methods for High Density<br />
Multi-Component Modules<br />
The following pages present an overview of the technology covered in the patent and the official<br />
patent abstract issued by the U.S. Patent Office.<br />
Systems and Methods for High Density<br />
Multi-Component Modules<br />
Scott A. Uhland, Seth M. Davis, Stanley R. Shanfield, Douglas W. White, and Livia M. Racz<br />
U.S. Patent No. 7,727,806; Date Issued: June 1, 2010<br />
Draper’s patented i-UHD technology will enable Draper to take miniaturization to new levels for customers who demand highly capable<br />
systems with minimal size and power requirements. By removing all nonessential elements and stacking layers of components buried in<br />
silicon wafers on top of each other, <strong>Draper</strong> can fit an entire system into a package the size of a Scrabble tile.<br />
This work is close to transitioning into production for two sponsors, and the extreme miniaturization could be an asset for other customers<br />
in fields ranging from national security to biomedical technology.<br />
Scott A. Uhland is a Member of the Technical Staff at the Palo<br />
Alto Research Center (PARC). Within the Electronic Materials and<br />
Devices Laboratory, Dr. Uhland is developing microfluidic actuated<br />
systems for a variety of commercial applications ranging from<br />
devices for hormone therapy to optical displays. Prior to joining<br />
PARC, he was a Senior Member of the Technical Staff at Draper<br />
Laboratory, where he was the Bioengineering Group Leader and<br />
oversaw the development of a wide variety of technologies and<br />
programs, including biological sensors, tissue engineering, and drug<br />
delivery. He was also a Principal Investigator (PI) at Draper for the<br />
research and development of electronic packaging technologies<br />
that push component densities to the theoretical limit. From<br />
2000 to 2004, he was one of the initial PIs at MicroCHIPS, Inc.,<br />
where he pioneered the use of MEMS technology in the medical<br />
field, particularly in the development of innovative drug delivery<br />
and sensing systems. He has authored more than 35 publications,<br />
reviews, and patents, and holds 60+ pending U.S. applications. Dr.<br />
Uhland received a B.S. in Materials Science and Engineering (summa<br />
cum laude) from Rutgers University, where he served as President of<br />
the Tau Beta Pi Honor Society, and a Ph.D. in Materials Science and<br />
Engineering from MIT.<br />
Seth M. Davis is currently the Associate Director for Communication,<br />
Navigation, and Miniaturization in the Special Programs Office.<br />
He is responsible for business development, strategic planning, and<br />
internal technology investment for first-of-a-kind special communications<br />
systems, miniaturized navigation systems, and advanced<br />
tagging, tracking, and locating systems. His technical interests focus<br />
on ultra-miniaturization of complex, low-power electronics systems<br />
for sensing, signal processing, and RF communications. Prior to his<br />
current position, he was Division Leader of the Electronics Division.<br />
Mr. Davis received B.S. and M.S. degrees in Electrical Engineering<br />
from MIT and Northeastern University, respectively.<br />
Stanley Shanfield is a Distinguished Member of the Technical<br />
Staff, and has recently been a Technical Director for a variety of<br />
intelligence community programs. He led a team that developed<br />
a miniature, low-power, stable frequency source that maintains<br />
stability to better than 0.1 part-per-billion over several seconds,<br />
suitable for high-performance digital transmitters and receivers.<br />
He also led a team that developed and demonstrated an
Left to Right:<br />
Douglas W. White, Stanley R. Shanfield, Livia M. Racz, and Seth M. Davis; missing: Scott A. Uhland<br />
Anderson, R.S.; Hanson, D.S.; Kasparian, F.J.; Marinis, T.F.; Soucy, J.W.<br />
Sensor Isolation System<br />
Patent No. 7,679,171, March 16, 2010<br />
Appleby, B.D.; Paradis, R.D.; Szczerba, R.J.<br />
Mission Planning System for Vehicles with Varying Levels of<br />
Autonomy<br />
Patent No. 7,765,038, July 27, 2010<br />
Bernstein, J.J.; Rogomentich, F.J.; Lee, T.W.; Varghese, M.; Kirkos, G.A.<br />
Systems, Methods and Devices for Actuating a Moveable<br />
Miniature Platform<br />
Patent No. 7,643,196, January 5, 2010<br />
Borenstein, J.T.; Weinberg, E.J.; Orrick, B.; Pritchard, E.M.; Barnard, E.;<br />
Krebs, N.J.; Marentis, T.C.; Vacanti, J.P.; Kaazempur-Mofrad, M.R.<br />
Micromachined Bilayer Unit for Filtration of Small Molecules<br />
Patent No. 7,776,021, August 17, 2010<br />
Duwel, A.E.; Varsanik, J.S.<br />
Electromagnetic Composite Metamaterial<br />
Patent No. 7,741,933, June 22, 2010<br />
Elwell Jr., J.M.; Gustafson, D.E.; Dowdle, J.R.<br />
Systems and Methods for Positioning Using Multipath Signals<br />
Patent No. 7,679,561, March 16, 2010<br />
Fiering, J.O.; Varghese, M.<br />
Devices for Producing a Continuously Flowing Concentration<br />
Gradient in Laminar Flow<br />
Patent No. 7,837,379, November 23, 2010<br />
Laine, J-P.J.; Miraglia, P.; Tapalian Jr., H.C.<br />
High Efficiency Fiber-Optic Scintillator Radiation Detector<br />
Patent No. 7,791,046, September 7, 2010<br />
Marinis, T.F.; Kondoleon, C.A.; Pryputniewicz, D.R.<br />
Structures for Crystal Packaging Including Flexible Membranes<br />
Patent No. 7,851,970, December 14, 2010<br />
Mescher, M.J.<br />
High Speed Piezoelectric Optical System with Tunable Focal<br />
Length<br />
Patent No. 7,826,144, November 2, 2010<br />
Sammak, P.J.; Mangoubi, R.S.; Desai, M.N.; Jeffreys, C.G.<br />
Methods and Systems for Imaging Cells<br />
Patent No. 7,711,174, May 4, 2010<br />
List of 2010 Patents<br />
Sawyer, W.D.<br />
MEMS Devices and Interposer and Method for Integrating MEMS<br />
Device and Interposer<br />
Patent No. 7,655,538, February 2, 2010<br />
Tawney, J.; Hakimi, F.<br />
Methods and Apparatus for Providing a Semiconductor Optical<br />
Flexured Mass Accelerometer<br />
Patent No. 7,808,618, October 5, 2010<br />
Uhland, S.A.; Davis, S.M.; Shanfield, S.R.; White, D.W.; Racz, L.M.<br />
Systems and Methods for High Density Multi-Component<br />
Modules<br />
Patent No. 7,727,806, June 1, 2010<br />
Vacanti, J.P.; Rubin, R.; Cheung, W.; Borenstein, J.T.<br />
Method of Determining Toxicity with Three Dimensional<br />
Structures<br />
Patent No. 7,670,797, March 2, 2010<br />
Vacanti, J.P.; Shin, Y-M.M.; Ogilvie, J.; Sevy, A.; Maemura, T.; Ishii, O.;<br />
Kaazempur-Mofrad, M.R.; Borenstein, J.T.; King, K.R.; Wang, C.C.;<br />
Weinberg, E.J.<br />
Fabrication of Tissue Lamina Using Microfabricated Two-<br />
Dimensional Molds<br />
Patent No. 7,759,113, July 20, 2010<br />
Ward, P.A.<br />
Interferometric Fiber Optic Gyroscope with Off-Frequency<br />
Modulation Signals<br />
Patent No. 7,817,284, October 19, 2010<br />
Weinberg, E.J.; Borenstein, J.T.<br />
Systems, Methods, and Devices Relating to a Cellularized<br />
Nephron Unit<br />
Patent No. 7,790,028, September 7, 2010<br />
Young, J.; Turney, D.J.<br />
Systems and Methods for Reconfigurable Computing<br />
Patent No. 7,669,035, February 23, 2010<br />
The 2010 Draper Distinguished<br />
Performance Awards<br />
Chairman of the Board John A. Gordon and President Jim Shields presented the 2010 Draper Distinguished<br />
Performance Awards (DPAs) to two teams at the Annual Dinner of the Corporation on October 7. The first<br />
team included Laurent G. Duchesne, Richard D. Elliott, Robert M. Filipek, Sean George, Daniel I. Harjes,<br />
Anthony S. Kourepenis, and Justin E. Vican for the “Design and Demonstration of a Guided Bullet for Extreme<br />
Precision Engagement of Targets at Long Range.” The second team included Stanley R. Shanfield, Albert C.<br />
Imhoff, Thomas A. Langdo, Balasubrahmanyan “Biga” Ganesh, and Peter A. Chiacchi for the “Development<br />
of an Ultra-Miniaturized, Paper-Thin Power Source.”<br />
Each year since 1989, Draper Laboratory has presented Distinguished Performance Awards to recognize<br />
extraordinary and unique individual and team performance. A committee of Draper staff representing<br />
every organization evaluated the nominations against the following criteria:<br />
• Constitutes a major technical accomplishment.<br />
• Involves highly challenging and complex tasks of substantial benefit to the Laboratory.<br />
• Is a recent discrete accomplishment that is clearly extraordinary and represents a standard of<br />
excellence for the Laboratory.<br />
• Can be attributed to an identifiable individual or team as the prime factor in the results.<br />
• Is regarded as a major advance by the outside community.<br />
The Distinguished Performance Award Evaluation Committee was chaired by Jim Comolli. Committee<br />
members included Mark Abramson, Dick Dramstad, Alex Edsall, Dan Eyring, Al Ferraris, Ryan Prince, Peter<br />
Halatyn, and Livia Racz. Jean Avery provided administrative support.<br />
Left to Right:<br />
Daniel I. Harjes, Anthony S. Kourepenis, Sean George, Richard D. Elliott, Laurent G. Duchesne, Justin E. Vican, and Robert M. Filipek<br />
Design and Demonstration of a Guided Bullet for Extreme Precision Engagement of Targets at Long Range<br />
Performing on the DARPA Extreme Accuracy Tasked Ordnance (EXACTO) program, the team developed a revolutionary<br />
.50-caliber bullet guidance system that will be used to produce the smallest, fastest, highest-g fully guided projectile<br />
to date. To survive a 70,000-g launch acceleration, they designed a first-of-a-kind, two-body bullet with a decoupled<br />
aft section that despins from 120,000 to 0 rpm in under 300 ms. This required an innovative, alternator-controlled,<br />
despun aft section that provides sufficient maneuverability with low enough drag for the bullet to remain supersonic<br />
out to maximum range.<br />
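The quoted despin figures can be sanity-checked with a back-of-the-envelope calculation. The Python sketch below uses only the numbers stated above and, as a simplifying assumption, treats the 300-ms bound as the full despin interval to compute the implied mean angular deceleration.<br />

```python
# Back-of-the-envelope check of the despin rate quoted above:
# the aft section spins down from 120,000 rpm to 0 in under 300 ms.
import math

rpm_initial = 120_000.0  # initial spin rate of the aft section
despin_time_s = 0.300    # upper bound on the despin interval (assumed here)

omega0 = rpm_initial * 2.0 * math.pi / 60.0  # convert rpm to rad/s
alpha_avg = omega0 / despin_time_s           # mean angular deceleration

print(f"initial rate: {omega0:.0f} rad/s")            # ≈ 12,566 rad/s
print(f"mean deceleration: {alpha_avg:.0f} rad/s^2")  # ≈ 41,888 rad/s^2
```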
The team worked within an 11-month time frame to deliver a system that exceeded all of the accuracy requirements across<br />
a variety of night- and daytime ranges, moving targets, wind speeds and directions, and other environmental conditions.<br />
The effort culminated in May with a physics- and experiment-based, fully integrated hardware- and software-in-the-loop<br />
demonstration that not only validated superior system performance, but also exceeded designated product requirements<br />
over all ranges and all target motion challenges. For this accomplishment, the program was recently awarded Phase II to<br />
continue the design and development of the guidance mechanics and electronics in collaboration with a commercial sponsor.<br />
The outstanding technical achievements demonstrated in the design, fabrication, simulation, and testing of this miniaturized<br />
guidance system are well deserving of this award.<br />
Back to Front:<br />
Thomas A. Langdo, Albert C. Imhoff, Balasubrahmanyan "Biga" Ganesh, Peter A. Chiacchi, and Stanley R. Shanfield<br />
Development of an Ultra-Miniaturized, Paper-Thin Power Source<br />
This award recognizes a truly revolutionary advance in energy delivery that shows an order-of-magnitude improvement over<br />
current technologies. The paper-thin power source (PTPS), a thermoelectric power source, has successfully demonstrated a<br />
dramatic breakthrough in miniature portable energy through the combined use of an innovative linear array of miniaturized,<br />
heat-scavenging thermocouple pairs and extremely efficient dc-dc power converters. These advances will better enable<br />
miniature portable systems to achieve their required mission endurance.<br />
The concept and the fabrication approach both required significant innovation commensurate with the criteria of this award.<br />
PTPS required the thin-film deposition of Bi2Te3 (bismuth telluride) and other high-performance thermoelectric materials that<br />
are difficult to use due to their composition and material defects. The team developed innovative material processing methods<br />
and unique machining and handling procedures to realize the long, thin features necessary for high efficiency and high voltages.<br />
No one had processed this material at these aspect ratios and sizes before while maintaining its bulk properties. Miniaturization<br />
of the thermocouple pairs was critical to the PTPS design’s success, since the combined cross section of the thermocouple pairs<br />
forming the array had to be small enough to prevent conductive heat transfer from reducing the temperature of the source. This<br />
resulted in a prototype that significantly outperformed the current state of the art.<br />
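The voltage-versus-heat-leak tradeoff described above can be illustrated with a simple first-order model. Every number in the sketch below is a hypothetical placeholder, not an actual PTPS parameter: the open-circuit voltage scales with the number of thermocouple pairs, while the parasitic heat conducted away from the source scales with the combined leg cross section, which is why miniaturizing the pairs was critical.<br />

```python
# Illustrative first-order model of the thermoelectric-array tradeoff.
# All values are hypothetical, for illustration only.
n_pairs = 100                  # thermocouple pairs in the linear array
seebeck_pair_v_per_k = 400e-6  # combined Seebeck coefficient per pair (V/K)
delta_t_k = 10.0               # temperature difference across the legs (K)
leg_area_m2 = 25e-12           # cross section of one leg (5 um x 5 um)
leg_length_m = 1e-3            # leg length ("long, thin features")
k_w_per_m_k = 1.5              # thermal conductivity, Bi2Te3-class material

# Voltage adds in series across the pairs.
open_circuit_v = n_pairs * seebeck_pair_v_per_k * delta_t_k
# Two legs per pair conduct heat in parallel from hot to cold side;
# the leak shrinks linearly with the leg cross section.
heat_leak_w = 2 * n_pairs * k_w_per_m_k * leg_area_m2 / leg_length_m * delta_t_k

print(f"open-circuit voltage: {open_circuit_v * 1e3:.1f} mV")  # 400.0 mV
print(f"conductive heat leak: {heat_leak_w * 1e6:.2f} uW")     # 75.00 uW
```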
These individuals were the key innovators and implementers who successfully designed, built, and tested this unprecedented,<br />
micron-scale thermoelectric generator system, which led to a successful customer demonstration of the integrated technologies<br />
and the Phase III contract now underway.<br />
The 2010 Outstanding Task Leader Awards<br />
Ian T. Mitchell is a Distinguished Member of the Technical Staff in the Dynamic Systems and Control Division. He<br />
has over 25 years of experience in designing and developing GN&C systems for a wide range of space programs.<br />
Prior to joining <strong>Draper</strong>, he was the lead GN&C engineer for the XSS-10 and XSS-11 microsatellite missions that<br />
successfully demonstrated a number of key technologies related to autonomous rendezvous and proximity<br />
operations. Mr. Mitchell is currently Task Lead for the Commercial Orbital Transportation Services (COTS)<br />
program, which involves the flight demonstration of the Cygnus spacecraft delivering cargo to the International<br />
Space Station (ISS). In this role, he has led <strong>Draper</strong>’s team in the development of guidance, navigation, and targeting<br />
(GN&T) algorithms and flight software for the Cygnus vehicle. Mr. Mitchell has also provided technical leadership<br />
within <strong>Draper</strong> in the application of Model-Based Design (MBD) methods to high-assurance, mission-critical GN&C<br />
algorithms and flight software development programs. Mr. Mitchell received a B.Sc. in Mathematics from the<br />
University of Manchester Institute of Science and <strong>Technology</strong> (UMIST), UK, and an M.Sc. in Control Engineering<br />
from the City University, London, UK.<br />
Daniel Monopoli is a Principal Member of the Technical Staff and System Integration Group Leader in the System<br />
Integration, Test, and Evaluation Division. Since joining <strong>Draper</strong> in 2000, he has made significant contributions in<br />
program areas including integration, test, and evaluation of strategic guidance, INS/GPS, and avionics systems.<br />
For the past 5 years, he has held numerous system integration roles within the Navy’s TRIDENT program. He is<br />
currently the Control Account Manager for MARK6 MOD1 System Test Equipment. This is a multiyear, cross-disciplinary<br />
effort among <strong>Draper</strong>, industrial support contractors, and the Integrated Support Facility to develop,<br />
integrate, test, evaluate, and certify the system test equipment required to perform production acceptance testing<br />
of the MARK6 MOD1 system. Prior to joining <strong>Draper</strong>, he was a Process Engineer in the Iron, Steel, and Casting<br />
Division for the U.S. Steel Group in Birmingham, Alabama. He is currently a Master’s candidate in Engineering<br />
Management at Tufts Gordon Institute. Mr. Monopoli holds B.E. and M.S. degrees in Mechanical Engineering from<br />
Vanderbilt University.<br />
Ian Mitchell, 2010 Outstanding Task Leader<br />
Daniel Monopoli, 2010 Outstanding Task Leader<br />
<strong>The</strong> 2010 Howard Musoff Student<br />
Mentoring Award<br />
<strong>The</strong> 2010 Howard Musoff Student Mentoring Award was presented to Sarah L. Tao, a Senior Member of<br />
the Technical Staff in the MEMS Design Group. Sarah received a B.S. in Bioengineering from the University<br />
of Pennsylvania and earned a Ph.D. in Biomedical Engineering from Boston University, where she was also a<br />
NASA Graduate Research Fellow. Before joining <strong>Draper</strong>, she was a Research Scientist and Sandler Translational<br />
Research Fellow at the University of California, San Francisco. Her research combines fabrication methods<br />
used for MEMS to create therapeutic platforms for cell encapsulation, targeted drug delivery, and templates<br />
for cell and tissue regeneration. She has over 25 publications, and her research efforts in microfabricated<br />
therapeutic platforms have earned numerous accolades, including the Society for Biomaterials Award for<br />
Outstanding Research, the Eurand Grand Prize for Outstanding Novel Research, and the Capsugel/Pfizer<br />
Award for Innovative Research. Away from the <strong>Laboratory</strong>, Sarah enjoys remaining active. She is a black belt<br />
candidate in Taekwondo and a scuba diver; with a newfound interest in triathlons, she has more recently become<br />
an avid runner and cyclist (swimmer: still to be determined).<br />
Without exception, past recipients of this award have indicated that they have been as enriched by their<br />
mentoring experiences as their students. As Sarah explains it:<br />
“<strong>The</strong> environment at <strong>Draper</strong> is unique in that as staff, we are able to engage young minds from major universities<br />
across the greater Boston area through cooperative education programs, senior engineering design courses,<br />
and our own <strong>Draper</strong> Fellow Program. Likewise, students at <strong>Draper</strong> are able to capitalize on a diverse network<br />
of experienced staff for support and guidance as they develop an individualized skill set on their pathway<br />
to independent research. This academic year, I had the benefit of working with three exceptional graduate<br />
students from MIT (Mechanical Engineering) and Boston University (Biomedical Engineering and Medical<br />
Sciences)—each with different backgrounds, interests, and personal goals. However, what they share in<br />
common is a remarkable motivation: an eagerness to learn, grow, and take ownership of their research. I believe<br />
the student-mentor partnership at <strong>Draper</strong> is both collaborative and reciprocal in every way. We work together<br />
to define goals—both research and career-wise—to work toward. And as the students inevitably come across<br />
problems, collectively, we look at their research from all angles and discuss options and strategies to maximize<br />
success. <strong>The</strong>se talented and creative students continuously bring their unique skills, perspectives, and ideas<br />
to our research on a daily basis. <strong>The</strong>ir drive and determination have been critical in the successful execution of<br />
multiple research projects, manuscript submissions, and conference presentations. And their input has been<br />
central in generating new lines of collaboration with our existing academic partners. It has been a privilege to<br />
work with and learn from each of my students this year.”<br />
<strong>The</strong> Howard Musoff Student Mentoring Award was established in 2005 in memory of Howard Musoff, a <strong>Draper</strong> employee<br />
for over 40 years who advised and mentored many <strong>Draper</strong> Fellows. <strong>The</strong> award is presented each February<br />
during National Engineers Week to recognize staff members who, like Musoff, share their expertise and<br />
supervise the professional development and research activities of <strong>Draper</strong> Fellows. <strong>The</strong> award, endowed by<br />
the Howard Musoff Charitable Foundation, includes a $1,000 honorarium and a plaque. Each Engineering<br />
Division Leader may submit one nomination of a staff person from his Division. <strong>The</strong> Education Office assists in<br />
the process by soliciting comments from students who were in residence during the award period. <strong>The</strong> Selection<br />
Committee consists of the Vice President of Engineering, the Principal Director of Engineering, and the Director<br />
of Education.<br />
Sarah L. Tao, recipient of the 2010 Howard Musoff Student Mentoring Award<br />
<strong>The</strong> 2010 Excellence in Innovation Award<br />
Catherine L. Slesnick, Benjamin F. Lane, Donald E. Gustafson, and Brad D. Gaynor received the 2010 Excellence in<br />
Innovation Award for their work on Navigation by Pressure. Under government sponsorship, <strong>Draper</strong> is developing<br />
technology to track objects at sea in a completely RF-denied environment. Geolocation is performed using<br />
barometric pressure. A small pressure sensor logs measurements of ambient barometric pressure and time.<br />
<strong>The</strong> recorded pressure time series is then correlated against gridded, high-quality worldwide and regional<br />
pressure data sets available from the weather monitoring community.<br />
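As a rough illustration of this matching step (a minimal sketch under assumed data shapes; the function name, grid layout, and numbers are hypothetical, and this is not Draper's algorithm), each grid cell can be scored by the correlation between its reference pressure series and the logged series, with the best-scoring cell taken as the location estimate:

```python
# Illustrative geolocation-by-pressure sketch (hypothetical, not Draper's implementation):
# score every grid cell by the Pearson correlation between its reference
# pressure time series and a logged pressure record.
import numpy as np

def locate_by_pressure(logged, grid_series):
    """logged: (T,) pressure record; grid_series: (ncells, T) reference
    series, one row per lat/lon grid cell. Returns the index of the
    cell whose series best matches the log."""
    x = logged - logged.mean()
    g = grid_series - grid_series.mean(axis=1, keepdims=True)
    # Pearson correlation of the log against every grid cell's series
    corr = (g @ x) / (np.linalg.norm(g, axis=1) * np.linalg.norm(x))
    return int(np.argmax(corr))

# Toy usage: three grid cells with distinct synthetic pressure histories
t = np.linspace(0, 10, 200)
grid = np.vstack([1013 + 5*np.sin(t), 1008 + 3*np.cos(t), 1015 + 4*np.sin(2*t)])
measured = grid[1] + np.random.default_rng(0).normal(0, 0.2, t.size)  # noisy log from cell 1
print(locate_by_pressure(measured, grid))  # → 1 (the cell the log was generated from)
```

In practice the reference series would come from gridded numerical-weather-analysis products, and the scoring would also have to handle time alignment, sensor bias, and motion across cells.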
Catherine L. Slesnick is a Senior Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong>. Her professional<br />
interests include working with large datasets from remote sensing and ground-based observing platforms.<br />
Emphasis has been on signal processing, time series analysis, fusion of information from heterogeneous<br />
sensors, and sensor system simulation. She has worked on multiple projects involving the analysis of Earth<br />
science and meteorological observations, as well as astronomical observations. She is currently Technical Lead<br />
for the Navigation by Pressure project and Lead Photometric Scientist for the U.S. Naval Observatory (USNO)-sponsored<br />
Joint Milli-Arcsecond Pathfinder Survey (JMAPS) satellite mission. Before joining <strong>Draper</strong>, she was a<br />
Fellow at the Department of Terrestrial Magnetism, Carnegie Institution for Science, in Washington, DC. She has<br />
30+ publications in the combined fields of astrophysics and Earth science data analysis. Dr. Slesnick earned<br />
a B.A. in Physics and Mathematics from New York University and a Ph.D. in Astrophysics from the California<br />
Institute of <strong>Technology</strong> as a National Science Foundation Graduate Fellow.<br />
Benjamin F. Lane is a Senior Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong> and is currently the Task<br />
Lead for the Guidance System Concepts effort. His expertise includes the development of advanced algorithms for<br />
image processing and real-time control systems, as well as instrument concepts, requirements, designs,<br />
control software, integration, testing, commissioning, operations, debugging, and data acquisition. He<br />
helped design, build, and operate a multiple-aperture telescope system (the Palomar Testbed Interferometer)<br />
for extremely high-angular resolution (picorad) astronomical observations, and also designed and built high-contrast<br />
imaging payloads for sounding rocket missions and spacecraft. He has published more than 45 peer-reviewed<br />
papers in his area of expertise. Dr. Lane holds a Ph.D. in Planetary Science from the California Institute<br />
of <strong>Technology</strong>.<br />
Donald E. Gustafson is a Distinguished Member of the Technical Staff at <strong>Draper</strong> <strong>Laboratory</strong> and has over 40<br />
years of experience in the conceptual design, analysis, and simulation of complex systems. His expertise includes<br />
development of advanced algorithms for GPS-based navigation, multipath exploitation in indoor and urban<br />
environments, robotic localization and tracking, underground object detection using ground penetrating radar,<br />
space/time adaptive signal processing, and biomedical signal processing and pattern recognition. He was one<br />
of the principal developers of <strong>Draper</strong>’s patented Deep Integration system, a nonlinear filtering algorithm for<br />
GPS-based code and carrier tracking. More recently, he has worked on inversion of GPS measurements for<br />
atmospheric refractivity tomography and has developed algorithms for navigation using atmospheric pressure<br />
measurements. <strong>Draper</strong> awards he has received include two Best Publication awards and two Patent of the Year<br />
awards, and he was co-recipient of the 2000 Distinguished Performance Award. He has also received two Best Paper<br />
awards from the Institute of Navigation. Dr. Gustafson has published more than 40 papers and holds a Ph.D. in<br />
Instrumentation and Control from MIT.<br />
Brad D. Gaynor is a Program Manager in the Special Operations Program Office at <strong>Draper</strong> <strong>Laboratory</strong>. He<br />
manages a number of programs that utilize key <strong>Draper</strong> technologies, including deep-fade GPS processing;<br />
miniature, low-power hardware; and multisensor navigation. Mr. Gaynor is currently enrolled in a Ph.D. program<br />
at Tufts University, where he also earned B.S. and M.S. degrees.<br />
Benjamin F. Lane, Catherine L. Slesnick, Donald E. Gustafson, and Brad D. Gaynor, recipients of the 2010 Excellence in Innovation Award<br />
List of 2010 Graduate Research <strong>The</strong>ses<br />
During 2010, over 50 students pursued their graduate degree programs while participating in the <strong>Draper</strong> Fellows<br />
program, conducting research on a wide variety of topics at top universities, including MIT, Northeastern University,<br />
and Rice University. <strong>The</strong>ses that were completed in 2010 are listed below. <strong>The</strong>ses that were completed in 2009 after<br />
the <strong>Digest</strong> went to press are also listed. Details on these and other student research can be obtained by contacting the<br />
<strong>Draper</strong> Education Office at ed@draper.com.<br />
Burke, D.; Supervisors: Spanos, P.; Dick, A.; Meade, A.J.; Bedrossian,<br />
N.; King, E.<br />
On-Orbit Transfer Trajectory Methods Using High Fidelity<br />
Dynamic Models<br />
Master of Science <strong>The</strong>sis, Rice University, April 2010<br />
Clark, T.; Supervisors: Young, L.R.; Duda, K.R.; Modiano, E.<br />
Human Spatial Orientation Perceptions During Simulated<br />
Lunar Landing<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Deshmane, A.V.; Supervisors: Mark, R.G.; Kessler, L.J.; Terman, C.J.<br />
False Arrhythmia Alarm Suppression Using ECG, ABP, and<br />
Photoplethysmogram<br />
Master of Engineering <strong>The</strong>sis, MIT, August 2009<br />
Herold, T.M.; Supervisors: Abramson, M.; Balakrishnan, H.; Bertsimas, D.<br />
Asynchronous, Distributed Optimization for the Coordinated<br />
Planning of Air and Space Assets<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Hung, B.W.; Supervisors: Kolitz, S.E.; Ozdaglar, A.; Bertsimas, D.<br />
Optimization-Based Selection of Influential Agents in a Rural<br />
Afghan Social Network<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Jeon, J.; Supervisors: Charest, J.; Kamm, R.D.; Hardt, D.E.<br />
3D Cyclic Olefin Copolymer (COC) Microfluidic Chip<br />
Fabrication Using Hot Embossing Method for Cell Culture<br />
Platform<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Kotru, K.; Supervisors: Ezekiel, S.; Stoner, R.E.; Modiano, E.<br />
Toward a Demonstration of a Light Force Accelerometer<br />
Master of Science <strong>The</strong>sis, MIT, September 2010<br />
Marrero, J.; Supervisors: Cleary, M.E.; Katz, B.; Terman, C.J.<br />
Resolution of Linear Entity and Path Geometries Expressed via<br />
Partially-Geospatial Natural Language<br />
Master of Engineering <strong>The</strong>sis, MIT, February 2010<br />
Middleton, A.J.; Supervisors: Hoffman, J.; Paschall II, S.C.; Modiano, E.<br />
Modeling and Vehicle Performance Analysis of Earth and Lunar<br />
Hoppers<br />
Master of Science <strong>The</strong>sis, MIT, September 2010<br />
Owen, R.; Supervisors: Hansman, R.J.; Kessler, L.J.; Modiano, E.<br />
Modeling the Effect of Trend Information on Human Failure<br />
Detection and Diagnosis in Spacecraft Systems<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Richards, J.E.; Supervisors: Major, L.M.; Rhodes, D.; Hale, P.<br />
Integrating the Army Geospatial Enterprise: Synchronizing<br />
Geospatial-Intelligence to the Dismounted Soldier<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Savoie, T.B.; Supervisors: Frey, D.D.; McCarragher, B.C.; Hardt, D.E.<br />
Human Detection of Computer Simulation Mistakes in<br />
Engineering Experiments<br />
Doctor of Philosophy <strong>The</strong>sis, MIT, June 2010<br />
Seidel, S.B.; Supervisors: Hildebrant, R.R.; Graves, S.C.; Bertsimas, D.<br />
Planning Combat Outposts to Maximize Population Security<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Shenk, K.N.; Supervisors: Markuzon, N.; Bertsimas, D.; Jaillet, P.<br />
Patterns of Heart Attacks<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Sievers, A.; Supervisors: Spanos, P.D.; Dick, A.; Meade, A.J.; Zanetti, R.;<br />
D’Souza, C.<br />
Multiple Event Triggers in Linear Covariance Analysis for<br />
Orbital Rendezvous<br />
Master of Science <strong>The</strong>sis, Rice University, April 2010<br />
Small, T.; Supervisors: Hall, S.R.; Proulx, R.J.; Modiano, E.<br />
Optimal Trajectory-Shaping with Sensitivity and Covariance<br />
Techniques<br />
Master of Science <strong>The</strong>sis, MIT, May 2010<br />
Snyder, A.M.; Supervisors: Markuzon, N.; Welsch, R.; Bertsimas, D.<br />
Data Mining and Visualization: Real Time Predictions<br />
and Pattern Discovery in Hospital Emergency Rooms and<br />
Immigration Data<br />
Master of Science <strong>The</strong>sis, MIT, June 2010<br />
Wilder, J.; Supervisors: Spanos, P.D.; Jang, J.-W.; Meade, A.J.;<br />
Stanciulescu, I.<br />
Time-Varying Stability Analysis of Linear Systems with Linear<br />
Matrix Inequalities<br />
Master of Science <strong>The</strong>sis, Rice University, May 2010<br />
Xu, Y.; Supervisors: Madison, R.W.; Poggi, T.A.; Terman, C.J.<br />
VICTORIOUS: Video Indexing with Combined Tracking and<br />
Object Recognition for Improved Object Understanding in<br />
Scenes<br />
Master of Engineering <strong>The</strong>sis, MIT, July 2009
<strong>The</strong> Charles Stark <strong>Draper</strong> <strong>Laboratory</strong>, Inc.<br />
555 <strong>Technology</strong> Square<br />
Cambridge, MA 02139-3563<br />
617.258.1000<br />
www.draper.com<br />
Business Development<br />
busdev@draper.com<br />
617.258.2124<br />
Houston<br />
Suite 470<br />
17629 El Camino Real<br />
Houston, TX 77058<br />
281.212.1101<br />
Huntsville<br />
Suite 225<br />
1500 Perimeter Parkway<br />
Huntsville, AL 35806<br />
256.890.7392<br />
St. Petersburg<br />
9900 16th St N<br />
St. Petersburg, FL 33716<br />
727.235.6500<br />
Tampa<br />
<strong>Draper</strong> Bioengineering Center at USF<br />
Suite 201<br />
3802 Spectrum Boulevard<br />
Tampa, FL 33612-9220<br />
813.465.5400<br />
Washington<br />
Suite 501<br />
1555 Wilson Boulevard<br />
Arlington, VA 22209<br />
703.243.2600