MARKET TECHNICIANS ASSOCIATION
JOURNAL

Issue 21
May 1985
MARKET TECHNICIANS ASSOCIATION JOURNAL
Issue 21, May 1985

Editor: James M. Yates
Bridge Data Company
10050 Manchester Road
St. Louis, Missouri 63122

Contributors: David R. Aronson
Barbara B. Diamond
Ralph Fogel
David Holt
William R. Johnston
George C. Lane
Steve Leuthold
J. Curtis Shambaugh
Jim Tillman
Bronwen Wood

Publisher: Market Technicians Association
70 Pine Street
New York, New York 10005
© Market Technicians Association 1985
MTA JOURNAL - MAY, 1985
TABLE OF CONTENTS

MTA OFFICERS ... 6
MEMBERSHIP AND SUBSCRIPTION INFORMATION ... 7
STYLE SHEET FOR SUBMISSION OF ARTICLES ... 8
MTA LETTER FROM THE EDITOR ... 9
    James M. Yates
TECHNICAL ANALYSIS IN THE UNITED KINGDOM, DOMESTIC AND INTERNATIONAL ... 11
    Bronwen Wood
HOW CYCLETREND CHANNELS HELP DETERMINE TURNING POINTS FOR STOCKS AND THE MARKET ... 27
    Jim Tillman
THE POWER OF THE YIELD CURVE ... 29
    J. Curtis Shambaugh
RELATIVE STRENGTH ... 35
    A Workshop on Relative Strength Moderated by Steve Leuthold
LANE'S STOCHASTICS: THE ULTIMATE OSCILLATOR ... 37
    George C. Lane
A VIEW FROM THE FLOOR ... 43
    William R. Johnston
A THREE YEAR FOLLOW-UP ON "THE ENIGMATIC STOCK OPTION" - A CONSTANT CHANGE ... 47
    David Holt
A VIEW FROM THE FLOOR ... 61
    Ralph Fogel
CENTERFOLD ... 64
OPTIMIZATION - SOFTWARE REVIEW WORKSHOP ... 67
    Barbara B. Diamond
ARTIFICIAL INTELLIGENCE/PATTERN RECOGNITION APPLIED TO FORECASTING FINANCIAL MARKET TRENDS ... 91
    David R. Aronson
1984-85 MARKET TECHNICIANS ASSOCIATION

OFFICERS

PRESIDENT
Richard Yashewski
Butcher & Singer
516/627-1600

VICE PRESIDENT
John Greeley
Greeley Securities
212/227-6900

VICE PRESIDENT (SEMINAR)
Gail Dudack
Pershing/Div. DLJ
212/902-3322

SECRETARY
Cheryl Stafford
Wellington Management
617/227-9500

TREASURER
Robert Simpkins
Delafield, Harvey, Tabell
609/924-9660

COMMITTEE CHAIRPERSONS

PROGRAMS
David Krell
212/623-8533

NEWSLETTER
Robert Prechter
404/536-0309

JOURNAL
James Yates
314/821-5660

CERTIFICATION
Charles Comer
212/825-4367

MEMBERSHIP
Phil Roth
212/742-6535

LIBRARY
Ralph Acampora
212/747-2355

ETHICS & STANDARDS/PUBLIC RELATIONS
Tony Tabell
609/924-9660

PLACEMENT
John Brooks
404/266-6262

EDUCATION
Fred Dickson
212/398-8489

COMPUTER SPECIAL INTEREST GROUP
John McGinley
203/762-0229

FUTURES SPECIAL INTEREST GROUP
John Murphy
212/724-6982

SAN FRANCISCO TECHNICAL SOCIETY SPECIAL INTEREST GROUP
Henry Pruden
415/459-1319
MARKET TECHNICIANS ASSOCIATION
MEMBERSHIP AND SUBSCRIPTION INFORMATION

REGULAR MEMBERSHIP - $75 per year plus $10 one-time application fee.
Receives the MTA Journal, the monthly MTA Newsletter, invitations to all meetings, voting member status, and a discount on the Annual Seminar fee. Eligibility requires that the emphasis of the applicant's professional work involve technical analysis.

SUBSCRIBER STATUS - $75 per year.
Receives the MTA Journal and the MTA Newsletter, which contains shorter articles on technical analysis. The subscriber receives special announcements of the MTA meetings open to The New York Society of Security Analysts and/or the public, plus a discount on the Annual Seminar fee.

ANNUAL SUBSCRIPTION TO THE MTA JOURNAL - $35 per year.

SINGLE ISSUES OF THE MTA JOURNAL (including back issues) - $15.

The Market Technicians Association Journal is scheduled to be published three times each fiscal year, in approximately November, February, and May.

An ANNUAL SEMINAR is held each Spring.

Inquiries for REGULAR MEMBERSHIP and SUBSCRIBER STATUS should be directed to:

Membership Chairman
Market Technicians Association
70 Pine Street
New York, New York 10005
STYLE SHEET FOR THE SUBMISSION OF ARTICLES

MTA Editorial Policy

The Market Technicians Association Journal is published by the Market Technicians Association, 70 Pine Street, New York, New York 10005, to promote the investigation and analysis of price and volume activities of the world's financial markets. The MTA Journal is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, Europe, and several other countries. The Journal is copyrighted by the Market Technicians Association and registered with the Library of Congress. All rights are reserved. Publication dates are February, May, and November.

Style for the MTA Journal

All papers submitted to the MTA Journal are requested to have the following items as prerequisites to consideration for publication.

A short (one-paragraph) biographical presentation for inclusion at the end of the accepted article upon publication. Name and affiliation will be shown under the title.

All charts should be provided in camera-ready form and be properly labeled for text reference.

All tables should be properly labeled and in camera-ready form.

Papers should be submitted typewritten, double-spaced, in completed form, on 8½-by-11-inch paper. If both sides are used, care should be taken to use sufficiently heavy paper to avoid reverse-side images. Footnotes and references should be placed at the end of the article.

Greek characters should be avoided in the text and in all formulae.

One submission copy is satisfactory.

Manuscripts of any style will be received and examined, but upon acceptance, they should be prepared in accordance with the above policies.
MTA LETTER FROM THE EDITOR

The Journal's deepest appreciation goes to the contributors of the notes, text, and exhibits for this Seminar Journal. A lot of hard work went into their preparation, and it is evident in the contents.

The MTA Seminar Indicator is once again included for your inspection and interpretation.

Good weather and lots of hot air are predicted at Hilton Head in May, 1985, so we hope everyone enjoys and takes advantage of it.

The editor's compliments for their valued assistance in the production go (as usual) to Sally Ruppert and Pam Hollrah. The Seminar issue is always close to the wire and could not occur without their competent and cheerful help.

James M. Yates
EDITOR
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.
TECHNICAL ANALYSIS IN THE UNITED KINGDOM, DOMESTIC AND INTERNATIONAL

Bronwen Wood

INDICATORS AND STATISTICS NOT AVAILABLE IN THE LONDON STOCK MARKET
1) Due to the British paranoia about secrecy, there is no volume data for individual stocks. The only volume figures for sectors are monthly. There is no breakdown of any kind of the one daily volume figure, which is "total equity bargains."
2) There are no satisfactory figures on institutional liquidity. Official figures are months out of date. Private sampling suggests that absolute levels of liquidity move in uptrends and downtrends: a shortage of cash so great that the market could not rise further was signalled at five percent between 1976 and 1980, whereas the level at which the supply of cash exceeded the supply of stock and, therefore, caused the market to rise, trended down from fourteen percent to five and one-half percent over the same period. There was then a readjustment, and the new trend has high liquidity sloping from eight percent to five and one-quarter percent and low liquidity from five percent to two and one-half percent. Another adjustment is in the making at the moment, with the possibility of an uptrend developing over the next few years.
3) The only sentiment indicator available is the put/call ratio, which is so new that its usefulness cannot yet be established. In fact, the options market is so little used that it may turn out not to be a very good indicator.
4) There are no margin debt figures, as margin trading in stocks and bonds is not permitted in London.
5) There are no specialist, member, or odd-lot short sales figures. This is basically because short sales are not permitted at all by many stockbrokers and are generally not admitted to when they do occur. Rolling a short position over from one two-week accounting period to the next is rarely permitted, even if one's broker will knowingly allow clients to go short. Even if short sales figures were collected, the all-pervading secrecy would never allow the data to be broken down into specialist, member, and odd-lot bargains.
6) There is no formal measure of contrary opinion. There is not a big enough private client base seeking investment advice to allow more than a small number of market letters to prosper, and these mostly take the form of tip sheets. There is therefore no way of assessing professional advisory sentiment objectively.
INDICATORS AND APPROACHES WHICH ARE HELPFUL IN LONDON

7) Overbought/Oversold Indicators. I calculate three for the equity market, the gilt (or government bond) market, and the gold mining market.

a) The daily net figure of how many more of the last ten trading days were up than down. At plus six or greater, the market is showing signs of being overextended.

b) The fourteen-day R.S.I., using seventy and thirty as overbought and oversold levels. This is sometimes helpful as a divergence indicator.

c) The ten-day moving total of the net rises and falls for all stocks in the relevant market. For short-term corrections this is the best of these indicators. For medium-term moves, it is a good non-confirmation indicator. (See Chart 1: gilt overbought/oversold in May and December, 1984; equity overbought/oversold December, 1983, and July, 1984.)
These indicators are particularly useful when they are all clearly overbought or oversold together, which they frequently are not. In addition, they are good for confirming a major change in the direction of the market when, for example, they become overbought but the market refuses to correct (e.g., Chart 1): the period in the gilt market after November, 1982, when the market was embarking on a prolonged period of sideways trading after a long bull leg, and August, 1984, in the equity market, when a huge new bull move (+ thirty-five percent) had just started.

d) Five-day momentum is a good short-term indicator. It is overextended at plus three percent.
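For readers who want to reproduce these gauges, the four indicators of item 7 can be sketched as below. This is an illustrative sketch in Python, not the author's own code; in particular, the simple-average R.S.I. variant (rather than Wilder's smoothed version) is an assumption, since the text does not specify the calculation.

```python
# Sketch of the short-term overbought/oversold gauges in item 7.
# Thresholds follow the text: +6 up-days, R.S.I. 70/30, +3% five-day momentum.

def net_up_days(closes, window=10):
    """7a) Up days minus down days over the last `window` sessions."""
    changes = [b - a for a, b in zip(closes, closes[1:])][-window:]
    return sum(1 for c in changes if c > 0) - sum(1 for c in changes if c < 0)

def rsi(closes, period=14):
    """7b) R.S.I. using simple averages of gains and losses (an assumption)."""
    changes = [b - a for a, b in zip(closes, closes[1:])][-period:]
    gains = sum(c for c in changes if c > 0)
    losses = sum(-c for c in changes if c < 0)
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def net_breadth_total(advances, declines, window=10):
    """7c) Ten-day moving total of net rises minus falls for all stocks."""
    return sum(a - d for a, d in zip(advances[-window:], declines[-window:]))

def momentum_pct(closes, span=5):
    """7d) Five-day momentum as a percentage; overextended at plus three."""
    return 100.0 * (closes[-1] - closes[-1 - span]) / closes[-1 - span]
```

On a steadily rising series, all four gauges read overextended at once, which is exactly the condition the text says is most telling.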
8) Cumulative advance/decline works well, though not invariably. For example, it gave a good sell signal in May, 1984 (see Chart 1). It gave clear, but not always early, signals at all major tops and bottoms from 1971 onwards.
9) Annual momentum is most useful in London at major tops and bottoms. It gives good advance warning that absolute peaks and lows are about to be hit and has signalled well for both equities and gilts over the past fifteen years. (See Chart 1: gilts, end 1982; Chart 2: equities, mid-1970 to February, 1971; Chart 3: equities, March/May, 1972.)
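Items 8 and 9 can be sketched in the same style. The cumulative advance/decline line is simply a running total; for annual momentum the text gives no exact formula, so the twelve-month percentage rate of change used here is an assumption.

```python
# Minimal sketches of items 8 (cumulative advance/decline) and 9 (annual
# momentum, assumed here to be the twelve-month percent rate of change).

def cumulative_ad_line(advances, declines):
    """Running total of daily advances minus declines."""
    line, total = [], 0
    for a, d in zip(advances, declines):
        total += a - d
        line.append(total)
    return line

def annual_momentum(monthly_closes):
    """Percent change versus the index level twelve months earlier."""
    return [100.0 * (monthly_closes[i] / monthly_closes[i - 12] - 1.0)
            for i in range(12, len(monthly_closes))]
```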
10) Volume rarely has anything to add to an understanding of the London market. Occasionally, one finds volume starvation (e.g., Chart 3: May/June, 1972, and September/October, 1972).
11) Another divergence indicator, which sometimes works extremely well but is also sometimes confusing, is illustrated for the gilt market on Chart 4. The top line is a three-day moving average, and the bottom line a five-day moving average, of the percentage difference between the index and its thirty-eight-day moving average.
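One reading of item 11 can be sketched as follows; it assumes both lines smooth the same underlying series, the percentage gap between the index and its thirty-eight-day moving average, which is how the description most naturally parses.

```python
# Sketch of the item 11 divergence indicator: percent deviation of the
# index from its 38-day moving average, smoothed over 3 and 5 days.

def moving_average(series, n):
    """Simple n-period moving average (one value per full window)."""
    return [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]

def deviation_oscillator(closes, base=38, fast=3, slow=5):
    ma = moving_average(closes, base)
    aligned = closes[base - 1:]              # align closes with the base MA
    pct_dev = [100.0 * (c - m) / m for c, m in zip(aligned, ma)]
    return moving_average(pct_dev, fast), moving_average(pct_dev, slow)
```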
12) As a lagging indicator, the five-day moving averages of new highs and new lows often work well. When a change of primary trend looks likely, it remains unconfirmed until the two lines have crossed dramatically and been able to stay crossed. (See Chart 1, 1983.)
13) The quotient of the All-Share Index divided by the Government Securities Index has given excellent long-term trends for many years. The trend only breaks when top- or base-building in the equity market is well advanced or has been completed. See Chart 5.
OTHER ASPECTS OF TECHNICAL ANALYSIS IN BRITAIN

14) While indicators such as these are a major part of the technician's armoury in the London market, it must be admitted that, severally and together, they have let us down quite often in the past five years. The most recent memorable occasion was in July, 1984, when most indicators suggested that the bull market was coming to an end (see Chart 1). A pattern which looked as though it could well be a major head-and-shoulders top developed (Chart 6). However, in August, relief that interest rates were falling, and that the sterling crisis was over, seems to have been enough to make a nonsense of all the technical indicators, and the market began a rise which added thirty-five percent to the All-Share Index in under six months.

For quite a long time now, the only way to get the market right and be useful to one's clients has been to concentrate on sectors and individual shares rather than on market indices and indicators.
15) London has become so volatile that moving averages have often set traps by breaking, rolling over, and crossing just as the share or the market concerned is about to reverse direction. I find them too unreliable to be useful except as confirmation, and sometimes not even then.
16) Relative strength on shares and sectors, however, is one of the most useful tools in London, particularly for support and resistance levels and trendlines. For sector selection, in particular, relative strength lines are invaluable.
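The relative strength line described above is, in its usual form, just the ratio of a share (or sector index) to the broad market, rebased so trendlines and support levels are easy to read. A minimal sketch, with the rebasing to 100 an assumption of convention rather than anything the text specifies:

```python
# Relative strength line: price divided by a market index, rebased to 100
# at the first observation.

def relative_strength_line(prices, index, base=100.0):
    """Ratio of price to market index, rebased to `base` at the start."""
    first = prices[0] / index[0]
    return [base * (p / m) / first for p, m in zip(prices, index)]
```

A rising line means the share is outperforming the market even if both are falling, which is why the tool works for sector selection in bear phases too.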
17) The government securities market in London is extremely important. I keep my three overbought/oversold indicators, the oscillator, annual momentum, interest rate charts, futures charts, and subsector charts for gilts. The London gilt market is one of the biggest and most flexible in the world and attracts enormous overseas business. It outperforms the equity market quite often, and not just in bear markets. Given the end of the bear market in sterling and the probability of interest rates falling, a lot of international money will probably be attracted to United Kingdom gilts during the next eighteen months. Contrary to prejudice, technical analysis works extremely well in our gilt market.
18) Because London fund managers invest enormous sums abroad, it is necessary for us to keep abreast of overseas markets. My solution is to be aware of the position of the major indices for each national market, but to look at individual shares only as requested. The only source of reliable charts for individual stocks over a wide range of countries that I am aware of is the Chart Analysis International Book. It is my contention that for top-quality technical as well as fundamental information on individual stocks, experts in the country of origin should be used, due to the greater depth of data available to them. However, for an overall view of foreign markets it is often possible to be surprisingly successful using market indices, figures for which are available in the Financial Times.
19) Being internationally orientated, gold bullion and currencies are very important in London. I use a combination of long-term and short-term charts for both. In particular, I like one-box-reversal point and figure charts on an insensitive scale for currencies. This allows both history and sensitivity to be apparent on the same sheet of paper. (See Charts 7a, b, and c.) For bullion, I use an insensitive long-term point and figure chart and a sensitive bar chart, the former for perspective and the latter for estimating trading moves. (See Charts 8 and 9.)
20) The various chart books and services available from Chart Analysis and Investment Research are excellent for London equities and gilts, overseas markets, currencies, and commodities. They stand head-and-shoulders above anything else available so far from England, including Datastream, whose charts, though more numerous, are not so reliable.
[Charts 1-5]
CHART 6
ALL-SHARE INDEX 1967-1985
Scale: 2 points per box, 5-box reversal
[Charts 7a, 7b, 7c, 8, and 9]
BIOGRAPHY

Bronwen Wood joined Rowe & Pitman twelve years ago, where she is currently in charge of Technical Research, covering the U.K. stock and bond markets, commodities, currencies, and overseas markets for a largely institutional clientele.

Bronwen was educated at Bristol and London Universities and the Central London Polytechnic, where she completed a post-graduate diploma in management studies. She first joined a stockbroking firm as a fundamental analyst. Finding technical analysis to be more effective, she gradually switched over and moved to Chart Analysis, the well-respected technical consultancy firm. From there she joined Rowe & Pitman, which is due to become part of one of the new financial conglomerates that will come into existence sometime in October, 1986. Its United States operations have already been merged into S. G. Warburg, Rowe and Pitman, Akroyd, Inc.
HOW CYCLETREND CHANNELS HELP DETERMINE TURNING POINTS FOR STOCKS AND THE MARKET

Jim Tillman
Having written a market letter based on cycles for over ten years, I have tried various methods of presentation to convey my views correctly. Dealing with a subject as conceptually difficult as cycles, and the way they combine within the market, has been difficult at best for many readers and total frustration for others. Only after adding cycle channels a few years ago did the total picture come into focus for the average reader.

These charts of the Dow Industrial Average on a weekly, daily, and hourly basis show how the concept may be helpful no matter what time parameters one may choose. Of course, this particular time period (February 15, 1985) was clearly saying the market was ready to come down and would need to pick up channel support before being ready to advance again.

At the Market Technicians Association annual conference, I will show where we are currently in the Cycletrend channels, illustrate how channels may be used on individual stocks, and make projections for the market based on current dominant cycles. I look forward to seeing you there.
DOW JONES INDUSTRIAL AVERAGE INDEX:
Daily, weekly, and hourly charts
BIOGRAPHY

Jimmie E. Tillman is Vice President, Interstate Securities, Institutional Department, Charlotte, North Carolina. He is also the author of Cycletrend, an institutional cycle timing service for the stock market and stock groups. Married, with four children, Mr. Tillman is a native south Georgian, educated at Clemson University, and a self-taught market technician for twenty-five years.
THE POWER OF THE YIELD CURVE

J. Curtis Shambaugh
The most frequently observed phenomenon that all capital market participants utilize in making decisions is the term structure of interest rates. The U.S. Treasury yield curve, which appears in all financial publications as well as being available "on line" in numerous information retrieval devices, represents the sum total of all participants' transactions in the capital market, whether they are borrowers or lenders, hedgers or investors, taxable or non-taxable, individual or institutional, domestic or foreign. Any alteration of the slope of the yield curve reflects changes occurring somewhere else in the financial markets, induced by governmental policies, the supply and demand of credit, or even fear or greed.
As a result of the past half-decade's rapid deregulation of interest rates and the onset of very liquid hedging devices in futures and options, a huge market of interest rate "swaps" has developed. Consequently, a greater proportion of the U.S. economy is now more sensitively attached to the level of, and change in, shorter-term interest rates, particularly through "adjustable rate" mortgages for housing and automobile loan rates.
Academicians studying the Treasury yield curve describe a positive yield curve as a forecast of rising interest rates when utilizing "rolling horizon" analyses. A simple example of this method: if a one-year security yielded nine percent and a two-year security yielded ten percent, then, in theory, a one-year security bought one year later could yield eleven percent and produce a total return equivalent to that of the original two-year security.
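The arithmetic behind that example is the break-even (implied forward) rate: the one-year rate, one year out, at which rolling two one-year securities matches the two-year security's total return. A quick sketch:

```python
# Rolling-horizon break-even rate from the example in the text:
# a 9% one-year and a 10% two-year security imply roughly 11% for the
# second year, since (1.10)^2 = (1.09) * (1 + forward).

def implied_forward_1y(one_year_yield, two_year_yield):
    """One-year rate, one year forward, that equalizes total returns."""
    return (1.0 + two_year_yield) ** 2 / (1.0 + one_year_yield) - 1.0

rate = implied_forward_1y(0.09, 0.10)   # about 0.110, i.e. roughly eleven percent
```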
However, yield curves have evidenced long periods of positive or negative character before such "forecasts" come to pass. Also, interest rates have come down a number of times when the yield curve was positive, or have even come down when the yield curve was less positive. Most rises in interest rates have occurred in periods of negative yield curves.
Over the past half-decade, changes in the slope of the yield curve have evidenced a high correlation to subsequent moves in the capital markets, as will be more evident in the following charts. The reasons these changes in slope are predictive are more easily explained in visceral terms of fear and greed. Most simply, as the yield curve gets more positive, the investing world is induced to extend in maturity (read "risk"), and when the yield curve becomes less positive, such incentive is reduced. Since the price of anything is most affected by the marginal transaction, if the marginal transaction is a purchase it seems logical that the price should rise.
As can be seen in Chart 1, there have been numerous changes of significance over the past five years. This chart is very simple: just the ratio of the active long-term Treasury bond yield divided by the bond-equivalent yield of the six-month Treasury bill. After testing more complex ratios and/or different maturities, I have found this chart worked best and was extraordinarily correlative to subsequent moves in monthly total returns of long-term Treasury bonds (Chart 2), and even led changes in equity indices that are not overweighted by market price or capitalization. (Chart 3 is the monthly average of the industrial component of the Value Line Index.)
At the seminar, we will discuss each of these charts in more depth, along with the moving averages that seem to add even greater predictive value to these events.
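The Chart 1 series can be sketched as below. The ratio definition follows the text exactly; the moving average lengths are not given, so the ten-observation window here is an assumption for illustration only.

```python
# Sketch of the Chart 1 slope series: long-bond yield divided by the
# six-month bill's bond-equivalent yield. A ratio above 1.0 means a
# positive (upward-sloping) curve.

def slope_ratio(long_yields, bill_bey_yields):
    """Elementwise ratio of long-bond yield to six-month bill yield."""
    return [l / s for l, s in zip(long_yields, bill_bey_yields)]

def simple_ma(series, n=10):
    """Simple n-period moving average; the window length is an assumption."""
    return [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]
```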
CHART 1

CHART 2
LONG TERM BOND RETURNS, 1/29/80 to 3/26/85
THE FIRST BOSTON CORPORATION, FIXED INCOME RESEARCH

CHART 3
PRICE: VALUE LINE INDEX (INDUSTRIAL)
THE FIRST BOSTON CORPORATION, FIXED INCOME RESEARCH
BIOGRAPHY

J. Curtis Shambaugh has been with First Boston Corporation as Vice President, Taxable Fixed Income - Strategist, since January, 1983. Prior to that he was with Alliance Capital Management and its predecessors in a variety of positions. Going backwards in time, Mr. Shambaugh has been a portfolio manager of equity and fixed income accounts; manager of the discretionary fixed income department (originated in 1970); investment counselor; and member of Moody's Investors Rating Committee (Industrial Specialist) and Moody's Manuals. Prior to 1981, he was employed by Edwards & Hanley as a registered representative; by Permatex Corp. as a laboratory chemist; and by the U. S. Weather Bureau.

Mr. Shambaugh was educated at the Massachusetts Institute of Technology and C. W. Post College.
RELATIVE STRENGTH

A Workshop on Relative Strength Moderated by Steve Leuthold

PANELISTS:

Jim Bohan
Merrill Lynch, Pierce, Fenner & Smith, Inc.

Richard Gala
Batterymarch

Ed Nicoski
Piper, Jaffray & Hopwood, Inc.

David Upshaw
Waddell & Reed
QUESTIONS:

What are the relative strength tools that you use, and how are they calculated? Oscillators? Percentiles? Charts? Etc.

What are the strengths of relative strength analysis?

What are some pitfalls of relative strength analysis?

Why does it not always pay to buy positive relative strength and sell negative relative strength?

How important are relative strength considerations in your overall analytical approach?

What is the best market proxy to use in calculating relative strength? S&P 500? NYSE Composite? Unweighted indices such as Value Line or Indicator Digest? Other? Why?

Should an individual stock's volatility characteristics be factored into its relative strength calculation (beta-adjusted relative strength)? Why or why not?

Do you think relative strength is a less useful tool today than it was ten years ago, or is it about the same? If less effective, how do you explain it?

Now, what about the future? Do you expect relative strength to become more or less useful? Why?
BIOGRAPHY<br />
Steve Leuthold is an investment strategist and researcher, actively involved in various phases<br />
of investment and economic research for over twenty years. He is the managing director of<br />
The Leuthold Group, an investment research organization headquartered in Minneapolis, Min-<br />
nesota. From 1977 through 1981, prior to forming his own firm, Mr. Leuthold served as an of-<br />
ficer and portfolio manager for two mutual funds -- Pilot and Industries Trend Fund. From 1969<br />
through 1981, he was also associated with Piper, Jaffray & Hopwood as an investment strat-<br />
egist.<br />
LANE’S STOCHASTICS: THE ULTIMATE OSCILLATOR<br />
George C. Lane<br />
In 1954, I joined Investment Educators as a junior analyst. In reality, I was a “go-fer,” running<br />
the projector, carrying the luggage. But I also kept up the charts, learning the art of technical<br />
analysis by doing.<br />
Investment Educators was then an eight-year-old educational school, teaching charting, moving<br />
averages, and the Elliott Wave in a series of three classes--all on the stock market. In those<br />
days, the stock market had periods of drifting without much to interest potential clients, so we<br />
soon added commodities courses to our fare. I taught them.<br />
After I joined the six-man, no-pay research staff, we discovered oscillators. We researched and<br />
experimented with over sixty applications, with the result that we found about twenty-eight that<br />
had predictable values. In charting our cumulative oscillators, we found they were running all<br />
over the chart paper. Soon, we had chart paper running all over the walls. So, we struck upon<br />
the technique of reducing these oscillators to a percentage. We used the alphabet to differ-<br />
entiate one from the other: %A, %B, etc. Each one was reduced to a percentage indicator pri-<br />
marily so we could manage to keep them workable on the chart paper!<br />
As a result of all the hard work (the 14-hour, mostly by hand, no-pay days), we decided that<br />
the most reliable indicator was %D for “% of Deviation.” The basic premise of %D is that mo-<br />
mentum leads price. It makes top before price and it makes bottom before price. Momentum<br />
is a leading indicator. %D is a momentum oscillator.<br />
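The article does not spell out the arithmetic behind %K and %D. The now-standard formulation attributed to Lane can be sketched as follows (a Python illustration; the 5-bar lookback and 3-bar smoothing are illustrative parameter choices, not values taken from the text):<br />

```python
def stochastics(highs, lows, closes, n=5, d_period=3):
    """Raw %K: where the close sits in the high-low range of the last
    n bars, as a percentage.  %D: a simple moving average of %K."""
    k = []
    for i in range(n - 1, len(closes)):
        hh = max(highs[i - n + 1 : i + 1])
        ll = min(lows[i - n + 1 : i + 1])
        k.append(100.0 * (closes[i] - ll) / (hh - ll) if hh != ll else 50.0)
    d = [sum(k[j - d_period + 1 : j + 1]) / d_period
         for j in range(d_period - 1, len(k))]
    return k, d
```

%K locates the close within the recent range; %D, its moving average, is the smoother line whose divergences against price the article trades on.<br />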
I quote from Welles Wilder's book, New Concepts in Technical Trading Systems:<br />
One of the most useful tools employed by many technicians is the momentum os-<br />
cillator. The momentum oscillator measures the velocity of directional price move-<br />
ment. When the price moves up very rapidly, at some point it is considered to be<br />
overbought; when it moves down very rapidly, at some point it is considered to be<br />
oversold. In either case, a reaction or reversal is imminent. The slope of the mo-<br />
mentum oscillator is directly proportional to the velocity of the move. The distance<br />
traveled up or down by the momentum oscillator is proportional to the magnitude of<br />
the move.<br />
For those of you who would like a detailed mathematical description of the theory and functioning<br />
of momentum and oscillators, I refer you to Perry Kaufman's book, Commodity Trading<br />
Systems and Methods.<br />
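A momentum oscillator of the kind Wilder describes is, at bottom, a lagged price difference. A minimal sketch (the 10-bar span is an arbitrary illustration, not a parameter from the text):<br />

```python
def momentum(prices, n=10):
    """n-period momentum: today's price minus the price n bars ago.
    Its slope tracks the velocity of the move, and its distance from
    zero tracks the magnitude, per the Wilder passage quoted above."""
    return [today - past for today, past in zip(prices[n:], prices)]
```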
Let us now turn to the practical application of Lane's Stochastics (%K and %D). We are using<br />
U. S. Treasury Bond Futures for illustration. Using Elliott Wave analysis, we had exhausted the<br />
downside in 1981, when we reached the 55-00 level. The period of 1981 to 1984 can be ana-<br />
lyzed as a double bottom formation. (See Chart A.) Our analysis begins in 1983 (see Chart<br />
B). We are short and, as we follow the downward pattern of T-bonds, we are aware that our<br />
short side prosperity must, someday, come to an end. But when?<br />
In late May, we noticed that long-term interest rates were making a pointed top and, at the same<br />
time, T-bond futures had accelerated their decline, pushing their way through the downside<br />
of their previous channel. (See Chart C.) We drew a parallel channel of the same width below<br />
it, and on Thursday, May 28, 1984, T-bonds just touched the bottom of that channel and rebounded.<br />
There is a loose five-week cycle in T-bonds, so we bracketed a period of six weeks<br />
(allowing for ten percent deviation on either side, as taught by Walt Bressert) in advance.<br />
T-bonds returned inside their original channel, rallied up to the top of it, and turned down.<br />
History has taught us that, if this is truly to be the bottom in the futures (the top in interest rates),<br />
the channel should contain the downmove. We, therefore, now had a window of price and time<br />
(a technique taught by Jake Bernstein): price: 58-16 to 59-16; time: the week of June 29 to July<br />
5, 1984. Five weeks after their first top, long-term interest rates, which had declined, rallied and<br />
made an attempt at a new high. This attempt failed, and by July 3 to 5, 1984, we could spec-<br />
ulate that interest rates had topped out--and T-bond futures had made bottom! (The municipal<br />
bonds topped a week earlier than the corporate and treasury instruments. It just goes to show<br />
you: the bond dealers are smarter than the government and corporations!)<br />
Question: Did we have a major bottom? Could we cover our shorts and reverse our position<br />
in the face of so much adverse, contrary professional and public opinion?<br />
We now turned to our computers (not that we hadn't been haunting them, checking the printouts<br />
in the wee hours, in the weeks previous!). (See Chart D.) What did we find?<br />
A. Volume showed a large increase at the first bottom - a selling climax. But volume dried up<br />
at the second bottom - classic volume action at a double bottom: bullish!<br />
B. Open interest had begun growing in April and continued to do so right through the double<br />
bottom: bullish!<br />
C. Lane's Stochastics gave us preliminary buying signals in May (1) and in June (2), when price<br />
made lower lows but %D made higher lows, a divergence caused by deviation from the former<br />
rate of descent. The final buying signal in July (3), completing the 1-2-3 buying signal pattern,<br />
showed enormous upward strength, barely managing to touch the oversold band: bullish! As<br />
far as we were concerned, the bottom was in!<br />
D. Lane's Serial Differencing, a tool we use to complement and confirm Lane's Stochastics,<br />
gave us the same 1-2-3 buying signal in May - June - July: bullish confirmation that the second<br />
leg of the double bottom had been completed.<br />
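The bullish divergence described in item C -- price making lower lows while %D makes higher lows -- can be expressed as a simple check over already-identified swing lows (a hypothetical helper; how the swing lows themselves are picked is left to the analyst):<br />

```python
def bullish_divergence(price_lows, osc_lows):
    """True when successive price swing lows fall while the matching
    oscillator lows rise -- the pattern behind the 1-2-3 buy signal."""
    falling = all(b < a for a, b in zip(price_lows, price_lows[1:]))
    rising = all(b > a for a, b in zip(osc_lows, osc_lows[1:]))
    return falling and rising
```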
As we went through our printouts, we found that six out of seven of our other indicators con-<br />
firmed our analysis. This was the “buy week!” So, we did!<br />
To profit in trading futures, be it gold, T-bonds, or the stock indexes, we have a simple, but ef-<br />
fective approach. We use conventional charting techniques, augmented by Elliott Wave and<br />
cycles to determine a window of time and price. Within that window, we use our own Lane’s<br />
Stochastics to determine when the major change in trend occurs. By your bank balance, you<br />
will also swear it works!<br />
[Chart A: T-Bonds, CBT, Chicago]<br />
[Chart B]<br />
[Chart C: T-Bond futures, February through August 1984]<br />
[Chart D: Stochastics and Serial Differencing]<br />
BIOGRAPHY<br />
George C. Lane's educational background is in Political-Military Science, Medicine, Finance,<br />
Security Analysis, and Investment Management. He attended Drake University, Washington<br />
and Lee University, Northwestern University, The Academy, The Citadel, William & Mary, The<br />
New School, Baruch School of Finance, E. F. Hutton, Chicago Board of Trade Institute, and<br />
Chicago Mercantile Exchange School.<br />
The majority of his working life since 1957 has been spent teaching bankers, investors, agri-<br />
cultural producers, brokers and market analysts alike the mechanics of Hedging and the Sci-<br />
ence - and Art - of Technical Analysis. As President of Investment Educators, the oldest technical<br />
commodities school in the United States, George writes a weekly market letter with a daily Hot-<br />
line update and teaches commodities seminars on a regular basis. He is currently working on<br />
a book detailing Lane's Stochastics and its variations; publication is expected in Fall, 1985.<br />
A VIEW FROM THE FLOOR<br />
William R. Johnston<br />
The MTA's theme for this 10th anniversary seminar, "Looking Back - Looking Ahead," is certainly<br />
appropriate as we celebrate, or mourn (depending on one's perspective), the tenth anniversary<br />
of May Day. This forum permits me to reflect on the past and the future from my vantage point<br />
as a specialist, a role, I might add, which many thought would be non-existent long before May<br />
Day's tenth birthday.<br />
In the relatively short ten years since the unfixing of commission rates, our business has lit-<br />
erally been transformed into an industry whose participants are as diverse as the products it<br />
now offers.<br />
May Day marked much more than the departure from the 183-year tradition of fixing rates by<br />
the New York Stock Exchange (NYSE). It meant the end of the business as we knew it. It meant<br />
future performance would be under a microscope. May, 1975, was the end of an era and the<br />
birth of an industry: the financial services industry.<br />
The deregulation which has occurred over the last decade has shaped an industry which is<br />
unlike the one it replaced. Today’s financial services business is one driven by customer ser-<br />
vice, product diversity, and competitive edge gained through innovation and technology.<br />
In my end of the business, the pace of change has been equally quick. Whereas ten years ago<br />
there were almost one hundred specialist units on the NYSE, today there are fewer than sixty. At<br />
the same time, capital in today's specialist community has grown almost one hundred percent.<br />
In a broad context, as I look ahead, even greater change is in store for the specialist business.<br />
Recent rule changes at the NYSE will permit hedging by specialists in their registered issues,<br />
as well as open the way for diversified firms’ entry into the specialist business. As these de-<br />
velopments, and others on the near horizon, begin to impact the way our industry operates,<br />
the future of the specialist business will be determined by capital, talent, and competitive tech-<br />
nology. But unlike the last ten years, these ingredients for success will be in greater demand<br />
than ever before. Upstairs risk positioning to accommodate a progressively more volatile and<br />
short-term oriented trading scene will mean greater levels of risk to brokers and specialists,<br />
thereby creating new channels to spread those risks. And use of derivative products by our<br />
customers will place ever increasing demands upon the dealer community to quickly and ef-<br />
fectively respond to major shifts by institutional investors.<br />
I would like to focus a bit this morning on three major elements of change affecting the specialist<br />
business: technology, capital, and customer service.<br />
Today, the trading floor at the NYSE looks nothing like it did a few short years ago. If you were<br />
to visit the Floor, and I hope each of you will consider this an open invitation to do so, you will<br />
find fully electronic trading posts. Those old stanchions of oak and mahogany which served<br />
the Exchange so well between 1929 and 1980 have all vanished. In fact, most were preserved<br />
and now grace the halls of prominent museums and universities around the United States. In<br />
their place, we have built the most efficient electronic trading arena the securities world has<br />
ever seen.<br />
Beginning with our automated order routing network, known as SuperDot 250, a customer can<br />
walk into a branch office of a member firm and place a market order for 1,000 shares of any<br />
listed stock. That order will route electronically from branch office to point of sale on the Trading<br />
Floor, be exposed to the auction market, executed and reported to the office of entry in less<br />
than 80 seconds. And SuperDot's capability continues to grow. We are currently providing automatic<br />
executions (up to 1,000 shares) in several hundred stocks with 1/8-point markets. Most<br />
importantly, the entering firm's own floor personnel never touch these orders. The significance:<br />
systemized orders, now defined as 1,099 shares or less at the market/30,099 shares or less<br />
with limit prices, may be electronically routed, efficiently handled, given exposure to auction<br />
market principles, executed and reported to originating office in the time it once took an<br />
average order clerk to type it for transmission. The retail customer is efficiently served, and the<br />
broker-dealer’s own floor staff is free to concentrate on the high end (block) business, where<br />
professional agent representation is critical.<br />
The routing network (SuperDot 250) was only the beginning of technological enhancement to<br />
our market. We followed SuperDot 250 with an opening assist program known as OARS, which<br />
stands for Opening Automated Reporting System. This system allows firms to electronically<br />
enter orders prior to the opening each day in a master electronic file. The specialist queries the<br />
file shortly before the opening to determine the supply/demand picture in a stock. Using conventional<br />
methods for opening a stock, he then enters the opening price in OARS, which automatically<br />
triggers instant reports to all orders in the file. In the past, large openings could cause<br />
significant delays in reports to customers. Today, delays caused by an influx of orders at the<br />
opening are virtually nonexistent. Most importantly, all trades entered in the OARS file are clocked<br />
and guaranteed clean; that is, no “don’t knows” or “questioned trades,” which cause administrative<br />
headaches and significant expense.<br />
The technology express moved to high gear in recent months with the elimination of the paper<br />
books for specialists at eleven locations on the Floor. In their place are electronic limit order<br />
files that accept, store, monitor, display and report electronically delivered limit orders (up to<br />
30,099 shares). As an integral part of SuperDot 250, once these limit orders are executed, a<br />
single input automatically triggers execution reports on such orders. This system will continue<br />
to expand floor-wide.<br />
A totally paperless “touch-trade” system using personal computers with touch screens is the<br />
way of our future. Touch-trade will perform all reporting, trade and quote dissemination tasks<br />
for market and limit orders that had traditionally been handled manually. Thus, a single touch<br />
executes a trade, reports it to the tape, sends reports to the entering firm, and enters the trans-<br />
action into the comparison system. There are six such systems in operation on the Floor today.<br />
Post-trade reconciliation also bears mention. Each trade executed through SuperDot 250 is<br />
automatically submitted to the comparison cycle on a locked-in basis. This guarantees that all<br />
systematized orders are processed error-free with a complete audit trail. This process will ul-<br />
timately lead to a much streamlined post-trade process, reducing the current five-day cycle to<br />
overnight processing.<br />
We are also developing voice-recognition technology. The potential applications are enor-<br />
mous. Suffice it to say that in the future all trade data, execution reports, tape prints, post-trade<br />
reconciliation, audit trail, etc. will be captured at the point of sale by capturing the brokers’ spo-<br />
ken words.<br />
Let me refocus briefly on specialist capital. Considering the prospect of diversified firms en-<br />
tering the specialist business and the competitive factors which that implies, financial capital<br />
commitment in our business will inevitably grow. Human capital will also increase dramatically.<br />
The sure judgment and expertise required to efficiently utilize dealers’ dollar commitments, in<br />
an increasingly volatile marketplace, continues to force a change in the specialist community.<br />
One measure of the ongoing competition for expert market-makers is the fact that the NYSE<br />
specialist community is a much younger, more aggressive group than you would have found<br />
only a few years ago. And that trend continues.<br />
The third leg of our future is a clearly focused commitment to customer service. NYSE specialists<br />
will soon have a new job description built on that principle. Periodic interface with both<br />
our listed corporate community, and with our direct customers (the broker/dealers) will be re-<br />
quired. As the wizards of merger and acquisition continue to chip away at our list, the specialist<br />
community can barely stay even as new equity listings are brought on board. At the NYSE,<br />
allocation of new listings utilizes customer evaluation of each specialist’s performance as the<br />
key criterion. Thus, exceptional specialist performance in meeting customers’ needs is the only<br />
sure way to enlarge our business.<br />
Let me sum up by saying that unrelenting focus on our customers' needs, and fulfilling those<br />
needs in a rapidly changing environment, is the cornerstone of our future. One element that<br />
you can be certain will not change is the Exchange’s commitment to providing a marketplace<br />
where all public investors are afforded an equal opportunity to compete and interact, regard-<br />
less of the size of their orders. From the largest institution to the 100 share purchaser, the Ex-<br />
change has been and will always be a trading market which serves to ensure a high level of<br />
participation in equity investing for the broadest possible customer base.<br />
I thank you for the opportunity to be here today and would welcome any questions you may<br />
have.<br />
BIOGRAPHY<br />
Mr. Johnston is presently serving as Chairman of the Board & Chief Executive Officer of Agora<br />
Securities. Prior to August, 1980, he was Senior Vice President and Director of Mitchum, Jones<br />
& Templeton, Inc. Other business activities include: NYSE floor official (second term), NYSE<br />
Competitive Review Committee and NYSE Specialist Evaluation Committee, Board Member<br />
and Treasurer of Specialist Critical Issues Organization, Chairman of Education Committee of<br />
SCIO (Reverse FACTS) and Director of North American Bank Corporation.<br />
Mr. Johnston graduated from Washington & Lee University with attainments in commerce in<br />
1961.<br />
PREFACE<br />
A THREE YEAR FOLLOW-UP ON “THE ENIGMATIC STOCK OPTION”<br />
A CONSTANT CHALLENGE<br />
David Holt<br />
The theme of the 1982 Market Technicians Association (MTA) Annual Meeting in Princeton,<br />
New Jersey, was “Challenges for the 80’s.” As the author pointed out in his presentation at that<br />
meeting, the theme was especially apropos for exchange listed options. Because of their unique<br />
qualities, options resisted and, in some cases, totally repelled conventional rules of technical<br />
analysis.<br />
During the three intervening years the listed options market has undergone a tremendous met-<br />
amorphosis; and yet, the more it changes, the more it stays the same, at least as far as tech-<br />
nical analysis is concerned.<br />
Our objective in presenting this paper hopefully meets the appropriate requirements of Article<br />
II of the MTA Constitution, to wit: "B. Educate the public and the investment community (includes<br />
MTA members) to the uses and limitations of technically oriented research and its value<br />
in the formulation of investment decisions. C. Foster the interchange of material, ideas, and<br />
information for the purpose of adding to the knowledge of the membership."<br />
To reach these objectives, we will first review the idiosyncrasies of options that create the in-<br />
compatibilities with conventional technical analytical techniques, both as they existed three years<br />
ago and as they are now. We then will present some ideas and information that, hopefully, will<br />
add to the knowledge of the members who review this presentation.<br />
Perhaps the best way to start this discussion is to touch on several of the unique features of<br />
stock options that severely restrict conventional technical analysis techniques.<br />
VOLUME<br />
Unlike equity securities, the volume of stock option contracts is all but useless as raw data for<br />
the application of conventional technical analysis. More correctly stated, it’s the unique aspect<br />
of option volume as well as the method used in reporting option volume that stymies the tech-<br />
nician.<br />
One of the primary objectives of analyzing volume is to determine the amplitude and bias of<br />
any imbalance between demand and supply. With equities, this is a rather straightforward analysis<br />
and has been quite useful for a number of years. Options, however, are another story because<br />
of the unique situation where a transaction can be either supply, demand, or both. When an<br />
"opening" trade occurs between two parties who are both entering, you have demand; when a<br />
"closing" trade occurs between two parties who are both exiting, you have supply. However, when<br />
one side of a transaction is opening and the other closing, you have both supply and demand,<br />
which tends to neutralize their pressures. To compound the problem, the various option exchanges<br />
(through their control of the Options Clearing Corporation (OCC)) continue in their refusal<br />
to release opening and closing volume on a timely basis. To their credit, they did throw<br />
a crumb to the technicians (who had been grinding them for this type of data for years) in 1984<br />
when they started releasing opening and closing statistics for customer orders. However, the<br />
data is so delayed in its availability that its usefulness has been reduced to a minimal level.<br />
The OCC still contends that firm and market maker orders, which consistently run over sixty<br />
percent of the total, cannot be marked "opening" or "closing" for competitive reasons. We readily<br />
admit that a large neon sign flashing "opening" or "closing" on every ticket a market maker<br />
activates would unduly restrict his (or her) floor activities. However, in this high-technology age<br />
of the 1980s, there are undoubtedly multiple ways in which orders could be identified as opening<br />
or closing without contributing to the real and implied threats to the security of "privileged<br />
information" of those in front of or behind the various posts on the trading floors.<br />
If properly motivated, we feel the OCC could easily release opening and closing volume for all<br />
orders each day, along with the other statistics created by their overnight clearing activities.<br />
Until that happens, volume statistics will continue to perplex and frustrate technicians attempt-<br />
ing to use them in conventional ways.<br />
Whether option volume is used in its more traditional role as a confirmation of price movement<br />
or, in what has become quite a fad with the introduction of broad-based cash-settlement op-<br />
tions, as a foundation for put/call ratios, option volume can be a useful tool for technicians who<br />
can break out the supply and demand portions (i.e., brokerage firms who can tabulate both<br />
their own and their customers’ opening and closing volume) on a timely basis. Other than that<br />
relatively small application, we submit that technical indicators using option volume are, at the<br />
best, marginally efficient and, at the worst, dangerous.<br />
BREADTH<br />
Because options have a set life span, which in the spectrum of investments is relatively short-term,<br />
they naturally have a built-in downside bias as their time value evaporates. Thus, if you are<br />
attempting to work with advance/decline data in the conventional sense, you must first elimi-<br />
nate the downside bias. But that is easier to say than do. We have expended a lot of man and<br />
computer hours analyzing various option series in an effort to find a consistent pattern of ero-<br />
sion. When we first started, we felt it would be a task easily and quickly disposed of, as the<br />
severe downside bias should be in the weeks immediately prior to expiration. However, in the<br />
real world, where expanded position limits, new products, and a sharp increase in the sophistication<br />
level of the players have exploded the number and size of hedging and arbitrage programs,<br />
a predictable erosion curve is as elusive as a feather in a windstorm.<br />
We started with the hypothesis that a set of option series would adopt a pattern of eroding time<br />
value dictated by the characteristics of the underlying stock and general-market psychological<br />
pressures. Perhaps it was merely a case of being naive, but we felt, as long as the logic was<br />
there, the reality of it would follow. What we failed to anticipate was a change in the basic forces<br />
brought about by proliferation and unique external pressures, such as straight and reverse<br />
conversions, illiquidity, and huge arbitrage programs that employ options. In our early work we<br />
did not allow for the almost unbelievably high level of sophistication that would be achieved by<br />
the market professionals on the floor(s) that would in turn create an unprecedented level of<br />
efficiency. Fortunately, our learning curve allowed us to adjust to the highly efficient market that<br />
evolved, even though, in the process, we had to scrap most of our initial programs.<br />
The erosion of time value, which produces the downside bias in breadth indicators, is still fairly<br />
consistent for calls when monitored as a group (i.e., expiration series, exchange, in/out-of-the-money,<br />
etc.). However, puts are relatively erratic even when smoothed by the use of large<br />
universes (see Charts A-1 and A-2).<br />
After extensive computerized cross-screens of the corresponding aspects of option series, we<br />
have come to the conclusion that the relatively erratic erosion curve of puts is a direct result<br />
of the liquidity quotient. This hypothesis is supported by the application of simple logic.<br />
The one thing puts and calls do have in common in this area is the almost total unpredict-<br />
ability of individual contracts, even on the same stock and same cycles. On the surface, it would<br />
appear that the erosion patterns are “controlled” to a large degree by the dictates of the market<br />
professionals based on what part that particular contract plays in their overall position/hedge<br />
strategy.<br />
Even though none of the foregoing, either separately or collectively, represents an insurmountable<br />
hurdle to achieving penetrating technical analysis of the options market, they do present<br />
challenges worth pursuing with more sophisticated analytical processes.<br />
One of the unique features of listed stock options that may well end up being the foundation<br />
of a truly historic breakthrough in technical analysis is . . .<br />
REMAINING TIME VALUE<br />
In the interest of brevity, we will summarize our conclusions on time value by saying it is the<br />
sum and total of all supply and demand pressures that are at work on the price structure of an<br />
option contract at any particular point in time. It is the bottom line of an option's financial statement,<br />
revealing which pressure is in excess and to what degree.<br />
If you are a writer, you want all the time value you can get, because it represents your potential<br />
gain. If you are a buyer, you don’t want to pay anything over intrinsic value if you don’t have<br />
to. As a matter of fact, you would like to be able to buy the contract you want at a discount if<br />
you could, and quite often can, if it is far enough in-the-money. NOTE: In-the-money options<br />
do not necessarily go "point-for-point" with their underlying stocks, as the attached tabulations<br />
for April 11, 1985, show. (See Tables I and II.)<br />
As a consequence of this, we use time value as one of the primary screening devices for the<br />
selection of options both for writing (primary) and purchase (secondary).<br />
It seems logical, therefore, that time value would be an excellent base for constructing timing<br />
indexes for the overall options market.<br />
The application of logic tells you time value for calls should increase during a market uptrend<br />
as enthusiastic buyers in their exuberance bid up prices. Conversely, the time value for calls<br />
should decrease as a market correction unfolds and buyers become more and more reluctant<br />
to be on the long side. The opposite to the above should be the pattern of time value for puts.<br />
Apparently, this logic is faulty, because that is not how time value equates to overall price struc-<br />
tures, at least not consistently enough to be of value. In a very loose interpretation, the time<br />
value for puts does move in the opposite direction from their underlying stocks, but time value for<br />
calls does not correlate with its stocks even loosely. (See Chart B.)<br />
This lack of correlation is undoubtedly a direct result of the high degree of efficiency achieved<br />
by the proliferation of sophisticated hedging and arbitrage programs during recent years.<br />
Getting back to logic for a minute, it would seem that comparing time value of puts to calls<br />
(on a percentage basis so you have a common denominator) would produce a very mean-<br />
ingful ratio. When the ratio gets high, the price structure of the stocks should be overextended,<br />
and thus, a top could be expected to form. When the value is low, prices should be in the pro-<br />
cess of bottoming as the corrective process comes to a conclusion.<br />
Based on the above, we programmed our computer to calculate this data so we could con-<br />
struct a put/call ratio of percentage time value. We only used stocks that had both puts and<br />
calls (starting in the summer of 1977 with twenty-five) so there would not be any distortion in<br />
the ratio with only one side of the equation (i.e., calls only). Our thoughts were that it would<br />
be a superior indicator to one using volume (we couldn’t factor in opening and closing volume),<br />
prices (distorted by the imbalance of in or out-of-the-money contracts) or other criteria.<br />
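The ratio just described might be computed as follows (a sketch under stated assumptions: the text does not specify the percentage's denominator, so we assume the underlying stock's price, and the equal-weighted averaging across the universe is likewise our assumption):<br />

```python
def pct_time_value(premium, intrinsic, stock_price):
    """Time value as a percentage of the underlying's price, giving a
    common denominator across stocks of different price levels."""
    return 100.0 * (premium - intrinsic) / stock_price

def put_call_time_value_ratio(put_pcts, call_pcts):
    """Average percentage time value of the puts divided by that of the
    calls, over a universe of stocks carrying both puts and calls."""
    return (sum(put_pcts) / len(put_pcts)) / (sum(call_pcts) / len(call_pcts))
```

A high reading would mark an overextended price structure; a low reading, a bottoming process.<br />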
The resultant put/call ratio is depicted on Chart C. We have indicated most of the interme-<br />
diate-term tops and bottoms in price with the tie-lines. Up until the early 1980s, the put/call<br />
ratio, at the best, could be labeled interesting, provocative, or enigmatic. It most certainly was<br />
not the historic breakthrough we were looking for, even though we felt quite strongly time value<br />
reflects all internal and external pressures being brought to bear on prices.<br />
However, during recent years one characteristic has become extremely reliable in confirming<br />
a major market advance:<br />
When the put/call ratio is relatively low and experiences a sharp and substantial<br />
increase, a major advance in the overall market is virtually assured. (See Chart C.)<br />
In all candor, we must admit the logic of why this occurs escapes us. Our logic tells us it should<br />
be just the opposite; time values for calls should expand rapidly as a major uptrend is launched,<br />
not puts. However, here again the cause of this effect is undoubtedly the result of massive<br />
hedging and arbitrage programs which utilize the purchase of puts. This excess demand, which<br />
is anticipatory, produces a sharp expansion in the time value for puts while the expansion of<br />
demand for calls is reactionary pressure produced by the normal lag-time sequence of trend-<br />
following decision-making processes.<br />
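The confirmation rule can be restated mechanically. The sketch below is one plausible reading, with invented thresholds (bottom quartile of the history counts as "relatively low," a one-week rise of more than 0.5 counts as "sharp and substantial"); the article does not specify numeric parameters.

```python
# One plausible mechanical reading of the rule (thresholds are invented):
# flag weeks where the ratio sat in the bottom quartile of its history
# and then jumped by more than `jump` in a single step.

def expansion_signals(ratio_series, low_pct=0.25, jump=0.5):
    """Return the indices at which a low ratio expands sharply."""
    signals = []
    for i in range(1, len(ratio_series)):
        history = sorted(ratio_series[:i])
        low_cut = history[int(low_pct * (len(history) - 1))]
        sharp_rise = ratio_series[i] - ratio_series[i - 1] > jump
        if ratio_series[i - 1] <= low_cut and sharp_rise:
            signals.append(i)
    return signals

# Invented weekly ratio values; the jump from 0.5 to 1.2 is flagged:
weekly_ratio = [0.9, 0.7, 0.6, 0.5, 0.5, 1.2, 1.1, 0.8, 0.7, 1.0]
print(expansion_signals(weekly_ratio))  # -> [5]
```

Each flagged index marks a week where put time value expanded from a depressed base, the condition the text associates with a major advance.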
Thus, by utilizing one of the unique features of options, you can develop a relatively consistent<br />
timing device for the price structure of the underlying stocks. This tool can, therefore, be added<br />
to the technician’s arsenal of conventional market timing tools to arrive at an even stronger<br />
conviction as far as impending market behavior is concerned.<br />
The next logical search takes you in quest of a method to effectively use this unique charac-<br />
teristic of options in an efficient screening process.<br />
AVERAGE (AVR) PUT AND CALL PERCENTAGE TIME VALUE FOR INDIVIDUAL STOCKS<br />
A SCREENING TECHNIQUE<br />
Let’s start with the assumption that you have recognized and accepted the fact that time value<br />
is a valuable piece of information you can use to increase your performance, regardless of the<br />
option strategies you employ. Now, how do you obtain and put this information to use?<br />
Our experience has taught us that the most productive sequence in screening any type of data<br />
is to start with the stock and end up with the option. Once you accept the validity of time value,<br />
it becomes easy to understand why some stocks consistently command relatively high and<br />
others relatively low time values.<br />
A high-velocity, highly volatile stock that has a lot of sex appeal to investors is, naturally, going<br />
to have large time values for both its puts and calls. Stocks in the area of technology are<br />
obvious examples, as well as “swinger” stocks that are the favorites of professional traders<br />
because they can get a lot of action out of them--in both directions.<br />
On the other side of the equation, you have low beta, slow moving, pachyderm-type stocks<br />
that consistently have relatively low time values. This is primarily due to low demand from<br />
traders who cannot make any money on stocks that are “asleep” and from investors who, by<br />
the very nature of the stocks, do not feel compelled to use their options to hedge their stock<br />
positions. Utilities are the most common among these types of stocks.<br />
Because of the basic desire of option buyers to be “cheap” and option writers to be “greedy,”<br />
you would, naturally, expect writers to be attracted to the former and buyers to the latter, which<br />
they are. But here is where actual strategies must be “fitted” with the correct merchandise<br />
(options). As an example, income writers, as a general rule, require (desire) very stable stocks,<br />
and, thus, they would not be interested in the normally high beta, high time value stocks whereas<br />
the speculative and aggressive writers would feel right at home with these “swingers,” as<br />
would investors who write options as hedging techniques.<br />
By compiling a relative strength (rank) of the percentage time value of the almost four hundred<br />
underlying stocks, you have a logical screening technique for both buyers and writers of op-<br />
tions. (See Tables III and IV).<br />
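As a sketch, the relative-strength rank reduces to a sort over the per-stock averages. The data layout and tickers below are invented for illustration; the journal's tables list forty stocks per category out of a universe of roughly 374.

```python
# A hypothetical reconstruction of the ranking screen: sort the universe
# of underlying stocks by average % time value and take the extremes.
# Tickers and values are invented.

def rank_by_avg_tv(avg_tv_by_stock, top_n=40):
    """Return (highest, lowest) lists of (ticker, avg % time value),
    each at most top_n long, highest-first and lowest-first respectively."""
    ordered = sorted(avg_tv_by_stock.items(), key=lambda kv: kv[1], reverse=True)
    return ordered[:top_n], ordered[-top_n:][::-1]

sample = {"TECHCO": 6.4, "UTILCO": 0.3, "SWINGR": 5.1, "BANKCO": 0.8}
highest, lowest = rank_by_avg_tv(sample, top_n=2)
# highest -> [("TECHCO", 6.4), ("SWINGR", 5.1)]
# lowest  -> [("UTILCO", 0.3), ("BANKCO", 0.8)]
```

Running the sort once on call averages and once on put averages yields the four forty-stock listings the text describes.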
First, a general explanation of the printouts. There are four (4) different listings of forty stocks<br />
which are appropriately labeled. The various columns are self-explanatory, except you should<br />
be aware that the last column (NO) refers to the number of put or call options for that stock<br />
depending on the heading of that list.<br />
As you glance through these four lists you can see the pattern of stocks we described earlier<br />
(i.e., highest time value = electronic and computer stocks; lowest time value = utilities<br />
and banks). There will, naturally, be some that don’t fit the mold, but that’s because they are<br />
there for “special situation” reasons or are, in reality, different than they are generally perceived<br />
to be so far as velocity and volatility are concerned. In general, however, you will be able to<br />
accept the placement of most of the stocks in each category.<br />
There are several cross-screens of these lists that are very fruitful exercises. First, you have<br />
the stocks that appear in the highest average percentage time value lists for both puts<br />
and calls. These are the high velocity, highly volatile stocks whose options were consistently<br />
in demand, at least for the week tabulated, enough to produce extremely high time values.<br />
These time values reflect the excess demand better than do most other numbers you could<br />
come up with. Fifteen of the forty stocks were in both lists in the previous week. As you would<br />
expect, the names pretty well fit the mold and contained some exceptionally large betas, which<br />
confirmed their high degree of volatility.<br />
The next cross-screen was, logically, the stocks in the lowest AVR percentage time<br />
value lists for both puts and calls. The eight names on Table IV contain a low-beta utility as well<br />
as stocks like Western Company of North America, Lehman Corporation, and Tri-Continental.<br />
The options on these stocks are, currently at least, out of favor for both buyers and writers.<br />
The third list, and quite frankly the one that most piqued our imagination, was the stocks that<br />
were in the highest AVR, percentage time value for calls, and lowest AVR percentage<br />
time value for puts.<br />
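Each cross-screen described above is just an intersection of two of the four ranked lists, which a few set operations capture. The tickers here are invented placeholders, not the stocks actually tabulated.

```python
# The cross-screens as set intersections over the four forty-stock lists.
# All tickers are invented placeholders.

high_calls = {"HITECH", "SWINGR", "CHIPCO", "BIOTEK"}   # highest AVR % TV, calls
high_puts  = {"HITECH", "SWINGR", "DRILLR"}             # highest AVR % TV, puts
low_calls  = {"UTILCO", "BANKCO", "FUNDCO"}             # lowest AVR % TV, calls
low_puts   = {"UTILCO", "FUNDCO", "SLEEPR"}             # lowest AVR % TV, puts

both_high         = high_calls & high_puts  # high-velocity "swingers"
both_low          = low_calls & low_puts    # options out of favor on both sides
high_call_low_put = high_calls & low_puts   # the asymmetric third screen
```

In this invented sample the third screen comes up empty, which is consistent with its rarity; when it does fire, the text treats it as the most intriguing case.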
If you locate a stock whose puts and calls consistently have one of the highest AVR per-<br />
centage time value out of all three hundred seventy-four underlying stocks, you know you<br />
have a high velocity, highly volatile stock. Because the options on this stock, both puts and<br />
calls, are under heavy demand, you know you are going to be required to pay a hefty premium<br />
(i.e., time value) if you want to buy them. If you are a writer, you know you are going to receive<br />
a hefty bonus for being on the supply side of a transaction. How you can use this information<br />
to your advantage is relatively straightforward, especially for those utilizing writing strategies.<br />
Now, let’s take a look at a totally different breed of cat. Let’s say you isolate a stock whose calls<br />
have one of the largest AVR percentage time values of all underlying stocks while its puts have one of the smallest. What does this<br />
tell you, and how can you use this information to increase the performance of your option strat-<br />
egies? The first logical conclusion is that either the calls are overvalued, the puts are under-<br />
valued, or both. It is an obvious case of unbalanced supply and demand pressures on the options<br />
caused by any number of possible reasons.<br />
Your strategy is, therefore, relatively apparent as you would want to go short the calls and go<br />
long the puts. This strategy, of course, takes into consideration the point we made earlier that<br />
all other considerations must be equal, or at least neutralized. In other words, you should not<br />
necessarily act on the AVR percentage time values and disregard all the other factors that<br />
could affect your results (i.e., overall market conditions, underlying stock’s technical condition,<br />
status of individual option, etc.). However, the point we want to make is that such a strategy<br />
could be viable enough to employ independent of, but in conjunction with, your normal strat-<br />
egies.<br />
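Putting the two preceding points together, the asymmetric screen can be sketched as a simple filter over the average percentage time values. The thresholds and tickers below are invented assumptions; as the text stresses, a hit on this screen is only a candidate for shorting the rich calls and buying the cheap puts, pending the other technical checks.

```python
# Hypothetical screen for the short-call/long-put setup: calls carrying
# rich average % time value while the same stock's puts carry thin ones.
# The cutoff levels are illustrative assumptions, not the authors' values.

def short_call_long_put_screen(avr_call, avr_put, call_floor=4.0, put_ceiling=1.0):
    """Return tickers whose average call % time value is at or above
    call_floor while the average put % time value is at or below put_ceiling."""
    return sorted(t for t in avr_call
                  if avr_call[t] >= call_floor
                  and avr_put.get(t, put_ceiling + 1) <= put_ceiling)

# Invented weekly averages for three stocks:
avr_call = {"SWINGR": 5.1, "DULLCO": 0.4, "MIXEDC": 4.6}
avr_put  = {"SWINGR": 3.0, "DULLCO": 0.3, "MIXEDC": 0.7}
print(short_call_long_put_screen(avr_call, avr_put))  # -> ['MIXEDC']
```

Stocks lacking listed puts default to exclusion, mirroring the article's restriction to issues with both puts and calls.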
In conclusion, we must confess we have not experienced a great deal of success in applying proven<br />
technical analysis techniques, such as first and second derivatives of price, to stock options<br />
and are not in the least deterred in our efforts. We are, however, intrigued by the challenges<br />
presented by the items mentioned in this article and will continue to pursue them, even if<br />
the successful conclusion is reached by someone other than ourselves. Indeed, we would be<br />
extremely thrilled to learn these challenges have already been overcome, if the conquerors<br />
are willing to share the results.<br />
[Chart A-1 and Chart A-2: percentage time value by expiration series, plotting calls versus puts for the JAN, FEB, and MARCH 1982 series and for the JAN 1985 series.]<br />
[Chart B]<br />
[Chart C]<br />
[Table I: Lowest % time value (calls), listed by expiration series (APR through DEC), with stock price, option price, percent in/out-of-the-money, and percent time value for each contract.]<br />
[Table II: Lowest % time value (puts), listed by expiration series (APR through DEC), with the same columns as Table I.]<br />
[Table III: Average percent time values for option underlying stocks (run 11-Apr-85), showing the forty stocks with the highest AVR % time value ranked according to calls and, separately, ranked according to puts.]<br />
[Table IV: The forty stocks with the lowest AVR % time value ranked according to calls and according to puts, together with the dual listings: highest calls/highest puts, lowest calls/lowest puts, highest calls/lowest puts, and highest puts/lowest calls.]<br />
BIOGRAPHY<br />
After completing his formal education at UCLA, David Holt joined a Certified Public Accounting<br />
firm in Southern California, where he specialized in Municipal Auditing. He joined a NYSE member<br />
firm in 1961 as a registered representative. After several years, he went into private business,<br />
where he continued to gain experience as an investor. In January, 1972, he joined Trade Levels<br />
as Director of Advanced Planning. He is now President of T L Communications, Inc. and Editor<br />
of the nationally known Trade Levels Report and the Trade Levels Option Report.<br />
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.<br />
A VIEW FROM THE FLOOR<br />
Ralph Fogel<br />
The primary qualification for being an effective trader is experience. Experience is what the<br />
trader uses to define the three most important factors that underscore the decision-making<br />
process. Those factors are risk and reward, competitive edges, and the environment.<br />
Risks and rewards vary, depending on the trader’s area of responsibility. For example, a spe-<br />
cialist generally tries to keep his inventory small so that he can take advantage of the times<br />
when there is extreme buying or selling going on in his stocks. The specialists gauge their<br />
risks and rewards on the movement of stocks, balancing longs against shorts, opting for some-<br />
what smaller profits in light of assuredly smaller losses.<br />
On the other hand, option traders aim to set up positions with as little risk as possible for a wider<br />
play. The options trader takes advantage of the different options within the stock that he/she<br />
is trading. Often he becomes a trader who bases his risk decisions on the values of the indi-<br />
vidual options. His decisions are also based on order flow and on the value of any given spread<br />
within the many options of a security and stock.<br />
Just as options traders draw on very different criteria than do specialists or off-the-floor trad-<br />
ers when considering risks and rewards, so do different criteria pave the way when the traders<br />
consider their competitive edges. Specialists have an edge in that they handle the same stocks<br />
daily; therefore, they gain a familiarity with their stocks’ trading ranges. They have a feel for<br />
any movement. They sell strength and buy market weakness. Specialists have access to<br />
the ticker tape. For those traders who are concerned with moment-to-moment transactions<br />
within the security, the ticker tape is the most important source of information, providing an<br />
edge over those who do not watch the tape on a moment-to-moment basis.<br />
Unlike specialists, options traders are not interested in the movement of stocks. They capitalize<br />
on the fact that they are buying and selling value. These traders try to buy spreads when they<br />
are undervalued and sell them when they are overvalued. The options trader’s competitive edge<br />
is that he sees order flow in the various strikes within the options which off-the-floor traders do not see.<br />
Prior to a recent rule, off-the-floor traders had a more difficult time than others in establishing a competitive edge. Since the advent of the Clearing Member Trade Agreement (CMTA), however, these traders can now pay to see order flow. They also have an edge in that they do not have to be on two sides of a market at all times and do not have to make specific, standard allotments of inventory on every transaction. In many ways, off-the-floor traders are not as limited by the rigid guidelines imposed on specialists and options traders. It could be argued, however, that what off-the-floor traders gain in less stringent guidelines, they lose in environmental "deprivation."
The floor of the exchange is an environment like no other. Being on the floor gives traders a feel for the market beyond the ticker tape and the order flow. Traders can almost feel a surge of orders: the tempo increases, the noise level rises, and the movement on the floor picks up. It is the wordless sound of excitement that informs traders of a turn in the market or of a rally.
Specialists and options traders see the ticker tape and actually see the individual sales taking place, not just the accumulation of them as seen on a bar graph. Stationed in that environment, these traders see more than an end-of-day chart showing the market's range and its highs and lows at the close. They have first-hand access to information, and the more information traders extract from the environment, the better able they are to make profitable decisions.
MTA Journal/May 1985 61
It is the experienced trader who learns to view various communication situations as "environments." Even off-the-floor traders realize that the "market chatter" on a bus or train ride to work, the "street noise" on the way to get coffee, and the news media's coverage of "rumors" are all valuable communication environments that may hold the key to where a stock is headed on any given day.
In conclusion, it is not any single factor (risk and reward, competitive edges, the environment) that influences a trader's decisions. It is all of these factors, processed simultaneously, that provide the experienced trader with the information needed to make the best possible decisions.
BIOGRAPHY
After graduating from Brooklyn College, Mr. Fogel was employed by Spear, Leeds, where he became a vice-president in 1977. In 1980, he became a general partner of Spear, Leeds and Kellogg, where his duties included all trading operations on the American Stock Exchange floor. In April 1984, Mr. Fogel was appointed a floor official of the American Stock Exchange. Ralph Fogel is currently a specialist for the XMI options and a senior partner on the American Stock Exchange floor.
CENTERFOLD:
THE SEMINAR INDICATOR
[Chart: the 1983 MTA seminar indicator, QSXC (BF) compared to QSXC (BF), produced 14-May-83 at 4:55]
Above is the result as produced on 14-May-83 at 4:55 PM eastern time. It represents the ratio of the five-week change in share liquidity to the five-week change in value of the Standard and Poor's 500 Index. The ratio of these changes is then arithmetically smoothed, again on a five-week basis. The center of the chart, for presentation purposes, is put at 50 units on the left-hand scale (indicator scale). The Standard and Poor's 500 weekly value, the O's, is overlaid using the right-hand scale.
Below is the chart of the indicator as of April 24, 1984, about a year later and just prior to the 1984 MTA Seminar.
At the top of the facing page is the chart of the indicator as of April 27, 1985. Below it is the daily (rather than weekly) variation of the seminar indicator chart.
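The arithmetic just described can be sketched in a few lines. The function below is my own illustration of the stated construction (five-week change ratio, smoothed by a five-week simple average), not Bridge Data's actual code, and it assumes the denominator changes are nonzero:

```python
# The seminar indicator as described above: the ratio of the five-week change
# in share liquidity to the five-week change in the S&P 500, smoothed again
# with a five-week arithmetic (simple) average.
def change(series, n=5):
    return [series[i] - series[i - n] for i in range(n, len(series))]

def smooth(series, n=5):
    return [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]

def seminar_indicator(liquidity, sp500, n=5):
    dl, dv = change(liquidity, n), change(sp500, n)
    ratio = [l / v for l, v in zip(dl, dv)]  # assumes nonzero index changes
    return smooth(ratio, n)
```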
[Chart: the seminar indicator with the Standard and Poor's 500 overlaid, produced 24-Apr-84 at 6:01]
[Chart: the seminar indicator as of 27-Apr-85 (weekly)]
[Chart: the daily variation of the seminar indicator, QSXC (BF) compared to QSXC (BF)]
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.
OPTIMIZATION
Software Review Workshop
Barbara B. Diamond

PRE-OPERATION DECISIONS
ALL SYSTEMS:
I. Natural Cycle Test
   A. Relative Strength Index 5, 7, and 9
   B. Fourier Analysis
II. Order of Analysis
   A. Optimize Individual Studies
   B. Combine Studies for Systems Analysis
III. Length of Data
   IBM - 510 Data Points - approximate equivalents:
   A. Intra Day - 80 days, 1 hour
   B. Daily - 2 years
   C. Weekly - 10 years
   D. Monthly - 42 years
   APPLE - 240 Data Points - approximate equivalents:
   A. Intra Day - 40 days, 1 hour
   B. Daily - 2 years
   C. Weekly - 4 years
   D. Monthly - 20 years
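The capacity equivalents above follow from simple division, assuming roughly 252 trading days per year (about 6.5 trading hours each), 52 weeks, and 12 months. A quick check of the cited figures:

```python
# Approximate calendar span covered by a fixed number of data points, under
# assumed bar counts per year (~252 trading days, ~6.5 trading hours/day).
BARS_PER_YEAR = {"hourly": 252 * 6.5, "daily": 252, "weekly": 52, "monthly": 12}

def coverage_years(points, bar):
    """Years of history that `points` bars of the given frequency span."""
    return points / BARS_PER_YEAR[bar]

for points in (510, 240):  # the IBM and Apple capacities cited in the outline
    for bar in ("hourly", "daily", "weekly", "monthly"):
        print(points, bar, round(coverage_years(points, bar), 1))
```

For 510 points this gives about 2 years of daily bars, 9.8 years of weekly bars, and 42.5 years of monthly bars, matching the outline's rounded figures.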
PRODUCT REFERENCE

Product                                        Market
Compu Trac                                     Stock/Futures
  Compu Trac, Inc., Box 15951, New Orleans, LA 70175
Profit Optimizer                               Stock/Futures
  Micro Vest, Box 272, Macomb, IL 61455
ProfitTaker                                    Futures
  Distek, Inc., Box 1108, Lake Mary, FL 32746-9990
TechniFilter                                   Stock
  RTR Software Systems, Inc., 444 Executive Center Boulevard, El Paso, TX 79902
Natural Cycle Test - Fourier Analysis
[Chart not legibly reproduced]
FIGURE 1 - INTRODUCTION
Formulas :
C1 : C               @CLOSE
C2 : CY1             @CLOSE YESTERDAY
C3 : CA14            @14 DAY MOV AVG
C4 : CG7             @7 DAY RSI
C5 : CG14            @14 DAY RSI
C6 : J               @POSITIVE VOLUME INDICATOR
C7 : P               @PRICE VOLUME TREND
C8 : ((H-L)/C)R100   @PERCENT DAILY VOLATILITY

Conditions :
1 : C1>C2
2 : C1>C3            @BUY
4 : C1<C5            @SELL
6 : C4>100           @SELL
8 : C8>1.50          @SELL
FIGURE 2 - TECHNIFILTER
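Read with the operator table in Figure 6, formula C8 is percent daily volatility, (high minus low) divided by the close, times 100. A minimal sketch of this kind of screen in Python; the function names are mine, not TechniFilter's, and only the legible conditions are shown:

```python
# A sketch of the Figure 2 screen: compute a few of the formula values for one
# bar and test two of the legible conditions.
def pct_daily_volatility(high, low, close):
    # C8 : ((H-L)/C)R100, percent daily volatility
    return (high - low) / close * 100.0

def screen(close, close_yesterday, ma14, high, low):
    c1, c2, c3 = close, close_yesterday, ma14
    c8 = pct_daily_volatility(high, low, close)
    buy = c1 > c2 and c1 > c3   # conditions 1 and 2 (@BUY)
    volatile = c8 > 1.50        # condition 8
    return buy, volatile

print(screen(52.0, 51.0, 50.5, 52.5, 51.5))
```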
Formula Set : SAMPLE

SYMBOL
------
,TXN
,GRA
,DGN
,DJ
,XON
,SQB
,MOT
,DEC
,AMD
,LUV
,TDY
,SY
,IBM
,JNJ
,PRM
,UTX
,CDA
,AIR
,HCA
,HWP
,WX
,NCR
,UAL
,DD
,TL
,SLB
,HON
,SOH
,GW
,GRL
,MHS
,HUM
,GD
,TAN
,DAL
,HNG
,CBU
,AMR
,SNE
,MD
Date : 02-22-1985
Ordered By Column 5

     1        2        3        4        5        6        7        8
  ------   ------   ------   ------   ------   ------   ------   ------
117.75 118.38 122.16 27.17 <strong>21</strong>.36 84.27 -286.38 0.64<br />
40.50 40.63 41.16 27.91 25.96 114.18 -234.52 0.94<br />
57.75 58.00 64.79 41.32 27.52 130.00 -452.47 1.51<br />
43.00 43.75 45.44 25.51 33.33 103.53 -373.11 3.79<br />
46.38 46.25 47.14 27.88 33.39 115.79 64.13 1.08<br />
51.75 51.88 51.95 77.78 35.27 136.55 726.16 0.73<br />
34.88 35.00 36.54 4.51 36.22 82.87-2270.11 2.52<br />
113.75 113.75 117.90 34.15 39.05 155.48 5398.71 1.32<br />
32.88 33.63 34.37 31.22 39.72 95.00 1845.78 2.65<br />
24.13 24.13 25.11 <strong>21</strong>.97 40.56 82.17 1435.73 0.54<br />
260.88 265.50 263.75 46.37 41.18 142.59 2346.39 2.59<br />
47.38 47.38 47.83 56.09 41.29 113.11 136.72 1.06<br />
132.88 133.75 134.17 57.42 42.85 116.39 3782.84 1.60<br />
37.38 37.50 38.35 29.79 42.93 109.19 278.80 1.69<br />
18.75 18.63 18.53 50.00 46.78 189.32 3616.49 1.97<br />
41.88 42.00 42.81 42.48 47.72 142.83 429.22 1.19<br />
35.63 35.88 36.74 31.25 47.72 166.22 463.70 1.40<br />
20.88 <strong>21</strong>.13 20.76 50.00 48.55 126.70 491.66 2.39<br />
44.63 44.88 45.77 50.00 49.16 111.43 777.45 1.41<br />
37.00 37.75 37.37 55.90 50.70 99.36-1023.69 2.38<br />
30.50 30.75 31.65 44.75 50.79 137.79 3738.96 1.25<br />
28.88 29.50 29.58 48.09 51.92 76.87 -88.92 2.60<br />
45.13 45.75 45.64 44.44 52.96 111.61-1625.87 2.22<br />
52.88 53.38 53.30 63.29 53.10 149.85 1656.05 1.17<br />
48.00 48.50 49.74 0.00 53.70 73.96 -552.51 1.56<br />
40.75 41.50 41.40 27.78 54.07 104.64 1424.19 3.36<br />
62.75 63.25 63.28 56.83 54.90 100.99 86.64 1.40<br />
45.00 44.38 44.58 52.27 55.48 134.78 -323.09 1.38<br />
32.63 32.88 32.23 65.17 55.84 127.01 264.95 0.77
19.88 20.38 20.82 23.95 57.14 108.06 -88.76 3.77
83.50 84.63 84.47 53.14 57.43 106.00 195.45 1.50<br />
29.00 30.00 29.67 51.78 57.87 136.14 2574.41 3.90<br />
79.00 78.00 78.38 39.75 58.95 134.92 1194.27 2.22<br />
30.63 30.63 31.16 34.53 63.66 80.12-2650.48 2.02<br />
45.00 45.00 44.58 60.00 63.86 140.66 664.87 1.67<br />
46.25 46.38 46.48 43.78 64.04 134.48 2548.47 2.44
13.50 14.00 13.16 41.61 65.06 26.05 -.10E+05 5.56
39.50 39.75 38.35 74.93 65.20 131.72 65.83 1.59<br />
16.75 16.63 16.34 64.76 65.53 181.89 6405.68 0.78
81.38 81.50 78.86 82.57 86.12 131.30 782.84 0.77
FIGURE 3 - TECHNIFILTER
Summary Of Results : SAMPLE                Date : 02-22-1985

[Table: FORMULA RANK; each of the 40 symbols in the sample list is ranked 1 through 40 under each of the eight formulas and conditions. The rank matrix is not legibly reproduced]

FIGURE 4 - TECHNIFILTER
Formula Set : SAMPLE                       Symbol : ,LUV

[Tables: optimization passes over formula parameters, showing GAIN and #TRADES for long and short positions at each tested parameter value; not legibly reproduced]

FIGURE 5 - TECHNIFILTER
Command    Function
( )An      Today's n-day simple moving average of ( ).
B          Negative Volume Indicator (NVI).
C          Today's closing price.
( )F       Sum of all values of ( ) from the beginning of the data base to the present.
( )Gn      The n-day Welles Wilder relative strength of the quantity ( ).
H          Today's high price.
( )I       Indicates if ( ) is positive, zero, or negative by assigning 1.00, 0.00, or -1.00 respectively.
J          Positive Volume Indicator (PVI).
K          On Balance Volume (OBV).
L          Today's low price.
( )Mn      Largest value of ( ) over the last n days.
( )Nn      Smallest value of ( ) over the last n days.
P          Price Volume Trend (PVT).
Q          Daily Volume Indicator (DVI).
( )Rn      Multiply ( ) by the positive number n.
( )Sn      Sum of the last n values of the quantity ( ).
T          Copy the quantity ( ).
( )U       Absolute value of ( ).
V          Today's trading volume.
( )Wn      Slope of the Least Squares line of the last n values of ( ).
( )Xn      Today's n-day exponential moving average of ( ). Here, n must be bigger than two.
Zn         Use the computation in an earlier formula.
( )^n      Compute the nth power of ( ), for n positive.
FIGURE 6 - TECHNIFILTER
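The ( )Gn command is Welles Wilder's relative strength index. A minimal sketch of the standard calculation (my own code, not TechniFilter's):

```python
# Welles Wilder's n-day Relative Strength Index: average gains and losses are
# smoothed with Wilder's method, then RSI = 100 - 100 / (1 + avg_gain/avg_loss).
def wilder_rsi(closes, n=14):
    if len(closes) <= n:
        raise ValueError("need more than n closes")
    changes = [b - a for a, b in zip(closes, closes[1:])]
    gains = [max(c, 0.0) for c in changes]
    losses = [max(-c, 0.0) for c in changes]
    avg_gain = sum(gains[:n]) / n
    avg_loss = sum(losses[:n]) / n
    rsis = []
    for g, l in zip(gains[n:], losses[n:]):
        avg_gain = (avg_gain * (n - 1) + g) / n   # Wilder smoothing
        avg_loss = (avg_loss * (n - 1) + l) / n
        if avg_loss == 0:
            rsis.append(100.0)                    # no losses: pinned at 100
        else:
            rsis.append(100.0 - 100.0 / (1.0 + avg_gain / avg_loss))
    return rsis
```

A steadily rising series pins the index at 100 and a steadily falling one at 0, which is a quick sanity check on any implementation.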
[FIGURE 7 - PROFIT TAKER: ProfitOptimizer Summary Report, System Date 06-27-1984, Profit Set [Print All]; rows of parameter combinations with entry/exit, win ratio, trades, cumulative P/L, maximum loss, maximum drawdown, and profit factor are not legibly reproduced]
[ProfitOptimizer Summary Report, optimization pass over long and short stop sensitivities; individual rows not legibly reproduced]
Total Successful Criteria Printed [270]
Total Failed Criteria not Printed [ 81]
Total of all Tested Criteria..... [351]
Total Test Time [00:06:21]   End of Optimizing
FIGURE 8 - PROFIT TAKER
[The ProfitAnalyst Indicator status display: timing filter, short and long direction, and long/short stop sensitivities; table not legibly reproduced]
FIGURE 9 - PROFIT TAKER
TOTAL CLOSED OUT TRADES .............................. 17
LONG WINNING TRADES .................................. 3
SHORT WINNING TRADES ................................. 9
TOTAL WINNING TRADES ................................. 12
LONG LOSING TRADES ................................... 3
SHORT LOSING TRADES .................................. 2
TOTAL LOSING TRADES .................................. 5
TOTAL BREAK-EVEN TRADES .............................. 0
% WINNING TRADES ..................................... .705
% LOSING TRADES ...................................... .294
% BREAK-EVEN TRADES .................................. 0
TOTAL REALIZED PROFITS ............................... 25596
TOTAL REALIZED LOSSES ................................ -7801
CUMULATIVE PROFIT OR LOSS ............................ 17795
RATIO CUMULATIVE PROFIT TO TOTAL REALIZED LOSSES ..... 2.281
MAXIMUM WINNING TRADE ................................
MAXIMUM LOSING TRADE .................................
AVERAGE WINNING TRADE ................................ 2133
AVERAGE LOSING TRADE ................................. -1560.200
RATIO AVERAGE WINNING TO AVERAGE LOSING TRADE ........ 1.367
AVERAGE PROFIT OR LOSS PER TRADE ..................... 1046.764
MAXIMUM NUMBER CONSECUTIVE LOSING TRADES ............. 1
MAXIMUM NUMBER CONSECUTIVE WINNING TRADES ............
MAXIMUM DRAWDOWN - CLOSED OUT TRADES .................
PROFIT FACTOR ........................................ 3.281
SHARPE RATIO .........................................
T-BILL RATE ..........................................
LEVERAGE FACTOR ......................................
COMMISSIONS - CLOSED OUT TRADES ......................
EXECUTION SLIPPAGE ...................................
NET REALIZED PROFIT OR LOSS ..........................
RATIO COMM AND SLIP TO CUM NET REALIZED PROFIT .......
TOTAL UNREALIZED PROFITS ON OPEN TRADES ..............
TOTAL UNREALIZED LOSSES ON OPEN TRADES ...............
TOTAL TRADING DAYS ................................... 638
TOTAL HOLIDAYS ....................................... 16
TOTAL DAYS IN FILES .................................. 1515
CONVERSION FACTOR .................................... 2
POINT ................................................ 12.500
DAILY LIMIT .......................................... 150
END OF HISTORY TEST FOR RANGE: 1
FIGURE 10 - PROFIT TAKER
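Most of the summary lines in Figure 10 are simple functions of the closed-trade list. A sketch of the core ones (my own helper, not ProfitTaker's code); fed round numbers consistent with the figure's winning and losing averages, it reproduces the cumulative P/L and profit factor shown:

```python
# Recompute the core closed-trade statistics from a list of per-trade P/Ls.
def trade_stats(pnls):
    wins = [p for p in pnls if p > 0]
    losses = [p for p in pnls if p < 0]
    gross_profit = sum(wins)
    gross_loss = -sum(losses)          # reported as a positive magnitude
    return {
        "trades": len(pnls),
        "pct_winning": len(wins) / len(pnls),
        "avg_win": gross_profit / len(wins) if wins else 0.0,
        "avg_loss": -gross_loss / len(losses) if losses else 0.0,
        "cumulative": gross_profit - gross_loss,
        "profit_factor": gross_profit / gross_loss if gross_loss else float("inf"),
    }

# Twelve winners of 2133 and five losers of -1560.20, as in the figure.
print(trade_stats([2133.0] * 12 + [-1560.2] * 5))
```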
Strategies Options
0. Move to Index Menu
1. Buy or Sell Only (Hedging)
2. Select a Stop Strategy
3. Combine Two Indexes
4. Implement Pyramiding Strategy
5. Trade Toward Cash Price
6. Define Your Entry Points

Indexes
 1 - STD. MOV. AVG.        2 - REL. STR. IND.
 3 - PERCENT R             4 - HI/LO OSCILLATOR
 5 - VAR. OSC.             6 - PAR. TIME PRICE
 7 - VOLATILITY            8 - SWING INDEX
 9 - G-PERCENT            10 - VAR. G-PERCENT
11 - MA. VAR. G%          12 - VOL. OP/INT IND.
13 - REGRES. SLOPE        14 - ACCUM/DIST.
15 - MA OSCILLATOR        16 - WHT. MOV. AVG.
17 - EXP. MOV. AVG.       18 - DIR. MOV. IND.
19 - MA OSCL. CROS.       20 - MOV. AVG. BANDS

LENGTH OF MA #1   3       LENGTH OF MA #2   6       LENGTH OF MA #3   9
INCR OF MA #1     3       INCR OF MA #2     4       INCR OF MA #3     6
END OF MA #1     12       END OF MA #2     18       END OF MA #3     27

FIGURE 11 - PROFIT OPTIMIZER
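The length/increment/end settings in Figure 11 define a brute-force search grid (MA #1 takes the values 3, 6, 9, 12; MA #2 takes 6, 10, 14, 18). A sketch of that kind of optimizer, using a simple two-average crossover as a stand-in trading rule; the rule and all names are mine, not Profit Optimizer's:

```python
import itertools

def sma(xs, n):
    """Simple moving average; None until n points are available."""
    return [sum(xs[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(xs))]

def crossover_pnl(closes, fast, slow):
    """P/L of holding long whenever the fast average is above the slow one."""
    f, s = sma(closes, fast), sma(closes, slow)
    pnl = 0.0
    for i in range(1, len(closes)):
        if f[i - 1] is not None and s[i - 1] is not None and f[i - 1] > s[i - 1]:
            pnl += closes[i] - closes[i - 1]
    return pnl

def optimize(closes, grid1, grid2):
    """Exhaustively test every (fast, slow) pair and keep the best P/L."""
    best = None
    for fast, slow in itertools.product(grid1, grid2):
        if fast >= slow:
            continue
        p = crossover_pnl(closes, fast, slow)
        if best is None or p > best[0]:
            best = (p, fast, slow)
    return best

# Grids from the panel above: length 3 / increment 3 / end 12, and 6 / 4 / 18.
print(optimize([float(i) for i in range(1, 60)], range(3, 13, 3), range(6, 19, 4)))
```

On a steadily rising test series the shortest pair wins simply because it is in the market longest; real optimization runs, as the workshop notes stress, should be checked across different data lengths to avoid curve-fitting.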
CNTRCT/STK . . . . JUN 84 T-BILLS         INDEX #1 . . . . 16 DAY RSI
COMMISSION . . . . 50                     SHORT AT . . . . 75
TRADING  . . . . . BUYING ONLY            LONG AT  . . . . 25
TAKING PROFITS . . 30                     INDEX #2 . . . . 10 DAY %R
TWO INDEXES  . . . SAME SIGNALS           SHORT AT . . . . 20
RE-ENTER . . . . . REVERSE SIGNAL         LONG AT  . . . . 80

# OF TRADES . . . . . . 2                 # OF LOSING TRADES . . . 0
COMMISSIONS . . . . . . 100               LARGEST LOSING TRADE . . 0
LARGEST UNREAL LOSS . . -24               # OF WINNING TRADES  . . 2
TOTAL PROFIT OR LOSS  . 1400              LARGEST WIN. TRADE . . . 700
# OF DAYS IN MARKET . . 33                AVG. DAILY GAIN/LOSS . . 42

[Profit distribution histograms not legibly reproduced]

FIGURE 12 - PROFIT OPTIMIZER
[Compu Trac optimization report, 24-Jan-85, SMA.SCH : SMA.TSK; the maximum/minimum P/L and equity block, with values between roughly -0.011 and 0.102 and dates from 830405 to 850114, is not legibly reproduced]

Statistics
Periods            127      348      509
Trades              12       10       22
# Profitable         3        6        9
# Losing             9        4       13
% Profitable     25.00    60.00    40.91
% Losing         75.00    40.00    59.09

Results
Commission      0.0000   0.0000   0.0000
Slippage        0.0000   0.0000   0.0000
Gross P/L      -0.0048   0.0871   0.0823
Open P/L        0.0000   0.0000   0.0000
P/L            -0.0048   0.0871   0.0823
Equity         -0.0048   0.0871   0.0823

FIGURE 13 - COMPU TRAC
FIGURE 14 - COMPU TRAC
Relation Elements
A - and
B - or
C - increasing
D - decreasing
E - previous
F - next
G - min
H - max
I - average
J - entry-price

Function Elements
A - +    add
B - -    subtract
C - *    multiply
D - /    divide
E - <    less than
F - >    greater than
G - =    equal to
H - (    open parenthesis
I - )    close parenthesis

THE STUDIES
Advance-Decline
Commodity Channel Index
Commodity Selection Index
Demand Index
Detrend
Directional Movement Index
H/L Momentum Index
Haurlan Index
McClellan Oscillator
Momentum
Moving Average
Moving Average Convergence/Divergence
Open Interest
Oscillator
Parabolic (SAR)
Point & Figure
Rate of Change
Ratio
Relative Strength Index
Short Term Trading Index
Spread
Stochastic
Volume
Weighted Close
Williams %R

FIGURE 15 - COMPU TRAC
FIGURE 16 - COMPU TRAC D
FIGURE 17 - COMPU TRAC D
0 4 /// slJ=aY 0fAvaileble Arithmetic Operatom<br />
key definition example<br />
+ add see "EXAWLX", below<br />
subtract high - low<br />
multiply see "EXAMPLE", below<br />
divide 11 ,I II 11 1, 11<br />
= equal to<br />
AZ not equal to<br />
> greater than<br />
< less than<br />
>= greater than or equal to<br />
30<br />
rsi < 60<br />
close >= 350<br />
close 30 IL oscl > 0<br />
rsi > 30 : oscl > 0<br />
open parenthesis see "EXAMPLE", below<br />
closed parenthesis ,t t1 ,I 0 1, ,I<br />
[xl offset ** close > close[l]<br />
* The sign for "or" is generated by typing .<br />
This character is NOT a colon.<br />
** For example, "cloae[5]" yields the value of the close 5 days previous<br />
to the current close. The number enclosed by [I (open & closed square<br />
brackets) must be a positive constant in the range l...x, not an<br />
expression. "x" is the maximum number of dates accumulated for the item.<br />
This number depends on the format of the data diskette.<br />
lLKb!PLh': (high + low + 2 # close) / 4<br />
Conditions You Can Set<br />
key definition example<br />
= equal to<br />
<> not equal to<br />
> greater than rsi > 70<br />
< less than<br />
>= greater than or equal to close >= 350<br />
& and close > 70 & oscl > 0<br />
| or rsi > 70 | oscl > 0<br />
NOTE: The sign for "or" is generated by typing |.<br />
This character is NOT a colon.<br />
FIGURE 18 - COMPU TRAC D<br />
Resident Analysis Routines<br />
Moving Average ...............................<br />
Relative Strength Index ......................<br />
Spread .......................................<br />
Ratio ........................................<br />
Oscillator ...................................<br />
Momentum .....................................<br />
Weighted Close ...............................<br />
Commodity Channel Index ......................<br />
Commodity Channel Index: General Information.<br />
Rate of Change ...............................<br />
Stochastic ...................................<br />
Stochastic: General Information ..............<br />
Directional Movement Index ...................<br />
Moving Average Convergence/Divergence ........<br />
Advanced TradePlan Operations<br />
Enter long when....<br />
Close greater than previous close and --- close > close[1] &<br />
High greater than previous high and --- high > high[1] &<br />
RSI greater than previous RSI and --- rsi > rsi[1] &<br />
RSI greater than 30 and --- rsi > 30 &<br />
Oscillator greater than previous Oscillator and --- oscl > oscl[1] &<br />
Oscillator greater than zero --- oscl > 0<br />
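A minimal sketch of how an entry rule like the one above could be evaluated in ordinary code (illustrative Python, not Compu Trac syntax; the sample values are invented):<br />

```python
# Hypothetical evaluation of the six-part entry rule shown above.
# close[i - 1] plays the role of TradePlan's offset notation close[1]
# (the value one day previous).

def enter_long(close, high, rsi, oscl, i):
    """True when every condition of the entry rule holds on day i."""
    return (close[i] > close[i - 1] and
            high[i] > high[i - 1] and
            rsi[i] > rsi[i - 1] and
            rsi[i] > 30 and
            oscl[i] > oscl[i - 1] and
            oscl[i] > 0)

# Two invented days of data; day 1 satisfies every condition.
close = [100.0, 101.0]
high = [101.0, 102.0]
rsi = [35.0, 40.0]
oscl = [0.5, 1.0]
print(enter_long(close, high, rsi, oscl, 1))  # True
```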
Exit long when....<br />
Close less than previous close and --- close < close[1] &<br />
etc.....<br />
The TradePlan Functions<br />
a) Trade: Enter a set of trading rules for each of four potential<br />
market positions: long entry, long exit, short entry,<br />
short exit. The trading rules are based on the performance<br />
and interaction of other elements in the TradePlan.<br />
For example, you could tell the system to take a long<br />
position when the Relative Strength Index reaches 70 and<br />
to exit the long position when the RSI comes back to 70.<br />
There are many such possibilities.<br />
b) Open-PL: Yields a profit/loss figure for all open<br />
positions - positions you have entered but have not<br />
exited.<br />
c) Close-PL: Tracks the profit/loss for all closed positions -<br />
positions which have been entered AND exited.<br />
d) Trade-PL: Marks profit/loss each time a trade is exited. This is<br />
not a cumulative measure.<br />
e) return: Tracks annualized "return on investment" by relating<br />
profit/loss to margin requirements.<br />
FIGURE 19 - COMPU TRAC D<br />
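The distinction between the profit/loss functions can be pictured with a small sketch. The class below is our own illustration of the bookkeeping for a one-unit long position, not Compu Trac code:<br />

```python
# Illustrative bookkeeping behind Open-PL, Close-PL, and Trade-PL
# for a one-unit long position (invented code, not Compu Trac's).

class PLTracker:
    def __init__(self):
        self.entry = None     # entry price of the currently open position
        self.close_pl = 0.0   # Close-PL: cumulative P/L of exited trades
        self.trade_pl = 0.0   # Trade-PL: P/L of the last exited trade only

    def enter_long(self, price):
        self.entry = price

    def open_pl(self, price):
        # Open-PL: profit/loss on the position entered but not yet exited
        return 0.0 if self.entry is None else price - self.entry

    def exit_long(self, price):
        self.trade_pl = price - self.entry  # per-trade, not cumulative
        self.close_pl += self.trade_pl      # cumulative over closed trades
        self.entry = None

t = PLTracker()
t.enter_long(100.0)
print(t.open_pl(104.0))           # 4.0 while the trade is open
t.exit_long(103.0)
print(t.trade_pl, t.close_pl)     # 3.0 3.0
```

The "return" function would go one step further, relating such profit/loss figures to the margin posted and annualizing the result.<br />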
FIGURE 20 - MITRONIX II<br />
BIOGRAPHY<br />
Barbara Diamond is President of Diamond Services Group, Ltd., established in 1978, a con-<br />
sulting company for market data/software, international communications and broker services.<br />
She also serves as a Director for Brokerage Support Services, Pty., Ltd., a firm specializing in<br />
services to the Pacific Basin area. Ms. Diamond is a charter member of the International Mon-<br />
etary <strong>Market</strong> and the Singapore International Monetary Exchange.<br />
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.<br />
ARTIFICIAL INTELLIGENCE / PATTERN RECOGNITION<br />
APPLIED TO<br />
FORECASTING FINANCIAL MARKET TRENDS<br />
David R. Aronson<br />
Abstract<br />
Current use of computers by most financial market analysts (FMAs) barely scratches the sur-<br />
face of what is possible. Computers have been used primarily to speed up tasks done by hand<br />
and desk-top calculator in the pre-computer era. Such tasks include graphing data, calculation<br />
of indicators, testing humanly derived trading rules, and fitting simplistic models. A new and<br />
powerful application of computers to the FMA’s domain is in its infancy: exploiting artificial in-<br />
telligence and pattern recognition (AI/PR) to improve the accuracy of price-trend forecasting<br />
and security selection. AI/PR is a computer-intensive data analysis and modeling process that<br />
enables a computer to generate predictive models from histories of numerous indicator vari-<br />
ables. AI/PR employs automated inductive inferencing to make associations and discover<br />
complex relationships (i.e., conditional probabilities) between sets of indicators and subse-<br />
quent market trends. Complex relations can escape traditional computer-based modeling ap-<br />
proaches.<br />
Forecasting financial markets is extremely challenging. Valid predictive models are hard to de-<br />
velop, and they are likely to have complex structure. This is explained by several factors in-<br />
cluding limitations in human intelligence, the efficiency of financial markets and the complexity<br />
of the price setting mechanism. Although computers can help, traditional methods often pro-<br />
duce models that “fit” the historical data well but perform poorly when put to use on “new data.”<br />
Such model failures are usually due to a misapplication of the computer and inherent weak-<br />
nesses of a given modeling methodology. However, AI/PR when properly applied can produce<br />
effective forecasting models by avoiding the pitfalls of traditional modeling methods. Traditional<br />
assumptions are relaxed by virtue of a much more intense use of the computer.<br />
Forecasting models synthesized from hundreds of candidate indicators have been produced<br />
with an AI/PR system called PRISM (Pattern Recognition Information Synthesis Modeling).<br />
Directional forecast accuracy on out-of-sample (new data) has been statistically significant. With<br />
AI/PR approaches there is a significant opportunity for synergy between the FMA and the com-<br />
puter, as their information processing capacities are different and complementary.<br />
This article is organized into six parts. Part I discusses a new role for the computer in the work<br />
of the FMA. Part II considers the role of the FMA as an investigator of historical data to discover<br />
predictive laws. Part III discusses difficulties associated with the discovery of valid predictive<br />
laws as well as why they are likely to be complex. Part IV examines a number of common fore-<br />
casting procedures in light of how well they address the difficulties discussed in Part III. Part<br />
V introduces the AI/PR approach and how it deals with the difficulties raised in Part III. Part VI<br />
discusses a number of applications of an AI/PR system called PRISM.<br />
PART I<br />
A NEW ROLE FOR THE COMPUTER<br />
A. Old Roles and Old Views<br />
Financial market analysts (FMAs) are failing to exploit computers fully and properly in their ef-<br />
forts to forecast price-trends of equities, bonds, commodities, currencies, etc. Most computer<br />
applications merely speed up what had been done by hand and desk-top calculator in the pre-<br />
computer era. Tasks such as creating graphs, calculating indicators, and testing theories and<br />
strategies are common uses of the computer. Some are starting to use it to optimize trading<br />
strategies and fit various models to historical data including exponential smoothing, Box Jen-<br />
kins (ARIMA), and multiple regression. In many cases the models are over-optimized or over-<br />
fitted (i.e., they fit historical data well but predict poorly on “new data”).<br />
However, computers can be used more effectively. For the last twenty years researchers in<br />
various scientific and engineering fields have been utilizing the computer in a different way to<br />
produce models and amplify their intelligence. By using the computer more creatively and in-<br />
tensely, they have been able to escape the limitations of pre-computer era methods. Spe-<br />
cialized software has enabled computers to perform both inductive and deductive reasoning, infer patterns<br />
in data, detect complex relationships and generate effective prediction models. When cast into<br />
this new role computers have equaled and sometimes exceeded the performance of human<br />
experts, thus prompting the term “artificial intelligence.” In this article we will discuss a partic-<br />
ular type of artificial intelligence called “pattern recognition” (AI/PR). It is particularly applicable<br />
to the domain of the FMA.<br />
The failure to fully exploit the computer stems from an outdated view of its potential. Typical is<br />
the comment of a senior vice-president of a large money management firm. (Wall Street Com-<br />
puter Review, 3/85)<br />
“What you get out of a computer is only as good as what you put into it, and it is a<br />
very subjective approach.”<br />
We contend this view fails to see that computers can be programmed to discover relationships<br />
and patterns buried in masses of data. This discovered information (i.e., a predictive model)<br />
is extremely valuable, and exists by virtue of the computer's ability to transform the raw input<br />
data in a very useful way. We are not, however, getting something for nothing, nor is this the<br />
latest addition to a line of “perpetual motion machines.” The information gleaned by the com-<br />
puter comes at a significant expense over and above the input data. First is the effort that went<br />
into the design of the AI/PR software and secondly, a large expenditure of computational re-<br />
sources required to perform the analysis.<br />
B. Artificial Intelligence and Heuristic Problem Solving<br />
We define artificial intelligence (AI) to mean capacities programmed into a computer that, if<br />
displayed by a human, would be described as intelligent. This includes the ability to recognize<br />
and assess situations, perform significant logical steps, make decisions, and learn from prior<br />
experience. One type of AI called "heuristic searching" solves a problem by trial and error.<br />
Success or failure on any given trial is fed back to the program, helping it make “an intelligent”<br />
choice as to what to try next. Thus, not all possible choices are tried, only those that lead in<br />
the direction of a good solution. Of course, this requires that the program be given an explicit<br />
definition of what constitutes a good solution.<br />
C. Pattern Recognition<br />
Pattern recognition is a specific type of heuristic program that was developed in the 1960s. The<br />
objective of AI/PR software is to develop a model that can be used to classify a sample, object,<br />
event, etc., based on the sample’s important attributes. The method is based on inductive logic,<br />
the type of reasoning that derives a general rule from a study of specific cases. When there<br />
are only two possible classes, we have the fundamental problem in pattern recognition: two<br />
class discrimination.<br />
For example, a college admissions office may wish to classify (predict) high school graduates<br />
as to their ability to succeed in college. A pattern recognition approach to the problem would<br />
start with a large sample of college students including some who did well and some who did<br />
poorly (i.e., the outcome is already known). Each student is described by numerous quanti-<br />
fiable attributes available prior to entering college. This is the data available to an admissions<br />
officer and might include SAT scores, high school average, high school rank, etc. The job of<br />
the AI/PR program is to find a group of similar patterns, called a pattern-class, that tends to be<br />
associated with those who ultimately did succeed. This task includes identifying the relevant<br />
attributes of the pattern, as well as value ranges, for each attribute.<br />
To an FMA the term “pattern” might mean a visual pattern seen on a price chart such as the<br />
famous “head and shoulders,” or a certain oscillator configuration. However, “pattern” in the<br />
context of pattern recognition means something very specific. It refers to a set of measure-<br />
ments that describes a single sample. Each measurement relates to a different attribute. For<br />
example, three attributes that might be used to describe a person are height, age, and<br />
weight. For these attributes, the author’s pattern is: height = 5’8”, age = 39, and weight =<br />
165. Other terms synonymous with “attribute” are factor, variable, feature, or indicator.<br />
The basic assumption of pattern recognition is that samples from the same class will tend to have<br />
similar patterns. The mathematical formalism used in AI/PR is to represent a pattern as a point<br />
in a multi-dimensional space, where each axis of the space represents one attribute. Thus,<br />
samples from the same class will tend to cluster or clump in the same regions of the multidi-<br />
mensional space. However, clumping will take place if, and only if, the attribute axes are rel-<br />
evant (i.e., they have useful information). In addition, irrelevant attributes must be avoided for<br />
they can dilute the information of the useful ones. Hence, the heart of the pattern recognition<br />
problem is isolating the relevant attributes from a larger initial list.<br />
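This formalism can be sketched with a toy example (our own, not the author's system): two-class discrimination by a nearest-centroid rule, where each sample is a point in attribute space and is assigned to the class whose average pattern lies closest.<br />

```python
# Toy two-class discrimination: samples clump near their class
# centroid in a two-attribute space. All values are invented.

def centroid(samples):
    n = len(samples)
    return [sum(s[k] for s in samples) / n for k in range(len(samples[0]))]

def classify(x, cen_a, cen_b):
    """Assign x to 'A' or 'B' by squared Euclidean distance to centroids."""
    da = sum((xi - ci) ** 2 for xi, ci in zip(x, cen_a))
    db = sum((xi - ci) ** 2 for xi, ci in zip(x, cen_b))
    return 'A' if da <= db else 'B'

# Two clumps, as if measured on two relevant indicator axes.
class_a = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]
class_b = [[3.0, 3.1], [2.9, 3.3], [3.2, 2.8]]
cen_a, cen_b = centroid(class_a), centroid(class_b)

print(classify([1.0, 1.1], cen_a, cen_b))  # A
print(classify([3.1, 3.0], cen_a, cen_b))  # B
```

Adding an irrelevant attribute (say, a random third coordinate) would blur the distance comparison, which is why isolating the relevant attributes is the heart of the problem.<br />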
The relevance of an attribute depends entirely on the classification task we wish to perform.<br />
We know, without the benefit of a computer, that “height” will be a useful attribute in placing<br />
an individual into either the class of jockeys or the class of basketball players. The attribute<br />
“eye color” will not. The problems for which the important class qualifying attributes are known<br />
don’t require AVPR.<br />
AI/PR is usefully applied to complex problems whose relevant attributes are poorly under-<br />
stood, the problems where even experts make poor judgements and predictions. Examples<br />
would include the geological attributes of a site that is likely to contain oil or the indicator at-<br />
tributes of significant bottoms in the gold market. An interesting paradox is that to the extent<br />
most FMAs know the attributes of bottoms in gold they cease to be such. Thus, the real attri-<br />
butes can only be known to a few FMAs. In such cases, AI/PR analysis can be and has been<br />
useful.<br />
Using heuristic searching, the program tries to identify the most useful combination of attri-<br />
butes (indicators) within a much larger set. With only twenty candidate indicators, there are<br />
over six thousand possible combinations. Thus, an exhaustive search of each one would be<br />
prohibitive, even for supercomputers. To deal with this, AI/PR programs use rules of intelligent<br />
searching to pare down the domain of the search. Humans do this, for example, when looking<br />
for a pair of lost eyeglasses. Few would consider the brute force approach of starting at the<br />
North Pole and working south in expanding concentric circles. Most people would be intelligent<br />
enough to start searching where they last remembered having them.<br />
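One simple heuristic of this kind can be sketched as greedy "forward selection" (our own simplified stand-in, with an invented scoring function; the article does not disclose PRISM's actual search rules): at each step, add the indicator that most improves a score, rather than testing every subset.<br />

```python
# Greedy forward selection: a simple heuristic search that adds, at each
# step, the indicator giving the biggest score improvement, instead of
# exhaustively testing every subset. The scoring table is invented.

def forward_select(indicators, score, max_size):
    """score(subset) returns a number; larger is better."""
    chosen, best = (), score(())
    while len(chosen) < max_size:
        gains = [(score(chosen + (ind,)), ind)
                 for ind in indicators if ind not in chosen]
        top, ind = max(gains)
        if top <= best:      # no candidate improves the score: stop early
            break
        chosen, best = chosen + (ind,), top
    return chosen

# Pretend only 'rsi' and 'oscl' carry information, and that together
# they are worth more than either alone.
VALUE = {frozenset(): 0, frozenset({'rsi'}): 3,
         frozenset({'oscl'}): 2, frozenset({'rsi', 'oscl'}): 6}

def score(subset):
    return VALUE.get(frozenset(subset), 0)

print(forward_select(['volume', 'rsi', 'oscl', 'spread'], score, 3))
# ('rsi', 'oscl')
```

The search visits only a handful of subsets yet finds the informative pair, illustrating how intelligent searching pares down the domain.<br />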
Financial market forecasting lends itself to the AI/PR approach. First, forecasting can be easily<br />
translated into a classification problem. Is today a bottom and, therefore, a time to recommend<br />
buying? Is a particular stock likely to out-perform the market? Is the trend up or down? Second,<br />
there is an ample supply of historical samples of known class. For example, we know August 12,<br />
1982 was a bottom day. Third, FMAs have numerous indicators that describe the state of the<br />
market on a continual basis. Fourth, they are interested in determining how to best use those<br />
indicators to forecast.<br />
Readers familiar with multi-variate linear discriminant analysis may note a good deal of simi-<br />
larity between it and AI/PR. Although these approaches have similar purposes, there are sig-<br />
nificant differences in their assumptions, methodology, and robustness that will be addressed<br />
in a later section.<br />
PART II<br />
THE FMA'S ROLE AS FORECASTER AND HISTORIAN<br />
A. The Multi-Indicator Approach<br />
An important, on-going task of the FMA is accurately forecasting price trends in financial mar-<br />
kets. Accuracy is of particular importance when the current price-trend is about to reverse di-<br />
rection (i.e., the transition from bull market to bear market and vice versa).<br />
A common approach to trend forecasting and reversal detection is the multi-indicator ap-<br />
proach. The FMA considers the current readings on a multitude of indicators that measure var-<br />
ious characteristics of the market. Among the dozens that may be found in an FMA's work book<br />
are indicators of price and volume change, market psychology, monetary and interest rates,<br />
institutional liquidity, etc. Each is evaluated as to its current implications or signal (bullish, bear-<br />
ish or neutral). Finally, the FMA arrives at a forecast, which is a consensus of all the separate<br />
indicator signals. This seems entirely reasonable. However, producing a forecast by properly<br />
integrating the signals from a multitude of separate indicators is far more complex.<br />
B. Rules Derived From Historical Data<br />
Before considering the production of a forecast from numerous indicators, let’s consider how<br />
an FMA typically derives a forecast from just a single indicator. In other words, how is the current<br />
level of a given indicator interpreted to be either bullish, bearish, or neutral? Such interpreta-<br />
tions are based on rules derived from historical data. A rule is a distillation of historical prec-<br />
edents and is, in fact, a simple prediction model. A typical rule might be: if indicator x has a<br />
level greater than 1.20, grade it bullish; otherwise, grade it neutral. Therefore, the FMA must<br />
adopt the role of market historian or study the works of other historians before taking on the<br />
role of forecaster.<br />
A fundamental assumption of FMAs is that historical data series contain discoverable rules<br />
(models) that can be used to make predictions from current data. However, FMAs vary widely<br />
in the rigorousness of their methods to derive the rules.<br />
Thus, indicator interpretation rules are really empirical laws that have been inductively gen-<br />
eralized from historical data. Inductive generalization is the process by which a law or model<br />
is derived from numerous past observations of a phenomenon. If an observer notes that, of one<br />
hundred occasions on which condition A was observed, event B followed on eighty, in-<br />
ductive inference permits the leap to the more general statement: whenever condition A oc-<br />
curs, then event B is predicted to occur with a probability of 0.8. This type of reasoning<br />
underlies the historical research of FMAs, as well as the scientific method in general. As with<br />
all reasoning, it must be carried out properly to arrive at valid conclusions.<br />
For example, consider how a single indicator model (rule) based on the PSSR (Public to Spe-<br />
cialists Short Sale Ratio), a well-known indicator of stock market sentiment, might be induced<br />
from historical data. Assume that an FMA has the past thirty years of weekly readings on PSSR,<br />
as well as the history of the Standard and Poors 500 (the item to be forecast). The FMA ex-<br />
amines the history of the PSSR data series, noting its level at specific points in time as well as<br />
what the market did subsequently. If it is seen that on many prior occasions when the PSSR<br />
exceeded a value of 0.60 (a relatively high level) prices subsequently trended up, an<br />
interpretation rule (model) for the PSSR can be inductively generalized:<br />
If PSSR exceeds 0.60 expect prices to trend up.<br />
The PSSR rule is a simple model that translates a current indicator level into a forecast of the<br />
trend.<br />
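The counting behind such a rule is easy to make concrete. The sketch below is ours, and the weekly readings in it are invented purely to show the mechanics:<br />

```python
# Toy induction of the PSSR rule: estimate P(up-trend | PSSR > 0.60)
# by counting historical precedents. All readings here are invented.

def induce_rule(pssr, went_up, threshold=0.60):
    hits = [up for reading, up in zip(pssr, went_up) if reading > threshold]
    # fraction of high readings that were followed by up-trends
    return sum(hits) / len(hits)

pssr = [0.45, 0.65, 0.70, 0.55, 0.62, 0.80, 0.61]        # weekly readings
went_up = [False, True, True, False, True, True, False]  # trend that followed

print(induce_rule(pssr, went_up))  # 0.8
```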
C. Multi-Indicator Models<br />
However, few FMAs rely on the predictive power of a single indicator and rightly so. In an effort<br />
to get greater predictive accuracy, most practitioners wish to combine the signals of numerous<br />
indicators to produce a forecast. Attempting this more ambitious approach creates a number<br />
of problems. First is confusion. Most of the time indicators are giving conflicting signals. Some<br />
are bullish, some are bearish and some are neutral. Resolving this conflict without hedging is<br />
extremely difficult. Second, deriving a forecast from numerous indicators is far more difficult<br />
than single indicator forecasting. Third, the vast difference in difficulty is not readily apparent.<br />
The subtle, but substantial, complexity of multi-indicator forecasting leads many efforts astray.<br />
For example, one common approach attempts to derive a forecast from a consensus voting<br />
of numerous single indicator signals (i.e., algebraically summing the signals where<br />
bullish = + 1, bearish = - 1, and neutral = 0). Such an approach incorrectly assumes that each<br />
indicator has equivalent, valid, non-redundant, independent information. The fact is a good<br />
forecast that truly resolves indicator conflict requires a multi-indicator model that takes into ac-<br />
count indicator interrelationships. This includes their relative importance or lack thereof, pos-<br />
sible redundancy, and possible non-additive (i.e., non-linear) interactions. Data-modeling methods<br />
that attempt to take such effects into account are termed "multi-variate." The AI/PR approach<br />
to be discussed is one multi-variate method that is particularly powerful when modeling the<br />
behavior of complex systems.<br />
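The naive consensus vote is trivial to write down, which is part of its appeal. The invented signals below also show its weakness: a near-duplicate indicator simply votes twice.<br />

```python
# The naive consensus vote: bullish = +1, bearish = -1, neutral = 0,
# summed algebraically. Indicator names and signals are invented.

def consensus(signals):
    return sum(signals.values())

signals = {
    'pssr': +1,            # bullish
    'momentum': -1,        # bearish
    'momentum_10wk': -1,   # nearly redundant with momentum: a second vote
    'breadth': 0,          # neutral
}

vote = consensus(signals)
print('bullish' if vote > 0 else 'bearish' if vote < 0 else 'neutral')  # bearish
```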
The complexity of producing a multi-indicator model results from the combinatorial explosion<br />
that takes place when considering numerous indicators. With only twenty indicators to consider,<br />
there are over six thousand possible indicator models (i.e., combinations): all combinations of<br />
indicators taken 19 at a time, plus all combinations taken 18 at a time, plus all combinations<br />
taken 17 at a time, etc. With that many possibilities relative to the small number of historical<br />
data samples (for example, there have been approximately forty significant, intermediate term<br />
turning points in the stock market over the last thirty years), there is a large danger of con-<br />
triving a law that fits the past perfectly, but is truly devoid of predictive power. This problem,<br />
called over-fit, will be discussed in the next section.<br />
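The scale of the explosion is easy to verify; the quick check below is our own arithmetic, stated in Python:<br />

```python
# Number of distinct non-empty subsets that can be formed from
# twenty candidate indicators: C(20,1) + C(20,2) + ... + C(20,20).
from math import comb

n = 20
subsets = sum(comb(n, k) for k in range(1, n + 1))  # equals 2**20 - 1
print(subsets)  # 1048575
```

Even combinations of just a few indicators at a time run into the thousands, dwarfing the roughly forty intermediate-term turning points available as historical samples.<br />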
Harry Truman said “the only thing new in the world is the history we don’t know.” Although this<br />
is an exaggeration, the former president knew that much that surprises us could be anticipated<br />
by better study of history. In the next section we will look at the problems that hamper the FMA’s<br />
search for predictive laws. Not only are they difficult to unearth, but the most useful ones are<br />
likely to be complex.<br />
PART III<br />
WHY FORECASTING FINANCIAL MARKETS IS SO DIFFICULT<br />
Why is the FMA’s task of developing accurate forcasting models so difficult, and why are the<br />
models likely to be so complex (i.e., contain numerous variables related in highly non-linear<br />
ways)? Several factors account for this.<br />
1) Human limits in configural thinking and pattern induction<br />
2) Efficiency of financial markets (The Efficient <strong>Market</strong> Hypothesis)<br />
3) Inherent complexity of the price setting mechanism in financial<br />
markets (Cybernetics and the Law of Requisite Variety)<br />
4) Limitations and pitfalls in traditional computerized data analysis<br />
A. Human Limits in Configural Thinking and Pattern Induction<br />
Cognitive Psychology, a field concerned with studying the brain as an information processing<br />
machine, has discovered limits in man's ability to effectively analyze numerous variables si-<br />
multaneously and detect relationships among them. Herbert Simon, the Nobel Laureate, called<br />
it the "principle of bounded rationality": "The capacity of the human mind for formulating and<br />
solving complex problems is very small compared with the size of the problems whose solution<br />
is required for objectively rational behavior in the real world - or even for a reasonable approx-<br />
imation to such objective rationality."{1}<br />
In addition, it has been discovered that under certain conditions, the mind tends to imagine<br />
patterns in data known to be random. Both discoveries imply that the FMA relying solely on<br />
brain power will experience great difficulty.<br />
An FMA performing multi-indicator analysis is engaging in "configural thinking," a known weak<br />
link in man’s intellectual capacities. A configural thought task is one that requires the mind to<br />
grasp the significance of a set of facts or variables, in a holistic fashion. In such problems, the<br />
key variables produce their effect in a highly interdependent way rather than acting as single<br />
agents. Therefore, to predict an outcome, the analyst must keep all the variables in mind at<br />
the same time rather than considering each one in isolation. We point out below that both the<br />
efficiency and the complexity of financial markets support the contention that the FMAs must<br />
engage in configural thinking to accurately forecast trends.<br />
FIGURE 1 - The Strained Brain of the F.M.A. (Data Over-Load - Confusion)<br />
Configural thinking limits make it difficult for the FMA to perform multi-indicator forecasting.<br />
Research indicates man’s abilities are strained when four or more variables need to be con-<br />
sidered configuratively.{2} Thus, if there is a valid predictive law (i.e., a multi-indicator model)<br />
involving four or more factors it will be difficult for FMAs to detect. The problem is compounded<br />
when the FMA is searching through a list of twenty plus indicators for valid laws composed of<br />
a subset. In such a case, even two or three indicator laws are likely to be missed among the<br />
several thousand possible combinations.<br />
There is yet another intellectual roadblock. The mind sometimes erroneously perceives pat-<br />
terns and relationships in data that don’t really exist. Research indicates that the mind is dis-<br />
pleased by chaos and will try to impose patterns on sensory data, even if there is none. In an<br />
experiment done at Stanford University, subjects were presented with patterns that were gen-<br />
erated randomly (a randomly moving light source over a photographic plate). They did not know<br />
the patterns were random. The subject was asked to classify each photo into one of two classes,<br />
based on perceived similarity of pattern. After each classification, the subject’s choice was de-<br />
scribed as correct or incorrect by the experimenter on a random basis. In other words, it was<br />
a totally random situation. Yet, each subject thought they had discovered two distinct pattern<br />
classes. Even after they were told of the hoax, the subjects maintained that the two pattern<br />
types were real.{3} Similar experiments with random sequences of colored lights showed sim-<br />
ilar results. Subjects insisted they could predict the next color to appear even after being told<br />
they had been observing a random process.<br />
B. Efficiency in the Financial <strong>Market</strong>s<br />
According to the Efficient <strong>Market</strong> Hypothesis (EMH), the financial markets are examples of nearly<br />
"efficient markets." The hypothesis implies the impossibility of forecasting price-trends such<br />
that above average returns, adjusted for risk, can be earned. This is because an efficient mar-<br />
ket is one in which current prices fully reflect (i.e., already take into account) all known and<br />
knowable information. Such “efficient” pricing results from many intelligent, rational, and well-<br />
informed participants trading in the market. As soon as some influencing information becomes<br />
known, or even knowable, the buying and selling of these knowledgeable participants will push<br />
prices quickly to a level that discounts that information. Attempting to analyze the market’s fun-<br />
damental and/or technical information to gain a predictive edge is, therefore, pointless.<br />
We agree that EMH has validity, up to a point. Forecasting market trends is extremely difficult,<br />
and one ought to be suspicious of simplistic approaches that promise to forecast accurately.<br />
Even if a simple system worked initially, its simplicity means it could be easily reproduced and<br />
followed by enough adherents to “dull” the edge it once had.<br />
But EMH has some possible loopholes, and significant opportunities may exist for the FMA<br />
with superior analytical tools. We contend the market is efficient only to the degree that people<br />
are able to properly analyze the information and understand its implications. Clearly, not all par-<br />
ticipants are equally able to analyze the masses of data. Good research is expensive so those<br />
with the most money have an advantage. But superior research funding is not enough. The<br />
analytic methodology must be superior, as well. To the extent that data analysis methods, man<br />
or machine-based, fail to capture and utilize all relevant information, the market remains in-<br />
efficient. This creates the opportunity for the analysts with the better data analysis methods.<br />
Where might these opportunities lie? We saw in the previous section that human intelligence<br />
has difficulty detecting the complex multi-indicator patterns in data. In addition we shall see<br />
many existing computerized data analysis methods have limitations that cause them to miss<br />
certain types of complex multi-variate information as well (Part IV, section D). Thus, undiscov-<br />
ered predictive laws (models) are likely to be based on a multitude of complexly interrelated<br />
indicators.<br />
In an efficient market, to the extent that a predictive law is easily discoverable, it is without value.<br />
The truly valuable information must always lie beyond the grasp of most market analysts and<br />
modeling methods.<br />
C. Complexity of the Price Setting Mechanism & The Law of Requisite Variety<br />
An additional reason for believing that only relatively complex forecasting models can be suc-<br />
cessful comes from the field of Cybernetics. Cybernetics is the interdisciplinary study of com-<br />
plex living and non-living, goal-seeking systems. It is concerned with the ways in which the<br />
system absorbs and utilizes information from its environment to achieve its goal. Norbert Wie-<br />
ner coined the term in titling his landmark work, Cybernetics: Or Control and Communication<br />
in the Animal and the Machine. Wiener was the first to notice that com-<br />
plex living and machine systems were alike in many ways, and their complexity was such that<br />
a new discipline was required to understand them better. Actually, the field draws on many ex-<br />
isting disciplines including mathematics, biology, economics, statistics, etc.<br />
The term “system” here is used in its most general sense: an arrangement of elements so re-<br />
lated as to form an organic, interrelated whole. The attributes common to complex systems are:<br />
1) Has a goal or purpose.<br />
2) Self-awareness of how well it is achieving the goal through<br />
feed-back, and the ability to make adjustments.<br />
3) Complex interrelationships between the various parts of the<br />
system, not easily modeled by traditional mathematical models.<br />
4) Marked dependence on receiving adequate information from its<br />
environment to achieve its goal.<br />
5) The viability of the system results from a cooperative<br />
interlocking of its various parts (synergy): the whole is<br />
much more than the sum of its parts.<br />
6) The system can adapt to changes in the environment even if<br />
such changes were not anticipated in its original design.<br />
One of the principles of Cybernetics that is of particular significance to FMAs is the Law of Req-<br />
uisite Variety.{5,6} The Law supports our twofold contention:<br />
1) That viable market prediction models must be complex.<br />
2) Simplistic approaches based on too few factors linked in<br />
overly simplistic ways are likely to fail.<br />
The Law states: Problems involving complex systems require complex solutions. More spe-<br />
cifically, attempts to control or predict the behavior of complex systems will be successful only<br />
to the degree the control or prediction mechanism approaches the complexity of the system<br />
itself. The Law of Requisite Variety implies that a successful market forecasting system must<br />
be based on a relatively large variety of information (different types of indicators about different<br />
aspects of the market). In addition, those indicators must be integrated in a way that reflects<br />
the interrelational complexity of the market price setting mechanism (i.e., highly non-linear re-<br />
lationships).<br />
The financial markets are examples of complex systems. The price setting mechanism is a<br />
complex sub-system. Its operational complexity cannot be captured by traditional methods of<br />
analysis and modeling. Neither the human mind nor traditional statistical modeling is adequate<br />
to the task of controlling or predicting the behavior of such systems.<br />
Of most interest to the FMA is the price setting mechanism. Its goal is to set a price where<br />
buyers and sellers are temporarily in balance. As new developments occur to change supply<br />
or demand, the price changes in an effort to bring offers and bids back into equilibrium. The<br />
many interacting influences that cause and describe price changes are so varied and<br />
complexly linked with each other that the process defies the descriptions offered by simplistic<br />
models.<br />
The implications of the Law of Requisite Variety are consistent with the implications of the Ef-<br />
ficient <strong>Market</strong> Theory, as well as the findings of Cognitive Psychology: successful forecasting<br />
models will be difficult to develop and complex.<br />
D. Limitations of Traditional Computerized Data Modeling<br />
It would seem that the computer holds the answer to the FMA’s problem. Used properly, the<br />
computer can assist greatly; used improperly, it produces its own set of problems.<br />
The most serious problem plaguing computerized data modeling is “overfit.” Overfitted models<br />
fit the historical data very well but give poor predictions when applied to new data. It results<br />
from taking too much freedom and using too much force in conforming the model to the his-<br />
torical data. A second problem is underfit and results from the preconceived notions underlying<br />
the given data analysis methodology. Such models will have sub-optimal power as well.<br />
The objective of sound data modeling should be to capture all available information in a given<br />
set of data, while at the same time rejecting “noise” (random effects) present. The result is a<br />
model that is neither underfit nor overfit.<br />
1. Overfitting the Model to Data<br />
This is the trap that most data analysts fall into whether they use eye/brain or computer. In an<br />
overzealous and misguided attempt to gain perfect predictive accuracy the analyst takes what-<br />
ever liberty necessary to force the model to fit the historical data. The power of the computer<br />
often seduces the analyst into this trap. It is simply too easy to keep revising a model to in-<br />
crease fit. This applies to models that are functional relations, such as regression, and rule-<br />
based trading systems, which are sets of conditions connected with various logical operators.<br />
With a finite amount of data, a perfectly fitted model can be derived if permitted to include enough<br />
complex rules, free parameters, etc. The model becomes a detailed description of the ana-<br />
lyzed data, including the random effects. Thus, fit may not be the best criterion, nor is it what<br />
we really want. The real objective is predictive power on new data (i.e., on samples that have<br />
not been analyzed in developing the model). Yet, most existing data analysis methods, whether<br />
by machine or by man, strive to fit the data.<br />
Let’s see how a well-intended analysis can lead to the absurdity of overfit. Consider the ex-<br />
ample of an FMA attempting to derive a rule-based model for signaling intermediate-term stock<br />
market bottoms. Because his favorite indicator is the PSSR ratio (public to specialists short<br />
sale ratio), he starts by studying it. After some study he notices a rather effective rule.<br />
If PSSR exceeds 0.60 then buy.<br />
Although this rule works fairly well, some false signals are given and some bottoms are missed.<br />
So the FMA revisits the data to develop a more refined model. After much additional study, the<br />
FMA is quite pleased at having produced a set of rules that give no false signals and call every<br />
bottom. The model that resulted is presented below:<br />
If PSSR exceeds 0.60<br />
or The Short Term Trading Index (TRIN) 10-day moving average has<br />
been over 1.20 within the last twelve days<br />
or T-Bill Rates have fallen below and have remained below a 13-week<br />
moving average<br />
or Mutual Fund Cash is greater than 9.0%<br />
Then Buy unless<br />
it is a leap year<br />
or the current Miss America is from a state south of the Mason<br />
Dixon Line and a Democratic president is in office or Jupiter<br />
is in the constellation Orion and the planet Pluto........<br />
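As a sketch only, the pre-refinement conditions of this rule set (before the leap-year and Miss America clauses) might be encoded as a simple predicate. The function name, dictionary keys, and the sample indicator readings below are our own hypothetical choices; the thresholds follow the text.

```python
# Hypothetical indicator readings for "today"; thresholds follow the text.
def buy_signal(ind):
    """Return True if any of the pre-refinement buy conditions is met."""
    return bool(
        ind["pssr"] > 0.60                    # public/specialist short sale ratio
        or ind["trin_10d_max_12d"] > 1.20     # peak of 10-day TRIN MA over last 12 days
        or ind["tbill_below_13wk_ma"]         # T-Bill rate below its 13-week MA
        or ind["mutual_fund_cash_pct"] > 9.0  # mutual fund cash level
    )

today = {"pssr": 0.45, "trin_10d_max_12d": 1.35,
         "tbill_below_13wk_ma": False, "mutual_fund_cash_pct": 7.2}
print(buy_signal(today))  # True: the TRIN condition fires
```

The point of the example is how easy it is to keep appending `or` clauses until the historical record is fit perfectly, which is exactly the trap described above.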
No doubt Jupiter’s position in Orion or Miss America’s home state are conditions that permit<br />
the fit to be perfect. But these factors are clearly randomly related to the behavior of the stock<br />
market. In fact, if fit is too good to a process known to contain at least some noise, one should<br />
be suspicious. For unless the movements of financial markets are completely free of random<br />
movement, and not even the most extreme opponent of the EMH would assert this, perfect fit<br />
should be impossible.<br />
Optimum fit (complexity) occurs when a model has captured the real information in a sample<br />
but not its noise. One way to achieve this is keeping some historical data set aside as an in-<br />
dependent sample for testing models generated on another portion of the data. This method<br />
is called cross-validation (see part 5, section E), and it seeks to impose the rigor of the scientific<br />
method on the process of data modeling.<br />
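The reserved-data idea can be sketched in a few lines. The indicator and return series below are random placeholders, and a single train/test split stands in for the reserved-data procedure; only the mechanics, not the data, are the point.

```python
# Sketch of cross-validation: fit a rule's threshold on one period, then
# test it on reserved data. The series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=400)                       # indicator values (placeholder)
future_ret = 0.3 * signal + rng.normal(size=400)    # returns with some real signal plus noise

train_x, test_x = signal[:300], signal[300:]        # reserve the last 100 samples
train_y, test_y = future_ret[:300], future_ret[300:]

# "Fit": choose the threshold that maximizes mean return in the training set
thresholds = np.linspace(-1.0, 0.5, 31)
best_t = max(thresholds,
             key=lambda t: train_y[train_x > t].mean() if (train_x > t).any() else -np.inf)

in_sample = train_y[train_x > best_t].mean()        # flattered by fitting
out_sample = test_y[test_x > best_t].mean()         # the honest estimate
print(round(in_sample, 3), round(out_sample, 3))
```

The out-of-sample figure, not the in-sample one, estimates predictive power on new data.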
Consider the following: An FMA notes that for the period 1965 to 1975, whenever the T-Bill rate<br />
falls below its five week moving average, the stock market is about to start an advance. This<br />
set of observations is the basis for generalizing a tentative rule-based prediction model.<br />
If T-Bill rates fall below their 5-week moving average, it is a bullish signal.<br />
According to the Scientific Method this statement should not yet be considered a valid law. It<br />
is only a hypothesis awaiting validation and that requires testing it on an independent set of<br />
data. If the hypothesis has validity in the data sample 1976 to 1984, then we have some reason<br />
to believe in its forecasting power, and may elevate it to the status of an empirical law.<br />
If additional rules are added in an attempt to improve accuracy (i.e., the model is made more<br />
complex), the revised model must also be tested on the reserved data set. If the more complex<br />
version is valid, the predictive accuracy on the reserved data will improve. However, when rules<br />
that are really descriptions of random effects are added (overfit), the test in reserved data will<br />
show a decline in performance. Thus, cross-validation provides the FMA with feedback that<br />
can help prevent overfit.<br />
2. Underfit<br />
Another problem plaguing computerized data modeling is underfit. It is the opposite of overfit.<br />
While overfitted models mistake random effects for true phenomena, underfitted models fail<br />
to capture all the real information in the data. Some analysis methods can produce models that<br />
suffer both effects at the same time.<br />
Many data modeling approaches assume that a specific type of model is appropriate prior to<br />
starting the analysis. For example, linear regression assumes the model is of the linear form.<br />
The equation below illustrates this simple model structure.<br />
Y = W1X1 + W2X2 + W3X3 + . . . + WnXn + Wn+1<br />
Figure 2 below depicts a linear regression model employing two predictors Xl and X2. The<br />
altitude of the surface is the dependent variable (Y) and can be thought of as the probability<br />
that a point on the plane below is a BUY DAY. The greater the altitude of the regression surface<br />
the more likely a BUY DAY. Notice that the regression plane is flat (i.e., linear). This shape is<br />
fixed by an assumption of the linear regression method. Flexibility is limited to the angle of as-<br />
cent along each axis. In fact, the fitting of a regression model is concerned with finding the<br />
best angles (i.e., values for the W’s) for the plane such that it most evenly “slices” through the<br />
sample data.<br />
FIGURE 2<br />
A Linear Regression Model<br />
[Figure: a flat regression surface, Y = W1X1 + W2X2 + W3, plotted over the X1-X2 plane]<br />
The regression surface of a linear model structure is restricted to a flat shape. Only the slope of the surface can<br />
be adjusted to best estimate the relationship between Y and the two predictor variables X1 and X2.<br />
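As an illustration of this fitting step, the W’s of a two-predictor linear model can be found by least squares. The data below are synthetic, and numpy’s `lstsq` stands in for whatever fitting routine an analyst might actually use.

```python
# Least-squares fit of the linear model Y = W1*X1 + W2*X2 + W3 to
# synthetic data generated with known weights.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 2.0 * x1 - 1.0 * x2 + 0.5 + rng.normal(scale=0.1, size=100)

# Design matrix: the final column of ones yields the intercept W3
A = np.column_stack([x1, x2, np.ones_like(x1)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w)  # close to the true weights [2.0, -1.0, 0.5]
```

The fit can only tilt the plane; no amount of data lets this model bend it, which is the underfit limitation discussed next.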
A linear model structure implies that all variables are of the first degree and are combined by<br />
an additive operator. Obvious non-linear effects can be dealt with to a limited degree by certain<br />
univariate transformations and explicit variable interaction terms. All this requires deep insight<br />
on the part of the analyst.<br />
In the diagram below one can get a visual sense of the limitation imposed by assuming a linear<br />
model. Assume that the true underlying phenomenon is represented by the hilly surface in figure<br />
3. Many important effects are “brushed aside” by forcing the flat surface through the more<br />
complex, non-linear phenomenon. Obviously, predictive accuracy will be less than optimal.<br />
FIGURE 3<br />
A Linear Model “Trying” to Represent a Non-Linear Phenomenon<br />
[Figure: a hilly non-linear surface with a flat linear regression surface forced through it]<br />
The linear model (the plane) can’t begin to capture the true non-linear relationship between Y (dependent vari-<br />
able) and the two predictor variables (X1 and X2). However, advanced non-linear methods can if given enough<br />
sample data.<br />
One of the main assumptions of linear models is that the predictor variables (i.e., the indicators)<br />
are independently related to the dependent variable (i.e., the market trend). Our suspicion, based<br />
on cybernetic considerations, is that financial markets are examples of complex systems. The<br />
behavior of such systems is generated by many complexly interrelated variables. Thus, data<br />
modeling procedures that pre-assume linear relationships, or any other relationship, may fail to<br />
capture much useful information. Accurate modeling of complex systems requires a meth-<br />
odology that is flexible enough to capture important relationships regardless of their com-<br />
plexity (i.e., shape).<br />
PART IV<br />
CURRENT DATA MODELING AND FORECASTING APPROACHES<br />
In this section we will review a number of approaches FMAs are using to analyze data, gen-<br />
erate models, and do forecasting. Our purpose is to consider some potential weaknesses in light<br />
of the difficulties outlined in Part III.<br />
A. Subjective Multi-Indicator Analysis<br />
In this approach the computer plays a relatively minor role. It is used to calculate and display<br />
the indicators, but it is not called upon extensively as a historical research tool. The FMA eval-<br />
uates the current indicator levels in light of past experience. Forecasting is not based on rig-<br />
orously researched statistical models or specific rule sets. Rather, rules of thumb and subjective<br />
criteria derived from examination of historical data are the basis for forecasting. The analyst<br />
relies on mental power to weigh and evaluate the various indicators to arrive at a forecast. A<br />
virtue of this approach is that it can attempt to deduce the consequences of unique or infre-<br />
quent events not easily incorporated into historical data models.<br />
Given human limitations in configural reasoning, the experience base of the analyst is likely to<br />
be missing multi-indicator relationships with predictive power. Although the mind can deal with<br />
as many as three variables in a configural sense, significant three-variable laws are still likely<br />
to elude the FMA. This is so because sifting out a two or three indicator rule (model) from a<br />
larger number of potential indicators will involve considering many more than three variables<br />
at a time. Thus, the FMA will have great difficulty discovering laws that are obscure enough to<br />
circumvent the efficiency of the market or complex enough to comply with the Law of Requisite<br />
Variety.<br />
B. Computer-Based Trend Following System<br />
Some FMAs, notably managers of several large commodity trading funds, have chosen the<br />
objectivity of the computerized price-trend-following systems. Trend-following models use var-<br />
ious kinds of price-based indicators such as moving averages, recent price ranges (channels),<br />
oscillators, etc. An example might be: if the price exceeds a 20-day moving average by a cer-<br />
tain percentage, an up-trend criterion is met and a long position is taken. Trend persistence is<br />
the underlying assumption.<br />
The computer has been used extensively in the development and optimization of such sys-<br />
tems. Because they are relatively simple to implement and test, thousands of variations on the<br />
basic trend-following theme have been investigated.<br />
Although such systems have produced significant profits when markets are in well-defined trends,<br />
they generate many false signals and cause severe capital erosion during choppy or trendless<br />
markets. Thus, the investment returns show significant volatility (risk). Undercapitalized or un-<br />
disciplined traders tend to abandon the system before trends and profits materialize.<br />
The low signal reliability and attendant risks are not surprising. First, most trend-following sys-<br />
tems tend to be highly similar (i.e., buy on strength and sell on weakness), and there are many<br />
of them. Above average risk-adjusted returns in an efficient market require a degree of unique-<br />
ness. But unique they are not. Second, because such strategies are based only on historical<br />
price data analyzed in simple ways, they are not in conformity with the Law of Requisite Variety.<br />
Third, many trend systems are overfitted. This results from efforts to improve the poor signal<br />
accuracy of the basic trend-following system (usually correct 40 to 50 percent of the time). Ov-<br />
erfit results when numerous qualifying rules are added and many free parameters are opti-<br />
mized for a given set of historical data. Validation on independent data is rarely done. Often a<br />
“newly” developed trend-following system shows excellent results in historical simulation, but<br />
produces actual results that are much worse - a sure-fire symptom of overfit. After some hard<br />
knocks in the real world, many trend system developers are found back at the “drawing board,”<br />
re-optimizing and refitting their systems.<br />
C. Optimization of Rule Based Models<br />
This approach makes heavy use of the computer, but frequently, it is a misuse. Optimization<br />
refers to the progressive development and refinement of a set of trading rules by repeated re-<br />
visitations to the same body of data. After each pass, rules are added or changed to improve<br />
signal accuracy. Of course, results seem to get better after each pass, but this is simply be-<br />
cause the model is being forced to fit the data closer and closer. Overfit is the inevitable result<br />
unless precautionary measures are built into the optimization process. Predictions in new data<br />
are usually quite disappointing.<br />
A helpful measure would be to reserve some data for testing each newly revised version of<br />
the model. So long as signal accuracy in the reserved data improves, the modifications make<br />
sense. When the model becomes overfit, the optimization process would be stopped.<br />
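The stopping rule just described might be sketched as a loop over progressively refined versions of a model. The candidate models and their reserved-data accuracies below are hypothetical stand-ins, not from the article.

```python
# Sketch of holdout-based early stopping: keep a revision only while it
# improves accuracy on the reserved data.
def optimize(candidate_models, accuracy_on_reserved):
    """Accept progressively refined models until reserved-data accuracy declines."""
    best, best_score = None, float("-inf")
    for model in candidate_models:      # ordered from simplest to most complex
        score = accuracy_on_reserved(model)
        if score <= best_score:         # reserved-data accuracy fell: overfit begins
            break
        best, best_score = model, score
    return best, best_score

# Toy demonstration: accuracy rises, then falls as overfitting sets in
scores = {"v1": 0.52, "v2": 0.58, "v3": 0.61, "v4": 0.55, "v5": 0.50}
model, score = optimize(list(scores), scores.get)
print(model, score)  # stops at v3 0.61
```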
Another potential weakness is the FMA’s reliance on his configural thinking to propose rule sets<br />
to test. The computer is used only to test what the analyst hypothesizes. Configural thinking<br />
limits prevent efficient searching of the large number of possible rule sets when the number<br />
of indicators exceeds just a few.<br />
D. Indicator Voting<br />
This is a common approach to synthesizing a forecast from many indicators. Each indicator is<br />
graded as to its current bullish, bearish, or neutral implications. Then bullish and bearish votes<br />
are added algebraically, creating a composite score. A bullish indicator is given a rating of +1,<br />
and a bearish one is given a -1. If seven are graded as bullish, and ten are graded as bearish<br />
(+7 - 10 = -3), a bearish prediction is given.<br />
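The voting arithmetic is trivially expressed in code; the vote counts are taken from the example above, and the function name is our own.

```python
# Indicator voting: +1 bullish, -1 bearish, 0 neutral; the algebraic sum
# is the composite score.
def composite(votes):
    return sum(votes)

votes = [+1] * 7 + [-1] * 10   # seven bullish, ten bearish indicators
score = composite(votes)
print(score)  # -3: a bearish composite
```

Note that the sum is a purely linear combination with equal weights, which is exactly the weakness discussed next.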
This approach is subject to a number of weaknesses. First, combining indicators by addition<br />
or subtraction creates a linear model and ignores the possibility of more complex non-linear<br />
indicator relationships. The Law of Requisite Variety tells us to expect complex relationships<br />
among variables when modeling a complex system such as a financial market. Second, the<br />
Efficient <strong>Market</strong> Hypothesis implies that the voting model is unlikely to achieve high levels of<br />
predictive power because many analysts can create such simplistic composite models. An ex-<br />
ception would be if an FMA had some powerful indicators not known to others. Third, it as-<br />
sumes that each indicator has equivalent, relevant, and non-redundant predictive information.<br />
Summing two indicators that are relevant but redundant amounts to double counting. Including<br />
irrelevant indicators adds noise that can dilute or destroy the information of good indicators.<br />
On the other hand if the FMA attempts the task of selection and weighing, he falls prey to the<br />
mind’s limited configural thinking abilities.<br />
E. Multiple Discriminant and Multiple Regression Models<br />
By far the most intelligent use of the computer by FMAs has been the application of computer-<br />
based multi-variate modeling methods. Discriminant analysis produces models that distinguish<br />
one class of items from another such as BUY-DAYS and NON-BUY-DAYS. Regression anal-<br />
ysis produces models that estimate a continuous variable (for example, the percent change in<br />
the Standard and Poor’s 500 over the next thirty days). The mathematical basis of both meth-<br />
ods is quite similar.<br />
These multi-variate approaches create models that are weighted combinations of variables (in-<br />
dicators) that best classify or predict the variable of interest (the dependent variable). The<br />
weights are determined by the<br />
computer so that the most important variables get the most weight. In addition, careful atten-<br />
tion is paid to avoid including variables that are redundant. One version called step-wise<br />
regression (discriminant) builds a model in steps, searching for variables that increase the fit<br />
of the model. This approach starts with a relatively large number of indicators and selects the<br />
indicator with highest linear correlation to the dependent variable from them. The second var-<br />
iable added to the model is the one that in conjunction with the first provides the biggest gain<br />
in correlation. The process continues as long as each new variable is significant according to the<br />
selection criteria used to build the model. At least two organizations have produced models<br />
with this procedure, and real time predictions (ex-ante) since 1975 appear to be good for pre-<br />
dicting trends over time frames as short as three months.{7, 8}<br />
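A minimal sketch of the forward step-wise procedure follows, assuming synthetic data and a simple R-squared gain criterion; the 0.01 cutoff is our own assumption, not taken from the cited models.

```python
# Forward step-wise selection: start with the indicator best correlated
# with the dependent variable, then greedily add whichever remaining
# variable most improves the fit.
import numpy as np

def r2(X, y):
    """Fraction of variance explained by an ordinary least-squares fit."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                 # five candidate indicators
y = 1.5 * X[:, 1] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=200)

selected, remaining = [], list(range(5))
min_gain = 0.01                               # selection criterion (our assumption)
while remaining:
    prev = r2(X[:, selected], y) if selected else 0.0
    gains = {j: r2(X[:, selected + [j]], y) - prev for j in remaining}
    best = max(gains, key=gains.get)
    if gains[best] < min_gain:                # no significant improvement: stop
        break
    selected.append(best)
    remaining.remove(best)
print(selected)  # the informative indicators (1 and 3) are chosen first
```

Greedy selection keeps the model small, but like any regression fit it still assumes the linear form discussed below.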
Despite the virtues, these modeling approaches rely on a number of assumptions that are likely<br />
to be overly simplistic and restrictive for financial market predictive modeling. Thus, complex<br />
relationships with significant forecasting power may get overlooked.<br />
Both regression and discriminant modeling attempt to fit an equation to data. A potential weak-<br />
ness in any methodology that attempts to fit equations to data is the assumption that the cor-<br />
rect structural form of the equation (e.g., linear, quadratic, cubic, etc.) is known by the analyst<br />
prior to analyzing the data. In the sciences, there is often good theory to indicate the proper<br />
form. However, when complex, poorly understood problems are being analyzed, this is often<br />
not the case. In such cases, the analyst often assumes a linear structure and hopes that non-<br />
linear effects can be treated by transforming some variables, and/or including cross product<br />
terms. However, the Law of Requisite Variety tells us to expect an almost unlimited variety of<br />
complex relational forms. Thus, the analyst is on the horns of a dilemma. Given the known<br />
limitations of human intellect, it is unreasonable to expect an analyst to have sufficient insight<br />
to choose a correct model equation for a complex system. Yet, these modeling methods re-<br />
quire the analyst to choose. The choice is usually convenient but too simple.<br />
Any data modeling procedure based on a fixed and pre-assumed form is likely to miss im-<br />
portant but complex relationships when analyzing financial market data. The Efficient <strong>Market</strong><br />
Hypothesis implies valuable forecasting information may lie buried among complex relation-<br />
ships, for that is where most analysts are unable to look.<br />
While underfitted with respect to structure, traditional regression models tend to be overfitted<br />
with respect to the number of variables they include. Most practitioners do not reserve data for<br />
testing the predictive power of the fitted model. Thus, there is a tendency to allow too many<br />
variables into the model relative to the amount of sample data. By including enough predictor<br />
variables relative to the number of samples analyzed, the model will fit very close to the data<br />
even if the assumed form is linear. Each time a new variable is added, the space within which<br />
the samples are projected increases by one dimension. Each additional dimension gives the<br />
hyper-plane of the model a new independent direction in which to angle itself. This permits ever<br />
greater degrees of fit, though not necessarily greater degrees of predictive power.<br />
F. Towards a More Robust Methodology<br />
In Part III, we outlined a number of difficult issues that must be confronted when developing<br />
financial market predictive models from historical data. In this section we have considered sev-<br />
eral methodologies in current use and pointed out how they fail to adequately deal with one or<br />
more of the difficulties.<br />
In light of these considerations, we contend that robust data analysis and modeling methods<br />
should meet the following objectives.<br />
1) Ability to detect subtle information that has likely been<br />
ignored by most FMAs. More specifically, predictive laws<br />
that involve three or more complexly interrelated indicators.<br />
2) Ability to detect highly non-linear relationships without<br />
having to specify those relations in advance of the analysis.<br />
In other words, the analysis discovers the form of the model<br />
expressed in the data.<br />
3) Ability to deal with large numbers of candidate indicators,<br />
yet perform the analysis in a reasonable amount of time, by<br />
utilizing intelligent search methods.<br />
4) Ability to extract maximum information from data without<br />
committing the sin of overfitting. This will require some<br />
type of feedback that lets the method know when the model<br />
is being forced to fit “noise” rather than information.<br />
5) No need to assume the correct statistical distribution is known.<br />
In the next section we outline the AI/PR data analysis methodology as one possible approach<br />
to meeting these objectives.<br />
PART V<br />
ARTIFICIAL INTELLIGENCE / PATTERN RECOGNITION<br />
A. Vector Spaces to Represent Patterns<br />
A pattern is a set of measurements that describe a given sample. Each measurement quan-<br />
tifies one attribute of the sample. A sample could be a day in the history of the stock market,<br />
and its attributes could be levels of various indicators on that day. A class of samples of interest<br />
to the FMA is all days that were low points prior to the inception of an uptrend. AI/PR can<br />
help us determine if there is an indicator pattern common to that class.<br />
In order for a computer to perform AI/PR, the patterns must be represented in a way that can<br />
be utilized by a digital computer. A common way to represent a sample’s pattern is by the lo-<br />
cation of a point in a multi-dimensional grid or space. Such a space is known as a vector space<br />
or attribute space. Each dimension or axis of the space represents one measurable attribute<br />
of the sample.<br />
The simplest space, or grid, is composed of one dimension (1-D) and is exemplified by a straight<br />
line. Any point in a 1-D space can be uniquely specified by a single number representing its<br />
distance from the origin (0 value) of the space. If a sample is characterized by only a single at-<br />
tribute such as “height,” a 1-D space suffices. See figure 4. However, if we wish to display<br />
more information about a sample, a higher dimensional space is required. A two dimensional<br />
(2-D) space or grid is exemplified by a piece of graph paper. To uniquely specify a location in<br />
a 2-D space requires two numbers representing distances along two mutually perpendicular<br />
axes. See figure 5. A three dimensional (3-D) space is required to display samples charac-<br />
terized by three measurable attributes. See figure 6. Though we can’t visualize them, hyper-<br />
space grids (i.e., spaces containing more than three dimensions) are used when available<br />
samples are described by many attributes. Readers may recognize that an attribute space is<br />
nothing more than a Euclidean space of one or more dimensions, with each axis mutually<br />
perpendicular to all others.<br />
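In code, a sample’s pattern is simply a tuple of coordinates, one per attribute, and similarity is distance in the space. The specific indicators and their values below are hypothetical.

```python
# Each market day as a point in a 3-D indicator (attribute) space;
# the indicator choices and values are hypothetical.
import math

day_a = (0.62, 1.05, 8.4)   # (PSSR, 10-day TRIN MA, mutual fund cash %)
day_b = (0.60, 1.10, 8.1)
day_c = (0.20, 0.70, 4.0)

def distance(p, q):
    """Euclidean distance; in pattern recognition, proximity = similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance(day_a, day_b) < distance(day_a, day_c))  # True: a and b are similar days
```

The same `distance` works unchanged for 20 attributes as for 3, which is why the computer handles hyperspaces so easily.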
FIGURE 4<br />
A One Dimensional Attribute Space<br />
[Figure: a single height axis with the author’s pattern plotted as a point at 5’8”]<br />
The location of the point at 5’8” indicates the author’s height. The point is a 1-D pattern.<br />
FIGURE 5<br />
A Two Dimensional Attribute Space<br />
[Figure: perpendicular height and weight axes with the author’s pattern plotted as a single point]<br />
The location of the point in 2-D space gives additional information about the author. The 2-D pattern specifies both<br />
his height and weight. Note: the axes are perpendicular, but don’t appear so due to the perspective.<br />
FIGURE 6<br />
A Three Dimensional Attribute Space<br />
The 3-D pattern shown indicates the author’s height, weight, and age by its location in the 3-D space. Note: the<br />
three axes are mutually perpendicular. People can’t visualize spaces composed of more than three dimensions,<br />
but computers can perform calculations for spaces of N dimensions.<br />
It is interesting to note a possible connection between our inability to visualize spaces con-<br />
taining more than three dimensions and the number of facts that we can effectively process in<br />
configural thinking (i.e., a maximum of three or four). Computers, on the other hand, can easily<br />
construct vector spaces containing many dimensions. A 20-D space is no more difficult than<br />
a 3-D space. This makes it a simple matter for a computer to detect degrees of similarity among<br />
samples characterized by numerous attributes. Thus, the question “what attributes are really<br />
common to BUY DAYS” becomes answerable.<br />
B. Discrimination<br />
Discrimination in this context refers to the task of classifying a sample whose true class is un-<br />
certain, but whose attributes are known. Thus, an enhancement in the ability to discriminate<br />
is equivalent to reduction in uncertainty. FMAs don’t know with certainty if today is a BUY DAY<br />
(i.e., an uptrend is about to occur), but levels of various technical indicators are known with<br />
certainty. We wish to reduce our uncertainty about today’s class (BUY DAY or NON-BUY DAY),<br />
based on those known pieces of indicator information.<br />
In vector space terms, discrimination is possible when samples from each class cluster in dis-<br />
tinct regions of the attribute space. In other words, the clump(s) of BUY DAYS is far removed<br />
from the clump(s) of NON-BUY DAYS. Obviously, for this to occur, the attribute axes must con-<br />
tain class separating information. In other words, in a good attribute space “birds of a feather<br />
flock together, but not with others.” When attempting to classify a sample of unknown class,<br />
its known attributes locate a unique point in vector space. If the immediate vicinity is dominated<br />
by known BUY DAY samples, then by the fundamental axiom of pattern recognition, the “mys-<br />
tery” sample is inferred to be a BUY DAY as well. For in pattern recognition, vector space prox-<br />
imity is equivalent to class similarity. Since there can be varying degrees of proximity, the<br />
classification is stated by a probability rather than an absolute yes or no.<br />
This concept is illustrated in figures 7, 8, and 9. Consider the “1-D height space.”<br />
FIGURE 7<br />
Jockeys and Basketball Players in Height Space<br />
[Figure: along a 1-D height axis, a cluster of “o” (jockey) samples at low heights and a cluster of “x” (basketball player) samples at greater heights, with an unknown sample at 6’5”]<br />
Samples from the two classes are located in 1-D height space. The classification power of this attribute is evidenced<br />
by the fact that jockeys clump in a height range that is very distinct from the region occupied by the basketball<br />
player cluster. Classification of the unknown with a height of 6’5” is easy as it lands in a region dominated<br />
by basketball players. Computers use a measure of distance to determine which cluster is closer to the unknown.<br />
Figure 7 shows samples of known jockeys (symbol “0”) and known basketball players (symbol<br />
“x”) located in height space. The two classes tend to cluster in distinct regions of attribute space.<br />
This is highly desirable as it allows easy classification of a sample of uncertain class. This is<br />
equivalent to saying that the attribute “height” contains extremely useful class separating in-<br />
formation. For most problems, one attribute is not sufficient.<br />
Thus, the task of the AI/PR program is to determine which combination of attributes displays the<br />
best class separation. The AVPR process starts with an ample supply of samples whose class<br />
is already known. Each time a new combination of attributes (i.e., vector space) is evaluated,<br />
the program can measure how well the goal of class separation is being achieved.<br />
Now consider a slightly more difficult problem, one requiring two attributes. We wish to dis-<br />
criminate between football players and basketball players. See figure 8 below.<br />
[Figure: a single histogram along the 1-D height axis in which football players ("O") and basketball players ("X") overlap, with the unknown landing in the overlap region]<br />
O = FOOTBALL PLAYER X = BASKETBALL PLAYER<br />
FIGURE 8<br />
Basketball Players and Football Players in Height Space<br />
Height, by itself, is not a sufficiently informative attribute to separate the two classes, as both tend to occupy similar height ranges. Classification of the unknown is uncertain, as samples from both classes are found in its immediate vicinity. More information is needed.<br />
Since both classes tend to be tall, we get an ambiguous situation in “height” space. The two<br />
clusters (classes) have a degree of overlap. If given the problem of classifying an unknown<br />
whose height is 6’5”, no definite conclusion could be reached because samples from both classes<br />
are found in that region. The attribute height, by itself, is not a sufficiently informative indicator<br />
to produce a good pattern recognition model. It’s a more complex problem than the basketball<br />
versus jockey problem, thus, requiring more information. However, when we cleverly add weight<br />
to the space and project our samples into the 2-D height/weight space, we get good class sep-<br />
aration. See Figure 9 below.<br />
[Figure: in 2-D height/weight space (weight axis to 300 lbs.), football players ("O") and basketball players ("X") form two distinct clusters; the unknown falls inside the basketball cluster]<br />
O = FOOTBALL PLAYER X = BASKETBALL PLAYER<br />
FIGURE 9<br />
Height and Weight Separate Classes<br />
The samples of football players and basketball players from the prior illustration are shown in 2-D height/weight space. The additional information provided by weight causes the two classes to separate, permitting classification of the unknown. The pattern height = 6'5", weight = 210 lbs., locates a point in a region dominated by basketball players. Conclusion: the unknown plays basketball.<br />
With the addition of “weight” as an indicator, a good AI/PR program would sense class sep-<br />
aration had been achieved and cease the pattern induction process. With this model in hand,<br />
classification of a person of unknown class, but known height and weight (height = 6’5”,<br />
weight = 210 lbs.) is possible. The pattern of the unknown is clearly that of a basketball player,<br />
as its 2-D pattern (i.e., its coordinates) is a point in height/weight space that “lands” squarely in<br />
the basketball player cluster.<br />
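The proximity logic just described can be sketched in a few lines of code. The sample heights, weights, and the nearest-centroid decision rule below are illustrative assumptions for this sketch, not the article's actual method:

```python
import math

# Illustrative training samples: (height in inches, weight in lbs.)
basketball_players = [(78, 205), (80, 215), (76, 200), (82, 220)]
football_players = [(74, 245), (73, 260), (75, 250), (72, 240)]

def centroid(samples):
    """Mean point of a cluster in 2-D height/weight space."""
    n = len(samples)
    return (sum(h for h, w in samples) / n, sum(w for h, w in samples) / n)

def classify(unknown, clusters):
    """Assign the unknown to the class whose cluster centroid is nearest:
    vector-space proximity is treated as class similarity."""
    best_class, best_dist = None, float("inf")
    for label, samples in clusters.items():
        ch, cw = centroid(samples)
        dist = math.hypot(unknown[0] - ch, unknown[1] - cw)
        if dist < best_dist:
            best_class, best_dist = label, dist
    return best_class

clusters = {"basketball": basketball_players, "football": football_players}
print(classify((77, 210), clusters))  # 6'5", 210 lbs. -> basketball
```

A fuller treatment would report a probability based on the mix of nearby samples rather than a hard assignment, as the text notes.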
Incidentally, it’s fortunate that we were smart enough to select weight as the second attribute.<br />
If we had been foolish enough to pick zodiac sign or hair color, we would not have gotten such<br />
excellent class separation. Or worse would have been our selecting an attribute that due to<br />
random effects worked on this particular sample but had no general validity. For example, if<br />
all football player samples were from New York, and all basketball players were from Los Angeles<br />
(due to poor sampling), a home-town attribute would cause separation, but clearly for the wrong<br />
reasons.<br />
Real world pattern recognition problems are rarely this clear due to inherent complexity and<br />
high levels of noise. Often the difference in information content between the best and worst<br />
attribute spaces is small. Sharply defined clusters are never seen. However, the sensitive<br />
measurements of AI/PR software, combined with the power of the computer, can detect useful<br />
information in complex phenomena contaminated by high levels of randomness. Some in-<br />
teresting problems that have been successfully approached with AI/PR include disease di-<br />
agnosis, searching for oil, weather prediction, economic forecasting, and financial market<br />
prediction.<br />
C. Steps in Building a Model<br />
The process of building a model with AI/PR takes place in a number of steps. They are:<br />
1) Defining the two classes<br />
2) Proposing candidate indicators<br />
3) Division of historical data into training and testing sets<br />
4) Reducing the number of candidate indicators<br />
5) Model Construction<br />
6) Ex-Ante Testing<br />
7) Adaptation<br />
1) Defining the Two Classes: Top Days and Bottom Days<br />
The first step is to identify for the AI/PR programs days in the past that belong to the two classes<br />
of interest. For this example, we have chosen to construct a model that will classify each day<br />
as a TOP DAY (i.e., we are at an intermediate term top) or a BOTTOM DAY (i.e., we are at an<br />
intermediate term bottom). Let’s define the criterion for an intermediate term trend as a price<br />
move of at least 15%.<br />
With hindsight, using eye or computer, we go back over our historical database and identify all<br />
days that were highs in intermediate moves (e.g., trends in which prices moved at least 15%),<br />
and label them TOP DAYS. Then we do the same for all lows and label them BOTTOM DAYS.<br />
We now have a two class pattern recognition problem. See figure 10 below.<br />
[Figure: S&P 500 price chart over time, with intermediate tops A, C, E, G marked "O" and intermediate bottoms B, D, F, H marked "X"]<br />
O = TOP DAY X = BOTTOM DAY<br />
FIGURE 10<br />
Historical Samples for a Top/Bottom Discrimination Model<br />
The historical sample data is the raw material for the development of an AI/PR model. Samples A, C, E, and G are known TOP days; B, D, F, and H are known BOTTOM days. Indicator levels for those days are known as well. Discovering an indicator space that will cause the two classes to truly separate is the task given to AI/PR.<br />
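The hindsight labeling of step 1 can be sketched as a swing-point scan. The price series and the exact way the 15% reversal threshold is applied here are assumptions for illustration:

```python
def label_turning_points(prices, threshold=0.15):
    """Label indices of intermediate-term TOP and BOTTOM days: an extreme
    qualifies once price has reversed against it by at least `threshold`."""
    labels = {}                    # index -> "TOP" or "BOTTOM"
    ext_i, ext_p = 0, prices[0]    # running extreme since the last pivot
    direction = None               # trend leading INTO the current extreme
    for i, p in enumerate(prices[1:], start=1):
        if direction in (None, "up"):
            if p > ext_p:
                ext_i, ext_p = i, p
            elif p <= ext_p * (1 - threshold):
                labels[ext_i] = "TOP"      # the extreme was a confirmed top
                direction = "down"
                ext_i, ext_p = i, p
        if direction == "down":
            if p < ext_p:
                ext_i, ext_p = i, p
            elif p >= ext_p * (1 + threshold):
                labels[ext_i] = "BOTTOM"   # the extreme was a confirmed bottom
                direction = "up"
                ext_i, ext_p = i, p
    return labels

print(label_turning_points([100, 110, 120, 100, 90, 110, 120]))
# {2: 'TOP', 4: 'BOTTOM'}
```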
2) Proposing a Set of Candidate Indicators<br />
This step is performed by the FMA. Because a computer cannot be creative, a knowledgeable<br />
human must give it a list of candidate indicators thought to contain useful information. The FMA’s<br />
experience and intelligence is crucial here, and the success of the analysis rests on a good<br />
starting set of indicators. A list of several hundred can easily be created as each variation of<br />
an indicator is considered a separate candidate. For example, the five-week change in T-Bill<br />
rates is one; a ten-week change is a second. AVPR will determine which is better for a given<br />
problem.<br />
The first step in generating a set of candidate indicators is to identify raw data series thought<br />
to contain some useful information. In the case of the stock market, the list may include market<br />
price data, advance-decline data, total volume, short sales, volume, interest rates on T-Bills,<br />
odd-lot volume, put-call statistics, etc. However, raw data is generally not useful in building AI/<br />
PR models. It must be transformed in various ways to amplify its information content.<br />
What are commonly known as technical “indicators” are created by transforming raw market<br />
data with various mathematical operations, such as moving averages, ratios, differences, etc.<br />
Most indicator transformations attempt to “normalize” the raw series in some way. Normali-<br />
zation can mean removal of measurement units, removal of a trend (i.e., stabilizing its mean),<br />
stabilizing its variance, etc. For example, the PSSR (Public to Specialists Short Sales Ratio)<br />
indicator is created by taking the ratio of the raw public shorting volume to specialists shorting<br />
volume, and then smoothing the figure with a moving average of 4 to 10 weeks. Raw advance/<br />
decline (A/D) data can be transformed into the well-known A/D cumulative line, or a 10-day net<br />
A/D oscillator.<br />
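The transformations named above (a smoothed ratio for the PSSR, the cumulative A/D line, and a 10-day net A/D oscillator) can be sketched as follows; the window lengths and series names are illustrative choices:

```python
def moving_average(series, n):
    """Simple n-period moving average (defined from index n-1 onward)."""
    return [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]

def pssr(public_shorts, specialist_shorts, n=4):
    """Public-to-Specialist Short Sales Ratio, smoothed with an n-week MA."""
    ratio = [p / s for p, s in zip(public_shorts, specialist_shorts)]
    return moving_average(ratio, n)

def ad_line(advances, declines):
    """Cumulative advance/decline line: running total of daily net advances."""
    line, total = [], 0
    for a, d in zip(advances, declines):
        total += a - d
        line.append(total)
    return line

def ad_oscillator(advances, declines, n=10):
    """n-day moving average of daily net advances (A - D)."""
    net = [a - d for a, d in zip(advances, declines)]
    return moving_average(net, n)

print(ad_line([10, 5, 8], [4, 9, 2]))  # [6, 2, 8]
```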
The better the indicators, the better the final model. Unless the initial set of candidate indicators<br />
contains some useful information, the best AI/PR program will fail to produce a good predictive<br />
model. The human is very much in the “loop.”<br />
3) Division of Historical Data Into Training and Testing Sets<br />
A fundamental aspect of the scientific method is the requirement that a hypothesis be validated<br />
on data other than that which gave rise to it. In this spirit, the AI/PR method holds aside a por-<br />
tion of the historical sample data. Usually the data is divided into two sets, each containing half<br />
of the samples. Each time a new indicator combination is examined, the first portion, called<br />
the “training set,” is analyzed for potential class separating power. If some is evident, the sec-<br />
ond portion called the “testing set” is used to validate or invalidate the suspected classification<br />
utility of the space. This procedure is called cross-validation. It is explained and illustrated later<br />
on.<br />
Let’s assume that our entire historical database consists of one hundred samples: fifty BOT-<br />
TOM days and fifty TOP days. Half of the BOTTOM samples and half of the TOP samples would<br />
be put into the training set. The remaining samples would be placed in the testing set.<br />
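The even division of the hundred samples into training and testing sets might look like the sketch below; the shuffling and the stand-in sample lists are assumptions for illustration:

```python
import random

def split_samples(tops, bottoms, seed=0):
    """Divide each class evenly between a training set and a testing set,
    so both sets keep the same class balance."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    train, test = [], []
    for label, samples in (("TOP", tops), ("BOTTOM", bottoms)):
        shuffled = samples[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        train += [(s, label) for s in shuffled[:half]]
        test += [(s, label) for s in shuffled[half:]]
    return train, test

tops = list(range(50))      # stand-ins for 50 TOP-day samples
bottoms = list(range(50))   # stand-ins for 50 BOTTOM-day samples
train, test = split_samples(tops, bottoms)
print(len(train), len(test))  # 50 50
```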
If the training set shows PSSR to be a good indicator (i.e., bottoms and tops segregate in PSSR<br />
1-D space), the AI/PR program will then check the testing set. If the indicator shows similar<br />
discrimination power in the testing set, the program will retain it for testing in combination with<br />
other indicators. Cross-validation is an extremely important defense against overfit when the<br />
number of possible indicator combinations is large. If a candidate set of indicators is large (twenty<br />
or more), the number of possible indicator combinations relative to the number of samples makes<br />
it highly likely that false discrimination power will show up in many instances. This is particularly<br />
so in high dimension combinations (i.e., four or more indicators). Figures 12a, 12b, and 13a,<br />
13b illustrate the cross validation concept.<br />
4) Reduction and Compression of Candidate Indicator List<br />
Because complex processes can involve so many potential indicators (candidate set) and AI/<br />
PR is compute-intensive, there is a need to reduce the size of the initial set prior to model build-<br />
ing. There are two things that can be done.<br />
First, irrelevant and redundant indicators can be eliminated from further consideration. This<br />
step is not simple, as some indicators that appear without value on a stand-alone basis are<br />
extremely valuable in multi-indicator combination. Thus, there are trade-offs to be made.<br />
Second, the information of several indicators can be compressed (i.e., projected) onto a single<br />
new indicator. Typically, this is accomplished by rotating the axes of a multi-indicator space into<br />
a more favorable position, thus, enhancing class separation, indicator independence, or var-<br />
iance explanation.<br />
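Rotating a multi-indicator space onto a more favorable axis is, in its simplest 2-D form, a principal-axis projection. This sketch assumes that is the kind of compression meant, and reduces two indicators to one while preserving most of their variance:

```python
import math

def compress_two_indicators(xs, ys):
    """Rotate a 2-D indicator space onto its principal axis and return the
    1-D projections: two indicators compressed into a single new one."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    dx = [x - mx for x in xs]          # center both indicators
    dy = [y - my for y in ys]
    sxx = sum(d * d for d in dx) / n   # variances and covariance
    syy = sum(d * d for d in dy) / n
    sxy = sum(a * b for a, b in zip(dx, dy)) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # principal-axis angle
    return [a * math.cos(theta) + b * math.sin(theta) for a, b in zip(dx, dy)]

projection = compress_two_indicators([1, 2, 3, 4], [1, 2, 3, 4])
```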
Mass indicator screening and reduction is carried out with pattern recognition methods that are<br />
less refined, but more rapid than those used for model construction. Certain types of AI/PR<br />
are useful for screening candidate sets containing up to five hundred indicators.<br />
The subject of variable screening, dimensionality reduction, and information compression is a<br />
large one and well beyond the scope of this paper. The important point is that much can and<br />
should be done in the way of reducing the size of the initial set of indicators.<br />
5) Model Construction<br />
When the candidate set has been reduced to approximately thirty indicators, the model con-<br />
struction starts. There are numerous schemes for generating a model, and all involve trade-<br />
offs. The most ambitious would be to allow the AI/PR program to consider all possible com-<br />
binations of indicators taken one at a time, two at a time, three at a time, up to all thirty of them.<br />
Although this approach guarantees that the best model will be found, it is feasible only with the<br />
largest and fastest computers and very large research budgets.<br />
Good, though not the best, models can be found using step-wise procedures. Such ap-<br />
proaches limit the number of possible indicator combinations searched. First, all indicators are<br />
considered on a stand alone basis. The one with the highest predictive power is selected. As-<br />
sume indicator 26 was the 1-D winner. Then all two-indicator combinations involving indicator<br />
26 are tried. The pair with the highest predictive power is selected. Assume that pair was 26<br />
and 7. Next, all possible 3-D models using 26, 7, and each of the remaining variables are tested<br />
for predictive power. The tri-indicator set with the highest power is selected, and an attempt is<br />
made to add a fourth indicator. An interesting aspect of the selection process is that indicators<br />
selected after the first one often look useless on a stand alone basis, but have significant in-<br />
formation when acting in concert with several other indicators. These kinds of synergistic ef-<br />
fects are not visible to data analysis methods that assume each indicator has independent<br />
predictive value.<br />
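The step-wise search described above can be sketched as follows. The scoring function here is a toy stand-in for the AI/PR class-separation measurement, rigged so that indicators 26 and 7 win, matching the article's running example:

```python
def forward_select(indicators, score, max_dims=4):
    """Step-wise forward selection: grow the indicator set one axis at a
    time, keeping the combination with the highest predictive power, and
    stop when adding another indicator no longer improves the score."""
    selected, best = [], float("-inf")
    while len(selected) < max_dims:
        candidates = [i for i in indicators if i not in selected]
        if not candidates:
            break
        trial = max(candidates, key=lambda i: score(selected + [i]))
        trial_score = score(selected + [trial])
        if trial_score <= best:
            break          # another indicator would reduce predictive power
        selected.append(trial)
        best = trial_score
    return selected

# Toy scorer: 26 is the best stand-alone indicator, 7 adds the most in
# combination; every extra axis carries a small dimensionality penalty.
def toy_score(combo):
    s = (1.0 if 26 in combo else 0.0) + (0.9 if 7 in combo else 0.0)
    return s - 0.1 * len(combo)

print(forward_select(list(range(30)), toy_score))  # [26, 7]
```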
The program stops when the addition of another indicator results in a decline in predictive power.<br />
It may seem counter to common sense that more indicators could produce a decline in pre-<br />
dictive accuracy, but in practice, it does. One reason is that the number of historical samples<br />
is limited. As a vector space grows in dimensionality, its volume expands rapidly. Consider how<br />
much more “room” there is in a one foot cube than in a one foot square. The finite number of<br />
samples become sparser and sparser, until the clusters of like-class samples dissipate and<br />
get lost in the “noise.” This phenomenon is known as “Bellman’s Curse of Dimensionality.”<br />
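The sparsity effect can be demonstrated directly: with a fixed number of random samples, the average distance from a sample to its nearest neighbour grows as indicator axes are added. This is a small illustrative simulation, not from the article:

```python
import math
import random

def mean_nn_distance(n_samples, dims, seed=0):
    """Average nearest-neighbour distance for n_samples random points in a
    unit hypercube of the given dimensionality."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dims)] for _ in range(n_samples)]
    total = 0.0
    for i, p in enumerate(pts):
        # distance to this point's closest neighbour
        total += min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / n_samples

# The same 100 samples grow sparser and sparser as axes are added.
for d in (1, 2, 5, 10):
    print(d, round(mean_nn_distance(100, d), 3))
```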
Let’s consider how the AI/PR program measures the predictive power of a given combination of<br />
indicators. Recall that in an indicator space composed of relevant attribute axes, the samples<br />
from the class of BOTTOM days will form one or more clusters that are distinct from clusters<br />
of TOP days. Each time a different indicator space is considered, the program measures two<br />
criteria of “goodness” (predictive power). First, the training samples are examined to see if the<br />
BOTTOM DAYS and TOP DAYS separate to some degree. If class separation is not present<br />
(i.e., the classes appear randomly intermixed) the indicator combination is rejected. (See figure<br />
11.) But if class separation appears, the program then determines if that separation is also ev-<br />
ident in the testing samples. Classification power in both sets is required to rate the indicator<br />
combination as worthy of further consideration. See the diagrams below.<br />
[Figure: TOP ("O") and BOTTOM ("X") samples randomly intermixed across a 2-D indicator space scaled from -10 to +10]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 11<br />
A Poor 2-D Indicator Space - Classes Intermixed<br />
If samples from the training set show no class separation, as in the illustration, the AI/PR program ceases its investigation of that space.<br />
[Figure: training samples form well-separated TOP ("O") and BOTTOM ("X") clusters in the X1, X2 indicator space]<br />
FIGURE 12a<br />
Training Set Shows Good Top/Bottom Separation<br />
[Figure: testing samples show the same good separation in the X1, X2 space]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 12b<br />
Cross-Validation: Testing Set Confirms Good Class Separation<br />
In the training set, BOTTOMS and TOPS separate in the X1, X2 indicator space. But before accepting X1, X2 as a good 2-D model, its separation power must be validated in the test set samples. Classes separate there as well, so the AI/PR program grades the indicator combination favorably.<br />
[Figure: training samples separate into distinct TOP ("O") and BOTTOM ("X") clusters in the X4, X21 indicator space, but in the testing samples the classes are intermixed]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 13a<br />
Training Set Shows Good Class Separation.....<br />
FIGURE 13b<br />
But Testing Set Does Not Confirm It!<br />
The training samples seem to indicate indicators X4 and X21 have good conjoint classification power, but cross-validation shows the effect was false, as BOTTOMS and TOPS fail to separate in the test set. The spurious class separation in the training set was due to chance, a frequent occurrence when examining thousands of indicator combinations and relatively few samples.<br />
6) Ex-Ante Testing<br />
The ultimate test of an AI/PR model is its ability to predict or classify on future data. Ex-ante<br />
or out-of-sample data is from a period of time that was in neither the training samples nor test<br />
samples. For example, if our training samples and testing samples came from the period 1970<br />
to 1980, we might use 1981 through 1984 as the ex-ante sample. This will indicate if the pat-<br />
terns found in the 1970-1980 time period continue to have validity.<br />
7) Adaptation<br />
If the process under study is suspected to evolve over time, there is a need to allow the model<br />
to adapt. There are a number of adaptive techniques. For example, as new samples take on<br />
known class membership, they can be incorporated into the model, the oldest samples can<br />
be deleted, or more recent samples can be given higher weights, etc. There are many things<br />
that can be done to permit gradual adaptation of the model.<br />
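The adaptation schemes listed above (incorporate newly classified samples, drop the oldest, weight recent samples more heavily) can be sketched together; the window size and decay rate below are arbitrary illustrative choices:

```python
def adapt_samples(samples, new_samples, max_size=100, decay=0.95):
    """One simple adaptation scheme: append newly classified samples, drop
    the oldest once the window is full, and weight recent samples more
    heavily. Each sample is (features, label); the returned weights run
    oldest to newest, with the newest sample weighted 1.0."""
    window = (samples + new_samples)[-max_size:]   # keep only the newest
    weights = [decay ** (len(window) - 1 - i) for i in range(len(window))]
    return window, weights

history = [((float(i),), "TOP") for i in range(99)]   # illustrative history
window, weights = adapt_samples(history, [((1.2,), "BOTTOM"), ((0.7,), "TOP")])
print(len(window), weights[-1])  # 100 1.0
```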
D. Linear and Non-Linear Pattern Boundaries<br />
The EMH, as well as the Law of Requisite Variety, leads us to suspect that pattern boundaries<br />
(i.e., the surfaces that separate the classes) will be complex. Thus, pattern recognition meth-<br />
ods that assume the pattern classes to be linearly separable (e.g., with straight line, plane, etc.)<br />
may not be able to provide accurate discrimination. Fisher’s Linear Discriminant Analysis is the<br />
best known method of this type. In the figures below we see one problem that is solvable by<br />
linear methods, and a more complex one that is not.<br />
[Figure: two classes divided cleanly by a straight-line boundary in 2-D indicator space]<br />
FIGURE 14a<br />
Classes Are Linearly Separable<br />
[Figure: BOTTOM ("X") and TOP ("O") samples separable only by a curved boundary]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 14b<br />
Non-Linear Boundary Required<br />
In Figure 14a, the classes can be separated by a classical linear discriminant model. However, in Figure 14b a flexible non-linear method is required to find the true class boundary. Complex systems display an unlimited variety of non-linear effects. See Figures 16 and 17.<br />
The most recent advances in AI/PR have been toward the development of methods that can<br />
separate classes that are not linearly separable. This is desirable as it is a more general method<br />
that can solve linear, as well as non-linear problems. The spirit of such approaches is to let the<br />
data express its own message rather than force fit a pattern boundary whose shape was con-<br />
ceived prior to the analysis. Flexible non-linear methods permit the true shape of the surface,<br />
such as that in figure 3, to be approximated.<br />
E. Overfitted Patterns<br />
A potential danger of the more flexible methods (i.e., non-parametric, non-linear) is the gen-<br />
eration of overfitted pattern boundaries. Figure 15a shows a 2-D indicator space devoid of class<br />
separating information, but figure 15b shows how a contrived pattern boundary can achieve<br />
apparent class separation in that same space. Clearly, the pattern boundary is spurious.<br />
[Figure: BOTTOM ("X") and TOP ("O") samples scattered at random across a 2-D indicator space scaled from -10 to +10]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 15a<br />
A 2-D Space That Should Be Rejected<br />
BOTTOMS and TOPS are scattered randomly throughout this space. But if men or machines fit freely enough, over-fitted pattern boundaries can emerge........ (see 15b).<br />
[Figure: a highly convoluted boundary drawn through the same random scatter, carving out contrived TOP DAY regions]<br />
X = BOTTOM DAY O = TOP DAY<br />
FIGURE 15b<br />
An Overfitted Classification Boundary<br />
Without the feedback provided by cross-validation, the computer will define highly contrived and false class boundaries, even in random data. A test of this boundary on independent data would likely reveal its lack of validity.<br />
Although there are a number of approaches to avoiding overfit, the cross-validation approach<br />
is the most conservative. The highly contrived pattern in the illustration would be seen to be<br />
false when cross validated on the test set. Thus, robust AI/PR methods can adapt the pattern<br />
boundaries to something that approximates their true nature, while avoiding the trap of mod-<br />
eling the “noise” in the data.<br />
F. Expert Systems Versus AI/PR Models<br />
Much attention in the artificial intelligence area has been on “expert systems.” Such systems<br />
incorporate rules and knowledge of experts organized into a knowledge base. An “inference”<br />
engine operates on the knowledge base to give the kinds of conclusions and explanations that<br />
an expert would give when asked for advice. Successful expert systems have been developed<br />
to aid in medical diagnosis, find mineral deposits, fix diesel motors, and offer certain kinds of<br />
financial advice.<br />
The effectiveness of an expert system is highly dependent on the accuracy of the knowledge<br />
base. In situations where even experts don’t have successful rules, the results may not be sat-<br />
isfactory. Financial markets and other complex processes are characterized by extremely<br />
complex workings not easily understood by people. Thus, an expert system alone is not likely to<br />
forecast market trends well. On the other hand, expert systems can incorporate aspects of market<br />
behavior that are difficult to quantify. Elliott Wave analysis may be a type of financial market<br />
forecasting that would lend itself to a rule-based expert system.<br />
The AI/PR approach has the ability to infer rules and laws from data that may not be apparent<br />
to even the best experts. But it is confined to quantifiable indicators and rules for which ex-<br />
tensive histories can be made available. So, a combined AI/PR system and an expert system<br />
might make sense.<br />
G. Limitations of the AI/PR Method<br />
1) Requires large amounts of historical data<br />
AI/PR induces predictive models from numerous observations (the more, the better). The more<br />
complex the phenomenon, the more examples needed. Thus, unique or infrequent events can-<br />
not easily be incorporated into AI/PR models. Wars, strikes, government policy changes of an<br />
infrequent nature, or indicators that signal once a generation are examples. Yet, these events<br />
may be extremely important.<br />
2) Information Must be Quantified<br />
AI/PR can digest information if it can be quantified and extensive histories made available. Many<br />
aspects of the FMA’s work can be quantified, but some cannot. The information that is non-<br />
quantifiable cannot be used in an AI/PR system.<br />
3) Patterns Assumed to Remain Valid or Change Slowly<br />
The pattern attributes and boundaries are assumed to have some durability. If either is sub-<br />
ject to large and abrupt changes, the models will not be able to adapt and predictive accuracy<br />
will be poor. This applies to any predictive method based on historical analysis.<br />
4) Cost to Develop AI/PR Models Is High<br />
Computer running costs for AI/PR are high. The process requires large computers or special<br />
purpose computers designed for vector processing. Computer resources for an AI/PR anal-<br />
ysis may be ten to one hundred times that of a traditional regression analysis done on the same<br />
problem. A mitigating factor is that some AI/PR programs can sense early in an analysis if any<br />
worthwhile information exists in the candidate inputs.<br />
5) A Good Candidate List of Indicators is Required<br />
MTA Journal/May <strong>1985</strong> 125
AI/PR can mine information from a database only if the information exists. Some of the candidate indi-<br />
cators must contain useful information. Thus, intelligent and experienced people are needed<br />
to create an initial list. If the candidates are weak, the analysis is doomed from the start.<br />
PART VI<br />
PRISM: PATTERN RECOGNITION INFORMATION SYNTHESIS MODELING<br />
PRISM is a non-parametric, non-linear AI/PR system developed by Raden Research Group,<br />
used for generating multi-variate classification, estimation, and prediction models. The design<br />
philosophy was to make intensive use of the computer in order to confront the complexities<br />
associated with complex unstructured problems for which large amounts of data exist.<br />
1) PRISM can accept up to five hundred candidate variables.<br />
2) PRISM makes no a priori assumptions about the structural form of the models or distribu-<br />
tions of the variables. There is no need for variables to be linearly related to the dependent<br />
variable, normally distributed, or uncorrelated with each other. PRISM models can take on highly<br />
non-linear structure, if such is indicated by the data.<br />
3) If the data is random or the candidate variables contain insufficient information to generate<br />
a predictive model, PRISM will indicate such at an early stage of the analysis.<br />
4) PRISM makes extensive use of cross-validation to avoid overfit.<br />
5) Candidate variables can be of a scalar, binary, ordinal or categorical type.<br />
Since the completion of PRISM (version 1.0) in 1982, it has been applied to developing pre-<br />
diction models for a variety of financial market applications and Department of Defense ap-<br />
plications. Below we describe some of the models.<br />
A. Application to Dow Jones Prediction<br />
The dependent variable was defined as the percent change in the Dow Jones Industrial Av-<br />
erage over the next sixteen weeks. The candidate list of variables was approximately two<br />
hundred indicators designed by an analyst not associated with Raden Research Group. The<br />
entire historical database extended from 1964 through 1983. Data from 1964 through 1978<br />
was used to develop the model (i.e., training set and testing set samples came from this pe-<br />
riod). A model composed of three variables was produced.<br />
The model was then tested on data from 1979 through 1983. During 1979 predictions in the<br />
trading range market of that period were marginally profitable. In 1980 several large trends were<br />
correctly predicted just prior to or just after trend turning points. For example, a sharp decline<br />
between 2/15/80 and 4/22/80 was predicted by negative forecasts that persisted between 2/<br />
18/80 and 4/24/80. The model also stayed positive during a strong up trend that began in April<br />
of that year, but turned negative somewhat early (August 20, 1980). The bear market in the<br />
summer of 1981 was correctly predicted, as was the rally in the fall of that year. The large bull<br />
MTA Journal/May <strong>1985</strong> 126
market starting August 14, 1982, was preceded by bullish forecasts starting in June of 1982.<br />
The model estimated from 1964 to 1978 data was left unchanged throughout the 1983 period<br />
(i.e., no adaptation was permitted). In 1983, two variables in the model went to levels never<br />
seen in the historical data, and the forecasts became inaccurate. In general, when variables<br />
go out of historical ranges, the mathematics of the PRISM model pushes the forecast back to<br />
the grand mean of the entire data set. In sum, the model did provide accurate predictions at<br />
major and intermediate turning points for four of the five years of ex-ante data.<br />
B. Soybean Model: Raden In-House Research<br />
The dependent variable of the model was defined as the slope of a linear regression ten days<br />
into the future. Training set and testing set data were taken from the time period 1977 through<br />
1980. Raw data series were limited to price, volume, and open interest of the soybean futures<br />
market. Thus, the model was based only on technical indicators. Each of the three raw data<br />
series was transformed into twenty technical indicators, for a total of sixty candidate inputs.<br />
Thus, there were twenty indicators based on price data, twenty based on volume, and twenty<br />
based on open interest. In general, the indicators were of the oscillator type (i.e., bandpass<br />
filter outputs). PRISM produced a model composed of three of the sixty candidate indicators.<br />
Interestingly enough, all three of the selected indicators were derived from the open interest<br />
data. This is in marked contrast to most commodity trading models, which are based on price<br />
data.<br />
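The two constructions described above can be sketched in a few lines. The article does not give Raden's exact formulas, so the details below are illustrative: the dependent variable is taken as the least-squares slope of prices over the next ten days, and the oscillator is a simple bandpass-style difference of two moving averages.

```python
def future_slope(prices, t, horizon=10):
    """Dependent variable: least-squares slope of the next `horizon` prices."""
    window = prices[t:t + horizon]
    n = len(window)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(window) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, window))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def oscillator(series, fast=5, slow=20):
    """Bandpass-style indicator: difference of a fast and a slow moving average."""
    def sma(n, t):
        return sum(series[t - n + 1:t + 1]) / n
    t = len(series) - 1
    return sma(fast, t) - sma(slow, t)

prices = [float(100 + i) for i in range(40)]  # steadily rising series
slope = future_slope(prices, 0)               # -> 1.0 for a one-point-per-day trend
```

The same oscillator transform would be applied to volume and open interest series to generate the remaining candidate indicators.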
Ex-ante testing on data from 1981 through 1983 indicated the model had the ability to detect<br />
important trend reversals, though signals were sometimes early. This contrasts with trend-<br />
following models, which detect trend changes only with a lag. In 1983, a 12-month trading test<br />
commenced using the mini-contract of soybeans on the MidAmerica Commodity Exchange. A<br />
profit exceeding one hundred percent on capital was earned. Four of five signals were correct,<br />
which was consistent with earlier levels of signal accuracy.<br />
C. CyberTech Research Partnership<br />
In 1984, Raden Research Group was engaged to develop a series of short-term prediction<br />
models for twelve different futures markets by the CyberTech research and development part-<br />
nership. For each model, five to ten years of historical data were used. The candidate indi-<br />
cators were based on data related to the futures market being modeled, as well as exogenous<br />
data series.<br />
Ex-ante testing for each model was done on the most recent two years of data. Two measures<br />
of predictive power were used. First, the predictions were correlated with actual outcomes.<br />
Second, the fraction of forecasts that were directionally correct was noted. In general, the<br />
forecasts were found to contain significant information when they exceeded a threshold; in other<br />
words, forecasts close to zero were correct no more often than chance. The test of directional<br />
accuracy lends itself to the binomial test for nonrandomness (see "Significance: What Is It?"<br />
by Arthur Merrill, MTA Journal, February 1981).<br />
A model to forecast five-day changes in the S&P 500 Index was directionally correct over seventy<br />
percent of the time when the forecast exceeded a specified threshold. This level of accuracy<br />
is significant at the ninety-nine percent level. Other models were correct on direction often enough<br />
to be significant at the 99.9 percent level.<br />
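The binomial test mentioned above can be computed directly: under the null hypothesis that forecasts are random, the number of directionally correct calls follows a binomial distribution with p = 0.5. The record below (35 correct out of 50 above-threshold forecasts) is a hypothetical example, not a figure from the CyberTech tests.

```python
from math import comb

def binomial_p_value(correct, total, p_chance=0.5):
    """One-sided probability of `correct` or more right calls by chance alone."""
    return sum(comb(total, k) * p_chance**k * (1 - p_chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical record: 35 of 50 above-threshold forecasts directionally correct
p = binomial_p_value(35, 50)   # roughly 0.003 -> significant at the 99% level
```

A correlation of predictions with actual outcomes, the article's other measure, tests forecast magnitude as well as direction; the binomial test isolates direction alone.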
D. Other Examples<br />
Prior to the development of PRISM, one of its designers investigated the application of AI/PR<br />
to major trend forecasting and stock selection. These models were developed with techniques<br />
that are relatively primitive by today's standards, yet the results were encouraging. Figures<br />
16 and 17 show 2-D cross sections of the two models.<br />
[Figure legend: distinct plot symbols mark MAJOR MARKET BOTTOMS, NEUTRAL TREND, and MAJOR MARKET TOPS]<br />
FIGURE 16<br />
Stock <strong>Market</strong> Forecasting Model<br />
Two macroeconomic variables (X1, X2) were selected from over 100 by an AI/PR system for their ability to classify<br />
major market tops and bottoms. Note the non-linear boundaries. The dark region at the top of the 2-D indicator space<br />
is where bottoms occurred, and the white region at the right has been associated with market tops. Developed<br />
for a financial institution in the 1970s.<br />
[Figure legend: distinct plot symbols mark BEST PERFORMING STOCKS RELATIVE TO MARKET, SECOND BEST, THIRD BEST, and UNDERPERFORMING THE MARKET]<br />
FIGURE 17<br />
Stock Selection Model<br />
Balance sheet and income statement data were used to construct candidate variables for a model to forecast rel-<br />
ative price performance of stocks (versus the S&P 500). The figure is a 2-D cross section of a model that is composed<br />
of four indicators. The model has been in real-time use since the mid-1970s by the institution for which it was originally<br />
developed. The best performing stocks are found in the darkest regions. Stocks in the white region are likely to<br />
underperform the market.<br />
The Future<br />
There is a significant opportunity for synergy between the FMA and the computer, for each<br />
has unique and complementary talents. The FMA uses experience and creative intellect to de-<br />
rive new indicators and improve existing ones (computers have no such ability). Computers<br />
programmed with AI/PR are exploited for their ability to analyze many factors simultaneously,<br />
detect complex relationships, and generate multi-indicator forecasting models, a task beyond<br />
the intellectual powers of man.<br />
Although the first phase of the Industrial Revolution saw the development of machines to am-<br />
plify man’s physical powers, a second phase is starting that will result in machines to amplify<br />
man’s intellectual powers. Semantic debates that question a computer’s ability to think as hu-<br />
mans do miss the point. The real issue is whether or not such machines can be of practical<br />
value to man. Initial indications are that, indeed, they can.<br />
FOOTNOTES<br />
1. Simon, Herbert A., Models of Man (New York: John<br />
Wiley, 1957), p. 198.<br />
2. Hayes, J. R., Human Data Processing Limits in Decision<br />
Making, Report ESD-TDR-62-48 (Massachusetts: Air Force<br />
Systems Command, Electronics System Division, 1962).<br />
3. Rivett, Patrick, Model Building for Decision Analysis,<br />
(New York: John Wiley, 1980), pp. 14-15, discussing the work<br />
of Bavelas at Stanford University.<br />
4. Wiener, Norbert, Cybernetics: or Control and Communication<br />
in the Animal and the Machine (Cambridge, Massachusetts:<br />
MIT Press, 1948).<br />
5. Ashby, W. R., An Introduction to Cybernetics (New York:<br />
John Wiley, Inc., 1963).<br />
6. Felsen, Jerry, Cybernetic Approach to Stock <strong>Market</strong> Analysis,<br />
(Hicksville, New York: Exposition Press, 1975) p.50. Book is<br />
available from Raden Research Group, New York, New York.<br />
7. Two firms that have developed market forecasting models with<br />
multi-variate linear discriminant and/or regression analysis are:<br />
The Institute for Econometric Research, Inc., 3471 North Federal<br />
Highway, Fort Lauderdale, Florida 33306; and William Finnegan<br />
Associates, Inc., 21235 Pacific Coast Highway, Malibu, California<br />
90265.<br />
OTHER REFERENCES<br />
1. Felsen, Jerry, Decision Making Under Uncertainty: An<br />
Artificial Intelligence Approach, (Jamaica, New York:<br />
CDS Publishing Company). Book is available from Raden<br />
Research Group, New York, New York.<br />
2. Fosback, Norman G., Stock <strong>Market</strong> Logic, (Fort Lauderdale,<br />
Florida: The Institute for Econometric Research, ninth<br />
printing <strong>1985</strong>)<br />
3. Bellman, Richard, An Introduction to Artificial Intelligence:<br />
Can Computers Think? (San Francisco: Boyd & Fraser, 1978).<br />
4. Batchelor, Bruce G., Pattern Recognition Ideas in Practice,<br />
(New York: Plenum Press, 1981).<br />
5. Kaufmann, Arnold, The Science of Decision Making,<br />
(New York: World University Library/McGraw Hill, 1968).<br />
6. Albus, James S., Brains, Behavior and Robotics, (Peterborough,<br />
New Hampshire: Byte Books/McGraw Hill, 1981).<br />
BIOGRAPHY<br />
Currently the President of Raden Research Group Incorporated, David R. Aronson began his<br />
career in business as an Account Executive with Merrill Lynch in 1973. After four years, he<br />
conducted research on the performance and trading strategies of several hundred commodity<br />
trading advisors for AdvoCom Corporation. In 1978, he began the Raden project to determine<br />
the feasibility of applying advanced data analysis and modeling techniques to forecasting the be-<br />
havior of financial markets. Mr. Aronson also began the development of PRISM (Pattern Rec-<br />
ognition Information Synthesis Modeling), a multi-variate pattern recognition system employing<br />
automated inductive inferencing, heuristic programming and cybernetics.<br />
Mr. Aronson received his BA in Philosophy from Lafayette College in 1967.<br />