
ECONOMIC MODELS
Methods, Theory
and Applications




ECONOMIC MODELS
Methods, Theory
and Applications

editor
Dipak Basu
Nagasaki University, Japan

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI


Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ECONOMIC MODELS
Methods, Theory and Applications

Copyright © 2009 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN-13 978-981-283-645-8
ISBN-10 981-283-645-4

Typeset by Stallion Press
Email: enquiries@stallionpress.com

Printed in Singapore.


This Volume is Dedicated to the Memory of
Prof. Tom Oskar Martin Kronsjo




Contents

Tom Oskar Martin Kronsjo: A Profile ... ix
About the Editor ... xiii
Contributors ... xv
Introduction ... xix

Chapter 1: Methods of Modelling: Mathematical Programming ... 1
  Topic 1: Some Unresolved Problems of Mathematical Programming, by Olav Bjerkholt ... 3

Chapter 2: Methods of Modeling: Econometrics and Adaptive Control System ... 21
  Topic 1: A Novel Method of Estimation Under Co-Integration, by Alexis Lazaridis ... 23
  Topic 2: Time Varying Responses of Output to Monetary and Fiscal Policy, by Dipak R. Basu and Alexis Lazaridis ... 43

Chapter 3: Mathematical Modeling in Macroeconomics ... 67
  Topic 1: The Advantages of Fiscal Leadership in an Economy with Independent Monetary Policies, by Andrew Hughes Hallett ... 69
  Topic 2: Cheap-Talk, Multiple Equilibria and Pareto-Improvement in an Environmental Taxation Game, by Christophe Deissenberg and Pavel Ševčík ... 101

Chapter 4: Modeling Business Organization ... 119
  Topic 1: Enterprise Modeling and Integration: Review and New Directions, by Nikitas Spiros Koutsoukis ... 121
  Topic 2: Toward a Theory of Japanese Organizational Culture and Corporate Performance, by Victoria Miroshnik ... 137
  Topic 3: Health Service Management Using Goal Programming, by Anna-Maria Mouza ... 151

Chapter 5: Modeling National Economies ... 171
  Topic 1: Inflation Control in Central and Eastern European Countries, by Fabrizio Iacone and Renzo Orsi ... 173
  Topic 2: Credit and Income: Co-Integration Dynamics of the US Economy, by Athanasios Athanasenas ... 201

Index ... 223


Tom Oskar Martin Kronsjo: A Profile

Lydia Kronsjo
University of Birmingham, England

Tom Kronsjo was born in Stockholm, Sweden, on 16 August 1932, the only child of Elvira and Erik Kronsjo. Both his parents were gymnasium teachers.

In his school days Tom Kronsjo was a bright, popular student, often a leader among his school contemporaries and friends. During his school years, he organized and chaired a National Science Society that became very popular with his school colleagues.

From an early age, Tom Kronsjo’s mother encouraged him to learn foreign languages. He put these studies to a good test during a 1948 summer vacation when he hitch-hiked through Denmark, Germany, the Netherlands, England, Scotland, Ireland, France, and Switzerland. The same year Tom Kronsjo founded and chaired a technical society in his school. In the summer vacation of 1949, Tom Kronsjo organized an expedition to North Africa by seven members of the technical society, their ages being 14–24. The young people traveled by coach through the austere post-Second World War Europe. All the equipment for the travel, including the coach, was donated by sympathetic individuals and companies, who were impressed by an intelligent, enthusiastic group of youngsters eager to learn about the world. This event caught wide interest in the Swedish press at the time and was vividly reported.

Tom Kronsjo used to mention that a particularly exciting personal experience of his school years was volunteering to give a series of morning school sermons on the need for world governmental organizations that would encourage and support the mutual understanding of peoples’ need to live in peace.

Tom Kronsjo’s university years were spent on both sides of the great divide. Preparing himself for a career in international economics, he wanted to experience and see for himself the different nations’ ways of life and points of view. He studied in London, Belgrade, Warsaw, Oslo, and Moscow, before returning to his home country, Sweden, to pass examinations and submit the required work at the Universities of Uppsala, Stockholm, and Lund for his Fil.kand. degree in economics, statistics, mathematics, and Slavonic languages, and his Fil.lic. degree in economics. By then, besides his mother tongue, Swedish, Tom Kronsjo was fluent in six languages: German, English, French, Polish, Serbo-Croatian, and Russian. During this period, Tom was awarded a number of scholarships for study in Sweden and abroad.

Tom Kronsjo met his wife while studying in Moscow. He was a visiting graduate student at the Moscow State University and she a graduate student with the Academy of Sciences of the USSR. They married in Moscow in February 1962. A few years later, a very happy event in the family life was the arrival of their adopted son from Korea in October 1973.

As a post-graduate student in economics, Tom Kronsjo was one of a new breed of Western social scientists who applied rigorous mathematical and computer-based modeling to economics. In 1963, Tom Kronsjo was invited by Professor Ragnar Frisch, later a Nobel laureate in economics, to work under him and Dr. Salah Hamid in Egypt as a visiting Assistant Professor in Operations Research at the UAR Institute of National Planning. His pioneering papers on optimization of foreign trade were soon considered classics and brought invitations to work as a guest researcher from the GDR Ministry of Foreign Trade and Inner German Trade, and from the Polish Ministry of Foreign Trade.

In 1964, Tom Kronsjo was appointed an Associate Professor in Econometrics and Social Statistics at the Faculty of Commerce and Social Sciences, the University of Birmingham, UK. He and his wife moved to the United Kingdom.

In Birmingham, together with Professor R.W. Davies, Tom Kronsjo developed diploma and master of social sciences courses in national economic planning. The courses, offered from 1967, attracted many talented students from all over the world. Many of these students then stayed with Tom Kronsjo as research students, and many of them are today university professors, industrialists, and economic advisors. In 1969, at the age of 36, Tom Kronsjo was appointed Professor of Economic Planning. By then Tom Kronsjo had published over a hundred research papers.

Tom Kronsjo always had a strong interest in educational innovations. He was among the first professors at the university to introduce television-based teaching, making it possible to widen the scope of material taught.

Tom Kronsjo was generous towards his students with his research ideas, stimulating them intellectually and unlocking wide opportunities for them through this generosity. He had the ability to see the possibilities in any situation and taught his students to use them.
Tom Kronsjo’s research interests were wide and multifaceted. One area of his interest was planning and management at various levels of the national economy. He developed important methods of decomposition of large economic systems, which were considered by his contemporaries a major contribution to the theory of mathematical planning and programming. These results were published in 1972 in a book, “Economics of Foreign Trade”, written jointly by Tom Kronsjo, Dr. Z. Zawada, and Professor J. Krynicki, both of the Warsaw University, and published in Poland.

In 1972, Tom Kronsjo was invited as a Visiting Professor of Economics and Industrial Administration at Purdue University, Indiana, USA. In 1974, he was a Distinguished Visiting Professor of Economics at San Diego State University, San Diego, USA.

In 1975, Tom Kronsjo was a Senior Fellow in the Foreign Policy Studies Program of the Brookings Institution, Washington, D.C., USA. In collaboration with three other scientists, he was engaged in the project entitled “Trade and Employment Effects of the Multilateral Trade Negotiations”. In the words of his colleagues, Tom Kronsjo made an indispensable contribution to the project. His ingenious, tireless conceptual and implementation efforts made it possible to carry out calculations involving millions of pieces of information on trade and tariffs, permitting the Brookings model to become the most sophisticated and comprehensive model available at the time for investigating the effects of the then-current multilateral trade negotiations. In addition, Tom Kronsjo conceived and designed the means for implementing perhaps the most imaginative portion of the project: the calculation of “optimal” tariff-cutting formulas using linear programming, taking account of balance of payments and other constraints on trade liberalization. A monograph, “Trade, Welfare, and Employment Effects of Trade Negotiations in the Tokyo Round”, by Dr. W.R. Cline, Dr. Noboru Kawanabe, Tom Kronsjo, and T. Williams, was published in 1978.

Tom Kronsjo’s general interests were in the field of conflicts and co-operation, war and peace, ways out of the arms race, East-West joint enterprises, and world development. He published papers on unemployment, work incentives, and nuclear disarmament.

Tom Kronsjo served as one of the five examiners for the Nobel prize in economic sciences. From time to time, he was invited to nominate a candidate or candidates for the Nobel prize in economics.
xi



Tom Kronsjo took early retirement from the University of Birmingham in 1988 and devoted the subsequent years to working with young researchers from China, Russia, Poland, Estonia, and other countries.

Tom Kronsjo was a great enthusiast of vegetarianism and veganism, of Buddhist philosophy, yoga, and inter-space communication, and of aviation and flying. He learned to fly in the United States and held a pilot license from 1972.

Tom Kronsjo was a man who lived his life to the fullest, who constantly looked for ways to bring peace to the world. He was an exceptional individual who had incredible vision and an ability to foresee new trends. He was a man of great charisma, one of those people of whom we say “he is a man larger than life”.

Tom Oskar Martin Kronsjo died in June 2005 at the age of 72.


About the Editor

Prof. Dipak Basu is currently a Professor of International Economics at Nagasaki University in Japan. He obtained his Ph.D. from the University of Birmingham, England, and did his post-doctoral research at Cambridge University. He was previously a Lecturer at the Institute of Agricultural Economics, Oxford University, Senior Economist in the Ministry of Foreign Affairs, Saudi Arabia, and Senior Economist in charge of the Middle East & Africa Division of Standard & Poor’s (Data Resources Inc.). He has published six books and more than 60 research papers in leading academic journals.




Contributors

Athanasios Athanasenas is an Assistant Professor of Operations Research & Economic Development in the School of Administration & Economics, Institute of Technology and Education, Serres, Greece. He was educated at universities in Colorado and Virginia and received his Ph.D. from the University of Minnesota. He was a Fulbright scholar.

Olav Bjerkholt, Professor in Economics, University of Oslo, was previously the Head of Research at the Norwegian Statistical Office and an Assistant Professor of Energy Economics at the University of Oslo. He obtained his Ph.D. from the University of Oslo. He was a Visiting Professor at the Massachusetts Institute of Technology and a consultant to the United Nations and the IMF.

Christophe Deissenberg is a Professor in Economics at the Université de la Méditerranée and GREQAM, France. He is a co-editor of the Journal of Optimization Theory and Applications and of Macroeconomic Dynamics, and a member of the editorial boards of Computational Economics and of Computational Management Science. He is a member of the Advisory Board of the Society for Computational Economics, a member of the Management Committee of the ESF action COST P10 (Physics of Risk), and a team leader of the European STREP project EURACE.

Andrew Hughes Hallett is a Professor of Economics and Public Policy in the School of Public Policy at George Mason University, USA. Previously he was a Professor of Economics at Vanderbilt University and at the University of Strathclyde in Glasgow, Scotland. He was educated at the University of Warwick and the London School of Economics, and holds a doctorate from Oxford University. He was a Visiting Professor at Princeton University, the Free University of Berlin, the Universities of Warwick, Frankfurt, Rome, and Paris X, and at the Copenhagen Business School. He is a Fellow of the Royal Society of Edinburgh.

xv



Fabrizio Iacone obtained a Ph.D. in Economics at the London School of Economics and is currently a Lecturer in the Department of Economics at the University of York, UK. He has already published several articles in leading journals.

Nikos Konidaris is currently a Ph.D. student at the Hellenic Open University, School of Social Sciences, researching in the field of Business Administration. He holds an MA degree in Business and Management and a Bachelor’s degree in Economics & Marketing from Luton University, UK.

Nikitas Spiros Koutsoukis is an Assistant Professor of Decision Modelling and Scientific Management in the Department of International Economic Relations and Development, Democritus University, Greece. He holds an MSc and a Ph.D. in decision modeling from Brunel University (UK). He is interested mainly in decision modeling and governance systems, with applications in risk management, business intelligence, and operational-research-based decision support.

Alexis Lazaridis, Professor in Econometrics, Aristotle University of Thessaloniki, Greece, obtained his Ph.D. from the University of Birmingham, England. He was a member of the Administrative Board of the Organization of Mediation and Arbitration, of the Administrative Board of the Centre of International and European Economic Law, and of the Supreme Labour Council of Greece. During his term of service as Head of the Economics Department, Faculty of Law and Economics, Aristotle University of Thessaloniki, he was conferred an honorary doctoral degree by the President of the Republic of Cyprus. He was also pronounced an honorary member of the Centre of International and European Economic Law by the Secretary-General of the Legal Services of the E.U. He has published several books and a large number of scientific papers, and holds patents on two computer software programs.

Athanassios Mihiotis is an Assistant Professor at the School of Social Science of the Hellenic Open University. He obtained his Ph.D. in industrial management from the National Technical University of Athens, Greece. He is the editor of the International Journal of Decision Sciences, Risk and Management. He has in the past worked as a Planning and Logistics Director for multinational and domestic companies and has served as a member of project teams in the Greek public sector.



Victoria Miroshnik is currently an Adam Smith Research Scholar, Faculty of Law and Social Sciences, University of Glasgow, Scotland. She was educated at Moscow State University and was an Assistant Professor at Tbilisi State University, Georgia. She has published two books and a large number of scientific papers in leading management journals.

Anna-Maria Mouza, Assistant Professor in Business Administration, Technological Educational Institute of Serres, Greece, obtained her Ph.D. from the School of Health Sciences, Aristotle University of Thessaloniki, Greece. Previously, she was an Assistant Professor in the School of Technology Applications of the Technological Institute of Thessaloniki, in the Department of Agriculture, and in the Faculty of Health Sciences of the Aristotle University of Thessaloniki. She is the author of two books and many published scientific papers.

R. Orsi, Professor in Econometrics, University of Bologna, Italy, obtained his Ph.D. from the Université Catholique de Louvain, Belgium. He was previously educated at the University of Bologna and the University of Rome. He was an Associate Professor at the University of Modena and Professor of Econometrics at the University of Calabria, Italy. He was a Visiting Professor at the Center for Operations Research and Econometrics (CORE), Université Catholique de Louvain, Belgium, a NATO Senior Fellow, Head of the Department of Economics at the University of Calabria from 1987 to 1989, Director of CIDE (the Inter-University Center of Econometrics) from 1997 to 1999 and from 2000 to 2002, and Head of the Department of Economics, University of Bologna, from 1999 to 2002.

Pavel Ševčík was educated at the Université de la Méditerranée. He is presently completing his Ph.D. at the Université de Montréal, Canada.



Introduction

We define “model building in economics” as the fruitful area of economics designed to solve real-world problems using all available methods without distinction: mathematical, computational, and analytical. Wherever needed, we should not be shy to develop new techniques, whether mathematical or computational. This was the philosophy of Prof. Tom Kronsjo, to whose memory we dedicate this volume. The idea is to develop stylized facts amenable to analytical methods to solve problems of the world.

The articles in this volume are divided into three distinct groups: methods, theory, and applications.

In the methods section there are two parts: mathematical programming, and the computation of co-integrating vectors, which are widely used in econometric analysis. Both parts are important for analyzing and developing policies in any analytical model. Bjerkholt reviews the history of the development of mathematical programming and its impact on economic modeling. In the process, he draws our attention to some methods, first developed by Ragnar Frisch, which can provide a solution where the standard Simplex method may fail. Frisch developed these methods while working on the Second Five-Year Plan for India; one of them is similar to that of Karmarkar.

Lazaridis presents a completely new method of computing co-integrating vectors, widely used in econometric analysis, by applying singular value decomposition. With this method, one can easily accommodate in the co-integrating vectors any deterministic factors, such as dummies, apart from the constant term and the trend. In addition, a comparatively simple procedure is developed for determining the order of integration of a difference-stationary series. Moreover, with this procedure one can directly detect whether the differencing process produces a stationary series or not; it seems to be a common belief that by differencing a variable (one or more times) we will always obtain a stationary series, although this is not necessarily the case.

Basu and Lazaridis propose a new method of estimation and solution of an econometric model with parameters moving over time. This type of model is very realistic for economies that are in transitional phases. The model was applied to the Indian economy to derive a monetary-fiscal policy structure that would respond to the changing environments of the economy. The authors provide analysis to show that the response of the economy would be variable, i.e., not at all fixed over time.

In the macroeconomic theory section, Hughes Hallett examines the impact of fiscal policy in a regime with an independent monetary authority and how to co-ordinate public policies in such a regime. Independence of the central bank is a crucial issue for both developed and developing countries. In this theoretical model, the author reaches the important conclusion that fiscal leadership leads to improved outcomes because it implies a degree of co-ordination and reduced conflict between institutions, without the central bank having to lose its ability to act independently.

Deissenberg and Ševčík consider a dynamic model of environmental taxation that exhibits time inconsistency. There are two categories of firms: believers (who take the non-binding tax announcements made by the regulator at face value) and non-believers (who make rational but costly predictions of the regulator’s true decisions). The proportion of believers and non-believers changes over time depending on the relative profits of the two groups. If the regulator is sufficiently patient and if the firms react sufficiently fast to the profit differences, multiple equilibria can arise. Depending on the initial number of believers, the regulator will use cheap talk together with actual taxes to steer the economy either to the standard, perfect-prediction solution suggested in the literature, or to an equilibrium with a mixed population of believers and non-believers who both make prediction errors. This model can be very useful in the analysis of environmental economics.

In the section on the theory of business organization, Victoria Miroshnik formulates a model of Japanese organization, showing how the superior corporate performance of Japanese multi-national companies is the result of a multi-layer structure of cultures: national, personal, and organizational. The model goes deep into the analysis of the fundamental ingredients of Japanese organizational culture and the human resource management system, and how these contribute to corporate performance. The model produces a novel scheme for estimating phenomena that are seemingly unobservable psychological and sociological characteristics.

Mihiotis et al. discuss the challenges and rationale of enterprise modeling and integration, as well as current tools and methodologies for deeper analysis. Over the last decades, much has been written and a lot of research has been undertaken on the issue of enterprise modeling and integration. The purpose of this discussion is to provide an overview and understanding of the key concepts. The authors outline a framework for advancing enterprise integration modeling based on state-of-the-art techniques.

Anna-Maria Mouza, in path-breaking research, presents a model suitable for efficient budget management of a health service unit, applying goal programming. She analyzes all the details needed to formulate the proper model in order to apply goal programming successfully, in an attempt to satisfy the expectations of the decision maker in the best possible way, while providing alternative scenarios that consider various socioeconomic factors.

In the section on policy analysis, Iacone and Orsi apply a small macro-econometric model to ascertain whether the inflation dynamics and controls for Poland, the Czech Republic, and Slovenia are compatible with those of the remaining EU member countries. They find that the real exchange rate is the most effective instrument for stabilizing inflation, whereas direct inflation control mechanisms may be ineffective in certain cases. These experiments are very useful for designing anti-inflation policies in open economies.

Athanasios Athanasenas investigates the co-integration dynamics of the credit–income nexus within the economic growth process of the post-war US economy over the period 1957–2007. Given the existing empirical research on the credit-lending channel and the established relationship between financial intermediation and economic growth in general, the main purpose is to analyze in detail the causal relationship between finance and growth by focusing on bank credit and income (GNP) in the post-war US economy. This is a new application of an innovative technique of co-integration analysis, with emphasis on system stability analysis. The results show that there is no short-run effect of credit changes on income changes; only in the long run does credit affect money income.

The book covers most of the important areas of economics, providing the basic analytical framework to formulate a logical structure and then suggesting and implementing methods to quantify that structure and derive applicable policies. We hope the book will be a source of joy for anyone interested in making economics a useful discipline that enhances human welfare, rather than a sterile discourse devoid of reality.





Chapter 1
Methods of Modelling: Mathematical Programming




Topic 1
Some Unresolved Problems of Mathematical Programming

Olav Bjerkholt
University of Oslo, Norway

1. Introduction

Linear programming (LP) emerged in the United States in the early postwar years. One may to a considerable degree see the development of LP as a direct result of the mobilization of research efforts during the war. George B. Dantzig, who was employed by the US Armed Forces, played a key role in developing the new tool by his discovery in 1947 of the Simplex method for solving LP problems. Linear programming was thus a military product, which soon appeared to have very widespread civilian applications. The US Armed Forces continued to support Dantzig’s LP work: the most widespread LP textbook of the 1960s, Dantzig (1963), was sponsored by the US Air Force.

Linear programming emerged at the same time as the first electronic computers were developed. The uses and improvements of LP as an optimization tool developed in step with the fast development of electronic computers.

A standard formulation of an LP problem in n variables is as follows:

    min_x {c′x | Ax = b, x ≥ 0}

where c ∈ R^n, b ∈ R^m, and A is an (m, n) matrix of rank m.
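As a concrete illustration (not from the chapter), a problem stated with inequality constraints can be brought into this standard equality form by appending one slack variable per inequality. A minimal sketch in Python; the function name and example data are my own:

```python
def to_standard_form(c, A_ub, b_ub):
    """Convert min c'x s.t. A_ub x <= b_ub, x >= 0 into the standard
    form min c'x s.t. Ax = b, x >= 0 by appending one slack variable
    per inequality row (an identity block on the right of A)."""
    m = len(A_ub)
    A = []
    for i, row in enumerate(A_ub):
        slack = [1.0 if j == i else 0.0 for j in range(m)]  # i-th identity column
        A.append(list(row) + slack)
    c_std = list(c) + [0.0] * m  # slack variables do not enter the objective
    return c_std, A, list(b_ub)

# Example: min -x1 - 2*x2  s.t.  x1 + x2 <= 4,  x2 <= 3,  x >= 0
c_std, A, b = to_standard_form([-1, -2], [[1, 1], [0, 1]], [4, 3])
print(c_std)  # [-1, -2, 0.0, 0.0]
print(A)      # [[1, 1, 1.0, 0.0], [0, 1, 0.0, 1.0]]
```

The slack value in a solution measures how far the corresponding original inequality is from being tight.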



Among these was also Ragnar Frisch. Frisch was broadly oriented towards macroeconomic policy and planning problems and was highly interested in the promising new optimization tool. Frisch, who had a strong background in mathematics and was also very proficient in numerical methods, made the development of solution algorithms for LP problems a part of his research agenda. During the 1950s, Frisch invented and tested various solution algorithms along lines other than the Simplex method for solving LP problems.

With regard to the magnitude of LP problems that could be solved at different times with the computer equipment at one’s disposal, Orden (1993) gives the following rough indication. In the first years after 1950, the number m of constraints in a solvable problem was of the order of 100, and it has grown by a factor of 10 for each decade, implying that currently LP problems may have a number of constraints that runs into tens of millions. Linear programming has been steadily taken into use for new kinds of problems, and the development of computers has allowed the problems to become bigger and bigger.

The Simplex method has throughout this period been the dominant<br />

algorithm for solving LP problems, not least in the textbooks. It is perhaps<br />

relatively rare that an algorithm developed at an early stage for a new problem,<br />

as Dantzig’s Simplex method was for the LP problem, has retained<br />

such a position. The Simplex method will surely survive in textbook presentations<br />

and for its historical role, but for the solution of large-scale LP<br />

problems it is yielding ground to alternative algorithms. Ragnar Frisch’s<br />

work on algorithms is interesting in this perspective and has been given<br />

little attention. It may on a modest scale be a case of a pioneering effort<br />

that was more insightful than recognized at the time and thus did not get<br />

the attention it deserved. In the history of science, there are surely many<br />

such cases.<br />

Let us first make a remark on the meaning of algorithms. The mathematical<br />

meaning of “algorithm” in relation to a given type of problem is a<br />

procedure which, after a finite number of steps, finds the solution to the problem<br />

or determines that there is no solution. Such an algorithmic procedure<br />

can be executed by a computer program, an idea that goes back to Alan<br />

Turing. But when we talk about algorithms with regard to the LP problem<br />

it is not in the mathematical sense but in a more practical one. An LP<br />

algorithm is a practically executable technique for finding the solution to<br />

the problem. Dantzig’s Simplex method is an algorithm in both meanings.<br />

But one may have an algorithm in the mathematical sense, which is not a<br />

practical algorithm (and indeed also vice versa).


Some Unresolved Problems 5<br />

In the mathematical sense of algorithm, the LP problem is trivial. It<br />

can easily be shown that the feasibility region is a convex set with a linear<br />

surface. The optimum point of a linear criterion over such a set must be in a<br />

corner or possibly in a linear manifold of dimension greater than one. As the<br />

number of corners is finite, the solution can be found, for example, by setting<br />

n − m of the x’s equal to zero and solving Ax = b for the remaining ones.<br />

Then all the corners can be searched for the lowest value of the optimality<br />

criterion.<br />
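This brute-force corner search is easy to sketch in Python (the data are invented for illustration, not taken from the text):<br />

```python
from itertools import combinations
import numpy as np

def enumerate_corners(c, A, b):
    """Brute-force LP solver: try every choice of m basic columns, solve
    A_B x_B = b, keep the feasible corners (x_B >= 0) and return the best."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for cols in combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                           # singular basis: no corner here
        xB = np.linalg.solve(B, b)
        if np.any(xB < -1e-9):
            continue                           # corner violates x >= 0
        x = np.zeros(n)
        x[list(cols)] = xB
        if c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# Tiny illustrative instance: n = 3 variables, m = 2 equality constraints
best_x, best_val = enumerate_corners(np.array([2.0, 3.0, 5.0]),
                                     np.array([[1.0, 1.0, 1.0],
                                               [0.0, 1.0, 2.0]]),
                                     np.array([10.0, 5.0]))
print(best_x, best_val)                        # (5, 5, 0) with value 25
```

The number of bases to try grows as C(n, m), which is exactly why this approach collapses for realistic problem sizes.<br />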

Dantzig (1984) gives a beautiful illustration of why such a search for<br />

optimality is not viable. He takes a classical assignment problem, the distribution<br />

of a given number of jobs among the same number of workers.<br />

Assume that 70 persons shall be assigned to 70 jobs and the return of each<br />

worker-job combination is known. There are thus 70! possible assignment<br />

combinations. Dantzig’s comment about the possibility of looking at all<br />

these combinations runs as follows:<br />

“Now 70! is a big number, greater than 10^100. Suppose we had an IBM<br />

370-168 available at the time of the big bang 15 billion years ago. Would<br />

it have been able to look at all the 70! combinations by the year 1981?<br />

No! Suppose instead it could examine 1 billion assignments per second?<br />

The answer is still no. Even if the earth were filled with such computers<br />

all working in parallel, the answer would still be no. If, however, there<br />

were 10^50 earths or 10^44 suns all filled with nano-second speed computers<br />

all programmed in parallel from the time of the big bang until sun grows<br />

cold, then perhaps the answer is yes.” (Dantzig, 1984, p. 106)<br />
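Dantzig’s arithmetic is easy to verify today. A quick Python check, using the 15-billion-year horizon and billion-per-second rate from the quote:<br />

```python
import math

fact70 = math.factorial(70)
print(fact70 > 10 ** 100)             # True: 70! exceeds a googol

# Assignments checkable since a big bang 15 billion years ago,
# at one billion (1e9) assignments per second:
seconds = 15_000_000_000 * 365.25 * 24 * 3600
checked = int(seconds * 1_000_000_000)
print(fact70 // checked)              # still leaves an astronomically large factor
```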

In view of these enormous combinatorial possibilities, the Simplex<br />

method is a most impressive tool, making the just mentioned and even<br />

bigger problems practically solvable. The Simplex method searches for<br />

the optimum on the surface of the feasible region, or, more precisely, in the<br />

corners of the feasible region, in such a way that the optimality criterion<br />

improves at each step.<br />
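To make the corner-to-corner search concrete, here is a compact tableau Simplex sketch in Python for problems of the form max c′x subject to Ax ≤ b, x ≥ 0 with b ≥ 0. It is a textbook toy, not Dantzig’s code, and omits anti-cycling and unboundedness safeguards:<br />

```python
import numpy as np

def simplex(c, A, b):
    """Tableau Simplex for max c'x s.t. Ax <= b, x >= 0 (assumes b >= 0,
    so the slack variables give an initial feasible corner)."""
    m, n = A.shape
    # Tableau: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        col = int(np.argmin(T[-1, :-1]))     # entering variable
        if T[-1, col] >= -1e-12:
            break                            # no improving direction: optimal
        pos = T[:m, col] > 1e-12
        ratios = np.full(m, np.inf)
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))         # leaving variable (min-ratio test)
        T[row] /= T[row, col]                # pivot to the adjacent corner
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    for i, bv in enumerate(basis):
        x[bv] = T[i, -1]
    return x[:n], T[-1, -1]

# Classic textbook instance: max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
x, value = simplex(np.array([3.0, 5.0]),
                   np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                   np.array([4.0, 12.0, 18.0]))
print(x, value)                              # optimum (2, 6), objective 36
```

Each pass through the loop moves to an adjacent corner with a strictly better objective, which is exactly the behaviour described above.<br />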

A completely different strategy to search for the optimum is to search<br />

the interior of the feasible region and approach the optimal corner (or one of<br />

them) from within, so to speak. It may seem an advantage of the<br />

Simplex method that it searches the region where the solution is known to lie.<br />

A watershed in the history of LP took place in 1984 when Narendra<br />

Karmarkar’s algorithm was presented at a conference (Karmarkar, 1984a)<br />

and published later the same year in a slightly revised version (Karmarkar,<br />

1984b). Karmarkar’s algorithm, which the New York Times found worthy<br />

of front-page news as a great scientific breakthrough, is such an “interior”



method, searching the interior of the feasibility area in the direction of the<br />

optimal point.<br />

This idea had, however, been pursued by Ragnar Frisch. To the best of<br />

my knowledge this was first noted by Roger Koenker a few years ago:<br />

“But it is an interesting irony, illustrating the spasmodic progress of science,<br />

that the most fruitful practical formulation of the interior point revolution<br />

of Karmarkar (1984) can be traced back to a series of Oslo working<br />

papers by Ragnar Frisch in the early 1950s.” (Koenker, 2000).<br />

2. Linear Programming in Economics<br />

Linear programming has a somewhat curious relationship with academic<br />

economics. Few would today consider LP as a part of economic science. But<br />

LP was, so to say, launched within economics, or even within econometrics,<br />

and given much attention by leading economists for about 10 years or so.<br />

After around 1960, the ties to economics were severed. Linear programming<br />

disappeared from the economics curriculum and lost the attention of<br />

academic economists. It belonged from then on to operations research and<br />

management science on one hand and to computer science on the other.<br />

It is hardly possible to give an exact date for when “LP”<br />

was born or first appeared. It originated at around the same time as game<br />

theory with which it shares some features, and also at the time of some<br />

applied problems such as the transportation problem and the diet problem.<br />

Linear programming has various roots and forerunners in economics and<br />

mathematics in attempts to deal with economic or other problems using<br />

linear mathematical techniques. One such forerunner, if only a slight one, was<br />

Wassily Leontief’s input-output analysis. Leontief developed his “closed<br />

model” in the 1930s in an attempt to give empirical content to Walrasian<br />

general equilibrium. Leontief’s equilibrium approach was transformed in<br />

his cooperation with the US Bureau of Labor Statistics to the open input-output<br />

model, see Kohli (2001).<br />

In the early post-war years, LP and input-output analysis seemed within<br />

economics to be two sides of the same coin. The two terms were, for<br />

a short time, even used interchangeably. The origin of LP could be set<br />

to 1947, which, as already mentioned, was when Dantzig developed the<br />

Simplex algorithm. If we state the question as to when “LP” was first<br />

used in its current meaning in the title of a paper presented at a scientific<br />

meeting, the answer is to the best of my knowledge 1948. At<br />

the Econometric Society meeting in Cleveland at the end of December



1948, Leonid Hurwicz presented a paper on LP and the theory of optimal<br />

behavior with the first paragraph providing a concise definition for<br />

economists:<br />

“The term linear programming is used to denote a problem of a type<br />

familiar to economists: maximization (or minimization) under restrictions.<br />

What distinguishes linear programming from other problems in<br />

this class is that both the function to be maximized and the restrictions<br />

(equalities or inequalities) are linear in the variables.” (Hurwicz, 1949).<br />

Hurwicz’s paper appeared in a session on LP, which must have been<br />

the first ever with that title. One of the other two papers in the session was<br />

by Wood and Dantzig and discussed the problem of selecting the “best”<br />

method of accomplishing any given set of objectives within given restrictions<br />

as an LP problem, using an airlift operation as an illustrative example. The<br />

general model was presented as an “elaboration of Leontief’s input-output<br />

model.”<br />

At an earlier meeting of the Econometric Society in 1948, Dantzig had<br />

presented the general idea of LP in a symposium on game theory. Koopmans<br />

had, at the same meeting, presented his general activity analysis production<br />

model. The conference papers of both Wood and Dantzig appeared in<br />

Econometrica in 1949. Dantzig (1949) mentioned as examples of problems<br />

for which the new technique could be used, Stigler (1945), known in the<br />

literature as having introduced the “diet problem,” and a paper by Koopmans<br />

on the “transportation problem,” presented at an Econometric Society<br />

meeting in 1947 (Koopmans, 1948).<br />

Dantzig (1949) stated the LP problem, not yet in standard format, but the<br />

solution technique was not discussed. The paper referred not only to Leontief’s<br />

input-output model but also to John von Neumann’s work on economic equilibrium<br />

growth, i.e., to recent papers firmly within the realm of economics.<br />

Wood and Dantzig (1949) in the same issue stated an airlift problem.<br />

In 1949, Cowles Commission and RAND jointly arranged a conference<br />

on LP. The conference meant a breakthrough for the new optimization<br />

technique and had prominent participation. At the conference were<br />

the economists Tjalling Koopmans, Paul Samuelson, Kenneth Arrow, Leonid<br />

Hurwicz, Robert Dorfman, Abba Lerner and Nicholas Georgescu-Roegen,<br />

the mathematicians Al Tucker, Harold Kuhn and David Gale, and several military<br />

researchers including Dantzig.<br />

Dantzig had discovered the Simplex method but admitted many years<br />

later that he had not really realized how important this discovery was. Few<br />

people had a proper overview of linear models to place the new discovery in



context, but one of the few was John von Neumann, at the time an authority<br />

on a wide range of problems from nuclear physics to the development<br />

of computers. Dantzig decided to consult him about his work on solution<br />

techniques for the LP problem:<br />

“I decided to consult with the “great” Johnny von Neumann to see what<br />

he could suggest in the way of solution techniques. He was considered<br />

by many as the leading mathematician in the world. On October 3, 1947<br />

I visited him for the first time at the Institute for Advanced Study at<br />

Princeton. I remember trying to describe to von Neumann, as I would to<br />

an ordinary mortal, the Air Force problem. I began with the formulation<br />

of the linear programming model in terms of activities and items, etc. Von<br />

Neumann did something, which I believe was uncharacteristic of him.<br />

“Get to the point,” he said impatiently. Having at times a somewhat low<br />

kindling point, I said to myself “O.K., if he wants a quicky, then that’s what<br />

he’ll get.” In under one minute I slapped the geometric and the algebraic<br />

version of the problem on the blackboard. Von Neumann stood up and<br />

said “Oh that!” Then for the next hour and a half, he proceeded to give<br />

me a lecture on the mathematical theory of linear programs.” (Dantzig,<br />

1984).<br />

Von Neumann could immediately recognize the core issue as he saw the<br />

similarity with game theory. The meeting was the first time Dantzig<br />

heard about duality and Farkas’ lemma.<br />

One could underline how LP once seemed firmly anchored in economics<br />

by pointing to the number of Nobel Laureates in economics who<br />

undertook studies involving LP. These comprise Samuelson and Solow, coauthors<br />

with Dorfman of Dorfman, Samuelson and Solow (1958), Leontief<br />

who applied LP in his input-output models, and Koopmans and Stigler,<br />

originators of the transportation problem and the diet problem, respectively.<br />

One also has to include Frisch, as we shall look at in more detail below,<br />

Arrow, co-author of Arrow et al. (1958), Kantorovich, who is known to<br />

have formulated the LP problem already in 1939, Modigliani, and Simon,<br />

who were both co-authors of Holt et al. (1956). Finally, to be included<br />

are also all the three Nobel Laureates in economics for 1990: Markowitz<br />

et al. (1957), Charnes et al. (1959) and Sharpe (1967). Koopmans and<br />

Kantorovich shared the Nobel Prize in economics for 1975 for their work<br />

in LP. John von Neumann died long before the Nobel Prize in economics<br />

was established, and George Dantzig never got the prize he ought to have<br />

shared with Koopmans and Kantorovich in 1975 for reasons that are not<br />

easy to understand.



Among all these Nobel Laureates, it seems that Ragnar Frisch had the<br />

deepest involvement with LP. But he published little of this work, and his achievements<br />

went largely unrecognized.<br />

3. Frisch and Programming, Pre-War Attempts<br />

Ragnar Frisch had a strong mathematical background and a passion for<br />

numerical analysis. He had been a co-founder of the Econometric Society<br />

in 1930 and to him econometrics meant both the formulation of economic<br />

theory in a precise way by means of mathematics and the development<br />

of methods for confronting theory with empirical data. He wrote both<br />

of these aims into the constitution of the Econometric Society as rendered<br />

for many years in every issue of Econometrica. He became editor of<br />

Econometrica from its first issue in 1933 and held this position for more than<br />

20 years.<br />

His impressive work in several fields of econometrics must be left<br />

uncommented here. Frisch pioneered the use of matrix algebra in econometrics<br />

in 1929 and had introduced many other innovations in the use of<br />

mathematics for economic analysis.<br />

Frisch’s most active and creative period as a mathematical economist coincided<br />

with the Great Depression, which he observed at close range both<br />

in the United States and in Norway. In 1934, he published an article<br />

in Econometrica (Frisch, 1934a) where he discussed the organization of<br />

a national exchange organization that could take over trade<br />

when markets had collapsed due to the depression, see Bjerkholt and Knell<br />

(2006). In many ways, the article may seem naïve with regard to the subject<br />

matter, but it had a very sophisticated mathematical underpinning.<br />

The article was 93 pages long, the longest regular article ever published<br />

in Econometrica. Frisch had also published the article without letting it<br />

undergo a proper referee process. Perhaps it was the urgency of getting it<br />

circulated that caused him to make this mistake, for which he was rebuked<br />

by some of his fellow econometricians. The article was written literally<br />

squeezed in between Frisch’s two most famous econometric contributions<br />

in the 1930s, his business cycle theory (Frisch, 1933) and his confluence<br />

theory (Frisch, 1934b).<br />

Frisch (1934a) developed a linear structure representing inputs for production<br />

in different industries, rather similar to the input-output model<br />

of Leontief, although the underlying idea and motivation was quite different.<br />

Frisch then addressed the question of how the input needs could



be modified in the least costly way in order to achieve a higher total<br />

production capacity of the entire economy. The variables in his problem<br />

(x i ) represented (relative) deviations from observed input coefficients.<br />

He formulated a cost function as a quadratic function of these deviations.<br />

His reasoning thus led to a quadratic target function to be minimized<br />

subject to a set of linear constraints. The problem was stated as<br />

follows:<br />

min Φ = (1/2)[x_1²/ε_1 + x_2²/ε_2 + ··· + x_n²/ε_n]    wrt. x_k, k = 1, 2, ..., n<br />

subject to: ∑_k f_ik x_k ≥ (C* − a_i0)/P_i,    i = 1, ..., n<br />

where C* is the set target of overall production and the constraints express the condition<br />

that the deviations allow this target value to be achieved.<br />

By solving for alternative values of C*, the function Φ = Φ(C*),<br />

representing the cost of achieving the production level C*, could be mapped.<br />

Frisch naturally did not possess an algorithm for this problem. This did<br />

not prevent him from working out a numerical solution of an illuminating<br />

example and making various observations on the nature of such optimising<br />

problems. His solution of the problem for a given example with n = 3,<br />

determining Φ as a function of C*, ran over 12 pages in Econometrica, see<br />

Bjerkholt (2006).<br />
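With a modern solver, the mapping Φ = Φ(C*) can be reproduced in spirit. The sketch below uses scipy’s constrained minimizer on invented data; none of the numbers for f, ε, P or a_i0 come from Frisch’s actual example:<br />

```python
import numpy as np
from scipy.optimize import minimize

# All numbers below are invented placeholders (n = 3); Frisch's actual
# coefficients f_ik, tolerances eps_k, scales P_i and base values a_i0
# are not reproduced here.
f = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])
eps = np.array([1.0, 2.0, 0.5])
P = np.array([1.0, 1.0, 1.0])
a0 = np.array([0.2, 0.1, 0.3])

def phi_of_C(C_star):
    """Min (1/2) sum_k x_k^2 / eps_k  s.t.  sum_k f_ik x_k >= (C* - a_i0)/P_i."""
    obj = lambda x: 0.5 * np.sum(x ** 2 / eps)
    cons = [{"type": "ineq",
             "fun": lambda x, i=i: f[i] @ x - (C_star - a0[i]) / P[i]}
            for i in range(len(a0))]
    return minimize(obj, np.zeros(3), constraints=cons).fun

# Mapping Phi(C*) over alternative production targets, as Frisch did by hand:
for C in (0.5, 1.0, 1.5):
    print(C, phi_of_C(C))    # the cost Phi rises with the target C*
```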

Another problem Frisch worked on before the war with connections<br />

to LP was the diet problem. One of Frisch’s students worked on a dissertation<br />

on health and nutrition, drawing on various investigations of the<br />

nutritional value of alternative diets. Frisch supervised the dissertation in<br />

the years immediately before World War II when it led him to formulate<br />

the diet problem, years before Stigler (1945). Frisch’s work was published<br />

as the introduction to his student’s monograph (Frisch, 1941). In a footnote,<br />

he gave a somewhat rudimentary LP formulation of the diet problem,<br />

cf. Sandmo (1993).<br />

4. Frisch and Linear Programming<br />

Frisch was not present at the Econometric Society meetings in 1948 mentioned<br />

above; neither did he participate in the Cowles Commission-RAND<br />

conference in 1949. He could still survey and follow these developments<br />

rather closely. He was still the editor of Econometrica and thus had accepted<br />

and surely studied the two papers by Dantzig in 1949. He had contact with<br />

Cowles Commission in Chicago, its two successive research directors in



these years, Jacob Marschak and Tjalling Koopmans, and other leading<br />

members of the Econometric Society. Frisch visited the United States<br />

frequently in the early post-war years, primarily due to his position as chairman<br />

of the UN Sub-Commission for Employment and Economic Stability.<br />

The new developments over the whole range of related areas of linear techniques<br />

had surely caught his interest.<br />

He first touched upon LP in lectures to his students at the University of<br />

Oslo in September 1950. Two students drafted lecture notes that Frisch reviewed<br />

and approved. In these early lectures Frisch used “input-output<br />

analysis” and “LP” as almost synonymous concepts. The content of these<br />

early lectures was input-output analysis with emphasis on optimisation by<br />

means of LP, e.g., optimisation under a given import constraint or given<br />

labor supply; they did not really address LP techniques as a separate issue.<br />

Still these lectures may well have been among the first applying LP to<br />

the macroeconomic problems of the day in Western Europe, years before<br />

Dorfman et al. (1958) popularised this tool for economists’ benefit. One<br />

major reason for Frisch’s pioneering efforts here was, of course, that the<br />

new ideas touched or even overlapped with his own pre-war attempts as<br />

discussed above. In the very first post-war years, he had indeed applied<br />

the ideas of Frisch (1934a) to the problem of achieving the largest possible<br />

multilateral balanced trade in the highly constrained world economy<br />

(Frisch, 1947; 1948).<br />

The first trace of LP as a topic in its own right on Frisch’s agenda<br />

was a single lecture he gave to his students on February 15th 1952. Leif<br />

Johansen, who was Frisch’s assistant at the time, took notes that were issued<br />

in the Memorandum series two days later (Johansen, 1952). The lecture<br />

gave a brief mathematical statement of the LP problem and then as an<br />

important example discussed the problem related to producing different<br />

qualities of airplane fuel from crude oil derivatives, with reference to an<br />

article by Charnes et al. in Econometrica. Charnes et al. (1952) is, indeed,<br />

a famous contribution in the LP literature as the first published paper on<br />

an industrial application of LP. A special thing about Frisch’s use of<br />

this example was, however, that his lecture did not refer the students to<br />

an article that had already appeared in Econometrica, but to one that was<br />

forthcoming! Frisch had apparently been so eager to present these ideas<br />

that he had simply used (or perhaps misused) his editorial prerogative in<br />

lecturing and issuing lecture notes on an article not yet published!<br />

After the singular lecture on LP in February 1952, Frisch did not lecture<br />

on this topic till the autumn term in 1953, as part of another lecture series<br />

on input-output analysis. In these lectures, he went by way of introduction



through a number of concrete examples of problems that were amenable<br />

to solution by LP, most of which have already been mentioned above. When<br />

he came to the diet problem, it was not with reference to Stigler (1945), but<br />

to Frisch (1941). He exemplified, not least, with reconstruction in Norway,<br />

the building of dwellings under the tight post-war constraints and various<br />

macroeconomic problems.<br />

In his brief introduction to LP, Frisch in a splendid pedagogical way<br />

used examples, amply illustrated with illuminating figures, and appealed to<br />

geometric intuition. He started out with cost minimization of a potato and<br />

herring diet with minimum requirements of fat, proteins and vitamin B, and<br />

then moved to macroeconomic problems within an input-output framework,<br />

somewhat similar in tenor to Dorfman (1958), still in the offing.<br />
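Frisch’s two-food diet exercise translates directly into a small LP. The prices and nutrient figures below are invented placeholders, since his Norwegian data are not reproduced here:<br />

```python
from scipy.optimize import linprog

# Hypothetical data: price and nutrient content per kg of potatoes and
# herring, invented only to show the structure of Frisch's two-food LP.
cost = [2.0, 8.0]                       # price per kg: potatoes, herring
nutrients = [[0.1, 12.0],               # fat per kg of each food
             [2.0, 18.0],               # protein per kg
             [0.5, 3.0]]                # vitamin B per kg
requirement = [6.0, 30.0, 4.0]          # daily minimum of each nutrient

# linprog expects A_ub x <= b_ub, so flip the signs of the ">=" constraints
A_ub = [[-a for a in row] for row in nutrients]
b_ub = [-r for r in requirement]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub)   # bounds default to x >= 0
print(res.x, res.fun)                   # cheapest diet meeting all minimums
```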

Frisch confined his discussion, however, to the formulation of the LP<br />

problem and the characteristics of the solution; he did not enter into solution<br />

techniques. But clearly he had already put considerable work into this,<br />

because after mentioning Dantzig’s Simplex method, referring to Dantzig<br />

(1951), Dorfman (1951) and Charnes et al. (1953), he added that there<br />

were two other methods, the logarithm potential method and the basis<br />

variable method, both developed at the Institute of Economics. He added<br />

the remark that when the number of constraints was moderate and thus the<br />

number of degrees of freedom large, the Simplex method might well be<br />

the most efficient one. But in other cases, the alternative methods would be<br />

more efficient. Needless to say, there were no other sources which referred<br />

to these methods at this time. Frisch’s research work on LP methods and<br />

alternatives to the Simplex method, can be followed almost step by step<br />

thanks to his working style of incorporating new results in the Institute<br />

Memorandum series. Between the middle of October 1953 and May 1954,<br />

he issued 10 memoranda. They were all in Norwegian, but Frisch was clearly<br />

preparing to present his work. As was to be expected, Frisch’s work on<br />

LP was sprinkled with new terms, some of them adapted from earlier works.<br />

Some of the memoranda comprised comprehensive numerical experiments.<br />

The memoranda were the following:<br />

18 October 1953 Logaritmepotensialmetoden til løsning av lineære<br />

programmeringsproblemer [The logarithm potential<br />

method for solving LP problems], 11 pages.<br />

7 November 1953 Litt analytisk geometri i flere dimensjoner [Some<br />

multi-dimensional analytic geometry], 11 pages.




13 November 1953 Substitumalutforming av logaritmepotensialmetoden til<br />

løsning av lineære programmeringsproblemer<br />

[Substitumal formulation of the logarithmic potential<br />

method for solving LP problems], 3 pages.<br />

14 January 1954 Notater i forbindelse med logaritmepotensialmetoden til<br />

løsning av lineære programmeringsproblemer [Notes<br />

in connection with the logarithmic potential method<br />

for solving LP problems], 48 pages.<br />

7 March 1954 Finalhopp og substitumalfølging ved lineær<br />

programmering [Final jumps and moving along the<br />

substitumal in LP], 8 pages.<br />

29 March 1954 Trunkering som forberedelse til finalhopp ved lineær<br />

programmering [Truncation as preparation for a final<br />

jump in LP], 6 pages.<br />

27 April 1954 Et 22-variables eksempel på anvendelse av<br />

logaritmepotensialmetoden til løsning av lineære<br />

programmeringsproblemer [A 22 variable example in<br />

application of the logarithmic potential method for the<br />

solution of LP problems].<br />

1 May 1954 Basisvariabel-metoden til løsning av lineære<br />

programmeringsproblemer [The basis-variable<br />

method for solving LP problems], 21 pages.<br />

5 May 1954 Simplex-metoden til løsning av lineære<br />

programmeringsproblemer [The Simplex method for<br />

solving LP problems], 14 pages.<br />

7 May 1954 Generelle merknader om løsningsstrukturen ved det<br />

lineære programmeringsproblem [General remarks<br />

on the solution structure of the LP problem],<br />

12 pages.<br />

At this stage, Frisch was ready to present his ideas internationally. This<br />

happened first at a conference in Varenna in July 1954, subsequently issued<br />

as a memorandum. During the summer, Frisch added another couple of<br />

memoranda. Then he went to India, invited by Pandit Nehru, and spent the<br />

entire academic year 1954–55 at the Indian Statistical Institute, directed<br />

by P. C. Mahalanobis, to take part in the preparation of the new five-year<br />

plan for India. While he was away, he sent home the MSS for three more



memoranda. These six papers were the following:<br />

June 21st 1954 Methods of solving LP problems, Synopsis of lecture to be<br />

given at the International Seminar on Input-Output<br />

Analysis, Varenna (Lake Como), June–July 1954,<br />

91 pages.<br />

August 13th 1954 Nye notater i forbindelse med logaritmepotensialmetoden<br />

til løsning av lineære programmeringsproblemer [New<br />

notes in connection with the logarithmic potential<br />

method for solving LP problems], 74 pages.<br />

August 23rd 1954 Merknad om formuleringen av det lineære<br />

programmeringsproblem [A remark on the formulation of<br />

the LP problem], 3 pages.<br />

October 18th 1954 Principles of LP. With particular reference to the double<br />

gradient form of the logarithmic potential method,<br />

219 pages.<br />

March 29th 1955 A labour saving method of performing freedom truncations<br />

in international trade. Part I, 21 pages.<br />

April 15th 1955 A labour saving method of performing freedom truncations<br />

in international trade. Part II, 8 pages.<br />

Before he left for India he had already accepted an invitation to present<br />

his ideas on programming in Paris. This he did on three different occasions<br />

in May–June 1955. He presented in French, but issued synopses in English<br />

in the memorandum series. Thus, his Paris presentations were the following:<br />

7th May 1955 The logarithmic potential method for solving linear<br />

programming problems, 16 pp. Synopsis of an exposition to<br />

be made on 1 June 1955 in the seminar of Professor René<br />

Roy, Paris (published as Frisch, 1956b).<br />

13 May 1955 The logarithmic potential method of convex programming. With<br />

particular application to the dynamics of planning for<br />

national developments, 35 pp. Synopsis of a presentation to<br />

be made at the international colloquium in Paris, 23–28 May<br />

1955 (published as Frisch, 1956c).<br />

25 May 1955 Linear and convex programming problems studied by means of<br />

the double gradient form of the logarithmic potential method,<br />

16 pp. Synopsis of a presentation to be given in the seminar of<br />

Professor Allais, Paris, 26 May 1955.



After this, Frisch published some additional memoranda on LP. He<br />

launched a new method, named the Multiplex method, towards the end of<br />

1955, later published in a long article in Sankhya (Frisch, 1955; 1957).<br />

Worth mentioning is also his contribution to the festschrift for Erik Lindahl<br />

in which he discussed the logarithmic potential method and linked it to the<br />

general problem of macroeconomic planning (Frisch, 1956a; 1956d).<br />

From this time, Frisch’s efforts subsided with regard to pure LP problems,<br />

as he concentrated on non-linear and more complex optimisation,<br />

drawing naturally on the methods he had developed for LP.<br />

As Frisch managed to publish his results only to a limited degree and<br />

not very prominently there are few traces of Frisch in the international<br />

literature. Dantzig (1963) has, however, references to Frisch’s work. Frisch<br />

also corresponded with Dantzig, Charnes, von Neumann and others about<br />

LP methods. Extracts from such correspondence are quoted in some of the<br />

memoranda.<br />

5. Frisch’s Radar: An Anticipation of Karmarkar’s Result<br />

In 1972, severe doubts were created about the ability of the Simplex method<br />

to solve ever larger problems. Klee and Minty (1972), who we must<br />

assume knew very well how the Simplex algorithm worked, constructed<br />

an LP problem which, when attacked by the Simplex method, turned out<br />

to need a number of elementary operations which grew exponentially in<br />

the number of variables in the problem, i.e., the algorithm had exponential<br />

complexity. Exponential complexity in the algorithm is for obvious reasons<br />

a rather ruinous property when attacking large problems.<br />

In practice, however, the Simplex method did not seem to confront such<br />

problems, but the demonstration of Klee and Minty (1972) established an<br />

exponential worst-case complexity, implying that what had been<br />

largely assumed, namely that the Simplex method had only polynomial<br />

complexity, could never be proved.<br />
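For reference, one common later parameterization of the Klee–Minty construction (not necessarily Klee and Minty’s original scaling) can be written down and solved directly; modern solvers handle it easily even though the classical Dantzig pivot rule visits exponentially many corners:<br />

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """One common statement of the Klee-Minty cube:
    max sum_j 2^(n-j) x_j  s.t.  2*sum_{j<i} 2^(i-j) x_j + x_i <= 5^i, x >= 0."""
    c = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)
        A[i - 1, i - 1] = 1.0
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(6)
res = linprog(-c, A_ub=A, b_ub=b)     # linprog minimizes, so negate c
print(-res.fun)                       # optimal value is 5^6 = 15625
```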

A response to Klee and Minty’s result came in 1978 from the Russian<br />

mathematician Khachiyan (published in English in 1979 and 1980).<br />

He used a so-called “ellipsoid” method for solving the LP problem, a<br />

method developed in the 1970s for solving convex optimisation problems by<br />

the Russian mathematicians Shor, Yudin and Nemirovskii. Khachiyan<br />

managed to prove that this method could indeed solve LP problems and<br />

had polynomial complexity as the time needed to reach a solution increased<br />

with the fourth power of the size of the problem. But the method itself was<br />



hopelessly inefficient compared to the Simplex method and was in practice<br />

never used for LP problems.<br />

A few years later, Karmarkar presented his algorithm (Karmarkar,<br />

1984a; b). It has polynomial complexity, not only of a lower degree than<br />

Khachiyan’s method, but was also asserted to be more efficient than the Simplex<br />

method. The news about Karmarkar’s path-breaking result became front-page<br />

news in the New York Times. For a while, there was some confusion as<br />

to whether Karmarkar’s method really was more effective than the Simplex<br />

method, partly because Karmarkar had used a somewhat special formulation<br />

of the LP problem. But it was soon confirmed that for sufficently large<br />

problems, the Simplex method was less effcient than Karmarkar’s result.<br />

Karmarkar’s contribution initiated a hectic new development towards even<br />

better methods.<br />

Karmarkar's approach may be said to be to treat the LP as just another convex optimization problem, rather than exploiting the fact that the solution must be found in a corner by searching only corner points. One of those who improved Karmarkar's method, Clovis C. Gonzaga, who for a while was in the lead with regard to possessing the best method with respect to polynomial complexity, said the following about Karmarkar's approach:

“Karmarkar’s algorithm performs well by avoiding the boundary of the<br />

feasible set. And it does this with the help of a classical resource, first<br />

used in optimization by Frisch in 1955: the logarithmic barrier function:<br />

x ∈ R^n, x > 0 → p(x) = −∑ log x_i.

This function grows indefinitely near the boundary of the feasible set S,<br />

and can be used as a penalty attached to these points. Combining p(·)<br />

with the objective makes points near the boundary expensive, and forces<br />

any minimization algorithm to avoid them.” (Gonzaga, 1992)<br />

Gonzaga's reference to Frisch was surprising; it was to the not very accessible memorandum version in English of one of the three Paris papers.¹ Frisch had indeed introduced the logarithmic barrier function in his logarithmic potential method. Frisch's approach was summarized by Koenker as follows:

“The basic idea of Frisch (1956b) was to replace the linear inequality constraints of the LP by what he called a log barrier, or potential function. Thus, in place of the canonical linear program

1 In Gonzaga's reference, the author's name was given as K.R. Frisch, which can also be found in other references to Frisch in recent literature, suggesting that these references had a common source.


(1) min{c′x | Ax = b, x ≥ 0}

we may associate the logarithmic barrier reformulation

(2) min{B(x, µ) | Ax = b}

where

(3) B(x, µ) = c′x − µ ∑ log x_k.

Some Unresolved Problems 17<br />

In effect, (2) replaces the inequality constraints in (1) by the penalty term<br />

of the log barrier. Solving (2) with a sequence of parameters µ such that<br />

µ → 0 we obtain in the limit a solution to the original problem (1).”<br />

(Koenker, 2000, p. 20)<br />
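The central-path idea in Koenker's summary can be illustrated on a toy LP. The sketch below (Python; the problem and all numbers are invented for illustration) minimizes c′x = x1 + 2x2 subject to x1 + x2 = 1, x ≥ 0, by minimizing the barrier function B(x, µ) = c′x − µ∑log x_k on the feasible segment for a decreasing sequence of µ:

```python
import math

def barrier_argmin(mu, tol=1e-12):
    """Minimize B(t) = t + 2*(1 - t) - mu*(log t + log(1 - t)) on (0, 1).

    This is the log-barrier reformulation of: min x1 + 2*x2 subject to
    x1 + x2 = 1, x >= 0, after eliminating x2 = 1 - t.  B is strictly
    convex, so we locate the zero of B'(t) by bisection.
    """
    def dB(t):
        return -1.0 - mu / t + mu / (1.0 - t)

    lo, hi = 1e-15, 1.0 - 1e-15      # stay strictly inside the boundary
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dB(mid) < 0.0:            # B still decreasing: move right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Follow the central path: as mu -> 0 the barrier minimizers approach
# the LP solution, which sits at the corner x = (1, 0), value 1.
mu = 1.0
for _ in range(12):
    t = barrier_argmin(mu)
    mu *= 0.1

x1, x2 = t, 1.0 - t
print(x1, x2, x1 + 2.0 * x2)
```

Every iterate stays strictly in the interior of the feasible region; only the limit touches the corner, which is exactly the "radar" behaviour Frisch describes.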

Frisch had not provided exact proofs and, above all, he had not published properly. But he had the exact same idea as Karmarkar came up with almost 30 years later. Why did he not make his results better known? In fact, there are other, even more important cases of Frisch not publishing his work. The reason might be that his investigations had not yet come to an end. In the programming work, Frisch pushed on to more complex programming problems, spurred by the possibility of using first- and second-generation computers. They might have seemed powerful to him at that time, but they hardly had the capacity to match Frisch's ambitions. Another reason for his results remaining largely unknown was perhaps that he was too far ahead of his time. The Simplex method had not yet been challenged by problems large enough to necessitate better methods. The problem may have been seen not so much as a question of efficient algorithms as one of powerful enough computers.

We finish off with Frisch's nice illustrative description of his method, in the only account of the logarithmic potential method that he got properly published (though unfortunately not very well distributed):

“Ma méthode d’approche est d’une espèce toute différente. Dans cette<br />

méthode nous travaillons systématiquement à l’intérieur de la région<br />

admissible et utilisons un potentiel logarithmique comme un guide–<br />

une sorte du radar–pour nous éviter de traverser la limite.” (Frisch,<br />

1956b, p. 13)<br />

The corresponding quote in the memorandum synoptic note is:<br />

“My method of approach is of an entirely different sort [than the Simplex<br />

method]. In this method we work systematically in the interior of the<br />

admissible region and use a logarithmic potential as a guiding device — a<br />

sort of radar — to prevent us from hitting the boundary.” (Memorandum<br />

of 7 May 1955, p. 8)



References<br />

Arrow, KJ, L Hurwicz and H Uzawa (1958). Studies in Linear and Non-Linear Programming.<br />

Stanford, CA: Stanford University Press.<br />

Bjerkholt, O and M Knell (2006). Ragnar Frisch and input-output analysis. Economic Systems Research 18, (forthcoming).

Charnes, A, WW Cooper and A Henderson (1953). An Introduction to Linear Programming.<br />

New York and London: John Wiley & Sons.<br />

Charnes, A, WW Cooper and B Mellon (1952). Blending aviation gasolines — a study in programming interdependent activities in an integrated oil company. Econometrica 20,

135–159.<br />

Charnes, A, WW Cooper and MW Miller (1959). An application of linear programming to<br />

financial budgeting and the costing of funds. The Journal of Business 20, 20–46.<br />

Dantzig, GB (1949). Programming of interdependent activities: II mathematical model.<br />

Econometrica 17, 200–211.<br />

Dantzig, GB (1951). Maximization of a linear function of variables subject to linear inequalities.<br />

In Activity Analysis of Production and Allocation, T Koopmans (ed.), New York<br />

and London: John Wiley & Sons, pp. 339–347.<br />

Dantzig, GB (1963). Linear Programming and Extensions. Princeton, NJ: Princeton<br />

University Press.<br />

Dantzig, GB (1984). Reminiscences about the origin of linear programming. In<br />

Mathematical Programming, RW Cottle, ML Kelmanson and B Korte (eds.),<br />

Amsterdam: Elsevier (North-Holland), pp. 217–226.<br />

Dorfman, R (1951). An Application of Linear Programming to the Theory of the Firm.<br />

Berkeley: University of California Press.<br />

Dorfman, R, PA Samuelson and RM Solow (1958). Linear Programming and Economic Analysis. New York: McGraw-Hill.
Frisch, R (1933). Propagation problems and impulse problems in economics. In Economic Essays in Honor of Gustav Cassel, R Frisch (ed.), London: Allen & Unwin, pp. 171–205.
Frisch, R (1934a). Circulation planning: proposal for a national organization of a commodity and service exchange. Econometrica 2, 258–336 and 422–435.
Frisch, R (1934b). Statistical Confluence Analysis by Means of Complete Regression Systems. Publikasjon nr 5, Oslo: Institute of Economics.
Frisch, R (1941). Innledning. In Kosthold og levestandard. En økonomisk undersøkelse, K Getz Wold (ed.), Oslo: Fabritius og Sønners Forlag, pp. 1–23.
Frisch, R (1947). On the need for forecasting a multilateral balance of payments. The American Economic Review 37(1), 535–551.
Frisch, R (1948). The problem of multicompensatory trade. Outline of a system of multicompensatory trade. The Review of Economics and Statistics 30, 265–271.
Frisch, R (1955). The multiplex method for linear programming. Memorandum from Institute of Economics, Oslo: University of Oslo (17 October 1955).
Frisch, R (1956a). Macroeconomics and linear programming. Memorandum from Institute of Economics, Oslo: University of Oslo (10 January 1956). (Also published shortened and simplified as Frisch (1956d).)

Frisch, R (1956b). La résolution des problèmes de programme linéaire par la méthode du potentiel logarithmique. In Cahiers du Séminaire d'Econometrie: No 4 — Programme Linéaire — Agrégation et Nombre Indices, pp. 7–20. (The publication also gives an extract of the oral discussion in René Roy's seminar, pp. 20–23.)

Frisch, R (1956c). Programme convexe et planification pour le développement national.<br />

Colloques Internationaux du Centre National de la Recherche Scientifique LXII. Les<br />

modèles dynamiques. Conférence à Paris, mai 1955, 47–80.<br />

Frisch, R (1956d). Macroeconomics and linear programming. In 25 Economic Essays in Honour of Erik Lindahl, 21 November 1956, R Frisch (ed.), pp. 38–67. Stockholm: Ekonomisk Tidskrift. (Slightly simplified version of Frisch (1956a).)
Frisch, R (1957). The multiplex method for linear programming. Sankhyā: The Indian Journal of Statistics 18, 329–362. (Practically identical to Frisch (1955).)

Gonzaga, CC (1992). Path-following methods for linear programming. SIAM Review 34,<br />

167–224.<br />

Holt, CC, F Modigliani, JF Muth and HA Simon (1956). Derivation of a linear decision rule<br />

for production and employment. Management Science 2, 159–177.<br />

Hurwicz, L (1949). Linear programming and general theory of optimal behaviour (abstract).<br />

Econometrica 17, 161–162.<br />

Johansen, L (1952). Lineær programming. (Notes from Professor Ragnar Frisch's lecture, 15 February 1952.) Memorandum from Institute of Economics, University of Oslo, 17 February 1952, 15 pp.

Karmarkar, NK (1984). A new polynomial-time algorithm for linear programming. Combinatorica,<br />

4, 373–395.<br />

Klee, V and G Minty (1972). How good is the simplex algorithm? In Inequalities III, O Shisha (ed.), pp. 159–175. New York, NY: Academic Press.

Koenker, R (2000). Galton, Edgeworth, Frisch, and prospects for quantile regression in<br />

econometrics. Journal of Econometrics, 95(2), 347–374.<br />

Kohli, MC (2001). Leontief and the bureau of labor statistics, 1941–54: developing a framework for measurement. In The Age of Economic Measurement, JL Klein and MS Morgan (eds.), pp. 190–212. Durham and London: Duke University Press.

Koopmans, T (1948). Optimum utilization of the transportation system (abstract).<br />

Econometrica 16, 66.<br />

Koopmans, TC (ed.) (1951). Activity Analysis of Production and Allocation. Cowles<br />

Commission Monographs No 13. New York: John Wiley & Sons.<br />

Markowitz, H (1957). The elimination form of the inverse and its application in linear<br />

programming. Management Science 3, 255–269.<br />

Orden, A (1993). LP from the ’40s to the ’90s. Interfaces 23, 2–12.<br />

Sandmo, A (1993). Ragnar Frisch on the optimal diet. History of Political Economy 25,<br />

313–327.<br />

Sharpe, W (1967). A linear programming algorithm for mutual fund portfolio selection.<br />

Management Science 13, 499–511.<br />

Stigler, G (1945). The cost of subsistence. Journal of Farm Economics 27, 303–314.

Wood, MK and GB Dantzig (1949). Programming of interdependent activities: I general<br />

discussion. Econometrica 17, 193–199.




Chapter 2<br />

Methods of Modeling: Econometrics<br />

and Adaptive Control System




Topic 1<br />

A Novel Method of Estimation<br />

Under Co-Integration<br />

Alexis Lazaridis<br />

Aristotle University of Thessaloniki, Greece<br />

1. Introduction<br />

We start with the unrestricted VAR(p):

x_i = δ + µt_i + ∑_{j=1}^{p} A_j x_{i−j} + z d_i + w_i   (1)

where x ∈ E^n (here, E denotes the Euclidean space), t_i and d_i are the trend and dummy, respectively, and w is the noise vector. The matrices of coefficients A_j are defined on E^n × E^n. After estimation, we may compute matrix Π from:

Π = (∑_{j=1}^{p} A_j) − I   (1a)

or, directly, from the corresponding Error Correction VAR (ECVAR), which is presented later. Also, an estimate of matrix Π can be obtained by applying OLS, according to the procedure suggested by Kleibergen and Van Dijk (1994). It is recalled that VAR models of this type, like the one presented above, have been recommended by Sims (1980) as a way to estimate the dynamic relationships among jointly endogenous variables.

It is known that matrix Π can be decomposed such that Π = AC, where the elements of A are the coefficients of adjustment and the rows of the matrix C are the (possible) co-integrating vectors. It should be clarified at the very outset that in many books, C is denoted by β′ (prime shows transposition) and A by α, although capital letters are used for denoting matrices. Besides, in the general linear model y = Xβ + u, β is the vector of coefficients. To avoid any confusion, we adopted the notation used here.



24 Alexis Lazaridis<br />

A straightforward approach to obtain matrices A and C, provided that some rank conditions on Π are satisfied, is to solve the eigenproblem (Johnston and Di Nardo, 1997, pp. 287–296):

Π = VΛV^{−1}   (2)

where Λ is the diagonal matrix of eigenvalues and V is the matrix of the corresponding eigenvectors. Note that since Π is not symmetric, V′ ≠ V^{−1}. Eq. (2) holds if Π is (n × n) and all its eigenvalues are real, which implies that the eigenvectors are real too. In cases where Π is defined on E^n × E^m, with m > n, Eq. (2) is not applicable. If these necessary conditions hold, then from Eq. (2) it follows that A = VΛ and C = V^{−1}. Further, the co-integrating vectors are normalized so that, usually, the first element equals 1. In other words, if c^i_1 (≠ 0) denotes the first element¹ of the ith row of matrix C, i.e., of² c′_{i.}, and a_j the jth column of matrix A, then the normalization process can be summarized as c′_{i.}/c^i_1 and a_i × c^i_1, for i = 1, ..., n.
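As a quick numerical check of Eq. (2) and the normalization step, the sketch below builds a small non-symmetric Π with real eigenvalues by construction (all values are invented for illustration, not taken from any estimated model; numpy is an implementation choice), decomposes it as Π = VΛV^{−1}, and forms A = VΛ and C = V^{−1}:

```python
import numpy as np

# Construct a non-symmetric 3 x 3 Pi with real eigenvalues by similarity:
# Pi = P D P^{-1}, with D diagonal (illustrative values only).
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
D = np.diag([0.5, -0.3, -0.1])
Pi = P @ D @ np.linalg.inv(P)

lam, V = np.linalg.eig(Pi)           # Pi = V Lambda V^{-1}, as in Eq. (2)
A = V @ np.diag(lam)                 # A = V Lambda
C = np.linalg.inv(V)                 # C = V^{-1}
assert np.allclose(A @ C, Pi)

# Normalization: divide each row of C by its first element and rescale
# the corresponding column of A, leaving the product A C unchanged.
c1 = C[:, 0].copy()
C = C / c1[:, None]
A = A * c1
assert np.allclose(A @ C, Pi)
assert np.allclose(C[:, 0], 1.0)     # each candidate vector starts with 1
```
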

It is known that, according to the maximum likelihood (ML) method, matrix C can be obtained by solving the constrained problem:

{min det(I − C Σ̂_k0 Σ̂_00^{−1} Σ̂_0k C′) | C Σ̂_kk C′ = I}   (3)

where the matrices Σ̂_00, Σ̂_0k, Σ̂_kk and Σ̂_k0 are residual covariance matrices of specific regressions (Harris, 1995, p. 78; Johansen and Juselius, 1990). These matrices appear in the last term of the concentrated likelihood of³ Eq. (5), together with C, where Π is replaced by AC.

Note that the minimization of Eq. (3) is with respect to the elements of matrix C. It is recalled that Eq. (3) is minimized by solving a general eigenvalue problem, i.e., finding the eigenvalues from:

|λΣ̂_kk − Σ̂_k0 Σ̂_00^{−1} Σ̂_0k| = 0.   (4)

Applying Cholesky's factorization to the positive definite symmetric matrix Σ̂_kk^{−1}, such that Σ̂_kk^{−1} = LL′ ⇒ Σ̂_kk = (L′)^{−1}L^{−1}, where L is lower triangular, Eq. (4) takes the conventional form⁴:

|λI − L′ Σ̂_k0 Σ̂_00^{−1} Σ̂_0k L| = 0.   (4a)

¹ In some computer programs, c^i_i is taken to be the ith element of the ith row.
² Note that the dot is necessary to distinguish the ith row of C, i.e., c′_{i.}, from the transpose of the ith column of this matrix (i.e., c′_i).
³ Equation (5) is presented below.

After obtaining the matrix V of eigenvectors (orthogonal, so V^{−1} = V′), we can easily compute the (unnormalized) C from C = V′L′. Note also that C Σ̂_kk C′ = V′L′(L′)^{−1}L^{−1}LV = V′V = I. Finally, matrix A is obtained from A = Σ̂_0k C′. Hence, the initial problem reduces to finding the eigenvalues and eigenvectors of a symmetric matrix, which are real; and this is the main advantage of the ML method. According to this procedure, a constant and/or trend can be included in the co-integrating vectors, but it is not clear how to accommodate additional deterministic factors, such as one or more dummies. Besides, nothing is said about the variance of the (disequilibrium) errors which correspond to the finally adopted co-integration vector.
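The reduction from Eq. (4) to the symmetric problem of Eq. (4a) can be verified numerically. In the sketch below, the two covariance-type matrices are randomly generated stand-ins for the Σ̂ matrices (purely illustrative, not estimated from any data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: S_kk plays the role of Sigma_kk (positive definite) and
# M the role of Sigma_k0 Sigma_00^{-1} Sigma_0k (symmetric PSD).
B = rng.normal(size=(3, 3))
S_kk = B @ B.T + 3.0 * np.eye(3)
G = rng.normal(size=(3, 3))
M = G @ G.T

# Direct route: the generalized eigenvalues of |lambda S_kk - M| = 0
# are the eigenvalues of S_kk^{-1} M (real in theory, hence .real).
gen = np.sort(np.linalg.eigvals(np.linalg.inv(S_kk) @ M).real)

# Cholesky route of Eq. (4a): factor S_kk^{-1} = L L' with L lower
# triangular, then solve the ordinary symmetric problem for L' M L.
L = np.linalg.cholesky(np.linalg.inv(S_kk))
sym = np.sort(np.linalg.eigvalsh(L.T @ M @ L))

assert np.allclose(gen, sym)  # same eigenvalues, now from a symmetric matrix
```
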

In the first part of this chapter, we propose a comparatively simple method to directly obtain co-integrating vectors, which may include, apart from the standard deterministic components, any number of dummies in cases where this is necessary. Additionally, as we will see in what follows, the errors corresponding to the proper co-integration vector have the property of minimum variance.

2. Some Estimation Remarks<br />

It was mentioned that, after estimation, matrix Π can be computed from Eq. (1a). It is obvious that each equation of the VAR can be estimated by OLS. At this stage, one should take care to comply with the basic assumptions regarding the error terms, in the sense that proper tests must ensure that the noises are white. In this way, the order p of the model is defined, imposing at the same time restrictions, if necessary, that some elements of the A_j's and of the vectors µ and z be zero.

It can be easily shown that Eq. (1) may be expressed in the form of an equivalent ECVAR, i.e.,

Δx_i = δ + µt_i + ∑_{j=1}^{p−1} Q_j Δx_{i−j} + Π x_{i−p} + z d_i + w_i   (5)

where

Q_j = (∑_{k=1}^{j} A_k) − I.

⁴ It should be noted that it is also necessary to pre-multiply the elements of Eq. (4) by L′ and post-multiply them by L.

3. The Singular Value Decomposition<br />

Matrix Π in Eq. (5) may be augmented to accommodate, as an additional column, the vector δ, µ or z. It is also possible to accommodate all these vectors in the following way:

Π̃ = [Π . δ µ z]   (6)

In this case, Π̃ is defined on E^n × E^m, with m > n. For conformability, the vector x_{i−p} in Eq. (5) should be augmented accordingly by using the linear advance operator L (such that L^k y_i = y_{i+k}), i.e.,

x̃_{i−p} = [x′_{i−p}, 1, L^p t_{i−p}, L^p d_{i−p}]′.

Hence, Eq. (5) takes the form:

Δx_i = ∑_{j=1}^{p−1} Q_j Δx_{i−j} + Π̃ x̃_{i−p} + w_i.   (7)

We can compute the unique generalized inverse (or pseudo-inverse) of Π̃, denoted by Π̃⁺ (Lazaridis and Basu, 1981), from:

Π̃⁺ = VF*U′   (8)

where

V is of dimension (m × m), and its columns are the orthonormal eigenvectors of Π̃′Π̃;

U is (n × m), and its columns are the orthonormal eigenvectors of Π̃Π̃′ corresponding to the largest m eigenvalues of this matrix, so that:

U′U = V′V = VV′ = I_m   (9)

F is diagonal (m × m), with elements f_ii ≡ f_i, called the singular values of Π̃, which are the non-negative square roots of the eigenvalues of Π̃′Π̃.

If the rank of Π̃ is k, i.e., r(Π̃) = k ≤ n, then f_1 ≥ f_2 ≥ ··· ≥ f_k > 0 and f_{k+1} = f_{k+2} = ··· = f_m = 0.

F* is diagonal (m × m), with f*_ii ≡ f*_i = 1/f_i.

It should be noted that all the above matrices are real. Also note that if Π̃ has full row rank, then Π̃⁺ is the right inverse of Π̃, i.e., Π̃Π̃⁺ = I_n. Hence, the singular value decomposition (SVD) of Π̃ is:

Π̃ = UFV′.   (10)
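Equations (8)–(10) can be checked directly on a small example. The 3 × 4 matrix below is hypothetical (full row rank, standing in for an augmented Π̃), and numpy's SVD is used as the computational backend:

```python
import numpy as np

# Hypothetical full-row-rank 3 x 4 matrix standing in for Pi-tilde.
Pi = np.array([[1.0,  0.0, 2.0, -1.0],
               [0.0,  1.0, 1.0,  0.5],
               [2.0, -1.0, 0.0,  1.0]])

U, s, Vt = np.linalg.svd(Pi, full_matrices=False)  # Pi = U diag(s) Vt, Eq. (10)
Pi_plus = Vt.T @ np.diag(1.0 / s) @ U.T            # Pi+ = V F* U', Eq. (8)

assert np.allclose(Pi_plus, np.linalg.pinv(Pi))    # matches numpy's pseudo-inverse
assert np.allclose(Pi @ Pi_plus, np.eye(3))        # right inverse: full row rank
```

With full row rank all singular values are strictly positive, so taking f*_i = 1/f_i is safe; with rank deficiency the zero singular values would have to be mapped to zero instead, as the definition of F* implies.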

4. Computing the Co-Integrating Vectors<br />

We proceed to form a matrix F_1 of dimension (m × n) such that:

f_1(i,j) = √f_i if i = j, and 0 if i ≠ j.   (11)

In accordance with Eq. (11), we next form a matrix F_2 of dimension (n × m). It is verified that F_1F_2 = F. Hence, Eq. (10) can be written as:

Π̃ = AC

where A = UF_1, of dimension (n × n), and C = F_2V′, of dimension (n × m).

After normalization, we get the possible co-integrating vectors as the rows of matrix C, whereas the elements of A can be viewed as approximations to the corresponding coefficients of adjustment.

It is apparent that the co-integrating vectors can always be computed, given that r(Π̃) > 0. Needless to say, the same steps are followed if Π, instead of Π̃, is considered.
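The construction of A and C through F_1 and F_2 can be sketched in a few lines. Below, Π̃ = [Π . δ] is taken from the OLS estimates reported in Eq. (12) of Case Study 1; the code itself, and the use of numpy's SVD, is only an illustration of the steps above:

```python
import numpy as np

# Pi-tilde = [Pi . delta] from Eq. (12) of Case Study 1.
Pi = np.array([[0.059472,  -0.073043,  0.0072427,  0.064873],
               [0.5486237, -0.5244858, -0.028041,  0.16781451],
               [0.3595042, -0.2938160, -0.039400, -0.1554421]])

U, s, Vt = np.linalg.svd(Pi, full_matrices=False)  # Pi = U F V'
root = np.sqrt(s)

A = U * root            # A = U F1: scale column j of U by sqrt(f_j)
C = root[:, None] * Vt  # C = F2 V': scale row i of V' by sqrt(f_i)
assert np.allclose(A @ C, Pi)                      # F1 F2 = F, so A C = Pi

# Normalize each row of C to have first element 1, rescaling the
# corresponding column of A so the product A C is unchanged.
c1 = C[:, 0].copy()
C = C / c1[:, None]
A = A * c1
assert np.allclose(A @ C, Pi)
print(np.round(C, 4))   # rows: candidate co-integrating vectors
```
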

5. Case Study 1: Comparing Co-Integrating Vectors<br />

We first consider the VAR(2) model presented by Holden and Perman (1994). The x ∼ I(1) vector consists of the variables C (consumption), I (income) and W (wealth), so that x ∈ E³. We used the same set of data, presented at the end of that book (Table D3, pp. 196–198). To simplify the presentation, we did not consider the three dummies DD682, DD792, and DD883. Hence, the VAR in this case has a form analogous to Eq. (1), without trend and dummy components.
without trend and dummy components.



Estimating the equations of this VAR by OLS, we get the following results:

Π = [0.059472 −0.073043 0.0072427; 0.5486237 −0.5244858 −0.028041; 0.3595042 −0.2938160 −0.039400],
δ = [0.064873; 0.16781451; −0.1554421].   (12)

Hats (ˆ) are omitted for simplicity. One possible co-integration vector with intercept, obtained by applying the ML method, is:

1   −0.9574698   −0.048531   0.2912925   (13)

which corresponds to the long-run relationship:

C_i = 0.9574698 I_i + 0.048531 W_i − 0.2912925.

For the vector in Eq. (13) to be a co-integration vector, the necessary and sufficient condition is that the (disequilibrium) errors û_i, computed from:

û_i = C_i − 0.9574698 I_i − 0.048531 W_i + 0.2912925   (14)

are stationary, i.e., {û_i} ∼ I(0), since x ∼ I(1). To distinguish them, the ML errors obtained from Eq. (14) will be denoted by uml_i.

Applying SVD to Π̃ = [Π . δ] from Eq. (12), we get matrix C, shown here together with the Euclidean norm of each of its rows:

C = [1 −0.9206225 −0.06545528 0.1110597; 1 0.3586208 −0.6305243 −6.403011; 1 1.283901 −2.050713 0.4300255],
row norms = (1.365344, 6.521098, 2.653064).

It is noted also that the singular values of Π̃ are:

f_1 = 0.8979247, f_2 = 0.2304381 and f_3 = 0.0083471695.

All f_i are less than 1, indicating that at least one row of C can be regarded as a possible co-integration vector, in the sense that the resulting errors are likely to be stationary. This row usually has the smallest norm,



which in this case is the first one. Hence, the errors corresponding to this co-integration vector are obtained from:

û_i = C_i − 0.9206225 I_i − 0.06545528 W_i + 0.1110597.   (15)

These errors will be denoted by usvd_i. It should be pointed out here that the smallest-norm condition is just an indication as to which row of matrix C should be considered first.

It may be useful to recall at this point that the co-integration vector reported by the authors (p. 111), obtained by applying OLS after omitting the first two observations from the initial sample, is:

1   −0.9109   −0.0790   0.1778.

It should be clarified that this vector has been obtained by applying OLS to estimate the long-run relationship directly. Repeating this procedure, we get:

1   −0.910971   −0.079761   0.178455.

Hence, the errors are computed from:

û_i = C_i − 0.910971 I_i − 0.079761 W_i + 0.178455.   (16)

These errors will be denoted by ubook_i.

The next task is to compare the above three error series, having a first look at the corresponding graphs, which are shown in Fig. 1. Although at first glance the three series seem to be stationary, we shall go a bit further and examine their properties in detail. Before going to the next section, it is worth mentioning that for all three series, the normality test using the Jarque-Bera criterion favors the null.

Figure 1. The series {uml_i}, {usvd_i}, and {ubook_i}.



5.1. Testing for Co-integration<br />

Since for the first two cases ∑û_i ≠ 0, we estimated the following regression:

Δû_i = b_1 + b_2 û_{i−1} + ∑_{j=1}^{q} b_{j+2} Δû_{i−j} + ε_i.   (17)

The value of q was set such that the noises ε_i are white, as will be explained in detail later on. Regarding the last case, the intercept is omitted from Eq. (17), since the û_i's are OLS residuals, so that ∑û_i = 0.

The estimation results are as follows (the standard error in parentheses and the t-ratio refer to the coefficient of the lagged level û_{i−1}):

Δûml_i = −0.00612 − 0.381773 uml_{i−1} − 0.281651 Δuml_{i−1},   (0.103688), t = −3.682
Δûsvd_i = 0.006525 − 0.428278 usvd_{i−1} − 0.259066 Δusvd_{i−1},   (0.108269), t = −3.956
Δûbook_i = −0.606759 ubook_{i−1},   (0.09522), t = −6.37.

Since these error series are the result of specific calculations, and in the simplest case are OLS residuals, it is not advisable (Harris, 1995, pp. 54–55) to apply the Dickey-Fuller (DF/ADF) test as we do with any variable in the initial data set. We have to compute the t_u statistic from the MacKinnon (1991, pp. 267–276) critical values (see also Harris, 1995, Table A6, p. 158). This statistic (t_u = φ_∞ + φ_1/T + φ_2/T²), for three (independent) variables in the long-run relationship with intercept, is:

α = 0.01: t_u = −4.45;   α = 0.05: t_u = −3.83;   α = 0.10: t_u = −3.52.

Hence, {ubook_i} is stationary for α = (0.01, 0.05, 0.10), {usvd_i} is stationary for α = (0.05, 0.10) and {uml_i} is stationary only for α = 0.10. Recall that the null [û_i ∼ I(1)] is rejected in favor of H_1 [û_i ∼ I(0)] if t < t_u.



The super-consistency of OLS is obvious from these findings. However, the method we introduce here produces almost minimum-variance errors, which is a quite desirable property, as suggested by Engle and Granger (Dickey et al., 1994, p. 27). Hence, we will mainly focus on this property in the next section.
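Regression (17) reduces to ordinary OLS once the lagged level and lagged differences are lined up. A minimal sketch with q = 1 follows, run on a simulated stationary AR(1) series (all data here are synthetic, and numpy is an implementation choice, not the chapter's software):

```python
import numpy as np

rng = np.random.default_rng(1)

# A simulated stationary AR(1) "disequilibrium error" series.
n = 200
u = np.zeros(n)
for i in range(1, n):
    u[i] = 0.5 * u[i - 1] + rng.normal()

du = np.diff(u)                        # Delta u_i

# Regression (17) with q = 1:
#   Delta u_i = b1 + b2*u_{i-1} + b3*Delta u_{i-1} + eps_i
y = du[1:]
X = np.column_stack([np.ones(len(y)),  # b1: intercept
                     u[1:-1],          # b2: lagged level u_{i-1}
                     du[:-1]])         # b3: lagged difference

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (len(y) - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
t_level = b[1] / se[1]                 # t-ratio on the lagged level

# For a clearly stationary series, b2 is negative and the t-ratio lies
# far below MacKinnon-style critical values such as those quoted above.
assert b[1] < 0 and t_level < -3.0
print(round(b[1], 3), round(t_level, 2))
```
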

6. Case Study 2: Further Evidence Regarding the Minimum<br />

Variance Property<br />

With the same set of data, the co-integration vector without an intercept, obtained by applying the ML method, is:

1   −0.9422994   −0.058564.

The errors corresponding to this vector are obtained from:

û_i = C_i − 0.9422994 I_i − 0.058564 W_i.   (18)

Again, these errors will be denoted by uml_i.

Next, we apply SVD to Π in Eq. (12) to obtain matrix C:

C = [1 −0.9192042 −0.066081211; 1 1.160720 −1.012970; 1 0.9395341 2.063768],
row norms = (1.359891, 1.836676, 2.47828).

The singular values of Π are:

f_1 = 0.89514, f_2 = 0.0403121 and f_3 = 0.0026218249.

Again, all f_i's are less than 1. We consider the first row of C, so that the errors corresponding to this vector are obtained from:

û_i = C_i − 0.9192042 I_i − 0.066081211 W_i.   (19)

These errors will also be denoted by usvd_i.

The co-integration vector reported by the authors (p. 109), obtained from the VAR(2) including the dummies mentioned previously, by applying the ML procedure, is given by:

1   −0.93638   −0.03804

and the errors obtained from this vector are computed from:

û_i = C_i − 0.93638 I_i − 0.03804 W_i.   (20)

As in the previous case, these errors will be denoted by ubook_i. The minimum-variance property can be seen from the following standard deviations:

{uml_i}: 0.0169;   {usvd_i}: 0.0166;   {ubook_i}: 0.0193.

Next, we consider the model presented by Banerjee et al. (1993, pp. 292–293), which refers to a VAR(2) with constant, dummy and trend. The state vector x ∈ E⁴ consists of the variables (m − p), p, y, and R. The data (in logs) are presented in Harris (1995, statistical appendix, pp. 153–155; note that the entries for 1967:1 and 1973:2 have to be corrected from 0.076195 and 0.56569498 to 9.076195 and 9.56569498, respectively). Considering the co-integration vector reported by the authors, which has been obtained by applying the ML method, we compute the errors from:

û_i = (m − p)_i + 6.3966 p_i − 0.8938 y_i + 7.6838 R_i.   (21)

These errors will be denoted by uml_i. Estimating the same VAR by OLS, we get:

Π = [−0.089584 0.16375 −0.184282 −0.933878; −0.012116 −0.9605 0.229635 0.206963; 0.021703 0.676612 −0.476152 0.447157; −0.006877 0.118891 0.048932 −0.076414],

δ = [3.129631; −2.46328; 5.181506; −0.470803] and µ = [0.001867; −0.001442; 0.003078; −0.000366].



Following the procedure described above, we obtain matrix C:

C = [1 −48.30517 24.60144 37.75685; 1 5.491415 −0.6510755 7.423318; 1 −2.055229 −5.466846 0.9061699; 1 0.0051181894 0.1603713 −0.1244312],
row norms = (66.06966, 9.310488, 5.99429, 1.020406).

The singular values of Π are:

f_1 = 1.503069, f_2 = 0.7752298, f_3 = 0.1977592 and f_4 = 0.012560162.

The fact that one singular value is greater than one is an indication that — unless a dummy is present — we can hardly trace a row of matrix C which may be related to stationary errors û_i. From the graphs of the errors corresponding to the rows of C (plotted as in Fig. 1), we see that only the second row produces reasonable results. In this case, the errors are computed from:

û_i = (m − p)_i + 5.491415 p_i − 0.6510755 y_i + 7.423318 R_i.   (22)

As usual, these errors will be denoted by usvd_i.

The minimum-variance property can be seen from the standard deviations:

{uml_i}: 0.2671;   {usvd_i}: 0.2375.

To see that neither of the two error series is stationary, as indicated by the first singular value, we present the following results (standard error in parentheses and t-ratio referring to the coefficient of the lagged level):

Δûml_i = 0.268599 − 0.178771 uml_{i−1} − 0.214098 Δuml_{i−1},   (0.065634), t = −2.724
Δûsvd_i = 0.832329 − 0.193432 usvd_{i−1} − 0.160135 Δusvd_{i−1},   (0.066421), t = −2.912.

For a long-run relationship with three (independent) variables, without trend and intercept, the critical value t_u for α = 0.10 is less than −3.



Nevertheless, the proposed method produces better results than the ML approach does.

Finally, we consider the first of the two co-integration vectors reported by Harris (1995, p. 102), since it produces slightly better results. From this vector, which was obtained by the ML method, one gets the errors:

û_i = (m − p)_i + 7.325 p_i − 1.073 y_i + 6.808 R_i.   (23)

We also obtain a standard deviation of the û_i's of 0.268, and

Δû_i = −0.117378 − 0.178896 û_{i−1} − 0.30095417 Δû_{i−1},   (0.068296), t = −2.619.

It is obvious that these additional results further support the previous findings, which are due to some properties of SVD (Lazaridis, 1986).

6.1. A Note on Integration<br />

It seems to be a common belief that by differencing a variable, say, n times we will always obtain a stationary series (Harris, 1995, p. 16). Hence, the initial series is said to be I(n). But this is not necessarily the case, as can be verified by considering the exports of goods and services for Greece, presented in Table 1. Besides, if n has to be large, is it of any use to obtain such a stationary series when economic variables are considered?

Table 1. The series {x i }: exports of goods and services in billions of euros, Greece 1956–2000.

2.4944974E-02 2.9053558E-02 2.9347029E-02 2.8760089E-02 2.8173149E-02<br />

3.2575201E-02 3.5803374E-02 4.1379310E-02 4.2553190E-02 4.7248717E-02<br />

6.6030815E-02 6.7498162E-02 6.6030815E-02 7.6008804E-02 8.8041089E-02<br />

0.1000734 0.1300073 0.2022010 0.2664710 0.3325018<br />

0.4258254 0.4763023 0.5998532 0.7325019 1.049743<br />

1.239618 1.388114 1.788701 2.419956 2.868965<br />

3.618782 4.510052 4.978430 5.818929 6.485106<br />

7.690976 9.316215 9.847396 11.45708 14.08716<br />

15.39428 18.87601 20.90506 22.56992 25.51577<br />

Source: International Financial Statistics (International Monetary Fund, Washington DC).


A Novel Method of Estimation Under Co-Integration 35<br />

Figure 2. Differences of the NI series {x i }: the series {Δx i }, {Δ 2 x i }, and {Δ 3 x i }.

Using the series {x i } shown in Table 1, we can easily verify from the corresponding graph that even for n = 5, the series {Δ 5 x i } is not stationary. In Fig. 2, the series {Δx i }, {Δ 2 x i }, and {Δ 3 x i } are presented for a better understanding of this situation. We will call the variables which belong to this category near-integrated (NI) (Banerjee et al., 1993, p. 95) series.
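The NI behaviour can be checked directly on the Table 1 data; the sketch below (numpy, variable names ours) differences the series up to three times and shows that the magnitudes of the differences keep growing, so repeated differencing alone does not yield a stationary series.

```python
import numpy as np

# Exports of goods and services for Greece, 1956-2000 (Table 1).
x = np.array([
    2.4944974e-02, 2.9053558e-02, 2.9347029e-02, 2.8760089e-02, 2.8173149e-02,
    3.2575201e-02, 3.5803374e-02, 4.1379310e-02, 4.2553190e-02, 4.7248717e-02,
    6.6030815e-02, 6.7498162e-02, 6.6030815e-02, 7.6008804e-02, 8.8041089e-02,
    0.1000734, 0.1300073, 0.2022010, 0.2664710, 0.3325018,
    0.4258254, 0.4763023, 0.5998532, 0.7325019, 1.049743,
    1.239618, 1.388114, 1.788701, 2.419956, 2.868965,
    3.618782, 4.510052, 4.978430, 5.818929, 6.485106,
    7.690976, 9.316215, 9.847396, 11.45708, 14.08716,
    15.39428, 18.87601, 20.90506, 22.56992, 25.51577,
])

# Successive differences Delta^n x; np.diff(x, n) applies Delta n times.
d1, d2, d3 = np.diff(x, 1), np.diff(x, 2), np.diff(x, 3)

# For a stationary series the differences should settle down; here even the
# third differences at the end of the sample dwarf those at the beginning.
print(abs(d3[0]), abs(d3[-1]))
```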

To trace that a given series is NI, we may run the regression:

Δx i = β 1 + β 2 x i−1 + β 3 t i + ∑ q j=1 β j+3 Δx i−j + u i (24)

which is a re-parametrized AR(q) with constant, to which a trend has also been added. As in Eq. (17), the value of q is set such that the noises u i are white. A comparatively simple way for this verification is to compute the residuals û i and to consider the corresponding Ljung-Box Q statistics, and particularly their p-values, which should be much greater than 0.1 for us to say that no autocorrelation (AC) is present. On the other hand, if we trace heteroscedasticity, this is a strong indication that we have an NI series. For the data presented in Table 1, we found that q = 2. The corresponding Q statistics (column 4), together with the p-values, are presented in Table 2.
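The Ljung-Box Q statistic used here can be computed from the residual autocorrelations; a minimal sketch (the function name is ours) under the standard definition Q = n(n + 2) Σ k=1..h ρ̂ k ² / (n − k):

```python
import numpy as np

def ljung_box_q(resid, h):
    """Ljung-Box Q statistic for lags 1..h of a residual series."""
    resid = np.asarray(resid, dtype=float)
    n = resid.size
    dev = resid - resid.mean()
    denom = np.sum(dev ** 2)
    q = 0.0
    for k in range(1, h + 1):
        # lag-k sample autocorrelation
        rho_k = np.sum(dev[k:] * dev[:-k]) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# Under no autocorrelation, Q is approximately chi-square with h degrees of
# freedom; large p-values, as in Table 2, indicate white residuals.
```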

We see that for all k (column 1), the corresponding p-values (column 5)<br />

are greater than 0.1. But, if we look at the residuals graph (Fig. 3), we can<br />

verify the presence of heteroscedasticity.<br />




Table 2. Residuals: autocorrelations (AC), partial autocorrelation coefficients (PAC), Q statistics and p-values.

k AC PAC Q Stat (L-B) p-value Q Stat (B-P) p-value

1 0.040101 0.040101 0.072482 0.787756 0.067540 0.794952<br />

2 −0.061329 −0.063039 0.246254 0.884151 0.225515 0.893367<br />

3 −0.008189 −0.003064 0.249432 0.969240 0.228331 0.972891<br />

4 −0.285494 −0.290504 4.213238 0.377916 3.651618 0.455202<br />

5 0.042856 0.072784 4.304969 0.506394 3.728756 0.589090<br />

6 0.099724 0.058378 4.815469 0.567689 4.146438 0.656867<br />

7 −0.151904 −0.167680 6.033816 0.535806 5.115577 0.645861<br />

8 −0.007846 −0.069642 6.037162 0.643069 5.118163 0.744875<br />

9 0.001750 0.023203 6.037333 0.736176 5.118291 0.823877<br />

10 0.109773 0.165620 6.733228 0.750367 5.624397 0.845771<br />

11 0.134538 0.022368 7.812250 0.730017 6.384616 0.846509<br />

12 −0.029101 −0.048363 7.864417 0.795634 6.420185 0.893438<br />

13 −0.074997 −0.028063 8.222832 0.828786 6.656414 0.918979<br />

14 −0.084284 −0.017969 8.691681 0.850280 6.954772 0.936441<br />

15 −0.113984 −0.104484 9.580937 0.845240 7.500452 0.942248<br />

16 −0.081662 −0.157803 10.05493 0.863744 7.780540 0.955133<br />

17 −0.001015 −0.005026 10.05501 0.901288 7.780582 0.971016<br />

18 −0.053178 −0.056574 10.27276 0.922639 7.899356 0.980097<br />

Figure 3. The residuals from Eq. (24) (q = 2).



Another easy and practical way to trace heteroscedasticity is to select the explanatory variable which yields the smallest p-value for the corresponding Spearman's correlation coefficient (r s ). Note that in models like the one in Eq. (24), such a variable is the trend, resulting in this case in: r s = 0.3739, t = 2.394, p = 0.0219. This means that for a significance level α > 0.022, heteroscedasticity is present. Hence, we have an initially NI series {x i }, which has to be weighed somehow in order to be transformed into a difference stationary series (DSS). In some cases, this transformation can be achieved if the initial series is expressed in real terms, or as a percentage growth rate. In usual applications, the (natural) logs [i.e., ln(x i )] are considered, given that the log-differences Δln(x i ) lie between the two alternative growth-rate measures Δx i /x i and Δx i /x i−1 .
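Spearman's r s is the Pearson correlation of the ranks; a minimal sketch (assuming no ties, function name ours):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation (no tie handling) of two samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # rank of each observation: its position in the sorted order
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# In the heteroscedasticity check above, x would be the trend t_i and
# y the (absolute or squared) residuals from Eq. (24).
```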

7. Case Study 3: How to Obtain Reliable Results Regarding<br />

the Order of Integration?<br />

It is a common practice to apply DF/ADF for determining the order of integration<br />

for a given series. However, the analytical step-by-step application<br />

of this test (Enders, 1995, p. 257; Holden and Perman, 1994, pp. 64–65),<br />

leaves the impression that the cure is worse than the disease itself. On the<br />

other hand, by simply adopting the results produced by some commercial<br />

computer programs, which is the usual practice in many applications,<br />

we may reach some faulty conclusions. It should be noted at this point that<br />

according to the results obtained by a well-known commercial package, the<br />

series {x i }, as shown in Table 1, should be considered as I(2), for α = 0.05,<br />

which is not the case.<br />

Here, we present a comparatively simple procedure to determine the<br />

order of integration of a DSS, by inspecting a model analogous to Eq. (24).<br />

Assuming that {z i } is DSS, we proceed as follows:<br />

• Start with k = 1 and estimate the regression:

Δ k z i = β 1 + β 2 Δ k−1 z i−1 + β 3 t i + ∑ q j=1 β j+3 Δ k z i−j + u i . (25)

Be sure that the noises in Eq. (25) are white, by applying the relevant tests as described above. This way, we set the lag-length q. Note that if k = 1, the exponent is omitted from Eq. (25) and that Δ 0 z i−1 = z i−1 .




Table 3. Critical values for the DF F-test.

Sample size α = 0.01 α = 0.05 α = 0.10<br />

25 10.61 7.24 5.91<br />

50 9.31 6.73 5.61<br />

100 8.73 6.49 5.47<br />

250 8.43 6.34 5.39<br />

500 8.34 6.30 5.36<br />

>500 8.27 6.25 5.34<br />

• Proceed to test the hypothesis:<br />

H 0 : β 2 = β 3 = 0. (26)<br />

To compare the computed F-statistic, we have to consider, in this case,<br />

the Dickey-Fuller F-distribution. The critical values (Dickey and Fuller,<br />

1981) are reproduced in Table 3, to facilitate the presentation.<br />

Note that in order to reject Eq. (26), the F-statistic should be much greater than the corresponding critical values shown in Table 3. If we reject Eq. (26), this means that the series {z i } is I(k − 1). If the null hypothesis is not rejected, then increase k by 1 (k = k + 1) and repeat the same procedure, i.e., starting from estimating Eq. (25) and testing at each stage so that the model disturbances are white noises.

• In case of known structural break(s) in the series, a dummy d i should be added in Eq. (25), such that:

d i = 0 if i ≤ r, d i = w i if i > r,

where w i = 1 (∀i) for a shift in mean, or w i = t i for a shift in trend. It is recalled that r denotes the date of the break or shift. With this specification, the results obtained are similar to those yielded by the Perron and Vogelsang (1992a; b) procedure.
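The joint test of Eq. (26) is an ordinary restricted-vs-unrestricted F statistic, compared against the Dickey-Fuller critical values of Table 3 rather than the standard F tables. A minimal OLS sketch (function names ours):

```python
import numpy as np

def ols_rss(y, X):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def joint_f_stat(y, X, restricted_cols):
    """F statistic for H0: the coefficients in `restricted_cols` are zero."""
    n, k = X.shape
    r = len(restricted_cols)
    keep = [j for j in range(k) if j not in restricted_cols]
    rss_u = ols_rss(y, X)            # unrestricted model
    rss_r = ols_rss(y, X[:, keep])   # model with H0 imposed
    return ((rss_r - rss_u) / r) / (rss_u / (n - k))

# For Eq. (25), X would hold a constant, Delta^{k-1} z_{i-1}, the trend t_i
# and the lagged differences; restricted_cols would index beta_2 and beta_3.
# The resulting F is then compared with the Table 3 critical values.
```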

We applied the proposed technique to find the order of integration of the<br />

variable m i as shown in Eq. (21), which is the log of nominal M1 (money<br />

supply). Harris (1995, p. 38), after applying the DF/ADF test, reports that “...
and the interest rate as I(1) and prices as I(2). The nominal money supply

might be either, given its path over time....”. Since the latter variable is m i ,<br />

it is clear that, with this particular test, it was not possible to get a precise<br />

answer as to whether {m i } is either I(1) or I(2).



To show the details and underline the first stage of this procedure, we present some estimation results with different values of q.

For q = 3: Most of the p-values in the 5th column of the table analogous to Table 2 are equal to zero. Obviously, this is not the proper lag-length.

For q = 4: All p-values (in a table analogous to Table 2) are greater than 0.1. r s = −0.193, p = 0.05652; F(2, 94) = 4.2524. This lag-length is acceptable.

For q = 5: All p-values are greater than 0.1. r s = −0.2094, p = 0.0395; F(2, 92) = 4.2278. This lag-length is questionable, since for α > 0.04 we face the problem of heteroscedasticity.

For q = 6: All p-values are greater than 0.1. r s = −0.166, p = 0.1038; F(2, 90) = 5.937. This lag-length is quite acceptable.

For q = 7: All p-values are greater than 0.1. r s = −0.1928, p = 0.0608; F(2, 88) = 4.768. This lag-length is also acceptable, but inferior compared to the previous one.

From these analytical results, it can be easily verified that the proper lag-length is 6 (q = 6). It is also worth mentioning that, according to the normality test (Jarque-Bera), we should accept the null. The value F = 5.937 is less than the corresponding critical values shown in Table 3 for 100 observations and α = 0.01, 0.05. Hence, we accept Eq. (26) and increase the value of k by one. The estimation results, after properly selecting the value of q (q = 5), are:

Δ 2 m i = 0.0047 − 0.894892 Δm i−1 + 0.00033 t i + ∑ 5 j=1 β̂ j+3 Δ 2 m i−j + û i . (27)
         (0.0046)  (0.21784)           (0.000108)

Regarding the residuals, all p-values in the 5th column of the table analogous to Table 2 are greater than 0.1. Further, we have: r s = −0.1556, p = 0.127; Jarque-Bera = 0.8 (p = 0.67); F(2, 91) = 8.44.



Figure 4. The series {m i } and {Δ 2 m i }.

Since F = 8.44 is greater than the corresponding critical value for α = 0.05 shown in Table 3, Eq. (26) is now rejected; hence the series {m i } is I(1).



Due to the properties of SVD, we can easily and directly include in the co-integrating vectors as many deterministic components as we want. Besides, we found that the errors corresponding to the finally selected co-integration vector are characterized by the minimum variance property. It should be noted that the procedure proposed here is not mentioned in the relevant literature.

Further, we analyzed the stages of a relevant technique to obtain reliable results regarding the order of integration of a given series, provided that it is DSS. We also demonstrated the properties of the so-called near-integrated (NI) series, which can be traced by applying this technique. It should be emphasized that with this kind of series, we showed that the conventional methods for unit-root testing may produce faulty results.

References<br />

Banerjee, A, JJ Dolado, JW Galbraith and DF Hendry (1993). Co-integration, Error Correction,<br />

and the Econometric Analysis of Non-Stationary Data. New York: Oxford<br />

University Press.<br />

Dickey, DA and WA Fuller (1981). Likelihood ratio statistics for autoregressive time series<br />

with a unit root. Econometrica 49, 1057–1072.<br />

Dickey, DA, DW Jansen and DL Thornton (1994). A primer on cointegration with an application<br />

to money and income. In Cointegration for the Applied Economist, BB Rao<br />

(ed.), UK: St. Martin’s Press, 9–45.<br />

Enders, W (1995). Applied Econometrics Time Series. New York: John Wiley.<br />

Harris, R (1995). Using Cointegration Analysis in Econometric Modelling. London: Prentice<br />

Hall.<br />

Holden, D and R Perman (1994). Unit roots and cointegration for the economist. In Cointegration<br />

for the Applied Economist, BB Rao (ed.), UK: St. Martin’s Press, 47–112.<br />

Johansen, S and K Juselius (1990). Maximum likelihood estimation and inference on cointegration<br />

with application to the demand for money. Oxford Bulletin of Economics and

Statistics 52, 169–210.<br />

Johnston, J and J DiNardo (1997). Econometric Methods. New York: McGraw-Hill International.

Kleibergen, F and V Dijk (1994). Direct cointegration testing in error correction models.<br />

Journal of Econometrics 63, 61–103.<br />

Lazaridis, A and D Basu (1983). Stochastic optimal control by pseudo-inverse. Review of<br />

Economics and Statistics 65(2), 347–350.

Lazaridis, A (1986). A note regarding the problem of perfect multicollinearity. Quality and Quantity 120, 297–306.

MacKinnon, J (1991). Critical values for co-integration tests. In Long-Run Economic Relationships, RF Engle and CWJ Granger (eds.), New York: Oxford University Press.

Perron, P and T Vogelsang (1992a). Nonstationarity and level shifts with an application<br />

to purchasing power parity. Journal of Business and Economic Statistics 10,

301–320.



Perron, P and T Vogelsang (1992b). Testing for a unit root in a time series with a changing<br />

mean: Corrections and extensions. Journal of Business and <strong>Economic</strong> Statistics 10,<br />

467–470.<br />

Sims, CA (1980). Macroeconomics and reality. Econometrica 48, 1–48.


Topic 2<br />

Time Varying Responses of Output to Monetary<br />

and Fiscal Policy<br />

Dipak R. Basu<br />

Nagasaki University, Nagasaki, Japan<br />

Alexis Lazaridis<br />

Aristotle University of Thessaloniki, Greece

1. Introduction<br />

There is a large volume of literature on the effectiveness of monetary and fiscal policies and on lags in the effects of monetary policies (Friedman and Schwartz, 1963; Hamburger, 1971; Meyer, 1967; Tobin, 1970). Although the lengths of lags are important in determining the role of monetary-fiscal policies, the relationship between money and income varies over time due to changes in the lag structure. Mayer (1967), Poole (1975) and Warburton (1971) attempted to analyze the empirical variability of the lag structure by analyzing the turning points in general business activity. Sargent and

by analyzing the turning points in general business activity. Sargent and<br />

Wallace (1973) as well as Lucas (1972) have drawn attention to the role of<br />

time-dependant response coefficients to changes in stabilization policies.<br />

Cargill and Meyer (1977; 1978) have estimated the time-varying relationship between national income and monetary-fiscal policies. Their results indicated the existence of time variation; excluding time variation of the coefficients leads to the exclusion of prior information inherent in the models from the estimation process. Blanchard and Perotti (2002) as

well as Smets and Wouters (2003) have obtained similar characteristics of<br />

the monetary and fiscal policy.<br />

The existence of time dependency of the effects of monetary-fiscal policies casts considerable doubt on policy prescriptions based on constant-coefficient estimates. However, more reasonable results can be obtained if we try to estimate the dynamics and movements of these relationships over


44 Dipak R. Basu and Alexis Lazaridis<br />

time. For this purpose, adaptive control systems with time-varying parameters<br />

(Astrom and Wittenmark, 1995; Basu and Lazaridis, 1986; Brannas<br />

and Westlund, 1980; Nicolao, 1992; Radenkovic and Michel, 1992) provide<br />

a fruitful approach. There are alternative estimation methods, e.g., rational<br />

expectation modeling, VAR approaches etc., to estimate time-varying systems.<br />

These methods, however, are mainly descriptive, i.e., cannot have<br />

immediate application in policy planning. The purpose of this paper is to<br />

explore this possibility in terms of a planning model where adaptive control<br />

rules will be implemented so that we can derive time-varying reduced form<br />

coefficients from the original structural model to demonstrate the dynamics<br />

of monetary-fiscal policies.<br />

The model follows the basic theoretical ideas of monetary approach to<br />

balance of payments and structural adjustments (Berdell, 1995; Humphrey,<br />

1981; 1993; Khan and Montiel, 1989). In Section 2, the method of adaptive<br />

optimization and the updating method of the time-varying model are analyzed.<br />

In Section 3, the model and its estimates are described. This includes<br />

some necessary adjustments for a developing country. In Section 4, the<br />

results of the time-varying impacts of monetary-fiscal policies are analyzed.<br />

2. The Method of Adaptive Optimization

Suppose a dynamic econometric model can be converted to an equivalent<br />

first order dynamic system of the form<br />

x̃ i = Ãx̃ i−1 + C̃ũ i + D̃z̃ i + ẽ i (1)

where ˜x i is the vector of endogenous variables, ũ i is the vector of control<br />

variables, ˜z i is the vector of exogenous variables, and ẽ i is the vector<br />

of noises, which are assumed to be white Gaussian and Ã, ˜C, and ˜D are<br />

coefficient matrices of proper dimensions. It should be noted that a certain<br />

element of ˜z i is 1 and corresponds to the constant terms. The parameters of<br />

the above system are assumed to be random.<br />

Shifting to period i + 1, we can write<br />

x̃ i+1 = Ãx̃ i + C̃ũ i+1 + D̃z̃ i+1 + ẽ i+1 . (2)

Now, we define the following augmented vectors and matrices:

x i = [x̃ i ; ũ i ],  x i+1 = [x̃ i+1 ; ũ i+1 ],  e i+1 = [ẽ i+1 ; 0],

A = [Ã 0; 0 0],  C = [C̃; I],  D = [D̃; 0],

where the semicolon denotes vertical stacking.


Time Varying Responses 45<br />

Hence, Eq. (2) can be written as

x i+1 = Ax i + Cũ i+1 + Dz̃ i+1 + e i+1 . (3)

Using the linear advance operator L, such that L k y i = y i+k , and defining the vectors u, z and ε from

u i = Lũ i ,  z i = Lz̃ i ,  ε i = Le i ,

Eq. (3) can take the form

x i+1 = Ax i + Cu i + Dz i + ε i (4)

which is a typical linear control system.<br />
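The augmentation in Eqs. (1)-(3) can be sketched with numpy block matrices; the dimensions and numerical values below are our own illustration, not model estimates.

```python
import numpy as np

n, m, g = 2, 1, 1          # endogenous, control and exogenous dimensions
At = np.array([[0.5, 0.1], [0.0, 0.8]])   # A-tilde
Ct = np.array([[1.0], [0.3]])             # C-tilde
Dt = np.array([[0.2], [0.4]])             # D-tilde

# Augmented matrices of Eq. (3); the state is stacked as x_i = [x~_i ; u~_i].
A = np.block([[At, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
C = np.vstack([Ct, np.eye(m)])
D = np.vstack([Dt, np.zeros((m, g))])

# One noise-free transition step: the top block reproduces Eq. (2) and the
# bottom block simply stores the current control in the augmented state.
x_tilde = np.array([1.0, -1.0])
u_prev = np.array([0.0])
x_aug = np.concatenate([x_tilde, u_prev])
u_next = np.array([2.0])
z_next = np.array([1.0])
x_aug_next = A @ x_aug + C @ u_next + D @ z_next
```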

We can formulate an optimal control problem of the general form

min J = (1/2)‖x T − x̂ T ‖ 2 Q T + (1/2) ∑ T−1 i=1 ‖x i − x̂ i ‖ 2 Q i (5)

subject to the system transition equation shown in Eq. (4).

It is noted that T indicates the terminal time of the control period, {Q} is the sequence of weighting matrices and x̂ i (i = 1, 2, ..., T) is the desired state and control trajectory, according to our formulation.

The solution to this problem can be obtained according to the minimization<br />

principle by solving the Ricatti-type equations (Astrom and<br />

Wittenmark, 1995).<br />

K T = Q T (6)
Λ i = −(E i C′K i+1 C)⁻¹(E i C′K i+1 A) (7)
K i = E i A′K i+1 A + Λ′ i (E i C′K i+1 A) + Q i (8)
h T = −Q T x̂ T (9)
h i = Λ′ i (E i C′K i+1 D)z i + Λ′ i (E i C′)h i+1 + (E i A′K i+1 D)z i + (E i A′)h i+1 − Q i x̂ i (10)
g i = −(E i C′K i+1 C)⁻¹[(E i C′K i+1 D)z i + (E i C′)h i+1 ] (11)
x* i+1 = [E i A + (E i C)Λ i ]x* i + (E i C)g i + (E i D)z i (12)
u* i = Λ i x* i + g i (13)



where u* i (i = 0, 1, ..., T − 1) is the optimal control sequence and x* i+1 the corresponding state trajectory; together they constitute the solution to the stated optimal control problem.

In the above equations, Λ i is the matrix of feedback coefficients and g i is the vector of intercepts. The notation E i denotes the conditional expectation, given all information up to period i. Expressions like

E i C ′ K i+1 C, E i C ′ K i+1 A, E i C ′ K i+1 D are evaluated, taking into account<br />

the reduced form coefficients of the econometric model and their covariance<br />

matrix, which are to be updated continuously, along with the implementation<br />

of the control rules. These rules should be re-adjusted according<br />

to “passive-learning” methods, where the joint densities of matrices A, C,<br />

and D are assumed to remain constant over the control period.<br />

2.1. Updating Method of Reduced-Form Coefficients and Their<br />

Co-Variance Matrices<br />

Suppose we have a simultaneous-equation system of the form<br />

XB ′ + UƔ ′ = R (14)<br />

where X is the matrix of endogenous variable defined on E N × E n and<br />

B is the matrix of structural coefficients, which refer to the endogenous<br />

variables and is defined on E n ×E n . U is the matrix of explanatory variables<br />

defined on E N ×E g and Ɣ is the matrix of the structural coefficients, which<br />

refer to the explanatory variables, defined on E N × E g . R is the matrix of<br />

noises defined on E N ×E n . The reduced form coefficients matrix is then<br />

defined from:<br />

=−B −1 Ɣ.<br />

(14a)<br />

Goldberger et al. (1961) have shown that the asymptotic covariance matrix Ω̃ of the vector π̂, which consists of the g stacked columns of matrix Π̂, can be approximated by

Ω̃ = ([Π̂; I g ] ⊗ (B̂′)⁻¹)′ F ([Π̂; I g ] ⊗ (B̂′)⁻¹) (15)

where ⊗ denotes the Kronecker product, [Π̂; I g ] denotes Π̂ stacked over the g × g identity matrix, Π̂ and B̂ are the coefficients estimated by standard econometric techniques, and F denotes the asymptotic covariance matrix of the n + g columns of (B̂ Γ̂), which is assumed to be a consistent and asymptotically unbiased estimate of (B Γ).
consistent and asymptotically unbiased estimate of (BƔ).



Combining Eqs. (14) and (14a) we have

BX′ = −ΓU′ + R′ ⇒ X′ = −B⁻¹ΓU′ + B⁻¹R′ ⇒ X′ = ΠU′ + W′ (16)

where W′ = B⁻¹R′.

Denoting the ith column of matrix X′ by x i and the ith column of matrix W′ by w i , we can write

x i = [u 1i I n  u 2i I n  ···  u gi I n ] π + w i (17)

where u ij is the element of the jth column and ith row of matrix U and I n is the n × n identity matrix. The vector π ∈ E ng , as mentioned earlier, consists of the g stacked columns of matrix Π.

Equation (17) can be written in a compact form, as<br />

x i = H i π + w i , i = 1, 2,...,N (17a)<br />

where x i ∈ E n ,w i ∈ E n and the observation matrix H i is defined on<br />

E n × E ng .<br />

In a time-invariant econometric model, the coefficients vector π is<br />

assumed random with constant expectation over time, so that

π i+1 = π i , for all i. (18)<br />

In a time-varying and stochastic model we can have<br />

π i+1 = π i + ε i<br />

(18a)<br />

where ε i ∈ E ng is the noise.<br />

Based on these, we can re-write Eq. (17a) as<br />

x i+1 = H i+1 π i+1 + w i+1 i = 0, 1,...,N − 1. (19)<br />

We make the following assumptions.<br />

(a) The vector x i+1 and matrix H i+1 can be measured exactly for all i.

(b) The noises ε i and w i+1 are independent discrete white noises with known statistics, i.e.,

E(ε i ) = 0; E(w i+1 ) = 0,  E(ε i w′ i+1 ) = 0,
E(ε i ε′ j ) = Q 1 δ ij ,  E(w i w′ j ) = Q 2 δ ij ,

where δ ij is the Kronecker delta. The above covariance matrices are assumed to be positive definite.

(c) The state vector is normally distributed with a finite covariance matrix.

(d) Regarding Eqs. (18a) and (19), the Jacobians of the transformation of<br />

ε i into π i+1 and of w i+1 into x i+1 are unities. Hence, the corresponding<br />

conditional probability densities are:<br />

p(π i+1 |π i ) = p(ε i )<br />

p(x i+1 |π i+1 ) = p(w i+1 ).<br />

Under the above assumptions and given Eqs. (18a) and (19), the problem set is to evaluate

E(π i+1 | x i+1 ) = π* i+1 and cov(π i+1 | x i+1 ) = S i+1 (the error covariance matrix)

where x i+1 = {x 1 , x 2 , x 3 , ..., x i+1 }.

The solution to this problem (Basu and Lazaridis, 1986; Lazaridis,<br />

1980) is given by the following set of recursive equations, as it is briefly<br />

shown in Appendix A.<br />

π* i+1 = π* i + K i+1 (x i+1 − H i+1 π* i ) (20)
K i+1 = S i+1 H′ i+1 Q 2 ⁻¹ (21)
S i+1 ⁻¹ = P i+1 ⁻¹ + H′ i+1 Q 2 ⁻¹ H i+1 (22)
P i+1 ⁻¹ = (Q 1 + S i )⁻¹ (23)

The recursive process is initiated by regarding K 0 and H 0 as null matrices and computing π* 0 and S 0 from

π* 0 = π̂, i.e., the reduced-form coefficients (the stacked columns of matrix Π̂), and
S 0 = P 0 = Ω̃.

The reduced form coefficients, along with their co-variance matrices,<br />

can be updated by this recursive process and at each stage the set of Riccati



equations should be updated accordingly so that adaptive control rules may<br />

be derived.<br />
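The recursions (20)-(23) are a Kalman-filter-type update for the stacked reduced-form coefficients; a minimal numpy sketch (function and variable names are ours) under the stated assumptions:

```python
import numpy as np

def update(pi, S, Q1, Q2_inv, H, x):
    """One pass of Eqs. (20)-(23): update the coefficient estimate pi
    and its error covariance S given a new observation (H, x)."""
    P = Q1 + S                               # Eq. (23): P_{i+1}^{-1} = (Q1 + S_i)^{-1}
    S_new = np.linalg.inv(np.linalg.inv(P) + H.T @ Q2_inv @ H)  # Eq. (22)
    K = S_new @ H.T @ Q2_inv                 # Eq. (21)
    pi_new = pi + K @ (x - H @ pi)           # Eq. (20): innovation update
    return pi_new, S_new

# Scalar illustration: a constant true coefficient (Q1 = 0) is recovered
# from repeated noise-free observations x_i = H_i * pi_true = 2.
pi, S = np.array([0.0]), np.array([[100.0]])       # diffuse start
Q1, Q2_inv, H = np.zeros((1, 1)), np.eye(1), np.eye(1)
for _ in range(50):
    pi, S = update(pi, S, Q1, Q2_inv, H, np.array([2.0]))
```

As the text notes, each pass of these equations also triggers a re-solution of the Riccati equations, so the control rules adapt to the updated coefficients.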

Once we estimate the model (described in the next section), using the<br />

full information maximum likelihood (FIML) method, we can obtain both<br />

the structural model and the probability density functions along with all<br />

associated matrices mentioned above. We first convert the structural econometric<br />

model to a state-variable form according to Eq. (1). Once we specify the targets for the state and control variables, the objective function to be minimized, and the weights attached to each state and control variable, we can calculate the results of the optimization process for the entire period

using Eqs. (6)–(13). Thereafter, we can update all probability density functions<br />

and all other associated matrices using Eqs. (20)–(23). These will<br />

effectively update the coefficients of the model in its state-variable form.<br />

We can repeat the optimization process over and over as we update the<br />

model, its associated matrices, probability density functions and use these<br />

as new information. When the process converges, i.e., when the differences between the old and new estimates are insignificant according to some preassigned criterion, we can obtain the final updated coefficients of the model

along with the results of the optimization process. The dynamics of the<br />

time-varying coefficients (in terms of response-multipliers) are described<br />

later in Section 4. This mechanism is a great improvement compared to the<br />

fixed-coefficient model as we can incorporate in our calculation, changing<br />

information sets derived from the implementation of the optimization<br />

process.<br />

3. Monetary-Fiscal Policy Model<br />

We describe below a monetary-fiscal policy model, which was estimated<br />

with data from the Indian economy. The model accepts the definition of<br />

balance of payments and money stock according to monetary approach<br />

of balance of payments (Khan, 1976; Khan and Montiel, 1989; 1990).<br />

The decision to choose this particular model is influenced by the fact that<br />

it is commonly known as the fund-bank adjustment policy model, used<br />

extensively by the World Bank and the International Monetary Fund (IMF).<br />

In the version adopted for this paper, there is no explicit investment<br />

or consumption function, because the domestic absorption can reflect the<br />

combined response of both private and public investment and consumption,<br />

to the planned target for national income set by the planning authority<br />

and to various market forces reflected in the money market interest



rate and exchange rate. The “New Cambridge” model of the UK economy<br />

(Cripps and Godley, 1976; Godley and Lavoie, 2002; 2007) has postulated<br />

a similar combined consumption-investment function for the UK<br />

as well.<br />

The model was estimated by applying FIML method, using the data<br />

from the Indian economy for the period 1951–1996. The estimated parameters<br />

were then used as the initial starting point for the stochastic control<br />

model. The estimated econometric model can be transformed into a state<br />

variable form (Basu and Lazaridis, 1986), in order to formulate the optimal<br />

control problem, i.e., to minimize the quadratic objective function, subject<br />

to Eq. (4), which is the system transition equation. As already mentioned,<br />

the recursive estimation process of the time-varying response-multipliers<br />

is presented in brief in Appendix A.<br />

3.1. Absorption Function and National Income<br />

Domestic absorption reflects the behavior of both the private and public sectors regarding consumption and investment. Domestic real absorption is influenced by real national income, the market interest rate and the foreign exchange rate. We assume a linear relationship:

(A/P) t = a 0 + a 1 (Y/P) t − a 2 (IR) t − a 3 EXR t ,  t = time period.

The relation between national income and absorption can be defined as follows:

Y t = A t + TY t + R t − G t + GBS t − LR t<br />

where A is the value of domestic absorption, P is the price level, Y is the<br />

national income, IR is the market interest rates, EXR is the exchange rate,<br />

TY is the government tax revenue, G is the public consumption, GBS is the<br />

government bond sales, LR is the net lending by the central government to<br />

the states (which is not part of the planned public expenditure) and R is<br />

the changes in the foreign exchange reserve reflecting the behavior of the<br />

foreign trade sector.<br />

The government budget deficit (BD t ) is defined by the following<br />

equation<br />

BD t = (G t + LR t + PF t ) − (TY t + GBS t + AF t + FB t )

where PF is the foreign payments due to existing foreign debts, which<br />

may include both amortization and interest payments, AF is the foreign



assistance, which is an insignificant item, and FB is the total foreign borrowing,

assuming only the government can borrow from foreign sources.<br />

We treat AF and LR as exogenous, and FB, G, and TY as policy instruments. However, PF depends on the level of existing foreign debt and on the world interest rate, although a sizable part of the foreign borrowing can be at a concessional rate:<br />

PF_t = a_4 + a_5 ∑_{r=−20}^{t} FB_r + a_6 (WIR/EXR)_t.<br />

Government bond sales (GBS) depend on their attractiveness, reflected in the interest rate (IR), on the ability of the domestic economy to absorb (A), on the requirements of the government (G), and on the alternative sources of finance, reflected in the tax revenue (TY) and in the government's borrowing from the central bank, i.e., the net domestic asset (NDA) creation by the central bank:<br />
GBS_t = a_7 + a_8 A_t + a_9 IR_t + a_10 G_t − a_11 TY_t − a_12 NDA_t<br />

3.2. Monetary Sector<br />

We assume the flow equilibrium in the money market, i.e.,<br />

MD_t = M_t = MS_t<br />

where MD is the money demand and MS is the money supply. The stock<br />

of money supply depends on the stock of high powered money and the<br />

money-multiplier, as follows:<br />

MS_t = [(1 + CD)/(CD + RR)]_t (R + NDA)_t.<br />

(R + NDA) reflects the stock of high-powered money and the expression<br />

within the square bracket is the money-multiplier, which depends on credit<br />

to deposit ratio of the commercial banking sector (CD) and the reserve to<br />

deposit liabilities in the commercial banking sector (RR). Whereas NDA<br />

is an instrument, R depends on the foreign trade sector. However, the<br />

government can influence CD and RR to control the money supply. RR,<br />

which is the actual reserve ratio, depends on the demand for loans created by<br />

private sectors and the commercial bank’s willingness to lend. We assume<br />

that the desired reserve ratio RR*_t is a function of national income and<br />


52 Dipak R. Basu and Alexis Lazaridis<br />

market interest rate i.e.,<br />

RR*_t = a_13 + a_14 Y_t + a_15 IR*_t.<br />

The commercial banks may adjust their actual reserve ratio to the<br />

desired reserve ratio with a lag.<br />

Thus, we can write<br />

RR_t = α(RR*_t − RR_{t−1})<br />
where 0 < α < 1.<br />
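For concreteness, the money-supply identity and the reserve-ratio adjustment can be sketched as follows. The numerical values are purely illustrative, not the Indian data used in the chapter, and the adjustment is coded in the standard partial-adjustment reading, RR_t = RR_{t−1} + α(RR*_t − RR_{t−1}).

```python
def money_supply(cd, rr, high_powered):
    """Money supply = money multiplier (1 + CD)/(CD + RR) times high-powered money (R + NDA)."""
    return (1.0 + cd) / (cd + rr) * high_powered

def adjust_reserve_ratio(rr_prev, rr_desired, alpha):
    """Partial adjustment of the actual reserve ratio toward the desired one (0 < alpha < 1),
    read here in the standard form RR_t = RR_{t-1} + alpha * (RR*_t - RR_{t-1})."""
    return rr_prev + alpha * (rr_desired - rr_prev)

# Illustrative values: CD = 0.25, RR = 0.15, high-powered money R + NDA = 1000,
# so the multiplier is 1.25 / 0.40 = 3.125
ms = money_supply(0.25, 0.15, 1000.0)
print(ms)
```

This makes the policy channels in the text explicit: NDA shifts high-powered money directly, while CD and RR work through the multiplier.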



The import cost in turn depends on the exchange rate (EXR) and the world price of imported goods (WPM). We assume the desired price level (P*) is represented by the following equation:<br />
P*_t = a_27 − a_28 A_t + a_29 IMC_t.<br />

The desired price level reflects the private sector's reaction to its expected domestic absorption and the expected import cost. Suppose the actual price moves according to the difference between the desired price in period t and the actual price level in the previous period:<br />
P_t = β(P*_t − P_{t−1}), where 0 < β < 1.<br />



3.5. Stability of the Model<br />

The (real) characteristic roots of the system transition matrix are: −0.0000008, −0.0000008, −0.1213610, 0.2442519, 0.3087129, 0.5123629, 0.7824260, 0.0000001, and 0.0000001.<br />

All the roots are real and less than unity in absolute value, so the system is<br />
stable. [In the equivalent control system, the rank of the controllability<br />

matrix is the same as the dimension of the reduced state vector so that the<br />

system is controllable and observable, i.e., the system parameters can be<br />

identified.]<br />
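The stability claim can be verified mechanically: a linear system x_{t+1} = A x_t is asymptotically stable when every characteristic root of A lies strictly inside the unit circle. A minimal check using the roots reported above:

```python
# Characteristic roots of the estimated transition matrix, as reported in the text
roots = [-0.0000008, -0.0000008, -0.1213610, 0.2442519,
         0.3087129, 0.5123629, 0.7824260, 0.0000001, 0.0000001]

spectral_radius = max(abs(r) for r in roots)
is_stable = spectral_radius < 1.0   # all roots inside the unit circle -> stable

print(spectral_radius, is_stable)
```

The spectral radius is set by the largest root, 0.782426, comfortably below unity.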

We can, however, transform our model to the following form:<br />

Y_t = ρY_{t−1} + e_t, t = 1, 2, ..., n (24)<br />

where ρ is a real number and (e_t) is a sequence of normally distributed<br />
random variables with mean zero and variance σ_t². Box and Pierce (1970)<br />

suggested the following test statistic:<br />
Q_m = n ∑_{k=1}^{m} r_k² (25)<br />
where<br />
r_k = (∑_{t=k+1}^{n} ê_t ê_{t−k}) / (∑_{t=1}^{n} ê_t²) (26)<br />

n = the number of observations, m = n − k, where k = the number of parameters estimated, and ê_t are the residuals from the fitted model.<br />
If (Y_t) satisfies the system, then under the null hypothesis, Q_m is distributed as a chi-squared random variable with m degrees of freedom. The null hypothesis is that ρ = 1, in which case ê_t = Y_t − Y_{t−1} and thus k = 0.<br />



The estimated value of Q_m is 3.85, and the null hypothesis is rejected at the 0.05 significance level. So, we accept the alternative hypothesis that ρ < 1.<br />
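The statistic in Eqs. (25) and (26) is straightforward to compute from the residuals. A small self-contained sketch (the residual series below is made up for illustration, not the chapter's data):

```python
def box_pierce_q(resid, m):
    """Box-Pierce statistic: Q_m = n * sum_{k=1}^{m} r_k**2, where r_k is the
    lag-k sample autocorrelation of the residuals (Eqs. (25)-(26))."""
    n = len(resid)
    denom = sum(e * e for e in resid)
    q = 0.0
    for k in range(1, m + 1):
        r_k = sum(resid[t] * resid[t - k] for t in range(k, n)) / denom
        q += r_k * r_k
    return n * q

# Illustrative residuals; in practice Q is compared with the chi-squared
# critical value at the appropriate degrees of freedom.
print(box_pierce_q([1.0, -1.0, 1.0, -1.0], m=1))
```

A strongly alternating series like the one above has a large negative lag-1 autocorrelation and hence a large Q; near-white residuals give Q close to zero.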



Table 4. Response-multiplier period 3.<br />

Endogenous Y GBS P BD CD MS IR<br />

EXR −0.02199 −0.00069 0.05269 0.00133 −0.00042 −0.00573 −0.00062<br />

G 0.03001 0.00499 0.16536 0.02924 0.05923 0.04217 0.00570<br />

TY −0.02470 −0.01149 −0.16281 0.02472 −0.07031 −0.06325 −0.02220<br />

CI 0.00215 0.04676 −0.00252 −0.00399 −0.00465 −0.00671 0.06228<br />

Table 5. Response-multiplier period 5.<br />

Endogenous Y GBS P BD CD MS IR<br />

EXR −0.02133 −0.00088 0.05190 0.00152 −0.00201 −0.00595 −0.00089<br />

G 0.02058 0.00558 0.18078 0.01160 0.50890 0.07248 0.00530<br />

TY −0.01676 −0.00410 −0.16655 0.00695 −0.55026 −0.07779 −0.01013<br />

CI 0.00268 0.04465 −0.00267 −0.00403 −0.01967 −0.00726 0.05953<br />

Table 6. Response-multiplier period 7.<br />

Endogenous Y GBS P BD CD MS IR<br />

EXR −0.02067 −0.00116 0.04987 0.00135 −0.00104 −0.00558 −0.00129<br />

G 0.00733 0.01033 0.16455 0.00048 0.57357 0.10760 0.01206<br />

TY −0.00402 −0.00345 −0.14832 −0.01434 −0.58067 −0.10904 −0.00869<br />

CI 0.00200 0.04174 −0.00236 −0.00293 −0.02186 −0.00551 0.05566<br />

Table 7. Response of output to monetary and fiscal policies.<br />

Periods 1 2 3 5 7<br />

dY/dCI −0.00226 −0.00273 −0.00215 −0.00268 −0.00200<br />

dY/dEXR −0.02254 −0.02209 −0.02188 −0.02133 −0.02067<br />

dP/dG 0.14849 0.15603 0.16536 0.18078 0.16455<br />

dY/dG 0.03572 0.03203 0.03001 0.02058 0.00733<br />

dP/dTY −0.15549 −0.15688 −0.16281 −0.16655 −0.14832<br />

dY/dTY −0.03590 −0.02691 −0.02470 −0.01676 −0.00402<br />

dGBS/dG 0.00403 0.00568 0.00499 0.00558 0.01033



4. Policy Analysis<br />

The dynamics of monetary and fiscal policy are analyzed in terms of<br />
the dynamic response-multipliers and their relationship with<br />

the monetary-fiscal policy regimes. The response-multiplier of the system<br />

will move over time within an adaptive control framework (Tables 1–6).<br />
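A response-multiplier of this kind is a dynamic impulse response of the state-space system: the effect on the state k periods after a unit change in an instrument. A minimal scalar sketch (hypothetical a and b; the chapter's system is multivariate and time-varying, so its multipliers also drift across periods):

```python
def dynamic_multipliers(a, b, horizon):
    """Dynamic response multipliers of x_{t+1} = a*x_t + b*u_t:
    the effect of a unit impulse in u at time t on x at t+1+k is a**k * b.
    Scalar sketch of what, in the chapter, are time-varying matrices."""
    out = []
    effect = b          # impact multiplier
    for _ in range(horizon):
        out.append(effect)
        effect = a * effect   # propagate through the transition coefficient
    return out

print(dynamic_multipliers(a=0.5, b=1.0, horizon=4))
```

With |a| < 1 the multipliers decay geometrically, which is the stability property the text requires of the estimated system.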

The movements of the response-multipliers under the monetary policy framework are described below. For the stability of the control system, it is essential that these dynamics of the response-multipliers be slow (Das and Cristi, 1990; Tsakalis and Ioannou, 1990).<br />

Analysis of the response-multipliers shows that devaluation would have a negative effect on national income, and also on government bond sales, CD, the interest rate, and money demand. This is due to the fact that Indian exports may not be very elastic in response to devaluation, whereas devaluation will reduce the ability to import significantly. As a result, national income and domestic activity would be affected negatively, which would depress the activity of the private sector. Because of this, CD will decline.<br />

Due to the downturn in economic activity, there will be less demand for government bonds. Devaluation can also have an inflationary effect due to increased import costs. Government expenditure, on the other hand, will<br />

have positive effects on every variable. This implies that increased public<br />

expenditure will stimulate the national income and the private sector despite<br />

increased interest rate. However, the increased public expenditure will have<br />

inflationary impacts on the economy at the same time.<br />

Increased tax revenue will depress national income, and it will also reduce government bond sales. A lower level of national income will reduce the money supply and the private sector's activity, which will be reflected in the reduced currency to deposit ratio and the reduced level of money demand. The reduced level of national income will, in this case, lower the price level, but the budget deficit will go up as well because of the reduced demand for government bonds. An increased central bank discount rate will depress national income and the general price level. At the same time, the private sector's activity will be reduced, as reflected in CD and money demand, due to the increased rate of interest. Although government bond sales will go up, the budget deficit will increase due to the reduced level of economic activity.<br />

The effect of the central bank discount rate (CI ) on interest rate varies<br />

from 0.061 in period 1 to 0.055 in period 7. This result is primarily due to<br />

the fact that the influence of G (public expenditure) on IR has increased<br />

from 0.003 in period 1 to 0.012 in period 7. If the public expenditure goes



up, there will be direct pressure on the central bank to provide loans to the<br />

government. The central bank can put pressure on the commercial banks to<br />

extend some loans in public sector and less to the private sector, and the CD<br />

of the commercial banks will be reduced as a result (at the same time reserve<br />

ratio will be increased). While CD goes up in response to government<br />

expenditure (G), the CI will have a similar effect on the currency to deposit<br />

ratio.<br />

The effect of CD on prices declines rapidly, whereas the effect of the<br />

exchange rate on price level is most prominent and does not vary much. It<br />

demonstrates that although during the initial phase of planning, tight credit<br />

policy may influence prices, in the longer run structural factors, public<br />

expenditure, and import costs reflected through the exchange rate will affect<br />

price levels more significantly. The effect of the CI on the money supply<br />

(MS) will be reduced from −0.005 in period 1 to −0.007 in period 6, and<br />

it will slightly increase in period 7.<br />

The impacts of the public expenditure and tax revenues on the two most<br />

important variables, GNP (Y) and price level (P) can be analyzed as follows.<br />

Increased government expenditure (G) will increase the price level, and<br />
this impact will intensify over time. The impact of the government expenditure<br />

on the GNP will be reduced, which is consistent with the reduced impacts<br />

of the tax revenues on the GNP over the planning period. Tax revenue will<br />

have negative impacts on the GNP as well as on the price level, and the<br />

impacts will decline over time. The negative impacts of the tax revenue on<br />

the GNP is partly explained by the negative effects of tax revenue on the<br />

currency to deposit ratios of the commercial banks, the main indicator for<br />

the private sector's activities. Government expenditure also has positive<br />

impacts on the government bond sales and the impact will increase over<br />

time due to the increasing difficulties of raising taxes, which will have<br />

negative impacts on the growth prospects.<br />

The effect of the exchange rate declines gradually, whereas the effect of<br />

the interest rate is cyclical. This is due to the cyclical pattern of the central<br />

discount rate in the optimum path. It is quite obvious that the impacts of<br />

fiscal and monetary policies on output will fluctuate over time depending<br />

on the direction of change of these policies. The variations of the effects are<br />

not unsystematic, as feared by Friedman and Schwartz (1963). However,<br />

here we are analyzing an optimum path not the historical path.<br />

5. Conclusion<br />

The importance of time-varying coefficients was recognized in the statistical literature long ago (Davis, 1941), and the frequency-domain approach to time series analysis was later applied to the problem (Mayer, 1972). Several authors since<br />

then have used varying parameter regression analysis (Cooley and Prescott,<br />

1973; Farley et al., 1975). In the above analysis, adaptive control system<br />

was used in a state-space model where the reduced form parameters can<br />

move over time.<br />

However, the variations over time are slow, which indicates the absence<br />
of any explosive response (Das and Cristi, 1990; Tsakalis and Ioannou, 1990).<br />

Cargill and Meyer (1978) also observed stable movements of the coefficients.<br />

Thus the role of the monetary-fiscal policies on the economy is not<br />

unsystematic, although it can vary over time.<br />

Results obtained by other researchers showed that the effects of<br />

monetary and fiscal policy change over time, and it is important to analyze<br />

these changes in order to obtain time-consistent monetary-fiscal policy (De<br />

Castro, 2006; Folster and Henrekson, 2001; Muscatelli and Tirelli, 2005).<br />

The results obtained using the adaptive control method show similar characteristics of the monetary-fiscal policy.<br />

The implication for the public policy is quite obvious. Time-consistent<br />

monetary-fiscal policy demands continuous revision, otherwise, the effectiveness<br />

of the policy may deteriorate and as a result, the effects of monetary<br />

and fiscal policy on major target variables of the economy may deviate from<br />

their desired level. Our approach is a systematic way forward to analyze<br />

these dynamics of monetary and fiscal policy.<br />

References<br />

Astrom, KJ and B Wittenmark (1995). Adaptive Control. Reading, MA: Addison-Wesley.<br />
Basu, D and A Lazaridis (1986). A method of stochastic optimal control by Bayesian filtering techniques. International Journal of Systems Science 17(1), 81–85.<br />
Brannas, K and A Westlund (1980). On the recursive estimation of stochastic and time-varying parameters in economic systems. In Optimization Techniques, K Iracki (ed.), Berlin: Springer-Verlag, 265–274.<br />
Berdell, JF (1995). The present relevance of Hume's open-economy monetary dynamics. Economic Journal 105(432), September, 1205–1217.<br />
Box, GEP and DA Pierce (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association 65(332), 1509–1526.<br />
Blanchard, O and R Perotti (2002). An empirical characterization of the dynamic effects of changes in government spending and taxes on output. Quarterly Journal of Economics 117(4), November, 1329–1368.<br />
Cargill, TF and RA Meyer (1977). Intertemporal stability of the relationship between interest rates and prices. Journal of Finance 32, September, 427–448.<br />
Cargill, TF and RA Meyer (1978). The time varying response of income to changes in monetary and fiscal policy. Review of Economics and Statistics LX(1), February, 1–7.<br />



Cripps, F and WAH Godley (1976). A formal analysis of the Cambridge Economic Policy Group model. Economica 43, August, 335–348.<br />
Cooley, TF and EC Prescott (1973). An adaptive regression model. International Economic Review 14, June, 248–256.<br />
Das, M and R Cristi (1990). Robustness of an adaptive pole placement algorithm in the presence of bounded disturbances and slow time variation of parameters. IEEE Transactions on Automatic Control 35(6), June, 762–802.<br />
Davis, HT (1941). The Analysis of Economic Time Series. Bloomington: Principia Press.<br />
De Castro, F (2006). Macroeconomic effects of fiscal policy in Spain. Applied Economics 38(8–10), May, 913–924.<br />
Farley, J, M Hinich and T McGuire (1975). Some comparison of tests for a shift in the slopes of a multivariate time series model. Journal of Econometrics 3, August, 297–318.<br />
Friedman, M and A Schwartz (1963). A Monetary History of the United States, 1867–1960. Princeton: Princeton University Press.<br />
Folster, S and M Henrekson (2001). Growth effects of government expenditure and taxation in rich countries. European Economic Review 45(8), 1501–1520.<br />
Goldberger, AS, AL Nagar and HS Odeh (1961). The covariance matrices of reduced form coefficients and forecasts for a structural econometric model. Econometrica 29, 556–573.<br />
Godley, W and M Lavoie (2002). Kaleckian models of growth in a coherent stock-flow monetary framework: A Kaldorian view. Journal of Post Keynesian Economics 24(2), 277–312.<br />
Godley, W and M Lavoie (2007). Fiscal policy in a stock-flow consistent (SFC) model. Journal of Post Keynesian Economics 30(1), 79–100.<br />
Hamburger, MJ (1971). The lag in the effect of monetary policy: A survey of recent literature. Monthly Review, Federal Reserve Bank of New York, December, 289–297.<br />
Humphrey, TM (1981). Adam Smith and the monetary approach to the balance of payments. Economic Review, Federal Reserve Bank of Richmond 67.<br />
Humphrey, TM (1993). Money, Banking and Inflation. Aldershot: Edward Elgar.<br />
Khan, MS (1976). A monetary model of the balance of payments: The case of Venezuela. Journal of Monetary Economics 2, July, 311–332.<br />
Khan, MS and PJ Montiel (1989). Growth oriented adjustment programs. IMF Staff Papers 36, June, 279–306.<br />
Khan, MS and PJ Montiel (1990). A marriage between fund and bank models. IMF Staff Papers 37, March, 187–191.<br />
Lazaridis, A (1980). Application of filtering methods in econometrics. International Journal of Systems Science 11(11), 1315–1325.<br />
Lucas, RE, Jr (1972). Expectations and the neutrality of money. Journal of Economic Theory 4, April, 103–124.<br />
Mayer, T (1967). The lag in effect of monetary policy, some criticisms. Western Economic Journal 5, September.<br />
Mayer, R (1972). Estimating coefficients that change over time. International Economic Review 11, October.<br />
Muscatelli, VA and P Tirelli (2005). Analyzing the interaction of monetary and fiscal policy: Does fiscal policy play a valuable role in stabilization? CESifo Economic Studies 51(4), 1–17.<br />
De Nicolao, G (1992). On the time-varying Riccati difference equation of optimal filtering. SIAM Journal on Control and Optimization 30(6), November, 1251–1269.<br />



Poole, W (1975). The relationship of monetary deceleration to business cycle peaks. Journal of Finance 30, June, 697–712.<br />
Radenkovic, M and A Michel (1992). Verification of the self-stabilization mechanism in robust stochastic adaptive control using Lyapunov function arguments. SIAM Journal on Control and Optimization 30(6), November, 1270–1294.<br />
Sargent, TJ and N Wallace (1973). Rational expectations and the theory of economic policy. Journal of Monetary Economics 2, April, 169–183.<br />
Smets, F and R Wouters (2003). An estimated dynamic stochastic general equilibrium model of the Euro area. Journal of the European Economic Association 1(5), September, 1123–1175.<br />
Tobin, J (1970). Money and income: Post hoc ergo propter hoc? Quarterly Journal of Economics 84, May, 301–317.<br />
Tsakalis, K and P Ioannou (1990). A new direct adaptive control scheme for time-varying plants. IEEE Transactions on Automatic Control 35(6), June, 356–369.<br />
Warburton, C (1971). Variability of the lag in the effect of monetary policy, 1919–1965. Western Economic Journal 9, June.<br />

Appendix A<br />

Probability Density Function and Recursive Process<br />

for Re-Estimation<br />

Under the assumptions stated in Section 3 and according to Bayes' rule, the conditional probability density function of π_{i+1} given x_{i+1} is Gaussian and is given by:<br />
p(π_{i+1}|x_{i+1}) = ∫ p(π_i|x_i) p(π_{i+1}, x_{i+1}|π_i, x_i) dπ_i<br />
where<br />
p(π_i|x_i) = constant · exp(−(1/2)‖π_i − π*_i‖²_{S_i^{−1}}), with π*_i = E(π_i|x_i) and S_i = cov(π_i|x_i) (S_i assumed to be invertible),<br />
p(π_{i+1}, x_{i+1}|π_i, x_i) = constant · exp(−(1/2)‖(π_{i+1} − π_i, x_{i+1} − H_{i+1}π_{i+1})‖²_{C^{−1}}),<br />
C = [Q_1 0; 0 Q_2], C^{−1} = [Q_1^{−1} 0; 0 Q_2^{−1}].<br />



Hence<br />
p(π_{i+1}|x_{i+1}) = constant · ∫ exp(−(1/2) J̃_i) dπ_i<br />
where<br />
J̃_i = (π_i − π*_i)′(S_i^{−1} + Q_1^{−1})(π_i − π*_i)<br />
 + (π_{i+1} − π*_i)′(Q_1^{−1} + H′_{i+1}Q_2^{−1}H_{i+1})(π_{i+1} − π*_i)<br />
 − 2(π_i − π*_i)′Q_1^{−1}(π_{i+1} − π*_i)<br />
 + (x_{i+1} − H_{i+1}π*_i)′Q_2^{−1}(x_{i+1} − H_{i+1}π*_i)<br />
 − 2(π_{i+1} − π*_i)′H′_{i+1}Q_2^{−1}(x_{i+1} − H_{i+1}π*_i). (a)<br />

Now, define G_i^{−1} = (S_i^{−1} + Q_1^{−1}) and consider the expression<br />
J̄_i = [(π_i − π*_i) − G_iQ_1^{−1}(π_{i+1} − π*_i)]′ G_i^{−1} [(π_i − π*_i) − G_iQ_1^{−1}(π_{i+1} − π*_i)]. (b)<br />

Note that G_i is symmetric, since it is the sum of two (symmetric) covariance matrices. Expanding Eq. (b) and noting that G_iG_i^{−1} = I, we obtain<br />
J̄_i = (π_i − π*_i)′G_i^{−1}(π_i − π*_i) − 2(π_i − π*_i)′Q_1^{−1}(π_{i+1} − π*_i)<br />
 + (π_{i+1} − π*_i)′Q_1^{−1}G_iQ_1^{−1}(π_{i+1} − π*_i). (c)<br />

In view of Eqs. (b) and (c), Eq. (a) can be written as<br />
J̃_i = [(π_i − π*_i) − G_iQ_1^{−1}(π_{i+1} − π*_i)]′ G_i^{−1} [(π_i − π*_i) − G_iQ_1^{−1}(π_{i+1} − π*_i)]<br />
 + (π_{i+1} − π*_i)′(Q_1^{−1} + H′_{i+1}Q_2^{−1}H_{i+1} − Q_1^{−1}G_iQ_1^{−1})(π_{i+1} − π*_i)<br />
 + (x_{i+1} − H_{i+1}π*_i)′Q_2^{−1}(x_{i+1} − H_{i+1}π*_i)<br />
 − 2(π_{i+1} − π*_i)′H′_{i+1}Q_2^{−1}(x_{i+1} − H_{i+1}π*_i).<br />

Integration with respect to π_i yields ∫ constant · exp(−(1/2) J̃_i) dπ_i = constant · exp(−(1/2) J⃛_i),<br />
where<br />
J⃛_i = (π_{i+1} − π*_i)′(Q_1^{−1} − Q_1^{−1}G_iQ_1^{−1} + H′_{i+1}Q_2^{−1}H_{i+1})(π_{i+1} − π*_i)<br />
 + (x_{i+1} − H_{i+1}π*_i)′Q_2^{−1}(x_{i+1} − H_{i+1}π*_i)<br />
 − 2(π_{i+1} − π*_i)′H′_{i+1}Q_2^{−1}(x_{i+1} − H_{i+1}π*_i).<br />



Hence<br />
p(π_{i+1}|x_{i+1}) = constant · exp(−(1/2) J⃛_i).<br />
Since p(π_{i+1}|x_{i+1}) is proportional to the likelihood function, by maximizing the conditional density function we are also maximizing the likelihood in order to determine π*_{i+1}. Note that minimization of J⃛_i is equivalent to maximization of p(π_{i+1}|x_{i+1}). To minimize J⃛_i, we expand it, eliminating the terms not containing π_{i+1}. Then, we differentiate with respect to π′_{i+1} and, after equating to zero, we finally obtain (Lazaridis, 1980):<br />
π*_{i+1} = π*_i + (Q_1^{−1} − Q_1^{−1}G_iQ_1^{−1} + H′_{i+1}Q_2^{−1}H_{i+1})^{−1} H′_{i+1}Q_2^{−1}(x_{i+1} − H_{i+1}π*_i). (d)<br />

Now, consider the composite matrix Q_1^{−1} − Q_1^{−1}G_iQ_1^{−1}, where G_i = (S_i^{−1} + Q_1^{−1})^{−1}.<br />
Considering the matrix identity of Householder, we can write<br />
Q_1^{−1} − Q_1^{−1}(S_i^{−1} + Q_1^{−1})^{−1}Q_1^{−1} = (Q_1 + S_i)^{−1} ≡ P_{i+1}^{−1}.<br />
Hence Eq. (d) takes the form:<br />
π*_{i+1} = π*_i + K_{i+1}(x_{i+1} − H_{i+1}π*_i)<br />
where K_{i+1}, S_{i+1}^{−1} and P_{i+1}^{−1} are defined in Eqs. (21)–(23).<br />

It is recalled that H_0 is a null matrix, since no observations exist before period 1 in the estimation process. The same applies to the vector x_0.<br />
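The recursion π*_{i+1} = π*_i + K_{i+1}(x_{i+1} − H_{i+1}π*_i) is a Kalman-type update. The following is a scalar sketch only: it assumes the standard gain K = P H / (H P H + Q_2) with prior variance P_{i+1} = S_i + Q_1, consistent with (Q_1 + S_i)^{−1} = P_{i+1}^{−1} above, since Eqs. (21)–(23) themselves are not reproduced in this appendix.

```python
def recursive_step(pi_star, s, q1, q2, h, x):
    """One recursive re-estimation step (scalar sketch).
    pi_star, s : previous estimate E(pi_i | x_i) and its variance S_i
    q1, q2     : variances of the parameter-drift and observation noise
    h, x       : observation coefficient H_{i+1} and new observation x_{i+1}"""
    p = s + q1                                  # prior variance P_{i+1} = S_i + Q_1
    k = p * h / (h * p * h + q2)                # gain K_{i+1}
    pi_new = pi_star + k * (x - h * pi_star)    # pi*_{i+1} = pi*_i + K (x - H pi*_i)
    s_new = (1.0 - k * h) * p                   # updated variance S_{i+1}
    return pi_new, s_new

# A noiseless observation (q2 = 0) pins the estimate down exactly:
print(recursive_step(pi_star=0.0, s=1.0, q1=0.0, q2=0.0, h=1.0, x=5.0))
```

With observation noise (q2 > 0) the gain shrinks below one and the estimate moves only part of the way toward the new observation, which is what keeps the time-varying multipliers moving slowly.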

Appendix B<br />

It is recalled that the model was estimated using the FIML method. The FIML estimates are as follows (R² and adjusted R² refer to the corresponding 2SLS estimates; numbers in brackets are the corresponding t-statistics).<br />
Estimated Model<br />
1. A_t = 0.842 Y_t + 55191.5 + 0.024 A_{t−1} − 1.319 IR_t + 1.051 IR_{t−1} − 284 EXR_t<br />
 (3.48) (4.87) (1.74) (1.34) (1.16) (1.29)<br />
 R² = 0.99, adjusted R² = 0.92, DW = 1.76, ρ = 0.27<br />



2. Y_t = A_t + R_t<br />
3. BD_t = (G_t + LR_t + PF_t) − (TY_t + GBS_t + AF_t + FB_t)<br />
4. PF_t = 1148.80 + 0.169 CFB_{t−1} + 144.374 WIR_t<br />
 (1.48) (2.39) (1.89)<br />
5. GBS_t = 0.641 G_t + 0.677 G_{t−1} + 1.591 IR_t − 0.33 AF_t<br />
 (1.14) (1.41) (1.64) (0.23)<br />
 R² = 0.89, adjusted R² = 0.84, DW = 2.36, ρ = 0.23 (1.51)<br />
6. MD = MS<br />
7. MS_t = [(1 + CD_t)/(CD_t + RR_t)](R_t + NDA_t)<br />
8. RR_t = 0.008 Y_t + 0.065 − 0.002 Y_{t−1} + 1.257 IR_t + 2.703 IR_{t−1} − 0.003 T<br />
 (3.47) (2.72) (−1.84) (1.87) (1.88) (2.87)<br />
 R² = 0.86, adjusted R² = 0.79, DW = 2.88, ρ = 0.26 (1.23)<br />
9. CD_t = 0.439 IR_t − 0.0057 T + 0.193 + 0.158 CD_{t−1} + 0.0007 Y_t + 0.009 Y_{t−1}<br />
 (−1.49) (−6.269) (11.95) (1.16) (1.56) (1.73)<br />
 R² = 0.94, adjusted R² = 0.92, DW = 1.22, ρ = 0.64 (4.21)<br />
10. MD_t = 2.733 RR_t − 2.19 IR_t + 1.713 IR_{t−1} + 1.275 Y_t − 113335.0<br />
 (−2.52) (−0.24) (2.17) (6.67) (−0.088)<br />
 R² = 0.86, adjusted R² = 0.81, DW = 1.92, ρ = 0.26 (1.31)<br />

11. IR_t = 0.413 MD_{t−1} + 0.406 Y_{t−1} − 0.814 IR_{t−1} + 8.656 CI_t − 1.351 CI_{t−1} − 7.02<br />
 (1.93) (1.56) (1.403) (1.68) (−1.84)<br />
 R² = 0.98, adjusted R² = 0.95, DW = 2.48, ρ = 0.58 (3.21)<br />
12. P_t = 0.0004 A_t + 0.0002 A_{t−1} + 0.105 IMC_t + 0.421 P_{t−1}<br />
 (−5.71) (1.77) (1.48) (3.79)<br />
 R² = 0.98, adjusted R² = 0.93, DW = 2.09, ρ = 0.48<br />
13. IMC_t = 5.347 + 19.352 WPM_t + 9.017 EXR_t<br />
 (1.34) (2.03) (0.97)<br />
 R² = 0.98, adjusted R² = 0.97, DW = 2.07, ρ = 0.31 (1.37)<br />
14. R_t = X_t − IM_t + K_t + PFT_t + FB_t − PF_t + AF_t<br />
 + 32.895 (1.87)<br />


15. IM_t = −24.04 − 0.025 IM_{t−1} + 1.123 T + 0.104 Y_t − 0.089 Y_{t−1} − 0.654 IMC_t<br />
 (−0.82) (0.27) (−0.86) (1.59) (9.21) (−1.26)<br />
 R² = 0.96, adjusted R² = 0.92, DW = 2.52, ρ = −0.62 (−1.72)<br />
16. CFB_t = ∑_{r=−20}^{t} FB_r<br />
χ² = 563.09<br />
The estimated chi-square satisfies the goodness-of-fit test for the FIML estimation.<br />

Notations<br />

A = Domestic absorption<br />

AF = Foreign receipts (grants etc.)<br />

BD = Government budget deficits<br />

CD = Credit to deposit ratio in the commercial banking sector<br />

CFB = Cumulative foreign borrowing, i.e., foreign debt over a period of<br />

20 years<br />

CI = Discount rate of the RBI<br />

FB = Foreign borrowing<br />

G = Government expenditure<br />

GBS = Government bond sales<br />

IM = Value of imports<br />

IMC = Import price index (1990 = 100)<br />

IR = Interest rate in the money-market<br />

K = Foreign capital inflows<br />

LR = Lending (minus repayments to the states)<br />

MD = Money demand<br />

MS = Money supply<br />

NDA = Net domestic asset creation by the RBI<br />

P = Consumers’ price index (1990 = 100)<br />

PFT = Private foreign transactions<br />

PF = Foreign payments<br />

R = Changes in foreign exchange reserve<br />

RR = Reserve to deposit ratio in the commercial banking sector<br />

TY = Government tax revenue<br />

T = Time trend



WPM = World price index of India’s imports (1990 = 100)<br />

WIR = World interest rate, average of European and US money market<br />

rate<br />

EXR = Exchange rate (Rs/US$)<br />

X = Value of exports<br />

Y = GNP at constant 1990 prices


Chapter 3<br />

Mathematical<br />

Modeling in Macroeconomics




Topic 1<br />

The Advantages of Fiscal Leadership in an<br />

Economy with Independent Monetary Policies<br />

Andrew Hughes Hallett<br />
George Mason University, USA<br />

1. Introduction<br />

In January 2006, Gordon Brown (as Britain's finance minister) was widely criticized by the European Commission and other policy makers for running fiscal policies that they considered too loose and irresponsible. The UK fiscal deficit had breached the 3% of GDP limit that is considered safe.<br />

To make its point, the European Commission initiated an “excessive deficit”<br />

procedure against the UK government, claiming that its fiscal policies constituted<br />

a danger for the good performance of the UK economy and its<br />

neighbors, if these deficits were not contained and reversed. Yet the United<br />

Kingdom, alone among its European partners, allows fiscal policy to lead in<br />

order to achieve certain social expenditure and medium-term output objectives<br />

and a degree of co-ordination with the monetary policies designed to<br />

reach a certain inflation target. And the economic performance has been<br />

no worse, and may have been better than elsewhere in the Eurozone. Was<br />

Gordon Brown’s strategy so mistaken after all?<br />

British fiscal policy has changed radically since the days of the 1960s and 1970s, when it tried to micro-manage aggregate demand with an accommodating monetary policy; during the 1980s, it was instead designed to strengthen the economy's supply-side responses, while monetary policy was intended to secure lower inflation.<br />

The 1990s saw a return to more activist fiscal policies — but policies<br />

designed strictly in combination with an equally active monetary policy<br />

based on inflation targeting and an independent Bank of England. They are<br />

set, in the main, to gain a series of medium- to long-term objectives — low<br />

debt, the provision of public services and investment, social equality, and<br />

economic efficiency. The income stabilizing aspects of the fiscal policy<br />

69


70 Andrew Hughes Hallett<br />

have, therefore, been left to act passively through the automatic stabilizers,<br />

which are part of any fiscal system, while the discretionary part (the bulk of<br />

the policy measures) is set to achieve these long-term objectives. Monetary<br />

policy, meanwhile, is left to take care of any short-run stabilization around<br />

the cycle; that is, beyond what, predictably, would be done by the automatic<br />

stabilizers. 1<br />

To draw a sharp distinction between actively managed long-run policies and non-discretionary short-run stabilization efforts restricted to the automatic stabilizers is, of course, the strategic policy prescription of Taylor (2000). Marrying this with an activist monetary policy directed at cyclical stabilization, but based on an independent Bank of England and a monetary policy committee with instrument (but not target) independence, appears to have been the innovation in UK policy. It implies a leadership role for fiscal policy, which allows both fiscal and monetary policies to be better co-ordinated — but without either losing its ability to act independently. 2

Thus, Britain appears to have adopted a Stackelberg solution, which lies somewhere between the discretionary (but Pareto-superior) co-operative solution and the inferior but independent (non-co-operative) solution, as shown in Fig. 1. 3 Nonetheless, by forcing the focus onto long-run objectives, to the exclusion of the short term, this set-up has imposed a degree of pre-commitment (and potential for electoral punishment) on fiscal policy, because governments naturally wish to lead. But the regime remains non-co-operative, so that there is no incentive to renege on earlier plans in the

1 The Treasury estimates that the automatic stabilizers will, in normal circumstances, stabilize some 30% of the cycle, the remaining 70% being left to monetary policy (HM Treasury, 2003). The option to undertake discretionary stabilizing interventions is retained “for exceptional circumstances”, however. Nevertheless, any such additional interventions are unlikely to be needed: first, because of the effectiveness of the forward-looking, symmetric, and activist inflation-targeting mechanism adopted by the Bank of England; and, second, because the long-term expenditure (and tax) plans are deliberately constructed in nominal terms, so that they add to the stabilizing power of the automatic stabilizers in more serious booms or slumps.

2 For details on how this leadership vs. stabilization assignment is intended to work, see HM<br />

Treasury (2003) and Section 3 below. Australia and Sweden operate rather similar regimes.<br />

3 Figure 1 is the standard representation of the outcomes of the different game theory equilibria<br />

when the reaction functions form an acute angle. The latter is assured for our context<br />

when both sets of policy makers have some interest in inflation control and output stabilization,<br />

and the policy multipliers have the conventional signs (Hughes Hallett and Viegi,<br />

2002). Stackelberg solutions, therefore, imply a degree of co-ordination in the outcomes,<br />

relative to the non-co-operative (unco-ordinated) Nash solution.


The Advantages of Fiscal Leadership in an Economy 71<br />

Figure 1. Monetary-Fiscal Interactions and the Central Bank.

absence of changes in information. Thus, the policies and their intended<br />

outcomes will be sustained by the government of the day. 4<br />
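The relative positions of these equilibria can be illustrated numerically. The sketch below is a toy two-player game of my own construction: the linear economy, the quadratic losses, and all parameter values are invented for illustration and are not the model behind Fig. 1. It reproduces the generic ordering, however: the leader does at least as well under Stackelberg as under Nash, and the co-operative solution minimizes the joint loss.

```python
# Toy monetary-fiscal policy game. All functional forms and numbers are
# illustrative assumptions, not taken from the chapter.

def y(f, m):            # output gap: fiscal expansion raises it, tighter money lowers it
    return 2.0 + f - 0.5 * m

def pi(f, m):           # inflation: a simple Phillips-curve-style relation
    return 1.0 + 0.5 * y(f, m)

def loss_gov(f, m):     # government: output target plus some inflation aversion
    return (y(f, m) - 1.0) ** 2 + 0.5 * pi(f, m) ** 2 + 0.1 * f ** 2

def loss_cb(f, m):      # central bank: inflation first, output second
    return pi(f, m) ** 2 + 0.25 * y(f, m) ** 2 + 0.1 * m ** 2

def argmin_quadratic(q):
    # Exact minimizer of a convex scalar quadratic, from three sample points.
    q0, q1, q2 = q(0.0), q(1.0), q(2.0)
    a = (q0 - 2.0 * q1 + q2) / 2.0
    b = q1 - q0 - a
    return -b / (2.0 * a)

def cb_reaction(f):     # follower's best response m(f)
    return argmin_quadratic(lambda m: loss_cb(f, m))

# Nash: iterate best responses to a fixed point (a contraction here).
f_n = m_n = 0.0
for _ in range(200):
    m_n = cb_reaction(f_n)
    f_n = argmin_quadratic(lambda f: loss_gov(f, m_n))

# Stackelberg with fiscal leadership: the government internalizes m(f).
f_s = argmin_quadratic(lambda f: loss_gov(f, cb_reaction(f)))
m_s = cb_reaction(f_s)

# Co-operative: minimize the joint loss by alternating exact minimization.
f_c = m_c = 0.0
for _ in range(200):
    m_c = argmin_quadratic(lambda m: loss_gov(f_c, m) + loss_cb(f_c, m))
    f_c = argmin_quadratic(lambda f: loss_gov(f, m_c) + loss_cb(f, m_c))

print("gov loss:   Nash %.3f  Stackelberg %.3f" % (loss_gov(f_n, m_n), loss_gov(f_s, m_s)))
print("joint loss: Nash %.3f  co-op %.3f" %
      (loss_gov(f_n, m_n) + loss_cb(f_n, m_n), loss_gov(f_c, m_c) + loss_cb(f_c, m_c)))
```

Because the leader can always replicate the Nash point, its Stackelberg loss can never exceed its Nash loss; and the joint loss at the co-operative optimum can never exceed the joint loss at Nash. That is the ordering sketched in Fig. 1.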

2. The Fiscal Leadership Hypothesis<br />

The hypothesis is that the United Kingdom’s improved performance is due to the fact that fiscal policy leads an independent monetary policy. This leadership derives from the fact that fiscal policy typically has long-run targets (sustainability and low debt), is not easily reversible (public services and social equality), and does not stabilize well if efficiency is to be maintained. Nevertheless, there are also automatic stabilizers in any fiscal policy framework, implying that monetary policy must condition itself on the fiscal stance at each point, which automatically puts monetary policy in the follower’s role. This is helpful because it allows the economy to secure the benefits of an independent monetary policy while also enjoying a certain measure of co-ordination between the two sets of policy makers — discretionary/automatic fiscal policies on one side, and active monetary policies on the

4 Stackelberg games, with fiscal policy leading, produce fiscal commitment: i.e., sub-game<br />

perfection with either strong or weak time consistency (Basar, 1989). In this sense, we have<br />

“rules rather than discretion”. Commitment to the stabilization policies is then assured by<br />

the independence of the monetary authority.



other. The extra co-ordination arises in this case because the constraints on, and responses to, an agreed leadership role reduce the externalities of self-interested behavior which independent agents would otherwise impose on one another. This allows a Pareto improvement over the conventional non-co-operative (full independence) solution, without reducing the central bank’s ability to act independently. It is important to realize that the co-ordination here is implicit and therefore rule-based, not discretionary.

To show that UK fiscal policy does lead monetary policy in this sense, and that this is the reason for the Pareto-improved results in Table 1, I produce evidence in three parts:
• Institutional evidence: taken from the UK Treasury’s own account of how its decision-making works, what goals it needs to achieve, and how the monetary stance may affect it, or vice versa;

Table 1. Generalized Taylor rules in the UK and EU-12.

Dependent variable: central bank lending rate, r_t (t-ratios in parentheses).

For the UK, monthly data from June 1997–January 2004:

       Const     r_{t−1}   π^e_{j,k}   j, k       gap      pd       debt
(1)    −1.72     0.711     1.394       +6, +18    0.540    —        —
       (2.16)    (5.88)    (2.66)                 (2.06)
       R² = 0.91, F(3,21) = 82.47, N = 29
(2)    −2.57     0.598     1.289       +9, +21    1.10     −0.67    0.43
       (2.59)    (4.38)    (0.66)                 (1.53)   (0.83)   (0.50)
       R² = 0.90, F(5,19) = 44.1, N = 25

For the Eurozone (EU-12), monthly data from January 1999–January 2004:

(1)    −0.996    0.274     1.714       +9, +21    0.610    —        —
       (0.76)    (1.04)    (3.89)                 (1.77)
       R² = 0.82, F(3,11) = 22.2, N = 15
(2)    −13.67    0.274     1.110       −6, +6     0.341    0.463    0.191
       (0.59)    (1.04)    (1.19)                 (1.22)   (2.89)   (0.63)
       R² = 0.87, F(5,13) = 24.7, N = 19

Notation: j, k give the bank’s inflation forecast interval; π^e_{j,k} represents the average inflation rate expected over the interval t + j to t + k: Eπ_{t+j,t+k}; gap = GDP − trend GDP; pd = primary deficit/surplus as a percentage of GDP (a surplus > 0); and debt = debt/GDP ratio. Estimation: instrumented 2SLS; t-ratios in parentheses; j, k determined by search; and the output gap is obtained from a conventional HP filter to determine trend output.
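The notation above states that trend output comes from a conventional Hodrick-Prescott (HP) filter. A minimal NumPy sketch of that filter follows; the smoothing parameter for monthly data is my assumption, since the chapter does not report the value used.

```python
import numpy as np

def hp_filter(y, lamb=129_600):
    """Hodrick-Prescott filter: the trend minimizes
    sum((y - trend)^2) + lamb * sum(second differences of trend)^2.
    lamb = 1600 is the quarterly convention; 129,600 (Ravn-Uhlig) is a
    common monthly choice -- an assumption here, not the chapter's value."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))          # second-difference operator
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    # First-order condition of the minimization: (I + lamb * D'D) trend = y
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return trend, y - trend           # (trend output, output gap)

# A linear (log) GDP series has zero second differences, so the filter
# returns the series itself as trend and an essentially zero gap.
trend, gap = hp_filter(np.linspace(100.0, 110.0, 80))
```

The `gap` series is the same construction as the "gap = GDP − trend GDP" regressor described in the notation.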



• Empirical evidence: the extent to which monetary policy is affected by fiscal policy, while fiscal policy, with its longer-run objectives, does not depend on monetary policy; and
• Theoretical evidence: showing how, with different goals for governments and central banks, Pareto-improving results can be expected from fiscal leadership. The United Kingdom has, therefore, had both the incentive and the capacity to operate in this way.

3. Institutional Evidence: The Treasury’s Mandate<br />

3.1. History<br />

British fiscal policy has changed a great deal over the past 30 years. Fiscal policy was the principal instrument of economic policy in the 1960s and 1970s, and was focused almost exclusively on demand management. Monetary policy and the provision of public services (subject to a lower bound) were essentially accommodating factors, since the main constraint was perceived to be the financing of persistent current-account trade deficits. The result was a short-term view, in which the conflicts between the desire for growth (employment) and the recurring evidence of overheating (trade deficits) led to an unavoidable sequence of “stop-go” policies. There were many (Dow, 1964) who argued that the real problem was a lack of long-run goals for fiscal policy; and that the lags inherent in recognizing the need for a policy change, and in implementing it through Parliament until it took hold in the markets, had produced a system that was actually destabilizing rather than stabilizing. But there would have been conflicts anyway: there were two or more competing goals, but effectively only one instrument with which to reach them.

Such a system could not deliver the long-run goals of public services, public investment, and social equity. It certainly proved too difficult to turn fiscal policy on and off as fast and as frequently as required, and to pre-commit to certain expenditure and taxation plans at the same time. The point here is that, if you cannot pre-commit fiscal policy to certain goals, then you will not be able to pre-commit monetary policy either. This means that the “stable growth with low inflation” objective will be lost.

In the 1980s, the strategy changed when a new regime took office, committed to reducing the size and role of the government in the economy. The demand management role of fiscal policy was phased out, passing instead to inflation control and monetary management — with limited



success in the earlier phases of monetary and exchange rate targeting, but<br />

with rather more success when it was formalized as an inflation-targeting<br />

regime under an independent Bank of England and monetary policy committee.<br />

In this period, therefore, the role of fiscal policy was to provide the “conditioning information” within which monetary management had to work. Fiscal policy was split into two distinct parts. Short-term stabilization of output and employment was left to the automatic stabilizers inherent in the prevailing tax and expenditure rules; discretionary adjustments would be used only in exceptional circumstances, if at all. 5 The rest of fiscal policy, being the larger part, could then be directed at long-term objectives: in this instance, changes to free up the supply side (and enhance competitiveness), on the premise that greater microeconomic flexibility would both reduce the need for stabilizing interventions (relative wage and price adjustments would do the job better) and provide the conditions for stronger employment and growth (HMT, 2003). Given this, the long-run goals of better public services, efficiency, and social equity could then be attained. In addition, as the market flexibility changes were made, this part of fiscal policy could be increasingly focused on providing the long-run goals directly. 6

directly. 6<br />

3.2. Current Practice<br />

According to the Treasury’s own assessment (HMT, 2003), UK fiscal policy now leads monetary policy in two senses. First, fiscal policy is decided in advance of monetary or other policies (pp. 5, 63, 64, 74), 7 and with a long time horizon (pp. 5, 42, 48, 61). Second, monetary policy is charged with controlling inflation and stabilizing output around the cycle — rather than steering the economy as such (pp. 61–63). Fiscal policy, therefore, sets the conditions within which monetary policy has to respond and achieve its own objectives (pp. 9, 15, 67–68). The short-term fiscal interventions, now less than half the total and declining (p. 48, Box 5.3, Section 6), are restricted to the automatic stabilizers — with effects that are known and predictable.

5 This follows the recommendations in Taylor (2000). However, because of the inevitable<br />

trade-off between cyclical stability and budget stability, it is likely to be successful only in<br />

markets with sufficiently flexible wages and prices (see Fatas et al., 2003).<br />

6 This summary is taken from the UK Treasury’s own view (HMT, 2003), Sections 2, 4, and<br />

pp. 34–38.<br />

7 In this section, page or section numbers given without further reference are all cited from<br />

HMT (2003).



The short-run discretionary components are negligible (p. 59, Table 5.5),<br />

and the long-run objectives of the policy will always take precedence in<br />

cases of conflict (pp. 61, 63–68). Fiscal policy would, therefore, not be<br />

used for fine tuning (pp. 11, 63), or for stabilization (pp. 1, 14). This burden<br />

will be carried by interest rates, given the fiscal stance and its forward plans<br />

(pp. 1, 7, 11, 37).<br />

Third, given the evidence that consumers and firms often do not look<br />

forward efficiently and may be credit-constrained, and also that the impacts<br />

of the fiscal policy are uncertain and have variable lags (pp. 19, 26, 48;<br />

Taylor, 2000), it makes sense for the fiscal policy to be used consistently<br />

and sparingly and in a way that is clearly identified with the long-term<br />

objectives. The United Kingdom has determined these objectives to be (pp.<br />

11–13, 39–41, 61–63, 81–82):<br />

(a) The achievement of sustainable public finances in the long term; low<br />

debt (40% of GDP) and symmetric stabilization over the cycle.<br />

(b) A sustainable and improving delivery of public services (health and education);<br />

improving competitiveness/supply-side efficiency in the long<br />

term; and the preservation of social (and inter-generational) equity and<br />

alleviation of poverty.<br />

(c) Recognition that the achievement of these objectives is often contractual and cannot easily be reversed once committed. The long lead times needed to build up these programs mean that the necessary commitments must be made well in advance. Frequent changes would conflict with the government’s medium-term sustainability objectives and could not, in any case, be fulfilled. Similarly, changes in direct taxes will affect both equity and efficiency, and should seldom be made.
(d) The formulation of clear numerical objectives consistent with these goals; a transparent set of institutional rules to achieve them; and a commitment to a clear separation of these goals from economic stabilization around the cycle. 8

(e) To ensure that fiscal policy can operate along these lines, public expenditures<br />

are planned with fixed three-year departmental expenditure limits<br />

which, when combined with decision and implementation lags of up<br />

to two years, means that the bulk of the fiscal policy has to be planned<br />

8 The credibility of fiscal policy, and by extension an independent monetary policy, is seen as<br />

depending on these steps, and on symmetric stabilization. See also Dixit (2001), and Dixit<br />

and Lambertini (2003).



with a horizon of up to five years — vs. a maximum of two years in the inflation-targeting rules operated by the Bank of England. Moreover, these spending limits are constraints defined in nominal terms, so that they will be met. In addition, the Treasury operates a tax-smoothing approach — having rejected “formulaic rules” that might have adjusted taxes or expenditures, the various tax regulators, or credit taxes that could have stabilized the economy in the short term. The bulk of fiscal policy will therefore remain focused on the medium to long term.

To summarize, the institutional structure itself has introduced fiscal leadership. And, since the discretionary elements are smaller than elsewhere — smaller than the automatic stabilizers (Tables 1–5) and declining (Box 5.3) — something else (monetary policy) must have taken up the burden of short-run stabilization. This arrangement is, therefore, best modeled as a Stackelberg game, with institutional parameters for goals, independence, priorities, etc. I shall argue that this has had the very desirable result of increasing the degree of co-ordination between policy makers without compromising their formal or institutional independence — or, indeed, their credibility for delivering stability and low inflation. Constrained independence, in other words, designed to reduce the externalities which each party might otherwise impose on the other.

4. Empirical Evidence<br />

The next step is to establish whether the UK authorities have followed<br />

the leadership model described above. The fact that they say that they<br />

follow such a strategy does not prove that they do so. The difficulty<br />

here is that, although the asymmetry in ex-ante (anticipated) responses<br />

between the follower and leader is clear enough in the theoretical model —<br />

the follower expects no further changes from the leader after the follower<br />

chooses his reaction function, whereas the leader takes this reaction<br />

function into account — such an asymmetry and zero restriction<br />

will not appear in the ex-post (observed) responses that emerge in the<br />

final solution (Basar and Olsder, 1999; Brandsma and Hughes Hallett,<br />

1984; Hughes Hallett, 1984). Since I have no data on anticipations, this<br />

makes it difficult to test for leadership directly. But we can use indirect tests based on the degree of competition, or complementarity, between the instruments.


4.1. Monetary Responses<br />


For monetary policy, it is widely argued that the authorities’ decisions can best be modeled by a Taylor rule 9 :

r_t = ρ r_{t−1} + α E_t π_{t+k} + β gap_{t+h},   α, β, ρ ≥ 0   (1)

where k and h represent the authorities’ forecast horizons 10 and may be positive or negative. Normally, α > 1 will be required to avoid indeterminacy: that is, arbitrary variations in output or inflation as a result of unanchored expectations in the private sector. The relative sizes of α and β then reveal the strength of the authorities’ attempts to control inflation vs. stabilize income; and ρ reveals their preference for gradualism. I set h = 0 in Eq. (1), since monetary policy appears not to depend on the expected output gaps (Dieppe et al., 2004).
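As a concrete illustration of Eq. (1), the sketch below steps a rule of exactly this form forward over a hypothetical path of expected inflation and output gaps. The parameter values are purely illustrative and are not the estimates reported in Table 1.

```python
def taylor_rate(r_prev, exp_inflation, gap, rho=0.7, alpha=1.4, beta=0.5):
    """One step of the generalized Taylor rule (1):
    r_t = rho * r_{t-1} + alpha * E_t[pi_{t+k}] + beta * gap_t.

    rho captures interest-rate smoothing (gradualism); alpha > 1 is the usual
    condition for determinacy; the relative size of alpha and beta measures the
    weight on inflation control vs. output stabilization. Illustrative values only."""
    return rho * r_prev + alpha * exp_inflation + beta * gap

# Simulate the rule's response to a one-off burst of expected inflation.
path, r = [], 4.0
expected_inflation = [2.0, 3.5, 3.0, 2.5, 2.0, 2.0]
output_gap = [0.0, 1.0, 0.5, 0.0, 0.0, 0.0]
for pi_e, gap in zip(expected_inflation, output_gap):
    r = taylor_rate(r, pi_e, gap)
    path.append(round(r, 3))
print(path)  # rates rise while expected inflation is elevated, then ease back gradually
```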

In order to obtain an idea of the influence of fiscal policies on monetary<br />

policy, I include some Taylor rule estimates — with and without fiscal<br />

variables — in Table 1. They show such rules for the United Kingdom and<br />

the Eurozone since 1997 and 1999, respectively, the dates when new policy<br />

regimes were introduced. The Eurozone has been included to emphasize<br />

the potential contrast between fiscal leadership in the United Kingdom, and<br />

the lack of it in Europe.<br />

4.2. Different Types of Leadership<br />

Conventional wisdom would suggest that Europe has either monetary leadership<br />

or independent policies, and hence policies which are either jointly<br />

dependent in the usual way, or which are complementary and mutually supporting.<br />

The latter implies that the monetary policy tends to expand/contract<br />

whenever fiscal policy needs to expand or contract — but not necessarily<br />

vice versa, when money is expanding or contracting and is sufficient to<br />

9 Taylor (1993b). One can argue that the policy should be based on fully optimal rules<br />

(Svensson, 2003), of which Eq. (1) will be a special case. But, it is hard to argue that policy<br />

makers actually do optimize when the additional gains from doing so may be small and when<br />

the uncertainties in their information, policy transmissions, or the economy’s responses may<br />

be quite large. In practice, therefore, the Taylor rule approach is often found to fit central<br />

bank behavior better.<br />

10 In principle, k and h may be positive or negative: positive if the policy rule is based on<br />

future-expected inflation, to head off an anticipated problem, as in the Bank of England. But<br />

negative if interest rates are to follow a feedback rule to correct past mistakes or failures.



control inflation on its own. This is a weak form of monetary leadership, in which fiscal policies are an additional instrument for use in cases of particular difficulty, rather than policies in a Nash game with conflicting aims that need to be reconciled.

More generally, leadership implies complementarity among policy instruments in the leader’s reaction function, but conflicts among them in the follower’s responses. A weak form of leadership also allows for independence among instruments in the leader’s policy rules. Thus, monetary leadership would imply some complementarity (or independence) in the Taylor rule, but conflicts in the fiscal responses. And fiscal leadership would mean complementarity or independence in the fiscal rule, but conflicts in the monetary responses. Evidently, from Section 3, we might expect Stackelberg leadership (with fiscal policy leading) in the UK, but the opposite in the Eurozone.

4.3. Observed Behavior<br />

The upper equations in each panel of Table 1 yield the standard results for monetary behavior in both the United Kingdom and the Eurozone. Both monetary authorities have targeted expected inflation more than the output gap since the late 1990s — and with horizons of 18–21 months ahead. The European Central Bank (ECB) has been more aggressive in this respect. But, contrary to conventional wisdom, it was also more sensitive to the output gap and had a longer horizon and less policy inertia.

However, if we allow monetary policies to react to changes in the fiscal stance, we get different results (the lower equations). Here, we see that UK monetary decisions may take fiscal policy into account, but the effect is not significant or well defined. However, this model of monetary behavior does imply more activist policies, a longer forecast horizon (up to two years, as the Bank of England claims), and greater attention to the output gap — the symmetry in the UK’s policy rule. And to the extent that fiscal policy does have an influence, it would be as a substitute for (or competitor to) monetary policy — fiscal deficits lead to higher interest rates. This is potentially

consistent with fiscal leadership, since this form of leadership can allow independence or complementarity between instruments in the leader’s rule. We need to check the fiscal reaction functions directly. The lack of significance for the debt ratio is easily understood, however: since the debt ratio is a declared long-run objective of fiscal policy, monetary policy need not take it into account. So far, the evidence could



imply a Stackelberg follower role or independence for monetary policy,<br />

easing the competition (externalities) among policy instruments.<br />

The ECB results look quite different. Once fiscal effects are included, the concentration on inflation control is much reduced and the forecast horizon shrinks to six months. Moreover, a feedback element, to correct past mistakes, comes in. At the same time, output stabilization becomes less important, which implies that symmetric targeting drops out. Instead, monetary policy now appears to react to fiscal policy, but with the “wrong” sign: the larger the primary deficit, the looser the monetary policy. In this case, therefore, the policies are acting as complements — circumstances that call for a primary deficit will also call for a relaxation of monetary policy. Thus, we have evidence of monetary leadership, where fiscal policy is used as an additional policy instrument.

4.4. Fiscal Reaction Functions in the UK and Eurozone<br />

Many analysts have hypothesized that fiscal policy responses can best be modeled by means of a “fiscal Taylor rule”:

d_t = a_t + γ gap_t + sd,   γ > 0   (2)

where sd is the structural deficit ratio, d_t is the actual deficit ratio (d > 0 denotes a surplus), 11 and a_t represents all the other factors, such as the influence of monetary policy, existing or anticipated inflation, the debt burden, or discretionary fiscal interventions. The coefficient γ then gives a measure of an economy’s automatic stabilizers. The European Commission (2002), for example, estimates γ ≈ 1/2 for Europe — a little more in countries with an extensive social security system, a little less elsewhere. And a similar relationship is thought to underlie UK fiscal policy (see HMT, 2003: Boxes 5.2 and 6.2).
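Equation (2) also implies a simple accounting exercise: subtracting the automatic-stabilizer component γ · gap from an observed balance recovers the structural (cyclically adjusted) position. A sketch using the European Commission's γ ≈ 1/2 and invented figures:

```python
GAMMA = 0.5  # European Commission (2002) estimate of the automatic stabilizers

def actual_balance(structural, gap, gamma=GAMMA):
    """Eq. (2) without the a_t terms: d_t = gamma * gap_t + sd.
    With d > 0 a surplus, a boom (gap > 0) automatically improves the balance."""
    return gamma * gap + structural

def structural_balance(actual, gap, gamma=GAMMA):
    """Invert Eq. (2) to cyclically adjust an observed balance."""
    return actual - gamma * gap

# Illustrative numbers (my own): a 2% observed deficit during a recession
# (gap = -2% of GDP) is only a 1% structural deficit once the automatic
# stabilizers are netted out.
sd = structural_balance(-2.0, -2.0)
print(sd)  # -1.0
```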

The expected signs of any remaining factors are not so clear. The debt burden should increase current deficits, unless there is a systematic debt reduction program underway. Inflation should have a positive impact on the deficit ratio if fiscal policy is used for stabilization purposes, but no effect otherwise. Finally, the output gap should also have a positive impact on the deficit ratio if the latter is being used for stabilization purposes — in which case interest rates should be negatively correlated with the size
11 See Taylor (2000), Gali and Perotti (2003), Canzoneri (2004), and Turrini and in ‘t Veld<br />

(2004) for similar formulations. The European Commission (2002) uses a similar rule, but<br />

defines d>0 to be a deficit. They therefore expect γ to be negative.



Table 2. Fiscal policy reaction functions.

For the United Kingdom, sample period 1997q3–2004q2:

d_t = −0.444 + 0.845 debt_t + 0.0685 r_t + 1.076 gap_t
      (0.37)   (7.64)        (0.30)       (1.72)
R² = 0.78, F(3,24) = 33.23

For the Eurozone, sample period 1999q1–2004q2:

d_t = 3.36 + 1.477 d_{t−1} − 0.740 π^e_{+9,+21} − 0.207 r_t − 0.337 gap_t
      (2.96) (9.66)          (2.21)              (1.16)      (2.18)
R² = 0.95, F(4,11) = 54.55

Key: d_t = gross deficit/GDP (%), where d < 0 denotes a deficit; t-ratios in parentheses.



Consequently, the United Kingdom appears to use fiscal policy for stabilization in a minor way, as claimed, while the Euro economies seem not to use fiscal policy in this way at all. Confirmation of this comes from the responses to interest rate changes, which are positive but very small and statistically insignificant in the United Kingdom, but negative and nearly significant in Europe. This implies that British fiscal policies are chosen independently of monetary policy, as suggested by our weak leadership model. But if there is any association at all, then the fiscal and monetary policies would be complements and weakly co-ordinated.

The UK results are therefore inconclusive. They are consistent with independent policies or with fiscal leadership — the latter, despite the insignificance of the direct fiscal-monetary linkage, because of the significant output gap term in the fiscal equation, which, given that the same effect is not found in the monetary policy reactions, suggests that fiscal leadership may in fact have been operating. In any event, there is no suggestion of monetary leadership; if anything, the results imply independence or fiscal leadership.

In the Eurozone, the results are quite different. Here, the significant result is the conflict among instruments in monetary policy (and essentially the same conflict in the fiscal policy reactions). Hence, the policies are competitive, which suggests that they form a Nash equilibrium (or possibly monetary leadership, since the coefficient on fiscal policy in the Taylor rule is small and there is no output gap smoothing). This suggests weak monetary leadership, or a simple non-co-operative game.

5. Theoretical Evidence: A Model of Fiscal Leadership<br />

5.1. The <strong>Economic</strong> Model and Policy Constraints<br />

The key question now is: would governments want to pursue fiscal pre-commitment? Do they have an incentive to do so? And would there be a clear improvement in economic performance if they did? More important, would the fiscal leadership model be more advantageous if fiscal policy were limited by a deficit rule in the form of “hard” targets (as in the original Stability Pact) or “soft” targets?

To answer these questions, we extend a model used in Hughes Hallett<br />

and Weymark (2002; 2004a;b; 2005) to examine the problem of monetary<br />

policy design when there are interactions with fiscal policy. For exposition<br />

purposes, we suppress the spillovers among countries and focus on the<br />

following three equations to represent the economic structure of any one



country 13 :<br />

π_t = π^e_t + α y_t + u_t   (3)
y_t = β(m_t − π_t) + γ g_t + ε_t   (4)
g_t = m_t + s(b y_t − τ_t)   (5)

where π_t is inflation in period t, y_t is the growth in output (relative to trend) in period t, and π^e_t represents the rate of inflation that rational agents expect to prevail in period t, conditional on the information available at the time expectations are formed. Next, m_t, g_t, and τ_t represent the growth in the money supply, government expenditures, and tax revenues in period t; u_t and ε_t are random disturbances, distributed independently with zero mean and constant variance. All variables are deviations from their steady-state (or equilibrium) growth paths, and we treat trend budget variables as pre-committed and balanced. Deviations from the trend budget are, therefore, the only discretionary fiscal policy choices available. The coefficients α, β, γ, s, and b are all positive by assumption. The assumption that γ is positive is sometimes controversial. 14 However, the short-run impact multipliers derived from Taylor’s (1993a) multicountry estimation provide empirical support for this assumption (as does HMT, 2003).

According to Eq. (3), inflation is increasing in the rate of inflation predicted by private agents and in output growth. Equation (4) indicates that both monetary and fiscal policies have an impact on the output gap. The micro-foundations of the aggregate supply equation (3), originally derived by Lucas (1972; 1973), are well known. McCallum (1989) shows that an aggregate demand equation like Eq. (4) can be derived from a standard, multiperiod utility-maximization problem.
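For given policy settings and shocks, Eqs. (3)-(5) can be solved by substituting the budget constraint (5) into the demand equation (4) and stacking the result with the supply equation (3) as a 2 × 2 linear system in (y_t, π_t). The sketch below does exactly that; the parameter values are arbitrary illustrative choices, since the chapter does not calibrate the model.

```python
import numpy as np

def solve_model(m, tau, pi_e, u=0.0, eps=0.0,
                alpha=0.5, beta=0.4, gamma=0.3, s=0.2, b=0.6):
    """Solve Eqs. (3)-(5) for (y, pi, g).

    Substituting g = m + s*(b*y - tau) from (5) into (4) gives
        (1 - gamma*s*b) * y + beta * pi = (beta + gamma) * m - gamma*s*tau + eps,
    while (3) rearranges to -alpha * y + pi = pi_e + u.
    Parameter values are illustrative assumptions only."""
    A = np.array([[1.0 - gamma * s * b, beta],
                  [-alpha,              1.0]])
    rhs = np.array([(beta + gamma) * m - gamma * s * tau + eps,
                    pi_e + u])
    y_t, pi_t = np.linalg.solve(A, rhs)
    g_t = m + s * (b * y_t - tau)        # recover g from the budget constraint (5)
    return y_t, pi_t, g_t

y_t, pi_t, g_t = solve_model(m=1.0, tau=0.5, pi_e=2.0, u=0.1, eps=-0.2)
```

Substituting the solution back into Eqs. (3)-(5) verifies the algebra: all three structural equations hold at (y_t, π_t, g_t).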

Equation (5) describes the government's budget constraint. In earlier chapters, we allowed discretionary tax revenues to be used for

13 Technically, we assume a blockwise orthogonalization of the traditional multicountry model to produce independent semi-reduced forms for each country. The disturbance terms may, therefore, contain foreign variables, but they will have zero means so long as these countries remain on their long-run (equilibrium) growth paths on average (all variables being defined as deviations from their equilibrium growth paths).

14 Barro (1981) argues that government purchases have a contractionary impact on output. Our model, by contrast, treats fiscal policy as important because: (i) fiscal policy is widely used to achieve re-distributive and public service objectives; (ii) governments cannot pre-commit monetary policy with any credibility if fiscal policy is not pre-committed (Dixit and Lambertini, 2003); and (iii) central banks, and the ECB in particular, worry intensely about the impact of fiscal policy on inflation and financial stability (Dixit, 2001).

The Advantages of Fiscal Leadership in an Economy 83

re-distributive purposes only, but permitted discretionary expenditures for enhancing output. We further assumed that there were two types of agents, rich and poor, and that only the rich use their savings to buy government bonds. On this view, b would be the proportion of pre-tax income (output) going to the rich and s the proportion of after-tax income that the rich allocate to saving. Tax revenues, τ_t, can then be used by the government to re-distribute income from the rich to the poor, either directly or via public services. This structure, therefore, has output-enhancing expenditures g_t, and discretionary transfers τ_t. Both are financed by aggregate tax revenues; that is, from discretionary and trend revenues. Expenditures above these revenues must be financed by the sale of bonds.

5.2. An Alternative Interpretation

We could, however, take a completely different interpretation of Eq. (5). We could take the term s(by_t − τ_t) to be the proportion of the budget-deficit ratio, as it currently exists, adjusted for the effect of growth on this ratio and for any discretionary taxes raised in this period, which the government now proposes to spend in period t. The deficit ratio itself is of course d = (G − T)/Y = (e − r), where G and T are the absolute levels of government spending and tax revenues, and e and r are their counterparts as a proportion of Y. If the economy grows, then Δd = Δ(G − T)/Y − ẏ(e − r), where ẏ denotes the growth rate of national income. If we wish to see what would happen to this deficit ratio if fiscal policies were not changed, it means new spending can only be allowed to take place out of the new revenues generated by growth: ΔG = ΔT = rΔY. Inserting this, we have Δd = −(e − r)ẏ = −dẏ. Since y_t is the deviation of Y from its steady-state path, by_t in Eq. (5) will equal Δd, the change in the existing deficit ratio under existing expenditure plans and tax codes, if b = −(e − r)/Y. The government will spend some proportion of that, s, less new discretionary taxes in the current period. Hence, s would typically equal 1, although it might be less if some part were saved in social security funds or other assets. This defines what would happen to the deficit ratio if there were no change in existing fiscal policies; but the term in τ_t shows that governments may also raise additional taxes in order to reduce the current deficit ratio to some desired target value, θ say. This target is likely to be θ = 0, to balance the budget over the cycle, as required by the stability pact. We give governments such a target, in the form of either a soft or a hard rule, in the objective functions discussed in Section 5.
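The no-policy-change arithmetic above (ΔG = ΔT = rΔY, implying Δd = −dẏ) can be checked with a toy budget; all levels below are invented for illustration:

```python
# Numerical check of the no-policy-change deficit dynamics: if new spending is
# financed entirely out of growth-generated revenue (dG = dT = r*dY), the deficit
# ratio d = (G - T)/Y changes by approximately -d*ydot. Figures are illustrative.

G, T, Y = 480.0, 450.0, 1000.0   # levels: spending, revenue, national income
d0 = (G - T) / Y                 # initial deficit ratio; e = G/Y, r = T/Y
r = T / Y
ydot = 0.02                      # growth rate of national income

Y1 = Y * (1.0 + ydot)
G1 = G + r * (Y1 - Y)            # dG = r*dY: spending rises only with growth revenue
T1 = T + r * (Y1 - Y)            # dT = r*dY
d1 = (G1 - T1) / Y1

# First-order prediction from the text: delta_d = -d*ydot
assert abs((d1 - d0) - (-d0 * ydot)) < 1e-4
```

The small residual is the second-order term dropped by the linear approximation.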


Given this new interpretation, we can now solve for π_t^e, π_t, and y_t from Eqs. (3) and (4). This yields the following reduced forms:

π_t(g_t, m_t) = (1 + αβ)^{−1}[αβm_t + αγg_t + m_t^e + (γ/β)g_t^e + αε_t + u_t]  (6)

y_t(g_t, m_t) = (1 + αβ)^{−1}[βm_t + γg_t − βm_t^e − γg_t^e + ε_t − βu_t].  (7)

Solving for τ_t using Eqs. (5) and (7) then yields:

τ_t(g_t, m_t) = [s(1 + αβ)]^{−1}[(1 + αβ + sbβ)m_t − (1 + αβ − sbγ)g_t − sbβm_t^e − sbγg_t^e + sb(ε_t − βu_t)].  (8)
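As a consistency check, the reduced forms (6)–(8) can be substituted back into the structural equations (3)–(5), using the rational-expectations condition π_t^e = m_t^e + (γ/β)g_t^e implied by E[y_t] = 0. A sketch with illustrative parameter values:

```python
# Check that the reduced forms (6)-(8) satisfy the structural model (3)-(5)
# under rational expectations (pi_e = m_e + (gamma/beta)*g_e). All values
# are illustrative.

alpha, beta, gamma, s, b = 0.5, 0.8, 0.3, 1.0, 0.4
m, g, m_e, g_e, u, eps = 1.0, 0.6, 0.9, 0.5, 0.05, -0.02

D = 1.0 + alpha * beta
pi  = (alpha*beta*m + alpha*gamma*g + m_e + (gamma/beta)*g_e + alpha*eps + u) / D  # Eq. (6)
y   = (beta*m + gamma*g - beta*m_e - gamma*g_e + eps - beta*u) / D                 # Eq. (7)
tau = ((1 + alpha*beta + s*b*beta)*m - (1 + alpha*beta - s*b*gamma)*g
       - s*b*beta*m_e - s*b*gamma*g_e + s*b*(eps - beta*u)) / (s * D)              # Eq. (8)

pi_e = m_e + (gamma/beta)*g_e  # rational expectation implied by E[y_t] = 0
assert abs(pi - (pi_e + alpha*y + u)) < 1e-12            # Eq. (3)
assert abs(y - (beta*(m - pi) + gamma*g + eps)) < 1e-12  # Eq. (4)
assert abs(g - (m + s*(b*y - tau))) < 1e-12              # Eq. (5)
```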

5.3. Government and Central Bank Objectives

In our formulation, we allow for the possibility that the government and an independent central bank may differ in their objectives. In particular, we assume that the government cares about inflation stabilization, output growth, and the provision of public services (and hence the size of the public sector deficit or debt); whereas the central bank, if left to itself, would be concerned only with the first two objectives,15 and possibly only the first one. We also assume that the government has been elected by majority vote, so that the government's loss function reflects society's preferences to a significant extent.

Formally, the government's loss function is given by:

L_t^g = ½(π_t − π̂)² − λ_1^g y_t + (λ_2^g/2)[(b − θ)y_t − τ_t]²  (9)

where π̂ is the government's inflation target, λ_1^g is the relative weight or importance that the government assigns to output growth,16 and λ_2^g is the relative weight which it assigns to the debt or deficit rule. The parameter θ represents the target value for the debt- or deficit-to-GDP ratio which the government would like to reach: hence (b − θ)y_t becomes the target for its

15 Since the central bank has no instruments to control the debt itself, it can only react to poor fiscal discipline indirectly: e.g., to the extent that its inflation objective is compromised, where it does have an instrument.

16 Barro and Gordon (1983) also adopt a linear output target. In the delegation literature, the output component in the government's loss function is usually represented as quadratic to reflect an output stability objective. In our model, the quadratic term in debt/deficits allows monetary and fiscal policy to play a stabilization role as well as pick a position on the economy's output-inflation trade-off.


discretionary revenues, τ_t. All the other variables are as previously defined. We may regard large values of λ_2^g, say λ_2^g > max[1, λ_1^g], as defining a "hard" debt or deficit rule; and smaller values, λ_2^g < min[1, λ_1^g], as a "soft" debt or deficit rule.

The objectives of the central bank, however, may be quite different from those of the government. We model them as follows:

L_t^cb = ½(π_t − π̂)² − (1 − δ)λ^cb y_t − δλ_1^g y_t + (δλ_2^g/2)[(b − θ)y_t − τ_t]²  (10)

where 0 ≤ δ ≤ 1, and λ^cb is the weight which the central bank assigns to output growth. The parameter δ measures the degree to which the central bank is forced to take the government's objectives into account. The closer δ is to 0, the greater is the independence of the central bank in making its choices. And the lower λ^cb is, the greater is the degree of its conservatism in making these choices.
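The role of δ in Eq. (10) can be seen by evaluating the two loss functions at the extremes: δ = 1 makes the bank's loss coincide with the government's Eq. (9), while δ = 0 removes the fiscal term entirely, leaving only the bank's own inflation and output objectives. A sketch with illustrative values:

```python
# Evaluate Eqs. (9) and (10) at the two polar values of delta.
# All parameter values and the (pi, y, tau) point are illustrative.

def L_g(pi, y, tau, pi_hat=2.0, lam1=0.5, lam2=0.25, b=0.4, theta=0.0):
    # Eq. (9): government loss
    return 0.5*(pi - pi_hat)**2 - lam1*y + 0.5*lam2*((b - theta)*y - tau)**2

def L_cb(pi, y, tau, delta, lam_cb, pi_hat=2.0, lam1=0.5, lam2=0.25, b=0.4, theta=0.0):
    # Eq. (10): central bank loss, a delta-weighted mix of its own and the
    # government's objectives
    return (0.5*(pi - pi_hat)**2 - (1 - delta)*lam_cb*y - delta*lam1*y
            + 0.5*delta*lam2*((b - theta)*y - tau)**2)

pi, y, tau = 2.5, 1.0, 0.3
# delta = 1: the bank's loss reproduces the government's loss exactly
assert abs(L_cb(pi, y, tau, delta=1.0, lam_cb=0.1) - L_g(pi, y, tau)) < 1e-12
# delta = 0: the bank's loss no longer depends on the fiscal variable tau
assert L_cb(pi, y, tau, 0.0, 0.1) == L_cb(pi, y, 999.0, 0.0, 0.1)
```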

In Eq. (9), we have defined the government's inflation target as π̂. The fact that the same inflation target appears in Eq. (10) would be correct for the cases where the bank has instrument independence, but not target independence. However, it is easy to relax this assumption and allow the central bank to choose its own target, as the ECB does. But, as I show in Hughes Hallett (2005), there is no advantage in doing so, since the government would simply adjust its parameters to compensate. Hence, only if the bank is free to choose the value of λ^cb as well do we get an extra advantage. Yet even this will not be enough to outweigh the advantages to be gained from fiscal leadership, for the reasons noted in Section 5.

A second feature is that Eqs. (9) and (10) specify symmetric inflation targets around π̂. Symmetric inflation targets are particularly emphasized as being required of both monetary policy and the fiscal authorities (HMT, 2003). We have, therefore, specified both to be features of the government and central bank objective functions here. However, it is not clear that symmetry has been a feature of European policy making in practice. Indeed, the ECB has acquired a reputation for being more concerned to tighten monetary policy when π > π̂ than it is to loosen it when π < π̂. Consequently, rather than symmetry in the output-gap target, we have asymmetric penalties, which tighten the fiscal constraints in recessions but weaken them in a boom, when the debt and deficits would be falling anyway. On one hand, this might not be ideal, since it introduces an element of pro-cyclicality; on the other hand, it brings the rule closer to reality.


5.4. Institutional Design and Policy Choices

We characterize the strategic interaction between the government and the central bank as a two-stage non-co-operative game in which the structure of the model and the objective functions are common knowledge. In the first stage, the government chooses the institutional parameters δ and λ^cb. The second stage is a Stackelberg game in which fiscal policy takes on a leadership role. In this stage, the government and the monetary authority set their policy instruments, given the δ and λ^cb values determined at the previous stage. Private agents understand the game and form rational expectations for future prices in the second stage. Formally, the policy game runs as follows:

5.4.1. Stage 1

The government solves the problem:

min_{δ,λ^cb} EL^g(g_t, m_t, δ, λ^cb) = E{½[π_t(g_t, m_t) − π̂]² − λ_1^g y_t(g_t, m_t)} + (λ_2^g/2)E[(b − θ)y_t(g_t, m_t) − τ_t(g_t, m_t)]²  (11)

where L^g(g_t, m_t, δ, λ^cb) is Eq. (9) evaluated at (g_t, m_t, δ, λ^cb), and E denotes expectations.

5.4.2. Stage 2

(a) Private agents form rational expectations about future prices, π_t^e, before the shocks u_t and ε_t are realized.

(b) The shocks u_t and ε_t are realized and observed by both the government and the central bank.

(c) The government chooses g_t, before m_t is chosen by the central bank, to minimize L^g(g_t, m_t, δ̄, λ̄^cb), where δ̄ and λ̄^cb are at the values determined at stage 1.

(d) The central bank then chooses m_t, taking g_t as given, to minimize:

L^cb(g_t, m_t, δ̄, λ̄^cb) = (1 − δ̄)[π_t(g_t, m_t) − π̂]²/2 − (1 − δ̄)λ̄^cb y_t(g_t, m_t) + δ̄L^g(g_t, m_t, δ̄, λ̄^cb)  (12)


We solve this game by solving the second stage (for the policy choices) first, and then substituting the results back into Eq. (11) to determine the optimal operating parameters δ and λ^cb. From stage 2, we get:

π_t(δ, λ^cb) = π̂ + [(1 − δ)β(φ − η)λ^cb + δ(βφ + γ)λ_1^g] / {α[β(φ − η) + δ(βη + γ)]}  (13)

y_t(δ, λ^cb) = −u_t/α  (14)

τ_t(δ, λ^cb) = [(1 − δ)βs(βη + γ)(λ^cb − λ_1^g)] / {[β(φ − η) + δ(βη + γ)]λ_2^g} − (b − θ)u_t/α  (15)

where

η = ∂m_t/∂g_t = [−α²γβs² + δφψλ_2^g] / [(αβs)² + δψ²λ_2^g]  (16)

φ = 1 + αβ − γθs  (17)

and

ψ = 1 + αβ + βθs.  (18)

Evidently, ψ is positive. We assume φ to be positive as well. One would certainly expect φ > 0 since, with θ small, γθs < 1 + αβ. Nevertheless, the conflict which stands revealed within the fiscal policy is important. In order to get φ < 0, we would need γθs > 1 + αβ > 0.
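Reading Eq. (16) with ψ from Eq. (18), the reaction slope collapses to η = −γ/β at δ = 0, so that βη + γ = 0, the neutralization of policy externalities used in the welfare comparisons that follow. A numerical check with illustrative parameter values:

```python
# Evaluate the central bank's reaction slope eta = dm_t/dg_t from Eq. (16)
# and verify that delta = 0 gives eta = -gamma/beta, i.e. beta*eta + gamma = 0.
# Parameter values are illustrative.

alpha, beta, gamma, s, theta = 0.5, 0.8, 0.3, 1.0, 0.0
lam2 = 0.25
phi = 1 + alpha*beta - gamma*theta*s   # Eq. (17)
psi = 1 + alpha*beta + beta*theta*s    # Eq. (18)

def eta(delta):
    # Eq. (16)
    return (-(alpha**2)*gamma*beta*s**2 + delta*phi*psi*lam2) / \
           ((alpha*beta*s)**2 + delta*(psi**2)*lam2)

assert abs(eta(0.0) + gamma/beta) < 1e-12     # eta(0) = -gamma/beta
assert abs(beta*eta(0.0) + gamma) < 1e-12     # so beta*eta + gamma = 0 at delta = 0
```

At δ = 0 both λ_2^g terms drop out of (16), so only the ratio of the two policy multipliers survives.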


Substituting Eqs. (13)–(15) back into Eq. (11), we can now get the stage 1 solution from:

min_{δ,λ^cb} EL^g(δ, λ^cb) = ½{[(1 − δ)β(φ − η)λ^cb + δ(βφ + γ)λ_1^g] / (α[β(φ − η) + δ(βη + γ)])}² + (λ_2^g/2){[(1 − δ)βs(βη + γ)(λ^cb − λ_1^g)] / ([β(φ − η) + δ(βη + γ)]λ_2^g)}².  (19)
2<br />

This part of the problem has first-order conditions:<br />

and<br />

(1 − δ)(φ − η)λ g 2 {(1 − δ)β(φ − η)λcb + δ(βφ + γ)λ g 1 }<br />

−(1 − δ) 2 (βη + γ) 2 α 2 s 2 β(λ g 1 − λcb ) = 0 (20)<br />

{(1 − δ)β(φ − η)λ cb + δ(βφ + γ)λ g 1 }<br />

× (λ g 1 − λcb ) {δ(1 − δ) + (φ − η)} λ g 2<br />

−(1 − δ)(βη + γ)α 2 s 2 β {(βη + γ) − (1 − δ)β} (λ g 1 − λcb ) 2 = 0.<br />

(21)<br />

where = ∂η/∂δ. There are two real-valued solutions which satisfy this<br />

pair of first-order conditions. 19 Both are satisfied when δ = 1 and λ cb = λ g 1 .<br />

This solution describes a fully dependent central bank, which is not appropriate<br />

in the Eurozone case. And, it turns out to be inferior to the second<br />

solution: δ = λ cb = 0. In this solution, the central bank is fully independent<br />

and exclusively concerned with the economy’s inflation performance.<br />

Out of the two possibilities, the solution which yields the lowest welfare<br />

loss, as measured by the government’s (society’s) loss function, can be<br />

identified by comparing Eq. (19) to the expected loss that would be suffered<br />

under the alternative institutional arrangement. Substituting, δ = 1 and<br />

λ cb = λ g 1 in Eq. (19) results in: EL g = (λg 1 )2<br />

2α 2 . (22)<br />

19 Because η is a function of δ, Eq. (21) is quartic in δ. This polynomial has four distinct<br />

roots, of which only two are real-valued. The complete solution may be found in Hughes<br />

Hallett and Weymark (2002).


But substituting δ = λ^cb = 0 in the right-hand side of Eq. (19) yields:

EL^g = 0.  (23)

Consequently, our results show that, when there is fiscal leadership, society's welfare loss (as measured by Eq. (19)) is minimized when the government appoints independent central bankers who are concerned only with the achievement of a mandated inflation target and completely disregard the impact their policies may have on output.

However, our results also indicate that fiscal leadership with an independent central bank can be beneficial under more general conditions. When δ = 0, βη + γ = 0 and the externalities between policy makers are neutralized.20 As a result, Eq. (19) becomes:

EL^g = ½(λ^cb/α)²  (24)

for any value of λ^cb. Hence, an independent central bank will always produce better results than a dependent one, so long as it is more conservative than the government (λ^cb < λ_1^g).


This is always smaller than the loss incurred when fiscal leadership is combined with a dependent central bank. However, the optimal degree of conservatism for an independent central bank, in this case, is obtained by setting δ = 0 in Eq. (25) to yield:

λ^cb∗ = (αγs)²λ_1^g / [(αγs)² + φ²λ_2^g].  (27)

It is straightforward to show that the value of EL^g in Eq. (24) is always less than Eq. (26) as long as:

λ^cb < [λ_1^g λ^cb∗]^{1/2}.  (28)

It is also evident that λ^cb∗ < λ_1^g whenever λ_2^g > 0. Consequently, fiscal leadership with any λ^cb below this bound will produce better outcomes.
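Two properties of λ^cb∗ in Eq. (27) are easy to confirm numerically: it lies below λ_1^g whenever λ_2^g > 0, and it shrinks toward zero as the deficit rule hardens (λ_2^g → ∞). A sketch with illustrative parameter values:

```python
# Eq. (27): the optimal degree of conservatism for an independent central bank
# under simultaneous moves. Parameter values are illustrative.

alpha, beta, gamma, s, theta = 0.5, 0.8, 0.3, 1.0, 0.0
lam1 = 0.5
phi = 1 + alpha*beta - gamma*theta*s   # Eq. (17)

def lam_cb_star(lam2):
    return (alpha*gamma*s)**2 * lam1 / ((alpha*gamma*s)**2 + phi**2 * lam2)

assert lam_cb_star(0.25) < lam1               # more conservative than the government
assert lam_cb_star(1.0) < lam_cb_star(0.25)   # a harder rule implies more conservatism
assert abs(lam_cb_star(1e9)) < 1e-6           # lam2 -> infinity drives lam_cb_star -> 0
```

The last line anticipates conclusion (e) below: as λ_2^g → ∞, λ^cb∗ → 0.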


Comparing these two sets of outcomes, we have five conclusions. These conclusions signify the main results of this paper:

(a) Fiscal leadership eliminates the inflationary bias, and results in lower inflation without any loss in output or output volatility. The proximate cause of this surprising result is that optimization under fiscal leadership leads to higher taxes and larger debt repayments.22

(b) The deeper reason is that there is a self-limiting aspect to the long-run design of fiscal policy. Unless the effect of fiscal policy on output is so large as to generate savings/taxes that could finance both fiscal expansions and debt re-payments (a possibility ruled out in the deficit rule case), each expansion designed to increase output would simply be accompanied by a greater burden of debt. Hence, in order to preserve the long-run sustainability of its finances, the government must eventually raise taxes. This fact takes some inflationary pressure off the central bank when fiscal expansions are called for. As a result, the bank is less likely to tighten monetary policy, and externalities are reduced on both sides. This is what generates better co-ordination between the policy makers, as noted in Section 5. But this can only happen if the government has a genuine commitment to the long-run goals (sustainable public finances, and a debt or deficit rule in this case), since only then will the fiscal policy leader take into account the predictable reactions (of the follower) to any short-run deviations by the leader. The follower knows that the leader is not likely to sacrifice his long-run goals by making such deviations (and anyway has the opportunity to clear up the mess, according to the follower's own preferences, if the leader does so). Thus, in the short run under simultaneous moves, each player fails to consider the predictable response, and the costs, which the externalities he/she imposes on his/her rival would create. The ability to co-ordinate would be lost, and the outcomes worse for both. We show this in Section 5 (and in a more formal analysis in Hughes Hallett, 2005).

(c) These results hold independently of the commitment to the debt rule (λ_2^g) or its target value (θ); independently of the government's preference to stabilize or spend (λ_1^g, s); and independently of the economy's transmission parameters (α, β, γ) or the particular value of b, as discussed above.

22 Taxes are lower under simultaneous moves because λ^cb∗ < λ_1^g.


(d) Since Eτ_t^∗ < 0 in Eq. (32), but Eτ_t = 0 in Eq. (29), the simultaneous moves regime will always end up increasing the deficit levels. Therefore, it will eventually exceed any limit set for the debt ratio. Fiscal leadership will do neither of these things. More generally, the simultaneous moves game always leads to fiscal expansions if money financing is small.23 Indeed, the expected value of s(by_t^∗ − τ_t^∗) in Eq. (5) is zero under fiscal leadership, but φλ^cb∗/(α²γ) > 0 in the simultaneous moves case (as expected, since the monetary policy externalities would be larger). And since revenues are lower in the latter case, budget deficits are larger. Hence, the gains from fiscal leadership are more than just better outcomes. Leadership also implies less expansionary budgets and tighter public expenditure controls. The ECB should find this a significantly more comfortable environment to operate in.

(e) The simultaneous moves regime approaches fiscal leadership as the debt rule becomes progressively "harder": that is, EL^g → 0, λ^cb∗ → 0, π_t^∗ → π̂, and Eτ_t^∗ → 0, as λ_2^g → ∞.

6. The Co-Ordination Effect

In our model, the central bank is independent. Without further institutional restraints, interactions between an independent central bank and the government would lead to non-co-operative outcomes. But if the government is committed to long-term leadership in the manner that I have described, the policy game becomes an example of rule-based co-ordination in which both parties can gain without any reduction in the central bank's independence. However, this is not to say that both parties gain equally. If the inflation target is strengthened after the fiscal policy is set, the leadership gains to the government will be less (Hughes Hallett and Viegi, 2002). This is an important point because it implies that granting leadership to a central bank whose inflation aversion is already greater than the government's, or whose commitment is greater, will produce no additional gains. For this reason, granting leadership to a central bank that cannot influence debt or deficits directly, but puts a higher priority on inflation control, will bring smaller gains from society's point of view (whatever it may do for the central bank). I demonstrate these points formally in Hughes Hallett (2005).

23 m_t ≈ 0; see Appendix B. This result still holds good when β > γ, even if monetary financing is allowed. It also holds good if the central bank is conservative, or the government is liberal, when β < γ.


Finally, given the structure of the Stackelberg game, there is no incentive to re-optimize for either party (so long as the government remains committed to long-term leadership rather than short-term demand management), unless both parties agree to reduce their independence through discretionary co-ordination.24 This is a general result: it holds good for any model where both inflation and output depend on both monetary and fiscal policy, and where inflation is targeted to some degree by both players (Demertzis et al., 2004; Hughes Hallett and Viegi, 2002). Thus, although Pareto improvements over non-co-operation do not emerge for both parties in all Stackelberg games, they do so in problems of the kind examined here. Our results are robust.

6.1. Empirical Evidence

Whether or not these results are of practical importance is another matter. In Table 3, I have computed the expected losses under the simultaneous moves and fiscal leadership regimes for 10 countries when both fiscal policy and the central bank are constructed optimally. The data I have used are from 1998, the year in which the Eurozone was created. The data and their sources are summarized in Appendix A.

The countries selected fall into three groups:

(a) Eurozone countries, with independently set fiscal and monetary policies and target independence: France, Germany, Italy, and the Netherlands;

(b) explicit inflation targeters with fiscal leadership: Sweden, New Zealand, and the United Kingdom; and

(c) federalists, with joint inflation and stabilization objectives: the United States, Canada, and Switzerland.

In the first group, monetary policy is conducted at the European level and fiscal policy at the national level. Policy interactions in this group can be characterized in terms of a simultaneous moves game, with target as well as instrument independence for both sets of policy authorities.

24 This supplies the political economy of our results. We need no pre-commitment beyond leadership (the Stackelberg game supplies sub-game perfection; see Note 4) and no punishment beyond electoral results. The former will hold so long as governments have a natural commitment to maintaining sustainable public finances, public services (health, education, and defence), or social equity, all long-run "contractual" issues. The latter will then arise from the political competition inherent in any democracy.


Table 3. Expected losses under a deficit rule: leadership vs. other strategies.

                          Full           Fiscal         Simultaneous    Losses under
                          dependence     leadership     moves           simultaneous moves:
                          δ = 1,         δ = 0,         λ^cb = λ^cb∗    growth-rate
                          λ^cb = λ_1^g   λ^cb = 0                       equivalents

France                    5.78           0.00           0.134           13.4
Germany                   16.14          0.00           0.053           5.30
Italy                     1.28           0.00           0.207           20.7
Netherlands               1.28           0.00           0.165           16.5
Sweden                    4.51           0.00           0.034           3.40
New Zealand               8.40           0.00           0.071           7.10
United Kingdom            3.37           0.00           0.057           5.70
United States of America  6.46           0.00           0.248           24.8
Canada                    12.50          0.00           0.118           11.8
Switzerland               4.79           0.00           0.066           6.60

(λ_1^g = 1 throughout.)

It is, perhaps, arguable that the Eurozone has adopted a monetary leadership regime; but the empirical evidence for this is weak, and the fiscal constraints that could have sustained such a leadership were widely ignored.

The second group of countries has adopted explicit, and mostly publicly announced, inflation targets. Central banks in these countries have been granted instrument independence, but not target independence. The government either sets, or helps to set, the inflation target. In each case, the government has adopted long-term (supply-side) fiscal policies, leaving active demand management to monetary policy. These are clear cases in which there is both fiscal leadership and instrument independence for the central bank.

The third group represents a set of more flexible economies, with implicit inflation targeting and a statutory concern for stabilization; also federalism in fiscal policy and independence at the central bank, and therefore no declared leadership in either fiscal or monetary policies. This provides a useful benchmark for the other two groups.

The performance under the different fiscal regimes when the fiscal constraint is a relatively soft deficit rule is reported in Table 3. Column 1 shows the losses under a dependent central bank in welfare units, with or without leadership. Column 2 reflects the losses that would be incurred under


government leadership with an independent central bank that directs monetary policy exclusively towards the achievement of the inflation target (i.e., δ = λ^cb = 0). The third column gives the minimum loss associated with a simultaneous decision-making version of the same game. In each case, the deficit rule is soft for the Eurozone countries (λ_2^g = 0.25); of medium strength for the United States, Canada, and Switzerland (λ_2^g = 0.5); and harder for Sweden, the United Kingdom, and New Zealand (λ_2^g = 1). The deficit ratio targets are 0% in all countries (a balanced budget in the medium term), but the propensity to spend, s, out of any remaining deficit or any endogenous budget improvements is one.

Evidently, complete dependence in monetary policy is extremely unfavorable, although the magnitude of the loss varies considerably from country to country. The losses in Column 3 (Table 3) appear to be relatively small in comparison. However, when these figures are converted into "growth-rate equivalents", I find these losses to be significant. A growth-rate equivalent is the loss in output growth that would produce the same welfare loss, if all other variables remain fixed at their optimized values.25

The figures in Column 4 are more interesting. They show that the losses associated with simultaneous decision-making are equivalent to permanent reductions of up to 20% in the level of national income. That is, France, Italy, and the Netherlands could have expected to grow about 15%–20% more, had they adopted fiscal leadership with deficit targets; and Sweden, the United Kingdom, and New Zealand around 3%–5% less, had they not done so. Similarly, fiscal leadership with debt targeting would be worth 10%–20% extra GDP in the United States and Canada. These are significant gains, and significantly larger than could be expected from international policy co-ordination itself (Currie et al., 1989).

Thus, the immediate effect of introducing a stability pact deficit rule has been to increase the costs of an unco-ordinated (simultaneous moves) regime dramatically. This reveals the weakness of the deficit rule as a

25 Currie et al. (1989). To obtain these figures, I compute the marginal rates of transformation around each government's indifference curve to find the change in output growth, dy_t, that would yield the welfare losses in Column 3. Formally, using Eq. (9),

dy_t = dEL_t^g / [λ_2^g{(b − θ)y_t − τ_t}(b − θ) − λ_1^g].

The minimum value of dy_t is, therefore, attained when tax revenues τ_t grow at the same rate as the re-payment target (b − θ)y_t. These are the losses reported in Column 4.
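Footnote 25's conversion can be sketched as follows; the parameter values and the (y_t, τ_t) points are illustrative, not the ones behind Table 3:

```python
# The growth-rate-equivalent conversion in footnote 25: the change in output
# growth dy that produces a given welfare change dEL, from the gradient of
# Eq. (9). Parameter values are illustrative.

def dy(dEL, y, tau, lam1=1.0, lam2=0.25, b=0.4, theta=0.0):
    return dEL / (lam2 * ((b - theta) * y - tau) * (b - theta) - lam1)

# When tau tracks the re-payment target (b - theta)*y, the fiscal bracket
# vanishes and dy = -dEL/lam1:
assert abs(dy(0.134, y=1.0, tau=0.4) + 0.134) < 1e-12
# With a positive fiscal bracket, the same welfare loss maps into a larger
# growth-rate change in absolute value:
assert abs(dy(0.134, 1.0, 0.0)) > abs(dy(0.134, 1.0, 0.4))
```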


restraint mechanism. Because it is a flow and not a stock, it has no inherent persistence. Moreover, a larger part (the current automatic stabilizers, social expenditures, and tax revenues) will be endogenous, varying with the level of economic activity and with the pressures of the electoral cycle. It is, therefore, much harder to use as a commitment device with any credibility in the long term. If we assume that it can be pre-committed, as in the leadership scenario, then we get good results of course. But if this assumption is likely to be violated (or challenged), then the fact that the results are so much worse than in the simultaneous moves case shows how difficult it is to pre-commit deficits in advance. Being only a small component in a moving total, it does not have the persistence of a cumulated debt criterion, because violations are not carried forward. So, when policy conflicts arise, the deficit either fails its target or it has to be restrained to such a degree that it starts to do serious damage to overall economic performance.

7. Conclusions<br />

I conclude this chapter with the following observations:<br />

(a) Fiscal leadership leads to improved outcomes because it implies a<br />

degree of co-ordination and reduced conflicts between institutions,<br />

without the central bank having to lose its ability to act independently.<br />

This places the outcomes somewhere between the superior (but discretionary)<br />

policies of complete co-operation; and the non-co-operative<br />

(or rule-based) policies of complete independence.<br />

(b) The co-ordination gains come from the self-limiting action of fiscal<br />

policy when fiscal policy is given a long-run focus in the form of a<br />

(soft) debt or deficit rule. Leadership is the crucial element here; these<br />

gains do not appear in a straight-forward competitive regime with the<br />

same debt or deficit rule.<br />

(c) The leadership model predicts improvements in inflation and the fiscal<br />

targets, without any loss in growth or output volatility. In addition,<br />

leadership requires less precision in the setting of the strategic or institutional<br />

parameters. It is easier to implement.<br />

(d) Debt ratios will increase in a policy game without co-ordination, but<br />

do not do so under fiscal leadership. Likewise, deficit rules without<br />

co-ordination to restrain fiscal policy imply larger deficits (and more<br />

expansionary policies) than do debt rules of a similar specification<br />

because deficits (being a flow not a stock, and with less natural persistence)<br />

are less easy to pre-commit credibly. These two results explain


The Advantages of Fiscal Leadership in an Economy 97<br />

why the ECB prefers hard restraints (and why the fiscal policy makers<br />

do not), and suggest that the ECB may actually prefer fiscal leadership<br />

because of the greater degree of pre-commitment implied.<br />

References<br />

Barro, RJ (1981). Output effects of government purchases. Journal of Political Economy 89, 1086–1121.<br />
Barro, RJ and DB Gordon (1983). Rules, discretion, and reputation in a model of monetary policy. Journal of Monetary Economics 12, 101–121.<br />
Basar, T (1989). Time consistency and robustness of equilibria in non-cooperative dynamic games. In Dynamic Policy Games in Economics, F van der Ploeg and A de Zeeuw (eds.), pp. 9–54. Amsterdam: North Holland.<br />
Basar, T and G Olsder (1989). Dynamic Noncooperative Game Theory. In Classics in Applied Mathematics, SIAM series. Philadelphia: SIAM.<br />
Brandsma, A and A Hughes Hallett (1984). Economic conflict and the solution of dynamic games. European Economic Review 26, 13–32.<br />
Buti, M, S Eijffinger and D Franco (2003). Revisiting the stability and growth pact: grand design or internal adjustment? Discussion Paper 3692. London: Centre for Economic Policy Research.<br />
Canzoneri, M (2004). A new view on the transatlantic transmission of fiscal policy and macroeconomic policy coordination. In The Transatlantic Transmission of Fiscal Policy and Macroeconomic Policy Coordination, M Buti (ed.). Cambridge: Cambridge University Press.<br />
Currie, DA, G Holtham and A Hughes Hallett (1989). The theory and practice of international economic policy coordination: does coordination pay? In Macroeconomic Policies in an Interdependent World, R Bryant, D Currie, J Frenkel, P Masson and R Portes (eds.), pp. 125–148. Washington, DC: International Monetary Fund.<br />
Demertzis, M, A Hughes Hallett and N Viegi (2004). An independent central bank faced with elected governments: a political economy conflict. European Journal of Political Economy 20(4), 907–922.<br />
Dieppe, A, K Kuster and P McAdam (2004). Optimal monetary policy rules for the Euro Area: an analysis using the area wide model. Working Paper 360. Frankfurt: European Central Bank.<br />
Dixit, A (2001). Games of monetary and fiscal interactions in the EMU. European Economic Review 45, 589–613.<br />
Dixit, AK and L Lambertini (2003). Interactions of commitment and discretion in monetary and fiscal issues. American Economic Review 93, 1522–1542.<br />
Dow, JCR (1964). The Management of the British Economy 1945–1960. Cambridge: Cambridge University Press.<br />
European Commission (2002). Public finances in EMU: 2002. European Economy: Studies and Reports 4. Brussels: European Commission.<br />
Fatas, A, J von Hagen, A Hughes Hallett, R Strauch and A Sibert (2003). Stability and Growth in Europe. London: Centre for Economic Policy Research (MEI-13).<br />
Gali, J and R Perotti (2003). Fiscal policy and monetary integration in Europe. Economic Policy 37, 533–571.<br />


98 Andrew Hughes Hallet<br />

HM Treasury (HMT) (2003). Fiscal Stabilization and EMU. HM Stationery Office, Cmnd 799373. Norwich; also available from www.hm-treasury.gov.uk.<br />
Hughes Hallett, A (1984). Noncooperative strategies for dynamic policy games and the problem of time inconsistency. Oxford Economic Papers 36, 381–399.<br />
Hughes Hallett, A (2005). In praise of fiscal restraint and debt rules: what the Euro zone might do now. Discussion Paper 5043. London: Centre for Economic Policy Research, 251–279.<br />
Hughes Hallett, A and D Weymark (2002). Government leadership and central bank design. Discussion Paper 3395. London: Centre for Economic Policy Research.<br />
Hughes Hallett, A and N Viegi (2002). Inflation targeting as a coordination device. Open Economies Review 13, 341–362.<br />
Hughes Hallett, A and D Weymark (2004a). Policy games and the optimal design of central banks. In Money Matters, P Minford (ed.). London: Edward Elgar.<br />
Hughes Hallett, A and D Weymark (2004b). Independent monetary policies and social equity. Economics Letters 85, 103–110.<br />
Hughes Hallett, A and D Weymark (2005). Independence before conservatism: transparency, politics and central bank design. German Economic Review 6, 1–25.<br />
Lucas, RE (1972). Expectations and the neutrality of money. Journal of Economic Theory 4, 103–124.<br />
Lucas, RE (1973). Some international evidence on output-inflation trade-offs. American Economic Review 63, 326–334.<br />
McCallum, BT (1989). Monetary Economics: Theory and Policy. New York: Macmillan.<br />
Svensson, L (2003). What is wrong with Taylor rules? Using judgement in monetary rules through targeting rules. Journal of Economic Literature 41, 426–477.<br />
Taylor, JB (1993a). Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation. New York: W.W. Norton and Company.<br />
Taylor, JB (1993b). Discretion vs. policy rules in practice. Carnegie-Rochester Conference Series on Public Policy 39, 195–214.<br />
Taylor, JB (2000). Discretionary fiscal policies. Journal of Economic Perspectives 14, 1–23.<br />
Turrini, A and J in 't Veld (2004). The Impact of the EU Fiscal Framework on Economic Activity. DGII, Brussels: European Commission.<br />

Appendix A: Data Sources for Table 3<br />

The data used in Table 3 are set out in the table below. They come from different<br />

sources and represent “stylized facts” for the corresponding parameters.<br />

The Phillips curve parameter, α, is the inverse of the annualized<br />

sacrifice ratios on quarterly data from 1971–1998. The values for β and γ,<br />

the impact multipliers for fiscal and monetary policy, respectively, are the<br />

one-year simulated multipliers for these policies in Taylor’s multicountry<br />

model (Taylor, 1993a). I have calibrated the remaining parameters using<br />

stylized facts from 1998: the year that EMU started, and the year that new<br />

fiscal and monetary regimes took effect in the United Kingdom and Sweden.<br />

Thus, s is the automatic stabilizer effect on output according to the European<br />

Commission (2002) and HMT (2003) estimates, allowing for larger social



Table 4. Data for the deficit rule calculations.<br />
<br />
                 α      β      γ      s   θ   φ      λ_g2<br />
France         0.294  0.500  0.570    1   0   1.147  0.25<br />
Germany        0.176  0.533  0.430    1   0   1.094  0.25<br />
Italy          0.625  0.433  0.600    1   0   1.271  0.25<br />
Netherlands    0.625  0.489  0.533    1   0   1.306  0.25<br />
Sweden         0.333  0.489  0.533    1   0   1.163  1<br />
New Zealand    0.244  0.400  0.850    1   0   1.098  1<br />
UK             0.385  0.133  0.580    1   0   1.038  1<br />
United States  0.278  0.467  1.150    1   0   1.130  0.5<br />
Canada         0.200  0.400  0.850    1   0   1.080  0.5<br />
Switzerland    0.323  0.489  0.533    1   0   1.158  0.5<br />

security sectors in the Netherlands and Sweden. Similarly, θ is set to give a<br />
debt target somewhat below each country's 1998 debt ratio and the SGP's<br />
60% limit. Likewise, λ_g2 was set to imply a low, high, or medium commitment<br />
to debt/deficit limits, as reflected in actual behavior in 2004. Finally,<br />
I have set λ_g1 = 1 in each country to suggest that governments, if left to<br />
themselves, are equally concerned about inflation and output stabilization.<br />

For Table 4, I suppose that the deficit that remains after growth effects<br />

and new discretionary taxes are accounted for will be spent (s = 1), and<br />

that the medium-term deficit target is zero (θ = 0).<br />

Appendix B: Derivation of the Change in Fiscal Stance<br />

Between Regimes<br />

The expected value of s(by_t − τ_t) under fiscal leadership and simultaneous<br />
moves can be obtained by inserting values first from Eq. (29), and then from<br />
Eqs. (31) and (32), respectively. Taking the expected values, this yields<br />
zero and φλ^cb∗/(α²γ) > 0 in each case. Applying this result to Eq. (4),<br />
it shows that the expected change in output between regimes, which we<br />
know to be zero, is (β − γ)m − βλ^cb∗/(αλ_g1) − γ[s(by − τ)] = 0.<br />
Hence,<br />
m = {γ[s(by − τ)] + βλ^cb∗/(αλ_g1)}/(β − γ),<br />
which, given the change in s(by_t − τ_t), is positive if β > γ. In this case,<br />
the simultaneous move regime is unambiguously more expansionary, and<br />
will run larger deficits, than a regime with fiscal leadership. But, since<br />
γ/(β − γ) > 1, this result may not follow if β < γ.<br />
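A numeric spot-check of this step (a sketch with illustrative parameter values; the variable names mirror the symbols above, and the output condition is taken with the sign −γ[s(by − τ)], which is what makes the displayed solution for m follow):<br />

```python
# Spot-check of the Appendix B solution for m; all parameter values are
# illustrative, with beta > gamma so that m should come out positive.
beta, gamma, alpha, lam_cb, lam_g1, S = 0.5, 0.3, 0.29, 0.8, 1.0, 0.7

# The displayed solution: m = {gamma*S + beta*lam_cb/(alpha*lam_g1)}/(beta - gamma)
m = (gamma * S + beta * lam_cb / (alpha * lam_g1)) / (beta - gamma)

# It satisfies the zero-expected-output-change condition
# (beta - gamma)*m - beta*lam_cb/(alpha*lam_g1) - gamma*S = 0.
residual = (beta - gamma) * m - beta * lam_cb / (alpha * lam_g1) - gamma * S
print(abs(residual) < 1e-12, m > 0)  # True True
```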


Topic 2<br />

Cheap-Talk Multiple Equilibria and Pareto-Improvement<br />
in an Environmental Taxation Game<br />
<br />
Christophe Deissenberg<br />
Université de la Méditerranée and GREQAM, France<br />
<br />
Pavel Ševčík<br />
Université de Montréal, Canada<br />

1. Introduction<br />

The use of taxes as a regulatory instrument in environmental economics is a<br />

classic topic. In a nutshell, the need for regulation usually arises because<br />

producing causes detrimental emissions. Due to the lack of a proper market,<br />

the firms do not internalize the impact of these emissions on the utility of<br />

other agents. Thus, they take their decisions on the basis of prices that do not<br />

reflect the true social costs of their production. Taxes can be used to modify<br />

the prices confronting the firms, so that the socially desirable decisions are<br />

taken. The problem has been exhaustively investigated in static settings,<br />

where there is no room for strategic interaction between the regulator and<br />

the firms.<br />

Consider, however, the following situations:<br />

(1) The emission taxes have a dual effect: they incite the firms to reduce<br />

production and to undertake investments in abatement technology. This<br />

is typically the case, when the emissions are increasing in the output<br />

and decreasing in the abatement technology;<br />

(2) Emission reduction is socially desirable. The reduction of production<br />
is not; and<br />

(3) The investments are irreversible. In this case, the regulator must find<br />

an optimal compromise between implementing high taxes to motivate<br />

high investments, and keeping the taxes low to encourage production.<br />




The fact that the investments are irreversible introduces a strategic<br />

element in the problem. If the firms are naïve and believe its announcements,<br />
the regulator can ensure high production and substantial investments<br />
by first declaring high taxes and reducing them once the corresponding<br />

investments have been realized. More sophisticated firms, however, recognize<br />

that the initially high taxes will not be implemented, and are reluctant<br />

to invest in the first place. In other words, one is confronted with a typical<br />

time inconsistency problem, which has been extensively treated in the monetary<br />

policy literature following Kydland and Prescott (1977) and Barro and<br />

Gordon (1983). In environmental economics, the time inconsistency problem<br />

has as yet received only limited attention, although it frequently occurs.<br />

See, among others, Gersbach and Glazer (1999) for a number of examples<br />
and for an interesting model; see also Abrego and Perroni (1999); Batabyal<br />
(1996a;b); Dijkstra (2002); Marsiliani and Renstrom (2000) and Petrakis<br />
and Xepapadeas (2003).<br />

The time inconsistency is directly related to the fact that the situation<br />

described above, defines a Stackelberg game between the regulator (the<br />

leader) and the firms (the followers). As noted in the seminal work of<br />

Simaan and Cruz (1973a;b), inconsistency arises because the Stackelberg<br />

equilibrium is not defined by mutual best responses. It implies that the<br />

follower uses a best response in reaction to the leader’s action, but not that<br />

the leader’s actions is itself a best response to the follower’s. This opens<br />

the door to a re-optimization by the leader once the follower has played.<br />

Thus, a regulator who announces that it will implement the Stackelberg<br />
solution is not credible. The usual conclusion is that, in the absence of<br />

additional mechanisms, the economy is doomed to converge towards the<br />

less desirable Nash solution.<br />

A number of options to ensure credible solutions have been considered<br />

in the literature — credible binding commitments by the Stackelberg<br />

leader, reputation building, use of trigger strategies by the followers, etc.<br />

(see McCallum (1997) for a review in a monetary policy context). Schematically,<br />
these solutions aim at assuring the time consistency of the Stackelberg<br />

solution with either the regulator or the firms as a leader. Usually, these<br />

solutions are not efficient and can be Pareto-improved.<br />

In Dawid et al. (2005), we demonstrated the usefulness of a new solution<br />

to the time inconsistency problem in environmental policy. We showed<br />

that tax announcements that are not respected can increase the payoff not<br />

only of the regulator, but also of all firms, if these include any number<br />

of naïve believers who take the announcements at face value. Moreover,<br />

if firms tend to adopt the behavior of the most successful ones, a unique



stable equilibrium may exist where a positive fraction of firms are believers.<br />

This equilibrium Pareto-dominates the one where all firms anticipate<br />

perfectly the Regulator’s action. To attain the superior equilibrium, the<br />

regulator builds reputation and leadership by making announcements and<br />

implementing taxes in a way that generates good results for the believers,<br />

rather than by pre-committing to his announcements.<br />

The potential usefulness of employing misleading announcements to<br />

Pareto-improve upon standard game-theoretic equilibrium solutions was<br />

suggested for the case of general linear-quadratic dynamic games in Vallee<br />

et al. (1999) and developed by the same authors in subsequent papers.<br />

An early application to environmental economics is Vallee (1998). The<br />

believers/non-believers dichotomy was introduced by Deissenberg and<br />

Gonzalez (2002), who study the credibility problem in monetary economics<br />

in a discrete-time framework with reinforcement learning. A similar monetary<br />

policy problem has been investigated by Dawid and Deissenberg (2005)<br />

in a continuous-time setting akin to the one used in the present work.<br />

In this chapter, we re-visit the model proposed by Dawid et al. (2005)<br />

and show that a minor (and plausible) modification of the regulator’s objective<br />

function opens the door to the existence of multiple stable equilibria<br />

with distinct basins of attraction — with important interpretations and<br />

potential consequences for the practical conduct of environmental policy.<br />

This chapter is organized as follows. In Section 2, we present the model<br />

of environmental taxation and introduce the imitation-type dynamics that<br />

determine the evolution of the number of believers in the economy. In<br />

Section 3, we derive and discuss the solution of the sequential Nash game<br />

one obtains by assuming a constant proportion of believers. In Section 4,<br />

we formulate the dynamic problem and investigate its solution numerically.<br />

Finally, in Section 6, we summarize the mechanisms at work in the model<br />

and the main insights.<br />

2. The Model<br />

We consider an economy, consisting of a Regulator R and a continuum of<br />

atomistic, profit-maximizing firms i with an identical production technology.<br />

Time τ is continuous. To keep the notation simple, we do not index<br />

the variables with either i or τ unless it is useful for a better understanding.<br />

In a nutshell, the situation of interest is the following: The regulator<br />

can tax the firms to incite them to reduce their emissions, and to generate<br />
tax income. Taxes, however, have a negative impact on the employment<br />
(and thus, on the labor income) and the profits. Thus, R has to choose the<br />

tax level in order to achieve an optimal compromise among tax revenue,<br />

private sector income, emission reduction, and employment levels. The<br />

following sequence of events (S) occurs in every τ:<br />

— R makes a non-binding announcement t^a ≥ 0 about the tax level t ≥ 0<br />
it will choose.<br />
— Given t^a, the firms form expectations t^e about the true level of the<br />
environmental tax. As will be described in more detail later, there are<br />
two different ways for an individual firm to form its expectations.<br />
— Each firm decides about its level of investment v based on its expectation t^e.<br />
— R chooses the actual level of the tax t ≥ 0. The tax level t is the amount<br />
each firm has to pay per unit of its emissions.<br />
— Each firm produces a quantity x.<br />
— Each firm may revise the way it forms its expectations depending on its<br />
relative profits.<br />

All firms are identical, except for the way they form their expectations t e .<br />

We assume full information: the regulator and the firms perfectly know<br />

both the sequence we just presented and the economic structure<br />
(production and prediction technology, objective functions, ...) described<br />

in the remainder of this section.<br />

2.1. The Firms, Bs and NBs<br />

Each firm produces the same homogenous goods using a linear production<br />

technology: the production of x units of output requires x units of labor and<br />
generates x units of environmentally damaging emissions. The production<br />
costs are given by:<br />
c(x) = wx + c_x x², (1)<br />
where x is the output, w > 0 the fixed wage rate, and c_x > 0 a parameter.<br />
For simplicity's sake, the demand is assumed infinitely elastic at the given<br />
price p̃ > w. Let p := p̃ − w > 0. At each point in time, each firm can spend<br />
an additional amount of money γ in order to reduce its current emissions<br />
x. The investment<br />
γ(v) = v²/2 (2)<br />
is needed in order to reduce the firm's emissions from x to max[x − v, 0].<br />

The investment in one period has no impact on the emissions in future<br />
periods. Rather than expenditures in emission-reducing capital, γ can therefore<br />
be interpreted as the additional costs resulting from a temporary switch<br />
to a cleaner resource — e.g., a switch from coal to natural gas.<br />

The firms attempt to maximize their profits. As we shall see later,<br />
one can assume without loss of generality that they disregard the<br />
future and maximize, in every τ, their current instantaneous profit. However,<br />
the actual tax rate t enters their instantaneous profit function and is unknown<br />
at the time when they choose v. Thus, they need to base their optimal<br />
choice of v on an expectation (or prediction) t^e of t. In reality, a firm<br />
can invest larger or smaller amounts of money in order to make more or less<br />
accurate predictions of the tax the government will actually implement.<br />

In this model, we assume for simplicity that each firm has two (extreme)<br />

options for predicting the implemented tax. It can either suppose that R’s<br />

announcement is truthful, that is, that t will be equal to t a , or it can do costly<br />

research and try to better predict the t that will actually be implemented.<br />

Depending upon the expectation-formation mechanism the firm chooses,<br />

we speak of a believer B or of a non-believer NB:<br />

A believer considers the regulator’s announcement to be truthful<br />

and sets<br />

t^e = t^a. (3)<br />

This does not cost anything.<br />

A non-believer makes a “rational” prediction of t, considering the current<br />

number of Bs and NBs as constant. Making the prediction in any given<br />

period τ costs δ>0.<br />

Details on the derivation of the NBs' prediction will be given later. Note<br />

that a firm can change its choice of expectation-formation mechanism at<br />

any moment in time. That is, a B can become a NB, and vice versa. We<br />

denote with π = π(τ) ∈ [0, 1] the current fraction of Bs in the population.<br />

Thus, 1 − π is the current fraction of NBs.<br />

2.2. The Regulator, R<br />

The regulator’s goal is to maximize with respect to t a ≥ 0 and t ≥ 0, the<br />

inter-temporal social welfare function.<br />

(t a ,t)=<br />

∫ ∞<br />

0<br />

e −ρτ ϕ(t a ,t)dτ



with<br />
φ(t^a, t) = (πx^B + (1 − π)x^NB) + πg^B + (1 − π)g^NB<br />
− (π(x^B − v^B) + (1 − π)(x^NB − v^NB))²<br />
+ t(π(x^B − v^B) + (1 − π)(x^NB − v^NB)), (4)<br />
where x^B, x^NB, v^B, v^NB denote the production, respectively the investment,<br />
chosen by the believers B and the non-believers NB, and where g^B(·)<br />
and g^NB(·) are their instantaneous profits,<br />
g^B = px^B − c_x(x^B)² − t(x^B − v^B) − (1/2)(v^B)²<br />
g^NB = px^NB − c_x(x^NB)² − t(x^NB − v^NB) − (1/2)(v^NB)² − δ. (5)<br />

The strictly positive parameter ρ is a social discount factor.<br />

Thus, the instantaneous welfare φ is the sum of (1) the economy's<br />
output, which is also the level of employment (remember that output and<br />
employment are one-to-one in this economy), (2) the total profits of the Bs<br />
and NBs, and (4) the tax revenue, minus (3) the squared volume of emissions. This<br />
function can be recognized as a reasonable approximation of the economy's<br />
social surplus. The difference between the results of Dawid et al. (2005) and<br />
those of the present article stems directly from the different specification of<br />
the regulator's objective function. In the former article, this function does<br />
not include the firms' profits and is linear in the emissions.<br />

2.3. Dynamics<br />

The firms switch between the two possible expectation-formation mechanisms<br />
(B or NB) according to an imitation-type dynamics; see Dawid (1999);<br />
Hofbauer and Sigmund (1998). Specifically, at each τ the firms meet randomly<br />
two-by-two, each pairing being equiprobable. At every encounter,<br />
the firm with the lower current profit adopts the belief of the other firm with<br />
a probability proportional to the current difference between the individual<br />
profits. Thus, on the average, the firms tend to adopt the type of prediction,<br />
B or NB, which currently leads to the highest profits. The above<br />
mechanism gives rise to the dynamics:<br />

dπ/dτ = π̇ = βπ(1 − π)(g^B − g^NB). (6)<br />
Notice that π̇ reaches its maximum for π = 1/2 (the value of π for<br />
which the probability of encounters between firms with different profits is<br />
maximized), and tends towards 0 for π → 0 and π → 1 (for extreme values<br />
of π, all but very few firms have the same profits). The parameter β ≥ 0,<br />
which captures the probability with which a firm changes its strategy during<br />
one of these meetings, calibrates the speed of the information flow between<br />
Bs and NBs. It can be interpreted as a measure of the firms' willingness to<br />
change strategies, that is, of the flexibility of the firms.<br />
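As an illustration of Eq. (6), a simple Euler integration (a sketch; parameter values are illustrative, and a constant profit gap stands in for g^B − g^NB, which in the full model varies with π) shows π converging to 1 when believers earn more and to 0 when they earn less:<br />

```python
# Euler simulation of the imitation dynamics pi_dot = beta*pi*(1 - pi)*(gB - gNB)
# with a constant profit gap gB - gNB (illustrative simplification).
def simulate_pi(pi0, profit_gap, beta=1.0, dt=0.01, steps=2000):
    pi = pi0
    for _ in range(steps):
        pi += dt * beta * pi * (1.0 - pi) * profit_gap
        pi = min(max(pi, 0.0), 1.0)  # pi is a population share
    return pi

print(round(simulate_pi(0.5, 0.5), 2), round(simulate_pi(0.5, -0.5), 2))  # 1.0 0.0
```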

Equation (6) implies that by choosing the value of (t^a, t) at time τ, the<br />
regulator not only influences the instantaneous social welfare, but also the<br />
future proportion of Bs in the economy. This, in turn, has an impact on<br />
the future social welfare. Hence, although there are no explicit dynamics<br />
for the economic variables v and x, R faces a non-trivial inter-temporal<br />
optimization problem.<br />

On the other hand, since the firms are atomistic, each single producer<br />

is too small to influence the dynamics Eq. (6) of π, the only source of<br />

dynamics in the model. Thus, the single firm does not take into account<br />

any inter-temporal effect and, independently of its true planning horizon,<br />

maximizes its current profit in every τ. This naturally leads us to examine<br />

the (Nash) solution of the game between the regulator and the non-believers<br />

that follows from the sequence (S) when, for an arbitrary, fixed value of π,<br />

(a) R maximizes the integrand φ in Eq. (4) with respect to t^a and t, and (b)<br />
the NBs maximize their profits with respect to v^NB and x^NB. As previously<br />
mentioned, we assume full information. In particular, R and the NBs<br />
are fully aware of the (non-strategic) behavior of the Bs.<br />

The analysis of the static case is also necessary since, as previously<br />

mentioned, it provides the prediction t e of the NBs.<br />

3. The Sequential Nash Game<br />

The game is solved by backwards induction, taking into account the fact<br />

that the firms, being atomistic, rightly assume that their individual actions<br />

do not affect the aggregate values, that is, they consider these values as given. We<br />
consider only symmetric solutions where all Bs, respectively NBs, choose<br />

the same values for v and x.<br />

3.1. Derivation of the Solution<br />

The last decision in the sequence (S) is the choice of the production level x<br />
by the firms. This choice is made after v and t are known. The firms being<br />
price takers, the optimal symmetric production decision is:<br />
x^B = x^NB = x = (p − t)/(2c_x). (7)<br />

Note that the optimal production is the same for the Bs and NBs, as it<br />
exclusively depends upon the realized tax t.<br />
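The first-order condition behind Eq. (7) is easy to spot-check numerically; in this sketch (parameter values are illustrative, with p standing for p̃ − w as in the text), a grid search over the per-firm profit recovers x = (p − t)/(2c_x):<br />

```python
# Grid-search check that x = (p - t)/(2*c_x) maximizes the per-firm profit
# p*x - c_x*x**2 - t*(x - v) - 0.5*v**2 for fixed t and v (illustrative values).
p, c_x, t, v = 2.0, 0.6, 0.5, 0.3

def profit(x):
    return p * x - c_x * x**2 - t * (x - v) - 0.5 * v**2

x_star = (p - t) / (2 * c_x)
x_best = max((i / 1000 for i in range(4001)), key=profit)  # x on [0, 4]
print(x_star, x_best)  # 1.25 1.25
```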

In the previous stage, R chooses t given v^B and v^NB. Maximizing φ with<br />
respect to t, with x given by (7), gives the optimal reaction function:<br />
t(v^B, v^NB) = [p − c_x(2(πv^B + (1 − π)v^NB) + 1)] / (1 + c_x). (8)<br />

When the firms determine their investment v, Eqs. (7) and (8), and t^a<br />
are known, but the actual tax rate t is not. Using Eq. (7) in Eq. (5), one<br />
recognizes that the firms choose v in order to maximize<br />
(p − t^e)²/(4c_x) + t^e v − v²/2. (9)<br />
That is, they set:<br />
v = t^e. (10)<br />
The Bs predict that t will be equal to its announced value, t^e = t^a. Thus,<br />
v^B = t^a. (11)<br />
The NBs know that the regulator will act according to Eq. (8). Thus, they<br />
choose:<br />
v^NB = [p − c_x(2(πv^B + (1 − π)v^NB) + 1)] / (1 + c_x). (12)<br />
Solving this last equation for v^NB gives the equilibrium value:<br />
v^NB = v^NB(t^a) = [p − c_x(1 + 2πt^a)] / (1 + c_x(3 − 2π)). (13)<br />
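A quick consistency check of Eq. (13) (a sketch with illustrative parameter values): the closed form is indeed the fixed point of the reaction function (8) when v^B = t^a:<br />

```python
# Verify that v_NB from Eq. (13) solves v_NB = t(v_B, v_NB) with v_B = t_a.
p, c_x, pi, t_a = 2.0, 0.6, 0.4, 1.0

def t_reaction(v_B, v_NB):  # Eq. (8)
    return (p - c_x * (2 * (pi * v_B + (1 - pi) * v_NB) + 1)) / (1 + c_x)

v_NB = (p - c_x * (1 + 2 * pi * t_a)) / (1 + c_x * (3 - 2 * pi))  # Eq. (13)
print(abs(t_reaction(t_a, v_NB) - v_NB) < 1e-12)  # True
```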

The first decision to be taken is the choice of t^a by R. The regulator's<br />
instantaneous objective function φ, using the results from the previous<br />
stages, i.e., Eq. (7) for x, Eq. (11) for v^B and Eq. (13) for v^NB, is concave<br />
with respect to t^a for all π ∈ [0, 1] if<br />
c_x > 1/√3. (14)<br />
We shall assume in the following that this inequality holds. Then, the<br />
regulator will choose<br />
t^{a$} = (1 + p)/(1 + 3c_x). (15)<br />

Using this last equation in Eq. (8), with v^NB given by Eq. (13), one obtains<br />
for the equilibrium value of t<br />
t^$ = (1 + p)/(1 + 3c_x) − (1 + c_x)/(c_x(3 − 2π) + 1). (16)<br />
Both t^{a$} and t^$ will be non-negative if<br />
p ≥ 3/(3c_x − 2), (17)<br />
which we assume from now on.<br />
<br />
3.2. Some Properties of the Solution<br />

For all c_x > 0 and π ∈ [0, 1], the optimal announcement t^{a$} is a constant,<br />
and the implemented tax t^$ is always less than t^{a$}. It is optimal to announce a high tax in order to<br />
elicit a high investment from the believers, and to implement a low tax to<br />
motivate a high production level. Moreover, t^$ decreases in π: when the<br />
proportion of Bs augments, the announcement t^a becomes a more powerful<br />
instrument, making recourse to t less necessary.<br />
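These properties can be verified numerically; a sketch using the Figure 1 parameter values c_x = 0.6 and p = 2 (which satisfy c_x > 1/√3):<br />

```python
# t_ann is the constant announcement (15); t_imp(pi) the implemented tax (16).
p, c_x = 2.0, 0.6

t_ann = (1 + p) / (1 + 3 * c_x)  # Eq. (15)

def t_imp(pi):  # Eq. (16)
    return t_ann - (1 + c_x) / (c_x * (3 - 2 * pi) + 1)

taxes = [t_imp(k / 10) for k in range(11)]  # pi = 0.0, 0.1, ..., 1.0
decreasing = all(a > b for a, b in zip(taxes, taxes[1:]))
below = all(tax < t_ann for tax in taxes)
print(decreasing, below)  # True True
```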

The difference between the statically optimal (i.e., evaluated at t^$) profits<br />
of the NBs and Bs,<br />
g^{NB$} − g^{B$} = (1 + c_x)²/(2c_x(2π − 3)²) − δ, (18)<br />
is likewise increasing in π. Modulo δ, the profit of the NBs is always higher<br />
than the profit of the Bs whenever π > 0, reflecting the fact that the latter<br />
make a systematic error about the true value of t. The profit of the Bs,<br />
however, can exceed that of the NBs if the prediction costs δ are high<br />
enough.<br />

Since t^{a$} is constant, t^$ decreasing in π, and t^$ ≤ t^{a$}, the<br />
difference t^{a$} − t^$ increases with π. Therefore, the difference Eq. (18)<br />
between the profits of the NBs and Bs is also increasing in the difference<br />
between the announced and the implemented tax, t^{a$} − t^$.<br />



Figure 1. The profits of the Bs (lower curve), of the NBs (middle curve), and the<br />

regulator’s utility (upper curve) as a function of π for δ = 0.00025, c x = 0.6, p = 2.<br />

As one would intuitively expect, the regulator’s instantaneous utility φ<br />

too, increases with π. Most remarkable, however, is the fact that the profits of<br />

both the Bs and NBs are also increasing in π (see Fig. 1 for an illustration).<br />

An intuitive explanation is the following: For π = 0 the outcome is the<br />

Nash solution of a game between the NBs and the regulator. This outcome<br />

is not efficient, leaving room for Pareto-improvement. As π increases, the<br />
problem tends towards the solution of a standard optimization problem with<br />
the regulator as the sole decision-maker. This solution is efficient in the sense<br />
that it does maximize the regulator's objective function under the given<br />
constraints. Since the objectives of the regulator are not extremely different<br />
from the objectives of the firms, moving from the inefficient game-theoretic<br />
solution to the efficient control-theoretic one improves the position<br />
of all actors. Specifically, as π increases, the Bs enjoy the benefits of a lower<br />
t, while the investment remains fixed at t^{a$}. Likewise, the NBs benefit from<br />
the decrease in t. And R also benefits, because a rising number of Bs<br />
are led by R's announcement to invest more than they would otherwise<br />
and, subsequently, to produce more, to make higher profits, and to pollute less.<br />

In dynamic settings, however, the impact of the tax announcements on the π dynamics may incite the regulator to reduce the spread between t and t a , and in particular to choose t a below its statically optimal value.

Cheap-talk Multiple Equilibria and Pareto 111

The regulator indeed prefers a high proportion of Bs to a low one, since it has one more instrument (namely, t a ) to influence the Bs than to regulate the NBs. A high spread

t a − t, however, implies that the profits of the Bs will be low compared<br />

to those of the NBs. This, by Eq. (6), reduces the value of ˙π and leads<br />

over time to a lower proportion of Bs, diminishing the instantaneous utility<br />

of R. Therefore, in the dynamic problem, R will have to find an optimal<br />

compromise between choosing a high value of t a , which allows a low value<br />

of t and insures the regulator a high instantaneous utility, and choosing a<br />

lower one, leading to a more favorable proportion of Bs in the future. We<br />

are going to explore these mechanisms in more detail in the next section.

4. The Dynamic Optimization Problem

4.1. Formulation<br />

Being atomistic, the firms do not recognize that their individual decisions<br />

to choose the B or NB prediction technology influence π over time. Assume<br />

that they consider π as constant. The regulator’s optimization problem is<br />

then:<br />

$$\max_{t^a(\cdot),\,t(\cdot)} \int_0^{\infty} e^{-\rho\tau}\,\phi(\tau)\,d\tau, \qquad 0 \le t^a(\tau),\ t(\tau), \tag{19}$$

such that Eqs. (6), (7), (11), (13), and π(0) = π 0 .

Using

$$g := \frac{p - c_x(1 + 2\pi t^a)}{1 + c_x(3 - 2\pi)}, \qquad h := \frac{p - t}{2c_x}. \tag{20}$$

The Hamiltonian of the above problem is given by

$$H(t^a, t, \pi, \lambda) = h + \pi\Big[h - \frac{(p-t)^2}{4c_x} + t(h - t^a) - \frac{(t^a)^2}{2}\Big] + (1-\pi)\Big[h - \frac{(p-t)^2}{4c_x} + t(h - g) - \frac{1}{2}g^2 - \delta\Big] + t\big[\pi(h - t^a) + (1-\pi)(h - g)\big] - \big[\pi(h - t^a) + (1-\pi)(h - g)\big]^2 + \lambda\,\pi(1-\pi)\beta\Big[t t^a - \frac{(t^a)^2}{2} - t h + \frac{1}{2}h^2 + \delta\Big],$$

where λ is the co-state.



The first-order maximization conditions with respect to t a and t yield the dynamically optimal values*

$$t^{a*} = \frac{(p - c_x) + [c_x(3 - 2\pi) + 1](1 + c_x)^2}{(1 + 3c_x)[\beta\lambda(1 - \pi) + K]}, \tag{21}$$

$$t^{*} = \frac{(p - c_x) + 2c_x(1 + c_x)\pi[c_x(\beta\lambda(1 + 3c_x)(1 - \pi) + 1)]}{(1 + 3c_x)[\beta\pi(1 - \pi) + K]},$$

with

$$K = 1 - 6c_x^3\beta^2\lambda^2(1 - \pi)^2\pi + 2c_x[2 + 2\beta\lambda(1 - \pi)\pi] + c_x^2\big[3 - 2\pi + \beta\lambda(1 - \pi)(3 - 2\beta\lambda(1 - \pi)\pi)\big]. \tag{22}$$

According to Pontryagin’s maximum principle, an optimal solution (π(τ), λ(τ)) has to satisfy the state dynamics (6), evaluated at (t a∗ , t ∗ ), plus

$$\dot{\lambda} = \rho\lambda - \frac{\partial H\big(t^{a*}(\pi,\lambda),\, t^{*}(\pi,\lambda),\, \pi,\, \lambda\big)}{\partial \pi}, \tag{23}$$

$$0 = \lim_{\tau \to \infty} e^{-\rho\tau}\lambda(\tau). \tag{24}$$

Due to the highly non-linear structure of the optimization problem, an explicit derivation of the optimal (π(τ), λ(τ)) seems infeasible. Hence, we used Mathematica® to carry out a numerical analysis of the canonical system Eqs. (6)–(23) in order to shed light on the optimal solution.
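For readers without access to Mathematica, the same kind of canonical-system analysis can be sketched in a few lines of code. The right-hand sides `pi_dot` and `lambda_dot` below are illustrative stand-ins, not the paper's Eqs. (6) and (23); the routine locates the steady states by Newton iteration and classifies them from the finite-difference Jacobian.

```python
# Hedged numerical sketch in the spirit of the paper's Mathematica analysis:
# locate the steady states of a 2-D canonical system (state pi, co-state
# lambda) and classify them via the eigenvalues of the Jacobian. The
# right-hand sides below are ILLUSTRATIVE stand-ins, not Eqs. (6) and (23).

def pi_dot(p, l):
    # toy word-of-mouth state equation: the share p grows when l exceeds 1
    return p * (1.0 - p) * (l - 1.0)

def lambda_dot(p, l, rho=0.5):
    # toy co-state equation with discount rate rho
    return rho * l - (1.0 - 2.0 * p)

def jacobian(p, l, h=1e-6):
    """Central finite-difference Jacobian of (pi_dot, lambda_dot) at (p, l)."""
    return [[(pi_dot(p + h, l) - pi_dot(p - h, l)) / (2 * h),
             (pi_dot(p, l + h) - pi_dot(p, l - h)) / (2 * h)],
            [(lambda_dot(p + h, l) - lambda_dot(p - h, l)) / (2 * h),
             (lambda_dot(p, l + h) - lambda_dot(p, l - h)) / (2 * h)]]

def newton(p, l, tol=1e-12, max_iter=50):
    """2-D Newton iteration for a steady state; returns None on failure."""
    for _ in range(max_iter):
        (a, b), (c, d) = jacobian(p, l)
        det = a * d - b * c
        if abs(det) < 1e-14:
            return None
        f, g = pi_dot(p, l), lambda_dot(p, l)
        p -= (f * d - g * b) / det     # Cramer's rule for the Newton step
        l -= (g * a - f * c) / det
        if abs(pi_dot(p, l)) + abs(lambda_dot(p, l)) < tol:
            return (p, l)
    return None

def classify(p, l):
    """Crude phase-portrait label from trace/determinant of the Jacobian."""
    (a, b), (c, d) = jacobian(p, l)
    tr, det = a + d, a * d - b * c
    if det < 0:
        return "saddle"                      # saddle-path stable
    return "spiral" if tr * tr < 4 * det else "node"

# sweep a coarse grid of starting guesses and de-duplicate the results
found = []
for i in range(6):
    for j in range(9):
        root = newton(i / 5.0, -2.0 + 0.5 * j)
        if root and all(abs(root[0] - q[0]) + abs(root[1] - q[1]) > 1e-6
                        for q in found):
            found.append(root)
# this toy system has exactly three steady states:
# (0, 2) and (1, -2) are unstable nodes, (0.25, 1) is a saddle
```

For the toy system, the grid sweep finds three steady states, of which the interior one is saddle-path stable; this mirrors, in miniature, the coexistence of stable and unstable equilibria discussed below.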

4.2. Numerical Results<br />

Depending upon the parameter values, the model can have either two stable equilibria separated by a so-called Skiba point, or a unique stable equilibrium. We first discuss the more interesting case of multiple equilibria, before summarizing the sensitivity analysis and briefly addressing the bifurcations that lead to a unique equilibrium. Unless otherwise specified, the results presented pertain to the reference parameter configuration c x = 0.6, p = 2, δ = 0.00025 that underlies Fig. 1, together with β = 1 and ρ = 0.0015. These results are robust with respect to sufficiently small parameter variations: the same qualitative insights would be obtained for a compact set of alternative parameter configurations.

From the outset, let us stress the fundamental mechanism that underlies most of the results. Ceteris paribus, R would like to face as many Bs as possible, since dφ/dπ > 0. If the prediction costs δ are sufficiently high, the profit difference g NB$ − g B$ , see Eq. (18), will be negative at the current



value of π. In this case, by Eq. (6), implementing the statically optimal<br />

actions t a$ , t $ suffices (albeit not optimally so) to insure that π increases<br />

locally. In fact, for δ sufficiently large, the thus controlled system will<br />

globally converge to a situation where there are only believers, π = 1.<br />

However, if δ is small, the regulator cannot implement the actions t a$ , t $ , since in that case g NB$ > g B$ and π would decrease.



Figure 2. Phase diagram of the canonical system, c x = 0.6, p = 2, δ = 0.00025,<br />

β = 1, and ρ = 0.0015.<br />


Table 1. Profits, welfare, and taxes at E U and E L ; c x = 0.6, p = 2, δ = 0.00025, β = 1, ρ = 0.0015.

          g       φ       t a     t
E U       1       2.25    1.07    0.12
E L       0.98    2.18    N.A.    0.62

The Pareto-superiority of E U is illustrated in Table 1 (remember that at the equilibrium g B = g NB ).

The situation captured in Fig. 2 (an unstable equilibrium surrounded by two stable ones) implies the existence of a threshold such that it is optimal for R to follow a policy leading in the long run towards the stable equilibrium E L = (0, λ L ) whenever the initial value π 0 of π is inferior to the threshold value, π 0 < π s .



The reason why it is optimal to settle for E L whenever π 0 < π s lies in the following fact: the “deviation costs” that the regulator would have to accept initially in order to steer the system towards the more favorable E U are so high that the net benefit of going to E U would be less than the one obtained by letting the system optimally converge towards the inferior E L .

The above results suggest that the policy of an environmental regulator may be very different depending on the fraction of the population that trusts it when it initiates a new policy. If the initial confidence is high, the costs of building a large fraction of Bs are compensated in the long run by accrued benefits. However, if the regulator’s trustworthiness is low from the outset, its interest is to exploit its initial credibility for making short-term gains,

although this ultimately leads to an inferior situation where nobody trusts<br />

the regulator.<br />

As already mentioned, the dynamics of the canonical system Eqs. (6)–<br />

(23) at the unstable middle equilibrium E M form a spiral. Therefore, the<br />

threshold value π s is typically close to, but distinct from, π M (i.e., π M ≠ π s )

and the threshold takes the particularly challenging form of a Skiba point.<br />

No local analytical expression exists that characterizes a Skiba point. Thus,<br />

Skiba points must be computed numerically. This was not done in this paper.<br />

For a reference article on thresholds and Skiba points in economic models,<br />

see Deissenberg et al. (2004).<br />
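Although no closed-form characterization exists, a Skiba point can be bracketed numerically by comparing the discounted payoffs of the two candidate policies as the initial share of believers varies. The sketch below bisects on the difference of two value functions; `V_up` and `V_low` are illustrative closed forms (assumptions for this example), whereas in the model they would be obtained by integrating the canonical system along the saddle paths towards E U and E L.

```python
# Hedged sketch: a Skiba point is the initial state pi0 at which the value of
# steering towards E_U first overtakes the value of settling for E_L. The two
# value functions below are ILLUSTRATIVE closed forms, not model output.

def V_up(pi0):
    # payoff of building believers: steep in pi0, costly for small pi0
    return 2.0 * pi0 - 0.6

def V_low(pi0):
    # payoff of running down credibility: flat in pi0
    return 0.5 * pi0

def skiba_threshold(lo=0.0, hi=1.0, tol=1e-10):
    """Bisect on V_up - V_low, assuming a single crossing in (lo, hi)."""
    diff = lambda p: V_up(p) - V_low(p)
    assert diff(lo) < 0 < diff(hi), "the two policies must swap ranking"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(mid) < 0:
            lo = mid          # E_L still dominates below the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For these stand-in functions the crossing lies at π s = 0.4; with model-generated value functions the same bisection would deliver the threshold discussed above.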

5. Sensitivity Analysis and Bifurcations Towards<br />

a Unique Equilibrium<br />

Numerical analyses show that an increase in the firms’ flexibility (i.e., an<br />

increase in β) shifts the stable equilibrium E U to the right and the unstable



equilibrium E M as well as the Skiba point π s to the left. In other words,<br />

if the firms react quickly to profit differences, the regulator will follow a<br />

policy that leads to the upper equilibrium E U even if the initial proportion<br />

of Bs is small. Indeed, the high speed of reaction of the firms means that the<br />

accumulated costs incurred by the regulator on the way to E U will be small<br />

and easily compensated by the gains around and at E U . Moreover, R does<br />

not have to make Bs much better off than NBs to insure a fast reaction. As a<br />

result, for β large, the equilibrium E U is characterized by a large proportion<br />

of Bs and thus insures high stationary gains to all actors.<br />
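The role of β can be illustrated by simulating word-of-mouth dynamics of the logistic form π̇ = βπ(1 − π)Δg, where the profit gap Δg of the Bs over the NBs is held constant (an illustrative simplification, since in the model the gap is endogenous):

```python
# Hedged sketch of the word-of-mouth adjustment pi' = beta * pi * (1 - pi) * dg,
# with the profit gap dg of the Bs over the NBs held constant (an illustrative
# assumption; in the model the gap is endogenous). Higher flexibility beta
# means faster convergence towards pi = 1.

def time_to_reach(beta, target=0.9, pi0=0.1, dg=1.0, dt=0.01, t_max=1000.0):
    """Euler-integrate the believer share; return the first time pi >= target."""
    pi, t = pi0, 0.0
    while pi < target and t < t_max:
        pi += dt * beta * pi * (1.0 - pi) * dg
        t += dt
    return t

fast = time_to_reach(beta=2.0)   # flexible firms
slow = time_to_reach(beta=0.5)   # sluggish firms
# convergence time scales roughly like 1/beta
```

Quadrupling β cuts the convergence time roughly by a factor of four, which is the mechanism behind the smaller accumulated deviation costs on the way to E U.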

Not surprisingly, an increase of the discount rate ρ has the opposite effect. Impatient regulators want to build up confidence only if the initial

proportion of Bs is high. The resulting equilibrium value π U will be relatively<br />

low, since the time and efforts needed now for an additional increase<br />

in the stock of Bs weighs heavily compared to the expected future benefits.<br />

An increase of the prediction cost δ, finally, shifts E M to the left and E U to the right: it is easier for R to incite the firms to use the B strategy, favoring convergence to the equilibrium with a large number of believers.

The scenario with two stable equilibria separated by a Skiba point can<br />

change qualitatively, when the parameter values are sufficiently altered.<br />

For small values of β and/or δ and/or for large values of ρ, the equilibrium<br />

E U collides with E M and disappears. Only one (stable) equilibrium<br />

remains, E L . Thus, in the long-run, the proportion of believers tends to zero.<br />

On the other hand, for large values of δ, trying to outguess the regulator is

never optimal. The only remaining equilibrium is the one where everybody<br />

is a believer.<br />

6. Conclusion<br />

The starting point of this paper is a static situation, where standard rational<br />

behavior by all agents leads to a Pareto-inferior outcome, although<br />

there is no fundamental conflict of interest either among the different firms<br />

or between the firms and the regulator, and although the firms are perfectly<br />

informed and perfectly predict the regulator’s actions. In addition<br />

to the environmental problem studied here, such a situation is common for<br />

instance in monetary economics, in disaster relief, patent protection, capital<br />

levies, or default on debt.<br />

The present paper and previous work strongly suggest that, in such a<br />

situation, the existence of believers who take the announcements of a macroeconomic regulator at face value typically Pareto-improves the economic



outcome. This property hinges crucially on the fact that the private sector<br />

is atomistic and, thus, that the single agents do not anticipate the collective

impact of their individual decisions. Moreover, it requires a relatively<br />

benevolent regulator, whose objective function does not differ too drastically<br />

from the one of the private agents.<br />

To introduce dynamics, we assumed that the proportion of believers<br />

and non-believers in the economy changes over time as a function of the<br />

difference in profits for the two types of firms. The change obeys a word-of-mouth process driven by the relative success of the one or the other private

strategy, to believe or to predict. It is supposed that the regulator recognizes<br />

its ability to influence the word-of-mouth dynamics by an appropriate

choice of actions and is interested not only in its instantaneous but also<br />

in its future losses. It is shown that, when the firms react fast enough to the<br />

difference in profits of Bs and NBs, a sufficiently patient regulator may use<br />

incorrect announcements to Pareto-improve the economic outcome compared<br />

to the situation where all firms are non-believers that perfectly predict<br />

the regulator’s actions. A particularly interesting case arises when a stable

Pareto-superior equilibrium with a positive fraction of believers co-exists<br />

with the (likewise stable) perfect prediction but inferior equilibrium where<br />

all firms are non-believers.<br />

In this case, the optimal policy of the regulator (to converge towards<br />

the one or the other equilibrium) depends upon the initial proportion of<br />

believers in the population. This history dependence of the solution suggests a promising meta-analysis of the importance of a good reputation, built on previous good governance, in situations where the same macro-economic policy-maker is confronted with recurrent, distinct decision-making issues.

The results of this paper are obtained under the assumption that the non-believers do not exploit the fact that the regulator’s announced tax differs from the equilibrium announcement of the static game. Likewise, in the model, the non-believers do not attempt to predict the dynamics of the number of believers. We leave to future research the non-trivial task of exploring the consequences of relaxing these two restrictive hypotheses.

References<br />

Abrego, L and C Perroni (1999). Investment subsidies and time-consistent environmental policy. Discussion Paper, Coventry: University of Warwick, 1–35.
Barro, R and D Gordon (1983). Rules, discretion and reputation in a model of monetary policy. Journal of Monetary Economics 12, 101–122.
Batabyal, A (1996a). Consistency and optimality in a dynamic game of pollution control I: competition. Environmental and Resource Economics 8, 205–220.
Batabyal, A (1996b). Consistency and optimality in a dynamic game of pollution control II: monopoly. Environmental and Resource Economics 8, 315–330.
Dawid, H (1999). On the dynamics of word of mouth learning with and without anticipations. Annals of Operations Research 89, 273–295.
Dawid, H and C Deissenberg (2005). On the efficiency-effects of private (dis-)trust in the government. Journal of Economic Behavior and Organization 57, 530–550.
Dawid, H, C Deissenberg and P Ševčík (2005). Cheap talk, gullibility and welfare in an environmental taxation game. In Dynamic Games: Theory and Applications, A Haurie and G Zaccour (eds.), Amsterdam: Kluwer, 175–192.
Deissenberg, C and F Alvarez Gonzalez (2002). Pareto-improving cheating in a monetary policy game. Journal of Economic Dynamics and Control 26, 1457–1479.
Deissenberg, C, G Feichtinger, W Semmler and F Wirl (2004). Multiple equilibria, history dependence, and global dynamics in intertemporal optimization models. In Economic Complexity: Non-Linear Dynamics, Multi-agents Economies, and Learning, W Barnett, C Deissenberg and G Feichtinger (eds.), ISETE, Vol. 14, Amsterdam: Elsevier, 91–122.
Dijkstra, B (2002). Time consistency and investment incentives in environmental policy. Discussion Paper 02/12. U.K.: School of Economics, University of Nottingham, 2–12.
Gersbach, H and A Glazer (1999). Markets and regulatory hold-up problems. Journal of Environmental Economics and Management 37, 151–164.
Hofbauer, J and K Sigmund (1998). Evolutionary Games and Population Dynamics. Cambridge: Cambridge University Press.
Kydland, F and E Prescott (1977). Rules rather than discretion: the inconsistency of optimal plans. Journal of Political Economy 85, 473–491.
Marsiliani, L and T Renstrom (2000). Time inconsistency in environmental policy: tax earmarking as a commitment solution. Economic Journal 110, C123–C138.
McCallum, B (1997). Critical issues concerning central bank independence. Journal of Monetary Economics 39, 99–112.
Petrakis, E and A Xepapadeas (2003). Location decisions of a polluting firm and the time consistency of environmental policy. Resource and Energy Economics 25, 197–214.
Simaan, M and JB Cruz (1973a). On the Stackelberg strategy in nonzero-sum games. Journal of Optimization Theory and Applications 11, 533–555.
Simaan, M and JB Cruz (1973b). Additional aspects of the Stackelberg strategy in nonzero-sum games. Journal of Optimization Theory and Applications 11, 613–626.
Vallee, T (1998). Comparison of different Stackelberg solutions in a deterministic dynamic pollution control problem. Discussion Paper, LEN-C3E. France: University of Nantes, 1–30.
Vallee, T, C Deissenberg and T Basar (1999). Optimal open loop cheating in dynamic reversed linear-quadratic Stackelberg games. Annals of Operations Research 88, 247–266.


Chapter 4<br />

Modeling Business Organization




Topic 1<br />

Enterprise Modeling and Integration:<br />

Review and New Directions<br />

Nikitas Spiros Koutsoukis<br />

Hellenic Open University, and Democritus University<br />

of Thrace, Greece<br />

1. Introduction<br />

Enterprise modeling (EM), enterprise integration (EI), and enterprise integration modeling (EIM) are concerned with the definition, analysis, re-design, and integration of information, human capital, business processes, data and knowledge processing, software applications, and information systems of the enterprise and its external environment, with the aim of improving organizational performance (Soon et al., 1997).

Enterprise modeling and enterprise integration are still ongoing fields of research, where the backgrounds and interests of contributors are heterogeneous, spanning disciplines such as mechanical manufacturing, business process re-engineering, and software engineering.

However, the following two definitions (Vernadat, 1996) are representative<br />

for current practices:<br />

• Enterprise modeling is concerned with the explicit representation of<br />

knowledge regarding the enterprise in terms of models.

• Enterprise integration consists of breaking down organizational barriers<br />

to improve synergy within the enterprise so that business goals are<br />

achieved in a more productive way.<br />

The main objective of this paper is to review the literature on EM and integration<br />

and to provide a concise description of related topics. The paper also<br />

discusses EM and integration challenges and rationale, as well as current

tools and methodologies for deeper analysis.<br />



122 Nikitas Spiros Koutsoukis<br />

2. Model and Enterprise Terminology<br />

Model and enterprise are the two key terms that appear throughout this<br />

paper. An enterprise can be perceived as “a set of concurrent processes executed<br />

on the enterprise means (resources or functional entities) according<br />

to the enterprise objectives and subject to business or external constraints”<br />

(Vernadat, 1996). A model can be defined as “a structure that a system<br />

can use to simulate or anticipate the behaviour of something else” (Hoyte,<br />

1992). This is a generalization of a definition given by Minsky (1968):<br />

“a model is a useful representation of some subject. It is a (more or less<br />

formal) abstraction of a reality (or universe of discourse) expressed in terms<br />

of some formalism (or language) defined by modeling constructs for the<br />

purpose of the user”.<br />

Instead of attempting to form yet another definition, which may be redundant, we will mention some important features of enterprise models.

• An enterprise model provides a computational representation of the<br />

structure, activities, processes, information, resources, people, behavior,<br />

goals, and constraints of a business, government, or other enterprise.<br />

It can be both descriptive and definitional, spanning what is and what

should be. The role of an enterprise model is to achieve model-driven<br />

enterprise design, analysis, and operation (Fox, 1993).<br />

• Enterprise models are represented either in the minds of human beings or in computers (Christensen et al., 1995).

• All enterprise models are built with a particular purpose in mind. Depending<br />

on the purpose, the model will emphasize different types of information<br />

at different levels of detail.<br />
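As a toy illustration of such a computational representation (processes, activities, and the functional entities able to execute them), the sketch below uses hypothetical class and field names, far simpler than the constructs of frameworks such as CIMOSA or GERAM:

```python
# Minimal illustrative sketch of an enterprise model as a computational
# representation: processes executed by functional entities. All names are
# hypothetical, not taken from any cited framework.
from dataclasses import dataclass, field

@dataclass
class FunctionalEntity:          # a human or technical agent
    name: str
    skills: set = field(default_factory=set)

@dataclass
class Activity:
    name: str
    required_skill: str

@dataclass
class BusinessProcess:
    name: str
    activities: list

@dataclass
class EnterpriseModel:
    processes: list
    entities: list

    def assignable(self, activity):
        """Entities capable of executing an activity."""
        return [e for e in self.entities if activity.required_skill in e.skills]

    def unassigned_activities(self):
        """Activities no current entity can execute (a simple analysis query)."""
        return [a for p in self.processes for a in p.activities
                if not self.assignable(a)]

# a toy instance: one order-fulfilment process, two agents
model = EnterpriseModel(
    processes=[BusinessProcess("order-fulfilment",
                               [Activity("pick", "warehouse"),
                                Activity("invoice", "accounting"),
                                Activity("audit", "compliance")])],
    entities=[FunctionalEntity("clerk", {"accounting"}),
              FunctionalEntity("picker", {"warehouse"})],
)
```

Even this toy model already supports a simple analysis query: which activities no current entity can execute.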

3. Enterprise Modeling<br />

During the last decades, EM has been partly addressed by several<br />

approaches to the same problem of making the enterprise more efficient. All

these approaches have included the construction of symbolic representations<br />

of aspects of the real world, using various notations and techniques that<br />

commonly could be denoted “conceptual modeling” (Solvberg and Kung,<br />

1993). The idea of EM, as an extension of the standard for the exchange<br />

of product model data (STEP) product modeling approach, contributes to<br />

integrating design (which focuses on product information) and construction<br />

(focusing on activity and resource information) (Kemmerer, 1999).


Enterprise Modeling and Integration 123<br />

In the 1980s and during the early 1990s, when database technology was<br />

applied on a larger scale, the need for better application design and development<br />

methods became evident (Martin, 1982; Sager, 1988). Different<br />

modeling methodologies, like the structured analysis and design technique<br />

(SADT), were developed. A narrow focus on a single department or task<br />

during development often caused operational problems across application<br />

boundaries. The need for making more complete models thus arose, integrating

operations across departmental or functional boundaries.<br />

The problem in EM regarding manufacturing integration and control was to ensure timely execution of business processes by the functional entities of the enterprise (i.e., human and technical agents) in order to process enterprise

objects. Processes are made of functional operations and functional operations<br />

are executed by functional entities. Objects flowing through the enterprise<br />

can be information entities (data) as well as material objects (parts,<br />

products, tools, etc.) (Rumbaugh, 1993).<br />

In order to design the flexible information and communication<br />

infrastructure needed, different enterprise models are useful, and the model<br />

may in fact become a part of the integrated enterprise itself. The trend<br />

now is away from the first single-perspective enterprise models, which focused only on product data handling, to more complete

enterprise models covering both informational and human aspects of the<br />

organization (Fox, 1993).<br />

During the 1990s, the question arose of how to integrate different departments in an enterprise, and how to connect the enterprise with its external environment, i.e., suppliers and customers, to improve co-operation and logistics. Conglomerate industries took a more developmental approach, and the research area of EI emerged. Enterprise modeling is clearly a pre-requisite for EI, as all things to be integrated and coordinated need to be modeled to some extent (Petrie, 1992).

Since 1990, major R&D programs for computer-integrated manufacturing<br />

(CIM) have resulted in the following tools/methods or architectures<br />

for EM:<br />

(1) IDEF modeling tools: These are complementary modeling tools developed<br />

by the integrated computer aided manufacturing (ICAM) project,<br />

introducing IDEF0 for functional modeling, IDEF1 for information<br />

modeling, IDEF2 for simulation modeling, IDEF3 for business process<br />

modeling, IDEF4 for object modeling, and IDEF5 for ontology<br />

modeling. IDEF models are mainly used for requirements definitions.<br />

(2) GRAI (graphs with results and activities inter-related) integrated methodology: It is a methodology which uses the GRAI grid and GRAI nets, developed by the GRAI laboratories in France, and includes IDEF0 modeling and group technology to model the decision-making processes of the enterprise. The combination is called the GRAI integrated methodology (GIM).

(3) Computer integrated manufacturing open system architecture<br />

(CIM-OSA): CIMOSA has been developed to assist enterprises in adapting to internal and external environmental changes so as to face global competition and improve themselves regarding product prices, product quality, and product delivery time. It provides four different views of the enterprise: the function view, information view, resource view, and organization view (Beeckman, 1993; Vernadat, 1996).

(4) Purdue enterprise reference architecture (PERA): It is a methodology<br />

developed at Purdue University in 1989 which focuses on separating<br />

the human-based functions of an enterprise from the manufacturing<br />

and information functions (Williams, 1994).<br />

(5) Generalized enterprise reference architecture and methodology<br />

(GERAM): It is a generalization of CIMOSA, GIM, and PERA. The<br />

general concepts, identified and defined in this reference methodology,<br />

consist of methodological guidelines for enterprise engineering (from<br />

PERA and GIM), life-cycle guidelines (from PERA), and model views (e.g., CIMOSA modeling constructs).

A modeling method relies upon a specific purpose defining its finality, i.e., the goal of the modeler. This finality has a direct influence on the definition of the modeling method. We adopt the position that any enterprise is made of a large collection of concurrent business processes, executed by an open set of functional entities (or agents) to achieve business objectives (as set by management). Enterprise modeling and integration is essentially a matter of modeling and integrating these processes and agents (Kosanke and Nell, 1997; Ladet and Vernadat, 1995).

The prime goal of an EM approach is to support analysis of an enterprise<br />

rather than to model the entire enterprise, even though this is theoretically<br />

possible. In addition, another goal is to model relevant business processes and

enterprise objects concerned with business integration.<br />

According to Vernadat (1996), the aim of EM is to provide:<br />

• a better perspective of the enterprise structure and operations;<br />

• reference methods for enterprise engineering of existing or new parts of<br />

the enterprise, both in terms of analysis, simulation, and decision-making; and
• a model with which to manage the enterprise operations efficiently.


The main motivations for EM, on the other hand, are:


• better understanding of the structure and functions of the enterprise;<br />

• managing complexity of operations;<br />

• creating enterprise knowledge and know-how for all;<br />

• efficient management of processes;<br />

• enterprise engineering and<br />

• EI itself.<br />

4. Enterprise Integration<br />

Enterprise integration has been discussed since the early days of computers in industry, especially in the manufacturing industry, with CIM as the acronym for operations integration (Vernadat, 1996). In spite of the different understandings of the scope of integration in CIM, it was always understood as information integration across the whole enterprise or at least parts of it.

integration essentially consists of providing the right information,<br />

to the right people, at the right place, at the right time (Norrie et al., 1995).<br />

Information that is available to the right people, at the right place, at the<br />

right time, and that enables them to act quickly and decisively, is a crucial<br />

requirement of enterprises that intend to grow and remain profitable. The<br />

intelligent delivery of information must attain maturity levels consistent

with enterprise goals and must employ the appropriate technology to facilitate<br />

this purpose. In today’s enterprise environment, users must consolidate<br />

data and deliver it as useful information and knowledge that empowers the<br />

right business decisions.<br />

With this understanding, the different needs in EI can be identified:

(1) Identify the right information to gain knowledge: Elicit knowledge by gathering information from the intra-enterprise activities, and use models

to represent this knowledge regarding product attributes and information,<br />

process management issues, and the decision-making process.<br />

(2) Provide the right information at the right place: The need for implementing

information sharing systems and integration platforms to support<br />

information transaction across heterogeneous environments regarding<br />

hardware, different operating systems, and the legacy systems. This<br />

is a step towards inter-enterprise integration, overcoming barriers and providing connections between enterprise environments by linking their operations to create extended and virtual enterprises.


126 Nikitas Spiros Koutsoukis<br />

(3) Update the information in real time to reflect the actual state of the enterprise operation: Enterprises must be able to adapt to changes in both their external environment (i.e., technology, legislation) and their internal environment (i.e., production operations) in order to keep the enterprise goals and objectives from becoming obsolete.

(4) Co-ordinate business processes: This requires decisional capabilities<br />

and know-how within the enterprise for real-time decision support<br />

and evaluation of operational alternatives based on business process<br />

modeling.<br />
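The second of these needs, providing the right information at the right place, is typically supported by messaging middleware that decouples heterogeneous producers and consumers. The sketch below is a minimal publish/subscribe bus; topic names and payloads are purely illustrative:

```python
# Minimal publish/subscribe sketch of an integration bus: producers publish
# enterprise events by topic, and heterogeneous consumers receive only the
# topics they registered for. Topic names and payloads are illustrative.
from collections import defaultdict

class IntegrationBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for one topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for cb in self._subscribers[topic]:
            cb(message)

bus = IntegrationBus()
received = []

# an ERP application and a shop-floor system consume different topics
bus.subscribe("order.created", lambda m: received.append(("erp", m)))
bus.subscribe("machine.fault", lambda m: received.append(("maintenance", m)))

bus.publish("order.created", {"id": 42})
bus.publish("machine.fault", {"cell": "A3"})
```

Real integrating infrastructures add persistence, transactions, and format translation on top of such a routing core.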

The aims of EI are:<br />

• to enable communication among the various functional entities of the<br />

enterprise;<br />

• to provide interoperability of IT applications (plug-and-play approach)<br />

and<br />

• to facilitate coordination of functional entities for executing business<br />

processes so that they synergistically fulfill enterprise goals.<br />

The main motivation for EI is to make possible true interoperability among heterogeneous, remotely located systems within or outside the enterprise, in a vendor-independent IT environment.

Enterprise integration, especially for networked companies and large manufacturing enterprises, is becoming a reality. The issue is that EI is both an organizational and a technological matter. The organizational matter is only partly understood so far, whereas the technological matter builds on the major advances of the last decade. It concerns mostly computer communications technology, data exchange formats, distributed databases, object technology, the Internet, object request brokers (ORBs, such as OMG/CORBA), distributed computing environments (such as OSF/DCE and MS DCOM), and now the Java 2 Enterprise Edition (J2EE) execution environment.

Some important projects developing integrating infrastructure (IIS)<br />

technology for manufacturing environments include:<br />

• CIMOSA IIS: The CIMOSA integrated infrastructure (IIS) enables<br />

business process control in a heterogeneous IT environment. In IIS, an<br />

enterprise is composed of functional entities, which are connected by a<br />

communication system. A functional entity can be a machine, a software<br />

application, or a human (Vernadat, 2002).<br />

• AIT IP: This is an integration platform developed as an AIT project and<br />

based on the CIMOSA IIS concepts (Waite, 1998).


Enterprise Modeling and Integration 127<br />

• OPAL: This is also an AIT project that has proved that EI can be achieved<br />

in design and manufacturing environments using existing ICT solutions<br />

(Bueno, 1998).<br />

• NIIIP (National Industrial Information Infrastructure Protocols) (Bolton<br />

et al., 1997).<br />
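The common thread in these infrastructures is the notion of functional entities (machines, software applications, humans) exchanging messages over a communication layer. The following is a minimal, purely illustrative sketch of that idea; the class and method names are hypothetical and do not correspond to any actual CIMOSA, AIT, or NIIIP interface.<br />

```python
class FunctionalEntity:
    """A machine, a software application, or a human role."""
    def __init__(self, name):
        self.name = name
        self.inbox = []  # messages received over the communication system

    def receive(self, sender, message):
        self.inbox.append((sender, message))
        return f"{self.name} acknowledged {message!r} from {sender}"


class CommunicationSystem:
    """Toy message bus connecting heterogeneous functional entities."""
    def __init__(self):
        self.entities = {}

    def register(self, entity):
        self.entities[entity.name] = entity

    def send(self, sender, receiver, message):
        # Delivery is decoupled from the entities' internal technology,
        # which is the essence of the integrating-infrastructure idea.
        return self.entities[receiver].receive(sender, message)


bus = CommunicationSystem()
for name in ("machine", "erp_app", "operator"):
    bus.register(FunctionalEntity(name))

print(bus.send("erp_app", "machine", "start job 42"))
```

The point of the sketch is only that entities interact through the bus, never directly, so any of them can be replaced without touching the others.<br />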

However, EI suffers from the inherent complexity of the field, from a lack<br />

of established standards, which are sometimes set only after the fact, and<br />

from the rapid and unstable growth of the basic technology without a<br />

commonly supported global strategy. The EI modeling research community<br />

is growing at a very rapid pace (Fox and Gruninger, 2003). EIM research<br />

so far has been performed mostly in large research organizations, and a few<br />

pilot studies have also been done in large organizations. Hunt (1991)<br />

suggests that EIM is most beneficial to large industrial and government<br />

organizations. This emphasis of current EIM research on large organizations<br />

is risky, and may even have hampered the growth of the field. Enterprise-wide<br />

modeling is difficult in large organizations for two reasons: there is seldom<br />

any consensus on exactly how the organization functions, and the modeling<br />

task may be so big that by the time any realistic model is built, the underlying<br />

domain may have changed. Whatever form these difficulties take, we have<br />

to seek solutions to industry problems, and this applies to EM and integration.<br />

5. A Framework Setting for EI Modeling<br />

Enterprise integration is a multifaceted task employed for addressing challenges<br />

stemming from various aspects of the globalized economy. The contemporary<br />

business world is fast-changing economically, politically, and technologically,<br />

and most, if not all, enterprises, from small to large,<br />

are continually challenged to evolve as rapidly as the world around them<br />

evolves. Enterprise integration allows an enterprise to function coherently<br />

as a whole and to shift its primary value chain activities in tandem with<br />

its secondary value chain activities according to the challenges or requirements<br />

posed by the enterprise environment or goals set by the enterprise<br />

itself, and most frequently by a combination of both (Giachetti, 2004). In this<br />

context, the value chain’s primary and secondary activities are used simply<br />

as a well-known functional-economic view of the enterprise in order to<br />

communicate coherently, as opposed to a semantically accurate or complete<br />

view of the enterprise (Porter, 1985).


128 Nikitas Spiros Koutsoukis<br />

The multiple facets of EI refer to the multiple dimensions, entities, and<br />

constituents at which integration is primarily aimed, and through which<br />

enterprise integration is achieved.<br />

Enterprise integration dimensions represent infrastructural views, upon<br />

which integration is founded, built and subsequently achieved. Some of the<br />

most commonly used integration dimensions are the following:<br />

• the data and information dimension;<br />

• the processes dimension;<br />

• the economic activity or business dimension and<br />

• the technology or technical dimension.<br />

Because these dimensions can easily be extended throughout the enterprise<br />

regardless of the enterprise functions, they are frequently used as<br />

fundamental reference points for EI efforts (Lankhorst, 2005).<br />

Each of these dimensions has been used successfully for seeking out and<br />

achieving EI. For instance, in the data and information dimension, which<br />

is mainly the realm of corporate information systems, data warehousing<br />

and enterprise resource planning (ERP) have been used successfully to<br />

introduce enterprise-wide, unified views of data, information, and related<br />

analyses. In the processes dimension, business process management (BPM)<br />

and re-engineering (BPR) have been successfully applied in refining old<br />

processes and developing new ones that are more integrated and efficient<br />

across the enterprise. In the economic or business dimension, business<br />

strategy models such as core-competency theory and the balanced scorecard,<br />

or economic value-added models such as activity-based costing (ABC),<br />

are also used successfully to integrate across the enterprise (Kaplan and<br />

Norton, 1992; Prahalad and Hamel, 1990). In the technology or technical<br />

dimension, the key requirement is interoperability of systems, hardware,<br />

and software. The success with which many multinational enterprises<br />

operate shows that certain milestones have been reached in this<br />

direction.<br />

Enterprise entities refer to function-oriented enterprise constructs<br />

that are typically goal-oriented, collectively contributing to achieving<br />

the enterprise mission. Entities form hierarchical views of the enterprise,<br />

and frequently tend to match formal or informal structures as in<br />

most organizational charts. From a functional perspective, entities refer<br />

to key organizational activities or divisions, such as administration, production,<br />

accounting, sales, marketing, procurement, and R&D. As is well<br />

known, entities can also be organized in terms of authority or delegation



layers, communication layers, goal-contribution layers, geographically<br />

based activities and so forth.<br />

The sole purpose of having such function-oriented constructs is to<br />

decompose the enterprise into smaller, more specific, and more manageable<br />

parts. The contribution of each entity to the enterprise objectives is thus<br />

readily identified and narrowed down in scope, horizontally or vertically.<br />

It is only through the use of one or more dimensions that the entities are<br />

re-combined in an enterprise-wide integration model.<br />
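As an illustrative sketch only, this decomposition into entities and their re-combination along a single integration dimension might be represented as follows; the entity names, dimension labels, and data structures are hypothetical rather than part of any framework cited here.<br />

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Entity:
    """A function-oriented enterprise construct (e.g., sales, R&D)."""
    name: str
    # Attributes indexed by integration dimension, e.g.
    # {"data": [...], "process": [...], "technology": [...]}
    views: dict = field(default_factory=dict)


def recombine(entities, dimension):
    """Re-combine decomposed entities into an enterprise-wide view
    along one integration dimension: constructs shared by several
    entities are the integration points."""
    combined = defaultdict(list)
    for e in entities:
        for item in e.views.get(dimension, []):
            combined[item].append(e.name)
    return dict(combined)


sales = Entity("sales", {"data": ["customer", "order"]})
production = Entity("production", {"data": ["order", "inventory"]})

# "order" is seen by both entities, so it is where integration happens.
print(recombine([sales, production], "data"))
```

The same `recombine` call with a different dimension label would assemble a different enterprise-wide view from the same entities, which is the sense in which entities are "re-combined" through dimensions.<br />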

Enterprise constituents or constituent groups refer to the human factor<br />

which is ultimately responsible for setting out, applying and achieving EI.<br />

The most prevalent constituent groups are the following:<br />

• Stakeholders<br />

• Decision-makers<br />

• Employees<br />

which are internal to the enterprise, and<br />

• Suppliers and<br />

• Clients<br />

which are external to the enterprise.<br />

Some of the main pitfalls in EI lie at the constituents’ level. This is<br />

because the constituents are, in one way or another, autonomous or semiautonomous<br />

agents acting primarily in their own interest, which is defined<br />

mainly by their position in terms of enterprise coordinates. We consider the<br />

following fundamental dimensions as the main enterprise coordinates for<br />

positioning the constituents:<br />

• level of authority and<br />

• immediacy of contribution to goals (from strategic to operational, abstract<br />

to specific)<br />

which can also be inverted to<br />

• level of responsibility and<br />

• level of activity (from strategic to operational, or abstract to specific)<br />

The higher the level of authority, the more strategic or abstract are the<br />

actions that the constituents take in enterprise terms. At this level, constituents<br />

have a bird’s-eye view of the enterprise and find it easier to match<br />

organizational outcomes to personal interest i.e., the bird’s-eye view makes<br />

it easier to view the enterprise as a single entity. On the other hand, the



lower the level of authority the more operational or specific the actions<br />

that the constituents take in enterprise terms. At this level, constituents find<br />

it harder to consider a single-entity view of the organization, and hence<br />

organizational outcomes are less easily matched to personal interests.<br />
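As a purely illustrative aside, this two-coordinate positioning can be caricatured in a few lines of code; the numeric scale, the threshold, and the sample positions below are hypothetical and carry no empirical weight.<br />

```python
# Hypothetical enterprise coordinates on a 0-1 scale:
# authority (0 = operational, 1 = strategic) and immediacy of
# contribution to goals (0 = specific, 1 = abstract).
constituents = {
    "stakeholder":    {"authority": 0.9, "immediacy": 0.9},
    "decision_maker": {"authority": 0.6, "immediacy": 0.5},
    "employee":       {"authority": 0.2, "immediacy": 0.1},
}


def view_of_enterprise(name, threshold=0.5):
    """Higher authority implies a bird's-eye (single-entity) view of
    the enterprise; lower authority implies a position-centred,
    operational view."""
    coords = constituents[name]
    return "bird's-eye" if coords["authority"] >= threshold else "position-centred"


for name in constituents:
    print(name, "->", view_of_enterprise(name))
```

The sketch merely encodes the argument of this passage: where a constituent sits in the coordinate space determines how easily organizational outcomes map onto personal interest.<br />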

It follows that inconsistencies in the constituents’ perspectives have the<br />

ability to cancel out some of the main benefits that the other two facets bring<br />

forward, namely the unified view of the enterprise as single entity and the<br />

immediacy with which operational activities contribute to strategic goals.<br />

Inconsistencies in constituents’ perspectives are being addressed mainly<br />

through leadership and human resource practices, which aim to put in place<br />

an organizational culture, or otherwise a unified perception of what the<br />

enterprise stands for, its values and its practices among other things. Mission<br />

statements, codes of practice, and team building exercises are some of the<br />

most frequently used tools for scoping and setting out a unified, constituent<br />

view of the enterprise.<br />

Thus, a strong requirement is surfacing for developing new EI modeling<br />

frameworks, which are able to incorporate heterogeneous modeling constructs,<br />

such as the dimensions, entities or constituents considered above.<br />

We find that some of the key requirements are set out as follows:<br />

(a) Interchangeable dimensions: EI models should be able to shift between<br />

the dimensions considered above, without loss of context.<br />

(b) Use of hybrid modeling constructs: EI models should be able to utilize<br />

interchangeably modeling constructs for representing any type of enterprise<br />

activity unit, including procedural units, physical or logical units,<br />

inputs or outputs, tangible and intangible processes, decision-making<br />

points, etc.<br />

(c) Ability to consolidate or analyze to successive levels of detail: this is<br />

particularly useful for communicating enterprise structures throughout<br />

multiple echelons and hierarchy levels. For instance, stakeholders will<br />

most likely need access to a consolidated view of the integrated enterprise<br />

whereas employees are most likely to require a position-centred<br />

view of the enterprise, with detailed surroundings that are consolidated<br />

further away from the employee’s position.<br />

(d) Ability to interchange between declarative and goal-seeking states of<br />

the integration models: when in declarative mode, models work simply<br />

as formal structures, or simulators of how the integrated enterprise<br />

operates. In goal-seeking mode, the model emphasizes improvements<br />

that can lead to goal attainment or “satisficing,” to use Simon’s term<br />

(Simon, 1957).
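Requirement (d) can be illustrated with a deliberately tiny sketch: the same model is queried declaratively (simulate an outcome) and in goal-seeking mode (search for a configuration whose simulated output satisfices a target). The production function, cost weights, and search bounds here are hypothetical, chosen only to make the two modes concrete.<br />

```python
def throughput(workers, machines):
    """Declarative mode: simulate how the (toy) enterprise operates
    for a given configuration."""
    return min(workers * 5, machines * 8)


def goal_seek(target, max_workers=20, max_machines=20):
    """Goal-seeking mode: exhaustively search for the cheapest
    configuration whose simulated output attains the target.
    Returns (cost, workers, machines) or None if unattainable."""
    best = None
    for w in range(1, max_workers + 1):
        for m in range(1, max_machines + 1):
            if throughput(w, m) >= target:
                cost = w + m  # hypothetical cost weights
                if best is None or cost < best[0]:
                    best = (cost, w, m)
    return best


print(throughput(4, 3))  # declarative: 20 units
print(goal_seek(40))     # goal-seeking: (13, 8, 5)
```

The single model structure serves both states; only the direction of the query changes, which is exactly the interchange the requirement asks for.<br />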



Of course, these requirements can be analyzed further and in detail, but<br />

such a discussion would be beyond the scope of this paper. Still, we also<br />

find that these requirements suffice to set out an agenda for further work<br />

in the domain of EI and EI modeling.<br />

6. Discussion<br />

At the current state of technology, we find that EM and integration are<br />

becoming familiar within large enterprises, which seem to understand the<br />

need to adopt these concepts. However, in accordance with Williams<br />

(1994), we also find that the reasons for the moderate success and rather<br />

limited use of EM and integration initiatives include:<br />

(a) Cost: It is commonly accepted that such efforts are very expensive, and<br />

it is rather difficult to argue for the benefits of the outcome at the<br />

beginning of each project.<br />

(b) Project size and duration: Indeed, such projects cannot be completed<br />

quickly; they require time. Quite a large number of people will be involved,<br />

and high levels of commitment on behalf of the enterprise are required.<br />

(c) Complexity: EI is a never-ending process and parameters such as the<br />

legacy systems within large enterprises resisting integration have to<br />

be seriously considered. There is the need for a global vision with a<br />

modular implementation approach.<br />

(d) Management support: Management facing high costs may be reluctant<br />

to support such projects. It is very important that management be part of the<br />

project and be fully committed to it.<br />

(e) Skilled people: The human factor concerning high-skilled employees<br />

involved in the projects is vital for success. Unfortunately, due to limited<br />

training schemes, experienced users are scarce.<br />

Nevertheless, the enthusiasm of the research community and the industry<br />

remains intact; both try to overcome problems and difficulties by creating<br />

tools and adopting methodologies that will make a difference. Nowadays,<br />

EM and integration mostly rely upon advanced computer networks, the<br />

World Wide Web and the Internet, integration platforms as well as ERPs, and<br />

data exchange formats (Chen and Vernadat, 2002; Martin et al., 2004).<br />

A big issue is that both EM and EI are almost totally ignored by SMEs,<br />

and there is still a long way to go before SMEs will master these techniques<br />

on their own; to do so, they will have to catch up with the technology quickly<br />

and define which tools to use.<br />



Some researchers like Vernadat, Kosanke, Tolone, and Zeigler consider<br />

that this technology would better penetrate and serve any kind of<br />

enterprise if:<br />

• They identify users and a specific EIM problem. An example of an EIM<br />

problem is how to produce high-quality products at low cost.<br />

• They build a prototype EIM that addresses the problem, contributing to<br />

the decision-making process based on a simple initial model accepted by<br />

the majority of the users (Tolone et al., 2002).<br />

• There is a standard vision of what EM really depends on and an<br />

international consensus on the underlying concepts for the benefit of<br />

business users (Kosanke et al., 2000).<br />

• There is a standard, user-oriented interface, in the form of a unified<br />

enterprise modeling language (UEML) based on the previous consensus,<br />

available in all commercial modeling tools.<br />

• There are real EM and simulation tools commercially available in the<br />

market place taking into account function, information, resource, organization,<br />

and financial aspects of an enterprise including human aspects,<br />

exception handling, and process coordination. Simulation tools need to<br />

be configurable, distributed, and agent-based (Vernadat<br />

and Zeigler, 2000).<br />

However, it is rather unlikely that a single EIM will be equally<br />

applicable to all organizations. Each organization will have to develop its<br />

own strategy and philosophy (Fox and Huang, 2005). Despite the different<br />

approaches (frameworks) of EIM, we always have to bear in mind the<br />

following functional requirements of an EIM framework (Patankar and<br />

Adiga, 1995):<br />

(1) Decision orientation. An EI model based on any particular concept of<br />

an EM framework should support the decision-making process by<br />

thoroughly considering all factors of the external and internal environments<br />

affecting the enterprise.<br />

(2) Completeness. EM frameworks should be able to represent all aspects<br />

of the enterprise such as business units, business data, information<br />

systems, and external partners, e.g., suppliers, sub-contractors, and<br />

customers.<br />

(3) Appropriateness. EIM frameworks should represent the enterprise<br />

at appropriate levels by providing models that can be understood<br />

by people with heterogeneous backgrounds.



(4) Permanence. The EI model should remain consistent with the enterprise’s<br />

overall objectives even when the enterprise is in the process of<br />

changing due to management changes or other causes. Models should remain<br />

true and valid within the enterprise.<br />

Finally, the EIM frameworks should be simply structured, flexible,<br />

modular, and generic to ensure that the framework is applicable across different<br />

manufacturing operations and enterprises. In addition, the framework<br />

should relate to both manual and automatic processes. This will provide the<br />

basis for proper and complete integration.<br />

Modeling and integration always mean an investment of effort, time, and cost.<br />

Nevertheless, to handle complexity in business environments we see the<br />

above-mentioned elements moving from “luxury articles” to “everyday<br />

necessities”.<br />

These are some of the challenges to be solved in the future to build extended<br />

interoperable manufacturing enterprises as there is unlikely to be a single<br />

EIM methodology, which will be equally applicable to all organizations<br />

(Li and Williams, 2004). Each organization will have to develop its own<br />

strategy and philosophy. However, EIM promises to be a revolutionary<br />

technology.<br />

Finally, the success of EI efforts lies in the ability to iron out incoherencies<br />

and lack of coordination among the multiple facets considered above.<br />

This is a complex and challenging task, and lies at the heart of EI modeling.<br />

Many of the multiple modeling frameworks considered above, like<br />

IDEF, CIM-OSA, or GERAM for example, have stemmed from integration<br />

efforts in the areas of manufacturing or engineering, and while successful<br />

in their own right, these frameworks are still lagging behind when it comes<br />

to integrating across the multiple dimensions, entities and constituents of a<br />

contemporary enterprise that we have identified above.<br />

7. Epilogue<br />

This paper has provided a description of the main issues in EM and EI.<br />

It has become quite evident from the literature that both issues are complex<br />

and require knowledge of various disciplines, particularly organizational<br />

management, engineering, and computer science. As such, it can be<br />

generalized that EM and integration mainly revolve around organizational<br />

modeling, which supports the analysis, design, implementation, and integration<br />

of organizational management, business processes, software applications,<br />

and physical computer systems within an enterprise.<br />



We believe that although we are rich in technology to solve isolated<br />

business problems (e.g., product scheduling, quality control, production<br />

planning, accounting, marketing, etc.), many other problems still reside<br />

within the enterprise. Solving these problems requires an integrated approach<br />

to problem solving. This integration must start at the representation level<br />

and the entire enterprise together with its structure, functionality, goals,<br />

objectives, constraints, people, processes, and products must be modeled. The<br />

solution will probably be the outcome of an EIM framework.<br />

References<br />

Beeckman, D (1993). CIM-OSA: computer integrated manufacturing open system architecture.<br />

International Journal of Computer Integrated Manufacturing 2(2), 94–105.<br />

Bolton, R, A Dewey, A Goldschmidt and P Horstmann (1997). NIIIP — the national industrial<br />

information infrastructure protocols. In Enterprise Engineering and Integration:<br />

Building International Consensus, K Kosanke and JG Nell (eds.), Berlin: Springer-<br />

Verlag. pp. 293–306.<br />

Bueno, R (1998). Integrated information and process management in manufacturing engineering.<br />

Proc. of the 9th IFAC Symposium on Information Control in Manufacturing<br />

(INCOM’98), pp. 109–112. Nancy-Metz, France, May 24–26.<br />

Chen, D and F Vernadat (2002). Enterprise interoperability: a standardisation view. ICEIMT<br />

2002, 273–282.<br />

Christensen, LC, BW Johansen, N Midjo, J Onarheim, TG Syvertsen and T Totland (1995).<br />

Enterprise Modeling-Practices and Perspectives. In Proc. of 9th ASME Engineering<br />

Database Symposium, Rangan (ed.), Boston, Massachusetts, 1171–1181.<br />

Fox, MS (1993). Issues in Enterprise Modeling. Proc. of the IEEE Conference on Systems.<br />

Man and Cybernetics, Le Touquet, France, 83–98.<br />

Fox, MS and M Gruninger (2003). The logic of enterprise modeling, modeling and methodologies<br />

for enterprise integration. In Modelling and Methodologies of Enterprise<br />

Integration, P Bernus and L Nemes (eds.), Cornwall, Great Britain: Chapman and<br />

Hall, 83–98.<br />

Fox, MS and J Huang (2005). Knowledge provenance in enterprise information.<br />

International Journal of Production Research 43(20), 4471–4492.<br />

Giachetti, RE (2004). A framework to review the information integration of the enterprise.<br />

International Journal of Production Research 42(6), 1147–1166.<br />

Hoyte, J (1992). Digital models in engineering — a study on why and how engineers build<br />

and operate digital models for decision support. PhD Thesis, University of Trondheim,<br />

Norway.<br />

Hunt, DV (1991). The Integration of CALS, CE, TQM, PDES, RAMP, and CIM, Enterprise<br />

Integration Sourcebook. San Diego, CA: Academic Press.<br />

Kaplan, RS and DP Norton (1992). The balanced scorecard: measures that drive<br />

performance. Harvard Business Review 70(1), 71–79.<br />

Kemmerer, J (1999). STEP: The Grand Experience, NIST Special Publication SP939,<br />

Washington, DC 20402, USA: US Government Printing Office.<br />

Kosanke, K and JG Nell (1997). Enterprise Engineering and Integration: Building<br />

International Consensus. Berlin: Springer-Verlag.



Kosanke, K, F Vernadat and M Zelm (2000). Enterprise engineering and integration in the<br />

global environment. Advanced Network Enterprises 2000, Berlin, 61–70.<br />

Ladet, P and FB Vernadat (1995). Integrated Manufacturing Systems Engineering. London:<br />

Chapman & Hall.<br />

Lankhorst, M (2005). Enterprise Architecture at Work. Berlin: Springer-Verlag.<br />

Li, H and TJ Williams (2004). A Vision of Enterprise Integration Considerations. Toronto,<br />

Canada: ICEIMT 2004.<br />

Martin, J (1982). Strategic Data-Planning Methodologies. Englewood Cliffs, NJ:<br />

Prentice-Hall.<br />

Martin, J, E Robertson and J Springer (2004). Architectural Principles for Enterprise Frameworks:<br />

Guidance for Interoperability. Toronto, Canada: ICEIMT 2004.<br />

Minsky, M (1968). Semantic Information Processing. Cambridge, MA: The MIT Press.<br />

Norrie, M, M Wunderli, R Montau, U Leonhardt, W Schaad and HJ Schek (1995).<br />

Coordination approaches for CIM. In Integrated Manufacturing Systems Engineering.<br />

P Ladet and FB Vernadat (eds.), London: Chapman &amp; Hall, pp. 221–235.<br />

Patankar, AK, and S Adiga (1995). Enterprise integration modeling: a review of theory and<br />

practice. Computer Integrated Manufacturing Systems 8(1), 21–34.<br />

Petrie, CJ (1992). Enterprise Integration Modeling. Cambridge, MA: The MIT Press.<br />

Porter, M (1985). Competitive Advantage. New York: Free Press.<br />

Prahalad, CK and G Hamel (1990). The core competence of the organisation. Harvard<br />

Business Review 68(3), 79–91.<br />

Rumbaugh, J (1993). Objects in the constitution — enterprise modeling. Journal on Object<br />

Oriented Programming, January issue, 18–24.<br />

Sager, MT (1988). Data concerned enterprise modeling methodologies — a study of practice<br />

and potential. The Australian Computer Journal 20(3), 59–67.<br />

Simon, H (1957). Models of Man: Social and Rational. New York: Wiley.<br />

Solvberg, A and DC Kung (1993). Information Systems Engineering — An Introduction.<br />

New York: Springer-Verlag.<br />

Soon, HL, J Neal and A Pennington (1997). Enterprise modeling and integration: taxonomy<br />

of seven key aspects. Computers in Industry 34, 339–359.<br />

Tolone, W, B Chu, G.-J Ahn, R Wilhelm and JE Sims (2002). Challenges to Multi-Enterprise<br />

Integration. ICEIMT 2002, 205–216.<br />

Vernadat, FB (1996). Enterprise Modeling and Integration: Principles and Applications.<br />

London: Chapman & Hall.<br />

Vernadat, F (2002). Enterprise modeling and integration. ICEIMT 2002, 25–33.<br />

Vernadat, FB and BP Zeigler (2000). New simulation requirements for model-based Enterprise<br />

Engineering. Proc. of IFAC/IEEE/INRIA Internal Conference on Manufacturing<br />

Control and Production Logistics (MCPL’2000). Grenoble, 4–7 July 2000. CD-ROM.<br />

Waite, EJ (1998). Advanced Information Technology for Design and Manufacture. In Enterprise<br />

Engineering and Integration: Building International Consensus. K Kosanke and<br />

JG Nell (eds.), Berlin: Springer-Verlag, pp. 256–264.<br />

Williams, TJ (1994). The purdue enterprise reference architecture. Computers in Industry,<br />

24(2–3), 141–158.




Topic 2<br />

Toward a Theory of Japanese Organizational<br />

Culture and Corporate Performance<br />

Victoria Miroshnik<br />

University of Glasgow, UK<br />

1. Introduction<br />

Japanese national culture has a specific influence on the organizational<br />

culture (OC) of Japanese companies. In order to understand the aspects<br />

of culture that affect corporate performance, a systematic analysis of<br />

the national culture (NC) and OC is essential. The purpose of this chapter<br />

is to provide a scheme to analyze this issue in a systematic way.<br />

National culture can affect corporate OC (Aoki, 1990; Axel, 1995;<br />

Hofstede, 1980) and corporate performance (CP) (Cameron and Quinn,<br />

1999; Kotter and Heskett, 1992). Multinational corporations with their supposedly<br />

common OC demonstrate different behaviors in different countries,<br />

which can affect their performances (Barlett and Yoshihara, 1988; Calori<br />

and Sarnin, 1991). National culture with its various manifestations can create<br />

unique organizational values for a country, which in conjunction with<br />

the human resources management (HRM) practice may create some unique<br />

OC for a corporation. Organizational culture along with a leadership style,<br />

which is the product of OC, NC, and HRM practices, creates a synergy,<br />

which shapes the fortune of the company’s performances. Thus, NC, organizational<br />

values, OCs, leadership styles, human resources practices, and<br />

CPs are inter-related, and each depends on the others. If a company maintains<br />

this harmony it can perform well, otherwise it declines.<br />

The structure of this chapter is as follows. In Section 2, the relationship<br />

between NCs, OC, and performances, as given in the existing literature, is<br />

analyzed. In Section 3, Japanese culture is analyzed along with the<br />

controversies on this issue. In Section 4, the new approach of the nation-organization-performance<br />

exchange model (NOPX) is explained in detail<br />

and the theoretical propositions underlying this model are analyzed.<br />



138 Victoria Miroshnik<br />

2. Corporate Performance, the Highest Stage of National<br />

and Organizational Culture: A Synthesis<br />

Culture is multidimensional, comprising several layers of inter-related<br />

variables. As society and organizations are continuously evolving, there is<br />

no theory of culture valid at all times and locations. The core values of a<br />

society (macro-values (MAVs)) can be analyzed by asking the following<br />

questions:<br />

(a) What are the relationships between humans and nature? (b) What are<br />

the characteristics of innate human nature? (c) What is the focus regarding<br />

time — whether past, future, or present, or a combination of all these?<br />

(d) What are the modalities of human activity, whether spontaneous or<br />

introspective or result-oriented or a combination of these? (e) What is the<br />

basis of the relationship between one person and another in society?<br />

In the structure of the NC, there are certain micro or individual values,<br />

which include sense of belonging, excitement, fun and enjoyment,<br />

warm relationship with others, self-fulfilment, being well-respected, and a<br />

sense of accomplishment, security, and self-respect (Kahle et al., 1988).<br />

Although Hofstede (1990; 1993) is of the opinion that perceived practices,<br />

rather than values, are the roots of OC, most researchers have accepted<br />

the importance of values in shaping cultures in several organizations.<br />

Combinations of micro-values (MIVs) and macro-values (MAVs) can<br />

give rise to an OC, given certain meso-values (MEV) or organizational<br />

values, which are specific for a country. Meso-values are certain codes of<br />

behavior, which influence the future and expected behavior of the<br />

members of the society if they want to belong to the mainstream. These<br />

vary from one society to another, as the expectations of different societies,<br />

as products of historical experiences, religious, and moral values are<br />

different.<br />

Cultural values are shared ideas about what is good, desirable, and<br />

justified; these are expressed in a variety of ways. Social order, respect for<br />

traditions, security, and wisdom are important for the society, which is like<br />

an extended family.<br />

According to Schein (1997), OC emerges from some common assumptions<br />

about the organization, which the members share as a result of their<br />

experiences in that organization. These are reflected in their pattern of<br />

behaviors, expressed values, and observed artefacts. Organizational culture<br />

provides accepted solutions to known problems, which the members<br />

learn and internalize, and it forms a set of shared philosophies, expectations,


Toward a Theory of Japanese Organizational Culture 139<br />

norms, and behavior patterns, which promote higher levels of achievement<br />

(Kilman et al., 1985; Marcoulides and Heck, 1993; Schein, 1997).<br />

A large and geographically dispersed organization<br />

may have many different cultures. According to Kotter and Heskett<br />

(1992), leaders create certain vision or philosophy and business strategy<br />

for the company. A corporate culture emerges that reflects the vision and<br />

strategy of the leaders and experiences they had while implementing these<br />

strategies. However, the question is whether it is valid for every NC.<br />

A combination of MAVs, MEVs, and MIVs creates a specific OC,<br />

which varies from country to country according to their differences in<br />

NC. Hofstede (1980; 1985; 1990; 1993) showed the relationship between<br />

NC and OC. The Global Leadership and Organizational Behavior Effectiveness<br />

(GLOBE) research project has tried to identify the relationships among<br />

leadership, social culture, and OC (House, 1999).<br />

Cameron and Quinn (1999) have mentioned that the most important<br />

competitive advantage of a company is its OC. If an organization<br />

has a “strong culture” with a “well-integrated and effective” set of values,<br />

beliefs, and behaviors, it normally demonstrates a high level of CP<br />

(Ouchi, 1981).<br />

Denison and Mishra (1995) have attempted to relate OC and performances<br />

based on different characteristics of the OC. However, different<br />

types of OC enhance different types of business (Kotter and Heskett, 1992).<br />

There is no single cultural formula for long-run effectiveness. As Siehl and<br />

Martin (1988) have observed, culture may serve as a filter for factors that<br />

influence the performance of an organization. These factors are different for<br />

different organizations. Thus, a thorough analysis regarding the relationship<br />

between culture and performances is essential.<br />

3. Japanese National and Organizational Culture<br />

Japanese firms are considered to be like a family unit, with long-term orientations<br />

for HRM and with close ties with other like-minded firms,<br />

banks, and the government. The OC promotes loyalty, harmony, hard<br />

work, self-sacrifice, and consensus decision-making. These, along with<br />

lifetime employment and seniority-based promotions, are considered to<br />

be the natural outcome of the Japanese NC (Ikeda, 1987; Imai, 1986;<br />

Ouchi, 1981). Japanese workers’ psychological dependency on companies<br />

emerges from their intimate dependent relationships with the society and the<br />

nation.


140 Victoria Miroshnik<br />

Japanese NC has certain MIVs: demonstration of appropriate attitudes<br />

(taido); the way of thinking (kangaekata); and spirit (ishiki), which form<br />

the basic value system for the Japanese (Kobayashi, 1980). Certain MEVs<br />

can be identified as the core of the cultural life for the Japanese, if they want<br />

to belong to the mainstream Japanese society (Basu, 1999; Fujino, 1998).<br />

These are (a) Senpai-Kohai system; (b) Conformity; (c) Hou-Ren-Sou and<br />

(d) Kaizen or continuous improvements.<br />

Senpai-Kohai or senior-junior relationships are formed from the level<br />

of primary schools where junior students follow the orders of the<br />

senior students, who in turn, may help the juniors in learning. The process<br />

continues throughout the lifetime for the Japanese. In workplaces, Senpais<br />
explain to Kohais how to do their work, the basic code of conduct, and<br />
norms.<br />

From this system emerges the second MEV, Conformity, which is better<br />

understood from the saying that “the nail that sticks out should be beaten<br />

down”. The inner meaning is that unless someone conforms to the rules<br />

of the community or co-workers, he/she would be an outcast. There is no<br />
room for individualism in Japanese society or in the workplace.<br />

The third item, Hou-Ren-Sou, is the basic feature of the Japanese<br />

organizations. Hou-Ren-Sou is a combination of three different words in<br />
Japanese: Houkoku, i.e., to report; Renraku, i.e., to inform; and Soudan, i.e.,<br />
to consult or pre-consult.<br />

Subordinates should always report to the superior. Superiors and subordinates<br />

share information. Consultations and pre-consultations are required;<br />

no one can make his/her decision by himself/herself even within the delegated<br />

authority. There is no space in which the delegation of authority may<br />

function. Combining these words, Houkoku, Renraku, and Soudan, “Hou-<br />

Ren-Sou” is the core value of the culture in Japan. Making suggestions for<br />
improvements without pre-consultation is considered to be offensive<br />
behavior in Japanese culture. Everyone from the clerk to the president, from<br />

the entry day of the working life to the date of retirement, every Japanese<br />

must follow the “Hou-Ren-Sou” value system.<br />

Kaizen i.e., continuous improvement is another MEV that is one of the<br />

basic ingredients of the Japanese culture. The search for continuous improvement<br />
during the Meiji Government of the 19th century led the Japanese to search the<br />

world for knowledge. Japanese companies today are doing the same in terms<br />

of both acquisitions of knowledge by setting up R&D centers throughout<br />

the western world, and by having “quality circles” in work places in order to<br />

implement new knowledge to improve the product quality and to increase<br />

its efficiency (Kujawa, 1979; Kumagai, 1996; Miyajima, 1996).



3.1. Japanese OC<br />

In order to understand the Japanese OC, it is essential to know the Japanese<br />

system of management, which gave rise to the unique Japanese style of OC.<br />

What is written below is the result of several visits to the major Japanese<br />

corporations (like Toyota, Mitsubishi, Honda, and Nissan) and interviews<br />

with several senior executives, including members of the board and vice-presidents<br />

of these companies.<br />

The Japanese system of management is a complete philosophy of organization,<br />

which can affect every part of the enterprise. There are three basic<br />

ingredients: lean production system, total quality management (TQM) and<br />

HRM (Kobayashi, 1980). These three ingredients are inter-linked in order<br />
to produce a total effect on the management of Japanese enterprises.<br />

Because Japanese overseas affiliates are part of the family of the parent<br />

company, their management systems are part of the management strategy<br />

of the parent company (Basu, 1999; Morishima, 1996; Morita, 1992;<br />

Shimada, 1993).<br />

The basic idea of the lean production system and its fundamental<br />
organizational principle is “Human-ware” (Shimada, 1993), which is<br />

described in Fig. 1. “Human-ware” is defined as the integration and interdependence<br />

of machinery and human relations, and serves as a concept to differentiate<br />

among different types of production systems. The purpose of the lean production<br />

philosophy, which was developed at the Toyota Motor Company,<br />

is to lower the costs. This is done through the elimination of waste —<br />

everything that does not add value to the product. The constant striving for<br />

perfection (Kaizen in Japanese) is the overriding concept behind good management,<br />

in which the production system is being constantly improved;<br />

perfection is the only goal. Involving everyone in the work of improvement<br />

is often accomplished through quality circles. The lean production system uses<br />

“autonomous defect control” (Pokayoke in Japanese), which is an inexpensive<br />

means of conducting inspection for all units to ensure zero defects.<br />

Quality assurance is the responsibility of everyone. Manufacturing tasks<br />

are organized into teams. The principle of “just-in-time” (JIT) means that each<br />

[Figure 1. Sub-model 1: culture-performance system. Path diagram relating NC, OC, LC, and CP, where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is “Leader Culture”; and “CP” is “Corporate Performance”.]



process should be provided with the right part, in the right quantity at exactly<br />

the right point of time. The ultimate goal is that every process should be<br />

provided with one part at a time, exactly when that part is needed.<br />

The most important feature of the organizational set-up of the lean<br />

production system is the extensive use of multifunctional teams, which<br />

are groups of workers who are able to perform many different jobs. The<br />

teams are organized along a cell-based production flow system. The number<br />

of job-classifications also declines. Workers have received training to<br />

perform a number of different tasks, such as statistical process control,<br />
quality instruments, computers, set-up performance, maintenance, etc. In<br />

the lean production system responsibilities are decentralized. There is no<br />

supervisory level in the hierarchy. The multifunctional team is expected<br />

to perform supervisory tasks. This is done through the rotations of team<br />

leadership among workers. As a result, the number of hierarchical levels in<br />

the organization can be reduced. The number of functional areas that are<br />

the responsibility of the teams increases (Kumazawa and Yamada, 1989).<br />

In a multifunctional set-up, it is vital to provide information in time<br />

and continuously in the production flow. The production system is changing<br />

and gradually adopting a more flexible system in order to adapt itself to<br />
changes in demand, which may reduce costs of production and increase efficiency.<br />

There is extensive usage of Kaizen activities, TQM, and Total<br />

Productive Maintenance (TPM) to increase the effectiveness of corporate<br />

performances.<br />

4. A Nation-Organization-Leader-Performance Model<br />

Organizational culture is a product of a number of factors: NC, human<br />

resources practice, and leadership styles have prominent influences on it.<br />

These in turn, along with the OC, affect CPs. The proposed model, as in<br />

Fig. 1, is the synthesis of the ideas relating to this central theme. Figures 2<br />

and 3 describe the proposed theoretical model relating OC and CPs. Corporate<br />

performances are measured in terms of the following criteria:<br />

(a) customers’ satisfaction;<br />

(b) employees’ satisfaction and commitment; and<br />

(c) contributions of the organization to the society and environment.<br />

Most companies in Japan use these criteria to signify CPs (Basu, 1999;<br />

Nakane, 1970).<br />

There are several latent or hypothetical variables, which jointly can<br />

influence the outcome. National culture is affected by two constructs:



MIVs and MAVs. Micro-values (MIVs) are composed of several factors<br />

(Kahle et al., 1988). These are: (1) a sense of belonging; (2) respect<br />
and recognition from others; (3) a sense of life accomplishment; and<br />
(4) self-respect.<br />

Macro-values, according to the opinions of the executives of the<br />

Japanese firms, are religious values, habitual values, and moral values.<br />

There may be other MAVs, which are important for other nations like geography,<br />

racial origin, etc., but for the Japanese, these are not so important<br />

(Basu, 1999). Although moral and habitual values can be the results of<br />

religious values, it is better to separate out these three values as distinct.<br />

The moral and habitual values are different in Japan from other East Asian<br />

nations. Honor and “respect from others” are central to the Japanese psychology.<br />

Ritual suicides (Harakiri) are honorable acts for the Japanese, if<br />

they fail in some way to do their duty. Habitual values such as extreme politeness<br />
on the surface, beautification of everything, community spirit, and cleanliness<br />
are unique to Japan and are not followed in any other country<br />

in Asia.<br />

This is particularly true if we examine certain MEVs, which are neither<br />

MIV nor MAV values, but are derived from the NC. It is possible to identify<br />

five different MEVs, which are important outcomes of the Japanese NC.<br />

These are discussed as follows:<br />

(a) Exclusivity or insider-outsider (Uchi-Soto in Japanese) psychology by<br />

which Japanese exclude anyone who is not an ethnic Japanese from<br />

social discourse; (It is different from color or religious exclusivities. For<br />

example, the Chinese or Koreans, who are living in Japan for centuries,<br />

are excluded from the Japanese social circles.)<br />

(b) Conformity or the doctrine of “nail that sticks up should be beaten<br />

down”(Deru Kuiwa Utareru in Japanese) — deviations from the mainstream<br />

norms are not tolerated;<br />

(c) Seniority system (Senpai-Kohai in Japanese) by which every junior<br />

must obey and show respects to the seniors;<br />

(d) Collectivism in decision-making process (“Hou-Ren-Sou” system in<br />

Japanese) and<br />

(e) Continuous improvements (Kaizen in Japanese), which is the fundamental<br />

philosophy of the Japanese society.<br />

These MEVs are exclusively Japanese, a reflection of the unique<br />

Japanese culture (Basu, 1999; Nakane, 1970) and are fundamental to the<br />

Japanese organizations.



The national culture construct along with the MEVs affects the organizational<br />
culture (OC), HRM system, and leadership style (LS)<br />
constructs. The Japanese OC construct is affected by the NC, MEVs, and the<br />

HRM system. The similarities in OCs in different companies are due to<br />

the similarities in NC and MEV; the differences are due mainly to the differences<br />

in HRM. In Japan, human resource practices vary from one company<br />
to another and as a result, OCs and performances vary. At Toyota,<br />
there is a very consistent and strong culture with great emphasis on discipline,<br />
Kaizen, TQM, and the just-in-time (JIT) production-inventory system.<br />

In many other companies in Japan, situations are not the same; as a result,<br />

OC differs.<br />

Organizational culture is affected by the NC, MEVs, and HRM. Only if the<br />
company’s leader is also the founder can the leadership style (LS)<br />
affect the HRM and OC. Otherwise, when the leaders are professional managers<br />

trained and grown up within the company (in Japan and in Japanese<br />
companies abroad, it is not the practice to hire an outsider for executive<br />
positions; all executives enter the company after graduation and<br />
stay until they retire), appropriate LCs are developed by the OC and HRM.<br />

Due to the collective decision-making process (Hou-Ren-Sou, a MEV in<br />
the model), leadership cannot affect the organizational culture or the<br />

HRM system. It is very different from the Western (American, European,<br />

or Australian) companies where the leaders are hired from outside and are<br />

expected to be innovative regarding the OC and HRM system. This is quite<br />

alien to the Japanese culture. Leadership style in Japan is an outcome, not<br />

an independent variable as in the Western companies. Leadership style is<br />

affected by the NC, MEVs, OC, and the HRM.<br />

Organizational culture (OC) is affected by four factors, according to this<br />

proposed model. These factors are: (a) stability; (b) flexibility; (c) internal<br />

focus and (d) external focus (Cameron and Quinn, 1999). These factors are<br />

universal, not restricted to Japanese companies. However, how an OC<br />
is affected by NC, MEVs, and HRM would vary from country to<br />

country.<br />

Leadership style is a function of four factors, with the leader acting:<br />
(a) as a facilitator; (b) as an innovator; (c) as a technical expert; and (d) as<br />
an effective competitor for rival firms (Cameron and Quinn, 1999). These<br />

four characteristics of a leader are universal, but the emphasis varies from<br />

country to country.<br />

Finally, the corporate performance (CP) is affected by a number of<br />

influencing factors: (a) customers’ satisfaction, (b) employees’ satisfaction



[Figure 2 diagram: predicted paths among MAV, MEV, MIV, NC, HRMPJ, OC, LC, and CP.]<br />

Where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is “Leader<br />

Culture”; “MAV” is “Macro-Values of Culture”; “MEV” is “Meso-Values of Culture”;<br />

“MIV” is “Micro-Values of Culture”; “HRMPJ” is “Human Resources Management<br />

Practices in Japan”, and “CP” is “Corporate Performance”.<br />

Figure 2.<br />

Value–culture–performance: a thematic presentation.<br />

and commitments, and (c) the contribution of the organization to the society.<br />

If we consider that the contribution of the organization to society is<br />

already taken into account by the customers’ satisfaction and employees’<br />

satisfaction, then CP has these two important contributory factors.<br />

The theoretical relationships between the observed variables and the underlying<br />

constructs are specified in an exploratory model, represented in Fig. 2.<br />

In Fig. 3, a more elaborate model, including the unique HRM system of the<br />

Japanese companies is described. Figure 3 represents a Linear Structural<br />

Relations (LISREL) version (Joreskog and Sorbom, 1999) of the model,<br />

which can be estimated using the methods of structural equation modeling<br />

(Raykov and Marcoulides, 2000).<br />
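Figure 3 specifies a LISREL-type structural equation model. The chapter does not write out the equations; as a generic sketch (the split of constructs into exogenous ξ and endogenous η is my reading of the figure legend, not stated by the author), such a model has a measurement part and a structural part:<br />
<br />
```latex
% Measurement part: observed indicators load on the latent constructs,
% e.g. A1--A3 on MAV, B1--B4 on NC, C1--C5 on MEV, and so on.
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \epsilon
% Structural part: relations among the latent constructs themselves,
% e.g. with \xi = (\mathrm{MAV}, \mathrm{MIV}) exogenous and
% \eta = (\mathrm{NC}, \mathrm{MEV}, \mathrm{HRMPJ}, \mathrm{OC}, \mathrm{LC}, \mathrm{CP}) endogenous.
\eta = B \eta + \Gamma \xi + \zeta
```
<br />
Estimation then proceeds by fitting the covariance matrix implied by these equations to the sample covariance matrix, which is what LISREL and other structural equation modeling packages do.<br />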

5. Discussion<br />

The proposed theory describes the relationship between NC and CPs<br />

through OC with the HRM system affecting the nature of the culture and<br />

leadership. These inter-relationships are important aspects of the Japanese<br />

corporate behaviors in a multinational setting and this proposed theory is<br />

an attempt to understand the Japanese corporate culture.<br />

Analysis of the Japanese culture and organizations by outsiders so far<br />

suffers from a number of defects (Ikeda, 1987; Watanabe, 1998). They neither<br />

try to understand the effects of the Japanese value system on their<br />

organizations nor do they give any importance to the relationship between



[Figure 3 diagram: predicted paths between the latent variables MAV, NC, MEV, MIV, HRMPJ, OC, LC, CP and their indicators A1–A3, B1–B4, C1–C5, D1–D4, E1–E2, F1–F6, G1–G4, H1–H2.]<br />

Where “NC” is “National Culture”; “OC” is “Organizational Culture”; “LC” is “Leader Culture”;<br />

“MAV” is “Macro-Values of Culture”; “MEV” is “Meso-Values of Culture”; “MIV” is<br />

“Micro-Values of Culture”, and “CP” is “Corporate Performance”; A1 “Religious Values”, A2<br />

“Moral Values” A3 “Habitual Values” are indicators for MAV variable; B1 “Power Distance”,<br />

B2 “Individualism/Collectivism”, B3 “Masculinity/Femininity” are indicators for NC variable;<br />

C1 “Uchi/Soto Value”, C2 “Deru Kuiwa Utareru Value”, C3 “Senpai/Kohai Value”, C4<br />

“Hou-Ren-Sou Value”, C5 “Kaizen Value” are indicators for MEV variable; D1 “Stability”,<br />

D2 “Flexibility”, D3 “External Focus”, D4 “Internal Focus” are indicators for OC variable;<br />

E1 “On-Job-Training”; E2 “Emphasis on Personality of Recruits” are indicators for HRMPJ<br />

variable; F1 “Sense of Belonging to a Group Value”, F2 “Respect and Recognition from Others<br />

Value”, F3 “Sense of Life Accomplishment Value”, F4 “Self-respect Value”, F5 “Kangaekata<br />

Value”, F6 “Ishiki Value” are indicators for MIV variable; G1 “Leader as Facilitator”, G2<br />

“Leader as Innovator”, G3 “Leader as Technical Expert”, G4 “Leader as Competitor” are<br />

indicators for LC variable; H1 “Customers Satisfaction”, H2 “Employees satisfaction and<br />

Commitment” are indicators for CP variable.<br />

Figure 3. Research model: Values-culture-performance (VCP): predicted paths<br />

between variables and indicators.<br />

performance and culture. So far, the success of Japanese companies has been<br />

analyzed only in terms of their unique production and operations management<br />

system. However, the inter-relationship between the production and<br />

operations management system, which is a part of the OC, and the specific<br />

HRM system it requires for its proper implementation, was not given<br />

sufficient attention.<br />

The typical example is the analysis of the failure of the implementations<br />

of Japanese operations management system in Britain by Oliver and<br />

Wilkinson (1992) or in China by Taylor (1999). They could not explain<br />
the reason for the failure, as they did not analyze the Japanese OC and<br />

particularly the HRM system.



In a Japanese company, the LSs and the OC are designed by the HRM<br />

system; these do not come from outside. An effective corporate performance<br />

is the result of these underlying determinants of OC and LSs. Thus,<br />

in a Japanese organization, LS is rooted in the HRM system, which emerges<br />

from the MEVs of the Japanese national culture.<br />

Whether efficient CPs can be repeated in foreign locations depends on<br />

the transmission mechanism of the Japanese OC, which in turn, depends on<br />

the adaptations of the Japanese style HRM system. This can face obstacles<br />

due to the different MEVs in a foreign society and as a result, OC in a<br />

foreign location may be different from the OC of the company in Japan.<br />

Adaptation of the operations management system alone will not create an<br />

effective performance of a Japanese company in a foreign location. Creation<br />

of an appropriate HRM system and appropriate OC are essential for an<br />

effective performance.<br />

6. Conclusion<br />

A theory of the relationship among NC, OC, human resources<br />
practices, and CPs for Japanese multinational companies has been proposed.<br />

Cultural differences are important and these factors affect every component<br />

of the company behaviors and performances. The question was raised<br />
as to which factors are relatively more important than others<br />
and whether it is possible to manipulate these factors in order to<br />
increase the performance of multinational companies. The proposed<br />

theory may provide an answer to these issues.<br />

References<br />

Aoki, M (1990). Towards an economic model of the Japanese firm. Journal of Economic<br />
Literature 28, 1–27.<br />

Axel, M (1995). Culture-bound aspects of Japanese management. Management International<br />

Review 35(2), 57–73.<br />

Bartlett, CA and H Yoshihara (1988). New challenges for Japanese multinationals: is organization<br />

adaptation their Achilles Heel. Human Resources Management 27(1), 19–43.<br />

Basu, S (1999). Corporate Purpose. New York: Garland Publishers.<br />

Calori, R and P Sarnin (1991). Corporate culture and economic performance: a French study.<br />

Organization Studies 12(1), 49–74.<br />

Cameron, KS and R Quinn (1999). Diagnosing and Changing Organizational Culture:<br />

Based on the Competing Values Framework. New York: Addison-Wesley.<br />

Denison, DR and AK Mishra (1995). Toward a theory of organizational culture and effectiveness.<br />

Organization Science 6(2), March–April, 204–223.<br />



Fujino, T (1998). How Japanese companies have globalized. Management Japan 31(2),<br />

1–28.<br />

Hofstede, G (1980). Culture’s Consequences: International Differences in Work-related<br />

Values. Beverly Hills, CA: Sage.<br />

Hofstede, G (1985). The interaction between national and organisational value system.<br />

Journal of Management Studies 22(3), 347–357.<br />

Hofstede, G (1990). Measuring organizational cultures: a qualitative and quantitative study<br />

across twenty cases. Administrative Science Quarterly 35, 286–316.<br />

Hofstede, G (1993). Cultural constraints in management theories. Academy of Management<br />

Executive 7(1), 81–94.<br />

House, RJ et al. (1999). Cultural influences on leadership: Project GLOBE. In Advances in<br />

Global Leadership, W Mobley, J Gessner and V Arnold (eds.), Vol. 1. Greenwich, CT:<br />

JAI Press, 171–234.<br />

Ikeda, K (1987). Nihon oyobi nihonjin ni tsuiteno imeji (Image of Japan and Japanese). In<br />

Sekai wa Nihon wo Dou Miteiruka (How Does the World See Japan). A Tsujimura,<br />

K Furuhata and H Akuto (eds.) Tokyo: Nihon Hyoronsa, pp. 12–31.<br />

Imai, M (1986). Kaizen: The Key to Japan’s Competitive Success. New York, NY:<br />

McGraw-Hill.<br />

Joreskog, KG and D Sorbom (1999). LISREL 8.30: User’s Reference Guide. Chicago:<br />

Scientific Software.<br />

Kahle, LR, RJ Best and RJ Kennedy (1988). An alternative method for measuring value-based<br />

segmentation and advertisement positioning. Current Issues in Research in<br />

Advertising 11, 139–155.<br />

Kilman, R, MJ Saxton, and R Serpa (1985). Introduction: five key issues in understanding<br />

and changing culture, In Gaining Control of the Organizational Culture, R Kilman,<br />

MJ Saxton and R Serpa (eds.), San Francisco: Jossey-Bass, pp. 1–16.<br />

Kobayashi, N (1980). Nihon no Takokuseki Kigyo (Japanese multinational corporations).<br />

Tokyo: Chuo Keizaisha.<br />

Kotter, JP and JL Heskett (1992). Corporate Culture and Performance. NY: The Free Press.<br />

Kujawa, D (1979). The labour relations of US multinationals abroad: comparative and<br />

prospective views. Labour and Society, 4, 3–25.<br />

Kumagai, F (1996). Nihon Teki Seisan Sisutemu in USA (Japanese Production System in<br />

USA). Tokyo: JETRO.<br />

Kumazawa, M and J Yamada (1989). Jobs and skills under the lifelong Nenko employment<br />

practice. In The Transformation of Work. S Wood (ed.). London: Unwin Hyman, 56–79.<br />

Marcoulides, GA and RH Heck (1993). Organization culture and performance: proposing<br />

and testing a model. Organization Science 4(2), May, 209–225.<br />

Morishima, M (1996). Renegotiating psychological controls, Japanese style. In Trends in<br />

Organizational Behavior. CL Cooper (ed.), Vol. 3. New York: John Wiley & Sons Ltd.,<br />

149–165.<br />

Morita, A (1992). A moment for Japanese management. Japan-Echo 19(2), 1–12.<br />

Miyajima, H (1996). The evolution and change of contingent governance structure in the<br />

J-firm system: An approach to presidential turnover and firm performance. Working<br />

Paper No. 9606. Institute for Research in Contemporary Political and Economic Affairs.<br />

Tokyo: Waseda University.<br />

Nakane, C (1970). Japanese Society. Harmondsworth: Penguin.



Oliver, N and B Wilkinson (1992). The Japanization of British Industry. Cambridge,<br />

Massachusetts: Blackwell.<br />

Ouchi, W (1981). Theory Z: How American Business can Meet the Japanese Challenge.<br />

Reading, MA: Addison-Wesley.<br />

Raykov, T and GA Marcoulides, (2000). A First Course in Structural Equation Modeling.<br />

London: Lawrence Erlbaum.<br />

Schein, EH (1997). Organizational Culture and Leadership. San-Francisco: Jossey-Bass<br />

Publishers.<br />

Shimada, H (1993). Japanese management of auto-production in the United States: An<br />

overview of human technology in international labour organization. In Lean Production<br />

and Beyond, Geneva: ILO, 23–47.<br />

Taylor, B (1999). Patterns of control within Japanese manufacturing plants in China: doubts<br />

about Japanization in Asia. Journal of Management Studies 36(6), 853–873.<br />

Watanabe, S (1998). Manager-subordinate relationship in cross-cultural organizations: the<br />

case of Japanese subsidiaries in the United States. International Journal of Japanese<br />

Sociology 7, 23–43.




Topic 3<br />

Health Service Management Using<br />

Goal Programming<br />

Anna-Maria Mouza<br />

Institute of Technology and Education, Greece<br />

1. Introduction<br />

Private production units usually operate on the basis of profit maximization.<br />

However, to face competition from similar units and to be in line with some<br />

major socio-economic factors, some decision makers are forced to relax this<br />

basic objective. In order to present propositions for optimal management<br />

decisions, one should combine the priorities of the decision maker with the<br />

technical and socio-economic factors, which are involved in the operating<br />

process in the best possible way. The production unit considered in this<br />

paper is a relatively small (60 beds) private clinic in northern Greece, which<br />

is similar to the orthopedic department of a hospital studied and described<br />

elsewhere (Mouza, 1996).<br />

However, in this particular case, there are no out-patient services. To<br />

facilitate the presentation, I consider a five-year operational plan where<br />

the personnel claims, the manpower working pattern, the various expenses,<br />

together with the profit maximization target, and the expected number of<br />

patients are described in detail. The necessary projections are based on<br />

reliable techniques presented elsewhere (Mouza, 2002; 2006).<br />

After setting the priorities of the various targets, I formulate a proper<br />
“goal programming” problem using deviational variables (d_i^- and d_i^+) and<br />
obtain the first-run solution. Then, I proceed to the goal attainment evaluation,<br />
which dictates a re-adjustment of the priorities. The second-run<br />

solution indicates that a slower acceleration of the profit maximization rate<br />

is preferable, to avoid over-charging the patients while at the same time satisfying<br />

most of the requirements for personnel management. Furthermore, if the<br />

decision maker’s preference is relatively rigid, a lower rate of the increase<br />



152 Anna-Maria Mouza<br />

of salaries of the employees should be adopted. All computational results<br />

are presented at the end of the relevant section.<br />

2. The Details of the Private Clinic in Relation<br />

to the Five-Year Plan<br />

In Table 1, I present the composition of the personnel and their salary<br />
at the base year (t_0). Note that the 5-year planning horizon refers to the<br />
time-periods (years) t_1, t_2, t_3, t_4, and t_5.<br />

In columns 4 and 5 of Table 1, the total amount of the wages for all<br />

employees of the same group, for the initial and final year is stated. Note<br />

that the wages are an average of the salaries earned by each person in<br />

the individual group. For each group, an average percent of annual salary<br />

Table 1. Personnel of the orthopedic clinic and their wages.<br />
<br />
Group | Position of the group members | Members in each group | Annual salary at year t_0, all group members (in thousands) | Salary at final year t_5, all group members (in thousands)<br />
1 | Orthopedic surgeons, anaesthesiologists and microbiologists | 10 | 260 | 356.222<br />
2 | Pharmacist | 1 | 24.7 | 33.054<br />
3 | Operating-room nurses | 3 | 54.6 | 68.04<br />
4 | Nurses plus laboratory staff | 14 | 163.8 | 199.29<br />
5 | Axial-tomography and X-ray technicians | 3 | 58.5 | 74.307<br />
6 | Clinic manager | 1 | 27.3 | 36.190<br />
7 | Secretary and administrative service staff | 3 | 32.76 | 39.476<br />
8 | Office clerks | 4 | 41.6 | 49.647<br />
9 | Cooking and food service staff | 4 | 42.64 | 50.643<br />
10 | Other staff (guard, bearers, maintenance, cleaning service etc.) | 10 | 101.4 | 119.851


Health Service Management 153<br />

increase, say R_i, is considered. It should be noted that in some groups, this<br />
rate is well above the rate of inflation and above the rate stated in the<br />
collective wage and salary agreements for each employee group. These<br />

rates of annual salary increase for all groups are presented in Table 3.<br />

Denoting by A_0(i) the annual salary of all members of group i at the base<br />
year t_0, the final annual salary A_n(i) at year t_5 for all members of the same<br />
group is computed from:<br />
<br />
A_n(i) = A_0(i) (1 + R_i/100)^5.   (1)<br />

Thus for group 1, where R_1 = 6.5%, the final salary for all group members<br />
will be:<br />
<br />
A_n(1) = 260(1 + 0.065)^5 = 356.222.<br />
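Equation (1) and the group 1 figure can be verified with a few lines of code (a sketch; salary amounts are in thousands, as in Table 1):

```python
# Final-year salary from Eq. (1): A_n(i) = A_0(i) * (1 + R_i/100)**5
def final_salary(a0, rate_pct, years=5):
    return a0 * (1 + rate_pct / 100) ** years

a_n1 = final_salary(260, 6.5)  # group 1: A_0(1) = 260, R_1 = 6.5%
print(round(a_n1, 3))          # -> 356.223; Table 1 reports 356.222 (truncated)
```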

It should be noted that these rates of salary increase have been formed in accordance with the personnel claims. Other additional expenses for the final year of the planning period are presented in Table 2.

At the top of Table 2, the forecast regarding the expected number of patients for the final year is stated. It should be pointed out that not all patients necessarily need an axial-tomography and/or X-ray treatment. That is why the average per-patient cost stated in Table 2 seems relatively low.

The interactive voice response (IVR) systems which are effectively used in healthcare and hospitals are described by Mouza (2003). It should be noted that replacement expenses, together with expenses for obtaining new devices, seem more reasonably distributed equally over the five years of the planning period. Needless to say, in the case of replacements, the estimated salvage value of the old equipment has to be subtracted from the relevant cost. Thus, at the final year, only one fifth of such expenses is stated. For the case under consideration, the replacement refers to the server of the local area computer network (LAN), together with two stations. Their net cost sums up to 2000, which has been spread equally over the five years of the planning period.

From Table 1, I computed the average salary per employee at the final year which, when multiplied by the group size, gives the total salary for the corresponding group, as can be verified from Table 3.

154 Anna-Maria Mouza

Table 2. Forecasted number of patients and various expenses for the final year.

Forecasted number of patients for the period (year) t_5 = 3625

Patient expenses | Forecasted average per patient, at the final period t_f (€)
Axial-tomography | 9.1
X-ray | 5.7
Medical supplies | 59.4
Administrative and miscellaneous | 32.1

Other expected expenses | Total (€) | Amount for the final year t_5
Installation of IVR system | 3000 | 600
Server and computers replacement | 2350 (salvage value: 350) | 400
Personnel insurance (19% of total salaries) | | Endogenous (decision variable X_39)
Laboratory expenses | | 60,000
Fixed expenses plus depreciation | | 180,000
All other expenses | | 80,000

It can be seen from Table 3 that X_i (i = 1,...,10) denotes the size (i.e., the number of members) of group i and c_i the average salary of each group member. Thus, the total salary of all the members of group 1, for instance, can also be computed from:

c_1 × X_1 = 35,622.2 × 10 = 356,222.

If I denote by X_{10+i} the salary expenses of group i, then I may write

X_{10+i} = c_i × X_i   (i = 1,...,10),    (2)

so that the total salary expenses for the final year can be determined by evaluating the summation:

∑_{i=11}^{20} X_i.    (3)
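Eqs. (2) and (3) can be cross-checked with a short sketch (illustrative Python, not part of the chapter's model code; the c_i are those of Table 3 and the group sizes those of Table 1):

```python
# c_i: average final-year salary per person (Table 3); X_i: group sizes (Table 1).
c = [35622.2, 33054, 22680, 14235, 24769, 36190, 13158.666, 12411.75, 12660.75, 11985.10]
X = [10, 1, 3, 14, 3, 1, 3, 4, 4, 10]

group_totals = [ci * xi for ci, xi in zip(c, X)]      # X_11 ... X_20 of Eq. (2)
total_salary = sum(group_totals)                      # the summation of Eq. (3)

print(round(group_totals[0]))  # 356222, group 1 of Table 3
print(round(total_salary))     # 1026720, the X_21 value later reported in Table 5
```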


Table 3. Employees' salary at the initial and final year of the planning period.

Group (i) | Number of group members (X_i) | Annual salary at year t_0, all members of group i (in thousand €) | Annual rate of increase, R_i (%) | Salary at final year t_f, all members of group i (€) | Average per person at each group (c_i)
1 | 10 | 260 | 6.5 | 356,222 | 35,622.2
2 | 1 | 24.7 | 6 | 33,054 | 33,054
3 | 3 | 54.6 | 4.5 | 68,040 | 22,680
4 | 14 | 163.8 | 4 | 199,290 | 14,235
5 | 3 | 58.5 | 4.9 | 74,307 | 24,769
6 | 1 | 27.3 | 5.8 | 36,190 | 36,190
7 | 3 | 32.76 | 3.8 | 39,476 | 13,158.666
8 | 4 | 41.6 | 3.6 | 49,647 | 12,411.75
9 | 4 | 42.64 | 3.5 | 50,643 | 12,660.75
10 | 10 | 101.4 | 3.4 | 119,851 | 11,985.10

Another point of interest refers to the average working hours of the members of each group. The relative figures are analytically presented in Table 4.

It can easily be verified that in Table 4, column 5 is obtained by multiplying the elements of column 4 by 52.

Denoting by X_{21+i} (i = 1,...,10) the total average working hours per year for all the group members, I may write

X_{21+i} = w_i × X_i   (i = 1,...,10).    (4)
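The coefficients w_i and the yearly totals of Table 4 follow from the weekly figures by the multiply-by-52 rule noted above; an illustrative Python check (data taken from Table 4):

```python
# Average weekly hours per member and group sizes, from Table 4.
weekly = [65, 45, 65, 40, 40, 45, 40, 40, 30, 30]
members = [10, 1, 3, 14, 3, 1, 3, 4, 4, 10]

w = [h * 52 for h in weekly]                           # yearly hours per person (w_i)
yearly_totals = [wi * m for wi, m in zip(w, members)]  # w_i * X_i, as in Eq. (4)

print(w[0], yearly_totals[0])  # 3380 33800, matching group 1 of Table 4
```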

Table 4. Working hours of each group personnel.

Group (i) | Number of group members | Hours per week (average), each member of group i | Total hours per week, all members of group i | Total hours per year, all members of each group | Average per person at each group (w_i)
1 | 10 | 65 | 650 | 33,800 | 3380
2 | 1 | 45 | 45 | 2340 | 2340
3 | 3 | 65 | 195 | 10,140 | 3380
4 | 14 | 40 | 560 | 29,120 | 2080
5 | 3 | 40 | 120 | 6240 | 2080
6 | 1 | 45 | 45 | 2340 | 2340
7 | 3 | 40 | 120 | 6240 | 2080
8 | 4 | 40 | 160 | 8320 | 2080
9 | 4 | 30 | 120 | 6240 | 1560
10 | 10 | 30 | 300 | 15,600 | 1560

3. The Nature of the Problem

Given all the information presented in Tables 1–4, the resultant problem is how to formulate a feasible operating plan, which should combine the desired salary increases, the full utilization of working hours, and the expenses incurred, together with the desired gross profit at the final year set by the decision maker, in the best possible way and with the minimum sacrifice. This problem can be answered by the goal programming method, originally developed by Charnes and Cooper (1961). It should be recalled that goal programming is a mathematical programming technique with the capability of handling multiple goals under a given priority structure. Furthermore, a goal programming model may be composed of non-homogeneous units of measure, which is the case under consideration. In brief, the main task here is to reconcile total expenses with the decision maker's requirement regarding gross profits, without any change in the operating pattern of the clinic as far as the manpower level is concerned.

3.1. Specification of the Problem Decision Variables

• The first 10 decision or choice variables X_i (i = 1,...,10) are already defined in Table 3 and refer to the number of personnel in group i.
• Variables X_11,...,X_20 stand for the salary cost of each group.
• Variable X_21 denotes the total salary cost at the final year.
• Variables X_22,...,X_31 refer to the total working hours, observed in the 5th column of Table 4.
• Variables X_32,...,X_35 refer to the average cost per patient, as stated in the first four rows of Table 2.
• Variable X_36 denotes total per-patient expenses.
• Variables X_37 and X_38 refer to the equipment purchase and replacement.
• Variable X_39 stands for the personnel insurance expenses.
• Variable X_40 refers to the depreciation, laboratory, fixed, and other expenses.
• Variable X_41 is used to sum up all the above expenses (i.e., X_37 + X_38 + X_39 + X_40).
• X_42, which is also a definition variable, denotes the total operating cost for the final year t_f (i.e., t_5) of the planning period.
• Finally, variable X_43 denotes the average charge per patient, set so that the desired gross profit is not less than 750,000.

3.2. Formulation of the Constraints (Goals)

To facilitate the presentation, I classified the goals into the following seven categories.

3.2.1. Personnel Employed

The operating pattern of the clinic regarding the personnel employed is presented in Table 1. Given that the decision maker does not want to alter this pattern, I must first introduce the following 10 constraints (goals):

X_i + d_i^- − d_i^+ = b_i   (i = 1,...,10),    (5)

where b_1 = 10, b_2 = 1, b_3 = 3, b_4 = 14, b_5 = 3, b_6 = 1, b_7 = 3, b_8 = 4, b_9 = 4 and b_10 = 10 are the desired targets regarding the members of each group, as shown in Tables 1, 3 and 4.

It is useful to recall that d_i^- and d_i^+ denote the so-called deviational variables from the goals, which are complementary to each other, so that d_i^- × d_i^+ = 0. If the ith goal is not completely achieved, the observed slack is expressed by d_i^-, which represents the negative deviation from the goal. On the other hand, if the ith goal is overachieved, then d_i^+ takes a non-zero value. Finally, if this particular goal is exactly achieved, then both d_i^- and d_i^+ are zero.
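To make the role of the deviational variables concrete, a small illustrative sketch (Python; the helper `deviations` is introduced here for exposition only and is not part of the chapter's model code):

```python
def deviations(achieved, target):
    """Split the gap between an achieved value and its goal into
    under-achievement d_minus and over-achievement d_plus."""
    d_minus = max(target - achieved, 0)  # negative deviation (slack)
    d_plus = max(achieved - target, 0)   # positive deviation (surplus)
    return d_minus, d_plus

print(deviations(8, 10))   # (2, 0): goal under-achieved
print(deviations(12, 10))  # (0, 2): goal over-achieved
print(deviations(10, 10))  # (0, 0): goal exactly met
# In every case d_minus * d_plus = 0, i.e. the pair is complementary.
```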


3.2.2. Personnel Claims for Salary Increase

According to Eq. (2), I have to introduce the following 10 constraints in order to define the total salary expenses for each group:

X_{10+i} − c_i X_i + d_{10+i}^- − d_{10+i}^+ = 0   (i = 1,...,10).    (6)

It is recalled that the coefficients c_i are presented in Table 3. An additional variable X_21 is introduced to sum up the salary expenses for all personnel at the final year. The computation of this total is based on Eq. (3):

X_21 − ∑_{i=11}^{20} X_i + d_21^- − d_21^+ = 0.    (7)

3.2.3. Complete Utilization of Personnel Capacity in Terms of Working Hours

The forecasted number of patients for the final year does not exceed the clinic's capabilities, given the existing infrastructure. Furthermore, the decision maker does not want to alter the existing operating pattern, since he considers that the present manpower potential will be adequate to provide satisfactory service to the forecasted number of patients, provided that underutilization of the regular working hours is avoided. Hence, I introduced the following constraints in accordance with Eq. (4), which may also be regarded as a measure to provide job security to all personnel by avoiding underutilization of their regular working hours, as shown in Table 4:

X_{21+i} − w_i X_i + d_{21+i}^- − d_{21+i}^+ = 0   (i = 1,...,10).    (8)

Note that the coefficients w_i are also presented in Table 4.

3.2.4. Patient Expenses

The axial-tomography expenses per patient can be expressed as:

X_32 + d_32^- − d_32^+ = 9.1.    (9a)

X-ray expenses per patient:

X_33 + d_33^- − d_33^+ = 5.7.    (9b)

Medical expenses per patient:

X_34 + d_34^- − d_34^+ = 59.4.    (9c)

Administrative and miscellaneous expenses per patient:

X_35 + d_35^- − d_35^+ = 32.1.    (9d)

Variable X_36 depicts total per-patient expenses at the final year:

X_36 − (X_32 + X_33 + X_34 + X_35) + d_36^- − d_36^+ = 0.    (9e)

3.2.5. Other Expected Expenses

Installation of the IVR system:

X_37 + d_37^- − d_37^+ = 600.    (10a)

Computers and server replacement:

X_38 + d_38^- − d_38^+ = 400.    (10b)

The insurance funds: from Table 2, we see that the average expenses for personnel insurance amount to 19% of the total employees' salary. This is represented by the following constraint, taking into account the variable X_21:

X_39 − 0.19 X_21 + d_39^- − d_39^+ = 0.    (10c)

Depreciation, laboratory, fixed and other expenses:

X_40 + d_40^- − d_40^+ = (60,000 + 180,000 + 80,000).    (10d)

To sum all other expenses, I introduce the following relation:

X_41 − (X_37 + X_38 + X_39 + X_40) + d_41^- − d_41^+ = 0.    (10e)

3.2.6. Total Expenses

As already mentioned, I introduce the variable X_42 in order to sum up total expenses, in the following way:

X_42 − (X_21 + 3625 X_36 + X_41) + d_42^- − d_42^+ = 0.    (11)

3.2.7. Total Gross Profit

The gross profit target set by the decision maker may be introduced in the following way:

3625 X_43 − X_42 + d_43^- − d_43^+ = 750,000,    (12)

where X_43, as mentioned earlier, represents the adequate charge per patient in order to satisfy Eq. (12). It is recalled that the coefficient 3625, which also appears in Eq. (11), is the stated forecast of the expected number of patients for the final year t_f.

With the above formulation, the number of constraints is equal to the number of choice variables (43).
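Setting the deviational variables of Eq. (12) to zero and rearranging gives the charge per patient implied by a given operating cost; an illustrative Python sketch, using the solution I cost figure reported later in Table 5:

```python
patients = 3625             # forecasted patients for year t_5
target_profit = 750_000     # desired gross profit, Eq. (12)
total_cost = 1_928_134.375  # X_42 of solution I (Table 5)

# From Eq. (12) with zero deviations: 3625 * X_43 - X_42 = 750,000.
charge = (total_cost + target_profit) / patients
print(round(charge, 3))  # 738.796, the X_43 value of solution I
```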

3.3. Priority of the goals — The objective function

It is obvious that the decision maker, apart from determining the goals of the five-year plan, also has to specify the priorities (Gass, 1986), denoted by P_i, in order to accomplish the optimum allocation of resources for achieving these goals in the best possible manner. A further point that has to be considered in the formulation of the model is the possibility of weighting the deviational variables at the same priority level (Gass, 1987). The list of the decision maker's goals is presented below in descending order of importance.

1. To retain the present operating pattern of the clinic. This implies that no alterations are desired regarding the number and composition of the existing personnel (P_1). Different weights have been assigned to the various personnel groups.
2. To achieve the desired level of gross profits; also, to provide reserves for the personnel insurance and to cover all expenses for equipment replacements and the installation of a new IVR system (P_2).
3. To avoid underutilization of the personnel's regular working hours, providing at the same time job security to all employees (P_3).
4. To provide the funds necessary to cover all patient expenses (P_4).
5. To provide adequate funds for covering laboratory and fixed expenses, depreciation, etc. (P_5).
6. To provide the wage increase claimed by the personnel of the various groups (P_6). In this case, weights are used to differentiate the salary increase of each group; heavier weights have been assigned to the groups with lower salaries.

According to the above ranking of the goals' priority, and taking into account the relations for the corresponding totals, the following objective function is formulated:

min z = (10P_1 d_1^- + 9P_1 d_2^- + 8P_1 d_3^- + 7P_1 d_4^- + 6P_1 d_5^- + 5P_1 d_6^-
        + 4P_1 d_7^- + 3P_1 d_8^- + 2P_1 d_9^- + P_1 d_10^-)
        + (P_2 d_37^- + P_2 d_38^- + P_2 d_39^- + 3P_2 d_42^- + P_2 d_43^-)
        + P_3 ∑_{i=22}^{31} d_i^- + P_4 ∑_{i=32}^{36} d_i^-
        + (P_5 d_40^- + P_5 d_41^-)
        + (P_6 d_21^- + 2P_6 d_11^- + 2P_6 d_12^- + 3P_6 d_13^- + 4P_6 d_14^- + 5P_6 d_15^-
        + 6P_6 d_16^- + 7P_6 d_17^- + 8P_6 d_18^- + 9P_6 d_19^- + 10P_6 d_20^-).    (13)

4. Initial Results

With this specification of the goal programming model, I obtain the results given on the left side of Table 5 (solution I). It is easily verified that all goals are achieved. Hence, the pattern dictated by the optimal solution is readily applicable. It should be pointed out that, according to this solution, an average rate of total salary increase of about 4.9% is realized, whereas the mean salary per worker increases from 15,232.1 to 19,372.1. In fact, this average rate r (i.e., 4.9) has been computed from Eq. (1) in the following way, using natural logs:

ln(x)/5 = ln(1 + r),  where x = 1,026,720/807,300,

and r = (y − 1) × 100, where y = e^Z and Z = ln(x)/5.
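This computation of r can be replayed directly (an illustrative Python sketch, not part of the chapter's code):

```python
import math

# Average annual rate of total salary increase, from Eq. (1) inverted:
# total salaries grow from 807,300 (year t_0) to 1,026,720 (year t_5).
x = 1_026_720 / 807_300
z = math.log(x) / 5          # Z = ln(x)/5
r = (math.exp(z) - 1) * 100  # percent per year
print(round(r, 1))  # 4.9, the "about 4.9%" quoted in the text
```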

However, the charge per patient (∼740 €) is considered high when compared to the average of the local market, thus affecting the competitiveness of the clinic.

5. Imposition of a New Constraint

Given the social character of the health service, which has to be combined with the fact that the unit under consideration must be competitive, it is necessary to impose the following constraint:

X_43 + d_44^- − d_44^+ = 700.    (14)

Thus, the total number of constraints has become 44. It should be noted that, by minimizing d_44^+ in the objective function, an upper limit of 700 is set on the average charge per patient. Hence, I added to the objective in Eq. (13) the following term:

2P_2 d_44^+,

which implies that the restriction regarding the maximum average charge per patient has been considered as a second-priority goal, properly weighted.

5.1. Second-Run Results

With the new constraint, the solution obtained is presented on the right side of Table 5 (solution II). I observe that the constraint in Eq. (14) is binding, and, in order to achieve the level of gross profit set by the decision maker, the solution dictates that the total salary expenses must be reduced by 118,180.125, which is the value of the deviational variable d_21^- shown in Eq. (7). Evaluating the results of the second run, one may argue as to whether this is an optimal solution from the socio-economic point of view, since any reduction directly affects the employees' salaries solely. On the other hand, the decision maker is not willing to undertake this reduction (118,180.125) entirely, since that would decrease gross profits by almost 16%. With all these in mind, I present the following proposition, which may be considered of particular importance.

Table 5. Solution I (left) and solution II (right).

The simplex solution (Number of patients = 3625)

No. | Variable | Value (solution I) | No. | Variable | Value (solution II)
87 | X1 | 10.000 | 89 | X1 | 10.000
88 | X2 | 1.000 | 90 | X2 | 1.000
89 | X3 | 3.000 | 91 | X3 | 3.000
90 | X4 | 14.000 | 92 | X4 | 14.000
91 | X5 | 3.000 | 93 | X5 | 3.000
92 | X6 | 1.000 | 94 | X6 | 1.000
93 | X7 | 3.000 | 95 | X7 | 3.000
94 | X8 | 4.000 | 96 | X8 | 4.000
95 | X9 | 4.000 | 97 | X9 | 4.000
96 | X10 | 10.000 | 98 | X10 | 10.000
97 | X11 | 356,222.000 | 99 | X11 | 356,222.000
98 | X12 | 33,054.000 | 100 | X12 | 33,054.000
99 | X13 | 68,040.000 | 101 | X13 | 68,040.000
100 | X14 | 199,290.000 | 102 | X14 | 199,290.000
101 | X15 | 74,307.000 | 103 | X15 | 74,307.000
102 | X16 | 36,190.000 | 104 | X16 | 36,190.000
103 | X17 | 39,476.000 | 105 | X17 | 39,476.000
104 | X18 | 49,647.000 | 106 | X18 | 49,647.000
105 | X19 | 50,643.000 | 107 | X19 | 50,643.000
106 | X20 | 119,851.000 | 108 | X20 | 119,851.000
107 | X21 | 1,026,720.000 | 21 | d_21^- | 118,180.125
108 | X22 | 33,800.000 | 110 | X22 | 33,800.000
109 | X23 | 2340.000 | 111 | X23 | 2340.000
110 | X24 | 10,140.000 | 112 | X24 | 10,140.000
111 | X25 | 29,120.000 | 113 | X25 | 29,120.000
112 | X26 | 6240.000 | 114 | X26 | 6240.000
113 | X27 | 2340.000 | 115 | X27 | 2340.000
114 | X28 | 6240.000 | 116 | X28 | 6240.000
115 | X29 | 8320.000 | 117 | X29 | 8320.000
116 | X30 | 6240.000 | 118 | X30 | 6240.000
117 | X31 | 15,600.000 | 119 | X31 | 15,600.000
118 | X32 | 9.100 | 120 | X32 | 9.100
119 | X33 | 5.700 | 121 | X33 | 5.700
120 | X34 | 59.400 | 122 | X34 | 59.400
121 | X35 | 32.100 | 123 | X35 | 32.100
122 | X36 | 106.300 | 124 | X36 | 106.300
123 | X37 | 600.000 | 125 | X37 | 600.000
124 | X38 | 400.000 | 126 | X38 | 400.000
125 | X39 | 195,076.797 | 127 | X39 | 172,622.578
126 | X40 | 320,000.000 | 128 | X40 | 320,000.000
127 | X41 | 516,076.812 | 129 | X41 | 493,622.562
128 | X42 | 1,928,134.375 | 130 | X42 | 1,787,500.000
129 | X43 | 738.796 | 131 | X43 | 700.000
 | | | 109 | X21 | 908,539.875


6. Some Alternatives

After proper negotiations, the decision maker and the employees have to agree on the proportion, say a (where 0 < a < 1),


Table 7. Salary at the initial and final years together with the annual rates of increase.

Group (i) | Salary at the period t_0 (€) | Initially claimed salary for the final year (€) | Amount of reduction | Salary to be realized at the final year (€) | Actual annual rate of salary increase (%) | New coefficients c_i
1 | 260,000 | 356,222 | 37,090.172 | 319,131.81 | 4.1836 | 31,913.181
2 | 24,700 | 33,054 | 2859.187 | 30,194.81 | 4.0991 | 30,194.81
3 | 54,600 | 68,040 | 2942.746 | 65,097.25 | 3.5795 | 21,699.083
4 | 163,800 | 199,290 | 6384.697 | 192,905.3 | 3.3252 | 13,778.95
5 | 58,500 | 74,307 | 4082.711 | 70,224.29 | 3.7209 | 23,408.096
6 | 27,300 | 36,190 | 2689.870 | 33,500.13 | 4.1782 | 33,500.13
7 | 32,760 | 39,476 | 961.173 | 38,514.83 | 3.2897 | 12,838.278
8 | 41,600 | 49,647 | 858.898 | 48,788.10 | 3.2391 | 12,197.025
9 | 42,640 | 50,643 | 567.861 | 50,075.14 | 3.2669 | 12,518.785
10 | 101,400 | 119,851 | 652.748 | 119,198.25 | 3.2872 | 11,919.825
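The "actual annual rate of salary increase" column of Table 7 is Eq. (1) inverted over the reduced final salaries; an illustrative Python cross-check for group 1 (figures taken from Table 7):

```python
import math

# Group 1 of Table 7: salary grows from 260,000 at t_0 to the reduced
# final figure 319,131.81 at t_5; recover the implied annual rate.
initial = 260_000
realized = 319_131.81

rate = (math.exp(math.log(realized / initial) / 5) - 1) * 100
print(round(rate, 4))  # 4.1836, as stated in Table 7 for group 1
```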


increase for each group, the number of employees per group, together with the ranking scores, I computed the index presented in the 4th column of Table 6. Multiplying the elements of this column by 59,090.0625, we get the entries of the 5th column.

From the last column of Table 6, I computed the end-period salary of each group and the corresponding annual rate of increase, which are presented in Table 7.

From Table 7, we can see that the requirement that the annual rate of salary increase of each group exceed 3.1% is fully satisfied. In the opposite situation, however, one would have to reduce the value of a by, say, 0.05 and repeat all the calculations. It should also be noted that, with the allocation proposed so far, total initial salary spending will increase at an annual rate of 3.69% (from 807,300 to 967,629.9) and the annual wage per capita will increase by 19.86% (from 15,232.1 to 18,257.2). It is clear that all employees would be better off with a smaller value of a. The composite index and all other entries of Tables 6 and 7 can be computed with the following code segment, written in Visual Fortran.

MODULE data_and_params
! Groups       = number of groups
! Planning_hor = planning horizon (years)
! R            = initial rate of salary increase at each group (vector)
! Members      = number of employees at each group (vector)
! Salary_in    = initial employees' salary at each group (vector)
! Salary_f     = final claimed salary for each group (vector)
! Deficit      = the value of the deviational variable
! a            = value of coefficient a (0 < a < 1)

WRITE(out,*) ' Composite index and the final rate of salary increase'
WRITE(out,'(/)')
Deficit = Total_Deficit*a ; sortR = R ; Salary = Salary_f/Members
! Perform ranking operation
CALL SORTQQ(LOC(sortR), groups, SRT$REAL4)
DO i = 1, groups
   s = R(i)
   DO j = 1, groups
      IF (sortR(j) == s) scores(i) = j
   ENDDO
ENDDO
! Compute composite index
Rsmall = R/100. ; Y = Rsmall*Salary ; s = SUM(Y)
Employees = SUM(Members)
rate = (Y*100.)/s/100. ; X = rate*Deficit/Employees
s = SUM(X)
rate = (X*100.)/s/100. ; X = rate*Members
s = SUM(X)
rate = (X*100.)/s/100. ; X = rate*scores
s = SUM(X)
rate = (X*100.)/s/100. ; X = Deficit*rate
! Compute revised annual rates of salary increase
Y = Salary_f - X
Logvector = (LOG(Y/Salary_in))/Planning_hor
newrate = (EXP(Logvector) - 1.)*100.
WRITE(out,*) ' Index      Reduction   Final Salary   New rate'
DO i = 1, groups
   WRITE(out,'(1X,F9.6,2X,F10.3,4X,F10.2,6X,F6.4)') rate(i), X(i), Y(i), newrate(i)
ENDDO
END

7. Final Results

In the last column of Table 7, the revised coefficients c_i are stated. These new coefficients are to be considered in Eq. (2), and in the corresponding restrictions seen in Eq. (6), for the final run. Also, the objective in Eq. (13) has been slightly reformed as follows:

min z = (10P_1 d_1^- + 9P_1 d_2^- + 8P_1 d_3^- + 7P_1 d_4^- + 6P_1 d_5^- + 5P_1 d_6^-
        + 4P_1 d_7^- + 3P_1 d_8^- + 2P_1 d_9^- + P_1 d_10^-)
        + (P_2 d_37^- + P_2 d_38^- + P_2 d_39^- + 3P_2 d_42^- + 2P_2 d_44^+)
        + P_3 ∑_{i=22}^{31} d_i^- + P_4 ∑_{i=32}^{36} d_i^-
        + (P_5 d_40^- + P_5 d_41^-)
        + (P_6 d_21^- + 2P_6 d_11^- + 2P_6 d_12^- + 3P_6 d_13^- + 4P_6 d_14^- + 5P_6 d_15^-
        + 6P_6 d_16^- + 7P_6 d_17^- + 8P_6 d_18^- + 9P_6 d_19^- + 10P_6 d_20^-)
        + P_7 d_43^-.


The solution obtained with the restated restrictions satisfies all requirements. Now, total expenses sum up to 1,857,817.125. It has to be underlined, however, that since the value of the deviational variable d_43^- is 70,317.2, total gross profits undergo a reduction of about 9.38% (from 750,000 to 679,682.8), which is a bit higher than the corresponding percentage (∼6%) of total salary reduction. Nevertheless, the results provide a sound basis for an optimal solution, in the sense that it is acceptable to all parties involved without any violation of the existing rules.
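The two percentages just quoted can be cross-checked (an illustrative Python sketch; the employees' share 59,090.0625 assumes the agreed proportion a = 0.5, which is consistent with the halving of 118,180.125 and the multiplier quoted with Table 6):

```python
# Final-run sacrifices, computed from figures stated in the text:
# gross profit falls by d_43^- = 70,317.2 out of the 750,000 target, and
# the employees absorb 0.5 * 118,180.125 = 59,090.0625 of the reduction.
profit_cut = 70_317.2 / 750_000 * 100       # gross-profit reduction (%)
salary_cut = 59_090.0625 / 1_026_720 * 100  # cut in claimed total salaries (%)

print(round(profit_cut, 2))  # 9.38, the "about 9.38%" in the text
print(round(salary_cut, 1))  # 5.8, i.e. the "~6%" salary reduction
```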

8. Conclusions

Many clinics like the one presented here benefit the patients and the health service in a convenient way, and help attain consistency and continuity of treatment. These achievements are heavily dependent on successful budget planning. The characteristics of a model of this type for a clinic are based on many factors, such as the type of the clinic, its location, its size, the medical specialty, etc. Hence, it is difficult to design a general model that can be applied to all types of clinics. However, once a budget planning model has been developed, it can easily be modified at a later stage to fit many other types of clinics. With this in mind, I consider in this paper an orthopedic clinic and implement a model designed for a five-year planning period, using the goal programming approach to aggregate budget planning for this particular health care unit. To start with, the goals and the relevant priorities are set in advance, to formulate the corresponding goal programming model. The solution obtained is operational in the sense that, through the analysis of resource requirements, the acceptable charge per patient, the desired level of gross profits, the rate of salary increase claimed by the employees, and the trade-offs among the set of goals are achieved with a minimum sacrifice. The model shows that this sacrifice is negotiable between the different sides, so that the business manager can finally establish an aggregate budget plan with the best perspectives, since the realization of a fair wage policy can be obtained through the composite index introduced in this paper.

References

Charnes, A and W Cooper (1961). Management Models and Industrial Applications of Linear Programming (Vols. 1 and 2). New York: Wiley.

Everett, JE (2002). A decision support simulation model for the management of an elective surgery waiting system. Health Care Management Science, 5, 89–95.

Flessa, S (2000). Where efficiency saves lives: A linear programme for the optimal allocation of health care resources in developing countries. Health Care Management Science, 3, 249–267.

Gass, SI (1986). A process for determining priorities and weights for large scale linear goal programming. Journal of the Operational Research Society, 37, 779–785.

Gass, SI (1987). The setting of weights in linear goal programming. Computers and Operations Research, 14, 227–229.

Mouza, A-M (1996). An economic approach of the Greek health sector, based on domestic data: Modeling the hospital operation process. Ph.D. thesis, Aristotle University of Thessaloniki, Greece.

Mouza, A-M (2002). Estimation of the total number of hospital admissions and bed requirements for 2011: The case for Greece. Health Service Management Research, 15, 186–192.

Mouza, A-M (2003). IVR and administrative operations in healthcare and hospitals. Journal of Healthcare Information Management, 17(1), 68–71.

Mouza, A-M (2006). A note regarding the projections of some major health indicators. Quality Management in Health Care, 15(3), 184–199.

Ogulata, SN and R Erol (2003). A hierarchical multiple criteria mathematical programming approach for scheduling surgery operations in large hospitals. Journal of Medical Systems, 27(3), 259–270.




Chapter 5

Modeling National Economies




Topic 1<br />

Inflation Control in Central and Eastern<br />

European Countries<br />

Fabrizio Iacone and Renzo Orsi<br />

University of Bologna, Italy<br />

1. Introduction<br />

The recent accession to the European Union (EU) of 10 new members<br />

officially opened the agenda of their participation in the European Monetary<br />

Union (EMU) too.<br />

Although, the accession of the new European partners to the euro-area<br />

is formally analyzed, keeping the Maastricht criteria as the main reference,<br />

these do not necessarily reflect the effective integration and convergence of<br />

the economies. With this respect, we think it is important to study if and how<br />

the Eurosystem can successfully control inflation after the extension of the<br />

EMU: we then investigated the monetary policy transmission mechanism in<br />

the Central and Eastern European Countries (CEECs) to see how compatible<br />

it is with one of the current members of euro-area.<br />

Knowledge of the mechanism of transmission of monetary policy is<br />

also necessary to the CEECs to choose the monetary strategy for the period<br />

preceding the accession to the euro-area. The discussion is open, both in<br />

the literature and among monetary institutions, on whether a direct inflation<br />

targeting or some kind of exchange rate commitment should be preferred:<br />

by comparing the mechanism of transmission of policy impulses in different<br />

regimes, we can see if a certain strategy made inflation control easier or<br />

more difficult, and assess its cost by looking at the impact on the economic<br />

activity.<br />

We focused on Poland, Czech Republic, and Slovenia because they<br />

are very different for the initial conditions from which they started the<br />

transition to functioning market economies and later the integration in the<br />

EU, for the relative size and the degree of openness to the international<br />

trade, for the policies implemented in due course and even for their historical<br />


developments: we think that because of these differences they are, together,<br />

representative of the whole group of CEECs. For the euro-area, we took<br />

Germany as the reference country.<br />

2. Inflation Control over 1989–2004<br />

2.1. Transition, Integration, and Accession<br />

Although Poland, Czech Republic, and Slovenia shared the goals of implementing<br />

a functioning market economy and of acceding to the EU, the<br />

policies they implemented to reach them were rather different.<br />

The first reason for the difference is in the productive structure at the<br />

beginning of the transition. Yugoslavia established a quasi-market system<br />

well before 1989, with quite independent firms and relatively hard budget<br />

constraints; production was much more centralized and budget constraints<br />

much softer in Poland and Czech Republic, and, to make things tougher for<br />

these two countries, they also had to suffer the break-up of the council for<br />

mutual economic assistance (CMEA), in which they were well integrated<br />

(Czech Republic also had to endure the additional trade disruption, caused<br />

by the separation from Slovakia). Poland and Czech Republic represent<br />

an average position with respect to the general condition of the productive<br />

structure of the CEECs at the beginning of the transition. Local differences<br />

remain: for example, Hungary implemented some timid reforms before<br />

1989, while the three Baltic countries experienced a much bigger shock,<br />

because the disintegration of the Soviet Union resulted in the loss of their<br />

largest trade partner. The stronger centralization, the softer budget constraint,<br />

and the higher disruption caused by the break-up of the CMEA<br />
and of Czechoslovakia could have, ceteris paribus, caused a more turbulent<br />

and longer transition for Poland and Czech Republic, but in reality Slovenia<br />

had to face the major shock due to the collapse of Yugoslavia and of the<br />

subsequent wars, which deprived the country of its biggest market and had<br />

other adverse effects including the loss of revenues and hard currency due<br />

to the fall of tourism.<br />

A more precise ranking emerges when the size and the openness to<br />

international trade is considered, with Poland being the largest country and<br />

Slovenia the smallest one. As far as the monetary policies implemented for<br />
inflation stabilization are concerned, all the CEECs except Slovenia began their transition<br />

with a strong commitment to a fixed exchange rate. The initial price<br />

liberalization, in fact, left the system without a nominal anchor, and, also



because of a natural price rigidity with respect to negative corrections, the<br />

re-alignment of relative prices took the form of a sudden increase and a steep<br />

inflation followed: by opening to the international trade (and by introducing<br />

the proper legislation, in order to force the local producers to face a harder<br />

budget constraint), the countries in transition had a chance to import a price<br />

structure similar to the one of their commercial partners. In fact, being the<br />

exchange rate fixed, the domestic producers could not raise their prices too<br />

much without losing competitiveness with respect to the foreign ones.<br />

Slovenia followed a different approach to disinflation, targeting the<br />

growth of a relatively narrow monetary aggregate (M1); according to Capriolo<br />

and Lavrac (2003), anyway the key characteristic of the period was the<br />

continuous intervention on the foreign exchange market in order to prevent<br />

an excessive real rate appreciation (contrary to the strategy of the Bundesbank<br />

and of the Eurosystem, the control of M1 was also enforced with nonmarket<br />

instruments and procedures). Since mainstream optimal currency<br />

area literature prescribes a greater incentive to adopt a stable exchange rate<br />

to the countries more open to the international trade, the fact that the outcome<br />

is actually reversed means that price stabilization rather than trade<br />
was the priority that the exchange rate commitment served in Poland and in<br />

Czechoslovakia.<br />

Fixing the exchange rate may have contributed to price stabilization,<br />

and inflation actually dropped very quickly, but certain differences<br />
remained with respect to the EU, leading to a strong real exchange rate<br />

appreciation. This may have been partially due to the Balassa-Samuelson<br />

effect: assuming that the marginal productivity and the wage in the sector<br />

of internationally traded goods are set on the foreign market and that the<br />

same wage is transferred to the production of the non-traded goods, then<br />

the productivity gap between the two sectors is compensated by a price<br />

increase in the less productive domestic sector. This yields a progressive<br />

appreciation of the real exchange rate and higher domestic inflation.<br />

The faster growth of productivity, determined by the transfer of<br />

technology from the international trade partners and the more efficient allocation<br />

of the resources within the economies, compensated part of the pressure<br />

on the domestic producers, but the fixed exchange rate arrangements<br />

were not sustainable for a long time: Poland switched to a regime of preannounced<br />

crawling peg as early as 1991, later coupling it with an oscillation<br />

band which was gradually widened until it was finally abandoned in 2000;<br />

Czech Republic resorted to a free float without explicit commitment as<br />

early as 1997 under the pressure of a speculative attack, its currency having<br />

become progressively overvalued in real terms over time. Both countries<br />



replaced the monetary anchor by introducing a direct inflation target (Czech<br />

Republic starting in 1998, Poland in 1999). The progressive weakening of<br />

the exchange rate commitment took place in Hungary and Slovakia too, but<br />

in these last two countries, the switch of targets was not complete: Hungary<br />

retained a fixed exchange rate commitment, albeit with a very wide band,<br />

while Slovakia did not implement an explicit inflation targeting because of<br />

the large role that administered prices still had in consumer prices. Anyway,<br />
this was not a unanimous attitude: the three Baltic States kept a fixed<br />

exchange rate with an extremely tight band, apparently trying to exploit the<br />

credibility acquired with it during the first phase of the transition, and Estonia<br />

even withstood a moderate speculative attack rather than devaluing.<br />

Once again, Slovenia went in the opposite direction: according to Capriolo<br />

and Lavrac, a more aggressive exchange rate strategy has been pursued<br />

since 2001, with an active, albeit informal, crawling peg.<br />

2.2. Exchange Rate and Direct Inflation Targeting<br />

The fixed exchange rate played a key role in the stabilization phase; nowadays,<br />

it is still the reference of monetary policy in the Baltic States, and it is<br />

also present in Hungary. Since the exchange rate commitment is also part of<br />
the Maastricht criteria, where two years of participation in the exchange rate<br />
mechanism (ERM) 2 is required, the decision of Poland, Czech Republic,<br />
and Slovakia to completely abandon it may indeed seem to be a paradox.<br />

Surely, a small country cannot ignore the effect of the exchange rate<br />

fluctuations on the stability of financial institutions and the economy as<br />

a whole, but a commitment to a fixed parity is a much stronger policy.<br />

Whether macroeconomic stability and inflation control are helped or<br />

hindered by this depends on the credibility of the commitment itself.<br />

There can be little doubt that an exchange rate commitment is a risky<br />

policy: the integration of the financial markets allows the mobilization of<br />

large amounts of funds, and no central bank can muster enough foreign<br />

exchange reserves to resist a sustained attack, nor indeed may it wish to do<br />
so, because the mere defence of the currency could require a substantial rise<br />
in the domestic interest rates, which may be even more detrimental to the<br />
economy than the devaluation that it was trying to prevent. Against an<br />

exchange rate commitment stands the evidence of speculative attacks in<br />
the past: consider, for example, the break-up of the ERM 1, when even<br />

the French currency suffered some pressure, although the fiscal, financial,<br />

and macroeconomic situation of the country was not worse than it was<br />

in Germany. The same lesson can indeed be derived by looking at the



Hungarian experience: when the two objectives were in conflict, as in June<br />

2003, one had to be abandoned, and a devaluation took place.<br />

Since the accession to the EU also required the complete adoption of<br />

the acquis communautaire, thereby including the elimination of capital<br />

controls, the apparent paradox of several CEECs switching to a free float<br />

and inflation targeting, for the period preceding the adoption of the euro,<br />

is then a strategy that may protect their currencies from unrealistic parities<br />

and speculative attacks.<br />

On the other hand, a direct inflation targeting can only be successful if<br />

the monetary authority has earned itself a solid reputation, and this primarily<br />

requires independence from any political interference. Since the CEECs<br />

only acquired a formal central bank independence in the last few years, as a<br />

part of the process of adoption of the acquis communautaire, their credibility<br />

may still be weak. Central bank independence is even more a concern,<br />

when the definition is extended beyond the mere legal requirement: in most<br />

of the cases for example, the credibility was seriously weakened because<br />

the fiscal authority had the opportunity to partially finance its expenditures<br />

through the central bank; political pressures, as experienced in the Czech<br />

Republic and in Poland, may undermine the credibility of the central bank<br />

even when they are resisted, because they may erode the consensus in<br />

the country. Several indices of central bank independence were proposed<br />

in the literature: according to Maliszewski (2000), and to the extensive<br />

survey in Dvorsky (2000), the CEECs were lagging behind with respect<br />

to their Western counterparts, Poland and the Czech Republic faring better<br />

than Slovenia. Indeed, it is exactly because the credibility of the monetary<br />

authorities is lower in the CEECs that Amato and Gerlach (2002) proposed<br />

to supplement the inflation targeting with a mild monetary commitment,<br />
as this would increase the information in the market.<br />

It is then important to ascertain if the exchange rate commitment is a<br />

risk worth taking, not only for the countries on the way to EMU that are still<br />

more than two years away from the formal discussion of their convergence<br />

according to the Maastricht criteria, but also for other emerging economies<br />

in the world.<br />

2.3. A Summary of the Results of Previous Analyses<br />

Several empirical analyses of inflation control in CEECs appeared in<br />
recent years. Iacone and Orsi (2004) surveyed many applied works: these<br />
varied in the country analyzed, in the period considered and in<br />
the methodology adopted, making comparisons rather difficult.<br />



Interpretation of the results was often dubious because either the variables<br />

of interest were replaced by first or second differences, or the intermediate<br />

steps linking the monetary instrument to the target were omitted,<br />

thus obscuring any potential evidence about the channel through which<br />

inflation may be controlled. Reliability was also hampered by the fact that<br />

in many cases no stability analysis was provided, despite the potentially<br />

disruptive effects of the transition on the estimates, nor was any attempt<br />

made to model the slow formation of a market economy despite this being<br />

the main feature of the period.<br />

It seems nevertheless fair to conclude that the exchange rate is very<br />

important for the stabilization of inflation, an appreciation of the real<br />

exchange rate reducing the growth rate of prices. The empirical evidence<br />

supporting the existence of a standard interest rate channel was much more<br />

controversial, the results depending largely on the econometric approach<br />

and model adopted by the researcher and on the sampling period considered.<br />

2.4. A Structural Model for Monetary Policy<br />

It is possible that the weak support provided by these analyses is at least<br />

partially due to the methodology adopted: VAR models have the advantage<br />

of requiring a minimal structural specification, but, in return, they need<br />

the estimation of many parameters. Considering the short sample<br />
size and the potentially extreme instability of the parameters due to<br />

the transition, large standard errors and non-significant estimates seem to<br />

be unavoidable consequences.<br />

A structural model can provide a more parsimonious approach: we<br />

followed Rudebusch and Svensson (1999) and considered the model<br />

π_t = α_0 + α_{π1} π_{t−1} + α_{π2} π_{t−2} + α_{π3} π_{t−3} + α_{π4} π_{t−4} + α_y y_{t−1} + ε_{π,t}   (PC)<br />
y_t = β_0 + β_y y_{t−1} − β_r (ī_{t−1} − π̄_{t−1}) + ε_{y,t}   (AD)<br />

where y_t is a measure of the economic activity, π_t is the annualized inflation<br />
within the quarter, i_t is a short-term interest rate, π̄_t = (1/4) Σ_{j=0}^{3} π_{t−j} is the<br />
average inflation rate in the last four quarters (i.e., the inflation over the<br />
last year), and ī_t = (1/4) Σ_{j=0}^{3} i_{t−j} is the average interest rate over the same<br />
period.<br />

The equations are referred to as Phillips Curve (PC) and aggregate<br />

demand (AD), respectively because the first one can be given a structural<br />

interpretation as a PC while the second one may describe the transmission<br />

of monetary policy on the economic activity quite like in an AD function.



With this specification, agents extrapolate from the past inflation to formulate<br />

the current period price level. The mechanism of transmission of<br />

monetary policy to inflation is indirect: an expansionary monetary policy<br />

takes place as a decrease in the real rate ī_{t−1} − π̄_{t−1}, which boosts the<br />
economic activity in the AD equation; in a second moment, the pressure<br />

on the demand enhances inflation, as illustrated in the PC equation.<br />

The instrument and the mechanism of transmission of monetary policy are<br />

then consistent with the operating procedures of both the FED and of the<br />

Bundesbank, and later of the ECB.<br />
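As an illustration only, the chain just described — a rate rise lowers the output gap through the AD equation, which in turn lowers inflation through the PC — can be simulated. All coefficient values below are invented for this sketch; they are not estimates from this chapter.<br />

```python
import numpy as np

# Illustrative simulation of the PC/AD transmission mechanism.
# All coefficients are invented for this sketch, not estimates from the chapter.
a_pi = [0.4, 0.3, 0.2, 0.1]   # alpha_{pi1..pi4}, summing to 1 (vertical long-run PC)
a_y, b_y, b_r = 0.15, 0.8, 0.3

T = 40
pi, y, i = np.zeros(T), np.zeros(T), np.zeros(T)
i[4] = 1.0                     # one-off one-point interest-rate shock in quarter 4

for t in range(5, T):
    pi_bar = pi[t-4:t].mean()  # average inflation over the last four quarters
    i_bar = i[t-4:t].mean()    # average interest rate over the same period
    y[t] = b_y * y[t-1] - b_r * (i_bar - pi_bar)                       # AD
    pi[t] = sum(a_pi[j] * pi[t-1-j] for j in range(4)) + a_y * y[t-1]  # PC

# The rate hike first depresses the output gap, then lowers inflation with a lag.
print(round(y[6], 4), round(pi[8], 4))
```

The indirectness of the mechanism is visible in the timing: the output gap turns negative before inflation does, consistent with the two-step transmission described in the text.<br />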

Rudebusch and Svensson (1999) allowed for an additional term β_{y2} y_{t−2}<br />
in the AD equation, but we found that, possibly also due to the smaller<br />
dimension of our samples and the high collinearity between y_{t−1} and y_{t−2}, the contribution<br />
of the two different effects in general could not be distinguished.<br />

We, then considered the proposed AD equation sufficient, also because<br />

one lag proved to be enough to have no serial correlation in the residuals.<br />

Rudebusch and Svensson also specified the model without intercepts α_0, β_0,<br />
having formulated it for mean-corrected data, but we preferred to allow for<br />
the constants and test for their possible exclusion explicitly.<br />

Rudebusch and Svensson (2002) remarked that Σ_{j=1}^{4} α_{πj} = 1 can be<br />
expected as it corresponds to a vertical PC in the long run. They also<br />

explicitly discussed the fact that the backward looking formulation of<br />

the PC and AD equations imply adaptive rather than rational expectations,<br />

but they argued that this may well be the case especially in a phase<br />

of monetary and inflationary instability; as an alternative, they suggested<br />

that the given specification may simply be considered as reduced form<br />

equations.<br />

Notice that even if the constraint Σ_{j=1}^{4} α_{πj} = 1 is applied, the model<br />
still does not necessarily impose a unit root on inflation, because the dynamics<br />
of y_t has to be taken into account too: consider for example α_{π1} = 1,<br />
α_{π2} = α_{π3} = α_{π4} = 0, β_y = 0, and the monetary policy rule ī_t = π̄_t + π_t;<br />
substituting the AD equation into the PC then gives π_t = (α_0 + α_y β_0) + π_{t−1} − α_y β_r π_{t−2} + α_y ε_{y,t−1} + ε_{π,t}, thus inducing a stationary<br />
behavior for 0 < α_y β_r < 1. It is indeed precisely by steering the output gap using the interest<br />
rate that the stabilization of inflation is ultimately possible.<br />
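The stationarity claim in this example can be checked numerically. Under the reconstruction sketched here (with α_{π1} = 1, the other α's zero, and β_y = 0), substituting the AD equation into the PC leaves an AR(2) for inflation with lag polynomial 1 − z + c z², where c = α_y β_r; the c values below are arbitrary illustrations, not estimates.<br />

```python
import numpy as np

# Stationarity check for the example above (a reconstruction, illustrative values).
# With alpha_{pi1} = 1 (the other alphas 0) and beta_y = 0, substituting AD into
# the PC leaves an AR(2) with lag polynomial 1 - z + c z^2, where c = alpha_y*beta_r.
for c in (0.05, 0.2, 0.5):
    roots = np.roots([c, -1.0, 1.0])        # numpy lists coefficients from z^2 down
    stationary = np.min(np.abs(roots)) > 1  # stationary <=> all roots outside unit circle
    print(c, stationary)
```

For any c in this range both roots lie outside the unit circle, so the policy rule turns a would-be unit-root inflation process into a stationary one.<br />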

This model may well be applied to the transition economies too, especially<br />

considering that the accession of these countries to the EU requires<br />

them to adopt a monetary policy setting that is institutionally compatible<br />

with the one of the ECB. In the case of a small open economy, though,<br />

two other factors should be taken into account: the demand of domestic<br />

products generated by the trade partners and the effect of the international<br />

competition on local prices.



Following Svensson (2000), we then augmented the model by introducing<br />

the level of the economic activity in the international partners w_t and the<br />
percent real exchange rate depreciation (q_t − q_{t−1}), where q_t = p*_t + e_t − p_t<br />
is the logarithm of the real exchange rate and p_t, p*_t and e_t are the logarithms<br />
of the local and foreign level of prices and of the nominal exchange<br />

rate, respectively. Both the variables are added to the AD equation, and<br />

the real exchange rate is added to the PC equation as well. For the AD,<br />

the assumption is that phases of high economic activity abroad result in<br />

higher demand of locally produced goods, while a too high level of prices<br />

(in real terms) causes a shift of the domestic and foreign demand towards<br />

the external producers. The real exchange rate entered the PC equation,<br />

because the international competition imposes some price discipline to the<br />

local producers: for given foreign prices, a nominal exchange rate appreciation<br />

makes foreign goods cheaper in local currency, thus, forcing the<br />

domestic producers to lower their prices to keep up with the external competitors,<br />

while for a given nominal exchange rate higher foreign prices allow<br />
the domestic producers to set higher prices and stay in the market; we also<br />

refer to Svensson (2000), where he explains how an increase in foreign<br />

prices in local currency, either due to the move of the foreign price itself or<br />

to the exchange rate, results in higher cost of inputs and then feeds back in<br />

higher prices of output.<br />
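The construction of q_t and of the yearly depreciation term used below can be sketched directly from the definitions; the three series here are invented placeholders, not the authors' data.<br />

```python
import numpy as np
import pandas as pd

# Illustrative computation of q_t = p*_t + e_t - p_t (in logs) and of the yearly
# depreciation q_{t-1} - q_{t-5}.  All three series are invented placeholders.
idx = pd.period_range("1993Q1", periods=12, freq="Q")
cpi_home = pd.Series(np.linspace(100, 160, 12), index=idx)     # local CPI
cpi_foreign = pd.Series(np.linspace(100, 112, 12), index=idx)  # foreign (German) CPI
e_nom = pd.Series(np.linspace(2.0, 2.2, 12), index=idx)        # local currency per DEM

q = np.log(cpi_foreign) + np.log(e_nom) - np.log(cpi_home)     # log real exchange rate
dq_year = (q - q.shift(4)).shift(1)                            # q_{t-1} - q_{t-5}
print(dq_year.dropna().round(4).head())
```

With faster local inflation, as in this placeholder data, dq_year is negative throughout, i.e., a steady real appreciation — the pattern observed for most CEECs.<br />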

Maintaining the autoregressive structure of Rudebusch and Svensson,<br />
we describe the economy with the two-equation model<br />

π_t = α_0 + α_{π1} π_{t−1} + α_{π2} π_{t−2} + α_{π3} π_{t−3} + α_{π4} π_{t−4} + α_y y_{t−1} + α_q (q_{t−1} − q_{t−5}) + ε_{π,t}   (PC)<br />
y_t = β_0 + β_y y_{t−1} + β_w w_{t−1} + β_q (q_{t−1} − q_{t−5}) + ε_{y,t}   (AD)<br />

where for the exchange rate we considered as relevant the depreciation<br />
over the whole year.<br />

In the original model, Svensson considered the real exchange rate rather<br />

than the percent depreciation: in our model specification, we preferred the<br />

latter because, as we saw, the real exchange rates of most of the CEECs<br />

appreciated since the beginning of the transition without resulting in a<br />

dramatic decline of the economic activity.<br />

Moreover, we kept the AD/PC structure as a general reference, but<br />

given that the specification is somewhat arbitrary, we tried to adapt it to<br />

the economies considered. Our sample starts from 1990, and to avoid the<br />

risk of model instability in the very first part of the sample, the data were<br />

used only to compute the output gap or as lags. In some cases, we did



not find adequate data so the sample period was even shorter. Data are<br />

monthly, but in order to reduce the volatility, we used quarterly averages,<br />

adjusting the frequency properly. We took the CPI to compute inflation and<br />

the real exchange rate, and the seasonally adjusted industrial production<br />

to derive a measure of economic activity (when only the nonseasonally<br />

adjusted series was available, we ran a preliminary step using the X-12<br />

filter in EViews to remove seasonality); for the nominal interest rate we<br />

used a three-months inter-bank rate or the Treasury bills rate of the same<br />

maturity. More details on the data used are at the end of this paper. We<br />

always considered Germany as the international reference, even if in the<br />

first part of the sample period some countries pegged the exchange rate to<br />

the US$: the EU in fact, constitutes by far the biggest trade partner for the<br />

CEECs.<br />
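The data preparation just described — averaging monthly CPI to quarterly frequency and measuring inflation as the annualized within-quarter growth rate — can be sketched as follows; the CPI path is simulated, not the actual data.<br />

```python
import numpy as np
import pandas as pd

# Sketch of the data preparation: monthly CPI averaged to quarterly frequency,
# inflation measured as the annualized growth rate of prices within the quarter.
# The CPI path is simulated, not the actual data.
rng = np.random.default_rng(0)
months = pd.period_range("1993-01", periods=48, freq="M")
cpi_m = pd.Series(100 * np.cumprod(1 + rng.uniform(0.001, 0.02, 48)), index=months)

cpi_q = cpi_m.groupby(cpi_m.index.asfreq("Q")).mean()  # quarterly averages
pi_q = 400 * np.log(cpi_q / cpi_q.shift(1))            # annualized quarterly inflation, percent
pi_bar = pi_q.rolling(4).mean()                        # average inflation over the last year
print(pi_q.dropna().round(2).head())
```

The same rolling-average step produces the π̄ and ī regressors used in the AD equation.<br />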

There is a strong seasonal component in the quarterly inflation, which<br />

is likely to shift the weights towards the fourth lag in the PC equation;<br />

the choice of the year appreciation of the real exchange rate was then also<br />

convenient to remove the seasonal effect.<br />

As a measure of the economic activity we took the output gap, defined<br />
as the difference between the observed and the potential output computed<br />
using the Hodrick–Prescott (HP) filter on the quarterly average of the production<br />

index. Even if the HP gap is usually considered an inexact measure<br />

of the economic cycle, due to its purely numerical definition, we still think<br />

that it is at least informative of the effective output gap. Plots of the inflation<br />

in the four countries are in Fig. 1 (figures are all collected at the end of<br />
the work).<br />

Figure 1. Yearly inflation, Germany, Poland, Czech Republic, and Slovenia.<br />

The inflation used for this graph was defined as the growth rate<br />

of prices over the previous year: we present it, although we used a different<br />
measure in our empirical study, because it is very common in the descriptive<br />
analyses in the literature. Clearly, though, averaging the inflation in this way<br />

induces spurious autocorrelation in the data, so in the empirical analysis we<br />

measure inflation as the annualized growth rate of prices over each period.<br />
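The output-gap construction described above can be sketched with the Hodrick–Prescott filter from statsmodels. The production series is simulated, and the smoothing parameter lamb=1600 is the conventional quarterly value, assumed here since the chapter does not state which value was used.<br />

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Sketch of the output-gap construction: the gap is the deviation of (log)
# industrial production from its Hodrick-Prescott trend.  The production series
# is simulated; lamb=1600 is the conventional quarterly value (assumed here).
rng = np.random.default_rng(1)
quarters = pd.period_range("1993Q1", periods=44, freq="Q")
log_ip = pd.Series(np.linspace(4.6, 5.0, 44) + 0.02 * rng.standard_normal(44),
                   index=quarters)

gap, trend = hpfilter(log_ip, lamb=1600)  # returns (cycle, trend)
print(gap.round(4).head())
```

Working in logs makes the cycle component directly interpretable as a percentage deviation from potential output.<br />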

We plotted this and the output gap for Germany in Fig. 2; in Figs. 3–5,<br />

we plot the same data and the yearly real exchange rate depreciation for<br />

Poland, Czech Republic, and Slovenia, respectively. From these figures, it<br />
is already possible to notice that, while for Germany the dynamics of inflation<br />
is dictated by the output gap, in the three new EU members inflation<br />
is largely dictated by the real exchange rate dynamics.<br />

Figure 2. Quarterly inflation and output gap, Germany.<br />
Figure 3. Quarterly inflation, output gap, real exchange rate depreciation, Poland.<br />
Figure 4. Quarterly inflation, output gap, real exchange rate depreciation, Czech Republic.<br />
Figure 5. Quarterly inflation, output gap, real exchange rate depreciation, Slovenia.<br />

Estimation was carried out using OLS equation by equation, as in Rudebusch<br />
and Svensson (1999). We did not introduce any other modification to<br />
the AD equation, but we re-wrote the PC as<br />

Δπ_t = α_0 + ρ_0 π_{t−1} + ρ_1 Δπ_{t−1} + ρ_2 Δπ_{t−2} + ρ_3 Δπ_{t−3} + α_y y_{t−1} + α_q (q_{t−1} − q_{t−5}) + ε_{π,t}   (PC)<br />

(the term q t−1 − q t−5 does not appear in the equation for Germany). This<br />

is a re-parametrization of the original equation, with<br />

ρ_0 = (α_{π1} + α_{π2} + α_{π3} + α_{π4} − 1),<br />
ρ_1 = −(α_{π2} + α_{π3} + α_{π4}),   ρ_2 = −(α_{π3} + α_{π4}),   ρ_3 = −α_{π4}<br />

Being simply a re-parametrization, the residual sum of squares that<br />

has to be minimized is the same, so the OLS estimates are no different<br />
from the estimates that one would obtain in the original model in levels.<br />

The advantage of this specification is that long-run and short-run dynamics are<br />
explicitly separated, Σ_{j=1}^{4} α_{πj} = 1 corresponding to ρ_0 = 0, and that,<br />
more importantly, the multi-collinearity due to the terms π_{t−1} to π_{t−4} is<br />
much less severe.<br />
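That the two forms are numerically equivalent is easy to verify: the two regressor sets span the same column space, so OLS fitted values and residuals coincide. A sketch on arbitrary simulated data:<br />

```python
import numpy as np

# Numerical check that the re-written PC is only a re-parametrization: the
# regressor sets {pi_{t-1},...,pi_{t-4}} and {pi_{t-1}, dpi_{t-1},...,dpi_{t-3}}
# span the same column space, so OLS fitted values and residuals coincide.
# The series are arbitrary simulated data.
rng = np.random.default_rng(2)
T = 120
pi = rng.standard_normal(T).cumsum()   # a persistent "inflation" stand-in
y = rng.standard_normal(T)

Y = pi[4:]
ones = np.ones(T - 4)
levels = np.column_stack([ones, pi[3:-1], pi[2:-2], pi[1:-3], pi[:-4], y[3:-1]])
dpi = np.diff(pi)                      # dpi[k] = pi[k+1] - pi[k]
repar = np.column_stack([ones, pi[3:-1], dpi[2:-1], dpi[1:-2], dpi[:-3], y[3:-1]])

r_levels = Y - levels @ np.linalg.lstsq(levels, Y, rcond=None)[0]
r_repar = Y - repar @ np.linalg.lstsq(repar, Y, rcond=None)[0]
print(np.allclose(r_levels, r_repar))
```

The gain from the re-parametrization is purely in conditioning: the differenced regressors are far less collinear than four consecutive lags of a persistent series.<br />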

Correct specification was primarily checked by looking at the tests for<br />

normality (Jarque and Bera), autocorrelation (up to the fourth lag:<br />
Breusch–Godfrey LM) and heteroskedasticity (White, without cross terms) in the<br />

residuals: we referred to these tests using N for the normality, AC for the<br />

autocorrelation, and H for the heteroskedasticity. If the model appeared to<br />

be correctly specified, we then analyzed the stability of the parameters over<br />

time. If we did not reject the hypothesis of stability either, we moved to<br />

test specific restrictions on the parameters, and to estimate a more efficient<br />

version of the model. In all the tests, we used the conventional 5% size. In<br />

the tables in the Appendix, we presented both the P-values associated with<br />
the tests and a summary of the equation estimated when the restriction was<br />

imposed.<br />

We considered the normality test first, because we used its result to<br />

choose between the small or large sample version of the other tests. Since<br />

the small and large sample statistics are asymptotically equivalent in large<br />

samples, the asymptotic interpretation of the results is not affected. Admittedly,<br />

due to the asymptotic nature of the Jarque and Bera test, and to the<br />

small sample lower order bias in the estimation of autoregressive coefficients,<br />

the reader may be cautious in the interpretation of the results, especially,<br />

when only portions of the dataset are used to estimate parameters<br />

(like in the Chow stability test or in the estimates on sub-samples): notice<br />

anyway that this is a problem that depends on the dimension of the sample;<br />

so indeed the same caveat applies to all the empirical analyses present in<br />



the literature. The reader may then want to consider the estimates and the<br />

statistics computed on the estimated parameters more as general indications<br />

in small samples (especially, if the sub-sample 1998–2004 only is used),<br />

but we think that despite these drawbacks, the estimates and the tests do<br />

still provide some information on the structure that we intend to analyze,<br />

albeit an approximate one.<br />

Autocorrelation in the residuals is a serious mis-specification since it<br />

results in inconsistent estimates given that both the equations include a<br />

lagged dependent variable. Heteroskedasticity is a minor problem, because<br />

estimation is simply inefficient (compared to the GLS estimator) but not<br />

inconsistent. Yet, since the standard errors indicated by the regression are<br />

not correct, evidence of heteroskedasticity requires at least a different estimation<br />

of the variance–covariance matrix of the estimates.<br />

Due to the peculiarity of the transition, we think it is particularly important<br />

to control the parameter stability over time, and possibly to model it<br />

when apparent. We used the CUSUM squared test as a preliminary<br />
diagnostic, but also considered 1998 as a particular breakpoint, because of the<br />
opening of the negotiations to access the EU. Clearly, this is just a broad<br />

indication: one could also argue that by the time the negotiations opened,<br />
the transition to a market economy was already nearly complete, and the<br />

issue was convergence to the EU standards. Yet, that period may still be<br />

informative about a potential break even if it was not located exactly there.<br />

We also considered country-specific breakpoints in the PC equations to<br />

analyze the effect on the inflation dynamics of monetary regime changes:<br />

for Germany, we allowed a break in 1999Q1, when the control of monetary<br />

policy was transferred from the Bundesbank to the ECB, because it may<br />

have been perceived as a weakening of the anti-inflationary stance, inducing<br />

higher inflationary expectations; for Poland, we considered a break in<br />

2000Q2, when the exchange rate commitment was formally abandoned;<br />

for Slovenia, the change is in 2001Q1, when the approach towards the real<br />

exchange rate became effective. We tested the presence of the break with a<br />

standard Chow test, unless the sample after the break was very small; in that<br />
case we used the Forecast test, which is designed precisely to test for breaks<br />
when only few observations are located after the break.<br />
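The standard Chow test just mentioned can be sketched as an F test comparing the pooled residual sum of squares with the sum of the two sub-sample RSS; the data here are simulated with a deliberate break, so the statistic should be large.<br />

```python
import numpy as np

# Sketch of a standard Chow break test: compare the residual sum of squares of
# the pooled regression with the sum from the two sub-samples.  Simulated data
# with a deliberate break at t = 40, so the F statistic should be large.
rng = np.random.default_rng(4)
T, k, brk = 80, 2, 40
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
beta1, beta2 = np.array([0.0, 1.0]), np.array([1.5, 0.2])
yv = np.where(np.arange(T) < brk, X @ beta1, X @ beta2) + 0.5 * rng.standard_normal(T)

def rss(Xm, ym):
    b = np.linalg.lstsq(Xm, ym, rcond=None)[0]
    e = ym - Xm @ b
    return e @ e

rss_pool = rss(X, yv)
rss_split = rss(X[:brk], yv[:brk]) + rss(X[brk:], yv[brk:])
F = ((rss_pool - rss_split) / k) / (rss_split / (T - 2 * k))
print(round(F, 2))  # compare with the F(k, T - 2k) critical value
```

The Forecast variant used when the post-break sample is short follows the same logic, but treats the later observations as out-of-sample predictions rather than re-estimating on them.<br />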

If the model was correctly specified and stable, we improved the efficiency<br />

of the estimates by removing the variables whose coefficients were<br />

estimated with a sign opposite to the one prescribed in the economic theory<br />

(we assumed that α_y, α_q, β_y, β_w, β_q, β_r are not negative), and by imposing<br />
other restrictions (including Σ_{j=1}^{4} α_{πj} = 1): we tested these with a Wald<br />

statistic, and referred to these tests as W in the tables.



Of course, given the small dimension of the samples and the potential<br />

instability of the models due to the transition, a certain caution should still<br />

be exerted in the interpretation of the results. But, we think that even in<br />

this case our analysis is at least able to get a general, broad picture of the<br />

situation: the lack of precision that may be attributed to the small dimension<br />

of the sample, for example, is already taken into account in the standard<br />

error. If, under normality, a test statistic rejects the null hypothesis despite<br />

the large variability due to the small sample, then we would still regard the<br />

estimate as informative, although perhaps only about its sign.<br />

3. The Empirical Evidence<br />

3.1. Germany<br />

As a benchmark case, we first estimated the model for Germany over the<br />

period 1991Q1–2004Q1. The rationale for this sample is two-fold: first, only<br />

data after the German re-unification were considered, leaving many of the<br />

problems related to that particular transition out of the discussion; second,<br />

this time span is roughly the same one covered by the data of the CEECs in<br />

most of the applied analyses, making the results comparable in this sense.<br />

We called this a “benchmark” model because it is widely assumed that this<br />

transmission mechanism for monetary policy was at work in Germany in<br />

the years we are interested in. Furthermore, Germany is geographically<br />

and economically close to the CEECs, accounting for a large share of their<br />

international trade. Finally, it is an interesting reference because Germany<br />

is the biggest country in the euro-area: if the monetary transmission in the<br />

CEECs works in a very different way, their integration in the euro may<br />

adversely affect the dynamics of inflation.<br />

The estimated model for Germany was

    ŷ_t = 0.011 + 0.769 y_{t−1} − 0.0038 r_{t−1}
         (0.005)  (0.084)        (0.00163)

    π̂_t = −1.02 π_{t−1} − 0.86 π_{t−2} − 0.71 π_{t−3} + 17.62 y_{t−2}
          (0.10)          (0.12)         (0.10)          (12.18)

over the period 1991Q2–2004Q1 for the AD, and 1993Q1–2004Q1 for the PC.

According to the diagnostic tests, the AD equation was correctly specified and stable over the whole sample; the CUSUMsq statistic hit the boundary in the period 1993–1996, but we found no evidence of a break with the Chow test (further details about all the tests and a summary of the estimates are in the tables at the end).
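The Chow stability diagnostic used here is standard: it compares the pooled residual sum of squares with the sum over the two sub-samples around a candidate break date. A hedged sketch on simulated data (all names and numbers are illustrative assumptions, not the chapter's series):

```python
import numpy as np

# Minimal Chow test for a structural break at a known date.
rng = np.random.default_rng(1)
n, k, split = 120, 2, 60
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([0.5, 0.8]) + 0.2 * rng.standard_normal(n)  # stable DGP: no break

def rss(Xs, ys):
    """Residual sum of squares of an OLS fit."""
    b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    e = ys - Xs @ b
    return float(e @ e)

rss_pooled = rss(X, y)                                        # one regression
rss_split = rss(X[:split], y[:split]) + rss(X[split:], y[split:])  # two regimes
F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
# Under the null of no break, F follows F(k, n - 2k); large values signal a break.
```

Because the simulated process is stable, the statistic stays well below conventional critical values, which is the pattern reported for the German AD equation.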

The inflation dynamics, on the other hand, was not stable when the whole sample was taken into account. The instability, though, was clearly restricted to the first part of the sample, and could indeed be removed by dropping the years 1991 and 1992. It is interesting to observe that this is exactly the period of recovery from the inflationary shock caused by German re-unification, a phenomenon that was usually regarded as temporary and exceptional. Diagnostic tests confirmed that the equation estimated using data from 1993 onwards was correctly specified and stable; note, however, that the distribution of the residuals did not appear to be normal, so the interpretation of the other tests can only be justified asymptotically. Point estimates of ρ_1, ρ_2, and ρ_3 were all very close to −1, suggesting a very high value of α_{π4}, or a strong seasonal component in inflation; we also found that lagged values of y_t had a more significant effect, albeit even in this case the P-value of the test H_0: {α_y = 0} vs. H_1: {α_y > 0} was between 5% and 10%.
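A one-sided t test of this kind can be sketched with the asymptotic normal approximation (the coefficient and standard error below are purely illustrative assumptions, not the chapter's estimates):

```python
import math

# One-sided test H0: alpha_y = 0 vs H1: alpha_y > 0, using the normal
# approximation to the t distribution (the chapter's exact small-sample
# distribution may differ). Numbers are hypothetical.
coef, se = 24.0, 16.0
t_stat = coef / se                      # 1.5
p_one_sided = 0.5 * (1.0 - math.erf(t_stat / math.sqrt(2.0)))
# A t statistic of 1.5 gives a one-sided P-value of about 0.067,
# i.e. between the 5% and 10% thresholds discussed in the text.
```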

Considering the model as a whole, anyway, we find these estimates satisfactory, although not as much as those presented by Rudebusch and Svensson for the United States: we conjecture that the lower precision of the estimates for Germany depends on the smaller number of observations available.

3.2. Poland

In contrast to Germany, we consider Poland (and the other CEECs later) as a country small enough to be affected by international trade, so we augmented the model with the real exchange rate depreciation with respect to Germany and with the German output gap. Although our dataset included 1993 too, we found that both equations suffered from residual autocorrelation, resulting in inconsistent estimates. We interpreted this as evidence of instability, because when 1994 was taken as the starting point, the residual autocorrelation was largely removed and the other diagnostic tests were broadly compatible with a stable, correctly specified model. The estimated model was

    ŷ_t = 0.020 + 0.467 y_{t−1} − 0.0027 r_{t−1}
         (0.008)  (0.142)        (0.001)

    π̂_t = −0.64 π_{t−1} − 0.67 π_{t−2} − 0.51 π_{t−3} + 21.46 (q_{t−1} − q_{t−5})
          (0.12)          (0.101)        (0.10)          (7.88)

over the period 1994Q1–2004Q1 for both the AD and the PC.


The variables referring to the trade partners were thus not significant in the AD equation: we found a certain effect at least for the real exchange rate depreciation, with a sign consistent with macroeconomic theory, but the realization of the t statistic was still rather below the critical value. The absence of the two variables referring to international trade could be interpreted as a sign that Poland is still large enough to be more sensitive to internal rather than to international developments; but notice that the situation was reversed for the PC equation, where only the external conditions had a significant effect on the inflationary dynamics (again we found that the effect of the omitted explanatory variable, y_{t−1} in this case, had the sign predicted by macroeconomic theory, but it failed to be significantly different from 0).

We concluded by performing a forecast test for a break in 2000Q2 in the PC equation, finding no evidence of one (P-value 0.65). Since at that point the Polish national bank abandoned the exchange rate commitment and left direct inflation targeting to stand alone, it is fair to conclude that the change of monetary target did not destabilize the inflation dynamics. Since the target for monetary policy is chosen mainly because it is assumed that in that way expected inflation can be favorably influenced, we prefer not to give a structural interpretation of a backward-looking formulation of the inflation equation, but in this case we can at least conclude that the change of target did not result in a worsening of inflation expectations. Thus, it is likely that by 2000 the Polish central bank had acquired sufficient credibility to move away from a commitment that would have exposed it to pressure from market speculation.

Although we summarized the diagnostic tests concluding that the instability that may be associated with the transition to a market economy was largely confined to the period before 1994, we also estimated a more restrictive specification, in which the sample for the AD equation runs from 1995 onwards, and for the PC from 1998 onwards. We considered this second specification because the residuals were not normally distributed, and an asymptotic interpretation of the outcome of the autocorrelation test for the AD equation was particularly dubious because the test statistic approximately reached the critical value, the P-value being slightly lower than 0.05 (it was 0.046). As for the PC equation, the Chow test did not identify any break in 1998Q1, but the CUSUMsq statistic did indeed signal a potential break in this year, albeit only mildly. The estimates in the more restrictive model are

    ŷ_t = 0.021 + 0.454 y_{t−1} − 0.0029 r_{t−1}
         (0.009)  (0.154)        (0.001)

    π̂_t = −0.57 π_{t−1} − 0.76 π_{t−2} − 0.47 π_{t−3} + 16.09 (q_{t−1} − q_{t−5})
          (0.19)          (0.13)         (0.20)          (7.23)

over the period 1995Q1–2004Q1 for the AD, and 1998Q1–2004Q1 for the PC, very similar then to the estimates on the longer sample. For the PC equation, we also present the alternative specification

    π̂_t = −0.66 π_{t−1} − 0.78 π_{t−2} − 0.59 π_{t−3} + 19.22 (q_{t−1} − q_{t−5}) + 39.24 y_{t−1}
          (0.20)          (0.13)         (0.20)          (7.46)                     (29.00)

estimated over the sample 1998Q1–2004Q1: the lagged output gap still appeared among the determinants of the inflation dynamics, it had the correct sign, and it was marginally significant using a 10% critical value and a one-sided t test. A possible interpretation of this result is that by 1998 the PC equation had evolved even closer to the standard textbook specification, but we think that a longer dataset is needed to address this particular issue. In any case, considering the whole model, we confidently conclude that the Polish monetary authority was able to control inflation during these years, but we think it mainly worked through exchange rate management, the evidence of the closed economy transmission mechanism being, in fact, dubious.

3.3. Czech Republic

The Czech Republic originated in 1993 from the split of the former Czechoslovakia. Data availability is therefore slightly reduced, and it is also possible that the division of the country acted as a source of instability additional to the one generated by the transition. Although it is not possible to distinguish between these two causes, the evidence of instability of the macroeconomic structure was quite clear.

The sample for the estimation of the AD equation started in 1994Q4, but we found that a break took place around 1999, according to the CUSUMsq statistic. This is approximately also the point for which the Chow test is to be computed, and we found strong evidence in favor of a break (the associated P-value was 0.01). Comparing the estimates in the two sub-samples, the most relevant differences were that the coefficient of the real rate, which signals the transmission of the monetary policy impulse from the monetary authority to the economy, and the real exchange rate depreciation were only significant, and with the correct sign, in the second part of the sample (the economic activity in Germany remained insignificant).

It is possible that instability affected the PC equation too, albeit in this case the evidence was less compelling. Using the whole sample starting in 1994, the CUSUMsq statistic did not indicate the presence of any break, but according to the Chow test a discontinuity took place in 1998. The estimated coefficient of the economic activity was largely insignificant, despite having the sign predicted by economic theory, while we found that the real exchange rate appreciation had a strongly significant anti-inflationary effect. The estimated model was

    ŷ_t = 0.010 + 0.620 y_{t−1} − 0.03 r_{t−1} − 0.083 (q_{t−1} − q_{t−5})
         (0.006)  (0.116)        (0.001)       (0.050)

    π̂_t = −0.76 π_{t−1} − 0.21 π_{t−2} − 0.31 π_{t−3} + 32.55 (q_{t−1} − q_{t−5})
          (0.23)          (0.25)         (0.18)          (11.97)

over the period 1998Q1–2004Q1 for both the AD and the PC, while if we relied on the CUSUMsq statistic rather than on the Chow one for the PC, the PC equation estimated over the period 1994Q1–2004Q1 was

    π̂_t = −0.84 π_{t−1} − 0.33 π_{t−2} − 0.34 π_{t−3} + 25.338 (q_{t−1} − q_{t−5})
          (0.15)          (0.17)         (0.09)          (9.186)

Comparing these estimates, notice that if a break did indeed take place, then the consequence was that the real exchange rate appreciation had a stronger effect against inflation in the last part of the sample, which was exactly when the Czech central bank used inflation rather than exchange rate targeting. As in the case of Poland, we then found no evidence that the switch of monetary policy target resulted in a worsening of the inflationary dynamics. On the contrary, if a change in the dynamics of the growth rate of prices took place at all when inflation targeting was introduced, it was in favor of an easier control of inflation.

3.4. Slovenia

As we did for the Czech Republic, in the case of Slovenia too we only considered the part of the sample corresponding to the existence of an independent state.


The data available for the estimation started from 1994Q1 for the AD equation and in 1993Q2 for the PC. We found, anyway, evidence of model mis-specification when the first year is included, resulting in residual autocorrelation and inconsistent estimates; it is also fair to suspect, on the basis of the CUSUMsq statistic, that a break took place in the first part of the sample. We then removed 1993 from the dataset and estimated the model from 1994 onwards (sample period 1994Q1–2003Q3 for the AD, 1994Q1–2003Q4 for the PC), obtaining

    ŷ_t = 0.226 y_{t−1} + 0.343 w_{t−1}
         (0.167)          (0.234)

    π̂_t = −0.60 π_{t−1} − 0.66 π_{t−2} − 0.45 π_{t−3} + 52.64 (q_{t−1} − q_{t−5})
          (0.14)          (0.12)         (0.12)          (15.60)

The diagnostic tests confirmed that both equations were correctly specified and stable in the period considered, albeit we acknowledge that the estimates in the AD equation are only significant using a 10% threshold. We are inclined to consider them anyway, suspecting that a clearer link will emerge in the future, with a longer sample and even greater integration into the European economy. Notice, on the other hand, the absence of the real interest rate, whose estimated coefficient even failed to have the sign prescribed by economic theory: this supports the argument of Capriolo and Lavrac that, until recently, the interference of the central bank in the financial markets made the interest rates uninformative about the monetary stance.

The estimated PC is qualitatively similar to the Polish and Czech ones, with inflation driven only by the exchange rate dynamics and not by the output gap: indeed, β̂_y even turned out to be negative, so in the final specification we excluded y_{t−1} from the explanatory variables altogether. In order to see whether the more active exchange rate policy resulted in easier inflation control, and thus in a larger β_q, we also tested (with a forecast test) for a break in 2001, but we did not reject the hypothesis of stability.

3.5. Some Concluding Remarks

Poland, the Czech Republic, and Slovenia are small economies that in a few years re-shaped their productive and distributive structures, opened to international trade, established a functioning market economy, and joined the EU. Despite these common features, they differ quite remarkably in the timing and conditions in which they started the transition to a market economy, in the policies they have implemented since 1989, and in the size and relative importance of their trade with the rest of the EU. It is then fair to expect both the similarities and the differences to emerge, and this is indeed the case.

The most relevant common feature is the presence of a certain instability in the first part of the sample. To appreciate the importance of the transition, we observe that, according to a study by Coricelli and Jasbec (2004), the characteristics that may be associated with the initial conditions were the main driving forces of real exchange rate dynamics. The variables more directly related to the economic notions of supply and demand only acquired importance after a period of approximately five years. We found that the macroeconomic model structure began to show a stable pattern only from 1994 onward, confirming the findings of Coricelli and Jasbec. Even taking 1994 as a starting point, anyway, some elements of instability remained, especially with respect to the Czech Republic.

The instability of the estimated macroeconomic relations is also informative about the speed of the transition. In this respect, we found that in Slovenia the adverse effect of the Yugoslavian civil war canceled the advantage due to the self-management reform; the Czech Republic, which had to face not only decentralization and the break-up of the CMEA but also the split of Czechoslovakia, fared worse.

The relation between the variables got closer to the standard textbook description of a market supply and demand structure in the last few years. Yet even in that part of the sample, evidence of a transmission mechanism of monetary policy based on the interest rates was only present for Poland, and that too turned out to be very weak. Although it is not surprising that the exchange rate has a prominent role in inflation control in small open economies, our findings cannot be explained simply by this argument, because we explicitly include the exchange rate in both equations of the model, thus taking it into account (and holding it fixed in the interpretation of the results) in the multiple regression framework. One has rather to conclude that the interest rate channel of transmission of monetary policy was weak or obstructed, as indeed explicitly argued for Slovenia. This also means that, although these economies were acknowledged as functioning market economies, they may still be lagging behind; nonetheless, since good progress took place during the accession period, it is fair to expect that participation in the EU will foster integration. Meanwhile, these obstructions should not hamper inflation control too much, because the exchange rate is clearly a valid instrument for that purpose: this means that in a widened euro-zone, the Eurosystem may control inflation in the CEECs by controlling it in the core area.


Although they are all small open economies, a pattern can be envisaged in terms of dimension and openness to international trade, and this is reflected in the estimated models: Poland is the only country in which the international factors did not affect the AD. Another way to appreciate the importance of international openness is to look at the estimated effect of a 1% appreciation of the exchange rate: the impact on the inflation dynamics is clearly ranked according to the relative dimension of trade.

Stability analysis is also important because it allows us to understand whether the switch in Poland and in the Czech Republic from exchange rate to inflation targeting affected price formation. We did not find any evidence of it, nor did we find that a more aggressive exchange rate policy in Slovenia eased the costs of disinflation from 2001 onwards. The instability that we found for the Czech Republic seems to suggest an evolution of the macroeconomic structure towards a market economy, and should then have more to do with EU accession than with inflation targeting. One main implication that we perceive is that the exchange rate commitment is either irrelevant or, more likely, replaceable with inflation targeting as a nominal anchor. We think that this conclusion is of particular interest in choosing the monetary target for the period preceding the last two years before fixing the exchange rate in the ERM2: inflation targeting should be preferred, and ERM2 membership should be limited to the two mandatory years. We also think that this lesson can be extended to other emerging market economies, and that inflation targeting, rather than an explicit exchange rate commitment, should be preferred, especially if strict capital controls are not in place.

References

Amato, JD and S Gerlach (2002). Inflation targeting in emerging market and transition economies: lessons after a decade. European Economic Review 46, 781–790.
Capriolo, G and V Lavrac (2003). Monetary and exchange rate policy in Slovenia. Eastward Enlargement of the Euro-Zone Working Papers, 17G. Berlin: Free University Berlin, Jean Monnet Centre of Excellence, 1–35.
Coricelli, F and B Jasbec (2004). Real exchange rate dynamics in transition economies. Structural Change and Economic Dynamics 15, 83–100.
Dvorsky, S (2000). Measuring central bank independence in selected transition countries and the disinflation process. Bank of Finland, Bofit Discussion Paper 13. Helsinki: Bank of Finland, 1–14.
Iacone, F and R Orsi (2004). Exchange rate regimes and monetary policy strategies for accession countries. In Fiscal, Monetary and Exchange Rate Issues of the Eurozone Enlargement, K Zukrowska, R Orsi and V Lavrac (eds.), pp. 89–136. Warsaw: Warsaw School of Economics.
Maliszewski, WS (2000). Central bank independence in transition economies. Economics of Transition 8, 749–789.
Rudebusch, GD and LEO Svensson (1999). Policy rules for inflation targeting. In Monetary Policy Rules, John B Taylor (ed.), pp. 203–246. Chicago: Chicago University Press.
Rudebusch, GD and LEO Svensson (2002). Eurosystem monetary targeting: lessons from U.S. data. European Economic Review 46, 417–442.
Svensson, LEO (2000). Open-economy inflation targeting. Journal of International Economics 50, 155–183.

Appendix A

Summary of the Results

Note to the tables: for each equation, N indicates the normality test, AC is the LM test of no autoregressive structure in the residuals, and H is the test for heteroskedasticity; W indicates the test of the restrictions that reduce the corresponding equation, as specified in general, to the one specifically used. For each test, we report the P-value; the small sample version of the test is taken, unless normality is rejected. When the estimated equation is restricted, the test statistics N, AC, H, Chow, and W are calculated in the unrestricted model.

Table 1a. Germany.

AD [sample: 91Q2–04Q1]
ŷ_t = 0.011 + 0.769 y_{t−1} − 0.038 r_{t−1}
[N: 0.38] [AC: 0.20] [H: 0.17]

PC [sample: 93Q1–04Q1]
π̂_t = −1.02 π_{t−1} − 0.86 π_{t−2} − 0.71 π_{t−3} + 17.62 y_{t−2}
[N: 0.00] [AC: 0.24] [H: 0.10] [Chow: 0.18] [W: 0.44]
[N: 0.00] [AC: 0.24] [H: 0.10] [Chow: 0.18] [W: 0.44]


Table 2a. Poland, 1994 onwards.

AD [sample: 94Q1–04Q1]
ŷ_t = 0.020 + 0.467 y_{t−1} − 0.0027 r_{t−1}
[N: 0.00] [AC: 0.047] [H: 0.49] [Chow: 0.046] [W: 0.57]

PC [sample: 94Q1–04Q1]
π̂_t = −0.64 π_{t−1} − 0.67 π_{t−2} − 0.51 π_{t−3} + 21.46 (q_{t−1} − q_{t−5})
[N: 0.00] [AC: 0.056] [H: 0.29] [Chow: 0.36] [W: 0.06] [Chow 2000Q2: 0.65]

Table 2b. Poland, restricted sample.

AD [sample: 95Q1–04Q1]
ŷ_t = 0.021 + 0.454 y_{t−1} − 0.0029 r_{t−1}
[N: 0.00] [AC: 0.07] [H: 0.54] [Chow: 0.54] [W: 0.64]

PC [sample: 94Q1–04Q1]
π̂_t = −0.57 π_{t−1} − 0.76 π_{t−2} − 0.47 π_{t−3} + 16.09 (q_{t−1} − q_{t−5})
[N: 0.57] [AC: 0.25] [H: 0.79] [W: 0.39]
[N: 0.57] [AC: 0.25] [H: 0.79] [W: 0.39]


Table 3. Czech Republic, restricted sample.

AD [sample: 98Q1–04Q1]
ŷ_t = 0.010 + 0.620 y_{t−1} − 0.03 r_{t−1} + 0.083 (q_{t−1} − q_{t−5})
     (0.006)  (0.116)        (0.001)       (0.050)
[N: 0.79] [AC: 0.07] [H: 0.54] [W: 0.15]

PC [sample: 98Q1–04Q1]
π̂_t = −0.76 π_{t−1} − 0.21 π_{t−2} − 0.31 π_{t−3} + 32.55 (q_{t−1} − q_{t−5})
[N: 0.71] [AC: 0.34] [H: 0.33] [W: 0.26]

Table 4. Slovenia.

AD [sample: 94Q1–03Q3]
ŷ_t = 0.226 y_{t−1} + 0.343 w_{t−1}
     (0.167)          (0.234)
[N: 0.30] [AC: 0.37] [H: 0.86] [Chow: 0.31] [W: 0.31]

PC [sample: 94Q1–03Q4]
π̂_t = −0.60 π_{t−1} − 0.66 π_{t−2} − 0.45 π_{t−3} + 52.64 (q_{t−1} − q_{t−5})
[N: 0.44] [AC: 0.35] [H: 0.65] [Chow: 0.83] [W: 0.15] [Break 2001Q1: 0.69]


Datastream codes.

                           Germany      Poland      Czech Republic  Slovenia
Exchange rate (US$ per €)  USXRUSD
Exchange rate (vs. US$)    BDESXUSD.    POI..AF.    CZXRUSD.        SJXRUSD.
CPI                        BDCONPRCF    POCONPRCF   CZCONPRCF       SJCONPRCF
Industrial production      BDINPRODG    POIPTOT5H   CZIPTOT.H       SJIPTO01H
3-month rate               BDINTER3     PO160C..    CZ160C..        SJI60B..
Diagnostic tests of some specifications that we dismissed:

Table 5b. Germany, PC equation, whole dataset.

PC [sample: 91Q3–04Q1]
π̂_t = 0.47 − 0.66 π_{t−1} − 0.53 π_{t−2} − 0.50 π_{t−3} − 0.29 π_{t−4} + 7.22 y_{t−1}
     (0.48)  (0.18)         (0.17)         (0.18)         (0.18)         (13.41)
[N: 0.00] [AC: 0.20] [H: 0.06] [Chow: 0.62]


Table 5c. Poland, whole dataset.

AD [sample: 93Q2–04Q1]
ŷ_t = 0.02 + 0.509 y_{t−1} − 0.0025 r_{t−1} − 0.130 w_{t−1} − 0.054 (q_{t−1} − q_{t−5})
     (0.009) (0.166)        (0.001)         (0.203)         (0.045)
[N: 0.00] [AC: 0.20] [H: 0.53] [Chow: 0.06]

PC [sample: 93Q2–04Q1]
π̂_t = −0.42 − 0.77 π_{t−1} − 0.78 π_{t−2} − 0.67 π_{t−3} − 0.10 π_{t−4} + 22.55 y_{t−1} + 18.08 (q_{t−1} − q_{t−5})
      (1.25)  (0.12)         (0.11)         (0.10)         (0.08)         (27.42)         (9.29)
[N: 0.00] [AC: 0.01] [H: 0.07] [Chow: 0.81]


Table 5d. Czech Republic, whole dataset.

AD [sample: 93Q4–04Q1]
ŷ_t = 0.007 + 0.617 y_{t−1} − 0.0001 r_{t−1} − 0.186 w_{t−1} + 0.047 (q_{t−1} − q_{t−5})
     (0.005)  (0.140)        (0.001)         (0.229)         (0.063)
[N: 0.44] [AC: 0.94] [H: 0.06] [Chow: 0.01]

PC [sample: 94Q1–04Q1]
π̂_t = 2.45 − 0.68 π_{t−1} − 0.28 π_{t−2} − 0.36 π_{t−3} − 0.24 π_{t−4} + 22.10 y_{t−1} + 33.22 (q_{t−1} − q_{t−5})
     (1.17)  (0.21)         (0.19)         (0.10)         (0.19)         (24.4)
[N: 0.00] [AC: 0.25] [H: 0.086] [Chow: 0.01] [W: 0.19]


Table 5e. Slovenia, PC equation, whole dataset.

PC [sample: 93Q2–03Q4]
π̂_t = 4.71 − 0.50 π_{t−1} − 0.40 π_{t−2} − 0.10 π_{t−3} − 0.03 π_{t−4} + 2.45 y_{t−1} + 48.28 (q_{t−1} − q_{t−5})
     (1.41)  (0.19)         (0.11)         (0.07)         (0.11)         (26.70)        (17.73)
[N: 0.84] [AC: 0.03] [H: 0.02] [Chow: 0.71]


Topic 2

Credit and Income: Co-Integration Dynamics of the US Economy

Athanasios Athanasenas
Institute of Technology and Education, Greece

1. Introduction

In this study, I seek to identify the existence of a significant statistical relationship between credit (as commercial bank lending) and income (GDP) for the post-war US economy, in terms of a contemporary co-integration methodology.

The main empirical finding, that causality in the long run is directed from credit to income, agrees with the "credit view" and is verified by the post-war US data. In this chapter, I have also identified a dynamic causality effect in the short run, from income changes to credit changes. To obtain the results cited here, I have applied a co-integration analysis. In addition to considering the formulated VAR, I place particular emphasis upon the stability analysis of the estimated equivalent first-order dynamic system. Finally, dynamic forecasts from the corresponding error correction VAR (ECVAR) are obtained, in order to verify the validity of the co-integration vector used.
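The underlying co-integration idea can be sketched in a few lines: two I(1) series share a common stochastic trend exactly when some linear combination of their levels is mean-reverting. The sketch below uses simulated series and illustrative names (`credit`, `income`); it is an Engle–Granger-style illustration of the concept, not the chapter's actual estimation, which relies on a formal VAR/ECVAR framework:

```python
import numpy as np

# Two I(1) series built from a common random walk, hence co-integrated.
rng = np.random.default_rng(2)
n = 400
trend = np.cumsum(rng.standard_normal(n))       # common stochastic trend
credit = trend + 0.3 * rng.standard_normal(n)
income = 0.9 * trend + 0.3 * rng.standard_normal(n)

# Step 1: levels regression income_t = a + b * credit_t + u_t
X = np.column_stack([np.ones(n), credit])
b_hat, *_ = np.linalg.lstsq(X, income, rcond=None)
u = income - X @ b_hat

# Step 2: AR(1) coefficient of the residual; a value well below 1 indicates
# mean reversion, i.e. co-integration. A formal test would compare an ADF
# statistic on u against the Engle-Granger critical values.
rho = float(u[:-1] @ u[1:] / (u[:-1] @ u[:-1]))
```

In the simulated pair the residual is close to white noise (rho near 0), which is the qualitative signature the chapter's co-integration vector is meant to capture for credit and GDP.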

The remainder of this study has four sections. Section 2 briefly reviews the literature on credit and income growth in the post-war US economy. Section 3 deals with methodological issues concerning the contemporary co-integration analysis and the data used. Section 4 presents the econometric analysis and the empirical results, while Section 5 concludes on the credit–income causality issue, placing emphasis on the stability analysis of the obtained first-order dynamic system and the forecasts from the corresponding ECVAR.




2. The Credit Issue Researched: A Brief Note on the US Economy

There is evidence that money tends to "lead" income in a historical sense (see Friedman and Schwartz (1963) and Friedman (1961; 1964)); mainstream monetarists, through the "quantity theory", explain the empirical observations as a causal relationship running from money to income. Since the seminal works of Sims, empirical validation of the relationship between money and income fluctuations has been established (Sims, 1972; 1980a). Since the publication of the works of Goldsmith (1969), McKinnon (1973), and Shaw (1973), a debate has been going on about the role of financial intermediaries and bank credit in promoting long-run growth. On the basis of historical data, Friedman and Schwartz (1963) argue that major depressions of the US economy have been caused by autonomous movements in the money stock.¹ Sims (1980a) concludes that, on the one hand, money stock emerges as causally prior, accounting for a substantial fraction of income variance during the inter-war and post-war periods of the business cycles; on the other hand, old skepticism remains on the direction of causality between money and income.

It is well known that traditional monetarists are consistent with the old-classic view that only "money matters", so that even when a tightening of bank credit does occur during recessions, they see this as part of the endogenous financial system, rather than as an exogenous event that induces recessions. As Brunner and Meltzer (1988) argue, banking crises are endogenous financial forces, which directly affect business cycles conditional upon the monetary propagation mechanism.²

In contrast, the "new-credit" viewers further stress the importance of credit restrictions, while accepting the fundamental inefficiencies of monetary policy.³ Bernanke and Blinder (1988), in their seminal article, conclude that credit demand has become relatively more stable than money

¹ Ibid.
² Note that monetarists seem to stress mainly the complementarity of the credit and money channels of transmission, the short-run stability of the money multiplier, and the long-run neutrality of money.
³ The "new-credit" view combines the "old-credit" view's money-to-spend perspective with the money-to-hold perspective of the money view. The "old-credit" view focused on decisions to create and spend money, on the expansion effects of bank lending, and emphasized the issue of the non-neutrality of credit money in the long run (see also Trautwein, 2000).


demand since 1979 and throughout the 1980s, when money demand shocks became much more important relative to credit demand shocks, as supported by their estimation of a greater variance of money. Evidently, one of the most challenging issues of applied monetary theory is the existence and importance of the credit channel in the monetary transmission mechanism. Bernanke (1988) emphasizes that, according to the proponents of the credit view, bank credit (loans) deserves special treatment due to the qualitative difference between a borrower going to the bank for a loan and a borrower raising funds by going to the financial markets and issuing stock or floating bonds.⁴

Kashyap and Stein (1994) leave no doubt, in supporting Bernanke’s (1983) and Bernanke and James’s (1991) examinations of the Great Depression in the United States, that the conventional explanation for the depth and persistence of the Depression is one of the strongest pieces of evidence supporting the view that shifts in loan supply can be quite important. The supporting evidence stands strong, considering also that the intensity of the declines seen in recessions is partially due to a cut-off in bank lending (Kashyap et al., 1994).5

King and Levine (1993) stress that financial indicators, such as the importance of the banks relative to the central bank, the percentage of credit allocated to private firms, and the ratio of credit issued to private firms to GDP, are strongly related to growth. For the relevant role of the monetary process and the financial sector, see, in particular, Angeloni et al. (2003); Bernanke and Blinder (1992); Murdock and Stiglitz (1993); Stock and Watson (1989); and Swanson (1998).

More recently, Kashyap and Stein (2000) showed that the impact of monetary policy on lending behavior is stronger for banks with less liquid balance sheets, where liquidity is measured by the ratio of securities

4 Stiglitz (1992) stresses the fact that the distinctive nature of the mechanisms by which credit is allocated plays a central role in the allocation process. He identifies several crucial reasons why banks are likely to reduce lending activity (p. 293) as the economy goes into recession, where banks’ unwillingness and inability to make loans obviates the effect of monetary policy. Bernanke and Blinder (1992) find that the transmission of monetary policy works through bank loans, as well as through bank deposits. Tight monetary policy reduces deposits, and over time banks respond to tightened money by terminating old loans and refusing to make new ones. The resulting reduction of bank loans can depress the economy.

5 Both Bernanke and Blinder (1992) and Gertler and Gilchrist (1993), using innovations in the federal funds rate as an indicator of monetary policy disturbances, find that M1 or M2 declines immediately, while bank loans are slower to fall, moving contemporaneously with output.


204 Athanasios Athanasenas<br />

to assets. Their principal conclusions can be narrowed down to: (a) the existence of the credit-lending channel of monetary transmission, at least for the United States, is undeniable; and (b) the precise quantification and accurate measurement of the aggregate loan-supply consequences of monetary policy remain very difficult in econometrics, due to large estimation biases.

See also the works of Mishkin (1995), Bernanke and Gertler (1995), and Meltzer (1995) on the bank-lending channel. Lown and Morgan (2006) further stress the substantial role of fluctuations in commercial credit standards and of credit cycles correlated with real output.

All the previous works provide clear evidence consistent with the existence of a credit-lending channel of monetary transmission. At the very heart of the credit-lending view is the proposition that central banks in general, and the Federal Reserve in the United States in particular, can shift banks’ loan supply schedules by conducting open-market operations.6

Consequently, given the afore-mentioned empirical research on the credit-lending channel and the established relationship between financial intermediation and economic growth in general, our main focus here is to analyze in detail the causal relationship between finance and growth, by focusing on bank credit and money income (GNP) in the post-war US economy. Analyzing in detail here means placing particular emphasis (a) on a contemporary co-integration approach, and (b) on the stability analysis of the resulting first-order dynamic system and the dynamic forecasts from the corresponding ECVAR.

3. Methodological Issues and Data<br />

In the empirical analysis, for the US post-war period 1957–2007, we test for the existence of causality between total loans and leases at commercial banks, denoted real credit, representing the development of the banking sector, and money income, GNP, denoted real income, representing national economic performance. Both variables are quarterly, seasonally adjusted, from 1957:3 to 2007:3, i.e., 201 observations, and are expressed in real terms, indexed to 1982–1984 as base 100, in billions of US$. Taking natural logs, I define two new variables (W_i, Y_i) such that Y_i = ln(real income_i) and W_i = ln(real credit_i), for i = 1, 2, ..., 201.

6 It is noted that the credit-lending channel also requires an imperfect price adjustment and<br />

that some borrowers cannot find perfect substitutes for bank loans.



My data are obtained online from the Federal Reserve Bank of Saint Louis, Missouri, through their open-access database.7

Researchers usually employ some measure of the stock of money over GNP as a proxy for the size of the “financial depth” (see, for instance, Goldsmith, 1969; McKinnon, 1973). This type of financial indicator does not inform us as to whether the liabilities are those of the central bank, commercial banks, or other financial intermediaries. In this way, the credit-channel issue cannot be analyzed, and the significant role of credit cannot be isolated. Moreover, this type of financial indicator is mainly applicable to developing countries, since there the financing of the economy is to a great extent carried out by the public sector and controlled mainly by the central bank.

An advanced economy such as the United States, with a fully liberalized banking and financial system, provides an excellent case for testing the modern credit view, in which commercial bank credit itself has to be modeled as a fundamental, financial, and independent monetary variable that affects money income, and hence economic performance, in the long run.

Based on the cited references, we accept the value of commercial bank credit to the private sector as a measure of the ability of the banking system to provide finance-led income growth. The extent to which money income growth is associated with the provision of credit by the financial sector is examined through contemporary co-integration methods and system stability analysis.

Initially, the order of integration of the two variables is investigated using standard unit root tests (Dickey and Fuller, 1979; Harris, 1995). The next step is the computation of co-integration vectors, by applying the maximum likelihood (ML) approach to the error-correction representation of the VAR model (Harris, 1995; Johansen and Juselius, 1990). We transformed the initial VAR to an equivalent first-order dynamic system to perform the proper stability analysis, which revealed that the system under consideration is stable. It should be recalled that stability is of vital importance for obtaining consistent forecasts, and this issue is stressed appropriately in Sections 4 (4.2 and 4.6) and 5.

I proceed with vector error-correction modeling, following Engle and Granger (1987), and test for short-run dynamic causality effects between money and income.

7 And, we are deeply thankful for this.



I end by applying a dynamic forecasting technique to the estimated ECVAR. The very low value of Theil’s inequality coefficient U provides concrete evidence that I have formulated a suitable ECVAR, obtaining, thus, robust results. Our particular emphasis on system stability is evident and is clarified in detail in the following section.

4. Co-Integration<br />

Considering the series {Y_i}, {W_i} and applying the Dickey–Fuller (DF/ADF) tests (Dickey and Fuller, 1979; 1981; Harris, 1995, pp. 28–47), I found that both series are not stationary, but are integrated of order 1, that is I(1). This implies that by differencing the initial series, we get stationary ones. In other words, the series ΔY_i = Y_i − Y_{i−1} and ΔW_i = W_i − W_{i−1} are stationary, that is I(0). This can be easily verified from Fig. 1, where the non-stationary series {Y_i}, {W_i} are also presented.

Figure 1. The I(1) series {Y_i}, {W_i} and the stationary ones {ΔY_i}, {ΔW_i}.



Since the variables Y_i, W_i used in this study are in natural logs, it should be recalled that the parameters of a long-run relationship represent constant elasticities. It should also be recalled that if two series are not stationary but have common trends, so that there exists a linear combination of these series that is stationary, then they are integrated in a similar way and for this reason are called co-integrated, in the sense that one or the other of these variables will tend to adjust so as to restore a long-run equilibrium. Also, if there is a linear trend in the data, as in the case of Y_i and W_i as shown in Fig. 1, then we specify a model including a time-trend variable, thus allowing the non-stationary relationships to drift.

4.1. The VAR Model and The Equivalent ECVAR<br />

We start this section by formulating and estimating an unrestricted VAR of the form:

x_i = δ + μt_i + Σ_{j=1}^{p} A_j x_{i−j} + w_i    (1)

where x ∈ E^n (here E denotes the Euclidean space), δ is the vector of constant terms, t_i is a time-trend variable, μ is the vector of corresponding coefficients, and w is the noise vector. The matrices of coefficients A_j are defined on E^n × E^n. It is noted that the variable t_i is included for the reason mentioned above. It can be easily shown (Johansen, 1995) that (1) may be expressed in the form of an equivalent ECVAR, that is:

Δx_i = δ + μt_i + Σ_{j=1}^{p−1} Q_j Δx_{i−j} + Πx_{i−1} + w_i    (2)

where Q_j = (Σ_{k=1}^{j} A_k − I).

After estimation, we may compute the matrix Π from:

Π = (Σ_{j=1}^{p} A_j) − I    (2a)

or directly from the corresponding ECVAR as given in Eq. (2). It is recalled that VAR models like the one presented above are recommended by Sims (1980a; b) as a way to estimate the dynamic relationships among jointly endogenous variables. It should be noted that for the case under consideration, the vector x in Eqs. (1) and (2) is 2-dimensional, i.e.,

x_i = [Y_i  W_i]′, and x ∼ I(1).
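The algebra linking the levels VAR and its error-correction form can be checked numerically. The sketch below uses hypothetical coefficient matrices for a bivariate VAR(2) and one standard parameterisation that keeps the level term at lag one (so Q_1 = −A_2, with Π as in Eq. (2a)); it verifies that both representations produce the same value of x_i.

```python
import numpy as np

# Hypothetical coefficient matrices of a bivariate VAR(2), chosen for illustration.
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
A2 = np.array([[0.2, 0.0],
               [0.1, 0.3]])
I2 = np.eye(2)

# Long-run impact matrix, Eq. (2a): Pi = (A1 + A2) - I
Pi = A1 + A2 - I2

# Arbitrary lagged values x_{i-1} and x_{i-2} for the check.
x1 = np.array([1.7, 0.9])
x2 = np.array([1.2, 0.5])

# VAR in levels (deterministic terms omitted): x_i = A1 x_{i-1} + A2 x_{i-2}
var_part = A1 @ x1 + A2 @ x2

# Equivalent ECVAR with the level term at lag 1:
# dx_i = Q1 dx_{i-1} + Pi x_{i-1},  with Q1 = -A2; add x_{i-1} back to get levels.
Q1 = -A2
ec_part = Q1 @ (x1 - x2) + Pi @ x1 + x1

assert np.allclose(var_part, ec_part)
```

The identity holds exactly for any lagged values, which is why Π can be read off either from the levels VAR, via Eq. (2a), or directly from the estimated ECVAR.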

It is known that the matrix Π can be decomposed such that Π = AC, where the elements of A are the coefficients of adjustment and the rows of matrix C are the (possible) co-integrating vectors. In some books, A is denoted by α and C by β′, although capital letters are used here for matrices. Besides, β usually denotes the coefficient vector in a general linear econometric model. To avoid any confusion, we adopted the notation used here, as well as in Mouza and Paschaloudis (2007).

Usually, the matrices A and C are obtained by applying the ML method (Harris, 1995; Johansen and Juselius, 1990, p. 78). According to this procedure, a constant and/or trend can be included in the co-integrating vectors in the following way.

Considering Eq. (2), we can augment the matrix Π to accommodate, as additional columns, the vectors μ and δ, in the following way:

Π̃ = [Π ⋮ μ δ].    (3)

In this case, Π̃ is defined on E^n × E^m, with m > n. For conformability, the vector x_{i−1} in Eq. (2) should be augmented accordingly, using the linear advance operator L (such that L^k y_i = y_{i+k}), i.e.,

x̃_{i−1} = [x_{i−1}  Lt_{i−1}  1]′.    (3a)

Hence, Eq. (2) takes the form:

Δx_i = Σ_{j=1}^{p−1} Q_j Δx_{i−j} + Π̃x̃_{i−1} + w_i.    (4)

Note that no constants (vector δ) and no coefficients of the time trend (vector μ) are explicitly present in the ECVAR in Eq. (4), which specifies precisely a relevant set of n ECMs.

It is assumed at this point that the matrices C and A refer to the matrix Π̃ as seen in Eq. (4). Let us now consider the kth row of matrix C, that is c′_{k·}.8 If this row is assumed to be a co-integration vector, then the (disequilibrium)

8 Note that the dot is necessary to distinguish the kth row of C, i.e., c′_{k·}, from the transpose of the kth column of this matrix, that is c′_k.



errors computed from u_i = c′_{k·}x̃_i must be a stationary series. Considering now the kth column of matrix A, that is a_k, and given that u_{i−1} = c′_{k·}x̃_{i−1}, the ECVAR in Eq. (4) can be written as:

Δx_i = Σ_{j=1}^{p−1} Q_j Δx_{i−j} + a_k u_{i−1} + w_i.    (5)

This is the conventional form of an ECVAR when the matrix Π̃, instead of Π, is considered. In any case, the maximum lag in Eq. (5) is p − 1. It should be noted that this specification of an ECM is fully justified from the theoretical point of view. However, if we want to include an intercept in Eq. (5), i.e., in the short-run model, it may be assumed that the intercept in the co-integration vector is cancelled by the intercept in the short-run model, leaving only an intercept in Eq. (5). Thus, when a vector of constants is to be included in Eq. (4), the latter takes the form:

Δx_i = δ + Σ_{j=1}^{p−1} Q_j Δx_{i−j} + Π̃x̃_{i−1} + w_i    (6)

and then Eq. (5), the matrix Π̃, and the augmented vector x̃ become:

Δx_i = δ + Σ_{j=1}^{p−1} Q_j Δx_{i−j} + a_k u_{i−1} + w_i    (6a)

Π̃ = [Π ⋮ μ]    (6b)

x̃_{i−1} = [x_{i−1}  Lt_{i−1}]′.    (6c)

According to Eq. (6a), the co-integrating vector c′_{k·}, used to compute u_i = c′_{k·}x̃_i, includes only the coefficient of the time-trend variable among the deterministic terms. The inclusion of a trend in the model can be justified if we examine the estimation results of a long-run relationship between Y and W. In such a case, we immediately realize that a trend should be present.

After adopting the VAR as seen in Eq. (1), the next step is to determine the value of p from Table 1 (Holden and Perman, p. 108). We see from Table 1 that for α ≤ 0.05, p = 4, which means that we have to estimate a VAR(4), with only one deterministic term (i.e., the trend).

After estimation, we found that the matrix Π̃ (hat omitted for simplicity),

Table 1. Determination of the value of p.

Lag length p | LR statistic | p-value
1 | — | —
2 | 132.91 | 0.0000
3 | 14.988 | 0.0047
4 | 14.074 | 0.0071
5 | 2.7328 | 0.6035
6 | 5.1465 | 0.2726
7 | 2.3439 | 0.6728
8 | 5.2137 | 0.2661

specified in Eq. (6b), has the following form:

         Y_i         W_i         t_i
Π̃ = [ −0.092971   0.028736   0.000320 ]    (7)
     [  0.016209  −0.024594   0.000124 ]

4.2. Dynamic System Stability

The VAR(4) can be transformed to an equivalent first-order dynamic system, in the following way:

[ x_i   ]   [ A_1  A_2  A_3  A_4 ] [ x_{i−1}   ]   [ δ ]   [ μ ]       [ w_i ]
[ Lx_i  ] = [ I_2   0    0    0  ] [ Lx_{i−1}  ] + [ 0 ] + [ 0 ] t_i + [ 0   ]    (8)
[ L²x_i ]   [ 0    I_2   0    0  ] [ L²x_{i−1} ]   [ 0 ]   [ 0 ]       [ 0   ]
[ L³x_i ]   [ 0    0    I_2   0  ] [ L³x_{i−1} ]   [ 0 ]   [ 0 ]       [ 0   ]

L here denotes the linear lag operator, such that L^k z_i = z_{i−k}. Equation (8) can be written in a compact form as:

y_i = Ãy_{i−1} + d + zt_i + v_i    (8a)

where

y′_i = [x_i  Lx_i  L²x_i  L³x_i],  d′ = [δ  0  0  0],
z′ = [μ  0  0  0],  v′_i = [w_i  0  0  0]

and the dimension of the matrix Ã is (8 × 8).


Table 2. Eigenvalues.

No. | Real part | Coefficient of imaginary part | Length
1 | 0.94 | 0 | 0.94
2 | 0.8567 | 0.0579 | 0.8586
3 | 0.8567 | −0.0579 | 0.8586
4 | 0.3807 | 0.0 | 0.3807
5 | 0.0367 | 0.3655 | 0.3674
6 | 0.0367 | −0.3655 | 0.3674
7 | −0.3581 | 0.0 | 0.3581
8 | −0.1142 | 0.0 | 0.1142

To decide whether the dynamic system described by Eq. (8a) is stable, we compute the eigenvalues of the matrix Ã, presented in Table 2. Since the greatest length (0.94) is less than 1, the dynamic system as given in Eq. (8a) is stable and can be used for forecasting purposes. A side verification of this stability test is that, once the system is stable, the elements of the state vector x in the initial VAR(4), which in this case are the series {Y_i}, {W_i}, are either difference stationary series (DSS) or, in some cases, stationary series. In fact, we showed that these series are DSS and, in particular, that they are I(1).
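The stability check of this subsection amounts to stacking the VAR(4) into its companion (first-order) form and inspecting the eigenvalue moduli, as in Table 2. A sketch with hypothetical coefficient matrices:

```python
import numpy as np

def companion(A_list):
    """Stack VAR(p) coefficient matrices into the first-order (companion) matrix."""
    n, p = A_list[0].shape[0], len(A_list)
    top = np.hstack(A_list)              # [A1 A2 ... Ap]
    bottom = np.eye(n * (p - 1), n * p)  # identity block shifting the lagged states
    return np.vstack([top, bottom])

# Hypothetical VAR(4) coefficient matrices for a bivariate system.
rng = np.random.default_rng(2)
A = [0.2 * rng.standard_normal((2, 2)) for _ in range(4)]
A[0] += 0.5 * np.eye(2)  # some persistence in the first lag

A_tilde = companion(A)                  # 8 x 8, as in Eq. (8a)
moduli = np.abs(np.linalg.eigvals(A_tilde))
print(sorted(moduli, reverse=True))
stable = moduli.max() < 1.0             # stability: all "lengths" below one
```

The moduli printed here play the role of the "Length" column of Table 2: if the largest one is below 1, the stacked system is stable and safe to use for dynamic forecasting.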

4.3. The Estimated Co-Integrating Vectors

We applied the ML method to compute the matrices C and A, which have the following form:

        Y       W           t
C = [ 1   −0.289613   −0.003621 ]
    [ 1   −0.653108   −0.000292 ],

A = [ −0.087991   −0.004980 ]
    [ −0.038535    0.054745 ]    (9)

satisfying AC = Π̃. We can easily verify that only the first row of matrix C is a promising candidate, as verified from the computed value of the t-statistic (−0.23) of the (1, 2) element of matrix A, which corresponds to the second row of matrix C. Thus, we have the co-integrating vector:

[1  −0.289613  −0.003621]    (10)

producing the disequilibrium errors u_i, obtained from:

u_i = Y_i − 0.289613W_i − 0.003621t_i.    (10a)

It should be noted that the vector specified in Eq. (10) is a co-integration vector iff the series {u_i} computed from Eq. (10a) is stationary. To find out whether this series is stationary, we run the following regression with a constant term, since Σu_i ≠ 0:

Δu_i = a + b_1 u_{i−1} + Σ_{j=1}^{q} b_{j+1} Δu_{i−j} + ε_i.    (11)

Note that the value of q is set such that the noises ε_i are white. With q = 3, the estimation results are as follows:

Δu_i = 0.4275 − 0.07384u_{i−1} + Σ_{j=1}^{3} b̂_{j+1} Δu_{i−j} + ε̂_i    (11a)
       (0.1174)  (0.02029)
                 t = −3.64.

It should be noted that in Eq. (11a) there is no problem regarding autocorrelation and heteroscedasticity. We have to compute the t_u-statistic from the MacKinnon (1991, pp. 267–276) critical values (see also Granger and Newbold, 1974; Hamilton, 1994; Harris, 1995, Table A6, p. 158; Harris, 1995, pp. 54–55). This statistic (t_u = β_∞ + β_1/T + β_2/T², where T denotes the sample size) is evaluated from the relevant table, taking into account Eqs. (10a) and (11a). Note that in the latter equation there is only one deterministic term (the constant). The value of the t_u-statistic for this case is −3.3676 (α = 0.05). Hence, since −3.64 < −3.3676, we reject the null that the series {u_i} is not stationary. Recall that the null [u_i ∼ I(1)] is rejected in favor of H_1 [u_i ∼ I(0)] if t < t_u.



4.4. Essential Remarks<br />

In seminal works about money and income (as for instance in Tobin, 1970), the analysis is restricted to a purely theoretical consideration. Some authors (see, for instance, Friedman and Kuttner, 1992) write many explanations about co-integration, co-integrating vectors, and error-correction models, but I did not trace any attempt to compute either. In a later paper by the same authors (Friedman and Kuttner, 1993), I read (p. 200) the phrase “vector autoregression system,” which in fact is understood as a VAR. The point is that no VAR model is specified in that work. Besides, the lag length is determined by simply applying the F-test (p. 191). In this context, the validity of the results is questionable. In other applications (see, for instance, Arestis and Demetriades, 1997), where a VAR model has been formulated, nothing is mentioned about the test applied to determine the maximum lag length. Also, in that paper, the authors restrict themselves to the use of the trace statistic (λ_trace) to test for co-integration (p. 787), although this is the necessary but not the sufficient condition, as pointed out above. From other research works (see, for instance, Arestis et al., 2001), I get the impression that there is not a clear notion regarding stationarity and/or integration, since the authors state [p. 22, immediately after Eq. (1)] that “D is a set of I(0) deterministic variables such as constant, trend and dummies. . . ”. It is clear that this assertion is false. How can a trend, for instance, be a stationary, I(0), variable?10 The authors declare (p. 24) that “We then perform co-integration analysis. . . ”. The point is that, apart from some theoretical issues, I did not trace such an analysis in the way it is applied here. Besides, nothing is mentioned about any ECVAR analogous to the one

here. Besides, nothing is mentioned about any ECVAR, analogous to the one<br />

that is formulated and used in this study. Most essentially, I have not traced,<br />

In case that the disequilibrium errors are computed from:<br />

then the co-integration vector will have the form:<br />

u i = Ŷ i − Y i (13)<br />

[−10.289613 0.003621]. (14)<br />

It is noted also that hat (ˆ) is omitted from the disequilibrium errors u i , for avoiding any<br />

confusion with the OLS disturbances.<br />

It should be emphasized that in such a case, i.e., using the disequilibrium errors computed<br />

from Eq. (13), then the sign of the coefficient of adjustment, as will be seen later, will be<br />

changed.<br />

10 It should be re-called at this point, that if t i is a common trend, then t i = 1(∀ i).


214 Athanasios Athanasenas<br />

in relevant works, the stability analysis applied for the system presented<br />

in Eq. (8).<br />

4.5. The ECM Formulation

The lagged values of the disequilibrium errors, that is u_{i−1}, serve as an error-correction mechanism in a short-run dynamic relationship, where the additional explanatory variables may appear in lagged first differences. All variables in this equation, also known as the ECM, are stationary, so that, from the econometric point of view, it is a standard single-equation model, where all the classical tests are applicable. It should be noted that the lag structure and the details of the ECM should be in line with the formulation seen in Eq. (6a). Hence, I started from this relation, considering the errors u_i, and estimated the following model, given that the maximum lag length is p − 1, i.e., 3:

ΔY_i = α_0 + Σ_{j=1}^{3} α_j ΔY_{i−j} + Σ_{j=1}^{3} β_j ΔW_{i−j} − a_Y u_{i−1} + _Y v_i.    (15)

Note that _Y v_i are the model disturbances. If the adjustment coefficient a_Y is significant, then we may conclude that in the long run, W causes Y. If a_Y = 0, then no such causality effect exists. In case all β_j are significant, there is a causality effect in the short run, from ΔW to ΔY. If all β_j = 0, then no such causality effect exists. The estimation results are presented below.

ΔY_i = 0.5132 + 0.2618ΔY_{i−1} + 0.161ΔY_{i−2} − 0.00432ΔY_{i−3} + 0.0273ΔW_{i−1}
        (0.125)  (0.074)          (0.073)         (0.0738)          (0.075)
p-value  0.0001   0.0005           0.0276          0.953             0.719
Hansen   0.0370   0.1380           0.2380          0.234             0.128

       + 0.0487ΔW_{i−2} − 0.076ΔW_{i−3} − 0.0879u_{i−1} + _Y v̂_i    (16)
         (0.075)          (0.065)         (0.022)
p-value   0.519            0.242           0.0001
Hansen    0.065            0.098           0.025   (for all coefficients: 2.316)

R̄² = 0.167, s = 0.009, DW d = 1.98, F(7, 189) = 6.65 (p-value = 0.0),
Condition number (CN) = 635.34.


We observe that the estimated adjustment coefficient (−0.0879) is significant and almost identical to the (1, 1) element of matrix A, as seen in Eq. (9), which was computed by the ML method. According to the Hansen statistics, all coefficients seem to be stable for α = 0.05. The BG test revealed that no autocorrelation problems exist. In models like the one seen in Eq. (16), the term u_{i−1} usually produces the smallest Spearman’s correlation coefficient (r_s). In this particular case, we have r_s = −0.0979, t = −1.37, p = 0.172, which means that no heteroscedasticity problems exist. Finally, the RESET test shows that there is no specification error.

In Eq. (16), we have an inflated CN due to spurious multicollinearity. This is verified from the revised CN, which has the value 3.42; as Lazaridis (2007) has shown, this means that, in fact, there is no serious multicollinearity problem affecting the reliability of the estimation results. In many applications, particularly when the variables are in logs, the value of this statistic (CN) is extremely high, which in most cases, as stated above, is due to spurious multicollinearity. The latter can be revealed by computing the revised CN, which is not widely known; this seems to be the main reason for not reporting this statistic in relevant applications.

To test the null, H_0: β_1 = β_2 = β_3 = 0, where β_j are the coefficients of ΔW_{i−j} (j = 1, 2, 3), we computed the F-statistic, that is, F(3, 189) = 0.533 (p-value = 0.66). Hence, the null is accepted, which means that there is no short-run causality effect from ΔW to ΔY; only in the levels, i.e., in the long run, does W cause Y. Observing the value of the adjustment coefficient (−0.0879), I may conclude that it gives a satisfactory percentage regarding the convergence of money income towards the long-run equilibrium.11

4.6. Dynamic Forecasting with an ECVAR

To complete the ECVAR model, it is necessary to estimate a short-run relation for ΔW. Thus, we consider an equation similar to Eq. (15), i.e.,

ΔW_i = b_0 + Σ_{j=1}^{3} a_j ΔY_{i−j} + Σ_{j=1}^{3} b_j ΔW_{i−j} − a_W u_{i−1} + _W v_i.    (17)

11 It is already mentioned that if the disequilibrium errors are computed from Eq. (13), instead of Eq. (12), then the estimated coefficient of adjustment will have a positive sign.


The estimation results are as follows12:

ΔW_i = 0.2241 + 0.0977ΔY_{i−1} + 0.2256ΔY_{i−2} + 0.199ΔY_{i−3} + 0.4864ΔW_{i−1}
        (0.125)  (0.0733)         (0.072)          (0.073)         (0.075)
p-value  0.075    0.183            0.002            0.0072          0.000
Hansen   0.075    0.181            0.041            0.7130          0.045

       + 0.0123ΔW_{i−2} + 0.068ΔW_{i−3} − 0.03852u_{i−1} + _W v̂_i    (18)
         (0.87)           (0.29)          (0.0216)
p-value   0.519            0.242           0.076
Hansen    0.062            0.181           0.075   (for all coefficients: 2.268)

R̄² = 0.541, s = 0.009, DW d = 1.97, F(7, 189) = 34.0 (p-value = 0.0),
CN = 635.4, Revised CN = 3.42.

Presumably, we are on the right track, since the estimated adjustment coefficient is almost the same as the one computed by the ML method, as in the previous case. This is the (2, 1) element of matrix A, as seen in Eq. (9). Note that this coefficient is significant for α ≥ 0.08, which means that there is a rather weak causality effect from Y to W in the long run.

To test the null H_0: a_1 = a_2 = a_3 = 0, where the a_j (j = 1, 2, 3) are the coefficients seen in Eq. (17), we compute the relevant F-statistic, i.e., F(3, 189) = 8.42 (p-value = 0.0). This implies that there is indeed a causality effect in the short run, from ΔY to ΔW.

Considering the complete ECVAR, i.e., Eqs. (16) and (18), with Eq. (10a) used to compute the series {u_i}, together with some trivial identities, I form the following system:

ΔY_i = β̂_0 + Σ_{j=1}^{3} α̂_j ΔY_{i−j} + Σ_{j=1}^{3} β̂_j ΔW_{i−j} − â_Y u_{i−1} + _Y v̂_i
ΔW_i = b̂_0 + Σ_{j=1}^{3} â_j ΔY_{i−j} + Σ_{j=1}^{3} b̂_j ΔW_{i−j} − â_W u_{i−1} + _W v̂_i
Y_i = ΔY_i + Y_{i−1}
W_i = ΔW_i + W_{i−1}
u_i = Y_i − 0.289613W_i − 0.0036218t_i.    (19)

12 According to the Hansen statistics, all coefficients seem to be stable for α = 0.05. The Spearman’s correlation coefficient (r_s) regarding the term u_{i−1} is r_s = −0.075, t = −1.051, p = 0.294, which means that we do not have to worry about heteroscedasticity problems. Also, the revised CN indicates that no multicollinearity problems exist that would affect the reliability of the estimation results.

It may be useful to note that there are three identities in the system and that the only exogenous variable present is the trend. We may obtain dynamic simulation results from the ECVAR specified in Eq. (19), in order to obtain predictions for Y_i and W_i and, at the same time, to verify the validity of the co-integration vector used. This can be achieved by first formulating the following deterministic system:

K_0 y_i = K_1 y_{i−1} + K_2 y_{i−2} + K_3 y_{i−3} + D̃q_i    (20)

where y_i = [ΔY_i  ΔW_i  u_i  Y_i  W_i]′, q_i = [t_i  1]′, and

K_0 = [ 1   0   0   0        0        ]
      [ 0   1   0   0        0        ]
      [ 0   0   1  −1        0.289613 ]
      [−1   0   0   1        0        ]
      [ 0  −1   0   0        1        ],

K_1 = [ 0.2618   0.0273   −0.0879    0   0 ]
      [ 0.0977   0.4864   −0.03852   0   0 ]
      [ 0        0         0         0   0 ]
      [ 0        0         0         1   0 ]
      [ 0        0         0         0   1 ],

K_2 = [ 0.161    0.0487   0   0   0 ]
      [ 0.2256   0.0123   0   0   0 ]
      [ 0        0        0   0   0 ]
      [ 0        0        0   0   0 ]
      [ 0        0        0   0   0 ],

K_3 = [ −0.00432   −0.076   0   0   0 ]
      [  0.199      0.068   0   0   0 ]
      [  0          0       0   0   0 ]
      [  0          0       0   0   0 ]
      [  0          0       0   0   0 ],

and

D̃ = [  0          0.5132 ]
    [  0          0.2241 ]
    [ −0.003621   0      ]
    [  0          0      ]
    [  0          0      ].

Pre-multiplying Eq. (20) by K_0⁻¹, we get:

y_i = Q_1 y_{i−1} + Q_2 y_{i−2} + Q_3 y_{i−3} + Dq_i    (21)

where Q_1 = K_0⁻¹K_1, Q_2 = K_0⁻¹K_2, Q_3 = K_0⁻¹K_3, and D = K_0⁻¹D̃.

The system given in Eq. (21) can be transformed into an equivalent first-order dynamic system, in the same way as described earlier [see Eqs. (8) and (8a)]. Note that in this case the matrix Ã is of dimension (15 × 15). It is important to stress again that an unstable dynamic system must not be used for simulation purposes. From this system, we can obtain dynamic simulation results for the variables $Y_i$ and $W_i$. These results are presented graphically in Fig. 2.
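The first-order (companion) stacking and the associated eigenvalue check can be sketched as follows; the $Q_j$ matrices of Eq. (21) are rebuilt here from the $K_j$ values reported above so the snippet is self-contained:

```python
import numpy as np

# Structural matrices of Eq. (20), values as reported in the text.
K0 = np.array([[1.0, 0, 0, 0, 0],
               [0, 1, 0, 0, 0],
               [0, 0, 1, -1, 0.289613],
               [-1, 0, 0, 1, 0],
               [0, -1, 0, 0, 1]])
K1 = np.array([[0.2618, 0.0273, -0.0879, 0, 0],
               [0.0977, 0.4864, -0.03852, 0, 0],
               [0, 0, 0, 0, 0],
               [0, 0, 0, 1, 0],
               [0, 0, 0, 0, 1.0]])
K2 = np.zeros((5, 5)); K2[:2, :2] = [[0.161, 0.0487], [0.2256, 0.0123]]
K3 = np.zeros((5, 5)); K3[:2, :2] = [[-0.00432, 0.076], [0.199, -0.068]]

# Reduced-form lag matrices of Eq. (21): Q_j = K0^{-1} K_j.
Q1, Q2, Q3 = (np.linalg.solve(K0, M) for M in (K1, K2, K3))

# Companion form: stacking z_i = [y_i' y_{i-1}' y_{i-2}']' gives a first-order
# system z_i = A z_{i-1} + ..., with A of dimension (15 x 15), matching the
# dimension of the matrix A-tilde noted in the text.
I5, Z5 = np.eye(5), np.zeros((5, 5))
A = np.block([[Q1, Q2, Q3],
              [I5, Z5, Z5],
              [Z5, I5, Z5]])

# Stability of the simulation system is judged from the eigenvalue moduli of A.
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
```

An eigenvalue modulus greater than one would signal an unstable system unsuitable for simulation. A root of modulus exactly one is expected here: it follows from the level-accumulation identities and corresponds to the common stochastic trend of the co-integrated levels.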

The very low value of Theil's inequality coefficient U in both cases is strong evidence that, apart from computing the indicated co-integration vector, we have also formulated a suitable ECVAR, so that the results obtained are undoubtedly robust.
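The exact variant of Theil's U is defined earlier in the chapter, not in this excerpt; a common form, assumed here, is the bounded U1 coefficient, RMSE(a, p) divided by the sum of the root-mean-squares of the two series:

```python
import numpy as np

def theil_u(actual, predicted):
    """Theil's inequality coefficient (U1 form, assumed here).

    Bounded in [0, 1]: 0 indicates a perfect fit, values near 0 indicate
    that the simulated series tracks the actual series closely.
    """
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((a - p) ** 2))
    return rmse / (np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(p ** 2)))
```

Applied to the actual and dynamically simulated $Y_i$ and $W_i$ series, a very low U is the "close tracking" evidence the text appeals to.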

5. Conclusions and Implications<br />

The purpose of this study is to contribute to the empirical investigation of<br />

the co-integration dynamics of the credit-income nexus, within the economic<br />

growth process of the post-war US economy, over the period from<br />

1957 to 2007.<br />

Utilizing contemporary co-integration analysis and applying vector ECM estimation, we place special emphasis on forecasting and system stability analysis. To the best of my knowledge, system stability and forecasting analysis of the kind applied here is rarely encountered in the comparable literature (see, for example, Arestis and Demetriades, 1997; Arestis et al., 2001; Demetriades and Hussein, 1996; Friedman and Kuttner, 1992, 1993; Levine and Zervos, 1998; Rousseau and Wachtel, 1998, 2000).

My results state clearly that there is no short-run causality effect from credit changes to income changes; it is only in the levels, that is, in the long run, that credit causes money income.

Figure 2. Results of dynamic simulations.

From my co-integration analysis, I found that the adjustment coefficient is significant, and its value (−0.0879) implies a satisfactory rate of adjustment of money income towards the long-run equilibrium.

Moreover, we observe a causality effect from income changes to credit changes in the short run for the post-war US economy. The validity of the credit-view position seems rather evident, taking into account the contemporary co-integration and system stability approaches.

Further, in terms of research implications, and following our contemporary co-integration and system stability approaches, we may stress the need for an in-depth investigation of the credit cycle and of fluctuations in commercial credit standards.13

References<br />

Angeloni, I, KA Kashyap and B Mojon (2003). Monetary Policy Transmission in the Euro Area, Part 4, Chap. 24. NY: Cambridge University Press.
Arestis, P and P Demetriades (1997). Financial development and economic growth: assessing the evidence. The Economic Journal 107, 783–799.
Arestis, P, PO Demetriades and KB Luintel (2001). Financial development and economic growth: the results of stock markets. Journal of Money, Credit and Banking 33(1), 16–41.
Bernanke, SB (1983). Non-monetary effects of the financial crisis in the propagation of the great depression. American Economic Review 73(3), 257–276.
Bernanke, SB (1988). Monetary policy transmission: through money or credit? Business Review, November/December, 3–11 (Federal Reserve Bank of Philadelphia).
Bernanke, SB and SA Blinder (1988). Credit, money, and aggregate demand. The American Economic Review, Papers and Proceedings 78(2), 435–439.
Bernanke, SB and SA Blinder (1992). The federal funds rate and the channels of monetary transmission. The American Economic Review 82(4), 901–921.
Bernanke, SB and M Gertler (1995). Inside the black box: the credit channel of monetary transmission. The Journal of Economic Perspectives 9(4), 27–48.
Bernanke, SB and H James (1991). The gold standard, deflation, and financial crisis in the great depression: an international comparison. In Financial Markets and Financial Crises, RG Hubbard (ed.), Chicago: University of Chicago Press for NBER, 33–68.
Brunner, K and HA Meltzer (1988). Money and credit in the monetary transmission process. American Economic Review 78, 446–451.
Demetriades, PO and K Hussein (1996). Does financial development cause economic growth? Journal of Development Economics 51, 387–411.

13 See, for example, Lown and Morgan (2006), for parallel research on commercial credit standards, following "traditional" VAR analysis.



Dickey, DA and WA Fuller (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74, 427–431.
Dickey, DA and WA Fuller (1981). Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica 49, 1057–1072.
Engle, RF and CWJ Granger (1987). Cointegration and error-correction: representation, estimation, and testing. Econometrica 55, 251–276.
Friedman, M (1961). The lag in the effect of monetary policy. Journal of Political Economy, October 69, 447–466.
Friedman, M (1964). The monetary studies of the national bureau. National Bureau of Economic Research, Annual Report. New York: National Bureau of Economic Research, 1–234.
Friedman, B and KN Kuttner (1992). Money, income, prices and interest rates. The American Economic Review 82(3), 472–492.
Friedman, B and KN Kuttner (1993). Another look at the evidence of money-income causality. Journal of Econometrics 57, 189–203.
Friedman, M and A Schwartz (1963). Money and business cycles. Review of Economics and Statistics, February (suppl. 45), 32–64.
Gertler, M and S Gilchrist (1993). The role of credit market imperfections in the monetary transmission mechanism: arguments and evidence. Scandinavian Journal of Economics 95(1), 43–64.
Goldsmith, RW (1969). Financial Structure and Development. New Haven, CT: Yale University Press.
Granger, CWJ and P Newbold (1974). Spurious regressions in econometric model specification. Journal of Econometrics 2, 111–120.
Hamilton, JD (1994). Time Series Analysis. Princeton: Princeton University Press.
Harris, R (1995). Using Cointegration Analysis in Econometric Modeling. London: Prentice Hall.
Holden, D and R Perman (1994). Unit roots and cointegration for the economists. In Cointegration for the Applied Economist, BB Rao (ed.), UK: St. Martin's Press, 47–112.
Johansen, S (1995). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. New York: Oxford University Press.
Johansen, S and K Juselius (1990). Maximum likelihood estimation and inference on cointegration with application to the demand for money. Oxford Bulletin of Economics and Statistics 52, 169–210.
Kashyap, KA and CJ Stein (1994). Monetary policy and bank lending. NBER Studies in Business Cycles 29, 221–256.
Kashyap, KA and CJ Stein (2000). What do a million observations on banks say about the transmission of monetary policy? The American Economic Review 90(3), 407–428.
Kashyap, KA, AO Lamont and CJ Stein (1994). Credit conditions and the cyclical behavior of inventories. The Quarterly Journal of Economics 109(3), 565–592.
King, GR and R Levine (1993). Finance, entrepreneurship and growth: theory and evidence. Journal of Monetary Economics 32, 1–30.
Lazaridis, A (2007). A note regarding the condition number: the case of spurious and latent multicollinearity. Quality & Quantity 41(1), 123–135.
Levine, R and S Zervos (1998). Stock markets, banks, and economic growth. American Economic Review 88, 537–558.



Lown, C and DP Morgan (2006). The credit cycle and the business cycle: new findings using the loan officer opinion survey. Journal of Money, Credit, and Banking 38(6), 1575–1597.
MacKinnon, J (1991). Critical values for co-integration tests. In Long-Run Economic Relationships, RF Engle and CWJ Granger (eds.), pp. 267–276. Oxford: Oxford University Press.
Mckinnon, RL (1973). Money and Capital in Economic Development. Washington, DC: Brookings Institution.
Meltzer, HA (1995). Monetary, credit and (other) transmission processes: a monetarist perspective. The Journal of Economic Perspectives 9(4), 49–72.
Mishkin, SF (1995). Symposium on the monetary transmission mechanism. The Journal of Economic Perspectives 9(4), 3–10.
Mouza, AM and D Paschaloudis (2007). Advertisement and sales: a contemporary cointegration analysis. Journal of Academy of Business and Economics 7(2), 83–95.
Murdock, K and J Stiglitz (1997). Financial restraint: towards a new paradigm. Journal of Monetary Economics 35, 275–301.
Rousseau, PL and P Wachtel (1998). Financial intermediation and economic performance: historical evidence from five industrialized countries. Journal of Money, Credit and Banking 30, 657–678.
Rousseau, PL and P Wachtel (2000). Equity markets and growth: cross-country evidence on timing and outcomes, 1980–1995. Journal of Banking and Finance 24, 1933–1957.
Shaw, ES (1973). Financial Deepening in Economic Development. New York: Oxford University Press.
Sims, CA (1972). Money, income, and causality. The American Economic Review 62(4), 540–552.
Sims, CA (1980a). Comparison of interwar and postwar business cycles. The American Economic Review 70, 250–277.
Sims, CA (1980b). Macroeconomics and reality. Econometrica 48, 1–48.
Stiglitz, EJ (1992). Capital markets and economic fluctuations in capitalist economies. European Economic Review 36, 269–306.
Stock, JM and MW Watson (1988). Testing for common trends. Journal of the American Statistical Association 83, 1097–1107.
Stock, J and MW Watson (1989). Interpreting the evidence on money-output causality. Journal of Econometrics, January, 161–181.
Swanson, N (1998). Money and output viewed through a rolling window. Journal of Monetary Economics 41, 455–473.
Tobin, J (1970). Money and income: post hoc ergo propter hoc? Quarterly Journal of Economics 84, 301–329.
Trautwein, M (2000). The credit view, old and new. Journal of Economic Surveys 14(2), 155–189.


INDEX<br />

adaptive control, 44, 49, 57, 59<br />

asymptotic co-variance, 46<br />

autocorrelation, 35, 36, 40, 182, 184, 185,<br />

187, 188, 191, 212, 215<br />

automatic stabilizer, 70, 71, 74, 76, 79, 80,<br />

96, 98<br />

balance of payments, 44, 49, 53<br />

bank credit, 202–205<br />

Bank of England, 69, 70, 74, 76–78, 90<br />

British fiscal policy, 69, 73, 81<br />

budget stability, 74<br />

business cycle, 9, 202<br />

cash-balance, 52<br />

causality, 201, 202, 204, 205, 214–216,<br />

218, 220<br />

Central Bank objective function, 85<br />

Cholesky’s factorization, 24<br />

Chow stability, 184<br />

co-integration, 23, 25, 28–32, 34, 41, 201,<br />

204–206, 208, 209, 212, 213, 217, 218,<br />

220<br />

collectivism, 143<br />

conformability, 26, 208<br />

conformity, 140, 143<br />

corporate culture, 139, 145<br />

credit and income, 201, 204<br />

credit demand, 202, 203<br />

critical value, 30, 33, 38–40, 188, 189, 212<br />

current deficits, 79, 83<br />

CUSUM, 185, 186, 188–191<br />

cyclical stability, 74<br />

Czech Republic, 173–177, 181–183,<br />

189–193, 196, 197, 199<br />

debt ratio, 72, 78, 92, 96, 99<br />

decision orientation, 132<br />

decomposition, 26, 27<br />

dependent central bank, 88, 90, 94<br />

depression, 9, 202, 203<br />

deterministic, 25, 41, 209, 212, 213, 217<br />

devaluation, 57, 176, 177<br />

diagnostic, 185–188, 191, 197<br />

Dickey–Fuller tests, 30, 206<br />

discount rate, 52, 57–59<br />

discretionary fiscal policy, 82<br />

discretionary tax revenues, 82<br />

disequilibrium, 25, 28, 208, 212–215<br />

domestic absorption, 49, 50, 53, 59<br />

dynamic response multiplier, 57<br />

Eastern European Countries, 173<br />

econometrics, 6, 9, 204<br />

ECVAR, 23, 25, 201, 204, 206–209, 213,<br />

215–218<br />

eigenvalues, 24–26, 211

emissions, 101, 103–106<br />

enterprise integration, 121, 125–128<br />

enterprise integration modeling, 121<br />

enterprise modeling, 121–124, 132<br />

environmental economics, 101–103<br />

environmental taxation, 101, 103<br />

ERM, 176, 193<br />

error co-variance matrix, 48<br />

error-correction, 23, 201, 205, 213, 214<br />

European Commission, 69, 79, 98<br />

Eurozone, 69, 72, 77–81, 88, 93–95<br />

ex-ante responses, 76<br />

exchange rate, 50, 53, 58, 60, 74,<br />

173–178, 180–183, 185, 187–193, 197<br />

exclusivity, 143<br />

feasibility region, 3, 5<br />

feasible set, 16<br />

feedback rule, 77<br />




fiscal expansions, 91, 92<br />

fiscal leadership, 69, 71, 73, 76–78, 81,<br />

85, 89–97, 99<br />

fixed parity, 176<br />

function oriented enterprise, 128<br />

functional entities, 122–124, 126<br />

fund-bank adjustment policy model, 49<br />

Gaussian, 44, 62<br />

Germany, 93, 94, 99, 174, 176, 181–187,<br />

190, 194, 197<br />

goal programming, 151, 155, 156, 161,<br />

168<br />

Hamiltonian, 111<br />

health care unit, 168<br />

health service management, 151<br />

heteroscedasticity, 35, 37, 39, 184, 185,<br />

194, 212, 215, 216<br />

Hou-Ren-Sou, 140, 143, 144<br />

human resources practices, 130, 142, 144<br />

human ware, 141<br />

hybrid modeling constructs, 130<br />

imitation-type dynamics, 103, 106<br />

independent monetary policies, 71, 75<br />

Indian economy, 49, 50<br />

inflation, 69, 70, 72–74, 76–80, 82, 84, 85,<br />

88–96, 99, 153, 164, 173–179,<br />

181–183, 185–193<br />

inflation control, 70, 73, 79, 92, 173, 174,<br />

176, 177, 191, 192<br />

input-output, 6–9, 11, 12, 14<br />

institutional parameters, 76, 86, 96<br />

integration challenges, 121<br />

inter-temporal optimization problem, 107<br />

interactive voice response, 153<br />

interest rate, 38, 50–52, 57–60, 75, 77–81,<br />

176, 178, 179, 181, 191, 192<br />

Japanese national culture, 137, 139, 140,<br />

147<br />

Japanese Organizational culture and<br />

corporate performance, 137<br />

Jarque–Bera criterion, 29

just in time, 144<br />

Kaizen, 140–142<br />

Kronecker product, 46

leadership, 69–73, 76–79, 81, 85, 86,<br />

89–97, 99, 103, 130, 137, 139, 142,<br />

144, 145<br />

lean production system, 141, 142<br />

linear constraints, 3, 10<br />

linear programming, 3, 4, 6–8, 10, 14<br />

liquidity, 203<br />

LISREL, 145<br />

Maastricht criteria, 173, 176, 177<br />

macro values, 138, 143<br />

management support, 131<br />

mathematical programming, 3, 156<br />

maximization, 7, 82, 112, 151<br />

maximum likelihood, 24, 49, 205<br />

meso values, 138<br />

meta-analysis, 117<br />

micro values, 138, 143<br />

minimum variance, 25, 31–33, 41<br />

ML method, 24, 25, 28, 31, 32, 34, 40,<br />

208, 211, 215, 216<br />

modeling method, 124<br />

monetary and fiscal policy, 43, 44, 49, 53,<br />

56–59, 82, 84, 93<br />

monetary policy, 43, 44, 49, 53, 56–59,<br />

69–82, 84, 85, 91–95, 98, 102, 103,<br />

173, 174, 176, 178, 179, 185, 186, 188,<br />

190, 192, 202–204<br />

multicollinearity, 215, 216<br />

multi functional team, 142<br />

multiple equilibria, 101, 112, 113<br />

Nash equilibrium, 81<br />

Nash solution, 70, 102, 107, 110<br />

New Cambridge model, 50<br />

new constraint, 161, 162<br />

new credit, 202<br />

non-believers, 103, 105–107, 117<br />

object request brokers, 126<br />

old credit, 202<br />

OLS, 23, 25, 28–32, 183, 184, 213<br />

optimal control, 45, 46, 50, 55



optimal currency area, 175<br />

optimization, 3, 4, 7, 16, 44, 49, 91, 107,<br />

110–112<br />

organizational culture, 130, 137–139, 142,<br />

144<br />

output gap, 72, 77–82, 179–183, 187, 189,<br />

191<br />

Pareto-improvement, 110<br />

Pareto-inferior, 116<br />

Phillips curve, 87, 98, 178<br />

Poland, 173–177, 181, 182, 185, 187, 188,<br />

190–193, 195, 197, 198<br />

primary deficit, 72, 79<br />

pro-cyclicality, 85, 87<br />

product modeling, 122<br />

productivity, 175<br />

pseudo-inverse, 26<br />

public sector deficit, 84<br />

quadratic function, 10<br />

quality circle, 141<br />

rational expectations, 44, 86, 179<br />

reduced form, 44, 46, 48, 58, 59, 82, 84,<br />

179<br />

reduced form coefficients, 44, 46, 48<br />

regulator, 76, 101–111, 113–117<br />

regulator’s trustworthiness, 115<br />

residuals, 24, 30, 35, 36, 39, 54, 179, 184,<br />

185, 187, 188, 191, 194<br />

resource information, 122<br />

Riccati equations, 45

robustness, 55<br />

sequential Nash game, 103, 107<br />

short-run stabilization, 70, 76, 80<br />

simplex algorithm, 6, 15<br />

singular value, 26–28, 31, 33<br />

Skiba point, 112, 115, 116<br />

Slovenia, 173–177, 181–183, 185,<br />

190–193, 196, 197, 200<br />

stability, 11, 54, 55, 57, 74, 76, 80–84, 95,<br />

113, 144, 176, 178, 184, 185, 191, 193,<br />

201, 202, 204–206, 210, 211, 214, 218,<br />

220<br />

stabilization policy, 43, 71<br />

standard deviation, 30, 32–34<br />

state variable form, 49, 50<br />

statically optimal, 109, 113<br />

steady-state, 82, 83<br />

structural deficit, 79, 80<br />

structural model, 44, 49, 178<br />

symmetric stabilization, 75<br />

tax revenue, 50, 51, 57, 58, 60, 82, 83, 95,<br />

96, 104, 106<br />

Theil’s inequality, 206, 218<br />

time varying responses, 43<br />

time-invariant econometric model, 47<br />

time-varying model, 44, 47<br />

TQM, 141, 142, 144<br />

transition, 45, 50, 54, 173–176, 178–180,<br />

185, 186, 188, 189, 191, 192<br />

transmission of monetary policy, 173, 178,<br />

179, 190, 192, 203<br />

UK government, 69<br />

underutilization, 158, 160<br />

unstable middle equilibrium, 115<br />

updating method, 44, 46<br />

US economy, 201, 202, 204, 218, 220<br />

VAR, 23, 25, 27, 28, 32, 44, 178, 201, 205,<br />

207, 213, 220
