School of Doctoral Studies Journal
European Union
Number 1, July 2009. Annual publication of the School of Doctoral Studies of the European Union at the Isles Internationale Université (European Union), Brussels, Belgium. Published by the IIU Press and Research Centre, A.C.

Contents

Editorial Note 3
Words from the Vice-Chancellor (Rector) 5
Instructions for Contributors 6

Articles (Business Management and Economics; Engineering and Technology; Science; Social Science)

Projects' Analysis through CPM (Critical Path Method) - Peter Stelth, Guy Le Roy 9
Income Disparity Measurement - Alexandre Popov, Stefen L. Freinberg 52
Analysis on the European Union Regional Policy - Jean Malais, Henk Haegeman 77
Sales and Advertisement Relationship for Selected Companies Operating in India - Dr. Suparn Sharma, Dr. Jyoti Sharma 83
Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand - Ernst Giger, Rudolf Prem, Michael Leen 98
Corrosion in Concrete Bridge Girders - Walter Unterweger, Kurt Nigge 125
Maple and other CAS (Computer Algebra Systems) applied to Teaching and Assessing Mathematics - Robert Thomson, Arelli Santaella, Mark W. Boulat 136
Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD - Hansel Undsen, Melissa Brant, Jose Carlos Arias 171
Feasibility of overcoming the technological barriers in the construction of nanomachines - Günter Carr, Jeffrey Dessler 210
Society's Identity Search through Art - Simone Rothschild, Jim Curtsinger 220
Analysis on Modernism and Literary Impressionism - Donatella Petri, Anna Richardson 232
A Study of Dyslexia among Primary School Students in Sarawak, Malaysia - Rosana Bin Awang Bolhasan 250
Understanding of Religion and the Role Played by Cultural Sociology in the Process - Sheila Vaugham 269
Policy of Preemption or the Bush Doctrine - Ana Dresner 281

General Information 286


School of Doctoral Studies (European Union) Journal

EDITORIAL NOTE

The School of Doctoral Studies (European Union) Journal (SDSJ) publishes research analysis and inquiry into issues of importance to the academic and scientific community. Articles in SDSJ examine emerging trends and concerns in the areas of business management, economics, engineering, technology, and natural and social science. The goal of SDSJ is to broaden the knowledge of students, academics and society in general by promoting free access and providing valuable insight into research and ideas, presenting what the Faculty Members of the School of Doctoral Studies of the European Union consider to be relevant contributions to human knowledge. SDSJ is an annual publication (one volume per year) and all articles are peer-reviewed. It is published by the IIU Press and Research Centre, A.C. for the School of Doctoral Studies of the European Union, hosted at the Isles Internationale Université (European Union) in Brussels, Belgium.

School of Doctoral Studies of the European Union
Brussels EU Parliament Building
Square de Meeus 37 - 4th Floor
1000 Brussels, Belgium
http://www.iiuedu.eu/school/about.html

IIU Press and Research Centre, A.C.
9, Boulevard de France, bât A
1420 Braine-L'Alleud, Belgium
http://www.iiuedu.eu/iiupress/sdsj.html

Associated institutions:
Isles Internationale Université, School of Doctoral Studies, European Union - http://www.iiuedu.eu - admin@iiuedu.eu
European Business School (Cambridge, UK) - www.ebscambridge.ac - info@ebscambridge.ac
The Cambridge Association of Managers - www.cam-uk.org - admin@cam-uk.org
The Oxford Association of Management - www.oxim.org - admin@oxim.org
The International Management Institute (Brussels) - http://www.timi.edu/contact.html - admin@ebs.edu.sg
European Business School (Singapore) - http://www.ebs.edu.sg/contact.php - admin@ebs.edu.sg

EDITOR: Dr. Anne D. Surrey
ASSOCIATE EDITORS: Dr. Mathew Block, Professor Robert Munier, Professor Alexandra Moffett, Professor Anuar Shah
REVIEWERS' COORDINATORS: Professor Ivanna Petrovska, Professor Melissa Brant, Professor József Ruppel, Dr. Paul Stenzel
EDITION DESIGN: Pablo Gamez Olivo, José Juan Pérez Pérez, José Antonio Arias Coy
PUBLISHERS: IIU Press and Research Centre, A.C.
Printed in Canada
ISSN 1918-8722 (printed version); ISSN 1918-8730 (online version)




Words from the Isles Internationale Université's Vice-Chancellor (Rector)

Few things can be more exciting than looking at a tiny leaf emerging from the earth when it has been us who planted the seed and carefully watered the soil for a long time; as Lao Tzu said many years ago: "A tree trunk the size of a man grows from a blade thin as a hair. A tower nine stories high is built from a small heap of earth."

As a self-governed community of scholars, we are extremely pleased to present the first number of the School of Doctoral Studies (European Union) Journal, which is the result of the shared effort of our faculty members, researchers and postdoctoral students to provide the SDS with a tool for presenting to the world what we all consider to be relevant contributions to human knowledge. Most of these contributions have been proudly generated from our scholars' research work over years of dedicated effort; other works admitted and presented in the SDSJ come from the research of highly recognized academicians and scientists at tuition institutions or research centers outside the SDS, and have been selected for publication in the SDSJ as our postgraduate school's humble recognition of these scholars' contribution to humankind's progress.

As the flagship of our fleet of journals, the SDSJ's structure was designed around our own School of Doctoral Studies' departments; the SDSJ therefore includes four main sections, each covering subjects and topics related to the disciplines of one of the school's departments: Business Management and Economics, Engineering and Technology, Natural Science and Social Science.

I want to warmly thank and congratulate our SDSJ's editorial body for its superb effort in the careful, detailed selection of the works published in this first number from over two thousand research works received during 2008. This has been an extremely hard task, especially as very many other works also deserved this distinction; nevertheless, knowing the supreme quality of our students', researchers' and faculties' minds, all those who know the secret will agree with Albert Einstein's words: "In light of knowledge attained, the happy achievement seems almost a matter of course, and any intelligent student can grasp it without too much trouble. But the years of anxious searching in the dark, with their intense longing, their alternations of confidence and exhaustion and the final emergence into the light -- only those who have experienced it can understand it."

Most Cordially,

Professor Jose Carlos Arias (PhD, DBA)
Vice-Chancellor (Rector)


Information for Contributors

Electronic submission of manuscripts is strongly encouraged, provided that the text, tables, and figures are included in a single Microsoft Word file (preferably in Times New Roman, 12-point font).

Submit the manuscript as an e-mail attachment to the SDSJ Editorial Office at edit.sdsj@iiuedu.eu. A manuscript number will be mailed to the corresponding author within the following 7 days.

The cover letter should include the corresponding author's full address and telephone/fax numbers and should be in an e-mail message sent to the Editor, with the file, whose name should begin with the first author's surname, as an attachment. The authors may also suggest two to four reviewers for the manuscript (SDSJ may designate other reviewers).

SDSJ will only accept manuscripts submitted as e-mail attachments.

Article Types

Three types of manuscripts may be submitted:

Regular Articles: These should describe new and carefully confirmed findings, and research methods should be given in sufficient detail for others to verify the work. The length of a full paper should be the minimum required to describe and interpret the work clearly.

Short Communications: A Short Communication is suitable for recording the results of complete small investigations or for giving details of new models, innovative methods or techniques. The style of the main sections need not conform to that of full-length papers. Short Communications are 2 to 4 printed pages (about 6 to 12 manuscript pages) in length.

Reviews: Submissions of reviews and perspectives covering topics of current interest are welcome and encouraged. Reviews should be concise and no longer than 4 to 6 printed pages (about 12 to 18 manuscript pages). Review manuscripts are also peer-reviewed.

Review Process

All manuscripts are reviewed by an editor and members of the Editorial Board or qualified outside reviewers. Decisions will be made as rapidly as possible, and the journal strives to return reviewers' comments to authors within 3 weeks. The editorial board will re-review manuscripts that are accepted pending revision. It is the goal of the SDSJ to publish manuscripts within the SDSJ edition following submission.

Regular Articles

All portions of the manuscript must be typed double-spaced and all pages numbered starting from the title page.

The Title should be a brief phrase describing the contents of the paper. The Title Page should include the authors' full names and affiliations, and the name of the corresponding author along with phone, fax and e-mail information. Present addresses of authors should appear as a footnote.

The Abstract should be informative and completely self-explanatory, briefly present the topic, state the scope of the work, indicate significant data, and point out major findings and conclusions. The Abstract should be 100 to 200 words in length. Complete sentences, active verbs, and the third person should be used, and the abstract should be written in the past tense. Standard nomenclature should be used and abbreviations should be avoided. No literature should be cited. Following the abstract, about 3 to 10 key words that will provide indexing references should be listed.

A list of non-standard Abbreviations should be added. In general, non-standard abbreviations should be used only when the full term is very long and used often. Each abbreviation should be spelled out and introduced in parentheses the first time it is used in the text.

The Introduction should provide a clear statement of the problem, the relevant literature on the subject, and the proposed approach or solution.


It should be understandable to colleagues from a broad range of disciplines.

Materials and methods should be complete enough to allow possible replication of the research. However, only truly new research methods should be described in detail; previously published methods should be cited, and important modifications of published methods should be mentioned briefly. Capitalize trade names and include the manufacturer's name and address. Subheadings should be used. Methods in general use need not be described in detail.

Results should be presented with clarity and precision. The results should be written in the past tense when describing the authors' findings. Previously published findings should be written in the present tense. Results should be explained, but largely without referring to the literature. Discussion, speculation and detailed interpretation of data should not be included in the Results but should be put into the Discussion section.

The Discussion should interpret the findings in view of the results obtained in this and in past studies on the topic. State the conclusions in a few sentences at the end of the paper. The Results and Discussion sections can include subheadings, and when appropriate, both sections can be combined.

The Acknowledgments of people, grants, funds, etc. should be brief.

Tables should be kept to a minimum and be designed to be as simple as possible. Tables are to be typed double-spaced throughout, including headings and footnotes. Each table should be on a separate page, numbered consecutively in Arabic numerals and supplied with a heading and a legend. Tables should be self-explanatory without reference to the text. The details of the research methods should preferably be described in the legend instead of in the text. The same data should not be presented in both table and graph form or repeated in the text.

Figure legends should be typed in numerical order on a separate sheet. Graphics should be prepared using applications capable of generating high-resolution GIF, TIFF, JPEG or PowerPoint files before pasting in the Microsoft Word manuscript file. Tables should be prepared in Microsoft Word. Use Arabic numerals to designate figures and upper case letters for their parts (Figure 1). Begin each legend with a title and include sufficient description so that the figure is understandable without reading the text of the manuscript. Information given in legends should not be repeated in the text.

References: In the text, a reference identified by means of an author's name should be followed by the date of the reference in parentheses. When there are more than two authors, only the first author's name should be mentioned, followed by 'et al.'. In the event that an author cited has had two or more works published during the same year, the reference, both in the text and in the reference list, should be identified by a lower case letter like 'a' and 'b' after the date to distinguish the works.

Examples:

Smith (2000), Wang et al. (2003), (Kelebeni, 1983), (Usman and Smith, 1992), (Chege, 1998; Chukwura, 1987a,b; Tijani, 1993, 1995), (Kumasi et al., 2001)

References should be listed at the end of the paper in alphabetical order. Articles in preparation or articles submitted for publication, unpublished observations, personal communications, etc. should not be included in the reference list but should only be mentioned in the article text (e.g., A. Kingori, University of Nairobi, Kenya, personal communication). Journal names are abbreviated according to Chemical Abstracts. Authors are fully responsible for the accuracy of the references.

Examples:

Papadogonas TA (2007). The financial performance of large and small firms: evidence from Greece. Int. J. Financ. Serv. Manage. 2(1/2): 14-20.

Mihiotis AN, Konidaris NF (2007). Internal auditing: an essential tool for adding value and improving the operations of financial institutions and organizations. Int. J. Financ. Serv. Manage. 2(1/2): 75-81.

Gurau C (2006). Multi-channel banking in Romania: a comparative study of the strategic approach adopted by domestic and foreign banks. Afr. J. Financ. Servic. Manage. 1(4): 381-399.


Yoon CY, Leem CS (2004). Development of an evaluation system of personal e-business competency and maturity levels. Int. J. Electron. Bus. 2(4): 404-437.

Short Communications

Short Communications are limited to a maximum of two figures and one table. They should present a complete study that is more limited in scope than is found in full-length papers. The items of manuscript preparation listed above apply to Short Communications with the following differences: (1) Abstracts are limited to 100 words; (2) instead of a separate Materials and Methods section, research methods may be incorporated into Figure Legends and Table footnotes; (3) Results and Discussion should be combined into a single section.

Proofs and Reprints

Electronic proofs will be sent (as an e-mail attachment) to the corresponding author as a PDF file. Page proofs are considered to be the final version of the manuscript. With the exception of typographical or minor clerical errors, no changes will be made in the manuscript at the proof stage. Because SDSJ will be published online without access restrictions, authors will have electronic access to the full text (PDF) of the article. Authors can download the PDF file, from which they can print unlimited copies of their articles.

Copyright

Submission of a manuscript implies: that the work described has not been published before (except in the form of an abstract or as part of a published lecture or thesis); that it is not under consideration for publication elsewhere; and that, if and when the manuscript is accepted for publication, the authors agree to automatic transfer of the copyright to the publisher.

Costs for Authors

Revision, edition and publishing costs will be totally paid by Secured Assets Yield Corporation Limited, and the authors' sole contribution will be providing BIS with their invaluable work.

Publication Decisions

Decisions by the editor on all submitted manuscripts reflect the recommendations of members of the Editorial Board and other qualified reviewers using a "blind" review process. Reviewers' comments are made available to authors. Manuscripts that are inappropriate or insufficiently developed may be returned to the authors without formal review for submission to a more suitable journal or for resubmission to SDSJ following further development.

Manuscripts submitted will be judged primarily on their substantive content, though writing style, structure and length will also be considered. Poor presentation is sufficient reason for the rejection of a manuscript. Manuscripts should also be written as concisely and simply as possible, without sacrificing clarity or meaningfulness of exposition. Manuscripts will be evaluated by the editor when first received on their contribution-to-length ratio, meaning that manuscripts making strong contributions will be assigned more pages than those making narrower contributions. Papers intended to make very extensive contributions (over 35 double-spaced pages, using one-inch margins and twelve-point Times New Roman font) will, at the discretion of the editor, be allotted additional space. Authors are expected to get and use feedback from colleagues prior to submitting a manuscript for formal review.


Business Management and Economics Section

Content

Projects' Analysis through CPM (Critical Path Method)
CPM, a technique for analyzing projects by determining the longest sequence of tasks (or the sequence of tasks with the least slack) through a project network.
Peter Stelth, Guy Le Roy

Income Disparity Measurement
This paper discusses the problems of measuring income disparity, especially in the developing world.
Alexandre Popov, Stefen L. Freinberg

Analysis on the European Union Regional Policy
An examination of the goals and operation of European Union regional policy to address income inequality among member regions.
Jean Malais, Henk Haegeman

Sales and Advertisement Relationship for Selected Companies Operating in India
A Panel Data Analysis.
Dr. Suparn Sharma, Dr. Jyoti Sharma

Department's Reviewers

Deputy Head of Department - Business Management: Dr. Mathew Block
Chair of Accounting and Finance Studies: Prof. Ira Joubert
Chair of Human Resources Studies: Prof. Beverly Lanting
Chair of Marketing Studies: Prof. Frans Cooper
Chair of Operations and Production Studies: Prof. Guy Le Roy
Chair of Information and Knowledge Management Studies: Prof. Mitsuaki Uno
Chair of Leadership and Corporate Policy Studies: Prof. Tui Unterthiner
Chair of Change, Conflict and Crisis Studies: Prof. Takako Iwago
Deputy Head of Department - Economics: Dr. Paul Stenzel
Chair of Mathematical and Quantitative Methods: Prof. Maria Nicklen
Chair of Microeconomics: Prof. Carlo Grunewald
Chair of Macroeconomics and Monetary Economics: Prof. Stefen L. Freinberg
Chair of International Economics: Prof. Mitchell Alvarez
Chair of Schools of Economic Thought and Methodology: Prof. Vincent Haidinger


Projects' Analysis through CPM (Critical Path Method)

Peter Stelth (MSc)
Master of Science and candidate to PhD in Management Science at the School of Doctoral Studies, Isles Internationale Université (European Union)

Professor Guy Le Roy (PhD)
Chair of Operations and Production Studies of the Department of Business Management and Economics, at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

CPM is a technique for analyzing projects by determining the longest sequence of tasks (or the sequence of tasks with the least slack) through a project network. Organizations today are increasingly using virtual project management teams, procuring expertise and materials from all corners of the world; CPM and CCM processes are therefore even more complicated than in the past. These environments also create their own problems and bottlenecks that have to be considered when studying any process or situation. The need to increase profits and revenues has forced many establishments to try to optimize their resources. Every organization is created to serve and develop specific functions, procedures, and responsibilities. If these goals are achieved properly, the long-term stability of the organization is accomplished and, in many cases, guaranteed. Increasing efficiency and productivity have always been key factors in implementing any change.

Key words: Management Engineering, Operations Management, Network Diagram, Complexity, Steps, Team


Introduction

Managing and running organizations has been an evolutionary process over the ages, and such processes have been undergoing many structural changes. Organizations have shifted from functionally managed structures to project-based organizational structures. Consequently, project management in organizations is becoming increasingly important; indeed, it is critical for the success of the company. Most of the above-mentioned process changes have occurred in the last three decades. Irrespective of the type of industry or the domain, the need for managerial and structural change is being observed.

This chapter aims to give an outline and scope of the study that will be undertaken in this work. Any study that can help organizations understand the factors that impact the management of resources in an organization is beneficial. The focus of this study is to understand and evaluate critical paths and critical chains in projects. Once again, this too is critical: the ability of the project team to identify these paths and formulate policies and procedures to measure and monitor the critical paths is paramount.

Background of the study

Project management is not a new concept for organizations or managers. The concepts and ideas behind effective project management are, however, constantly undergoing modification and improvement. A DuPont engineer, Morgan R. Walker, and a Remington-Rand computer expert, James E. Kelly, Jr., initially conceived the Critical Path Method (CPM). They created a unique way of representing the operations in the system; their methods involved using "unique arrow filled diagrams or network methods" in 1957. (Archibald and Villoria, 1966; Korman, 2004) At approximately the same time, the U.S. Navy initiated a project called PERT (Program Evaluation Research Task) in order to provide naval management with an effective manner by which it could periodically evaluate the information of the new Fleet Ballistic Missile (FBM) program. The US Navy could obtain valid information on the progress of the project and also have a reasonably accurate projection of the completion of the project as desired. It is, however, important to note that PERT "deals only with the time constraints and does not include the quantity, quality and cost information desired in many projects; PERT should, therefore, be integrated with other methods of planning and control." (Evarts, 1964)

While these two methods were revolutionary for their times, the true impact of the concepts of CPM and PERT was often not as complete and holistic in their applicability. Theoretical and practical implementations of CPM and PERT also identified many areas of improvement over the years. The Theory of Constraints in the 1980s, followed by the Critical Chain, complemented many of the concepts of these two earlier tools.

Every organization is driven to succeed. An organization's success or failure often depends on the clarity of its goals and objectives, and it is often the management that defines these. (Morgan, 1998) In this environment, therefore, it is important to identify the paths and options that managers and decision makers can utilize and refer to for complex and difficult issues. There are many related factors, such as human perception and organizational culture and values, that affect the implementation of these new models and tools. Organizations, realizing that project managers and team members are often more involved with 'fighting fires' during the execution of the various tasks than with project completion, are always open to the idea of finding new methods that can help projects become more streamlined and successful. The Pareto Principle, or the '80:20 rule', provides a realistic picture of the time utilization most commonly arising: 80% of most effort is spent handling just 20% of the most important factors of a task. In the modern work environment, lead times and times to market often determine the extent of profitability that can be obtained by any company. This is also true for construction projects, where the timely delivery of a building for residential or commercial use can determine profits.


Purpose of this study

The main goal of any organization is to generate profits and revenue for its stakeholders. The task of determining how to run a lean and trim operation for any organization is complicated by issues such as manufacturing and operational lead times, replenishment cycles, unexpected surges in demand for a product, review frequency, and the failure of all involved in operations to establish realistic target service levels. The idea of balancing flow (and not capacity) throughout the plant is considered the starting point for implementation of the Optimized Production Technology (OPT) program proposed by Dr. Goldratt. The need to constantly generate profits for any organization forces management within the organization to evaluate and understand the internal and external factors that have the potential to create the most variance. Management of organizations is a complex process. In turn, organizations constantly seek methods and use tools that will help them understand their operations and optimize their operating processes for higher profits. By identifying the salient features of critical paths and critical chains, this study hopes to offer the reader insight into the potential problem areas and the methodologies or options that can be used to understand and evaluate the problem. In addition, this study also evaluates the similarities and differences in the concepts of the critical path and the critical chain.

This thesis aims to study the following topics:

• Advantages and disadvantages of the Critical Path Method (CPM)
• Advantages and disadvantages of the Critical Chain Method (CCM)
• The impact of CPM and CCM on project management
• The complementarity of the two methods and the ability to use both in conjunction for any project being implemented
• The issues of scheduling and the role of tracking and monitoring of the project's progress from start to completion
• The effect of leadership, project team working and decision-making styles on the CPM and the CCM used for projects

It is important to recognize, however, that the discussion of this topic is generic; individual organizations wanting to implement and use this study might need to understand their own internal factors and culture. Evaluating the process is, however, the first step in any improvement and change process in an organization. Scheduling, supply chain management and logistics planning in an organization are important factors in the successful achievement of any project.

Importance of this study

The mission and the goals defined in the organization are often the guiding factors in any strategy planning. Understanding the core competencies of the organization and the supporting factors needed to achieve the objectives should be the basis of any knowledge management endeavor of the organization. Many external factors, such as competition in the industry for the same product or services, and business strategies such as customer relationship management, supply chain management, and logistics and planning, all depend on an understanding of the critical path and critical chain that the project goes through from start to finish. In the current marketplace, customers are becoming more aware of the choices available to them. Competition is more on a global scale than on a regional scale for any organization.

Lead times are shorter. Product maturity periods are also shorter. Obsolescence of products takes place within a shorter duration of time. Profitable periods are shrinking constantly. Most organizations are realizing that too many poor product launches can cost the company its reputation and consequently its profit margin.


By recognizing potential problems that can occur, decision-makers in project management situations can plan and prepare accordingly for the situation. Competition is very intense for modern-day organizations. Companies are increasingly striving to differentiate their products and services in the market in order to gain higher profits and market share.

It is beyond the scope of any one study to completely investigate the impact of every variable that has created conditions that can disrupt a project from timely completion or launch. With this in mind, this study aims to identify the relationship between a few variables in CPM and CCM and to answer some of the questions that typically arise in association with the application of these tools to projects.

Scope of the study

This thesis investigates CPM and CCM methodologies only in a very generalized format, without stressing the importance of these factors for any specific industry. For example, the project management needs of the construction industry can differ significantly from those of the software development industry. While both industries often use project teams for implementation of the tasks and completion of the project, the approaches to using CPM and CCM might not always run parallel in vision of execution. CPM and CCM will differ considerably even within the same industry, based on the internal culture and the mission objectives of the organization. An ideal example is the construction industry, where a project for the construction of a residential complex will differ considerably from the construction of a hazardous research facility: although the variables involved, such as procurement of material and hiring of labor, might be the same at a fundamental level, the expertise required will differ significantly between a residential and a hazardous facility construction.

Organizations today are also increasingly using virtual project management teams. They are procuring expertise and materials from all corners of the world. Therefore, CPM and CCM processes are even more complicated than in the past. These environments also create their own problems and bottlenecks that have to be considered when studying any process or situation. The need to increase profits and revenues has forced many establishments to try to optimize their resources. Every organization is created to serve and develop specific functions, procedures, and responsibilities. If these goals are achieved properly, the long-term stability of the organization is accomplished and, in many cases, guaranteed. Increasing efficiency and productivity have always been key factors in implementing any change.

Definition of terms

The terms defined below are used at regular intervals in this thesis. Definitions of these terms are provided to ensure that the reader understands the context within which they are used and applied. It is assumed that the reader will have some prior knowledge of the topic; as a result, only a few important terms are defined in this section. If additional terms are used, they will be defined when introduced in this thesis.

Organization: "A company, corporation, firm, enterprise or institution, or part or combination thereof, whether incorporated or not, public or private, that has its own functions and administration." (GDRC, 2004)

Project: "an endeavor to accomplish a specific objective through a unique set of interrelated tasks and the effective utilization of resources." (Gido and Clements, 2003) Any project requires a plan of action in order to accomplish desired results; thus, it requires "project management."

Critical Path: "In project management, a critical path is the sequence of project network terminal elements with the longest overall duration, determining the shortest time to complete the project." (WordHistory, 2004)


Critical Path Method (CPM): "a technique for analyzing projects by determining the longest sequence of tasks (or the sequence of tasks with the least slack) through a project network." (Newbold, 1998) By concentrating on the most critical tasks it can be ensured that the project is on time and keeping pace with the schedule set up.

Activity: A specific set of tasks, or a single task, that is required to be completed to ensure the completion of the project. All activities are related to each other, and these relationships are called dependencies. (Archibald and Villoria, 1966)

Event: The result of completing one or more activities. (Meredith and Mantel, 1995) It is a discrete point in time in the project life span. An event does not consume the project's time or resources.

Crash Time: the amount of time it would take to complete an activity if management wished to allocate additional resources to that activity (a brief illustrative sketch follows this list of definitions).

Critical Chain: "The critical chain, in project management, is the sequence of both precedence- and resource-dependent terminal elements that prevents a project from being completed in a shorter time, given finite resources." (Wikipedia, 2004)

Parkinson's Law: Work expands to fill (and often exceed) the time allowed.

Murphy's Law: Whatever can go wrong, will.

Project Buffer: Safety time introduced at the end of the critical chain, prior to the due date, to ensure that the project will be completed by the time it is due.

Feeding Buffers: Safety time introduced after non-critical activities before they feed into the critical chain. This is done to ensure that the non-critical activities are always completed prior to their requirement on the critical chain.

Resource Buffers: "usually in the form of an advance warning, are placed whenever a resource has to perform an activity on the critical chain, and the previous critical chain activity is done by a different resource." (Herroelen et al., 2002)

Theory of Constraints: A management philosophy that "provides tools and concepts that can help make people and organizations more productive according to their goals." (Newbold, 1998)

Bottleneck: a resource whose capacity is less than or equal to the market demand. Bottleneck production should be on par with the market demand, and the data collected from a bottleneck process has to be accurate.

Schedule: "the plan for completion of a project based on a logical arrangement of activities, resources available (emphasis added), and imposed dates or funding budgets." (AACE, 1990)

Optimized Production Technology (OPT) philosophy: "the sum of local optimums does not equal the global optimum." Scheduling and prioritizing can help a manufacturing organization get its products to the market on time.

Globalization: "refers to the process of increasing social and cultural inter-connectedness, political interdependence, and economic, financial and market integrations. Globalization makes alliances an integral part of a firm's strategy to better satisfy customers and to achieve sustainable competitive advantage." (Thoumrungroje and Tansuhaj, 2004)

Stakeholders: "Those who are affected by a development outcome or have an interest in a development outcome. Stakeholders include customers (including internal, intermediate, and ultimate customers) but can include more broadly all those who might be affected adversely, or indirectly, by" an activity of the company. (USAID, 2004) Stakeholders include employees, suppliers, creditors, customers, shareholders, local communities and anyone else who is affected by the operations of the business.

Environment: "the sum of all external influences and forces acting upon an object", where the object can be either an individual or an organization. (Sewell, 1975) The traditional definition of an environment with reference to an organization is: "All the elements that lie beyond the boundary of the organization and have the potential to affect all or a part of the organization." (Daft, 1997)

Organizational Culture: Schein classifies organizational culture into three distinct levels: 1) the easiest to observe and notice are the "artifacts", aspects such as the dress and language used, which are easy to discern but have their own symbolism; 2) at a stratum below artifacts are the "espoused values", the consciously identified strategies, goals and philosophies of the organization; 3) "the core, or essence, of culture is represented by the basic underlying assumptions and values, which are difficult to discern because they exist at a largely unconscious level, yet provide the key to understanding why things happen the way they do." (Schein, 1992)

Organizational Structure: A structure is an entity (such as an organization) made up of elements or parts (such as people, resources, aspirations, market trends, levels of competence, reward systems, departmental mandates, and so on) that impact each other by the relationships they form. A structural relationship is one in which the various parts act upon each other and consequently generate particular types of behavior. (Fritz, 1996) Organizational structure defines the command, control and feedback relationships among employees in an agency, and the information that they might require to complete their tasks.
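To make the crash-time definition above concrete, the sketch below shows the standard time-cost trade-off calculation that crash times are normally used for: estimate the extra cost per unit of time saved for each critical activity, then crash the cheapest one first. The activity names, durations and costs are purely hypothetical, and the heuristic is a generic textbook procedure rather than one taken from this article.

```python
# Hypothetical data for activities assumed to lie on the critical path:
# (normal_duration, crash_duration, normal_cost, crash_cost), in days and currency units.
critical_activities = {
    "excavation": (10, 7, 4000, 5800),
    "foundation": (12, 9, 9000, 12000),
    "framing":    (15, 12, 14000, 15500),
}

def crash_cost_per_day(normal_d, crash_d, normal_c, crash_c):
    """Extra cost incurred for each day saved by crashing the activity."""
    return (crash_c - normal_c) / (normal_d - crash_d)

# Rank critical activities by how cheaply each one buys schedule compression.
ranked = sorted(critical_activities.items(),
                key=lambda item: crash_cost_per_day(*item[1]))

for name, (nd, cd, nc, cc) in ranked:
    rate = crash_cost_per_day(nd, cd, nc, cc)
    print(f"{name}: up to {nd - cd} days saved at {rate:.0f} per day")

# The cheapest activity per day saved would normally be crashed first,
# re-checking after each step whether the critical path has shifted.
```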


Limitations of the study

This study is conducted using a secondary research method. While primary research can identify the trends and issues in any one organization or manufacturing sector, it does not provide a complete and comprehensive overview of every industry. The data evaluated for this study was obtained from books, magazine articles, journal articles and other peer-reviewed periodicals, and from the Internet. This study reflects the common trends observed in the literature reviewed. The literature is largely based on the personal opinions and viewpoints of individuals who have worked extensively in this field, and it constitutes an important aspect of the printed and published opinion. Adequate corroborating information was sought and reviewed to present as complete a picture of this topic as possible without concentrating extensively on any one topic or area of concern. Sufficient corroborating information was verified for any given point of view prior to introducing the concept in this thesis.

There is an advantage to conducting secondary research as compared to primary research: secondary research methods are cheaper, and the time factor is not as critical as in primary research. There are, however, some limitations to using secondary data. Primary data collection (surveys), on the other hand, can reveal facts and features of any one industry in the manufacturing field more clearly and comprehensively. (Hutt and Speh, 1985) Primary research is expensive and also dependent on the quality of the information gathered; the integrity and quality of the data can become questionable if the population from which the data has been collected is not adequately diversified and independent.

The term "project" can be applied to a wide variety of organizations and an even wider variety of situations. This study will focus on projects undertaken by organizations, both for-profit and non-profit. There is no perfect project plan; every plan has to be tweaked and modified at periodic intervals as the project progresses. This study does not attempt to define a project plan to reduce risks and improve decision-making.


Objective of study

Projects are undertaken for a number of reasons, and this study will identify some of the project variables that involve CPM and CCM. A project on continuous improvement, improving quality control, building a new facility for the company, designing the international space station, creating a new line of commercial aircraft, building a new highway, or something as simple as designing new packaging material for an existing product line are just a few examples of projects undertaken by organizations. It would be impossible to find a realistic definition of project management that did not have "situation that needs attention", "plan of action" and/or "implementation of the plan" in the wording of its text.

Most often, projects are over budget, they take longer than the projected time, or they simply have the wrong people selected for the tasks. Projects generally have a team assigned to them, and team effort and interaction are integral to the success of the project. The morale, skill and motivation of the members of the project team play an important role in its success. There are many organizational variables, such as structure and systems, that affect decision-making styles and, as a consequence, the project management style implemented by the project leader. Management styles have gone through faster evolutions in the past three decades than they did in the past three centuries. The information and technology available to modern-day managers is much better and more reliable than in the past. Overload of information, however, can be prohibitive: there is the fear that the individual inspecting the records might not be able to filter through the noise in the data, which might preclude arriving at the correct conclusion. Planning and monitoring of the project are more complicated in today's world of increasingly outsourced operations.

No two projects are ever alike, even if the starting variables for each are the same. Many internal and external environmental variables, the economy of the region, worker skill levels, the cost of manufacturing and doing business, and social changes affect project completion. (Kerzner, 1979) In project management methodology, failure to manage (and control) any one variable can result in overall failure to complete the project. Such projects, even when completed, result in decreased profits and lowered market acceptance. With markets becoming more global and organizations operating on more than one continent, the environment has become further complicated. Coordinating efforts and synchronizing tasks have become more critical in this environment. Virtual teams for projects are becoming more common, and the risks associated with these types of project teams are much higher.

Literature survey

This thesis identifies CPM and CCM. Both are used as management tools for ensuring that projects are on time and within budget. In a project type of organizational structure, most of the tasks undertaken are one of a kind, or at least have some level of uniqueness attached to them. For example, a construction company might have different project teams for each building or facility being constructed, or a pharmaceutical company might consider each product manufactured an individual project. Every project generally has a fixed time frame and budget for completion of the associated tasks. A new facility construction has a budget and a time period allocated for the completion of the project. In many cases, the life span of a project-based organization is based on the duration of the project itself. Project management is as much an art as a science and involves more than just following preset directions.

It is important that every individual associated with projects, and especially with project management, understand the basic notion that the reason a project is conceived, planned and executed is to serve a final customer or user of the project's outcomes. Projects without any end ownership are not sensible. (Martin, 1976; Pruitt, 1999) At the same time, however, projects are becoming more complex.


The risks involved in project planning and design are also higher. Organizations can save money and resources by utilizing various simulation models to determine the effectiveness of the project. (Doloi and Jaafari, 2002) Planning is necessary for all projects. Simulating project needs at every stage of the project life cycle can help decision makers view the changes or modifications that might be needed in a plan. Research indicates that many of the problems experienced in projects are of a "management, organizational or behavioral nature" and are rarely due to inadequacies in technique or skill. This is especially true of software-related projects. (Hartman and Ashrafi, 2002)

The resource-based theory for managing projects is now becoming more acceptable. This theory postulates that physical capital, human capital and organizational capital are all important variables in strategy planning. (Kotelnikov, 2004) Resources possessed by companies can be tangible (facilities, equipment) or intangible (knowledge base, patents). A project feasibility analysis should be conducted at the initial stages and at periodic intervals during the project life. (Clifton and Fyffe, 1977) Projects should also have the internal financial flexibility to adjust to changes and modifications in the plan and design during the duration of the project. (Farrell, 2002) Project financing and cost planning are important factors when planning finances for long-term projects, industrial projects and government projects.

It is important that management and decision makers in organizations using the project-based model realize that "projects are a highly distinctive form of work organization." (Sauer et al., 2001) Individuals who work in this environment have to constantly perform at very high levels, control structures have to be well defined, a fine balance between worker empowerment and management control has to be maintained, and the organizational structure, culture and norms have to be sufficiently flexible to maintain constantly high energy levels within the organization. In this environment, management has to constantly monitor motivational levels and the workers' dedication at every point during the project duration. Often, employees, by virtue of their skills, might function in more than one project and at different levels of responsibility. All these conditions are very conducive to creating an environment where job stress might be very high.

Project strategies for any organization have to be employed based on the type of product, the life cycle of the product and the process involved in marketing the product. Reporting and documentation of tasks is an important way of reviewing and understanding operations in any organization. All projects need a good method of documentation and evaluation of tasks. These records can provide the basis for changes and improvements in the structure of the organization and the manner in which it does business. The ability of an organization to effectively document, archive and retire information in a timely manner determines its competitive edge. (Back and Moreau, 2001) The success of a project depends on "its efficiency, effectiveness, and timeliness." (Jiang et al., 2002) Self-evaluation in any project is most likely the best method for evaluating the performance of the individual members of the project team and might help the members develop better skills and capabilities.

The Critical Path Method (CPM)

CPM as a management methodology has been in use since the mid-1950s. The main objective of the CPM implementation was to determine how best to reduce the time required to perform routine and repetitive tasks that are needed to support an organization. Initially this methodology was applied to routine tasks such as plant overhaul, maintenance and construction. (Moder and Phillips, 1964) Critical path analysis is an extension of the bar chart. The CPM uses a work breakdown structure in which the whole project is divided into individual tasks or activities. For any project there is a sequence of events that has to be undertaken: some tasks might be dependent on the completion of previous tasks, while others might be independent of the tasks ahead and can be undertaken at any given time. (Lowe, 1966) Job durations and completion times also differ significantly.
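As a concrete illustration of the work breakdown structure just described, the sketch below represents a small project as a set of activities, each with a duration and a list of predecessor activities (its dependencies). The activity letters and durations are hypothetical, used only to show the shape of the data rather than to reproduce any example from the article.

```python
# A minimal, hypothetical work breakdown: each activity has a duration
# (in days) and a list of activities that must finish before it can start.
activities = {
    "A": {"duration": 3, "predecessors": []},          # e.g. site preparation
    "B": {"duration": 5, "predecessors": ["A"]},       # e.g. foundations
    "C": {"duration": 2, "predecessors": ["A"]},       # e.g. ordering materials
    "D": {"duration": 4, "predecessors": ["B", "C"]},  # e.g. framing
}

def successors(network):
    """Invert the predecessor lists to obtain each activity's successors."""
    succ = {name: [] for name in network}
    for name, info in network.items():
        for pred in info["predecessors"]:
            succ[pred].append(name)
    return succ

print(successors(activities))
# {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

Recording both directions of each dependency makes it straightforward to walk the network forwards (for earliest dates) and backwards (for latest dates), which is exactly what the two passes described below do.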


CP analysis helps decision makers and project execution members to identify the best estimates (based on accurate information) of the time needed to complete the project.

Several underlying assumptions are made in the CPM approach. The major ones are that a project can be broken down into a series of identifiable tasks, each of which may also be further broken down into subtasks. Once this breakdown has been accomplished, the tasks are then placed in order against a timeline. Each task is assigned a start date, duration, and end date, and may also have various resources attached to it. These resources can include specific personnel, a budget, equipment, facilities, support services, and anything else that's appropriate. The common way to perform this task is to draw the tasks as horizontal boxes against a vertical time scale. The resulting chart is called a Gantt chart. (Needleman, 1993)

CP analysis is also a helpful way of identifying whether there are alternate paths or plans that can be undertaken to reduce the interruptions and hurdles that can arise during the execution of any task. Critical path analysis consists of three phases: planning, analysis, and scheduling and controlling. All three activities are interdependent, but they require individual attention at different stages of the project. When the constraints in the project are of a purely technical nature, the "critical tasks form a path (tasks linked by technological constraints) extending from the project start to the project completion, denominated critical path." (Rivera and Duran, 2004) When projects experience resource constraints, critical tasks form a critical sequence.

While CPM methods are ideal for identifying the nature of the tasks and the time and money involved at every stage of the process, the method should be customized to suit the needs and goals of the organization and the project. Communication and information transfer issues are critical for successful completion of any project. By defining and creating standard operating procedures (SOPs) for similar tasks performed at more frequent intervals, any organization can evaluate the progress and/or success of a project team with reference to these metrics. SOPs are not static entities; rather, they change and evolve based on the environment, the culture and norms, and the type of product marketed in the region. It is important when using CPM that the project team has some historical information on the processes and tasks and is able to reference this information during the planning and decision-making process.

Control mechanisms in projects, with respect to the alignment of project outcomes with the plan initially proposed, are important. As the person at the helm of a project, the project manager is responsible for the success or failure of the project as a whole. (Globerson and Zwikael, 2002) It is the responsibility of the project manager to look into the root cause of a problem, if one exists, and to identify the potential solutions that can be implemented. If the project manager himself or herself is the cause of the problem, however, then arriving at an honest and appropriate solution might be impossible.

Realistically determining the sequence of events needed in the critical path is important. Nabors, in the article 'Considerations in planning and scheduling', identified that in construction jobs the sequence of events is often not strictly dependent. For example, the "electrical drawings did not have to be complete before foundations could be constructed, that all engineering did not have to be complete before construction could start." (Nabors, 1994)

There are two methods by which the critical path can be identified:

1. The forward pass. Here, CPM calculates the earliest time within which a project can be completed. "The date each activity is scheduled to begin is known as the "early start," and the date that each activity is scheduled to end is called "early finish."" (Winter, 2003) In this method of critical path determination, the earliest possible date for starting the project is identified, and then the activities are lined up to identify the completion date.


2. The backward pass. Here, the critical path is identified by selecting the date when the organization wishes to complete the project, or the last activity. Time requirements are based on working backward from the final date desired for the last activity to the initial first activity. The dates identified in this method of CPM are called late start dates (for the starting of the first activity) and late finish dates (for the last activity in the project).

Important for CPM using either the forward pass or the backward pass is that the total time needed for completion of the project does not change, but the dates when the project can be started might differ based on the approach used. The selection of either the forward or the backward pass depends on the final desired results and on the available documents and accurate data needed to determine the time for every activity on the network diagram. (Baram, 1994) Slack, or float, is defined as the time between the earliest starting time (using the forward pass method) and the latest starting time (using the backward pass method) used for identifying the critical path. "Total float (float) is the amount of time an activity can be delayed without delaying the overall project completion time." (Winter, 2003)
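The sketch below puts the forward pass, the backward pass, and total float described above into runnable form. The small activity network, its durations, and the dictionary-based representation are hypothetical illustrations chosen for this sketch, not data from the article; the pass logic itself is the standard CPM calculation.

```python
# Hypothetical activity network: duration in days plus predecessor activities.
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
    "E": (1, ["D"]),
}

def critical_path_analysis(tasks):
    """Standard CPM forward and backward passes over a small task network."""
    order = list(tasks)  # assumes each task is listed after its predecessors

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for t in order:
        duration, preds = tasks[t]
        es[t] = max((ef[p] for p in preds), default=0)
        ef[t] = es[t] + duration

    project_duration = max(ef.values())

    # Backward pass: latest finish (LF) and latest start (LS).
    ls, lf = {}, {}
    for t in reversed(order):
        duration, _ = tasks[t]
        succs = [s for s in tasks if t in tasks[s][1]]
        lf[t] = min((ls[s] for s in succs), default=project_duration)
        ls[t] = lf[t] - duration

    # Total float = LS - ES; zero-float activities form the critical path.
    schedule = {t: {"ES": es[t], "EF": ef[t], "LS": ls[t], "LF": lf[t],
                    "float": ls[t] - es[t]} for t in order}
    return project_duration, schedule

duration, schedule = critical_path_analysis(tasks)
print(f"Project duration: {duration} days")
for name, row in schedule.items():
    print(name, row, "<- critical" if row["float"] == 0 else "")
```

In this hypothetical network the zero-float activities A, B, D and E form the critical path (13 days), while activity C carries 3 days of float; any delay on a zero-float activity delays the whole project, which is exactly the behavior discussed in the paragraphs that follow.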
Projects are often managed very cost-consciously during their initial stages. As the project progresses, and if delays occur at various stages, the cost of the project might be compromised in order to meet the completion date.

CPM identifies the two most important variables of any project: its time and its cost. When CPM was first introduced, the technique was best suited to well-defined projects with relatively small uncertainties in their execution. At the time of CPM's introduction, markets were also very regional and localized, with few dominant players in any given market. CPM was likewise well suited to activity-type networks. PERT, on the other hand, was well suited to projects with high degrees of uncertainty in the time and cost variables and to projects that depended on activities conducted at various locations around the world.

There are external variables that can affect the CPM logic during the planning, scheduling and management process. "Priority changes, 'across the board' budget cuts, negotiations with other agencies, evolving regulations, etc., can jointly or severally impact the CPM schedule, necessitating


frequent and potentially complex modifications." (Knoke and Garza, 2003) Organizations also undergo various changes with respect to the implementation of the management tools they use over the duration of the project, and these tools can affect the activities and the manner in which they are undertaken.

Organizational culture also affects CP analysis. It is commonly observed that the work process "tends to accelerate as a deadline approaches." (Cammarano, 1997) Most CPM schedules account for buffer times in the activity durations, and the personnel involved in the project generally know this. Consequently, work is often not started when stated, and any uncertainties in the activity process can seriously affect the completion date. Slippage on any one activity can delay the completion of the entire project. Corporate culture and values also have the ability to affect CP analysis and management. An organization with a track record of completing projects on time is more likely to observe the start and finish dates than an organization with a poor completion record. The mentality that 'it is okay for projects to be late as long as they satisfy the requirements' can also make the implementation of CPM methods fruitless.

The project manager's attitude towards handling the activities in a project can also significantly affect it. Stressing that activities on paths that are not on the critical chain are on time can prevent a project manager from honestly evaluating the reasons for delays on the critical path. In some cases, incentives for completing a project on time influence the manner and attitude of the individuals involved. Some projects are subject to penalty fees, and the organization completing the project receives negative ratings if it is not completed on time. These arrangements do, however, offer some leeway for situations beyond the organization's control, such as damage to the facility caused by earthquakes, floods or hurricanes; delays due to poor planning and scheduling of work are nevertheless penalized. In this environment, organizations work hard to ensure that tasks are always on time.

The supply chain involved in completing the project also affects critical paths. A supply chain is generally identified as a group of organizations or individual departments upstream (suppliers to the company) or downstream (moving the product produced by the company to the market or the next user), linked together to help move a product from the source to the end user. (Trent, 2004) A facility construction, for example, has to rely on a structural contractor, an electrical contractor, and plumbing and heating contractors, in addition to ensuring that all the material needed for the construction is received on time. When management philosophies such as just-in-time (JIT) are used for the procurement of material, delays from the suppliers can seriously affect the completion date of the project.

Critical analysis of every aspect of "value addition" to the product is needed, along with analysis of the non-value-added activities that only increase the final cost of the product without providing enhancement or additional benefit. Identifying the needs of each project and developing an appropriate request for proposal is also very critical. (Gido and Clements, 2003)
The WBS for any project identifies the major steps needed to undertake it. "A good WBS simplifies the project by dividing the effort into manageable pieces." (Rad and Cioffi, 2004) In addition, the WBS often gives project members an opportunity to define the standard operating procedures for issues such as estimating and costing, change management and work-completion review. (Baar and Jacobson, 2004; Lamers, 2002) "It is vital to the success of any project that one unifying foundation be established for the project controls system." (Hobb and Sheafer, 2003)

One of the main issues most projects face is overshooting the initial budget and non-adherence to the launch date. (Leemann, 2002) Scheduling and sequencing the various tasks is therefore essential: every task identified in the WBS should be assigned an ID and an estimated start and finish date, and, when creating the schedule, all the important variables that have the potential to push back the opening date should be verified.
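As a minimal illustration of the kind of task list such a WBS yields, the sketch below assigns each task an ID and an estimated start and finish date and flags any task whose estimate threatens a planned opening date. The task names and dates are hypothetical and are not drawn from the text.

# A minimal sketch of WBS tasks with IDs and estimated dates (all data invented).
from datetime import date

wbs = [
    {"id": "1.1", "task": "Prepare request for proposal", "start": date(2009, 1, 5),  "finish": date(2009, 1, 16)},
    {"id": "1.2", "task": "Award contract",               "start": date(2009, 1, 19), "finish": date(2009, 1, 30)},
    {"id": "2.1", "task": "Detailed engineering",         "start": date(2009, 2, 2),  "finish": date(2009, 3, 13)},
]

# Flag any task whose estimated finish slips past the planned opening date.
opening_date = date(2009, 3, 20)
at_risk = [t["id"] for t in wbs if t["finish"] > opening_date]
print(at_risk)   # [] -- nothing threatens the opening date in this toy example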


In addition, many construction projects are beginning to incorporate CPM as a mandatory requirement for the contractors and subcontractors undertaking the contract. (Baram, 1994)

Resource planning and the tracking of project schedules are very important for any project to be successful. Resource leveling is a concept in project management that recognizes that many tasks, possibly belonging to different projects, may have to be completed concurrently. Consider a simple example: a single individual has the expertise to handle one task, but many projects undertaken in the organization need the services of this individual at the same time. The same factor can affect equipment (one machine has to perform the same task for ten different projects at the same time) or capital resources (too many projects will stress the investment that can be made in any one project). If there is no leveling and no resource constraint on the project, then the manpower requirement peaks early in the project. (Just and Murphy, 1994) Breakdowns in floats and critical paths are generally a result of resource constraints, and different methods of crashing the project can yield different results.
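The single-expert example above can be sketched as follows: several tasks, possibly belonging to different projects, all need the same individual, so their starts are pushed back until that resource is free. The task data are hypothetical, and a real resource-leveling routine would also respect precedence relations and available float.

# A minimal sketch of levelling one shared resource across competing tasks.
tasks = [
    # (task id, earliest possible start day, duration in days) -- all need the same person
    ("Project A / design review", 0, 3),
    ("Project B / design review", 1, 2),
    ("Project C / design review", 2, 4),
]

resource_free_at = 0
levelled = []
for name, earliest_start, duration in sorted(tasks, key=lambda t: t[1]):
    start = max(earliest_start, resource_free_at)   # wait until the expert is free
    finish = start + duration
    levelled.append((name, start, finish))
    resource_free_at = finish

for name, start, finish in levelled:
    print(f"{name}: day {start} to day {finish}")
# Project A runs days 0-3, B is delayed to days 3-5, C to days 5-9.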
Advantages of using the Critical Path Method

In an age where the tools available to management are constantly changing and improving, the fact that CPM still commands respect among project teams and managers is testimony to how valuable and beneficial the tool has proved. Listed below are some of the major reasons why CPM is still used in organizations today.

• CPM encourages managers and project members to graphically draw and identify the various activities that need to be accomplished for project completion. This step encourages all members of the project team to evaluate and identify the requirements of the project in a critical and logical fashion. Activities that precede and follow other activities also require their own evaluation and analysis. This becomes very important if the activities are conducted at different physical locations and the time and cost elements are subject to external variables that have the potential to seriously affect the project time.

• The network diagram also offers a prediction of the completion time of the project and can help in the planning and scheduling of the activities needed to complete it.

• Identifying the critical path for the project is the next stage of the analysis of the network diagram. In doing this, the management of the project has a reasonable estimate of the potential problems that might occur and of the activities at which they might occur. In many cases the critical path also determines the allocation of resources, and the interpretation of the network diagram ensures that the same resource is not double-allocated for the same period of time.

• CPM also encourages a disciplined and logical approach to planning, scheduling and managing a project over a long period of time. Often, the root cause of many project overruns is the failure to identify the factors that have the potential to seriously affect the project. By forcing individuals in the project team to identify activities, attention to detail is achieved. In turn, this helps build a truer and much more accurate picture of the processes that need to be set up for the project and of the time and cost needed at every stage.

• A SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis of the organization is also an important task to be undertaken at this point. In an organization, SWOT analysis should be undertaken at both the corporate and the department level. Carrying out an analysis using the SWOT framework will reveal the changes that can actually be accomplished, as opposed to those changes that appear to be optimal solutions initially


but would not be as effective in the long run. Being realistic when evaluating the variables affecting the organization's function and its future is very important for making the SWOT exercise effective. Areas of improvement, problems faced, badly executed decisions and avoidable choices have to be evaluated. The opportunities and the areas where the company can grow and improve should also be evaluated, along with the real and perceived threats facing the company. Identifying methods for creating effective team performance across job-function strata in the organization, and analyzing the methods for assigning responsibilities and duties, is also important.

• Optimization of the time-cost relationship in project management is also possible using CPM, as managers can visually identify the activities that can pose a problem if not managed and monitored effectively over a period of time. In many situations the cost structures in organizations are still based on functional structures, although project-structured organizations might use different forms of costing for different projects. The task of identifying the accurate cost of the project is not easy and is not universal to all projects or all companies. Developing time-cost relationships for projects requires that project managers be able to identify the root causes of the problems affecting the time and cost variables.

• Based on the time-cost variables, the project can be tweaked to best satisfy the goals and aims of the organization. For example, if a project team identifies that it needs more time for the project to stay within a certain budget, or vice versa, this fact is clear right from the start of the project. While it is presumptuous to claim that every factor affecting the activities can be identified at the initial stages, a large portion of the factors and variables can be understood, and the risks and uncertainties associated with each are known prior to starting the project.

• Tracking the CPM is also helpful. Managers can identify areas where attention needs to be focused. Critical paths do not remain static for the life of the project; rather, there is a very high chance that the CP might change due to internal and external factors affecting the organization. The 9/11 attacks, for example, shut down every major port; critical paths that had estimated, say, seven days for delivery of raw material from outside the U.S. had to change. In some cases, internal factors such as union issues and sudden equipment failure might also affect the critical path.

• Scheduling of activities is possible. The CPM identifies the entire chain of activities. Often, during the initial stages of the project the number of activities and the cost requirements might be high, but as the project progresses the activities sort themselves out into routine or critical. Project managers, instead of tackling the entire issue, can focus their attention on groups of activities that are immediate and have the ability to affect the next downstream activity.

• The CPM also identifies slack and float time in the project.
Thus, project managers can identify when resources can be reallocated to different activities, and activities can be shifted and moved to best optimize the utilization of resources.

• Critical paths are also updated periodically for any project and offer the project manager and members a visual representation of the completion of the various stages of the project, making it easy to identify problem areas where further attention might be required.

• In many large projects, there can be more than one critical path in the network diagram. When such a situation arises, CPM can help managers identify suitable plans of action to handle these multiple critical paths.

• CPM has been widely used by a variety of organizations in almost all industries with great success. CPM can also help estimate the project duration, and this information can


be used to minimize the sum of the direct and indirect costs involved in project planning and scheduling.

• CPM offers organizations a form of documentation that they can reuse for similar projects they might undertake in the future. Documenting the various activities and the root causes of problems can help future project managers avoid similar pitfalls. In addition, documentation can provide valuable data for estimating time requirements and cost factors, as opposed to managers relying on estimations and guesses of the cost.

• "Critical Path Analysis formally identifies tasks which must be completed on time for the whole project to be completed on time, and also identifies which tasks can be delayed for a while if resource needs to be reallocated to catch up on missed tasks." (MindTools, 2004) The CPM can identify the paths that can be taken to accelerate a project so that it is completed prior to its due date, or identify the shortest possible time or the least possible cost needed to complete a task.

• CPM methods are based on deterministic models, and the estimates of activity times are based on historical data maintained within the organization or data obtained from external sources (such as information returned with requests for proposals).

Disadvantages of the Critical Path Method

CPM has a number of advantages and has been able to provide companies using it with a yardstick and a reasonable estimate of the time needed to complete a project. The main disadvantages of the critical path method are listed below. Many of them result from the technical and conceptual factors involved in the CP analysis (CPA) process.

• The CPA process can become complicated as the scope and extent of the project increase. Too many interconnecting activities can make the network diagram very complicated, and the risk of making a mistake in the calculation of the critical chain becomes very high as the number of activities increases.

• The CPA depends on the fundamental assumption that the managers and personnel involved in the project team are well versed in the various activities. "Unfortunately, practical experience has shown that the principal assumption underlying CPM techniques, i.e., the project team's ability to reasonably predict the scope, schedule, and cost of each project, is frequently far beyond control." (Knoke and Garza, 2003)

• The task of understanding the needs of the critical path gets more complicated when there is more than one critical path in the project. In many situations these paths might be parallel and feed into a common node in the network diagram, and it becomes difficult to identify the best utilization of technology and resources for the critical paths.

• In many cases, as the project progresses, the critical paths change and evolve; past critical paths may no longer be valid, and new critical paths have to be identified for the project at regular intervals. This implies that the project manager and project members have to constantly review the network diagram initially created and identify the shifting and movement of the critical path over time.

• "The use of total float as a measure for assigning activities to their representative paths can become problematic when analyzing as built schedules.
CPM is unable to calculate total float on an as built schedule in which estimated dates have been replaced by actual dates." (Peters, 2003)

• As critical paths and floats change, the scheduling of personnel also changes. Reallocation of personnel is often very tricky, as an individual might be working on more than one project at a time; if the services of the individual are required on more than one critical path, identifying and distributing the labor time can overload the person, creating a stressed-out worker.

• Very often, critical paths are not easy to identify, especially if the project is unique and has never


been undertaken by the organization in the past. The ability to provide time and cost estimates for every activity in a traditional CPM process depends on historical data maintained by the company. In the absence of such data, decision makers are forced to speculate and assume the time and cost requirements for the project.

• Traditionally, any good CPA requires that the process be understood and evaluated using both the forward and the backward pass to determine slack or float times. In reality, however, time constraints often result in decision makers using only one method to find the time and cost requirements. As a safety measure, these individuals often 'crash' the project during the planning stage and determine the maximum cost that would be needed to complete it; during estimation they then use a midway cost value for the project, thereby intentionally inflating its cost. This frequent overestimation of time and cost encourages workers to postpone the start of any activity on the network diagram to the last possible start date. Any serious variances consequently result in slippage of the project completion date, which increases the cost of the project as it is then crashed from that point onwards.

• CPA and network diagrams are highly dependent on information technology and computer software. Setting up software systems in the organization can carry a high initial cost, and maintaining the software requires expertise and monitoring that can quickly become very expensive if the organization does not have in-house capabilities for this task.

• Planning and strategizing for the project based on the final expectations and on the internal culture and values of the organization is also very important.

• Organizations are also becoming increasingly global and political. Social and economic instability in one region of the world can seriously affect production in another. If organizations depend on activities that span the globe, the task of coordinating the planning and scheduling of activities at various locations becomes further complicated.

• In order to improve profits, companies need to streamline their operations to maintain their position in a constantly evolving product market. To do this, they are forced to improve their manufacturing performance and reduce operating costs. Managers at every level are forced to evaluate their processes from suppliers to the end user, and part of the analysis also extends to the company's supply chain and individual suppliers. Companies are going from a multitude of suppliers to a few trusted and reliable ones in an effort to track quality and keep down costs. This process, however, is fraught with peril if a supplier is unreliable and sudden, unforeseeable factors affect the activity time.

• Although the CPM method is very valuable for the extent of detail that it provides, constantly modifying the system can be cumbersome, especially if it involves reallocation of resources and time.

• In spite of the widespread use of CPM in organizations, the manner in which it is used can differ significantly. Organizations that have a strong culture of timely completion might utilize the methodology more appropriately than companies that use CPM only partially for planning and scheduling.

• Knowledge management of data is important. Defining knowledge is never easy; knowledge and information are different, although they are often assumed to be the same.
There are important distinctions between data, information and knowledge. Data are the raw facts collected by observation or monitoring. When data are filtered and organized to identify trends, they become information, and when this information is used in operations, planning and strategy, it is converted to knowledge. (Yahya and Goh, 2002) Information and knowledge are transmitted through an organization through


communication networks, and CPM depends on the efficiency of these networks.

• Knowledge is defined as "information laden with experience, truth, judgment, intuition and values; a unique combination that allows individuals and organizations to assess new situations and manage change." The main purpose of any knowledge management strategy is to "reduce errors, create less work, provides more independence in time and space for knowledge workers, generates fewer questions, produces better decisions, reinvents fewer wheels, advances customer relations, improves service, and develops profitability." (Karlsen and Gottschalk, 2004) In project environments, using this knowledge as and when needed is critical.

• In many recent cases, fear of litigation and delay claims based on the CPA used by companies has also been observed. Lawyers are using experts to investigate the CPAs undertaken by contractors for projects and to identify the reasons for project delays. (Schumacher, 1997) When penalties and fines are imposed for late completion, the CPM used by contractors can be subjected to scrutiny and might be responsible for an organization losing a case.

• Sometimes projects use different calendars for scheduling and planning, and this can cause further complications. "There are numerous types of calendars used in construction projects. The following examples are most frequently found in construction schedules. Construction projects typically run five-days-a-week (40-hours/week) calendar. Besides, non-working Saturdays and Sundays, usually holidays are also non-working days." (Scavino, 2003) Some contractors, however, can also use a six-day or a seven-day calendar as needed. Scheduling a project using a combination of calendars can create confusion if the individual analyzing the CP is not careful about evaluating the type of calendar used for the different activities in the network diagram. This issue only gets more complicated if the CP changes constantly.

• Many projects are long in duration (3-5 years), and the personnel involved often change as the project evolves. Many of the initial members might have left the company, transferred to other departments or even retired, and the new members might not be as well versed in the initial concepts and brainstorming that went into the creation of the network diagram. Changes and modifications made to the network diagram over time can also be difficult to track if the changes are not well documented; poor documentation is often the cause of the same mistakes being repeated a second time.

• CPA also does not take into account the learning curve for new members on the project or for activities that are new and unique to the project. (Badiru, 1995) Using past information on learning curves can help project managers estimate time variations when a new employee is put on a task or a new process is required for an activity to be completed. CPM does not traditionally consider this an important variable for the allocation of time or resources. (A small sketch of such an adjustment follows this list.)
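As a rough illustration of the learning-curve point above, the sketch below adjusts an activity estimate for a new employee using the classic log-linear learning-curve model, in which the time for the n-th repetition equals the first-time duration multiplied by n raised to log2 of the learning rate. This model and the figures are assumptions chosen for illustration; the text cites Badiru (1995) but does not specify a formula.

# A minimal sketch of a learning-curve adjustment for a new employee.
import math

def learning_curve_time(first_time_hours: float, repetition: int, learning_rate: float = 0.85) -> float:
    """Estimated duration of the n-th repetition under an 85% learning curve."""
    exponent = math.log2(learning_rate)          # negative, so times shrink with practice
    return first_time_hours * repetition ** exponent

# A new hire needs 10 hours the first time; estimate the first four repetitions.
estimates = [learning_curve_time(10.0, n) for n in range(1, 5)]
print([round(e, 1) for e in estimates])   # [10.0, 8.5, 7.7, 7.2]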


The Critical Chain and understanding the Theory of Constraints

Goldratt introduced the concept of the critical chain for project management. He defined the critical chain as "the longest chain of dependent steps. The dependencies between steps can be a result of a path or a result of a common resource." (Goldratt, 1997) "The critical chain thus refers to a combination of the critical path and the scarce resources that together constitute the constraints that need to be managed." (Elton and Roe, 1998) A figure from Sciforma (2004) illustrates the concept in greater graphic detail. The critical chain methodology combines the benefits of the CPM and PERT methodologies with the human and behavioral side of project management in an organization; the human element was not a major concern in CPM and PERT, and human tendencies were not considered critical to the completion of tasks. The book Critical Chain applied the TOC to the task of project management. (Schuyler, 2002) Where in the past the TOC concentrated only on manufacturing and production, with this book Goldratt was able to use the main concepts of the TOC to improve the productivity of the project management process. "The critical chain yields the expected project completion date." (Raz et al., 2003)

There are five key factors incorporated in the critical chain method that have the potential to significantly improve project performance:

1. "Use of a synchronization mechanism to stagger work.
2. Creation of project networks that are true structures of dependency.
3. Creation of schedules that place safety strategically to protect against variability along the longest path of task and resource dependencies.
4. More effective work and management behaviors.
5. Project management and resource assignment based on relative depletion of project safety." (CriticalChainLtd, 2003b)

The critical chain refines and improves upon the critical path method used in project management. Very often, the problems common to almost all projects are budget overruns, time overruns and compromises in the quality and performance of the product. In many project situations, decision makers and project managers are far removed from the actual task and as a consequence have either to rely on dependable information or to assume much of it. Top management also forces unrealistic options on project teams, such as decreasing the time to complete the project, cutting its cost or reducing the resources available for it.

Every business has measurements; these are a result of the market economy. (Drucker, 1974) One of the key performance measurements used most often is Economic Value Added (EVA). Here, management closely monitors whether the operations and strategies are generating profits for the organization. (Fletcher and Smith, 2004) Constant self-assessment within an organization can help implement improvements in market share and profitability using EVA. (Evans, 2001) An organization that uses EVA should not attempt to use accounting changes to evaluate it; rather, the evaluation should be performed using realistic and accurate data. The ability of an organization to identify the fit between the activity to be performed and the strategy to be employed can help that organization stay focused and dedicated. (Porter, 1996) Hard evidence of the impact on colleagues, customers and stockholders, in terms of return on investment (ROI), inventory turnover rates and a better cost structure, should be used constantly to ensure that the metrics used for EVA are appropriate for the situation. (Knights and Morgan, 1991)
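As a generic illustration of an EVA calculation, the sketch below applies the common definition of EVA as net operating profit after tax minus a charge for the capital invested. The text names EVA as a performance measure but gives no formula, so both the definition used here and the figures are assumptions for illustration only.

# A rough, generic EVA illustration; all figures are invented.
nopat = 1_200_000.0             # net operating profit after tax (EUR)
invested_capital = 8_000_000.0  # capital tied up in the operation (EUR)
cost_of_capital = 0.10          # assumed weighted average cost of capital

eva = nopat - cost_of_capital * invested_capital
print(f"EVA = {eva:,.0f} EUR")  # EVA = 400,000 EUR -- value created above the capital charge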
In many modern organizations, capital investment is subject to a great deal of scrutiny, and organizations prefer to wait until the last possible day to commit monetary resources. "Project management must reconcile two conflicting aspects of projects -- the increasingly important need for speed in project delivery and the equally important need for reliability in delivering the project as promised." (Patrick, 1999)

Deming recognized the effect of variance on the production process. (Deming, 1950) There are two types of occurrences in manufacturing operations: a dependent event, in which the


progress is made from one process or machine to the next in a predetermined sequence; and a statistical fluctuation, which occurs due to the process itself. Statistical fluctuations exist in any operation, and the variance can be smoothed out only within a certain range of the fluctuations. (DeVor et al., 1992) The TOC model postulated by Goldratt duplicates the requirements of Statistical Process Control (SPC). (Goldratt and Cox, 1993) The TOC forced companies to look within their processes at the constraints and bottlenecks hindering the generation of maximum profit. The theory of constraints looks for the critical path in any process: the machine with the slowest output determines the constraint. Labor and employee requirements are an important intrinsic factor affecting the internal environment of an organization in the TQM model, and the TOC model enhances the TQM model in this arena.

Goldratt stated that a production facility is only as fast as the slowest process in the critical chain of manufacturing. A detailed understanding of the logistics involved in getting the product from suppliers to customers, both internal and external, is important. (Ayers, 2001) The TOC postulates that a seamless, flawless and well-connected supply chain can help keep manufacturing costs down. The TOC looks at the cost-effectiveness of running an operation and proposes that manufacturing should not create waste, on the simple basis that waste is useless and therefore costs the company money. The TOC also postulates that many constraints can be eliminated or reduced by proper design and scheduling. There will, however, always be an operation on the critical path that determines the rate of manufacture of a production plant. For this process to be successful, upper-level managers have to be actively involved with the shop-floor workers in determining the critical path. Critical paths will change and evolve with every change made to the flow of material in the plant.

The analysis of setup times in relation to the cost of manufacturing a batch was considered important by conventional standards for all resources, bottleneck and non-bottleneck, before the launch of Goldratt's TOC model. In the Goldratt model, this setup time is only considered really significant at bottleneck operations. An hour saved at a bottleneck is of very significant importance and will determine the bottom-line profits of an organization. Bottlenecks govern both throughput and inventories in a manufacturing system. An hour lost at a bottleneck is an hour lost in the total system; consequently, an hour saved at a bottleneck operation is an hour saved in the entire process, and the cost incurred due to the loss of an hour at the bottleneck is in fact the cost of an hour in the entire system. (Goldratt, 1990)

An attempt to run organizations in a lean manner, and an awareness of the importance of continuous improvement, is growing in manufacturing-based organizations. Creating a constancy of purpose toward improvement, and strategy planning based on the long-term goals of the organization, can help alert those involved with the organization to the problems they face or might face. In the book The Goal: A Process of Ongoing Improvement, Goldratt and Cox evaluate the importance of constraints and bottlenecks in the manufacturing process. Goldratt defines new ways of understanding throughput, inventory and operating expense.
(Goldratt and Cox, 1993) Throughput is defined as the rate at which money is generated by the system through sales. In the case of project management, this is compared to the number of projects that are completed and for which revenue is obtained or recovered in the shortest possible time. A sale, not production, is the important factor in measuring throughput: if the product is manufactured but not sold, there is no throughput and no money is generated. Throughput is measured from the time the raw material enters the organization until the time the product is purchased by the customer and paid for.

Inventory is defined as the money the system has invested in purchasing things that it intends to sell. In project management, work-in-progress (WIP) inventory corresponds to activities that are started before their start dates require. The accepted norm in many organizations of keeping all machines and equipment working


at optimum efficiency and all workers constantly busy is considered by Goldratt to be a form of waste. The ability to control the raw material that is released, so that the semi-finished product from one machine is directly utilized at the next machine along the flow chain, is important. Products should only be manufactured if there is a market for them, not to build up a stock reserve for projected markets.

Operating expense is the money the system spends in order to turn inventory into throughput. Labor cost, as applicable to the manufacturing operation, is considered an operating expense, as is money lost through the expense of converting raw material into finished product.

Setup times are also important for bottleneck resources. The 'resource time components' at a bottleneck resource are the process time and the setup time. A non-bottleneck is a resource whose capacity is greater than the market demand; its resource time components are the process time, the setup time and the idle time. The level of utilization of a non-bottleneck is not determined by its own potential but by some other constraint in the system, so an hour saved at a non-bottleneck is of little importance and is just a mirage. A bottleneck should preferably be run in larger batches, to reduce the setup time required for the machine; non-bottlenecks can have smaller batch-size runs, since their setup time does not interfere with the process time as it does at a bottleneck.

Priorities have to be set in the process to determine the sequence of activities on bottleneck resources. A careful study of the time spent by parts at any resource (queue time, wait time, transfer time and process time) is required to determine the feasibility of a demand and the on-time delivery of a product to the customer. Many work centers are not bottlenecks, but they are capacity-constrained, and a sudden demand for a large quantity of product may cause these capacity-constrained resources to create a logjam in the process. Schedules should be established by evaluating all the constraints, and the lead time is a function of these constraints.

The goal for any organization, therefore, is to increase profits by simultaneously increasing net profit, return on investment and cash flow, and a connection between these three measures has to be established for the organization.
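A small numerical sketch of how the three TOC measures connect to net profit and return on investment is given below. The relationships used (net profit = throughput minus operating expense; return on investment = net profit divided by inventory) are standard in the TOC literature rather than spelled out in the text, and the figures are invented.

# A minimal sketch connecting the three TOC measures defined above (figures invented).
throughput = 5_000_000.0         # money generated through sales per year
operating_expense = 3_800_000.0  # money spent turning inventory into throughput
inventory = 2_400_000.0          # money invested in things the system intends to sell

net_profit = throughput - operating_expense
return_on_investment = net_profit / inventory

print(f"Net profit: {net_profit:,.0f}")                      # 1,200,000
print(f"Return on investment: {return_on_investment:.0%}")   # 50%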
An important application of the TOC to projects is "Critical Chain Scheduling." Here the focus is shifted from "assuring the achievement of task estimates and intermediate milestones to assuring the only date that matters--the final promised due date of a project. As a matter of fact, the scheduling mechanisms provided by Critical Chain Scheduling require the elimination of task due dates from project plans." (Patrick, 1999)

By removing the dates from the activities on the critical path, Parkinson's effect on project planning is eliminated: workers are not restricted by the start time. This is especially important in projects with a large number of activities. When allocating time for each activity, project managers and planners often introduce buffer times. These buffers might be small amounts added to each activity to guard against the statistical fluctuations that normally occur. While these numbers are small individually, across all the project activities they add up to a significant time frame. In addition, as workers realize that they have the necessary time built in as buffers, they are more likely to push out the start of the job and concentrate their efforts on other tasks at hand.

There have been many suggestions from consultants and theorists that eliminating these time allowances, or building tighter schedules, can force workers to concentrate their efforts on completing the task at hand. In short, the critical chain concept uses reduced time durations for the activities in the network diagram.
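The arithmetic behind this argument can be sketched as follows: individually small safety margins accumulate into a long schedule, whereas planning with the shorter median durations and pooling the safety yields a shorter committed schedule. The task durations are invented, and sizing the pooled buffer at half of the removed safety is a common CCPM heuristic that is not specified in the text.

# A minimal numerical sketch: per-task padding versus a pooled project buffer.
median_durations = [4, 6, 3, 5, 7, 2]                    # realistic 50/50 estimates (days)
padded_durations = [d * 1.5 for d in median_durations]   # "safe" per-task estimates

padded_total = sum(padded_durations)                              # 40.5 days committed
removed_safety = sum(padded_durations) - sum(median_durations)    # 13.5 days of hidden safety
pooled_buffer = removed_safety / 2                                # 6.75 days held as one project buffer
critical_chain_total = sum(median_durations) + pooled_buffer      # 33.75 days committed

print(padded_total, critical_chain_total)   # 40.5 33.75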


A critical factor affecting worker dedication is the constantly changing responsibilities entrusted to the worker by managers and supervisors. There is a trend of encouraging workers to multitask and perform more than one job scope simultaneously. "Multitasking is "the practice of assigning one person concurrently to two or more tasks." Multitasking is also known as fractional head-count. One person is assigned to multiple projects simultaneously (or multiple tasks within a project simultaneously)… The "efficiency" of multitasking is a myth. Just because a resource is utilized does not mean it is productive (or genuinely efficient)." (Zultner, 2003) An employee might be responsible for completing activities on more than one project at a time, and often the worker wastes time starting and stopping the different activities. For instance, an employee working on four tasks in an 8-hour day might devote two hours to each task, but he or she might require a few minutes per task to brush up on the previous day's activity, rereading the earlier work to refresh the memory; valuable time is thus spent every day revisiting the past day's work. The critical chain method stresses the importance of focusing the worker's time and attention on only one job at a time. The worker can then start and finish the activity without any interruption in which attention might be diverted to another task, and because the activities are undertaken in one sequence, there is no need to refresh and review past work every time it is picked up.

The milestones set in CPM achieve approximately 80% of results; the remaining 20% is generally what causes projects to be delayed and to slip from the final due date. (Leach, 1999)

Advantages of using the Critical Chain Analysis Method (CCAM)

• CCAM focuses the attention of the project managers and of the employees/workers needed for project completion on the activities on the critical chain, with little or no attention given to the start or finish dates of the activities.

• CCA forces managers and decision makers to identify the constraints and bottlenecks in the process or activities. Such a methodology encourages the exploitation of the constraints and ensures that the constraint resources are always fully utilized.

• CC encourages managers not to start any activity too early, as starting activities too early might block valuable resources that are part of the constraint. Rather, all the other resources are subordinated to the needs of the activities that require the complete use of the constraint resource.

• CCAM also looks for ways in which the productivity of the constraints can be increased by improving or modifying the manner in which the constraint is used. A typical example is eliminating multitasking and letting the employee finish one single task from start to finish without any disruption of duties and functions for the entire period.

• This model uses a deterministic method for scheduling activities; every input to the network model is a "singly determined value." (Schuyler, 2002)

• The focus of this method is on the activities and tasks that are yet to be completed, rather than on the tasks that have already been completed.
CPM concentrates on milestones achieved while failing to identify future trends; CCAM focuses on the future and is not concerned with any saving of time in the past.

• CC espouses the elimination of "slack and float" in the times for all activities; rather, it considers using the median completion time for each activity and the variation created by normal fluctuation to ensure that the activities are being completed as desired. CCAM "reduces the effects of two behavioral problems: biasing estimations and wasting slack times." (Schuyler, 2002)

• Managers can spend more time on planning and scheduling the "feeding buffers" of the critical chain rather than spreading the resource buffer and project buffer over the entire network. By managing the buffers and concentrating on the activities that can benefit from them, project managers can prevent Murphy's


law from delaying the project. The feeding buffer protects the critical chain against uncertainty in the feeding, or inputting, non-critical chain, and it can be adjusted as needed to ensure that the critical resource gets its feed as required.

• "Resource alerts and effective prioritization of resource attention allow projects to take advantage of good luck and early task finishes while buffers protect against bad luck and later than scheduled finishes." (Patrick, 1999) Time buffers (feeding, project and capacity buffers) should be introduced in a systematic manner for dealing with stochastic (unpredictable) variability over the project life cycle.

• The CCAM strives to maintain consistency throughout the project by managing the buffer times for the various activities. "Buffer management then amounts to the dynamic management of resources according to buffer contents or, equivalently, to buffer consumption levels. For example, among several competing activities, top priority in resource allocation is given to the activity whose buffer consumption is the highest, namely its slack time is the least." (Cohen et al., 2004) (A minimal sketch of this prioritization rule follows this list.)

• From a conceptual point of view, CCAM encourages managers to run the project operation as a "pull" system rather than a push system.

• CCPM is conceptually easier than other mathematically based project management methods such as Monte Carlo simulation and PERT analysis of network activities.

• Identifying the critical chain for the project is the most critical step. The advice and opinions of experts in the field should be sought to ensure that the critical chain selected is the correct one, and realistic times based on the median time requirements for completing the project should also be obtained from these experts.

• In organizations undertaking many projects simultaneously, staggering the projects so that common resources are also utilized at staggered times can help all projects move faster through the system, increasing the organization's throughput. Staggering the projects based on the bottleneck resource is important; in this manner, the capacity-constrained resource that impacts the critical chain is never idle.

• The project buffer is an integral part of the project and has to be assigned resources and scheduled as well. The project buffer is the accumulated buffer of the entire system, and "by pooling together the safety margins of the individual tasks, the protection against uncertainty is improved." (Raz et al., 2003)

• Should more than one critical chain occur in the project, managers and decision makers are required to focus on and select the critical chain that has the potential to impact the entire project the most. (Herroelen et al., 2002)

• The critical chain process forces managers to view the planning and scheduling of activities more holistically, ensuring that project managers are aware of the issues they face and are more willing to allocate the necessary time and resources to the areas deemed most critical. (Anonymous, 2001)

• CCM forces closer interaction between team members, as constant and frequent updating of progress is expected. Managers can observe slippages and changes in activity times at more frequent intervals than with critical path management.
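A minimal sketch of the buffer-management prioritization rule quoted in the list above: among competing chains, the contested resource goes to the chain whose buffer consumption is highest, that is, whose remaining slack is least. The chain names and figures are hypothetical.

# A minimal sketch of prioritizing resources by buffer consumption (data invented).
chains = [
    # (chain name, buffer size in days, buffer already consumed in days)
    ("Feeding chain 1", 5.0, 1.0),
    ("Feeding chain 2", 4.0, 3.0),
    ("Critical chain",  8.0, 2.0),
]

def consumption_ratio(chain):
    _, size, consumed = chain
    return consumed / size

priority_order = sorted(chains, key=consumption_ratio, reverse=True)
for name, size, consumed in priority_order:
    print(f"{name}: {consumed / size:.0%} of buffer consumed")
# Feeding chain 2 (75%) gets the contested resource first, then the critical
# chain (25%), then feeding chain 1 (20%).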
Disadvantages of the Critical Chain Method (CCM)

If CPM is complex and difficult for large projects, CCAM is even more complicated. This method requires that the managers and decision makers understand all the intricacies involved in the completion of the project.

Trust in the management not to overburden or overstress the resources is an important consideration in CCM. Employees should not perceive the CC as a management ploy to extract superhuman work performance from them. As no


dates are set, the workers might negatively impact the project if they perceive that management is misusing its powers.

Managers and experts responsible for the activities soon become aware that the time estimates they provide will be reduced by approximately 33%. To compensate, there may be a tendency to inflate the initial time requirements for the project, and the level of overestimation by functional managers is not necessarily uniform. (Raz et al., 2003) Some managers might overestimate by 15%, others by 50% and still others by 75%, so applying a standard 33% reduction might not be the right way to handle the problem of overestimation.

Determining when a particular resource is needed is difficult to predict using the critical chain. Even if activities are finished early, there is no absolute guarantee that the resources needed to complete the next activity will be available. The resources needed for the critical task have to be identified at all times, which can be very labor intensive, as additional administrative work is required to constantly track the requirements as the project progresses along the critical chain. This is further complicated because the critical path might not always match the critical chain in the project.

The critical chain also requires that all resources constantly provide a "current estimate of the time to complete their current task." (Patrick, 1999) This requires tremendous coordination of real-time information from all resources into a centralized database that can be accessed at all times by key personnel.

As with CPM, Critical Chain Project Management (CCPM) relies heavily on software and computerization for tracking and monitoring the progress of the project.

CCPM is most applicable to manufacturing-based projects, and this management method might not always suit projects that start with a few central activities which split up at various stages and are then recombined at different points in the project. The predecessors and successors from several chains can create very complex networks that cannot be scrutinized using the simplistic buffer methodology.

The critical chain and the associated buffers depend on a number of complex algorithms (resource leveling) to determine the time, yet CCPM does not specify any new or unique methodology for solving the algorithm. (Raz et al., 2003)

While Goldratt postulates that the critical chain is static and does not change, in reality the critical chain can shift and change in a manner similar to the critical path, making the system very dependent on smart technology to constantly track the new critical chains for the project.

The buffer concept in CCPM also states that resources should be offered to the activities on the critical chain that have the least buffer remaining. This, however, does not take into account the penalties or fines that might be imposed for non-completion of other activities that are not on the critical chain.

CCPM also assumes that the organization is in full and complete control of all the activities and uses all the powers within its capabilities to evaluate and understand the needs of the critical chain.
In reality, with the use of outsourcing and contractors, many tasks and activities are not under the complete control of the project manager, in contrast to the comparable manufacturing issues that a manufacturing department might experience.

Eliminating multitasking might not be the solution to all project management issues. Studies indicate that the effectiveness of matrix organizations is real: in many cases there was "a relationship between the number of projects to which research and development personnel were assigned and key performance indicators of the firm," such as the return on investment and the rate of sales growth. (Raz et al., 2003)

It is not always practically possible to stagger projects to accommodate the resource needs of each project. Different projects can need different resources and technology at different stages. Unless two projects are very similar, the likelihood that the sequence of activities in their network diagrams is the same might be very slim.


No matter what project management tool is used, the planning and execution of the tasks depend greatly on the skill and dedication of the project manager and of the project team. This aspect of successful project completion will be discussed in detail later in this chapter.

CCPM can confuse organizations that are new to project management, as they might find some of its principles drastically different from the mainstream methodologies (such as the PMBOK® Guide) in use.

CCPM also requires changes in the Management Information Systems (MIS) and in the technology applications used in the organization. These technological changes have to be accompanied by cultural changes with respect to the values and norms followed by the organization. The goals and mission of the organization are also very important to the successful implementation of the CCPM methodology, and it is unclear whether any significant improvement in project management would be seen if the basic culture of the organization does not change.

Training and education of the project staff at all levels of the organization are needed. This becomes especially important if the organization was used to the mainstream methods of planning, scheduling and overseeing projects. Training in the new software and technology is also required for the staff involved with CCPM to be comfortable using it.

There is also a lack of consensus between Goldratt's use of the median time and the Product Development Institute's use of the mean time for activity duration estimates in the critical chain.

While much of the software used for project scheduling is very sophisticated and has many built-in checks and balances, the expertise of the individual evaluating the schedule and determining the critical chain is very important. This becomes especially significant if there are multiple critical chains and the software picks one over the other.

The feeding buffers for non-critical items might create a mock situation of critical chains that are not realistic or accurate, and false alarms in scheduling might be raised if the buffering is not managed accurately.

Comparison between CPM and CCPM scheduling methods

CCAM introduced many new concepts that are easily incorporated into the traditional CPM management systems used for projects. If no resource contention exists for the project activities, then the critical path and the critical chain will be identical. (Spoede and Jacob, 2002) New project management software automatically allocates a project buffer at the end of the critical chain to ensure that the project is always completed by the due date; this introduction of the buffer might in fact move the start date earlier for many projects, thereby ensuring that they will always be completed on time. "The idea of generating a deterministic baseline schedule and protecting it against uncertainty is sound and appeals to management." (Herroelen et al., 2002) A quick comparison between the Critical Path and Critical Chain approaches as stated by Critical Chain Ltd is provided below (direct extract from the website). (CriticalChainLtd, 2003a)
• Critical Path: The project finish is a date we think we can hit (and then we work like hell to make it). Critical Chain: The project finish is planned with a chosen level of likelihood, and assured with buffers throughout.
• Critical Path: The critical path determines the start and end of the project – and the path may change during the project. Critical Chain: The critical path determines the end of the project (after a project buffer is added to it), but the start is often determined by a non-critical activity. The path does not change.
• Critical Path: Variation is implicit, and assumed to "average out" over the length of the project. Critical Chain: Variation is explicitly planned and managed throughout the project with buffers.
• Critical Path: To keep the project on schedule, we must keep each task on schedule according to the calendar. Critical Chain: To keep the project on schedule, we manage our buffers, which allows us to absorb variation efficiently.
• Critical Path: Task starts and finishes are carefully tracked. Schedule "slippage" is important and must be monitored closely. Critical Chain: Buffer status is carefully tracked. When any task starts or finishes relative to the calendar is not important.
• Critical Path: People are evaluated in terms of whether their tasks are late relative to their committed calendar date for task completion. Critical Chain: Half of all tasks are expected to take longer than planned, and the buffers absorb such variation.
• Critical Path: Fixed-date "stage gate" reviews are scheduled to evaluate project progress to date. Critical Chain: Floating "stage gate" reviews are triggered by phase completion, and buffer status is reviewed for project completion likelihood.
• Critical Path: The amount of slack that non-critical paths have is not as important and not tracked. Critical Chain: Non-critical paths must have sufficient "feeding buffers" to protect the critical path.
• Critical Path: Making progress on every project, during every reporting period, is important, so resources are multi-tasked to keep busy. Critical Chain: Multi-tasking of resources is devastating, and is avoided at ALL costs, including delaying the start of projects.


Buffering also allows the last activity of the critical chain to be scheduled much later than any of the other non-critical-chain activities. The feeding buffer differentiates the non-critical activities, thereby allowing management to concentrate its efforts on more important issues. Activity float in CPM also offers some flexibility in the critical path, but it does not change the manner in which the project time can be changed. In essence, float and buffers perform the same function—cushioning the entire project against any major variation in activity time—but the manner in which they perform the cushioning differs. Uncertainties exist in every operation and project; eliminating all the risks in a project would make it very costly and long. Finding the right balance for managing the risks while ensuring that the project stays within its time and budget is important.

The CCA "methodology has acted as an important eye-opener in project management practice. It correctly recognizes that the interaction between the time requirements of the project activities, the precedence relations defined among them, the activity resource requirements, and the resource availabilities has a crucial impact on the duration of a project." (Herroelen et al., 2002) Project management strategies for any organization have to be chosen based on the type of product, its life cycle and the process involved in marketing it.

In practical implementations of CCPM methods, the start and end dates of the project


In practical implementations of CCPM methods, the start and end dates of the project are often used in spite of the "date-driven" behavior that Goldratt wanted to avoid in this new methodology. "Critical chain project managers do not criticize performers that overrun estimated activity durations, as long as the resources (a) start the activity as soon as they had the input, (b) work 100% on the activity (no multitasking), and (c) pass on the activity output as soon as it is completed. This is called "roadrunner" activity performance. They expect 50% of the activities to overrun." (Leach, 1999)

CPM assumes that the managers making the decisions have extensive information about the activities and the time it will take to complete any given project. A great deal of historical data is evaluated and tracked for this purpose, and the more knowledge the organization can archive and retrieve as needed, the more accurate the entire CPM process is. Not every organization, however, is efficient at documenting this knowledge, and projects may differ considerably. In this environment, using the concepts of CCPM can help project managers arrive at suitable time limits for completing the task.

Both CPM and CCPM depend on software packages for planning and scheduling the various activities in the project. Project management software built on these two concepts has different strengths and limitations depending on the algorithms built into it, and it is important that project managers and users of the software realize that a given algorithm might not be the best suited to every situation that can arise during the project period. Depending on the starting parameters for the CPM (a forward pass or a backward pass), different start and finish times will be produced, as will different degrees of resource leveling.

Although numerous project management software packages are available, they have not by themselves improved project completion times or kept projects within the budget limits set for them. (Douglas, 1993) There is no doubt that software has reduced the time needed for detailed analysis of the CP and the CC in a project, but timely and within-budget completion of any project depends on the extent to which these "tools" are used at all stages of the project to ensure that the schedule and plan initially developed are maintained at all times. Initially developed schedules are not fixed and static; they may change as a result of variation on the CP. The CC approach assumes that all tasks will show some variation, so the completion of preceding tasks determines the start time of the next activity on the CC. CCPM does identify the human element as an important variable in any project, and the role of the project team and the methods used in the decision-making process are also important for successful project management. A brief review of project team requirements and of the types of decision-making styles used is offered later in this chapter, as this topic is viewed as salient for any successful project.

The figure below indicates the differences between CCPM and conventional (CPM) project management. (Raz et al., 2003)

Figure 1. Conventional Schedule and CCPM Schedule With Time Buffers Shown Explicitly. In the conventional schedule, task buffers are hidden within the individual tasks; in the CCPM schedule, buffers are pooled and made explicit.
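The pooling of safety shown in Figure 1 can be illustrated with a short sketch. All figures below are assumptions, and the two sizing rules (taking half of the chain of aggressive estimates, or the square root of the sum of squared removed safeties) are heuristics often described in the CCPM literature rather than the only options.

    import math

    # Illustrative sketch of CCPM buffer pooling (all numbers assumed).
    # "Padded" estimates are the high-confidence durations managers commit to;
    # the aggressive estimates keep only about half of that, as CCPM suggests.
    padded = {"spec": 10, "design": 20, "build": 30, "test": 16}    # days (assumed)
    aggressive = {t: d / 2 for t, d in padded.items()}
    removed_safety = {t: padded[t] - aggressive[t] for t in padded}

    chain_length = sum(aggressive.values())

    # Two commonly cited sizing heuristics for the pooled project buffer:
    buffer_half_chain = 0.5 * chain_length                                 # half the chain
    buffer_ssq = math.sqrt(sum(s ** 2 for s in removed_safety.values()))   # root-sum-square

    print("Padded plan (safety hidden in tasks):", sum(padded.values()), "days")
    print("CCPM chain of aggressive estimates:  ", chain_length, "days")
    print("Project buffer (50% rule):           ", buffer_half_chain, "days")
    print("Project buffer (SSQ rule):           ", round(buffer_ssq, 1), "days")
    print("CCPM schedule incl. buffer (50% rule):", chain_length + buffer_half_chain, "days")

Either way, the explicit pooled buffer is smaller than the sum of the individual paddings it replaces, which is precisely the point Figure 1 makes.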


2009 Projects’ Analysis through CPM (Critical Path Method)35Leadership role in any projectmanagement and scheduling activityProjects generally require a project manager anda functional manager. The role of these individualsand their interactions or disputes can signal thesuccess or failure of the workers empowermentin the entire project. Most projects have a projectmanager. And successful project managers “arethose who can plan for the unexpected and areflexible enough to accommodate the unforeseen.”(Needleman, 1993) Organizations are alsoincreasingly using teams from various functionaldepartments for planning and execution of theactivities. Project teams are often not limitedto just the organization but there might also bemembers from supplier and contractors who playa vital role in ensuring that the project is on timeand within budget. The concept that the projectleader or manager will be measured on how theteam manages itself rather that how well the leadermanages the team will be important. (McKinlayand Taylor, 1996) No matter what the method ofplanning and scheduling used by the project team,guidance and motivation by the project manager isvery important.Project leaders typically display two typesof leadership styles in their dealing with others.Leadership performances are either “transactional”or “transformational.” Transactional leadershipseeks to motivate followers by appealing to theirown self-interest. Its principles are to motivate bythe exchange process. Any commodity and productcan be used in the exchange system; in manycases it can be higher monetary compensation,more prestige and power or more authority.Transformational leadership is intended to joinleaders and followers in a mutual pursuit for higherpurposes. Individuals who lead by encouragingparticipation and interest among their subordinateslead using the transformational style of leadership.Leaders using this style will try and convince theirfollowers that they need to work together to obtaintheir final goals.Project leaders should be individuals whoare willing to give up technical expertise. Theyshould also be able to communicate and conversesituations and issues with a wide variety ofindividuals with and out of the organization whomight be directly related to the project. Projectleaders should be selected with care and should begroomed in advance for the position that they willtake. This is especially important as the projectmanager or leader should be able to use soundjudgment and process and activity knowledge tomake calculated decisions for determining the CPor the CC depending on which method is used bythe organization.The question of selecting project leaders isoften debated in organizations. Is it better to haveproject leaders who are promoted from withinthe organization who understand the culture andstructure of decision-making and the manner inwhich all the members in the teamwork? Or isit better to recruit a project leader whose lack offamiliarity with the team will help the team focusof areas where improvements are needed and whocan inject fresh ideas and new thinking processes.The human element Project teams anddecision makingOperating a project based organizationalenvironment requires attention to many detailsand factors that are both intrinsic and extrinsic tothe organization. This section identifies the keyvariables that are needed for a project as manpower,technology, capabilities of the organization,and core competencies available within theorganization. 
The first variable—manpower—is probably the most important. Every project requires people to be wholeheartedly involved in achieving or completing a set of goals or objectives. Of all an organization's assets, the human element provides the most variability and therefore requires the greatest attention. (Randolph and Posner, 1992) Most projects in organizations are accomplished with the help of teams. A team is defined as a group of people with complementary skills and a strong commitment to common goals; its members also show a high degree of interdependency and interaction. (French and Bell, 1999) Team working and teams are, however,


36School of Doctoral Studies (European Union) JournalJulynot without their own inherent problems. It isimportant to ensure that the team for any specifictask comprises members who are knowledgeableand posses the required skills needed to carry outthe task. Teams can be very fickle—the sameconditions and environmental factors may producedifferent results based on the team members. Thisproblem is only intensified when project teamsare not in close physical contact. (Joinson, 2002)Decentralized and independent work centersand factories are an important part of modernorganizations. Virtual teams for projects arebecoming increasingly common in multinationalcorporations. When working in global projectteams, identifying and hiring the right person canhelp develop a more cohesive team and deliver thedesired results. (Kirkman et al., 2001)Most project teams generally utilize five levelsof decision-making: command, consult, majority,consensus and unanimity. It is easy to understandthat these decision types are very closely related tothe time factors the teams may have in arriving ata decision. When the command type of decisionmakingprocess is used, the team leader identifiesthe tasks at hand and designates responsibilitiesto all the team members. This type of decisionmakingcan be accomplished in the shortestpossible time—it however, has a major drawback.It will be difficult for the team leader to get buy-infor the task from the team members if the membersdo not agree with the decision made.Unanimity, on the other hand, ensures that everymember likes and accepts a decision—they haveto “buy-in” to that decision. Achieving this is noteasy in focused-, task- or project-oriented teams.Unanimity decisions also take a lot of time. Whentime is of an essence, this type of decision-makingprocess might not be the best. Consensus is thenext option; it takes time to generate consensusfor any process. . In consensus, the team membersdiscuss the pros and cons of any issue extensively.A decision is made based on the discussion; everyteam member might not agree to it, however. Notagreeing “with” the decision is okay; not supportingthe tasks and functions required completion oncethe decision is made however, is damaging to theteam. And many time-constrained projects maysuffer as a result of the excessive time spent ingenerating consensus.Often, many teams do not even consider theconsult and buy in option, which can be a timesaverand also involve the entire team. In thissituation, the project leader can discuss the planof action with the people who will be responsiblein the execution of tasks and collects opinions andideas from them. Based on this, the team leader canmake a decision. This style of decision-making isfaster than the majority, consensus and unanimitytype of decision-making. But one can ask thequestion whether the leader consulted every teammember, making them feel appreciated and valued.Doing so may generate sufficient enthusiasm in theteam and help in the implementation of the task.Changing face of projects and the role ofCPM and CCPMSoftware and hardware technology integrationis required for all project endeavors. Modern dayorganizations are characterized by very dispersedfacility location and human expertise and there hasto be a common system interface for connectingall the sub systems together for the purposeof completing the project. 
Project plans and designs have to be communicated frequently and effectively to all members of the project team, and this requires sophisticated equipment and skills. Projects are generally undertaken in an organization with the expectation that, when completed, the end result will add considerable value to the business while improving profitability.

All projects rely heavily on the organization's knowledge of the task at hand. Knowledge is a dynamic blend of structured expertise, values, contextual information and insight. It provides a framework for evaluating and integrating experiences and information, and it defines the intellectual assets of an organization. The four knowledge transfer channels identified are externalization, combination, internalization and socialization. Various facets, levels and types of knowledge have been identified in the management theory literature. In any organization, knowledge, over a period of time,


2009 Projects’ Analysis through CPM (Critical Path Method)37gets absorbed into the routines and operations ofthe organization. Over time, this knowledge mayget so infused that it is difficult to separate it fromthe organization.Knowledge can be classified as “Explicit” and“Tacit”. Explicit knowledge is the knowledge thatis objective and rational. Explicit knowledge canbe expressed in formal and systematic language.Tacit knowledge is subjective, experiential andhard to formalize and communicate. Knowledgetransfer occurs in one of four forms: from tacitto tacit; from explicit to explicit; from tacit toexplicit; or from explicit to tacit. Knowledgetransfer is a two-part process, sending andreceiving. Knowledge transfer can only take placewhen knowledge is transmitted by the sender andreceived by the receiver.Many projects in the present workplace arealso undertaken using virtual teams. The virtualworkplace is defined as one in which the employeeswork remotely from the organization, that isaway from managers and peers. (Cascio, 2000)Virtual teams are never in physical proximity witheach other. Studies indicate that virtual teamscommunicate differently as compared to face-tofaceteams. (Warkentin et al., 1997) The virtualteam set up to undertake design and manufactureof the next generation of Boeing planes is an idealexample of the trend of using this type of settingfor handling major R & D projects. (Foster, 2003)The need for smaller aircrafts flying longer routeswas emerging. Boeing did not have the design tomeet this new demand. Understanding the prosand cons of virtual teams, Boeing set up 238 virtualdesign teams to tackle the task. Standardizationof the technology and using a common platformfor information transfer is important. The airlinepioneer ensured that these systems were in placeto facilitate these technology systems. Factorssuch as manager-control and supervision are alsoeliminated in a virtual team setup.Managers overseeing the operation have to beconfident in the work ethics and accountability ofthe virtual employees. Keeping every member ofthe team aware of changes and periodic reviewsand meeting can keep these members constantlyin the loop. The advantages that Boeing gainedas a result of using virtual teams were tremendousfor this project. The company was able to bringto market a new plane in two years with a lowernumber of design changes and rework. Thesuccess of this mission also helped provide thefoundation for virtual teams that today havebeen very successful in building the <strong>International</strong>Space Station. The task of 16 countries workingtogether in designing the various modules of thespace station while being geographically distanthas been amazing. The critical nature of the work,the essential requirements that all pasts fit in spaceand the high profile nature of the task has ensuredthat the virtual teams work together to achieve thehigher objective.It is clear that handling these types of projectsusing any one method of project managementis not the most ideal. 
CPM and CCPM can help organizations undertaking projects of large magnitude, in terms of cost and time, to better understand the constraints, bottlenecks and social and cultural issues that they might face, in addition to the process and manufacturing constraints that may be part of completing the activities.

Discussion

Depending on the situation in which they are used, CPM and CCPM can both be very effective. CPM has been in use for a long time and, while not perfect, offers organizations a suitable methodology for managing projects. At its essence, it follows much the same concept as the scientific management that Taylor introduced for managing labor in the early 1900s. By breaking a project up into small, manageable sections comprising tasks and activities, even enormous projects can be handled as small, discrete tasks that, when linked together, result in a completed project. The significance of this method was that it assigned dates, and not just durations, to the completion of the project.
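The point about assigning dates rather than bare durations can be shown in a few lines. The start date and the early-start offsets below are invented for illustration; in practice they would come from a forward pass such as the one sketched earlier.

    from datetime import date, timedelta

    # Illustrative only: convert early-start day offsets (e.g. from a CPM
    # forward pass) into calendar dates that can be committed to and tracked.
    project_start = date(2009, 7, 1)                    # assumed start date
    early_start = {"A": 0, "B": 3, "C": 3, "D": 8, "E": 12}
    durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 2}

    for task, offset in early_start.items():
        start = project_start + timedelta(days=offset)
        finish = start + timedelta(days=durations[task])
        print(f"{task}: start {start}, finish {finish}")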


38School of Doctoral Studies (European Union) JournalJulyThe Human elementHuman nature being what it is there is a naturalprocrastination by a portion of the workingpopulation to push out completing a task oractivity to the last possible date and then racingto complete the activity by the due date. It isimportant however, to not that all people do notwork in this manner but even a few who work inthis fashion have the ability to impact the otherswho work at the pace specified in the projectactivity. As resources are also often used for majorthan one project at a time, worker procrastinationis also compounded by the demand for the samelabor and equipment resources. Man, machine,material and money are essential requirements andresources for any organization. Project schedulingand planning have to ensure that these resourcesare used in the most optimized manner.Human relationships especially in the workplace are complex and the dynamic that existbetween worker and management or betweenworker and another worker cannot be easilycompartmentalized and segregated by the manageroverseeing the operations in the organization.“When people feel they are being treated likeobjects or problems, safety and trust plummet andwell-honed defense mechanisms of fight and flightcome online. While most people at work are toosavvy to act out these maneuvers in their mostovert and extreme forms, they adopt subtle andpowerful channels of expression, usually of the socalled“passive-aggressive” variety.” (Brightman,2004)The Critical chain addressed the human elementof project management in a manner that was notpreviously undertaken. Goldratt believed that byeliminating the start and finish days for activitiesthe worker is not presented with a mental checkof the dates that the activity was required to befinished by. The fact that all managers involvedwith the activities built sufficient time into theactivity task in order to have 95% confidencethat their workers would finish the task was wellknown.Training and educating the projectworkerTraining and education of the worker to movefrom the traditional methods of depending on thetimes set up the CPM software to one where theworker has to be motivated to work at a constantintensity in order to ensure that the activity isfinished in the least time possible accountingfor normal statistical variances in the activity.Changing the mindset of the worker is a difficulttask and while many organizational developmentand change agents believe the task can beaccomplished in reality changing the culture of theorganization is very difficult. Unless the workerhonestly perceives that the changes undertakenby the management will help the completionof the task in a more efficient manner withoutcreating undue stress on the worker the changesin project management styles will be viewed asanother management fad that is implemented bythe management.By offering adequate training to the employees,organizations are able to increase retention ratesand employee confidence. (Sullivan, 2003) Higherretention rates for an organization generallytranslate to higher productivity and a moreflexible workforce. Trained employees requirelesser supervision, thereby delivering higherquality products, and consequently, higher profits.Edwards Deming, one of the early advocates ofquality in organizations, believed that better jobrelatedtraining is instrumental in improving thequality of work and products manufactured in anorganization. 
(DeVor et al., 1992) The economic boom of the past few decades has shown how a trained and educated workforce can improve the economy of a region. (Poirier and Bauer, 2000) A lack of integration between what the manager perceives is needed and the efforts taken to satisfy that need through proper training is often the reason many new management styles fail in organizations.

All organizations and companies have internal structures that have been built up over time to ensure that the tasks the organization must complete are carried out in the manner desired.


Often, these structures are the result of organizations selecting and keeping the processes and systems that work best for them in the situation and industry within which they operate. The fact that even organizations within the same industry have radically different ways of managing projects is evidence that a 'one-size-fits-all' project management method will not work for everyone. The differing ways in which the big five auto manufacturers (Ford, General Motors, Daimler-Chrysler, Toyota and Honda) handle new product development is testimony to this.

One assumption made in the TOC is that the worker selected for a task will always have the knowledge and expertise in the field needed to complete it. Little or no literature is provided on how organizations using project teams can train and develop their personnel, especially when the workers are constantly involved in one activity or another. Even in the example of the modem company and the university provided by Goldratt in the book Critical Chain, all the individuals involved in the projects are assumed to be experts in their field. In reality, organizations do not always have individuals on their projects with 10-15 years of experience within the organization. Job-hopping is a factor that critical chain human resources management has not focused on: while the worker is an important part of the project, he or she might not be with the company for its entire duration.

Motivation and guidance of the worker are also very important. Elton Mayo, as far back as the 1920s, showed with the help of the Hawthorne experiments that workers could be encouraged and motivated to higher levels of productivity by factors other than high salaries. He concluded that every individual feels a need for recognition, security and a sense of belonging. These factors play an important role in a worker's life, since work activities and their consequences in modern times inevitably extend to the home and the worker's social life. The environment, both internal (within the work environment) and external to the organization, also plays an important role in the worker's life. Mayo was the first to bring the human element into the equation of management. (Mayo, 1977) In the book The Goal, Goldratt likewise drew a parallel between the personal issues that Alex Rogo faced and the problems he carried over from his work life into his personal life.

"Studies of commercial projects noted cost and duration estimates overran by 70 and 40 percent respectively." (Sciforma, 2004) In many cases the customer of the project can be either internal (another department in the organization, or a facility being built for the organization) or external (a supplier to another organization, or a contractor on a construction job). The successful completion of the tasks and activities also shapes the relationship that the project manager or team can have with these customers. The ability to nurture and develop long-term relationships with customers, whether internal or external, can give the organization a competitive advantage over its competitors. (Day, 2000) Organizations are constantly looking for ways to improve customer relationships. In the past, the customer was often perceived as purely external to the company.
Peter Drucker stated that markets are not passive entities beyond the control of the entrepreneur or organization; rather, they are closely interlinked. Markets can also be influenced. (Drucker, 1954)

Scheduling of tasks and worker performance

Scheduling tools have helped projects. CPM and PERT were the first to identify the benefits that can be obtained from a highly disciplined and methodical way of handling the requirements of the job. (Thomasen and Butterfield, 1993) Calendar constraints are considered an important part of the scheduling process, and project teams often use both manual and electronic calendars to make sure that an activity is progressing as desired. Project managers and team members have become very used to tracking the project this way, and the CCPM requirement to eliminate the use of calendars can create anxiety in individuals


40School of Doctoral Studies (European Union) JournalJulyused to having a deadline. Implicit and explicitexpectations of the project managers of theworkers can also affect the anxiety that workersmight face to complete the project in the absenceof a deadline. The human thought process of“things were always done this way” exists in mostcompanies. The older the operation and the moreestablished the business the greater the resistanceto the new methods.People also have different paces at which thencan work to complete a task. CPM assumes thestandard times for a task allowing for the varianceof each worker undertaking the activity. In thecase of the CCPM method, different workingspeeds might make some workers “look bad”even though the work they undertake might bedefect free and of superior quality. CCPM offerslittle insights to how managers at the lower levelscan help motivate and encourage workers withdifferent working styles.One of the best features of CCPM is the advancewarning and awareness that can be provided toresources that have constraints and are on thecritical chain of activities to be completed. Thisfeature can help project team members becomeaware of the resource needs prior to the resourceactually needing it. For example, if a machine inthe welding department is on the critical chain andthe feeding task has the potential to be completed3 days ahead of the expected completion time,advance warning can be offered to the welderworking on the machine to either complete thetask he is conducting on the machine by the datethe critical chain task will advance to this machine.Thus, it is easier to reallocate resources in a veryshort period of time without major disruption inthe project time.The CCAM also focuses attention on identifyingthe root cause of the problem rather than attemptingto topically fix the problem when it occurs.Managers and project workers are constantlyaware that they have to identify the causes of theproblem when they occur rather that allowing theissue to slide because they have more time to “fixa mistake” with the help of the floats. Many ofthe symptoms that induce the problems are oftenrarely technical but rather more physiologicalresistance to the changes in organization. CCPMmethods encourage managers and workers to lookahead rather than dwell on the fact that task andactivities were completed in the past for the saidproject. By combining this feature with CPM,senior management can encourage managers andsupervisors to look forwards towards the tasksthat need to be completed and the strategies andplanning that are needed for the tasks still tocome.In the past, with CPM, decision makers wereoften faced with reviewing past as well as futuretasks and activities that needed to be completed.CCPM reduces the amount of paperwork that themanager has to review and offers managers thechance to stress on features that are more importantto the critical chain. CCPM method encourages aproactive behavior from all involved rather thana reactive behavior to an issue or a situation thatmight arise in the organization.Interpersonal relationship in projectteam and their impact on performanceProject teams also face more scrutiny with theCCPM method as the focus is on the task yet tobe completed. Smaller volume of paperwork alsoallows for greater in depth analysis of the criticalchain. This attention requires that teams workmore closely together and achieve the desiredresults. 
Issues such as conflicts and differences of opinion can be very damaging for any team, and a project team dedicated to a critical path cannot afford to be embroiled in major conflicts and differences of opinion. There are generally two types of conflict observed in organizational settings: emotional and cognitive. Emotional conflict is personal; its manifestations are often defensive and based on resentment. Also known as "A-type conflict" or "affective conflict," it is rooted in anger, personal friction, personality clashes, ego and tension. Cognitive conflicts, on the other hand, are largely depersonalized; also known as "C-type conflict," they consist of argumentation about the merits of ideas, plans and projects. At an interpersonal level there might be two reasons


why conflicts originate. The first can be attributed to group identity: conflict arises when individuals' personalities and behavior patterns do not synchronize. In the second case, the conflicts arise not from the individual's personality but rather from the group or team with which he or she associates.

Conflicts in teams can generally be classified into three types: relationship conflicts, task conflicts and process conflicts. Each of these can have a different impact on teams within an organization. (Jackson et al., 2003) Interpersonal, or A-type, conflicts can make working conditions difficult in an organization or within a team. (Van Slyke, 1999) Emotional and personal feelings can distort and overshadow the purpose or agenda of the team, making task execution and completion difficult. The situation might devolve into a win-lose competition in which each disputing group is unwilling to arrive at a compromise.

Three kinds of factors affect the behavior of employees within an organization: personal, organizational and environmental. Psychological characteristics (perception of empowerment, involvement in decision making), demographic background and job experience are personal factors that affect a worker's performance. Supervisory control, managerial styles, compensation models (satisfaction, incentive or intrinsic theories of compensation) (Handy, 1993) and the control systems used constitute the organizational factors that can affect worker performance.

Technical issues in the application of CCPM and CPM

From a technical standpoint, project managers can conceivably use the advantages of both the CPM and the CCPM methodologies. Both methods do a great deal to increase the organization's knowledge of the activities being performed. Intellectual capital is knowledge that is considered an asset to the organization; there are four types: human capital, structural capital, customer capital and social capital. (Svelby, 2001) The CCA addresses the topic of intellectual capital, which CPM might miss. CCPM also addresses the issue of start times through the backward pass, scheduling work at the last possible time and so leaving workers fewer opportunities to waste time on the job. In CPM, scheduling is often done as soon as possible (ASAP) from the project start date, whereas in CCPM tasks are scheduled as late as possible (ALAP) based upon the target finish date.

Critics of the critical chain process, however, see no difference between the float times that CPM uses and the buffers that the critical chain uses. Both methods recognize that variance does occur in tasks; the difference lies only in how the "safety net" for these variances is set up.
The use by CCPM practitioners of sometimes the mean and sometimes the median of task times also creates confusion about which measure is the most appropriate for the project management process. Another danger in the CCPM process is that, by using the ALAP method, there is a risk that all tasks become critical because of the time constraints on the resources needed for the job.

Another major pitfall of CCPM is that many "commercial software packages do not embody optimal algorithms for resource leveling and resource-constrained scheduling, but rely on the use of simple priority rules for generating a precedence and resource feasible schedule." (Herroelen and Leus, 2001) Many software packages used for project scheduling rely on algorithms that are not publicly available, as the software companies consider them proprietary knowledge; managers are therefore not sure why certain paths are selected as critical chains over others. CCPM attempts, at all times, to reduce the project's work in process (WIP), and it advocates delaying tasks until the last possible time. Should there be rework or a major hurdle, the snowball effect of this can result in major delays to the project. In addition, CCPM does not address "practically relevant problems, such as the resource leveling problem, the resource-constrained project scheduling problem under generalized precedence constraints, the time/cost and time/resource trade-off problem, and the multi-mode resource-constrained project scheduling problem." (Herroelen and Leus, 2001)
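The kind of simple priority-rule scheduling the quotation refers to can be sketched as follows. The network, the shared "welder" resource and the shortest-duration-first rule are assumptions chosen for illustration, not the algorithm of any commercial package; the example also shows how a shared resource can make the resource-feasible chain (the critical chain) longer than, and different from, the CPM critical path.

    # Illustrative serial scheduling with one shared resource and a simple
    # priority rule (shortest duration first).  All data are assumed.
    durations = {"A": 3, "B": 5, "C": 4, "D": 2}
    preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    needs_welder = {"A": False, "B": True, "C": True, "D": False}   # shared resource

    finish, welder_busy_until = {}, 0
    pending = set(durations)
    while pending:
        # Eligible tasks are those whose predecessors have all finished.
        eligible = [t for t in pending if all(p in finish for p in preds[t])]
        task = min(eligible, key=lambda t: durations[t])            # priority rule
        start = max((finish[p] for p in preds[task]), default=0)
        if needs_welder[task]:
            start = max(start, welder_busy_until)                   # wait for the resource
        finish[task] = start + durations[task]
        if needs_welder[task]:
            welder_busy_until = finish[task]
        pending.remove(task)

    print(finish)
    # Without the shared resource, B and C could run in parallel and the critical
    # path would be A-B-D (10 days).  With a single welder they must run in
    # sequence (the rule here happens to pick C first), so the resource-feasible
    # chain, i.e. the critical chain, runs A-C-B-D and the project takes 14 days.

The rule used here is deliberately crude; different priority rules produce different schedules, which is exactly why opaque, rule-based packages can leave managers unsure why one chain was chosen over another.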


What CCPM offers to CPM

The book Critical Chain also helps highlight other organizational issues that hamper successful project management, namely the impact of management control over the project. The constant tug-of-war in which senior management seeks to reduce the time and cost of a project is strongly resisted by lower-level managers and workers, who are constantly seeking more time and resources for completing the task at hand. Management generally controls the capital in an organization; labor, on the other hand, has always offered some "resistance" to the managerial functions. The different functional departments involved with project management may also influence how the time and resource reductions or increases negotiated during project management are perceived.

While social and professional relationships among the different strata of an organization's structure have changed tremendously over the years, employer attitudes towards workers, and potential conflicts between managers, supervisors and workers, are real and exist in almost all organizations. To a major extent, CCPM simplifies the issues of labor management and mentality in organizations. Project managers are actively involved with the WBS but also "design it to validate some core assumptions related to the project's fiscal requirements." (Elton and Roe, 1998)

CCPM encourages the entire project team to refocus on coordination and communication of task and mission, rather than having project team members work in functional silos. While many people covet the opportunity to manage and direct subordinates, not every individual possesses the ideal leadership qualities. The project leader has to realize that he or she must now provide the holistic viewpoint of the project and keep the entire picture in mind. No individual is perfect; management skills can be developed and the qualities required can be learned. A poor work ethic implies that project members do not have a vision and a dream for the company in the long run. When leaders do not appreciate good work, employees can become disillusioned, motivation can drop, and the quality of the work a worker generates can be seriously affected. In any project team, interpersonal communication and mutual respect are important, and managing time, communication and resources efficiently is essential if the team is to achieve its objectives.

Trust is the most important factor in any organizational relationship. (Child, 2001) In the business sense, trust is having confidence that a partner or worker will conduct and perform their task in the manner agreed on. This confidence varies with a number of external factors, such as the past relationship the worker may have had with the leader, the culture and values of the worker, and the fit between the skills the task requires and the worker's skill set. A project team's success depends on the level of mutual accountability, contribution and shared values that the group members feel towards achieving the goal.

Values tell a lot about someone or something, for an organization as well as for an individual. Employees are constantly seeking satisfaction and meaning from their work lives, and more balance in their lives as a whole. By defining the values and culture of the workplace, an organization attracts employees who are more in tune with it.
Social, cultural and political factors also greatly affect the organization. Competency management can help organizations understand their skill-set requirements. Human motivation encompasses a wide variety of topics and arenas; the factors affecting it can be intrinsic as well as extrinsic to the individual and to the environment in which the individual operates. Every individual has his or her own individuality and style, so a true comparison is not possible. A great deal of the value and innovation in organizations today comes from workers' knowledge and intelligent work processes, which may be technically very difficult to evaluate and appraise.


2009 Projects’ Analysis through CPM (Critical Path Method)43These “social and soft” aspects of managementare some of the important factors that CCPMintroduces to organizations. “Project performanceis often less a matter of understanding theconstraints of the project and more a function ofthe personal skills and capabilities of the potentialleaders available.” (Elton and Roe, 1998) CCPMalso challenges the traditional concepts ofperformance measurement for the individual andthe project. Projects often lose direction and runout or resources due to the lack of understandingof the task between the senior business managers,project managers and functional managers. Itshould be clear to all involved in a project however,that variability is an integral part of the process andas such managers should be able to deal with theoccurrence of these variables within the system.Another major pitfall of the CPM and theCCPM is the extent of the paperwork requiredfor the entire process. Often, individuals havingto constantly deal with this problem consider ita major hurdle in the entire monitoring process.(Haughey, N.D.) Often, senior managers anddecision makers do not have the time to completelyevaluate and review the entire contents of the reportor document handed to them by their subordinates.Failure to periodically review document can resultin errors and slippages of the project getting nonotice at the time where minor adjustments mightmanage to put the project back on track. Toavoid this problem, senior managers can affordindividual project managers more autonomyof their tasks and greater job-enrichment. Jobenrichmentis referred to the latitude and personalresponsibility that is conferred on a worker toallow him or her to perform their task in a mannerthat they perceive as comfortable and the bestoption while still producing the desired outcomesin manner specified and with the quality desired.This drastically differs from the concept of jobenlargementwhere the worker is forced to performand undertaken more than his share of work inan attempt by the management to reorganize orrestructure the job scope of the individual in anygiven position.Job-enrichment has to be a constant processand has to be communicated effectively to allmembers in the organization. Defining rules andguidelines is also important when promoting jobenrichmentfor the workers. When all involved inthe process are able to understand and comprehendthe expectations that the task requires, fewermistakes and errors are made during the executionof the task. Building a value-based communicationboth vertically and horizontally throughout theorganization is necessary.Many organizations using the project styleof management go through the tedious processof developing a “project management process,training the staff, and then never providingany long-term support and follow-up. Projectmanagement methodology should be thought ofas a tangible product that is developed, supported,and enhanced.” (Mochal, 2002) This is especiallyimportant when the company wishes to makeinvestments in purchasing and developing softwarefor CPM and CCPM methods in the organization.The initial cost of the software required for thesemanagement tools is very high along with the costof training of the worker. If the tool is not usedto its full extent then complete benefits cannot beobtained from this tool.As the scope of the project increases themembers working on projects tend to becomemore specialized in a specific task and crosstraining is rarely done. 
Cross-training is different from multitasking: in cross-training, the worker is trained in more than one skill set. This can help the organization reduce its dependency on constrained resources and offer more flexibility for completing the task as planned.

Supporting services for the project team are also very important. These include the administrative and IT staff needed to track and develop the future needs of the project as it grows through its various phases. CPM and CCPM often assume that the skills required to tackle a project are always available, and place little emphasis on developing those skills. One reason construction projects fare relatively better than projects undertaken in other industries is that potential project managers for construction projects are groomed for years before being allowed to head their own project.


44School of Doctoral Studies (European Union) JournalJulyMany other industries make the mistake ofpromoting a technical expertise individual into theproject management position without consideringif the individual will be an ideal person to leadand manage people under his or her supervision.Project work is often unique when compared tomanufacturing or production. Often, projectmembers have to conceive, plan and monitora solution from infancy to completion. In theprocess, the project might pass through variousstages of evolution and change. Project workersneed to be resilient enough to manage the changesand modifications as needed. While the CPM andthe CCPM might suggest the direction to be taken,finally it is up to the worker to make the necessarychange in the schedule and the process.Lastly, time management in any project iscritical. Time is a commodity that has limited“shelf” life. The significance of the fact that itcannot be recovered is the main characteristic thathas made people place a lot of importance and valueon the measure of time. Time management hastaken on strategic importance for organizations andcorporations due to the value of money associatedwith the time. The concept of time value of moneyis becoming of great importance to determine theeffectiveness and competitiveness of organization.Time management can also be used to measurethe performance levels of individual workers andemployees. Completing a project on time and upto the required quality can help companies andorganizations build a reputation for reliability inthe market.Commitment and dedication is required if anindividual wants to handle time more efficiently.This means talking less on the telephone, keepingoffice communication to a defined minimum,finding ways to handle routine tasks lessfrequently. It is very important to set a routineand stay true to it, under most circumstances.Things will change and complications will arisein any situation. It is unreasonable to not expectthis. Managers and supervisors can manage theirtasks and consequently their time better if theylearn to delegate tasks and responsibilities to theirsubordinates. (Chapman, 2001, Blair, 2003) It maytake some time in the initial stages to train andeducate a subordinate in the tasks to be completed;if the worker has learnt the task properly however,delegating may leave greater time for the managerto handle other more important decision-makingtasks. When considering time management, anyindividuals should be judicious in identifyingwhich task can be done by themselves and thetasks that may be completed more effectively bycontracting it out to another person.Conclusion and RecommendationThe CPM and the CCPM are both valuabletools that any organization can use successfullyto manage their projects. “Scope management,cost management, and time management” areimportant variables for projects. (Anbari, 2003)Every successful project is characterized by soundproject analysis using some form of networkdiagram that breaks up even very massive projectsinto small and manageable discrete tasks thatcan be performed. Understanding the true scopeand extent of the project is often the primary andcritical step to build a sound project managementfoundation for any undertaking. Projects differconsiderable. Some projects might be very shortwhile other might stretch for years. In addition,some projects might be routine and the companyand the project team might undertake similarprojects periodically—home construction is oneexample of this type of project. 
Projects such as the International Space Station will take years to complete, and even after completion will require a maintenance and upkeep program. Assuming that the needs of all projects are similar, and can therefore be understood through one common methodology, is presumptuous.

CPM and CCPM both use a safety net to manage the uncertainty that arises in the course of completing a project. Where CPM uses floats for every activity, CCPM uses buffers at the end of the chain of activities and where non-critical activities feed into the critical ones. CPM has been in use since the 1950s and has offered project managers effective ways to estimate the time and cost needs of a project. The critical path is not static.


2009 Projects’ Analysis through CPM (Critical Path Method)45It changes, emphasizing that the project managerand senior managers have to constantly reviewand monitor the process. This helps ensure thatthe activities are being completed in the manner inwhich they were planned. Scheduling of projectbasedactivities can be done by using either theforward or the backward pass in CPM. Duringscheduling, every task and activity should bereviewed using compatible calendars. Using toomany different calendars can confuse the decisionprocess. Even if great care is taken to preventerrors, the schedule does not stay fixed and as aresult the constantly changing path might createerrors at later stages of the project if the sameindividual does not perform the change. CCPMhowever, does not recommend changing the CCschedule. Rather, adjusting and managing thebuffers in the process is considered to be moreappropriate. Too many changes in CCPM willconfuse and demoralize the worker in the longrun.As stated earlier, every project is not similar.And the manner in which CPM and CCPM isimplemented and used also differs considerably.Organizational culture and values play a significantrole in the process of planning and scheduling ofactivities. Knowledge that can be harnessed fromwithin an organization also has the potential todetermine the accuracy and the relevance of thedata and information available to managers at theplanning stages. More accurately the knowledgecompilation better the chances that the knowledgewill help improve the decision making process.Knowledge and information should flow fromall directions: i.e., from management to workersand from workers to management. Restrictingthe flow of information can prove disastrous.Management, also believing that they have thesole control over the information, can de-motivateand reduce the involvement of the worker in theproject completion process.With the use of virtual teams and globalplayers for various task completions, projectmanagement has become even more complicated.The earlier example of Boeing offered in Chapter3 indicates that projects are no longer restricted bygeographical boundaries. Ensuring that all partiesinvolved in the project use CPM or CCPM in asimilar manner is also critical. In many situationsfinal consumer in the project chain might dictatethe methods that are to be used by the other partiesfeeding the CP or CC.There are some advantages of CCPM thatcan greatly enhance CPM. The elimination ofstart- and end-dates, while a seemingly smallchange, can greatly affect the mentality of theworker. Perceptions play an important role in themanner by which a worker undertakes his or herresponsibilities so that a task may be completed.Most employed individuals either subconsciouslyor consciously tend to follow Parkinson’s Law.Restructuring and reengineering of companieshave also reduced the number of employeesavailable to complete any task—the reduction ofthe labor resource can introduce constraints thatmight not have been foreseen during the initialplanning stages. CCPM states that it is inefficientto have employees multitask on different projects.In reality however, this is an option that isunavailable to individuals working on projects.Many software tools often fail to identify the overutilization of the labor resource for the completionof any task. Capital resources also determine theextent of equipment and labor resources availablefor any project. CCPM identifies that managingresources can help project managers monitor thetask completion. 
At the same time, they can project the need for these resources for future tasks. The CPM method looks backwards, and it often makes the manager (functional or project) look good in the short term because of the milestones achieved in completing tasks. CCPM, on the other hand, looks forward and does not always stress the finished part of the project. One flaw of the CCPM method, however, is that it does not account for the time required for rework, or for cases where the scope and quality of the activity are compromised. Slippage on any of the activities on the critical chain will consume the project buffer set at the end of the project. Critics of this process are also quick to point out that estimation of the project buffer still relies on the variability information provided through the CPM method; in essence, it might be more difficult to get realistic times for CCPM during the planning stages.
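One simple, illustrative way to watch slippage eating into the project buffer is to compare buffer consumption with progress along the critical chain. The numbers and the traffic-light thresholds below are assumptions for the sake of the sketch, not fixed rules from the CCPM literature.

    # Illustrative buffer tracking (all numbers assumed).  Compare how much of
    # the project buffer has been consumed with how much of the chain is done.
    chain_days_total = 40          # planned length of the critical chain
    chain_days_done = 10           # chain work completed so far
    buffer_days_total = 20         # project buffer placed at the end of the chain
    buffer_days_used = 12          # slippage absorbed by the buffer so far

    chain_complete = chain_days_done / chain_days_total          # 25%
    buffer_used = buffer_days_used / buffer_days_total           # 60%

    # An assumed traffic-light rule: act when buffer consumption runs well
    # ahead of chain completion.
    if buffer_used <= chain_complete:
        status = "green: no action needed"
    elif buffer_used <= chain_complete + 0.3:
        status = "yellow: plan recovery actions"
    else:
        status = "red: execute recovery actions now"

    print(f"Chain complete: {chain_complete:.0%}, buffer used: {buffer_used:.0%} -> {status}")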


The theory of constraints concept, when first introduced, focused purely on production and manufacturing. Project management does not always have the same external and internal variables; projects are often subject to factors beyond the scope of a closed system of management. At this point, CCPM offers organizations a way of incorporating the human element into the CPM method. Used together, the critical path method and CCPM can greatly complement each other, creating an environment where project managers, functional managers and senior management can collectively review the technical, social, environmental and labor factors that might affect the completion of the project.

From the project team's point of view, the decision-making process does not change significantly with the use of either tool. It does change, however, with respect to the extent of risk and uncertainty that the two methods handle. CPM was initially introduced for projects for which historical data could be obtained; PERT was used for projects that had no precedent or information available on tasks, times or cost. Over the years, however, CPM and PERT were often combined, introducing the uncertainty aspect of PERT into the deterministic nature of CPM. The CCPM method recognizes the importance of statistical process control in determining the time requirements for tasks and activities. The controversy over whether to use the mean or the median of the distribution of task times is still to be resolved. As discussed in Chapter 2, different managers have different tolerances for risk, and therefore use different confidence intervals in determining time; using one standard method to reduce the basic estimate would not be appropriate.

Each organization using CCPM has to identify its level of risk tolerance and the decision-making strategies it could use in case of project slippage. Both of the methods discussed in this study depend extensively on algorithms embedded in the many commercial software packages available. It is important, however, that organizations realize that algorithms need modification to match the nature of the project and the organization's missions and goals. Project managers are often not experts in the inner workings of the software; rather, they possess the technical expertise to judge whether the results the software generates are logical and appropriate for the project. This requires developing and training individuals both to use the software and to understand the caveats that have to be considered before selecting the best path. Software can only aid the decision-making process; in the end, it is the manager or supervisor who decides to choose a certain path. There is great potential in both of these tools if they are properly utilized, and the extent of their utilization will depend on how they stand the test of time in real-world situations.

Bibliography

AACE (1990) AACE Cost Engineers Notebook.

Anbari, F. T. (2003) Earned Value Project Management Method and Extensions. Project Management Journal, 34, 12.

Anonymous (2001) Managing the critical chain. Strategic Direction, 17, 28-30.

Archibald, R. D. and Villoria, R. L. (1966) Network-based management systems (PERT/CPM), Wiley, New York.

Ayers, J. B. (2001) Handbook of supply chain management, St. Lucie Press/APICS, Boca Raton, Fla./Alexandria, Va.


2009 Projects’ Analysis through CPM (Critical Path Method)47Baar, J. E. and Jacobson, S. M. (2004) Forecasting-What a Responsibility. Cost Engineering., 46,19,Back, W. E. and Moreau, K. A. (2001) Informationmanagement strategies for project management.Project Management Journal., 32, 10-20,Badiru, A. B. (1995) Incorporating learning curveeffects into critical resource diagrammingProject Management Journal, 26, 38-45,Baram, G. E. (1994) Delay analysis - Issues not forgranted Transactions of AACE <strong>International</strong>,1994, DCL5.1-9,Blair, G. M. (2003) Personal Time Managementfor Busy Managers Accessed on December 10,2003 from: http://www.see.ed.ac.uk/~gerard/Management/art2.htmlBrightman, B. K. ( 2004) Why managers fail, andhow organizations can rewrite the script TheJournal of Business Strategy, 25, 47-52,Cammarano, J. (1997) Project management: Howto make it happen IIE Solutions, 29, 30-34,Cascio, W. (2000) Managing a Virtual Workplace.Academy of Management Executive, 14, 81-90,Chapman, A. (2001) Time management techniquesand systems Accessed on December 10,2003 from: http://www.businessballs.com/timemanagement.htmChild, J. (2001) Trust-the fundamental bondin Global Collaboration OrganizationalDynamics, 29, 274-288,Clifton, D. S. and Fyffe, D. E. (1977) Projectfeasibility analysis : a guide to profitable newventures, Wiley, New York.Cohen, I., Mandelbaum, A. and Shtub, A. (2004)Multi-Project Scheduling and Control:A Process-Based Comparitive Study ofthe Critical Chain methodology and somealternatives Project Management Journal, 35,39-50,CriticalChainLtd (2003a) Critical Chainvs Critical Path Accessed on August 222004 from: http://criticalchain.co.uk/How/CriticalChainvsCriticalPa.htmlCriticalChainLtd (2003b) Key characteristics ofCritical Chain Project Management Accessedon August 22 2004 from: http://criticalchain.co.uk/How/KeyPoints.htmlDaft, R. L. (1997) Management, Dryden Press,Fort Worth.Day, G. S. (2000) Managing market relationshipsJournal of the Academy of Marketing Science,28, pp. 24-30,Deming, W. E. (1950) Deming’s 1950 Lecture toJapanese Management Accessed on July 182003 from: http://deming.eng.clemson.edu/pub/den/deming_1950.htmDeVor, R. E., Chang, T.-h. and Sutherland, J. W.(1992) Statistical quality design and control: contemporary concepts and methods,Macmillan, New YorkToronto.Doloi, H. K. and Jaafari, A. (2002) Towarda dynamic simulation model for strategicdecision-making in life-cycle projectmanagement Project Management Journal.,33, 23-39,Douglas, E. E. I. (1993) Field project control -Back to basics Cost Engineering, 35, 19-25,Drucker, P. (1974) In The Future of thecorporation(Ed, Kahn, H., PLM (Firm)) Mason& Lipscomb Publishers, New York, pp. 49-71.P. Stelth (MSc) - Professor G. Le Roy (PhD) - Projects’ Analysis through CPM (Critical Path Method)


48School of Doctoral Studies (European Union) JournalJulyDrucker, P. F. (1954) The practice of management,Harper, New York,.Elton, J. and Roe, J. (1998) Bringing discipline toproject management Harvard Business Review,76, 153-159,Evans, M. W. F. J. R. (2001) Baldrige Assessmentand Organizational Learning: The Need forChange Management Quality ManagementJournal, Volume 8.http://www.asq.org/pub/qmj/past/vol8_issue3/ford.htmlEvarts, H. F. (1964) Introduction to PERT, Allynand Bacon, Boston,.Farrell, M. (2002) Financial engineering in projectmanagement Project Management Journal.,33, 27-37,Fletcher, H. D. and Smith, D. B. (2004) Managing forvalue: Developing a performance measurementsystem integrating economic value added andthe balanced scorecard in strategic planning.Journal of Business Strategies., 21, 1-16,Foster, S. T. (2003) Managing quality : anintegrative approach, Pearson Prentice Hall,Upper Saddle River, N.J.French, W. L. and Bell, C. (1999) Organizationdevelopment : behavioral science interventionsfor organization improvement, Prentice Hall,Upper Saddle River, NJ.Fritz, R. (1996) Corporate Tides: The InescapableLaws of Organizational Structure, Berrett-Koehler Publishers, San Francisco.GDRC (2004) Glossary of Environmental TermsAccessed on July 6 2004 from: http://www.gdrc.org/uem/ait-terms.htmlGido, J. and Clements, J. P. (2003) Successfulproject management, Thomson/South-Western,Mason, Ohio.Globerson, S. and Zwikael, O. (2002) The impactof the project manager on project managementplanning processes. Project ManagementJournal., 33, 58-65,Goldratt, E. M. (1990) What is this thing calledtheory of constraints and how should it beimplemented?, North River Press, Croton-on-Hudson, N.Y.Goldratt, E. M. (1997) Critical chain, The NorthRiver Press, Great Barrington, MA.Goldratt, E. M. and Cox, J. (1993) The goal : aprocess of ongoing improvement, Gower,Aldershot, Hampshire, England.Handy, C. B. (1993) Understanding organizations,Oxford <strong>University</strong> Press, New York.Hartman, F. and Ashrafi, R. A. (2002) Projectmanagement in the information systems andinformation technologies industries ProjectManagement Journal., 33, 5-16,Haughey, D. (N.D.) Avoiding the ProjectManagement Obstacle Course Accessed onAugust 22 2004 from: http://www.projectsmart.co.uk/docs/project_management_obstacle_course.pdfHerroelen, W. and Leus, R. (2001) On the meritsand pitfalls of critical chain scheduling Journalof Operations Management, 19, 559,Herroelen, W., Leus, R. and Demeulemeester, E.(2002) Critical chain project scheduling: Donot oversimplify Project Management Journal,33, 48-60,Hobb, L. J. and Sheafer, B. M. (2003) Developinga work breakdown structure as the unifyingfoundation for project controls systemdevelopment. Cost Engineering., 45, pg. 17,School of Doctoral Studies (European Union) Journal - July, 2009 No. 1


2009 Projects’ Analysis through CPM (Critical Path Method)49Hutt, M. D. and Speh, T. W. (1985) Industrialmarketing management : a strategic view ofbusiness markets, Dryden Press, Chicago.Jackson, K. M., Mannix, E. A., Peterson, R. S. andTrochim, W. M. K. (2003) In A Multi-facetedApproach to Process Conflict.Jiang, J. J., Klein, G. and Ellis, T. S. (2002) Ameasure of software development risk. ProjectManagement Journal., 33, 30-42,Joinson, C. 2002 Managing Virtual Teams HRmagazine June 47 6Just, M. R. and Murphy, J. P. (1994) The effectof resource constraints on project schedulesTransactions of AACE <strong>International</strong>, 1994,DCL2.1-5,Karlsen, J. T. and Gottschalk, P. (2004) FactorsAffecting Knowledge Transfer in IT ProjectsEngineering Management Journal, 16, 3-10,Kerzner, H. (1979) Project management : asystems approach to planning, scheduling,and controlling, Van Nostrand Reinhold, NewYork.Kirkman, B. L., Gibson, C. B. and Shapiro, D.L. (2001) ‘Exporting’ Teams: Enhancing theImplementation and Effectiveness of WorkTeams in Global Affiliates OrganizationalDynamics, 30, 12-30,Knights, D. and Morgan, G. (1991) Corporatestrategy, organizations and subjectivity: Acritique Organizational Studies, 12, 251-273,Knoke, J. R. and Garza, J. d. l. (2003) Practicalcost/schedule modeling for CIP managementAACE <strong>International</strong> Transactions, PM61,Korman, R. (2004) 7. Critical Path Method:Network Logic Was Aided By MainframePower ENR, 252, 30,Kotelnikov, V. (2004) Resource Based ModelAccessed on May 27 2004 from: http://www.1000ventures.com/business_guide/mgmt_stategic_resource-based.htmlLamers, M. (2002) Do you manage a project,or what? A reply to “Do you manage work,devliverables, or resources” <strong>International</strong>Journal of Project Management., 20, 325,Leach, L. P. (1999) Critical chain projectmanagement improves project performanceProject Management Journal, 30, 39-51,Leemann, T. (2002) Managing the chaos of change.The Journal of Business Strategy., 23, pg. 11,5 pgs,Lowe, C. W. (1966) Critical path analysis by barchart: the new role of job progress charts,Business Publications, London,.Martin, C. C. (1976) Project management : how tomake it work, Amacom, New York.Mayo, E. (1977) The human problems of anindustrial civilization, Arno Press, New York.McKinlay, A. and Taylor, P. (1996) In The NewWorkplace and Trade Unionism(Ed, Smith, P.)Routledge, London.Meredith, J. R. and Mantel, S. J. (1995) Projectmanagement : a managerial approach, Wiley,New York.MindTools (2004) Critical Path Analysis & PERTCharts Accessed on August 21 2004 from:http://www.mindtools.com/pages/article/newPPM_04.htmMochal, T. (2002) Defining and supporting projectmanagement methodology Accessed on August22 2004 from: http://builder.com.com/5100-6315-1049591.htmlP. Stelth (MSc) - Professor G. Le Roy (PhD) - Projects’ Analysis through CPM (Critical Path Method)


50School of Doctoral Studies (European Union) JournalJulyModer, J. J. and Phillips, C. R. (1964) Projectmanagement with CPM and PERT, ReinholdPub. Corp., New York,.Morgan, G. (1998) Images of organization, SagePublications, Thousand Oaks, Calif.Nabors, J. K. (1994) Considerations in planningand scheduling Transactions of AACE<strong>International</strong>, 1994,Needleman, T. (1993) It’s not just for big firmsAccounting Technology, 9, 59-64,Newbold, R. C. (1998) Project management in thefast lane : applying the theory of constraints,St. Lucie Press, Boca Raton, Fla.Patrick, F. S. (1999) Critical Chain Schedulingand Buffer Management . . .Getting Out From Between Parkinson’s Rock andMurphy’s Hard Place Accessed on August 222004 from: http://www.focusedperformance.com/articles/ccpm.htmlPeters, T. F. (2003) Dissecting the doctrineof concurrent delay AACE <strong>International</strong>Transactions, CD11,Poirier, C. C. and Bauer, M. J. (2000) E-supplychain : using the Internet to revolutionize yourbusiness : how market leaders focus their entireorganization on driving value to customers,Berrett-Koehler, San Francisco.Porter, M. E. (1996) What is Strategy? HarvardBusiness Review, 74, 61-78,Pruitt, W. B. (1999) The value of the systemengineering function in configuration controlof a major technology project ProjectManagement Journal., 30, 30-37,Rad, P. F. and Cioffi, D. F. (2004) Work andResource Breakdown Structures for FormalizedBottom-Up Estimating. Cost Engineering., 46,pg. 31, 7 pgs,Randolph, W. A. and Posner, B. Z. (1992) Gettingthe job done! : managing project teams and taskforces for success, Prentice Hall, EnglewoodCliffs, N.J.Raz, T., Barnes, R. and Dvir, D. (2003) A CriticalLook at Critical Chain Project ManagementProject Management Journal, 34, 24,Rivera, F. A. and Duran, A. (2004) Critical cloudsand critical sets in resource-constrained projects<strong>International</strong> Journal of Project Management,22, 489,Sauer, C., Liu, L. and Johnston, K. (2001)Where project managers are kings. ProjectManagement Journal., 32, 39-50,Scavino, N. J. (2003) Effect of multiple calendarson total float and critical path Cost Engineering,45, 11,Schein, E. (1992) Organizational culture andleadership., Jossey-Bass, San Francisco.Schumacher, L. (1997) Defusing delay claimsCivil Engineering, 67, 60-62,Schuyler, J. (2002) Exploiting the Best of CriticalChain and Monte Carlo Simulation Accessedon August 22 2004 from: http://www.decisioneering.com/articles/schuyler1.htmlSciforma (2004) Introduction to Critical ChainAccessed on August 22 2004 from: http://www.sciforma.com/products/ps_suite/ccIntro.htmSewell, G. H. (1975) Environmental qualitymanagement, Prentice-Hall, Englewood Cliffs,N.J.School of Doctoral Studies (European Union) Journal - July, 2009 No. 1


2009 Projects’ Analysis through CPM (Critical Path Method)51Spoede, C. W. and Jacob, D. B. (2002) Policing,firefighting, or managing? Strategic Finance,84, 31-35,Sullivan, J. (2003) My customer is anyone whoisn’t me - and other training truths Nation’sRestaurant News., 37, 16,Svelby, K. E. (2001) Intellectual Capital andKnowledge Management Accessed on June 302004 from: http://www.sveiby.com/articles/intellectualcapital.htmlThomasen, O. B. and Butterfield, L. (1993)Combining risk management and resourceoptimization in project management softwareCost Engineering, 35, 19-24,Thoumrungroje, A. and Tansuhaj, P. (2004)Globalization Effects, Co-Marketing Alliances,and Performance Journal of American Academyof Business, 5, 495-502,Trent, R. J. 2004 What Everyone Needs to KnowAbout SCM Supply Chain ManagementReview March 2004USAID (2004) Glossary Accessed on June 28 2004from: http://www.usaid.gov/pubs/sourcebook/usgov/glos.htmlVan Slyke, E. J. (1999) Listening to conflict :finding constructive solutions to workplacedisputes, AMACOM, New York.Warkentin, M. E., Sayeed, L. and Hightower,R. (1997) Virtual teams versus face-to-faceteams: An exploratory study of a Web-basedconference system Decision Sciences, 28, 975- 997,Wikipedia (2004) Critical Chain Accessed onAugust 21 2004 from: http://www.fact-index.com/c/cr/critical_chain.htmlWinter, R. M. (2003) Computing the near-longestpath AACE <strong>International</strong> Transactions, PS111,WordHistory (2004) Critical Path Accessed onAugust 21 2004 from: http://www.worldhistory.com/wiki/c/critical-path.htmYahya, S. and Goh, W.-K. (2002) Managinghuman resources toward achieving knowledgemanagement Journal of KnowledgeManagement, 6, 457-468,Zultner, R. E. (2003) Multiproject Critical ChainAccessed on August 22 2004 from: http://www.cutter.com/research/2003/edge030610.htmlP. Stelth (MSc) - Professor G. Le Roy (PhD) - Projects’ Analysis through CPM (Critical Path Method)


Income Disparity Measurement

Alexandre Popov (MBA)
Master of Business Administration and candidate for the PhD in Economics at the School of Doctoral Studies, Isles Internationale Université (European Union)

Professor Stefen L. Freinberg (PhD)
Chair of Macroeconomics and Monetary Economics Studies of the Department of Business Management and Economics at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

This article discusses the problems of measuring income disparity, especially in the developing world. A common alternative measurement framework for the third world has been the use of per capita GNP figures (GNP/c); increasingly, however, these figures are adjusted for purchasing power parity (PPP). The adjustment compares the local prices of goods, merchandise and services in a given country with the international prices of the same commodities. By valuing every good and service at the same comparative prices, researchers obtain income measurements adjusted for PPP. The results of the model that adjusts GNP/c for PPP differ markedly from those of the model that disregards the adjustment.

Key words: Economics, Macroeconomics, Income Disparity, Developing Economies, Comparative Analysis


Introduction

The level of income disparity in the third world has become an important issue for the international community because the repercussions of this phenomenon are too dangerous to ignore. A number of illustrations highlight the seriousness of the issue. To take one case in point, the richest country in the world today is Luxembourg and the poorest is Sierra Leone, and the per capita gross national income of Luxembourg is more than ninety times that of Sierra Leone (Andrew McKay, 2002). Another illustration is that average spending by the top 10% in Zambia has been thirty-seven times that of the lowest 10%. Similarly, official statistics for India in 1990 revealed that approximately fifty-six percent of those aged fifteen years and above were uneducated. Likewise, in Venezuela during 1996/97 approximately 2% of land holdings were of 500 hectares or more, together comprising almost 60% of the land, while approximately 50% of holdings were of 5 hectares or less and accounted for less than 2% of the land. It is clear from these illustrations that income disparity has been on the rise in third world countries (Andrew McKay, 2002). The tables that follow further illustrate the point. They show the comparative human development index for the entire world, broken down by country, in relation to population and poverty (Table 1); education spending in relation to total GDP and total government expenditure (Table 2); the percentage of income shared between the richest and the poorest (Table 3); gender-related income distribution (Table 4); and occupation and unemployment rates (Table 5).

Andrew McKay (2002) illustrates the gravity of present income disparity in the third world and highlights the importance of this issue. He asserts that the gap between the rich and the poor directly affects the quality of education and of other social and economic services: the rich receive the best quality of services while the poor are deprived of them. He writes, "Inequality matters for poverty. For a given level of average income, education, land ownership etc., increased inequality of these characteristics will almost always imply higher levels of both absolute and relative deprivation in these dimensions."

McKay (2002) further asserts that income disparity will continue to deprive the third world of enhanced growth and development: "Inequality matters for growth. As acknowledged in the 2000 White Paper, there is increasing evidence that countries with high levels of inequality – especially of assets – achieve lower economic growth rates on average. In addition, a given rate and pattern of growth of household incomes will have a larger poverty reduction impact when these incomes are more equally distributed to begin with."

He also relates income disparity in the third world to high levels of organized crime, brutal civil wars and movements of social disobedience, and points to the ethical dimension of income disparity.
He writes, "There is a strong and quite widely accepted, ethical basis for being concerned that there is a reasonable degree of equality between individuals, though disagreement about the question 'equality of what?' (For instance, outcomes or opportunities?), as well as about what might be 'reasonable'. Inequality is often a significant factor behind crime, social unrest or violent conflict. These are often important contributors to poverty in their own right. Inequalities – even perceived ones – between clearly defined groups, for example according to ethnicity, may be an important issue here."

Taking the above facts into account, it is clear that income disparity has become a key concern for the entire international community, an importance clearly reflected in the Millennium Development Goals (MDGs). But before the international community can promote and fulfill this cause, it is important to understand the various structural theories that are


being used by third world countries to measure income disparity in their respective countries, because all efforts will be in vain if measurement theories comprehensive enough to capture all the relevant specifics are not developed. This study aims to fulfill that requirement: it first identifies the gaps and shortcomings of the structural theories in use at present and then presents alternative theories with which to better evaluate the level of income disparity in third world countries.

Research Questions

General Questions:

Q 1: Has the level of income disparity in the third world worsened only in recent years?

Q 2: Can we determine the level of income disparity by age, color, gender, class, rural/urban population, economic system, occupation and education?

Q 3: Is it important to understand the structural processes behind income disparity in the third world in order to formulate a better structural theory for its measurement?

Q 4: Can a better measurement method be crafted without understanding the present methods of income disparity measurement?

Specific Questions:

Q 5: Do the standard methods for evaluating income disparity continue to be beneficial and valuable?

Q 6: Is it important to extend the conceptions of income disparity measurement beyond those in use today?

Background of the Study

The Historical Context of Income Disparity

Throughout most of human history there were no checks and balances on population growth, because the repercussions of overpopulation had not yet been evaluated. Those repercussions became significant only during the industrial revolution in the West, when many economists began to fear that population growth was outpacing economic growth and that a time would come when economic resources would be scarce and a large portion of the world's population would suffer deprivation.

Glenn Firebaugh (2000) highlights these concerns of the classical economists of the eighteenth century: "economic growth is unlikely to outpace population growth over the long run. In this model, economic gains are short-lived as the geometric growth of population inevitably catches up with linear economic gains."

These fears have proved partly accurate, in that the majority of people living in this world are deprived of even the basic needs of survival. Firebaugh (2000) writes, "The pace of population growth and economic growth over the last two centuries has proven the classical economists right about the expansion of the human population but wrong about the population trap. The productivity gains of the Industrial Revolution were accompanied by an era of unprecedented population growth. In 1820 the world's population was about 1.1 billion. Today the world's population is over six billion."

While per capita income has risen over the past one hundred years, the majority of people remain in extreme poverty, because the world's income has not been distributed evenly: the rich have become richer and the poor poorer. Yet the resources to provide the basic necessities of life to the poor are plentiful, a situation that is historically unprecedented; what remains is to distribute the present resources more evenly.
As Glenn Firebaugh (2000) asserts, "Although the rise in world incomes does not appear to be accompanied by rising human happiness or contentment, at the least it can be said that at this juncture in history there is greater potential than there was in earlier eras for meeting the essential human needs for food, shelter, clothing, and medical attention. The central economic issue for our era is not whether there is


enough to go around--there is more to go around now than ever before--but how evenly the world's income is distributed. The news in that regard is less heartening."

The key factor behind the present level of income disparity was the industrial revolution of the West. Western countries exploited the resources of African and Asian countries and used them to enhance their own earnings. This left a huge gap between the developed and the underdeveloped world: the developed world progressed rapidly through industrialization, while the underdeveloped world was not only deprived of industrial growth but also had its resources exploited. As Glenn Firebaugh (2000) asserts, "The Industrial Revolution produced a sharp increase in the income disparity between the richest and poorest regions of the world. In 1820 per capita income in Western Europe (the world's richest region at the time) was roughly three times greater than per capita income in Africa. Today per capita income is almost 14 times greater in Western Europe than it is in Africa. The gap is even larger for individual nations. Average incomes in the richest and the poorest nations now differ by a factor of about 30."

The hypothesis implicit in the first research question, that the phenomenon of income disparity is relatively new, is therefore untrue: these concerns have been at center stage throughout the industrial revolution. It is also safe to assert that the collective income of the world has risen drastically in the last two hundred years, although it has been distributed extremely unevenly. Both facts contradict fairly popular economic models: the rise in per capita (and collective) income is at odds with the population-trap theory, while the persistence of income disparity contradicts the income growth models (Glenn Firebaugh, 2000).

Before evaluating the present systems of measurement, it is important to note that the most difficult issue in evaluating income disparity in third world countries has been the shortage of dependable income disparity data. This creates further problems, as research on between-country as well as within-country income disparity confronts researchers with unusual kinds of data trouble.

It is therefore important to have a universal and comprehensive income disparity measurement system, so that the contradictions involved in assessing income disparity through various vital variables (such as age, color, gender, rural/urban residence, economic system, occupation and education) can be removed and an enhanced measurement framework crafted that helps us not only to assess present levels of income disparity in the third world but also to reduce the problem.

Literature Review

A brief overview of the income disparity measurement methods in use

To measure income, several third world countries use their annual national per capita Gross National Product (GNP/c) figures and convert them at the official exchange rate of the American dollar at the time. Every research study conducted in this manner has revealed not only extremely large gaps in income but also an increase in these gaps over the years.

An alternative measurement framework frequently employed for the third world also uses per capita GNP figures (GNP/c), but adjusts them for purchasing power parity (PPP).
The adjustment compares the local prices of goods, merchandise and services in a given country with the international prices of the same commodities. By valuing every good and service at the same comparative prices, researchers obtain income measurements adjusted for purchasing power parity (PPP). The results of the model that adjusts GNP/c for PPP differ from those of the model that disregards the adjustment. Data taken from IBRD (2000/01) illustrate the difference. Tables 2 and 1a in that source reveal that "in 1999 Ethiopia's adjusted GNP/c


was 600 P$ for 1999 compared to 100$ without adjustment. The adjusted number for the richest countries is often lower than the non-adjusted one (in Sweden, for example, approximately 21,000 P$ compared to 25,000$)" (as cited in Peter Svedberg, 2001).

Radetzki and Jonsson (2001), Firebaugh (1999) and Schultz (1998), in independent studies, have measured national income both by taking annual national GNP/c figures converted at the official exchange rate of the American dollar and by adjusting GNP/c for variations in purchasing power parity (PPP), holding all other units fixed. Their studies produced different results for the two methods (the adjusted and the unadjusted). Peter Svedberg (2001) reports, "In Radetzki and Jonsson, for example, the ratio increases modestly from 18 in 1960 to 23 in 1995 (compare 3a and 3b in Table 1). Firebaugh (1999) and Schultz (1998) arrive at even more drastic differences in their studies. When using the adjusted income measurements, they find the changes in distribution to be insignificant, or even improved (see below). The unquestionable conclusion is that the choice of income measurement is the major factor in determining which results are produced."

Critical analysis of these measurement methods

In light of the above evidence, one might conclude that the alternative method, which adjusts for purchasing power parity, is more reliable than the method that disregards the adjustment. Indeed, officials in international organizations such as the World Bank and the International Monetary Fund (IMF) not only endorse this method but also use it in their reviews and studies. Several researchers, however, have argued that the alternative method with adjusted GNP/c figures is quite arbitrary and undependable. Peter Svedberg (2001) argues, "…the uniform relative prices by which all countries' production is valued are taken from the USA and hence reflect the production structure there and the preferences of American consumers. It cannot be taken for granted that these preferences (and prices) unambiguously reflect the relative value of goods and services in all other countries. A single alternative uniform relative price structure would produce a partially different ranking of countries in terms of per-capita income."

A further shortcoming of the adjusted GNP/c method is that it has been developed only very recently and the studies conducted so far cover only a small number of countries. If the method is applied to the poorest countries of the world, the results obtained are very different and open to doubt (Radetzki and Jonsson, 2001; Dowrick and Quiggin, 1997).

Svedberg (2001) points out, however, that the unadjusted, exchange-rate-based system of measurement is no more dependable. He writes, "One particular problem with the latter is that the official exchange rates in many countries are distorted (often overvalued).
This is particularly true for the many (poor) countries where the currency is not convertible, the trade barriers are high and widespread, the restrictions on capital movement are manifold, and numerous other market "distortions" exist, whose direct and indirect effects on the exchange rates are large."

Several examples illustrate how implausible the results of the unadjusted GNP/c method, which disregards variations in purchasing power parity (PPP), can be. Data from the IBRD (2000/01) report show that the poorest countries in the world, for example Chad, Somalia, Ethiopia and Haiti, have unadjusted GNP/c in the range of $100 to $200. Relating this to the equivalent GNP/c of America or Europe yields a ratio of about 0.007, which would imply that an average Ethiopian consumes only 0.7% of what an average American or European consumes.
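The effect of the PPP adjustment on such cross-country comparisons can be illustrated with the Ethiopia and Sweden figures quoted above from IBRD (2000/01) via Svedberg. The sketch below is purely illustrative: the function name and the choice of Sweden as the rich-country benchmark are assumptions made here, not part of the article's method.

```python
# Illustrative comparison of unadjusted vs PPP-adjusted GNP per capita ratios,
# using the 1999 Ethiopia and Sweden figures quoted from IBRD (2000/01).
ethiopia = {"unadjusted_usd": 100, "ppp_adjusted": 600}       # P$ = PPP dollars
sweden = {"unadjusted_usd": 25_000, "ppp_adjusted": 21_000}

def income_ratio(poor: dict, rich: dict, key: str) -> float:
    """Poor-country income as a fraction of rich-country income for the chosen measure."""
    return poor[key] / rich[key]

print(f"Unadjusted ratio:   {income_ratio(ethiopia, sweden, 'unadjusted_usd'):.3f}")  # ~0.004
print(f"PPP-adjusted ratio: {income_ratio(ethiopia, sweden, 'ppp_adjusted'):.3f}")    # ~0.029
```

The adjustment raises the implied relative income of the poorest country roughly sevenfold, which is why the adjusted and unadjusted series lead to such different conclusions in the studies by Radetzki and Jonsson, Firebaugh and Schultz discussed above.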


Peter Svedberg (2001) argues, "…the mortality rate resulting from undernourishment and sickness in the poorest of countries would be infinitely higher than what is observed (which is bad enough). With such low per capita income (if they are to be compared with income levels in the Western world) as the non-adjusted GNP/c figures imply (almost regardless of how these incomes are distributed within the country), a population simply cannot survive."

Several researchers have provided persuasive evidence that both the unadjusted and the adjusted GNP/c methods employed by national governments and international organizations to evaluate income disparity in the third world are unreliable, and that better structural theories have to be developed if the issue is to be addressed properly. One such researcher, Pritchett, performed a study in 1997 presenting "…convincing calculations (using three different methods) which suggest that a GNP/c adjusted for purchasing power parity of approximately 250 P$ is the lowest that is consistent with the current mortality estimates. These calculations are consistent with the fact that the average incomes in the 10 poorest countries are in the 350 - 750 P$ range according to the World Bank" (Peter Svedberg, 2001).

Problems with the use of these methods within countries

Schultz (1998), Firebaugh (1999) and Wade (2001) have made earnest efforts to estimate income disparity among households throughout the world, that is, both across and within nations. Schultz (1998) and Firebaugh (1999) were strongly critical of the available within-country results, which they regarded as not only uncertain but absurd, and both conclude that household income disparity measured within nations matters less than disparity measured across nations. They summarize their findings as follows: "(1) the total distribution is more uneven than the one across countries (an obvious conclusion), (2) that the total distribution has remained unchanged during the periods under investigation (from 1960), and (3) the mal-distribution across countries accounts for about ¾ of the total inter-household inequality" (as cited in Peter Svedberg, 2001).

In 2001, however, a study by Wade reached the opposite conclusion, which illustrates the inconsistency of these measurement methods. Wade based his study on two studies carried out by Milanovic (1999) and Dikhanov and Ward (1999) on behalf of the World Bank. Its most important revelation is that the share of the bottom 10% of households in total world income has declined considerably, from 0.88% to 0.64%.

It is worth noting that the methodological problems of household surveys are the same as those of the GNP/c methods.
Furthermore,researchers assert it is unfeasible to comparehousehold surveys across different countries overdifferent time periods.Ideas about the gaps that existWhat is wrong with the present methods beingused?Heston (1994) in his study evaluated theusefulness of evaluating income disparity withGNP/c (both changed and unchanged) methods.His results reveal that the notion of per capitaincome of the poorest nations in the world barelysurpassing the standard level of survival is anoverestimation, by any stretch of imagination.He founds his argument that the income acquiredthrough the agricultural sector has been undervalued by a large margin. Furthermore, it is worthnoting that several studies have concluded that upto 80% of the economies of the third world areunregistered and, therefore, these estimationsdisregard the “black economy” or the illegalsectors of the economy.Gordon (1990) and Moulton and Moses(1997) advance this notion even further. Theirstudy concluded that the illegal and unregisteredbusinesses in the western economies have alsobeen undervalued. Therefore, one can assert withcertainty that, at present, no such method existsthat is capable of assessing the level of incomeA. Popov, S. L. Freinberg - Income Disparity Measurement


disparity in the third world countries. Ironically, the inaccuracy of the data is not restricted to the domain of income disparity: an IBRD report (1999/00, 2000/01, table 4) reveals that the World Bank has revised its population growth figures. Peter Svedberg (2001) explains, "…no comment whatsoever of the underlying reasons is provided by the World Bank. Nevertheless, all the distribution studies reviewed earlier employed the previous, obviously incorrect population statistics, as their basis. If those statistics are completely misleading, which the World Bank suggests, they must have distorted the estimates of income distribution in the world to a great, although unknown, extent."

Theoretical framework

Champernowne (1974) believes that the phenomenon of income disparity throughout the world is more dangerous than the issue of global poverty and disease, precisely because no efficient method of evaluating income disparity exists, whether across or within countries. As a result, confusion and helplessness prevail among the members of the international community who have pinned such high hopes on the beginning of the new millennium.

Interestingly, Robert Went (2004), in his latest study, provides a theoretical framework that may help researchers craft an enhanced mechanism for measuring income disparity in the third world. He believes it is important to have the correct combination of measures, samples and data in order to produce realistic results.

The very first decision a researcher must make before commencing a study is whether to measure income distribution and disparity at market exchange rates or at purchasing power parity (PPP). International organizations argue that PPP is the best basis for evaluation; Went, however, asserts that the choice of method should depend on the objective of the study. He writes (2004) that it is "…entirely sensible to look at income in terms of exchange rates if one wants to measure the relative position of countries in the global economy and their weight in international organizations. Some countries for example have more problems than others in finding the money to fund delegations with negotiators and juridical experts in Geneva to negotiate in the WTO, to use an understatement. Developing countries also cannot pay off their debts with PPP dollars, which would drastically reduce the amount to be paid."

Furthermore, Went argues that global income disparity data can be examined in terms of either absolute or relative income differences. He illustrates the distinction with an example: "If incomes of $100 increase by 10 percent and incomes of $100.000 do too, the relative difference between the two does not change. But the absolute difference increases from 99,900 to 109,890, that is by 10 percent." The same point can be drawn from Ravallion's work. Ravallion (2004: 8) writes, "There is no economic theory that tells us that inequality is relative, or absolute. It is not that one concept is right and one wrong. Nor are they two ways of measuring the same thing. Rather, they are two different concepts. The revealed preferences
The revealed preferencesfor one concept over another reflect implicit valuejudgments about what constitutes a fair division ofthe gains from growth”In addition, Robert Went (2004) argues thatin order to effectively measure income disparitybetween-nations, as well as inside-nations,researchers should evaluate the stats of betweencountrystatistics (for example, growth and trade)and then measure that with the inside-countrystatistics so that a more accurate picture can berevealed.Another very useful theory presented byRobert Went directs the researchers either to counteach nation as “one” (weight) regardless of itspopulation or to consider the population of eachand every country. For example, in the first theory,China and America should be given the same“weight,” while in the second theory both thesecountries should be weighed according to the sizeof their population.School of Doctoral Studies (European Union) Journal - July, 2009 No. 1


Went proposes a further distinction between social disparity and social divergence. Social disparity can be assessed as a mean value derived across the income distribution, whereas social divergence should be calculated by evaluating the relative proportions of the income distribution from top to bottom.

Lastly, Ravallion (2001: 7) distinguishes between measuring income disparity from household surveys and measuring it from private spending per capita in the national statistics. The former do not include "most benefits people get from publicly provided goods and services but are generally considered the least bad approximation of household expenses and incomes" (as cited in Robert Went, 2004), while the latter "include spending on goods and services by unincorporated businesses and non-profit organizations such as charities, religious groups, clubs, trade unions and political parties."

Summary of the theoretical framework

It is important to note that no clear-cut solution to the measurement of income disparity exists, either across or within nations, because income and disparity are diverse conceptions. The present income measurement frameworks (both adjusted and unadjusted GNP/c) are nevertheless extremely inaccurate (see, for example, UNDP 1999 and Korceniewicz and Moran 1997). Effective measures need to be taken at both the academic and the practical level so that newer concepts and theories can be developed, any one of which may turn out to be a useful contribution towards an accurate assessment of income disparity both across and within nations.

When evaluating income disparity, the choice of method should be based on the objective of the study. For instance, if the objective is to evaluate the gap between developed and underdeveloped nations, methods that measure mean ratios should be used, although such methods have obvious limitations: the decision about which group of nations to include is inevitably somewhat arbitrary and, regardless of the definition used, proportional measures make no allowance for income disparity inside the groups of countries considered, or for disparity across the nations lying between the wealthiest and the poorest. This means that such a method does not describe income disparity in the world as a whole. When the objective of the study is to evaluate the income disparity of the world as a whole, a measurement method that takes in the whole distribution over all nations is therefore to be preferred.

Lastly, it is vital to understand that the future comparative distribution of incomes between nations will depend partly on the per capita economic development that particular groups of nations achieve, and partly on comparative population growth in those groups. If income disparities in these nations, which are very large in present circumstances, develop more quickly than those of other nations, there will obviously be a corresponding change in the level of world income disparity.
But if these nations also account for an increasing proportion of the world's inhabitants, this will tend to make the income disparity scenario even more serious. The question, then, is what can be said about anticipated growth rates, of both population and GNP/c, from theoretical perspectives as well as from empirical experience.

Hypothesis

General Questions

A 1: Income disparity in the third world has not worsened only in recent years; rather, the issue dates back to the start of colonialism and the subsequent industrial revolution.

A 2: It is next to impossible to determine the level of income disparity by age, color, gender, class, rural/urban population, economic system, occupation and education using the present system. However, clear directions and strategies


have been presented in this paper which will help researchers make better judgments in the future.

A 3: Without a shadow of doubt, in order to formulate a better structural theory for income disparity measurement, it is important to understand the structural processes behind this phenomenon in the third world.

A 4: It is impossible to craft a better measurement method without understanding the present methods of income disparity measurement.

Specific Questions

A 5 and A 6: Standard methods for determining income disparity can be considered marginally beneficial and valuable; however, it is vital to widen conceptions of income disparity beyond those characteristically measured in debates on this subject. This means crafting a more multi-purpose point of view on income disparity, but also attending to other features, for example income disparity at diverse levels of calculation and over diverse time horizons. Using qualitative as well as quantitative methods may also turn out to be exceptionally important.

Research Design

Operationalization of Variables

Appropriate theories and opinions have been presented to establish not only the seriousness of the situation but also the validity of the arguments, together with a brief overview of the present policies being implemented by governments in collaboration with international organizations. A thorough analysis of these policies then demonstrates their effectiveness, and suggestions for future use are provided so as to evaluate comprehensively the repercussions for both the developing and the developed countries.

Data Collected

The tactic has been to compile the largest possible body of existing information related to income disparity across the third world from articles published in scientific journals and magazines by individual researchers as well as research institutions.

Analysis Plan

The data analysis and search strategy relied on multiple means so as to guarantee the most advantageous totality of facts and statistics available. At the outset, a comprehensive literature search was performed via the internet as well as university and public libraries; in this manner the bulk of the published information relating to income disparity in the third world was identified and compiled. The analytical strategy employed in this paper first establishes the gravity of the situation by reference to the history of income disparity across the centuries, then assesses the usefulness of the present measurement methods used to evaluate income disparity in the third world, and lastly presents structural theories that open new doors to research and development on this subject.

Causal Diagram (Ordinary Least Squares Regression)

Figure 1. GDP per capita at purchasing power parity (PPP) plotted against the Gini index, by regional grouping. (The Gini index takes the value zero for perfect income parity and 100 for perfect income disparity.)


(Figure 1: Gini index, 0 to 80, against GDP per capita at PPP, 0 to 45,000. Source: Poverty and income distribution indicator (KILM 20).)

The graph shows the industrialized countries in dark blue; countries with transition economies in pink; Asian and Pacific countries in yellow; Latin American and Caribbean countries in light blue; Sub-Saharan African countries in purple; and Middle Eastern and North African countries in orange. It is evident that income disparity varies from one country to another, and is particularly marked in regions such as South-East Asia, Latin America, the Caribbean and Sub-Saharan Africa (even where poverty statistics are unavailable). Countries such as the United States, Turkey and Russia are exceptional cases: their level of income disparity is on the high side even though their level of consumption is also extremely high, whereas the poorest countries combine minimal income and consumption levels and the richest countries combine high income with high consumption.

Lastly, it is worth noting that the measurement system used in this study is far from perfect, because it ignores valuable data (such as the number of poor people and the standard poverty level in each country). One can safely conclude that income disparity exists, but it is difficult to assert its exact level and scale because of the measurement systems used to evaluate it. The following tables complement the graph presented above:

Number of people living below $1.08 per day (million)

Region                            1981    1984    1987    1990    1993    1996    1999    2001
East Asia                         795.6   562.2   425.6   472.2   415.4   286.7   281.7   271.3
  of which China                  633.7   425.0   308.4   374.8   334.2   211.6   222.8   211.6
Eastern Europe and Central Asia   3.1     2.4     1.7     2.3     17.5    20.1    30.1    17.0
Latin America and Caribbean       35.6    46.0    45.1    49.3    52.0    52.2    53.6    49.8
Middle East and North Africa      9.1     7.6     6.9     5.5     4.0     5.5     7.7     7.1
South Asia                        474.8   460.3   473.3   462.3   476.2   461.3   428.5   431.1
  of which India                  382.4   373.5   369.8   357.4   380.0   399.5   352.4   358.6
Sub-Saharan Africa                163.6   198.3   218.6   226.8   242.3   271.4   294.3   312.7
Total                             1481.8  1276.8  1171.2  1218.5  1207.5  1097.2  1095.7  1089.0

Number of people living below $2.15 per day (million)

Region                            1981    1984    1987    1990    1993    1996    1999    2001
East Asia                         1169.8  1108.6  1028.3  1116.3  1079.3  922.2   899.6   864.3
  of which China                  875.8   813.8   730.8   824.6   802.9   649.6   627.5   593.6
Eastern Europe and Central Asia   20.2    18.3    14.7    22.9    81.3    97.8    113.0   93.3
Latin America and Caribbean       98.9    118.9   115.4   124.6   136.1   117.2   127.4   128.2
Middle East and North Africa      51.9    49.8    52.5    50.9    51.8    60.9    70.4    69.8
South Asia                        821.1   858.6   911.4   957.5   1004.8  1029.1  1039.0  1063.7
  of which India                  630.0   661.4   697.1   731.4   769.5   805.7   804.4   826.0
Sub-Saharan Africa                287.9   326.0   355.2   381.6   410.4   446.8   489.3   516.0
Total                             2450.0  2480.1  2477.5  2653.8  2763.6  2674.1  2738.8  2735.4

Source: Chen and Ravallion, 2004, p. 31.

Conclusion

It is worth noting that when income disparity is measured at the country level, variations in the scale of economic activity owing to external (uncontrollable) factors, such as natural disasters or poor harvests caused by water shortages, directly influence the results obtained. It is clear from the graph presented above and from the evidence discussed in this paper that the measurement systems in use today produce highly inaccurate results because of their limited flexibility and comprehensiveness.

There is therefore an imperative need to work towards better measurement methods for global income distribution and disparity, methods comprehensive enough to evaluate income disparities accurately across all the relevant independent variables, such as color, gender, rural/urban residence, economic system and occupation. This is clearly not a straightforward endeavor; nevertheless, viable theories have been presented in this paper that deserve the attention, consideration and support not only of non-governmental organizations but also of policy makers and researchers.
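The Gini index shown in Figure 1 and the richest-to-poorest share ratios reported in Table 3 below can both be computed from a sorted income distribution. The sketch below is illustrative only: the ten-person income vector is an assumption, not data from the study, and the discrete mean-absolute-difference formula used for the Gini coefficient is one of several standard variants.

```python
# Illustrative computation of a Gini index (0 = perfect parity, 100 = perfect disparity)
# and a richest-10%-to-poorest-10% ratio from individual incomes (assumed numbers).

def gini_index(incomes):
    """Gini coefficient from the mean absolute difference between all pairs, scaled to 0-100."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    mad = sum(abs(a - b) for a in xs for b in xs) / (n * n)  # mean absolute difference
    return 100 * mad / (2 * mean)

def top_to_bottom_decile_ratio(incomes):
    """Total income of the richest 10% divided by that of the poorest 10%."""
    xs = sorted(incomes)
    k = max(1, len(xs) // 10)
    return sum(xs[-k:]) / sum(xs[:k])

sample = [120, 300, 450, 800, 900, 1_500, 2_200, 3_500, 6_000, 15_000]  # hypothetical incomes
print(f"Gini index:              {gini_index(sample):.1f}")
print(f"Richest/poorest decile:  {top_to_bottom_decile_ratio(sample):.1f}")
```

Published tables such as those reproduced here usually work from grouped data, reporting the income shares of the poorest and richest 10% and 20% alongside the Gini index, but the underlying quantities are the same.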


Table 1. World population and total poverty. Columns: HDI rank; human poverty index (HPI-1) value and rank (%); probability at birth of not surviving to age 40 (% of cohort, 2000-05); adult illiteracy rate (% ages 15 and above, 2002); population without sustainable access to an improved water source (MDG); children underweight for age (% under age 5, 1995-2002, MDG); population below the income poverty line ($1 a day, $2 a day and the national poverty line, 1990-2002); HPI-1 rank minus income poverty rank.


Table 2. Education spending in relation to the total GDP and total expenditure of the governments.


Table 3. The percentage of income sharing between the richest and the poorest

[Country-level data table, HDI ranks 1–177 (High, Medium and Low Human Development). Columns: HDI rank and country; survey year; share of income or consumption (%) of the poorest 10%, poorest 20%, richest 20% and richest 10%; MDG inequality measures (ratio of richest 10% to poorest 10%, and of richest 20% to poorest 20%); Gini index. Values range from comparatively equal distributions (e.g. Japan 1993, Gini 24.9; Norway 2000, Gini 25.8) to highly unequal ones (e.g. Brazil 1998, Gini 59.1; Namibia 1993, Gini 70.7). Countries without survey data are marked "..".]
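The ratio and Gini columns in Tables 2 and 3 are simple transformations of the reported income shares. The sketch below (Python) shows how the two MDG ratio measures follow directly from the published shares, using Norway's row from Table 3, and how a Gini index can be approximated from grouped shares with the trapezoidal Lorenz-curve rule; the function names are illustrative, and the grouped-data approximation is not the method behind the published Gini figures, which are computed from full survey microdata.

```python
# Inequality measures from grouped income shares (illustrative sketch only).

def share_ratios(poorest10, poorest20, richest20, richest10):
    """MDG ratio measures reported in Tables 2 and 3:
    richest 10% to poorest 10%, and richest 20% to poorest 20%."""
    return richest10 / poorest10, richest20 / poorest20

def gini_from_shares(shares):
    """Approximate Gini index from n equal-population groups whose income
    shares are given in ascending order (fractions summing to 1), using the
    trapezoidal rule on the Lorenz curve.  Grouped data understate the Gini
    computed from full microdata."""
    n = len(shares)
    gini, cum_prev = 1.0, 0.0
    for s in shares:
        cum = cum_prev + s
        gini -= (cum + cum_prev) / n
        cum_prev = cum
    return gini

# Norway, survey year 2000 (Table 3): poorest 10% = 3.9, poorest 20% = 9.6,
# richest 20% = 37.2, richest 10% = 23.4 (percent of income or consumption).
r10_10, r20_20 = share_ratios(3.9, 9.6, 37.2, 23.4)
print(round(r10_10, 1), round(r20_20, 1))   # ~6.0 and 3.9; the published 6.1
                                            # reflects rounding of the shares.

# Hypothetical ascending quintile shares for a fairly equal distribution:
print(round(100 * gini_from_shares([0.10, 0.14, 0.18, 0.24, 0.34]), 1))  # 23.2
```

Because only four grouped shares are published per country, a value such as Norway's Gini of 25.8 cannot be reproduced exactly from the table alone.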


Table 4. Gender related income distribution

[Country-level data table, HDI ranks 1–177 (High, Medium and Low Human Development). Columns: HDI rank and country; Gender-related Development Index (GDI) rank and value; life expectancy at birth (years, 2002), female and male; adult literacy rate (% age 15 and above, 2002), female and male; combined gross enrolment ratio for primary, secondary and tertiary level schooling (%, 2001/02), female and male; estimated earned income (PPP US$, 2002), female and male; HDI rank minus GDI rank. Countries without data are marked "..".]
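The GDI column in Table 4 combines female and male attainments so that gender gaps pull the index down. The sketch below (Python) illustrates the "equally distributed equivalent" aggregation used in the HDR methodology of that period, assuming an inequality-aversion parameter of 2 and equal female and male population shares; the function name and the sample indices are illustrative assumptions rather than figures taken from the table.

```python
def equally_distributed_index(idx_female, idx_male,
                              share_female=0.5, share_male=0.5, eps=2.0):
    """Combine female and male component indices into one gender-penalised
    index.  With eps = 2 this is a population-share-weighted harmonic mean,
    so any gap between the sexes pulls the result below the simple average
    (sketch of the pre-2010 HDR aggregation, not taken from the article)."""
    return (share_female * idx_female ** (1.0 - eps)
            + share_male * idx_male ** (1.0 - eps)) ** (1.0 / (1.0 - eps))

# Hypothetical component indices for a single dimension (e.g. earned income):
print(round(equally_distributed_index(0.60, 0.80), 3))  # 0.686, below the 0.70 mean
# The GDI itself is then the unweighted average of the three equally
# distributed dimension indices (longevity, education, income).
```

The final column of Table 4 (HDI rank minus GDI rank) then shows how far this gender penalty shifts a country relative to its gender-blind HDI ranking.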


Table 5. Occupation and unemployment rates

[Data table for selected High and Medium Human Development countries plus the OECD aggregate. Columns: HDI rank and country; unemployed people (thousands, 2002); total unemployment rate (% of labour force, 2002); average annual unemployment rate (% of labour force, 1992–2002); female unemployment rate (% of male rate, 2002); youth unemployment rate (% of labour force ages 15–24, 2002); female youth unemployment rate (% of male rate, 2002); long-term unemployment (% of total unemployment), women and men, 2002.]




European Union Regional Policy

Jean Malais (MPhil)
Master of Philosophy and Candidate for PhD in Economics at the School of Doctoral Studies, Isles Internationale Université (European Union)

Dr. Henk Haegeman (PhD)
Chairman of the EU Analogue Standards Certification Committee at the Isles Internationale Université (European Union)

Abstract

An examination of the goals and operation of a European Union regional policy to address income inequality among member regions. A broadly held view is that regional planning in Europe has developed within very distinctive legal and administrative frameworks: British, Napoleonic, Germanic, Scandinavian and East European. In most of continental Europe, and especially within the federal states, local and regional authorities are regarded as possessing a general power over the affairs of their communities; in the United Kingdom, by contrast, local authorities provide public services at the local level largely as agents of central government, and their capacity to do so depends on the powers conferred upon them by the centre.

Key words: European Union, Economics, Policy, Regional Perspective, Income Inequality, Equality Assessment


The European Union (EU), although among the most affluent regions of the world, is marked by striking internal inequalities of income and opportunity between its regions. The accession of 12 new member states since 2004, with incomes far below the EU average, has widened these differences. A regional policy therefore helps to transfer resources from rich areas to poor ones. It is not only an instrument of financial solidarity but also a powerful force for economic integration. Forging a regional policy in the EU supports solidarity, since the policy is targeted at citizens and areas that are economically and socially disadvantaged relative to EU averages, and it promotes cohesion, since everybody stands to benefit from narrowing the income and wealth differences between the poorer nations and regions and those that are better off.

There are huge differences in levels of affluence between and within member states. The ten most dynamic areas of the EU enjoy a level of affluence, measured by GDP per capita, almost three times that of the ten least developed areas. The most prosperous areas are all urban, including London, Brussels and Hamburg. (Overview of the European Union activities regional policies, 2007)

The impact of EU membership, in concert with an active regional policy, has brought practical results. Ireland is the case in point: its GDP, which stood at about 64% of the EU average when it joined in 1973, is nowadays among the highest in the Union. A central post-2004 concern in framing regional policy is to bring living standards in the new member states closer to the EU average as early as possible. The EU has managed a substantial regional development policy since 1975, relocating funds from the well-heeled member states to poorer nations and regions through the EU's Structural Funds. Spending from these funds accounted for nearly a third of the EU budget in the period 2000-2006. Among the larger beneficiaries were Greece, Spain, Portugal, Ireland, southern Italy and the eastern regions of Germany. The EU has used the entry of the new member states to reorganize and restructure its regional spending, with new rules applicable for the period from 2007 to 2013. (Overview of the European Union activities regional policies, 2007)

Over this period regional spending is set to climb to 36% of the EU's budgetary provisions, which in cash terms amounts to 308 billion euros over seven years. The primary aim is to promote growth-enhancing conditions for the EU economy as a whole, focused on the three objectives of convergence, competitiveness and cooperation; the new approach is called Cohesion Policy. The accession of the comparatively poor new members implies that the main concentration in the forthcoming period will be on them and on the areas of other EU states with special needs. On present estimates, the 12 newly admitted nations will receive 51% of net regional spending in the years between 2007 and 2013, even though they represent less than a quarter of the total population. (Overview of the European Union activities regional policies, 2007)

EU policies are only as workable as the member states are willing to make them. Member states span the policy process, shaping the quality of the policies made in Brussels and their consequences as they are implemented across Europe. Their wishes, their points of agreement and their priorities blend to constitute potent forces in the design of EU policy. The dynamics of EU policymaking turn on a search for consensus among the member nations. Agreement becomes increasingly elusive when member states have widely different traditions or practices relevant to the policy sphere: the more diverse the behaviours to be regulated, the harder it becomes to design transparent regulations. (Roberts; Springer, 2001, p. 27)

Whereas policymaking is a collective action of the member states, policy implementation is an individual action by each of them, and that action is coloured by each country's own culture and legal system. As these individual actions are supposed to deliver homogeneous results, the differences and resemblances among the member states become important factors to consider in the study of EU regional policy. The researchers who took part in a major study of EU regional policy agreed that every member state has in place its own pattern of implementation, emanating from the norms linked with executing national laws. (Roberts; Springer, 2001, p. 27)

The requirement for a European regional policy has evolved with the integration process and the broadening of the Union. The Werner Report of 1971 backed the movement towards economic and monetary union by 1980, and observed that continuing regional disparities within the EU would weaken the achievement of European Monetary Union. There was also apprehension that further integration would itself trigger more disparities between the central and peripheral areas. It was therefore necessary to put a regional policy in place to promote convergence among the European regions and to guarantee that EU integration would not leave some regions excluded. Such apprehension about rising disparities became more acute with the accession of the Republic of Ireland and the United Kingdom in 1973, which was soon followed by the creation of the European Regional Development Fund (ERDF) in 1975. The EC considers its regional policy not as mere transfers but as a tool to underpin the economic base of the recipient areas and to foster regional convergence. (Bouvet, 2006, pp. 3-4)

The Regional Policy, a creation of the EU's acknowledgment of the economic disparities between the central and peripheral areas of what was then the European Community, has risen in political and economic significance since its beginning in 1975. From the outset, the Regional Policy was planned not just to minimize regional economic disparities but also to strengthen regional and national support for European integration in general, so as to foster cohesion both in the EU and in its regions. During the initial decade of the Regional Policy, the EU gave funds to the member state governments, and these authorities decided, within some constraints, on the best ways to apply the money; the Commission acted essentially as a funding body. Changes in 1988 established the ideal of partnership in governance. (Wilson, 2000, p. 34)

In effect, this reform imposed a powerfully homogeneous regulatory perspective on very diverse national contexts by demanding that the Commission, national governments and the authorities at sub-national level cooperate in the design and execution of EU regional programs in both the short and the long term. The restructuring was a holistic endeavour to usher in the regional authorities as active, if not equal, partners with the EU and their state governments in lowering regional disparities throughout Europe. It also sought to make the action of the EC more transparent in the member states, as a reaction to the perceived absence of accountability in the EU's democratic deficit; the Commission therefore decided to involve local actors in the decision making that matters for regional development. The reforms were also planned to guarantee that the national governments were using EU money as a supplement to, and not a replacement for, national development funding.
During the late 1990s this came to be known in Eurospeak as the problem of "additionality": EU regional funding must never be spent instead of national funds, but only as a supplement to them. (Wilson, 2000, p. 34)

Regional policy differs in many respects from the other policy areas on the social agenda of the European Community (EC). For instance, it is the sole policy under which funds are required to be disbursed in favour of its clients, and it is the only one for which the EC has created a new institution, the Committee of the Regions. In many respects, nevertheless, regional policy fits the profile of the other policy areas on the social agenda: it has been built to fulfil social requirements and it joins the EC with its citizens. Hence, a successful regional policy would help to enhance the social legitimacy of the EC. The framing of EC regional policy caters to several objectives: political, economic and social. Its beginnings originate in the economic models of the 1950s and 1960s and in the political bargains struck during the negotiations for the EEC. National economic planning was widely accepted in postwar Europe, with French indicative planning fostering a popular model of government and the private sector joining hands to modernize the economy. Several European economists considered that public policy and public money could be combined to shape a more rational and more equitable economic system. (Springer, 1994, p. 72)

The significance of European public policy for the EU member states has grown over the last fifteen years. In particular, the relationship between national and sub-national government has changed a great deal. The "Europe of the Regions" is no longer a buzzword but a vital reality in the EU. The European Commission found in the regional governments a crucial supporter in fostering the Single European Market (SEM), and in doing so it lessened the resistance of many national governments to implementing the SEM and Economic and Monetary Union (EMU); this partnership has been the strong point of European regional policy. The keystone of the process of regionalization across the EU is a rising conviction that thriving economic development in an area is functionally linked to its institutions, that is, to the network of associations supposed to support business innovation. The primary institutions of governance, whether the European Commission itself, the member states or the regions inside them, act as if business development were in general functionally associated with the mesh of regional institutions, a conviction connected with the findings of social scientific research. In any event, the impact of the EU on every aspect of regional policy has been distinctly visible for some years, and particularly more recently in the European Spatial Development Perspective (ESDP), on which there was a general consensus in the Council of Ministers. (Magone, 2003, p. 114)

At this juncture, it is important to ask whether trade and monetary integration in Europe carries the threat of gaping inequalities within and between areas. Judged by the investments committed to regional policies in Europe, which now represent a third of the Community budget and constitute the second biggest item after the common agricultural policy, the answer given by the governments as well as the EC is a resounding "yes". The rapid growth of spending on regional policy has been under way since the accession of Portugal and Spain, which, together with the earlier inclusion of Greece, broadened the income disparities between the affluent and the poor nations of what was then the EC. The negotiations over the inclusion of the two Iberian nations led to a rise in the resources devoted to regional policies from ECU 3.7 billion in 1985 to ECU 18.3 billion in 1992. The actual investments committed to regional policies in these nations were considerably greater, since the EU requires that its transfers be matched by national spending. National regional policies have also been important in countries such as Italy, France and Germany.
The widening of the EU to the Central and Eastern European nations, whose per capita GDP levels are considerably lower than those of the four cohesion countries, will require an important revamping of European regional policies. (Martin, 1999, pp. 1-2)

The Regional Policy of the EU is founded on financial solidarity between the member states, whose contributions to the Union budget are directed to the less affluent regions and social groups. For the period 2000-2006 these transfers amount to a third of the EU budget, which in absolute terms comes to 213 billion euros: 195 billion euros to be spent through the four Structural Funds and 18 billion euros through the Cohesion Fund. The important contribution of the Structural Funds is that they finance multi-annual programs constituting strategies programmed in partnership between the regions, the member states and the EC. The primary objectives of these programs are to (i) develop infrastructure, such as transport and energy; (ii) extend telecommunications services; (iii) provide assistance to firms and training to workers; and (iv) disseminate the tools and knowledge of the information society. (EU Regional Policy after enlargement, 2003)

The primary instrument of EU Regional Policy, the Structural Funds, is organized around three objectives. The first is to promote the development and structural adjustment of areas where growth has not been satisfactory. The second is to support the economic and social conversion of areas encountering structural problems. The third is to assist the adaptation and modernization of policies and systems of education, training and employment. The NUTS 2 regions eligible for Objective 1 are those with a GDP per capita lower than 75% of the EU average. The Cohesion Fund provides an additional 18 billion euros over the period 2000-2006, in this case for nations such as Greece, Ireland, Portugal and Spain. Although the European regional funds are expressed under various purposes, it is not always clear what those objectives comprise. The first and fundamental issue for a good design of regional policies is therefore to define the objectives distinctly. The decision to be made is whether what is wanted is homogenization across space of some aggregate measures, such as per capita income, unemployment or employment rates, or health and education indicators. (Puga, 2001, p. 50)

Beyond that, the issue remains whether the first objective is personal fairness, that is, similar people having similar opportunities in different regions. Making the aims of regional policies clear therefore constitutes the first step; only then can one search for the optimal policies to attain those objectives. Before considering the possible tools, this requires a decision on the direction of intervention: is the amount of regional heterogeneity that would arise in the absence of regional policies too high or too low? The general assumption is that policy should seek to lower regional disparities by concentrating on the poorer nations; nevertheless, the degree to which this should happen is not evident. (Puga, 2001, p. 50)

Whatever the case, inconsistencies exist between European regional policy and national state-aid policies. Within a regional competition perspective, a distinction can be drawn between the macro and the micro level. On the macro level, which includes for instance infrastructure and education, the European outer fringes are at a distinct disadvantage compared with the core member states. On the micro level, which concentrates on direct support to the productive sector, European state-aid regulation establishes a hierarchy: under Article 92 of the EC Treaty, less favoured countries and regions facing industrial decline are permitted to use region-specific state aid so as to attract mobile factors of production. (Steinen, 1991, p. 30)

A broadly held view is that regional planning in Europe has developed within very distinctive legal and administrative frameworks: British, Napoleonic, Germanic, Scandinavian and East European.
In most of continental Europe, especially within the federal states, local and regional authorities are viewed as possessing a general power over the affairs of their communities; in the United Kingdom, by contrast, local authorities provide public services at the local level largely as agents of central government, and their capacity to do so depends on the powers conferred upon them by the centre. (Balchin; Sykora; Bull, 1999, p. 91)

For the effective functioning of the regional policy, the need of the hour is the appointment of a Director General who will have (i) a good understanding of cohesion policy; (ii) good overall experience of financial, budgetary and administrative management; (iii) experience in project and/or programme preparation and management (useful); (iv) superb communication and negotiation skills; (v) an established capacity for managing, coordinating and motivating a team; and (vi) a good working knowledge of both English and French (an asset). (DG Regional Policy, 2004, p. 31)
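The Objective 1 rule described above is mechanical: a NUTS 2 region qualifies when its GDP per capita falls below 75% of the EU average. A minimal sketch of that classification is given below (Python); the region names and figures are hypothetical.

```python
# Classify NUTS 2 regions against the Objective 1 threshold (GDP per capita
# below 75% of the EU average).  Region names and figures are hypothetical.

EU_AVERAGE_GDP_PC = 21_000                 # assumed EU-average GDP per capita
THRESHOLD = 0.75 * EU_AVERAGE_GDP_PC

regions = {
    "Region A": 12_500,
    "Region B": 19_800,
    "Region C": 27_300,
}

for name, gdp_pc in regions.items():
    eligible = gdp_pc < THRESHOLD
    status = "Objective 1 eligible" if eligible else "not eligible"
    print(f"{name}: {gdp_pc / EU_AVERAGE_GDP_PC:.0%} of EU average -> {status}")
```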


References

Balchin, Paul; Sykora, Ludek; Bull, Gregory. 1999. The Regional Policy and Planning in Europe. Routledge.

Bouvet, Florence. 2006. European Union Regional Policy: Allocation Determinants and Effects on Regional Economic Growth. Department of Economics, University of California. [Online]. Available: http://www.econ.ucdavis.edu/graduate/fbouvet/job_market.pdf [25 December 2007].

Magone, Jose M. 2003. Regional Institutions and Governance in the European Union. Praeger.

Martin, Philippe. 1999. Are European Regional Policies Delivering? [Online]. Available: http://team.univ-paris1.fr/teamperso/martinp/eibpaper.pdf [25 December 2007].

N. A. 2004. DG Regional Policy. Journal of the European Union. 31-33. [Online]. Available: http://eur-lex.europa.eu/LexUriServ/site/en/oj/2004/ca234/ca23420040921en00310033.pdf [25 December 2007].

N. A. 2003. EU Regional Policy after enlargement. [Online]. Available: http://www.euractiv.com/en/enlargement/eu-regional-policyenlargement/article-117535 [25 December 2007].

N. A. 2007. Overview of the European Union activities regional policies. [Online]. Available: http://europa.eu/pol/reg/overview_en.htm [25 December 2007].

Puga, Diego. 2001. European Regional Policies in the Light of Recent Location Theories. Journal of Economic Geography. 18(1): 45-51.

Roberts, Ivor; Springer, Beverly. 2001. The Social Policy in European Union: Between Harmonization and National Autonomy. Boulder.

Springer, Beverly. 1994. The European Union and its Citizens: The Social Agenda. Greenwood Press.

Steinen, Mathias Schulze. 1991. State Aid, Regional Policy and Locational Competition in the European Union. European Urban and Regional Studies. 4(1): 19-31.

Wilson, Thomas M. 2000. Obstacles to European Union Regional Policy in the Northern Ireland Borderlands. Human Organization. 12(2): 33-38.


Sales and Advertisement Relationship for Selected Companies Operating in India: A Panel Data Analysis

Dr. Suparn Sharma
Dr. Jyoti Sharma

Running Head: Sales and Advertisement Relationship

Abstract:

The study examines the growth pattern and trend of sales and advertisement expenses for selected companies over the period from 1992-93 to 2006-07. It further seeks to evaluate the effectiveness of advertisement expenses on the sales of selected companies operating in India at aggregate and disaggregate levels, and to analyse the behaviour of the share of advertisement expenses in total sales for the categories examined. The study is based on panel (pooled) secondary data on advertisement expenditure and sales revenue for 134 randomly selected sample companies operating in India over the period 1992/93 to 2006/07, which are further classified both by the amount of sales revenue and by the type of product produced. A fixed-effects approach, with and without dummy variables, is applied to the panel data to evaluate the effectiveness of advertisement expenses on sales; annual compound growth rates and summary statistics are also estimated. The study finds that the growth rate of sales revenue is highest for manufacturing companies and for companies whose sales revenue exceeds 1000 crore, in spite of the negative compound growth rate of the advertising expenses of these two types of companies. Further, in the case of non-manufacturing companies, it has been found that these companies are less popular among consumers and also spend less on advertisements compared with manufacturing companies. In answering how much needs to be spent on advertising, the study concludes that to a large extent it depends on the nature and size of the industry.

Paper Classification: Research Paper

Key Words: Sales Revenue, Advertisement Expenses, Panel Data, Manufacturing and Non-Manufacturing


84School of Doctoral Studies (European Union) JournalJulyAdvertising is a prominent feature ofmodern business operations. One can encounteradvertising messages, while watching TV, readingmagazines, listening to the radio, surfing theinternet, or even simply while walking down thestreet, as advertisement has a stimulating influenceon purchasing behaviour of the customer. Thismammoth surge of advertisements from everypossible source is basically to fulfil the urge ofmarketers to reach to a large number of people sothat their product may receive optimum exposure.The role of this mass mode of communicationin creating brand loyalty, deterring entry andconsequently increasing sales revenue and profitsof the organisation and causing impact on thebusiness cycle has been emphasised at variouspoints of time by different studies (Robinson,1933; Kaldor, 1950; Nelson, 1974; Ozga, 1960;Stigler, 1961; Sundarsan, 2007). Broadly therole of advertising expenses in an economy canbe classified under two heads. According to oneschool of thought, advertising increases profitsand reduces consumer welfare by creatingspurious product differentiation and barriers toentry. While the other school of thought focuseson the informative character of advertising, whichmakes markets more competitive and reducesprofits by informing the customers about pricesand quality (Greunes et al, 2000). Inspite of theabove mentioned segregation, one cannot deny thefact that ultimate function of advertising expensesis to promote sales revenue. That is why everyorganisation with the expectation of earning returnis investing millions of rupees or dollars on thismode of marketing communication.Hence, in pursuit of their ultimate objectiveof increasing sales, every endeavour of eachmarketer is to make this mode of sales generationmore effective. But advertisement effectivenessconveys different meanings to different groups.To the writer or artist, effective advertising is thatwhich communicates the desired message. Whileto the media buyer, effective advertising is thatwhich reaches to prospective buyers a sufficientnumber of times. However to the advertising ormarketing manager, effective advertising is thatwhich, together with other marketing forces, sellshis brand or product. Whereas according to thegeneral manager, effective advertising producesa return on his firm’s expenditure. Infact to beeffective the advertising must achieve the goalof delivering messages to the right audience andthereby creating sales at a higher profit.The subject of advertisement has remained atopic of debate either on one pretext or anotherfor decades. At the beginning of 19 th century,though, it was a subject of little interest to themajor researchers, but it became a fertile topicfor economic research at the turn of 19 th centuryduring which, on one side its constructive role inproviding information to customers to satisfy theirwants at lower cost was recognised and on the othera wasteful confrontational role by offering littleinformation and doing redistribution of customersfrom one firm to another was acknowledged.Various studies have been conducted to assessthe different aspects of relationship betweenadvertisements and sales at different points oftime. A brief review of the studies relating todifferent dimensions of interrelationship of salesand advertisement is presented in the forthcomingparagraphs.Review of Selected LiteratureThe economic effects of advertisementexpenses has been a much debated topic andstudied widely at different points of time. 
Verdon etal (1968) while studying the relationship betweenadvertising and aggregate demand found thatadvertising have a positive relation with aggregatedemand. However, Ekelund and Gramm (1969)analysed the relationship between advertisingexpenditure and aggregate consumption but couldnot establish any positive relationship betweenthese two. Similarly, Taylor and Weiserbs (1972)studied the relationship between advertisingexpenditure and aggregate consumption on thebasis of Houtakker-Taylor model and showed thatadvertising affects aggregate consumption and therelationship between advertising and consumptionis not found to be unidirectional but simultaneous.Jagpal (1981) while applying the multiproductadvertising sales model to a commercial bank foundSchool of Doctoral Studies (European Union) Journal - July, 2009 No. 1


that radio advertising was relatively ineffective in stimulating sales of the joint outputs (the number of savings and checking accounts). Sachdeva (1988), studying the trends in advertisement expenditure of India's large corporate bodies, stated that foreign-controlled companies single-handedly accounted for a dominant share of advertisement expenditure; consumer goods producing organisations controlled by foreign companies have emerged as one of the most important contributors to the advertisement budgets of the corporate world. Another study, by Leong et al (1996), using cointegration techniques, found a strong positive relationship between advertising expenditure and sales. Similarly, Lee et al (1996) found that the variables of advertising and sales are not only integrated of the same order but also cointegrated; the results explicated that the causal relationship between advertising expenses and sales works in both directions. Leach and Reekie (1996) analysed the effect of advertising on the market share of a brand using variants of the Koyck distributed lag model; the results of the Granger causality test showed that advertising expenses caused sales but sales did not simultaneously cause advertising. Elliot (2001) revealed that advertising has a significant positive effect on food industry sales and that this relationship between advertising expenditure and sales appears to be stable. Pagan et al (2001) studied the effectiveness of advertising on sales using a bivariate vector autoregression model and showed that a one-time increase in advertising expenditure leads to an increase in the sales of oranges with a one-month lag; it was also found that the impact of advertising expenditure on grapefruit sales is more immediate and relatively large. While analysing the relationship between a company's advertising expenditure and its sales during a recession, Kamber (2002) found a measurable relationship between advertising expenditure and sales, even after controlling for other factors such as company size and past sales growth. Guo (2003) examined the relationship between advertising and consumption at the macro level using US data on advertising expenditure, personal consumption and disposable income; using unit root tests and cointegration analysis, the study substantiated the existence of cointegration among the variables, which reveals the presence of a long-term equilibrium relationship among them. Sundarsan (2007) evaluated the effectiveness of advertising on sales for small and large firms and for multinational corporations; the results showed that advertising has influenced sales, though its relative effectiveness was not the same for all categories of firms.

The above review divulges that there is no consensus on the economic effects of advertising expenses on sales revenue. Different studies have shown diverse results, although, in general, the majority of studies have indicated a positive relationship between the two. Most of the studies have used time series data to capture the long-term effects of advertising on sales. It is, however, important to know the effects of advertising expenses on sales revenue for the Indian corporate sector. Moreover, the question of the extent to which advertising's persuasive character works to alter consumers' wants, and consequently sales, has received scant attention. With this backdrop, the present study has been designed to find out the extent to which advertisement expenses affect sales revenue.
More specifically, the objectives of the study are to examine the growth pattern and trend of sales revenue and advertisement expenses for the selected companies operating in India. Further, the present contribution aims to evaluate the effectiveness of advertisement expenses on sales revenue for the selected companies at the aggregate as well as the disaggregate level. The study will also try to analyse the behaviour of the share of advertisement expenses in total sales revenue for the above mentioned categories.

In consequence of the above mentioned objectives, the study has been divided into three sections. The database and research methodology are discussed in Section I. Section II attempts to study the relationship between the sales revenue and advertisement expenses of randomly selected companies operating in India, while the summary, conclusions and implications of the study are presented in Section III.


Database and Methodology

The present study is based on the advertisement expenditure and sales revenue data of 134 randomly selected sample companies operating in India (Annexure-I). The secondary data used in the present study are panel (pooled) in nature and were collected from PROWESS (2008) of the Centre for Monitoring Indian Economy, New Delhi, India, for the period 1992/93 to 2006/07. The term advertisement expenditure, here, includes expenses on advertisement as well as all other expenditure incurred under the head of marketing expenses, such as publicity and promotion expenses.

In this panel data based study, the Fixed Effect approach with and without dummy variables is applied to evaluate the effectiveness of advertisement expenses on sales revenue. Further, annual compound growth rates (ACGR) and summary statistics are also estimated. In order to study the behaviour of different categories of companies, the randomly selected companies operating in India are classified on the basis of sales revenue and type of product produced. Accordingly, Type-1 companies are those whose sales revenue is less than Rs. 1000 cr and Type-2 companies are those whose sales revenue is more than Rs. 1000 cr. Further, Type-3 companies are manufacturing companies and Type-4 companies are non-manufacturing companies, which include a combination of agro, food and service based units, etc.

MODEL-1 Simple Fixed Effect Model

LnS_it = a_0 + b_1 LnA_it + u_it

where S = sales revenue, A = advertisement expenses, Ln = natural logarithm and u_it = stochastic term.

MODEL-2 Differential Intercept Dummy Variable Model

The present model is used to study the sales and advertisement relationship for the Type-1 and Type-2 as well as the Type-3 and Type-4 companies. In the intercept dummy variable model, the slope coefficient (b_1) is assumed to be the same for the two groups. The hypothesis to be tested here is that there is no difference in the relationship between the groups (Ramanathan, 2002).

LnS_it = a_0 + a_1 D_1 + b_1 LnA_it + u_it, with D_1 = 0 for Type-1 companies and D_1 = 1 for Type-2 companies.

LnS_it = a_0 + a_1 D_2 + b_1 LnA_it + u_it, with D_2 = 0 for Type-3 companies and D_2 = 1 for Type-4 companies.

MODEL-3 Differential Slope Dummy Variable Model

In this model, the possibility that the slope coefficient (b_1) may differ across types of companies is studied. It is assumed that the intercept term a_0 is unchanged. Since the intercept term is assumed to be the same, the regression lines start from the same point but may have different slopes.

LnS_it = a_0 + b_1 LnA_it + b_2 (LnA_it * D_1) + u_it, with D_1 = 0 for Type-1 companies and D_1 = 1 for Type-2 companies.

LnS_it = a_0 + b_1 LnA_it + b_2 (LnA_it * D_2) + u_it, with D_2 = 0 for Type-3 companies and D_2 = 1 for Type-4 companies.
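As a concrete illustration of this classification and of the variables entering Models 1-3, the short sketch below builds the two dummies and the log transforms from a hypothetical firm-level table; the column names and figures are assumptions made for illustration and are not drawn from the PROWESS sample.

    import numpy as np
    import pandas as pd

    # Hypothetical firm-year records: 'sales' and 'adv' in Rs. crore.
    df = pd.DataFrame({
        "firm":   ["A", "B", "C", "D"],
        "sales":  [850.0, 2400.0, 14200.0, 640.0],
        "adv":    [11.0, 35.0, 310.0, 6.5],
        "sector": ["manufacturing", "manufacturing", "services", "agro"],
    })

    # D1 = 1 for Type-2 companies (sales revenue above Rs. 1000 cr), else Type-1.
    df["D1"] = (df["sales"] > 1000).astype(int)
    # D2 = 1 for Type-4 (non-manufacturing) companies, else Type-3 (manufacturing).
    df["D2"] = (df["sector"] != "manufacturing").astype(int)

    # Natural-log transforms used as LnS and LnA in Models 1-3.
    df["lnS"] = np.log(df["sales"])
    df["lnA"] = np.log(df["adv"])
    print(df[["firm", "D1", "D2", "lnS", "lnA"]])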


Results and Discussion

The basic characteristics of the variables under study are delineated in Table 1 in the form of summary statistics. The average expenditure on advertisement is Rs. 8684.26 cr and the average sales revenue is Rs. 383509.45 cr during the period under study. Further, the figures highlight that high variation has been experienced in the sales revenue of the randomly selected 134 companies, whereas in the case of advertisement expenditure the variation is comparatively low.

TABLE 1. SUMMARY STATISTICS OF SALES REVENUE AND ADVERTISEMENT EXPENSES (Amount in Rs. Cr.)

Summary Statistics          Sales Revenue     Advertisement Expenses
Mean                        383509.45         8684.26
Median                      336491.26         8536.23
Minimum                     102746.70         5450.95
Maximum                     927601.79         12474.06
Standard Deviation          240470.21         2059.54
Coefficient of Variation    62.71             23.72
No. of Firms                134               134

Source: Calculated from the data available at Prowess (2008), CMIE.

TABLE 2. SUMMARY STATISTICS OF SALES REVENUE AND ADVERTISEMENT EXPENSES FOR TYPE-1 AND TYPE-2 COMPANIES (Amount in Rs. Cr.)

Summary Statistics          Type-1 Sales    Type-1 Adv. Exp.    Type-2 Sales    Type-2 Adv. Exp.
Mean                        16274.96        546.51              367234.49       8137.76
Median                      17388.33        650.66              318720.74       7859.29
Minimum                     7994.96         198.89              94751.74        7632.71
Maximum                     23932.10        780.18              903669.69       11693.88
Standard Deviation          4534.13         194.07              236385.16       2124.30
Coefficient of Variation    27.86           35.51               64.37           26.11
No. of Firms                79              79                  55              55

Source: Same as in Table 1.

TABLE 3. SUMMARY STATISTICS OF SALES REVENUE AND ADVERTISEMENT EXPENSES FOR TYPE-3 AND TYPE-4 COMPANIES (Amount in Rs. Cr.)

Summary Statistics          Type-3 Sales    Type-3 Adv. Exp.    Type-4 Sales    Type-4 Adv. Exp.
Mean                        366708.22       8208.97             16801.23        475.29
Median                      317117.55       7910.52             19373.71        496.26
Minimum                     97460.53        4914.55             5286.17         127.79
Maximum                     905304.80       11602.15            22296.99        871.91
Standard Deviation          190635.27       2092.81             5123.45         243.55
Coefficient of Variation    51.99           25.49               30.49           51.24
No. of Firms                109             109                 25              25

Source: Same as in Table 1.

The summary statistics for the different categories of companies show that average sales are higher for Type-2 companies, i.e. the companies whose sales revenue is greater than Rs. 1000 cr. Further, mean expenditure on advertisement is higher for Type-3 companies, i.e. the manufacturing companies. The coefficient of variation of sales revenue is highest for Type-2 companies, while for advertising expenses it is higher for Type-4 companies, i.e. the non-manufacturing companies.
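For reference, the coefficients of variation in Tables 1-3 follow the usual convention CV = (standard deviation / mean) x 100, expressed in per cent. A quick check against the Table 1 figures, using only the reported values, is sketched below.

    # Table 1 values (Rs. crore): total sales revenue and advertisement expenses.
    sales_mean, sales_sd = 383509.45, 240470.21
    adv_mean, adv_sd = 8684.26, 2059.54

    # Coefficient of variation in per cent: (standard deviation / mean) * 100.
    print(round(sales_sd / sales_mean * 100, 2))   # ~62.70, in line with the reported 62.71
    print(round(adv_sd / adv_mean * 100, 2))       # 23.72, as reported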


The study of sales revenue and advertisement expenses at the aggregate and disaggregate levels helps to sketch more precisely their nature and behaviour during the period under study. The total sales revenue and advertisement expenditure of the selected companies operating in India, presented in Figure I, show that total sales increased continuously over the study period, from Rs. 102746.70 crore in 1992/93 to Rs. 927601.79 crore in the financial year 2006/07. This is reflected in the 16.0 per cent ACGR of total sales revenue over the period of study (refer to Table 4). Further, it is apparent from Figure I that total advertising expenses of the selected companies rose during the initial years of the study, that the trend reversed during 1994/95, that a similar tendency was observed during 1997/98 to 2000/01, and that expenses thereafter increased continuously till 2006/07. This fluctuation of advertisement expenses over the study period is confirmed by the low ACGR of advertisement expenses, which stands at -0.6 per cent (refer to Table 4).

TABLE 4. ANNUAL COMPOUND GROWTH RATES OF SALES REVENUE AND ADVERTISEMENT EXPENSES

Sales Revenue
Companies    Type-1     Type-2     Type-3     Type-4     Total
ACGR         6.8        16.5       16.4       7.9        16.0
R2           .84        .99        .99        .69        .98
t-value      127.20*    235.60*    247.57*    71.22*     233.85*

Advertisement Expenses
Companies    Type-1     Type-2     Type-3     Type-4     Total
ACGR         9.3        -1.1       -1.2       14.4       -.6
R2           .76        .03        .04        .95        .01
t-value      72.85*     58.81*     60.83*     118.88*    65.44*

Source: Same as in Table 1.
Note: * t-values are significant at the 1 per cent level of significance.

The size of the advertisement expenditure incurred by the companies may be important in itself, but it is also necessary to examine advertisement expenditure in relation to the size of the company's sales revenue. Accordingly, the share of advertisement expenses in the sales revenue of the selected companies, presented in Figure II, displays a downward trend except during 1997/98 to 1998/99. This may be due to the presence of companies which, because of their size, scale or degree of maturity, spend less on advertisements. Factors such as consumer awareness of the availability of products, economies of scale and services may also contribute to it.

[Figure II: Share of Advertisement Expenses to Sales of Selected Companies, 1993-2007. Source: Same as in Figure I.]

[Figure III: Sales of Type-1 and Type-2 Companies, 1993-2007. Source: Same as in Figure I.]

The first part of Figure III reflects the sales revenue behaviour of Type-1 companies, i.e. companies whose sales revenue is less than Rs. 1000 crore. In 1992/93 the sales revenue of these companies stood at Rs. 7994.96 crore, and it has increased continuously at an ACGR of 6.8 per cent (refer to Table 4). Further, Figure III shows that the sales revenue of Type-2 companies, i.e. the companies whose sales revenue is more than Rs. 1000 crore, is increasing monotonically and rapidly: it was Rs. 94751.74 crore in 1992/93 and increased to Rs. 903669.69 crore in 2006/07, an ACGR of 16.5 per cent (refer to Table 4), as these are companies that are well established in the market. The high R2 values and significant t-values further validate these results.
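The paper does not spell out how the ACGRs in Table 4 are obtained; the reported values are consistent with the common semi-log trend convention, in which Ln(Y_t) is regressed on a time trend t and ACGR = (exp(b) - 1) x 100 (the large t-values reported alongside them would then refer to the trend coefficient). The sketch below illustrates that convention on a hypothetical series and contrasts it with the simple point-to-point compound growth rate computed from the reported end values; only the method, not the data, is intended.

    import numpy as np

    def acgr(series):
        """Annual compound growth rate (per cent) from a semi-log trend:
        fit Ln(y_t) = a + b*t by least squares and return (exp(b) - 1) * 100."""
        y = np.asarray(series, dtype=float)
        t = np.arange(len(y))
        b = np.polyfit(t, np.log(y), 1)[0]   # slope of the log-linear trend
        return (np.exp(b) - 1.0) * 100.0

    # Hypothetical series growing at roughly 16 per cent a year (Rs. crore).
    sales = [102747.0 * 1.16 ** i for i in range(15)]   # 1992/93 ... 2006/07
    print(round(acgr(sales), 1))                        # ~16.0

    # Point-to-point compound growth over 14 years, using the reported end values.
    cagr = (927601.79 / 102746.70) ** (1.0 / 14.0) - 1.0
    print(round(cagr * 100, 1))                         # ~17.0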


Figure IV shows a consistent increase in the advertising expenses of Type-1 companies till 2001/02; expenses then decreased during 2002/03 before returning to the earlier pattern. The ACGR of advertisement expenses for the whole period under study is 9.3 per cent (refer to Table 4).

[Figure IV: Advertisement Expenses of Type-1 and Type-2 Companies, 1993-2007. Source: Same as in Figure I.]

In the case of Type-2 companies, erratic behaviour of advertisement expenditure can be noticed from 1992/93 to 2000/01, and it exhibits negative growth (-1.1 per cent). Even so, the growth rate of sales revenue of both types of companies is positive and significant, which can be attributed to factors such as the reputation of the company, brand name, product quality, etc. On the other side, Figure V makes it quite clear that there is a consistent increase in the sales of Type-3 companies during the period under study, with an ACGR of 16.4 per cent, while for Type-4 companies it was 7.9 per cent.

[Figure V: Sales of Type-3 and Type-4 Companies, 1993-2007. Source: Same as in Figure I.]

Along the same lines, the second part of Figure VI reflects a consistent increase in the advertising expenses of Type-4 companies, with an ACGR estimated at 14.4 per cent, which shows that this class of companies is using advertisement expenses to increase sales revenue. At the same time, the first graph of Figure VI gives evidence of the volatile behaviour of the advertisement expenses of the randomly selected manufacturing companies, i.e. Type-3 companies, due to which the ACGR of advertisement expenses of Type-3 companies is -1.2 per cent. This suggests that Type-4 companies pay continuous attention to their spending on advertisement, while Type-3 companies do not consider this element regularly; nevertheless, the sales revenue of Type-3 companies keeps increasing. The reason may be that when companies are new they advertise heavily so as to establish a position in the market and create an image in the minds of consumers, but once established they advertise merely as a reminder of their presence in the market. So they advertise less, yet their sales increase owing to economies of scale together with determinants such as the reputation of the company, brand name and product quality.


The growth rate of advertising expenses for Type-3 companies is also negative; as said earlier, this may be because these are well-established firms that invested heavily in advertisement in the beginning, after which the growth of investment under this head slowed.

[Figure VI: Advertisement Expenses of Type-3 and Type-4 Companies, 1993-2007. Source: Same as in Figure I.]

Similarly, the share of advertisement expenses in total sales revenue for Type-1 and Type-2 companies is drawn in Figure VII. It shows that the share for Type-1 companies remained almost constant till 1996/97, then increased at a slow rate till 2000/01, and thereafter displayed, in general, a downward trend. On the other hand, the share of advertisement expenses in sales for Type-2 companies demonstrates a downward trend except during the period between 1997/98 and 1999/00, after which it remained constant. This may be because Type-2 companies, owing to their size, started enjoying the benefits of economies of scale, so that expenses on advertisement display a downward trend. Furthermore, the share of advertisement expenses in sales revenue for Type-3 and Type-4 companies reveals a very contrasting picture, presented in Figure VIII.

[Figure VII: Share of Advertisement Expenses to Sales for Type-1 and Type-2 Companies, 1993-2007. Source: Same as in Figure I.]

In the case of Type-4 companies, the share of advertisement expenses in total sales revenue has shown less volatility than for Type-3 companies. The reason, as discussed earlier, may be that many of the Type-3 companies are established ones and therefore need to spend less, and less consistently, on advertisement, doing so merely to remind customers of their presence in the market.

The results of the regression for Model-1, i.e. Equation (1), reveal that the coefficient related to advertising expenditure is significant at the 1 per cent level of significance. From the results, it can be seen that there is a strong and positive relationship between sales revenue and advertisement expenses. The advertising elasticity coefficient of 0.657 in Equation (1) for 1992/93-2006/07 indicates that a 1 per cent increase in advertising expenditure leads to a 0.657 per cent increase in sales. Positive and statistically significant intercept values reveal that even if advertisement expenditure were zero, there would still be some amount of sales. This means that factors other than advertising which determine sales revenue, such as competitors' prices, the reputation of the company, brand name and product quality, become operative.


Model-1 Simple Fixed Effect Model

LnS_it = a_0 + b_1 LnA_it + u_it

LnS_it = 4.681 + 0.657 LnA_it ----------------------- Equation (1)
        (134.61)*  (53.63)*
R = 0.76, R2 = 0.58, N = 134

Note: (i) t-values are given in parentheses. (ii) * t-values are significant at the 1 per cent level of significance.
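To make the elasticity reading of Equation (1) concrete: in a log-log specification the coefficient on LnA is the advertising elasticity of sales, so the estimates imply S = exp(4.681) x A^0.657 and, for instance, doubling advertising expenditure would be associated with roughly 2^0.657, about 1.58 times the sales, i.e. about 58 per cent more, other things being equal. A one-line check, using only the reported coefficient:

    b1 = 0.657                  # advertising elasticity from Equation (1)
    print(round(2 ** b1, 2))    # ~1.58: doubling A scales predicted sales by about 58%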


The dummy variable was introduced to test whether the relationship between advertising expenses and sales revenue differs across categories of companies. The results in Equation (2) show that the coefficient of the dummy variable is statistically significant, revealing a difference in the relationship between advertising expenses and sales revenue between the Type-1 and Type-2 companies. The results also show that the coefficient of determination, R2, improves substantially in the dummy variable model. The higher coefficient for the dummy variable reveals that the intercept term for the Type-2 companies is much higher than for the Type-1 companies: the Type-2 companies sell more for the same level of advertisement. This may be because the Type-2 companies, with their higher sales revenue, are better placed in respect of reputation, brand name and sales promotion than the Type-1 companies, and therefore sell more for a given level of advertising expenditure.

Model-2 Differential Intercept Dummy Variable Model

a) LnS_it = a_0 + a_1 D_1 + b_1 LnA_it + u_it

LnS_it = 4.391 + 1.578 D_1 + 0.454 LnA_it -------------------- Equation (2)
        (138.78)*  (35.23)*  (27.09)*
R = 0.83, R2 = 0.69, N = 134

b) LnS_it = a_0 + a_1 D_2 + b_1 LnA_it + u_it

LnS_it = 4.767 - 0.417 D_2 + 0.652 LnA_it -------------------- Equation (3)
        (127.34)*  (-5.99)*  (53.523)*
R = 0.77, R2 = 0.59, N = 134

Note: (i) t-values are given in parentheses. (ii) * t-values are significant at the 1 per cent level of significance.

Equation (3) shows the result of the intercept shift dummy variable model for Type-3, i.e. manufacturing, companies and Type-4, i.e. non-manufacturing, companies, which include a combination of agro, food, service and similar industries. The explanatory power of the model is satisfactory and all the coefficients are statistically significant at the 1 per cent level. However, the coefficient of the dummy variable is negative. The negative coefficient reveals that the intercept term of Type-4 companies is lower than that of the Type-3 companies, which implies that the Type-4 companies sell less for a given level of advertising expenses than the Type-3 companies, assuming that the slope coefficient is the same for both categories.

Model-3 Differential Slope Dummy Variable Model

a) LnS_it = a_0 + b_1 LnA_it + b_2 (LnA_it * D_1) + u_it

LnS_it = 4.61 + 0.394 LnA_it + 0.385 (LnA_it * D_1) --- Equation (4)
        (144.42)*  (22.60)*  (19.66)*
R = 0.81, R2 = 0.65, N = 134

b) LnS_it = a_0 + b_1 LnA_it + b_2 (LnA_it * D_2) + u_it

LnS_it = 4.60 + 0.689 LnA_it - 0.215 (LnA_it * D_2) --- Equation (5)
        (136.77)*  (54.29)*  (-8.16)*
R = 0.77, R2 = 0.61, N = 134

Note: (i) t-values are given in parentheses. (ii) * t-values are significant at the 1 per cent level of significance.

Model-3 is estimated to study whether there is any divergence between the various categories of companies with respect to the effectiveness of advertising expenses on sales revenue. Equation (4) gives the results of Model-3, which is based on a shift in the slope coefficient of the model. The results show that all the coefficients are statistically significant at the 1 per cent level and the R2 is also satisfactory. The coefficient of the dummy variable (D_1), representing the shift in the slope, is highly significant, revealing that the advertising elasticity is higher for Type-2 companies than for Type-1 companies, i.e. for a given level of advertising expenses, the increase in sales revenue is greater for Type-2 companies. Advertising effectiveness is higher for Type-2 companies because they can afford to advertise more effectively than Type-1 companies. The absolute volume of advertising of Type-2 companies is very high, and a marginal change in advertising is also found to be more effectual. The Type-2 companies can make better use of marketing expenses than the Type-1 companies, make their advertising more result-oriented and avail themselves of the economies of advertising, as they continue to advertise over a long period and deal in large volumes. Model-3 also presents, through Equation (5), the results of the slope shift dummy variable model for Type-3 and Type-4 companies. All coefficients, including the coefficient of the dummy variable, are statistically significant. The results are very interesting, as the coefficient of the dummy variable is negative. The negative coefficient reveals that the increase in sales resulting from an increase in advertising expenses is smaller for Type-4 companies, i.e. advertisement is comparatively more effective for Type-3 companies. This indicates a less effective use of advertisement expenses in terms of sales revenue by Type-4 companies, which need to spend more on advertisement to attain the same level of sales revenue, and which consequently spend more on advertisement than Type-3 companies. This is also verified by a glance at Figures VI and VII, which, as discussed earlier, portray an increasing tendency of advertisement expenses for these types of companies.
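For readers who want to reproduce this kind of specification, the sketch below estimates the three models by pooled OLS on a synthetic log-log panel using the Python statsmodels formula interface. The paper describes a fixed-effect approach with and without dummies; the sketch only mirrors the dummy-variable equations (1)-(5), is not a reconstruction of the authors' exact estimator, and uses generated data and coefficient values that are purely illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic firm-year observations, loosely shaped like Equation (2);
    # the parameter values used to generate the data are illustrative only.
    rng = np.random.default_rng(0)
    n = 500
    D1 = rng.integers(0, 2, n)                    # 1 = Type-2 (sales > Rs. 1000 cr)
    lnA = rng.normal(4.0 + 2.0 * D1, 0.6)         # log advertisement expenses
    lnS = 4.4 + 1.6 * D1 + 0.45 * lnA + rng.normal(0.0, 0.4, n)   # log sales
    df = pd.DataFrame({"lnS": lnS, "lnA": lnA, "D1": D1})

    m1 = smf.ols("lnS ~ lnA", data=df).fit()            # Model-1: simple log-log
    m2 = smf.ols("lnS ~ D1 + lnA", data=df).fit()       # Model-2: intercept dummy
    m3 = smf.ols("lnS ~ lnA + lnA:D1", data=df).fit()   # Model-3: slope dummy

    for name, model in (("Model-1", m1), ("Model-2", m2), ("Model-3", m3)):
        print(name, model.params.round(3).to_dict(), "R2 =", round(model.rsquared, 3))

The same three formulas can be run with a D2 column in place of D1 to compare manufacturing and non-manufacturing firms.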
Conclusions and Implications

Advertisement is a persuasive communication which attempts to change or reinforce one's prior attitude; it is done not only to inform customers about products but is a process which further influences and persuades customers to purchase the product. The study is based on secondary data on the advertisement expenditure and sales revenue of 134 randomly selected companies operating in India, collected from PROWESS (2008) of the Centre for Monitoring Indian Economy, New Delhi, India, for the period from 1992/93 to 2006/07. In this study of panel (pooled) data, the Fixed Effect Model with and without dummy variables is used to evaluate the effectiveness of advertisement expenses on sales revenue. Further, annual compound growth rates and summary statistics are also estimated. The randomly selected companies are classified on the basis of sales revenue and type of product produced. The study has found that the growth rates of sales revenue of Type-2 and Type-3 companies are the highest in their respective categories, notwithstanding the negative annual compound growth rates of the advertising expenses of these two categories. The increase in sales of these companies can be attributed to factors such as the reputation of the company, brand name, product quality, etc. This is also reinforced by Model-1 and Model-2, which demonstrate positive and statistically significant intercept values, showing that factors other than advertising are also


in operation and determine the sales revenue of a company besides the advertisement expenses. The Type-4 companies, i.e. the non-manufacturing companies, sell less for a given level of advertising expenses than the Type-3 companies, i.e. the manufacturing companies. This suggests that Type-4 companies are less efficient in utilising their advertisement expenses than Type-3 companies.

The findings of the present study indicate the existence of a complex relationship between the advertisement expenses and sales revenue of companies. How much should be spent on advertisement depends to a large extent on the nature and size of the industry, i.e. whether one is operating a large or a small firm, or a manufacturing or non-manufacturing firm. Companies operating at a large scale can better utilise their marketing expenses owing to economies of scale and can hence be more result-oriented over a longer period of time. For companies functioning in the non-manufacturing sector, the effectiveness of advertisements is lower, so these firms need to acknowledge that an increase in advertisement expenses may not bring them the same degree of sales revenue as it does for organisations operating in the manufacturing sector. It should also be borne in mind that advertisement expenditure is not the only factor determining the sales revenue of an organisation. Advertisement expenses are one of various factors, though a crucial one, determining the sales of any company, by increasing the popularity of products and services among customers. Organisations therefore need to take care of this factor while formulating strategies relating to advertisement spending.

It can be wrapped up by stating that advertisement is considered one of the most important media of communication, influencing an organisation's performance in more than one way, but its influential role may be suppressed by the operation of other factors which also seek equal attention at the time of framing any sales promotion policy.

References

Ekelund, Robert B., and William P. Gramm (1969), "A Reconsideration of Advertising Expenditures, Aggregate Demand and Stabilization", Quarterly Review of Economics and Business (Summer), pp. 71-77.

Elliot, C. (2001), "A Cointegration Analysis of Advertisement and Sales Data", Review of Industrial Organization, Vol. 18, pp. 417-26.

Greunes, M. R., Kamerschen, D. R., and Klein, P. G. (2000), "The Competitive Effects of Advertising in the US Automobile Industry 1970-94", International Journal of the Economics of Business, Vol. 7(3), pp. 245-61.

Gujarati, Damodar N. (2003), Basic Econometrics, McGraw-Hill Companies, Inc.: New York.

Guo, Chiquan (2003), "Cointegration Analysis of Advertising Consumption Relationship", Journal of the Academy of Business and Economics, February. Obtained through the internet: www.findarticles.com [accessed on 3rd February, 2008].

Hanssens, Dominique M. (1980), "Bivariate Time Series Analysis of the Relationship between Advertising and Sales", Applied Economics, pp. 329-39.

Jagpal, Harsharanjeet S. (1981), "Measuring Joint Advertising Effects in Multiproduct Firms: Use of a Hierarchy-of-Effects Advertising-Sales Model", Journal of Advertising Research, Vol. 21(1), pp. 65-75.

Kaldor, N. V. (1950), "The Economic Aspects of Advertising", Review of Economic Studies, Vol. 18, pp. 1-27.
Kamber, T. (2002), "The Brand Manager's Dilemma: Understanding How Advertising Expenditures Affect Sales Growth during the Recession", The Journal of Brand Management, Vol. 10(2), pp. 106-20.


Leach, F. Daniel and Reekie, W. D. (1996), "A Natural Experiment of the Effect of Advertising on Sales: The SASOL Case", Applied Economics, Vol. 28, pp. 1081-91.

Lee, Junsoo, Shin, B. S. and Chung, In (1996), "Causality between Advertising and Sales: New Evidence from Cointegration", Applied Economics Letters, Vol. 3, pp. 299-301.

Leong, S. M., Ouliaris, S. and Franke, G. R. (1996), "Estimating Long Term Effects of Advertising on Sales: A Cointegration Perspective", Journal of Marketing Communications, Vol. 2(2), pp. 111-22.

Nelson, P. (1974), "The Economic Value of Advertising", in Brozen, Y. (Ed.), Advertising and Society, New York: New York University Press, pp. 43-66.

Nelson, P. (1975), "The Economic Consequences of Advertising", Journal of Business, Vol. 48, pp. 213-41.

Ozga, S. A. (1960), "Imperfect Markets Through Lack of Knowledge", Quarterly Journal of Economics, Vol. 74, pp. 29-52.

Pagan, J., Sethi, S. and Soydemir, G. A. (2001), "The Impact of Promotion/Advertising Expenditures on Citrus Sales", Applied Economics Letters, Vol. 8(10), pp. 659-63.

Prowess (2008), Centre for Monitoring Indian Economy, New Delhi.

Ramanathan, R. (2002), Introductory Econometrics with Applications, Harcourt College Publishers.

Robinson, J. (1933), Economics of Imperfect Competition, MacMillan and Co.: London.

Stigler, G. J. (1961), "The Economics of Information", Journal of Political Economy, Vol. 69, pp. 213-25.

Suchdeva, Sudhu (1988), "Advertising in India: Some Characteristics and Trends", WP 1988/05, Institute for Studies in Industrial Development. Obtained through the internet: http://isdev.nic./pdf/sudh.pdf [accessed on March, 2008].

Sundarsan, P. K. (2007), "Evaluating Effectiveness of Advertising on Sales: A Study Using Firm Level Data", ICFAI Journal of Managerial Economics, Vol. V(1), pp. 54-62.

Taylor, Lester D. and Weiserbs, Daniel (1972), "Advertising and Aggregate Consumption Function", American Economic Review, Vol. LXII(4), pp. 642-55.

Verdon, Walter A., McConnell, Campbell R. and Roesler, Theodore W. (1968), "Advertising Expenditures as an Economic Stabilizer: 1945-64", Quarterly Review of Economics and Business (Spring), pp. 7-18.


Annexure - I
List of Randomly Selected 134 Companies of the Indian Corporate Sector

1. 3M INDIA LTD.
2. ABHISHEK INDUSTRIES LTD.
3. ANIK INDUSTRIES LTD.
4. APEEJAY TEA LTD.
5. APEEJAY SHIPPING LTD.
6. ASIAN ELECTRONICS LTD.
7. ASSAM CO. LTD.
8. ASSOCIATED STONE INDS. (KOTAH) LTD.
9. AVAYA GLOBALCONNECT LTD.
10. B P L LTD.
11. BHARAT BIJLEE LTD.
12. BHARAT FERTILISER INDS. LTD.
13. BHARAT GEARS LTD.
14. BHARAT HOTELS LTD.
15. BIRLA POWER SOLUTIONS LTD.
16. BIRLA PRECISION TECHNOLOGIES LTD.
17. BIRLA TRANSASIA CARPETS LTD.
18. BIRLA V X L LTD.
19. BOMBAY BURMAH TRDG. CORPN. LTD.
20. BOMBAY CYCLE & MOTOR AGENCY LTD.
21. BOMBAY DYEING & MFG. CO. LTD.
22. BOMBAY PAINTS LTD.
23. BRABOURNE ENTERPRISES LTD.
24. CHLORIDE INTERNATIONAL LTD.
25. CONSOLIDATED FINVEST & HOLDINGS LTD.
26. ELECTRONICA MACHINE TOOLS LTD.
27. ELECTROTHERM (INDIA) LTD.
28. EUROTEX INDUSTRIES & EXPORTS LTD.
29. EVEREADY INDUSTRIES (INDIA) LTD.
30. FOSECO INDIA LTD.
31. GANDHI SPECIAL TUBES LTD.
32. GODREJ AGROVET LTD.
33. GODREJ INDUSTRIES LTD.
34. GREAVES COTTON LTD.
35. GULF OIL CORPN. LTD.
36. H E G LTD.
37. H M T (INTERNATIONAL) LTD.
38. H M T BEARINGS LTD.
39. H M T LTD.
40. HARRISONS MALAYALAM LTD.
41. HINDUSTAN BREWERIES & BOTTLING LTD.
42. HINDUSTAN COMPOSITES LTD.
43. HINDUSTAN DORR-OLIVER LTD.
44. HINDUSTAN EVEREST TOOLS LTD.
45. HINDUSTAN ORGANIC CHEMICALS LTD.
46. HINDUSTAN SANITARYWARE & INDS. LTD.
47. HINDUSTAN TIN WORKS LTD.
48. HYDERABAD INDUSTRIES LTD.
49. I F B INDUSTRIES LTD.
50. INDIAN SUCROSE LTD.
51. J C T LTD.
52. JAY SHREE TEA & INDS. LTD.
53. JINDAL DRILLING & INDS. LTD.
54. MAHINDRA UGINE STEEL CO. LTD.
55. MILKFOOD LTD.
56. NAHAR INVESTMENTS & HOLDING LTD.
57. NAHAR SPINNING MILLS LTD.
58. NOVARTIS INDIA LTD.
59. OSWAL CHEMICALS & FERTILIZERS LTD.
60. PANASONIC BATTERY INDIA CO. LTD.
61. PANASONIC HOME APPLIANCES INDIA CO. LTD.
62. RAJSHREE SUGARS & CHEMICALS LTD.
63. RAYMOND APPAREL LTD.
64. RELIANCE CHEMOTEX INDS. LTD.
65. SANDESH LTD.
66. SHARP INDIA LTD.
67. SINGER INDIA LTD.
68. SUPER FORGINGS & STEELS LTD.
69. SUPER SALES INDIA LTD.
70. SUPER SPINNING MILLS LTD.
71. SUPERTEX INDUSTRIES LTD.
72. SURYAJYOTI SPINNING MILLS LTD.
73. SURYALAKSHMI COTTON MILLS LTD.
74. SURYALATA SPINNING MILLS LTD.
75. SURYAVANSHI SPINNING MILLS LTD.
76. TATA COFFEE LTD.
77. TATA REFRACTORIES LTD.
78. TAYO ROLLS LTD.
79. UNIVERSAL CABLES LTD.
80. A B B LTD.
81. A C C LTD.
82. AGRO TECH FOODS LTD.
83. APAR INDUSTRIES LTD.
84. APOLLO TYRES LTD.
85. ASIAN PAINTS LTD.
86. B E M L LTD.
87. BHARAT ELECTRONICS LTD.
88. BHARAT FORGE LTD.
89. BHARAT HEAVY ELECTRICALS LTD.
90. BHARAT PETROLEUM CORPN. LTD.
91. BIRLA CORPORATION LTD.
92. BRITANNIA INDUSTRIES LTD.
93. CADBURY INDIA LTD.
94. CASTROL INDIA LTD.
95. CENTURY ENKA LTD.
96. CENTURY TEXTILES & INDS. LTD.
97. CIPLA LTD.
98. CUMMINS INDIA LTD.
99. DABUR INDIA LTD.
100. ELECTROSTEEL CASTINGS LTD.
101. EXIDE INDUSTRIES LTD.
102. FORCE MOTORS LTD.
103. GLAXOSMITHKLINE CONSUMER HEALTHCARE LTD.
104. HERO HONDA MOTORS LTD.
105. HINDALCO INDUSTRIES LTD.
106. HINDUSTAN PETROLEUM CORPN. LTD.
107. HINDUSTAN UNILEVER LTD.
108. HINDUSTAN ZINC LTD.
109. I T C LTD.
110. INDIAN OIL CORPN. LTD.
111. ISPAT INDUSTRIES LTD.
112. JINDAL SAW LTD.
113. KIRLOSKAR BROTHERS LTD.
114. MAHANAGAR TELEPHONE NIGAM LTD.
115. MAHINDRA & MAHINDRA LTD.
116. MARUTI SUZUKI INDIA LTD.
117. MICRO INKS LTD.
118. N T P C LTD.
119. NAHAR INDUSTRIAL ENTERPRISES LTD.
120. NESTLE INDIA LTD.
121. PHILIPS ELECTRONICS INDIA LTD.
122. R S W M LTD.
123. RANBAXY LABORATORIES LTD.
124. RAYMOND LTD.
125. RELIANCE ENERGY LTD.
126. RELIANCE INDUSTRIES LTD.
127. SINTEX INDUSTRIES LTD.
128. SURYA ROSHNI LTD.
129. TATA CHEMICALS LTD.
130. TATA COMMUNICATIONS LTD.
131. TATA MOTORS LTD.
132. TATA POWER CO. LTD.
133. TATA STEEL LTD.
134. TATA TEA LTD.


Engineering and Technology Section

Content

Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand
A metadata analysis of the capacity of intensification of agricultural production via genetic engineering to feed a growing population
Ernst Giger, Rudolf Prem, Michael Leen

Corrosion in Concrete Bridge Girders
A critical examination concerning the problem of corrosion in concrete bridge girders with recommendations to resolve the issue
Walter Unterweger, Kurt Nigge

Department's Reviewers

Deputy Head of Department - Engineering - Prof. Robert Munier
Chair of Aerospace and Transport Engineering - Prof. Bence Anzenberger
Chair of Chemical Engineering - Prof. Louis Maté
Chair of Civil Engineering - Prof. Kurt Nigge
Chair of Electrical Engineering - Prof. Michael Kirkbridge
Chair of Mechanical Engineering - Prof. Alex Nickols
Deputy Head of Department - Technology - Prof. József Ruppel
Chair of Computer and Software Engineering - Prof. Thomas Aaronson
Chair of Nanotechnology - Prof. Jeffrey Dessler
Chair of Mechatronics - Prof. Sarah Poliza
Chair of Agricultural Engineering and Food Technology - Prof. Michael Leen
Chair of Energy Technology and Engineering - Prof. George Szentpetéri


Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demands

Ernst Giger (MSc)
Master of Science and candidate for a PhD in Engineering at the School of Doctoral Studies, Isles Internationale Université (European Union)

Rudolf Prem (BSc)
Bachelor of Science and candidate for a Master of Philosophy in Biology and Earth Science at the Isles Internationale Université (European Union) Degree's Validation Programme

Professor Michael Leen (PhD)
Chair of Agricultural Engineering and Food Technology of the Department of Engineering and Technology at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

A metadata analysis of the capacity of intensification of agricultural production via genetic engineering to feed a growing population. In order to fully examine that relationship, it was necessary to critically examine literature, statistics and historical examples that might shed some light on the relationship that exists between food production and population growth. Additionally, studies were consulted that spoke to the capacity of genetically modified foods to increase agricultural production. In all, the range of information required for this study was significant and at times may have appeared to stray beyond the limited scope of genetically modified food. However, in order to demonstrate the manner by which genetically modified food would have its greatest negative impact upon human societies, it was essential to take a broader look at the role that genetically modified foods have played in the intense push to intensify agricultural production year after year in order to presumably keep up with geometric population growth by always generating more food than is needed.

Key words: Agricultural Engineering, Food Technology, Genetics, Biology and Life Science


Genetically Modified Food

Introduction and Statement of Problem

STATEMENT OF PROBLEM

The problem that this study addresses is the extent to which the Gene Revolution, represented by the integration of genetic engineering techniques into the field of agriculture, is capable of positively affecting the current human population crisis. With significant predicted increases in population set for the next few decades and a current population that already seems incapable of feeding itself, this is a significant problem. The United Nations Food and Agriculture Organization (FAO) estimates that the human population of the planet will rise to 8 billion by 2030. Further, the FAO estimates that to account for that predicted rise, as well as to close existing nutrition gaps and allow for dietary changes, current agricultural production must be increased by as much as 60% (Prakash and Conko, 2004). Other estimates state that the Earth's population could double to more than 12 billion within fifty years (Pimentel, Huang, Cordova and Pimentel, 1996).

The population growth of all biological organisms, human beings included, is sigmoid or S-shaped in nature. This means that growth is characterized by a slow start, followed by rapid acceleration, and then deceleration as the population approaches the asymptote of environmental limits. However, Hopfenberg and Pimentel (2001) point out that agricultural production has consistently raised the limits placed on growth by negating the limitations of the food supply. Food can, quite literally, be grown in almost any conceivable amount, though at the start of the Twenty-First Century we are beginning to recognize that there might be final resource limits that production faces. It is important to note that raising the food supply is artificial; that is, it is not a naturally appearing phenomenon of the environment. When disruptions occur, such as drought or infestation, the effects on the food supply are dramatic and result in precipitous declines in population through famine and disease (Hopfenberg and Pimentel, 2001).

Even more unfortunate is the fact that even as population levels are rising at a rapid rate, the per capita amount of cultivatable land has been steadily falling throughout the last half of the Twentieth Century. In 1966, there were 0.45 hectares per capita of cultivatable land in the world. By 1998, that amount had fallen to 0.25 hectares. It is predicted to dwindle even further, to 0.15 hectares, by 2050 (Global outlook, 2004; Pimentel, Huang, Cordova and Pimentel, 1996). This means that even if the population of the planet were to hold steady, it would be necessary to double the agricultural output of cultivatable land just to keep up with the diminishing availability of farmland. Many scientists, corporations, and governments have envisioned the biotech revolution in agriculture to be the silver bullet that will avert famine, disease, conflict, environmental degradation, and countless other ills associated with uncontrolled population growth (Hopfenberg and Pimentel, 2001).

The World Bank and the United Nations estimated in 1996 that between one and two billion people were then malnourished because of lack of food, low income, and poor distribution of existing food resources (Pimentel, Huang, Cordova and Pimentel, 1996).
This seemingly impossibly high figure has been confirmed by other studies, some of which have more recently indicated that as many as three billion people are malnourished (Hopfenberg and Pimentel, 2001). This is a very significant problem, and it is the primary one that this study intends to address, through the lens of new technological developments in agriculture, namely biotechnology and the possibility of engineering new sources of food at the genetic level.

PURPOSE OF THE STUDY

The purpose of this study is relatively straightforward. One of the most significant problems in feeding the growing population is apparently the rate of population growth versus the rate at which food production can be increased.


Considering a rough population growth rate of about 1.7% per year, agricultural production must increase at least at that pace in order to prevent decreases in per capita food supplies (Kindall and Pimentel, 1994). A number of scholars on the subject believe that genetic engineering is the only viable means by which such significant gains can be made over the course of the next few decades. Gains of 5%, 10% or even 25% in some individual plants from the addition of even a single genetic trait are not wholly unreasonable. For that reason, genetic engineering is seen as the most promising agricultural technology that can increase crop productivity at a rate fast enough to hopefully outstrip population growth (Conko, 2003; Kindall and Pimentel, 1994).

The intention here is to examine the assumption that intensification of production via genetic engineering is the solution to the population crisis. The current perspective holds that increases in agricultural productivity are needed if we can ever hope to feed the number of people projected to be born within the coming decades (Hopfenberg and Pimentel, 2001). This conclusion is assumed by nearly the entire world. Even the critics of genetically engineered food crops rarely criticize the capacity of the Gene Revolution to increase production and provide more food for the world. Rather, their criticisms are generally based on the questionable environmental, health, and political effects of employing genetically modified food (Evans, 1998; Global outlook, 2004; Pringle, 2003). This study takes a significantly different approach to the question of the viability of genetic engineering in agriculture. Whereas most critics of genetically modified food criticize superficial aspects of the process or the immediate results of such engineered food, the purpose here is to take a broader, more systematic approach to the matter. At issue is the extent to which the Gene Revolution and the incorporation of biotech advances into agricultural production can actually have a positive effect upon the growing population crisis. The faith in increased production through genetic engineering is almost unfailing. It is part of a cornucopian perception of the world that imagines that growth is always good, that the population will correct itself, that shortages are mythical, and that any potential shortage can be overcome through technology or substitution (Grant, 1993). It is important to remember that whether or not genetically modified crops can increase agricultural production, the Earth's resources are ultimately finite and eventually gains will be impossible.

ADM Corporation, a leading biotech corporation, advertises that it is using recombinant genetic techniques to increase food production in order to feed a growing world. This is part of the classic perception of human population that envisions it as an independent variable divorced from ecological, biological, and behavioral inputs. While some admit that such factors can have a limiting effect on population growth, none suggest that increasing these factors will result in similar increases in growth (Hopfenberg and Pimentel, 2001). This is an important point and one that forms the crux of this discussion of using genetically modified crops to feed a growing world. It is apparent that rapid decreases in the availability of food will have a limiting effect on population growth: decrease the supply, and famine is the likely result.
However, this study intends to show that increases in the available food supply, via the intensification of production that genetic engineering promises, will actually cause a sudden increase in population growth that will ultimately only exacerbate the current population crisis.

The purpose of this study, then, is to examine this ideological assumption, which has not largely been challenged in the popular or critical literature on the subject. In reading through the current literature on agricultural biotechnology, one is struck by the fact that very few of the authors ever question the ability of genetically modified crops to provide more food for a human population that seems to expand without end. Even when genetically modified food is painted in a negative fashion, the implicit assumption remains that biotech agriculture could provide the intensification of production that is presumably needed to feed a growing population. This study will attack that assumption and illustrate its erroneous basis.
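The S-shaped growth invoked above is conventionally described by the logistic model, dN/dt = rN(1 - N/K), where K is the carrying capacity set by environmental limits such as the food supply; the argument drawn from Hopfenberg and Pimentel amounts to saying that agricultural intensification keeps raising K, so the population keeps climbing toward an ever higher asymptote instead of levelling off. (At the roughly 1.7% annual growth rate cited earlier, unchecked exponential growth would double the population in about ln 2 / 0.017, i.e. roughly 41 years.) The following is a minimal numerical sketch of that idea, with arbitrary illustrative parameter values rather than demographic data:

    import numpy as np

    def logistic_path(n0, r, carrying_capacity, years):
        """Discrete-time logistic growth, N[t+1] = N[t] + r*N[t]*(1 - N[t]/K(t)),
        where the carrying capacity K may itself change over time."""
        n = [n0]
        for t in range(years):
            k = carrying_capacity(t)
            n.append(n[-1] + r * n[-1] * (1.0 - n[-1] / k))
        return np.array(n)

    # Arbitrary values: population in billions, growth parameter r = 1.7% a year.
    fixed_k  = logistic_path(6.0, 0.017, lambda t: 9.0, 100)               # K held at 9
    rising_k = logistic_path(6.0, 0.017, lambda t: 9.0 * 1.01 ** t, 100)   # K raised 1%/yr

    print(round(fixed_k[-1], 2))   # levels off below the fixed limit of 9
    print(round(rising_k[-1], 2))  # ends higher and keeps rising as the limit is pushed up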


IMPORTANCE OF THE STUDY

Leaders in the field of biotechnology are working under the auspices of cultural, governmental, and corporate blessings to use genetic engineering techniques to develop plant strains that will improve agricultural productivity. The matter of whether or not genetic engineering techniques are up to this task is not open for debate. The history of the so-called Gene Revolution points to the inescapable fact that recombinant genetic techniques can be employed to increase crop yields. In 1974, the first gene from one bacterial species was cloned and expressed in another species of bacteria (Evans, 1998). This was the beginning of the biotech revolution in agriculture that began in earnest during the 1980s and offered growers a new level of control over agricultural production. In 1984, antibiotic resistance was transferred to a tobacco plant. As early as 1987, successful experiments were being performed that provided pest and herbicide resistance to some kinds of plants. By the early 1990s, the techniques for creating genetically modified crops had been fine-tuned (Pringle, 2003; Evans, 1998).

Proponents of the technology point to some of the gains of the last twenty years in agricultural productivity as proof that biotechnology has a role to play in increasing food supplies worldwide. In the past 50 years, while the population rose by 50%, agricultural output doubled, due in part to the results of biotechnology (Prakash and Conko, 2004). Some of the advantages of the technology include the ability to use genetic material from any source from any part of the world. Genetic engineering also confers a high degree of control over the expression of characteristics in the final plant. While opponents argue that poor distribution of existing food supplies is more crucial than a lack of food, advocates of genetically modified food continue to espouse its benefits in helping to feed the world (Evans, 1998; Pringle, 2003). It would be naïve to assume that such increases in productivity can be achieved indefinitely. However, the potential for doubling current crop yields seems well within the scope of modern biotech resources.

So then, conceding that biotechnical resources can be turned to agricultural production to increase yields, one might wonder what possible importance this study can serve. After all, the explicit purpose of this study is to determine the effectiveness of genetically modified crops in positively affecting the current population crisis that the world is facing. Critics of this idea might wonder why, if it is admitted that agricultural productivity can be increased through biotechnical means, there is need for any further discussion. The importance of this study is not so simple a matter as whether or not genetic engineering can be effectively employed in agriculture. Rather, it intends to strike deep at the heart of the fallacious (and disturbingly unquestioned) assumption that increased agricultural productivity through genetically modified crop production is the path to solving the current population crisis.

The general and uncritical belief is that population growth must be countered with intensification of agricultural production.
After all, if there are more people living than previously, we must increase the amount of food that is being produced in order to feed those new individuals. This is the basis for all the historical revolutions in food production, including the Gene Revolution, the Green Revolution, and even the Agricultural Revolution. In essence, this is a Malthusian analysis of the current state of agricultural production. Malthus' conclusions have led to the development of an ideological belief in the West that we must find ways to increase agricultural production in order to keep pace with population growth. He concluded, famously, that population growth will advance geometrically while agricultural production can only increase arithmetically (Hopfenberg and Pimentel, 2001; Evans, 1998). But there are problems with this conclusion. It suggests that human populations can expand without similar preexisting expansions of the food supply. Quite simply, this is impossible. To put the matter bluntly, all people are made out of food. In a biological sense, people are constructions of food inputs. Human populations, thus, increase because of increases in food production, not the other way around (Hopfenberg and Pimentel, 2001).


The importance of this study rests in the fact that it intends to challenge this assumption and demonstrate that intensification of production will not alleviate the demands of an increased population. In fact, such intensification paradoxically makes the crisis all the worse. Therefore, this author is convinced that the importance of this study cuts to the heart of the modern population crisis that the world is experiencing, as well as all the environmental and social problems that stem from steady increases in population. The gut reaction to the population crisis is to find new ways to grow more food. The application of genetic engineering to agriculture represents the epitome of modern agricultural intensification. This study will crucially illustrate that genetic engineering fails to solve the population crisis precisely because its methods are invariably successful in increasing agricultural production.

SCOPE OF THE STUDY

The scope of this study will be limited largely to two aspects of this question. First, this study will demonstrate the ability of modern genetic engineering techniques to increase the agricultural output of modern farming. This will be accomplished through historical example as well as through discussions of the methods and results of the Gene Revolution. The second, and integral, part of this study will involve a systems analysis of human population growth. This discussion will draw heavily on the critical work of the past decade on the subject of human population growth, with specific attention paid to its relationship to agriculture and agricultural intensification.

By broadening the scope of this study beyond a simplistic discussion of genetically engineered food and its propensity either to be beneficial or detrimental, this study will make a more complex argument that attacks the ideological assumption that intensification of production, i.e. biotech agriculture, is capable of averting the current world population crisis (Hopfenberg and Pimentel, 2001). In other words, the genetically modified food industry has idealistically imagined its role in agriculture to be that of a savior, providing the necessary technology at just the right historical moment to avert famine, disease, and conflict. By expanding the scope beyond genetic engineering techniques, it will be possible to examine the actual effects of the Gene Revolution rather than the assumed and hoped-for effects.

RATIONALE OF THE STUDY

The rationale of this study is very straightforward. Whatever the potential and individual issues associated with genetically modified crops in industrial agriculture, such as environmental harm, health issues, or political conflict, the assumption stands among advocates and critics alike that biotechnology can increase agricultural productivity and thus avert famine and malnutrition and feed a growing world population. This is not a challenge to the conclusion that biotechnology has great potential to increase agricultural productivity (Evans, 1998; Kindall and Pimentel, 1994). Pest resistant and herbicide resistant crops alone can vastly increase agricultural productivity. More specialized and imaginative strains of the future will likely permit even greater increases.

Despite this admission, this study is crucial because it examines the question of whether or not intensification of agricultural production will actually avert famine and provide food for a growing population.
It is this study's hypothesis that intensification of agricultural production through genetic engineering will actually exacerbate overpopulation, famine, and malnutrition instead of solving them. While this might seem a counterintuitive hypothesis, an examination of existing data regarding genetically modified crops as well as population growth will reveal that the conclusion may well be accurate. Some might question the importance of such a study to the field of agricultural biotechnology. The aim of this study strikes at the core of the discussion of the applicability of genetically modified techniques. If it can be shown that intensification of production will not improve overpopulation and famine conditions, we will be forced to question the ideological assumption
that challenges researchers to continue to develop increasingly productive crop strains.

What follows, then, is not a criticism of specific side effects of agricultural biotechnology. While issues such as reduced biodiversity and corporate greed are important, they are too narrow for the purposes of this study. The primary presumed benefit of genetically modified crops is their ability to provide increased yields to feed a growing world. The rationale for this study is a challenge to the basic assumption that agricultural biotechnology can deliver that benefit. In fact, it is evident that intensification of production via genetic engineering will have the exact opposite effect. This study is needed because in all the discussions regarding the benefits or detriments of genetically modified food, none challenge its guiding premise: that intensification of agricultural production is a good thing. The changes that biotechnology intends to make to existing crop strains will have the effect of improving agricultural production. The purpose of this study is to show that such increases are ultimately negative and will worsen conditions for the starving and malnourished, all the while increasing the rate at which an ever larger human population causes significant environmental damage. The author's intention, then, is to demonstrate the macro-level negative effects of biotechnological intensification.

DEFINITION OF TERMS

There are four primary terms that are worth initial definition and discussion so as to clarify the manner in which they will be used throughout the rest of the study. These four terms are: genetically modified food, agricultural biotechnology, industrialized agriculture, and population. In this section these four terms will each be discussed to aid readers' understanding of the remainder of the study.

First, what are genetically modified food or genetically modified crops? Genetically modified food is any agricultural product that has been modified using recombinant DNA engineering. This can be a complex process by which agricultural plants, notably soybeans or cotton, are manipulated in a laboratory so that new traits can be introduced into the plant's genetic code. In one sense, all plant breeding is a form of genetic engineering, with certain plants specifically bred with others to produce certain traits in the next generation. However, to truly be considered a genetically modified food, the plant must be produced using more drastic breeding techniques.

Specifically, scientists insert the genetic material of one organism into another, sometimes from an entirely different class of organism. The history of this practice can be traced back to successful bacterial transfers in the 1970s. In the 1980s and 1990s successful and practical crop varieties of potatoes, tomatoes, soybeans, and tobacco were created (Evans, 1998; Pringle, 2003). Genetically modified food, thus, consists of those kinds of plants that are bred with specific favorable traits that usually do not exist in the original agricultural crop. Genetic engineering in this fashion also holds the promise of being incredibly precise, allowing scientists to insert specific genes and genetic markers into the DNA of target plants. This means that the randomness often associated with plant breeding is all but eliminated.
New plants can be designed according to the traits that agriculturalists find most desirable.

Human beings have been genetically modifying various kinds of living organisms, from plants to dogs, for thousands of years. However, for all but the most recent decades, any genetic alterations that were desired had to be accomplished through the slow process of selective breeding (What are, 2006). In selective breeding, close attention is paid to the offspring that are produced in a generation of the plant or animal that is being altered. The breeder selects for desired traits by permitting the organisms that have the desired trait to breed subsequent generations. From that generation, again the process of selection is employed, such that the resulting third generation is more fine-tuned than the previous two with regard to the desired trait. Over the period of many generations, desired traits can be bred into a species or undesirable traits can be excised. Selective breeding is akin
to artificial natural selection in that acceptable genetic characteristics are being bred for over time. However, unlike natural selection, selective breeding is designed by man for his own ends.

Genetic engineering, specifically of food, is the latest incarnation of selective breeding. It allows specific insertions of single genes through gene splicing that produce desired characteristics in offspring without the trial and error associated with traditional selective breeding. Additionally, genetic engineering offers other benefits. It is much faster than selective breeding. Desired results can be achieved in a much smaller number of generations, sometimes even within the first generation. Perhaps more importantly, genetic engineering allows researchers to cross species with ease, inserting genetic material from separate species into one another at will (What are, 2006). The ease of gene splicing and other genetic engineering techniques has created a world in which seemingly impossible unions can occur. It is possible, for example, to design a plant that is capable of producing human insulin. Selective breeding alone could never accomplish such a monumental task, one that literally involves manipulating the genetic material available in single cells.

A variety of genetic engineering techniques have been created that have proven quite useful for agriculturalists. For example, Monsanto markets an herbicide called Roundup. Roundup is especially potent and can kill every plant that it touches. To make this herbicide practical for use on crops, Monsanto genetically engineered a soybean plant (as well as some other crops) that is effectively immune to the effects of Roundup. The result is fields of soybeans that can be sprayed with Roundup with impunity, with no fear of harming the crop, and yet every weed in the field will die. For farmers this means an easier time controlling weeds. Pairing Roundup with genetically engineered crops has reduced production costs, increased yields, and ultimately reduced the market price of the crops (What are, 2006). Other research has developed further applications for genetically engineered plants. Some have inserted a specific gene into corn that causes the plants to produce a natural insecticide. This has all but eliminated crop damage from corn borers. There has also been talk of inserting anti-fungal genes into corn plants (What are, 2006). In short, there are few limits to the way that genetic engineering can be employed to improve agricultural yields and increase the amount of food resources that are available. All that is required is the research imagination to determine what specific gene from any plant or animal on the planet might have a useful role to play in modern agriculture.

Second, consider agricultural biotechnology. This is an umbrella term that encompasses genetically modified food but which refers directly to the techniques and tools that are used to produce genetically modified plants. Whereas genetically modified food is the product, agricultural biotechnology is the means by which that food is created. In this study, it will be referred to on occasion, as it proves necessary. However, by and large, little critical examination will be made of these biotechnological methods, save insofar as the methods demonstrate the ability of genetically modified food to be incredibly more productive than plants produced using more traditional methods.

Third, industrialized agriculture is a necessary term for understanding genetically modified production.
The term is used here to denote a specific kind of agricultural production that focuses on intensification of production. Industrialized agriculture employs a wide variety of techniques, of which biotechnology is only the most recent and dramatic. The Green Revolution embodies the spirit of industrialized agriculture. During the 1960s and 1970s, the major food producers in the world turned to more scientific and industrial methods to produce food. This meant more fertilizer, more pesticides, and more herbicides. It also meant customized crop rotation patterns and planting schedules, all designed to maximize agricultural output. Industrialized agriculture differs from other forms of agriculture primarily because of its intense desire always to increase production. In the past this has meant using more chemicals on crops as well as opening up more land to agricultural production.


2009 Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand105Industrialized agriculture can be consideredto have certain characteristics that make itdistinguishable from other, less intense forms ofagriculture such as backyard gardening accordingto Manning (2003). While both types of agricultureproduce food, the former is fundamentally differentfrom the latter. Industrialized agriculture, tostart, is capital-based instead of labor-based andperceives the countryside as an auto manufacturerwould see a factory floor. It is simply the zone inwhich production occurs. With capital, instead oflabor, in hand producers purchase items such asfarm equipment, fertilizers, and pesticides. Theseinputs are employed to create a specific output,namely surplus crops in the form of staples such aswheat, corn, and rice. Industrialized agriculture isbuilt on the premises that surpluses are beneficialand that intensification of production is the mostviable means by which to achieve those surpluses.The so-called Gene Revolution promises tosignificantly increase production withoutexpanding planted acres. It is the current epitomeof agricultural intensification.Finally, readers should understand what ismeant when the term population is employed.Discussions of population growth unfortunatelyusually center on the division in growth ratesbetween the developed and developing nations.References to population in this study are made asa whole. Population is being used interchangeablywith the entire human population, which consistsof more than six billion people spread across theglobe. Critics of this kinds of systems approachwill immediately challenge these conclusions,explaining that population growth is obviouslyaffected by living standards, as this explains thedifferences between growth rates in the First andThird Worlds. If growth is occurring in the ThirdWorld it is because food is making it to thosepeople, even if it is being produced in the FirstWorld. Whether or not the food that gets there issufficient to provide adequate nutritional value isbeside the point. It has already been establishedthat all people are made from food (Hopfenbergand Pimentel, 2001). Discussions of populationgrowth herein regard the entire system of humanpopulation. Where growth is happening is notnearly as important as whether or not growth ishappening. Population behavior and growth cannotbe analyzed at the individual level or, rather, willnot be in this study. Population behavior is theresult of specific biological and environmentalconditions, not individual choice as might be thecase at the individual level (Quinn, 1996).OVERVIEW OF THE STUDYThe remainder of this study will be dividedinto four chapters. Chapter Two is a literaturereview of contemporary critical and quasi-criticalliterature on the subjects of biotech agricultureand population growth. This literature review willprovide a base of knowledge that is intended toboth familiarize the reader with the key conceptsaddressed by this study, as well as with thecurrent assumptions that are being made in bothof these disciplines. Chapter Three provides anoverview of the methodology for the remainder ofthe study. In this chapter information regardingthe historical, statistical, and critical resourcesthat were examined and applied to the overallpurpose of the study are reviewed. Assumptionsabout the nature of human population growth haveencouraged the development and application ofgenetic engineering to agriculture. 
Consequently, the study has examined and integrated resources from several generally unrelated disciplines. Chapter Three explains the methodological framework that informed these interdisciplinary examinations.

Chapter Four consists predominantly of data analysis. Resources that were briefly examined for their conclusions in Chapter Two are revisited in examining the nature of their data sets and how that information provides the critical evidence for the hypothesis. This is a metadata study that incorporates a wide variety of data sets, historical and statistical, in the fields of genetics, agriculture, and population. Because of the encompassing nature of the hypothesis, it was necessary to turn to a larger than usual data set to flesh out the conclusions. Chapter Four provides a detailed analysis of the data that exists to prove that intensification of agricultural production
106School of Doctoral Studies (European Union) JournalJulyvia biotechnology will not alleviate populationpressures but will actually exasperate them.Finally, Chapter Five consists of a summary ofthe data analysis and the conclusions based onsaid data analysis. Additionally, Chapter Fiveincludes some recommendations based on theseconclusions that should be considered for futurestudies on this subject as well as for contemporarypolicy-makers.Literature ReviewSCOPE OF THE LITERATUREREVIEWThe purpose of any literature review is toexamine some of the critical—and occasionallynon-critical—works that have already been writtenon relevant subjects to the study’s hypothesis. Inthis case, because the hypothesis encompasses bothagricultural biotechnology as well as populationsystems, the proceeding literature review willalso cover both these topics. However, sincethe overarching hypothesis consists of the ideathat intensification of agricultural production viagenetic engineering techniques will not be capableof arresting rapid population growth, the literaturereview has been divided into two sections alongthese lines. In doing so, it will be easier forreaders to grasp the critical distinctions that arebeing made that are crucial to the development ofthis study. Namely, that distinction exists betweenthose who argue that genetically modified food isthe answer to a human population that seems to begrowing without limit and those who argue thatintensification of production will only lead to stillgreater increases in population.It should be noted at this point that absent fromthis literature review are any studies or authorsthat are directly and overtly critical of geneticallymodified food and its application to industrialagriculture. Such criticisms abound and caneasily be located. For example, one examinationof the issue (GM ‘assistance’, 2003) centers thereluctance of African nations to accept geneticallymodified food as indication of public wariness ofthe safety of genetically modified products as wellas the ulterior profit motives genetically modifiedaid represents for Western nations, especially theUnited States. The author points out that in theUnited States there is a general assumption thatAfrican nations would welcome famine aid in theform of genetically modified food. However, thisproves not to be the case. Many nations are afraidthat the United States is trying to test unsafe foodon starving Africans. Others directly challenge themotivations of such aid, explaining it in terms ofeconomics. Giving genetically modified food asaid would help US farmers who might not be ableto sell the product otherwise on the world market.Additionally, if genetically modified crops takeroot in Africa, it would open a new market for theproduct in the near future, divorced of aid (GM‘assistance’, 2003).Predominantly, such authors tend to focus onthe negative environmental or health consequencesof genetically modified food as well as the profitdrivenmotives of the multinational corporationsthat are producing genetically modified products.In other words, those arguments against the use ofgenetically modified food fail to address the centralissue in this study; that is that the intensification ofagricultural production through genetic techniqueswill be wholly ineffective at preventing rapidpopulation growth. 
The possibility of superweeds, unintended health complications, and the destruction of biodiversity are all serious issues that should be carefully weighed in anyone's decision to support genetically modified food production. However, such considerations ignore the question of whether or not genetically modified food is the answer to the world's ballooning population.

Accordingly, the next section of this literature review will consider those studies and authors who consider intensification of agricultural production to be favorable. These authors see genetic engineering techniques as the only viable means that can increase crop yields in the coming decades. These claims are not denied. But the hypothesis here is that increases in crop yields will not help feed the hungry or curb population growth but will instead make both of those issues more acute. The final section of this literature review considers those arguments and studies
2009 Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand107that demonstrate the dangers in increasing foodproduction to feed a growing population. Mostof those studies are written about populationgrowth and only obliquely consider geneticallymodified food. However, the connection betweengenetically modified food and population growthwill be made more evident in subsequent chapters,specifically Chapter Four: Data Analysis.INTENSIFICATION OFPRODUCTION IS FAVORABLEOne of the most significant problems associatedwith rapid human population growth thatagricultural biotechnology presumes to be able tocorrect is famine. Some estimates put the numberof people who go to sleep hungry everyday ashigh as 740 million. More than 40,000 people dieeveryday from a combination of starvation andmalnutrition. If this trend continues, it is expectedthat there will be as many as one billion peopleundernourished by 2020 (Prakash and Conko,2004). Advocates of genetically modified foodherald it as the solution to this very significantproblem. On the surface, this assumption seemsvalid. If people are starving and undernourished,then the knee-jerk reaction should be to simplyincrease the production of food available. If thisis possible, then it should be possible to alleviatethe ills caused by famine.Conko (2003) argues that biotech agriculture isthe method by which we can increase agriculturalproductivity without resorting to increases inharmful chemical fertilizers, herbicides, andpesticides. Though primarily centered in theUnited States, in 2002 about 5.5 million farmersin 145 nations were planting more than 145million acres worth of GM crops (Conko, 2003).This is a significant amount and one that Conkoargues should be increased as soon as possible.Without literally turning the whole of the Earthinto agricultural land, Conko seems hard pressedto find any other way by which the growinghuman population can be fed. He admits thatthere are risks with agricultural biotechnology, butlimits these risks to those that are inapplicable tothis study. Topping his list of risks are potentialdangers such as negative health consequences,harmed biodiversity, and the creation of superweeds.Though he mentions these risks, Conkois confident in the ability of genetic engineeringtechniques to mitigate the potential risk and createa superior agricultural product. Nowhere doesConko consider that genetically modified foodwill not have a profoundly positive effect on thepopulation crisis.Of the primary benefits that Conko (2003)explains include the ability of biotech crops tobe pest resistant as well as herbicide resistant.The first benefit can be explained rather simply.Because it is possible to engineer crops thatthemselves are toxic to certain insects and pests,we can reduce our dependence on chemicalsmeant to reduce crop damage by pests. Herbicideresistance crops allow farmers to increase their useof herbicides to kill off invasive weeds withoutharming the crop itself. Both of these so-calledbenefits of the biotech harvest certainly have theincredible potential to vastly increase agriculturalproductivity. Conko envisions this increasedproductivity to be not only a fundamentally goodthing, but also entirely necessary if we intend tofeed out expanding population.Similarly, Prakash and Conko (2004) arguethat increases in productivity through geneticallymodified crops are the means by which faminecan be controlled and the population crisiscontrolled. 
In their article, “Technology That WillSave Billions From Starvation,” the authors firstexplain the extent of the population problem in theworld, specifically as it relates to starvation andmalnutrition. Those statistics from their articlewere mentioned in an earlier paragraph and indicatethat malnourishment could affect more than onebillion people within fifteen years. With such apowerfully disturbing image in mind, readers arethen led through a discussion of how GM cropspromise to increase productivity and thus solve thegrowing problem of starvation and famine.Like Conko (2003), Prakash and Conko addresssome of the concerns of critics of GM crops, butthis discussion is limited to critics who believe thatGM food will have negative environmental effectsor else widen the gap between the rich and theE. Giger, R. Prem, M. Leen - Increase of Agricultural Production based on Genetically Modified Food


108School of Doctoral Studies (European Union) JournalJulypoor in the world (2004). The perceived benefitsthat the authors outline for genetically modifiedcrops include increased productivity, increasednutritional value, and even incorporating vaccinesinto some food staples. While these prospectsare intriguing, they fail to address the matter ofhow increasing agricultural productivity will havethe effect of feeding billions and arresting rapidpopulation growth. The authors seem to take itas a matter of fact that human population willcontinue to grow without limits and that the onlycourse for social institutions is to develop newagricultural techniques to feed this population.Prakash and Conko (2004) state that because theworld population is expected to rise to 9 billionby 2050, we must develop the genetic techniquesnow in order to begin producing enough food forall those people who will inevitably be here withina few decades.The overall argument of this essay is thatrestrictive policies in Europe, Africa, and Asiathat limit the extent to which biotech techniquescan be used for food production must berescinded (Prakash and Conko, 2004). Theauthors wholeheartedly believe that geneticallymodified crops represent the only means bywhich we can feed the growing population of theworld. Again, this study does not disagree thatagricultural biotechnology will have the effect ofincreasing agricultural productivity, whatever thehealth and environmental consequences of suchtechniques might be. However, the assumptionthat biotechnology will somehow prevent famineand feed a growing population will be challenged.Nonetheless, among authors who favor the use ofgenetically modified crops this is a common theme,that increased agricultural productivity is thecorrect response to famine and population growth.We shall see that this is, in fact, incorrect.Despite the incorrectness of this theme andgeneral assumption, it is regular to see it expressed,even at the highest levels of global governance.The United Nations’ FAO recently released adocument that endorsed the use of geneticallymodified crops in the world after years of sittingon the political fence on the issue (Paarlberg,2005). In a report that concluded that geneticallymodified crops and agricultural biotechnology canbe good for poor farmers, the institution earnedsome vocal enemies among critics of geneticallymodified food in Europe and Asia, particularlyamong the environmental lobby. Despite thisresistance, the FAO stuck to its conclusions,defending genetically modified crops in certaincircumstances. In Africa, the institution argued,genetically modified crops could be designedthat were pest or even drought resistant. Suchtechnology could vastly improve the stability ofagricultural production in a region of the worldthat is already suffering from significant famineand overpopulation issues.For example, in Africa, about one-third of alladults are undernourished and food output hasbeen declining annually in thirty-one out of fiftythreeAfrican nations (Paarlberg, 2005). Underthose circumstances, the United Nations concludedthat the only viable alternative to correcting thissocial injustice is the use of genetically modifiedcrops and agricultural biotechnology throughoutthe continent. Paarlberg (2005) pointed out thatcritics were up-in-arms in response to the FAOconclusions and raised a number of environmental,health, and social concerns to suggest thatgenetically modified food is not safe and shouldnot be endorsed by the United Nations. 
The anti-genetically modified crowd apparently used all of the standard rhetoric against genetically modified crops, though the criticisms never substantively questioned whether or not genetically modified food could produce the necessary levels of agricultural production.

The United Nations and the FAO responded that there are very significant population pressures in the world today that are only predicted to get significantly worse in the coming decades. Paarlberg (2005) reports that the FAO predicts there will be two billion more people by 2030 and three billion more by 2050. Given these realities, the United Nations concludes that genetically modified crops offer the only real opportunity to increase agricultural productivity and arrest overpopulation pressures, famine, and malnourishment.

Similarly, Kindall and Pimentel (1994) suggest
indirectly that biotechnology is one of the most significant means by which a rapidly growing population can be fed. These authors begin their study with the determination that the world needs to find new ways to increase food production at a pace that will proportionally exceed the worldwide rate of population growth, which stood then at about 1.7% per year. This is the equivalent of a doubling every 40 years, providing for a medium United Nations estimate of 10 billion people on the planet by 2050. Given such statistics and predicted growth, the authors concluded that we must develop new agricultural technologies capable of doubling production of food supplies at a similar rate or else face widespread famine, disease, and death.

This study is an important demonstration of the central Malthusian assumption that we must find new ways to produce more food for a population that is growing faster than production. It asks the question, "How are we going to feed all of these people?" Instead, it should ask, "How are we going to stop producing so many people?" This is the issue at the core of this study and one that resonates in several other works (Kindall and Pimentel, 1994; Hopfenberg and Pimentel, 2001; Quinn, 1996). Interestingly, the authors of this study (Kindall and Pimentel, 1994) look to genetic engineering as one possible and important means by which agricultural production can be increased in order to feed a growing world. While the authors do suggest other alternatives, they conclude that a combination of these methods, including genetic engineering, will likely have the best chance of increasing agricultural production to meet the population growth that they perceive to be surging ahead of agricultural production.

A final report on the issue (Global outlook, 2004) comes to similar conclusions, though it is more guarded in its final analysis. This report points out that conventional agricultural techniques are well past the point of diminishing returns and cannot be significantly improved upon. Accordingly, the perceived need for more food to supply a growing population demands that agriculturalists find other means to significantly increase production. The report also identifies a disturbing trend indicating that cultivatable land per capita has been decreasing throughout the Twentieth Century and will only fall further within the next fifty years. In 1966 cultivatable land per capita was 0.45 hectare. By 1998 that amount had diminished to 0.25 hectare. Predictions suggest that it will further plummet to 0.15 hectare by 2050 (Global outlook, 2004). Given this fact, it would appear that increasing agricultural production will be a matter of intensifying yields on increasingly smaller portions of cultivatable land instead of simply putting more marginal lands under the plow.

This is the conclusion of this report, which nonetheless takes a moderate approach to the issue. It concludes that agricultural biotechnology must be combined with conventional agricultural techniques, improved distribution systems, and a focus on population growth controls (Global outlook, 2004). Though cautious, this report is nonetheless in favor of an expanded role for genetically modified technology in agriculture.
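The growth figures cited above can be checked with a short back-of-the-envelope calculation; the numbers below follow directly from the rates quoted in this section and are illustrative only. At a constant 1.7% annual growth rate the doubling time is

\[
t_{2} = \frac{\ln 2}{\ln(1.017)} \approx 41 \text{ years},
\]

which matches the roughly forty-year doubling Kindall and Pimentel (1994) describe. Likewise, the decline in cultivatable land per capita reported in the Global outlook (2004) figures amounts to a drop of about 44% between 1966 and 1998 (from 0.45 to 0.25 hectare) and a projected further drop of 40% by 2050 (from 0.25 to 0.15 hectare), which is why the report treats intensification on existing land, rather than expansion, as the only remaining margin.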
Of course, that caution rests on a misplaced faith that the distribution of food can improve, that viable population controls exist outside of the biological environment, and that increased production will not result in increased population growth. As the next group of authors makes clear, none of these assumptions can or should be taken for granted.

INTENSIFICATION OF PRODUCTION IS UNFAVORABLE

Though it might seem self-evident that the surest means by which to limit the deleterious effects of famine is by increasing the supply of available food (namely through intensification), there are a number of studies and critics that challenge this basic ideological assumption. Indicative of such arguments is a critique of the cornucopian assumption forwarded by many of the studies presented in the previous section. Grant (1993) explains the extent of the cornucopian ideal. She explains that such critics and analysts presume that continued growth is always good and that population systems can be thought of in terms of economics. Cornucopian analyses, thus, presume
110School of Doctoral Studies (European Union) JournalJulyto apply market-type analysis to the question offood supply, famine, and population growth. Thissection demonstrates that there are a numberof critics who understand that such analyses arehighly erroneous.In a particular telling analysis, Durham andFandrem (1988) explain the difference betweensurpluses in economics and those relating to theequal distribution of food. They further arguethat the application of the former to the latter ishighly dangerous because it gives people the sensethat available food is there for the taking but issimply unable to reach those who are in need orelse that the solution is to produce more surpluses.However, in economics a surplus simply meansthat buyers of a product cannot exhaust the supplyof goods at any price that is still acceptable to theseller. In the case of commodities such as cerealcrops, this means that some nations become netexporters of grains. National buyers are unableto exhaust the supply at the listed price andconsequently, the product is sold to buyers on theinternational market.The seeming paradox that develops out ofthese circumstances is that the surplus onlyexists from the perspective of the supplier. Somebuyers, unable to meet the lowest acceptableprice, face shortages simply because they cannotafford the going rate (Durham and Fandrum,1988). In this sense, which is the market reality,achieving a surplus of food is meaningless interms of practically reducing hunger and faminein the world. Expanded production might lowerthe market price by increasing supply, but thatdoes not mean that the lowered price will bewithin reach of the poorest people in the world, ademographic that is usually the one facing famineand malnourishment. Durham and Fandrum (1988)explain that food surplus is a market term thatonly carries significant meaning as a descriptionof supply in supply-demand equations. In suchequations, demand is only effective when it isjoined with buying power. Thus, demand for morefood from the malnourished and starving billionsof the world has no effect on distribution until thatgroup is associated with the power to purchase.Worse, providing famine aid to such groups isnot necessarily helpful. While aid would bypassthe usual market issues associated with buyingpower, aid produces other, arguably worse,conditions. Providing aid to the starving doesnothing to help them create sustainable foodproduction in their own local situation. Too, suchaid facilitates a dependence upon First Worldsuppliers. Finally, aid can encourage suddenpopulation growth spurts that make the starvationproblem all the more pressing for the next, larger,generation (Durham and Fandrum, 1988; Fletcher,1991). When populations exceed the carryingcapacity of their own lands and environment,famine is the inevitable result. Aid, while morallyjustified to many, only makes the matter worse byproviding just enough food for the starving to livelong enough to reproduce and expand the problemin subsequent years.Some critics (Fletcher, 1991) have made drasticsuggestions that food aid should not be providedunless it comes with the assurance of thosepeople who are receiving aid that they will alsouse contraception to control their numbers. Sucha suggestion inevitably leaves a bad taste in themouths of many who find it morally reprehensiblethat anyone should be left to starve in the ThirdWorld when there exists such an abundanceof food supplies in the First World. 
However, Fletcher (1991) points out that food aid has great potential to be counterproductive. Rather than improve famine conditions, it can increase dependence and make the population pressures in that region more pressing. The thrust of Fletcher's argument is sound; however, the suggestion that contraception would control population in the face of increases in the food supply still perceives the human population as an independent variable separate from environmental inputs.

Other existing literature on the matter of agricultural production and population growth is more careful about demonstrating the connection between increases in food availability and population growth. This idea seems contrary to the concept of free will and is bothersome to a great number of people. Contraception is forwarded as a useful solution to population growth precisely because human societies tend to
2009 Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand111put great stock in people’s ability to choose butalso believe that people are not part of the largerbiological community. Biological organismsreproduce according to the availability of foodresources. When animals are confronted withlimits to their food supply, so is their populationlimited by that availability. Too, when increasesare present, no one is surprised to find that theorganism population has increased. However, forwhatever reason, when we increase the amount ofavailable food for human beings in the world everyyear through intensification, somehow everyoneis surprised that the population continues to swelland people continue to starve.For example, Quinn (1996) reinforces thispoint. He explains simply that food availabilityand population growth or decline are entirelylinked in biological species. More food results ingrowth while less food results in decline. Withhuman societies, the result is a positive feedbackloop that actually generates the populationexplosion. Production of a food surplus throughagricultural intensification results in more people.But since there are now more people, the programof intensification must be stepped up to producemore food. When that happens the inevitable resultis more people. This occurs because industrializedagriculture permits societies to nullify the negativefeedback loop built into biological environments.The push of agriculture is always to control theavailability of food and increase it. Being able toproduce more food at will means that environmentallimits on maximum population growth for humanbeings have been erased (Quinn, 1996). Growthcan proceed seemingly without end.Quinn (1996) points out that Malthus warnedagainst the inevitable failure of agriculture tokeep up with population. Quinn, on the otherhand, warns against the continued success ofagriculture. Viewed through this lens, populationgrowth is the result of the successful applicationof agriculture and its tendency to produce morepeople by producing more food year after year.According to this logic, we cannot stop increasingfood production and hope to still have populationgrowth. The latter flows from the former, not theother way around. Worse, increasing agriculturalproduction will make it impossible to ever achievepopulation stability. As long as more food is beingproduced year after year, human populations willcontinue to grow year after year.This analysis might seem counterintuitive tosome at first brush. To most people the immediateresponse to famine or a growing population is thatmore food needs to be produced. After all, if morepeople are being born every year and more peopleare starving every year, then it seems to reasonthat the response should be to produce more food.There obviously isn’t enough it. However, biologyand the nature of population growth simply doesnot agree with this commonsense approach to thesituation. Hopfenberg and Pimentel (2001) agreethat human population growth is a function ofavailable food supply. When there is less food,the population will fall. When there is more,it will rise. This is the conclusion of this studyexamining the relationship between agriculturalproduction, famine, and population growth.Hopfenberg and Pimentel (2001) like Quinn(1996) agree that human population growthshould not be viewed as separate and unique fromsupplies of food. 
If the population is growing everyyear, then it is because more food is being madeavailable to the entire population. Whether thatgrowth occurs in the First World or the Third Worldis an unnecessary and ridiculous distinction. Thefact is that when more food is produced every yearthe result is that more people are born every year.That population growth thus fuels the perceptionthat what is needed is further intensification ofproduction. However, doing so only results ineven greater increases in population. Thus, thesereports indicate that policy-makers should be verywary of the promises of the biotech harvest toincrease usable agricultural production by 5-25%(Conko, 2003). Such a development could fuel aneven greater population crisis.Though not entirely applicable to this study,Manning’s (2003) Against the Grain: HowAgriculture Has Hijacked Civilization containssome interesting historical facts that seem tocorroborate this assessment. In a lengthydiscussion of the history of famines, Manningexplains that famines and hunger have beenE. Giger, R. Prem, M. Leen - Increase of Agricultural Production based on Genetically Modified Food


2009 Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand113DATA GATHERING METHODData was gathered through extensive researchinto the subject, with specific attention paid to thoseresearchers who had previously compiled raw dataand information on the subjects of agriculturalproduction and population growth. Sources ofraw data were limited largely to those that wereresearched and published within the past decade,although this timeframe functioned as a usefulguideline instead of a dogmatic principle. Bylimiting the range of the search in such a fashion,information could be limited according to someof the more current trends and concepts in thefields being examined. Additionally, since geneticengineering only became extensive in agriculturalproduction since the early 1990s (Evans, 1998;Pringle, 2003), it was equally prudent to limitresearch to that timeframe.All of the primary data for this study wasgleaned from other scholarly, peer-reviewedstudies. In making this distinction, informationwas not included that might be construed as popularor uncorroborated. Because of the peer-reviewednature of the majority of the data sources, a degreeof credibility was added to the data gathering.Additionally, many of these studies—particularlythose dealing with population growth andagricultural production—were used to support theconclusions drawn from the hypothesis. Studieswere examined and included for the ability toadd statistical data to the research as well as topotentially provide a theoretical basis for theconclusions that would be drawn from that data.DATABASE OF STUDYThe database of the study includes historicaldata regarding population growth of humanbeings as well as statistical data that relates thatpopulation growth with agricultural production.Also included in the overall database wasinformation about the nature of agriculturalbiotechnology and genetically modified food.This information was employed in order todemonstrate exactly how genetically modified foodcould be considered a significant intensificationof agricultural production. Statistics that detailagricultural production rates were included aswell as information about predicted populationgrowth, how much agricultural production wouldbe needed to feed that growing population, andthe ability of genetically modified crops to fillin the production gap represented by predictedpopulation growth.This database of study was chosen and limitedin this way because of the basic hypothesisof the study, which is to examine the claim ofagricultural biotechnologists that geneticallymodified food will be able to feed a growingpopulation and even reduce famine, starvation,and malnutrition in the future. The argument isthat agricultural biotechnology will not only beunable to accomplish these claims, but that, infact, it will have the opposite effect and makethe existing problems much worse than theyalready are. In order to prove this is the case itis necessary to examine the relationship betweenagricultural production and population growth.Data was examined and included because of itsability to demonstrate that increases in agriculturalproduction actually precede spurts of populationgrowth.VALIDITY OF DATAThe validity of the data is significant. Datawas drawn from scholarly and historical sources.Information was included that could demonstratehistorical and statistical accuracy. 
In the case of finding valid data linking population growth and agricultural production, sources are consistent and relatively non-controversial. What is controversial about the data being drawn on is not its validity but rather the interpretation it supports: that increases in production do not simply follow increases in population but precede them. The data that has been employed in this study should be considered valid, historically accurate, and statistically significant.

ORIGINALITY AND LIMITATIONS

Complete originality in this study cannot be claimed. Other researchers have probed
114School of Doctoral Studies (European Union) JournalJulythe connection between population growth andagricultural production. Some, if not a majority,have concluded that intensification of agriculturalproduction to feed a growing population actuallyresults in further increases in population. Thus,the hypothesis is not wholly original but nor isit without strong critical company. This studyis, however, the first study of which the authoris aware that specifically connects agriculturalintensification via genetic modification with rapidpopulation growth and famine. In this study, thedata analysis shows that famine and malnutritionare not solved by agricultural intensificationbut are actually created by it. By accepting thepower of genetically modified crops to intensifyproduction, this study analyzes the effect thatcontinued expansion of agricultural biotechnologywill have on overpopulation and famine over thecourse of the coming decades.The primary claim of genetically modified foodproponents used to justify techniques and practicesthat have not fully been tested is that increases inproductivity through genetic engineering will beable to feed an expanding population (Paarlberg,2005; Conko, 2003; Prakash and Conko, 2004).Some go so far as to claim that no means short ofgenetic engineering new strains of crops will beable to provide enough food for the hundreds ofmillions of people predicted to be undernourishedwithin a few years, let alone the people who arealready suffering under such conditions. Theoriginality of the study lies in its challenge tothis basic assumption about genetically modifiedcrops. The literature on the subject is very quiet onthis matter. Proponents and critics of geneticallymodified crops quibble over environmental, health,and political issues associated with geneticallymodified food. No studies of which the authoris aware directly challenge the basic ideologicalprinciple that is fueling the Gene Revolution. Fewquestion whether or not genetic manipulation of ourcrops will actually be able to reduce the percentageof people in the world who are perpetually hungry(Durham and Fandrum, 1988).Of course, there are limitations to this approach.Because the research and analysis is modeled on ametadata approach, all of the data sources that areemployed in this study have been filtered throughthe research work of others. Some might questionthis and suggest that this hypothesis filtered outcontradictory studies and statistics and thusnegate the conclusions drawn from the analysis.To this one can only say that the data used hasbeen utilized from studies on population growth,the history of agriculture, and texts on the GeneRevolution. Attempts to overcome this potentiallimitation by drawing data from a wider base ofsource materials from more than one discipline inorder to mitigate the limitations of the metadataapproach has been made.Another potentially significant limitation ofthis study is the inability of the research to includeand question some of the requisite assumptionsabout population growth that form the basicjustification for the Gene Revolution. Whether ornot researchers recognize their own complicity inthis ideological reproduction, the fact remains thatresearch into agricultural biotechnology is built onbasic tenets of industrialized agriculture. Theseinclude the importance of generating a surplus ofcrops and in always devising new means by whichproductivity can be enhanced. 
Because it was limited to an examination only of the intensifying role of genetically modified crops and the tendency of that intensification to exacerbate population pressures, it was also beyond the scope of this study to fully demonstrate the inadequacies in the reasoning of those who argue that intensification is necessary and good. It is the hope of this researcher that the validity of the data and the logic of the analysis will overcome this possible limitation.

Additionally and finally, this study does not address the social or political consequences of reliance on genetically modified food. While some have examined the social costs of genetically modified crops in the form of increased Third World dependence on First World corporations, that is a discussion not pertinent to this examination. Likewise, the current political battle between the United States and Europe, the United States and Asia, or the United States and Africa over the safety of genetically modified food is also not pertinent to this discussion. While those are important issues, they have been examined in some depth
2009 Increase of Agricultural Production based on Genetically Modified Food to meet Population Growth Demand115elsewhere. The purpose of this study, again, is tofocus on a relatively unexamined issue. As such,some might criticize the lack of examination ofthe social and political dimensions of geneticallymodified food. Unfortunately, such a discussionwould be lengthy and would not bring readers anycloser to a determination of the effects of geneticallymodified food upon population growth.Data AnalysisIn 2004, the Food and Agriculture Organization(FAO) of the United Nations published a reporttitled “The Ethics of Sustainable AgriculturalIntensification.” The title alone indicates theideological basis for the report, that it is possibleto intensify agricultural production in sustainableways, a possibility that has been undermined bypreviously mentioned studies as well as this report.However, if readers had any doubt that this wasthe stance of the FAO, they need only read the firstsentence of the report to understand the positionof, arguably, the most powerful world authority onagriculture, famine, and hunger. That first sentencestates that since the Agricultural Revolution of theNeolithic period human societies have repeatedlyfound ways to intensify agricultural productionin order to keep pace with growing populations.This is the position of those who favor agriculturalintensification through the advancement of newagricultural technologies such as biotechnology.In other words, the United Nations is taking thestance that in order to feed a growing population,it is necessary to continue to increase agriculturalproduction.The institution is bolstered in this sense by thegeneral assumption that the world’s populationwill stabilize sometime during the Twenty-First century. The FAO (2004) argues moreintensification is required in order to meet theneeds of a human worldwide population that isonly going to continue to grow. In another reportissued by the FAO (2005) the director generalof the institution, Jacques Diouf, suggests thatagriculture is the correct response to hungerand famine. Specifically, development in theagricultural sector of developing nations is seen asthe most effective means by which hunger can bereduced. However, as Manning (2003) explains,famines are the mark of increased agriculturalproduction and have been throughout the courseof written history. In this sense, the institutionalweight of the FAO seems to be shouldered behindglobal agricultural policy that has the effect ofactually increasing hunger and famine in the longterm.Interestingly, the FAO (2005) does providesome useful statistics regarding the state ofhunger and famine in the world, though arguablythe institution has misinterpreted the importanceof these findings. Among the data sets providedin that report is a series of statistics representingthe number of undernourished in the developingworld and how the proportion of those people byregion has changed between 1990-1992 and 2000-2002. Over that time frame, the report indicatesthat undernourishment in Asia and the Pacific fellfrom 20% of the total population of the region to16%. In Latin America and the Caribbean, theproportion fell from 13% to 10%. In the Near Eastand North Africa, undernourishment increasedfrom 8% to 10% of the population, while in sub-Saharan Africa the number of undernourisheddeclined marginally from 36% to 33%. 
In West Africa, a similar decline occurred, from 21% of the population in 1990-1992 to 16% of the population in 2000-2002.

As one might well expect, these statistics are used to demonstrate that the current policy of agricultural intensification is decreasing the extent of malnourishment and hunger in the world. Based on these numbers, the FAO (2005) proposes several scenarios for the coming decades by which hunger and malnutrition can be further reduced. In fact, a cursory examination of this data, without any further analysis, leads most to agree with the findings of the FAO: while hunger still persists, agricultural intensification is improving the situation and will continue to do so until the human population stabilizes sometime during the Twenty-First century. However, using some of the sources from the above literature review as well as other data sets on crop production in the
last twenty years produces significantly different conclusions.

What the FAO doesn't mention in either of these reports, at least not directly, is that cereal yields have been declining on a per capita basis for the past twenty-five years (Pimentel, Huang, Cordova and Pimentel, 1996). Around 80% of the world's food supply is produced in the form of cereal grains such as wheat, maize, and rice. With such a high proportion of the world receiving its nutrition from cereal grains, it is important that we examine some of the rates of crop production in recent years and determine the effect that reductions in cereal production might have on population. The immediate assumption is that declines in cereal yields will lead to vast increases in the amount of hunger, malnourishment, and famine. However, even the FAO data denies this, even as the institution uses that data to argue for increases in agricultural intensification. While per capita grain yields have been decreasing since around 1980, the number of people who are undernourished in the developing world has actually decreased significantly during that same period (FAO, 2004; FAO, 2005; Pimentel, Huang, Cordova and Pimentel, 1996).

Other research has confirmed that since 1980, while there have been increases in yields, in part due to the Gene Revolution, the rate of growth in grain production has been decreasing over that same time period (Kindall and Pimentel, 1994). In fact, the rate at which grain yields have been increasing has itself been decreasing over that time period. In other words, the acceleration of crop yields through intensification has been slowing due to very real physical and biological limits. The rate of increase was 2.2% in the 1970s and 1980s. By the late 1980s and early 1990s the rate of increase had slowed to 1.5%. By the late 1990s, the rate of annual increase in cereal yields had slowed to a low of 1.0% (Conko, 2003). This is effectively half the rate of increase recorded even twenty-five years ago. From this we can deduce that if current trends continue, annual increases in cereal grains will be a thing of the past by 2030 (a simple extrapolation to this effect is sketched below). By 2050, the date at which a number of predictions point to a human population of between 9 and 12 billion, environmental degradation due to agricultural intensification could actually result in decreases in annual cereal yields, probably the first time that will have happened on a global scale since the Agricultural Revolution itself.

In other words, this slowing in the rate of crop yield increases could be partly responsible for the decreases in malnourishment in the developing world, if we consider that the distribution of food resources has improved over the last decade. If that is the case, then this data, combined from the FAO as well as other resources, indicates a positive correlation between food production, population growth rates, and famine around the world. In fact, though feared by many, if the rate of increase in cereal yields continues to fall at the current pace, within a few decades the gradual reduction of available food resources could well slow population growth to the point that stabilization is possible. Since human population is a function of food supply, reductions in the supply, however small or gradual, will have an effect on the growth rate of the human population. The FAO, after all, might be correct in its assumption that the world population will stabilize by the end of the Twenty-First century.
In other words, these reductions in the rate of crop yield increases could be partly responsible for the decreases in malnourishment in the developing world, especially if we consider that the distribution of food resources has improved over the last decade. If that is the case, then this data, combined from the FAO as well as other resources, indicates a positive correlation between food production, population growth rates, and famine around the world. In fact, though feared by many, if the rate of increase in cereal yields continues to fall at the current pace, within a few decades the gradual reduction of available food resources could well slow population growth to the point that stabilization is possible. Since human population is a function of food supply, reductions in the supply, however small or gradual, will have an effect on the growth rate of the human population. The FAO, after all, might be correct in its assumption that the world population will stabilize by the end of the Twenty-First Century. However, if that happens it will not be because of an institutional program of agricultural intensification. In fact, any further intensification will likely result in a sudden ballooning of the world population, which will cause increased famine, increased malnutrition, and increased conflict.

Though there have been decreases in the rate of crop yield increases, it is important to understand that more food is still being produced every year and that this food is the direct cause of the population growth that the world has been experiencing. Consider that from 1980 to 1990 the world population increased from 4.454 billion to 5.279 billion (Brunner, 2000). This represents an 18.5% increase in the number of people living in the world over that period of time. Over the same period, world crop production increased by 25%. The food production index over that period rose by 25.6% and the livestock production index rose by 24.1% (Hopfenberg and Pimentel, 2001).


In other words, crop production increases far outstripped world population growth during the same period, fueled in part by advances in production contributed by biotechnology.

Conko (2003) and the FAO (2004; 2005) would agree that this data indicates that current attempts to intensify agricultural production to keep pace with population have been successful. But this argument does not mesh with the data. The idea that agriculture must somehow keep up with population growth is Malthusian in basis. Malthus argued that population increases geometrically while agriculture can only increase production arithmetically. If that were so, and if famines and malnutrition were really the result of insufficient food resources, then the statistics above present a problem: they indicate that there was more than enough food available for all of the people in the world and that agricultural production managed to outstrip population growth. From a mathematical standpoint, if we accept Malthus's premise, this is simply impossible; a geometric progression will always, in the long run, outpace an arithmetic one. Given that fact, how else can we explain the statistical fact that between 1980 and 1990 world crop production increased by 25% while population growth advanced by only 18%?

Simply, this is evidence that population growth does not spur on agricultural intensification but instead that agricultural intensification is the fuel that provides for explosive population gains. That food production has recently been increasing faster than the population rate should not be a surprise. As more food is made available, more people can be produced from that food. The difference of 7% between the two growth rates over the indicated decade could be the result of distribution problems, political conflict, or economic loss. These are all possibilities. However, it is important to recognize that population growth could not have outstripped the rate of crop production increase. Just as it is impossible to produce wheat without inputs such as water or soil, so is it impossible to produce people without such a basic input as food.
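A quick back-of-the-envelope check of the figures quoted above, offered here only as an illustration of the argument, shows what the 1980-1990 numbers imply for per capita food availability. The population and production values are the ones given in the text; everything else is simple arithmetic.

# Check of the 1980-1990 figures quoted in the text (Brunner, 2000;
# Hopfenberg and Pimentel, 2001).

pop_1980, pop_1990 = 4.454e9, 5.279e9       # world population
crop_growth = 0.25                          # world crop production, +25%

pop_growth = pop_1990 / pop_1980 - 1.0
per_capita_change = (1.0 + crop_growth) / (1.0 + pop_growth) - 1.0

print(f"Population growth 1980-1990:    {pop_growth:.1%}")        # ~18.5%
print(f"Crop production growth:         {crop_growth:.1%}")       # 25.0%
print(f"Implied per capita food change: {per_capita_change:+.1%}")  # ~+5.5%

On these numbers, per capita food availability rose by roughly 5-6% over the decade, which is exactly the pattern the text argues should be impossible under a strict Malthusian premise in which population always presses against the food supply.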
This is the point in the analysis at which we must consider the role that genetic engineering is to play in agricultural intensification in the coming years. Conko (2003) believes that the only way to change the course of falling grain yields is through biotechnology. He points out, alarmingly, that within fifty years the human population will increase by at least three billion people. He argues that if that is the case, and the current trend of decreasing yields continues, then there simply will not be enough food to go around and famine will become a way of life for countless billions of people, primarily in Third World countries. Conko (2003) and many others already mentioned in this study firmly believe that biotechnology is the only current methodology available that has the potential to increase crop yields significantly in the next few decades. He points out that increases of 5-25% in production are not impossible even if only a single new trait is inserted into a single crop species.

This is an extraordinary claim that should be considered with the utmost seriousness. In drawing a parallel between decreases in grain yields and decreases in worldwide malnourishment, under the thesis that human population is a function of food production, the possibility of new increases in production should be considered a grave danger to efforts to control population growth and reduce famine in the world. If more food is suddenly made available through biotechnological means, as Conko and others would have it, then it can be rightly assumed that there will be a consequent jump in the population. In other words, using biotechnology as a means to prepare for three or more billion additional people over the next fifty years is part of a terrible self-fulfilling prophecy. The technical gains in agricultural production designed to feed that predicted population will actually produce the very increase that the intensification was designed to account for. This can be a hard concept to wrap one's brain around; however, the evidence from the animal kingdom as well as human correlational data supports this conclusion (Hopfenberg and Pimentel, 2001).


Take, as an example, some more evidence regarding population growth and crop production. Between 1961 and 1993, Conko (2003) reports, the world population grew by 80% and the amount of cropland used for agriculture increased by only 8%, but per capita food supplies actually increased. This development points to one of the hallmarks of the Green Revolution that started in the 1960s and continues with the latest Gene Revolution: intensification of production to produce higher yields on smaller amounts of land. Part of this push has been in response to decreases in available land for agricultural use, down significantly over that same period. By 1996, only 0.27 hectares of agricultural land existed per capita. This is about half of the 0.5 hectares considered minimal for producing a diet similar to those enjoyed in First World nations (Pimentel, Huang, Cordova and Pimentel, 1996).

In that respect, the agricultural intensification program in the latter half of the Twentieth Century that led directly to developments in agricultural biotechnology by the late 1980s was a resounding success. Now, more than at any other time in the past, more food can be grown on a smaller area of land. But at the dawn of the Twenty-First Century the physical and biological limits of the Green Revolution are being reached. Quite simply, the Earth itself is finite. There is only so much land to go around. As more and more of it is turned over to industrial agriculture or other degrading practices, subsequent years are faced with a shortage of usable agricultural land (Pimentel, Huang, Cordova and Pimentel, 1996). The end result is that for agricultural intensification to continue at all, more drastic means must be exploited to produce the kind of gains in crop production that are perceived to be required.

Genetic engineering is the presumed answer to this challenge. Recombinant DNA techniques allow researchers to insert very specific genes from one plant or animal into the developing genetic code of another plant or animal. Though this is a complex process, it has been developed, tested, and enjoys a high success rate at this time. Researchers have already been able to develop several varieties of food and other consumer crops that are designed to have favorable traits that never developed through the much longer and random process of natural selection. Genetic engineering permits us to look forward to the possibility of increased crop yields or increased nutritional value of existing crops because improved traits can be expressed in staple crops. Soybeans that are resistant to herbicides are now commonplace, and the result has been increased soybean yields. It is also possible to design plants that are resistant to certain kinds of pests because they release their own toxins that were otherwise absent in that particular plant, as in genetically modified potatoes. The end result has been the recovery of portions of potato yields that were previously lost to pests.

The future of genetic engineering promises similar increases in yields. In addition to plants that are herbicide or pest resistant, plants can be designed with any number of other genetic traits gleaned from other plants or animals. Drought-resistant crops for desert regions could be produced. Crops that are more resistant to cold or heat could open up huge swaths of cropland outside of the current temperate belt of agricultural activity. Improvements to crops are being contemplated that would make harvest by mechanical and industrial means easier and result in lower losses of crops to crushing or damage during harvest, storage, or transportation (Pringle, 2003).
In effect, genetic engineering promises to increase crop yields significantly by improving the ability of crops to resist certain kinds of damage that have been considered, in the past, inevitable losses in industrial agriculture.

If such increases in crop yields are produced, even for a short time, it is inevitable that the human population will increase accordingly as a result of increases in the available food supply. In other words, intensification of agricultural production via genetic engineering will ultimately have a negative effect on society's ability to control population growth and mitigate the negative effects of famine and malnutrition. Kindall and Pimentel (1994) estimated that in the year of their study food production was sufficient to feed 7 billion people an all-vegetarian diet. The total population at that time was only 5.5 billion.


Poor distribution was cited as one of the major reasons why so many people were still starving, though even this conclusion missed one of the more significant points that could have been drawn from this particular bit of data. In 1994, agricultural intensification had created a potential surplus of food that could have fed more than 1.5 billion more people than were then alive. Continued intensification, largely through genetic means, has proceeded since then. The result has been that the population has repeatedly swelled to meet the available food resources. Though this reverses the traditional perception of the relationship between food and population, it is nonetheless supported by the available data as well as by the more critical studies regarding population growth, agriculture, and genetically modified foods.

From this it is evident that the primary claim of agricultural biotechnology, that it will provide more food to feed a growing world, is fundamentally flawed. It is true that developments in genetic engineering have made it possible to engineer new types of crops that are genetically enhanced, and that those enhancements have led to, and have great potential to produce, significant increases in agricultural production. Genetic engineering has the powerful potential to be one of the greatest agricultural technologies ever for its ability to substantively change the level of control that agriculturalists have over plants and their ability to intensify production of those plants. But, as we have seen from the data already presented and the significant literature review to that end, the end result of increases in agricultural productivity is not decreases in hunger, famine, or population growth. In fact, increases in agricultural productivity and the availability of food resources are demonstrably responsible for exactly the opposite effects.

What's more, some of the data presented herein indicates that decreasing crop yields may be inevitable unless genetically modified foods overcome their stigma in the developing world and Europe. If that stigma is overcome, then the world should expect to see significant increases in production in the next few decades. However, if that does not happen, what will be the result? Would the state of hunger and famine in the world really be all that bad if genetically modified foods failed to become accepted in the mainstream and agricultural production began to decrease? Though many would immediately cry out in dismay at such a possibility, the statistical fact of the matter is that reductions in agricultural productivity, if managed constructively and over a length of time, would not result in significant or even noticeable declines in nutrition or increases in hunger and famine.

Consider the following thought experiment taken from Hopfenberg and Pimentel (2001). Picture a world of only one thousand human beings that experiences an annual growth rate of a modest 1.4%. Further, imagine that agricultural production on that world is held constant at a level high enough to provide every one of those thousand individuals with a diet of 3,000 calories per day. If production is held steady and the rate of population growth remains constant, then the results are relatively easy to predict and should be quite surprising. Most would assume that famine and hunger would quickly ensue as the population of the planet careened out of control and agricultural production failed to keep up with the growth.
In fact, this is not the case. After the first year at that growth rate, the population has increased to 1,014 people and the total number of calories that each of those individuals can eat per day has fallen marginally to 2,959. After the second year of the experiment, the population has grown to 1,028 and the number of calories per capita per day has dropped again, to 2,918. The population growth continues at this pace, continuing to erode the number of available daily calories. After three years the per capita availability has dropped to 2,879 calories per day; another year and it has fallen to 2,838 calories per day. In all, after four years of running this experiment, each individual experiences a decline in caloric intake of only 162 calories, or 5.4% of the original 3,000. After nine years of running the experiment, the number of available calories per capita has dropped only to 2,648 per day, for a total loss at that point of 353 calories, or 11.7% of the original 3,000 calories per capita per day.
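The arithmetic behind this thought experiment is easy to reproduce. The sketch below simply compounds the population at 1.4% per year against a fixed total calorie supply; it is an illustration of the Hopfenberg and Pimentel scenario as summarized here, not code taken from their paper.

# Reproduce the constant-production thought experiment: population grows at
# 1.4% per year while total calorie production stays fixed.
POP0 = 1_000                    # initial population
CAL_PER_PERSON = 3_000          # calories per person per day at the start
GROWTH = 0.014                  # annual population growth rate
TOTAL_CALORIES = POP0 * CAL_PER_PERSON   # held constant throughout

population = float(POP0)
for year in range(1, 10):
    population *= 1.0 + GROWTH
    per_capita = TOTAL_CALORIES / population
    loss_pct = (CAL_PER_PERSON - per_capita) / CAL_PER_PERSON
    print(f"Year {year}: population {population:7.0f}, "
          f"calories per capita {per_capita:6.0f} ({loss_pct:.1%} below 3,000)")

Small rounding differences aside, the output matches the figures quoted above: a 1.4% growth rate pressing against a fixed food supply erodes per capita intake by only a few percent over several years rather than collapsing it.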


The point of this thought experiment is to demonstrate that holding agricultural production at a stable level will not result in a dramatic decrease in the nutritional content of people's daily caloric intake. Granted, most people outside of some First World nations do not enjoy daily intakes of 3,000 calories. But the point of the experiment is that even with an increasing population, the total reduction in available food resources is subtle and spread across the entire system, so that no one group or person loses more than any other. This is, naturally, an idealized experiment. In the real world, holding agricultural production steady would likely drive up market prices and leave a larger proportion of the world's poor unable to afford the food they need to survive. While this may or may not be the case, the political ramifications of such a decision are beyond the scope of this study. Instead, it is important to note that this hold on agricultural production would, over the course of time, stabilize population growth. Contraception would not be required. Social development programs would not be required. Massive campaigns of foreign food aid would not be required. All that would be necessary is to stop increasing the amount of food that is available to the world and allow more subtle social, individual, and reproductive-biological mechanisms to exert their own influence to reduce the population (Hopfenberg and Pimentel, 2001).

A large proportion of the world views food supply and agricultural production in terms of economics. In such terms, the finite realities of environmental and biological constraints are ignored (Pimentel, Huang, Cordova and Pimentel, 1996). The question posed by policy-makers has simply been how we are going to find new technologies to feed all of these people. In these terms, agricultural production and food supplies can be increased without end. But this is not the reality of the world, as evidenced by nearly every statistical resource on agricultural production during the Twentieth Century. Quite simply, our civilization has come to the point at which further increases in agricultural intensification are not going to be possible. Genetic engineering offers the potential for human societies to extend that limit a little further, so that more food can be created yearly and more people can be born. However, eventually even the limits of genetic engineering will be reached and no more significant gains in crop productivity will be possible. The optimists hope either that by that time new technologies will have presented themselves or else that the population will somehow stabilize on its own. But there is no reason to believe that either of these eventualities will materialize. The question that policy-makers should ask themselves is not how to feed all these people, but instead how to stop producing more people.

From such a position, Hopfenberg and Pimentel (2001) suggest that there are two ways by which the human population can be stabilized. One is to halt the intensification of agricultural production. This would mean a moratorium on the development of any genetic engineering technology that would result in increases in agricultural production. The second is to simply continue to increase production, as has been the case since the Agricultural Revolution, and allow the system to correct for an overabundance of people through famine, conflict, and disease. Simply put, human societies exist well beyond the carrying capacity of the land, save in a few regions of the world.
Further significant population increases will be countered by drastic reductions through such deleterious means unless the institutional program of agricultural intensification is brought to an end.

Summary, Conclusions, and Recommendations

In summarizing the findings of this study, it is first important to recall the critical position from which this discussion was launched and how this study has worked to alter readers' perceptions of the value of genetically modified food. In doing so, this study has developed a thesis that, in the end, genetically modified food will prove detrimental to the long-term goals of sustainability, famine relief, and population control on a global scale. Other studies exist, en masse, that examine the potentially negative health effects of genetically modified food. A whole host of others question the environmental havoc that genetically modified foods can, and probably will, cause.


Some even probe the ethics of allowing as powerful an agricultural tool as genetically modified foods to remain in the hands of a few powerful transnational corporations. In the effort to contribute to the discussion on the potential pitfalls of the Gene Revolution, and of genetically modified foods in particular, this study instead turned its gaze towards the practical effects that biotechnology is likely to have on agricultural production and the consequences of that effect for such major global issues as overpopulation and famine.

In order to fully examine that relationship, it was necessary to critically examine literature, statistics, and historical examples that might shed some light on the relationship that exists between food production and population growth. Additionally, studies were consulted that spoke to the capacity of genetically modified foods to increase agricultural production. In all, the range of information required for this study was significant and at times may have appeared to stray beyond the limited scope of genetically modified food. However, in order to demonstrate the manner by which genetically modified food would have its greatest negative impact upon human societies, it was essential to take a broader look at the role that genetically modified foods have played in the intense push to intensify agricultural production year after year in order to presumably keep up with geometric population growth by always generating more food than is needed.

More important than discussions of whether or not genetically modified foods are healthy, or whether or not they will harm the environment, or whether or not corporate greed should drive development, is the matter of whether or not genetically modified foods are a solution at all, in an agricultural sense. Surely, from the agriculturalists' point of view genetically modified foods might be just the thing they have been looking for to improve crop yields that haven't been increasing at quite the robust rate they were just a few decades ago. After all, isn't the entire point of agriculture to increase production year after year? With that premise as its modus operandi, industrialized agriculture has leapt upon the potential that genetic engineering provides for improving crop yields and helping the agricultural sector improve its yields and profits by decreasing production costs and increasing outputs in the form of food resources. Even a cursory examination of the literature on the subject revealed that none had substantively addressed the question of whether the increases in agricultural production that genetically modified foods promise are fundamentally a "good" thing. The purpose of this study has been to examine the extent to which agricultural production, specifically genetic engineering and its capacity to improve crop yields, is beneficial to efforts to reduce famine and control pressures from overpopulation.

Though there is contention on this point, the bulk of the literature presented here, as well as the data that was analyzed, seems to indicate that intensification of agricultural production will not be able to accomplish either of these goals. In fact, and much worse, agricultural intensification will have the ultimate effect of making these problems worse. What was revealed in this study is that the traditional conception of the relationship between food resources and population growth has been backwards ever since Malthus penned his famous essay on the subject.
The assumption has been that population proceeds independently of food production and that it advances at a geometric pace, while production can only ever be increased arithmetically. Thus, for some time, industrialized agriculture has taken this conclusion as justification for a continued program of increased agricultural intensification aimed at perpetually increasing crop yields to stay just ahead of population growth.

In reality, population growth is not an independent variable that advances of its own accord without regard to environmental and biological inputs. It is commonsensical that restrictions on natural resources, including food supplies, will have a limiting effect on growth. Population growth can be mitigated by a lack of the resources necessary for growth, namely food. What few have been willing to admit in the meantime is that increases in those same resources can be the impetus for increases in growth. Providing increased food resources to a given human population group will have the effect of spurring on population growth.


Biological organisms have an innate tendency to transform as much of the biosphere into their own species as possible. Human beings cannot be considered an exception to this living principle. However, the reason that most species do not multiply and overrun the planet in the fashion that human beings have done is simply that they are limited by the one resource that makes continued growth possible: food. Limitations in the food supply are ultimately limitations on a species' ability to procreate and spread throughout the biosphere.

Human beings have cleverly discovered a means by which this limitation can be sidestepped, at least for a while. That means is agriculture. Agriculture permits human beings to create as much food as they want, when they want and, to some degree, wherever they want. The result has been steady increases in food production and available food resources over the course of the past 10,000 years that have produced a human population that now covers the Earth and stands at well more than 6 billion individuals. Agriculture has permitted the human species to continually raise the carrying capacity of the land by increasing the number of people that can be supported on smaller and smaller amounts of land. Whereas at one time people had to roam large areas as hunters and gatherers to amass enough food to live, now food can be created centrally and in significantly large amounts to feed larger and larger populations. The surplus of food resources that agriculture makes possible has been converted into human mass for thousands of years at an increasing rate. More food has meant more people, which has spurred the need for more food, which then results in more people. This cycle has continued unabated for 500 generations and has brought the human species to a moment in history when some regard genetic engineering as the only means by which further intensification of agricultural production can be possible.

In summarizing the research arc of this study, it is apparent what conclusions can be drawn from the work that was done herein. Agricultural intensification does not lead to the alleviation of overpopulation, famine, or environmental damage. In fact, agricultural intensification can be blamed, at least in part, for all of those problems and likely several more. Agriculture, in short, and its continued efforts to increase the production of food crops, has contributed to the conditions by which more people than can be supported through subsistence now live on the planet. Famine has been the result of this eventuality. Agriculture has fueled rapid population growth in all parts of the world, causing significant environmental damage, social conflict, and more famine. Increasing agricultural production is not a solution to existing problems; rather, it is the cause of them.

With this conclusion in mind, where can we place genetic engineering and genetically modified foods? The effort of geneticists to create new kinds of crops has been prompted largely by the erroneous assumption that the surest way to combat famine is with more food. This conclusion has been shown to be incorrect. More food, paradoxically, will actually worsen the conditions of famine and overpopulation in the long term.
Genetically modified foods represent the current epitome of agricultural intensification because they offer agriculturalists the opportunity to create new kinds of staple crops that are even more productive and intensive than existing varieties developed over centuries through selective breeding. Genetic engineering permits new kinds of crops to be developed without a clear sense of the consequences that that development will have. Genetically modified foods are an agricultural product, one that represents the highest degree of technical sophistication and agricultural intensification.

Consequently, it is the determination of this study and its author that genetically modified foods should be considered highly dangerous and counterproductive to any serious efforts to reduce famine and control the overpopulation problem facing the world. Genetically modified foods are a form of agricultural intensification. As such, they can be demonstrated to have the effect of worsening the very problems that they purport to be able to rectify.


From a public-policy standpoint, the surest means by which overpopulation can be controlled and the effects of famine minimized is by first ending the program of increases in agricultural production and holding production at current levels. Following that, decreases in production could actually be made over time that would have the effect of gradually reducing the world population and further lessening the effects of famine and overpopulation. However, since such a major policy decision is unlikely to occur, at the very least further practical research into genetically modified foods should be halted until the full effects of agricultural intensification can be documented and demonstrated. When that task has been accomplished, it will become very evident that agricultural intensification through any means, genetic engineering included, should be eschewed.

It is the recommendation of this study, based on these results and the conclusions drawn from the existing data, that further research be done in this specific field. Specifically, the quantitative effects of genetic engineering on crop production and yields should be compiled and analyzed to demonstrate without question the contribution that genetic engineering has made towards agricultural production since becoming more commonplace in agriculture in the late 1980s. With such data in hand, future research could be designed around the thesis that intensification of agricultural production to feed a growing population will result in further increases in population. That thesis could guide a comparative analysis of the contributions to crop yields that were made through genetic engineering as correlated with growth in population over the same time period. Region-specific data could also be gathered in order to show how genetic engineering has had greater or lesser effects in different parts of the world. In fact, some regions resist the introduction of genetically modified foods altogether. An interesting research question would be to determine whether or not those areas have experienced the same kind of population growth as regions that have accepted genetically modified foods.

It should be clear that there is much potential for research into this subject. Recommendations for further research are plentiful and could serve as a significant basis for larger examinations into the relationship between agricultural production and intensification and population growth over the history of human civilization. Nonetheless, the conclusions of this study alone are indicative of the results that could be expected from those studies. Increases in the availability of food resources will result in increases in population somewhere within the system. Decreases will result in a diminishing population. Genetic engineering and genetically modified foods represent the most recent form of agricultural intensification. Thus, their contribution to problems such as overpopulation and famine is very real. For that reason above all others, genetically modified foods should be challenged and removed from use in practical settings.

References

Brunner, B. (2000). Time Almanac 2001. Boston, MA: Family Education Company.

Conko, G. (2003, Spring). The benefits of biotech: as the world's population grows, environmental stewardship will require science to find ways to produce more food on less land. Regulation, 26(1), pp. 20-25.

Durham, D.F. and Fandrem, J.C. (1988, Winter). The food "surplus": a staple illusion of economics; a cruel illusion for populations. Population and Environment, 10(2). Retrieved February 15, 2006, from http://dieoff.org/page115.htm

Evans, L.T. (1998). Feeding the Ten Billion: Plants and Population Growth. Cambridge, UK: Cambridge University Press.
FAO. (2004). The ethics of sustainable agricultural intensification. FAO Ethics Series. Rome, Italy: FAO.

FAO. (2005). The state of food insecurity in the world. Rome, Italy: FAO.


Fletcher, J. (1991, Spring). Chronic famine and the immorality of food aid: a bow to Garrett Hardin. Population and Environment, 12(3). Retrieved February 15, 2006, from http://dieoff.org/page91.htm

Global outlook for agricultural biotech. (2004). APBN, 8(18), p. 1021.

Grant, L. (1993). The cornucopian fallacies. FOCUS, 3(2). Retrieved February 15, 2006, from http://dieoff.org/page45.htm

Hopfenberg, R. and Pimentel, D. (2001). Human population numbers as a function of food supply. Environment, Development and Sustainability, 3, pp. 1-15.

Kanoute, A. (2003, August 4-11). GM 'assistance' for Africa. Nation, 277(4), pp. 7-8.

Kindall, H.W. and Pimentel, D. (1994, May). Constraints on the expansion of the global food supply. Ambio, 23(3). Retrieved February 15, 2006, from http://dieoff.org/page36.htm

Manning, R. (2003). Against the Grain: How Agriculture Has Hijacked Civilization. New York: North Point Press.

Paarlberg, R. (2005, January/February). From the Green Revolution to the Gene Revolution. Environment, 47(1), pp. 38-40.

Pimentel, D., Huang, X., Cordova, A., and Pimentel, M. (1996, February 9). Impact of population growth on food supplies and environment. AAAS Annual Meeting, Baltimore, MD. Retrieved February 15, 2006, from http://dieoff.org/page57.htm

Prakash, C.S. and Conko, G. (2004, March 1). Technology that will save billions from starvation. The American Enterprise, pp. 16-20.

Pringle, P. (2003). Food, Inc.: Mendel to Monsanto—The Promises and Perils of the Biotech Harvest. New York: Simon & Schuster.

Quinn, D. (1996). Population: a systems approach. In The Story of B (pp. 287-306). New York: Bantam Books.

What are genetically modified (GM) foods? (2006). How Stuff Works. Retrieved February 24, 2006, from http://science.howstuffworks.com/question148.htm


Corrosion in Concrete Bridge Girders

Walter Unterweger (MSc)
Master of Science and PhD candidate in Civil Engineering at the School of Doctoral Studies, Isles Internationale Université (European Union)

Professor Kurt Nigge (PhD)
Chair of Civil Engineering of the Department of Engineering and Technology at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

A critical examination concerning the problem of corrosion in concrete bridge girders, with recommendations to resolve the issue.

This paper provides a strong engineering and safety background into the problems associated with corrosion and bridges. The procedure used in this paper is presented through careful examination of the existing literature. Some of the literature may be a few years old, but the past is prologue; what went before is as relevant as what is going on today. Other literature presented is quite contemporary, and all of the materials presented in this paper are relevant. Especially relevant are studies that were conducted six, seven, eight and more years ago, compared and contrasted with what engineers and scientists are saying in the latest bulletins and research documents. For example, the American Association of State Highway and Transportation Officials (AASHTO) offered standard specifications for highway bridges in the 1990s that seem to be practical and yet have clearly not provided a workable solution to the ongoing problems of corrosion. In the article titled "Reliability of Reinforced Concrete Girders Under Corrosion Attack," the authors (Frangopol, et al, 1997) embrace the AASHTO strategy: first, the effects of corrosion "on both moment and shear reliabilities" are carefully investigated; second, a "reliability-based design approach" that is based on minimization of "total material cost including corrosion effects" is taken into consideration. This article suggests that the environmental stressors on concrete (due to corrosion), taken into consideration along with the AASHTO standards, can then be plugged into "reliability-based optimization software." That software is a product of combining general-purpose optimization software and a Monte Carlo simulation-based evaluation program. Hence, the procedure for coming up with reliable estimates of the life expectancy of concrete girders comes in two phases, according to this research. Phase one spans the time from construction to corrosion initiation; phase two, from corrosion initiation to the time when "unacceptable levels of section loss have occurred." But is this procedure proactive or reactive? The answer: it is indeed reactive, and it is also outdated. But nevertheless it should be researched and understood because it is part of the literature. Science cannot predict future conditions and dynamics based on models and hypotheses alone. A foundation for the projections of the future is based on evidence from the past.

Key words: Civil Engineering, Bridge Building, Steel Corrosion, Material and Structural Engineering


Introduction

How safe are America's older concrete highway bridges? And how long is a concrete bridge expected to remain viable? What are the influences that have a negative effect on the integrity of concrete-steel composite bridge girders? Has there been adequate empirical research into these matters? What are the scholarly journals reporting about corrosion and possible solutions?

These concerns are not new, but they have become more public and urgent since the collapse of the I-35 freeway bridge in Minnesota in August 2007. That structural failure has raised new concerns about what effects corrosion can have, and does have, on the structural integrity of concrete bridge girders. And although engineers have been working with contractors and planners for many years to address issues of bridge safety, much remains to be done.

Hypothesis Of Theoretical Considerations

The many and diverse problems associated with corrosion, and, at the end of the day, with highway safety, certainly boil down to more complex and substantive matters than theory. But theoretical considerations are indeed part of the discussion and should logically lead to a more profound understanding and hence a solution. To wit, a hypothesis: a) there is an enormous volume of engineering data available; b) whether or not current public resources are presently committed to the work, the resources and assets needed to empirically inspect and evaluate the existing bridge infrastructure must be made available without equivocation or delay; and c) corrosion is a fact of life, but a nation that can send humans to the moon and explore the Solar System with robotic flying machines can certainly find a way to retard corrosion in bridge girders and in the process make travel safer.

Procedure

This paper provides a strong engineering and safety background into the problems associated with corrosion and bridges. The procedure used in this paper is presented through careful examination of the existing literature. Some of the literature may be a few years old, but the past is prologue; what went before is as relevant as what is going on today. Other literature presented is quite contemporary, and all of the materials presented in this paper are relevant.

Especially relevant are studies that were conducted six, seven, eight and more years ago, compared and contrasted with what engineers and scientists are saying in the latest bulletins and research documents. For example, the American Association of State Highway and Transportation Officials (AASHTO) offered standard specifications for highway bridges in the 1990s that seem to be practical and yet have clearly not provided a workable solution to the ongoing problems of corrosion.

In the article titled "Reliability of Reinforced Concrete Girders Under Corrosion Attack," the authors (Frangopol, et al, 1997) embrace the AASHTO strategy: first, the effects of corrosion "on both moment and shear reliabilities" are carefully investigated; second, a "reliability-based design approach" that is based on minimization of "total material cost including corrosion effects" is taken into consideration.
This article suggests that the environmental stressors on concrete (due to corrosion), taken into consideration along with the AASHTO standards, can then be plugged into "reliability-based optimization software." That software is a product of combining general-purpose optimization software and a Monte Carlo simulation-based evaluation program.

Hence, the procedure for coming up with reliable estimates of the life expectancy of concrete girders comes in two phases, according to this research. Phase one spans the time from construction to corrosion initiation; phase two, from corrosion initiation to the time when "unacceptable levels of section loss have occurred."
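To make the two-phase idea concrete, the following sketch shows how a Monte Carlo evaluation of girder service life might be organized. It is only an illustration of the general approach described above, not the Frangopol et al. software: the lognormal distributions, their parameters, and the 75-year target are assumptions chosen purely for the example.

# Illustrative two-phase Monte Carlo service-life sketch (not the actual
# Frangopol et al. program). Phase 1: construction to corrosion initiation.
# Phase 2: corrosion initiation to unacceptable section loss.
# All distributions and parameters below are assumed for illustration only.
import random

random.seed(1)
N_TRIALS = 100_000
TARGET_LIFE = 75.0   # assumed design life in years

failures = 0
for _ in range(N_TRIALS):
    t_initiation = random.lognormvariate(2.8, 0.4)   # years; ~16-yr median (assumed)
    t_propagation = random.lognormvariate(3.2, 0.5)  # years; ~25-yr median (assumed)
    service_life = t_initiation + t_propagation
    if service_life < TARGET_LIFE:
        failures += 1

print(f"Estimated P(service life < {TARGET_LIFE:.0f} yr) = {failures / N_TRIALS:.3f}")

In a reliability-based design loop of the kind the article describes, an optimizer would adjust design variables such as cover depth or bar size and re-run an evaluation like this one until the target reliability is met at minimum total material cost.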


But is this procedure proactive or reactive? The answer: it is indeed reactive, and it is also outdated. But nevertheless it should be researched and understood because it is part of the literature. Science cannot predict future conditions and dynamics based on models and hypotheses alone. A foundation for the projections of the future is based on evidence from the past.

Results

The cumulative result of researching all available and pertinent data is to become well informed. Without a background in what sources of material have been used, what methodologies have been applied, and what science has brought to the table in this regard, future researchers are fumbling around in the dark. The results of the research bring perspective and knowledge. An example of the knowledge that research of the existing literature can bring is provided by University of Illinois at Urbana-Champaign engineers writing in a white paper ("Topical Conference on Wireless Communication Technology").

Bernhard, et al, propose, after pointing out what all informed engineers know (that the infrastructure of structural concrete bridges is "aging and deteriorating"), the use of wireless technologies, specifically a wireless embedded sensor system, to detect corrosion in concrete girders. The sensing mechanisms that are embedded into the girders are "active acoustic transducers," and through the use of an antenna protruding from the girder, information about what is going on inside the girder can be transmitted to engineers on a constant basis. Not once a year, or every other year, but every day, data would be available on the advent of, the continuation of, and the seriousness of the corrosion.

The research on this project has already been done, but its application in terms of widespread acceptance is still an unknown. The investigation done by Bernhard, et al, shows that the loss of bond strength between the reinforcing steel and the surrounding concrete as a result of corrosion "can be estimated using measurements of acoustic group velocity in the bar." Tools that actually measure the acoustic signals were still under development at the time of this article (2003), but the approach seems promising, albeit more research is surely needed.

Discussions Of Results

Getting to the heart of the matter, an article in the Journal of Structural Engineering (Enright, et al, 1998) explains, in classic understated narrative, that "Experience has demonstrated that highway bridges are vulnerable to damage from environmental attack," including freeze-thaw, corrosion, and alkali-silica reaction. What is needed now, the authors write, are "rational decisions" about the expected life cycle of a bridge and about the costs that will be required to maintain the integrity of bridges.

The Enright article provides research into "time-variant reliability methods" for "bridge life-cycle cost prediction." Time changes the resistance of a bridge to environmental factors, but many reliability studies on reinforced concrete bridges do not factor in "time-dependence" aspects, the article explains. Some studies suggest that concrete elements in bridges do not degrade and, hence, that resistance to environmental factors does not decrease as time goes by. There are currently over 600,000 highway bridges in the United States; of those, "many" are "severely deteriorated" and in desperate need of major infrastructure repair, the authors continue.

Bridges are naturally expected to, and designed to, function safely over "long periods of time," Enright argues.
And during those years of service the concrete bridges are fully expected to stay sturdy notwithstanding "aggressive" and "changing" environments. Because there are limited funds available for proper maintenance, it makes good sense to use those funds in the most efficient way; this is one of the main purposes of the time-variant reliability analysis. When engineers say a bridge has failed, they allude to the failure of "any girder among a set of girders in an individual span."
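Because failure is defined at the level of any one girder in a span, a span behaves as a series system: it fails if any of its girders fails. A minimal sketch of that idea, assuming for simplicity that the girder failure events are independent (a simplification; in a real analysis the correlation between girders matters), is given below.

# Series-system view of a span: the span fails if any girder fails.
# Assumes independent girder failure probabilities, which is a
# simplification used here only to illustrate the concept.

def span_failure_probability(girder_failure_probs):
    """Probability that at least one girder in the span fails."""
    survival = 1.0
    for p in girder_failure_probs:
        survival *= 1.0 - p
    return 1.0 - survival

# Example: five girders, with the two most corroded ones dominating the risk.
print(span_failure_probability([0.02, 0.02, 0.001, 0.001, 0.001]))  # ~0.042

This also makes intuitive the observation, discussed below, that girders whose remaining strength is much higher than the rest contribute almost nothing to the system failure probability, so the series system can effectively be reduced to a smaller set of critical girders.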


A study such as the one reported by the authors of this article can help determine the reliability of reinforced concrete bridges while a bridge is under environmental attack by freeze-thaw, corrosion, and alkali-silica reaction, and the data can also show, at the same time, the bridge's sensitivity to various types of loads. The bridge that was analyzed in this research is located near Pueblo, Colorado; it is a reinforced concrete T-beam structure (built in 1962; bridge L-18-BG). This bridge is made of three 9.1-meter (30 ft.) "simply supported spans." Each of the three spans has five girders "equally spaced" at 8.5 feet apart. The bridge provides two lanes of traffic heading north; the most intense random moment (pulse) for the bridge occurs when two heavily loaded trucks drive over it side-by-side. This occurs probably a thousand times a year, and yet the author admits that it is "often difficult" to get "accurate maximum live load data" for two side-by-side big trucks with full loads.

The author reports that corrosion typically begins in a concrete bridge structure after a window of time known as the "corrosion initiation time." It is during this period that the steel reinforcement becomes "depassivated due to carbonation or chloride ion ingress" (Enright). And once that initial intrusion of corrosion has begun, the reinforcing cross-sectional area begins to decrease; the rate of decrease depends upon the number of reinforcement bars that are indeed corroding. Also, the writer explains, after thorough examination, failure can happen when "the limit state of bending failure by yielding of steel of any one (or more) of the girders is reached." This is a very technical research article, and the writer often seems to admit that, no matter the attempted empirical nature of the research, it is not easy to predict "the rate of strength loss" for reinforced concrete elements. But the research is vitally important for trying to understand the impact of corrosion, because "…even small variations in the degradation can have a large impact on the reliability of a bridge over its service life," Enright explained.

The actual time frame during which strength in the girders begins to be lost is known as the "damage initiation time." And the degradation (deterioration) of a single girder does not necessarily mean the imminent failure of the bridge, Enright found; interestingly, he concludes that for a bridge in which not all girders are under "environmental attack," the series system "can be reduced to a smaller number of girders" provided, that is, that the remaining strength of the girders "in the reduced system is substantially less than the remaining strength of girders eliminated from the original system."
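The loss of reinforcing cross-section described above is often approximated, in generic reliability models of this kind, by assuming a roughly constant corrosion penetration rate once initiation has occurred. The short sketch below illustrates that simplified bookkeeping; the bar diameter, corrosion rate, initiation time, and bar count are invented example values, and this is not the specific loss model used by Enright et al.

# Simplified, generic section-loss bookkeeping for a corroding rebar group.
# All numbers are example values, not data from the Enright study.
import math

D0 = 25.0          # initial bar diameter, mm (assumed)
RATE = 0.08        # assumed corrosion penetration rate, mm/year per surface
T_INIT = 15.0      # assumed corrosion initiation time, years
N_BARS = 5         # number of bars assumed to be corroding

def remaining_steel_area(t_years):
    """Total remaining cross-sectional area (mm^2) of the corroding bars."""
    if t_years <= T_INIT:
        d = D0
    else:
        d = max(0.0, D0 - 2.0 * RATE * (t_years - T_INIT))  # loss on all sides
    return N_BARS * math.pi * d * d / 4.0

for t in (10, 20, 40, 60):
    print(f"t = {t:2d} yr: steel area = {remaining_steel_area(t):7.1f} mm^2")

Even this crude model shows why reliability falls gradually rather than abruptly: the flexural capacity of the girder tracks the remaining steel area, so the practical question becomes when that area drops below the level demanded by the governing limit state.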
Writers M. Tavakkolizadeh and H. Saadatmanesh ("Strengthening of Steel-Concrete Composite Girders Using Carbon Fiber Reinforced Polymer Sheets") published their article in the Journal of Structural Engineering in 2003. The authors suggest at the beginning of their article that "advanced composite materials" for use in the rehabilitation of failing bridge infrastructure have been "embraced worldwide." The reason for this worldwide use of advanced materials is that conventional applications used in strengthening "substandard bridges" are not only "labor intensive," but they cost more and take more time to apply to the deteriorating bridge.

Their study was completed in 2003, four years prior to the disastrous bridge collapse on Interstate 35; but even then, the American Association of State Highway and Transportation Officials (AASHTO) was wary of deteriorating bridges and was setting up strategies for rating the safety of bridges. The result of AASHTO work at that time showed that "more than one third" of the highway bridges in the U.S. were believed to be "substandard." Indeed, the National Bridge Inventory (NBI) showed that in 2003 there were 81,000 "functionally obsolete bridges" in the U.S. Moreover, of those 81,000 functionally obsolete bridges, more than 43 percent are constructed of steel. The main problems associated with steel bridges, the authors write, are that they are subject to corrosion, they are not maintained properly on a consistent basis, and they suffer from "fatigue" over the years.

What is to be done with these failing and faulty bridges? The authors report that replacing a bridge is far more expensive than retrofitting it, and retrofitting takes a lot less time.


Hence, it is recommended that steel-concrete composite bridge girders be strengthened by introducing fiber reinforced plastics (FRP), made of glass, carbon, or Kevlar fibers "…placed in a resin matrix." The reason for using FRP materials is that they have "outstanding mechanical properties" and they also feature elasticity; and when FRP is applied to strengthen a deteriorating girder, the laminates weigh "less than one fifth of the steel and are corrosion resistant."

Before the authors explain the process of using FRP to strengthen faulty, corroded bridges, they point out that traditional rehabilitation of bridges has relied on five techniques: one, simple strengthening of members; two, placing additional members (girders) in the bridge; three, "developing composite action"; four, "producing continuity at the support" structures; and five, "post-tensioning." The downside of these traditional applications is that they call for the use of heavy machinery, there is a substantial period of down-time for the bridge (which of course causes inconvenience for the traffic and trucks that cross the bridge in normal times), they are expensive kinds of repairs and, moreover, "they do not eliminate the possibility of reoccurrence of the problem."

They offer the example of welded steel cover plates used to repair and strengthen existing structures; this has been a commonly applied method for many years, since 1934, when a French bridge that was 73 years old received this kind of "fix." The main problem with this kind of solution is that, first of all, it requires heavy equipment and, worse yet, the welded plates are also subject to the same fatigue that the original girder suffered from. There is also the possibility of "galvanic corrosion between the plate and existing member and attachment materials" (Tavakkolizadeh, et al, 2003).

Research into the possible use of epoxy-bonded steel plates for strengthening flawed, corroded steel-concrete composite bridge infrastructure was launched in 1964, in South Africa. And again, in Japan in 1975, epoxy-bonded steel plates were used in bridges needing an upgrade. The authors offer several more studies to back up their assertions. Then they report their own research on the effectiveness of epoxy-bonded carbon fiber reinforced polymers (CFRP): they tested three large-scale girders strengthened with "pultruded carbon fiber sheets"; the three otherwise identical girders received "one, three, and five layers of CFRP sheets," respectively. Viscous epoxy was used to bond the laminate to the steel surface. Those girders were put through a series of tests at varying load levels. Through a series of mathematical diagrams the authors show the physics involved in their tests. The conclusions they reached showed that when steel-concrete girders were retrofitted with epoxy-bonded CFRP laminates a "very promising" outcome was experienced. The technique of applying epoxy-bonded CFRP "improved the ultimate load-carrying capacity significantly," the authors report.
In specifics, they report that when the CFRP laminates are applied in sufficient quantities, the ability of a concrete-steel composite girder to carry a heavy load is increased by 44% with one layer and by 51% with three layers of epoxy-bonded CFRP; when five layers are applied, the ability of the girder to handle a heavy load is raised by 76%. Further, because of the flexibility of the CFRP bonding, there was no problem with the elastic stiffness of the girders.

An article titled "Behavior of reinforced concrete beams strengthened with carbon fibre reinforced polymer laminates subjected to corrosion damage," published three years prior to the research by Tavakkolizadeh and Saadatmanesh, explores the same issue: the feasibility and effectiveness of reinforcing concrete-steel composite beams with epoxy-bonded CFRP. In that research, the investigators utilized ten reinforced concrete beams (100 x 150 x 1200 mm) with "variable chloride levels" that ranged from 0-3% chloride. Six of those beams had been strengthened by epoxy-bonding CFRP laminates onto the surface of the concrete. In this experiment, four were left un-strengthened. The four un-strengthened beams were put through the rigors of "accelerated corrosion by means of impressed current to 5, 10, and 15% mass loss." More tests were conducted. A bending procedure was done.


The bottom line of the tests showed that the beams that were treated with CFRP laminates did indeed "successfully confine the corrosion cracking." All the beams that were strengthened showed "increased stiffness over the un-strengthened specimens" and revealed what the authors called "marked increases" in the "yield and ultimate strength" of the beams.

The authors noted that a number of studies have been done relating to the problem of corrosion, and consistently that research has shown that when there is corrosion there is a "corresponding drop in the cross-sectional area of the steel reinforcement." Moreover, research has shown, according to the authors, that the corrosion products occupy a "larger area" than the steel occupies. Meanwhile, the corroded areas can and do exert "substantial tensile forces" on the concrete surrounding the reinforced area. This is problematic because those expansive forces that result from corrosion are capable of causing "cracking, spalling, and staining of concrete," which leads to a depletion of the structural bond between the concrete and the reinforcing steel.

It is therefore apparent to the authors that by using carbon fibre reinforced polymer laminates, the expansion of steel reinforcement that is caused by corrosion can be restricted to "up to 15% mass loss." The bottom line in this research is that the structural performance of the CFRP-strengthened and corroded beams was improved in comparison to the beams that were not strengthened.

The Journal of Structural Engineering, a very good source for reference material regarding corrosion of reinforced concrete bridge girders, published an article in 2003 titled "Life-Cycle Modeling of Corrosion-Affected Concrete Structures: Propagation." In this piece, author C. Q. Li, a senior lecturer in civil engineering at the University of Dundee in Scotland, addresses what he terms "unsatisfactory" results from previous studies of reinforcement corrosion in concrete. Therefore, Li sets out to develop new models of structural resistance deterioration to be used in "whole life performance assessment of corrosion-affected concrete structures." Li explains in his abstract that his paper will provide a "complete picture" of whole life performance assessment of reinforced concrete structures that are affected by corrosion.

He claims that over the last thirty years there has been intensive research into how to prevent corrosion in reinforced concrete; he mentions the lack of satisfaction regarding the results of that research. In particular, he points to the "Concrete in the Ocean" programme in the UK, BRITE in Europe, and, in North America, the SHRP studies. These studies, Li claims, did not go deeply enough into the effect that corrosion has on structural deterioration. For his work, Li ignores the kinds of studies that deal with corrosion in the "first life cycle" (from the time of installation to the first signs of corrosion) and, rather, looks into the "second life cycle" (from the time of the initiation of corrosion to the time the concrete beam or structure is "unserviceable").

Li's model goes beyond the existing Tuutti model ("the well-known" model that assesses and predicts the service life of corrosion-affected concrete structures); he does so because the Tuutti model uses "degree of corrosion to indicate service life," which is not far enough into the meat of the issue, he contends.
The Tuutti model, like others in this genre, reflects the fact that the degree of corrosion is indeed connected to structural resistance deterioration and requires "a conversion from the degree of corrosion to structural resistance deterioration" (strength and stiffness). But this strategy, Li insists, is not as "straightforward" as it could be in terms of describing the life cycle using the performance criteria that structural engineers actually use.

In his tests, Li used a total of 30 specimens covering a variety of concrete compositions (different water-cement ratios and cement types). He put the samples under "simultaneous loading and salt spray" conditions, simulated in a large "corrosive environmental chamber" constructed exclusively for Li's research. To achieve test results in a short amount of time, Li adopted accelerated conditions; the loads were "kept constant" on the concrete samples until they were removed from the environmental chamber for testing.

It was determined that the corrosion growth was directly related to the "crack distribution" as well as the crack pattern within the test sample itself.


What did this prove? Corrosion is "essentially a local activity at the cracked sections of RC members," Li indicates, which he says is "very important to structural engineers" who concern themselves with the "cross-sectional capacity of structural members." The flexural strength of the tested specimens was measured during three windows of time, namely the third, fifth and seventh months of the test. The results of that flexural strength research, reflected in Li's charts and diagrams, show that the rates of deterioration of RC structures in strength and in stiffness are "sharply different." The stiffness deteriorates far more severely than the strength; for example, by the time the stiffness has deteriorated by about 60% of its original value, the strength has deteriorated by only about 10% over the same period.

Like all cautious researchers, Li reminds readers that test data on structural deterioration always vary; these variations can be understood in terms of environmental conditions, material discrepancy, human error, and workmanship variations. Physics, chemistry, mechanics, structural engineering, and concrete technology, among other disciplines, all play roles in the research. With these factors in mind, Li explains that he has taken a "phenomenological approach," which takes into consideration the uncertainties associated with attempting to characterize the deterioration of strength and stiffness statistically.

In his discussion section Li mentions that his paper can serve as "an example of future research direction." That is to say, the problem cries out for more research, as do all the aspects of corrosion examined in his study. His conclusion points to the fact that reinforced concrete flexural members indeed "deteriorate at different rates," and, as reported, stiffness deteriorates at a faster rate than strength. His model, at the end of the day, shows that once reinforced concrete flexural members break down to the point of being "unserviceable," engineers can conclude that there is "less than 15% of service life left." The key, of course, is being able to know when the reinforced concrete flexural members have reached "unserviceable" condition.

Interestingly, while Li explained at the beginning of the article that there was too little research into this area of bridge corrosion, and that he would produce a model to better determine the science of corrosion's effects, in the end he writes that "more research on corrosion propagation," in particular research that is itself experimental, is necessary. That is because rational models still need to be developed "to be used in whole life performance assessment for corrosion-affected concrete structures."

Conclusions

Bridges using concrete girders reinforced with steel rebar are constantly under attack by environmental conditions and by the loads that rumble over them every day of every year. The public officials whose responsibilities include the safety and welfare of the citizens are apparently not doing the job that is expected of them; that is to say, this paper points out that of the 600,000 or so highway bridges in the United States, perhaps as many as one-third are unsafe due to deterioration caused by corrosion.
That is an unconscionable and deplorable situation that must be addressed by the legislators in all 50 states along with elected representatives in Washington, D.C.

Recommendations

Recommendation ONE: A recent article provides a good example of a recommendation for better predictability of when a bridge is due to be replaced. In the research article discussed above, the author examined in depth the problems of measuring the time involved, and the difficulties of arriving at a valid remaining-life assessment of reinforced concrete that has been attacked by corrosion. The variables that Li alluded to as difficult to pin down precisely (he used the "phenomenological approach"), authors M. B. Anoop and K. Balaji Rao call "fuzzy variables."


In their research, the two structural engineering scientists from India present a case study in which they attempt to establish a model that compares the times "to reach different damage levels" for a "severely distressed beam." The beam in question is located in the Rocky Point Viaduct, and the point of the research is to provide a model that can be used to decide when to schedule inspections for reinforced concrete girders that are being attacked by "chloride-induced corrosion."

For wise engineering decisions to be made about the proper time for bridge girder inspections and maintenance activities to commence, a formula or strategy is necessary. The authors of the article "Application of Fuzzy Sets for Remaining Life Assessment of Corrosion Affected Reinforced Concrete Bridge Girders" assert that the use of inspection field data, in anticipation of evaluating the performance and remaining life of corrosion-affected reinforced concrete bridge girders, should proceed using the concept of "additive fuzzy sets."

Why use "fuzzy" variables? The authors believe that there will always be uncertainties in the variables that the environment brings to bear on the corrosion of concrete girders, and so, this article states, the benefits of "fuzzifying" those uncertain variables are several: one, fuzzifying offers "greater generality"; two, "higher expressive power"; three, an "enhanced ability to model real world problems"; and four, "a methodology for exploiting the tolerance for imprecision."

That last justification may seem to go against the grain of engineering, since precision and exactitude are so very important to solving problems. But Anoop and Rao insist that, in projecting a life assessment procedure using their strategy, it offers a chance to measure environmental aggressiveness. The authors define that factor based on environmental variables: the relative humidity, the "degree of wetting and drying," and of course the temperature. Even once those are known from field observations, there will still be "linguistic uncertainties" regarding the values of those environmental variables.

How to deal with the uncertainties? That is where the authors bring in their fuzzy sets. In seeking the method for determining how long the Rocky Point Viaduct will remain effective under the attack of corrosion produced by the environmental aggressiveness, the authors proceed with a "rule-base, formulated" approach. The whole point of their discussion and research is to show the engineering community (and readers with a vested and/or professional interest in understanding the dynamics of corrosion in concrete bridge girders) how useful this proposed methodology is for assessing how much time the girder at issue has left (its "life assessment").

The Rocky Point Viaduct is located near Port Orford, Oregon, about 25 miles due east of the Pacific Ocean. The viaduct has five spans (each with a length of 114 m and a deck width of 10.6 m), and its maintenance record is summarized below. It was built in 1955; the initial report of problems through a maintenance inspection came 12 years later, in 1967.
In May of 1968 cracking was noticed on the concrete beams, and in January 1969 inspectors noticed "badly rusted rebars" and "spalling" of the concrete. The first repairs were made in September 1969; in May 1976 a "substantial" portion of one section had been lost due to corroded rebars; and by February 1991 the decision was made to replace the structure entirely, the authors explain.

What the authors offer in this article is a study focused on the beam on the extreme western edge of the viaduct, the part closest to the ocean and most fully exposed to the "impact of the weather from the ocean." At this point in their article the authors do the math, explain the variables, and describe how they went about their research. The "fuzzy" numbers used in the research were based on the damage levels: the "cracking," spalling, the color changes of the concrete, the loss of steel section, "and deflections." The schematic of the damaged viaduct is presented as Appendix 1 at the end of this paper. The authors state that by researching the failed portions of this bridge and applying their strategies, combined with the definitions of damage levels, they can (with reasonable accuracy) predict the window of time at which various levels of damage can be expected on future bridges using concrete girders.
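To indicate the flavour of "fuzzifying" uncertain field observations, the following short sketch defines a triangular membership function and an additive (weighted-sum) combination of environmental factors. It is an illustration only: the breakpoints, weights, and aggregation rule are hypothetical and are not taken from Anoop and Rao's paper.

```python
# Illustrative sketch only: a toy triangular fuzzy membership function and a
# simple additive combination of fuzzified environmental factors. The
# breakpoints and weights are hypothetical, not values from Anoop and Rao.

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def aggressiveness(relative_humidity, wet_dry_cycles_per_year, temperature_c):
    """Combine fuzzified environmental factors into a single score in [0, 1]."""
    humid = triangular(relative_humidity, 50, 80, 100)         # % RH
    cycling = triangular(wet_dry_cycles_per_year, 0, 60, 120)  # cycles per year
    warm = triangular(temperature_c, 5, 20, 35)                # degrees Celsius
    weights = (0.4, 0.4, 0.2)                                  # illustrative weights
    return weights[0] * humid + weights[1] * cycling + weights[2] * warm

if __name__ == "__main__":
    # A coastal exposure similar in spirit to the Rocky Point Viaduct site.
    score = aggressiveness(relative_humidity=85,
                           wet_dry_cycles_per_year=90,
                           temperature_c=12)
    print(f"Fuzzy environmental aggressiveness score: {score:.2f}")
```

A rule base of the kind the authors describe would then map such fuzzified exposure measures onto the fuzzified damage levels (cracking, spalling, and so on) observed in the inspection records.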


In their summary, the authors insist that their methodology for projecting the future life assessment of corrosion-affected reinforced concrete bridge girders is based, first, on accurate field-observation data that provide empirical evidence for performance evaluation; secondly, they write that for the life assessment of the girders in question they use the concept of "additive fuzzy sets."

What is the advantage of using fuzzy sets? Using this strategy, research engineers can project the known, reported times of corrosive impacts (demonstrated here using the inspection data from the Rocky Point Viaduct engineering maintenance records) for different levels of damage on future bridges. In other words, by observing a severely distressed beam, in this case the Rocky Point beams, which lasted only 36 years, researchers can take those numbers, add other case studies to the existing research, and then plug in "fuzzy" numbers for future bridge corrosion damage. The authors admit in their summary that this "fuzzy" strategy needs to be "further developed," but they offer it as a beginning. Their paper was written in 2007, prior to the disastrous bridge collapse in Minnesota; had their "fuzzy" approach been applied in 2005 or 2006, based on engineering inspections, the failure of the I-35 structure might have been predicted, and even averted.

Recommendation TWO: According to a brief article in the journal Advanced Materials & Processes, a potentially effective and reliable way of protecting against the steel rebar corrosion that seems inevitable in concrete bridges is the zinc-hydrogel anode 4727. This product may provide "long-term electrochemical protection" against deterioration of the steel rebar in the concrete, the article states. What actually happens when this "pressure-sensitive" zinc-hydrogel solution is applied is that an "ionic current" is conducted. Wires run through the rebar grid once the concrete is covered thoroughly with the adhesive gel. There is then a "transfer of electrons," and the zinc behaves as though it were the corroding anode, which allows the rebar to function as "the cathode protected from corrosion." The bottom line is that the zinc slowly corrodes instead of the rebar. Ironically, this zinc hydrogel anode was developed in Minneapolis in 2000, by the 3M company, seven years before I-35 crashed into a river in Minneapolis.
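The "transfer of electrons" described here is ordinary sacrificial-anode (galvanic) protection. As a general illustration, and not a quotation or equation from the cited article, the standard half-reactions are

$$\mathrm{Zn \rightarrow Zn^{2+} + 2e^{-}} \quad \text{(zinc anode, corrodes preferentially)}$$

$$\mathrm{O_2 + 2H_2O + 4e^{-} \rightarrow 4OH^{-}} \quad \text{(cathodic reaction at the protected rebar)}$$

provided the ionic path through the hydrogel and the wired connection to the rebar grid are maintained.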
Recommendation THREE: The study conducted by Korean civil engineers Hyo-Gyoung Kwak and Young-Jae Seo puts forward another methodology for predicting the life expectancy of a concrete bridge. The authors suggest that the behavior of pre-stressed concrete (PSC) girders can be predicted fairly accurately. This is because the PSC girders are constructed by placing in-situ concrete decks on the girders "at time intervals." A careful engineering assessment of the differences in the materials, their properties and their age at the time they are loaded can therefore capture "time-dependent material behaviors." Their model posits that cracking at interior supports can be predicted because the "ultimate shrinking strain is expressed as a function of concrete slump" along with factors such as relative humidity, unit weight of cement, and air content. Frequent field inspections and recommendations go hand in hand with this method of predicting the life span of concrete bridge girders.

Recommendation FOUR: Yet another model offering possible solutions for bridge engineers dealing with corrosion comes from Kim Anh T. Vu and Mark G. Stewart. The authors propose a strategy built around three durability design specifications relating to an RC slab bridge and to a projection of the loads crossing the bridge over a given number of months and years. In other words, builders should not simply construct a bridge and then begin safety inspections at pre-arranged intervals; specific projections of the load and the environmental factors can instead give advance answers about durability on a time-variant basis. Meanwhile the authors point to one obvious cause of long-term deterioration and reduced structural safety: the use of de-icing salts during winter seasons. A water-to-cement ratio that is heavier on the water than it should be likewise "increases failure probabilities."
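The time-variant idea behind Vu and Stewart's approach can be pictured with a toy Monte Carlo calculation. The sketch below is purely illustrative: the distributions, the corrosion-initiation window, and the loss rate are invented for the example and are not parameters from their paper.

```python
# Illustrative sketch only: a toy Monte Carlo estimate of the probability that a
# corroding RC girder's capacity falls below the applied load effect within a
# given service period. All distributions and parameter values are hypothetical,
# not values from Vu and Stewart; the point is the time-variant reliability idea.
import random

def failure_probability(years, n_trials=20_000, seed=1):
    random.seed(seed)
    failures = 0
    for _ in range(n_trials):
        capacity = random.gauss(100.0, 10.0)   # initial capacity (arbitrary units)
        t_init = random.uniform(5.0, 15.0)     # corrosion initiation time (years)
        loss_rate = random.uniform(0.5, 1.5)   # capacity lost per year after initiation
        for t in range(1, years + 1):
            if t > t_init:
                capacity -= loss_rate
            load = random.gauss(60.0, 12.0)    # annual peak load effect
            if load > capacity:
                failures += 1
                break
    return failures / n_trials

if __name__ == "__main__":
    for horizon in (10, 30, 50):
        print(f"P(failure within {horizon:2d} years) ~ {failure_probability(horizon):.4f}")
```

Heavier de-icing salt exposure or a higher water-to-cement ratio would be modelled by bringing the initiation time forward or raising the loss rate, which is exactly the kind of sensitivity such time-variant models are meant to expose.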


References

Advanced Materials & Processes. "Zinc hydrogel anode protects against rebar corrosion." 158.6 (2000): 15.

Anoop, M. B., & Rao, Balaji K. "Application of Fuzzy Sets for Remaining Life Assessment of Corrosion Affected Reinforced Concrete Bridge Girders." Journal of Performance of Constructed Facilities 21.2 (2007): 166-171.

Bernhard, J. T., Hietpas, K., George, E., Kuchma, D., & Reis, H. "An Interdisciplinary Effort to Develop a Wireless Embedded Sensor System to Monitor and Assess Corrosion in the Tendons of Prestressed Concrete Girders." 2003 IEEE Topical Conference on Wireless Communication Technology 15.17 (2003): 241-243.

Enright, Michael P. "Service-Life Prediction of Deteriorating Concrete Bridges." Journal of Structural Engineering 124.3 (1998): 309-317.

Frangopol, Dan M., Lin, Kai-Young, & Estes, Allen C. "Reliability of Reinforced Concrete Girders Under Corrosion Attack." Journal of Structural Engineering 123.3 (1997): 286-297.

Kwak, Hyo-Gyoung, & Seo, Young-Jae. "Shrinkage cracking at interior supports of continuous pre-cast pre-stressed concrete girder bridges." Construction and Building Materials 16.1 (2002): 35-47.

Li, C. Q. "Life-Cycle Modeling of Corrosion-Affected Concrete Structures: Propagation." Journal of Structural Engineering 129.6 (2003): 753-761.

Soudki, Khaled A., & Sherwood, Ted G. "Behavior of reinforced concrete beams strengthened with carbon fibre reinforced polymer laminates subjected to corrosion damage." Journal of Structural Engineering 27 (2000): 1005-1010.

Tavakkolizadeh, M., & Saadatmanesh, H. "Strengthening of Steel-Concrete Composite Girders Using Carbon Fiber Reinforced Polymers Sheets." Journal of Structural Engineering 129.30 (2003): 30-40.

Vu, Kim Anh T., & Stewart, Mark G. "Structural reliability of concrete bridges including improved chloride-induced corrosion models." Structural Safety 22.4 (2000): 313-333.


Science Section

Content

Maple and other CAS (Computer Algebra Systems) applied to Teaching and Assessing Mathematics
This paper describes the use of specific technological tools that assist students in the development of their mathematical skills.
Robert Thomson, Arelli Santaella, Mark W. Boulat

Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD
Discussion on neuroplasticity, different ways the brain finds its way to healing tissue damage and computer-aided rehabilitation in ADHD.
Hansel Undsen, Melissa Brant, Jose Carlos Arias

Feasibility of overcoming the technological barriers in the construction of nanomachines
A detailed analysis of nanotechnology.
Günter Carr, Jeffrey Dessler

Department's Reviewers

Chair of Mathematics - Prof. Mark W. Boulat
Deputy Head of Department, Earth and Biology - Prof. Alexandra Moffett
Chair of Earth and Environment Science - Prof. Sergio Falrow
Chair of Biology and Life Science - Prof. Melissa Brant
Chair of Physics and Astronomy - Prof. Timothy Olson
Chair of Chemistry - Prof. Randolf Laman


Maple and other CAS (Computer Algebra Systems) applied to Teaching and Assessing Mathematics

Robert Thomson (MSc)
Master of Science and PhD candidate in Mathematics at the School of Doctoral Studies, Isles Internationale Université (European Union)

Arelli Santaella (PhD)
Chair of Education and Communication Studies of the Department of Social Science at the School of Doctoral Studies, Isles Internationale Université (European Union)

Mark W. Boulat (PhD)
Chair of Mathematics of the Department of Science at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

This paper describes the use of specific technological tools that assist students in the development of their mathematical skills. There has been a significant increase in the use of technology in the areas of science and mathematics, and as students go to college and move into the workforce, mathematical and technological skills become important components of job security. Computer algebra systems (CAS) are programs designed for the symbolic manipulation of mathematical objects such as polynomials, integrals, and equations. Typical actions are simplification or expansion of expressions and the solving of differential or algebraic equations. Most CAS permit the user to write programs for complex tasks, and all the features of high-level programming languages are available. The proper use of CAS has been associated with increases in the amount of mathematical information that students retain. Computer algebra systems play an important role in mathematics, particularly with respect to students and their perceptions of learning algebra through CAS. Student attitudes toward CAS technology appear to be reflected in the level of mastery of certain mathematical concepts and in how well the use of CAS is taught. With regard to the current study, it was clear from previous studies, Saunders (2003), and our work to date that there is a variety of responses to the Maple sessions and to CAS as a whole, and that different students can have quite different attitudes towards the same activity. With respect to the Maple activities specifically, the results indicate that most students found some benefit in the Maple activities with regard to their mathematical understanding, and the visualisation capabilities once again appeared to be significantly useful. Students are generally positive about the motivational impact of the Maple sessions. Many students develop a reasonable level of competence with the basic commands, and students are often able to present a correct Maple solution.

Key words: Education, Mathematics, Technology, Science Teaching


Executive Summary

Mathematics and the use of technology often go hand in hand. There is an increased need today for technological tools that assist students in developing their mathematical skills, and given the pressures facing American educators over mathematics scores, there is a great deal of evidence suggesting that technology may be a critical component in improving those scores. The research contained in this report addresses the use of CAS in the mathematics classroom, as well as the attitudes shared by teachers and students toward CAS technology. There are several reasons for the attitudes expressed by users of this technology. The research found that students with superb mathematical skills may actually experience a greater level of security when using CAS. The research also indicates that some teachers have negative attitudes toward the use of CAS, and it is clear that some of this negativity exists because teachers have not been properly equipped to use technology, and CAS in particular. The research on the use of Maple found that although student experiences vary, most students find the program beneficial, and there is a great deal of optimism concerning the future use of Maple and other CAS programs. Overall, the research indicates that provision must be made for the development and implementation of new CAS programs and that students and teachers have to receive training related to their development and implementation.

Introduction and statement of the research questions

Computer algebra systems (CAS) are programs designed for the symbolic manipulation of mathematical objects such as polynomials, integrals, and equations. Typical actions are simplification or expansion of expressions and the solving of differential or algebraic equations. Most CAS permit the user to write programs for complex tasks, and all the features of high-level programming languages are available. In addition, CAS have numerical systems for visualization (2D and 3D plots, animations) and numerical computation (numerical equation solving, numerical integration).

Beyond being tools for the manipulation of formulae, CAS should be expert systems that contain the mathematics found in quality mathematical handbooks. There are many commercial and non-commercial products; the most popular systems are Mathematica and Maple, and other systems include Derive and MuPAD. Computer algebra systems are widely available in tertiary mathematics departments as desktop software. At RMIT, the mathematics and statistics department has a site licence for the CAS Maple, which is used in some research activities and is becoming increasingly vital for teaching. CAS also provide new assessment tools for automatically marking online assessments using software (e.g. AIM-tth). The introduction of Computer-Assisted Assessment (CAA) demands a substantial investment of both time and resources, so before any commitment to introduce CAA is made, there is a need to explore all possibilities.
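As a concrete illustration of the symbolic and numerical operations just described, the short sketch below uses the open-source Python library SymPy. SymPy is chosen here only because it is freely available; it is not one of the systems examined in this study, and the examples are generic rather than drawn from any course material.

```python
# A minimal illustration of typical CAS operations, using the open-source
# SymPy library as a stand-in for systems such as Maple or Mathematica.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Expansion and simplification of expressions.
print(sp.expand((x + 1)**3))                       # x**3 + 3*x**2 + 3*x + 1
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))    # 1

# Solving an algebraic equation.
print(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))       # [2, 3]

# Symbolic integration (an antiderivative of x*exp(x)).
print(sp.integrate(x * sp.exp(x), x))

# Solving the ordinary differential equation y'(x) = y(x).
print(sp.dsolve(sp.Eq(y(x).diff(x), y(x)), y(x)))  # y(x) = C1*exp(x)
```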
Purpose

The purpose of this research is to take a systematic approach to the design and evaluation of the teaching, learning and assessment of mathematics courses using the CAS Maple. Of particular interest are first-year service mathematics courses at RMIT University. The effectiveness of different ways of incorporating Maple activities into such courses will also be examined.

The investigation will be conducted as a research and development activity through which Maple activities are designed and evaluated in a feedback cycle, following an Action Research methodology. Initially, examples from the literature and relevant theories concerning mathematical understanding were sought in order to inform the development of new resources.


Students' responses to the first cycle of activities in 2003 were obtained, and the conclusions drawn are informing the development of resources for the next cycle. This process will continue over the course of six semesters. The research methods used are observations of classes, analysis of students' work, responses to specially designed test instruments, feedback questionnaires and structured interviews. Video recordings will also be used to record and analyse classes, as part of the methodology for evaluating the teaching and learning of mathematics using Maple in a computer lab.

Research Questions

The particular research questions of interest are:

• How should the work covered in Maple classes relate to the rest of the mathematics course?
• Which particular types of Maple activities are most effective in helping develop students' mathematical understanding?
• Which particular types of Maple activities do students find most rewarding?
• How can Maple activities be assessed appropriately and efficiently (using CAA)?

Literature Review

There have been many studies concerning teaching and learning (procedural and conceptual) when technology is used to support the teaching of mathematics. The purpose of this literature review is to expound upon the concepts learned from those studies. Let us begin by describing how people learn mathematics and the role of technology in this learning process.

How People Learn Mathematics and the Role of Technology

How People Learn Mathematics

Smith (2002) of Duke University presents a summary of many papers and PhD theses, revealing what others have discovered about learning mathematics. For instance, Bowden and Marton (1998) discuss the strength of the latest research about learning mathematics and the impact of technology on the learning and teaching of mathematics at all levels. Past research concerning learning, teaching and using technology in mathematics shows that the learning of mathematics is critical, and that teachers play a key role in influencing the outcome of learning with new technology (Bransford, Brown, and Cocking 1999).

New technologies are interactive: learning by doing, receiving feedback, refining understanding and building knowledge. Technology introduces many opportunities for visualisation, and gives access to wide sources of information: digital libraries, real-world data, and connections to other people who provide information, feedback and inspiration, all of which can enhance the learning of teachers as well as students.

The use of technology can create a friendly teaching-learning environment. The intelligent synthesis of computer technology and mathematics into the curriculum, integrated with new teaching methods and laboratory courses, leads to new and effective ways of learning and teaching mathematics, and to more productive student activity and creativity.

There is also a great deal of controversy about using technology in the teaching and learning of mathematics. The National Research Council (USA) report highlights positive features, particularly of interactive technologies, for learning in general (Bransford, Brown, and Cocking 1999). One of the highlights of that research is that technology used inappropriately makes no significant difference in learning: simply using calculators and computers does not bring forth a change in the learning experience.


However, technology incorporated intelligently with curriculum and pedagogy produces learning gains. There is little evidence that one technology is better than another; what matters is how a technology is used. There is evidence that using CAS for conceptual exploration and for learning leads to gains in solving problems that can transfer to later courses, whereas in traditional courses students tend to use procedural solutions that do not easily transfer to new situations. Technology also enables some types of learning activities (discovery learning, cooperative learning) that are hard to achieve without it.

Galbraith (2002) also discusses some of the unanswered questions raised with respect to the use of technology in undergraduate mathematics teaching and learning, using selected material from three research projects. One study, Ramsden (1997), investigates the impact of using powerful software packages in teaching and learning against traditional lectures and textbooks. Kent and Stevenson (1999) examine to what extent the structure of mathematics must be understood so that mathematical procedures can be learned effectively; their research took place at the University of Queensland between 1997 and 2000, the programs involved the use of Maple in first-year undergraduate teaching, and they focused on the range of students' questions and on student performance. The third study, Templer (1998), is about links between computer-controlled processes and mathematical understanding.

The results suggest that teaching demands are increased by the use of technology. Attitudes towards mathematics and computers occupy different dimensions, and students adopt different preferences about how to use technology in learning mathematics. In addition, students can experience various difficulties in using technological tools. Students may view the tool in several ways: technology as master, technology as servant, technology as partner, and technology as an extension of self, each of which promotes a different kind of technology-assisted learning (Galbraith 2002).

The Role of Technology

According to Lederman and Niess (2000), the appropriate use of technology is an essential component in the preparation of mathematics teachers. The authors explain that the National Council of Teachers of Mathematics (NCTM) has deemed the "Technology Principle" one of six principles associated with high-quality mathematics education. The principle asserts that "Technology is essential in teaching and learning mathematics; it influences the mathematics that is taught and enhances students' learning" (Lederman and Niess 2000).

Indeed, the incorporation of technology into the classroom can take a number of forms, and these uses of technology can be categorized according to who is the controller, or primary user, of the technology (Lederman and Niess 2000). Some of the ways technology is utilized make the teacher the primary user in the classroom, and many teacher preparation courses likewise train the teacher as the primary user (Lederman and Niess 2000). The other approach to teacher preparation prepares pre-service teachers to encourage their future students to utilize technology to research and solve problems (Lederman and Niess 2000). The aforementioned three uses of technology in teacher education can promote better teacher effectiveness and improved student learning.
On the other hand, the authors report that "it has been our experience that the most effective way to use technology to bring about enhanced student learning of mathematics is to prepare preservice teachers to incorporate into their teaching an array of activities that engage students in mathematical thinking facilitated by technological tools. Hence, in our preparation of secondary preservice teachers, we emphasize the third use, in which ultimately the student is the primary user, and to some degree, the second use, in which the teacher is the primary user" (Lederman and Niess 2000).


The aforementioned emphases should take place using the following five guidelines.

1. Introduce Technology in Context – this guideline asserts that technology features, whether mathematics-specific or general, need to be introduced and demonstrated within the context of content-based activities (Lederman and Niess 2000). The authors explain that such an introduction is necessary because teaching technology-based skills first and then attempting to find mathematics topics for which those skills may be useful is akin to teaching a set of procedural mathematical skills and then presenting a group of "word problems" to solve using the procedures (Lederman and Niess 2000). Such an approach can obscure the reason for learning and using technology; it can also make mathematics appear to be nothing more than an addendum, and lead to unnatural activities (Lederman and Niess 2000). With this understood, the use of technology in mathematics teaching is not intended only to teach about technology, but also to improve mathematics teaching and learning through technology (Lederman and Niess 2000).

2. Concentrate on Meaningful Mathematics with Appropriate Pedagogy – this guideline covers content-based activities that use technology, and asserts that meaningful mathematics procedures, concepts, and strategies should be the focus and should replicate the nature and character of mathematics (Lederman and Niess 2000). Activities must also maintain sound mathematical goals with respect to the curriculum, and must not be created simply because technology makes them achievable (Lederman and Niess 2000). The use of technology in mathematics teaching must encourage and facilitate examination, conceptual development, interpretation, and problem solving, as identified by the NCTM (Lederman and Niess 2000). In addition, technology should not be used to carry out procedures without the proper mathematical and technological comprehension: students should not simply insert rote formulas into a spreadsheet to demonstrate such things as population growth (Lederman and Niess 2000). Nor should technology be used in ways that detour from the underlying mathematics, such as adding showy illustrations to a PowerPoint slideshow until the mathematics becomes unimportant (Lederman and Niess 2000). The mathematical content and pedagogy should not be compromised for the sake of the technology (Lederman and Niess 2000).

3. Use Technology to Your Advantage – this guideline asserts that activities should use technology to an advantage; that is, they should go beyond what can be done in the absence of the technology (Lederman and Niess 2000). The authors assert that technology allows users to investigate topics in more depth: for instance, to write programs, interconnect mathematics topics, and develop several proofs and solutions, and to do so in more interactive ways through such components as simulations and data collection using probes (Lederman and Niess 2000). Additionally, technology makes available the study of mathematical themes that were formerly inaccessible, such as recursion and regression, through the removal of computational constraints (Lederman and Niess 2000).
Using technology to teach the same mathematical topics in essentially the same manner that could be realized without it fails to strengthen students' learning of mathematics and undermines the value of the technology (Lederman and Niess 2000).

4. Connect Mathematics Topics – the authors assert that technology-augmented activities must facilitate mathematical connections in two ways: (a) the interconnection of mathematics concepts and (b) connecting mathematics to relatable events (Lederman and Niess 2000). A great many school mathematics concepts can be used to model and resolve situations that arise in the biological, physical, social, environmental, and managerial sciences (Lederman and Niess 2000).


Additionally, countless mathematics concepts can be associated with the arts and humanities. The proper use of technology can encourage such applications by providing immediate access to real data and information, by ensuring that the mathematics concepts introduced are useful for making applications more practical, and by making it simpler for teachers and students to connect numerous representations of mathematics concepts (Lederman and Niess 2000).

5. Incorporate Multiple Representations – the final guideline asserts that classroom activities must include various representations of mathematical concepts. Research has suggested that a large percentage of students have problems connecting the graphical, verbal, numerical and algebraic representations of mathematical functions, and the proper use of technology can help students make these connections. This is accomplished by connecting tabulated data to graphs and curves of best fit, and by producing sequences and series algebraically, numerically, and geometrically (Lederman and Niess 2000).

Conclusion

Several conclusions can be drawn about learning mathematics and the role of technology. The first is that the learning of mathematics in any environment can be a challenge, for students as well as educators, and this is particularly true of complex mathematical concepts. It is also apparent that people learn mathematics differently, and therefore there may be a need for differing approaches to the teaching of mathematics.

The research also makes it clear that the use of technology in the classroom has increased drastically in recent years. With greater access to technology, educators have begun to incorporate technological programs into the learning environment. This integration has taken place in most subject areas, and researchers have found it particularly useful in the teaching of mathematics. However, its usefulness can only be realized when technology is implemented appropriately. With this understanding of how people learn mathematics, let us turn to studies of the use of technology in the mathematics classroom.

Review of studies related to the use of technology in the mathematics classroom

Gravely et al (2006) confirm that the use of technology in mathematics and science has increased drastically in recent years. The authors also note that there are variations in the use of these technologies and in student perceptions of their helpfulness across a variety of subjects (Gravely et al 2006). For instance, some investigations have reported that technology is used often in the classroom setting, while others have found that it is only used occasionally (Gravely et al 2006). It has been suggested that technology is often used as a way of communicating, and middle school students perceived technology as less helpful than did elementary or high school students (Gravely et al 2006). In addition, mathematics students perceived technology as more helpful than did students in science classes, and females perceived technology as more helpful than did males (Gravely et al 2006).
Finally, research has suggested that teacher and student perceptions of the amount of use differed, with teachers reporting greater use than students (Gravely et al 2006).

The authors further note that the increased use of technology is due in part to the No Child Left Behind Act of 2001, which encourages mathematics and science teachers to help children achieve more in these subjects (Gravely et al 2006). One of the ways in which such improvements may occur is through the use of technology in these subjects. Research has indeed suggested that the use of technology in mathematics is one of the six principles of high-quality mathematics education (Gravely et al 2006).


In addition, Quellmalz (1999) asserted that computers can be used by students to obtain, apply, and expand what they comprehend about mathematics and science (Gravely et al 2006). Weaver (2000) also discovered that the use of computers was associated with greater student achievement, in both male and female students. In their qualitative study, Henderson, Eshet, and Klemes (2000) reported that the incorporation of interactive multimedia improved girls' perceptions toward science and encouraged the development of social and thinking skills as early as second grade (Gravely et al 2006). Driscoll (2002) has also suggested a framework containing four ways in which technology could be used in classrooms to assist learning, as follows:

1. Learning transpires in context; technology can encourage learning by providing practical contexts that involve learners in solving multifaceted problems, and computer simulations that present contexts for learners to comprehend complex phenomena (Gravely et al 2006).

2. Learning is an activity, and as such it calls for the use of concept mapping, brainstorming, and/or visualization software (Gravely et al 2006).

3. Learning is also reflective, which points to technologies that encourage communication inside and outside the learning environment (Gravely et al 2006).

4. Learning is also a social experience, which points to software that can handle a networked multimedia environment in the classroom (Gravely et al 2006).

Alagic (2003) also asserts that computers can be used to present multiple representations in mathematics, although he adds that instructors need to have successful encounters with the use of technology in order to use it efficiently (Gravely et al 2006). In addition, Winn (2003) asserts that technology in education plays two roles: the first as an introduction to verbal environments, and the second as a way to evaluate the dynamics of learning for anecdotal, evocative, and prescriptive purposes (Gravely et al 2006). The authors also explain that

"Although technology is being used in classrooms, the amount of use is limited by availability of technology, curricular materials designed to optimize use, and the lack of experience of teachers in using technology effectively. In terms of availability, Kleiner and Lewis (2003) reported that there is one instructional computer with Internet access for about every five students in the U.S. In their study of technology use in different countries Knezek et al. (2000) found several barriers to the use of information and computer technologies, including (a) shortage of technology, (b) logistical problems, (c) the changing roles of teachers, (d) time, and (e) accountability in terms of the type of learning that is tested (Gravely et al 2006)."

Manoucherhri (1999) conducted a survey of high school and middle school math teachers and reported that computers in mathematics classes are most frequently used only for drill and practice. The implication is that teachers do not have the chance to develop skills in using technology more efficiently, and as a result additional education is needed for teachers (Gravely et al 2006).

Bussi et al (2002) assert that many institutions of learning must reconsider the manner in which mathematics is taught.
The author contends that it is time to reconsider the mathematics syllabus in the light of the potential of the technological tools available for mathematics learning, and explains that the very existence of CAS raises questions about the role of algebraic manipulations, including the solving of equations. These questions include the following:

• To what degree do mathematics students have to reach mastery in solving linear or quadratic equations when they have access to tools that solve multifaceted equations, including linear and quadratic equations (Bussi et al 2002)?


• What kind of mathematical knowledge and insight can students gain when using such a tool (Bussi et al 2002)?

Indeed, Ball (2003) points out that different instructional uses of technology have different effects on the cultivation of algebraic concepts and algebraic skills. For instance, research has asserted that both graphing calculators and computer algebra systems are capable tools for encouraging certain types of comprehension in algebra, including the comprehension of algebraic functions (Ball 2003). There are also significant questions concerning the role of paper-and-pencil computation in creating comprehension in mathematics in addition to skill (Ball 2003). The author contends that these questions are present at all levels of school mathematics, and that empirical investigation and evidence are critical for practitioners who want stronger grounds for making sound instructional decisions (Ball 2003).

In addition, a great deal of previous research concerning algebra has placed greater emphasis on student learning issues than on algebra teaching issues. This is evident in Kieran (1992), who asserts that "The research community knows very little about how algebra teachers teach algebra and what their conceptions are of their own students' learning." The author explains that many changes are occurring throughout the nation as they relate to algebra (Ball 2003). For these changes to be effective, however, teachers and creators of instructional materials must have access to research-based information about various models for algebra teaching at different levels and the effect of such models on students learning different aspects of algebra (Ball 2003).

Such research could also examine the ways in which teachers work, including how they use certain opportunities to learn and how they use instructional materials as they plan and teach lessons (Ball 2003). For instance, even though elementary teachers' use of texts has been examined in various studies, less is known about how algebra teachers use textbooks, technology, tools, and additional instructional materials (Ball 2003). Yet this is the type of knowledge that matters for any large-scale improvement of algebra learning for American students, in that it would inform the design and implementation of instructional programs (Ball 2003). Indeed, "the changing algebra education landscape demands that we direct collective research energies toward solving some of the most pressing problems that are emerging as a result of these changes. Research into algebra teaching, learning, and instructional materials should be at the forefront of efforts to improve outcomes for all students in learning algebra in the nation's K–12 classrooms" (Ball 2003).

According to Marx (2005), the use of technology in the college classroom may be beneficial to students; however, many colleges first have to train faculty in the proper use of computers. The article explains that college professors are usually somewhat reluctant to use technology, and they often view having students use technology in a meaningful way as a serious challenge.
In addition, this reluctance is present because "university faculty already have teaching, research and publishing requirements, and working to integrate technology takes time away from these activities. Also, faculty often become discouraged because they do not receive credit for their work in adding significant technology components to their courses" (Marx, 2005).

To combat this issue, many universities have developed skills workshops (Marx, 2005). These workshops give professors training in the use of technology in mathematics as well as in other subjects, and they also feature information on the benefits of using technology in education (Marx, 2005).

Some grants have also been given to colleges to encourage the use of technology in the classroom. Such technologies seem necessary because of the competition American colleges are confronting from universities in other regions of the world, and the implementation of such technology is vitally important to ensure that students are ready to join the workforce.


Conclusion

It is apparent from this research that schools are attempting to use technology in mathematics classrooms. However, it is also apparent that many schools have fallen short of meeting goals that would ensure that the use of technology is beneficial to students. The research indicates that for some students, particularly at the middle school level, there may be very little benefit associated with using mathematics technology. The research also focused on the perceptions of teachers. It seems that many teachers are not adequately trained in the appropriate use of technology in mathematics, while others fear that using technology with mathematics does not ensure that students will learn the concepts that are presented. Overall, much of the technology currently being used needs to be paired with a better strategy for its use in the mathematics classroom.

2.3 Utilizing Computer Algebra Systems

There are many different settings that use CAS. According to Kahn and Kyle (2002), the increased power of computers and increased access to them have created a significant demand for their use in learning, teaching and assessing mathematics in higher education institutions throughout the world. Computer algebra systems are no exception to this rule, as they have the capacity to perform symbolic as well as numerical manipulations (Kahn & Kyle 2002). There are various CAS, including Mathematica, Maple, Macsyma and DERIVE (Kahn & Kyle 2002), and some systems are also available on hand-held calculators such as the TI-92 Plus and the TI-89 (Kahn & Kyle 2002).

Over the last 20 years a great deal of emphasis has been placed on the use of CAS in learning and teaching; however, there is still a great deal that is unknown about CAS and its use (Kahn & Kyle 2002). It is clear, though, that the use of calculator technology in the academic environment is often associated with lower test scores, and this association has led to a great deal of negativity concerning CAS (Kahn & Kyle 2002).

Despite this negativity, studies have reported that students gain a better conceptual understanding of mathematics with no significant loss in computational skills (Kahn & Kyle 2002). In one such study, Hurley et al (1999) refer to a National Science Foundation report which found that an estimated 50% of the educational institutions carrying out studies on the impact of technology saw increases in conceptual understanding (Kahn & Kyle 2002). In addition, there was greater facility with visualization and graphical understanding, and a capacity to solve a wider array of problems, without any loss of computational skills (Kahn & Kyle 2002). A further 40% reported that students in classroom settings with technology had done at least as well as those in conventional classroom settings (Kahn & Kyle 2002).

2.3.1 Learning and teaching concerns with a CAS, and when it is suitable to use a CAS

The authors note that many teachers have suggested that CAS has the capacity to undervalue some areas of mathematics that are routine in nature and necessitate a great deal of algebraic manipulation. They point out that this complex manipulation can be seen when both mathematics and engineering students have to learn methods of integration, such as the use of partial fractions, without using CAS (Kahn & Kyle 2002).
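To make the point concrete, the kind of partial-fraction integration referred to above can be delegated almost entirely to a CAS. The sketch below uses the open-source SymPy library (as a stand-in for Maple or DERIVE, and with a made-up integrand) purely to show how little manual manipulation remains once such a "black box" is available:

```python
# Illustration of how a CAS collapses a routine partial-fractions integration
# into a single call (SymPy is used here as a freely available stand-in for
# systems such as Maple or DERIVE; the integrand is a made-up example).
import sympy as sp

x = sp.symbols('x')
f = (3*x + 5) / ((x + 1)*(x + 2))

# The step a student would normally do by hand: partial-fraction decomposition,
# giving 2/(x + 1) + 1/(x + 2).
print(sp.apart(f, x))

# The integral itself, obtained directly from the original expression:
# 2*log(x + 1) + log(x + 2).
print(sp.integrate(f, x))
```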
Some educators argue that "If the object were solely to obtain an analytical solution, then the skills developed by performing such mental manipulations would appear to be virtually redundant if a CAS were available" (Kahn & Kyle 2002). On the other hand, most educators concede that it is beneficial for most students to be able to integrate trigonometric, exponential and simple polynomial functions mentally (Kahn & Kyle 2002).


This is important because these methods should be understood by students. However, the question often arises as to when it is appropriate to move from mental calculation to a computerized tool like a CAS (Kahn & Kyle 2002).

The authors note that there has been a great deal of debate on this very issue. They point to earlier work by Buchberger (1989), who suggested a 'white box/black box' principle for instruction (Kahn & Kyle 2002). The principle asserts that when students are learning a new subject area, things such as proofs, theorems and basic concepts need to be worked by hand (i.e. through mental calculation), and during this 'white box' phase a CAS should be used sparingly (Kahn & Kyle 2002). The principle also asserts that when mental calculations have become routine and applications become significantly more difficult, the use of a CAS must be permitted and supported (as a 'black box') (Kahn & Kyle 2002). This educational principle may provide a motivation for the effective use of CAS (Kahn & Kyle 2002). In theory, the principle seems simple to practise within a single class or subject area, but hard to organize across a program that includes a variety of subject areas and many instructors with different teaching styles and strategies (Kahn & Kyle 2002).

In addition to these instructional concerns, students must be taught how to carry out algebraic procedures and still be taught conceptual understanding. Knowing how to differentiate symbolically differs from knowing what a derivative means (Kahn & Kyle 2002), and developing an analytical solution to a differential equation may be of limited use if the student does not have the capacity to visualize and interpret the solution (Kahn & Kyle 2002). In other words, the contents of the 'white box' must be understood before the 'black box' stage is permitted to take over (Kahn & Kyle 2002).

Some educators also concede that, in light of the creation of CAS, there is less need to spend instruction time on the intricate points of technical manipulation; educators should instead focus on improving problem-solving skills and conceptual knowledge of mathematics (Kahn & Kyle 2002). Indeed, some educators believe that computers can be used to perform the processes, enabling the user to concentrate on the product (Kahn & Kyle 2002). In fact, for many students doing mathematics has been reduced to simply completing procedures with very little ability to reason, and programmes tend to be simply a collection of white boxes (Kahn & Kyle 2002). A collection of white boxes could be detrimental, and it appears that a balance is needed to ensure that the appropriate concepts are actually learned (Kahn & Kyle 2002). Wu (1998) asserts that within mathematics education in general, and the Calculus Reform movement in particular, drill exercises and the importance of memory cannot be completely ignored (Kahn & Kyle 2002).
They need tounderstand how to use all of the tools oftechnology effectively. The instructor hasan important role to play in developing thisunderstanding (Kahn & Kyle 2002).”Once a school system decides to use CASthey must embrace an approach to teaching usingsuch a system (Kahn & Kyle 2002). The authorsassert that the most common approach to CASis computer lab sessions (Kahn & Kyle 2002).Throughout such sessions students are presentedwith exploratory tasks to complete and supportlecture material and the exercises that are performedmentally (Kahn & Kyle 2002). Students are thengiven the opportunity to use CAS to carryout thesetasks. It has been asserted that this type of approachdoes not depend on CAS but instead it uses CASR. Thomson, A. Santaella, M. W. BoulatMaple and other CAS (Computer Algebra Systems) applied to Teaching and Assessing Mathematics
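To make the 'black box' concrete, here is a minimal illustrative sketch of the kind of one-call manipulation being debated. It uses the Python library SymPy purely as a stand-in for the systems discussed in the article (Maple, DERIVE and similar), and the particular functions are arbitrary examples, not taken from the source.

# Illustrative sketch only: SymPy standing in for the CAS discussed above.
from sympy import symbols, sin, exp, diff, integrate, dsolve, Function, Eq

x, t = symbols('x t')

# "Black box" use: routine manipulations returned in a single call.
print(diff(x**3 * sin(x), x))          # symbolic differentiation
print(integrate(exp(-x) * sin(x), x))  # symbolic integration

# The same applies to analytical solutions of differential equations.
y = Function('y')
print(dsolve(Eq(y(t).diff(t, 2) + y(t), 0), y(t)))  # y(t) = C1*sin(t) + C2*cos(t)

The one-line calls are exactly what the white box/black box debate is about: the system returns the answer, while interpreting what that answer means remains the student's task.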


This is known as a constructivist approach, as opposed to an instructivist approach to teaching and learning. Students throughout the world are taught through this approach (Kahn & Kyle 2002).

Another approach, which uses CAS in addition to a traditional lecture/tutorial style, is believed to be less successful (Kahn & Kyle 2002). Overall, many educators believe that students must be convinced of the significance of the use of CAS; given a choice, they will often prefer to solve a problem by hand if possible (Kahn & Kyle 2002). This is the reason why the exploratory tasks and questions should necessitate CAS use (Kahn & Kyle 2002). In addition, many students require a consistent approach to CAS use across the entire program of study (Kahn & Kyle 2002). This is because learning how to utilize a CAS in one module is of little use if students do not use a CAS elsewhere in their programme even when it is appropriate to do so (Kahn & Kyle 2002). Program design should permit an integrated policy on CAS usage, not just as it pertains to Mathematics degree programs but also as it relates to service teaching in programs such as engineering where mathematics is utilized (Kahn & Kyle 2002).

The authors assert that the CAS is nothing more than a mathematical tool, and tools have been developed since the inception of mathematics in the classroom (Kahn & Kyle 2002). However, it seems that with the advent of mathematics tools that utilize modern technology there is a great deal of protest (Kahn & Kyle 2002). If the CAS is used properly, not only are students engaged in mathematical thinking, they are engaged in better thinking at a higher level (Kahn & Kyle 2002). In addition, they are confronting the essence of problems, instead of working around the fringes by engaging in complicated manipulations (Kahn & Kyle 2002). The authors also contend that the protest associated with the use of a CAS lies primarily in the fact that the capacity to carry out complex manipulations has been labelled as actually 'doing mathematics' (Kahn & Kyle 2002).

As it relates to the optimization exercises, the CAS performed all of the tasks associated with finding derivatives (Kahn & Kyle 2002). Some educators assert that this is a fundamental skill that will be abandoned with the use of CAS (Kahn & Kyle 2002). However, the research contends that such a fear is unrelated to the question of utilizing a CAS (Kahn & Kyle 2002). Some students will not be good at finding derivatives; others will be able to find derivatives and do the task well (Kahn & Kyle 2002). In either case it is clear that 'doing mathematics' is really about reasoning and problem solving (Kahn & Kyle 2002). In addition, the 'doing of calculations' is only one aspect of mathematics, regardless of whether the calculations are numerical or algebraic (Kahn & Kyle 2002).
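The article does not reproduce the optimization exercises themselves, so the following is a hypothetical sketch, written in Python/SymPy rather than the CAS used in the study, of the division of labour described above: the system carries out the differentiation and solves the resulting equation, while choosing the model and interpreting the stationary point is left to the student.

# Hypothetical optimization exercise; SymPy plays the role of the CAS.
from sympy import symbols, diff, solve

x = symbols('x', positive=True)

# Invented example: maximize the area A(x) of a rectangle of perimeter 20,
# where x is one side length and the other is therefore 10 - x.
A = x * (10 - x)

dA = diff(A, x)            # the CAS performs the differentiation
critical = solve(dA, x)    # ... and solves A'(x) = 0
print(critical)            # [5]
print(diff(A, x, 2))       # -2, negative, so x = 5 gives a maximum

# Interpreting the result (a 5 by 5 square maximizes the area) is the student's job.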
The article further states that the effect of a CAS will be most significant if it is permitted in formal examinations (Kahn & Kyle 2002). Assessment is not necessarily a key component in teaching, but it does drive students' learning, and a CAS will become extremely useful if students believe it presents them with an advantage in assessed work (Kahn & Kyle 2002). As the use of CAS is suggested for manipulative work, the following questions must be addressed:

1. Will mathematics coursework and examinations only be manageable by students who are skilful problem solvers (Kahn & Kyle 2002)?
2. Will students who have only manipulative skills, or the ability to do straightforward, well-defined applications, be unable to pass courses that utilize CAS (Kahn & Kyle 2002)?

If this is indeed the case, then mathematics may become an elitist subject and may be overlooked by other disciplines (Kahn & Kyle 2002). The examples given indicate that it is possible to construct examinations that require the students to use and demonstrate manipulative as well as problem-solving skills (Kahn & Kyle 2002).


Assessment of CAS

Kahn & Kyle (2002) explain that assessing what has been taught is always necessary. Therefore assessing CAS is a necessary step in understanding the overall impact of CAS (Kahn & Kyle 2002). Assessing CAS is particularly necessary when the utilization of CAS contributes to the learning outcomes of the course (Kahn & Kyle 2002). However, this raises a number of issues relating to CAS use, especially the use of a CAS in formal, timed examinations (Kahn & Kyle 2002). This issue has been debated at length, even among those who support CAS. A great deal of the debate centers around the mechanics of implementation and the equity of CAS use; although these are important concerns, they are secondary details when compared to the primary purpose of examinations, namely assessing appropriate mathematical knowledge and skills. As a result, the authors explain that

"an integrated approach covering learning, teaching and assessment is needed. The use of a CAS in the mathematics exam is not completely analogous to an open book examination. In fact, the CAS generally tells the mathematics student less than (say) the open history book may tell the history student. The CAS merely does the numerical or symbolic calculation; the interpretation of the result is beyond its capability. The interpretation is where the student's understanding of the topic comes to light. It merely emphasizes to the student, and possibly to the instructor, that doing calculations is not the sole purpose for studying a topic. Mathematical problem solving is a process that goes far beyond manipulation of symbols. The use of a CAS removes a computational roadblock that can stand between the students and the solution of the problem (Kahn & Kyle 2002)."

The authors explain further that examination questions created for a classroom environment in which students have access to a CAS should ensure that students recall concepts and ideas, not just show manipulations and recall formulas (Kahn & Kyle 2002). The test evaluator needs to assess what the students have really learnt during the course, as opposed to what they have simply memorized (Kahn & Kyle 2002).

How Students perform when they are given CAS examinations

Those who are investigating the use of CAS often want to know whether the use of CAS in the classroom actually increases the number of students who pass the module. In some cases researchers have found that students who used CAS and were examined found the Group C skills to be difficult (Kahn & Kyle 2002). In addition, pass rates and mean marks for such examinations were identical to the scores for students in non-CAS classrooms (Kahn & Kyle 2002). The researchers also point out that in most assessments there was not a significant change, either positive or negative, in the number of students who passed the examinations (Kahn & Kyle 2002). In addition, there was not a considerable change in the overall grade distribution. The researchers do, however, assume that the CAS can alter these factors, though it will not make all students into mathematics experts (Kahn & Kyle 2002). That would depend upon student desire and ability, in addition to the skills of the teacher (Kahn & Kyle 2002). What CAS does present teachers with is the capacity to teach and assess the mathematics, which is the ultimate goal (Kahn & Kyle 2002).

The authors also explain that learning to do mathematics with CAS is more difficult than in most traditional secondary school courses (Kahn & Kyle 2002). This means that if instructors do not stress the importance of CAS throughout the course and during assessment activities, they run the risk that students will revert to their traditional mode of doing mathematics (Kahn & Kyle 2002). The authors contend that in some cases when CAS is utilized the goals of the course would be lost, and the Group C level skills would be ignored or minimized because of a student preoccupation with the manipulative nature of the equations (Kahn & Kyle 2002).


As a result of the preliminary time that students need to become comfortable with the CAS, experience has also indicated that the instructor should write exams that do not have as many questions as traditional examinations, but that still encompass the major ideas of the class (Kahn & Kyle 2002). Therefore, the creation of the exam at this level necessitates careful thought concerning student capacity, in addition to the capabilities and role of the CAS in learning the subject and responding to examination questions (Kahn & Kyle 2002).

The authors also assert that mathematics courses at all levels need to be taught with an emphasis on developing the skills highlighted in MATH Taxonomy level C (Kahn & Kyle 2002). The CAS presents an excellent opportunity to achieve this goal (Kahn & Kyle 2002). This means that teachers of mathematics must incorporate the use of this tool into their teaching. CAS will not make teaching redundant or inconsequential (Kahn & Kyle 2002). On the contrary, it permits teachers to set students on the path to both mathematical discovery and problem solving (Kahn & Kyle 2002). Additionally, CAS does not create an artificial environment in the classroom; instead it prepares students for their futures as consumers of mathematics or professional mathematicians (Kahn & Kyle 2002).

A survey of mathematics graduates working as mathematicians was published in the magazine Math Horizons (Moylan, 1995). In this survey a two-part question was asked: "Which of the courses and skills that you took as an undergraduate Mathematics Major best prepared you for your present career? Which prepared you the least (Moylan, 1995)?" Those who responded to the survey asserted that the most useful courses were those that consisted of modeling and problem solving, that is, finite mathematics, statistics, differential equations, and operational research (Kahn & Kyle 2002).

Conversely, even though the respondents did not identify whole courses as ineffectual, they did express disapproval of the part of their undergraduate training that stressed manipulative, hand calculation of numerical and technical algorithms (Kahn & Kyle 2002). Examples include time spent on technique after technique of symbolic integration, closed-form solutions of differential equations, multiplication and manipulation of large matrices, and differentiation problems (Kahn & Kyle 2002). They explained that their employers had software packages that performed these tasks on their computers (Hornaes & Royrvik 2000). Instead, they felt they needed additional time spent on developing problem-solving and analysis skills (Hornaes & Royrvik 2000).
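As a loose illustration of the routine tasks the graduates mention, the short sketch below performs a symbolic integral, a matrix product and inverse, and an eigenvalue computation in a few lines. It uses Python/SymPy only as a stand-in for the commercial packages referred to; the integrand and matrices are arbitrary examples.

# Illustrative sketch: routine manipulations handed off to a CAS (SymPy here).
from sympy import symbols, integrate, exp, Matrix

x = symbols('x')

# Technique-heavy symbolic integration done in one call.
print(integrate(x**2 * exp(3*x), x))

# Multiplication, inversion and eigenvalues of (small, made-up) matrices.
A = Matrix([[2, 1], [1, 3]])
B = Matrix([[0, 1], [1, 1]])
print(A * B)
print(A.inv())
print(A.eigenvals())   # {eigenvalue: multiplicity}

None of this replaces knowing what an antiderivative or an eigenvalue means, which is the point the respondents go on to make about problem-solving and analysis skills.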
A significant percentage of students who go to graduate school in the sciences or mathematics use computer packages as a component of their research work (Hornaes & Royrvik 2000). Such a practice is viewed as a basic part of conducting modern research (Hornaes & Royrvik 2000). There is, however, no suggestion that the research being conducted is invalid because of its reliance on the computer (Hornaes & Royrvik 2000).

This research makes it apparent how meaningful exercises and examinations may be developed for use in a CAS laboratory, or in circumstances where every student has access to a CAS (Hornaes & Royrvik 2000). In addition, instead of trivializing mathematics teaching and learning, the authors consider that it has been enhanced (Hornaes & Royrvik 2000). As a result, additional time can be spent on problems requiring students to build up the level C skills of the MATH Taxonomy (Hornaes & Royrvik 2000). The authors further contend that mathematics instruction that does not include the utilization of the CAS risks becoming ineffective (Hornaes & Royrvik 2000). In addition, there is little excuse for not using it, as it is available in inexpensive calculators (Hornaes & Royrvik 2000). It is also embedded in at least one word processing program, known as Scientific Notebook (Hornaes & Royrvik 2000). CAS has also been incorporated in some popular industrial and academic mathematics tools, including MathCAD and MatLab (Hornaes & Royrvik 2000).


In addition, in the United States and the United Kingdom there exist engineering schools which require students to understand the use of CAS in completing their coursework (Hornaes & Royrvik 2000). The authors assert that it is imperative that students in mathematics courses are comfortable using the tools of the 21st century (Hornaes & Royrvik 2000). The key to such use is that the CAS is permitted at all levels of mathematics instruction and evaluation (Hornaes & Royrvik 2000).

Conclusion

As it relates to the utilization of CAS, the research is clear concerning the need for CAS in the 21st century. The research indicates that there are various settings that utilize CAS. The research also indicates that the availability of new technology has enhanced the learning experience. Because CAS has the ability to perform symbolic manipulations, it is viewed by many as an ideal tool for the teaching of algebra. In addition, CAS programs come in many different forms, including Mathematica, Maple, Macsyma and DERIVE. They also come loaded on handheld calculators such as the TI-92 Plus and the TI-89.

The research also points out that there is a limited amount of research on the topic of CAS because many institutions of learning have only just begun to implement these programs on a large scale. At one end of the spectrum there seems to be a belief that CAS, and calculators in general, lead to lower test scores among the students who use them. At the same time, the utilization of CAS has many supporters, who assert that students gain a better conceptual understanding of mathematics with no significant loss in computational skills when using such programs.

2.4 CAS in the Classroom

According to Bloom (2002), the message distilled from 24 papers, and familiar from other findings of recent studies and research, is that the focus on the use of CAS has expanded in recent years, especially in the tertiary sector, while its use in higher-level mathematics subjects is not made as explicit. At this stage the secondary education agencies in a number of countries are actively engaged in investigating the inclusion of CAS calculators in the high school curriculum. This review considers issues that CAS raises for learning, the resources it offers, and students' and teachers' responses to the inclusion of CAS.

Pedagogy and Epistemology issues with CAS

According to Hornaes & Royrvik (2000), any new pedagogical tool often faces some resistance from students and teachers, and the introduction of CAS at the collegiate level is no exception (Hornaes & Royrvik 2000). Additionally, arguments in favor of or against the utilization of CAS are numerous; most of these arguments are pedagogical, but some are practical (Hornaes & Royrvik 2000). The authors assert that the most frequently used argument for utilizing CAS is that it can eliminate the hard work and difficulty of performing algebraic manipulations, therefore permitting more time to carry out the more important tasks of mathematical modeling and problem solving (Hornaes & Royrvik 2000). For instance,

"Traditionally, problems given to engineering students represent unrealistically simple physical situations, so that it is possible for students to do the calculations by hand. The ability to have CAS do the complicated computations, so the argument goes, will give students more time to work on more realistic, more complicated, and presumably more interesting problems. There is, however, a danger that teachers may be tempted to expect too much of students when using CAS, presumably because the algebraic manipulations become a non-problem (Hornaes & Royrvik 2000)."

When utilizing the algebraic capacity of CAS, some researchers assert that the tasks associated with teaching concepts are simplified (Hornaes & Royrvik 2000).


For instance, it is no longer necessary for students to master manipulative skills to learn the concept of integration; and less time spent on algebra leaves more time to work on concepts, which is the most significant aspect of learning (Hornaes & Royrvik 2000). Among those who are sceptical of CAS, the opposite argument is posited: that a decrease in algebraic skill results in a decreased understanding of mathematical concepts (Hornaes & Royrvik 2000).

The authors further explain that learning how to use CAS can be time consuming (Hornaes & Royrvik 2000). As a result, if instructors want students to realize any benefit, CAS should be utilized as a unifying tool in various courses throughout an academic career (Hornaes & Royrvik 2000). Nevertheless, a large percentage of instructors assert that engineering students in particular should learn to use other, more focused, technical programs that are commonly utilized in the industries in which the students will be employed. Both teachers and students have also objected to the use of CAS as an addition to an already weighed-down mathematics curriculum (Hornaes & Royrvik 2000). This is because CAS takes time to master, and this time is usually taken away from the drilling of elementary skills (Hornaes & Royrvik 2000). Students who are weaker in mathematics have even greater difficulty accepting that the utilization of CAS can assist them in passing examinations (Hornaes & Royrvik 2000).

Bossé & Nandakumar (2004) assert that the creation of powerful Computer Algebra Systems has continually affected curricula, pedagogy, and epistemology in secondary and college algebra. Nevertheless, epistemological and pedagogical research related to the role and efficacy of CAS in the learning of algebra is insufficient (Bossé & Nandakumar 2004). The research conducted by Bossé & Nandakumar (2004) investigates concerns regarding typing expressions into the Texas Instruments TI-92 and TI-89 CAS, and offers suggestions for the future use of CAS and for how it can become more epistemologically and pedagogically sound (Bossé & Nandakumar 2004).

Bossé & Nandakumar (2004) state further that the current era affords students access to calculators in addition to computers that are programmed with CAS. There is also an increase in the curriculum being developed to accommodate CAS technology (Bossé & Nandakumar 2004). Teachers and curriculum developers constantly evaluate which mathematical content in the curriculum is most suitable for applications of CAS (Bossé & Nandakumar 2004). Absent from a majority of these evaluations and curricular developments are scholarly considerations concerning the role that CAS plays in teaching and learning (Bossé & Nandakumar 2004).

Although CAS seems to significantly improve student learning of the manipulation of algebraic expressions, such a system is also fundamentally burdened with epistemologically unsound computerized functions which can create misunderstandings (Bossé & Nandakumar 2004). These particular authors argue in favor of increased acceptance of CAS. However, the authors also insist that CAS (principally those linked to the TI-92 and TI-89 calculators) still have some programming nuances which need to be addressed in the future so that CAS is more conducive to student learning (Bossé & Nandakumar 2004).

As it relates to the issue of programming nuances, it has been readily recognized that the ENTER key is responsible for more than loading a function or expression into the CAS memory or displaying it on the screen of the calculator (Bossé & Nandakumar 2004). Nevertheless, for students the vagueness of precisely what will appear, and why, remains problematic (Bossé & Nandakumar 2004). In addition, automatic processes tend to take over the users' input and execute tasks on the entries, whether or not the users wish those tasks to be carried out. If these issues are not addressed, they can have a harmful effect on student learning (Bossé & Nandakumar 2004).
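The behaviour described here is specific to the TI-92 and TI-89 and is not reproduced in the article. As a loose analogy only, the sketch below shows how a desktop CAS (Python's SymPy) also applies automatic transformations the moment an expression is entered, which is the kind of silent processing that can puzzle a learner; the expressions are arbitrary examples.

# Analogy only: SymPy, like calculator CAS, transforms input automatically.
from sympy import Symbol, sqrt, Rational

x = Symbol('x')

print(x + x + 1)        # entered as three terms, displayed as 2*x + 1
print(x**2 * x)         # automatically combined to x**3
print(sqrt(8))          # automatically rewritten as 2*sqrt(2)
print(Rational(2, 4))   # automatically reduced to 1/2

# A learner who typed sqrt(8) may wonder where 2*sqrt(2) came from; the system
# never shows the intermediate step, which is the pedagogical concern raised above.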


The authors further explain that it is vitally important that educators utilize methodologies and instructional tools in the classroom that have concrete epistemological and pedagogical foundations (Hornaes & Royrvik 2000). In its current state, CAS can improve and enhance problem-solving and discovery-learning situations, but the aforementioned misbehavior can also impede learning (Hornaes & Royrvik 2000). It can be asserted that the current generation of CAS serves well those students who understand the respective mathematics and use it primarily to speed up their investigations and work (Hornaes & Royrvik 2000). On the other hand, as a learning tool for students who do not have a solid understanding of the mathematics, CAS may currently be inadequately refined for their use and learning (Hornaes & Royrvik 2000).

Effectiveness of CAS in the Classroom and Student Attitudes Towards CAS

Pierce & Stacey (2002) posit that in recent years studies of tertiary and secondary mathematics classes have supported the idea that the simple presence of CAS in a classroom does not automatically mean that the benefits of such a system will be realized. Indeed, students have to learn how to use the hardware and software efficiently (Pierce & Stacey 2002). This process is referred to as instrumental genesis (Pierce & Stacey 2002). Instrumental genesis is the process by which the available technology becomes a powerful tool. This particular learning process presents a new problem for students (Pierce & Stacey 2002). Some researchers have asserted that when students have to learn new mathematics and new technology simultaneously, they may be sidetracked by having to learn how to use the new technology (Pierce & Stacey 2002). On the other hand, other researchers have asserted that these problems are identical to the issues that students face when performing calculations with paper and pencil (Pierce & Stacey 2002).

The authors further assert that in order to realize the full benefits of CAS, students must have the capacity to discriminate in their utilization of this technological tool (Pierce & Stacey 2002). In addition, research has found a range in students' levels of engagement with the technology (Pierce & Stacey 2002). Research has indicated that students' utilization of CAS was often tempered by their underlying beliefs concerning mathematics and their assessment of what was important (Pierce & Stacey 2002). For instance, one researcher discovered that students who believed that mathematics was 'answer-based' rejected the opportunity for exploration of mathematical ideas presented by CAS (Pierce & Stacey 2002). In addition, these students placed a high value on individual effort and undervalued the utilization of technology (Pierce & Stacey 2002). Lagrange (1996) found that not every student had the desire to use CAS (Pierce & Stacey 2002). The study also found that students did not want to give up pen-and-paper work and that many students enjoyed doing routine calculations (Pierce & Stacey 2002). The present authors also conducted previous studies (Pierce and Stacey 2001a, 2001b) involving examples of the individual nature of students and their responses concerning the availability of CAS. This individual nature is illustrated in the following comments.

"Student A: Sometimes I use pen and paper and (later) find you can do it on the computer. Then I prefer the computer but I don't start on the computer first or I get confused (2001b p.16). Student B: I think it (CAS) actually helps me learn new things because when there are new things that I'm learning, while I'm finding them difficult, I can use DERIVE and go through the steps. With more practice, and seeing DERIVE go through it, I pick it up myself and then I can feel confident doing it myself without a package (Pierce & Stacey 2002)."

Indeed, the significance of CAS can only be determined by the effectiveness of its utilization (Pierce & Stacey 2002). Using CAS in the classroom necessitates that students are familiar with both the hardware and the software related to CAS (Pierce & Stacey 2002). Therefore the success that students experience will depend upon their ability to master the use of the technology while simultaneously learning various mathematical skills (Pierce & Stacey 2002).


The effectiveness of their utilization will depend upon both technical and personal factors, which include the following:

• whether the student can manoeuvre the system with minimal problems (Pierce & Stacey 2002)
• the student's approach towards the utilization of CAS (Pierce & Stacey 2002)
• the method and rationale for the use of CAS (Pierce & Stacey 2002)

As a result of previous research and the authors' experience of students' learning in a CAS environment, a framework has been developed which may provide a guide for monitoring students' progress in Effective Use of CAS (Pierce & Stacey 2002).

To this end, a study was conducted which examined and monitored students' use of CAS as it related to both performing and learning mathematics (Pierce & Stacey 2002). The participants for this study were 21 undergraduate students enrolled in a 15-week (13 weeks teaching) introductory calculus course. The course involved the use of CAS (DERIVE 2.55) for teaching, learning, and assessment tasks (Pierce & Stacey 2002). The purpose of the study was to utilize the framework as a foundation for considering whether students' Effective Use of CAS improved during the course and, if there was "differential change amongst the students, which students improved in what aspects and why". Surveys, interviews and observation were used to collect data for class results and detailed case studies (Pierce & Stacey 2002).

To carry out this particular study, Technical Difficulties and Judicious Use of CAS surveys were given at the end of the laboratory classes in weeks 1, 7 and 13 (Pierce & Stacey 2002). The students were expected to reflect on their utilization of CAS during that laboratory session and respond to a succession of statements (Pierce & Stacey 2002). The authors explain that statements associated with possible problems offered the choice of 'not applicable' in addition to a 5-point frequency scale from 'never' to 'every time' (Pierce & Stacey 2002). The Judicious Use statements were offered as a single multiple-response question (Pierce & Stacey 2002). In week 1, additional questions were asked about students' previous use of technology in mathematics, and in week 13 respondents were asked to respond to statements about when they decide to utilize CAS and whether CAS is helpful (Pierce & Stacey 2002).

The purpose of this study was to explain what comprises the Effective Use of CAS (Pierce & Stacey 2002). The study aimed to present these components in an organized framework and to reveal the importance of the framework by using it to identify the evolution of the Effective Use of CAS in a group of students in a functions and calculus course (Pierce & Stacey 2002). The authors explain that there are two divisions in the Effective Use of CAS: the technical and the personal (Pierce & Stacey 2002). The purpose of these two divisions has to do with accessibility, which means that data can be collected on each of the divisions separately (Pierce & Stacey 2002). In one of the classes, students differed significantly on each of the divisions (Pierce & Stacey 2002). The data also illustrated some independence of the technical and the personal, and showed that students with positive attitudes had the capacity to be technically strong or weak (Pierce & Stacey 2002). The authors report that

"On the other hand, the study has shown that the personal and technical aspects influence each other over time: students with positive attitudes, for example, practise using CAS and so their technical ability improves. The data relating to the technical aspect was affected by the actual software used. This program has been superseded by versions with more 'user friendly' interfaces and hand-held CAS calculators, but, based on more recent teaching experiences, we still expect that the range of technical difficulty within a class is potentially wide. Overcoming technical difficulties is still a major consideration when teaching with CAS (Pierce & Stacey 2002)."


Additionally, the division of the personal aspect into two components has been successful (Pierce & Stacey 2002). This is because the two components are independent: the study found students with positive attitudes who showed Judicious Use of CAS, utilizing it strategically and with discrimination (Pierce & Stacey 2002). At the other end of the spectrum, however, a student with a positive attitude frequently used it indiscriminately and erratically, thus demonstrating low Judicious Use of CAS (Pierce & Stacey 2002). In addition, the framework was helpful in illustrating to the teacher/researcher how teaching could be changed to further stress the improvement of Effective Use of CAS (Pierce & Stacey 2002).

The authors further assert that the results of this particular study emphasize that a CAS, on its own, only has potential (Pierce & Stacey 2002). The process by which a user comes to understand the capacity of the program and makes it a sophisticated instrument to utilize (the process Artigue (2001) referred to as "instrumental genesis") still requires attention (Pierce & Stacey 2002). For this particular study, the framework was sufficiently successful in providing a structure for recording and analyzing the important workings of the process to suggest its use in larger studies (Pierce & Stacey 2002).

Hornaes & Royrvik (2000) conducted a study which addressed student attitudes towards CAS. This particular study observed these attitudes as they relate to gender and aptitude. The authors explain that in Norway (where the study took place) high school students are permitted to choose between advanced and regular options for their subjects (Hornaes & Royrvik 2000). The grades that are earned in the various subjects count towards entrance into college. Because these options are available to high school students, the authors assert that students tend to choose the least demanding options in mathematics and more advanced options in topics in which it is easier to get high grades (Hornaes & Royrvik 2000).

However, to gain entrance into engineering colleges, high school students must take advanced classes in science and mathematics (Hornaes & Royrvik 2000). In addition, within Norway the number of students attending college has increased, but there has been a decrease in the number of engineering students (Hornaes & Royrvik 2000). At the same time there are a variety of jobs available, and as a result a smaller percentage of the good students are going to engineering schools; these students are replaced in part by students who would not have aimed at an academic career at all in the past (Hornaes & Royrvik 2000). The authors further explain that in Norway basic engineering education is a two-track system consisting of a three-year program and a four-and-a-half-year program (Hornaes & Royrvik 2000). Most of the engineering students begin the three-year track, and these are usually the students with less mathematical ability (Hornaes & Royrvik 2000). Indeed, mathematics has become a hurdle for many students, who seem to have lost their proficiency in confronting mathematical challenges (Hornaes & Royrvik 2000). The authors further explain that in 1992

"SEFI (Societe Europeenne pour la Formation des Ingenieurs) published a report on engineering education in Europe: "A Core Curriculum in Mathematics for the European Engineer". It advocated that educational institutions incorporate computer programs when teaching mathematics to engineering students. Although it was not explicitly stated in the report, many teachers interpret this to mean Computer Algebra Systems (CAS). CAS has a history spanning several decades (Macsyma, Maple, Mathematica, Reduce, etc.). These computer programs were initially used by those lucky few who had access to powerful and expensive computers, and who were prepared to put up with a rather archaic user interface. During the last 10-20 years, these programs, as well as desktop computers, have become available at reduced prices and with user-friendly interfaces, placing CAS within reach of practically everyone (Hornaes & Royrvik 2000)."


The authors confirm that the latest releases of CAS programs such as Maple and Mathematica are complex tools that can perform advanced symbolic and numeric calculations (Hornaes & Royrvik 2000). These programs also have graph-plotting and text-editing capabilities, which makes them excellent tools for use in higher education, engineering education in particular (Hornaes & Royrvik 2000). As a result of the robustness of these programs, there has been a great deal of interest in their use, with an increasing number of studies finding that students learn the subjects more efficiently when they use CAS (Hornaes & Royrvik 2000). Many experts contend that CAS is a technical tool with a pedagogical problem that must be resolved (Hornaes & Royrvik 2000).

As it relates to students in Norway, the Norwegian government has established guidelines for the basic mathematics curriculum at engineering colleges (Hornaes & Royrvik 2000). The guidelines are based on the SEFI report and require that colleges use computer programs in engineering mathematics classes (Hornaes & Royrvik 2000). For the majority of colleges, CAS is used to meet this requirement. There has been a mixed reaction to these guidelines from both faculty and the colleges in general. Initially, these guidelines were seen as an opportunity to evaluate the impact of utilizing CAS in a large student population and at various institutions of learning (Hornaes & Royrvik 2000).

Indeed, there are a great number of engineering students in Norway who do not have the appropriate level of mathematical competence and are considered to be rather weak students (Hornaes & Royrvik 2000). Several studies have shown good educational results with CAS for these students. For instance, Hillel et al. focused on the possibility that CAS could actually encourage weak students and found that the students did in fact improve in mathematics (Hornaes & Royrvik 2000). In addition, in their research Rogers and Graves found success using CAS in mathematics education for students who had difficulties with mathematics (Hornaes & Royrvik 2000). The prospect of enhancing the learning environment for weak students has been one of the primary arguments for the utilization of CAS (Hornaes & Royrvik 2000).

On the contrary, other research has suggested that CAS may affect strong and weak students differently (Hornaes & Royrvik 2000). These researchers assert that the effect of CAS may actually be counterproductive for the majority of students. In addition, the researcher Child claims to have found that only 20% of mathematics students benefit from using CAS (Hornaes & Royrvik 2000). The remaining 80% do not benefit from the usage of CAS (Hornaes & Royrvik 2000). This particular researcher also asserts that for CAS to be a successful teaching tool it must be adapted to focus on individual learning (Hornaes & Royrvik 2000).

In addition, the research explains that throughout the last decade there has been a great deal of interest in the effects of gender among students in the sciences and engineering (Hornaes & Royrvik 2000). For instance, Abelson et al. have illustrated that female undergraduate students at MIT steer clear of electrical engineering and computer science more than of other courses in engineering (Hornaes & Royrvik 2000). Researchers assert that this difference may exist because those fields are notorious for being mathematics intensive (Hornaes & Royrvik 2000). In addition, other studies, such as those by Shoaf-Grubbs, Jones, and Boers and Sher, have evaluated female students who use graphics calculators or CAS and have discovered that these students gain an advantage from the use of such technologies (Hornaes & Royrvik 2000). For the most part, however, there is very little in the literature concerning gender differences and the use of CAS within engineering education (Hornaes & Royrvik 2000).

This database provides researchers with the opportunity to study gender differences in the manner in which engineering students view the benefits of using CAS (Hornaes & Royrvik 2000). The authors explain that understanding these differences is critical for the future utilization of CAS in engineering education, because it is important to know how various student populations react to CAS (Hornaes & Royrvik 2000).


Many educators believe that they must be able to answer a series of questions before being comfortable with using CAS in education (Hornaes & Royrvik 2000). For this reason the study focuses on how students' aptitude and gender relate to their attitudes towards the utilization of CAS in the classroom (Hornaes & Royrvik 2000). The study involved the Norwegian engineering student population, and the researchers attempted to determine whether CAS is more beneficial for some groups of students than for others (Hornaes & Royrvik 2000).

To answer these questions, the researchers mailed questionnaires to one individual at each college, who then distributed them to the students. Questionnaires were also distributed to teachers and administrative personnel, but these were not used as part of the final analysis (Hornaes & Royrvik 2000). The researchers received a total of 1779 answers from students, without prompting respondents to increase the rate of return (Hornaes & Royrvik 2000). As a result, the rate of return was nearly 35%, from a combination of computer science and engineering students. Many of the questions were designed to gather background information concerning the students, such as engineering major, year, marks, gender, name of college, and the utilization of CAS at the college (Hornaes & Royrvik 2000). The remaining questions addressed how the students viewed the utilization of CAS in the classroom (Hornaes & Royrvik 2000). The aim of the research was to study the differences among groups of students (based on aptitude and gender) as they related to their attitudes toward the utilization of CAS in the classroom (Hornaes & Royrvik 2000).

The results of this research indicate that both good and poor students perceived CAS as useful in the process of understanding mathematics (Hornaes & Royrvik 2000). The study also found a small difference in the attitudes that students had concerning the use of CAS (Hornaes & Royrvik 2000). The good students had a slightly more positive attitude toward the use of the technology than did the poor students (Hornaes & Royrvik 2000). The study also found that good students believed that CAS was simple as it related to both input and output syntax. Indeed, the study found a noticeable difference between the good and the poor students, with the better students reporting fewer problems with the utilization of CAS (Hornaes & Royrvik 2000).

The authors further explain that it is unclear from the answers whether the syntax problems are truly syntax problems or simply a modeling problem that is independent of the CAS (Hornaes & Royrvik 2000). Because the amount of syntax needed for elementary use of a CAS is limited, the researchers believe that a significant part of the difficulties may be due to modeling problems (Hornaes & Royrvik 2000). This is a problem that also occurs when students solve mathematics problems by hand (Hornaes & Royrvik 2000). Therefore, if this interpretation of the data is correct, one could conclude that the students must first work on their modeling; if their capacity to model improves, then the syntax problems may decrease (Hornaes & Royrvik 2000). Nevertheless, some of the syntax problems that were reported may have been authentic and should be addressed (Hornaes & Royrvik 2000). One solution is to require more frequent use of CAS (Hornaes & Royrvik 2000). In addition, students must be given problems in formats that introduce new CAS syntax gradually (Hornaes & Royrvik 2000). The authors explain that

"As expected, students find it much easier to understand the results returned by CAS than to formulate the mathematical model in CAS syntax. It appears from the tables that students consider themselves quite capable of interpreting the results obtained from the CAS programs. However, we suspect that the situation is not that encouraging. Obtaining any result gives a good feeling, but it is easy to overlook that these results need to be interpreted in two ways. First, what does the output mean mathematically, and second, but more important, how are the results related to the physical world. We feel, without being able to give other than anecdotal evidence, that most students do not spend enough time interpreting results, and therefore do not get a clear idea about their own ability to interpret the results. We do, however, believe that they are able to understand the output syntax without too many difficulties (Hornaes & Royrvik 2000)."
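The input-syntax difficulties reported here are specific to the packages the Norwegian students used and are not detailed in the article. As a loose analogy, the sketch below shows comparable pitfalls in Python/SymPy, where the way an expression or equation is typed changes what the system computes; the equation is an arbitrary example.

# Analogy only: how input syntax changes what a CAS computes (SymPy example).
from sympy import symbols, Eq, Rational, solve

x = symbols('x')

# An expression passed to solve() is implicitly treated as "... = 0".
print(solve(x**2 + x - 6, x))        # [-3, 2]

# An explicit equation must be written with Eq(...), not with Python's "==".
print(solve(Eq(x**2 + x, 6), x))     # [-3, 2]

# Typing 1/3 gives a floating-point coefficient; Rational(1, 3) stays exact.
print(x + 1/3)                        # x + 0.333333333333333
print(x + Rational(1, 3))             # x + 1/3

Introducing such conventions gradually, as the authors recommend, is largely a matter of sequencing the worksheet examples.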


Conclusion

Overall, this section of the literature review reveals some of the problems and successes associated with the utilization of CAS. It is apparent that some of the tools that utilize CAS technology have programming nuances that can be difficult for students to overcome. In addition, there are some issues associated with the overall implementation of CAS in the classroom. These issues are most often related to modeling problems and to the differing levels of mastery among students. The research indicates that it is important to address these issues, because it is essential that, if CAS is utilized, it is utilized effectively.

This section of the research also focused on student attitudes toward CAS in the classroom. It is apparent that many students benefit from the use of CAS. This is particularly true of students who were already proficient in mathematics. The research suggests that students tend to have better attitudes toward CAS when they have been properly taught how to use the technology and when they understand the mathematical concepts being presented.

Survey papers

Barton S. (2000), presented at the Atlanta 2000 ICTCM

Research Concerning the Achievement of Students Who Use Calculator Technologies and Those Who Do Not

The research findings of Barton (2000) explored the use of calculators and computers in the classroom over a 30-year period. In addition, research has been conducted to assess the effect of the calculator or computer on student learning. Beginning in the late 1970s, studies were gathered and analysed to determine the effect of calculator use in mathematics classrooms.

For example, Suydam (1976, 1980) created a report that analyzed 75 studies from the late 1960s through the 1970s relating to the effects of calculator use on mathematics education. The studies addressed the areas of achievement in traditional instruction, achievement within a special curriculum, and student attitude toward mathematics. Over 95 comparisons were made. In 47 of the comparisons, no significant difference was found. The studies were primarily at the elementary school level. Suydam's findings suggest the use of calculators does not adversely affect student achievement, and can actually result in higher achievement when compared with non-calculator usage.

In addition, Hembree and Dessart (1986) combined the information from Suydam and other studies into one meta-analysis. Over 70 studies with quantitative data comparing calculator-based instruction to traditional instruction were used in the analysis. About half of the studies found no significant difference in the achievement of students who used calculators compared with those who did not. However, the results of the analysis on overall achievement found that most grade levels were significantly and positively affected by the use of calculators, even though many of the studies did not allow calculators on the exam. Results of the meta-analysis found that average-ability students at all grade levels who used calculators performed significantly better than the non-calculator group on computation and problem solving.


In another meta-analysis, Smith (1996) found over 30 studies that were completed after Hembree's review. Smith's review included studies in grades K-12 from 1984 to 1995. Results found significantly higher achievement for students who used calculators for problem solving, computation, and conceptual understanding compared with students who did not use calculators. By the late 1980s, graphing calculators began to appear more frequently in mathematics classrooms. Smith included eight secondary school comparison studies involving the graphing calculator. Analysis of the studies found no significant difference in achievement between students who used a graphing calculator to graph mathematical functions and those who did not.

Thirty studies were collected from dissertations and journal articles published from 1986 to 1995. Mathematics topics included functions, algebra, linear programming, finite mathematics, statistics, and business, applied, and science calculus. Computer-enhanced instruction in the studies included teacher demonstration using a single computer and a classroom display unit, student use of a graphing or programmable calculator, or students (singly or in pairs) using microcomputers in a laboratory setting. Results of the analysis include:

(a) a statistically significant positive influence was found on overall achievement when the computer or graphing calculator was used;
(b) no significant effect was found between technology and control groups on procedural achievement;
(c) however, a significant favourable effect resulted on procedural achievement when the experimental group students were allowed to use technology during testing;
(d) when experimental group students were denied use of technology on tests, procedural achievement (though not statistically significant) was adversely affected;
(e) instructional use of computers and graphing calculators both as a tool and for demonstration was the most beneficial to all achievement;
(f) access to a graphing calculator only in the classroom or lab had a slightly adverse effect on conceptual achievement.

King (1997) performed a meta-analysis to determine the effect of computer-enhanced instruction on college-level mathematics. Since Smith's (1996) analyses and King's (1997) review were completed, more than 60 studies investigating the impact of graphing utilities on mathematics instruction have been conducted. Those studies from the past decade that examined the effect of graphing technology (including computer algebra systems, CAS) with control groups not using the technology were compiled. Mathematical concepts from algebra through calculus, in both high school settings and college courses, were included in the review.

Results discussed in this paper include student overall achievement, conceptual understanding, and procedural knowledge. Eight studies for this review came from Smith (1996), sixteen appropriate studies came from King (1997), and 28 more were gathered from computer-assisted searches of Dissertation Abstracts International, Education Abstracts, British Education Abstract, ERIC, and Humanities and Social Science Abstracts. Fifty-two studies were found to meet all the criteria for this review: 5 at the beginning algebra level, 4 high school Algebra II, 9 high school precalculus, 3 high school calculus, 4 college-level elementary or intermediate algebra, 14 college algebra, 5 college precalculus (including trigonometry), and 8 college calculus studies. The studies included 40 dissertations, 3 master's theses, 7 journal articles, and 2 proceedings articles.

Results for Overall Achievement, Barton (2000)

One question of major interest concerning the use of technology in mathematics courses is how the overall achievement of students who use graphing technology or CAS as an aid to learning compares with that of students not using the technology. To address this issue, 46 of the studies found in


Maple was also put to good use. The Maple sessions were organised in such a way that the students could have regular contact with Maple as part of their course.

The research activity for this study involved an evaluation of the students' response to the new Maple materials and their delivery. The purpose was to determine how useful the students found the Maple tasks, how far the tasks had helped with the development of their understanding of the mathematics involved, and how competent the students felt as Maple problem-solvers. Initially, paper-and-pencil tasks were incorporated with the Maple tasks, but this was discontinued (by the later part of 2003) following student feedback.

Development of the Maple Resources

In developing the Maple resources, the aims were similar to those in the previous studies (Sanders 2003). The students were to extend their familiarity with Maple as a problem-solving tool in a way that would contribute positively to their experience of the course and encourage a better understanding of the mathematics involved. The Maple worksheets were upgraded by the researcher and supervisor. With only one hour per week available for the Maple sessions, there would not be enough time to cover all the topics covered in lectures. In determining what subject content should be included in the Maple sessions, the MA1143 subject content was considered in terms of the following:

• What were the aspects with which students typically had the most difficulty?
• Were there any topics that would seem to be especially suitable for exploring with Maple?

On this basis, it was decided that the Maple sessions would be structured as follows:

1. Maple Refresher
2. First order Differential Equations
3. Second order Differential Equations
4. Modelling with Differential Equations
5. Surfaces and Space Curves
6. Vectors, lines and planes
7. Directional Derivatives
8. Taylor Series
9. Solving Systems of Linear Equations
10. Matrix Algebra
11. Eigenvectors and Eigenvalues

A description of the development process for each Maple worksheet is given below. Each Maple worksheet includes a written explanation of the topic, exercises and an assignment component. The assignment work was to be handed in by the students at the end of each one-hour session. This was so that the organisation of marking and returning work would be straightforward, and also so that the students would not end up spending an undue amount of time on Maple assignments outside the scheduled class times. The students' overall Maple mark would be calculated using a 'best of' formula (best 8 out of 10 assignments, since the Maple Refresher does not include an assignment). The Maple worksheets were put onto the course webpage using Blackboard so that the students could access the worksheets directly at the beginning of the session, or sometimes even earlier.

The Maple worksheets, Report of the Findings so far, Weekly Observations and Work Samples

Maple Refresher

The MATH1143 students had some experience with Maple during the previous semester. However, there was a gap of ten weeks since the last Maple session, so it was important to give the students the opportunity to refresh their memories of some basic Maple commands. The aim here was that the students would then be better equipped to learn some new commands the following week, and that they would also be more confident and positive about the Maple sessions if they had the chance to review the previous work. During the first session of the semester, the students accessed the Maple Refresher worksheet. In-class observations showed that the students did not have a very clear memory of how to select and enter the basic commands needed to expand and factorise expressions and solve equations. This first session showed that although the students had had recent experience with Maple, they were not at all confident in using Maple at this stage.


First order Differential Equations

This worksheet was the first that would form a component of the evaluated work. The students had been studying ordinary differential equations in the lectures. Given that the visualisation aspect of Maple had proved useful previously, this worksheet looked at a practical application of solutions to first order differential equations in the form of Newton's Law of Cooling. An example showed the code used to represent and solve the differential equation for certain initial conditions.

It was clear that some of the difficulties the students experienced with the tasks were Maple difficulties: problems with the syntax for a particular command, or difficulties in finding errors. The students working in small groups were often dividing the tasks amongst themselves rather than all going through the tasks together. The group-work situation enabled the students to discuss what they were doing and often figure out solutions amongst themselves, while the students working individually often had to ask for help as soon as they encountered a problem.

Second order Differential Equations

The next week's Maple session moved on to look at the solution of second order ordinary differential equations, which the students had been taught in their lecture classes. The auxiliary quadratic had real roots, and the Maple code to solve the equation was presented as an example. Most students were able to complete the full worksheet here. The students were also now more competent in using Maple and were familiar with the general approach for the class. Some students were trying to go straight to the exercises without looking at the examples. Most students were able to identify the type of solution from the auxiliary equation, and they were able to pick out the particular integral from a general solution. This worksheet was successful on the whole in teaching the students how to use Maple to solve second order equations.

An issue often raised with regard to the use of CAS is the fact that much of the computational work is hidden, and activities need to be designed carefully to encourage the students to think about what the software is doing. The activity based on this worksheet shows that good design can have students thinking about the mathematics.
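The worksheet code itself is not reproduced in the article, so the following is a hedged sketch, in Python/SymPy rather than Maple, of the kinds of equations described: Newton's Law of Cooling with an initial condition, and a constant-coefficient second order equation whose auxiliary quadratic has real roots. The numerical values are invented for illustration.

# Illustrative sketch of the worksheet equations (the class used Maple).
from sympy import symbols, Function, Eq, Rational, dsolve

t = symbols('t')
T = Function('T')
y = Function('y')

# Newton's Law of Cooling: dT/dt = -k (T - T_ambient), with made-up values
# k = 1/10, ambient temperature 20 and initial temperature T(0) = 90.
cooling = Eq(T(t).diff(t), -Rational(1, 10) * (T(t) - 20))
print(dsolve(cooling, T(t), ics={T(0): 90}))   # T(t) = 20 + 70*exp(-t/10)

# Second order equation whose auxiliary quadratic r**2 - 3r + 2 has real roots 1 and 2.
second_order = Eq(y(t).diff(t, 2) - 3 * y(t).diff(t) + 2 * y(t), 0)
print(dsolve(second_order, y(t)))   # general solution built from exp(t) and exp(2*t)

As with the point about hidden computation above, the calls return finished solutions; identifying the type of solution from the auxiliary equation is still left to the student.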
Students would not be able to interpret the animation code or write their own animations, but in terms of visualising the problem it is very helpful to provide animations for them. One of the benefits promoted for the use of CAS in teaching mathematics is that more realistic situations can be modelled. This worksheet was again generally successful.

Surfaces and Space Curves

The Maple sessions were next used to look at functions of several variables. Maple was used to enhance the students' visualisation of the three-dimensional graphical representations of functions of two variables and of space curves.
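As an illustration of the kind of plotting commands this worksheet draws on, the sketch below plots an elliptic paraboloid with its level curves, a helix, and a cylinder; the particular functions and ranges are illustrative choices, not necessarily those used in the worksheet.

```maple
with(plots): with(plottools):
# An elliptic paraboloid, its contour (level-curve) plot, a helix as a space
# curve, and a cylinder drawn with the plottools command the students found.
f := (x, y) -> x^2/4 + y^2/9;
plot3d(f(x, y), x = -4 .. 4, y = -6 .. 6);          # rotatable 3-D surface
contourplot(f(x, y), x = -4 .. 4, y = -6 .. 6);     # level curves (ellipses)
spacecurve([cos(t), sin(t), t/5], t = 0 .. 6*Pi);   # a helix
display(cylinder([0, 0, 0], 1, 4));                 # cylinder from plottools
```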


Most students had no difficulty generating a plot of a given elliptic paraboloid and identifying the level curves from the contour plot. One of the useful features of three-dimensional plots in Maple is the facility to rotate the surface.

Common difficulties arose when plotting the cylinder and the helix. The equation of the surface of the cylinder was difficult for some of the students to obtain, and many used the 'cylinder' command from the plottools package, which they found by using the help facility.

On the whole, the students managed the tasks in this worksheet well and seemed to enjoy being able to use the three-dimensional graphics capabilities of Maple. The visualisation aspect is paramount here, as the facility to pick up and rotate the surfaces gives the students a much better appreciation than they would gain from lectures. Three of the students left written comments in their assignments saying how impressed they were by the graphics of the planes. The students were very active, and their progress in Maple was evident.

Vectors, lines and planes

The focus of this worksheet was basic vector work and the commands needed to plot lines and planes. The students were shown examples of how Maple commands can be used to perform basic algebra with vectors and to find unit vectors, scalar and vector products, and the angle between vectors. Most students were able to correctly complete the tasks, which required them to superimpose an arrow on a plane to represent the normal vector. There were some difficulties in identifying the vectors required for the arrow command, but these were usually quickly resolved.

The responses to the worksheet were again positive. The students were able to use the basic vector commands effectively, and the use of three-dimensional plots helped the students to visualise the concept of a normal vector and the angle between planes.

Directional Derivatives

The aim of the directional derivatives worksheet was again to use Maple, and in particular visualisation, to help students understand a concept, in this case the directional derivative. In this worksheet, most students were able to construct a plot of a given surface and show a particular point on the surface. They were then asked to find the gradient vector at the given point. Most of the students needed help finding the three directional derivatives they were asked for, and had had problems understanding the topic from the lectures.

This worksheet demonstrated a situation in which initial confusion is sometimes evident before the students reach a better understanding of a concept. Some students had difficulty relating the directional derivative to the context given, but once they had done this they had a stronger conceptual knowledge of the meaning of the directional derivative and how it might be used. This worksheet was the most difficult for the students, and some could not complete the assignment by the end of the class.

Taylor Series

The derivation of the Taylor Series for a given function requires significant computational work, and the aim of this worksheet was to use Maple to carry out the procedures and allow the students to focus on the overall structure of the Taylor polynomials produced.

The first task for the students was to find an expression for the nth term of a Taylor Series based on the first few terms derived by Maple in an example. Most students were able to predict the next term in the series, but many needed help to construct the nth term definition from there.
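A sketch of the kind of Maple commands on which such a task can be based is shown below; the function, expansion point and order are illustrative choices rather than the worksheet's actual example.

```maple
# Taylor polynomials of an illustrative function about x = 0.
s := taylor(sin(x), x = 0, 8);        # x - x^3/3! + x^5/5! - x^7/7! + O(x^8)
p := convert(s, polynom);             # drop the order term so it can be plotted
plot([sin(x), p], x = -2*Pi .. 2*Pi, -2 .. 2,
     legend = ["sin(x)", "degree 7 Taylor polynomial"]);
# From the printed terms the students are asked to infer the general term
# of the series, here (-1)^n*x^(2*n+1)/(2*n+1)!.
```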
The need for this help suggested that the students were not as experienced in performing such a task as might have been expected. For some students, the relationship between the radius of convergence and the convergence of the polynomials was not obvious.

This worksheet seemed to present the students with more mathematical difficulties; these lay in working out the nth terms and the radius of convergence. The intention was that the students could instead focus on the relation of the polynomials to the graphs.


Solving Systems of Linear Equations

Equations in three variables were used, so the visualisation element was brought in again by representing each equation as a plane and considering the intersection of the planes.

Most students reported that they preferred to use the 'LinearSolve' command, as it enabled the solution to be found immediately, but some students said that they preferred to use 'RowOperation' because they could "see what was going on."

Overall, the worksheet worked well. It was confirmed that using the 'RowOperation' command enabled the students to concentrate on the overall strategy without having to deal with detailed computations. The students were by now also very familiar with the plot commands, and the visualisation of the solutions was seen to be helpful.

Matrix Algebra

The next matrices worksheet used Maple to explore some results in matrix algebra, particularly involving inverses and transposes. The row reduction method for finding a matrix inverse was also presented.

In this worksheet, the first task was to establish whether certain combinations of operations on matrices are equivalent, by using Maple to compute particular examples. Most students were able to establish the correct results, but many did not present the steps very clearly, and some got into difficulty by reusing a name for a matrix and not realising that Maple would still be using the previous definition. Some students had particular difficulty when a long series of operations needed to be computed.

There was a generally positive response to this worksheet. Some students expressed pleasure and surprise at being able to write out the pencil-and-paper proofs so succinctly in the second question. The students were generally able to work through the exercises quite quickly once some initial difficulties were overcome.

Eigenvectors and Eigenvalues

The final Maple worksheet was an adaptation of a previous worksheet dealing with eigenvectors and eigenvalues. Although students generally find the method for finding eigenvalues and eigenvectors reasonably straightforward, they often do not have any useful conceptual idea of what their results mean. This worksheet used a visual representation of the eigenvector in two dimensions to demonstrate that an eigenvector is a vector whose direction is unchanged under the transformation represented by the given matrix; the eigenvalue could also be clearly seen as the corresponding scale factor. The animation showed the relationship between the original vector and its image, depending on the position of the original vector.

For some students, the animation presented a useful visualisation of the eigenvector concept. Overall, the group said that it had been interesting to learn to use a mathematics software system, as they had not come across anything like it before.
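A sketch of the kind of commands that could underlie such a worksheet is given below; the matrix and the animation of a rotating unit vector against its image are illustrative constructions, not the worksheet's actual code.

```maple
with(LinearAlgebra): with(plots):
A := Matrix([[2, 1], [1, 2]]);         # illustrative symmetric matrix
Eigenvectors(A);                       # eigenvalues 3 and 1 with their eigenvectors
# Draw a rotating unit vector v (blue) together with its image A.v (red);
# the two arrows line up exactly when v points along an eigenvector, and
# the eigenvalue is then visible as the scale factor.
v := phi -> Vector([cos(phi), sin(phi)]);
frame := phi -> display(arrow(v(phi), colour = blue),
                        arrow(A . v(phi), colour = red),
                        scaling = constrained);
animate(frame, [phi], phi = 0 .. 2*Pi, frames = 60);
```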
Research Design

As in the previous studies (Saunders, 2003), a combination of quantitative and qualitative methods was used to evaluate the students' responses to the Maple sessions for MATH1143. The aims of the sessions were to provide an interesting and motivating activity, to help students develop a better understanding of the mathematics involved, and to give them some experience of using a sophisticated CAS to solve problems. The research design needed to address the evaluation of the Maple sessions in relation to these aims. Furthermore, given that the Maple sessions were a regular and significant part of the course, students became more experienced Maple users as the semester progressed. The consideration of the students' conceptual and procedural levels described previously could also be addressed in relation to their continuing development here.


Notes were made on the students' reactions to the worksheets, and work samples from the groups were also analysed.

The first semester course MATH1142 in 2004

In 2004, the Maple classes for MATH1142 were still run in a block at the end of the semester. This clearly appears to be less desirable than spreading them through the semester, as is done with the second semester course MATH1143.

MATH1142 is a traditional calculus course where Maple is used in separate lab sessions to support the teaching and learning. We use Blackboard to post the teaching and assessment materials (Maple files) on the web. We have analysed in detail two assignments that were individualized for each small group. The solutions are Maple files submitted to proxy email accounts. They are marked with the overall marks distribution and detailed comments interspersed throughout the Maple file, all in a new paragraph style coloured dark green. The marked Maple files are then emailed back to each student.

One of these assignments focuses on numerical integration using the trapezoidal and Simpson's rules (a sketch of the kind of computation involved is given at the end of this subsection). After careful analysis of student work, we will now re-design this assignment to be submitted and marked by AIM, a computer-based assessment system that uses Maple to interrogate the answers and provide feedback on particular errors, such as ways in which Simpson's rule has been incorrectly programmed.

In another individualized assignment, students use our version of the Polya problem-solving approach, with Maple, to maximise the area in the Norman window problem. A labelled diagram is required, which is something that computer-assisted assessment (CAA) programs do not help with. We emphasize that Maple files should include graphics and a "write-up", and we propose (Blyth and Labovic, 2004) that CAA tools should provide a semi-automatic marking mode in which some text and graphics can be marked by the lecturer, with the computations (symbolic and numeric) marked automatically.

We have experimented with various forms of e-assignments. We want to be able to assess how and what we value. Some use of CAA is desirable and would reduce the high demand on staff time needed to annotate, mark and return the individualized assignments by email.
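For reference, the sketch below shows the kind of computation the numerical integration assignment asks for, comparing composite trapezoidal and Simpson approximations with Maple's own value of the integral; the integrand, interval and number of subintervals are illustrative values, not those of the assignment.

```maple
# Composite trapezoidal and Simpson approximations to int(f(x), x = a..b),
# compared with Maple's own evaluation (illustrative integrand and values).
f := x -> exp(-x^2):
a := 0:  b := 2:  n := 10:            # n must be even for Simpson's rule
h := (b - a)/n:
trap := h*(f(a)/2 + add(f(a + i*h), i = 1 .. n - 1) + f(b)/2):
simp := h/3*(f(a) + f(b)
        + 4*add(f(a + (2*i - 1)*h), i = 1 .. n/2)
        + 2*add(f(a + 2*i*h), i = 1 .. n/2 - 1)):
evalf([trap, simp, int(f(x), x = a .. b)]);
```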


Maple Survey

In addition to the research described above, an additional survey was given to the students. The survey included the following questions.

1. Please pick your group.
   Maths
   Others

2. We had three types of assignments in our lab classes. Express yourself about each one.
   a) I preferred to submit a hardcopy assignment.  SD  D  N  A  SA
   b) I preferred to submit electronically (via email).  SD  D  N  A  SA
   c) The animation assignment and presentation was interesting and helped me learn math.  SD  D  N  A  SA
   d) The curve translation (with automatic marking) helped me learn maths.  SD  D  N  A  SA

3. Cooperative work helped me learn.  SD  D  N  A  SA

4. The graphs produced in the Maple sessions were helpful to my understanding.  SD  D  N  A  SA

KEY: SD - Strongly Disagree; D - Disagree; N - Neutral; A - Agree; SA - Strongly Agree

Results of the Survey

At the end of the 2005 semester, the researchers conducted the survey to gauge the students' satisfaction with some of the assignments they had to complete. A total of 60 of the 122 enrolled students submitted the survey; thirty-one of the participants were candidates for "other" degrees and 29 were math degree candidates. The information collected was very satisfactory. The results are as follows.

Question 2: We had three types of assignments in our lab classes. Express yourself about each one.

Question 2(a): I preferred to submit a hardcopy assignment.

Results     Other Degree Candidates     Math Degree Candidates
SD                     6                           0
D                     17                           2
N                      6                           5
A                      2                          12
SA                     0                          10
Total                 31                          29

The results for question 2(a) seem to indicate that most math degree candidates prefer to turn in hard copies of their assignments: nearly 76% "Agree" or "Strongly Agree" with submitting hard copies. On the other hand, students not seeking math degrees most often disagree with submitting hard copies. In addition, equal numbers of the other degree students strongly disagree or are neutral concerning the issue of submitting hard copies of their assignments.

Question 2(b): I preferred to submit electronically (via email).

Results     Other Degree Candidates     Math Degree Candidates
SD                     0                           0
D                      4                           2
N                      6                          10
A                     11                           8
SA                    10                           9
Total                 31                          29

As regards submitting assignments electronically, 68% of other degree candidates "agree" or "strongly agree" that they prefer to submit assignments electronically, and nearly 60% of math degree candidates either "agree" or "strongly agree". Only 10% of all the students who answered this question disagreed with submitting assignments electronically, and 27% of all the participants were neutral on the issue.

Question 2(c): The animation assignment and presentation was interesting and helped me learn math.

Results     Other Degree Candidates     Math Degree Candidates
SD                     0                           0
D                      0                           2
N                      6                           1
A                     15                          17
SA                    10                           9
Total                 31                          29

As regards the question of animation assisting the learning experience, it appears that almost equal numbers of other degree students and math degree students believed that the animation was helpful in assisting their learning of math.


An estimated 81% of other degree students and 89% of math degree students either "agree" or "strongly agree" that the animation was helpful. Only 3% of all the participants believed that the animation was not helpful, and all of these were math degree students. Only 7 of the 60 participants were neutral on this particular issue.

Question 2(d): The curve translation (with automatic marking) helped me learn math.

Results     Other Degree Candidates     Math Degree Candidates
SD                     0                           0
D                      4                           2
N                      6                          10
A                     11                           8
SA                    10                           9
Total                 31                          29

The results for this survey question indicate that 68% of other degree candidates "agree" or "strongly agree" that the curve translation featuring automatic marking assisted them in learning math, while only 60% of math degree candidates believed that it was helpful in understanding math. In addition, a total of 10% of all respondents disagreed concerning the helpfulness of the curve translation featuring automatic marking.

Question 3: Cooperative work helped me learn.

Results     Other Degree Candidates     Math Degree Candidates
SD                     0                           0
D                      4                           2
N                      6                          10
A                     11                           8
SA                    10                           9
Total                 31                          29

The responses to this question indicate that most of the participants benefited from cooperative work. None of the participants strongly disagreed concerning the efficacy of cooperative learning. Nearly 70% of the other degree candidates believed that cooperative work was beneficial, while 60% of math degree candidates asserted that cooperative work was helpful. However, 34% of the math candidates and 19% of the other degree students were neutral on this issue.

Question 4: The graphs produced in the Maple sessions were helpful to my understanding.

Results     Other Degree Candidates     Math Degree Candidates
SD                     0                           0
D                      1                           2
N                      6                           5
A                     14                          13
SA                    10                           9
Total                 31                          29

Concerning the final survey question, the majority of all respondents agreed or strongly agreed that the graphs produced in the Maple sessions assisted their learning. In fact, 77% of the other degree students "agree" or "strongly agree" that the graphs were beneficial, and a nearly identical percentage (76%) of the math degree students also believed that the graphs were beneficial. Only 5% of all participants disagreed that the graphs were beneficial, and none of the participants "strongly disagree" that the graphs were beneficial.

Discussion of Survey Results

The overall results of this survey seem to indicate that students who are not seeking math degrees benefit more from computer technology as it relates to the learning of math. They also indicate that students who are math degree majors prefer to submit hard copies of their assignments.


While it is difficult to surmise why the results show these disparities, they do seem to indicate that those who already have a firm understanding of math benefit less from math-related technologies. It also appears that those who have a firm foundation in math prefer to turn in hard copies of assignments. It can be concluded that students who do not already have an acumen for math are the ones most likely to benefit from the Maple technology, whereas students who do have this acumen are less likely to benefit from it.

Conclusions and future work

This research has provided an in-depth analysis of the literature that exists on this topic. The research indicated that there has been a significant increase in the utilization of technology in the areas of science and mathematics. As students go to college and move into the workforce, math skills and technological skills become important components in ensuring job security. As a result, instructors must understand the importance of taking advantage of technology while teaching valuable mathematics skills.

The investigation found that technology is needed and useful in the comprehension of mathematics. The research also found that teachers must be properly prepared for the use of technology within the classroom. This means that teachers should be presented with certain guidelines and learn how to implement them when teaching mathematics while simultaneously using technology.

The research also found that the proper use of CAS has been associated with increases in the amount of mathematical information that students retain. Computer Algebra Systems play an important role in mathematics, particularly with regard to students and their perceptions about learning algebra through CAS.

Student attitudes toward CAS technology seem to reflect the level of mastery of the mathematical concepts involved and the quality of teaching associated with the utilization of CAS. In other words, students who had mastered various mathematical concepts and knew how to use CAS properly had a more positive attitude toward the effectiveness of CAS.

Overall, the literature review indicates that there is a need for CAS and the use of technology in the teaching of mathematics. The literature indicates that the No Child Left Behind Act has played a significant role in improving mathematics education throughout the country. In addition, research has indicated that technology is a critical component in the enhancement of mathematical skills, and a review of the research confirms this assertion. It is also apparent that there are some issues associated with the use of CAS and the preparation of pre-service teachers.

As it relates to the current study, it was clear from the consideration of previous studies (Saunders, 2003) and from our work to date that there is a variety of responses to the Maple sessions and to CAS as a whole. It is also apparent that different students can have quite different attitudes towards the same activity.

As it relates specifically to the Maple activities, the results indicate that most students found some benefit in the Maple activities with regard to their mathematical understanding, and the visualisation capabilities once again seemed to be significantly useful. Students are generally positive about the motivational impact of the Maple sessions. Most students gain an impression of the usefulness of Maple as a mathematics tool.
The level of problem-solving competence with Maple is reasonable, but many students rely on the example code and would have difficulty typing in commands from scratch. Many students commented that they would have valued more time on the Maple activities. Many developed a reasonable level of competence with the basic commands and were often able to present a correct Maple solution.

In 2003 the use of paper-and-pencil tasks was reduced at the request of the students, so the previous finding supporting paper-and-pencil tasks would appear not to be appropriate for the future. The majority of students worked together as a team to produce their solutions, even though they may have divided tasks amongst themselves. This meant that they discussed, explained and justified their work to each other.


At this stage, the following points can be made regarding the effective use of Maple activities in first year service courses:

- It is helpful for the Maple activities to be closely integrated with the work covered in lecture classes.
- Maple work is better scheduled throughout the semester.
- Students organised to work in small groups appear to be well motivated and successful with the tasks.
- Further work is required to develop procedures to include text and graphs in semi-automatic marking.
- Further work is required on how to modify AIM (software for automated marking) so that it can capture some parts of assignments, such as corrections to program code for Simpson's rule (as discussed above), and provide diagnostic feedback for students.

Overall, it is suggested that further research investigate the most effective programs that utilize CAS. Doing so could provide a standard for the future utilization of these technologies in schools throughout the country. In addition, it might be important to investigate whether Maple or some other form of CAS is most effective in the classroom; understanding the most effective type of program may assist curriculum developers in the future. There must also be research concerning the most effective teaching tools for teachers as they, too, learn how to utilize this technology.

The results of the Maple survey are indicative of the idea that those with an existing ability for math are less likely to benefit from math-related technologies. However, there seems to be some benefit associated with the use of math technology for college students who are not math majors: technology appears to be beneficial for this population and to assist in learning new math concepts. It also seems that math degree majors are more likely to prefer submitting hard copies of their assignments, even though they are also comfortable submitting assignments electronically. The survey also indicates that students benefited from working cooperatively; future research should include studies concerning this phenomenon.

There are many aspects of research that have not yet been addressed as they relate to CAS. Future research must make more effective use of classrooms that have incorporated such technology at every level of education. Conducting such research is a critical component in the successful development of CAS curricula and the improvement of mathematics scores.

References

Ball, D. L. (2003). Mathematical Proficiency for All Students: Toward a Strategic Research and Development Program in Mathematics Education. Rand: Santa Monica, CA.

Barton, S. (2000). What Does the Research Say about Achievement of Students Who Use Calculator Technologies and Those Who Do Not? E. Proc. ICTCM 13 (Atlanta 2000), 13-C25, available at the URL: http://archives.math.utk.edu/ICTCM/

Blyth, B. (2001). Animations using Maple in First Year. Quaestiones Mathematicae, Suppl. 1, 201-208.

Blyth, B. (2002). Finite Element Methods: Presentation and Animation using Maple. In J. Böhm, editor, Proceedings of the Vienna International Symposium on Integrating Technology into Mathematics Education (VISIT-ME-2002), bk teachware, Austria (CD-ROM).

Blyth, B. (2003). Visualization of slicing diagrams for double integrals using Maple. In R.L. May and W.F. Blyth, editors, EMAC2003 Proceedings, Proceedings of the Sixth Engineering Mathematics and Applications Conference, 7-12. Engineering Mathematics Group, ANZIAM, Australia.


Blyth, B. (2003). Geometry of surfaces using Maple. New Zealand Journal of Mathematics, 32 Supplementary Issue, 29-36.

Blyth, B. and Labovic, A. (2004). Assessment of e-Mathematics with Maple. Proceedings of the Asian Technology in Mathematics Conference (ATCM 2004), Singapore, to appear.

Bransford, J. D., Brown, A. L. and Cocking, R. R. (1999). How People Learn: Brain, Mind, Experience, and School. Washington: National Academy Press.

Bowden, J. and Marton, F. (1998). The University of Learning: Beyond Quality and Competence in Higher Education. London: Kogan Page; Sterling, VA: Stylus Publishing.

Bossé, M. J. and Nandakumar, N. R. (2004). Computer Algebra Systems, Pedagogy, and Epistemology. Mathematics and Computer Education, Fall 2004. http://findarticles.com/p/articles/mi_qa3950/is_200410/ai_n9464780

Bussi, M. B., English, L. D., Jones, G. A., Lesh, R. A. and Tirosh, D. (2002). Handbook of International Research in Mathematics Education. Lawrence Erlbaum Associates, Mahwah, NJ.

Engelbrecht, J. and Harding, A. (2003). Online Assessment in Mathematics: Multiple Assessment Formats. New Zealand Journal of Mathematics, 32 Supplementary Issue, 57-65.

Galbraith, P. (2002). Life wasn't meant to be easy: Separating wheat from chaff in Technology Aided Learning.

Gravely, A., Lawrenz, F., & Ooms, A. (2006). Perceived Helpfulness and Amount of Use of Technology in Science and Mathematics Classes at Different Grade Levels. School Science and Mathematics, 106(3), 133+.

Holdener, J. (1997). PRIMUS, VII No. 1, 62-72, available at the URL: http://www.dean.usma.edu/math/pubs/primus/Back_issue_TOC.htm

Hembree, R. and Dessart, D. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17, 83-99.

Kahn, P. & Kyle, J. (Eds.). (2002). Effective Learning & Teaching in Mathematics & Its Applications. London: Kogan Page.

King, H. J. (1997). Effects of computer-enhanced instruction in college level mathematics as determined by a meta-analysis. (Doctoral dissertation, The University of Tennessee, 1997). Dissertation Abstracts International, 59(1), 114A.

Kent, P. and Stevenson, I. (1999). "'Calculus in Context': A Study of Undergraduate Chemistry Students' Perceptions of Integration." Paper given at the conference Psychology of Mathematics Education 23 (July 1999).

Lederman, N. G., & Niess, M. L. (2000). Technology for Technology's Sake or for the Improvement of Teaching and Learning? School Science and Mathematics, 100(7), 345.

Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? CAL-laborate, October 2002, 6-12. UniServe Science, University of Sydney, Australia.

LTSN in Maths, Stats and OR. Computer-Aided Assessment Series. Available at the URL: http://ltsn.mathstore.ac.uk/articles/maths-caa-series/

Marx, S. (2005). Improving Faculty Use of Technology in a Small Campus Community. T.H.E. Journal, 32(6), 21.

Mueller, U.A., Forster, P.A. and Bloom, L.M. (2002). CAS in the Classroom: A Status Report.


In W.-C. Yang, S.-C. Chu, T. de Alwis, and F.M. Bhatti, editors, Proc. 7th Asian Technology Conference in Mathematics: ATCM 2002, pages 284-293. ATCM, Inc., USA, 2002.

Park, K. and Travers, K. J. (1996). CBMS Issues in Mathematics Education, Volume 6, pp. 155-176.

Pierce & Stacey (2002). Monitoring Effective Use of Computer Algebra Systems. http://extranet.edfac.unimelb.edu.au/DSME/CAS-CAT/publicationsCASCAT/2002Pubspdf/PierceStaceyEUCAS.pdf

Quellmalz, E. (1999). The role of technology in advancing performance standards in science and mathematics learning. San Luis Obispo, CA: Stanford Research Institute International.

Ramsden, P. (1997). Mathematica in Education: Old Wine in New Bottles or a Whole New Vineyard? Paper presented at the Second International Mathematica Symposium, Finland.

Rocket, A. (2000). Study at UIUC: Performance of students who took all their UIUC calculus in Mathematica sections and performance of students who took all their UIUC calculus in book sections in selected engineering courses.

Saunders, J. (2003). Pedagogical Use of a CAS in First Year Tertiary Mathematics Service Courses. M.App.Sc., RMIT.

Sangwin, C. S. (2003). Assessing Higher Mathematical Skills Using Computer Algebra Marking Through AIM. In R.L. May and W.F. Blyth, editors, EMAC2003 Proceedings, Proceedings of the Sixth Engineering Mathematics and Applications Conference, 229-234. Engineering Mathematics Group, ANZIAM, Australia.

Siew, P. S. (2003). Flexible Online Assessment and Feedback for Teaching Linear Algebra. Int. J. Math. Sci. Technol., 34(1), 43-51.

Smith, D.A. (2002). How people learn ... mathematics. In M. Boezi, editor, Proc. Second International Conference on the Teaching of Mathematics (at the undergraduate level). CD. John Wiley and Sons, Inc., USA.

Smith, B. A. (1996). A meta-analysis of outcomes from the use of calculators in mathematics education. (Doctoral dissertation, Texas A & M University, 1995). Dissertation Abstracts International, 57, 787A. (University Microfilm)

Suydam, M. N. (1980). The use of calculators in precollege education: A state of the art review. Columbus, OH: Calculator Information Center.

Suydam, M. N. (1976). Electronic hand calculators: The implications for precollege education. Final Report. Washington.

Templer, R. (1998). Mathematics Laboratories for Science Undergraduates. In C. Hoyles, C. Morgan and G. Woodhouse (Eds.), Rethinking the Mathematics Curriculum (pp. 140-154). London: Falmer Press.

Uhl, J. (2002). A guide to the studies done on the Mathematica-based courses, available at the URL: http://www-cm.math.uiuc.edu/

Uhl, J. J. Why (and how) I teach without long lectures. Available at the URL: http://www-cm.math.uiuc.edu/come.html

Weaver, G. (2000). An examination of the National Educational Longitudinal Study (NELS:88) database to probe the correlation between computer use in school and improvement in test scores. Journal of Science Education and Technology, 9(2), 121-33.

Winn, W. (2003). Beyond constructivism: A return to science-based research and practice in educational technology. Educational Technology, 43(6), 5-14.


Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD

Hansel Undsen (MSc)
Master of Science and candidate for the PhD in Software Engineering at the School of Doctoral Studies, Isles Internationale Université (European Union)

Melissa Brant (PhD)
Chair of Biology and Life Science of the Department of Science at the School of Doctoral Studies, Isles Internationale Université (European Union)

Jose Carlos Arias (PhD, DBA)
Vice-Chancellor (Rector) of the Isles Internationale Université (European Union) and Chief Researcher in the Neuroscience Interdisciplinary Research Project of the Department of Science at the School of Doctoral Studies (European Union)

Abstract

This article discusses the brain's ability to work around damage caused by injury or other insult, the different types of brain damage, and the various ways of healing, or at least softening, the effects of brain damage. It also discusses motor, sensory, and autonomic function; the psychiatric aspects of traumatic brain injury; schizophrenia; and cerebrovascular disorder. It includes an extended discussion of the role that MRI and PET examinations play in discovering what really goes on in the formation and development of the brain in developmental disorders, including ADHD.


172School of Doctoral Studies (European Union) JournalJulyCurrent Empirical Understanding ofCentral Nervous System NeuroplastyPlasticity is a term meaning ability to change,to modify as needed to meet some new situationor repair damage from some insult to a system.What does the term plasticity mean with regardto repairing damage that a person was born with?The information regarding the ability of thebrain to repair itself is gathering at an astoundingrate as the newer imaging methods make thestudy of the living brain a daily occurrence.Magnetic Resonance Imaging and PositronEmission Tomography are both techniques thatallow researchers to study the brain while it isfunctioning. This allows the researchers to seewhat parts of the normal brain respond to givenstimuli and to compare how a damaged brainresponds to the same stimuli. This new work isexpanding the boundaries of the definitions ofplasticity. We have moved from a belief that theadult brain especially was not able to compensatefor damage done to a cautious optimism that thebrain has a great deal more in the way of resourcesto rebuild and heal than previously believed.Much of this new attitude toward CNS plasticitycomes as the result of imaging techniques thatallow doctors and researchers to watch the brainfunctioning. Previously all studies were donepost-mortem or with techniques that allowed onlystill images. Functional Magnetic ResonanceImaging and Positron Emission Tomography areboth techniques that allow for working images.When different parts of the brain are activelyinvolved in a given operation there is a differencein the amount of blood going to that distinct partof the brain and these differences allow activity tobe tracked.Can the information gathered about how thehuman brain works to begin with and then repairsitself after injury be applied to helping peopleborn with some developmental dysfunction suchas Attention-Deficit Hyperactivity Disorderpopularly known as ADHD? Children and adultswith this disorder display inabilities to pay attentionto the task at hand, appropriately monitor theirresponses to stimulus and in a range of functionscalled executive functions such as planningand organization. Building and ordering thesefunctions is what such people need and the hopefor the future is that our new technologies can bothgive us better understanding of causes and providenew, more effective ways to treat ADHD. Besidesthe new imaging technologies, we also have oursocieties’ love affair with personal computers.The focus of this research will be to test the ideathat the personal computer and the programs thatcan be designed for it can be a strong positive toolto help ADHD clients.In Chapter One, Cognitive Neurorehabilitation,discusses plasticity or the ability of the brain towork around damage caused by injury or otherinsult. The discussion centers on re-growth ofvarious aspects of the neuron and its discreteparts. The discussion also looks at some of theneurotransmitters that have been discovered tostimulate nerve growth and re-functionIn the Jul/Aug 2003 issue of The Journal ofHead Trauma Rehabilitation, in an article titled“Concepts of CNS plasticity in the Context ofBrain Damage and Repair,” by Donald G. Stein andStuart W. Hoffman, the focus is most particularlyon traumatic head injury and its aftermaths.Among the ideas they explore are those of specificlocalization and complete equipotentiality. 
There is also information concerning new drugs that can limit the damage done when the brain is injured. Part of the problem when the brain is injured is that the original injury creates further problems: damaged and dying cells give off chemicals that create a toxic environment and cause cell damage and death to spread beyond the original insult. Stopping this spread and creating an environment in which damaged cells can heal is part of the new technology. The authors discuss the need for re-evaluating these concepts in light of better data and current experimentation. There is much discussion of studies of animals and humans that refine and redefine our understanding of what can occur after injury to the brain, whether that injury is controlled and deliberately inflicted, as in the laboratory, or an actual traumatic, accidental injury. Part of the information they deal with concerns injuries created in fetal monkeys which were then returned


2009 Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD173to the womb to come to term. The authors saythat not only was there a complete sparing ofcognitive function, but there was also a radical reorganizationof the cortical mantle, and there werecells of certain types where they weren’t normallyfound. Further, studies concerning various braininjuries in younger children seemed to bear out theconcepts of the equipotentiality theory which saysthat the various parts of the brain can take overor create function to make up for loss or injury.While nothing is mentioned of the congenitalissues that seem to be at the basis of ADHD, manybehaviors described as the result of injuries oraging also apply to this disorder.In the American Scientist, article, “BrainPlasticity and Recovery from Stroke,” Nina P.Azari, and Rudiger J. Seitz begin by outliningwhat happens when a stroke damages the brain.They make the argument that conditions such asloss of speech or paralysis can occur because thepart of the brain that controlled those things hasbeen damaged or destroyed. They explain thewhole process thus:The brain is able to perform all of these tasks(and much more) at the same time becauseit is not merely a homogeneous blob ofcells. It is made of separate parts—neuralnetworks—that are specifically dedicatedto doing each task independently. Suchdivision of labor has obvious benefits andit is performed so seamlessly that we neverneed to think about it—until somethinggoes wrong.When a particular neural network isdamaged, as often happens in a stroke, thesystem fails and function is lost becauseno other neurons in the brain “know” howto do t he task formerly performed by thedamaged network. Thus, the result maybe paralysis or the loss of speech or theinability to comprehend speech or any oneof a number of actions we take for granteduntil we can’t perform them anymore.Curiously, however, many people who havesuffered a stroke regain some or most of thelost function after a brief recovery period,sometimes in a matter of weeks. (Italicsadded) (427)The situation that has puzzled doctors andresearchers for years is the recovery that so manystroke victims make even in the face of what seemlike devastating losses. The writers discuss twokinds of plasticity. The physical ability of thebrain to heal—more than was believed previously,and the ability to re-learn what has been lost—alsomore than previously believed.Utilizing PET scanning, the researchers studieda group of 21 stroke patients with motor cortexlesions and who had all suffered sever paralysis toone hand. 12 of the patients had recovered functionwithin about four weeks. The PET scans suggestedthat instead of the pyramidal tract, the recoveredpatients’ brains were using some manner ofcompensatory track from the supplementary motorarea to the spinal cord. This different track wasaccompanied by abnormally enhanced connectionsbetween the thalamus and the cerebellum. Withother stroke patients, they observed examplesof plasticity where the compensation involvedparts of the brain not usually associated withmovement. The PET images indicated that whenasked to move fingers of the affected hand, thebrain activity of blindfolded patients showedengagement of the visual cortex—something thatdoes not normally occur in movement of the handor fingers. 
It appeared that an alternative network outside of the areas normally associated with movement had been recruited.

These writers also make a good case for the growing body of information that is becoming available through the use of functional imaging methods. As these techniques are used more and more for visualizing various kinds of injury involving the brain, those involved with neurorehabilitation should find information that will assist in treating other brain malfunction issues such as ADHD.

The textbook Brain Damage, Brain Repair, edited by James W. Fawcett, Anne E. Rosser, and Stephen B. Dunnett, is a deeply detailed work. From Chapter One through Chapter Six,


174School of Doctoral Studies (European Union) JournalJulymany kinds of brain damage are discussed. Theredoesn’t to seem to be concomitant explanationof the symptoms that would be noted with eachdisease or disorder, however, most of the subjectsdiscussed are widely known. The chapters discuss“Death and Survival in the Nervous System,” is anin depth discussion of how and why neuronal cellsdie and how some of them can survive. The subjectsinclude necrosis, apoptosis, control of apoptosis,cell death caused calcium and free radical damage,trophic factor withdrawal and cell death caused byDNA damage. Chapter Two discusses axotomyand mechanical damage. Chapter Three discussesmetabolic damage to the CNS. This chapter’sfocus is on how insult such as stoke can precipitatea whole cycle of reactions that cause cell death dueto chemical changes around the damage. ChapterFour has as its focus neuronal damage and deathdue to inflammation and demyelination. ChapterFive goes into brain and neuronal damage thatcan be caused by infection. Chapter Six discussesneurodegenerative disease.Chapters Seven through Ten begin discussingvarious ways, both natural and man-made avenuesfor healing or at least softening the effects of braindamage. These chapters are titled or focused as,“Neuroprotection,” “Steroids,” “Trophic Factors,”and “Control of Inflammation.”Chapter Eleven discusses the relative easewith which the Peripheral Nervous System, evenin mammals, heals and regenerates. The chaptermakes it clear that even this nerve regenerationis not without its problems but it does occurmore readily than in the CNS. Chapter Twelvegoes into a detailed explanation of current bestunderstanding of why the CNS is so difficult toobtain healing and regeneration in.The next three chapters include information inthe areas of, “Anatomical Plasticity,” “BiochemicalPlasticity,” and “Remyelination,”Chapter Seventeen involves discussion ofMotor, Sensory, and Autonomic Function.” Theprimary idea here is to note how these functions areassessed in various disorders and disease processlike MS, Parkinson’s disease, and Huntington’sdisease. Many evaluation tools are included in thechapter.Chapter Eighteen does approximately the sameservice in evaluation of “Cognition.” Again thefocus is on familiar diseases and their effects andprocesses. Various forms of dementia especiallyare discussed. and discussion of approximatelyTwenty different tests and measures for cognitivefunction, are looked at, and there is considerationof determining test validity.Chapter Nineteen is titled, “PsychiatricAssessment,” and discusses the psychiatricaspects of traumatic brain injury, schizophrenias,cerebrovascular disorder, and then the maindiscussion focuses around the various aspects ofHuntington’s Disease.“Deep Brain Stimulation: Challenges toIntegrating Stimulation Technology With HumanNeurobiology, Neuroplasticity and Neural repair,”by Andres Lozano, MD, PhD. FRSC in the Journalof Rehabilitation Research and Development;Nov/Dec 2001 is a “guest editorial” that discussesaspects of medical treatment for Parkinson’s andother disorders that, in places, almost sounds likescience fiction. He never actually mentions treatingdisorders like ADHD with deep brain stimulationbut the implication is there. 
Lozano discusses the fact that surgical intervention in Parkinson's is not a new idea; it was the treatment used in the 1940s and 1950s. He says:

Patients who were treated operatively back in the 1940's were generally awake, and written reports suggest that neurosurgeons used to just cut into different parts of the brain in what would now be considered a somewhat nonspecific manner until something interesting happened. Either the patient's symptoms improved or an adverse effect was produced to signal the stopping point of the procedure. (x)


2009 Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD175and for other disorders that they are just nowexperimenting with. The treatment involvesputting electrodes into parts of the brain andsending titrated electrical impulses in to help stopspasms and other symptoms of Parkinson’s andother disorders. He gives the example of a childwith a genetic disorder that made his limbs andtrunk twist in random movements. The child didnot respond to drugs and got to where he couldn’twalk at all although there was nothing wrongwith his cognitive functioning and images of hisbrain showed nothing out of the ordinary. Thesituation finally responded to DBS. Specifically tosomething called a bilateral pallidal procedure andthe child was able to go back to school and to allhis usual activities including riding his bicycle.Lozano is careful to make it understood that inmany ways researchers and neurosurgeons don’tknow exactly how these methods work. He saysthey know for sure that they have neurostimulatorsin a certain part of the brain but exactly how manyand what kinds of neurons are being affectedis still not clear. Lozano closes out his guesteditorial by speculating on where and how bothDBS and various drugs that researchers suspectare released as the result of the stimulation, canbe used in treating any number of disorders fromdepression to eating disorders to chronic pain. Itseems obvious that deep brain stimulation opensvery new possibilities for encouraging plasticity inthe CNS. The application in such things as ADHDis not clear, but the concept that this treatment mayhave a place in its care/healing, is tantalizing.M.S.C. Thomas, in an article, “Limits onPlasticity,” in the Journal of Cognition &Development, Thomas evaluates and recaps muchof the most current literature on the plasticityof the central nervous system. He reviews fourbooks, Handbook of Developmental CognitiveNeuroscience Nelson and Luciana (Eds.) (2001);Developmental Neuropsychology: A ClinicalApproach, Anderson, Northam, Hendy &Wrennall, (2001); Developmental Disorders ofthe Frontostriatal System: Neuropsychological,Neuropsychiatric, and Evolutionary Perspectives,Bradshaw (2001); and Neural Plasticity: Theeffects of Environment on the Development of theCerebral Cortex, Huttenlocher (2002).Generally,the author’s over-all view would seem to bethat the most useful information is to be foundbetween the two extremes of “pre-determined bybuilt in blueprint” and “unlimited plasticity.” Hediscusses the two prevailing theories of when,where and how the brain and associated functionsdevelop and poses the idea that “truth” is probablysomewhere between.In the section labeled, “Assessing Limitsto Plasticity Via Recovery From Early BrainDamage,” Thomas is discussing various viewsof “Early Plasticity” which indicates that damagedone when a person is a child should heal morecompletely with less residual incapacity. In theliterature being reviewed, this doesn’t seem tohold true and Thomas says:Indeed, children with generalized cerebralinsult (e.g., from traumatic brain injury)exhibit both slower recovery and pooreroutcome than do adults suffer similarinsults. This is quite inconsistent with thenotions of greater early plasticity. From the“early vulnerability” perspective, short-termrecovery favors the mature brain. Acrosstime, a child who has seemed initially torecover well from the insult may start toincreasingly lag behind age-matched peersand fail to show the expected emergence ofnew cognitive skills. 
The child thus appears to "grow into" his or her cognitive deficit as the brain matures. (104)

There is also an extended discussion of the role that MRI and PET examination is playing in discovering what really goes on in the formation and development of the brain in developmental disorders, including ADHD. This information should make it possible to intervene more effectively, either pharmacologically or cognitively. The problem is that, in working so hard not to take a strong position, Thomas rather leaves the reader where he found him or her.

Michael Zappitelli, Teresa Pinto and Natalie Grizenko offer an exhaustive review of literature


176School of Doctoral Studies (European Union) JournalJulytitled “Pre-, Peri-, and Postnatal Trauma in Subjectswith Attention Deficit Hyperactive Disorder.”They begin by establishing that although theetiology of ADHD is not well understood, the factthat ADHD has genetic components is documentedby family studies, twin studies, and adoptionstudies. Further, their review paper discussesthe possible role of environmental stressors suchas pregnancy and delivery complications whichit has been suggested may also increase the riskfor ADHD. The authors write, “It is believedthat pre-, and perinatal trauma may have a directeffect on the fetal brain during a crucial period ofdevelopment….ADHD, increasingly discussed interms of its origins in neurochemical alterations,can be examined as a neurological outcome ofinsult suffered early in life.”(542)According to Zappitelli et al. in utero toxinshas been a major focus for research, especially theareas of “cigarette smoking and alcohol exposure.”(543) In one study quoted in this article, thecomment is made that “the accepted neurochemicalhypothesis behind the pathophysiology of ADHDis a dysfunction of the dopamine system in the prefrontalcortex.”(543) The writers go on to quotefurther, “Animal studies have shown that pupsprenatally exposed to nicotine showed decreasedstriatal dopaminirgeric receptor binding sites.”(543) There are many other studies cited to bolsterthe contention that maternal smoking can leadto fetal hypoxia—blood carboxyhemoglobin iselevated in pregnant women who smoke, possiblyleading to decreased oxygen delivery to the fetus.They go to discuss how mothers drinking alcoholduring pregnancy might create later problems withlearning and behavior problems and those resultsare inconclusive. The only thing researchersknow for sure is that excessive alcohol use willcause fetal alcohol syndrome. Other studies haveconsidered other, non-substance factors that mightcontribute to ADHD. Hartsough and Lambert,in a study of 301 children with hyperactivity and191 unaffected children, the there seemed to becorrelation with young maternal age, poor maternalhealth during pregnancy, eclampsia, and paritypregnancy. Millberger and others add maternalbleeding and complications of maternal accidentsto the list of possible causes. Other studies haveconcluded that hypoxia, for any reason, is likely afactor in ADHDSuggestions for future studies includefollowing women through their pregnancies,together with the use of medical records and datagatherers and analysts with sufficient medicalknowledge to record and interpret data accurately.In retrospective analysis, data collectors would be“blind” to the status of ADHD subjects.There is one related body of information thatis conspicuous by its absence. That would be anyrelationship between ADHD or other learning/behavior problems and second hand smoke or,for that matter, any research might there be aboutthe effects alcohol or smoking may have on thegenetic integrity of sperm from a baby’s father.From the Rogers Medical IntelligenceSolutions, an on-line continuing education service,comes the article/lesson set titled, “Advancesin the Treatment of Adult ADHD—LandmarkFindings in Non-stimulant Therapy,” MargaretWeiss and Robert Bailey Eds. 
It offers information from the continuing search for a finer definition of what goes on in the brain when one has ADHD. Although the focus of this particular article is adult ADHD, some of the electrophysiological studies mentioned were actually done on children. In the studies cited, children with ADHD were found to have lower amplitudes in the areas of the brain believed to be associated with attention and memory. MRIs have shown that the prefrontal lobe and right caudate nucleus are smaller in patients with ADHD. In adults who were diagnosed with ADHD as children, PET scans have revealed decreased frontal cortical activity and abnormal regional and global glucose metabolism during the performance of a task involving executive function. PET scans have also shown decreased dopamine transmission in the left and medial portions of the prefrontal cortex.


Neurophysiology of Intracerebral Neuronal Regeneration and Repair: Intra-cellular and Extra-cellular Mechanisms.

Chapter Two of Cognitive Neurorehabilitation is a fairly in-depth discussion of results in transplanting fetal cells to damaged brain areas in the treatment of deliberately caused lesions in laboratory animals. The discussion also includes a comparison of the relative effectiveness of fetal cells and genetically engineered cells. There are further comparisons between the results in young and aged animals. The various studies cited indicate that re-innervation, re-establishment of chemical function and growth in neurons are all observed whether the lesions are chemically or surgically induced.

A further discussion of this subject is found in "Weizmann Institute: Scientists Reveal Key Part of Nerve Regeneration Mechanism" (Biotech Week, Atlanta, Feb 11, 2004, p. 622), written by the magazine's staff, where the protein complexes importin alpha and importin beta are described along with their functions in the repair and healing of nerves outside the CNS.

Chapter Twenty-Two of Brain Damage, Brain Repair is titled "Axon Regeneration in the CNS." There is a detailed discussion of the reasons why re-growth in the CNS is difficult, such as the interference that occurs from the chemicals created by dying cells and the mechanism the body uses to "clean up the mess." The following chapters deal with various, presently experimental, methods of encouraging re-growth in the central nervous system. Chapter Twenty-three is "Primary Neuronal Transplant," Twenty-four is "Glial Transplant," Twenty-five deals with "Stem Cells," and Twenty-six is concerned with "Gene Therapy." This concludes the main part of the book. There are nine appendices that give detailed, concise information on nine different brain diseases or disorders.

This passage is found in Biotech Week:

Normally, injured nerve fibers, known as axons, can't regenerate. Axons conduct impulses away from the body of the nerve cell, forming connections with other nerve cells or with muscles. One reason why axons can't regenerate has been known for about 15 years: several proteins in the myelin, an insulating sheath wrapped around the axons, strongly suppress growth. (127)

The article explains why CNS axons do not re-grow and what neuroscience and biology research are doing to re-grow nerves when there has been damage of some kind. Scientists are pursuing combinations of techniques that not only interfere with the proteins that stop re-growth but also make growth happen more quickly.

This is one of two articles treated in two sections because the information applies in both places.

"Deep Brain Stimulation: Challenges to Integrating Stimulation Technology With Human Neurobiology, Neuroplasticity and Neural Repair," by Andres Lozano, MD, PhD, FRSC, in the Journal of Rehabilitation Research and Development, Nov/Dec 2001, is a guest editorial that discusses aspects of medical treatment for Parkinson's and other disorders that almost sound like science fiction. He never actually mentions treating disorders like ADHD with deep brain stimulation, but the implication is there. Lozano notes that surgical intervention in Parkinson's is not a new idea; it was the treatment used in the 1940s and 1950s. He says:

Patients who were treated operatively back in the 1940's were generally awake, and written reports suggest that neurosurgeons used to just cut into different parts of the brain in what would now be considered a somewhat nonspecific manner until something interesting happened. Either the patient's symptoms improved or an adverse effect was produced to signal the stopping point of the procedure. (x)


He goes on to explain how it was that modern medicine came to find out what causes Parkinson's. He also discusses how medicine moved to using levodopa in the treatment of Parkinson's. And, Lozano says, medicine is now back to surgery as the most up-to-date treatment for Parkinson's and for other disorders with which researchers are just now experimenting. The treatment involves placing electrodes into parts of the brain and sending titrated electrical impulses in to help stop spasms and other symptoms of Parkinson's and other disorders. He gives the example of a child with a genetic disorder that made his limbs and trunk twist in random movements. The child did not respond to drugs and reached the point where he could not walk at all, although there was nothing wrong with his cognitive functioning and images of his brain showed nothing out of the ordinary. The condition finally responded to DBS, specifically to something called a bilateral pallidal procedure, and the child was able to go back to his usual activities, including riding his bicycle.

Lozano is careful to make it understood that in many ways researchers and neurosurgeons do not know exactly how these methods work. He says they know for certain that they have neurostimulators in a given part of the brain, but exactly how many and what kinds of neurons are being affected is still not clear. Lozano closes out his guest editorial by speculating on where and how DBS, and the various substances researchers suspect are released as a result of the stimulation, can be used in treating any number of disorders, from depression to eating disorders to chronic pain. It seems obvious that deep brain stimulation opens very new possibilities for encouraging plasticity in the CNS. The application to such things as ADHD is not clear, but the concept that this treatment may have a place in its care and healing is tantalizing.

In an article from the United Kingdom titled "Neurological Rehabilitation: A Science Struggling to Come of Age," in Physiotherapy Research International, Valerie Pomeroy and Raymond Tallis discuss the general state of the physical therapy aspect of rehabilitation. It seems appropriate to place their work here because of their recognition of the need to better understand how what is done externally with the body by way of retraining (in their case with stroke patients) will either support neuronal healing, regeneration and return to function, or hinder it. The authors cite a number of studies that would probably hold out a measure of hope, except that there were such weaknesses in study design, time after time, that it is almost impossible for the results to be accepted on an equal footing with other scientific research. Their basic premise is this: now that we know the CNS is "soft-wired," mutable and much more accommodating of injury than was previously thought, how do physical therapists reconfigure their practice to take advantage of this remarkable information? These clinicians are asking themselves and their profession what plasticity means to them. They are asking whether deep brain stimulation is a tool they can use in their part of patient care. Physical therapists are looking at the new imaging techniques and asking whether these are tools for them. At this point, they are trying it all, as far as research dollars will take them. The comment is made that there is no comparison between what is spent on developing medications and what is spent on researching more effective methods of physical therapy. There is application even to ADHD, perhaps. So little is actually known about what will permanently and positively affect ADHD that perhaps something the physical therapists learn and apply will be part of that answer.

In an article titled "Can Brains Help Policymakers Improve Their Education Systems? Scientists Say They Can," written for the Organization for Economic Cooperation and Development, the anonymous author says neuroscientists are beginning to argue that the loss of neurons after 40 can be offset by stimulating the brain regularly. It is believed that, as with muscles, targeted exercise can help. This plasticity, or capacity for lifelong learning, is an exciting finding. At a meeting in June of 2000, the OECD's program on Learning Science and Brain Research was launched with the aim of getting neuroscientists and policymakers talking to each other.

What the neuroscientists have to offer is the information they have been gathering over the last 10 years. Technology such as fMRI, which uses radio waves to measure active brain areas, and positron emission tomography (PET), which tracks brain energy metabolism with the help of high-powered computing, is the technology of today and the future. Rather than having to work on cadavers or injured people, neuroscientists can watch the blood circulating through living, healthy brain tissues. They can record "the firing of neurons and circuitry of synapses." Using these technologies allows researchers to find and measure processes like spatial orientation.

Bruno Della-Chiesa, organizer of the 2000 conference, says that Bruce McCandliss of the Sackler Institute in New York presented research findings from the study of dyslexia that are considered amazing. McCandliss says they have not only pinpointed a tiny section of the brain that causes dyslexia, but they think they have a way to correct it. He uses a method that "jogs" the brain and reactivates neuron links. There was, according to the article, to be another conference in Spain in February of 2001. The focus was to be on using the new technology on a range of problems that confront many young people.

Dennis Garlick, University of Sydney, in Psychological Review offers an article titled "Understanding the Nature of the General Factor of Intelligence: The Role of Individual Differences in Neural Plasticity as an Explanatory Mechanism," which gives some explanation of how intelligent behavior is mediated through the neurons and their early, innate plasticity. He further explores the role of that plasticity in the development of intelligence. It is explained that researchers have developed a concept that postulates two forms of intelligence. Fluid intelligence begins developing at birth and continues to about age 16. The other form, labeled crystallized intelligence, is intelligence applied to learning and can continue to develop over the life span. It is believed that these two forms of intelligence, indeed all mental functioning, depend on plasticity in neuronal functioning.

Garlick writes:

The properties of neurons have been firmly established through recent procedures such as electrophysiological recording of single neurons and the patch clamp technique. In fact the precise characteristics of neurons have been specified both mathematically and in computer simulations. These studies have shown that neurons represent a relatively simple gating mechanism whereby inputs are summed, and if these inputs exceed a certain threshold, an action potential is produced that is propagated to all of the neuron's attendant connections. The critical issue then is how a system consisting of such units may produce meaningful or intelligent behavior. Given that intelligent responding at the neural level must ultimately consist of the ability to arbitrarily map inputs to outputs, some mechanism is required to allow this mapping. (120)

Garlick goes on to explain that the answer to being able to respond to incoming stimuli lies in changing the connections between the neurons. He says that by changing connections, the pattern of activation in the network can be changed. As it is explained:

A neuron cannot decide to receive inputs from only one neuron and not others to which it is connected. Inputs to the dendritic tree result in changes in the electrical potentials across the membrane that obey simple laws of conductance.
Nor can the neuron choose to send an action potential to only one axonal branch and not another…. Neural activity is not able to shape its pathway through the neural network actively. Rather, the neuron is a rather simple processing unit that operates independently of other neurons and whose firing is determined by changing the connections between itself and other neurons. (121)
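Garlick's account of the neuron as a unit that sums its inputs, fires when a threshold is crossed, and changes its behavior only through changed connections can be restated in computational form. The short Python sketch below is purely illustrative and is not drawn from Garlick's article; the function names, weights and threshold value are invented here to make the summing-and-threshold idea, and the idea of learning as weight change, concrete.

    # Minimal threshold-unit illustration of the neuron Garlick describes:
    # inputs are summed, and a spike (output 1) is produced only when the
    # weighted sum exceeds a threshold. "Learning" is modelled, crudely,
    # as adjusting the weights, i.e. changing the connections.

    def fires(inputs, weights, threshold=1.0):
        """Return 1 if the summed, weighted input exceeds the threshold, else 0."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    def adjust_weights(inputs, weights, target, rate=0.2, threshold=1.0):
        """Nudge each active connection so the unit's response moves toward the target."""
        error = target - fires(inputs, weights, threshold)
        return [w + rate * error * i for i, w in zip(inputs, weights)]

    inputs = [1, 1, 0]
    weights = [0.4, 0.4, 0.9]
    print(fires(inputs, weights))                        # 0: summed input is below threshold
    weights = adjust_weights(inputs, weights, target=1)  # "change the connections"
    print(fires(inputs, weights))                        # 1: the same inputs now produce a spike

The point of the toy example is only that the same inputs map to a different output once the connection strengths change, which is the mapping mechanism Garlick argues intelligence ultimately depends on.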


The question is then raised as to how the brain develops its connections. Studies of brain damage reveal that the cerebral cortex is the area of the brain responsible for higher intellectual processes and is also a newer part of the brain, most developed in humans. When this part of the adult brain is examined histologically, it is possible to see that neurons in the adult cortex form idiosyncratic connections with other neurons; they do not simply connect with those right beside them. The researchers say, "…this indicates that the cerebral cortex has evolved to produce very complex connections between the neurons, in contrast to earlier evolved brain areas, which possess simpler and more uniform neural circuits." The writer goes on to say that these complex connections mold or restrict the way activation can take place through the system, allowing or creating the complex patterns between input and output, and this affects intellectual behavior.

In images of axons and dendritic trees from a newborn and from an adult, it can be seen that neuron connections in the newborn are mostly undifferentiated. As the child grows and develops, connections become more complex and more idiosyncratic until the time of maturity at 16, when fluid intelligence stops developing.

There is a discussion of the possible part genetics might play in causing neurons to behave in ways that create intelligent patterns of reaction. This theory is discarded because there are simply not enough genes to individually control each and every neuronal connection, and because newer structures such as the cerebral cortex could not have developed that way. It is necessary that the neural system respond to stimuli from the environment so that growth and change can take place. If neural function were controlled by genetics, how would we have learned to read and write, for example, when this is something our ancient ancestors did not do and did not need to do? Garlick further suggests that the fact that the neural system is self-regulating and responsive to the environment allows it to respond to damage: the neurons will change the connections they make. Cortical systems such as the primary visual cortex have their greatest plasticity up to about the age of 5, whereas the brain areas responsible for abilities such as language and fluid attention retain plasticity until about 16. The explanation offered for individual differences in intelligence, and for plasticity in the case of injury, is exactly that: individual differences.

The European Journal of Neuroscience has an article by Inna Belyantseva and Gary R. Lewin, "Stability and Plasticity of Primary Afferent Projections Following Nerve Regeneration and Central Degeneration." This is an extremely detailed and technical article that discusses experimentation in promoting the growth of different nerves. The discussion concerns how spinal cord nerves can be encouraged to sprout and fill places in the neuronal system that have been deliberately damaged. Working with rats, the researchers used a number of techniques, including crushing parts of nerves and chemically damaging others, to see what kind of compensatory growth would occur in certain afferent nerves. In order to understand more closely what they were seeing, these researchers tended to work continuously with the same animal, so that individual differences in how the body handled insult would not cloud the results. The summary for this article states:

In summary, the present study provides new evidence that the ability of sensory neurons to regenerate centrally in response to denervation and injury is remarkably heterogeneous. This heterogeneity in sprouting is manifested not only in the distances that new processes can grow but also in the types of denervated neuropil that can be newly innervated. It is clear from neurophysiological studies that, following nerve lesion, substantial rearrangements of afferent terminals may take place within normal central territory. Our data suggest that these rearrangements may differ depending on which subpopulation of sensory neuron is studied. A somewhat surprising outcome of the present study was that inducing a regenerative mode in sensory neurons appears to confer only a small sprouting advantage…. In contrast, it appears that central differentiation is a sufficient stimulus to induce considerable sprouting of peptidergic afferents. (467)


There are no particular comments as to how all of this applies to the possibilities for useful plasticity in injured humans, but it does make it seem possible that, with certain assists, spinal cord regeneration could occur.

"Neural Plasticity and Exaptation," from American Psychologist by John R. Skoyles, offers some interesting ideas concerning neuroplasticity. The evolutionary theorist Stephen Jay Gould proposed the concept "…that human psychology links not to past evolutionary adaptations but to the co-opting of previously evolved functions to do new things—exaptation." Gould also credited our expanded brain to exaptation: "The human brain is, par excellence, the chief exemplar of exaptation" (55), and "exaptations of the brain must greatly exceed adaptations by orders of magnitude" (57). Examples of such exaptations are, Gould suggested, language, religion, the fine arts, writing and reading.

Buss et al. (1998) criticized Gould's ideas, arguing that there was no demonstrated special design for the hypothesized function, for the co-opted functionality, or for any "distinct original adaptational functionality." The author of this article comments that Gould is not a neuroscientist. Skoyles suggests that "…neural plasticity—strongly links the brain, exaptation, and human psychology." He goes on to explain that neural plasticity is an important adaptation, and that plasticity concerns the ability of neural circuitry, if properly trained, to learn almost any function. Neuro-researchers contend this because it has been found that in people who are blind, circuits that would normally serve vision process hearing and Braille reading. Also, circuits that should have no ability to process visual stimulation can do exactly that when retinal inputs are surgically directed to them. Studies have shown that in people who make great demands on their left hand, such as pianists, the circuits in the right primary motor cortex cover a larger area than in people who do not make such demands. "Neural plasticity," Skoyles says, "would seem prima facie to be an important adaptation of the brain. At least for the human brain, it is also an important exaptation." He says that other researchers have shown that this expansion happened in the parietal, temporal and prefrontal association areas of the cerebral cortex, not the primary or older areas, so the human brain gained many neuronal areas that were open to acquiring non-innate skills such as using tools and communication. These areas were then strengthened in their specialized uses by the transmission of information across generations, and humans were able to put together the beginnings of "material and symbolic culture." He goes on to say that the flexibility created by this neuroplasticity allowed a being originally fitted for living and functioning in hunter-gatherer bands to do mathematics, read and program computers.

In the article "Induction of Plasticity in the Human Motor Cortex by Paired Associative Stimulation," written for Brain, Katja Stefan, Erwin Kunesch, Leonardo G. Cohen, Reiner Benecke and Joseph Classen discuss their experimentation with inducing plasticity in the human motor cortex through electrical stimulation administered from separate points on the body.
Their experiments caused changes that lasted for 30-60 minutes or more and produced an increase in the amplitude of the motor evoked potentials in the resting abductor pollicis brevis (APB) muscle, as well as a prolongation of the silent period measured in the pre-contracted APB following transcranial magnetic stimulation. It is proposed that the methods used to stimulate the motor cortex in these experiments could be used to help repair damage and the resulting motor disabilities. There was no speculation offered as to the possible effectiveness of similar stimulation in dealing with other forms of brain dysfunction.

Neuropharmacological Interventions in Neuroprotection and Rehabilitation in Traumatic Brain Injury, Congenital Central Nervous System Conditions including ADHD, and Psychostimulant Medications.

Chapter Five of Cognitive Neurorehabilitation opens a discussion of relatively new research using the female reproductive hormone progesterone. The brains of females and males of many species, humans included, have differences that scientists have studied for years. Such factors as brain mass, the shape of various brain structures, the number of neurons in the posterior cerebral cortex and other features show marked, measurable differences. For example, female rodent brains exhibit more dendritic branching than the brains of males, and there is also a fluctuation connected to the oestrus cycle. The researchers for this chapter have found evidence suggesting that even such unrelated medical conditions as breast cancer surgery and brain damage can be affected more positively depending on where in the cycle the patient is when sex hormone therapy is applied.

Chapter Seven of Cognitive Neurorehabilitation discusses actual drug interventions on behalf of brain injury and stroke patients, with the desired end result of protecting the central nervous system from "cascading" symptoms after injury, plus medications to assist in the recovery of function. It is noted that most of the information highlighted in the chapter comes from animal studies and that work has really just started on efforts for human brain trauma patients. The chapter does not specify applicability to ADHD.

Chapter Eight of Cognitive Neurorehabilitation further discusses the role of medication in the treatment of brain injury, with a focus on developing symptoms of destructive behaviors. The descriptions of the various behaviors under the general label of "destructive" could easily be applied to many of the behaviors seen in ADHD. The similarities open the possibility of pharmacological interventions in the treatment of this disorder. Chapter Eight also discusses the role of adapting the environment, especially with regard to background noise and other extraneous stimulation, in the treatment of certain manifestations of destructive behaviors. This concept bears consideration for ADHD as well.

Brown University publishes the Child and Adolescent Newsletter. Vol. 20, Issue No. 2, of February 2004, contains an article recapping the deliberations and conclusions of child mental health professionals from six countries. The conference was sponsored by Johnson and Johnson, manufacturer of Risperdal (risperidone) and Concerta (methylphenidate), both drugs commonly used in the treatment of ADHD and disruptive behavior disorders. Consensus statements included, "Do not be satisfied with a single diagnosis: keep assessing to uncover likely comorbidities; accurate diagnosis is essential to improve prognosis." Another statement says, "…say researchers, psychostimulants are commonly prescribed with twice-daily dosing, whereas thrice-daily dosing, or the use of long-acting agents providing daily coverage, is generally more desirable." Three recommended treatments and nine key findings were included in the article.

From Pediatrics, Vol. 113, April 2004, come two studies that are both related to a project of the National Institute of Mental Health. The two reports, "National Institute of Mental Health Multimodal Treatment Study of ADHD Follow-up: 24-Month Outcomes of Treatment Strategies for Attention-Deficit/Hyperactivity Disorder" and "National Institute of Mental Health Multimodal Treatment Study of ADHD Follow-up: Changes in Effectiveness and Growth After the End of Treatment," report on slightly different aspects of how the research design was affecting the lives of the study subjects at specified times after the end of the actual treatment program. The subject group was comprised of approximately 550 children.
A total of 579 children originally entered the study, and 540 were still participating at the time of the first follow-up, ten months after the end of treatment. The participants were randomly assigned to four groups. One group, designated CC, was the control group and received no treatment. Another group, designated Beh, received only behavioral modification therapy. Group three, labeled Comb, received both medication (any one of a variety of the medications used in treating ADHD) and behavioral rehabilitation therapy; the last group, labeled MedMgt, received only medication, drawn from the same range of familiar medications. Generally speaking, the results showed a significant initial positive response in both the MedMgt and Comb groups. After 14 and 24 months there was greater diminishment of the positive effects for these two groups, and none to speak of for the other two groups, though the original improvement for the other two groups had been significantly lower. Between the two groups that showed improvement, the Comb group showed the greatest amount of improvement overall and maintained that superiority at both 14 and 24 months after treatment ended, although there was measurable diminishment of the positive effects over time. The other issue this study was intended to address was inhibition of growth while on prescribed medications such as methylphenidate. The result reported for this aspect of the study says, "In the MTA follow-up, exploratory naturalistic analyses suggest that consistent use of stimulant medication was associated with maintenance of effectiveness but continued mild growth suppression."

"A Cognitive Remediation Program for Adults with Attention Deficit Hyperactivity Disorder," in the Australian and New Zealand Journal of Psychiatry, 2002; 36:610-616, offers a most interesting sidelight on the subject of medication effects. This information may apply only to adults with ADHD, but it is worth noting that in the overall reporting of features affecting outcomes, Caroline Stevenson et al. found that whether a subject was medicated or not did not seem to affect how well the rest of the intervention worked.

The American Journal of Public Health, Feb. 2002, offers a study by Andrew S. Rowland, David M. Umbach, Lil Stallone, A. Jack Naftel, E. Michael Bohlig and Dale P. Sandler entitled "Prevalence of Medication Treatment for Attention Deficit-Hyperactivity Disorder Among Elementary School Children in Johnston County, North Carolina." The stated purpose of this study was to find out not only how many children in the public school system had received diagnoses of ADHD, but how many of them were actually being treated for the condition. The test population was children in grades one through five in the public schools only. Private schools and home-schooled children were excluded because there was likely to be a profound difference in their learning environment. Also excluded were children in self-contained classrooms or with special education designations such as autism, or severe health disabilities such as traumatic brain injury. After the discussion of findings, the researchers offer this commentary:

If the prevalence of ADHD diagnosis or ADHD medication treatment among elementary school children in the United States is similar to the estimates reported here, educators and public health officials may have substantially underestimated the public health impact of ADHD. (234)

These researchers go on to say that the problem is likely to be far more prevalent than the survey shows because, while older children probably still have ADHD, the use of medications, especially stimulant medications, falls off "sharply" among teenagers. The authors also comment on some of their demographic findings, which are borne out in other studies. Boys are three times as likely as girls to receive diagnoses of ADHD.
African-American children were only slightly less likely to be diagnosed with ADHD, and Hispanic children were much less likely to receive such a diagnosis.

Chapter Twenty discusses "Pharmacological Management," and the drug management is much the same for the disorders that are the focus of the whole book: MS, Parkinson's, Huntington's and other traumatic and progressive brain problems. There is discussion of the various dysfunctions that can result from the brain being damaged, such as bladder problems, spasticity, sensory malfunctions and bowel disturbances. Obviously, all of this discussion is focused on drug therapies for the different problems.

"A Comparison of the Newer Treatment Options for ADHD," in Formulary, January 2003, Vol. 38, by Lisa Edwards, PharmD, offers a close examination of the latest developments in the class of drugs known as psychostimulants, which are used in the treatment of ADHD. The drugs in this group are methylphenidate, mixed amphetamine salts, dextroamphetamine and pemoline. Pemoline, sold as Cylert (from Abbott) and as various generics, is no longer recommended because of its association with life-threatening hepatic failure. Other agents such as guanfacine, clonidine and bupropion, along with the tricyclic antidepressants, have also been used in the treatment of ADHD, but safety concerns limit their use in children. Edwards says that one of the major complaints about the stimulants has been the need for frequent dosing. She goes on to say that the pharmaceutical companies have recently brought out long-acting versions of the medications they have produced for years. These new formulations are marketed under the names Concerta, Metadate, Ritalin LA and Adderall XR. Various clinical trials for each of these medications are outlined. The major considerations with the stimulants are appetite suppression, possible inhibition of growth and possible sleep disturbances. Medication administration needs to be timed to cover school hours without interfering with evening meals and bedtime.

Edwards also addresses the issue of the 30% of children, adolescents and adults who either do not respond to, or are intolerant of, the stimulant therapies. She offers a discussion of a drug called atomoxetine, known on the market as Strattera. This drug acts as a blocker of presynaptic norepinephrine transporters in the brain. Previously, other non-stimulants such as the TCAs were used, but the increased risk of side effects such as cardiac arrhythmias has limited their use in pre-pubertal children.

David A. Kube, Mario C. Petersen and Frederick B. Palmer authored a study titled "Attention Deficit Hyperactivity Disorder: Comorbidity and Medication Use," in the journal Clinical Pediatrics, in which 353 children referred to the University of Tennessee Health Science Center, Boling Center for Developmental Disabilities, between December 1, 1996 and June 1, 1998 were studied in a retrospective review. Subjects were referred for concerns about possible developmental problems. The inclusion criteria were the referral, age greater than 2, and no previously diagnosed mental retardation, developmental delay, autism or cerebral palsy. A total of 189 children met these criteria. Each child was examined by a developmental pediatrician and had a complete medical history, developmental history, family and social history, physical examination and lab work if necessary. There was information from the clinical interview, from school reports (including questionnaires for teachers and for parents) and from observation at the clinic. Where appropriate, children were also seen by a range of other specialists. If the children were on medication for the treatment of ADHD, that was recorded. The study was intended to answer a few questions. Those previously diagnosed with ADHD were re-evaluated to test the idea that "stimulant medications are overused." While the study did indeed find children who had been misdiagnosed, what it actually found was that a percentage of the children finally diagnosed with ADHD were not on any kind of medication or treatment at all.

"National Trends in the Treatment of Attention Deficit Hyperactivity Disorder," by Mark Olfson, Marc J. Gameroff, Steven C. Marcus and Peter S. Jensen (2003), in The American Journal of Psychiatry, is a brief overview of what was the most current information on the treatment of ADHD as of June 2003. The study compared information from a survey done in 1987 with an identical follow-up survey done in 1997.
There are figures from households at all socioeconomic levels, from the pharmaceutical industry, from physician-based surveys, and from state-wide surveys of Medicaid showing that treatment of and for ADHD has increased. There is this statement regarding medications:

The National Medical Expenditure Survey and the Medical Expenditure Panel Survey ask for the conditions associated with each prescribed medicine bought or otherwise obtained. We focused on the prescribed medications associated with the treatment of ADHD. Psychotropic medications were then classified as stimulants, antidepressants, clonidine, and other psychotropic medications, including antipsychotics, mood stabilizers, anxiolytics and hypnotics. (1072)


The authors also note that there seemed to be a trend of decreasing involvement of psychologists and other behavioral health professionals with ADHD as medication use increased.

In a report from the Surgeon General's Office, under pharmacological treatment, the statement is made that stimulants have been used to treat childhood behavioral disorders since the 1930s. It goes on to say that stimulant medications are effective for 75-90% of children with ADHD. At the time the report was written, most of the usage was still of the shorter-acting forms of these drugs; because the drugs metabolize and leave the body quickly, medication routines are timed around the child's school schedule. Although stimulants improve classroom performance and behavior for children with ADHD, they do not appear to achieve long-term changes that persist once the medication is stopped.

The following medications are suggested as possible means of medicinally enhancing the healing and rehabilitation of memory-specific areas of the brain: hypothalamic and pituitary neuropeptides, cholinergic agonists, catecholaminergic agonists, nootropics and vasoactive agents. It is also suggested that minimizing the use of medications that can potentially interfere with cognition is helpful. There is no discussion of the use of such medications outside the context of acquired brain injury.

In the Jul/Aug 2003 issue of The Journal of Head Trauma Rehabilitation, in an article titled "Concepts of CNS Plasticity in the Context of Brain Damage and Repair," Donald G. Stein and Stuart W. Hoffman focus most particularly on traumatic head injury and its aftermath. Among the ideas they explore are those of specific localization and complete equipotentiality. There is also information concerning new drugs that can limit the damage done when the brain is injured. Part of the problem when the brain is injured is that the original injury creates further problems: damaged and dying cells give off chemicals that create a toxic environment and cause cell damage and death to spread beyond the original insult. Stopping this spread and creating an environment where damaged cells can heal is part of the new technology, which includes the experimental use of progesterone as a neuroprotector.

In the "National Institute of Mental Health Multimodal Treatment Study of ADHD Follow-up: Changes in Effectiveness and Growth After the End of Treatment," the MTA Cooperative Group reports on how well members of their subject groups did in a couple of areas that have been of concern to parents and providers who deal with children with ADHD. One area is the carry-over of the positive effects of either medication or various forms of behavior modification. Using the figures from their sample groups, it was determined, as has been noted in many other studies of the effects of stimulant medications, that there is no perceivable carry-over effect as far as control of ADHD symptoms is concerned once stimulant medications are discontinued.

There was also, in group members who decided to stay on medication after the study was over, a noticeable increase in the symptoms of both ADHD and ODD, indicating a decrease in the effectiveness of the medication, perhaps attributable to the body accommodating to the medication. The other issue of concern with regard to medications such as methylphenidate is the interference with growth that has been noted in many studies.
First of all, it is important to note that a number of researchers have in one way or another discounted the idea of suppressed growth with the use of stimulant medications. For example, Satterfield et al. offered the explanation that although growth might be suppressed at first, there would be a growth rebound even if the medication was continued and no "summer holiday" was provided. Another group of researchers, Spencer et al., hypothesized that the ADHD itself, and not medication factors, caused the perceived growth issues. This was one of the questions the MTA group hoped to answer definitively with their much larger sample. It was observed that there was indeed something causing growth inhibition in the groups that were on medication the longest, and it seemed to be an effect of the use of methylphenidate. As to the idea that there might be growth-inhibiting factors within the ADHD spectrum itself, since there was quite pronounced growth in subjects who had no medication, this particular hypothesis does not seem to hold. There was also interest in how well the effects of behavioral modification treatment would hold up once actual treatment was over, and while there was some erosion here too, it was not as much as with the cessation of medication. In order to really understand the long-term effects of continuous medication, to track normal versus slowed growth, and to track the effects of the other treatments offered in the multimodal study, there is currently an MTA follow-up that is tracking these subjects into adolescence and into adulthood.

Rogers Medical Intelligence Solutions, an on-line continuing education service for medical providers, offers "Advances in the Treatment of Adult ADHD—Landmark Findings in Non-stimulant Therapy," edited by Margaret Weiss, MD, and Robert Bailey, MD. This lesson package offers information about the effects of ADHD on an adult population. The effects for adults are not only the same inattention, dysfunctional impulse control and so on, but the picture is further darkened by an increased likelihood of drug and alcohol abuse, cigarette use (with increased difficulty quitting) and a greater likelihood of traffic and driving problems.

The write-up then offers a hopeful review of the role of a drug called atomoxetine. First, the traditional drugs of choice for dealing with ADHD, the stimulants such as methylphenidate, are discussed, and their actions as dopamine and norepinephrine re-uptake inhibitors are mentioned. The major concerns, once the person with ADHD becomes an adult, are the potential for abuse and the possibility that the patient will sell the medication, because it belongs to the class of drugs known on the street as uppers. Studies done both on subject populations and with various methods of medical imaging describe a very narrow range of action for atomoxetine, which seems to be exactly what ADHD sufferers need. Some side effects have been noted that are more pronounced in adults than in children, such as insomnia, gastrointestinal effects and genitourinary symptoms. Overall, however, atomoxetine is probably going to be a better choice for many ADHD patients, especially adults for whom substance abuse may potentially be an issue.

Diagnostic Techniques in Neurorehabilitation including: PET, MRI, and Neuropsychological Testing (including: the Stroop Color Word Test and Continuous Performance Tests) applicable to ADHD.

Chapter Three of Cognitive Neurorehabilitation is concerned with the role of neuroimaging in identifying not only the areas of the brain through which tasks are mediated (information researchers have had on many levels for a long time) but also which areas can accommodate new tasks as required when the organism has been brain-injured. The ability to study living creatures as they perform tasks designed to activate different sections of the brain gives researchers vital new information. Since the advent of these new imaging capabilities:

In recent years, neuroscientists have increasingly moved from the strict localization approach to brain function and have begun to think of both sensorimotor and cognitive processing as the product of activity in functional networks. This imaging also displays how the functions are carried out. (McIntosh and Gonzalez-Lima, 1994; Bressler, 1995; Vaadia et al., 1995).
This systems-level approach has allowed researchers to examine interactions among brain areas during specific types of cognitive or motor function and how these interactions change as behavior changes. (Grady & Shitij, p. 47)

Although the authors do not specifically mention ADHD, they do comment in summation that:

Whether as a result of injury, disease, degeneration, congenital condition (italics added) or decline, compensation originates in a deficit—a mismatch between skills, expectations and demands. Through various mechanisms of substitution, remediation, accommodation or assimilation, compensation involves closing one or more gaps between skills, expectations and demands. (Grady & Shitij, p. 69)

Chapter Twenty of Cognitive Neurorehabilitation offers a discussion of tests and methods for defining, as accurately as possible, the areas of the brain that have been affected by stroke or injury. The author is careful to explain that:

Many of the component cognitive functions that have been ascribed to the executive control system, such as inhibitory control or working memory, are not easily operationalized, and their measurement almost always involves some dependence on other cognitive abilities such as visual recognition or reading. Clinical tests that show deficits in patients with frontal lobe lesions typically require multiple component processes and lack performance measures that are specific to individual cognitive processes. (316)

A number of tests for executive function are named, and many are the same tests used in determining the performance abilities of children diagnosed with ADHD. These tests, which include the Wisconsin Card Sorting Test, the Tower of Hanoi, Trail Making B and the Six Elements Test, have shown some sensitivity to frontal involvement and call for problem-solving, mental flexibility and/or set-shifting. Working memory can be evaluated with such tests as the Consonant Trigrams Test, self-ordered pointing measures and delayed response alternation measures. Inhibition of pre-potent responding can be assessed with the Stroop Test and Go/No-Go measures. Though prospective memory is not frequently tested, there are tests to check the capacity to remember to initiate intended actions. It is also noted here that to get a good picture of what is going on with a given patient, it is necessary to use multiple test methods.

In an article entitled "Examining Brain Connectivity in ADHD," from Psychiatric Times, Jan. 2004, p. 41, researchers Sanjiv, K., M.D., and Thaden, E. discuss the valuable role of magnetic resonance imaging (MRI) in learning more about what takes place in the brains of children and adolescents with ADHD. There are a number of advantages to MRI research with children: it involves no ionizing radiation; it provides a tool for doing longitudinal studies; it could allow researchers to study the effects of psychotropic drugs on brain structure; and it could in the future help develop new tools for diagnosis and for studying what the effect of drugs might be on disorders such as ADHD. Having more effective tools for dealing with ADHD is important because the disorder not only causes difficulties in school but also creates the possibility of even worse problems, such as an increased risk of driving accidents in late adolescence and adulthood. The particular focus of this study was to measure and evaluate differences in the connective "white matter" of the brain. New evidence suggests that abnormalities in white matter development may contribute to ADHD.

In the article "Limits on Plasticity" in the Journal of Cognition & Development, M.S.C. Thomas evaluates and recaps much of the most current literature on the plasticity of the central nervous system. He reviews four books: Handbook of Developmental Cognitive Neuroscience, Nelson and Luciana (Eds.)
(2001); Developmental Neuropsychology: A Clinical Approach, Anderson, Northam, Hendy & Wrennall (2001); Developmental Disorders of the Frontostriatal System: Neuropsychological, Neuropsychiatric, and Evolutionary Perspectives, Bradshaw (2001); and Neural Plasticity: The Effects of Environment on the Development of the Cerebral Cortex, Huttenlocher (2002). Generally, the author's overall view would seem to be that the most useful information lies between the two extremes of "pre-determined by a built-in blueprint" and "unlimited plasticity." He discusses the two prevailing theories of when, where and how the brain and its associated functions develop and suggests that the "truth" is probably somewhere in between.

In the section labeled "Assessing Limits to Plasticity Via Recovery From Early Brain Damage," Thomas discusses various views of "early plasticity," the idea that damage done when a person is a child should heal more completely, with less residual incapacity. In the literature being reviewed, this does not seem to hold true, and Thomas says:

Indeed, children with generalized cerebral insult (e.g., from traumatic brain injury) exhibit both slower recovery and poorer outcome than do adults suffering similar insults. This is quite inconsistent with the notions of greater early plasticity. From the "early vulnerability" perspective, short-term recovery favors the mature brain. Across time, a child who has seemed initially to recover well from the insult may start to increasingly lag behind age-matched peers and fail to show the expected emergence of new cognitive skills. The child thus appears to "grow into" his or her cognitive deficit as the brain matures. (104)

There is also an extended discussion of the role MRI and PET examination is playing in discovering what really goes on in the formation and development of the brain in developmental disorders, including ADHD. This information should make it possible to intervene better, either pharmacologically or cognitively. The problem is that, in working not to take a strong position, Thomas rather leaves the reader where he found him or her.

In the Journal of Emotional and Behavioral Disorders, Edith E. Nolan et al. offer the results of testing various instruments for reliability in identifying children with ADHD and the often-associated behavioral disorders such as Oppositional Defiant Disorder. The purpose was to test the reliability of instruments such as the Symptom Inventory Test. The conclusion the researchers came to was that the youngest children, pre-school through age five, were more likely to be ADHD-C, the combined type, whereas the most common finding in the adolescent age group was ADHD-I, the predominantly inattentive type. Besides finding a high degree of reliability in the various test instruments, the researchers also verified the perception that girls are under-reported and under-treated for ADHD and other behavioral disorders. The test groups followed the expected ratio of approximately three boys to one girl, but the findings strongly suggest that when girls are identified and referred, they frequently display more intense symptoms. Fairly complete statistical data back up the researchers' claims.

From the Journal of Abnormal Child Psychology, Vol. 29, December 2001, comes the report of a study by Russell A. Barkley et al. that discusses the results of various measures of executive function, temporal discounting and sense of time in adolescents with ADHD and with ODD. This study was described as unique in that it was aimed at children who were older when first referred to a clinic for behavioral and performance issues, rather than being a follow-up on children who had been in the mental health system since they were young. Besides testing to rule out seriously diminished IQ, all subjects were asked not to take any stimulant medications they might be on. Tests used included the KBIT Composite IQ, to obtain a minimum baseline intelligence rating, and the Conners Continuous Performance Test, to assess vigilance and response inhibition. To check verbal working memory, the Digit Span Reversed (Wechsler Intelligence Scale for Children, 3rd Ed., WISC-III; Wechsler, 1994) was employed. To evaluate nonverbal working memory, the Simon game was used. Verbal fluency was tested through the use of the Controlled Oral Word Association (F-A-S) test.
Other areas tested were ideational fluency, by use of the Object Usage Test, and non-verbal fluency, with the Form Fluency Test. Temporal discounting was checked by use of the Reward Discounting Task in two ranges, $100 with time variables and $1000 with time variables; the last areas of testing related to sense of time and time reproduction. Only one of these testing methods had any connection with computers, and no special significance was assigned to this, nor was it postulated that there would be any great difference in performance due to the work being done by computer.
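For readers unfamiliar with temporal discounting, the Reward Discounting Task asks subjects to choose between a smaller immediate reward and a larger delayed one; steeper devaluation of the delayed amount is read as greater impulsivity. Barkley et al. do not describe their scoring model in the material summarized here, so the sketch below is only a generic illustration using the commonly cited hyperbolic form V = A / (1 + kD), where A is the delayed amount, D the delay and k the individual's discounting rate; the function names and parameter values are invented for the example.

    # Generic hyperbolic-discounting illustration (not Barkley et al.'s method).
    # The subjective value of a delayed reward falls as the delay grows; a larger
    # k means faster devaluation, i.e. steeper discounting / greater impulsivity.

    def subjective_value(amount, delay_days, k):
        """Hyperbolic discounting: V = A / (1 + k * D)."""
        return amount / (1 + k * delay_days)

    def prefers_immediate(immediate, delayed, delay_days, k):
        """True if the immediate reward outweighs the discounted delayed reward."""
        return immediate > subjective_value(delayed, delay_days, k)

    # Example in the $100 range used in the study: $40 now versus $100 in a year.
    print(prefers_immediate(40, 100, delay_days=365, k=0.01))   # True  (steep discounter)
    print(prefers_immediate(40, 100, delay_days=365, k=0.001))  # False (shallow discounter)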


2009 Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD189Deficit-Hyperactivity Disorder in Children,”for the Professional Psychology: Research andPractice(2003)Vol. 34 journal. As part of theirarticle, they have included the following guidelinesin the diagnosis of ADHD:1. Child should have a complete physicalto rule out possible medical problems bothas cause for the behaviors of concern andto make sure the child has no conditionsthat might interfere with care.2.The assessment should establish thatthe child has significant inattention,impulsivity or over activity, that is ageinappropriate and not accounted for bysome other etiology.3. There should be information gatheredfrom parents, the child, teachers, andthese interviews may be structured orunstructured.4. There should be a complete reviewof school and health records, includingreport cards, achievement tests,psychoeducational tests and medical andpsychological treatment records.5. There should be a battery ofpsychological tests that might includeany or all of the following: ContinuousPerformance Test, Freedom FromDistractibility Index of the WechslerIntelligence Scale for Children—III,Porteus Mazes, the Rey-OsterriethComplex Figure Test, the Trail MakingTest (A and B), the matching FamiliarFigures Test, the Wisconsin SelectiveReminding Test, the Wisconsin CardSorting Test, the Controlled Oral WordAssociation Test, the Stroop Word-ColorAssociation Test and the Hand MovementsTest.It is also suggested that it is helpful to observeinteractions between the parents and child eitherinformally or as part of some task assigned for themto do together to assess for co-morbid oppositionaldefiant or conduct disorder.Clinical Pediatrics is an on-line, ProQuestPsychology Journal. James Nahlik offersinformation specifically designed to obtain accuratediagnoses of ADHD in adolescents. The basisof Nahlik’s concern is that most instruments forassessing for ADHD are aimed at younger children.One of the criteria from the DSM-IV requiressymptoms to be present before age 7. Nahlik,however, points out: Nevertheless, certain subgroupshave less marked symptoms in childhoodand are easily missed. These include the followingthree sub-groups: individuals (mainly girls) whotend to exhibit fewer hyperactive symptoms—themore obvious symptoms for teachers and parentsto pick up on; individuals who exhibit mentalrather than physical restlessness; children with ahigher than normal average intelligence quotient(IQ).Nahlik goes on to suggest that diagnosing thispopulation correctly is important because whenthese people are not recognized and treated, theyare likely to under-achieve at school which canlead to poor employment prospects, there aremore than normal difficulties in relationships andadolescents with ADHD at much higher risk forbehaviors such as dangerous driving, carelesssexual activities, substance abuse and criminalityall of which can have a negative impact on theirfutures. Also, in order to obtain reasonableaccommodation from high schools or colleges, anaccurate diagnosis is required.Michael J. Murphy of Indiana State <strong>University</strong>,in the article “Computer Technology for OfficebasedPsychological Practice: Applications andFactors Affecting Adoption,” from Psychotherapy:Theory, Research, Practice, Training, Vol. 40(2003) discusses many aspects of the use ofcomputer technology by the private psychologicalpractitioner and those who are part of managed careorganizations. 
For the most part, this article doesnot address the treatment aspects of computers inthe office although there is an extended discussionH. Undsen, M. Brant, J. C. Arias - Brain Neuroplasticity and Computer-Aided Rehabilitation in ADHD
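Since continuous performance tests recur throughout this literature, a brief illustration of their logic may be useful. The sketch below is not the Conners CPT or any other commercial instrument; it is a generic, hypothetical example of the usual scoring idea: a long stream of stimuli is presented, the subject responds to designated targets, and omission errors (missed targets, read as inattention) and commission errors (responses to non-targets, read as impulsivity) are counted.

    # Generic scoring logic for a continuous performance test (illustrative only).
    # "stimuli" is the presented stream, "targets" the set of stimuli that call
    # for a key press, and "responses" records whether the subject pressed.

    def score_cpt(stimuli, responses, targets):
        """Count omission errors (missed targets) and commission errors
        (responses to non-targets) over one run of the test."""
        omissions = sum(1 for s, r in zip(stimuli, responses)
                        if s in targets and not r)
        commissions = sum(1 for s, r in zip(stimuli, responses)
                          if s not in targets and r)
        return {"omissions": omissions, "commissions": commissions}

    # Example run: the subject is asked to respond to the letter "X" only.
    stimuli = ["A", "X", "B", "X", "C", "D", "X"]
    responses = [False, True, True, False, False, False, True]
    print(score_cpt(stimuli, responses, targets={"X"}))
    # {'omissions': 1, 'commissions': 1}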


Under the heading "Technology Use in Treatment," Murphy reviews the literature on computers and their various capabilities, such as the Internet, in treatment. Although many researchers seem to feel that there are great possibilities both for in-office use and for what they are calling "telehealth," individual practitioners are not involved to any noticeable extent. The people who see great possibilities have many reasons. One researcher, Budman (2000), says, "…applications can be tailored to individual needs and are cost effective, convenient, standardized, and foster greater disclosure."

There is information to show that the Internet is a widely used source of information concerning health issues; a poll by Harris in 2000 found that depression was the single most frequently searched health topic and that four of the top ten health topics searched had a mental health component. Many other aspects of computers as therapy aids are discussed, with the comment that therapists and clients alike do not seem interested, because much of what is available is also available in self-help books and in watches with alarms, so people have the option of going low-tech and low-price.

In the overall review at the end of the article, the author states that computers and software are used mostly for the business aspects of the counselor's office, not for the assessment or treatment of clients. Murphy writes, "The relative advantages of receiving services from a psychologist across the country over a face-to-face interaction with a local therapist do not seem compelling to patients or clinicians."

In her dissertation titled "Cognitive Rehabilitation: A Method for Improving Sustained and Selective Attention in Adolescents With Attention Deficits," Glinda Bullock designed a research mechanism to test the effectiveness of computer-mediated cognitive rehabilitation for young people who had attention deficit issues due to ADD. The testing was done with four male middle-school students who were all diagnosed with ADD and who were all being treated with psychostimulant medication (Ritalin). She used a pretest-posttest design. The subjects were tested before the intervention to establish levels of attentional functioning and then were tested again after six weeks of a hierarchical attention training program. This program included three weeks of sustained attention training and three weeks of selective attention training. Bullock reports that there was some improvement for all four subjects in sustained and selective attention on at least two out of three measures. "However," she writes, "only selective attention results were significant." Bullock states that these results give reason to believe that cognitive rehabilitation could be effective in treating ADD and that further studies using groups, subjects and controls are warranted.

The following assessment instruments are often used to evaluate how limited memory is after brain injury. In the article "Functional Treatment Approaches to Memory Impairment Following Brain Injury," in Topics in Language Disorders (1997), Vol. 18, pp. 45-58, Judith Hutchinson and Thomas P. Marquardt offer an extensive look at what memory dysfunctions after brain injury might or might not be. They offer a number of tests as possible ways to evaluate the extent of memory loss, while noting that such instruments are "limited, difficult to standardize and sometimes, controversial." The testing instruments offered for memory are the Rivermead Behavioural Memory Test, a battery of tests that attempts to simulate real-life memory situations; the Present Functioning Questionnaire; the Prospective Memory Process Training Memory Questionnaire; the Multimodal Inventory of Cognitive Status; the Cognitive Failures Questionnaire; the Everyday Memory Questionnaire; the Brief Cognitive Rating Scale; and the Instrumental Activities of Daily Living Scale. There is no mention of how any of these tests might aid in evaluating congenital conditions such as ADHD, and no indication that any of these tests are or could be computerized. Besides the testing mechanisms, there is also some discussion of how MRI and PET imaging help to identify more closely the parts of the brain associated with different aspects of memory.

In the article "Activation of the Visual Cortex in Motivated Attention," Margaret Bradley et al. studied the effects of emotional involvement on visual cortex engagement. The study used students in a psychology class and images in both color and grayscale; the images came from seven categories designed to provoke emotional responses, and the imaging technique used was functional magnetic resonance imaging. The study is very involved and deeply statistical, and probably of interest to those whose focus is on emotional response; however, it would not seem to have much applicability to the subject of ADHD or to how best to approach this disorder.

The Rehabilitation of Attention using Computer Assisted Cognitive Rehabilitation Programs including: Selective, Sustained, and Divided Attention.

Although the study "Investigation of a Direct Intervention for Improving Attention in Young Children with ADHD" by Kerns, K.A., Eso, K., and Thomson, J. (1999) discusses the use of computers mostly in passing, apparently some of the Pay Attention© materials used in the study were administered by computer, and it may be that those materials would be generally adaptable to computer use, especially if computers add some extra measure of acceptability for young children; that is, the children feel very grown up working with the computers, so they attend better at the same time their attention issues are being addressed.

In Chapter Nineteen of Cognitive Neurorehabilitation the book's editors provide brief evaluations of eleven different studies that used some manner of computerized attempt at re-training attention in stroke patients or otherwise brain-damaged adults. Although each study showed something in the way of possibilities, across the spectrum the major difficulty was that there seemed to be no generalizability to a multitude of situations or to general life situations. Each study had weaknesses in its methods of testing or methods of reporting. Because of these problems, it is not possible to decide whether or not computer-aided training has a real place in the rehabilitation of brain injury or stroke patients, and since none of these studies was aimed at congenital cognitive disorders such as ADHD, there is no way to know if the measures used have transferability.
There is also no way to know whether it is the use of computers that is not functional or the particular programs being used; perhaps the programs are not interesting enough to engage attention long enough for there to be a lasting effect.

Bill Lynch, a PhD in private practice in Redwood City, California, offers an article titled "A Historical Review of Computer-assisted Cognitive Retraining," which appeared in the Journal of Head Trauma Rehabilitation. Lynch says that researchers and therapists began considering the possibility of using computers in cognitive retraining about 40 years ago. Places like NYU Medical Center, Hawaii State Hospital, and the VA Medical Center in Palo Alto became known for their programs involving computers even before electronic video games and home computers were common. The rehabilitation programs in these places approached the treatment of traumatic brain injuries with an emphasis on the impaired thinking, memory and information processing.

His purpose was to give a general overview of what was happening in the field of computer-assisted neurorehabilitation. His evaluation is that the programs which work best combine the information the client is meant to learn with practical applications and fun.

It is also his perception that programs aimed at compensatory approaches provided better outcomes than did those applying restorative programs. He offers a number of suggestions in the compensatory category. First of all, there is a wide array of computer-based assistive technology, such as NeuroPage, a programmed pager that will provide either tonal or text reminders for prospective memory tasks. Lynch says that Wilson and colleagues have published research on the NeuroPage which indicates improved ADLs and good patient compliance in 80% of a 143-subject study. NeuroPage is one of the many of what Lynch calls "reminding systems." He suggests that such things as answering machines, digital watches with data storage, microcassette and digital palm-sized dictation devices, and PDAs are all computer-based helpers that can create greater independence for brain-impaired people, no matter what the diagnosis.

Lynch offers for consideration an outline for future research in this field. He says research should address these issues:

(1) How does treatment with this software compare with existing treatment approaches with regard to effectiveness and cost? (2) What brain conditions are more likely to respond favorably to this software? (3) What is the optimal time after onset to begin treatment with this software? (4) What is the optimal treatment regimen or schedule for using this software?

Lynch includes a list of software packages to assist with everything from driver screening and training to programs that help with math and executive function by creating "real-world" simulations that exercise math and planning abilities.

It would seem that the compensatory approach could be of benefit to ADHD clients because their deficits do not result from the loss of abilities they once had and understood.

In the discussion of a program that an interdisciplinary team is working on, D.L. Mickey et al., of Neuropsychological Associates of Madison, WI, describe a battery of computer-aided retraining formats for retraining victims of traumatic brain damage. Their article, "Brain Injury and Cognitive Retraining: The Role of Computer Assisted Learning and Virtual Reality," explains the work they are doing to harness the computer as a therapeutic tool. The focus is on traumatic injury, but the information itself should be transferable. Their Adaptable Learning Environment for Rehabilitation Training, acronym ALERT, begins by defining the cognitive domains to be addressed:

The design of this learning environment addresses a variety of cognitive domains including arousal and orientation, attention and concentration, memory, visual and spatial perception, language and verbal skills, executive functioning (e.g., reasoning, planning, organization, problem solving), life skills (e.g., time telling, budgeting, following directions), and social skills. Cognitive tasks are activities specific to assessing the skills contained within a broader cognitive domain (e.g., attention, memory, executive functioning) and are designed to enhance the learner's skill level in that particular sub-domain.
(2)

The authors go on to describe the program as adaptable to age through game-like activities which use a multiple-level system to provide challenge and growth. They write that once the participant reaches a certain "predetermined level of competency in a particular skill, he or she will then engage in a series of simulation tasks designed to enhance the functional use of the skill and increase the ecological validity of the intervention. This is accomplished through the use of virtual environments." The discussion that follows, although aimed at brain injury issues, also seems to include many of the same cognitive weaknesses which are recognizably part and parcel of ADHD. It would seem that the same kinds of interventions could be constructively applied to ADHD across the age spectrum just as well as to brain damage. Part of the ALERT program includes Virtual Reality interactive settings. The image shown in the article is that of a kitchen that "talks back," so to speak, with interactive objects: a phone that rings, a sink that fills with water, a pot that boils, and a coffee pot that works. As the text puts it, "Once started these processes… continue until the user takes action." All of those things can be happening at once, "thereby creating a need for prioritized attention. Meanwhile, the user's responses and response times are recorded and performance scores calculated to be used to update the user model and guide the course of treatment."

Plans for these programs include expanding the life-skill scenarios and virtual environments to include many vocational settings and vocational and social problem-solving scenarios in order to support vocational rehabilitation. It does seem that there would be ways to adapt this whole idea to working with ADHD. In the Conclusion to the article, it is noted that ALERT was to be made available over the Internet and on CD-ROM. Release was scheduled for Spring 1998. There was also supposed to be a web site: http://www.earthlab.com/alert/

Margaret M. Bradley et al., in a study for the National Institute of Mental Health titled "Activation of the Visual Cortex in Motivated Attention," published in Behavioral Neuroscience, worked with neuroimaging, specifically fMRI, to measure where and how motivated attention is focused in the brain. No particular disease processes were discussed, as the actual focus of the study was emotional arousal and focused attention, but it is possible that this information could be useful in designing computer programs to assist with rehabilitating attention in persons with ADHD.

An article from Cyber Psychology and Behavior, Vol. 5, November 2002, titled "The Effect of Virtual Reality Cognitive Training for Attention Enhancement," from a Korean research team, Cho Baek-Hwan et al., concerns the use of Virtual Reality to assist in training and enhancing attention in children with ADHD. Baek-Hwan Cho et al. begin their discussion with a definition or description of the five basic categories of attention, which are: focused attention, sustained attention, selective attention, alternating attention and divided attention. This group developed their own materials to be used for this study. Since much of a child's time is spent in the classroom, the first project was a virtual classroom, or a virtual environment, VE. The VR cognitive training tools were designed to follow the patterns set up in ADHD assessment instruments such as the Continuous Performance Task, the Test of Variables of Attention or the Wisconsin Card Sorting Test, but are meant to be used in training rather than assessment. The comment is also made that the VR programs have the added feature of levels that allow the subjects to track their own progress.

The researchers report a number of factors that might make evaluation of their system difficult. For example,
the subjects, while they displayed many of the same behavior tendencies as people with ADHD, were a population from a reformatory setting, and none of them had an actual diagnosis of ADHD. The research group also feels that their programming needs work, because the group that used just a regular desktop computer for the training reported the effort to be tedious and uncomfortable. The positive side, however, is that the people using the Head Mounted Display and a Head Tracker not only showed good results overall but also worked more willingly with the training program. There is some detailed information as to how the tests worked and what the elements were.

In her dissertation titled "Cognitive Rehabilitation: A Method for Improving Sustained and Selective Attention in Adolescents With Attention Deficits," Glinda Bullock designed a research mechanism to test for the effectiveness of computer-mediated cognitive rehabilitation for young people who had attention deficit issues due to ADD. The testing was done with four male middle-school students who were all diagnosed with ADD and who were all being treated with psychostimulant medication (Ritalin). She used a pretest-posttest design. The subjects were tested before the intervention to check on levels of attentional functioning and then were tested after six weeks of a hierarchical attention training program. This program included three weeks of sustained attention training and three weeks of selective attention training. (Italics added) Bullock reports that there was some improvement for all four subjects in sustained and selective attention on at least two out of three measures. "However," she writes, "only selective attention results were significant." (Italics added) Bullock states that these results give reason to believe that cognitive rehabilitation could be effective in treating ADD and that further studies using groups, subjects and controls are warranted.

Incredible Horizons: New Brain Research does offer some basic information on brain plasticity and developing attention, but this particular document is basically one long commercial for a series of products put out by Advanced Brain Technologies/Unique Logic and Technologies. The products are a collection and combination of software programs and dietary supplements.

In "Current Directions in Computer-assisted Cognitive Rehabilitation," from the journal NeuroRehabilitation, Samuel T. Gontkovsky et al. discuss the state of computer-based neurocognitive rehabilitation. They view cognitive deficits in a fairly general way, although most of their examples have to do with brain injury patients. They discuss various studies that have been done with computer programs. The researchers say that as the empirical evidence continues to increase, "direct comparisons of findings across investigations remains problematic due to multiple methodological issues similar to those encountered when attempting to compare traditional methods of cognitive rehabilitation" (196), and the general view is that computers achieve about the same efficacy for patient improvement as traditional therapy. While at first glance that may not seem wonderful, these researchers see those results in areas such as attention deficits as very promising, because once training is completed, programs can be provided for use on home computers and people can continue working with the material at home. Even if such treatment needs to be combined with more traditional interventions, it is suggested that working at home could be very cost effective in itself and in reducing the number of therapist hours required per patient. Even if computer-based treatments were not used to "replace" traditional therapy, it would make sense to use them for reinforcement of treatment. As in many other articles, the possibilities for use with ADHD certainly exist but are not directly addressed.

Norman W. Park and Janet L. Ingles contribute their article "Effectiveness of Attention Rehabilitation after an Acquired Brain Injury: A Meta-Analysis," published in Neuropsychology. They discuss the importance of returning brain-injured subjects to their former level of attention-related function. The writers state, most accurately, that the level to which attention is negatively affected is a strong predictor of a patient's likely ability to return to work.
The authors outline the usual methods of re-training attention, which usually include completing a series of exercises or drills. The rehabilitation client responds to visual or auditory stimuli within a framework of rules. In the Attention Process Training program there is a series of graded exercises that begin with simply pressing a buzzer when the client hears the number 3 and move through increasingly complex tasks. There is feedback at each level. When the training tasks are completed, there is testing, and treatment is considered successful when improvement is shown on the tests of cognitive function. The Attention Process Training program has sections for sustained, selective, alternating and divided attention. This approach was developed because one current theory says the specific divisions of attention need specific training or re-training. Nothing was mentioned as to whether any of this particular program is computerized.
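To make the idea concrete, the sketch below shows what a very simple computerized version of the first level of such a graded drill might look like: respond when the target digit 3 appears, with immediate feedback on accuracy and response latency, and a rough accuracy criterion for moving up a level. This is an illustration only, not the Attention Process Training software; the trial counts, the 80% criterion and the digit pools are assumptions made for the example.

```python
"""
Minimal sketch of a graded sustained-attention drill in the spirit of the
exercise described above (respond when the target digit 3 appears).
Illustrative only; not the Attention Process Training software.
"""
import random
import time

TARGET = "3"

def run_level(n_trials: int, digit_pool: str) -> float:
    """Present digits one at a time; the user types 'y' for a target, Enter otherwise."""
    correct = 0
    for _ in range(n_trials):
        digit = random.choice(digit_pool)
        start = time.perf_counter()
        answer = input(f"Digit: {digit}  (type y if it is a {TARGET}, else press Enter) ").strip().lower()
        latency = time.perf_counter() - start
        hit = (answer == "y") == (digit == TARGET)
        correct += hit
        # Immediate feedback on accuracy and response latency, as in the drills described above.
        print(f"  {'correct' if hit else 'incorrect'}  ({latency:.2f} s)")
    return correct / n_trials

if __name__ == "__main__":
    # Graded levels: later levels add more distractor digits, so the target is rarer.
    levels = ["1234", "12345678", "0123456789"]
    for i, pool in enumerate(levels, start=1):
        accuracy = run_level(n_trials=5, digit_pool=pool)
        print(f"Level {i}: accuracy {accuracy:.0%}\n")
        if accuracy < 0.8:
            print("Accuracy below 80%; a real program might repeat this level before moving on.")
            break
```

A full training program would of course use auditory as well as visual streams and record the latencies for later review, but the level-and-feedback structure is the part the literature emphasizes.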


According to these writers, another, less studied approach to rehabilitation is to train or retrain attention by having clients perform specific tasks of "functional significance." One example of something considered a functionally significant task was driving. Kewman et al. (1985) looked at driving as a complex skill that requires critical attention and the flexibility to shift focus from one activity to another. An experimental group completed a series of exercises that were developed around divided attention. The tasks were performed while driving an electric-powered vehicle, using a group of brain-injured subjects and a group of non-injured subjects. This paper says that the brain-injured subjects were not trained using the specific exercises and offers no results of this experiment.

This paper goes on to say that one purpose is to evaluate other studies of cognitive, attention-focused rehabilitation, specifically those studies that attempted to "directly retrain attention." In the "Discussion" section of this paper, the writers make these comments:

One objective of this meta-analysis was to evaluate quantitatively, for the first time, the efficacy of rehabilitation programs that attempt to directly retrain attention. Several lines of evidence showed that these methods produced only small, statistically nonsignificant improvements in performance in all general measures of cognitive function and in all specific measures of attention when improvement was determined using pre-post with control effect size estimates. We also examined individual studies that reported an improvement in cognitive functioning. Of the 12 direct-retraining studies with a control condition, 6 reported no statistically significant improvement in performance after training. In the remaining 6 studies, the pattern of improvement was specific in each case and, in most cases, could be attributed to specific skills acquired during training. (Italics added) Thus, support for the hypothesis that direct retraining can restore or strengthen damaged attentional function was not found in the reviewed studies. (205)

The writers do state that there may have been significant improvement for individuals; however, grouped data prevented examination of such cases. The second stated purpose of the meta-analysis was:

…to identify methodological factors that may contribute to the variability in training efficacy across studies. Effect sizes derived from studies without a control group were consistently much larger than those from studies with a control group… These findings strongly suggest that the larger effect sizes in the pre-post only studies are attributable to the effects of practice on the outcome measures and not to other associated factors." (205)

Although these reviewers are critical of how much of the reviewed research was conducted, they offer an interesting comment for future consideration:

Perhaps most important, however, the consistency and magnitude of practice effects, observed even after a single exposure to test material, demonstrate that people with acquired brain injuries can quickly learn a broad range of skills. This finding suggests that treatment aimed at helping people learn or relearn skills after an acquired brain injury will probably be effective, particularly if the skill being learned has a substantial attentional component.
(Italics and emphasis added) (206)

In the closing remarks there are some comments that might be pertinent to working with ADHD attention deficit issues and that might also be particularly suitable for computer-aided training. They are discussing breaking complex skills down into simpler components. By practicing these components and giving feedback as practice goes along, performance can more easily be evaluated.
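For readers unfamiliar with the "pre-post with control" effect sizes referred to in the Park and Ingles excerpts above, one standard way of defining such an estimate is sketched below. This is a common general formulation, not necessarily the exact estimator Park and Ingles used; it also shows why effect sizes from uncontrolled pre-post studies tend to run larger when practice effects are present.

```latex
% A common pre-post-with-control effect size: the treated group's mean change
% minus the control group's mean change, scaled by the pooled pretest SD,
% alongside the uncontrolled pre-post effect size. (General formulation only;
% not necessarily the estimator used in the meta-analysis discussed above.)
\[
d_{\text{ppc}} \;=\; \frac{(\bar{X}_{\text{post},T} - \bar{X}_{\text{pre},T})
                      \;-\; (\bar{X}_{\text{post},C} - \bar{X}_{\text{pre},C})}
                     {SD_{\text{pre, pooled}}},
\qquad
d_{\text{pp}} \;=\; \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{SD_{\text{pre}}}.
\]
% A practice effect adds roughly equally to both groups' pre-to-post change,
% so it largely cancels in d_ppc but remains in the uncontrolled d_pp, which
% is one reason effect sizes from studies without control groups run larger.
```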


A little earlier in their discussion, Park and Ingles also used the term "neuropsychological scaffolding." This is, perhaps, another concept that could be applied to computer-assisted work with ADHD subjects.

Although this letter is found on the website for a company that provides Captain's Log® as a product, it seems worth noting. As is so often the case, the subjects who took part in the study at Shasta College were all brain injured, so it must be inferred that they would be re-learning, not acquiring for the first time, the capabilities discussed. The letter writer, who is affiliated with the college's High Tech Center, explains that each student is carefully evaluated for abilities and disabilities prior to beginning the use of Captain's Log®. An individual care plan is devised with the computer program as its main component. The writer, Bobby Roberts, says:

In just a short period of time (in some cases as little as nine weeks) we are seeing great improvement in these students' abilities in memory, in reading and in all of the other basic cognitive areas covered by Captain's Log®. Most exciting of all, we are seeing these skills generalize into daily living and improve the students' quality of life. The individuals showing the most improvement and the biggest generalization are those who spent a minimum of one hour, three times a week working with Captain's Log®. It seems to be very important that they start at an appropriate level in each category and follow their individual plan….

"An Analysis of Computerized Cognitive Training and Neurofeedback in the Treatment of ADHD" is a review that comes from the same site and is commentary by Joseph A. Sandford, PhD, on a study conducted by Drs. Aubrey Fine and Larry Goldman at the Center for the Study of Special Populations, California State Polytechnic University, Pomona, California. The research sample was 67 subjects between the ages of 8 and 11, 85% of whom were males. Each volunteer had been professionally diagnosed with ADHD by either a physician or a psychologist. They were randomly assigned to Cognitive Training, Neurofeedback or No Treatment. A pre-post design was used, and the test examiners were blind to group designation. This author states that the experimental findings were analyzed based on a "3x2 ANOVA factorial design…. Thus, this design provided experimental controls for possible practice, learning and placebo effects."

The description/definition of Captain's Log® says it consists of cognitive training exercises that utilize the capacity of the computer to provide immediate non-judgmental feedback for self-regulation, individualized instruction, reinforcement of response inhibition, challenging tasks which require sustained attention, engaging and game-like stimuli, and small, chunk-like building-block exercises. A number of familiar non-computerized tests were administered to the test subjects, and the Cognitive Training group (the group treated with Captain's Log®) scored generally and significantly higher on most measurements. There is further discussion with regard to the feedback portion of the testing, and the commentator finishes with the observation that no treatment at all of children with ADHD during the summer led to a pervasive worsening of their emotional and behavioral problems. He commented that it wasn't clear what they did, but it seemed that a lack of regular tasks demanding the various disciplines found, say, in school was not good for ADHD children. All of these comments and observations will be included in the next section as well, as they cover all the aspects of ADHD discussed there.

The Rehabilitation of Executive Function Using Computer Assisted Cognitive Rehabilitation Programs including Response Inhibition (self-regulation); Problem-solving and Self-monitoring; Working Memory; Planning and Organization.

In the dissertation done by Dona M. Belluci, "The Effectiveness of Computer-Assisted Cognitive Rehabilitation for Patients With Chronic Mental Illness," many of the symptoms described for schizophrenia are also, to one extent or another, present in people with ADHD. Seeing these similarities, it seems there would be a logic to trying some of the same computer-assisted rehabilitations with the ADHD population.

In Chapter Twenty-one of Cognitive Neurorehabilitation there is a discussion of the need for, and the progress in, theories and methods for helping people with memory deficits. The discussion is, again, focused around people who have known proper memory function and have lost some of that function due to injury or illness. In recent times there have been three major approaches to memory rehabilitation: environmental adaptations, new learning and the use of new technology. New technology involves a number of possibilities. "Smart" houses that are being designed to increase the possibility of independent living for confused elderly people are likely adaptable to other groups who suffer cognitive impairment. These special homes are a mix of high technology and adapted everyday appliances or items. Examples are telephones with pre-programmed numbers, where a picture on the phone button cues the user as to which person in their world is connected to which button. Video phone links connect the client with a care center or primary helper. Control systems for water temperatures can be installed to protect against showers or baths being too hot or cold.

Another form of technology is the NeuroPage. It is a pager which can be used to remind memory-impaired people to do certain things as they go through their day. Whether the prompt is "take your medications" or "it's time to feed the dog," with the assistance being mediated from outside, the person isn't trying to use an impaired memory to operate passive memory aids such as a day runner. There is also something called the Interactive Task Guidance System, which can be used to provide step-by-step instructions for doing whatever daily task the guidance is required for. The ITGS uses computers to provide cues for daily tasks such as cooking.
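Devices like the NeuroPage and the Interactive Task Guidance System described above are, at their core, schedulers that deliver time-based cues so the user does not have to rely on impaired prospective memory. As a rough illustration of that idea only, and not of either product's actual software, a few lines built on Python's standard sched module can fire text reminders at preset times; the messages and delays below are hypothetical.

```python
"""
Minimal sketch of a NeuroPage-style reminding system: time-based prompts
delivered automatically so the user does not have to rely on impaired
prospective memory. Illustrative only; not the NeuroPage or ITGS software.
"""
import sched
import time

def remind(message: str) -> None:
    # A real device would beep, vibrate, or display the text on a pager screen.
    print(f"[{time.strftime('%H:%M:%S')}] REMINDER: {message}")

scheduler = sched.scheduler(time.monotonic, time.sleep)

# Hypothetical prompts, given here as short delays (in seconds) for demonstration;
# a real system would schedule them at clock times across the day.
for delay, message in [(2, "Take your medication"),
                       (5, "Feed the dog"),
                       (8, "Start cooking: step 1, fill the pot with water")]:
    scheduler.enter(delay, 1, remind, argument=(message,))

scheduler.run()  # blocks, firing each reminder at its scheduled time
```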
"A Cognitive Remediation Program for Adults With Attention Deficit Hyperactivity Disorder," in the Australian and New Zealand Journal of Psychiatry, 2002; 36:610-616, by Caroline Stevenson, Stephanie Whitmont, Laurel Bornholt, David Livesey, and Richard J. Stevenson, offers the results of a cognitive intervention on behalf of adults who have been diagnosed with ADHD. The purposefully designed intervention reports nothing about the use of computers, but because of the success reported it is worth noting and worth considering as a possible model to adapt for children and possibly to adapt for use with computers.

The study reported here used a three-prong approach to reduce the impact of cognitive impairments: (i) retraining cognitive functions; (ii) teaching internal and external compensatory strategies; and (iii) restructuring the physical environment to maximize functioning. Many experts in the field of adult ADHD use and recommend such an approach, albeit without systematic evaluation of these interventions.

The Cognitive Remediation Program (CRP) was designed for a small-group format and included eight weekly, therapist-led group sessions; support people who acted as coaches; and a workbook for the participants. The point of the group sessions was to teach strategies to improve function in the areas of motivation, concentration, listening, impulsivity, organization, anger management and self-esteem. To evaluate the co-effects of intervention and medication, both medicated and non-medicated participants were included. To stabilize this variable, participants were asked not to change their medication status until the two-month follow-up point. The outcome measurements used to determine the success of the intervention were the DSM-III-R ADHD checklist, the Adult Organizational Scale, the Davidson and Lang Self-Esteem Measure, and the State-Trait Anger Expression Inventory. These measurements were administered pre-treatment, immediately post-treatment, and at 2 months and 12 months post-treatment. The results for maintaining the gains of this particular intervention are encouraging in all respects except the anger management component.

In the article "Functional Treatment Approaches to Memory Impairment Following Brain Injury," from Topics in Language Disorders (1997), Vol. 18, 45-58, Judith Hutchinson and Thomas P. Marquardt discuss many methods for dealing with memory impairment. They discuss MRI and PET imaging to help with closer diagnosis. They discuss the possible role of medication in assisting with cognitive/memory rehabilitation. They discuss the many possible test instruments to evaluate the extent of memory damage. The next section of their article deals with what they refer to as peripheral factors, such as vision or hearing aids, making sure that nutrition and hydration are optimal, adjusting medications as may be needed, and making sure the patient is getting enough sleep, is relaxed, and is in as little pain as possible.

They then go on to discuss practical, environmental adjustments that can be made to assist with memory. These aids include labeled, fixed receptacles for glasses, keys, dentures, etc., signs on cabinets and drawers, and reminder cues for activities or medication schedules. There are further comments about family/caregiver training to help these people better understand what is going on with the patient and to train the people around the patient in the use of cues and environmental adjustments to make things easier for everybody. This article also includes an extended discussion of the importance of developing patient awareness of disability, and of the extent of disability, as the foundation on which to build or rebuild memory function. The authors quote Crossen et al. (1989) on these levels of awareness:

1. Intellectual awareness is the ability to understand and verbalize deficit knowledge.
2. Compensatory awareness is the slightly more advanced ability to identify errors and correct them.
3. Anticipatory awareness is the demonstrated ability to prepare for and prevent potential problems resulting from cognitive impairments.

The memory-impaired patient must be assessed for current level of awareness and presented with educational and counseling support appropriate to that level of awareness in order that awareness be gradually increased. Group therapy approaches are particularly effective in providing peer feedback and improving insight into deficits. Video feedback is also frequently employed for awareness training, including treatment to facilitate communication in context, message repair and cohesiveness of narrative. (Erlich & Spies, 1985)

Hutchinson and Marquardt go on to say:

Many popular approaches to memory improvement, including those designed for the non-brain-injured population, are based on the assumption that repetitive use, practice, and placing of increased demands on the cognitive system will result in neurological adaptation and recovery. General "stimulation" approaches have been found to have minimal and only generalized effects on overall cognitive function. However, narrowly focused direct retraining of cognitive processes through hierarchically arranged treatment (the process approach described by Bracy (1986) and Sohlberg and Mateer [1987]) has been shown to be effective in the rehabilitation of some types of linguistic and cognitive impairments. Process-specific training has been used most effectively in attention training and probably least effectively in memory training.
{Emphasis/italics added} The crucial role of attention in memory function—in encoding, rehearsal, consolidation, in retrieval processes—indicates the utility of providing attention-process training as part of a memory rehabilitation program. However, repetitive practice in memorization and frequent demands to remember material, although still frequently observed in rehabilitation settings, have not been shown to be useful in memory improvement and, in fact, are likely to be detrimental to the therapy process. (Prigatano et al., 1984)

The authors go on to offer suggestions in the way of strategies to be used in a rehabilitation setting. There are personal compensatory training techniques which focus on increasing conscious attention, rehearsal techniques, and operating on specific content to enhance its encoding and subsequent retrieval. Rehearsal training is the process of repetition of information in working memory in order to enhance its consolidation into long-term memory. They go further to explain that this process is to some extent automatic and unconscious in normal function, but the automaticity of processing is impaired in brain injury. Rehearsal training is based on the idea that brain-injured patients need explicit, conscious rehearsal of information.

There is also a discussion of visual imagery, domain-specific training, the TRACES protocol and external compensatory aid training as ways to work around or re-train memory deficits. There are no specific suggestions for use outside the area of dealing with people who had adequate memory skills and lost them due to injury; however, it would seem that there would be a certain aspect of transferability. There is also no mention of the use of computers except as the power behind the NeuroPage. This article is seven years old, and there may have been changes that would now include the deliberate use of computers at some point. This article, in its focus on memory, seems by implication to suggest possible treatment avenues for all executive functions and for conditions such as ADHD.

Although this letter is found on the website for a company that provides Captain's Log® as a product, it seems worth noting. As is so often the case, the subjects who took part in the study at Shasta College were all brain injured, so it must be inferred that they would be re-learning, not acquiring for the first time, the capabilities discussed. The letter writer, who is affiliated with the college's High Tech Center, explains that each student is carefully evaluated for abilities and disabilities prior to beginning the use of Captain's Log®. An individual care plan is devised with the computer program as its main component. The writer, Bobby Roberts, says:

In just a short period of time (in some cases as little as nine weeks) we are seeing great improvement in these students' abilities in memory, in reading and in all of the other basic cognitive areas covered by Captain's Log®. Most exciting of all, we are seeing these skills generalize into daily living and improve the students' quality of life. The individuals showing the most improvement and the biggest generalization are those who spent a minimum of one hour, three times a week working with Captain's Log®. It seems to be very important that they start at an appropriate level in each category and follow their individual plan….

"An Analysis of Computerized Cognitive Training and Neurofeedback in the Treatment of ADHD" is a review that comes from the same site and is commentary by Joseph A. Sandford, PhD, on a study conducted by Drs. Aubrey Fine and Larry Goldman at the Center for the Study of Special Populations, California State Polytechnic University, Pomona, California. The research sample was 67 subjects between the ages of 8 and 11, 85% of whom were males.
Each volunteer had been professionally diagnosed with ADHD by either a physician or a psychologist. They were randomly assigned to Cognitive Training, Neurofeedback or No Treatment. A pre-post design was used, and the test examiners were blind to group designation. This author states that the experimental findings were analyzed based on a "3x2 ANOVA factorial design…. Thus, this design provided experimental controls for possible practice, learning and placebo effects."

The description/definition of Captain's Log® says it consists of cognitive training exercises that utilize the capacity of the computer to provide immediate non-judgmental feedback for self-regulation, individualized instruction, reinforcement of response inhibition, challenging tasks which require sustained attention, engaging and game-like stimuli, and small, chunk-like building-block exercises. A number of familiar non-computerized tests were administered to the test subjects, and the Cognitive Training group (the group treated with Captain's Log®) scored generally and significantly higher on most measurements. There is further discussion with regard to the feedback portion of the testing, and the commentator finishes with the observation that no treatment at all of children with ADHD during the summer led to a pervasive worsening of their emotional and behavioral problems. He commented that it wasn't clear what they did, but it seemed that a lack of regular tasks demanding the various disciplines found, say, in school was not good for ADHD children. These comments and observations are repeated in this section as well, as they cover the aspects of ADHD discussed here.

Working Memory is the chief cognitive function being evaluated in the study "Training of Working Memory in Children With ADHD," by Torkel Klingberg, Hans Forssberg, and Helena Westerberg, published in the Journal of Clinical and Experimental Neuropsychology. Working memory, say these researchers, underlies several cognitive abilities including logical reasoning and problem-solving. In ADHD, impairment of WM is of central importance, and it is suggested that it reflects, in part, an impaired frontal lobe. The study was set up and designed to investigate whether it would be possible to improve WM capacity with training. They go on to say that in previous attempts to do the same thing, the training did not involve graduated difficulty levels, so that while reaction times became faster, there was no reported increase in WM capacity. There has been some reported success in teaching rehearsal strategies to children with learning disabilities, and there have been studies where subjects have learned strategies to retain large numbers of digits. These strategies did not, however, prove to generalize to other WM or reasoning tasks.

The study was made unique by designing special computer programs that included a staircase method of increasing demand, with the idea of pushing the subject close to capacity. Training with these programs was 20 minutes per day, 4-6 days per week, for at least 5 weeks. In the pre-test, post-test part of the study, a battery of tests used for evaluation of WM capacity and prefrontal functioning was administered. Among others were included Raven's Colored Progressive Matrices and the Stroop test, which children with ADHD are known to have problems with. There were seven children in the test group and seven in the control group. For the control group there was a specially designed "placebo" program that lacked the interactive difficulty level and was administered for less than 10 minutes per day. Also, the study was a double-blind design in which parents, children and the psychologist who administered pre- and post-training testing did not know which version of the programs the children had practiced. There is a detailed description of the actual test measures and how they changed as difficulty increased.
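The staircase idea described above can be made concrete with a few lines of code. The sketch below illustrates the general adaptive technique only, not the Klingberg et al. training software: a digit-span task whose length steps up after a correct recall and down after an error, so the trainee stays near capacity. The one-up/one-down rule, starting span and trial count are assumptions made for the example.

```python
"""
Minimal sketch of an adaptive (staircase) working-memory span task: the list
length goes up after a correct recall and down after an error, keeping the
trainee close to capacity. Illustrative only; not the Klingberg et al. software.
"""
import random

def digit_span_trial(span: int) -> bool:
    """Show a digit sequence, then ask for it back; return True if recalled exactly."""
    sequence = [str(random.randint(0, 9)) for _ in range(span)]
    print("Remember:", " ".join(sequence))
    # A real training program would hide the digits after a short delay; here we simply ask for recall.
    answer = input(f"Type the {span} digits back, separated by spaces: ").split()
    return answer == sequence

def run_staircase(trials: int = 10, start_span: int = 3) -> None:
    span = start_span
    for _ in range(trials):
        ok = digit_span_trial(span)
        print("correct" if ok else "incorrect")
        # One-up / one-down staircase: harder after success, easier after failure.
        span = span + 1 if ok else max(2, span - 1)
    print(f"Finished at span {span}, a rough estimate of current capacity.")

if __name__ == "__main__":
    run_staircase()
```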
This same test measure was used in a second experiment, in which four healthy males between 22 and 29, with no psychotic or neurological history, participated in the training. The general findings were that during training, performance gradually improved on all trained tasks, with an increased amount of information kept in WM and decreased reaction times. The writers also report that on the cognitive testing performed before and after training, all subjects improved on all tasks. They go on to say, "The present study showed that intensive and adaptive, computerized WM training gradually increased the amount of information that the subjects could keep in WM. The improved performance occurred over weeks of training, and is in this respect similar to the slow acquisition of a perceptual skill or a motor skill."

In the article "Executive Functions, Self-Regulation, and Learned Optimism in Pediatric Rehabilitation: A Review and Implications for Intervention," in Pediatric Rehabilitation (2002), Mark Ylvisaker and Timothy Feeney review much of the current literature on executive functions and make recommendations for treating children with executive function deficits. Most of their discussion has to do with brain injury, but they state that they believe what they have found is transferable to children with congenital learning dysfunctions such as ADHD. In various studies, researchers found that different kinds of damage or conditions will produce variations on the general theme of executive functions. Different deficits cause different parts of the executive function picture to be distorted. For example, children with traumatic brain injury configure the executive function puzzle differently than children whose brain damage was caused by meningitis. The authors go on to say that ADHD children present a third, separate picture in that reduced behavioral inhibition is the core deficit, which causes a "secondary interference of four additional executive functions (working memory; self-regulation of affect and arousal; internalization of speech; and reconstitution or behavioral analysis and synthesis)." (52)

Ylvisaker and Feeney also discuss strategies for working successfully with the child population which has suffered some sort of brain injury. Their opinion is that executive functions must be worked with as a whole if there is to be a successful outcome for the client. Further, they suggest strategies based on the real-life, day-to-day issues the clients face, as opposed to laboratory testing-type activities. Their emphasis was student-based co-planning and execution, led by specially trained teachers or parents. There was nothing mentioned about the use of computers, but there doesn't seem to be any reason why their "Goal-Plan-Do-Review" format couldn't be adapted to function with a computer program, as long as the cooperative nature of the concept wasn't lost or turned over to the computer. As far as that goes, if an adolescent really had a problem with authority figures, he or she might take guidance and direction better from a computer figure of some sort.

In another part of this study, the authors review the work of Anderson et al. and quote this passage: "The magnitude and intractability of the defects incurred by injury at an early age suggest that there might be limited neuronal plasticity in the sectors of the prefrontal circuitry which contribute to emotional modulation and the linkage of emotion and decision making." (59) There is also an extensive checklist to be used in assessing interventions proposed for use in the treatment of impaired children.

Some of the material found in the Hutchinson/Marquardt article cited in a previous section might also apply to working with cognitive deficits. Many of the same tests might in some manner function for assessing weaknesses and strengths in ADHD, but nothing is said about how researchers would go about testing for functions that probably work poorly at best. Nothing is said about how to teach what a person never had. It may be that the Present Functioning Questionnaire and the Multimodal Inventory of Cognitive Status might be of assistance in ADHD, especially with adults who may have managed to compensate for weaknesses. This same information appears in section VI, as it is applicable to both discussions, as we noted at the beginning of the project.
It is hoped that this particular set of information proves accessible and useful, as it, or something like it, seems both logical and workable.

In the discussion of a program that an interdisciplinary team is working on, D.L. Mickey et al., of Neuropsychological Associates of Madison, WI, describe a program of computer-aided retraining for victims of traumatic brain damage. Their article, "Brain Injury and Cognitive Retraining: The Role of Computer Assisted Learning and Virtual Reality," explains the work they are doing to harness the computer as a therapeutic tool. The focus is on traumatic injury, but the information itself should be transferable. Their Adaptable Learning Environment for Rehabilitation Training, acronym ALERT, begins by defining the cognitive domains to be addressed:

The design of this learning environment addresses a variety of cognitive domains including arousal and orientation, attention and concentration, memory, visual and spatial perception, language and verbal skills, executive functioning (e.g., reasoning, planning, organization, problem solving), life skills (e.g., time telling, budgeting, following directions), and social skills. Cognitive tasks are activities specific to assessing the skills contained within a broader cognitive domain (e.g., attention, memory, executive functioning) and are designed to enhance the learner's skill level in that particular sub-domain. (2)

The authors go on to describe the program as adaptable to age through game-like activities which use a multiple-level system to provide challenge and growth. They write that once the participant reaches a certain "predetermined level of competency in a particular skill, he or she will then engage in a series of simulation tasks designed to enhance the functional use of the skill and increase the ecological validity of the intervention. This is accomplished through the use of virtual environments." The discussion that follows, although aimed at brain injury issues, also seems to include many of the same cognitive weaknesses which are recognizably part and parcel of ADHD. It would seem that the same kinds of interventions could be constructively applied to ADHD across the age spectrum just as well as to brain damage. Part of the ALERT program includes Virtual Reality interactive settings. The image shown in the article is that of a kitchen that "talks back," so to speak, with interactive objects: a phone that rings, a sink that fills with water, a pot that boils, and a coffee pot that works. As the text puts it, "Once started these processes… continue until the user takes action." All of those things can be happening at once, "thereby creating a need for prioritized attention. Meanwhile, the user's responses and response times are recorded and performance scores calculated to be used to update the user model and guide the course of treatment."

Plans for these programs include expanding the life-skill scenarios and virtual environments to include many vocational settings and vocational and social problem-solving scenarios in order to support vocational rehabilitation. It does seem that there would be ways to adapt this whole idea to working with ADHD. In the Conclusion to the article, it is noted that ALERT was to be made available over the Internet and on CD-ROM. Release was scheduled for Spring 1998. There was also supposed to be a web site: http://www.earthlab.com/alert/

Review of Computer Assisted Cognitive Rehabilitation as a Treatment Modality: A. Empirical Support, B. Limitations, and C. Future Directions.

A. Empirical Support:

In a review of the literature entitled "Technology Applications for Children with ADHD: Assessing the Empirical Support," in Education and Treatment of Children, 25:2, Chunshen Xu, Robert Reid and Allen Steckelberg of the University of Nebraska undertake to draw some conclusions regarding the use of computerized testing and training for ADHD children. While it seems they are convinced, perhaps on a personal level, that computers should be a valuable tool for treating and training ADHD children, the actual results of their review of the literature are inconclusive at best. They offer the explanation for this vagueness as being, in some cases, poor design; in some cases, lack of real control; and in some cases, poorly collected information.

In the study "The Effect of Virtual Reality Cognitive Training for Attention Enhancement," Cho et al. offer these conclusions in support of using computer-mediated Virtual Reality as an attention enhancement tool:

In this research, our main goal was to validate the use of VR in cognitive training for attention enhancement.
Therefore we developed a cognitive training program in VE as part of the prototype of the Attention Enhancement System. Compared to the non-VR group and the control group, the VR group increased their correct rate and decreased their perceptual sensitivity (d') and response bias (B) significantly, which means that cognitive training in VE with HMD and head tracker is effective in sustaining one's attention, and making one consider sufficiently and distinguish target stimuli more sensitively…. These results prove that VE cognitive training, which is an application of exposure therapy, is more effective in improving the attention span of children and adolescents with behavioral problems and helping them learn to focus on some tasks. We can also say that immersive VR may be appropriate for attention enhancement. (Italics added.)
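The excerpt above reports group changes in correct rate, perceptual sensitivity (d') and response bias. For readers unfamiliar with these signal detection measures, the sketch below shows the standard way d' and two common bias indices are computed from hit and false-alarm rates on a continuous-performance-style task. The rates used are invented for illustration; this is not code from the Cho et al. study.

```python
"""
Standard signal detection measures of the kind reported above: sensitivity d'
and response bias computed from hit and false-alarm rates. Illustrative only;
the rates below are invented, and this is not code from the Cho et al. study.
"""
from statistics import NormalDist
import math

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def detection_measures(hit_rate: float, fa_rate: float) -> dict:
    # Note: rates of exactly 0 or 1 need a small correction before z-scoring.
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa                 # perceptual sensitivity
    criterion = -(z_hit + z_fa) / 2        # bias index c (positive = conservative responding)
    beta = math.exp(d_prime * criterion)   # likelihood-ratio form of response bias
    return {"d_prime": d_prime, "criterion": criterion, "beta": beta}

if __name__ == "__main__":
    # Hypothetical pre/post hit and false-alarm rates for one child.
    print("before training:", detection_measures(hit_rate=0.80, fa_rate=0.30))
    print("after training: ", detection_measures(hit_rate=0.90, fa_rate=0.15))
```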
As part of his article "Historical Review of Computer-assisted Cognitive Retraining," Bill Lynch reviews the software and personal hardware available at the time the article was published, in 2002. He offers the statement: "There are two trends evident in the way in which computers are used in cognitive rehabilitation treatment in recent years: one involves the content of software, and the other involves the use of computers or electronic devices as cognitive aids or prostheses." (451) There follows an extensive discussion of what is currently available, with approximate prices as far as software programs are concerned, and also a discussion of the prostheses. There are no prices given in this part of the discussion.

Park/Ingles offer much comment on the current state of treatment for attention deficits. They are willing to hypothesize that specific-skills training and retraining have hope to offer, but there is a lack of well-designed, well-controlled studies from which the information gained can be scientifically accepted and built on.

S. T. Gontkovsky et al. provide empirical support for the use of computers in cognitive rehabilitation by reviewing the work of a number of research groups. They offer the example of Finlayson et al., who demonstrated significant improvement in "new learning, problem solving, mental flexibility, and psychomotor functioning following completion of a program of microcomputer exercises which was individually designed and systematically implemented for an adult female who had sustained a severe closed head injury." Gontkovsky goes on to say that reports indicated generalization of the gains made when independent neuropsychological testing was done.

In "Applications of Computer-based Neuropsychological Assessment," published in the Journal of Head Trauma Rehabilitation of September 2002, Philip Schatz and Jeffery Browndyke present an overview of where computer-based assessment is and what would make a better future for this technology and for the neuropsychological profession. These writers begin by looking at how computers are presently used in professional practice. They are, of course, used for scheduling, billing, and the word processing that goes with narratives of care and histories. Computers are also being used to some extent for neuropsychological assessment, and that is the primary focus of their article. At present, traditional paper-and-pencil tests have been computerized: tests such as the Peabody Picture Vocabulary Test, the Raven's Colored Progressive Matrices test, and the various versions of the Wechsler Adult Intelligence Scale, up until very recently, when the WAIS-III largely returned to its written format.

The authors also discuss the Halstead-Reitan Neuropsychological Test Battery, which was "originally developed to predict the presence and localization of brain damage." (396) They write that although a lot of attention has been given to this battery as far as computerization is concerned, it hasn't really proven more effective than clinician interpretation. Something called the Category Test, a subset of the Halstead-Reitan Battery, was first computerized in 1975. It had problems and was eventually re-computerized in the 1980s. Improved microcomputer technology "allowed for full automation of the Category Test with the exception of the verbal instructions and prompts necessary for test completion. This version of the Category Test demonstrated an acceptable level of equivalence with the original version of the Category Test." The writers go on to say that many assessment tests have been computerized and suggest accessing work by Bartram and Baayliss or Kane and Kay for more in-depth material on what is actually currently available. These authors, Schatz and Browndyke, include in their presentation the APA guidelines for including the computer in clinical practice. They then move into the real discussion that is the purpose of the article, which is how valuable and/or practical computer-based assessment is in the clinical setting, its limitations, and future possibilities. They offer these features of computer-based formats for consideration in practice:

Computer-based assessment has inherent features that are absent in traditional forms, such as timing of response latencies, automated analysis of response patterns, transfer of results to a database for further analysis, or the ease with which normative data can be collected or compared with existing normative databases. In addition, computer-based assessment measures are better able to provide precise control over the presentation of test stimuli, thereby potentially increasing test reliability. (397)

They go on to offer other advantages to computer-based assessment.

B. Limitations:

In this same study, "Technology Applications for Children with ADHD: Assessing the Empirical Support," it is stated that the most serious limitations at this time are that there are very few well-controlled studies ("a handful") and that these studies had very narrow focuses.

The majority of studies were quasi-experimental at best or were very limited case studies. The main methodological concerns lie in three areas: (1) the lack of rigorous experimental studies, (2) subject selection procedures, and (3) outcome measures. For some studies confounding variables make it uncertain to what extent the positive results reported in these studies are due to other factors and how much, if any, can be attributed to the computer-based training. For example, in many studies students also received behavior modification… or changes in the academic environment…. Subject selection is also problematic. Current best practice in ADHD diagnosis requires a multi-stage, multi-informant assessment procedure…. Few studies met this requirement. In some studies it is uncertain whether an ADHD diagnosis was warranted because of lack of description of diagnostic criteria…, over-reliance on report data…, or because the identification process had not been completed…. The situation is further complicated by the fact that, over the time span covered by this review, the diagnostic criteria for ADHD have changed three times.

Considerations that are not addressed in the Cho et al. study of Virtual Reality as an attention enhancement tool include sustainability of effect, and there were no comments or suggestions as to whether the observed effects on attention could be related solely to the novelty of the method.

Park/Ingles also address the problems of study design and terminology. They comment on the number of claims for progress that are based on anecdotal material from individuals. More rigid study design standards are called for, as are larger samples, and these researchers particularly suggest going outside of the "usual" subject groups, which very frequently are composed entirely of patients with acquired brain injury, in order to (1) better define attention and its deficits, and (2) see if the same methods of skill-specific training are effective for attention deficits created by different causes.
To work almost exclusively with a single cause/client group severely limits generalizability.

In the Gontkovsky study cited in the "Empirical Support" section, the limitations of the results from the head trauma subject are what are seen so frequently: they are anecdotal. Can these results be replicated in other research settings? Can this "individually designed" program be used by a general head trauma population? Does this program, which is not really described, offer any possibilities to other neurologically impaired populations such as ADHD?

In "Applications of Computer-based Neuropsychological Assessment," published in the Journal of Head Trauma Rehabilitation of September 2002, Philip Schatz and Jeffery Browndyke present an overview of where computer-based assessment is and what would make a better future for this technology. As part of their presentation, they look at an array of the criticisms of the technology. The APA is concerned with the failure of some test developers to meet established testing standards. Poorly designed human-computer interfaces have been another concern. There have been those who suggested that the computer presents such a dramatically different presentation that computer-based and traditional administration can never be equal. It is suggested that factors not found in traditional paper-based testing might be disruptive and need to be identified and studied to evaluate the validity of this concern. Another possible limitation is inaccurate timing in some software, where the synchronization between the processor and the monitor introduces a delay or error in timing that can create a problem with consistency. The authors say that other researchers have developed software that has solved this problem and offers near-millisecond accuracy. Another limitation in the use of computers is the very automatic nature of operation, which does not allow the examiner to interrupt or stop the assessment and which can interfere with flexibility in the evaluation. The current paradigms do not allow for collecting spontaneous verbal responses. There are researchers who have issues with trying to take even a laptop to a bedside evaluation or with trying to use one with physically incapable people.

In a report titled "Development and Initial Testing of a Multimedia Program for Computer-assisted Cognitive Therapy," published in the American Journal of Psychotherapy, Jesse H. Wright, Andrew S. Wright, Paul Salmon, Aaron T. Beck et al. bring up the objection to computerized rehabilitation that comes from a study by Stuart and LaRue, "who found that severely depressed people had difficulty using a computer program that attempted to simulate patient-therapist communication." Another area of concern is patient acceptance of multimedia programs as part of their therapy. There is a concern that patients will perceive that therapists "don't want to work with them." There was also difficulty in making the communication between human and machine work.

C. Future Directions:

In general, it would appear that the greatest need in this area, and the most useful activity, would be many well-constructed, well-controlled, carefully conducted studies of a great enough number of properly diagnosed ADHD children, adolescents, and adults to provide results that can be considered definitive.

The authors of "Technology Applications for Children with ADHD: Assessing the Empirical Support" suggest the following "unanswered questions" as guides for future research:

(1) How should computers, and other forms of instruction, be integrated to maximize effectiveness of computer-based instruction? (2) How can teachers and learners use technology most efficiently to produce optimal results? (3) How can teachers select appropriate software and hardware for students with ADHD? (4) What are the short-term and long-term effects of such software or hardware on maintaining student attention and academics?
Another limitation is the highly automatic nature of computer administration, which does not allow the examiner to interrupt or stop the assessment and so reduces flexibility in the evaluation. The current paradigms do not allow for collecting spontaneous verbal responses. Some researchers also question the practicality of bringing even a laptop to a bedside evaluation, or of using one with physically incapacitated patients.

In a report titled "Development and Initial Testing of a Multimedia Program for Computer-assisted Cognitive Therapy," published in the American Journal of Psychotherapy, Jesse H. Wright, Andrew S. Wright, Paul Salmon, Aaron T. Beck and colleagues raise an objection to computerized rehabilitation drawn from a study by Stuart and LaRue, "who found that severely depressed people had difficulty using a computer program that attempted to simulate patient-therapist communication."

Another area of concern is patient acceptance of multimedia programs as part of their therapy. There is a worry that patients will perceive that therapists "don't want to work with them." There has also been difficulty in making the communication between human and machine work.

C. Future Directions

In general, the greatest need in this area, and the most useful activity, would be many well-constructed, well-controlled, carefully conducted studies of a large enough number of properly diagnosed ADHD children, adolescents, and adults to provide results that can be considered definitive.

The authors of "Technology Applications for Children with ADHD: Assessing the Empirical Support" suggest the following "unanswered questions" as guides for future research: (1) How should computers and other forms of instruction be integrated to maximize the effectiveness of computer-based instruction? (2) How can teachers and learners use technology most efficiently to produce optimal results? (3) How can teachers select appropriate software and hardware for students with ADHD? (4) What are the short-term and long-term effects of such software or hardware on maintaining student attention and academics? (5) How can computer programs be designed to train students with ADHD in behavioral and social skills?

In the article "Issues in Diagnosis of Attention Deficit/Hyperactivity Disorder in Adolescents," Nahlik repeats the point that the young-child bias of most testing instruments is probably the single most important issue to be addressed on behalf of the ADHD adolescent who has not been diagnosed earlier. A suggested area for intense future research is testing measures based on computer use that do not rely so heavily on parent/teacher/subject reporting and so would be more objective.

Bill Lynch, in the article "Historical Review of Computer-assisted Cognitive Retraining" in the Journal of Head Trauma Rehabilitation, offers an outline for future research in this field. He says research should address these issues: (1) How does treatment with this software compare with existing treatment approaches with regard to effectiveness and cost? (2) What brain conditions are more likely to respond favorably to this software? (3) What is the optimal time after onset to begin treatment with this software? (4) What is the optimal treatment regimen or schedule for using this software? (454)

An article in CyberPsychology and Behavior, Vol. 5 (November 2002), "The Effect of Virtual Reality Cognitive Training for Attention Enhancement," by a Korean research team, concerns the use of virtual reality to assist in training and enhancing attention in children with ADHD. Baek-Hwan Cho et al., after reporting the results of their research, make several suggestions for improving on the work. The first is to use a subject group with actual diagnoses of ADHD. The next is to conduct the cognitive training over a longer period of time to improve some of their dependent measures. They have developed specific routines for alternating attention and divided attention, but there were problems with the subjects understanding and completing what was expected, so these programs are still being refined.

As have other researchers, Gontkovsky et al. comment on the need for more, and more stringent, studies. They say:

    Additional empirical research clearly is needed to address the aforementioned numerous shortcomings. Investigations involving larger samples of patients, a wider variety of diagnostic groups (e.g. patients with dementia secondary to Alzheimer's disease or Parkinson's disease), and less impaired participants are necessary before firm conclusions can be made with respect to the viability of computer-based cognitive rehabilitation.

This group also suggests more investigation of the possibilities of enhancing cognitive performance in non-impaired subjects, with rigorous studies there as well. There is an implied suggestion of another direction for future study in the comment: "However, the authors contend that given the equivalent findings between the two modes of intervention (computer-based or traditional face-to-face), computer-based cognitive rehabilitation potentially would be more economical and more convenient (if conducted independently in the home) than more traditional forms of intervention."

Suggestions for further investigation from the Park/Ingles study include further empirical research into the efficacy of specific-skills training as a method of cognitive rehabilitation. The perception is that specific-skills training will help improve performance substantially.

Another suggested direction for study is to better determine a definition of attention. Park and Ingles are candid that a lack of consensus as to what constitutes attention and attention deficits may have interfered with identifying relevant studies, owing to differences in terminology. As far as the focus of this paper is concerned, all of this should be framed in terms of computer-aided therapy for those with ADHD.


Returning to Schatz and Browndyke's overview of computer-based neuropsychological assessment in the Journal of Head Trauma Rehabilitation (September 2002), the authors recognize that issues of confidentiality must be addressed, and this would be part of their focus. On the other hand, the ability to quickly share test results and obtain other opinions on a patient's condition is the pay-off for solving that problem. Developing ecologically valid programs, such as Schultheis' Neurocognitive Driving Test, is where computer-aided cognitive assistance is headed. The writers say, "With the emergence of three-dimensional virtual reality technology, the assessment of driving ability is a clear example of an emergent technology." They also mention that the developer of the program has an article in the same journal.

References

*Azari, N. P. & Seitz, R. J. (2000). Brain plasticity and recovery from stroke. American Scientist, 88, 426.
*Baek-Hwan, C., Ku, J., Dong, P. J., Saebyul, K., Yong, H. L., In, Y. K., Jang, H. L. and Sun, K. I. "The Effect of Virtual Reality Cognitive Training for Attention Enhancement." CyberPsychology and Behavior, Vol. 5, 129-137.
Barkley, R. A., Edwards, G., Laneri, M., Fletcher, K., Lori, M. Executive Functioning, Temporal Discounting, and Sense of Time in Adolescents With Attention Deficit Hyperactivity Disorder (ADHD) and Oppositional Defiant Disorder (ODD). Journal of Abnormal Child Psychology, December, Vol. 29, 541-556.
Bellucci, D. M. (2000). The Effectiveness of Computer-Assisted Cognitive Rehabilitation for Patients With Chronic Mental Illness. Dissertation.
*Belyantseva, I. and Lewin, G. (1999). "Stability and Plasticity of Primary Afferent Projections Following Nerve Regeneration and Central Degeneration." European Journal of Neuroscience, Vol. 11, 457-469.
Bradley, M. M., Sabatinelli, D., Lang, P. J., Fitzsimmons, J. R., King, W., Desai, P. "Activation of the Visual Cortex in Motivated Attention." Behavioral Neuroscience, Vol. 117 (2), April 2003, 369-380.
*Biotech Week.
Brain Train. The OECD Observer, Paris, October 2000.
Brue, A. W., PhD, and Oakland, T. D. (2002). Alternative Treatments for Attention-Deficit/Hyperactivity Disorder: Does Evidence Support Their Use?
Bullock, G. R. (2003). Cognitive Rehabilitation: A Method for Improving Sustained and Selective Attention in Adolescents With Attentional Deficits. Dissertation.
Deegan, G. (2003). Discovering Recovery. Psychiatric Rehabilitation, 26:4, 368-376.
Edwards, L. A. (2003). A Comparison of the Newer Treatment Options for ADHD. Formulary, Vol. 38, 38-51.
Erlanger, D. M., Kausbik, T., Brosbek, D., Freeman, J., Feldman, D., Festa, J. (2002). "Development and Validation of a Web-Based Screening Tool for Monitoring Cognitive Status." Journal of Head Trauma Rehabilitation, Vol. 17, 458-476.
*Garlick, D. (2002). "Understanding the Nature of the General Factor of Intelligence: The Role of Individual Differences in Neural Plasticity as an Explanatory Mechanism." Psychological Review, Vol. 109, 116-136.
*Gontkovsky, S. T., McDonald, N. B., Clark, P. G., Ruwe, W. D. (2002). "Current Directions in Computer-assisted Cognitive Rehabilitation." NeuroRehabilitation, Vol. 17, 195-199.
Harvard Medical School, Children's Hospital Boston (2004). Key Advance Reported in Regenerating Nerve Fibers. Biotech Week, 127.
*Hutchinson, J. and Marquardt, T. P. (1997). Functional Treatment Approaches to Memory Impairment Following Brain Injury. Topics in Language Disorders, Nov., Vol. 18, 45-58.
Incredible Horizons: New Brain Research. Advanced Technologies advertising.
Kerns, K. A., Eso, K., Thomson, J. (1999). Investigation of a Direct Intervention for Improving Attention in Young Children With ADHD. Developmental Neuropsychology, 16:2, 273-296.
*Klingberg, T., Forssberg, H., and Westerberg, H. (2002). "Training of Working Memory in Children with ADHD." The Journal of Clinical and Experimental Neuropsychology, Vol. 24, 781-791.
Kube, D. A., Petersen, M. C., Palmer, F. B. (2002). Attention Deficit Hyperactivity Disorder: Comorbidity and Medication Use. Clinical Pediatrics, Vol. 41, 461-467.
*Lozano, Andres (2001). Journal of Rehabilitation Research and Development, Nov/Dec, x-xiv.
*Lynch, B., PhD (2002). Historical Review of Computer-assisted Cognitive Retraining. Journal of Head Trauma Rehabilitation, 18, 446-457.
Mickey, D. L., et al. ( ). Brain Injury and Cognitive Retraining: The Role of Computer Assisted Learning and Virtual Reality. Madison, WI: Neuropsychological Associates. Internally published report.
*Murphy, M. J. (2003). Computer Technology for Office-based Psychological Practice: Applications and Factors Affecting Adoption. Psychotherapy: Theory, Research, Practice, Training, Vol. 40, 10-19.
Nolan, E. E., Volpe, R. J., Gadow, K. D. and Sprafkin, J. (1999). Developmental, Gender and Comorbidity Differences in Clinically Referred Children with ADHD. Journal of Emotional and Behavioral Disorders, 7, 11-19.
*Olfson, M., Gameroff, M. J., Marcus, S. C., and Jensen, P. S. (2003). "National Trends in the Treatment of Attention Deficit Hyperactivity Disorder." The American Journal of Psychiatry, June, Vol. 160, 1071-1077.
*Park, N. W., and Ingles, J. L. (2001). "Effectiveness of Attention Rehabilitation After an Acquired Brain Injury: A Meta-Analysis." Neuropsychology, Vol. 15, 199-210.
Pomeroy, V. and Tallis, R. (2002). "Neurological Rehabilitation: A Science Struggling to Come of Age." Physiotherapy Research International, Vol. 7, 76-89.
*Roland, A. S., Umback, D. M., Stallone, L., Naftel, A. J. (2002). "Prevalence of Medication Treatment for Attention Deficit-Hyperactivity Disorder Among Elementary School Children in Johnston County, North Carolina." American Journal of Public Health, Feb., Vol. 92, 232-234.
*Root, R. W. and Resnick, R. J. (2003). An Update on the Diagnosis and Treatment of Attention Deficit/Hyperactivity Disorder. Professional Psychology: Research and Practice, Vol. 34, 34-41.
Sandford, J. A. ( ). "An Analysis of Computerized Cognitive Training & Neurofeedback in the Treatment of ADHD." On-line review of Captain's Log® Software.
Sanjiv, K., M.D. & Thaden, E. (2004). Examining Brain Connectivity in ADHD. Psychiatric Times, Jan. 2004, 41.
*Schatz, P. and Browndyke, J. (2002). "Applications of Computer-based Neuropsychological Assessment." Journal of Head Trauma Rehabilitation, Vol. 17, 395-410.
*Skoyles, J. R. (1999). "Neural Plasticity and Exaptation." American Psychologist, Vol. 54, 438-439.
Stuss, D. T., Winocur, G., and Robertson, I. H. (Eds.). Cognitive Neurorehabilitation. Cambridge: Cambridge University Press.
Stein, D. G., and Hoffman, S. W. (2003). Concepts of CNS plasticity in the context of brain damage and repair. The Journal of Head Trauma Rehabilitation, Jul/Aug, 18, 317-342.
Surgeon General, Mental Health Report.
*Thomas, M. S. C. (2003). Limits on Plasticity. Journal of Cognition & Development, 4, 99-126.
Weizmann Institute (2004). Scientists Reveal Key Part of Nerve Regeneration Mechanism. Biotech Week, Feb 11, Atlanta, 622.
Weiss, M. and Bailey, R. (2003). "Advances in the Treatment of Adult ADHD: Landmark Findings in Non-stimulant Therapy." Rogers Medical Intelligence Solutions.
Wright, J. H., Wright, A. S., Salmon, P., Beck, A. T., et al. "Development and Initial Testing of a Multimedia Program for Computer-assisted Cognitive Therapy." American Journal of Psychotherapy.
Xu, C., Reid, R. and Steckelberg, A. (2002). Technology Applications for Children with ADHD: Assessing the Empirical Support. Education and Treatment of Children, 25:2, 224-248.
Ylvisaker, M. and Feeney, T. (2002). Executive Functions, Self-regulation, and Learned Optimism in Pediatric Rehabilitation: A Review and Implications for Intervention. Pediatric Rehabilitation, Vol. 5, 51-70.
*Zappitelli, M., Pinto, T., Grizenko, N. (2001). "Pre-, Peri-, and Postnatal Trauma in Subjects With Attention Deficit Hyperactivity Disorder." Canadian Journal of Psychiatry, Vol. 46, Aug., 542-548.


Feasibility of overcoming the technological barriers in the construction of nanomachines

Günter Carr (MSc)
Master of Science and candidate to PhD in Physics at the School of Doctoral Studies, Isles Internationale Université (European Union)

Jeffrey Dessler (PhD)
Chair of Nanotechnology Studies of the Department of Engineering and Technology at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

Until the late 1980s, the science of molecular-size machines and their engineering design and construction was not considered practicable. According to the leading exponents of that time, nanotechnology was neither feasible nor viable, because the constituents of a nano-molecular device, atoms, are totally different in structure from the mechanical objects of everyday life. The essential components of engineering mechanics, such as cogwheels, gears or motors, could not be imagined as being formed from atoms, which were characterized as fuzzy and unsubstantial entities with no definite position.

Key words: Nanotechnology, Nanomachines, Mechatronics, Physics, Engineering


Nanomachines

Until the late 1980s, the science of molecular-size machines and their engineering design and construction was not considered practicable. According to the leading exponents of that time, nanotechnology was neither feasible nor viable, because the constituents of a nano-molecular device, atoms, are totally different in structure from the mechanical objects of everyday life. The essential components of engineering mechanics, such as cogwheels, gears or motors, could not be imagined as being formed from atoms, which were characterized as fuzzy and unsubstantial entities with no definite position. Erwin Schrödinger, a leading quantum theoretician, regarded particles not as permanent entities but as instantaneous events, and drew the conclusion that atoms could no longer be regarded as "identifiable individuals". Werner Heisenberg, with extreme pessimism, described atoms as "a world of potentialities or possibilities" rather than "of things and facts". (Is the future nano?)

Such views convinced the scientists of that time that nanotechnology was an unattainable objective. During the second half of the 20th century, however, some scientists ventured to explore the prospects of the subject. The effort began with the coining of the term "molecular engineering" by Arthur von Hippel, an electrical engineer at the Massachusetts Institute of Technology (MIT), during the 1950s, together with his optimistic predictions about the possibility of constructing nano-molecular devices. His contemporary, the Nobel laureate physicist Richard Feynman, revolutionized the concept through his lecture "There is Plenty of Room at the Bottom". (Is the future nano?) K. Eric Drexler set up the Foresight Institute in Palo Alto, California, in 1986 to popularize the concept of building materials and products with atomic precision; scientists now consider it the pioneer organization for the development of nanotechnology.

Questions still arise about how far the development of nanotechnology has actually progressed. Even though much has been achieved in the field, the early dreams have not yet been fulfilled. Developments and intensive research have, however, revealed new features of atoms, such as their robustness and ability to exist independently, which allows them to be isolated and counted in units. This feature makes atoms credible as reliable parts of working nano-devices. We now have the capability to move atoms around and place them in desired locations. These achievements, made in less than two decades, have led to Nobel Prize-winning contributions in the field. The remarkable work of Dehmelt at the University of Washington in Seattle is noteworthy: it revealed that even subatomic particles are stable enough to be isolated in magnetic traps for months at a time. (Is the future nano?)

Table 1. Prizes for elucidating atoms and subatomic particles

Nobel Prize | Winners                                              | Achievement
1986        | Gerd Binnig, Heinrich Rohrer                         | Scanning tunnelling microscope
1989        | Hans Dehmelt, Wolfgang Paul                          | Traps to isolate atoms and subatomic species
1992        | Georges Charpak                                      | Subatomic particle detectors
1994        | Clifford Shull, Bertram Brockhouse                   | Neutron diffraction techniques for structure determination
1997        | Steven Chu, Claude Cohen-Tannoudji, William Phillips | Methods to cool and trap atoms with laser light

(Source: Is the future nano?)


Devices constructed from individual atoms are called nanomachines. According to some researchers, nanomachines will in future be able to enter living cells to combat disease. Nanomachines that can reorganize atoms in order to make new objects could also be built, the researchers say, and if they succeed, nanomachines could be used to fight poverty by converting, quite literally, dirt into food. As the terminology indicates, nanomachines are incredibly small devices: they are constructed from individual atoms and their size is measured in nanometers. (Nanomachines: Nanotechnology's Big Promise in a Small Package)

There is no macroscopic analogue for the nanomachine. By atomic-scale "pick and place," a nanomachine would make any structure, including itself; that is, a set of nanoscale pincers would pick individual atoms from their surroundings and place them where they should go. (Nanotechnology: Nanomachines) The futurist and visionary K. Eric Drexler made the capability of nanomachines famous during the 1980s and 1990s. Drexler framed the concept of nanotechnology in 1986 and made it public in his book Engines of Creation. He visualized the possibility of efficient construction of objects at the molecular level with the help of microscopic machines, which were predicted to be the solution to many ailments of the present world. (Book Review: Unbounding the Future: The Nanotechnology Revolution by K. Eric Drexler and Chris Peterson with Gayle Pergamit)

The production of the "assembler" is the eventual goal of nanomachine technology, according to Drexler. The nanomachine assembler is intended to manipulate matter at the atomic level. The assembler would be built with small "pincers" and used to move atoms from existing molecules into new structures. The idea is that the assembler should fabricate useful items from raw material by reorganizing atoms. The theory goes that if one scoops dirt into a vat and is patient, a team of nanomachine assemblers can change the dirt into an apple, a chair, or even a computer. A molecular schematic of the object to be built would be loaded into the memory of the machines in the vat; they would then fabricate the chosen item by methodically rearranging the atoms contained in the dirt. (Nanomachines: Nanotechnology's Big Promise in a Small Package)

Though some primordial devices have been tested, nanomachines are mainly in the research and development phase. One example is a sensor, with switches approximately 1.5 nanometers across, capable of counting specific molecules in a chemical sample. Medical technology is the field where nanomachines will first find applications, recognizing pathogens and toxins in samples of body fluid. (Nanomachine) Nanomachines could be used in pharmaceuticals to watch over symptoms of change in a patient, to treat cancer and AIDS, and to operate on areas that are difficult to reach surgically. (The Ethics of Nanotechnology) They could be used to produce carbon fibers as strong as diamond and less expensive than plastic. Another important aspect of nanomachines is that this technology is comparatively less expensive, cleaner and easier to maintain than other technologies presently available. (Book Review: Unbounding the Future: The Nanotechnology Revolution by K. Eric Drexler and Chris Peterson with Gayle Pergamit)


[Figure: Respirocytes with red cells (by Vik Olliver, 1998). Source: The Ethics of Nanotechnology, retrieved from http://cseserv.engr.scu.edu/StudentWebPages/AChen/ResearchPaper.htm]

For decades, scientists have been preoccupied with the development of nanomachines. The discovery of bio-molecular motors such as myosin, kinesin and dynein, roughly twenty years ago, was the starting point for the dream of constructing machines with the capacity to copy, replace or work in concert with existing bio-molecular machines. Understanding of the biophysical and biochemical properties of bio-molecular motors has been enhanced by present-day technologies that help us watch, manipulate and analyze particles or molecules at the nanometer scale. The bio-molecular motors are proteins that generate forces and movements within cells, that is, they transform chemical energy into mechanical energy. Their functions are diverse: some are responsible for the rotation and mobility of cilia, others for DNA replication, organelle transport, and so on. (Bio-Molecular Motors Research in Japan: Asian Technology Information Program)

As an example, the mechanical activity of bio-molecular motors is sustained by the hydrolysis of ATP (adenosine triphosphate), the "fuel" that provides energy for the processive movement of kinesin along microtubules or the contraction of the actin/myosin complex in muscles. The workings of bio-molecular machines cannot be read off by analogy with artificial machines, because they are not simple. Molecular machines and proteins have a dynamic structure and sizes in the nanometer range. Moreover, the input energy of molecular machines is comparable to thermal energy, and even when exposed to thermal agitation they function at very high efficiency.

Artificial machines stand in contrast to molecular machines, since artificial machines use much more than thermal energy in order to work quickly, deterministically and precisely. On such grounds, an understanding of the dynamic properties of proteins and of their interactions with one another is essential. Single-Molecule Detection (SMD) techniques, developed to directly observe the dynamics of proteins and molecular machines, have been extended to encompass a wide variety of biological sciences. SMD techniques, in combination with developments in nanotechnology, will be influential in directing further research on the development of nanomachines and of bio-molecular machines. (Bio-Molecular Motors Research in Japan: Asian Technology Information Program)

These machines mostly consist of proteins synthesized out of carbon, the chief ingredient of all living things. As Drexler explained, these machines, called assemblers, will be built one atom at a time, with several hundred able to fit within a single cell. That these man-made nano-assemblers will add innumerable variety to the countless life forms currently known is agreed on both sides of the debate. This is due to a number of appealing differences between the assemblers and the biological cells on which they are based. One difference worth mentioning is that the replication instructions within these hybrid biological/mechanical machines, to be designed by engineers, will be composed of computer code instead of DNA.


However, these machines would still look and perform like existing cell types, and this is what Drexler and other engineers and nanobiologists foresee. Viruses are composed of proteins and coding material, and their replication is possible only within a host organism; this nature of viruses will be built into some assemblers. A virus-like assembler, after entering a cell, could use the freshly introduced coding information to instruct the cell's own internal machinery to replicate. Other assemblers, similar to bacteria, would carry within their own firm boundaries all the materials required for biosynthesis, and hence would be independent. They might also make alterations, improving or redesigning the original cells by becoming organelles within eukaryotic cells, and in this respect too they resemble bacteria. (Nanomachines and biological systems: Utopia or Dystopia)

Today's surgery, using a massive blade to cut through a crowd of cells and killing thousands, will appear pretty barbaric from a cell's point of view. To stitch up the damage, a thick cable is towed in, and for healing to take place it is left to the cells to discard their dead and multiply. The administration of a drug to a patient, from the cell's point of view, can be visualized as follows: the drug molecules knock around pointlessly until they recognize specific molecules by "touch" and settle into their target molecule. Compare this to a molecular machine equipped with a nano-computer that holds data on the structure of all healthy tissue and can feel, prepare, and act at this level.

It is possible to build repair machines the size of a bacterium, which can enter and leave cells, can wipe out intruders in the blood, and can even check the DNA itself for mistakes. Nowadays, when a cell is damaged, doctors depend on drug molecules and the cell's capability to mend itself, even though this process does not always bring the patient back to health. In future, doctors could restore cells that have been damaged to the point of inactivity with the help of nano-devices that repair the smallest components of the cell. These machines could reconstruct injured molecules inside the cell, getting to the base of the problem. (The Promise of Nanotechnology)

Nanomachines could also provide support to the immune system, because they can fight natural nanomachines, the viruses, and because the body's own immune system has limitations: not remembering the shape of its enemies, failing to identify malignant cells, and delaying full development of the immune reaction. Nanomachines could make a mammoth contribution to ageing, act on bacteria, influence tumors and help remodel damaged tissue. (Nanotechnology and Medicine) The daily work of the body is done by molecular machines. Muscles drive our motions while we chew and swallow; the bundles of molecular fibers enclosed in muscle fibers contract by sliding against one another. The molecular machines in the stomach and intestines, called digestive enzymes, break down the complex molecules in food into smaller molecules, which are used as fuel or as building blocks. (Unbounding the Future: The Nanotechnology Revolution)

Useful molecules are carried to the bloodstream by the molecular devices found in the outer layer of the digestive tract. The molecular storage device called hemoglobin takes up oxygen in the lungs. The heart, driven by molecular fibers, pumps blood loaded with fuel and oxygen to the cells. Contraction in the muscles is based on sliding molecular fibers and is driven by fuel and oxygen. In the brain, the molecular pumps that keep nerve cells working are themselves nanomachines, as are the molecular machines in the liver that build and break down a whole mass of molecules. Such processes are repeated in other parts of the body. (Unbounding the Future: The Nanotechnology Revolution)

Nanomachines that can make replicas of themselves are another objective of nanotechnology. A machine that can reorganize atoms in order to build new materials should also be able to construct a replica of itself. If this objective is achieved, products made by nanomachines will be extremely low-priced, because the technology, once fine-tuned, will be self-replicating and will not need specific materials that might be uncommon and therefore costly. Arthur C. Clarke's forecast is that nanotechnology will signal an end to traditional financial systems.


A world of stimulating promises will open up if scientists are able to design nanomachines with the capacity to reorganize atoms. Advanced treatments for many diseases could be provided by nanomachines modeled for different purposes. Injecting medical nanomachines programmed to recognize and disassemble cancerous cells into the bloodstream of cancer victims could provide a rapid and efficient treatment for all types of cancer. Damaged tissue and bones could be mended by nanomachines. (Nanomachines: Nanotechnology's Big Promise in a Small Package)

By building molecular support structures from reassembled nearby tissue, they could even be used to toughen bones and muscle tissue. Medical science would speedily adopt treatments, capable of influencing human cells at the minutest level, for most human illnesses. These treatments would be cheap and accessible to everyone, because nanomachines would be designed to make copies of themselves. If nanotechnology proves effective, problems relating to food shortages and hunger could be solved: since nanomachines could have the capacity to change almost anything into food, this food could be used to fight hunger worldwide.

Food produced by nanomachines would be less expensive and available to all. As with food, which matters most for an ever-increasing population, nanomachines would be able to produce other goods as well: clothing, houses, televisions, cars and computers would become available for less money. And since nanomachines would change all garbage into new, usable goods, there would be no need to worry about the garbage produced. (Nanomachines: Nanotechnology's Big Promise in a Small Package)

Another advantage of nanomachines is that individual units would require less energy to operate. Nanites would exist for centuries before collapsing, so robustness is another potential asset. (Nanomachine)

The detection of toxic chemicals in the environment and the measurement of their concentrations is another prospective application. High operational speed is possible due to the microscopic size of nanomachines (Nanomachine); all machines and systems are likely to work faster as their size is reduced. Nanotechnology could also address environmental problems like ozone depletion and global warming. Clouds of nanomachines released into the upper atmosphere could methodically destroy the ozone-depleting chlorofluorocarbons (CFCs) and build new ozone molecules out of water and carbon dioxide. As water and carbon dioxide both contain oxygen, the atmosphere contains an abundant supply of oxygen atoms, and ozone (O3) can be built out of three oxygen atoms. Teams of dedicated nanomachines could be engaged to destroy the surplus CO2 in the lower atmosphere while the ozone-building teams are at work in the upper atmosphere. CO2 is a heat-trapping gas and has been recognized as one of the major contributors to global warming. To restore the planet's ecosystem and stop global warming, surplus CO2 has to be removed, which could be done by nanomachines. All species on earth would profit from this.
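As a purely illustrative piece of atom bookkeeping, and not chemistry proposed in the source, the claim that the oxygen for ozone could be drawn from water or carbon dioxide can be written as balanced equations; both transformations would require a large energy input, which the passage leaves to the hypothetical nanomachines:

$$3\,\mathrm{H_2O} \rightarrow \mathrm{O_3} + 3\,\mathrm{H_2} \qquad\qquad 3\,\mathrm{CO_2} \rightarrow \mathrm{O_3} + 3\,\mathrm{CO}$$

Each equation simply conserves atoms (three oxygen atoms per O3 molecule); it says nothing about whether such a process is practical.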
A new era for humanity will begin once nanotechnology is perfected and nanomachines are produced. This would quickly lead to the end of hunger, illness, and environmental problems. (Nanomachines: Nanotechnology's Big Promise in a Small Package)

The entry of nanotechnology into the mainstream, in the form of nano-size particles, is already evident in products of everyday use such as sunscreen, paint, cosmetics, and industrial coatings, while more extensive uses await in the near future. In pharmaceuticals, experiments aim to minimize the side effects of drugs by preparing accurate combinations with the help of nanoparticles. Eradication of diseases like cancer would involve coating the receptors of cells with nanoparticles of drugs that inhibit the reproductive cycle. The use of nanosensors to check the health of astronauts is being explored by NASA and the University of Michigan. The aim is to explore a method of infusing the blood cells of astronauts for continuous monitoring of exposure to radiation or other infectious agents.


Dendrimers and synthetic polymers with a diameter of less than 5 nm are the constituents of these devices. The approach involves infusing nanosensors into white blood cells to detect the symptoms of biochemical changes due to radiation. Fluorescent tags are attached so that the dendrimers glow at the location of proteins related to cell death. A retinal scanning device is under development, with a laser that detects fluorescence from lymphocytes as they pass through the capillaries behind the retina. The use of nanosensors avoids the taking of blood samples and the implantation of larger sensors, which can result in inflammation or infection. (A king-size future for nanosize machines: nanotechnology researchers are laying the groundwork for atomic-scale engineered systems)

The concept of nanomachines, considered a bold step in mechanics, has attracted many controversial debates. The developments, however, have restored a degree of confidence among the public, notwithstanding the many hurdles still to be overcome before fruitful implementation. Remarkable achievements in this direction are due to recent progress toward the construction of the first molecular assemblers. Two IBM researchers demonstrated such progress with the scanning tunnelling microscope, spelling out the company's initials on the atomic scale using 35 xenon atoms and proving that single atoms can be moved with great accuracy. Such developments could lead to a second industrial revolution within a decade. (A king-size future for nanosize machines: nanotechnology researchers are laying the groundwork for atomic-scale engineered systems)

Since there are various possible methods for the construction of nanomachines, it is difficult to predict the timeline for this revolution. Research in the field is being fueled by developments in the related fields of the computer industry, genetic engineering, microminiaturization, physics, and chemistry. Still, it is difficult to predict the exact method of construction of the first molecular assemblers, owing to a host of technological difficulties. The development of the field does not seem any more impossible than the project of sending men into space and to the moon appeared in 1959. The participation of the private sector in fuelling the development of nanotechnology gives the project an advantage over the project of putting a man on the moon. Its propounders, like Drexler, predict the replacement of conventional factories, as well as the fossil fuels they use, by making solar cells more efficient, cheaper and tougher. The evolution of computing power could also reach its goal by means of nanotechnology.

Here the evolution of nanotechnology, capable of building devices many times faster, more efficient and cheaper, would surpass the limiting miniaturization of existing electronic methods. This would lead to faster computers as small as a single cell. The formulation of nano-computers would revolutionize the objects of everyday life. Nanomachines could create tiny robots only a few billionths of a meter in size, something that still seems impractical today, yet they are predicted to bring revolutionary change to our lives in the near future. Unclogging arteries and repairing damaged cells or DNA would all be possible with the help of nanomachines. Nanorobots delivered through a simple mouthwash would be able to identify and destroy harmful bacteria and cure plaque and tartar. It would be possible to prepare materials stronger and lighter than steel through supercomposites constructed out of nanostructural materials, bringing revolutionary changes in the construction of spaceships and in the field of space research and travel. (A king-size future for nanosize machines: nanotechnology researchers are laying the groundwork for atomic-scale engineered systems)

Drexler and others in the field see the power of the free market as a necessity. Of all the possible applications, the use of nanotechnology is propounded most extensively in the field of medicine.


Drawing on the body's own "natural molecular machines," such as digestive enzymes and hemoglobin, the propounders predict the application of nanomachines in the immune system to destroy harmful viruses and bacteria even more effectively than natural white blood cells do. Applications of nanomachines in this field can be extended to the cleaning of blocked arteries, the repair of damaged cells, and the re-growing of organs and limbs. It is even predicted that the natural aging process could be halted at the stage of fullest development, and that the eternal teenage scourge could be eliminated through a nanomachine cream. The only limitation in the field seems to be the replacement of the automobile by nanomachine-built underground railways. (Book Review: Unbounding the Future: The Nanotechnology Revolution by K. Eric Drexler and Chris Peterson with Gayle Pergamit)

Nanotechnology is presently in its infancy. It can be compared to a lit match about to start a bonfire of strange applications which, with the devotion of engineers, will send revolutionary sparks through civilization. Civilization is on the way to developing a field of science and engineering that will have far-reaching effects on other existing fields, and nanotechnology appears to be one of the most promising avenues yet identified for the future success and glory of mankind. (The Future of Engineering: Nanotechnology: A Mission Into the Microscopic)

Nanomachines constitute a future hope for humanity. Curing diseases, fixing the atmosphere and fully reducing poverty are no longer remote ideas but could become reality with the help of nanomachines. Overcoming the technological barriers in the construction of nanomachines would bring all these goals within humanity's reach. The thirst in this never-ending race, however, warrants caution. The irresistible temptation to build self-replicating machines cheaply could endanger the planet with machines overriding man; it is feared that the machines might devour the entire planet in the race to produce more and more machines. Nevertheless, the reality of the benefits that nanomachines promise in manipulating matter is noteworthy and deserves pursuit of the technology with much enthusiasm.

References

Bio-Molecular Motors Research in Japan. Asian Technology Information Program (ATIP), 2002. Retrieved from www.atip.org/ATIP/NANO/reports/atip02.006.pdf Accessed on 26 April, 2004.
Chen, Andrew. The Ethics of Nanotechnology. Retrieved from http://cseserv.engr.scu.edu/StudentWebPages/AChen/ResearchPaper.htm Accessed on 26 April, 2004.
Frischauf, Norbert. Nanotechnology and Medicine. 02 May, 2002. Retrieved from http://www.itsf.org/resources/factsheet.php?fsID=175 Accessed on 26 April, 2004.
Jogi, Vikram. The Ethics of Nanotechnology. Retrieved from www.cs.wmich.edu/~elise/courses/cs603/Presentation/Nanotech_Presentation_022304.ppt Accessed on 26 April, 2004.
Nanomachine. 01 December, 2001. Retrieved from http://whatis.techtarget.com/definition/0,,sid9_gci514355,00.html Accessed on 26 April, 2004.
Nanotechnology: Nanomachines. Retrieved from http://www.elcot.com/nano/nanomachine.htm Accessed on 26 April, 2004.
Is the future nano? Retrieved from http://www.chemsoc.org/chembytes/ezine/2000/rouvray_dec00.htm Accessed on 26 April, 2004.
Korrane, Kenneth J. A king-size future for nanosize machines: nanotechnology researchers are laying the groundwork for atomic-scale engineered systems. (Future Technology) Machine Design, Sept 19, 2002. Retrieved from http://www.findarticles.com/cf_dls/m3125/18_74/92458835/p2/article.jhtml?term= Accessed on 26 April, 2004.


Person, Lawrence. Book Review: Unbounding the Future: The Nanotechnology Revolution by K. Eric Drexler and Chris Peterson with Gayle Pergamit. The Freeman: Ideas on Liberty, November, 1992. Retrieved from http://www.fee.org/vnews.php?nid=2664 Accessed on 26 April, 2004.
Plotnick, Debbie. Nanomachines and biological systems: Utopia or Dystopia. 07 January, 2002. Retrieved from http://serendip.brynmawr.edu/biology/b103/f00/web2/plotnick2.html Accessed on 26 April, 2004.
Silby, Brent. Nanomachines: Nanotechnology's Big Promise in a Small Package. Department of Philosophy, University of Canterbury, 2002. Retrieved from http://www.def-logic.com/articles/nanomachines.html Accessed on 26 April, 2004.
Tikoo, Sonia. The Future of Engineering: Nanotechnology: A Mission Into the Microscopic. The National Academy of Engineering. Retrieved from http://www.engineergirl.org/NAE/CWE/egedu.nsf/weblinks/ESER-5KRS2R?OpenDocument Accessed on 26 April, 2004.
Unbounding the Future: The Nanotechnology Revolution. Retrieved from http://www.foresight.org/UTF/Unbound_LBW/chapt_10.html Accessed on 26 April, 2004.
Wisz, Michael S. The Promise of Nanotechnology. Retrieved from http://www.rso.cornell.edu/scitech/archive/95spr/nano.html Accessed on 26 April, 2004.


Social Science Section

Content

Society's Identity Search through Art
This paper proposes identification of social personality by examination of two works of art in order to discuss how humans define art and the meaning of art.
Simone Rothschild, Jim Curtsinger

Analysis on Modernism and Literary Impressionism
A comprehensive analysis of the structure and texture of the beginning of literary impressionism based on two works of Ford.
Donatella Petri, Anna Richardson

Comparative Analysis on Contrasting Approaches of Psychology dealing with Language
This paper assesses three different articles on language, cognition and psychology and discusses just what the human mind really is.
Anne Marie Rouge

Understanding of Religion and the Role Played by Cultural Sociology in the Process
Looks at the contribution that cultural sociology has made to the understanding of religion in the era of globalisation and Islamic fundamentalism.
Sheila Vaugham

Policy of Preemption or the Bush Doctrine
An in-depth exploration of the Bush doctrine and the controversy it has engendered.
Ana Dresner

Department's Reviewers

Deputy Head of Department – Humanities – Prof. Anuar Shah
Chair of Arts and Architecture – Prof. Jim Curtsinger
Chair of Languages and Literature – Prof. Anna Richardson
Chair of Law and Political Science – Prof. Klaus Aggemann
Chair of History and Archaeology – Prof. Maria Lopez-Cuellar
Chair of Philosophy and Religion – Prof. Helmut Kröll
Deputy Head of Department – Social Sciences – Prof. Ivanna Petrovska
Chair of Anthropology – Prof. Alexandre Rostonov
Chair of Geography – Prof. William Teffanberger
Chair of Education and Communication – Prof. Arelli Santaella
Chair of Psychology – Prof. Roger Allen
Chair of Sociology – Prof. Martin Aston


Society's Identity Search through Art

Simone Rothschild (MA)
Master of Arts and candidate to PhD in Plastic Arts at the School of Doctoral Studies, Isles Internationale Université (European Union)

Professor Jim Curtsinger (PhD)
Chair of Arts and Architecture of the Department of Social Science at the School of Doctoral Studies, Isles Internationale Université

Abstract

This paper proposes identification of social personality by examination of two works of art in order to discuss how humans define art and the meaning of art.

Manet's painting was firmly rooted in the Paris of the mid-19th century. It is thus not difficult for us to get a glimpse of what society was like in that time and place by looking at Manet's paintings. Looking at the Venus of Willendorf, however, does not tell us anything about the society that it is a relic of. It thus requires us to use our intellects and our imaginations in order to piece together an explanation that might satisfy us personally, but can never be held up as a firm example, as we can with Manet's paintings. Thus, it can be said that the relationship between art and society is in fact conditioned by a third factor, which has been the main subject of our inquiry: that of history. Without all the written records of the 19th century that have been kept, we might have no way of knowing what we are looking at when we study a Manet painting. This truth comes to the surface when we look at the Venus, which comes from a period that pre-dated all known forms of writing.

Key words: Art, Society, Painting, Sculpture


Society's Identity Search through Art

The Venus of Willendorf: Women in the Stone Age

Arguably the oldest artifact known to man, the Venus of Willendorf is not a man at all, but a woman, and a very obese one at that. The title "Venus" is used to describe her in an ironic fashion, as there is nothing beautiful about her from a modern-day perspective. This is perhaps why more and more art history books these days label her as the "Woman of Willendorf." This linguistic shift allows us to analyze the Venus in a new light. Rather than viewing her as some sort of goddess or fertility symbol, as she has commonly been viewed by scholars since her discovery in the early 20th century, it is now possible to consider her as simply a "woman," that is, a mundane figure taken from life, rather than rendered in evocation of some imaginary, unseen force. In keeping with tradition, however, and to avoid confusion, we will retain the name Venus of Willendorf throughout this paper.

The Venus of Willendorf consists of a woman with a large stomach that hangs out but does not hide her pubic region. She is tiny, small enough to hold in the palm of one's hand. When you turn her around, the Venus has large buttocks that are simultaneously quite flat. She boasts quite large thighs, which are oddly pressed together at the knees. The forearms of the Venus, however, are quite thin. They hold the upper half of her breasts, and it appears that she is wearing bracelets. The Venus's breasts are soft and full. You cannot make out any nipples on the statue.

The interpretation that the Venus was associated with fertility has to do with more than just her obesity. It also has to do with the attention to detail the sculptor showed to her breasts and her genital area (Giedion 1962, p. 178). The labia of the statue's vulva are exquisitely detailed and made to stand out to an unnatural extent, while the statue has no discernible pubic hair. The breasts on the statue are large and her stomach is round; thus, many scholars have interpreted her as being pregnant, and as a tool for fertility, a symbol of female procreativity.

Like other "Venus" figurines of the Paleolithic era, the Venus of Willendorf has no face. For this reason, many scholars have argued that she was not meant to refer to a particular person; instead, her anonymity adorns her with a universal meaning of some sort. In other words, they feel that the original usage of the Venus was as a symbol, rather than a reminder of a particular woman the artist may have known.

Where her face should be, there are rows of plaited hair that have been wrapped around her head. "Close examination, however, reveals that the rows are not one continuous spiral but are, in fact, composed in seven concentric horizontal bands that encircle the head, with two more half-bands below at the back of her neck" (Witcombe 2003). Soffer et al. have suggested that the Venus is actually wearing a woven hat or cap (2000, pp. 37-47). When viewed from the side, however, it seems clear that the Venus indeed has hair. Viewed from the figure's profile, she seems to be looking down with her chin sunk to her chest. Her hair is longer at the back of her head, thus dispelling the notion that she is wearing a sort of knit cap. Her hair is woven into a total of seven thick plaits.

Witcombe (2003) notes the significance of the Venus's hair. In figures dating from the Paleolithic era, it is very rare to find this much attention paid to details of the hair. It has thus been supposed that the hair on the Venus is loaded with a special meaning; otherwise, the artist would not have bothered putting so much obvious effort into it. But not having enough knowledge, we can only make suppositions about the role of hair on the Venus from our knowledge of later cultures.

Hair has traditionally been thought of as the seat of the soul. It is also considered to be a source of strength, health, and beauty. It is not so much the length, style, or color of the hair, but, oddly enough, the odor that often elicits erotic attraction. According to Witcombe (2003):


    The erotic attraction of the odor of hair is obviously rooted in the sense of smell, which plays a considerable role in sexual relations. Though greatly diminished in the modern world, smell was paramount in establishing an erotic rapport with a mate, as it still is among animals. In this context, the hair of the woman or goddess represented in the "Venus" of Willendorf figurine may have been regarded as erotically charged as her breasts and pubic area.

One of the other odd characteristics of the Venus, as of other such figurines from the Paleolithic era, is that she does not have any feet. She most likely never had feet, as it was reported that she was perfectly intact upon discovery in 1908. If indeed the Venus of Willendorf was meant to be used as a fertility idol, then the absence of feet makes sense, in that only those parts of the female body considered useful for such functions were required. In any case, it is highly unlikely that the Venus of Willendorf, or any other such figurine from the Paleolithic era, was intended to stand up on its own.

As we stated earlier, the Venus of Willendorf was clearly intended to be held in the palm of one's hand and to be regarded from all sides (Giedion 1962, p. 437). "When seen under these conditions, she is utterly transformed as a piece of sculpture. As fingers are imagined gripping her rounded adipose masses, she becomes a remarkably sensuous object, her flesh seemingly soft and yielding to the touch" (Witcombe 2003).

While we may rightfully imagine what it must feel like to hold the Venus of Willendorf, trying to imagine what purpose she served is another issue altogether. If she is not a fertility figure or meant to evoke some sort of goddess, could it be that she was merely carved for decoration? Or to serve as a toy for a child? Given the intricacy of the carving and the design, it is highly unlikely that the second question has an affirmative answer. For why would the carver then wish to put so much effort into the Venus? All of the detailed work on the Venus results in a quite realistic representation of an obese woman, the likes of which are rare for the Paleolithic era.

The assertion that she is meant as a sort of fertility goddess is also doubtful, however, as the figure does not represent a typically pregnant woman. Rather, she resembles a woman who has grown fat on eating a lot of meat, fat, and marrow, and who has led a sedentary lifestyle. After all, it is for these reasons, the consumption of unhealthy food combined with a lazy lifestyle, that so many Americans are obese today. But seeing the condition represented in a Stone Age relic is odd indeed, considering the fact that people during this era lived as hunters and gatherers, and thus would have been very fit and thin. As they constantly had to track down their own food, they could not afford the rich, sedentary lifestyle that is so common in our fast-paced, technologically advanced world.

The only way that the Venus could have gotten so fat is if she were living a particularly privileged lifestyle. But as archaeological research has taught us, this was very rare, if not impossible, during the Stone Age. If we imagine for a moment, however, that she did enjoy some sort of special status and did not have to hunt alongside normal people, this still does not go far in telling us much about the status and role of such women in the Paleolithic era. We can only assume that they thoroughly enjoyed the opportunity not to have to work as others surely must have!

It should be noted that we have yet to retrieve a male figure from the Paleolithic era that is similarly obese. Thus, if the Venus were a specific woman, or a type of woman who existed in the Stone Age, she must have been special enough to rely on men and other women to do her hunting and gathering for her. But who could such a woman have been?

When we examine the details of the Venus of Willendorf, we find such qualities as dimples in particular places. This suggests that whoever carved the Venus must have had a real-life model; surely, were the Venus a fantasy, the artist would not have gone to such extravagant lengths in rendering such precise details.

At the same time, as we mentioned earlier, the Venus has no face. So if she were based on a real-life model, then she must not have been so important, as the artist did not feel it necessary to render details of her face as a means of preserving her for immortality.


2009 Society’s Identity Search through Art223figure) and her extravagant hairstyle (or knittedhat) that seem to make her different from otherfemale figures found from the Paleolithic era. Butthere is also the fact that someone went to greatlengths to render exact details of her body thatmakes her special – despite the fact that her faceis not shown.What we know about the Venus of Willendorfis that she was not alone. There have been otherexamples of obese figurines like her found allover the European continent, all of which seemto date from the Stone Age. So there must havebeen some sort of shared understanding amongthe earliest societies as to the special meaning thatsuch a woman possessed.What is more, there have been some hypothesesthat the Stone Age society was matriarchal, ratherthan patriarchal, as our present day society is. Thisis because of the fact that the images and figuresof women dating from this era by far outnumberthose of the men. If the Venus is meant to representa sort of Earth deity or Supreme Mother Goddess,then it is highly likely that here on the earthlyplane, women played a far more important role inpre-historical society than they have been allowedto in the historical era. Perhaps women even ruledover men. With the Venus, the possibilities areendless. She could have been a Queen who ruledover others, or merely a symbol of wealth thatpeople carried with them for good luck.It is possible that she was all of these things.Perhaps she represented both a living woman anda goddess. Then, we must ask ourselves which ofthe two might have come first. But as there is nowritten information dating from this era to confirmour suspicions, our ideas regarding the meaningof the Venus of Willendorf are reduced to merespeculation.The idea that the Venus may have been somesort of Mother or Earth deity derives from theGreeks. In the 7 th century, the Greek poet Hesiodcalled this goddess Earth Gaea, who emerged outof chaos and brought forth the sky, the mountains,and the sea. Plato, the philosopher, refers to thegoddess as Ge in his Timaeus. In Olympia, therewas supposedly an altar and sanctuary dedicatedto the worship of Gaia. There was also a similarsanctuary near the entrance to the Acropolis inAthens.This Earth Goddess would later be worshippedby the Romans as “Tellus,” who was referred toas “the Great Mother” by Varro, who lived from116 to 27 BCE. Tellus is also referred to by thepoet Lucretius, who wrote that she alone was themaker of human bodies and all animals. Lucretiusasserted that Tellus was the mother of all the gods.By the third century BCE, there would emerge inRome the cult of the Great Mother, which wouldcome to be identified with Cybele, the mothergoddess.There is also mention of the Great Motherin Lucian’s dialogue Saturnalia. St. Augustinewould later attack the Great Mother in his The Cityof God Against the Pagans. This gives us a clearindication of how important she was in the Paganworld, and thus, possibly, in the Stone Age.With the rise of Christianity and attacks by St.Augustine and others, mention of the Earth Motherwas largely suppressed until the 18 th century, whenshe began to be mentioned once again as being theMother Goddess. In the 19 th century, a revival ofinterest in the Mother Goddess flourished. 
This was also the period when civilized society became aware that there existed some tribal cultures that still worshipped the Earth as a female goddess. The work of the Swiss anthropologist Johann Jacob Bachofen during this period effectively gave rise to interest in the possibility of matriarchal societies and the worship of feminine earth deities (Bachofen 1992). These ideas merged in the 19th century with the development of the theory of evolution, leading many to conclude that our society must have passed through a matriarchal phase before entering the present patriarchal phase (Darwin 2006). Social Darwinists would go on to assert that this is why patriarchal culture ultimately survived – because matriarchal society was not strong enough. The idea of a matriarchal society was thus seen as primitive, and used to denigrate the role of women in society – until the emergence of the feminist movement in the 20th century. These ideas most likely conditioned the responses of the archaeologists who first discovered the Venus of Willendorf in the early 20th century. This is why so many of us today naturally assume that the Venus is an Earth Goddess from a matriarchal Pagan era.

It should be stressed, however, that there is no evidence whatsoever to support the hypothesis that the Venus of Willendorf is indeed emblematic of the Mother Goddess or the Earth Goddess. It is also not known for certain whether or not Paleolithic society was indeed matriarchal. We do not know if Stone Age people believed in the Mother Goddess or the Earth Goddess as a universal deity. After all, reason some scholars, there are no marks on the Venus that would traditionally align her with any form of deity. In the end, arguments for and against the Venus's status as a Mother Goddess or Earth Goddess grind to a halt for lack of evidence.

It is nevertheless interesting to ponder the purposes for which the Venus of Willendorf might have been used. In recent years, some feminist scholars have speculated that the Venus may have served some sort of gynecological purpose for women in the Paleolithic era. Perhaps she served as a charm of some sort for menstruating women, or for women who were pregnant. Upon its discovery, the Venus was found to bear traces of red pigment on her body. This particular shade of red has traditionally been associated with menstrual blood. In later traditions, menstrual blood, or evocations thereof, served as a symbol of a life-giving agent. As the Venus's vulva is sculpted in heavy detail, it begins to make sense that the sculpture might have had some connection to mythical rites related to women's menstruation. This has even led some scholars to speculate that the Venus of Willendorf is not the work of a man – but of a highly skilled woman artist. What is certain about the Venus of Willendorf, whoever made it and whatever purpose it was meant to serve, is that it is firmly lodged in the field of the feminine.

Manet's Paris

While his later reputation would cast him as "king of the bohemians," Edouard Manet was actually born firmly within the ranks of the Parisian bourgeoisie in the first half of 1832. He was the son of a judge, Auguste Manet, and a refined woman named Eugenie-Desiree Fournier who was distantly related to the Crown Prince of Sweden. The young Manet did not take well to traditional schooling, preferring from the outset to paint and draw. Thanks to the encouragement of his uncle, Charles Fournier, Manet's interest in the arts was fostered early on; Fournier frequently accompanied the young child to the Louvre to look at the museum's magnificent collection of painting and sculpture.

After a brief stint in the merchant marine, Manet began studying under Thomas Couture. He would remain in Couture's studio until the year 1856. Early influences on Manet were the Spanish – in particular Goya and Velazquez. Manet disagreed with the popular view of the time, however, that all art should reflect classical ideas. Rather than attempting to emulate the Old Masters, Manet broke with tradition early on, proclaiming his allegiance to the ideals of the present and to contemporary life – as well as to what would come to be known as the "modern" style. In this, Manet marked a decisive turn away from the art theory of Diderot towards the modernizing tendencies of Charles Baudelaire – although, it should be noted, the esteemed poet and art critic never fully embraced the work of Manet (Rosenberg 1969, p. 173).

Any study of art's relationship with society during this key period in the history of art must take into consideration the revitalization of Paris that was occurring in the 19th century under Baron Haussmann.
Prior to 1852, Paris had retained its old medieval infrastructure, which was progressively withering away; Paris was thus becoming an infrastructural mess. Haussmann took many steps to modernize Paris, effectively transforming it into the city we know today.

There is no doubt that Haussmann's revitalization of the famous city also had effects on Paris's cultural and social life. The city experienced a rise in economic prominence, as the modernization efforts created a number of new jobs. What is more, storefronts were completely redesigned. The streets were both widened and lengthened. Buildings were redeveloped or completely destroyed. In the end, Paris would emerge as the loveliest – and most culturally progressive – capital on earth. The art of Manet and the Impressionists is clearly symptomatic of these shifts in the social landscape.

The Absinthe Drinker of 1858 is the painting that would launch Manet's career as an artist of modern life. This painting aims to capture the image of a figure one would have commonly seen on one of Paris's back streets during this era – a debauched man sitting alone, wrapped in a cloak, with a glass of absinthe nearby and an empty bottle at his feet. Despite the fact that this man would certainly have been far removed from the circle of friends Manet associated with, it is nonetheless a very honest portrayal of the social conditions that existed in Paris in the mid-19th century.

The Old Musician of 1862 is perhaps a more direct reference to the effects that Haussmann's renovation project had on people in French society – in this case, those on the lower end of the social spectrum. Parts of the poorer, more derelict sections of Paris were completely razed in the modernization process, effectively displacing a significant part of the population. It is a group of such displaced individuals, victims of "the new era," that Manet took as his subject in The Old Musician. In the center of the frieze-like arrangement, an elderly gypsy with a violin in hand stares impassively at us. Next to him stands a trio of children, each gazing in a different direction with a hopeless expression on their young faces. Already in The Old Musician, we can see traces of Manet's detached, emotionally cold rendering of his subjects – a tone that would come to define his work, and Parisian modernism in general. It is as though Manet wishes to convey the message – this is how life is – but without passing a moral judgment on the circumstances. Instead, he allows the viewer to make up his or her mind about what these people are meant to represent, if anything.

Perhaps the most radical aspect of 1862's La Musique aux Tuileries is the fact that it contains no real subject. This painting is quite a departure from Manet's previous concern with the destitute of Paris. It celebrates the high-class, fashionable society of Paris during this time – a part of society that Manet and his friends were indelibly a part of. There is no real central subject in the painting, though – we are at a celebration of some sort, and are able to lose ourselves in the swirl of the crowd. In this respect, the painting is "out of focus," like being lost in a real crowd. At the same time, through its intricate attention to the clothing worn by the individuals in the picture, we get a good glimpse of what Parisian high society would have looked like during this era.

At the same time, La Musique aux Tuileries represents a definitive break with the dominant styles in French painting of the 19th century and helps pave the way towards the movement that would come to be known as Impressionism. With its loose handling of paint, La Musique aux Tuileries effectively anticipates the "fast painting" methods of the Impressionists, while simultaneously giving inspiration to the "snapshot style" that would later be developed by such painters as Degas.

1862 proved to be quite a productive year for Manet. This was also the year he completed the Spanish Guitar Player.
This painting would win Manet acceptance into the renowned Salon, as it appealed to Parisians' love for all things Spanish.

Despite his acceptance into the Salon and the French painterly elite, it was not the Spanish Guitar Player that made Manet a star. Rather, it was his highly controversial Déjeuner sur l'Herbe, which was completed the following year. This painting was rejected by the Salon for its highly unorthodox subject matter and style.

It was not just Déjeuner sur l'Herbe that bore the brunt of rejection that year at the Salon. So many quality paintings were rejected in 1863 that several artists banded together to form the Salon des Refusés as a kind of counter-protest to that year's Salon. It was at the Salon des Refusés that Manet's star first shone, in the form of this rejected painting, which has since come to be regarded as a masterpiece – a key work of early Impressionism. But this recognition did not come early. In the words of MacDonald (1999):

Although influenced by Raphael and Giorgione, Dejeuner did not bring Manet laurels and accolades. It brought criticism. Critics found Déjeuner to be anti-academic and politically suspect and the ensuing firestorm surrounding this painting has made Le Dejeuner sur l'Herbe a benchmark in academic discussions of modern art. The nude in Manet's painting was no nymph, or mythological being... she was a modern Parisian woman cast into a contemporary setting with two clothed men. Many found this to be quite vulgar and begged the question "Who's for lunch?" The critics also had much to say about Manet's technical abilities. His harsh frontal lighting and elimination of mid tones rocked ideas of traditional academic training. And yet, it is also important to understand that not everyone criticized Manet, for it was also Déjeuner which set the stage for the advent of Impressionism.

Indeed, Manet emerged as something of an enfant terrible in the Parisian art scene of this era. In the same year, he would also produce Olympia, another painting featuring a female nude that would become the center of much controversy. Of course, it was never Manet's intention to shock with his art. He truly did not understand why his two paintings had had such an extreme effect on society. While his bohemian roots have frequently been asserted by biographers who like to indulge the myth of the wild, savage artist, in truth Manet was very much a part of high society in Paris in the mid-19th century. He truly wanted his art to reflect this society – and he wanted the society to accept him as an artist.

Despite, or perhaps because of, the controversy generated by these two paintings, Manet would go on to inspire a horde of other artists with his technical virtuosity. Painters like Pierre-Auguste Renoir and Claude Monet would find in Manet something of a father figure. He would liberate painters of the 19th century from the constraints of academic painting, allowing them an unprecedented amount of freedom to explore the art of painting from strikingly new perspectives.

The political events that rocked Paris between 1867 and 1871 would have a significant impact on many of the later works of Edouard Manet. After the Franco-Prussian War, it seemed as though the city's golden years had come to an abrupt end. Times were rough, and Manet documented the hardships of these years in a series of paintings that included Execution of Maximilian, The Barricade, and Civil War. Manet himself joined in, becoming a gunner in the National Guard in 1870. Manet reacted to the war with horror; this is clearly indicated by the three paintings that were inspired by this period. The Execution of Maximilian, completed in 1868, has echoes of Goya's Third of May.

Manet did not limit his output during this period to political subjects, however. He continued to keep his eye on every aspect of the society he still very much felt himself to be a part of. His famous Portrait of Emile Zola was completed during this period, as were The Balcony and The Railroad.

By the end of the conflict, Manet's reputation as a serious artist was cemented. He was widely considered to be the leader of the new group known as the Impressionists. Manet also became firmly involved in café life in the 1870s and 1880s, and his paintings from this era reflect this. Of particular interest was the Café Guerbois, situated near Manet's studio. This is the café where the leading Impressionists – including Monet, Pissarro, Renoir, Degas, and Sisley – would gather.
At the head of the table was always Manet – despite the fact that he felt very uncomfortable in his role as the leader of the Impressionists. While he remained friends with the Impressionist painters throughout his life, he refused to participate in their exhibitions. Indeed, while Manet's open style certainly influenced the development of the movement, Manet himself was never really an Impressionist painter. The Impressionists were firmly cemented as the leading avant-garde movement in Paris during this time; Manet preferred being a member of the establishment, and thus stuck to the Salon. In fact, it was only late in life that Manet came anywhere close to approaching the Impressionists' style or preferred subject matter. At the same time, Manet's style was resolutely his own. Even a blatantly Impressionistic painting such as Claude Monet Painting on His Studio Boat from 1874 shows the artist maintaining his interest in what the people around him are doing, rather than in the natural world, which the Impressionist painters largely wished to convey in their canvases. Another of Manet's near-Impressionist works, Argenteuil, was completed around the same time. It features a couple seated on a bench in front of a lake, with the town named in the title in the background. As MacDonald (1999) has noted, "both [paintings] approach the notions of reflected light and atmosphere of Impressionism but Manet never becomes assimilated into the true Impressionist style."

But it was not nature that ultimately inspired Manet; rather, it was the city of Paris itself. This is perhaps most evident in the café paintings that the artist completed in the last years of his life. The most famous of these, A Bar at the Folies-Bergere, was completed in 1882. It depicts a typical French barmaid staring forlornly out at the crowd, which is reflected in the mirror behind her. With its brilliant composition of black and white patches, A Bar at the Folies-Bergere is a melancholy masterpiece that transports the viewer to the café itself. It is a work filled with optical contradictions that is nonetheless typical of Manet at his greatest – as a mirror of the very society that forged his genius.

Indeed, perhaps more than any artist of his era – and certainly more than the subsequent Impressionist artists – the work of Edouard Manet is reflective of the Parisian society that he was very much a part of. MacDonald (1999) has linked the idiosyncratic painting style of Manet directly with the times he lived through: "If Manet's work seems to be full of contradictions, or to employ a lack of perspective from time to time, then perhaps that was the true reality of Paris in Manet's time."

Women in Manet's Art

As we have already examined the pivotal role that gender plays in analyzing the Venus of Willendorf, it is worth taking the time to analyze the role that gender – particularly the feminine gender – plays in the work of Edouard Manet. Indeed, it was the nudity of his female figures that shocked his Parisian audience early on in Déjeuner sur l'Herbe and Olympia.

Whereas previous representations of nude women in classical art elevated them to the status of Venus-like goddesses, Manet broke with tradition by painting his nude figures as everyday people, effectively equal to their male counterparts. The nudity in Déjeuner sur l'Herbe serves no real, visible purpose. Two men and two women are on a picnic, and one of the women has apparently removed her clothes. There is no orgy going on, as she is the only one who is undressed. She stares out at us plaintively, her slightly overweight body providing a mild echo of the Venus of Willendorf (which, of course, Manet had never seen, as it had not yet been discovered). In the words of Australian art critic Robert Hughes (1990):

The painting has the quality of farce, presented in the guise of a Second Empire pictorial machine. At the same time it is intensely serious (as farce can be), and one of the victims of its seriousness is the stereotype of the nude. Manet invariably painted women as equal beings, not as denatured objects of allure. Victorine, the model, is clearly a model doing a professional stint; the illusions of the Salon body, timelessness and glamour, are no longer properties of nakedness. Other artists painted nymphs as whores; it took Manet, in the Olympia, to paint a whore as her own person, staring back at the voyeurs, restricting the offer to a transaction. Here, as in paintings of women who were not models (such as Berthe Morisot, whose shadowed and inward-turning beauty Manet could portray as the index of thought), one sees him inventing the image of the "modern" woman. It was there to be seen; but that is true of any prophecy (138).


Olympia caused a major uproar when it was first exhibited in 1865 at the Salon in Paris. Despite the fact that it calls to mind the classical images of Giorgione (Sleeping Venus), Titian (Venus of Urbino), and Ingres (Odalisque with a Slave), the public was outraged by Manet's depiction of a common prostitute lying nude on a bed. A black female servant stares at her as she fixes the Madame's bed, while a black cat stands on edge at the end of the bed, as though anticipating the viewers' loud, outraged response.

Not everyone despised Olympia, however. One sympathetic reviewer by the name of Jean Ravenel effectively conveyed both the public outrage and the real achievements of Manet's painting:

The scapegoat of the Salon, the victim of Parisian lynch law. Each passer-by takes a stone and throws it in her face. Olympia is a crazy piece of Spanish madness, which is a thousand times better than the platitude and inertia of so many canvases on show in the Exhibition. Armed insurrection in the camps of the bourgeois: it is a glass of ice water which each visitor gets full in the face when he sees the BEAUTIFUL courtesan in full bloom. Painted of the school of Baudelaire, freely executed by a pupil of Goya; the vicious strangeness of the little faubourienne, woman of the night out of Paul Niquet, out of the mysteries of Paris and the nightmares of Edgar Poe. Her look has the sourness of someone aged, her face the disturbing perfume of fleur de mal; the body fatigued, corrupted, but painted under a single transparent light, with the shadows light and fine, the bed and the pillows are put down in the velvet modulated gray. Negress and flowers insufficient in execution, but with real harmony to them, the shoulder and arm solidly established in a clean and pure light. The cat arching its back makes the visitor laugh and relax, it is what saves M. Manet from popular execution (quoted in Kapos 1995, p. 40).

Indeed, it was not the fact that he was painting his female subjects nude that so disturbed Manet's public. It was the fact that he was portraying average-looking women – including prostitutes and artists' models – and making no effort whatsoever to idealize them in any way, as artists of the past had.

Clark (1956, pp. 164-165) and other writers have also pointed out that the female figure depicted in Olympia is a Venus figure. Throughout the history of art, the Venus figure has typically been associated with love, beauty, and fertility. Both the nude women in Déjeuner sur l'Herbe and Olympia clearly fit into this tradition – despite the fact that Manet simultaneously broke with tradition owing to both the context of their nudity and the physical appearance of his chosen models, as depicted in the paintings.

In this way, Manet was endowing his female subjects with a degree of respect that they had not received in the history of western art. Manet's fascination was with people in general. He did not discriminate between social classes, sexes, or genders – everyone was fair game for one of his depictions. But he would not make any effort to present his models on canvas as other than they were in real life. To do so would have been dishonest, and Manet, above all, was interested in capturing the truth – something that artists before him had had to shy away from, whether to conform to social mores or for reasons relating to patronage. In the words of one commentator:

As we look back at it now, Manet was not really being that revolutionary. His ideas of using common people are right in line with Gustave Courbet (1819-1877) but he just took it to the next level. And if we look at the political history of France, the revolution and the overthrow of the aristocracy, the art world was ripe for a change in the way we view things – though the viewing public (who were often upper class) had to be dragged kicking and screaming (JSS Gallery 2005).


Conclusion

When viewing any work of art, we immediately begin to make sense of the image in terms of how it relates to the world we know, as well as to what we have been taught about history. Not only does society seem to influence the ways in which art is produced – society is often reflected in works of art themselves. This raises the question of whether we may use ancient artifacts that pre-date written culture as a means of "reading" the society that existed during that era.

In the case of ancient artifacts such as the Venus of Willendorf, we are at first put off by the inherent strangeness of the piece. As we begin to assimilate received ideas about the era during which it was forged, however, we begin to get a clearer picture of what it might mean. The intriguing thing about the Venus and other Paleolithic relics, however, is the fact that we ultimately have no idea exactly what they could mean. Instead, we can only satisfy our interest by relating the piece's history to our modern ideas of what gender might have signified to ancient prehistoric civilizations. Our speculations, however, will not lead us to any conclusive answer.

When we move to a century closer in time to ours, however – in the case of Manet, the 19th – we are able to get a better view of the ways that society and art affected one another through the use of primary and secondary sources. If going from the Venus to Manet does indeed represent a shift from a matriarchal society to a patriarchal one, then examining the controversies surrounding Manet's work certainly gives us a glimpse into some of the detrimental effects that patriarchy has had on our culture.

While I would not go so far as to argue that Manet was an early feminist, the fact that he portrayed his female subjects with dignity – even when they were nude – is something worth commending. Instead of idealizing the female figure, as male artists prior to him were wont to do, Manet showed us quite average images of the feminine body. It is almost as though he wished to challenge our assumptions – not merely about the female body itself, but about the entire history of art as it had been written to date. It will no longer be enough to pretend that reality is otherwise, these images seem to shout; instead, we must be faithful to the objective truth that progress is built on.

Manet's painting was firmly rooted in the Paris of the mid-19th century. It is thus not difficult for us to get a glimpse of what society was like in that time and place by looking at Manet's paintings. Looking at the Venus of Willendorf, however, does not tell us anything about the society that it is a relic of. It thus requires us to use our intellects and our imaginations in order to piece together an explanation that might satisfy us personally, but can never be held up as a firm example, as we can with Manet's paintings. Thus, it can be said that the relationship between art and society is in fact conditioned by a third factor, which has been the main subject of our inquiry – that of history. Without all the written records of the 19th century that have been kept, we might have no way of knowing what we are looking at when we study a Manet painting. This truth comes to the surface when we look at the Venus, which comes from a period that pre-dated all known forms of writing.

When analyzing art's relationship to society, it is also vital to take into consideration the process of canonization. This can be thought of as essentially the process whereby individual works of art are assigned to given periods and places. So, for instance, the Olympia of Manet receives its importance as being emblematic of early Parisian Modernism. It is thus difficult for us to assign a similar position to the Venus in our canon, as we are uncertain as to its origins. As a matter of fact, we do not know whether it was actually intended as a work of art, or whether it had some sort of utilitarian use in the pre-literate society that forged it.

"Works" – or, perhaps more aptly, artifacts – such as the Venus thus cause us to reconsider what the true definition of art is. Certainly, when a modern-day art critic "reads" a painting from the last hundred years, he or she makes no effort to decode the society that produced such a work; to do so would be redundant, as it is a given that we probably already know enough about Parisian society in the 19th century to gauge Manet's meaning – or, to be precise, one of the many meanings one finds in a Manet painting. Instead, the critic will focus on the expressive content of the painting, the technique deployed, and so on. The Venus figurine we have chosen to consider here, however, is first of all read for its potential anthropological value (i.e., what it might or might not have been used for in the society that produced it). The expressive qualities have been touched on in our essay above, but they are hardly the focus of most critical inquiries into the statue's inherent meaning.

At the same time, it is possible to consider the Venus as a pure work of art, detached from the sort of guesswork that has characterized most inquiry into the "meaning" of the subject. Just as Manet's women were meant to evoke a certain notion of femininity that was both dominant during his era and transcendent, so the Venus is expressive both of universal qualities, such as the beauty and importance of fertility, and of the more specific quality of obesity, which is no longer valued in the present day but remains an enduring condition of humanity. It is these conditions and values that ultimately resonate throughout time and constantly recur in art, effectively giving art its true meaning.

Bibliography

Bachofen, J. J. 1992, Myth, Religion, and Mother Right, Princeton University Press, Princeton.

Baring, A. & Cashford, J. 1991, The Myth of the Goddess: Evolution of an Image, Viking, New York.

Clark, K. 1956, The Nude: A Study in Ideal Form, Princeton University Press, Princeton.

Darwin, C. 2006, On the Origin of Species: By Means of Natural Selection, Dover Publications, New York.

Dobres, M. A. 1996, "Venus Figurines", in The Oxford Companion to Archaeology, ed. B. M. Fagan, Oxford University Press, Oxford.

Duhard, J. P. 1993, "Upper Palaeolithic Figures as a Reflection of Human Morphology and Social Organization", Antiquity, 67, pp. 83-90.

Ehrenberg, M. 1989, Women in Prehistory, University of Oklahoma Press, Norman.

Giedion, S. 1962, The Eternal Present: The Beginnings of Art, Oxford University Press, London.

Graziosi, P. 1960, Palaeolithic Art, McGraw-Hill, New York.

Hughes, R. 1990, Nothing if Not Critical: Selected Essays on Art and Artists, Penguin Books, New York.

JSS Gallery 2005, Edouard Manet's Olympia, [Online] Available at: http://www.jssgallery.org/other_artists/Manet/Olympia.htm#Top

Kapos, M. 1995, The Impressionists and Their Legacy, Barnes & Noble Books.

Leroi-Gourhan, A. 1968, The Art of Prehistoric Man in Western Europe, Thames and Hudson, London.

MacDonald, L. 1999, "Edouard Manet", Artchive, [Online] Available at: http://www.artchive.com/artchive/M/manet.html

Manet, E. 1863, Déjeuner sur l'Herbe, Painting, [Online] Available at: http://www.jssgallery.org/other_artists/Manet/Lunch_on_the_Grass.htm

Manet, E. 1863, Olympia, Painting, [Online] Available at: http://www.jssgallery.org/other_artists/Manet/Olympia.htm

Manet, E. 1874, Argenteuil, Painting, [Online] Available at: http://www.abcgallery.com/M/manet/manet30.html

Manet, E. 1874, Claude Monet Painting on His Studio Boat, Painting, [Online] Available at: http://www.abcgallery.com/M/manet/manet29.html

Manet, E. 1881-1882, A Bar at the Folies-Bergere, Painting, [Online] Available at: http://www.artchive.com/artchive/M/manet/manet_bar.jpg.html

Rosenberg, H. 1969, Artworks and Packages, Thames and Hudson, London.

Soffer, O., Adovasio, J. M., & Hyland, D. C. 2000, "The Well-Dressed 'Venus': Women's Wear ca. 27,000 BP", Archaeology, Ethnology and Anthropology of Eurasia, 1, pp. 37-47.

Venus of Willendorf, Oolitic limestone, [Online] Available at: http://witcombe.sbc.edu/willendorf/willendorf.html

Witcombe, C. L. C. E. 2003, "The Venus of Willendorf", Women in Prehistory, [Online] Available at: http://witcombe.sbc.edu/willendorf/willendorfdiscovery.html


Analysis on Modernism and Literary Impressionism

Donatella Petri (MPhil)
Master of Philosophy and candidate for a PhD in English Literature at the School of Doctoral Studies, Isles Internationale Université (European Union)

Professor Anna Richardson (PhD)
Chair of Languages and Literature of the Department of Social Science at the School of Doctoral Studies, Isles Internationale Université (European Union)

Abstract

A comprehensive analysis of the structure and texture of the beginning of literary impressionism, based on two works of Ford.

The plot of The Good Soldier is deceptively simple. It tells the story of two couples, one English and one American, who meet at a spa in Germany. Edward, the English male half, suffers from a heart condition, as does Florence, the female American. The two couples quickly form a friendship that endures for several years. Eventually, however, it is revealed that during this time, Florence and Edward have been carrying on an affair. Leonora, the English wife, knows about the affair all along, but the narrator of the novel – John Dowell, the American husband – does not find out until much later. It is only upon the death of both the adulterers that more about their affair is revealed, putting John in the unwitting position of a sort of detective.

Key words: English Literature, Modernism, Literary Impressionism


Analysis on Modernism and Literary Impressionism

Structure and Texture in Ford's The Good Soldier and Parade's End

As Graham Greene once wrote on the subject of Ford Madox Ford, "No one in our century except James has been more attentive to the craft of letters. He was not only a designer; he was a carpenter: you feel in his work the love of the tools and the love of the material" (Greene 1962, p. 8). In what follows, we intend to explore the ways in which Ford both designed and engineered what are perhaps his two greatest novels, The Good Soldier and the works comprising Parade's End. Through a rigorous analysis of both the formal and textual aspects of Ford's work, we hope to expose those qualities that contributed to Ford's development as one of the pioneering authors of Modernism and literary Impressionism.

We will begin with an analysis of the sprawling text of The Good Soldier, which is characterized by its inventive use of the flashback device. Structurally, the work makes use of a non-chronological order of events, which are relayed to us by an unreliable narrator. We will investigate these formal devices, showing how the structure of the novel is meant to mirror the chaotic events that are depicted throughout the course of the book.

We will then turn our attention to what is arguably Ford's finest achievement, the tetralogy Parade's End. We will explore the ways in which the main characters of this book are linked with the unreliable narrator of The Good Soldier. We will also examine the ways in which the characters' psychological development influences the texture of the work. This is a key point, because in no other work did Ford manage to show the intricate workings of human consciousness as he did in Parade's End. We will conclude by showing the ways in which Ford Madox Ford's unique literary Impressionism has left a distinctive mark – not only on Modernism, but also on the evolution of literature in general.

Throughout our analysis, we will make use of the authoritative Bodley Head edition of Ford Madox Ford's works, as well as an array of standard and recent scholarship on these two novels.

The Good Soldier

The Good Soldier, a book that is widely considered to be Ford's masterpiece, is a stylistically unprecedented work of literature. Ford himself, never shy of patting himself on the back, readily admitted as much; he would later write of

"taking down one of [his] ten-year-old books [and exclaiming] 'Great Heavens, did I write as well as that then?' ... And I will permit myself to say that I was astounded at the work I must have put into the construction of the book, at the intricate tangle of references and cross-references" (quoted in Smiley 2006).

While The Good Soldier was not to be the last of Ford's great accomplishments, it is certainly one of his most coherent on both a structural and a textural level.

The plot of The Good Soldier is deceptively simple. It tells the story of two couples, one English and one American, who meet at a spa in Germany. Edward, the English male half, suffers from a heart condition, as does Florence, the female American. The two couples quickly form a friendship that endures for several years. Eventually, however, it is revealed that during this time, Florence and Edward have been carrying on an affair. Leonora, the English wife, knows about the affair all along, but the narrator of the novel – John Dowell, the American husband – does not find out until much later. It is only upon the death of both the adulterers that more about their affair is revealed, putting John in the unwitting position of a sort of detective. Although he would have preferred not to know about his wife's infidelity with his dear friend, the facts come to him continuously throughout the novel, which is structured as a series of seemingly endless digressions, effectively mirroring the way in which memory works; as Dowell tells us at the beginning of the novel:


Is all this digression or isn't it digression? Again I don't know. You, the listener, sit opposite me. But you are so silent. You don't tell me anything. I am, at any rate, trying to get you to see what sort of life it was I led with Florence and what Florence was like. Well, she was bright; and she danced. She seemed to dance over the floors of castles and over seas and over and over the salons of modistes and over the plages of the Riviera – like a gay tremulous beam, reflected from water upon a ceiling. And my function in life was to keep that bright thing in existence. And it was almost as difficult as trying to catch with your hand that dancing reflection. And the task lasted for years (Ford 1962, p. 24).

Like Florence, the prose of Ford's narrator seems to dance throughout the novel – much as the colors seem to dance across a canvas by Monet. Now that Florence is gone permanently from his life, it becomes Dowell's task to preserve her memory – to make her immortal – through his narrative, just as in life it was his job to keep her alive. It is through this artful, impressionistic structuring of events – and its disregard of chronology – that the novel builds its suspense and drama, which has kept generations of readers engaged from beginning to end. Every single utterance made by the narrator or the other characters, no matter how insignificant it might seem at first, later acquires significance as the plot unravels.

Owing to the highly textured narrative, it is worth examining John Dowell in fuller detail, as he is the medium through which the narrative is generated. As far as narrators go, Dowell is incredibly naïve. He comes from a well-to-do Philadelphia family, and apparently has very few ambitions in life other than to marry Florence – this, despite the fact that she comes across as an incredibly demanding spouse who is resistant to physical intimacy with her husband. Emotionally, Florence is distant and unwilling to share her deepest feelings with Dowell, yet he loves her in spite of all these apparent flaws in her character. Owing to her heart condition, Florence claims that she is unable to engage in sexual activity with her husband. He is thus turned into a sort of servant who is forced, unwittingly, into catering to his wife's every need. She also carries on at least two love affairs – despite her supposed inability to make love.

One of the idiosyncratic elements of Dowell's narration is the fact that his voice sounds far from American; in fact, the book is narrated in a distinctly early-20th-century British idiom. Smiley (2006) has argued that this fits the theme of The Good Soldier quite well, however, as the book's ultimate subject is not America at the turn of the century, but England. Dowell, as an expatriate, quickly takes to the upper-class Edwardian lifestyle of the English in the early 20th century. At the same time, he is conflicted about its high standards and attention to detail – he finds very rare roast beef nauseating, brandy disagreeable, and is repelled by cold baths. What is most repellent to Dowell about Edwardian values, it later comes to be revealed, is the fact that social decorum seems to serve as a means of preventing people from really getting to know one another; hence all the secrets that the close friends of the central narrative are able to keep from one another, secrets that Leonora is openly aware of, but which are still a mystery to the uninitiated American. In the words of Mizener:

There was certainly, in the Edwardian world Ford was contemplating, some radical discontinuity between what Dowell calls the "natural inclinations" of people's unconscious selves and the trained habits of their conscious selves that made them "good people" and that made their society "the proudest and the safest of all the beautiful and safe things that God has permitted the mind of men to frame." Perhaps there always has been such a discontinuity; perhaps there always will be. Perhaps the more beautifully ordered and successfully disciplined the "parade" of civilization becomes, the more destructive of men's natural inclinations it also becomes (Mizener 1985, pp. 276-277).


This struggle between the Edwardian virtues of old and the Modernism of the new world would later come to the forefront in Ford's Parade's End tetralogy.

If Dowell has any virtues at all that shine through over the course of the narrative, they are his earnestness and the fact that he is quite disarming. Unlike the notoriously egotistical author behind the work, Dowell continuously expresses his self-doubts and shortcomings. He is never self-serving throughout the course of his narrative; in general, he is overly generous to those who perhaps do not always deserve it. In such a rambling narrative, it is not possible to be "reliable" in reporting facts; Dowell's very unreliability – as both a person and a narrator – contributes much to the rich texture of The Good Soldier. Furthermore, his portraits of Edward, Florence, Leonora, and the other characters provide an apt literary equivalent to painterly Impressionism in their personal, if not scattered, style of rendering.

Rather than letting his emotions regarding his wife's untimely demise get the best of him, Dowell portrays Leonora and Edward as being endowed with respectability, as well as character and grace. The few flaws Edward possesses appear to be related to his intelligence and his education – he is otherwise portrayed as a generous individual. As a respected magistrate and landlord in England, he is endowed with a high degree of morality. He is responsible and upright. At the same time, perhaps owing to his lack of intelligence, Edward is portrayed as being sentimental. This turns out to be a major flaw, in that others often see him as being weak or frail in his character. This also makes him, like Dowell, a weak character in comparison to his wife, who, like Florence, is strong and determined in character.

Dowell often portrays Leonora as being "cold". We learn, for instance, that Leonora and Edward have not spoken in private for over a dozen years, despite their show of warmth and affection to one another in public. While it would have been easy to make caricatures of Edward and Leonora – indeed, their "types" are rife in English literature – Ford addresses their inner lives seriously, treating their inherent emptiness as symptomatic of the turmoil of the transitional era during which the story takes place (the early 20th century). The lack of intimacy in the lives of both husband and wife has been aptly characterized by Smiley as follows:

It is clear as the novel proceeds that not only do Edward and Leonora have no idea what intimacy is, they also have no way of finding out: for one thing, they don't read novels, and for another, Leonora consults priests and nuns for marital advice, and what they have to offer are third-hand clichés such as "men are like that." Edward consults no one, and there seems to be no structure in his life that would permit such consultation. Other men of his social class tell dirty stories, perhaps as a form of sharing information, but these make Edward uncomfortable. Thus, when Edward begins to feel out of sympathy with Leonora some three or four years into their marriage, he is ripe for exploitation, and he ends up making a costly liaison and losing about 40 percent of the principal value of his estate. Over the next 10 years Leonora takes over management of the estate and brings it back to its original value, but the balance of their relationship is fatally undermined by her control and his untrustworthiness.

What Leonora and Edward's marriage has in common with that of Florence and John is that both marriages feature emasculated men. It is implied, via John's narration, that both men have been complacent in their emasculation, however. While Dowell's emasculation remains something of a mystery throughout the course of The Good Soldier, Edward seems to have participated in his own emasculation, in that he has received no education in life in anything other than following protocol – hence, he becomes the "good soldier" of the title. While Edward's emasculation can be attributed to his stupidity, the fact that John allows himself to be treated harshly by his wife can only be attributed to a general laziness or lethargy, which is symbolized by the fact that he shows no real interest in steering his life in any particular direction, as he himself admits.

Because he seems to be aware of the fact that he has been emasculated, John has grown to accept his status in the relationship. Edward, on the other hand, is never able to fully accept his position – probably, it is implied, because he is unable to fully understand why things are as they are. He thus resorts to a series of affairs, which his wife is openly aware of. Leonora seems to believe that once they are in a more financially stable position, he will regain his interest in her.

One of the more intriguing aspects of the plot of The Good Soldier is the fact that John remains devoted to his wife throughout – even while he portrays her as a rather shallow, vile creature. Despite her strong intellect, she clearly wants to be catered to. Her sole desire is to be the center of attention, and she will do whatever is necessary to attain it. Morally, she is a rather abject figure, and yet John loves her and refuses to criticize her – despite the fact that, through his narration, he shows how she is, in fact, hardly a likeable human being. While she may conceal her intentions behind her intellectual demeanor, Florence is little more than a crude social climber, when you get down to it.

It quickly becomes apparent that, beyond the veneer of friendship, Leonora does not actually like Florence. This is revealed early on in the novel, in the guise of a disagreement over religion. When Florence crudely insults the Catholic faith, to which Leonora belongs, Leonora takes John aside. John attempts to defend his wife, but Leonora will have none of it:

'It's hardly as much as that. I mean, that I must claim the liberty of a free American citizen to think what I please about your co-religionists. And I suppose that Florence must have liberty to think what she pleases and to say what politeness allows her to say.'

'She had better,' Leonora answered, 'not say one single word against my people or my faith.'

It struck me at the time, that there was an unusual, an almost threatening, hardness in her voice. It was almost as if she were trying to convey to Florence, through me, that she would seriously harm my wife if Florence went to something that was an extreme (Ford 1962, pp. 67-68).

It is through his liaison with Florence that Leonora eventually loses all respect for her husband. Dowell describes Leonora thus:

Leonora, as I have said, was the perfectly normal woman. I mean to say that in normal circumstances her desires were those of the woman who is needed by society. She desired children, decorum, an establishment; she desired to avoid waste, she desired to keep up appearances. She was utterly and entirely normal even in her utterly undeniable beauty. But I don't mean to say she acted perfectly normally in the perfectly abnormal situation. All the world was mad around her and she herself, agonized, took on the complexion of a mad woman; of a woman very wicked; of the villain of the piece. What would you have? Steel is a normal, hard, polished substance. But, if you put it in a hot fire it will become red, soft, and not to be handled. If you put it in a fire still more hot it will drip away. It was like that with Leonora (Ford 1962).

Dowell thus upholds Leonora's femininity as "normal," in that she is all too willing to submit to her traditional role in the dominant Edwardian establishment – something that is repugnant to his unconventional wife, Florence.
Unlike Florence, Leonora has no need for greater power or influence over her husband's life – or anyone else's, for that matter. She instead seeks only to keep up appearances and maintain the proper social decorum, no matter what may be happening in her personal life behind the scenes. She is perfectly happy to play the role that society has allotted to her. She desires to have children and to become a part of the establishment – the former of which she and her husband are unable to manage, a fact that brings them both a great deal of unhappiness.

What lends the narrative an additional layer of texture is Ford's reliance on paradox throughout. Despite his devotion to his wife, Dowell occasionally questions why he stayed with her throughout the duration of their marriage, lapsing out of his denial to momentarily acknowledge the lovelessness of their relationship:

For peace I never had with Florence, and I hardly believe that I cared for her in the way of love after a year or two of it. She became for me a rare and fragile object, something burdensome, but very frail. Why it was as if I had been given a thin-shelled pullet's egg to carry on my palm from Equatorial Africa to Hoboken. Yes, she became for me, as it were, the subject of a bet – the trophy of an athlete's achievement, a parsley crown that is the symbol of his chastity, his soberness, and of his inflexible will. Of intrinsic value as a wife, I think she had none at all for me. I fancy I was not even proud of the way she dressed (Ford 1962, pp. 86-97).

Through his portrayal of Leonora and Florence, it quickly becomes apparent that Dowell views women as being both strong creatures and completely inhuman in their behavior. He sees in both women, especially his wife, the desire and capacity for change. This frightens him, as women serve as the very fabric upon which society is built. While Leonora comports herself in a "normal" fashion early in the novel, as catastrophe looms her pristine façade begins to crack and fade, leading to the catastrophic conclusion of events in the novel. She essentially goes mad, and Dowell associates her madness with an innate evil or "wickedness." He is again unable to fully reconcile himself to the fact that not only has his personal life changed, but an entire way of life – that of Edwardian England – has become a relic of the past. This is symbolized by the dissolution of the female characters in the novel, both of whom ultimately transgress their allotted roles in society through their actions.

While John's portrayal of Edward is largely sympathetic, there are occasional strange outbursts, as when, summarizing Edward's military career, he suddenly states, "It would have done him a great deal of good to get killed." At the same time, once we reach the end of the novel and are able to see how profoundly tragic Edward's demise is, the remark begins to make more sense. As a "good soldier," it would in fact have been much better for Edward to have met a heroic end than to leave behind the pathetic legacy he does at the end of the novel. Thus, the title is loaded with an ironic dimension, the sharpness of which contributes much to the richness of the narrative's psychological discourse.

We have in The Good Soldier a novel whose stylistic richness is rooted in its complex set of references and cross-references, which enable both complementary and contradictory meanings to emerge in nearly every sentence, thus giving the reader an endless web of meanings to interpret. In the course of the novel, Ford very carefully constructs a complex plot that intertwines its characters' precious psychological states with events, chance, and muddled intentions. What emerges is a very complex picture of modern life.

It should be noted that Ford originally wanted to call his book The Saddest Story.
But the book was not to be published until shortly after the outbreak of the First World War, by which time the publisher had decided that the reading public probably did not want another "sad story." The fact that the novel was ultimately called The Good Soldier, however, adds a layer of intrigue that the novel would have lacked under the rather banal title Ford had chosen. What is more, in its subtlety, the title The Good Soldier gives us a glimpse into the social dimension that Ford's main characters inhabit. It is not only stupidity that Edward and the other characters face; it is mostly a moral issue that each in turn must combat in order to win or lose the battle of life as it is depicted in the novel. As Smiley (2006) has written of The Good Soldier,

[The characters] are representatives of a system that fails them and fails in their failure. It is the subtler side of Dickens's Circumlocution Office, of Thackeray's Vanity Fair. In Dickens's and Thackeray's day, the landed gentry could still be attacked. By Ford's time, all the social and cultural arrangements of feudal Europe were imploding in the first world war. Ford was astute enough to depict both the inevitability of the implosion and its sadness – the world of Jane Austen a hundred years on, depopulated, lonely and dark.

A further level of intrigue is added to the novel once one becomes cognizant of the fact that Dowell is apparently unaware of the larger significance of the events that he is narrating in The Good Soldier (Cousineau 2004, p. 86). Even in his portrayal of his dear friend Edward, whom he clearly pities, he is ultimately unable to decide whether Edward is a true hero or an utter, contemptible failure. What makes Dowell so human is the fact that he is unable to draw a conclusion from the textured mass of thoughts that comprise the narrative, of which this strand is perhaps emblematic:

I don't know. And there is nothing to guide us. And if everything is so nebulous about a matter so elementary as the morals of sex, what is there to guide us in the more subtle morality of all other personal contacts, associations, and activities? Or are we meant to act on impulse alone? It is all a darkness (Ford 1962, p. 14).

It is ultimately up to the reader, then, to decide the level of irony that goes into the title The Good Soldier.

Parade's End

In terms of structure, Some Do Not…, the first novel in the Parade's End tetralogy, is deceptively simple in comparison to The Good Soldier. The book utilizes a chronological arrangement of events that is only slightly modified. Structurally, the book can be reduced to three parts. The first part takes place in Rye in June of 1912. Roughly, it runs from the opening of the book, where Christopher and Macmaster are introduced on the train, up to the point when the all-night ride of Valentine and Christopher comes to an end as they run into General Campion outside Mountby. The second part takes place five years later in the dining room of Sylvia and Christopher's apartment in Gray's Inn. This part of the book takes place in the course of a single afternoon. The third part of Some Do Not… is continuous with the second, only it takes place on the street between Gray's Inn and the War Office, at the War Office, and back at Sylvia and Christopher's apartment.

Each part of Some Do Not… consists of a myriad of vignettes or occurrences. In this respect, the novel has been compared by the likes of Mizener to a well-structured play (Mizener 1985, p. 495). In the first part, for instance, the first vignette occurs on the train. It is followed by a vignette at Lobscheid, where Sylvia Tietjens joins Father Consett and her mother. The third vignette of the first part takes place at the inn where Macmaster and Christopher are lodging. In the fourth vignette, Macmaster and Christopher have breakfast at the Duchemins' residence. The fifth vignette takes place at the Wannops' luncheon. In the final vignette of the first part, Christopher and Valentine take their long ride, which culminates in an accident. The other two parts of the book boast a similar structure.

Unlike The Good Soldier, which is narrated in the first person, Some Do Not… is narrated in the third person. Despite this important shift in narrative mode, the voice of the narrator is still strong throughout the novel.
In the words of Mizener,

Like [Henry] James, Ford never frees his characters' minds entirely from the control of the narrator; both when he is writing as the omniscient narrator and when he is following the movement of a character's consciousness, he writes as a third person. This third person is not – except very occasionally in Some Do Not… – objective and impersonal; he is vividly present to us as a personality, as an ironic, judging mind. It is Ford's desire to let us hear his voice that makes him summarize the thoughts of his characters in the third person rather than present them to us directly. By this means he is able to show us the events as a character perceives them and at the same time to show us the narrator's judgment of the characters' thoughts and feelings (Mizener 1985, p. 496).

As the tetralogy progresses, the narrative style begins to shift towards a focus on the nature of consciousness itself, rather than the more traditional mode of omniscient narration – thus putting it more in line with Ford's earlier work (i.e., The Good Soldier). In line with Freud's discovery of the unconscious, Ford began to structure the novels that follow according to the multiplicity of consciousness; in this respect, nothing is ever as simple as it seems, and the books in Parade's End grow more and more complicated as the narrative agency strains to incorporate an ever-evolving strand of consciousness. Of course, one can find both tendencies – that of the traditional omniscient narrator and that of the more modern "man of many thoughts" – in Some Do Not… Ford's narrative agency is thus flexible enough to include a breadth of modes, making Parade's End a truly Modernist work.

This dual process is illustrated quite literally at the beginning of Some Do Not…, in the train ride to Rye. Macmaster is working on a monograph on Rossetti, the esteemed Pre-Raphaelite artist of the 19th century. Christopher Tietjens does not care for Rossetti, although it is not revealed why. It is implied, however, that Christopher finds faults aplenty in the very discursive realm of which Rossetti is in many ways a relic – the very realm that Macmaster very much desires to become a part of. In the words of Macmaster:

Gabriel Charles Dante Rossetti, the subject of this little monograph, must be accorded the name of one who has profoundly influenced the outward aspects, the human contacts, and all those things that go to make up the life of our higher civilization as we live it to-day… (Ford 1963a, p. 20).

This represents everything that Tietjens secretly resents about the society in which he finds himself. It is thus that one of the major themes of Parade's End is established – the conflict between the traditional and the modern. It is perhaps no coincidence that Rossetti, in real life, studied under Ford's grandfather, Ford Madox Brown.

But it is not just thoughts about his monograph that occupy Macmaster in this first scene. Woven into his literary meditations are thoughts on the success that he will meet with, should the monograph be positively received by the establishment of the day. He also fantasizes about how the resulting fame will win him the attention of a good-natured, beautiful woman, with whom he will settle down. This thought triggers yet another – the fact that he has a tendency to chase after big-bosomed, lewd creatures of the female sex. He credits his friend Christopher with saving him from the more dire consequences of such affairs. Ironically, however, Christopher has gotten himself into quite a messy situation with a woman who has become pregnant by another man, yet is desperate to get married.

Indeed, the richly textured first part of Some Do Not… is emblematic of the ways in which the unconscious mind constantly shifts the thoughts of the conscious mind. While Macmaster's mind is primarily occupied with thoughts about his monograph, he cannot help but turn his thoughts to the relationship between Christopher and Sylvia. Sylvia's trapping of Christopher elicits a recollection of the scene that took place earlier that day, when Christopher received a letter stating that Sylvia had left her lover and intended to return to him.
It is then revealed that Sylvia is pregnant, yet Christopher must now face the crisis of whether or not to return to Sylvia when he is not sure if the child is his.

It is thus that the texture of Some Do Not… is built – on the interweaving of the conscious and unconscious minds of its principal characters. Despite his intellectual pretensions, it quickly becomes apparent in the course of the book that Macmaster is not very sophisticated. Thus, he frequently becomes a victim of his own unconscious thoughts. Again, the importance of what Ford termed the "under self" emerges in the course of this novel; the strongest characters all have a keen awareness of their under selves. Valentine, who is perhaps the most intelligent and sophisticated character besides Christopher


240School of Doctoral Studies (European Union) JournalJulyin Some Do Not…, is strongly aware of both herconscious self and her “under self” throughout:She heard herself saying, almost with asob, so that she was evidently in a state ofemotion:“Look here! I disapprove of this wholething…”At Miss Wanostrocht’s perturbed expressionshe said to herself:“What on earth am I saying all this for? You’dthink I was trying to cut loose from this school!Am I?” (Ford 1963a)Her under self is indeed attempting to breakfree. This is revealed by the fact that she hasalready sought out Christopher and promisedherself that she would commit to him for the restof her life. Her conscious mind, however, doesnot acknowledge the fact that she has made thisdecision until much later in the narrative. It isnot until she joins Christopher at Grays Inn thatshe is able to unite her conscious and unconsciousfeelings and fully acknowledge the decision shehas in fact made. This is when her conscious mindspeaks to her:This man… had once proposed love to herand then had gone away without a wordand… had never so much as sent her apicture-postcard! Gauche! Haughty! Wasthere any other word for him? There couldnot be. Then she ought to feel humiliated.But she did not… Joy radiated from hishomespuns when you walked beside him.It welled out; it enveloped you… Like thewarmth from an electric heater, only thatdid not make you want to cry and say yourprayers – the haughty oaf (Ford 1963a, p.149).Thus, in the universe of Parade’s End, theunconscious desires of Ford’s characters are quitefrequently at odds with their conscious desires.As a result, paradox once again becomes a chiefmotivating contributor to the texture of the novels;in the words of Mizener,The determining responses of Ford’scharacters almost always take place in thisway, below the level of consciousness,so that their conscious conception ofthemselves is always more or less at oddswith the intentions of their subconsciousselves. The result is a psychic conflict thatgoes on continually in Christopher andValentine and Sylvia and even, occasionally,in minor characters like Vincent Macmasterand General Campion, who, while he sitswriting his letter to the Secretary of Statefor War “with increasing satisfaction,”finds that “a mind that he was not usingsaid: ‘What the devil am I going to do withthat fellow?’ Or: ‘How the devil is thatgirl’s name to be kept out of this mess?’ ”(Mizener 1985, p. 498-499)In many ways, the conflict between thecharacters’ conscious and unconscious lives inParade’s End mirrors the same process that occursin The Good Soldier – albeit on a much moreextreme level. It should be observed, however,that Ford does not reduce his characters’ innerlives to a mere duality, wherein the conscious lifeis “good” and the unconscious life is “bad” or viceversa. The story for each of the characters in thetetralogy is a lot more complicated than that.That being said, Ford’s own moral convictionsas a Tory frequently figure in Parade’s End. Fordultimately feels that the conscious mind shoulddominate one’s personality, as the conscious selfis the social, responsible entity in each human.This resonates with Christopher’s belief that one’sfeelings must always be sacrificed for the good ofthe collective entity.It is when one’s unconscious desires comeinto conflict with what has been determined as goodand true by the rest of society that a psychologicalcrisis often occurs in the characters. 
Ford’simplication that a successful resolution to suchconflicts involves becoming a “Tory radical” viathe adaptation of an entire new mode of being thatconsists of social conventions that allow one to liveby the principles of society while simultaneouslysatisfying their under selves. While ChristopherSchool of Doctoral Studies (European Union) Journal - July, 2009 No. 1


2009 Analysis on Modernism and Literary Impressionism241clearly values the collective entity, he also assertsthat that entity must never be betrayed from above(Ford 1963, p. 149; Mizener 1985, p. 499).Besides the conflict between the characters’conscious selves and under selves, one of the mainthemes of Parade’s End is the torturous processthat Christopher Tietjens must go through inorder to completely free himself of the Edwardianconventions he has grown up with and thus adaptto the Modern era. The conventions of Edwardiansociety, Ford argues through the course of his novel,no longer embody the principles that it purports touphold. Anyone, such as Christopher early in thetetralogy, who attempts to live by such standardsis thus setting themselves up for doom and failure.Christopher eventually realizes that, in orderto survive, he will have to completely reject hissocially ordained role of the Edwardian YoungerSon. Eventually, Christopher discovers that hisunconscious self – his “under self” – decided toreject that role long before his conscious mindchooses to reject it. It is this process that we seeunfurl throughout the course of the tetralogy, andthat lends the series its textural momentum.The first sign that Ford allows us to see of thiscoming-into-being of Christopher is the fact thathe is clearly ambitious. At the same time, despitethe fact that Christopher is a member of the rulingclass and thus expected to lend money to anyonewho asks for it, he resents the fact that he is askedto lend two hundred fifty pounds to ColonelPartridge. But the ultimate act of Christopher’ssevering himself from the values of the past comesat the moment when he decides to give up hiscomfy governmental post in London in order togo into the antiques business in the country, whilesimultaneously leaving his wife Sylvia in order tolive with Valentine Wannop. This is the outcomeof a veritable crisis of consciousness in ChristopherTietjens, a crisis that adds a significant level oftexture to the four novels. The ongoing strugglewith Tietjens’s consciousness is expertly renderedin such energetic passages as follows:The beastly Huns! They stood betweenhim and Valentine Wannop. If they wouldgo home he could be sitting talking to herfor whole afternoons…That in effect was love. It struck himas astonishing. The word was so little inhis vocabulary… He had been the YoungerSon…Now: what the Hell was he? A sortof Hamlet of the Trenches? No, by Godhe was not… He was perfectly ready foraction. Ready to command a battalion. Hewas presumably a lover. They did thingslike commanding battalions. And worse!He ought to write her a letter. Whatin the world would she think of thisgentleman who had once made improperproposals to her; balked; said “So long!”or perhaps not even “So long!” And thenwalked off. With never a letter! Not evena picture postcard! For two years! A sortof Hamlet all right! Or swine!Well, then, he ought to write her aletter. He ought to say: “This is to tellyou that I propose to live with you assoon as this show is over. You will beprepared immediately on cessation ofactive hostilities to put yourself at mydisposal; Please. Signe, Xtopher Tietjens,Acting O.C. 9 th Glams.” A proper militarycommunication (Ford 1963a, p. 132-133).The astute reader will pick up on the fact thatChristopher’s consciousness is being rendered herenot as an internal monologue, but as an internaldialogue. 
This is illustrative of the conflict that brews within Christopher throughout – it is a spectacular dramatization of consciousness that few writers of the early 20th century would have been capable of. But Ford does not stop there – he adds several levels of irony to Christopher's internal discourse in the form of such phrases as "a proper military communication" with regard to an insistent love letter he is imaginarily composing. The utilization of a multi-layered internal dialogue gives us a good idea of how Christopher's mind actually works. It is a great dramatization of one man's internal struggles with himself, against the conventions of his time, using the language of those very conventions. Through such an internal


242School of Doctoral Studies (European Union) JournalJulydialogue, Christopher is able to come to an ultimaterealization of his veritable unconscious motives.Arguably, it was not until A Man Could StandUp – the third novel in the tetralogy – that Fordfinally mastered the narrative process of theinterior dialogue, bringing its full implications tobear in the story. Some Do Not…’s structure isultimately based on scenes, despite the narrativerigor to which Ford applies himself (as describedabove.) No More Parades, the second novel inthe series, takes place nearly completely in theminds of Sylvia and Christopher. Their states ofmind, however, are more often described, ratherthan represented. It is rare that Ford dramatizestheir consciousnesses in a complete way in NoMore Parades, and when he does, it is ultimatelyat moments of great dramatic tension, such asthat experienced by Sylvia in a moment of hatredtowards Christopher:There occurred to her irreverent minda sentence of one of the Duchess ofMarlborough’s letters to Queen Anne. Theduchess had visited the general duringone of his campaigns in Flanders. “MyLord,” she wrote, “did me the honour threetimes in his boots!” … The sort of thingshe would remember… She would – shewould – have tried it on the sergeant-major,just to see Tietjens’ face, for the sergeantmajorwould not have understood… Andwho cared if he did!… He was bibulouslyskirting round the same idea…But the tumult increased toan incredible volume… She screamedblasphemies that she was hardly aware ofknowing. She had to scream against thenoise; she was no more responsible for theblasphemy than if she had lost her identityunder an anaesthetic… She was one of thecrowd! (Ford 1963b)Christopher experiences a similar momentwhen the conscious side of his mind loses controlduring his interview with the General:Panic came over Tietjens. He knew itwould be his last panic of that interview.No brain could stand more. Fragmentsof scenes of fighting, voices, names, wentbefore his eyes and ears.[…]He exclaimed to himself: “Byheavens! Is this epilepsy?” He prayed:“Blessed saints, get me spared that!” Heexclaimed: “No, it isn’t! … I’ve completecontrol of my mind. My uppermost mind”(Ford 1963b).Thus, in No More Parades, we witness aradical shift from the scene-based structure of thefirst novel in the tetralogy to a narrative that isbuilt more on the inner processes of its characters’lives. At the same time, No More Parades boaststhe same three part structure as the previousnovel. As in Some Do Not…, each part consistsof a series of fairly straightforward vignettes thatcontribute to the overall momentum of the novel.Each vignette consists of action happening in thepresent, or else action that has been recollectedfrom the past. It is also similar structurally toSome Do Not… in that the story is, for the mostpart, told in a chronological manner. At the sametime, the dense layering of flashbacks, as thatwhich occurs in the course of The Good Soldier,also emerges as a device in Parade’s End; Fordaccomplishes this by spending much of his timeinside the mind of Christopher Tietjens. Memorytends to work in a spontaneous way, as the work ofMarcel Proust has also shown us. Thus, defyingthe chronological ordering of No More Parades,the third chapter of the first part takes place nearlyentirely inside Christopher’s mind and takes placeat the end of the evening when he hears that Sylviais in Rouen. 
This allows Christopher to put in order his memories of meeting and getting to know Sylvia in the style of a military report – again, the dense layering of multiple literary styles elicits a sense of irony, for which Ford has become known. At the same time, Ford's intention here is to defy the conventions of chronology by giving us an emotional history of Christopher's relations with Sylvia. Christopher's feelings surrounding Sylvia become intertwined with his emotions surrounding his job at the base. Together, these feelings conspire to make real changes in Christopher's mind and life,


2009 Analysis on Modernism and Literary Impressionism243a process that happens right before our eyes in thecourse of this exemplary narrative:The one thing that stood out sharply inTietjens’ mind… when at last, with a stiffglass of rum punch, his officer’s pocketbookcomplete with pencil… he sat in hisflea-bag with six army blankets over him– the one thing that stood out as sharplyas Staff tabs was that that ass Levin wasrather pathetic… On the frozen hillside,he… had grabbed at Tietjens’ elbow,while he brought out breathlessly puzzledsentences…There rested a singular mosaicof extraordinary, bright-coloured andmelodramatic statements, for Levin…brought out monstrosities of news aboutSylvia’s activities, without any sequence…And as Tietjens, seated on hishams, his knees up, pulled the softwooliness of his flea-bag under his chin…it seemed to him that this affair was likecoming back after two months and tryingto get the hang of battalion orders…So, on that black hillside… whatstruck out for Tietjens was that… themysterious “rows” to which in his fearLevin had been continually referring hadbeen successive letters from Sylvia to theharried general [Campion]… Tietjens sethimself coolly to recapitulate every aspectof his separation from his wife…The doctor’sbatman, from the other side of the hut,said:“Poor ----------- O NineMorgan!…” in a sing-song mockingvoice… They might talk till half-past three.But that was troublesome to agentleman seeking to recapture whatexactly were his relations with his wife.Before the doctor’s batman hadinterrupted him by speaking startlingly ofO Nine Morgan [a deceased man whosedeath haunts Christopher throughout thenovel], Tietjens had got as far as whatfollows with his recapitulation: The lady,Mrs. Tietjens…He took a sip from the glass ofrum and water… He had determined notto touch his grog. But his throat hadgone completely dry… Why should histhroat be dry?… And why was he in thisextraordinary state?… It was because theidea had suddenly occurred to him that hisparting from his wife set him free for hisgirl… The idea had till then never enteredhis head.He said tohimself: We must go methodically intothis!…“Better put it into writing,” hesaid.Well then. Heclutched at his pocket-book and wrote inlarge pencilled characters:“When I married MissSatterthwaite…”He exclaimed:“God, what a sweat I am in!”…It was no good going on writing.He was no writer, and this writing gaveno sort of psychological pointers… (Ford,quoted in Mizener 1985, p. 503-504)The second part of No More Parades takes placelargely in the mind of Sylvia. Such a derangementof chronology, as featured in the passage above,features even more prominently in this part of thebook, inferring that Sylvia’s mind is a lot morescattered than that of Christopher. As the presentday of Part I comes to a close, it is taken up againat the beginning of Part II – thus giving the novelits chronological level of structure. As Christopherrides off to see Sylvia (end of Part I), Sylvia seesChristopher arriving at the hotel (beginning ofPart II). When she sees Perowne, Sylvia’s mindgoes off on her memories of the affair she had withhim half a decade earlier in a provincial Frenchtown. This vignette comes to an end when Sylvia,amidst her reflections, suddenly realizes thatChristopher may in fact be involved in anotheraffair in the small French town they are currentlyin – Rouen. The chronological present time of thenovel then skips to that evening at dinner, whereSylvia joins Christopher and Sergeant-MajorD. Petri, A. 


244School of Doctoral Studies (European Union) JournalJulyCowley. Throughout the course of the dinner,Sylvia’s mind becomes preoccupied with eventsthat transpired the previous day, when she went fortea at Lady Sachse’s residence. Sylvia then goesinto an internal dialogue, similar to Christopher’sabove, with Father Consett in heaven.No More Parades then attains yet anotherdiscursive layer of texture when Sylvia allowsChristopher to read a series of letters that shehas withheld from him up to this point. Amongthese is a letter from Mark Tietjens that Sylviahas already read; as Christopher reads the letter,Sylvia simultaneously recites it in her head frommemory, a text that is then inserted as part of hernarration, allowing us – the readers – to read itas well. Mizener (1985, p. 505) summarizes theevent as follows:What we have here, then, is Sylvia’srecollection of Mark’s sardonic descriptionof Sylvia’s efforts to persuade Mark towithhold an income Christopher has in factrefused to take from Mark. Mark had feltChristopher should take the income but hadnot been able to make him; Sylvia thoughtChristopher had taken it and wanted Mark totake it back. As Sylvia recalls Mark’s letter,we hear Mark’s own words about Sylvia(and Valentine) and at the same time listento Sylvia’s response to them. “Hearing”the letter repeat itself in her memory, wecontemplate with her Christopher reading itto himself and – knowing well Christopher’sfeelings about the money, about Sylvia, andabout Valentine – we imagine his response toit. Thus three minds, all intimately familiarto us [by now], are brought before us, vividwith their own styles and voices, in responseto the same matter. Part II ends with Sylviaand Tietjens dancing to a phonograph: theywill go up to Sylvia’s bedroom shortly todiscuss their situation.The third part of No More Parades returns usto the mind of Christopher. As this section opens,Christopher is just waking up when Levin andGeneral Campion enter his tent. Christopher isquestioned as to what went on the previous nightthat he spent in Sylvia’s room. In such a fashion,the violent activities that night are revealed to us.It is at this point that Christopher is sent to the frontlines – an event that will surely result in his death.Christopher’s mind at this point comes closest to abreakdown than at any other point in the tetralogy.Part III comes to a close when Campion inspectsthe cook-house of Christopher.Unlike the more sprawling text of the firstnovel in the series (not to mention The GoodSoldier), Ford is successful in restraining hisnarrative to focus on the mindsets of his two majorcharacters, Christopher and Sylvia. In this, hesimultaneously narrows the scope of the novel interms of space and time. All of the action of thenovel takes place in two locations: the army baseand Rouen. In temporal terms, the first part of NoMore Parades takes place in the course of a singleevening and the following morning. The secondpart takes place on the same day in the hour or twopreceding dinnertime. The third part takes placethe following morning, within the course of a fewhours. Everything else that occurs outside of thischronological temporal structure exists in the formof remembrances in the minds of the two maincharacters – Christopher and Sylvia. They arethe narrative agency, the consciousnesses throughwhich all action in the novel is allowed to takeplace. 
Thus, it is the meaning of each occurrence in No More Parades that takes precedence over the event itself; such a tightly woven structure enables Ford to fully explore the significance behind each event, and thus give us a richer insight into the inner lives of the characters.

A Man Could Stand Up, the third novel in the series, is perhaps the fullest expression of the theme of one consciousness at war with itself – a conflict that mirrors the actual war happening in the external world of the novel. In the course of the novel, both Christopher and Valentine are finally able to "win" the battle with themselves, ultimately freeing themselves from the Edwardian conventions of the past and fully adapting themselves to the ways of the Modern era. Once again, we find the author taking his re-arrangements of events to a new extreme, departing even further


2009 Analysis on Modernism and Literary Impressionism245from the conventions of chronology than he did inhis previous novel.The narrative of A Man Could Stand Up beginson Armistice Day in 1918. In the beginning, thenarrative is concerned with the mind of ValentineWannop. As the novel begins, Valentine is on thephone with Edith Ethel Duchemin, who is nowmarried to Macmaster. She informs Edith thatChristopher is back in London and in need of help.At this point, Valentine comes to the conclusionthat the Edwardian values of her parents that shehas been ingrained with since childhood do notfunction well in a postwar universe. She mustdiscard these values in order to move forward –which means admitting to herself that she has beenhopelessly in love with Christopher ever since thetwo of them parted (at the end of the first novel.)At this point, she decides to seek out Christopherand commit to him.The second part of A Man Could Stand Upbegins several months prior and occupies itselfwith the mind of Christopher. It takes place duringthe great battle during the course of which theAllies were very nearly defeated by Ludendorff.Similar to Edith in the previous chapter, it is at thismoment that Christopher frees himself of his ToryYounger Son obligations by vowing to give up hisjob in the Imperial Department of Statistics, givingup his loyalty to Sylvia, and moving to the countryto settle down with Valentine Wannop and sellantiques. While this part bears no chronologicalrelation to the first part of the novel, it very nearlyparallels the first part in meaning. Again, Ford hasgone even further in extrapolating the meaning ofevents rather than the events themselves.In the third part of the novel, the minds ofValentine and Christopher are alternately exploredthrough the course of a meeting between the twoat Grays Inn. Without even intending to do so, thetwo gradually come to a precise understanding ofone another and discover that they hold the sameviews in common. Another telephone call occurs.It is at this point that Wannop attempts to preventthe union of Valentine and Christopher, and in theprocess, inadvertently reveals that Valentine’s soleintention in coming to Grays Inn was to becomehis mistress.Last Post, the final novel of the series, is inmany ways the most radical, as it takes place in thetime span of just a few hours and all of the actionoccurs at the cottage of Valentine and Christopher.The story is set sometime between the years of1926 and 1929, and Christopher is now in his earlyforties. Sylvia has decided to free Christopher andallow him to marry Valentine. Christopher arrivesat the bedside of Mark just in time to see himdie. The narrative of Last Post is almost entirelycomprised of interior monologues. We alternatelyexplore the mindscapes of Mark, Marie Leonie,Sylvia, and Valentine. Ford’s intention in focusingmainly on the minds of Mark and Marie Leonie isto provide a sort of parallel to the relationship ofChristopher and Valentine. This completely shiftsthe dichotomy that was set up in the first threebooks of the series; Christopher no longer figuresas the central character – that distinction nowgoes to Mark, with the female counterpart givento Marie Leonie, and Valentine and Sylvia findingthemselves cast aside. In many ways, Mark isnearly identical to Tietjens – he is perhaps Tietjensminus the martyr complex that occasionally marsthe actions of the latter. 
At the same time, Mark is unlike Christopher in that he refuses – up to the point of his death – to reject the Edwardian values of the old world in order to embrace those of Modernity. Thus, by elaborating on the inner workings of Mark's mind, Ford employs a clever narrative device whereby he actually expounds on aspects of Christopher's personality that the reader would not be able to gauge had he remained inside Christopher's mind. By the time one finishes this masterful work, the battle begun in the first novel has reached a poignant climax.

The interior monologue of Marie Leonie, on the other hand, imbues the novel with a decidedly Francophone air that is startlingly new in the tetralogy. While her tastes are very bourgeois, despite the fact that she previously worked as a chorus girl before her liaison with Mark, the real purpose that Marie Leonie serves is to add a level of criticism to the texture of the novel, as she is highly disapproving of both Christopher and Valentine. What makes Marie Leonie's position in Last Post awkward, however, is the fact that she is


246School of Doctoral Studies (European Union) JournalJulynot directly linked with any of the characters in theprevious books – her appearance is sudden, andthus, disruptive. This has been a point of contentionfor critics over the years, many of who feel thatMarie Leonie’s sudden appearance unsettles theotherwise perfect structure of Parade’s End.At the same time, Last Post allows Christopherto sort out many of the conflicts that emerged inthe previous novels. After taking a cottage withValentine in Fittleworth, he persuades Markand Marie Leonie to join them. In Last Post,Christopher is finally able to organize his antiquebusiness. This is not to say that Christopher is ableto fully resolve everything and that problems donot persist throughout the novel – for this wouldbe untrue to life, more in tune with the fairy talesthat Ford penned early on in his career. Rather,Last Post comes to an end with Christopher andValentine leading a frugal, obscure life that putsthem in tune with the Modern era. About the othercharacters, we know very little about their future.We must instead use our imaginations. Will Sylvia,for example, marry Campion? In such conditions,Ford implies, nothing is ever certain, and it is thisuncertainty that ultimately defines us and keeps usbound together as humans in the ongoing strugglefor survival.It should be noted that Ford later came todespise Last Post so intensely that the work is notincluded in the Bodley Head edition of Parade’sEnd. Many critics, including Graham Greene,conceded with Ford’s opinion that this work is notof very high quality. Ford insisted that Parade’sEnd should be instead viewed as a trilogy, andthat Last Post was little more than a mistake thatshould have never seen the light of day. Others,such as Mizener (1985, p. 508) believe that LastPost in fact serves as an appropriate ending toParade’s End. It does exactly what Ford said thathe set out to do – that is, to reveal what happensto Tietjens. Perhaps this is why Ford grew todislike it so much – he would have much ratherleft it a mystery in the end. The focus in Last Post,however, is not on the meaning of these events,but on the events themselves – a fact that unsettlesthe rigorous structure that Ford had set up in theprevious books of the tetralogy. After all, seen as atrilogy, the books in fact represent a closed circle:they start off with Tietjens and his loved ones asEdwardians and closes with them having evolvedinto Modern beings. At the same time, Last Postdoes not really serve as a conclusion to the series– it is more like a coda to the trilogy.Owing to the crafty texture one finds inParade’s End, it is in fact at times difficult forthe casual reader to figure out “what happens”(except, of course, in Last Post, when the eventsform the foreground of the novel.) The novel isinstead meant to dramatize the consciousnessof its characters while exploring the underlyingmeaning behind each event, rather than focusingon the event itself. The events that come tothe surface in each of the novels are those thatarise incidentally in the consciousnesses of thecharacters that the narrative focuses on at anygiven point. This means that each of the novelsin the tetralogy is rife with contradictions. Thesecontradictions, however, do not disturb our readingof the books, because we come to realize that suchcontradictions are true to life. The structures ofthe novels, in fact, are designed so as to reveal thevery structurelessness of life and thought. 
It is only when the artist applies order – whether that order is chronological or otherwise – to life that the work of art (or literature) emerges. This puts Ford's process in line with Nietzsche's conception of the Apollonian and Dionysian modes of literature. In Parade's End, Dionysus is evoked through the "wild" consciousnesses of Ford's main characters; it is only through Ford, the author, and his rendering of these consciousnesses, that the Apollonian mode of order is imposed on them.

To conclude, it seems clear that the novels of Parade's End form the most cohesive expression of what would come to be termed literary Impressionism. Like the movement of the same name in painting, Ford's Impressionism sought to understand the structure and meaning underlying the events that form the basis of each of his novels. Rather than merely describing the events themselves, as so many authors working in a more traditional vein have been content to do over the years, Parade's End takes the fluctuations and constant contradictions of consciousness itself


2009 Analysis on Modernism and Literary Impressionism247as its ultimate subject, and renders these tics andflows in a finely textured prose that has its parallelsin the paintings of Monet and Van Gogh, wherethe ultimate meaning counts for more than what isactually being depicted.ConclusionIn reading two of Ford Madox Ford’s greatestliterary creations, we come to find that, in additionto being a gifted storyteller, Ford was also a giftedpractitioner of the art and craft of writing. Hisfinely textured literary output gives rise to one ofthe finest examples of Impressionism that we havein literature.As we have seen in our analysis of The GoodSoldier and the novels comprising Parade’s End,Ford deploys Impressionistic narrative techniquesas a means of rendering human complexity in allits torturous glory. Impressionism, as a narrativedevice, reveals a lot not just about the individual,but about others, as well. In short, Ford’sImpressionism was deployed in order to givereaders an idea of what it was like to be human ata particular time and place. In this respect, Ford’snovels can be seen on the same pedestal as Joyce’sUlysses. In the words of Gasiorek:Impressionism responded to what Fordsaw as the inescapably subjective natureof human perception; it also provided atechnique for the mirroring of the socioculturalchanges associated with modernity.Impressionism, in short, had a historicalas well as a cognitive dimension: it soughtto portray the mental processes by whichknowledge is gained and to representthe difficulties inherent in any attempt tocomprehend contemporary life. These twoaims ratified a questioning, open-ended,modernist fictional mode. Impressionism’shostility to didacticism and moralism couldbe seen in the techniques it deployed toenter imaginatively into human dilemmasand to depict them in all their bewilderingcomplexity, leaving readers to make theirown interpretations and judgments. In thisdialogic conception of writing the novelemerged as the form par excellence for theexploration of psychological bafflement,social fragmentation, and historical change(Gasiorek 2004, p. 206).Thus, the ultimate subject of Ford’s novels isthe nature of perception itself – its fluctuations,contradictions, and conflicts with the surroundingculture and the norms that it imposes on theindividual. This opens up the philosophicalconflict between freedom and determinism – asubject that will have to be explored in fuller detailelsewhere.What distinguishes Ford’s Impressionism as aliterary device is the fact that he combined it witha critical consciousness that, together, gave rise toa uniquely Modernist conception of the universe.This is perhaps most evident in the Parade’s Endnovels, where consciousness itself becomes themain subject – as well as the narrative agency.The ultimate Modernist masterpiece is that whichcontains its own internal criticism; as we haveshown in our discussion of the Parade’s Endtetralogy, it is safe to say that these works, takentogether, would fit this definition.Ford’s novels also had a social motivation –that is, he wished to keep track of the historicalchanges that were occurring in England and therest of the world as he was busy writing. This iswhere Ford’s Impressionism crossed roads withhis skepticism. Cultural dislocation could only bedissected, for Ford, through his highly structured,textured works. 
The skepticism we find in Ford, as expressed through his characters, once again adds a critical dimension to his works – his novels thus become a vehicle for social criticism, as well as aesthetic criticism.

War became a catalyst for much of Ford's writing in both The Good Soldier and Parade's End, even when it is not explicitly referred to by name. But it is because of war that Ford's literary Impressionism makes sense – as Impressionism and Modernism were both rooted in fragmentation, they were meant to mirror the fragmentation that was occurring in the world in the early years of the 20th century. This impression of life artists


248School of Doctoral Studies (European Union) JournalJulyundoubtedly gleamed from the First World War– both the euphoric build-up to the war, and thetragic realization that the bloodiest war in historyhad taken heavy tolls on both culture and society.Ford undoubtedly viewed the war as being pivotalin the fight for the values of an earlier era – valuesthat he was nostalgically attached to throughout hislife, despite the fact that he ultimately recognizedthe need to let go of them. What we see, then,beyond the conflict of consciousness versus theunder self, is simultaneously a conflict betweenone view of the changes in the world that has themas a tragic decline and another that sees thesechanges as a slackening of oppressive bonds.By the time The Good Soldier was begun onFord Madox Ford’s fortieth birthday, it is highlylikely that the author was in a bewildered state bywhat was occurring around him. This bewildermentand confusion finds its voice in the idiosyncraticnarration style of that novel. It is more fullydeveloped in Parade’s End, particularly the thirdvolume, in which all “objective” perspective hasbeen lost and the characters are portrayed as puremindfulness, unchecked by any external measureof value.Just as Gertrude Stein managed to create aliterary form of Cubism in such works as TenderButtons and The Making of Americans, T.S.Eliot introduced us to the bewildering nature offragmentation in The Waste Land, and Joycepioneered the stream-of-consciousness style inUlysses, so Ford Madox Ford’s great contributionto Modernism was to be his pioneering literaryadaptation of Impressionism, which comes to theforefront in such novels as The Good Soldier andthe Parade’s End tetralogy. His influence can alsobe seen in the work of such late Modernists asSamuel Beckett, as well as contemporary writerssuch as Philip Roth and Saul Bellow, for whomthe exploration of consciousness becomes a keyfactor in the construction of narrative. Insofar asan “advanced” form of novel writing emerged inthe early years of the 20 th century, Ford MadoxFord is an often-overlooked contributor to thatvital body of literature that would change not onlythe course of art, but also the course of life in thecoming century.BibliographyArmstrong, Paul B. The Challenge of Bewilderment:Understanding and Representation in James,Conrad, and Ford. Ithaca, NY: Cornell<strong>University</strong> Press, 1987.Beckett, Samuel. Molloy. New York: Grove Press,1994.Bender, Todd K. Literary Impressionism in JeanRhys, Ford Madox Ford, Joseph Conrad, andCharlotte Bronte. New York: Garland, 1997.Brettell, Richard. Modern Art 1851-1929:Capitalism and Representation. Oxford: Oxford<strong>University</strong> Press, 1999.Cassell, Richard A. Critical Essays on Ford MadoxFord. Boston: Hall, 1987.Coetzee, J.M. In the Heart of the Country. NewYork: Vintage, 2004.Cousineau, Thomas J. Ritual Unbound: ReadingSacrifice in Modernist Fiction. Newark:<strong>University</strong> of Delaware Press, 2004.Dickens, Charles. Little Dorrit. 1855-1857.Retrieved 4 January 2008 from http://www.bibliomania.com/0/0/19/39/frameset.html.Eliot, T.S. Selected Poems. London: Faber andFaber, 1954.Ford, Ford Madox. The Good Soldier. The BodleyHead Ford Madox Ford, Vol. 1. London: TheBodley Head, 1962. 13-220.Ford, Ford Madox. Parade’s End. The BodleyHead Ford Madox Ford, Vol. 3 & 4. London:The Bodley Head, 1963a & b.School of Doctoral Studies (European Union) Journal - July, 2009 No. 1


Ford, Ford Madox. The March of Literature: From Confucius' Day to Our Own. Champaign, IL: Dalkey Archive, 1994.

Ford, Ford Madox. Joseph Conrad: A Personal Remembrance. New York: Octagon Books, 1965.

Freeman, Nick. "Not 'Accuracy' but 'Suggestiveness': Impressionism in The Soul of London." International Ford Madox Ford Studies, 4, 2005. 27-40.

Gasiorek, Andrzej. "'In the Mirror of the Arts': Ford's Modernism and the Reconstruction of Post-war Literary Culture." International Ford Madox Ford Studies, 3, 2004. 201-218.

Greene, Graham. "Introduction." The Bodley Head Ford Madox Ford, Vol. 1. London: The Bodley Head, 1962. 7-12.

Haslam, Sara. Fragmenting Modernism: Ford Madox Ford, the Novel, and the Great War. Manchester: Manchester University Press, 2002.

Hampson, Robert and Max Saunders, eds. Ford Madox Ford's Modernity. Amsterdam: Rodopi, 2003.

Hoffmann, Charles G. Ford Madox Ford. Boston: Twayne, 1990.

Joyce, James. Ulysses. New York: Vintage, 1990.

Mizener, Arthur. The Saddest Story: A Biography of Ford Madox Ford. New York: Carroll & Graf, 1985.

Radell, Karen Marguerite. Affirmation in a Moral Wasteland: A Comparison of Ford Madox Ford and Graham Greene. New York: Lang, 1987.

Rhys, Jean. After Leaving Mr. Mackenzie. New York: Norton, 1997.

Saunders, Max. "Ford, the City, Impressionism and Modernism." International Ford Madox Ford Studies, 4, 2005. 67-80.

Smiley, Jane. "The Odd Couples." The Guardian Unlimited, May 27, 2006. Retrieved 4 January 2007 from http://books.guardian.co.uk/review/story/0,,1783758,00.html.

Snitow, Ann Barr. Ford Madox Ford and the Voice of Uncertainty. Baton Rouge: Louisiana State University Press, 1984.

Stein, Gertrude. The Making of Americans. Normal, IL: Dalkey Archive, 1995.

Stein, Gertrude. Tender Buttons. New York: Dover Publications, 1997.

Thackeray, William Makepeace. Vanity Fair. 1917. Retrieved 4 January 2008 from http://www.bibliomania.com/0/0/51/94/frameset.html.

Judd, Alan. Ford Madox Ford. London: Collins, 1990.


A Study of Dyslexia among Primary School Students in Sarawak, Malaysia

Rosana Bin Awang Bolhasan
Education Department
Batu Lintang Teachers' Training Institute
College Road, 93200 Kuching
Sarawak, Malaysia
Tel: 082 243501
Fax: 082 252382
E-mail: drrosana58@yahoo.com

Abstract

The purpose of this study was to determine the degree of dyslexic reading problems among primary school students and the relationship between the degree of dyslexia and demographic factors. Eight demographic factors were chosen for the study: gender, age, class, parents' income, parents' education, parents' occupation, the student's position in the family, and the number of brothers and sisters in the family. The questionnaire, the "Dyslexia Screening Instrument", lists 32 characteristics of the dyslexic student. The sample comprised 250 dyslexic students from 7 primary schools in the Petra Jaya area of Sarawak who had been identified earlier in the pilot study. The analysis was done using SPSS for Windows 6.1. The results show that the dyslexic students concerned really do face reading problems, because 58-62% of them exhibit the 32 characteristics of dyslexia. However, the relationship between dyslexia and the demographic factors is weak, with correlations of only r = 0.0 – 0.12. This indicates that the dyslexic problem among these students has no meaningful correlation with the demographic factors.


Introduction

Dyslexia is a language disability affecting reading, writing, speaking and listening. It is a dysfunction or impairment in the use of words. Consequently, relations with others and performance in every subject in school can be affected by dyslexia. It is found around the world, principally among boys, and it exists in learners of slow, average and superior intelligence. The dyslexic child can come from any background or any income level, and dyslexia may occur in any child in a family regardless of the order in which he is born.

As in other countries, reading in Malaysia is one of the skills required in the study of language. It is an important skill in the hierarchy of the Malaysian education syllabus; it is essential and is considered one way to evaluate the success of students in their learning in schools. In the integrated curriculum for secondary schools, reading ability is of prime importance alongside the skills of arithmetic and writing. The ability to read is not only considered a basis for achieving success in other learning processes; within the education system, this main skill of reading has proven to be a factor of success from the primary to the higher institutional level.

Amir Awang (1995) noted that students' reading ability is one of the factors that contributes to their learning in other areas of knowledge. Research findings on reading still hold today: reading has a great bearing on achievement in various areas of acquiring knowledge, and students who are able to read usually have great potential in their studies.

According to Bond & Tinker (1987), reading ability is considered to be of paramount importance because it ties the bond of interaction that enables people to communicate with one another. Smith (1973) dwelled in depth on "psycholinguistic communication" in relation to the reading process from the view of psycholinguistics. In general there are three views, which have been supported by linguists and cognitive psychologists:

a. Only a small portion of the information required for understanding comes from the printed text.
b. Understanding must precede vocabulary.
c. Reading is not merely decoding written language into oral language.

Our nation's educational experts have put much effort into promoting and developing the skills of reading and interpreting, especially in the Malay Language subject. However, poor reading ability amongst students in primary and lower secondary schools still prevails. According to Mohd. Fadzil Haji Hassan (1998), the problem of students' disability in reading in schools has not been solved and so far cannot be overcome.

Students being unable to read, and disliking reading, is a topic of conversation that is often brought up in various communities. Abdul Halim Yusuf (1995) wrote:

Recently, questions about the increase in the number of students who cannot read and dislike reading, as reported in the media, reveal that many students do not have the reading skill.

Lately, the media has reported that students are not proficient in reading. Various authorities have voiced their concern about the phenomenon of students having low reading proficiency. The Malaysian Ministry of Education, parents and teachers have voiced their concern through the newspapers (Sofiah Hamid, 1999).
According to a report from the Director General of the Malaysian Education Ministry, there are about 6,000 Primary 6 students who cannot read properly.

In view of the importance of reading, which must necessarily be acquired as a basic skill, as well as the unsolved problems surrounding dyslexia, a thorough study needs to be carried out.

According to Kamarudin Hj. Hussin (1980), cases of dyslexia are increasing, both in primary schools and in lower secondary schools. To address this problem, the Ministry of Education has taken steps such as:


a. Conducting courses on reading for primary school teachers.
b. Introducing a special project on remedial and incentive studies in 1975 through the Centre of Advanced Curriculum.
c. Organizing and conducting workshops and seminars to address the problem.
d. Planning all projects in collaboration with the Faculty of Education, University Malaya.
e. Having many education offices carry out remedial programmes for the Malay Language subject.
f. Having the Teachers Training College for special education organize courses on methodology for teacher trainees.

Although the Education Ministry has taken various steps to tackle the problem of dyslexia, it has not been able to overcome it successfully. The curriculum division has found that Primary 6 students in the schools below achieved only the following rates of success:

a. The Chinese National Schools: 50.5%
b. The Tamil National Schools: 50.8%
(Hasmah Udin, 1998)

This phenomenon is a great setback. Because of it, the Ministry of Education introduced the new curriculum for primary schools, which focuses mainly on reading, writing and arithmetic (Kementerian Pendidikan Malaysia, 1998:1). It is hoped that by the end of the primary school level, students are able to develop their thinking processes.

Following the new primary school curriculum, the Cabinet Committee, under the new Education Policy, formulated a new curriculum for secondary schools to replace the previous one. This places stress on speaking, reading and writing proficiently, as well as on being creative in handling situations.

Musa Jalil (1989) found that 40% of the Primary 6 students in Pulau Pinang cannot read well. Of 15,728 students, 2,573 cannot read and 2,105 can read but without the ability to comprehend the text they read. His study concluded that 6,668 out of the 15,728 students cannot master the basic skill of reading and are almost illiterate.

This exposes the weaknesses of the new primary school curriculum: it has not encouraged students to strive harder and has proven to have no bearing at all on improving the situation. It means that such students spend fruitless sessions in their schools for the whole six years of primary education.

This situation has raised the level of anxiety amongst educationists, parents and society. Without good proficiency in reading, students will not be able to refer to and learn much from textbooks in order to acquire knowledge in other areas. It has been proven that students fail their examinations simply because they cannot understand or comprehend the questions. Because of this problem, an in-depth study needs to be carried out so that the real problem can be identified. The details of the problem can be looked at from the following points of view, and the research objectives are mainly focused on the problem of dyslexia amongst primary school students in Sarawak, Malaysia. Specifically, this study highlights:

a. The dyslexia characteristics most frequently exhibited by the dyslexic students.
b. The relationship between the degree of dyslexia faced by the students and their demographic factors.
c. The significant difference between male and female dyslexics.

All these aspects are the focus of this case study, with the hope that they can be addressed by providing a guide to remedy the situation with systematic, well-planned approaches.

Method

The data for this research was collected from the District of Petra Jaya, in the state of Sarawak, Malaysia. The district was selected because it


meets the requirements of the main focus of the study in terms of demographic features.

Furthermore, a pilot study was carried out before the actual research was done. The purpose of the pilot study was to verify the research subjects through interviews with the principals, the remedial teachers and the other teachers who teach them, as well as through observation of the students who had been identified. This study involved 250 dyslexic students, and this sample was confirmed through the pilot study at the early stage of the research.

Besides the student sample, the researcher also distributed a questionnaire to the 25 teachers in charge of each subject, and to the class teachers, to find out about their perception of the students. The dyslexic students ranged in age from 7 to 12 years and came from 7 primary schools.

The pilot study was carried out after interviews with the teachers who teach these students. The students who had been identified were then observed. In this observation, the student characteristics set out in the Dyslexia Screening Instrument were detected. In the pilot study, reports from teachers and the students' work were also included as criteria to ascertain whether a student is suffering from dyslexia.

First, the researcher distributed the questionnaire to the dyslexic students and asked them to write their names on it. Then, the researcher distributed the same questionnaire to the class teachers and asked them to evaluate the students. The teachers' perception is important because, according to Abang Ridzuan (1991), "the class teacher is one who knows well about the problems among the students, besides their attitude". In an indirect way, the teachers' perception can be used as a control for the students' opinions.

The "Dyslexia Screening Instrument" by Kathryn B. Choon et al. (1994) is a rating scale designed to describe the cluster of characteristics associated with dyslexia and to discriminate between students who display these characteristics and students who do not. This scale, for use in the school setting, is quick and non-intrusive, and provides education professionals with a starting point for identifying students at risk of dyslexia.

The Dyslexia Screening Instrument is designed to be used with students in grades 1 through 12 (ages 6 through 21). It can be used to screen an entire population of students, or only those students who exhibit reading, spelling, writing or language-processing difficulties. Rating and scoring should take 15 to 20 minutes per student.

A classroom teacher who has worked directly with the student for at least six weeks should complete the Rating Form. This results in a more accurate rating, because the teacher has observed the student over a lengthy period of time and can compare the student's performance to that of the student's classmates. For an elementary student, the preferred rater is the teacher who instructs the student in a variety of subjects.
For a middle school or high school student, the preferred rater is a language teacher, who generally has more opportunity to observe the behaviour that is indicative of dyslexia.

The professional in charge of gathering information about the student should explain to the rater that the purpose of the Rating Form is to obtain an accurate picture of current student performance related to specific characteristics. The professional should also make sure the rater understands how to complete the Rating Form and what each statement describes.

The rater should complete the student information on the front of the Rating Form. Not all of the information is required for scoring, but it may be useful for record-keeping purposes. Each characteristic on the form is rated on a five-point scale:

a. Never exhibits
b. Seldom exhibits
c. Sometimes exhibits
d. Often exhibits
e. Always exhibits

Besides that, a questionnaire is used to determine, in particular, the socio-economic status of the student's family. The items are:

a. Gender
b. Age
c. Level of study
d. Occupation of Parent/Guardian
e. Education of Parent/Guardian


f. The number of brothers and sisters
g. Family status

Questionnaire Management

The instrument used is a questionnaire, which was translated from the original "Dyslexia Screening Instrument". Both versions, English (original) and Malay (translated), are attached. The questionnaire is divided into two sets containing similar questions: the first set is for the students and the second set is for the teachers.

Evaluation

Evaluation is made according to the evaluation procedure, especially the explanation for every statement written in the questionnaire. Each questionnaire needs to be completed in only 15 to 20 minutes. The filling in of the demographic questionnaire and of the questionnaire for students who suffer from dyslexia must be carried out with help from the teacher and the researcher. A detailed explanation of what is required, followed by the meaning of each statement, must be given, and it is the student's preference to choose the scale point according to his or her own valuation. The teachers involved must have at least 6 weeks' experience of teaching the students; this helps the teacher make an observation and then compare the student's potential with that of his or her friends. It took two months to complete the questionnaires.

Data Analysis

The completed questionnaires were collected for analysis. Both the student's and the teacher's valuations are put together for every respondent. Every respondent is evaluated according to every item in the questionnaire, and both valuation scales are written down. One type of analysis form was modified to simplify the analysis. Here is an example of the simple procedure using the analysis form:

Table 1: Analysis Data Code

                  ITEM 1      ITEM 2      ITEM 3
                  A    B      A    B      A    B
Respondent 1      2    1      3    4      2    2
Respondent 2      1    1      2    3      3    4
Respondent 3      1    2      3    2      2    2

The data analysis covers 250 respondents, and the items run from 1 to 33. The teacher's valuations are in column (A) and the student's valuations are in column (B). The demographic questionnaire was also completed and analyzed. All data were processed for frequency, correlation and regression, followed by the t-test, using SPSS for Windows 6.1.

Frequency analysis is a prepared list of quantitative data; it is done by listing, in rank order from high to low, all the scores to be summarized, with tallies to indicate the number of subjects receiving each score. The scores in a distribution are grouped into intervals. To further the understanding and interpretation of the data, it is presented as a frequency polygon together with the frequency analysis. In this context, the behaviours frequently exhibited by the students who suffer from dyslexia can be detected easily, and the most frequent characteristics can also be recognized.

With correlation, the researcher seeks to determine whether a relationship exists between two or more variables. Comparing the performance of different groups is one way to study relationships. Sometimes such relationships are useful in prediction, but most often the eventual goal is to say something about causation (Jack R. Fraenkel and Norman E. Wallen, 1990:158).

Correlation coefficients can take on values from –1.00 to +1.00 inclusive; the greater the absolute value of the coefficient, the stronger the relationship. A correlation coefficient of zero indicates no relationship, or independence of the variables.
In the context of this study, correlation is used to look for relationships between the demographic factors and the characteristics shown by the dyslexic students.

Regression analysis allows the researcher to work out whether two variables are associated, that is, whether people who vary on one variable also


vary systematically on the other (D.A. de Vaus, 1995:179). The researcher can also determine how strongly these variables are associated, and can say how much impact each unit change in the independent variable has on the dependent variable.

In summary, the regression coefficient can be used to measure the amount of impact or change one variable produces in another. Regression coefficients are asymmetrical and will differ according to which variable is treated as independent. In this study, regression is used to identify the most important changes, that is, the main influences on dyslexia.

The t-test provides a method by which the means of samples can be compared when it is assumed that the samples have been randomly selected and the scores are obtained from normally distributed populations (Gajendra K. Verma and Kanka Mallick, 1999:205). Use of the test enables the researcher to say whether the difference obtained is quite likely to have occurred by chance, or whether it is significant. In the latter case, the difference may be due to some underlying cause which deserves further investigation.

This is case study research. After the pilot study, with interviews and observation done, sets of questionnaires were produced for the actual research. The whole procedure of the research is shown in Figure 1 below.

Figure 1: Research Graphic Procedure
Pilot study (interview) and pilot study (observation) → pilot analysis → constructing the instrument according to validity and reliability → applied in the study → product of the study
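As a minimal, hedged sketch of the regression and t-test steps, the snippet below uses synthetic stand-in data (the real analysis was done in SPSS for Windows 6.1). The variable names are illustrative; only the group sizes (145 male, 105 female) and the rough means and spreads echo figures reported later in the results.

```python
# Minimal sketch of the regression and t-test analyses, with synthetic stand-in data;
# the actual study ran these in SPSS for Windows 6.1 on the 250 respondents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical example: parents' years of education (independent variable) vs. a
# dyslexia characteristic score (dependent variable). Values are made up.
parent_education = rng.integers(0, 12, size=250)
characteristic_score = 3.5 - 0.02 * parent_education + rng.normal(0, 1, size=250)

# Simple linear regression: the slope estimates how much the dependent variable
# changes per unit change in the independent variable, as described in the text.
reg = stats.linregress(parent_education, characteristic_score)
print(f"slope = {reg.slope:.3f}, R^2 = {reg.rvalue**2:.3f}")

# Independent-samples t-test comparing male and female mean scores (gender factor).
male_scores = rng.normal(3.6, 0.9, size=145)
female_scores = rng.normal(3.4, 1.0, size=105)
t, p = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
```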


Result

This section presents the results of the research on dyslexia as experienced by the respondents at the primary school level. The question to be answered is: do demographic factors, such as the economic level of the parents, influence all dyslexia characteristics?

The main purpose of describing these variables was to provide some insight into the characteristics of the dyslexic students in the study. All statistical analyses, and other analyses of relationships between variables and of variance within variables, are also described.

From the 'Dyslexia Screening Instrument' questionnaire, the researcher decided to evaluate the responses with frequency analysis, correlation analysis, regression analysis and the t-test. The researcher used the test results from every individual in each analysis and finally differentiated them. Before this process, the researcher had to discuss the dyslexia characteristics openly with the students themselves without introducing any influencing factors.

Demographic Characteristics of Respondents

Age

Table 2 shows the distribution of respondents by age. The data indicate that 19.2 percent of the respondents were 7 years old, 68.8 percent were between 8 and 11, and 12 percent were 12 years of age. The mean age of the respondents was 9.24 years, with a range of 7 to 12 years.

Table 2: Distribution of Respondents by Age

Age          Number of Respondents    Per Cent
7 years              48                 19.2
8 years              45                 18.0
9 years              50                 20.0
10 years             42                 16.8
11 years             35                 14.0
12 years             30                 12.0
Total               250                100.0
X = 9.24   SD = 1.98

Gender

Table 3: Breakdown of Respondents by Gender

Gender       Number of Respondents    Per Cent
Male                145                 58.0
Female              105                 42.0
Total               250                100.0

Parent Income

The monthly income of all parents of the respondents is summarized in Table 4. The mean income of the parents was RM325.84. However, the range of their income varied very widely, from RM100.00 to RM1,280.00. It was generally observed by the researcher that most of the parents of the respondents had understated their actual income.

Table 4: Distribution of Parent Income

Level of income (RM)    Number of Respondents    Per Cent
RM150 and less                  70                 28.0
RM151 – RM300                   90                 36.0
RM301 – RM450                   52                 20.8
RM451 and above                 38                 15.2
Total                          250                100.0
X = RM325.84

Parents' Level of Education

Level of education refers to the actual number of years of formal schooling, both secular and religious. The mean number of years of education completed by all parents of respondents was 4.66 years, while the range was from 0 to 11 years. Table 5 provides the breakdown of years of education completed. The data indicate that only 28.4 percent of the parents had education beyond the elementary level (6 years) and 54.4 percent of the parents had between 1 and 6 years of formal schooling.


A further 17.2 percent had not received any formal education.

Table 5: Parents' Level of Education

Level of Education (years)    Number of Respondents    Per Cent
No education                          43                 17.2
1–3                                   51                 20.4
4–6                                   85                 34.0
7 or more                             71                 28.4
Total                                250                100.0
X = 4.66

Parents' Occupation

As shown in Table 6, of the 250 parents of respondents, about 8.8 percent did not have permanent jobs. 22.4 percent held permanent jobs in the government sector as teachers, clerks, police officers, nurses and office workers. About 34 percent worked in the private sector or were self-employed with their own small businesses, and about 34.8 percent of the parents of the respondents worked as labourers.

Table 6: Parents' Occupation

Occupation            Number of Respondents    Per Cent
No permanent job              22                  8.8
Private firm                  31                 12.4
Self-employed                 54                 21.6
Government service            56                 22.4
Labourer                      87                 34.8
Total                        250                100.0

Hierarchy in the Family

Table 7 shows the distribution of respondents by hierarchy (birth position) in the family. About 20 percent of the respondents are the oldest or the youngest children in their families. As the data reveal, 20 percent of the respondents are sixth in the family hierarchy, while the second, third, fourth and seventh positions show similar percentages, ranging from 10 to 14 percent. The mean for all respondents was 4.56.

Table 7: Hierarchy in the Family

Hierarchy in the family    Number of Respondents    Per Cent
First                              27                 10.8
Second                             35                 14.0
Third                              25                 10.0
Fourth                             28                 11.2
Fifth                              32                 12.8
Sixth                              50                 20.0
Seventh                            27                 10.8
Eighth                             26                 10.4
Total                             250                100.0
X = 4.56   SD = 0.17

Number of Siblings

The distribution of the number of siblings in the respondents' families is presented in Table 8. The data reveal that the dyslexic students most commonly come from families of 3 to 5 siblings, each of these sibling counts accounting for around 16 percent of the respondents.

Table 8: Number of Siblings in the Family

No. of Siblings    Number of Respondents    Per Cent
1                          15                  6.0
2                          25                 10.0
3                          42                 16.8
4                          40                 16.0
5                          42                 16.8
6                          40                 16.0
7                          21                  8.4
8                          25                 10.0
Total                     250                100.0
X = 4.59   SD = 0.7
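The means reported for these demographic tables can be reproduced directly from the grouped frequencies. As a small check (our own illustration, not part of the original analysis), the Table 2 age distribution gives back the reported mean age of 9.24 years:

```python
# Weighted mean from a grouped frequency table (Table 2: age distribution).
ages =   [7, 8, 9, 10, 11, 12]
counts = [48, 45, 50, 42, 35, 30]

n = sum(counts)                                    # 250 respondents
mean_age = sum(a * c for a, c in zip(ages, counts)) / n
print(f"n = {n}, mean age = {mean_age:.2f}")       # prints: n = 250, mean age = 9.24
```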


Statistical Analyses

Frequency Analysis

The 'Dyslexia Screening Instrument' questionnaire contains 32 items describing characteristics typically shown by students who suffer from dyslexia. The research indicates that the students very often show these 32 characteristics, although the level and status of dyslexia differ from student to student; this is the view from both sides, the teachers and the students themselves. Table 9 shows that 62% of the students often, and 58% always, show the 32 characteristics. For further discussion, the 8 characteristics with the highest percentages among the respondents are listed below.

Table 9: Frequencies / Percentages of the Teachers' and Students' Views of Characteristics Faced Often or Every Time

Item                                                                   View              Amount (Percentage)
1. Written vocabulary not consistent with oral vocabulary (Item 15)    Teachers' view    201 (80%)
                                                                       Students' view    179 (71%)
2. Not active orally (Item 26)                                         Teachers' view    201 (80%)
                                                                       Students' view    173 (69%)
3. Weak in arranging the important points in writing (Item 16)         Teachers' view    200 (79.4%)
                                                                       Students' view    188 (74.6%)
4. Remembers only for a short period (Item 8)                          Teachers' view    190 (75.4%)
                                                                       Students' view    158 (62.7%)
5. Less skill in spelling (Item 17)                                    Teachers' view    158 (62.7%)
                                                                       Students' view    176 (70%)
6. Understands in class but declines in tests (Item 9)                 Teachers' view    188 (75%)
                                                                       Students' view    177 (70%)
7. Inaccurate in oral reading (Item 10)                                Teachers' view    183 (73%)
                                                                       Students' view    167 (66.2%)
8. No planning                                                         Teachers' view    189 (75%)
                                                                       Students' view    170 (68%)

The data also show that dyslexic students very often respond inaccurately in oral reading; the percentage of 49.6% is evidence of this weakness in the students.

Besides that, the students were found to show weakness in their handwriting, which may be connected with the oral reading; the percentages for item 20 are 49.2% (teachers' view) and 47.6% (students' view).

The students also show a forgetful characteristic: they understand or know material for a short time and cannot remember it the next day. The figure of 57.5% from the students' view supports this characteristic. Besides that, the students are also weak in arranging words; this weakness can be detected from item 14, which shows the two views at 50.8% (teachers) and 11.6% (students).

The students also showed uncertainty in writing and in oral work, which causes them problems in both writing and oral skills, as explained in items 10 and 20. This instability is detected by item 15, which produces percentages of 61.9% and 57.9% from the teachers and students respectively. Item 17 shows that the students have less spelling skill than is expected at their level. This means that the students really do have a problem with spelling skills compared with normal students at the same level; the high percentages of 51.6% and 51.2% from the two views are evidence of this situation.

The students also showed weakness in, and slowness at, making predictions, which may originate from their other weaknesses. In item 27 this weakness is confirmed by the quite high percentages of the teachers' view (60.7%) and the students' view (57.5%). Delay in making predictions can cause difficulty in making plans. This problem can lead to less creativity and ability, and can cause problems in studying if no action is taken to solve it.
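The 'Amount' figures in Table 9 appear to pool the 'often exhibits' and 'always exhibits' responses; for item 15, for instance, the Appendix 2 counts reproduce the 201 and 179 reported above. A small sketch of that pooling (our own illustration, not part of the original analysis):

```python
# Pool the "often exhibits" and "always exhibits" counts for item 15
# (written vs. oral vocabulary), using the Appendix 2 frequencies; the totals
# reproduce the Table 9 amounts of 201 (teachers' view) and 179 (students' view).
item15 = {
    "Teachers' view": {"often": 156, "always": 45},
    "Students' view": {"often": 146, "always": 33},
}

for view, counts in item15.items():
    print(view, counts["often"] + counts["always"])
```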
Concerning item 8, regarding students who always forget, this characteristic is supported by item 30. In item 30, the students consistently show weakness in repeating an explanation that has been given to them. They are weak in


saying something that has been said to them. The percentages of the views from both sides on this matter are 57.5% and 56.3%.

It is quite possible that the students understand what they have been taught but always show weakness in tests. This is borne out by the fact that they easily forget and cannot repeat a fact or explanation that has been given to them, as stated in items 8 and 31. The drop in test performance is dominated by the percentages of 56.3% and 57.6% from both sides' views on item 9 of the questionnaire. The students also show agitation when carrying out activities or work, especially under pressure and time limits. This is shown by item 6, with percentages of 50.4% and 46.8%. This agitation may be the cause of their limited capability to plan their work properly. These students frequently cannot plan their work, as shown by the high percentages of 58.7% and 56% on item 7 of the questionnaire. The percentages of 47.6% and 51.2% show that the students are easily distracted, and this is a factor in their weakness in some aspects.

Item 16 shows that the students are weak in arranging essay content. The percentages of 51.6% (teachers' view) and 59.1% (students' view) confirm the presence of this weakness in the students themselves. This may be connected with item 7, which indicates that the students frequently cannot plan their work.

The correlation and dominance of items 8 and 30 were interpreted above. Item 19 is also connected: the students are identified as a group that needs repeated explanation, because they easily forget and are weak at repeating an explanation that has been given to them. On item 19, the views of 54.0% from the teachers and 51.6% from the students themselves support the strong statement that the students need repeated explanation, or a 'drilling' system.

It is clear that the students often or always show the 32 characteristics analyzed by the 'Dyslexia Screening Instrument'. The level and status of dyslexia are detected through the collected percentages of views. From the analysis above, it can be concluded that the students are in critical difficulty. In certain respects their problem is not serious, especially for items 3, 4, 5, 13, 28 and 31, for which the 'frequent' and 'every time' responses are lower than 50%. This shows that we do not agree that the students are:

o Disappointed very easily (item 3)
o Down to earth (item 4)
o Lowering their own status (item 5)
o Weak in the concept of direction (item 13)
o Misplacing / losing their personal things (item 28)
o Very quick in thinking (item 31)

Correlation Analysis

The correlation analysis shows a few clear relationships between the independent variables and the dyslexia characteristics, as established by the Pearson correlation. The analysis shows that there is only a weak relationship between the dyslexia characteristics and the age factor (r = 0.13; p = 0.041). This means that a relationship exists between the easily-disturbed characteristics in the students and their age factor.

Based on the results for question 32 and question 8, the change factor has been identified because of the weak relationship between the two. That is, from the questionnaires given to the students, their characteristics are only weakly related to the education and occupation of their parents or guardians.
Their characteristics are not influenced by a parent's or guardian's high level of education or high salary. The same holds for the factors of gender, age, number of siblings and position in the family: all of these factors have only weak relationships with the dyslexia characteristics.

Because such a high proportion of the correlations are at the weak level (below r = 0.4), we have to look at values around r = 0.12 to see the connection. From this result, the factors are divided into two groups: the parental factors that influence the students, and the students' own factors, as they emerge in the dyslexia characteristics in the questionnaire.


Table 10: Parents'/Guardians' Factors and Dyslexia Characteristics (Pearson r)

FACTOR        Disappointed easily    Feels down to earth    Noisy under pressure    Explanation to be repeated
Education            0.12                   0.08                   0.06                      0.04
Income               0.04                   0.10                   0.04                      0.04
Occupation           0.02                   0.02                   0.1                       0.07
Significance: p < 0.05

Based on Table 10 above, there are relationships between the education, income and occupation of the students' parents or guardians and the characteristics of being easily disappointed, feeling down to earth, being noisy under pressure and needing explanations to be repeated, but the relationships the students show are weak, at the level p < 0.05. For example, the parents' education factor correlates with being easily disappointed (r = 0.12), feeling down to earth (r = 0.08), being noisy under pressure (r = 0.06) and needing explanations repeated (r = 0.04) at the level p < 0.05.

The students' own factors that influence the dyslexia characteristics also show only weak significant relationships, as presented in Table 11.

Table 11: Students' Factors and Dyslexia Characteristics (Pearson r)

FACTOR                 Forgets easily    Inaccurate in oral reading    Oral vocabulary not matching written vocabulary
Number of siblings          0.02                   0.12                              0.04
Status in the family        0.02                   0.12                              0.1
Gender                      0.05                   0.06                              0.09
Age                         0.08                   0.02                              0.09
Primary level               0.08                   0.02                              0.09
Significance: p < 0.05

From Table 11 above, it can be seen that the students' own factors have little influence on the characteristics of forgetting easily, inaccurate oral reading and the mismatch between oral and written vocabulary. The number-of-siblings factor, for example, correlates with forgetting easily at only 0.02, with inaccurate oral reading at 0.12 and with the vocabulary mismatch at 0.04, at the level p < 0.05.

There is thus little influence from parents or guardians on the students' dyslexia characteristics. This is shown by the Pearson correlations, which do not go beyond 0.4 but lie only between about r = 0.0 and r = 0.12.

Regression Analysis

In the regression analysis that was carried out, the researcher wanted to identify the main demographic factors that influence dyslexia. For this purpose the researcher entered the occupation, income and education of the parents, in order to find the factor or factors that consistently influence the students. Also included were the five student demographic factors: age, gender, status in the family, number of siblings and the class the students were in while the research was in progress.

Table 12: The Demographic Factors Which Influence Dyslexia

Factor                  Mult R     R2      R       F      Significance
Occupation               .367     .135    .124    .894     p < .001
Income                   .307     .094    .083    .894     p < .001
Education                .285     .082    .070   1.298     p < .001
Age                      .221     .049    .041    .848     p < .01
Number of siblings       .158     .025    .021    .970     p < .05
Status in the family     .157     .024    .021    .848     p < .05
Primary level            .386     .149    .135    .889     p < .001

This research shows that both outside influences and the factors within the student have only a small effect on the students. The researcher found that the socio-economic status of the parents has little influence on their child's dyslexia characteristics. The results obtained show that the parents' occupation is one of the factors that can have an influence (R2 = .135). This means that the parents' occupation contributes about 13.5% to the dyslexia problem.
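The percentage contributions quoted here and below are simply the squared multiple correlations from Table 12 expressed as percentages of variance accounted for. The small check below is our own illustration, not part of the original SPSS output:

```python
# R-squared as "percentage contributed": square the multiple correlation
# from Table 12 and express it as a percentage of variance accounted for.
mult_r = {"Occupation": 0.367, "Income": 0.307, "Education": 0.285, "Primary level": 0.386}

for factor, r in mult_r.items():
    print(f"{factor}: R^2 = {r**2:.3f} ({r**2 * 100:.1f}% of variance)")
# Occupation 13.5%, Income 9.4%, Education 8.1%, Primary level 14.9% --
# close to the R2 column of Table 12 (within about 0.001).
```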


The majority of the students' parents work as labourers, and because of that the parents have no time to pay attention to their children. The parents' low education level contributes 8.2% to the problem, through their not helping their children with reading. The income factor contributes 9.4% (R2 = 0.094) to this problem: with low income, the parents cannot afford to buy books for their children to read.

The researcher believes that dyslexia is associated with the student's age. Students in secondary school have fewer problems compared with primary school students. This is supported by the high percentage of influence (R2 = .149) of school level on the students' dyslexia. It means that the dyslexia problem mostly occurs at the early stage, in this context the first school level (primary one, two and three), and occurs less at the second stage (primary four, five and six). This is because at the first stage the students' understanding is less developed than at the second stage, where they learn a great deal through the revision processes carried out by the school from time to time.

Comprehension here means the student's ability to understand something they read, such as the theme, plot and lessons of academic books or storybooks. With less understanding, the reader may take in less information, which results in a failure to collect information and to make use of it when it is needed.

The researcher also believes that dyslexia is connected with a lack of interest among students in what they read, whether because of poor concentration during the reading process or the influence of other matters, such as thinking about playing or even negative influences from classmates. As explained before, the researcher is also of the opinion that interest in what is being read is important in dyslexia. Interest in an engaging storybook can bring back curiosity and high concentration in trying to understand all the facts, which can improve understanding of what has been read. This can be contrasted with interest in academic books, which may be lower than interest in storybooks. Here parents need to motivate their children to learn things they are not interested in.

In the context of age and level, even though it is confirmed that the dyslexia characteristics are most prominent in the early stage of schooling, that is, at a young age, we get a different test result: the class factor shows a strong influence, whereas the age factor accounts for only R2 = .049, or 4.9%. This may be because, in terms of maturity, each student differs from the others. The age factor is not the main contributor to the dyslexia problem, and it is believed that the dyslexia problem will diminish as the students grow older.

Turning to the factors of status in the family and the number of siblings (2.5% and 2.4%), the researcher takes the view that the number of children in the family can influence dyslexia.
This is because when the number of children is too high, the parents' attention to each individual child will be less.

t-Test

The t-test was carried out to analyse whether there are any significant differences between the male students and the female students in the dyslexia problem.

Table 13: Results of the t-test for the Gender Factor

Item                  Gender    N     Mean    SD    t-value    Significance
Forgets their duty    Male      145   3.6     0.9     1.03      p = 0.01
                      Female    106   3.4     1.0
Noisy under pressure  Male      145   3.6     1.0     1.14      p = 0.056
                      Female    106   3.4     1.1
Lost basis in test    Male      145   3.7     0.8     0.39      p = 0.057
                      Female    106   3.6     1.0
Weak in writing       Male      145   3.7     0.8     0.39      p = 0.023
                      Female    106   3.6     0.8
Slow to predict       Male      145   3.8     0.8     0.68      p = 0.365
                      Female    106   3.6     1.0
Significance level p = 0.05

This t-test shows that there are 5 characteristics with different levels of significance between the male and female students. These


five characteristics lie between p = 0.008 and p = 0.057. Among the characteristics that differ significantly between the male and female students is slowness in making predictions, with t = 1.14 and p = 0.056. Significance is also found for the forgets-easily characteristic (t = 1.03, p = 0.01), for lost basis in test (t = 0.39, p = 0.057) and for weakness in writing (t = 0.39, p = 0.023).

The mean values also show the ratio of male to female students who suffer from the dyslexia problem. For all five characteristics the male students' mean is higher than the female students' mean. This means that more male students than female students face the problems of forgetting easily, noisiness, lost basis in tests, weakness in writing and slowness in making predictions. This may be because the male students are less serious, slower and put less concentration into what they do.

From the analyses and tests that have been carried out, the researcher can draw conclusions by dividing the results into two parts: the socio-economic status of the students' parents, and the students' own demographic factors.

The results of the analysis show that the socio-economic status factors, which include the education, occupation and income of the parents, influence the dyslexia characteristics. Parents with low education and low income can influence the students. The students should be given more encouragement in education, and helped with their homework, to address the dyslexia problem.

Regarding the students' own factors, the researcher found that two of the tests produced notable results: the age factor, primary level, number of siblings and status in the family also influence the students who suffer from dyslexia. This means that from primary one until primary six they will continue to show the same dyslexia characteristics. Besides that, the researcher found that the dyslexia characteristics differ between male and female students in forgetting easily, noisiness under pressure, lost basis in tests, weakness in writing and slowness in making predictions.

However, the incidence of dyslexia as reported varies a great deal from language to language, and there has been much speculation as to the reason for this variance. One assumption is that the answer may lie in the inherent linguistic features and scripts of the different languages. However, Macdonald Critchley et al. (1970:96) maintain that this is not credible and suggest that the low incidence of dyslexia might be due to genetic reasons. At any rate, at the present time this variance of dyslexia from language to language cannot be explained. What we do know is that dyslexia is likely to be found around the world (Janet W. Lerner, 1989:3).

There are sex differences in the incidence of dyslexia, just as there are in color blindness. The dyslexic child is referred to in most books as 'he' for a good reason: while both boys and girls can have dyslexia, boys are far more likely to have it. As with estimates of the incidence of dyslexia, there is a lack of consensus on the ratio of dyslexic males to dyslexic females. The estimates vary from study to study: 2-to-1 (John Money, 1962:31), 3.5-to-1 or 4-to-1 (T.R. Miles and Elaine Miles, 1983:2), 4-to-1 (Critchley, op. cit.:9), 5-to-1 (Sandhya Naidoo, 1972:25).
The ratio of dyslexic males to dyslexic females has been reported as nearly 6-to-1, and the method of enrollment acceptance and of pairing sex with like sex is likely to have contributed to this higher ratio.

The difference in the number of male dyslexics compared with females is well founded and accepted. The reason has not yet been established, although there are numerous hypotheses: a greater occurrence of cerebral trauma in males, the hemispheric functioning of the sexes, a mutant gene at a single locus whose expression is modified by sex, or a polygenic expression that has a lower threshold for males than for females.

Dyslexia has no favorites in regard to the wealthy or the poor, the cultured or the culturally disadvantaged. Any child from any background can have dyslexia, and the socio-economic backgrounds of dyslexics are varied.

Any child in the family can have dyslexia, whether the oldest, the youngest, or an in-between child. Research on birth order is sparse. In a study of five hundred dyslexics, 24.6 percent were the oldest in their families, while 36 percent were the youngest (Edith Klasen, 1950:60). There


is no difference in birth order in the incidence of dyslexia among brothers and sisters in families with dyslexics.

Discussion

Observation of the writing aspect shows that students with dyslexia have great difficulty in writing. On the whole they are very poor in writing, having poor spelling skills, poor oral and written vocabulary, and poor arrangement of the content of compositions. The reliability of this observation has been confirmed by several earlier researchers. Hinshelwood (1959), in Lerner (1985), stated that the reading inability of students with dyslexia is caused by the failure of the visual areas of the right hemisphere to correspond with the collaboration area (the angular gyrus) of the left hemisphere. The loss of this collaboration causes a gradual diminishing not only of the ability to read but also of the ability to write and to spell (agraphia). Orton (1980) also agreed, but from the standpoint of a functional approach; according to him the relationship between the two hemispheres is very important for writing and reading skills.

Besides that, students with dyslexia are usually poor in learning. They usually fall apart under time limits and pressure, often lose ground on achievement tests, have poor handwriting and inaccurate oral reading, and show delay in verbal response.

These criteria are supported by the research of Hinshelwood (1959), in Lerner (1985), with its structural account, and of Orton (1980) with its functional account. There have been other researchers, such as Slobin (1991), Menyuk, Wiig et al. (1973) and Leong (1974), who stressed the importance of the parts of the brain responsible for remembering the visual perception of letters and words. These researchers highlighted the importance of balanced coordination between the hemispheres of the brain, and demonstrated clearly that weakness of the left hemisphere causes alexia, agraphia, aphasia, apraxia, slips of the tongue and poor listening skills.

In the aspect of correlation, the dyslexic students' educational and socio-economic factors, number in the family, class and gender had a correlation significance of r – 0.4, p


b. ... should be used. In that case samples can be given continuous attention for a long period of time. Because the imbalance between written and oral vocabulary is the main criterion shown by the pupils, it is hoped that teaching can focus more on interaction that is very open between teacher and pupils. That will encourage pupils to talk more openly and will also help to build up their confidence in reading. However, writing can also be stressed in order to create balanced skills in both oral and written work.

It may be that sometime in the future CT scans will reveal more specific findings in regard to dyslexia (Martha B. Denckla, 1985).

The future is promising for the dyslexic, although progress toward fulfillment of the promise is slow. It will not be realized soon enough to help some already out there in the Dyslexia World of Frustration. But we are finding out more about the condition. We know that there is a genetic factor in the cause of dyslexia, and therefore we can be alert to its occurrence in some families and provide immediate help as needed. We now know how to diagnose dyslexia accurately; the problem lies in disseminating and using this knowledge. Unfortunately, some people seem to be unwilling to give up pet theories or special tests that they have devised (which also bring a certain profit). We know that, because of maturational factors, an accurate diagnosis of dyslexia ordinarily cannot be made at present before a child has reached about the age of eight.

We know that dyslexia can be alleviated, and that the most appropriate time to begin remediation for a child is at about the age of eight. It is far easier to remediate the condition at this early age than at an older age, when certain behaviors and attitudes have been internalized. Of course the severity of the dyslexic condition will affect the success and length of remediation, as will other factors.

More information is being distributed about dyslexia, and thus more people are aware of the condition and are becoming concerned about helping dyslexics. Society recognizes the need to provide the dyslexic with opportunities for remediation, opportunities to learn and to develop normally, and opportunities to become what he is capable of becoming.

The challenge of dyslexia must be met by all: parents, schools, researchers, teacher-training institutions, the federal government, society as a whole, and the dyslexic himself. For it will take all of us working together to accomplish what must be accomplished and what can be done. We must make this challenge the focus of our efforts.

Acknowledgement

The author wishes to express his sincere appreciation to the following individuals for their involvement in the preparation of this manuscript: the Director of the Malaysia Education Planning and Research Section and the State Education Director of Sarawak, Malaysia, for their sympathy and cooperation, which made this manuscript a success; and the Director of Batu Lintang Teachers' Training College, and the headmasters and teachers of the schools, for being helpful and concerned about the research.

References

1. Abang Ahmad Ridzuan (1991). Factors Relating to Achievement of High School Students in Kuching City, Malaysia. Unpublished PhD thesis, University of Hull, England.
2. Abdul Halim Yusuf (1995). "Sukatan Kurikulum Baru Sekolah Menengah". Kuala Lumpur: Pusat Perancangan Kurikulum.
3. Amir Awang (1995). "Trenda Baru dalam Bidang Pendidikan Bahasa". Kuala Lumpur: Utusan Publication and Distributor Sdn.
Bhd.


4. Bertil Hallgren (1950). Specific Dyslexia ("Congenital Word-Blindness"): A Clinical and Genetic Study. Copenhagen: Ejnar Munksgaard; trans. by Erica Odelberg, Stockholm: Esselte Aktiebolag.
5. Bond and Tinker (1987). Reading Difficulties: Their Diagnosis and Correction. New York: Appleton-Century-Crofts.
6. D.A. de Vaus (1991). Surveys in Social Research. London: Allen & Unwin.
7. Edith Klasen (1972). The Syndrome of Specific Dyslexia. Baltimore: University Park Press.
8. Gajendra K. Verma and Kanka Mallick (1999). Researching Education. London: Falmer Press.
9. Hinshelwood, J. (1959). Congenital Word-blindness. London: H.K. Lewis.
10. John Money (1962). "Dyslexia: A Postconference Review," in Reading Disability: Progress and Research Needs in Dyslexia, John Money, ed. Baltimore: Johns Hopkins.
11. Kamarudin Hj. Husin (1980). Pedagogi Bahasa. Petaling Jaya: Longman Malaysia Sdn. Bhd.
12. Kathryn B. Coon et al. (1994). Dyslexia Screening Instrument. United States of America: Harcourt Brace & Company.
13. Lerner, Janet W. (1985). Learning Disabilities. London: Open Book Publishing Ltd.
14. Macdonald Critchley (1970). The Dyslexic Child. Springfield, Ill.: Charles C. Thomas.
15. Mohd. Fadzil Hj. Hassan (1998). Isu-Isu Perancangan Bahasa. Kuala Lumpur: Dewan Bahasa dan Pustaka.
16. Musa Jalili (1989). Falsafah Pendidikan Negara. Kuala Lumpur: Pusat Perkembangan Kurikulum.
17. Sandhya Naidoo (1972). The Research Report of the ICAA Word Blind Centre for Dyslexic Children. New York: John Wiley.
18. Smith, D. Shelley (1986). Genetics and Correcting Reading Disabilities. London: Taylor and Francis.
19. Sofiah Hamid (1991). Pendidikan dalam Politik di Malaysia. Kuala Lumpur: Dewan Bahasa dan Pustaka.
20. T.R. Miles and Elaine Miles (1983). Help for Dyslexic Children. London: Methuen.


Appendix 1

DYSLEXIA SCREENING INSTRUMENT
Kathryn B. Coon, Melissa M. Waguespack, Mary Jo Polk

Respondent Name: ________________________________
Date of Birth:   ________________________________
Age:             ________________________________
Gender:          ________________________________
Standard:        ________________________________
School:          ________________________________
--------------------------------------------------------------------------------
Rater's Name:      ________________________________
Rater's Signature: ________________________________
Date of Rating:    ________________________________

RATER: To what extent does the student exhibit these characteristics?
1 – never exhibits   2 – seldom exhibits   3 – sometimes exhibits   4 – often exhibits   5 – always exhibits
(Please rate all statements)

__________ 1. Easily distracted
__________ 2. Forgets assignments and/or loses papers
__________ 3. Easily frustrated
__________ 4. Low self-esteem
__________ 5. Puts himself/herself down
__________ 6. Falls apart under time limits and pressure
__________ 7. Disorganized
__________ 8. Knows material one day; doesn't know it the next day
__________ 9. Knows class material but tests poorly
__________ 10. Oral reading inaccurate
__________ 11. Reverses letters and/or numbers
__________ 12. Losing ground on achievement tests
__________ 13. Poor directionality (up/down, left/right, over/under)
__________ 14. Poor sequencing skills
__________ 15. Vocabulary of written composition is NOT equal to student's spoken vocabulary
__________ 16. Poor organization of composition (events are not in chronological order or any disciplined order of organization)
__________ 17. Inadequate spelling for grade level
__________ 18. Trouble following a series of directions
__________ 19. Needs information repeated
__________ 20. Poor handwriting
__________ 21. Has trouble copying
__________ 22. Unable to tell time, days of the week, months of the year
__________ 23. Unable to keep place on page when reading
__________ 24. Cannot recall words, especially names
__________ 25. Production of smudged papers (erasures, mark-overs)
__________ 26. Delay in verbal response
__________ 27. Doesn't anticipate consequences of behavior
__________ 28. Misplaces and loses personal items
__________ 29. Can't stay on task
__________ 30. Can't repeat information
__________ 31. Has trouble with the alphabet (learning and/or saying)
__________ 32. Is very literal/concrete in thinking

Rater's Signature: ________________________________


Appendix 2

Frequency/Percentage of the Teachers'/Students' Views about Frequently and Every Time Facing the Dyslexia Problem

Columns: Never exhibits | Seldom exhibits | Sometimes exhibits | Often exhibits | Always exhibits

1. Easily distracted
   Teacher's: 18 (7.1%) | 22 (8.7%) | 58 (23.0%) | 120 (47.6%) | 34 (13.5%)
   Student's: 30 (11.9%) | 17 (6.7%) | 49 (19.4%) | 129 (51.2%) | 27 (10.7%)
2. Forgets assignments and/or loses papers
   Teacher's: 2 (0.8%) | 18 (7.1%) | 54 (21.4%) | 139 (55.2%) | 39 (15.5%)
   Student's: 9 (3.6%) | 21 (8.3%) | 72 (28.6%) | 120 (47.6%) | 30 (11.9%)
3. Easily frustrated
   Teacher's: 13 (5.2%) | 35 (13.9%) | 109 (43.3%) | 78 (31.0%) | 16 (6.3%)
   Student's: 12 (4.8%) | 20 (7.9%) | 113 (44.8%) | 89 (35.3%) | 18 (7.1%)
4. Low self-esteem
   Teacher's: 7 (2.8%) | 42 (16.7%) | 98 (38.9%) | 81 (32.1%) | 24 (9.4%)
   Student's: 17 (6.7%) | 30 (11.9%) | 104 (41.3%) | 83 (32.9%) | 18 (7.1%)
5. Puts himself/herself down
   Teacher's: 10 (4.0%) | 38 (15.1%) | 93 (36.9%) | 92 (36.5%) | 19 (7.5%)
   Student's: 7 (2.8%) | 25 (9.9%) | 100 (39.7%) | 102 (40.5%) | 18 (7.1%)
6. Falls apart under time limits and pressure
   Teacher's: 4 (1.6%) | 12 (4.8%) | 76 (30.2%) | 127 (50.4%) | 33 (13.1%)
   Student's: 13 (5.2%) | 27 (10.7%) | 62 (24.6%) | 118 (46.8%) | 32 (12.7%)
7. Disorganized
   Teacher's: 2 (0.8%) | 16 (6.3%) | 45 (17.9%) | 148 (58.7%) | 41 (16.3%)
   Student's: 8 (3.2%) | 17 (6.7%) | 57 (22.6%) | 141 (56.0%) | 29 (11.5%)
8. Knows material one day; doesn't know it the next day
   Teacher's: 3 (1.2%) | 18 (7.1%) | 40 (15.9%) | 145 (57.5%) | 46 (18.3%)
   Student's: 7 (2.8%) | 21 (8.3%) | 48 (19.0%) | 144 (57.1%) | 32 (12.7%)
9. Knows class material but tests poorly
   Teacher's: 4 (1.6%) | 18 (7.1%) | 41 (16.3%) | 142 (56.3%) | 46 (18.3%)
   Student's: 7 (2.8%) | 22 (8.7%) | 46 (18.3%) | 145 (57.5%) | 32 (12.7%)
10. Oral reading inaccurate
   Teacher's: 8 (3.2%) | 20 (7.9%) | 41 (16.3%) | 125 (49.6%) | 58 (23.0%)
   Student's: 10 (4.0%) | 23 (9.1%) | 51 (20.2%) | 115 (45.6%) | 52 (20.6%)
11. Reverses letters and/or numbers
   Teacher's: 14 (5.6%) | 29 (11.5%) | 85 (25.8%) | 117 (46.4%) | 27 (10.7%)
   Student's: 22 (8.7%) | 24 (9.5%) | 79 (31.3%) | 102 (40.5%) | 25 (9.9%)
12. Losing ground on achievement tests
   Teacher's: 6 (2.4%) | 19 (7.5%) | 41 (16.3%) | 149 (59.1%) | 37 (14.7%)
   Student's: 5 (2.0%) | 22 (8.7%) | 65 (26.8%) | 128 (50.8%) | 32 (12.7%)
13. Poor directionality (up/down, left/right, over/under)
   Teacher's: 22 (8.7%) | 40 (15.9%) | 78 (31.0%) | 86 (34.1%) | 25 (9.9%)
   Student's: 23 (9.1%) | 33 (13.1%) | 86 (34.1%) | 88 (34.9%) | 22 (8.7%)
14. Poor sequencing skills
   Teacher's: 7 (2.8%) | 20 (7.9%) | 48 (19.0%) | 128 (50.8%) | 49 (19.4%)
   Student's: 11 (4.4%) | 25 (9.9%) | 65 (25.8%) | 122 (48.4%) | 29 (11.5%)
15. Vocabulary of written composition is NOT equal to student's spoken vocabulary
   Teacher's: 4 (1.6%) | 11 (4.4%) | 36 (14.3%) | 156 (61.9%) | 45 (17.9%)
   Student's: 7 (2.8%) | 20 (7.9%) | 46 (18.3%) | 146 (57.9%) | 33 (13.1%)
16. Poor organization of composition (events are not in chronological order or any disciplined order of organization)
   Teacher's: 3 (1.2%) | 11 (4.4%) | 38 (15.1%) | 130 (51.6%) | 70 (27.8%)
   Student's: 12 (4.8%) | 10 (4.0%) | 42 (16.7%) | 149 (59.1%) | 39 (15.5%)
17. Inadequate spelling for grade level
   Teacher's: 7 (2.8%) | 12 (4.8%) | 43 (17.1%) | 130 (51.6%) | 60 (23.8%)
   Student's: 9 (3.6%) | 20 (7.9%) | 65 (25.8%) | 129 (51.2%) | 29 (11.5%)


Appendix 2 (continued)

Columns: Never exhibits | Seldom exhibits | Sometimes exhibits | Often exhibits | Always exhibits

18. Trouble following a series of directions
   Teacher's: 6 (2.4%) | 20 (7.9%) | 55 (21.8%) | 135 (53.6%) | 36 (14.3%)
   Student's: 4 (1.6%) | 22 (8.7%) | 80 (31.7%) | 120 (47.6%) | 26 (10.3%)
19. Needs information repeated
   Teacher's: 4 (1.6%) | 18 (7.1%) | 38 (15.1%) | 136 (54.0%) | 56 (22.2%)
   Student's: 13 (5.2%) | 13 (5.2%) | 59 (23.4%) | 130 (51.6%) | 37 (22.2%)
20. Poor handwriting
   Teacher's: 5 (2.0%) | 22 (8.7%) | 53 (21.0%) | 124 (49.2%) | 48 (19.0%)
   Student's: 8 (3.2%) | 21 (8.3%) | 71 (28.2%) | 120 (47.2%) | 32 (12.7%)
21. Has trouble copying
   Teacher's: 9 (3.6%) | 24 (9.5%) | 70 (27.8%) | 113 (44.8%) | 36 (14.3%)
   Student's: 9 (3.6%) | 20 (7.9%) | 94 (37.3%) | 111 (44.0%) | 18 (7.1%)
22. Unable to tell time, days of the week, months of the year
   Teacher's: 18 (7.1%) | 33 (13.1%) | 54 (21.4%) | 118 (46.8%) | 29 (11.5%)
   Student's: 14 (5.6%) | 23 (9.1%) | 65 (25.8%) | 126 (50.0%) | 24 (9.5%)
23. Unable to keep place on page when reading
   Teacher's: 15 (6.0%) | 35 (13.9%) | 62 (24.6%) | 110 (43.7%) | 30 (11.9%)
   Student's: 16 (6.3%) | 28 (11.1%) | 73 (29.0%) | 116 (46.0%) | 19 (7.5%)
24. Cannot recall words, especially names
   Teacher's: 8 (3.2%) | 27 (10.7%) | 65 (25.8%) | 131 (52.0%) | 21 (8.3%)
   Student's: 9 (3.6%) | 28 (11.1%) | 85 (33.7%) | 114 (45.2%) | 16 (6.3%)
25. Production of smudged papers (erasures, mark-overs)
   Teacher's: 5 (2.0%) | 33 (13.1%) | 85 (33.7%) | 108 (42.9%) | 21 (8.3%)
   Student's: 9 (3.6%) | 21 (8.3%) | 70 (27.8%) | 124 (43.2%) | 28 (11.1%)
26. Delay in verbal response
   Teacher's: 4 (1.6%) | 15 (6.0%) | 32 (12.7%) | 157 (62.3%) | 44 (17.5%)
   Student's: 10 (4.0%) | 32 (12.7%) | 45 (17.9%) | 144 (57.1%) | 21 (8.3%)
27. Doesn't anticipate consequences of behavior
   Teacher's: 1 (0.4%) | 8 (3.2%) | 42 (16.7%) | 153 (60.7%) | 48 (19.0%)
   Student's: 7 (2.0%) | 16 (6.3%) | 55 (21.8%) | 145 (57.5%) | 28 (11.1%)
28. Misplaces and loses personal items
   Teacher's: 5 (2.0%) | 45 (17.0%) | 112 (44.4%) | 66 (26.2%) | 23 (9.1%)
   Student's: 14 (5.6%) | 28 (11.1%) | 114 (45.2%) | 79 (31.3%) | 17 (6.7%)
29. Can't stay on task
   Teacher's: 5 (2.0%) | 13 (5.2%) | 55 (21.8%) | 137 (54.4%) | 42 (16.7%)
   Student's: 5 (2.0%) | 24 (9.5%) | 60 (23.8%) | 134 (56.7%) | 20 (7.9%)
30. Can't repeat information
   Teacher's: 4 (1.6%) | 15 (6.0%) | 47 (18.7%) | 145 (57.5%) | 41 (16.3%)
   Student's: 9 (3.6%) | 19 (7.5%) | 55 (21.8%) | 142 (56.3%) | 27 (10.7%)
31. Has trouble with the alphabet (learning and/or saying)
   Teacher's: 6 (2.4%) | 13 (5.2%) | 50 (19.8%) | 136 (54.0%) | 47 (18.7%)
   Student's: 10 (4.0%) | 23 (9.1%) | 69 (27.4%) | 129 (51.2%) | 21 (8.3%)
32. Is very literal/concrete in thinking
   Teacher's: 61 (24.2%) | 57 (22.6%) | 62 (24.6%) | 54 (21.4%) | 18 (7.1%)
   Student's: 73 (29.0%) | 32 (12.7%) | 62 (24.6%) | 62 (24.6%) | 23 (9.1%)


Understanding of Religion and the Role Played by Cultural Sociology in the Process

Sheila Vaugham (MSc)

Master of Science and Candidate for PhD in Sociology at the Isles Internationale Université (European Union)

Abstract

This paper assesses the contribution of cultural sociology to the understanding of religion from a critical perspective. First it examines and summarizes how the three classical theorists, Weber, Durkheim and Marx, see the nature of religion as a cultural form. It then looks at various recent theories of religion under globalization. The author then assesses the contribution that these cultural theories make to the understanding of real religious phenomena by examining whether they can help in an understanding of Islamic fundamentalism.


Giddens states that sociological approaches to religion are still strongly influenced by the ideas of the three "classical" sociological theorists, namely Marx, Weber and Durkheim (Giddens, 1996). It is for this reason that this essay starts with a critical evaluation of the beliefs that these three "classical" theorists held about religion as a cultural form.

Budd claims that Marx was not as interested in religion as he was in other social institutions, and that his views on it were not very different from those of other contemporary radicals (Budd, 1973). To Marx, religion was made by man and corresponded to nothing "super-empirical or internal". In fact, according to Budd, Marx believed religion was nothing more than a sign of the alienation of man in a society which oppressed and dehumanized him. Marx believed that, because a "liberated" man would feel no need of "metaphysical explanation" of life, religion would disappear in time (Budd, 1973). Budd leads us to believe that, for Marx, religion would only change with changes in the relations of production; it would therefore be defeated by the birth of a new social and economic order (Budd, 1973).

Giddens believes that Marx accepted the view that religion represents human self-alienation. He claims that, in Marx's view, religion in its traditional form will, and should, disappear. This does not mean that Marx dismissed religion; it simply means that he believed that the positive values embodied in religion could become guiding ideals for improving society (Giddens, 1996).

In fact, according to Giddens, Marx believed that religion is the "opium of the people" (Giddens, 1996, p.464). In other words, Marx believed that religion defers happiness and rewards to the afterlife, teaching the acceptance of existing conditions in this life. This means that attention is diverted away from inequalities and injustices in this life by the promise of a better future in the afterlife. In addition to this, Budd believes that Marx thought religion not only reflected the suffering of the working class, but offered a "fantasy escape" from that suffering (Budd, 1973).

Marx believed that religion had a very powerful ideological element, and that religious beliefs and values often provide justifications of inequalities of wealth and power (Giddens, 1996). Giddens further claims that religion often has ideological implications which serve to justify the interests of the ruling class at the expense of other, subordinate classes (Giddens, 1996).

Thus, one can see from the above that Marx viewed religion as just another commodity in the hands of the people who own the means of production (i.e. a tool in the hands of the powerful for repressing the masses). I believe that Marx's view of religion is too simplistic. This is because, while Marx's view of religion does allow one to see in which ways it preserves the status quo, it fails to show in which ways religion can be a catalyst for social change (an issue which will be dealt with later). In addition, Marx's belief that religion would disappear as society changes and develops is also flawed. This can be seen in the fact that, in the modern era of globalization, many religious groups are stronger now than they ever were in the past.
This will be demonstrated in the later discussion of Islamic fundamentalism.

Having stated the weaknesses of Marx's theory of religion, it is important to note that in many ways religion does function to control society. Budd demonstrates this when he says that

Many empirical studies of the operation of religious institutions in modern societies suggest that they do fulfill the role Marx claimed for them, that of adjusting people to a social order from which their material gains are so small that a political, even revolutionary, response would seem to be the only reasonable alternative (Budd, 1973, p.55).

In contrast to Marx, Durkheim spent a large part of his intellectual life studying religion. He concentrated his study of religion on small-scale, traditional societies (Giddens, 1996). Durkheim, unlike Marx, does not connect religion primarily with social inequalities or power, but with the "overall nature of the institutions of a society" (Giddens, 1996, p.465). He based his work on the study of the totemism practiced by


Australian aboriginal societies, which he believed represented religion in its most "elementary form" (Giddens, 1996). A totem, according to Giddens, was originally an animal or plant taken as having particular symbolic significance for a religious group. It is a sacred object, "regarded with veneration and surrounded by various ritual activities" (Giddens, 1996).

Giddens goes on to explain how Durkheim defined religion in terms of a distinction between the "sacred" and the "profane". He says that sacred objects and symbols are treated as set apart from the routine aspects of existence (i.e. the realm of the profane). As a sacred object, the totem is believed to have divine properties that separate it completely from other animals that might be hunted (Giddens, 1996). Giddens says that Durkheim takes this idea further and claims that totems are sacred because they represent the group itself, standing for all the values central to the group or community. This can be seen in the following quote:

The reverence that people feel for the totem actually derives from the respect they hold for central social values. In religion, the object of worship is actually society itself (Giddens, 1996).

Giddens believes that Durkheim holds that religions are never just a matter of belief, and that all religions involve regular ceremonial and ritual activities in which a group of believers meets (Giddens, 1996). He further holds that in these collective ceremonials a sense of group solidarity is affirmed and heightened, as they take the individual away from the concerns of this life (i.e. the "profane social order") into an elevated sphere in which they feel in contact with higher forces. Durkheim claims that these higher forces (such as totems or gods) are really the expression of the influence of the collectivity over the individual (Giddens, 1996).

Giddens goes on to say that, according to Durkheim, ceremony and ritual are essential to binding the members of groups together. This is why rituals and ceremony are found not only in regular situations of worship, but also at the various life crises at which major social transitions are experienced. He says that Durkheim believed that collective ceremonials reaffirm group solidarity at times when people are forced to adjust to major changes in their lives (Giddens, 1996).

According to Giddens, Durkheim believed that in all small traditional cultures every aspect of life is permeated by religion. He claims Durkheim believed that religious ceremonies both create new ideas and categories of thought and reaffirm existing values. This means that Durkheim believed that religion is not just a series of activities, but that it actually conditions the modes of thinking of individuals in traditional societies (Giddens, 1996).

Similarly to Marx, Durkheim believed that with the development of modern societies the influence of religion would wane.
This is because scientific thinking replaces religious explanation, and ceremonial and ritual activities come to occupy only a small part of an individual's life. However, in contrast to Marx, Durkheim believed that religion in an altered form would continue to exist, since even modern societies would depend on rituals to reaffirm their values and to help create cohesion (Giddens, 1996).

I believe that Durkheim's views on religion are much more useful than Marx's, because many of the trends he described can be seen in modern religious movements, as will be demonstrated later in this essay.

In contrast to Durkheim, Weber studied religions on a worldwide scale. In fact, Weber made detailed studies of what he called "world religions", the religions that have attained large numbers of believers and have affected the course of global history (Giddens, 1996).

Weber's writings on religion concentrate on the connection between religion and social change. His work contrasts with that of Marx, since Weber argues that religion is not only a conservative force but that religiously inspired movements can inspire social change (Giddens, 1997).

Giddens claims that Weber saw his study of the world religions as a single project, and that his study of the impact of Protestantism on the development of the West is part of a


comprehensive attempt to understand the influence of religion on social and economic life in varying cultures (Giddens, 1997). In his comparison of the different world religions, Weber points out that in traditional China and India there was at certain periods a significant development of commerce, manufacture and urbanism. However, these did not generate the radical patterns of social change involved in the rise of industrial capitalism in the West, because the religion found in these parts of the world inhibited such change. In contrast to this, Weber believed that Christianity has a "revolutionary aspect". This means that while the religions of the East cultivate in the believer an attitude of passivity towards the existing order, Christianity involves a constant struggle against sin. Hence, it can stimulate revolt against the existing order of the status quo (Giddens, 1996).

To me, Weber's work is the most comprehensive and convincing of the three "classical" theorists dealt with in this essay. This is because he looks at religion on a worldwide scale, comparing and contrasting different religions in different cultures and examining how these different religions affected the development of the cultures themselves. I must stress, however, that the fact that Weber did not ever actually experience these different cultures personally, but researched them using the work of others, means that his work might be flawed.

The above has briefly summarized the views of the three "classical" sociological theorists dealt with in this essay. It has briefly assessed what each of them believed, showing in which ways their theories are useful and in which ways they are not. This essay will now look at modern theorists' beliefs about religion and its relationship with the global world.

The first modern theorist to be discussed is Keith Roberts. Roberts says that during the last fifty years societies around the world have undergone radical, fundamental changes. He says that each society and nation has become less isolated and autonomous (Roberts, 1995). Roberts says that globalization involves several interdependent processes. First, it involves a structural interdependence of nation-states. Second, it involves a synthesis and cross-fertilization of cultures as societies borrow ideas, technologies, artistic concepts, mass media procedures, and definitions of human rights from one another. Third, it involves a change in socialization towards a broader inclusiveness of others as being "like us" and towards a sense of participation in the global culture. Finally, it involves an increase in individualism, accompanied by a decrease in traditional mechanisms of control (Roberts, 1995).

In terms of religion under globalization, Roberts claims that an interesting religious development of the global era is what he calls "the increase of global perspectives in the theologies and ethical systems of major world religions" (Roberts, 1995, p.399). He goes on to explain that many sociologists believe that the recent phenomenon of accommodation and tolerance of other religious traditions is the result of increased global interdependence.
In other words, as people are forced to trade and interact with people in other parts of the world who control important resources, it becomes apparent that a judgmental attitude implying the inferiority of others (or a self-righteous posture regarding one's own moral and spiritual values) is unacceptable (Roberts, 1995). Roberts states that religions can no longer ignore global interdependence and the fact that the world is becoming "a single sociocultural place", and that global cultural diversity compels churches to accept pluralism as the first step towards accommodation of diverse people and alternative social systems (Roberts, 1995).

Roberts goes on to explain how globalization involves three important processes that have an effect on religion: the secularization of social structures and cultures, the introduction of advanced and complex communication technologies, and changing demographic migration patterns (Roberts, 1995).

According to Roberts, secularization involves a rational, utilitarian and empirical approach to decision making, so that the world becomes "de-spiritualized". It also involves institutional differentiation and increased autonomy of various


He goes on to say that as societies modernize, religious institutions are often relegated to a less influential role in social life. This means that official religious pronouncements compete with the state to define what encompasses acceptable social behaviour (Roberts, 1995).

McGuire believes that secularization is not a foolproof concept. She claims it is imprecise and broad, and lends itself to "nonobjective discussions". She goes on to say that the concept of secularization is not very useful because it implies a unilinear historical development, i.e. the inevitable decline of religion and religiosity. Since the nature of social change is far more complex, and religion is thoroughly embedded in so many facets of society, a unilinear interpretation cannot portray the complex ways in which religion reciprocally influences society (McGuire, 1992). Further, McGuire believes that the data often used to substantiate secularization are based on narrow definitions of religion that do not incorporate new forms that religion might take in the future (McGuire, 1992).

Roberts explains that the improved communication technologies of the global era can have major impacts on religion. Technologies of various sorts usually originate in the West and push for increased competition and social change. Since religious groups often protect traditions, traditional religious groups either "feel besieged or are forced to adapt and to develop some form of process theology that embraces change" (Roberts, 1995, p404).

He explains how mass communication has been used effectively by Islamic leaders to communicate their message: through the availability of mass media, official monotheistic Islam has spread into areas where "folk versions" of Islam or non-Islamic deities had prevailed (Roberts, 1995). Roberts does, however, point out that the media can have a negative effect on traditional religions such as Islam, because the media introduce Western consumerist ideals. For this reason, even though orthodox leaders tend to see the benefits that the improved media network can bring, fundamentalists are more likely to attack the media and its intrusive consumerism (Roberts, 1995).

Roberts claims that religious groups often form multinational conglomerates. The Roman Catholic Church, for example, is an international body with members located all around the world, in many different nations. The political influence of religious groups is well summed up by Roberts: "the vitality of religion is frequently connected to its functions for ethnic identity or for the mobilization of political power of an ethnic or regional group" (Roberts, 1995, p406). In fact, Roberts goes on to say that the role religion plays in solidifying ethnic identity may be even more important for religions in pluralistic societies, where "supernatural sanction of one's own culture helps fend off anomie" (Roberts, 1995).

In addition, McGuire identifies other global trends that have had an impact on religion in modern societies. She claims that institutional differentiation has had a major effect on religion in modern times. This is the process by which "the various institutional spheres in society become separated from each other, with each institution performing specialized functions" (McGuire, 1992, p251). In simple, traditional societies, religious beliefs, values and practices directly influence behaviour in all other spheres of life; in complex societies, by contrast, each institutional sphere has gradually become differentiated from the others (McGuire, 1992). Indeed, in a highly differentiated social system the norms, values and practices of the religious sphere have only an indirect influence on other spheres such as education and business. McGuire claims that this is evidence of the declining influence of religion (McGuire, 1992).

McGuire also introduces the term legitimacy, which she defines as "the basis of authority of individuals or institutions, by which they can expect their pronouncements to be taken seriously" (McGuire, 1992, p253). She then says that legitimacy is not an inherent quality, but is based on the acceptance of an individual's claims by others (McGuire, 1992).
She states that the location of religion in contemporary society reflects changes in the basis of legitimacy within that society. Stable societies typically have stable sources of legitimacy, and in traditional societies religion legitimates authority through its "pervasive interrelationship" with all other aspects of society (McGuire, 1992). In contemporary society, McGuire believes, the differentiation process has resulted in competition and conflict among the various sources of legitimacy that are available (McGuire, 1992).

Historically, religions were monolithic: they established the worldview of their society and held the monopoly over the ultimate legitimization of individual and collective life (McGuire, 1992). McGuire introduces the term pluralism to refer to the modern global societal situation in which no single worldview holds a monopoly. She notes that pluralism can also be used in a narrower sense, to describe the political and societal tolerance of competing versions of truth, and that pluralism in both the limited and the broader sense has an effect on religion. Where worldviews coexist and compete as plausible alternatives to each other, the credibility of all is undermined. In other words, "the pluralistic situation relativizes the competing worldviews and deprives them of their taken-for-granted status" (McGuire, 1992, p255). This has a very important effect in society: in a pluralistic situation, worldviews and authoritative claims compete, and this results in the diffusion of sources of legitimacy among many agents in society, thus possibly breaking up the social order found in that society (McGuire, 1992).

McGuire develops this argument further and tells us that the instability of sources of legitimacy can encourage the formation of minority groups, who challenge the ruling social order "through a development of their own particular views and symbols" (McGuire, 1992). A good example of what McGuire is describing here could be the Islamic fundamentalists that will be discussed later in this essay.

McGuire goes on to show how a religious movement that arises in response to social change may itself help bring about social change. She believes that, depending on certain conditions that may be present or absent in different contexts, religion can either prevent social change or encourage it (McGuire, 1992).

She claims that there is something inherently conservative about religion, because religious beliefs consist of taken-for-granted truths that can build a strong force against new ways of thinking. Practices that people believe are handed down from God are very resistant to change. She also believes that religion is often used by the dominant classes to legitimize the status quo, as well as the specific roles and personal qualities found in that society (McGuire, 1992).

McGuire then demonstrates how religion, under the right circumstances, can promote social change. She attributes this to the effectiveness religion has in uniting people's beliefs with their actions, and to its promise of a better future with its vision of the way the world should be (McGuire, 1992). Another aspect of religion that can contribute to social change is the capacity of religious meanings to serve as symbols for change, since they often present an image of a utopian future. They therefore create a vision of what could be and suggest to believers that they all have a role to play in bringing about this change (McGuire, 1992).

She goes on to explain that social change often requires an effective leader who can express the desired change, motivate followers to action and direct their actions into some larger movement for change. McGuire believes that religion is often a major source of such leaders, largely because religious claims form a potent basis of authority (McGuire, 1992). She adds that the social unity religion brings to a group of people is empowering: the followers of a charismatic religious leader may experience a sense of power in their relationship with the leader and with fellow believers that will enable them to apply a new order to their social world (McGuire, 1992).

McGuire claims that certain qualities of some religious beliefs and practices make them more likely to effect change than other religions.
These religions emphasize a critical standard that poses an internal challenge to the existing social arrangements; their ethical standards likewise provide a basis for such internal challenge. Furthermore, the content of the norms and ethical standards found within a religion shapes the kind of social action that results (McGuire, 1992). She also claims that an individual's perception of their social situation is heavily influenced by the way their religion defines that reality. This means that believers are unlikely to try to change a situation that their religion has defined as one that humans are powerless to change; it is important to realize that the opposite also applies (McGuire, 1992). The final condition she gives is that religious modes of action may be the only channel people have for affecting their social world. In that case, economic dissatisfaction and political dissent may be expressed in religious terms and resolved through religious modes of action (McGuire, 1992).

The above has summarized the ways in which several modern theorists view religion as a cultural form in the post-modern, global era. One can see that the ideas of Marx, Weber and Durkheim have all been used and developed in these modern theories. The essay will now briefly examine how these theories can be used to understand fundamentalism in general, and Islamic fundamentalism in particular.

On the subject most pertinent to this essay, Roberts claims that religious fundamentalism in the twentieth century is not a process unique to a single religious tradition or society. This means that it cannot be interpreted simply in light of local or national events, nor in terms of the characteristics of any given religion. He believes that the cause of fundamentalism is global, and appears to be a reaction against global modernization and secularization (Roberts, 1995). He then goes on to list what he calls the "several features of the global process" which cause fundamentalism (Roberts, 1995).

Firstly, he believes that the increased pluralism and relativity of the increasingly diverse global world threaten the traditions that have always protected the absoluteness of norms and values. In other words, "the idea of alternative lifestyles being tolerable is offensive to those who are so certain that they alone know 'the truth'" (Roberts, 1995, p402). The second feature is what Roberts calls "fear of economic interdependence on other people". This, he claims, stimulates a desire to reassert autonomy and to proclaim one's uniqueness, which means that fundamentalism is, in part, an attempt to "reestablish isolation and independence from the world system" (Roberts, 1995). The third feature Roberts gives is that fundamentalism represents a reaction against the institutional differentiation that characterizes the global era: fundamentalist groups are convinced that the acceptance of globalization will involve the death of their traditional culture (Roberts, 1995). The fourth feature of fundamentalism Roberts identifies is that such movements are often counteractions against religious reforms. To explain what he means by this, he uses a case study of Iran as it went through modernization. He claims that official Islam in that country became liberated from folk versions and emphasized a universalistic monotheism; however, it also embraced aspects of secularization. As conservatives rejected modernization in favour of traditionalism, they also rejected the new interpretation of Islam, forming instead a new, "literalistic and uncompromising" interpretation of Islam (Roberts, 1995).

Pasha and Samatar claim that the rise of Islam in its fundamentalist form, as in its other forms, is a response to a double alienation. The first is a feeling of "being subjected to the logic of the modern world system, but not being of modernity". In other words, many Muslims believe that they are objects in the constitution of modernism and have no agency to affect their own social world (Pasha & Samatar, 19??). The second alienation is located in the domestic context, where both "civil associations and the state are in some form of decomposition". They claim that when people are confronted by relentless material deprivations, repressive and inept policies, and constant cultural dislocation, many of them feel compelled to take part in drastic rethinking, and thus in fundamentalism (Pasha & Samatar, 19??).
Beckford puts forward a set of ideas that have major implications for the way in which religion can be understood in different types of societies (i.e. post-industrial and post-modern society). These ideas are associated with various Marxist and quasi-Marxist scholars. Beckford claims that what these scholars have in common is the belief that, as a result of basic transformations in the structure of capitalist societies towards a new post-structural stage, "new social movements" have begun to attain major importance (Beckford, 1989). This means that, whereas the dominant conflicts of industrial society were, according to Marx, supposed to have arisen from the contradictions between capital and labour, it is now believed that capitalism has undergone such major transformations that the dominant social conflicts now concern the struggle over the quality of life and the social make-up of society in the future (Beckford, 1989). Beckford defines these new social movements as:

Forms of collective action and sentiment which are based on feelings of solidarity and which engage in conflict in order to break the meanings of the system of social relations in which they operate (Beckford, 1989, p144).

It is my belief that Islamic fundamentalism can be seen according to the above definition of new social movements.

Another contribution Beckford makes is to explain why Marxist and quasi-Marxist theories have shifted their concentration from economically based conflicts to culturally based ones. He says that this shift has occurred because in post-industrial societies the social system is no longer based solely on an economic base; instead it is run by means of informal systems designed to ensure that markets and resources are efficiently exploited (Beckford, 1989). This means that the old struggles for working-class participation in the system and for minimal standards of living have been replaced by new struggles over the meaning and value of the social process as a whole (Beckford, 1989). The second reason for this shift is that, whereas in the past class conflicts were fought in "symbolic media" and were therefore only partly cultural, the new movements are primarily and directly cultural (Beckford, 1989). It is my belief that this shift can be seen in action in the case of Islamic fundamentalism.

According to the Microsoft Encarta Encyclopedia, the Islamic world began to experience the increasing "pressure of the military and political power and technological advances of the modern West" as early as the 18th century (Microsoft Encarta Encyclopedia, 1996). It goes on to say that it became clear that, at the economic and technical level at least, the world of Islam had fallen behind. The reason this rocked the Islamic world so greatly was that the Western countries were mainly Christian. This meant that the Islamic belief that Islam is the final revelation, supplanting Christianity, was being called into question by Islam's failure to lead the world into the future (Microsoft Encarta Encyclopedia, 1996).

The religious crisis felt by many Muslims deepened in the 20th century with the creation of the state of Israel in an area regarded as "one of the heartlands" of Islam. This crisis prompted two responses from the Muslim world. The first is the argument that Islam needs to be modernized and reformed; the second is to revert to the old, traditional Islamic way of life (Microsoft Encarta, 1996).

Those who take the second position believe that the crisis faced by Islam is a result of the "willingness of many Muslims to follow the false ideas and values of the modern secular West", and that what is needed is a reassertion of traditional values. They further claim that the crisis of Islam is the result of the corruption of Muslim governments and the growth of secularization and Western influence in the Muslim world (Microsoft Encarta, 1996). Often, those who argue in this way believe in the use of violence in the cause of overthrowing unjust and corrupt governments, and it is this approach that is referred to as Islamic fundamentalism (Microsoft Encarta, 1996).
In terms of the major case study used in this essay (i.e. the rise of Islamic fundamentalism in Algeria), another important point needs to be made. The Microsoft Encarta Encyclopedia claims that the FIS, the Algerian political party that will be referred to later, emerged with the objective of installing what it sees as a "proper Islamic government", running a state based on Islamic law (Microsoft Encarta, 1996). It can therefore be described as an Islamic fundamentalist party. This essay will now examine the rise of Islamic fundamentalism in Algeria.

Arjomand begins his article by identifying the processes of social change which he believes are likely to strengthen disciplined religiosity and, under the right conditions, to give rise to movements for "orthodox reform and renewal of Islam" (Arjomand, 1986). He identifies five such processes. The first he calls "integration into the international system" (such as Western colonialism and the advent of Christian missionary activities). The second he refers to as "the development of transport, communication and the mass media". The third is urbanization, the fourth is the spread of literacy and education, and the last is the incorporation of the masses into political society (Arjomand, 1986).

He goes on to say that with the advent of books and newspapers, a public sphere was created in which the literate members of society could participate. This sphere was then extended to include some of the semi-literate through the "institution of public debates and lectures" (Arjomand, 1986, p88). It was this arrival of the media of communication that gave rise to certain religious movements (Arjomand, 1986). This can be seen in the sections above on the rise of the media and its effects on religion. Where Arjomand takes this further is his belief that, in the case study used in this essay (i.e. Islamic fundamentalism in Algeria), one can see the effect that "channels of physical communication", such as roads, had on the spread and movement of Islamic doctrine. Arjomand claims that the absence of good roads limited the expansion of orthodox reformism. He demonstrates how, when transport was improved in West Africa after the Second World War, the spread of Islamic doctrine along the newly improved roads increased dramatically. He even claims that cheaper and safer transport has increased the number of pilgrims to Mecca, which in turn contributes to the spread of Islam in West Africa (Arjomand, 1986).

In terms of urbanization, Arjomand claims that cities throughout history have been the center of Islamic (and Jewish and Christian) piety. He also claims that social dislocation, such as migration from villages to towns, is accompanied by increased religious practice (Arjomand, 1986). He then gives an impressive array of statistics showing that the rapid urbanization of Iran and Turkey has been accompanied by an increase in the level and intensity of religious activities, as well as by a multifaceted revival of Islam (Arjomand, 1986).

Arjomand shows that, coupled with rapid urbanization, comes another important condition for the development of Islamic fundamentalism: the spread of literacy. He claims that the increase in literacy seems to increase interest in religion, and that this effect seems to run independently of urbanization. He demonstrates how the spread of literacy coincides with the expansion of higher education, and that a close connection between higher education and Islamic activism also seems to exist (Arjomand, 1986).

The above has demonstrated the contribution these sociologists have made to the theoretical understanding of fundamentalism in general. The essay will now demonstrate how many of the phenomena and processes mentioned above can be found in the real-life case study of the rise of Islamic fundamentalism in Algeria.

Spencer claims that Islamic fundamentalism, and the popularity of the Islamic political bodies found in Algeria, arose because of several key concepts that have already been discussed. She says that since its independence from France in 1962, the Algerian state had been associated with a "secularized, modernizing and socialist path of development" (Spencer, 1996). This shows how the Algerian state was affected by the processes of secularization and modernization discussed above, which, as we have seen, can lead members of a society to search for meaning in life and possibly to hold on tenaciously to religion.
Spencer goes on to say that the urban riots of October 1988, which are often thought of as an Islamic demonstration, were actually not Islamic in nature. They were a protest against unemployment, economic hardship and the "rigors of reforms that affected the young and poor more than they affected the vested interests at the center of the one-party state" (Spencer, 1996, p93). It is very interesting to note that this demonstrates a point put forward by McGuire, who states that religion can often cause social change if it is the only channel open to people who are unhappy with other aspects of social life.

The reason this phenomenon is relevant to the essay is that it effectively demonstrates that what at first glance looked like a religious riot was actually a protest against conditions in society that the masses were not happy with, but had no avenue other than religion through which to express their views. Spencer alludes to this when she says that:

More than any other Islamist groups, the FIS not only galvanized the opposition of the young and the unemployed to the existing single-party government, but also revived forms of Islamic expression rooted in earlier periods of Algerian history (Spencer, 1996, p93).

The situation in Algeria again demonstrates a point raised earlier: a major reason why religion can be a catalyst for social change is its unifying effect on people. This process can be seen in Spencer's article when she claims that the FIS (Front Islamique du Salut) "predominated over other Islamist parties through the ability of its leaders to draw several trends of thought and activism together under a single, mobilizing, populist umbrella" (Spencer, 1996, p94). She reinforces this point by noting that both moderates and radicals (i.e. different types of people who would have had no other avenue in which to work together) supported the FIS, along with others who were "committed simply to undermining the governmental hegemony of the single party state" (Spencer, 1996, p94).

Spencer demonstrates how Islam has always been used by people in Algeria to contest the legitimacy of the given status quo throughout Algeria's political history. She claims that different political and social groups have fought different political battles through the common medium of Islam, and have drawn their strength from the appeals and values of Islam (Spencer, 1996).

Spencer's article also shows the effect that communication networks and the urbanization process had on the rise of Islamism in Algeria. This is demonstrated when she shows how Islamic leaders used the mosques and the "maquis" to protest against "the degradations of urban life through the propagation of a new vision of Islamist social and political morality" (Spencer, 1996, p97). Zoubir adds to this by claiming that the mosques provided a political base for the fundamentalist movements (Zoubir, 1998). Zoubir also shows that the increase of religious programs on television and radio ensured that the population observed and understood the precepts of Islam, and Islamic thought became entrenched in the society (Zoubir, 1998).

Spencer further demonstrates how the FIS used Islam as a means to create social change when she says that the leaders of the FIS not only presented radical alternatives directed at the mass of the population, but also had the means to propagate their message swiftly. This was due to a campaign of privately and publicly sponsored mosque building, and the building of Islamic universities. This meant that people in Algeria had access to these centers of learning, thus expanding the domain of higher education (Spencer, 1996). This move to higher education, as demonstrated above, opens the door of opportunity for large groups of people to join and participate in fundamentalism.

According to Zoubir, the rise of radical fundamentalism in Algeria is difficult to explain, since a close relationship between religion and politics has always existed in Algeria. Islam is not only a religion; it is also the basis of Algerian identity and culture (Zoubir, 1998).
He goes on to demonstrate how the state in Algeria used Islam to legitimize its rule, and he does this through the Durkheimian view of symbols and meanings: "the state resorted to Islamic symbols to establish and reproduce its legitimacy" (Zoubir, 1998, p123).

Zoubir goes on to show how Islam became a cornerstone for Algerians in resisting and protesting French colonial rule. He claims that the brutality with which the colonial authorities expropriated the main local religious institutions left a mark on the Algerian population. He continues that the coercion to which the French resorted in order to establish their "cultural hegemony", as well as their contempt for the native population and its values, explains why Algerians clung to Islam as their distinct cultural identity (Zoubir, 1998).

Zoubir then sets out what he believes was the cause of Islamic fundamentalism in Algeria. He attributes its rise to what Weber termed the "disenchantment of the world", which is caused by modern science (Zoubir, 1998). He defines a fundamentalist as "someone who has become conscious of the acute inequalities, but who is also convinced that the current strategies of development will not succeed in alleviating them" (Zoubir, 1998, p131).

He goes on to show how fundamentalist organizations fulfilled the needs of the masses by providing structures that adequately replaced the "old, communitarian" ones. By providing such structures, the fundamentalist groups enabled the alienated individual to regain a global image of the self within a community of people with similar beliefs, i.e. Islam (Zoubir, 1998).

Zoubir believes that Algeria at this time was dominated by a population of youths who felt betrayed by their government, which was not capable of providing adequate services such as education, housing and employment. As a result, the youth rejected all the "founding myths and symbols" of the Algerian nation (Zoubir, 1998). This, coupled with the modernization process occurring in Algeria, meant that the state lost its legitimacy and credibility in the eyes of the people (Zoubir, 1998).

One can conclude from the above that cultural sociology contributes greatly to the understanding of religion. This has been demonstrated by looking at how several cultural sociological theories can be used to understand the rise of Islamic fundamentalism in Algeria. It is important to realize that the discussion of the rise of Islamic fundamentalism in Algeria has been simplified in this essay due to space constraints. It is also important to realize that the work of the classical social theorists on religion had a heavy influence on many of the ideas of the modern social theorists, as can also be seen in the above essay.

References

Arjomand, S.A. (1986). Social Change and Movements of Revitalization in Contemporary Islam, in Beckford, J.A. (1986), New Religious Movements and Rapid Social Change. Sage Publications, London.

Beckford, J.A. (1989). Religion and Advanced Industrial Society, Chapter 6. Unwin Hyman Ltd, London.

Budd, S. (1973). Sociologists and Religion, Chapter 3. Camelot Press Ltd, London and Southampton.

Giddens, A. (1996). Sociology (2nd edition), Chapter 14, Religion. Polity Press, Cambridge, UK.

McGuire, M.B. (1992). Religion: the Social Context (3rd edition), Chapter 7, The Impact of Religion on Social Change. Wadsworth Publishing Company, Belmont, California.

McGuire, M.B. (1992). Religion: the Social Context (3rd edition), Chapter 8, Religion in the Modern World. Wadsworth Publishing Company, Belmont, California.
Microsoft Encarta 96 Encyclopedia (1993-1995). "Islam". Microsoft Corporation; Funk & Wagnalls Corporation.

Pasha, M.K. & Samatar, A.I. The Resurgence of Islam, in Mittelman, J.H. (Ed.) (19??), Globalization: Critical Reflections, Chapter 9.

Roberts, K.A. (1995). Religion in Sociological Perspective (3rd edition), Chapter 17, Religion and Globalization. Wadsworth Publishing Company, USA.

Spencer, C. (1996). The Roots and Future of Islam in Algeria, in Sidahmed, C.A. & Etheshami, A. (Eds.) (1996), Islamic Fundamentalism. Westview Press, USA.

Zoubir, Y.H. (1998). State, Civil Society and the Question of Radical Fundamentalism, in Ahnmed, S.M. (1998), Islamic Fundamentalism: Myth and Realities. Thaco Press, UK.


Policy of Preemption or the Bush Doctrine

Ana Dresner (MPhil)

Master of Philosophy and PhD candidate in Anthropology at the Isles Internationale Université (European Union)

Abstract

The paper explains the principles of the Bush Doctrine and the policy of preemption that was essentially the response of the Bush administration to the attacks of 9/11. The paper looks at the arguments of supporters of the Bush doctrine, but then explores the position of opponents as well. It discusses the rise of anti-Americanism, the contention that Bush disregards the will and needs of the states he has invaded, the fact that democratization is not working, and the lack of evidence about weapons of mass destruction at the start of the second Iraq war.


The terrorist attacks of 9/11 were a defining moment in both American foreign policy and the lives of millions of people. The lives claimed by the attacks left thousands upon thousands of family members, and an entire nation, grieving. The attacks, immediately condemned throughout the world, were regarded in the United States as the beginning of the war on terrorism, and President George W. Bush announced that America was ready to fight back. The "Bush doctrine", as American foreign policy has come to be called, is essentially the response of the Bush administration to the attacks of 9/11. Initially it was used to describe the invasion of Afghanistan, but it was later broadened to encompass the famous "policy of preemption", which was claimed to operate on various levels. First of all, this strategy of preemption holds that the United States can attack any country and depose any political regime that poses a security threat to the U.S. The threat does not have to be immediate, and it can consist of either terrorism or the development of weapons of mass destruction; this also justified the invasion of Iraq. Secondly, the policy of preemption was described by the administration as a strategy supporting democracy all over the world, especially in the Middle East. The third set of principles of the Bush doctrine refers to a diplomacy tending toward "unilateralism", i.e. "a willingness to act without the sanction of international bodies such as the United Nations Security Council or the unanimous approval of its allies" (Kagan: 17).

The Bush doctrine was further developed in the National Security Strategy paper issued by the White House on September 17, 2002. This paper announced "a new legal as well as strategic concept that would represent a fundamental change from the past" (Gardner: 586): "The United States will not use force in all cases to preempt emerging threats, nor should nations use preemption as a pretext for aggression. Yet in an age where the enemies of civilization openly and actively seek the world's most destructive technologies, the United States cannot remain idle while dangers gather." (The National Security Strategy of The United States of America, 15 Sept. 2002, in Gardner: 586)

Supporters of the Bush doctrine claim that nuclear weapons pose a deterrent threat to the United States, and that "hopes for a stable and democratized Islamic world, for example, may be short-lived if Iraq or Iran were to acquire such a capability. We see already how the tiny North Korean arsenal - and its proclivities to proliferate - could confound America's position as the guarantor of East Asian security and democracy" (Donnelly, The Logic of American Primacy). Moreover, they argue that American principles, interests and systemic responsibilities argue strongly in favor of an active and expansive stance of strategic primacy and a continued willingness to employ military force (Ibid). On the other hand, there are increasingly more voices supporting the idea that the preemptive war which lies at the heart of the Bush doctrine has long been viewed as "immoral, illicit, and imprudent" (Bacevich: 2007). In addition, the "quick, economical, and decisive victory in Iraq" (Ibid) that the Bush administration had aimed at has failed to occur, as the war has produced consequences such as heightening the anti-American hatred which already existed and alienating American friends and former supporters of American foreign policy.

The occupation stage of the Iraq war in which the U.S. is now engaged is extremely costly, at around $4 billion per month. Aside from the economic point of view, the military occupation of Iraq is counterproductive in the fight against anti-American terrorists, especially because international cooperation is essential in order to annihilate terrorist organizations, and the U.S. has not received much support for its military operation in Iraq. Moreover, the U.S. occupation of Iraq has cost the lives of hundreds of American servicemen and servicewomen and has left a few thousand wounded or disabled. As for Iraqi loss of life, the numbers are staggering: according to the Associated Press, at least 3,240 Iraqi civilians were killed during the combat stage of operations, i.e. between March and May 2003 (Preble: 6).

Thesis: This paper strives to illustrate that the U.S. occupation of Iraq has contributed to a deeper sense of insecurity at home on the part of Americans, and has not helped reduce the magnitude of worldwide terrorism.
On the contrary, it has increased the wave of anti-American hatred. Also, as far as international cooperation is concerned, it has left the United States rather unpopular and isolated.

The claim that Iraq was developing weapons of mass destruction was central to the administration's decision to wage war against Iraq. By invoking this particular reason, the supporters of the war knew that they were appealing to one of America's greatest fears, i.e. the prospect of anti-American terrorists seizing control of nuclear weapons. As President Bush formulated it, preemptive war was seen as a means of deterring other nations that might have been in the process of developing nuclear weapons programs: "for diplomacy to be effective, words must be credible, and no one can now doubt the word of America" (George W. Bush, press release, January 20, 2004, as quoted in Preble: 26). The administration claimed that the United States' involvement in Iraq led countries such as Libya, Iran and North Korea to abandon their initial plans to develop weapons of mass destruction. Nonetheless, Iran and North Korea have openly reaffirmed their desire to expand their nuclear programs.

American foreign policy has not been exactly coherent (Gardner: 585). America has offered its support to Israel, which clashed with its attempts to work with Arab states. Collaboration with the British impeded better relations with the emerging countries of the Middle East, and also threatened to undermine the security of the region that these very ties were meant to protect (Kuniholm: 433). This incoherence has only deepened in the last few years and has given rise to a number of suspicions within the international community. The presence of U.S. troops in Iraq does not eliminate the threat of nuclear weapons. In fact, since the allies have not been able to locate any such weapons on Iraqi territory, the American occupation sends troubling signals about American foreign policy and the U.S.'s intentions towards other nations of the world. These suspicions are directly linked to Iraq's oil resources and have generated the idea that the United States is an imperialistic state, and they have been used by Al Qaeda and other terrorist organizations in their attempts to recruit new members.

President Bush argued that the war against Saddam Hussein was part of the large-scale war on terrorism waged by America after 9/11. Nonetheless, Al Qaeda still maintains terrorist cells throughout the world, even in developed countries, and its center has remained in Afghanistan, where Taliban rule has been preferred to open civil war. In fact, the organization has benefited from the situation in Iraq. As with all major terrorist organizations, a poor and humiliated population is easy prey, and Al Qaeda has found support in Iraq, where its anti-American propaganda deeply resonated with the Iraqi people. At first, the administration had domestic support for going into Iraq, as Saddam's regime was presented as "a threat of unique urgency" (George W. Bush, Project for the New American Century, "Statement on Post-War Iraq", in Preble: 14), which led Americans to view Iraq as an anti-American dictatorship that possessed weapons of mass destruction and thus needed to be neutralized (Preble: 15). The link between Saddam Hussein and Al Qaeda has not been proved, and Saddam's involvement in 9/11 has remained solely an element of American administration rhetoric, as President Bush associated Saddam's dictatorship with terrorism: "The battle of Iraq is one victory in a war on terror that began on September 11, 2001" (G.W. Bush, 2003). However, no proof of this association has ever been found. This lack of proof has generated domestic disbelief in the Iraq war but, perhaps more importantly, it has enhanced the anger and resentment felt by Iraqis towards the U.S. and its citizens. Furthermore, it has worsened America's image throughout the world and has severely discredited the U.S. "war on terror".

The conflict is not between civilizations but within states, within cultures and within an increasingly global community, "over the values and ideas that underpin modernizations and the norms and directions of modern civilization" (Kuniholm: 426). The enemy of democratic states is international terrorism, an elusive enemy which cannot be confronted without international cooperation (Kuniholm: 430).
Furthermore, if states support terrorist organizations and help them propagate and gain new members, international cooperation must be targeted at the states in question so that this support is cut off. This has been one of the explanations for the American presence in Iraq. However, the presence of American troops has not considerably changed the situation in Iraq, where democracy has still not penetrated the collective conscience or the political system. In fact, American involvement in Iraq might actually suppress such political and social development. The violence has spread from Sunni to Shiite communities (Preble: 54) and from central Iraq to regions in the south and west (Ibid.).

One of the goals of the Bush doctrine is to spread democracy to the Middle East. In fact, the central claim formulated by supporters of the Iraqi occupation is that U.S. troops are contributing to the creation of a stable and democratic Iraq (Preble: 45). Moreover, they have argued that governments in neighboring countries could follow in the path of Iraq and adopt peaceful democratic regimes. This is, however, easily contradicted by a few historical and social considerations. Ethnic and religious cleavages prevent such a scenario from ever becoming reality. Since its creation, Iraq has been a nation torn between immense social inequalities and religious differences. The lack of education has also worsened the situation of the Iraqi people; indeed, this lack of education combined with extreme poverty accounts for the appeal of terrorism among ordinary Iraqis. Moreover, Iraq has no experience of liberal and pluralistic government, hence America's attempt to create and impose such a regime is likely to fail. It is extremely difficult to craft a regime that will also function when put into practice, especially when it is imposed through military intervention. A study conducted in 2003 showed that only 4 of the 16 military operations through which the United States sought to change a government resulted in the establishment of democracy (Pei, Minxin; Kasper, Sara, in Preble: 46).

The process of democratization largely depends on historical developments and cannot be reduced to a matter of imposing the right institutions in Iraq. Democracy is based upon political freedom, which can only be acquired by a state that benefits from economic growth, a solid level of education and a coherent national identity (Preble: 49). Given the ethnic turmoil, the low rate of education and the high percentage of Iraqi people living below the poverty line, it is obvious that the United States cannot simply change the political life of the country.

The level of hatred for Americans is growing, and the populations of the Middle East are, without a doubt, the center of this hatred. Islam is not a nation, but many Muslims express a kind of religious nationalism, and the leaders of radical Islam, including Al Qaeda, seek to establish a theocratic nation or confederation of nations that would encompass a large portion of the Middle East and beyond. Like national movements elsewhere, "Islamists have a yearning for respect, including self-respect, and a desire for honor" (Kagan: 23).

Complete withdrawal would be synonymous with a reaffirmation of America's intention not to suppress the aspirations of the Iraqi people. This could be much more than a mere symbolic gesture, as the populations inhabiting this region of the world would no longer feel as threatened and controlled by America, which in turn might lead to a decrease in terrorist activities directed at the United States. As for the domestic response to the occupation of Iraq, it is important to note that American intervention in Iraq is no longer supported at home, where policymakers have been confronted with a severe loss of domestic support for the war (Preble: 58).

In order for these anti-terrorism efforts to be successful, they must draw on an internationally accepted code that stipulates not only the norms of behavior for states in the region, but also the elements of cooperation (Kuniholm: 435). Although there are "reasonable-sounding theories as to why America's position should be eroding" (Kagan: 20) as a result of global opposition to the war and the unpopularity of the current administration, there has been little measurable change in the actual policies of nations, other than their reluctance to assist the United States in Iraq (Ibid).
However, the American occupation of Iraq has also created the image of an imperialistic America which has isolated itself from its traditional allies by focusing on reshaping world politics. In this sense, President G.W. Bush has been accused of wanting to establish world order according to his own principles, disregarding the will and needs of the states that he has invaded, as in the cases of Afghanistan and Iraq. In turn, this has given rise to an immense wave of anti-Americanism which has damaged the image of America worldwide. By withdrawing militarily from Iraq, the United States would be sending an extremely powerful message. Both the Middle East and the rest of the world would be reassured that America's intention is not to take control of the oil resources, as has been speculated. Furthermore, the American administration must acknowledge that the process of democratization is highly complex, and that democracy cannot simply be carried and implemented from one place to another. American foreign policy must redirect itself towards real threats such as global terrorism, and must consider embracing a position that will allow America to resume international cooperation, which would end its phase of loneliness and unpopularity.

References

Bacevich, Andrew J. 2007, 'Rescinding the Bush Doctrine', The Boston Globe, 1 March (electronic edition), <http://www.boston.com/news/globe/editorial_opinion/oped/articles/2007/03/01/rescinding_the_bush_doctrine/>

Donnelly, Thomas. 2003, 'The Underpinnings of the Bush Doctrine', National Security Outlook, AEI Online (Washington), American Enterprise Institute for Public Policy Research, <http://www.aei.org/publications/pubID.15845/pub_detail.asp>

Gardner, Richard N. 2003, 'Neither Bush nor the "Jurisprudes"', The American Journal of International Law, vol. 97, no. 3, pp. 585-590.

Kagan, Robert. 2007, 'End of Dreams, Return of History', Policy Review, no. 144, pp. 17-26.

Kuniholm, Bruce R. 2002, '9/11, the Great Game, and the Vision Thing: The Need for (And Elements of) a More Comprehensive Bush Doctrine', The Journal of American History, vol. 89, no. 2, pp. 426-438.

Preble, Christopher. 2004, Exiting Iraq: Why the U.S. Must End the Military Occupation and Renew the War against Al Qaeda: Report of a Special Task Force, Washington, DC, Cato Institute.


286School of Doctoral Studies (European Union) JournalCall for PapersJulyBusiness Intelligence Journal (BIJ)Business Intelligence Journal (BIJ) publishes research analysis and inquiry into issues of importance tothe business community. Articles in BIJ examine emerging trends and concerns in the areas of generalmanagement, business law, public responsibility and ethics, marketing theory and applications, businessfinance and investment, general business research, business and economics education, production/operations management, organizational behavior and theory, strategic management policy, social issuesand public policy, management organization, statistics and econometrics, personnel and industrial relations,technology and innovation, case studies, and management information systems. The goal of BIJ is tobroaden the knowledge of business professionals and academicians by promoting free access and providevaluable insight to business-related information, research and ideas. BIJ is a semiannual publication andall articles are peer-reviewed.Business Intelligence Journal will be published semiannually (one volume per year) by the BusinessIntelligence Service of Secured Assets Yield Corporation Limited based in London, UK.Types of paperRegular Articles: These should describe new and carefully confirmed findings, and research methodsshould be given in sufficient detail for others to verify the work. The length of a full paper should be theminimum required to describe and interpret the work clearly.Short Communications: A Short Communication is suitable for recording the results of complete smallinvestigations or giving details of new models, innovative methods or techniques. The style of mainsections need not conform to that of full-length papers. Short communications are 2 to 4 printed pages(about 6 to 12 manuscript pages) in length.Reviews: Submissions of reviews and perspectives covering topics of current interest are welcome andencouraged. Reviews should be concise and no longer than 4to 6 printed pages (about 12 to 18 manuscriptpages). Reviews manuscripts are also peer-reviewedVisit: http://www.saycocorporativo.com/saycoUK/BIJ/papers.htmlPlease read Instructions for Authors before submitting your manuscript. The manuscript files should begiven the last name of the first author. Submit manuscripts as e-mail attachment to the Editorial Office.BIJ will only accept manuscripts submitted as Microsoft Word archives attachments within an e-mailcommunication forwarded to the above mentioned e-mail address.Submit papers to: edit.bij@saycocorporativo.comSchool of Doctoral Studies (European Union) Journal - July, 2009 No. 1


2009 General Information287SRRNetSocial ResponsibilityResearch Networkwww.socialresponsibility.bizSocial Responsibility Research NetworkWho are we?We are an international network of scholars who share similar interests in aspects of socialresponsibility. Currently we have about 500 members and membership is free.Network officers:Chair of the Network: Professor Dr. David Crowther, De Montfort <strong>University</strong>, Leicester BusinessSchool, The Gateway, Leicester LE1 9BH, UK davideacrowther@aol.comVice Chair: Professor Dr. Güler Aras, Yildiz Technical <strong>University</strong>, Institute of Social Science,Yildiz Besiktas 34349, Istanbul, TURKEY guleraras@aol.comWhat do we do?Conferences2008 7th conference CSR and SMEsDurham, UK2009 8th conference CSR and NGOsPretoria, South Africa2010 9th conference CSR and Global GovernanceZagreb, Croatia2011 10th conference CSR and the New EconomyNew Orleans, USAPublicationsSocial Responsibility JournalThe official refereed journal of the Network; published 4 times per year by Emerald.Discussion Papers in Social ResponsibilityAn opportunity for early publication of articles. Published when necessary by SRRNet.The NewsletterPublished 3 times per year and containing news and opinion pieces. Sent to all members.Research Book Series: Issues in Corporate Behaviour and SustainabilityBooks published in association with the conferences and given to all conference delegates.<strong>Full</strong> details of all of our activities can be found from our website – www.socialresponsibility.bizIf you share our aims then please join us. We look forward to hearing from you.


288School of Doctoral Studies (European Union) JournalJuly<strong>Isles</strong> <strong>International</strong>e UniversitéSchool of Doctoral Studies(European Union)Approved by Charter of The Ministry of Education of the British <strong>Isles</strong> to act as a chartered<strong>University</strong> outside of the United Kingdom; with <strong>Full</strong> Accreditation granted from the AcadémieEuropéenne d’Informatisation; established in Brussels, Belgium by Order of the King ofBelgium Albert II, with full recognition from The Ministry of Justice and Research of the Belgian Crown:The <strong>Isles</strong> <strong>International</strong>e Université (European Union) has been commended to host the School of Doctoral Studies (EU)in Brussels, Belgium, aiming to accomplish three fundamental missions:• Development and enhancement of elite doctoral studies’ programmes, with cutting edge standards on academic andscientific research;• Enforcement of EU analogue quality standards on academic programmes developed by tuition institutions outside theEU area: (a) By evaluating academic methodology applied on learning programmes; (b) By providing full coachingand tutoring support to tuition institutions worldwide to upgrade, assure and maintain academic methodology qualitylevels on their way towards excellence; and (3) By awarding EUASC Seal (EU Analogue Standards Certification) atcorresponding quality levels achieved at each evaluation point; and• Ensuring academic EU Analogue Standard by performing: (a) Academic Validation on degrees earned by students atrecognised and/or certified tuition institutions located outside the EU area; (b) Double Degree awarding on degreesearned by students at recognised and/or certified tuition institutions located elsewhere the EU area; and (c) Degreesawarding on studies programmes developed by tuition institutions, once certification at higher quality levels on appliedacademic methodology has been achieved and after collaboration agreement has been executed for these purposes .In order to achieve its missions, the <strong>Isles</strong> <strong>International</strong>e Université has gathered some of the best minds in Europe, whohave developed sate-of-the-art doctoral academic programmes, elite research methodology and cutting edge technologicaltools, and who act as permanent Faculty Members to strictly enforce this toolkit’s proper application on daily basis.Doctoral StudiesThe School of Doctoral Studies of the EU’s academic structure includes four Departments (Business Management andEconomics, Engineering and Technology, Science and Social Science) which host 37 Disciplines, offering PhD studies onpractically every main field of human knowledge.Over 355 PhD students are involved in more than 116 cutting edge research projects, most of them being currentlydeveloped in collaboration with 12 other universities in the EU area and elsewhere; 94% of these students are engaged onprogrammes designed to undertake pure research assignment and 6% are required to undertake a research assignment andpursue theoretical studies in the form of seminars or courses.Studies towards a doctoral degree are worth 240 higher education credits (ECTS credits) and require an average of fouryears of full-time study. The research is intended to lead to a scholarly thesis; writing it will take up most of a student’stime and all theses are publicly defended. Every doctoral student receives a studies grant for partial or total coverage onfull programme’s term costs, as well as individual tutoring. 
Currently, slightly over 72% of all programmes’ full termcosts are covered by studies grants.Science students spend a great deal of time in the laboratory. Some departments may require that the thesis be part of anongoing project within the department. In the fields of technology and natural science, researchers often work as part of ateam. If research findings are reproduced in academic journals the thesis may be a compilation of the published articles.Forward inquiries to: admin@iiuedu.euSchool of Doctoral Studies (European Union) Journal - July, 2009 No. 1


2009 General Information289Who are we?The Universidad del Valle de México (UVM) is one of the largest and most prestigious universities inMexico. Founded in 1960 and accredited by the Federacion de Instituciones Mexicanas Particulares deEducacion Superior, UVM enrolls students at 32 campuses throughout Mexico.Universidad del Valle de México was founded by a Group of entrepreneurs andacademics, led by Mr. Jose Ortega in response to the professional requirementsof the Mexican labor market, aiming to provide education with quality:High SchoolUndergraduateUndergraduate for working adultsGraduateContinuing EducationUVM’s mission is to be an institution that fully educates with an equilibriumbetween sciences and technology, ethics and culture, according to the social needs,in search of truth and welfare. The institutional philosophy and educational modelseek to educate in human values, updated knowledge and the acquisition of skillsthat teach the student to competitively adapt to the labor market.UVM alumni are distinguished by their abilities, knowledge, attitudes, and social skillsthat are shaped by the identity subjects.UVM is the only global university in MexicoBecause of its prestige and quality the UVM is part of Laureate <strong>International</strong> Universities, themost important network of universities in the world which is integrated by 24 well-known privateuniversities in:Spain, Switzerland, France, Costa Rica, Panama, Honduras, Ecuador, Chile, Peru, Brazil, England,Chipre, China, Canada, US, Germany and Mexico.UVM has 35 campuses throughout Mexico offering 38 undergraduate degree programs in Artsand Humanities, Social Sciences, Economic and Management Sciences, and Engineering; 11undergraduate degree programs for working adults and 28 graduate programs. UVM has more than100,000 students and more than 9,000 employees (teachers and staff).UVM opens its door to the worldUVM students have access to international qualified academic opportunities through programsoffered by Laureate <strong>International</strong> Universities


• Summer courses
• Semester academic exchanges
• Double degree
• Graduate studies

Recognitions:

Throughout its history, UVM has received recognitions that prove its excellence:

• Academic Excellence - Secretaría de Educación Pública (SEP).
• Affiliation - Asociación Nacional de Universidades e Instituciones de Educación Superior (ANUIES).
• Academic Quality Certification - Federación de Instituciones Mexicanas Particulares de Educación Superior (FIMPES).
• Second private university in the country holding the highest number of academic programs accredited by the Consejo para la Acreditación de la Educación Superior, A.C. (COPAES).
• National Registry of Scientific and Technological Institutions and Enterprises (RENIECYT) - CONACYT.
• According to the Reader's Digest Intelligent Decision Makers (IDM) Guía Universitaria 2008, UVM is one of the best higher education institutions in the country, out of a range of over 100 public and private universities.

UVM's faculty training programs

UVM's commitment to its students and to the country is to enhance its academic quality and prepare successful professionals; therefore, it invests in the constant training and specialization of its teachers through the Centre for Academic Excellence (CAE).

2008: Year of academic strengthening

UVM trained more than 3,000 teachers and staff in different areas of knowledge between January and September of this year. It also started a program of specialization with postgraduate and doctorate courses, as well as an English program. UVM has identified important public and private higher education institutions in Mexico and abroad with which to carry out joint development and training programs.

Teacher training and professional development

Nearly 2,000 UVM teachers across the country took the Institutional Teaching and Pedagogy Program, with a focus on the correct implementation of the educational model, which underlies the students' principles: learn to learn, learn to be, learn to do, and learn to undertake.

Also, more than 400 teachers from the 35 campuses took 12 seminars in areas such as: Industrial Engineering, Mechatronics, Animation, Marketing, Management, Communication, Law, Psychology, Health Sciences and Hospitality.


140 academic leaders from the 6 regions into which UVM has grouped its campuses, across 15 states of Mexico and Mexico City, are enrolled in the Diploma in Leadership for Academic Management.

Postgraduate studies in prestigious institutions

A priority for UVM is to have teachers with postgraduate studies; thus, Walden University, a prestigious online higher education institution accredited in the US, has offered scholarships for Master's degrees in Education, Management, Information Technologies, Psychology and Systems Engineering.

Sports at UVM go beyond boundaries

UVM considers sports an important part of the student's education. This has allowed some of its students to excel at national and international events. Maria del Rosario Espinoza and Guillermo Perez were both Tae Kwon Do gold medal winners at the Beijing Olympic Games in China.

UVM supports young leaders of projects with social impact

From creating eco-tourism opportunities to developing a national hotline to combat domestic abuse, young people in Mexico are using their energy and creativity to improve their communities and their country. To support their efforts, Universidad del Valle de México (UVM) joined the Sylvan/Laureate Foundation and the International Youth Foundation in 2006 in creating the "Premio UVM por el Desarrollo Social" (UVM Prize for Social Development). Its goal: to celebrate and support outstanding young Mexican social entrepreneurs.

Premio UVM has adapted the YouthActionNet® Global Fellowship model to provide a tailor-made, culturally relevant, Spanish-language-centered leadership development experience for 15 young Mexican leaders, ages 18-29, annually. The Premio UVM fellowship strengthens the project management and communications skills of young Mexican leaders, while connecting them to their peers and to experts to create a national network of youth leaders effecting positive change.

Student Development

Responsible for the education its students receive, UVM places special emphasis on the students' integral education and the continuous improvement of its faculty. In this way, it responds to the expectations and trust of Mexican families and prepares successful professionals with a global vision, who acquire the skills and knowledge that the labor market requires.

The student development area at UVM has important national and international projects such as:

• Institutional Fair of Entrepreneurs UVM
• Congress Simulation
• United Nations Simulation Model
• Congresses
• Student Councils
