Cournot Oligopoly and the Theory of Supermodular Games


GAMES AND ECONOMIC BEHAVIOR 15, 111–131 (1996)
ARTICLE NO. 0061

Continuous Stochastic Games of Capital Accumulation with Convex Transitions

Rabah Amir*

WZB (IV/2), Reichpietschufer 50, 10785 Berlin, Germany

Received October 23, 1992

We consider a discounted stochastic game of common-property capital accumulation with nonsymmetric players, bounded one-period extraction capacities, and a transition law satisfying a general strong convexity condition. We show that the infinite-horizon problem has a Markov-stationary (subgame-perfect) equilibrium and that every finite-horizon truncation has a unique Markovian equilibrium, both in consumption functions which are continuous and nondecreasing and have all slopes bounded above by 1. Unlike previous results in strategic dynamic models, these properties are reminiscent of the corresponding optimal growth model. Journal of Economic Literature Classification Codes: C73, O41, Q20. © 1996 Academic Press, Inc.

1. INTRODUCTION

One of the emerging trends in the expanding relationship between game theory and economics is the application of dynamic/stochastic games in a variety of settings. Early studies initiated this development via specific examples where equilibria are computable in a more or less straightforward manner. A very partial list of papers along these lines includes those of Shubik and Whitt (1973), Reinganum and Stokey (1988), and Cyert and DeGroot (1970).
For a thorough treatment of the widespread linear quadratic case, see Basar and Olsder (1982).

More recently, several studies of strategic dynamics have been conducted along general lines, i.e., with the reward function and the transition law satisfying only common structural properties derived from economic considerations.

* This work was initiated while the author was visiting C.O.R.E., Belgium, and completed while the author was visiting the Department of Economics, University of Dortmund, Germany. The author is grateful to both institutions for providing an extremely congenial and stimulating work environment. The author also thanks Wolfgang Leininger, Jean-Francois Mertens, Abraham Neyman, and Matt Sobel for helpful conversations and comments. Financial support by the D.F.G. is gratefully acknowledged.

0899-8256/96 $18.00
Copyright © 1996 by Academic Press, Inc. All rights of reproduction in any form reserved.


This strand of the literature may be divided into two groups, according to one important aspect of the information structure of the dynamic game under consideration, namely, perfect information vs simultaneous moves.

In the former category, at least three separate areas figure prominently. A notion of consistent planning/altruistic growth, introduced by Strotz and Pollack, has been thoroughly investigated by Leininger (1986) and Bernheim and Ray (1983). See also Lane and Leininger (1986) and Bernheim and Ray (1987). A model of dynamic duopoly has been the object of several studies: Cyert and DeGroot (1970), Maskin and Tirole (1988a,b). Finally, various racing models have been analyzed; see, e.g., Harris and Vickers (1987) and Budd et al. (1992).

Most notably, very general pure-strategy existence results have been derived for dynamic games of perfect information: Harris (1985) and Hellwig and Leininger (1987) consider history-dependent equilibria while Hellwig and Leininger (1988) deal with Markovian equilibria. A common feature of all the studies in this class of dynamic games is the deterministic nature of the dynamics.

For games with simultaneous moves, the dominant setting has been that of strategic capital accumulation/resource extraction, initiated by the pioneering work of Levhari and Mirman (1980).
Existence of a stationary equilibrium for the deterministic version of this class of games is established in Sundaram (1989) and Amir (1990). See also Amir (1989). An extension to the stochastic case is given in Majumdar and Sundaram (1991) and Dutta and Sundaram (1990).

It should come as no surprise that no general pure-strategy existence results are available for such games. Indeed, the simultaneity of moves in a dynamic context often implies that quasi-concavity, which is a basic requirement for the application of most fixed-point theorems, is not preserved, even under most favorable assumptions on the data of the game. Note that such difficulties are also encountered when dealing with stationary equilibria of games with perfect information.

Somewhat more surprisingly, it turns out that the available abstract subgame-perfect equilibrium existence results in mixed (or behavioral) strategies for stochastic games (see Raghavan et al., 1991, and references therein) are not applicable to most economic models. The primary reason is that these results involve the finiteness of the action spaces or the continuity in the variation norm of the transition law with respect to the actions.
In this context, the basic difficulty is the absence of upper hemi-continuity of the best response correspondence in the weak-star topology on the strategy spaces (consisting of transition probabilities from the history or state space to the action spaces).

Another related strand of the literature on dynamic games builds on the work of Boldrin and Montrucchio (1986) on the indeterminacy of (optimal) capital accumulation paths, and extends it to various dynamic games with sequential as well as simultaneous moves, including Cournot adjustment processes. See Dana and Montrucchio (1991) for a survey.


The present paper reconsiders strategic capital accumulation/resource extraction with bounded one-period capacities and convex transitions as a discounted stochastic game. While previous work on this problem extended the existence result of the deterministic counterpart under a symmetry assumption on the players (Dutta and Sundaram, 1990), our analysis goes further in a variety of directions: (i) the assumption of symmetric players is removed, (ii) the stationary equilibrium strategies are continuous and nondecreasing, in addition to having, as in the deterministic case, all slopes bounded above by one, and (iii) uniqueness of Markovian equilibrium for every finite-horizon truncation is established. While presented in the context of two players only, our results do extend to the n-player case, n > 2.

These surprising conclusions are reached at the expense of a new natural convexity assumption on the stochastic technology, represented as a transition probability mapping today's joint investment into tomorrow's random output. This key assumption may be simply stated as follows: The probability that the next output is at or below any given level is a convex function of current investment. A class of examples given in the next section suggests that this convexity condition is rather general; in particular it allows for atoms in the production process.

Nevertheless, this notion of convexity is intrinsically stochastic and has no meaningful deterministic analog. Its convexifying effect may be expressed by the following elementary but potent observation: The integral of any nondecreasing function with respect to a transition probability which is convex in its parameter is a concave function of that parameter. In the context at hand, think of the typical value function as the integrand and of the joint investment as the parameter in the above statement (cf. Lemma 1.2).

It is worthwhile to observe that many of the qualitative divergences between the one-player (optimal growth) and the multiplayer cases prevailing in the deterministic framework are no longer present in the stochastic convex setting. The equilibrium strategies have marginal propensity of consumption between 0 and 1, as does the (one-player) optimal policy, cf. Brock and Mirman (1972). Thus the classical properties of consumption functions are restored. Under the assumption of bounded extraction capacities, Markovian equilibria are even unique for any finite horizon. Elsewhere, similar results are shown to hold for the consistent planning problem under the same technology.

It is instructive to point out that our key assumption on the stochastic technology may also be interpreted along lattice-theoretical lines, cf. the proof of Lemma 1.3.
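This observation is easy to probe numerically. The sketch below (Python; our own illustration, not part of the paper) uses the explicit transition family given later in Section 2, F(x | y) = 1 − e^{−2x} + e^{−y}e^{−2x}, together with an arbitrarily chosen bounded nondecreasing integrand v, and checks midpoint concavity of y ↦ ∫ v dF(· | y) on a grid:

```python
import math

def F(x, y):
    # the Section 2 example: F(x|y) = 1 - e^{-2x} + e^{-y} e^{-2x},
    # convex and strictly decreasing in the investment parameter y
    return 1.0 - math.exp(-2 * x) + math.exp(-y) * math.exp(-2 * x) if x >= 0 else 0.0

def expect_v(y, v, grid_max=20.0, n=4000):
    # Riemann-Stieltjes sum for the integral of v against F(.|y),
    # including the atom at 0 (of mass F(0|y) = e^{-y})
    total = v(0.0) * F(0.0, y)
    prev = F(0.0, y)
    for k in range(1, n + 1):
        x = k * grid_max / n
        cur = F(x, y)
        total += v(x) * (cur - prev)
        prev = cur
    return total

v = lambda x: min(x, 5.0)   # an arbitrary bounded nondecreasing integrand

# midpoint concavity of y -> integral of v dF(.|y) on a coarse grid
ys = [0.5 * i for i in range(9)]
for y1 in ys:
    for y2 in ys:
        mid = expect_v(0.5 * (y1 + y2), v)
        assert mid >= 0.5 * (expect_v(y1, v) + expect_v(y2, v)) - 1e-9
print("integral of v dF(.|y) is concave in y on the test grid")
```

For this particular family the integral works out to a positive constant times 1 − e^{−y}, so its concavity in y is exactly the convexity of g(y) = e^{−y} in disguise.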
Furthermore, Topkis' theorem is conveniently invoked a number of times in our analysis. Finally, it is shown that an N-period horizon problem may be analyzed, via the dynamic programming recursion, as a sequence of N parametrized one-shot supermodular games, cf. Topkis (1979), Vives (1990), Milgrom and Roberts (1990), and Sobel (1988). Finally, it is our belief that the stochastic technology introduced here will turn out to be of crucial importance in


tackling hitherto unsolved problems in strategic dynamics and multidimensional optimal dynamics.

The paper is organized as follows. Section 2 describes the game together with all assumptions and their interpretation, and a statement of our main results. All proofs are contained in Section 3.

2. THE MODEL AND THE MAIN RESULT

The economic interaction under consideration may be simply described as follows: Two players jointly own a productive asset characterized by a stochastic two-period input–output technology. Each player, distinguished by his utility function, his discount factor, and his one-period consumption capacity, has as his objective the maximization of the discounted sum of utilities from his own consumption, over a finite or an infinite horizon, as will be specified. At each period, moves are simultaneous.

More specifically, consider the following stochastic game: Player i's payoff is given by, with T ∈ N ∪ {+∞} to be specified,

    E Σ_{t=0}^T δ_i^t u_i(c_t^i),

where E denotes the expectation operation over the induced probability measure on all histories, described below.

The (technological) stochastic transition law is described by

    x_{t+1} ~ q(· | x_t − c_t^1 − c_t^2),

with

    c_t^i ∈ [0, K_i(x_t)],  t = 0, 1, ..., T;  i = 1, 2.

Here, δ_i, with 0 ≤ δ_i < 1, is Player i's discount factor, u_i his utility function, and K_i(x) his one-period consumption/extraction capacity, as a function of the available stock x. The feasibility constraint says that each player may consume any nonnegative amount up to his one-period capacity.
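To make the setup concrete, here is a minimal Python simulation of one play path, written against the example transition law constructed later in this section (an atom at 0 of mass e^{−y} plus an Exp(2) component). The capacity K, the utility u_i(c) = log(1 + c), the discount factors, and the fixed-fraction stationary strategies are all illustrative assumptions of ours, not the paper's:

```python
import random, math

def sample_next_stock(y, rng):
    # draw x' ~ q(.|y) for the Section 2 example:
    # F(x|y) = 1 - e^{-2x} + e^{-y} e^{-2x}, i.e., an atom of mass e^{-y}
    # at 0 plus, with probability 1 - e^{-y}, an Exp(rate=2) draw
    if rng.random() < math.exp(-y):
        return 0.0
    return rng.expovariate(2.0)

def K(x):
    # hypothetical capacity: continuous, increasing, K(0) = 0, slope <= 1,
    # bounded above, and with K(x) + K(x) < x for all x > 0
    return min(0.4 * x, 1.0)

def simulate(x0, frac=(0.5, 0.5), T=50, delta=(0.9, 0.9), seed=0):
    # both players use the fixed-fraction stationary strategy c_i = frac_i * K(x);
    # returns each player's realized discounted utility sum, u_i(c) = log(1 + c)
    rng = random.Random(seed)
    x, payoff = x0, [0.0, 0.0]
    for t in range(T):
        c = [frac[i] * K(x) for i in range(2)]
        for i in range(2):
            payoff[i] += delta[i] ** t * math.log(1.0 + c[i])
        x = sample_next_stock(x - c[0] - c[1], rng)
    return payoff

print(simulate(2.0))
```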
Finally, q is a transition probability from the state space (i.e., the set of all possible capital stocks) to itself, specifying a probability distribution over the next stock x_{t+1}, given the current joint investment y_t = x_t − c_t^1 − c_t^2.

Throughout this paper, any implicit reference to a σ-algebra will tacitly be to the Borel σ-algebra, and thus measurability will always mean Borel measurability. Our model here will easily be seen to represent a discounted Markov-stationary stochastic game with uncountable state and action spaces (all real subsets) and state-constrained actions. Thus, the measure-theoretic structure, on


which we do not expand here, may simply be adapted from this general class of games to our specific model. See Raghavan et al. (1991). In fact, we will make strong structural assumptions on u_i, q, and K_i, motivated by economic considerations, which would make it possible to define our game using only the Riemann integral.

Before listing the model assumptions, we complete the formulation of the game by defining strategies, expected payoffs, and equilibria. Denote by S the state space for this game (S = [0, +∞) as will be described below) and by A_i(x) Player i's action space (A_i(x) = [0, K_i(x)]). Let A_i = ∪_{x∈S} A_i(x) = [0, K_i(∞)] (see Assumption A.3 below regarding the K_i's).

Define the sequence of possible histories for this game as follows. Let

    H_0 = S_0,  H_1 = S_1 × A_1^1 × A_2^1,  ...,  H_n = H_{n−1} × S_n × A_1^n × A_2^n,  ....

Note here that all S_n's and all A_i^n's are the same. The index n is only added to indicate time for clarity. A general strategy for Player i is a sequence of measurable maps (σ_0, σ_1, ...) with σ_n: H_n → A_i^n, such that σ_n(h_n) ∈ A_i(x_n). In this paper, we deal only with more restricted classes of strategies, described next. With T finite, we will consider only Markovian strategies defined as above but with H_n = S, n = 0, 1, ..., T.
With T = +∞, we restrict attention to stationary (Markov) strategies, defined as Markovian strategies for which σ_0 = σ_1 = ···.

Given an initial state x and a pair of strategies (σ_1, σ_2) = ((σ_1^n), (σ_2^n)), there is a unique probability distribution on the space of all histories, m(x, σ_1, σ_2), according to the Ionescu–Tulcea theorem. All expectations defined in our analysis are tacitly with respect to the measure m. Let m_i be the marginal of m on A_i^1 × A_i^2 × ··· × A_i^T and m_i^n the marginal of m_i on A_i^n, n = 1, 2, ..., T. The expected discounted payoff to Player i when the strategy pair (σ_1, σ_2) is used, and x is the initial state, is given by

    Π_i(σ_1, σ_2)(x) = ∫ Σ_{t=0}^T δ_i^t u_i(c_t^i) dm_i(x, σ_1, σ_2).

The tth stage expected utility of Player i is given by

    U_i^t(σ_1, σ_2)(x) = ∫ u_i(c_t^i) dm_i^t(x, σ_1, σ_2),  t = 0, 1, ..., T.

It is easily shown that

    Π_i(σ_1, σ_2)(x) = Σ_{t=0}^T δ_i^t U_i^t(σ_1, σ_2)(x).

When T is finite (infinite), a pair (σ_1*, σ_2*) of Markov (stationary) strategies is a Markovian (stationary) Nash equilibrium if

    Π_1(σ_1*, σ_2*)(x) ≥ Π_1(σ_1, σ_2*)(x)


and

    Π_2(σ_1*, σ_2*)(x) ≥ Π_2(σ_1*, σ_2)(x),

for all feasible Markov (stationary) strategies σ_1, σ_2, and all initial states x.

In both cases, it is well known, from standard dynamic programming arguments, that an equilibrium from the proposed subclass of strategies remains an equilibrium when one or both players are allowed to use more general strategies. Furthermore, Nash equilibria based on these two restricted classes of strategies satisfy a strong version of subgame perfection.

We now provide a list of structural assumptions on the data of the game under consideration which hold, without further reference, throughout the paper:

A.1. u_i: [0, +∞) → [0, +∞) is a strictly increasing, strictly concave function, satisfying u_i(·) ≥ 0.

A.2. q is a transition probability from [0, +∞), with its Borel σ-algebra, to itself. Let F(· | y) denote the cumulative distribution function associated with q(· | y), i.e., for x ≥ 0, F(x | y) = q([0, x] | y). It is assumed that (interpretation of the following conditions is given later):

(i) For each x ∈ [0, +∞), F(x | ·) is a strictly decreasing function, i.e., F is (first-order) stochastically increasing in y.

(ii) For each x ∈ [0, +∞), F(x | ·) is a convex function.

(iii) F(0 | 0) = 1 and F(x | ·) is continuous at y = 0, ∀x ∈ [0, +∞).

A.3. K_i(·) is continuous, strictly increasing, uniformly bounded above by some constant C_i ∈ R_+, and satisfies K_i(0) = 0 and

    [K_i(x_1) − K_i(x_2)] / (x_1 − x_2) ≤ 1,  ∀x_1, x_2 ∈ [0, +∞), x_1 ≠ x_2.

Furthermore, K_1(x) + K_2(x) < x, for all x > 0.


Assumptions A.2 and A.3 are crucial to our analysis of the game at hand. Some direct consequences of these assumptions are described below and invoked throughout the paper. A.2 (ii), together with the continuity part of A.2 (iii), implies that for each x ∈ [0, +∞), F(x | ·) is a continuous function. Hence, as y_n → y, we have

    F(x | y_n) → F(x | y)  for each x ∈ [0, +∞).  (2.1)

This pointwise convergence is clearly stronger than the more common weak (or vague) convergence of distribution functions.

A.2 (i) and A.2 (ii) together require that the state space S—the set of all possible capital stocks—of this game be unbounded above. This may easily be seen from a graph of F(x | ·), and is thus not formally proved here.
Note that while this requires arbitrarily large stocks to be reachable with positive probability from any given small stock, this probability can be taken to be as small as desired. As a consequence, all integrals in this paper are taken from 0 to +∞ without further indication.

As an example of a transition probability satisfying all the requirements of Assumption A.2, consider the following (uncountable) family, for y ∈ [0, +∞),

    F(x | y) = [1 − g(y)]G(x) + g(y)   if x ≥ 0,
             = 0                       if x < 0,

where G(·) is any distribution function satisfying G(x) = 0 if x ≤ 0, and g(·) is any strictly decreasing convex function from [0, +∞) to [0, 1] with g(0) = 1. We leave to the reader the computational steps establishing that F is indeed a transition probability and that it satisfies A.2. For the sake of further illustration, here is a specific example, obtained by letting g(y) = e^{−y}, y ≥ 0, and G(x) = 1 − e^{−2x}, x ≥ 0:

    F(x | y) = 1 − e^{−2x} + e^{−y} e^{−2x}   if x ≥ 0,
             = 0                              if x < 0.

Finally, Assumption A.3 turns out to be crucial for some aspects of our results, in particular, the uniqueness part. Its main effect is that it allows the model at hand to be treated as an ordinary (stochastic) game, rather than as a generalized game, in Debreu's (1952) terminology, i.e., a game in which the joint action space is not a cartesian product of the individual players' action spaces. An immediate consequence of this hypothesis is that the stock cannot be driven to 0 by exhaustive consumption by the players.
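The specific example just given is easy to check against A.2 numerically. The short Python sketch below (our illustration; the grid and tolerances are arbitrary choices) tests that F(x | ·) is strictly decreasing in y (A.2(i)) and midpoint convex in y (A.2(ii)), and that the boundary and continuity conditions of A.2(iii) hold:

```python
import math

def F(x, y):
    # the specific example: g(y) = e^{-y}, G(x) = 1 - e^{-2x}
    return 1.0 - math.exp(-2 * x) + math.exp(-y) * math.exp(-2 * x) if x >= 0 else 0.0

xs = [0.0, 0.5, 1.0, 3.0]
ys = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0]

for x in xs:
    # A.2(i): F(x|.) strictly decreasing in y
    vals = [F(x, y) for y in ys]
    assert all(a > b for a, b in zip(vals, vals[1:]))
    # A.2(ii): midpoint convexity in y
    for y1 in ys:
        for y2 in ys:
            assert F(x, 0.5 * (y1 + y2)) <= 0.5 * (F(x, y1) + F(x, y2)) + 1e-12
    # A.2(iii): continuity at y = 0
    assert abs(F(x, 1e-9) - F(x, 0.0)) < 1e-6

assert F(0.0, 0.0) == 1.0   # A.2(iii): F(0|0) = 1
print("example transition satisfies A.2 (i)-(iii) on the test grid")
```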
Hence the game will effectively last forever with probability one, provided q(· | y) does not have an atom at 0 for any y > 0. Such an assumption of nonexhaustive capacities has previously been made in a similar context, Benhabib and Radner (1988).

The level of realism of such an assumption obviously depends on the specific economic context represented by the described dynamic game. In the most


prominent setting of noncooperative common-property resource extraction, Assumption A.3 is undoubtedly appropriate and is easily interpreted in terms of capital and labor (bounded) capacities.

In the following, it is convenient to refer to the (n + 1)-period horizon game (including the 0th period) as G_n, with G_∞ being the infinite-horizon game. Another related (auxiliary) game is defined in Section 3.

We are now in a position to state the

MAIN THEOREM. The infinite-horizon game G_∞ has a stationary equilibrium. Furthermore, every finite-horizon game G_n has a unique Markovian equilibrium. In both cases, equilibrium strategies consist of (consumption) decision functions which are continuous and nondecreasing and have all slopes below 1, and corresponding value functions are continuous and nondecreasing.

3. PROOFS

The proof of the Main Theorem proceeds via two intermediate propositions, each of which requires, in turn, a few lemmas. Many of these results are of independent interest. Note that we assume the reader is familiar with Topkis' theorem (Topkis, 1978).

We start with two lemmas which establish useful facts invoked repeatedly throughout our analysis.

LEMMA 0.1. Let F_1 and F_2 be probability distributions over [0, +∞), with its Borel subsets. Then F_2 first-order stochastically dominates F_1, or

    F_1(x) ≥ F_2(x), ∀x,  if and only if  ∫ v dF_1 ≤ ∫ v dF_2,

for all real-valued nondecreasing functions v on [0, +∞).

Proof. This result is well known. See, e.g., Stoyan (1983).

LEMMA 0.2. Let U: [0, +∞) → [0, +∞) be a bounded concave function. Let h: [0, +∞) → [0, +∞) satisfy h(0) = 0 and [h(x_1) − h(x_2)]/(x_1 − x_2) ≤ 1, for all distinct x_1, x_2 ∈ [0, +∞). Then (i) the set φ = {(x, y): x ∈ [0, +∞) and y ∈ [0, x − h(x)]} is a lattice, and (ii) U[x − y − h(x)] is supermodular on φ.

Proof. (i) This follows directly from the fact that x − h(x) is nondecreasing in x. (ii) We show the equivalent fact that U[x − y − h(x)] has nondecreasing differences in (x, y) ∈ φ. Let x_2 ≥ x_1 and y_2 ≥ y_1 be fixed, with 0 ≤ y_i ≤ x_i − h(x_i), i = 1, 2. Then, clearly for i = 1, 2, we have

    x_2 − y_1 − h(x_2) ≥ x_i − y_i − h(x_i) ≥ x_1 − y_2 − h(x_1),


with the sum of the two outer terms being equal to that of the two inner terms. Hence, there exists λ, with 0 ≤ λ ≤ 1, such that

    x_2 − y_2 − h(x_2) = λ[x_2 − y_1 − h(x_2)] + (1 − λ)[x_1 − y_2 − h(x_1)],  (3.1a)

and

    x_1 − y_1 − h(x_1) = (1 − λ)[x_2 − y_1 − h(x_2)] + λ[x_1 − y_2 − h(x_1)].  (3.1b)

Now, using the concavity of U twice, with (3.1a) and then with (3.1b), yields

    U[x_2 − y_2 − h(x_2)] + U[x_1 − y_1 − h(x_1)]
      ≥ λU[x_2 − y_1 − h(x_2)] + (1 − λ)U[x_1 − y_2 − h(x_1)]
        + (1 − λ)U[x_2 − y_1 − h(x_2)] + λU[x_1 − y_2 − h(x_1)]
      = U[x_2 − y_1 − h(x_2)] + U[x_1 − y_2 − h(x_1)],

which says that U[x − y − h(x)] has nondecreasing differences in (x, y) ∈ φ. This completes the proof of Lemma 0.2.

LEMMA 0.3. Let f_n, f: [0, +∞) → R. If f_n → f uniformly on any compact subset of [0, +∞), x_n ∈ arg max f_n, and x is a limit point of (x_n), the subsequence (x_m) being convergent to x, then

    f(x) = sup f = lim_{m→∞} (sup f_m).

Proof. See Kall (1986). Note here that this result only requires that f_n hypo-converge to f, which is weaker than uniform convergence.

Additional notation is introduced as the need for it arises. The effective strategy space for Player i in our stochastic game is the following space of Lipschitz-continuous monotone nondecreasing functions (recall that C_i is the maximal extraction level, cf. Assumption A.3):

    LCM_i = {γ: [0, +∞) → [0, C_i]: γ(x) ∈ [0, K_i(x)], ∀x > 0, and
             0 ≤ [γ(x_1) − γ(x_2)]/(x_1 − x_2) ≤ 1 for all distinct x_1, x_2 in [0, +∞)}.

LCM_i is the subset of the space of all stationary strategies which is used in our analysis of the infinite-horizon problem. For finite horizons, we deal with the subset of all Markovian strategies consisting of sequences of elements in LCM_i.

The corresponding space of possible value functions when players use strategies in LCM_i will be shown to be (cf. Lemma 1.1) the following space of continuous monotone nondecreasing functions:

    CM_i = {v: [0, +∞) → [0, +∞) such that 0 ≤ v ≤ u_i(C_i)/(1 − δ_i), v continuous and nondecreasing}.


Here, the upper bound on the possible values that Player i can get follows from the definition of payoffs and Assumption A.3. It is also useful to define the closure of CM_i under pointwise convergence:

    M_i = {v: [0, +∞) → [0, +∞) such that 0 ≤ v ≤ u_i(C_i)/(1 − δ_i) and v is nondecreasing}.

We are now ready for

PROPOSITION 1. The infinite-horizon game has a Nash equilibrium in stationary strategies which are elements of LCM_1 × LCM_2.

It is convenient to break down the proof of Proposition 1 into three distinct lemmas, labeled 1.1–1.3. We first define the associated best-response optimization problem.

Suppose that Player II, say, uses a stationary strategy h ∈ LCM_2. Then Player I's value function for optimally responding to h (in the infinite-horizon game), to be denoted V_h, is defined by

    V_h(x) = sup E Σ_{t=0}^∞ δ_1^t u_1(c_t^1)
    subject to x_{t+1} ~ q(· | x_t − c_t^1 − h(x_t)),  t = 0, 1, ...,  with x_0 = x,  (3.2)

where the expectation is over the unique probability measure induced by x, h, and a stationary strategy by Player I (see Section 1), and the supremum may be taken over the space of all stationary policies (see, e.g., Bertsekas and Shreve, 1978, Stokey et al., 1989).

LEMMA 1.1. In the optimization problem (3.2), assume that h ∈ LCM_2. Then V_h ∈ CM_1 and V_h is the unique solution to the functional equation

    V_h(x) = max{u_1(c) + δ_1 ∫ V_h(x′) dF(x′ | x − c − h(x)): c ∈ [0, K_1(x)]}.  (3.3)

Proof.
Define the map T: CM_1 → CM_1 by

    T(v)(x) = sup{u_1(c) + δ_1 ∫ v(x′) dF(x′ | x − c − h(x)): c ∈ [0, K_1(x)]}.  (3.4)

First, we show that T indeed maps CM_1 into itself. To this end, we start by proving that the supremand in (3.4) is continuous in (c, x). Let c_n → c and x_n → x. Then, since h ∈ LCM_2, x_n − c_n − h(x_n) → x − c − h(x). By Assumption A.2 (see also (2.1)), we have F(· | x_n − c_n − h(x_n)) → F(· | x − c − h(x)), and


hence, since v ∈ CM_1, by a well-known characterization of weak convergence of measures,

    ∫ v(x′) dF(x′ | x_n − c_n − h(x_n)) → ∫ v(x′) dF(x′ | x − c − h(x)).  (3.5)

Continuity of the supremand of (3.4) follows from (3.5). Furthermore, the feasible set [0, K_1(x)] clearly represents a continuous correspondence, in view of A.3. Hence, by the Maximum theorem, T(v) is continuous.

Next, we show that T(v) is nondecreasing. Let x_1 ≥ x_2. Then, by Assumption A.2 (i), Lemma 0.1, and the fact that v ∈ CM_1, we have, since x_1 − h(x_1) ≥ x_2 − h(x_2), for every c,

    u_1(c) + δ_1 ∫ v(x′) dF(x′ | x_1 − c − h(x_1)) ≥ u_1(c) + δ_1 ∫ v(x′) dF(x′ | x_2 − c − h(x_2)).  (3.6)

Since T(v)(x_1) is the sup of the LHS of (3.6) over c ∈ [0, K_1(x_1)], and T(v)(x_2) is the sup of the RHS of (3.6) over c ∈ [0, K_1(x_2)], and since [0, K_1(x_2)] ⊂ [0, K_1(x_1)] by A.3, we have T(v)(x_1) ≥ T(v)(x_2). This shows that T maps CM_1 into itself.

Next, observe that CM_1, endowed with the uniform distance, defined by d(γ_1, γ_2) = sup |γ_1(x) − γ_2(x)|, is a closed subset of the Banach space C, consisting of bounded continuous functions on [0, +∞), endowed with the sup norm.
Hence CM_1 is a complete metric space.

A standard argument in discounted dynamic programming theory shows that T is a contraction with a unique fixed point V_h ∈ CM_1, which thus satisfies (3.3).

A best response of Player I to h is defined as any argmax of (3.3), for x ∈ [0, +∞). For the next results, it is convenient to define two alternative choice variables for Player I. Instead of choosing c ∈ [0, K_1(x)], Player I may be thought of as choosing z = x − c, z ∈ [x − K_1(x), x]. Then (3.3) may be rewritten in the equivalent form

V_h(x) = max{ u_1(x − z) + δ_1 ∫ V_h(x′) dF(x′ | z − h(x)) : z ∈ [x − K_1(x), x] }.   (3.7)

Likewise, Player I may be viewed as choosing joint investment y = x − c − h(x), with y ∈ [x − K_1(x) − h(x), x − h(x)]. Then (3.3) becomes

V_h(x) = max{ u_1[x − y − h(x)] + δ_1 ∫ V_h(x′) dF(x′ | y) : y ∈ [x − K_1(x) − h(x), x − h(x)] }.   (3.8)
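The contraction argument can be illustrated numerically. The sketch below uses stand-in primitives that are assumptions for illustration only, not the paper's: u_1(c) = √c, δ_1 = 0.9, capacity K_1(x) = 0.5x, a fixed rival policy h(x) = 0.3x, and a multiplicative transition x′ = yε with ε uniform on [0, 2], whose c.d.f. F(x′ | y) = min(x′/2y, 1) is convex in y in the spirit of Assumption A.2. The state space is truncated to [0, 10], and v is extended off the grid by its edge values.

```python
import numpy as np

DELTA = 0.9                                  # discount factor delta_1 (assumed)
X = np.linspace(0.0, 10.0, 81)               # truncated grid for the state space [0, +inf)
EPS = np.linspace(0.05, 1.95, 20)            # quadrature nodes for the shock eps ~ U[0, 2]

def h(x):
    return 0.3 * x                           # fixed rival policy in LCM_2 (assumed)

def T(v):
    """One application of the operator (3.4) on the grid."""
    out = np.empty_like(X)
    for i, x in enumerate(X):
        c = np.linspace(0.0, 0.5 * x, 101)   # feasible set [0, K_1(x)] with K_1(x) = 0.5 x
        y = x - c - h(x)                     # joint investment x - c - h(x), here >= 0.2 x
        # continuation value E[v(y * eps)]; np.interp extends v by its edge values
        cont = np.interp(np.outer(y, EPS), X, v).mean(axis=1)
        out[i] = np.max(np.sqrt(c) + DELTA * cont)
    return out

v = np.zeros_like(X)
gaps = []                                    # sup-norm distances between successive iterates
for _ in range(60):
    v_next = T(v)
    gaps.append(np.max(np.abs(v_next - v)))
    v = v_next

print(all(gaps[k + 1] <= DELTA * gaps[k] + 1e-9 for k in range(len(gaps) - 1)))
print(bool(np.all(np.diff(v) >= -1e-4)))     # the fixed point is (weakly) nondecreasing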


The next result shows that, if h ∈ LCM_2, there is a unique best response g.

LEMMA 1.2. If h ∈ LCM_2, there is a unique stationary best response g by Player I.

Proof. We show that the maximand in (3.8) is a strictly concave function of y, for each x ∈ [0, +∞). Since u_1 is strictly concave, it suffices to show that the integral term is concave in y. To this end, first note that F(x | ·) being convex implies that for any y_1, y_2 ∈ [x − K_1(x) − h(x), x − h(x)], any λ ∈ [0, 1], and any x′ ∈ [0, +∞),

F[x′ | λy_1 + (1 − λ)y_2] ≤ λF(x′ | y_1) + (1 − λ)F(x′ | y_2).   (3.9)

Then, consider

λ ∫ V_h(x′) dF(x′ | y_1) + (1 − λ) ∫ V_h(x′) dF(x′ | y_2)
  = ∫ V_h(x′) d[λF(x′ | y_1) + (1 − λ)F(x′ | y_2)]
  ≤ ∫ V_h(x′) dF[x′ | λy_1 + (1 − λ)y_2],

as a consequence of (3.9), Lemma 0.1, and the fact that V_h is nondecreasing (in fact V_h ∈ CM_1, by Lemma 1.1). This shows that the integral term is concave in y. Since the feasible set [x − K_1(x) − h(x), x − h(x)] is a convex interval for fixed x ∈ [0, +∞), there is a unique argmax in (3.8) and hence also in (3.3). This completes the proof of Lemma 1.2.

LEMMA 1.3. If h ∈ LCM_2, the unique best response g by Player I is also in LCM_1.

Proof. From Lemma 1.2, we know that g is single-valued. We now show that g is nondecreasing. To this end, observe that the concavity of the integral term of (3.8) implies the supermodularity of the integral term of (3.3) in (c, x), as a consequence of Lemma 0.2. Furthermore, the correspondence x → [0, K_1(x)] is clearly ascending since K_1 is nondecreasing. Hence, by Topkis' theorem, g is nondecreasing.

Next, we show that no slope of g can exceed one. Both terms of the maximand of (3.7) are supermodular in (x, z), again as a consequence of Lemma 0.2 and the fact that h is nondecreasing (since h ∈ LCM_2). Also, the correspondence [x − K_1(x), x] is ascending in x, since the function x − K_1(x) is increasing, as a consequence of Assumption A.3. Hence, by Topkis' theorem, the argmax in (3.7) is nondecreasing, which is equivalent to saying that the slopes of g (the argmax in (3.3)) are all less than one, since z = x − c.
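Both Topkis-style conclusions — g nondecreasing in x, and z = x − g(x) nondecreasing, i.e., slopes of g at most one — can be checked on a toy instance. In the sketch below the continuation term δ_1 ∫ V_h(x′) dF(x′ | y) is replaced by a concave nondecreasing stand-in 0.9√y, and u_1, K_1, and h take assumed functional forms; none of these are the paper's primitives.

```python
import numpy as np

def best_response(x):
    """Argmax of the supremand in (3.4) over [0, K_1(x)], with the continuation
    term delta_1 * Int V_h dF(. | y) replaced by the concave nondecreasing
    stand-in 0.9 * sqrt(y); K_1(x) = 0.5 x and h(x) = 0.3 x are assumed."""
    c = np.linspace(0.0, 0.5 * x, 400)       # feasible consumptions
    y = x - c - 0.3 * x                      # joint investment x - c - h(x)
    vals = np.sqrt(c) + 0.9 * np.sqrt(y)     # strictly concave supremand
    return c[np.argmax(vals)]

xs = np.linspace(0.0, 10.0, 101)
g = np.array([best_response(x) for x in xs])
z = xs - g                                   # auxiliary choice variable z = x - c

print(bool(np.all(np.diff(g) >= -0.02)))     # g nondecreasing (up to grid noise)
print(bool(np.all(np.diff(z) >= -0.02)))     # slopes of g never exceed one
```

In this particular instance the argmax is available in closed form, c*(x) = 0.7x/1.81 ≈ 0.387x, so both monotonicity and the slope bound are visible directly; the grid search mimics what Topkis' theorem guarantees even without differentiability.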


We have thus shown that the slopes of g are all in the interval [0, 1]. So g ∈ LCM_1. This completes the proof of Lemma 1.3.

We are now in a position to formally define B, the best response map for the infinite-horizon game, which is thus single-valued by Lemma 1.2:

B : LCM_1 × LCM_2 → LCM_1 × LCM_2,  (g, h) ↦ (g′, h′),

where

V_h(x) = u_1[g′(x)] + δ_1 ∫ V_h(x′) dF(x′ | x − g′(x) − h(x))

and

V_g(x) = u_2[h′(x)] + δ_2 ∫ V_g(x′) dF(x′ | x − g(x) − h′(x)).

Endow LCM_i with the topology of uniform convergence on compact subsets of [0, +∞), denoted →_u for short. We write LCM_i for the resulting topological space as well.

LEMMA 1.4. B is a continuous map from LCM_1 × LCM_2 to itself.

Proof. We show continuity along one coordinate, say h → g′, or equivalently, h → y′, where y′(x) = x − g′(x) − h(x), cf. (3.8). Note that as h varies in LCM_2, the possible y′'s that can arise also form an equicontinuous family (since their slopes are bounded). Hence, pointwise and uniform convergence are equivalent for the y's.

Let h_n →_u h and suppose (passing to a subsequence if necessary, which is justified since the range of B is compact by the Arzelà–Ascoli theorem) that y′_n →_u y′. We must show that y′ is the best response to h, or, in other words, that y′ is the argmax in (3.8).

We have, with V_n denoting V_{h_n},

V_n(x) = u_1[x − y′_n(x) − h_n(x)] + δ_1 ∫ V_n(x′) dF(x′ | y′_n(x))   (3.10a)
       = max{ u_1[x − y − h_n(x)] + δ_1 ∫ V_n(x′) dF(x′ | y) : y ∈ [x − K_1(x) − h_n(x), x − h_n(x)] }.   (3.10b)

Since h_n ∈ LCM_2, V_n ∈ CM_1 by Lemma 1.1. By Helly's theorem (see Billingsley, 1968), we conclude that (V_n) has a subsequence, itself relabeled V_n w.l.o.g., converging pointwise to a nondecreasing (but possibly discontinuous) limit V.


Thus V may, a priori, fail to be in CM_1. To complete this proof, we need to establish that V must satisfy

V(x) = u_1[x − y′(x) − h(x)] + δ_1 ∫ V(x′) dF(x′ | y′(x))   (3.11a)
     = max{ u_1[x − y − h(x)] + δ_1 ∫ V(x′) dF(x′ | y) : y ∈ [x − K_1(x) − h(x), x − h(x)] },   (3.11b)

from which we also conclude by Lemma 1.1, since h ∈ LCM_2, that V ∈ CM_1. To this end, we would like to show that the maximand in (3.10b) converges uniformly in y, for fixed x, to the maximand in (3.11b), and then invoke Lemma 0.3 to conclude that (3.11) holds. However, because of the dependence of the feasible set in (3.10b) on n, we need an intermediate step. Define, for each fixed x ∈ [0, +∞), W^x_n, W^x : [0, x] → R by (note here that one can think of W^x_n and W^x as representing the maximands of (3.10b) and (3.11b), respectively, but defined on the common domain [0, x] without changing their maxima)

W^x_n(y) =
  u_1[K_1(x)] + δ_1 ∫ V_n(x′) dF(x′ | x − K_1(x) − h_n(x)) − p(y − x + K_1(x) + h_n(x)),  if y ∈ [0, x − K_1(x) − h_n(x)];
  u_1[x − y − h_n(x)] + δ_1 ∫ V_n(x′) dF(x′ | y),  if y ∈ [x − K_1(x) − h_n(x), x − h_n(x)];
  u_1(0) + δ_1 ∫ V_n(x′) dF(x′ | x − h_n(x)) − p(x − h_n(x) − y),  if y ∈ [x − h_n(x), x],

where p is any Lipschitz-continuous strictly decreasing (penalty) function with p(0) = 0, and

W^x(y) =
  u_1[K_1(x)] + δ_1 ∫ V(x′) dF(x′ | x − K_1(x) − h(x)) − p(y − x + K_1(x) + h(x)),  if y ∈ [0, x − K_1(x) − h(x)];
  u_1[x − y − h(x)] + δ_1 ∫ V(x′) dF(x′ | y),  if y ∈ [x − K_1(x) − h(x), x − h(x)];
  u_1(0) + δ_1 ∫ V(x′) dF(x′ | x − h(x)) − p(x − h(x) − y),  if y ∈ [x − h(x), x].

For extra clarity, note that for each x ∈ [0, +∞), W^x_n and W^x are continuous in y and that, in view of the properties of p(·), they achieve their maximum within the feasible set of (3.10b) and (3.11b), respectively.

Now, for each fixed x, u_1[x − y − h_n(x)] →_u u_1[x − y − h(x)] in y. Also, by Lebesgue's dominated convergence theorem, we have, for every y,


∫ V_n(x′) dF(x′ | y) → ∫ V(x′) dF(x′ | y). Since V is nondecreasing, Lemma 0.1 and Assumption A.2(i) imply that ∫ V(x′) dF(x′ | y) and ∫ V_n(x′) dF(x′ | y) are nondecreasing in y. Furthermore, Assumptions A.2(ii)–(iii) imply (note that Lemma 1.2 also holds for V ∈ M_1 even if V ∉ CM_1) that ∫ V(x′) dF(x′ | y) is also continuous in y (see (2.1)). Hence, by Polya's theorem (see, e.g., Bartle, 1976), we have

∫ V_n(x′) dF(x′ | y) →_u ∫ V(x′) dF(x′ | y).

Then, by construction, we have, for each fixed x ∈ [0, +∞), W^x_n(y) →_u W^x(y) in y ∈ [0, x]. By Lemma 0.3, together with the observation that

y′_n(x) = arg max W^x_n(y)  and  y′(x) = arg max W^x(y),

and the hypothesis y′_n(x) →_u y′(x), the desired conclusion, i.e., (3.11a)–(3.11b), follows with V ∈ CM_1 as mentioned earlier. Thus y′ is the best response to h. The proof of Lemma 1.4 is now complete.

We are now ready for the proof of Proposition 1.

Proof of Proposition 1. By the Arzelà–Ascoli theorem, LCM_i is a norm-compact subset of the Banach space of bounded continuous functions on [0, +∞) with sup norm. By Lemma 1.4, B is norm-continuous from LCM_1 × LCM_2 to itself. Finally, LCM_i is clearly a convex subset. Hence, all the hypotheses of the Schauder fixed-point theorem are satisfied, and B has a fixed point, which is easily seen to be a stationary equilibrium of the infinite-horizon game.

Next, we state and prove Proposition 2.

PROPOSITION 2. For every n ∈ {0, 1, ...}, the finite-horizon (n + 1)-period game G_n has a unique Nash equilibrium in Markovian strategies.

As before, the proof of Proposition 2 is broken down into three intermediate results: Lemmas 2.1–2.3. It is convenient to define the following auxiliary parametrized family of one-shot games: for a given pair v = (v_1, v_2) ∈ CM_1 × CM_2, let the game G_v be characterized by the action spaces [0, K_i(x)], i = 1, 2, and the payoffs

u_i(c_i) + δ_i ∫ v_i(x′) dF(x′ | x − c_1 − c_2).   (3.12)

Here, x ∈ [0, +∞) is to be viewed as a parameter. A strategy for Player i in G_v may be thought of as a Borel-measurable function γ_i from [0, +∞) to [0, +∞) satisfying 0 ≤ γ_i(x) ≤ K_i(x), i = 1, 2.


LEMMA 2.1. For every v = (v_1, v_2) ∈ CM_1 × CM_2, the game G_v has a Nash equilibrium in LCM_1 × LCM_2.

Proof. Repeat the argument for proving Proposition 1 step for step upon replacing V_h by v_1 (and similarly for Player II, V_g by v_2). Observe that the proof here is a special case of that of Proposition 1, since (v_1, v_2) is exogenously fixed. In particular, for the continuity step (Lemma 1.4), v_1 does not depend on n while V_1 does. The actual details of this proof are left to the reader.

Define the best response map B^x_v for the game G^x_v (which is best thought of as the game G_v with x fixed) as

B^x_v : [0, K_1(x)] × [0, K_2(x)] → [0, K_1(x)] × [0, K_2(x)],  (c_1, c_2) ↦ (c*_1, c*_2),

where

c*_1 = arg max{ u_1(c_1) + δ_1 ∫ v_1(x′) dF(x′ | x − c_1 − c_2) : c_1 ∈ [0, K_1(x)] }   (3.13a)
c*_2 = arg max{ u_2(c_2) + δ_2 ∫ v_2(x′) dF(x′ | x − c_1 − c_2) : c_2 ∈ [0, K_2(x)] }.   (3.13b)

Here, the single-valuedness of B^x_v above is due to the fact that the maximands in (3.13) are strictly concave functions of c_1 and c_2, respectively; see the proof of Lemma 1.2.

LEMMA 2.2. For each fixed x ∈ [0, +∞) and (v_1, v_2) ∈ CM_1 × CM_2, the map B^x_v is nonexpansive and monotone nonincreasing.

Proof. We show the desired conclusion for the map c_2 → c*_1. From Lemma 1.2, we know that ∫ v_1(x′) dF(x′ | ·) is a concave function. Hence, by Lemma 0.2, the integral term in (3.13a) is supermodular in (c_1, −c_2). The feasible set [0, K_1(x)] is independent of c_2, thus ascending in c_2. Therefore, by Topkis' theorem, c*_1 is nondecreasing in (−c_2), or nonincreasing in c_2.

As in the proof of Lemma 1.3, it is convenient here to rewrite (3.13a) with the decision variable being joint investment for Player I, i.e., y = x − c_1 − c_2, as

y* = arg max{ u_1(x − y − c_2) + δ_1 ∫ v_1(x′) dF(x′ | y) : y ∈ [x − K_1(x) − c_2, x − c_2] }.   (3.14)


The maximand in (3.14) is supermodular in (y, −c_2) by Lemma 0.2. Moreover, the feasible set [x − K_1(x) − c_2, x − c_2] is ascending in (−c_2), for fixed x ∈ [0, +∞). Hence, by Topkis' theorem once more, y* is nondecreasing in (−c_2), or nonincreasing in c_2. Note here that y*(c_2) is single-valued since the maximand in (3.14) is strictly concave in y. Now, since y*(c_2) = x − c*_1(c_2) − c_2, the fact that y* is nonincreasing in c_2 means that

[c*_1(c_2) − c*_1(c′_2)] / (c_2 − c′_2) ≥ −1,  for all c_2, c′_2 ∈ [0, K_2(x)].

Together with the fact that c*_1 is nonincreasing in c_2 (which is the first step in the present proof), this yields

−1 ≤ [c*_1(c_2) − c*_1(c′_2)] / (c_2 − c′_2) ≤ 0.   (3.15)

Obviously, a similar conclusion holds for c*_2(c_1). The desired conclusion is easily seen to follow from (3.15).

LEMMA 2.3. For each fixed x ∈ [0, +∞) and (v_1, v_2) ∈ CM_1 × CM_2, the game G^x_v has a unique Nash equilibrium in pure strategies.

Proof. Existence of a Nash equilibrium for the game G^x_v follows from Tarski's fixed-point theorem applied to the composite map B^x_v ∘ B^x_v, which is nondecreasing. A fixed point of the composition is easily seen to be a Nash equilibrium. This argument is due to Vives (1990). Alternatively, one may also use Brouwer's fixed-point theorem, since B^x_v is obviously continuous, as a consequence of (3.15), and the action spaces [0, K_i(x)] are compact.
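The Tarski route — iterating the nondecreasing composite map B^x_v ∘ B^x_v — lends itself to a small numerical sketch. The instance below fixes x = 4, K_i(x) = 1.6, u_i(c) = √c, and replaces δ_i ∫ v_i dF(· | y) by the concave nondecreasing stand-in 0.9√y; all of these are assumptions for illustration, not the paper's primitives. The code checks the slope condition (3.15) on the reaction curve and then iterates the composite map from both ends of the action space, producing two monotone sequences that meet at essentially the same point.

```python
import numpy as np

xbar, cap = 4.0, 1.6                         # fixed state x and capacity K_i(x) (assumed)
CGRID = np.linspace(0.0, cap, 321)           # common action grid

def br(c_other):
    """Best response (3.13) with u_i(c) = sqrt(c) and the continuation term
    delta_i * Int v_i dF(. | y) replaced by the stand-in 0.9 * sqrt(y)."""
    y = xbar - CGRID - c_other               # joint investment (here >= 0.8 > 0)
    vals = np.sqrt(CGRID) + 0.9 * np.sqrt(y)
    return CGRID[np.argmax(vals)]

# slope condition (3.15): the reaction curve has slopes in [-1, 0]
c2s = np.linspace(0.0, cap, 17)
curve = np.array([br(c2) for c2 in c2s])
slopes = np.diff(curve) / np.diff(c2s)

# Tarski-style iteration of the nondecreasing composite map br(br(.)),
# started from the two extreme actions
lo, hi = 0.0, cap
for _ in range(60):
    lo, hi = br(br(lo)), br(br(hi))

print(slopes.min(), slopes.max())
print(lo, hi)
```

Starting below, the iterates increase toward the least fixed point of the composite map; starting above, they decrease toward the greatest one. That the two limits coincide (up to grid resolution) is the numerical face of the uniqueness argument.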
To establish uniqueness of the Nash equilibrium of G^x_v, assume, for eventual contradiction, that G^x_v has two distinct Nash equilibria (a_1, b_1) and (a_2, b_2) with, say, a_1 > a_2 and hence b_1 < b_2 (since the best-response function is nonincreasing). By (3.15), we have

−1 ≤ [c*_1(b_1) − c*_1(b_2)] / (b_1 − b_2) = (a_1 − a_2) / (b_1 − b_2) ≤ 0   (3.16a)

and similarly for Player II

−1 ≤ [c*_2(a_1) − c*_2(a_2)] / (a_1 − a_2) = (b_1 − b_2) / (a_1 − a_2) ≤ 0.   (3.16b)

It follows from the first inequalities in (3.16a) and (3.16b) that a_1 − a_2 = −(b_1 − b_2), i.e., that the first inequalities in (3.16a) and (3.16b) are actually equalities. This, in turn, implies that, say for Player I,

[c*_1(c_2) − c*_1(c′_2)] / (c_2 − c′_2) = −1,  for all distinct c_2, c′_2 in [b_1, b_2].   (3.17)


FIG. 1. Reaction functions for G^x_v.

This is because the existence of slopes of c*_1 strictly larger than −1 within [b_1, b_2] would imply the presence of slopes less than −1 as well, for a_1 − a_2 = −(b_1 − b_2) to hold. Furthermore, c*_1 is clearly interior on (b_1, b_2) as a result of (3.17), or in other words (see Fig. 1), 0 < c*_1(c_2)


[b_1, b_2], (3.17) implies that c*_1(c_2) + c_2 is a constant, say C. We now claim that the integral term in (3.19) is differentiable at x − c*_1(c_2) − c_2 = x − C. Suppose not. Then, from (3.19), we conclude that V^x_v(c_2) − u_1[c*_1(c_2)] is not differentiable at any c_2 ∈ [b_1, b_2], a contradiction, since both V^x_v(·) and u_1[c*_1(·)] are differentiable for almost all c_2, the latter being so in view of (3.17). Therefore, invoking also (3.18), the following first-order condition must hold, for almost all c_2 ∈ [b_1, b_2], for the maximization in (3.13a):

u′_1[c*_1(c_2)] = δ_1 · d/dz [ ∫ v_1(x′) dF(x′ | z) ] |_{z = x − c*_1(c_2) − c_2}.   (3.20)

Let c̄_2 and ĉ_2 be two distinct points in [b_1, b_2] for which (3.20) holds. Since c*_1(c̄_2) + c̄_2 = c*_1(ĉ_2) + ĉ_2 = C, the RHS of (3.20) is the same when c_2 = ĉ_2 as when c_2 = c̄_2. Hence, u′_1[c*_1(c̄_2)] = u′_1[c*_1(ĉ_2)], or, in view of the monotonicity of u′_1, c*_1(c̄_2) = c*_1(ĉ_2), a contradiction to (3.17). Hence there cannot be more than one equilibrium for G^x_v, and this completes the proof of Lemma 2.3.

Now we are ready for the proof of Proposition 2.

Proof of Proposition 2. We present an argument by induction on n. For n = 0, i.e., for the one-period game G_0, Player i solves the following optimization problem: max{ u_i(c_i) : c_i ∈ [0, K_i(x)] }.
Therefore, the unique equilibrium consumption pair is (K_1(x), K_2(x)), with corresponding value functions (V_1^1, V_2^1) = (u_1[K_1(x)], u_2[K_2(x)]).

Then, the two-period problem (or n = 1) has payoffs given by (3.12) with v_i replaced by V_i^1, i = 1, 2. The action spaces are, of course, independent of n. Since V_i^1 ∈ CM_i, Lemma 2.1 guarantees the existence of an equilibrium in LCM_1 × LCM_2 for the game G_1 = G_{V^1}. But, by Lemma 2.3, we have uniqueness of the equilibrium actions of the game G^x_{V^1}, for every x ∈ [0, +∞). Therefore G_{V^1} has a unique equilibrium strategy pair, and furthermore, this pair is in LCM_1 × LCM_2.

Let the equilibrium payoffs of G_1 be (V_1^2, V_2^2) and repeat the above argument for n = 2, 3, .... This process clearly yields the desired conclusion.

REFERENCES

Amir, R. (1989). "On the Continuity of the Best-Response Map in Some Dynamic Games," unpublished note.
Amir, R. (1990). "Strategic Common Property Capital Accumulation: Existence of Nash Equilibria," preprint.
Bartle, R. (1976). The Elements of Real Analysis. New York: Wiley.
Basar, T., and Olsder, G. J. (1982). Dynamic Non-cooperative Game Theory. New York: Academic Press.


Benhabib, J., and Radner, R. (1988). Joint Exploitation of a Productive Asset. New Jersey: Bell Laboratories.
Bernheim, D., and Ray, D. (1983). "Altruistic Growth Economies, I. Existence of Bequest Equilibria," IMSSS D.P. 419, Stanford University.
Bernheim, D., and Ray, D. (1987). "Economic Growth with Intergenerational Altruism," Rev. Econ. Stud. 54, 227–242.
Bertsekas, D., and Shreve, S. (1978). Stochastic Optimal Control: The Discrete-Time Case. New York: Academic Press.
Billingsley, P. (1968). Convergence of Probability Measures. New York: Wiley.
Boldrin, M., and Montrucchio, L. (1986). "On the Indeterminacy of Capital Accumulation Paths," J. Econ. Theory 40, 26–39.
Brock, W., and Mirman, L. (1972). "Optimal Growth under Uncertainty: The Discounted Case," J. Econ. Theory 4, 479–513.
Budd, C., Harris, C., and Vickers, J. (1992). "A Model of the Evolution of Duopoly: Does the Asymmetry Between Firms Tend to Increase or Decrease?" mimeo, Oxford University.
Cyert, R. M., and DeGroot, M. H. (1970). "Multiperiod Decision Models with Alternating Choice as a Solution to the Duopoly Problem," Quart. J. Econ. 84, 410–429.
Dana, R.-A., and Montrucchio, L. (1991). "On Markovian Strategies in Dynamic Games," preprint.
Debreu, G. (1952). "Social Equilibrium Existence Theorem," Proc. Nat. Acad. Sci. U.S.A. 38, 886–893.
Dutta, P., and Sundaram, R. (1990). "Stochastic Games of Resource Extraction: Existence Theorems for Discounted and Undiscounted Models," W.P. No. 241, Univ. of Rochester.
Harris, C. (1985). "Existence and Characterization of Perfect Equilibrium in Games of Perfect Information," Econometrica 53, 613–628.
Harris, C., and Vickers, J. (1987). "Racing with Uncertainty," Rev. Econ. Stud. 54, 1–21.
Hellwig, M., and Leininger, W. (1987). "On the Existence of Subgame-Perfect Equilibrium in Infinite-Action Games of Perfect Information," J. Econ. Theory 43, 55–75.
Hellwig, M., and Leininger, W. (1988). "The Existence of Markov-Perfect Equilibrium in Games of Perfect Information," SFB 303, No. A-183, Universität Bonn.
Kall, P. (1986). "Approximation to Optimization Problems: An Elementary Review," Math. Oper. Res. 11, 9–18.
Levhari, D., and Mirman, L. (1980). "The Great Fish War: An Example Using a Dynamic Cournot–Nash Solution," Bell J. Econ., 322–344.
Lane, J., and Leininger, W. (1986). "Price-Characterisation and Pareto-Efficiency of Game-Equilibrium Growth," Z. Nationalökonomie/J. Econ. 46, 347–367.
Leininger, W. (1986). "The Existence of Perfect Equilibria in a Model of Growth with Altruism between Generations," Rev. Econ. Stud. 53, 349–367.
Majumdar, M., and Sundaram, R. (1991). "Symmetric Stochastic Games of Resource Extraction: Existence of Nonrandomized Stationary Equilibrium," in Stochastic Games and Related Topics (Raghavan et al., Eds.). Dordrecht: Kluwer.
Maskin, E., and Tirole, J. (1988a). "A Theory of Dynamic Oligopoly. I. Overview and Quantity Competition with Large Fixed Costs," Econometrica 56, 549–569.
Maskin, E., and Tirole, J. (1988b). "A Theory of Dynamic Oligopoly. II. Price Competition, Kinked Demand Curves and Edgeworth Cycles," Econometrica 56, 571–599.
Milgrom, P., and Roberts, J. (1990). "Rationalizability, Learning and Equilibrium in Games with Strategic Complementarities," Econometrica 58, 1255–1278.


Raghavan, T., Ferguson, T., Parthasarathy, T., and Vrieze, O. (1991). Stochastic Games and Related Topics (Volume Dedicated to L. Shapley). Dordrecht: Kluwer.
Reinganum, J., and Stokey, N. (1988). "Oligopoly Extraction of Common Property Natural Resource: Importance of the Period of Commitment in Dynamic Games," Int. Econ. Rev. 26(1), 161–174.
Shubik, M., and Whitt, W. (1973). "Fiat Money in an Economy with One Nondurable Good and No Credit," in Topics in Differential Games (A. Blaquiere, Ed.), pp. 401–448. Amsterdam: North-Holland.
Sobel, M. (1988). "Isotone Comparative Statics for Supermodular Games," preprint, SUNY–Stony Brook.
Stokey, N. L., Lucas, R. E., and Prescott, E. (1989). Recursive Methods in Economic Dynamics. Cambridge: Harvard Univ. Press.
Stoyan, D. (1983). Comparison Methods for Queues and Other Stochastic Models. New York: Wiley.
Sundaram, R. K. (1989). "Perfect Equilibrium in Non-Randomized Strategies in a Class of Symmetric Dynamic Games," J. Econ. Theory 47, 153–177.
Topkis, D. (1978). "Minimizing a Submodular Function on a Lattice," Oper. Res. 26, 305–321.
Topkis, D. (1979). "Equilibrium Points in Nonzero-Sum n-Person Submodular Games," SIAM J. Control Optim. 17, 773–787.
Vives, X. (1990). "Nash Equilibrium with Strategic Complementarities," J. Math. Econ. 19, 305–321.
