Lecture 10: The Revelation Principle;
Screening
ECON4411/8011 – Microeconomic Theory
References
• MWG (23.B)
• Börgers (2)
• Mailath (10)
The mechanism design problem
How should a collective decision be made, if
1. the chosen outcome affects a number of agents, and
2. the agents have private information regarding their preferences
over alternative outcome choices?
Questions:
• Can the agents’ individual preferences be aggregated into a social
ranking of alternatives (e.g., through a voting procedure)?
(Social choice theory)
• Can the agents be induced to reveal their private information
regarding preferences? (Mechanism design)
Applications: the choice of a public project, provision of a public
good, voting, auctions, exchange economies, contracts, . . .
We assume the existence of a social planner/mechanism designer, who
has the authority to establish an institution/mechanism through
which collective decisions are made, and aims to attain various
objectives (e.g., efficient allocation, revenue maximisation).
A general model
A collective choice problem is a tuple $\langle N, Z, (T_i)_{i \in N}, \pi, (u_i)_{i \in N} \rangle$, such
that
(i) $N = \{1, \dots, n\}$ is a set of agents,
(ii) $Z$ is a set of possible outcome choices,
(iii) for each agent $i \in N$, $T_i$ is a set of types,
(iv) the Cartesian product $T = \times_{i \in N} T_i$ defines a state space with
associated signal functions $\tau_i : T \to T_i$, so that for every $i \in N$
and $t = (t_1, \dots, t_i, \dots, t_n) \in T$, $\tau_i(t) = t_i$,
(v) each agent $i \in N$ knows his own type $t_i \in T_i$, and has beliefs over
the types $t_{-i} \in T_{-i}$ of the other agents that are derived from a
common prior $\pi \in \Delta(T)$, such that
\[ \pi(t_{-i} \mid t_i) = \frac{\pi(t_i, t_{-i})}{\sum_{t'_{-i} \in T_{-i}} \pi(t_i, t'_{-i})} \]
(the common prior is assumed for convenience, and is not
required for many of the results we derive),
(vi) for each $i \in N$, $u_i : Z \times T \to \mathbb{R}$ is a Bernoulli payoff function
(allowing for common values).
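To make the conditional-belief formula in (v) concrete, here is a minimal Python sketch for a two-agent case; the type labels and prior values are made-up illustrations, not part of the lecture.

```python
# Posterior beliefs derived from a common prior, as in part (v).
# Two agents, two types each; the prior is a made-up illustration.
T1 = ["low", "high"]
T2 = ["low", "high"]
prior = {  # common prior pi over T = T1 x T2 (sums to 1)
    ("low", "low"): 0.4, ("low", "high"): 0.1,
    ("high", "low"): 0.2, ("high", "high"): 0.3,
}

def belief(t1):
    """Agent 1's posterior over agent 2's type given own type t1:
    pi(t2 | t1) = pi(t1, t2) / sum_{t2'} pi(t1, t2')."""
    marginal = sum(prior[(t1, t2)] for t2 in T2)
    return {t2: prior[(t1, t2)] / marginal for t2 in T2}

print(belief("low"))   # {'low': 0.8, 'high': 0.2}
print(belief("high"))  # {'low': 0.4, 'high': 0.6}
```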
Decision rules
Any institution on the basis of which a collective choice from $Z$ is
made maps states $t \in T$ into outcomes $z \in Z$.
Definition
(i) A decision rule (or social choice function) is a function $d : T \to Z$
that assigns an outcome in $Z$ to every state of the world in $T$.
Hence, the set of decision rules is $Z^T$.
(ii) A decision rule $d : T \to Z$ is ex post efficient if there is no state
$t \in T$ for which there exists a $z \in Z$, such that
$u_i(z, t) \ge u_i(d(t), t)$ for all $i \in N$, and $u_j(z, t) > u_j(d(t), t)$ for
some $j \in N$.
Now consider a social planner who knows $\pi$ (but not the true state $t$),
has a well-defined objective (e.g., ex post efficiency), and would like to
choose a decision rule in $Z^T$ that attains this objective:
• Can the social planner implement an optimal (according to his
objective) decision rule?
• Not unless he can induce the agents to reveal their private
information/types!
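The ex post efficiency test in (ii) can be brute-forced on finite sets. A minimal Python sketch, assuming a made-up two-agent payoff table and candidate decision rule:

```python
# Brute-force check of ex post efficiency over finite N, Z, T,
# following definition (ii). All primitives are made-up examples.
N = [1, 2]
Z = ["a", "b"]
T = [("L", "L"), ("L", "H"), ("H", "H")]

def u(i, z, t):
    # illustrative payoffs: agent 1 prefers "a", agent 2 prefers "b",
    # except in state ("H", "H"), where both prefer "a"
    if t == ("H", "H"):
        return 1.0 if z == "a" else 0.0
    return 1.0 if (i == 1) == (z == "a") else 0.0

def d(t):  # candidate decision rule
    return "a" if t == ("H", "H") else "b"

def ex_post_efficient(d):
    for t in T:
        for z in Z:
            if (all(u(i, z, t) >= u(i, d(t), t) for i in N)
                    and any(u(j, z, t) > u(j, d(t), t) for j in N)):
                return False  # z Pareto-dominates d(t) in state t
    return True

print(ex_post_efficient(d))  # True: no state admits a Pareto improvement
```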
Mechanisms
Can the social planner design an institution/game/mechanism that
achieves an optimal decision rule in equilibrium?
Definition
A mechanism is a pair $M = \langle (M_i)_{i \in N}, \eta \rangle$, such that
(i) for each $i \in N$, $M_i$ is a set of messages/actions, and
(ii) $\eta : M \to Z$ is an outcome function that assigns an outcome in $Z$
to every profile of messages $m \in M := \times_{i \in N} M_i$.
Combined with the given collective choice problem, a mechanism $M$
defines a Bayesian game. A (pure strategy) Bayesian Nash
equilibrium (BNE) of $M = \langle (M_i)_{i \in N}, \eta \rangle$ is a BNE of the
corresponding Bayesian game, i.e., a profile $s \in \times_{i \in N} M_i^{T_i}$ of strategies
$s_i : T_i \to M_i$, such that for every $i$ and $t_i$,
\[ s_i(t_i) \in \arg\max_{m_i \in M_i} \left\{ \sum_{t_{-i} \in T_{-i}} \pi(t_{-i} \mid t_i)\, u_i(\eta(m_i, s_{-i}(t_{-i})), t_i, t_{-i}) \right\}. \]
Example: (first-price) auction
Implementation
Definition
A mechanism $M = \langle (M_i)_{i \in N}, \eta \rangle$ implements a decision rule $d : T \to Z$
if there exists a BNE $s \in \times_{i \in N} M_i^{T_i}$ of $M$, such that $\eta \circ s = d$ (i.e.,
$\eta(s_1(t_1), \dots, s_n(t_n)) = [\eta \circ s](t) = d(t)$ for every $t \in T$).
Remarks:
• Every mechanism $M = \langle (M_i)_{i \in N}, \eta \rangle$ and corresponding BNE $s'$
implements some decision rule $d' := \eta \circ s'$. The question of
implementability asks whether for some given decision rule $d$,
there exists a mechanism $M$ with BNE $s$ for which $\eta \circ s = d$.
• If a mechanism $M$ implements a decision rule $d$, the mechanism
may have other equilibria $s'$ for which $\eta \circ s' \neq d$. Not requiring
that the decision rule is implemented by a unique equilibrium of
$M$ implicitly assumes that the agents will play the social
planner's preferred equilibrium strategies.
• If $\eta \circ s = d$ for every BNE of $M$, we say that $M$ strongly (or
fully) implements $d$ (see O&R, ch. 10).
Direct revelation mechanisms
Definition
(i) A direct revelation mechanism (DRM) is a mechanism for which
$M_i = T_i$ for all $i \in N$. We denote the outcome function of a
DRM by $\phi$.
(ii) A DRM $D_\phi = \langle (T_i)_{i \in N}, \phi \rangle$ implements a decision rule $d$ if there
exists a BNE $s$ of $D_\phi$ such that $\phi \circ s = d$.
(iii) A DRM $D_\phi$ truthfully implements a decision rule $d$ if there exists a
BNE $s$ for which $s_i(t_i) = t_i$, i.e., truth telling is an equilibrium
strategy for each agent, and $\phi \circ s = d$.
(Note that if such a BNE exists for the DRM $D_\phi$, since the
corresponding strategies $s_i$ are identity functions, it must be the
case that $\phi = d$.)
Example: first-price vs. second-price auction
Proposition (10.1)
A DRM $D_\phi = \langle (T_i)_{i \in N}, \phi \rangle$ truthfully implements a decision rule $d$ if and only if
truth telling is a BNE of the DRM $\langle (T_i)_{i \in N}, d \rangle$, i.e., if for all $i$ and $t_i$,
\[ \mathbb{E}_\pi \left[ u_i(d(t_i, t_{-i}), t_i, t_{-i}) \mid t_i \right] \ge \mathbb{E}_\pi \left[ u_i(d(\hat{t}_i, t_{-i}), t_i, t_{-i}) \mid t_i \right] \]
for all $\hat{t}_i \in T_i$, where
\[ \mathbb{E}_\pi \left[ u_i(d(\hat{t}_i, t_{-i}), t_i, t_{-i}) \mid t_i \right] = \sum_{t_{-i} \in T_{-i}} \pi(t_{-i} \mid t_i)\, u_i(d(\hat{t}_i, t_{-i}), t_i, t_{-i}). \]
The inequality above is referred to as an incentive compatibility
constraint, since it ensures that no player has an incentive to
misrepresent his type, as long as the other players tell the truth.
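To see the incentive compatibility constraint in action, the sketch below brute-forces the inequality for a two-bidder second-price auction viewed as a DRM (echoing the auction example above); the valuations, iid uniform prior, and tie-breaking rule are illustrative assumptions.

```python
# Verify that truth telling is a BNE of the DRM <(T_i), d> by checking
# the IC inequality type-by-type. All primitives are illustrative.
types = [1.0, 2.0]                 # each bidder's type set (valuations)

def belief(t_i):                   # pi(t_j | t_i): iid uniform prior
    return {t_j: 0.5 for t_j in types}

def d(r0, r1):
    """Second-price auction as a decision rule: the higher report wins
    (ties go to bidder 0) and pays the other bidder's report."""
    winner = 0 if r0 >= r1 else 1
    return winner, (r1 if winner == 0 else r0)

def u(i, outcome, v_i):            # quasilinear payoff of bidder i
    winner, price = outcome
    return v_i - price if winner == i else 0.0

def exp_payoff(i, t_i, report):
    """Expected payoff of reporting `report` when the rival is truthful."""
    return sum(pr * u(i, d(report, t_j) if i == 0 else d(t_j, report), t_i)
               for t_j, pr in belief(t_i).items())

ic = all(exp_payoff(i, t, t) >= exp_payoff(i, t, r)
         for i in (0, 1) for t in types for r in types)
print(ic)                          # True: truthful reporting is a BNE
```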
The (Bayesian) revelation principle
Proposition (10.2 – Revelation Principle)
A decision rule $d : T \to Z$ can be implemented by some mechanism
$M = \langle (M_i)_{i \in N}, \eta \rangle$ if and only if it can be truthfully implemented using the
DRM $\langle (T_i)_{i \in N}, d \rangle$.
Proof. [$\Leftarrow$] Set $M_i = T_i$ and $\eta = d$.
[$\Rightarrow$] Let $M = \langle (M_i)_{i \in N}, \eta \rangle$ be a mechanism that implements $d$ using
an equilibrium strategy profile $s$. Then $\eta \circ s = d$.
Consider the DRM $D_\phi = \langle (T_i)_{i \in N}, \phi \rangle$ defined by $\phi := \eta \circ s$. We can
interpret $D_\phi$ as follows: instead of reporting a message $m_i$, each type $t_i$
reports a type $t'_i$, the social planner computes $s_i(t'_i)$ for each agent $i$'s
report $t'_i$, and chooses the outcome $\eta(s_1(t'_1), \dots, s_n(t'_n))$.
If an agent has an incentive to misrepresent his type in $D_\phi$, given that
the other agents are truthful, then $s$ cannot be a BNE of $M$. (Make
sure that you can write down the associated equilibrium conditions in
terms of expected payoffs!) Since $\phi = \eta \circ s = d$, this implies that
$\langle (T_i)_{i \in N}, d \rangle$ truthfully implements $d$. $\square$
[Hand-drawn diagram: the decision rule $d : T \to Z$ factored through the mechanism as $T \xrightarrow{s} M \xrightarrow{\eta} Z$, with $\phi = \eta \circ s$.]
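The construction in the [$\Rightarrow$] direction is purely mechanical, as a minimal Python sketch of $\phi := \eta \circ s$ shows; the outcome function and strategies below are placeholders, not claims about any particular game.

```python
# The revelation-principle construction: given a mechanism <(M_i), eta>
# and an equilibrium profile s, build the DRM outcome function
# phi = eta o s. The eta and s below are placeholder examples.
def make_drm(eta, strategies):
    """Return phi(t_1, ..., t_n) = eta(s_1(t_1), ..., s_n(t_n))."""
    def phi(*types):
        messages = (s_i(t_i) for s_i, t_i in zip(strategies, types))
        return eta(*messages)
    return phi

# Placeholder two-agent example: messages are bids, eta picks a winner.
eta = lambda m1, m2: "agent 1 wins" if m1 >= m2 else "agent 2 wins"
s = [lambda t: t / 2, lambda t: t / 2]   # hypothetical equilibrium shading

phi = make_drm(eta, s)
print(phi(3.0, 2.0))   # agent 1 wins -- same outcome as eta(s1(3), s2(2))
```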
Implications of the revelation principle
• In order to identify all decision rules that can be implemented in
a collective choice setting (through an arbitrary mechanism), we
only need to find all the decision rules $d$ that are truthfully
implementable by the DRM $\langle (T_i)_{i \in N}, d \rangle$.
• A mechanism designer who aims to implement a decision rule
that is optimal according to some criterion, but is constrained by
the fact that the agents possess private information, only needs
to "search" over the set of truthfully implementable decision rules
to identify the best (according to his objectives) decision rule that
is attainable in the given social choice setting.
• The revelation principle makes it easy to identify implementable
or optimal decision rules, although the corresponding DRMs may
be complicated and not very realistic. However, after identifying
an optimal decision rule using the revelation principle, we can
look for simple non-direct mechanisms that implement it.
Screening/Nonlinear pricing
= mechanism design problem with one agent
• agent $\equiv$ a buyer of an infinitely divisible good that can be
consumed at any non-negative quantity/quality $q \ge 0$
• mechanism designer $\equiv$ the seller of the good, a monopolist with
a linear production cost $cq$, where $c > 0$
The buyer has vNM preferences over quantity consumed $q \ge 0$ and
total payment $p$ that can be represented by a Bernoulli payoff
function $u_\theta(q, p) = \theta v(q) - p$, where
(i) $\theta$ is the buyer's type, which is known to the buyer but not to the
seller,
(ii) $v : \mathbb{R}_+ \to \mathbb{R}_+$ is a differentiable function that satisfies
$v(0) = 0$, $v'(q) > 0$ and $v''(q) < 0$ for all $q \ge 0$.
Hence, the buyer's preferences are quasilinear, in the sense that they
are separable in consumption and money, and linear/risk neutral with
respect to money.
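For a concrete parametrisation (an illustrative choice, not one specified in the lecture), $v(q) = \ln(1+q)$ satisfies all of these conditions, as a short sketch confirms:

```python
# Buyer's quasilinear payoff u_theta(q, p) = theta * v(q) - p with the
# illustrative choice v(q) = ln(1 + q): v(0) = 0, v'(q) = 1/(1+q) > 0,
# and v''(q) = -1/(1+q)^2 < 0 for all q >= 0.
import math

def v(q):
    return math.log(1.0 + q)

def u(theta, q, p):
    return theta * v(q) - p

# A higher type enjoys any given bundle (q, p) strictly more:
print(u(2.0, 1.0, 0.5) > u(1.0, 1.0, 0.5))   # True
```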
The seller does not know the buyer’s valuation for the good, and
wants to implement a selling mechanism that potentially allows him
to price discriminate, i.e., to charge a higher price to a higher
valuation buyer. Furthermore, the seller
(i) can commit to implementing a chosen selling mechanism,
(ii) believes that the buyer's type $\theta$ is drawn from a support
$\Theta \equiv [\underline{\theta}, \overline{\theta}] \subseteq \mathbb{R}_+$ according to a distribution function $F$ with
density $f$ that satisfies $f(\theta) > 0$ for all $\theta \in \Theta$ (one-dimensional
type space!), and
(iii) has an objective of maximising expected profits (the seller is risk
neutral).
To guarantee that the model has a non-trivial solution, we assume
that $\underline{\theta} v'(0) > c$ and $\lim_{q \to \infty} \overline{\theta} v'(q) < c$ (e.g., $v(q) = \ln(1+q)$
satisfies both whenever $\underline{\theta} > c$, since $v'(0) = 1$ and $\lim_{q \to \infty} v'(q) = 0$).
To find an optimal selling mechanism, we apply the revelation
principle and only search over the set of truthful DRMs.
A decision rule in this setting maps a type $\theta \in \Theta$ to an outcome
consisting of a pair of quantity consumed $q(\theta) \in \mathbb{R}_+$ and an associated
payment $p(\theta) \in \mathbb{R}$. Hence, a DRM is defined by a pair of functions
$\langle q, p \rangle \in \mathbb{R}_+^\Theta \times \mathbb{R}^\Theta$.
Incentive compatibility (IC) of a mechanism $\langle q, p \rangle$ is equivalent to
\[ \theta v(q(\theta)) - p(\theta) \ge \theta v(q(\theta')) - p(\theta') \quad \text{for all } \theta, \theta' \in \Theta. \]
To capture the requirement that purchasing is voluntary for a buyer,
we will also impose the following participation/individual rationality
(IR) constraint:
\[ \theta v(q(\theta)) - p(\theta) \ge 0 \quad \text{for all } \theta \in \Theta. \]
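On a discretised type space, both constraints are easy to check numerically. A minimal sketch, assuming a made-up two-type menu (not an optimal one):

```python
# Numerical check of the IC and IR constraints for a finite menu <q, p>.
# The type grid, v, and the menu itself are all made-up illustrations.
import math

v = lambda q: math.log(1.0 + q)
thetas = [1.0, 2.0]                    # discretised type space
q = {1.0: 1.0, 2.0: 3.0}               # quantity schedule
p = {1.0: 0.69, 2.0: 2.0}              # payment schedule

U = lambda th, rep: th * v(q[rep]) - p[rep]   # type th reporting rep

ic = all(U(th, th) >= U(th, rep) for th in thetas for rep in thetas)
ir = all(U(th, th) >= 0.0 for th in thetas)
print(ic, ir)   # True True: the menu is IC and IR on this grid
```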
Characterising the optimal mechanism
The seller's problem of finding an optimal selling mechanism $\langle q, p \rangle$
can then be summarised as
\[ \max_{\langle q, p \rangle \in \mathbb{R}_+^\Theta \times \mathbb{R}^\Theta} \int_{\underline{\theta}}^{\overline{\theta}} [p(\theta) - c\, q(\theta)] f(\theta)\, d\theta, \quad \text{subject to} \]
[IC] $\theta v(q(\theta)) - p(\theta) \ge \theta v(q(\theta')) - p(\theta')$ for all $\theta, \theta' \in \Theta$,
[IR] $\theta v(q(\theta)) - p(\theta) \ge 0$ for all $\theta \in \Theta$.
Lemma (10.3)
If $\langle q, p \rangle$ satisfies IC, then $q$ is non-decreasing in $\theta$.
Proof. For any $\theta > \theta'$, IC implies that
\begin{align*} \theta v(q(\theta)) - p(\theta) &\ge \theta v(q(\theta')) - p(\theta'), \\ \theta' v(q(\theta)) - p(\theta) &\le \theta' v(q(\theta')) - p(\theta') \end{align*}
\[ \Rightarrow\ v(q(\theta))(\theta - \theta') \ge v(q(\theta'))(\theta - \theta') \ \Leftrightarrow\ v(q(\theta)) \ge v(q(\theta')) \ \Leftrightarrow\ q(\theta) \ge q(\theta'). \ \square \]
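Lemma 10.3 can also be seen numerically: for a schedule in which the higher type receives less, no payments whatsoever can satisfy both IC constraints. A small grid-search illustration (all values made up):

```python
# Illustration of Lemma 10.3: a decreasing schedule q admits no IC
# payments. Two types, v(q) = ln(1+q); all numbers are made up.
import math

v = lambda q: math.log(1.0 + q)
th_lo, th_hi = 1.0, 2.0
q_lo, q_hi = 3.0, 1.0              # non-monotone: the higher type gets less

feasible = False
for dp in [x / 100.0 for x in range(-500, 501)]:      # dp = p_hi - p_lo
    ic_hi = th_hi * (v(q_hi) - v(q_lo)) >= dp         # hi type won't mimic lo
    ic_lo = th_lo * (v(q_lo) - v(q_hi)) >= -dp        # lo type won't mimic hi
    if ic_hi and ic_lo:
        feasible = True
print(feasible)   # False: no payment pair rationalises a decreasing q
```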
Lemma (10.4)
Suppose $\langle q, p \rangle$ satisfies IC, and define $U : \Theta \to \mathbb{R}$ by
$U(\theta) := \theta v(q(\theta)) - p(\theta)$.
Then $U$ is non-decreasing and convex, and
\[ U(\theta) = U(\underline{\theta}) + \int_{\underline{\theta}}^{\theta} v(q(x))\, dx. \]
Proof. IC implies that
\[ U(\theta) = \max_{\theta' \in \Theta} \{\theta v(q(\theta')) - p(\theta')\}, \]
and therefore, since $\theta v(q(\theta')) - p(\theta')$ is a non-decreasing and affine
(hence convex) function of $\theta$ for every fixed $\theta'$, that $U$ must be
non-decreasing and convex, as a pointwise maximum of a collection of
non-decreasing and convex functions.
Following a standard property of convex functions, $U$ must be
differentiable except for at most countably many points, and applying
the Envelope Theorem, IC implies that for any $\theta$ at which $U$ is
differentiable,
\[ U'(\theta) = v(q(\theta)). \]
As a convex function, $U$ is "absolutely continuous", and thus a
version of the Fundamental Theorem of Calculus yields
\[ U(\theta) = U(\underline{\theta}) + \int_{\underline{\theta}}^{\theta} v(q(x))\, dx. \]
(See Mailath for additional details and references.) $\square$
Lemma 10.4 implies that in an incentive compatible mechanism, the
utilities of all types $\theta > \underline{\theta}$ are determined by $q$ and $U(\underline{\theta})$, the utility
assigned to the lowest type $\underline{\theta}$. Hence, any two mechanisms that have
the same $q$ and $U(\underline{\theta})$ are "payoff-equivalent," in the sense that they
must yield the same utility for every type $\theta$.
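A numerical sanity check of the lemma, under the same illustrative parametrisation as earlier ($v(q) = \ln(1+q)$, $\Theta = [1, 2]$, and the increasing schedule $q(\theta) = \theta$, all assumptions of this sketch): constructing payments from the integral formula yields a menu that is incentive compatible on a grid.

```python
# Numerical sanity check of Lemma 10.4 (illustrative primitives):
# set payments via the envelope formula and verify that the
# resulting menu <q, p> is incentive compatible on a type grid.
import math

v = lambda q: math.log(1.0 + q)
q = lambda th: th                     # increasing schedule (Lemma 10.3)
lo, hi, n = 1.0, 2.0, 2000
grid = [lo + (hi - lo) * k / n for k in range(n + 1)]

def integral_v_q(th):                 # trapezoidal approx. of int_lo^th v(q(x)) dx
    pts = [x for x in grid if x <= th]
    return sum((pts[k + 1] - pts[k]) * (v(q(pts[k])) + v(q(pts[k + 1]))) / 2
               for k in range(len(pts) - 1))

U_lo = 0.0                            # utility assigned to the lowest type
p = lambda th: th * v(q(th)) - (U_lo + integral_v_q(th))   # envelope payments
U = lambda th, rep: th * v(q(rep)) - p(rep)                # type th reports rep

ic = all(U(th, th) >= U(th, rep) - 1e-9
         for th in grid[::200] for rep in grid[::200])
print(ic)   # True: the integral formula pins down IC payments given q
```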
Lemma (10.5 – Revenue Equivalence)
If $\langle q, p \rangle$ satisfies IC, then
\[ p(\theta) = p(\underline{\theta}) + \left[ \theta v(q(\theta)) - \underline{\theta} v(q(\underline{\theta})) \right] - \int_{\underline{\theta}}^{\theta} v(q(x))\, dx. \]
Proof. The result follows by substituting $U(\underline{\theta}) = \underline{\theta} v(q(\underline{\theta})) - p(\underline{\theta})$ into
the expression for $U(\theta)$ from Lemma 10.4, and solving for $p(\theta)$. $\square$
Lemma 10.5 extends the payoff-equivalence arising from Lemma 10.4,
by showing that in an incentive compatible mechanism, the payments
made by types $\theta > \underline{\theta}$ are pinned down by $q$ and $p(\underline{\theta})$. As a
consequence, all mechanisms having the same $q$ and $p(\underline{\theta})$ yield
identical payments for all types $\theta$, and therefore the same expected
revenues (and profits) to the seller.
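Continuing the sketch above (it reuses the illustrative v, q, p, lo, and integral_v_q), the payment formula of Lemma 10.5 reproduces the constructed payments exactly:

```python
# Check Lemma 10.5's payment formula against the menu built above
# (reuses v, q, p, lo, and integral_v_q from the previous sketch):
th = 1.7
rhs = p(lo) + (th * v(q(th)) - lo * v(q(lo))) - integral_v_q(th)
print(abs(p(th) - rhs) < 1e-9)   # True: payments pinned down by q and p(lo)
```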