Econometrica Volume 89, Issue 2 (March 2021) is now online

ECONOMETRICA

Volume 89, Issue 2 (March 2021) has just been published.  The full content of the journal is accessible at
https://www.econometricsociety.org/publications/econometrica/browse

 

Articles

Frontmatter of Econometrica Vol. 89 Iss. 2

Read More

 



Strategic Analysis of Auctions
Robert B. Wilson

Read More
 


Equitable Voting Rules
Laurent Bartholdi, Wade Hann‐Caruthers, Maya Josyula, Omer Tamuz, Leeat Yariv

May's theorem (1952), a celebrated result in social choice, provides the foundation for majority rule. May's crucial assumption of symmetry, often thought of as a procedural equity requirement, is violated by many choice procedures that grant voters identical roles. We show that a weakening of May's symmetry assumption allows for a far richer set of rules that still treat voters equally. We show that such rules can have minimal winning coalitions comprising a vanishing fraction of the population, but not less than the square root of the population size. Methodologically, we introduce techniques from group theory and illustrate their usefulness for the analysis of social choice questions.
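
As a rough illustration of how a rule can treat voters symmetrically in a weaker-than-May sense while having winning coalitions of roughly square-root size, consider a hypothetical grid rule (a toy construction for this announcement, not the paper's): voters sit on a k-by-k grid, an alternative wins whenever some full row and some full column both support it, and the majority decides otherwise. The rule is invariant under every relabeling of rows and columns, which can map any voter to any other, and a single row plus a single column (2k - 1 of the n = k² voters) suffices to win.

```python
import numpy as np

def outcome(votes):
    """votes: k-by-k boolean grid, True = vote for A, False = vote for B."""
    def cross(v):                        # some full row and some full column agree
        return v.all(axis=1).any() and v.all(axis=0).any()
    if cross(votes):
        return "A"
    if cross(~votes):
        return "B"
    return "A" if 2 * votes.sum() > votes.size else "B"   # fallback: simple majority

rng = np.random.default_rng(0)
k = 10
coalition = np.zeros((k, k), dtype=bool)
coalition[0, :] = True                   # one full row ...
coalition[:, 0] = True                   # ... plus one full column: 2k - 1 voters
print(outcome(coalition), "wins with", int(coalition.sum()), "of", k * k, "votes")

# the outcome is unchanged by any relabeling of rows and columns,
# so every voter plays the same role under that group of permutations
votes = rng.random((k, k)) > 0.5
relabeled = votes[rng.permutation(k)][:, rng.permutation(k)]
print(outcome(votes) == outcome(relabeled))
```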
Read More


Spurious Factor Analysis
Alexei Onatski, Chen Wang

This paper draws parallels between the principal components analysis of factorless high‐dimensional nonstationary data and the classical spurious regression. We show that a few of the principal components of such data absorb nearly all the data variation. The corresponding scree plot suggests that the data contain a few factors, which is corroborated by the standard panel information criteria. Furthermore, the Dickey–Fuller tests of the unit root hypothesis applied to the estimated “idiosyncratic terms” often reject, creating an impression that a few factors are responsible for most of the nonstationarity in the data. We warn empirical researchers of these peculiar effects and suggest always comparing the analysis in levels with that in differences.
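
A minimal simulation sketch of the warning above (illustrative dimensions and data-generating process, not the authors' code): principal components of factorless random walks in levels look like a few strong factors, while the same data in first differences do not.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 100                       # time periods, cross-section units
eps = rng.standard_normal((T, N))
levels = np.cumsum(eps, axis=0)       # N independent random walks, no common factors
diffs = np.diff(levels, axis=0)       # stationary first differences

def top_pc_share(x, k=3):
    """Share of total variation absorbed by the first k principal components."""
    x = x - x.mean(axis=0)
    s = np.linalg.svd(x, compute_uv=False)
    return (s[:k] ** 2).sum() / (s ** 2).sum()

print("levels:      first 3 PCs explain", round(top_pc_share(levels), 2))
print("differences: first 3 PCs explain", round(top_pc_share(diffs), 2))
# the levels typically show a handful of dominant (spurious) components;
# the differenced data spread variation evenly across components
```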
Read More


Selecting Applicants
Alex Frankel

A firm selects applicants to hire based on hard information, such as a test result, and soft information, such as a manager's evaluation of an interview. The contract that the firm offers to the manager can be thought of as a restriction on acceptance rates as a function of test results. I characterize optimal acceptance rate functions both when the firm knows the manager's mix of information and biases and when the firm is uncertain. These contracts may admit a simple implementation in which the manager can accept any set of applicants with a sufficiently high average test score.
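
A toy sketch of the simple implementation mentioned at the end of the abstract, with an arbitrary cutoff chosen purely for illustration: the manager may accept any set of applicants whose average test score clears the firm's cutoff.

```python
def acceptable(test_scores, cutoff=70.0):
    """Can the manager accept this set of applicants under the contract?"""
    return sum(test_scores) / len(test_scores) >= cutoff

print(acceptable([90, 55]))   # True: a strong score "pays for" a weaker interview pick
print(acceptable([60, 55]))   # False: the set's average score is below the cutoff
```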
Read More


Learning from Coworkers
Gregor Jarosch, Ezra Oberfield, Esteban Rossi‐Hansberg

We investigate learning at the workplace. To do so, we use German administrative data that contain information on the entire workforce of a sample of establishments. We document that having more highly paid coworkers is strongly associated with future wage growth, particularly if those coworkers earn more than the worker does. Motivated by this fact, we propose a dynamic theory of a competitive labor market where firms produce using teams of heterogeneous workers who learn from one another. We develop a methodology to structurally estimate knowledge flows using the full richness of the German matched employer–employee data. The methodology builds on the observation that a competitive labor market prices coworker learning. Our quantitative approach imposes minimal restrictions on firms' production functions, can be implemented on a very short panel, and allows for potentially rich and flexible coworker learning functions. In line with our reduced‐form results, learning from coworkers is significant, particularly from more knowledgeable coworkers. We show that between 4 and 9% of total worker compensation is in the form of learning and that inequality in total compensation is significantly lower than inequality in wages.
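
A stylized sketch of the reduced-form exercise described above, on synthetic data rather than the German administrative records; the leave-one-out coworker mean and the assumed data-generating process are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_firms, size = 200, 20
firm = np.repeat(np.arange(n_firms), size)
log_wage = rng.normal(3.0, 0.3, firm.size) + rng.normal(0, 0.2, n_firms)[firm]

df = pd.DataFrame({"firm": firm, "log_wage": log_wage})
grp = df.groupby("firm")["log_wage"]
# leave-one-out average of coworkers' log wages within the establishment
df["coworker_mean"] = (grp.transform("sum") - df["log_wage"]) / (grp.transform("count") - 1)
# assumed DGP: wages grow faster when coworkers earn more than the worker does
df["wage_growth"] = 0.05 + 0.10 * (df["coworker_mean"] - df["log_wage"]) \
    + rng.normal(0, 0.05, firm.size)

fit = smf.ols("wage_growth ~ coworker_mean + log_wage", data=df).fit()
print(fit.params[["coworker_mean", "log_wage"]])
```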
Read More


Information Technology and Government Decentralization: Experimental Evidence from Paraguay
Ernesto Dal Bó, Frederico Finan, Nicholas Y. Li, Laura Schechter

Standard models of hierarchy assume that agents and middle managers are better informed than principals. We estimate the value of the informational advantage held by supervisors—middle managers—when ministerial leadership—the principal—introduced a new monitoring technology aimed at improving the performance of agricultural extension agents (AEAs) in rural Paraguay. Our approach employs a novel experimental design that elicited treatment‐priority rankings from supervisors before randomization of treatment. We find that supervisors have valuable information—they prioritize AEAs who would be more responsive to the monitoring treatment. We develop a model of monitoring under different scales of treatment roll‐out and different treatment allocation rules. We semiparametrically estimate marginal treatment effects (MTEs) to demonstrate that the value of information and the benefits to decentralizing treatment decisions depend crucially on the sophistication of the principal and on the scale of roll‐out.
Read More


Micro Data and Macro Technology
Ezra Oberfield, Devesh Raval

We develop a framework to estimate the aggregate capital‐labor elasticity of substitution by aggregating the actions of individual plants. The aggregate elasticity reflects substitution within plants and reallocation across plants; the extent of heterogeneity in capital intensities determines their relative importance. We use micro data on the cross‐section of plants to build up to the aggregate elasticity at a point in time. Interpreting our econometric estimates through the lens of several different models, we find that the aggregate elasticity for the U.S. manufacturing sector is in the range of 0.5–0.7, and has declined slightly since 1970. We use our estimates to measure the bias of technical change and assess the decline in labor's share of income in the U.S. manufacturing sector. Mechanisms that rely on changes in the relative supply of factors, such as an acceleration of capital accumulation, cannot account for the decline.
Read More


Theory of Weak Identification in Semiparametric Models
Tetsuya Kaji

We provide a general formulation of weak identification in semiparametric models and an efficiency concept. Weak identification occurs when a parameter is weakly regular, that is, when it is locally homogeneous of degree zero. When this happens, consistent or equivariant estimation is shown to be impossible. We then show that there exists an underlying regular parameter that fully characterizes the weakly regular parameter. While this parameter is not unique, concepts of sufficiency and minimality help pin down a desirable one. If estimation of minimal sufficient underlying parameters is inefficient, it introduces noise in the corresponding estimation of weakly regular parameters, whence we can improve the estimators by local asymptotic Rao–Blackwellization. We call an estimator weakly efficient if it does not admit such improvement. New weakly efficient estimators are presented in linear IV and nonlinear regression models. Simulation of a linear IV model demonstrates how 2SLS and optimal IV estimators are improved.
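
A small simulation sketch of the basic phenomenon being formalized (a stylized just-identified linear IV model with a drifting first stage, not the paper's estimators): when the instrument is weak, the 2SLS estimator fails to concentrate as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(5)

def tsls(n, pi):
    """One draw of the 2SLS estimate in y = x*beta + u, x = z*pi + v, beta = 1."""
    z = rng.standard_normal(n)
    v = rng.standard_normal(n)
    u = 0.8 * v + 0.6 * rng.standard_normal(n)      # endogeneity in x
    x = z * pi + v
    y = x * 1.0 + u
    return (z @ y) / (z @ x)

for n in (1_000, 100_000):
    draws = [tsls(n, pi=5 / np.sqrt(n)) for _ in range(500)]  # weak: pi shrinks with n
    iqr = np.subtract(*np.percentile(draws, [75, 25]))
    print(f"n={n}: interquartile range of 2SLS = {iqr:.2f}")   # does not shrink
```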
Read More


Reasonable Doubt: Experimental Detection of Job-Level Employment Discrimination
Patrick Kline, Christopher Walters

This paper develops methods for detecting discrimination by individual employers using correspondence experiments that send fictitious resumes to real job openings. We establish identification of higher moments of the distribution of job‐level callback rates as a function of the number of resumes sent to each job and propose shape‐constrained estimators of these moments. Applying our methods to three experimental data sets, we find striking job‐level heterogeneity in the extent to which callback probabilities differ by race or sex. Estimates of higher moments reveal that while most jobs barely discriminate, a few discriminate heavily. These moment estimates are then used to bound the share of jobs that discriminate and the posterior probability that each individual job is engaged in discrimination. In a recent experiment manipulating racially distinctive names, we find that at least 85% of jobs that contact both of two white applications and neither of two black applications are engaged in discrimination. To assess the potential value of our methods for regulators, we consider the accuracy of decision rules for investigating suspicious callback behavior in various experimental designs under a simple two‐type model that rationalizes the experimental data. Though we estimate that only 17% of employers discriminate on the basis of race, we find that an experiment sending 10 applications to each job would enable detection of 7–10% of discriminatory jobs while yielding Type I error rates below 0.2%. A minimax decision rule acknowledging partial identification of the distribution of callback rates yields only slightly fewer investigations than a Bayes decision rule based on the two‐type model. These findings suggest illegal labor market discrimination can be reliably monitored with relatively small modifications to existing correspondence designs.
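
A hedged sketch of the posterior calculation behind statements like the 85% figure, in a simple two-type model; the callback probabilities below are made-up inputs for illustration, not the paper's estimates.

```python
from math import comb

def posterior_discrimination(pi, p_fair, p_white, p_black,
                             white_calls=2, white_apps=2,
                             black_calls=0, black_apps=2):
    """Posterior probability that a job discriminates, given its callback pattern."""
    def binom(k, n, p):
        return comb(n, k) * p ** k * (1 - p) ** (n - k)
    # likelihood of the observed callbacks under each type of employer
    lik_disc = binom(white_calls, white_apps, p_white) * binom(black_calls, black_apps, p_black)
    lik_fair = binom(white_calls, white_apps, p_fair) * binom(black_calls, black_apps, p_fair)
    return pi * lik_disc / (pi * lik_disc + (1 - pi) * lik_fair)

# e.g. a 17% prior share of discriminators and illustrative callback rates
print(posterior_discrimination(pi=0.17, p_fair=0.25, p_white=0.40, p_black=0.05))
```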
Read More


Long-Term Contracting with Time-Inconsistent Agents
Daniel Gottlieb, Xingtan Zhang

We study contracts between naive present‐biased consumers and risk‐neutral firms. We show that the welfare loss from present bias vanishes as the contracting horizon grows. This is true whether bargaining power lies with consumers or with firms, when consumers cannot commit to long‐term contracts, and when firms do not know the consumers' naiveté. However, the welfare loss from present bias does not vanish when firms do not know the consumers' present bias or when they cannot offer exclusive contracts.
Read More


Model Selection for Treatment Choice: Penalized Welfare Maximization
Eric Mbakop, Max Tabord‐Meehan

This paper studies a penalized statistical decision rule for the treatment assignment problem. Consider the setting of a utilitarian policy maker who must use sample data to allocate a binary treatment to members of a population, based on their observable characteristics. We model this problem as a statistical decision problem where the policy maker must choose a subset of the covariate space to assign to treatment, out of a class of potential subsets. We focus on settings in which the policy maker may want to select amongst a collection of constrained subset classes: examples include choosing the number of covariates over which to perform best‐subset selection, and model selection when approximating a complicated class via a sieve. We adapt and extend results from statistical learning to develop the Penalized Welfare Maximization (PWM) rule. We establish an oracle inequality for the regret of the PWM rule which shows that it is able to perform model selection over the collection of available classes. We then use this oracle inequality to derive relevant bounds on maximum regret for PWM. An important consequence of our results is that we are able to formalize model selection using a “holdout” procedure, where the policy maker would first estimate various policies using half of the data, and then select the policy which performs the best when evaluated on the other half of the data.
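
A schematic sketch of the holdout step described in the last sentence: estimate the best rule within each candidate class on one half of the sample, then pick the class whose rule attains the highest estimated welfare on the other half. The threshold-rule classes, the known 0.5 propensity score, and the inverse-propensity welfare estimate are illustrative assumptions, not the paper's general setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-1, 1, n)                      # observable characteristic
d = rng.integers(0, 2, n)                      # randomized treatment, P(D=1)=0.5
y = 1.0 * x * d + rng.normal(0, 1, n)          # treatment helps only when x > 0

def welfare(policy, x, d, y, e=0.5):
    """Inverse-propensity-weighted estimate of mean outcome under `policy`."""
    p = policy(x).astype(float)
    return np.mean(y * (d * p / e + (1 - d) * (1 - p) / (1 - e)))

train, hold = slice(0, n // 2), slice(n // 2, n)
candidates = {}
for grid_size in (2, 8, 32):                   # increasingly rich classes of threshold rules
    thresholds = np.linspace(-1, 1, grid_size)
    candidates[grid_size] = max(
        thresholds,
        key=lambda t: welfare(lambda z: z > t, x[train], d[train], y[train]),
    )

chosen = max(candidates,
             key=lambda g: welfare(lambda z: z > candidates[g], x[hold], d[hold], y[hold]))
print("selected class:", chosen, "threshold:", round(candidates[chosen], 2))
```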
Read More


Errors in the Dependent Variable of Quantile Regression Models
Jerry Hausman, Haoyang Liu, Ye Luo, Christopher Palmer

We study the consequences of measurement error in the dependent variable of random‐coefficients models, focusing on the particular case of quantile regression. The popular quantile regression estimator of Koenker and Bassett (1978) is biased if there is an additive error term. Approaching this problem as an errors‐in‐variables problem where the dependent variable suffers from classical measurement error, we present a sieve maximum likelihood approach that is robust to left‐hand‐side measurement error. After providing sufficient conditions for identification, we demonstrate that when the number of knots in the quantile grid is chosen to grow at an adequate speed, the sieve maximum likelihood estimator is consistent and asymptotically normal, permitting inference via bootstrapping. Monte Carlo evidence verifies that our method outperforms quantile regression in mean bias and MSE. Finally, we illustrate our estimator with an application to the returns to education, highlighting changes over time that have previously been masked by measurement‐error bias.
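
A small Monte Carlo sketch of the problem being addressed, not of the sieve maximum likelihood estimator itself: with an illustrative random-coefficient data-generating process, classical measurement error in the dependent variable pulls the estimated quantile-specific slopes toward each other.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 10000
x = rng.uniform(0, 1, n)
u = rng.uniform(0, 1, n)                       # quantile rank of the latent outcome
y_star = u + (1.0 + 2.0 * u) * x               # slope rises with the quantile
y_obs = y_star + rng.normal(0, 0.5, n)         # classical LHS measurement error

df = pd.DataFrame({"x": x, "y_star": y_star, "y_obs": y_obs})
for tau in (0.1, 0.9):
    clean = smf.quantreg("y_star ~ x", df).fit(q=tau).params["x"]
    noisy = smf.quantreg("y_obs ~ x", df).fit(q=tau).params["x"]
    print(f"tau={tau}: slope without error {clean:.2f}, "
          f"with LHS error {noisy:.2f}  (truth {1 + 2 * tau:.2f})")
```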
Read More


Quantile Factor Models
Liang Chen, Juan J. Dolado, Jesús Gonzalo

Quantile factor models (QFM) represent a new class of factor models for high‐dimensional panel data. Unlike approximate factor models (AFM), which only extract mean factors, QFM also allow unobserved factors to shift other relevant parts of the distributions of observables. We propose a quantile regression approach, labeled Quantile Factor Analysis (QFA), to consistently estimate all the quantile‐dependent factors and loadings. Their asymptotic distributions are established using a kernel‐smoothed version of the QFA estimators. Two consistent model selection criteria, based on information criteria and rank minimization, are developed to determine the number of factors at each quantile. QFA estimation remains valid even when the idiosyncratic errors exhibit heavy‐tailed distributions. An empirical application illustrates the usefulness of QFA by highlighting the role of extra factors in the forecasts of U.S. GDP growth and inflation rates using a large set of predictors.
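
A naive alternating sketch of the idea, at a single quantile, on simulated data in which the factor scales the noise and therefore shifts the upper quantiles; this only illustrates quantile-regression-based factor extraction and is not the paper's QFA estimator or its normalizations.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T, N, tau = 100, 50, 0.75
f = rng.standard_normal(T)
lam = 1.0 + 0.5 * rng.random(N)
# the factor both shifts the mean and scales the error, moving upper quantiles
x = np.outer(f, lam) + rng.standard_normal((T, N)) * (1 + 0.5 * np.abs(f))[:, None]

F = x @ np.linalg.svd(x, full_matrices=False)[2][0]      # PCA initialization
F = (F / F.std()).reshape(-1, 1)
for _ in range(5):
    # loadings: quantile-regress each series on the current factor
    L = np.array([sm.QuantReg(x[:, i], F).fit(q=tau).params[0] for i in range(N)]).reshape(-1, 1)
    # factors: quantile-regress each cross-section on the loadings
    F = np.array([sm.QuantReg(x[t, :], L).fit(q=tau).params[0] for t in range(T)]).reshape(-1, 1)
    F /= F.std()                                         # crude normalization

print("abs. correlation with the true factor:", round(abs(np.corrcoef(F[:, 0], f)[0, 1]), 2))
```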
Read More


Strategic Sample Selection
Alfredo Di Tillio, Marco Ottaviani, Peter Norman Sørensen

Are the highest sample realizations selected from a larger presample more or less informative than the same amount of random data? Developing multivariate accuracy for interval dominance ordered preferences, we show that sample selection always benefits (or always harms) a decision maker if the reverse hazard rate of the data distribution is log‐supermodular (or log‐submodular), as in location experiments with normal noise. We find nonpathological conditions under which the information contained in the winning bids of a symmetric auction decreases in the number of bidders. Exploiting extreme value theory, we quantify the limit amount of information revealed when the presample size (number of bidders) goes to infinity. In a model of equilibrium persuasion with costly information, we derive implications for the optimal design of selected experiments when selection is made by an examinee, a biased researcher, or contending sides with the peremptory challenge right to eliminate a number of jurors.
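
A numerical sketch of the opening question in a normal location experiment (the standard normal prior and unit-variance noise are assumptions made here for concreteness): compare the average posterior variance from two random draws with that from the two largest of a presample of ten, where the likelihood of the selected draws accounts for the unobserved smaller ones.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
grid = np.linspace(-6, 6, 1201)                 # grid over the location parameter theta
log_prior = norm.logpdf(grid)

def posterior_var(log_lik):
    w = np.exp(log_prior + log_lik - (log_prior + log_lik).max())
    w /= w.sum()
    mean = (w * grid).sum()
    return (w * (grid - mean) ** 2).sum()

def loglik_topk(x_top, m):
    """Log-likelihood of observing the k largest of m i.i.d. N(theta, 1) draws."""
    ll = (m - len(x_top)) * norm.logcdf(x_top.min() - grid)
    return ll + sum(norm.logpdf(x - grid) for x in x_top)

def loglik_random(x):
    return sum(norm.logpdf(xi - grid) for xi in x)

v_sel, v_rand = [], []
for _ in range(200):
    theta = rng.normal()                        # theta drawn from the prior
    sample = rng.normal(theta, 1, 10)
    v_sel.append(posterior_var(loglik_topk(np.sort(sample)[-2:], m=10)))
    v_rand.append(posterior_var(loglik_random(sample[:2])))

print("avg posterior variance, top 2 of 10:", round(float(np.mean(v_sel)), 3))
print("avg posterior variance, 2 random:   ", round(float(np.mean(v_rand)), 3))
```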
Read More


Local Projections and VARs Estimate the Same Impulse Responses
Mikkel Plagborg‐Møller, Christian K. Wolf

We prove that local projections (LPs) and Vector Autoregressions (VARs) estimate the same impulse responses. This nonparametric result only requires unrestricted lag structures. We discuss several implications: (i) LP and VAR estimators are not conceptually separate procedures; instead, they are simply two dimension reduction techniques with a common estimand but different finite‐sample properties. (ii) VAR‐based structural identification—including short‐run, long‐run, or sign restrictions—can equivalently be performed using LPs, and vice versa. (iii) Structural estimation with an instrument (proxy) can be carried out by ordering the instrument first in a recursive VAR, even under noninvertibility. (iv) Linear VARs are as robust to nonlinearities as linear LPs.
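
A univariate sketch of the reduced-form equivalence (an AR(1) toy model, not the paper's general proof): the local projection coefficient on y_t, controlling for its lag, and the iterated AR-implied impulse response both target the same object, 0.7^h.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, rho = 5000, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# AR/VAR-implied impulse responses: estimate rho once, then iterate
rho_hat = sm.OLS(y[1:], sm.add_constant(y[:-1])).fit().params[1]

for h in (1, 2, 4, 8):
    # local projection: regress y_{t+h} on y_t, controlling for y_{t-1}
    X = sm.add_constant(np.column_stack([y[1:T - h], y[0:T - h - 1]]))
    lp = sm.OLS(y[1 + h:T], X).fit().params[1]
    print(f"h={h}:  LP {lp:.3f}   iterated AR {rho_hat ** h:.3f}   truth {rho ** h:.3f}")
```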
Read More


Forthcoming Papers


Read More


2020 Election of Fellows to the Econometric Society


Read More


Backmatter of Econometrica Vol. 89 Iss. 2


Read More