Quantitative Economics March 2018 is now online

TABLE OF CONTENTS, March 2018, Volume 9, Issue 1

Articles
Abstracts follow the listing of articles.

Estimating matching games with transfers
Jeremy T. Fox

Optimal sup‐norm rates and uniform inference on nonlinear functionals of nonparametric IV regression
Xiaohong Chen, Timothy M. Christensen

Learning in network games
Jaromír Kovářík, Friederike Mengel, José Gabriel Romero

An empirical model of non‐equilibrium behavior in games
Brendan Kline

Endogenous sample selection: A laboratory study
Ignacio Esponda, Emanuel Vespa

Pirates of the Mediterranean: An empirical investigation of bargaining with asymmetric information
Attila Ambrus, Eric Chaney, Igor Salitskiy

Neighborhood dynamics and the distribution of opportunity
Dionissi Aliprantis, Daniel R. Carroll

Income effects and the welfare consequences of tax in differentiated product oligopoly
Rachel Griffith, Lars Nesheim, Martin O'Connell

Identifying dynamic spillovers of crime with a causal approach to model selection
Gregorio Caetano, Vikram Maheshri

Identification, data combination, and the risk of disclosure
Tatiana Komarova, Denis Nekipelov, Evgeny Yakovlev

Simultaneous selection of optimal bandwidths for the sharp regression discontinuity estimator
Yoichi Arai, Hidehiko Ichimura

The superintendent's dilemma: Managing school district capacity as parents vote with their feet
Dennis Epple, Akshaya Jha, Holger Sieg


Estimating matching games with transfers
Jeremy T. Fox


Abstract
I explore the estimation of transferable utility matching games, encompassing many‐to‐many matching, marriage, and matching with trading networks (trades). Computational issues are paramount. I introduce a matching maximum score estimator that does not suffer from a computational curse of dimensionality in the number of agents in a matching market. I apply the estimator to data on the car parts supplied by automotive suppliers to estimate the valuations from different portfolios of parts to suppliers and automotive assemblers.

Keywords: Matching, trading networks, relationship formation, semiparametric estimation, maximum score. JEL classification: C35, C57, C78, L14, L62.
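For readers who want to see the mechanics behind a matching maximum score objective, here is a minimal sketch. It assumes a simplified setting in which a match's production is a linear index of pair covariates and the objective counts satisfied pairwise‐swap inequalities among observed matches; the data structures and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from itertools import combinations

def match_production(beta, x_pair):
    """Linear production index for a matched pair's covariates (illustrative)."""
    return float(np.dot(beta, x_pair))

def maximum_score(beta, matches, pair_covariates):
    """Count satisfied pairwise-stability inequalities for a candidate beta.

    matches: list of (upstream, downstream) index pairs observed in the data.
    pair_covariates: dict mapping (u, d) -> covariate vector for that potential pair.
    For two observed matches (a, b) and (c, d), local production maximization implies
    f(a, b) + f(c, d) >= f(a, d) + f(c, b); the score counts how many such
    inequalities the candidate beta satisfies.
    """
    score = 0
    for (a, b), (c, d) in combinations(matches, 2):
        observed = (match_production(beta, pair_covariates[(a, b)])
                    + match_production(beta, pair_covariates[(c, d)]))
        swapped = (match_production(beta, pair_covariates[(a, d)])
                   + match_production(beta, pair_covariates[(c, b)]))
        if observed >= swapped:
            score += 1
    return score
```

Because the objective only compares observed matches with their pairwise swaps, the number of inequalities grows with the number of observed matches rather than with the full set of feasible assignments, which is the computational advantage the abstract highlights.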
---
Optimal sup‐norm rates and uniform inference on nonlinear functionals of nonparametric IV regression
Xiaohong Chen, Timothy M. Christensen


Abstract
This paper makes several important contributions to the literature on nonparametric instrumental variables (NPIV) estimation and inference on a structural function h0 and functionals of h0. First, we derive sup‐norm convergence rates for computationally simple sieve NPIV (series two‐stage least squares) estimators of h0 and its derivatives. Second, we derive a lower bound that describes the best possible (minimax) sup‐norm rates of estimating h0 and its derivatives, and show that the sieve NPIV estimator can attain the minimax rates when h0 is approximated via a spline or wavelet sieve. Our optimal sup‐norm rates surprisingly coincide with the optimal root‐mean‐squared rates for severely ill‐posed problems, and are only a logarithmic factor slower than the optimal root‐mean‐squared rates for mildly ill‐posed problems. Third, we use our sup‐norm rates to establish uniform Gaussian process strong approximations and score bootstrap uniform confidence bands (UCBs) for collections of nonlinear functionals of h0 under primitive conditions, allowing for mildly and severely ill‐posed problems. Fourth, as applications, we obtain the first asymptotic pointwise and uniform inference results for plug‐in sieve t‐statistics of exact consumer surplus (CS) and deadweight loss (DL) welfare functionals under low‐level conditions when demand is estimated via sieve NPIV. Our real‐data application of UCBs for exact CS and DL functionals of gasoline demand reveals interesting patterns and is applicable to other goods markets.

Keywords: Series two‐stage least squares, optimal sup‐norm convergence rates, uniform Gaussian process strong approximation, score bootstrap uniform confidence bands, nonlinear welfare functionals, nonparametric demand with endogeneity. JEL classification: C13, C14, C36.
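As a rough reference point for the series two‐stage least squares (sieve NPIV) estimator mentioned above, the sketch below projects a sieve basis for h0 onto an instrument sieve and runs least squares on the projection. The basis matrices and variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def sieve_npiv(y, psi_x, b_w):
    """Series 2SLS estimator of a nonparametric IV regression (illustrative sketch).

    y:     (n,) outcomes
    psi_x: (n, J) sieve basis evaluated at the endogenous regressor x
    b_w:   (n, K) sieve basis evaluated at the instrument w, with K >= J

    Returns sieve coefficients c_hat such that h_hat(x) = psi(x) @ c_hat.
    """
    # First stage: project the regressor basis onto the instrument basis.
    proj = b_w @ np.linalg.pinv(b_w.T @ b_w) @ b_w.T
    psi_hat = proj @ psi_x
    # Second stage: least squares of y on the projected basis.
    c_hat, *_ = np.linalg.lstsq(psi_hat, y, rcond=None)
    return c_hat
```

The estimate of h0 at a point x is then the basis evaluated at x times c_hat, and derivative estimates are obtained by differentiating the basis functions.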
---
Learning in network games
Jaromír Kovářík, Friederike Mengel, José Gabriel Romero


Abstract
We report the findings of experiments designed to study how people learn in network games. Network games offer new opportunities to identify learning rules, since on networks (compared to, e.g., random matching) more rules differ in terms of their information requirements. Our experimental design enables us to observe both which actions participants choose and which information they consult before making their choices. We use these data to estimate learning types using finite mixture models. Monitoring information requests turns out to be crucial, as estimates based on choices alone show substantial biases. We also find that learning depends on network position. Participants in more complex environments (with more network neighbors) tend to resort to simpler rules compared to those with only one network neighbor.

Keywords: Experiments, game theory, heterogeneity, learning, finite mixture models, networks. JEL classification: C72, C90, C91, D85.
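To make the finite mixture step concrete, a stylized sketch follows: each candidate learning rule assigns a precomputed log‐likelihood to every subject's sequence of choices (and information lookups), and an EM loop estimates the population shares of the rules. The data layout and rule set are placeholder assumptions, not the paper's specification.

```python
import numpy as np

def em_mixture(loglik_by_type, n_iter=200):
    """EM estimation of population shares of learning types.

    loglik_by_type: (n_subjects, n_types) array of each subject's log-likelihood
    under each candidate learning rule, computed beforehand from choices and
    information requests.
    Returns estimated type shares and posterior type probabilities per subject.
    """
    n, T = loglik_by_type.shape
    shares = np.full(T, 1.0 / T)
    for _ in range(n_iter):
        # E-step: posterior probability of each type for each subject.
        log_post = loglik_by_type + np.log(shares)
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update population type shares.
        shares = post.mean(axis=0)
    return shares, post
```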
---
An empirical model of non‐equilibrium behavior in games
Brendan Kline


Abstract
This paper studies the identification and estimation of the decision rules that individuals use to determine their actions in games, based on a structural econometric model of non‐equilibrium behavior in games. The model is based primarily on various notions of limited strategic reasoning, allowing multiple modes of strategic reasoning and heterogeneity in strategic reasoning across and within individuals. The paper proposes the model and provides sufficient conditions for point identification of the model. Then the model is estimated on data from an experiment involving two‐player guessing games. The application illustrates the empirical relevance of the main features of the model.

Keywords: Games, heterogeneity, identification, non‐equilibrium, strategic reasoning. JEL classification: C1, C57, C72.
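As one familiar example of limited strategic reasoning in two‐player guessing games, the sketch below implements level‐k decision rules, in which level 0 follows a simple anchor and level k best responds to a level‐(k−1) opponent. The anchoring rule and game parameters are illustrative assumptions; the paper's model is more general than plain level‐k.

```python
def level_k_guess(k, own_limits, opp_limits, own_target, opp_target):
    """Level-k guess in a two-player guessing game (illustrative).

    Each player must guess within their own limits and earns most by matching
    their target multiple of the opponent's guess. Level 0 is anchored at the
    midpoint of the player's own limits (an assumption for illustration);
    level k best responds to a level-(k-1) opponent.
    """
    lo, hi = own_limits
    if k == 0:
        return (lo + hi) / 2.0
    opp_guess = level_k_guess(k - 1, opp_limits, own_limits, opp_target, own_target)
    return min(max(own_target * opp_guess, lo), hi)

# Example with hypothetical limits and targets: prints 315.0 (level-2 guess).
print(level_k_guess(2, (100, 500), (100, 900), 0.7, 1.5))
```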
---
Endogenous sample selection: A laboratory study
Ignacio Esponda, Emanuel Vespa


Abstract
Accounting for sample selection is a challenge not only for empirical researchers, but also for the agents populating our models. Yet most models abstract from these issues and assume that agents successfully tackle selection problems. We design an experiment where a person who understands selection observes all the data required to account for it. Subjects make choices under uncertainty, and their choices reveal valuable information that is biased due to the presence of unobservables. We find that almost no subjects optimally account for endogenous selection. On the other hand, behavior is far from random and actually quite amenable to analysis: subjects follow simple heuristics that partially account for selection and mitigate mistakes.

Keywords: Contingent thinking, learning, sample selection. JEL classification: C91, D83.
---
Pirates of the Mediterranean: An empirical investigation of bargaining with asymmetric information
Attila Ambrus, Eric Chaney, Igor Salitskiy


Abstract
We investigate the effect of delay on prices in bargaining situations using a data set containing thousands of captives ransomed from Barbary pirates between 1575 and 1692. Plausibly exogenous variation in the delay in ransoming provides evidence that negotiating delays decreased the size of ransom payments, and that much of the effect stems from the signalling value of strategic delay, in accordance with theoretical predictions. We also structurally estimate a version of the screening‐type bargaining model, adjusted to our context, and find that the model fits both the observed prices and acceptance probabilities well.

Keywords: Bargaining, piracy, ransom. JEL classification: D23, K42, N45.
---
Neighborhood dynamics and the distribution of opportunity
Dionissi Aliprantis, Daniel R. Carroll


Abstract
This paper studies neighborhood effects using a dynamic general equilibrium model. Households choose where to live and how much to invest in their child's human capital. The return on parents' investment is determined in part by their child's ability and in part by a neighborhood externality. We calibrate the model using data from Chicago in 1960, assuming that in previous decades households were randomly allocated to, and then could not move from, neighborhoods with different total factor productivity (TFP). This restriction on neighborhood choice allows us to overcome the fundamental problem of endogenous neighborhood selection. We use the calibrated model to study Wilson's (1987) hypothesis that racial equality under the law need not ensure equality of opportunity due to neighborhood dynamics. We examine the consequences of allowing for mobility, equalizing TFP, or both. In line with Wilson (1987), sorting can lead to persistent inequality of opportunity across locations if initial conditions are unequal. Our results highlight the importance of forward‐looking agents.

Keywords: Neighborhood effect, residential sorting, dynamics, human capital, segregation. JEL classification: E22, E24, H73, I24, J15, J62, R23.
---
Income effects and the welfare consequences of tax in differentiated product oligopoly
Rachel Griffith, Lars Nesheim, Martin O'Connell


Abstract
Random utility models are widely used to study consumer choice. The vast majority of applications assume utility is linear in consumption of the outside good, which imposes that total expenditure on the subset of goods of interest does not affect demand for inside goods and restricts demand curvature and pass‐through. We show that relaxing these restrictions can be important, particularly if one is interested in the distributional effects of a policy change, even in a market for a product category with a small budget share. We consider the use of tax policy to lower fat consumption and show that a specific (per unit) tax results in larger reductions than an ad valorem tax, but at a greater cost to consumers.

Keywords: Income effects, compensating variation, demand estimation, oligopoly, pass‐through, fat tax. JEL classification: H20, L13.
---
Identifying dynamic spillovers of crime with a causal approach to model selection
Gregorio Caetano, Vikram Maheshri


Abstract
Does crime in a neighborhood cause future crime? Without a source of quasi‐experimental variation in local crime, we develop an identification strategy that leverages a recently developed test of exogeneity (Caetano, 2015) to select a feasible regression model for causal inference. Using a detailed incident‐based data set of all reported crimes in Dallas from 2000 to 2007, we find some evidence of dynamic spillovers within certain types of crimes, but no evidence that lighter crimes cause more severe crimes. This suggests that a range of crime reduction policies that target lighter crimes (prescribed, for instance, by the “broken windows” theory of crime) should not be credited with reducing the violent crime rate. Our strategy involves a systematic investigation of endogeneity concerns and is particularly useful when rich data allow for the estimation of many regression models, none of which is agreed upon as causal ex ante.

Keywords: Neighborhood crime, broken windows, model selection, test of exogeneity. JEL classification: C52, C55, K42, R23.
---
Identification, data combination, and the risk of disclosure
Tatiana Komarova, Denis Nekipelov, Evgeny Yakovlev


Abstract
It is commonplace that the data needed for econometric inference are not contained in a single source. In this paper we analyze the problem of parametric inference from combined individual‐level data when data combination is based on personal and demographic identifiers such as name, age, or address. Our main question is the identification of the econometric model based on the combined data when the data do not contain exact individual identifiers and no parametric assumptions are imposed on the joint distribution of information that is common across the combined data set. We demonstrate the conditions on the observable marginal distributions of data in individual data sets that can and cannot guarantee identification of the parameters of interest. We also note that the data combination procedure is essential in a semiparametric setting such as ours. Given that the (nonparametric) data combination procedure can only be defined in finite samples, we introduce a new notion of identification based on the concept of limits of statistical experiments. Our results apply to the setting where the individual data used for inference are sensitive and their combination may lead to a substantial increase in data sensitivity or to a “de‐anonymization” of previously “anonymized” information. We demonstrate that point identification of an econometric model from combined data is incompatible with restrictions on the risk of individual disclosure. If the data combination procedure guarantees a bound on the risk of individual disclosure, then the information available from the combined data set allows one to identify the parameter of interest only partially, and the size of the identification region is inversely related to the upper bound guarantee for the disclosure risk. This result is new in the context of data combination, as we note that the quality of links that need to be used in the combined data to assure point identification may be much higher than the average link quality in the entire data set, and thus point inference requires the use of the most sensitive subset of the data. Our results provide important insights into the ongoing discourse on the empirical analysis of merged administrative records, as well as discussions of the “disclosive” nature of policies implemented by data‐driven companies (such as internet services companies and medical companies using individual patient records for policy decisions).

Keywords: Data protection, model identification, data combination. JEL classification: C13, C14, C25, C35.
---
Simultaneous selection of optimal bandwidths for the sharp regression discontinuity estimator
Yoichi Arai, Hidehiko Ichimura


Abstract
A new bandwidth selection method that uses different bandwidths for the local linear regression estimators on the left and the right of the cut‐off point is proposed for the sharp regression discontinuity design estimator of the average treatment effect at the cut‐off point. The asymptotic mean squared error of the estimator based on the proposed bandwidth selection method is shown to be smaller than that obtained with other bandwidth selection methods proposed in the literature. The same approach is also applied to an estimator that exploits the sharp regression kink design. Reliable confidence intervals compatible with both of the proposed bandwidth selection methods are also proposed, following the work of Calonico, Cattaneo, and Titiunik (2014a). An extensive simulation study shows that the proposed method's performance for sample sizes of 500 and 2000 closely matches the theoretical predictions. Our simulation study also shows that the common practice of halving and doubling an optimal bandwidth as a sensitivity check can be unreliable.

Keywords: Bandwidth selection, local linear regression, regression discontinuity design, regression kink design, confidence interval. JEL classification: C13, C14, C21.
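For context on the estimator these bandwidths serve, the sketch below computes the sharp RD effect at the cut‐off as the difference between two local linear fits, one on each side, each with its own bandwidth and a triangular kernel. The bandwidth values are placeholders; the paper's contribution is precisely how to choose them jointly.

```python
import numpy as np

def local_linear_at_cutoff(x, y, cutoff, bandwidth, side):
    """Weighted local linear fit at the cutoff using a triangular kernel."""
    d = x - cutoff
    mask = (d >= 0) if side == "right" else (d < 0)
    d, y = d[mask], y[mask]
    w = np.clip(1 - np.abs(d) / bandwidth, 0, None)  # triangular kernel weights
    X = np.column_stack([np.ones_like(d), d])
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return coef[0]  # intercept = fitted value at the cutoff

def sharp_rd_estimate(x, y, cutoff, h_left, h_right):
    """Sharp RD effect: limit from the right minus limit from the left,
    with side-specific bandwidths h_right and h_left (placeholder values)."""
    return (local_linear_at_cutoff(x, y, cutoff, h_right, "right")
            - local_linear_at_cutoff(x, y, cutoff, h_left, "left"))
```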
---
The superintendent's dilemma: Managing school district capacity as parents vote with their feet
Dennis Epple, Akshaya Jha, Holger Sieg


Abstract
Many urban school districts in the United States and OECD countries confront the necessity of closing schools due to declining enrollments. To address this important policy question, we formulate a sequential game in which a superintendent is tasked with closing down a certain percentage of student capacity; parents respond to these school closings by sorting into the remaining schools. We estimate parents' preferences for each school in their choice set using four years of student‐level data from a mid‐sized district with declining enrollments. We show that consideration of student sorting is vital to the assessment of any school closing policy. We next consider a superintendent tasked with closing excess school capacity, recognizing that students will sort into the remaining schools. Some students will inevitably respond to school closings by exiting the public school system; it is especially difficult to retain higher‐achieving students when closing public schools. We find that superintendents confront a difficult dilemma: pursuing an equity objective, such as limiting demographic stratification across schools, results in the exit of many more students than are lost under an objective explicitly based on student retention.

Keywords: School closing, school choice, demand for public schools, peer effects. JEL classification: C35, C52, C60, I20.