Publications and Research

The Intuitive Appeal of Explainable Machines

(draft available at SSRN) (with Solon Barocas)

As algorithmic decision-making has become synonymous with inexplicable decision-making, we have become obsessed with opening the black box. This Article responds to a growing chorus of legal scholars and policymakers demanding explainable machines. Their instinct makes sense; what is unexplainable is usually unaccountable. But the calls for explanation are a reaction to two distinct but often conflated properties of machine-learning models: inscrutability and non-intuitiveness. Inscrutability makes one unable to fully grasp the model, while non-intuitiveness means one cannot understand why the model’s rules are what they are. Solving inscrutability alone will not resolve law and policy concerns; accountability relates not merely to how models work, but to whether they are justified.

In this Article, we first explain what makes models inscrutable as a technical matter. We then explore two important examples of existing regulation-by-explanation, as well as techniques within machine learning for explaining inscrutable decisions. We show that while these techniques might allow machine learning to comply with existing laws, compliance will rarely be enough to assess whether decision-making rests on a justifiable basis.

We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation, as well as the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself. This Article illuminates the explanation-intuition dynamic and offers documentation as an alternative approach to evaluating machine learning models.
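To make the kind of model explanation at issue concrete, here is a minimal sketch of one common post-hoc technique, permutation feature importance, run on synthetic data. The features, data, and model are hypothetical illustrations of the general idea, not examples drawn from the Article.

```python
# A minimal sketch of permutation feature importance on synthetic data.
# The features, data, and model are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Hypothetical applicant features: one intuitive, one non-intuitive.
years_experience = rng.normal(5, 2, n)
distance_from_office = rng.normal(10, 5, n)
X = np.column_stack([years_experience, distance_from_office])
# The outcome depends on both, so the model learns a non-intuitive rule too.
y = (0.8 * years_experience - 0.3 * distance_from_office + rng.normal(0, 1, n)) > 2.0

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# How much does accuracy drop when one feature is shuffled, breaking its
# relationship with the outcome? A larger drop means the model relies on it more.
for i, name in enumerate(["years_experience", "distance_from_office"]):
    X_perm = X.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    print(f"{name}: accuracy drop = {baseline - model.score(X_perm, y):.3f}")
```

An explanation of this kind can tell a decision subject which features mattered, but it does not say whether relying on a feature like distance from the office is justified, which is the gap the Article emphasizes.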

Disparate Impact in Big Data Policing

52 Ga. L. Rev. 109 (2018)

Data-driven decision systems are taking over. No institution in society seems immune from the enthusiasm that automated decision-making generates, including—and perhaps especially—the police. Police departments are increasingly deploying data mining techniques to predict, prevent, and investigate crime. But all data mining systems have the potential for adverse impacts on vulnerable communities, and predictive policing is no different. Determining individuals’ threat levels by reference to commercial and social data can improperly link dark skin to higher threat levels or to greater suspicion of having committed a particular crime. Crime mapping based on historical data can lead to more arrests for nuisance crimes in neighborhoods primarily populated by people of color. These effects are an artifact of the technology itself and will likely occur even assuming good faith on the part of the police departments using it. Meanwhile, predictive policing is sold in part as a “neutral” method to counteract unconscious biases when it is not simply sold to cash-strapped departments as a more cost-efficient way to do policing.
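To illustrate the feedback dynamic described above, the toy simulation below (not drawn from the Article; all numbers are invented) shows how allocating patrols according to historically recorded incidents can preserve an initial recording skew even when the underlying offense rates of two neighborhoods are identical.

```python
# A toy simulation of a predictive-policing feedback loop. Two neighborhoods
# have identical true offense rates, but one starts with more recorded
# incidents; patrols follow the records, and new records follow the patrols.
# All numbers are hypothetical and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_offense_rate = np.array([0.1, 0.1])   # identical in both neighborhoods
recorded = np.array([60.0, 40.0])          # historical recording skew
total_patrols = 100

for week in range(20):
    # Allocate patrols in proportion to incidents recorded so far.
    patrols = total_patrols * recorded / recorded.sum()
    # Newly recorded incidents scale with patrol presence, not with any
    # difference in true offense rates.
    recorded += rng.poisson(patrols * true_offense_rate)

print("Share of recorded incidents after 20 weeks:",
      (recorded / recorded.sum()).round(2))
# The initial 60/40 skew persists even though the true rates are equal.
```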

The degree to which predictive policing systems have these discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive in place for a department focused solely on “crime control” to spend resources asking the question. This is a problem for which existing law does not provide a solution. Finding that neither the typical constitutional modes of police regulation nor a hypothetical anti-discrimination law would remedy it, this Article turns toward a new regulatory proposal centered on “algorithmic impact statements.”

Modeled on the environmental impact statements of the National Environmental Policy Act, algorithmic impact statements would require police departments to evaluate the efficacy and potential discriminatory effects of all available choices for predictive policing technologies. The regulation would also allow the public to weigh in through a notice-and-comment process. Such a regulation would fill the knowledge gap that makes future policy discussions about the costs and benefits of predictive policing all but impossible. Being primarily procedural, it would not necessarily curtail a department determined to discriminate, but by forcing departments to consider the question and allowing society to understand the scope of the problem, it is a first step towards a solution and towards determining whether further intervention is required.

Meaningful Information and the Right to Explanation

7 Int’l Data Privacy L. 233 (2017)

There is no single, neat statutory provision labeled the “right to explanation” in Europe’s new General Data Protection Regulation (GDPR). But nor is such a right illusory.

Responding to two prominent papers that, in turn, conjure and critique the right to explanation in the context of automated decision-making, we advocate a return to the text of the GDPR.

Articles 13-15 provide rights to “meaningful information about the logic involved” in automated decisions. This is a right to explanation, whether one uses the phrase or not.

The right to explanation should be interpreted functionally and flexibly, and should, at a minimum, enable a data subject to exercise his or her rights under the GDPR and human rights law.

A Mild Defense of Our New Machine Overlords

70 Vand. L. Rev. En Banc 87 (2017)

We must make policy based on realistic ideas about how machines work. In Plausible Cause, Kiel Brennan-Marquez argues first that “probable cause” is about explanation rather than probability, and second that machines cannot provide the explanations necessary to justify warrants under the Fourth Amendment. While his argument about probable cause has merit, his discussion of machines relies on a hypothetical device that obscures several flaws in the reasoning. As this response essay explains, machines and humans have different strengths, and both are capable of some form of explanation. Going forward, we must examine realistically not only where machines might fail, but also where they can improve upon the failures of a system built with human limitations in mind.

Big Data’s Disparate Impact

104 Calif. L. Rev. 671 (2016) (with Solon Barocas)

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm’s use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court.

This Essay examines these concerns through the lens of American antidiscrimination law — more particularly, through Title VII’s prohibition of discrimination in employment. In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining’s victims would seem to lie in disparate impact doctrine. Case law and the Equal Employment Opportunity Commission’s Uniform Guidelines, though, hold that a practice can be justified as a business necessity when its outcomes are predictive of future employment outcomes, and data mining is specifically designed to find such statistical correlations. Unless there is a reasonably practical way to demonstrate that these discoveries are spurious, Title VII would appear to bless its use, even though the correlations it discovers will often reflect historic patterns of prejudice, others’ discrimination against members of protected groups, or flaws in the underlying data.
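For readers unfamiliar with how adverse impact is typically screened for under the EEOC’s Uniform Guidelines, the sketch below applies the four-fifths (80%) rule to hypothetical selection counts. The rule is a rough screening heuristic, the numbers are invented, and this is an illustration rather than the Essay’s own test.

```python
# A minimal sketch of the four-fifths (80%) rule from the EEOC Uniform
# Guidelines, a common screening heuristic for adverse impact. The counts
# below are hypothetical and purely illustrative.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def passes_four_fifths(rates: dict) -> bool:
    """True if every group's selection rate is at least 80% of the highest
    group's rate (i.e., no adverse impact flagged under the rule)."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical hiring outcomes produced by a data-mining model.
rates = {
    "group_a": selection_rate(selected=50, applicants=100),  # 0.50
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}
print(rates, "flags adverse impact:", not passes_four_fifths(rates))
# 0.30 / 0.50 = 0.60 < 0.80, so this pattern would be flagged.
```

Passing or failing such a screen says nothing about whether the underlying correlations are legitimate or merely reflect the historic patterns of prejudice and flawed data the Essay describes.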

Addressing the sources of this unintentional discrimination and remedying the corresponding deficiencies in the law will be difficult technically, difficult legally, and difficult politically. There are a number of practical limits to what can be accomplished computationally. For example, when discrimination occurs because the data being mined is itself a result of past intentional discrimination, there is frequently no obvious method to adjust historical data to rid it of this taint. Corrective measures that alter the results of the data mining after it is complete would tread on legally and politically disputed terrain. These challenges for reform throw into stark relief the tension between the two major theories underlying antidiscrimination law: anticlassification and antisubordination. Finding a solution to big data’s disparate impact will require more than best efforts to stamp out prejudice and bias; it will require a wholesale reexamination of the meanings of “discrimination” and “fairness.”

Contextual Expectations of Privacy

35 Cardozo L. Rev. 643 (2013)

Fourth Amendment search jurisprudence is nominally based on a “reasonable expectation of privacy,” but actual doctrine is disconnected from society’s conception of privacy. Courts rely on various binary distinctions: Is a piece of information secret or not? Was the observed conduct inside or outside? While often convenient, none of these binary distinctions can adequately capture the complicated range of ideas encompassed by “privacy.” Privacy theorists have begun to understand that a consideration of social context is essential to a full understanding of privacy. Helen Nissenbaum’s theory of contextual integrity, which characterizes a right to privacy as the preservation of expected information flows within a given social context, is one such theory. Grounded, as it is, in context-based normative expectations, the theory describes privacy violations as unexpected information flows within a context, and does a good job of explaining how people actually experience privacy.

This Article reexamines the meaning of the Fourth Amendment’s “reasonable expectation of privacy” using the theory of contextual integrity. Consider United States v. Miller, in which the police gained access to banking records without a warrant. The theory of contextual integrity shows that Miller was wrongly decided because diverting information meant purely for banking purposes to the police altered an information flow in a normatively inferior way. Courts also often demonstrate contextual thinking below the surface, but get confused because the binaries prevalent in the doctrine hide important distinctions. For example, application of the binary third party doctrine in cases subsequent to Miller obscures important differences between banking and other settings. In two recent cases, United States v. Jones and Florida v. Jardines, the Supreme Court has seemed willing to consider new approaches to search, but it has lacked a framework in which to discuss complicated privacy issues that defy binary description. In advocating a context-based search doctrine, this Article provides such a framework, while realigning a “reasonable expectation of privacy” with its meaning in society.

The Journalism Ratings Board: An Incentive-Based Approach to Cable News Accountability

44 U. Mich. J.L. Reform 467 (2011)

The American establishment media is in crisis. With newsmakers primarily driven by profit, sensationalism and partisanship shape news coverage at the expense of information necessary for effective self-government. Focused on cable news in particular, this Note proposes a Journalism Ratings Board to periodically rate news programs based on principles of good journalism. The Board would publish periodic reports and display the news programs’ ratings during the programs themselves, similar to parental guidelines for entertainment programs. In a political and legal climate hostile to command-and-control regulation, such an incentive-based approach would help cable news fulfill the democratic function of the press.
