The PEP Report – November 2018

The Federal Trade Commission is hosting a series of hearings on Big Data to air and understand our current evidence base. Last week, the hearings on “Competition and Consumer Protection in the 21st Century” were held at American University Washington College of Law. (You can watch the speakers here.) This post will provide a summary of the first panel of the hearings, which I thought was one of the highlights of the event.

Omri Ben-Shahar, who has produced very influential work on mandated disclosures, discussed a new draft that proposes a Pigouvian tax on personal data use, on the theory that such a tax would allow innovations that are worth their noxious effects to continue to emerge and flourish. At the end of the panel, he also returned to the theme of the failure of mandated disclosures. Privacy disclosures attempt an impossible mission: people do not read them because they are too complex. In response, many advocates call for simplified disclosures, but “you can’t really simplify the complex.” He believes a better, more protective system for consumers would embrace the use of big data so that companies can offer each consumer products matched to a good prediction of what would most satisfy that person.

Liad Wagman has a lot of cool empirical projects that are highly relevant to the FTC’s policy mission. In one study (coauthored with Jin-Hyuk Kim), he assessed the results of a natural experiment in the Bay Area. Wagman posits that banks and other firms may collect so much personal information that the marginal privacy costs outweigh the marginal benefits from the firms being better able to match goods or services to the consumer. But how do we know where this tipping point is? One necessary step is to understand what the marginal benefits to consumers from matching or secondary markets might be in a particular context. Wagman and Kim took advantage of variation in financial privacy regulations across five counties in the San Francisco area. Three counties required customers to opt into data selling, and two required them to opt out if they did not want the banks to sell their data. Because the vast majority of consumers do not bother to change the default, lenders in the two opt-out counties routinely sold customer data while lenders in the other three did not. In the counties where data could be easily transferred, loan prices were lower (controlling for other observables) and there were fewer foreclosures.
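To make the design concrete, here is a minimal sketch of the kind of comparison the natural experiment enables. The numbers are entirely made up by me (nothing here comes from Wagman and Kim's data); the point is only the structure: compare average loan prices under the opt-out default, where data flows freely, against the opt-in default.

```python
# Illustrative sketch with invented numbers -- not the authors' data or code.
from statistics import mean

# Hypothetical loan APRs (%) originated under each default regime.
opt_in_counties = [6.1, 6.3, 6.0, 6.4, 6.2]   # opt-in default: data rarely sold
opt_out_counties = [5.6, 5.8, 5.7, 5.9, 5.5]  # opt-out default: data routinely sold

def default_effect(treated, control):
    """Difference in mean loan price between the two default regimes."""
    return mean(treated) - mean(control)

effect = default_effect(opt_out_counties, opt_in_counties)
print(f"Estimated price difference: {effect:.2f} percentage points")
```

In these toy numbers, the free-data counties show loan prices about half a percentage point lower; the actual study controls for many other observables before drawing that comparison.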

More recently, Wagman has released results of a study (coauthored with Ginger Jin and Jian Jia) on the early impact of GDPR on the European economy. The results so far are not good: much less investment in EU ventures, and a loss of tech jobs. (Later in the hearings, a speaker from the EU questioned whether these results will persist, and whether they suffer from data incompleteness: new investment and jobs in privacy-enhancing services may not be captured in the outcome data that Wagman, Jin, and Jia are using. The study draws on Crunchbase, collecting data on “all technology-venture related activity in the EU and US.” Presumably a privacy-related new venture would count as a technology venture, but I suppose investments in ventures that have nothing to do with technology are not included and could differ between the US and EU in response to GDPR.)

In more theoretical work coauthored with Curtis Taylor, Wagman modeled a range of market contexts with oligopolistic features to better understand which consumers tend to win and lose when privacy rights are created and enforced. He summarized the results at the FTC hearings: in most models where data is restricted, consumers are worse off (in terms of prices, not accounting for any intrinsic value of privacy). Meanwhile, firms actually benefit from privacy rules because they do not have to compete as aggressively and can retain higher profits. Other work has found that identical compliance requirements tend to favor large, established firms because the costs are more burdensome, possibly even debilitating, for start-ups. These are nice reminders that an industry’s embrace of privacy rights can be anticompetitive.

Wagman’s overall message was that regulators should not apply a blanket, uniform approach to privacy across wide-ranging market contexts.

I learned a great deal from Florian Zettelmeyer’s presentation on the difficulties of market research when machine learning is selecting users into treatments. Zettelmeyer described findings from a study using Facebook’s “Lift Test Tool,” which ran experiments that randomized users between seeing the ads the usual auction would have served and seeing something else. The research group could then link the Facebook ads to actual purchasing behavior, even when no clicks came through Facebook, thanks to a “conversion tracker.” The research team found that there was indeed a significant “lift” (increase in sales) from behaviorally targeted advertisements. But they also found that estimating the effects of behavioral advertising with traditional observational methods rather than the RCTs would have produced wildly different results. Most of the time, observational data would have overestimated the effects of behavioral advertising, but not always. Outside of the session, I talked to Zettelmeyer about why randomized controlled trials, or A/B tests, are not as prevalent or as easy to run on platforms as I used to think.
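Why would observational estimates overstate lift? Because ad delivery is itself targeted at likely buyers, the exposed and unexposed groups differ at baseline before any ad runs. Here is a toy simulation of my own (none of these numbers or mechanics come from the actual study or from Facebook's tool) contrasting the naive observational comparison with a randomized holdout:

```python
# Toy model: a small true ad effect, but targeting selects likely buyers.
import random

random.seed(0)
TRUE_LIFT = 0.02  # assumed: ads raise purchase probability by 2 points

def simulate(n=100_000):
    obs_exposed, obs_unexposed, rct_treat, rct_ctrl = [], [], [], []
    for _ in range(n):
        base = random.random() * 0.2   # user's baseline purchase propensity
        targeted = base > 0.1          # targeting picks the likelier buyers
        # Observational data: exposure simply follows targeting.
        p = base + (TRUE_LIFT if targeted else 0)
        buy = random.random() < p
        (obs_exposed if targeted else obs_unexposed).append(buy)
        # RCT: within the targeted group, hold out a random control.
        if targeted:
            if random.random() < 0.5:
                rct_treat.append(random.random() < base + TRUE_LIFT)
            else:
                rct_ctrl.append(random.random() < base)
    rate = lambda xs: sum(xs) / len(xs)
    obs_lift = rate(obs_exposed) - rate(obs_unexposed)
    rct_lift = rate(rct_treat) - rate(rct_ctrl)
    return obs_lift, rct_lift

obs_lift, rct_lift = simulate()
print(f"Naive observational lift: {obs_lift:.3f}")
print(f"Randomized-holdout lift:  {rct_lift:.3f}")
```

The observational comparison bundles the selection gap in baseline propensity together with the true ad effect, so it comes out several times larger than the causal lift the randomized holdout recovers.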

The lift test tool does provide convincing evidence, though, that targeted advertising is significantly more valuable to advertisers than generic advertising. On the surface, this looks like it conflicts with some previous findings of Alessandro Acquisti suggesting that content providers don’t make a lot more money from offering behavioral advertisements than they do from contextual (but not behavioral) advertising. Acquisti pointed out that his work and Zettelmeyer’s are looking at two different markets: advertisers versus content producers and platforms.

The first set of speakers then had a thoughtful and wide-ranging discussion about price discrimination, the value of bigger data, and a few other topics. On price discrimination, panelists mentioned that very few companies do the sort that irritates consumers the most (personally tailored prices that approach first-degree price discrimination), though ride-sharing apps like Uber are beginning to do more targeted pricing. But the differences between publicly reviled price discrimination and the sort that is generally accepted or even appreciated are hard to define. For example, is pricing by time of purchase the same as tailoring to personal characteristics? Moreover, as most economists point out, price discrimination can be good for consumers even when firms capture a lot of consumer surplus, because it opens products and services to people less willing (and often less able) to pay. Ben-Shahar noted that firms have a hard time doing too much price tailoring with goods because there is so much opportunity for arbitrage: another firm can come in and offer the same good at a lower price (but still above marginal cost). But with services, or goods whose experience can vary along more dimensions, there may be more opportunity for price discrimination. Finally, Zettelmeyer challenged the conventional wisdom that the marginal returns from more and more data are diminishing. He explained that there can be a sea change when a certain level of prediction accuracy is achieved. For example, when a prediction algorithm becomes accurate enough to ship products to the right location before they are even ordered, costs can drop dramatically.
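Zettelmeyer's anticipatory-shipping point can be illustrated with a back-of-the-envelope calculation. All of the dollar figures below are my own invention, chosen only to show the shape of the argument: below a break-even accuracy, pre-positioning inventory never pays, so extra accuracy saves nothing; past it, each additional point of accuracy cuts fulfillment costs directly.

```python
# Hypothetical costs (my own numbers) for a threshold effect in data value.
REGULAR_SHIP = 8.00   # ship after an order arrives
BULK_SHIP = 3.00      # cheaper cost to pre-position stock in advance
MISS_PENALTY = 9.00   # rerouting an item pre-positioned to the wrong place

def per_order_cost(accuracy):
    """Expected fulfillment cost if we pre-ship whenever that is cheaper."""
    preship = BULK_SHIP + (1 - accuracy) * MISS_PENALTY
    return min(REGULAR_SHIP, preship)

# Below the break-even accuracy (~0.44 here), better predictions save
# nothing; past it, costs fall steadily toward the bulk-shipping floor.
for acc in (0.30, 0.44, 0.45, 0.60, 0.90):
    print(f"accuracy {acc:.2f} -> expected cost ${per_order_cost(acc):.2f}")
```

The marginal value of data that moves accuracy from 0.30 to 0.44 is zero in this toy model, while the same-sized improvement from 0.45 upward pays off on every order, which is the opposite of smoothly diminishing returns.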

Throughout the hearings, the Facebook/Cambridge Analytica scandal was used as the stand-in for privacy harm. This is not surprising, since nearly everybody believes something went wrong in that episode, but the drawback is that it allowed many speakers to avoid offering a concrete definition or explanation of consumer harm. I used my comments on the second day of hearings to address this issue, and I will soon produce a PEP white paper that elaborates on the ideas I presented to the room. I also gave a presentation on the first day on the clash between privacy regulation and First Amendment law.