My research focuses on the social, political, and ethical problems that arise as our lives are increasingly lived through information technology. The kinds of things I think and write about range from what privacy means in the digital age to what big data is and how it's changing the way we understand ourselves. Broadly speaking, my projects fall along two tracks: (1) the ethics and politics of privacy and surveillance, and (2) issues in philosophy of technology and science and technology studies (STS) more generally.


Privacy/Surveillance

"Information Privacy and Social Self-Authorship"

Abstract: The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship reveals threats to privacy from information technology that the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary: it makes it difficult for us to know when others are forming impressions about us, and it provides them with tools for making assumptions about who we are that obviate the need for our involvement in the process.

>> Pre-publication Draft


"Obstacles to Transparency in Privacy Engineering" (co-authored with Kiel Brennan-Marquez)

Abstract: Transparency is widely recognized as indispensable to privacy protection. However, producing transparency for end-users is often antithetical to a variety of other technical, business, and regulatory interests. These conflicts create obstacles that stand in the way both of developing tools that provide meaningful privacy protections and of having such tools widely adopted. In this paper, we develop a "map" of these common obstacles to transparency, in order to assist privacy engineers in navigating them successfully. Furthermore, we argue that some of these obstacles can be avoided by distinguishing between two different conceptions of transparency and considering which is at stake in a given case—transparency as providing users with insight into what information about them is collected and how it is processed (what we call transparency as a "view under-the-hood"), and transparency as providing users with facility in navigating the risks and benefits of using particular technologies.

>> Link


"Hermeneutic Privacy: On Identity, Agency, and Information" (dissertation)

Abstract: The dominant approach in privacy theory defines information privacy as individual control over personal information. Against this view, I argued that the idea of controlling personal information is both incoherent and impracticable. That is because personal information is indistinguishable from non-personal information, and information (of any kind) is nearly impossible to control. Instead of understanding information privacy exclusively in terms of information control, I argued that we ought to think more broadly about the ways people use information to shape how others perceive and understand who they are — what I call social self-authorship. In addition to trying to control which particular pieces of information about us other people have, we work to contextualize and guide the interpretation of that information. I argued that our capacity to do that is central to our ability to draw interpersonal boundaries, and that our ability to draw such boundaries is a necessary condition for social and political agency. In order to protect information privacy in the Information Age, we therefore have to respect what I call norms of hermeneutic privacy. I described those norms, and I discussed how they might be realized in technology design, technology education, and technology law.

>> Intro PDF


Philosophy of Technology/STS

"Transparent Media and the Development of Digital Habits" (forthcoming)

Abstract: Our lives are guided by habits. Most of the activities we engage in throughout the day are initiated and carried out not by rational thought and deliberation, but through an ingrained set of dispositions or patterns of action—what Aristotle calls a hexis. We develop these dispositions over time, by acting and gauging how the world responds. I tilt the steering wheel too far and the car’s lurch teaches me how much force is needed to steady it. I come too close to a hot stove and the burn I get inclines me not to get too close again. This feedback and the habits it produces are bodily. They are possible because the medium through which these actions take place is a physical, sensible one. The world around us is, in the language of postphenomenology, an opaque one. We notice its texture and contours as we move through it, and crucially, we bump up against it from time to time. The digital world, by contrast, is largely transparent. Digital media are designed to recede from view. As a result, we experience little friction as we carry out activities online; the consequences of our actions are often not apparent to us. This distinction between the opacity of the natural world and the transparency of the digital one raises important questions. In this chapter, I ask: how does the transparency of digital media affect our ability to develop healthy habits online? If the digital world is constructed precisely not to push back against us, how are we supposed to gauge whether our actions are good or bad, for us and for others? The answer to this question has important ramifications for a number of ethical, political, and policy debates around issues in online life. For in order to advance cherished norms like privacy, civility, and fairness online, we need more than good laws and good policies—we need good habits, which dispose us to act in ways conducive to our and others’ flourishing.


"Ihde's Missing Sciences: Postphenomenology, Big Data, and the Human Sciences" (invited essay)

Abstract: In Husserl's Missing Technologies, Don Ihde urges us to think deeply and critically about the ways in which the technologies utilized in contemporary science structure the way we perceive and understand the natural world. In this paper, I argue that we ought to extend Ihde's analysis to consider how such technologies are changing the way we perceive and understand ourselves too. For it is not only the natural or "hard" sciences that are turning to advanced technologies for help in carrying out their work, but also the social and "human" sciences. One set of tools in particular is rapidly being adopted—the family of information technologies that fall under the umbrella of “big data.” As in the natural sciences, big data is giving researchers in the human sciences access to phenomena they would otherwise be unable to experience and investigate. And as in the natural sciences, these tools thereby shape the ways those scientists perceive and understand who and what we are. Looking at two case studies of big data-driven research in the human sciences, I begin to suggest how we might understand these phenomenological and hermeneutic changes.

>> Link


"Artificial Intelligence and the Body: Dreyfus, Bickhard, and the Future of AI"

Abstract: For those who find Dreyfus's critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus's work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus's work into conversation with the work of Mark Bickhard offers a way of answering this question, and I try to suggest what doing so means for AI research.

>> Link