Papers

(See also cv)

Peer reviewed journal articles

Miller, Boaz and Isaac Record. Forthcoming. Responsible Epistemic Technologies. New Media & Society.

Information providing and gathering increasingly involve technologies like search engines, which actively shape their epistemic surroundings. Yet, a satisfying account of the epistemic responsibilities associated with them does not exist. We analyze automatically generated search suggestions from the perspective of social epistemology to illustrate how epistemic responsibilities associated with a technology can be derived and assigned. Drawing on our previously developed theoretical framework that connects responsible epistemic behavior to practicability, we address two questions: first, given the different technological possibilities available to searchers, the search technology, and search providers, who should bear which responsibilities? Second, given the technology’s epistemically relevant features and potential harms, how should search terms be autocompleted? Our analysis reveals that epistemic responsibility lies mostly with search providers, which should eliminate three categories of autosuggestions: those that result from organized attacks, those that perpetuate damaging stereotypes, and those that associate negative characteristics with specific individuals.

Record, Isaac, Daniel Southwick, ginger coons, and Matt Ratto. 2015. Regulating the Liberator: Prospects for the Regulation of 3D Printing. Journal of Peer Production 6.

Soon after Cody Wilson released his plans for the Liberator 3D printable gun, our Critical Making Laboratory undertook to produce a non-functioning version of the gun in order to assess the technical, material, and economic challenges associated with 3D printing proscribed objects. This paper recounts our experiences in creating the gun and analyzes the disruptive implications of increasing availability of emerging fabrication technologies for regulation and regulators. 3D printing promises to upend traditional manufacturing by making complex, precision objects easy to produce. Plans are digital and can be duplicated and distributed over the Internet essentially without cost. 3D printers themselves, like their 2D namesakes, appear to be general purpose machines with many legitimate functions, making their regulation a challenge. Some attention has already been paid to what regulators should do and to predicting what they will do. In this paper, we seek to explore what regulators can do. First, we lay out a conceptual framework that allows us to assess the technological possibilities afforded by 3D printers. Second, we assess the increased regulatory challenge presented by this changed technological infrastructure. We observe that much effective regulation is accomplished by technical, economic, ethical, and social constraints on action rather than by explicit legal proscription. For example, the high cost of precision machining and the high level of technical skill required have traditionally been an effective prophylaxis against private individuals producing high-quality firearms “under the radar.” Low-cost 3D printers have the potential to allow for the near-effortless creation of precision parts, erasing this “contextual regulation.” We consider, in broad strokes, several possible regulatory targets: 3D printers, print materials, software, and the design file.
For example, 3D printers could be licensed, materials could be watermarked, software could prevent the creation of certain shapes, or designs could carry increased legal culpability for damages or injuries. Against the potential gains to public safety, we weigh the potential costs of regulation that may 1) increase barriers to innovation, 2) unnecessarily restrict or complicate access to general purpose equipment, or 3) be unworkably costly in dollars and person-hours.

Ratto, Matt, Isaac Record, ginger coons, and Max Julien. 2014. Blind Tennis: Extreme Users and Participatory Design. PDC ’14 Proceedings of the 13th Participatory Design Conference: Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium Papers, and Keynote Abstracts – Volume 2, 41-44. DOI: 10.1145/2662155.2662199

We explore questions related to materiality, participation, and inclusive design that arise from a series of events involving the design and prototyping of a tennis ball for use in ‘blind tennis.’ We observed that the blind user-designers were full participants in the design discussion and testing phases, but were less able to take part in the construction of the prototypes. This prompted us to examine the role material engagement plays in participatory and inclusive forms of design and, as part of our explorations, to create an experimental circuit design workflow that accommodates blind prototypers. We use this experience to probe the role materiality plays in processes of participatory design.
Record, Isaac. 2013. Technology and Epistemic Possibility. Journal for the General Philosophy of Science 44(2): 319-336. DOI: 10.1007/s10838-013-9230-8.

My aim in this paper is to give a philosophical analysis of the relationship between contingently available technology and the process of knowledge production. My concern is with what specific subjects can know in practice, given their particular conditions, especially available technology, rather than what can be known “in principle” by a hypothetical entity like Laplace’s Demon. The argument has two parts. In the first, I’ll construct a novel account of epistemic possibility that incorporates two pragmatic conditions: responsibility and practicability. For example, whether subjects can gain knowledge depends in some circumstances on whether they have the capability of gathering relevant evidence. In turn, the possibility of undertaking such investigative activities depends in part on factors like ethical constraints, economic realities, and available technology. In the second part of the paper, I’ll introduce “technological possibility” to analyze the set of actions made possible by available technology. To help motivate the problem and later test my proposal, I’ll focus on a specific historical case, one of the earliest uses of digital electronic computers in a scientific investigation. I conclude that the epistemic possibility of gaining access to certain scientific knowledge depends (in some cases) on the technological possibility for the construction and operation of scientific instruments.

Record, Isaac, Matt Ratto, Amy Ratelle, Adriana Ieraci, and Nina Czegledy. 2013. DIY Prosthetics Workshops: ‘Critical Making’ for Public Understanding of Human Augmentation. Proceedings of the IEEE International Symposium on Technology and Society (ISTAS 2013): 117-125.

We reflect on our ongoing series of DIY Prosthetics Workshops intended to engage the public in critical discourse about technology and human augmentation through engagement with prosthetics. The goal of these workshops is to enhance understanding of prosthetic technologies through both conceptual and material exploration. We describe our efforts to capture the makings of our workshop in an open, modifiable “kit” comprising “three Ps:” prompts for reflection, parts for construction, and publics for participation.
Miller, Boaz and Isaac Record. 2013. Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies. Episteme 10(2): 117-134.

People increasingly form beliefs based on information gained from automatically filtered Internet sources such as search engines. However, the workings of such sources are often opaque, preventing subjects from knowing whether the information provided is biased or incomplete. Users’ reliance on Internet technologies whose modes of operation are concealed from them raises serious concerns about the justificatory status of the beliefs they end up forming. Yet it is unclear how to address these concerns within standard theories of knowledge and justification. To shed light on the problem, we introduce a novel conceptual framework that clarifies the relations between justified belief, epistemic responsibility, action, and the technological resources available to a subject. We argue that justified belief is subject to certain epistemic responsibilities that accompany the subject’s particular decision-taking circumstances, and that one typical responsibility is to ascertain, so far as one can, whether the information upon which the judgment will rest is biased or incomplete. What this responsibility comprises is partly determined by the inquiry-enabling technologies available to the subject. We argue that a subject’s beliefs that are formed based on Internet-filtered information are less justified than they would be if she either knew how filtering worked or relied on additional sources, and that the subject may have the epistemic responsibility to take measures to enhance the justificatory status of such beliefs.

Dissertation

Record, Isaac. 2012. Knowing Instruments: Design, Reliability, and Scientific Practice. University of Toronto. Dissertation supervised by Anjan Chakravartty, Joseph Berkovitz, and Chen-Pang Yeang (University of Toronto). External: Allan Franklin (University of Colorado).
My dissertation explores how scientific instruments figure into the process of knowledge production, focusing on a case study of Monte Carlo simulation in nuclear physics. Virtually every science now relies on scientific instruments. Extant accounts of instruments treat them as information bearers or extensions of native senses, but do not explain how instruments come to have the status they do. My dissertation fills this gap. The central argument is that the instrument design process aims to produce a coherent and reinforcing set of material capacities, conceptual models, and practices of trust that, together, warrant the kinds of inferences scientists make on the basis of instrument results.

Other Published Works

Resch, Gabby, Dan Southwick, Isaac Record, and Matt Ratto. Forthcoming. Thinking as Handwork: Critical Making with Humanistic Concerns. In Jentery Sayers (ed.), Making Humanities Matter. University of Minnesota Press.
Record, Isaac and Boaz Miller. Forthcoming. Taking iPhone Seriously: Epistemic Technologies and the Extended Mind. In Duncan Pritchard, Jesper Kallestrup, Orestis Palermos, and J. Adam Carter (eds.), Extended Epistemology. Oxford University Press.
 
David Chalmers thinks his iPhone exemplifies the extended mind thesis by meeting the criteria that he and Andy Clark established in their well-known 1998 paper. Andy Clark agrees. We take this proposal seriously, evaluating the case of the GPS-enabled smartphone as a potential mind extender. We argue that the “trust and glue” criteria enumerated by Clark and Chalmers are incompatible with both the epistemic responsibilities that accompany everyday activities and the practices of trust that enable users to discharge them. Prospects for revision of the original criteria are dim. We therefore call for a rejection of the trust criterion and a reevaluation of the extended mind thesis.
Record, Isaac. 2010. Scientific Instruments: Knowledge, Practice, and Culture (Editor’s Introduction). Spontaneous Generations 4(1): 1-7.

To one side of the wide third-floor hallway of Victoria College, just outside the offices of the Institute for the History and Philosophy of Science and Technology, lies the massive carcass of a 1960s-era electron microscope. Its burnished steel carapace has lost its gleam, but the instrument is still impressive for its bulk and spare design: binocular viewing glasses, beam control panel, specimen tray, and a broad work surface. Edges are worn, and desiccated tape still feebly holds instructive reminders near the control dials; this was once a workhorse in some lab. But it exists now out of time and place; like many of the scientific instruments we study, it has not been touched by knowing hands in decades.
The microscope in the hallway of the IHPST is a metaphor for the place of instruments in science studies. They are of central interest, but they do not really have their own place. Science studies, owing to roots in the history of ideas, conceptual and textual analysis, and ethnography, sometimes struggles to do justice to material things. It is no wonder we so often speak of instruments as theories instantiated, as inscription devices, or as actors in a network; that is, as extensions or modifications of things we already know how to study. But instruments are not those things, and treating them as such could distort our understanding of instruments and their role in science.
Record, Isaac. 2009. Review of Daniel Rothbart, Philosophical Instruments: Minds and Tools at Work. Spontaneous Generations 3(1): 233-235.
Record, Isaac. 2008. Frankenstein in Lilliput: Science at the Nanoscale (Editor’s Introduction). Spontaneous Generations 2(1): 22-24.
Record, Isaac and Andrew Munro. 2008. Review of Paul E. Ceruzzi, Internet Alley: High Technology in Tyson’s Corner, 1945-2005. Spontaneous Generations 2(1): 251-253.

In Preparation

How simulations become evidence: Monte Carlo and the negotiation of scientific standards

The digital electronic computer was one of the most influential scientific instruments of the twentieth century. One of its earliest and still most prominent uses is Monte Carlo simulation, a computational method that arose during the Manhattan Project. In this paper, I trace the rise in status of Monte Carlo from its initial use in the 1940s as an auxiliary heuristic to the 1970s, when simulation results were regularly accepted as scientific evidence. I argue that this rise in status depended on the production and communication of “practices of trust” that subsumed Monte Carlo under accepted scientific standards.
Ulam, von Neumann, and Metropolis developed Monte Carlo for use in cases where analytic methods proved intractable and real experiments too dangerous or expensive. Monte Carlo was intended as merely a heuristic, a way for scientists to gain insight into intractable analytic equations so that they could be simplified for hand-calculation. But the calculations quickly came to be accepted as scientific evidence in their own right. For example, aspects of the simulation that were initially proposed as estimates or idealizations, such as the stochastic behavior of simulated neutrons, were reinterpreted as being representative of reality, leading to new hypotheses about the nature of actual neutrons. When the focus of the Manhattan Project shifted from the atomic to the hydrogen bomb (that is, from a fission to a fusion bomb), reliance on Monte Carlo evidence became even more essential, for there simply were no experimentally available instances of fusion.
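The core logic that made Monte Carlo attractive as a heuristic, estimating an otherwise intractable quantity by averaging over random samples, can be sketched in a few lines. The toy example below is my own illustration, not drawn from the historical neutron-transport codes; it estimates a definite integral by mean-value sampling:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly random sample points (mean-value Monte Carlo)."""
    rng = random.Random(seed)  # seeded for reproducibility
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 on [0, 1] is exactly 1/3; the estimate
# converges toward it at a rate of roughly 1/sqrt(n).
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The same scheme extends to problems with no tractable closed form, which is where the method's evidential status became contested: the output is a statistical estimate whose trustworthiness depends on sample size and on the fidelity of the underlying stochastic model.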

The question is how this initially contested method became an acceptable source of scientific evidence. I attempt to get at the answer through an analysis of the early published papers that introduced Monte Carlo to various scientific communities, which implicitly or explicitly include arguments for the acceptance of Monte Carlo results. As the method developed and spread, scientists renegotiated standards of evidence in order to include evidence from the new practice, and at the same time they modified the practice itself to be ever more compatible with existing standards. Of particular importance for the acceptance of Monte Carlo was the development and communication of the practices of trust that gave scientists confidence in the validity and appropriate use of simulation results. I detail several such practices and argue that they were co-produced with the simulation during its design.

Remaking the Humanities: 3D methodologies in the humanistic studies of science and technology

I review the state of the art in 3D scanning and printing (3D) and make two arguments regarding its use in the humanistic studies of science and technology (HSST). The weak argument concludes that 3D is acceptable for HSST. The strong argument concludes that 3D makes possible certain kinds of investigations that should now be considered necessary for making certain kinds of arguments within HSST. The chief argument in favour of the weak thesis is that 3D may provide new evidence for HSST. The principled argument against 3D in HSST amounts to boundary policing: Because HSST doesn’t train its practitioners to use or evaluate 3D evidence, it should not be used. I argue that HSST has always benefitted from a strongly interdisciplinary set of methods, so rather than reject 3D, we should instead focus our concerns on recalibrating our evidentiary standards to handle 3D evidence.

Who is a Suitably Prepared Model User?

Scientists now use models in nearly every aspect of scientific practice. In recent decades, philosophers of science have devoted increasing attention to models, and in particular to the question of how models represent. As yet, however, no account adequately explains what makes a model a good one. In this paper, I argue that such an account must first establish who is a suitable model user, for this will give us a clue as to what properties good models will have. Comparatively little attention has been paid to what makes a suitable model user. My proposal is this: a suitably prepared user is one for whom the affordances of a model to produce valid and relevant inferences are readily perceptible, and for whom affordances to invalid or irrelevant inferences are either hidden or easily identified as improper. Affordances are the possibilities for action that a given individual is competent to act on; models afford possibilities for users to manipulate them in order to generate inferences. I will consider four factors that contribute to a user’s preparation to make the right inferences and avoid the wrong ones: native human capabilities, socialization, experience, and formal training. With this account of users in hand, I then return to the question of what makes a good model. I argue that a good model is easier to manipulate than its target and affords users useful inferences about its target. If it were not “easier to manipulate” than the target, at least in certain respects, we would just manipulate the target directly. And if the model did not afford useful inferences, we would not be using it, no matter how easy it was to use. These criteria may be assessed only with reference to the specific interests and capabilities of the model user.