Will Science Remain Human?

Rome, Italy | March 5-6, 2018

Can machines substitute for scientists in the crucial aspects of scientific practice? This is the question at the heart of the "Will Science Remain Human? Frontiers of the Incorporation of Technological Innovations in the Bio-Medical Sciences" Experts Meeting.

Scientific practice takes place in a context increasingly populated and exercised by machines, and its objects of study (the ‘observables’) are themselves increasingly constructed and identified by means of technology. Technology is pervading established disciplines, and technology-based fields that were not previously recognized as sciences (e.g., robotics) now are. This explosion of technological tools for scientific research (and of technological drivers thereof) seems to call for a renewed understanding of the human character of science.

While technically advanced analyses try to capture and tackle the issue of unreliable or unelaborated data, we want to pose a deeper problem that concerns science as a human activity: scientific knowledge runs the risk of being represented in a simplified way, hiding the human responsibility, freedom, creativity, and choice of observables and explananda that have never ceased to characterize it. Critical thinking about the reliability and meaningfulness of data and information, when tackled from this point of view, acquires renewed urgency, thickness and complexity.

More broadly, we need ‘bio-medical humanities’ to be expanded as both an urgent domain of study and a fundamental component of science training. Traditionally interpreted as cultural add-ons to the science curriculum, the bio-medical humanities – with a strong role for philosophy and epistemology together with the social sciences – are now called to intervene right within the advancement of science, making the most of all the thinking and operating resources being elaborated in the progress of bio-medicine and healthcare.

Speakers

Francesco Bianchini – University of Bologna

Mieke Boon – University of Twente

Fridolin Gross – University of Kassel

Paul Humphreys – University of Virginia

Benjamin Hurlbut – Arizona State University

Giuseppe Longo – CNRS and École Normale Supérieure, Paris & Tufts University

Alfredo Marcos – University of Valladolid

Sandra D. Mitchell – University of Pittsburgh

Barbara Osimani – Ludwig-Maximilians-University of Munich and University of Ancona

Christopher Tollefsen – University of South Carolina

Emanuele Ratti – University of Notre Dame

Eric Winsberg – University of South Florida 

Discussants

Kathleen Creel – University of Pittsburgh

Melissa Moschella – Columbia University

Fabio Sterpetti – University Campus Bio-Medico of Rome

Mariachiara Tallacchini – Catholic University of the Sacred Heart 

Academic Leader

Marta Bertolaso – University Campus Bio-Medico of Rome

Paper Abstracts

Francesco BIANCHINI — Virtually extending the bodies with (health) technologies

Self-perception of one’s own body will increasingly change in the future as a result of new web technologies. The building of big databases of bodily features, together with the creation of portals dedicated to the management of such data by individual users and by institutions or companies in charge of specific functions, such as the electronic health record (EHR) or the electronic medical record (EMR), will bring about a new conception of the self as a body, and therefore as a cognitive embodied agent. I want to suggest that the conceptual framework of the Extended Mind Thesis (EMT) is useful for analyzing and understanding the coming developments of this trend. I propose to consider how it is possible that our perception of the body is extended, and how this extension can affect our cognitive capabilities that involve bodifulness. I support my claim with examples from health care technologies to outline some potential further developments related to virtually extended bodies. Present-day tools are, in fact, less and less material and more and more digitalized. In which sense are we allowed to speak of an extended body from this point of view? In other terms, how do digital tools and devices extend our bodies? The virtual body, in which institutions participate and which is statistically connected with other virtual bodies, would constitute, together with the real body, a whole extended body, subject to a new kind of manipulation and control. What changes should we expect from such developments? Would they be able to influence our bodily responses to external stimuli, both environmental and social? Most likely, biomedical scientific research will benefit from the rise of such frameworks, which are in turn enabled by parallel research in cognitive science and epistemology, where human and non-human elements closely coexist.

Mieke BOON — How scientists are brought back into science – The error of empiricism

This paper aims to contribute to the critical investigation of whether human-made scientific knowledge, and the scientist’s role in developing it, will remain crucial – or whether arbitrary algorithms, provided by machine-learning technologies that construct relationships between data input and output, can replace humans once the algorithms meet crucial epistemic criteria such as empirical adequacy, reliability and relevance better than limited human beings ever could. In this article a kind of double stroke is made. First, it is argued that fundamental presuppositions of empiricism give reason to believe that machines will ultimately make scientists superfluous. Second, it is argued that empiricism is flawed, since it does not account for why humans need knowledge.

Fridolin GROSS — The impact of formal reasoning in computational biology

In this contribution I would like to investigate the influence of computational methods in molecular biology by focusing on the very meaning of the concept of computation. At the most general level, computation can be understood as the transformation of a given sequence of symbols according to a set of formal rules. On this view, computational methods are not restricted to processes carried out on a (digital) computer, but may include other practices that involve the use of formal languages. The basic idea, though, is that computation is in some sense ‘mechanical’, i.e. involves procedures that can in principle be carried out by a machine. This broad characterization will allow me to pin down the differences between computational methods and the “informal” cognitive methods of human scientists. As research in cognitive psychology has revealed, human reasoning can deviate substantially from the model of formal computation, for example by relying on implicit background assumptions or collateral information. However, computational methods do not necessarily represent an optimized version of informal reasoning. Instead, they should be understood as cognitive tools that can extend but also fundamentally transform human cognition. Clearly, formal/computational and informal/non-computational approaches do not represent mutually exclusive approaches to science. They are often combined in practice and may support each other in various ways. An account of the impact of computational methods in biology must therefore also investigate the interactions between those different ‘cognitive styles.’ To illustrate my perspective, I will briefly discuss examples from different areas in computational biology: computational modeling, image analysis, and bioinformatics. Even though the influence of computational methods reveals itself in different ways, I show that an analysis of the interactions of formal and informal elements can in each case serve as a key to a deeper understanding of the transformations taking place.
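As a purely illustrative aside (not part of the abstract), the broad notion of computation invoked here – the transformation of a symbol sequence according to formal rules, executable in principle by a machine – can be made concrete with a minimal string-rewriting sketch in Python; the rule set below is hypothetical.

    # Minimal sketch (illustrative only): computation as the purely "mechanical"
    # transformation of a symbol sequence according to a fixed set of formal rules.
    REWRITE_RULES = {"AB": "BA", "BA": "AAB"}   # hypothetical rule set

    def rewrite_step(symbols: str) -> str:
        """Apply the first matching rule to its leftmost occurrence, if any."""
        for pattern, replacement in REWRITE_RULES.items():
            index = symbols.find(pattern)
            if index != -1:
                return symbols[:index] + replacement + symbols[index + len(pattern):]
        return symbols                           # no rule applies: the process halts

    state = "AABB"
    for _ in range(5):                           # run a few mechanical steps
        print(state)
        state = rewrite_step(state)

No judgment or background knowledge enters the loop: that rule-bound character is exactly what distinguishes formal computation from the “informal” reasoning discussed in the abstract.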

Paul HUMPHREYS — Why automated science should be cautiously welcomed 

I shall focus on the prospects for automated science, argue that it has significant benefits in the domain of the natural sciences, and suggest that the potential problems have been exaggerated. Any scientific development can be used in a formulaic and mindless way; skillful uses bring great benefits, one example being the relief of scientists from drudge work through automated genomic analysis. Central epistemological problems, such as representational opacity and problems of interpretation, are shared by other and previous forms of scientific research. Despite this cautiously optimistic attitude, I recognize that there are serious additional difficulties and intellectual challenges accompanying a shift to alternative epistemologies. The automation of practical and theoretical knowledge, together with its inscrutability, is something genuinely new. If we humans cannot understand the representations used by machine learning, the prospect of unintended consequences of the programs is raised considerably. Moreover, departing from human interests can have serious consequences, and some areas of science are now so intertwined with technology that socially harmful effects of the latter cannot be separated from the beneficial effects of the former. I examine three worries about the infiltration of science by technology – the worry from understanding, the worry from error, and the worry about applications – and use supervised and unsupervised machine learning methods applied to deep neural nets as an example of how these worries can and cannot be addressed. One important area is the presence of algorithmic bias and hidden biases in data sets. I conclude by looking at the role of models in these methods, the extent to which explainable artificial intelligence is feasible, and whether some version of the Precautionary Principle should be applied to methods in which occasional but catastrophic failures occur.

Benjamin HURLBUT — Behold the Man: figuring the human in the development of biotechnology

Protecting human integrity has long been a core concern about biotechnology. This is evident in the more than half a century of explicit debate about the potential of biotechnology to transform (and thereby potentially attenuate) human life. Less explicit but equally important has been the development of approaches to risk and benefit and of accounts of the virtues of biotechnological research – of the aspirations, potentials and human futures in whose name it is undertaken. This paper explores the question of whether science can remain human by interrogating some of the ways in which the biosciences have figured the human in the name of figuring biotechnology as aligned with, and not in violation of, the human, in effect remaking images of the human to cohere with projects of biotechnology. I examine three registers in which these modes have played out in the history of biotechnology: (1) in processes of characterizing and governing risk, for instance in notions of biological containment; (2) in a programmatic conceptual reorientation of the purpose of biological knowledge from ontological description to forms of control; and (3) in debates over what conceptual and discursive starting-point is appropriate to evaluating the potential of biologically transformative techniques for serving versus violating human integrity, for instance in current debates over human genome editing. I show that the conceptual orientations and practical consequences of framings of risk and benefit in biological research have been crucial sites for figuring “the human” in whose name projects in biotechnology are undertaken. These sites are consequential not only for the practical and normative character of the biosciences, but also as loci in which socially shared imaginations of human integrity are at stake – and with them the capacity for asking whether and in what ways science remains human.

Giuseppe LONGO — Some bias on bio-medical knowledge induced by the digital networks and the political bias on their use

Current Internet technologies are the result of an original assemblage of old and new technologies, whose networking interaction produces novelties. An analysis of some key aspects of this technological blend may help us to understand the emergence of unpredictable features and their effects on communicating human communities. Yet these phenomena are not uniquely determined by the technological infrastructure but also depend on the underlying social (and political) trends. Often, unsound claims and promises bias the role of the Internet and artificially direct its actual goals and potentialities. Typically, bibliometrics is inducing perverse effects on science and distorting human knowledge: as an indicator, it may be transformed into a target and, by positive feedback, it may reinforce self-entertaining fashions while killing new research directions. Thus, our fantastic digital networks become a tool to average out human knowledge and behavior, instead of increasing diversity and comparative debate. Moreover, the digital world is often used as an “image” for physical or biological phenomena, or even as an intrinsic structure of knowledge or reality. This sets a bias on science that reaches its most distorting role in claims aiming to replace human knowledge with computers elaborating information by and on “big data”. Used instead in an interactive and constructive way, digital technologies may enhance human activities and increase knowledge through sound practices. The talk will also focus on the bio-medical sciences and on abusive claims about “big data” therein. Recent mathematical work may help to show that a deluge of spurious correlations necessarily infiltrates sufficiently large databases.
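As a rough numerical illustration of that last claim (a sketch added here, not drawn from the talk; the sample and variable counts are arbitrary), a simulation with purely random, independent variables already exhibits strong pairwise correlations once the number of variables is large relative to the number of observations:

    # Illustrative sketch (not from the talk): with many purely random variables
    # and few observations, strong "spurious" correlations appear by chance alone.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_variables = 30, 2000          # few observations, many variables
    data = rng.normal(size=(n_samples, n_variables))

    corr = np.corrcoef(data, rowvar=False)     # pairwise correlations between variables
    np.fill_diagonal(corr, 0.0)                # ignore trivial self-correlations

    print("Largest absolute correlation between independent variables:",
          round(float(np.abs(corr).max()), 3))  # typically well above 0.7

None of these variables is causally related to any other, yet some pairs look strongly correlated; the point is that such chance regularities multiply as databases grow.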

Alfredo MARCOS — Dehumanizing technoscience

Technoscience is experiencing a twofold dehumanizing process. On the one hand, there is a trend toward the dehumanization of the subject involved in doing technoscience. Thus, technoscientific activities are performed in an increasingly automated context. This approach threatens to marginalize what is truly human: everything that has to do with creativity, with emotions, insights and experiences, with meaning and values – even moral and aesthetic values – and with reflection and dialogue. On the other hand, some applications of technoscience may transform nature to the point of making it hostile to the requirements of a properly human ecology. Moreover, some recent anthropotechnics could also become a danger to human nature itself. Trans- and post-humanist projects specifically point to the dissolution of humanity by technoscience. We may deepen our diagnosis by pointing out the ideologization of technoscience as one of the causes of the current dehumanizing trend. An ideologized science becomes scientifism, whereas an ideologized technique turns into technologism. Technoscientifism is incompatible not only with the wealth and meaning of human life, but also with human freedom. The ideologization of technoscience and its subsequent dehumanization rely on an oversimplified ontology and a misguided anthropology, as we shall attempt to uncover. After presenting the framework of the problem, my goal is to offer some ideas that may contribute to solving it. Firstly, it would be necessary to stand for a de-ideologization of technoscience. Secondly, I believe that the indispensable role of the person in the production of technoscience must be duly stressed. Lastly, I argue that technoscience makes sense and becomes valuable within a wider human horizon. Its dehumanization, on the contrary, condemns it to stupidity and sterility, and probably in the end to its own decadence or demise.

Sandra D. MITCHELL — Unsimple truths: Multiple perspectives and integrative strategies

I will present arguments for why having multiple perspectives is not only necessary, but also provides humans with the means for getting more accurate models of nature. These arguments rest on the partiality of representation, the character of the perspectives encoded in different experimental protocols, and methods of integration. Ever since the introduction of telescopes and microscopes, humans have relied on technologies to “extend” beyond human sensory perception in acquiring scientific knowledge. Contemporary scientific experiments, like x-ray crystallography and nuclear magnetic resonance spectroscopy for determining protein structure, involve causally detecting and processing information in ways that scientists trust more than what could be acquired by unaided human detection. Recently, the rise of artificial intelligence technologies that “extend” beyond human cognitive capacities has generated new questions for philosophers of science. I want to take up this challenge by investigating AI from two stances. The first is instrumental. Does AI provide just another instrument for humans to use in gaining scientific knowledge, like microscopes and NMR spectroscopy? Are the means by which we come to trust the results produced by these “sensory” technologies transferable to the results produced by AI “cognitive” technologies? The second is perspectival. If AI technologies produce results that are not mere extensions of human abilities but substantially different ways of reasoning, can we treat them as additional perspectives on a given scientific problem, as we do different experimental protocols or different modelling assumptions? Are the means by which we integrate multiple perspectives in these contexts transferable to integrating results produced by AI perspectives? I will explore these issues by appealing to examples in the area of protein structure science.

Barbara OSIMANI — Social games and epistemic losses: reliability and higher order evidence in medicine and pharmacology

In this paper I analyse the dissent around evidence standards in medicine and pharmacology as a result of distinct ways of addressing epistemic losses in our game with nature and the scientific ecosystem: an “elitist” and a “pluralist” approach. The former focuses on reliability as the minimisation of random and systematic error, and is grounded in a categorical approach to causal assessment, whereas the latter focuses on the high context-sensitivity of causation in medicine and in the soft sciences in general, and favours probabilistic approaches to scientific inference as better equipped to handle the defeasibility of causal inference in such domains. I then present a system for probabilistic causal assessment from heterogeneous evidence that does justice to the concerns of both positions, while also incorporating “higher order evidence” (evidence/information about the evidence itself) in hypothesis confirmation.

Emanuele RATTI — Predictions, phronesis and machine learning in biology

The application of machine learning (ML) to biology has fostered the idea that algorithms will be able to drive scientific discovery and biological progress in a way that humans alone could not. Moreover, the epistemic opacity and black boxes associated with these methodologies seem to undermine the capacity of human beings to play a direct role in scientific research. In this paper, I will show that neither of these views is substantial. First, ML is not a tool to generate knowledge but a tool to identify instances of knowledge already present and codified. ML’s aim is to generate reliable predictions, but these are not anticipations of new phenomena likely to create new concepts that could go beyond the current theoretical background. Predictions concern what would happen in situations similar to those analyzed by ML in its training sets. The fact that they are unanticipated is relative to human computational abilities, not to a theoretical background. Next, despite opacity and black boxes, ML is not independent of human beings and cannot form the basis of an automated science. I will show a number of concrete cases where, at each level of computational analysis, human beings have to make decisions that computers cannot possibly make. These decisions have to be made by appealing to contextual factors, background and tacit knowledge, and the researcher’s experience. I will frame this observation in the idea that computer scientists conceive their work as a case of Aristotle’s poiesis perfected by techne, which can be reduced to a number of straightforward rules and technical knowledge. However, my observations reveal that there is more to ML than just poiesis and techne, and that the work of ML practitioners in biology also requires the cultivation of phronesis, which cannot be automated.

Christopher TOLLEFSEN — What is ‘good science’?

This paper approaches the question concerning the “human character” of science from the standpoint of ethics rather than method. Recent controversies over particular scientific endeavors have led some commentators to assert the impropriety of imposing moral, political, or religious limits on scientific inquiry. In such claims, I suggest, we also have an attempt to minimize the “human character” of science by divorcing “good science” from the human domain of the ethical. I want here to look more closely at the relationship between science – “good science” – and morality. This relationship exists, I shall argue, on at least three axes, which I shall call the external ethics of science, the social ethics of science, and the internal ethics of science. Of these, only the first – the external ethics of science – suggests that good science may not necessarily be morally good. If good science has the characteristics I describe in this paper, then the project of a science “populated and exercised by machines” is chimerical. Were science merely a technical pursuit, existing entirely within the realm of instrumental rationality, then perhaps there could be a science carried out by machines. But even the technical aspects of science must be governed externally by human moral norms. And if my accounts of the social and internal ethics of science are correct, then science is not, and cannot be, a merely technical activity governed solely by considerations of instrumental rationality. It is, rather, fully human action, to be governed by sound practical reasoning, both individually and communally; so understood, its “human character” is essential to its continued existence as a professionalized set of practices.

Eric WINSBERG — Can models have skill?

Climate scientists and climate modelers often speak of determining whether or not a model has “skill.” This seems to imply that climate models themselves are epistemic experts. In this paper, I take a careful look at how model skill is evaluated in climate science, with an eye to determining the extent to which we ought to view the idea of model skill as a step toward post-human science. I begin by considering the paradigm of verification and validation, which comes from the world of engineering. Though climate scientists sometimes speak of verifying and validating their models, I argue that this paradigm is unsuitable for climate science. I then consider the question of whether or not there are general norms with which climate model skill can be established. William Goodwin has criticized my earlier work on the grounds that it “makes it unlikely that the legitimacy of [computer models] can be reconstructed, or rationalized, in terms of generally recognized, ahistorical evidential norms.” (Goodwin 2015, p. 342) Nevertheless, I double down on my earlier claims in this paper. I concede that “a philosopher who hopes to address the epistemological concerns that have been raised about the reliability of climate models isn’t going to be able to tell a normatively grounded story that will secure the unassailable reliability of the results of climate modeling.” (p. 342) I argue that when we look at the work of those who are in the business of modeling highly complex non-linear systems, the best we are ever going to be able to do is to arrive at a situation where “a simulation modeler could explain to his peers why it was legitimate and rational to use a certain approximation technique to solve a particular problem” by appealing to “very context specific reasons and particular features.” (p. 344) If this is right, it suggests that the prospects of science failing to “remain human” continue to be bleak.

Principal Inquiries

  • What are the current main lines of penetration of technology into bio-medicine? 
  • What new knowledge, control, and care potentials are being opened by technology?
  • How is scientific practice changing along with the pervasive introduction of technology deep within the discovery process of the bio-medical sciences?
  • What new cautionary principles, and what models of trust and reliability, are emerging?
  • Are the bio-medical sciences becoming more participatory thanks to Information and Communication Technologies?
  • What are the implications for science training? What personal and public features should characterize the scientist of the future?
  • What are the implications for science communication (also in the clinical context) and what are the main myths and realities about bio-medical technologies?
  • What is the role for bio-medical humanities, and how can it be affirmed and developed?