Technology and the Good Society

Barcelona, Spain | February 4-6, 2016

This Experts Meeting will examine the role that modern technological advances play in shaping society, for better or worse. Philosophers, sociologists, and media culture scholars from Europe and America will also consider what constitutes a “good society.”

Technology is a pervasive feature of our everyday lives. Since the Industrial Revolution, modern technological advancements have reconfigured the ways people live, communicate and interact, and have engendered changing ideals of the good, flourishing society. In recent years, new technologies in the information, communication, medical, industrial, and other sectors have further impacted everyday life. To be sure, these technologies have had positive and negative repercussions on the social, political, public health, and environmental levels, and have given rise to heated debates opposing the so-called “technophobes” and “technophiles.” Since technologies bring with them their own promises and desires, but also their uncertainties and risks, it is necessary to critically appraise the extent to which contemporary technological advancements help shape or hinder the prospect of a Good Society.

It should be noted that the design, commercial implementation, and public use of technology are not value free. Since most technologies are produced by private and public corporations, whose pursuit of economic profit goes hand in hand with specific social and environmental applications, the question arises of how to evaluate the values inherent in the production and implementation of technology, and what impact those intertwined values have on the constitution of a good society. This question prompts critical reflection about the implications of, for instance, biotechnology advancements for improving health and food, current and future developments in nanotechnology for new diagnostic and therapeutic techniques, biometric technologies for law enforcement and immigration control, robotics advancements for the care of elderly people, military technology advancements for national security and protection, and developments in information and communication technologies for social connection and sharing (but also for massive surveillance and online security). Though each of these examples is fraught with unique social and ethical challenges, what they share is a capacity to generate risks and benefits that affect the flourishing of a society in which all citizens should, in principle, enjoy some minimal threshold of material well-being, satisfaction of their psychosocial needs, respect for their fundamental rights as persons and citizens, and access to primary goods.

Over the last decade, the impact of technology on society has been a concern for many scholars and decision makers. Previous research has investigated significant relationships between technology and psychological well-being (Amichai-Hamburger, 2011), technology and the Good Life (Briggle, Brey & Spence, 2012) as well as technology and capabilities (Oosterlaken & van den Hoven, 2012). Although this research has provided relevant insights into the ways different technologies impinge upon individuals’ and groups’ quality of life and well-being, the wider and no less important issue of the relation between technology and the Good Society is often obliquely addressed and, as such, remains unexplored from a systematic, critical perspective. This Experts Meeting will directly tackle this issue by examining different approaches to the good society, by evaluating the social and moral impact of current and upcoming technologies, and by providing informed insight into the ways technology fosters or thwarts the thriving of a good society.

Among the goals of this Experts Meeting are:

  • To bring together and facilitate the interaction of different levels of expertise and disciplinary perspectives in order to assess legal, social and political implications of specific technologies for the prospect of a good society.
  • To provide a critical state-of-the-art account of how technology production and use affect social and moral values suitable for a good society.
  • To gain insight into recent developments in several technological fields and to provide policy recommendations on ethical, legal, social and cultural concerns in the usage of those technologies.

Principal Inquiries

Within the context of this Meeting, we want particularly to focus on the following questions:

  1. Technology and the Good Society. What theoretical and empirical approaches are best suited to study the Good Society in a technological culture? What are the social and ethical implications of technology for the Good Society? What kinds of technology impact specific dimensions of the good society? What is the role of cultural models in the constitution of a good society in a technological age?
  2. Designing for the Good. To what extent does technology design impinge upon the project of a Good Society? What values in technology design are best suited for the good society? What design procedures are worth promoting for different conceptions of the Good Society?
  3. Promoting the Good. In which ways can technology shape and foster—or obstruct and thwart—values such as equality, solidarity, security, social inclusion and empowerment? How much technological risk should we accept in order to improve the lives of citizens? Is a strongly technologized society a good one?
  4. Technology and Social Responsibility. How can social responsibility be normatively framed in a technological culture? What models of public participation are suitable for evaluations of risky technologies and attributions of social responsibility? What social divides are created by the implementation and use of different technologies? What conflicts of interest between stakeholders stem from specific technology implementations?
  5. Technology and Policies. To what extent does technology influence policy making processes and how, conversely, social, political and cultural circumstances contribute to the development and implementation of technology? What technology policies are best suited for different conceptions of the Good Society?

Academic Leader

Luciano Floridi – University of Oxford

Speakers

Philip Brey – University of Twente

Johnny Soraker – University of Twente

Mark Coeckelbergh – De Montfort University

Anne Kerr – University of Leeds

Diane Michelfelder – Macalester College

Brian D. Earp – Uehiro Centre for Practical Ethics

Rachel Prentice – Cornell University

Discussants

Omar V. Rosas – University of Navarra 

Charo Sádaba – University of Navarra

Ana Marta González – University of Navarra

Paper Abstracts

Philip Brey – University of Twente
The Strategic Role of Technology in a Good Society

The aim of this paper is to investigate the proper role of technology in a society that approaches the ideal of a good society, and to define criteria for the assessment of technologies for their contribution to the quality of society.  Technology has become integral to the fabric of society, and helps shape its quality.  To assess whether and how technologies actually contribute to the quality of society, an overall, integrated framework for such assessments is needed.  Such a framework would do three things:  (1) define general criteria for the goodness or quality of society; (2) analyze how technologies, and different designs and uses of them, either promote or detract from the goodness of society according to these criteria; (3) help propose particular ways of designing and using technology that better support the overall quality of society.  Such a framework is currently lacking, and it is the aim of this paper to propose one in the context of a discussion of various previously proposed criteria for a good society.

In the first step towards the framework, a set of criteria for evaluating the goodness of society will be proposed.  These will include a well-being criterion, a criterion of protection of individual and collective rights and interests, a justice criterion, a sustainability criterion, and an aspirational criterion.  In the second step, it will be analyzed how technological products and systems can positively and negatively affect the realization of these criteria.  This will be done through an analysis of proper functions, social roles and side-effects of technologies.  In a final step, it will be argued that value-sensitive design methodology can be used to arrive at designs and usage scripts that support the overall quality of society.

Johnny Soraker – University of Twente
A Right to be Remembered? Meaningfulness and Permanence in Virtual Environments

The purpose of this paper is to outline a comprehensive framework that integrates philosophy of the good life, ethics of technology, and positive psychology, ultimately resulting in a theory that seeks to balance their respective strengths and shortcomings. The end result is a framework that adds philosophical justification to the incorporation of empirical research in questions concerning the good life, adds empirical support and substantive recommendations (“thickness”) to the philosophical theory of the good life, and – most importantly – provides tools and guidelines that allow engineers to better assess and anticipate how their design choices may affect the well-being of their users.

At bottom lies a philosophy of the good life entitled Confidence-Adjusted Intrinsic Attitudinal Hedonism (CAIAH), a theory of the good life that follows Fred Feldman in conceiving hedonism in terms of “taking pleasure in” (attitudinal) rather than “being pleased by” (experiential), but departs from Feldman in that it is the confidence we have in our pleasures that determines their conduciveness to well-being. The underlying theory has normative implications of its own, but also justifies a predominantly intersubjective notion of well-being which, in turn, justifies the use of empirical research to inform questions concerning the good life.

By empirical research, I primarily refer to ‘positive psychology’ and related disciplines, which not only add substantive recommendations to the theory of the good life, but also add what I will argue is a much needed positive perspective to ethics of technology. Ethics of technology, and related disciplines, have for several years been hard at work providing tools, theories and methods for assessing how a concrete technology may pose a threat to core values such as justice, equality, freedom, autonomy, security, privacy, tolerance, trust, equity and diversity. This means that the ethics of technology discourse is permeated by negative criticism and often ends up warning against the development or use of particular types of technology. As important as this is, there is also a need for a discourse that allows for positive criticism that can end up recommending the development or use of particular types of technology – which is what positive psychology can provide.

Still, we cannot draw on empirical research alone in pursuit of a quick fix of happiness, which again shows the importance of philosophy and ethics in this framework. Given the methodological challenges, the somewhat controversial practice of operationalizing well-being, and the danger of unintended side-effects, we need to consider this empirical research as only part of the puzzle. The application of empirical research needs to be tempered by the philosophical theory of the good life – which also allows for moral, cultural and political values to determine what it means to live a good life. Furthermore, it needs to be tempered by ethics of technology, which emphasizes issues like unintended side-effects, dual use and the broader societal impact. The ultimate purpose of this paper is to show how the three perspectives – philosophical theories of the good life, ethics of technology, and positive psychology – can complement each other and jointly form a new framework – entitled Prudential-Empirical Ethics of Technology (PEET) – for responsibly assessing, anticipating, and designing for user well-being.

Anne Kerr – University of Leeds
Being Human in the Era of Precision Medicine

Reflecting on sociological writings on morality, vulnerabilities and life strategies in the era of late modernity, this paper will consider how stratified tests and treatments for diseases such as cancer - known as precision medicine - are changing what it means to be a patient and possibly also changing what it means to be a human being. Rather than engaging with this as a question of ‘becoming post-human,’ I will explore how growing levels of participation in trials, studies or programmes of tailored or personalised monitoring and treatments are experienced by the human subjects involved, focusing on how the sociotechnical assemblages in which they participate mitigate but also moderate and magnify private and public vulnerabilities, demanding new life strategies and sociotechnical support for people living with disease longer-term. Drawing on preliminary research conducted as part of a Wellcome Trust Senior Investigator Award in Society and Ethics (2015-2020), I will focus on how affected individuals, practitioners, and policy makers cope with pre-symptomatic and post-treatment surveillance and new treatments, including the data this generates. Reflecting on Bauman’s analysis of morality and mortality (1992, 2003), I consider how the efforts of precision medicine to produce ‘soluble problems’ of disease can introduce anxieties and unease for individuals and society, investigating how this manifests in both public renditions of survivorship and the more subterranean marketization of person-centred data. I will end by reflecting on how ‘thinking with vulnerabilities’ gives new insights into the challenges and possibilities of new biomedical technologies.

Diane Michelfelder – Macalester College
Promoting a Good Society in an Age of Post-normal Engineering: Some Beta-thoughts for our Technological Times

In the Fall of 2015, a number of American automobile manufacturers signaled their reluctance to raise their research budgets in order to take on more responsibility for making vehicular software less vulnerable to being hacked, arguing that consumers should bear this risk just as they presently bear the risks of having the tires of their cars slashed when parked outside. Such a signal could be interpreted as an invitation to begin a negotiation with the public about risk. The accelerating development of smart devices in general and of the Internet of Things in particular opens up a scenario in which there are many unknown and unpredictable outcomes and where there is a great deal at stake. High stakes, unknown and unpredictable outcomes, as well as the turn toward negotiating risk are all features of what could be called, to alter slightly the term popularized by Silvio Funtowicz and Jerome Ravetz, post-normal engineering. The technosphere of today, and that of the foreseeable future, is one increasingly dominated by post-normal engineering.

It is against this background that I am interested in considering the question of how best to promote well-being in society through the design of public, shared technologies and technological systems. In parallel with the recent “environmental virtue” turn in environmental ethics, this paper will take a “technological virtue” approach to the question just mentioned. But it will stop short of making this parallel into an analogy, which would maintain that technological virtues need to be thought of as being akin to environmental virtues. More specifically, this paper will not argue that technological design can best promote well-being in society by fostering the development of non-relational virtues, such as Dale Jamieson’s “green virtues” of humility, temperance, and mindfulness. As a result, it will also depart from the technological virtue approach of a philosopher such as Michel Puech, in which non-relational virtues are likewise emphasized. Since in a world of post-normal engineering individual artifacts are connected to broader systems, the lines between private, individual domains of consumption and the institutions lying outside of these domains become blurred. A technological virtue approach to fostering well-being in society needs to be attentive to this state of affairs.

What, then, would some of the virtues that could be promoted by, for example, the design of “smart cities” or “smart transportation systems” look like? I explore the idea that such virtues would be ones that work for multiple “platforms” or theories of well-being: objective-list theories, eudaimonia-oriented approaches, desire-satisfaction views, and the like. More specifically, they would be ones that preserve and expand the capacity of those within society to “take stock of their lives” or, in other words, engage in critical self-reflection about their own well-being. Such virtues could be called virtues of abundance. It is within the wide swath of the social that we run up against other people and other ideas that can challenge who we are, spark our imagination and our creativity, and move us toward such critical engagement. This implies that designing for a good society should aim primarily at the cultivation of spontaneity, pleasure, and “rough ground” (to borrow Wittgenstein’s phrase) rather than at maximizing personalization and efficiencies and minimizing risk.

Brian D. Earp – Uehiro Centre for Practical Ethics
Love Drugs: Why Scientists Should Study the Effects of Pharmaceuticals on Human (Romantic) Relationships

Over a series of recent papers, Julian Savulescu, Anders Sandberg, and I (with the help of various co-authors) have argued that the use of "love drugs" and "anti-love drugs"--future technologies that would either enhance or diminish love, respectively, in the context of romantic relationships--could be justified in some cases, or even desirable. We have also argued that scientists should pursue research into these drugs, under the guidance of clear ethical thinking and appropriate regulation, since they could be used to benefit society. However, some critics have suggested that such drugs would be ripe for abuse, or would otherwise have negative consequences, such that we should not pursue such a research program. In this paper, I will argue that many chemical substances currently in use have serious implications for romantic relationships--only we are largely ignorant about what these implications are, since we tend to study the effects of drugs on individuals and their symptoms, rather than on relationships. I suggest that we should strive to reduce this ignorance, so that current (and future) drug technologies could be used in such a way as to help, rather than harm, our ability to meet our romantic commitments. I conclude with a discussion of the implications of such technologies for society at large.

Rachel Prentice – Cornell University
Ethics, Responsibility, Agency and the Techno-Bureaucratic Turn in Biomedicine

Since the 1980s, many practitioners in biomedicine have approached medical errors as systems failures that are amenable to technological solutions and bureaucratic controls. These systems approaches were pioneered by anesthesiologists seeking to reduce potentially fatal anesthesia errors, but went “global” around the turn of the millennium with widely influential reports on errors in medicine, such as the US Institute of Medicine’s 2000 report, To Err Is Human. Solutions to unintended human errors in medicine range from technological fixes, such as re-engineering anesthesia lines, to bureaucratic controls, such as checklists that mandate specific practices from hand-washing to triple-checking the correct surgical site. Proponents have argued that these approaches, when properly implemented, can dramatically reduce the unintended harms to patients that take place when fallible human practitioners meet complex systems. Yet few have acknowledged that these approaches represent a deep shift in biomedical culture away from an ethics that locates skill and responsibility in individual practitioners toward an ethics (if the word can still be said to apply) of external controls that emphasizes properly designed, implemented, managed, and audited systems. Proponents of the systems approach argue that highly trained and responsible individuals remain important components of the larger system. This paper raises more questions than answers as it seeks to explore the relationship of on-the-ground ethical practice in clinical biomedicine to the controls implemented with techno-bureaucratic solutions. In particular, it examines what happens to ethics, responsibility, and agency when practices become techno-bureaucratic.

Mark Coeckelbergh - University of Vienna
Social ontology, political principles, and responsibility for technology

How can we best theorize technology and the good society? This paper responds to this issue by showing how our assumptions about the meaning of the social and the political influence our evaluations of the impact of new technologies on society, and how, conversely, new technologies also shape the concepts we use to evaluate them. In the course of the argument, the paper also recommends that philosophers of technology use the resources of political philosophy to tackle the challenge of understanding and evaluating technology and society.

The paper shows that, on the one hand, evaluations of the impact of technology on society could benefit from more fundamental and critical reflection on their often unexamined (descriptive) social-ontological and (normative) political-ideological assumptions. These need to be made explicit and discussed; philosophers of technology can learn here from social philosophy, sociology, anthropology, STS, and political philosophy. For example, whether we start from the assumption that society is the sum of individuals, or from a more relational, communal, or organic view of the social, will influence our view of responsibility for technology and will lead to different views of responsible research and innovation (e.g. individual consent versus participation and communal innovation). And our evaluation of technology’s impact on society will differ if we define the social in strictly human terms or if we include technology and materiality in the social, as influential approaches in STS and anthropology do. The choice we make here will have implications for our thinking about, for example, technology and responsibility (e.g. targeting individual humans or evaluating the entire socio-technical system). Moreover, it would be very helpful to use more resources from political philosophy to deal with the problem of technology and the good society. For instance, there is excellent thinking about justice, equality, and other political principles which, unfortunately, is not often used to discuss the societal impact of technology. Consider, for instance, how thinking about the ethics of human enhancement is often limited to ethical theory (an exception is Coeckelbergh 2013). But here too it is important to reflect on the fundamental assumptions made, including the very term political. For instance, since Aristotle the political has been defined in human terms, even, as Arendt shows, in explicit opposition to materiality and necessity. But other conceptualizations of the political are possible.
The precise conceptualization we choose will have implications for how we deal with technology in society; for example, whether we see technology as politically neutral or as political will influence how we deal with technological risk.

On the other hand, it is argued that our (conceptualizations of) political-philosophical principles and our thinking about the social are themselves not entirely technology-independent and up to choice, but are influenced by human-technological experience and practices. For instance, our experience with the internet may give rise to specific conceptualizations of the social and the political (e.g. in terms of networks or global citizenship), which in turn shape particular uses of the technology, and social robotics may suggest particular conceptualizations of the social and of society (e.g. a behaviourist view of the social, a sexist view of human relations, and a consumerist view of care and society), which in turn may shape particular practices, e.g. in health care. Furthermore, the very meaning of ethical and political principles such as privacy and security may be redefined by electronic information and communication technologies. Hence, when we make assumptions about society and politics in evaluating technologies, we need to be aware that the object of our reflection also influences our thinking about it; in this epistemology the object shapes the subject. Again, this has implications for our evaluations of what technology does to society. It means, in particular, that such evaluations should not be reduced to “applied ethics” or applied political philosophy, if that means that we have a fixed and permanent set of principles which are applied to the fluid, impermanent world of technology and society. For the purpose of evaluation we can use principles, but they are only conceptual tools, and tools – as we know – change. For example, what we mean by privacy may be different in an age when we are continuously connected via social media. Hence an evaluation of technology in terms of privacy needs to be aware of the instability of this concept and reflect on it.