Written in clear, accessible prose, the Fourth edition of Computer Ethics brings together philosophy, law, and technology. The text provides an in-depth exploration and analysis of a broad range of topics regarding the ethical implications of widespread use of computer technology. The approach is normative while also exposing the student to alternative ethical stances.
"synopsis" may belong to another edition of this title.
Written in clear, accessible prose, this text brings together philosophy, law, and technology to provide a rigorous, in-depth exploration and analysis of a broad range of topics regarding the ethical implications of widespread use of computer technology.From the Inside Flap:
With the publication of the third edition of Computer Ethics, I am reminded of the day in 1984 when I received the page-proofs of the first edition. I had just returned home from the hospital after having given birth to my daughter. I had composed the book on an Osborne computer using a word processor—I think it was called WordStar—that has been obsolete for more than 10 years now. Today my daughter, now a teenager, is more comfortable with computers than I am. She spends a good deal of her day sitting in front of a computer screen chatting with friends, doing schoolwork, and exploring the Web. I composed this edition of the book on a laptop computer using a version of MS Word that automatically corrected my misspellings and grammar. And, of course, in writing this edition of the book, I frequently went to the Web to look for resources and check references. While I continue to be cautious in making grand pronouncements about the significance of these technological changes for the quality and character of human lives, the changes that have taken place in these 16 years are awe-inspiring.
As I began writing this edition, it was strikingly clear that my primary task was to address the technological changes that have occurred since the second edition, especially the growth and penetration of the Internet into so many domains of life. What are we to make of Web sites, cookies, data mining tools, customized online services, and e-commerce? I have addressed many of these new issues while at the same time holding on to what I continue to believe are the core issues in computer ethics: professional ethics, privacy, property, accountability, and social implications and values. Indeed, you will see that in Chapter 1, I continue to struggle with the question at the heart of the field: What is computer ethics? Are the ethical issues surrounding computers unique? What is the connection between ethics and technology?
Contemplating the connection between technology and ethics raises an interesting and important question: Does the field of computer ethics simply follow the development of computer technology? Should computer ethicists simply react to technological developments? Wouldn't it be better if the sequence were reversed so that technological development followed ethics? Historically, the field of computer ethics has been reactive to the technology. As I explain in Chapter 1, new technological developments create new possibilities and the new possibilities need to be evaluated. As in the last edition, I build on the idea in Jim Moor's seminal piece "What Is Computer Ethics?" (1985) that new technologies create policy vacuums. The task of computer ethics, he argues, is to fill these policy vacuums. In a sense, the ethical issues are the policy vacuums, and policy vacuums are created when there is a new development or use of computer technology.
On the other hand, I want to suggest that it would be better if at least some of the movement were in the other direction—technology following ethics. Suppose, that is, we lived in a world where ethicists (or anyone, for that matter) identified potentially unethical situations or arrangements or ethically better possibilities, and engineers and computer scientists went to work designing technologies to change or remedy or improve the situation. I can think of a few examples when this has occurred, but only a few. Arguably, privacy-enhancing technologies and anonymous re-mailers are cases in point. Perhaps freeware and shareware are also examples. For the most part, however, the ethical issues have followed, rather than led, the technology. Here in very broad brushstrokes is my understanding of the evolution of the field of computer ethics, especially in the United States.

HISTORICAL OVERVIEW
In the decades immediately following World War II, ethical concerns were raised about computers, though these concerns were only vaguely expressed and articulated. One of the most salient concerns was that computers threatened our notion of what it means to be human because computers could do the very thing that was considered unique to humans: rational thinking. There was much discussion of artificial intelligence. There was some fear (and fascination with the idea) that computers might take over decision making from humans. I am thinking here of the movie 2001, but the theme also ran through science fiction literature, for example, in Isaac Asimov's short stories. Somewhat later, Jim Moor picked up on this theme and wrote an analytical article, "Are There Decisions That Computers Should Never Make?" (1979).
It could be argued that those very early concerns about computers were not exactly ethical in character. For example, no one explicitly argued that it was immoral to go forward with the development of computers because of the threat to our concept of human beings. And the science fiction literature did not suggest that it was immoral to turn over decision-making power to computers. Rather, the implicit argument seemed to be that there would be terrible consequences—possible catastrophes and degradation of human life—were decision making to be turned over to computers.
These concerns did not arise from any actual effect of using computers; they arose from the mere idea of computers. The very idea of a technology that could think, or do something very close to it, was threatening to our understanding of what it means to be human.
Ironically, it could be argued that this idea, the idea that computers do what humans do, has turned out to be rich in its influence on human thinking about thinking, rather than a threat. The model of human thought that computers provide has spawned the thriving new field of cognitive science and changed a number of related disciplines. (See for example, Bynum and Moor, 1999.)
In the late 1970s, the ethical issues began to be more clearly articulated in the works of Joseph Weizenbaum (1976) and Abbe Mowshowitz (1976), and it was in this period that the Privacy Protection Commission did a major study of privacy. The issues that took shape in this period had to do with the threat of big government and large-scale organizations, the related threat to privacy, and concern about the dominance of instrumental rationality. In hindsight, the concern about big government and privacy followed the technology in that, in those early days, computers were being used extensively to create and maintain huge databases of many kinds, especially databases of personal information. Computers were also being used for large numerical calculations. The large-scale calculations were primarily (though not exclusively) for government activities such as weapons development, space travel, and the U.S. census.
The next major technological shift was the development of small computers (microcomputers and personal computers). Attention turned, for a time at least, to the democratizing aspects of computers. Quietly, at the same time, remote access had come on the scene, first as remote access to large mainframes, later as a web of telecommunications connections between small computers.
Attention turned to software and the ethical issues surrounding it. The development and spread of microcomputers brought computer technology visibly and powerfully into the consumer marketplace. Software was recognized as something with enormous market value, and hence, all the ethical issues having to do with property arose. Should software be owned? If so, how? Would current intellectual property law provide adequate protection? Along with property rights issues came issues of liability and responsibility. In the marketplace, if consumers buy and use computers and software, they want to be able to rely on these tools and when something goes wrong, they want to know who to blame or they want to be compensated for their losses.
During this period, the market in computer games took off and it was also during this period that more attention began to focus on hackers. On the one hand, hackers were responding to the commercialization of computing. They did not like the idea of property rights in software. At the same time, those who were acquiring property rights and/or making a business of computing saw the threat posed by hackers, a threat to property rights and to system security.
In the 1990s, attention turned to the Internet. The coming together of computers, telecommunications, and media was the next major development in the technology. The development and expanded use of the Internet brought a seemingly endless set of ethical issues as the Internet came to be used in so many different ways in so many different domains of life. In effect, we are now in a process of transferring and re-creating much of the world into this new medium. At the same time, the Internet also raised all the concerns of the past. Privacy issues are exacerbated on the Internet; the democracy issue came back into play with new claims about the Internet's democratic character; property rights expanded to Web sites and global property rights became ever more important; and so on.
One other technological development that grew slowly during the 1980s and 1990s was the use of computer technology for a wide variety of visualization activities—not just computer graphics and gaming, but simulation activities including medical imaging and scientific modeling. This development expanded into the idea of virtual reality, an idea that has captivated many. Very quietly and slowly, ethical concerns have been raised about this thrust of computer technology. Unfortunately, I have been able to give only cursory attention to virtual reality issues.
In summary, during the 1960s and 1970s the dominant uses of the technology were for database creation and large-scale calculations. These uses of the technology brought correlated expressions of concern about centralization of power and big government, and threats to personal privacy. During this time, the very idea of computers seemed to threaten the idea of what it means to be human. During the 1980s, microcomputers were developed and made readily available. Remote access to large mainframe computers also became possible. Quietly, the system of telecommunication lines linking computers, that later became the Internet, was expanding and being made available beyond the "inner circle" of developers. Also, the computer/video game industry began to take off. With these developments came correlative concerns about property rights, liability issues, and the threat posed by hackers. In the 1990s, the coming together of telecommunications and computers reached a pinnacle of development and the Internet and the World Wide Web (Web) became widely available. These technological developments are still being assimilated, but they gave rise to a seemingly endless array of ethical issues as well as exacerbating those that were already there.
This is a story of computer ethical issues following technological developments. The question remains whether this pattern is as it should be. As I suggested before, reversing the order would seem to have some advantages, though scholars in the field of computer ethics do not seem to recognize the possibility of leading rather than following the technology. A central focus on the topic of design of computer technology would go a long way toward reversing this pattern. If the designers of technology were to think about the ethical and social implications of their designs before they became a reality, wouldn't the world be a better place!

CHANGES IN THE THIRD EDITION
Readers who are familiar with earlier editions of Computer Ethics will note that in this edition I have added two chapters specifically focused on the Internet, Chapter 4 "Ethics Online" and Chapter 8 "Social Implications and Social Values." The addition of this new material led to other changes in the organization of the book. First, instead of having a separate chapter on crime, abuse, and hacker ethics, I have situated the discussion of hackers and hacker ethics in the first chapter on the Internet, Chapter 4. This placement recognizes that hacking is a phenomenon made possible by the combination of computers and telecommunications lines that we now call the Internet. In 1994 when the second edition was published, the Internet had already been created, but it was far from clear that it would become what it has. Second, instead of having one chapter on the social implications of computer and information technology and another on the social implications of the Internet, I have combined material on the social implications of computer technology from the second edition with new material on the Internet. While I discuss both, the primary focus of Chapter 8 is on the social implications of the Internet and especially its social implications for democracy. I found this approach useful for focusing discussion of the relationships between technology and social change and between values and technology.
As with previous editions, there are many possible paths a reader might take through the book. The topics from chapter to chapter are interconnected, but each chapter has been written to stand essentially alone. When used as a textbook, the path students take through the book should be determined by the type of students being taught, the length of the course, and other books and materials being used in the course. For example, when teaching a class of computer science majors, it is important that the chapter on professional ethics be read early on. This sets students up to think of the issues as part of their professional responsibility. When teaching nonmajors, this chapter can comfortably be read at the end of a course, and can be presented as a way of thinking about how some of the issues discussed in the book might be addressed—control who becomes a computer professional and give computer professionals more responsibility for the effects of their work.
As in the previous editions, I have started each chapter with a set of short scenarios. The scenarios are intended to entice the reader into the topic, to implicitly make the case for the importance of the topic, and to make the topic concrete for those who are impatient with theory. The cases also provide the content for teaching skills in ethical analysis. As before, I have provided study questions and suggested further reading at the end of each chapter.

OVERVIEW
Chapter 1: Introduction: Why Computer Ethics?
In Chapter 1, I make the case for the importance of computer ethics and explore why computer and information technology raises ethical questions when many other technologies do not. Building on Moor's idea that the task of computer ethics is to fill policy vacuums, I describe generally how computer and information technology gives rise to ethical issues. I push further, addressing how these issues can be resolved, and explore the traditionalist account, which holds that we can extend ordinary moral principles to situations created by computer technology. This discussion prepares the way for asking in what ways computer ethical issues are unique and in what ways they are not. As in the last edition, I argue that it is useful to think of the ethical issues surrounding computer and information technology as ...
"About this title" may belong to another edition of this title.