Student Pugwash Regional Conference
March 10, 1996
I have been asked to discuss my views of the ethical dilemmas facing computer professionals that I see on the horizon. There are, unfortunately, more than can be easily enumerated in the short period of time I have. However, I will focus on what I feel are the two main social concerns that need to be addressed by the computer industry as a whole, and by individual practitioners and scientists within the field.
Let me start by stating my position on the role of the computer professional in society. I contend that it is important for us to actively participate in the analysis and debate of the social impact of the technology we help to develop, for several reasons. First, we clearly have the technical expertise required to understand the internal mechanisms by which the technology functions. Without this technical skill, it is extremely difficult to identify potential social impacts of the technology. I liken the involvement of computer professionals in this area to biologists and chemists who are involved in environmentalism.
Second, many of the social problems created by technology could either have been avoided by designing a system differently or by the development of a complementary piece of technology. For instance, the design of the control room was in part to blame for the nuclear incident at Three Mile Island in 1979. Software that allows parents to place controls on what their children have access to via the Internet is an effective means of dealing with the problem of children and pornography on the net. More care must be taken when an application is initially designed in order to minimize its potential for causing social harm.
Finally, it is clear that there are many more potential applications for computer professionals to develop than there are people to work on them. This is especially true in the area of research. As Ben Shneiderman, Director of the Human-Computer Interaction Laboratory at the University of Maryland, is fond of pointing out, why not attack research problems that are more than just technical puzzles to be solved; why not look for those problems that will also serve a social good? His favorite example is the instant availability of medical records, so that no matter what medical facility you visit, your complete medical history, including x-rays, EKG charts, etc., is available to the health care provider. A secondary area of socially relevant research that this creates is the need for privacy protection of such records.
I do have to be honest and say that, while the views I express here today are shared by an ever increasing number of computer scientists and professionals, they are in no way universally accepted within our field. Within my own department, five of my colleagues firmly believe that computer scientists have no part whatsoever in dealing with the social impact of the technology, either potential or demonstrably real. The sixth member of our seven-person department is uncommitted, I strongly suspect because he is untenured and clearly risks his position by siding with me in such matters. This fear of retaliation for following a social agenda is not misplaced. There are many examples within my department, as well as many others, where such politics have entered into employment decisions. Be aware as future practitioners within this field that having a social conscience is not always appreciated.
As should be obvious to anyone who hasn't been living on Pluto for the last year or so, there has been an explosion of media interest in the so-called "Information Superhighway." Numerous cover stories in magazines such as Time and Newsweek have regaled us with both the utopian prospects for the future of this technology and its Orwellian potential. While the truth undoubtedly falls somewhere in between, the spectrum from one extreme to the other is overwhelmingly vast. Just where the future ultimately falls within this spectrum will be determined by a combination of factors: luck, technological advances, individual willingness to accept responsibility for the social impact of one's technical work, and the greed of individuals and groups.
There are two main issues I see affecting where the future ultimately rests within this spectrum. The first is privacy. The second is accessibility.
I won't bore you with a discussion of all the possible definitions of the concept of privacy (such as "control over one's personal information" or Louis Brandeis's famous "the right to be let alone"), but we do need a common reference point or we will end up at cross-purposes. First, it must be understood that privacy is not an explicit right given to us by the Constitution. However, many court cases have shown how, for instance, the Fifth Amendment's protection against self-incrimination can be read as an implicit right to certain kinds of privacy. Even the famous Roe v. Wade decision of 1973, which resulted in the legalization of abortion in this country, was argued from a privacy point of view.
In the US, however, there is still no overarching piece of legislation that spells out explicitly the boundaries of this assumed right of privacy. A number of laws have been enacted since about 1970, each taking up one specific area of privacy. The Privacy Act of 1974, for instance, deals with the record-keeping regulations of federal agencies. The Video Privacy Protection Act of 1988 protects the rental records of individual borrowers held by video rental stores. As a consequence of this piecemeal approach to providing privacy protection, most people are either unaware of or confused about their rights to privacy, even though most assume that they do indeed have such rights. According to a 1994 Harris poll, 84% of Americans are concerned about their privacy, and 78% feel that they have lost control of their personal information.
The Internet is perhaps an easy target as a strawman for our concerns about privacy. More people have more access to more information than ever before, and it is easy to visualize that much of this information will be personal in nature. Consider movements to place public government records on-line for easy access. While information such as real estate records are already publicly available, the difficulty of accessing these records - going to the office that holds them, filling out appropriate forms, perhaps waiting significant periods of time for a response - has given most people at least a sense of privacy where such information is concerned. Once these records go on-line, however, everyone will be able to access them instantaneously. Is this what we want?
What happens when government agencies such as welfare and social work go on-line? Who will have access to this information? How will dissemination be controlled? What cross-agency access will be available? There are indications that many government agencies are looking toward pooling their information databases, both as a cost-cutting measure and in order to provide better service (so-called "one-stop shopping"). What does this mean in terms of increasing security risks for unauthorized access to information? What does this mean in terms of the government's ability to create a unified dossier on citizens, something it has up to now been prohibited from doing by laws such as the Privacy Act of 1974 and the Computer Matching and Privacy Protection Act of 1988?
What types of personal information might we find on-line in the future? Consider just some of the following:
Some of this information clearly fits the definition of "public." Other items can clearly be classified as "private." Some comes from government agencies, some from the private sector. The source of the information does in some ways help differentiate what should be considered public versus private, but this method is not entirely accurate - consider tax records. In addition, there are questions about whether some of the information that has until now been called "public" should be made easily accessible.
One problem with providing computer access to sensitive information is that there are currently technical problems with protecting such information from unauthorized access. Simple logon identifiers and passwords offer minimal security. Such devices will discourage the casual voyeur, but will not keep determined hackers from gaining access to such systems (as has been demonstrated countless times already).
In addition, once even authorized access is gained, the data must still be transmitted over what should be thought of as analogous to a party telephone line. There are countless opportunities for this information to be intercepted - consider that, on the Internet, such records may pass through dozens of computer systems between source and destination. While data encryption schemes appear to offer the best protection for transmitted records, there is an odd quirk that prevents software companies from using the most powerful encryption technologies in their products.
Encryption technology is currently regulated by the federal government as a munition. This means that a company must obtain a federal munitions license in order to make their encryption technology available to anyone outside the country, and such licenses are not often granted because our government does not want US companies to supply technology that foreign agencies could use to keep secrets from the US intelligence community. As a consequence, the encryption technology typically offered in current software is of an inferior level that can be cracked with relative ease (this was demonstrated recently when someone showed how Netscape's much-ballyhooed security could easily be thwarted).
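To see why a small key space offers so little protection, consider the following sketch. It uses a deliberately tiny toy XOR cipher with a 16-bit key - not a real cryptosystem, and far simpler than the 40-bit export-grade ciphers in question - but the attack idea is the same: an eavesdropper who can guess even a few bytes of expected plaintext (say, a standard protocol header) simply tries every possible key.

```python
# Toy illustration (NOT a real cipher): a repeating-key XOR "encryption"
# with a 16-bit key, broken by exhaustive key search. Export-grade 40-bit
# ciphers fall to the same brute-force approach, just with more work.

def xor_encrypt(message: bytes, key: int) -> bytes:
    """Encrypt (or decrypt; XOR is its own inverse) under a 16-bit key."""
    key_bytes = key.to_bytes(2, "big")
    return bytes(b ^ key_bytes[i % 2] for i, b in enumerate(message))

def brute_force(ciphertext: bytes, known_prefix: bytes):
    """Try all 2**16 keys, returning the one whose decryption begins
    with the expected plaintext prefix."""
    for key in range(2 ** 16):
        if xor_encrypt(ciphertext, key).startswith(known_prefix):
            return key
    return None

secret_key = 0xBEEF
ciphertext = xor_encrypt(b"GET /private/records HTTP/1.0", secret_key)
recovered = brute_force(ciphertext, b"GET ")  # attacker guesses the header
print(hex(recovered))  # prints 0xbeef
```

The entire 16-bit key space is searched in a fraction of a second on a desktop machine; each added key bit only doubles the work, which is why the difference between a 40-bit and a 128-bit key is the difference between hours and eons.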
Other potential governmental threats to privacy include the Digital Telephony Act of 1994, the proposed use of the Clipper Chip technology within communications devices, and the recently enacted Communications Decency Act (part of the much larger Telecommunications Act of 1996).
The Digital Telephony Act, enacted in 1994, requires phone companies to provide the equipment necessary to allow authorized governmental wire-tapping of digital communications lines. While this sounds highly desirable from a law enforcement perspective, many civil libertarians find much fault with this law. For a fairly complete discussion of this particular issue, I refer you to the article "How Good People Helped Make a Bad Law" in the February 1996 issue of Wired magazine. (If you read the article really carefully, you'll even find a brief quote from yours truly.)
The Clipper Chip was proposed by the Clinton administration as a cure for our data encryption blues. This chip, with an algorithm supplied by the National Security Agency, arguably the world's authority on encryption technology, would be required by law to be placed into every piece of electronic communications gear sold in the US. Thus, it was argued, transactions of every type - financial, medical, governmental - would be exceptionally well protected against eavesdropping.
There are two main reasons why this proposal was effectively shot down by the computer community. First, the plan called for the establishment of a process called key escrowing, where the encryption/decryption keys had to be held in escrow by two separate agencies, in case the government ever needed to - under court order - break someone's security. This plan was widely debated, with the end result being that few people could be made to believe that such a mechanism could itself be made secure, i.e. that such escrowed keys could not be obtained in ways other than by legal warrant.
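The splitting step of the escrow idea is easy to sketch. Assuming a simple XOR-based secret split (the actual Clipper scheme used a more elaborate protocol, so treat this purely as an illustration of the two-agency concept), the session key is divided into two shares, each statistically random on its own; neither escrow agency alone learns anything about the key, but the two shares together reconstruct it exactly:

```python
# Illustrative two-party key escrow via XOR secret sharing. This is a
# simplified stand-in for the real Clipper escrow protocol: each agency
# holds one share, and only both together can rebuild the session key.
import secrets

def split_key(key: bytes):
    """Return two shares; each share alone is uniformly random."""
    share_a = secrets.token_bytes(len(key))          # random mask
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two escrowed shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

session_key = secrets.token_bytes(16)
agency_1, agency_2 = split_key(session_key)
assert reconstruct(agency_1, agency_2) == session_key
```

The mathematics of the split is sound; the objection raised against Clipper was never the arithmetic but the institutional question of whether the two escrow databases themselves could be kept secure.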
The second flaw in the Clipper Chip proposal was that the encryption algorithm supplied by the NSA was completely secret, i.e. entirely unpublished. As a result, the algorithm could not be held up to scientific scrutiny to determine whether it did in fact work. There was also no way to rule out the possibility that the NSA had included a "back door" in the device that would allow it to intercept supposedly encrypted transmissions at will, without a warrant.
As a result of these flaws, the Clipper Chip technology proposal was scaled back from requiring that every communications device (e.g. all modems) in the US contain the chip, to requiring only those companies doing business with the federal government to use "clippered" devices. It was a narrow escape for the rest of us...
Concern over the availability of governmental records, potential governmental intrusions, and the accessibility of private information such as medical records is appropriate. However, we shouldn't let ourselves be led to believe that this necessarily represents the most important threat to personal privacy in the computer age. Perhaps more insidious, because it is less obvious than the government's role, is the vast world of consumerism.
Transactional data is being generated by the consumer world at an ever increasing rate. It began with credit cards, which for many decades were the only type of transactions routinely monitored by anyone. In the last twenty years, however, many additional forms of transactional data systems have been created, including warranty registration cards when items are purchased, the use of discount cards at the point of purchase, and the increasing acceptance of debit cards. Each of these methods allows purchases to be tracked, to the point where entire purchase histories can be constructed.
Of course, purchases can be tracked even when one pays cash (supposedly an anonymous transaction). Many stores routinely ask for personal information that they enter into their computer system for every purchase made.
Even the telephone system is increasingly transactional. While most users assume that no one but the phone company knows about the patterns of their calls, the use of caller-id systems is making it possible for others to capture information about your calls. The interesting thing about this system is that the phone companies market the device to homeowners as a privacy device: "Know who is calling before you pick up the phone." Newer devices even attempt to identify the person calling, instead of just the phone number of the calling party, using a simple table lookup.
The usefulness of this system from the individual's perspective is dubious at best. How many numbers can one individual recognize at a glance? What does it mean when an unrecognized number is displayed: is it someone you know calling from a pay phone? A tele-solicitor? The school telling you your child is sick? You need the device on every phone you have in order for it to be effective - what about the portable phone that you carry all over the house, even outside? The device doesn't really let you identify the person calling, just the number of the phone being used. What about people who pay for unlisted numbers - what are their rights to privacy? And many states allow callers to block their phone number from being transmitted to the id device, entirely eliminating the device's effectiveness.
The real market for this device is not individuals, but companies. When you call a store to, for instance, inquire about an item you recently saw advertised, the store can easily capture your phone number, time the call was placed, etc. The salesperson can then easily enter information about the call, such as the item you inquired about. Such databases are then bought and sold by companies in order to build even more detailed dossiers on individuals and their buying habits and product interests.
Not only do the phone companies charge other companies much larger fees for the caller-id service, but they also sell them the software and other services that make creating their databases much easier. It is clear that the phone companies stand to make a great deal more money from these commercial customers than from individuals.
Consider how much worse this is all going to get when we begin using the "Information Superhighway" for large-scale commerce. It is already possible to capture information about everyone who, for instance, browses your world wide web pages. While it is not yet safe to actually make purchases over the net, it won't be long before such transactions are commonplace. This will create transactional records on an unprecedented scale. In addition, unless all transactions are encrypted, every system through which these transactions pass could potentially capture details from them, allowing even the intermediary system administrators to create massive transactional databases for sale. This data will include not only what purchases you make, but what sites you regularly visit and what type of information you download. Finally, consider that utility companies are moving forward with plans to actively monitor, and ultimately control, the use of appliances in your home, as a means of smoothing out demands for power. The possibilities are endless.
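To show how little effort such dossier-building actually takes, here is a hypothetical sketch - the record format, names, and sites are all invented for illustration - of an intermediary aggregating unencrypted transaction records that happen to pass through its system into per-person histories:

```python
# Hypothetical illustration: an intermediary system sees unencrypted
# transaction records pass by and trivially groups them into dossiers.
# All names, sites, and the record format are invented.
from collections import defaultdict

intercepted_records = [
    "1996-03-01|alice|bookstore.example|purchase|Civil Disobedience",
    "1996-03-02|alice|pharmacy.example|purchase|allergy medication",
    "1996-03-02|bob|newsstand.example|browse|financial news",
    "1996-03-05|alice|travel.example|browse|flights overseas",
]

dossiers = defaultdict(list)
for record in intercepted_records:
    date, person, site, action, item = record.split("|")
    dossiers[person].append((date, site, action, item))

# A few lines of code turn raw traffic into a salable profile:
# what was bought, what was browsed, when, and where.
print(len(dossiers["alice"]))  # prints 3
```

The point is not the sophistication of the code - there is none - but that without encryption, every system along the route is in a position to run something like it.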
As a final privacy issue I would like to briefly discuss the recently enacted Communications Decency Act, a part of the Telecommunications Act of 1996. This act, which, by the way, is currently unenforceable due to a court injunction, makes it a felony to knowingly make available to minors any "indecent" information, in whatever form (graphics or text).
There are a number of things wrong with this Act, not the least of which is the technical impossibility of enforcement. Theoretically, every computer system administrator who allowed such indecent information to be passed on to a minor could be arrested for violating this law, since they certainly would have to "know" that such information could reach a minor. They would therefore have to actively monitor every item that passes through their system, and to whom each item is addressed, in order to intercept anything that might get them into trouble. This is technically impossible if even the current level of transmissions is to be maintained. It is analogous to holding the US Post Office and all of its employees responsible when a minor sees a copy of Playboy that was delivered by mail.
There are also significant problems with the fact that this law prohibits things that are "indecent" as opposed to "obscene," which is the basis for the current injunction. Since this part is not really related to either computer technology or privacy, I won't pursue this issue here. In addition, much of the argument being pursued in court against the Communications Decency Act revolves around issues of free speech and censorship, also irrelevant to the technology.
How then is this an issue of privacy? Simple: any attempt by the government to monitor the reading or viewing habits of law-abiding citizens is clearly a violation of privacy. This Act implicitly requires these habits to be carefully monitored in the case of information passed over the Internet.
The aim of the law is noble: keeping pornography out of the reach of children. Hard to argue with the intent. What is arguable is whether this is the best way (or, indeed, really a way) of tackling this thorny problem. There are many ironies surrounding this legislation, not the least of which is that most of its sponsors have never even been on the Internet. They have heard and read that the Internet is a haven for pornography, and that it is pervasive in the medium. But they have never verified this information themselves. Much of the claim of the rampant presence of pornography on the net comes from a cover story of Time magazine last year which reported on a survey conducted by a Carnegie Mellon University undergraduate student, which purported to show that pornography was widely available via the Internet. The survey was later completely discredited as entirely faulty research. However, the net's reputation as a hotbed of porn has nonetheless stuck.
This isn't to say that some of the allegations about the net's darker side are not true, and that providing protections for our children is unnecessary. Indeed, just as children may need protection from the negative effects of television and some reading material, monitoring children's surfing habits is an important parental task. But it is a parental responsibility, not a governmental one.
This brings us to an interesting dilemma, however, for the computer professional. If, as I posit, it is incumbent on the computer professional to be socially responsible in his or her work, how can I argue against the position of requiring these professionals to be legally responsible for protecting children from the possible adverse effects of using the Internet, at least with respect to pornography? It is a delicate balance at times between satisfying multiple social goods, such as protecting privacy and wanting to eliminate the accessibility of pornography by children via the Internet. True dilemmas have no easy, or even provably correct, answers. Thus, one must look for viable compromises that in some fashion satisfy all parties equitably.
In this case, my own reaction is that the ethical approach for the computer industry would be to develop software that allows parents to monitor their children's use of the Internet. This is precisely what has occurred. Today there are several products available that provide a level of control over the accessibility of certain materials on the Internet. While the solution is not a perfect one - kids can still go to a friend's house whose parents do not have such controls - it is similar to other types of parental control already extant. For instance, parents might have MTV blocked by the cable company because they feel it would have a negative impact on their children. However, those parents lose that level of control when they permit their children to visit friends' homes. Likewise with reading materials, where a parent forbids their children to read, for instance, Stephen King books.
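The core filtering idea behind such products can be sketched in a few lines. This is a minimal illustration only - the site names and keyword list are invented, and commercial filters of the era (SurfWatch, Net Nanny, and the like) were considerably more elaborate - but it shows the basic blocklist-plus-keyword approach:

```python
# Minimal sketch of a parental-control URL filter: a blocklist of host
# names plus simple keyword screening. Site names and keywords here are
# invented for illustration; real products maintain curated lists.
from urllib.parse import urlparse

BLOCKED_HOSTS = {"adult-site.example", "gambling.example"}
BLOCKED_KEYWORDS = ("xxx", "porn")

def is_allowed(url: str) -> bool:
    """Return True if the URL passes both the host blocklist
    and the keyword screen."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_HOSTS:
        return False
    return not any(word in url.lower() for word in BLOCKED_KEYWORDS)

print(is_allowed("http://library.example/catalog"))   # prints True
print(is_allowed("http://adult-site.example/index"))  # prints False
```

The obvious weaknesses - new sites appear faster than lists are updated, and keyword matching both over- and under-blocks - are exactly why such filters are an imperfect but workable compromise, and why they belong in parents' hands rather than in a federal statute.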
Privacy is clearly under assault by the use of computer technology. It is my deepest feeling that computer professionals should take a clear stand on the side of using the technology to increase privacy rather than continue to decrease it. While the right of privacy is not an absolute, taking precedence above all other rights, still it must be guarded assiduously. The major professional associations in our field all have stern, explicit statements recognizing the importance of privacy concerns. The Data Processing Management Association's Code of Ethics states that a member shall "Protect the privacy and confidentiality of all information entrusted to me." The Institute for Certification of Computer Professionals' Code of Conduct indicates that "One shall have special regard for the potential effects of computer-based systems on the right of privacy..." The Association for Computing Machinery's Code of Ethics and Professional Conduct states "It is the responsibility of professionals to maintain the privacy and integrity of data describing individuals." You can't get much clearer than these statements.
Finally, if you want a prime example of what happens when computer technology is used to its fullest extent to track the movements and habits of individuals, and to create centralized dossiers on every citizen, we have only to look to Singapore. Each citizen carries a universal identity card, computerized, of course. This card is used to obtain every government service, to gain access to public transportation (essentially the only kind that exists), to access computer facilities, and to conduct business. In other words, every activity engaged in during the day is tracked and stored. Now, the government claims that this global surveillance system is singularly responsible for an enormous increase in the standard of living of all Singaporeans. I don't dispute this assertion. But at what cost? Do these people have freedom? Will they continue to have it in the future? Can there be true freedom in a surveillance society, i.e. without privacy?
Let me now move on to the second major social issue I see facing the computer industry: accessibility. While I won't spend as much time discussing this issue, I don't want you to think that I value it less than the issue of privacy protection. It is just that this issue is much more easily described. It can be stated simply as the growing division between the haves and the have-nots.
According to US Census figures, white children are three times more likely to have a computer in their home than non-white children. In addition, they are nearly 20% more likely to have access to computers in their schools. As a consequence, minority students are falling further behind because of their lack of access to computer technology. Meanwhile, as the facilities available over networks increase, the pressure to be on-line continues to grow. But what is the cost of making these powerful facilities available to students? Consider that the main commercial networks such as America Online, Prodigy, and CompuServe charge not only a registration fee and a monthly fee, but also a per-minute fee for using their facilities.
Clearly this is not a technological issue per se; it is economic. Again, many of my colleagues argue that computer professionals have no special responsibility to try to rectify what is so clearly just a social problem. After all, they claim, there are many reasons why the gap between rich and poor, and, therefore, between whites and non-whites, continues to widen, none of which has anything to do with computer technology.
However, we are on the verge of creating not only an information-rich society, but an information-poor one as well. It is the technology that we, as professionals, are developing that has created this schism. Consider that just 25 years ago, essentially everyone had equal access to information through a national network of neighborhood libraries. While there was still a dichotomy in how well stocked a local library was depending on its location, i.e. whether it was in an affluent neighborhood or an inner city, the difference between the two extremes was more like a shallow ditch compared with today's deep and wide chasm. The need to have a computer system with a large high-resolution screen, lots of memory, a high-speed modem, a CD-ROM drive, and a large hard disk in order to take full advantage of the electronic information world is creating the equivalent of an information ghetto for those who cannot afford the equipment. Add to this the cost of an on-line service, and it is easy to see why the chasm yawns ever wider.
Now, as the price of electronics continues to fall, today's computer systems will certainly become more affordable in the future. However, by that time, software currently under development to provide even better facilities will demand even more power to use. It is a never ending cycle of demand for higher powered systems in order to take advantage of the latest and greatest facilities available at any given time.
What is the role of the computer professional in this instance? How can we solve from a technical perspective what is apparently just a financial issue? A couple of thoughts come to mind. First, professionals should attempt to break the vicious cycle that demands that users continually upgrade their systems, or get left behind. No other consumer product line has the turnover rate that the computer field has - a major new development requiring a new investment by consumers every 18 to 24 months.
While there is a natural drive to always "push the envelope" in developing new products, I encourage you to begin thinking in more minimalist terms: ask "What is the smallest system this new product can be made to work with?" rather than "How much power can I use to get the job done?" In this way older systems (sometimes called "legacy" systems) will enjoy a longevity that will help make it possible for those of modest financial means to afford the technology.
Somewhat surprisingly, the industry itself has begun to realize that the Internet will not become what we want it to be, i.e. universally accessible, unless an inexpensive access system is developed. Recently the Oracle Corporation announced that it was embarking on the development of a $500 Web Machine, essentially a stripped-down, compact (perhaps portable) system whose sole function is to access information via the World Wide Web. This system would have no mass storage capability and minimal input/output capabilities, thus lowering its cost significantly. It would probably also be non-expandable, so internal and external interfaces could be eliminated. The goal, it seems, is to create a unit about the size of a small notebook, with a flat-screen display, weighing under a pound. It appears to me that creating such a device, costing less than a standard television, will help make universal accessibility a reality.
However, another form of information discrimination occurs when considering the millions of people with handicaps. Although the Americans with Disabilities Act has done a great deal to remove the impediments that have until now kept the handicapped from actively participating in our society, the growing dependence on computer technology has once again exacerbated the problem. Standard input and output devices do not often work for the handicapped. There is an enormous amount of work to be done in developing new interfaces to help this segment of our population gain equal access to computing facilities. I encourage you to seek out opportunities to work on such systems, and to anticipate when developing a new product how it might be modified to make it easier for use by the handicapped.
Finally, I would encourage you also to become involved in community efforts to create local computer centers where the less fortunate can have access to the wide range of computer technology. This will again undoubtedly involve legacy systems, cast-offs that are no longer seen as productive systems in a corporate setting, but which could still provide an entry point into the computer age for many people. There are many excellent examples of such community projects. Two that I will mention are the Seattle Community Network and the Hill House center in Pittsburgh, different but both excellent models for such work.
In closing, again be aware that you will not always be appreciated for having a social conscience. There are many within our industry, as in others, who prefer to narrowly focus on the technical task at hand rather than concern themselves with the potential impact of their work. They consider that to be the realm of social scientists, not computer professionals. However, don't let them dissuade you, don't let them discourage you, don't let them threaten you. It is you who will make the crucial difference in determining where in the spectrum between Utopia and the Orwellian nightmare we as a society end up.
Hancock, L. The haves and the have-nots. Newsweek. February 27, 1995.
Huff, C. and Finholt, T. Social Issues in Computing. McGraw-Hill. 1994.
Liffick, B. Policy statement on the governmental regulation of cryptographic technology. National Research Council Hearings on National Cryptographic Policy. April 12, 1995.
Oz, E. Ethics for the Information Age. Business and Educational Technologies. 1994.
Ramos, J. How cheap can computers get? Time. January 22, 1996.
Rosenberg, R. The Social Impact of Computers. Academic Press. 1992.
van Bakel, R. How good people helped make a bad law. Wired. February 1996.