John DiMarco on Computing (and occasionally other things)
I welcome comments by email to jdd at cs.toronto.edu.

Sat 30 Dec 2023 10:13

Fifty years of Social Issues in Computing, and the Impact of AI

[Figure: overlapping circles labelled with the social-issue areas of computing (technical, pedagogic, managerial, economic, legal, political, ethical, social and philosophical). Image from Gotlieb, C.C. & Borodin, A. (1973), Social Issues in Computing, New York: Academic Press, p. 2, Fig. 1-1.]

From when I first discovered computers as a teen, I have been fascinated by the changes that computing is making in society. One of my intellectual mentors was the brilliant and generous C. C. "Kelly" Gotlieb, founder of the University of Toronto's Computer Science department, the man most instrumental in purchasing, installing and running Canada's first computer, and the author, with Allan Borodin, of what I believe is the very first textbook in the area of Computing and Society: the seminal 1973 book Social Issues in Computing [Gotlieb, C.C., & A. Borodin, Social Issues in Computing. Academic Press, 1973]. Kelly was already a Professor Emeritus when I first came to know him, but was still teaching his beloved Computers & Society course, which he went on to teach for nearly two decades after his retirement. Kelly was a fascinating man, with a broad perspective and deep insight into things that seemed confusing. Like a true expert, he knew what was important and what was incidental, and a few well-chosen insights from him often served me well, helping me to make sense of complex issues. His book, Social Issues in Computing, still offers interesting, often prescient insights into Computing and Society even today, a half-century later.

In honour of the book's importance, for its 40th anniversary year I set up and edited a year-long blog, "Social Issues in Computing". Throughout that year, top thinkers in the field contributed insightful articles on topics in Computers & Society, many of which are as relevant today as they were ten years ago. For that blog, I had the privilege of interviewing Kelly and Allan, the book's authors, and their insights, four decades on, were fascinating. Sadly, Kelly is no longer with us: he passed away in 2016, in his 96th year. But happily, Allan Borodin, his co-author, remains with us. Allan is a brilliant and insightful man, an active researcher and University Professor in the department. For the book's 50th anniversary this year, Allan was interviewed by Krystle Hewitt. It is an articulate and insightful interview, well worth reading.

In the decade since, the social impact of computing has only accelerated, much of it due to things that happened here at the University of Toronto Computer Science department around the time of the 40th anniversary blog. I refer specifically to the rise of machine learning, in no small part due to the work of our faculty member Geoffrey Hinton and his doctoral students. The year before, Geoff and two of his students had published a research paper that constituted a breakthrough in image recognition, complete with working open-source software. In 2013, while we were writing the blog, their startup company, DNN Research, was acquired by Google, and Geoff joined Google Brain, where he worked until he retired from Google in 2023. Ilya Sutskever, one of the two students, went on to lead the team at OpenAI that built the GPT models and the ChatGPT chatbot that stunned the world in 2022 and launched the Large Language Model AI revolution. In 2013, we already knew that Geoff's work would be transformational. I remember Kelly telling me he believed Geoff to be worthy of the Turing Award, the most prestigious award in Computer Science, and sure enough, Geoff won it in 2018. The social impact of AI is already considerable, and it is only starting. The University of Toronto's Schwartz Reisman Institute for Technology and Society is dedicated to interdisciplinary research on the social impacts of AI, and Geoff Hinton himself is devoting his retirement to thinking about the implications of Artificial Intelligence for society and humanity in general.

It's interesting to look at what the book said about AI (it devotes 24 pages to the topic), what the 2013 blog said about AI, and what has happened since. The book was written in 1973, a half-decade after Stanley Kubrick's iconic 1968 movie, 2001: A Space Odyssey, which features HAL 9000, an intelligent computer voiced by Douglas Rain. But computing at the time fell very far short of what Kubrick envisioned. Gotlieb & Borodin's position, five years later, on the feasibility of something like HAL 9000 was not optimistic:

In review, we have arrived at the following position. For problem solving and pattern recognition where intelligence, judgment and comprehensive knowledge are required, the results of even the best computer programs are far inferior to those achieved by humans (excepting cases where the task is a well-defined mathematical computation). Further, the differences between the mode of operation of computers and the modes in which humans operate (insofar as we can understand these latter) seem to be so great that for many tasks there is little or no prospect of achieving human performance within the foreseeable future. [p.159]
But Gotlieb & Borodin, though implicitly dismissing the possibility of a HAL 9000, go on to say that "it is not possible to place bounds on how computers can be used even in the short term, because we must expect that the normal use of computers will be as a component of a [hu]man-machine combination" [pp.159-160]. Of this combination, they were not so willing to dismiss possibilities:
Whatever the shortcomings of computers now and in the future, we cannot take refuge in their limitations in potential. We must ask what we want to do with them and whether the purposes are socially desirable. Because once goals are agreed upon, the potentialities of [humans] using computers, though not unlimited, cannot be bounded in any way we can see now. [p.160]
Fifty years later, social science research on how AI can benefit human work is focusing closely on this human-AI combination. In a 2023 study, a team of social scientists examined work done by consultants who were either assisted or not assisted by ChatGPT-4. Of their results, Ethan Mollick, one of the authors, explains that "of 18 different tasks selected to be realistic samples of the kinds of work done at an elite consulting company, consultants using ChatGPT-4 outperformed those who did not, by a lot. On every dimension. Every way we measured performance." [Mollick]. Evidently, Gotlieb & Borodin were correct when they wrote that the potential of the human-machine combination cannot so easily be bounded. We are only now beginning to see how unbounded it can be.

As for the possibility of a HAL 9000, as we saw, the book was not so sanguine. Neither was the 2013 40th anniversary blog. Hector Levesque, a leading AI researcher and contributor to the blog, wrote in his blog entry:

The general view of AI in 1973 was not so different from the one depicted in the movie "2001: A Space Odyssey", that is, that by the year 2001 or so, there would be computers intelligent enough to be able to converse naturally with people. Of course it did not turn out this way. Even now no computer can do this, and none are on the horizon.

Levesque went on to outline the key scientific issue that at the time (2013) had yet to be solved:

However, it is useful to remember that this is an AI technology whose goal is not necessarily to understand the underpinnings of intelligent behaviour. Returning to English, for example, consider answering a question like this:

The ball crashed right through the table because it was made of styrofoam. What was made of styrofoam, the ball or the table?

Contrast that with this one:

The ball crashed right through the table because it was made of granite. What was made of granite, the ball or the table?

People (who know what styrofoam and granite are) can easily answer such questions, but it is far from clear how learning from big data would help. What seems to be at issue here is background knowledge: knowing some relevant properties of the materials in question, and being able to apply that knowledge to answer the question. Many other forms of intelligent behaviour seem to depend on background knowledge in just this way. But what is much less clear is how all this works: what it would take to make this type of knowledge processing work in a general way. At this point, forty years after the publication of the Gotlieb and Borodin book, the goal seems as elusive as ever. [Levesque]

Yet in the ten years since, this problem has been solved. Today, I posed Levesque's questions to ChatGPT-4:

me: The ball crashed right through the table because it was made of styrofoam. What was made of styrofoam?

ChatGPT: The table was made of styrofoam, which is why the ball crashed through it.

me: The ball crashed right through the table because it was made of granite. What was made of granite?

ChatGPT: The ball was made of granite, which is why it crashed right through the table.

me: How did you know which thing was made of granite and which of styrofoam?

ChatGPT: I inferred the composition based on the context of the sentences. In the first sentence, "The ball crashed right through the table because it was made of styrofoam," the implication is that the table must have been weak or easily breakable for the ball to crash through it. Styrofoam is known for being light and not very durable, so it makes sense to conclude that the table was made of styrofoam.

In the second sentence, "The ball crashed right through the table because it was made of granite," the emphasis seems to be on the strength or force of the ball causing it to crash through the table. Granite is a very hard and heavy material, so it suggests that the ball was made of granite, giving it the necessary force to break through the table.
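For readers who would like to try this themselves, the same questions can be posed programmatically. Below is a minimal sketch using OpenAI's openai Python package (the v1.x chat-completions API); the model name is an assumption, and the transcript above came from the ChatGPT web interface, not from this script.

    # Minimal sketch: posing Levesque's Winograd-schema questions to an
    # OpenAI chat model via the openai Python package (v1.x API).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    questions = [
        "The ball crashed right through the table because it was made of "
        "styrofoam. What was made of styrofoam?",
        "The ball crashed right through the table because it was made of "
        "granite. What was made of granite?",
    ]

    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model name; substitute one you have access to
            messages=[{"role": "user", "content": question}],
        )
        print(question)
        print("->", response.choices[0].message.content)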

Levesque can hardly be faulted for not knowing that his problem would be solved only a few years after he wrote what he did. While Hinton and his students' seminal work on image recognition had been done the year before, it was not yet clear that the ability to assign names to images based on their content would soon lead to machines being able to reason from context. A further invention was needed: transformers, invented at Google Brain in 2017. ("GPT" is an acronym for "generative pre-trained transformer".) Yet the book's implicit, and Levesque's explicit, pessimism about the possibility of a HAL 9000 now seems obsolete, and the near-unbounded possibilities Gotlieb & Borodin envisioned for the human-machine combination apply also to autonomous machines. The impact of this on society will, no doubt, be significant. I expect the next fifty years of "social issues in computing" to be quite interesting.
