This paper was submitted as a terminal project for my BA in sociology at Eastern Washington University in 2014. It was completed under the guidance of Dr. Todd Hechtman, and was the subject of presentations at the 2013 SIRC conference and EWU’s 2014 Student Research and Creative Works Symposium.
Model users: Identity and software development
Introduction
“The main thing computers are good at,” a young computer science major told me in an interview, “is making redundant tasks go fast and easy.” The scale of the generalization was unexpected in the moment, but it was informative. There are few institutions that have not in some way been re-ordered around software-controlled processes in recent decades, few redundant tasks that have not been made faster and easier. Software organizes our access to unprecedented resources of information and automation. But as a medium in which we use software products that others have created, it also organizes human beings’ access to one another. Even when our own daily experience of software takes place within the precise constraints of a software interface, we are still participants in a broader discourse between author and consumer, an inherently social interaction. Within this social connection, I believe it’s important to ask what tasks, exactly, are being routinized and accelerated.
In Behind the Blip, the philosopher Matthew Fuller takes up the concept of the “deliberate fiction of user identities” (13) in interface design. The concept of “user-centered design”, from the field of human-computer interaction (or HCI), suggests my essential research question. The concept, codified into a variety of best-practice models in software development, holds that software interfaces should be designed with a user in mind: what are his competencies, his level of experience with the technology, and his vocabulary of interface metaphors? This “model user” stands in as a conceptual tool to evaluate the fit of a designed interface to its probable users. In her essay “Writers, texts and writing acts: gendered user images in word-processing software,” Jeanette Hoffman explores specific ways the user-interface design of 1980s word-processing programs reflected the developers’ “assumptions about the characteristics” of the users—in this case, female office workers. But if this concept of user-centered design is descriptive of software design, then it also suggests a very specific model by which to interrogate the social role of software: the model of software as a communication between its author and consumer, in which the author’s intent is expressed in a way that seeks to anticipate the needs and competencies of an imagined user.
Through original interviews with software developers, and analysis of existing scholarship, this project seeks to test the hypothesis that this user-centered design model is descriptive of the practice of software development, and thus seeks to examine the role of software itself in mediating the relationship between its producers and consumers. I pay particular attention to software developers’ statements about how they perceive that relationship, and to testimony on ways the culture of software development teaches developers to conceive of the end-user.
There is a growing body of research, under the umbrella of science & technology studies, which looks at the use of software through a sociological lens (Hoffman’s work offers some examples, as do a number of Sherry Turkle’s ethnographies, including Simulation and its Discontents.) There is also a fair amount of ethnographic work seeking to describe the worldview of software developers themselves, such as Steve Woolgar’s “Configuring the User”, or Sharp, Robinson and Woodman’s “Software Engineering: Community and Culture”. But I believe there’s an opportunity to listen to software developers as fellow-travelers with everyone else, as our broader culture crosses new frontiers of computing technology. Insofar as they hold specialist knowledge within that transition, what meaning do they assign to that knowledge and its possession? How, ultimately, should we describe the identities and relationships that our computerized culture is becoming “good at”?
Background & literature review
The first theoretical task of this research is to establish that user-interface design, as an activity, has social implications. Jeanette Hoffman’s “Writers, Texts and Writing Acts” makes confident strides in that direction, by linking preconceptions of gender with the design of early word-processing programs (introduced at a time when typing in the workplace was overwhelmingly the domain of women). Here, Hoffman draws a clear picture of the interface’s power to constrict the user to a preconceived conception of her abilities. The first versions of Displaywriter, with repetitive menu-driven procedures, spoke of an assumption that the user would remain a perpetual neophyte at using the machine; “All actions were guided to preclude errors as far as possible.” (226) Contrast this with WordStar, of the same era, which made its capabilities available through keystroke combinations that, while harder to learn initially, allowed adept users to perform tasks much more quickly. (230) “Such images of the user”, Hoffman concludes, “become real and effective to the extent that they actually succeed not only in structuring the development of technology but also the operator’s daily actions.” (232)
Steve Woolgar’s “Configuring the User”, likewise, describes an unnamed computer manufacturer whose departments attempted to interpret and apply data from organized “usability trials” of a new computer. As he concludes: “The user’s character and capacity, her possible future actions are structured and defined in relation to the machine.”
Scholarship in the area of human-computer interaction, in fact, shows no timidity about drawing social connections. It is this field that suggests the concept of user-centered design, and to read Helander, Landauer, & Prabhu’s Handbook of Human-Computer Interaction is to read sociology in reverse: it sets out to provide as complete a document as possible on what programmers might be expected to know about the identities in discourse, in order to expose the functionality of their software to its users in an elegant way. One chapter focuses on designing the “mental model” through which the user will conceive of the computer system and its capabilities, and this discussion suggests a great deal about what software design might mean as a site of connection between social and technical problems. A chapter on designing for users with disabilities that might affect their use of software interfaces provides another example. The book does not hesitate to discuss the inherently social, and inherently metaphorical, nature of interfaces.
Theorists from outside the field, however, problematize some of its assumptions. Matthew Fuller describes such interface metaphors as tools by which users “imaginally map out in advance what functional capacity a device has by reference to a pre-existing apparatus”, (100) noting that, as the capabilities behind the interface become sufficiently novel, these metaphors have a limiting effect on the user’s actual understanding. In similarly critiquing the meaning of the “command” metaphor, Hoffman writes “The term command suggests a constellation between human and machine in which the control lies with the writer…” Yet this metaphor is actually at odds with the patterns of capability and control in our use of software: “To put it bluntly, the operator has no means to sanction the computer to impose her will; on the contrary, it is the operator who must change her procedure if the computer fails to respond.” (223)
Through the normalization of conventional metaphors of interface, Fuller says, we approach “the fatal endpoint” of this approach, in which interface design “empowers users by modeling them, and in doing so effects their disappearance…” (13) To normalize our access to software via familiar metaphors, in other words, is also to normalize the software’s access to us. As Fuller concludes of the field of HCI, it “has an unusually narrow understanding of its own scope. Much of the rhetoric is about empowerment and the sovereignty of the user, whose “personality” shapes and dialogues with the machine. It should be asked what model of a persona, what “human”, is engineered by HCI.” (12)
We might also ask what kind of a culture is “engineered” by computing. Sherry Turkle’s Simulation and its Discontents examines the disputes over the role of computerized simulations in teaching, among MIT students and faculty in the 1980s. Her analysis is focused on simulations’ roles in illuminating contrasts of values: on one hand, the “pure science” tradition of encouraging students to learn, as much as possible, by direct experience with real subjects, and on the other, the more pragmatic point that simulations were opening vistas of experience and understanding that could not be matched by traditional, empirical experimentation.
At the heart of these arguments was a debate over the appropriateness of simulations to describe reality. Turkle observes, interestingly, that “students who had grown up with video games experienced screen objects as, if not real, then real enough” (41). The computer, then, becomes one fault line of a cultural divide.
Paul Ceruzzi’s “Inventing Personal Computing” manages to find a different dividing line around which to analyze “personal computing” culture, by observing the early moments of its existence. He pays particular attention to software development, in its transition from a perceived role as an engineering task to a more abstract and authorial one, in the domain of “hobbyists”. (78) In his historical frame, there is a clear delineation to be drawn around computing within the wider culture.
But the pace and breadth of software’s reach make it difficult to draw clear boundaries around “computer culture” today. Instead, a sociological perspective on the meaning of software must encompass the view that, as Aronowitz and Menser describe in Technoscience and Cyberculture, all culture is influenced by technology. The production of knowledge inherent in the production of software (for example) cannot be meaningfully removed from our analysis of the broader system of knowledge in which it occurs.
Trevor Pinch takes up a similar argument in “The Invisible Technologies of Goffman’s Sociology”, in which he revisits a number of Goffman’s analyses of sited social interactions for their insights on the role of technology in mediating interaction. Goffman’s work, while closely associated with the concept of hegemonic ritual and its power to entrench role identities, is not typically read for its insights on the role of technology. “Modern citizens spend so much time interacting with computers, cell phones and the like”, Pinch says, “that the role they play in framing and mediating interactions is obvious…” Less obvious is the role of other technologies, such as the merry-go-round in one example of Goffman’s work. “Technology plays a part in staging the role”, Pinch says, “and is also crucial in terms of how the interaction is mediated.” His ultimate intent is to invite us to “combine the attention to technological artifacts…” with more Goffman-like approaches which embrace “the meanings which materiality and technology facilitate.”
The theme, again, is that the study of technology and culture must acknowledge that there is no non-technological culture, only non-technological explanations of culture. This perspective might challenge Turkle’s subjects in Simulation…, by challenging the idea that non-computerized methods in science and architecture represent a “direct” experience while computers present an abstracted one. If all culture is technological, then we are remiss in ignoring the technological character of the pen and paper, the experimental apparatus, or even the verbal description, in mediating our experiences.
Role identities are also important to my analysis, particularly those that pertain to programmers themselves. In “Software Engineering: Community and Culture”, Sharp, Robinson and Woodman write that “most people view software engineering largely as a deterministic technical enterprise with some human and social adjuncts”, while “a broader understanding of how the human and social environment affects software engineering escapes us.” (40) Woolgar’s “Configuring the User” also studies a sample of programmers and engineers as they negotiate an interaction with a group of testers. These works offer examples of sampling a specialist group within a wider population. Particularly: a group divided from the wider population (to whatever extent) by specialized skills, knowledge, and generative activity. I proceed from the assumption that software developers form a basically identifiable population, and that they possess some reflexive sense of that group identity.
In “Configuring the User,” Woolgar sets out to explore the metaphor of the computer as “text”, with particular reference to the machine’s role in articulating an identity for the user: “it is not just the identity of the user which is constructed,” he writes. “For along with negotiations over who the user might be, comes a set of design (and other) activities which attempt to define and delimit the user’s possible actions.” (61) Tharon W. Howard, in exploring the meaning of ‘networked texts’ in online discourses, says “Few media offer individuals the ability to have a voice in social, economic, political and/or pedagogical change as does NT [networked text].” (13) While Howard’s focus is not on software, the work speaks directly to the ability of a new medium to enable new kinds of discourse. So, my model of software’s function borrows from both of these concepts of ‘text’, in its power to communicate overt information, and to prescribe interactional roles.
In Understanding Media, Marshall McLuhan takes up the concept that automation itself can be considered a category of media. “Automation”, he says, “assumes electricity as store and expediter of information… These traits of store… and accelerator are the basic features of any medium of communication” (351) While there is some conceptual distance between this automation, the technology of computing, and software itself, software certainly meets many common-sense criteria to be considered a mass medium. Jean Baudrillard concludes in The Consumer Society that “What defines mass communications is… the combination of technical medium and LCC [lowest common culture] (not the massive number of people taking part).” LCC being “akin to the ‘standard package’ which lays down the lowest common panoply of objects the average consumer must possess in order to accede to the title of citizen of this consumer society”. (104) If we take computer literacy as part of that “panoply of objects”, then software makes a convenient extension of this definition.
I believe it’s important to seek out these media theory connections, first, to avoid the trap that Aronowitz and Menser identify, of “theorists who see technologies as merely ‘mediators’”. We must take software as a medium unto itself, in order to fully recognize that it is both a site of connection between people and the focus of multiple, overlapping systems of specialized knowledge. The medium, by McLuhan’s famous axiom, is the message. But second, to study software as a site of discourse between producer and consumer is inherently to treat it as a medium. The extensions of this research into media theory are simply too close at hand to overlook.
Baudrillard’s characterization of mass media in The Consumer Society, even more than McLuhan’s, centers on media’s ritual function and its power to create consensus. Advertising and fashion, he says, “can be interpreted as a kind of perpetual referendum – in which citizen consumers are entreated at every moment to pronounce in favour of a certain code of values and implicitly to sanction it.” (168) Our “ritual” participation in these media, in other words, is both a personal and a public act, by which new “common knowledge” is created—even new ways of thinking.
“[I]t can be misleading,” Walter Ong said of using the word “media” in reference to new technologies, in Interfaces of the Word, “…encouraging us to think of writing, print and electronic devices simply as ways of ‘moving information’ over some sort of space intermediate between one person and another”. Instead, he argues, each such technology “does far more than this: it makes possible thought processes inconceivable before…” (qtd. in Howard, 30)
N. Katherine Hayles, in “The Condition of Virtuality”, makes a compelling argument for one such “thought process”: the belief that the tangible, or the experiential, is only an instantiation of a “pattern”, or algorithm, which exists essentially in some perfect, immaterial form – the philosophy, in other words, that we exist in a universe of data, rather than of things. Hayles defines this ‘condition’ as “the cultural perception that material objects are interpenetrated by information patterns”. (69) Ideas about DNA make clear examples: the engineer and futurist Ray Kurzweil, in his book The Age of Spiritual Machines, makes the telling word choice that DNA is data—not a molecule of nucleic acids used in protein synthesis, but information itself. The popular science-fiction theme of human cloning hinges on the concept that, once a human body can be reduced to a stream of data, that person can be ‘duplicated’. Hayles explores similar content in the work of the biologist Richard Dawkins. (Hayles, 70) Kurzweil makes comparable statements about the supposed inevitability of human-like artificial intelligence: that the mind is already quantitative in nature; we just don’t know how to measure it yet. This concept might also echo Turkle’s observation on the “real enough” quality of simulations to the generation that grew up with video games. Hayles’ thesis is that this “bifurcation” of information and materiality (69) represents a special cognitive characteristic of our computerized society.
Meanwhile, in several of the essays in Behind the Blip, Matthew Fuller draws a portrait of computing technology as something essentially postmodern: a window onto a world fragmented into functional units, and reconceptualized at will via software. It is a portrait that is difficult to refute. In the essay “A Means of Mutation”, he describes a program that reads HTML data from web pages, like any web browser, but rather than rendering them as web pages, visualizes them in novel, but linguistically meaningless, ways. The obvious suggestion is that behind the conventionality of the “normal” interpretation, we glimpse the idea that the data we consume on the web has already been dissociated and reconstituted in and out of meaningful contexts. Likewise with the galaxy of functional sites that are constituted, via software, into a meaningful network of traffic cameras, mobile phones, printers, monitors, users… the culture of software is one of endless, increasingly radical reorganizations of our world.
Attempting to reconcile Hayles’ and Fuller’s broad observations, I am drawn to a possible paradox of meanings: the post-modernism of a world disintegrated by networking, then granted “realness” by virtue of its substantiation of data, in what could almost be called a new sort of Platonism. I believe such disruptions of meaning, whether they arrive via technology’s power to insinuate new knowledge, or via the power of a mass medium to “make possible thought processes inconceivable before”, as Ong put it, mark the path of any sufficiently deep reading of culture and technology.
To ask how well the concept of user-centered design describes the social meaning of software, then, we proceed from a breadth of existing scholarship. We are asking how a particular mass medium (one that challenges certain features of our chosen definitions of a medium), in applying the HCI principles of user-centered design (applied by one describable identity group, in reference to another), would predict or describe relationships within our deeply technology-centered society. Our jumping-off point, then, is the convergence of these threads on software, as a distinct site of discourse within that social setting.
Methods
The original data collection for this research took the form of interviews with programmers. These interviews were conducted one-on-one, by phone, with participants obtained in a sample of convenience. I was the interviewer in all cases. Each participant received the same set of open-ended prompts, and was encouraged to elaborate and make associations in his answers.
The sample included co-workers, friends and friends of friends, as well as some participants recruited at meetings of campus clubs. Ten interviews were conducted in total, one of which was unusable due to an unexpected problem with the recording. Two of the interviews were not recorded at all, and in those cases, my analysis was based on notes taken during the interviews. Participants’ ages ranged from their twenties to their sixties, and all were US citizens who spoke English as a first language, though some variety of ethnicities was present. All were male, which is a noted limitation of the sample. Seven of the respondents had taken college courses in a computer science track, and eight of them had written software as part of a job or internship. I followed no rigid operational definition of a programmer, but all of my participants had, at some point, written code in order to create computer software for the use of other people.
The interview questions fell into four sections: first, questions about the respondent’s early experiences with computers and programming; second, a set of questions about current or recent projects, focused particularly on interactions with users; third, a set of questions on the respondent’s education and training; and fourth, a group of questions about social topics related to computing.
The questions on early experiences included questions about the respondent’s first forays into programming, as well as his early experiences with computing in general. These questions were used partly to lead respondents to begin organizing their recollections about their “careers” in programming, and partly to seek common narratives in programmers’ backgrounds. Such commonalities could help to paint a picture of the “programmer” identity. This section also included questions about what tools the respondents used in their first programming experiences. Subject to confirmation in their responses, I assumed that respondents’ first programming languages and development tools would color their expectations and approaches to further study. Just as a user interface presents metaphors by which a user is to understand a product, so does a programming language articulate an “imaginal space”, in Fuller’s phrase, by which the programmer is led to conceive of the capabilities before him. I also asked what kinds of software, in general, participants tried to create in those early experiences. My expectation was that if common narratives emerged about programmers’ goals and motivations in these first attempts, I could assume that those trends would hold true of a wider sample and therefore would say something about the motives that draw people to study programming. This could clarify the identity group, and either support or refute the assumption that software design generally follows from a concept of its eventual users.
The prompts about respondents’ recent projects focused on interactions with other people during and after the development process: external design specifications, user feedback, and cooperation with fellow programmers. First, I asked respondents to describe interactions where they’d had to take input from clients or end-users about software they would work to create. If generalizations could be drawn about ways respondents had turned design input from non-programmers into usable specifications, those could describe common contrasts in thinking between programmers and non-programmers. Likewise, the prompt could also lead participants to describe the early stages of their own design process, as they proceeded from those specifications towards an actionable design. The prompts about feedback from end-users were designed to seek testimony that could explore the concept of software as a site of discourse between producer and consumer. If discussion between those same parties about the software contained notable surprises or trends, those contrasts might be illustrative of the discourse that was expected to take place via software. The analysis of all of these sections focused on the question of whether the relationships between producer, product and consumer described by my participants were consistent with user-centered design practices, or if some other set of practices seemed to be at work. Baudrillard’s contention that mass communications are defined by their relation to a “lowest common culture” could also be challenged if the design of software for the mass market makes significant allowances for individual users.
The prompts on education and training invited testimony on three main topics: a continued exploration of biographical narratives in relation to software development processes, the place of the producer-consumer relationship in the respondents’ training, and lastly, ways in which the systems of knowledge and metaphor learned by software developers informed their understanding of the rest of the world. Respondents were encouraged to apply the ideas of education and training as broadly as they saw fit, and not to limit their discussion to formal education. My analysis of the biographical data sought commonalities of experience, and any possible categorizations they might suggest. In analyzing respondents’ testimony about the ways they had been trained to conceive of the end-user in their work practices, my concern was specifically on the process by which participants learned to make interface-design decisions. The questions and analysis on modes of thought, which included both overt and covert prompts, were designed to bridge direct testimony with Hayles’ “virtuality”; it follows from her premise—that our immersion in the metaphors of data gives rise to new ways of relating to the real and the potentially real—that developers’ deliberate interactions with the more highly developed metaphors of code, object and instance should reflect some evidence of that philosophical shift. (Importantly, there is no reason to assume that these orientations are the special domain of software developers. Rather, my hope is that programmers are specially equipped to identify and describe these systems of knowledge.)
In the final section of the interview, each respondent was given a set of questions that overtly related social and technological subjects. For instance, one prompt asked how the respondent reacted to the idea that computing “distracts people from the real world”, and another asked how the respondent thought industrial processes, in general, were affected by the transition into computerization. The main intent of these prompts was to invite respondents to comment on the role of computing in the culture. Turkle’s observations about the relative valuation of computing among other sources of knowledge also connected to these prompts, via the assumption that programmers might be expected to value computerized methods of production, knowledge or interaction over non-computerized methodologies.
Respondents were also asked to guess how computing in general, and software development in particular, might change over the next twenty to thirty years. I listened for clues as to whether my respondents saw themselves in the role of consumers or of producers in their responses to these questions. Since the questions concerned products that programmers themselves would both create and consume, certain responses might support the explanation that programmers situate themselves opposite the “user” identity around computing. Conversely, if these software developers spoke of a “them” when describing the progress of computing technology, this would imply that the role division between user and programmer is subordinate to other social distinctions. In two questions, about the advent of artificial intelligence and of genetic engineering, I attempted to take up Hayles’ theory more directly, by challenging respondents to discuss the mind and the body, respectively, in terms of the data that define them.
After all of the interviews were completed, I analyzed the records of each one qualitatively, following the analytic principles described in this section.
Results
The data I collected in the interviews refuted the hypothesis that programmers’ decisions are guided by considerations of the software product as a communication with its end users. However, they did describe several ways the programmer identity interacted with other social boundaries. I also found my respondents very willing to discuss the habits of thought that arise from their experience with computers, and the place of software in the broader culture.
The responses about early experiences with programming and computing told diverse stories. Two of the participants who had begun their careers earliest, chronologically, described working with sample code from print magazines in their first forays into programming. Two said their parents started teaching them to program. (Multiple participants had parents who were also programmers in some capacity, which was surprising, given the size of the sample.) Several of the younger members of the sample called video games important parts of their early interest in computers, and indeed, a majority of the participants said they had created simple computer games among their first projects. (Only two of these respondents said that creating video games remained a part of their programming into the present.)
A majority of the respondents said the first programming languages they used were variants of BASIC (though their success in learning those first languages did vary). The choice of languages and other development tools, in those first attempts, seemed more opportunistic than deliberate: in some cases, there was only one computer and one “language” available; in some, learning materials existed for only one language; and in others, the selection was strongly influenced by peers or family. (To Sharp et al., this might be seen to prefigure a lifelong pattern.) While some had help from family members or teachers in these early experiences, only one had access to development tools that were specifically intended for new learners. Two participants said that in those first attempts, they didn’t “get it”, and revisited programming again, years later. Four participants, a minority, said their first programming experiences took place at school. This correlated with the year in which those experiences took place, with learning in the home becoming more likely in more recent decades.
Early projects tended to be for personal use. One respondent talked about building a program to calculate how many Lego bricks he would need to build something. Several had simple video game projects for their own use. And the idea of simply “tinkering”, as multiple respondents described it, was a strong component of those early projects, too.
Respondents rarely spoke of any desire to share these early projects with other people, except as necessary to continue working on them. (One respondent started programming in a high school course, on punch cards, and had to hand his boxes of cards to the instructor each night to have his programs run on the computer.) So, rather than producing software for the use of an audience, the programmers in my sample all began learning their trade by creating software that either served some purpose for their own use, or existed simply for the sake of the learning opportunities it presented. Rather than interacting with a distant user, these programmers instead described their interactions with the computer itself, in terms of “tinkering”, “taking apart the programs we had”, or simply “screwing around” with computers. In theoretical terms, these activities seem better described by Baudrillard’s or Goffman’s notions of ritual consumption of media and technology than by any set of best practices from human-computer interaction.
One trend in these experiences across time echoed an observation in Ceruzzi’s “Inventing Personal Computing”: moving from the 1980s forward, programming becomes a less prominent feature of the overall use of computers. There is simply a larger set of things to do with computers, so the demographics of computer users shift from a small group that was mostly programmers to a much, much larger group with programmers as a minority subgroup. This pertains to the question of how the “programmer” identity is defined, historically, but it also partly explains another pattern, in the way “computer literacy” became separate from the activity of programming: the later a respondent’s first programming experiences took place, the more likely they were to have performed other tasks with computers by that time. One said that when he first used a computer, in a grade-school class, it “seemed natural” because he had so often watched people use computers on television and in movies.
In discussing their professional careers and formal education, my participants had little more to say about the place of the end-user in their processes. Most of those respondents who had studied software development in a formal setting said they had little or no training in designing interfaces, or in assessing the needs and capabilities of users. When asked to describe the end-user for a current project, respondents were most likely to respond with an organizational role: “students”, “clients”, or a department within the respondent’s company. Comments that directly implicated users were scarce, but one comment that “users don’t know what they want” was representative.
I saw three general concepts that stood in for user-interface design. First, several respondents talked about the practice of designing software to meet the expectations of customers or clients. Individual, ad-hoc methods for meeting specifications from a client were an informal part of several respondents’ educations and part of many respondents’ professional practices. However, these communications were almost categorically discussed in terms of the expected features of a product rather than of how those capabilities would be exposed to the user. Thus, this practice fails to conform to the patterns described by Hoffman or Woolgar, in which software designs included deliberate decisions about the expected capabilities of the user. This suggests that neither the developers nor the customers are approaching software design from a user-interface perspective. Second, as in the discussion of their earliest projects, many programmers talked about designing products for themselves. Statements along the lines of “I just think about what I would want” were fairly typical. (The hypothetical leap to “…what I would want if I were the client” was a much rarer statement.) And third, the abstraction of an “intuitive interface” often entered these discussions. Multiple respondents, in talking about how they assessed the success of a project, said that they looked for an interface that was “intuitive” or “obvious”. (At the risk of reading too much into the language, I notice that in these descriptions, it is the interface that possesses the quality of intuitiveness, and not the user’s experience of it.) This concept, at least, broadens the frame of reference to suggest a typical user possessing a certain “common panoply of objects”, in Baudrillard’s phrase, that might make an interface “obvious”. This suggests that the practice of interface design itself has an “intuitive” character, more than a deliberate one.
Where my respondents did talk about users and interface design, there were few common threads. Two participants said they worked for companies that employed dedicated interface-design staff. One programmer talked about seeking feedback from friends who were “the kinds of people I’m writing for”. Most of my respondents had received feedback from testers or end-users, at some point. Their impressions of that feedback ranged from feeling that it “made sense” to it being “not in line with my vision”, but their handling of that feedback did not suggest any common pattern of practice. “As the programmer,” one interviewee joked, “I see all of what I do as perfect, but it may not be perfect for other people.”
The evidence suggests a parallel to the findings of Sharp, Robinson and Woodman in “Software Engineering: Community and Culture”, that programmers’ choices of programming languages are based more on anecdotal and social causes than on empirical assessments. Likewise, the absence of any rigid practice of UI design or user assessment among the programmers in my sample suggests that, instead, these design decisions are being made with reference to a common vocabulary of expected or conventional interface metaphors. This would seem to support Fuller’s conclusion that our conventions of human-computer interaction, in their emphasis on “narrowly applied psychology”, have “split the user from any context.” (14)
This lack of a clearly-conceived “user” identity in these programmers’ design practices, however, did not prevent a “programmer” role from emerging from the data: my respondents had plenty to say about the social boundary drawn by their status as holders of specialized knowledge and abilities, and described some of their interactions across that boundary. But the data suggest that the relationship between producer and consumer is not the appropriate archetype by which to understand that social division.
That identity group, not surprisingly, is defined partly by knowledge. One participant, talking about situations where discussions on programming topics had come up in “mixed company,” said “it’s clear within 30 seconds that you’re alienating people” by turning discussion to specialist knowledge. Attitude also came up as a marker: Programmers, as a group, were readily described as introverted, analytical, process-oriented, even “rude”. “Computer science has been dominated”, one respondent said, “by people who are not so sympathetic to human shortcomings.” One said that his own approaches to problem-solving tended to be more intuitive than analytical, which made him unusual among other programmers. A strong ethic of independent, ongoing learning was also apparent in the respondents’ stories, as was a tendency to favor intrinsic rewards: several respondents said they judged the success of a project by their own sense of how well they had met the challenge. Testimony about collaborations with fellow programmers also tended to bear out these general commonalities of approach. While it took some effort to turn specifications or feedback from users into a usable form, respondents described no such translation when collaborating with other programmers. This is no conclusion about the character of the group, but it reinforces the idea that the group identity has a distinct meaning in programmers’ concepts of themselves.
The group is also, notably, not defined by a position entirely outside the general population of computer users. With a few exceptions, my respondents seemed to see themselves as particularly avid users of computers and software, and were very cognizant of user-interface design as it mediates their experiences. A breadth of video games, operating systems and consumer software served as points of reference in discussion about interfaces. One respondent, lamenting the tendency for software to be designed around revenue opportunities rather than an optimal user experience, called the state of interface design “a quality of life issue”. Programmers, in other words, are emphatically a subset of the software-consuming population, not a “priesthood” arranged above it.
When asked to speculate on how a few computing problems on the horizon might be solved, participants were more likely than not to anticipate that a nonspecific “they” would somehow find a way. Even to a sample of trained programmers, then, there is still a “them”, and a perceived inevitability to computing’s trajectory towards ever-greater heights. If these beliefs are common to the non-specialist population as well, then faith in a linear-progressive trend for computing, as described by Turkle, Hayles and Kurzweil, among others, is not attributable to a simple lack of technical or institutional knowledge.
My interviewees also voiced a variety of positions on the question of whether our everyday use of computing distracts us from reality. There was a fairly even split between respondents who lamented the erosion of the social norms that existed before smartphones and the world wide web, and those who described that position as simple nostalgia in an increasingly wired real world. Holding specialist knowledge as software developers, in other words, did not seem to predict any particular enthusiasm for the ubiquitous use of software, nor the devaluation of non-computerized modes of knowledge, as described by Turkle. There was some consensus, on the other hand, over the meaning of computerization in industrial processes: the result of computerization was described with the words “efficiency”, “simplicity”, and “less work”. Manual processes, in other words, were considered less parsimonious than computerized processes that accomplish the same outcomes. It was also acknowledged that these transitions have a “disruptive” human cost in terms of lost jobs, but as Turkle’s Simulation and its Discontents might predict, few respondents made reference to disappearing knowledge. Where respondents did make reference to skills that disappear in computerization/automation, most reported that those skills were, more or less, adequately represented by their automated forms. In total, these data acknowledge that computing carries its own set of social values, but also accept the “realness” and adequacy of the computerized process or interaction. Indeed, several respondents who did talk about the new distractions and de-personalizations of social media also softened those criticisms by acknowledging that those same technologies enable social connections that would otherwise be impossible.
In discussing the impact of computerization on less-affluent populations, many respondents pointed to ways new communication technologies can create opportunity and access where none existed before.
Confidence in the power of the algorithm to encapsulate real processes, as both Hayles and Turkle articulated, was strongly supported by the testimony of my interviewees. “Algorithmic thinking,” almost categorically, came up in discussion about the ways respondents’ thinking was influenced by their experience and training as programmers. Several said that studying algorithms in college courses changed their senses of problem-solving and of understanding the world. Here, the algorithm becomes an essential link between the perceptible and the conceptual. One interviewee was working on a program that simulates a specific biological process that takes place in the optic nerve, for the benefit of biology students, and another has an ongoing project to visualize algorithms themselves, for use as a learning tool for programmers. In both cases, there is a strong echo of Turkle’s observations: a faith that a computerized simulation, with all of the limitations of interaction that come with it, is appropriate, even optimal, to support a deeper understanding of the thing simulated.
Algorithms and computers, as systems of metaphor, were described in broad applications. An understanding of the world in terms of “many small functional things fitting together” was one interviewee’s take-away from his education as a programmer. In the words of another, “computational thinking leads us to see the world as algorithms…” and as both “describable and modelable”. Many of the respondents who had formally studied computer science said that courses on algorithms had deeply changed their ways of seeing the world. One participant, describing his understanding of the world as “mostly computational,” said that his concept of the way the human brain works, in particular, was “usually in terms of a computer”. Several, when asked to speculate on whether the problem of human-like artificial intelligence was likely to be solved, expressed confidence that this would be accomplished as soon as “they” gained an adequate understanding of the brain itself (echoing Kurzweil.) The one participant who said these modes of thought did not persist beyond the end of his workday also said that part of his brain “shut down” when he was not at work – itself a metaphor with an obvious computational interpretation.
In “The Chess Master and the Computer”, Garry Kasparov describes a transition in the recent history of the game of chess: at some point in the 1990s, we moved from a period in which top chess software imitated human players, to a moment where the reverse was true, and the way the game was played and taught was following, rather than leading, the design of this software. At top levels of play, the simulation had not only eclipsed the game itself, but changed it. The programmers in my study, for all their enthusiasm for algorithmic thinking, also testified to the power of algorithmic understandings to upend the very things that are meant to be encapsulated in those understandings. One related his concern that we are building a “giant pyramid” of knowledge by which to navigate our increasingly computerized world, and likely failing to recognize when we build up from weak foundations. The power of interface and metaphor to conscript our activity was even applied to software development itself: speculating on how software development might be different 20 years from now, multiple participants were wary of development tools’ increasing ability to anticipate and guide the programmer’s intent. “It will be easier to write software tailored to your needs,” one predicted, “as long as your needs don’t deviate too much from the norm.” Another, in describing the way televised basketball games have been changed by the ubiquity of slow-motion cameras and instant replays, likened the emerging culture of the sport to a video game, in which the systems of interaction, observation, and control are so integrated with each other that, ultimately, “the game is the referee”.
Conclusions and further research
These data offer fairly concrete answers to my central questions. First, at least from the perspective of the developers in my sample, the activity of producing software is not ordered around the interaction, across space and time, with the eventual consumer of that software. If there is an identity imposed upon us by the interfaces we use every day, then that identity more closely represents the interface designer’s self-image than an ‘othering’ of the user. While there is a coherent identity group bounded by the possession of specialist knowledge in this area, that group is not defined in terms of a consumer-producer relationship. Instead, whether we inhabit the roles of creator or non-creator, we seem to organize our experiences around the device itself, rather than with the software’s author or consumer. To McLuhan’s characterization that radio “comes to us ostensibly with person-to-person directness that is private and intimate” (302), software might offer the inverse: an experience of ostensible privacy within a person-to-person transaction. This conclusion should inform any analysis (such as Woolgar’s) that seeks to treat software or the computer as a text. As suggested by Fuller, the reader and writer of this text are mutually invisible to one another. Second, we have some lucid descriptions of the modes of understanding that attend in-depth knowledge of the metaphors of computing. If we take software as a mass medium, then the lineage from Baudrillard to Hayles is elaborative, not only on the concept of the “virtual”, but on the nature of media as well: The metaphysics of algorithmic thinking described by Hayles seems to be one fulfillment of Baudrillard’s prediction, in The Consumer Society, that each mass medium creates some special consensus among its audience. Learning to create software, for my interviewees, meant learning to see the world as an algorithm.
Given the negative conclusion on the hypothesis that user-oriented design offers a link between identity and interface design, there are two clear questions to address in further research: first, where do those interface designs come from, and second, by what pathways do assumptions about users, such as the gender stereotypes described by Hoffman, take root? The clearest suggestion of my research is that these activities belong to a smaller specialist population within the software industry, and that programmers like those in my study largely take their cues from other software. But this possibility suggests many additional areas of social research. I believe further interviews would be an excellent way to explore these questions.
Insofar as this research has produced coherent conclusions on a little-explored theoretical question, it would be valuable to extend the project with a broader sample and a refined set of prompts. Collecting more quantitative demographic data might also help to identify patterns that were suggested in these responses, such as differences over time in how programmers begin learning their trade. I would be particularly interested to add non-programmers to the pool of interviewees, in order to draw conclusions about how programmers’ positions on these questions fit into the wider population’s. It might be illuminating, as well, to “turn the question around” and explore users’ concepts about the people who produce the software they use.
My interest in these questions flows from a simple curiosity about the social meanings of computing technology: what structures of power are weakened or reinforced by it, and what knowledge does it privilege or obscure? How, in short, should we expect this technology to reconfigure the social world? The answer I take away from this research is that the people who create the software we use every day, ultimately, function more as pathfinders than as gatekeepers. Our relationships to the social, technical and even cognitive boundaries of computing seem to give us more points of commonality than of division. Managing the imposition of new identities, navigating the new structuring and ordering of our capabilities in relation to new technologies, seems to be our shared burden, specialists and non-specialists alike.
References
Aronowitz, S., Martinsons, B. & Menser, M. (Eds.). (1996). Technoscience and cyberculture. New York: Routledge.
Baudrillard, Jean. The Consumer Society: Myths and Structures. London; Thousand Oaks, CA: Sage, 1998.
Ceruzzi, Paul. “Inventing Personal Computing.” The Social Shaping of Technology. Eds. MacKenzie, D., & Wajcman, J. Buckingham, England: Open University Press, 1999. 64-86. Print.
Fuller, M. (2003). Behind the blip: Essays on the culture of software. Brooklyn, NY, USA: Autonomedia.
Hayles, N. K. “The Condition of Virtuality.” The Digital Dialectic: New Essays on New Media (Leonardo Series). Ed. Lunenfeld, Peter. Cambridge, Mass.: MIT Press, 1999. 68-94. E-book.
Helander, M. G., Landauer, T. K., & Prabhu, P. V. (1988). Handbook of human-computer interaction. Amsterdam: Elsevier.
Hoffman, Jeanette. “Writers, texts and writing acts: gendered user images in word processing software.” The Social Shaping of Technology. Eds. MacKenzie, D., & Wajcman, J. Buckingham, England: Open University Press, 1999. 64-86. Print.
Howard, T. W. (1997). A rhetoric of electronic communities. Greenwich, Conn: Ablex Pub. Corp.
Kasparov, G. (2010). The chess master and the computer. New York Review of Books, Feb. 2010. http://www.nybooks.com/articles/archives/2010/feb/11/the-chess-master-and-the-computer.
Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York: Viking.
McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964.
Pinch, T. (2008). The invisible technologies of Goffman’s sociology: from the merry-go-round to the internet. Technology and Culture, 51, 409-424.
Sharp, H., Robinson, H., & Woodman, M. (January 01, 2000). Software Engineering: Community and Culture. IEEE Software, 17, 40-47.
Woolgar, Steve. “Configuring the User: The Case of Usability Trials.” Sociological Review S1 (1990): 58-99.