“You speak. Siri helps. Say hello to the most amazing iPhone yet.”
So reads the ad on the back of the April 2012 edition of Wired magazine. We’re left to linger on that last word: yet. Never mind the agency we’re assigning to Siri, the overhyped, distinctly inhuman voice-recognition software that yearns to sell you more widgets; focus instead on that triumphant little three-letter word. We are assured this is not the first iPhone, nor will it be the last. With each model we gain more features, claim more battery and bandwidth, and obsess over the nuances of the shape of the device. And yet there is an absent dimension to this story. This American Life and Mike Daisey (2012) did well to bring attention to the variety of costs borne by the workers who assemble one of these gadgets, and Jonathan Zittrain (2008) established years ago Apple’s ongoing project to demolish the open web and computer interoperability in favor of compartmentalized, proprietary, high-profit-yielding apps. Both of these would seem to me to be steps backwards. So why then do we so easily accept this as a “most amazing” technological evolution in what we assume to be an inevitable story?
Progress. Or, more accurately, the false progress narrative that everyone seems to have had stuck in their head since the 1950s. Statements like “the evolution of technology” are exactly what keep it alive today. It’s a smaller piece of a greater whole, I’m afraid: a component (or proponent?) of technological determinism, the dominant, impetuous voice given to the presentation of technology and its relationship with society, which always takes the form of continual, self-contained (and self-evident) progression. The age-old classic might be the generalized (read: wrong) form of Moore’s law, the notion that computing capability accelerates in a predictable (periodic and exponential) and inevitable fashion. Faceless, decontextualized “technology” is seen as a force of change because the darned stuff is somehow always causing itself to increase in speed and effectiveness, relative to how it was, of course. The net result of defining technology as progress is that society must adapt to it instead of shaping it. Some say that technology is merely the application of science. We informatics scholars claim this is faulty because scientists create as much as they ‘discover,’ and, historically speaking, science and technology have not had clear-cut or consistent connections. The proposition that technology simply builds upon preexisting technology is likewise mistaken (not to mention that it breaks Kuhn’s heart). Neither is technological progress an eternal project of addressing reverse salients, the unforeseen setbacks or problems resulting from the unfolding of technologies over time, because not all technology is designed to correct problems caused by previous technology. To be sure, human values are definitely embedded in technologies. Not only was the nuclear bomb a reflection of the values, capabilities and agendas of its time, but it really could be used in only so many ways.
Technology comes about as a result of human ideas and agency; the direction it goes and the effects it has are largely up to us. Now, I’m not saying technology is purely socially constructed either. In some ways it may have limited agency, like the way a dead person’s Facebook profile might be continually animated by algorithms and interactions. But at the end of the day we are the ones who make sense of what it all means.
A lot of work on the subject of power and our current “information” society examines people’s ability to participate in it meaningfully, be it as part of global conversations, local democracy, or broad movements of social change. Much of this work assumes that participation boils down to a question (a requirement) of information access, known commonly as the digital divide: the power differences between people or communities tied to varying levels of computer and internet opportunity.
Establishing the digital divide as our enemy necessarily sends us on a quest for digital solutions, but the lack of material access to information technology, along with the absence of the skills, community support and perceptions needed to make effective use of it, is certainly a symptom of deeper, more prolonged issues. In the information revolution, the have-nots are simply those who are digitally divided. Why do we forget to ask what caused them to be digitally divided in the first place? In some sense the digital divide is a moving target, as the make-up of information communication technology shifts as we look back over time. In this sense we’ve been in an information revolution (or crisis) for over thirty years. To suggest the information revolution is a regularized state of being is to render the term inadequate, but truthfully it just keeps getting used. First it was the onset of significant availability of computers in businesses and homes in the 1980s, then the widespread internet adoption that broke out in the 1990s, and in the recent decade it has donned the hats of mobility, broadband and Web 2.0. Up next might be the semantic web. It is worth taking a step back, disentangling oneself from the ever-changing constitution of ICTs, and interrogating the underlying assumptions and agendas of the digital divide and the credence given to the proliferation of ICTs that we find wrapped up in the idea of the information revolution.
One might follow the lead of Jan Pieterse (2005), who questions the motivation behind the digital divide in his critique of information communication technologies for development, or ICT4D. His argument rests on the frame of digital capitalism, a world in which networks of corporations drive and dominate cyberspace and subject the world to certain flavors of media as well as the brunt of larger forces, like consumerism. ICT4D implies the imposition of flawed (or loaded) developmental models, such as the aforementioned technological determinism or neoliberalism (market forces as development), that serve to mask the true intentions of insidious political and economic agendas: to make money off of poor people by selling more material goods and exploiting labor, to control markets with ideologies like intellectual property rights, and to force developing countries to choose between dependence on NGOs or corporate networks. Pieterse’s stance is accurate, if resoundingly pessimistic, and reminds us of the baggage we drag with us when we deploy ICTs to ‘bridge the divide’ between peoples as we “progress” through the alleged information revolution.
This is why I prefer to shift the conversation to literacy. In the vernacular, literacy is often taken to be equivalent to competency, proficiency or functionality, and is frequently affixed to other words to create compound meanings, such as information literacy, (new) media literacy, and stranger, more debatable pairings, such as emotional literacy. I take the term a step further than competency. As an educator I advocate for literacies that affect power: literacies composed of social practices that foster critical social awareness as well as measurable knowledge of, and creative command over, relevant communicative tools. Students who achieve some degree of mastery over these literacies are able to look at phrases like the evolution of technology or the information revolution and see them for what they are: political positions inscribed in terms that obscure the tangled masses of sociotechnical forces operating behind them. These same students can go on to actively create, share and remix information, media and ideas, to be a conscious and intentioned part of the drive behind the information revolution or technological evolution, as the case may be.
How to foster such literacies, however, is another subject entirely, and will have to wait until next time.
How will we pace ourselves into this future of (r)evolution?