If you find any of these books interesting and wish to acquire them, please use an independent bookstore. My favorite: https://www.thethinkingspot.us
A preface to the reviews.
David Brooks wrote an opinion piece in the NYT about AI, and a tagline from that piece was, “In the Age of AI: major in being human.” Of course, no such major exists, but it should. I will be starting a series of posts on this subject in the near future. The three books reviewed below deal with how technology, especially computer-based technology, is diminishing our human-ness.
The three books reviewed here, all by Nicholas Carr, have a common theme: examining how technology affects human beings and human abilities, especially intelligence. All three books see these effects in an adverse light, as diminishing both human abilities and what it means to be human.
The Shallows begins with Marshall McLuhan’s argument that the medium by which content is conveyed is more important than the content itself. The medium at issue in this book is the Internet; more accurately, the Web.
The first 30 pages establish the fact of “neuroplasticity”: the ability of a brain to ‘re-wire’ and reconfigure itself such that it processes inputs, “thinks,” differently as a function of how the inputs are presented. The Internet, he argues, presents content (inputs) in short, rapidly apprehended increments, and the brain rewires itself accordingly. The ‘symptoms’ of this brain alteration include increased difficulty reading an entire book and a kind of euphoria, or altered state of consciousness, that comes from the consumption of hundreds of informational snippets.
The next chapters provide a bit of historical perspective on how technologies—from written sentences with punctuation to the Internet—serve as “tools of the mind.” Each advance increases both the volume of what we “have to think about” and alters “the way we think.”
Then the focus shifts to the Internet:
“One thing is very clear: if, knowing what we know today about the brain’s plasticity, you were to set out to invent a medium that would rewire our mental circuits as quickly and thoroughly as possible, you would probably end up designing something that looks and works a lot like the Internet. It’s not just that we tend to use the Net regularly, even obsessively. It’s that the Net delivers precisely the kind of sensory and cognitive stimuli—repetitive, intensive, interactive, addictive—that have been shown to result in strong and rapid alterations in brain circuits and functions. … the Net may well be the single most powerful mind-altering technology that has ever come into general use.”
The remainder of the book looks at specific examples of that mind-alteration. One of the most interesting, to me, was chapter 8, “The Church of Google.” In that chapter, Carr argues that Google is the embodiment of “Taylorism.” Taylor brought “scientific management” to the factory floor. Google brings it to the mind. Citing Neil Postman’s Technopoly, Carr writes:
“Taylorism is founded on six assumptions: that the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.” … [Only one tweak is required to accurately reflect Google’s philosophy:] Google does not believe that the affairs of citizens are best guided by experts. It believes that those affairs are best guided by algorithms, which is what Taylor would have believed had computers been around in his day.
The Glass Cage also looks at technology and how it affects, sometimes redefines, what it is to be human. The introduction begins with an anecdote: the FAA issued a SAFO (Safety Alert for Operators) encouraging pilots to fly manually as often as possible, warning that “too much reliance on autopilot could lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesirable state.”
The rest of the book looks at “how computers are changing what we do and who we are.”
Chapter two provides a framework for exploring the effects of automation, especially computer-based automation. Between the utopian view (automation will replace tedious work, leaving humans to pursue higher goals) and the dystopian one (massive unemployment and poverty arising from automation) lie the more subtle outcomes of automation.
If you off-load cognitive tasks, like basic math, to calculators, you—the human—never learn basic math and therefore have no foundation from which to think “higher” thoughts about math.
Two cognitive problems are discussed: automation complacency and automation bias. ‘Complacency’ is a false sense of security, a belief that the automation, usually a computer, will always work flawlessly. ‘Bias’ occurs when you believe what the computer outputs even when it is wrong or misleading. Back in the 1980s, a famous article called “A Spreadsheet Way of Knowing” demonstrated exactly this kind of bias and how it made massive computer-based fraud possible.
Hype-driven adoption of technology is another kind of computer bias. Companies and individuals consistently fall prey to claims like “this tool will increase productivity by an order of magnitude” or “AI will vastly improve your work emails and memos.”
All three of these problems are evident in the current AI mania.
Robert F. Kennedy Jr., U.S. Secretary of Health and Human Services, recently claimed that “AI doctors” could replace human doctors in the near future. Of course, only for economically disadvantaged or rural areas—never for Mr. Kennedy. Carr uses examples like this to point out the missed opportunity to use computers to augment human abilities. In the 1990s, the AI fad du jour was expert systems. American Express tried to develop an expert system to replace the humans who made credit decisions, a complex task that had to be accomplished in a matter of seconds. They failed. What did work for them was a system that prioritized the presentation of data in a way that human decision makers could best utilize that information.
Carr’s conclusion emphasizes the need for humans to “do the work” before off-loading to automation. Doing is how we learn. Doing is how we make sense of the world and our place in it.
Superbloom focuses on communication technology and the hopes expressed for advances in that technology versus what actually occurred. The subtitle of the book, “How Technologies of Connection Tear Us Apart,” captures the main thesis very well.
Charles Horton Cooley (1864-1929) coined the term ‘social media’ in the course of advancing a theory that social evolution was a function of communication technology. His ideas anticipated the work of scholars like Harold Innis, Marshall McLuhan, and Neil Postman—credited as the founders of modern communication theory.
Cooley argued that improved communication would enhance, and be a force for the betterment of, human society. Cooley was an idealist. Decades later, Mark Zuckerberg claimed Facebook had a higher purpose than making money: “to create a more perfect society by getting people to communicate more.” An idealistic goal.
These ideals were never realized. In fact, the near opposite occurred. Carr’s book examines why.
In the beginning, human communication was face-to-face: sounds (later words), tone and pitch, facial expressions and body language, and context—when, where, and with whom.
Spoken language allowed communication across space, but only at the cost of retelling stories (remember the game of “telephone”). Skilled storytellers could incorporate tone and expression, in part, but not the context.
Written language allowed communication across both time and space. Skilled authors could incorporate some elements of expression and context, but not all. Limited shared vocabulary also contributes to a loss of meaning: the Oxford English Dictionary contains over half a million words, yet the vocabulary required to read the average daily newspaper is fewer than 2,000 words. How much meaning is lost when the working vocabulary of readers is so limited?
Then came the telegraph. All is lost except the words, the punctuation, and the grammar. Telegraphic communication works only to the extent that both sender and receiver share enough background and culture that meaning can be ‘read into,’ or correctly interpreted from, the text.
With computers came a formal definition of communication, via Claude Shannon and his “Mathematical Theory of Communication.” Shannon’s theory focused only on the accurate transmission of a message: “Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”
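A one-line illustration of just how thoroughly Shannon set meaning aside (my gloss, not Carr’s): Shannon measures the information produced by a source purely by its statistical entropy,

H = -\sum_i p_i \log_2 p_i

where p_i is the probability of the i-th symbol. The formula contains symbol frequencies and nothing else; a profound sentence and statistically similar gibberish carry exactly the same “information.”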
These factors, according to Carr, form one thread leading to an age of miscommunication: the steady degradation of “meaning” that comes from an absence of essential context. Another thread arose from mass media and then computer-based media: both the sheer volume of communication and its increasing brevity.
Using email (an unexpected side effect of ARPANET), then instant messaging, then emojis as exemplars, Carr shows how communication became increasingly brief and informal. At the same time, such communication became increasingly insular—meaningful only to the “in group” that understands the meanings of acronyms and emoticons.
The book concludes with a brief sojourn into cyberspace, an alternate (virtual) reality crafted via computers and technologies like AI and CGI: a world as far as possible from the communication, fixed in space, time, and context, that originally defined human exchange.
All three books are wonderfully written, full of sometimes obscure history, everything revealed via carefully crafted story. All three are highly recommended.
As I said at the beginning, citing Brooks’s admonition to “major in being human”: Carr shows how technology has taken us far from any realization of such a major.