Are Our Smart Devices Turning Us into Dumb Humans?

Are all of our “smart” devices training us to be “dumb” humans, too-often indistinguishable from mere machines? As click-through contracts and “like” buttons increasingly channel our social and personal relationships into algorithm-guided paths, are we losing something crucial about ourselves and our relationships? Is our very humanity at stake? In their new book, Re-Engineering Humanity, law scholar Brett Frischmann and philosopher Evan Selinger sound the alarm. I share their concern, so I am glad to see them taking on the problem in a rigorous and thoughtful way.

Full disclosure: Brett and I are friends, and we have discussed these ideas periodically since he first started research for the book. Brett knows that he can count on me to give him a hard time from a feminist perspective, for the good of the work. So here goes. First, I’ll explain the premise of the book, upon which we fundamentally agree. Then, I’ll describe my chagrin at some of the book’s examples, and show how my response illuminates the potential pitfalls of Brett and Evan’s approach. Finally I will suggest some tweaks to Brett and Evan’s proposals that would create a more egalitarian way out of our growing cultural-technological dilemma.

“Busy Life.” (白士 李/Flickr)

Brett and Evan are concerned that “smart” devices are taking over tasks and decisions that are currently a regular part of our daily lives, promising that we will be more efficient and productive. But when we don’t do those tasks ourselves, we lose something. For example, when our GPS navigates for us, we lose the opportunity to develop navigational skills, and we lose out on opportunities to develop judgment and reflect on our values and preferences: would it feel better to take a shorter but trafficky route, or the longer but scenic route? We also lose out on opportunities to develop our social skills and interpersonal empathy. Would our passenger more likely enjoy a ride by the newly-renovated town hall, or a glance at some park greenery on our way to our destination? Most of us would agree that developing our selves and our relationships is a fundamental part of our humanity, and that our technology is not doing us a favor if it is developmentally stunting us.

In order to identify which technical and social innovations should spark our concern, we need to be able to specify what’s bothering us, and why. Brett and Evan propose an ingenious way of testing our intuitions: what they call a “Reverse Turing Test.” The original Turing Test, designed by mathematician Alan Turing in 1950, was intended as a measure of whether any particular computer program should be designated as “intelligent.” Turing proposed that if a human audience could not tell the difference between a computer interlocutor and a human one after some specified amount of interaction, we could plausibly agree that the computer program deserved to be called “intelligent.”

Brett and Evan ask us to test out the other side of the line that divides computers and humans: when we humans blindly follow our GPS, reflexively “like” our friends’ photos, or let Pandora tell us what music we love best, are we allowing our devices to channel us into behaving as if we were mere machines, indistinguishable from computer programs? If so, we are failing the Reverse Turing Test, and we ought to ask whether we are in danger of ceding our humanity.

Preview of a Health Axiom, “The Doctor is in your Pocket.” (Juhan Sonin/Flickr)

So far, so good. I agree with Brett and Evan’s broad concerns, and the Reverse Turing Test is a clever way to home in on the problem. The devil, however, is in the details, and in Re-Engineering Humanity, it’s in some of the examples. Examining one of the examples that stuck in my craw illuminates a potential problem with the Reverse Turing Test, and points to how we might want to structure our application of the test to get results that are not a de facto conservative attempt to preserve the status quo, but in fact support the flourishing of all individuals, regardless of past and current status.

In their discussion of the potential threat posed by an “internet of things” — smart devices surrounding us, communicating with each other and the wider world, and shaping our behavior — Brett and Evan pose the possibility that today’s frivolous smart toaster could become tomorrow’s widely-desired smart kitchen. We would not need to cook anymore; we could just tell our kitchen to “cook salmon for dinner,” or even just tell it to make us a dinner that we’ll like, based on data it has gathered about how much we’ve enjoyed previous meals. Brett and Evan warn that in this case, “The big existential threat to consider… is whether engineering an appetite for pervasive smart kitchens will disincentivize people from cultivating culinary skills, including the personalized touches many of us associate with the favorite cooks in our lives.” (p. 130)

Wow, that sounds terrible. Now I feel really guilty — just think, for centuries women hogged all those daily hours of self-cultivation via paring knife and frying pan, depriving the men in their lives of the incentive to cultivate their skills as humans and express their love through their household labor. We can’t let the same existential damage befall women! Begone, smart kitchen!

Snark aside, this example points to the importance of applying the Reverse Turing Test in the right way. If we take each innovation, one at a time in isolation, and apply the Reverse Turing Test, the result will be at best conservative. Applied this way, the test either tells us to allow the change, or to resist the change. It doesn’t tell us how to redirect or facilitate changes we would prefer, taking into consideration a range of real lives with a variety of real constraints.

Paperback cover for reproduction of Cheaper by the Dozen by Frank B. Gilbreth (Amazon)

When I was a kid, I read and re-read the lighthearted memoir Cheaper by the Dozen until my cheap paperback nearly disintegrated. The story of Progressive-era motion-study experts Frank and Lillian Gilbreth, written by two of their twelve children, it was a personal and amusing look at the early twentieth-century Efficiency Movement, which promised to improve industry and society alike by excising all wasteful effort (and in the process routinized and regimented everything from factory lines to school schedules). My yearbook senior write-up said, “likes: grins; dislikes: inefficiency.” Why was I so enthralled by the concept of efficiency, an internet-of-things design priority that Brett and Evan plausibly warn could be humanity’s downfall?

I wanted to be able to play the cello AND the piano AND sing AND dance AND perform in the school musicals AND run the yearbook, and there is only so much time in one day, or one lifetime. If I kept my math assignment on top of my stack of school books all day, I could get my homework done in 2-minute increments between classes and have 20 extra minutes for music.

Later, as I took up adult responsibilities, the performing arts I loved fell by the wayside, one by one. Would I have dropped out of the Berkeley Chamber Chorus when graduate school got intense if I had a smart kitchen (and a self-cleaning bathroom)? No. And while I think that cooking is a wonderful skill, and a joyful pastime for many people, I personally would have been happier spending the time in a choir. I am confident that Brett and Evan would agree with me that singing in a choir is a wonderful mode of self-cultivation and social connection.

Cheaper by the Dozen ends with a reflection on what, exactly, was the point of efficiency for Frank Gilbreth. “Someone once asked Dad: ‘But what do you want to save time for? What are you going to do with it?’”

“‘For work, if you love that best,’ said Dad. ‘For education, for beauty, for art, for pleasure.’ He looked over the top of his pince-nez. ‘For mumblety-peg, if that’s where your heart lies.’”

To me, the mumblety-peg (a traditional boys’ idle game) is the real issue. Efficiency, even to Frank Gilbreth, was not an end in itself, and it should not be for us either. Ideally, it was a tool to give us more time for the things we love, the things that hopefully enhance our humanity. But there is certainly no guarantee that efficiency measures will be used for good. In reality, in Gilbreth’s era efficiency measures were primarily used to extract more productivity out of less-autonomous workers in less time, usually without extra compensation. In our own time, as Brett and Evan observe, the “smart” device and “internet of things” developers who offer us efficiency then pull a bait-and-switch: instead of sending us on our way to use our newly-free time on art, beauty, and education, they channel us into putting our time into mumblety-Facebook and its ilk, or what Brett and Evan aptly call “cheap bliss.”

The Hestan Cue cookware system, on display at the Smart Kitchen Summit in Seattle. It uses Bluetooth technology to communicate with an app and cooking equipment to prepare food. (© Ian C. Bates for The New York Times)

Brett and Evan’s proposed solutions involve various means of slowing down change: introducing friction into the slippery slope leading us to unthinkingly accept the suggestions algorithms prompt us to accept, and deliberately maintaining discontinuities that disrupt some of the more extreme possibilities of an integrated internet of things. These are proposals worth considering. But they tend toward preserving the current social order, which works much better for some people than for others.

I propose taking the Reverse Turing Test and applying it to all of our daily activities whether technology-based or otherwise, not just innovations we find suspect. And we should apply it to all the alternatives that are realistically open to us when we think about how to shape our lives. Perhaps some routinized aspects of our lives could be made more meaningful. Perhaps more men would realize that they ought to give cooking for their families a try.

On the other hand, perhaps some routinizing technology will give us more time for our human endeavors. Perhaps a single mother with two kids and two jobs will look forward to a smart kitchen and appreciate her microwave in the meantime, finding much more humanity in sitting down for a microwave dinner with her kids than in putting them in front of the TV while she frantically throws together a homemade meal at the end of a long day. When we construct social and legal policy concerning smart devices and the internet of things, we need to recognize that innovations affect people differently depending on their current social location.

The Reverse Turing Test is meaningful in the full context of an individual’s complex life, with all of its specific opportunities, constraints, and preferences. All of us spend a certain portion of the day bowing to the demands of productivity and efficiency to preserve time for what really matters. The Reverse Turing Test is likely to give us the most useful results if we use it, not as a way to examine one-off specific innovations, but as a way to examine the trade-offs in our lives and make sure that we are prioritizing our humanity over mumblety-peg. We will need to make some communal decisions about which innovations to normalize, but in doing so, we will need to make sure we are taking into consideration the vast variety in our life circumstances, and the ways in which a specific computer program or smart device might be on balance dehumanizing for one person but humanity-enhancing for another.

Re-Engineering Humanity is a long and fascinating book, and I’d love to keep discussing it. My time for thinking and writing is done for today, though. I have to go make dinner.

Laura Ansley

Love the gendered analysis here. Another important lens would be disability — reminds me of when we see people lamenting the prewashed, precut produce in grocery stores and their wasteful packaging, but of course that is very helpful for people who are unable to cut it themselves. (See Lara’s earlier discussion of how occupational therapy helped her with this very task.)


I absolutely agree. I think Brett does too, and it’s worth pushing his analysis so that it truly accommodates that critique. For example, what if a smart device like Google Glass could keep track of your social contacts by face with name tags and other information and whisper cues in your ear when you saw the person? Whether or not that is dehumanizing depends on our current situation. If we’re using it so that we can avoid the effort of actually paying attention to people when we meet them, it’s dehumanizing. For someone with Alzheimer’s or face blindness or another condition that impairs their ability to identify or recall faces and personal info, the Google Glass feature has the potential to be humanity-enhancing. I think Brett has, so far, observed something of a bait-and-switch going on with examples like this. The developers and tech companies like to tout how a feature has the potential to benefit people with disabilities, but then in reality focus on their primary audience of healthy 20-somethings with disposable income.

Brett Frischmann

Thanks for the insightful review and comments. I agree with the ideas pushing for nuanced and contextual analysis of how different tech affects our humanity. The bait-and-switch is something I’m thinking a lot about now.

Jason Freidenfelds

YES — brilliant, thank you for this. The question isn’t so much whether or not to automate — it’s what we’d do with our time if we could choose what to automate. And each of us might choose different things.

A tricky layer on top of this is that many of us might aspire to “education… beauty… art,” but we end up sliding into “mumblety-peg” — and not because we really want to, but for the same reason we gravitate towards junk food, nicotine, etc. I’d like to see technology nudge us or enable us to choose long-term payoffs over short-term ones more often. It can be designed in.

Also often overlooked is that “smart” tech doesn’t just have to automate existing processes. If it’s done right, it should actually suggest new and exciting things we hadn’t considered before or weren’t able to do. It could physically enable us to do something previously impossible, or nudge us into practicing and strengthening towards something we couldn’t do before, or open our eyes to things we didn’t know about and might find exciting. Again, if designed right, technology can enable this.

Brett Frischmann

Interesting points. What we really want, how we develop our preferences, and how we slide — sometimes down a slope made slippery — are difficult questions to grapple with. I do agree that technology can be, but cannot be assumed to be, enabling in the fashion you suggest.

