A decade after an Atlantic magazine cover asked whether Google was making us “stoopid,” an MIT professor is proposing that our information technology is producing “Superminds.” Is it possible for our minds to become super and stupid at the same time? Are some forces dumbing down society while others advance collective knowledge?
In fact, the same force of technological progress may be accelerating society’s collective knowledge and at the same time allowing individuals to forget how to do things that used to be considered essential. That message comes out in rereading Nicholas Carr’s pessimistic 2008 Atlantic piece, and in the optimistic new book “Superminds: The Surprising Power of People and Computers Thinking Together.”
In the not-so-far future, people won’t know all kinds of things we now think essential to being “educated”: how to write in cursive, how to multiply and divide on paper, or how to spell most English words. Reading a paper map may seem as antiquated as celestial navigation. And the forgetting of skills is only going to continue as technology advances and incorporates artificial intelligence.
One take-home message in “Superminds” is that artificial intelligence is already here, and already changing the world. It’s beside the point whether a robot with humanlike intelligence is still 20 years away. The author — the management and information technology professor Thomas Malone — quipped at a book talk at MIT last month that people have been saying that humanlike robot intelligence is 20 years away for the last 60 years.
He divides the notion of intelligence into general and specialized. General intelligence gives humans the flexibility to acquire diverse skills and knowledge, and to adapt to complex social situations. Both people and machines have specialized intelligence, for executing clearly defined tasks. But even robots smart enough to beat human champions at chess, Go or Jeopardy can’t hold an interesting conversation or understand simple proverbs.
Malone then proposes that groups as well as individuals can have measurable intelligence. This leads to his notion of human superminds. They take the form of collaborations, corporations, rock bands and football teams.
Adding artificial minds to intelligent groups could accelerate the pace of progress just as writing, printing and electronic communication have done before. Already, technology is changing the way human superminds work, allowing projects such as the MIT-initiated Climate CoLab — a community focused on solutions to climate change. Another project, the citizen-science game Foldit, brought hundreds of people together to help work out the connection between the shapes of biological molecules and their functions. And Wikipedia has worked surprisingly well at organizing the knowledge of diverse volunteers.
In a phone conversation I had with Malone after his talk, he suggested that the prevailing debate about robots gets too focused on individual intelligence and competition between humans and our machines. But what if, he said, instead of replacing people, computers start placing people — helping analyze our strengths, desires and quirks, finding us jobs where we can thrive? Even if unemployment is down, workplace misery continues to grind away, and smart people are stuck in jobs that don’t take advantage of their interests and skills. It’s just one example of a hybrid supermind.
When learning computers contribute seamlessly to collaborations, will human minds become so atrophied that we no longer have much to contribute? Malone doesn’t think so. People are adaptable. Today’s young workers are learning how to do all kinds of jobs in software and information tech that didn’t even exist when they were in elementary school.
In the Atlantic article, Carr’s lament was that the world is changing so fast that he could witness his brain changing. He wrote that after a few years of flitting around on the web, he’d lost the powers of concentration necessary to enjoy a book. Soon after Carr’s piece came out, researchers published a study in Science showing that using computers to search for information did make people smarter at searching, and dumber at remembering facts.
It’s possible that reading books won’t be necessary, but we can always read for discretionary mental exercise, the way so many of us now choose to walk and run to exercise our bodies. People may cook from scratch or do math puzzles or navigate hiking trails for the same reason.
Still, there’s something worrisome about watching people forget how to do things – a fear reflected in the popularity of apocalyptic stories, from “Mad Max” to “The Walking Dead” to “The Handmaid’s Tale.” When the rules suddenly change, the only ones who get out alive are those who know or learn how to scrounge for food, shoot predators or find their way to Canada.
Even short of a nuclear or zombie apocalypse, there’s always the possibility of being ejected from a supermind at a vulnerable time, as happened to the ousted captain in the infamous 1789 mutiny on the Bounty. Set adrift with a few other men in an open launch in the vast emptiness of the South Pacific, he survived by navigating thousands of miles using the sun and the stars. One never knows what old-school skills might come in handy.
Faye Flam is a columnist with Bloomberg Opinion.