
The End of Cyberspace

Another book to add to my bookshelf of «monographs that blend cultural history with neuroscience» (along with Raymond Tallis’ The Hand: A Philosophical Inquiry into Human Being): Maryanne Wolf’s Proust and the Squid, which Caleb Crain reviews in the latest New Yorker.

Taking the long view, it’s not the neglect of reading that has to be explained but the fact that we read at all. “The act of reading is not natural,” Maryanne Wolf writes in Proust and the Squid, an account of the history and biology of reading. Humans started reading far too recently for any of our genes to code for it specifically. We can do it only because the brain’s plasticity enables the repurposing of circuitry that originally evolved for other tasks—distinguishing at a glance a garter snake from a haricot vert, say.

Elsewhere, as she puts it, “The brain’s design made reading possible, and reading’s design changed the brain in multiple, critical, still evolving ways.”

This isn’t a radical argument. As Stanislas Dehaene theorized in a 2003 New Scientist article,

learning to read, and other forms of cultural learning, are only possible if this built-in flexibility can be used to divert brain circuits from their previous uses. The brain is predisposed to develop only in certain ways. In effect, we are able to learn to read because the primate visual system evolved to do a different job that was sufficiently similar to allow it to be «recycled» into a reading machine.

There are two really interesting things about her argument, at least as it comes filtered through the reviews. First, Wolf argues that what’s impressive about reading isn’t that it’s hard, but that it can become easy, because the brain learns to specialize, devoting particular regions to recognizing letters.

Wolf recounts the early history of reading, speculating about developments in brain wiring as she goes. For example, from the eighth to the fifth millennia B.C.E., clay tokens were used in Mesopotamia for tallying livestock and other goods. Wolf suggests that, once the simple markings on the tokens were understood not merely as squiggles but as representations of, say, ten sheep, they would have put more of the brain to work. She draws on recent research with functional magnetic resonance imaging (fMRI), a technique that maps blood flow in the brain during a given task, to show that meaningful squiggles activate not only the occipital regions responsible for vision but also temporal and parietal regions associated with language and computation. If a particular squiggle was repeated on a number of tokens, a group of nerves might start to specialize in recognizing it, and other nerves to specialize in connecting to language centers that handled its meaning….

[R]ecent imaging studies… [show] how a modern child’s brain wires itself for literacy. The ground is laid in preschool, when parents read to a child, talk with her, and encourage awareness of sound elements like rhyme and alliteration, perhaps with “Mother Goose” poems. Scans show that when a child first starts to read she has to use more of her brain than adults do. Broad regions light up in both hemispheres. As a child’s neurons specialize in recognizing letters and become more efficient, the regions activated become smaller.

At some point, as a child progresses from decoding to fluent reading, the route of signals through her brain shifts. Instead of passing along a “dorsal route” through occipital, temporal, and parietal regions in both hemispheres, reading starts to move along a faster and more efficient “ventral route,” which is confined to the left hemisphere. With the gain in time and the freed-up brainpower, Wolf suggests, a fluent reader is able to integrate more of her own thoughts and feelings into her experience. “The secret at the heart of reading,” Wolf writes, is “the time it frees for the brain to have thoughts deeper than those that came before.” Imaging studies suggest that in many cases of dyslexia the right hemisphere never disengages, and reading remains effortful…. When reading goes well, Wolf suggests, it feels effortless, like drifting down a river rather than rowing up it. It makes you smarter because it leaves more of your brain alone.

Second, this sounds pretty similar to what Andy Clark describes in Natural-Born Cyborgs, and Tallis in The Hand: a process wherein neural plasticity and technology work to create a human-textual (or human-alphabetic) symbiosis. It’s no coincidence that Dehaene titled his 2003 article «Natural-born readers.»

Technorati Tags: cyborg, neuroscience, reading


The End of Cyberspace

Just found an online reprint of Ellen Ullman’s wonderful 2003 essay «Memory and Megabytes,» originally published in American Scholar. It’s one of my favorite short pieces ever, and started me thinking about the differences between human and machine memory.

Though her recent New York Times op-ed on adoption and knowing your family history is great, too:

I am not against … the trend… toward openness, a growing “right” to know. I simply want to give not-knowing its due.

I like mysteries. I like the sense of uniqueness that comes from having unknown origins (however false that sense may be).

[To the tune of Dead Man’s Bones, «My Body’s a Zombie for You,» from the album Anti Sampler Fall 2009 (I give it 1 star).]


The End of Cyberspace

I’m not a Mac fanatic, but every computer I’ve bought with my own money has been a Mac. I got an SE in 1988, and have gone through various Quadras, iMacs, and laptops since then. Since the beginning, much of the appeal of the Mac has been the graphical interface. First, it was the only personal computer with a GUI; then, after the appearance of Windows, it offered a better version of the GUI: cleaner, faster, more intuitive, or whatever.

I still gravitate to Macs, but I’m beginning to see the outlines of a future in which graphics are really good, but the graphical user interface is obsolete.

Two things are driving the fall of the GUI. One is mobile devices, whose screens are too small to handle the kinds of GUIs we’ve had on personal computers. The other is the growth of search and tagging tools as an alternative to visual (and often hierarchical) systems for organizing and accessing documents on personal computers. I’ll talk about the first here.

Consider the iPod. For all of the attention the neat color screens have gotten— and they are pretty neat— what strikes me about the iPod, and the iPod Touch, is how much of the navigation is text- and list-based. Sure, it’ll play movies and TV shows, and show you album cover art, and the little screens are surprisingly easy to watch (though I have a much more satisfying time watching things I’m familiar with, probably because my brain is filling in details that the screen doesn’t actually show). But you don’t use icons to navigate: you navigate through text menus.

I’ve spent a little time playing with Cover Flow, and my sense is that it really doesn’t make the iPod interface less logocentric: it provides an additional piece of information to, for example, help you tell the difference between two different versions of «Midnight Train to Georgia,» but it doesn’t put you back in a world of folders or desktops.

Likewise, every cell phone has a nice color screen, and some icons that when clicked on will take you to different functions; but again, most of the time, I’m selecting from menus and scrolling through lists. The screen may be pretty, and the color is nice on the eyes, but my cell phone company hasn’t tried to create a little information landscape on the phone’s screen. Instead, they’ve gone with menus.
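Under the hood, that kind of interface is nothing more than a nested list of labeled choices that you walk through one level at a time. Here’s a minimal sketch in Python of that list-based navigation; the menu entries are invented for illustration, not taken from any particular device:

    # A toy, text-only menu tree of the sort the iPod and most phones use:
    # no icons, no desktop metaphor, just nested lists you scroll and select.
    # All entries below are made up for the example.
    MENU = {
        "Music": {
            "Playlists": ["Running", "Commute"],
            "Artists": ["Gladys Knight & the Pips", "John Coltrane"],
        },
        "Videos": ["Movies", "TV Shows"],
        "Settings": ["Backlight", "Language"],
    }

    def navigate(menu):
        """Walk the tree by repeatedly picking from a numbered list."""
        while isinstance(menu, dict):
            labels = list(menu)
            for i, label in enumerate(labels, start=1):
                print(f"{i}. {label}")
            choice = int(input("> ")) - 1
            menu = menu[labels[choice]]
        for item in menu:  # a leaf is just a flat list of items
            print("-", item)

    if __name__ == "__main__":
        navigate(MENU)

The whole interface is words in a list; the screen only ever has to show the current level.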

That’s probably a smart choice, because menus are easier to work through, particularly when you’re giving only partial attention to the interface. When I’m sitting at my desk, I can focus on icons and folders, but when I’m walking down the street or driving (not that I ever do that), I want something much simpler: a few plain words, or better yet, one-touch dialing.

Creating devices that let you interact with information while interacting with the world reduces the appeal of interfaces that are themselves little worlds. And I suspect that shifting from situations where we devote the bulk of our attention to graphical interfaces, to ones where we devote fragments of our attention to text-based interfaces, reduces the relevance of the idea that we’re interacting with an alternate dimension of information.

Technorati Tags: design, end of cyberspace, interface, mobility


The End of Cyberspace


19 posts from October 2009

  • In the late fifteenth century, clocks acquired minute hands. A century later, second hands appeared. But it wasn’t until the 1850s that instruments could recognize a tenth of a second, and, once they did, the impact on modern science and society was profound. Revealing the history behind this infinitesimal interval, A Tenth of a Second sheds new light on modernity and illuminates the work of important thinkers of the last two centuries.
  • «Mindstorm brings everyday surfaces and spaces to life with its range of innovative interactive solutions. From restaurant tables and shop displays to exhibition stands and meeting room walls, our technology enables companies to create compelling collaborative experiences.»
  • «Visionpool is a powerful process tool, designed to create maximum involvement in change processes. With Visionpool you can involve everyone in your company in creating results, fast.»
  • «It is the academic’s job in a free society to serve the public culture by asking questions the public doesn’t want to ask, investigating subjects it cannot or will not investigate, and accommodating voices it fails or refuses to accommodate. Academics need to look to the world to see what kind of teaching and research needs to be done, and how they might better train and organize themselves to do it. But they need to ignore the world’s demand that they reproduce its self-image.»
  • «Imagine the cityscape of the future. Forget skyscrapers studded with undimmed lights. Instead, think of crystal whites and luminous blues forging the city’s silhouette. Picture a city that sucks in carbon and uses bacteria harvested from dead fish to light the darkness. The city as a living character will no longer be a literary conceit, but a reality. From metaphor to concrete in one generation.»
  • «Saffo has spent the past two decades staring into his crystal ball and seeing just these sorts of contrasts. Once director of the Institute for the Future think tank, he now teaches at Stanford University, alma mater to the founders of Google and many of the technology world’s hottest stars.»
  • «As our surroundings have evolved over the centuries, so too have our navigational strategies and conceptions, shaped most recently by urbanization and the advent of high-speed travel.

    «We’re now on the cusp of an even more dramatic change, as we enter the age of the global positioning system, which is well on its way to being a standard feature in every car and on every cellphone. At the same time, neuroscientists are starting to uncover a two-way street: our brains determine how we navigate, but our navigational efforts also shape our brains. The experts are picking up some worrying signs about the changes that will occur as we grow accustomed to the brain-free navigation of the gps era.»

  • The brains of London cabbies have outsized rear hippocampuses, because they are required to painstakingly learn the byzantine lanes and byways of the Old World city. Not true for most of us — and especially not in the age of the GPS, writes Alex Hutchinson in the Canadian magazine The Walrus.

    Hutchinson says that with the digital navigational tool well on its way to becoming standard in every car and on every cellphone, “experts are picking up some worrying signs” about brain atrophy “once we lose the habit of forming cognitive maps.” Research is showing people, their heads in abstract spatial realms, flummoxed finding their way around in the real world.

  • «Not long ago, I started an experiment in self-binding: intentionally creating an obstacle to behavior I was helpless to control, much the way Ulysses lashed himself to his ship’s mast to avoid succumbing to the Sirens’ song. In my case, though, the irresistible temptation was the Internet.»

  • For years critics have railed against these cultural complexes as pointlessly grandiose expressions of vanity — a poisonous brew of architectural egotism and excessive wealth that was destroying America’s urban centers. Why all the fancy forms, they argued? Wouldn’t the money be better spent on something more valuable, like schoolbooks?

    Yet as the dust settles on the last of these projects, what begins to emerge is a more complex image of America’s cultural values at the birth of a new century. The formal dazzle masks a deeper struggle by cities and architects to create accessible public space in an age of shrinking government revenue and privatization. At their most ambitious, they are an effort to rethink the two great urban planning movements that gave shape to the civic and cultural identity of the American city.

  • «Nearly everyone reads. Soon, nearly everyone will publish. Before 1455, books were handwritten, and it took a scribe a year to produce a Bible. Today, it takes only a minute to send a tweet or update a blog. Rates of authorship are increasing by historic orders of magnitude. Nearly universal authorship, like universal literacy before it, stands to reshape society by hastening the flow of information and making individuals more influential.»
  • «The trend [in adoption law]… is toward openness, a growing “right” to know. I am not against this trend. I simply want to give not-knowing its due. I like mysteries. I like the sense of uniqueness that comes from having unknown origins (however false that sense may be).»
  • Her great 2003 essay on computer versus human memory. «[E]ach new computer has enough disk space to hold everything you’ve ever stored on all the computers you’ve ever owned in your life. The equivalent would be a new house that, every time you moved, would be so much larger than all your past houses that all the furniture you’ve ever purchased would follow you, indefinitely…. everything—the rug you picked up at a garage sale after a tipsy brunch, that secondhand dining table bought hurriedly after the divorce—all of it, no escaping it, the joy or humiliation of every decorating decision you’ve ever made, the occasion that brought each object into your life perpetually, unflinchingly present: the brutality of the everlasting.»


About a year ago I wrote about Web 2.0 as a time machine for my generation, and my suspicion that «mine may be the last generation that has the experience of losing touch with friends.» This concerned me because

when it comes to shaping identity, the ability to forget can be as important as the ability to remember. It’s easy to implore people not to forget who they are; but sometimes, in order to become someone better, you need to forget a little bit.

Likewise,

Forgetting insults and painful events, we all recognize, is a pretty healthy thing for individuals: a well-adjusted person just doesn’t feel the same shock over a breakup after ten years (if they can even remember the name of Whoever They Were), nor do they regard a fight from their childhood with anything but clinical detachment. Collectively, societies can also be said to make decisions about what they choose to remember, and how to act toward the past. Sometimes this happens informally but for practical reasons: think of national decisions to avoid deep reflection on wars or civil strife, in the interests of promoting national unity and moving forward.

The idea that digital and human memory work differently, and that we fail to recognize the difference between the two at our peril, is something I’ve been writing about for a while. So I was very interested to see a review by Henry Farrell in Times Higher Education of Viktor Mayer-Schoenberger’s new book Delete: The Virtue of Forgetting in the Digital Age. It sounds like a book I need to read… or at least footnote!

At its heart, his case against digital memory is humanist. He worries that it will not only change the way we organise society, but it will damage our identities. Identity and memory interact in complicated ways. Our ability to forget may be as important to our social relationships as our ability to remember. To forgive may be to forget; when we forgive someone for serious transgressions we in effect forget how angry we once were at them.

Delete argues that digital memory has the capacity both to trap us in the past and to damage our trust in our own memories. When I read an old email describing how angry I once was at someone, I am likely to find myself becoming angry again, even if I have since forgiven the person. I may trust digital records over my own memory, even when these records are partial or positively misleading. Forgetting, in contrast, not only serves as a valuable social lubricant, but also as a bulwark of good judgment, allowing us to give appropriate weight to past events that are important, and to discard things that are not. Digital memory — which traps us in the past — may weaken our ability to judge by distorting what we remember.

[To the tune of Sukhwinder Singh, «Marjaani Marjaani,» from the album Saavn Celebrates Bollywood (I give it 3 stars).]

  • «We’ve rounded up eight of the latest in office designs from around the world, showing how architects are attempting to turn the mundane into the marvellous, creating commuter-friendly communes that don’t destroy the spirit.»
  • Philips shares the fruits of a project that imagines the future of food, 20 years from now.
  • «Objectified is a feature-length documentary about our complex relationship with manufactured objects and, by extension, the people who design them. It’s a look at the creativity at work behind everything from toothbrushes to tech gadgets. It’s about the designers who re-examine, re-evaluate and re-invent our manufactured environment on a daily basis. It’s about personal expression, identity, consumerism, and sustainability.»
  • Writing and reading — from newspapers to novels, academic reports to gossip magazines — are migrating ever faster to digital screens, like laptops, Kindles and cellphones. Traditional book publishers are putting out “vooks,” which place videos in electronic text that can be read online or on an iPhone. Others are republishing old books in electronic form. And libraries, responding to demand, are offering more e-books for download.

    Is there a difference in the way the brain takes in or absorbs information when it is presented electronically versus on paper? Does the reading experience change, from retention to comprehension, depending on the medium?

  • It’s an experiment that has made back-to-school a little easier on the back: Amazon.com gave more than 200 college students its Kindle e-reading device this fall, loaded with digital versions of their textbooks.

  • Kott and others like him are social networking refuseniks: people in their 20s or early 30s who have gone off the grid, eschewing the ecology of Facebook, Twitter, MySpace and the like. In Washington, refuseniks are not exactly operating in isolated, Luddite worlds: One is in a dance company, another is a rapper/hip-hop singer, another is a Georgetown undergraduate.

  • The report, previewed in a speech by Thomas Fingar, the U.S. intelligence community’s top analyst, also concludes that the one key area of continued U.S. superiority — military power — will “be the least significant” asset in the increasingly competitive world of the future, because “nobody is going to attack us with massive conventional force.”

From Newsweek:

For those of us who carry iPhones, this shift to a persistent Internet has already happened, and it’s really profound. The Internet is no longer a destination, someplace you «go to.» You don’t «get on the Internet.» You’re always on it. It’s just there, like the air you breathe.

[To the tune of Future Sound of London, «Room 208,» from the album Lifeforms (I give it 2 stars).]


The End of Cyberspace

  • «Innovation is often perceived as an unmanageable phenomenon. Bets are placed on new products with the hope that a few winners will compensate for the many losers. At best, sophisticated selection procedures impose a certain discipline and provide guidance for containing costly errors. The research that we have conducted yields a more nuanced view. Innovation, we have found, becomes manageable when managers move away from universalistic prescriptions and recognise that different rules and practices apply in different contexts. Our main argument is that both executives and public officials need to learn from the new realities of innovation. Instead of being a uniform process, innovation takes place in seven distinct ‘games’, focusing on market creation, market maintenance and innovator support.»

The End of Cyberspace

Sitting in the quiet living room in the pre-dawn hours, I came across William Deresiewicz’s essay on technology, sociability, and solitude in the Chronicle Review. For those who have access to it, it’s well worth reading.

One book that influenced me when I was younger was Anthony Storr’s Solitude. I didn’t actually read that much of it, and I doubt I understood it very well, but the idea that solitude was worthwhile and rewarding, and nothing to be afraid of, was a novel concept for me. Deresiewicz argues that his students, who’ve grown up with MySpace and text messaging (among other things), have lost most opportunities to learn and benefit from being alone.

If Lionel Trilling was right, if the property that grounded the self, in Romanticism, was sincerity, and in modernism it was authenticity, then in postmodernism it is visibility.

So we live exclusively in relation to others, and what disappears from our lives is solitude. Technology is taking away our privacy and our concentration, but it is also taking away our ability to be alone. Though I shouldn’t say taking away. We are doing this to ourselves; we are discarding these riches as fast as we can. A teenager I know (I was told by one of her older relatives) had sent 3,000 text messages one recent month. That’s 100 a day, or about one every 10 waking minutes, morning, noon, and night, weekdays and weekends, class time, lunch time, homework time, and toothbrushing time. So on average, she’s never alone for more than 10 minutes at once. Which means, she’s never alone.

I once asked my students about the place that solitude has in their lives. One of them admitted that she finds the prospect of being alone so unsettling that she’ll sit with a friend even when she has a paper to write. Another said, why would anyone want to be alone?

To that remarkable question, history offers a number of answers. Man may be a social animal, but solitude has traditionally been a societal value. In particular, the act of being alone has been understood as an essential dimension of religious experience…. For the still, small voice speaks only in silence.

One thing that jumped out at me was that Deresiewicz contrasts the physical solitude that used to characterize being online with the situation today. It used to be that «connecting» online was more of a physically isolating experience, done at desks, in front of desktops. Today, though, you don’t have to be alone to go online: just as cellphones and mobile Web technologies make it less likely that you’ll ever be offline, and lower the bar for jumping onto the Web, they make it less likely that you’ll be fruitfully alone.

But as the Internet’s dimensionality has grown, it has quickly become too much of a good thing. Ten years ago we were writing e-mail messages on desktop computers and transmitting them over dial-up connections. Now we are sending text messages on our cellphones, posting pictures on our Facebook pages, and following complete strangers on Twitter. A constant stream of mediated contact, virtual, notional, or simulated, keeps us wired in to the electronic hive…. Not long ago, it was easy to feel lonely. Now, it is impossible to be alone.

Of course, we all know plenty of people who manage to feel alone even today, and it’s possible to resist the pull of technology: there are people who rebel against constant connectivity, on the grounds that it’s too intrusive and distracting. But still, I think Deresiewicz points to a bigger trend that most of us will recognize.

A rich essay. Worth reading.


The End of Cyberspace

HealthDay News reports on a study of the impact of Internet use on the brains of elders:

Surfing the Internet just might be a way to preserve your mental skills as you age.

Researchers found that older adults who started browsing the Web experienced improved brain function after only a few days.

«You can teach an old brain new technology tricks,» said Dr. Gary Small, a psychiatry professor at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles, and the author of iBrain. With people who had little Internet experience, «we found that after just a week of practice, there was a much greater extent of activity particularly in the areas of the brain that make decisions, the thinking brain — which makes sense because, when you’re searching online, you’re making a lot of decisions,» he said. «It’s interactive.»…

«We found a number of years ago that people who engaged in cognitive activities had better functioning and perspective than those who did not,» said Dr. Richard Lipton, a professor of neurology and epidemiology at Albert Einstein College of Medicine in New York City and director of the Einstein Aging Study. «Our study is often referenced as the crossword-puzzle study — that doing puzzles, writing for pleasure, playing chess and engaging in a broader array of cognitive activities seem to protect against age-related decline in cognitive function and also dementia.»…

For the research, 24 neurologically normal adults, aged 55 to 78, were asked to surf the Internet while hooked up to an MRI machine. Before the study began, half the participants had used the Internet daily, and the other half had little experience with it.

After an initial MRI scan, the participants were instructed to do Internet searches for an hour on each of seven days in the next two weeks. They then returned to the clinic for more brain scans.

«At baseline, those with prior Internet experience showed a much greater extent of brain activation,» Small said.

Doubtless some readers will recognize this as an updated version of the Proust and the Squid argument, which relies in part on fMRI studies indicating that the brains of literate people have specialized sections for quickly recognizing letters. What’s interesting here is that you get a similar kind of stimulation with the elderly.

[To the tune of John Coltrane, «A Love Supreme, Part II — Resolution,» from the album The Classic Quartet — The Complete Impulse! Studio Recordings (I give it 1 star).]


The End of Cyberspace

I’m working on a long post about the virtues of withdrawing somewhat from the world of Twitter, Facebook, etc., and this post about Qwitter— a service that «monitors your twitter account and notifies you when someone stops following you»— only reinforces my instinct that real-time-updated-and-read social media might not quite be ready for prime time.

My favorite part:

I’ve had 4 people confront me because I stopped following them and Qwitter told them. All 4 of those people were pissed off at me for it. 3 of them had stopped following me to get even. The one who didn’t, well he didn’t follow me to begin with but was still angry, yet in the e-mail he sent me he noted that he didn’t know who I was. The truth is I didn’t know who he was either, don’t remember following him, don’t recall anything he’d ever tweeted about and can only assume I added him by accident at one point when following a reply thread. Qwitter caused negative drama between two people who don’t know each other, have had no interaction, and really no reason for any bad feelings.
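For what it’s worth, the mechanism behind a service like Qwitter is easy to sketch: poll the follower list on a schedule and diff it against the previous snapshot. Here’s a rough Python sketch of that idea; fetch_followers is a hypothetical stand-in, not a real Twitter API call, and Qwitter’s actual implementation isn’t public:

    import time

    def fetch_followers(account):
        """Hypothetical stand-in for whatever client call returns the
        current follower list; replace with a real API client."""
        raise NotImplementedError

    def watch_for_unfollows(account, interval_seconds=3600):
        """Poll the follower list and report anyone who has disappeared
        since the last snapshot -- the basic diff behind an unfollow notifier."""
        previous = set(fetch_followers(account))
        while True:
            time.sleep(interval_seconds)
            current = set(fetch_followers(account))
            for user in previous - current:
                print(f"{user} stopped following {account}")
            previous = current

The technical part is trivial; it’s the social fallout, as the quote above suggests, that’s the problem.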

Briefly, I’m starting to think that the current generation of instant-update, small-bite social media tools makes us too connected to other people in the wrong ways: they encourage us to sacrifice depth of contact for volume of contact, in ways that ultimately are unsatisfying, and promote a highly social version of ADHD. More on this later.



The End of Cyberspace

  • The SunTrust Center for Strategic Futures at Wake Tech will provide a speakers forum, hosting future-thinking leaders at Wake Tech for the exchange of ideas related to the nature of education, workforce development, and economic development. The Center’s goal is to create a culture of future-thinking students, staff, and faculty at Wake Tech, engaged with future-thinking businesses and with the workforce and economic development communities.