The End of Cyberspace



Just announced.

Know How Talks at IDEO
Thu, 11/06/08, 5:00 pm
IDEO Cafe*
Free and open to the public. See bottom for venue, schedule, and more details.

Alex Soojung-Kim Pang, «The End of Cyberspace»

The concept of cyberspace— an alternate dimension of information, accessible from computers, that was separate from and superior to the physical world— has helped shape the way we think about everything from the design of online environments, to intellectual property law, to predictions about the future of cities, work, and space. I want to explain how the idea of cyberspace came to be so compelling, and chart where it’s going. Cyberspace has its origins in science fiction, video games, the mythology of the Western frontier, and other cultural sources. But it became powerful because it helped us make sense of the emerging relationship between people, information, and the Web, in an era defined by desktop computers, modest Internet connections, and graphical interfaces. Cyberspace was an artifact of a particular moment in the cultural history of human-computer interaction. So what happens to the concept of cyberspace as the character of our interactions with computers and information changes? What happens when we move toward an always-on, mobile, ubiquitous future? I argue that the notion of cyberspace will become obsolete. As Gene Becker puts it, «cyberspace was a separate place from our world only because the necessary bridging technologies didn’t exist. Now that they do … cyberspace is coming to us.» Given how influential the idea of cyberspace was, it’s worth asking what its obsolescence will mean, and what might come after cyberspace.

http://www.endofcyberspace.com

Alex Soojung-Kim Pang is a Research Director at the Institute for the Future, where he leads projects on the future of science, and an Associate Fellow at the Saïd Business School at Oxford University, where he works with students interested in futures and forecasting. Before becoming a futurist, Alex studied history and sociology of science at the University of Pennsylvania. He is writing a book on the end of cyberspace (http://www.endofcyberspace.com); his earlier projects include histories of Victorian solar eclipse expeditions; Buckminster Fuller and the geodesic dome; and the development of the Apple mouse.

Upcoming Know How Talks
This will be the last talk for 2008.

Stay tuned!

*The Know How Talks are usually held on Thursdays at 5:00, in IDEO’s Palo Alto cafe next to our lobby at 100 Forest. Enter from the alley between Alma St and High St.

The talks are open to the public. No need to RSVP.

I’ve long been a fan of the IDEO talks, so it’ll be a real pleasure— and a real challenge— to give one.

endofcyberspace, IDEO, talks

[Reposted from my Red Herring blog, 2005]

When modern architecture emerged in the first years of the last century, it threw down a gauntlet at the feet of traditional neoclassical and academic architecture. Modernism’s style was stripped-down and functional. It celebrated the beauty of machines and the art of engineering, and expressed itself in concrete and steel, rather than brick and wood. Most important, it declared that the future would never again look like the past: from now on, architecture would be about innovation and change, not about working with timeless principles and eternal proportions.

Implicitly at first, and then consciously, architectural exhibits became predictions. Buckminster Fuller’s Dymaxion house, first exhibited in 1927, exemplifies how modern architecture backed into the futures business. The Dymaxion house was a hexagonal structure, suspended from a central load- and services-bearing column. Virtually everything in it was made of aircraft-grade metal. The house wouldn’t be built on-site, like traditional houses; instead, it would be mass-produced, like cars or cans of peas, and delivered to owners.

Soon «the home of the future» became a stock element of every architectural exhibit, World’s Fair, forward-looking corporate display, or popular magazine special issue. (Even World War II couldn’t derail them: a 1943 brochure showed a couple admiring a neighborhood of modern houses under the caption, «After total war can come total living.») Sporting automated kitchens, robot butlers, furniture that you washed with a high-pressure hose, and helipads (the long, sad story of why we don’t have personal helicopters or jet packs will have to wait for another time), these houses were sleek temples of convenience, promises of a world in which the home would be as frictionless and worry-free as a department store.

Of course, almost none of this has come to pass. Instead, the «home of the future» projects serve as textbook examples of how you can get the future wrong, and why.

Continue reading «Smart home, smarter home» »

Long post on the nature of tinkering, coming out of a conference I’ve been attending at the beautiful Carnegie Foundation. I’ll link it to the end of cyberspace soon, I hope.

conference, endofcyberspace, tinkering

[Reposted from my Red Herring blog, 2005]

Recently BBC World had an article on baby blogs— blogs that parents keep about their children, the digital equivalent of baby books. Coincidentally, that same day I posted my 500th entry on my blog about my children, which I started soon after getting a digital camera. Like most articles about blogs, its substantive points were mixed up with a measure of alarmism and technical naivete. Some of it was taken up with worries about what unmentionable things pedophiles could do with those cute baby pictures, and fretting over how revealing details about your child’s daily routine isn’t very smart. (Hello? Ever heard of password protection?)

The article also suggested that baby blogs were invasions of privacy. What if, twenty years from now, the merest acquaintance could read about your child’s potty-training exploits, or their first visit to Grandma’s house? Wouldn’t making those details of your child’s life available to people they barely know violate their privacy, and make it harder for them to get dates? (At this point in the article I wanted to pump my arm and shout «Yessss!» My five-year-old daughter is only in nursery school, and already I’ve guaranteed that she’ll spend her college years undistracted by a social life.)

My efforts to archive my children’s lives stand in stark contrast to the scanty documentation of my own past. My entire childhood is preserved in just under two hundred pictures, a few letters, and a couple of yearbooks: it all fits in a single box. In contrast, I can take two hundred pictures of my daughter at a birthday party. The constantly-falling cost of digital media lowers the barriers to recording everyday events and preserving every last picture and audio file. At my current rate, each of my children is in danger of having me take 50,000 pictures of them by the time they turn 18.

Of course, parenting is one long invasion of privacy, but the idea of baby blogs coming back to haunt their subjects later in life is still an interesting one. Technology promises to take a ritual that had traditionally been a painful but very limited rite of passage— the baby books shown to the fiance, the clever candids shown at the wedding reception— and make it into a full-time affair.

It also shows that the relationship between privacy and technology is really pretty complex. Worries about technology affecting privacy are perfectly reasonable; but worries about specific technologies are often misplaced. To really know what to worry about, you have to think a bit more about what privacy is, and how technology can affect it.

Continue reading «Incognito ergo sum» »

  • An appropriation-friendly, image-rich, experimental research library. Independent and open to the public.
  • «A prototype for a portable collaborative space in libraries built by the Illinois Institute of Technology’s Institute of Design.»
  • In 1999, Brian Eno revisited his 1979 essay on the studio as a compositional space. «I was thrilled at how people were using studios to make music that otherwise simply could not exist. Studios opened up possibilities. But now I’m struck by the insidious, computer-driven tendency to take things out of the domain of muscular activity and put them into the domain of mental activity. This transfer is not paying off. Sure, muscles are unreliable, but they represent several million years of accumulated finesse. Musicians enjoy drawing on that finesse (and audiences respond to its exercise), so when muscular activity is rendered useless, the creative process is frustrated.»
  • The studio is a space for composition, an instrument. «[Y]ou no longer come to the studio with a conception of the finished piece. Instead, you come with actually rather a bare skeleton of the piece, or perhaps with nothing at all. I often start working with no starting point. Once you become familiar with studio facilities, or even if you’re not, actually, you can begin to compose in relation to those facilities. You can begin to think in terms of putting something on, putting something else on, trying this on top of it, and so on, then taking some of the original things off, or taking a mixture of things off, and seeing what you’re left with — actually constructing a piece in the studio.»
  • «Ah, that constant stream of information from the PDA, the wireless laptop connection, cable TV. Sure, you can work from wherever, whenever, be entertained around the clock. But how to manage it? How to pull back and gain time for personal pursuits, reflection and sanity?»
  • «Thus began my “secular Sabbath” — a term I found floating around on blogs — a day a week where I would be free of screens, bells and beeps. An old-fashioned day not only of rest but of relief.»

From the MIT Scratch Web site, a tutorial on how to create popular projects. Just listen to it.


[Reposted from the Red Herring blog, ca. 2005.]

Let me begin with a confession. I spend most of my working life in front of a computer, and I suspect a fair amount of that time is wasted. I check my e-mail several times an hour. I regularly scan my RSS feeds for new posts. I visit news sites, just in case they’ve updated the list of breaking news stories. I can follow hyperlinks from one end of the Internet to the other if I’m not careful.

It’s all the electronic equivalent of bouncing your leg up and down, or ripping a napkin apart. And I don’t need to be this wired. It doesn’t help my work or thinking; to the contrary, these information-era equivalents of nervous tics are just distractions. Yet I do them.

I’m hardly alone. Some of my friends lead lives that require Blackberries; others have Blackberries that take over their lives. A recent Yahoo-OMD study of 28 people forced to go offline for two weeks shows how dependent—in both the functional and the emotional sense—people become on being connected. According to The Atlantic Monthly, «Across the board, participants reported withdrawal-like feelings of loss, frustration, and disconnectedness after the plug was pulled.» Indeed, «[t]he temptation to go online was so great that the participants were offered «life lines»—one-time, one-task forays onto the Web—to ease their pain.» Add to this the recent Pew Internet Survey study that found that Internet users are spending more time online, and less watching TV, and you get a picture of growing numbers of people turning productivity tools into weapons of self-distraction.

It’s just the latest evidence confirming the truism that we live in an age of information overload. How did this happen? And is it going to get worse?

Continue reading «Solitude» »

Martin Dodge and Rob Kitchin’s 2001 book, The Atlas of Cyberspace, is now available as a free PDF. Of course it’s a huge file, and I still think the book itself is well worth owning, even though the very idea of an «atlas» of cyberspace enshrines a concept that’s worth challenging.

books, cyberspace, endofcyberspace

[Reprinted from my Red Herring column, 2004.]

I’ve had my own blog since late 2002. The post with the largest number of comments isn’t my hilarious, cutting review of Matrix Reloaded; it’s not my insightful analysis of Andy Clark’s Natural-Born Cyborgs; it isn’t even my post about Danish train stations. No, the post that has inspired the largest number of comments is one about Super Nanny, a reality TV show.

And most of the comments don’t have anything to do at all with my post. Instead, the commenters are just venting about child-rearing, praising the show, or saying how much they love super nanny Jo Frost. I’m the equivalent of the bartender: I put out the nuts and wipe down the bar. Except this time, the patrons have brought their own bottles.

How did this happen? And why does it matter? The answer to the first question is easy: Google. For reasons that I can’t divine, a search on «Super Nanny» returns my post about the show as the #2 result. I have no idea why. And why does it matter? In its own small way, it’s an unexpected, but illuminating, example of user reinvention, the phenomenon wherein people take a technology or medium intended for one purpose, and remake it for themselves.

Continue reading «Super Nanny and the reinvention phenomenon» »



From Metropolis, an essay on «Tracking the Future» that describes a recent book on new urban infrastructures.

The 50-year arc of engines and batteries puts us right on the cusp of viable clean-power transit. The computation and flexibility necessary to make better use of the energy feeding the electric grid are already available; they’re the same technologies keeping cell phones going for days on a single charge. And telecommunications itself is slowly but steadily having a noticeable effect on how and when we use energy, whether through the reduced need for office space because of flexible work locations, the creeping advance of videoconferencing, or even the use of online social networking to buttress face-to-face interactions. It’s not as if we can’t imagine what a viable future might look like (even if it is just as easy to summon a picture of total collapse).

What’s harder to grasp is the inherent flexibility of this new infrastructure. With The Infrastructural City, Varnelis, an architectural historian and the director of Columbia University’s Network Architecture Lab, set out to update Reyner Banham’s 1971 book, Los Angeles: The Architecture of Four Ecologies. The major difference is that where Banham saw in Los Angeles’s unplanned urbanism a logic that could be instructive, Varnelis views it as a city in perpetual crisis—a victim of its own infrastructure. The freeways are perpetually clogged. The wildfires burn faster the more they are suppressed. “Infrastructure is no longer a solution,” Varnelis writes. But he really means the old infrastructure, those masterworks built according to a plan….

The emerging infrastructure is different. Varnelis describes it as something multiple and shifting: “networked ecologies,” plural “infrastructures” that are “hypercomplex” and as likely to consist of legal mechanisms and barely visible cell-phone networks as the heavy stuff of tunnels and bridges. Inherently less apparent than the infrastructure that came before, they’re also as likely to be owned by corporations as by governments—meaning these networks can’t really be controlled, only “appropriated” according to their own logic. With traditional planning made impotent by capitalism and NIMBYism, rebuilding the city now requires a “new type of urbanist,” a designer Varnelis compares to a computer hacker who reimagines a new use for the underlying rules and codes.

I’ve said before that for people my age (I’m 44), Web 2.0 is a time machine. So it’s nice to see that Newsweek has caught on to the idea (though why they had to title the article «Why Facebook Is for Old Fogies» is beyond me; excuse me!).

  • Facebook is about finding people you’ve lost track of. [This is so true. In fact, I found the article on a college classmate’s Facebook page. We’d been out of touch for about 20 years before I friended her (though she insists she friended me first… yeah, right). Which just proves the article’s point.]
  • We’re no longer bitter about high school. [How can I be bitter about it? I can hardly remember it.]
  • We never get drunk at parties and get photographed holding beer bottles in suggestive positions. [Don’t I wish….]
  • Facebook isn’t just a social network; it’s a business network.
  • We’re lazy. [True that.]
  • We’re old enough that pictures from grade school or summer camp look nothing like us.
  • We have children.
  • We’re too old to remember e-mail addresses. [But we’re smart enough to know that we don’t have to. Our address books sync with our iPhones, we import them into Gmail, etc.]
  • We don’t understand Twitter. [We do. We just think harder about using it.]
  • We’re not cool, and we don’t care. [Well, that’s a pose.]

Via Interaction Design Umeå, Freedom:

Freedom is an application that disables networking on an Apple computer for up to eight hours at a time. Freedom will free you from the distractions of the internet, allowing you time to code, write, or create. At the end of your selected offline period, Freedom re-enables your network, restoring everything as normal.
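
(For the curious: the core trick Freedom describes, turning the network off, waiting out your chosen period, then turning it back on, is simple enough to sketch. Here is a rough Python approximation for a Mac; it is not Freedom’s actual code, and the en1 interface name and the reliance on the networksetup command-line tool are assumptions about a typical setup, not anything Freedom’s authors have published.)

```python
# A minimal, Freedom-like "go offline for N hours" sketch for macOS.
# Assumption: the Wi-Fi interface is "en1" (it varies by machine; check
# with `networksetup -listallhardwareports`). Freedom itself surely does
# more; this is just the gist of "disable networking, wait, re-enable."
import subprocess
import sys
import time

WIFI_DEVICE = "en1"  # assumed interface name

def set_wifi(enabled):
    """Turn the Wi-Fi interface on or off via macOS's networksetup tool."""
    state = "on" if enabled else "off"
    subprocess.run(["networksetup", "-setairportpower", WIFI_DEVICE, state],
                   check=True)

def go_offline(hours):
    """Disable Wi-Fi for the given number of hours, then restore it."""
    set_wifi(False)
    try:
        time.sleep(hours * 3600)
    finally:
        # Re-enable networking even if the wait is interrupted (Ctrl-C, etc.).
        set_wifi(True)

if __name__ == "__main__":
    go_offline(float(sys.argv[1]) if len(sys.argv) > 1 else 1.0)
```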

This reminds me a bit of Write Room, and why I like it: it’s designed to be distraction-free.

At what point did the absence of distraction become a luxury? Is it just me, or is concentration (not just attention, but the ability to really focus seriously for long periods of time) an ever-scarcer state of being? (I hate to call it a commodity, despite its economic or productive value.)

This is my prediction for 2009: in addition to the global recession continuing to play havoc with all of our lives, we’re going to see more people explicitly trying to balance their time online and offline. Zeroing and digital sabbaths will become more popular.

The latest data-point: Lucy Kellaway’s Financial Times column:

This is our first experience of recession in the internet age, and so far I don’t like it one little bit. You could say that the internet makes the recession more bearable as there are all those networks to help people get jobs and there is Ebay for buying things second-hand.

Yet such things are trivial compared to what the internet is doing to our confidence. The internet has created a global psyche. The web has mentally joined us at the hip, so we can no longer put our heads in the sand. If that sounds painfully contorted, it is because it is. Just as no country can decouple itself from the ailing global economy, none of us as individuals can decouple ourselves from the ailing global psyche.

Through blogs, websites and e-mails the world’s economic ills are fed to us on a drip all day long. It is not just that we hear about bad things faster, we hear about more of them and in a more immediate way. My worries become yours, and yours become mine. On the internet, a trouble shared online is not a trouble halved. It is a trouble needlessly multiplied all over the world. After reading this article, people in Australia will surely start worrying about my paint colours, too.

This would not matter so much if it were not for the fact that confidence is the medicine that cures a recession; and all this sharing of bad news leaves one with no confidence at all.

If I had been alive during the last comparable recession, over 60 years ago, I would have limited my news injection to reading The Times every morning. In those days it had a front page given over not to big scary headlines, but to small classified ads. The news inside would probably have left me a little depressed over breakfast, but I would have had the rest of the day to recover my equanimity.

Instead, I sit over my computer all day and feed my anxiety.


Maybe I shouldn’t have worried so much about how my use of Twitter could be more meaningful, in light of this:

corey menscher built the ‘kickbee’ while attending the itp program at new york university this fall. the device is designed to record kicking movements from a pregnant woman’s baby. once a kick is sensed, the device will send a signal to its onboard electronics, which will in turn transmit the signal to a computer via bluetooth. the computer then logs the information on the online social messaging service twitter. this sends a message out to followers letting them all know that the baby kicked.

Of course, you might argue that a kick is a lot more meaningful than anything I could post.

Menscher elaborates:

As an expectant father, I am once-removed from the physical knowledge my wife has of our baby and its development. With the Kickbee, I wanted to create a device that would give me a chance to be aware of our baby’s movements. It can also aid in tracking the frequency of fetal movements, which is an important way to monitor the health of the developing child.

The Kickbee is a wearable device made of a stretchable band and embedded electronics and sensors. Piezo sensors are attached directly to the band, and transmit small but detectable voltages when triggered by movement underneath. An Arduino Mini microcontroller transmits the signals to an accompanying Java application wirelessly via Bluetooth (a SparkFun BlueSMIRF v2 module that communicates serially with a MacBook Pro).

The Java application receives the sensor values and analyzes them. When a kick event is detected, a Twitter message is posted via the Twitter API. I chose to use Twitter because it is easy to initiate an SMS message to any mobile phone when a kick is detected. It also acts as a data log that can be accessed programmatically for visualization or archiving.
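
Just to make that pipeline concrete, here is a toy version of the detection loop, in Python rather than the Java Menscher describes: read values arriving over a Bluetooth serial port, call anything above a threshold a kick, debounce it, and hand the message off for posting. The port name, baud rate, threshold, and the stubbed-out posting function are illustrative assumptions, not details from the real Kickbee.

```python
# Toy re-creation of the Kickbee's kick-detection loop (the real device pairs
# an Arduino Mini with a Java application; see the description above).
# The serial port name, baud rate, and threshold below are assumptions.
import time
import serial  # pyserial

PORT = "/dev/tty.kickbee-bluetooth"  # hypothetical Bluetooth serial device
BAUD = 9600
KICK_THRESHOLD = 200      # sensor reading above which we call it a kick
DEBOUNCE_SECONDS = 5.0    # ignore repeat readings from the same kick

def post_kick(message):
    # Stub: the real Kickbee posts via the Twitter API so followers get the
    # update; printing keeps this sketch self-contained.
    print("would tweet:", message)

def watch(port=PORT):
    ser = serial.Serial(port, BAUD, timeout=1)
    last_kick = 0.0
    while True:
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue
        try:
            value = int(raw)
        except ValueError:
            continue  # ignore malformed readings
        now = time.time()
        if value > KICK_THRESHOLD and now - last_kick > DEBOUNCE_SECONDS:
            last_kick = now
            post_kick("Kick detected at " + time.strftime("%H:%M"))

if __name__ == "__main__":
    watch()
```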


A fellow Moleskine enthusiast points me to a fabulous Moleskine art project— the beautiful Alchemy Notebook, an imagined medieval notebook. Kind of a visual hybrid of Calvino’s Invisible Cities and Neal Stephenson’s Baroque Cycle.

Many great pages, and some very ingenious pop-ups, but for my own historical reasons I like the solar corona best.


via flickr


Clive Thompson talks about some pretty interesting stuff in his New York Times article of last Sunday, but he doesn’t talk about something that has been genuinely profound for me.

The article, not unreasonably, is mainly about young people and their use of Facebook, and how the technology redraws their sense of privacy and friendship. But for people of a certain age— specifically, my age— Web 2.0 is a time machine.

When I went to college, I lost touch with my friends from high school. Actually, I didn’t lose touch with them. I pretty much napalmed my connections to Henrico High in my headlong drive to get out of the South, go to college, and reinvent myself. Of course, after college some of the people I had been close to moved away, and we lost touch. This is the normal way of things, or was way back in the 1980s.

However, a few months ago, I started finding some of these people on Facebook or LinkedIn. Friends from high school who I’d heard about vaguely, but really hadn’t kept up with; people who I’d been close to in college for a year or two, before we went off to other things with our lives.

Most of these reconnections have been pleasant, low-key things: we do a little high-level catching up, exchange a couple e-mails, and that’s about it. For a couple other people, though, reconnecting has been more intense and rewarding: a couple have been through marriages that I completely missed, lost partners, survived serious illnesses. I feel weirdly guilty about not knowing all this already. It’s not like I haven’t had lots of other friends, and my own life, but still I come away with this sense that not only should I have been better-informed about the lives of these people, it would have been good had I been more present, if only peripherally.

At the same time, that’s more than offset by the pleasure of being connected and findable again, and having some sense of what they’re doing— either very generally, with my LinkedIn reconnects, or with my Facebook reconnects, knowing what they made for dinner and thought of Obama’s speech. It’s a little bit like the connection you have in the dorm, when you can see who’s not around, who’s studying, and who’s got a crush on whom.

It strikes me that barring some serious legal or technological reversal, mine may be the last generation that has the experience of losing touch with friends. I suspect that my kids and their friends will grow up with Facebook (or whatever’s hot ten years from now); and not only will they always be able to get in touch with their friends from seventh grade, the chances are good that they’ll be able to see what those friends are doing. Of course, some friends will mean more or less to them over time— the central nodes in that network of friends will constantly shift— but just disappearing entirely may become a lot harder.

This could often be a good thing— think of all the people whose lives start to drift when they lose touch with friends, or the degree to which becoming anti-social is a predictor of things like depression— but it could have its down side as well. I think it was necessary for me to separate myself from my high school world in order to become someone different, and I’m not sure that I’d have been able to reinvent myself so thoroughly if the whole class of ’82 could comment on what I was up to. Reinventing yourself— or just following a passion that you have, and pushing that interest as far as you can— isn’t normally something you do by yourself, or out in the desert; more often it’s something that you do in the company of other people, and very often something you do with new people.

Maybe my children’s generation will need the ability to turn parts of their network dark when they embark on some new adventure, then re-light them later. As a technical feat, this shouldn’t be tough; as a social one, it might be harder. You could always disguise not being in touch with old friends as a function of time and work pressure and so forth, but switching off a set of friends would be a more explicit declaration that you’re taking a break from them. Still, when it comes to shaping identity, the ability to forget can be as important as the ability to remember. It’s easy to implore people not to forget who they are; but sometimes, in order to become someone better, you need to forget a little bit.

My students are already serious users of these technologies— I hardly have a substantive e-mail exchange with a Saïd Business School student that doesn’t almost immediately yield a connection on LinkedIn— and my kids are likely to have their social lives shaped by them as profoundly as by school and neighborhood. But I think people in my generation have discovered something that they couldn’t. Humans have an undeniable desire to connect, and people go through certain stages where they need to disconnect. But the more graceful parts of our nature are happy to overlook years of silence or old awkwardnesses for the chance to reconnect. And that’s a great thing.


A few weeks ago a friend of mine announced that she was taking a break from Web 2.0.* She was going to prune her Twitter feeds, reduce her time on Facebook, and cut back on her time on IM. She needed to pay more attention to her real life, and to real relationships. Recollecting friends from high school and college was interesting for a while (Web 2.0 is a time machine for my generation, after all), but a large volume of acquaintances can’t provide the same satisfaction and support as a handful of friends you can see— or who can take the kids out to the park for an hour. Getting Tweets on her cell phone was also a poor combination of intrusiveness and minutiae. And there was laundry to be done.

As one of the digital lemmings who pushed her over the edge, I found the episode got me thinking. Why do I Tweet? After thinking about it for a while, I’ve come to the conclusion that while it’s certainly popular with lots of my friends, I have a couple of serious questions about Twitter, as a writer and a reader.

First, I have to admit that my regular life isn’t interesting enough to justify throwing out real-time updates about it. Nobody needs to know that I’ve just convinced the kids to make their own breakfasts, or have come back from lunch at Zao Noodles, or am trying to decide where to go on this weekend’s hike. The exception is when I’m on the road or doing something else unusual: at those times, my life— or my world— might get interesting enough to document in detail.

There’s also the problem that I’m not sure what I get out of my own tweets. One of the signal features of Web 2.0, I think, is that it’s not just broadcasting: it’s self-documentation. Some of my friends use Twitter to jot down little notes about what they’re reading. But for me, the absence of tags in Twitter makes it hard for me to find things I’ve looked at long enough to know I should look for them again later, or to keep track of citations; del.icio.us is still the better tool for that. (I suppose you could replicate a little of that functionality with #tags, but that’s a workaround, and there’s no auto-complete….) And I’m not sure I’ve gone back and looked at my own Twitter stream, ever. My regular blog is valuable because it’s a way to keep track of my own life; this one has been invaluable for recording and trying out ideas for my book; my kids’ blog has been a place where I could store huge amounts of detail about my kids’ childhoods— those pictures of them doing cute but ordinary things, or saying wonderful things, or just growing up. Tossing out tweets feels like shooting sparks from a wheel: the sparks may be entertaining, but it’s the object you’re shaping with the wheel that’s really valuable.
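
(To illustrate that workaround: if you kept a local archive of your own tweets, you could rebuild a crude del.icio.us-style index by pulling out the #tags. A minimal sketch, assuming the tweets are already sitting in a plain-text file called tweets.txt, one per line; actually fetching them from Twitter is a separate chore.)

```python
# Rough sketch of the "#tags as a poor man's del.icio.us" workaround:
# group an archive of your own tweets by hashtag so they can be found again.
# Assumes the tweets are already saved locally, one per line, in tweets.txt.
import re
from collections import defaultdict

HASHTAG = re.compile(r"#(\w+)")

def build_tag_index(lines):
    """Map each hashtag to the list of tweets that mention it."""
    index = defaultdict(list)
    for line in lines:
        for tag in HASHTAG.findall(line):
            index[tag.lower()].append(line.strip())
    return index

if __name__ == "__main__":
    with open("tweets.txt", encoding="utf-8") as f:
        index = build_tag_index(f)
    for tag, tweets in sorted(index.items()):
        print("#%s (%d)" % (tag, len(tweets)))
        for t in tweets:
            print("   ", t)
```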

Finally, as a reader, I find that seeing the raw feed of even a few people’s lives can quickly become overwhelming. In the last 24 hours, a relatively quiet time after Thanksgiving, I got 34 tweets; during a busy time— when people are traveling or at SXSW— I can get several times that, easily. There’s an argument to be made, as Clive Thompson has done, that the minutiae of tweets resolve into ambient awareness… but as it’s currently designed, the system still puts big demands on readers, who have to constantly read their friends’ Twitter streams, develop a sense of the rhythm of their posting, and build up a model of their real-world state from their online behavior. In a world in which the challenge is not to broadcast a lot of information, but to generate a lot of meaning, the stream-of-existence quality of tweeting makes it easy to mistake detail for intimacy, quantity of tweets for quality of expression or depth of understanding. As a preview of the world of ubiquitous computing and ambient awareness, Twitter is an interesting experiment (an experiment that’s being conducted by hundreds of thousands of people on themselves and their friends).

This is actually not a bad lesson for designers. Creating ambient devices isn’t about pushing information; presence isn’t just about connection. Connecting people virtually is as much about quality and meaning in the digital world as it is in the real world.

Which is not to say that Twitter is hopeless. Twitter is strongest as a platform for conversation and reportage. It’s easy to share a rapid fire of short notes at conferences, for example, and the final result— assuming people are listening and paying attention— can be useful. (I wonder if there are examples of Twitter being used by students in lecture classes?) A couple of the people I follow use it as much for pinging friends as for talking about what they’re doing: for them, Twitter is a cross between the Facebook wall and a chat room. And I find Twitter useful for getting reactions to news events: I stopped watching the presidential debates this fall, for example, after I realized that most of my friends were tweeting their reactions to them.

So what do I do with my Twitter stream? I’m not going to shut it down, because there are times when I’ll want to provide moment-by-moment updates about what I’m doing («Just cleared customs in Kai Tak! Where’s the cab line?» «Have now been in Victoria Stations on four continents….»). But for me, when I do use it, the challenge will be to figure out how to write the Web 2.0 equivalent of Zen koans: to fit meaning into 140 characters, rather than to fight the limitations of the medium by posting a lot.

*After I started working on this piece, I got interested in what other people had written upon getting fed up with some service, technology, or channel. Turns out that the «declaration of zeroing» is almost a literary genre. I first became aware of it through David Levy (whose book I reviewed in the L.A. Times, and who gave a brilliant talk about this stuff a couple years ago), and his ideas of a digital sabbath and information environmentalism. A couple samples:

Edward Vielmetti on Twitter:

The basic idea is that in systems where there is an infinite capacity for the world to send messages to get your attention, the only reasonable queue that you can leave between visits to the system is zero, because if you get behind you will never, ever, ever catch up gradually. Never. No matter how much time you put into it, there will always be more to do, and you will lose sleep over it.

Carmen Joy King, after quitting Facebook:

The amount of time I spent on Facebook had pushed me into an existential crisis. It wasn’t the time-wasting, per se, that bothered me. It was the nature of the obsession – namely self-obsession. Enough was enough. I left Facebook.

Donald Knuth on email:

I have been a happy man ever since January 1, 1990, when I no longer had an email address. I’d used email since about 1975, and it seems to me that 15 years of email is plenty for one lifetime.

Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don’t have time for such study.

On the other hand, I need to communicate with thousands of people all over the world as I write my books. I also want to be responsive to the people who read those books and have questions or comments. My goal is to do this communication efficiently, in batch mode — like, one day every three months.

Mark Bittman on his «secular Sabbath:»

I do believe that there has to be a way to regularly impose some thoughtfulness, or at least calm, into modern life — or at least my version. Once I moved beyond the fear of being unavailable and what it might cost me, I experienced what, if I wasn’t such a skeptic, I would call a lightness of being. I felt connected to myself rather than my computer. I had time to think, and distance from normal demands. I got to stop.

And of course there’s at least one blog about turning off all electronics one night a week. «Because of course,» Ariel Stallings writes, «I can’t unplug without blogging about it! (Irony, is that you?)»



links for 2009-03-06

  • «[E]xperts go wrong when they try to fit simple models to complex situations. («It’s the Great Depression all over again!») They go wrong when they leap to judgment or are too slow to change their minds in the face.»

    «If you want good, stable long-term performance, you’re better off with the fox. If you’re up for a real roller-coaster ride, which might make you fabulously wealthy or leave you broke, go hedgehog.»

    «We need to believe we live in a predictable, controllable world, so we turn to authoritative-sounding people who promise to satisfy that need. That’s why part of the responsibility for experts’ poor record falls on us. We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.»

  • About the end of cyberspace

    Cyberspace is a «metaphor we live by,» born two decades ago at the intersection of computers, networks, ideas, and experience. It has reflected our experiences with information technology, and also shaped the way we think about new technologies and the challenges they present. It had been a vivid and useful metaphor for decades; but in a rapidly-emerging world of mobile, always-on information devices (and eventually cybernetic implants, prosthetics, and swarm intelligence), the rules that define the relationship between information, places, and daily life are going to be rewritten. As the Internet becomes more pervasive— as it moves off desktops and screens and becomes embedded in things, spaces, and minds— cyberspace will disappear.

  • This blog is about what happens next. It’s about the end of cyberspace, but more important, about what new possibilities will emerge as new technologies, interfaces, use practices, games, legal theory, regulation, and culture adjust— and eventually dissolve— the boundaries between the virtual and physical worlds.

  • Alex Soojung-Kim Pang is an historian of science and futurist.


  • Part of the Corante Innovation Hub.
