We’re Not in the Middle of a Digital Revolution

In case you missed my last post, here’s the last paragraph from it, the one I’m extending here in this entry:

“And then, there’s a flip-side. How much help is too much? What’s the role of the editor as collaborator? As trainer? As code fixer? These are questions that I’ll return to soon. But I’m going to try to address them all through the lens of one, simple aphorism: We’re not in the middle of a digital revolution. Not because it’s over; not because it hasn’t started, either. If you can accept that, these questions start to look a little different.”

There are times when I feel like we’re in the middle of a digital revolution. Like the Web has changed everything. Internet speeds. Wireless access. Mobile phone technologies. Storage volumes. Access to enormous databases. Collaboration tools. Ten years ago, most of this stuff would have looked unrealistically powerful. And twenty years ago? The unfathomability factor goes pretty sky-high. So, yeah, on the one hand, it seems like we’re somewhere in the middle of a revolution. But are we at the beginning? The end? The middle? Of course it’s hard to say. But why is it hard to say? Because we’d need to have a sense for when this revolution will END. Here’s the thing… I don’t think it will. I think the rate of technological development will continue to increase. Innovation will continue to increase. More tools. Better storage. Faster speeds. Cheaper, better-built hardware. Yeah, the world’s economy, at times, seems to be listing and staggering. And our own national debt gets heavier on our backs all the time. But there’s no reason to think that these sorts of factors will slow down innovation. The places it happens might shift (e.g., to technologies of farming, weather manipulation, disaster relief, security, affordable medicine), but it’s still going to happen.

Yep. That’s a lot of generalizations. None of which are surprising or insightful. They’re merely the premise on which I want to talk about a certain orientation toward technology: the wait-and-see. It’s just not sustainable. Let me be more precise. Okay. There are plenty of people I have worked with in the past (as educators, writers, craftspeople, etc.) who prefer to wait and see. Another term is “late adopters,” but I don’t like the connotations (“late to the party,” “the late Mrs. Ploium”). Basically, I’m talking about two different sorts of waiting.

The first has to do with interface and usability. This sort of waiting assumes that early technologies are really only accessible to the geeks. (I’m talking about digital technologies, more than anything, I guess. Probably applicable to others, too, though.) One of the simplest and most common examples of this phenomenon is the introduction of the Macintosh computer back in 1984: the graphical user interface at a consumer-accessible price. DOS screens, I’ll admit, are intimidating. And dull. And unintuitive. Before the Mac, did huge numbers of people accept that the computer was going to become a cultural ubiquity? Probably. But still, soooo many people wouldn’t jump in (i.e., embrace the tech, drink the Kool-Aid) until it was easier to do so. Thus the wait-and-see attitude. Waiting until the technology gets accessible for the common user. Waiting until the interface is intuitive enough that the learning curve is reasonable. Maybe that’s what this really comes down to. Reasonable learning curves. I get this. Now that I’m looking back over this paragraph, I’m convinced that this has been a completely reasonable and defensible orientation toward emerging technologies. Here, consider the difference between learning to code HTML vs. assuming that a WYSIWYG editor like Dreamweaver will eventually come along. Why waste the time learning to code?

Then there’s this other orientation. The wait-and-see (if-it-catches-on) orientation. Also an approach that is useful in some ways. This is especially true in our recent age of collaborative technologies and projects. Even though, in a lot of ways, Facebook was superior to MySpace, lots of people were hesitant to switch. In fact, lots of people thought Facebook was actually pretty lame because not enough people were on it. And they were right, given the fact that it’s a social technology. Not productive without your friends (or at least a pool of potential friends). Another example of this attitude has to do with some open-source projects, like OpenOffice, for instance. In a practical sense, for 95% of users, OpenOffice has ALL of the word-processing functionality they might ever use. There are some very small functions and integrations that MS Office offers, but there are alternative solutions available. And OpenOffice is easy to use. It’s faster. It’s superior in soooooo many ways. But it’s not fully compatible with the dominant technology (f*#&-off MS Word). And so, people don’t adopt the alternative. Some people. And who can blame them? The week of my prospectus meeting, I worked up a distributable copy of my prospectus in OpenOffice. Then I sent it out to my committee members. One of them mentioned that they couldn’t read most of it. Formatting issues. Okay, so I exported the file as a PDF. Again, it wouldn’t work. They were using MS Word and Adobe Reader, both products that rely largely on their non-interoperability with other products as part of their business model. So I had to scramble at the last minute to get them a copy that I produced in MS Word. The headaches and extra time were ridiculous. And after this story, you, reader, should want to adopt OpenOffice? Well, yes. But that’s a story for another day. I’m just suggesting that it’s perfectly understandable why people would choose this sort of wait-and-see attitude.

But both of these attitudes are becoming increasingly untenable. Consider this analogy: a given technological innovation is like a boat. And that boat is floating near a dock on which a user is standing. It’s floating away, slowly. And the user needs to jump into the boat in order to bring it back to shore. The wait-and-see attitude used to be useful, because the boat floated slowly enough that the user had plenty of time to consider or put off the jump until the boat stopped drifting away. I just don’t think the boat is going to stop drifting away anymore. And those who wait are eventually going to have to jump into the water and swim to the boat. The longer someone waits, the more time they’re going to spend in the water playing catch-up to the technology. Because it’s not going to stop. The developmental spurts that technologies go through are getting shorter and shorter. To the point where development will seem constant. “Versioning” will become a thing of the past. Think about how often you have to run updates on your computer. Every night, for most people. And the difference between Windows Vista and Windows 7? It might have seemed significant, but it was nothing compared to the difference between Windows 95 and Windows 98, for instance. Technological change is becoming fluid; discrete epochs or versions are disappearing.

Those who are waiting-to-see, unfortunately, will eventually come to realize that there is only seeing. They are waiting for the seeing. But they are already seeing. Though they think it is waiting. The difference? Action. The wait-and-see construction is really about action. I should have described it better as a wait-and-see-then-act orientation. But the seeing requires some stasis, and there’s no such thing as stasis anymore. So, people need to get rid of the waiting. That’s not to say there’s no distinction between now and the future. They’re not the same, certainly. But there’s no point of demarcation. No release date for the future.

There’s only seeing and acting. No more waiting. This demands participation and paying attention.

Sounds exhausting? Yep. It does to me, too. So what can you do? I don’t know. But not knowing exactly how to respond to this new “acting-on-seeing” orientation doesn’t make me want to deny it. I think it’s a reality. What it makes me want to do is get to work. But that’s not going to be enough. Part of that work is going to have to be about efficiency. How to work smarter. Which means making choices about priorities. And maybe most importantly, that work will have to be about figuring out how to pace ourselves. Setting up sustainable workflows. Less project-oriented. More process-oriented. Which will be difficult for an academic who will largely be evaluated by discrete scholarly texts, courses, and projects. A constant, relatively low level of technology adoption and exploration is going to have to be a part of that. The question is “How?”

My first suspicions? Ubiquity. Integration. Capture. Tagging. Categorization. Sharing.

The first technologies I want to explore? WordPress. Twitter. RSS feeds. Posterous. Delicious. GoodReads. Zotero. Evernote.
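Just to gesture at what “capture” and “tagging” might look like in practice, here’s a minimal sketch in Python, assuming the third-party feedparser library. The feed URL and the keyword-to-tag mapping are hypothetical placeholders, not a finished system; the point is only that pulling items out of an RSS feed and attaching rough tags to them is a small, automatable habit rather than a big project.

```python
# A minimal capture-and-tag sketch, assuming the third-party
# "feedparser" library (pip install feedparser). The feed URL and
# the keyword-to-tag mapping below are hypothetical placeholders.
import feedparser

FEED_URL = "https://example.com/feed"  # hypothetical feed

# Naive keyword-based tagging: if a keyword appears in an entry's
# title or summary, attach the corresponding tag.
TAG_KEYWORDS = {
    "teaching": "pedagogy",
    "wordpress": "tools",
    "twitter": "tools",
}

def capture(url):
    """Fetch a feed and return (title, link, tags) for each entry."""
    parsed = feedparser.parse(url)
    items = []
    for entry in parsed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        tags = sorted({tag for kw, tag in TAG_KEYWORDS.items() if kw in text})
        items.append((entry.get("title", "untitled"), entry.get("link", ""), tags))
    return items

if __name__ == "__main__":
    for title, link, tags in capture(FEED_URL):
        print(title)
        print("  " + link)
        print("  tags: " + (", ".join(tags) or "(none)"))
```

Something this crude would never be the end goal, obviously; it’s just the flavor of low-level, ongoing capture I mean, the kind of thing that runs in the background of a sustainable workflow instead of becoming its own discrete project.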

Yes. Yes. Yes. These technologies are all relatively new. But they’re not revolutionary, because they’re not ushering in a new era. In order for that to happen, an “era” would need to be static. And stasis is dead. In terms of technology, there is no next thing. There is just a fluid becoming. Innovation streams. In this sense there can be no revolution. No new. Just change. Yikes.

(“Cyborg Che – No Revolution,” remix by Ryan Trauman, Creative Commons Attribution-NonCommercial 3.0 Unported license)
