Dan Gillmor: Did NY Times Make Major Error Turning To Facebook?

Newspaper’s server problems show that everyone needs a Plan B, independent of third-party hosts.


This article, titled “What we can all learn from the New York Times website outage”, was written by Dan Gillmor for theguardian.com on Wednesday 14th August 2013, 19.40 UTC.

Several hours after the New York Times website and mobile app went offline on Wednesday morning, the paper posted three articles on its Facebook notes page. This was material (beyond witty staff tweets) that the editors urgently wanted to get to their audience. They wrote:

As you may be aware, The Times is experiencing a server issue that has resulted in our website being temporarily unavailable. We expect the site to be restored soon. In the meantime, we are publishing key news articles in their entirety here on Facebook.

The Times’ impulse to use an alternative platform was laudable. Among the several stories it posted was a detailed update on the horrendous violence in Egypt, written by an expert journalist who did what Times readers have long expected from the organization’s foreign correspondents: a well-reported summary of what we will surely look back on as an important day in Middle East history.

But the venue the paper chose to post its material was ill-advised, for many reasons.

Facebook may have been convenient, but it – not the Times – ultimately controls what appears on its service. Facebook is not hosting this material for the sake of the Times or for people who want quality journalism. Facebook itself is an increasingly threatening competitor to the journalism industry, and it serves its own needs first.

The situation also highlighted a reality all news organizations – and all of us who rely on the web for much of what we read and say – need to understand better. Technology can be fragile. It can be hacked. And we all need a Plan B.

I run several websites. On the rare occasion they’ve gone down, due to problems at my hosting company, I haven’t had much of a Plan B for myself. I’ve tweeted that they’re down and will, I trust, be back up shortly. Meanwhile, I and my hosting provider have backup copies of everything. In a nearly worst-case scenario, I could restore what’s gone missing to another hosting service in a day or so. That has never been necessary, and I hope it never will be.
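For a small operation like mine, the mechanics of that Plan B can be simple. The sketch below, in Python, shows the general idea with placeholder paths rather than my actual setup: archive the site’s content, keep the copy somewhere a second host can reach, and a restore becomes a matter of unpacking the archive.

```python
#!/usr/bin/env python3
"""Minimal off-site backup sketch. The paths below are placeholders,
not a real configuration."""

import datetime
import pathlib
import tarfile

SITE_DIR = pathlib.Path("/var/www/example-site")    # hypothetical site content
BACKUP_DIR = pathlib.Path("/mnt/offsite-backups")   # hypothetical off-site volume


def make_backup() -> pathlib.Path:
    """Write a dated .tar.gz of the site into the backup location."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = BACKUP_DIR / f"site-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SITE_DIR, arcname=SITE_DIR.name)
    return archive


def restore_backup(archive: pathlib.Path, target: pathlib.Path) -> None:
    """Unpack a backup archive onto a fresh host's document root."""
    target.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=target)


if __name__ == "__main__":
    print("Backup written to", make_backup())
```

Run from a scheduler, something like that is plenty for a personal site; a news organization’s Plan B obviously has to be far more robust.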

Web commerce companies have vast and elaborate procedures designed to prevent such failures, and to recover quickly if they do occur. Such cases are rare, but Amazon’s bad outage, a little over a year ago, took down a number of high-profile services, including Reddit and Foursquare; months later, another outage knocked out Netflix streaming on Christmas Eve 2012. Amazon and its customers have learned from these experiences (or should have), and have taken measures to avoid them in future. But we can count on problems to recur, because Murphy’s Law will never expire.

News organizations have a particular issue: what they do is, in large part, about getting information to audiences in a timely way. The Toronto Globe and Mail has used its Tumblr blog when it had to, as its editor, Matt Frehner, noted on Twitter.

I recognize that journalists generally don’t share my view that promiscuous use of third-party services is a problem. I remain convinced, however, that the practice is ultimately bad for their brands when they do it wrong. Services like Twitter, Facebook and Yahoo’s Tumblr (and Google+, to a much lesser degree, largely because it’s less widely used) do offer a platform for promotion and, sometimes, conversation.

Certainly, journalists should participate in conversations about what they do wherever people are talking. But to hand over one’s journalism to a competitor strikes me as an error in the long term.

What should they do, instead? The Times and other news organizations should have backup blogs of their own, on domains they control, hosted by services that provide uptime when their own sites are impossible to reach. Then, when an outage occurs, they can use the social networks simply to point readers to the actual journalism. That way, the follow-up conversations take place on sites that don’t feed ever-more information into competitors’ databases.
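Automating the pointer step is not hard, either. A newsroom could run a small monitor that notices when the main site stops answering and pushes a short notice, with a link to the backup domain, out to its social accounts. The sketch below is only an illustration of the idea in Python; the URLs and the webhook endpoint are hypothetical placeholders, not anything the Times or any other newsroom actually runs.

```python
#!/usr/bin/env python3
"""Sketch of an outage monitor. If the primary site stops answering,
post a pointer to a backup blog on a domain the newsroom controls.
Every URL below is a hypothetical placeholder."""

import urllib.error
import urllib.request

PRIMARY_URL = "https://www.example-news-site.com/"      # main site (placeholder)
BACKUP_URL = "https://backup.example-news-site.com/"    # backup blog on an owned domain (placeholder)
ANNOUNCE_WEBHOOK = "https://hooks.example.com/social"   # hypothetical endpoint that relays to social accounts


def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site responds without a network or HTTP error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False


def announce_fallback() -> None:
    """Tell the social accounts where readers can find the journalism."""
    message = f"Our main site is unavailable. Full articles are at {BACKUP_URL}".encode()
    request = urllib.request.Request(ANNOUNCE_WEBHOOK, data=message, method="POST")
    urllib.request.urlopen(request, timeout=10.0)


if __name__ == "__main__":
    if not site_is_up(PRIMARY_URL):
        announce_fallback()
```

The point is the direction of the link: the social networks carry the pointer, and the journalism itself stays on a domain the news organization controls.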

The Times is back online as I write this. The newspaper has said it believes the outage was caused by glitches during a maintenance update to its servers. (Mentions of possible cyber-attack are purely speculative.) I hope, when the virtual dust settles, that the organization will be more ready for the next time something like this happens.

One thing we all learned in the flurry of online commentary about the outage should hearten the Times’ journalists: what they do is important.

guardian.co.uk © Guardian News & Media Limited 2010
