
"This is a challenge that journalism faces today -- how to fairly update and keep stories current while informing readers as best we can."

I work in and around the media industry, and I don't think most publishers see this as a "challenge" -- they have other things to worry about (like how to survive post-print, how to operate in an Internet ecosystem dominated by Google/Facebook/Twitter/YouTube, etc.).

Instead, I see a lot of excitement around journalists' ability to update their work after they publish it -- in fact, one of the few upsides of the Internet from their standpoint. Just as Wikipedia articles get better over time, some important pieces of journalism do, too -- guided by new information or by challenges to the original writing.

But look, the Internet isn't print. You can publish something at a URL, then change that URL's content. That's not a crime and it isn't worrying -- it's built into the medium.

Just as nothing makes the moment of publication special, nothing makes the moment of re-publication or rewriting special.

The New Yorker did a nice piece on the Internet Archive, which is trying to "solve" this problem by periodically archiving the web. The piece pointed out that when Tim Berners-Lee invented HTTP, he considered the idea of versioning but ultimately discarded it.

    In 1989, at CERN, the European Particle Physics 
    Laboratory, in Geneva, Tim Berners-Lee, an English
    computer scientist, proposed a hypertext transfer 
    protocol (HTTP) to link pages on what he called 
    the World Wide Web. Berners-Lee toyed with the 
    idea of a time axis for his protocol, too. One 
    reason it was never developed was the preference 
    for the most up-to-date information: a bias against 
    obsolescence. But the chief reason was the premium 
    placed on ease of use. “We were so young then, and 
    the Web was so young,” Berners-Lee told me. “I was 
    trying to get it to go. Preservation was not a 
    priority. But we’re getting older now.”
You can read the full story here: http://www.newyorker.com/magazine/2015/01/26/cobweb

I think ultimately having a strong and well-supported Internet Archive (if you care, donate: http://archive.org/donate/) is the best possible solution to this "problem". As long as we can go back and see how content evolved over time, we'll be able to study the changes.
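
For what it's worth, the Wayback Machine already exposes a simple availability API, so you can look up the closest archived snapshot of a page programmatically. A rough sketch in Python (the third-party "requests" library, the example URL, and the timestamp are placeholders I picked, not anything from the article):

    import requests

    # Ask the Wayback Machine for the archived snapshot closest to a
    # given timestamp (YYYYMMDD, or longer, down to seconds).
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": "newyorker.com", "timestamp": "20150126"},
    )
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        print(closest["url"], closest["timestamp"])
    else:
        print("no snapshot found")

The point being: the history is already out there for anyone who wants to study how a page changed; it just lives at the Archive rather than at the publisher.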

And no, media companies are not going to embed Git into their news stories -- most of their visitors don't even know what Git or version control is, and most have probably never even clicked the "revision history" tab on Wikipedia.



Quite. Who cares whether the NYT revised the article? The more salient criticism is that the article is a deeply lazy and tendentious exercise in using insinuation to push a predetermined narrative.



