The future of news is not an article (nytlabs.com)
146 points by pmcpinto on Oct 26, 2015 | 70 comments



I created a (non-technical) prototype of basically this exact idea a few years ago to get a better understanding of the economics. It's online if anyone wants to play with it:

http://www.alexkrupp.com/Citevault.html

Basically the economics are insanely good if you're using this as an open source tool to create NYT-style articles, less so if you're Circa.

The people comparing this to the semantic web don't understand Zipf's Law, or just how slowly things actually change. E.g. we only get new data on adult literacy every 10 years. And the last data we have on antibiotic resistance for some bacteria/drugs is from the early 90s, and that's more the rule than the exception. Pretty much every single article about the U.S. is using the same set of a couple thousand facts, and most of those only get updated every ten years or so. There are a few exceptions, like federal arrest data, which gets updated yearly, but that's pretty rare.

Basically if you're trying to do this using machine learning or any sort of algorithms, you're completely wasting your time and going down the wrong path. This is way easier to implement well than you think. (But again, not necessarily super profitable unless you own the NYT.)


I ran a wiki-based fact check for a while. This looks like it could be the start of something like that. However, a big problem is that the attempt to find truth is over-rated as a driver for the news market. Most people really don't care, and by the time the few that do care have found "the truth" the news cycle has passed.


I can tell you're not focused on its styling (fair enough), but it might be good to slap a basic CSS reset on it or something. In my browser (Chrome) it's very hard to read, mostly because there are paragraph breaks within the bullet points but no spaces between different points, so you get weird groupings of text:

http://imgur.com/ShVjXXu


The future of news is not an article, nor video, nor particles, nor arbitraging the cost of a page view from advertisers vs the cost of paid distribution on Facebook (yes NYT does this) — it is finding a business model that actually works and that people want to pay for.

Let's go over some of the points brought up with that in mind:

Would you as a reader pay more for a service that had "enhanced tools for journalists"? No.

Would you as a reader pay more for "summarization and synthesis"? Probably not, Wikipedia and Google already do this really well.

Would you as a reader pay more for "adaptive content"? No.

Newspapers (including the New York Times) are in serious trouble, and instead of playing catch-up to look like a mix of every tech service, they should focus on delivering an experience that people actually want to pay for.

There is no future of news without figuring out the next model for news.


I know what I would pay for:

- More AMAs (and not just with famous people; I loved the "I'm a Joe Random Starbucks employee, AMA" threads before AMAs went mainstream).

- More ELI5.

- Supporting HN and (parts of) Reddit so that every article is analyzed and dissected by people in the know, who point out all the bullshit the news station put in, and point towards relevant resources.

Oh, and at this point in my life I'm willing to pay quite a bit for a news service that can prove to me they a) don't blatantly lie, and b) have some minimum competence on board to cover the topics they're writing about. An information source that I could trust to incorporate into my daily decision-making process is something worth paying money for.


> I'm willing to pay quite a bit for a news service that can prove to me they .. don't blatantly lie .. An information source that I could trust to incorporate into my daily decision-making process

Looking for such a single source will only ever result in frustration. Whether a given news source is assembled by a huge corporation with political affiliations or by a single intellectual who answers to nobody and has complete artistic freedom is irrelevant: the source has its own incentives, biases, and ultimately an agenda of some sort.

Even though it obviously happens in some instances, they don't even have to 'blatantly lie'; they just have to select and present facts in a way that supports their narrative. So even if you could somehow cut out all the sources that simply lie, you'd still have a similar problem. How can you routinely trust a single source for your decision-making when you know that source must have its own independent agenda, which may not always align with yours?

The answer is that you have to do the hard work of reading several sources with different agendas and incentives, getting different perspectives, and making up your own mind about what's really going on with a particular issue. There's no way around it, no shortcut. In fact, a source that charged to provide willing customers with such a shortcut would probably be the type of source most inclined to shape its stories to suit what its best-paying customers, or at least the section of the market it has cornered, want to hear.


Doesn't trust factor into this? I have friends I would trust to ask for fitness-related information and suggestions. I may end up doing more research on my own, but I trust those friends enough to skip doing a lot of the work myself, even if their agendas don't align with my own.

And when I don't have friends to ask about a particular subject, I might end up going to an online community like HN or a subreddit to ask individuals or groups of individuals those questions.

I tend to trust people who don't have a profit motive more than I do a big company with an incentive to push its product.


I like that third point about dissecting well-chosen Reddit threads. I would certainly like to watch a few YouTube videos of "Let's read Reddit with Neil deGrasse Tyson".


> Supporting HN and (parts of) Reddit so that every article is analyzed and dissected by people in the know, who point out all the bullshit the news station put in, and point towards relevant resources.

This is what I've wanted to build for a while; some kind of annotation site, like RapGenius for normal webpages.

Tangential, but my favorite annotation ever is Asimov's Gilbert and Sullivan, which I'm fortunate enough to own.


RapGenius has that, it's called genius.com/beta and it's cool.


There's also Hypothesis, but it has serious spam issues, as in "so much spam that your CPU catches fire when displaying a simple website, because there are so many spamnotations" kind of serious.


First of all, the NYT has over a million paid online subscribers. It's not like NOBODY wants to pay for the NYT. Second, I think it's short-sighted to say "they should build something people would want to pay for". It's like going back in time to when Facebook or Google was just getting started and telling them "you should build a product that users will pay for". Here's what I think: "enhanced tools for journalists", "summarization and synthesis", and "adaptive content" will all contribute to creating value. When there's more value, there will be many ways to translate that into revenue. The reason newspapers are in danger is NOT that they aren't monetizing enough. It's that they are not providing as much value as they did in their good old days.


I agree that the NYT is in a better position than most papers, but the reality is that advertisers would rather put their spend into Facebook or something like Buzzfeed, which have a much larger audience and way more page views than the New York Times. Not only that, the tools are self-serve and the interactions they get are much clearer.

The New York Times is already a large corporation, with lots of employees, tied to waning revenue streams from advertisers. I don't see how it's like going back to Facebook or Google in that regard, because their head counts were minuscule.


You say media companies should figure out the right business model, but you yourself seem stuck thinking inside the old business model box. In the short term it's true that all the new media sites seem to be crushing it with all the eyeballs they have, since the dominant revenue model of our age is advertising, but things always change. Even this online ad model was laughed at a decade ago. And it is becoming less and less lucrative as more and more people can publish online. The only way to ensure you survive is to create unique value. And page views aren't it.


I started Beacon which funds journalism online (http://beaconreader.com) — definitely don't fit into the old business model, but that's the game they are playing.


Impressive work on Beacon. Definitely not "old business model" :)

That said I think you're undervaluing the potential of the OP. You say at [1] that one of the key things you've learned is that readers fund journalism because they care about having an impact. If successful, the approaches they're discussing could significantly increase the impact of journalists' work -- reusing the same Particles in different contexts, being able to provide multiple tailored versions of the same post that could resonate with different audiences, etc. And it also may point to ways for publishers to participate in and leverage the Buzzfeed/Facebook ecosystems while still providing additional value on their own sites/apps/publications.

So, agreed that it by itself isn't a business model solution, but don't write it off so quickly ...

[1] https://www.beaconreader.com/blog/beacon-immigration-3-milli...


Maybe, but maybe lots of solutions like this already exist! Tweets, Genius for annotations, Facebook, YouTube... I agree certain backers might care more about having an article published in x ways, but I think they are overestimating their usefulness; things like tweet embeds, YouTube embeds, or using PhantomJS to render custom Twitter cards with text plus the full-length article do the same job, more or less.


Where I am, we have several local newspapers that got rid of ads completely, except for small listings at around 50€ each ("XYZ died", "Selling ABC").

And it still works.

People are willing to pay enough for quality content or content about local news that it works.


Maybe they need to shave off a couple ad employees.


I think it's both. They aren't monetizing enough because it's impossible to provide the same value when the same topic can be googled or read in a tweet in a matter of seconds.

Really these guys are trying to save a sinking ship.


I would spend money on a wikipedia-like topics engine with nicely written articles on those topics.

Say I'm an average user of the site. I look at the front page/feed and see recent news about the Syrian crisis; the link would point to an article about the event specifically ("Violence in Syria Spurs a Huge Surge in Civilian Flight"). The article itself would focus on the specific event without having to rehash what the Syrian crisis is. There could be a very clear link (maybe the subheader "Syrian Conflict") to the topic this event relates to.

I could click on this link and be taken to the broad topic of the Syrian crisis, which would feature a well-written summary of the topic as it currently stands, a ticker of the most recent events (weighted toward more significant events), an easy way to filter by country ("Germany and the Refugee Crisis", which would also include a well-written summary and Germany-specific statistics), along with any article tagged with that topic.

There's still a focus on well-written articles, but augmented by live data and focused on how specific countries or global leaders/groups are handling events (complete with interviews and smaller local stories). If an article contains events important to the global stage, it can be pushed up to higher topics in the tree (from "Refugee Crisis in Germany" to "Refugee Crisis", for example). Following those topics would give you notifications on recent updates.

I could glance through each topic, with longer, higher-quality articles toward the top (interviews, op-eds, etc.), while it takes just a bit of scrolling down to get to easier-to-consume pictures, videos, quick quotes, or graphs below.

It's all about linking great journalism together into cohesive topics. Articles and op-eds are still created and promoted on the "front page", but photographs, videos, statistics, audio interviews, quotes/tweets, and smaller interest pieces can all be provided in a sort of live stream for somebody passing the time, or used to augment the topic's page.
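The structure being described is essentially a topic tree: articles attach at the most specific node, and significant ones bubble up to broader topics. A minimal sketch in Python, with hypothetical names and a made-up significance threshold (the real weighting would be an editorial judgment):

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        parent: "Topic | None" = None
        articles: list = field(default_factory=list)

        def publish(self, article, significance=0.0, promote_above=0.8):
            # Attach the article to this topic, then bubble it up the tree
            # for as long as it stays significant enough for a wider audience.
            node = self
            while node is not None:
                node.articles.append(article)
                if significance < promote_above or node.parent is None:
                    break
                node = node.parent

    crisis = Topic("Refugee Crisis")
    germany = Topic("Refugee Crisis in Germany", parent=crisis)
    germany.publish("Germany-specific interview piece", significance=0.4)
    germany.publish("Major border policy change", significance=0.9)
    # germany.articles holds both; crisis.articles only the major story.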


> Would you as a reader pay more for "summarization and synthesis"? Probably not, Wikipedia and Google already do this really well.

Actually, they don't. Wikipedia's long-article format is very poor at creating a news archive that can be filtered, sorted, and searched like a database. OTOH, Google has a lot of tools for searching, sorting, and filtering, but they can only act on existing articles, which are part of the problem. Simply curating existing articles doesn't create a news database; you end up with Google News, which has 2000 versions of the same story. To make a news database work, the articles themselves have to be in a format that can live in a database (shorter, fact-based, etc.).


I guess I should have said "well enough." Would you personally pay for it?


I wouldn't pay out of pocket, but I don't mind viewing ads.


And yet some are fighting the invisible hand in the name of good content.

News is not just about « delivering experience » or « a business model worth paying for »; everything is. It has solid social objectives, and if people, if our society, deem newspapers or news not worth anything, then they will disappear, and it'll be sad, but it'll be fine, because news will regrow somewhere else.


A great way to make money has recently been "introducing VIP services to the middle class".

What used to be exclusive due to scarcity, labour costs, and a lack of relevant technology is being made more and more accessible.

To apply this process to "news" as a means of getting relevant information (as opposed to the kind of journalism that is at least equal parts entertainment), you need to look at executive briefings: the news delivered to the president, a congressman's morning file prepped by assistants.

Summly and Circa have tried, but picking 5-10 of THE things you need to see out of a ~million stories or even ~100 main news stories is very difficult.

But I bet Facebook, Google and Apple are very well positioned to solve this, eventually making media an ever-lower margin business.


I wouldn't pay for Facebook, but it's worth hundreds of billions of dollars, so I'm not sure this logic entirely computes. Turns out you don't have to pay for something in order for it to be valuable.

A lot of news organizations online are doing fine, and many are making a bunch of money. Granted, they're not making what they used to (because they're the text equivalent of record labels) and the Internet leveled the playing field, but news + ads definitely makes money.


The end readers don't have to use the tool. They could, but the most basic use case is writing the same type of articles for 10x cheaper, with better quality.


I remember seeing your Citevault tool. It was dope! We talked about it a lot at my work.


Thanks! Yeah I'm launching a startup in a few weeks so I had to suspend work on it, but I'll go back and finish it up in a couple years if no one else creates a suitable implementation before then. (And if someone else wants to run with it, all the better!)


Your understanding of business models is flawed.

The time when people would pay for content is over. The new model is for service providers to offer content whose call to action is in their best interest.


Paying for an "experience" vs. paying for "content" are very different things. I definitely don't think people will pay for content, and I was not trying to say they would above.

Perhaps my understanding of business models is flawed, but this year my company will pay out millions of dollars to journalists. In that regard perhaps my flawed understanding is an asset!


We have been breaking the traditional news article apart at Newslines for over a year now. The result is a hybrid between daily news, Wikipedia, and Google Search. We break each news story down into a 150-word summary unit per news event, which we then sort by date to make a timeline of the news. For example, Tom Hanks [1].

Because the news summary is treated like data, we can sort it to show different views of the data. For example, reversing the sort gives a "biography view" [2]. Compare this to a Wikipedia page, or even a newspaper article, which, because they are text-based, cannot be sorted. By adding metadata we can then filter the data. For example, we use "Event Types" such as births, deaths, arrests, and many more to let users take control over what they want to see. For example, you can see all the apologies on the site [3].
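Treating each summary as a database row is the key move here: once events are structured, the timeline, biography, and filtered views are all ordinary queries. A minimal sketch, with hypothetical field names (not Newslines' actual schema):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class NewsEvent:
        subject: str        # e.g. "Tom Hanks"
        event_type: str     # e.g. "birth", "award", "apology"
        happened_on: date
        summary: str        # the ~150-word unit

    events = [
        NewsEvent("Tom Hanks", "birth", date(1956, 7, 9),
                  "Born in Concord, California."),
        NewsEvent("Tom Hanks", "award", date(1994, 3, 21),
                  "Wins the Best Actor Oscar for Philadelphia."),
    ]

    # Timeline view: newest first. Reversing the sort gives the biography view.
    timeline = sorted(events, key=lambda e: e.happened_on, reverse=True)
    biography = sorted(events, key=lambda e: e.happened_on)

    # Metadata makes filtering trivial, e.g. all apologies across the site.
    apologies = [e for e in events if e.event_type == "apology"]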

A big advantage of this way of creating pages is that it results in far less bias than in a traditional news article. If you're interested in more, I wrote a follow up to the NYT article [4].

[1] http://newslines.org/tom-hanks/
[2] http://newslines.org/tom-hanks/?order=ASC
[3] http://newslines.org/event/life/apology/
[4] http://newslines.org/blog/the-article-is-dead-long-live-the-...


> Can you imagine if, every time something new happened in Syria, Wikipedia published a new Syria page, and in order to understand the bigger picture, you had to manually sift through hundreds of pages with overlapping information?

That is exactly what Wikipedia does. It works well because most readers are not looking in an encyclopedia for information on yesterday's events.

Likewise, the inverted pyramid works well because it simultaneously satisfies new readers, who need the most important details at the top, and repeat readers, who can quickly scan the short paragraphs for new information. I despise new-style live streaming because it is so awkward to read; I have to read backwards, bottom-to-top, and the most important details are often in the middle.


One of the major problems with Wikipedia is that it acts like a book on the web rather than a database. It is impossible to search the content of articles, and metadata on the actual content is practically non-existent. This means you can't sort or filter the pages. The live streams you have read so far are very messy -- as they should be, given the nature of breaking news -- but there are other hybrids between Wikipedia and news that can work well for most readers.


My takeaway from this article is amazement. Amazement that the New York Times is trying something new. I applaud them for that.

The Tribune Company is about to sell its Michigan Ave building since it's bleeding cash. They established a venture fund a few years ago, but last I heard they hadn't invested yet because they hadn't found anything worth an investment.

In an industry full of failing companies it's delightful to see one Titan try to stay relevant.


Definitely. I agree completely. For someone relatively uninformed on the scene, the NYT certainly seems like a leader in the news industry in pushing forward with new digital media [1].

I really like the idea of Particles. I had a similar thought this past year about building a news MVP that simply consisted of atomic facts (no more than a sentence) with an attached probability and discussion. Kind of like those you see in the IPCC (Intergovernmental Panel on Climate Change) reports, where they state their claim and degree of certainty.

We were getting there with the whole Wikidata initiative, but it's still in its primitive stages with respect to content being statically authored and updated (AFAIK). It would be very interesting to see where we go with GraphQL - I could see that becoming the dominant machine-to-machine protocol for Semantic Web 3.0. The whole idea of taking what are usually REST resources and turning them into hierarchical JSON documents which you can query very flexibly is immediately appealing.
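To make that concrete, here is roughly what querying such a news graph might look like. The endpoint and schema below are entirely hypothetical; the point is that one query returns one nested document where REST would need several round-trips:

    import json
    from urllib.request import Request, urlopen

    # Hypothetical schema: a topic with its summary and latest events.
    query = """
    {
      topic(name: "Syrian Conflict") {
        summary
        events(first: 5) {
          date
          headline
          sources { publisher url }
        }
      }
    }
    """

    req = Request(
        "https://news.example.com/graphql",  # placeholder endpoint
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json"},
    )
    # response = json.load(urlopen(req))  # one round-trip, nested JSON back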

If this Particle concept catches on like cards have in UI design, I'd expect Twitter to be the first to ride the wave.

[1] http://www.niemanlab.org/2014/05/the-leaked-new-york-times-i...


Larry Sanger, Wikipedia's co-founder, already tried to make a site using atomic facts. It was called Infobitt, and it lasted about a year before he closed it down a few months ago. The problem with using the fact as the smallest data unit is that it invites fact checking, which slows down news reporting. Ironically, if Sanger had actually used his system to create Wikipedia pages, it would probably have had more success. Wikidata is not going to work either; it suffers from the same issue, as well as having to rely on Wikipedia's "facts".

As for Twitter, I doubt they can do it either. Their "Moments" initiative shows how little they understand curation.


Vox is doing a pretty good job of it. The NYT already has quite a large number of topic pages on a variety of subjects [1], [2]. It's not Wikipedia, but it's not a small number of articles either.

[1] http://topics.nytimes.com/top/reference/timestopics/people/i...

[2] http://topics.nytimes.com/top/reference/timestopics/subjects...


I checked one of the feeds [1]. It's just a list of existing articles, by NYT writers, with no metadata. That's not what the NYT article is about. To make the meta tagging work you also have to strip the article down into individual news events. For example, imagine a list of NYT articles about Yoko Ono. Almost every article about her will mention that she was married to John Lennon, and that he died in 1980. In an article you have to repeat this information, because you assume the physical paper was thrown out each day. But in a news database these would be separate data events, and would not be repeated. Vox is the same - they still use articles, not data in their place.

[1] http://topics.nytimes.com/top/reference/timestopics/people/g...


There is actually a job title at newspapers called "fact checker". That's why people tend to trust high-end newspapers like the NYT over some random blog that posts whatever it hears ASAP to get traffic. I doubt the NYT plans on opening up their Particles to any randos on the web. They already have great in-house employees who can do this job very well.


> The problem with using the fact as the smallest data unit is that it invites fact checking, which slows down the news reporting.

Could you elaborate on this? Specifically, what do you think is the better alternative?


Let's say the news is about a plane crash. At first it is reported that there are 10 deaths, but a few hours later it's reported that there are 100 deaths, and then 99 deaths. So imagine we are writing an article for a newspaper, or even a Wikipedia article. The reporter or contributors are trying to find out the truth, because their objective is to create "the perfect article". So when the later death toll comes in, they will often rewrite the original article to make it correct, but in doing so they hide the original reporting. The reader will see an article that says: "Plane crash: 99 dead."

Newspapers don't print feeds, there's no room. But if you have the archive to hand, as a database, then you don't need to make a perfect article. You just need to add updates as they happen.

Event 1: Plane crashes, 10 dead
Event 2: 100 dead confirmed
Event 3: Actually 99 dead

This leaves the original events as a record of what was reported at the time, but the reader can also see the update. This is important to readers because while information can be added to articles, it can also be taken out, to try to conform to a narrative. Wikipedia is particularly prone to this kind of selection bias dressed up as fact-checking.
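In code terms, this is just an append-only log per story: corrections are new entries rather than edits, so the reporting history survives alongside the current best information. A minimal sketch (names hypothetical):

    import time

    story = []  # append-only log for one story

    def report(story, text):
        # New information is appended, never edited in place.
        story.append({"at": time.time(), "text": text})

    report(story, "Plane crashes, 10 dead reported")
    report(story, "Death toll revised to 100")
    report(story, "Confirmed: 99 dead")

    latest = story[-1]   # what a new reader needs right now
    history = story      # what was reported, and when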


Confused: correcting mistakes is 'selection bias'? How is incorrect early reporting news at all? The act of reporting isn't the news; it's the thing that actually happened.


Often we don't know the facts until much later. Even then, the facts may be a matter of opinion, or may have conflicting sources. The gun control debate is a good example, where facts on both sides are used to support each side's biases. On Wikipedia we see one side using a particular set of facts and then deleting the other side's facts. The same is true in newspaper articles, where certain awkward facts are conveniently left out of reporting and analysis. But what if both sides are correct? We'll never know the truth, because there are always different ways of looking at something.

In many cases, fact checking is a way to hide bias under a veneer of authority, by proclaiming a selectively chosen set of facts as "the fact-checked truth". You only need to follow the major fact-checking sites for a short time to see this in action.

The question is how can you effectively present conflicting information. I believe the article format is biased from the outset, whereas a more data-driven approach leads to less bias.


Infobitt is still technically live, but it is so riddled with problems I don't know where to start.


Amazed they're trying to save their business?


Look at the guy that Sam Zell put in charge of the Tribune while trying to turn it around: http://www.nytimes.com/2010/10/23/business/media/23tribune.h...


Former NYT journalist here. I'm really glad the paper is experimenting like this.

I want to make a slightly bolder statement than the headline of the post:

The future of news is not written, and it's not one way. It's conversational and spoken.

We've been migrating away from the printed word at least since the advent of radio, and TV only accelerated that trend.

What people want is the interaction and surprise and meaning created in a conversation. Papers like the NYT aspire to "drive the conversation." But they are not engaged in "the conversation" on an individual level. They are largely confined to the one-to-many schema of the old news flow, where publications speak and readers listen.

The future of news, imho, is chatbots personalized to the user, which bring up the daily news like small talk on a long commute, based on the AI's knowledge of the news consumer. And it'll happen through a voice UX just as much as through print.


Thanks for the interesting perspective. While I can see myself enjoying chatterbots telling me news in a casual way, I still wonder - with such focus on "surprise and meaning", where do facts enter the picture? Are we migrating away from factual information towards personalized, opinionated content?


I think we already migrated. Right now, each story is a bundle of facts and statements. A chatbot could disaggregate those facts and feed the news consumer answers to the questions they have about the subject, without imposing such a total order on the presentation of those facts. That would be the give and take, and the conversation could range far beyond a single piece of news. It's all about contextualizing events. Reporters and editors make a lot of decisions about the context they present, but in this new choose-your-own-adventure format, readers could explore context beyond the limits imposed by a small newshole and all the decisions that entails.


I wonder how this will change the way news and society interact. Right now, news serves as a very important social object: recent events and popular articles provide default topics of conversation. If we move to a "choose-your-own-adventure format", we may lose the ability to discuss it, as everyone will have their own version of "recent events".

BTW, the way you described it - choose-your-own-adventure, seamlessly explorable news - sounds really cool. I'd definitely love to try something like that.


There is no reason for new information to be more important than old information.

Sure, there's a subset of information that's ephemeral and must expire at some point. The kind of information with a call to action. But that shouldn't be the whole picture.

We need a new kind of information engine that both knows what you know and knows what you want to know. Something that both teaches you old and timeless facts and keeps you up to date with new discoveries.

New shiny stories shouldn't distract me from what I planned to read yesterday. We need a cure for novelty.

I like this New York Times initiative.


> There is no reason for new information to be more important than old information.

New information is often more entertaining than old information, particularly if the old information is already known to the reader.

For better or for worse, I think news is often used for entertainment.


> 3. Adaptive content

The only reason I don't order a morning paper these days is that it's physically so big. I only read about 5% of it. If I forget about it, it's going to overflow my mailbox. And I would have to take the papers out to the garbage twice a week to fend off chaos.

Could I please subscribe only to politics, science, actual news, and opinions? Curated by a major local news outlet. It would be the best possible way to kill the time it takes to drink two cups of coffee.

I think people are overthinking this with "what would the average person want?" Maybe they want what you want?


Funny, I just read this in the book The Logic of Failure by Dörner:

> The information a newspaper-reading citizen receives about economic developments or the spread of epidemics, for example, lacks both continuity and constant correctives. Information comes in isolated fragments. We can assume that those conditions make it considerably more difficult to develop an adequate picture of developments over time.

Maybe this is the answer?


This thread illustrates the need for personalized services, aimed at individuals who will pay for what they want. I mean, we have lived through the 'here's a package of content our psychic powers indicate you will want to buy' era (newspapers, record albums, cable TV packages). What people want differs, which is why we need intelligent content. The capabilities (that's what upper management cares about) add value to the organization adopting the new approach and make it capable (capacity + ability) of delivering better, more people-focused (instead of persona-focused) content.

Agility is key. The world of big content will change everything as we begin to ask questions of content and deliver adaptive content focused on individual needs.

My two cents. My crystal ball is cracked, but seems to have been working fine lately. ;)

Scott Abel TheContentWrangler


I'm the lead dev at a company that ten years ago was a B2B publishing company. We published trade magazines for a variety of technical and R&D type fields.

We're now in online publishing (obviously) and just finished tagging several hundred thousand pieces of content with metadata that we'd used a 3rd party service to extract. We'd reached the exact same conclusions about the process of organizing and categorizing our ginormous back catalog of content, and having humans perform the tasks of summarization and categorization just "doesn't scale" temporally. Interesting topics come and go, editors come and go.

Anyway, I'm really proud that we already have (in production) a system for tagging our content in this exact same way and have already built a v0 recommendation engine out of it.
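For context, a v0 recommender over tagged content can be as simple as ranking by tag-set overlap (Jaccard similarity). This is a generic sketch, not the system described above:

    def jaccard(a: set, b: set) -> float:
        # Overlap between two tag sets: 0.0 (disjoint) to 1.0 (identical).
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def recommend(current_tags: set, catalog: dict, top_n: int = 5) -> list:
        # catalog maps article_id -> set of tags; rank by overlap with
        # the tags of the article the reader is currently viewing.
        ranked = sorted(catalog.items(),
                        key=lambda item: jaccard(current_tags, item[1]),
                        reverse=True)
        return [article_id for article_id, _ in ranked[:top_n]]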


I think a lot of these ideas in here are awesome... but I don't see how they're different from the basic semantic web concepts that have been tossed around for years now.


Novelty, man. People will always go after the new and shiny, even if it's just redefinitions of old ideas.

I've been preaching the merits of the semantic web for years. People just ignore it. I've never been able to tell whether the idea is flawed, or people simply don't get it.

I regularly go as far as to claim that the lack of semantic web is the cause of all the world's problems.


The idea is flawed.

Semantic Web expects people to do extra work, with no benefit. Create a benefit for those people, and you'll see it get done.


The benefit is the unified interface.

Imagine Facebook being replaced by the semantic web.


For the people creating content, that's a problem, not a benefit.

Yes, for the readers, if there were enough structured content available, that would be a benefit. They aren't the ones that need convincing.


I think the difference is that the Times potentially has the ability to execute. The semantic web was a cool idea but failed to catch on even with all the grassroots efforts. The NYT already has tons of content with tons of metadata, and they have the readership. Maybe they can pull it off.


I don't think they can put it into production easily. They would have to rework their entire site, their entire workflow, and the way their journalists work. What they are describing also doesn't work well on existing articles, because those articles, for which they have some metadata, are not in a format that works well as a database. For example, each article repeats a lot from earlier articles. They say it would require a lot of manpower. We solve this issue by crowdsourcing the events and their metadata.


The article is published by the same group that's building tools to help with this exact problem. [1]

[1] http://nytlabs.com/projects/editor.html


Could this be done using Twitter? Each reporter/citizen who has an update on a news item would post a tweet with the info, linking back to the original news item(s) or the one(s) that needed updating. Hashtags would be used to categorize the news item. You get an annotated DAG.

Users can then filter based on the reporter they want to follow on that news item, the hashtag that interests them, or the timeline of the news item they're following.
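A sketch of the underlying structure (hypothetical field names): each tweet is a node whose back-links to the items it revises form the annotated DAG, and the filters described above become simple predicates over nodes:

    from dataclasses import dataclass, field

    @dataclass
    class Update:
        author: str
        text: str
        hashtags: set
        revises: list = field(default_factory=list)  # edges to earlier items

    root = Update("reporter_a", "Plane down near the coast, more soon",
                  {"#planecrash"})
    u1 = Update("reporter_b", "10 dead reported", {"#planecrash"},
                revises=[root])
    u2 = Update("reporter_a", "Toll revised to 99", {"#planecrash"},
                revises=[u1])

    # Filter a feed by hashtag or by reporter, as described above.
    feed = [u for u in (root, u1, u2) if "#planecrash" in u.hashtags]
    by_author = [u for u in feed if u.author == "reporter_a"]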


Not only news. Articles' future is not articles.






