Title could be:
"single page with [THE FULL ARTICLE TEXT OF] the top 30 stories"
Feature-wise this is great, it's like an auto-inlined Safari Reader version of all the articles.
Unfortunately, re-publishing other people's work (a written article) is generally considered theft or a copyright violation.
HOWEVER, things like Safari Reader exist, and I believe they side-step the copyright issue by not storing or "publishing" the work, just "displaying" it differently to a single viewer. After all, the browser is already displaying its own visual representation of the "text" (HTML).
So I assume if this were written as a browser plugin, it would be fine, or at least fine-ish? Is that actually true?
What about if it were written as a Greasemonkey script? Still fine?
What if the "Greasemonkey script" was actually just a regular javascript that was distributed with the "list of links" that then queried the articles from the client and then formatted them "nicely"? I think that might fly but seems grey? Anyone have any (real) experience with this?
What if the client-side script also stashed the "formatted" texts in localStorage for offline viewing?
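The offline-stash variant is just a cache layer on top of that fetch. A sketch, assuming the same hypothetical extraction step produced `text` for a given `url`:

```javascript
// Hypothetical sketch: cache extracted article text in localStorage so a
// previously viewed article can be re-displayed offline. localStorage is
// capped at roughly 5 MB per origin, so 30 full articles may not all fit.
function getCachedArticle(url) {
  return localStorage.getItem("article:" + url);
}

function cacheArticle(url, text) {
  try {
    localStorage.setItem("article:" + url, text);
  } catch (e) {
    // QuotaExceededError: skip caching when storage is full.
  }
}
```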
I've had to deal with a copyright issue recently, and the answer to questions like this is usually a solid "maybe." E.g., a New York district court recently disagreed with the Ninth Circuit about the "server test" (https://newmedialaw.proskauer.com/2018/03/02/new-york-court-...)
This is absolutely an incredible piece of Internet History. Thank you for sharing.
Does anyone know if 'virtual' museums exist that collect pieces of internet art? I feel like this belongs somewhere. I do not mean something like Archive.org; I mean a curated collection.
I suggest reading any of those three feeds on https://inoreader.com/ . If you add a feed there, you will have access to past aggregations from the point the feed was first submitted.
Is there anything developers do for you that you wish they wouldn't? I usually try to follow the standards when designing websites, but everything seems to come from 1980.
Cool! I find the domain blacklist particularly interesting. What prompted you to add the blacklist feature, and what sin did dolphin-emu.org commit to get themselves on it? Unfriendly to scraping, maybe?
What can sites do to avoid getting on the blacklist in the future? Were they non-compliant with some standard, or just too complicated to handle today (but you'll fix it later)?
Also, whoa, this is 5 years old, why is it being submitted now? Might be worth throwing a (2013) on this.
Maintainer here: the folks at dolphin-emu.org politely asked to be removed from the app because content monetization is how they keep their project alive.
> what sin did dolphin-emu.org commit to get themselves on it?
I second this question. The Dolphin team's writeups have gotten a pretty good reception here, from what I've seen, so it seems like a shame for those to get automatically blocked.
That was my first impression, too :) I clicked one headline to see what happens and thought, "cool." Clicked 'back' to return to the title list, and suddenly I'm looking at the HN comments and thinking "what happened?" Took a second for my brain to catch on, like a sort of cognitive shock.
Edit: ya, what hamandcheese said. I guess my tab had been open for a while.
+1, I clicked an article, it loaded (instantly, woo!), then I clicked back, which took me all the way back to real HN. I would expect to go back to the top-30 list.
I presume the commenter was asking the author of the project to fix a bug. While the submitter may not have actually been said author, it sure seems like a reasonable request.
And yes, perhaps asking the commenter to contribute is also reasonable, but the tone seems unnecessarily adversarial to me.
Nice, I've been looking for something like this for a long time. I'd recommend getting rid of that transparent black bar at the bottom; the buttons could just be arrows < ^ > put in the bottom-right corner, since you're funnelling the content in the centre.
I find the black bar pretty distracting from the content.
Also a clearer separation between the articles would be great.
Finally, if you used fragment links (#), the browser's back and forward navigation could work.
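A minimal sketch of that idea, assuming each story view is an element whose id matches the fragment (the `.view` class and `story-list` fallback id are made up for illustration): headlines link to `#story-N`, the browser records each hash in history for free, and a `hashchange` listener toggles which view is visible.

```javascript
// Hypothetical sketch: drive the single-page view from the URL fragment
// so the browser's back/forward buttons work without extra history code.
window.addEventListener("hashchange", () => {
  const id = location.hash.slice(1) || "story-list"; // e.g. "story-7"
  document.querySelectorAll(".view").forEach((el) => {
    el.hidden = el.id !== id; // show only the active view
  });
});
```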
I made one once, for a job interview. They asked for the top 10 articles with all the comments and the titles translated to two languages. It came out pretty ok.
Pretty neat from an archive standpoint. Have something scrape that page every day and you end up with the top-30 daily results in an easy-to-read format.
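As a sketch, a small Node script run daily from cron would do it; the URL and output directory below are placeholders, not the project's real address:

```javascript
// Hypothetical sketch: fetch the page once a day (e.g. from cron) and
// save it under a dated filename to build a browsable archive.
const fs = require("fs");

async function archiveOnce(url, dir) {
  const html = await (await fetch(url)).text(); // global fetch needs Node 18+
  const stamp = new Date().toISOString().slice(0, 10); // e.g. "2018-03-02"
  fs.writeFileSync(`${dir}/hn-top30-${stamp}.html`, html);
}

archiveOnce("https://example.com/top30", "./archive"); // placeholder URL
```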
You don't get lawyers knocking on your door some preset time after you get started. You get them knocking on your door when you become big enough to be worth their hourly rate.
There is a difference between being negative and being constructive; you might find the downvotes mark the subtle line between the two. This has traditionally been a community of non-trolly, happy-go-lucky, mostly kind people who are working to build things, not pull them down.
Because the people upvoting it enjoy HN and like to see all the various ways its content might be presented. Beyond merely altering HN, sites like this become general experiments in content presentation that others can study and learn from. This process of reformulating HN now spans a decade and tends to change with the times, which again presents value.