I had the same experience with GR and also Amazon.com, which constantly peddles the vampire romance books when I am looking for recommendations for horror/fantasy. Both Amazon's and GR's strategies make sense: best-selling books sell the best, so recommending them increases profits. However, it does suck being a reader looking for new book suggestions.
I've spent a good deal of time making my own book recommendation algorithm, which has been working well for me for the last two years. Through it I've discovered old authors I didn't know (Ted Chiang, Clive Barker) and new authors whom I wouldn't have noticed (Scott Hawkins, China Mieville). Of course, it always helps to get recommendations from friends with similar tastes, too. :)
Like, if you like literary fiction, go through the Pulitzer Prize for Fiction winners and just read. (I'm not even halfway through that list, but everything I've read on it has been really, really good.) There are all sorts of awards for smaller niches, too: the Nebula, the Hugo, etc.
(Actually, that's a question. What is the award for the romance genre?)
I... personally don't understand why people even try to automate making better recommendation engines when highly skilled and respected experts are already doing it for just about any niche, and releasing the results for free.
I'm into SF and Fantasy. In the past, I would read the reviews featured in Locus magazine by the various reviewers. Nowadays, I occasionally read Locus but also reviews from other places like tor<dot>com, the Magazine of Fantasy and Science Fiction and Interzone magazine.
I wish there were a website where I could put in all of my favorite video games, books, movies, anime, etc., and it would recommend things based on what people with similar tastes liked. Then I could try out a recommendation and either like or dislike it.
I would slowly acquire a recommendation network of people like me, effectively crowdsourcing the content-finding to a bunch of clones!
You rate movies, it calculates what it calls your "soulmates" (people who rated similarly), and then you get recommendations of movies based on what those people loved, which you can filter by genre, era, etc. You can recalculate your soulmates anytime, and there are some options to tweak the algorithm.
You can also organise movies into lists (kinda like music playlists) and those are public, so if my preferences match user X and they happen to have a list titled "movies I enjoyed this past year" I can just check that.
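The "soulmates" calculation presumably reduces to some user-user similarity measure. A minimal sketch of one plausible approach — cosine similarity over commonly rated movies; this is my guess at the idea, not the site's actual algorithm, and all names and ratings are made up:

```python
from math import sqrt

def similarity(a, b):
    """Cosine similarity over the movies both users rated (dicts of id -> rating)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    na = sqrt(sum(a[m] ** 2 for m in common))
    nb = sqrt(sum(b[m] ** 2 for m in common))
    return dot / (na * nb)

def soulmates(me, others, k=2):
    """Return the k users whose ratings line up best with mine."""
    ranked = sorted(others, key=lambda u: similarity(me, others[u]), reverse=True)
    return ranked[:k]

me = {"alien": 9, "blade_runner": 10, "notting_hill": 3}
others = {
    "ann": {"alien": 8, "blade_runner": 9, "notting_hill": 2},
    "bob": {"alien": 2, "blade_runner": 3, "notting_hill": 10},
}
print(soulmates(me, others, k=1))  # -> ['ann']
```

Real systems typically also mean-center the ratings (Pearson correlation) so one user's general harshness or generosity doesn't dominate the score.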
But I don't want recommendations from people like me. I want recommendations from people with good taste.
A lot of really significant cultural works have become so absorbed in our common culture, and sometimes improved upon, that the original starts to look cliché and even naive. Is there a name for this effect?
I suspect people like it for the same reason they liked their first anime: it's an unusual style that seems very original unless you have been reading other stuff written in the same vein by, say, Philip K. Dick.
The focus on cyborgs a year after The Six Million Dollar Man TV show kind of shows how much a product of its time it was. But that's the surface.
The way it portrayed both hacking and brain-machine interfaces was wildly off base and basically copied from other science fiction. Virtual reality, for example, goes back to 1933. The main character being a druggie is fairly common for that period, again not a big deal, as is copping the tone of other works, etc.
All the big stuff is forgivable, but he also copies little things like replacing the liver and kidneys to better filter the blood and thus prevent someone from getting high or poisoned. Sounds good, but blood takes around a minute to circulate, and most of it does not hit either organ on the way. It might reduce how long someone stays high or improve their chances when poisoned, but it's really not enough to prevent either.
Granted I prefer hard sci-fi, but the novel’s focus is really on style over science fiction. It’s IMO somewhere between space opera and fantasy.
How I looked at Gibson's work changed completely after I read Pattern Recognition when it came out, in my early 20s. It was very explicitly about style, and I went back and re-read the older stuff which I'd read as a child, and yeah, you could also say that Neuromancer is about style and fashion. It was interesting just how much reading the later book changed how I thought about the earlier books.
(Note, I still really enjoy Gibson.)
I promise not to blame anyone for a recommendation that's flawed. They're all flawed. Anything where the story is based on the implications of known (well, currently accepted) science without any bogus magic is as hard as trying to figure out what will really happen in a large software project that hasn't begun yet. But what have you liked despite its flaws?
I read the first two books (Voyager and Titan) from his NASA trilogy. These books are set in a near future or alternative timeline and cover interplanetary journeys (to Mars and Titan) involving the use of NASA technology. Both books seem very well-researched and true-to-life.
Another book I really enjoyed was Coalescent. It's a blend of historical and science fiction: the historical part tallies with my own understanding of the late Roman Empire in Western Europe, while the science part is more speculative – a human society that gradually evolves to become eusocial.
On a very different scale is Space, which explores the Fermi paradox, communication between different sentient species, and the long-term survival prospects for civilisations of sentient species. Unlike the other books, which have more straightforward scientific concepts, I found some of the ideas in this book to be mind-expanding; they really pushed my imagination to its limits.
From a story-telling perspective, his books are well-plotted with well-drawn, compelling characters (you really empathise with the protagonists and want to find out what happens next). I learned about a lot of diverse topics, e.g., the theories of Giordano Bruno, the history of NASA projects (e.g., NERVA), the tyranny of the rocket equation, explanations of the slingshot effect, the economics of the Roman Empire, eusocial organisation and behaviour, lunar geology, the meteorology of Titan, how humans could survive in a micro-gravity environment (and space in general), the consequences of gamma-ray bursts, and much more.
Looking at Baxter’s Wikipedia page, I can see that I’ve only scratched the surface as he’s written many more books. Unfortunately, over the past decade, I’ve got out of the habit of reading novels but I really should make more of an effort.
Anyway, it feels like a cop out but The Martian by Andy Weir is worth the read. The most obvious issue is the opening storm would not have done much because the atmosphere is so thin, but it is generally ok on the science side.
"Seinfeld" is Unfunny
There is a certain amount of calibrating for the awards group; for my taste, the Nebula is... not 100%, much like how I enjoy '80s action films rather more than Ebert did. Like, I've never read a Pulitzer-for-fiction book that I didn't think was incredible, while some of the Nebula books I've read were only pretty good. (I haven't read anything by Liu Cixin yet, but it sounds like my thing? I mean, it was hyped, to me at least, as genre sci-fi written from a very different cultural perspective, which is totally my thing.)
(Conversely, by way of making a positive contrast with something in a similar genre, I really enjoyed A Scanner Darkly by Philip K. Dick.)
To be totally clear, I think Neuromancer is probably a thoroughly enjoyable read and I'd recommend it without hesitation to someone looking for what it is. It's just pretty easy to take issue with.
The only disappointing part is that we don't have vat-grown assassins; if you want a transplant, you have to go to China or Iran.
There is a long tail of long tails: niches within niches within niches. Some of these don't have a single proven trustworthy reviewer, let alone enough that the rough edges of their opinions get sanded off by aggregation. For these ultra-niche interests, it'd still be nice to have a guide. ML can do that.
I'm dying for human curation.
With as many YouTube videos as I've watched since 2005, you'd think they might recommend a video with fewer than 1000 views once in a while. I've found 1 new channel in the last 6 months and used the "Not Interested" option more times than I can count. And the rules are unclear. I don't want tech reviews from 5 years ago, but if I hit Not Interested, does that influence the recommendation frequency of the channel, topic, age, keyword, etc.? I don't mind old DIY or woodworking videos: I'm subscribed to the channel, and I watched the recommended video 5 years ago. (Of the current 12 YT recommended videos, 2 are labelled Watched and only 2 are less than a year old.)
I think part of it stems from the lack of user organization features. Offer too many and you scare away users, while the people with vested interests dump money and time in to wash out any negative opinion (see Amazon reviews). Offer too few and you get poor recommendations during on-boarding/startup.
There are no rules. They just record your preferences and might retrain their black box algo in the future.
The black box decides those rules
Older recommendation systems use background and foreground timeframes to build frames of references, hence repeats.
The question to ask then is: what is the correct thing to optimize for? Dwell time is usually chosen for advertising and stickiness (you are more likely to stick with a service that plays what you like). Optimizing for novelty is difficult because the set of unknown things is much larger than the set of known things. Plus, it is risky from a business point of view.
I am not sure how many actions are needed, but I can think of a few more:
* Not now, but later is fine
* Maybe next month
* Play the opposite of this
* Something similar but new to me
That's not just nice; that is the most obvious "feature" a normal person would design. I've stopped watching Netflix because of nonsense like this. They know I've seen it before, because I watched it in their player. Also, even the tiniest, most modest application of analytics would discover that I always watch one series at a time, from start to end, or until I'm bored with it. In the past years, I have never ever watched a thing twice. Still, it was constantly showing me things I've already seen.
The only conclusion I can draw from this is that Netflix does not have my best interest in mind when designing their algorithms and that they don't respect their customers. So I canceled.
With Spotify, they do have a few features that work for me. I've discovered new artists by exploring their "Fans Also Like" feature. The nice thing about this is that they don't try to be too smart there.
Their normal recommendations suffer from the same issues that other sites have and are thrown off by the fact that my tastes are all over the place. I happily listen to sixties psychedelic rock, jazz, metal, and some techno or punk, and tend to go from one to the other. Yet I'm very picky about what I listen to. Somewhere along the line it seems to have decided I'm a middle-aged guy (correct), and it consistently does not recommend me any music made this century, which is kind of frustrating if you are trying to find something new to listen to. Recommendation bubbles are a thing, and escaping from them is hard.
I work around it by using the Fans Also Like feature and its suggested additions to playlists. This works surprisingly well. Example-based similarity search is a much simpler problem than recommendations. And it's IMHO a much more interesting feature to explore content with.
Back when I used to use it (about a decade ago), Pandora was good at shuffling, but their library was so small I would get the same songs over and over.
If I can't make it through 5 minutes of a movie I sure as fuck don't want to watch another movie that's just like it. Read the room already.
I don't think that can be stupidity, it must be malice...
> by Michael Dean (Adaptation), Andy Hopkins (Editor), Jocelyn Potter (Editor)
Also in tiny text below the description:
> Paperback, 71 pages
...that seems really short. Not sure this is the full novel.
A couple of points. All my searches selected an (in my opinion) less prominent result as the top hit and I had to click "see other results". The searches I remember doing were "Blood Meridian" (should show Cormac McCarthy as the top hit) and "Waiting for the Barbarians" (J. M. Coetzee).
Secondly, I'm not sure if it's sensible to show books by the same author, at least nowhere near the top.
Lastly, while I like the UI and the idea of a handwriting-type font, I find the current font a bit illegible. It's readable, but doesn't skim well.
Will definitely be using this in the future!
It takes a lot of effort to read it. I'd say that for me it's 80% unreadable. Please, at least make a button on the site to select another font if a user feels like I do.
I shared your link in a chat, but it doesn't show itself off: https://i.imgur.com/r5hTqwG.png
(Got good suggestions from the first book I put in, thanks!)
But I like it.
Would you mind describing a bit of your algorithm, or is that secret info?
This is a hard problem because popular != good. But I would like to see data on whether recommending popular books works best. I'm sure Amazon and Goodreads have this data, but they are horrible at recommending.
Edit: I see you wrote beads. I have to improve the fuzzy search. Thanks!
Note: You have a typo on your About page – "Amazon Convservation Association" (extra "v").
Out of curiosity, where do you source your data from?
The only tweak I’d make? Link to UK (etc, but UK for me) Amazon too! Mainly because I’m lazy but also you could then set up a UK affiliate tag. I buy a lot of books, get your slice!
> Even though there is an AI behind this website, recommendations should like they are hand-picked and hand-written.
Missing the word "feel"
I think in these cases you can do better by bringing in some content-based filtering. I made an experimental book recommender using only story trope tags and I thought the results were already better than what I was getting elsewhere. It's still up at https://bookslikethis.herokuapp.com/ (but it basically only has sci-fi/fantasy titles).
Recommendations seem good for what I checked. One bit of weirdness though: searching for Gödel, Escher, Bach gets me a book by "Agnes F. Vandome" that, on googling, is some sort of fake book of compiled Wikipedia articles, and I need to click on the alternatives to find Hofstadter's book. So it looks like the search system can be successfully spammed with fake titles for reasonably notable books.
Many, many times I've thought of how I'd build a competitor, but it is pointless because Goodreads' moat is too big: the integrations with the Kindle go a very long way towards cultivating engagement (when you start a book, the Kindle will update your profile by default; it'll add your rating and mark the book as finished, all through the normal UX of the Kindle).
There is no way, in my opinion, to overcome the handicap of needing users to manually update what is automatically updated by the Kindle. And I can't see 'linking' accounts working, because there'd be no incentive for Amazon not to block access from a competitor. And, frankly, the Kindle is the only platform that matters; it likely has 95%+ of the ereader market.
Goodreads is bad because it is a monopoly, and that, frankly, sucks.
I didn't continue with it because there just wasn't any real value. I don't want a trophy shelf of read books or automated recommendations from "people who have read similar books", because those titles are very predictable.
I'd need something different, e.g., something facilitating deep discussion and Q&A organised by chapter or something. Get authors participating and I'm there 100%.
I've used it a few times to show other people what I've read and help them get book recommendations or even find a book title again.
What I'd really like is more access to the data and filtering mechanisms to try to build recommendations for myself.
Example: Find the highest rated books that I haven't read in the Fantasy category, as rated by an audience of people who have rated at least 7 out of 10 books from a list I provide with a score of 4 or higher, and who have never rated a Twilight book with a 3 or higher.
And if that doesn't work, maybe I tweak it a bit. Build a score threshold for excluding reviews from people based on other criteria about things I don't like. The point is that the data would help make the decisions.
Of course, that won't happen because of privacy concerns. With all the data locked up it means only Amazon can do it, and they're failing at it.
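A filter like the one described above is straightforward once you have raw ratings in hand. A toy Python sketch over made-up in-memory data — every book name and threshold here is illustrative, not real Goodreads data:

```python
# The "list I provide" of seed books, the excluded series, and my own
# reading history -- all hypothetical.
SEED = {f"seed{i}" for i in range(10)}
TWILIGHT = {"twilight1", "twilight2"}
MY_READ = {"seed0", "hobbit"}

def trusted(ratings):
    """A reviewer qualifies if they rated >= 7 of the 10 seed books 4+
    and never rated a Twilight book 3 or higher."""
    liked_seed = sum(1 for b, r in ratings.items() if b in SEED and r >= 4)
    liked_twilight = any(r >= 3 for b, r in ratings.items() if b in TWILIGHT)
    return liked_seed >= 7 and not liked_twilight

def recommend(reviewers, catalog):
    """catalog maps book -> genre; reviewers maps name -> {book: rating}.
    Returns unread fantasy books ranked by average rating among trusted reviewers."""
    scores = {}
    for ratings in reviewers.values():
        if not trusted(ratings):
            continue
        for book, rating in ratings.items():
            if catalog.get(book) == "fantasy" and book not in MY_READ:
                scores.setdefault(book, []).append(rating)
    avg = {b: sum(rs) / len(rs) for b, rs in scores.items()}
    return sorted(avg, key=avg.get, reverse=True)

reviewers = {
    "good": {**{f"seed{i}": 5 for i in range(7)}, "mistborn": 5, "lotr": 4},
    "bad": {**{f"seed{i}": 5 for i in range(7)}, "twilight1": 4, "eragon": 5},
}
catalog = {"mistborn": "fantasy", "lotr": "fantasy", "eragon": "fantasy"}
print(recommend(reviewers, catalog))  # -> ['mistborn', 'lotr']
```

The hard part isn't the query logic; it's that the ratings data needed to run it is locked up.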
People in many different jobs use Excel and SQL databases – I wonder if there are enough of them to support 'power user' but mass-market apps with that sort of interface.
We built a pretty complex recommendations engine on top of this and, quite frankly, it worked really well. Simplified: if you have actual photos of bookshelves, it's easy to start building recommendations. Person A read a, b, and c; person B also read a, b, and c; person A additionally read e and f, so maybe person B would also like e and f.
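That shelf-overlap logic fits in a few lines. A minimal illustration — titles and the overlap threshold are made up, and a real engine would weight by overlap size rather than use a hard cutoff:

```python
def shelf_recommend(target, shelves, min_overlap=2):
    """If another shelf shares at least min_overlap titles with the target
    shelf, suggest the books it has that the target lacks."""
    suggestions = set()
    for other in shelves:
        if other is target:
            continue
        if len(target & other) >= min_overlap:
            suggestions |= other - target
    return suggestions

shelf_a = {"a", "b", "c", "e", "f"}
shelf_b = {"a", "b", "c"}
print(shelf_recommend(shelf_b, [shelf_a, shelf_b]))  # e and f, per the example
```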
The main issue is that publishers are archaic and building a sustainable revenue model against that is a challenge.
I'd be more disappointed if I had to go to five different sites to see what different segments of my friends were reading. Sort of like...messaging...where I have to use WhatsApp, SMS, FB Messenger, Slack, and GChat because different friends/family are adamant on one particular silo.
I know monopolies are not good, but there are some benefits to scale, especially for a site like this where I'm not paying and I'm not forced into buying bundles/packages I don't want.
"one protocol to suit them all" would be a lot harder with books though
I have trouble finding up-to-date sources, but Googling has some out-of-date data indicating Kindle in the mid-80s for American marketshare and mid-50s for global marketshare.
Personally, I've found Kindle and Kobo ereaders to be broadly equivalent from a hardware perspective. Kindle has better integration with the Amazon ecosystem, while Kobo has better support for DRM-free formats and better integration with local libraries. Ebooks that Amazon and Kobo offer also seem to be broadly equivalent in selection and price. (And Kobo will price-match Amazon, or anyone else.) The largest difference I've noticed is in audiobooks that are exclusive to one or the other, with Audible having more of those but also being more expensive than Kobo's audiobook service.
I took a quick look and there is an official Goodreads API that you can use to pull titles from people's accounts (once they authorize your app). Yes, Amazon could shut it down, but eh, might be worth the risk.
I remember someone making an open-source version of IMDB (can't remember the link now); I think I saw it on Patreon. Would be nice to make something similar for books.
One feature I'd love as a user (specifically for non-fiction books): given a book, show me all books referenced in it (in the chapters, footnotes, etc.). I've found quite a few gems simply by scanning the referenced-books list in the books that I like. After all, the author is an expert on the subject, and it is safe to assume he/she would have read tons more books on the subject than me.
Of course I'm only searching for one book at a time.
1) I desperately want a feature that'll show me the books most-reviewed among my friends, or highest reviewed with >X # of reviews. You can see what individual friends have reviewed, but there is nothing I've found to aggregate.
2) More reading stats (right now it shows number of books, and total pages at the end of the year). I wanna see breakdowns of categories, authors, fiction/nonfiction.
3) I had an idea for per-book or per-series wikis that have spoiler-bracketed info. E.g., [spoiler b2p300 "King Soandso is murdered by Assassin Soandso"] would show in a book wiki if you are past book 2, page 300. I read a bit of fantasy and have a tendency to want to google 'who is x', but that is fraught with spoilers. This is a feature that could exist as its own site, but I think it'd be a good companion to a goodreads-esque site.
4) Book Discovery seems super weak on Goodreads and I feel I could build something significantly more useful.
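The spoiler-wiki idea in (3) is easy to prototype. A hedged sketch — the tag syntax is my reading of the proposal, and the placeholder text for hidden spoilers is made up:

```python
import re

# Matches the proposed [spoiler b<book>p<page> "text"] markup.
TAG = re.compile(r'\[spoiler b(\d+)p(\d+) "([^"]*)"\]')

def render(wiki_text, reader_book, reader_page):
    """Reveal a spoiler only if the reader is at or past its book/page mark."""
    def repl(m):
        book, page, text = int(m.group(1)), int(m.group(2)), m.group(3)
        past = (reader_book, reader_page) >= (book, page)
        return text if past else "[hidden: read further to reveal]"
    return TAG.sub(repl, wiki_text)

src = 'Later, [spoiler b2p300 "King Soandso is murdered"].'
print(render(src, 1, 50))   # still on book 1: spoiler hidden
print(render(src, 3, 10))   # past book 2, page 300: spoiler shown
```

Tuple comparison handles the "past book 2, page 300" check: a reader on book 3 sees everything from book 2 regardless of page.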
It's basically just the momentum it has, along with the crowdsourced book directory (you don't have to enter the metadata manually in like 99.999% of cases). I'm not gonna say "don't bother", but I've personally used three Goodreads alternatives over the years and nobody has mentioned a single one of them yet. I've settled on BookDigits, which is what Goodreads should be, if you ask me.
> Yes, Amazon could shut it down but eh, might be worth the risk
Technically, the API isn't the only way users can extract their data from Goodreads. There's also an export/import tool that exports data as CSV, and there's an RSS feed you can use.
If there's one good thing I can say about Goodreads, it's that it's not a walled garden.
The reason “better frontend” is in quotes is because it wouldn’t stop at just making the UI/UX better, but also do better search using Algolia, a better book recommendation engine, etc. but there’s no reason it couldn’t just use the goodreads api as essentially the backend for user profiles and ratings
Amazingly though, this system also doesn't integrate with checkout/-in records, but this seems like a small, easily-fixable feature gap.
Not in Germany though
If not, I'm clueless because I have purchased mine from German Amazon and it does all of those things.
We tuned the relevancy ranking to work for exactly those sorts of searches, and put the things the OP was looking for on top (or at least under other books with exact same title). His examples look just like some of our QA searches for our relevancy ranking (in addition to standard tf/idf: boost adjacent words matching higher than non-adjacent or out of order; boost match of complete title higher than partial title; boost match before the subtitle colon more than after; boost title matches more than author more than other fields; etc).
And this was ~5 years ago, and this was an underfunded university library IT department where the development team consisted basically of me, just using stock Solr for relevancy ranking (the tuning came in how we constructed and boosted our indexed fields). No fancy machine learning, just configuring fields and boosts deterministically.
So, this could be done, if they cared about the site at all.
[In fact, hey, I can give you actual examples from that project. Turns out there are a lot of books called "The Confession" -- we didn't do anything fancy to try to guess which one more readers would be looking for.
(hope I don't "slashdot effect" my former employer...)]
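The boost recipe described above (adjacent words beat scattered ones, complete-title matches beat partial ones, title beats author beats everything else) maps fairly directly onto Solr's edismax parameters. A rough sketch of what such a configuration could look like — the field names are my own invention, not the actual project's schema; `qf`, `pf`, and `pf2` are standard edismax parameters:

```python
# Hypothetical edismax query params. "title_exact" stands in for an
# untokenized copy of the title, used to boost complete-title matches.
params = {
    "defType": "edismax",
    "q": "the confession",
    # per-field term boosts: exact title > title words > subtitle > author > catch-all
    "qf": "title_exact^100 title^20 subtitle^10 author^5 text",
    # phrase boost: adjacent, in-order matches in the title score far higher
    "pf": "title^50 author^10",
    # bigram phrase boost softens the all-or-nothing full-phrase match
    "pf2": "title^20",
}
```

With something along these lines, a query that exactly equals a title lands that book on top even when more popular partial matches exist — which is exactly what the Goodreads examples failed at.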
For one, you need disciplined acceptance criteria in the form of both qualitative standards (things a non-technical manager can look at and say yes or no to) and various relevancy measurements like mean reciprocal rank and normalized discounted cumulative gain (acquiring human-annotated data if needed).
When people only focus on qualitative feedback on top of boosts and hacks in an off the shelf tool, they usually end up with some weird witches’ brew of bizarre boosts and time-decay weighting that is extremely fragile and can’t be robustly changed or even understood without the qualitative performance going haywire. You need disciplined study of quantitative ranking metrics to know the drivers of performance, fall off as you move down the ranking position, and to make search index updates reproducible and make incremental improvement measurable.
Meanwhile if you only focus on quantitative metrics, you might miss obvious red flags. The relevance score used for NDCG might be biased some way. You might surface highly relevant results to only one context or sense of the words in a query (like only showing fruit for “apple” and never tech gadgets). You need people who make the subjective appraisal of quality for users to be looped in.
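For concreteness, the two ranking metrics named above are only a few lines each. A hedged reference sketch — not any particular team's evaluation harness, and real harnesses add per-query averaging, graded judgments, and cutoffs:

```python
import math

def mrr(queries):
    """Mean reciprocal rank: queries is a list of result lists, each a
    sequence of 0/1 relevance judgments in ranked order."""
    total = 0.0
    for judgments in queries:
        for rank, rel in enumerate(judgments, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(queries)

def ndcg(gains, k=None):
    """Normalized DCG for one query; gains are graded relevance scores
    in result order. 1.0 means the ranking is already ideal."""
    k = k or len(gains)
    def dcg(gs):
        return sum(g / math.log2(i + 1) for i, g in enumerate(gs[:k], start=1))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))  # (1/2 + 1/1) / 2 = 0.75
print(ndcg([3, 2, 0, 1]))           # slightly below 1.0: one swap from ideal
```

The point of pinning these down in code is exactly the reproducibility argument above: a boost change either moves the number or it doesn't.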
Here’s the point. When this is all missing, you will lose credibility with the people making the decisions. They’ll hear some engineer babble about NDCG but then say the darn thing doesn’t work in QA testing. Or they’ll say the qualitative results look OK and get angry when weird counter-examples pop up on the second or third results page – exactly the kind of thing quantitative metrics would have caught.
When this happens, executives and managers just want to punt. They want the “nobody ever got fired for buying IBM” equivalent for search, and that’s how you end up with Confluence still only supporting exact title matching and having no ability for actual content relevancy search.
In this sense, the little projects showing “look what us non-specialists could cook up by hacking some boosts in Solr!” do a lot of harm and should not be considered the plucky success stories they are often painted as.
This is the big red flag, when non-specialists hacking on Solr boosts are claiming something works because of a few qualitative test cases.
“It works” is a statement that only applies after you’ve done qualitative and quantitative goodness of fit testing.
You wouldn’t have a random IT employee make a stock-trading algorithm and then test it on a month of data and call it a success.
For a search solution to “work,” it needs to pass quantitative and qualitative checking, and be explainable to stakeholders and be reproducible / incrementally updateable. The training and arrival at hyperparameters all need to be reproducible and based on the outcome criteria they are meant to solve.
Making some hacks into a demo that superficially looks good is not at all the same as “it works.”
This whole post was a wild ride. But out of curiosity, have you considered walking up to Atlassian and saying “pay me $1MM a year and I’ll solve this problem for you”?
I agree that underfunded "DIY" enterprise software projects that are not properly/professionally managed/implemented with the proper expertise are a problem, in academic libraries and elsewhere, for search projects and other things.
I still don't see the problem of setting up solr indexed fields and boosts to ensure that "match as phrase" is boosted higher than non-adjacent matches (a feature built into Solr), and "match _complete_ title" is boosted highest of all. This is what the Goodreads examples failed on. It is pretty simple to set up, and I don't see much risk of this causing problems or being worse than not doing it, and would solve those horrible Goodreads results specifically.
I understand since it's what you specialize in, you see the risk of "look at what us non-specialists can set up" sending someone away from... actually I'm not sure what, hiring someone like you? (Which if I were in charge of an academic library budget, which I'm not, I'd be willing to consider -- don't get me wrong, it's not a terrible idea!). In reality, I think what it steers people away from is... a relevancy search like Goodreads has. (Goodreads is _not_ a "plucky little project", and apparently they think their horrible search is good enough! It is not! And I do think they could make it a LOT better pretty easily without having to spend millions on it).
You seem to suggest that products like Solr or ElasticSearch should not be used/configured except by people as specialized as you, backed by relatively expensive search evaluation programs. While I'm sure that would result in better searches everywhere (not being sarcastic, I fully accept that), I think it's unrealistic. If you convince people their only choices are Solr/ElasticSearch/postgres-full-text out of the box, using only one indexed field with no configuration for relevance tuning; no search at all; or hiring you or an equivalently expensive search program, internal or external -- you're not going to get the expensive search program you want, and you're definitely not going to get "okay, we just won't have a search at all then"; you're going to get people not touching the configuration at all and ending up with Goodreads search.
Your search really doesn't have to be as bad as Goodreads is, without having to invest in the kind of program and expertise you are suggesting, I really believe that and stand by it. If you invest in what you are suggesting, certainly it will be even better.
(PS: If you have disciplined acceptance criteria combined with qualitative feedback from experts etc. -- aren't you still gonna end up tuning your Solr configuration to achieve improvement on those evaluations? I'm confused by your suggestion that tuning Solr configuration with boosts etc. is not the right tool. Or are you suggesting Solr is the wrong tool for... search?)
2) I love the fact that Goodreads has an old-fashioned feeling. Modernizing it would mean a front-end JS framework - making it buggy and obnoxiously slow to use on my old laptop or phone - and a lot of aesthetic features that reduce usability. Oh, and probably light-grey text on a slightly-lighter-grey background, making it a strain on my eyes. Modern internet is meant to look good, pop on a resume, and have little or no functionality or substance.
I genuinely hope the people at Goodreads ignore this article.
Read reviews of the brand new Oasis, a $300 product featuring a micro USB port.
Amazon does not care about readers, probably because they drive so few dollars and the market is won.
Reading does not drive prime subscriptions, and anyone with decent product management ability and influence is working on something more important.
This is a clear example of an area where consumers are suffering from a monopolistic grip over the market.
My understanding is that they would not release any e-ink devices that would fragment their product lineup beyond the iPhone and iPad.
However, Apple's lack of action in providing quality reading experiences that reduce eyestrain and effectively compete means that everyone has to deal with Amazon's garbage.
I hate to say it, but I think reading is a niche market. Not enough people do it; a lot more people want to watch YouTube than read Cormac McCarthy.
Reading is decidedly not niche in the US. Books (across all formats) are a huuuge market. They're roughly 2/3rds the size of the entire videogaming industry in this country, by revenue. The total annual revenue of the US book publishing industry is greater than YouTube's (but just barely).
Except a huge portion of the reading market is from libraries and other free sources of content. Few people buy every single book they read new.
It's a flat-out monopoly. Amazon has a stranglehold on the market. If you only had to compete on the hardware, then you'd see competitors. But you need to compete with the entire market ecosystem, which is very hard.
It's like the mobile phone industry, except instead of two unassailable players there's only a single one.
First they removed the page-turn buttons. Then they took a small step forward by releasing the Voyage, which had pseudo-buttons (you could squeeze the sides of the device and it'd vibrate to give haptic feedback), so at least you didn't have to use a touch screen to turn pages. But then they released the Oasis, which has a weird-as-hell non-symmetrical form factor, which means you have to flip the device when you change the hand you are holding it with if you want to use the buttons.
Being able to push buttons on either side of the device to turn pages was one of the best "features" a Kindle had over a conventional book for me. Not having it just feels like such a huge step back.
You can also get a used Kindle Paperwhite for pretty darn cheap. I wouldn't recommend going to Kindle generations prior to the Paperwhite, because they didn't have a built-in light, which makes reading in low-light conditions (like bed) a lot less pleasurable.
I really wouldn't mind having physical next/prev page buttons, though. Having to touch or swipe the screen is kind of annoying. I feel like that's a step back in UI from previous gen devices.
Smartphones initially had physical buttons that have gone by the wayside now, but at least their screens are much more responsive!
They are actively malignant at this point. They are breaking embargoes now, and clearly won't be satisfied until they kill traditional publishing.
I dropped Prime last year, and it was substantially easier than I thought it would be.
Since then, they've just kept doing disgusting, grotesque crap. At this point, I won't buy anything from them. The whole site reminds me of that cleaning-product smell at Walmart - any time I'm exposed to it, I just want to leave.
That orangey-yellow color means a fight with an untrustworthy, grubby greedhead robot that consistently fucks up delivery and makes me deal with them some more. Who needs that?
I only use it for keeping track of what I've read and to look up negative reviews to see whether a book is likely to be one I won't like.
The scoring system is almost useless, as certain categories of books just get endless 5 star reviews from fans.
Dick's stories are typically short and easy to digest, with fast action. Lem's are ponderous, filled with long sentences and complex ideas, with little happening.
This bothers me to no end in Google Maps search results. It seems like every time I'm searching for some chain, the first two results are for stores that are hundreds of miles away, in towns that I've never visited (which Google should also know!). The one that's a couple miles away, the one that I've mapped to a dozen times? Third or fourth on the list. Insanity.
The titles contain "the" and "and". These kinds of words get filtered out because they make indexing incredibly difficult: search for "the" and you get every single book containing "the" in the title (and sometimes the description, depending on how deep the search goes). So they exclude extremely common words, leaving e.g. "catch kill". Then they have to rank the results for "catch kill" by some other heuristic, like popularity, and you get a list of books that people weren't looking for. Not to mention the first two books in the list were actually exact matches, so maybe they were just more popular on the site.
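The stopword effect can be sketched in a few lines. This is a toy index with invented titles and popularity counts, not how Goodreads actually works:

```python
STOPWORDS = {"the", "and", "a", "an", "of", "to"}

def tokens(text):
    """Lowercase, strip commas, drop stopwords."""
    words = text.lower().replace(",", "").split()
    return [w for w in words if w not in STOPWORDS]

# Hypothetical catalog: (title, popularity score)
BOOKS = [
    ("Catch and Kill", 50),
    ("Kill the Catch", 900),
]

def search(query):
    terms = set(tokens(query))
    # A book matches if every remaining query term appears in its title.
    hits = [b for b in BOOKS if terms <= set(tokens(b[0]))]
    # With "and"/"the" gone, popularity is the only ranking signal left,
    # so the popular near-miss outranks the exact title match.
    return sorted(hits, key=lambda b: -b[1])
```

After stopword removal both titles collapse to the same term set {"catch", "kill"}, so `search("catch and kill")` puts the more popular book first even though the other one was an exact match.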
Big search engines do try to correct for this, but it's not an easy problem. It usually results in some hard-coded exceptions for names like "The Who", which isn't trivially scalable.
In the case of your google maps results, I imagine it'll be something like (popularity * distance), which isn't necessarily the best heuristic.
The problem is that you can't please everyone with heuristics. Give power users the option to adjust the search to distance-first, popularity-first or any other of a dozen different combos and you end up with a confusing mess of a UI. Simplify it and give them your nicely hand-crafted best guess and there will always be someone complaining...
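A toy version of that trade-off: one `distance_weight` knob (invented here, along with the places and numbers) decides whether the nearby store or the faraway popular one "wins", and no single default satisfies everyone:

```python
# Hypothetical places: (name, popularity, distance in km)
PLACES = [
    ("Chain Store, 2 km away", 200, 2.0),
    ("Chain Store, 300 km away", 5000, 300.0),
]

def score(popularity, dist_km, distance_weight):
    # Popularity helps, distance hurts; the exponent controls how
    # sharply distance is penalized.
    return popularity / (dist_km ** distance_weight)

def rank(places, distance_weight):
    return sorted(places, key=lambda p: -score(p[1], p[2], distance_weight))
```

With `distance_weight=1.0` the nearby store scores 200/2 = 100 vs 5000/300 ≈ 16.7 and ranks first; with a weaker penalty of `0.5` the faraway store's 5000/√300 ≈ 289 beats 200/√2 ≈ 141, reproducing the "store hundreds of miles away" complaint.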
Also visible in Youtube. There’s a French guy who explained what it was to be raped by a woman, and I often refer people to it (and half a dozen other videos). When I search for his video’s title and his channel name, I systematically get 2 other results first, generally videos about how many women get raped, from the Huffington Post, often the ones with fewer views.
It really feels like Google is trying hard to dodge the correct answer.
I think this is meant to combat fake news, but we all know CNN and Fox News both make stuff up from time to time to fit an agenda, so they're hardly much better.
This was a solved problem years ago. Windows Phone let you pin nav directions to any address you wanted directly on your home screen.
This seems like it would work around the poor "Home" search the previous poster was referring to.
I would like to be able to weight opinions (both positively and negatively) of individuals, and also based on criteria (e.g. "completely ignore the opinions of anyone who gave a low rating to The Good Soldier Svejk" and "weight by 100 the opinions of anyone who rates The Silver Chair as the best of the Narnia stories").
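That kind of rule-based rater weighting is easy to sketch. Everything here — the rule thresholds, weights, and ratings — is invented for illustration:

```python
# Each rater's ratings on a 1-5 scale; "Svejk" and "Silver Chair"
# act as the litmus-test books from the comment above.
ratings = {
    "alice": {"Svejk": 5, "Silver Chair": 5, "New Book": 4},
    "bob":   {"Svejk": 1, "New Book": 5},
    "carol": {"New Book": 2},
}

def rater_weight(r):
    if r.get("Svejk", 5) <= 2:       # ignore anyone who panned Svejk
        return 0.0
    if r.get("Silver Chair") == 5:   # boost Silver Chair superfans 100x
        return 100.0
    return 1.0

def weighted_score(book):
    num = den = 0.0
    for r in ratings.values():
        w = rater_weight(r)
        if book in r and w > 0:
            num += w * r[book]
            den += w
    return num / den if den else None
```

Here bob's 5-star rating of "New Book" is ignored entirely (he panned Svejk), alice counts 100x, and the book's score lands near alice's 4 at (100·4 + 1·2)/101 ≈ 3.98.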
For example, my Goodreads "friends" are people who like the books I like. There's no social obligation. FFS, I unfriended my sister. My high-school friend who only reads YA fiction: he was unfriended years ago.
My favorite goodreads friends are a half-dozen people I've never even met, but I agree with their reviews. When they give a book 5 stars, I check it out.
That's the key to Goodreads. If Amazon can't figure out how to make better recommendations, they should look long and hard at how the social graph beats the star-ratings.
It's more labor intensive than a standard recommendation system, but the results are better. Since books are a large time commitment, and people are passionate about them, a lot of people are willing to put uncommon effort into finding good ones.
If you want real discussion nothing really beats the classic forum bulletin board.
 My books: https://www.helloreads.com/ryanhittner
The front page should say something about specific books, something that draws you in! Something from the site itself. And put that book recommendations button higher up (it's not visible on first view for me). :-)
Something that makes you begin browsing and using the site to get recommendations. Then in standard style, somewhere down the line you'll hit a snag where you have to register to use certain features, like commenting, liking, etc.
But I think you're right, it should open up a bit on the main page. You don't need to log in, but it could be more appealing on the home page. If there's enough interest, we may fire up development again and see where it goes, now that we know people other than us are also frustrated with Goodreads.
Could be browser settings, but feel free to shoot us a message: support [at] helloreads.com
Keeping track of whether I've read an entire series is another feature I really miss for books, one that trakt.tv provides for shows.
"What Goodreads is good for is keeping your own list of books you want to read or have read this year. It’s a list-making app."
That's pretty much all I want from it, so it works great for me. The recommendations are often (but not always) good enough. I usually don't read based on recommendations from GR, but occasionally something good does come out of them.
Websites don't need to keep adding features, it's fine to look the same 12 years down the road. I think this website (HN) is a pretty good example of that.
In the first one she searches for "the confession" and another "the confession" title is shown, together with other, way more popular titles with "confession" in them. I'm guessing most people want fuzzy, Google-like searches, but sure, exact matches of more obscure books could be given more relevance, or search options could be offered.
The second example is even worse; she searches for "title" and complains that a book with "title" and one with "title: subtitle" are ranked higher than her "title: some other subtitle".
One more complex feature I'd love from an ideal-world Goodreads is ephemeral book clubs. Find people who want to read this specific book, read it together with them, and then disband.
Then Netflix switched strategies away from highly-personalized recommendations to a dumber algorithm that is more likely to recommend more popular movies, which ends up retaining subscribers better. The reasoning was that users were more likely to cancel their subscription if they had one obviously bad recommendation than if they had multiple so-so recommendations. So rather than recommending, for example, 80% great movies and 20% bad ones, Netflix wants to recommend 5% great movies and 95% okay ones.
Same thing for groups - it's fun to try to find the "rarest common denominator" in the books, movies, or places visited domain.
Go to your Friends page (https://www.goodreads.com/friend) and click "Compare books" next to the friend you want to compare with. You'll see the list of books you have in common, sorted by inverse popularity.
Not to mention all the ways they want to shape the experience that I don't want, with their useless and biased suggestions being one that sticks out for me, as well as their user communities that I'm pretty sure exist to denigrate individual perspectives, since they aren't designed to let people organize and be critical, just a noisy crowd with grossly aggregated ratings.
But these big companies don't care. They dominate in their areas. Their best, only reasonable option is to drag their feet on consumer-oriented features as much as possible, to save money and spread out behind the scenes. If a competitor comes along with a unique feature, they can just add it and destroy them. It's terrible for the consumer and for innovation.
As we move to very high levels of integration, where our activities, connections, and other personal data feed into every decision, the only solution is to separate data from services: I access the data I want, process it in my own system — which, for any activity, can be more comprehensive than what they can reasonably provide, unless they all build an incredibly privacy-invading profile of me via backend "cooperation" or duplication — and only then hand it to their service. This is what Solid proposes. It's a long-term, multi-perspective, standards-based project, and I don't know of any reasonable alternative.
"they can just add it and destroy them"
1. I think I would find a 5 star rating system too constraining. My opinions in most areas have more gradations than just 0-5. 0-100 would be entirely too many. I think IMDB's 10 point scale is really the main reason why I've stuck with that for movies. On the other hand I prefer to only thumbs up / thumbs down books, so I'm really not sure you can do anything about this in a way that would make everyone happy.
2. I think there's either a bug in your weighted average, or you're ranking using something like "we're 90% confident the true rating is at least x". But that confidence level seems entirely too high for the number of users your site has. For example, Avengers: Endgame appears at #13 on this page: https://rate.house/chart/movie despite having a rating of only 3.64. I would lower the required confidence for now; if you 10x or 100x your users, you can raise it again.
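One common alternative to a hard confidence bound is a Bayesian ("damped") average that pulls each item's mean toward a global prior. The prior and damping strength below are invented for illustration, not rate.house's actual numbers:

```python
def bayes_avg(mean, n, prior=3.5, m=25):
    """Damped average: act as if the item had m extra votes at the prior.

    Small-n items get pulled toward the prior, so a handful of 5-star
    ratings can't float to the top; large-n items keep their own mean.
    """
    return (mean * n + prior * m) / (n + m)

niche = bayes_avg(4.8, 6)          # 6 ratings averaging 4.8
blockbuster = bayes_avg(3.64, 2000)  # 2000 ratings averaging 3.64
```

With these numbers the niche title scores about 3.75 vs the blockbuster's ~3.64, so a well-rated obscure movie can still outrank a mediocre popular one; raising `m` plays the role of "requiring more confidence".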
3. Would be nice to know a bit more about what features it has on the home page before signing up. Can I import my ratings from other sites? Can I export my data in some usable format like CSV? Can the information database the users create for media entries be downloaded by users? (Even IMDB offers this.) Can I get recommendations from the site once I've rated enough items? Can I get music recommendations based on my movie ratings? (That would be cool.)
Concerning custom shelves separate from read/want-to-read etc.: I have created such shelves. Not a "did not finish" list, but two kinds of want-to-read: books on my wishlist, and books I own but haven't read yet.
Features Goodreads has that most competitors lack: I can have two different editions of the same book. I can store more than one date I read a book. I can export most of my data as CSV.
I don't really care much for recommendations right now, because the user-created lists will often get you what you want faster.
Furthermore, I think GR does a good job of classifying the books, or rather letting their readers classify their books for them.
All in all, I think the site should be more responsive, but there are tonnes of places to have long discussions on books, genres, plots and other things over at reddit.
Anyone else like me, what solutions do you use?
Google Books has gotten worse too, to the point of being useless unless you have a quote or are looking for a common title.