We might be swinging away from the newsfeed (ben-evans.com)
400 points by juokaz 11 months ago | 232 comments

AI is still oh so much like Clippy. In Google, I keep having to change it from All Results to Verbatim. With Facebook, I keep having to change it from Top Stories to Most Recent.

Try UI before AI. A purely chronological feed will suffer from the spam of frequent-posting friends. Group their posts, so they can be flicked through as a cluster.
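The grouping idea is easy to sketch. Assuming posts arrive as (author, text) pairs in chronological order (a hypothetical shape; real feed items carry more fields), consecutive posts from the same person collapse into one flickable cluster:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical feed: (author, text) pairs in chronological order.
feed = [
    ("alice", "sunset photo"),
    ("bob", "hot take #1"),
    ("bob", "hot take #2"),
    ("bob", "hot take #3"),
    ("carol", "recipe"),
]

# Collapse each run of consecutive posts by the same author into one
# cluster, so a frequent poster occupies a single slot in the feed.
clusters = [(author, [text for _, text in run])
            for author, run in groupby(feed, key=itemgetter(0))]

for author, posts in clusters:
    print(f"{author}: {len(posts)} post(s)")
```

No ranking model involved; a frequent poster still shows up, just not five times in a row.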

All in all, am I the only one that thinks that the current fashion in page layout squanders real estate? Make the news feed a huge grid, with each post smaller, like one of those photo walls in a hip restaurant.

Verbatim, Most Recent, and clustered posts are all better for a mindset when you have a relatively specific query, or at least filter bias, in mind. Consider the following statements:

1) I'm looking for something specific.

2) I'd rather not be surprised right now.

3) I'd like to get through the search/index as fast as possible to get where I'm going.

The above are user-voiced versions of principles that drive traditional UI, and are absolutely better met by that optimization process.

AI-driven search and feeds were exciting because they were the first method that captured use cases where users wouldn't make those statements: fuzzy queries about general interests, the desire for diversions "curated" by your own social group, and browsing the search/feed itself as an entertaining experience.

Then, it turned out that these unexplored use cases were actually the giant, underwater part of the iceberg of what people wanted from information systems. The growth and attention capture you can drive from meeting those cases is not only massive, it sucks the oxygen out of traditional value propositions by out-competing for the same attention.

I think a return to traditional "capture and facilitate intent" UI (or even better a synthesis of the two!) will only be driven by users finding a way to explicitly value their own clarity of attention. We see signs of this in everything from ad blockers to focus-oriented browser plug-ins, but I think we're still missing the FOMO-conquering expression of this value proposition that becomes a competitive product.

Exactly how I feel about this kind of thing. If UI is "smart" then it is also unpredictable. I don't want my computer to be "smart", like people, who could decide to do any fool thing for their own reasons. At least, I don't want that yet. Don't be smart, just follow simple rules that I can understand.

Exactly. There is a distinction that needs to be made between something that is trying to solve my problem for me, and something that will let me best solve the problem for myself, in my own way.

Nothing frustrates me more than when software thinks it knows better than me.

> Nothing frustrates me more than when software thinks it knows better than me.

Would it be fair to rephrase that as "... and fails"?

Because that's what we're really talking about, right? Those times when the smart system failed to correctly predict user intent, but lacked a fallback method for the user to clarify?

I'm sure there's an uncanny-valley-esque term out there for the difficulty "99% correct" smart systems have when users hit that 1% of incorrectness.

> Would it be fair to rephrase that as "... and fails"?

No, unless you consider failure to be the vast majority of interactions with smart software.

I'm not sure of a good term for it, but by design, when these "smart" systems work you'll hardly even notice. They "just work." However, that 1% of the time they fail disastrously, you'll definitely notice.

> There is a distinction that needs to be made between something that is trying to solve my problem for me, and something that will let me best solve the problem for myself, in my own way.

May I borrow that phrase for all future references from UX colleagues regarding smart UI? I really hate smart UI: it changes while I keep expecting other things, and I constantly have to relearn or search for items that were there just a moment ago but cannot be found after a reload.

This sounds like the classic "tool vs agent" design concern.

It's always fun backspacing one character at a time to surface the result I wanted, that showed some number of characters ago. Windows 10's search is especially bad about showing things that don't start with the characters I type until I land on (or chisel out) the right characters.

I always phrase this as "simple vs easy". Often people, especially self-titled UX designers, mistakenly conflate the two.

To make something "easy" requires the system to predict what you want. To make something "simple" requires the system to behave in obvious ways so the user can reason about how to make it do what they want.

I'll take "simple" over "easy" any day.

I think you just solved a huge grievance I've been having with Google search. I get absolute trash results that are hardly even tangential to what I searched. I'm going to try out this verbatim thing.

The issue I have is that if the topic could even remotely sell me something, the only results will be from those selling the thing, or reviewing it in order to sell it. The communities actually interested in and talking about those things have disappeared from the results.

One trick that works for some topics is to limit your search to a custom time range like pre 2005 or so. Specifically this only works for information that would have been available then, but when that is the case you can avoid a ton of modern spam.
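For the curious, the time-range trick can be scripted. The `tbs` values below mirror what Google's "Custom range..." search tool emits (`cdr:1` with `cd_min`/`cd_max` in M/D/YYYY); they're undocumented, so treat this as a sketch that may break:

```python
from urllib.parse import urlencode

def dated_search_url(query, min_date="1/1/1990", max_date="12/31/2004"):
    # cdr:1 turns on the custom date range; cd_min/cd_max bound it.
    tbs = f"cdr:1,cd_min:{min_date},cd_max:{max_date}"
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": tbs})

print(dated_search_url("obscure hardware manual"))
```

Handy as a bookmarklet or keyword search for "pre-spam web" queries.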

Woah. I just tried a search (that originally didn't find what I wanted) with "Verbatim" selected and I found what I was looking for.

I don't know if it will hold up for other searches, but so far so good.

I've actually stopped searching Google for things anymore, because it almost always tries to be clever and totally misses the point. Sounds like I should try "verbatim" too!

Can you give some examples you fail to find with Google?

Not right now, I've just done it enough times. It's often when I want to do a search with 6+ words and all of them are important. But because some of them are e.g. "C++ default constructor" google just gives me C++ 101 answers and disregards the rest of my query.

The searches had to do with real things as well as programming.

Yea, this one happens a lot. I was getting scope 101 results for something I was searching yesterday. I've basically given up on mobile search, it feels worse there.

From this past weekend, I was trying to find a shirt I'd seen another student wear back when I was in highschool (~2004), with what I thought was the phrase "How many plants had to die for your stupid salad?"

Just about all the results were for vegan activism, without a hint of it being a joke/phrase.

The top result when switching to "verbatim" mode was the joke, standalone, correcting "plants" to "vegetables". (Searching for the correct phrase in either mode got me to a zazzle page with a shirt identical to what I remember, but that's not really relevant to "use verbatim search!")

When a framework you use

- has a weirdly named something

- and Google fuzzes it to something it has a million results for

I think this has happened twice, despite my using both double quotes and verbatim.

I've reported it and they've fixed it shortly afterwards but it is really annoying.

You're clearly a smart person but I'd argue that the people designing this nonsense are also smart -- which leads me to believe that they've made it "terrible" on purpose -- which makes me dislike them more.

Definitely true. "Terrible" here means longer engagement, when what we want is efficient engagement. But efficiency has little time for ads...

(Side note, your comment read to me like something Douglas Adams could have written about bureaucrats on a far away planet. That twist at the end that summarizes the entire issue in a few words. Well done!)

I've been noticing the same. I seem to get the most obtuse interpretation of my queries instead of the most contextually relevant one, until I add additional terms to get it to "realize" which idea I was going for, instead of showing multiple interpretations at once. I thought we were supposed to be getting better at this, not worse.

Even verbatim is too “smart” sometimes.

Yesterday, I was trying to find if what I was dealing with was a bug on this particular version of Photoshop.

Quoted CC 2018, version number and verbatim still showed Adobe forum posts from 2012.

Date range works, but you have to find the public release date.

By that time I had finished installing the previous version already.

I miss Google’s simpler times, let me just grep the web.

Yes! How much do I have to pay to not have it automatically switch back to 'top stories' all the time.

$0, just delete your account and never go back to Facebook again

(This is a serious suggestion. Do you really get anything out of Facebook?)

Weekly reminder of why I left my home town ;)

Are you so tempted to go back that you require weekly reminders?

Nostalgia has a way of creeping in without regular inoculations.

I cannot relate. Maybe I could at one time. I don't recall. But I seriously cannot relate.

I like the town I am in. I don't wistfully wish I could go back to my hometown in Georgia, nor do I wistfully wish I could somehow return to California and make that work.

/cranky oldster moment

nah, not really.

I log in, check if close friends or family have sent me anything, then log out.

Install the FB Purity Chrome plugin. It's the only reason I still use Facebook. No ads, no groups, no FarmVille, just the people I'm friends with posting in chronological order.

I'm trying to reduce the amount of time I spend on the site, and having things in strict chronological order really helps. I scroll down till I see something I've seen before, and then I know it's time to quit.

Link: https://www.fbpurity.com/

I wouldn't be without it either, but "chronological order" is actually "Facebook-filtered chronological order". That is, you don't see everything; just because you were allowed to see something a few minutes ago doesn't mean it'll still be in the feed now, and things that weren't there yesterday might show up today where they should have been yesterday.

That may be a wonderful thing if you're "friends" with a few hundred indiscriminate meme-posters, but if you're looking at a couple-three dozen relatively taciturn folk who are actually friends (though geographically and temporally dispersed), it misses out on a lot, and you have to scroll past things you've already seen to find things you've never seen before. Still, it's better than what FB gives you by default.

I only ever talked to a single person on my Facebook. To Facebook, this apparently means it should ignore all other posts and, as something close to chronological, show me only selfies of this one person I have no actual interest in.

Point being, it is really far from what one would think chronological means.

I set it to Most Recent, changed my password to something I didn’t take note of, and haven’t logged in since last Wednesday. So far this is working great.

Yeah, it's pretty much like the "uncanny valley". Good but not good enough.

Self-driving car tech is also showing this lately.

> In Google, I keep having to change it from All Results to Verbatim

Just updated my Google bookmark to https://www.google.com/?tbs=li:1, which seems to be the parameter to enable Verbatim.
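A tiny sketch of the same idea for ad-hoc queries (note that `tbs=li:1` is undocumented and could change at any time):

```python
from urllib.parse import urlencode

def verbatim_url(query):
    # tbs=li:1 is the parameter the Verbatim search tool sets.
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "li:1"})

print(verbatim_url("C++ default constructor"))
```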

> Group their posts, which can be flicked through as a cluster.

That's how flickr deals with mass uploaders.

I have the same problem with services like Spotify and YouTube; especially the latter (on the PS) seems to hide new content I haven't seen from me, while the all-caps makeup videos, game streamer/vlogger drama, and music videos that plague the homepage get through just fine.

Artificial Intelligence does not exist. What we have is Artificial Stupidity.

A charitable reading of your post is that current AI, if judged against an average human personal assistant, would make many more "stupid" mistakes than the human. It would maybe forget less explicit stuff, but it's going to be a long time before AI stops us from going into a meeting having missed a button (or recognizes which meetings are important enough to warrant a physical inspection of the participant). Or says, "hey, before you commit to that meeting tomorrow, don't forget about this other thing that's maybe more important."

Anyone who's ever had a real personal assistant any good at the job would very quickly look at the best AI as the faint shadow of the real thing. And it's going to be that way for a while. Forever? Probably not, but maybe.

I'd argue that AI will replace many legal and accounting jobs before it replaces good personal assistants. Just like how we thought chess was more difficult than walking, and the exact opposite turned out to be true.

Even for the relatively constrained case of booking urban travel that's not 90% the same trip over and over, there's a pretty complicated interplay of price, schedule, convenience of location, quality of hotel, participation in loyalty programs, personal time, etc. etc. [ADDED: And just personal preference.]

A longtime personal assistant who has your preferences really dialed in can probably come close. But given how much I sometimes end up fussing over even my own travel, it's really hard to see virtual assistants getting there anytime soon.

> A longtime personal assistant who has your preferences really dialed in can probably come close

If they don't come close, they can safely filter out 90% ahead of time and present you a table of the remainder, saving you lots of time.

On the other hand, more likely that digital assistant was made by the booking company and is trying to upsell you with high margin items.

I think if we let users control the algorithms themselves, to a certain extent it really does help us get higher-quality information.

E.g. sentiment analysis, fake-news filters...

I know there is a certain sense of hypocrisy in me saying this but maybe we don't need algorithms for everything.

Users can self-select. We can always just yell at Timmy, who shares too much, or outright unfollow him on Facebook if we don't want to see what he shares.

I suggested that just because someone posts something on their Facebook wall, it doesn't mean I will see it (this was around 2009). People often reacted as if I'd said something insulting. I think they'd understand it the other way around, but I guess they don't see themselves as noise. There is some user education the network needs to do.

In any case, be it Facebook or SharePoint or 37 signals, there should be zero obligation for anyone to look at things you post.

If you are having a baby shower and don't invite me personally and just write on your wall, that means you don't want me there.

I still believe in a 100% non-intelligent newsfeed. Ben Evans is wrong, and I'd even say malicious, for dismissing the idea of a reverse-chronological newsfeed.

The only reason this is a problem is Facebook desires to inflate the number of connections. Let people cull their newsfeed and they will. Set expectations for posters that their posts may not get to everyone and they will understand as well.

Hm. Facebook not using algorithmic feeds tuned to your liking is hardly a reason to rule them out entirely. People’s eyeballs keep coming back, so Facebook’s news feed works great for Facebook. It’s an incentive or strategy issue more than a technology issue.

I agree. Facebook's News Feed is a culmination of years of research in retention and attention management. While that isn't really what most users want, it's exactly what Facebook has optimised the News Feed for.

We'd all benefit from a very different system which would structure things as we need it to, but that's sadly not what our economy incentivizes corporations to be right now.

Will the learning curve of "self-select" be too steep for certain users?

There's a hard limit to what I as a user can reliably express in terms of explicit rules. I have a cousin I was pretty close to growing up. She mostly uses Facebook to post anti-vaccine nonsense and occasional bits of thinly-veiled pro-Trump propaganda and BS "stories" from the alt-right. I "unfollowed" her a long time ago, but it would have been nice to see all the updates she posted when her dad had a stroke.

I need an algorithm for that. There's no good way for me to say, "I don't want to see all the stupid shit she posts, but please show me important stuff" because I lack the ability to rigorously define "important stuff". And it's unreasonable to expect her to personally text those updates to every single person she knows.

Can't you just rigorously define the unimportant stuff? Like anything related to vaccines, Trump, or politics in general?

Not really. I could spend time and energy building filters that matched current known patterns, but those filters probably won't be very good. Suppose instead of a stroke it had been a gun accident. I'm not confident I could successfully anticipate and build a filter good enough to recognize that it wasn't a political post about the NRA and gun rights.
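To make the brittleness concrete, here's a toy keyword blocklist (terms invented for illustration). It catches the political posts, but the rule for "gun" also swallows exactly the family-emergency update you'd want to see, while an unforeseen emergency slips past only by luck:

```python
BLOCKLIST = {"vaccine", "trump", "nra", "gun"}  # hand-written, hypothetical rules

def hidden(post):
    # Hide any post containing a blocklisted word (after crude cleanup).
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

print(hidden("Yet another vaccine conspiracy link"))             # True: filtered, as intended
print(hidden("Dad was hurt in a gun accident, updates to follow"))  # True: important news lost
print(hidden("Dad had a stroke, he's in the hospital"))          # False: passes, but only because "stroke" wasn't anticipated
```

The failure mode isn't the matching; it's that the rule author can't enumerate the future.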

Ultimately, that line of thought ends with me developing my own algorithmic feed, which I don't really want to do and isn't scalable anyway.

The reason this problem keeps popping up is due to the mistaken belief that you can deduce intent via inferred behaviors, and worse that deduced intent is preferable (or at least comparable) to explicit intent. You can't and it isn't.

As an example of deduced intent let's envision a system where every action I commit over the course of a year can be observed and recorded for inference analysis to determine if I am evil or good. Assume that in all ways I meet the standard definition of good... I help people across the street, save puppies, etc. At the end of the year the BEST that you can conclude is that I behaved ethically (good actions), but that says nothing about MY intentions. I may in fact be evil and am only ingratiating myself within a community in order to later kill them all. There is a reason that within philosophy that morality and ethics are separate words. There is a reason that when viewing demographic data people say "I can tell you what 9 out of 10 people in the group prefer, but I can say nothing about any one individual within it."

As to the preference between inferred vs explicit intent, you're placing your own judgement of the value of inference vs explicit above that of the user which will inevitably lead to frustrations on the part of the user. In the simple case of a catalog there are two distinct intent patterns that users engage in... one is the specific intent to find a known product and the other is what I refer to as "discovery." Discovery is, as the word suggests, finding things that the user did not know existed or knew very little about.

It’s certainly interesting that in sciences like history, determinism is only plausible when you know very little. The more you know the harder it becomes to figure out why something happened.

News feed prediction behaves the opposite of this. It tries to predict your intent based on past behavior, but that is, as you say, not possible, and could likely be why the news feed is dying.

I wonder if another dimension is useful.

YouTube's UI prominently features search. So if my interests change and I search for something I've never been interested before, YouTube can show me a pretty good selection of videos and tune its recommendations for future viewing.

I will say, though, that all social networks seem to weight past behavior too strongly. Whenever I start a new account on a social network, I find its recommendations are much more tuned to my interests, and those recommendations become less relevant over time.

Biologically, I'm a new person every seven years. As a collection of varied interests, it seems like I turnover much more often than that - and I suspect the rate of turnover is different for everyone.

Why not just give me more control? Why can't I say (soon literally) "Dear Zuck - show me news" or "Dear Zuck - show me cat photos"?

For me, FB is far too controlling. So much so that I have 10 groups I've started to "organize" my friends and, in a way, build my own feed. It helps, but it's still not what I want, mainly because a lot of people are not group-share minded.

I don't understand why FB is spending soooo much time and effort trying to guess what I want when all it has to do is...wait for it...ask.

I suspect, but certainly can't prove, that there are two things going on... first that in order to get reasonably true answers to questions you have to provide a fair in-kind trade with the user. If you want their home location you could ask for it, in which case a lot of people will seem to come from zipcode 12345, or you can offer a solution to a problem (or preferably a set of problems) they may have that is easily solved with their correct zipcode such as "lists of X within a mile of your house." That's a bit harder to pull off when it's something like "what political party are you?" for anyone who isn't rabidly political. The second suspicion that I have is that this is a result of a larger conceit of the tech industry that they are an enlightened minority who either know or can know better than the masses (tech isn't unique in this conceit, alas).

> Why not just give me more control? Why can't I say (soon literally) "Dear Zuck - show me news" or "Dear Zuck - show me cat photos"?

That's Google.

You're conflating a recommendation/suggestion system, which uses data from an inference engine, with systems that replace explicit user intent with the results of an inference engine.

A recommendation/suggestion system should offer its opinions and the user is free to follow them or not. Replacing explicit user intent with inferred intent is over-ruling the user, saying that the system's decision-making is more important than the user's.

Behold the wisdom of the disclaimer "past performance is no guarantee of future results."

At the tree level, yes.

At the forest level most people are predictable, often to a fault.

Of course, and yet there is a reason that demographics is useful only when dealing with aggregates... it tells you nothing about a specific individual. Trying to use inferred data from the behaviors of millions of people is quite useful in discovering certain aggregate info, but using it in place of a specific individual's chosen intent is an excellent way to lose a user.

The challenge here is that there's equal and opposing problem - the social sciences (Econ, Psych, etc.) have established that what people say (expressed preferences) and how they act (implied preferences) are inconsistent. So you're adding an additional dimension here that further complicates the calculation.

>the social sciences (Econ, Psych, etc.) have established that what people say (expressed preferences) and how they act (implied preferences) are inconsistent.

This is obvious enough. But if I tell you I would like to lose weight and improve my diet, yet you, knowing I have a sweet-tooth, opt to keep offering tempting sweets and desserts anyway, that would make you kind of a jerk wouldn't it?

I wouldn't want to spend much time around someone who actively tries to undermine my attempts at personal growth or improvement.

> knowing I have a sweet-tooth, opt to keep offering tempting sweets and desserts anyway

No, it means you aren't actually doing what you say you want to do. If you however gave the system input about wanting to lose weight, say by buying weight loss supplements, exercise clothes etc... then the "system" will adjust to give you what you want.

Your example is basically akin to a scenario where you frequently order dessert at your favorite restaurant, then decide you want to lose weight, never tell anyone you want to lose weight, then get upset when the server brings you the dessert menu.

> Your example is basically akin to a scenario where you frequently order dessert at your favorite restaurant, then decide you want to lose weight, never tell anyone you want to lose weight, then get upset when the server brings you the dessert menu.

Except for the part where he said "if I tell you I would like to lose weight and improve my diet"? So not like the "never tell anyone" part at all?

What he's describing is more like a system which learns "most people who research and buy diet stuff end up failing and going back to the ice cream and junk food, so let's start aggressively targeting them with unhealthy crap once they start looking up diets."

That may be accurate regarding behavior and revenue maximization, but it's also pretty shitty.

Then why are you on the internet?

That's a nice example.

Which, in my view, just means we've vastly overstated the case for revealed preferences. A heroin addict doesn't actually care more about heroin than about his kids, no matter what his actions seem to say. He's an addict.

If someone consistently behaves in such a way that prioritizes heroin over their kids, then from a purely behavioral and external perspective, they really do care more about the heroin. Maybe they don’t want to want heroin as much as they do, but that’s another level up. You can’t look into people’s souls; down here on Earth the best you can do is try to predict behavior, and it’s safe to predict that junkies gotta junk.

This argument is the common sense fallacy.

Your personal bias is that Family > Heroin. In this example that also seems to be the social consensus (which I hold as well), but that doesn't mean the individual must necessarily value family over heroin.

Your actions are directly tied to your preferences - whether optimal or not. If you neglect your family because of heroin, it's because at some point your reasoning system decided that the hit was more important than the relationship. You can argue that preferences can be hijacked by chemicals, but in that case it becomes a reductionist determinism argument - that can be easily argued down to biochemistry in either direction.

It's too easy to fall into the "common sense" trap when trying to evaluate human behavior.

I'm sorry if I'm wrong, but you sound as if you know nothing about addiction, especially to opiates (but not really limited to them), at all. Unless you can somehow back up your statements or otherwise explain them, I'd like to ask you to stop using heroin addiction rhetorically...

>You can argue that preferences can be hijacked by chemicals, but in that case it becomes a reductionist determinism argument - that can be easily argued down to biochemistry in either direction.

Well, yes, exactly :)

I'm not sure the case for revealed preferences is overstated as much as it is so often misapplied.

> the mistaken belief that you can deduce intent via inferred behaviors

Maybe this is just a phrasing error, however the idea of inferred behavior doesn't make sense. If you view an action that is measurable (buy x, go to y) then it is an explicit behavior.

Further, the concept you seem to be arguing against is the economic concept of revealed preferences. If I understand your argument, it is that it is impossible to derive a user's preference from observing the user's behavior. This is contrary to decades of empirical research in economics and cognitive science showing that you can rank preferences based on such behavioral profiling.
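A minimal sketch of what "ranking preferences from behavior" means in practice, using a made-up choice log: count what the user actually picked, and order by frequency.

```python
from collections import Counter

# Hypothetical log of items a user actually chose, i.e. revealed preferences.
choices = ["dessert", "salad", "dessert", "dessert", "salad", "dessert"]

# Rank items by how often they were chosen, most-chosen first.
ranking = [item for item, _ in Counter(choices).most_common()]
print(ranking)  # dessert outranks salad, whatever the user *says* they prefer
```

Real systems weight by recency, context, and more, but the principle is the same: the ordering comes from observed actions, not stated intent.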

Your example builds a strawman by introducing a value judgement into the example (evil vs good) that is unmeasurable. It conflates some kind of moral state with a series of actions without any way to empirically value them.

The reality is that most people cannot accurately state what their true preferences actually are, and thus your actions reveal more about your future behavior than your "intentions" do. This is a classic debate in philosophy of mind.

Yeah, it's a phrasing error... although there are species of inferred behaviors (he went into a room for an hour and then came out and walked to a store. I don't know what he did in that room but I can infer that he breathed while in it, although as with all inferences that may be untrue.)

I'm not arguing against revealed preferences as useful aggregate information regarding the probabilities of someone choosing X, I'm saying that you both can't know their intentions regarding those choices and more specifically those preferences are untrustworthy in predicting any single opportunity to choose X.

The idiocy of many current products, be they physical or informational, is replacing the user's ability to make a choice with what the product designers believe... especially without explicit agreement. In most cases the user didn't opt-in to outsourcing or subordinating their decision-making to product X and every time their intent conflicts with the product's decision-making the user will become more discontented with the product.

And no it's not a strawman to introduce my intentions into the example... it's the entire point. Measure everything you like with the exception of my brain and you cannot know my intentions which means that you lack the necessary information to substitute your own judgement or decision-making for mine.

The problem is, you seem to be making a blanket statement about the theoretical possibility of prediction and suggestion rather than arguing against the limitations of current predictive systems.

It's simply that current recommendation/prediction systems have incomplete information. In this case an explicit goal or preference is not stated by the users. In the absence of explicit preference statements, a recommendation system can only act based on the behaviors it measures.

The purpose of my life's work is to align revealed preferences with stated preferences, to determine optimal action. The majority of systems are confounded because 1. They don't ask for preferences and 2. People state preferences (signaling) that are not aligned with their true beliefs.

I am indeed making a blanket statement about the theoretical possibilities of prediction and suggestion as a preferred or comparable replacement for allowing the user to choose for themselves, and that statement is that it's neither comparable nor preferred.

Your work, at least as you present it, does not seem to collide with that statement because it asks the user for intent and then examines behaviors to see how they align with that intent... there is no inferring trap. Determining optimal action, while its own complex kettle of fish, is only a concern if it over-rules the user's decision-making WITHOUT the user opting in to such an arrangement.

No, Simpson's Paradox involves incorrect conclusions of statistical data/trend due to a mistake in viewpoint or assumption. I'm saying that even if you have the correct conclusion that it's not a valid replacement for the user's explicit intent or choice because a) the probability on which a prediction, even a ludicrously accurate one, depends makes no guarantee about the results of a single event... and b) that usurping the user's choice because your model, or you, think you know better is a stunningly bad idea in itself.

> No, Simpson's Paradox involves incorrect conclusions of statistical data/trend due to a mistake in viewpoint or assumption.

Not correct.

There's no 'mistake in viewpoint or assumption' involved in Simpson's Paradox.

Your example story was of someone committing 'a sequence of good deeds throughout the year' that may be part of a grander, longer plan to commit a profoundly not good deed.

Analysis of month by month, grouped or in isolation, would reveal an apparent trend that would be opposite to the evident overall trend once you include the ultimate, negative act.

From the wikipedia link I provided earlier:

"Simpson's paradox ... is a phenomenon in probability and statistics, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined."
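For anyone following along, the classic kidney-stone numbers (from the same Wikipedia article) show the reversal concretely. This is just a hedged illustration of the definition quoted above, not anyone's actual data in this thread: treatment A wins within each subgroup, yet loses once the groups are combined.

```python
# Classic kidney-stone example of Simpson's paradox:
# (successes, total) per treatment, per subgroup.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, A has the higher success rate.
for name, g in groups.items():
    a, b = rate(*g["A"]), rate(*g["B"])
    print(f"{name}: A={a:.1%} B={b:.1%}  A wins: {a > b}")

# Aggregate over both subgroups: the trend reverses and B wins.
a_succ = sum(g["A"][0] for g in groups.values())
a_tot  = sum(g["A"][1] for g in groups.values())
b_succ = sum(g["B"][0] for g in groups.values())
b_tot  = sum(g["B"][1] for g in groups.values())
print(f"overall: A={rate(a_succ, a_tot):.1%} B={rate(b_succ, b_tot):.1%}")
# A ≈ 78% overall, B ≈ 83% overall -- no mistaken assumption required,
# just aggregation.
```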

Very well articulated. I wonder if there is a difference between a machine trying to deduce intent vs a human doing the same task. As humans, we may be able to pick up subtle cues that machines simply lack the ability for.

> I wonder if there is a difference between a machine trying to deduce intent vs a human doing the same task.

Fundamentally, no. There may be practical differences between specific machines and people.

> As humans, we may be able to pick up subtle cues that machines simply lack the ability for.

Depending on the machine, the reverse could be true.

"...As humans, we may be able to pick up subtle cues that machines simply lack the ability for..."

But now we're back to the problem of deducing intent. Only now we have to deduce the intent of the human who deduces intent. (Back to the problem of "who watches the watchers?")

The problem isn't the one doing the inferring, it's the inferring.

I was never all that into social media at all, but the thing that made me really quit was that the only things I ever clicked from my FB feed were suggested posts. Their algorithm knew me better than my friends.

Maybe, maybe not. A key difference is your friends are posting for themselves, but the suggestions are posts tailored for _you_.

This is ultimately what will limit AI's impact/growth. Most people aren't comfortable with the idea of giving up their own judgement to anything else - other humans or machines.

Yes, mostly because an external judgement that doesn't actually know their intent isn't as good for the user as their own judgement.

Or AI will grow despite this, and the result will be powerful AI without any solution to the alignment-problem, thus being overall vastly damaging.

Spot on. Note that the issues are different (and not as bad) for Reddit and HN type things.

> It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely?

I wish this didn't even seem right to anyone, even from the beginning. Whether I happen to click on or be interested in something once is different from it being what I want to see again. If I go to a movie, that's not evidence that it's my sort of movie — I don't know if I'll like it until I see it! Priority really has to be on the evaluation of things and the explicit desire to get more, not on reading things into the fact that I was open to something in the first place.

With the what-I-click approach, one day I am prompted for some reason to click one sort of thing… then I'm shown more of it the next day… then one pretty arbitrary starting point is turned into a defining filter for me forever. This means that simply clicking something out of curiosity threatens to define you and your experience for years.

The ethical way to handle this is to do some mix of (A) giving back control to the readers for what they explicitly choose to follow (not whatever they happen to click, and not even what they "like", because liking should not equal following) and (B) doing the opposite of bubbling and actively inserting some mix of stuff-they-don't-usually-click, i.e. novelty, so that people are actually exposed to new ideas and perspectives they might otherwise not even know exist.

Your general concern is well taken, but clicking on something once will not "define you and your experience for years." The system will respond to your initial click by introducing similar content to your feed, but if you don't click on those stories, the effect will be transient. Failure to engage is itself a form of feedback, for better or worse.

I don't doubt that it works that way sometimes... but man, somewhere along the line I googled a Nebraska Cornhuskers football score and Google randomly brings it up again and again and again... even when I give it feedback that I don't want that information, it comes back later on.

I feel like I've seen this behavior on other sites and systems as well. I've no doubt the prioritization initially works, but there seem to be other factors at play that bring up old data.

In my case my team is not nearly as popular as Nebraska so I suspect the logic very roughly ends up something like "Ok he watches college football and oh hey this Nebraska story is really popular and stories about that team are big now so here ya go..."

After a university in another state sent me an unsolicited email offer (never opened), my Google feed started showing me everything about that school, even going as far as setting up notifications for its class registration calendar. The feed started to loop in college football and Greek life news, which are incredibly off base for my actual and actively searched interest history.

I spent a year slowly cultivating a pretty decent feed of relevant content on my Pixel, and it went full Netflix 2010 seemingly overnight.

Early Netflix suggestions, no matter how selective and consistent your selections over many years, could be instantly subverted by your lonely sister-in-law getting on your account one Saturday night and watching a few foreign language romance flicks. After that, you'd never really stop getting recommendations for "movies starring sexually aggressive male leads" or "films with actors who look like Antonio Banderas".

I think I'm done being used to train everyone's algorithms.

Heaven forbid you entertain your 6yo niece for a day, and find she's polluted your Netflix with the most insanely numb drivel that passes for animation today. It takes months to expunge.

What's wrong with Netflix anyway, that one off-topic movie can sway your 'preferences' so drastically?

I find Netflix doesn't recommend anything based on my behavior at all anymore. Now literally the first 5 or so rows of content on the home page are a random smattering of "originals". But they definitely do select the title image based on my past behavior. So, not helpful.

I hear ya man. These algorithms seem to fall into stupid uselessness after just one error or prioritization change, like code normally does. But it's telling that the "intelligence" behind these things is still ultra fragile, in that it can simply cease to function as far as the user is concerned.

HAL 9000 failed because he was told to lie. You'd think that prioritization would be easy from the outside. Granted, it was a story, but it seems to fit that a minor change results in a collapse as far as function goes.

Yeah, and it's beyond that. What if you weren't totally opposed to continuing that interest? If you're half interested in the next article, the whole thing will be reinforced. You find yourself genuinely half-interested ongoing, and this becomes part of your life.

It's one thing when something you don't care about keeps showing up. It's another to think, "I'm actually interested, but if I click this once, then I'm liable to effectively commit to this staking a permanent place in the half-interesting things that fill my life from here on!".

Amazon is probably better placed to make an informed decision, as it's most likely the products you look at but don't buy that you're actually interested in.

Unfortunately, in reality that doesn't seem to be the case. Last week I bought some wireless headphones, and now my feed shows a bunch of those. It also keeps showing me books by an author whose series I bought but never finished (on Kindle). And a selection of Spanish books that my wife needed once for her studies. And for some reason a load of large kitchen appliances.

Apple News actually follows the model that you describe as the ethical way to do this.

In my experience, it produces better and more pleasing recommended content in its feed due to this control.

Inference models such as YouTube's always feel like you are being pushed into certain content, and actually make me personally hesitate to sample or click on certain videos.

In the YouTube model, simply sampling a single video starts up the recommendation engine, which keeps pushing similar videos, and as a user you feel railroaded into being profiled.


And thanks for the note about Apple News. I must admit that having left Apple over their walled-garden iOS direction (turning to embrace of GNU etc.), they have stuck to a higher-road in a lot of regards compared to the other big companies. They're still nowhere near as awful as Google, Facebook, Microsoft… if only Apple weren't sabotaging copyleft and hadn't pioneered the in-app advertising, sales, and tracking that are actually so much of what's wrong…

There's a different problem I want solved. I want the option to see only original content, nothing shared.

I don't want to see some random cat video Sarah thought was cute. I can go to /r/kittens myself if I want to see random cute cat videos. I do want to see the video Jen took of her cat because I know him and sometimes feed him when she's out of town.

I think I'm not alone in this, and some of the popularity of Instagram is due to its UX making it easy to upload content directly from a mobile device and not so convenient to share content not created by the user.

I've used the FBPurity browser extension to create a nearly share-free Facebook feed. It reduced the amount of content by 80+% and greatly improved the signal to noise ratio.

You're missing one thing. The reason we have the algorithm and not the chronological newsfeed is not that it's better for the users but that it's better for Facebook, which can charge the people who create the shared content to float up in the feed. You see the shared meme about the latest new Netflix show before your friend's cat photo because Netflix paid Facebook to show it more often. Netflix is the customer here; you are the product.

Again, being able to control exactly what you want to see with a control panel is a simple thing to implement, but that's not what Facebook does or is. It's an advertising platform. "Social networking" is doublespeak.

I'm entirely aware that the only reason there isn't a tab for this on Facebook is that Facebook doesn't want it. It would obviously be easy for them to implement it.

It might, however, be a compelling feature for whatever displaces Facebook to offer. It would require a business model that's not identical to Facebook's.

100% ... although I sometimes value some links and images shared by friends that aren't original content, I _rarely_ appreciate the stupid memes and page shares that are little more than people screaming into the echo chamber at people who already agree with them.

I've been manually blocking page after page after page ... every time something gets through, and I think to myself, "ugh, this is stupid", I "hide everything" from that page in the hopes that I'll never see it again (and yet, "LAD Bible" keeps coming back into my feed!).

As you say, I much prefer to see original content and pics posted by people ... in fact, even if it's a link that's posted (and not re-shared from another page), it's usually interesting because they took the time to actually copy/paste that link from a website, so chances are they've actually read the content.

> if it's a link that's posted (and not re-shared from another page), it's usually interesting

I agree, and would be fine with including that sort of thing, especially with a UX that encouraged writing something about the link. A text post including a URL is still something my friend wrote.

On the little dropdown menu for a shared link on Facebook, you can tell it to block all posts from that domain. I found that after doing that on half-dozen of the biggest click-baity sites, my feed is almost all original content again (and this was before Facebook did their big refocus recently).

I've often thought it wouldn't be that hard to make an image-based 'social network'/sharing platform à la Imgur (or /r/pics, etc.), but using simple visual hashing to ensure every image is unique. By partitioning your network so that low-effort memes have a place to go, and with some light moderation, you could probably make something quite enjoyable.
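The "simple visual hashing" bit really is simple, for what it's worth. A rough sketch using an average hash (aHash) over an 8x8 grayscale thumbnail: a re-encoded copy of the same meme lands within a few bits of the original, while a different image doesn't. This is a toy that assumes the thumbnails are already extracted (a real system would decode and downscale actual images, e.g. with Pillow); the threshold is a made-up number.

```python
# Average hash (aHash): fingerprint an 8x8 grayscale thumbnail as 64 bits,
# one bit per pixel (above/below the mean brightness).
def average_hash(pixels):
    """pixels: 64 grayscale values (an 8x8 thumbnail), row-major."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # 64-bit fingerprint

def hamming(a, b):
    return bin(a ^ b).count("1")

def is_duplicate(a, b, threshold=5):
    # Small Hamming distance -> almost certainly the same image,
    # even after recompression or a mild brightness shift.
    return hamming(average_hash(a), average_hash(b)) <= threshold

# Toy "images": an original, a slightly brightened re-upload, and an
# unrelated (inverted) image.
original   = [10 * i % 256 for i in range(64)]
reuploaded = [min(255, p + 3) for p in original]
unrelated  = [255 - p for p in original]

print(is_duplicate(original, reuploaded))  # True
print(is_duplicate(original, unrelated))   # False
```

Because the hash compares each pixel to the thumbnail's own mean, uniform brightness changes barely move it, which is exactly the property you want for catching re-uploads.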

I'd love a per-user flag for this. "Don't show me shares from Bob" in the drop-down on posts. I've got friends whose lives I enjoy hearing about - kids, families, photos, etc. - but whose multiple times daily political memes I find toxic.

I'm kinda the opposite. I depend on certain people filtering certain subjects and giving me a "best of." I would never go waste time on some sites myself- and now I don't have to. Jackie finds the good stuff and shares.

I'm not sure the concept of the News Feed is the problem. I think it's in the execution. Currently, it's a big algorithm that's thick and opaque to the user.

A full-page grid of posts doesn't solve the problem; if anything it makes it worse. You'd lose visual hierarchy and grouping, and posts that are mostly red or bright would get more eyeball time than the rest.

I think a more natural control over your own News Feed, without going through a screen of dials and buttons, could be the way to do it. Simple filters at the top, similar to what the Gmail inbox introduced: Funny/Sad/Deep/etc., based on the content and the response from other people (like/angry/sad/etc.). And like the Gmail inbox filter, use it or ignore it. You still need some smarts to nuke the every-day spammers. And sometimes chronology isn't the best order, but it's not too hard to work out when a big deal occurs; e.g. someone posted 12 pictures of their wedding ceremony 5 hours ago and their feed blew up, making it more important than someone's best-ever ramen noodles 15 mins ago.

> I'm not sure the concept of the News Feed is the problem.

I'm honestly starting to believe that it is. These are just random on-the-spot ideas, but I want to start seeing more innovative and different ideas. For example, if you want to optimize for getting news, let's see some innovation in journalism. If you want to optimize for being entertained, let's see some innovation in curated experiences. If you want to optimize for keeping in touch and connecting folks, let's do something akin to a CRM-for-friends ... where you still share all the stuff, but instead of showing you a feed, the site/app would just give you targeted reminders to reach out to folks you hadn't connected with in X amount of time.

Again, these are just random ideas off the cuff ... but we need to rethink this whole "wisdom of the crowds" thing, because it's too easy to manipulate by any number of actors in the system.

There's a business angle to the newsfeed that's not being talked about at all. Your news feed being "algorithmically controlled" is a huge advantage to the platform from an advertising perspective. Facebook at this point has the liberty to show me a promoted page because one of my 700 friends liked it at some point in time. If none of my friends has liked it, then it's promoted as "Popular across Facebook".

I believe promoted tweets work in a similar way. If a platform provides any sort of control over the newsfeed, it loses the leverage to show you what it wants you to see.

Which is exactly why Facebook and Twitter have "blurrified" your content by making it have no verifiable order by default ("top posts" meaning "totally arbitrary order").

This, as you highlight, probably brings in the big bucks. This makes me think Google had something similar in mind when they "personalized" the search results (a wildly unpopular move), going to show that when you are the product, there may be no UX too unpleasant to sink to.

The upside of this is that these incremental UX sacrifices leave the market open for an alternative, either of the same ad-driven kind or, hopefully, of the more sustainable kind.

Hey, a man can dream.

I think the train for an actually usable UI (as opposed to the dumbified abortion most of the big sites have) is long gone.

A usable UI (for non-niche sites) is one of those things you can get nostalgic about with "back in my day...". It's not coming back as long as advertisers pay for things.

I fail to see how the ad driven model isn't sustainable. If anything a non ad driven model is less likely to be sustainable. I doubt what Facebook is doing is driving away that many users.

The ad-driven model will show as many ads as possible, while making the user experience just barely tolerable. Usually, they forget about remaining tolerable -- see popups, animated GIFs, Flash ads, etc. -- and get slapped down by technology that actually serves the user. See also the VCR, which let people record TV shows and fast-forward through the ads. Ad-driven media will always cannibalize itself.

Indeed, but the opportunity of that control also carries the potential for ruin. Skew the algorithm for maximum profit and the result is less long-term revenue thanks to a reduction in usefulness to the user, which leads to either inattention or abandonment... no eyes, no ads.

Providing user control (even limited) over the newsfeed actually increases leverage with regards to long-term revenue, since it's an expression of user intent... which is a clear datum for targeting. What it dampens is the kind of "run of site" or "simple demographic" advertising that sites often accept because it's "found money!" Except that in the long term it isn't, due to the inattention/abandonment problem mentioned earlier.

How so? They can give you control and still insert ad cards with any targeting they want.

Banner blindness and ad blockers mean you’re more likely to see something shared by a friend than an ad.

The real problem is that we need to be incentivized to pare down our friends to those we truly care about. But that's exactly the opposite of the platform's incentive.

The reason for the shift to an algorithm is that FB doesn't want to give you any incentive to unfriend or unfollow anyone, because that reduces FB's ability to maximize your engagement. They want to be in control, not you.

If FB just let chronological newsfeeds get out of control, we'd have no choice but to manage them ourselves, avoiding this entire problem. It would cost us a little time upfront but save hours in the long run and result in a healthier, less algorithmic experience.

The easiest way to judge what FB believes is in their interest is to compare various actions by their ease of execution. For example, "liking" a post is one click and instant. Unfollowing is 2-3 clicks into a hidden menu at the least. Not to mention that it has an artificial 1-2 second delay built in that kind of makes you "pay" for the action.

The newsfeed is a creation of mobile-first design.

Remember old Facebook? The MySpace ripoff with a nice design?

Your homepage populated with widgets? That can easily cope with 200 friends. You'd just have separate widgets for different types of events. One widget for a feed of genuine status updates, one for friends' shares, one for likes etc.

Mobile first destroys that. You've only got space for one list.

That's a really good point. As awesome as it is to have a supercomputer in my pocket, I wonder if it's really a net benefit: mobile computing has led to a lot of compromises (e.g. the newsfeed) which really aren't good.

FWIW, I don't even have Facebook installed on my phone. My most frequently used apps are Firefox, Signal & Inbox (in roughly that order), followed probably by Maps, Google Play Music, KeePass & Authenticator. For everything else, there's my desktop or my laptop.

"Mobile computing" is almost always "mobile consuming," and trying to fit the requirements of computing into the design of a consumption platform is necessarily compromising, if not failing outright.

That’s part of it, but not the whole story. Mobiles have also produced a lot more content. The article makes the point that checking once a day means you might have 3000 updates.

Yes but not all updates are created equal. A 50 line status update is not equivalent to clicking a like.

The everything-nature of the feed itself is the problem. It is the source of its own weakness.

A source of information too dense to be browsed by a normal human was created, forced on users as the only way to interact with the product, and then automatically curated to 'fix' the issue that was created by having a firehose of updates.

I vaguely remember a joke about a radio that didn't have a volume knob because "it's already set to the correct volume".

It's certainly possible to overload people with settings, but I don't see why we aren't offered a choice of a few very different feed filters. It's not like any single algorithm can be what everyone wants.

This seems more important than more photo filters, don't you think?

The pink elephant you can't forget is that the feeds on most services are not designed to benefit you, but to benefit the company. So for instance Facebook engaged in substantial research manipulating people's emotions by making their feeds either more positive or more negative, and seeing how this affected their posting and sharing trends. This one was widely reported (search for 'facebook psychological experiment') and demonized.

I think there are a couple of important takeaways from it. First, Facebook wasn't just doing that out of academic curiosity; they're manipulating their users to maximize their own benefit. And the more important point is that this behavior is, all but certainly, something most every company that relies on advertising is engaging in. Letting you see what you want to see is not their goal, except insofar as it coincides with you being more likely to stay on their service and provide more personal information to them.

And we might intuitively expect these two goals to be aligned, but this is improbable. Consider casinos. It's not wins that keep people gambling - almost nobody wins in the long run in most games. Instead it's a complex system of addiction, including near wins and audio-visual manipulation, that keeps people dumping their money into the machines. And so too with Facebook: it's that small rush of interaction, likes, and so on that keeps people 'playing'.

Actually, casinos DO depend on wins to keep people gambling; there just needs to be more losing over time. If nothing else, the psychology of inflating personal successes and devaluing losses will achieve the needed goals, although adding all the other goodies you mention optimizes the process.

A casino in which you ALWAYS lost, despite a constant "near win!" incentive and all the other manipulations would quickly be empty.

i remember instagram pre-algorithmic timeline, and it worked well: because everyone knew every post would be seen, there was a cultural norm of not posting more than ~ 1 time a day.

it's not like crazy-amounts-of-posting is some inexorable law; it's a response to the social networks as they now exist.

even with too much content to read, a time-based sample is arguably still superior to an algorithmic one, because it's _unbiased_. this gives you (1) an accurate impression of the mood and content of your timeline and (2) completely short-circuits problems with soft-censorship and manipulative influence. (fb has done various vaguely-evil experiments w/ controlling the emotional content of your timeline one way or the other.)

lastly, there's an easy fix that gives you best-of-both worlds: simply allow tagging some sets of people as "close friends" (or any other label!) and allow filtering posts by that label.

(and, sure, if you want to, go ahead and include an algorithmic timeline view. or even algorithmic w/ various different criteria for inclusion. the fact that fb/instagram/twitter won't even give you the option of chronological ordering suggests that it's more about maintaining control than about creating a good user experience. /tinfoilhat)
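To make the "easy fix" concrete, here is a rough sketch (made-up names, plain Python, not how any real network implements it) of a strictly chronological feed plus per-user labels and label filtering, which is all the suggestion above requires:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: int
    text: str

@dataclass
class Feed:
    posts: list = field(default_factory=list)
    labels: dict = field(default_factory=dict)  # label -> set of authors

    def tag(self, label, author):
        """Tag a friend with any label, e.g. 'close friends'."""
        self.labels.setdefault(label, set()).add(author)

    def view(self, label=None):
        """Strict reverse-chronological order, optionally restricted to a label."""
        posts = self.posts
        if label is not None:
            allowed = self.labels.get(label, set())
            posts = [p for p in posts if p.author in allowed]
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)

feed = Feed()
feed.posts += [Post("alice", 1, "status"),
               Post("bob",   2, "meme"),
               Post("alice", 3, "photo")]
feed.tag("close friends", "alice")

print([p.text for p in feed.view("close friends")])  # ['photo', 'status']
print([p.text for p in feed.view()])                 # ['photo', 'meme', 'status']
```

The point of the sketch is that the ordering stays verifiable either way; the label only narrows who appears, it never reshuffles.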

My biggest issue with Instagram's algorithm is that it resets on every view. Which is understandable from a product marketers perspective - if a user sees the same post up top they are probably just going to close the app again.

The problem is that I open the app, see something interesting (like a friend announcing a baby), leave a comment, and swipe back to my home screen. I tell someone around me, they want to see it, so I reopen the app and that post has been obliterated from my timeline, because it already extracted value from me with the like and comment. And this is literally within 5 seconds of leaving the app (not even killing it, just going to the home screen).

The true reason for the algorithmic timeline is just to increase engagement, that is all. With a timeline-based feed you know where you left off, and you know how far you have to scroll to get back there. With an arbitrarily organized feed you just keep scrolling, thinking you will find something interesting or new. It is the same reason why casinos have no windows and always use the same lighting - so you stay longer and forget about time.

> allow tagging some sets of people as "close friends" (or any other label!) and allow filtering posts by that label.

Facebook has this feature. You can tag someone as a close friend, and it will show you posts by close friends first.

in my experience, even with tagging and the "chronological" timeline, i still don't get a strictly time-based timeline that shows everything. i just get a "less algorithmic" timeline.

(i haven't tried recently, tho.)

I was fascinated by the "circles" approach Google took with G+. It felt like a great solution to newsfeed overload and directional sharing.

But then Google+ failed, and we never got to find out whether Circles worked or not

I recall Zuckerberg commented on this. Same reason Facebook groups didn’t take off in a major way. Nobody wants to manage assigning people to groups.

I also remember the slide deck from some hip product guy who pitched circles... a 150-slide hot mess. [1]

[1] http://www.businessinsider.com/heres-the-presentation-that-i...

Lately, I can't help but notice the vast number of ways social media is deficient compared to real life.

In real life, you get "circles" for free! Whoever you are with in the current moment is your circle, and only they have access to what you "share" during that time period.

There is some leaking through the grapevine, but we have an intuition for that and usually handle it decently well. It's often even desired.

> In real life, you get "circles" for free! Whoever you are with in the current moment is your circle, and only they have access to what you "share" during that time period.

And the whole reason Facebook et al are popular is that in real life it's really hard to get all the people you want in the right "circles" (i.e. places) at the right times.

Facebook doesn't have the circle feature. It's publish to everyone only.

You can send a group message, which I suppose can sort of function like a circle, especially if it's long-lived. But it would be hard to manage many of those.

Actually you can create a circle (might be called a list?) and publish just to that circle. People just mostly don't bother, because managing circles is so much effort.

correct ... unfortunately this feature has been super hidden over time in the FB UX. It's almost impossible to figure out unless you already know about it. I find it super useful when I specifically want to segment my audience. I don't do it with everyone I'm 'friends' with, but for example, I have a 'kids' list for every underage person I'm friends with (my kids, nephews, nieces, etc.), and I will exclude them from a post if it's non-kid-appropriate. But as you mention, it's a pain to do this for everyone.

Oh yeah.. I completely forgot about that, fair point.

Did you finish high school before 2005 or so? You may remember a time when 80% of the population didn't have pizza neck; now it feels like it's flipping. Since I lost 50lbs in 2013 (only carried it for 2-3yr, BA food), there's been much lifestyle reflection, and I've been pretty unplugged since then. By all means use tech, but reflect on what it's adding and conduct yourself accordingly. Tech's been my hobby and career forever; so much potential to empower, entertain, and aid in universe construction - of course the tool has two sides.

> Nobody wants to manage assigning people to groups

Instead, we really should use NLP to automatically categorize content.

e.g. I wanna make my feed 50% seriously thought-provoking articles, 20% shitposts, 20% news, and 10% memes.
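As a toy illustration of that mix (the NLP classification step is assumed to have already happened; every name and number here is made up), filling a feed to match the user's chosen shares is straightforward:

```python
import itertools

def mixed_feed(posts_by_category, weights, n):
    """Fill a feed of size n so category shares approximate the given weights.

    posts_by_category: {category: [post, ...]}, newest first.
    weights: {category: share}, shares summing to 1.
    """
    feed = []
    iters = {c: iter(posts) for c, posts in posts_by_category.items()}
    quotas = {c: round(n * w) for c, w in weights.items()}
    for category, quota in quotas.items():
        # Take up to `quota` of the newest posts in this category.
        feed.extend(itertools.islice(iters[category], quota))
    return feed[:n]

posts = {
    "articles":  [f"article {i}" for i in range(10)],
    "shitposts": [f"shitpost {i}" for i in range(10)],
    "news":      [f"news {i}" for i in range(10)],
    "memes":     [f"meme {i}" for i in range(10)],
}
weights = {"articles": 0.5, "shitposts": 0.2, "news": 0.2, "memes": 0.1}

feed = mixed_feed(posts, weights, 10)
print(feed)  # 5 articles, 2 shitposts, 2 news items, 1 meme
```

A real version would interleave the categories rather than append them in blocks, but the quota logic is the whole idea: the user sets the dial, not the engagement model.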

It’s a good idea that Facebook could let you control the algorithms, but set sensible defaults. But would that help Facebook’s bottom line?

Facebook Groups are a different thing, and they're huge now. I believe you're thinking about lists or something like that.

Yes, the Facebook equivalent of a circle is a list.


And yet we do it all the time when creating group messages.

Those are transient - they exist in the scope and for the duration of a conversation. You don't have to pause while sharing something a week later and wonder "who am I sharing that with again? Will the new friend I added yesterday see this, or do I have to add them to something?".

Apples/oranges.. usually there's only a few people in a messaging group.

I initially liked the circles concept ... but over time I realized that _for me_, it was exactly backwards. I don't want to spend time categorizing every single person I know into these circles. Instead, I found myself wishing that I could just categorize what I post (as normal), but let everyone I know decide what category/circle to subscribe to.

Every social network now has asymmetric following, but it's all or nothing ... I wish I could follow someone's tech posts, and another person's kid posts, and yet another person's political posts ... etc etc

That's why the smart kids on Tumblr have ten different blogs.

I don't have time to clean up my inbox. Who thought it was a good idea to require manual work to "set up" a social network? Of course it failed! It was designed by engineers who spend time indenting their code to the exact right format!

Joel Spolsky talked about this, with regards to StackOverflow. Engineers love organising content - tagging it, grouping it, structuring it. But most people can’t be bothered, including engineers in a hurry.

There were a few reasons for Google Plus's failure, including Facebook: https://www.vanityfair.com/news/2016/06/how-mark-zuckerberg-...

I was pretty excited when I first found out about Google+, but it was similar in excitement to discovering Facebook and also Fitocracy, yet all of that meant little upon the realisation that I wasn't truly getting value out of any of them.

For a long time, I was in denial about Facebook being bad for me (I thought it was everyone else who was using it wrong) but these days I can no longer justify it in terms of productivity, benefit, etc.

Simply, I need to read more books. No social network is going to help me with that; not even Goodreads (or LibraryThing - anyone remember that?).

Another alternative to circles is to create groups of topics for yourself, and have people self-assign to the topics they are interested in.

So rather than creating and putting people into a "Summer Trip 2018" group, you'd create a "Summer Trip 2018" topic that you share things to, and your friends would subscribe if interested. If you frequently post about food, you could create a food topic, and people would subscribe if interested.

Yeah, Facebook definitely needs subreddits so I can unsub from /f/politics instead of my aunt. Unfortunately there's probably no money in it.

This article is based on the false premise that a person with 300 friends will have 1500 posts a day. This was a lie manufactured by Facebook to sell the algorithmic feed to journalists, who passed it on as truth to their readers.

As we all know, people with 300 or even 500 friends have nowhere near 1500 posts a day. Sure, some friends post 10 memes a day, but most people do not post anything at all. In any case, people were able to manage the chronological feed without any problems. I did not hear anyone complaining that they had to deal with too many posts, and managing the feed wasn't an issue.

What was an issue, however, was that Facebook needed to make more money. So they created an algorithmic feed that pushed posts that engaged people more (videos and images) and downgraded external links (to keep people on the platform).

They lied about the 1500 posts a day, and journalists passed it on, and Facebook made more money. All at the users' expense.

> you will have something over a thousand new items in your feed every single day

This is clearly nonsense. Why do people’s “likes” end up in the feed in the first place? When people say they want a reverse chronological feed they mean of actual posts, not every possible activity. And everyone knows it, too.

I would love it if you could filter Facebook to only show posts of friends, and not their likes of clickbait news articles, shares of stupid math puzzles, or them wishing their third cousin a happy birthday.

Actually, the fact that your friends' likes show up in the Facebook newsfeed is actively discouraging me from liking anything. I do not want my friends to see what posts I like.

I think there is a setting that disables the default option of (edit: some of) your likes showing up on friends' feeds (ad preferences?) but I know that's all beside the point now in the broader context.

Before my deactivation, I thought about those in my list whom I wanted to see or communicate with and to be honest, I couldn't think of that many in proportion to all those I mostly added a decade ago. Frankly, I found the "news" to be more interesting than most "personal" posts.

I also didn't want new Friends to be able to easily trawl through my no-longer-relevant past Facebook life. Upon realising there was no easy way to control the privacy settings for that and in frustration, I started the process of deleting Facebook. (Not many realise this, but Messenger will still work while an account is deactivated.)

It's simply too hard to create a graduated list of different "Friends" on Facebook (or Google+). While their stated mission might be to "connect people", real life doesn't work that way. Coupled with an encouragement of oversharing by design, it's a recipe for a website that's not very good to spend time on.

I don't know of anyone within my own friend group who has become successful from using more social media. I also don't plan to be the first, not when other more lucrative niches exist.

That was a friendfeed feature I really liked. I don't know if the feature ever made it into facebook after the acquisition.

> We’d just invite close family and friends. Then, we actually made a list of ‘close family and friends’… and realized why people have 100 or 200 people at a wedding.

Then you either have a VERY big family or a VERY broad definition of “close friends”.

Having had to make such plans, lists grow fast. Well, we have to invite all three of my sisters. So their husbands/boyfriends, kids, kids' boy/girlfriends, grandkids. Then most of our friends have families. We have to invite Joe: he was there on our first date. Justin, for sure: he helped us move in together... Crap, now there's over a hundred people.

Then split it. Family and close friends, i.e. people you see regularly for a traditional party in the evening. For everyone else there is an informal dessert reception after the ceremony.

There is this idea that for it to be a "proper" wedding, you have to host a formal party and provide for each and every person who has had an impact on your life. It doesn't have to be this way, but if one chooses to do so, the implication is that planning and costs skyrocket.

Just say "no kids" and "no dates" and watch 50% of your friends and family opt-out - problem solved. /s

Well, there's two of them, and each family member or friend may bring a partner, so I can see someone reaching 100 quickly. But I see your point.

In terms of social media, I might be a little outside the norm. I have at most 150 friends, family members and acquaintances on Facebook. Maybe 10 of them post the majority of content. If people post nothing but links, games, or shares, I just unfollow them. I only care about the stuff that's actually going on in people's lives, and that is an increasingly small subset of all the Facebook posts.

My Facebook feed is curated, by me, so that I can browse everything that happened in the last week in 10 minutes at the most. So the article's claim that wanting chronological order is pointless doesn't make much sense to me.

Facebook just doesn't seem to encourage this kind of usage, there simply isn't enough ad money in it.

Yeah, my wife and I actually pulled off the small wedding in our living room with about 20 people there. The officiant was an old friend of the family. Still glad we did it that way.

Can you think of 25 people you would want to invite to your wedding?

Add in significant others, then double again for bride and groom.

That's 100 people. Sure there's lots of overlap, so it wouldn't actually be 100, but I know I personally have more than 25 people I want at my wedding that aren't friends/family of my fiancee.

Our wedding had close friends and family.

All total, we had 23 people, counting the caterer.

This is why curation is king and a good human editor is a rockstar.

On the algorithmic side, Apple and Spotify are leading the way. For example, Apple Music solves this problem by grouping feeds: Favourites Mix, Chill Mix, New Music Mix, Friends Are Listening To, Radio.

Spotify is very far from ideal, though, or even from what last.fm had when it comes to radio based on a song.

This might be anecdotal, but I only listen to my 'Discover Weekly' list on Spotify. Every week I get a new set of songs, and I mostly listen to those songs every day until I get a new set. For some of the songs on the list, I hop onto the album and listen to that as well.

That algorithm knows me very well, and it's able to delight me with new music every week.

I've found this as well. Discover Weekly both gives me old favs as well as introduces me to new stuff. It's pretty cool.

A bit off-topic: because you used the word “had”: last.fm is still going, the algorithms are just as good as they ever were, it plays music through Spotify.

I've lost the link to the article, but there's still a certain something that Spotify lacks compared to a human radio playlist programmer.

Two things:

One is the ability to craft a mood or tell a story, to relate to current events and culture.

The other is that few humans get to do that directly now; the experience is filtered through algorithms, then cloned and shuffled.

Takeaway: "cool", relevant, and "connected" come from minds, not rules or the limited AI in use today.

People are supreme at discerning the intent of others, through nearly any medium. Think of a prisoner tapping on the wall.

The more diluted, distant, muffled that intent is, the less compelling the expression of it is.

We quite simply get a lesser sense of the mind.

Think original painting vs a print.

Or a live DJ making an evening: connecting with, relating to, and sharing their local scene with me directly. By that I mean picking tracks they want, for human reasons; talking for their reasons, not delivering liners; even pausing for a few seconds to let something awesome sink in. Couple that with phones and social media, and it's a compelling experience, given that the DJ understands and has mastery of their scene and can express it.

Now, that same DJ, voice tracked all over the nation, picking from a curated, "playola" list of "researched" tunes, and delivered via automation, play lists, etc...

The former can be pure magic. The latter ranging from pleasant to barely tolerable.

That is what is missing.

How many big companies have internal playlists? And if they do, what do people do, how do the tunes get on there and why do they matter?

Great questions worth really thinking about in human terms.

An example might be, say me making a music recommendation to a member with access to the list.

It starts with a quick exchange via SMS, email, whatever, "man, I'm waist deep in this driver code and just can't focus."

"Feel you brother / sister, give these guys a go, they are my goto, for low level assembly language programming. Give them 5 minutes, and you are there, in flow."

(ozric tentacles, would plug in here, for a music selection)

Later, "Man, they made a bazillion albums! This is great!"

On the company play list later on, "Lose flow? A buddy sent me these guys, thank him later, enjoy!"

Chatter happens.

Later still, someone tags and selects a few tunes for flow, trance, whatever...

At each layer, the human spark of relevancy, meaning, context gets diluted, until it's just another data point tag like all the other tags.

Going back to the evening DJ.

Making a set, live, maybe lightly prepped for Friday the 4th isn't the same as a Friday goto set for summer, which isn't the same as researched, can't miss hits a program selects from on Fridays.

At each layer here the human to human contact is diluted, more distant, less expressive, less relevant, you get the idea.

Lastly, a human playlist programmer is one step removed from a human spinning tunes directly. But it's good, in that streaming eliminated the need for massive, broad generalization common to, say syndicated radio.

Not all expression forms can be done playlist-programmer style, but the basics can. Mood, time, feel, and a general matchup people can identify with come across just fine, depending on how directly the playlist is able to be specified.

People like and identify with those.

To connect, and get the real human mind to mind does require a person with considerable agency, perception, creativity, and talent.

Mass broadcast, as deregulation allowed it all to scale, diluted this. Secondly, the need for humans inhibited the ability to trade on the value they create without cutting them in and/or yielding editorial control.

The product is a shell of what can be done, but it scales, can be reproduced, bought, sold, managed editorially.

Those are the basic dynamics as I see them.

Think of the problem in this way... you have a butler that is very very knowledgeable about all manner of things, which surely is a valuable and useful resource. When you ask him to pack you a tuxedo for attending the baseball game he responds with "might I suggest a more leisurely attire? I've noted that most people attending baseball games seem to dress casually." You may respond "Capital idea, man! By Jove, Jeeves you are a life saver!" He then packs your jeans and a bowling shirt in your travel bag and you are heartily satisfied.

Over the course of years you repeat this many times, until one day when Jeeves decides not to suggest but instead to simply pack your jeans and a bowling shirt. When you arrive at your destination and open your bag you are aghast and call your butler. "Good God Jeeves! I specifically asked you for a tuxedo to attend the baseball game and you packed me this slovenly casual wear!" Jeeves, in his defense, responds "but sir, for YEARS you've asked for a tuxedo to attend baseball games and I have suggested casual wear, and you have ALWAYS chosen the casual wear!" "True enough Jeeves, you presumptuous butler, but tonight I am singing the national anthem at the game! Without formal attire I shall be reduced to ruin! You're fired!"

Too many products have opted to be the presumptuous Jeeves, without even bothering to spend the years our poor butler did learning the eccentricities of his employer. And yet even with that dedication, poor Jeeves was fired because he was confident he could infer the intentions of his employer. Whether it was the arrogance of Jeeves presuming to know better than his employer, the ignorance of Jeeves in understanding that he lacked crucial information, or that Jeeves suffered under the delusion that past performance predicted future results matters not a whit... because Jeeves has been sacked.

Jeeves may find employment elsewhere but if he does not mend his ways he will eventually find himself unwanted and unemployable due entirely to his own faults.

Jeeves would have been the one to schedule the anthem singing and would have given you the choice of a couple suitable outfits for the occasion. Or just packed both the tuxedo and the jeans. If someone's flying you out to sing, you probably are in first class and have the upgraded baggage allowance.

The mistake you imagine is exactly the kind of thing competent human assistants are good at avoiding and AI is terrible at.

Jeeves is my butler, not my executive assistant and presuming otherwise will get Jeeves fired as well.

Competent human assistants know to negotiate formally or informally with their employers the limits of their authority. Unfortunately those creating products either do not know or do not care to negotiate clarity on such limits, and end up like poor Jeeves.

I believe Jeeves is right in your story. You should have specified to him that this time it was different.

No I should not have, because Jeeves' job is to serve me and at no point did I confer to him the right to insert his decision-making over mine with regards to garment selection. Had I done so, then indeed I should have been explicit and specific that this was a change in behavior.

I did not opt-in to the agreement.

Jeeves is a human.

If you think humans are going to work like that, you better never hire anyone.

Jeeves may or may not be human, but having hired many people and unfortunately having to fire a few people over the last 30 years I do have some experience with humans and work.

I've been seeing a similar overload of information on Reddit as well (HN is not quite there yet, but I'd imagine it's not too far behind). The front page is littered with images/gifs of bite-size information/memes. A few years ago, I'd learn something new almost every time I visited the front page.

I created a tool [1] that lets users aggregate top posts from selected subreddits and receive a daily email summary, but I think more can be done in this space. I'd also like a better discovery mechanism for things that I may be interested in that aren't obviously connected to current content (e.g. someone interested in programming may be interested in art as well, if fed a nice introduction to that scene, but current algos just stick to known interests).

[1] www.storyrake.com

My idea: limit how frequently people can post. E.g. max of two posts per day. That could certainly help with info overload, though I don't know how most people would react to the restriction.

I was thinking more like 1 per week :)

Why not just give users more power over their newsfeed, instead of assuming that the site engineers can get the balance right?

A user could specify "I want 50% of my newsfeed to be from immediate family, 35% from close friends, 15% from acquaintances." Use a pie-chart slider to make it clear. Then let the algorithm figure out how to interpret that chart.
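One plausible interpretation of such a pie chart is weighted sampling over per-group post queues. All names below are hypothetical; this is a sketch, not how any real network allocates its feed:

```python
import random

def blended_feed(buckets, weights, n, rng=None):
    """Draw up to n posts, choosing each slot's source group according
    to the user's declared percentage mix."""
    rng = rng or random.Random(0)
    groups = list(buckets)
    feed = []
    for _ in range(n):
        group = rng.choices(groups, weights=[weights[g] for g in groups])[0]
        if buckets[group]:
            feed.append(buckets[group].pop(0))  # next post from that group
    return feed

posts = {
    "family": ["f1", "f2", "f3"],
    "close_friends": ["c1", "c2"],
    "acquaintances": ["a1"],
}
mix = {"family": 50, "close_friends": 35, "acquaintances": 15}
print(blended_feed(posts, mix, 6))
```

Slots where the chosen group has run out of posts are simply skipped, so a 15% acquaintance allocation never forces filler content in.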

Because your preferred allocation might not produce updates at Facebook's desired frequency. FB's revenue is directly related to manufactured urgency and FOMO.

I like what reddit does: choose your sorting method (best, top, new, rising, controversial, gilded). Reading my front page sorted by controversial has led me to a lot of unusual and interesting content.

Orkut was the biggest social network in Brazil before Facebook.

Orkut didn't have the concept of a news feed. It had just communities, which were thousands of micro forums people joined and were displayed on their profiles; and scrapbooks, which were walls of text messages -- people could post on others' scrapbooks and on their own, so you used it to both message people and "broadcast" content about yourself, but that wasn't shown to anyone who hadn't voluntarily visited that scrapbook page.


I really like what François Chollet wrote here: https://medium.com/@francois.chollet/what-worries-me-about-a...

Instead of an algorithmic feed that tunes an opaque optimization objective, we should harness the power of AI/ML and optimize for what the user explicitly wants - whether it be learning, family, or sports commentary.

Maybe let users set the algorithms they want to use? E.g. sentiment analysis, fake news filter...

I want to build a solution to this problem: back to the classic RSS feed, but combined with an evolution algorithm/strategy (https://blog.openai.com/evolution-strategies/) that serves 80% of the kind of information you usually click plus 20% of information you've never clicked before. It's a good way to acquire news and knowledge outside your comfort zone.
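The 80/20 split can be sketched as simple epsilon-greedy exploration. This is illustrative only (an evolution strategy would learn the mix rather than fix it), and every name here is made up:

```python
import random

def pick_article(familiar, novel, epsilon=0.2, rng=random):
    """With probability 1 - epsilon serve something like what the user
    already clicks; with probability epsilon serve something from
    outside their click history."""
    if novel and rng.random() < epsilon:
        return rng.choice(novel)
    return rng.choice(familiar)

# Over many draws, roughly 20% of served items come from the novel pool.
rng = random.Random(42)
draws = [pick_article(["ml", "rust"], ["pottery", "jazz"], rng=rng)
         for _ in range(1000)]
novel_share = sum(d in ("pottery", "jazz") for d in draws) / len(draws)
```

The interesting engineering question is what moves an item from `novel` to `familiar`: a click, a dwell time, an explicit "more like this"?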

Even this is too much, I don't want a feed, I just want a daily digest about the cognitive equivalent of the daily newspaper.

Your newspaper idea meshes with what I've been thinking. But there's a problem. Newspapers did have images, ads, and headlines, but they were more than just that and a few one-line editorials.

In the early days, people posted rather lengthy articles/editorials about what they were doing, thinking, feeling, who they were, etc. On Livejournal, then on blogs, even in the early days of Facebook. But those are mostly gone now, reduced to tweet-sized chunks and making up only a fraction of the feed. Instead, people mostly just reshare, reblog, retweet, copy memes, add links, sometimes photos, checkin somewhere, etc. A newsletter with all of that filtered out, and only the actual content posts would usually be empty.

But if someone could start something like that and start a trend of people actually communicating on a large scale, and making it as digest/newspaper style, that could be pretty interesting.

Can you be more specific about "communicating on a large scale"? You mean public discussion of social issues/news?

I was picturing larger scale as in more like journal entries or blog posts than tweets and shares and one-line statuses. Not necessarily lengthy, but enough to give some context and some idea of the writer's feelings/thoughts around whatever they're writing about, yet still within the realm of social. That could include their thoughts on social issues or news (rather than just a link, meme, or one-line opinion), or just their thoughts about life and philosophy and what's going on in their lives.

Shameless plug: We created https://contentgems.com for this exact purpose. The ability to filter articles in a bunch of RSS feeds, and any articles shared on Twitter based on keywords I specify. It's not trying to be smart, it just gives you what you're asking for. It's a mashup of Feedly, Google Alerts, Buffer, and IFTTT/Zapier.

It was Douglas Adams (I think) who described a VCR as a machine that watches TV for you, so you don't have to. It's a good joke, but joking aside, I've been thinking for some years now that it could apply to social media.

You could have your own AI bot that reads your newsfeeds for you and tells you only what you might be interested in. The AI is tuned to your own preferences, not the social media provider's.

Has anyone done anything like this already?

These would be browser extensions that work on the feeds, I think? More complex "viewers" could be devised of course.

Personally speaking, what I would prefer is a stream instead of a feed. Like old-fashioned television, I see whatever's on the stream at the time I look at it, and nothing more: there is no temptation to keep swiping down to infinity. Important items can reappear later if they're missed. The content of the stream is based on subject subscriptions.
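A subscription-based stream like that could be as simple as a fixed rotating schedule, so everyone looking at the same moment sees the same thing and missed items come around again. A sketch with made-up names:

```python
def whats_on(schedule, now, slot_minutes=30):
    """Return the item currently 'on air': divide time into fixed slots
    and cycle through the schedule, wrapping around at the end."""
    slot = (now // (slot_minutes * 60)) % len(schedule)
    return schedule[slot]

lineup = ["cooking", "woodworking", "space news", "local events"]
print(whats_on(lineup, now=0))        # first slot: cooking
print(whats_on(lineup, now=45 * 60))  # second slot: woodworking
```

Because the schedule wraps, anything you miss reappears one full rotation later, which is exactly the "no infinite swipe" property described above.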

Would something like https://www.techmeme.com/ count as that? It's basically a "snapshot" of tech news right now, and the page ends there.

How well would this work across different timezones for friends who want to stay in touch but aren't on daily messaging terms?

Is there not something of an expectation mismatch here?

I don't expect a newsfeed to be an exact representation of everything that I'm going to find interesting, nor can it guarantee that it is never going to miss anything - the best that Facebook is ever going to do is a 'best guess'.

A good analogy might be my Spotify Discover Weekly playlist - it is pretty good, but it isn't a complete encapsulation of my musical tastes, and a lot more manageable than a chronological representation of everything that has been released that week.

What Facebook has always been missing is a low friction, invisible way to say "show less of this kind of thing", without it seeming like I am unfollowing that person, or the social baggage of it being a 'Dislike' button.

If I need any more granularity than that, it's probably more worth me contacting the person directly.

This is free AI. If you paid for it, it would serve you. This way, it serves whoever pays for it. An email from a top fintech asked me to work on their "personalized recommendation system that balances engagement, revenue, and our partner constraints". Do you see the welfare of their users in there? It's financial clickbait for naive people. Stop saying it's a tech problem, the uncanny valley and whatnot. This problem is created by billions of naive cheapskates and a system that rewards moral flexibility.

> "One basic problem here is that if the feed is focused on ‘what do I want to see?’..."

False premise. The feed is focused on engagement. Period. There is no right or wrong. Just engagement.

No one can know what the possible feed is, let alone what their ideal feed should be. Let's face it, that's going to change from day to day. We're humans. It's what we do.

As long as the long arc keeps you coming back THAT is all FB is concerned about.

> you probably do know several hundred people well enough to friend them on Facebook

Different people have different standards for what they call friends.

The newsfeed is the ultimate version of the 90s wet dream of "portals": a corporate orifice to enter and never leave.

I use Tweetbot as my Twitter client precisely because it uses a chronological feed. I will start scrolling and if there's too much to go through I just scroll to the top, missing hundreds of items. If it's important it will reappear. If not, I've not missed anything.

The wikipedia android app actually does this well. They separate items in the feed into different categories, and you may opt out of any category.


My only issue with Evans' article is his generalisation and potentially racial use of "Russians"...

> There are lots of incentives for people (Russians, game developers) to try to manipulate the feed.

The tree described in the second tweet is visible in Telegram right now. There are channels that are overloaded with content. I can see Telegram inventing a news feed soon.

We need a new social network that helps make the world more isolated and disconnected.

> ‘what do my friends want (or need) me to see?’ feed

Isn’t this what the notifications on FB are for?

This does work well for Close Friends. However, most of Facebook isn't close friends.

Maybe all we need is just a simple news aggregator

I think ahead of that, what we as a society all need is to read more books and do more physical activity. After that along with work/study, there's not much time left for news (in any form).

You'd need people to post to fetchable places first, say, their own website, or at least tumblr.

Looking forward to a new RSS feed standard.

Right, cause that will solve it, like it did the first time /s

The last thing we need is yet another standard; we need people to use the existing ones, like ActivityPub, Webmention, WebSub, and Micropub. And grouping is a client task.

This is not a tech problem in 2018.

How about JSON RSS instead of XML RSS? Or even IPFS pubsub?

What problem exactly does JSON solve over XML? How does it address the problem of people being lazy and still using FB?

What's wrong with RSS as it exists today? (I've heard people criticize the lack of comments, but that's clearly a feature.)

Does anyone else refrain from posting because you know that each post may take up your friends' time?

"There has been an overwhelmingly negative public response to Facebook's launch of two new products yesterday. The products, called News Feed and Mini Feed, allow users to get a quick view of what their friends are up to, including relationship changes, groups joined, pictures uploaded, etc., in a streaming news format.

Facebook founder and CEO Mark Zuckerberg has responded personally, saying "Calm down. Breathe. We hear you." and "We didn't take away any privacy options."

The funny thing is that privacy options are worthless: while they protect your content from being seen by other Facebook users or the public, they don't protect the content from being seen and used by Facebook itself, which is the problem. A "hide from Facebook ad targeting or newsfeed emotional manipulation" option isn't a privacy setting, is it?

"And when you have 10 [P2P] groups with [small number of family, friends, etc.] in each, then people will share to them pretty freely [in each case, only the content that is warranted for those individual groups]."


It's not possible in iMessage - with end-to-end encryption, Apple has no idea what you're sharing."

[ There is no technological reason that one company, or any company, needs to have control over the management of these "groups" and the "messaging" that may occur within them.

Two "groups" communicating over internet need not be connected in any way, bridging them is an option that should be left for the user. For example, each group is reachable only via a separate network interface. Not "networks" or "interfaces" as defined by marketing copy and buzzwords, but traditional network interfaces defined by software. The type of interfaces displayed when a user types "ifconfig" and "ipconfig /all".

The interface used to "surf the web", the one that connects to the sewer of advertising that is today's web, is not the same one used to reach any group. Think of this like VLANs if you wish, but it's even simpler. I have been "beta-testing" such a system, which is not new, for many years. There's no ads, no spam, no BS. It works. The "intelligence" of the network, so to speak, is at the edge, with the user.

There's no money in this, necessarily. This isn't an elevator pitch. It's called "a private life". Everyone needs one. The whole point is to escape the commercial world for a little while. "Clean separation", like those network interfaces. ]



You're not going to be the next dhouston bud ;) Take it easy!

Please don't reply to a comment that breaks the site guidelines by breaking them yourself.


Having too many newsfeeds sounds like a good problem to have: perfect application for Machine Learning to solve!

I am not a FB user. Never was. It never seemed interesting enough to join.

When I read these kinds of articles, I am always amused by the kinds of problems your life is enriched with when you are.

Not to mention the occasional article about how much your life improved after you stopped using FB.

