Fake “like” factories – how we reverse engineered Facebook's user IDs [video] (ccc.de)
626 points by sturza on Dec 30, 2019 | 243 comments



I'm pretty sure it's not just parties outside of FB; FB itself is implicated. Out of curiosity I paid to run ads to promote a page associated with a community group I'm trying to get up and running ("Ski the Great Lakes"); I paid about $10 to run an ad for a few days. It became clear after a day or two that most of the engagement was fake: many accounts from India, Egypt, etc., completely outside the targeted demographic and clearly not real.

A couple months later I'm still getting the odd 'like' from these types of accounts despite not running the campaign.

I didn't seek out any third party promotion, just used FB's own promotion tools.

(I also find it objectionable that one can run ads to promote commercial pages, but there's no way to do so for "groups"; if I want to spend some $$ to expand actual community engagement, for some non-commercial purpose, I can't. I can only promote a business.)


If you use the ad manager, try setting the demographic to people interested in a specific ski magazine, not in 'skiing' as a hobby. Notice how the potential audience reach is equal to the number of people who liked 'SKI magazine'. [0] Also, if starting out, limit the geography to the wealthiest urban centers - within a mile of somewhere like Foggy Bottom in DC or the Upper East Side of NYC - and if you're after college students, pick the colleges where skiing is a big interest, around Amherst, MA, for example. It's about getting creative to limit the audience to around 20,000 or 30,000 using the interest technique. Only show the ad once per day per user.

Use A/B testing with different ads and creatives at a TOTAL of $5 a day for a few days. Take the most effective ads and run them at $10 a day. If you have a page, post stories that will gain traction and interest. Boost them to this specific audience for $1 or $2 to see if people share them. If not, stop promoting them. If they gain traction, invest $20 to help them go viral. Facebook rewards high-quality ad content by making it much, much cheaper to run. How do they know? People click and interact with it. Find the ads that people interact with the most and throw the rest out. It only costs $1 or $2 to know if they are successful (a rough sketch of that comparison follows the footnote).

I keep my account disabled unless I need it to run ads. I feel dirty having reactivated. I'm going to disable it again.

[0] https://i.imgur.com/VLiou9O.png
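
For what it's worth, the "find the ads people interact with and throw the rest out" step boils down to comparing cost per engagement across variants. A minimal sketch of that arithmetic in Python (the numbers and field names are hypothetical; real figures would come from an Ads Manager export):

    # Hypothetical A/B comparison: keep the variant with the lowest
    # cost per engagement, then raise its budget.
    variants = [
        {"name": "ad_a", "spend": 5.00, "engagements": 42},
        {"name": "ad_b", "spend": 5.00, "engagements": 9},
    ]

    def cost_per_engagement(v):
        return v["spend"] / v["engagements"] if v["engagements"] else float("inf")

    winner = min(variants, key=cost_per_engagement)
    print(winner["name"], round(cost_per_engagement(winner), 3))  # ad_a 0.119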


thanks for sharing! are you available to consult on fb/social media campaigns?


I don’t think Facebook is paying Indian companies to fake like your ad.

Those accounts are probably trying to look legitimate so that when they like their own pages, the signals are trusted.

That said, Facebook should have refunded you for the obviously illegitimate clicks. Did you report it?


There are probably parts of Facebook that have an incentive to ignore fake accounts and traffic, maybe even encourage them, as they are being measured by increases in user count or some raw quantification of user engagement.


"When a measure becomes a target, it ceases to be a good measure." - goodharts law


You can set your demographic and still get many foreign likes. They are either using a proxy service or something similar. FB does little to discourage this, and I frankly feel it enables it to drive revenue.

These likes also pollute your account and skew your demographics, as they are not sincere. That ultimately makes it difficult to understand your buyer.


It's pretty awful; I used to work in ad-tech, and if we had this level of click fraud our customers would have had our head on a platter.


[flagged]


Nicely put.

“used to” is doing quite a lot of work there.


That does sound like a reasonable explanation. It's not worth pursuing a refund for a $10 campaign. I'd be doing that if I was serious about it, though.


IIRC, you are seeing the backscatter from other people's paid like campaigns. If the agents liked only the pages people paid for, it would be almost trivial to discount and delete. So the agents spray likes over a very broad range of items, mostly unpaid.


> but there's no way to do so for "groups";

That is intentional. Facebook had Pages, where users could subscribe (like) to become your audience. Then pages became too popular and they realized that they were losing potential advertising revenue, so they cut down that channel - now you have to pay to reach the subscriber base that you created yourself. They don't want to end up in a similar situation with groups, so they are limiting group engagement. FB wants moar money


Can you do a "link to website" ad and then link to your facebook group?


I did try this and I believe it didn't work for some reason that I can't recall. The recommended practice seemed to be to create a "Page" related to the group and then have a link from the page. But then I need to curate content for both, a total hassle.


I may be out of the loop on this, but do people actually click on ads, especially FB ads?


If I'm going to click on an ad, it's going to be a Facebook ad. Believe it or not I actually was a patient in a clinical trial I found out about through a Facebook ad.


I wrote about this a year ago on my blog: https://planiverse.wordpress.com/2019/01/11/advertising-to-⅔...

In short, the fake accounts had several factors in common:

- No or almost no public posts.

- The only public posts being an update of the cover photo and profile photo in 2017.

- The profile photos tended not to be headshots: a photo of a cup of coffee, say, or a landscape.

- Between 20-100 friends.

- Friends tended to be in places like India, the Middle East, or Southeast Asia.

- Their gender didn't match the name or photo about half the time: a "Martha" referred to as "he" by Facebook, for example.
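
If one wanted to automate spotting these, the factors above translate into a crude scoring heuristic. A sketch in Python - field names, region codes, weights, and the cutoff are all invented for illustration, not any real detection logic:

    # Hypothetical scoring of a profile against the signals listed above.
    SUSPECT_REGIONS = {"IN", "EG", "PH", "ID", "BD"}  # illustrative only

    def fake_score(profile):
        score = 0
        if profile["public_posts"] <= 1:
            score += 2      # no or almost no public posts
        if not profile["photo_is_headshot"]:
            score += 1      # coffee cup / landscape instead of a face
        if 20 <= profile["friend_count"] <= 100:
            score += 1      # narrow friend-count band
        if profile["friend_region_mode"] in SUSPECT_REGIONS:
            score += 1      # friends clustered in the regions noted above
        if profile["pronoun_mismatch"]:
            score += 2      # a "Martha" referred to as "he"
        return score        # e.g. treat >= 4 as likely fake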


Given the power of modern day corporate actors, this is a competitive tactic, morals notwithstanding. I thought it was interesting that one person in the video did not view this as a form of fraud. But I have difficulty seeing how this is not fraud. It doesn't seem likely that everyone knows what is authentically or artificially amplified. Even if people do view the internet social atmosphere around them with caution, it's difficult to find who exactly is behind the amplification and to what extent. On other platforms such as reddit I suspect that there has been further political activity.

For nation-state actors, it is a modern form of agit-prop that is probably superior in both its effectiveness and its ability for fine-grained control. What's interesting is how this approach can utilize the idea of proxy control on so many fronts. In the US, the citizens themselves are being gamed as proxies for foreign voting power. Malicious actors hide behind proxy servers as well, which adds another avenue of difficulty in dealing with the problem. And then there are the botnet overlords.

An instance of a normal person posing with their own identity once is its own bit of fun, but now that commercial and then national actors are involved it's a whole different ball park. If your fridge supports the other candidate and products from its superior overlord should this be the advent of bot rights or total bot enslavement? I partly jest.

I did a quick Google Translate and read the Vice Germany article they referenced and it's an entertaining read. I recommend it. If you want to look it up, the title is "Die Applausfabrik: So funktioniert die Industrie hinter gekauften Likes und Followern".


Intriguing. On the other hand Facebook cracks down on new users and requires them to jump through many hoops to access the service.

My wife finally gave in and signed up to Facebook in order to take part in a webinar she had already paid for that was staged on the platform, only to get her new account blocked for using a different device from the one she signed up with.

They wanted her mobile phone number, photo and her ID (think passport).

She didn't want Facebook spam on her phone (she already gets bombarded by WhatsApp notifications), and sending her photo again didn't help. She got banned for good. Her IP was also blocked, so she couldn't sign up again.


I've been curious about the ?fbclid=lkjwejowijtoirj14j blah blah that fbook adds to URLs when you click over. I think this ID is always the same(?) - so it's another layer of fingerprinting.

I don't use fbook enough for it to be a big issue - and I had assumed it was always a unique string so fbook could track ads, but I now think it's not different each time, and that it's another identifier that can be used by god knows how many groups and people to track me - while I am employing several other systems to stop tracking - this is crazy - is there a way to strip this automatically?


FBCLID is "Facebook Click ID". It is sent to third party sites as a URL parameter so that if an advertiser complains about fake clicks or something they can provide the IDs and the Facebook advertising team knows which ones they are talking about.

Analytics companies (I think Google and a few others) can pay Facebook to get data on your behalf like what page it came from and the demographics of the visitor.

Google AdWords does this as well on paid clicks with GCLID.


This is terrible. Someone with access to server logs at a few web sites (or someone that loads a third party resource (ads, analytics, sharethis buttons, etc) onto a few web sites) would be able to know exactly who is coming from fbook and viewing different types of content.. no inside access to fbook or their data required.

I can't believe nothing is auto-stripping this url addition between firefox and ublock origin - must find and tell others.


Note that the ID changes for each "click" - hence "Facebook CLICK ID" - so you can't tie different clicks back to the same user without access to Facebook's data.

But yes, an extension to strip these (along with `gclid` from Google, `reddit_cid` from Reddit, and so on) would be very welcome.
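
The stripping itself is trivial once you know the parameter names; a sketch of the idea in Python (the parameter list covers the obvious suspects and is not exhaustive):

    # Remove known click-tracking parameters from a URL.
    from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

    TRACKING_PARAMS = {"fbclid", "gclid", "reddit_cid", "utm_source",
                       "utm_medium", "utm_campaign", "utm_term", "utm_content"}

    def strip_tracking(url):
        parts = urlsplit(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query)
                if k.lower() not in TRACKING_PARAMS]
        return urlunsplit(parts._replace(query=urlencode(kept)))

    print(strip_tracking("https://example.com/page?id=7&fbclid=AbC123"))
    # -> https://example.com/page?id=7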


thanks for this - I was confused about it since I have seen a few that looked very similar (the first 12 characters were the same). I did a quick, slightly deeper check into my history and see that I do have 2 URLs with the exact same string, but it appears, at least in this one case, that it's a click on a short link that forwards (dj-m.ag to djmag dot com) - so the similarity was confusing, and seeing doubles freaked me out. Now I see at least some differences in several - so unless there is an easy pattern match, or, as the title of this HN story suggests, they can reverse engineer fbook clicks - it's highly concerning to be sure, since so many places use third-party scripts and assets.

fingers crossed this is good and it gets better.


I've seen this tracking param when people shared links with me that they got from Fb.

There is a Firefox extension ClearURLs, which strips some tracking parameters from the url.

From what I remember, if you click in Fb on a link, it redirects first to an internal tracking url, and then to your final link. This is not limited to Fb only.


Yeah, this sort of thing is far from unique to Facebook, and is why it's been my practice for many years to never just click on a link. Instead, I copy it so I can strip out all of the unnecessary decorations and trackers before browsing to it.


Probably an HTTP request tracking ID that can be used for debugging the request.


It's a "Facebook click ID" and it's likely intended to evade Safari's "Intelligent Tracking Prevention" on third-party cookies.


This. We knew this was coming when Safari added that.


While I don't use FB, this sort of thing is why I no longer give any weight at all to "likes", "upvotes", or product ratings.


One interesting thing for me was that the majority of buyers seem to be small shops and individuals. Unfortunately they don't present any hard data on this, but their examples and one answer in the Q&A at the end really hint in that direction. I wonder if big brands buy likes in significant numbers as well?


The majority of cases is often a poor measure for such things -- what you're interested in is the significance (in the general sense, not the increasingly fraught statistical one) of the use.

There are more small businesses than large ones. But large business practices are, for the most part, far more meaningful. Net financial or revenue impact might be a better (though much more difficult to acquire) metric.

It's kind of the "space aliens land on Earth, who do they encounter" problem. Most people are located in cities, but cities occupy a small fraction (~<1%) of Earth's surface.

By statistical likelihood, your alien is most likely to encounter ... a fish (or plankton). If they find land, they're most likely to find a rural area, and hence rural dweller. Not because that's a statistically accurate sampling of human population but because it's a statistically accurate sampling of human population areal distribution.

So: if you look at like-buying campaigns, you'll find, because there are far more small businesses, many more small businesses participating.

If you looked at other metrics -- say, bought likes distributed among all commercial accounts -- you'd probably find the weighting swinging far more toward at least moderate-to-large sized businesses.

Though: for a small business, some early-stage "growth hacking" might be a modest budget line, a plausible-sounding practice (not necessarily true, but appearing to be true), and something a likewise ethically challenged black-hat SEO marketer could sell.

Then there's the possibility of joe-job likes -- buying fake likes for a third party in order to present them as fraudulent. Possibly not widespread, but possible, and given the difficulties in attribution, something that's hard to demonstrate one way or the other.

Big brands:

- Likely have more effective tools.

- Have other ways of promoting online content without going through fake likes. (Paid "influencers" being one widely-practiced option.)

- Might be aware of the potential downsides and hence avoid this.

- Are a much smaller fraction of "like" campaigns, and have a higher "organic" (or at least organic-appearing) rate of user engagements.

There are numerous reports of ... large influencers in the political space ... paying $10k - $100k amounts monthly for social media promotion.


I mirrored a copy of the video to Vimeo for those eager to see it who are having problems with upstream. I suspect they're having trouble dealing with the HN effect.

https://vimeo.com/382103218/bbe2fe74c3


They already mirror their videos on YouTube...


FB generates a different user ID for each FB application ID.

"Facebook issues app-scoped user IDs for people who first log into an instance of an app, and page-scoped user IDs for people who first use a Messenger bot. By definition, this means the ID for the same person may be different between these apps and bots."

https://developers.facebook.com/docs/apps/for-business

So the 10 billion number the researchers quoted does not necessarily represent individual users.


Hi, this is actually incorrect. Facebook uses internal IDs (these are the ones used here; they are valid globally) AND external IDs for apps and their connected pages. The external IDs have a completely different format and cannot be used to access a profile in the web browser (which is what we were able to do for all of them). We cannot completely rule out that Facebook assigns two IDs to the same profile. However, we think this is highly unlikely. We tried to check for that as far as possible. For example: if I search for a profile's name that I found via a random ID lookup, and then check that profile's ID, it's the same ID that was used in the lookup. We couldn't try this at scale though.


It's pretty simple to validate. Create two different FB developer accounts. Then create a Login with FB type app for each account. Then use a third FB account to login to each different app and use the FB Graph api to view the user id in the tokens. It will be different.
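
A sketch of that check in Python against the Graph API's /me endpoint (the two tokens are placeholders for user access tokens obtained through each app's login flow):

    import requests

    TOKEN_APP_A = "<user token issued via app A>"  # placeholder
    TOKEN_APP_B = "<user token issued via app B>"  # placeholder

    def graph_user_id(token):
        resp = requests.get("https://graph.facebook.com/me",
                            params={"fields": "id", "access_token": token})
        resp.raise_for_status()
        return resp.json()["id"]

    # Same person logged into two apps: app-scoped IDs will differ.
    id_a, id_b = graph_user_id(TOKEN_APP_A), graph_user_id(TOKEN_APP_B)
    print(id_a, id_b, "app-scoped" if id_a != id_b else "same ID")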


But since they only query facebook.com/$ID and facebook.com/profile.php?id=$ID, aren't they only looking at one and the same "ID space", and hence only counting once?


What you're mentioning is actually a very recent change to the Facebook API, so it doesn't relate to the work done in the presentation.


This FB ID strategy has been in place for at least 2-3 years.


Very curious about what Facebook's response is to this (outside the response mentioned in the talk, which is clearly not sufficient).

Also, if there are any FB employees on here, what do they think of their employer still enabling massive disinformation, astroturfing etc?

To be clear, I'm not blaming individual employees. Just honestly curious how they deal with these issues on their personal moral compass.


That's kind of like asking how Vint Cerf and Bob Kahn feel about intelligence services tapping internet traffic.

Not very useful.

A more cogent question would be "For anyone in Facebook working on this problem, do you feel like sufficient organizational resources are being spent combating it? Is there low hanging fruit, or are we into the whack-a-mole stage of any popular platform with a financial incentive to cheat?"


If you are a developer, try using https://github.com/instagrambot/instabot (Instagram is part of Facebook). You will be blocked quite fast. For research and fun I did things more sophisticated than instabot and still got blocked. So Facebook as a company is researching this area and is actively using whatever is possible. I'm pretty sure they will get much better in the future. In the end they want companies to buy ads, and if the competition gives better results than they do, they will lose.


Use selenium or puppeteer with stealth plugins and a real user agent and I suspect it will be A LOT harder for them to block you.


Selenium is still relatively easy to detect via JS, but on the other hand that does require some dedicated[0] effort, so even just overriding the user agent might be enough to work for a relatively long time. Another important thing is rotating over different IPs that appear realistic[1].

[0] dedicated in the sense of "let's include some script specifically designed to detect Selenium".

[1] for example, it's kind of unlikely that a non-bot visitor is using an IP from a range used by Amazon instances. Not sure how often this is used, but I assume that most bot-detection systems would use that information at least as one of the metrics.
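
As a concrete example of [0]: a stock Selenium-driven Chrome announces itself through navigator.webdriver, which any page script can read. A minimal demonstration (assuming chromedriver is installed):

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://example.com")
    # True in an unmodified automated session - one common tell
    # that bot-detection scripts check for.
    print(driver.execute_script("return navigator.webdriver"))
    driver.quit()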


I have written a browser extension.


how sophisticated is it? a browser extension sounds pretty high-level and limited to me. does it just use the host browser's user agent? or is it able to spoof multiple legitimate user agents, connect to fb via proxy servers, handle queueing, rotation, retirement, etc.? in which case how is a browser extension more useful than [scripting language of choice]?


There are a lot of JS-based detection techniques that rely on things being or not being available. By using an actual browser you have JS/WASM environment that is identical to what is expected from a real user. By using [scripting language of choice] you would need to emulate everything that a given bot-detection system tries to check.

From my very limited experience there are two main categories of websites:

1. Using curl and/or [library of your choice] in a [scripting language of choice] is enough

2. Forget about not being detected without a full blown browser unless you want to spend endless hours trying to emulate whatever is needed to be emulated and also willing to burn some accounts and IPs in the process.


A lot of questions here. In short, I tried to imitate how I browse Instagram myself, but nothing too sophisticated. I think higher sophistication might have helped a little bit. Scripting-based solutions are detected in minutes; the browser-based solution was blocked only after 20+ hours. I have not used any proxies; while I think it can be done, it does not solve the problem.

Answering the last question: it depends on what you mean by a scripting language. Let's assume it is Python; then you have two choices: imitate a browser, or risk being detected as a bot quite fast. Writing a browser extension is quite easy, and you can imitate a real user quite easily. The only problem you will have is imitating a real human being in a way that does not match Instagram's bot-detection algorithm.


This doesn't tackle the click farms / click workers though...


You don't know that


If they still exist and still work...


I don't know, I would like to hear Vint Cerf do a talk on intelligence services. Particularly if the second half of the topic is spitballing solutions.


There are videos of him talking about the inception of the internet. In fact, he explains in detail how they wanted to be able to have mobile military units that could relay data back to HQ in real time in enemy territory, and that was why ARPA created the internet. I don't want to misquote him, but essentially he said something to the effect of: it isn't like you can ask your enemy to set up a network for you just prior to invasion.


Vint ain't never been the Freedom Rider, he's the cop (so to speak).


I think there's already an answer to this. In a large organization, such a pervasive and difficult problem can't be solved in a direct fashion, because there's always some weird alternate consideration that keeps it from getting tackled. Only a top-down initiative from C-level implemented across the organization has a chance at rooting it out, and if such an initiative existed we'd have heard about it and gotten yearly progress reports.

I think what we're seeing instead is the same as counterfeits or promoted search results at Amazon: solving it could become an existential threat to either popularity or profit, so they're just "managing" it as BAU.


I guess one alternative explanation would be that it's a hard problem, and Facebook doesn't know how to solve it. Or that current efforts aren't having good results.

In either case, Facebook has a financial incentive to say nothing, instead of publicizing lackluster results.

Or, my guess, they'd rather keep any news about the existence of this off the front pages, as broader knowledge of its existence by their (not as technically informed) advertising customers only negatively impacts Facebook.

Kind of how a beachfront town wouldn't want to advertise the fact that Great White sharks exist... at all.


I don't buy it. We're building AI that can drive, classify scans better than professional doctors, and translate - but we can't figure out fake accounts? Come on.


Driving, diagnosing, and translating are not good comparisons, because none of those are adversarial.

The problem here is that adversaries are adjusting to whatever measures are put in place. In that respect it might be more like winning at Chess or Go. Which computers can do, but it’s decidedly non-trivial.

And here’s the kicker: If we postulate some hand-wavium big-data/machine-learning/AI that can detect bots and adjust when the bots evolve, why can’t we also postulate some hand-wavium big-data/machine-learning/AI that can run bots and evade the bot detection?


Driving often is adversarial, and no AI can actually drive yet (outside of some very limited circumstances on a small number of roads).


It's probably not that they can't classify fake accounts, but that doing so at scale requires too many resources, and it's more economical to employ only simpler techniques that weed out most of the bots and live with the remaining few.

Then, if they also happen to benefit in any way from the existence of those fake accounts, it becomes even harder to justify costly filters, and leads to sufficient-enough-to-not-look-obviously-weak grade filtering.


If it's so problematic, they shouldn't have released the feature(s) in the first place. They unleashed a firehose (a hose that shoots fire) without any controls to turn it down, or off, or even to aim it away from the hospital full of babies. Controls that were well-known as common and standard for decades before Zuckerberg was first rejected by a girl.

Facebook is entirely out of control as a social force, and PR agencies are the only thing holding back public perception.


I'd assume these fake likes improve the metrics of most managers... "look at the improved user activity since rollout of X feature!"

There's probably not much incentive to investigate them too deeply.


Or perhaps slightly more temporally related, how Vint Cerf feels about ISOC selling .org :)


I'm an engineer who recently started at Facebook. I can't speak for the org as a whole, but during the team selection process, the main themes were ads teams and integrity teams (fake account, scraping, security, etc. all fall under that umbrella), so it's something Facebook is taking as seriously as making money.

> what do they think of their employer still enabling massive disinformation, astroturfing etc?

This isn't a behavior management wants, and there's a lot of internal effort to reduce it.

Something the public doesn't get but engineers do is it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.

> Just honestly curious how they deal with these issues on their personal moral compass.

I have no qualms here. Roads are used during thefts, but no one asks construction workers how they sleep at night knowing the roads they build might facilitate crimes. Spammers gonna spam, but that doesn't mean we can't have any online platforms.


I was hoping a Facebook engineer would show up in this thread. If you don't mind, I have a couple of questions.

> Something the public doesn't get but engineers do is it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.

I'm curious what your thoughts are on other companies (Twitter, Spotify, etc) disabling political ads for this very reason. Facebook has not. It's a given that you can't manually review every political ad -- so why allow them at all, if their disinformation has negative real-world implications? I don't buy Zuck's argument about free speech.

Secondly, what would Facebook do if a grassroots movement started to put pressure on advertisers until Facebook cancels political ads? What if this movement recruited real users to click on lots of ads in their own feeds, with the goal of disrupting advertiser ROI with difficult to detect garbage clicks? Would advertisers get upset? Would Facebook have any recourse?

Thanks for indulging me on this. The idea came to me in the shower and I'm not sure if it's brilliant or stupid.


Disclaimer: I don't work for facebook. I don't speak for my company's policies either.

- You overestimate the zeal with which the grass-roots folk will engage in that behavior, especially if they know it is undercutting fb model. Ad tech is also evolving everyday meaning a reCAPTCHA like functionality around whether you are a real engagement vs these clicks aren't very far away. You should also look up the articles about # of twitter/reddit contributors (including likes) compared to the US population for example (VERY minimal).

- In theory you can politicize anything: Do you think it is possible to talk environmental/social/civic controls etc. today without having a political bent? Meaning the political ad blocking is probably not as comprehensive as the twitter block portrays.

- For all the negativity politics gets, this is something that impacts us very much in our day to day life. Shutting it out completely is probably more of a problem than working with the system. I don't think we can comprehend the macro impact in this subject: As it stands today, some companies saying no just means more money for other corporations.

As you can see I do have a very practical (read cynical) view towards how much mental bandwidth people have towards the small slights in life (which may have a giant impact later) compared to their day to day requirements. As lacking I am in solutions, I don't believe it is sledge hammer (stop ads) OR crowd sourced (let's all click on all ads).


> a reCAPTCHA like functionality around whether you are a real engagement vs these clicks aren't very far away

That's gonna be a no for me dawg. If it turns out to be content I want to see badly at all, I'll search for statistically improbable strings in the part that I can see so I can find an alternate source (/r/savedyoucompletingacaptcha?).


> so why allow them at all, if their disinformation has negative real-world implications? I don't buy Zuck's argument about free speech.

Why don’t you buy it? If you ban “political” ads you are going to start drawing a lot of arbitrary lines. Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?

All of these are one step away from direct advertisements for candidates and are very political topics for many people. Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.


You make a good point about how to actually enforce this. Apparently, Washington state did ban political ads on Facebook. As a testing ground for such a policy, the results haven't been great.

https://www.theverge.com/2019/10/31/20941917/twitter-politic...

HOWEVER, this appears to me as a regulation failure. We know Facebook doesn't want the ban, so their motivation to comply is limited to the clarity and sharpness of the teeth of the legislation. And they aren't very sharp.

Regarding:

> Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.

I don't agree with painting Twitter as just wanting to "claim" they have done so. They appear to be making a true good-faith effort. Check out their policy:

https://business.twitter.com/en/help/ads-policies/prohibited...

https://business.twitter.com/en/help/ads-policies/restricted...

> Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?

For each of these examples, there is a clear way to apply their policy based on the content of the message. Is it perfect? Probably not. Will it totally kneecap political ads (by 80%+)? I believe it will.

> All of these are one step away from direct advertisements for candidates and are very political topics for many people.

I would argue that your point is too academic. If political ads are reduced by 80%, even though there are still political-adjacent ads (that aren't funded by a political group and don't reference a candidate or initiative), then the policy would be a wild success.


I think the general theme I see is that folks expect companies to solve problems that their governments should be solving. And when they don't get it uniformly, everyone's mad.

I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places). I have been thinking on and off about the role of government in the current world and sadly I can't see a place where it can be as tiny as people want.


>I think the general theme I see is that folks expect companies to solve problems that their governments must be solving. And when they don't get it uniform, everyone's mad.

People want an outcome and don't particularly care where it comes from, public vs. private. If one fails, they'll push on the other to find a leverage point. Example: A pundit can say some truly awful and damaging shit and it's legal under the law. But if you target their advertisers, that's the leverage point that matters.

>I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places).

A company should be free to push its morals on you, given that the market allows it, and their morals aren't illegal. Right?

Personally, in the current environment of profit-at-all-costs capitalism, the major flaws seem to be incentivizing short-term-thinking and negative externalities. When a company flexes morals that appear at odds with short term profit (e.g. Twitter), I tend to assume they are actually acting out of self-interest but are better able to grasp the long vs. short-term incentives, for whatever reason.


I hate Facebook: its user-tracking practices, its attention hacking, and its terrible UX. I only use Facebook because others post events on it, and I can't convince those people to use open alternatives.

How can you feel proud about such a product?

PS: Yesterday I got an email from Facebook saying that I had 4 messages waiting for me. I opened the app, it showed a little balloon with "4" in it. I clicked it, and there were no messages ... sigh.


>Yesterday I got an email from Facebook saying that I had 4 messages waiting for me. I opened the app, it showed a little balloon with "4" in it. I clicked it, and there were no messages

I also get this, but now it's always stuck at some arbitrarily high number. In the past few years I've reduced my FB usage to a couple of minutes per month, down from a few minutes per day. I'm sure it's underhanded tactics like these (tricking a billion dormant users) which uphold their claim of 2 billion "active" users or whatever bullshit number it is.


Such notifications (email, text) can be disabled in the settings.

And you can visit the app/web when you feel like, not when they want.


You can disable receiving emails in general, sure, but you can't disable the fake/lie notifications or the daily "please pay us $XX to promote your page"


Frankly I don't understand this kind of "call to action" directed at company X employees.

First of all, it is not like FB employees are pushing people into gas chambers in Dachau. FB usage is not obligatory, which is what I keep telling everyone who complains about FB censorship or privacy abuse - I don't have an FB account because of that.

In the same way we might ask Coca-Cola employees (say, delivery truck drivers) to step out, because drinking Coke is bad for one's health. Or HSBC employees, because HSBC was laundering narco cartels' money? Or John Deere employees, because the company forbids farmers to modify tractor software?

All of those practices are immoral and bad, but why chase the weakest, whose income, ability to pay rent, etc. depends on the employer? Why not target those, who are really responsible for that what is happening and who make huge money thanks to that?

I would say it makes much more sense to vote with our money, avoid services and products from companies we consider immoral.

Publicly discourage people from using such services and products, publicly stand against the CEOs and shareholders of those companies, spread the knowledge about their personal responsibility for this kind of behavior - this is not that difficult and actually can make a difference. Imagine the PR outcome of a conference that invited Mark Zuckerberg, but that no one else wanted to attend. Mark shows up at, say, TED, and all the people leave the room during his talk? In the media-driven world this would surely go "viral", and even Mark with all his money couldn't ignore that easily.


> privacy abuse - I don't have FB account because of that.

This is just the obligatory reminder that not having a FB account does not stop FB from spying on you.


>FB usage is not obligatory, what I keep telling everyone who complains about FB censorship or privacy abuse - I don't have FB account because of that.

Wow are you saying that people should take responsibility for their actions and should let others choose freely what to do?


Shadow profiles, anyone? The idea that Facebook does not attempt to spy on former or non-users is laughable.


Sounds like privilege.

/s


> Frankly I don't understand this kind of "calls to action" directed to company X employeese.

Because if you make enough noise, and affect the share price enough, stuff changes.

Also, a lot of developers seem to think that their actions have no consequences. This includes people at Facebook.

When it is demonstrated, in a very public way, that something you have designed, or your team has designed, is a massive pile of shit, it leaves a mark. Hopefully for the better.

Having been through something similar, it certainly changed the way I make prototypes. Security and anonymity come first now, not last.


>First of all, it is not like FB employees are pushing people into gas chambers in Dachau,

WTF ?

> why chase the weakest, whose income, ability to pay rent, etc. depends on the employer?

I assume you are not talking about FB employees which is a shame because it would be nice to see the same employees take some responsibility for the code they produce.


A programmer takes responsibility for their work but they are not responsible for what you think they are.

A programmer doesn't get to determine what the company is building unless the company is very small, and even then it is rare.

Target product managers.


> A programmer doesn't get to determine what the company is building

But a programmer does get to determine whether or not they'll continue to work on what the company is building. Just saying...


Why not do both? Shame the company and shame the users.


“I was only following orders.”


>First of all, it is not like FB employees are pushing people into gas chambers in Dachau

I laughed at this way louder than I'm willing to admit


Good lord, I know tech workers have a reputation for abhorring responsibility, but this defense of amorality is breathtaking.

Don’t bother the engineers...no, like this post so people will walk out of Zuck’s TED talk. Right out of the Onion!


I'm assuming the ones in denial will say something like "It's up to the individual to evaluate the quality of the content e̶v̶e̶n̶ ̶t̶h̶o̶u̶g̶h̶ ̶w̶e̶ ̶e̶n̶g̶i̶n̶e̶e̶r̶ ̶o̶u̶r̶ ̶p̶l̶a̶t̶f̶o̶r̶m̶ ̶t̶o̶ ̶g̶e̶n̶e̶r̶a̶t̶e̶ ̶a̶n̶ ̶i̶n̶s̶t̶a̶n̶t̶a̶n̶e̶o̶u̶s̶ ̶e̶m̶o̶t̶i̶o̶n̶a̶l̶ ̶r̶e̶s̶p̶o̶n̶s̶e̶ ̶t̶o̶ ̶m̶a̶x̶i̶m̶i̶z̶e̶ ̶e̶n̶g̶a̶g̶e̶m̶e̶n̶t̶."


Every time yet another scandal comes out of Facebook, Google, etc... I think that the 1985 film Real Genius still holds up today.

The TL;DW version is that a young, idealistic tech guy learns the hard way that not everyone is as idealistic as he is about technology when a greybeard informs him that the project he just finished is a weapon.

https://www.imdb.com/title/tt0089886/

Silicon Valley needs more Laszlo Holyfelds.


Why would they even have a response? The advertisers don't really care.

Everyone knows that most FB traffic is garbage, but it's easy to sell garbage. And in a few limited circumstances the micro-targeting works.

Even if it turned out that most FB/IG traffic was fake it wouldn't change anything until someone comes up with something better.

You have to remember how the ad ecosystem works. Until FB came along there was nothing but display and search and the gap was huge. Facebook comfortably occupies the entire middle ground between display and search these days. Even if half the traffic is totally bogus it's still going to be better than display.

Every big company pretends to work on these problems for image reasons, but they don't really care. It's always a "special team dedicated to" whatever the flavor is that reports to a kangaroo court.

If companies actually cared about these things they could use their immense engineering talent to fix them, or just prevent it in the first place. But BOTH of these mean less revenue at the end of the day. The only way companies would actually prioritize these things is if it had a positive impact on revenue somehow, but it doesn't. Never will.

I'll never understand where the idea came from that Facebook had some kind of moral compass as a company beyond making money.


Any thoughts on an alternative approach they could use - what would it look like, etc.? Would there be any undesirable consequences of a different approach?


Approach to what? User registration? Tracking fake accounts?


> Very curious about what Facebook's response is to this (outside the response mentioned in the talk, which is clearly not sufficient)

i.e. what would be sufficient in your view?


You could go down the route of what Slashdot did back in the day. "Likes" were something you could only give out after getting them, so you merely passed them along (like money in a market, sorta). And getting those likes was difficult and hard to game. It was a sort of "pay it forward" system, with random batches of "likes" seeded into the community based on some measurable behavior, after which the community would recycle them internally, the like being a token of "reward".

Whereas with current social media, likes are given to users without any care other than "hey, you seem to be a valid user, let's allow you to generate large amounts of likes".
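
A toy model of that Slashdot-style scheme, just to make the mechanics concrete (all names and numbers invented):

    # Pay-it-forward "likes": tokens are seeded in small batches to users
    # picked by some behavior metric, and must be held to be given.
    import random

    balances = {}  # user -> unspent like-tokens

    def seed_batch(eligible_users, batch_size=5):
        for user in random.sample(eligible_users,
                                  k=min(batch_size, len(eligible_users))):
            balances[user] = balances.get(user, 0) + 1

    def give_like(from_user, to_item):
        if balances.get(from_user, 0) < 1:
            raise ValueError("no tokens: likes must be earned before given")
        balances[from_user] -= 1
        to_item["likes"] += 1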


Gotcha. Well, I'm not really sure. Maybe just making likes private or banning likes (as suggested in the talk).

Or banning political advertising completely (see the part in the talk on political parties in Germany buying likes).

Or something that completely destroys meddling and fraud and is observable by the public. The "we are working on it behind the scenes"-response is just not sufficient in my view.


What is surprising is your assumption that Facebook isn't aware of this. Nay, that Facebook was not designed for this very purpose. Facebook, which employs some of the top data analysts and AI specialists, actively building tooling on the cutting edge of those fields, and which is funded primarily by commercial use of its platform, with for-pay 'reach' being the most obvious monetization scheme.

I don't see how you could be missing that everything about Facebook incentivizes nonsense such as like-factories.


This kind of poisonous invitation to grandstand might as well be grandstanding in and of itself, and all of these performative paper ethics makes me want to puke.


> their personal moral compass

"It is difficult to get a man to understand something when his salary depends on his not understanding it"

Personal morals are just that: personal. Two people can have 100% conviction in their moral position and be totally opposed, it happens all the time in social issues.

Moral beliefs also happen to line up with social circles and group interests suspiciously well too (hence the Sinclair quote).


If Facebook were to defeat astroturfing, all advertising firms would just promote their new social media network on all the major news networks.

Our economy is founded upon mutual relationships of grift and greed, even if Zuckerberg were to disagree with his own platform, he would not be able to change it.


what do you think about the overseas slavery that made your clothes and your phone?

to be clear, I'm not blaming you as an individual, just honestly curious about your moral compass.


[flagged]


For a lot of employees, Facebook pays enough that paying your rent should not be an issue after a couple of years working there if you're in any way financially responsible.


That's what throwaway accounts are for.


ok... but this is kind of a problem that Facebook wants to fix as well, so where is the conflict with "Zucc"?


Oof - what is an "ordinary guy" and why is that person not Asian?

>"you instantly think of mobile phones strung together in multiple lines in front of an Asian woman or man. What if we tell you, that this is not necessarily the whole truth? That you better imagine a ordinary guy sitting at home at his computer? "


The sentence, though worded a bit awkwardly, is not implying there is anything strange about Asians, but it is implying that anyone who has dozens of phones in multiple lines is odd.

The many phones are what make the person "not ordinary."


"Likes" are as trusted as the platform that created them.


We've become quite adept at pointing out the conflicting incentives, scaling problems vis-a-vis mobs, and hidden rights violations users agree to without realizing the implications. There is, however, another aspect to this we don't talk about much.

Who suffers the most here? It's obviously not the social media platforms. They adapt their code and move on. It's not the users, at least in the long run. The problem has been identified and resolved. It's not the people selling likes. They get punished, bail out on some fake accounts, and get new ones. Even if you could somehow ban individuals from using platforms, there are a million more people willing to create fake likes where those came from.

It's the poor, as always. The people who know the system is rigged, know the system has to be gamed in order to make money, and are desperately looking for a way to be competitive and get ahead. So they buy some fake likes, then get destroyed by the social media companies. Perhaps they've already invested a lot of time in their presence before they got desperate. In either case, they're not running fake ids. It's just them. And they don't know enough tech to move on. They get slammed for life for making a poor moral choice. Why? Because they're easy to find and it's easy to punish them.

That's whack.


> It's the poor, as always. The people who know the system is rigged, know the system has to be gamed in order to make money, and are desperately looking for a way to be competitive and get ahead.

> They get slammed for life for making a poor moral choice. Why? Because they're easy to find and it's easy to punish them.

This observation has strong parallels with how the poorest are the lowest rung of the drug trade - on the street - and bear the highest price of it - death or imprisonment - while the higher-level traffickers usually get away with impunity. But the populace feels good because of the disheveled mugshot of the street drug dealer they saw on the evening news.


The drug analogy is very interesting. As you may know, 100-ish years ago, drugs were legal. Some people frowned upon them, some did not. Then there was a great moral uprising followed by legislation, then we had the War on Drugs. Finally the pendulum seems to be swinging the other way. So we've seen most of a full cycle.

Continuing your low-level drug dealer example, if that guy that runs the car wash down the street gets banned for using fake likes, why shouldn't he? It was a bad thing to do! We can see him, we know him, he did a bad thing and deserves his punishment. Not only does he deserve his punishment, we should shame him if we can in order to discourage others from contemplating doing the same thing. Maybe if we increase punishment we can see less of that sort of thing around here.

Ever hear how a lot of social/internet companies got started? Sock puppets, fake likes, generated social proof, paying off influencers, and so forth. All the things the little, common folk aren't supposed to do today. It seems there is a moral code for the common folk and a separate one for our betters. All animals are equal ...

This story, where we find all these millions of fake people and likes, is a "drugs on the table" story: big flash, looks like progress is being made, we have heroes and villains, feats of strength and daring, and people can feel like the net is somehow safer. Then things can go on as usual.


Another recent example of this are the ICE raids on slaughterhouses rounding up undocumented immigrant workers.

I've yet to hear of a single executive of those companies arrested and put on trial for their employment practices. It's almost as if changing the employment practices wasn't the objective.



What is worse than being popular? Perhaps bought popularity. A team member decided to boost an Instagram post (same FB platform) expecting a few tens of likes to start the virtuous cycle. We got a few thousand likes. It is our most embarrassing post ever, and the worst part is that I haven't deleted it yet. FB and Instagram are black mirrors that make me wonder who I am and how low I can stoop.


A side note: I really like the linear video timeline thing ccc does.


I think the line between what is ethical and what isn't is very blurry, and there is almost no argument that advertising is ethical.


Any textual write up available?


Honestly I was underwhelmed. They started by scraping a site that offered paid likes for a list of its "clients", i.e. the pages to be liked. They talked to a couple of people who got paid to like pages. They then reported that platform to Facebook, which blocked it (a dick move towards the workers they interviewed, imo). Then they noticed that Facebook profile IDs are incremental, so they deduced a profile's creation date from its ID using "interpolation", even though that didn't account for various spurious points. They used that to look at the age distribution of the profiles liking certain pages and like services (recent profiles -> fake, older profiles -> genuine), and that's about it. Not to be cynical, but I don't find this groundbreaking in any shape or form.
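
The interpolation step, at least, is easy to reproduce: if IDs are handed out (roughly) monotonically, a handful of profiles with known creation dates lets you estimate the date for any other ID. A sketch in Python - the anchor points below are invented; the talk calibrated against real profiles:

    # Estimate a profile's creation date from its numeric ID by linear
    # interpolation between known (ID, date) anchors (hypothetical here).
    from bisect import bisect_left
    from datetime import date, timedelta

    ANCHORS = [  # (user_id, creation_date), sorted by ID
        (1_000_000_000, date(2009, 1, 1)),
        (100_000_000_000, date(2013, 6, 1)),
        (100_010_000_000_000, date(2017, 3, 1)),
    ]

    def estimate_creation(user_id):
        ids = [uid for uid, _ in ANCHORS]
        i = max(1, min(bisect_left(ids, user_id), len(ANCHORS) - 1))
        (id0, d0), (id1, d1) = ANCHORS[i - 1], ANCHORS[i]
        frac = (user_id - id0) / (id1 - id0)
        return d0 + timedelta(days=frac * (d1 - d0).days)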


Is there a way to remain anonymous but verify that the person is a human?


There is only one metric that really matters when it comes to social media, and that is comments. Those are the actual relevant users.


A lot of them are bots or paid astroturfers, also.


Sure, and easy to spot; that's not the point.


I see plenty of bot comments on many sites


This may rub some folks raw and be pessimistic, but half a century on this planet has taught me that for any solution which relies on "doing the right thing" (ethics) but does not have any rules, laws, or other repercussions for abusing it, some entity (person or company) will use your altruistic belief to their own gain, ethics be damned.

Just Google "buy reddit upvotes" and the internet is full of folks who will sell you a million likes for cheap - in my mind it's just a modern digital con (confidence game). Some of y'all may remember having created self-signed OpenSSL certs with the Snake Oil, Ltd company name in them. :) Even Wikipedia has a page for "there's a sucker born every minute", it's so institutionalized.


I think the more important point here isn't ethics and punishment, but incentives. The larger a system is and the stronger it incentivises something, the more effort you're going to have to put into enforcement.

This is why, in general, ideas that involve lots of actors changing their behavior contrary to incentives, instead of changing the incentives, are fundamentally flawed.


+1

Also, I feel like every system can be abused if the incentivised metric is defined poorly.

Example: almost every advertising KPI.

1. A brand wants to sell a product (incentive: sales).

2. Agency sales people want to reach a particular KPI - budget in this case; the higher, the better.

3. An ad tech company will serve ads based on a KPI agreed with the agency (e.g. CPM, impressions).

It's very easy to satisfy 2 and 3 without bringing any additional value to 1, for example by applying less strict viewability metrics (is 50% in view "visible", or 1%? should we pause the video ad below the fold? what is an impression, exactly?).

The original intent (sell more units of product x) is gone. This is due to systematic issues with the industry, not a single specific company. But mainly, this is due to the fact that the established incentives represent only a chunk of the reality and can be followed without focusing on the actual problem. The overlap between actors' goals is poorly defined, small and often doesn't exist.

In a nutshell: if we define a metric serving as a shortcut for our goal, people will find a way of improving the metric, but rarely a way of reaching the goal. A broadly defined incentive is better than punishment.
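
To make the viewability example concrete: the same serve log yields very different "impression" counts depending on which definition the KPI picks. A sketch with invented numbers:

    serves = [
        {"pct_in_view": 0.95, "seconds_in_view": 3.0},
        {"pct_in_view": 0.55, "seconds_in_view": 1.2},
        {"pct_in_view": 0.02, "seconds_in_view": 0.3},  # below the fold
    ]

    def viewable(log, min_pct, min_seconds):
        return sum(1 for s in log
                   if s["pct_in_view"] >= min_pct
                   and s["seconds_in_view"] >= min_seconds)

    print(viewable(serves, 0.50, 1.0))  # stricter definition -> 2
    print(viewable(serves, 0.01, 0.0))  # "anything counts"   -> 3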


That has a name, the Cobra Effect [0]

[0] https://en.wikipedia.org/wiki/Cobra_effect


The Cobra Effect is when you're trying to make there be fewer of things (cobras) but the incentives you set up actually lead to there being more of them (it's now worth it for people to be cobra breeders).

Perhaps you're thinking of Goodhart's law? https://en.wikipedia.org/wiki/Goodhart%27s_law


A short-term metric is necessary to allow campaigns to optimize their spend without a long, multi-month turnaround while conversions are fully tracked and users act upon the ad.

While there is risk that the wrong short term metric is chosen (that is not indicative of the real goal the advertiser is trying to achieve), it is not an invalid approach.

It is untrue that setting a short-term goal will _rarely_ achieve the long-term value; otherwise, the advertiser would throw out that short-term goal, since it is flawed.


That's why attribution is such a big story in advertising. With semi-persistent identifiers, you can learn whether someone:

1) Visited your store before they saw your ad

2) Only visited your store after seeing your ad

You can do this online or for physical store locations. There are lots of players there because big marketing teams are fairly sophisticated now: they want cross-channel identity (across mobile, web, TV), attribution, and measurement.
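
The before/after split is just a timestamp comparison once a semi-persistent identifier joins ad exposures to store visits. A minimal sketch (data shapes hypothetical):

    # Classify users by whether their first visit preceded their first
    # ad exposure. Timestamps are e.g. epoch seconds; None = never visited.
    def classify(first_visit, first_exposure):
        if first_visit is None:
            return "exposed, never visited"
        if first_visit < first_exposure:
            return "visited before the ad"
        return "visited only after the ad"

    users = {"u1": (100, 200), "u2": (300, 200), "u3": (None, 200)}
    for uid, (visit, exposure) in users.items():
        print(uid, classify(visit, exposure))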


I sometimes like to think of ethics as an incentive system that is (generally) stronger than social norms but weaker than laws. Once you reduce it to incentives, it is a matter of comparing incentives. If the incentive to behave ethically is much lower than the incentive not to, then people will pick the better option. That some people seem to be more ethical than others can be viewed as the extent to which they estimate the incentives from different behaviors (importance placed on the thoughts of others, whether they believe some greater power judges them for their actions, etc.).


> This is why, in general, ideas that involve lots of actors changing their behavior contrary to incentives, instead of changing the incentives, are fundamentally flawed.

Indeed, this is what drives the Tragedy of the Commons - everyone acting according to their individual incentives, maximizing their individual benefit, but while exploiting an unregulated limited common resource, and externalizing the cost of doing so upon the commons.

There are two solution paths, which are not mutually exclusive: put a higher price on the resource use or externality, and find a more efficient way to meet needs.

In many cases, the current incentives and resulting approaches keep us stuck in local maxima. An example is commuting in single occupancy personal vehicles in urban areas in the US. When you consider the layout of our cities and the limited mass transit availability, it very often makes more individual sense to drive when you consider factors like transporting children to school/activities, grocery shopping, appointments, etc.

Sometimes the costs are actually to the individual rather than the commons, or are delayed in impact to the commons. In the previous example hours spent sitting in a vehicle instead of walking deprives individuals of opportunities to burn calories to control their weight, resulting in a variety of health issues.

The problem is that there are a powerful set of interests who are interested in maintaining the current approaches and incentives at all cost. Again, in the transportation example above, these would include the fossil fuel extraction and highway expansion industries.


> There are two solution paths, which are not mutually exclusive: put a higher price on the resource use or externality, and find a more efficient way to meet needs.

I think the second one isn't really a solution. If you find a more efficient way to achieve the needs, the needs will expand to compensate. Even if everyone executes restraint, an entrepreneur will show up and start selling the remaining resources to people who don't have access to those resources.

As far as I know, the only way to solve tragedies of the commons is for everyone to agree on a set of rules, and punishments that disincentivise breaking these rules. Which, in real life, is achievable only through centralizing the power that sets the rules and executes punishment - ongoing coordination is an unstable state.


If you study anarchist philosophy, "mutually agreeing on a set of rules, etc." is effectively the syndicalist answer to this problem. However, as you point out, history seems to show that this is incredibly unstable and easily dominated by someone breaking the rule of non-violence.

The radical monarchist-libertarians' answer is to eliminate the commons, such that everything has an explicit pricing mechanism. No less problematic, or unstable.

I think in the end, there is something about humanity that does not allow for stable equilibrium over extended periods (100s of years). Not sure what.


The problem with consensus-based rules is that it takes way too long to do anything. Around the turn of the century, when anarchism was still opposed to capitalism rather than just people swinging bike locks in the name of identity politics, anarchist-style organizing basically amounted to spending the whole day sitting in a giant meeting, then leaving with nothing important decided.


Not sure which century you're referencing - 2000s?

I concur though. The Occupy Wall Street meetings I had the luck of attending at Zuccotti Park reflected as much, and nearly all of the other anarcho-syndicalist meetings I've seen work at the snail's pace of consensus.

The adherents would argue that this speed is a feature, not a bug. I would generally agree that slowing down decision making is a good thing. Not sure it's a survivable strategy for a group, however.


> Again, in the transportation example above, these would include the fossil fuel extraction and highway expansion industries.

Those are only side effects. The main set of interests keeping things like this is American consumers. The majority prefer far more space in their homes and property than their European counterparts.

The real issue is that public transport doesn’t actually work when the middle class is living in 3-5 bedroom homes with 1/4 acre lots. There is no country with good public transit where a middle class income can easily support that size home. The hard truth is that it’s going to be a trade-off.


> The main set of interests keeping things like this is the American consumers

I concur to a degree, but American consumers have also been so captive to the narrative of large houses that they are mostly unaware of other ways of living. Their only image of denser living is slums and tenements, or Park Avenue penthouses, not middle class townhouses.

However, Americans have increasingly travelled to places that have higher density and transit and seen that those can offer a high quality of life.

Many then seek a different balance of space to transit by moving to areas where they can make a different trade-off than the standard American model, hence the recent urbanization of suburbs and growth of previously sparse cities like Austin and Salt Lake City.


> Their only image of denser living is slums and tenements, or Park Avenue penthouses, not middle class townhouses.

Maybe that was true before the internet became widespread, but it’s not the case anymore. The American Midwest is aware of the walkable streets of Rome, Paris, and Tokyo. It’s still a big sacrifice to those used to yards, sparse commerce, and the scheduling freedom that comes with vehicles.

> hence the recent urbanization of suburbs and growth of previously sparse cities like Austin and Salt Lake City.

Yep, and I personally wish there were walkable options like in Europe. As someone who has lived in Austin for the past 7 years, getting around without a car is a joke, and you're going to take a big quality-of-life hit if you limit yourself to one walkable area. Its density certainly is increasing (see the east side), but it's just turning into the Bay Area with worse highways.


There are trustless systems though, like Bitcoin.

Oh btw you can buy hn upvotes too: https://upvotes.club/buy/hacker-news-upvote/


> Oh btw you can buy hn upvotes too

HN credibility just dropped 80% to me for the moment ;-)

No but seriously, this is pretty bad. Many people are kind of addicted to HN and to many this is a stream of inspiration. I've seen people often share posts in internal Workplace chats where most people are not frequent HN readers.

Also having learned today that one can buy Reddit upvotes, I think this whole count thing is now officially useless.


Edit: I've been asked to make this more explicit. How about: If you do this you will get banned.

Don't assume that those upvotes are counted. It's a cat and mouse game. We eat a lot of mice.

If you or anyone suspects that a post has made HN's front page because of vote manipulation, you should let us know at hn@ycombinator.com so we can investigate. We have years of experience with this data.

I'd never say it never works, because we don't know what we don't know. But I can tell you that many people who've used that spam service have gotten their posts buried, their accounts banned, and their sites blacklisted.


In the case of click farms driving virtual browsers with selenium or puppeteer, there aren't even any actual mice involved in those clicks!
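For the curious, "driving a virtual browser" is about this much code - a minimal Selenium sketch in Python, with a placeholder URL and selector rather than any real target:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Headless browser: no screen, no human, and definitely no mice.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)

    driver.get("https://example.com/some-post")                   # placeholder URL
    driver.find_element(By.CSS_SELECTOR, ".like-button").click()  # placeholder selector
    driver.quit()

Loop that over a pool of aged accounts behind residential proxies and you have the "factory" part.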


> https://upvotes.club/buy/hacker-news-upvote/

I checked out that page, and at least with this one, it doesn't devalue HN's credibility for me. The max number of upvotes you can buy for HN on this website is 10 per post, which hardly makes or breaks an HN post.

Those 10 upvotes would only be useful as an initial kick-off for a post; if the post itself is trash and legitimate users don't upvote it enough, it will all be in vain. So it would be pretty much impossible to use this service to boost absolutely trash articles that people don't care about.


I assume it's been happening for quite a while now, at least for some of the products featured sometimes, but it doesn't seem very prevalent.


"We upvote with very strong accounts; if you don’t make it on the main page after 10 upvotes, more won’t help"


> On Hacker News, when the post is new, it doesn’t matter how much you upvote it after a certain point. After fewer than 10 early upvotes with great accounts, you’ll usually make it to the front page. From there, HN moderators select manually what they want to keep there. A good strategy is to buy only a few upvotes first in order to make your post visible; then, after it remains approved for 10 minutes or more, buy a lot more upvotes in order to rank it better, or help maintain its good position.

Is this actually true? I don't think so.

It also seems funny they offer to sell a factor of ten more upvotes to articles already on the front page. Maybe they'll just sit back and watch it happen regardless?


Well, here's the thing: do these guys expect any repeat sales? They could try their technique, and if it works, then great - either way, they still have the money. Is there a rating system where reviews are posted, and would it matter if there was? (see https://www.xkcd.com/325/)

edit: Anyway, the mods have access to the database to see who has been upvoting articles onto the front page. If a group of ('high value') accounts upvotes in a similar way on a regular basis, they could be detected.
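As a rough sketch of that idea, assuming the mods can export (account, post) vote pairs - the Jaccard threshold here is an arbitrary illustration, not a known HN heuristic:

    from collections import defaultdict
    from itertools import combinations

    def flag_coordinated_voters(votes, threshold=0.8, min_votes=5):
        # votes: iterable of (account, post_id) pairs
        history = defaultdict(set)
        for account, post in votes:
            history[account].add(post)
        suspicious = []
        for a, b in combinations(history, 2):
            va, vb = history[a], history[b]
            if len(va) < min_votes or len(vb) < min_votes:
                continue  # too little history to judge
            jaccard = len(va & vb) / len(va | vb)
            if jaccard >= threshold:
                suspicious.append((a, b, round(jaccard, 2)))
        return suspicious

Pairwise comparison is quadratic in the number of accounts, so a real system would first bucket accounts by the posts they voted on, but the principle is the same: accounts sold as a block leave a near-identical voting footprint.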


I feel like mods could easily just pay the $2 and create a post for them to upvote. Then detect those accounts and IPs and block them. However, it's likely just a whack-a-mole situation. Perhaps they could shadow ban them instead?


Is it better for the community to ban the fake clickers with disposable accounts, or to ban the actual people who pay fake clickers to promote their spammy posts, instead? Once you know who the fake clicker accounts are, leave them alone and let them continue to help you identify and ban dishonest self promoters who are trying to game the system to their advantage.

Hell, maybe dang runs upvotes.club himself, so he can take their customers' money and then ban them immediately: that would be most efficient and least error prone. For all I know, buboard might be one of his sock puppets, posting the URL as a honey pot to catch dishonest HN users. ;)


Only problem is that if I don't like you, I sign up and have them upvote the next DonHopkins post.

I think one method is to watch for upvotes that come when someone goes to a link directly instead of organically finding it through /new
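A sketch of that heuristic, assuming the site logs a referrer with each vote (the field names here are made up for illustration):

    ORGANIC_PAGES = {"/", "/news", "/newest"}

    def direct_vote_ratio(votes):
        # votes: list of dicts like {"referrer": "/newest"}, one per upvote
        if not votes:
            return 0.0
        direct = sum(1 for v in votes if v.get("referrer") not in ORGANIC_PAGES)
        return direct / len(votes)

    def looks_bought(votes, cutoff=0.9):
        # flag posts whose early votes overwhelmingly arrive via direct links
        return direct_vote_ratio(votes) > cutoff

Paid upvoters are handed a URL, so their votes skip /new entirely; organic voters mostly arrive from the listing pages.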


They probably have accounts that match the topics and upvote regularly. So they have some accounts that upvote those entry-level ML Medium articles, another set that upvotes lifestyle SaaSes, another that upvotes pet-language posts, etc.


I'm starting to understand how all the mediocre metaphysics articles from Quanta and Medium keep ending up on this website. They're the textual equivalent of graham crackers. Just bland enough to not be thought-provoking, but just interesting enough to keep people reading. And with so little to say on the subjects in question, they provoke huge off-topic discussions.


Bland Graham Crackers for Paul Graham's Hackers?

Looking to Quell Sexual Urges? Consider the Graham Cracker. One of America's first diet hawks, Sylvester Graham was certain that sexual desire was ruining society. His solution: whole wheat. How a zealot's legacy lives in our foods today.

https://www.theatlantic.com/health/archive/2014/01/looking-t...


Oh god. Imagine the thousands of bland consensus talking points they had to regurgitate to build those accounts. Never trust somebody with no gray comments.


I guess that means I can trust you.


They also need a few warnings from dang for absolute trust.


Don't help them :) They can just throw in a few posts disparaging Rust to look credible.


Bitcoin is not trustless; the more compute power you have, the more « votes » you have.


Then Monero. The point is that there is a system that minimizes trust, certainly more than the corporate system.


Whenever somebody starts spouting off about Bitcoin or Monero, I minimize the trust I have in them.


The Europol investigation and presentation say Monero works as advertised. I trust that, especially since chain analysis companies aren't on the other side advertising their capabilities to undermine Monero.


It doesn't matter if the technology or the mathematical theory behind it is trustworthy or not; the problem is that get-rich-quick, money-for-free pyramid schemes attract untrustworthy people, so even if they technically work on the chalkboard in the lab, those people ruin them in the real world.

Social engineering and deceptive marketing trump Razzle-Dazzle Globetrotter Calculus every day.

https://www.youtube.com/watch?v=_P1bu4HUAMs


I just find it incredibly easy to understand private digital assets without being led on to the guru's other untrustworthy product.

What you are basically saying is that because people lack critical thinking skills, you distrust the other people who happen (or claim) to actually know about the topic; that seems awfully reductive.


No, that's not what I'm saying at all.

"Incredibly easy", huh?

Incredibly: adverb. used to introduce a statement that is hard to believe; strangely. "incredibly, he was still alive"

See also: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


Even after your edit, I don't know what you are saying or how it relates to trusting people less for a relevant short comment mentioning Monero. Care to articulate that specific sentiment, or how it's doing more than painting with a large ignorant brush?


In the case of buboard's post, first he posted something wrong about Bitcoin (and plugged upvotes.club), then when corrected, backtracked to "then Monero". In the case of your post, first you posted something promoting Monero, then you claimed you found it "incredibly easy to understand private digital assets", but then criticized other people who "lack critical thinking skills".

What do you find so "incredibly easy to understand" about private digital assets that people who "lack critical thinking skills" don't get, and why are you more trustworthy than they are? "Incredible" literally means "not credible". Bragging about how "incredibly" clever you are and how dumb other people are doesn't make you sound trustworthy.


> "Incredible" literally means "not credible"

Like many words, incredible has multiple definitions usable in context, some having nothing to do with credibility, and what English users typically do is pick the definition most applicable to the circumstance, such as the one which completely undermines your entire argument and is more congruent with everything I have said.

With that in mind, how would you rewrite your entire post?


Since you asked: bearing in mind that you're deliberately trying to misunderstand what I'm saying, I'll repeat the question you didn't answer:

What do you find so "incredibly easy to understand" about private digital assets that people who "lack critical thinking skills" don't get, and why are you more trustworthy than they are?

Isn't it a bit ironic that you want us to trust your claim that we should trust Europol about a "trustless" system?

"There are trustless systems though, like bitcoin" -buboard; "there is a system [Bitcoin -- bzzt -- oops! -- Monero then] that minimizes trust" -buboard; "I trust that [the Europol investigation and presentation about Monero]" -rolltiide.

So why should we trust you and buboard, then? Care to link to your resume or PhD dissertation or any code or research papers you've written on the topic, since it's so "incredibly easy" for you to understand? Or should we just trust you without any evidence to tell us who to trust about a trustless system?


So on the topic of Monero, there are two parties I mentioned: Europol and chain analysis companies. Europol, a government agency, can be incentivized to lie about its investigative capabilities to help with its investigations. They detailed how they cannot track Monero, but let's assume they are lying.

Chain analysis companies advertise their investigative abilities in order to attract business. In theory they cannot track Monero transactions, and in practice they are not advertising that they can, while they do advertise how they can follow transactions on surveillance networks like Bitcoin. This is a second confirmation that Monero works as private digital money.

This part is a cat and mouse game, all while Monero continues receiving software updates that improve its use, via open source pull requests, which bolsters my confidence in the system since it doesn't exist in a static state. A lot of people analyze the digital asset ecosystem in its current or a prior state and dismiss it based on that.

I think that is enough ammunition for you to investigate further to come to similar conclusions without you needing to trust me or buboard.

But if you would like to dive deeper into why these parties are not able to follow Monero transactions in both theory and practice, there are books on the subject and code you can compile yourself to test it out. That's typically what we are referring to when we say trustless: the independent verifiability.


Trust comes and goes - we'll see that when the current negative-rate world goes belly up. Even as last-resort measures, cryptos have their uses.


When that happens, I'd surmise that the following is not going to be a good hedge: a completely ephemeral "asset" that is not backed by any conventional power structure or society, while at the same time being fully dependent on a highly sophisticated infrastructure made possible by said power structures and societies.

Cryptocurrencies depend on a redundant and highly interconnected network of many diverse actors to be able to maintain their trust guarantees. How that network is going to stay around in a doomsday scenario is very much up in the air.

The main point, the crucial point of a doomsday hedge is that it should depend as little as possible on the intricacies of the old world.


The same goes for cash & credit cards, and in any case an apocalyptic war resets all prices to 0. Or you could extol the virtues of gold, but good luck carrying your gold into an EU country from Jan 1st, let alone to the Moon and Mars.


Of course traditional assets are highly uncertain if the world goes tits up, but you were the one hinting that cryptocurrencies would be a good hedge in a doomsday scenario.

The truth is that it is extremely difficult to predict what asset to hold in such a case. Physical assets can be confiscated by force, ephemeral assets are likely to just cease to exist.

Perhaps the best hedge is nothing as material as gold, or guns, or money, but a large and robust network of friends and acquaintances. Ultimately, I think being connected (not just in a technical sense) is the best enabler of success in or out of a doomsday event.


Wow, does that even work? I just assumed that HN was too small a community for misbehaviour to be lost in the noise.

Would love to hear the moderators' opinions on this.


Recently HN seems to have become more heavily targeted. I think bad actors have realized the value of the community here. I am increasingly wary of opening links to unfamiliar domains except in a sandbox environment.


This has been going on for quite some time. But the mod(s) do a more than credible job at keeping up with the trash and identifying such behavior.

The bigger problem - in my opinion - is people submitting (unwittingly) content cloned from elsewhere.

There is quite a bit of blogspam that makes it to the front page. Usually even that is taken care of sooner rather than later, but every time it works, it's proof that it can work, which will drive more people to try to get away with it. We can all help with this issue: ensure that we submit content from original sources, and when content from a cloned source is submitted, flag the article and point to the original.


> in case content from a cloned source is submitted flag the article and point to the original

It sure would be nice if there was a way to add a comment to explain why one has flagged an article, so the moderators don't have to guess why it was flagged! As it is, I occasionally add an explanation in a comment or direct email, but it always feels awkward.


I presume if someone were to use this, they'd use an "OK" domain like Medium.


HN is not small by any measure. It is often featured on the first Google page. The HN hug of death is just as real as the Slashdot effect.

It is quite obvious when a company that considers these sites relevant for its social media gets featured. Suddenly a whole lot of upvotes come along. Make of that what you will. Personally I think it is naive to believe that any open community is free of bought votes.


Does anyone have any actual measures of this? I guess there are some API services I could investigate but ... I'm lazy and it's Xmas. It still just runs on one large server IIRC


Here's a post from 2013, "What happens when you're #1 on Hacker News for a day"

tl;dr numbers: 15,000+ unique visitors, 25,000+ page views

https://levels.io/hacker-news-number-one/

And here's a thread from 2016 where users chime in with their results.

tl;dr numbers: A #2 or #3 ranking was about 50k-75k pageviews.

https://news.ycombinator.com/item?id=12008384


It's not small. Even if a Show HN doesn't reach the front page, it's decent traffic and, depending on the product, can be pretty relevant too.

One could do the math to figure out how they came up with this $2/upvote price - I presume it's the equivalent of paying Google for the same amount of traffic.
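A back-of-envelope check on that presumption, using the ~15,000 uniques for a #1 spot quoted elsewhere in this thread and a guessed $1 CPC (the CPC is purely an assumption):

    upvotes_needed = 10         # the service's own claim for reaching the front page
    cost_per_upvote = 2.00      # USD, their listed price
    frontpage_uniques = 15_000  # from the levels.io write-up linked above
    assumed_cpc = 1.00          # USD per click - a guess, not a quoted rate

    bought = upvotes_needed * cost_per_upvote  # $20 for a shot at the front page
    ads = frontpage_uniques * assumed_cpc      # $15,000 for the same traffic via ads
    print(bought, ads, ads / bought)           # 20.0 15000.0 750.0

If those numbers are anywhere close, the price isn't pegged to equivalent ad traffic at all - it's orders of magnitude cheaper, which is presumably much of the appeal.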


Using that spam service will get your account and website banned here, as plenty have discovered to their chagrin.


Heeiille

can I buy being upvoted outta the shadow ban?


I'm of a similar age, and generally agree with you, but for the sake of countering complete pessimism I also think it's worth adding some context in that it depends on numbers of people and the medium. The village that my mother lives in, population roughly 1000, has a broad cross-section of ages and political views, but everyone genuinely looks out for each other and does the right thing in the vast majority of circumstances. However, I live in a block of flats with around 150 people, and the residents association had to take down the forum on the website because people were making all sorts of wild accusations. (The WhatsApp group with families is pretty self-policing). The Internet is a problematic mix of vast numbers of people and very little personal accountability.


>Just Google "buy reddit upvotes" and the internet is full of folks who will sell you a million Likes for cheap

Some years ago I made a post on /r/learnprogramming whose TL;DR was along the lines of "go to college, great ROI, plus for your first job most employers auto-reject resumes without a degree".

I was offered $500 up front and $500 referral bonuses to edit my comment to link to a fake degree mill. IIRC they charged something like $1500 for a fake degree and one year of service for employer education checks/transcripts. (by the by, if you're going to run a fake degree mill out of a residential address in San Mateo, use a VPN!)

Point being, Reddit comments are ripe for abuse.


I used to mod an active subreddit and I saw this type of thing all the time. Once a post hits the front page or a comment gets gilded, the majority will just assume it's factual and run with it. If anything, it was kind of scary to see how easily people can be manipulated.


If you built an upvote system that worked more like the old Netflix system did in theory, then the only outcome of my upvoting something would be that I see more things like it.

There are quite a few problems with that, though which problems seem obvious will differ from person to person.

One, movies are a fairly concrete topic. There are tons of laws about Copyright, so every movie has some bits of information you and I could exchange to know that we are talking about the same thing. And for purposes of marketing and awards ceremonies they are even grouped and tagged in some fairly official ways. Comments are amorphous. How do I tell if your comment and mine are 'alike'?

Two, Netflix only sort of works because liking things is not free. If I were promoting a horror movie, I could create a bunch of accounts that like all of the classics of the horror genre, then add a few random movies, and my movie. Everyone who loves classic horror movies would correlate with my account, and that will mean my likes affect their suggestions. But... This is going to cost me. I have to create an account to vote. I have to (probably, I haven't checked in a while) attach a credit card to that account, so I could do that once, twice, a dozen times (everyone on my PR team) but beyond that things get complicated. There are limits to how much I could amortize those costs across multiple PR campaigns, but it might still be lucrative to do so.

With Twitter or Facebook or a million other sites, there is no friction. I could create bots to do this for me, for the cost of some network-connected CPU time (Amazon spot instances). I could speculatively create bots so that I have a stable of 'clean' accounts for future activities that haven't even occurred to me or my customers yet.

These activities are effectively parasitic, in the biological sense of the word (though some would agree in the rhetorical sense as well, but let's not get into that). Only when the host begins to die is there any selective pressure against them. Maybe we're seeing that now. The 'parasite' load is getting so high that the health of the host is in jeopardy. In nature this is one of the major selective pressures on parasites and some pathogens. Kill the host and you lose your meal ticket & transportation.
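To make the correlation attack described above concrete, here is a toy user-user recommender (all data invented for illustration): sockpuppets seeded with the same classics as a target user become that user's nearest neighbors and steer their suggestions.

    def recommend(target, likes, top_k=3):
        # likes: dict mapping account -> set of liked items
        mine = likes[target]

        def overlap(other):
            # Jaccard similarity between two accounts' like-sets
            theirs = likes[other]
            return len(mine & theirs) / len(mine | theirs)

        neighbors = sorted((a for a in likes if a != target),
                           key=overlap, reverse=True)
        suggestions = {}
        for n in neighbors[:top_k]:
            w = overlap(n)
            if w == 0:
                continue  # no shared taste, ignore
            for item in likes[n] - mine:
                suggestions[item] = suggestions.get(item, 0) + w
        return sorted(suggestions, key=suggestions.get, reverse=True)

    likes = {
        "victim":   {"halloween", "the_thing", "alien"},
        "sock_1":   {"halloween", "the_thing", "alien", "my_promoted_movie"},
        "sock_2":   {"halloween", "the_thing", "alien", "my_promoted_movie"},
        "stranger": {"casablanca", "vertigo"},
    }
    print(recommend("victim", likes))  # ['my_promoted_movie']

The friction argument above is what keeps this attack expensive on Netflix and nearly free on platforms where account creation costs nothing.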


> This may rub some folks raw and be pessimistic, but a half a century on this planet has taught me that any solution which relies on "doing the right thing" (ethics) but does not have any rules, laws or other repercussion for abusing it, then (some entity - person or company) will use your altruistic belief to their own gain, ethics be damned.

100%.

I live in a big city, and while there are bylaws, without enforcement people break them all the time. An example is people bringing their dogs where they are not allowed (e.g. school yards) or having them off-leash where they're not supposed to (e.g. on-leash parks). But because there are so few bylaw officers, few dog owners follow those rules.


I think you are ignoring all the thousands of laws, bylaws, norms, and codes of conduct that most people do follow most of the time. Without these, society would collapse within hours.

You only notice the few laws that are sometimes broken.


>any solution which relies on "doing the right thing" (ethics) but does not have any rules, laws or other repercussion for abusing it...

Doing the right thing can vary by culture, perspective, and situation. What is right or wrong to us may be entirely different to someone whose family is starving, as just one example. Given the world-wide nature of the Internet, it is unlikely that we, as a species, are going to agree on a single set of rules or punishments within our lifetime.


"Rules are only as good as their enforcement"


Does anybody besides me wonder why they never seem to do anything about it? It's really not that hard of a problem to solve.


Thousands of engineers at Facebook (and Google, and probably Twitter, and ...) work on stopping this kind of abuse. There are many very hard problems in this area; it is not remotely easy to "solve" a constantly evolving adversarial landscape.


The problem is incentives.

There is, in all likelihood, a way to build an upvote system that is resistant to factory automation. The problem is that such a system doesn't have the exact feature set of the current system. Which means you have to give someone (probably several someones) bad news whether they want to hear it or not.

This is not a strong suit for software devs in general, and in the case where the 'right thing' involves financial concerns, we almost always lose. Even when we are right.

So rather than losing millions on taking away some feature that someone got promoted for, we lose millions by investing multiple man-decades into palliative care.


Do you have any concrete evidence of major attempts to stop it that failed? I'd like to see what they tried and what failed.


>I'd like to see what they tried and what failed.

Given that fake likes are potentially fraudulent and detrimental to paying customers, I feel these companies should be the ones openly demonstrating their attempts at stopping them. Otherwise, I'll default to "they allow it".


People do the right thing if it's worth their while. You don't need people to fear the state showing up and inflicting physical violence, or rotting in hell for eternity, in order to get people to act "right". You just need "right" to be more worth their while than "wrong". Now, fear of the state and/or fear of god are often used by society to weight people's choices, but there's no reason things have to be all stick and no carrot.


You're saying the same thing GP says, just in a different way. Right being "more worth their while" than wrong is another way of phrasing "people follow incentives". Rules and consequences are meant to change the incentive structure, and they don't all have to be sticks. E.g. tax deductions are carrots.


I know I think differently than most, but this idea always confused me. Yeah, if I threw a piece of trash out of my car window, I could possibly get in trouble, but where I live nobody would notice and I would probably get away with it. Instead, I can just do the right thing and not do that. Personally, I don't need that scary presence of responsibility to choose the "better" path. I guess it is really hard for me to put myself in the other side's shoes.


My girlfriend and I recently found a lost dog. A package of salami, a phone call, and about 15 minutes later, the dog was reunited with its owner.

We helped the dog despite a variety of risks to ourselves, despite not knowing when it would end, and despite being in a hurry. We didn't do it with any expectation of reward or kudos. And although we do train and board dogs as a side gig, we're already booked enough to be choosy about clients, so we weren't doing it for potential business.

We did it because ignoring the dog felt bad and helping the dog felt good. That is, it was completely selfish on our part, but we still did the right thing.

But at least a dozen other people just walked by the dog as if they didn't see it. It was off leash, unkempt, and clearly scared. They saw it, but they just walked on past. Why? Because, for whatever reason, they didn't feel bad enough about ignoring it, or wouldn't have felt good enough if they helped.

I don't litter, either. And I take some personal responsibility for the litter of others. Again, because it feels good (or, in this case, living in a marginally cleaner city feels good). It also feels good knowing I might not be the only one who does so.

Neither of these makes me a good person. There's one instance where I know I don't do the right thing: I don't visit people in hospitals unless someone else drags me along. The experience, for reasons I'd probably need a psychologist to figure out, is painful for me, and whatever reward I get visiting the person is dwarfed by that pain. Even the shame and embarrassment of being a person who doesn't visit people when they're in the hospital isn't enough of an incentive to go. I need an external incentive. I need a friend to say, "Hey, let's go visit Phil in the hospital" or I'll be 'too busy' to go.

We all 'think differently' than each other. We're all doing what feels best. We all have different internal incentives and disincentives. So, for any sufficiently large group and any desired behavior, external incentives are required to get some of the people to exhibit that desired behavior.


Thanks for writing that. I have been trying to explain for years that in most people, the brain rewards things like this in the same way that it rewards eating candy. Saying that doing something that gives you a pump of serotonin is selfless is ridiculous.

Show me someone who is absolutely repulsed by something but does it anyway because it creates the serotonin pump for someone else, and I'll show you a selfless person.


Well, no-one said "selfless". This and the GP sound like psychological egoism[0], the claim that "all of our ultimate desires are egoistic":

"Psychological egoism is the thesis that we are always deep down motivated by what we perceive to be in our own self-interest. Psychological altruism, on the other hand, is the view that sometimes we can have ultimately altruistic motives. Suppose, for example, that Pam saves Jim from a burning office building. What ultimately motivated her to do this? It would be odd to suggest that it’s ultimately her own benefit that Pam is seeking. After all, she’s risking her own life in the process. But the psychological egoist holds that Pam’s apparently altruistic act is ultimately motivated by the goal to benefit herself, whether she is aware of this or not. Pam might have wanted to gain a good feeling from being a hero, or to avoid social reprimand that would follow had she not helped Jim, or something along these lines."

I was attracted to this years ago, and even more to the related theory of Mark Twain in What Is Man?, which seemed very convincing[1]:

"...we ignore and never mention the Sole Impulse which dictates and compels a man's every act: the imperious necessity of securing his own approval, in every emergency and at all costs. To it we owe all that we are. It is our breath, our heart, our blood. It is our only spur, our whip, our good, our only impelling power; we have no other. ... FROM HIS CRADLE TO HIS GRAVE A MAN NEVER DOES A SINGLE THING WHICH HAS ANY FIRST AND FOREMOST OBJECT BUT ONE–TO SECURE PEACE OF MIND, SPIRITUAL COMFORT, FOR HIMSELF. ...He will always do the thing which will bring him the MOST mental comfort–for that is THE SOLE LAW OF HIS LIFE"

but came to think (as most philosophers do) that both are just mistakes. They explain too much, are unfalsifiable (how do you know what people never do - it's just asserted), and are a crazy, horrible way to live. There's no goodness in the world?! No kind acts? Why would you want to believe that? You really didn't care about that dog at all, GP?

[0]https://www.iep.utm.edu/psychego/#SH3d

[1] http://www.fullbooks.com/What-Is-Man-1.html


> You really didn't care about that dog at all, GP?

If you're asking that question, you've adopted some sort of fatalist view of the concept I don't hold. I don't know what caring is if it doesn't have something to do with feelings. To answer your question, I cared, empathized, and sympathized enough that it outweighed my fear, anxiety, and doubts about helping.

Why do I feel that way and others don't? Probably because of my experiences with dogs, my thoughts about dogs, and my thoughts in general. I could, with intention, retrain my thoughts so I am likely to not help dogs in the future, if I so chose. Likewise, I suspect I could have taken the mental hit and not helped the dog; but I don't think that's what people generally do. And I think when people generally do a thing, if they're not getting a reward, they're building a reward system so when they do the thing in the future they get a reward.

I'm very skeptical of any claim of purely altruistic behavior, because altruism can have many rewards, both internal and external. Pam rescues Jim because Pam cares about Jim. But what does it mean to 'care about' a thing? Isn't that just pro-rescue feelings?


I went through this same questioning phase and looked at many of the same sources. However, I came out of it with the opposite conclusion from yours: society rewards people whose neurochemistry rewards "altruistic" behavior.

In fact, when I was researching this topic, I remember there being some reporting about Mother Teresa writing that she had doubts about what she did and how difficult it was - reinforcing the idea that she was suffering internally. I also recall some interesting studies on serotonin reward feedback in people who do altruistic things or have "sacrificing" jobs. So it's not woo-woo, as you seem to make it sound.

I take issue with a lot in your last paragraph - but I'd suggest you revisit the concept.


Hi, thanks. Yes, I didn't have the hours needed to make a decent last paragraph! Haven't read anything on the subject or thought about it much at all in 20+ years.

I recommend Christopher Hitchens' book on Mother Teresa, I'm pretty sure you wouldn't write about her like that if you'd read it.


Know it well. Saw the documentary too.

No matter how misguided her actions were, it was clear she was doing what she did without those same altruism mechanisms - yet millions think she was a saint. That's the whole point of that anecdote.


I always blamed white coat syndrome for not wanting to be in hospitals. A couple years ago I had to visit an ER for some people who were in a highway collision. They were just bruised, but one of them had a vital sign the doctors didn't like, so they were holding him for observation, to try to figure out if it was a new or an existing condition.

One bay over, more than a dozen people were consulting on a kid who fell out of a high window. When someone volunteered me to take their kids home so they could stay, I'm not even sure I finished my whole sentence before leaving. Pretty much gone in a puff of dust like a Warner Brothers cartoon.

Hospitals are full of hurt people. Some of them are never going home. Sometimes they know it. Sometimes the family knows it. I'm now pretty sure it's not the people in the hospital I have a problem with. It's what's happening around them and to them.


"All actors are locally rational."


You are ignoring the incentives. If doing the right thing cost $5 instead of being free, there would be a lot more litter.


I'm not sure I know many examples that fit, though. Recycling is probably a big one, since I have to pay to have a recycle bin picked up weekly. Holding on to litter costs me time and energy, since I have to hang onto it until I find a trash can. I feel like people have a "minimum" threshold for how far they can be put out of their way before they go with the bad option. I really don't know; it's tough to think about. How much time/money would you be willing to part with to properly dispose of a candy wrapper? How about the same question, but now you have to pay or waste time to not steal a TV? I don't know how to explain it without going full-on slippery slope.


That's kind of the reverse of 'bottle deposits', where it costs you up front, and not doing the right thing (returning the bottles for recycling) costs you in the form of not getting your money back.

So, deposits on plastic cups and straws! (It might get people to pay the cost of washing and re-using a sturdy one.)


It's also the beauty of capitalism and the creation of new types of work that can provide value. For every like factory there is a fraud detection startup, so everyone's a winner.

Never mind where the "problem" came from in the first place.


Yeah, create the problem and then provide the solution is one of the best business models.

Climate change is also making people rich now due to this logic.


It saves anyone from having to actually innovate. Remember when there was a rush of "but on the internet" patents and businesses? Well, now there's a bunch of "but green this time" businesses. Not to say they aren't needed, but redoing the last 100 years of tech isn't really moving us forward.


After listening to the video, I have a hard time seeing the difference between paying for likes and paying for ads from an ethical standpoint.

Instead of FB etc. getting the ad money, it gets spread out to many others. It also seems like this might be more cost effective than ads.

I think I would consider this much more ethical than how Google puts the top 3-5 results as "ads" these days. Sometimes I question how long until I have to go to Page 2 to get off the advertised results.

Note - In the above I'm only considering this practice being done by legitimate places sharing fairly accurate or honest content, such as a business trying to promote a sale or a blog piece.

I would consider paid ads, paid likes, paid comments, etc., to be unethical if the content they're supporting is false.

TLDR: What's the difference between paying Facebook/Google/etc for an ad vs paying someone else to like your post? The 2nd seems like a better solution for a majority of people.


> I have a hard time seeing the difference between paying for likes vs paying for ads from an ethical stand point.

That's a bit like saying you don't see the difference between peeing in a toilet and peeing in someone's pool. You're relieving yourself either way, after all.

"Likes" are supposed to represent the sentiment of the real users. Ads are supposed to be advertising messages.

The two things are very different for a bunch of reasons. First, buying likes is straight up unethical because it's lying and gaming the system. Second, buying likes is harmful because it reduces the trustworthiness of likes.


Buying likes and buying ads are both ethical and unethical.

By buying ads, you are trying to game users' newsfeeds without providing any value to the user (if highly liked content provides value, ads do not provide this value).

Buying likes is the same: it games a user's newsfeed but doesn't provide value.

To the user they are the same. To Facebook, bought likes are unethical because the buyer is not paying them.

On the point about harm: the newsfeed itself is untrustworthy. Likes were never trustworthy.


It's mildly amusing, in a frustrating way, how we just assume these days that ads provide no value, because ads and marketing in general have become synonymous with hostile psychological manipulation.

In theory (and ancient, pre-internet historical practice), ads can provide genuine value by informing people about the availability and properties of products.


>Buying likes and buying ads are both ethical and unethical.

Sorry, in what way is buying likes ethical?


In what way is it unethical? No harm is done to anyone.

Facebook decided how things work and created this system with side effects. Finding a way around the rules Facebook made does not make it unethical.


>In what way is it unethical? No harm is done to anyone.

I'm not sure what to say to you if you can't see it. No harm is done? Maybe you only imagine physical or financial harm. These aren't meaningless numbers in a vacuum; likes/followers are taken by people to indicate social importance or approval, they show how other people in the world feel and think about things (comments, pages, issues, opinions). A large part of the public sphere is in this form nowadays. Faking likes/followers/comments is deliberately misrepresenting yourself, fraudulent deception - lying.

Seems to me your argument would apply to faking or buying votes in an election, or TV ratings, or referendums, or... faking corporate profits, buying fake stories in news media, or anything in the world where the numbers are diddled. "In what way is it unethical? No harm is done to anyone. There's this system with side effects. Finding a way around the rules does not make it unethical."

Imagine if voting and ranking on HN stories and comments was entirely determined by bought upvotes, downvotes, flagging, abuse reports etc. The site would be destroyed. And if the same thing happened on every forum of any kind online? "No harm done?"


Likes are supposed/believed/intended to be a form of social proof by most people. Buying likes destroys that. So harm is done.


> No harm is done to anyone.

I disagree. I think harm is done to everyone who uses the site.


Ads provide value because they pay for it all. Isn't that valuable?


I think your argument is valid, especially when considering an ideal scenario for these platforms.

Unfortunately, I haven't considered likes or even "star ratings" to be "real sentiment" in decades. Even without paid clicks, they rarely reflect quality and are often the result of many other factors. There are plenty of other dishonest methods to boost your like count without paying for clicks.


> I haven't considered likes or even "star ratings" to be "real sentiment" in decades.

Yes, these things are a prime example of "the tragedy of the commons". But your question was about ethics, and I don't think this affects the ethics of the decision. That the commons may be trashed doesn't make it ethical to contribute to the trashing yourself.


Whether it's right or not, likes are perceived as social validation.

When a company buys likes, not only is this not disclosed, it also gives a false sense of approval by a community - which is both deceptive to customers and unfair competition for businesses providing a similar product / service.


People expect that companies pay for their ads to be presented. A big reason for buying likes seems to be deceiving people into believing the posts aren't ads. I'd say it's similar to "sponsored content" that doesn't disclose the fact that it's sponsored.


I think, at this point it is safe to assume that on any platform where individual content can be promoted via a crowdsourced upvote, including star ratings and "likes", much of what rises to the top is manipulated in some way, often sponsored (paid for) by a commercial interest. This includes FB posts and Amazon reviews, but also comments on reddit and even HN comments and articles. I would love to see internal reports from these companies about what % of each platform's accounts and/or activity they estimate to be automated, botted, farmed, fraudulent, etc. but for obvious reasons nobody seems to be willing to regularly publish this information!


Not percentages, but actual numbers of reports and suspicious account challenges: https://transparency.twitter.com/en/platform-manipulation.ht...


I stand corrected. Great job, Twitter!


People buy likes because of price and reach. If the price and reach of ads were similar, this wouldn't exist.

Why anyone thought like would be a good gauge without dislike is beyond me. Dislike would provide a way to display displeasure and drop the value of the original like.


> Dislike would provide a way to display displeasure and drop the value of the original like.

And would double the business of the like factories, because they would expand into doing paid dislikes as well.


The difference is the disclosure. Only with one of them do you know that money is being used to affect your opinion on something.


Excellent point. Though it can be easy to miss the tiny text that says "advertisement".

I recall buying Facebook ads for places in the early days of their system that benefited from this by saying which friends also liked the content.


> I would consider paid ads, paid likes, paid comments, etc., to be unethical if the content they're supporting is false.

That's not a very useful statement. What if it's in support of an opinion? Advertisements are rarely just a list of facts, and neither are likes.

> TLDR: What's the difference between paying Facebook/Google/etc for an ad vs paying someone else to like your post? The 2nd seems like a better solution for a majority of people.

The difference is pretty clear, one is purchasing a real ad on a real platform. The other is paying for fake interactions to project a false response to something.


Don't know why you were downvoted but...

> What if it's in support of an opinion?

I think opinions are great if they state their claims and reasons in a respectful manner and don't throw out very false narratives on purpose.

> ...real ad on a real platform. The other is paying for fake interactions to project a false response..

You're right. I'm jaded and feel I'm over-advertised to on search engines, blogs & social media platforms. I feel ads are shoved down your throat & ruin the quality of the platform. A lot of ads or "sponsored posts" try their hardest to hide the fact that they're ads, and they often blur the lines of accurate claims. I don't even need to go down the route of "influencers" or "affiliate links" that hide it even more. At one time in my life, I felt there was a clear distinction between ads & authentic word-of-mouth promotion of an item. That was probably way back when TV blurred out logos. I haven't felt that way in a long time. I believe "real ad" is such a gray area these days that I have a hard time seeing anything wrong with regular people making money off this advertisement hysteria instead of companies. Especially since most of the companies in question here are doing highly unethical things with their users' data.

You can take the above paragraph & state 2 wrongs don't make a right or point out several fallacies made. At the end I would argue that the paid likes are only a small part of a larger problem of advertisements. Yes I would like to see paid likes become irrelevant but I think there are bigger issues.


An ad is designed to force the user to hear a certain message, hopefully invoking associations with other real things.

A real ad is the same as a fake ad. They are all just ads. It doesn't matter who you pay. They do the same thing.


> A real ad is the same as a fake ad.

We're talking about fake likes. An ad without purchased likes is clearly not the same as an ad with purchased likes. One has a genuine representation of response and the other does not. Who you pay does matter as one is a legitimate ad and the other is at worst posing as a legitimate ad, but possibly also a real ad that has been given inflated numbers via purchased likes.

One is trying to operate within the system, the other is trying to game it. Not the same thing and who you pay obviously does matter. Just because they may have a similar purpose doesn't make them the same...

That is actually one of the interesting things about social media as a political platform. This wasn't possible at the same level with television or radio advertising, because online, "posts" and "likes" often come from essentially anonymous strangers. Troll farm levels being what they currently are, I think people would do well to take all the numbers and advertisements they see online with a grain of salt until these issues are more settled.


The 10 billion active accounts figure cannot be right. As someone else mentioned, Facebook generates a new ID for each application user.


We don't use application-level IDs for this, but global IDs. These are unique per user (at least we have no evidence otherwise).



