A couple of months later I'm still getting the odd 'like' from these types of accounts despite not running the campaign.
I didn't seek out any third party promotion, just used FB's own promotion tools.
(I also find it objectionable that one can run ads to promote commercial pages, but there's no way to do so for "groups"; if I want to spend some $$ to expand actual community engagement, for some non-commercial purpose, I can't. I can only promote a business.)
I keep my account disabled unless I need it to run ads. I feel dirty having reactivated. I'm going to disable it again.
Those accounts are probably trying to look legitimate so that when they like their own pages, the signals are trusted.
That said, Facebook should have refunded you for those obviously illegitimate clicks. Did you report it?
These likes also pollute your account and skew your demographics, since they are not sincere. That ultimately makes it difficult to understand your buyer.
“used to” is doing quite a lot of work there.
That is intentional. Facebook had Pages, where users could subscribe (like) to become your audience. Then Pages became too popular, and Facebook realized it was losing potential advertising revenue, so it cut down that channel: now you have to pay to reach the subscriber base that you created yourself. They don't want to end up in a similar situation with groups, so they are limiting group engagement. FB wants moar money
In short, the fake accounts had several factors in common:
- No or almost no public posts.
- The only public posts being an update of the cover photo and profile photo in 2017.
- The profile photos tended not to be headshots: a photo of a cup of coffee, say, or a landscape.
- Between 20 and 100 friends.
- Friends tended to be in places like India, the Middle East, or Southeast Asia.
- Their gender didn’t match the name or photo about half the time: a “Martha” referred to as “he” by Facebook, for example.
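Heuristics like the ones in that list are easy to turn into a crude score. A minimal sketch in Python (the field names, weights, and thresholds are my own illustrative assumptions, not the researchers' actual method):

```python
def fake_account_score(account):
    """Crude heuristic: higher score = more suspicious.

    `account` is assumed to be a dict with the fields below;
    the weights are illustrative, not tuned on real data.
    """
    score = 0
    if account.get("public_posts", 0) <= 2:
        score += 1  # little or no public activity
    if not account.get("profile_photo_is_headshot", True):
        score += 1  # coffee cup / landscape instead of a face
    if 20 <= account.get("friend_count", 0) <= 100:
        score += 1  # small, bought-looking friend list
    if account.get("gender_mismatch", False):
        score += 1  # a "Martha" referred to as "he"
    return score

suspicious = fake_account_score({
    "public_posts": 1,
    "profile_photo_is_headshot": False,
    "friend_count": 43,
    "gender_mismatch": True,
})
print(suspicious)  # 4
```

Any single factor is weak on its own, but the combination is what made the accounts stand out.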
For nation-state actors, it is a modern form of agit-prop that is probably superior in both its effectiveness and its ability for fine-grained control. What's interesting is how this approach can utilize the idea of proxy control on so many fronts. In the US, the citizens themselves are being gamed as proxies for foreign voting power. Malicious actors hide behind proxy servers as well, which adds another avenue of difficulty in dealing with the problem. And then there are the botnet overlords.
An instance of a normal person posing with their own identity once is its own bit of fun, but now that commercial and then national actors are involved it's a whole different ball park. If your fridge supports the other candidate and products from its superior overlord should this be the advent of bot rights or total bot enslavement? I partly jest.
I did a quick Google Translate and read the Vice Germany article they referenced and it's an entertaining read. I recommend it. If you want to look it up, the title is "Die Applausfabrik: So funktioniert die Industrie hinter gekauften Likes und Followern".
My wife finally gave in and signed up for Facebook in order to take part in a webinar she had already paid for that was hosted on the platform, only to have her new account blocked for using a different device from the one she signed up with.
They wanted her mobile phone number, photo and her ID (think passport).
She didn't want Facebook spam on her phone, since she already gets bombarded by WhatsApp notifications, and sending her photo again didn't help. She got banned for good. Her IP was also blocked, so she couldn't sign up again.
I don't use fbook enough for it to be a big issue. I had assumed it was always a unique string so fbook could track ads, but now I think it's not different each time; it's another identifier that can be used by god knows how many groups and people to track me, while I'm employing several other systems to stop tracking. This is crazy. Is there a way to strip this automatically?
Analytics companies (I think Google and a few others) can pay Facebook to get data on your behalf like what page it came from and the demographics of the visitor.
Google AdWords does this as well on paid clicks with GCLID
I can't believe nothing between Firefox and uBlock Origin is auto-stripping this URL addition. I must find a fix and tell others.
But yes, an extension to strip these (along with `gclid` from Google, `reddit_cid` from Reddit, and so on) would be very welcome.
fingers crossed this is good and it gets better.
There is a Firefox extension, ClearURLs, which strips some tracking parameters from the URL.
From what I remember, if you click a link on FB, it first redirects to an internal tracking URL and then to your final link. This is not limited to FB.
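What such an extension does can be sketched in a few lines; here in Python for illustration (the blocklist is my own short example; real extensions like ClearURLs ship much longer rule sets):

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Illustrative blocklist; real extensions maintain far longer lists.
TRACKING_PARAMS = {"fbclid", "gclid", "reddit_cid",
                   "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking(url: str) -> str:
    """Return `url` with known tracking query parameters removed."""
    parts = urlparse(url)
    clean_query = [(k, v) for k, v in parse_qsl(parts.query)
                   if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(clean_query)))

print(strip_tracking("https://example.com/page?id=7&fbclid=AbCdEf123"))
# https://example.com/page?id=7
```

Note this only handles query parameters; the intermediate redirect through Facebook's tracking URL happens server-side and can't be stripped this way.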
There are more small businesses than large ones. But large business practices are, for the most part, far more meaningful. Some metric of net financial or revenue impacts might be a better (though much more difficult to acquire) metric.
It's kind of the "space aliens land on Earth: who do they encounter?" problem. Most people are located in cities, but cities occupy a small fraction (under 1%) of Earth's surface.
By statistical likelihood, your alien is most likely to encounter ... a fish (or plankton). If they find land, they're most likely to find a rural area, and hence a rural dweller. Not because that's a statistically accurate sampling of the human population, but because it's a statistically accurate sampling of the areal distribution of the human population.
So: if you look at like-buying campaigns, you'll find, because there are far more small businesses, many more small businesses participating.
If you looked at other metrics -- say, bought likes distributed among all commercial accounts -- you'd probably find the weighting swinging far more toward at least moderate-to-large sized businesses.
Though: for a small business, some early-stage "growth hacking" might be a modest budget line, a plausible-sounding practice (not necessarily true, but appearing to be true), and something a likewise ethically challenged black-hat SEO marketer could sell.
Then there's the possibility of joe-job likes -- buying fake likes for a third party in order to present them as fraudulent. Possibly not widespread, but possible, and given the difficulties in attribution, something that's hard to demonstrate one way or the other.
- Likely have more effective tools.
- Have other ways of promoting online content without going through fake likes. (Paid "influencers" being one widely-practiced option.)
- Might be aware of the potential downsides and hence avoid this.
- Are a much smaller fraction of "like" campaigns, and have a higher "organic" (or at least organic-appearing) rate of user engagements.
There are numerous reports of ... large influencers in the political space ... paying $10k - $100k amounts monthly for social media promotion.
"Facebook issues app-scoped user IDs for people who first log into an instance of an app, and page-scoped user IDs for people who first use a Messenger bot. By definition, this means the ID for the same person may be different between these apps and bots."
So the 10 billion number the researchers quoted does not necessarily represent individual users.
Also, if there are any FB employees on here, what do they think of their employer still enabling massive disinformation, astroturfing etc?
To be clear, I'm not blaming individual employees. Just honestly curious how they deal with these issues on their personal moral compass.
Not very useful.
A more cogent question would be "For anyone in Facebook working on this problem, do you feel like sufficient organizational resources are being spent combating it? Is there low hanging fruit, or are we into the whack-a-mole stage of any popular platform with a financial incentive to cheat?"
Dedicated in the sense of "let's include a script specifically designed to detect Selenium."
For example, it's unlikely that a non-bot visitor is using an IP from a range assigned to Amazon instances. I'm not sure how often this is used, but I assume most bot-detection systems would use that information at least as one of their signals.
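That kind of check is cheap to implement. A sketch in Python with a couple of hardcoded CIDR blocks for illustration (Amazon publishes the real, frequently changing list as a JSON feed at https://ip-ranges.amazonaws.com/ip-ranges.json):

```python
import ipaddress

# Illustrative subset only; the authoritative, much longer list comes
# from Amazon's published ip-ranges.json feed and changes frequently.
AWS_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "3.0.0.0/8",
    "52.0.0.0/11",
)]

def looks_like_datacenter(ip: str) -> bool:
    """True if `ip` falls inside any of the known datacenter ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in AWS_RANGES)

print(looks_like_datacenter("52.4.1.10"))  # True  (inside 52.0.0.0/11)
print(looks_like_datacenter("8.8.8.8"))    # False
```

In practice this would be one weak signal among many, since plenty of legitimate traffic (VPNs, corporate proxies) also originates from cloud IPs.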
From my very limited experience there are two main categories of websites:
1. Using curl and/or [library of your choice] in a [scripting language of choice] is enough
2. Forget about not being detected without a full-blown browser, unless you want to spend endless hours trying to emulate whatever needs to be emulated and are willing to burn some accounts and IPs in the process.
To answer the last question, it depends what you mean by a scripting language. Let's assume it's Python; then you have two choices: imitate a browser, or risk being detected as a bot quite fast. Writing a browser extension is quite easy, and with it you can imitate a real user. The only remaining problem is imitating a real human being in a way that doesn't match Instagram's bot-detection algorithm.
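For category 1, the whole trick is often just sending browser-like headers from a plain HTTP client. A sketch using Python's standard library (the header values and URL are illustrative, and this is only the naive approach described above, not anything Instagram-specific):

```python
import urllib.request

# Many "category 1" sites only reject obviously non-browser User-Agents,
# so spoofing one is often all that's needed. Header values are examples.
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) "
                  "Gecko/20100101 Firefox/115.0",
    "Accept-Language": "en-US,en;q=0.9",
}

def build_request(url: str) -> urllib.request.Request:
    """Build a request that presents itself as an ordinary browser."""
    return urllib.request.Request(url, headers=BROWSER_HEADERS)

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(build_request(url)) as resp:
        return resp.read()
```

Category 2 sites check far more than headers (JavaScript execution, canvas fingerprints, timing), which is why nothing short of a real browser works there.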
I think what we're seeing instead is the same as counterfeits or promoted search results at Amazon: solving it could become an existential threat to either popularity or profit, so they're just "managing" it as BAU.
In either case, Facebook has a financial incentive to say nothing, instead of publicizing lackluster results.
Or, my guess, they'd rather keep any news about the existence of this off the front pages, as broader knowledge of its existence by their (not as technically informed) advertising customers only negatively impacts Facebook.
Kind of how a beachfront town wouldn't want to advertise the fact that Great White sharks exist... at all.
The problem here is that adversaries are adjusting to whatever measures are put in place. In that respect it might be more like winning at Chess or Go. Which computers can do, but it’s decidedly non-trivial.
And here’s the kicker: If we postulate some hand-wavium big-data/machine-learning/AI that can detect bots and adjust when the bots evolve, why can’t we also postulate some hand-wavium big-data/machine-learning/AI that can run bots and evade the bot detection?
Then, if they also happen to benefit from the existence of those fake accounts, it becomes even harder to justify costly filters, and you end up with filtering that's just sufficient to not look obviously weak.
Facebook is entirely out of control as a social force, and PR agencies are the only thing holding back public perception.
There's probably not much incentive to investigate them too deeply.
> what do they think of their employer still enabling massive disinformation, astroturfing etc?
This isn't a behavior management wants, and there's a lot of internal effort to reduce it.
Something the public doesn't get but engineers do is it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.
> Just honestly curious how they deal with these issues on their personal moral compass.
I have no qualms here. Roads are used during thefts, but no one asks construction workers how they sleep at night knowing the roads they build might facilitate crimes. Spammers gonna spam, but that doesn't mean we can't have any online platforms.
> Something the public doesn't get but engineers do is it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.
I'm curious what your thoughts are on other companies (Twitter, Spotify, etc) disabling political ads for this very reason. Facebook has not. It's a given that you can't manually review every political ad -- so why allow them at all, if their disinformation has negative real-world implications? I don't buy Zuck's argument about free speech.
Secondly, what would Facebook do if a grassroots movement started to put pressure on advertisers until Facebook cancels political ads? What if this movement recruited real users to click on lots of ads in their own feeds, with the goal of disrupting advertiser ROI with difficult to detect garbage clicks? Would advertisers get upset? Would Facebook have any recourse?
Thanks for indulging me on this. The idea came to me in the shower and I'm not sure if it's brilliant or stupid.
- You overestimate the zeal with which the grass-roots folk will engage in that behavior, especially if they know it is undercutting FB's model. Ad tech is also evolving every day, meaning reCAPTCHA-like functionality for distinguishing real engagement from these clicks isn't very far away. You should also look up the articles comparing the number of Twitter/Reddit contributors (including likes) to the US population, for example (VERY minimal).
- In theory you can politicize anything: do you think it is possible to talk about environmental/social/civic controls today without a political bent? Meaning the political-ad blocking is probably not as comprehensive as the Twitter block portrays.
- For all the negativity politics gets, this is something that impacts us very much in our day to day life. Shutting it out completely is probably more of a problem than working with the system. I don't think we can comprehend the macro impact in this subject: As it stands today, some companies saying no just means more money for other corporations.
As you can see, I have a very practical (read: cynical) view of how much mental bandwidth people have for the small slights in life (which may have a giant impact later) compared to their day-to-day requirements. As lacking as I am in solutions, I don't believe it's either the sledgehammer (stop ads) or the crowd-sourced approach (let's all click on all ads).
That's gonna be a no for me dawg. If it turns out to be content I want to see badly at all, I'll search for statistically improbable strings in the part that I can see so I can find an alternate source (/r/savedyoucompletingacaptcha?).
Why don’t you buy it? If you ban “political” ads you are going to start drawing a lot of arbitrary lines. Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?
All of these are one step away from direct advertisements for candidates and are very political topics for many people. Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.
HOWEVER, this appears to me to be a regulation failure. We know Facebook doesn't want the ban, so their motivation to comply is limited by the clarity and sharpness of the legislation's teeth. And they aren't very sharp.
> Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.
I don't agree with painting Twitter as just wanting to "claim" they have done so. They appear to be making a true good-faith effort. Check out their policy:
> Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?
For each of these examples, there is a clear way to apply their policy based on the content of the message. Is it perfect? Probably not. Will it totally kneecap political ads (by 80%+)? I believe it will.
> All of these are one step away from direct advertisements for candidates and are very political topics for many people.
I would argue that your point is too academic. If political ads are reduced by 80%, even though there are still political-adjacent ads (that aren't funded by a political group and don't reference a candidate or initiative), then the policy would be a wild success.
I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places). I have been thinking on and off about the role of government in the current world and sadly I can't see a place where it can be as tiny as people want.
People want an outcome and don't particularly care where it comes from, public vs. private. If one fails, they'll push on the other to find a leverage point. Example: A pundit can say some truly awful and damaging shit and it's legal under the law. But if you target their advertisers, that's the leverage point that matters.
>I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places).
A company should be free to push its morals on you, given that the market allows it, and their morals aren't illegal. Right?
Personally, in the current environment of profit-at-all-costs capitalism, the major flaws seem to be incentivizing short-term-thinking and negative externalities. When a company flexes morals that appear at odds with short term profit (e.g. Twitter), I tend to assume they are actually acting out of self-interest but are better able to grasp the long vs. short-term incentives, for whatever reason.
How can you feel proud about such a product?
PS: Yesterday I got an email from Facebook saying that I had 4 messages waiting for me. I opened the app, it showed a little balloon with "4" in it. I clicked it, and there were no messages ... sigh.
I also get this, but now it's always stuck at some arbitrarily high number. In the past few years I've reduced my FB usage to a couple of minutes per month, down from a few minutes per day. I'm sure it's underhanded tactics like these (tricking a billion dormant users) that uphold their claim of 2 billion "active" users or whatever bullshit number it is.
And you can visit the app/web when you feel like it, not when they want you to.
First of all, it's not as if FB employees are pushing people into gas chambers at Dachau; FB usage is not obligatory, which is what I keep telling everyone who complains about FB censorship or privacy abuse. I don't have an FB account for exactly those reasons.
By the same logic we might ask Coca-Cola employees (say, delivery truck drivers) to step down, because drinking Coke is bad for one's health. Or HSBC employees, because HSBC was laundering narco cartels' money. Or John Deere employees, because the company forbids farmers from modifying tractor software.
All of those practices are immoral and bad, but why chase the weakest, whose income, ability to pay rent, etc. depends on the employer? Why not target those, who are really responsible for that what is happening and who make huge money thanks to that?
I would say it makes much more sense to vote with our money, avoid services and products from companies we consider immoral.
Publicly discourage people from using such services and products, publicly stand against the CEOs and shareholders of those companies, and spread knowledge of their personal responsibility for this kind of behavior: this is not that difficult and can actually make a difference. Imagine the PR outcome of a conference that invites Mark Zuckerberg but that no one else wants to attend. Or Mark shows up at, say, TED, and everyone leaves the room during his talk. In the media-driven world, this would surely go "viral", and even Mark with all his money couldn't ignore it easily.
This is just the obligatory reminder that not having a FB account does not stop FB from spying on you.
Wow, are you saying that people should take responsibility for their actions and let others choose freely what to do?
Because if you make enough noise, and affect the share price enough, stuff changes.
Also, a lot of developers seem to think that their actions have no consequences. This includes people at Facebook.
When it is demonstrated, in a very public way, that something you or your team designed is a massive pile of shit, it leaves a mark. Hopefully for the better.
Having been through something similar, it certainly changed the way I make prototypes. Security and anonymity come first now, not last.
> why chase the weakest, whose income, ability to pay rent, etc. depends on the employer?
I assume you are not talking about FB employees which is a shame because it would be nice to see the same employees take some responsibility for the code they produce.
A programmer doesn't get to determine what the company is building unless the company is very small, and even then it's rare.
Target product managers.
But a programmer does get to determine whether or not they'll continue to work on what the company is building. Just saying...
I laughed at this way louder than I'm willing to admit
Don’t bother the engineers...no, like this post so people will walk out of Zuck’s TED talk. Right out of the Onion!
The TL;DW version is that a young, idealistic tech guy learns the hard way that not everyone is as idealistic as he is about technology when a greybeard informs him that the project he just finished is a weapon.
Silicon Valley needs more Laszlo Holyfelds.
Everyone knows that most FB traffic is garbage, but it's easy to sell garbage. And in a few limited circumstances the micro-targeting works.
Even if it turned out that most FB/IG traffic was fake it wouldn't change anything until someone comes up with something better.
You have to remember how the ad ecosystem works. Until FB came along there was nothing but display and search and the gap was huge. Facebook comfortably occupies the entire middle ground between display and search these days. Even if half the traffic is totally bogus it's still going to be better than display.
Every big company pretends to work on these problems for image reasons, but they don't really care. It's always a "special team dedicated to" whatever the flavor is that reports to a kangaroo court.
If companies actually cared about these things they could use their immense engineering talent to fix them, or just prevent it in the first place. But BOTH of these mean less revenue at the end of the day. The only way companies would actually prioritize these things is if it had a positive impact on revenue somehow, but it doesn't. Never will.
I'll never understand where the idea came from that Facebook had some kind of moral compass as a company that was anything other than making money.
i.e. what would be sufficient in your view?
Whereas with current social media, likes are given to users without any care beyond "hey, you seem to be a valid user, let's allow you to generate large amounts of likes".
Or banning political advertising completely (see the part in the talk on political parties in Germany buying likes).
Or something that completely destroys meddling and fraud and is observable by the public. The "we are working on it behind the scenes"-response is just not sufficient in my view.
I don't see how you could be missing that everything about Facebook incentivizes nonsense such as like-factories.
"It is difficult to get a man to understand something when his salary depends on his not understanding it"
Personal morals are just that: personal. Two people can have 100% conviction in their moral position and be totally opposed, it happens all the time in social issues.
Moral beliefs also happen to line up with social circles and group interests suspiciously well too (hence the Sinclair quote).
Our economy is founded upon mutual relationships of grift and greed; even if Zuckerberg were to disagree with his own platform, he would not be able to change it.
to be clear, I'm not blaming you as an individual, just honestly curious about your moral compass.
>"you instantly think of mobile phones strung together in multiple lines in front of an Asian woman or man. What if we tell you that this is not necessarily the whole truth? That you'd better imagine an ordinary guy sitting at home at his computer?"
The many phones is what makes the person "not ordinary."
Who suffers the most here? It's obviously not the social media platforms: they adapt their code and move on. It's not the users, at least in the long run; the problem has been identified and resolved. It's not the people selling likes: they get punished, bail out on some fake accounts, and get new ones. Even if you could somehow ban individuals from using the platforms, there are a million more people willing to create fake likes where those came from.
It's the poor, as always. The people who know the system is rigged, know the system has to be gamed in order to make money, and are desperately looking for a way to be competitive and get ahead. So they buy some fake likes, then get destroyed by the social media companies. Perhaps they've already invested a lot of time in their presence before they got desperate. In either case, they're not running fake IDs; it's just them. And they don't know enough tech to move on. They get slammed for life for making one poor moral choice. Why? Because they're easy to find and easy to punish.
This observation has strong parallels with how the poorest are the lowest rung of the drug trade (on the street) and bear its highest price (death or imprisonment), while the higher-level traffickers usually get away with impunity. But the populace feels good because of the disheveled mugshot of the street dealer they saw on the evening news.
Continuing your low-level drug dealer example, if that guy that runs the car wash down the street gets banned for using fake likes, why shouldn't he? It was a bad thing to do! We can see him, we know him, he did a bad thing and deserves his punishment. Not only does he deserve his punishment, we should shame him if we can in order to discourage others from contemplating doing the same thing. Maybe if we increase punishment we can see less of that sort of thing around here.
Ever hear how a lot of social/internet companies got started? Sock puppets, fake likes, generated social proof, paying off influencers, and so forth. All the things the little, common folk aren't supposed to do today. It seems there is a moral code for the common folk and a separate one for our betters. All animals are equal ...
This story, where we find all these millions of fake people and likes, is a "drugs on the table" story: big flash, looks like progress is being made, we have heroes and villains, feats of strength and daring, and people can feel like the net is somehow safer. Then things can go on as usual.
I've yet to hear of a single executive of those companies arrested and put on trial for their employment practices. It's almost as if changing the employment practices wasn't the objective.
Just Google "buy reddit upvotes" and the internet is full of folks who will sell you a million Likes for cheap - in my mind, it's just a modern digital con (confidence game) - some of y'all may remember having created self-signed OpenSSL certs with the Snake Oil, Ltd company named in it. :) Even Wikipedia has a page for "there's a sucker born every minute" it's so institutionalized.
This is why, in general, ideas that involve lots of actors changing their behavior contrary to incentives, instead of changing the incentives, are fundamentally flawed.
Also, I feel like every system can be abused if the incentivised metric is defined poorly.
Example: almost every advertising KPI.
1. A Brand wants to sell a product (incentive: sales).
2. Agency sales people want to reach a particular KPI—budget in this case, the higher, the better.
3. An Ad Tech company will serve ads based on a KPI agreed with the Agency (e.g. CPM, impressions).
It's very easy to satisfy 2 and 3 without bringing any additional value to 1., for example by applying less strict viewability metrics (is 50% in view "visible" or 1%? should we pause the video ad below the fold? what is an impression, exactly?).
The original intent (sell more units of product x) is gone. This is due to systematic issues with the industry, not a single specific company. But mainly, this is due to the fact that the established incentives represent only a chunk of the reality and can be followed without focusing on the actual problem. The overlap between actors' goals is poorly defined, small and often doesn't exist.
In a nutshell: if we define a metric serving as a shortcut for our goal, people will find a way of improving the metric, but rarely reaching the goal. A broadly defined incentive is better than punishment.
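The viewability point above is easy to make concrete: the same traffic yields very different "viewable impression" counts depending on the threshold the Ad Tech company chooses (the numbers are made up):

```python
# Each entry: fraction of the ad's pixels that were in the viewport.
impressions = [0.00, 0.02, 0.30, 0.55, 0.80, 1.00]

def viewable(impressions, threshold):
    """Count impressions whose in-view fraction meets the threshold."""
    return sum(1 for frac in impressions if frac >= threshold)

print(viewable(impressions, 0.50))  # 3 -> strict definition
print(viewable(impressions, 0.01))  # 5 -> lax definition, same traffic
```

Both numbers get reported as "viewable impressions" in a KPI dashboard; only the definition changed, not the value delivered to the brand.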
Perhaps you're thinking of Goodhart's law? https://en.wikipedia.org/wiki/Goodhart%27s_law
While there is risk that the wrong short term metric is chosen (that is not indicative of the real goal the advertiser is trying to achieve), it is not an invalid approach.
It is untrue that a short-term goal will _rarely_ achieve the long-term value; otherwise, the advertiser would throw out that short-term goal as flawed.
1) Visited your store before they saw your ad
2) Only visited your store after seeing your ad
You can do this online or for physical store locations. There are lots of players there because big marketing teams are fairly sophisticated now: they want cross-channel identity (across mobile, web, TV), attribution, and measurement.
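A toy sketch of that before/after split (the data shape and timestamps are invented for illustration; real attribution systems join identity across many channels):

```python
def split_visitors(visits, ad_exposures):
    """Split visitors into pre- and post-exposure groups.

    `visits` and `ad_exposures` map user_id -> timestamp of the
    first store visit / first ad impression respectively.
    """
    pre, post = set(), set()
    for user, t_visit in visits.items():
        t_ad = ad_exposures.get(user)
        if t_ad is None or t_visit < t_ad:
            pre.add(user)   # visited without (or before) seeing the ad
        else:
            post.add(user)  # only visited after seeing the ad
    return pre, post

pre, post = split_visitors(
    visits={"alice": 10, "bob": 50, "carol": 30},
    ad_exposures={"bob": 20, "carol": 40},
)
print(sorted(pre), sorted(post))  # ['alice', 'carol'] ['bob']
```

The hard part isn't this comparison; it's reliably knowing that the ad viewer and the store visitor are the same person, which is exactly the cross-channel identity problem mentioned above.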
Indeed, this is what drives the Tragedy of the Commons - everyone acting according to their individual incentives, maximizing their individual benefit, but while exploiting an unregulated limited common resource, and externalizing the cost of doing so upon the commons.
There are two solution paths, which are not mutually exclusive: put a higher price on the resource use or externality, or find a more efficient way to meet the needs.
In many cases, the current incentives and resulting approaches keep us stuck in local maxima. An example is commuting in single-occupancy personal vehicles in urban areas in the US. When you consider the layout of our cities and the limited availability of mass transit, it very often makes more individual sense to drive, given factors like transporting children to school and activities, grocery shopping, appointments, etc.
Sometimes the costs fall on the individual rather than the commons, or their impact on the commons is delayed. In the previous example, hours spent sitting in a vehicle instead of walking deprive individuals of opportunities to burn calories and control their weight, resulting in a variety of health issues.
The problem is that there are a powerful set of interests who are interested in maintaining the current approaches and incentives at all cost. Again, in the transportation example above, these would include the fossil fuel extraction and highway expansion industries.
I think the second one isn't really a solution. If you find a more efficient way to meet the needs, the needs will expand to compensate. Even if everyone exercises restraint, an entrepreneur will show up and start selling the remaining resources to people who don't yet have access to them.
As far as I know, the only way to solve tragedies of the commons is for everyone to agree on a set of rules, and punishments that disincentivise breaking these rules. Which, in real life, is achievable only through centralizing the power that sets the rules and executes punishment - ongoing coordination is an unstable state.
The radical monarchist-libertarian answer is to eliminate the commons, such that everything has an explicit pricing mechanism. No less problematic, or unstable.
I think in the end, there is something about humanity that does not allow for stable equilibrium over extended periods (100s of years). Not sure what.
I concur, though. The Occupy Wall Street meetings I had the luck of attending at Zuccotti Park reflected as much, and nearly all of the other anarcho-syndicalist meetings I've seen do work at the snail's pace of consensus.
The adherents would argue that this speed is a feature, not a bug. I would generally agree that slowing down decision-making is a good thing; I'm just not sure it's a survivable strategy for a group.
Those are only side effects. The main set of interests keeping things like this is the American consumers. The majority like way more space in their homes and property than their European counterparts.
The real issue is that public transport doesn’t actually work when the middle class is living in 3-5 bedroom homes with 1/4 acre lots. There is no country with good public transit where a middle class income can easily support that size home. The hard truth is that it’s going to be a trade-off.
I concur to a degree, but American consumers have also been so captive to the narrative of large houses that they are mostly unaware of other ways of living. Their only image of denser living is slums and tenements, or Park Avenue penthouses, not middle class townhouses.
However, Americans have increasingly travelled to places that have higher density and transit and seen that those can offer a high quality of life.
Many then seek a different balance of space to transit by moving to areas where they can make a different trade-off than the standard American model, hence the recent urbanization of suburbs and growth of previously sparse cities like Austin and Salt Lake City.
Maybe that was true before the internet became widespread, but it’s not the case anymore. The American Midwest is aware of the walkable streets of Rome, Paris, and Tokyo. It’s still a big sacrifice to those used to yards, sparse commerce, and the scheduling freedom that comes with vehicles.
> hence the recent urbanization of suburbs and growth of previously sparse cities like Austin and Salt Lake City.
Yep, and I personally wish there were walkable options like in Europe. As someone who has lived in Austin for the past 7 years, getting around without a car is a joke, and you're going to take a big quality of life hit if you limit yourself to one walkable area. Its density certainly is increasing (see east side), but it's just turning into the Bay Area with worse highways.
Oh btw you can buy hn upvotes too : https://upvotes.club/buy/hacker-news-upvote/
HN credibility just dropped 80% to me for the moment ;-)
No but seriously, this is pretty bad. Many people are kind of addicted to HN and to many this is a stream of inspiration. I've seen people often share posts in internal Workplace chats where most people are not frequent HN readers.
Also having learned today that one can buy Reddit upvotes, I think this whole count thing is now officially useless.
Don't assume that those upvotes are counted. It's a cat and mouse game. We eat a lot of mice.
If you or anyone suspects that a post has made HN's front page because of vote manipulation, you should let us know at firstname.lastname@example.org so we can investigate. We have years of experience with this data.
I'd never say it never works, because we don't know what we don't know. But I can tell you that many people who've used that spam service have gotten their posts buried, their accounts banned, and their sites blacklisted.
I checked out that page, and at least with this one, it imo doesn't devalue HN credibility for me. The max number of upvotes you can buy for HN on this website is 10 per post, which hardly makes or breaks an HN post.
Those 10 upvotes would only be useful as an initial kick-off for a post, but if the post itself is trash and legitimate users don't upvote it enough, it will all go in vain. So it would be kinda impossible to use this service to boost absolutely trash articles that people don't care about.
Is this actually true? I don't think so.
It also seems funny they offer to sell a factor of ten more upvotes to articles already on the front page. Maybe they'll just sit back and watch it happen regardless?
edit: Anyway, the mods have access to the database to see who has been upvoting articles onto the front page. If a group of ('high value') accounts upvotes in a similar way on a regular basis, then they could be detected.
Hell, maybe dang runs upvotes.club himself, so he can take their customer's money then ban them immediately: that would be most efficient and least error prone. As far as I know, buboard might be one of his sock puppets, posting the URL as a honey pot to catch dishonest HN users. ;)
I think one method is to watch for upvotes that come when someone goes to a link directly instead of organically finding it through /new
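The detection ideas floated in this thread (mods spotting groups of accounts that upvote in suspiciously similar patterns) can be illustrated with a toy script. Everything here is hypothetical: the account names, the vote data, and the 0.5 threshold are invented, and any real moderation system is surely far more sophisticated.

```python
from itertools import combinations

# Hypothetical vote logs: account -> set of story IDs it upvoted.
votes = {
    "alice":  {101, 102, 103, 104},
    "bob":    {105, 201, 202},
    "ring_a": {301, 302, 303, 304, 305},
    "ring_b": {301, 302, 303, 304, 306},
    "ring_c": {301, 302, 303, 305, 306},
}

def jaccard(a, b):
    """Overlap between two vote sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

SUSPICIOUS = 0.5  # made-up threshold

# Flag pairs of accounts whose voting histories overlap heavily.
flagged = [
    (u, v)
    for (u, a), (v, b) in combinations(votes.items(), 2)
    if jaccard(a, b) >= SUSPICIOUS
]
print(flagged)
```

With this toy data, only the three "ring" accounts get paired with each other; organic voters like alice and bob share too few stories to trip the threshold.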
Looking to Quell Sexual Urges? Consider the Graham Cracker.
One of America's first diet hawks, Sylvester Graham was certain that sexual desire was ruining society. His solution: whole wheat. How a zealot's legacy lives in our foods today.
Social engineering and deceptive marketing trump Razzle-Dazzle Globetrotter Calculus every day.
What you are basically saying is that, because people lack critical thinking skills, you distrust the other people who happen (or claim) to actually know about the topic. That seems awfully reductive.
"Incredibly easy", huh?
Incredibly: adverb. used to introduce a statement that is hard to believe; strangely. "incredibly, he was still alive"
See also: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
What do you find so "incredibly easy to understand" about private digital assets that people who "lack critical thinking skills" don't get, and why are you more trustworthy than they are? "Incredible" literally means "not credible". Bragging about how "incredibly" clever you are and how dumb other people are doesn't make you sound trustworthy.
Like many words, "incredible" has multiple definitions usable in context, some having nothing to do with credibility. What English users typically do is pick the definition most applicable to the circumstance; here, that is the one which completely undermines your entire argument and is more congruent with everything I have said.
With that in mind, how would you rewrite your entire post?
What do you find so "incredibly easy to understand" about private digital assets that people who "lack critical thinking skills" don't get, and why are you more trustworthy than they are?
Isn't it a bit ironic that you want us to trust your claim that we should trust Europol about a "trustless" system?
"There are trustless systems though, like bitcoin" -buboard; "there is a system [Bitcoin -- bzzt -- oops! -- Monero then] that minimizes trust" -buboard; "I trust that [the Europol investigation and presentation about Monero]" -rolltiide.
So why should we trust you and buboard, then? Care to link to your resume or PhD dissertation or any code or research papers you've written on the topic, since it's so "incredibly easy" for you to understand? Or should we just trust you without any evidence to tell us whom to trust about a trustless system?
Chain analysis companies advertise their investigative abilities in order to attract business. By theory alone they cannot track Monero transactions, and in practice, they are not advertising that they can while they do advertise how they can follow transactions on surveillance networks like Bitcoin. This is a second confirmation that Monero works as private digital money.
This part is a cat and mouse game, all while Monero continues receiving software updates to improve its use, via open source pull requests, which bolsters confidence I have in the system as it doesn't exist in a static state. A lot of people analyze the digital asset ecosystem in a current or prior state and dismiss them based on that.
I think that is enough ammunition for you to investigate further to come to similar conclusions without you needing to trust me or buboard.
But if you would like to dive deeper on why these parties are not able to follow Monero transactions in both theory and practice, there are some books on the subject and then code you can compile yourself to test it out. That's typically what we are referring to when we say trustless, the independent verifiability.
Cryptocurrencies depend on a redundant and highly interconnected network of many diverse actors to be able to maintain their trust guarantees. How that network is going to stay around in a doomsday scenario is very much up in the air.
The main point, the crucial point of a doomsday hedge is that it should depend as little as possible on the intricacies of the old world.
The truth is that it is extremely difficult to predict what asset to hold in such a case. Physical assets can be confiscated by force, ephemeral assets are likely to just cease to exist.
Perhaps the best hedge is nothing as material as gold, or guns, or money, but having a large and robust network of friends and acquaintances. Ultimately, being connected (not just in a technical sense) is the best enabler of success in or out of a doomsday event I think.
Would love to hear the moderators' opinions on this.
The bigger problem - in my opinion - is people submitting (unwittingly) content cloned from elsewhere.
There is quite a bit of blogspam that makes it to the front page. Usually even that is taken care of sooner rather than later, but every time it works, it is proof that it can work, which will drive more people to try to get away with it. We can all help with this issue: ensure that we submit content from original sources, and if content from a cloned source is submitted, flag the article and point to the original.
It sure would be nice if there was a way to add a comment to explain why one has flagged an article, so the moderators don't have to guess why it was flagged! As it is, I occasionally add an explanation in a comment or direct email, but it always feels awkward.
It is quite obvious when a company that thinks these sites are relevant to its social media presence gets featured. Suddenly a whole lot of upvotes come along. Make of that what you will. Personally I think it is naive to believe that any open community is free from bought votes.
tl;dr numbers: 15,000+ unique visitors, 25,000+ page views
And here's a thread from 2016 where users chime in with their results.
tl;dr numbers: A #2 or #3 ranking was about 50k-75k pageviews.
One could do the math to figure out how they came up with this $2 / upvote price. I presume it's the equivalent of paying Google for the same amount of traffic.
can I buy being upvoted outta the shadow ban?
Some years ago I made a post on /r/learnprogramming that TL;DR was along the lines of "go to college, great ROI, plus for your first job most employers auto reject resumes without a degree" or something along those lines.
I was offered $500 up front and $500 referral bonuses to edit my comment to link to a fake degree mill. IIRC they charged something like $1500 for a fake degree and one year of service for employer education checks/transcripts. (by the by, if you're going to run a fake degree mill out of a residential address in San Mateo, use a VPN!)
Point being, Reddit comments are ripe for abuse.
There are quite a few problems with that, and I think different ones are obvious to different people.
One, movies are a fairly concrete topic. There are tons of laws about Copyright, so every movie has some bits of information you and I could exchange to know that we are talking about the same thing. And for purposes of marketing and awards ceremonies they are even grouped and tagged in some fairly official ways. Comments are amorphous. How do I tell if your comment and mine are 'alike'?
Two, Netflix only sort of works because liking things is not free. If I were promoting a horror movie, I could create a bunch of accounts that like all of the classics of the horror genre, then add a few random movies, and my movie. Everyone who loves classic horror movies would correlate with my account, and that will mean my likes affect their suggestions. But... This is going to cost me. I have to create an account to vote. I have to (probably, I haven't checked in a while) attach a credit card to that account, so I could do that once, twice, a dozen times (everyone on my PR team) but beyond that things get complicated. There are limits to how much I could amortize those costs across multiple PR campaigns, but it might still be lucrative to do so.
With Twitter or Facebook or a million other sites, there is no friction. I could create bots to do this for me, for the cost of some network-connected CPU time (Amazon spot instances). I could speculatively create bots so that I have a stable of 'clean' accounts for future activities that haven't even occurred to me or my customers yet.
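The shilling attack described above (sybil accounts that like the genre classics plus the promoted title so the recommender correlates them) can be sketched against a toy item-item similarity measure. All titles, account names, and numbers here are invented for illustration; real recommenders use far richer signals than raw like overlap.

```python
# Toy item-item similarity: two movies are "similar" when many of the
# same accounts liked both (Jaccard overlap of their liker sets).
def similarity(likers_a, likers_b):
    return len(likers_a & likers_b) / len(likers_a | likers_b)

# Hypothetical data: which accounts liked each movie.
likers = {
    "halloween": {"u1", "u2", "u3", "u4"},  # a horror classic
    "my_movie":  {"promoter"},              # the title being shilled
}

before = similarity(likers["halloween"], likers["my_movie"])

# The attack: each sybil account likes the classic AND the promoted movie,
# manufacturing co-occurrence between the two.
for bot in ("bot1", "bot2", "bot3"):
    likers["halloween"].add(bot)
    likers["my_movie"].add(bot)

after = similarity(likers["halloween"], likers["my_movie"])
print(before, after)
```

Before the attack the two movies share no likers at all; after just three sybil accounts, they overlap substantially, which is exactly the "friction" point: on a platform where accounts cost real money, running this at scale is expensive, while on a free platform it costs only bot time.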
These activities are effectively parasitic, in the biological sense of the word (though some would agree in the rhetorical sense as well, but let's not get into that). Only when the host begins to die is there any selective pressure against them. Maybe we're seeing that now. The 'parasite' load is getting so high that the health of the host is in jeopardy. In nature this is one of the major selective pressures on parasites and some pathogens. Kill the host and you lose your meal ticket & transportation.
I live in a big city and while there are bylaws, without enforcement people will break them all the time. An example of this is people bringing their dogs where they are not allowed to (e.g. school yards) or have them off-leash when they're not supposed to (e.g. on-leash parks). But because there are so few bylaw officers, few dog owners follow those rules.
You only notice the few laws that are sometimes broken.
Doing the right thing can vary by culture, perspective, and situation. What is right or wrong to us may be entirely different to someone whose family is starving, as just one example. Given the worldwide nature of the Internet, as a species, it is unlikely we are going to be able to agree on a single set of rules or punishments within our lifetime.
There is, in all likelihood, a way to build an upvote system that is resistant to factory automation. The problem is that such a system doesn't have the exact feature set of the current system. Which means you have to give someone (probably several someones) bad news whether they want to hear it or not.
This is not a strong suit for software devs in general, and in the case where the 'right thing' involves financial concerns, we almost always lose. Even when we are right.
So rather than losing millions on taking away some feature that someone got promoted for, we lose millions by investing multiple man-decades into palliative care.
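One hypothetical shape such an automation-resistant upvote system could take is to weight votes by account history rather than counting each one as 1, so that freshly minted sybil accounts contribute almost nothing. This is purely a sketch: the factors and constants below are invented, and no site's actual ranking formula is assumed.

```python
import math

def vote_weight(account_age_days, karma, distinct_days_active):
    """Weight a vote by account history instead of counting it as 1.

    A brand-new throwaway contributes almost nothing; an established,
    active account contributes close to 1. All constants are made up
    for illustration.
    """
    age_factor = 1 - math.exp(-account_age_days / 90)      # ramps up over ~3 months
    activity_factor = min(distinct_days_active / 30, 1.0)  # requires real usage history
    karma_factor = min(karma / 100, 1.0)                   # requires community standing
    return age_factor * activity_factor * karma_factor

# A day-old sybil account vs. a long-time participant:
sybil = vote_weight(account_age_days=1, karma=1, distinct_days_active=1)
regular = vote_weight(account_age_days=1000, karma=500, distinct_days_active=300)
```

The bad news alluded to above is built in: under this scheme a legitimate new user's vote also counts for almost nothing, which is exactly the kind of feature regression that is hard to sell internally.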
Since fake likes are possibly fraudulent and detrimental to the paying customers, I feel these companies should be the ones openly demonstrating their attempts at stopping them. Otherwise, I'll default to "they allow it".
We helped the dog despite a variety of risks to ourselves, despite not knowing when it would end, and despite being in a hurry. We didn't do it with any expectation of reward or kudos. And although we do train and board dogs as a side gig, we're already booked enough to be choosy about clients, so we weren't doing it for potential business.
We did it because ignoring the dog felt bad and helping the dog felt good. That is, it was completely selfish on our part, but we still did the right thing.
But at least a dozen other people just walked by the dog as if they didn't see it. It was off leash, unkempt, and clearly scared. They saw it, but they just walked on past. Why? Because, for whatever reason, they didn't feel bad enough, or wouldn't feel good enough if they helped.
I don't litter, either. And I take some personal responsibility for the litter of others. Again, because it feels good (or, in this case, living in a marginally cleaner city feels good). It also feels good knowing I might not be the only one who does so.
Neither of these makes me a good person. There's one instance where I know I don't do the right thing: I don't visit people in hospitals unless someone else drags me along. The experience, for reasons I'd probably need a psychologist to figure out, is painful for me, and whatever reward I get visiting the person is dwarfed by that pain. Even the shame and embarrassment of being a person who doesn't visit people when they're in the hospital isn't enough of an incentive to go. I need an external incentive. I need a friend to say, "Hey, let's go visit Phil in the hospital" or I'll be 'too busy' to go.
We all 'think differently' than each other. We're all doing what feels best. We all have different internal incentives and disincentives. So, for any sufficiently large group and any desired behavior, external incentives are required to get some of the people to exhibit that desired behavior.
Show me someone who is absolutely repulsed by something, but does it anyway because it creates the serotonin pump for someone else, and I'll show you a selfless person.
"Psychological egoism is the thesis that we are always deep down motivated by what we perceive to be in our own self-interest. Psychological altruism, on the other hand, is the view that sometimes we can have ultimately altruistic motives. Suppose, for example, that Pam saves Jim from a burning office building. What ultimately motivated her to do this? It would be odd to suggest that it’s ultimately her own benefit that Pam is seeking. After all, she’s risking her own life in the process. But the psychological egoist holds that Pam’s apparently altruistic act is ultimately motivated by the goal to benefit herself, whether she is aware of this or not. Pam might have wanted to gain a good feeling from being a hero, or to avoid social reprimand that would follow had she not helped Jim, or something along these lines."
I was attracted to this years ago, and even more to the related theory of Mark Twain in What Is Man?, which seemed very convincing:
"...we ignore and never mention the Sole Impulse which dictates and compels a man's every act: the imperious necessity of securing his own approval, in every emergency and at all costs. To it we owe all that we are. It is our breath, our heart, our blood. It is our only spur, our whip, our goad, our only impelling power; we have no other. ... FROM HIS CRADLE TO HIS GRAVE A MAN NEVER DOES A SINGLE THING WHICH HAS ANY FIRST AND FOREMOST OBJECT BUT ONE–TO SECURE PEACE OF MIND, SPIRITUAL COMFORT, FOR HIMSELF. ...He will always do the thing which will bring him the MOST mental comfort–for that is THE SOLE LAW OF HIS LIFE"
but came to think (as most philosophers do) that both are just mistakes. They explain too much, are unfalsifiable (how do you know what people never do? it's just asserted), and are a crazy and horrible way to live. There's no goodness in the world?! No kind acts? Why would you want to believe that? You really didn't care about that dog at all, GP?
If you're asking that question, you've adopted some sort of fatalist view of the concept I don't hold. I don't know what caring is if it doesn't have something to do with feelings. To answer your question, I cared, empathized, and sympathized enough that it outweighed my fear, anxiety, and doubts about helping.
Why do I feel that way and others don't? Probably because of my experiences with dogs, my thoughts about dogs, and my thoughts in general. I could, with intention, retrain my thoughts so I am likely to not help dogs in the future, if I so chose. Likewise, I suspect I could have taken the mental hit and not helped the dog; but I don't think that's what people generally do. And I think when people generally do a thing, if they're not getting a reward, they're building a reward system so when they do the thing in the future they get a reward.
I'm very skeptical of any claim of purely altruistic behavior, because altruism can have many rewards, both internal and external. Pam rescues Jim because Pam cares about Jim. But what does it mean to 'care about' a thing? Isn't that just pro-rescue feelings?
In fact when I was researching this topic, I remember there being some reporting about Mother Teresa writing that she had doubts about what she did and how difficult it was - reinforcing the idea that she was suffering internally. I also recall some interesting studies on serotonin reward feedback in people who do altruistic things or have "sacrificing" jobs. So it's not woo woo as you seem to make it sound.
I take issue with a lot in your last paragraph - but I'd suggest you revisit the concept.
I recommend Christopher Hitchens' book on Mother Teresa, I'm pretty sure you wouldn't write about her like that if you'd read it.
No matter how misguided her actions were, it was clear she was doing what she did without those same altruism mechanisms - yet millions think she was a saint. That's the whole point of that anecdote.
One bay over, more than a dozen people were consulting on a kid who fell out of a high window. When someone volunteered me to take their kids home so they could stay, I'm not even sure I finished my whole sentence before leaving. Pretty much gone in a puff of dust like a Warner Brothers cartoon.
Hospitals are full of hurt people. Some of them are never going home. Sometimes they know it. Sometimes the family knows it. I'm now pretty sure it's not the people in the hospital I have a problem with. It's what's happening around them and to them.
So, deposits on plastic cups! and straws! (might get people to pay the cost of washing and re-using a sturdy one)
Never mind where the "problem" came from in the first place.
Climate change is also making people rich now due to this logic.
Instead of FB, etc getting ad money, that ad money is being spread out to many others instead. It also seems like this might be more cost effective than ads.
I think I would consider this much more ethical than how Google puts the top 3-5 results as "ads" these days. Sometimes I question how long until I have to go to Page 2 to get off the advertised results.
Note - In the above I'm only considering this practice being done by legitimate places sharing fairly accurate or honest content. Such as a business trying to promote a sale or a blog piece.
I would consider paid ads, paid likes, paid comments, etc., to be unethical if the content they're supporting is false.
TLDR: What's the difference between paying Facebook/Google/etc for an ad vs paying someone else to like your post? The 2nd seems like a better solution for a majority of people.
That's a bit like saying you don't see the difference between peeing in a toilet and peeing in someone's pool. You're relieving yourself either way, after all.
"Likes" are supposed to represent the sentiment of the real users. Ads are supposed to be advertising messages.
The two things are very different for a bunch of reasons. First, buying likes is straight up unethical because it's lying and gaming the system. Second, buying likes is harmful because it reduces the trustworthiness of likes.
By buying ads you are trying to game a user's newsfeed without providing any value to the user (if highly liked content provides value, ads do not provide this value).
Buying likes is the same. It games a user's newsfeed but doesn't provide value.
To the user they are the same. To Facebook, they're unethical because the buyer isn't paying Facebook.
As to the point about harm: the newsfeed itself is untrustworthy. Likes were never trustworthy.
In theory (and ancient, pre-internet historical practice), ads can provide genuine value by informing people about the availability and properties of products.
Sorry, in what way is buying likes ethical?
Facebook decided how things work and created this system with its side effects. Finding a way around the rules Facebook made does not make it unethical.
I'm not sure what to say to you if you can't see it. No harm is done? Maybe you only imagine physical or financial harm. These aren't meaningless numbers in a vacuum; likes/followers are taken by people to indicate social importance or approval, they show how other people in the world feel and think about things (comments, pages, issues, opinions). A large part of the public sphere is in this form nowadays. Faking likes/followers/comments is deliberately misrepresenting yourself, fraudulent deception - lying.
Seems to me your argument would apply to faking or buying votes in an election, or TV ratings, or referendums or..faking corporation profits, buying fake stories in news media, or anything in the world where the numbers are diddled. "In what way is it unethical? No harm is done to anyone. There's this system with side effects. Finding a way around the rules does not make it unethical."
Imagine if voting and ranking on HN stories and comments was entirely determined by bought upvotes, downvotes, flagging, abuse reports etc. The site would be destroyed. And if the same thing happened on every forum of any kind online? "No harm done?"
I disagree. I think harm is done to everyone who uses the site.
Unfortunately, I haven't considered likes or even "star ratings" to be "real sentiment" in decades. Even without paid clicks, they rarely reflect quality and are often the result of many other factors. There are plenty of other dishonest methods to boost up your like value without paying for clicks.
Yes, these things are a prime example of "the tragedy of the commons". But your question was about ethics, and I don't think this affects the ethics of the decision. That the commons may be trashed doesn't make it ethical to contribute to the trashing yourself.
When a company buys likes, not only is this not disclosed, but it also gives a false sense of approval by a community, which is both deceptive to customers and unfair competition for businesses providing a similar product / service.
Why anyone thought "like" would be a good gauge without "dislike" is beyond me. Dislike would provide a way to display displeasure and drop the value of the original like.
And would double the business of the like factories, because they would expand into doing paid dislikes as well.
I recall buying Facebook ads for places in the early days of their system that benefited from this, with the ad saying which friends also liked the content.
That's not a very useful statement. What if it's in support of an opinion? Advertisements are rarely just a list of facts and likes
> TLDR: What's the difference between paying Facebook/Google/etc for an ad vs paying someone else to like your post? The 2nd seems like a better solution for a majority of people.
The difference is pretty clear, one is purchasing a real ad on a real platform. The other is paying for fake interactions to project a false response to something.
> What if it's in support of an opinion?
I think opinions are great if they state their claims and reasons in a respectful manner and don't throw out very false narratives on purpose.
> ...real ad on a real platform. The other is paying for fake interactions to project a false response..
You're right. I'm jaded and feel I'm over advertised to on search engines, blogs & social media platforms. I feel they're shoved down your throat & ruin the quality of the platform. I feel a lot of ads or "sponsored posts" try their hardest to hide the fact that it's an ad. They often blur the lines of accurate claims. I don't even need to go down the route of "influencer" or "affiliate links" that hide it even more. At one time in my life, I felt there was a clear distinction between ads & authentic word of mouth promotion of an item. That was probably way back when TV blurred out logos. I haven't felt that way in a long time. I believe "real ad" is such a gray area these days that I am having a hard time seeing anything wrong with regular people making money off this advertisement hysteria instead of companies. Especially since most of the companies in question here are doing highly unethical things with their user's data.
You can take the above paragraph & state 2 wrongs don't make a right or point out several fallacies made. At the end I would argue that the paid likes are only a small part of a larger problem of advertisements. Yes I would like to see paid likes become irrelevant but I think there are bigger issues.
A real ad is the same as a fake ad. They are all just ads. It doesn't matter who you pay. They do the same thing.
We're talking about fake likes. An ad without purchased likes is clearly not the same as an ad with purchased likes. One has a genuine representation of response and the other does not. Who you pay does matter as one is a legitimate ad and the other is at worst posing as a legitimate ad, but possibly also a real ad that has been given inflated numbers via purchased likes.
One is trying to operate within the system, the other is trying to game it. Not the same thing and who you pay obviously does matter. Just because they may have a similar purpose doesn't make them the same...
That is actually one of the interesting things about social media as a political platform. This wasn't possible at the same level with television or radio advertising, because "posts" and "likes" online often come from essentially anonymous strangers. Given the current level of troll farm activity, I think people would do well to take all numbers and advertisements they see online with a grain of salt until these issues are more settled.