
I accidentally built a nudity/porn platform - elazzabi_
https://elazzabi.com/2020/08/11/the-day-i-accidentally-built-a-nudity-porn-platform/
======
jacquesm
\- anything that allows file upload -> porn / warez / movies / any form of
copyright violation you care to come up with.

\- anything that allows anonymous file upload -> child porn + all of the above.

\- anything that allows communications -> spam, harassment, bots

\- anything that measures something -> destruction of that something (for
instance, google, the links between pages)

\- any platform where the creator did not think long and hard about how it
might be abused -> all of the abuse that wasn't dealt with beforehand.

\- anything that isn't secured -> all of the above.

Going through a risk analysis exercise and detecting the abuse potential of
whatever you are trying to build _prior_ to launching it can go a long way
towards ensuring that doesn't happen. React very swiftly to any 'off label'
uses of what you've built, shut down any form of abuse categorically, and you
might even keep it alive. React too slowly and before you know it your real
users are drowned out by the trash.

It's sad, but that's the state of affairs on the web as we have it today.

~~~
est
github is fairly open and it's not used for porn, yet.

~~~
arsome
It's definitely used for pirated stuff.

Put any technical book title into google with site:github.com in it and click
the first PDF that shows up.

~~~
HideousKojima
There's also a font piracy site that simply trawls public Github repos for the
font's filename

~~~
spanhandler
Ah, so _that’s_ why the site’s so trigger-happy with the excessive-activity
bans that it’s nearly unusable unless you’re logged in.

~~~
vvpan
And cloud tokens and other credentials. Those get scraped super fast.

------
Demiurge
I've been maintaining a community forum for more than a decade. We had some
abusive users, so we introduced pre-moderation. Meaning, any new user is on
probation for a few posts, and all anonymous posts have to be manually
approved by an admin to be posted publicly. This has pretty much completely
stopped the visible abuse.

However, for about 10 years now, bots have been registering every day. Some of
them even make realistic accounts with cool and unique usernames, emails, even
descriptions. Strangely, the email and username never match... And they make
posts with HTML links embedded. They even know to actually select the 'HTML'
content type, which is an extra select input. Some bots even make a few
innocent posts before the link spam, but just too few to get past probation.
Obviously, the spam posts never get approved, and the accounts never get out
of the pre-moderation queue, and yet they're still trying every day... Not
intelligent enough to make more than 2 dumb posts.

Similarly, I have some work projects that have user registration with a human
on-boarding process, where another person has to add the user to their group
for them to share any data, none of which is ever public. So these bots are
tirelessly registering, and staying in limbo forever. Thousands of useless
accounts.

It boggles the mind how much energy is wasted, but I guess it must be
profitable enough.

~~~
Normille
It's amazing how some people can't seem to spot a bot/spam post, no matter how
obvious.

One of the websites I host is for a retired elderly academic who pens articles
about his former field and invites comment.

Almost every week I'll get an email from him along the lines of _"Is this
worth following up?"_ and attached will be either a comment reading something
like _"I love your article. So much good info and useful to me.."_ or an
email from some g3gergergew@gmail.com address saying _"We love your site. It
has much good infos and is useful but we are notice your CEO could being
better..."_

No matter how many times I tell him to look for the telltale signs...

* Random gibberish 'from' email address

* Half a dozen links in the comment

* Broken English

* Generic text which could apply to ANY article or website in the bloody world

...he'll still forward them to me and ask if they're worth getting in contact
with. Every time, I'm gobsmacked how he can fall for such inept spamming.

~~~
criley2
The ineptitude of the spamming is actually a sign of eptitude -- they
intentionally include the signs you look for because you're a waste of their
time. They have figured out the precise line to walk where bad marks like you
avoid them but easy marks like him engage.

~~~
Normille
I've heard that's the thinking behind the infamous and infamously 'reeking of
scam' Nigerian Prince email scams too. And the fact that people still fall for
them shows it's seemingly a strategy worth pursuing.

[But a discussion here on the ridiculously obvious scam that people fall for
all the time is probably too off-topic --even for HN!]

 _PS: Upvote for "eptitude". Even if it's not a real word, it ought to be.
[And I'm not going to spoil things by looking it up.]_

~~~
alextheparrot
Aptitude and ineptitude are the now divergent pair, for your interest. Apt was
turned into ept

~~~
jagannathtech
ah TIL

------
epaga
"Take a competitor product, remove all features you don’t need, and make it
crazy fast."

Seems to me there are hundreds of lifestyle businesses just waiting to happen
by following this formula. So many good ideas out there could be made so much
better by reducing them to their essentials, but making them elegant and
"crazy fast".

~~~
treeman79
Been forced to use Jira instead of pivotal tracker.

Wow. Traded all usability for endless features.

Our setup barely works. And creating a story is such a massive pain. Sometimes
things just won’t work.

So I refuse to use it for stories beyond one mega story.

~~~
nkrisc
One place I worked wanted multiple people to "own" a story, but JIRA doesn't
work like that, so they implemented a totally new custom "owner" field that
did allow it and then told everyone not to use the native owner field. Now you
had to track everything two ways.

~~~
gen220
This isn't _that_ bad.

What makes this worse is when 1/3 of the teams in your org decide that they
want a similar feature, but each comes up with their own custom-named tag
("owner", "lead", "point person", "point of contact", "jefe").

And then you want to run some jql queries against those tickets, and you have
to use disgusting query generators to de-dupe the tag monstrosity.

They've done an excellent job of giving you _just enough_ features to shoot
yourself in the foot with.

------
pjc50
Yup. If you build a communications platform, it will be used for spam. If you
build a hosting platform, it will be used for porn. If you build a linking
platform, it will be used to spam links for porn.

Anything else requires a constant uphill battle of content filtering and
deletion. You could call it censorship, but it's a necessary reality.

~~~
ClikeX
At some point you just have to cave and build a porn site.

~~~
munificent
Waiting for the follow-up article, "The day I accidentally built an open
source development platform."

"I made a great, fast, free porn hosting site. But then these nerds showed up
and started uploading their Git repos onto it."

~~~
wiml
"The day I realized my camgirl sex website was being used for business
meetings"

------
Abishek_Muthian
We were running a privacy-focused Chat-App-Network dating platform in 2017
which was accelerated by Facebook[1].

i.e. A network between Messenger<->Viber<->Telegram<->Line App.

By design no media sharing was allowed (to prevent pornography) and the user
profile images were received from the platform itself. But we soon faced a
unique challenge: people from certain countries using pictures of their
children as profile pictures (often just the children), people with group
photos as profile images, and then people using explicit images as profile
pictures.

So we integrated Amazon Rekognition to identify children, group and explicit
images. Those using explicit images were banned immediately, and those with
children/group photos (_face detection, not facial recognition_) were asked to
change their profile image (their profile was not shown to anyone until they
changed their profile picture to one with just them). We were processing
>200,000 profile images per month, as people change their profile images
often.

But, as we well know, Amazon Rekognition, or for that matter any such ML
solution, is not 100% accurate. We faced issues with people with darker skin
color (Amazon told me that they were working to fix the issue; exactly why
this kind of half-baked tech shouldn't be used for things that can cause harm)
and so we had to reduce the confidence levels to such an extent that anything
resembling a child would be flagged by the system (false positives are better
in this case than false negatives).

[1][https://hitstartup.com/about/#FindDate](https://hitstartup.com/about/#FindDate)
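A minimal sketch of the decision logic described above, not the actual
production code: it assumes Rekognition-style responses (e.g. from boto3's
`detect_moderation_labels` and `detect_faces`), and the label names,
thresholds and age ceiling are illustrative choices.

```python
def review_profile_image(moderation_labels, face_details,
                         explicit_threshold=50.0, child_age_ceiling=16):
    """Decide what to do with a profile image, given Rekognition-style
    responses. Returns 'ban', 'ask_to_change', or 'ok'.

    moderation_labels: list of {"Name": str, "Confidence": float}
    face_details: list of {"AgeRange": {"Low": int, "High": int}}
    """
    # Explicit content at any meaningful confidence -> immediate ban.
    for label in moderation_labels:
        if label["Name"] in ("Explicit Nudity", "Nudity") and \
           label["Confidence"] >= explicit_threshold:
            return "ban"
    # Group photo: more than one detected face.
    if len(face_details) > 1:
        return "ask_to_change"
    # Possible child in the picture: err on the side of false positives
    # by flagging any face whose estimated age range dips low enough.
    for face in face_details:
        if face["AgeRange"]["Low"] <= child_age_ceiling:
            return "ask_to_change"
    return "ok"
```

Lowering `explicit_threshold` and raising `child_age_ceiling` trades false
positives for false negatives, which is the knob the comment describes turning.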

~~~
nerdkid93
> privacy focused Chat-App-Network... accelerated by Facebook

Why would any company focusing on privacy partner with Facebook or Google? I
would guess that some ardent supporters of such a product/company would be put
off by such a partnership, no?

~~~
Abishek_Muthian
Good question.

> Chat-App-Network

Messenger had >1 billion users and so its users were 98% of our user base. We
enabled them to communicate with users of other chat apps and vice versa. We
didn't even use the 'Name' of the users.

I applied for their bootstrap phase under their FbStart program but they
directly selected it for the Acceleration phase.

As for why I applied to Facebook if I care about privacy:

I was a disabled solopreneur from a village in India, without any kind of
network strength, competing with Valley behemoths; any kind of help is not
just a force multiplier but life or death (and my product was selected
meritoriously by FB). Facebook's privacy issues (Cambridge Analytica) surfaced
only several months after I launched the product, so the image of Facebook was
not what it is today. But it did bother me, and just after a year of running
the platform successfully, I had to close my startup due to my health
issues[1]. I did not sell my platform, to safeguard the privacy of the users.

[1][https://abishekmuthian.com/i-was-told-i-would-become-
quadrip...](https://abishekmuthian.com/i-was-told-i-would-become-
quadriplegic-68c0371e6f05/)

------
Jenk
A market platform I recently worked on allowed users (free sign up) to create
multiple wishlists and then send those wishlists to arbitrary email addresses.
The user could set a custom title, limited to 100 characters or so.

We soon discovered a similar problem to the OP's - bot accounts (mostly
@qq.com addresses) were registering by the hundreds per day to create
wishlists and then send those wishlists to other @qq.com addresses. They were
setting the titles to arbitrary code blocks.

I found it fascinating, if terribly inefficient. Some colleagues and I were
speculating on the purpose - perhaps someone experimenting with some kind of
laundered botnet control path?

We tried all kinds of measures to prevent it but ultimately we blocked all
@qq.com accounts and eventually disabled the wishlist feature altogether as it
had such little real usage.

~~~
coldpie
We allow users to sign up for a free trial for our product, you have to put in
your name & email address. After the trial expires, we send an email that says
"Hey So-and-so, your trial ran out, click here to give us money, etc." Some
enterprising spammers filled in the name field with spam URLs and the email
field with victims' email addresses, in order to spam them. So the victim
would get "Hey hxxp://buyfreerolex.com/, your trial ran out..." spam emails,
from our email server. Obviously we've fixed it since, but it's absolutely
wild the lengths spammers will go to.

~~~
throwaway744678
Perhaps a bug in some spamming script that mistook your sign-up form for a
blog comment form?

------
yoran
We've had multiple spammer attacks over the years. Our platform allows users
to create and publish their own content. Our primary target is education,
teachers and students. But naturally it's being abused by spammers. It's been
an interesting cat and mouse game to counter them.

\- One time, they used our platform to publish links to their streaming
websites for the quarter finals of the 2018 Champions League. Suddenly we
ended up being first result on Google for "arsenal v barcelona". It was
fifteen minutes before the game so you can imagine that we got _a lot_ of
traffic. On the one hand it was kind of flattering that the SEO ranking of our
domain was so strong. On the other hand, it was neither great nor beneficial for
the platform to be abused like that. As a counter-measure, we decided to block
indexing of project pages for 24 hours when they're first made public. The
spammers never came back.

\- Another time, we got an email from AWS that our SES bounce rate was 15%,
and it was rising fast. Being blocked from sending emails by AWS would have
been a disaster. It turned out that our invitation system was abused. A
creator of a project can invite an external person by email. That person
receives an email saying "John Doe invites you to collaborate on 'A nice
project about the 2018 Champions League'" with a link to the project. Replace
"A nice project about the 2018 Champions League" with a Chinese ad and you've
got yourself spammers who are sending thousands of emails a second to a random
collection of email addresses. Naturally a lot of these bounce, which caused
AWS to warn us. So we had to start verifying the MX validity of invited email
addresses and throttle the system to a maximum of 100 emails in a window of 24
hours.

\- We still get a lot of spammers publishing obvious spammy projects. One
thing that has helped is the Clearbit Risk API. You send them an email address
and it comes back with an assessment of how spammy the address is. We use it
for certain domains (protonmail.com, yandex.com,...) and it frequently flags
someone as a spammer right after signup. They can still use the platform but
can't make stuff public, completely defeating the purpose of them being
spammy.

I'm sure they'll keep finding creative ways to get around the limitations we
put in. The toughest is to find a way to counter them without it hampering the
experience for all the other users.
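The throttling half of the invitation fix above can be sketched as a per-user
sliding window (the MX-validity check would additionally need a DNS lookup,
omitted here). The class name and limits are illustrative, not the platform's
actual code.

```python
import time
from collections import defaultdict, deque

class InviteThrottle:
    """Per-user sliding-window limiter: at most `limit` invitation
    emails within `window` seconds (100 per 24h in the story above)."""

    def __init__(self, limit=100, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.sent = defaultdict(deque)  # user_id -> send timestamps

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.sent[user_id]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over quota: refuse to send
        q.append(now)
        return True
```

A sliding window (rather than a daily counter reset at midnight) means a
spammer can never burst 200 emails by straddling the reset boundary.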

~~~
Demiurge
Awesome tips there!

------
joosters
_I later discovered that Instagram banned all mylink.fyi links from the
platform. A customer also confirmed to me via email that Snapchat started
blocking links. Heh, I’m banned by Instagram and Snapchat!_ [...] _If you’re
interested in acquiring the domain name, and /or the app, let’s talk._

That domain name must have a negative value now?

~~~
dijit
Thinking about it; maybe not.

If you're one of those loud folks who dislike instagram/facebook (like me)
then this is a nice way to ensure your content and data does not end up on the
platform.

Of course, they're only enforcing it themselves, so it's unlikely to be
permanent. :(

------
avian
Linktree, the service the author aimed to replicate, had similar problems
according to Wikipedia [1].

> In 2018, Instagram banned the site due to "spam," although it was lifted and
> Instagram issued an apology.

> Rumors circulate that Instagram will issue another ban.

[1]
[https://en.wikipedia.org/wiki/Linktree](https://en.wikipedia.org/wiki/Linktree)

~~~
bichiliad
I wonder if using Instagram Login has helped them filter out spam accounts.

------
munificent
Paying for bouncers is an understood part of the cost model for a nightclub.
If you provide any sort of real or virtual venue that allows unvetted
participants, you have to factor in the cost of dealing with bad actors. If
you have a virtual venue, you have to factor in the cost of dealing with
_automated_ bad actors.

It's just how human nature works. 90% of people are great, but 10% can do a
_lot_ of damage.

~~~
mrwww
great analogy

------
taftster
Here's an example of a similar problem that I created for a website I help a
friend with.

We had outbound referral links, basically to monitor the number of times that
a visitor clicks out of the website. The URL pattern was something like:
example.com/out.php?url=outbound.com

The out.php script would simply (naively) redirect the user to the URL
specified. We never validated that the outbound link pointed to an authorized
destination.

The result is the same. Eventually spammers figured out the above link and
then just started posting their spam using the site's redirect script to any
number of social media sites, embedded in email, etc.

What's interesting too is that we would see multiple redirects embedded into a
single request. e.g. out.php?url=another.link.service/link=spammer.com

Obviously in hindsight this was stupid, but when we built it (some 10 years
ago), the idea seemed pretty sound if not maybe a little naive. The solution
would have been to only allow redirect links to authorized outbound sites,
which works when those links are relatively static and not open ended.
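The allowlist fix described above could look something like this. It's a
sketch, not the site's actual code, and the host names are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical set of authorized outbound destinations.
ALLOWED_HOSTS = {"partner-a.example", "partner-b.example"}

def safe_redirect_target(url):
    """Return the URL if its host is on the allowlist, else None.
    Only absolute http(s) URLs to known outbound sites pass."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return None  # rejects javascript:, data:, etc.
    # urlparse lowercases the hostname; exact match only, so
    # "partner-a.example.evil.example" does not slip through.
    if parsed.hostname not in ALLOWED_HOSTS:
        return None
    return url
```

On a `None` result, out.php would return a 400 (or redirect to the homepage)
instead of forwarding the visitor.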

------
mikorym
There is an easier way to accidentally build a porn website:

1) register domain

2) don't renew

Not completely unrelated either, since it seems like OP's domain is now bust
too...

What is also interesting is that probably half of what FB or <insert
nonexistent competitor> does is moderation of sorts. This is why FB is
becoming a commerce/community page website and why they need Instagram for
social media.

------
dhosek
I run a mediawiki site. I got to play the cat and mouse game as well. For a
while I was getting hundreds of spam page creations per day despite
implementing as many defenses as I could. I was at least once a day going
through and deleting the spam posts using the Smite Spam plugin. It's finally
calmed down and I get one or two posts maybe every couple of weeks. I think I
may have finally been removed from the site list of whatever "packaged"
software the spammers use.

~~~
vorpalhex
With prebuilt software, a small custom change required for creating content is
oftentimes enough to deter mass automated spam. My go-to for a number of years
was to add a form field with the label "Enter the word 'orange'". Trivial for
a human, but it requires just a bit of customization for a bot - enough that
most spammers won't bother.
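The server-side check for such a field is tiny; the field name and expected
word here are made-up examples of the site-specific choices involved.

```python
def passes_custom_challenge(form, field="challenge", expected="orange"):
    """Check the custom anti-bot question described above.
    `form` is the posted form data as a dict."""
    answer = form.get(field, "").strip().lower()
    return answer == expected
```

The point is not cryptographic strength - it's that a generic spam kit has no
code path for a field it has never seen.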

~~~
dhosek
I'm building a bespoke replacement for the wiki since all the pages share the
same structure and the mediawiki solution isn't as user-friendly as I'd like.

Hopefully I won't do anything dumb like the naïve contact form I'd made that
turned into a spam vector because I was putting unsanitized user input into
mail headers.

------
harry-wood
I've come across these kinds of prototype link sharing sites in my battles
with SEO spam. On the face of it SEO spam is simple: the bad guys bung a load
of links into your wiki or blog comments in the hope of gaining google
rankings for their "SEO clients", but... Increasingly they started bouncing
links all over the place in complex spamdexing webs. Link sharing sites often
get thrown in the mix, along with cross-linking other blogs/wikis where
they've got their spam to stick. This all makes it easier to evade any
filtering by a particular blog/wiki admin, but maybe also makes it harder for
Google to filter and down-rank the baddies. And finally, depending on how
complex a spamdexing web is, it offers protection to their clients, because it
ends up being impossible to see which end websites the spammer is actually
aiming to promote (the links tucked away amidst the randomised cross-linked
chaos).

But maybe that was a game of a few years ago, and now spam bots are mostly
just trying to push porn on social media.

~~~
heipei
I'm facing the same issue with a URL analysis service I operate which lets you
take a snapshot and screenshot of a website. I get countless submissions of
pages on other sites which allow user-generated content (Reddit, Medium,
random support sites, everything that's "open"), and all of these pages
themselves host images and links to "Watch XX Soccer Game Live 2020 HD" etc.,
so all link-spam. I
found my ways to battle these submissions and users, but obviously don't want
to reveal them here.

The other thing is signup-spam. I'd say a full 50% of my signups are spam, and
I do my best to remove those user accounts. What surprised me was that the
spammers seem to be human, i.e. using a Gmail address (which requires
verification), solving Captcha, entering form fields, clicking the email-
verify link, etc. Just crazy. Again, the countermeasures are not something I
would want to publicize...

------
yelsom
I wonder how one can know the history of a domain name before buying it.
Imagine making a new website and realizing that your domain name is banned
from all social networks!

~~~
gvb
There are various online blacklist check tools. Examples:

email:
[https://mxtoolbox.com/blacklists.aspx](https://mxtoolbox.com/blacklists.aspx)

DNS blacklists:
[https://www.ultratools.com/tools/spamDBLookup](https://www.ultratools.com/tools/spamDBLookup)

------
iagooar
> Other solutions include: requiring credit cards for trial periods, ban all
> adult content from the platform… But they all require me to put extra effort
> in the project. And I don’t have time for that.

There are people who spend their entire lives trying to build a working
business, and those who just walk away from what could possibly be a pretty
lucrative business for lack of time.

Humans, strange and exciting beings, really.

------
mensetmanusman
Link hiding tools are unfortunately a security threat due to all the bad
actors in the world.

------
klyrs
Every time I see somebody identify moderation as censorship, I remember my
experiences with public wikis, forums, etc. It's always spam and porn. They'll
spin up more fake users than you'll ever have real users, every damned time.
It's a vicious cycle, as real users won't stick around if you don't filter.

------
dumbfounder
I struggled with it myself with Twicsy, except Twicsy was just a window into
Twitter and made it much easier to find the porn and (gag) child porn. I
remember the first time I found a network of child porn, it made me sick to my
stomach. Took me hours to get rid of it, report it to Twitter, and to the
authorities. It was a problem for a long time, I felt like I was doing all of
Twitter's dirty work that they wouldn't do themselves. I made tools for people
to report it easily, and tools to eliminate it en masse. Twitter took way way
way way way too long cleaning up their act in this respect. They also often
just deleted tweets and not pictures; pictures could linger for months without
being deleted from their content servers, and still appear in Google.

------
twodave
IMO the better way to design a product like this to avoid abuse would be to
simply force users to sign in with each platform they want to link to. At that
point all you're building is a way for people to say, "These 5 social media
accounts are all me".

You could allow them to select one of their accounts to source information and
a photo for their combined profile. At that point you're not storing anything
besides links to social media profile pages.

In effect you get to piggy-back on those sites' abuse mitigation strategies
(though of course you're stuck with the lowest common denominator). Your
biggest decision at that point is which social media platforms to allow onto
your service.

------
fangorn
Was it legal "to take a closer look and see the links they are sharing"?

~~~
1f60c
Why do you think it could be illegal?

The ToS also includes this:

> You represent and warrant that: […] the Content will not or could not be
> reasonably considered to be obscene, inappropriate, defamatory, disparaging,
> indecent, seditious, offensive, pornographic, threatening, abusive, liable
> to incite racial hatred, discriminatory, blasphemous, in breach of
> confidence or in breach of privacy; […]

For which they obviously need to look at the data.

~~~
majewsky
That doesn't imply they are looking at the data during regular operation. That
just means that the user is liable for breach of contract if they post
noncompliant links. The first point in time where the admin really has to look
at the data would be during the discovery phase of the corresponding lawsuit
between operator and customer.

------
boo-ga-ga
Thinking about this while working on a social network (yes, I know) as a pet
project in my spare time, I've got a question: are there any tools for
handling improper text/images automatically? I think there are some lists of
"bad words", but I'm not sure they are available for the most widely used
languages. And are there libs/SDKs/online services that I can feed a picture,
and they will tag it as potentially improper - for example porn or some
swastikas - so I can manually pre-moderate only such images? Looks like it
could be a nice and useful service.

------
sas41
Oh boy, my first real comment on HN, please be gentle!

Okay, so I have something related to this.

 _two paragraph back-story was cut_...

So I built SASRip[1], an Open Source website with an API that allows you to
download audio or video from any web-page that is supported by youtube-dl (I
use youtube-dl and ffmpeg to do muxing/transcoding). I also built a browser
addon for it called Media Reaper[2] (chromium version available on SASRip's
website[3]).

Now, I wanted to build a no-BS, no-tracking website, so all I have is internal
logs. These logs record incoming requests like so: time, URL, ID string,
success/fail. The ID is just to tell where the request came from: the website,
an API call or the browser addon. I keep no IP data or anything like that. And
boy do people download the nasty stuff - there is all kinds of nasty stuff,
taboo stuff, feet stuff and stuff I didn't even know existed.

Now I live in constant fear that someday, looking through those logs, I will
find CP, and I am not sure what I should do. I know implementing tracking
methods goes against both my morals and the philosophy of the service, but at
the same time I am not sure if I can keep going, knowing I could do something
about it but am not.

Ultimately I think it is very likely that I will shut the service down if I
find CP on it, with no way of tracking the person down, perhaps leave a
message with why I shut down.

\----------------------------------------

P.S. I make no money on this service, it's purely donation based.

P.P.S. I know I can do muxing and transcoding via some really cool JS
libraries, but I wanted to sharpen some of my other skills with this project.

\----------------------------------------

[1] - [https://sasrip.cf/](https://sasrip.cf/)

[2] - [https://addons.mozilla.org/en-US/firefox/addon/media-
reaper/](https://addons.mozilla.org/en-US/firefox/addon/media-reaper/)

[3] - [https://sasrip.cf/Home/MediaReaper](https://sasrip.cf/Home/MediaReaper)

------
alexellisuk
I really like the design of the page - very clean and easy to understand the
value prop. I could see this being useful for influencers and bloggers. Shame
to hear about the mis-use :-(

------
fl0wenol
Everyone just wants to be horny on main, and I think the internet needs to get
over / accept this so we can focus on making that profitable and safe.

------
jonnycomputer
I recall in the early days of mobile apps downloading an ipad app which had
this neat idea that kids could share their drawings, made in the app, with
each other in a sort of random way. It did not take long for me to realize
that meant seeing an endless stream of inappropriate, or sometimes potentially
harmful content (e.g. from adults interested in exploiting children).

------
picodguyo
I created a site intended for family photo/video sharing and it did not take
long for people to start uploading penis pics. The weird thing is they don't
even have a reason or recipient. They just want their penis out there on the
internet in the hopes someone might stumble upon it.

------
desireco42
A long time ago I wanted to make a free speech forum and platform. As much as
I believe it would be good to have a place where you can say what you want,
the hassle of spam and plain nastiness is what always stopped me from doing
this.

------
ensignavenger
I'm building a platform that could be abused in similar ways. Does anyone have
any suggested resources I could read/use to avoid this problem, preferably
without employing captchas? Akismet might be helpful?

~~~
gwd
One thing to think of is not to be valuable to them.

So for instance, ALWAYS do HTML sanitization via whitelist; don't let anyone
put any javascript or any weird CSS or HTML into your posted content that you
don't allow there.

If you allow links, make sure they all have the "nofollow" tag, so that you
can't be used for link farming. (Both because it helps spammers, and because
if Google detects that your site is being used for link farming, your own
sites bomb.)
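A minimal sketch of whitelist sanitization plus forced `nofollow`, using only
the standard library. The tag/attribute policy is a made-up example, and a
production sanitizer should also drop the text inside removed
`<script>`/`<style>` blocks, which this sketch merely escapes.

```python
from html.parser import HTMLParser
from html import escape

# Hypothetical policy: tag -> attributes allowed on it.
ALLOWED = {"b": set(), "i": set(), "p": set(), "a": {"href"}}

class WhitelistSanitizer(HTMLParser):
    """Rebuild input HTML keeping only whitelisted tags/attributes,
    escaping all text, and forcing rel="nofollow" onto links."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED:
            return  # drop the tag; inner text still flows to handle_data
        kept = [(k, v) for k, v in attrs
                if k in ALLOWED[tag] and v is not None]
        if tag == "a":
            kept.append(("rel", "nofollow"))  # deny link-farming value
        attr_str = "".join(
            f' {k}="{escape(v, quote=True)}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html_text):
    s = WhitelistSanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)
```

Note this is the whitelist direction: everything is dropped or escaped unless
explicitly allowed, which is the only approach that survives new HTML features.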

Other tricks spammers use: Using a link for the username, since that is
sometimes emailed or displayed even when moderated content is not.

The site I run is a special-purpose conference website for a relatively small
community (usually 60-100 attendees), so manually moderating all content until
a user is verified works pretty well. The first year we had a handful of
spammers, but their content was all deleted by me before it was seen by
anyone. (With the exception of the link for a username. Missed that trick.)
The next year we didn't have any spam accounts at all.

No idea how well that will scale for your use case.

~~~
ensignavenger
I will certainly be employing these methods, and doing manual verification at
least initially- but if the project is successful, it will outrun my ability
to manually verify everything.

Thanks.

------
devit
I don't understand what the problem is with using the service to provide links
to porn, and I also don't understand why Instagram and Snapchat would care
about links to lists of links to porn.

~~~
buzzerbetrayed
I imagine it would have to do with 1) The number of minors on Instagram and
Snapchat. and 2) The image that Instagram and Snapchat want for their
platforms. They aren’t looking to become tumblr in the eyes of the masses.

------
victor106
> I optimized it in a way I’ll only pay hosting fees when the site is having a
> lot of traffic. Like 1M+ visits per month.

I wonder what optimizations were done so you can only pay for 1M+ hits?

~~~
elazzabi_
Hey, author of the post here.

The whole app lives in Firebase. Once a user hits a profile page, a cloud
function runs, gets the data from the DB, constructs the page, and caches it
at the CDN for 24h.

If a user adds/deletes something on their profile, the cache is invalidated.
Otherwise, the content is served from cache only, with no need for a lot of
computation.

With this, you can still get your way through Firebase's generous free tier
[https://firebase.google.com/pricing](https://firebase.google.com/pricing)
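The pattern above boils down to split cache lifetimes in the response headers:
`s-maxage` governs shared caches like a CDN, while `max-age` covers browsers.
A framework-agnostic sketch (the exact values and function name are
illustrative, not the author's actual configuration):

```python
def profile_page_response(html):
    """Build a response whose headers let a CDN serve the rendered
    page for 24h without re-invoking the origin function, while
    browsers keep it only briefly (so profile edits propagate soon
    after the CDN-side cache is purged)."""
    return {
        "status": 200,
        "headers": {
            # CDN keeps the page for 24h; browser cache stays short.
            "Cache-Control": "public, max-age=300, s-maxage=86400",
        },
        "body": html,
    }
```

With this split, only cache misses and explicit purges (after a profile edit)
cost a function invocation, which is how a 1M-view month can fit a small
invocation quota.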

~~~
JustWright
Firebase's free tier is only 125k invocations per month. How are you covering
1M+ invocations per month on the free tier?

~~~
rnotaro
Caches hit on the CDN are not trigerring an invocation.

So it's not covering 1M+ invocations per month on the free tier.

------
codezero
For most bad actors, a small amount of work is all that's needed to block them
from a new platform like this.

1\. Use email verification / captchas

2\. Block mailinator/throwaway account addresses

3\. Use something like Cloudflare/Akamai to protect against egregious bad
actors

4\. Add some other level of social validation like a major OAuth provider

5\. Simple pattern matching - as the author noted, these are rarely
sophisticated, and once you put even a small barrier up, they usually taper
off. You can even go further and limit the features your product has
specifically to hurt their use case, or you can actively shadow-ban the users
yourself, so they don't know their links aren't working.
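Point 5 can start as small as a short, growing list of regexes applied to
user-supplied fields; the patterns below are made-up examples, not a real
blocklist.

```python
import re

# Hypothetical patterns; real lists grow out of whatever your spammers do.
SPAM_PATTERNS = [
    re.compile(r"https?://\S+", re.I),          # raw link in a name field
    re.compile(r"\b(viagra|casino|free\s+followers)\b", re.I),
]

def looks_spammy(text, max_hits=0):
    """Count pattern hits; anything over max_hits gets held for review
    (or shadow-banned, per point 5 above) rather than hard-rejected,
    so false positives are recoverable."""
    hits = sum(1 for p in SPAM_PATTERNS if p.search(text))
    return hits > max_hits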

------
bilater
Firebase has an email verification feature. You should use it so spam emails
can be blocked from using your service.

------
r0rshrk
No offence to you but this is why I hate PH so much. Any product launched
there goes down in <6 months.

------
timwaagh
i think it's somewhat of a start for a business if you have the stomach for
it. you just need to figure out how to make money off the freeriders and get
around the social media censorship for the paid users.

------
ganzuul
Slashdot's moderation system is brilliant in this regard.

~~~
ramses0
Can you elaborate?

~~~
ganzuul
Oh, yes. It assigns a small number of 'moderation points' to random users, and
a larger number to trusted users. Essentially their algo picks out random
users to moderate just a little, which lets the diaspora bump up the
visibility of what they like. In addition to up- and downvotes, the mods label
a post "+1 Informative", "+1 Funny", or "-1 Flamebait". Users can assign more
visibility to something that has been voted Informative twice than to
something voted once Informative and once Funny. There is no "-1 I disagree".
The purpose of the votes is obvious.

There is meta-moderation on top of this, which has users check the moderation
of other users. I assume people who repeatedly abuse their privilege are given
sub-naive priority for mod point assignment.

This scales with traffic and requires minimal admin oversight.
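The scheme described above can be sketched roughly like this (heavily simplified; Slashdot's real algorithm is more involved, and all names here are made up):

```javascript
// Each moderation is a label carrying a +1/-1 weight.
const WEIGHTS = { Informative: +1, Insightful: +1, Funny: +1, Flamebait: -1 };

// Total score for a comment, starting from a base score of 1. A reader can
// supply per-label bonuses, e.g. to value "Informative" above "Funny".
function commentScore(moderations, labelBonus = {}) {
  return moderations.reduce(
    (sum, label) => sum + (WEIGHTS[label] || 0) + (labelBonus[label] || 0),
    1
  );
}

// Readers filter by threshold, so moderation only affects visibility.
function visible(comments, threshold, labelBonus) {
  return comments.filter((c) => commentScore(c.mods, labelBonus) >= threshold);
}
```

With a bonus on `Informative`, a comment moderated Informative twice outranks one moderated Informative once and Funny once, which is the distinction the comment describes.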

~~~
ramses0
I'd forgotten about the random mod privileges, and meta-mod.

------
guenthert
Pics or it didn't happen!

------
joshspankit
I’ve never wanted the ability to “react” to hn comments as much as I want it
for the comments on this post.

------
tomschwiha
In my experience, the fact that most bots don't execute JavaScript helps get
rid of most bot comments/contact requests.

~~~
95_JL_OK
Some can, Selenium or other browser automation is sometimes used in bots.
Really, any headless browser can be paired up with some automated QA toolkit
and turned into a rather effective bot. All that has to happen is for the bot
to load the page in headless mode, then issue the keystroke events to the
specified elements.

~~~
tomschwiha
In my experience most don't, as it costs more money (resources). For me, a
hidden input with some math solved by JavaScript took roughly 10 spam contact
requests per day down to zero.

Of course, the larger a page gets, the more dedicated the bots get.

But for the plain porn/Russian spam it helped.
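The server-side check for this kind of honeypot-plus-JS-challenge form could look something like this (field names are made up): the form has a visually hidden field that humans leave empty, plus a math challenge whose answer a small script fills in before submit. Bots that don't run JavaScript fail the challenge; dumb form-fillers trip the honeypot.

```javascript
// Returns true if a submitted form looks automated. `expectedAnswer` is the
// answer to the math challenge the page's JavaScript was supposed to solve.
function looksLikeBot(form, expectedAnswer) {
  // Honeypot: the hidden "website" field should never be filled by a human.
  if (form.website && form.website.trim() !== '') return true;
  // JS challenge: a missing or wrong answer means no JavaScript ran.
  return Number(form.challenge_answer) !== expectedAnswer;
}
```

Trivially bypassable by a headless browser, as the parent comment notes, but cheap enough that most bulk spammers won't bother.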

------
krisgenre
Off topic, but I wonder how people publicly admit they released a side project
while working a full-time job? AFAIK no company accepts this.

~~~
mvanbaak
As long as I'm not releasing something that is a direct competitor for my job,
it's all fine. Main thing I learned is that being open about it at work takes
away the fear and worries.

~~~
rvnx
As long as it's not very successful and not taking your time it's not an
issue.

------
elric
One of the hidden consequences of the insane censorship that's being pushed by
social media companies.

~~~
jacquesm
No, it's a consequence of not being on top of abuse on your platform. And
because those big social media platforms are working really hard to avoid
going down through no particular fault of their own they shut stuff like this
down at the first sight of trouble.

HN does the same with spam, which is the only reason we can have this
conversation in the first place. I take it that you do not consider that
censorship?

~~~
elric
If big platforms weren't so obsessed with trying to censor porn, there would
be no need for people to hijack other platforms simply so they could create
links to share porn. There's this bizarre obsession with "protecting" people
from nudity (especially female nipples). Advertisers will stay away from your
platform if there's nudity. Credit card companies will refuse to process your
payments (or charge much, much steeper fees). And for what? So people can
pretend that naked bodies don't exist? That sex isn't a thing? Seems odd to
me. If I want to private message my friends dirty pictures, who is FB (or
whatever platform) to tell me I can't?

As for spam, it's a shame that after 20+ years of the stuff, we still don't
have a good answer for it. Some subset of people is spending money on whatever
spammers are advertising, or they wouldn't be doing it.

~~~
lovegoblin
Broadly I agree with you - especially regarding payment processors - but the
problem is that this

> If I want to private message my friends dirty pictures, who is FB (or
> whatever platform) to tell me I can't?

VERY quickly becomes "unsolicited dick pics." That "links to share porn"
quickly becomes "links to share child porn", etc. You're not accounting for
all the genuinely abusive users.

~~~
elric
Sure. But the genuinely abusive users are always a minority. Although on the
scale of FB and the likes, that could still be a large absolute number.
Perhaps if we, as a society, stopped obsessing so much about nudity,
unsolicited dick pics might become a thing of the past. One can dream, I
guess.

------
Santosh83
Why do social media sites treat "links to X" the same as directly posting X on
their site? Surely the former is not against their terms of service? (Here I
am talking about legal adult content, obviously)...

These overarching powers to censor anything for any reason are disturbing, to
say the least.

~~~
fsflover
That's why we have Mastodon now, where everyone can choose what to censor
themselves.

~~~
hkt
Which inevitably means an environment normal people hate - because they end up
with some of the quirkier denizens of the internet on their global timelines
if they use a public instance.

~~~
fsflover
Not at all. "Normal" people simply choose "normal" instances.

~~~
sfkdjf9j3j
Which ones are you talking about?

~~~
fsflover
Depends on what is normal for you. See e.g.
[https://mastodon.social/public](https://mastodon.social/public) or
[https://floss.social/public](https://floss.social/public).

------
sneak
Guilt by association is illegal in most contexts, but apparently is fine for
the censors at Instagram.

~~~
_Microft
You're pointing out a problem there. The tools and processes around moderation
(censorship might be too strong a word) are pretty lacking.

Let me consider two cases.

One is the issue of a company making the rules, e.g. Facebook imposing
prudery worldwide, even in places where things like a bare breast are not
considered indecent (yet?!). Why should someone half a world away physically,
and maybe even further away morally, have a say in what people can or cannot
post in, e.g., Europe?

The other is bans/ignores/mutes by users. The tools for these are absolutely
lacking. Mutes or bans are usually permanent and there is no way to do
anything about it. It's like solving every problem with a sledgehammer. (If I
wanted to, I could mute or ban anyone I like on e.g. Twitter and they would
have _absolutely no means_ to explain themselves, appeal the decision, or
have any expectation that their sentence will be over one day.)

Maybe social media should rather be treated like public infrastructure and
have to provide services to all, while leaving any moderation decisions to
other entities (and just having to execute those decisions instead of both
deciding and executing).

/rant

