[flagged] TinyLetter shut down by Mailchimp, so I built the letterdrop (github.com/i365dev)
141 points by madawei2699 10 months ago | hide | past | favorite | 99 comments



> secure and efficient

Judging by the fact you generated it with an LLM, the quality of the code, and skimming the videos, I’m highly suspicious of the claim and don’t think you’re basing that assertion on anything defensible.

Adding “make the code secure and efficient” to your prompt doesn’t make it so.

I fear for the future quality of software products. Quality was already going down the drain and LLMs have the ability to accelerate the decline.


Every project that says it's "secure" is already a joke in itself. The multi-billion-dollar companies who say that about themselves can't even uphold the claim despite their near-infinite resources. "Secure" isn't a property anyone can claim up front; it's a property that can only be attributed in hindsight.


Reminds me of a Steve Jobs video where he talks about quality, and how the Japanese never use that as a qualifier in their advertisements. People don’t assume your product is quality because you tell them it is, they experience it or hear so through word of mouth.


Secure and fast (tm)

(400 MB Python script with supply-chain CVEs)

Welcome to software postmodernism.


But this works for everyone.

Then at some point you have to prevent the users from harming themselves - and devices get locked in walled gardens; and that arguably works for everyone too.

There are exceptions, kudos to the EU for working to protect user rights. This software postmodernism will go only as far as we allow it.


Does Microsoft Windows ring a bell? The bloat and insecurity are features, Mr., this is capitalism. Thank you for your service.


Quality went down at every step along the abstraction train, and yet look at all we have. I agree it feels bad, but I don't have any examples of abstraction-driven loss of knowledge being a macro-level negative, other than maybe Boeing's planes right now, and I'd argue that's corruption more than abstraction.


> Quality went down at every step along the abstraction train

Hard disagree. Abstraction by itself does not correlate with declining quality: you can be a very competent developer working in a high-level language, or a crappy developer working in low-level languages.


That's too narrowly scoped, and it's not a knock on the people. If you write CSS all day, there's no reason or benefit to understanding EE. It's easier for me to think about giant old wooden ships and how that knowledge got lost or replaced as we built modern navy ships. Find me someone who can caulk a boat with oakum today, for instance.


> Even I use the GPT model to generate the code, I still need to review the code and test it.

This should be the first paragraph in the README.

It's not responsible to release something like this in the first place, let alone without a big red warning sign in front of it.


I'm not a fan of AI slop, but if you're using code from GitHub, the burden has always been on you to make sure it's suitable. The MIT license provides no warranty. There has always been low quality code on GitHub, the bar to publish is low.


I agree with everything you said. Being aware of that is why I read everything.

One can license without a legal warranty and still release responsibly. If they don't, others will judge them as irresponsible. This is more of a social contract than a licensing one.


MIT license? If it's all ML-generated, then there is no human author and therefore no copyright. It's in the public domain, isn't it?


I would argue the person prompting the LLM and piecing it all together is still a human author. The LLM is just a tool. The average person cannot sit down and create a project with an LLM, so the human is still a key factor. To say there is no human would be similar to saying those who use Copilot are not authoring their code, when clearly there is a curator/director managing what the AI models produce. Auditing and testing and packaging the code, too, is an important part of the process.


The law disagrees.


Which law in which country?


It would just depend on the license (if any) for the LLM tool you're using, I think, right? There are no laws about AI generation and copyright yet that I know of. If what you're suggesting is true, then does Microsoft Word own everyone's novels, and is it the originator of patents for the last 20 years? Surely government will do a better job than the DMCA, even though they're more corrupt and captured now /s


That depends on your valuation.


The courts will be pleased as punch to hear it is that easy /s


The courts have repeatedly ruled on this. Well-known case: https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_disp...


That doesn't say what you think it does.

Quoting the judgement: "only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention"

Clearly, the machine in question is responding to human prompts. The LLM didn't create this program on its own.

So I think these are still open issues.


> The LLM didn't create this program on its own.

And wouldn't have finished on its own. Sometimes even other models, or chats with more finely tuned context, just don't spot the mistakes within n iterations. The human needs to, quite annoyed I must say, take a closer look herself: "What did you do, LLM, and how does it work?" Then you specify what must be done, or do it yourself because you can't remember the term that would make the LLM find the right symbols, and describing the term takes as long as fixing the issue yourself. This applies more to new kids on the block than to experienced devs, depending on the complexity of the subject, of course, but it's relevant for both, because the creation, the final product, belongs to the human, as does the LLM. My chat, my LLM, my copyright. If it ever became important, I'd raise an army against their lawyers.


What is "your LLM"? Did you create it? Buy it?


Well, I am paying for it, the subscription and/or the API calls. And the chat is the service that I and other users, including B2B, paid for. Dev and server costs are covered by that, theoretically and practically, because investors get what they paid for as well.

The LLM is instructed by my prompts. I'm giving the directions. At some point, the LLM will dynamically update its weights based on my input, its knowledge being all that is in the public domain of published human knowledge. My communication is being collected to my own, and others', future advantage, which would not be possible if we didn't use the LLM. The devs can't get much further without the user. The devs couldn't get anywhere without the training data. 'My LLM' is the current chat window; whatever's coming out of it, I am responsible for, not the company who created it, and if I am responsible, I have to own it, just like I own the things 'my child' creates until that child is ready to own its responsibility itself.


Ah, so all legal disputes around AI are completely shut-and-closed cases?

I think it's rather that we have a very small number of reference cases from the past, but since things have changed dramatically since then, everything is still up in the air.


No, just the question whether machines can hold copyright is decisively shut and closed, until the laws are changed. Many other questions are debated.

Always fun to see the "AI is special, no rules apply" crowd.


> the question whether machines can hold copyright is decisively shut and closed

That's just a small part of the question. Some of the questions we still need answers for are:

1) If ChatGPT produces an answer that is very similar to something it was trained on, can you use that answer? Or do you need a license from the original author? If you do need a license, how do you figure out if an answer is close enough to some original work to require a license? How much effort can we expect a user of an LLM to put into searching for original sources? If you can't identify sources, can you use an LLM at all?

2) Is it even allowed to train an LLM on data without asking for permission?

3) Is prompting an LLM a creative act? Is ChatGPT just a tool like a typewriter? If you type a poem on a typewriter, no sane person would consider the typewriter the copyright holder. So shouldn't we consider the person prompting the LLM the author?


Those questions are all interesting in themselves, but utterly irrelevant here.

The first one asks whether someone other than the "prompter" should have copyright.

The second one asks about other people‘s copyright.

The third one is answered by case law, including the Monkey Selfie case. The wildlife photography setup is very much comparable to your typewriter there.


If there's a thousand monkeys with a thousand typewriters, and they all write a single page to a thousand-page epic, is the scientist allowed to copyright and sell it as a book?


In the US, absolutely. The scientist set up the conditions and curated the output, and animals have no creative copyright privileges of their own.


You're losing the plot and contradicting yourself across your comments.

What exactly is the thesis you are trying to argue?


The wildlife photography setup had no human intervention when the photo was created since the monkey took the selfie.

In this case TinyLetter was "written" by GPT4o with a LOT of prompting. Have you even read it? https://github.com/i365dev/LetterDrop/blob/main/docs/CDDR/ap...

As always, some random person in a tech forum thinks they have all the answers to non-trivial legal questions...

I mean, we haven't even talked about jurisdiction issues yet. US case law does not apply globally.


LetterDrop, not TinyLetter!


My bad.


If you think too hard about it it becomes pretty clear AI just breaks the already tenuous justifications for copyright.


This is a fascinating comment.

Taken alone, it is concise, correct, and well-cited. It does lack a crux, though.

But in context, seeing the commenter's previous comment, there is an implied crux that is the opposite of the truth, since a reader assumes that any comment is implicitly in support of the commenter's previous comment, and in opposition to the proximate comment it replies to, unless otherwise indicated.


He said ChatGPT, so it's LLM-created. "Fair use" may be the current legal defense, but who knows if or what Congress will do. The lobbying power is definitely with the LLM owners, not the copyright holders or creatives.

But there's a non-zero chance LLMs won't face a class-action lawsuit.


What about open source LLM?


Can open source software use copyright material without permission?

Any copyright holder that can prove their material is a part of the training data could have the model taken down.

But again this will all hinge on the court’s definition of “fair use”.


> This isn't the first time I've faced such an issue, whenever this happens, I lose all my subscribers

That does not strike me as believable - do you really not have the list elsewhere, and does every provider who shuts down not give you the option to get your list out?


In the case of TinyLetter, this is what happened to me: I didn't receive any email notifications before the shutdown (I verified multiple times). I reached out to Mailchimp and they deleted everything (https://x.com/thibaut_barrere/status/1771931157564727468).

Am I responsible for backups ultimately? Absolutely. Will I trust Mailchimp in the future for my own projects? Not very likely.


Wonder if they really don’t have any retention at all. Maybe it would magically become available if they got a letter from a lawyer.


Yes, some form of organisational soft-delete is more than likely.

> We're afraid the TinyLetter data is no longer available

could perfectly well mean: the backups we keep for one year are encrypted in case of legal trouble, but internally it takes legal action before they're worth reaching for.


It sounds like you weren't sending very frequently; if you logged in anytime over those 5 months, there was a banner letting you know it was getting shut down.

If it makes you feel better, deliverability people will tell you a list that’s been sitting with no sends for a year or more quickly becomes worthless due to list rot (abandoned emails, expired domains, job switchers, people who forgot you exist and will mark spam/unsub, etc).

When you send to an old untouched list, it can tank your domain rep in Gmail since the algo sees tons of bounces/unsubscribes/negative signals. They basically assume you’re a spammer. So it probably would have been a PITA to warm that list again anyways. It’s shitty that gmail incentivizes bulk senders to be annoying and send a lot, but it is what it is.


That actually sounds very likely.

I've never had an email delivery company return any communication at all once they have decided you're bad. (0% spam report rate, 0.01% bounce rate)


I think you’re referring to a different event (having an account suspended or banned) than the parent comment, who’s questioning the notion that one wouldn’t have a subscriber list in a CSV or Google Sheet outside of the account, and/or in the event that a service closes up shop, that they would not extend an effort to allow users to export their data.


It's a general comment on the professionalism of the bulk email delivery industry, namely that there is none.

Don't expect them to support a free service they decided to shut down anyway.


That's still not answering the question though. I don't know of any bulk email provider who doesn't allow you to export and import your lists via an open format like csv.
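To make the export point concrete, here is a minimal sketch (field names are illustrative, not any provider's actual schema) of round-tripping a subscriber list through CSV with Python's standard library:

```python
import csv
import io

# Hypothetical subscriber records; real exports typically include at least
# the address and an opt-in timestamp.
subscribers = [
    {"email": "alice@example.com", "subscribed_at": "2024-01-05"},
    {"email": "bob@example.com", "subscribed_at": "2024-02-11"},
]

def export_csv(rows):
    """Serialize subscriber dicts to a CSV string (the open, portable format)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["email", "subscribed_at"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def import_csv(text):
    """Parse a CSV export back into the same list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

backup = export_csv(subscribers)
restored = import_csv(backup)
```

Keeping a copy like this outside any provider's account is exactly the backup the parent comment is asking about.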


> It's a general comment on the professionalism of the bulk email delivery industry, namely there is none.

I strongly disagree. I've had the chance to speak with numerous deliverability folks who work at ESPs over the years, and for the most part they've all been incredibly professional and helpful.


The GPT-4o generated project here doesn't even take care of sending the emails - you need to implement a different worker to handle the actual email sending. Of course, this is very easy to create (as per OP), just generate the code:

> After that, you need to create a Cloudflare Worker as a notification service. The code is very simple, you can use the ChatGPT to generate the code.


Jesus. Github is just going to get filled up with autogenerated crap, isn't it?


I mean, the crap that was already there wasn't great to begin with.


For me https://buttondown.email is the equivalent for TinyLetter before it got acquired by Mailchimp.

It's a honest small business and the founder also has an interesting blog: https://buttondown.email/blog


looks nice, but uhm, the price is pretty much the same as Brevo


looking at the pricing, Brevo is way cheaper


The fact that this whole project was generated by GPT without any human curation is... something. I don't know exactly what, but it was interesting enough to upvote.


Yeah, I took a look at the code and the prompt documentation (which is in Chinese) and I came to the conclusion “why?”. It is interesting though.


Of course it will happen again. If the model becomes too popular Cloudflare will restrict free usage of their APIs. And if it is too unpopular they will just stop supporting them one day.


@dang maybe the title should reflect the originality of the project, aka the fact it's all AI generated?


Hi, to the author, this is a typo worthy of fixing: "Disscussion"


I wouldn't diss it, if all I want to do is cuss eventually.


"I used the GPT-4o model to generate the code for LetterDrop. That means the code is generated by the AI model, and I only need to provide the prompts to the model. This approach is very efficient and can save a lot of time. I've also recorded a video to show how to create the LetterDrop project using the GPT-4o model.

That also means you can easily customize the code by changing the prompts. You can find the prompts in the CDDR file."

???????


This is from the "How to contribute?" section in readme.

Personally, I found it unbelievable. The author thinks contributors should change the prompt instead of the code. To the best of my knowledge, this kind of code generation is non-deterministic, which makes it impossible to audit for correctness.


I think code generation is, generally speaking, pretty awful, but contributing to an LLM prompt is kind of an interesting evolution of software design. With that said, I don't really think it saves a lot of time (the author's Youtube videos[1] are literally like 10+ hours of messing with prompts and copy-pasting code), whereas the product in itself isn't particularly complicated.

[1] https://www.youtube.com/playlist?list=PL21oMWN6Y7PCqSwbwesD4...


And the code is godawful. Just look at this mess for internationalization: https://github.com/i365dev/LetterDrop/blob/main/app%2Fsrc%2F... . The code is unmaintainable by a human - there is a ton of unnecessary duplication, a gigantic html string instead of something like tsx, etc etc. I don't know how the OP can trust this output - I certainly don't.
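The duplication complaint is concrete: a near-identical HTML string repeated per language. A common alternative (sketched here in Python for brevity; the project itself is TypeScript, and these locale keys and strings are illustrative, not taken from the repo) is one shared template plus a per-locale string table, so adding a language touches only data:

```python
# Sketch: one shared template, per-locale strings kept as data,
# instead of duplicating the whole HTML block for each language.
TEMPLATE = "<h1>{title}</h1><p>{cta}</p>"

STRINGS = {
    "en": {"title": "Subscribe", "cta": "Get the newsletter"},
    "zh": {"title": "订阅", "cta": "获取新闻通讯"},
}

def render(locale: str) -> str:
    """Fill the single template with the chosen locale's strings."""
    return TEMPLATE.format(**STRINGS[locale])
```

With this shape, a missing translation key fails loudly at render time rather than drifting silently between duplicated copies of the markup.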


I am currently on a (large) project that has been running since 2016; it has 100k+ files like this. The team of humans (I am there to write a code-quality report ;) has zero problems maintaining it. This weird 'all code must be pristine' attitude on HN is interesting; I do a lot of these code audits, and more than 80% of the mission-critical code I encounter at large companies and institutions is like this or worse, and is happily maintained by humans. Take some Spring projects started somewhere in the 2000s with the original team gone: I see tens of thousands of lines added in JSP files because it was faster and easier than actually understanding the structure and properly writing the classes.


> And the code is godawful

You would be surprised how many big corporations are running on not-so-elegant code and generating revenue all the same.

I used to be more principled about code best practices and having things well written, but with LLMs I discovered that unless you're in a very critical domain where performance or maintainability is a hard requirement, the LLMs with bad code just solve the problem and everyone can be happy.


Everyone, that is, except for the customers dealing with buggy slow products that will eventually be breached and leak their data. And the good developers who now have to hunt for dumb bugs without clues for where to look because the person who generated the code can’t help either.


But good developers can use LLMs to hunt for the dumb bugs ;)


Are these the same companies we hear about because they got hacked through security holes in their code?


It’s not a direct relationship as far as I know. The point is that maybe the LLMs gave enablement for the people that thinks about software as a “means to an end” and I think it’s completely fine.

I like elegant code, well written and easy to maintain; but for a long time the field of programming started to develop the pristine idea that the programming _was the end_ and not as a way to achieve something.

What’s the post author made was to bring it in a more latent way saying that you could use programming to achieve the same results but you do not need.


Isn't that basically what all those web frameworks we've been using are (except this is worse and less maintainable)?

The generated code is very hard for a human to maintain unless you're very knowledgeable and know what specific things to change (generally we only interact at the framework level unless there are specific requirements, like how the author proposed to "modify the prompt" instead of the generated code).

That said, I don't think the so-called "prompt-based" cough cough engineering is viable in its current state, or anytime soon.


At least using a framework like React would expose these HTML blocks to the typechecker.


Hmm, yep you have a point there


Wow, CSS in a style tag in a string in try block in an anonymous function all declared right there in index.ts—no units, to say nothing of tests!

Would be a huge nuisance to maintain.


Huge nuisance to manually maintain, but the suggestion is that you maintain it with an LLM.

Is the code bad? Sure.

Could I build this in 10 hours without an LLM, having no experience with Cloudflare Workers and TypeScript? Probably not.

Could I build this in 10 hours with an LLM? Probably.


Look, maybe I'm naive but the "technical debt" this approach generates is appalling. Someone's going to have to shave that yak or you'll trip over it on your way to market.

Whatever happened to "write code like it'll be maintained by an axe murderer?"[0]

0. https://wiki.c2.com/?CodeForTheMaintainer


What if the maintainer is assumed to be an LLM?


The jury's still out on that one.

Personally, I don't think the GUI's yet been invented which can reliably wrangle LLMs to those ends.


Yikes. Best of luck adding a new language or updating your templates without introducing bugs or discrepancies.


This is interesting as an experiment, but an awful way to produce production code.


If you cannot program, then maybe it helps?


Isn't GPT-4 non-deterministic?

So the same prompt won't necessarily give the same result.


Yes, it is non-deterministic from our point of view, because OpenAI does not let us set the seed.


Sure you can: https://platform.openai.com/docs/api-reference/chat/create#c...

It does not guarantee determinism, though, because of the nature of GPT4o's optimizations.
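For reference, the parameter the linked docs describe can be sketched like this (a hypothetical payload built as plain data; `seed` is a documented request field, but OpenAI only promises best-effort reproducibility, not strict determinism):

```python
# Sketch of a chat-completions request payload using the `seed` parameter.
# Sending this with a real client requires an API key; here we only build
# the payload to show where the field goes. The prompt text is illustrative.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Generate a minimal newsletter signup handler."}
    ],
    "seed": 42,        # fixed seed: best-effort reproducibility across calls
    "temperature": 0,  # minimize sampling randomness
}
```

The response also carries a `system_fingerprint`; if it differs between calls, the backend configuration changed and outputs may differ despite the fixed seed.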


That was unclear to me too, so I opened an issue (the very first of the project, yay!). The instructions have been updated and might be a bit clearer now: direct contributions to the code are welcome.


> ???????

Please clarify what ??????? means here? You’ll find us willing to hear out your point of view, but first you’ll have to provide it.


The technique for producing the code is so unconventional, the grandparent commenter is unsure if this is a serious attempt, a joke, or a stunt.


I'm working on a self hosted Patreon alternative. Would this be effective for sending emails from our creators? Thinking of running an emailing service for them.


No.

Are you using RoR?


Haven't in a while, how come?


Action Mailer would make for an easy solution.


Was TinyLetter the one that Pud from FuckedCompany made?


Yes


Imagine trying to get independent of one service by becoming dependent on at least two others (ChatGPT and a cloud provider).


Because a free service is no longer available, and you built another, non-free service?


It's MIT licensed, how much more free do you need?


I mean cloudflare worker.


This is remarkable. If things like this can be AI-generated (or even tweaked) to fit a use case, that kind of spells doom for bloated SaaS products. Would be amazing to see the progress along this vector.

You may want to look at https://github.com/knadh/listmonk for how to handle heavier loads. This is also a nice project used by Zerodha in India to send millions of newsletters on a daily basis.



