Judging by the fact you generated it with an LLM, the quality of the code, and skimming the videos, I’m highly suspicious of the claim and don’t think you’re basing that assertion on anything defensible.
Adding “make the code secure and efficient” to your prompt doesn’t make it so.
I fear for the future quality of software products. Quality was already going down the drain and LLMs have the ability to accelerate the decline.
Every project that says it's "secure" is already a complete joke in itself. The multi-billion-dollar companies who say that about themselves can't uphold that claim despite their near-infinite resources. "Secure" isn't a property that anyone can claim up front; it's a property that can only be attributed looking back in time.
Reminds me of a Steve Jobs video where he talks about quality, and how the Japanese never use that as a qualifier in their advertisements. People don’t assume your product is quality because you tell them it is, they experience it or hear so through word of mouth.
Then at some point you have to prevent the users from harming themselves - and devices get locked in walled gardens; and that arguably works for everyone too.
There are exceptions, kudos to the EU for working to protect user rights. This software postmodernism will go only as far as we allow it.
Quality went down at every step along the abstraction train, and look at all we have, though. I agree it feels bad, but I don't have any examples of abstraction-driven loss of knowledge being a macro-negative thing, other than maybe Boeing planes right now, and I'd argue that's corruption more than abstraction.
> Quality went down at every step along the abstraction train
Hard disagree. Abstraction by itself does not correlate with declining quality; you can be a very competent developer working in a high-level language, or a crappy developer working in a low-level one.
That's too narrowly scoped, and it's not a knock on the people. If you write CSS code all day, there's no reason/benefit to understand EE. It's easier for me to think about giant old wooden ships and how that knowledge got lost/replaced as we built modern navy ships. Find me someone who can caulk a boat with oakum today, for instance.
I'm not a fan of AI slop, but if you're using code from GitHub, the burden has always been on you to make sure it's suitable. The MIT license provides no warranty. There has always been low quality code on GitHub, the bar to publish is low.
I agree with everything you said. Being aware of that is why I read everything.
One can license without legal warranty and still do it responsibly. If they don't, others will judge them as irresponsible. This is more of a social contract than a licensing one.
I would argue the person prompting the LLM and piecing it all together is still a human author. The LLM is just a tool. The average person cannot sit down and create a project with an LLM, so the human is still a key factor. To say there is no human would be similar to saying those who use Copilot are not authoring their code, when clearly there is a curator/director managing what the AI models produce. Auditing and testing and packaging the code, too, is an important part of the process.
I think it would just depend on what the license for the LLM tool you're using (if any) says, right? There are no laws about AI generation and copyright yet that I know of. If what you're suggesting is true, then does Microsoft Word own everyone's novels, and is it the originator of every patent filed in the last 20 years? Surely government will do a better job than the DMCA, even though they're more corrupt and captured now /s
Quoting the judgement:
"only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention"
Clearly, the machine in question is responding to human prompts. The LLM didn't create this program on its own.
And it wouldn't have finished on its own. Sometimes even other models, or chats with more fine-tuned context, just don't spot the mistakes within n iterations. The human needs to, quite annoyed I must say, take a closer look herself: "What did your LLM do, and how does it work?" Then you specify what must be done, or do it yourself because you can't remember the term that would make the LLM find the right symbols, and describing the term takes as long as fixing the issue yourself. This applies more to the new kids on the block than to experienced devs, depending on the complexity of the subject, of course, but it's relevant for both, because the creation, the final product, belongs to the human, as does the LLM. My chat, my LLM, my copyright. If it ever became important, I'd raise an army against their lawyers.
Well, I am paying for it, the subscription and or the API calls. And the chat is the service I and other users, including B2B, paid for. Dev and server costs are covered by that, theoretically and practically, because investors get what they paid for as well.
The LLM is instructed by my prompts. I'm giving the directions. At some point, the LLM will dynamically update its weights based on my input, its knowledge being all that is in the higher public domain: published human knowledge. My communication is being collected for my own, and others', future advantage, which would not be possible if we didn't use the LLM. The devs can't make it much further without the user. The devs couldn't make it anywhere without the training data. "My LLM" is the current chat window; whatever's coming out of it, I am responsible for that, not the company who created it, and if I am responsible, I have to own it, just like I own the things "my child" creates until that child is ready to own its responsibility itself.
Ah so all Legal disputes around AI are completely shut and closed cases?
I think it's rather that we have a very small number of reference cases from the past, and since things have changed dramatically since then, everything is still up in the air.
> the question whether machines can hold copyright is decisively shut and closed
That's just a small part of the question. Some of the questions we still need answers for are:
1) If ChatGPT produces an answer that is very similar to something it was trained on, can you use that answer? Or do you need a license from the original author? If you do need a license, how do you figure out if an answer is close enough to some original work to require a license? How much effort can we expect a user of an LLM to put into searching for original sources? If you can't identify sources, can you use an LLM at all?
2) Is it even allowed to train an LLM on data without asking for permission?
3) Is prompting an LLM a creative act? Is ChatGPT just a tool like a typewriter? If you type a poem on a typewriter, no sane person would consider the typewriter the copyright holder. So shouldn't we consider the person prompting the LLM the author?
Those questions are all interesting in themselves, but utterly irrelevant here.
The first one asks whether someone other than the "prompter" should hold copyright.
The second one asks about other people's copyright.
The third one is answered by case law, including the Monkey Selfie case. The wildlife photography setup is very much comparable to your typewriter there.
If there's a thousand monkeys with a thousand typewriters, and they all write a single page to a thousand-page epic, is the scientist allowed to copyright and sell it as a book?
Taken alone, it is concise, correct, and well-cited. It does lack a crux, though.
But in context, seeing the commenter's previous comment, there is an implied crux that is the opposite of the truth, since a reader assumes that any comment is implicitly in support of the commenter's previous comment, and in opposition to the proximate comment it replies to, unless otherwise indicated.
He said ChatGPT, so it was LLM-created. "Fair use" may be the current legal defense, but who knows if or what Congress will do. The lobbying power definitely lies with the LLM owners, not with the copyright holders or creatives.
But there’s a non-zero chance LLMs won’t face a class action lawsuit.
> This isn't the first time I've faced such an issue, whenever this happens, I lose all my subscribers
That does not strike me as believable - do you really not have the list elsewhere, and does every provider who shuts down not give you the option to get your list out?
In the case of TinyLetter, this is what happened to me: I didn't receive any email notifications before the shutdown (I verified multiple times). I reached out to Mailchimp and they deleted everything (https://x.com/thibaut_barrere/status/1771931157564727468).
Am I responsible for backups ultimately? Absolutely. Will I trust Mailchimp in the future for my own projects? Not very likely.
Yes, some form of organisational soft-delete is more than likely.
> We're afraid the TinyLetter data is no longer available
could well mean: the backups we keep for one year are encrypted in case of legal trouble, but it would take a legal action internally for them to be worth reaching for.
It sounds like you weren't sending very frequently; if you had logged in anytime over those 5 months, there was a banner letting you know it was getting shut down.
If it makes you feel better, deliverability people will tell you a list that’s been sitting with no sends for a year or more quickly becomes worthless due to list rot (abandoned emails, expired domains, job switchers, people who forgot you exist and will mark spam/unsub, etc).
When you send to an old untouched list, it can tank your domain rep in Gmail since the algo sees tons of bounces/unsubscribes/negative signals. They basically assume you’re a spammer. So it probably would have been a PITA to warm that list again anyways. It’s shitty that gmail incentivizes bulk senders to be annoying and send a lot, but it is what it is.
I think you’re referring to a different event (having an account suspended or banned) than the parent comment, who’s questioning the notion that one wouldn’t have a subscriber list in a CSV or Google Sheet outside of the account, and/or in the event that a service closes up shop, that they would not extend an effort to allow users to export their data.
That's still not answering the question though. I don't know of any bulk email provider who doesn't allow you to export and import your lists via an open format like csv.
> It's a general comment on the professionalism of the bulk email delivery industry, namely there is none.
I strongly disagree. I've had the chance to speak with numerous deliverability folks who work at ESPs over the years, and they've all, for the most part, been incredibly professional and helpful.
The GPT-4o-generated project here doesn't even take care of sending the emails; you need to implement a separate worker to handle the actual email sending. Of course, this is very easy to create (as per OP): just generate the code:
> After that, you need to create a Cloudflare Worker as a notification service. The code is very simple, you can use the ChatGPT to generate the code.
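The README stops there, so here is a minimal, hypothetical sketch of what such a notification Worker might look like. The payload shape and the env bindings (`MAIL_API_URL`, `MAIL_API_KEY`) are assumptions for illustration, not LetterDrop's actual names:

```javascript
// Hypothetical sketch of the "notification service" Worker the README
// alludes to. Payload shape and env names are assumptions, not the
// project's real API.

// Build the JSON body for a generic email-sending API from a
// subscriber record and a newsletter issue. Pure and unit-testable.
function buildEmailPayload(subscriber, issue) {
  return {
    to: subscriber.email,
    from: "newsletter@example.com",
    subject: issue.subject,
    html: issue.html,
  };
}

// Worker handler: accept a POST from the main app and relay one
// message per subscriber to the mail API. In a real Worker module you
// would write `export default worker`.
const worker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }
    const { subscribers, issue } = await request.json();
    for (const sub of subscribers) {
      await fetch(env.MAIL_API_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${env.MAIL_API_KEY}`,
        },
        body: JSON.stringify(buildEmailPayload(sub, issue)),
      });
    }
    return new Response("accepted", { status: 202 });
  },
};
```

Even a sketch this small shows why "just generate the code" undersells the work: retries, bounce handling, and rate limiting are all still missing.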
The fact that this whole project was generated by GPT without any human curation is... something. I don't know exactly what, but it was interesting enough to upvote.
Of course it will happen again. If the model becomes too popular Cloudflare will restrict free usage of their APIs. And if it is too unpopular they will just stop supporting them one day.
"I used the GPT-4o model to generate the code for LetterDrop. That means the code is generated by the AI model, and I only need to provide the prompts to the model. This approach is very efficient and can save a lot of time. I've also recorded a video to show how to create the LetterDrop project using the GPT-4o model.
That also means you can easily customize the code by changing the prompts. You can find the prompts in the CDDR file."
This is from the "How to contribute?" section in readme.
Personally, I find it unbelievable. The author thinks contributors should change the prompt instead of the code.
To the best of my knowledge, this kind of code generation is non-deterministic, which makes it impossible to audit for correctness.
I think code generation is, generally speaking, pretty awful, but contributing to an LLM prompt is kind of an interesting evolution of software design. With that said, I don't really think it saves a lot of time (the author's Youtube videos[1] are literally like 10+ hours of messing with prompts and copy-pasting code), whereas the product in itself isn't particularly complicated.
And the code is godawful. Just look at this mess for internationalization: https://github.com/i365dev/LetterDrop/blob/main/app%2Fsrc%2F... . The code is unmaintainable by a human - there is a ton of unnecessary duplication, a gigantic html string instead of something like tsx, etc etc. I don't know how the OP can trust this output - I certainly don't.
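For contrast, the standard way to avoid that kind of duplication is a per-locale message table rendered through one shared template. A minimal sketch (hypothetical, not the project's actual code):

```javascript
// Hypothetical sketch: one message table per locale plus a single
// shared template, instead of duplicating the whole HTML string for
// every language.
const messages = {
  en: { title: "Subscribe", cta: "Join the newsletter" },
  zh: { title: "订阅", cta: "加入新闻通讯" },
};

// Look up the requested locale, falling back to English, and
// interpolate into one template string.
function renderSubscribePage(locale) {
  const t = messages[locale] ?? messages.en;
  return `<h1>${t.title}</h1><button>${t.cta}</button>`;
}
```

Adding a language then means adding one entry to `messages`, not pasting another copy of the whole page.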
I am in a (large) project currently that has been running since 2016; it has 100k+ files like this. The team of humans (I am there to write a code quality report ;) has zero problems maintaining it. This weird 'all code must be pristine' attitude on HN is interesting; I do a lot of these types of code audits, and more than 80% of the mission-critical code I encounter at large companies/institutions is like this or worse, and is happily maintained by humans. Take some Spring projects started somewhere in the 2000s with the original team gone: I see tens of thousands of lines added to JSP files because it was faster/easier than actually understanding the structure and properly writing the classes, etc.
You would be surprised how many big corporations are running on not-so-elegant code and generating revenue all over.
I used to be more principled about code best practices and having things well written, but with LLMs I've discovered that unless you're in a very critical domain where performance or maintainability is a hard requirement, the LLMs with bad code just solve the problem, and everyone can be happy.
Everyone, that is, except for the customers dealing with buggy slow products that will eventually be breached and leak their data. And the good developers who now have to hunt for dumb bugs without clues for where to look because the person who generated the code can’t help either.
It's not a direct relationship as far as I know. The point is that maybe the LLMs enabled the people who think of software as a "means to an end", and I think that's completely fine.
I like elegant code, well written and easy to maintain; but for a long time the field of programming has nurtured the pristine idea that the programming _was the end_, and not a way to achieve something.
What the post's author did was surface that idea: you can use programming to achieve the same results, but you don't need to treat the code itself as the point.
Isn't that basically what all those web frameworks we've been using are (except this is worse and less maintainable)?
The generated code is very hard for a human to maintain unless you're very knowledgeable and know exactly what to change (generally we only interact at the framework level unless there are specific requirements, which is why the author proposes "modifying the prompt" instead of the generated code).
That said, I don't think this so-called "prompt-based" (cough, cough) engineering is viable in its current state, or anytime soon.
Look, maybe I'm naive but the "technical debt" this approach generates is appalling. Someone's going to have to shave that yak or you'll trip over it on your way to market.
Whatever happened to "write code like it'll be maintained by an axe murderer?"[0]
That was unclear for me too, so I opened an issue (the very first of the project, yay!). The instructions have been updated, they might be a bit clearer now: direct contributions to the code are welcome.
I'm working on a self hosted Patreon alternative. Would this be effective for sending emails from our creators? Thinking of running an emailing service for them.
This is remarkable. If things like these can be AI-generated (or even tweaked) to fit a use case, that kind of spells doom for bloated SaaS products. It would be amazing to see the progress along this vector.
You may want to look at https://github.com/knadh/listmonk for how to handle heavier loads. This is also a nice project used by Zerodha in India to send millions of newsletters on a daily basis.