Edit: Fun fact, I cannot edit my original comment. But over-zealous flaggers seem to have taken care of it on my behalf. It's unclear what about that comment deserved flagging; I guess raising concerns about the OP's admittedly problematic project is broadly the same behavior as the racist troll account that was previously active in this thread. Well done moralizing my original moralizing. The irony is…well, pretty mundane in this case, really.
You could name a project any number of completely weird and absurd and offensive names, and it would have no bearing on the matter at hand, which is that code was illegally stolen and relicensed without the consent of the author. This is not a moral issue.
You yourself admitted that your original comment was harsh after the author responded to you.
This being on page 2 with 247 upvotes in the three hour time period this post has been up is surprising to me. I wouldn't be surprised if @dang is suppressing it (but I'd also be happy to hear that it's not being suppressed).
It's pretty spineless for the Pickle team to come out and pretend they mistakenly re-licensed GPL code. Hilarious.
> in initially building it we included code from a GPL-licensed project that we incorrectly attributed as Apache
How can you write a sentence like that in good faith?
The first rule of HN moderation is that we moderate (i.e., intervene) less if a story reflects negatively on a YC company or YC itself.
This principle goes right back to pg days, and was the first thing he taught dang [1].
That said, it doesn't mean we avoid moderation at all and it doesn't mean the guidelines all go out the window.
Different factors influence the story's rank and visibility on the front page: upvotes, flags, the flamewar detector, and settings to turn these penalties on/off. I'm actively watching the thread to keep it on the front page, as per the rule.
That said, the guidelines ask us to avoid fulmination and assume good faith. Whilst it's fair enough to criticize and question a company when they do something like this, we can also be adult enough to look at the evidence before us and recognize that this was most likely a dumb mistake that they've moved quickly to correct.
Setting the license text is an explicit act, and it seems fairly unlikely that anyone who creates software would think they can relicense GPL code, or that they didn't need to Google it first. Doing something that you meant to do isn't a mistake; it's a choice.
It seems more likely that they didn't think anyone would notice.
> It seems more likely that they didn't think anyone would notice.
Maybe, but if that's what they thought (and I have no idea, I haven't spoken to them or anyone else about it), it's very foolish, because this kind of thing will always get noticed eventually, especially if the project becomes successful.
YC tells founders that one of the fastest ways to kill your company is to base your product on code that's not legitimate to use (i.e., that you didn't write yourself or that is used in breach of its license). That's because it's one of the fastest ways to kill funding rounds, acquisitions and enterprise deals. Not everyone listens or understands.
The application form even asks (or at least it did the last time I checked) whether you wrote your code yourself, to raise the issue of IP ownership/licensing from the start.
The evidence clearly shows it was not a 'dumb mistake'.
They claim they wrote the whole thing in 4 days. They did not attribute the original author in ANY way.
They clearly showed they intended to steal the author's work and sell it as if they wrote it. YC has become such a dumpster fire if that kind of behaviour is even remotely accepted or called a 'dumb mistake'.
This comment [1] from dang a couple of years ago touches on our reasons for not publishing a moderation log, and links to many more explanations over the years.
We're happy to be judged on the outcome, which, in this instance, is that the story has been on the front page for hours and everyone is able to have their say.
> And as these events keep happening, your credibility erodes.
YC has invested in thousands of companies by now and hundreds of new ones per year. That includes many founders who are young and inexperienced, and also plenty from diverse backgrounds, which, now that I've had time to dig into it, seems to apply here. Screwups are going to happen, as in every part of life; the law of large numbers guarantees it. What matters is what people do to make it right.
Thanks! I would say no. Mermaid is strongly code-first diagramming, which is an excellent use case and niche in its own right. I would be surprised if Mermaid ended up with a WYSIWYG editor on top of it, since that is pretty counter to its philosophy (as far as I understand, anyway).
The only logical thing to do personally is to take it completely off your mobile devices. You still get caught in the dragnet if you have friends and family posting about you.
Also in many places WhatsApp is practically a requirement for daily life which is frustrating. What I need is some kind of restricted app sandbox in which to place untrustworthy apps, they see a fake filesystem, fake system calls, etc.
What I need is some kind of restricted app sandbox in which to place untrustworthy apps, they see a fake filesystem, fake system calls, etc.
GrapheneOS comes pretty close to that I think? You can put such apps in a separate profile and cut off a lot of permissions. You can also scope contacts, storage, etc.
Yeah, the first thing to do on an Android phone is to use adb or something like the Universal Android Debloater to uninstall (besides the Facebook app itself) crap like com.facebook.system, com.facebook.appmanager, and com.facebook.services (example commands below).
Description of the latter from the UAD list:
Facebook Services is a tool that lets you manage different Facebook services automatically using your Android device. In particular, the tool focuses on searching for nearby shops and establishments based on your interests.
Why is this even always running on a pristine Samsung, etc. phone? Creepy.
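For anyone doing it manually with adb, a rough sketch (package names can vary by device and region, so list what's actually installed before uninstalling anything):

    # list Facebook-related packages present on the device
    adb shell pm list packages | grep facebook

    # remove them for the current user; the APKs stay on the system
    # partition, so a factory reset will restore them
    adb shell pm uninstall --user 0 com.facebook.system
    adb shell pm uninstall --user 0 com.facebook.appmanager
    adb shell pm uninstall --user 0 com.facebook.services

Same idea as the debloater tools, just done by hand.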
It isn’t nice to use though.
You select your pictures, but when you need to add more you’ve got to go back into that app’s settings, grant access to the new picture, and only then add it.
I’m grateful though. We would have called Meta malware back in the day.
The built-in camera roll widget lets you edit which pictures are allowed without going to settings. Maybe it’s a new change, or the apps you use have a custom photo picker, I dunno.
I try to use web versions of everything (fb, insta, x). If it’s shitty enough I’ll use it less.
E.g. messenger.com is possible to use if you request the desktop version, change the font size, and deal with all sorts of zoom issues. Of course fb doesn’t support actual calls or notifications there, just because, so I don’t use it.
Instagram is even sneakier: via the mobile web you can’t post stories to “close friends”, post videos, or view them in direct messages.
You’re being unfairly downvoted. There is a plague of well-groomed incoherency in half of the business emails I receive today. You can often tell that the author never wrestled with the text to figure out what they wanted to say; they’re acting as a kind of stochastic parrot.
This is okay for platitudes, but for emails that really matter, having this messy watercolor kind of writing totally destroys the clarity of the text and confuses everyone.
To your point, I’ve asked everyone on my team to refrain from writing words (not code) with ChatGPT or other tools, because the LLM invariably produces more convoluted output than the author just badly, but authentically, trying to express themselves in their own words.
I find the idea of using LLMs for emails confusing.
Surely it's less work to put the words you want to say into an email, rather than craft a prompt to get the LLM to say what you want to say, and iterate until the LLM actually says it?
My own opinion, which is admittedly too harsh, is that they don't really know what they want to say. That is, the prompt they write is very short, along the lines of `ask when this will be done` or `schedule a followup`, and they give the LLM output a cursory review before copy-pasting it.
I am writing to inquire about the projected completion timeline for the HackerNews initiative. In order to optimize our downstream workflows and ensure all dependencies are properly aligned, an estimated delivery date would be highly valuable.
Could you please provide an updated forecast on when we might anticipate the project's conclusion? This data will assist in calibrating our subsequent operational parameters.
Thank you for your continued focus and effort on this task. Please advise if any additional resources or support from my end could help expedite the process.
Yep, I have come to really dislike LLMs for documentation, as the output just reads wrong to me and I find it so often misses the point entirely. There is so much nuance tied up in documentation, and much of it is in what is NOT said as much as what is said.
The LLMs struggle with both but REALLY struggle with figuring out what NOT to say.
I definitely see where you're coming from, though I have a slightly different perspective.
I agree that LLMs often fall short when it comes to capturing the nuanced reasoning behind implementations—and when used in an autopilot fashion, things can easily go off the rails. Documentation isn't just about what is said, but also what’s not said, and that kind of judgment is something LLMs do struggle with.
That said, when there's sufficient context and structure, I think LLMs can still provide a solid starting point. It’s not about replacing careful documentation but about lowering the barrier to getting something down—especially in environments where documentation tends to be neglected.
In my experience, that neglect can stem from a few things: personal preference, time pressure, or more commonly, language barriers. For non-native speakers, even when they fully understand the material, writing clear and fluent documentation can be a daunting and time-consuming task. That alone can push it to the back burner. Add in the fact that docs need to evolve alongside the code, and it becomes a compounding issue.
So yes, if someone treats LLM output as the final product and walks away, that’s a real problem. And honestly, this ties into my broader skepticism around the “vibe coding” trend—it often feels more like “fire and forget” than responsible tool usage.
But when approached thoughtfully, even a 60–90% draft from an LLM can be incredibly useful—especially in situations where the alternative is having no documentation at all. It’s not perfect, but it can help teams get unstuck and move forward with something workable.
I wonder if this is to a large degree also because when we communicate with humans, we take cues from more than just the text. The personality of the author will project into the text they write, and assuming you know this person at least a little bit, these nuances will give you extra information.
Yeah, now you need to be able to demonstrate verbal fluency. The problem is, that inherently means a loss of “trusted anonymous” communication, which is particularly damaging to the fiber of the internet.
Precisely. In an age where it is very difficult to ascertain the type or quality of skills you are interacting with, say in a patch review or otherwise, you frankly have to "judge" someone and fall back to suspicion and full verification.