
I think it's just changed a lot. Building a forum is a decent amount of work and it costs money. You also take on some form of liability for your users' content today, which can be a headache and cost even more money to address.

Sites like Reddit, Tom's Hardware, etc. basically took all the work away while allowing you to build a community around a niche you're interested in.


I don't think it has specifically changed. The price and stacks are the same. Back then, we'd hang out on sites like Paradox Games subforums. There would be the "upper boards" to discuss the actual company stuff and "lower boards" as its own community.

HN is a similar model, but I feel like they made the mistake of merging the communities. So you have the group that is startup based, the YC backed group that uses it for hiring, the people launching stuff, and the people who just want tech news but hate all the startup stuff.

But reddit is free, comes with moderation tools, and everyone has a reddit account. So it has become the default forum. People start at reddit and move elsewhere, never vice versa.


Just let the man have his Legos, sheesh. Pick a better time.

Would having it sleep be a suitable compromise?

> If you get a slap on the wrist, do you learn? No, you play it down.

Except Dave didn't play it down. He's literally taking responsibility for a situation that could have resulted in significantly worse consequences.

Instead of saying, "nothing bad happened, let's move on," he, and by extension his company, have worked to remedy the issue, write it up, disclose the issue and its impact to users, and publicly apologize and hold themselves accountable. That right there is textbook engineering ethics 101 being followed.


> "we've fundamentally restructured our security practices to ensure this scenario can't recur."

"Yeah it was a problem but it's fixed now, won't happen again"

Sure buddy.

It's not something you fix. When stuff like this happens, it's foundational; you can't fix it. It's a house of cards; you gotta bring it down and build it again with lessons learned.

It's like a skyscraper built with hay that had a close call with some strong northern winds, and they come out and say, "we have fortified the northern wall, all is good now." You gotta take it down and build it with brick, my man.

I'm done warning people about security, we'll fight it out in the industry, I hope we bankrupt you.


> It's not something you fix. When stuff like this happens, it's foundational; you can't fix it. It's a house of cards; you gotta bring it down and build it again with lessons learned.

That's the last thing you should ever do within a large-scale software system. The idea of restarting from scratch because "oh, we'll do it better this time" is the kind of thing that bankrupts companies. Plenty of seasoned engineers will tell you this.

https://www.joelonsoftware.com/2000/04/06/things-you-should-...


I'm aware of that article. I'm saying the company should file for bankruptcy, so yeah, tear it down; it doesn't need to exist if it can get pwned.

This is the second big attack found by this individual in what... 6 months? The previous exploit (which was in Arc browser), also leveraged a poorly configured firebase db: https://kibty.town/blog/arc/

So this is to say: at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with Firebase, but to me this just screams that something about the configuration process is either poorly communicated or just too convoluted overall.


Firebase lets anyone get started in 30 seconds.

Details like proper usage and security are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.

I use firebase essentially for hobbyist projects for me and my friends.

If I had to guess, these issues come about because developers are rushing to market. Not Google's fault... what works for a prototype isn't production ready.


> Google isn't to blame if you ship a paid product without running a security audit.

Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.


What if I need to hack together a POC for 3 people to look at?

It's my responsibility to make sure when we scale from 3 users to 30k users we take security seriously.

As my old auto shop teacher used to say, if you try to idiot proof something they'll build a better idiot.

Even if Google warns you in big bold print "YOU ARE DOING SOMETHING INSECURE", someone out there is going to click deploy anyway. You're arguing Google disable the deploy button, which I simply disagree with.


I think that's throwing the baby out with the bathwater; sane defaults are still an important thing to think about when developing a product. And for something as important as a database, which usually involves authentication or storing personal information, let your tutorials focus on those pain points instead of the promise of a database-driven app with only client-side code. Firebase is awesome, but I think it has earned its notoriety for letting you shoot yourself in the foot and land on the front page of HN. The author also found a similar Firebase exploit in the Arc Browser[0]

I have a similar qualm with GraphQL.

[0] https://kibty.town/blog/arc/


Any purported expert who uses software without considering its security is simply negligent. I'm not sure why people are trying to spin this to avoid placing the blame on the negligent programmer(s).

And if it is the programmer's fault, what can we do about it? People seem to be avoiding any solution beyond throwing their hands up in the air. We either need to solve the problem in a place where a fix is effective given the situation as it is (the tools), or we need to change the situation so that doing the wrong thing has consequences for the developer. Which shall it be?

Since those are only 2 of many more options, I'll pick option 3: convince people to value and fund universal education more, from preschool on, building a better foundation for engineers and other professions in the decades that follow.

In addition to that, it'd be cool if the blameless postmortems were made public, so everyone could learn from them.

As for the other 2 options, restricting freedom and extremely blameful postmortems, I reject both.


Yes, being held accountable for your decisions is a restriction on your freedom.

I still choose the third option, because it is the better of the three, compared to restricting what functionality the software people write is allowed to have, and extremely blameful postmortems (which are bad).

All restrictions are bad, and accountability is bad. Got it. I'm glad to have had this discussion with you; very thought provoking.

Again, I still choose the third option, because it is the better of the three, compared to restricting what functionality the software people write is allowed to have, and extremely blameful postmortems (which are bad).

You seem really stuck on the first two options. Why does it matter, given that the third is the best? Do you still insist upon a false dichotomy?


It isn't exactly surprising that someone who benefits from a system with no accountability is against imposing any. I was just hoping you had something better to offer back than 'I don't like it' and assertions that something is bad without ever explaining why. I have no idea why 'blameful postmortems' are bad because you never told me; you just say they are. Why should I change my mind in that case?

> I have no idea why 'blameful postmortems' are bad because you never told me

Usually when you don't know something, you ask someone who knows. Since you sort-of asked here, I'll give you the answer:

Blameless postmortems lead to fewer failures, which is ostensibly the goal here. So what do you get from your idea of blameful ones? Feeling good about punishing someone, even though you're increasing failures by doing so?


Weak programmers do this to defend the group making crap software. I agree that defaults should be secure, and maybe there should be a request limit on admin/full-access tokens, but then people will just create another token with full access and use it.

I don't know what exactly happened here, but Firebase has two defaults: test access rules, which are insecure but auto-expire, or production rules, which require auth.

If you do something stupid, like repeatedly ignoring the insecure warning and updating the rules so they don't expire, that's your fault.
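
For reference, the test-mode template looks roughly like this; this is a sketch from memory, so treat the exact expiry date and wording as illustrative rather than the console's literal output:

    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          // Test mode: the whole database is world-readable and
          // world-writable until the expiry date passes.
          allow read, write: if request.time < timestamp.date(2025, 6, 1);
        }
      }
    }

The production-mode template has the same structure but with "allow read, write: if false;", which denies everything until you write real rules.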

In no other industry do workers blame their tools.


The issue usually lies in not having enough security rules in place, not in keeping insecure rules active. For instance, in the Arc incident, which we were given more information on, it was due to not having a security rule to prevent unauthorized users from updating the user_id on Arc Boosts in Firestore.
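
To make that concrete, here is a minimal sketch of the kind of ownership rule that would have blocked it. The collection and field names here are my guesses from the write-up, not Arc's actual schema:

    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        match /boosts/{boostId} {
          // Reads are limited to the document's owner.
          allow read: if request.auth != null
                      && resource.data.user_id == request.auth.uid;
          // Creates must set user_id to the authenticated caller.
          allow create: if request.auth != null
                        && request.resource.data.user_id == request.auth.uid;
          // Updates by the owner can't reassign the doc to someone else.
          allow update: if request.auth != null
                        && resource.data.user_id == request.auth.uid
                        && request.resource.data.user_id == resource.data.user_id;
        }
      }
    }

Forget any one of those conditions and you're back to roughly the exploit described.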

Go into any other industry and hear when they say, "shoot yourself in the foot", and you've likely stumbled upon a situation where they blame their tools for making it too easy to do the wrong thing.


If you don't set up any access rules in Firebase, it'll deny access by default.

This means someone set up these rules improperly. Even so, you're responsible for the tools you use. We're a bunch of people getting paid 150k+ to type. It's not out of the question to read documentation and, at a minimum, understand how things work.

That said, I don't completely disagree with you; if Firebase enables reckless behavior, maybe it's not a good tool for production...


And I don't necessarily disagree either; good callout that it _was_ about improperly configured ACLs. I meant more that it wasn't related to keeping test rules alive.

As for 150k+ salaries: frontend devs generally make a lot less than their backend counterparts. And scrappy startups might not have the cash for competitive salaries or senior engineers. I think these are a few of the reasons why Firebase becomes dangerous.


150k is more than a firefighter makes in San Francisco.

https://sf-fire.org/employment-opportunities/h2-firefighter

I don't think it's out of the question to expect professionalism at 150k. These are VC funded companies, not a couple of college kids scraping together a prototype.

Then again, if I were a CTO seeing stories like this, I'd be inclined to NOT use Firebase. I'm actually using Supabase right now since I don't like vendor lock-in. Deploying Supabase manually is really difficult, but it is an option.

I imagine if I ever run a serious company, which I don't think will ever happen, I would take something like Supabase and run it on prem with some manner of enhanced security.

It's interesting though... For decades the industry has been trying to push this narrative that you don't need servers. You can handle everything using some magic platform, and throw in a couple of custom lambda functions when you need to execute logic.

Parse, Firebase, Appwrite and dozens of others emerged to fill this niche.

ToDesktop provides yet another layer of abstraction. We don't want to handle our own app updates? Cool, let someone else do it. That someone else doesn't want to manage their own backend? Cool, let someone else do it.

You end up with multiple layers of potential vulnerabilities which shouldn't exist... Cursor, Arc, etc. could run their own update servers.

Maybe the solution is a Steam like distribution platform. Or just using Steam itself. That's a 30% cut to let someone else figure out your app distribution...


> I don't think it's out of the question to expect professionalism at 150k. These are VC funded companies, not a couple of college kids scraping together a prototype.

You can expect whatever you want; just prepare to be disappointed. We have absolutely learned by now that unless there are very real consequences for doing or not doing something, you will regularly see the worst possible thing happen. This is why licenses and legal 'sign-offs' exist. There needs to be a licensing organization that can revoke people's ability to get certified, or even to be employed working on certain aspects of software, if we ever want to solve this problem. I mean, you even need a license to cut hair in many states.


Your solution is to regulate software instead of calling out bad actors?

What a dystopian future, curl without a permit?

Why are you blaming the rank-and-file employees? The buck stops with the employer. If anything, fine the companies.


Asking people to be responsible for the damage they cause is called 'accountability' and not 'dystopia'.

Software is not just something someone uses for hobbies or for word processing or whatever. A bad design decision in some critical software can have just as much of an impact as a bad design decision in a bridge or an airplane. If we want to be called 'engineers' then we need to put more on the line than just a public apology when something goes wrong due to a decision someone actively made to save money or reduce the work involved. And of course it should involve the people who make those decisions and not just the grunt who implemented them.

But if the 'grunts' had the power to say 'no, I will not do this because it is insecure and my license is on the line' then that's a good thing. No?


>But if the 'grunts' had the power to say 'no, I will not do this because it is insecure and my license is on the line' then that's a good thing. No?

This will never work in a global economy. If you outsource the software, you're just begging companies to make someone earning $15 an hour the fall guy.

Sounds pretty bad. Your manager tells you to do something stupid or you're fired. You do it, and when it fails, they blame you and your software engineering license is revoked. You can't find a job and now get to live in a homeless shelter.

Meaningful fines for companies are the only way to fix this.

Maybe... for some sensitive things, like location data, an expensive permit should be required. But this needs to be a corporate responsibility, not an individual one.

In your scenario bad companies are going to ruin the lives of their employees by making them risk their licenses.


How do engineers manage it then? What about banks? Any regulated industry? It obviously works for some professions, why is software the exception?

Why do you want to regulate software?

Arguably this whole thing wouldn't happen if these apps were distributed and updated via the OSX app store. If that's the future you want, it's largely already here.

You can check a setting in OSX to make it so.

Who decides what software to regulate? Do I need a permit to install Python?


> Why do you want to regulate software?

I don't want to regulate software. I want people to have something to lose if they make a decision that has a large impact.

> Arguably this whole thing wouldn't happen if these apps were distributed and updated via the OSX app store. If that's the future you want, it's largely already here.

I don't understand this point.

> Who decides what software to regulate?

Who decides any laws or regulations?

> Do I need a permit to install Python?

Does you installing Python have potential consequences for large numbers of people or could it cause a significant amount of harm?

Why do you take the most extreme possible position and apply it to me? Is it that difficult to argue against a sensible one?


> Arguably this whole thing wouldn't happen if these apps were distributed and updated via the OSX app store. If that's the future you want, it's largely already here.

> I don't understand this point.

The core of this issue is an insecure update mechanism for desktop apps. You could argue that, for security's sake, users may opt to only use the official Apple app store or the official Microsoft store. In this case, instead of having a random startup manage the update process, you have a couple of multibillion-dollar companies.

I'm trying to figure out what exactly you want to happen here. Would you essentially make it illegal to distribute software without a permit? Would distributing certain software require a permit?


I am expressing a long-held frustration that software engineering as a culture is trying to have its cake and eat it too: wanting to be called engineers, demanding high salaries, running the largest sections of the economy, and disrupting society in highly impactful ways, yet downplaying their own role and everyone else's in the industry whenever someone asks them to take responsibility for damage caused. It is time to grow up. If you make a decision to earn more money or do less work, knowing that it risks causing major problems for infrastructure, the economy, or other people, then there should be something more on the line for you than an 'oopsie' at the end of it if you lose that bet.

I'm on the side of freedom.

If you want to run your company using vetted software and limit your developers to a small list of approved software, you can do that. I've worked in such environments. You can lock down the corporate firewall. The point is choice.

It's completely different if you basically want a regulatory agency which will decide what software people are allowed to build.

Outside of work I like to use niche Linux distros. If I accidentally wipe my vacation photos during the install process, that's a risk I took. I don't have a right to complain that I destroyed my own data and blame it on software largely built by volunteers.

However I don't disagree completely. If you want to build a hardened fork of Linux with software vetted by your private certifying authority, that could be a good market. If all engineers working on your custom fork need to be "licensed" by a privately run organization, that is also fine.

I just wouldn't want the State to do this.


> A bad design decision in some critical software can have just as much of an impact as a bad design decision in a bridge or an airplane.

This is a better point than you realize. Blameless postmortems in IT are largely inspired by blameless postmortems from aerospace failures.


If aerospace gets away with it then we should fix that as well.

On the contrary: blameless postmortems are better.

Then you should have to click a big red button labelled "Enable insecure mode".

Defaults should be secure. Kind of blows my mind people still don't get this.


I’ve seen devs deploy production software with the admin password being “password”. I don’t think you are listening when they are saying “they’ll build a better idiot”.

Right, because nobody ever makes a mistake.

That's why we don't have seatbelts or safety harnesses or helmets or RCDs. There's always going to be an idiot who drives without a seatbelt, so why bother at all, right?


When you drive without a seatbelt, it only affects you.

If you drive in a way that affects the safety of others, there are generally consequences.


Legal Dept: +10, Favorite

Marketing Dept: -10, Flag


Oh, they are. Just like Mongo and others. It's a deliberate decision to remove basic security features in order to get traction.

Remove as many hurdles as possible to increase adoption.


To be fair, Cursor does this quite handily also.

Should we outlaw C because it lets you dereference null pointers, too?

Erm yes! Even the White House has said that.

The only reason we didn't for so long was because we didn't have a viable alternative. Now we do, we should absolutely stop writing C.


The white house recently said a lot of things. But of all things, I don’t think they’re even qualified to have an opinion about software, or medical advice, or… well, anything that generally requires an expert.

This is referring to a report issued in 2024, under the previous administration. It has nothing to do with anything currently in the news.

I've never paid a company for C

I don't think Firebase is really at fault here. The major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with Firebase; it would have been just as impactful if the container building the code had uploaded it to S3 from the buildbot.

Agreed. I recently stumbled upon the fact that even Hacker News uses Firebase to expose an API for articles. Caution should be taken when writing server-side software in general.
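
For the curious, that's the read-only public API served from Firebase; for example:

    https://hacker-news.firebaseio.com/v0/item/1.json

returns the very first HN story as JSON. A nice example of exposing Firebase deliberately rather than by accident.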

The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.

I'm sorry, but when will we hold the writers of crappy code responsible for their own bad decisions? Let's start there.

I don't know but we're in a thread about Cursor... I don't think anyone is writing significantly better code using Cursor.

I always find it unbelievable how we NEVER hold developers accountable. Any "actual" engineer would be (at least the one signing off), but in software, developers never sign off on anything, and maybe that's the problem.

> It's incredible how badly Microsoft mismanaged it.

It's incredible how badly Microsoft mismanaged a lot of products. It genuinely makes me think they're aware of it at this point.


I and many others have said this before, but I'll say it again.

Programming is the easy part. It really isn't hard, and has never been hard, to write software that generates code and other software. We've been doing that since the early 2000s with frameworks like WinForms, and oh yeah, compilers. The only difference now is that we're doing statistical analysis that costs literally billions of dollars in order to generate that code with a data-first approach.

To generate code that actually does what you want and what your client wants, all while not being a clusterfuck of spaghetti that breaks after 6 months of changes? That's the challenging part.

So sure, if after two years as a junior developer you still need someone to hold your hand every step of the way and literally tell you, step by step, what must be done to finish a task, then yeah, you probably don't have a bright future. However, that would be true with or without the AI boom.

To answer the question more directly: no I'm not afraid.


Honestly, the most astounding part of this announcement is their comparison to o3-mini with QA prompts.

EIGHTY PERCENT hallucination rate? Are you kidding me?

I get that the model is meant to be used for logic and reasoning, but nowhere does OpenAI make this explicitly clear. A majority of users are going to be thinking, "oh newer is better," and pick that.


Very nice catch. I was under the impression that o3-mini was "as good" as o1 on all dimensions. Seems the takeaway is that any form of quantization/distillation ends up hurting factual accuracy (but not reasoning performance), and that there are diminishing returns to reducing hallucinations by model-scaling or RLHF'ing. I guess, then, that other approaches are needed to achieve single-digit "hallucination" rates. All of Wikipedia compresses down to < 50GB, though, so it's not immediately clear that you can't have good factual accuracy with a small sparse model.

Yeah, it was an abysmal result (any 50%+ hallucination rate on that bench is pretty bad), and worse than o1-mini in the SimpleQA paper. On that topic, Sonnet 3.5 "Old" hallucinates less than GPT-4.5, just for a bit of added perspective here.

Maybe this will help a bit. I'm a drummer. Drums are literally all about doing the exact same thing for hours and hours on end, but with the goal of perfection. What's perfection? That's all in the eyes of the player. Been doing it 15+ years and don't plan on stopping.

I can do repetitive stuff like that for hours on end. A roguelike takes that and makes it a game with some slight variation to keep it interesting (kind of how a drum beat is the same thing, and then you change it to add some variation).


I definitely get wanting to perfect something, but for me it's frustrating because a roguelite forces that on you, versus you deciding on your own to redo something to perfect it.

AI models are trained on data from the internet, so sure, they couldn't use their search feature to scour the internet, but I doubt the material is much different from what the models were already trained on.

Additionally, before the age of Stack Overflow and Google, SWEs cracked open the book or the documentation for whatever technology they were using.

