I pwned half of America's fast food chains simultaneously (mrbruh.com)
1081 points by MrBruh 9 months ago | 474 comments



It's not clear if the author was hired to do this pentest or is a guerrilla/Good Samaritan. If it is indeed the latter, I wonder how they can be so brazen about it. Does chattr.ai have a responsible disclosure policy?

In my eyes, people should be free to pentest whatever they like as long as there is no intent to cause harm and any findings are reported. Sadly, many companies will freak out and get the law involved, even if you are a Good Samaritan.


> It's not clear if the author was hired to do this pentest or is a guerrilla/Good Samaritan

Pretty clear to me: "it was searching for exposed Firebase credentials on any of the hundreds of recent AI startups", i.e. running a script to scan hundreds of startups.
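
For the curious, that kind of scan needs very little machinery. A minimal sketch of the general idea (my assumption, with made-up project IDs; not the author's actual tool) is to probe candidate Firebase Realtime Database URLs for unauthenticated reads:

    # Hypothetical sketch: check candidate Firebase projects for
    # world-readable Realtime Databases. Project IDs are invented.
    import requests

    candidates = ["some-ai-startup", "another-ai-startup"]  # hypothetical

    for project in candidates:
        # shallow=true keeps the response tiny; a locked-down database
        # rejects the unauthenticated read instead of returning data.
        url = f"https://{project}-default-rtdb.firebaseio.com/.json?shallow=true"
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # no such project, DNS failure, timeout, etc.
        if resp.status_code == 200:
            print(f"[!] {project}: database is publicly readable")
        else:
            print(f"[ ] {project}: locked down (HTTP {resp.status_code})")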

> Sadly, many companies will freak out and get the law involved, even if you are a Good Samaritan.

Yeah, but that also ends with that company being shamed a lot of the time


What is wrong with shaming when it's warranted?


It’s an ineffective tool if your goal is change.


"Change" could mean a careless, incompetent or reckless company going out of business, a net positive for possible future victims of their conduct.


It is? I'd say shaming is the best tool there is.


Positive interactions trump negative interactions when your goal is to encourage lasting change.


With humans. With companies it's pretty effective - especially if the post hits front page.

Ask Troy Hunt: https://www.troyhunt.com/the-effectiveness-of-publicly-shami...


Shame is absolutely a valuable tool for change. Without it society would not function since many of our 'rules' are self-enforced.


I don't know why you're being downvoted; shame is one of the most powerful motivators that exist in humans, and I'd put money on it being the most powerful. People who are loudly disagreeing don't understand that "shaming (v.)" doesn't always equate to shame actually being felt by the target of said shaming. The act of shaming loses a lot of effectiveness when you can find a community of people who will tell you that it's not actually shameful, that "no, actually, the people shaming you are wrong", because those people will suddenly be your best friends. This can be a good thing (homosexuality) or a bad thing (Nazis). And the internet has put every one of those communities zero distance from everyone. It's why people who try to employ shame cut their targets off from their support networks.

Shame is so effective when it actually lands that you can never fully deprogram it -- I will live with my Catholic guilt for the rest of my life.


Setting aside how wrong everything you just wrote is, companies aren’t people and do not have shame.


Sounds like someone is jealous they didn't get to have any of the apple.

This is why words have meanings. Companies aren't people and hence "shaming a company" can't possibly mean "shaming the legal entity" because that's nonsense but instead shaming the humans that make up that legal entity. Saying such a thing is impossible actually treats the company as an autonomous metamind.

"I interpreted what you said as nonsensical and I'm making that your problem" isn't really an argument.


You’re saying things like “metamind” but I’m the person writing nonsense? Okay, bud.


It ends up leading to "word-inflation" where you have to keep shouting louder, stretching the truth to be acknowledged. The word "racist" changing meaning over the last 30-40 years is a great example.


Please elaborate on the change in meaning?



why isn't the word "prejudice" used anymore?



I didn't ask what GPT or 3 random people said. why can't you articulate your own position?


Well, you didn't answer my question. But I'm going to assume you're hinting at the false-equivalence line conservatives trot out about prejudice against poor or rural people.

Prejudice is rooted in lack of knowledge and unfamiliarity. Racism is familiar and has its own body of knowledge; all wrong, but vehemently defended. Racists try to build their own narrative.


I didn't ask what the difference was, I asked why one word isn't used anymore. Why are you not paying attention?


You still haven't answered me, but I'll leave this for your consideration. Sub in "racist" for "anti-Semite"; I believe this applies to you:

"Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past." Jean-Paul Sartre


Shame can't fight lawyers and handcuffs.


Nope, shame is ineffective as a tool for change. More often people shut down or ignore you if you attempt to shame them than actually make the change you want. Besides, it's frequently just about vengeance anyway. Shame is really hatred of the other, for the most part.

As a tool for oppression however, yes it's quite effective.


> people shut down or ignore you if you attempt to shame them

Sure, but large business entities (as opposed to individuals) often cannot afford such a luxury.

Try being a bank in a western country and ignoring a public security blog post, outlining exactly how one can exploit your online banking auth flow to gain unauthorized access to customer accounts.


That's not shame, that's risk. Corporations don't have the capacity for shame; it's the major thing people hate about corporations in the first place…


> That’s not shame, that’s risk

Risk of what? Risk of losing credibility and revenue due to… people shaming them, perchance?


Shame isn't always for oppression, although it certainly can be - it's also a pretty useful tool to impose reasonable rules that allow you to live peacefully among your neighbors.


That's not shame, that's guilt. Shame is existential, guilt is situational. The cost of shame is too high for whatever value it may bring.


Nope:

> According to cultural anthropologist Ruth Benedict, shame arises from a violation of cultural or social values while guilt feelings arise from violations of one's internal values.

https://en.wikipedia.org/wiki/Shame#Comparison_with_guilt


Yep:

> In sum, shame and guilt refer to related but distinct negative “self-conscious” emotions. Although both are unpleasant, shame is the more painful self-focused emotion linked to hiding or escaping. Guilt, in contrast, focuses on the behavior and is linked to making amends. [0]

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328863/


Why is a painful emotion a bad thing?


People react poorly to it, and in situations where it is unnecessary, it’s just cruel.


>More often people shut down or ignore you if you attempt to shame them than actually make the change you want.

Shame as a tool of change does not work on the person being shamed at the time; it works on that person in the future, hopefully, as they will be afraid to be shamed again, and it works on changing the behavior of other people, because they don't want to get shamed either.

Thus as a tool of oppression, as you pointed out, it works great. But also as a tool for enforcing otherwise non-enforced social rules - until of course you meet someone shameless or who feels at least that they can effectively argue against the shaming.


There are different types of shame. Shame related to a decision situation (endogenous) and shame not related to a decision situation (exogenous). In the endogenous case the shame is said to be a 'pro-social' emotion.

This is backed by studies.

"Using three different emotion inductions and two different dependent measures, we repeatedly found that endogenous shame motivates prosocial behavior. After imagining shame with a scenario, proself participants acted more prosocially toward the audience in a social dilemma game (Experiment 1). This finding was replicated when participants recalled a shame event (Experiment 2). Moreover, when experiencing shame after a failure on performance tasks, proself participants also acted prosocially toward the audience in the lab (Experiment 3). Finally, Experiment 4 showed that this effect could be generalized beyond social dilemmas to helping tendencies in everyday situations. Therefore, it seems safe to conclude that shame can be seen as a moral emotion motivating prosocial behavior." [1]

You can also contrast 'humiliation' shame with 'moral shame', with moral shame being prosocial. This is also backed by studies.

"Our data show that the common conception of shame as a universally maladaptive emotion does not capture fully the diversity of motivations with which it is connected. Shame that arises from a tarnished social image is indeed associated with avoidance, anger, cover-up, and victim blame, and is likely to have negative effects on intergroup relations. However, shame that arises in response to violations of the ingroup’s valued moral essence is strongly associated with a positive pattern of responses and is likely to have positive effects on intergroup relations."[2]

[1] de Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When shame acts as a commitment device. Journal of Personality and Social Psychology, 95(4), 933–943.

[2] Allpress, J. A., Brown, R., Giner-Sorolla, R., Deonna, J. A., & Teroni, F. (2014). Two Faces of Group-Based Shame: Moral Shame and Image Shame Differentially Predict Positive and Negative Orientations to Ingroup Wrongdoing. Personality and Social Psychology Bulletin, 40(10), 1270-1284.


There’s a reason your citations are nearly a decade old at best; the science has changed.

A 2021 meta-analysis showed that, “shame correlates negatively with self-esteem and is large effect size.” [0] So unless the goal of your shame is to actively harm the people involved, then no, shame is not an effective tool at behavior change, given the damage it causes.

You may be thinking of “guilt” rather than shame:

> In sum, shame and guilt refer to related but distinct negative “self-conscious” emotions. Although both are unpleasant, shame is the more painful self-focused emotion linked to hiding or escaping. Guilt, in contrast, focuses on the behavior and is linked to making amends. [1]

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8768475/

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328863/


Regarding your sources:

One has to do with self-esteem, which has nothing to do with whether it is pro-social or beneficial, just that some types of shame harm self-esteem, which was never contested.

The second study is about criminal populations, and I specifically mentioned that shame is about self-policing, and that obviously didn't work if someone is incarcerated for a crime.


Did you read them? If your goal is to effect change, hurting people's self esteem is a negative effect that is entirely unnecessary to change.

And criminals aren't some ungovernable animals...


> If your goal is to effect change, hurting people's self esteem is a negative effect that is entirely unnecessary to change.

Yes, your self esteem will likely be harmed if you do something bad and it gets found out.

> And criminals aren't some ungovernable animals...

?


I'm confused too. People live inside such safe spaces now. The thought of a negative emotion is to be avoided at all costs.

Shame makes people feel bad, so we must do all we can do avoid making anyone feel that.

What's next? Is "disappointment" next? Having someone disappointed in you feels bad, therefore no one can ever show disappointment to others?


You’re confused about how shame could make someone unproductive? How shame could drive a behavior underground rather than eliminate it, thus exacerbating the issue rather than reducing it?

As you are demonstrating, shame is more about causing pain than changing behavior. You seem to want to hurt people, and that’s one reason why shame is not effective. You don’t care that equally or more effective means exist for improving behavior.


It isn't letting me reply to you above because it locks down comment chains that get replied to quickly to avoid flame wars, so I will reply here and be done.

> So you admit that shame can be bad? Then you’re close. Next you need to realize that shame’s effectiveness is dependent on a person feeling shame the way you want them to. But that’s not how it actually works, is it? Instead, shame is sourced from the judgements of others, so one way of effectively mitigating shame is to hide the behavior from others, rather than stopping it. So shame is ineffective.

I never claimed that shame couldn't be bad -- I said it is essential for society to function properly. I cited two studies which demonstrated that shame can be prosocial and beneficial depending on circumstances.

> And I’m not being silly. You tried to dismiss the legitimacy of my citation by dismissing an entire category of people. That was inconsiderate.

I dismissed your studies because they were both irrelevant to my point and did not contradict anything I cited. If you feel that I am othering prisoners because I said that the situation of the people in the study made it useless to make your point, then I object to that and say that you are grasping at straws since you have no reasonable argument otherwise.

Look, you have every right to be absolutely wrong in this case, so don't bother changing your mind or looking at my actual standing on the issue and instead imagine I am some kind of meany pants who wants people to feel bad if you want, but I am done with this conversation.


That's not how HN works: individuals are slowed by IP; there is no "slow a specific conversation". But thanks for making it clear you've been flagged by dang as a troll.

And one of the clearest indicators to me that a person knows their argument is weak is when they declare themselves correct (or me wrong). Of course I’m free to be wrong, the problem you have is you’ve done a terrible job demonstrating that fact.


You talk about how others 'want people to feel bad', but have you considered that you are expressing the most desire to belittle others and make them feel bad? Your abrasiveness and need to triumph in what should otherwise be a genial conversation must really make it difficult to engage with people without them disliking you. Have you considered self-reflection?


Oh yeah, I'm a terrible person, no doubt about that. Doesn't change the argument one bit, however.

(See how your attempt at shame failed? That's why shame is not a useful tool.)


The problem with thinking you know everything is that you miss genuine opportunities to learn things -- I wasn't trying to shame you, I was pointing out the irony of your crusade in this thread (which is completely apparent to everyone) and urging you to self-reflect on things that could improve your life.

Even if you think I am an asshole, self-reflection can only be beneficial. One thing that may be helpful is to take a look at your actions over the thread, think about them from the other perspective, and see how you may appear from someone else's point of view. I do that often, and though it isn't always pleasant, it does give a reality check in some key areas.


Nope sorry; I didn't say I know everything, I just provided an apparently undefeatable argument (why attack my character otherwise?).


You can lead a horse to water...

Refusing to acknowledge that other people have valid arguments and continuing to repeat the same thing over and over again is indeed 'undefeatable'.

People then ceasing to continue to argue with you because you aren't listening to them is them recognizing they are wasting their time.


Keep thinking beyond the immediate for another step and you will see how harming self esteem means a person won’t productively alter their behavior. It’s in the literature I’ve cited if you’re actually curious.

And you made a value judgement about the people who end up in jail/prison, which was completely uncalled for.


You know that things can be bad sometimes and not bad some other times? Shaming people to make them feel bad is not good; however feeling shame for having done something wrong is good -- it motivates one to avoid doing that again.

> And you made a value judgement about the people who end up in jail/prison, which was completely uncalled for.

You are being silly.


So you admit that shame can be bad? Then you’re close. Next you need to realize that shame’s effectiveness is dependent on a person feeling shame the way you want them to. But that’s not how it actually works, is it? Instead, shame is sourced from the judgements of others, so one way of effectively mitigating shame is to hide the behavior from others, rather than stopping it. So shame is ineffective.

And I’m not being silly. You tried to dismiss the legitimacy of my citation by dismissing an entire category of people. That was inconsiderate.


Would you care to summarize what "related to a decision situation" means for those of us who don't have access to those articles?


Just a guess, but I imagine it's the difference between "I'm ashamed I can't make enough money to save anything" vs. "I'm ashamed I blew all my savings on crypto". One is shame about your situation (which is likely outside your own desires and control), the other is shame about your decision (which you likely had better control over).


This is correct, according to my understanding of the study I sourced.


The comment above lacks essential nuance and is overly confident.


The comment above lacks contributory value and is also (ironically) overly confident.


Shall we continue into an infinite regress of zingers?

You are correct that I didn't provide supporting reasons myself. Fair point. I suppose I didn't think your comment warranted it. Saying that might come across as harsh, which isn't my goal. I'd rather shift into a constructive and specific discussion instead. In that spirit, I'll elaborate on my criticism. Let's start with your leading sentence:

> Nope, shame is ineffective as a tool for change.

There are lots of ways to improve this sentence; here is one suggestion: consider a phrasing like "In comparison to _X_, shame tends to be less effective for _particular purpose_."

I'd suggest avoiding empirical claims about likelihoods you aren't able to defend. Take this sentence fragment:

> More often people shut down or ignore you if you attempt to shame them...

If done forcefully, this _might_ lead to "shutting down" or "ignoring"; however, on what basis can one say this happens "more often"? More often than what? The writing here overreaches -- this is why I called it "overconfident".

There are many situations where one person points out a shameful behavior in another, who recognizes it, feels bad, and, for example, apologizes and modifies their behavior. My point: it would be faulty to dismiss the idea of shame as useless in social contexts.

Finally, the next sentence also struck me as an overreach:

> As a tool for oppression however, yes it's quite effective.

Care to elaborate your thinking on that one? What do you mean by oppression?

By oppression I think of a power dynamic where the weak are kept in a lower position by the more powerful. Is this what you mean? Why do you think shaming is a particularly effective way to oppress? In my mind, military, physical, legal, and economic mechanisms tend to be more effective, historically speaking.

I could speculate. Perhaps you are referring to the practice by certain religious systems to make people feel ashamed for merely doing things that all humans do (make mistakes) and thus deserve punishment (e.g. by the religious elites, or worse, by yourself, thus making yourself feel weak and unworthy).

In short, I'm sufficiently interested in these ideas to be rather unsatisfied with writing that doesn't unpack the ideas at all. No offense intended. I look forward to learning what you mean.


Eh, you either seem unaware that your comments aren’t the only ones in this discussion, or narcissistic enough to believe only you deserve a full response, because every answer you’re looking for and more are in sibling comments around you, yet you choose to engage only in my shortest comment that had context you could pretend didn’t exist.

If you were trying to show some of the worst faith engagement possible on HN, you did it.


Like how Apple says about the App Store rejections:

> Running to the press never helps.

Except of course, in reality we know that it ABSOLUTELY DOES. In fact, it has been often times the ONLY thing that has helped.


Security is at a point where shame is required. You deserve to feel shame if you have an unjustifiable security posture like plain text passwords. The time for politely asking directors to do their job has passed. This is even the government's take at this point. Do it right or stop doing it at all.


How so?


Because everyone makes mistakes; if you antagonize someone, they are less likely to care about you and feel more obligation to protect their own.


I think this is more because we are also quick to shame what are clearly unintentional mistakes, and we don't give positive rewards to good actors. I also suspect there are people who want to play up any controversy (not as any specifically collective and directed force, but emergent behavior may look that way; more of a "never let a tragedy go to waste" thing).

But that's different than shaming. That's over-saturating the system with false positives. To combat this, I'd encourage you not to respond, in __any__ way, to bullshit fake controversy, and also to give positive reinforcement when companies do something good.

I'll give an example: you've probably seen companies like Meta occasionally do something good. For example, they released the source of LLaMA. But people tend to use those opportunities not to congratulate Meta for doing the good thing but rather to complain about the other bad things they do. Then yes, it fits your model, because you've reached a bad steady state and can no longer turn good, since nothing good you do will generate any signal to continue in that direction.

Us humans are weird and routinely shoot ourselves in the foot only to ask who fired the bullet, smoking gun in hand.


This is absolutely true at the scope of personal relationships. Not at all when it comes to companies, which have a different set of incentives


Using plain text passwords goes well beyond a simple “mistake” in my book. It is negligent.


Is it? Oftentimes hacks like this drive people out of business.


>What is wrong with shaming when it's warranted?

Says some pests

---

Shaming for businesses and politicians should be encouraged, not just warranted.

Product Recalls are a form of corporate shaming, but public discourse about companies or politicians should be encouraged, and shaming them should always be warranted.


Plain text passwords, seriously. At that point, I'm not sure what similarity remains with any other engineering profession. The plain text passwords are beyond any rhyme or reason... and then they were returned to the end-user client. If anything, I'd consider it malicious negligence - in the EU the leak would be a GDPR issue as well.
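
For reference, the non-negligent baseline is tiny. Here is a minimal sketch (one reasonable KDF choice; an assumption for illustration, not a claim about what chattr.ai's stack should look like) that stores only a salted, slow hash and can therefore never return a password to any client:

    # Minimal password-storage sketch using Python's stdlib scrypt KDF;
    # argon2 or bcrypt would be equally reasonable choices.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest  # persist both; the plaintext is discarded

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time compare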


Don't worry, it was only a couple passwords for their admin accounts.


The issue is that it is often impossible to distinguish between a white hat and a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business. Ethically, I think it crosses a line when you are wasting resources like this, live hacking systems. There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies. Not saying that was the case here, but I have been doing cybersecurity assessment work for 17+ years. Even when you have permission, sometimes the juice isn't worth the squeeze to keep going as you often have proven the thing you needed to or found the critical defect. There is a balance to white hat activities and using good sense to not waste resources.


> The issue is that it is often impossible to distinguish between a white hat and a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business.

If your servers are connected to the internet, you can expect that people from countries that won't prosecute them will try to break in. This will happen, almost immediately, as soon as they're connected to the internet.

If your servers have been properly secured, this doesn't matter. If they have not, you are paying for that incident response regardless and the only question is if the context is today because of some innocuous kid or a month from now because of some black hats from Eastern Europe and your company's internal database of everything is now public information.

You want it to be the innocuous kid.

> There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies.

This is obviously not the case.

Suppose you suspect the company could be using a default admin password. Contacting them without confirming this is a pointless waste of everybody's time. Checking it takes two seconds, and if you're wrong you just won't get in and will be one of ten billion failed login attempts against a public-facing server. If you're right, the successful login to an admin account from a novel external IP address could very reasonably trigger some kind of alert, which could very reasonably trigger an incident response when the staff knows that nothing should be logging into that account from there. Or it might not, because the kind of company that uses default passwords may not have thorough monitoring systems either, but you have no way to know that.

There is no point at which it would be reasonable to contact them prior to doing the thing that could trigger an incident response.


> This is obviously not the case.

It really is though. People just don't understand the ethics of white hat hacking.

> Suppose you suspect the company could be using a default admin password

Putting in that password on a system you don't own without any sort of permission to do so is very clearly against the law. You are accessing the system without permission. You just walk away if you want to be ethical about it.

The only ethical path is to let them know you have some reason to believe they are not using secure passwords or whatever. Accessing their system illegally is not the move. It just isn't the white hats problem.


> People just don't understand the ethics of white hat hacking.

People just think they understand ethics, even if they don't.

"Don't break the law" is an incredibly poor foundation. Many laws are ill-conceived, ambiguous, overly broad and widely ignored or manifestly unjust. Using this as the basis for ethical behavior would require you to be unreasonably conservative and pedantic while regarding complicity in an injustice as ethical behavior. (It also implies that you could never use ethics to inform what the law should be, since it would just tautologically be whatever you make it.)

"Don't knowingly cause net harm" is at least as valid, but then admits the possibility of curiosity-based shenanigans that could lead to the revelation of a vulnerability that saves innocent people from the consequences of it being later exploited by someone nefarious.

> Putting in that password on a system you don't own without any sort of permission to do so is very clearly against the law.

Driving 1 MPH over the speed limit is very clearly against the law, even if the orphanage is relying on you to have the funding letter postmarked by end of day.

Walking your date home while you're intoxicated is very clearly against the law (public intoxication), even if the alternative is that they drive themselves home while intoxicated.

Ethics is something else.

> The only ethical path is to let them know you have some reason to believe they are not using secure passwords or whatever.

But you don't, really. Your belief may even be purely statistical -- suppose you expect that if you try the default on many servers at different companies, there will be at least one where it works, and you'd like to report it to them, but you have no idea which ones unless you try.

> It just isn't the white hats problem.

If you have the capacity to prevent likely harm and instead do nothing, what color is your hat?


I mean, I am a literal expert in this field <appeal to authority> what do I know. I will just state I have read the relevant laws and feel I have a good understanding of what underpins the ethics of this industry and white hat hacking after almost 2 decades immersed in it. You are mixing up morals with ethics. With ethics we have clear and unambiguous lines. Morals, that’s on you more or less.


The potential downside of stopping once you find a critical defect is that the company may not take it seriously unless you go just a bit further and show what you can do with the defect. In this case, showing that it gives you access to the admin dashboard.


Generally, hacking into a live system without permission is strictly illegal. Once you have discovered some surface level vulnerability you are legally obligated to stop, at a minimum. You can't just keep hacking and exploiting things that cross a certain, generally clear threshold, without permission. Intent definitely matters, but you can still end up in jail if a prosecutor has a hair up their ass and decides they have a good case against you.

I do agree, some of the time you need fireworks to get the right people's attention. You could argue there is some moral imperative there, but ethically you are in the wrong if you keep going. You just have to decide if the moral imperative outweighs clearly breaking the law in situations where you don't have permission.


It is illegal as soon as you break in. Going as far as possible, without destroying anything, is no more illegal than stopping early, but gives less proof of security problems.


"Break in" in a modern web app pretty much happens the moment you access data you aren't supposed to access. Not damaging anything is irrelevant. I mean, no one destroyed anything in the Equifax hack. They just retrieved all the data.


> There is usually a pretty clear and obvious point where you can stop [..] sometimes the juice isn't worth the squeeze to keep going as you often have proven the thing you needed to or found the critical defect

Those who are tasked - and are being paid(!) - to "[do] a cybersecurity assessment" will typically be given a brief.

For those who aren't tasked - or being paid(!) - to do this stuff, things are much less clear. There's no defined target, no defined finish line, no flag you have been requested to capture.

(I don't work in cybersecurity now, but <cough> I did get root on the school network way back when, and man, that took some explaining..)


If you aren't being tasked and you aren't being paid, it is still really clear. Go look at almost any bug bounty and they will give you really clear "when to stop" terms. Often, the moment you access data you aren't supposed to access (exposing PII), or come to a point where you could even potentially disrupt the operation of the system, you need to stop.

When we begin any assessment on a production system, we have a very clear discussion about the rules of engagement. But we are often authorized to access data that someone doing unauthorized bug hunting can't legally access. Once you have some experience and understand the relevant laws, it is pretty clear when you should stop without violating the law. The general threshold when you are authorized is that you stop if it would risk the stability of the system. If you aren't being paid, the general rule is that once you have accessed others' PII you need to stop. If you broke an authorization control or accessed any functionality a regular user can't, you need to stop.

Gaining root to any network you don't own or have authorization to operate is clearly crossing the line. You went from finding issues to actively exploiting them. If you have to actively exploit to find an issue and you don't own the system and you don't have permission you don't do it.


> Gaining root to any network you don't own or have authorization to operate is clearly crossing the line

Q: As an attacker - whatever colour your hat - how are you supposed to know if any particular action may gain you root unless/until you try it?


> Ethically, I think it crosses a line when you are wasting resources like this, live hacking systems.

I agree with everything you wrote except this sentence. There is no ethical obligation not to waste a company's time.


Well, for me there is. As an actual cybersecurity professional I feel bound to not create extra work unless it is for some clear and valuable purpose. Coordinating with the company expends minimal effort and can save them a lot of effort. That is just the right thing to do. It is mostly the wrong, already overworked, people's time getting wasted anyhow if you do trigger an incident or investigation.


Why not?


[flagged]


> "remembered chattr.ai"

They didn't say that, you just made that up.

This is what they said:

"when we remembered the existence of a scanner we made for firebase and found https://chattr.ai"

And in MrBruh's post, the way they found it was scanning .ai domains, using the same scanner that Eva remembered they had made.


You're absolutely right: thing is, they rewrote it between the comment you're replying to and your comment. (overnight EST)


it was changed after being called out

https://archive.is/jUtip


> Good Samaritan

The web is insecure enough as it is, I just want to do my part to make it that little bit safer :)


Everybody has that goal until they get a knock on their door at 6am: https://github.com/disclose/research-threats


From one Paul to another, best of luck! For the goal of improving overall web security, widespread shame doesn't work. My hunch is that we need to be more prideful about having verifiably robust security practices. Kind of like getting corporations to realize that the data is more valuable if you can prove that nobody can breach it.


Thank you, the kindness goes a long way!


Either way, it is a fascinating write-up. It will hopefully be a cautionary tale for other businesses and companies out there, and will inspire them to lock down this credentialing issue. I've noticed a similar blasé attitude when implementing SSO; the devil is in the details, as they say.


Does this bug work across all applications that use Firebase? Or just those that didn't push the update with security?


I salute you for it. Take caution though.

The bad guys don't play by the rules, so the rules only hinder the good guys from helping. I think internet security would be in a better position if we had legislation to protect Good Samaritan pentesters. Even more so if they were appropriately rewarded.


Why, you'd never catch a black hat hacker again. The authorities would just be reeling in one Good Samaritan after another!


There is a big difference between discovering a vulnerability that allows you to forge tokens and immediately reporting it versus dumping terabytes of data on the darknet for sale.


Unfortunately, door 1 is maybe a $200 bounty and weeks or months of back and forth (if the corp doesn't have a clear bounty program), whereas door 2 has infinite upside. Honestly, it might make sense for a government group to run a standardized bounty program for exploits with notable financial/privacy impact.


The solution is to have fines in place for insecure systems and to award them to the discoverers.


This is an awesome idea. The next time a glibc CVE comes out every company in the world pays a fine, if they are impacted or not! Hey - you could even file 1000s of frivolous CVEs (which is already common) you know would affect your competition! (which is how that would pan out)


It is a shame that ideas never progress any farther than super basic principles before they are implemented, so that the totally predictable outcomes that cynical people on internet forums mention become inevitable.


What a wonderful idea. I'm sure our noble politicians will ignore their donors this time and craft legislation that puts large companies under constant threat of more fines. This could never be weaponized against small businesses that pose competition to the bigger fish.


Giving corps even more excuse not to run proper bug bounties,

or care even less about shipping secure code?

Pass.


I don't know. I think you could perhaps align incentives such that any bounty claimed via the government program is competitive, public, and companies are ranked by the number and severity of bounties. Then the company would have an incentive to run a bounty program where they had a chance of controlling the narrative a bit.


There are two entities that constantly and consistently stomp all over human rights and sovereignty - governments and corporations. It also seems that most people are comfortable with asking them to increase the amount of control they have over our collective affairs.

It's quite the thing.


How do you propose such a law would work?


  1. White hat submits a "Notice of Vulnerability Testing" document to target company (copy also sent to government body) including their information, what systems will be tested, and in what time window
  2. Company is required to acknowledge the notice within X hours and grant permission or respond with a reason that the test cannot take place
  3. White hat performs testing according to the plan
  4. White hat discloses any findings to the company (keeping government body in the loop)
  5. Company patches systems and may reward white hat at their discretion
  6. Government body determines if fines should be applied and may also reward white hat at their discretion
Something like that. The white hat would have legal immunity as long as they submit the document, stick to the plan, and don't cause damage.


Nothing in your proposed law provides a way to distinguish between white hats and black hats, and instead it just presupposes that the person undertaking the conduct in question is a white hat.


Sometimes these events provoke regulators to take a closer look at the company.

https://www.ftc.gov/news-events/news/press-releases/2023/11/...


Do you feel the same about physical security? It's fine for people to walk around your building, peek in the windows, maybe pick the lock on the door, maybe even take a little walk inside, as long as they don't steal anything?


Weird, I don't feel nearly as touchy about some ones and zeros on a computer as I do my physical body's safety, without which I would not exist.


OK, make the comparison more direct, then. Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over, to check if it's locked? What about the neighborhood kids?


This analogy is more akin to exposing your database to the public internet with no credentials or weak credentials. Thinking about it just like the company in the blog post did... Oh, and the filing cabinet is out on the street corner, as the other commenter mentioned.

As someone else mentioned this would be more akin to a security officer of some sort waking me up and letting me know I left my front door open. I'd sure as hell be shaken but they were doing their job and I'd be thankful for that.


> Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over, to check if it's locked? What about the neighborhood kids?

If I leave that filing cabinet in the middle of Times Square in Manhattan (which has an insane amount of foot traffic every day), then yes, I would expect plenty of people to give it a little wiggle to check if it's locked. And I would be rightfully given a lot of questionable looks for complaining that passersby stop to check it out or give it a wiggle.

Having your service on the internet is not the same as having a filing cabinet in your house. I think the Times Square analogy is even underplaying it, given that on the internet your audience is many, many magnitudes larger and more remote/anonymous.

On the other hand, if I had a private VLAN (that wasn’t exposed to the internet) on my home network, then I would be definitely annoyed if my houseguests would try and pentest it without asking.


A closer analogy would be your friendly neighbour warning you that you left your garage door open. And yes I would appreciate him telling me.


I think a closer analogy would be if your neighbor walked over while you weren't home and lifted your garage door, noticed it wasn't locked, so went inside and poked around a little. Then came and warned you later that your garage door isn't locked and maybe you shouldn't store those bank statements in the garage.


What if he says that he has discovered that if he stands on one foot in the street in front of your house, holds anyone's garage door opener above his head, and clicks it 25 times at precisely 9:01am while shining a laser pointer at the top of the door, your garage door will open.


I don't think that's a good analogy.

What matters is if the thing they're doing to test your security is similar to what criminals would do to breach your security.

In the case of a physical location, that bar is low. It's things like seeing if your garage door is open, or your car doors are locked, etc.

In the case of computer resources, that bar is high. Probing your database for permissions holes is absolutely something that a normal "cyber criminal" would do. It's the equivalent of a carjacker looking to see if your doors are unlocked.

So an "online neighbor" alerting you that your database is unprotected doesn't feel weird at all. It's not the equivalent of that weird laser pointer thing you talked about, it's the equivalent of looking to see if your car doors are unlocked while you're away on vacation.


Would I be upset at him? No. Would I want to have been told? Yes. Would I think he's a little weird? Yes. Would I want him to keep doing weird shit and letting me know if he finds any other similar issues? Yes.


All in all, you will still be thankful he found out and warned you about it before someone malicious did.


Still missing something - the garage would have to be on your private property, not visible from public property, and the only way he could check for you is if he entered your property and tried to get into your garage.


See my reply above.


On the contrary, I would say that this is a garage you rent in a public space. The internet is open and I can make requests to any server. If you don't want your system to answer me, make sure it does not. If I am in front of an ATM on a public street, it doesn't give me money without authorization. Make sure your server does the same.


Streets are generally open. My house is on a public street - that doesn't entitle anyone to attempt to operate my garage door, let alone exploit a security vulnerability in its software to gain access. That's just trespassing.


The closer analogy would be your friendly neighbour warning you that he determined your garage door code was easily guessable after he spent 45 minutes entering different codes.


If I left my filing cabinet on the pavement outside my house, I ought to expect it to happen, and I would thank a Good Samaritan for telling me if I left it open.


But you would leave it on the pavement, right? Little honeypot for nosy punks.


If I owned a bunch of vending machines, and someone came to me and said "Hey, I found out that if you put a credit card in the dollar bill slot, it gives out free soda and empties all its coins through the return slot," I would a.) be pleased to have been informed and b.) not be upset that they did this.

If a neighbor came to me and said, "Hey, your mailbox that's located at the end of your long dirt driveway is protected by a wafer lock that can be opened by simply slapping the side of the mailbox in a funny way," I would maybe wonder why they were slapping my mailbox but I would be grateful that they told me and I would want them to continue doing whatever weird shit they were doing (so long as it wasn't causing damage).

When you put property in a public (or practically public) space, there's an expectation that it will not be treated as though it is on private property. There's a big difference between someone jiggling the door to your home (where you physically reside) and jiggling the lock on a mall gumball machine or the handle on a commercial fire exit.


Would you drive over a group of people with a bus? Would you do it in GTA?

There is a big difference between the digital world and the physical one. Many actions, e.g. stealing, are very different in these two worlds and have very different implications.


Communes exist. The internet is supposed to be a giant commune of researchers watching each others backs.


There's a huge fucking difference between "yo, the neighbourhood and country is unsafe and there is no strongly upheld norm here of people not seeing if they can enter someone else's house if their door is easily unlockable. You must be new here since I noticed your door is pretty insecure, I recommend you do x,y and z if you are to live here safely. Take care." Versus "yo, I just entered your home and snooped around since it was easy to lockpick. There are actually strong norms here of people not doing this so I know this is quite the social violation and something like this had a very low probability of happening otherwise but, you know, your door is weak so it was my right to enter. You should fix it btw"

The internet is like the former not the latter and taking a moral high ground stance that it just should be otherwise is just screaming underwater while doing nothing to actually protect yourself from an actual real threat.

I'd be very thankful if I moved to some place I'm unfamiliar with where people lockpicking is just a cultural norm and someone warned me I should get a better door.


Lack of proper regulations, engineering standards, and tangible fines means that the only democracy that exists is the people themselves taking action. The corps being hacked have plenty of malicious intent; perhaps focus on that.


In the American case, the interpretation of the CFAA under Van Buren (2021) would provide at least the defense that one does not violate the law if there is no meaningful authorization scheme in place to determine what constitutes "exceeds authorized access".

This may sound pedantic, but when reporting on the decision much of the non-specialist media seemed to fail to appreciate that in order to determine what conduct exceeds authorized access, it's necessary to be able to determine where authorized access starts and ends in every case as a factual matter. The courts essentially threw out the theory that one can simply use a non-technological solution (like a very broad ToS) as a backstop, and required some sort of notice and specificity.

I don't think the mere fact that such a technological scheme can be erected is relevant, since in theory you can put some sort of basic authorization scheme - including basic HTTP authorization - around pretty much anything accessible via the protocol. But beyond a showing of actually putting such an authorization scheme in place, there's no real way to determine the unimplemented intent of some company with any certainty. It's Orin Kerr's "gate-up-gate-down" theory - you need to have a gate in place to start with, instead of just a space where a gate can go or an assumption of where a gate should be, to figure out whether the gate is up or down; without that determination, one cannot meet all of the elements required to prove a violation of the statute.

I wouldn't even consider this "hacking" really. If prosecuted a defense attorney familiar with both the technology and the admitted niche area of computer crime law can readily conduct some very effective cross-examination against whoever the state is bringing out as a witness. The government does frequently rely on the lack of tech-competent and accessible counsel as a way to exert coercion (and usually resulting in a plea), and it doesn't help that the layperson has a very difficult time figuring out what qualities constitute competency when looking for attorneys (hence the enduring popularity of jingles since being memorable is frequently mistaken for being competent), but they are out there.


The timeline omits when the article was put online


According to the Wayback Machine, it first appeared January 10 2024. http://web.archive.org/web/20240000000000*/https://mrbruh.co...


It was posted earlier today (NZ Time). If they do end up reaching out though, I will amend that part with a revised statement :)


You could ostensibly make a great tool from this data for those seeking employment....

Make a tool which looks at the list of all the franchises within a radius of the person, and have it auto-submit applications to all of them simultaneously...


That can be easily deduced.


At the time of writing, accessing the link returns a bunch of Prometheus metrics... interesting.


Shouldn't anymore; that was a "pushing to production" moment. I wanted analytics since my site was getting flooded with traffic.
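
For anyone curious how to avoid that class of slip, a common pattern (a generic sketch, not necessarily how mrbruh.com is actually deployed) is to serve metrics on a separate port bound to loopback, so only a local scraper can reach them:

    # Sketch: keep Prometheus metrics off the public internet by binding
    # the exporter to 127.0.0.1 on its own port.
    from prometheus_client import Counter, start_http_server

    page_views = Counter("site_page_views_total", "Total page views")

    # Prometheus scrapes 127.0.0.1:9100 locally (or over a private
    # network); external visitors never see /metrics.
    start_http_server(9100, addr="127.0.0.1")
    page_views.inc()  # called from request-handling code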


Are you not concerned with the CFAA?


Does this count as authorized access under the CFAA?

I’m curious what the limits are


Then again, the people in potential harm's way seem to be the poor sods trying to get hired by these companies for a meager hourly wage.

I don't see how this "p0wns" the companies themselves.


If you view this page in Safari, it’s just a text document


It is using the AVIF format (for images) for a 2x compression bonus over PNG while still maintaining higher quality than JPG.

If you can't view the images, it likely means you are using an outdated browser; all current versions of browsers support it (afaik) except Internet Explorer. [0]

...And if you are using Internet Explorer, then god help you.

[0] https://caniuse.com/avif


I'm on Edge 120 (released a month ago) and can't see it


I don't usually use Edge 120 but have it installed on my Mac. The images are indeed broken.


I'm not seeing it that way in Safari 16.1 on a Mac.


Since this is a post about security, this is your daily reminder to update your browser to stay safe on the internet. Up-to-date versions of Safari support AVIF images, and there have been multiple RCE vulnerabilities with known exploits fixed last year in Safari...


iPhones are the scariest devices to do anything important on.

I had a moment of total freakout when I realized the person across from me at lunch had an iPhone on the table. Actually he had an Android, and we continued talking like no big deal.

To be clear, we were talking about a 10-100M dollar problem; this wasn't small potatoes.

Too many exploits, I can't imagine having anything of value on an iPhone.


> I had a moment of total freakout when I realized the person across from me at lunch had an iPhone on the table

Why?


Please, tell us more about each of these points you make, in detail. I'm compelled to know.


I love the picture of your cat on the home page :)


That's my lovely cat, Jingles. She is getting a bit old so I thought I would immortalize her on the homepage of my site.


> Timeline (DD/MM)

> 06/01 - Vulnerability Discovered

> 09/01 - Write-up completed & Emailed to them

> 10/01 - Vulnerability patched

Note those dates are DAY-MONTH. At least they patched it within a single day.

I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

Reminds me of my experience with HackerOne: We had some participants who would find a small vulnerability, but then sit on it for months while they tried to find a way to turn it into a larger vulnerability to claim a higher prize.

Then when they finally gave up on further escalation and submitted it, they'd get angry when we informed them that we had already patched it (and therefore would not pay them). The incentives in infosec are weird.


I feel I should clarify: the writeup was not the blog but rather the vulnerability disclosure report (PDF) I sent to them directly.


To clarify the dates: the vulnerability was discovered on a Saturday (Friday evening their time). It was reported on Tuesday (Monday their time).

The only email listed on their site was for the sales team, which would not be checked on a weekend.


Yes, I understand, but that’s my point: In my experience, the detailed write-ups that external pentesters sent us could have been replaced by a 1-2 paragraph email for our engineers to read and fix ASAP.


When you turn actual, creative, and exhausting work (vulnerability research) into some kind of high-stakes gig job, you deserve this problem.

I am not against bug hunting by any means, but if you want me to act like I care about your product and not about my money, pay me monthly.


> When you turn actual, creative, and exhausting work (vulnerability research) into some kind of high-stakes gig job, you deserve this problem.

You don’t make HackerOne your primary source of security testing. It’s a fun thing you do in addition to your formal security work internally.

The reason people do it is because so many people expect or even demand payment and public recognition for submitting security issues they found. Just look at how many comments in this thread are insisting that they pay the author various amounts of money. The blog post even has a line about how they have not provided recognition (despite being posted exactly on the day it was fixed, giving the company almost no time to actually do so).

HackerOne style programs provide a way to formalize this, publicize the rules (e.g we pay $25K for privilege escalation or something) and give recognition to people finding the bugs.

Pentesters like it not only because they get paid, but now they can point to their record on a public website.

This isn’t a “gig economy bad” situation.


Furthermore, companies that don't already have very mature security programs will not benefit from bug bounties. I've run a bug bounty program on H1 before, and it was a nightmare. No one reads the scope, and 99 out of 100 of the reports you're inundated with are really trashy. Managing such a program is a full-time job for one or more people, especially if it's a big company.


Most vulnerability reports I see at work come from security researchers in Pakistan and India.

I have never found out if this is a side gig, a full-time job, or a hobby for people.


How do you measure productivity? How do you budget for a bug hunting department?


Measuring productivity in a useful way is pretty close to impossible in a vast swath of jobs, though people make a killing (and make everyone involved considerably more miserable) pretending otherwise

The reason most people have converged on a preference for salaried work is that most jobs don't actually need consistency to be useful, but most people do need consistent pay to focus on a job


Very much agree the incentives aren't fully aligned.

From a bug hunter's perspective, certain issues are often underpaid or marked as non-issues (and then subsequently fixed without paying out), so it's in their interest to find a chain of issues or explore further to show real impact.

Then from the programme's perspective, you have to contend with GPT-generated reports for complete non-issues, so I can also understand why they might be quick to dismiss anything without hard evidence of impact rather than a "potentially could be used to".


> "No contact or thanks has been received back so far, I will amend this comment if/when they do so :)"

They couldn't even be bothered to send a proper thank you.


To be fair... that's today. Guessing something might be in the works but it's 1AM Eastern Time in the US.


In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?

Considering that there is “more than one way to skin a cat”, it is not a given that vulnerabilities further along the chain will be resolved by closing the initial vector.

When a chain of vulnerabilities is reported it might become clear that not only does the initial attack vector need to be closed, but additional work needs to be done in other areas because there are other ways to reach that code which was called further along the attack chain.


> In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?

Nope! The two vulnerabilities are usually one and the same. The person is just trying to find a clever way to access additional data to make their payout larger.

From the customer perspective, getting the initial vulnerability fixed ASAP is the best outcome.

When they start delaying things to explore creative ways to make their payout larger, everything goes unfixed longer.


because writing up a detailed report takes 30 seconds


> I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

Maybe it's because the write-up was well written that they could patch in a day?


> I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

That's what you'd expect: finding != understanding, and you need some understanding before you can submit a sensible, actionable report to the vulnerable party. And then you need to write it up in a way that will be understood by the recipient. Going from initial finding to submitting a detailed report in a few days is excellent turn-around time.


Yeah... Is it ok to do a public writeup on the same date the vuln was patched without an acknowledgement from the client? I would have scheduled this blog post at least a week later.


What client? They haven't even answered the guy's email.


Once they changed the credentials and no longer share them, this particular issue should be gone, no?


Maybe... But bashing the client on the day they patched because they haven't communicated is somewhat shaky. Bashing them a week later is totally cool in my books.


What "client"? This looks like a researcher reporting a bug for free (or maybe through a bug bounty program). They have zero obligation and the vendor is not a "client".


Maybe also include in your quote that they didn't thank him for reporting it


    The incentives in infosec are weird.
Full disclosure is the only honest way to operate. For everyone involved.

Much smarter folks than me have been saying it for decades.


Why should you be honest and open with companies? They for sure aren't with you.


It's not about companies. It's about their customers.

Do you even know what Full Disclosure is?


Why should the researchers or other vulnerability spotters care about the company's customers? The companies don't care further than what they can profit from the customers.

Yes, I know what full disclosure is. Companies don't do full disclosure about anything. Full disclosure is better than not disclosing publicly. But monetizing the vulnerability is akin to what companies do.

I find it utterly bizarre that it's totally OK and even lauded that companies are selfish profit maximizing machines that DGAF, but individuals should pamper them like babies.


Full disclosure isn't something for _companies_ to do. It's what _researchers_ do. Full disclosure isn't compatible with the monetization incentives offered by companies. You're publishing in public and immediately.

I think you clearly do not understand what full disclosure is.


My understanding of Full Disclosure is that researchers publish the vulnerability (and potentially exploit) publicly without coordinating with the software vendor. This contrasts with Coordinated Disclosure (sometimes "Responsible disclosure" in corporate propaganda) or No Disclosure (and potentially e.g. selling the exploit).

I admittedly used disclosure in a bit different sense for companies in that companies typically don't give out any (truthful) information they have if they aren't required by law. And they lie when profitable.

The symmetric action from a researcher is to sell the exploit to the highest bidder. Of course if the researcher wants to do other disclosures, that's fine too. But what I don't like is the double standard that researchers are scolded for being "unethical" but companies, by design, not caring about ethics at all is just fine and the way it should be.


But that's exactly why as a researcher you should operate under Full Disclosure. Properly motivate the companies to do what is right and don't take on questions about financial motivations, etc.


Companies can always better align incentives by paying more and not try to downplay vulnerabilities.


thanks for the clarification - I also read this as it took them a MONTH to fix the vulnerability.


> The incentives in infosec are weird.

Well - only the amateur infosec world where you try and force someone to be your client after you do the work, and then get butthurt when they don't become your client.

In the professional infosec world the clients choose to hire you first.


In what year was this? January 10 is tomorrow, even on the east coast, at the time of writing this comment.


Someone living beyond the US's east coast? Impossible!


I don't think it was an unreasonable assumption given that the article talks specifically about American fast food chains.


I guess three clues:

* They were just trawling Firebase accounts for anything left open, and the first hit was a company that works with a bunch of American fast food chains. That doesn't require OP to live in the US.

* They specified "America's fast food chains"; someone living in the US probably wouldn't qualify it with "America's".

* They used a $DAY/$MONTH date format, which is uncommon in the US.


> * They specified "America's fast food chains"; someone living in the US probably wouldn't qualify it with "America's".

I call that US-centrism. Quite annoying to non-Americans living in the States.


* If they are in America, they're a time traveller.


they discovered the vulnerability by reading about it on HN and then going back with the posted write up, classic lazy time travelers.


"[T]hey are", and "they're", in the same sentence. I don't know...I don't know....


The first 'are' is emphasized.


The way the dates were written should be an indication that they aren't in the US.


New Zealand GMT+13 Moment


Not everyone lives in the US of A. Half the day is over already in East Asia.


That's what I was thinking too, not because it's not already 10th January in Europe, but because I doubt you can expect a 'thank you' in <8 hours. So I assume this might have been 2023?


It's 2024-01-10 07:11 in France


Duh, my head was still not awake. I wanted to write 'it's not even 8 am in Europe'.


>With an upbeat pling my console alerted me that my script had finished running

Forget the pwn, how do I do this?

Also, HN used to think this was cool now there are 20 posts blaming the hacker…


I've appended `; tput bel` to the end of long-running scripts to get the same effect.

Fun fact: the `bell` control character is part of the ASCII standard (and before that the Baudot telegraph encoding!) and was originally there to ring a literal bell on a recipient's telegraph or teletype machine, presumably to get their attention that they had an incoming message.

To keep backwards compatibility today's terminal emulators trigger the system alert sound instead.


In Java and JavaScript it’s just:

    \u0007
It's handy to put at the end of shell commands that take a few seconds or more to complete.


The Apple II+ still had a ‘bell’ key on the keyboard (I can’t think of a more recent computer that had that)


I always used to just have 'echo "^G"' instead (where ^G is typed as CTRL-V CTRL-G).


On macOS I just add `; say done` to my command. If I didn't think of doing it before starting the command (which is most of the time), I just type it and press enter while the command is running; it gets buffered and executed right after the command finishes (be careful that it's not an interactive program you're executing though, or it might take your "say done" as interactive input).


You can also do Ctrl-Z to pause the running process, and then `%1; say done` (or whatever) to restart the first queued job and then run the new command. Avoids the interactive issue


>Also, HN used to think this was cool now there are 20 posts blaming the hacker…

I'm not sure whether it's HN thinking this is uncool (it is cool!) or it's HN taking the unfortunate realistic position that this type of stuff only gets the reporters into trouble, after seeing it happen time and time again. People doing cool stuff get in trouble, and it's sad to watch.



Yeah, what happened to the "Hacker" in Hacker News? (Responding to people blaming the 'hacker', not the sites.)

This guy just grabbed publicly available information, and by 'public' I mean put out onto the web unprotected, just put out there. If you can just basically browse to something, is it really his fault for finding it?

It's like if I have a front door on my house, and just in the front hallway I have a huge naked picture of my wife. If I leave the door open, can I get mad at pedestrians walking by for seeing the picture? Maybe they walk up to ring the doorbell just to get a closer look; walking up to the door, but not going in, is allowed.


I think that, according to the law and related suits, accessing publicly available URLs without authorization is still technically prosecutable - I'm not a CFAA expert but I'd double-check there.


I think you are correct, that the law says that.

I think the law is pretty wrong.

It means I can break the law by just accidentally browsing to something. I can be breaking the law just by seeing it, before knowing I'm doing something wrong.

Basically, just see something and be guilty before being able to look away.


Debian (and derivatives like Ubuntu) come with a handy shell alias called `alert`.

It is meant to be used after a command or a chain of commands to give feedback about success or failure. The alias by itself doesn't issue a ping, but can easily be amended to do so.

What worked for me is to add an invocation of `paplay`. Actually it is two different invocations, one sound for success and another one for failure.

In addition to that I also send an ASCII 0x07. I have both `tput bel` and `echo -e "\a"` in my alias, but don't remember why. Probably one of them is enough. I do this because I have my terminal emulator set to visual bell, and that causes the tab to change color when the command is finished, so I can immediately see it even if I am in another tab.
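
A sketch of what that amendment can look like, as a function shadowing the stock alias (the freedesktop sound paths are assumptions and vary by distro):

  alert() {
    # $? here is the exit status of the command that ran just before 'alert'
    if [ $? -eq 0 ]; then
      paplay /usr/share/sounds/freedesktop/stereo/complete.oga
    else
      paplay /usr/share/sounds/freedesktop/stereo/dialog-error.oga
    fi
    # ASCII 0x07 as well, so a visual-bell terminal recolours the tab
    tput bel
  }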


That being said, there might be an easier way by configuring the desktop environment to make a noise on notification, but that is a route I did not want to go down.


I have this in my .bashrc

  beep() {
    # $? is the exit status of the command that ran just before 'beep'
    if [ $? -eq 0 ]
    then
      file=/usr/share/sounds/purple/receive.wav
      ret='true'
    else
      file=/usr/share/sounds/purple/alert.wav
      ret='false'
    fi

    # play in a background subshell so the prompt returns immediately
    (aplay "$file" 2>/dev/null >/dev/null &);
    # re-run 'true' or 'false' so success/failure is preserved for chaining
    $ret
  }
Can be called like this:

  $ command ; beep
Depending on the return value it'll give a different alert. It preserves the return value so you can still chain other dependent commands after it.

This depends on the libpurple sounds to be where they are (works in ubuntu at least)


My fish shell shows a desktop notification if some other window has the focus and the command ran longer than 10 seconds.

Fish config: https://github.com/qznc/dot/blob/master/config/fish/config.f...

Notification script: https://github.com/qznc/dot/blob/master/bin/notify_long_runn...

I stole it from some zsh solution originally.


    #!/usr/bin/env zsh

    (mpg123 /path/to/processing3.mp3 > /dev/null 2>&1)
processing3.mp3 is the "task completed" sound from Star Trek,

then it's just `./foobar.sh && boc` or `./foobar.sh; boc` as appropriate.


On Kubuntu, you can use paplay to play short audio files. Change the path to an audio file of your choosing.

    sudo apt install pulseaudio-utils
    ./some_script ; paplay /usr/share/sounds/freedesktop/stereo/complete.oga


I suggest using ntfy [1] for this. It's open source and self-hostable. It lets you push notifications to your phone like this:

    ./myscript.sh; curl -d "Script done" ntfy.sh/mytopic
Disclaimer: I am the maintainer of ntfy.

[1] https://ntfy.sh/ + https://github.com/binwiederhier/ntfy


This is the most perfect blog post. ZERO fluff, straight to the point. Win.


Except it is only almost perfect: it would have been perfect had he been thanked and rewarded. Of course that is not on him, but I felt so disappointed reading that at the end.


It's been less than 24 hours. I don't think any company works at that speed.


Oh come ON. Sending an email acknowledging the report and saying "Thanks so much for reporting this - we will look into it ASAP" takes about 10 seconds


"Perfect" is referring to the blog post, not the outcome.


Full permissions for a user is blatant negligence.

For anyone who's never used Firebase before, this is as simple as a single piece of logic that appears basically as:

    if authUserID is UserDirectoryID

That simple.
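
A sketch of what a proper per-user rule looks like in Firestore security-rules syntax (the collection and variable names here are made up for illustration):

    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        // only let a signed-in user touch their own directory document
        match /userDirectory/{userDirectoryID} {
          allow read, write: if request.auth != null
                             && request.auth.uid == userDirectoryID;
        }
      }
    }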


I've never used Firebase before. But are you saying that, in its default configuration, anyone who registers a Firebase account has R/W access to any Firebase database as long as the database owner forgot to put that line in there somewhere?

That seems like an insane design...


No, the default is no access to anything. You have to write rules that allow access to each record in the database.

It sounds like the rule that they wrote only checked that the requester _is logged in_, because they assumed that visitors can't create their own accounts.


Which, even if that assumption were true, is still bonkers, because from what I see in the article they had no partitioning between tenants or permissions checks for different user roles. So even if they hadn't accidentally allowed creating new accounts, any account on any one of their existing customers had full access to every row in the database.


> any account on any one of their existing customers had full access to every row in the database.

Correct. :/


It's mind-blowing to me, as someone who's built a SaaS and then talked to customers and ultimately their CTOs and CDOs, that KFC and co. ended up using such a service. Either they would isolate the level of data exposed to the service and trust them on their contract (and then ruin them in court), or they would require some kind of compliance like SOC 2, which should at least mean the solution was pen-tested; and any pen tester worth anything will immediately find that Firebase is part of the solution and immediately test the access rules.

The fact that the company/CEO/CTO seems to just get away with this is depressing, because then why should anyone else bother? It's not good business sense to invest in security if there are no serious repercussions.


Yeah, the whole design of Firebase is that the client interacts directly with Firebase, not via your server. Which makes sense for auth since you don't want to be handling that manually, but the database? That makes me uneasy.


I've seen many, many Firebase projects with rules that gate access only on "auth != null" instead of implementing even rudimentary access controls. It's a very dangerous habit that seems to come straight from the Firebase docs[1]:

> When the user requesting access isn't signed in, the auth variable is null. You can leverage this in your rules if, for example, you want to limit read access to authenticated users — auth != null. However, we generally recommend limiting write access further.

[1]: https://firebase.google.com/docs/rules/rules-and-auth
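
i.e. the recurring pattern is roughly this (a sketch):

    // any signed-in user, including self-registered ones, can read and write
    allow read, write: if request.auth != null;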


When you create the database, you're asked whether you want to give everyone access (development mode), or whether no one gets it (production mode). If you choose the development mode, it will automatically disable that access after a certain timestamp, so you don't forget to update it before shipping. This of course doesn't stop people who don't care about security from just manually giving out public R/W, or extending the timestamp.


This isn't owning fast food chains; rather, it's compromising some AI startup that has some of them as customers.

Title is misleading.


It exposed PII of the managers & employees of ~half of the most popular fast food companies.

Personally I feel the title is justified but I understand and respect your viewpoint.

Also keep in mind that trying to clarify as much would have made the title much longer than I desired.


Aren't you afraid one of the companies involved may file a complaint with the FBI or police and get you arrested?


Arrested for what? The system gave them permissions, they didn't exfil data, and they disclosed it to the company. He did those companies a favor by showing them how vulnerable they are by outsourcing every operation and process in pursuit of profits.


Title: I pwned Chattr.ai via Firebase misconfiguration

That's what you should call it. It explains to readers what's going on without over-sensationalizing.

That isn’t too long either.


That's a bit unfair; I think it's pretty important that it has real-world consequences. Nobody knows what Chattr is and who their users are.


> This isn’t owning fast food chains; rather compromising some AI startup that has some of them as a customer.

By this argument, getting access by phishing a company employee also wouldn't count as an attack on the company.


No, as a company employee is directly tied to, and the responsibility of, the company.

These companies are responsible for their employees' behavior and data, but they are not responsible for, nor legally liable for (in most cases, some exceptions apply), the actions of a third party that they have retained to help with hiring.

In fact the contract they have with said third party likely absolves them of any liability.

The title should be: I owned an AI startup via Firebase misconfiguration.

You can even name the startup if you want. That’s not flashy though and this person wants marketing.


TBF your proposed title is less snappy.


Of course, but that's good in most cases, as then you don't get an overreaction.

The right people will read it (Chattr.ai's customers) and respond. Right now everyone looks at it, and some CISO will overreact and make everyone go check their Firebase configurations, which may well be a non-value-add.


I think it's incomplete. The startup needs to be named and shamed in the title.


The article is not shy about naming the startup (chattr.ai)


I don’t disagree with this either, I just didn’t think of it when I put my response in.

Naming and shaming does work.


You are a good human. Seems they had not tweaked the database rules correctly, maybe even left the default setup! That means you could have executed this:

    firebase.database().ref('/').set('All your data is gone');

Better yet, download the whole DB and then:

    firebase.database().ref('/').set('I have all your data, pay me to get it back');


No contact or thanks has been received back so far :)


It's kind of wild that when businesses lose control of people's personal info, they get no punishment. And when someone saves them from losing people's personal info, they give no thanks.

Seems well-funded companies are immune from data liability or responsibility.


Honestly, at this point is there anyone whose PII hasn't been leaked in a major company's/organization's data breach?

Also, Wikipedia has a list of major data breaches: https://en.wikipedia.org/wiki/List_of_data_breaches


"Wild" is the unreasonable expectation your data is "personal" after sharing it with a third party, under a terms of service agreement no less.


>terms of service agreement

Are those the documents, often dozens of pages of barely understandable legalese word salad, that we've conditioned nearly everyone to click past?

While I certainly agree that people share way too much data, I personally think hiding behind "it's in the terms of service agreement" is getting quite tired when they are designed in such a way that you are encouraged to skip past them, and they are worded in such a way that a lay-person doesn't have a chance of understanding what the ramifications of agreeing are.

Not to mention that, quite often, you don't really have a choice in the matter if you want to have a relatively normal life (e.g. being forced to agree to the terms of service of some random service to submit an application to a job, and not having a job isn't an option).


What makes you think you're entitled to anything, let alone a "normal" life, in this world? No one forces you to live in and participate in society, but if you choose to, it's at your own risk.


This reply seems rather… unrelated to my comment. But perhaps it'd be a fun philosophical debate at some other time.


It's directly related, perhaps you just struggle with comprehending the message?

Where does your entitlement come from? I bet working in tech too long.


Personal insults, nice.

It's related in the same way that I can say "nothing matters at all" in reply to literally anything. Which is to say, very loosely, and entirely lacking substance.


I'm sorry you're insulted by your own inability to answer a simple question about your own entitlement. It must be very difficult for you to engage in difficult conversations, and I hope this wasn't too stressful for your frail and fragile ego.


I wasn't expecting a bug bounty, but not even getting a 'thank you' does hurt my soul :(


Often when pointing out how people fell victim to a con they won’t thank the person who tells them about the con but rather attack them. Basically they can’t admit to being so stupid as to have bought into a con. On some level you can be happy they didn’t come after you or something.

I totally understand how you feel though.


Well, they're incompetent - is it a big surprise that they have poor manners too?


Yea, and if they were actually breached and there were victims, the first thing they would do is issue a press release telling the world "We Take Security Very Seriously."


Is it legally differentiated if they respond to the reporter?

Or is there some weird loophole of "We didn't take action because of your message. We just happened to patch the same vulnerability after you mentioned it. We are not aware of any penetrations, because we didn't notice your message"?


> Is it legally differentiated if they respond to the reporter?

Nobody knows.

But between taking an unknown legal risk, vs being seen as ungrateful, the choice for legal is quite clear.


If it's any consolation, people these days frequently ignore (or read but don't bother acknowledging) pretty much any email that was intended to be helpful, not just security disclosures.


To be fair, it looks like it was only patched in the last 24 hours so not totally unreasonable...yet.


it is 11:26:45 EST. Ready. Go.

"Hi, we have fixed the issue you reported to us. Thank you so much. We are willing to offer a reward of <x> dollars to you, because you have protected our customers. Please reach out with a payment address or any other questions you might have. Thanks again, Tim from <Large corporation>"

and... stop timer. 11:27:38

was that so hard?


If you have time to fix it, you have time to say thanks.


If this had been exploited and the job applicants to Target, Subway, Dunkin et al. had bank/credit fraud committed in their names, would the big companies be liable for not performing due diligence on chattr.ai? To be clear, I'm asking from a legal standpoint, not a practical one.


For more crucial PII (such as SSN, health data, payment info, etc), vendors are generally required to have certifications from a third-party auditor (such as SOC2). If the big companies fail to check that, then yes, they can be made liable.


No rules or laws require it. The closest requirement would be PCI around credit cards, but you need lots of volume to be required to do an audit. HIPAA just requires you to do risk analysis and implement risk management. SOX is up to the auditor; when I was CTO at a public company, they were fine with me signing an attestation of all the things we had implemented. Same with banks: no explicit requirement in either GLBA or FDIC rules. Core bank systems are so old that none of that data is even encrypted, nor is network traffic. Stuff is still in COBOL.

The forcing function would be cyberinsurance policies, which typically want to see audit results if you have multi-million dollar policy limits.


There are state laws that this runs afoul of. https://www.mass.gov/regulations/201-CMR-1700-standards-for-...


https://www.mass.gov/doc/201-cmr-17-standards-for-the-protec...

It's very basic. There are no best-practices clauses; it's all "reasonable" clauses. Also, there is no requirement for an external audit.


HIPAA requires that no entity involved leak any PHI or penalties will be applied; you absolutely have to do more than "do risk analysis/management".



I think the fact that this is true and well known (amongst those that could abuse it) is evidence that infosec, by and large, is overemphasized.


> No rules or laws that require it

It will just be the FTC knocking on your door…


Or the SEC, because once the breach/incident is public, the share price drops, and failure to have disclosed those factors beforehand constitutes "securities fraud." Increasingly this is the default method of corporate regulation.


The FTC will come knocking at your door even if you do pass an external audit and have SOC 2/SOC 1/ISO certification. Equifax is an example.


Who will promptly slap you on the wrist, wag their finger at you and send you on your way.


The FTC will absolutely not knock on your door if you expose users' SSNs to the internet.


I assume it is more likely chattr.ai where the responsibility falls, except if the companies were using it as a tool and configuring the service fell to them. It comes down to the contractual circumstances: a 'the provided tool works well' vs. 'using the provided tool well' kind of thing, I guess.


Probably, yeah, although it just comes down to whether they get sued or not, I guess.

NOTE: I am not a legal professional, just making my guess.


There is probably at least one European citizen in there, so GDPR applies.


Someone applying to work at Taco Bell or Subway couldn’t afford a lawyer even if they worked for a full year and saved every penny.


That's why the existence of class action suits is a good thing (imho). It balances power to some extent. The unfortunate reality is that only the lawyers make money in such cases.


I was looking at jobs for my son at Safeway supermarkets and lazily put https://www.safeway.com/jobs in the browser.

That redirects to https://www.careersatsafeway.com/desktop/home -- which is very much not about jobs at Safeway -- it appears to be an Indonesian gambling/gaming site.

Safeway.com has zero email contacts published and expects communication to be via phone call or chatbot. I found their domain admin email and sent them info with no response, and no change to their site behavior.

This makes me think that they might be ripe for more monkey business but that's not my thing. Oh well.


Seems to be fixed: This request was blocked by our security service


Not fixed where I am.


What the hell, I see the same thing. It's crazy to me when large companies don't even have an option for: in case of dumpster fire, send an email here.


Technically (or on any other basis) it's not my problem, but it bothers me because I'm weird.

I was tempted to find their CTO on LinkedIn and post a message there, along with the fact that there was no reply to my outreach nor a proper channel for it.

I think the only thing in their defense is that they must get a lot of angry customer messages and they just don't want to deal with that.


I very much doubt it's got anything to do with their CTO - the management of a corporate website is usually jealously guarded by marketing/corporate communications


Yes, the CTO hopefully has nothing to do with lower level operations like that. But if they get a public burn they're going to issue a decree that will be addressed.


No what I mean is that it won’t even be in their org. The public website will belong to the head of corporate communications or some similar chief bullshit officer


Ah, yes. It looks like my post here did get the security folk involved -- but it appears that they've yet to fix it. The power of Hacker News!


Hi, Albertsons/Safeway VP of Security Engineering here. Thank you for disclosing this. I'll have it fixed, along with the fact that our VDP submission link is missing from the Safeway site. Here it is for future reference: https://albertsons.responsibledisclosure.com/hc/en-us


As it's still not fixed, I tried the form there. It let me fill it out and send it in and then told me I needed to create an account, which made it appear that my submission wasn't sent.

I've done enough here but ffs, if that form requires an account to be created beforehand then don't let the submitter go to the trouble of filling it out and then discard it.


Edit: my bad. Looks like browser cache was "helping out". Problem is resolved from my vantage point.


Hi! I was wondering if it would get noticed here ;-)

But as noted elsewhere, it's still not fixed.

And the link you shared is a good thing, but is it going to be easy to find for someone who sees an issue with your websites? I'd recommend putting a link here: https://www.safeway.com/help/contactus


It's definitely not fixed: the (likely malicious?) redirect still happens for me now. How embarrassing (for you).


Please also add a security.txt file so that it is not necessary to navigate through a labyrinthine site to get this information.

https://datatracker.ietf.org/doc/html/rfc9116


Firebase is a shitshow. I say this as someone who really tried to like it and sadly built a project for a client using it.

Other than this security vuln, the issues vs. just using Postgres are:

* It is more work! Despite being a backend-as-a-service, it is much less code to just write a simple API backend for your thing, both in time to do it and time to learn how to do it. Think of Firebase as being on the abstraction level of Sinatra or Express, and you may as well just use those. Things like Firebase and Parse etc. are more complicated, for the same reason it is more complicated to walk to work with just your arms and no legs (even though there are fewer limbs to deal with and no backend!).

* Relational is king. Not being able to do joins really sucks: yes, you need to make async calls in a loop. NoSQL is premature optimisation.

* Lots of Googlization. This means lots of weird, hard-to-find clickops configuration steps to get anything working. Probably why this security flaw existed(?).

* The emulator is flakey, so for local dev you need another cloud DB, and yes, all that Googlized, RSI-inducing clickops setup.

* I reckon it is slower than Postgres at the scale of starting a project. Traditional architectures are blazing fast on modern hardware and internet. Like playing a 90s game on your laptop.

* Apparently as you scale it gets pretty pricey.

The main thing is: it actually slows you down! The whole premise is this should speed you up.


The flakey Firebase local emulator is the bane of my existence, and poorly documented to boot.

On top of the Googlized clickops, there's the whole Firebase vs Google Cloud situation, where you end up having to drop down to "real" Google Cloud for certain specific features. The docs appear to be detailed but you often end up with more questions than answers.

If you are ever thinking about using Firebase, give Supabase a try. The emulator works well, the dashboard is there for prototyping, but you can just write SQL to clearly define your database and migrations. Since it's just Postgres, you have a clear route to leave Supabase if you ever want to.


Just curious, what’s flakey about it?

I’m not at Google anymore but I was a core contributor to the Firebase emulators project when I was. I can think of many flaws with the emulators but flakey is a new one to me


It often just crashed with an error. Now, I am a Windows user, so MMMV, and this might be the reason. In some places the behaviour was slightly different and I had to work around that; I don't recall the specifics. And the idea of a test suite that starts the emulator, runs the tests, and gives a result, reliably... well, I gave up on that.


I see this kind of post all the time. If you're using relational data with a key-value store, you're doing it wrong. You can do anything with a key-value store that you can do with a relational database, but there are trade-offs, since now you have to heavily denormalize for performance and figure out how to keep things reasonably consistent.

Firebase is not an alternative to Postgres alone. You need an actual API server. The value of Firebase is you don’t need that, nor do you need to worry about ops, authentication, queues or other things.

The issue the OP found could have been easily fixed by simply reading the docs, but that seems to be a rare activity these days.


There is no such thing as "relational data" here. There is the data I need to store to implement my app. No matter how I shaped it, it was suboptimal. Where it might shine is a subsystem like chat, with just messages. Oh, I just got a flashback about Firebase rules. That alone is a time sink; you could have got the project done in Rails already :-)

The hard work of using Firebase's APIs and libraries and reading its docs (which are detailed but badly organized) is more than the delta of not needing a backend. And for a non-trivial app you will end up using functions: in fact, if you want a guarantee that your user has a name, then you will need to write a function. And that is… a backend, like writing an app.route statement.


While NodeJS+Postgres is my go-to, I think it's harder than you're making it sound. Firebase would probably be easier for someone who's new to this altogether or somewhere in between.

There's still nothing that holds your hand through a proper client-server interface, good relational schema design, and all the glue in between. Partially because nobody agrees on what those are.


From this post I can tell you’re not really understanding how Firebase is supposed to be used, which is fine. For you it’s better to use the traditional approach with database and app server.

And yes, there is such a thing as relational data. If you do not believe this then you really shouldn’t use Firebase (or dynamodb for that matter).


I know I am holding it wrong etc! But I really tried in earnest, as a fanboy of Firebase, for quite a long time. The problems I had were with basic things. You have companies, a company can have many users, users might belong to more than one company (hello Slack...), and then there can be relationships between users.

Putting aside the problem between chair & keyboard.

Another difference is that if you make a mistake in your relational schema, you can SQL your way out of it - add an extra join or group by. And you can also fairly easily migrate your way out of it to a new schema that has the right structure.

This requires actual code with Firebase, a lot of patience, and probably a lot more downtime. So you need more of a waterfall approach, I would suggest: design a schema ahead of time and know all of your requirements. NoSQL document-oriented schemas just aren't flexible (unless the DB supports something like materialized views to help you get out of it).
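
For contrast, the companies/users shape above is boring, flexible SQL (a sketch; table and column names are made up):

    CREATE TABLE companies (id serial PRIMARY KEY, name text NOT NULL);
    CREATE TABLE users     (id serial PRIMARY KEY, name text NOT NULL);
    -- join table: a user can belong to many companies and vice versa
    CREATE TABLE memberships (
      company_id int NOT NULL REFERENCES companies(id),
      user_id    int NOT NULL REFERENCES users(id),
      PRIMARY KEY (company_id, user_id)
    );
    -- all users in a given company
    SELECT u.* FROM users u
    JOIN memberships m ON m.user_id = u.id
    WHERE m.company_id = 42;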


Firebase's whole premise is seamless syncing between locally cached data and your backend. If you "just use Postgres", life is simpler until your user goes offline/runs out of mobile data/whatever, and then they're immediately screwed.


It's worth noting that Firebase doesn't have a true offline-first architecture, but rather cloud-first: by default, queries run against the cloud, and only the results of those specific queries are temporarily cached on the client. Firestore will try to reach the server first before falling back to the local cache, which can result in a subpar UX on a patchy network connection. It does also provide store-and-forward of updates from client to server. But it's not a true offline-first architecture, since it does not preemptively sync a database to the local user device for offline-by-default access.

Regarding Postgres, that is where tools like PowerSync (disclosure: co-founder) and ElectricSQL are useful, which are both sync layers for Postgres for offline-first architecture.


Most apps are online-by-default these days and don't even gracefully degrade without internet. Firebase does have the offline DB, but it has a ton more features, and I wouldn't say the offline DB is the only selling point of FB.


This is the exact use-case I want to optimize for. Offline-first with robust and seamless syncing. Firebase keeps promising it but I would love to find more transparent tools that work better on mobile + web.



Do you have direct experience with any of these and especially experience using them with mobile offline-first + sync?


I feel this needs a framework (not a library) to take care of it all. Abstract away the webbyness. Something like Elm, with a type that represents data, and behind the scenes it does all the ServiceWorker and syncing crap for you.


https://supabase.com/blog/react-native-offline-first-waterme... may be of interest. https://supabase.com/blog/postgres-crdt seems to be abandoned but would be the next logical step beyond this.


All roads lead back to the RDBMS; it's amazing how this piece of theory just works.


> this piece of theory

Key words right there. The relational model is a timeless mathematical model for data that gains both logical consistency and adaptability as a result. It has and will continue to stand the test of time.


And in practice it has a superpower: agility. The pointy-haired boss wants your OLTP to be an OLAP, and you can kind of hack it. You want to put the user's birthday on the settings page this quarter? Sure, even if that is in another table. You can even make it efficient.
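
For illustration, the hack is often just an extra join (a sketch; table and column names are made up):

    SELECT s.*, u.birthday
    FROM settings s
    JOIN users u ON u.id = s.user_id
    WHERE s.user_id = 42;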


I'm coming to this conclusion as well.

Something like DynamoDB can be great for simple data. I liked the idea of GraphQL (technically an API query language and not a database). Both of them turn into hot garbage once you get into complex data, especially if it's being aggregated from multiple sources. Or maybe the systems I work with just implemented them poorly.


In my experience, the roads lead back to SQL. It deviates from the relational model. It may even be that SQL was successful because it deviated from the relational model. Perhaps the theory doesn't just work?


Roads lead back to SQL because it became a de facto industry standard for "relation-like" stuff.

Can you give an example of a query that cannot be expressed well in relational algebra, but can be in SQL because it deviates from that?


> Roads lead back to SQL because it became a de facto industry standard for "relation-like" stuff.

But what was in question is why SQL is the standard. Did it take that position because of its deviation? If so, that would suggest the theory doesn't just work. Without actually profiling, I suspect that the deviation allows some real-world optimizations to take place, enabling SQL databases to be faster than something with strict adherence to the theory. That would be a good reason why you might have to choose SQL over a strict alternative.

> Can you give an example of a query that cannot be expressed well in relational algebra

Seems not. CloudFlare blocked the submission, complaining that I was submitting a SQL query, which it thinks is a security concern for some reason...

In lieu of that, just think about what a relation is and how SQL is not relational. Even some of the simplest select queries you can imagine can demonstrate your request.


It took that position because it was what the first viable RDBMS used, pretty much. Similar to how JavaScript became the standard PL for browsers.

The simplest SQL queries map perfectly to relational algebra, so I'm still unclear as to what you had in mind. The two major deviations that SQL has over strict relational algebra are non-uniqueness of rows in a table, and NULL. The first one rarely comes up in practice, and any bag of non-unique rows can be trivially mapped to a bag of unique tuples simply by adding synthetic IDs to them. And SQL NULL semantics is widely considered to be a mess even by many users of SQL itself. With respect to performance, NULLs can be implemented very cheaply while optimizing their relational equivalent (1:0-or-1 relation) requires a little bit more effort on the DB side, but it's still such a simple pattern that I don't see a problem here.


> It took that position because it was what the first viable RDBMS used

Then wouldn't we be using LINUS today rather than SQL? "Viable" is quite hand wavy, so maybe you don't consider MRDS to have been viable enough for some reason. But even once relational databases were moving into the mainstream, there was no clear winner between SQL and QUEL for quite a long time. Even what is arguably the most beloved DBMS of all time, Postgres, picked the QUEL horse originally.

But SQL was generally considered easier to understand for the layman, perhaps in large part because it was less strict with respect to the theory. This may be another reason why it won.

> Similar to how JavaScript became the standard PL for browsers.

I don't know how similar that is. I'm not sure there was ever another realistic alternative you could have ever chosen. The only real attempt to change that, VBScript, was likely to not work half the time due to not having the right dependencies on the host system, making it impractical for real-world use.

Maybe not anymore, but for a time there were practical alternatives to SQL.

> The first one rarely comes up in practice

The first one is the most common source of SQL bugs I see out in the wild. Complex joins can become quite unintuitive because of it. Nothing you can't learn around, and of course work around, but something you have to always be mindful of. As such, I'm not sure I agree that it rarely comes up in practice.

Not to mention I see a lot of people making use of that fact. It is a useful quality in practical applications. It also comes up quite a bit in practice because, frankly, often you don't want rows to be unique.


In practical applications, you pretty much always have synthetic row IDs in cases where there aren't any natural ones.

I think it would be more helpful if you could give a specific example of a simple SQL query that does not map nicely to a relational expression, since it's kind of hard to discuss the specifics in these vague terms.


I don't understand what you are looking for. There is no specific SQL query I know of that could not be expressed in relational calculus in some manner. But that also has nothing to do with the discussion.

The discussion is about why SQL became the standard. Being first is not it. It wasn't first. It was second, but QUEL came along hot on its heels before SQL established itself. It was approximately another decade after that before SQL solidified dominance.

- Was there a specific technical reason to choose SQL over QUEL/LINUS?

- Was there a specific human reason to choose SQL over QUEL/LINUS?

- Was it just random chance and if we were to do it all over again we are just as likely to see QUEL/LINUS become the standard instead?


I mean FFS I can get a process to write more rows/sec to AuroraPG than Dynamo with needed semantics, with less code and lower IOP cost


Firebase is a whole platform with auth, file storage, functions, etc besides just its DB feature, but maybe this wasn't always the case. Anyway, yes, I don't look past Postgres unless I have a very specific reason.


Maybe I'm like a Luddite or something, but I feel like I keep hearing about Firebase but still have no idea what it really is or why/how I would use it in a project. I'm just sitting here on my own building projects with mostly Postgresql DBs, once in a while MySQL, and not suffering massive security breaches. Thanks I suppose for giving me a data point that I'm most likely not missing anything.


It's a hands-off database and auth service, initially intended to be directly accessed by thick clients, with little to no backend logic (although they have since added FaaS).

When mobile apps started out, most had little to no online features.

As the mobile apps market grew, more and more of these apps started requiring account persistence, sharing content with other users, real-time online interactions, etc.

That's when Backend as a Service became a thing (eg Parse), targeting developers with little to no server-side experience. And that's when Firebase popped up.


Ahhh Backend As A Service. I guess that makes sense. Not something I could see myself ever using, but I suppose I can see how somebody might use it if they don't know how to write and run their own backends or don't have authority to spin one up.

Guess I'm a little lucky in that I can spin up personal backend services just for kicks, and even though DayJob is pretty corporate and locked down, I can still spin up a new backend on my own with not much oversight as long as it doesn't touch certain sensitive things.

Thanks for a brief and clear description - it's surprising how few people seem able to write one, and how many official corporate sites bury what their service actually does behind 10 pages of marketing fluff and stock photos.


When I was evaluating Firebase a few years back, the thing that most annoyed me was that their frontend library wasn't open source. Google just shipped an obfuscated and minified JS library. The lack of source mixed with their terrible docs made it a non-starter for me.

I remember having some issue, and thought: well, it's JS, let me just check the source like I normally would! Only to find out that you couldn't browse the full client source code anywhere. At that point my only option was to reverse engineer the minified source which just seemed silly and like a waste of time.

Firebase's moat has nothing to do with their frontend library, which anyone could reverse engineer with a little bit of time. And yet they still kept it closed source. I don't know if anything has changed since then, but that was the primary reason why I lost interest in the service.


Supabase is the iPhone to Firebase's Palm V -- highly recommended, if you're a fellow millennial like me who grew up on mobile, and things like "much less code to just write a simple API backend for your thing" sound like 6 months and paying another engineer.

EDIT:

loud buzzer

Careful, Icarus: "permissions can be set up to allow global read-writes" is a "vuln" of every system.

p.s. Any comment on why her blog has you guys "remembering Chattr" then getting a seedy Firebase pwner GUI, and yours has you diligently looking through .ai TLDs?


loud buzzer

Sorry, but Supabase has a similar issue.

Another blog post going over that has been or will be made by Eva (referenced on the site).


Would be very interested in reading about SB. Has it already been posted?


I think Supabase is much better than Firebase, but I find its security model worse; Firebase was very clearly designed with this in mind, while Supabase is just a Postgres DB with RLS as an afterthought.

One particular thing that annoys me with SB is that, by default, or when you create a table with SQL, tables are publicly accessible, which is very bad! (Firebase defaults to no access in production mode.)


> Firebase was very clearly designed with this in mind

Yes and no ;)

The original release of the Realtime Database didn't have security rules (though they were thought of at the time), and they were added in late 2013/early 2014 (IIRC). At that point, in the name of "easier getting started experience (don't force users to learn a custom DSL)", the default rules were `read: true, write: true`. As you might expect, it resulted in a high potential for this type of thing, and sophisticated customers cared _a lot_ about this.

This changed at some point post-acquisition (probably 2016?) when the tradeoff between developer experience and customer security switched over to `false/false` (or picking something slightly more secure than `true/true`).

Firebase Security Rules were upgraded and added to Firebase (Cloud) Storage and Firestore, with both integrations being first class integrations, as _the whole point_ of those products was secure client-side access directly to the database from day 1.

The tricky part of any system in this space was designing something that's simple enough to learn, highly performant, and also sufficiently flexible so as to answer the question "allow authentication based on $PHASE_OF_THE_MOON == WAXING_GIBBOUS" or some other sufficiently arbitrary enterprise parameter. Most BaaS products at the time optimized for the former, then the latter, and mostly not the flexibility; however, over time, it turns out that sufficiently large customers really only care about the last one! Looks like Firebase solved this recently with auth "blocking functions" (https://firebase.google.com/docs/auth/extend-with-blocking-f...) which is sort of similar to Lambda Authorizers (https://docs.aws.amazon.com/apigateway/latest/developerguide...), which I believe is a pretty good way of solving this problem.

Disclosure: Firebase PM from long ago


I don't believe tables are readable by default, even if you haven't defined any RLS policies for that table. I'm building something on SB right now and have been burned more than once because I thought that the absence of a policy meant open access for everyone.


I just checked, and newly created tables without RLS are accessible to anyone: After running `CREATE TABLE x` in my SQL client (which succeeds with no warning), if I go back to the table UI on Supabase it says "WARNING: You are allowing anonymous access to your table". (It's good that there's a warning in the official interface, at least, but what if I use my own SQL client? What if my ORM is creating tables?)

Your confusion probably stems from how you can have RLS disabled, or RLS enabled with no policies. If you have RLS enabled with no policies, the access is restricted. But if RLS is disabled (or never enabled!), then your table is blasted to the entire internet.

This confusion kind of proves my point; if DB access from untrusted clients were baked into SQL since birth, RLS would probably be enabled by default.
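
To spell out the extra step that a bare `CREATE TABLE` doesn't do (a sketch; `auth.uid()` is Supabase's built-in helper, table and column names made up):

    CREATE TABLE x (id serial PRIMARY KEY, owner uuid NOT NULL);
    -- without this line, the table is reachable through the public API roles:
    ALTER TABLE x ENABLE ROW LEVEL SECURITY;
    -- RLS with no policies denies everything, so grant access back narrowly:
    CREATE POLICY owner_only ON x USING (owner = auth.uid());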


The "when I create a table via SQL statements at shell it does what I say" isn't a vulnerability, I don't think.

The comment chain went long enough that I got confused and thought I was missing something, I started a brand new account, brand new project, brand new table, RLS is enabled by default, has a big recommended next to it highlighted, it is checked, the entire section is highlighted, and has documentation right below it. Source: https://imgur.com/a/X9oJ2i9

It's enabled by default, quite forcefully so

but I'm not a Postgres admin, maybe there's a stronger way you know of to enforce it, so you can prevent the footgun of CREATE TABLE?


I mean, I don't disagree, but what I'm saying is that SQL/Postgres (hence also Supabase) was not designed for databases accessed from untrusted clients, instead, it's an afterthought and it shows.

Whether it's a "vulnerability" or by design is another question, but it's definitely a footgun (particularly for new Supabase users that use an ORM like Prisma, which has its own UI and creates tables by itself).

The solution might just be to not let untrusted clients access your DB.


I don’t understand the RLS is disabled warning thing. I also have that warning on a project where I migrated to Supabase from a sql dump/restore from another PG instance.

I’m using supabase as “just Postgres” at the moment and the only access to the data comes from a server I control.

Could you explain how my data is being “blasted to the internet”?

Genuinely concerned if I’m grossly overlooking something.


If you don't use the client library (and never expose the anon key), you're most likely fine. If you do (even if just for Supabase Auth or so), your data is exposed and you need to enable RLS on all affected tables ASAP, or an attacker can access the entire database, in a similar fashion to what OP did with Firebase.


Gotcha, yeah I’m not using the client lib at all. Good to know.


In what way do you perceive it to be an afterthought?

It's front-and-center constantly, and has _all_ access disabled by default on tables every time I use it.


It only has access disabled if you enable RLS on that table. If you do `CREATE TABLE`, or don't check the checkbox in the UI (TBF it's big and green and has a warning that's hard to miss), then access is public.

I guess my main concern is that it's hard to set up RLS correctly using SQL. Because it's two separate statements, if your `CREATE TABLE` succeeds but the `CREATE POLICY` does not, you're also exposed. And it is more annoying than it should be to test the rules (Firebase has a dedicated tool for that).

I now just use Supabase to host a normal Postgres that only my backend connects to. That works well.
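
One mitigation for the two-statement problem (a sketch relying on Postgres's transactional DDL; names made up):

    BEGIN;
    CREATE TABLE x (id serial PRIMARY KEY, owner uuid NOT NULL);
    ALTER TABLE x ENABLE ROW LEVEL SECURITY;
    CREATE POLICY owner_only ON x USING (owner = auth.uid());
    -- if any statement fails, the whole thing rolls back and nothing is exposed
    COMMIT;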


I built a supabase app the past two days, and I agree.

I did find it a footgun that creating a table through SQL was not private by default. (Why doesn't Supabase apply RLS by default to tables created through SQL?)

Serverless also turned out to be more trouble than it was worth. In particular:

* Doing DB business logic in JS is gross.

* It's tricky to secure a table to be semi-public. e.g. you have a bookmark site and you don't want users to browse all URLs, just the ones they have bookmarked. The best solution appears to be disabling foreign-keys until transactions are done and then having a complicated policy.

* It's a pain to set up a CLI client that interacts with the DB. I think you have to copy-paste the access AND refresh tokens to it. I couldn't figure out a way to create my own API tokens.

A backend is nice, because it is private by default.


I used it and can't actually recommend it. RLS policies sometimes slow down even the simplest queries 1000x, and Postgres's current EXPLAIN ANALYZE isn't much help. Testing an app on it is still a pain. The default migration engine is one-way. Built-in database backups are close to useless. I mean, I managed to solve a lot of those issues for myself, but it still felt like I was reinventing bicycles instead of doing actual work, and I still had a subpar experience.


> "permissions can be setup to allow global read-writes" is a "vuln" of every system

Question is how much effort that is. It's scarily easy on Firebase, idk about Supabase.


> sounds like 6 months and paying another engineer.

If you take this approach, it's "pay now or pay later".

-- Fellow millenial


I'll happily take 6 months' pay to knock up a quick Node API :-). Just need to find a beach first.

What I found is you are right: FB is easier for the Millennial, Gen Z, Boomer or whatever, IF everything you need can be done with rules and schema.

As soon as you need to write functions (because rules are not sophisticated enough or too slow/expensive, or you want to know why the thing got denied) then you are writing backend code.

It is actually easier to write the same code in a NextJS template - there is less to learn, fewer docs to read. And then chuck it on Vercel, which will deploy and devops it for you. So you have all the devops done for you, like Firebase would, and you have spent less time. Now, if you are talking to Postgres instead of Firebase from the backend, it is actually easier IMO. A line to connect to pg. A line to issue a query.

Guess this is just my opinion, but it is less code to do so, less environment-variable farting around, no downloading a weird .json with all the credentials. If I were inclined I would write a blog post showing how many fewer lines of code are needed, how much less understanding is needed, and that with the managed infra/DB offered by Vercel etc. you are still serverless, etc.
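
Something like this, as a sketch with the node-postgres (`pg`) package; the connection-string env var and the table queried are assumptions:

    import pg from 'pg';

    // one line to connect (a lazy pool), one line to issue a query
    const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
    const { rows } = await pool.query('SELECT id, name FROM users WHERE id = $1', [42]);
    console.log(rows);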


At that point, you might as well just use PostgREST.


Article gets to the point very quickly, nice.


Much appreciated, I am always open for further feedback too! (If there are ways I can improve my writing)


My feedback: keep the same style :)


Article was good, and your instincts proved correct -- but if you want some truthful feedback, your headline is clickbait. You pwned a single vendor that happens to work with some fast food restaurants, you did not find a vulnerability within the restaurant companies themselves. "I pwned an applicant management system" is a lot less compelling than the headline you used.


Who's to say they're the first to discover this? They're just the first to discover it and do something to fix it.

I thought there was a US law now where breaches like this have to be reported?


> I thought there was a US law now where breaches like this have to be reported?

Yes.

> Will they report it?

Probably not (unless forced imo).


I seem to recall a case of hackers anonymously reporting a data breach when the company they hacked refused to pay up and didn't report it as required by law.


Yes, ALPHV/Blackcat blackmailed MeridianLink by hacking them and then filing an SEC whistleblower complaint [1]. As always, Matt Levine has a wonderful article on it: https://archive.ph/Yffbh

[1] https://www.burr.com/cyber-security-law-blog/ALPHV-extort-Me...


You're probably thinking of recent SEC regulations requiring disclosure for public companies - https://www.sec.gov/news/statement/gerding-cybersecurity-dis...

Chattr is a private company - https://www.crunchbase.com/organization/chatrr


The clients are public companies, and the contracts they've signed with Chattr will almost certainly include a clause requiring Chattr to disclose everything to them, so that they in turn can make their own disclosures to the markets.


In the EU this would hurt so bad they probably would've needed to close shop.


That's complete FUD. GDPR fines are proportional to the size of the business and the scope of the violation. There are companies that have had data breaches, failed to report them, and still only been fined ~300 EUR. There are others who have been fined nothing at all, subject to becoming compliant.


As for size: the companies are large. The data processor - not so much.


No contact or thanks for potentially saving them from a lawsuit.


And folks, this is why you sell your exploits to the highest bidder.

Being "good" and giving companies free work is a HORRIBLE idea. They're never gonna pay, or even thank you. If they're not willing to treat security researchers properly, I see no reason to return the favor.

Remember, security groups: if your company won't pay, there are others that will.


Sadly this is the right direction. With time, companies will learn, but we should all be afraid of what world they will push for to solve this (it will be less "put more resources into proper opsec" and more "browser attestation").


Did you not see the part where applicants' info was exposed? Making a few bucks by selling their data to <whoever> is 10000x worse than the Chattr dev not securing the files.


Selling exploits (the words explaining how to) is a First Amendment-protected act.

Actually downloading the data from a hack and selling it is expressly illegal.

Now if the person/group you're selling to expresses intent to act illegally as a result, you have a duty not to sell. So, don't ask, and don't tell!

The real solution: companies all should allow for bug bounties and good-faith reporting and proper compensation for reported issues. But as long as they don't another group WILL pay.


Could have submitted it to https://haveibeenpwned.com/

Chances are some blackhat already discovered this data and sold it.


Stepping aside for a moment and thinking about the scope of this, I think it's a good example of why technological diversity is something to long for. If Chattr can be pwned this easily, they likely have many other, more serious issues, which in turn will affect half of America's fast food chains.


I've heard it said that's why BIND and Unbound exist alongside each other.


This is my problem with the whole architecture of FE -> DB. Without a middle server layer, things like token storage, authentication, and other things become really easy to screw up.


Firebase has a free auth API built in; it's weird that they didn't just use it. Idk if whoever built this would have built a more secure solution with a server layer, or would just have had a public Mongo instance instead.


Mhhm. It's also a reason why we're making sure our developers have an easy time integrating with the platform's authn and authz systems. For example, if you need an admin interface, it should be just a library include and some bespoke framework configs to hook into the central authz framework, rather than everyone inventing something on their own.

It's... way too successful internally, lol, because we have a lot of permissions and privileges to manage now. And now we have to figure out good ways to assign these permissions to people more efficiently.

But that's a better problem than a GDPR relevant data breach, to be honest.


It seems crazy that no thanks or recognition has been given.

Is this because doing so might be seen as an admission of liability, and could be used in any legal cases that are brought?


To give the benefit of the doubt, it appears he only contacted them less than 48 hours ago. Their first priority should correctly be to fix the problem. They could be discussing a bug bounty right now and just haven't finalized the email yet


American readers may not have noticed that the dates are in European DD/MM format, so they thought disclosure was Sept 1 rather than Jan 9.


I 100% saw it as MM/DD and was wondering why it took them three months to write up the vulnerability and a month to patch it.

Thanks for the clarification


“Thanks for coming to us with this, we’re looking at it right away” wouldn’t take a lot of time or commit them to anything


> If you grab the list of admin users from /orgs/0/users, you can splice a new entry into it giving you full access to their Administrator dashboard.

I'm not clear on this. Splice a new entry into what? The list of admin users? And then do what with it?


Once he had access to Firebase (the database) he was able to add an entry to the list of admin users. With that done he could log in as an admin user on the website and access the administrator dashboard.
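To make that concrete (purely a hypothetical sketch: the path comes from the write-up, but the record shape and the Realtime Database flavor are my assumptions), with wide-open rules the stock client SDK is all it takes:

    import { initializeApp } from "firebase/app";
    import { getDatabase, ref, set } from "firebase/database";

    // The web config is public by design; it was sitting in the site's JS.
    const app = initializeApp({ /* scraped Firebase web config */ });
    const db = getDatabase(app);

    // With no security rules, nothing stops this write to the admin list.
    await set(ref(db, "orgs/0/users/some-new-uid"), {
      email: "attacker@example.com", // illustrative values
      role: "admin",
    });

No exploit code, no bypass, just the official SDK doing what the missing rules allow.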


I read this as worse - splice being a client side JavaScript function to add items to arrays. My concern here is whether the “is admin user” perms checks were done solely on the client side and not enforced on the API endpoint!


He's using the word as meaning "insert", not the JS function. He is saying he inserted a new row into a database to get admin access to a dashboard.

"splice" means to join two things as if by weaving them together. If used as "splice into" or "splice in" there is a sense of breaking something apart, inserting something into the gap, and joining it back together.

This all makes a bit more sense if you look up the etymology which was about ropes (despite splicing being about uniting, it's closely related to the word 'split').


It doesn't look like an IDOR attack. The poster says they had read/write access to the Firebase DB with the Firebase user they registered. It appears the database was entirely open to all Firebase users. I agree that the usage of "splice" in a JavaScript context does make it sound like an IDOR exploit, though.


Ethical hacking is a good thing.

Nice to see someone doing good.


They reworded things since yesterday:

Before the rewording, one collaborator's version had them in a chat sneering about Chattr, checking their JavaScript, then getting a GUI pwn tool for Firebase.

i.e. a targeted attack with malice, followed up with a blog post wildly exaggerating what happened, with a disclosure policy of "we emailed them once, they fixed it and didn't email us back, so we'll just publish"

Only spelling this out because it's important to point out the significant gaps between white hat culture and these actions, not only for the authors, but for people who are inspired and want to practice it


> Before, one collaborator had them in a chat sneering about chattr

"Wow this thing looks crappy"

> checking their Javascript

"I wonder if it is crappy"

> then getting a GUI pwn tool for firebase.

"Huh, it seems crappy. Let's just check to be sure"

> with a disclosure policy of 'we emailed them once and they fixed ...'

"Well, this thing is really crappy. we don't want to harm people. Let's tell them about how crappy it is to avoid harm"

> and didn't email us back so we'll just publish'

"They fixed the thing, nobody will be harmed. We still think it's crappy so let's talk about it"

Why wouldn't they go public at this point? They've gotten nothing else out of it, and since the issue has been fixed there is zero harm to customers. Do you propose they go like

"Hey company, we found this really embarrassing thing you did. I see you fixed it now, so can we talk about it"

silence

"Oh well the company didn't say anything so we won't talk about it. So sad"

In what world do we not hold companies accountable? In what world do we blame the people who find these issues for free?


Note I'm not claiming disclosure is bad, but rather, this is a copy of a copy of a copy of a copy of a copy of a copy of how professionals handle these situations, to the point there's nothing left except the "1) pick a target 2) email them 3) write a blog post when fixed" parts.


What other steps have there ever been? Getting a CVE?


It's interesting because it's quite a postmodern situation.

They did hit the most significant signifiers to a layman: hack, write a blog post, wait until the fix before talking about it.

I'd have a better explanation for picking a target, avoid having competing versions of the story out there, avoid having one version where you targeted a company while another claims it was a general sweep, get the collaborators together and at least credit them in all versions if you can't get people to agree to write one post, avoid exaggerating, avoid claiming you hacked other companies, and add a contact or two before releasing the vulnerability.

Comparing a Project Zero blog post to this is a good idea, I went off of memory.

As it stands it sure sounds like some people were hanging out in Discord, scrolled through some JS in dev tools and/or ran some automated script against a site, then got puzzled and downloaded a pwner GUI to do the hard part, saw a fix, then rushed to write blog posts and stepped on each other's toes, one wildly exaggerating who was hacked while covering up details, another being honest but having ~0 idea of what they were supposed to say.


Well done, well written, great tact. Luckily we have HN to fill the gap on the missing kudos. What an unprofessional firm (chattr)


I worked with Firebase for a while, lured in because of how easy it was to do certain things. It makes certain kinds of operations essentially zero effort, such as getting realtime updates on the frontend when something changes. But it also demands a huge amount of effort for things that are trivial with other frameworks, security above all. I found that what I gained in convenience, I lost by needing to do so much work continuously battling with security rules. I left it behind and never looked back, and it made me much more cheerful about the work that I needed to do to establish and maintain more conventional backend data systems.


I made an app in Firebase once and did it so that people could collaborate but they used per-session IDs that were linked to their real IDs behind the scenes, so people couldn't spot trends of activity over time.

I found it a little tricky to start with while getting familiar with the rules, but it worked really well after I got the hang of it.


From Eva’s post:

> we didnt know much about firebase at the time so we simply tried to find a tool to see if it was vulnerable to something obvious and we found firepwn, which seemed nice for a GUI tool, so we simply entered the details of chattr's firebase

Genuinely curious (I’ve no infosec experience), wouldn’t there be a risk that a tool like this could phone home and log everything you find while doing research?


Sure but it's FOSS, so audits are pretty easy.

Plus, as part of the pentesting I sometimes watch the network stack in Firefox, so I would be able to tell if it was trying to exfiltrate data


Yes, but that might also be caught by infosec users of said tool who have things similar to “littlesnitch” alerting them to the outbound API call attempt.


There used to be Windows GUIs for forcing new connections to ask, but I haven't seen anything like that in a while. I can't recall the name of the one I used to use, but it scored perfectly on ShieldsUP - oh, ZoneAlarm.

Little Snitch IIRC is macOS-only, but it sounds lovely for this sort of thing.


There's a very good relatively new open-source GUI firewall app like this called Portmaster:

https://safing.io/

It's available for Windows and Linux


Indeed!

If anyone wants to get to know us, our next Live Q&A is tomorrow at 15:00 CET: https://m.youtube.com/watch?v=S6P8ajLECXg


You can set this with Windows' default firewall. Setting to strict mode with no whitelist causes a UAC alert every time a process attempts communication.


The generic term is “outbound firewall”.


You are looking for simplewall


That would be referred to as a honeypot. Sometimes administrators will set up their own honeypots to see the type of threats they are facing.


No, a honeypot is intentionally insecure infrastructure set up to see who attacks it and how. A backdoored pentesting tool is a backdoored pentesting tool.


I'm not saying the pentesting tool is a honeypot, but thanks for asking.


They need to pay this guy 100k. And fire someone.


They can’t, because most of the people doing the firing would also have to be held accountable and fired


How much would this leak go for in the darknet?


Deciding to sell this on the darknet is a life-changing decision, white to black overnight, and I imagine not really something most would contemplate. Payment in BTC, probably from an already-compromised address, so loads of factors. Probably an easy + quick 2 BTC though.


I feel like for a pro-level security person, 2 BTC is not worth spending roughly the next five years stressing that your whole career could be taken down at any moment; security people absolutely cannot get jobs if they have a criminal record.


This is an easy and obvious exploit, so an attacker would need to extract the data from all sources ASAP. High risk of getting caught and ending up in jail, to be honest, for a measly 2 BTC. Not worth it for anyone in the US or even Europe.


> Not worth it for anyone in the US or even Europe.

Lots of crimes are not "worth it". And yet criminals do them anyway. Because criminals (like most humans) are not perfectly rational.

There are routinely reports of people trying to rob a gas station with a loaded gun - a $200 haul if everything goes perfectly. It doesn't, and now they have 10 years in jail...


One crime is skilled, one crime is not. The skilled person has more options to earn 2BTC than the unskilled person does to earn $200.

If you’re a felon and unskilled you’re as desperate as it gets in America.


Yeah Sam Bankman Fried probably had the skills and connections and trust to coast through life. But instead he chose to commit the most egregious of financial fraud.

I would strongly argue that almost all crime is irrational - consider game-theory strategies like tit-for-tat and such.


2 BTC in most of the world is a life-changing amount! And there are multiple measures the wannabe criminal could take to minimise exposure risks.

No wonder PII keeps getting leaked and sold all the time...


Yeah it's like the difference between buying a handgun to go to the range and buying one to rob a liquor store


You mean buying a handgun to give as a gift to your dad, vs buying a handgun for someone who wouldn't pass a background check and might use it for bad things (straw purchase).


If they're already using Firebase, can anyone think of a reason why they are storing passwords? Firebase Authentication is incredibly easy and quick to set up and use (less than a day for someone new to it), which means you have no need to handle passwords at all.
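For reference, the whole flow is roughly this with the v9 modular SDK (config omitted; the addresses and password are placeholders):

    import { initializeApp } from "firebase/app";
    import {
      getAuth,
      createUserWithEmailAndPassword,
      signInWithEmailAndPassword,
    } from "firebase/auth";

    const auth = getAuth(initializeApp({ /* your project config */ }));

    // Firebase hashes, stores, and verifies the password on Google's side;
    // your own database never needs to see it at all.
    await createUserWithEmailAndPassword(auth, "user@example.com", "correct-horse");
    await signInWithEmailAndPassword(auth, "user@example.com", "correct-horse");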


Not a direct answer, but I can tell you I see crazily idiotic mistakes in apps all the time. I think people are hiring the lowest of the low. FWIW, I don't think AI will replace coders anytime soon, but I think it'll replace these coders.

For example, my school's laundry app. It takes 8s to load because it continually refreshes the screen while it is trying to make a connection to their portal. Even now I just checked: it logged me out, took 5 seconds to let me touch the login, I put in my email, grabbed my password from my password manager, it cleared my email, I retyped it, and now the login is grayed out. Looks like I'm currently locked out. Looking at the laundry rooms, it takes 45s to load (literally, I timed it), and then the rooms aren't in order. It'll be like A4, A6, A7, A3, A11, A9, and so on. I'm not sure it's even manually filled in because they seem to change. Plus I have to unplug and replug the machines constantly because they disconnect from the server. The dryer is a pain. This happens enough that the cords are worn down and it is a fire issue.

Yesterday I ordered from Jersey Mikes. They have a field where you can specify instructions. They do keyword filtering so you can't place a word like "cheese" in it, because they want you to click the box, but the box doesn't let you specify what kind of cheese. You also can't use words like "extra." Employees have always understood my shorthand or leet speak.

My housing processes applications via LIFO instead of FIFO. So all the students who renew their applications a month after the deadline get approved for their housing before anyone who does it within a reasonable time.

Electric bikes are known to catch fire when charging. Tesla doesn't cover water damage under warranty. Google Maps routinely tells me to be in the wrong lane or miscalculates the number of lanes that exist. Google Drive's solution to scrolling through music too fast is to lock you out, which just results in the user picking up the phone. Mine also likes to frequently disconnect itself, and there's no low-data mode, so sometimes it just overloads my car's infotainment computer. Classic halt-and-catch-fire situation.

I can go on about this stuff and it astounds me. Something is fundamentally broken when we can have computers that can talk to us in natural language but we are unable to design a system where employees understand the concept of a sorted list. Not to mention that I won't be surprised when that building catches fire.

Edit: I got logged in. My username was pasting into my password field because their password field is labeled as a username field... but it is also hidden... They also double-charged me in the past, said they didn't, and their solution was for me to issue a clawback with my bank. These people just don't care.

I really believe a lot of people are building things that they never test and never use. Even at big companies.


offshore workers


this is racist


> No contact or thanks has been received back so far

WTF.


It's pretty common in my experience, especially from larger companies

Recently I reported an issue to a company valued at >$10bil. The issues were quietly fixed; not a single response back, not even a "thank you".


Some companies intentionally Gray Rock security reports, because they neither want to attract attention by giving bounties, nor do they want attention for not giving bounties. If they just say nothing, the researcher usually just leaves them alone.

One could speculate that these companies want to pretend that infosec isn't a problem for them, and if they ignore the "problem", it will go away.


I'm curious if the best monetary approach for a white hat hacker would be to show them the problem, give them time to fix it, and then give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed). The idea being that showing what you have found on other sites has marketing value for a white hat hacker, but had the company hired you to discover the flaw, you wouldn't be publishing it.


The best approach is not to do it. Demanding money from someone that didn't hire you is never ethical - just childish. Would you like it if I showed up at your house, mowed your lawn, and then started banging on your door demanding $100 for mowing your lawn?

Also, what marketing value - if you're just pwning random web sites rather than getting hired to test a site's security you aren't in any market.


> what marketing value

Most people who do this type of thing offer consulting services to help make sure your site / app is secure.

> The best approach is not to do it.

Don't do what? Don't tell them there is a security problem?

> Demanding money from someone that didn't hire you is never ethical - just childish.

Let's say my front door is open. Someone takes a picture of it and sends it to me so I can close and lock it. Once it is closed, they explain that they offer a service where they help homeowners make sure their doors stay closed. They plan to use the picture they took to illustrate how they identify open doors, to show why people might want to be their client. However, if I want to pay them for the service they provided, then I get to decide if and how any information about my door being open is used.


The grass in the lawn may not be that dangerous to other people. However, if your house is emitting radiation, and a hero breaks in to clean it up for the sake of the people around it (because the town need not wait for you to hire someone), the hero deserves a reward and the owner of the house deserves punishment.


Unless the "hero" is law enforcement or some other government agent with a warrant, he will likely have broken a bunch of laws by breaking into a person's house uninvited, and is not very likely to be rewarded.

That's modern society for ya.


> give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed)

I'm not making any moral judgments, but purely from a legal perspective this sounds dangerously like blackmail. If anyone decides to take this path, be sure you understand the risks involved.


> and then give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed).

So... you're suggesting blackmail?


They fixed the bug which is the important thing. Corresponding back to the hacker probably involves the legal dept and it's probably safer to not respond at all.


It has been less than a day, relax


It was reported to them in September of last year


Read the dates again


Sad that in 2024 people continue to set their Firebase security rules wide open. Back in maybe 2015-2019 that was excusable because it was the default, but now it's just lazy.

Don’t expose your database / api / blob storage bucket / etc to the public! It’s not that hard to do it right, or at least “right enough” that you can’t get owned by someone scanning a whole TLD.
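For anyone unsure what "wide open" means here, it's the difference between these two Realtime Database rule sets (the second is a sketch of a per-user layout, not a drop-in config):

    // Wide open - anyone on the internet can read and write everything:
    { "rules": { ".read": true, ".write": true } }

    // Deny by default, then grant narrowly:
    {
      "rules": {
        ".read": false,
        ".write": false,
        "users": {
          "$uid": {
            ".read": "auth != null && auth.uid === $uid",
            ".write": "auth != null && auth.uid === $uid"
          }
        }
      }
    }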


Partially, this seems like an issue with Firebase, where the defaults are possibly set to something that is not sane from most professionals' perspective.

Having dabbled with Firebase, I can also say that the Google Cloud tool environment was really confusing the last time I tried using it. Just this enormous maze of switches, dials, and widgets, like a lot of the popular IDEs.

If the defaults are not set to something sane, and I, a reasonably competent tech user with some background in security (fed work), can barely find the settings, then most normal humans with a limited grasp of those issues probably won't even know to look.


> Sad that in 2024 people continue to set their Firebase security rules to be wide open. [...] Don’t expose your database / api / blob storage bucket / etc to the public!

What is additionally sad, is that your comment - in 2024 - is being downvoted.


This is extremely annoying. Instead of fucking with other people’s companies, why not build your own?

You pwned them? What are you, twelve? All you did was commit a felony and post it online.


Pretty sure that poking around for holes/exploits is part of the definition of what a hacker is. They notified the relevant organization as well. Not sure why you take that stance.


And then posted it online? If his intentions were good he wouldn’t post their name.


> If his intentions were good he wouldn’t post their name.

Why? They shouldn't be allowed to sweep it under the rug


After the fix has been deployed, I don't see why it should not be. It might be useful to someone else.

If security is mostly an afterthought, maybe naming and shaming might help them take it seriously.

I do not understand your stance at all. Why are you defending corps that were negligent?


> What are you twelve?

Read this please https://news.ycombinator.com/newsguidelines.html


I'll bite -- he discovered a vulnerability in a large company and responsibly disclosed it to them. How is that a felony? Why would you post a felony online?


If I understand correctly, the article was published only after the vulnerability was patched. That sounds OK to me.


How did the author "fuck with" the company beyond discovering a vulnerability and helping them fix it?


Dude should have gotten some free chicken for his efforts.


So the hacker worked for free?


Yes, the way the incentives are aligned ensures nobody ever goes to jail and the little guy pays all the bills.


"move fast and break things" - Mark Elliot Zuckerberg


We need white hat awards, and this person should get one.


Lol. Lmao even. Great writeup


At this point I would not apply for a job if the employer used a third party online service. Seek out employers who do their own hiring and talk to candidates face-to-face.

If they steer you to one of these third party services, send your resume by snail mail directly to the HR director with a cover letter highlighting all the data breaches such as this one, LinkedIn, Indeed, etc. You'll stand out as someone who pays attention.


Not to be pessimistic, but consider the applicant pool MrBruh targets here. One wonders how widely people with the sort of research skills and communication habits you describe are represented in the population applying for a fry cook position at a Checkers franchise. Or even amongst the franchisees themselves...

And for that matter, how that kind of initiative would be received by your potential future manager at the drive-thru.

I feel like I sound a little patronizing, but my broader point is it’s not other people’s job to be responsible for this kind of data security, especially in a relationship so imbalanced as that between a job seeker and the potential employer who offers only one pathway to gainful employment.

As to the remedy you propose, I’m reminded of the inimitable @patio11’s Seeing Like A Bank [0], where he points out that banks (like other firms) use techniques like the paper letter you described as subtle shibboleths to distinguish likely sophisticated customers from the rank and file.

[0] https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/


Firebase is like a half-baked product that lures in people who are just starting out. It helps you build products that can quickly go to market, but once you start to scale, a lot of its products, like Firestore and Firebase Auth, have basic features missing.


I would have stopped once I confirmed the leaked keys were valid. Looking at what types of data you had access to wasn't required. Downloading plaintext passwords of other people is probably too far. Impacted users may need to be notified about a breach. If needed, create an account of your own and target only that.

If there was a pentester agreement, safe harbor, or other protection that's different. Be careful out there.


> Looking at what types of data you had access to wasn't required. Downloading plaintext passwords of other people is probably too far. Impacted users may need to be notified about a breach. If needed, create an account of your own and target only that.

I'd argue that it was absolutely necessary to gauge the severity of this misconfiguration and furthermore, that Chattr.ai must contact every affected user, not MrBruh.

Their configuration allowed anyone to create an account and access plaintext passwords. There is no telling whether and how many outside of this disclosure have previously accessed this information and may intend to use it. This was negligence of the highest order, and it shouldn't be on the one finding and reporting this issue to rectify it.


> absolutely necessary to gauge the severity of this misconfiguration

Possibly. But what's the legal basis that allows random external parties to make that determination? Report the leaked credential, and let the company assess impact.

The problem is that pivoting to accessing user passwords may cause the companies to spend money notifying customers and harm their reputation. If they want to pursue legal action, those are clear damages.

> Chattr.ai must contact every affected user, not MrBruh.

Agreed, a pentester directly contacting impacted users would increase the risk legal gets involved.

> There is no telling whether and how many outside of this disclosure have previously accessed this information

Typically the company would review logs to determine that.


> [...] may cause the companies to spend money notifying customers and harm their reputation.

I'm sorry, but I don't quite understand. Are you saying that you feel a company should not notify customers when exposing passwords in plaintext and furthermore, that this fact alone isn't harmful to their reputation? Not notifying customers, in my eyes, would destroy any semblance of reputation further.

> Typically the company would review logs to determine that.

Again, do you believe in the competence of someone storing passwords in plaintext? Logs may be incomplete; even a more competent organization that stores credentials following proper procedures and lost a db due to specific phishing rather than such a major screw-up would be expected to contact every customer and advise them to change their credentials, for very good reasons.


I'm not really commenting on the company side, but yes: plaintext passwords are bad, companies should notify customers when legally required, and I'd like companies to go further.

Legally, bypassing security controls, using credentials that are not yours, and accessing data without authorization is a crime[1]. I see no indication that this blog post was authorized. Others should not consider this blog post as a good approach.

Look instead to bug bounty programs and stay in-scope. Often that means creating your own account and avoiding other customer's data.

While it doesn't make a good blog post, I still emphasize that the author should have reported the leaked credentials and stopped.

[1] varies by jurisdiction, I'm not a lawyer, etc.


> [...] reported the leaked credentials and stopped.

But, and this can be a significant differentiator from a legal standpoint in multiple jurisdictions[0], they did not use leaked credentials, nor did they circumvent any barriers. They used a publicly accessible endpoint to create their own, completely new user that just had access rights from the get-go.

[0] Sticking solely with US examples, most notably United States v. Auernheimer which was in part overturned on jurisdictional issues, in part due to the following: "We also note that in order to be guilty of accessing “without authorization, or in excess of authorization” under New Jersey law, the Government needed to prove that Auernheimer or Spitler circumvented a code-or password-based barrier to access. See State v. Riley, 412 N.J.Super. 162, 988 A.2d 1252, 1267 (N.J.Super.Ct.Law Div.2009). Although we need not resolve whether Auernheimer's conduct involved such a breach, no evidence was advanced at trial that the account slurper ever breached any password gate or other code-based barrier. The account slurper simply accessed the publicly facing portion of the login screen and scraped information that AT & T unintentionally published."


That is not just negligence, that is stupidity of such an order of magnitude that the people responsible should never again be allowed to work on a software project.


Every company I've worked for, and every pentest contract I've done has found plaintext passwords or credentials stored somewhere they shouldn't. It's unfortunately very common.


Customer credentials as in this example? I'll be totally frank, I'm having some trouble reconciling that with Article 34 of the GDPR and 1798.150 of the CCPA. Do none of these organizations have EU/CA customers or is the approach they take to laws the same as the one they employ for database security?


Less common with things that are directly "customer passwords", but common with other credentials and customer data. Those laws require reporting about breaches and I know I get notifications about breached data somewhat often.

Keep in mind, a good first step to improving security is to hire a pentester. So pentesters have a unique view into companies that are trying to improve. Often the starting place is quite poor. When I leave those contracts, to my knowledge, they are all on track to fix these sorts of defects.


I would argue that looking at the type of data you're dealing with is actually a very important part of assessing the impact, but looking at the data itself goes beyond that.

Knowing that they store passwords in plaintext is a security issue on top of the R/W credential.


Type of data is very important to assessing impact. But that doesn't grant anyone authority to breach customer data. The company can assess impact.



