mu53's comments

They need wins, and once the ball starts rolling, they can shift their focus. Government departments are restricted by budget. Going after 4 behemoths at the same time is not practical.

If Google gets restrictions, then it makes Apple look even more monopolistic. Like trimming the hedges.


Cookies are an antiquated technology, one of the first introduced while the web was still young in the 90s, and they have gone through a few iterations of bad ideas.

They are the only place to store opaque tokens, so you gotta use them for auth.


They are not the only place to store tokens. You can store tokens in localStorage for a JS-heavy website; in fact, plenty of websites do that. It's not as secure, but acceptable. Another alternative is to "store" the token in the URL, which was widely used in Java for some reason (the jsessionid parameter).
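
For illustration, a minimal sketch of the localStorage approach in TypeScript (the endpoints and the "authToken" key name here are made up, not from any particular site):

    // Hypothetical login flow: keep the opaque token in localStorage,
    // then attach it to later requests as a bearer header.
    async function login(user: string, pass: string): Promise<void> {
      const res = await fetch("/api/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ user, pass }),
      });
      const { token } = await res.json();
      // Caveat: anything stored here is readable by any script on the page.
      localStorage.setItem("authToken", token);
    }

    function authorizedFetch(url: string): Promise<Response> {
      return fetch(url, {
        headers: { Authorization: `Bearer ${localStorage.getItem("authToken")}` },
      });
    }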

To expand on the "not as secure" comment: localStorage is accessible to any JS that runs in the context of the page. This includes anything loaded into the page via <script src="..."> tags, like tracking or cookie-consent services.

And I feel like it's important to expand on the fact that cookies are visible to JS by default as well, unless the cookie has the `HttpOnly` attribute set. Obviously, for auth, you absolutely want the session cookie to have both the `Secure` and `HttpOnly` attributes.

See https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#bl...
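
As a sketch of what that looks like when issuing a session cookie server-side (plain Node http module; the cookie name and port here are arbitrary):

    import { createServer } from "node:http";
    import { randomBytes } from "node:crypto";

    createServer((_req, res) => {
      const sid = randomBytes(16).toString("hex"); // opaque session id
      // HttpOnly: invisible to page JS; Secure: sent over HTTPS only;
      // SameSite=Lax: basic CSRF hardening.
      res.setHeader(
        "Set-Cookie",
        `sid=${sid}; HttpOnly; Secure; SameSite=Lax; Path=/`
      );
      res.end("session issued");
    }).listen(8080);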


I think the real scheme behind these companies was to monopolize the restaurant industry the way that Uber has monopolized the taxi industry.

$7.3B makes sense if you hold 40% of the restaurant business across the US.

Build out a delivery network for existing restaurants, capture all of their order data, use that data to create ghost kitchens that sap their business, let the ghost kitchens turn into full restaurants, and continue as far as you can.


It is still unlikely that this will go anywhere

Birthrate issues in East Asia come down to short-term rewards vs. long-term rewards: income/leisure/career vs. national security/sustaining ethnic identity. This fixes nothing and helps no one.


[flagged]


I think planned is far too strong a word. I think that, instead, it is simply a byproduct of the cultural adoption of hedonism (which places happiness above other human goals) and consumerism (which places material acquisition as the primary means to achieve one's goals).

Hedonism and consumerism are pushed by a broad swath of interests, from industry to government, so there is no need for a grand conspiracy.


> I think planned is far too strong a word.

It's not. There were literal campaigns across the world to limit childbirth.

> I think that, instead, it is simply a byproduct of the cultural adoption of hedonism (which places happiness above other human goals) and consumerism (which places material acquisition as the primary means to achieve one's goals).

All culture is planned.

> Hedonism and consumerism are pushed by a broad swath of interests, from industry to government, so there is no need for a grand conspiracy.

It's not a "conspiracy". It was open government policy, all over the world. China's one-child policy was the most prominent and heavy-handed approach, but other governments around the world implemented two-child policies.

https://mothership.sg/2018/05/singapore-stop-at-two-children...

https://en.wikipedia.org/wiki/Two-child_policy

In the US, we didn't even have to implement such a policy, because of media control over the masses. The "native" US birth rate has been below replacement since the late 1960s, thanks to media conditioning.

To show you how easily people are manipulated, people now think everything they disagree with is a "conspiracy theory" because the media told them to.


While yes, I agree that governments tried to control population, I disagree that this is the outcome they were looking for, and disagree that planned government policy is largely responsible for the current situation.

To break that down: I agree that anyone would be a fool to think that governments don't have population objectives and act on them. The part I disagree with is:

1) What those objectives were. No government wanted to create a population bubble.

2) The impact of government reproductive policy relative to popular culture. I think these impacts are tiny in comparison to (other) economic policies, and general culture outside of the government.

3) The idea that policy on population size is treated as an end in itself, rather than a means to achieving other policy goals, such as economics and stability.


SELinux is for distro and package maintainers to use. Not end users.


And yet for a large number of years any RHEL/CentOS SELinux issues with third party software were answered with "disable SELinux".


A large number of years up to and including "this year, right now, like, yesterday".


Same for Windows' UAC in the Vista era, which doesn't make it bad technology or place the fault on Microsoft. The world is full of terrible development practices, the answer shouldn't be "just disable your security mechanisms".


So you agree that end users do use it and are often incapable of getting the things they want to work with it?
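
For what it's worth, the usual alternative to disabling it on RHEL-family systems is to turn the logged denials into a local policy module. A rough sketch (the module name "mylocal" is arbitrary):

    # Show recent SELinux denials from the audit log
    ausearch -m AVC -ts recent

    # Generate and install a local policy module covering those denials
    ausearch -m AVC -ts recent | audit2allow -M mylocal
    semodule -i mylocal.pp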


The hosts file is a really nice global block.


I use the one at https://someonewhocares.org/hosts/

It seems to be all I need. I like that it blocks the crap in all browsers without needing to install extensions. YouTube is still practically useless though.
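
For anyone unfamiliar with the format, each line maps a hostname to an address; pointing unwanted hostnames at an unroutable address blocks them for every app on the machine (the hostnames below are placeholders, not from that list):

    # /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
    0.0.0.0 ads.example.com
    0.0.0.0 tracker.example.net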


I often use it for controlling my own Hacker News and Reddit addiction.


You can't just hand-wave the entirety of the C/C++ ecosystem away.

For many applications and hardware targets, there is no second option.


It's amazing how far they have come, but they still lack so much.


I think the scenario where security through obscurity fails is when the end user relies on guarantees that don't exist.

For example, Intel's Management Engine: it was obscured very well and wasn't found for years. Eventually people did find it, and you can't help but wonder how long it took bad actors with deep pockets to find it. It's this obscured cubbyhole in your CPU, and if someone could exploit it, it would be really difficult to find out, because of Intel's secrecy on top of the feature.


It seems like people are really talking about different things with obscurity. Some are referring to badly designed, weak systems, where secrecy and marketing hype are used to attempt to conceal the flaws. Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces - things like OpenBSD's memory allocation randomization, or the ancient method of simply hiding physical valuables well and never mentioning them to anyone. I’ve found that when it is impossible for an external bad actor to even tell what OS and services my server is running - or in some cases to even positively confirm that it really exists - they can’t really even begin to form a plan to compromise it.


> where secrecy and marketing hype are used to attempt to conceal the flaws.

That's literally the practical basis of security through obscurity.

> Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces - things like OpenBSD's memory allocation randomization,

That's exactly the opposite of 'security through obscurity' - you're literally talking about a completely open security mitigation.

> I’ve found that when it is impossible for an external bad actor to even tell what OS and services my server is running - or in some cases to even positively confirm that it really exists - they can’t really even begin to form a plan to compromise it.

If one of your mitigations is 'make the server inaccessible via the public internet', for example, that is not security through obscurity - it's a mitigation which can be publicly disclosed and remain effective for the attack vectors it protects against. I don't think you quite understand what 'security through obscurity'[0] means. 'Security through obscurity' in this case would be you running a closed third-party firewall on this server (or some other closed software, like macOS for example) which has 100 different backdoors in it - the exact opposite of actual security.

[0] https://en.wikipedia.org/wiki/Security_through_obscurity


You're misrepresenting my examples by shifting the context, and quoting a Wikipedia page that literally gives, at the very top of the article, two of the main examples I mentioned as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"

If you're not understanding how memory allocation randomization is security through obscurity, you are not understanding what the concept entails at its core. It does share a common method with, e.g., using a closed 3rd-party firewall: in both cases, direct flaws exist that could be overcome with methods other than brute force, yet identifying and specifying them well enough to actually exploit is non-trivial.

The flaw in your firewall example is not using obscurity itself, but: (1) not also using traditional methods of hardening on top of it - obscurity should be an extra layer, not the only layer - and (2) it's probably not really very obscure, e.g. if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.


> You're mis-representing my examples by shifting the context,

Specific example of where I did this?

> literally gives, at the very top of the article, two of the main examples I mentioned as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"

I mean, I don't disagree that what you said about changing port numbers, for example, is security through obscurity. My point is that this is not any kind of defense against a capable and motivated attacker. Other examples, like the OpenBSD mitigation you mentioned, are very obviously not security through obscurity though.

> If you're not understanding how memory allocation randomization is security through obscurity, you are not understanding what the concept entails at its core.

No, you still don't understand what 'security through obscurity' means. If I use an open asymmetric key algorithm - the fact that I can't guess a private key does not make it 'security through obscurity' it's the obscuring of the actual crypto algorithm that would make it 'security through obscurity'. Completely open security mitigations like the one you mentioned have nothing to do with security through obscurity.
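
To make that concrete, a minimal sketch in TypeScript using Node's built-in crypto (the message text is arbitrary): the algorithm, the curve, and the code are all public, and the security rests entirely on the private key - a designed-in secret, not obscurity:

    import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

    // Everything here is public knowledge: ECDSA, the P-256 curve, this code.
    // The only secret is the private key itself (Kerckhoffs's principle).
    const { publicKey, privateKey } = generateKeyPairSync("ec", {
      namedCurve: "P-256",
    });

    const signer = createSign("SHA256");
    signer.update("hello");
    const signature = signer.sign(privateKey);

    const verifier = createVerify("SHA256");
    verifier.update("hello");
    console.log(verifier.verify(publicKey, signature)); // true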

> The flaw in your firewall example is not using obscurity itself, but: (1) not also using traditional methods of hardening on top of it

Sooo... you think adding more obscurity on top of a closed, insecure piece of software is going to make it secure?

> if an external person could infer what software you are using by interacting remotely,

There are soooo many ways for a capable and motivated attacker to figure out what software you're running. Trying to obscure that fact is not any kind of security mitigation whatsoever. Especially when you're dealing with completely closed software/hardware - all of your attempts at concealment are mostly moot - you have no idea what kind of signatures/signals that closed system exposes, you have no idea what backdoors exist, you have no idea what kind of vulnerable dependencies it has that expose their own signatures and have their own backdoors. Your suggestion is really laughable.

> not also using traditional methods of hardening on top of it

What 'traditional methods' do you use to 'harden' closed software/hardware? You literally have no idea what security holes and backdoors exist.

> if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.

Uhh yeah, now you're literally bringing up one of the most common arguments for why security through obscurity is bullshit. During WW1/WW2, security through obscurity was common in crypto - they relied on hiding their crypto algos instead of designing ones that would be secure even when publicly known. What happened is that enough messages, crypto machines, etc. were recovered by the other side to reverse these obscured algos and break them - since then, crypto has pretty much entirely moved away from security through obscurity.


You are operating on a false dichotomy, that the current best practices of cryptographic security, code auditing, etc. are somehow mutually exclusive with obscurity, and then arguing against obscurity by arguing for other good practices. They are absolutely complementary, and implementing a real-world secure system will layer both: one starts with a mathematically secure, heavily publicly audited system, and adds obscurity in the real-world deployment of it.

If there is an advantage to a closed-source system, it is not in situations where the source is closed to you and contains bugs, but when it is closed to the attacker. If you have the resources and ability to, for example, develop your own internally used but externally unknown, yet still heavily audited and cryptographically secure system, it is going to be better than an open-source tool.


> They are absolutely complementary, and implementing a real-world secure system will layer both: one starts with a mathematically secure, heavily publicly audited system, and adds obscurity in the real-world deployment of it.

Ok, let's start with a 'mathematically secure, heavily publicly audited system' - let's take ECDSA, for example - how will you use obscurity to improve security?

> If you have the resources and ability to, for example, develop your own internally used but externally unknown, yet still heavily audited and cryptographically secure system, it is going to be better than an open-source tool.

Literally all of the evidence we have throughout the history of the planet says you're 100% wrong.


> Literally all of the evidence we have throughout the history of the planet says you're 100% wrong

You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real-world situations - especially things like real-life, high-stakes, life-or-death situations.

I am satisfied that your perspective makes the most sense for low-stakes, broad deployments like software releases, but not for one-off, high-stakes systems.

For things like ECDSA, as with anything else, you implement obscurity on a one-off basis tailored to the specific use case - know your opponent and make them think you are using an entirely different method and protocol that they’ve already figured out and compromised. Hide the actual channel of communication so they are unable to notice it exists, and over that you simply use ECDSA properly.

Oh, and store your real private key in the geometric design of a giant mural in your living room, while your house and computers are littered with thousands of wrong private keys on ancient media that is expensive to extract. Subscribe to and own every key wallet product or device, but actually use none of them.


> You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real-world situations - especially things like real-life, high-stakes, life-or-death situations.

Nah, you're just saying a lot of stuff that's factually incorrect and just terrible advice overall. You lack an understanding of what you're talking about. And the stakes are pretty irrelevant to whether a system is secure or not.

> For things like ECDSA, as with anything else, you implement obscurity on a one-off basis tailored to the specific use case - know your opponent and make them think you are using an entirely different method and protocol that they’ve already figured out and compromised.

You're going to make ECDSA more secure by making people think you're not using ECDSA? That makes so little sense in so many ways. Ahahahahaha.


I very well may be wrong, but if so, you are not aware of how, and I will need to find someone else to explain it to me. I’ve been interested for a while in having a serious debate with someone who understands and advocates for the position you claim to have - but if you understood it, you would be able to meaningfully defend it rather than resorting to dismissive statements.


You do you, champ.


I find the simpler engines work better.

I want the end of the line completed, with focus on context from the working codebase, and I don't want an entire 5-line function completed from incomplete requirements.

It is really impressive when it implements a 5-line function correctly, but it's like hitting the lottery.

