
Yep - I remember the CCAT from 4th grade that resulted in my being placed into a different class for 5th. AFAIK, we were given this test "cold" (no prep) and I remember it being timed.

> In short, Open ID Connect is quite accurately described as an Authentication standard. But OAuth 2.0 has little to do with Authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when user and client are permitted to request or grant a certain scope and what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's ok, since it leaves the implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].

This misses the mark - scopes are abstractions for capabilities granted to the authorized bearer (client) of the issued access token. These capabilities are granted by the resource owner - say, a human principal in the case of the authorization_code grant flow - in the form of a consent prompt. The defined capabilities/scopes are deliberately ambiguous as to how they would/should align with finer-grained runtime authorization checks (RBAC, etc.), since that is entirely outside the purview of the standard and would infringe on underlying product decisions that may have been established decades prior. Moreover, scopes are overloaded in the OAuth2.0/OIDC ecosystem: some trigger certain authorization server behaviours (refresh tokens, OIDC, etc.), whereas others are concerned with the protected resource.
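To make the overloading concrete, here is a hypothetical authorization request (a sketch in Python; the client, URLs, and scope names are all illustrative, not taken from any real deployment):

```python
# A hypothetical authorization_code request. Note how one flat "scope"
# parameter mixes three different kinds of thing: an OIDC switch, an
# authorization-server behaviour, and a protected-resource capability.
from urllib.parse import urlencode

params = {
    "response_type": "code",
    "client_id": "example-client",
    "redirect_uri": "https://app.example.com/callback",
    "state": "af0ifjsldkj",
    # "openid" turns on OIDC, "offline_access" asks the AS to mint a
    # refresh token, and "calendar.read" is a resource capability.
    "scope": "openid offline_access calendar.read",
}
print("https://as.example.com/authorize?" + urlencode(params))
```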

It's worth noting that the ambiguity around scopes and fine-grained runtime access permissions is an industry unto itself :)

RFC 9396 is interesting, but naive, for a couple of reasons: 1) it assumes this information belongs on the front-channel; 2) it does not scale in JWT-based token systems without introducing heavier back-channel state.
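For context, RFC 9396 (Rich Authorization Requests) replaces the flat scope string with a structured authorization_details array. A rough sketch of the size problem (the object below is modelled on the RFC's payment example; the values are illustrative):

```python
import json

# An authorization_details object in the style of RFC 9396.
authorization_details = [{
    "type": "payment_initiation",
    "actions": ["initiate", "status"],
    "locations": ["https://bank.example.com/payments"],
    "instructedAmount": {"currency": "EUR", "amount": "123.50"},
}]

flat_scope = "payments"
structured = json.dumps(authorization_details, separators=(",", ":"))
print(len(flat_scope), "bytes vs", len(structured), "bytes")
# Every structure like this either rides the front-channel and gets baked
# into each JWT access token, or has to live server-side as extra
# back-channel state - the trade-off mentioned above.
```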

I personally do not view OIDC as an authentication standard - at least not a very good one - since all it can prove is that the principal was valid within a few milliseconds of the iat on that id_token. The recipient cannot and should not take receipt of this token as true proof of authentication, especially when we consider that the authorization server delegates authentication to a separate system. The true gap that OIDC fills is the omission of principal identification from the original OAuth2.0 specification. Prior to OIDC, many authorization servers would issue principal information as part of their response to a token introspection endpoint.
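To put the freshness point in code - a minimal sketch using the PyJWT library (the max-age threshold is an arbitrary policy choice, not anything OIDC mandates):

```python
import time
import jwt  # PyJWT: pip install pyjwt

MAX_TOLERATED_AGE = 60  # seconds; an illustrative policy choice

def assert_fresh(id_token: str) -> dict:
    # Assume the signature was already verified against the issuer's
    # published keys; here we only inspect the claims.
    claims = jwt.decode(id_token, options={"verify_signature": False})
    age = time.time() - claims["iat"]
    # All the recipient can conclude: the IdP vouched for this principal
    # `age` seconds ago. Whether that counts as "authenticated now" is a
    # policy decision the standard does not make for you.
    if age > MAX_TOLERATED_AGE:
        raise ValueError("id_token too stale to treat as an authentication event")
    return claims
```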


Linus always has a great way of summarizing what others might be thinking (nebulously). What's being said in the article is really mirrored in the lost art of DDD, and when I say "lost" I mean that most developers I encounter these days are far more concerned with algorithms and shuttling JSON around than with figuring out the domain they're working within and modelling entities and interactions. In modern, AWS-based designs, this looks like a bunch of poorly reasoned GSIs in DDB, anemic objects, and script-like "service" layers that end up being hack upon hack. Maybe there was an implicit acknowledgement that the domain's context would be well-defined enough within the boundaries of a service? A poor assumption, if you ask me.

I don't know where our industry lost design rigor, but it happened; was it in the schools, the interviewing pipeline, lowering of the bar, or all of the above?


I’d argue software design has never been taken seriously by industry. It’s always cast in negative terms, associated with individuals seen as politically wrong/irrelevant, and brings out a ton of commenters who can’t wait to tell us about that one time somebody did something wrong, therefore it’s all bad. Worse, design commits the cardinal sin of not being easily automated. Because of this, people cargo-cult the designs that tools impose on them and chafe at the idea that they should think further about what they’re doing. People really want to outsource this thinking to The Experts.

It doesn’t help that design isn’t really taught but is something you self-teach over years, and that it is seen as less real than code (ergo, not as important). All of these beliefs are ultimately self-limiting, however, and keep you at the advanced-beginner stage in terms of what you can build.

Basically, programmers collectively choose to keep the bar as low as possible and almost have a crab-like mentality on this subject.


I can see a swing finally starting. It isn’t “huge” by any stretch, but at the same time

“deVElOpErS aRe MoRE EXpEnSivE tHaN HArDwaRE”

Commenters are no longer just given free internet points. This is encouraging as these people controlled the narrative around spending time on thinking things through and what types of technical debt you should accept for like 20 YEARS.

I think maybe people are finally sick of 128 gigs of RAM being consumed to edit a single 4 KB text file.


There is some truth to the idea that developer time is expensive and can dwarf the monetary gains from micro-optimization.

I agree that some people took the idea to mean "what's a profiler?" and that is why our modern machines still feel sluggish despite being mind-bogglingly fast.


This might be driven by the cost per computation being vastly lower while the benefit has remained mostly constant. There is little incentive to make a text editor that runs in 10k of memory, because there is no benefit compared to one that runs in 10 megabytes or, soon, 10 gigabytes.

I spend a lot of my day in VS Code and PyCharm, and the compute resources I consume in an hour are more than what the Apollo program consumed over its full existence. Our collective consumption in any given decade is most likely larger than the sum of computing resources consumed up until that point in our history.


> most developers I encounter these days are far more concerned with algorithms and shuttling JSON around than figuring out the domain they're working within and modelling entities and interactions

The anemic domain model was identified as an anti-pattern quite a long time ago[1]. It usually shows up along with Primitive Obsession[2], and the result is a lot of code doing things to primitive types like strings and numbers, with validation and checking code all over the place. It can also produce a lot of duplicated code that doesn't look like duplication because it's not syntactically identical, yet it's functionally doing the same thing (a sketch follows the references below).

[1] https://martinfowler.com/bliki/AnemicDomainModel.html

[2] https://wiki.c2.com/?PrimitiveObsession
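A minimal sketch of the contrast (hypothetical names; Python): with primitive obsession every call site re-validates a bare string, while a value object states the rule exactly once.

```python
from dataclasses import dataclass

# Primitive obsession: validation duplicated wherever the string travels.
def register(email: str) -> None:
    if "@" not in email:  # repeated, slightly differently, across the codebase
        raise ValueError("bad email")
    ...

# Value object: an invalid EmailAddress cannot exist past the constructor.
@dataclass(frozen=True)
class EmailAddress:
    value: str

    def __post_init__(self) -> None:
        if "@" not in self.value:  # the rule lives in exactly one place
            raise ValueError("bad email")

def register_with_vo(email: EmailAddress) -> None:
    ...  # downstream code can assume the address is well-formed
```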


The industry predominantly rewards writing code, not designing software.

I think the results of bad code aren't as obvious. A bad bridge falls down; bad code has to be... refactored/replaced with more code? It goes from one text file that execs don't understand to a different text file that execs don't understand.

And once something works, it becomes canon. Nothing is more permanent than a temporary hack that happens to work perfectly. But 1000 temporary hacks do not a well-engineered system make.

I believe that maturing in software development means focusing on data and relationships before writing code. It's important to be able to turn those into code, but the code should be derived from the model, not the data model reverse-engineered from code that happens to work.


> The industry predominantly rewards writing code, not designing software.

The sad part of this is that code is absolutely a side-effect of design and conception: without a reason and a reasonable approach, code shouldn't exist. I really think that the relative austerity happening in industry right now will shine a light on poor design: if your solution to a poorly understood space was to add yet another layer of indirection in the form of a new "microservice" as the problem space changed over time, it's likely there was a poor underlying understanding of the domain and no planning for extensibility. Essentially, code (bodies) and compute aren't as "cheap" as they were when money was free, so front-loading intelligent design and actually thinking about your space and its use-cases becomes more and more important.


> The industry predominantly rewards writing code, not designing software.

This also stems from the fact that most of the code being written at any given moment solves problems we've already solved before, doing or supporting mundane tasks that are completely uninteresting from a software design point of view.


> anemic objects

I have yet to come across a compelling reason why this is such a taboo. Most DDD functions I have seen are also just verbose getters and setters. Just because a domain entity can contain all the logic doesn't mean it should. For example, if I need to verify that a username already exists, how do I do that within a domain entity that "cannot" depend on the data access layer? People commonly recommend things like "domain services," which I find antithetical to DDD because now business logic is being spread across multiple areas.
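For what it's worth, one commonly suggested answer (a sketch with hypothetical names, and only one pattern among several): express the check as a domain-owned interface and let the application layer supply the data-access implementation, so the rule stays in domain terms while the I/O stays in infrastructure. Whether the function below counts as a "domain service" is exactly the kind of lingo dispute being objected to here.

```python
from typing import Protocol

class UsernameUniqueness(Protocol):
    """Domain-owned port; the data access layer provides the adapter."""
    def is_taken(self, username: str) -> bool: ...

class User:
    def __init__(self, username: str) -> None:
        self.username = username

def register_user(username: str, uniqueness: UsernameUniqueness) -> User:
    # The rule ("usernames are unique") is stated in domain terms here;
    # how the check reaches the database is an infrastructure detail.
    if uniqueness.is_taken(username):
        raise ValueError(f"username {username!r} already exists")
    return User(username)
```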

I quite enjoy DDD as a philosophy, but I have the utmost disdain for "Tactical DDD" patterns. Too many people think Domain-Driven Design == Domain-Driven Implementation. I try to build rich domains where appropriate - which is not in all projects - but I try not to get mired in the lingo. Is a "Name" type a value object or an aggregate root? I couldn't care less. I am more concerned with the bounded contexts than anything else. I will also admit that DDD can sometimes increase the complexity of an application while providing little gain. I wouldn't ever dare call it a silver bullet.

I will continue to use DDD going forward, but I can't shake the feeling that DDD is just an attempt at conveying, "See? OOP isn't so bad after all, right?" - and I am not sure it accomplishes even that goal.


If you replace the Object-Oriented mechanism for encapsulation with some other mechanism for encapsulation then there's probably no reason for this taboo.

But in 99.999999% of real-world projects, anemic object-oriented code disregards encapsulation completely, and so business logic (the core reason you're building the software in the first place) gets both duplicated and strewn randomly throughout the entire project's code.

Or in many cases, if the team disregards encapsulation at the type level then they're likely to also disregard encapsulation at the API/service/process level as well.


Ok, I see where you are coming from, and I agree. However, I would like to add that poorly implemented DDD can be just as awful.


With decades of exponential growth in CPU power, and memory size, and disk space, and network speed, and so on - the penalties for shit design mostly went away, so you could usually get away with code monkeys writing crap as fast as they could bang on the keyboards.


Weird - this is the first place I saw the "internet" on display as a kid. Shame to see it close in such an unceremonious way.


You'd be surprised at how little cloud vendors give a shit about security internally. Story time: I recently implemented key rotation for one of our authz services, since it had none, and was reprimanded for "not implementing it like Google". Fun fact: Google's jwks.json endpoint claims, from its path, to serve "certs" (https://www.googleapis.com/oauth2/v3/certs). They are not certs - there is no X.509 wrapper, no stated expiration, no trust hierarchy. Clients are effectively flying blind when performing token validation against this endpoint, and it's really shitty.

Other nonsense I've seen: leaking internally signed tokens for external use (front-channel), JWTs being validated without a kid claim in the header - so there's some sketchy coupling going on, skipping audience validation, etc...
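For contrast, a sketch of what key selection and validation look like when the kid header and audience are actually honoured (using PyJWT's PyJWKClient; the expected audience and algorithm are illustrative):

```python
import jwt  # PyJWT: pip install "pyjwt[crypto]"
from jwt import PyJWKClient

# The endpoint from the comment above: bare keys, no X.509 wrapper.
jwks = PyJWKClient("https://www.googleapis.com/oauth2/v3/certs")

def validate(token: str, expected_audience: str) -> dict:
    # Selects the published key matching the token's `kid` header; tokens
    # minted without a kid force clients into try-every-key guessing - the
    # "sketchy coupling" mentioned above.
    signing_key = jwks.get_signing_key_from_jwt(token)
    # Skipping the audience check would let a token minted for one client
    # be replayed against another.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=expected_audience,
    )
```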

Not much surprises me anymore when it comes to this kinda stuff - internally, I suspect most cloud providers operate like "feature factories" and security is treated as a CYA/least-concern thing. Try pushing for proper authz infrastructure inside your company and see what kinda support you'll get.


Are there any large companies that don't operate like feature factories? It seems to be such a common issue and the natural result of the incentive structure.


Although this is a valid insight, it reduces the conversation to a "yes or no" question on a topic that is not a "yes or no" topic: it is behavior and messaging among a dozen critical functions of a business, and almost every business differs in its mix. Perhaps, faced with similar rhetoric, the law says "show me an example, then we can discuss" instead of "classify all examples, then apply them to a situation".


This is basically how I handle it, and we live in the neighborhood mentioned by this article. The disability claim is largely a straw man: I've never seen someone in a wheelchair try to navigate SF streets - they're far, far too hilly, and para-transport exists. This whole thing comes down to SFMTA being a bunch of cash-grabbing motherfuckers. The amount of revenue the city takes in through ticketing is outrageous, to the point that it's less and less about bylaw enforcement and more about making money.

I think last year my total $-amount for SFMTA tickets came to ~$1300? Still cheaper than paying for a garage or wasting my time trying to find street parking.

More context here: the sidewalks of the outer sunset and outer richmond are extremely wide, to the point where you could easily parallel park 1.5 cars right on the sidewalk, so it's trivial to park a car such that, even partially in the driveway, it leaves more than enough space for anyone to pass.


As someone who grew up in the outer sunset where this is also a frequent occurrence, it's surprising to me how so many people engage in such anti social behavior and don't see a problem with it.

If $1300 per year in SFMTA tickets isn't enough to dissuade you, I can only hope that the violations are increased substantially. I wish we would follow the Nordic model of fines being scaled to income.


If they are treating it as a cost, an escalating fee schedule might be a good approach. 1x the first ticket, 2x for the second one within 12 months of the first, 4x for the next one within 12 months of the second.

Doesn't hit accidental offenders too hard, but hits the scofflaws pretty good.


You really think that's anti-social behaviour? It's a matter of practicality, my delicate flower.


> The amount of revenue the city takes in through ticketing is outrageous, to the point that it's less and less about bylaw enforcement and more about making money.

Be that as it may, a society that allows individuals to claim public land for private use without any compensation or penalty also seems outrageous.

I can't speak to SF and personally I'd look the other way if someone is taking less than a couple feet from a sufficiently wide sidewalk in front of their house. But at least in NYC many sidewalks, crosswalks, bike lanes, car lanes, and bus lanes are impassable because the cost of parking there is far too low.


> Be that as it may, a society that allows individuals to claim public land for private use without any compensation or penalty also seems outrageous.

SF is rife with that and the SF Coalition on Homelessness makes its money on that premise.

SF needs to be ticketing sidewalk parkers while INCREASING the availability of parking in business districts. Right now what we have is just anti-car tyranny.


Why should the city provide subsidized parking at all? Car owners should pay the full cost (or the businesses that want to attract them should).


If we were serious about curbing this, violations would mean either points deducted from your licence or criminal charges.

I'm not surprised cities are trying to claw back revenue, because the costs of car infrastructure and its side effects, like enabling massive sprawl, are so astonishingly high.

Drivers don't even know those costs. Consider the roads you have now: they're in the state they're in despite absolutely huge political support.


I'll echo this - I have had two left-leg DVTs, spaced about 7 years apart, and after the second event I really started diving into medical publications - surgical journals, medical textbooks, clinical trials - as a means to better understand the condition, its pathology, etc. I ultimately submitted to testing and discovered a congenital stenosis of the left iliac vein with heavy retroperitoneal collateralization that necessitated a stent to keep that iliac vein open.

I also had a quick look into the social media (primarily reddit) aspect of these vascular conditions, and it's a pile of dogshit. Most of these patient communities bill themselves as "support groups", but there's never any real discussion of meaningful research, drug, or device advancements. These places serve primarily as "pity pits" for chronic moaners and scammers selling alternative medicine.


This is interesting -- I have Factor V Leiden (heterozygous) and have had one DVT. It never would have occurred to me to seek out a support group.


I also have the same mutation, as does my wife. From what I've been told by various hematologists, vascular surgeons, and interventional radiologists, it's a very weak clotting disorder, but you do have to keep an eye on certain environmental factors: smoking, hydration, movement, and trauma/surgery. To put it another way, FVL is fairly benign until you're already well into Virchow's danger zone, and at that point it's going to work against you. When it comes to VTE in the presence of ONLY FVL, I would shoot serious side-eye at a doc who chalked it up to the mutation - there's usually something else going on.


Possibly true, but don't sleep on it -- I happened to be transitioning insurance when it happened, so I dragged it out for several days before ending up at the ER. They sent me home later that day, but with strict warnings about calling 911 immediately for any sign of stroke, heart attack or pulmonary embolism. Fortunately all I have to show for it is weakened vein flow in the affected leg.


They absolutely do not and also introduce a significant amount of overhead with respect to key/certificate management.


And security (basic auth is as good as sending clear text passwords).


> sending clear text passwords

Which is totally fine to do over HTTPS.


Passwords need to be sent both with the request, and to the requestor. I think GP is referring to sending credentials to the service making the request.

It is far better to give service XYZ a time-bound and scope-limited token to perform a request than a user's username and password.
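A quick sketch of the difference (PyJWT; the claim names and lifetime are illustrative): the token carries its own expiry and a narrow scope, whereas a forwarded password is an unbounded grant until the password itself is rotated.

```python
import time
import jwt  # PyJWT: pip install pyjwt

SIGNING_KEY = "dev-only-secret"  # illustrative; use asymmetric keys in practice

def mint_service_token(subject: str) -> str:
    now = int(time.time())
    return jwt.encode(
        {
            "sub": subject,
            "scope": "orders:read",  # only what service XYZ needs
            "iat": now,
            "exp": now + 300,        # expires in five minutes
        },
        SIGNING_KEY,
        algorithm="HS256",
    )
# Contrast with Basic auth: base64(username:password) grants everything
# the user can do, indefinitely.
```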


No, there's no explanation.


Yup - OIDC can be boiled down to:

1. The OG OAuth2 spec never said anything about identifying principals
2. OIDC mandated an id_token to avoid the hit of POSTing the access token to an introspection endpoint
3. The shape of the id_token is a JWT

This doc is junk, btw - it's suggesting the client use the id_token for the purposes of authentication? I wouldn't trust that token beyond some basics, unless you have very strong controls over the authorization server and IdP to ensure things like email verification are propagated correctly.
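The "basics" amount to roughly the following checks (a sketch with PyJWT; the issuer and key are placeholders for whatever your authorization server publishes):

```python
import jwt  # PyJWT

def basic_id_token_checks(id_token: str, signing_key, client_id: str) -> dict:
    claims = jwt.decode(
        id_token,
        signing_key,
        algorithms=["RS256"],
        issuer="https://idp.example.com",  # must be the expected issuer
        audience=client_id,                # must be *this* client
    )  # decode() also enforces exp
    # Claims like email are only as trustworthy as the IdP's own controls:
    # don't treat an address as verified unless the IdP explicitly says so.
    if claims.get("email") and not claims.get("email_verified", False):
        raise ValueError("email claim present but not verified by the IdP")
    return claims
```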


> OIDC mandated an id_token to avoid the hit of POSTing the access token to an introspection endpoint

A bit more fundamental than that, the two tokens are meant for different purposes and different audiences.

The access token is meant for delegated authorization of a user against resources. A client isn't even meant to be able to interpret it (but often can, because they are often signed JWTs). In particular, clients aren't supposed to use the introspection endpoint.

The id_token is meant for the client itself, about the user. This isn't meant to be shared with anyone other than the client, and must be audienced (only usable by the client).

Moreover, access tokens may represent indefinitely long-lived access to resources, such as offline reading of a calendar for group-scheduling free/busy information. An id_token is a signal of state at a particular point in time, e.g. this browser request represents this user because it bears the id_token.


To illustrate why OIDC introduced audience validation:

When you separately request user info from an endpoint with an access token (which, according to OG OAuth2.0, is just an opaque string and cannot be validated), that access token could be someone else's - possibly from a user who logged into a different, malicious application which somehow managed to trick you into accepting that token.
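The vulnerable pattern, sketched (hypothetical endpoint and names): any valid access token for the user "logs them in", regardless of which app it was issued to - which is precisely the hole an audienced id_token closes.

```python
import requests  # hypothetical userinfo call for illustration

def naive_login(access_token: str) -> dict:
    # BROKEN: treats "this token can read userinfo" as "this user is
    # logged in to *my* app". A token minted for a malicious third-party
    # app passes this check just as well.
    r = requests.get(
        "https://idp.example.com/userinfo",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    r.raise_for_status()
    return r.json()
# An id_token with a validated `aud` claim binds the login assertion to
# exactly one client, so a token issued to another app is rejected.
```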

