> GNAP is not an extension of OAuth 2.0 and is not intended to be directly compatible with OAuth 2.0. GNAP seeks to provide functionality and solve use cases that OAuth 2.0 cannot easily or cleanly address.
> GNAP and OAuth 2.0 will likely exist in parallel for many deployments, and considerations have been taken to facilitate the mapping and transition from existing OAuth 2.0 systems to GNAP
Doesn't look like GNAP will fly any time soon; however, there is one very interesting part - the Security Considerations section. It looks like it was written by people who are familiar with the full variety of cyberops and usability issues in the OAuth2/OIDC specs.
I hope the community will combine it all at some point and add specifications for proper policy and resource management too, by looking at the full lifecycle of modern applications.
Hopefully, when OAuth 2.1 is released, OpenID Connect will be updated to be based on OAuth 2.1. This would make some of the useful advice in FAPI (like PKCE) mandatory. A lot of the FAPI stuff that is not included in OAuth 2.1 or the OAuth BCP is just over-engineering by wannabe cryptographers, bad advice, or at least useless advice.
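For reference, the PKCE part is tiny. A minimal sketch in Python of the verifier/challenge pair from RFC 7636 (the S256 method):

```python
import base64
import hashlib
import secrets

# PKCE (RFC 7636): the client invents a random secret ("code verifier")
# and sends only its SHA-256 hash ("code challenge") with the
# authorization request.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# S256: code_challenge = BASE64URL(SHA256(ASCII(code_verifier)))
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The verifier is only revealed later, with the token request, so a
# stolen authorization code is useless on its own.
print(code_verifier, code_challenge)
```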
Knowing the OpenID Foundation, this could end up as yet another undocumented errata set, but we can still dream of a better world, can't we? In a better world, instead of "Use 2048-bit RSA keys" the spec would say "Don't use RSA ever."
The advanced FAPI profile has even more directly bad advice, such as requiring PS256 and ES256. Now, these are not as bad as the common RS256 (RSA with PKCS#1 v1.5 padding), but they are still bad algorithms. The only good asymmetric algorithm defined in JWS is EdDSA, which, just like that, is forbidden by OIDC FAPI. So I'm quite happy FAPI is just a profile that will mostly be ignored.
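For contrast, signing and verifying a JWS with EdDSA is about as hard to get wrong as JOSE gets. A hedged sketch using PyJWT with the cryptography package (my own example; this is exactly what FAPI 1.0 forbids):

```python
import jwt  # PyJWT, with the "cryptography" extra installed
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Ed25519 has no padding modes, key sizes, or exponents to misconfigure.
private_key = Ed25519PrivateKey.generate()

token = jwt.encode({"sub": "alice"}, private_key, algorithm="EdDSA")
claims = jwt.decode(token, private_key.public_key(), algorithms=["EdDSA"])
print(claims)
```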
It looks like FAPI 2.0 has finally been released in December, and thankfully it killed off most of the excesses of FAPI 1.0 and is better aligned with OAuth 2.0.
At this point the main differences are:
1. PAR (Pushed Authorization Requests): a good idea that should become part of the OAuth standard, even if it costs an extra RTT. It prevents a pretty large class of attacks.
2. "iss" response parameter becomes mandatory (it is a core part of OAuth 2.1, but considered optional). This is a useful measure against mix-up attacks in certain conditions, but the conditions that enable it are less common than the ones
3. Requires either Mutual TLS or DPoP. I am less sold on that.
Mutual TLS is great for high-security contexts, since it prevents man-in-the-middle attacks and generally comes with guaranteed key rotation. But mTLS is still quite troublesome to implement. DPoP is easier, but of more questionable value. It doesn't fully protect against MitM, keys are rarely rotated, it is generally susceptible to replay attacks unless you take costly measures, and it relies on JWT being implemented securely, by a developer who understands how not to shoot themselves in the foot with their brand new JWT Mark II shotgun.
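To make that concrete, here is a rough sketch of a DPoP proof per RFC 9449, using PyJWT; the token endpoint URL is hypothetical:

```python
import json
import time
import uuid

import jwt  # PyJWT
from jwt.algorithms import ECAlgorithm
from cryptography.hazmat.primitives.asymmetric.ec import SECP256R1, generate_private_key

# Per-client keypair; the public half travels inside every proof.
key = generate_private_key(SECP256R1())
public_jwk = json.loads(ECAlgorithm.to_jwk(key.public_key()))

proof = jwt.encode(
    {
        "jti": str(uuid.uuid4()),  # unique ID; servers must track recent ones to stop replay
        "htm": "POST",             # HTTP method of the request being proven
        "htu": "https://as.example.com/token",  # target URI (made up)
        "iat": int(time.time()),
    },
    key,
    algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk},
)
# Sent as the DPoP request header. Note that nothing here prevents a
# same-second replay unless the server also issues nonces and remembers
# jti values - the "costly measures" mentioned above.
```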
4. Which brings us to cryptographic algorithm usage guidelines. These are not part of OAuth 2.1, since OAuth does not mandate or rely on any cryptography, with the sole exception of the SHA-256 hash used for PKCE.
This is good design. When there is an alternative that doesn't require cryptography (such as stateful tokens or the authorization code flow), it is generally more secure. You have one less algorithm to worry about being broken (e.g. by advances in quantum computing).
For what it's worth, the guidelines are okay, but not good enough. RSA is still allowed. Yes, it requires PSS and 2048-bit keys, but there are knobs left that you can use to generate valid but insecure RSA keys (e.g. a weak exponent). With EdDSA there is no such option: weak keys are impossible to generate. Considering EdDSA is also faster, has smaller signatures, and offers better security, there is no good reason to use RSA (and to a lesser degree ECDSA) anymore.
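The difference in knobs is visible right in the API. A quick sketch with the Python cryptography package:

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# RSA generation exposes parameters, and the API accepts this "legacy"
# exponent without complaint - the kind of knob that has bitten real
# systems when combined with sloppy verification code.
questionable_rsa = rsa.generate_private_key(public_exponent=3, key_size=2048)

# Ed25519 generation takes no parameters at all: every key is a random
# 32-byte value, so there is simply no way to ask for a weak one.
good_key = Ed25519PrivateKey.generate()
```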
In short, in an ideal world I think I would just want OAuth 2.1 to incorporate PAR and make the "iss" response parameter mandatory. The cryptographic (JOSE) parts of the specification seem to me to add too much complexity, for too little gain, with too little in the way of making the cryptography safe.
My guess is that the idea and intention behind .well-known was good, so that generic end-user libraries could be implemented ... the reality is ugly and generates a lot of man-hours for cyberops consultancies.
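In theory, one GET against the discovery document is all a generic client needs. A minimal sketch with requests (the issuer URL is hypothetical):

```python
import requests

# OIDC Discovery: the issuer publishes its configuration at a well-known path.
issuer = "https://idp.example.com"  # hypothetical issuer
config = requests.get(f"{issuer}/.well-known/openid-configuration", timeout=5).json()

# On paper, these are all you need to wire up a client...
print(config["authorization_endpoint"])
print(config["token_endpoint"])
print(config.get("userinfo_endpoint"))  # ...in practice, this is where IdPs get "creative"
```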
Yeah, OIDC with session-based access makes sense, especially for enforcing policies dynamically. For secure PDF access, we’ve found pre-signed URLs to be a solid approach. They allow temporary, controlled access without requiring ongoing authentication.
You're right. Pre-signed URLs can't be revoked once issued, but one way to mitigate the risk is to set a short expiration time when generating them. For example, if the URL is only valid for 5-10 minutes, the window for misuse stays small.
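For example, with S3 via boto3 (bucket and key names are made up):

```python
import boto3

s3 = boto3.client("s3")

# Short-lived pre-signed GET: it cannot be revoked once issued, so the
# expiry window is the only control you have - keep it small.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "reports-bucket", "Key": "statements/2024-q4.pdf"},
    ExpiresIn=300,  # 5 minutes
)
# Hand this URL to the already-authorized user; after 5 minutes it is dead.
print(url)
```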
If you want to expand your knowledge beyond OAuth2 (and you most probably should if you want to design systems used by the big guys from 0 to 1), I highly recommend jumping straight into OpenID Connect (OIDC), which is an identity layer built on top of OAuth 2.0.
Besides reading the specs, Sascha Preibisch's videos on both OIDC and OAuth2 were the most useful for solidifying the bigger picture for me.
The specs are actually well written despite all the jargon and the train of buzzwords used inside. The most annoying on my list are OP (OpenID Provider) and RP (Relying Party) ...
The problem with the OIDC and OAuth2 space is that IdP providers are too "creative" in their interpretation of the specs, starting from the userinfo and token exchange endpoints.
Without allocating a significant amount of time, getting all the flows and related cyberops into your brain might never happen.
Good news - it's a lifetime investment ...
An "oidc" search on GitHub gives good results - libraries, open source IdPs, all kinds of proxies, etc.
Feeling the same pain of walking through the creator's hell.
Heard somewhere that we are not supposed to be happy and absent-minded. Staying on the hard side of things shapes us and makes us complete.
I'm not sure if this helps, but I’ve personally stopped stressing about outcomes, "get rich quick" schemes, or being in the perfect moment.
The journey is what truly matters.
I use my own tools for consulting and eventually outsource myself as needed. If clients want to hire someone else, I don’t resist. I simply ask for two or three months’ salary for training and limited support afterward.
I’m not trying to build something worth millions to sell. I used to sell CRMs built on top of WordPress for a long time. Eventually, it evolved into a CRM I now use across several businesses. Over the years, I’ve also created numerous small tools that I’ve been using for over a decade—tools for tracking trends, education, and more.
I don't aim to be in the top 5% of earners; being in the middle for a targeted niche is OK.
For me, freedom means working in "pendulum mode" - six months of active client phases followed by six months of prototyping new ideas. That balance keeps me sane. No conflict of interest. Clients know I worked hard to kick things off, and that even in slow mode I'm more productive than most other engineers.
As for the pressure of staying up to date - I made a simple commitment to try new things 4-8 hours a week minimum. AI, ML, K8s, hyped programming languages. It was brutal at the beginning, and it's an extremely joyful process now. It's getting easier and easier. The main motivation is to stay connected with the folks who will come after me.
I'm also trying to surround myself with builders, and spend less time with transactional people who simply want a salary. Helping others overcome and internalize the essence of the pain of owning code and full responsibility makes my own struggles easier.
One thing I would never sacrifice for being in creator mode: the wellbeing and needs of my family. Whatever it takes to bring food to the table. All higher purposes come after.
Naval's mega episodes help me settle things in my mind really well too.
Had a recent conversation with my nephew about his robotics studies, and it seems he's overwhelmed by ROS, OpenCV, Python, and AI transformer tutorial hell.
Told him we should go together to an industrial expo - starting from 3D printing, molding, material science, and robotics - and spend some time together hacking on the hardcore C++ SDKs used by big tech like Nvidia and Co, and looking at how to apply linear algebra and statistics to real-life examples.
His dream is to automate small/mid-size business manufacturing, and to become a Steve Jobs-like figure one day.
What are good information sources and resources to follow to learn about real-life robotics?
Any tips or even sarcastic comments are welcome. Please help me be a cool uncle ;)
"3d printing, molding, material science and robotics " sounds like a Mechanical Engineering degree. I'm serious, I learned everything you're talking about and more. Probably hubris, but I'm pretty sure I'm well equipped to start a hardware company if I really wanted to (Not sure what business manufacturing means though, as in he wants to start an "Integrator" company or a contract manufacturing company?).
As for an expo, I highly recommend the ATX West show in a couple of weeks. Depending on your employer, you could probably get it covered by your job and have your nephew come along.
(Was a Design MechE for a decade or so, but I'm a SWE now)
It sounds like your nephew has a project in mind. Start there with the basic dependencies and that will start laying out a competency roadmap which looks a lot like a curriculum.
Quick aside: automate small/mid business manufacturing? Admirable, but it will probably choke on the scale problem, so I think the journey will be vastly more interesting than the destination... which is good! Turns out there's a lot of robotics that can be broadly applied.
I also wanted to automate everything. I even got a mentor who brought me back to reality. Basically, the robot must be available 99.9999% of the time - one unsuccessful operation in 5 years of usage without maintenance. With less, it makes no sense to automate, because handling a production line incident can be very expensive.
I don't think many Olympic gold medalists are taught by former Olympic gold medalists. Most teachers never go on to accomplish what their students achieve. I think OP has a fair chance of giving the nephew a great start - and whether that leads to Jobs-level accomplishments will depend on perseverance and luck.
I knew a guy who worked with Jobs and hated him with a passion.
Still, let's keep the useful parts of the image:
“How many of you are from manufacturing companies?”
[Some hands go up.]
“Oh, excellent. Where are the rest of you from? Okay, so how many are from consulting?”
[A number of hands go up.]
“Oh, that’s bad. Yeah, the mind is too important to waste—you should do something.”
“I think that without owning something over an extended period of time—like a few years—where one has a chance to take responsibility for one’s recommendations, where one has to see those recommendations through all action stages, accumulate scar tissue for the mistakes, pick oneself up off the ground, and dust oneself off, one learns only a fraction of what one can.”
> GNAP (Grant Negotiation and Authorization Protocol) is an in-progress effort to develop a next-generation authorization protocol
From spec https://oauth.net/gnap/
The Security Considerations section:
https://datatracker.ietf.org/doc/html/draft-ietf-gnap-core-p...
If any cyberops or pentester pros are reading this, please advise on how to research this further. Thanx in advance.