It's been a huge pain navigating all the interlinked (and sometimes contradictory) RFCs, especially with a distinct lack of resources for actually implementing an authorization server. RFC 6749 does not suffice on its own since it says nothing about authentication or token payloads (not that it should). I did discover the whole RFC universe, which is mostly fascinating and very well written.
Most Google searches end up at Auth0 (nice SEO!), which I'm sure is a fine product, but they usually only give a very high-level overview of the corresponding spec and end with "see how complicated this all is? there's a SaaS for that...".
I'm seriously considering implementing a fully spec-compliant OAuth 2.0 + OpenID Connect Core 1.0 reference server implementation in TypeScript, with full documentation quoted straight from the RFCs. The HOW is actually pretty straightforward once you've figured out the WHAT and WHY.
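To give a sense of the WHAT: the heart of RFC 6749 is a single token endpoint dispatching on `grant_type`. A minimal sketch in TypeScript — the handler names and stub bodies here are hypothetical placeholders, not a real implementation (which also needs client authentication, PKCE checks, storage, etc.):

```typescript
// Hypothetical sketch of the RFC 6749 token endpoint's core dispatch.
type TokenResponse = {
  access_token: string;
  token_type: "Bearer";
  expires_in: number;
  refresh_token?: string;
};

// Stub handlers standing in for real grant logic.
function exchangeCode(code: string): TokenResponse {
  return { access_token: "at-" + code, token_type: "Bearer", expires_in: 3600, refresh_token: "rt-" + code };
}
function refreshGrant(token: string): TokenResponse {
  return { access_token: "at-from-" + token, token_type: "Bearer", expires_in: 3600 };
}

const handlers: Record<string, (p: URLSearchParams) => TokenResponse> = {
  authorization_code: (p) => exchangeCode(p.get("code") ?? ""),
  refresh_token: (p) => refreshGrant(p.get("refresh_token") ?? ""),
};

// RFC 6749 §3.2: one token endpoint serves all grant types; §5.2 defines
// the "unsupported_grant_type" error for anything unrecognized.
function tokenEndpoint(params: URLSearchParams): TokenResponse {
  const handler = handlers[params.get("grant_type") ?? ""];
  if (!handler) throw new Error("unsupported_grant_type");
  return handler(params);
}
```

Everything hard lives behind those stubs — which is exactly the WHAT and WHY the RFCs spread across so many documents.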
That'd be awesome - I'd appreciate learning from something like that, and I really appreciate TypeScript
I’m actually kind of glad the documentation found via google searches (save one collection of Medium articles) has been disappointing. It has motivated me to look directly at the specification which I’ve found to be the most valuable resource thus far.
Rust would be awesome, but I think I'd want it to be as accessible as possible. Tagged unions and proper error handling would make it much saner though, that's for sure.
Sorry if that's so basic of a consideration that it's insulting, but I've seen it done before.
I've used @panva's certified JS provider and client for many use cases. Is there a reason (aside from learning) to reimplement the spec? If you do, you may consider getting your implementation certified to expand the list!
The reason this bad practice is common is that it is allowed by the spec in https://tools.ietf.org/html/rfc6749#section-6 as an optional action to take on refresh grants. Please, do not do this.
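For reference, the optional behaviour §6 permits is issuing a new refresh token on each refresh and revoking the old one. A minimal sketch of what that rotation looks like — store, names, and TTLs here are all hypothetical:

```typescript
// Sketch of the optional refresh-token rotation RFC 6749 §6 allows:
// "the authorization server MAY issue a new refresh token, in which case
// the client MUST discard the old refresh token". All names hypothetical.
import { randomBytes } from "crypto";

interface Grant {
  userId: string;
  refreshToken: string;
  expiresAt: number; // epoch ms
}

const grants = new Map<string, Grant>(); // keyed by refresh token

function issueGrant(userId: string, ttlMs: number): Grant {
  const grant: Grant = {
    userId,
    refreshToken: randomBytes(32).toString("hex"),
    expiresAt: Date.now() + ttlMs,
  };
  grants.set(grant.refreshToken, grant);
  return grant;
}

// On a refresh grant: validate, revoke the presented token, issue a new one.
function refresh(oldToken: string): Grant | null {
  const grant = grants.get(oldToken);
  if (!grant || grant.expiresAt < Date.now()) return null;
  grants.delete(oldToken); // rotation: the old refresh token is now invalid
  return issueGrant(grant.userId, 30 * 24 * 3600 * 1000);
}
```

The pain the parent describes comes from the `grants.delete` line: if the response carrying the new token is lost in transit, the client is left with nothing valid.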
"makes the system unusable due to the frequency of network transmission errors that would result in having to contact the resource owner"
What kind of network transmission errors are you getting? And at what quantity? This shouldn't be too difficult to do
I disagree: I think refresh_token expirations do add to security, and it's the client's job to handle any difficulties that come with the expiration period
Tl;dr: I don’t know kids, seemed fine to me, world didn’t end
Never once in my lifetime have I wanted to do that. For any two services.
I get it – people feel like the contacts are theirs because they collected them, but they are not. This should be illegal.
Has it really come to this? If you open your phone now you usually have two offline options: to store contacts on the SIM card and to store them on the phone.
If you want backups and comfort without betraying your contacts, you can run your own Nextcloud instance and store them there automatically.
I currently have at least 6 places where my contacts are stored (should I say scattered). I have no way to search them all, tag them all, archive the ones I seldom need, have versioning of the various data, etc.
Besides, contacts are a very broad topic.
I have contacts for friends and family, girls I'm flirting with, clients, administration offices, hairdressers, providers, restaurants... For some of them I want a way to contact them, for some the location; some I don't want to appear in the main listing, some I want to explicitly hide from people peeking at my phone, some I want to easily share or even synchronize with others.
And then there is the way things are labelled. When networking, I collect many contacts I want to isolate. I want to be able to search them by a "known from" field or an "industry" tag. I want to make groups for mass communication: I organize parties and sports events, and the best way to get people on board is still direct texting, with optional calling: FB events don't cut it.
It's one of those simple topics that aren't solved.
There is still no simple way to just send files from one computer to another. Try telling your mum to send you these videos from her iPhone to your laptop now that the wifi is down. Or to share 3 GB of photos with that group of 20 people you just met on a trip.
There is still no simple way to make checklists (not to-do lists, mind you, there are plenty of those).
Having a personal archive of documents and notes sucks. The best efforts used to be Evernote and MS OneNote: proprietary platforms and standards, lock-in, and of course they access your stuff.
Noting stuff on the go is a hack. I use Telegram self-messaging for that because it lets me quickly write, film, and record stuff and queue it in a thread that syncs across my devices.
And contacts are still a mess. Once in a while, you receive the "I changed my phone/lost my contacts/new email/made a mistake" text asking "who is this?". I still put metadata in the "name" field of a contact to remember where I know the person from, or the names of their children to avoid being awkward when meeting them again.
Still, we have an amazing free map of the entire world, more quality video content than we can ever consume, and encyclopedic knowledge of my entire field of expertise at my fingertips.
The way computers evolved is weird.
> The way computers evolved is weird.
To some extent, that's understandable. Storing and accessing static objective information that is the same for every user, that's easy, in a way it's just a matter of bandwidth and storage space.
Personal information and personal workflows, that is a different matter entirely.
For contacts of clients and such, probably yes, since whatever conditions allow processing them in Gmail probably also apply to Skype (and vice versa). But they might not if for some reason the other service offers less security or control over that data.
But if they take this data and then try to do something else with it, whether that's contacting rypskar or giving data about rypskar to somebody else, that's going to run into lots of GDPR problems around correctness (the data processor is obliged to take reasonable steps to ensure the data is accurate and fix problems on request) and, as you identified, around permissions (did rypskar authorise this? How do we know? Who even is rypskar?)
Do you use any of those services? If so did you create your contact list from scratch on each?
None of my friends are on TikTok AFAICT but if they were I'd want to know.
I'm curious, why? I sign up to different services (reluctantly) because I want to communicate with people I can't yet. Occasionally I might have some contacts on two networks, if there's some specific feature I want to use, but it's rare.
> Occasionally I might have some contacts on two networks, if there's some specific feature I want to use, but it's rare.
Are you sure you don't have business contacts who are also in your email address book?
Sure, I might want that with a few coworkers, but not with my brother, aunt, friend's boyfriend or mother, the tenant who rents my house, the shop that delivers butane, etc. Essentially 95% of my contacts are not relevant to a specific network, and dumping them in indiscriminately seems not just bad form but a waste of time for me and them as well.
> Are you sure you don't have business contacts who are also in your email address book?
Yes, but they are a small subset of both circles.
Given that we are discussing LinkedIn, wouldn't you just not connect with them? Seems like a small cost for having to add that 5% of people manually.
Yeah, I'll manually add a few people to avoid imposing that on myself and others.
Can you imagine if the HTTPS standard had to have 100 different variations to connect to 100 different sites on the net, with no way to detect which one was in use, so that you had to have the variations hardcoded for every site you visit? Because that's where we are with OAuth.
Worse, some of the practices which are now RFCs are complete garbage. The grant type Resource Owner Password Credentials, for example, requires that the third-party server ask the user for their username and password. This defeats the entire purpose of OAuth. Don't give me some crap about this being useful for first-party servers: there are way simpler and more secure ways to authenticate first-party servers. OAuth is for authenticating users with third-party servers, and you should absolutely, unequivocally not be letting third-party servers ask the user for their username and password. This normalizes putting your passwords for one site into another site for nontechnical users, which is outright irresponsible and unethical.
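Concretely, under the ROPC grant (RFC 6749 §4.3) the client application itself collects the resource owner's raw password and forwards it to the token endpoint. A sketch of the request body the client would build — the endpoint URL and credentials are made-up examples:

```typescript
// Under ROPC the *client* handles the user's raw password and ships it to
// the authorization server -- exactly the exposure OAuth was meant to avoid.
// All values here are hypothetical.
const body = new URLSearchParams({
  grant_type: "password",
  username: "alice@example.com",
  password: "hunter2", // the raw password, visible to the third-party client
  scope: "profile",
});
// The client would then POST this form body to the token endpoint, e.g.:
// await fetch("https://auth.example.com/token", { method: "POST", body });
```

Compare the authorization code grant, where the user only ever types their password into the authorization server's own pages.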
Yet in the handful of times I've authenticated with OAuth as a third party, more than one of the first-party sites I was authenticating with used this method, putting the onus on me to correctly handle their users' usernames and passwords. The reasons are obvious: 1. Doing it the insecure way is easier because it allows sites to avoid implementing an SSO portal to handle their own passwords and grant initial access (which, incidentally, isn't included in the standard protocol at all AFAIK, despite being necessary to make it work) and 2. The IETF somehow thought it was a good idea to give this awful security practice an air of validity with an RFC.
The OAuth2 protocol standard is a tire fire which is unsalvageable at this point. We need a new standard that is actually a fully-specified standard, with a single, clear path from start to finish, including the initial use of passwords to obtain tokens, which serves the single purpose of allowing third party services to authenticate in a secure way.
And this is only the most glaring error. Another example: there's no specification for how to generate tokens, and I've implemented OAuth against platforms that return tokens which look suspiciously like UUIDs. Not only does RFC 4122 explicitly recommend against using UUIDs as credentials (because people could reverse engineer your UUID generation), but collisions have occurred in practice without any reverse engineering, meaning a user could authenticate as another user without even hacking.
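For contrast, minting an opaque token from a CSPRNG is a one-liner; a sketch using Node's built-in crypto module (the function name is made up):

```typescript
import { randomBytes } from "crypto";

// 256 bits from a cryptographically secure RNG, hex-encoded: unguessable
// and collision-resistant, unlike a UUID, whose generation may be
// predictable -- RFC 4122 §6 warns against assuming UUIDs are hard to guess.
function generateAccessToken(): string {
  return randomBytes(32).toString("hex");
}
```

Even a randomly generated v4 UUID carries only 122 bits of randomness, and nothing in the UUID spec requires a secure RNG, which is the root of the problem above.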
And to be clear, I'm not a security auditor and I've never done a security audit of any OAuth implementations except my own. These are problems I discovered by treating other people's OAuth implementations as black boxes, all of them compliant with the so-called "standard". Security is hard, and with so little specified, I imagine the majority of OAuth implementations in the wild are actually not secure.
The resource owner password credentials grant MUST NOT be used. This grant type insecurely exposes the credentials of the resource owner to the client. Even if the client is benign, this results in an increased attack surface (credentials can leak in more places than just the AS) and users are trained to enter their credentials in places other than the AS.
Furthermore, adapting the resource owner password credentials grant to two-factor authentication, authentication with cryptographic credentials, and authentication processes that require multiple steps can be hard or impossible (WebCrypto, WebAuthn).
This is everything I hate about using remote APIs, squared. I’ve never been able to make even the simplest example work.
*possibly because some system administrator noticed abnormal traffic and blocked it
Has anyone written a blog post about OAuth authorization server UI design?
Also look at their nice django-oauth-server:
They can be pricey at scale, but as long as you follow the protocol without cramming more functionality into authentication than belongs there, you can swap them out for a self-hosted solution.