The company I work for has used it as a relatively simple way to implement mutual TLS (mTLS) for legacy apps and systems that would otherwise be annoying or difficult to integrate with mTLS, or that don't support mTLS with a custom trust store.
Same here. This thing is gold for "80% solutions" in that respect. It's easier to sanely integrate with legacy transport protocols than to update the legacy code base to implement mutual trust the harder, more direct, and more error-prone way, IMO.
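To make the pattern concrete: a TLS-terminating reverse proxy can sit in front of the legacy service, require and verify client certificates against a custom trust store, and forward plain HTTP internally, so the legacy code never has to know mTLS exists. A hypothetical nginx sketch (paths, hostname, and upstream port are made up for illustration):

```nginx
# Hypothetical mTLS-terminating reverse proxy in front of a legacy app.
server {
    listen 443 ssl;
    server_name legacy.example.internal;

    # Server certificate presented to connecting clients
    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;

    # Require a client certificate and verify it against a custom trust store
    ssl_client_certificate /etc/nginx/tls/client-ca.crt;
    ssl_verify_client on;

    location / {
        # The legacy app speaks plain HTTP on localhost
        proxy_pass http://127.0.0.1:8080;
        # Pass the verified client identity along for logging/authorization
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}
```

The legacy app only ever sees unencrypted localhost traffic plus a header identifying the verified client, which is exactly the "80% solution" being described.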
Schmidt Ocean Institute (which also owns and operates Falcor) is an awesome nonprofit supporting a lot of really fascinating and impactful oceanographic and environmental research. One of my favorite oceanography documentarians, Leo Richards, produced a truly beautiful video for Schmidt Ocean Institute and put it up on his YouTube channel, Natural World Facts, which I highly recommend to anyone with an interest in the ocean or scientific research in general:
https://youtu.be/Uh3fNYVwDXM?si=QnzFTJFJ5hIhXWoJ
How would you say your service stacks up against something like Tailscale (https://tailscale.com/), which seems to solve the same problems but with end-to-end encryption and without the need to set up separate OAuth2 proxies? Where does ShareWith really shine and make things easier, faster, more secure, or more scalable than competing solutions?
Tailscale is great -- I've met the founders and use it for my own personal network.
At its core, ShareWith is about taking anything that supports OIDC and adding the ability to dynamically request access in the login flow. OAuth2Proxy, which is a popular way to protect generic HTTP resources, is one convenient way to demonstrate that functionality, but you could leverage the same functionality in a completely different way, native to your application, if you don't want to build that workflow for your app yourself.
For the borderless network use case, an access request workflow really changes how you use protected resources because getting access to something becomes simple and routine. In a VPN setup, you're unlikely to share with users outside of your organization because of the overhead from provisioning and setup. Consider how you can easily share a Google doc with an outside contractor, but it's not worth the effort to give them access to a JIRA ticket.
I get your points, but I disagree with your contention that all users sharing a single username and password doesn't complicate the case. We've established that an IP address is not strong evidence for identifying an individual: IP address != authentication. What _could_ have established strong evidence tying the alleged unauthorized access to an individual's identity would have been _actual_ authentication of the _specific_ user. But they don't have that either, since everyone shared one set of credentials.
You said, “they have an IP address which points to a specific one of those users,” but that's not actually the case. They have an IP address which has somehow been related to the accused (though how is unclear to me, since you note above there's no linkage of IP/customer/date) -- maybe they know she sent an email from that IP address at some point around the time of the alleged crimes.

In any case, without evidence that the IP address is associated _only_ with the accused, and _not_ with anyone else with similar opportunity and motive (for example, anyone else with access to the shared username and password who might want to access the data for similar reasons, or who wanted to frame the accused for hacking and put an end to her very public/politicized efforts), they don't really have strong evidence of anything -- basically only enough _not_ to rule the accused out of a probably large pool of possible suspects.

How many other current or former employees had access to the shared username and password? When were the credentials last changed? How many others were _never_ authorized to access the system but could have compromised or gained access to those credentials since then? How many times have they been written down and left on a sticky note in some public or semi-public place? Were there any controls in place to prevent guessing or brute-forcing the credentials? (With one login shared between all users, automatic account lockouts or resets seem very unlikely.)
> We’ve established that an IP address is not strong evidence for identifying an individual.
Well no, we haven't. There's one standard of evidence for conviction and another for a search warrant. You'd never get a conviction on that alone though.
> in any case, without providing evidence that the IP address is _only_ associated with the accused, and _not_ with any others with similar opportunity and motive
Because no other fired employee lives at her address? But that's not relevant because they got a warrant to search her address, not simply her person.
> I disagree with your contention that the fact that all users of the system used a shared username and password doesn’t complicate the case.
I do agree that it's not open and shut, but I don't think that the specific fact of the password being shared will complicate this case further.
Having the IP provides the linkage to her that is otherwise lacking because of the shared account.
> for example, any others with access to the shared username and password who might want to access the data for similar reasons, or wanted to frame the accused for hacking and put an end to the her very public/politicized efforts
There's even less evidence from which to come up with conspiracy theories than simply to blame the accused. Sure, it could have gone down in some complex and unlikely way, but why are we discussing zebras instead of horses?
And, any investigation of a conspiracy to frame her would necessarily start with the only clue - that the communication came from her IP.
Hey! A thing I can actually help with! I put together a PoC of exactly this when I had the same idea a couple years ago. Here's a basic generic example I put together at the time, that also defines interfaces for other policy directives (e.g. min length, etc.): https://github.com/milo-minderbinder/policy/blob/master/src/...
I'll add docs and updates if people give a shit. The passwords.dat file in the resources folder contains the top 1M most common passwords, which I compiled from a number of lists available at the time.
I implemented a Redis-backed instance of the above common-password Bloom filter in a sample Spring app that I was using to show off some features of Spring Security to a dev (I work in AppSec). You can see the policy and Redis config here: https://github.com/milo-minderbinder/spring-ref/blob/indev/s...
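For anyone who wants the gist without reading the linked Java, here is a minimal sketch in Python of the same idea -- a Bloom filter used to reject candidate passwords that appear on a common-password list. The sizing, hashing scheme, and names are illustrative choices of mine, not taken from the repo above:

```python
import hashlib


class BloomFilter:
    """Probabilistic set membership: no false negatives, tunable false positives."""

    def __init__(self, size_bits: int, num_hashes: int):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _indexes(self, item: str):
        # Derive k indexes from one digest split into two 64-bit halves
        # (the Kirsch-Mitzenmacher double-hashing trick).
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        for i in range(self.num_hashes):
            yield (h1 + i * h2) % self.size

    def add(self, item: str) -> None:
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))


# Build the filter once from a breached/common password list...
common = BloomFilter(size_bits=10_000_000, num_hashes=7)
for pw in ["123456", "password", "qwerty", "letmein"]:
    common.add(pw)


# ...then the policy check is a cheap, constant-time lookup.
def violates_common_password_policy(candidate: str) -> bool:
    return common.might_contain(candidate)
```

Roughly 10 bits per entry with 7 hash functions gives about a 1% false-positive rate, so a 1M-entry list fits in ~1.2 MB. The failure mode is also safe for this use case: a false positive just rejects a password that wasn't actually on the list.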
I love PlantUML! After a coworker introduced me to it at my old company, it quickly became my tool of choice for putting together security diagrams (I'm an AppSec engineer). I found it particularly helpful for this use case because it made diagram re-use, and updating existing diagrams when architecture changes are planned, extremely easy. We also used Confluence, for which there is a PlantUML plugin that allows you to insert your UML markup directly into a doc, which is then rendered by Confluence when someone views the page.
I threw together a set of macros and sprites for AWS architecture and deployment diagrams, which I've put on GitHub[0] for anyone who finds it useful. I'm eventually planning to upload a fork I wrote that generically (and much, much more efficiently) generates the templates and sprites for other services and products besides AWS.
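For anyone who hasn't used PlantUML, the appeal is that diagrams are plain text, so they diff, review, and update like code. A tiny illustrative deployment diagram (all names made up), which a tool like the Confluence plugin renders on page view:

```plantuml
@startuml
' Hypothetical three-tier deployment sketch
actor User
node "Load Balancer" as LB
node "Web App" as App
database "Postgres" as DB

User --> LB : HTTPS
LB --> App : HTTP
App --> DB : SQL
@enduml
```

Changing the architecture later is a one-line text edit rather than redrawing boxes, which is what makes diagram reuse and updates so easy.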