You sound like you are routinely doing zero-knowledge proofs. To me it sounds like a very niche thing. What kind of application area needs zero-knowledge proofs on the regular? Finance?
I agree with this. But I want to ask a similar question as OP but for services. How do you handle service account credentials in a good way? Typically multiple engineers need to be able to test out a given service account, so a number of users need to have access to its credentials. And you need a good way to enroll a new service-to-service connection, i.e., give a service account access to another service.
I haven't seen this done in a way that didn't feel overly complex, error-prone, and prone to oversharing secrets.
(1) is how a user gets the pass when needed (normal process or break glass debug). (2) is how the pass rotates to an unknown value, automatically, after the user is done with it. (3) is how the new value gets updated in references, without the user knowing the new value.
At the root of any solution are answers to those 3 needs.
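Those three needs can be sketched in a few lines. A minimal illustration, where `set_service_password` is a hypothetical hook standing in for whatever pushes the new value into the account and its references:

```python
import secrets

def set_service_password(account: str, password: str) -> None:
    """Hypothetical hook: set the account's password and update every
    reference to it (config store, CI secrets, etc.) in one step."""
    ...

def checkout(account: str) -> str:
    # (1) The user gets a fresh, known password when needed.
    password = secrets.token_urlsafe(32)
    set_service_password(account, password)
    return password

def checkin(account: str) -> None:
    # (2) + (3) When the user is done, rotate to a value nobody ever
    # sees; the same hook updates references, so no human learns it.
    set_service_password(account, secrets.token_urlsafe(32))
```

The point of the sketch: the only password a human ever sees is the temporary one from `checkout`, and it stops working as soon as `checkin` runs.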
The best way to handle service accounts is that nobody knows the password. When X needs to troubleshoot, they change the password, do what needs to be done, then change it again afterwards to some unknown value.
It's a lot easier if you automate the password delivery to the service account. 1Password can do this with their CLI offering and Vault/OpenBAO can also do this. There are other password managers that can do similar things as well.
With Vault/OpenBAO, for many services, it can handle password rotation and can even generate accounts (with appropriate permissions) for you: if you need access to service Y, Vault will create a temporary account. When the token expires, it deletes the account automatically.
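That dynamic-secrets pattern is easy to model. A toy, in-memory sketch of the idea (not Vault's actual API — in Vault this is done by the secrets engines, e.g. `vault read database/creds/<role>` for databases):

```python
import secrets
import time

class DynamicSecrets:
    """Toy model of dynamic credentials: each request mints a fresh
    account scoped to a role, with a TTL; expired accounts are deleted."""

    def __init__(self) -> None:
        self.accounts: dict[str, float] = {}  # username -> expiry time

    def generate_credentials(self, role: str, ttl: float) -> tuple[str, str]:
        # Mint a one-off account; its permissions would come from `role`.
        username = f"{role}-{secrets.token_hex(4)}"
        password = secrets.token_urlsafe(24)
        self.accounts[username] = time.monotonic() + ttl
        return username, password

    def reap(self) -> None:
        # Delete every account whose lease has expired.
        now = time.monotonic()
        self.accounts = {u: t for u, t in self.accounts.items() if t > now}
```

Nobody shares a long-lived credential: every caller gets their own short-lived account, and cleanup is automatic rather than a manual rotation chore.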
If you can't get there yet, then you limit the damage by restricting the shared password to a small set of people, using a password manager, and rotating the password as soon as any one of them leaves the group.
> The best way to handle service accounts is nobody knows the password. When X needs to troubleshoot, they change the password, do what needs to be done, then change it again afterwards, to some unknown value.
one way i've seen this is on a 'lease' system: your user requests access to use the account, the system logs your request and generates a random password for you, you log in and use the account for the duration of the lease, and let the system handle the rest
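A sketch of that lease flow, with hypothetical names throughout: the broker logs who asked for what, hands out a one-off password, and rotates the account once the lease lapses.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Lease:
    user: str
    account: str
    password: str
    expires_at: float

class LeaseBroker:
    def __init__(self) -> None:
        self.audit_log: list[tuple[float, str, str]] = []  # (when, who, what)
        self.active: dict[str, Lease] = {}

    def request(self, user: str, account: str, duration: float) -> Lease:
        # Log the request, then issue a random password for the lease window.
        self.audit_log.append((time.time(), user, account))
        lease = Lease(user, account, secrets.token_urlsafe(24),
                      time.monotonic() + duration)
        self.active[account] = lease
        return lease

    def expire(self) -> None:
        # "The system handles the rest": a real broker would rotate the
        # account's password to an unknown value here before dropping it.
        now = time.monotonic()
        for account, lease in list(self.active.items()):
            if lease.expires_at <= now:
                del self.active[account]
```

The audit log is the part that makes this work organizationally: access is still shared, but every grant is attributable and time-boxed.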
Why do these accounts only have a single user/password?
In any case, my answer would again be automation. Script the test, have the authorized process test out the service account on behalf of any employee who can create and run those tests.
Because that's just how some products are set up. Not every external service provides separate passwords for different employees; there's just a single corporate account that a certain number of people need to have access to.
And automation isn't an answer to that. The question is how to share the password in the first place, not to automate what is done with it.
I've bumped into this with various API integrations to e.g. warehousing systems, ERPs, hosting providers and whatnot. The advice in this thread works great until you are on a project to integrate X API into Y project, and you need the admin dashboard. Now you have three developers who all need to share one account.
> How do you handle service account credentials in a good way?
Passwordless. If someone can log into a service account then it's not secure.
If you need direct access to systems then you grant temporary rights to individual users. To make this smooth you need to create a service to do it. Basically the flow would be for a user to request access and for the service to ask their direct manager (or whoever has the rights) for permission.
I know it's common for organisations to let IT operations handle this, but it's a terrible practice. Your IT department will almost never be the one with the authority to grant access to anything without manager permission, which means they should basically be asking for that permission every time. It's also a massive waste of time for everyone. Yes, I know it's extremely common to just let IT operations do it anyway, but you shouldn't.
One potential benefit is that, with the right tooling around it, it should be possible to translate your code base to a different language and/or framework more or less at the push of a button. So if a team is wondering whether it would be worth switching a big chunk of the code base from Python to Elixir, they don't have to wonder anymore.
I tried translating a python script to javascript the other day and it was flawless. I would expect it to scale with a bit of hand-railing.
It seems that this kind of application can really change how the tech industry evolves down the line. Maybe we will converge on tech stacks more quickly if everyone can test new ones out "within a week".
ChatGPT is trained well enough on all things AWS that it can do a decent job translating Python based SDK code to Node and other languages, translate between CloudFormation/Terraform/CDK (in various languages).
It does well at writing simple- to medium-complexity automation scripts around AWS.
If it gets something wrong, I tell it to “verify your answer using the documentation available on the web”
>>ChatGPT is trained well enough on all things AWS
It was scary to me how chatting with GPT or Claude would give me information that was a lot clearer than what I could deduce after hours of reading AWS documentation.
Perhaps the true successor to Google search has arrived. One big drawback of Google was that a question couldn't be turned into a full, long conversation.
To that end, LLM chat is the ultimate Socratic learning tool to date.
ChatGPT is phenomenal for trying new techniques/libraries/etc. It's very good at many things. In the past few weeks I've used it to build me a complex 3D model with lighting/etc with Three.JS, rewrote the whole thing into React Three Fiber (also with ChatGPT), for a side project. I've never used Three.JS before and my only knowledge of computer graphics is from a class I took 20 years ago. For work I've used it to write me a CFN template from scratch and help me edit it. I've also used it to try a technique with AST - I've never used ASTs before and the first thing ChatGPT generated was flawless. Actually, most of the stuff I have it generate is flawless or nearly flawless.
It's nothing short of incredible. Each of those tasks would normally have taken me hours and I have working code in actual seconds.
And we are still at the beginning of this. Somewhat like where Google search was in the early 2000s.
As IDE integration grows and more and better models arrive that can do this better than ever, we will unlock all sorts of productivity benefits.
There is still skepticism about making these work at scale, with regard to both the electricity and compute requirements of serving a larger audience. But if they can get this to work, we might see a new tech boom bigger than anything we have seen before.
I see your point but that specific analogy makes me wince. Google search was way better in the 2000s. It has become consistently dumber since then. Usefulness doesn't necessarily increase in a straight line over time.
Deployment was quite complex for the target audience, due to requiring mutual TLS and thereby the management of self-signed key infrastructure or a commercial TLS cert.
> treating users like dumb cattle that can be nudged
Essentially the failure is that we do treat users like this, by relying on mass collection of data instead of personal stories. To be human is to communicate face to face, with words and emotions. That's how you get the nuanced conclusions. Data is important, but it's far from the whole story.