No. When you finance a car the lender gets a lien on the title. The title is still in your name. Usually one of the stipulations the lender places in their contract is that they hold the title for you in what is effectively "escrow" until the loan is paid off. Then they release the lien and send you your title.
They do that because a lien gives them the ability, within a particular legal framework, to take ownership of the car if you don't pay your loan, to prevent the title from being transferred to someone they don't trust, and so on. It's basically just risk mitigation.
This is cool. I like the frontend aspect. Looks really handy for a lot of apps where integration with other services is a core feature (I've built several just over the past few years, so I definitely get it).
I do wish it supported encrypted storage. For example, I wrote/maintain a Vault plugin to do basically the same work as the backend side of this project[0]. I wonder if you would be interested in supporting Vault as a backend in addition to PostgreSQL down the line? Feel free to reach out if so.
To answer your question:
Like some others here, I haven't found the actual integration points to be terribly difficult with most OAuth 2 servers. Once you have a token, you can call their APIs. No problem. I wrote the Vault plugin I referenced above to basically just do automatic refreshes without ever exposing client secrets/refresh tokens to our services, and it works fine.
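For a rough idea of what the consumer side looks like, here's a minimal sketch using the standard Vault Go client. The secrets-engine mount, path, and field name are hypothetical; the real plugin's layout may differ.

    package main

    import (
        "fmt"
        "log"

        vault "github.com/hashicorp/vault/api"
    )

    func main() {
        // Reads VAULT_ADDR and VAULT_TOKEN from the environment.
        client, err := vault.NewClient(vault.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Hypothetical secrets-engine path; the plugin keeps the access token
        // fresh server-side, so our services never see the client secret or
        // refresh token.
        secret, err := client.Logical().Read("oauth2/creds/my-integration")
        if err != nil || secret == nil {
            log.Fatal("no credential available: ", err)
        }

        token, _ := secret.Data["access_token"].(string)
        fmt.Println("usable bearer token present:", token != "")
    }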
Rather, our customers would get into situations where they inadvertently revoked access, or the user who originally authorized the integration left the company and it was automatically disabled, etc., and there was no notification that it had happened. Basically, all of the lifecycle management that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add, IMO.
Another issue is that each provider has its own scope definitions and its own mapping of scopes to APIs. Some scopes subsume others (e.g., GitHub has all repos, public repos, org admin, org read-only, etc.). Some get deprecated and need to be replaced with others on the next auth attempt. We could never keep them up to date because they were usually just part of the docs, not enumerated through some API somewhere. If you gave the user a way to see and select those scopes in advance, that would be huge. Imagine if my app or a user could answer the question "I want to call this API endpoint; what scopes do I need?" just by asking your service to figure it out.
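To make that concrete, here's a hypothetical sketch in Go of the kind of lookup I mean. The endpoint patterns and subsumption rules are illustrative, not a real provider catalog:

    package main

    import "fmt"

    // scopeCatalog maps an API endpoint to the minimal scope it needs.
    // These entries are illustrative, not a real provider's catalog.
    var scopeCatalog = map[string]string{
        "GET /user/repos":   "repo",      // private + public repos
        "GET /orgs/{org}":   "read:org",  // org read-only
        "PATCH /orgs/{org}": "admin:org", // org admin
    }

    // subsumes records which scopes imply which others (e.g. repo covers public_repo).
    var subsumes = map[string][]string{
        "repo":      {"public_repo"},
        "admin:org": {"read:org"},
    }

    // MissingScopes answers: given the scopes a token already has, what else
    // does it need to call the endpoint?
    func MissingScopes(endpoint string, granted []string) []string {
        need, ok := scopeCatalog[endpoint]
        if !ok {
            return nil // unknown endpoint; nothing we can say
        }
        for _, g := range granted {
            if g == need {
                return nil
            }
            for _, implied := range subsumes[g] {
                if implied == need {
                    return nil // a broader scope already covers it
                }
            }
        }
        return []string{need}
    }

    func main() {
        fmt.Println(MissingScopes("PATCH /orgs/{org}", []string{"repo", "read:org"}))
        // Output: [admin:org]
    }

The hard part, of course, is keeping a catalog like that accurate per provider as scopes get added and deprecated, which is exactly what we could never manage by hand.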
"Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO."
I can definitely see the value in notifying you when API access has been revoked. If you think of any other cases you'd like covered, I'm interested!
If anyone's looking for an example, I used this trick a few months ago to embed a tiny helper binary[0] directly into my application[1] so I wouldn't have to ship two executables or add "hidden" behavior to the main program. It works really well (on Linux)!
I've tested it in Fabrice Bellard's JSLinux with tcc (x86) and on https://replit.com/languages/c (x64). I couldn't see any side effect at all, and gdb's "catch syscall" doesn't show anything interesting either. It looks like TINY_ELF_PROGRAM isn't doing anything.
Relay by Puppet | Senior Software Engineer | Portland, OR or remote (5 hour overlap with Pacific Time) | Full-time | https://puppet.com and https://relay.sh
Hi!
Relay is Puppet's fast-moving SaaS/PaaS endeavor into the cloud-native arena. Relay is an event-driven workflow execution engine that helps connect service APIs and infrastructure.
We're looking for a senior-level backend engineer[0] keen on DevOps and security to make our workflow execution system robust and powerful. Our stack is pragmatic: lots of Go, Kubernetes (we run untrusted user-specified containers), Tekton, Knative, all in GCP.
In non-pandemic times, Puppet has a sweet office in downtown Portland (but you're welcome to remain remote, too). We're a friendly bunch (well, I'd like to think so anyway) and we're absolutely committed to making our customers' lives better every day. If this sounds like something you'd be interested in, please apply!
And if you have any questions or just want to chat, feel free to shoot me an email at noah@puppet.com.
Puppet | Senior Software Engineer | Portland, OR or remote (5 hour overlap with Pacific Time) | Full-time | https://puppet.com
Hi!
Relay[0] is Puppet's fast-moving SaaS/PaaS endeavor into the cloud-native arena. Relay is an event-driven workflow execution engine that helps connect service APIs and infrastructure.
We're looking for a senior-level backend engineer[0] keen on DevOps and security to make our workflow execution system robust and powerful. Our stack is pragmatic: lots of Go, Kubernetes (we run untrusted user-specified containers), Tekton, Knative, all in GCP.
In non-pandemic times, Puppet has a sweet office in downtown Portland (but you're welcome to remain remote, too). We're a friendly bunch (well, I'd like to think so anyway) and we're absolutely committed to making our customers' lives better every day. If this sounds like something you'd be interested in, please apply!
And if you have any questions or just want to chat, feel free to shoot me an email at noah@puppet.com.
The limiting factor right now is whether your cluster is accessible from the internet, which of course usually isn't the case. We are working through what our story for connecting to on-prem infrastructure looks like, so if you can share any additional details about your environment, that would be helpful!
It varies. We have a primary PostgreSQL database for non-sensitive data including some workflow configuration. We store sensitive data in Vault and logs in Google Cloud Storage. A bit more info here: https://relay.sh/docs/how-it-works/#where-and-how-is-my-data...
I'm happy to answer any questions about our storage and security architecture too!
This is a good question, thank you! The flow of events into our system is somewhat different from GitHub Actions'. We're trying to be a consumer of all sorts of events, including, say, data published to AWS SNS or delivered via a Docker Hub webhook[0]. Not all of that is in place yet, but we want to act more like an event broker than a CI/CD solution alone.
One concept we're throwing around is supporting CloudEvents[1] and dispatching workflows based on event types. If anyone has experience with CloudEvents, we'd love to hear whether that would be useful to you.
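To illustrate what I mean, here's a rough sketch of type-based dispatch using the CloudEvents Go SDK. The event types and the workflow mapping are made up for illustration, not a committed design:

    package main

    import (
        "context"
        "log"

        cloudevents "github.com/cloudevents/sdk-go/v2"
    )

    // Hypothetical mapping from CloudEvents "type" attributes to workflow names.
    var dispatch = map[string]string{
        "com.docker.hub.push":      "deploy-on-image-push",
        "com.amazonaws.sns.notify": "notify-on-sns-message",
    }

    func receive(ctx context.Context, event cloudevents.Event) {
        workflow, ok := dispatch[event.Type()]
        if !ok {
            log.Printf("no workflow registered for event type %q", event.Type())
            return
        }
        log.Printf("dispatching workflow %q for event %s from %s",
            workflow, event.ID(), event.Source())
        // ...trigger the workflow run here...
    }

    func main() {
        c, err := cloudevents.NewClientHTTP()
        if err != nil {
            log.Fatal(err)
        }
        // Listens on :8080 by default and decodes incoming CloudEvents
        // (binary or structured HTTP mode).
        log.Fatal(c.StartReceiver(context.Background(), receive))
    }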