Hacker News | impl's comments

Zero interest rate policy, which makes borrowing lots of money almost free, which makes it easy to hire lots of people using said borrowed money.


No. When you finance a car the lender gets a lien on the title. The title is still in your name. Usually one of the stipulations the lender places in their contract is that they hold the title for you in what is effectively "escrow" until the loan is paid off. Then they release the lien and send you your title.

The reason they do that is because a lien gives them the ability to, within a particular legal framework, get ownership of the car if you don't pay your loan, prevent the title from being transferred to someone they don't trust, etc. It's basically just risk mitigation.


This is cool. I like the frontend aspect. Looks really handy for a lot of apps where integration with other services is a core feature (I've built several just over the past few years, so I definitely get it).

I do wish it supported encrypted storage. For example, I wrote/maintain a Vault plugin to do basically the same work as the backend side of this project[0]. I wonder if you would be interested in supporting Vault as a backend in addition to PostgreSQL down the line? Feel free to reach out if so.

To answer your question:

Like some others here, I haven't found the actual integration points to be terribly difficult with most OAuth 2 servers. Once you have a token, you can call their APIs. No problem. I wrote the Vault plugin I referenced above to basically just do automatic refreshes without ever exposing client secrets/refresh tokens to our services, and it works fine.

Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO.

Another one is that each provider has its own scope definitions/mapping to their APIs. Some scopes subsume others (e.g. GitHub has all repos, public repos, org admin, org read-only, etc.). Some get deprecated and need to be replaced with others on the next auth attempt. We could never keep them up to date because they were usually just part of docs, not enumerated through some API somewhere. If you had a way to let the user see and select those scopes in advance, that would be huge. Imagine if my app or a user could answer the question "I want to call this API endpoint, what scopes do I need?" just by asking your service to figure it out.
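For the curious, the distinction I'm drawing looks roughly like this in code. A minimal Python sketch with made-up names and thresholds, not anything taken from the plugin or a provider's API:

```python
import time

# Hypothetical sketch of the two lifecycle decisions discussed above:
# proactive refresh (automatable), and telling a retryable refresh
# failure apart from a revoked grant (the part that needs a human).

REFRESH_MARGIN = 300  # seconds before expiry at which we refresh

def needs_refresh(expires_at, now=None):
    """True once the access token is within REFRESH_MARGIN of expiring."""
    now = time.time() if now is None else now
    return now >= expires_at - REFRESH_MARGIN

def classify_refresh_error(oauth_error):
    """Map a token-endpoint error code (RFC 6749 section 5.2) to an action.

    'invalid_grant' usually means the refresh token was revoked or the
    authorizing user is gone -- the unautomatable failure worth notifying
    someone about, rather than silently disabling the integration.
    """
    if oauth_error == "invalid_grant":
        return "alert-owner"  # grant revoked/expired; needs re-authorization
    if oauth_error in ("invalid_client", "unauthorized_client"):
        return "alert-admin"  # client credential/config problem
    return "retry"            # likely transient; safe to try again later
```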

[0]: https://github.com/puppetlabs/vault-plugin-secrets-oauthapp


Thanks a lot for all the insights, this is great!

"Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO."

I can definitely see value in notifying that API access has been revoked. If you think of any other case you'd like covered, I am interested!


If anyone's looking for an example, I used this trick a few months ago to embed a tiny helper binary[0] directly into my application[1] so I wouldn't have to ship two executables or add "hidden" behavior to the main program. It works really well (on Linux)!

[0]: https://github.com/impl/systemd-user-sleep/blob/666cf29871b1...

[1]: https://github.com/impl/systemd-user-sleep/blob/666cf29871b1...


Here's a simple concrete example for folks who don't know Rust:

    int main(int argc, char *argv[]) {
    #define TINY_ELF_PROGRAM "\
    \177\105\114\106\002\001\001\000\000\000\000\000\000\000\000\000\
    \002\000\076\000\001\000\000\000\170\000\100\000\000\000\000\000\
    \100\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\
    \000\000\000\000\100\000\070\000\001\000\000\000\000\000\000\000\
    \001\000\000\000\005\000\000\000\000\000\000\000\000\000\000\000\
    \000\000\100\000\000\000\000\000\000\000\100\000\000\000\000\000\
    \200\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\
    \000\020\000\000\000\000\000\000\152\052\137\152\074\130\017\005"
      int fd = memfd_create("foo", MFD_CLOEXEC);
      write(fd, TINY_ELF_PROGRAM, sizeof(TINY_ELF_PROGRAM)-1);
      fexecve(fd, argv, environ);
    }
Who here is brave enough to run my C string?
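If you're not that brave: Python bytes literals accept the same \ooo octal escapes as C, so you can decode the string and sanity-check the ELF header without executing anything:

```python
# Decode the octal-escaped C string above and inspect it; the bytes are
# copied verbatim from the snippet.
import struct

blob = (
    b"\177\105\114\106\002\001\001\000\000\000\000\000\000\000\000\000"
    b"\002\000\076\000\001\000\000\000\170\000\100\000\000\000\000\000"
    b"\100\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
    b"\000\000\000\000\100\000\070\000\001\000\000\000\000\000\000\000"
    b"\001\000\000\000\005\000\000\000\000\000\000\000\000\000\000\000"
    b"\000\000\100\000\000\000\000\000\000\000\100\000\000\000\000\000"
    b"\200\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000"
    b"\000\020\000\000\000\000\000\000\152\052\137\152\074\130\017\005"
)

assert blob[:4] == b"\x7fELF"            # ELF magic
assert blob[4] == 2 and blob[5] == 1     # 64-bit, little-endian
e_machine, = struct.unpack_from("<H", blob, 18)
assert e_machine == 62                   # EM_X86_64
e_entry, = struct.unpack_from("<Q", blob, 24)
print(hex(e_entry))       # 0x400078: entry point is the last 8 bytes
print(blob[0x78:].hex())  # 6a2a5f6a3c580f05: the actual code
```

128 bytes total: a 64-byte ELF header, a 56-byte program header, and 8 bytes of machine code.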


I had to add

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <unistd.h>
I've tested in Fabrice Bellard's JSLinux with tcc (x86) and on https://replit.com/languages/c (x64). I failed to see any side effect at all. gdb's "catch syscall" doesn't show anything interesting either. Looks like TINY_ELF_PROGRAM is not doing anything.


you can disassemble the code portion; the instructions are only a byte or two each

    push 0x2a
    pop edi
    push 0x3c
    pop eax
    db 0x0f, 0x05 ; invalid?


    ; set the first syscall argument to 42
    push   0x2a
    pop    edi

    ; select syscall 60 (sys_exit)
    push   0x3c
    pop    eax

    ; sys_exit(42)
    syscall


LOL my asm is rusty, didn't even know about the syscall instruction (I'd have used int 0x80 here)


What's the point of `push`ing constants to the stack and `pop`ing them to registers instead of `mov`ing them directly?


I think they encode smaller?


That's cool - I wonder why the exit code is not set to 42 in practice? The binary still returns 0, so there must be a bug somewhere.


A successful run that doesn't exit with EXIT_SUCCESS usually needs a good reason to justify it.


It returns the answer to an important question :)

Edit: Wouldn't mov be shorter than push/pop? (I am not very familiar with x86)


No, mov is longer. push imm8 encodes the constant in a single byte, omitting the three zero bytes that mov reg, imm32 has to spell out in full. https://stackoverflow.com/questions/56618815/why-use-push-po...
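Spelled out as bytes, using the standard x86-64 encodings for the instructions in question:

```python
# Two ways to set edi/rdi to 42 on x86-64:
push_imm8 = bytes([0x6A, 0x2A])           # push 0x2a   (opcode + imm8)
pop_rdi   = bytes([0x5F])                 # pop rdi     (single byte)
mov_edi   = bytes([0xBF, 0x2A, 0, 0, 0])  # mov edi, 0x2a (opcode + imm32)

assert len(push_imm8) + len(pop_rdi) == 3  # push/pop pair: 3 bytes
assert len(mov_edi) == 5                   # mov: 5 bytes, three of them zero
```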


Relay by Puppet | Senior Software Engineer | Portland, OR or remote (5 hour overlap with Pacific Time) | Full-time | https://puppet.com and https://relay.sh

Hi!

Relay is Puppet's fast-moving SaaS/PaaS endeavor into the cloud-native arena. Relay is an event-driven workflow execution engine that helps connect service APIs and infrastructure.

We're looking for a senior-level backend engineer[0] keen on DevOps and security to make our workflow execution system robust and powerful. Our stack is pragmatic: lots of Go, Kubernetes (we run untrusted user-specified containers), Tekton, Knative, all in GCP.

In non-pandemic times, Puppet has a sweet office in downtown Portland (but you're welcome to remain remote, too). We're a friendly bunch (well, I'd like to think so anyway) and we're absolutely committed to making our customers' lives better every day. If this sounds like something you'd be interested in, please apply!

And if you have any questions or just want to chat, feel free to shoot me an email at noah@puppet.com.

(Note: No recruiters, please.)

[0]: https://puppet.com/company/careers/jobs/2808521


Puppet | Senior Software Engineer | Portland, OR or remote (5 hour overlap with Pacific Time) | Full-time | https://puppet.com

Hi!

Relay[0] is Puppet's fast-moving SaaS/PaaS endeavor into the cloud-native arena. Relay is an event-driven workflow execution engine that helps connect service APIs and infrastructure.

We're looking for a senior-level backend engineer[1] keen on DevOps and security to make our workflow execution system robust and powerful. Our stack is pragmatic: lots of Go, Kubernetes (we run untrusted user-specified containers), Tekton, Knative, all in GCP.

In non-pandemic times, Puppet has a sweet office in downtown Portland (but you're welcome to remain remote, too). We're a friendly bunch (well, I'd like to think so anyway) and we're absolutely committed to making our customers' lives better every day. If this sounds like something you'd be interested in, please apply!

And if you have any questions or just want to chat, feel free to shoot me an email at noah@puppet.com.

[0]: https://relay.sh

[1]: https://puppet.com/company/careers/jobs/2808521


Hi,

I wrote an SSH step for you. Here's the source code:

https://github.com/relay-integrations/relay-ssh/blob/master/...

And since it isn't documented yet, here's how you would use it in a workflow:

  steps:
  - name: ssh
    image: relaysh/ssh-step-exec
    spec:
      connection: !Connection {type: ssh, name: my-ssh-connection}
      username: relay
      port: 2222 # defaults to 22
      knownHosts: |
        server1.example.com ssh-rsa AAAAEXAMPLE
        server2.example.com ssh-rsa AAAANOTHEREXAMPLE
      # or
      #strictHostKeyChecking: false
      on:
      - server1.example.com
      - server2.example.com
      input:
      - whoami
      - uptime
      - cat /etc/passwd
Please feel free to shoot me an email (check my profile) and I'd be happy to help write a workflow for your use case with you!


We do have a Helm integration that might be able to help you out: https://relay.sh/integrations/helm/

The limiting factor right now would be whether your cluster is accessible to the internet, which of course isn't super common. We are working through what our story for connecting to on-prem infrastructure looks like, so if you can provide any additional details on your environment, it would be helpful!


It varies. We have a primary PostgreSQL database for non-sensitive data including some workflow configuration. We store sensitive data in Vault and logs in Google Cloud Storage. A bit more info here: https://relay.sh/docs/how-it-works/#where-and-how-is-my-data...

I'm happy to answer any questions about our storage and security architecture too!


Hi,

This is a good question, thank you! The flow of events into our system is somewhat different from GitHub Actions'. We're trying to be a consumer of all sorts of events, including, say, data published to AWS SNS or via a Docker Hub webhook[0]. Not all of that is quite in place yet, but we want to act more like an event broker than a CI/CD solution alone.

One concept we're throwing around is supporting CloudEvents[1] and dispatching workflows based on event types. If anyone has experience with CloudEvents we'd love to hear if that would be something useful to you.

[0]: https://github.com/relay-integrations/relay-dockerhub/blob/m...

[1]: https://cloudevents.io/
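For anyone unfamiliar, a CloudEvents 1.0 envelope has only four required attributes (specversion, id, source, type). A hypothetical Docker Hub push mapped onto one might look like this; the type, source, and data values are made up for illustration:

```python
import json
import uuid

# Hypothetical mapping of a Docker Hub push webhook onto a CloudEvents
# 1.0 envelope; attribute values invented for illustration only.
event = {
    "specversion": "1.0",                               # required
    "id": str(uuid.uuid4()),                            # required, unique per event
    "source": "https://hub.docker.com/r/example/app",   # required
    "type": "com.docker.hub.image.pushed",              # required
    "datacontenttype": "application/json",              # optional
    "data": {"repository": "example/app", "tag": "v1.2.3"},
}

# A broker could dispatch workflows by matching on event["type"].
print(json.dumps(event, indent=2))
```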


I'm writing a book on Knative and have some material in there about CloudEvents. It's in the MEAP program, and the relevant chapter went up yesterday, as it happens: https://livebook.manning.com/book/knative-in-action/chapter-...


Thank you! I'll take a look through this! Would it be okay if I sent you an email in the next week or two for follow-up questions?


Please do.

