Hacker News new | past | comments | ask | show | jobs | submit | shred45's comments login

I built an OAuth proxy (only Auth0 currently works) hosted on Cloudflare Workers. I'm a big fan of the self-hosted OAuth2 Proxy [1], but some projects don't lend themselves to hosting a container; sometimes you just want to set up a simple app on Heroku, Fly, Workers, etc. and have an auth proxy sit in front of it.

My solution also manages SSL via Cloudflare and integrates with Stripe for simple fixed-price subscription billing models. The idea here is to be able to iterate on product ideas quickly without spending a day each time figuring out authentication and billing.

I did set up a marketing site at the time so that others could use it, but I don't have any users, and I'm happy to maintain it just for my own projects (half a dozen now).

It took me 2-3 weeks to make, so on net I have probably not saved much time, but it really helps reduce the friction of launching things, which I think is valuable.

[1] - https://github.com/oauth2-proxy/oauth2-proxy


I may be biased because I generally avoid closures anyway (I prefer the certainty about ownership and type signatures that traditional functions give), but I do my best to avoid closures when working with async in Rust. A lot of framework examples use async closures, and I typically convert those to functions as quickly as possible, which can be tricky the first time because of the elided types.
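Setting async aside, the basic conversion is the same for ordinary closures: give the elided parameter and return types explicit names. A minimal sketch (the function names here are illustrative, not from any framework):

```rust
// A closure with elided types: fine for a one-off, but the
// signature is invisible at the call site.
fn closure_version() -> Vec<String> {
    let items = vec![1, 2, 3];
    items.iter().map(|n| format!("item-{}", n)).collect()
}

// The same logic as a named function: the parameter and return
// types are now spelled out where the logic lives.
fn render(n: &i32) -> String {
    format!("item-{}", n)
}

fn function_version() -> Vec<String> {
    let items = vec![1, 2, 3];
    items.iter().map(render).collect()
}
```

The payoff is that the borrow checker errors and type mismatches now point at an explicit signature instead of an inferred one.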


I'm not sure about the subreddit, but from my reading of the forum, there is still quite a lot of work and discussion to be done: implementing said strategy in a tax-efficient manner, directly purchasing bonds rather than using the ETFs, the use of TIPS, alternative asset classes like real estate, portfolio allocation given age, etc.

I've learned that maintaining even the simplest "strategy" can be a lot of work.


I’ll second the charger. I recently bought the Shargeek Storm 2 battery and bought their recommended charger (100w GaN) on a lark. This turned out to be much more useful than the battery itself (although the battery has been great too).

I’m able to charge a MacBook, iPhone, Watch, and AirPods all from one brick at the same time, which means I only have to carry one brick.

I will say, I don’t believe it will charge at 100W on the main port when other ports are in use; 100W is the max across all ports. Still, I’m considering purchasing their new 140W option now to take full advantage of MagSafe.


Very cool site. The salary filter is a little unintuitive: the minimum filter appears to apply to the _lower_ end of the listed range, and to be non-inclusive. For instance, there is a job listed at Netgear with a range of $300k-400k; if I set the filter to $300k, it is not shown. I would probably expect it to show until I set the filter to $401k or more.

Edit: I see now that it is filtering on salary (which is visible if you look at the listing details), but the summary is showing total comp.


What confuses me is that I know for a fact that several of those projects have joined or will be joining the CNCF, so I’m not sure they are in need of a governing body.


As a co-maintainer of kube-rs, this is the first I've heard of this working group. I don't have anything against them or the idea per se, but I'd have appreciated a heads up before being listed in a way that could be read as us endorsing them.


I'd be happy to re-architect the website to make it clear that the site exists to endorse projects and not the other way around. The latter is definitely not the intent, so if it reads that way, we should change it. Please feel free to file an issue: https://github.com/rust-cloud-native/rust-cloud-native.githu...

EDIT: I have added a section explicitly stating that the projects showcased are not officially affiliated unless their owners/maintainers have given explicit consent.


That looks much more reasonable, thanks!


Of course! I do not want us to disrespect maintainers/owners; quite the opposite actually. Please never hesitate to file an issue for this kind of thing.


This is really great; I remember the original post and had made a note to look into it myself but never got the chance.

In the third paragraph of the first section you mention the "Toledo" protocol out of the blue. Is this a result of autocorrect?

Thanks for writing this up!


Terado, Toredo, Terredo, Tedero, Toledo, Tereo, Taredo... it gets misspelt a lot for some reason. (Yes, I've seen all of those in the wild.)

Not that there's any Teredo going on in the article...


  > Not that there's any Teredo going on in the article...
I'm not sure exactly what you mean, but note that using 6to4 with FOU produces packets that are exactly the Teredo protocol. This is why some packet analysis tools (for example Wireshark) are able to recognize the UDP-encapsulated packet as IPv6-in-UDP.


Teredo involves 2001:0::/32, Teredo servers and Teredo relays; connections over Teredo involve doing things like NAT traversal and asking the Teredo server to send a ping on your behalf to the target IP first. Teredo might use IPv6-in-UDP packets but that doesn't mean that every instance of IPv6-in-UDP is Teredo.

Since you're not using 2002::/16 it's not 6to4 either. It's 6rd, except tunnelling 6rd's 6in4 packets over UDP makes it incompatible with that. I was going to suggest "6rd-UDP" but https://tools.ietf.org/html/draft-lee-softwire-6rd-udp-02 exists/existed and it's different, so maybe something else.


Toledo is a large city in the midwestern region of the US, so it's likely that more people have heard of it than they have Teredo.


Not autocorrect, just neurons misfiring -- fixed.


What you say is true: it can be very difficult to submit ad-hoc patches to large projects. In my experience, however, this has been due to brittle, undocumented automated checks and newbie-hostile maintainers who have built a complex contribution process. I'm not convinced that a different version control system would change this. There are also many projects on GitHub that are very pleasant to work with.


I find "distributed systems" to be a huge source of imposter syndrome. Despite having worked almost exclusively with distributed applications for several years now, it is difficult to consider myself experienced. When I'm asked if I've worked with distributed systems, I don't think they are asking me if I've managed a Hadoop cluster. They are interested in building new applications using some of the primitives discussed in this post. All of these links are great, but the fact is that building and operating tools like this is hard. In addition to consensus primitives, your system may need very precise error handling, structured logging, distributed tracing, resource monitoring, schema evolution, etc. In the end, I probably pause for a second too long when answering that question, but I don't think it's because of a lack of experience; quite the opposite!


Do you know what the CAP theorem is, and can you explain it to me like I'm 5? Can you tell me how a SQL DB fits into it, and where something like DynamoDB fits into it?

Congratulations, you are better than 95% of the people I've interviewed who say they are experienced in building distributed systems, including system/solution architects.


Sad, but true. That said... the hiring pool tends to be biased towards people that others have passed on.


Sure, but the OP was talking about their difficulty during interviews; I'm just saying, they're almost assuredly better than the majority of the rest of the hiring pool. The company has an opening they need filled; they can only fill it with people from the hiring pool, not those who aren't in the hiring pool.


I'm realizing I've only worked in distributed systems as well, but I'd never feel comfortable telling potential employers I'm an expert. Being an expert in distributed systems seems almost too broad. At a high level couldn't it be expertise at integration, accessible logging, and configuration?


Distributed systems research has been going on since the '70s, and Unix Neckbeards have probably forgotten more about them than we have learned, so actually I think impostor syndrome is a bit warranted with them.

The actual hard stuff is not even these papers, it's the implementations that are way more complex than some algorithm or architectural pattern. Anyone who says "X is better than Y" is fooling themselves because it's only the implementation context that matters.

The only thing you can say for certain is that reducing the amount of components and complexity in the system often results in better outcomes.


> The only thing you can say for certain is that reducing the amount of components and complexity in the system often results in better outcomes.

No, there are a few other things that you can say for certain:

Watch out for positive-only feedback loops; you absolutely need negative feedback as well (or instead). E.g. exponential back-off.

Sometimes, you just need a decentralized solution, rather than a distributed one, and you don't have to have the same answer at every scale (eg. distributed intra-datacenter, decentralized inter-datacenter, or vice-versa).

Loose coupling is your friend.

Sure, add an extra layer of indirection, but you probably need to pay more attention to cache invalidation than you think.

Throughput probably matters more than latency.

Reducing the size/number of writes will probably help more than trying to speed them up.

Multi-tenancy is a PITA for systems in general, and distributed ones are no exception (aside: there is probably a huge business for multi-tenancy-as-a-service, if anyone manages to solve it in a general-purpose way), but a series of per-customer single-tenant deployments may be worse, especially if they are all on different versions of the code. Here be dragons.

Don't overthink it. Start with a naive implementation and go from there (see loose coupling above).
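The exponential back-off point above can be made concrete. A minimal cap-and-double helper (a sketch only; production code would usually add random jitter so that clients don't retry in lockstep):

```rust
use std::time::Duration;

// Exponential back-off with a cap: the delay doubles with each
// attempt but never exceeds `max`. checked_mul guards against
// overflow on very high attempt counts.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    let doubled = base
        .checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(max);
    doubled.min(max)
}
```

With a 100 ms base and a 10 s cap, attempt 0 waits 100 ms, attempt 3 waits 800 ms, and everything past attempt 6 or so is pinned at the cap.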


Maybe there are some other things you can say for certain. But as to some of your points:

> Watch out for positive-only feedback loops, you absolutely need negative feedback as well - or only. Eg. exponential back-off.

Agreed it may need negative feedback, but I'm not sure about always.

If your service has a latency SLA, exponential back-off might kill your SLA (depending on wording and where the back-off is). The fix is to soft-reject requests (RST rather than dropping packets) when you can't meet the demand. This change may allow you to meet your SLA if it's written to prioritize low latency over service unavailability.

This is its own negative feedback loop, but switch from sending RSTs to silently dropping packets and you no longer have the feedback.

> Sometimes, you just need a decentralized solution, rather than a distributed one

Agreed

> Loose coupling is your friend.

Until it isn't? :)

> add an extra layer of indirection, but you probably need to pay more attention to cache invalidation

Fixes for additional layers tend to increase system complexity compared to fixes for fewer layers.

> Throughput probably matters more than latency.

Until it doesn't :)

> Reducing the size/number of writes will probably help more than trying to speed them up.

Depending on 20 different things... You really have to account for all the system's limits (and business use cases) and find the solution that matches the implementation needs.

> there is probably a huge business for multi-tenancy-as-a-service

Sure, it's called EKS :-) Just build more clusters... Don't worry, we'll bill you...

> Don't overthink it

Yes and no: yes, in that there will always be unknowns; but no, in that improvements in communication will often provide better solutions without extra work. Think smarter, not harder!


Every maxim has caveats and exceptions. Including this one.


Yes. The area is fairly broad. In my opinion, the question to ask is "Do you have the distributed systems mindset?", not "Are you an expert on distributed systems?"


It is definitely a broad term, and I think that disciplined implementation of the things that you mentioned is the real key. It just isn't as exciting to talk about.


I don't trust anyone who comes back with quick answers without any hesitation or pause. I remember a lecture from a while ago (I can't remember who gave it, but it was someone well known) where they asked people to implement bubble sort. The results that came back ranged from something like 20 lines to 2000 lines, and every single one had bugs. I don't know how anyone who works in this industry for any length of time doesn't follow up every answer with, "...but there's probably some problem that I'm not thinking of at the moment".
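For reference, here is one version of that exercise with the usual pitfalls handled (a sketch, not the lecture's code; the tricky parts are the shrinking inner bound and the empty-input case):

```rust
// Bubble sort: repeatedly swap adjacent out-of-order pairs.
// After pass i, the largest i+1 elements are in place, so the
// inner bound shrinks; if a pass makes no swaps, we're done.
fn bubble_sort(v: &mut [i32]) {
    let n = v.len();
    for i in 0..n {
        let mut swapped = false;
        // saturating_sub keeps the bound valid for n == 0.
        for j in 0..n.saturating_sub(1 + i) {
            if v[j] > v[j + 1] {
                v.swap(j, j + 1);
                swapped = true;
            }
        }
        if !swapped {
            break; // early exit: already sorted
        }
    }
}
```

Even in ~20 lines there are three or four places to get an off-by-one wrong, which is rather the lecturer's point.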


I feel interviews are to blame. The interview is a place where pausing and hedging is often not OK, and that signals to folks that quick, unqualified answers are expected day to day.


I wonder if this source of imposter syndrome applies to any field where the job roles and success criteria are not clearly defined and vary from company to company (ie data analyst, product owner, distributed systems developer).


Explain concurrency like I am five and give a concurrency example problem and solution.
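One stock answer to that prompt, sketched here (a hypothetical example, not from the thread): several workers adding marbles to the same jar. If they all reach in at once, counts get lost; making them take turns (a mutex) fixes it.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Example problem: `threads` workers each perform `increments`
// increments on a shared counter. The unsynchronized version of
// this races on the read-modify-write; wrapping the counter in a
// Mutex makes each increment atomic, so no updates are lost.
fn parallel_count(threads: usize, increments: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```

The "like I'm five" framing: only one hand in the jar at a time, and everyone waits their turn.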


Distributed systems, in my experience, most often fall prey to "not invented here" syndrome.


great time to bring up the Dunning-Kruger phenomenon in the interview lol


hahaha

for the uninitiated,

The Dunning–Kruger effect is a cognitive bias in which people with low ability at a task overestimate their ability.


I'm not sure how the broader community does it, but the Rust developers that I know (myself included) will typically get the code working and compiling with panics and unwraps (especially when working with a new API or just experimenting) and then do a second pass to "remove panics". This typically involves reviewing your code thoroughly, finding each panic / unwrap / expect, and using these to identify how your functions can fail. This is a good time to create error types for your API and note these cases in your documentation. You then convert functions that can fail to return a Result, and convert all of the unwraps etc. to ?. This can be fairly mechanical and really gives insight into the ways your code can fail and its edge cases, but it does take time.
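A tiny example of that second pass (the function names are hypothetical):

```rust
use std::num::ParseIntError;

// First pass: get it working with unwrap; any bad input panics.
fn sum_csv_unwrap(s: &str) -> i64 {
    s.split(',').map(|f| f.trim().parse::<i64>().unwrap()).sum()
}

// Second pass: each unwrap becomes a `?`, the signature becomes
// a Result, and the failure mode is now part of the API.
fn sum_csv(s: &str) -> Result<i64, ParseIntError> {
    let mut total = 0;
    for field in s.split(',') {
        total += field.trim().parse::<i64>()?;
    }
    Ok(total)
}
```

The mechanical part is exactly this: unwrap becomes ?, the return type becomes Result, and the caller now has to decide what a parse failure means.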

As far as loops vs maps, I've found that your use case typically guides this. I think this is the same with other languages like Python. You aren't going to want to write a ton of complex logic in deeply nested maps and filters. But if you just need to do something more simple it might make more sense to do it in a map one-liner.
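A rough illustration of where that line tends to fall (hypothetical functions, just to show the shape):

```rust
// Simple transformation: a map one-liner reads fine.
fn doubles(v: &[i32]) -> Vec<i32> {
    v.iter().map(|x| x * 2).collect()
}

// Branching logic: an explicit loop is usually clearer than
// the same conditions buried in nested map/filter chains.
fn classify(v: &[i32]) -> Vec<String> {
    let mut out = Vec::new();
    for &x in v {
        if x % 2 == 0 {
            out.push(format!("{} is even", x));
        } else if x > 10 {
            out.push(format!("{} is large and odd", x));
        } else {
            out.push(format!("{} is small and odd", x));
        }
    }
    out
}
```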

Edit: The presentation linked to in the other reply appears to confirm this process of starting with dirty compiling code and then refactoring out the possible failure modes. It covers a lot more things like using match and handling errors from other libraries.

