rekwah's comments

Congrats on the launch! I can definitely understand the pain point; frequent plan/pricing iteration in the early days always leaves a pile of "grandfathered entitlements" that get carried around.

Two questions.

1) Entitlements seem to permeate systems in a few ways (pricing pages & billing systems, as you've called out), but also feature flag systems like LaunchDarkly (my plan offers access to the beta feature channel) and authorization systems (RBAC, FGA, etc.). Do you see your SDK replacing those systems, or is Planship more of an integrator that helps keep them synchronized?

2) I couldn't tell from glancing through your SDK docs (might have missed it), but do you provide any audit/temporal history? If I store user events in a data warehouse (timestamp, customer_id, action_performed), can I determine a customer's plan as of that historical timestamp, or only their current plan?


Thanks! Great questions.

1. Today, we coexist with entitlement solutions (feature flags, auth systems, etc.) by either working alongside them or feeding into entitlement aggregators. Basically, we handle the pricing-related data, logic, and aggregation that eventually reduces down to flags (or numeric values, lists of items, or other value types). We offer SDKs to make these pricing entitlements easily accessible within a product marketing page, app, etc., but our API can just as easily be used to integrate with other systems.

2. We store complete subscription renewal history, but the API for accessing it isn’t public yet. Audit trails, both for customer behavior like you mentioned and for admin tasks (e.g. entitlement value X was added to plan Y), will be available via our API and in our console.


I started looking into this, but DeleteObject doesn't support these conditional headers on general purpose buckets; only directory buckets (S3 Express One Zone).
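
For anyone curious, a rough boto3 sketch of what the conditional delete looks like on a directory bucket (assuming a recent boto3 that exposes IfMatch on delete_object; the bucket name here is made up):

    import boto3

    s3 = boto3.client("s3")

    # Conditional deletes only work on directory buckets (S3 Express One Zone);
    # on general purpose buckets the If-Match header isn't supported for deletes.
    bucket, key = "my-bucket--usw2-az1--x-s3", "state.json"

    head = s3.head_object(Bucket=bucket, Key=key)
    s3.delete_object(
        Bucket=bucket,
        Key=key,
        IfMatch=head["ETag"],  # only delete if the object hasn't changed since we read it
    )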


> just put it there, it might be useful later

> Also note that we have never mentioned anything about cardinality. Because it doesn’t matter - any field can be of any cardinality. Scuba works with raw events and doesn’t pre-aggregate anything, and so cardinality is not an issue.

This is how we end up with very large, very expensive data swamps.


That depends on the sampling rate, no? I would much rather have a rich log record sampled at 1% than more records that don't contain enough info to debug.


It is a tragedy of the current generation of observability systems that they have inculcated the notion that telemetry data should be sampled. Absolute nonsense.


The people feeling the pain of (and paying for) the expensive data swamp are often not the same people who are yolo'ing the sample rate to 100% in their apps, because why wouldn't you want to store every event?

Put another way, you're in charge of a large telemetry event sink. How do you incentivise the correct sampling behaviour by your users?


Don't let the user pick the sampling rate. In Honeycomb land this is called the EMA Dynamic Sampler.

https://docs.honeycomb.io/manage-data-volume/refinery/sampli...
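
Roughly, the idea is to keep an exponential moving average of how often each key (route, status code, customer, etc.) shows up, then sample noisy keys aggressively while keeping rare keys at 1:1. A toy sketch of that shape (not Refinery's actual algorithm):

    import random
    from collections import defaultdict

    class EmaDynamicSampler:
        def __init__(self, target_per_interval=100, alpha=0.5):
            self.target = target_per_interval  # events we're happy to keep per key
            self.alpha = alpha                 # EMA smoothing factor
            self.ema = defaultdict(float)      # smoothed events/interval per key
            self.current = defaultdict(int)    # raw counts this interval

        def end_interval(self):
            # Fold this interval's counts into the moving average.
            for key in set(self.ema) | set(self.current):
                self.ema[key] = (self.alpha * self.current[key]
                                 + (1 - self.alpha) * self.ema[key])
            self.current.clear()

        def should_keep(self, key):
            self.current[key] += 1
            # Keys below the target are always kept; noisier keys are kept
            # with probability target / ema, i.e. sampled down proportionally.
            rate = max(self.ema[key] / self.target, 1.0)
            return random.random() < 1.0 / rate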


You should never need to sample telemetry data.


Sampling metrics, sure, but sampling logs? When an end-to-end transaction for a very important task breaks, do I still get *some* breadcrumbs to debug it?


I have used that approach before with Sentry. It was a non-issue. It depends on the nature of the project, of course; we had a system that was running every second, so when it failed it generated a lot of data.


I agree. Sampling logs sounds dangerous. Obviously every system is different.

At least in GCP you can apply an exclusion filter to prevent ingestion, and set different retention periods on log buckets. This can help control costs without missing important entries.


Sampling can be smart, e.g. based on some field all events have (can be called traceId, haha).
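
i.e. hash the traceId and make the keep/drop decision a pure function of it, so every service in a kept trace makes the same call. A toy sketch:

    import hashlib

    def keep_trace(trace_id: str, sample_rate: int = 100) -> bool:
        # Keep 1-in-sample_rate traces; the decision depends only on the id,
        # so all events sharing a traceId are either all kept or all dropped.
        digest = hashlib.sha256(trace_id.encode()).digest()
        return int.from_bytes(digest[:8], "big") % sample_rate == 0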


Postgres wire format is indirectly getting there. Plenty of tools use that with wildly different storage engines on the other end.

A clean room implementation would likely yield different results but there appears to be some appetite for a solution.


Don't leave us hanging. Did you get a discount?! ;)


"1Password Unlocks $620M Round, Reaches $6.8B Valuation" would be my guess.


Curious if cuelang just ended up being too much of a hurdle for onboarding. I like it and have used it quite a bit but there's something about the syntax that makes it impenetrable for many.


There's some of that. CUE is incredibly powerful, but it can be polarizing. The fundamental problem, though, is that developers don't want to learn a new language to write CI/CD pipelines: they want to use the language they already know and love.

So, no matter what language we had chosen for our first SDK, we would have eventually hit the same problem. The only way to truly solve the "CI/CD as code" problem for everyone is to have a common engine and API that can be programmed with (almost) any language.


In my case, I simply didn't like it (CUE). I'm much more optimistic about Nickel at this point.


If a selling point is running from "one file", that's quite difficult to do in Python. There are things like PyInstaller, but you end up shipping the entire interpreter in the bundle.


The pex tool lets you build single-file Python executables, which are well suited for server deployments.


How big is a python3 runtime when pex has packaged it up? And do you happen to know what the output binary is linked to?


As I recall, pex doesn't package up the runtime. It essentially packages up your venv, code, and resources and runs them under an installed runtime. It makes use of a relatively little-known bit of Python functionality, which is that CPython can execute a zip archive directly as if it were a script (it looks for a __main__.py inside).
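
The stdlib's zipapp module is the easiest way to see that trick in action (pex does a lot more on top of this; this is just the underlying mechanism):

    import zipapp

    # Given a directory "myapp/" containing __main__.py (plus any vendored
    # dependencies), build a zip archive that CPython can execute directly.
    zipapp.create_archive(
        "myapp",
        target="myapp.pyz",
        interpreter="/usr/bin/env python3",  # written as the shebang line
    )
    # Then run it with: ./myapp.pyz   (or: python3 myapp.pyz)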


Oh wow, thanks, I didn't know about this. In that case it would be super simple for me to wrap up a FastAPI server with SQLite, or even some in-memory database that runs elsewhere, and deliver it as a single file!


Yeah, this kind of thing was common when I worked at Twitter (where, AFAIK, pex/pants were developed) on an infrastructure team. It's a cool tool that few outside of Twitter seem to be aware of.


You could also just use Docker to wrap anything


Reminds me of Zach Holman's post "Double Shipping".

https://zachholman.com/posts/double-shipping


I haven't seen this before, this is great! Wow:

> The result? The dozens of hours of preparation I put into the talk for the 200 people in the room ends up getting viewed by hundreds of thousands of people online.


As the author of a popular ULID implementation in Python [1], I can say the spec has no stewardship anymore. The specification repo [2] has plenty of open issues and no real guidance or communication beyond language implementation authors discussing corner cases and gaps in the spec. The monotonic functionality is ambiguous (at best), doesn't consider distributed id generation, and is implemented differently per language [3].

Functionally, UUIDv7 might be the _same_, but the hope would be for a more rigid specification for interoperability.

[1]: https://github.com/ahawker/ulid

[2]: https://github.com/ulid/spec

[3]: https://github.com/ulid/spec/issues/11


I've been using ULIDs in Python for about a year now and so far have been super happy with them, so a) thank you for maintaining this! b) I always felt a bit uneasy about the way the spec describes the monotonicity component. Personally, I just rely on the random aspect, as I am fortunate enough to be able to say that two events in the same millisecond are effectively simultaneous.

At that point, it's basically just UUID7 with Crockford base32 encoding, more or less.
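
Roughly, hand-rolling that looks like the sketch below (stdlib only, ignoring monotonicity entirely; this is just an illustration, not the ulid package's API):

    import os
    import time

    CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

    def new_ulid() -> str:
        # 48-bit millisecond timestamp + 80 random bits = 128 bits, encoded as
        # 26 Crockford base32 characters (lexicographically sortable by time).
        value = (int(time.time() * 1000) << 80) | int.from_bytes(os.urandom(10), "big")
        return "".join(CROCKFORD[(value >> shift) & 0x1F] for shift in range(125, -1, -5))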

IMHO the in-process monotonically increasing feature of ULID is misguided. As you mention, distributed ids are a pain. The instant you start talking about distributed, monotonic counters or orderable events (two threads count as distributed in this case), you need to talk about things like Lamport clocks or other hybrid clock strategies. It's better to reach for the right tools in that case, vs. a half-baked, monotonic-only-in-this-process vague guarantee.


Thank you, I've been using ULID for a while now, and it serves my purposes. But I have long-term support concerns.

UUIDv7 really seems like the sweet spot between pure INT/BIGINT auto-incrementing PKs and sortable, universally unique ids.

