What surprises me is just how much they've missed the mark.
I'm not an extreme user of Cursor. It has become an essential part of my workflow, but I'm also probably in the lower/medium segment of users. I know that a lot of my friends were spending $XXX amounts per month on extra usage with them, while I've never gone beyond 50% of the included premium credit usage.
After their changes I'm getting hit with throttling multiple times a day, which likely means that the same thing happens to almost every Cursor user. So that means one or more of:
- They are jacking up the prices, to squeeze out more profit, so it looks good in the VC game
- They had to jack up the prices, so that they aren't running at a loss anymore (that would be a bad indicator regarding profitability for the whole field)
- They are really incompetent about simulating/estimating the impact of their pricing decisions, which also isn't a good future indicator for their customers
My guess is that it's your second scenario (avoiding running at a loss). In the start-up game, scale/growth is the most important thing and profits really aren't; you want to show later-stage VCs that your idea has traction and that there's a large addressable market.
Whilst profits aren't important, you also can't burn through all your current capital, so if the burn rate gets too high you have to put up prices, which seems likely to be what Cursor is doing.
It looks cool, and I agree with the creators that something like this ought to exist, ideally free from monetization incentives.
From a user standpoint it does seem like quite the undertaking to introduce, though. Most of what I'm looking for from such a system is already covered quite well by SOPS[0], where I'd say I get 80% of the features (that I care about) for 10% of the complexity.
Flagged. Contrary to what the title suggests, this is just a "Show HN" post with 0 content (of which the same author has already submitted ~5 for this tool).
If you want people to take an interest in your product, maybe actually try to articulate and explore the idea from the title instead of plugging your product after the first sentence?
Both SQL and Linq-style queries end up in the same in-memory representation once they hit the query engine/query planner.
Filter pushdown ("correct application of where, and hope it puts the where before the join") is table stakes for a query planner and has been for decades.
And no, Iceberg tables are not special in any way here. Iceberg tables contain data statistics, just like the ones described in the article, to help the optimizer choose the right query plan.
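To make concrete what "putting the where before the join" amounts to for a planner, here is a toy sketch in Python (my own illustration, not any real engine's code): if a predicate only references columns from one side of a join, the filter node can be rewritten to sit below the join on that side.

    from dataclasses import dataclass

    @dataclass
    class Scan:              # leaf: read a table
        table: str
        columns: set

    @dataclass
    class Filter:            # predicate referencing some columns
        columns: set
        child: object

    @dataclass
    class Join:
        left: object
        right: object

    def columns_of(node):
        if isinstance(node, Scan):
            return node.columns
        if isinstance(node, Filter):
            return columns_of(node.child)
        return columns_of(node.left) | columns_of(node.right)

    def push_down(node):
        # Rewrite Filter(Join(l, r)) into Join(Filter(l), r) (or the mirror
        # image) when the predicate only touches columns from one side.
        if isinstance(node, Filter) and isinstance(node.child, Join):
            join = node.child
            if node.columns <= columns_of(join.left):
                return Join(Filter(node.columns, push_down(join.left)),
                            push_down(join.right))
            if node.columns <= columns_of(join.right):
                return Join(push_down(join.left),
                            Filter(node.columns, push_down(join.right)))
        return node

    plan = Filter({"orders.amount"},
                  Join(Scan("orders", {"orders.id", "orders.amount"}),
                       Scan("customers", {"customers.id", "customers.name"})))
    print(push_down(plan))   # the filter now sits directly above the orders scan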
Filter pushdown is surprisingly subtle, when you start getting into issues of non-inner joins and multiple equalities. I had no idea until I had to implement it myself. (It's also not always optimal, although I don't offhand know of any optimizer that understands the relevant edge cases.)
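One quick way to see the non-inner-join subtlety: a predicate on the nullable side of a LEFT JOIN means different things in the WHERE clause and in the join condition, so a planner can't push it down blindly. A small sqlite3 demo (my own toy example, hypothetical table names):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders(id INTEGER, customer TEXT);
        CREATE TABLE payments(order_id INTEGER, amount INTEGER);
        INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
        INSERT INTO payments VALUES (1, 50);
    """)

    # Predicate in WHERE: NULL-extended rows get filtered out, so this
    # effectively behaves like an inner join and bob disappears.
    where_rows = con.execute("""
        SELECT o.customer, p.amount
        FROM orders o LEFT JOIN payments p ON o.id = p.order_id
        WHERE p.amount > 10
    """).fetchall()

    # Same predicate "pushed" into the join condition: bob is kept with a
    # NULL amount, which is a different answer.
    pushed_rows = con.execute("""
        SELECT o.customer, p.amount
        FROM orders o LEFT JOIN payments p ON o.id = p.order_id AND p.amount > 10
    """).fetchall()

    print(where_rows)   # [('alice', 50)]
    print(pushed_rows)  # [('alice', 50), ('bob', None)]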
One part that I don't understand yet: How does the system ensure "sybil resistance"? (not sure if that's the right term in that context)
By providing attestation of individual attributes combined with "unlinkability", how would even a single verifying party ensure that different attestations don't come from the same identity?
E.g., in the case of age attestation, a single willing dissenting identity could set up a system to mint attestations for anyone without it being traceable back to them, right? Similar to how a single of-age person could purchase beer for all their underage friends (and without any fear of repercussions).
Great question. The current thinking, at least in high level-of-assurance situations, is this. The identity document is only usable in cooperation with a hardware security element. The relying party picks a random nonce and sends it to the device. The device signs the nonce using the SE, and either sends the signature back to the relying party (in the non-ZKP case), or produces a ZKP that the signature is correct. The SE requires some kind of biometric authentication to work, e.g. fingerprint. So you cannot set up a bot that mints attestations. (All this has nothing to do with ZKP and would work the same way without ZKP.)
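For illustration, here is a minimal sketch of that non-ZKP challenge-response flow, with a generic Ed25519 key pair standing in for the key held by the secure element (my own simplification, not the actual library or credential format):

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for the key pair provisioned inside the secure element.
    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()   # attested for / known to the relying party

    # Relying party: pick a fresh random nonce for this session.
    nonce = os.urandom(32)

    # Device: sign the nonce (in reality this happens inside the SE after
    # biometric authentication, so the private key never leaves the hardware).
    signature = device_key.sign(nonce)

    # Relying party: verify; raises InvalidSignature if the response is forged
    # or replayed against a different nonce.
    public_key.verify(signature, nonce)
    print("nonce signature verified")

In the ZKP case, the device would return a proof that such a signature exists instead of the signature itself, so the verifier learns nothing that links sessions together.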
In general there is a tradeoff between security and privacy, and different use cases will need to choose where they want to be on this spectrum. Our ZKP library at least makes the privacy end possible.
That seems a bit like a game of whack-a-mole where as long as the forging side is willing to go further and further into out-of-hardware emulation (e.g. prosthetic finger on a robot hand to trick fingerprint scanners), they are bound to win. Biometrics don't feel like they hold up much if you can have collusion without fear of accountability.
> Our ZKP library at least makes the privacy end possible.
Yes, that's also one of the main things that makes me excited about it. I've been following the space for quite some time now, and I'm happy that it's becoming more tractable for standard cryptographic primitives and thus a lot more use cases.
Thanks for your contributions to the space and being so responsive in this thread!
For walls there is also the GOEWS (Greatly Over Engineered Wall System) - https://goews.xyz
Personally, though, I've also been a fan of IKEA Skadis boards, as they're quite easy to get up and running as a baseplate, and there are already a lot of models for them out there.
The example you bring up is for a single one-off extension. Yeah, for that case it doesn't make a lot of sense.
However, for the initial setup of the system (e.g. filling up multiple drawers with baseplates and basic bins, as you will see in many videos online), it would definitely jump-start the process a lot, whereas you'd otherwise spend weeks printing everything. Additionally, if you go for the fancier baseplates/bins that include magnets, you'll also spend quite a bit of time on assembly and need external hardware anyway.
I personally didn't think it was a big deal as for me adopting the system incrementally over time worked quite well, but I think there definitely is a niche of people (and possibly businesses) that would like to adopt Gridfinity for its other benefits and appreciate faster initial setup time.
For anyone looking to get into these storage systems, I can also highly recommend the "Hands on Katie" YouTube channel: https://www.youtube.com/@handsonkatie - There are a few videos that go into different storage systems and how to combine them to cover different storage needs and vertical/horizontal surfaces.
Her Discord is also quite active with people interested in the space, and Underware (an under-desk cable management system), Neogrid and Deskware are all storage systems that have come out of her community.
There is a plethora of mistakes one can make when implementing AuthN/AuthZ, and many of them will almost immediately lead either to a direct leak of PII or to the start of a chain of exploits.
Storing password hashes in an inappropriate manner (see the sketch below) -> BOOM, all your users' passwords are crackable and can be reused on other websites
Not validating a nonce correctly -> BOOM, your users' auth tokens can be re-used/hijacked
Not validating session timestamps correctly -> BOOM, outdated tokens can be used to get at the user's PII
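For that first failure mode, the "appropriate manner" boils down to a slow, salted, one-way KDF plus a constant-time comparison. A minimal sketch using the standard library's scrypt (bcrypt/argon2 are equally common choices; the parameters here are illustrative, not a recommendation):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)                        # unique salt per user
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1)   # deliberately expensive KDF
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("hunter2", salt, digest)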
I'm not criticizing BetterAuth here, but the idea that rolling your own auth is easy.
BetterAuth is likely an improvement over the status quo for many companies that have already decided to roll their own auth, as it at least provides pre-made blocks of functionality that are hopefully battle-hardened, rather than building completely from scratch.
If you're just a developer who works on CRUD apps all day or never touches a backend, then yeah, you probably don't have the skills, but auth is a solved problem and you can learn to do it right. A team of engineers can definitely put together an auth system.
It's an improvement if their own approach would be worse than "get a single self-taught guy to roll something out". If it's roughly the same, it isn't much of an improvement.
Counterexample: storing the bcrypt hash by appending it to a CSV file containing the usernames and hashes of all users, then having a login process where that CSV file is downloaded to the client and the password is verified locally against it using client-side JavaScript, would probably be very bad.
The cryptography part is fine, but the storage and the auth process aren't.
You would like to think that no one would write their app that way, but plenty of slightly less egregious things happen in practice, and vibe coding probably introduces all sorts of new silliness.
> but by being the same interface for all APIs it reduces the burden on the LLM
It doesn't reduce the burden for the LLM, as the LLM isn't the piece of software directly interfacing with the MCP server. It reduces the burden for people building clients that call LLMs and have to convert external interfaces into tool calls, etc.
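Roughly, that client-side work is translating whatever tool listing it gets (from an MCP server or any other interface) into the function-calling schema the model API expects. A hypothetical sketch, assuming an MCP-style listing with name/description/inputSchema fields and an OpenAI-style function schema on the other end:

    def mcp_tools_to_llm_schema(mcp_tools: list[dict]) -> list[dict]:
        # The model only ever sees this flat list of schemas in the request;
        # the client does the translation, regardless of where the tools came from.
        return [
            {
                "type": "function",
                "function": {
                    "name": t["name"],
                    "description": t.get("description", ""),
                    "parameters": t.get("inputSchema", {"type": "object"}),
                },
            }
            for t in mcp_tools
        ]

So MCP standardizes the client's translation step, not the model's view of the tools.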