Hacker News | iamrobertismo's comments

I notice the biggest issue with these types of organizational visions is that if you are not in fairly high leadership, it doesn't go very far. You really need buy-in and nearly everyone has an opinion on this type of stuff, so you end up competing on "the best way to not waste people's time."

Luckily, if you lead a team, something like a standup is basically fully under your control. But I think the bigger issue is that orgs don't respect team leadership very much and end up stepping over systems like this for a variety of really uncoordinated reasons.


Perfect example of "Why the mids beefin?"

I can't be the only one tired of the consistent relitigation of OpenAI's corporate structure and funding, because like a lot of things with AI, it generally doesn't matter who you think is legally correct. We are in an environment in which none of this is actually being scrutinized on its legal merits, but rather on personal and geopolitical ones.

But, generally I assume the idea is about building a case for when the dust settles, rather than looking for something to happen now.


Not clear what you are pitching. If you don't control the infrastructure or have a major contract, how exactly are you lowering or stabilizing costs? Especially if you are not chasing the newest model, token economics is essentially a commodity at this point. Commodity pricing is not an engineering problem, it is a financing problem.

That’s fair, and I probably didn’t explain it clearly. We’re building an AI API as a service platform aimed at early developers and small teams who want to integrate AI without constantly thinking about tokens at all.

I agree that token economics are basically a commodity today. The problem we’re trying to address isn’t beating the market on raw token prices, but removing the mental and financial overhead of having to model usage, estimate burn, and worry about runaway costs while experimenting or shipping early features. In that sense it’s absolutely an engineering and finance problem combined, and we’re intentionally tackling it at the pricing and API layer rather than pretending the underlying models are unique.


Would you just be... subsidizing low-volume users? I am saying this isn't a new problem in the grand scheme of things. Hopefully I am not being too negative; do you have a site or something to learn more? It's not clear how you can have better token economics to provide me or someone else better token economics, rather than just burning more money lol.

Totally fair question, and you’re not being negative.

We’re not claiming better token economics in the sense of magically cheaper tokens, and we’re not just burning money to subsidize usage indefinitely. You’re right that this isn’t a new problem.

What we’re building is an AI API platform aimed at early developers and small teams who want to integrate AI without constantly reasoning about token math while they’re still experimenting or shipping early features. The value we’re trying to provide is predictability and simplicity, not beating the market on raw token prices. Some amount of cross-subsidy at low volumes is intentional and bounded, because lowering that early friction is the point.

If you want to see what we mean, the site is here: https://oxlo.ai. Happy to answer questions or go deeper on how we're thinking about this.


Oh you're arbing! I see now. Makes sense, seems like it could be useful if you have a rock solid DX.

Thank you!! We are definitely fully focused on developer experience. Would love some feedback if it looks interesting.

I just use Thunderbird or Emacs for RSS; I do not have a sophisticated setup. I live in a big city, so I stay up to date by continually engaging with friends and colleagues in person. One person can only see so much; twelve can see a lot more.

Is this just a one off or do you plan on developing this further?

This is interesting. I am guessing the use case for IP address certs is so your ephemeral services can do TLS communication without also depending on provisioning a record on the name server, for something you might be starting hundreds or thousands of that will only last an hour or a day.
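As a sketch of what such a certificate looks like, here's a self-signed stand-in with an IP-address SAN (192.0.2.10 is a documentation address; a real deployment would get the cert from a CA like Let's Encrypt rather than self-signing):

```shell
# Create a throwaway key + self-signed cert whose only SAN is an IP address,
# no DNS name involved anywhere
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout ip.key -out ip.crt -days 1 -subj "/CN=ip-cert-demo" \
  -addext "subjectAltName=IP:192.0.2.10"

# Inspect the SAN: TLS clients match the IP they connected to against this
# field, so no DNS record is needed for verification
openssl x509 -in ip.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The second command should show `IP Address:192.0.2.10` as the certificate's only subject alternative name.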

One thing this can be useful for is encrypted client hello (ECH), the way TLS/HTTPS can be used without disclosing the server name to any listening devices (standard SNI names are transmitted in plaintext).

To use it, the outer connection needs a valid certificate for some hostname, and that hostname does get broadcast in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.

For smaller sites, often hosting no more than one or two domains, there is hardly a non-distinct hostname available.

With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.


That doesn't work, as neither SNI nor the server_name field of the ECHConfig are allowed to contain IP addresses: https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.html#...

Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns


I don't really see the value in ECH for self-hosted sites regardless. It works for Cloudflare and similar because they have millions of unrelated domains behind their IP addresses, so connecting to their IPs reveals essentially nothing, but if your IP is only used for a handful of related things then it's pretty obvious what's going on even if the SNI is obscured.

As far as I understand, you cannot use an IP address for the outer certificate, per https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.txt

> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
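That restriction could be sketched as a client-side check (hypothetical helper; note the draft's WHATWG-IPV4 parsing rules are broader than Python's `ipaddress`, which only accepts canonical forms):

```python
import ipaddress

def acceptable_ech_public_name(name: str) -> bool:
    """Reject ECH public_name values that parse as IP address literals,
    per the draft's rule that the public name is a DNS-based identity.
    Simplified sketch: a real client must also reject WHATWG-style
    IPv4 forms (e.g. "0x7f.1") that ipaddress does not parse."""
    try:
        ipaddress.ip_address(name)  # parses IPv4 and IPv6 literals
        return False                # IP literal -> MUST reject
    except ValueError:
        return True                 # not an IP literal -> usable as DNS name

print(acceptable_ech_public_name("example.com"))  # True
print(acceptable_ech_public_name("192.0.2.1"))    # False
```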


The July announcement for IP address certs listed a handful of potential use cases: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...

Thanks! This is helpful to read.

No dependency on a registrar sounds nice. More anonymous.

> No dependency on a registrar sounds nice.

Actually, the main benefit is no dependency on DNS (both direct and root).

IP is a simple primitive, i.e. "is it routable or not?"


The popular HTTP validation method has the same drawback whether using DNS or IP certificates, right? Namely, if you can compromise routes to hijack traffic, you can also hijack the validation requests.

Yes, there have been cases where this has happened (https://notes.valdikss.org.ru/jabber.ru-mitm/), but it's really now in the realm of:

1) How to secure routing information: some say RPKI, some argue that's not enough and are experimenting with something like SCION (https://docs.scion.org/en/latest/)

2) Principal-agent problem: jabber.ru's hijack relied on (presumably) Hetzner being compelled to do it by German law enforcement under powers provided by the German Telecommunications Act (TKG)


> some say RPKI

Part of the issue with RPKI is that it's taking time to fully deploy. Not as glacial as IPv6, but slower than it should be.

If there was 100% coverage then RPKI would have a good effect.


IP addresses also are assigned by registrars (ARIN in the US and Canada, for instance).

> IP addresses also are assigned by registrars (ARIN in the US and Canada, for instance).

To be pedantic for a moment, ARIN etc. are registries.

The registrar is your ISP, cloud provider etc.

You can get a PI (Provider Independent) allocation for yourself, usually with the assistance of a sponsoring registrar. Which is a nice compromise way of cutting out the middleman without becoming a registrar yourself.


You can also become a registrar yourself - at least, RIPE allows it. However, fees are significantly higher and it's not clear why you'd want to, unless you were actually providing ISP services to customers (in which case it's mandatory - you're not allowed to use a PI allocation for that)

> and it's not clear why you'd want to

The biggest modern-era reason is direct access to update your RPKI entries.

But this only matters if you are doing stuff that makes direct access worthwhile.

If your setup is mostly "set and forget" then you should just accept the lag associated with needing to open a ticket with your sponsor to update the RPKI.


Arguably neither is particularly secure, but you must have an IP, so only needing to trust one of them seems better.

Yeah actually seems pretty useful to not rely on the name server for something that isn't human facing.

> I am guessing the use case for ip address certs is so your ephemeral services can do TLS communication

There's also this little thing called DNS over TLS and DNS over HTTPS that you might have heard of? ;)


I don't quite understand how this relates?

Currently, when you configure DNS over TLS/HTTPS, you have to set both the IP address and the hostname on the TLS certificate used to secure the service. Getting IP address certs makes the configuration simpler.
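For example, in unbound's DNS-over-TLS forwarder config, the upstream is pinned as IP@port#hostname today so the certificate name can be verified (a sketch; the addresses are illustrative, and whether a given resolver can validate IP SANs is implementation-dependent):

```
forward-zone:
    name: "."
    forward-tls-upstream: yes
    # today: IP address plus the hostname to verify on the certificate
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    # with an IP-address certificate, a resolver could in principle verify
    # the IP SAN directly and drop the extra hostname pin:
    # forward-addr: 192.0.2.53@853
```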

> I don't quite understand how this relates?

Erm? Do I have to spell out that I was pointing out there are more use cases than the "ephemeral services" being guessed at that could take advantage of IP certs?


Maybe you want TLS but getting a proper subdomain for your project requires talking to a bunch of people who move slowly?

Very, very true; never thought about orgs like that. However, I don't think someone should use this as a bandaid. If the idea is that you want a domain associated with a service, then organizationally you probably need systems in place to make that easier.

Ideally, sure. But in some places, what you're proposing is like trying to boil the ocean to make a cup of tea.

VBA et al. succeeded because they enabled workers to move forward on things they would otherwise be blocked on organizationally.

Also, not seeing this kind of thing could be considered a gap in your vision. When outsiders accuse SV of living in a high-tech ivory tower, blind to the realities of more common folk, this is the kind of thing they refer to.


Bruh, I'm not from SV lol. I just don't work at massive orgs.

Yeah, I don't understand the point of the prohibited use section at all, seems like unnecessary fluff.

Yeah, I have never been a fan of the devalue part of Svelte.

Woah, this is so cool. I am literally building an MTA right now! I see we are targeting two very different goals here, but it's good to see more people working on the email problem space.
