Hacker News | da_chicken's comments

So there should be a human operator manually gatekeeping every individual request to connect with another endpoint?

It's a good thing those human operators couldn't listen in to whichever conversation they wanted.


Christ, that would make Google, Dell, Netgear, and Comcast a "covered application store".

This isn't a law. It's a prayer.


There should be public works grants to maintain them, or else a foundation specifically to maintain them funded with donations, grants, etc.

The alternative is another XZ backdoor.


Feels like tragedy of the commons.

Feels more like you don’t understand the concept of the tragedy of the commons.

EDIT: Sorry, I’ve had a shitty day and that wasn’t a helpful comment at all. I should’ve said that as I understand it TOTC primarily relates to finite resources, so I don’t think it applies here. Sorry again for being a dick.


the finite resource here is the unpaid developer time. everyone takes advantage of it until the developer burns out.

Maybe.

Or maybe they took what they know to sell to the black hats.


This is legal, correct?

If you can reasonably know they're criminal? No. If you sell an exploit instead of knowledge of a vulnerability? No. If they pay you with something they stole? No.

But otherwise? Usually, yes.


I would agree that Gemini is not keeping up with Anthropic on coding, but I completely disagree on ChatGPT. It's been months for me since I've gotten anything from OpenAI that felt like it was worth my time. I don't really consider them anymore.

Google is mostly doing what they've always done. They've created a few tools like Gemini and NotebookLM, and they're going to focus more effort on whatever gets the most traffic. Then anything they can't monetize will get cut.


Nah, it's much easier than that.

The total amount of computer data across all of humanity is less than 1 yottabyte. We're expected to reach 1 yottabyte within the next decade, and will probably do so before 2030. That's all data, everywhere, including nation-states.

The birthday paradox says that you'll reach a 50% chance of at least one collision (as a conservative first order approximation) at the square root of the domain size. sqrt(2^256) is 2^128.

Now, a 256 bit identifier takes up 32 bytes of storage. 2^128 * 32 bytes = 10^16 yottabytes. That's 10 quadrillion yottabytes just to store the keys. And it's even odds whether you'll have a collision or not.

And if the 50% number scares them, well, you'll have a 1% chance of a collision at around 0.14 * 2^128 keys. So you don't reach even a 1% chance over the whole life of the system until you're storing on the order of a quadrillion yottabytes of keys.

Because you're never getting anywhere near the square root of the domain size, the chances of any collision occurring are astronomically small.
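For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch using the standard birthday approximation p ≈ 1 - e^(-n²/2N):

```python
from math import exp

DOMAIN = 2 ** 256  # number of distinct 256-bit identifiers

def collision_probability(n):
    """Birthday approximation: p ~= 1 - exp(-n^2 / (2 * DOMAIN))."""
    return 1.0 - exp(-(n * n) / (2.0 * DOMAIN))

# Roughly even odds (~39% by this approximation) once you hold 2^128 keys...
p_at_sqrt = collision_probability(2 ** 128)

# ...but storing 2^128 keys at 32 bytes each takes ~10^16 yottabytes
# (1 yottabyte = 10^24 bytes).
storage_yb = (2 ** 128) * 32 / 1e24
```

Note the "50% at sqrt(N)" rule of thumb is actually ~39% by this formula; either way, the storage cost dwarfs all data humanity has ever produced.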


That's quite expensive. Most systems that need this sort of data will instead implement some form of audit log or audit table. Which is still quite expensive.

At the record level, I've seldom seen more than an add timestamp, an add user id, a last change timestamp, and a last change user id. Even then, it covers any change to the whole row, not every field. It's still relatively expensive.


> Which is still quite expensive.

OTOH, if they managed to do that in an efficient way, they have something really interesting.


Writing to disk is never free.


True, but you do it with blocks that often contain padding. If you can make the padding useful, that's a win.


Yeah, this seemed like a very long way to say, "Our RDBMS has system catalogs," as if it's 1987.

But then, they're also doing JOINs with the USING clause, which seems like one of those things that everybody tries... until they hit one of the several reasons not to use them, and then they go back to the ON clause which is explicit and concrete and works great in all cases.

Personally, I'd like to hear more about the claims made about Snowflake IDs.


> doing JOINs with the USING clause

I'm ashamed to say that despite using SQL from the late 1980s, and as someone that likes reading manuals and text books, I'd never come across USING. Probably a bit late for me now to use it (or not) :-(


I didn't really write USING in anger until around 10 years ago, and I have been around a long time too.

Not all databases support it. But once you start using it (pun) - a lot of naming conventions snap into place.

It has some funky semantics you should be aware of. Consider this:

  CREATE TABLE foo (x INT);

  CREATE TABLE bar (x INT);

  SELECT * FROM foo JOIN bar USING (x);


There is only one `x` in the above `SELECT *` - the automatically disambiguated one. Which is typically what you want.
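The single-column behavior is easy to see from a driver. A minimal sketch of the same example using Python's sqlite3 module (SQLite also supports `JOIN ... USING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (x INT);
    CREATE TABLE bar (x INT);
    INSERT INTO foo VALUES (1);
    INSERT INTO bar VALUES (1);
""")

cur = conn.execute("SELECT * FROM foo JOIN bar USING (x)")
cols = [d[0] for d in cur.description]  # USING coalesces: only one 'x' column
rows = cur.fetchall()
```

With a plain `ON foo.x = bar.x` join, `SELECT *` would return two `x` columns instead of one.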


I've used SQL for around a decade and also never came across it. I'm maintaining SQL code with hundreds if not thousands of basic primary key joins and this could make those queries way more concise. Now I want to know the reasons for not using USING!


There are reasons for not USING.

First, you need to be aware of the implicit disambiguation. When you join with USING, you are introducing a hidden column that represents both sides. This is typically what you want - but it can bite you.

Consider this PostgreSQL example:

  CREATE TABLE foo (x INT);
  INSERT INTO foo VALUES (1);

  CREATE TABLE bar (x FLOAT);
  INSERT INTO bar VALUES (1);

  SELECT pg_typeof(x) FROM foo JOIN bar USING (x);

The type of x is double precision - because x was implicitly upcast, as we can see with EXPLAIN:

  Merge Join  (cost=338.29..931.54 rows=28815 width=4)
    Merge Cond: (bar.x = ((foo.x)::double precision))

Arguably, you should never be joining on keys of different types. It's just bad design. But you don't always get that choice if someone else made the data model for you.

It also means that this actually works:

  CREATE TABLE foo (x INT);
  INSERT INTO foo VALUES (1);

  CREATE TABLE bar (x INT);
  INSERT INTO bar VALUES (1);

  CREATE TABLE baz (x INT);
  INSERT INTO baz VALUES (1);

  SELECT *
  FROM foo
  JOIN bar USING (x)
  JOIN baz USING (x);

Which might not be what you expected :-)

If you are both the data modeller and the query writer - I have not been able to come up with a reason for not USING.


Thanks for the reply. The use case I have in mind is joining onto an INT primary key using a foreign key column of another table. This alone would remove a massive amount of boilerplate code.
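That case works cleanly as long as the foreign key column shares the primary key's name. A sketch with a hypothetical customers/orders schema (names are mine, just for illustration), again via sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema: the FK column shares the PK's name, which USING requires.
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INT, total REAL);
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# The verbose form: the join column must be qualified to avoid ambiguity.
verbose = conn.execute(
    "SELECT customers.customer_id, name, total "
    "FROM customers JOIN orders ON customers.customer_id = orders.customer_id"
).fetchall()

# The USING form: one coalesced column, no qualifiers needed.
concise = conn.execute(
    "SELECT customer_id, name, total "
    "FROM customers JOIN orders USING (customer_id)"
).fetchall()
```

Both queries return the same rows; the saving compounds quickly across hundreds of joins.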


@da_chicken: You can read more about Snowflake ID in the Wiki page linked in the article.

The short story:

They are a bit like UUIDs in that you can generate them across a distributed system without coordination. Unlike UUIDs, they are only 64 bits.

The first bits of the snowflake ID are structured in such a way that the values end up roughly sequentially ordered on disk. That makes them great for large tables where you need to locate specific values (such as those that store query information).
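For illustration, a sketch of the classic Twitter bit layout (41 bits of milliseconds since a custom epoch, 10 bits of machine id, 12 bits of per-millisecond sequence) - the names and epoch here are my own assumptions, not from the article:

```python
import time

EPOCH_MS = 1288834974657  # Twitter's custom epoch; any fixed epoch works

def snowflake(machine_id, sequence, now_ms=None):
    """Pack a 64-bit ID: [41 bits timestamp][10 bits machine][12 bits sequence]."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    ts = now_ms - EPOCH_MS
    return (ts << 22) | ((machine_id & 0x3FF) << 12) | (sequence & 0xFFF)

# The timestamp sits in the high bits, so later IDs compare (and sort) higher -
# which is what keeps them roughly sequential on disk.
a = snowflake(machine_id=1, sequence=0, now_ms=EPOCH_MS + 1000)
b = snowflake(machine_id=1, sequence=0, now_ms=EPOCH_MS + 2000)
```

Two generators only need distinct machine ids to avoid collisions; no coordination is needed at generation time.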


No, that's like calling Amazon a social media platform.

YouTube is a content delivery platform that has social media features. You can tell because if you shut off all the comments, people still visit the site in droves. But if you shut off the videos and left the comments then nobody would visit the site at all.

Now, it's possible that YouTube doesn't realize that, but I think they're just unwilling to make any changes at all if it doesn't give them any competitive advantages.


YouTube used to have direct messages until 2019.

Reimplemented in Nov 2025 for Ireland and Poland.

