Hacker News | drwiggly's comments

Voting with your wallet does work; it's possible others just don't share your tastes.


It would be fair, but when you go online, everyone (and I mean everyone) shares their distaste for the modern gaming industry and its practices. Yet those practices still bring in the most money to this day. So does that mean people go against their principles? Or is it just another "vocal minority" situation?


There's some evidence that it's a vocal minority. Take a game made by a terrible company with a lot of dark patterns: Call of Duty Black Ops 6 has sold at least 491 thousand units (https://steamdb.info/app/1938090/charts/) (the real number is certainly far higher, but apparently they haven't published sales figures, so that's the best lower bound we get), yet the Reddit posts, comments, and upvotes (or upvotes on YouTube videos) complaining about these terrible practices number far fewer than that.

I suspect that the majority of those who play games would rather these mechanics not exist, but don't feel strongly enough about it to boycott those games. I don't have evidence for this beyond my interactions with personal friends and their "mild apathetic unhappiness" for lack of a better term.

There's also definitely a number of people that are willing to accept some compromise to either play a very well-made game, or one that their friends are playing. I hate Epic Games and its practices, for instance, but I'm willing to play Fortnite with friends if they ask me, and I justify that by telling myself that I'm never going to buy anything with their premium currency.


The publisher can't do that. No one can tell the future.


They could be liable if they shut down the servers and make the purchase unusable before the end of the minimum contracted duration.


This looks to be an in-memory db with a wasm runtime to host domain logic. The hand-wavy part is how they handle scale and clustering. Are we sharding the data ourselves? (At the moment it seems so.)

This is nice and all, but the hard part is replication and consistency in a distributed database. In-memory has its uses, and disk-backed tables have theirs. Normal databases pretty much already do this; it's just that writing domain logic in stored procs is kind of annoying.

I'd imagine embedding sqlite in your binary with in-memory tables is roughly equivalent at the moment. Well, you'd have to write code to publish table updates to clients, so I suppose it has that going for it.
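
As a very rough sketch of that comparison (plain Python with the built-in sqlite3 module; the table and function names are made up for illustration), the embedded in-memory part is easy, and the part you'd have to hand-roll is publishing changed rows to clients:

  import sqlite3

  db = sqlite3.connect(":memory:")  # in-memory database, lives inside your process
  db.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT, score INTEGER)")

  def add_score(player_id: int, delta: int) -> None:
      # "domain logic" living next to the data, as ordinary application code
      db.execute("UPDATE players SET score = score + ? WHERE id = ?", (delta, player_id))
      # ...and here you'd have to push the changed row to subscribed clients yourself

  db.execute("INSERT INTO players VALUES (1, 'alice', 0)")
  add_score(1, 10)
  print(db.execute("SELECT * FROM players").fetchall())  # [(1, 'alice', 10)]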

I've seen some hand wavy docs about clustering but nothing concrete.


From what I'm gathering:

  Alice measures at angle X, gets value V1.
  Calls Bob on the phone: "Okay, I measured angle X."
  Bob measures at angle X, also gets value V1.
  Bob measures at angle Y, gets value V2.
  Bob calls Alice back: "Okay, I measured angle Y."
  Alice measures angle Y, also gets V2.

The constraint here is that nobody can do other measurements while the other party is in the process of measuring. Each party can't know the other party is done until traditional communication has happened.

If each party acted independently they would randomly change the state on the other side and each party would get what appears to be random values.


I think what bends my mind more is the case with 3 actors. Note that my understanding, also, is that it has been shown that changing a detector changes what is detected at the other detector.

    A is sending entangled stuff to B and C.
    B measures and gets a set of angles that tells them what C would be measuring.
    C changes what they are measuring.

The question is, how rapidly does the "spooky" distance change happen? I get that it would not be communication between A and B or C. I similarly get that you could not coordinate between B and C. But, from all of the framings I've seen so far, I don't understand why the change between B and C is not faster than speed of light.

(And just to rapidly get it out there, I fully expect that I'm merely misunderstanding something here.)

Edit: Also, to add, my understanding is that they are not "getting angles" per se, but would be seeing distributions. Which is why you would need more than 1 particle, as it were. So, you would say of the X I have recorded, 30% have been blue, 70% have been green. I suppose the concern is that you have no way of knowing when the "100%" mark is done until after classical communication, such that it is impossible to know what the final distribution you are measuring is? Effectively?


>The question is, how rapidly does the "spooky" distance change happen? I get that it would not be communication between A and B or C. I similarly get that you could not coordinate between B and C. But, from all of the framings I've seen so far, I don't understand why the change between B and C is not faster than speed of light.

It's because measurements at B do not convey any information to C while the measurements are performed, and vice versa. Unless B calls C to inform them of the choice of measurement setting, C will not know the measurement outcome at B's side. This is true even if they know that they share entangled states prior to performing measurements.

> Edit: Also, to add, my understanding is that they are not "getting angles" per se, but would be seeing distributions. Which is why you would need more than 1 particle, as it were. So, you would say of the X I have recorded, 30% have been blue, 70% have been green. I suppose the concern is that you have no way of knowing when the "100%" mark is done until after classical communication, such that it is impossible to know what the final distribution you are measuring is? Effectively?

There has to be post-processing of the data, where they drop the results of rounds where their measurement choices don't match. This is important because of what are called non-commuting measurements: measurements in one setting don't give us any information about the measurement outcome in another setting. So effectively, at each end, they have to record their measurement choice and the corresponding outcomes of that measurement. And when comparing the data, the participants only keep the outcomes of rounds where the measurement choice is the same at both ends.
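
As a tiny sketch of that sifting step (plain Python; the rounds and outcome values are invented purely to illustrate the bookkeeping, not real data): each side records (setting, outcome) per round, and after the classical comparison only rounds with matching settings are kept.

  # Each entry is (measurement setting, outcome) for one round; the data is made up.
  alice = [("X", 0), ("Y", 1), ("X", 1), ("Y", 0)]
  bob   = [("X", 0), ("X", 1), ("X", 1), ("Y", 0)]

  kept = [
      (a_out, b_out)
      for (a_set, a_out), (b_set, b_out) in zip(alice, bob)
      if a_set == b_set  # drop rounds where the two settings differ
  ]
  print(kept)  # only these sifted rounds are expected to show the correlation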


I'm reading that as the distributions that are seen are not stable. It may be that you got 30% this time, but 40% next time. On the exact same setup. (Obviously making up numbers.) Yes, you know that the other side saw something, but that is obviously useless.

The framing I saw was more of a truth table, where B/C have known states they can be in that each lead to a known distribution of outcomes. It was not clear that the known distribution was only an observed distribution.


>I'm reading that as the distributions that are seen are not stable. It may be that you got 30% this time, but 40% next time. On the exact same setup. (Obviously making up numbers.) Yes, you know that the other side saw something, but that is obviously useless.

I'm not sure I'm understanding your percentages statement completely, but when the parties have entangled states, these inconsistencies will match on each side under the assumption that their measurement choice is the same.

> The framing I saw was more of a truth table, where B/C have known states they can be in that each lead to a known distribution of outcomes. It was not clear that the known distribution was only an observed distribution.

If the states are locally known to the parties, they'd still have to perform a statistical run of experiments as before. The choice of measurements would be key in distinguishing a classical from a quantum correlation.


It's the new Delorean.


No need to demean Deloreans like that.


I'm not sure that comparison demeans Deloreans.

Both have a sort-of futuristic look, in a non-conventional way. Both (as cars) appeal to a tiny niche of (non overlapping) users.

Most of the current appeal of a Delorean, indeed probably the only reason it even still exists, is because of BTTF. It's cool in a weird way not a practical way. It's all styling (and doors) - as a car it's, well, not great.

Cybertruck is also really all about the styling. As a truck it's not special. It may turn out to be produced in small numbers. It'll likely be desirable as a collector's piece in the future.

Obviously with styling this distinctive it's not going to appeal to everyone. Most likely think it's ugly.

So comparing it to a Delorean is, I think, fair (although their heritage - one company bombed, the other is changing the world - is obviously not the comparison being made).


I am kind of joking, but not really. I find the comparison offensive to Deloreans because I like Deloreans but really dislike the look of the cybertruck.

Others' differing opinions are completely valid too, it's merely a matter of personal taste, IMHO.


Yeah, I don't think there will be a lot of overlap between Delorean lovers and Cybertruck lovers.

I don't think the comparison is saying they "look the same", but rather "they're both niche".

(Personally, I agree, the Delorean looks great, and the doors man, the doors. The cybertruck is just ugly.)


It will be interesting to see how collector vehicles evolve

There are still Delorean owners and meetups, because 80s cars can be maintained and operated pretty much as they were back then

In 40 years will a cybertruck even have a cloud service to connect to for all the features it is sold for today? Will there even be a compatible cell network for its radios?

I have computers and peripherals way younger than a delorean and they are effectively paperweights because there is nothing to connect them to anymore. Is that the future of these cars if we can’t update them, or point them to new endpoints when Tesla ends maintenance of their cloud?


Deloreans looked super cool. Tesla has never made a cool-looking vehicle, and the Cybertruck is at the very bottom.


>Really for LLMs you just need to have the model put it's output to an internal buffer, read that buffer and make sure it makes sense, then output that to the end user.

Makes sense to what? The LLM doesn't have a goal, other than to spew text that looks like it should be there.


The analogy lies in the fact that, much like evolution through natural selection, deliberate intelligence/ability of organisms to comprehend reality is not the objective, but something else entirely is.

For evolution, it's fitness. For LLMs, it's the next token.

Yet despite that, the ability to reason emerges as a means to an end.


To the terminal or instrumental goal of the statement it is working on.

Question to the LLM: "I have one hundred and eleven eggs in the store and another two hundred and twenty-two are showing up in an hour; how many eggs will I have in total?"

Internal response: "This looks like a math problem that requires addition. The answer is 333. Use a calculator to validate 111 + 222. (Send 111+222, receive 333.) Tool returns 333, validating the previous response."

External response: "The answer is 333."

This chain of logic is internally consistent, hence makes sense.
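
As a loose sketch of that "internal buffer, then validate, then answer" flow (plain Python; the draft text and the calculator helper are made-up stand-ins, there is no real model behind this):

  import re

  def calculator(expr: str) -> int:
      # stand-in "tool": only handles simple "a+b" expressions
      a, b = (int(x) for x in expr.split("+"))
      return a + b

  def answer(question: str) -> str:
      # pretend this draft arrived from the model into an internal buffer
      draft = "This looks like an addition problem. The answer is 333. Verify: 111+222."
      claimed = int(re.search(r"answer is (\d+)", draft).group(1))
      expr = re.search(r"Verify: ([\d+ ]+)", draft).group(1)
      # only surface the draft's answer once the tool result agrees with it
      if calculator(expr) == claimed:
          return f"The answer is {claimed}"
      return "Let me re-check that."

  print(answer("111 eggs now, 222 arriving in an hour; how many in total?"))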


This language is interesting. General usability might be a bit away? Higher RAII is something that would be nice in C++ too.


Every user/client operation is serial.


This is mainly because it's single-threaded and operates on in-memory state. Async IO doesn't apply to in-memory data, so all operations have to be serial. Is this characterization correct? Thanks.


Yeah, but the Dems were in the majority.


  start transaction;
  select id from users where id = ? for update;
  if row_count() < 1 then raise 'no user' end if;
  insert into sub_resource (owner, thing) values (?, ?);
  commit;

??


Do that in most relational dbs in the default isolation level (read committed), and concurrently executing transactions will still be able to delete users underneath you after the select.

If we take postgres as an example, performing the select takes exactly zero row level locks, and makes no guarantees at all about selected data remaining the same after you’ve read it.

edit: my mistake - I missed that the select is for update. Yes, this will take explicit locks and thus protect you from the deletion, but is slower/worse than just using foreign keys, so it won't fundamentally help you.

further edit: let's take an example even in a higher isolation level (repeatable read):

  -- setup
  postgres=# create table user_table(user_id int);
  CREATE TABLE
  postgres=# create table resources_table(resource_id int, user_id int);
  CREATE TABLE
  postgres=# insert into user_table values(1);
  INSERT 0 1

  Tran 1:
  postgres=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  BEGIN
  postgres=# select * from user_table where user_id = 1;
   user_id 
  ---------
         1
  (1 row)

  Tran 2:
  postgres=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  BEGIN
  postgres=# select * from resources_table where user_id = 1;
   resource_id | user_id 
  -------------+---------
  (0 rows)
  postgres=# delete from user_table where user_id = 1;
  DELETE 1
  postgres=# commit;
  COMMIT

  Tran 1:
  postgres=# insert into resources_table values (1,1);
  INSERT 0 1
  postgres=# commit;
  COMMIT

  Data at the end:

  postgres=# select * from resources_table;
   resource_id | user_id 
  -------------+---------
             1 |       1
  (1 row)

  postgres=# select * from user_table;
   user_id 
  ---------
  (0 rows)

You can fix this by using SERIALIZABLE, which will error out in this case.

This stuff is harder than people think, and correctly indexed foreign keys really aren't a performance issue for the vast majority of applications. I strongly recommend just using them until you have a good reason not to.
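
For illustration, here's a minimal sketch of the foreign-key version (using Python's built-in sqlite3 for brevity rather than the Postgres setup above, and a single connection rather than two concurrent transactions, so it only demonstrates the constraint check itself): whichever statement would create the orphaned row is the one that gets rejected.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("PRAGMA foreign_keys = ON")  # sqlite only enforces FKs when this is on
  db.execute("CREATE TABLE user_table (user_id INTEGER PRIMARY KEY)")
  db.execute("""CREATE TABLE resources_table (
                    resource_id INTEGER,
                    user_id INTEGER REFERENCES user_table(user_id))""")
  db.execute("INSERT INTO user_table VALUES (1)")
  db.execute("INSERT INTO resources_table VALUES (1, 1)")

  try:
      db.execute("DELETE FROM user_table WHERE user_id = 1")
  except sqlite3.IntegrityError as err:
      print("delete rejected:", err)  # FOREIGN KEY constraint failed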

