Hacker News | crdrost's comments

That is a legitimate need, but it should not be the default.

Another example of when you need it, is games. I want to store in the game state, that on Day 32 of being in this cave, there is randomly a cache of diamonds if you mine the east wall. If I generate this random number dynamically, then I permit save-scumming behaviors which may distract from the artistic points of my game. So I basically want a random seed and a hash function, hashing (seed, day, activity, location, attempt_number) to see if something is successful.
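A minimal sketch of that seed-plus-hash scheme in Python (the field names, the SHA-256 choice, and the 5% rarity threshold are all my own illustration, not any particular game's implementation):

```python
import hashlib

def event_roll(seed, day, activity, location, attempt_number):
    """Deterministic pseudo-random roll in [0, 1) for a game event.

    The same (seed, day, activity, location, attempt) always yields the
    same number, so reloading a save can never reroll the outcome.
    """
    key = f"{seed}:{day}:{activity}:{location}:{attempt_number}".encode()
    digest = hashlib.sha256(key).digest()
    # take the first 8 bytes as an integer, scale into [0, 1)
    return int.from_bytes(digest[:8], "big") / 2**64

# e.g. the diamond cache appears if the roll clears some rarity threshold
roll = event_roll(seed=12345, day=32, activity="mine",
                  location="cave-east-wall", attempt_number=1)
has_diamonds = roll < 0.05  # 5% chance, but fixed for this save
```

Because the roll is a pure function of the save's seed and the event coordinates, quitting and retrying produces exactly the same cave.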


So hens don't usually have to be force-fed. Some of that color can come from having a diverse source of proteins--like the bugs and insects that pasture-raised hens get access to--but farmers "in the know" will also add paprika and marigold to the usual soy-and-grain supplemental feed, to try to encourage it to come out a bit more.

A few years back I briefly thought that a rich yolk color was a quality signal, until I found that additives could produce that color cheaply. The color comes from dietary carotenoids [1]. Companies like BASF sell carotenoid feed additives that producers can employ to get a yolk color as rich as desired:

https://nutrition.basf.com/global/en/animal-nutrition/our-pr...

[1] https://en.wikipedia.org/wiki/Carotenoid


I wouldn't be surprised if you were correct in your belief and that this became a case of Goodhart's Law when implemented in the egg industry.

At the most extreme end of the other side, Buddhist monks, in addition to not eating meat (or, often, alliums like onions and garlic), also don't generally believe in dinner—they have to eat all their solid food for the day before noon, so you could view this as fasting for half the day every day.

(There are some caveats... At least for the Tibetan monks I knew, morning prayer is early at like 5am and comes with a sort of pita bread and tea, bedtime is closer to 9pm, and during these 9 hours there will be more tea. With a little googling I am able to confirm that some of the "pita" (pao balep) is consumed at the lunch tea, and I think this is after the lunch meal, so it might be 1pm. I think there's none at the evening tea that you'd have around sundown? Also in terms of calories the Tibetan tea is “bulletproof,” consisting of a very long steep for the leaves to extract maximum bitter flavors, that get mixed with a bunch of yak butter and salt. So liquid calories are very much a thing for them.)


I just want to semi-hijack this thread to note that you can actually peek into the future on this issue, by just looking at the present chess community.

For readers who are not among the cognoscenti on the topic: in 1997 supercomputers started playing chess at around the same level as top grandmasters, and some PCs were also able to be competitive (most notably, Fritz beat Deep Blue in 1995 before the Kasparov games, and Fritz was not a supercomputer). From around 2005, if you were interested in chess, you could have an engine on your computer that was more powerful than either you or your opponent. Since about 2010, there's been a decent online scene of people playing chess.

So the chess world is kinda what the GPT world will be, in maybe 30ish years? (It's hard to compare two different technology growths, but this assumes that they've both hit the end of their "exponential increase" sections at around the same time and then have shifted to "incremental improvements" at around the same rate. This is also assuming that in 5-10 years we'll get to the "Deep Blue defeats Kasparov" thing where transformer-based machine learning will be actually better at answering questions than, say, some university professors.)

The first thing is, proving that someone is a person, in general, is small potatoes. Whatever you do to prove that someone is a real person, they might be farming some or all of their thought process out to GPT.

The community that cares about "interacting with real humans" will be more interested in continuous interactions than in "post something and see what answers I get," because long latencies are exactly where GPT can answer your question, and give a better answer anyway. So if you care about real humanity, that's gonna be realtime interaction. The chess version is, "it's much harder to cheat at Rapid or Blitz chess."

The second thing is, privacy and nonprivacy coexist. The people who are at the top of their information-spouting games will deanonymize themselves. Magnus Carlsen just has a profile on chess.com, you can follow his games.

Detection of GPT will look roughly like this: you will be chatting with someone who putatively has a real name and a physics pedigree, and you ask them to answer physics questions, and they appear to have a really vast physics knowledge, but then when you ask them a simple question like "and because the force is larger the accelerations will tend to be larger, right?" they take an unusually long time to say "yep, F = m a, and all that." And that's how you know this person is pasting your questions to a GPT prompt and pasting the answers back at you. This is basically what grandmasters look for when calling out cheating in online chess; on the one hand there's "okay that's just a really risky way to play 4D chess when you have a solid advantage and can just build on it with more normal moves" -- but the chess engine sees 20 moves down the road beyond what any human sees, so it knows that these moves aren't actually risky -- and on the other hand there's "okay there's only one reason you could possibly have played the last Rook move, and it's if the follow up was to take the knight with the bishop, otherwise you're just losing. You foresaw all of this, right?" and yet the "person" is still thinking (because the actual human didn't understand why the computer was making that rook move, and now needs the computer to tell them that the knight has to be taken with the bishop as appropriate follow-up).


> you will be chatting with someone who putatively has a real name and a physics pedigree, and you ask them to answer physics questions, and they appear to have a really vast physics knowledge, but then when you ask them a simple question like "and because the force is larger the accelerations will tend to be larger, right?" they take an unusually long time to say "yep, F = m a, and all that." And that's how you know this person is pasting your questions to a GPT prompt and pasting the answers back at you.

Honestly, even in my area of expertise, if the "abstraction/skill level" or the kind of wording suddenly shifts (in your example: much less scientifically precise wording, more like how a 10-year-old would ask), it often takes me quite some time to adjust (it completely takes me out of my flow).

So your criterion would yield an insane number of false positives on me.


This sort of problem also occurs when you're trying to do CRDTs, which can roughly be described also as "design something that does Git better."

So e.g. to frame this, one approach to a CRDT is to just treat the document as a list of facts, "line 1 is 'foo', line 2 is 'bar'", and each fact has a number of times it has been asserted, and to "merge" you just add together the assertion counts, and then you can detect conflicts when a fact has been asserted more than once or fewer than zero times. So a patch says "change line 2 to 'baz'", this becomes "unassert that line 2 is 'bar', assert that line 2 is 'baz'" and it conflicts with a patch that says "change line 2 to 'quux'" because the fact "line 2 is 'bar'" has an assertion count of -1.
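A toy version of that bookkeeping (the representation of a fact as a (line, text) pair and the helper names are mine, and this ignores ordering and causality entirely): states and patches are maps from facts to signed counts, merging is pointwise addition, and any fact whose count lands outside {0, 1} is a conflict.

```python
from collections import Counter

def merge(state, *patches):
    """Merge patches (fact -> count deltas) into a state (fact -> counts)."""
    merged = Counter(state)
    for patch in patches:
        for fact, delta in patch.items():
            merged[fact] += delta
    # a fact asserted more than once, or un-asserted below zero, is a conflict
    conflicts = [fact for fact, n in merged.items() if n < 0 or n > 1]
    return merged, conflicts

base = {("line 1", "foo"): 1, ("line 2", "bar"): 1}
to_baz = {("line 2", "bar"): -1, ("line 2", "baz"): +1}
to_quux = {("line 2", "bar"): -1, ("line 2", "quux"): +1}

merged, conflicts = merge(base, to_baz, to_quux)
# ("line 2", "bar") has now been un-asserted twice, count -1: conflict
```

Applying either patch alone merges cleanly; applying both drives the "line 2 is 'bar'" fact to -1, which is exactly the conflict described above.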

But anyway, in this context you might want to allow inserting lines, and then you have the list-labeling problem, you don't want the patch to unassert lines 4,5,6 just to insert a new line after line 3. So then an obvious thing is to just use a broader conception of numbers, say "line 3.5 is <X>" when you insert, and then we hide the line numbers from the user anyways, they don't need to know that internally the line numbers of the 7 lines go "1, 2, 3, 3.5, 4, 5, 6".

So then you need a relabeling step because you eventually have some line at 3.198246315 and you want to be able to say "yeah, that's actually line 27, let's have some sanity again in this thing."
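A hypothetical sketch of both pieces, with lines keyed by floats so inserting never renumbers neighbors, plus the relabeling pass back to clean integers (all names here are mine):

```python
def insert_between(doc, lo, hi, text):
    """Insert text between existing keys lo and hi, at the midpoint key."""
    doc[lo + (hi - lo) / 2] = text

def relabel(doc):
    """Reassign clean integer labels 1..n in key order, restoring sanity."""
    return {i + 1: text for i, (_, text) in enumerate(sorted(doc.items()))}

doc = {1.0: "foo", 2.0: "bar", 3.0: "baz", 4.0: "quux"}
insert_between(doc, 3.0, 4.0, "new line")  # lands at key 3.5
clean = relabel(doc)
# {1: 'foo', 2: 'bar', 3: 'baz', 4: 'new line', 5: 'quux'}
```

The user only ever sees the relabeled integers; the fractional keys are internal plumbing.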

This also maybe hints at the fun of adding randomization. Consider that one person might add line 3.5, then add line 3.75, and then remove line 3.5; meanwhile someone else might add a different line 3.5, add a line 3.25, and then remove their line 3.5. Together these patches amount to "assert line 3.25 is A, assert line 3.75 is B", and they would merge without conflict. This means that in general, if two people are messing with the same part of the same document asynchronously, this model is not able to consistently flag a merge failure; sometimes it will instead just randomly order the lines that were added.

We can then just make that a feature rather than a fault: you don't insert at 3.5, which is 3 + (4 - 3) / 2, rather you insert at 3 + (4 - 3) * rand(). And then when two people both try to insert 12 lines between line 3 and 4 independently, when you merge them together, you get 24 lines from both, in their original orders but interleaved randomly, and like that's not the end of the world, it might be legitimately better than just declaring a merge failure and harrumphing at the user.
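A sketch of that randomized variant (seeds and names are mine, chosen only to make the interleaving reproducible): each replica inserts at a random key between its previous insert and the right boundary, so its own lines stay in order, and merging interleaves the two runs without any key collision to flag.

```python
import random

def insert_after_key(doc, lo, hi, text, rng):
    """Insert at a random key in (lo, hi); return the key so the next
    insert can land after it, preserving this replica's local order."""
    key = lo + (hi - lo) * rng.random()
    doc[key] = text
    return key

left, right = {}, {}
rng_a, rng_b = random.Random(1), random.Random(2)
prev_a = prev_b = 3.0
for i in range(3):
    prev_a = insert_after_key(left, prev_a, 4.0, f"A{i}", rng_a)
    prev_b = insert_after_key(right, prev_b, 4.0, f"B{i}", rng_b)

merged = {**left, **right}  # distinct random keys, so nothing conflicts
interleaved = [merged[k] for k in sorted(merged)]
# A0..A2 and B0..B2 each appear in order, shuffled together
```

This is the "interleave rather than harrumph" behavior: both runs survive in their original order, just woven together.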


> This sort of problem also occurs when you're trying to do CRDTs, which can roughly be described also as "design something that does Git better."

Aren't the goals of git and CRDTs different? With git you want the merged result to be semantically correct. With CRDTs you want to achieve convergence (so no merge conflicts); as far as I know, semantically correct convergence (not sure what the correct term is) is not really possible, as it is too difficult to encode for CRDTs. Isn't that why CRDTs are mostly used for multiplayer interactive applications, where these kinds of mismatches are quickly seen?


The technically correct term -- at least in reduction systems -- would be confluence, not convergence.


It's interesting that even this gives me an uncanny valley feeling: something about the fact that the motion of the stalk is "bottom-up" rather than "top-down," and that all of the stalks are trying to move in unison rather than the wind kind of gently cascading over the installation.

I imagine the piece was supposed to be haunting in a completely different way -- so the idea would be that you'd be embedded in a space which is experiencing a "real" wind that you can't yourself feel on your skin, and so you'd feel like you took a step outside of reality. In that sense this would be an exhibit that a video probably can't do justice to? But I think the uncanny valley thing still sells it pretty well.


This is an impressively high sick-burns-to-words ratio, well done!

“I don't need to make an LLC” also pings way too hard here.


You two are in a battle over aesthetics, it is a battle that neither of you can win.

There is something attractive about a new set of work gloves. They are fresh, clean, almost begging you to use them in a project.

There is something attractive about a set of work gloves that have curled to the shape of your hands and have been burnished by sap and sawdust and oils: come on, old friend, let's remake our previous magic.

There's something nice about a fleecy throw, but it is not as cozy as the quilts my wife made for my daughter; defining that coziness will remain out of reach as far as my words go.

No one can be correct, here. It's just what appeals more or less to you.


Honestly this sounds like a knock-on effect of the US's constant erosion of the glue of community. Church attendance down, sport attendance down, theater attendance with friends down, it's all the same.

Social norms can change this -- the Netherlands has a very similar culture to the US, but one thing people asked me while I was doing my M.Sc. there was just, "what is your sport?" ... and I got asked it enough that I eventually got one, and then for a good period of time I managed to completely kick my obesity, until I moved back to the American Midwest.

The introvert/extrovert axis also plays a role in what sort of "sport" is right for you, of course, and many of your sophisticated friends still hit the gym or jog etc. -- those are just sports for introverts in my view.

Sport time is not time that could have been better spent elsewhere. It's like how cleaning the sink isn't time that could have been better spent elsewhere -- if you don't have a clean sink, you'll pay the interest in terms of "ugh what's that smell [...] oh it was the standing water in this bowl" and "crap I don't have a clean glass, hm, I wonder if I can just buy compostable cups on Amazon so that I don't have that problem..." etc. So as an extrovert, I can go once a week to play soccer with friends in a small league, or, just hear me out, I can get lonely and then do what I do when I get lonely, which is pop on Physics Stack Exchange and answer physics questions so that I can feel Of Use. You pay the interest either way.

Chess-time also is no great loss for the world. The top-level world chess community is something we have numbers for -- 17k titled players, 2k grandmasters, 4k international masters beneath that. They are pursuing something that exactly fits the nerdy way that their brain works -- memorize openings out to 20 moves deep, obsessively study and re-study their failed games to understand why the computer thinks they lost and how they might make better mistakes in the future, and for them it HAS to be competitive and they HAVE to have that immediate feedback of trying a new idea in the same narrow niche of ideas that they became a super-expert in, against another top player who can punish their new mistakes.

It's just not a set of transferable world-changing skills. It's like, my brother became single-mindedly obsessed with pool in high school. This persists even though he now runs a small company operating a strip mall. This was just his thing: he loves that there is no upper bound to how much control he can have over the cue and the balls, using the spins of each to control the layout, and precisely planning a course through a 9-ball break and setting himself up for a clean sweep through the game. There was no world in which some "world-changing create-the-future" lifestyle would have felt as much like a glove fitting his hand as this did. And it is no great loss for the world that he found the glove that fits his hand. It's not like the strip mall would have become an American retail empire rivaling Amazon, if only he had spent his nighttime hours working on the mall instead of on his life passion.

For comparison, probably most of the people in the bottom 10% performance bracket at Google are being told and pressured "you need to do more, more, more, you're gonna get fired if you keep those low numbers up" and at 180k employees, that amounts to 18k people that, unlike top chess players, probably _could_ flourish and do better in some smaller scrappier company, but because America doesn't have a social safety net to speak of, they feel like "well I got the dream 6-figure job, I better hold onto that until my knuckles are white because if I got fired, Bay Area rent and cost-of-living could bankrupt me in 3 months." And that's literally just one megatech company, not even talking about the world of people Graeber argues are doing "bullshit jobs" etc. etc.


It's philosophically gauche but I often like to criticize arguments based on “what if they were right?”...

So for example the ontological argument putatively argues for the existence of a Perfect Being but it would seem to work even if you restricted the domain somewhat to something smaller than “all beings”, and so presumably also argues for the existence of a perfect Toaster.

Similarly here, the claim is that in a BB universe, even though countless more brains see the exact same stuff as you, there is something about the Bayesian update factor that you all have where you all still should conclude you are not the Boltzmann Brains, and the evidence is never enough.

How do you look at that description, and not conclude that according to that argument, Bayesian reasoning is just strictly wrong? Like everyone (more or less) is “it” and everyone (more or less) says “it’s not me!” and everyone (more or less) is wrong and here is our philosopher dusting their hands saying ‘yep! sounds good, solved the problem!’


> How do you look at that description, and not conclude that according to that argument, Bayesian reasoning is just strictly wrong?

I believe you're conflating epistemics with decision theory. Sure, the measure of all minds experiencing your current mind-state may be dominated by Boltzmann Brains, with observations that do not correspond to any local state of the world, and which will dissipate momentarily.

But, since your decisions as one of those BB's have no effect, you should make decisions based on the fraction of minds-like-you which are living in a persistent world where those decisions have effects which can, in principle, be predicted.

