Meta keeps making more and more bafflingly horrible product decisions. Tens of billions into the Metaverse and now “let’s put gen AI into everything” even if it goes against the one thing that is arguably the most responsible for FB’s success - real identity. “Let’s put fake bots everywhere, because GenAI, need I say more?”
Ironically, the freelance consulting world is largely on fire due to the lowered barrier to entry: a flood of new consultants using AI to perform at higher levels is driving prices down simply through increased supply.
I wouldn't be surprised if AI were eating into consulting from the demand side as well, enabling would-be employers to do a higher % of tasks themselves that they would previously have needed to hire for.
That's what they are billed at; what they take home from that is probably much lower. At my org we bill folks out at ~$150/hr and their take-home is ~$80/hr.
People have not been making vessels with 237ft masts for thousands of years. That boat literally had one of the tallest masts ever to exist on a boat. You combine the "extreme" nature of this boat with an extreme weather event and you get an extreme outlier of an outcome.
Materials science has advanced quite a bit since the Preussen, though. Aluminum or even carbon fiber masts, rod standing rigging, Dyneema (UHMWPE) ropes, etc., all add up and drastically reduce weight.
Not saying the Bayesian design was or wasn't insane, I don't know, but my point is that it shouldn't be judged compared to what was done over 100 years ago.
Yes, the yacht is a much smaller ship, but it has half as many masts, its masts are aluminum, it has engines so it doesn't have to run sail in poor conditions to maintain control authority, and it benefits from 150 years of improvements in watertightness.
I get that everyone wants to act smug because "everybody knows that you don't put big weight high up, hehehe, stupid billionaires," but I'm betting that when the dust settles, the circle jerking dies down, and the reports get published, the end result will be that the mast was at best a contributory factor (I'm betting on the reduction in righting moment rather than on wind area), and that the outcome would not have been much more avoidable had the same, currently unknown, errors been made on the other ships of the class.
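To illustrate the righting-moment point: raising the centre of gravity (a tall, heavy mast does exactly that) shrinks the metacentric height and with it the moment resisting heel. A toy calculation with made-up numbers, nothing here is the Bayesian's actual data:

```python
import math

# All figures below are invented for illustration only.
displacement_t = 540   # vessel displacement, tonnes (assumed)
km = 4.0               # metacentre height above keel, metres (assumed)
kg_conventional = 3.2  # centre of gravity with a conventional rig (assumed)
kg_tall_mast = 3.5     # CG raised by a very tall, heavy mast (assumed)

def righting_moment(displacement_t: float, km: float, kg: float,
                    heel_deg: float) -> float:
    """Small-angle approximation: RM = displacement * GM * sin(heel),
    where GM = KM - KG is the metacentric height."""
    gm = km - kg
    return displacement_t * gm * math.sin(math.radians(heel_deg))

for kg in (kg_conventional, kg_tall_mast):
    print(round(righting_moment(displacement_t, km, kg, heel_deg=20), 1))
```

With these toy numbers, raising KG by 0.3 m cuts GM from 0.8 m to 0.5 m, i.e. the righting moment drops by nearly 40% at the same heel angle, which is the mechanism being bet on above.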
A modern ship in good condition doesn't just sink in minutes from capsizing. Other stuff had to have gone wrong here too. These vessels are designed so that you can spend all day burying the bow in wave after wave. A little dip of one gunnel into the water should not be catastrophic. TFA discusses this.
Knowing that you have F-you money in investments makes it a lot more likely that you can enjoy your job and removes other stresses in life, or at least it did in my case even with only lowercase f-you money.
Paradoxically, I think the willingness and ability to do that (assuming it's used at all reasonably) tends to increase your visibility and perceived value to the company when averaged over a large number of trials.
"This is stupid, but I'm going to keep my mouth shut because I like to eat and live indoors." vs "This is stupid and I'm going to tell the CEO that and suggest a better alternative."
This happened to me sometime between my first and second kids (last year or so). It was some mix of changing priorities, less time, starting to feel my age, work becoming more difficult, company culture changes, etc. I think I've struck a decent balance of saving and living, but man does life come at you fast. Went from happily working 60-80 hours (including some side projects) to struggling to work 40 hours. Reminds me of how one day something switched and I just couldn't drink anymore. Age is probably the biggest factor, and I can't imagine what this is going to be like in my 50s (just turned 40).
Sonnet sometimes repeats its previous response too often when you ask for changes. It claims there were changes, but there aren't any, because the output was already the best that model can produce. This behaviour seems to be deeply baked in somewhere, as it is hard to change.
This version seems to be using slightly different rules: my recollection is that the original 2048 prevented a move if it wouldn't cause any blocks to shift or collapse, while this one spawns a block unconditionally.
A bit of a lazy approach, but also quite obvious. Pretty much what you'd get from a junior dev.
(Also if you're wondering about "// end of function ..." comments, I asked the AI to add those at some point, to serve as anchors, as the diffs generated by GPT-4o started becoming ambiguous and would add code in wrong places.)
I think it's also not progressing the block size with score: IIRC the original game also begins spawning 8s and 16s once you get above your first 512 block. But I could be misremembering.
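If the original really did that (and again, I may be misremembering; the stock 2048 only ever spawns 2s at 90% and 4s at 10%), the rule would be something like this speculative sketch:

```python
import random

def spawn_value(max_tile: int) -> int:
    """Speculative progressive spawning: once the board holds a 512,
    also allow 8s and 16s. The weights here are invented -- the only
    behaviour confirmed for stock 2048 is the 90%/10% split of 2s/4s."""
    if max_tile >= 512:
        choices, weights = [2, 4, 8, 16], [0.6, 0.2, 0.1, 0.1]
    else:
        choices, weights = [2, 4], [0.9, 0.1]
    return random.choices(choices, weights=weights)[0]
```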
(This kind of feedback driven generation is one of the things I do find very impressive about LLMs. But it's currently more or less the only thing.)
I don't remember it doing progressive block sizing - I only vaguely remember being mildly annoyed by getting to 2048 taking >2x the effort it took to get to 1024, which itself took >2x the effort of getting to 512, etc. - a frustration which my version accurately replicates :).