Hacker News | maxlamb's comments

So you’re telling me this clip isn’t even satire: https://youtube.com/shorts/64TNGvCoegE

Meta keeps making more and more bafflingly horrible product decisions. Tens of billions into the Metaverse and now “let’s put gen AI into everything” even if it goes against the one thing that is arguably the most responsible for FB’s success - real identity. “Let’s put fake bots everywhere, because GenAI, need I say more?”

One of the bullets had “deny” engraved in it.


Most consultants with more than 3 years of experience are billed at $150/hr or more


Ironically, the freelance consulting world is largely on fire due to the lowered barrier of entry and flood of new consultants using AI to perform at higher levels, driving prices down simply through increased supply.

I wouldn't be surprised if AI was also eating consultants from the demand side as well, enabling would-be employers to do a higher % of tasks themselves that they would have previously needed to hire for.


> billed

That's what they are billed at; what they take home from that is probably much lower. At my org we bill folks out at ~$150/hr and their take-home is ~$80/hr.


Yeah, at a place where I worked, we billed at around $150. Then there was an escalating commission based on the amount billed.


People have not been making vessels with 237ft masts for thousands of years. That boat literally had one of the tallest masts ever to exist on a boat. You combine the “extreme” nature of this boat with an extreme weather event and you get an extreme outlier outcome.


I did some research because I didn't believe your comment at first.

It seems true: the Preussen had a similar mast height and was a really big ship.

https://en.wikipedia.org/wiki/Preussen_(ship)

It shows how utterly insane the design of the Bayesian was.


Materials science has advanced quite a bit since the Preussen, though. Aluminum or even carbon fiber masts, rod standing rigging, Dyneema (UHMWPE) ropes, etc. all add up and drastically reduce weight.

Not saying the Bayesian design was or wasn't insane, I don't know, but my point is that it shouldn't be judged compared to what was done over 100 years ago.


It's only ~10% taller than the top end of stuff that was cranked out for commercial use 150yr ago. Forgive me for being unimpressed.

https://en.wikipedia.org/wiki/Great_Republic_(1853_clipper)

Yes, the yacht is a much smaller ship, but it has half as many masts, its masts are aluminum, it has engines so it doesn't have to run sail in poor conditions to maintain control authority, and it benefits from 150 years of improvements in watertightness.

I get that everyone wants to act smug because "everybody knows that you don't put big weight high up, hehehe, stupid billionaires", but I'm betting that when the dust settles, the circle-jerking dies down, and the reports get published, the end result will be the mast being a contributory factor at best (I'm betting on the reduction in righting moment rather than wind area), and that the outcome would not have been much more avoidable had the same other, currently unknown, errors been made on the other ships of the class.

A modern ship in good condition doesn't just sink in minutes from capsizing. Other stuff had to have gone wrong here too. These vessels are designed so that you can spend all day burying the bow in wave after wave; a little dip of one gunwale into the water should not be catastrophic. TFA discusses this.


OK, true, but that boat was much, much heavier and bigger, so the ratio of mast height to weight/size on the Bayesian was extreme.


Right but what if you enjoy your job and don’t want to retire?


Knowing that you have F-you money in investments makes it a lot more likely that you can enjoy your job and removes other stresses in life, or at least it did in my case even with only lowercase f-you money.


Then take more unpaid sabbaticals, or buy a boat.

"Too much money" is a solved problem


I love my job, but having too much money would let me email someone four levels higher in the hierarchy to give them my opinion on things. :)


Paradoxically, I think the willingness and ability to do that (assuming it's used at all reasonably) tends to increase your visibility and perceived value to the company when averaged over a large number of trials.

"This is stupid, but I'm going to keep my mouth shut because I like to eat and live indoors." vs "This is stupid and I'm going to tell the CEO that and suggest a better alternative."


Work is more enjoyable when you don't have to do it for money


What if you stop liking your job?


This happened to me sometime between my first and second kids (last year or so). It was some mix of changing priorities, less time, starting to feel my age, work becoming more difficult, company culture changes, etc. I think I've struck a decent balance of saving and living, but man does life come at you fast. Went from happily working 60-80 hours (including some side projects) to struggling to work 40 hours. Reminds me of how one day something switched and I just couldn't drink anymore. Age is probably the biggest factor, and I can't imagine what this is going to be like in my 50s (just turned 40).


Having the option is better than not having it


GPT-4 is way more powerful than GPT-4o for programming tasks.


That's true, but they both made more mistakes than Sonnet for me. I use them with Aider.


Sonnet sometimes repeats its previous response too often when you ask for changes. It claims there were changes, but there aren't, because the output was already the best that model can produce. This behaviour seems to be baked in somewhere, as it is hard to change.


No it’s not, it’s a multimodal transformer model.


What’s a 2048 clone?



This version seems to be using slightly different rules: my recollection is that the original 2048 prevented a move if it wouldn't cause any blocks to shift or collapse, while this one spawns a block unconditionally.
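
For illustration, here's a minimal sketch of that rule in TypeScript (hypothetical names, not the clone's actual code): a new tile is spawned only when the slide actually changed the board; otherwise the move is rejected.

    type Board = number[][];

    function boardsEqual(a: Board, b: Board): boolean {
      return a.every((row, r) => row.every((v, c) => v === b[r][c]));
    }

    // Place a 2 (or occasionally a 4) in a random empty cell.
    function spawnRandomTile(board: Board): Board {
      const empty: [number, number][] = [];
      board.forEach((row, r) => row.forEach((v, c) => { if (v === 0) empty.push([r, c]); }));
      if (empty.length > 0) {
        const [r, c] = empty[Math.floor(Math.random() * empty.length)];
        board[r][c] = Math.random() < 0.1 ? 4 : 2;
      }
      return board;
    }

    // Spawn a new tile only if the slide actually changed the board.
    function tryMove(board: Board, slide: (b: Board) => Board): Board {
      const moved = slide(board.map(row => [...row]));
      return boardsEqual(board, moved) ? board : spawnRandomTile(moved);
    }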


It doesn't anymore.

You're right. I pasted your comment to aider and it fixed it on the spot :).

EDIT: see https://git.sr.ht/~temporal/aider-2048/commit/9e24c20fc7145c....

A bit of a lazy approach, but also quite obvious. Pretty much what you'd get from a junior dev.

(Also, if you're wondering about the "// end of function ..." comments, I asked the AI to add those at some point to serve as anchors, as the diffs generated by GPT-4o started becoming ambiguous and would add code in the wrong places.)
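
For illustration, a hypothetical TypeScript example of what a function with such an anchor comment might look like (not the actual generated code):

    // Slide and merge one row toward the left, 2048-style.
    function slideRowLeft(row: number[]): number[] {
      const tiles = row.filter(v => v !== 0);
      const merged: number[] = [];
      for (let i = 0; i < tiles.length; i++) {
        if (i + 1 < tiles.length && tiles[i] === tiles[i + 1]) {
          merged.push(tiles[i] * 2); // combine equal neighbours
          i++;                       // skip the tile we just merged
        } else {
          merged.push(tiles[i]);
        }
      }
      while (merged.length < row.length) merged.push(0); // pad back to full width
      return merged;
    } // end of function slideRowLeft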


I think it's also not progressing the block size with score: IIRC the original game also begins spawning 8s and 16s once you get above your first 512 block. But I could be misremembering.

(This kind of feedback driven generation is one of the things I do find very impressive about LLMs. But it's currently more or less the only thing.)


I don't remember it doing progressive block sizing - I only vaguely remember being mildly annoyed by getting to 2048 taking >2x the effort it took to get to 1024, which itself took >2x the effort of getting to 512, etc. - a frustration which my version accurately replicates :).


Permafrost, and probably geological evidence that the climate in that region has been similar or colder for the past 50k years.

