Hacker News | new | past | comments | ask | show | jobs | submit | re-thc's comments

> A canary for this would be whether Gemini skews toward building stuff on GCP

Are you sure it doesn't prefer THE Borg?


> but something has changed

i.e. we finally decided to audit head count from the post-COVID era.

> paired with smaller and flatter teams

i.e. management was axed


you don't think LLM impacts on productivity were a factor at all?

If LLMs really multiply productivity, why would you fire people and handicap the boost?

I have 100 people that can now do the work of 200 people thanks to a new tool.

How is the logical response to fire half of them and bring my productivity back to where it was before?


Because there isn't an unlimited amount of productive work to be done. Sure, a bowling ball factory in a world that demands unlimited bowling balls should take the productivity multiplier AND retain the employees, because they ought to make all the bowling balls they possibly can.

But CashApp jira tickets are not a bowling ball factory in a world with unlimited bowling ball demand. At a certain point, you're just paying people to sit around, or even worse, pretend they're busy.


That’s my point. The letter claims this is a decision made for the purpose of growth, which makes no sense.

This is admitting the company is in maintenance mode at best


> If LLMs really multiply productivity, why would you fire people and handicap the boost?

Presumably, because some of these areas are cost centers versus profit generating.


He explains the rationale, smaller teams work faster.

> we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.


This is just rephrasing the same concept.

Claiming that a small group with AI can accomplish more than a large group with AI doesn’t make sense.

More likely the company doesn’t have enough work for the large group.


Have you worked at a big company? It makes sense to me that a small group would be much more productive than a large group, even without AI. Throw in some AI help, and it could be much better.

I do see fewer Square terminals these days, more Toast (and other options too I think).

Demand inelasticity.

> our business is strong. gross profit continues to grow, we continue to serve more and more customers

I would say the vast majority of people in this thread don't believe that this is related to AI at all, other than as a pretext. It's kind of incredible.

> It's not awesome, not for us.

Depends on where you stand. Maybe leet code won't be a common thing (can be solved with AI), maybe they'll look for different skills, etc.

If losing 30% means hiring the right people for the job you might have better chances. For a long time these were never aligned properly.


Product Hunt (I assume)

> But a short line "AGI is possible, powerful and perilous"

> At which point the question becomes: is it them who are deluded, or is it you?

Neither. It is always "possible". Ask me 20 years ago after watching a sci-fi movie and I'd have said the same.

Just like with software projects estimating time doesn't work reliably for R&D.

We'll still get full self-driving electric cars and robots next year too. This applies every year.


> We'll still get full self-driving electric cars and robots next year too.

I've taken a Waymo and it seemed pretty self driving.


Not that one. Wink.

> I think OpenAI has better chance to winning on the consumer side than everyone else.

Which doesn't make money.

> Of course, would that much up against hundreds of billions of dollars in capex remains to be seen.

Most of that is a bet against enterprise adoption. Automation of customer service, sales, marketing, warehouses, medical discoveries, etc...


Wasn't OpenAI's moat buying up all the RAM or Nvidia cards?

What!? I always thought we could just download more RAM!

On Linux you can actually do that by enabling zram: https://en.wikipedia.org/wiki/Zram
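For the curious, a minimal sketch of setting zram up manually (assumes root, a kernel with the zram module, and util-linux's zramctl; many distros instead ship a service like zram-generator that does this for you):

```shell
# Load the zram module (creates /dev/zram0 on most kernels)
sudo modprobe zram

# Configure the device: zstd compression, 4 GiB of uncompressed capacity
sudo zramctl /dev/zram0 --algorithm zstd --size 4G

# Format it as swap and enable it at a higher priority than disk-backed swap
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0

# Check that the compressed swap device is active
swapon --show
zramctl
```

The "extra RAM" is of course a trade-off, not free memory: pages swapped to zram are compressed in RAM, so you spend CPU cycles to stretch capacity.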


Connectix was a big deal in its day. RAM Doubler was considered essential software.

They also marketed the first webcam, and made emulators mainstream. Their PlayStation emulator is the basis for the case law that says emulators are fair use, decided as a result of a suit from Sony.


> RAM was expensive at the time. For example, an 8 MB stick cost $300.

So what you’re saying is that it could be worse, but not by much?


To the downvoters that don't get OP's reference:

https://downloadmoreram.com

Idk if the owner changed or what, but the website used to be more comical.


China doesn't need to buy it. They can continue their policy and look good.

They've already found a better route. Buy it elsewhere e.g. in Singapore. Train their models there using Nvidia hardware.

Ship the result and fine tune back in China.

So "China" is and has always been buying it. No difference. The politics can keep raging.


Google doesn't and won't have enough TPUs regardless. Nvidia owns the supply chain (confirmed by them overtaking, or being close to overtaking, Apple as TSMC's biggest customer).

That's because Google is once removed from TSMC, unlike Nvidia and Apple

Google and Broadcom are in a co-design partnership (and now also MediaTek). Google defines the architecture. Broadcom provides the essential infrastructure (IP blocks) that makes the chip work on silicon and works tightly with TSMC to make it happen on new nodes. Spending billions to help Broadcom get on par with Nvidia is not what they want.

Apple, for example, designs nearly the entire SoC in-house (CPU, GPU, NPU) and works with TSMC hand in hand. They buy only specific chips and block designs like Wi-Fi, Bluetooth, and RF.


> Apple, for example, designs nearly the entire SoC in-house

I already mentioned Apple lost their top spot. They're also paying a 100% price hike for Samsung RAM.

> Spending billions to help Broadcom get into par with Nvidia is not what they want

Point being, if Apple doesn't have enough, then whether Google is once removed or not, it still won't have anywhere near enough to compete. This is about supply. Nvidia reserved capacity. You can't get it.

Even if Nvidia GPUs were worse - they have 10-100x more of them. What would you as a large customer want? Tesla for example has many buildings racking GPUs and more in the works. Specialized hardware like what OpenAI is doing with the new "spark" will just be "experimental" and side projects because better or not there won't be enough for real demand.


Apple gets enough. Their need for chips is not growing as fast as Nvidia's.

I think you are trying to say that Nvidia has lock-in due to its massive reservations and it starves others out. That's partially true, but the real bottleneck currently is CoWoS/CoWoS-L packaging, not the chips. Nvidia is reaping the benefits of being the pioneer who helped make it possible. I'm not sure of the exact numbers, but I think Nvidia has reserved over 60% of TSMC's CoWoS capacity.

That kind of lock-in is only temporary. Anyone can reserve ahead of time, take the risk of failing to get a competitive node, and suffer the losses.

ps. Nvidia is paying 50% to 100% premiums over standard prices just to secure "urgent" additional capacity beyond their original reservations. If others want that capacity they can bid.

