Hacker News

Well, those server farms don't pay for themselves.



Sure, but once it's trained there isn't an ongoing maintenance cost.


If you offer an API, you need dedicated servers that keep the model loaded in GPU memory. Unless you don't care about latency at all.

Though I wouldn't be surprised if the bigger reason is the PR cost of releasing with an exciting name but unexciting results. The press would immediately declare the end of the AI growth curve.
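To make the "keep the model loaded in GPU memory" point concrete, here is a back-of-envelope sketch of the VRAM a served model ties up. The model size, precisions, and GPU capacity below are illustrative assumptions, not specs for any particular model.

```python
# Back-of-envelope estimate of the GPU memory needed to keep a model
# resident for low-latency serving. All figures here are illustrative
# assumptions, not published numbers for any specific model.

def serving_memory_gb(params_billion: float,
                      bytes_per_param: int = 2,   # fp16/bf16 weights
                      kv_cache_gb: float = 0.0) -> float:
    """Weights plus KV cache, in GB (1 GB = 1e9 bytes)."""
    weights_gb = params_billion * 1e9 * bytes_per_param / 1e9
    return weights_gb + kv_cache_gb

# A hypothetical 70B-parameter model in fp16 needs ~140 GB for weights
# alone, i.e. at least two 80 GB GPUs held idle between requests.
print(serving_memory_gb(70))                      # 140.0
print(serving_memory_gb(70, bytes_per_param=1))   # int8: 70.0
```

Those GPUs have to sit allocated whether or not requests are coming in, which is why low-latency API serving has a fixed cost even after training is done.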


Of course running inference costs money. You think GPUs are free?


Well, if it takes a ton of memory/compute for inference because of its size, it may be cost-prohibitive to run relative to the revenue it generates?
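The cost-vs-revenue question comes down to simple arithmetic: GPU rental cost divided by sustained throughput puts a floor under the cost of each generated token. The dollar figures and throughput below are made-up assumptions purely to show the calculation.

```python
# Illustrative serving economics: hourly GPU cost divided by token
# throughput gives a floor on cost per generated token. The numbers
# are assumptions for the sake of the arithmetic, not real pricing.

def cost_per_million_tokens(gpu_hourly_usd: float,
                            tokens_per_second: float) -> float:
    """Minimum serving cost in USD per 1M generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1e6

# E.g. a $2/hr GPU sustaining 100 tokens/s:
print(round(cost_per_million_tokens(2.0, 100), 2))  # 5.56
```

A bigger model lowers tokens-per-second on the same hardware, so the cost per token rises even though the price charged per token may not.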


There definitely is: storage, machines at the ready, data centers, etc. Also, OpenAI basically loses money every time you interact with ChatGPT: https://www.wheresyoured.at/subprimeai/



