Show HN: San Francisco Compute – 512 H100s at <$2/hr for research and startups (sfcompute.org)
727 points by flaque on July 30, 2023 | 176 comments
Hey folks! We're Alex and Evan, and we're working on putting together a 512 H100 compute cluster for startups and researchers to train large generative models on.

- It runs at the lowest possible margins (<$2.00/hr per H100)
- Designed for bursty training runs, so you can take, say, 128 H100s for a week
- You don't need to commit to multiple years of compute or pay for a year upfront

Big labs like OpenAI and DeepMind have big clusters that support this kind of bursty allocation for their researchers, but startups so far have had to get very small clusters on very long-term contracts, wait through months of lead time, and try to keep them busy all the time.

Our goal is to make it about 10-20x cheaper to do an AI startup than it is right now. Stable Diffusion only costs about $100k to train -- in theory every YC company could get up to that scale. It's just that no cloud provider in the world will give you $100k of compute for just a couple weeks, so startups have to raise 20x that much to buy a whole year of compute.

Once the cluster is online, we're going to be pretty much the only option for startups to do big training runs like that on.




I hope you succeed. TPU research cloud (TRC) tried this in 2019. It was how I got my start.

In 2023 you can barely get a single TPU for more than an hour. Back then you could get literally hundreds, with an s.

I believed in TRC. I thought they’d solve it by scaling, and building a whole continent of TPUs. But in the end, TPU time was cut short in favor of internal researchers — some researchers being more equal than others. And how could it be any other way? If I made a proposal today to get these H100s to train GPT to play chess, people would laugh. The world is different now.

Your project has a youthful optimism that I hope you won’t lose as you go. And in fact it might be the way to win in the long run. So whenever someone comes knocking, begging for a tiny slice of your H100s for their harebrained idea, I hope you’ll humor them. It’s the only reason I was able to become anybody.


> Your project has a youthful optimism that I hope you won’t lose as you go. And in fact it might be the way to win in the long run.

This is the nicest thing anyone has said to us about this. We're gonna frame this and hang it on our wall.

> So whenever someone comes knocking, begging for a tiny slice of your H100s for their harebrained idea, I hope you’ll humor them.

Absolutely! :D


Optimism is (almost) always required in order to accomplish anything of significance. Those who lose it aren't living up to their potential.

I'm not encouraging the false belief that everything you do will work out. Instead I'm encouraging the realization that the greatest accomplishments almost always feel like long shots, and require significant amounts of optimism. Fear and pessimism, while helpful in appropriate doses, will limit you greatly in life if you let them rule you too significantly.

When I look back on my life, the greatest accomplishments I've achieved are ones where I was naive yet optimistic going into it. This was a good thing, because I would have been too scared to try had I really known the challenges that lay ahead.


> Optimism is (almost) always required in order to accomplish anything of significance. Those who lose it aren't living up to their potential.

I argue that realism trumps optimism. It's perfectly normal in a realist framing to see something difficult, acknowledge the high risk and failure potential, and still pursue it with intent to succeed.

I've personally grown tired of over-optimism everywhere, because it creates unrealistic situations and passes the consequences of failure around in an inequitable way. The "visionary" is rewarded when the rare successes occur, while everyone else suffers the consequences of most failures. No contingency plans for failure, no discussion of failure, and so on. Optimism just takes any idea and pursues it; the consequences become someone else's problem.

Pessimism isn't much better: you essentially think everything is too risky or unlikely to succeed, so you never do anything. You live in a state of inaction because any level of risk or uncertainty is too much.

To me, realism is much better. You acknowledge the challenge. You acknowledge the risk. You make sure everyone involved understands it, but you still charge forward knowing you might succeed. Some think that if you're not naively optimistic (what most people in my experience refer to as "optimism") you don't create enough pressure. I think that's nonsense.


While I've said something like this comment scores of times in my life, and it's definitely a necessary corrective for a lot of optimists who don't think too hard about how they think, I don't think it's a useful place to stop. It's not hard to get unanimous agreement with "be a realist!" because it's framed so the alternative is irrationality/delusion. But even among people who agree that the goal should be to reason under uncertainty and assess risks clearly, there will be a spectrum of risk tolerance, and I don't think it's the worst thing ever to describe that as "optimism" vs. "pessimism"! (I fully acknowledge this isn't the dominant usage, but I think some spaces lean this way)

In this context, I tend to read the parent claim as something like, "great success requires willingness to sometimes take worse-than-even odds or pursue modestly-negative-EV opportunities". I'm not sure I agree with the strongest version of that, but I think it's likely that the space of risky paths to great achievement is richer than that of cautious ones.


If everyone were a realist, we wouldn't have half the advances we do, because what counts as "real" gets proven wrong through innovation. After all, isn't that what disruption is? :)

Sam Altman talks about this quite frequently: that it's not intelligence or luck that's necessary for an enduring innovation, it's persistence in the face of inevitability, and a high tolerance for being proven wrong and still persisting.


Everybody knows socialism is impossible though. Can't work, not worth trying, don't even think about it.


Realism doesn't work in business. Business success requires 10 people to try for 1 person to succeed. If those 10 people were realists, they wouldn't try.


Depends how much that one person wins and how much the others lose.


You can be a realist visionary.


Absolutely, which is what I advocate for.


YC startup founder here,

Mostly agree, except the market is not an optimistic place — it’s the market.

There are a multitude of reasons you lose your optimism, mostly because people take it away — your optimism is their money


I like this quote from Napoleon on taking risks: “If the art of war were nothing but the art of avoiding risks, glory would become the prey of mediocre minds.... I have made all the calculations; fate will do the rest.”


To me the payoff of failed projects is in what I learned. As long as that's the case I can carry my optimism over into new projects.


What a beautiful and articulate thought. Thank you.


Actually, the TPU Research Cloud program is still going strong! We've expanded the compute pool significantly to include Cloud TPU v4 Pod slices, and larger projects still use hundreds of chips at a time. (TRC capacity has not been reclaimed for internal use.)

Check out this list of recent TRC-supported publications: https://sites.research.google/trc/publications/

Demand for Cloud TPUs is definitely intense, so if you're using preemptible capacity, you're probably seeing more frequent interruptions, but reserved capacity is also available. Hope you email the TRC support team to say hello!


Zak, I love you buddy, but you should have some of your researchers try to use the TRC program. They should pretend to be a nobody (like I was in 2019) and try to do any research with the resources they’re granted. I guarantee you those researchers will all tell you “we can’t start any training runs anymore because the TPUs die after 45 minutes.”

This may feel like an anime betrayal, since you basically launched my career as a scientist. But it’s important for hobbyists and tinkerers to be able to participate in the AI ecosystem, especially today. And TRC just does not support them anymore. I tried, many times, over the last year and a half.

You don’t need to take my word for it. Here’s some unfiltered DMs on the subject: https://imgur.com/a/6vqvzXs

Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying.

I held out hope for so long. I thought it was temporary. It ain’t temporary, Zak. And I vividly remember when it happened. Some smart person in google proposed a new allocation algorithm back near the end of 2021, and poof, overnight our ability to make TPUs went from dozens to a handful. It was quite literally overnight; we had monitoring graphs that flatlined. I can probably still dig them up.

I’ve wanted to email you privately about this, but given that I am a small fish in a pond that’s grown exponentially bigger, I don’t think it would’ve made a difference. The difference is in your last paragraph: you allocate reserved instances to those who deserve it, and leave everybody else to fight over 45 minutes of TPU time when it takes 25 minutes just to create and fill your TPU with your research data.

Your non-preemptible TPUs are frankly a lie. I didn’t want to drop the L word, but a TPUv3 in euw4a will literally delete itself — aka preempt — after no more than a couple hours. I tested this over many months. That was some time ago, so maybe things have changed, but I wouldn’t bet on it.

There’s some serious “left hand doesn’t know that right hand detached from its body and migrated south for the winter” energy in the TRC program. I don’t know where it embedded itself, but if you want to elevate any other engineers from software devs to researchers, I urge you to make some big changes.

One last thing. The support staff of TRC is phenomenal. Jonathan Colton has worked more miracles than I can count, along with the rest of his crew. Ultimately he had to send me an email like “by the way, TRC doesn’t delete TPUs. This distinction probably won’t be too relevant, but I wanted to let you know” (paraphrasing). Translation: you took the power away from the people who knew where to put it (Jonathan) and gave it to some really important researchers, probably in Brain or some other division of Google. And the rest is history. So I don’t want to hear that one of the changes is “ok, we’ve punished the support staff” - as far as I can tell, they’ve moved mountains with whatever tools they had available, and I definitely wouldn’t have been able to do any better in their shoes.

Also, hello. Thanks for launching my career. Sorry that I had to leave this here, but my duty is to the open source community. The good news is that you can still recover, if only you’d revert this silly “we’ll slip you some reserved TPUs that don’t kamikaze themselves after 45 minutes if you ask in just the right way” stuff. That wasn’t how the program was in 2019, and I guarantee that I couldn’t have done the work I did then under the current conditions.


A few quick comments:

> But it’s important for hobbyists and tinkerers to be able to participate in the AI ecosystem

Totally agree! This was a big part of my original motivation for creating the TPU Research Cloud program. People sometimes assume that e.g. an academic affiliation is required to participate, but that isn't true; we want the program to be as open as possible. We should find a better way to highlight the work of TRC tinkerers - for now, the GitHub and Hugging Face search buttons near the top of https://sites.research.google/trc/publications/ provide some raw pointers.

I'm sorry to hear that you've personally had a hard time getting TPU v3 capacity in europe-west4-a. In general, TRC TPU availability varies by region and by hardware generation, and we've experimented with different ways of prioritizing projects. It's possible that something was misconfigured on our end if your TPU lifetimes were so short. Could you email Jonathan the name of the project(s) you were using and any other data you still have handy so we can figure out what was going wrong?

Also, thanks for the kind words for Jonathan and the rest of the TRC team. They haven't lost any power or control, and they are allocating a lot more Cloud TPU capacity than ever. However, now that everyone wants to train LLMs, diffusion models, and other exciting new things, demand for TPU compute is way up, so juggling all of the inbound TRC requests is definitely more challenging than it used to be.


It’s not euw4a. It’s everywhere. The allocation algorithm across the board kills off TPUs after no more than a couple hours. usc1f, usc1a, usc1c, euw4a; it makes no difference.

It would be funny if someone set gpt-2-15b-poetry (our project) in some special way to prevent us from making TPUs that ever last more than a few hours, but from what I’ve heard from other people, this isn’t the case. That’s what I mean about the left hand doesn’t know what’s going on with the right hand. It’s not a misconfiguration. Again, pretend to be some random person who just wants to apply for TPU access, fill out your form, then try to do research with the TPUs that are available to you. You’ll have a rough time, but it’ll also cure this misconception that it’s a special case or was just me.

Again, no need to take my word for it; here’s an organic comment from someone who was rolling their eyes whenever I was cheerleading TRC, because their experience was so bad: https://news.ycombinator.com/item?id=36936782

I think that the experience is probably great for researchers who get special approval. And that’s fine, if that’s how the program is designed to be. But at least tell people that they shouldn’t expect more than an hour or two of TPU time.


It sounds like you're primarily using preemptible TPU quota, which doesn't come with any availability or uptime expectations at all.

By default, the TRC program grants both on-demand quota and preemptible quota. If you are able to create a TPU VM with your on-demand quota, it should last quite a bit longer than a few hours. (There are situations in which on-demand TRC TPU VMs can be interrupted, but these ought to be rare.) If your on-demand TPU VMs are being interrupted frequently, please email TRC support and provide the names of the TPU hosts that were interrupted so folks can try to help.

When there is very high demand for Cloud TPUs, it's certainly possible for preemptible TPU VMs to be interrupted frequently. It would be an interesting engineering project to make a very robust training system that could make progress even with low TPU VM uptime, and I hope someone does it! Until then, though, you should have a better experience with on-demand resources when you're able to create them. Reserved capacity is even better since it provides an expectation of both availability and uptime.
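
The "robust training system" idea above mostly comes down to checkpointing aggressively and resuming from the latest checkpoint whenever a fresh VM comes up. Here's a minimal sketch in Python (the file name, step counts, and toy "model" are invented for illustration; this isn't any particular TPU or TRC API):

  import os
  import pickle

  CKPT = "train_state.pkl"

  def load_state():
      # Resume from the latest checkpoint if one exists, else start fresh.
      if os.path.exists(CKPT):
          with open(CKPT, "rb") as f:
              return pickle.load(f)
      return {"step": 0, "weights": [0.0] * 4}

  def save_state(state):
      # Write to a temp file and atomically swap it in, so a preemption
      # mid-write can't leave a torn checkpoint behind.
      tmp = CKPT + ".tmp"
      with open(tmp, "wb") as f:
          pickle.dump(state, f)
      os.replace(tmp, CKPT)

  def train(total_steps=1000, ckpt_every=50):
      state = load_state()
      while state["step"] < total_steps:
          # Stand-in for a real training step: nudge the weights toward 1.0.
          state["weights"] = [w + 0.01 * (1.0 - w) for w in state["weights"]]
          state["step"] += 1
          if state["step"] % ckpt_every == 0:
              save_state(state)  # at most ckpt_every steps are lost on preemption
      save_state(state)
      return state

  if __name__ == "__main__":
      print("finished at step", train()["step"])

Kill it partway through and rerun it, and it picks up from the last checkpoint; the same shape works with real model state and a storage bucket instead of local disk.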


I was using on-demand TPUs primarily, and preemptible TPUs secondarily. Neither would last more than an hour or two. And two was something of a minor miracle by the end.


For future reference, the team looked into this, and it appears that the interruptions you experienced were specific to your project and a small number of other projects. The vast majority of TRC projects should see much longer Cloud TPU uptimes when they are able to create on-demand TPUs.

I'm sorry that you had such a frustrating time and that we weren't able to sort it out via email while it was happening. If you decide to try TRC again and run into issues like this, please be sure to engage with TRC support!


> You don’t need to take my word for it. Here’s some unfiltered DMs on the subject: https://imgur.com/a/6vqvzXs

> Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying.

Unless I'm misreading this they sound pretty happy and you sound pessimistic? Their last substantial comment was "I'm sure Zak could hook you up with something better"?


TRC is supposed to be the “something better”. This insider TPU stuff is for the birds. If TRC can only offer 4 hours with no preemptions, that’s fine, but they need to be up front about that. Saying that TPUs preempt every 24 hours and then killing them off after 45 minutes is… not very productive.

As for their comments, the third screenshot is the key; they're agreeing that the situation is bad. They're a friend, and they're a little indirect with the way they phrase things. (If you've ever had a friend who really doesn't want to be wrong, you know what I mean; they kind of say things in a circular way in order to agree without agreeing. After a while it's pretty cute and endearing though.)

I was particularly pessimistic in those DMs because it came a couple months after I thought I’d give TRC one last try, back in January, which was roughly a year after I’d started my “ok, I’m losing hope, but I’ll wait and see” journey. In the meantime I kept cheerleading TRC and driving people to their signup page. But after the TPUs all died in less than two hours yet again, that was that.

I have a really high tolerance for faulty equipment. This is free compute; me complaining is just ungrateful. But I saw what things were like in 2019. “Different” would be the understatement of the century. If my baby wasn’t being incubated in the NICU today, I’d show the charts where our usage went from thousands of cores down to almost zero, and not for lack of trying.

It also would’ve been fine to say “sorry, this is unsustainable, the new limits are one tpu per person per project” and then give me a rock solid tpu. We had those in 2021. One of our TPUv3s stayed online for so long that I started to host my blog on it just to show people that TPUs were good for more than AI; the uptime was measured in months. Then poof, now you can barely fire one up.


I don't have a qualified opinion on the subject of TPU availability.

I'm just pointing out that your summary of the DMs ("Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying") is the opposite of what the DMs show.


As mentioned in another comment, it sounds like you're using preemptible TRC TPU quota. If you use on-demand TRC TPU quota instead, that should improve your uptime substantially.


This is totally fascinating.

Frankly, it sounds to me like they're having severe yield+reliability problems with the TPUv4s that aren't getting caught by wafer-level testing, and have binned the flakiest ones for use by outsiders.

A lot of yield issues show up as spontaneous resets/crashes.


It's more likely Google preempting researchers who are on a preemptible research grant, and it's happening a lot more often because there are more paying customers.


"Preemptable money" sounds like the kind of bullshit I would use to cover up failed chips. And yes, I am a VLSI engineer.


The main problem with the TPU Research Cloud is that you get dragged down a LOT by the buggy TPU API: not just the Google Cloud API being awful, but the TensorFlow/JAX/PyTorch support being awful too. You also basically must use Google Cloud Storage, which is also slow and can be really expensive to get anything into or out of.

The Googlers maintaining the TPU Github repo also just basically don't care about your PR unless it's somehow gonna help them in their own perf review.

In contrast, with a GPU-based grid you can not only run the latest & greatest out of the box but also do a lot of local testing that saves tons of time.

Finally, the OP here appears to be offering real customer engagement, which is totally absent from my own GCloud experiences across several companies.


Could you share a few technical details about the issues you've encountered with TF / JAX / PyTorch on Cloud TPUs? The overall Cloud TPU user experience improved a whole lot when we enabled direct access to TPU VMs, and I believe the newer JAX and PyTorch integrations are improving very rapidly. I'd love to know which issues are currently causing the most friction.


Wow! I never thought you’d see the light. All I ever see from your posts is praise for TRC. As someone who got started way later on, I had infinitely more success with a gaming GPU I owned myself. Obviously not really comparable, but TRC was very very difficult to work with. I think I only ever had access to a TPUv3 once and that wasn’t nearly enough time to learn the ropes.

My understanding was that this situation changed drastically depending on what sort of email you had or how popular your Twitter handle was.


My experience has been different. Considering how easy the application is, I think they're still being fairly generous: I've been offered multiple v3-8s and v3-32s for 30 days, as well as preemptible v3-64s for 28 days, for a few different projects within the last 6 months.

Are you affiliated with an academic institution? Otherwise I'm not sure why they've been more generous with me; my projects have been mildly interesting at best.

They're certainly a lot stingier with larger pods than they used to be though.


What Shawn says is absolutely right. The race right now is way too hot for this stuff. A single customer will eat up 512 gpus for 3 years.


> In 2023 you can barely get a single TPU for more than an hour

Oh come on, colab gives TPU access in the free tier for a whole half day. No need to exaggerate the shortage


> In 2023 you can barely get a single TPU for more than an hour.

Um. Can't you order them from coral.ai and put them in an NVMe slot? Or are the cloud TPUs more powerful?


TPU pods are not sold by Google; the Edge TPU is a different product.


So the cloud TPUs are more powerful...? Or what are you saying?


Yeah, it’s a silly branding thing.

One TPU (not even a pod, just a regular old TPUv2) has 96 CPU cores with 1.4TB of RAM, and that’s not even counting their hardware acceleration. I’d love to buy one.


Huh, this doesn't seem right. Based on #s you seem to be referring to pods but even then I'm not familiar with such a configuration existing.

A single TPUv2 chip has 1 core and 8gb of memory. A single device comes in the v2-8 configuration with 8 cores and 64gb of memory.

Pod variants come in v2-32 to v2-512 configurations.


A single TPUv2 host has 8 TPU cores with 64GB of total HBM (8GB per core), but like GPUs, TPUs can't directly access a network, so the host also needs CPUs and standard RAM to send data to them. They are fast, and the host has to be fast enough to keep them fed with data, so the host is pretty beefy. But FWIW, a TPUv2 host has somewhere around 330GB of RAM, not 1.4TB.


Thanks for clarifying, I misinterpreted the commenter as referring to the accelerator as the conversation was about TPU availability for purchase.

I know just enough about the architecture to facilitate using TPUs for research training runs but I'm not sure what's so special about the host?

Sure it's beefy but there are much beefier servers readily available.


There's nothing super-special about the host. The accelerators are the special part (and, as described elsewhere, they are orders of magnitude more powerful than the Edge TPU). However, if you're an academic/independent researcher, being able to access a system with that much system memory/CPU cores for free through TPU Research Cloud is potentially appealing even without the accelerators.


Edge TPUs are low cost, low power inference devices the size of a dime. I have a hundred of them sitting in a closet. (Alas. Anyone want to buy 100 coral minis? :-)

The TPUs you rent that are being discussed here are capable of training, consume hundreds of watts and have a heatsink bigger than your fist and really spectacular network links. They're analogous to Nvidia's highest end GPUs from a "what can you do with them" perspective.

Both are custom chips for deep learning but they're completely different beasts.


Can I hook a microphone up to a Coral Mini and run Whisper? I'd love to have a home assistant that wasn't on the cloud.

As for the rest of them, list them on Amazon and let them do the fulfillment. That $10k of hardware isn't going to sell itself from your closet. (Yet. LLMs are making great strides.)


It has a microphone built in.

And that's a good idea, thanks. I've been dreading the idea of using ebay.


They are entirely different chips, like an order of magnitude in terms of transistor count and die size.


yes


> Rather than each of K startups individually buying clusters of N gpus, together we buy a cluster with NK gpus... Then we set up a job scheduler to allocate compute

In theory, this sounds almost identical to the business model behind AWS, Azure, and other cloud providers. "Instead of everyone buying a fixed amount of hardware for individual use, we'll buy a massive pool of hardware that people can time-share." Outside of cloud providers having to mark up prices to give themselves a net-margin, is there something else they are failing to do, hence creating the need for these projects?


Couple things, mostly pricing and availability:

1) Margins. Public cloud investors expect a certain margin profile. They can’t compete with Lambda/Fluidstack’s margins.

2) To an extent, big clouds also have worse networking for LLM training. I believe only Azure has InfiniBand. Oracle is 3,200 Gbps but not InfiniBand, same for AWS I believe. Not sure about GCP, but I believe their A100 networking speeds were only 100 Gbps rather than 1,600. Whereas Lambda, Fluidstack, and CoreWeave all have IB.

3) Availability. Nvidia isn’t giving big clouds the allocation they want.


What is your differentiator from Lambda? That you are smaller and in a single DC?

Sincere question.


I'm not OP/submitter, but the main differentiator is that Lambda doesn't have on-demand availability for lots of interlinked H100s - you have to reserve them.

Lambda has "Lambda Sprint" which is kinda similar,[1] but Sprint is $4.85/GPU/hr instead of <$2.

So if you want 128 GPUs for a week, you can't use Lambda reserved (3 year term), you can't use Lambda on-demand (can't get 128 A/H100s on-demand), your options are Lambda Sprint or SF Compute, and SF Compute is offering significantly lower prices.

[1]: https://lambdalabs.com/service/gpu-cloud/reserved


Low margins and “will this thing still be around in 2 years” are negatively correlated.

Where’s the capital for upgrades, repairs, and replacements coming from?


Using investors' money to build something with low to zero margin until you capture enough value to make it profitable a few years down the line has been the core SV strategy for more than a decade now, so it's not an extraordinary plan.

Of course it doesn't always work, and it may be even harder to make it work in the current macroeconomic environment, but it's still pretty standard play.


They are working on this. All the major clouds have initiatives to do short term requests/reservations. It’s just not a feature that has ever been of much use pre-GenAI. How often do you need to request 1000 CPU nodes for 48 hours in a single zone?

Secondly, there is a fundamental question of resource sharing here. Even with this project by Evan and AI Grant (the second such cluster created by AI Grant btw), the question will arise — if one team has enough money to provision the entire cluster forever, why not do it? What are the exact parameters of fair use? In networking, we have algorithms around bandwidth sharing (TCP Fairness, etc.) that encode sharing mechanisms but they don’t work for these kinds of chunky workloads either.

But over the next few months, AWS and others are working to release queueing services that let you temporarily provision a chunk of compute, probably with upfront payment, and at a high expense (perhaps above the on demand rate).
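
To make the resource-sharing question above concrete, here's a toy sketch of one possible policy for chunky, all-or-nothing GPU requests: grant the smaller asks up to an equal share first, then hand any leftover capacity to the bigger ones. The team names and numbers are invented, and this is not claiming to be SF Compute's or any cloud's actual scheduler:

  def allocate(pool_size, requests):
      """requests: {team: gpus_wanted}; grants are all-or-nothing."""
      fair_share = pool_size // max(len(requests), 1)
      granted, remaining = {}, pool_size
      # Pass 1: grant requests that fit within an equal share, smallest first.
      for team, want in sorted(requests.items(), key=lambda kv: kv[1]):
          if want <= fair_share and want <= remaining:
              granted[team] = want
              remaining -= want
      # Pass 2: give leftover capacity to the larger requests that still fit.
      for team, want in sorted(requests.items(), key=lambda kv: kv[1]):
          if team not in granted and want <= remaining:
              granted[team] = want
              remaining -= want
      return granted

  # The 512-GPU ask doesn't fit alongside the smaller bursts, so it waits:
  print(allocate(512, {"a": 64, "b": 128, "c": 512}))  # {'a': 64, 'b': 128}

The awkward part is exactly the one raised above: whether (and when) the whole-cluster job gets a turn is a fairness parameter someone has to choose, not something the algorithm decides for you.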


> It’s just not a feature that has ever been of much use pre-GenAI. How often do you need to request 1000 CPU nodes for 48 hours in a single zone?

I would argue this has always been a common case for cloud GPU compute.


AWS and Azure would slit their own throats before they created a way for their customers to pool instances to save money.

They want to do that themselves, and keep the customer relationship and the profits, instead of giving them to a middleman or the customer.


It’s just corporate profits combined with market forces, not some sort of malicious conspiracy.

You can rent a 2-socket AMD server with 120 available cores and RDMA for something like 50c to $2 per hour. That’s just barely above the cost of the electricity and cooling!

What do you want, free compute just handed to you out of the goodness of their hearts?

There is incredible demand for high-end GPUs right now, and market prices reflect that.


You mentioned malicious conspiracy, not me.

It's just business and I'd do the same if I was in charge of AWS.


> You can rent a 2-socket AMD server with 120 available cores and RDMA for something like 50c to $2 per hour.

Source required



Sorry, where exactly are these 50c many-core servers you speak of?


Azure's HB120rs_v3 size is about 36c per hour right now with Spot pricing in East US. These use 3rd generation AMD EPYC "Milan" processors.

The instances with the 4th generation "Genoa-X" processors (HB176rs_v4) cost about $2.88 per hour. The HX176rs_v4 model with 1.7 TB of memory is $3.46 per hour.

https://learn.microsoft.com/en-us/azure/virtual-machines/hbv...

https://learn.microsoft.com/en-us/azure/virtual-machines/hbv...

https://learn.microsoft.com/en-us/azure/virtual-machines/hx-...


Are these actually attainable, as in I can log in and launch an instance with these specifications right now, or are they just listings? I ask because literally last week I was unable to launch similar instances on AWS despite those specs being listed as available and online.


I could. Availability tends to be region-dependent with all clouds.


Where can you get 120 cores for $2/hr?



AWS and Azure both charge by the hour anyway, so it wouldn't make much difference, but if you wanted to you could use Reserved Instances and just put the accounts in the same organisation.

A large part of the profit comes from the upfront risk of buying machines. With this you are just absorbing that risk which may be better if the startup expects to last.


Having hosted infrastructure in CA at multiple colos, I would advise you to host it elsewhere if you can; the cost of power and other infrastructure is much higher in CA than in AZ or NV.


Montreal would be the place to go for cheap power, and the CAD-USD advantage.


Power seems like a very small part of the cost of compute when it comes to GPUs.


FWIW I tried to look up some numbers: I found California "industrial" electricity at $0.18/kWh https://www.eia.gov/electricity/monthly/epm_table_grapher.ph... and H100s using 300-700W https://www.nvidia.com/en-us/data-center/h100/ which implies a worst-case marginal cost of 0.18 * 0.7 = $0.126/GPU/hour. Montana looks cheapest at ~$0.05/kWh, which would bring that down to $0.035. So there may be about a $0.09 California premium (vs. the absolute cheapest possibility), which as you say is a small part of the total cost, but could be material for large workloads.
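
The same arithmetic as a quick sanity check (the rates and wattage are the rough figures quoted above, not measured data):

  H100_WATTS = 700                   # worst-case draw per GPU
  CA_RATE, MT_RATE = 0.18, 0.05      # $/kWh, California vs. Montana industrial

  ca = CA_RATE * H100_WATTS / 1000   # ~$0.126 per GPU-hour
  mt = MT_RATE * H100_WATTS / 1000   # ~$0.035 per GPU-hour
  print(round(ca, 3), round(mt, 3), round(ca - mt, 3))  # 0.126 0.035 0.091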


Retail residential power in the city of Santa Clara is $0.15/kWh, I'm sure commercial could be less. Especially if you throw some solar panels on the roof.

The most expensive part would be the land, but honestly there is some pretty cheap land outside the cities.


For reference, I'm in SF and paid PG&E $0.50938/kWh during peak hours, residential, last bill.


Yes, Santa Clara has their own non-profit power company so their rates are way less than PG&E


$0.09 for the GPU alone. Add power for mainboard, RAM, and fans, efficiency loss at the power supply, networking, etc. After that another flat 30% for HVAC, since all that "consumed" electricity got turned into heat and the heat has to go somewhere.

And when we are talking about low margins, a 5-10% difference in cost is very significant.
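
A rough illustration of how that per-GPU premium compounds once host and cooling overheads are added (the 30% HVAC figure is from the comment above; the 40% host overhead is an assumption made up for the example):

  gpu_premium = 0.09   # $/GPU-hour California premium, from the estimate above
  host_factor = 1.4    # assumed: mainboard, RAM, fans, PSU losses, networking
  hvac_factor = 1.3    # ~30% extra to move the dissipated heat back out
  print(round(gpu_premium * host_factor * hvac_factor, 3))  # ~0.164 $/GPU-hour

Against sub-$2/hour pricing, an extra ~$0.16/GPU-hour is roughly the 5-10% range the comment mentions.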


Meanwhile, AWS is charging $8 an hour for their top of the line gpu server.


Over-regulation and taxes.


> It's just that no cloud provider in the world will give you $100k of compute for just a couple weeks

I've never had to buy very large compute, but I thought that was the whole point of the cloud


How does this compare to https://lambdalabs.com/ ?


Ah, we're running a medium amount of compute at zero-margin. The point is not to go sell the Fortune 500, but to make sure a grad student can spend a $50k grant.

Right now, it's pretty easy to get a few A/H100s (Lambda is great for this), but very hard to get more than 24 at a reasonable price (~$2 an hour). One often needs to put up a 6+ month commitment, even when they may only want to run their H100s for an 8-hour training run.

It's the right business decision for GPU brokers to do long term reservations and so on, and we might do so too if we were in their shoes. But we're not in their shoes and have a very different goal: arm the rebels! Let someone who isn't BigCorp train a model!


> but to make sure a grad student can spend a $50k grant.

As a graduate student, thank you. Thankfully, my workloads aren't LLM crazy so I can get by on my old NVIDIA consumer hardware, but I have coworkers struggling to get reasonable prices/time for larger scale hardware.


So what happens when some big bucks VC backed closed source LLM company buys all your compute inventory for the next 5 years? This is not that unlikely. Lambda Labs a little while back was completely sold out of all compute inventory.


I assume it's up to them to say no. They did say they're not in it to make bookoo bucks


Yeah we aren’t going to let anyone book the whole thing for years. If we ever have to make the choice, we’ll choose the startups over the big companies.


Yeah, if someone doesn't care about the cost and wants to buy whole cluster, they might be better off using an existing provider.


I must say, this is the worst I've seen "beaucoup" spelled.


Hahahaha. I can honestly say no one has told me in my entire uneducated life that "bookoo" was a real word. I appreciate the lesson.


You just might enjoy Jumbo, a track off Underworld's "Beaucoup Fish" album

https://open.spotify.com/track/3VIMS1p3sNifH0RQnmDf7s


Haha, you're bookoo welcome!


Presumably they then buy more GPUs.


How can you allow people to get big chunks of GPUs without a lot of expensive slack in the system?


This is great. Thank you very much for your work.


Very similar price, but from what I gather a very different model. One important difference might be if you regularly run short-ish training runs over many GPUs. Lambda Labs might not have 256 instances to give you right now. With OP you are basically buying the right to put jobs in the job queue for their 512 GPU cluster, so running a job that needs 256 GPUs isn't an issue (though you might wait behind someone running a 512 GPU job).

No idea what capacity at Lambda Labs actually looks like though. Does anyone have insight into how easy it is to spin up more than 2-3 instances there?


Yeah, it’s pretty hard to find a big block of GPUs that you can use for a short time, especially if you need InfiniBand for multi-node training. Lambda, I think, needs a minimum reservation of 6-12 months if you want IB.


You can usually only get a few h100s at a time unless you're committed to reserved instances (for a longer time period)


No real way to get a big block without commitment. IIRC the smallest H100 commitment is 64 GPUs for 3 years (about $3M USD).


My question too. At $2/hr for H100 that seems more flexible? But I haven’t tried to get 10k GPU-hours on any of these services, maybe that is where the bottleneck is.


I am super interested in AI on a personal level and have been involved for a number of years.

I have never seen a GPU crunch quite like it is right now. To anyone who is interested in hobbyist ML, I highly highly recommend using vast.ai


Additional clouds:

For H100s and A100s: Lambda, Fluidstack, RunPod. Also CoreWeave, Crusoe, Oblivus, and Latitude.

For non-A/H100s: Vast, TensorDock, also RunPod here too.


Depends on what you class as hobbyist but I am running a T4 for a few minutes to get acquainted with tools and concepts and I found modal.com really good for this. They resell AWS and GCP at the moment. They also have A100 but T4 is all I need for now.


Significantly more expensive than equivalent 3090 configuration if you can do model parallelism


What do you mean by this? I use less than the $30/m free included usage.

I am guessing you mean at some point just buy your own 3090 as it will be cheaper than paying a cloud per second for a server-grade Nvidia setup.


I think this is more applicable for training use cases. If you can get by with less than $30/mo in AWS compute (quite expensive) then it likely does not make a difference.

What I mean is that you can rent four 3090 GPUs for much less than renting an A100 on AWS, because you are not paying Nvidia's "cloud tax" on FLOPS/$.


Many thanks for posting about vast.ai, which I had never heard of! It's a sort of "gig economy/marketplace" for GPUs. The first machine I tried just now worked fine, had 512GB of RAM, 256 AMD CPU cores, and an A100 GPU, and I got about 4 minutes for $0.05 (which they provided for free).


The only caveat is that it's not really appropriate for private use cases.

Also, many of the available options are clearly recycled crypto mining rigs, which have somewhat odd configurations (poor GPU bandwidth, low CPU RAM).


I know AWS/GCP/Azure have overhead and I understand why so many companies choose to go bare metal on their ops. I personally rarely think it's worth the time and effort, but I get that at scale the savings can be substantial.

But for AI training? If the public cloud isn't competitive even for bursty AI training, their margins are much higher than I anticipated.

OP mentions 10-20x cost reduction? Compared to what? AWS?


AWS offers p5.48xlarge, which is 8x H100 for $98.32/hour, so $12.29 per hour per H100: roughly 6x the price.


Hi, SF lover [1] here. Anything interesting to note about your name? Will your hardware actually be based in SF? Any plans to start meetups or bring customers together for socializing or anything like that?

[1] We have not gone the way of the Xerces blue [2] yet... we still exist!

[2] https://en.wikipedia.org/wiki/Xerces_blue


Ah the hardware isn’t gonna be in SF (not the cheapest datacenter space)

But I do think a lot of our customers will be out here —- SF is still probably the best place to do startups. We just have so many more people doing hard technical stuff here. Literally every single place I’ve lived in SF there’s been another startup living upstairs or downstairs

Good idea to host some in person events!


> SF is still probably the best place to do startups.

now that's a hot take if I ever saw one


I love the idea of community assets. Could it be the start of a GPU co-op?


For consumer-grade cards, that's already here.

Make money off your GPU with vast.AI

https://cloud.vast.ai/host/setup


> Requirements

> Ubuntu 18.04 or newer (required)

> Dedicated machines only - the machine shouldn't be doing other stuff while rented

well that's certainly not what I expected. ctrl-f "virtual" gives nothing, so it seems they really mean "take over your machine"

> Note: you may need to install python2.7 to run the install script.

what kind of nonsense is this? Did they write the script in 2001 and just abandon it?


> what kind of nonsense is this? Did they write the script in 2001 and just abandon it?

Anything AI/ML is a hot mess of cobbled-together bits and pieces of Python barely holding together. I recently read somewhere that there should be a new specialization of "ML DevOps Engineer"... and hell I'm supporting that.


> there should be a new specialization of "ML DevOps Engineer"

Do you mean MLOps? Nothing new about it. We have two full-time MLOps engineers at our startup.


Python is awesome because they built what people wanted.

Python is terrible because they built what people wanted.


I just skimmed their FAQ at https://vast.ai/faq, and it seems like it could use an update. E.g., it says "Initially we are supporting Ubuntu Linux, more specifically Ubuntu 16.04 LTS.". That version of Ubuntu has been end-of-life'd for several years, and when I just tried vast.ai out, it seemed to be using Ubuntu 20.04. There were also a couple of words with letters missing (probably trivial typos) that could be found with a spell checker.

The questions in their FAQ are really interesting though, in terms of highlighting what users care about (e.g., there's a lot devoted to "how do I use vast.ai + google colab together"?).

I also wonder when vast.ai started? Sometimes you can get insight from a company blog page, but the vast.ai blog seems to start in Feb 2023: https://vast.ai/blog . There's a bunch of "personal experiences" with vast.ai from 3 years ago in this discussion though: https://www.reddit.com/r/MachineLearning/comments/hv49pd/d_c...

A comment in that discussion mentions yet another competitor in this space that I've never heard of: https://www.qblocks.cloud/ -- I just tried Q Blocks out and the new-user experience wasn't as good for me as with vast.ai: you have to put in $10 to try it, instead of getting to try it initially for free; there is a manual approval process before you can try data center class GPUs; you only see that your instance is in Norway (say) after you try to start it, not before; it seems like there's no ssh access, and they only provide Jupyter to connect; and neither PyTorch nor TensorFlow seemed to be installed. They could probably update their pages too, e.g., https://www.qblocks.cloud/vision is all about crypto mining and smartphones, which feels a bit dated... :-)


TensorDock Marketplace is another option: https://marketplace.tensordock.com/order_list

It's unique in that you can set your own prices; it's a true spot marketplace. I've grabbed 2x 3090s for $0.02/hr before.

Probably no good for training (it can be interrupted at any time with zero warning; ssh just drops and that's it), but for my inference use cases it lets me spot heavy compute for pennies.



Check here to see the current bid prices / GPU setups: https://cloud.vast.ai/create/


My computer is sitting mostly idle at home, thanks for this.


Serious Q, as I don't know Twitter's internal infra at all... but with shrinking revenue from ads, or maybe less engagement by users, and the influx of Threads, maybe Twitter could offer up slices of its infra (even if it's rack space, VMs, containers, connectivity, who knows what) to support startups such as this?

Basically, Twitter devolves into the colos of the late '90s :-)

-

For those who didn't notice, it was tongue in cheek.


I've generally tried to give Twitter the benefit of the doubt but I would never trust them as an infrastructure provider in their current incarnation. Reliability and consistency have been so far from their focus.


Would you really trust a company that doesn't pay its rent to run your infrastructure?


Generally when you just stop paying your bills, the datacenter holds your hardware and eventually auctions it off to cover some of your debt. I seriously doubt Twitter has any access to the two of the three datacenters Elon decided not to pay for.


How did you get the money to buy 512 H100s?


Ask no questions, hear no lies.


EDIT: They seem to be in a fund-raising / debt-raising stage. Great initiative.


Their announcement says "We can probably get a good deal from a bank [...]", so maybe they don't just have 20M USD sitting around.


Well, this pushes me even further in the direction of thinking that they are actually good guys who need support, and that they are trying to bring a good deal to the table :)


unrelated to this specific initiative, but - I keep seeing a lot of announcements of huge VC rounds around what's effectively datacenters for GPUs. Curious about the math behind that - I feel like those things get obsolete so fast, it's almost like the whole scooter rental thing, where the unit economics doesn't add up.

Anyone have an insight?


Sentence one of the post clearly states that they are VC funders who are doing this for a round of startups they just funded, and that they're looking for others to be a part of it.


Oh no, definitely not. We just got a loan.

Neither Alex nor I are currently VCs, and this has no affiliation with any venture fund.

We want to be a customer of the sf compute group too!


How’d you get this loan? Is it from a benevolent individual who just wants to make something happen?

If not, and you got the loan from a bank, I'm super curious how you were able to get the bank to trust that renting out the GPUs would cover the loan, or whether some other reasoning convinced them. Assuming you aren't trying to turn this into a big business, that knowledge might help a lot of other players run similar programs and further democratize SOTA GPU access.


I’m fairly certain this loan is either a private individual or a HELOC or something. No way is a bank just going to loan out a bunch of money to some startup like this.


I’m curious, how are those loans guaranteed?


The only guarantee is them not paying it back


Noob thought: so this would be a blueprint for how mid-tier universities with older large compute cluster ops could do things in 2023 to support large LLM research?

Perhaps it's also a way for freshly applying grad students to evaluate a university looking to do research in LLMs that requires scale...


Like, to clarify: a new grad student could look at the current group and ask, "Hey, I know you are working on LLMs, but how many $$ of your grant are dedicated to how many TPU hours per grad student?"


554 5.7.1 <evan@sfcompute.org>: Relay access denied

554 5.7.1 <alex@sfcompute.org>: Relay access denied


!!!!!! fixing this. For the moment, evan at roomservice dot dev


Ah, putting out flames live on HN. Back in the day it was on IRC or just on the phone with the customer. I miss those times.


fwiw, https://roomservice.dev/ is currently a 404


Ah yeah, that's normal! Was from my old CRDT company, and works as a good emergency email while we debug our DNS.


I assume it was a Take3 reference. I wanted to point it out, in case it was supposed to return more than a 404.


http != smtp

  roomservice.dev. 60 IN MX 5 alt1.aspmx.l.google.com.
  roomservice.dev. 60 IN MX 5 alt2.aspmx.l.google.com.
  roomservice.dev. 60 IN MX 1 aspmx.l.google.com.
  roomservice.dev. 60 IN MX 10 alt3.aspmx.l.google.com.
  roomservice.dev. 60 IN MX 10 alt4.aspmx.l.google.com.
  roomservice.dev. 60 IN MX 15 4ig53n4pw7p3cuxm7n7xi7dpuyq6722aipexvhkngzbd2e4mudmq.mx-verification.google.com.


I know the difference between an email and a web page, tyvm.


done


Correct me if I’m wrong, but doesn’t Lambda Labs already provide them at $1.89? What’s the point if you’re not starting out as the cheapest?


Ah that’s only if you pay for 3 years of compute upfront. Most startups, especially the small ones, really can’t afford that


Looks like their site is quoting a rate of $1.99 now https://lambdalabs.com/


See this post above: https://news.ycombinator.com/item?id=36935032

Price and market depth are very different things


Nat Friedman and Daniel Gross set up a 2,512 H100 cluster [1] for their startups, with a very similar “shared” model. Might be interesting to connect with them.

[1] https://andromedacluster.com/


Nat & Daniel’s cluster is great, and we fully recommend startups seek out this option as well. Nat & Daniel are some of the best investors one can have


Will it be a Slurm cluster, or what kind of scheduler is SFC planning to use?


Wishing y'all the best of luck. This would be huge for a lot of folks.


What kind of hardware setup are you planning out? Colocation, roll-your-own data center, something in between? Any thoughts on what servers the GPUs will be housed in?


Honest question I don’t know how to think about: are we further along or behind with AI given crypto’s use of GPUs? Have the same cards bought for mining furthered AI, or maybe that demand led to more research into GPUs and what they can do, or would we be further along if we weren’t wasting these cards on mining?


Ethereum's (thrice delayed) move to PoS put a glut of GPUs on the market, just in time for the AI boom to swallow them back up, so I think it ended up okay. NVDA certainly had a great few days in the market thanks to AI though.


ETH was mined mainly on consumer GPUs, which, as far as I understand, have too little VRAM for most AI training.


How are you going to sell access and divide the resources?


Just curious, do you guys use renewable energy to power your cluster?


I love this. We at Phind.com would love to be a part of it.


During a gold rush, sell shovels.

When was the last time you spoke to a chatbot?


For me, today and almost every day since the beginning of this year. Not sure if that saying applies here.


Chatbot in the sense I think you mean is a horrible application. Millions of people are using large language models daily though.


Downvoted by others, yet very true. This is a valid business model, nothing to be ashamed about it.


"Once the cluster is online ..."

Where will the cluster be hosted ?

May I suggest that you get your IP transit from he.net ?


Not to mention, San Francisco is not known for having cheap real estate, nor is it known for having cheap electricity. On my last (residential) PG&E bill, I paid $0.50938/kWh at peak.


While business rates may be different, California cannot be a sensible place to host power-hungry infrastructure; our electrical rates are easily 5-8 times those of other locations within the US.


[dead]


The billion dollar question is:

Who is funding this?

Cause if it’s VC then it’s going to have the same fate as everything else after 5-7 years.

I hope y’all have as innovative of a business model. You’ll need it if you want to do what you’re doing now for more than a few years


What's wrong with doing something profitable for a few years? H100s in a couple of years will be like having a cluster of K80s today.

Not everything has to grow to have the appetite of Galactus and swallow a whole planet. Making single-digit millions of dollars over a couple of years is still worthwhile, especially if it helps others and moves humanity forwards.

This project isn't ever going to want to try and compete with AWS, so no, it's not a billion dollar question. $20 Million, yeah.


You’re completely right in everything you say about growing sustainably and making some money over time. But if this project is VC-backed, that all goes out the window: it won’t be profitable unless it massively Galactus-scales to compete with AWS in 5-7 years, and after that it will almost certainly fail, like the vast, vast majority of VC projects.


Hey I agree!

That’s why I’m asking because a “bootstrapped” company like you describe has a future…

One backed by VC doesn’t

I mean they may have a future but not like you describe


Please take this question without prejudice.

Is it accurate to say you’re willing to go into ~20,000,000 USD debt to sell discounted computer-as-a-service to researchers/startups, but unwilling to go into debt to sponsor the undergraduate degrees of ~100-500 students at top-tier schools? (40k - 200k USD per degree)

Or, you know, build and fund a small public school/library or two for ~5 years?



