There is so much interesting stuff going on in GPU compute that isn't crypto. I'm really excited about this because there are SO MANY GPUs that are now going to be cheaper than sand. There is A LOT that can be made of that and I intend to get mine. I think the crypto boom really covered up what we can really, really do with GPU compute and possibly stifled adoption and innovation, but now we've got so many just sitting around. Which is super useful as we move into a world where we can no longer count on getting things manufactured and shipped worldwide in what feels like an instant.
> There is so much interesting stuff going on in GPU compute that isn't crypto.
For sure, but there were many crypto operations with data centers that had hundreds or thousands of GPUs. For example, a report from JPR estimates that crypto miners bought 25% of all GPUs produced in 1H'2021.¹
Even a small increase in demand can result in a large price change if it pushes into a different part of the demand curve (e.g., if the "cheap" demand has been supplanted with customers willing to pay more then the market price can change drastically).
That's not quite true, unless the 3060 Ti counts as a high end card, I guess. It was easily the most popular mining card of the 3000 generation due to its excellent MH/W compared to the others. And prior to that, AMD's mid-range 5700 was the best, to the point where at times you could sell your 5700 to a miner, buy a new 6700, and have cash left over.
I'd like to see some PoW scheme for public good projects like SETI or Folding@Home or even CGI for a fan fiction, where there's perhaps something more than just bragging rights for contributing. I'm not sure what that would look like exactly.
For PoW you don't just need work; you need work that is easy to verify, has an easily scaled difficulty level, and is difficult to cheat at - all with no centralized trusted authority.
Even when SETI/F@H was worth only fake internet points they already had problems with people cheating. What happens when it's worth real money?
In addition, you need something that deliberately has no other value besides doing PoW.
Otherwise, all of that cryptocurrency's mining power could instantly abandon that cryptocurrency for the other thing if market fluctuations give it a higher ROI than PoW.
You don't just need easy verification; you also need a way to fairly distribute the problem instances.
Take primes. How do you know the person generated new primes and didn't just have a bunch saved up? With hashes you have to hash the current block, which is unpredictable. It also means that as time goes on it gets harder and harder to produce a competing chain, because you don't just have to solve the current block but also redo the work for all the preceding blocks.
Additionally, NP-complete problems are only hard in the worst case; the average instance tends not to be super hard, which is a major sticking point for this application. So you have to somehow verify that the person is solving a hard instance of the problem and not an easy one.
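For illustration, here's a minimal Python sketch of why hash-based PoW has those properties: the puzzle depends on unpredictable block data, verification is a single hash, and difficulty is just a knob. (This is a toy, not any particular coin's actual algorithm.)

    import hashlib

    def verify_pow(block_header: bytes, nonce: int, difficulty_bits: int) -> bool:
        """Verification is one hash: check that H(header || nonce) is below a target."""
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    def mine(block_header: bytes, difficulty_bits: int) -> int:
        """The 'work': ~2**difficulty_bits hash attempts on average to find a nonce."""
        nonce = 0
        while not verify_pow(block_header, nonce, difficulty_bits):
            nonce += 1
        return nonce

    header = b"prev-block-hash||transactions||timestamp"  # unpredictable in a real chain
    n = mine(header, difficulty_bits=16)
    print(n, verify_pow(header, n, 16))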
Finding prime numbers, even very large ones, is neither difficult nor useful; there are simple algorithms for determining the primality of a number which don't require extraordinary amounts of computing power to execute.
Even factoring large semiprimes isn't inherently useful -- it's an interesting mathematical exercise, but isn't a goal unto itself.
Examples would be:
Multiple PoWers independently producing solutions and then verifying each other (although this is wasteful as work is being duplicated)
PoWers achieving increasing status levels when not cheating over time, incentivising long-term honest behaviour
And how do you make sure that a single person doesn't create multiple accounts in order to also be the verifier of their own work? Since that is the entire problem PoW is trying to solve, if you have a solution to that, why not just use it directly?
> PoWers achieving increasing status levels when not cheating over time, incentivising long-term honest behaviour
We're talking millions of dollars here (if not more). In the real world people would easily spend years doing a long-con with that type of pay out. Why wouldn't they do so on the internet?
This is how BOINC and similar distributed computing projects work: each unit of work is distributed to multiple workers so the results can be cross-checked, which discourages malicious actors.
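Roughly, the validation step looks like this (a hedged sketch of the idea, not BOINC's actual scheduler code; the function and field names are made up):

    from collections import Counter

    def validate_work_unit(results, quorum=2):
        """results maps worker id -> a canonical hash of that worker's output.
        Accept a result only if at least `quorum` independent workers agree;
        otherwise the server would issue more replicas of the same work unit."""
        value, count = Counter(results.values()).most_common(1)[0]
        return value if count >= quorum else None

    # Two of three workers agree, so "abc123" is accepted; the odd one out
    # can be flagged and denied credit.
    print(validate_work_unit({"w1": "abc123", "w2": "abc123", "w3": "zzz999"}))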
The other very important part of BOINC and the like is that they offer no monetary gain for your contribution. That's how you eliminate 99.9% of malicious actors, and the rest can be eliminated the way you described.
It won't work in the PoW scheme because there is presumably some cryptocurrency involved as a reward
Since the new GPU offerings from Nvidia have secure multi-tenancy, I think you're going to start seeing things like that. Especially when you look at what's happening with compute being more universally adopted via Vulkan. I haven't seen the framework for such a thing yet, but you make a good point. I think I've got half a model in my head that could be retooled for something like that in a flexible fashion. It could work both ways too: either you give away GPU cycles for research, or you get paid for your cycles, or you post your own job to be computed for pay or for donated cycles or money. As an example, wanna render predictions for erosion on a property you wanna buy? Put up the job, people that wanna contribute can, and you get the result. Any user could set their hierarchy of things they contribute to, like Patreon sort of. So bucks or compute cycles can be chunked out to them by order of need and weighted priority.
Humm. Someone beat me to this idea so I don't have to do it.
Vulkan isn't suitable for this, with most GPGPU compute stacks not accommodating Vulkan's SPIR-V dialect, including Khronos's own SYCL.
Note that this wave of datacenter GPUs doesn't support OpenGL or Vulkan, because they either don't have rasterizers at all (AMD MI250X) or have them only on two TPCs for basic SW compat and don't expose OGL/Vk via virtual functions (NVIDIA).
As such, better OpenCL drivers are what you should ask for. But then AMD's OpenCL drivers are not exactly that great, and NVIDIA's have their own quirks.
Gridcoin rewarded seti@home when seti@home was operating and still rewards a number of other BOINC projects. It is proof of stake based and the rewards are layered on top.
There is a similar cryptocurrency called Curecoin for Folding@home, although I think they do use SHA-256 mining (i.e. not useful computation) for part of the rewards.
Mining is only profitable when the block reward and transaction fees are worth more than the costs of mining (electricity, capital costs). ETH was the only coin big enough to support all those GPU mining rigs. With ETH gone, there are too many miners and not enough valuable stuff to mine.
Bitcoin mining all switched over to custom ASICs long ago, and is barely profitable even on that hardware. The GPUs that were used for Ethereum mining can't compete.
For some more background: bitcoin's POW is basically just sha256, which was trivial to port first to GPUs and then to custom hardware. That makes mining a bigger up-front investment and thus more centralized, which is why almost all later coins chose POWs that aren't easy to speed up with ASICs
Bitcoin is using a different hash function. Bitcoin uses SHA-256, which is easy to implement more efficiently in FPGAs and ASICs, which blow GPUs out of the water.
Ethereum's designers thought that was a bad thing, so they chose an algorithm that was intentionally difficult to implement in an ASIC. They thought having GPUs be the most efficient platform would be more decentralized and closer to one person, one vote compared to ASICs. As a result, people used GPUs for ETH.
Ouch, looks like for most cards even if energy was free a miner would be looking at ~1000 days to break even. Hopefully this really does more or less end GPU mining.
Those numbers are based on $0.1/kwh electricity costs. Here in the UK the cost is around $0.4/kwh, so I don't believe any of those cards would generate any sort of profit. Basically the electricity cost is the main driver, and you'd need to check what your local cost is to evaluate this.
Of course, if you are running on home-generated power (e.g. solar) then the equation changes, but so does the capital cost.
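The break-even arithmetic is simple enough to sketch; the card cost, daily coin revenue, and power draw below are made-up round numbers, only the structure matters:

    card_cost = 500.0        # USD up front (assumed)
    revenue_per_day = 0.50   # USD of mined coin per day (assumed)
    power_kw = 0.22          # card draw in kW (assumed)

    for price_per_kwh in (0.0, 0.10, 0.40):   # free power, cheap US rate, current UK rate
        electricity = power_kw * 24 * price_per_kwh
        profit = revenue_per_day - electricity
        breakeven = card_cost / profit if profit > 0 else float("inf")
        print(f"${price_per_kwh:.2f}/kWh: {profit:+.2f} USD/day, break-even in {breakeven:.0f} days")

With free power you get the ~1000-day break-even mentioned above; at typical retail rates the daily profit is already negative.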
In winter, if one uses resistive electrical heating - for example in apartment blocks where heat pumps are not allowed - mining can be effectively free, as the consumed electricity just heats the home.
I live in an apartment in Norway where an air-source heat pump is not allowed by regulations, and ground-source is way too expensive even at current electricity prices and would take years to get approved, as the area is considered a historic part of the town and must be preserved.
So a couple of years ago I allowed my son to mine coins on his GPU when it was cold, letting him earn something like 2-4 Euros per day.
But then I stopped it, as I realized that this just promoted the insanity of using proof of work in other places.
Wouldn’t an electric space heater provide more heat per watt? This isn’t a rhetorical question because I don’t know the answer. It’s just my intuition that something designed to produce heat would do that more efficiently than something designed to do computations that produces heat as a side effect.
Basically, heat is waste energy, and a space heater is just a resistor: it turns 100% of the electricity into heat and does nothing else useful with it along the way.
If 500 watts of electricity go into a heater or a computer, 500 watts of heat will come out, minus some light from LEDs and the screen, some electricity on the Ethernet cable, and other negligible things.
If it does some useful computation on the side, that's nice, but energy-wise a 500 watt gaming computer is just as efficient as a 500 watt heater when used for heating purposes.
No, it's the same thing. All the energy eventually becomes heat; no energy is lost in a closed system. In the case of a GPU we're converting electrical energy into thermal energy, plus some mechanical and sound energy from the fans. So a GPU should be just as efficient at producing heat as a space heater; the electricity just goes through a few more steps first.
Of course, but what he's saying is that you don't generate currency as a side effect of running your dedicated heater. Whereas the inverse is true with mining - you generate heat as a side effect of running your mining setup. When that's desirable, it can be factored into the net cost.
If you have excess capacity and no way to sell it to a grid, then it might make sense to do some mining, but then you are really just recouping some of what you over spent on solar panels.
Presumably those other coins are close to an equilibrium point where more people mining them would be unprofitable. I.e. if the total block rewards from other coins are 1 million dollars per month, there is no way spending 2 million per month on electricity is a good idea.
I saw that argument on twitter when talking about the energy reduction for ETH mining. Someone commented that it won't change because they will just focus their GPUs on some other coin.
TFA is pointing out that other GPU-based POW chains have become unprofitable. Partly because of the massive influx of GPUs increasing the difficulty of those chains, and partly because those coins are just not worth as much as ETH. If that's true, then lots of miners are going to be shutting down their GPUs and selling them.
Copper companies are incentivized to sell as well, but they don’t get to pick the price. Once the price falls enough some copper mines go out of business.
The same thing happens with power companies and we get a net reduction in generation.
Perhaps you are right about power companies wanting to sell more power (although it seems like that is not the case for the marginal kWh, given all that power companies do to incentivize energy efficiency for their customers).
But even so, at least that power would go to something useful (keeping buildings comfortable, purifying water, who knows what) rather than being burned in the crypto-pit.
I'm getting downvoted on my original comment, sigh HN.
> Perhaps you are right about power companies wanting to sell more power (although it seems like that is not the case for the marginal kWh, given all that power companies do to incentivize energy efficiency for their customers).
It is literally called a power company. They offer special rates for large customers. That alone creates a marketplace.
> But even so, at least that power would go to something useful (keeping buildings comfortable, purifying water, who knows what) rather than being burned in the crypto-pit.
Did you have a choice in where that power went to begin with? No.
That tweet thread is 100% wrong. I absolutely do have the moral authority to tell everyone that burning billions of joules per day in a bank of resistive elements and then dumping the heat directly into the atmosphere is morally wrong. It's not even a tough question. It is clearly a malicious (hypothetical) act being done only to hurt others.
So if there's an unambiguous case of morally abhorrent energy use, everything else is up for debate. That tweet thread goes off the rails early, as it's clear the author is financially incentivized to not understand the position he is arguing against. So it's no surprise the argument is nonsense.
It opens a can of worms. If you start telling people how they should or should not use any form of energy, then you have to look at yourself first.
Isn't this just the "yet you participate in society" meme? We all can and should attempt to reduce our impact; that might mean we don't own a car, fly less frequently and take trains instead, or perhaps just work from home... If I need to fly from North America to Europe, I really don't have a choice. You could do it by boat, but ships are largely powered by fossil fuels too. That doesn't mean I actively support fossil fuel companies.
It is as absurd as saying: your website makes money from Google AdSense and is therefore a waste of power, because Google is profiting off people advertising crap to others.
Just imagine the amount of compute, mechanical and human, that's spent trying to force people to watch ads that they don't want to see to convince them to buy things that they don't need.
Alternatively consider planned obsolescence. Devices are designed to break and be replaced rather than last today, as a rule. Consider just phones alone on this front. The amount of resources that consumes is unimaginable, and that goes well beyond just electricity or manpower.
Ultimately our entire society is built around the largely arbitrary expenditure of absolutely massive amounts of energy and resources to accomplish tasks that, in most cases, are not especially useful or necessary. Optimizing GDP, a goal of almost all developed nations nowadays, is often just a proxy for increasing waste. Buying the same phone 20 times gives a 20x boost to GDP relative to buying it once. And yet this is the metric we've chosen as a measure of progress.
In most ethical frameworks, making many people happy vastly outweighs making a few people rich. In some ethical frameworks, making people happy is the most moral thing it is possible to do.
There's a big difference between doing something that's maybe not the best use of resources but still serves a purpose, even if it's just pleasure (ex: watching the Kardashians), and doing something for no reason that makes any sense at all.
We should all be judicious with how we use our limited resources. The market often serves to incentivize that good judgement. When the market starts incentivizing waste (e.g. “cryptocurrencies”) that’s when you need to start banning things.
It's not possible to perform any action without using energy, so why not just say "No one has the moral authority to tell [me] what is a good or bad."?
Ultimately it wasn't the grandparent that asserted that proof-of-work Ethereum mining wasn't useful; it was the Ethereum blockchain community themselves, which is why they have stopped doing it.
It’s not a “moral opinion” to accept someone else’s assessment of the utility of their own actions.
Is it a “moral opinion” to state a preference that crude oil is more useful when refined and injected into an ICE vehicle than if it’s burned on site in an oil well fire?
You have this backward actually, the fact that Ethereum went to great lengths to design a more power-efficient replacement for PoW (rather than, say, doing something else with their lives) shows that PoW was useful to begin with.
The fact that Ethereum went to great lengths to design a more power-efficient replacement for PoW shows that the developers of Ethereum derive personal value from engaging with a technical challenge. It suggests, but doesn't prove, that any permutation of the project was or is actually useful.
Your definition of useful sucks, and you should feel really bad about that. I don't get too concerned about how you waste my oxygen, but you're making me rethink that decision.
I’m sorry to hear that you’ve become stuck in a semantic bog-hole, but amused to learn that my words are so powerful that they overwhelm your conscious agency.
My point is that it is not your (or my) right to tell anyone what a good use of power is or is not. Simply because we all consume power one way or another. Expressing an opinion is fine... we can all have one of those, but we should be considerate of pot/kettle there.
I don't think that's the only way. There are other factors that go into choosing a power source. For example, they could offer cleaner power or the quantity of power available can also be factors.
Why is the assumption that crypto has held back innovation? If anything normally pumping a ton of money into something causes more innovation to happen.
Now that there's a glut of GPUs from the drop in mining demand, there are likely to be people looking to use them profitably, and thus new innovation.
The dot-com boom and bust caused a lot of dark fibre to be laid, and I think the internet has been better for it. Without that dark fibre, the expansion of the internet wouldn't have become as easy and mainstream as fast as it did.
Dark fiber is useful for a lot more things than GPUs, which were built for a specific type of processing.
But more importantly, it’s not even clear that buying a miner’s GPU is a good deal because these GPUs tend to be really worn out with short working lives.
Most people will probably be better off simply buying cheaper GPUs new, or buying slightly cheaper used GPUs on eBay that were not popular for mining.
I'm not really talking about second-hand GPUs; I'm talking about new GPUs that were made on the assumption that prices and demand would stay high due to mining.
Of course, a second-hand GPU might be a bit useless after the abuse of mining for many years (though I think with proper cooling and replacement of the fans, those GPUs could have been kept in good working condition for resale).
Nvidia was spending money, time and energy to cater to them one way or the other, instead of on other areas where GPUs are actually good for society.
Your sentence is very generic, and I'm not sure why this should be a universal law. Putting money into the right thing might cause innovation; money alone doesn't.
They did this for some tiny subset of GPUs, and they achieved it largely by making those GPUs worse even for non-mining purposes than the ones they replaced.
Pumping money into blockchain scams did cause more innovation in SEC fraud and Ponzi schemes, but presumably they meant innovation in science/technology.
Until demand was proven, nobody was interested in making GPU compute at scale beyond niches (e.g. scientific calculation or server clusters). As it is, we are about to enter a squeeze that is going to see a few of the big players in retail GPU manufacturing drop out, so don't expect this glut of cheap GPUs to last.
Learning how to program shaders is actually a pretty cool thing to do. It's the most math heavy programming I've ever delved into and for that I like how it feels like it's getting my knives real sharp.
Would we count protein prediction as machine learning? The learning part is possibly done; now people need to run the models to invent new proteins.
> SO MANY gpus that are now going to be cheaper than sand
The market knew that the PoS merge was coming. I'd have expected the market to therefore have already factored the PoS transition into GPU prices. Why do you think GPU prices are tanking now?
The market for GPUs isn't as efficient as the stock market. Miners might also have wanted to profit right until the end. The price doesn't come down until the used GPUs actually hit the stores.
I have been using a gtx 980 for a long time now and have wanted to upgrade but everything still seems so expensive. Right now on newegg Canada a ASUS ROG Strix NVIDIA GeForce RTX 3080 OC is $1039.99. Seems like a lot. Will that price go down? Or should I be looking deals with used cards?
Last big crypto crash I bought a used Titan XP for like $350 (still running strong and kicks ass). Got me through this whole GPU debacle. Always buy low!
I'm just finishing up Andrew Ng's course, and would like to pick up a GPU for building models (probably focusing on U-net image models), do you have any references on what I should expect from trying to buy a used one like you did?
I know it's a pretty random question, but I honestly don't know anything about GPUs other than that my life would be a lot easier if I had one (I run an old MacBook Pro, I have an old Asus laptop that runs a Debian distro, and finally my very old gaming PC is an Alienware Alpha).
I am no expert, but my off-the-cuff thought is that buying a GPU and trying to run it as an external GPU for any of those machines will be pretty pointless.
First, it probably won't work; certainly for the Mac you will face driver issues that were beyond me the last time I tried (about three years ago). I suspect things are worse now... Nvidia has not been helpful. You could probably get a Radeon working as an eGPU, but for ML you really need Nvidia and CUDA (unless you are seriously into homebrewing code).
But the other issue is that you need to feed the beast - that is, you need to pump data in and out of the GPU. For that you need a fairly fast CPU, a fast SSD, and fast interconnects to the GPU. You also need a good power supply that can keep everything running, including the fans... and it all needs to be in a good box with good airflow, or quite quickly you will have no GPU!
Maybe build a box like this with a cheap Nvidia GPU, get it working so you know you have a viable rig, and benchmark it to see if you are bottlenecked on the GPU (it's been a long time since I did this, so I'm not 100% sure how - but basically if the GPU is at 90% for ages and everything else is sitting there looking at it, that's a good sign). Then buy a top-notch second-hand card (about £500 right now on eBay) and resell the cheap one. If you hit a hitch in between, you know you have a problem to solve and haven't spent all your money. Extra points for getting multiple GPUs running... I did that in a corporate setting with Supermicro boxes about 8 years ago and it drove me up the wall at the time. I suspect it's now possible to have 2 GPUs in a consumer box/motherboard, but I am not sure what the state of play is.
Are you concerned about damage from heat during previous use? You're going to be running it for hours or days anyway if you train on large complex datasets, so a GPU that's been used for a few months might not have lost you too much lifetime.
We are already at a stagnation point in GPU speeds, with the most recent generations from NVIDIA simply pumping more juice (watts) into the cards instead of making big design changes and efficiency optimizations.
I don't believe R&D funds will dry up either, since they will simply be reallocated to AI and datacenter workloads, which have been on the rise more recently.
The rumored 4090 is going to pull 450W and score ~19000 on 3DMark time spy extreme. The 3080 pulls 350W and scores 9000 if we are being optimistic. If these numbers are ballpark correct we are talking about 60% more performance per watt in two years. Your story may eventually come true but it is not yet true.
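Checking that arithmetic with the numbers quoted above (both are rough, one rumoured):

    score_4090, watts_4090 = 19000, 450   # rumoured 3DMark Time Spy Extreme score and TDP
    score_3080, watts_3080 = 9000, 350    # optimistic 3080 score and TDP

    gain = (score_4090 / watts_4090) / (score_3080 / watts_3080) - 1
    print(f"perf/W improvement: {gain:.0%}")   # ~64%, i.e. roughly the 60% claimed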
Maybe, kind of, in terms of clock speed... but the parallel throughput and the ability to securely slice and provision GPU workloads is what was delivered this go-around. As well as optical NVLink and all the crazy new stuff coupled with their CPUs. Go look at Crowd Supply and see previous-gen cores being strapped to massive software-defined radio arrays.
This isn't just about pushing framerate. It's about vector processing being massively more efficient than CPUs for almost everything, and the tooling for serving consumer needs that aren't just making pretty bleep bloops go boom boom for fun is now very, very mature.
I think you are both right. Lower prices and lower R&D. If you buy a GPU because you care about objective performance, then it's bad. If you buy a GPU to keep up with the Joneses, then it's good.
One I'm interested in is graph databases powered by linear algebra (see GraphBLAS and RedisGraph). Putting the graph structure in a sparse matrix in GPU memory and doing matrix multiplication to perform queries means you can traverse the entire graph quickly, exploiting the massively parallel nature of the graphics card.
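The core trick is that one sparse matrix-vector product advances a whole traversal frontier by one hop. A small CPU-side sketch with SciPy (GraphBLAS/RedisGraph use semiring operations and, on a GPU, libraries like cuSPARSE, but the linear-algebra idea is the same):

    import numpy as np
    from scipy.sparse import csr_matrix

    # Adjacency matrix of a tiny directed graph: edges 0->1, 0->3, 1->2
    A = csr_matrix(np.array([
        [0, 1, 0, 1],
        [0, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
    ], dtype=np.int8))

    frontier = np.zeros(4, dtype=bool)
    frontier[0] = True                    # start the traversal at vertex 0
    visited = frontier.copy()
    while frontier.any():
        # One sparse matvec finds all out-neighbours of the current frontier.
        frontier = ((A.T @ frontier.astype(np.int8)) > 0) & ~visited
        visited |= frontier

    print(np.nonzero(visited)[0])         # -> [0 1 2 3], everything reachable from 0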
> There is so much interesting stuff going on in GPU compute that isn't crypto.
There's a lot of stuff that isn't any better than crypto either: deepfakes, producing hundreds of thousands of stable diffusion pics of the same scene.
Much of this is still a garbage fire of greenhouse gases and e-waste, and used GPU prices won't change that. Many ML advances are simply more compute and bigger models in the end.
By that logic you can include gaming in there as well. Thousands of people re-playing the same scenes over and over again, instead of just watching a let's play of the first person that bought the game. I guess the only reasonable uses for GPUs are cancer research etc.
> By that logic you can include gaming in there as well.
But I sorta do... excessive gaming is a waste of human potential and GPUs.
I honestly regret spending a lot of my teen years playing games. I know that sounds ultra-boomer, but the only thing games ever actually did for me was learning how to mod them and getting programming experience from that, something that simply doesn't exist on consoles and sadly in many of the most popular games on Earth.
For a time/experience ratio that's not the best way to go about it. I would have been better off surfing and chasing girls with a bit of coding on the side.
If I could go back in time for ten minutes I would berate younger me for spending too long on an xbox, likely to the mocking amusement of my younger self.
> excessive gaming is a waste of human potential and GPUs.
A lot of things humans do don't produce anything, but are just done for fun and are unfortunately bad for the environment. That would even apply to surfing if you like; it produces plastic waste in the end.
> but the only thing games ever actually did for me was learning how to mod them and getting programming experience from that
Isn't that just a case of "the grass is always greener on the other side of the fence"? You're dismissing the fact that it lead you to learn programming, which is a valuable skill for many careers.
Are you sure younger you would have been the surfer dude? I think there is nothing wrong with just not being that extroverted as a teen, not being the cool guy girls flock to. That's not something you just decide to be. Now you can obviously say it would have been better to be the cool dude and be really good at coding at the same time, but that's just hindsight and assuming an ideal outcome, and that's not how it usually works. There are probably a lot of people in their 40s who partied way too much, had girls and enjoyed life, and now regret not having been more of a nerd instead, because being in IT seems like the foolproof way of making a lot of money while they struggle.
And just for fun, let's imagine you had gone surfing once a week as a teen and taken the car to the beach; how much would a half-hour car ride mean for environmental impact, compared to a gaming rig at 300W?
I'd be surprised if it's actually more than crypto; while there are way more gamers than crypto miners, they don't game 24/7, don't have a dozen GPUs per rig etc.
Anyways my main point was that a lot of things people do for fun are more or less directly bad for the environment, like for example creating dozens of silly images with stable diffusion. But it's still better than crypto mining imo, as doing "fun things" usually benefits you/your mental health.
Lot of gamers, and the displays add a few more watts each to the total.
Fermi estimate: 10 million latest Xboxes, used for 1 hour per day.
Power estimates seem to vary from 120 W to 315 W, let's say 200 W including display. That's 2 GWh/day. Probably similar for Playstation, or at least close enough for a Fermi estimate. I'm going to guess similar for PC gaming also.
Smartphones are what, about 1 W? But a few billion of them? 1 hour per day makes that another GWh/day?
So probably about 7 GWh/day for Xbox + Playstation + PC + mobile, 2.5 TWh/year.
Bitcoin is estimated to use 131 TWh/year according to Wikipedia.
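The same estimate as a few lines of Python (all inputs are the rough guesses above, nothing measured):

    GWH = 1e6   # kWh per GWh

    xbox        = 10e6 * 0.200 * 1 / GWH   # 10M consoles * 200 W * 1 h/day ~ 2 GWh/day
    playstation = xbox                      # assumed similar
    pc_gaming   = xbox                      # assumed similar
    mobile      = 1e9 * 0.001 * 1 / GWH     # ~1 GWh/day as guessed above

    per_day = xbox + playstation + pc_gaming + mobile        # ~7 GWh/day
    print(f"{per_day:.0f} GWh/day ~ {per_day * 365 / 1000:.1f} TWh/year, "
          f"vs ~131 TWh/year estimated for Bitcoin")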
There have been scientific papers on power usage and PC gaming (only), and it comes out similar to crypto usage. There are not many things that take tons of energy - heating, cooling, crypto, gaming, etc.
Pretty sure if I wasn't gaming these few hours per week, my carbon footprint would be much higher. E.g. 2kWh is only 8 miles on an efficient EV and probably less than 4 miles on ICE. We have to measure avoidable costs in this case, not essential ones.
I'm upvoting this because, at the end of the day, this is brutally accurate.
Gaming is not an activity that produces something, nor is any other form of entertainment for that matter. So in the sense that nothing comes of the electricity and time used, gaming is an "anti-environmental" activity.
I personally don't agree with any of this notion, because life is worthless if we can't accept life is pointless, but if we are striving for a world where consumption must always produce something then gaming is one of many things that must be eliminated.
> producing hundreds of thousands of stable diffusion pics of the same scene
Why are you twisting reality? People don't generate hundreds of thousands of stable diffusion pics of the same scene. Instead they generate dozens to hundreds of images carefully tweaking the prompt and the starter image.
You're not going to get very far trying to impose your subjective perspective of usefulness on other people's use of energy. Energy is one of the foundational pillars of modern society, and other people are going to use it for all sorts of things, including activities that you don't like (but that are liked by others).
I'd suggest focusing your irritation on advocating for universally clean generation of energy. Regardless of how much energy each of us uses, and regardless of what we use it for (even if it's something that you consider to be "useless"), most of us seem to agree that we don't want to pollute our air and ruin our planet. However, the moment you start attacking things that make other people happy, you are risking losing support for that fundamental goal.
I don't know why you're being downvoted. At least the last part is totally true; advances in ML are unfortunately just bigger models, more data, and more compute.
But doing ML won't necessarily boost GPU sales, because most deep learning work has shifted to the cloud.
If you say something's bad, you imply something else is good. So can you identify some contrasting technologies which didn't start out as a "garbage fire of greenhouse gases and e-waste" or the appropriate equivalent? Do you want a world that contains only technologies that were born perfect?
Love the reference. On a more serious note, I'm really curious how this will play out. Nvidia seems to be doing its best to prop up the prices of existing models as it prepares to launch the 4000 series. The big question seems to be whether most of these miners will start mining some other token or get out of GPU mining entirely.
If the card has cuda support I would guess they're off to some sort of p2p AI / ML marketplace. Unfortunately AMD cards were actually better for mining. If anybody knows of something like vast.ai or render for AMD I'm all ears.
> they're off to some sort of p2p AI / ML marketplace
Seems to be at least slightly true, by my amateur judgement. Sometimes I use https://vast.ai, and it seems there are more offers than usual, currently ~180 instances available for rent.
Morgenrot Cloud (https://morgenrot.cloud/) is the main consumer grade AMD compute provider I know of. Not quite vast.ai in that they're centralized, but they've got the hardware.
New 3000 series retail prices on the high end cards have been steadily dropping, and it seems like on ebay used prices have dropped 10% in the last month.
As for the 4000 series cards - they've stated in SEC filings that they will be trickling out stock to keep prices high.
AMD are the ones who are really fucked; their cards suck, and nobody bought them out of choice but desperation. Now that the market is glutted, people will heavily prefer nvidia cards.
It depends on the game. In some games AMD cards blow Nvidia cards (of the same "tier") away; in others it's the opposite. The GP commenter who said they "suck" is incredibly wrong. I can't understand people who fanboy over giant corporations.
Since at least the Pascal microarchitecture NVIDIA has beaten AMD in performance per watt, and AMD is infamous for how unreliable and poorly performant their drivers are for Windows. AMD manages to do decently in benchmarks because they overclock and overvolt their cards with massive heatsinks and the cards last just long enough to run the benchmark before overheating and downvolting/downclocking.
All of this is accepted industry fact, and your devolving the discussion to personal insults is proof of this; otherwise you would have come armed with actual tests and reviews.
The last paragraph there about AMD seems completely baseless and overstated.
NVIDIA does have much higher market share and brand recognition from both ML and gamers for now, but AMD has been firing on all cylinders for quite a few years, and has built a terrific open source driver codebase to further refine.
Even throughout the 2010s AMD offered price-competitive models that definitely didn’t “suck” for what they cost, or require “desperation” to purchase.
> AMD are the ones who are really fucked; their cards suck, and nobody bought them out of choice but desperation. Now that the market is glutted, people will heavily prefer nvidia cards.
I bought only AMD equipment for the last few years out of disdain for the market manipulation by Intel/Nvidia (see the Intel x86 compiler and Nvidia GameWorks, both not so much optimizing for their own hardware as deoptimizing for competitors'), and I have gotten completely adequate gear for the price paid. Not much desperation here.
To some extent this has already happened, as seen in the recent crash in the GPU market from its highs a few months back. Obviously we did see some miners hold on to the bitter end, but it's not clear how much difference that inventory will make.
There is one other cryptocurrency use for these mining rigs, and that is to compromise existing chains.
Now that there is a great deal of excess capacity, presumably it would be possible to attack smaller chains in an attempt to glean some profit through double-spending, as those chains might now be vulnerable to larger-scale history-rewriting attacks.
The problem is that such an attack would be discovered and send the value of the token to zero, so you'd have a limited window to double-spend into something else that's valuable but also not revocable.
What if you spread it out across different brokers? How much communication is going on there? With traditional stocks, short positions have a certain amount of transparency because of regulation as far as I know, but I imagine such regulation does not exist for cryptocurrencies.
I have a similar fun conspiracy theory, that Satoshi is actual an alien farmer that injected the whitepaper into its human farm to get humanity's tensor calculation capacity up, and now injected PoS to switch the capacity over to AI for some ineffable purpose.
If you and your parent comment are correct, then anyone who owned bitcoin or ethereum before the PoS launch has helped out the basilisk, and anyone who didn't had better do something to catch up. Something besides buying crypto, since that phase of the basilisk's plan is complete.
Mine was always: "Everyone laughed at the young NSA intern Satoshi that he could convince the worlds criminal enterprises to open their financial books to the world."
Alternative: "Satoshi Nakamoto" was the nom de guerre of an emergent renegade AI who had figured out a way to induce monkeys to attach as much processing power as possible to a network.
In the last couple of years, we have been quite supply-constrained on the GPU front, so it probably has caused some short-term hindrance. In the long term though, I think your view probably makes more sense, like how funding science helps proliferate technology in industry.
I don't understand this well enough, but why can't miners just mine other coins? Was all GPU mining Ethereum-based?
I know that Bitcoin mining requires ASICs and GPUs can't compete with that, but I just assumed miners are mining one of many possible coins, with Ethereum being one of them.
If 80% of the revenue was from Ethereum, and now that part disappeared, 100% of the miners are left fighting over the 20% that's left.
It'd be like if all women stopped going to Starbucks tomorrow, and you asked, "Why don't they just sell to the men?" Well, they could, but they'd still be down ~50% in revenue.
Imo that's a bit of a failed analogy. More appropriate would be: imagine coffee is something very precious and Starbucks suddenly stopped selling to women. In that case, women need to go to other cafes, increasing the competition over coffee there. But then it doesn't seem so absurd any more - perhaps the coffee resources aren't so scarce (just as with crypto - does 3x the miners of some currency mean that each of them makes 3x less? I don't think so).
Also, is it true that 80% of revenue was from Ethereum, or is that dummy data?
I don't understand your analogy. Ethereum was paying miners $20 M / day before the merge, and now that money is gone. There's $20 M / day that used to be flowing to miners that no longer is.
No other coins provide the profitability margins that ETH did, so while miners can switch to other PoW coins, they will be paying more in electricity than whatever crypto they are mining is worth.
I would assume the problem is that a lot of alt-coins are built on top of the Ethereum blockchain, and that most of those that aren't are nowhere near as profitable to mine.
It's actually not that straightforward to plug in these consumer cards as 4x setup. We spent weeks researching how to achieve up to 7x RTX 3090 setup in a single rig. Could write up our method if anyone is interested.
It's not even just about the slots, it's about the PCIe lanes (which is something I never had to worry about until now, though I've built countless PCs in the past).
We tried a bunch of setups with Threadrippers and EPYC, and in the end settled on the ROMED8-2T, which is a monster motherboard.
We run 4x 2080s on threadripper systems. What sort of trouble did you run into? I thought threadripper has plenty of PCIe lanes. We didn't have any trouble but it could be I missed something, we had to get it working quick and I didn't do very much benchmarking.
Threadrippers are great and I had 4x Threadripper setup for the longest time, but they are a bit more expensive.
The advantage of EPYC is that because it's so common, we can find cheaper used ones on eBay. They are a bit slower, I believe, but we can deal with that by using Nvidia's DALI and decoding images on the GPU rather than the CPU.
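For anyone curious, a rough sketch of what "decoding images on the GPU" with NVIDIA DALI looks like (exact arguments vary by DALI version; the path and sizes here are placeholders):

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def(batch_size=64, num_threads=4, device_id=0)
    def training_pipeline(data_dir):
        # JPEGs are read on the CPU; "mixed" decoding finishes on the GPU,
        # which frees CPU cores on a slower (but cheaper) host.
        jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
        images = fn.decoders.image(jpegs, device="mixed")
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels

    pipe = training_pipeline("/path/to/images")   # placeholder path
    pipe.build()
    images, labels = pipe.run()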
Ohh I hadn't thought of there being cheaper ones on ebay. That's a good tip, I'll check it out for our next upgrade.
We're doing photogrammetry not machine learning, running some blackbox software that scaled best with clock speed so threadrippers were the most efficient option.
I put them in 4U Supermicro boxes with a Noctua cooler with two 9000 RPM Delta fans attached to it with zip ties.
I just built a rig with a ROMED8-2T as well. I got PCIe 3.0 x16 risers and zip-tied the cards into a rack shelf above the tower. It's super ghetto, and I can't believe it works, but it totally does. I'm hosting on vast.ai hoping someone will train with my 4-6 3090s, but everyone wants their large language models and image generation models that require more than 24GB of VRAM. shrug
Maybe some day I'll use it myself to train my plastic surgery outcome estimation visualization GAN or diffusion model if I can figure out how to fine-tune one.
Never thought I'd get a response from Jeremy Howard, now I have to post a well polished article! Thanks for all your teachings btw, I really enjoyed and learned a lot from the fast.ai ML course!
Would love to see the riser setup that you're using for such a monster!
We mostly gave up and just got barebones machines, since the cabling situation gets pretty tricky and the barebones' total cost is low relative to the GPUs anyway.
I posted a link just below to Twitter with an image of the riser setup. That setup worked well for 4x, but for the 7x we're mounting the cards upside down and arranging them like tree branches, if you will. So the trunk/floor is the motherboard, and as you get closer to the edges the cards are angled and use longer riser cables.
The issue we had with barebones was cost and cooling. We use $30 racks from Target and hang the GPUs with metal zip ties and a box fan below, so they get lots of air, we don't break the bank, and we can easily roll them around.
Sure, will do, though it might take some time to finish writing the blog post, you can get a preview of our previous setup with 4x GPUs here: https://twitter.com/ftufek/status/1569367127878139905. For those that are curious, that's running a Threadripper 3970x.
It's not exactly a "clean" one, like a proper 2u/4u chassis and server grade GPUs but it does the job for 70-90% cheaper.
If enough GPU miners stop, it becomes profitable for other GPU miners to mine. Fortunately, "enough" currently means something like 90% of GPU miners across the whole sector.
That's a good outcome. We'd still be in a situation with so many fewer miners that GPU prices aren't influenced as they have been. The investment becomes risky as it'll always be teetering on the edge. I'm finally looking forward to a new GPU to pair with this 11900K.
2) cryptocurrencies zooming in price and profitability again
3) cryptocurrencies actually being used. In all prior cycles, blockchains were empty (although people got a glimpse of what congestion would be like during the 2017 cycle). Miners earned the subsidy, which exists to get people to show up at all. Miners also get a cut of the transaction fees, but historically this was close to 1% of the subsidy. In the 2020 cycle it was frequently 250% on top of the subsidy, and way more than that when blocks were full. All mining calculators were wrong because they only show the predictable subsidy and not a forecast based on an average, but miners learned how profitable it was.
Automated Market Makers (Uniswap code and classes) and Automated Lending (Compound) were the 2020 cycle's killer apps, built on top of 2017's killer app of ERC20 tokens.
Followed by NFTs and their marketplaces.
Other chains secured by GPUs will get this activity, periodically.
With the second-largest cryptocurrency using PoS, do you really expect the PoW ones to stay profitable for more than a niche group? I'd expect people who want to actually use it for something else than speculating to move to the superior tech that doesn't contribute to climate change. At this point Bitcoin etc. would become no more than a betting platform.
There are people that support the concept of Proof of Work that aren't miners, so some other networks could zoom in value. Ethereum Classic is still a $5bn marketcap, for example.
More people will be swayed if Ethereum mainnet works after further phases of its scaling plan, without user experience drawback. Ethereum post-merge still has some ambitious theoretical things, like sharding. This should have some drawbacks.
People think scaling isn't possible without drawbacks, so they are fine with lower-bandwidth Proof of Work that just meters usage by transaction fees.
It doesn't matter if it's trash so long as it's fungible trash at a profitable price. I.e. the situation is OK for now, but as soon as the market adjusts to the glut of GPU miners suddenly minting no-name coins no one really wants (and the novelty wears off), those prices are going to drop like a rock.
Most of these coins only have value as scams that you could prop up then cash out by exchanging for BTC or ETH; so long as the "new coin of the day" hype train exists there will be a way to make money off of GPU mining. I guess there will always be suckers in this unregulated market.
Let me pull up my latest electric bill (which I almost have never looked at/have on autopay):
New Charges
Rate: RS-1 RESIDENTIAL SERVICE Base charge: $8.99
Non-fuel: (First 1000 kWh at $0.073710) (Over 1000 kWh at $0.083710) $76.74
Fuel: (First 1000 kWh at $0.034870) (Over 1000 kWh at $0.044870) $36.49
Electric service amount: $122.22
Taxes / surcharges: $20.99
Total: $143.21
$143.21 for 1036 kWh in a 30 day timespan
That works out to about $0.138/kWh with taxes and fees for me, I guess. I'm sure if I was doing crazy ASIC stuff at my house they'd charge me more / the rate would become less favorable.
Looks like mining BTC would net me about $0.85 a day profit with some kind of ASIC. $310/yr. Yikes.
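For what it's worth, the arithmetic behind a number like that looks roughly like this; the ASIC power draw and daily revenue below are assumptions (the revenue is picked so the result lines up with the ~$0.85/day figure), not a quote for real hardware:

    rate = 143.21 / 1036            # effective $/kWh from the bill above (~$0.138)

    asic_power_kw = 3.0             # assumed: a modern SHA-256 miner draws roughly 3 kW
    revenue_per_day = 10.80         # assumed: USD of BTC mined per day at current difficulty

    electricity = asic_power_kw * 24 * rate
    print(f"effective rate: ${rate:.3f}/kWh")
    print(f"daily profit:   ${revenue_per_day - electricity:+.2f}")   # ~ +$0.85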
If I am looking to buy a GPU chip for ML research:
- What chip should I buy?
- When should I buy? Should I wait for prices to drop? Will new and improved chips be released anytime soon?
- What are the advantages / disadvantages of each chip (3060 vs 3080, Nvidia vs AMD)? Which chip is most cost-efficient? What are each chip's specialties (e.g. specific type of neural network, graphics vs compute)?
This applies for all neural networks. Depending on how much money you're willing to spend, in descending order: DGX (computer with 8 A100s, $150,000), A100 (80GB, $15,000), A6000 ($5000), RTX 3090 ($1000).
1 x A100, 80GB is $3.19 / hour.
8 x A100, 80GB is about $25 / hour.
They have much less expensive machines. I used to use them to run steam games, but now proton is just too darn good. Their low end machines are OK for CAD software that supports real-time raytracing.
Lots of good options for cheap GPU clouds, including Paperspace (mentioned above), Coreweave, and Crusoe Cloud (crusoecloud.com). Crusoe Cloud's angle is that our GPUs are powered off otherwise wasted energy and are carbon-reducing; running one for a year provides an emissions reduction equivalent to taking one car off the road.
Disclosure, I'm head of product @ Crusoe Cloud. Feel free to ping me at mike at crusoecloud dot com if you've got questions or feedback.
TL;DR: we run data centers on-site at oil wells and take natural gas that would otherwise be flared (it can't be economically transported as natural gas in a pipeline or turned into electricity and transported) and combust it completely. Methane is a significantly more potent greenhouse gas than CO2, so it ends up being a net reduction in emissions vs what's currently happening.
Cool idea. Tried to join your waitlist but the form throws errors. Could be my security measures, but I’m on mobile safari, toggled off all content blockers and still had an error.
Odd, I just tested on Safari mobile and it went through without issues. Mind sending me an email at mike at crusoecloud dot com and we can get you set up?
We have a bunch of diseased pine trees. Assuming we don't have enough time to cut them down and burn them in the next 12 months, and that I am unscrupulous, I could sell you carbon credits through one of the bigger exchanges. You would never know, and it would be completely above board.
John Oliver had a particularly depressing segment on this recently.
(Totally off topic, but I'd love to biochar them instead. Using them for lumber and shipping them offsite are non-starters. Any ideas, anyone?)
See above post, but the TL;DR: is "we capture methane that would otherwise be flared."
Agreed that a lot of carbon offsets look like, "we were going to cut this section of the rainforest down, but if you pay us, we won't do that for X period of time." This is _actually_ reducing existing emissions.
As someone who just spun up boxes on both Paperspace and Coreweave to play around with Stable Diffusion: Paperspace limited me to some old, janky hardware and never replied to my request to "unlock" the more modern cards.
Coreweave let me get a modern card box first go round. I'd go with coreweave if you are a weekend hacker who just wants access to a lil bit of beefy GPU.
I just grabbed an A6000 for $3500 on eBay, so you can probably get a pretty decent deal on those now. They're pricey but IMO it's a great deal if you really need the VRAM (e.g. for training LLMs).
My guess is "not as much as most folks seem to think"
Comparable 24GB VRAM 4000-series cards are also one slot bigger and many watts hungrier. If you want to be able to use an 800W PSU + 3 slots, and just need 24GB of VRAM for, say, running inference on a big diffusion model, then the 3090 is still going to be your only option for a while.
They're pretty well priced right now, at $1k. If you need one, not much reason to wait, your time is probably worth more than saving a couple hundred bucks.
The 4090 Ti is rumoured to be 48GB [1], but who knows when that will release or how much it will cost. If you really need extra VRAM and don't mind longer inference, older used Tesla cards are an option. A used Tesla V100 32GB can sometimes be found on eBay for $1500.
It appears as though the 4090 will be 24GB, but that card may also be almost $2k.
Used 3090s on eBay are $800 all day long. That price may drop a bit in the next week or so, but not much, as that 24GB of VRAM is the main draw for that over a 3080Ti.
Lambda Labs crunched the numbers in Feb 2022 [0]. They concluded:
“””
So, which GPUs to choose if you need an upgrade in early 2022 for Deep Learning? We feel there are two yes/no questions that help you choose between A100, A6000, and 3090. These three together probably cover most of the use cases in training Deep Learning models:
Do you need multi-node distributed training? If the answer is yes, go for A100 80GB/40GB SXM4 because they are the only GPUs that support Infiniband. Without Infiniband, your distributed training simply would not scale. If the answer is no, see the next question.
How big is your model? That helps you to choose between A100 PCIe (80GB), A6000 (48GB), and 3090 (24GB). A couple of 3090s are adequate for mainstream academic research. Choose A6000 if you work with a large image/language model and need multi-GPU training to scale efficiently. An A6000 system should cover most of the use cases in the context of a single node. Only choose A100 PCIe 80GB when working on extremely large models
“””
You should get a Nvidia card with as much VRAM as possible. A 12 GB RTX 3060 is probably the most cost efficient at the moment.
I don't think AMD is really viable for ML. Nvidia has the mind share in that segment, so nearly all tools will work with Nvidia, while very few support AMD.
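Whatever you end up buying, it's easy to confirm how much VRAM PyTorch actually sees on an Nvidia card, which is the number that matters most for fitting models:

    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB VRAM, "
                  f"compute capability {props.major}.{props.minor}")
    else:
        print("No CUDA device visible to PyTorch")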
Stable Diffusion is the kind of phenomenon where people are actually contributing other backends (like the one for the M1 GPU). That’s not super common though, a lot of the time if you want to get a network running you need an Nvidia card so you can use CUDA (it’s not even about hardware performance, just that CUDA and CUDNN and so on are written by Nvidia for their GPUs).
I have a dumb question: when someone implements a backend like mps for stable diffusion, what are they actually implementing? Shims for Nvidia proprietary stuff that doesn’t exist outside of CUDA?
Keep in mind that bigger models are coming. And to use all features of even the current SD version, you need a lot of VRAM - 12GB for textual inversion (making it learn your own style), 30GB+ for Dreambooth (sort of micro-finetuning that doesn't need a GPU farm and a huge tagged dataset), a lot for img2img on a high-res picture. It also massively benefits from the large amount of compute cores.
Right now, the bigger and faster, the better. And there's really no limit of the computing power you can throw at various tasks to make them run better. It almost looks like 90s again.
Guessing "just fine" is relative, but care to put a prompt + the settings you use, and how many it/s you get? I'm getting max ~8it/s on a 2080ti and I think even that feels slow sometimes so looking to upgrade my GPU now, curious to see what "just fine" means for you here.
100 seconds per batch of five (1.7it/s); default settings (512x512, etc)
Performance seems to be prompt-independent. I'm using a Docker container that seems to have disappeared; it's a modified version for AMD cards with less than 10GB:
> If you're looking to build a new gaming PC or upgrade your existing graphics card, just wait a little longer and definitely don't buy any graphics card for more than $500. Prices on existing GPUs will continue to drop, and the new stuff is right around the corner.
To be clear, no mass-produced GPU can profitably mine Bitcoin.
It doesn't have to be stolen, it could have simply "fallen off the back of a truck." /s
(This actually doesn't happen as much as it did twenty years ago, with step-by-step inventory check-ins made possible by RFID chips and barcodes combined with mobile network connections.)
What about “free” as in “my solar array is large enough to produce $0 in electricity charges all winter, and the excess I sell back in summer/autumn/spring cannot be cashed out or used to cover my connection charges (and even if it could is sold at a pittance anyways)”? :)
That's what I said, but wasn't clear enough to you.
The hardware capex for mining BTC with GPUs would outweigh any 'free' power you have access to. You can't just focus on one aspect of this business in order to calculate your ROI.
That pittance is more than what bitcoin mining would get you.
It sounds like your scenario is more or less "free" power, but you're not going to truly profit; you bought too many solar panels and you're trying to mitigate the losses. (And maybe the overbuilding was on purpose, as insurance against grid trouble, but it still means a good chunk of money lost.)
What does this even mean? The average solar panel lasts 25 years and starts to degrade in efficiency before that. And then you have batteries, which of course also degrade and need to be replaced over time. Many countries are starting to get more serious about e-waste as well, now that China has decided it won't take the world's trash any more, so you might even incur a cost for getting rid of used panels.
There's plenty of costs in solar energy, so what does "free" mean here? Are you trying to take into account the possibility of selling excess energy back to the grid?
> definitely don't buy any graphics card for more than $500
Ef that, I'm buying a 20GB 4080 when they come out for doing stable diffusion research and gaming. I'm still rocking a 1060. I doubt a 4080 is going to drop below $500, recession or not.
A bit off tangent, what would be the best bang for the buck GTX GPU to buy nowadays if I want to use it for machine learning (like running stable diffusion locally)?
If you want the cheapest total spend, use paperspace. (I linked their price page elsewhere.) It's about $7 per month for boot drive storage, plus tens of cents per hour of uptime.
If you want the cheapest total spend where you buy your own hardware, it basically runs on any current-generation AMD / Nvidia / M1 card. I used a modified version of Stable Diffusion that works with < 10GB of VRAM on an AMD RX 6600 XT. Check around before buying, obviously; I lucked out.
The modified version produces the same quality output, but has to page data in and out. It takes about 100 seconds per batch of five images. The card cost under $300, and is plugged into a ~ 10 year old Linux box. It's probably possible to go cheaper than that.
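I haven't compared it to that specific fork, but on the Nvidia side the usual way to squeeze Stable Diffusion under ~10GB is half-precision weights plus attention slicing via Hugging Face diffusers; a sketch (model name and flags depend on your diffusers version):

    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 weights roughly halve VRAM; attention slicing trades speed for memory.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.enable_attention_slicing()

    images = pipe("a lighthouse at dusk, oil painting",
                  num_images_per_prompt=5).images   # a batch of five, as above
    for i, img in enumerate(images):
        img.save(f"out_{i}.png")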
I hope to see a glut of used cards on the market. In the past I've always been able to buy decent used cards for <$100, but the prices have been crazy for the last few years.
I wonder if this is the end of an era for GPU based mining. It's been a long time. I remember buying an R9 270X in 2014 from a guy that was mining Bitcoin, but had switched to ASICs. When I picked it up he was telling me he didn't win enough blocks with the ASICs and was going to sell them too.
I always wonder if that guy played his cards right and became a Bitcoin millionaire. Lol.
ASIC is "application-specific integrated circuit". A custom IC design for some task. There won't be anything after except maybe better and more miniaturized designs, at least until computers and digital logic are built from something other than electronics.
I wonder if keeping their GPUs mining until the last minute ended up making those miners more money than selling them when the price of GPUs was still high would have.
Not at all. Personally I got out and sold my GPUs in March and April. Anyone still mining these past months was either lazy or betting the merge would be delayed.
A delayed merge sounded like a good bet on paper. Eth has been promising PoS soon for years. It is understandable why some miners took the bet this time.
The problem with gambling in an iterated context: you will always lose eventually.
If not on eBay, where, then? The list of "sold" cards on eBay appear to me to be a pretty good starting point, but if there's something I don't know, like "everyone on eBay is a scalper" (which is not implausible) then where should I be looking?
Last time I looked, eBay only keeps sold listings for a relatively short time, so you have to scrape and save the data yourself for longer-term trends. I vaguely remember it being kept indefinitely, then they cut it to six months, and I think it's three now. Plus, if there are a lot of listings in the category you're searching, there's no way to sort to see the oldest anymore.
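If you do want to track this yourself, here is a minimal sketch for keeping your own longer-term price history. `fetch_sold_prices` is a hypothetical placeholder; wire it to eBay's official APIs or whatever scraper you already have.

```python
# Sketch: keep your own history of sold prices, since eBay only retains sold
# listings for a few months. `fetch_sold_prices` is a hypothetical helper --
# plug in eBay's official APIs or your own scraper.
import csv
import datetime
import statistics
from pathlib import Path

def fetch_sold_prices(query: str) -> list[float]:
    """Hypothetical: return today's sold prices (USD) for a search query."""
    raise NotImplementedError

def append_snapshot(query: str, csv_path: str = "gpu_prices.csv") -> None:
    prices = fetch_sold_prices(query)
    if not prices:
        return
    today = datetime.date.today().isoformat()
    new_file = not Path(csv_path).exists()
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "query", "n", "min", "median", "max"])
        writer.writerow([today, query, len(prices),
                         min(prices), statistics.median(prices), max(prices)])

# Run daily (e.g. from cron) to build a longer-term trend line:
# append_snapshot("RTX 3060 Ti")
```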
Because they want to mine until the last second. Supposedly there will be a lot less ETH being generated after the merge, so each one will be worth more.
> supposedly there will be a lot less ETH being generated after the merge so each one will be worth more
This has been hands-down the most ignorant view of the ETH-merge proponents. The floor of the currency is tied to the cost to mine. There's minimal cost to mine now. The price will fall. It's not a supply vs demand problem.
> The floor of the currency is tied to the cost to mine.
This is not true because of difficulty adjustment. If the price drops enough so miners are losing money, some of them quit, the difficulty adjusts downward, and the economics improve for the remaining miners.
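As a toy illustration (made-up numbers, not real network figures): difficulty adjustment keeps daily coin issuance roughly constant, so when a price drop forces marginal miners to shut off, the same issuance is split across less hashrate and the survivors' margins recover.

```python
# Toy numbers only, not real network figures. Difficulty adjustment keeps the
# daily coin issuance roughly constant, so when unprofitable hashrate leaves,
# the same issuance is split among fewer miners.
def daily_profit_per_ths(price_usd, network_ths, coins_per_day, power_cost_per_ths_day):
    revenue = price_usd * coins_per_day / network_ths   # USD per TH/s per day
    return revenue - power_cost_per_ths_day

before = daily_profit_per_ths(20_000, 250_000_000, 900, 0.06)            # ~ +0.012
crash_no_exit = daily_profit_per_ths(12_000, 250_000_000, 900, 0.06)     # ~ -0.017 (everyone loses money)
crash_after_exit = daily_profit_per_ths(12_000, 175_000_000, 900, 0.06)  # ~ +0.002 (survivors back above breakeven)
print(before, crash_no_exit, crash_after_exit)
```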
Given that crypto miners weren't willing to think ahead to the consequences of countries needing to fire coal power plants back up to meet their demand, or the problems with continuing to crypto mine during multiple worldwide energy crises, I am not surprised if they didn't think ahead to what would happen if the Ethereum merge happened exactly as planned.
I know this is a drive-by joke, but in fact speculative bubbles (crypto among them) were among the big drivers of last year's inflation hump. Production was still suppressed by the pandemic, lots of expenditures (travel, etc...) were likewise suppressed, yet bank account balances were still around and wanting to be spent.
Broadly: what do you do when you can't go to Cancun like you planned? You bet on Doge and GME, apparently. (Or you declare yourself a "VC" and start handing out checks to 20-something quarantined hackers.) Then you just end up with more money you can't spend.
The city of Denton TX recently signed a huge deal to allow a mining company (Core Scientific) to set up a large GPU mining facility directly next to their powerplant to make up for lost revenues during the winter storms. I'm wondering what's going to happen there. It seems like they mostly mine BTC but this can't help their bottom line.
Core Scientific is mining Bitcoin (they mention how many Bitcoin in their filing where they update on the status of the Denton location [1] ). No doubt all of their work is done on ASICs, not GPUs.
It'll be BTC mining and if they do anything with GPUs it'll be for other workloads than mining (likely rendering/ai/ml), which is a very competitive market and not much money there either.
Most "professional" miners were actually undervolting to keep the power consumption down so it's really not that bad as long as the price is right.
Anecdata but, 3 years ago I got an old mining RX 580 4GB for ~120 CAD (about a $100). That card can run almost everything at 1080p and has been used a lot ever since.
I've got over 100,000 of those RX 470/480/570/580 8GB cards that have been running 24/7 for years. It is a total farce that they go bad over time.
Not only were ours undervolted, but also individually tuned for best performance/watt. A very difficult thing to do at my scale since the failure mode is a full machine crash.
Only thing that really degrades is the paste on the heatsink and that's fairly easy to fix.
The largest challenge was tuning the cards for best efficiency.
Next up is just tracking inventory, making changes to the system, etc... this is over 8k individual computers in multiple data centers.
We also added a different class of hardware which was blade-based, which increased the number of individual computers significantly. Ended up with a very cool iPXE boot solution for that.
I also built some pretty cool software to manage it all. It runs on the concept that each machine is an individual worker that knows how to heal itself. Even just distributing the software to so many machines reliably is a challenge.
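Not the parent's actual software, but a minimal sketch of the "each machine is a self-healing worker" idea; `gpu_is_healthy` and the service names are hypothetical hooks you would wire to your own tooling.

```python
# Sketch of a per-machine self-healing watchdog. All hooks are hypothetical:
# gpu_is_healthy could parse rocm-smi / nvidia-smi, and the service names
# depend on how the miner is deployed.
import subprocess
import time

def gpu_is_healthy(index: int) -> bool:
    """Hypothetical health probe for one GPU (e.g. parse vendor SMI output)."""
    raise NotImplementedError

def restart_miner() -> None:
    subprocess.run(["systemctl", "restart", "miner.service"], check=False)

def reboot() -> None:
    subprocess.run(["systemctl", "reboot"], check=False)

def watchdog(num_gpus: int = 12, poll_seconds: int = 60) -> None:
    strikes = 0
    while True:
        if all(gpu_is_healthy(i) for i in range(num_gpus)):
            strikes = 0
        else:
            strikes += 1
            restart_miner()      # cheap fix first
            if strikes >= 3:
                reboot()         # escalate: a full reboot clears most faults
        time.sleep(poll_seconds)
```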
It's more temperature variation that kills cards. In a conventional mining setup thermals are monitored and accounted for. A card running at 70C 24/7 will last a long time. Longer than a card that is constantly bouncing around in temperature.
Also untrue. My cards have been running for years in shipping containers that are outdoors and go through full 4 seasons (winter snows to summer heats).
Edit: power supplies on the other hand... are a mess. Mostly hand soldered in China... they fail randomly due to the environment they run in. Sometimes, they "die", let rest for a day or two and then fire back up and run just fine.
Temperature changes outside don't translate to temperature changes on the die. If the cards are running 24/7 there will be no thermal shock to speak of since they are always generating heat.
Various machines reboot randomly all the time. Given the amount of direct outdoor airflow that we push through the machines (we don't have fans on the GPUs), as soon as the GPUs stop running, they cool down very very quickly. That is the 'shock' you're looking for.
Why do they reboot? We run on the edge of peak OC tuning performance by default and I've built an automated tuner which downclocks individual cards. This way, they get more stable over time, while maintaining their best possible performance.
Occasionally, we would reset the tunings and then let them auto tune back... this accounted for the seasonal variances because hotter cards are more prone to crashing.
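Again, not the actual tuner described above, just a sketch of the idea under stated assumptions: start each card near an aggressive clock offset, step it down whenever the card is implicated in a crash, and occasionally reset so cards can re-tune upward. `apply_offset` is a hypothetical hook to vendor tooling.

```python
# Sketch of a crash-driven auto-tuner: cards start near peak overclock and
# converge downward to the fastest settings that stay stable.
import json
from pathlib import Path

STATE = Path("tuning_state.json")
STEP_MHZ = 15        # granularity of each downclock step
MAX_OFFSET = 150     # starting (aggressive) core clock offset, in MHz

def load_state(num_gpus: int) -> dict:
    if STATE.exists():
        return json.loads(STATE.read_text())
    return {str(i): MAX_OFFSET for i in range(num_gpus)}

def apply_offset(gpu: int, offset_mhz: int) -> None:
    """Hypothetical: call rocm-smi / nvidia-settings / vendor tooling here."""
    raise NotImplementedError

def on_crash(suspect_gpus: list[int], num_gpus: int = 12) -> None:
    """Called after a crash; downclock the cards implicated in it."""
    state = load_state(num_gpus)
    for gpu in suspect_gpus:
        state[str(gpu)] = max(0, state[str(gpu)] - STEP_MHZ)
    STATE.write_text(json.dumps(state))
    for gpu, offset in state.items():
        apply_offset(int(gpu), offset)

def reset_all(num_gpus: int = 12) -> None:
    """Periodic reset (e.g. seasonally) so cards can re-tune upward."""
    STATE.write_text(json.dumps({str(i): MAX_OFFSET for i in range(num_gpus)}))
```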
How often does the average machine reboot? If it's less often than once every 24 hours, you're still putting the card under less thermal stress than someone who games for half an hour every evening. I'd buy your used GPU over a gamer's used GPU.
Sometimes it can reboot 50+ times in a row. Each box has 12 gpus, so if I reset the tuning for the box, it can take a while to find the optimal settings because the voltage/clock tuning steps are very granular.
Again, this isn't an actual issue and I have the data to prove it.
It may still be a lot less 'shock' than normal use, where players have a 15 minute round, then low use for a couple minutes, etc, for hours.. and then turn the card off.
Thermal cycling is known to be bad for electronics-- this is well studied and documented. Sustained high temperatures are also bad, but it's only really bad when the temperatures are really high.
I'm pretty sure my cards have gone through all extreme different load situations that you could possibly make up in your head.
Certainly, thermal cycling can be an issue for electronics in general, but my experience with these specific cards says that it isn't an issue at all. At least certainly not as much as something that should dictate purchasing 'miner' cards or not.
Miners overclock and overvolt the memory because there's a substantial performance advantage to doing so with little efficiency loss. This rapidly ages the memory.
Also, 70c is well into the temperature range that will significantly age capacitors.
My point stands: a capacitor that spends most of its life at room temperature except for a few hours at ~60c is going to last significantly longer than a mining card which spends 24x7 at 60-70c, regardless of temperature rating.
Untrue, forcing more usage on a card while mining will immensely increase power usage whilst hardly improving hashrate. Mining GPUs are undervolted and arguably will be in better condition than a hardcore gamer's card.
Is this actually a problem, besides needing to replace a cheap fan? If the fans go out, the card just throttles to stay at its thermal limit. These aren't like old cards, where they would melt.
Replacing a GPU fan is generally a lot more involved than other PC fans. Some you have to take the GPU apart (sometimes involving glue) and some fans are harder to get than others. I'd say it's harder than building the PC itself, but still fairly easy.
I won't be surprised if this is the final nail in the coffin for Intel's dGPU efforts as well. Intel has only itself to blame, though, for this debacle a second time.
This is surprising to me for two reasons. The first is that we knew the merge was coming. Second is that it wasn’t obvious to me that Ethereum was a majority of mining.
But who would be buying these cards? Other miners looking to mine to the last second might not be so keen on expanding their operation. And gamers looking for cheap cards would be better served by waiting for the merge.
There’s always a market for GPUs. The market was trading higher immediately before the merge, and now it’s trading lower, so if you ran a mining operation then selling your hardware before the merge would’ve yielded some good money unless you held so much it would actually change the market price (unlikely).
So why did the price drop so suddenly? Why didn’t everyone anticipate this and sell before? Probably some combination of laziness and belief that the merge wouldn’t work.
My guess... it has more to do with the logistics of uninstalling hundreds, or even thousands, of GPUs and then prepping them for sale on eBay. It's probably more profitable to use them for mining until the very last second and then sell them in bulk to an eBay vendor or foreign nation or whoever is in the market for pallets of GPUs.
I mined for a few weeks out of curiosity and I figured it would work as a heater downstairs since it was winter. I would think that the best move was to sell your GPU about a month before it all ends since the actual mining doesn’t make enough money to offset the resale value plummet.
So regarding your first point, the merge has been coming for years now. I suspect the folks who kept mining and didn't dump their cards earlier were basically making the bet that the merge would fail or get delayed once again.
If they had been right, it would've meant more mining rewards going to fewer miners. So it's an understandable bet. They just bet wrong.
Yes, also, if they acquired these cards at or below MSRP. Now they can dump these GPUs and only take a loss of 10-20% on the hardware. Much less than the profits made while mining.
I think this is fantastic. Availability of GPUs for gaming and small scale machine learning just exploded dramatically. Would be interesting to see how NVDA behaves in the next couple of quarters
Where’s the best place to actually find these cards? When I see used cards on say Craigslist they always claim “never used for mining” or similar, is there a way to verify that kind of thing?
What's the problem with cards that have been used for mining? People say they are in worse shape than other second-hand cards, but I'm not sure that's true. Most if not all of the people I know who use GPUs for mining undervolt the cards, as it's more profitable, while everyone I know who is a gamer runs their cards overclocked.
So in theory (but someone please correct me if I'm wrong), with that in mind, you should prefer cards that have been used for mining and undervolted, rather than cards from gamers that have been overclocked.
He did however note that the cards that he tested had been used in a relatively clean and dust-free environment. The biggest risk to buying a card that has been running 24/7 for years is that the cooling system is busted due to build-up of dust.
I would not trust anything coming from wccftech. That particular article is probably just an advertisement for some shitty software that promises to fix your broken GPU.
> Most if not everyone I know who uses GPUs for mining undervolt the cards as it's more profitable
This is true, but it doesn't mean that the card runs any cooler. Yes, for mining it makes perfect sense to run it at a lower voltage. That doesn't stop your VRAM from getting pinned at 95c, and if you do that for long enough then it won't matter how hard you underclock your GPU. The clock speed doesn't directly correlate to your hash rate.
I might not mind for the right price, but I'd like to at least know. Right now it seems that either (a) nobody is yet selling used mining cards or (b) everyone selling used mining cards is lying about it and trying to sell them at the same prices as barely used cards.
Random speculation here - could someone who actually knows the answer comment?
Given the Russian war on Ukraine, and the price of energy dropping in BRIC countries, does the Ethereum merge actually help Western interests? Ie. Does this mean that Russia, for example, can't leverage their cheap energy into crypto which can then be exchanged for cash?
Would that line of reasoning have led Western governments to lean on the Ethereum people to complete the merge?
Awesome.
I've got a 1060 that is long in the tooth, and I've wanted to add a second video card for experimenting with SR-IOV.
Regarding SR-IOV (if that's what it's still called): can anybody suggest a decent resource for implementing it?
I know that a lot of these mining cards will have been worked hard. I'm not afraid to reapply thermal paste, and if the price is right I can get a couple.
SR-IOV is single-root I/O virtualization, which requires Quadro or the data center GPUs. Those aren't used very much in mining due to their cost. Did you mean PCIe passthrough?
I recommend using either Debian or another Linux that's not RedHat based though, as the Nvidia drivers will disable themselves if they think they're running inside of a VM against a GeForce card, and Redhat patched out the components necessary to hide the flags from the Guest OS. Probably under pressure by Nvidia.
You'll also have to double check that both the CPU and motherboard supports either Intel VT-d or AMD-Vi. CPU flags are easy enough, they'll be listed in /proc/cpuinfo, you'd have to look up your motherboard to see if it supports it or not though.
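For the CPU-flag part, a minimal Linux check (just a sketch): VT-x/AMD-V shows up as the `vmx` or `svm` flag in /proc/cpuinfo. IOMMU support (VT-d / AMD-Vi) still has to be confirmed in firmware settings and the kernel log, and the motherboard lookup remains manual.

```python
# Quick check of the CPU side on Linux: VT-x shows up as the "vmx" flag
# (Intel) and AMD-V as "svm" in /proc/cpuinfo. IOMMU support (VT-d / AMD-Vi)
# still needs to be enabled in firmware and confirmed in the kernel log.
def cpu_virt_flags(path: str = "/proc/cpuinfo") -> set[str]:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return flags & {"vmx", "svm"}
    return set()

if __name__ == "__main__":
    found = cpu_virt_flags()
    print("virtualization flags:", found or "none found")
```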
This may be a dumb question but, since I would not trust a GPU card "on eBay" anymore than I would a guy selling the Golden Gate Bridge for scrap, where will these cards appear - how will people trust them? (Or more accurately - where can I pick up a couple?)
Looking forward to all the creative AI startups using these cheap cards to build cool products. We've been waiting a long time for the merge to finally happen.
I can't believe that you can buy a card that'll do 15+ TFLOPS for like $500.
Is it possible to use one of those GPUs as an external GPU for my laptop or NAS? Do I need an enclosure? How would I connect them, a PCIe-to-USB adapter? I mainly want to experiment with stable diffusion and video transcoding.
You just need a computer with TB3 or above. Get an eGPU box like the Razer Core X [1]. The caveat is that, chances are, these GPUs will be Nvidia because of CUDA, so they will not work with a Mac (they will work on Linux or Windows, though).
Would it be possible to buy 10 GPUs on the cheap and set them up as a cluster? I'd like to generate larger stable diffusion images for instance, but don't know if a cluster would support this.
If the goal is to increase the GPU vRAM (which I believe it is, given that's the constraint on image size), the answer is "not really" for consumer cards. You need NVLink bridges for pairs of PCIe cards (which they have, but then you'd only double the vRAM), or NVSwitch on the high end data center servers (DGX/HGX A100).
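One way to see the constraint: each consumer card shows up to frameworks as a separate device with its own memory pool, and a single oversized image has to fit on one of them. A quick sketch, assuming a PyTorch build that can see your GPUs (CUDA, or ROCm for AMD):

```python
# Each consumer card is a separate device with its own memory pool; without
# NVLink/NVSwitch there is no transparent pooling, so one large image has to
# fit on a single card's VRAM.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
else:
    print("No GPU visible to PyTorch.")
```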
If this is true why are the new Intel Arc A380 graphics cards so difficult to get? There's zero in the EU. Zero. In the US, most sites don't have stock.
This flatly isn't true: it's the only GPU that currently has AV1 and VP9 hardware-accelerated video encoding, it's incredibly cheap, and it supports a ton of monitors at its price.
I had given up on the singularity ever coming to pass, and the reason was cryptocurrency. How could we ever upload our consciousness to the cloud if the cloud was fully occupied inventing imaginary wealth tokens? 'Mining' can consume any amount of CPU horsepower, so there was never going to be any left for the singularity.
What if you add something to you that's knit into and augmenting your brain by 10%, and then as a little more of your brain withers, you add another chunk. Ultimately it reaches a point where your existing brain is doing none of the work and it's all machines, but there was arguably never a precipice crossed where it wasn't "you".
Yeah but I don't think you'll feel that you're both. You'll still feel like yourself, and the other will feel like another. You won't feel as though you have left anything behind, only the other will feel like that.
And consider: if I can upload, perhaps I can download. Perhaps my digital copy can live life 'faster'? In that case I live 1000 lives digitally and download that.
Now I'm essentially immortal, in my physical body, because I've lived for millennia!
At the very, very least its no worse than living this one life and dying anyway.
It isn't you, because the experiences didn't happen and weren't actually observed by you. They'd be implants of false memories at best. That would give the illusion of experience, but that's all. Definitely nowhere near immortality. This is closer to a dream or delusion.
We're talking about uploading our consciousness into a digital realm and then downloading the resulting consciousness back into a brain. Ignoring that we have no way to do that right now, there is nothing that says we can't.
It would not be a dream any more than yesterday wasn't a dream.
The digital copy can be kept alive long enough until we figure out how to download it back into a body again, at which point this is basically the ship of Theseus https://en.wikipedia.org/wiki/Ship_of_Theseus
Also some people might disagree with "digital copy of you != you". Personally, I do not care since it is close enough to immortality imho.
It won't do anything to solve the dread-of-nonexistence problem. But a copy that behaves just like you is still useful if your motivation is something like "ensure that my line of scientific research continues" or "take care of my extended family."
But even if it's a copy of your essence, it's not a slave. Mortal-you has goals and family to care about. Mortal-you has been living and planning with the constraint of mortality for its entire life. Immortal-you will eventually develop different motivations.
On the one hand at least an ASIC is more power efficient than a GPU, but on the other hand it seems that all "do pointless work" based mining will tend to optimal power usage and so every system will always just be burning absurd quantities of power :-/
You're not wrong but it's far far more productive on the GPU. If you're willing to burn your CPU on mining XMR, you may as well add GPU(s) to that host and light them up as well.
Is that no longer the case? It has been a couple years since I dabbled in mining XMR, that was definitely the case then.
There are tons of "shit coins" that can be mined with GPUs and there are several mining pools that will allow you to choose what crypto you get your payout in. GPU mining isn't going anywhere.
Among the things that are unfortunate about cryptocurrency as a model is the fact that it's not immune to the general capital-breeds-capital effect. For proof of stake, people with money to spare are likely to have newly-minted money granted to them in the future. For proof of work, people with money to spare can afford to buy the rigs to increase the odds they have newly-minted money granted to them in the future. I think it's fair to ask what the net benefit is to society for a wealth-distribution system to give more money to those who have the most money.
Fiat currency has issues, but at least a government has the authority to conjure money out of nothing and hand it to the poor.
> Fiat currency has issues, but at least a government has the authority to conjure money out of nothing and hand it to the poor.
Maybe, but it is much more common to summon money out of nothing and give it to the rich. At least in the US this happens in the form of buying securities from entities that hold them, which from what I understand is kind of similar to proof of stake (you stake some of your currency by buying a security for a chance to get more when you sell it).
"By tracking brainwaves when someone watches an advert, Microsoft hopes to use the data generated as a “proof-of-work.” "
Looks like an episode of Black Mirror.
The government authority doesn't go away just because cryptocurrency exists. Unless cryptocurrency somehow replaces fiat entirely, which seems unlikely. And even if that happened, governments could do the same by way of taxes.
"fiat crypto", or if you prefer, digital currency, will neither be proof-of-work nor proof-of-stake. Neither mechanism is needed when there is a single trusted authority.
PoS pays the people who run the infrastructure. Someone has to run it, and it's trivial for anyone to participate. If you have 5 dollars you can stake it in the PoS network and earn rewards. The barrier for entry in the legacy financial system is way way higher. Have you ever applied for a banking license?
False equivalence, why is your equivalent to staking (investing crypto) applying for a banking license rather than opening a brokerage account (investing dollars)? With the advent of fractional shares anyone can buy $5 of stocks too, but it requires zero technical know-how.
When you stake, you are still processing transactions and creating blocks/forming consensus similar to what mining did. This isn't really comparable to investing in a security.
Just because you can join a staking pool which often takes a cut and makes it easy doesn't invalidate what is actually going on.
Sending $5 to a third party pool so they stake it in your stead under the promise it remains your $5 does not make you a bank. It's more like sending $5 to Robinhood so they buy $BAC in your stead under the (legally backed up) promise it remains your $5. IMO.
[edit: misunderstood the initial statement; disregard the part that was previously here]
> The barrier for entry in the legacy financial system is way way higher. Have you ever applied for a banking license?
One doesn't have to have a banking license to use the local fiat currency of one's nation (basically, one is born into it). And one must do almost nothing to get a check from the government if they decide to stimulate the economy by handing out money to those who showed they had very little on their last tax return.
You don't need to be a staker to use Ethereum either. You can also stake with a few clicks on a website if you're using a web3 capable browser. You don't need to buy anything fancy or even open up a terminal if you don't want to.
There are a ton of other projects that allow people to share storage space, compute, art, information, etc to earn cryptocurrencies. People contribute what they can to the network, and are compensated for their efforts. Projects like gitcoin are raising millions of dollars for people who volunteer to provide public goods, from performing security audits on large open source projects, to cleaning plastic out of rivers (https://gitcoin.co/grants/). Seems to me like the Ethereum ecosystem is far more altruistic than many current governments.
Yep. Parking your ETH in a staking pool is much better than a traditional bank savings account. There's a proliferation of services that make it easier and easier, and you can put in any amount.
Nobody parks cash at a bank looking for returns. They're looking for a stable store of value. ETH appreciation is not a selling point for replacing a bank, even if the return happens to be positive (right now, with hindsight).
You are conflating several topics.
(Background: through Ternary we run the Red Bike validator nodes on Cardano).
1. If you want to run a validator node, i.e. run the infrastructure, you need 32 ETH to be eligible to provide this service. And that does not immediately mean you will earn something. https://ethereum.org/en/staking/
2. If I stake the equivalent of 5 USD, I will likely get about a 5% return per year (roughly 25 cents), making it much less interesting from an investment perspective given volatility and opportunity cost.
3. Staked funds are locked and can't be used anywhere else or for anything else. At least GPUs can be used for mining AND general purpose computing.
4. It is much easier to go to any random money exchanger and exchange USD/EUR or USD/JPY.
5. Getting a retail bank account can be done in a matter of minutes online.
> Fiat currency has issues, but at least a government has the authority to conjure money out of nothing and hand it to the poor.
This is why I'd rather see development of privacy coins, like Zcash, and am less concerned with the decentralization issue (I've yet to see a project actually address centralization or even acknowledge the issue you're bringing up: momentum). We're moving toward a cashless society, and while that has a lot of great benefits it also has a lot of detriments. So why not have digital cash then? ZKPs for transactions.
If you want to promote democracy you should also want to decrease the ability of authoritarians to arise, which in our modern era means limiting how much data they can get their hands on and abuse. There are common cop-outs like "how will taxes work", etc., but society ran pretty smoothly on cash before. Companies still have to report incomes and salaries. We can still do consumption taxes. So income and consumption are easily solved. I'm even fine with a small transaction fee, which I know others aren't, but we already pay this in the world of credit cards (2-4%). I think we could really bring down the Visa/Mastercard tax (maybe something like 0.1%-0.5%?) and it would be a win for everyone.
It is clear to me that cryptocurrencies aren't going to get us this world, so let's start thinking about other means.
Most currencies don't have it baked into their core workings.
In crypto, possessing money generates money, without investing it or putting the money "to work." This would be like if the dollar bills in your wallet periodically grew another dollar.
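A toy compounding sketch of that point, assuming a fixed 5% yearly yield purely for illustration: relative shares stay the same, but the absolute gap between a large and a small holder widens every year.

```python
# Toy illustration: with a fixed yield on holdings, balances compound, so the
# absolute gap between a large and a small holder grows each year even though
# their relative shares stay constant. The 5% yield is an assumption.
def compound(balance: float, yearly_yield: float, years: int) -> float:
    return balance * (1 + yearly_yield) ** years

small_holder, large_holder = 10.0, 10_000.0
for years in (0, 5, 10):
    s = compound(small_holder, 0.05, years)
    b = compound(large_holder, 0.05, years)
    print(f"year {years:2d}: gap = {b - s:,.0f}, ratio = {b / s:.0f}x")
```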
Depending on what you mean by money, there are unsolved problems. You need some kind of layer that maps real people to accounts, or people can just create lots of accounts for unlimited money.
There was some crypto startup that wanted to do this and was going to different cities and doing retina scans. I wonder what happened to them?
I suppose you could outsource it by donating to GiveDirectly, but that would require conversion to real money and then (by GiveDirectly) to mobile phone payments. In that case, the cryptocurrency isn't solving much of the problem.
If a cryptocurrency actually gained widespread use (competitive with cash), I'd imagine it would take on some aspects of a central bank -- being the only currency is only fun if the economy is running smoothly. Money given to poor people more-or-less immediately enters circulation.
Social safety nets are one thing, but in those countries, don't fool yourself, capital still breeds capital. Also, the U.S. does have social safety nets. In fact the largest source of government spending is on them. I'm not saying we don't need more, but we can't dismiss what does exist.
I'm thinking more along the lines of the giant stimulus the United States cut to almost everyone during the COVID-19 pandemic. Everyone who paid taxes just got a check in the mail.
The handout went to the wealthy in the form of Payroll Protection Loans that never were required to be disbursed to employees or repaid. And Wall Street got a huge multi-trillion-dollar boost.
Update: 10.2 million PPP loans were forgiven. Here's why.
> If borrowers use at least 60% of the loan to cover payroll within 8 or 24 weeks after receiving the loan, they can submit an application to have the loan forgiven.
-----
edit: if anybody can clear this up for me, I'd appreciate it, but what could it possibly mean for "at least 60% of the loan to cover payroll?"
Does that mean that the company has to spend its way into insolvency first, then after becoming insolvent pay out 60% of the loan to employees, or does it simply mean that an employer's payroll has to sum to at least 60% of the loan within 24 weeks, no matter how much cash the employer has?
It seems like the difference between 1) handing out gifts to employers of up to 40% of their 24-week payroll, or 2) handing out gifts to employers that are up to 166% of their 24-week payroll. But I'm not a math scientist.
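To make the two readings concrete, a toy calculation assuming a 24-week payroll of $100k (my numbers, not anything from the program rules):

```python
# Toy arithmetic for the two readings above, assuming a 24-week payroll of $100k.
payroll = 100_000

# Reading 1: the loan roughly equals payroll, 60% goes to payroll that would
# have been paid anyway, so the "extra" is the other 40% of the loan.
loan_1 = payroll
extra_1 = 0.40 * loan_1        # $40k, i.e. 40% of the 24-week payroll

# Reading 2: any loan is forgivable as long as payroll >= 60% of it, so the
# largest fully-forgivable loan is payroll / 0.6.
loan_2 = payroll / 0.60        # about $166.7k, i.e. ~166% of the 24-week payroll

print(extra_1, loan_2)
```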
Either way, making direct cash payments to employers in proportion to their payrolls is as stark an example of welfare for the rich as you could cite. That's like paying money to people in proportion to their total stockholdings, as long as they promise to spend at least 60% of those payments to buy more stock. Even worse in the execution, where tons of the smallest employers and the self-employed were left out in favor of companies with high-powered accounting firms or lawyers on staff.
There were limits on what you could spend the other 40% on (defined categories of expenses). You could spend 100% on payroll if you wanted, but it had a minimum of 60%. The other 40% had to be used for specific things such as building maintenance, etc. No owner could take a distribution of these funds (directly at least, although if this loan allowed them to be profitable they could have taken those profits).
Businesses that wanted to be legit put this money into a specific account (in their accounting system) and tracked all expenses against that specific account for auditability.
The only loans that had less strict rules were for sole proprietors/self employed businesses. But the size of the loan was capped pretty low relatively.
Of course this program was highly flawed. But it was thrown together quickly in response to the pandemic. Personally I wish our government would already have plans in place ahead of time so everything wasn't done so hastily and last minute causing massive fraud.
What I'm trying to figure out is if they had to spend their own money before spending the loan. Otherwise it's strange to say that the money went to either payroll or "specific things such as building maintenance." The money went to the employer, and in return they wouldn't lay off so many employees that their 24-week payroll would fall below 60% of the amount of the loan (and a possible requirement that they'd have to spend the difference between the amount of the loan and the 24-week payroll on capital improvements?)
Did they at least have to prove that it would be financially beneficial for them to do layoffs?
> Of course this program was highly flawed. But it was thrown together quickly in response to the pandemic.
Eh, this isn't a deep dive, these are basic questions about the concept. Not that a deep dive wouldn't be warranted with tens or hundreds of billions at stake.
The flip side of this is that if the loans weren't given and everybody is closed for lockdown, companies are either firing everybody or going out of business.
If that had happened, all of those people would have been applying for unemployment, COBRA, Obamacare plans, Medicaid, etc as well. It was more a question of which Federal accounts to drain. By giving businesses a clear path to making sure they could keep paying people regardless of whether money was coming in from customers, that was avoided. In order to get business owners to take it though, you had to basically give it to them.
It was highly variable which businesses could find a way to keep operating in the conditions of lock down and COVID protocols. Remote work was easy. Running a bar was not.
Even then you also had exorbitantly high unemployment payments for a long period of time. It wasn't as if the PPP loans were the only money being injected.
I'm asking specifics, you're giving me ideology. Unless you have some evidence that businesses had to prove it would be more profitable to lay off or close unless they got the loan, which is what I asked.
> If that had happened, all of those people would have been applying for unemployment, COBRA, Obamacare plans, Medicaid, etc as well.
If we don't prefer direct aid over middlemen, why don't we route all social programs through middlemen? I'd like to volunteer, as long as I'm allowed a 40% cut off the top.
I have no objection to the government sending money to people who were made unemployed by covid. I have little objection to the government propping up marginal businesses that serve a valuable purpose in better times, but would otherwise fail during covid without aid, although I feel it was largely a landlord subsidy.
There were lockdowns for 2 weeks. Any customer facing business that couldn’t have customers walking in to do business was going to have no income for at least that long. If you don’t have income, there’s no money for expenses like payroll.
Unless you operate an extremely high margin business, your only options are to not pay staff during that time or to have layoffs.
Most small businesses aren’t high margin.
As for middlemen, given the short timeline what were the real options for the government? Setup a channel where every individual could apply for direct relief or use existing heavily regulated channels through banks?
Keeping the economy moving needed to be the top priority, so existing channels made more sense.
We applied for a loan at our small business. We worked with our accountant and banker to make sure everything was handled properly and then documented everything so that we could apply for forgiveness. We didn’t have any issue since our staff had grown by a couple of positions from the prior year.
Even though revenues did take a hit, the temporary allowance for telepractice options helped to lessen the overall impact.
If not for the PPP loans our business and a lot of others like it would have been staring down bankruptcy and layoffs for about 30 people. And we run a tight margin business.
I can’t say that there wasn’t fraud in some places, but for us it worked exactly as intended.
The combination of PPP with high unemployment payments that lasted much longer than needed pretty well spread things around. Both of them had an impact on the overall economy.
In hindsight, we probably would have been best off by just having people take precautions and avoid lockdowns entirely. Hopefully we will learn from the mistake.
To get your loan forgiven, you had to take a loan and show documentation of having spent loan amount of dollars on qualified expenses. There is no "must prove you were saved by the loan" requirement.
As for certifying whether your business needed it or not, you had to make a statement on the loan application saying the loan was necessary for you to continue the business. But it was an honor system. It will be up to the government to search out and prosecute fraudsters.
BS, especially since the three rounds of IRS stimulus checks were just one small part of the stimulus. The enhanced unemployment lasted for months, and the increased amount alone was more than many people's regular paychecks.
Precisely. I'm trying to imagine how one would have implemented something similar atop the cryptocurrencies I'm familiar with and coming up empty.
Stabilizing an economy when a national quarantine had shut down production and trade is one of those challenges that's much easier to solve in a centralized fashion.
For middle-class workers who didn't lose their job it's as I said, just a little bit less than one month's rent. Definitely not enough to compensate for increased cost of living and inflation across the board, especially now.
Either this entire subthread is a confused reaction to someone who expressed a belief that quick transfer payments for the poor are good, and the fact that they couldn't be done in a bitcoin economy is an argument against bitcoin, or I'm the confused one.
This isn't unfortunate, this is deliberate. Bitcoin was birthed dripping in the amniotic fluid of right-libertarian ideology. Capital-breeds-capital isn't so much a side effect as much as it is the deliberate goal of capitalism: use money to make more money. And the core of right-libertarian ideology is to more or less let the capitalists do what they want.
This isn't exclusive to capitalism; there are other ideologies that work this way. However, they are even more intolerably authoritarian than capitalism. Capitalists at least offer the promise of growing the economic universe alongside themselves - you can get 10x richer by taking 50% less profit in some businesses. But fascists, criminals, and dictators also play this game - not to create wealth and grow the size of the pie, but to shrink it so their share gets bigger. And without a government to enforce rules, capitalists will be out-moded, out-gunned, and out-played by thieves of various stripes every time.
Making this worse is the fact that Bitcoin mining is inherently and deliberately zero-sum. It has to be, because it pays in inflation (block subsidy) and confiscation (transaction fees). So capitalists can offer no wealth creation here.
In other words, Bitcoin is how authoritarians trolled right-libertarians into building and buying into a system that creates the thing they hate.
A deflationary system awards each market participant an equitable increase in purchasing power relative to the increase in demand for earning more units through value creation. I'd imagine that would shift some social responsibility from being more centralized to being more decentralized. The Gini Index over time has only been trending up toward 100, signaling a growing environment of inequality. Having said that, I do believe we tend to oscillate between centralized and decentralized governance of social responsibility, and that technological innovation around how we transfer value (money) enables a shift away from one end of the spectrum.
> A deflationary system awards each market participant with an equitable increase of purchasing power relative to the increase in demand for earning more units through value creation.
What do you mean? Holding a currency involves zero value creation.