AC power supplies on computers are notoriously inefficient and "noisy" (as one datacenter facilities manager described them to me--he was referring to the voltage on the line, not sound.) Many larger datacenters have now switched entirely to DC-powered computers to cut down on energy use.
Years ago, when I was running my hosting company, DC power supplies were more expensive, but you could typically make up the difference in a year or so (less than the average 2-3 year lifetime of a typical server.) I suspect costs have come down since then.
I seem to recall at least one datacenter in SF running DC directly from the grid, as PG&E used to sell it back then. This article makes it seem like that may no longer be the case. Regardless, many datacenters now sell DC as an option, and will have installed their own converters to supply DC power to customers.
An AC/DC power supply is at best something like 70-80% efficient; DC-DC converters are upwards of 98% efficient.
So you lose 10% in the UPS/PDU scenario (AC → DC (float on batteries) → AC), then another 20-25% in the power supply. Meaning to deliver 100 W inside the PC you need about 137.5 W of input, plus whatever distribution losses you have.
If you use an HVDC supply with rack-based DC-DC converters, so AC-DC (90% efficient) and DC-DC (98% efficient), you can deliver 100 W with only about 112.2 W of input. And that's for the whole rack, versus per-chassis power-supply conversion in the AC model: you can do one 3.3 V, one 5 V, and one 12 V supply per rack. You're also saving a huge amount of heat, heat that you don't need to cycle out through additional A/C hardware.
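For anyone who wants to redo the arithmetic, here's a quick sketch using the stage efficiencies from the comment above. Note that dividing the load by each stage's efficiency (rather than adding the loss percentages on top, as the comment does) gives slightly higher input figures, roughly 143 W and 113 W:

```python
# Input power needed to deliver a given load through a chain of
# conversion stages. Efficiencies are the figures quoted in the
# comment, not measurements.

def input_power(load_w, *efficiencies):
    """Divide the load by each stage's efficiency in turn."""
    p = load_w
    for eff in efficiencies:
        p /= eff
    return p

load = 100.0  # watts delivered inside the chassis

# Traditional: double-conversion UPS/PDU (~90%) then AC/DC PSU (~77.5%,
# midpoint of the 75-80% range above)
ac_chain = input_power(load, 0.90, 0.775)

# HVDC: facility AC/DC rectifier (~90%) then rack DC/DC (~98%)
dc_chain = input_power(load, 0.90, 0.98)

print(f"AC chain input:   {ac_chain:.1f} W")
print(f"HVDC chain input: {dc_chain:.1f} W")
print(f"Savings: {ac_chain - dc_chain:.1f} W per 100 W of load")
```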
A power-supply with 70-80% efficiency (at, say, around 50% of the power-rating) would be a very, very crappy one. On the other hand, a DC/DC with 98% would be a very good one. So your comparison is not really fair. See e.g. the book mentioned below.
The same book quotes overall efficiencies from power line to CPU ranging from as low as 48% for the crappy-PSU + UPS + converters setup up to 92% for the Google AC/DC + integrated PSU design.
E.g. "The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second Edition" by Luiz André Barroso, Jimmy Clidaras, and Urs Hölzle, page 52 (PDF page 70), figure 4.2 quotes AC/DC supplies at 79% at worst, but the other two numbers quoted are 89% and 92%. That's more reasonable.
But the numbers I used are largely borne out by the link you provided. Look at pages 52 and 53.
I think however larger rack/frame mounted DC/DC converters would likely be even more efficient than per chassis DC/DC converters, but would require almost totally custom hardware.
>I think however larger rack/frame mounted DC/DC converters would likely be even more efficient than per chassis DC/DC converters, but would require almost totally custom hardware.
Nope, you can buy off-the-shelf ATX PSUs that accept 48 V DC as input, though they're kinda hard to find. It's usually easier to find on the higher end.
For switching gear, though? It's a standard option, as 48 V DC is standard for telcos.
In fact, most of the higher-end datacenters will sell you 48 V DC if you ask for it. It's sort of like having an always-online UPS, as they can just leave the battery banks in-line; no switching from mains power to UPS required. (And that switch is... quite often what fails when a datacenter loses power.) Whenever I take folks through my setup at CoreSite Santa Clara, they are always drawn to the 48 VDC battery bank on the other side of the room. "Hey guys, that's not mine." "No, don't touch that giant copper bus bar."
(That's the thing about low voltage; you need serious amounts of copper if you want to move very many watts at all.)
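A rough sketch of why, with made-up but plausible numbers (a 10 kW rack and 1 mΩ of bus resistance): current scales as P/V and resistive loss as I²R, so dropping the voltage multiplies the heat in the same copper dramatically.

```python
# Why 48 V distribution needs serious copper: current scales as P/V and
# resistive loss in the bus as I^2 * R. Rack power and bus resistance
# are illustrative assumptions, not measured values.

def bus_loss_w(power_w, volts, resistance_ohm):
    amps = power_w / volts
    return amps ** 2 * resistance_ohm

RACK_W = 10_000   # assume a 10 kW rack
BUS_R = 0.001     # assume 1 milliohm of bus-bar/cable resistance

for volts in (48, 208, 380):
    amps = RACK_W / volts
    loss = bus_loss_w(RACK_W, volts, BUS_R)
    print(f"{volts:>3} V: {amps:6.1f} A, {loss:5.1f} W lost in the bus")
```

At 48 V the same milliohm of bus resistance dissipates roughly twenty times the heat it would at 208 V, which is why the bus bars get so thick.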
It used to be a bigger deal than it is now, as the PSU in your server is dramatically better now than it was even just ten years ago.
But yeah, from what I can tell? (and I spend what, 10x rent on data-center power? I'm not saying I'm an expert, but I certainly have an incentive to be right.) The negotiation differences between the cost per watt using different voltages is far greater than the efficiency differences of different voltages. Sales people ruin everything.
At many places, 208 V costs you twice as much per amp as 120 V, so you are better off just buying twice as many 120 V amps. (I mean, 208 V is more efficient for you and for the datacenter, but it's not 10% more efficient, and you get more than 10% more watts that way.)
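The trade-off above, sketched with hypothetical prices (only the 2x per-amp ratio comes from the comment; the dollar figures are made up, and power factor and circuit derating are ignored for simplicity):

```python
# Dollars per watt for 120 V vs 208 V circuits when 208 V costs twice
# as much per amp. Prices are hypothetical.

price_per_amp = {120: 10.0, 208: 20.0}   # $/amp, made-up figures

for volts, price in price_per_amp.items():
    watts_per_amp = volts * 1.0          # watts per amp at unity PF
    print(f"{volts} V: ${price / watts_per_amp:.4f} per watt")

# 208 V delivers 208/120 = ~73% more watts per amp, but at double the
# per-amp price it still works out ~15% more expensive per watt.
```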
I was at one place where an amp of 208v was only about 30% more expensive than an amp of 120v. Too bad that free ride ended.
The problem is that most salespeople don't really understand any of this... they understand how to sell. And a 10% difference in power cost, well, that's less than the referral bonus they give if you have an agreement to feed them leads. (this is why you see so many small companies claiming to have datacenters across the country. They just have referral programs, and they get upwards of 10% of what the customer ends up paying the real data center, essentially in exchange for some slimy seo.)
So really, the negotiation game matters a whole lot more than what is actually the most efficient, until you get to the point where you can build your own data-center.
I'm aware of the -48 VDC stuff out there; I work in the telecom industry, and plan on building a 48 V power plant whenever I buy a house :-P.
That's not quite what I was talking about. Rather than distribute 48 V, you could do 240 V or 384 V (5 or 8 48 V strings) and then do DC-DC in each frame to generate the 12 V/5 V/3.3 V supplies that feed the hardware directly. If you needed 48 V in a frame, it's easy to put in a DC-DC converter to get that too. I shudder to think of the copper costs to distribute 48 V.
huh. Interesting. How would the safety implications of running 240v DC differ from running 240v AC? (my understanding is that the amount of copper required would be the same.)
From what I've seen, 48vdc is usually just two leads sticking out from the equipment in question.. you just strip the ends of the wires and screw 'em in. I don't think I'd be comfortable doing that with 240v. I guess that's just a simple matter of standardization.
If you get bit (shocked) by DC, you let go and drop it instantly; AC will not let you drop it, you have to be pulled off. If anything it's safer. Also, DC has lower loss over a given conductor. The problem historically with DC was converting between a high voltage for transmission and low voltages for loads, something that solid-state devices changed.
>If you get bit (shocked) by DC, you let go and drop it instantly; AC will not let you drop it, you have to be pulled off. If anything it's safer.
I know I was once shocked by 120 V AC while working; I was reaching around a metal PDU with 5-15R receptacles to plug something in. As I couldn't see, I was guiding the prongs in with my fingers, and I didn't get my fingers out of the way before it made contact. Ouch. (Let us just say that I was younger and... a lot less careful than a sysadmin ought to be.) Fortunately, the case of the PDU was grounded (I think it was a BayTech RPC) and the case of the server above it was grounded, and everything was screwed in well, so the circuit went through my fingers and arm and into a chassis. My impression was that my arm bounced between the PDU and the next server up for a while before I managed to jerk my shoulder back and pull my whole arm out of the rack.
Apparently I'm not a very good conductor, or there was something up with the breaker, 'cause it didn't blow the circuit. I wasn't hurt, save for maybe some light bruising, and no downtime resulted, so it turned out okay, but I'm much more careful now (and my PDUs use C14 receptacles, making this problem impossible, at least without a knife.)
That's actually pretty normal. Your body doesn't conduct well; people die from currents in the sub-500 mA range. A bigger issue is that once you've got hold of AC, sometimes you can't drop it.
Yeah, I considered setting up a GFCI after that, but decided that was probably a bad idea in production. I've become religious, though, about GFCI on the test-bench, as that's where most of the stupid happens anyhow.
In the USA, old landline telephones all ran on (nominally) 48V DC, with the "positive" being ground, and the negative 48V "below ground". Telco central offices all have/had huge glass carboy lead/acid batteries, with giant copper buses running 48V DC all over the buildings.
It took me years of asking, but I finally found the right person, who told me that the 48V-below-ground negative meant that dissimilar metal corrosion is drastically reduced in telco "outside plant".
Telecom equipment is actually -48v. It's done that way to reduce galvanic corrosion. The system has been in place since the original Bell system was based on 12v batteries.
I wonder what is done with the heat produced by the rectification. Presumably when you convert power for 7-10 customers at a time they aren't residential customers and the heat must be quite great. It seems a shame to just radiate that all away.
They don't do anything with it, judging by the photos of rectifiers mounted on the wall without any heat exchange or recycling equipment to be seen. It's unlikely that there's much heat to recycle though. If there was, those same photos would show significant heat sinks to help dissipate the energy away from the rectifiers.
We don't bother recycling any heat from roadside transformers or larger substations either, and they carry a great deal more energy than the rectifiers powering 7 - 10 lift motors.
Roadside transformers are AC to AC, which is way more efficient and easier than AC to DC. My car runs on DC, and the quick chargers that provide 60 kW are the size of a large closet and have loud fans to dissipate heat.
The largest source of inefficiency (and heat) in common isolated SMPS designs is output rectification, which has to run at frequencies ranging from tens of kHz to a few MHz, so the diodes there spend a relatively large proportion of their operating time in various half-open, half-closed states. Active rectifiers somewhat improve overall efficiency at high output currents, but the switching-time problem remains.
On the other hand, for a line-frequency rectifier the time required to switch can be ignored, and the only significant source of heat is the voltage drop across the rectifier diodes (i.e. 0.6-1.5 V), so one watt of heat dissipation per ampere of current is a good ballpark.
Solid-state rectifiers typically have a forward voltage drop of about 1 V. A 25 kW load at 250 V would draw 100 A. The rectifier power would be 1 V × 100 A = 100 W, for an efficiency of 99.6%. Bulk power rectification is quite efficient.
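The same arithmetic, spelled out (the ~1 V drop is the assumption from the comment; a full bridge with two diodes conducting at any instant would roughly double the loss):

```python
# Numbers from the comment: a 25 kW load at 250 V DC through a
# rectifier with about 1 V of total forward drop.

load_w = 25_000.0
bus_v = 250.0
drop_v = 1.0                          # assumed forward drop

amps = load_w / bus_v                 # 100 A
loss_w = drop_v * amps                # 100 W of heat in the rectifier
efficiency = load_w / (load_w + loss_w)

print(f"Current:    {amps:.0f} A")
print(f"Heat:       {loss_w:.0f} W")
print(f"Efficiency: {efficiency:.1%}")
```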
In deep dives into the literature (in search of other things), I've found rectifier circuits that use transistors (don't remember whether MOSFET or whatever) switched at line frequency to form a bridge rectifier with a lower forward drop. Of course it doesn't mean that anybody ever commercialized it.
It's very common to use thyristors in rectifiers in high power applications. They can scale up to many kilovolts and kiloamps and are used in HVDC links.
Linear and Intersil make COTS support chips. From a power supply design perspective it makes some interesting ideas possible, because you can turn a bridge rectifier on and off as fast as you want this way. So you can make a super low frequency switching power supply using the rectifier itself as the switch.
There are two obvious problems. First, the voltage regulation is garbage because you're running 60 Hz instead of a typical switcher in the kHz region, so you'll be investing heavily in large filter capacitors or you'll have awful ripple on the output. Maybe it would make a good battery charger, instead of the perfectly regulated 5 volt USB power that's typical?

The other obvious problem is that you can use four cheap diodes and one switching transistor on one heatsink, or four switching transistors on four heatsinks and probably a very small diode bridge to power the controller chip (although maybe controllers have on-chip rectifiers). And the cost of a switcher at reasonable power levels is a rounding error compared to its heatsink and the effort/labor of mounting it on a heatsink. So unless you're doing some crazy electrochemical-refining type stuff you'd probably not use a synch rectifier; it's just not economical. I bet hydrogen electrolyzer plants and aluminum and copper electrorefining plants DO use synch rectifiers.
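The filter-capacitor problem above can be ballparked with ΔV ≈ I / (C · f): a full-wave 60 Hz rectifier recharges the cap every 1/120 s, so at meaningful currents you need tens of millifarads to keep ripple in check. Load current and capacitor values here are illustrative, not from the thread:

```python
# Rough ripple estimate for a full-wave 60 Hz rectifier: between
# charging peaks (arriving at 120 Hz) the filter cap alone supplies
# the load, so delta_V ~= I / (C * f_ripple).

def ripple_volts(load_a, cap_f, ripple_hz=120.0):
    return load_a / (cap_f * ripple_hz)

LOAD_A = 10.0                                   # assume a 10 A load
for cap_mf in (10, 47, 100):                    # millifarads
    dv = ripple_volts(LOAD_A, cap_mf * 1e-3)
    print(f"{cap_mf:>3} mF: {dv:5.2f} V of ripple at {LOAD_A:.0f} A")
```

Even 100 mF, an enormous capacitor bank by switcher standards, still leaves nearly a volt of ripple at 10 A, which is why nobody regulates at line frequency.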
Active rectifiers (i.e. transistor-based) are common in various high-current applications. One quite weird example is line-interactive UPSes, which often use a topology that allows the same MOSFET bridge to be used as both a rectifier when charging and a frequency converter when discharging. A somewhat more representative application is various high-current DC supplies; for example, Vcore DC/DC converters on PC motherboards (12 V DC → <1 V DC) almost invariably use active rectification.
>In January 1998, Consolidated Edison began a program to eliminate DC service in its operating territory. At that time there were over 4,600 DC customers. By 2006, sixty remained. Between then and today[2007], when the last customer at 10 East 40th Street was switched to rectifiers on their side of the meter to generate direct current to supply its building’s elevators and sprinkler system, Con Edison has been switching DC customers to alternating current.
Since the start of the program in January 1998:
4,541 accounts in 4,288 buildings have been converted;
6,172 DC meters have been removed from the system;
193 in-building rectifiers have been transferred to customers;
148 250KW street rectifiers have been removed;
35,965 sections of DC mains (5,682,337 circuit feet or 2,660,634 cable feet) has been retired; and
545,185 circuit feet or 236,611 cable feet of DC services has been retired.
I live in a building in downtown NYC near Canal and Broadway. It was built in the 1890s and our Otis freight elevator is supposed to be one of the oldest operating elevators in the city. The first passenger elevator (also an Otis) was installed in 1857 around the corner on Broome and Broadway. Our elevator is powered by a DC motor. Until a few years ago, Consolidated Edison, who championed DC back in the day, delivered DC to power the elevator. Over the course of a decade or so, in an effort to get us to switch away from DC, they raised the price, and in the end it was less expensive for us to install a rectifier in our basement to convert AC to DC to power the elevator. The rectifier is a nondescript box about the size of a small refrigerator and emits an audible 60 Hz hum.
With so few customers (and no new ones allowed), it’s not surprising that the DC distribution system in San Francisco isn’t widely known, but it’s not a secret.
Regulated electric utilities like PG&E provide their services pursuant to tariffs, and this is no exception. See Tariff A-15, “Direct-Current General Service,” on the PG&E website:
"By the mid-20th century, large rectifiers installed at two downtown substations supplied the current, pushing 250-V DC onto a rectangular loop of 3.6-centimeter-diameter cable."
OK, can anyone explain why each of the drum-wind elevator motors does not have its own small rectifier and be run from local AC mains (possibly cross-phase for 1.4x voltage)?
As this interesting article goes on to say the local power company is splitting the DC distribution into smaller isolated networks anyway...
Remember that these were installed ~50 years before the transistor was invented. At the time, the hardware was very expensive. It was more economical to have a single centralized rectifier than multiple distributed rectifiers.
Since they were installed, it comes down to maintenance costs vs gutting and installing a brand new system. Maintaining an older system was apparently the choice these building owners made.
At that time, rectifiers often consisted of an AC motor driving a DC generator. Which probably could have been moved to customers, but was avoided for maintenance and reliability reasons (it's easier to service one big generator/motor pair than a few hundred scattered around the city, making noise in residential homes, ...)
Nowadays it would be trivial: a complete motor controller for variable-speed 3-phase motors, which could of course technically also drive a DC motor, costs a few hundred dollars/euros.
But probably not possible for certification reasons, as soon as you mess around with an old system you have to upgrade it to correspond to contemporary safety standards.
Yes, I accept your point for when the system was originally provided. But now it strikes me that the older systems could be run from a power cabinet with a step up transformer and a rectifier running off AC mains given a supply with sufficient current capacity.
They could have then, but mercury-vapor rectifiers are/were expensive, and motor-generator sets are noisy and need service; it's far easier to put in one big one than lots of little ones.
I'm sure that the power company could say to them "we're not doing DC any more" and force the businesses to fix the problem by installing their own rectifiers or upgrading the elevators but obviously the power company still thinks it is more cost efficient for them to continue to provide DC to these businesses than to cut them off. I'm sure that when the time comes that the power company doesn't think it is cost effective any more then it will turn off the DC.
It's an interesting point. Maybe it was easier to reuse the existing cable topology.
Edit: or perhaps the rectifiers are an off-the-shelf variety with a capacity far beyond the requirement of a single motor so sharing cut the costs.
A lot of places are switching to this; see NYC for an example. Instead of getting DC from the power company they get AC and convert it to DC on-site. ConEd shut down its DC grid in '09, I think?
See Commonwealth Edison Company, Schedule of Rates for Electric Service, Ill. C. C. No. 10, 1st Revised Sheet No. 148:
However, in certain individual situations, certain retail customers in the
central part of the City of Chicago are provided with direct current (DC)
electric service. Such retail customers are provided with electric service
through rectifiers that convert AC to DC. Such retail customers have been
provided with DC electric service since the early years of the twentieth
century. Beginning in the 1930's, the Company has been working toward the
retirement of DC electric service. The Company does not serve new or
increased electric power and energy requirements of any retail customer with
DC electric service. For a situation in which DC electric service is retired
at a retail customer's premises, the Company removes its rectifier and
associated AC to DC conversion equipment that had been used to provide
electric service to such premises. Eventually, all such rectifiers and
associated AC to DC conversion equipment will be so removed, and all retail
customers will be provided with AC electric service.
I think the customer pays for the rectifier losses, though. According to Sheet No. 191:
For a situation in which DC is provided through a rectifier, meter-related
facilities are located on the AC side of the rectifier.