" the M1 Mac mini was the perfect hardware to test out PoE, as on idle, the device only consumes 6W. When some load is applied to the internals, that power draw can go up to 40W. After some thorough research, we found out that the maximum throughput of Power over Ethernet was 15.4W "
They'll have to bump it up to 802.3bt (PoE++), which can support 60W.
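For reference, a quick sketch of the commonly cited IEEE power tiers, as I understand them (PSE is what the switch or injector supplies, PD is what's assured at the device after 100m of cable; the 40W is the article's peak figure):

    # Commonly cited IEEE PoE power tiers (W at the PSE vs. W assured at the PD).
    POE_TIERS = {
        "802.3af (Type 1, PoE)":   {"pse_w": 15.4, "pd_w": 12.95},
        "802.3at (Type 2, PoE+)":  {"pse_w": 30.0, "pd_w": 25.5},
        "802.3bt (Type 3, PoE++)": {"pse_w": 60.0, "pd_w": 51.0},
        "802.3bt (Type 4, PoE++)": {"pse_w": 90.0, "pd_w": 71.3},
    }

    mac_mini_peak_w = 40  # peak draw cited in the article

    for name, tier in POE_TIERS.items():
        ok = "yes" if tier["pd_w"] >= mac_mini_peak_w else "no"
        print(f"{name}: {tier['pd_w']}W at the device -> covers the 40W peak: {ok}")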
Cool project though. I've been wanting to mod my Mac Mini M1 to run off USB-C PD, which should be possible with modification because it uses the same PD IC as the MacBooks (CD3217). That could mean I could eventually get it to run off of a battery pack.
Surely they're all 'up to'? It's not 'you must sink this much current or else not compliant'?
It seems like a weird thing for TFA to say anyway - my PoE[+] switch was the cheapest 8 port I could get a few years ago on Amazon, and does 30W per port. I don't really understand how you could look into it at all, be willing to attempt the hack, but not use a switch (or injector or whatever) that's capable of powering it under load.
Depends. Within the standard, you can do 51W at the full 100m on standard Cat6, which is 300mA/pair. If your runs are shorter, you should be good to go at the full Type 4 current of 960mA.
Hah, yes they are all 'up to' on the device side. A switch port or injector is not compliant with a particular standard if it does not provide for the type-specific load, however.
They said it "can support 60W." As you point out, Type 4 can support up to 71W, so yes it can support 60W. They didn't assert that it is a standard power level.
What's the goal of having a Mac Mini running off an external battery? I mean, it seems less cost-effective than just getting a MB Air and not using the screen...
Headless servers where you want a modular or easily removable battery backup.
I really wish desktop PC power supplies would support such a thing. It seems stupid that we need to use a UPS with an inverter when the power supply should be rigged to accept DC input straight from a battery.
Remember that DC-to-DC power supplies are much smaller, because an AC PSU also has to deal with the transformer stage.
I worked for an ISP that was also a CLEC back in the late 90s / early 2000s. All of the telco side was DC (massive 48V battery banks), so when we were doing server implementations it made sense to get DC PSUs on the servers. Not sure why DC PSUs aren't a more universal option, as you'd think UPS providers could easily offer DC output models.
There are options out there; I ran across this [0] while looking for a typical ATX DC-to-DC PSU.
Where I live, this is actually extremely common now - typically referred to as a "Mini DC UPS" [1].
These cheaper ones just provide 9V and 12V output to power a fibre and/or WiFi router, as well as 5V USB-A, and start at around 30Wh capacity - some for less than USD 20. If you pay a little more, you can get 24/48V PoE included as well.
Unfortunately, ones providing USB-PD are quite rare. You do get "power stations" [2] with USB-PD (in addition to an AC inverter). These are typically USD 100 - USD 1000, and have much larger capacity.
Examples (I haven't tested these and do not specifically recommend them):
Yeah, that would be great. Pimoroni makes a version of the Pi Pico that takes a LiPo battery and automatically switches to battery power when the USB power goes away, but I don't know of any full computers that support that. I think Adafruit makes a LiPo add-on for Picos. I wonder if it could be modded to work on a Zero or something?
That's where USB-C PD is nice. It's all DC at several different voltages the recipient can request. The only downside is it's not quite powerful enough for many desktop PCs, but it's getting better.
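For context, a minimal sketch of the fixed voltages a PD sink can request, per the USB PD 3.0/3.1 specs as I understand them (the 5A figures require an e-marked cable):

    # USB-PD fixed supply voltages and the power each reaches at the 5 A maximum.
    PD_LEVELS = [
        ("SPR", 5), ("SPR", 9), ("SPR", 15), ("SPR", 20),   # classic PD
        ("EPR", 28), ("EPR", 36), ("EPR", 48),              # PD 3.1 extended range
    ]

    for rng, volts in PD_LEVELS:
        print(f"{rng} {volts:>2} V x 5 A = {volts * 5:>3} W")
    # The EPR 48 V level is what tops out at the 240 W ceiling mentioned above.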
Pretty much expect the Mini to end up this way within a few generations. Somebody in the Mac team really hates wall warts, but given all their other hardware is going USB-C one would think they’d want to standardize & eliminate the space/heat of the internal power supply.
Not really. However, as a long-time Mac and Apple fan, I can say the community has a lot of overly aggressive people who can't stand any criticism (valid or not) of Apple and will do various things in retaliation.
No it just reads as kinda paranoid because li-ion cells typically become spicy pillows long after the device becomes obsolete. It's also trivial to avoid by limiting cell voltage. Many devices, most notably iPhones, automatically go into kiosk mode after charging for a few days for exactly this reason: https://support.apple.com/en-gu/HT208710
Definitely disagree. My own anecdotal evidence is that many many laptops left plugged in 24/7 will develop a spicy pillow after a year or two. That includes a set of Apple laptops and hundreds of Dell laptops.
Definitely a bit paranoid - even if I'd prefer to call it robust :)
I've had very bad luck in this regard... but haven't honestly used a laptop in probably a decade. I was pleasantly surprised to see my latest (dust-collecting) Lenovo will stop somewhere around 80%!
Point being, sure - there are safety things... but batteries are consumables. I like these things to be easy-to-replace!
I find these being difficult to replace accelerates this pattern of obsoleting. CPUs and (especially disks) tend to pack plenty of punch/life, these days - well beyond the mechanical/chemical things they depend on.
Something to ponder - these chips may see particularly long use, being so power efficient. The utility bill won't be such a driver, sipping power and parallelizing decently.
How does one make sure that removable, consumable, high capacity batteries are not discarded as trash but instead taken to the proper facilities to handle them?
Respectfully, today people just chuck the entire phone/laptop in the bin. I don’t think making the battery module removable changes the outcome one bit, except that it would allow a tremendous amount more life out of these devices, reducing all the other (non-battery) waste streams and reducing the environmentally costly manufacture of new devices…. which is the reason why the shareholders love the current strategy and why all electronic manufacturers are moving the same way. Why allow users to get 3 more years of life from their laptop when you could convince them it’s e-waste after 3 because of a $15 battery?
Is the dichotomy here really that we can either have modular batteries, or we can have more intact waste trucks and workers?
However, having just taken an entire stroller out of my shared duplex recycling bin, I know there will never be an ethical way to get 100% of people to care enough to dispose of waste properly.
Having things that need special handling for disposal be less user-serviceable is one solution... but that goes against the replaceable battery.
The AA lithium ion batteries getting tossed into the trash and setting garbage trucks on fire are problematic enough. The energy capacity of a cell phone is quite a bit more and correspondingly more spectacular in the combustion.
> USA Today reports that 65 percent of fires at waste facilities in California were started by lithium-ion batteries. In a 2018 survey of 21 waste facilities across California, 86 percent reported a fire at their facility in the last two years, according to the California Products Stewardship Council (CPSC). Of those fires, 56 percent were attributed to batteries, with the remainder attributed to “traditional hazards of combustibles.” In other words, batteries are causing more fires than the oils, fuels, and other hazardous materials of waste management—combined.
I was replying to a question on why one wouldn't "just get a MB Air/not use the screen"... not the article.
How is this relevant? I don't say this to be rude, but this internet phenomenon of relentless pedantry is annoying.
Things tend to make [some degree of] sense when your first reaction isn't to shoot from the hip.
Mains batteries exist - most people should save the effort, get a UPS. Hackers on the other hand... it's kind of silly to ask why. The answer is 'because'.
No idea, but if they got it running on USB PD, battery pack aside, it would make it a one cable connection to a monitor that has PD support. Multiple cords don’t actually bother me but it sounds kinda neat.
I could see it making sense in a 'van life' context to use a smaller local battery rather than plug everything in to your main leisure battery. Use the latter to charge Makita packs say and then run most other stuff off those (there's a decent amount of open 3D printable adapters for them, as well as third-party/AliExpress stuff).
Would make way way more sense to have a macbook that you can then charge direct from a DC battery over USB-C. With the mac mini you still have to work out how to power an external screen as well.
Battery aside, POE also allows you to put it anywhere you have ethernet cable with sufficient power delivery. In this case you could have a workstation powered entirely off ethernet delivered to the location (some monitors will also work with a POE splitter as well).
That it happens to be 12 volts DC has some value versus whatever power loss an inverter has for use in a vehicle. Though I'm with you on the relative ease of starting with a laptop instead.
This would be fun for when LEOs come in to take your computer: they can easily keep it powered so all of the decrypted keys stay in memory. Killing power means going back to an encrypted state. In high-profile cases involving desktops, there are techniques for splicing the power cable to switch to a battery pack. This would make it much easier for the unskilled LEO to take your shit. Cause we all know you're the one they're after. Sleep tight! ;-)
That's why pretty much all desktop and server mobos have a "chassis intrusion" header that you can connect to a sensor/switch. Your machine should wipe keys or reboot if it's opened or moved.
This is a good idea! Kind of makes me wish I was a drug kingpin with a PC full of secrets so that I could justify trying that. Sadly I would just end up with a kid rebooting the PC by bumping into it :D
I'm going to try this with my extra Mac mini and one of my switches that outputs PoE++ at the full 60W. I should even be able to do this using a fairly bog-standard PoE splitter from PoE Texas that'll actually deliver a full 60W.
This change/mod involving one of the most locked-down devices seems pointless. What do you like about the project? Or is it that the additional cable is so irksome?
I have my own "must-do" projects that, from an outside perspective, are seen as pointless. So this is more about understanding than poking holes.
Another way to look at it, you can put one of the most powerful mini computers at the edge with PoE, think machine vision with industrial USB camera type of scenario.
If you don't get PD working, you can always just build a battery pack that charges itself over USB-PD and outputs 12V. Apparently the built-in PSU on the Mini takes 120/240V and outputs nothing but 12V.
I’d be very curious to understand (even very crudely) the breakdown of those 40W. Roughly how much does each internal component consume at its peak load? Or at least for the top offenders.
80%+ will be the M1/M2 SoC with its on-package RAM, ~5% will be the SSD, 5-10% the fan, and the remainder will be supporting circuitry (buck converters, regulators, etc.).
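Taking those shares against the article's 40W peak gives a rough illustrative split (the percentages are this comment's estimates, not measurements):

    peak_w = 40.0                 # the article's peak draw
    shares = {                    # rough shares estimated above
        "M1 SoC + on-package RAM": 0.80,
        "SSD":                     0.05,
        "Fan":                     0.075,   # midpoint of the 5-10% guess
        "Supporting circuitry":    0.075,   # whatever remains
    }
    for part, share in shares.items():
        print(f"{part}: ~{peak_w * share:.0f} W")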
I'm very skeptical of PoE now after 3 PoE adapters killed 3 Raspberry Pis I have.
I wouldn't want to risk something more expensive to that shit.
At this point I'd rather just have straight up 19V + and - cables bundled together with the Ethernet with some heatshrink around the whole thing to make it look like 1 cable.
I have 13-15 devices running over PoE at my house and all of them work just fine. Everything from the high-end Ubiquiti gear to Raspberry Pi 3/4 PoE HATs. It's powered by a UniFi 24-port PoE switch.
The only time I had issues was with 2 mesh APs and sketchy power at a condo that had minor power outages due to thunderstorms. I brought the devices back home and they work fine, and the new device doesn't have an issue. I was curious whether a UPS would've smoothed the power blips, but the new AP restarts and connects just fine.
> Sure it could be the RPi hat, but it just means the standard is so complicated that people don't implement it correctly.
Or there are ways to cheap out on a PoE device that may work in some cases but don't fully and properly implement the standard.
The standard is widely used in VoIP phones, wireless access points, security cameras, and all sorts of other networked devices that get installed in places that may not have nearby power outlets or where a single wire solution is beneficial.
Personally I have three Pis that have been on PoE their entire lives and have had no problem, but I used a name brand PoE hat (the Waveshare hat with the OLED display) and am powering them from a mainstream PoE switch. If you're using some random AliExpress hat with janky injectors you get what you pay for.
> I wouldn't trust the hat or whatever you call it for Mac Mini.
Almost any large commercial building has had hardware running on PoE that costs more than the average Mac Mini for years. Most PTZ cameras for example, high-end directional wireless bridges, even some nicer wireless access points.
The whole point of PoE, 48V PD USB-C, and similar tech is that they don't need to be UL listed. PoE is also electronically current limited unlike mains power so you can't pull 200A to start a motor nor can you start fires without a lot of effort. That's assuming you use a real PoE switch that negotiates power levels, not cheap passive injectors.
You're dealing with the same manufacturers that won't even bother to include the USB C termination resistors required to make "USB C" charging ports on their devices work with actual USB-compliant PD. You're talking about the kind of part that's so cheap that adding five of them is going to round down below a penny in your BOM. The kind of manufacturer that, if it had a motto, that motto would be "no corner too cheap to cut".
From memory, and it's been a few years/models... but the GPIO pins, including the 5V input pin, bypassed all of the circuit protection fuses and what-not that the normal barrel jack had.
It wasn't uncommon for people to roast their RPi with incorrectly done or poor-quality GPIO power setups.
This is still true. There are ample warnings to be careful when using the GPIO as input because of this. The hat probably passed on some variance that is supported by actual consumer devices and fried the RasPi because it has no protections.
Don't buy eBay junk and blame it on the standard, which is used by tons of high-end commercial equipment and is more appropriately a commercial feature. PoE isn't cheap; if yours was, that probably explains your poor results.
I have been thinking it would be cool to just salvage a functional MacBook Air motherboard from a "broken" device, install it in a small case, 3D print an I/O shield, and you have what you're looking for!
We powered the Mac Mini M1 using 12V DC, bypassing the built-in AC power adapter, and used it for some quadruped robot experiments. Some details on the connector are here:
https://www.ifixit.com/Answers/View/574827/What+PSU+connecto... (that article mentions Mac Mini 2018 but the connector/pinout still works fine for Mac Mini M1)
That looks cool! Here is our paper with videos of the quadruped robot with Mac Mini M1 on the back, receiving power from the robot https://sites.google.com/view/fast-and-efficient
I did a version without an enclosure (just 250 grams) and with a DC-to-DC buck converter for a stable 21V -> 12V, but it was too vulnerable, hard to replicate, and didn't fit properly inside the robot.
I've been wondering more than once if we couldn't improve (larger) offices a lot by distributing power pre-transformed so every computer and every screen wouldn't have to include a transformer.
I also sometimes wonder why, these days, when every new lamp is LED, we don't replace most of the dangerous 230V outlets in new houses with 12V and use 230V only for dishwashers, laundry machines, dryers and that kind of stuff.
Is there some kind engineer or hobbyist here who could shoot this idea down for me so it won't bother me for another 3 years?
(I have a couple of years of studies in electronics, but not power distribution. I think I understand why backbone networks use extremely high voltages to reduce losses, but for now I think it is more of an issue over long distances. I am willing to reconsider.)
For a given wattage, voltage and current can be adjusted (inversely proportional) within reason, but higher current with lower voltage results in some problems: voltage drop (which mostly only matters at long distances, typically negligible within an office) and thicker conductors (double the cross-sectional area for double the current, if memory serves). This is why things like PoE and phantom power for audio are often 48V instead of something lower.
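A worked example of that tradeoff, with an illustrative fixed load and cable (round numbers, not any particular spec):

    # Fixed power, varying voltage: current falls as 1/V, cable loss as I^2 * R.
    P = 100.0   # watts to deliver (illustrative)
    R = 0.5     # ohms of round-trip cable resistance (illustrative)

    for volts in (12, 48, 120, 230):
        amps = P / volts
        loss = amps ** 2 * R
        print(f"{volts:>3} V -> {amps:5.2f} A, cable loss {loss:6.2f} W")

Going from 12V to 48V alone cuts the cable loss by a factor of 16, which is exactly why PoE and phantom power sit at 48V.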
It really is bonkers. I wonder if it was the case of a VP with a pet project who pushed it hard to ship, got a promotion to a different org, at which point it was promptly canceled.
Only makes sense if it had a line-of-sight wireless AP in it, like 40 GHz or (to a lesser extent) the new 6 GHz band.
Plenty of domestic applications still require higher voltage (well actually power but I presume you aren't suggesting 12V with high current). Vacuum cleaners, hair dryers, anything portable that produces heat. Building houses with low voltage sockets would be really frustrating for people living there.
Is the danger of 230V actually significant in this day and age? I wonder how many fires/injuries are caused by 230V that wouldn't happen at a low voltage.
Fires and injuries at 230V aren't caused by the voltage, but by improper installation and handling. Don't stick forks in outlets, turn the power off when you're working with it, etc.
Also, it's not the voltage that kills you, it's the amperage. With regards to the 12V discussion, there's also AC vs DC to consider; AC will flip voltage 60x a second, making your heart go haywire, whereas DC is a continuous jolt, meaning your heart and other muscles will freeze in place until the power is released again, like how a defibrillator works.
> Also, it's not the voltage that kills you, it's the amperage.
A bit off-topic, but I never liked that saying. It is kind-of-right but also so wrong on so many levels. Amperage is not a thing that happens on its own; it is always a result of voltage. Voltage is the driving force, so it is the voltage that actively kills you, by forcing amperage through your internals. When the killing happens, the voltage is the real murderer; the amperage is just the murder weapon.
My hypothesis has been that the saying is due to how we generally think about power supplies. The voltage is a fixed quantity but the current fluctuates based on what the circuit asks for and what the power supply can give you. Something could be rated at 100V, but if the supply can’t deliver any significant amps it doesn’t matter much
I have this vision of a future where everything just runs off of USB-C and USB-PD. Even today, except in the kitchen and the bathroom (refrigerator, washing machine, ...), you could totally run most things off of the 240W max. Much more so in an office. It would be so much more efficient than what we have now, and safer. No idea how realistic it is, though...
Office equipment doesn't really have "transformers" anymore; everything uses switched-mode power supplies. The way they work is that they have an output buffer (a capacitor or inductor) and rapidly turn a switch on and off to keep that buffer at the right level: if the output buffer is below the target voltage, the switch closes and the voltage starts to rise. Once the buffer rises above the target voltage, the switch opens and the voltage drops as power is drawn from the buffer. This happens thousands or even millions of times per second. The principle works exactly the same for modern AC/DC power supplies as for DC/DC ones.
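A toy simulation of that bang-bang behavior, with made-up numbers just to show the mechanism:

    # Idealized hysteretic regulator: close the switch when the output buffer
    # is below target, open it when above; the load drains the buffer constantly.
    target_v = 12.0
    v = 11.0               # output buffer voltage (illustrative)
    switch_closed = True

    for step in range(10):
        if switch_closed:
            v += 0.8       # switch closed: the buffer charges
        v -= 0.3           # the load always draws the buffer down
        switch_closed = v < target_v   # the comparator decision described above
        print(f"step {step}: v={v:5.2f} V, switch {'closed' if switch_closed else 'open'}")

The printed voltage oscillates in a narrow band around the 12V target, which is the regulation.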
The problem with your plan is Ohm's Law, and more specifically the fact that wires aren't perfect and have some resistance (for now!). Ohm's Law gives us V=I*R, where V is the voltage in volts (V), I is the current in amperes (A), and R is the resistance in ohms (Ω). In a wire, the resistance is constant, and V is the voltage loss across the wire. So how do we reduce the loss? We reduce I. Luckily we only actually care about the total power, which is given by P=V*I. If we want the power to stay the same and reduce the current, we have to increase the voltage.
Let's say the two wires from our central transformer to the computer are 14-gauge copper, and they are 100 feet long. Their resistance is about 0.5Ω combined. We want to power a 120W computer. If we transfer that at 12V (the normal voltage computers use internally), we'd have to transfer 120W/12V=10A. The voltage loss across our wires is 10A*0.5Ω = 5V! So we put in 12V, but get out only 7V, as we burned 50W in the cable itself. To get out the desired 12V we'd have to put in 17V at 10A instead, or 170W to power a 120W computer. It would also mean supplying way too high a voltage to a computer connected with a 3-foot cable.
If we increase the voltage across the wires to 120V and down-convert that to 12V at the computer we'd only need to conduct 1A and the wire loss would be 0.5V, which at 1A is a power loss of 0.5W. That's completely acceptable, and because the computer down-converts anyways we don't really have to care about it getting 119.5V instead of 120V either.
But now we are back with a power supply at each individual computer, so in the end we didn't really gain anything. Instead of an AC/DC power supply in every computer we now have a virtually identical DC/DC power supply, so what's the point? You might have some small gains by doing the initial AC/DC conversion centrally, but in practice it probably isn't enough to care. It is only really worth it when your power comes from DC anyways, like an office with rooftop solar.
Alternatively we can use way thicker cables, but to get that same 0.5W loss at 10A would mean a wire with a resistance of 0.005Ω. To illustrate, that means using two 0000 AWG wires in parallel.
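Plugging the numbers above into code (0.5Ω round trip, 120W load) reproduces the figures, and also shows the nuance the approximation hides: through 0.5Ω, a 12V source tops out at V²/4R = 72W, so it cannot deliver 120W at any current.

    # Solve for the current that still delivers P_load after the cable drop:
    # P_load = I * (V_supply - I*R)  =>  R*I^2 - V_supply*I + P_load = 0
    R, P_load = 0.5, 120.0

    for v_supply in (12.0, 17.0, 120.0):
        disc = v_supply ** 2 - 4 * R * P_load
        if disc < 0:
            print(f"{v_supply:6.1f} V in: cannot deliver {P_load:.0f} W at all")
            continue
        i = (v_supply - disc ** 0.5) / (2 * R)
        print(f"{v_supply:6.1f} V in -> {i:5.2f} A, {v_supply - i * R:6.2f} V at "
              f"the load, {i ** 2 * R:5.2f} W lost in the cable")

The 17V row comes out to 10A with 50W burned in the cable, and the 120V row to about 1A with 0.5W burned, matching the numbers above.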
Yes, that's the tradeoff. High voltage for the long haul, low voltage locally is common.
There are computer equipment racks where distribution within the rack is at 12 VDC. These often have big busbars in the back, and a power supply in the base. Facebook's OpenRack started at 12VDC, but a later rev is at 48 VDC.
That's just within the rack; there's a power supply in the rack base running off something like 3-phase 220VAC. There are advantages to running off 3-phase power; there's always power available from at least one phase, and the capacitors needed to smooth DC are far smaller.
Telephone central offices have run the whole office at 48VDC for a century, with a big battery for backup power. Big bus bars carry that around the building. (Do telco offices still do that?)
Much industrial control gear runs at 24VDC. So do many military vehicles. It's a reasonable voltage to send a few meters, but not hundreds.
(The extreme case is ultra-high voltage DC power transmission, where power is sent thousands of kilometers at a million volts.)
For telco gear - at least the local distribution stuff in a box on the side of a road - a common setup is AC for normal operation, but 24VDC (or is it 48VDC?) between UPS and device. Because of the telco heritage those devices come with both AC and DC power inputs, and using both protects against power supply failures and allows the UPS to last a bit longer.
> Office equipment doesn't really have "transformers" anymore; everything uses switched-mode power supplies. The way they work is that they have an output buffer (a capacitor or inductor) and rapidly turn a switch on and off to keep that buffer at the right level...
The overwhelming majority of mains-powered devices do, in fact, have a transformer. It is part of a switched-mode power supply, though. The only devices that do not have one are those that are fully enclosed in plastic, where the user cannot under any circumstances come into contact with any of the conductive parts. A typical example would be an LED light bulb or a wall-socket-powered WiFi repeater (without an RJ45 port).
It is pretty hard to create a transformer-less device using a plain rectifier/switch/capacitor topology. The biggest issues obviously are the high voltage before the switch and the dead time when the mains voltage drops near zero 100 times per second. Most switch ICs made in the "west" cannot go up to 325V and are thus usually used with a transformer to step the voltage down below 60V. If you get a Chinese chip that can work off-line (as in, directly with the 325V mains voltage), such as the KP1063 <https://datasheet.lcsc.com/lcsc/2103171532_Kiwi-Instruments-...>, it still needs an inductor to bridge the mains dead time. This is incidentally done by the transformer in traditional power supplies. A capacitor would have to be huge to smooth those over at any significant power draw.
> If we increase the voltage across the wires to 120V and down-convert that to 12V at the computer we'd only need to conduct 1A and the wire loss would be 0.5V, which at 1A is a power loss of 0.5W. That's completely acceptable, and because the computer down-converts anyways we don't really have to care about it getting 119.5V instead of 120V either.
We could use 48V, which would bring the cable losses to about 3.3W (for those 100 ft). Or maybe, instead of wiring from the mains box, we could place transformers at the wall sockets. Some new installations already do that and include charging USB ports. USB-PD is now specified up to 48V / 5A.
I think we are already seeing devices abandoning the traditional DC connectors in favor of USB. A lot more people now have a large bank of charging USB-A and USB-C ports on their desks. Some vendors already integrate them into extension cords. It won't take long for LED lamps to come with a USB cord and an optional mains/USB transformer SMPS ("phone charger"). Laptops are already charging (and docking) via USB-C with PD; it might not take long for screens to follow.
Stepping 48V DC down to 12V, 5V, 3.3V is way easier and can actually work the way you have described above.
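Running 48V through the same 0.5Ω / 120W example from the comment upthread reproduces that 3.3W figure (the extra ~0.2W over the naive 2.5A estimate comes from the current rising to compensate for the drop):

    # Same assumptions as the worked example upthread: 0.5 ohm, 120 W load.
    R, P_load, v_supply = 0.5, 120.0, 48.0

    # Current that still delivers P_load after the cable drop (smaller root):
    i = (v_supply - (v_supply ** 2 - 4 * R * P_load) ** 0.5) / (2 * R)

    print(f"current: {i:.2f} A")                             # ~2.57 A
    print(f"voltage at the load: {v_supply - i * R:.2f} V")  # ~46.7 V
    print(f"cable loss: {i ** 2 * R:.2f} W")                 # ~3.3 W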
Transmission losses in DC are higher, so you need a lot more wire. You actually can get off the shelf low voltage lighting, though, but it tends to be high end designer stuff.
For the same voltage and wire size, transmission losses in DC are the same or lower (due to skin effect and capacitive/inductive losses). They are only higher when comparing low-voltage DC with higher-voltage AC, but the key difference is the voltage, not DC versus AC.
(As a bonus, DC has lower peak voltage for the same RMS voltage, so the wires need less insulation.)
> Thanks to the power efficiency of Apple Silicon, the M1 Mac mini was the perfect hardware to test out PoE, as on idle, the device only consumes 6W. When some load is applied to the internals, that power draw can go up to 40W. After some thorough research, we found out that the maximum throughput of Power over Ethernet was 15.4W and that too over varying voltages, which are details that Ivan had left out when showing off his findings on Twitter.
The last sentence has enough typos that I'm not able to follow what they're trying to say. What happens when the machine requires more than 15.4W? If the thing isn't actually usable or stable in real-world scenarios, this becomes a lot less exciting.
It'd also be more interesting if the full components list of what was added to the inside of the machine to make this possible was shared.
The article simply assumes that the creator only supports original PoE when they say "PoE". There's a good chance they used a PoE+ or PoE++ adapter that supports more wattage.
Yes, but passive PoE is almost universally at 24V, and per the Ethernet spec (as quoted above) an Ethernet PHY should tolerate 24V fine. This is important, as transients from nearby lightning or occasionally even coupling to power cables can produce this kind of voltage. Ethernet connectors are magnetically coupled for protection from these transients.
The problem with passive PoE in these cases is, I think, not the voltage so much as the current. The continuous 24V supply may overheat the magnetic coupling transformer and cause it to fail. Some Ethernet interfaces, usually on telecom equipment and quality switches, have overcurrent protection to prevent this. Unfortunately, consumer devices usually don't.
It's important to understand this because 802.3af etc. does apply power without being asked, as a test for a characteristic resistance on the receiver. Otherwise it wouldn't know whether a PoE-capable device was connected. Up to 20V can be applied during this process, but it is time-limited. In general, 802.3 PoE supplies must monitor the current usage of the powered device and cut off power if it is too high, or even too low, for more than a short period of time. This is in part to prevent this overheating problem on devices that might, for some coincidental reason, fall into the appropriate resistance range to activate PoE.
In other words, 24V or even hundreds of volts for a few seconds is perfectly safe; 24V for minutes is likely to cause damage to devices without better protection than the spec requires. Old Ethernet equipment used to make the non-isolated components relatively easy to replace, so that repairs after a problem like this were easier, but now the isolation is a tiny surface-mount part and replacing it requires tools and skill.
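A rough sketch of that detect/classify/monitor sequence (the thresholds are the commonly published 802.3af figures; this is a simplification, not the spec's full state machine):

    def pse_port_cycle(signature_ohms, class_current_ma, load_current_ma):
        """Simplified 802.3af PSE decision flow (illustrative only)."""
        # 1. Detection: probe at a low, time-limited voltage (~2.7-10 V) and
        #    look for the ~25 kOhm signature a compliant PD must present.
        if not (23_750 <= signature_ohms <= 26_250):
            return "no PD detected: port stays unpowered"

        # 2. Classification: raise to ~15.5-20.5 V briefly; the class current
        #    tells the PSE how much power the PD wants (else treat as class 0).
        classes = [(4, 0), (12, 1), (20, 2), (30, 3), (44, 4)]  # mA ceiling -> class
        pd_class = next((c for ceiling, c in classes if class_current_ma <= ceiling), 0)

        # 3. Operate at 44-57 V, but keep monitoring: cut power on overcurrent,
        #    and also if the PD stops drawing the minimum "maintain power
        #    signature" current (~10 mA) for too long.
        if load_current_ma > 350:      # 802.3af per-port current limit
            return "overcurrent: power removed"
        if load_current_ma < 10:
            return "MPS lost: power removed"
        return f"powered up as class {pd_class}"

    print(pse_port_cycle(25_000, 18, 200))  # -> powered up as class 2
    print(pse_port_cycle(10_000, 0, 0))     # -> no PD detected (e.g. a plain NIC)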
Looks like PoE uses pins 4 and 5 for +48V, and 7 and 8 for GND, which are respectively pairs C and D of GbE. On ISDN, the pinout could vary, but one I could find said 1-2, 4-5 for V+ and 3-6, 7-8 for V-. In either case, both sides of the isolation magnetics are connected to pins of the same potential, and current is only proportional to the voltage imbalance within each pair, which should be minuscule.
I wonder if the problem is that 1:1 signal transformers for decoupling are being replaced with simple DC blocking circuits. That can be done most simply with a spare 0.1uF cap and a 10kΩ resistor per pin, which by the way generates a load of I = V/R = 48V/10kΩ ≈ 4.8mA, and 48V × 4.8mA ≈ 230mW > 1/6W. That could cause the resistor to burn out if phone-sized components were used. Or, if the cap were only rated for 10V, it could fail open. I have nothing to support these hypotheses though; I could be entirely off.
Also, I suspect the "passive PoE" mentioned above could be cheap injectors with very rounded corners and an always-active 48V. Those are widespread for surveillance cameras and other neckbeardy applications.
> Looks like PoE uses pin 4 and 5 for +48V, and 7 and 8 for GND, which respectively are pair C and D of GbE.
There are three common pinouts. You've described 'Alternative B', using the pairs unused by 10/100; there is also 'Alternative A', using the pairs that 10/100 uses for data. And 4-pair PoE uses both.
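Spelled out (per the usual 802.3af/at/bt descriptions; on Alternative A the polarity can arrive in either orientation, which is why PDs include a diode bridge):

    # Which RJ45 pins carry power under each scheme; 10/100 data is on 1,2,3,6.
    POE_PINOUTS = {
        "Alternative A (over the data pairs)":  (1, 2, 3, 6),
        "Alternative B (over the spare pairs)": (4, 5, 7, 8),
        "4-pair (802.3bt)":                     (1, 2, 3, 6, 4, 5, 7, 8),
    }

    for scheme, pins in POE_PINOUTS.items():
        print(f"{scheme}: pins {pins}")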
Note that there are real 48V 802.3af/at Power over Ethernet devices, where the power source has to confirm that the powered device wants power before energizing the line, and fake "passive" 24V systems where the line is always energized. That's probably where the GP got the injector from.
My personal involvement with this was several years ago, when I was going to buy several UniFi access points. It turns out you have to be careful, because many of the models advertise PoE but in reality are jank 24V passive systems. I have not kept up with the current UniFi lineup, but at the time you had to make sure to get the "AC Pro" to have real 802.3af compatibility.
Eh, it is not like they are outright lying. When you dig into the specs, they do say 24V passive. But when someone says PoE, I expect 48V 802.3af-compatible PoE.
I don't know how true it is, but I heard horror stories where Ubiquiti 24V gear would activate, then fry when hooked up to a real PoE source. Personally, I suspect it was a 48V passive source.
I did a quick informal survey of the UI store, and it looks like the newest generation (U6) all handles 48V, while in the prior generation (AC) the Pro models were 48V and the Lite and Long Range models were 24V passive.
It should be that all devices have isolation. Ethernet is capacitively coupled which keeps you from having to coordinate your power/ground between devices. PoE is the recognition that if the DC levels don't matter because of that required isolation, we might as well not isolate in a power stage before the magnetics and deliver some DC power.
I have been thinking that PoE is seriously underutilized for what it's capable of. There are so many use cases where one needs both networking and power, yet if you want a one-cable solution you are forced to contend with wireless, which, while very reliable in a lot of scenarios nowadays, can still suffer from interference and complicated setup. Sonos and other wireless speaker systems are one of my main examples of obvious PoE use cases that have yet to be realized.
Hats off to Ivan for their amazing project; the reliability and seamless switching between AC and PoE is especially impressive[0].
PoE makes sense if you have a house or building with ethernet cables everywhere, but using your Sonos example, 99.9% of people or more do not have ethernet cables everywhere. They do have regular electricity from an outlet and wifi though.
That said, I'm all for installing ethernet cabling and the like in every room. But it'll likely only happen for new builds.
This seems to simply ignore the max continuous power rating of 150W. The mini can output 15W on each Thunderbolt port and another 15W across its two USB ports, which is already nearly the 51W limit assured to each PoE++ endpoint, not even counting the mini's own requirements.
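Adding up the figures in that comment (the per-port numbers are the comment's own; 51W is the 802.3bt Type 3 budget assured at the device):

    pd_budget_w   = 51.0         # assured to a Type 3 (PoE++) powered device
    thunderbolt_w = 2 * 15.0     # two Thunderbolt ports at 15 W each
    usb_w         = 15.0         # the two USB ports combined
    mini_peak_w   = 40.0         # the article's own peak-draw figure

    downstream = thunderbolt_w + usb_w
    print(f"downstream ports alone: {downstream:.0f} W of the {pd_budget_w:.0f} W budget")
    total = downstream + mini_peak_w
    print(f"with the mini's peak: {total:.0f} W -> over budget by {total - pd_budget_w:.0f} W")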
An electrical outlet or USB-C connection is more readily available to me than an Ethernet port with PoE. Laptops aren't onboard because the problem of powering laptops is largely solved. Also... not many laptops have Ethernet ports.
Historically, energy consumption was probably higher than classic PoE supported.
Also, any length of PoE run gets voltage drop, and PoE switches and injectors often have tedious modal configuration based on the length of the run, and are designed with non-standard limitations such as maximum draw limits shared across multiple ports, which in aggregate cause no end of issues. For verification, ask any experienced CCTV installer. These are exactly the sort of issues that cause users to take products back to their distributors.
So it's a case of "works in theory, PITA in reality, probable support and brand image impact huge, resulting priority zero".
That's right, but factories have other problems too: especially EMI due to huge loads being switched, which twisted pairs, shielded twisted pair and ethernet termination magnetics are designed to resist. Also, industrial cable routing can get pretty hairy, this means the ability to round a corner at a relatively tight radius can be important. Using standard ethernet cables for low current DC power keeps these concerns generally within acceptable realms of normality.
The more direct problem in my experience is that all kinds of network devices that could reasonably be powered that way don't support it. My Fritz!Box router and repeater don't support it, my Sophos firewall doesn't support it (though the Sophos XGS can supply PoE, it doesn't seem to be able to be powered that way either), and of course the Hue hub and Raspberry Pi can't be powered by PoE either.
I use PoE extractors to power a few different devices in my house, including RPis and some non-PoE switches. It's ridiculously easy to use them, but you generally need to know the voltage you want ahead of time.
I got some PoE adapters for my Google WiFi points so I can place them around the house and not mess with an extra power cable. Works great. You can get them on Amazon with various outputs (USB, USB-C, barrel connectors of various types).
Holy moly, didn't know these things existed :O and fairly cheap as well. I always thought PoE was a bit useless since most devices don't support it, and that it was (is?) quite dangerous.
That's what I want as well. It will/should always have an Ethernet cable attached anyway, but why do I need to run two wires?
Apple is a little weird, in that they clearly have no love for cables, yet they refuse to adopt PoE. The Apple TV and HomePods are prime candidates for PoE.
I also wish the HomePod supported wired networking in some configuration. It's powered via USB-C but only uses it for power, whereas you could totally use some USB-C + Ethernet dock. It's not like iOS (which HomePods are kinda based on anyway) doesn't support those.
AC in the context of electricity has meant "alternating current" for literally more than a hundred years. This is a you problem, not OP's, so turn down the snark. https://en.wikipedia.org/wiki/Alternating_current