Hacker News | bsoft16385's comments

I have 7 Mitsubishi heat pump head units controlled by ESPHome running on D1 mini clones (ESP8266). The D1 minis are powered by, and interface with, the head units via Mitsubishi's CN105 connector, which is just 5-volt UART.

The total cost was maybe $30 of parts on AliExpress.

I use 433 MHz Acurite temperature sensors with a software-defined radio (rtl_433) running on my Home Assistant box for remote temperature sensing. The 433 MHz sensors are cheap, have good range, and have excellent battery life.
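For anyone curious how that pipeline looks in practice: rtl_433 can emit one JSON object per decoded radio transmission (the `-F json` output mode), which is easy to consume from a script or feed into Home Assistant via MQTT. A minimal sketch, assuming typical rtl_433 field names (`id`, `temperature_C`), which vary by sensor model:

```python
import json

# Example of one line of `rtl_433 -F json` output for an Acurite sensor
# (field names are illustrative; exact keys depend on the sensor model)
sample = '{"model": "Acurite-Tower", "id": 1234, "temperature_C": 21.5, "battery_ok": 1}'

def parse_reading(line: str):
    """Extract (sensor id, temperature in Celsius) from one rtl_433 JSON line."""
    msg = json.loads(line)
    return msg.get("id"), msg.get("temperature_C")

sensor_id, temp_c = parse_reading(sample)
```

In a real setup you would read these lines from rtl_433's stdout or subscribe to its MQTT output rather than hard-coding a sample.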


Are you doing temperature control yourself, or are you feeding the readings to the Mitsubishi units as a remote temperature reference?

I know the Mitsubishi "wired controllers" (basically the official thermostats) can report a remote temperature to the unit, and the unit has DIP switches to select between the thermostat-reported temperature and the internal return-air sensor temperature.

I'm not sure if CN105 has a way to provide this temperature reference - if so, you could try it. Just make sure to set your wired controller (if any) as a "secondary" controller (otherwise it will also send its own temperature every second and overwrite the one you sent), and then set the proper DIP switch.
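CN105 is not officially documented; what's known comes from community reverse engineering (e.g. the SwiCago HeatPump library, which the ESPHome integrations build on). Per those notes, frames start with 0xFC and end with a one-byte checksum computed as 0xFC minus the sum of the preceding bytes. A sketch of that checksum, with the header bytes being purely illustrative:

```python
def cn105_checksum(frame: bytes) -> int:
    """Checksum used by the reverse-engineered CN105 protocol:
    0xFC minus the sum of all preceding frame bytes, truncated to 8 bits.
    (Per community documentation such as the SwiCago HeatPump library;
    this is not an official Mitsubishi spec.)"""
    return (0xFC - sum(frame)) & 0xFF

# Hypothetical frame header, for illustration only
header = bytes([0xFC, 0x41, 0x01, 0x30, 0x10])
check = cn105_checksum(header)
```

The full frame sent on the wire would be the header, the payload, and then this checksum byte appended.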


T-Mobile has a SIM lock feature that you can enable to block at least most employees from being able to swap your SIM. You can enable it in the account management app or website.

I was able to verify that it worked because an employee in a store literally could not transfer my SIM with it enabled. Their iPad app just gave an error of "customer has SIM lock enabled".

Interestingly the T-Mobile employee had never even heard of this feature, which suggests that basically no one uses it.


> You can find the settings under My account > Profile > Privacy and Notification.


T-Mobile already does exactly this for eSIM transfers, though the waiting period is 10 minutes, not 48 hours.


Energy usage isn't measured in "time the dryer needs to run".

A typical US vented dryer has a 5500 watt element, although it doesn't use that amount of power continuously since it cycles on and off.

I have no idea how much power a typical dryer uses in Switzerland, but the heat pump ventless dryers you can get in the USA now typically use around 1000W when they are running. They typically take around 90-120 minutes to dry a load depending on how big it is and how much water there is to evaporate.

This white paper has a nice graph that shows power usage over time: https://www.energystar.gov/ia/partners/pt_awards/SEDI_Fact_S...

You may notice that, while the heat pump dryers take longer to dry, they use so much less power that the longer run time is more than offset.
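A quick back-of-envelope calculation makes the point; the duty cycle and run times below are my assumptions, not measurements:

```python
# Energy per load, in kWh. All figures are rough assumptions:
# a 5500 W vented dryer element cycling at ~75% duty over a ~50 minute cycle,
# versus a heat pump dryer averaging ~1000 W over a ~105 minute cycle.
vented_kwh = 5.5 * 0.75 * (50 / 60)    # ~3.4 kWh per load
heatpump_kwh = 1.0 * (105 / 60)        # ~1.75 kWh per load
savings = 1 - heatpump_kwh / vented_kwh
```

Even with roughly double the run time, the heat pump dryer uses about half the energy per load under these assumptions.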


In the EU a typical conventional dryer draws about 2000 W, a heat-pump one around 800 W. Of course they have a load limit; you can't dry an entire set of winter bedding all together.

Anyway, most washing machines (both washer-dryers and wash-only models) and dishwashers draw around 2 kW while heating water: typically once per cycle for washing machines (twice if they also dry), and 2 or 3 times for dishwashers. Running time depends on the program you choose: the quickest wash is generally 15 minutes, as is the quickest dry; a typical wash for mixed clothes is 1h-1h30, and drying is about 20 minutes (much depends on how fast your washer's spin cycle is; at 1500 rpm, 15 minutes of drying is enough in most cases). A typical dishwasher cycle is 1h30.

Of course professional dishwashers finish in a few minutes, but they are far larger than the standard 60x60x90 cm size; washing machines and dryers take about the same time even in near-industrial models.


> …while the heat pump dryers take longer to dry…

I'm not even sure how true that is with the latest all-in-one washer/dryer combos. The ones released in the U.S. just this year, one of which we have (a GE), can do a full load wash and dry in about two hours, which is probably about how long our old separate units would take - or at least close enough that we don't notice or care. A load of blankets might be 2.5 hours.

And if it takes a few minutes longer, the lack of a vent and the fact that it runs on a 120V plug more than make up for it.


> I’m not even sure how true that is with the latest all-in-one washer/dryer combos.

Haven't used all-in-ones, but one possible concern: lack of pipelining (to use a CPU term).

I wash darks first, and when they finish washing I put them in the dryer. While darks are drying, I can put whites in the washer. When the darks are dry, I remove them, and then put in whites to dry. (Continue for any further cycles/types of clothing.)

With an all-in-one I have to wait for the darks to completely finish before anything can be done with the whites (etc.). Is this a problem for you?
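The scheduling question above can be sketched with made-up cycle times (these numbers are assumptions for illustration, not measurements):

```python
# Illustrative cycle times in minutes
WASH, DRY = 45, 75      # separate washer and dryer
COMBO = 120             # assumed combined cycle for an all-in-one
loads = 2               # e.g. darks, then whites

# All-in-one: loads run strictly one after another
combo_total = loads * COMBO

# Separate units form a 2-stage pipeline: load i+1 washes while load i dries.
# Makespan = first wash + (loads-1) * slowest stage + final dry.
pipelined_total = WASH + (loads - 1) * max(WASH, DRY) + DRY
```

With these numbers the pipeline finishes two loads in 195 minutes versus 240 for the combo, and the gap widens with more loads; whether that matters depends on how often you run multiple loads back to back.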


Here in Colorado (Denver metro) I frequently get items same-day, though most items are still 1 or 2 days.


Modern deep packet inspection gear looks at packet sizes and pacing to distinguish video traffic patterns. It is possible to fool these systems, but it is not trivial.


Any reason fast.com couldn’t stream a video as the test content?


Yes, this is exactly what it does, and it's a very clever trick by Netflix to discourage their traffic from being throttled!


I think it does pull a real video, but I'm not sure it's pulled the same way the actual Netflix video player would do it.


You wouldn't pull at gigabits-per-second bandwidth when casually watching Netflix.


Dunno about Netflix, but YouTube usually does. They send you a chunk of data as fast as possible, and then send nothing until your client requests the next chunk.

By doing that, they can get a good estimate of your connection's available bandwidth, which informs the decision of whether to automatically switch to a higher/lower quality feed.

It also means they can use 'dumb' servers which don't do any application-specific logic for throttling.
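The estimate the player can make from each burst is simple: the throughput observed while a chunk was actually in flight approximates the available bandwidth. A toy sketch (the chunk size and timing are made-up numbers):

```python
def estimate_mbps(chunk_bytes: int, transfer_seconds: float) -> float:
    """Throughput observed while a chunk was actually transferring,
    in megabits per second. Idle time between chunks is excluded,
    which is what makes the burst-and-idle pattern useful for
    adaptive quality selection."""
    return chunk_bytes * 8 / transfer_seconds / 1e6

# e.g. a hypothetical 5 MB chunk delivered in 0.8 s implies ~50 Mbit/s available
rate = estimate_mbps(5_000_000, 0.8)
```

The client compares this estimate against the bitrate of each available quality level and switches up or down accordingly.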


Recently I've noticed my phone disconnecting from my Wi-Fi router (a relatively underpowered device running OpenWRT and WireGuard) when starting YouTube videos. I guess this could explain it, if the router is hiccuping on that initial blast of data. I also use a Safari extension that lets me choose the YouTube video quality, so that might affect it if I'm not giving the feedback they expect for "auto" quality selection.

(The issue is not the router, though - this doesn't happen on my laptop. I think my phone is just more aggressive in assuming the Wi-Fi is down and it's better to switch to cellular. But couple that with my phone's policy to use VPN on cellular, and the switch becomes much higher friction. I tried simply disabling cellular data, but then it's even worse because every time the phone disconnects from Wi-Fi it pops an alert telling me to enable cellular data.)


Makes sense, but in that case it's quite easy to distinguish between a burst of traffic lasting <1 s and a speed test that lasts tens of seconds.


Costco manages to do this well. If you buy a Mac (or almost any other high-value small item) you take a paper slip which is scanned by the cashier or self checkout. You then go to the cage and they hand you your item right then and there.


The brand "WeChat" is not as well known as you might think. In China, it's known as 微信 (Weixin), and most people in China have never heard of the WeChat brand.

I also think calling Weixin "as ubiquitous as Google" understates it. There are alternatives to basically every Google service; in China, there is often no alternative to Weixin. 支付宝 (Alipay) is sometimes a substitute, but in many cases only Weixin is supported. Want to order food at a chain restaurant? Want to buy tickets for a museum? Want to choose the song at a karaoke bar? Need to fill out a customs exit form? Weixin is often the only option.


Isn’t QQ similar?


QQ is made by the same company as WeChat. It was the last-generation IM, with a desktop-focused application and an accompanying social network (QQ Space, so you know what the inspiration was). Both are still used nowadays, but in much more niche roles.


Not quite as common or popular but similar.


To be more precise, fractional scaling in macOS is implemented by rendering the entire display at 2x into a framebuffer larger than the actual display resolution, then scaling that output down to the display resolution.

In some ways, this is smart - it avoids the need for applications to handle anything other than 1x and 2x modes, and it avoids the complications of handling fractional pixel sizes (how does an application draw a 1-logical-pixel-wide line at 1.5x scale?).

In other ways it is monumentally stupid. The macOS technique wastes resources rendering at a higher resolution than you actually need. Worse, even though the downscaling is excellent in macOS, it still has ringing artifacts and it still is blurrier than native fractional scaling.

This is personally one of the things that I hate most about macOS. Many Mac laptops ship with internal displays that default to a resolution scaled in this way (though fortunately not the 14/16 inch Apple Silicon MacBook Pro). That's annoying, but it's not nearly as bad as the situation with external displays. I use a UHD (3840x2160) 32" display. On Windows it's perfect at 1.25x scale. On macOS, my laptop ends up rendering at 6144x3456 and then scaling the output down. This looks very noticeably blurry, and it's a huge waste of resources.
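The arithmetic behind those numbers, as a sketch of the "render at 2x, then downsample" pipeline described above:

```python
def macos_backing_resolution(looks_like):
    """macOS renders the whole desktop at 2x the chosen 'looks like'
    (logical) resolution, then downsamples to the panel's native pixels."""
    w, h = looks_like
    return (2 * w, 2 * h)

# A 3840x2160 panel at an effective 1.25x scale "looks like" 3072x1728
# (3840 / 1.25 = 3072), so the backing store is rendered at 2x that:
backing = macos_backing_resolution((3072, 1728))   # (6144, 3456)

# ...which then has to be scaled back down to the panel's 3840 pixels,
# a non-integer 1.6x downsample - hence the blur and the wasted GPU work.
downscale = backing[0] / 3840
```

Native fractional scaling (as on Windows) instead renders directly at 1.25x into a 3840x2160 buffer, with no resampling pass at all.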

Only Windows really gets this right among desktop/laptop operating systems.

GNOME uses the same behavior as macOS, except that the "render at higher resolution and scale down" options are hidden by default, and the scaling itself is a lot blurrier.

KDE does support native fractional scaling, but there are a ton of bugs related to rounding errors causing 1px gaps and random cases where applications end up being blurry, especially on Wayland. Worse, display scaling changes often don't apply until you log out or restart applications, which is particularly annoying if you have a different scale factor on an internal and external display.

Windows on the other hand handles this relatively well, although there are still some annoying bugs on Windows 10 related to the taskbar and notifications (mostly fixed in Windows 11). There are also still a lot of Windows applications that do not handle scaling at all, which end up in blur city.


> Worse, display scaling changes often don't apply until you log out or restart applications, which is particularly annoying if you have a different scale factor on an internal and external display.

This doesn't sound right - I had GNOME/Wayland running on two external displays with different fractional scaling, and a window spanning the gap or being moved from one screen to the other looked fine.

This (fractional scaling) only worked under Wayland - but there it worked fine.


That reminds me of a story about SiCortex.

My university (University of Colorado Boulder) bought one of a very few SiCortex systems ever sold. As an undergraduate I competed at SC07 and SC08 in the Cluster Challenge competition.

Our coach was a CU faculty member, Doug, who was also responsible for the SiCortex box we bought. At SC08 he told us that another one of the teams was competing with a SiCortex box. We knew that they would win the LINPACK part of the challenge, but we didn't know that LINPACK was basically the only thing they managed to get working.

We had also heard rumors that SiCortex was in trouble financially at that time. When we were walking the show floor at SC08, we came across the huge SiCortex booth, which had 10 or so machines of different sizes (I believe the smallest was a 64-core workstation and the largest was a ~5000-core whole-rack system).

I remarked to Doug that SiCortex didn't look to be in such bad shape.

Doug turned to me and said, "25% of the machines SiCortex has ever made are in that booth".

The SiCortex idea was like VLIW. On paper the numbers look great. On highly optimized synthetic benchmarks it looks good. On real world code you find out how hard it is to get good performance.


Sounds about right. The processors were six-core 500MHz single issue, which was pretty damn slow even by the standards of the time (2006-2008), beefed up with some extra floating point and some relatively very fast interconnect stuff. The key is that there were a lot of processors - 972 in the big machine. So you really, really had to rely on parallelism to get any kind of performance, and a lot of code doesn't parallelize well at all. Even in the HPC space, a surprising number of codes just aren't designed to scale past about 64 processors.

Also the machine positively sucked for linear integer code - like, say, compilers or OS kernels. One of the first things customers would do, naturally, was compile. Bad first impression.

Also, it was nearly impossible to get a Lustre MDS for a thousand-node cluster (which is what the biggest machine was) to run for any length of time without falling over, because Lustre was designed around the assumption that the MDS would be bigger and beefier than anything else and have "poor man's flow control" in the form of a relatively slow network. In our case it was exactly the same and completely unprotected, because the interconnect was the fastest part of the system by quite a margin. That was my nightmare for those two years. I've heard that Lustre has since added some flow control ("network request scheduler") but I was never able to benefit from that. PVFS2 worked better, and Gluster (which I worked on for nearly a decade afterward) would probably have been better still because it's more fully distributed and less CPU-hungry.

The reason I mention all this is that there's an important lesson: building a system with a very unusual set of performance characteristics is a terrible idea business-wise, because people won't be able to realize its potential. Not even in a fairly specialized market. They'll just think it's slow. Unless it's truly bespoke, literally a one-off or close to it, nobody will want it.

P.S. I actually had to visit CU-Boulder to debug something on that machine, with the aforementioned Doug. It became one of my favorite "war stories" from a 30-year career, but this has gone on long enough so I'll skip it.


Hey, some of us are listening, and would love to hear that story!

