Both bathroom and kitchen scales are large enough that they can run on AA / AAA batteries and casually go a decade (or more) without replacement.
Tangentially: I used to have a 3 button timer, a little LCD countdown display and 3 buttons. One button set hours, one set minutes, and one started / stopped the countdown. I found this timer used when I was a kid, and the single AAA battery it came with lasted into my adulthood, literally decades. I have since bought almost the exact same device - 3 buttons, one display - a few different times, and the newer ones only last a year on the same battery.
What's going on here? Presumably someone designed a new 3 button timer, since they can't just steal an existing schematic, and I guess just nobody cares to make it good? Why don't all 3 button timers last multiple decades on a AAA battery?
I've seen the same thing with other small electronics, like the dozen or so stopwatches I've owned in my life. A few of them go strong for decades, others have their LCD display start fading in a year.
> Both bathroom and kitchen scales are large enough that they can run on AA / AAA batteries and casually go a decade (or more) without replacement.
For bathroom scales, sometimes the design doesn't permit batteries larger than button cells, since the entire scale is a piece of tempered glass with four 1/4" tall feet and a lighted LCD display that shines through the glass. I like this particular design because it's thin enough to fit under a door as it swings. Batteries in my scale last over a year with near-daily use, and mine has a very bright display.
My relatives have a similar type of bathroom scale.
When the button cell ran out, I popped in another one we had lying around.
One month later, the button cell was dead, after using the scale a handful of times. Obviously something was draining the button cell battery when the scale wasn't in use.
I could try and debug the scale, but given that even ten minutes of investigation would be more than the amount of time I spent using the scale in one year, I just decided to remove the button cell when the scale wasn't being used.
Two years later, that button cell still works in the scale...
I buy a lot of those little timers for my business. They get pretty heavy use, maybe running timers for about 6 hours per day. They don't last anywhere near decades for us.
Anyway, my insight is simply that some of them turn off the LCD after a while of non-use, and some of them keep it on indefinitely. Maybe that explains the difference between your two experiences.
The timer I had never turned off the LCD. I found it in early elementary school and the same battery was still going strong when I got married in my late 20s. So it lasted 15+ years on a single AAA battery.
I had the same experience. One I bought in the mid 90s. The original battery that came with it lasted nearly 10 years. Then another 5 with the new battery. I retired it when the plastic casing came apart. The LCD never turned off. That thing was a tank.
The newer ones I am lucky to get a year out of the battery. Even using the on/off switch.
I've noticed something similar with remote controls. Receivers, CD players, DVD players, Blu-ray players, TVs, cable boxes...almost every one I've ever bought came with batteries that lasted for many years. Sometimes longer than the device the remote controlled.
Then, when the remote needs new batteries and I put in the batteries sold in US stores they would last a year or two.
The batteries that came with these devices were brands I'd never heard of and never seen at any store in the US. Usually they had lots of Japanese-looking writing on them and little English except for the brand name.
1) The OEM battery was probably fresher than the one that wound its way through retail distribution and then sat in a drawer at your house for longer than you realize
2) By the time you have to replace a battery, your remote's PCB has had time to adsorb water and contaminants from the air, get soda spilled on it, etc and is going to have higher leakage. It will then always draw more current in standby.
Batteries are sold with an expiration date on the package, and I don't think that's usually decades out.
In the 80s I remember seeing expiry dates on AA packs of '99, and my child-brain thought they didn't expire so they put a meaningless number there instead - I couldn't process the idea of the year 1999.
I've had the opposite problem. I was reading a lithium battery package many times, frustrated that I could not find the expiration date. I finally realized that "2032" was printed as a year and was not an alternate size code.
But 10 years is also the rated lifetime for CR2032 cells. Those made in 2022 will definitely still be good to use in 2032. I've had some last as PC CMOS RAM backup cells since the late '90s, and they are not rechargeable, nor being charged in the system they are in.
Sounds like nearly decades to me if bought in the 80s with an expiration of '99... Just because a childlike brain of ours can't understand that sort of time doesn't mean it's not literally decades.
That's asking for corrosion damage from the battery.
I've got a bathroom scale that goes several years on its pair of coin cells even though it turns on only when stepped on. I suspect there's a physical switch involved, though.
Why don't they use gravity as the power source? I've seen scales that require pushing a button several times to power them. Getting rid of the button and using the scale's full surface seems like the next step.
IIRC it's not really the build quality, it's the new limitations on the chemical composition inside the battery. I'm not sure about non-rechargeable lithium batteries, but I think alkalines leak more and last less due to mercury getting phased out. Some newer Li-ion cells might also suffer from having less and less cobalt inside, due to high cost and supply issues. Both are good trade-offs, especially since alkalines are probably on the way out and manufacturers are getting better at making lithium batteries without cobalt.
I have a digital LCD clock, a giveaway by NTT from a trade fair, that still works with the same single Maxell AA battery since 1998. The battery quality must have been amazing at the time, or the clock uses almost no power. I honestly don't know how that's possible. :D
In the same way the designers discussed above try and big-dick their designs, clocks went through some EE big-dicking on minimizing battery usage. Same thing with LCDs.
They got them to really low power usage. Some of the ones today would basically last your whole life on a single button cell.
I have this ultracheap made-in-China big kitchen clock, with a single AA that I've replaced twice in eight years. I don't buy expensive batteries; more in line with the clock's price.
It has a second hand (the kind that doesn't tick but appears to move uniformly), so there is some extra mechanical work.
I think the quality is hit and miss, even with premium batteries. I had enough devices damaged or entirely ruined by a leaked battery so now I remove them from anything not actively used.
If anyone knows a reliable model for AA/AAA cells that will absolutely not leak when left unused in a device for year+ I would love to hear it.
I’ve had much better experience with energizer not leaking while Duracell is terrible. Searching the net for other experiences, I found some prepper forums discussing this and apparently Duracell tries to extract so much capacity from the cell they sacrifice wall and end thickness causing them to leak.
I don’t think anyone can say they absolutely won’t leak, but search these types of forums/subreddits and you find other recommendations as well.
We switched to buying USB-rechargeable AA/AAA lithium ion batteries. The brand we buy is Pale Blue, but there are many other brands.
These batteries have a regular form factor, with an added micro USB (and USB C on some larger battery sizes) port for recharging. They work really well for us and charge quickly (and conveniently — everything has a USB socket nowadays).
The price may seem expensive, but after 10 or so charge cycles they should have paid for themselves. I expect these batteries will be good for many hundreds of charge cycles.
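The "paid for themselves after 10 or so charge cycles" claim is easy to sanity-check with rough arithmetic. The prices below are illustrative assumptions, not quotes for any particular brand:

```python
# Break-even point for rechargeable vs. disposable AA cells.
# Both prices are assumptions for the sake of the sketch.
rechargeable_each = 8.00  # USD per USB-rechargeable lithium AA (assumed)
alkaline_each = 0.75      # USD per disposable alkaline AA (assumed)

cycles_to_break_even = rechargeable_each / alkaline_each
print(cycles_to_break_even)  # roughly 10-11 cycles
```

At a few hundred rated cycles, everything past that first dozen recharges is effectively free batteries.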
I wouldn't recommend this personally, having had bad experiences with poor charge circuitry in integrated chargers like these (in my case it was in an 18650 battery). Get a good charger (e.g. Eneloop), and then you can get generic rechargeable batteries incredibly inexpensively for it, without risking poor charge management causing overheating or battery failure.
The same little charger circuit built into the batteries also converts the lithium battery chemistry voltage to 1.5v, making them a drop-in replacement for AA and AAA batteries.
Or are you saying you can get 1.5V lithium rechargables that work with an external charger?
Get IKEA's LADDA rechargeable NiMH; they are a rebrand of Eneloops, some of the highest-quality NiMH batteries available, and are made in Japan. All tests on YouTube and elsewhere consistently put them at the top of the line.
They have a very low self-discharge rate (and even come precharged in the packaging), with recharge cycles ranging from 500 (for the highest-capacity 2450 mAh AA ones) up to 1000 (for the 750 mAh AAA/1900 mAh AA ones). Also, they do not leak.
Big Clive suggests using nimh batteries for expensive electronics since they won’t leak. Also they are much better than they used to be, and there are many slow self-discharge brands that you can actually buy pre charged.
My solution for this sort of thing is the rarely-used devices get lithium cells. More expensive but they're not prone to leaking once they're discharged.
> Kitchen scales, bit more use, at least once a day but again 2 button cells a year
I had a kitchen scale that would eat through its battery pretty quickly (within a month) even when not being used. I figured it could be an interesting project to open it up and try to figure out where the stray power draw was happening, but the obviously correct solution was just to leave the battery cover off and pop it in and out for regular use.
Anyway, my current kitchen scale was given to me as a present and has a mini-USB input to charge an internal battery. I personally found it to be a bit overkill, but at least I wouldn't need to get a new battery all the time. Then I realized that it can't be used when plugged in and charging. As in, even if it is fully charged, it will not work while plugged in and charging. I can't believe that design idiocy. The engineers/managers/everyone involved in the production of this product should be ashamed.
I dropped my kitchen scale and it broke, so I had to go shopping for a new one. Which was kind of fine since that scale wasn’t great. But finding a good scale turns out to be pretty hard!
In the end I found some review that compares different scales and as a reference they used an entry-level lab scale. So I figured, hey, why don’t I just buy the benchmark scale? It costs about the same anyway.
So now I have an ugly German lab scale that actually has an off button and runs on a 9v battery. Here’s hoping I never have to buy another.
This is the secret for lots of these things; if you step slightly above “home use” you can often find industrial/commercial products that will work just fine - though do be aware of the limitations in some.
Or you could go with fully mechanical balance scales.
I have Sony WH-1000X M3 noise-cancelling fancy-pants wireless headphones and they can't charge and be used at the same time either! It is lunacy. Is there some crazy cheap charging chip that has this drawback? The headphones were very expensive still...
I have the same headphones, and that is not the only quirk that's driving me nuts.
They don't remember the last noise-canceling setting I used. I almost always use them at home, so I don't need NC, but every time I turn them on they're in NC mode and I have to press the button twice.
The electronics industry is sleeping on the job, because they have not defined standard sizes for consumer grade lithium batteries (that means with an inbuilt protection circuit). All headphones, etc. should be using standard batteries.
I have just replaced a battery on my laptop, it completely failed after 2.5 years of use - they are a consumable, not a durable good. All headphones, gaming mice, etc. are headed straight for the landfill when their battery fails, it is impractical to replace that inbuilt lipo cell as it has custom size and shape, etc.
I buy devices with replaceable batteries wherever possible.
They should have taken the apple approach and put the charging port on the bottom so you couldn't even lay it flat on its bottom with the charger plugged in.
> "How much extra cost does a hard on/off switch add to the bill of materials?"
BoM cost is not the whole story; a physical switch can greatly complicate an assembly, and may require hand-soldered wires. In addition to those cost-related factors, external switches are usually quite ugly, and getting them right can be very challenging, so it’s much easier to just omit them.
I think this is special pleading. At the volume we're discussing a surface mount flow solder on off device is not going to kill the BoM, and could be designed not to be ugly.
It's no different to any other bad design. It could be better.
All phones and computers can be completely powered down. They're hardly ugly switches.
Ermm, most phones and computers cannot be completely powered down. Either by nit-picking (CMOS clocks) or soft-switches (ACPI, soft button power-on), there's always some voltage in the system unless your PSU comes with an actual hard power switch (and even then, again, CMOS).
They consume a lot less power, but leave an 'off' phone in a drawer for a month starting at 100%, it will not be 100% when you 'turn it on'.
In low-power mode, even those microcontrollers are drawing just nanoamps (yes, nA), versus hundreds of mA in operation. So battery life goes from hours to what should be decades.
I'm thinking the battery goes dead because of leakage currents through the rest of the circuits, which were probably not designed to the nA standard.
A solid-state Off switch would also need a good silicon device to cut current consumption at the battery. Which also costs something.
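The nanoamps-versus-milliamps point above is just capacity divided by draw. A quick sketch of the arithmetic, using assumed round numbers for capacity and currents:

```python
# Ideal battery life at a constant current draw.
# Capacity and current figures below are illustrative assumptions.

def battery_life_years(capacity_mah: float, draw_ma: float) -> float:
    """Hours of runtime (capacity / draw) converted to years."""
    hours = capacity_mah / draw_ma
    return hours / (24 * 365)

AAA_MAH = 1000  # rough alkaline AAA capacity (assumed)

# A microcontroller sleeping at 500 nA: battery self-discharge wins long
# before the circuit drains it.
print(battery_life_years(AAA_MAH, 500e-6))  # hundreds of years

# The same board with 50 uA of stray leakage through sloppy circuitry:
print(battery_life_years(AAA_MAH, 50e-3))   # a couple of years
```

Which matches the anecdotes in this thread: the difference between a timer that lasts decades and one that dies yearly is a few tens of microamps of leakage.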
This is an emotional, low-quality response to what is a measured comment actually providing nuance to the discussion.
The parent poster is very clearly not saying that leaving off the power switch leads to a better experience for the consumer - they're just giving legit reasons that a design/marketing team might use to justify doing so. Please don't try to cheapen the knowledge that they provided like this.
> All phones and computers can be completely powered down. They're hardly ugly switches.
iPhones (and I assume most Androids as well) can't be completely powered off. Neither can Macbooks. In fact, I'd guess that most modern digital things with non-replaceable batteries can't be powered off.
There’s a specific Dell laptop reset procedure that requires you to open the case and disconnect the battery so you can then press and hold the power button to get it to discharge all residual power. It’s a pain.
I had to do this a couple of times with a Lenovo that sometimes locked up completely when it went to suspend - battery was highly inaccessible so it was a 30 minute tense process each time - firmware update eventually solved the problem. Not sure what the solution for this is - something like one of those paper clip reset holes?
My $20 "Pittsburgh" calipers from Harbor Freight (don't judge) are still on their original batteries, after 6 years. They have a power button, and a separate zero button.
They don't actually turn off though. It just turns the display off. They need to continuously sense position since digital calipers measure movement increments rather than absolute position. That's why if you take the battery out and put it back in it will prompt you to re-zero the calipers.
Yeah, my explicit mention of kitchen scales and calipers was intentional, they both become much more useful if the user can choose to zero the readout at any time.
It's weird but true. Adding physical switches to a device can be one of the most complex design elements. It complicates both the layout of the PCB and the design of the case. Finding a suitable switch is also a real pain, as parametric searches for switches don't work particularly well.
We did, but maybe not cheaply enough. If they can save a few cents by passing on the cost to you (in the form of more battery usage) they will do so as long as the market (or regulatory environment) doesn't mind.
I recently bought this [1]. It works with friction - you turn the knob once or twice and you're good to go for a couple of weigh-ins. It feels rather cheap, so let's see if it really lasts. I love the idea though. I wish we had more energy-harvesting gadgets like this.
When I was a child my parents had a purely mechanical scale, with a dial to adjust the zero.
Yet another case of "why fix what isn't broken". Only, I expect that an IC or two and an LCD to run a digital scale are cheaper at volume than some amount of mechanical clockwork, once the cost of batteries is pushed off onto the consumer.
A quick glance at Amazon seems to confirm your intuition. The cheapest mechanical scale I can find is $32 but there are lots of electronic ones for ~$15.
Since scales inherently involve putting weight on them it should be possible to use that weight to push down a plunger generating enough electricity to run for the weigh-in.
If there's a plunger resisting the scale going down to harvest energy, that would prevent all the force from being on the load cell, giving you incorrect readings.
Not if the plunger rests on the load cell. Then plunger just captures the work done as it is compressed, it doesn't reduce the force that passes through it (once at equilibrium.)
Have had a similar model for like 5 years now. No regrets yet. The small effort it takes to power it up is far outweighed by the disadvantages of a battery.
Before that, I installed a switch in the scale we had, to eradicate the battery drain.
I love the idea, but the non-flat top, and the display on the flat top kill it (and admittedly, most cheap kitchen scales) for me. If I can't put a big bowl on it and still see the readout it's useless to me.
Between clicking on/off at least once a day, and changing batteries once a year, I'm picking changing the batteries. I'd also guess most people would just leave it on. They may have already tried that.
To be honest, I also have a Dymo label printer (very old model), and it does have a physical on/off switch, but even with the switch off it still goes through batteries in about 6 months.
>Strangely enough none of our kids toys with batteries seem to suffer from this problem
We have some toys around where the way to turn them off is to unscrew the battery compartment and take out the battery. The toy cuckoo clock that the kid broke, which now goes cuckoo at odd times, was fun, but hmm, nobody can seem to find it in the house anymore; wonder what happened to it.
Get Vernier calipers! The scale is easy to learn and use and measurements will be at least as precise as electronic calipers. With no batteries required the tool has much less of an environmental impact and you never need to worry about whether the calipers will work or not.
It's just the cheap no-name calipers that eat batteries. Genuine Mitutoyo (and probably similar brands) treat their batteries very kindly.
I have used vernier calipers but the time savings of not having to scrutinize the vernier scale every time you take a measurement is quite worthwhile. Just buy a good set.
Fair enough, usability will obviously be poor if the scale is difficult to read. My eyesight is terrible but at least it is still all myopia so I can compensate by putting the calipers extremely close to my eyes. Looks awkward and people call me Mr. Magoo but it works!
I just bought a digital caliper with a big LCD display. It will work with Imperial or Metric units, which I really like.
My analog calipers are Imperial. I love them, but run into Metric units a lot. Got those in '88 and they are still calibrating in spec and working great. No battery. Did need light oil and cleaning a decade ago.
I have one Vernier caliper and... it is getting hard to read in some conditions. Bummer. I still prefer these in longer forms, say greater than 12"
It doesn't even have to be a Mitutoyo to get you down to 0.01mm precision; I got mine for about 40 euros. Using equipment I had access to at work, I checked that rating, and indeed: the tool I got is good for measuring at that scale, and you really don't need any electronics.
Usually you can pay the vendor to calibrate the tool for a couple bucks more.
Of course digital ones are much, much easier to read, but you only really feel that convenience when having to measure hundreds of items quickly to sort them into a tray or something...
And in the end, usually a Nonius-type caliper is good enough anyway, and skilled users can read those to up to 0.02mm... and that's already pretty good and, most of the time, good enough.
And in case you do need to know the length of something down to a micron, there are other tools for that with fewer sources of measuring error.
So yeah: mechanical calipers rock.
And: you don't always have to get the brand things to get what you need.
I keep a bunch of the US $16 stainless dial calipers lying around so I don't feel bad when I drop them or use them as a scribe. The Mitutoyos only come out when it's important: getting dimensions to make a copy, or the last pass on a part that really cares about it.
I have a very cheap plastic one with a dial, which works amazing well for the cost (<$10). I don't remember where I bought it, but it looks like this one:
https://www.summitracing.com/parts/sum-900003
Could you check the voltage on those cells next time you swap?
I've found that seemingly similar devices have very different notions of what constitutes a discharged battery.
I have a remote controlled humidifier and RGB LED - controllers for which use one and two infrared LEDs respectively. Both are powered by CR2025 button cells, which when new have an open circuit voltage of 3.3V and 2.7V once depleted.
The two-LED device considers a 3V cell "dead", while the one-LED will happily work at this voltage.
Voltage doesn’t tell you much about the state of charge of a battery (especially with no load being applied), and there may not be much energy left at a battery ‘floating’ at 3V. The cutoff may have more to do with saving money on the power circuitry than what the designer thought was a ‘dead battery’.
Our kitchen scales have a single on button and auto power-off after a minute or so.
I discovered completely by accident that holding the power button down for 3-4 seconds powers it off. Nowhere in any of the manuals was this mentioned (I went back and checked)
I bought an RF pet tracker last year that comes with rechargeable CR 2032 batteries and a charger [0]. The rechargeable batteries factored highly in my decision-making process.
After someone nicked my electronic calipers, I also got mechanical ones - they are perfectly adequate for all home use so far.
I used to use electric ones for 3D prints and designs, and haven't tried that since - might not be able to get the same perfect results with mechanical ones.
> electronic caliper: Same, eats up 1 button cell a year, just sitting in my toolbox
Ooo, a chance to plug one of my favorite youtubers; I believe it's this video [0] that compares cheap with expensive / brand digital calipers. The TL;DR is that the expensive ones use much less power when off; the cheap ones use about 4x as much. They both keep using power when off though, so that when you move the calipers while off they will still measure the distance.
This seems to be common in battery powered test equipment. I have a decade old Fluke multimeter and five year old Mitutoyo calipers that both seem to be powered by magic. Neither has needed battery replacement despite regular use.
When I started living in my own apartment and got sick for the first time, I bought a thermometer to measure fever. The second time I was sick, a couple of years later, the battery was flat. So I bought an old-school thermometer without a battery. I still use the same one 20 years later. Of course it takes a bit longer to know your body temperature, but when you are sick you are not in a hurry anyway.
I bought a Yamaha FX-500 processor in 1989. 29 years later, I did a mod to convert it to a batteryless NVRAM chip. The original button cell battery was still good; settings I last touched in 1990 were still being held.
Garage door openers can last 10-15 years too; those use the battery to transmit a signal, not just to hold a piece of SRAM alive.
The bathroom scale is just a poor electronic design.
Bathroom scales, kitchen scales and calipers all have versions that work without electricity and aren't much less convenient (if not more convenient since no waiting for electronics to boot up slower than they should or clunky menus)
I disagree. Measurements are funny things depending on the level of detail you need; analog versions of measurements are very clunky when you need precision. (now this is a bit much for bathroom scales and kitchen scales, but it was the calipers that interested me). When I need under .01 precision, my analog calipers just do not do the job as well as my electronic calipers. I can read them, but the last couple of thousandths are always sort of a guess. If I'm working on a project with tolerances of .01 or less, that's just not an option.
I guess a fun middle ground - I have a set of mechanical calipers with a readout in actual numbers on the side like electronics would, but all mechanical, like clockwork or whatever it is in there, going to .001. They're infuriating to use, because the numbers are so small on the side, and at that level of detail, stupidly sensitive.
edited the decimals because it's early and my brain shut off.
Is your caliper really that accurate and calibrated? In my experience you need a calibrated micrometer for that level of accuracy, especially to get a consistent measurements between several persons
Yes, reputable digital calipers are fine for thousandths of an inch. A micrometer is good for further precision and when you want more consistent measurements of softer materials. For example, if I measure my set of gage pins, the caliper agrees exactly with the spec (a .210 pin reads 0.2095), and it's pretty hard to torque down too tightly on hardened steel. But if they were nylon and not steel, you'd probably get better repeatability with a micrometer. (And it's worth noting that the pins generally aren't actually half a thou smaller; a micrometer reads 0.20990. If 4 ten-thousandths of an inch are important to your project, yeah, you need a micrometer.)
Yup. These calipers that measure to .0005 of an inch only show hundredths of a mm (10 microns).
As for language, "thou", short for thousandths of an inch, is kind of the base unit in imperial machining. That's why the next one down is "tenth" in the vernacular. It's very confusing because 100 thou is one tenth of an inch, but you'd never call it a "tenth" even if "inch" is implicitly the base unit. Also confusing is that there are SI prefixes for all of these things, but they aren't in use. (Why not "milliinch"?)
Finally, one more advantage of the metric system; to measure 1 micrometer, you need a tool called a micrometer. That's easy to remember!
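For anyone outside imperial machining, the vocabulary above reduces to plain arithmetic; here's a quick sketch:

```python
# Imperial machining units vs. metric, as arithmetic.
MM_PER_INCH = 25.4

thou = 0.001    # "thou": one thousandth of an inch
tenth = 0.0001  # "tenth": one ten-thousandth of an inch (not 0.1"!)

print(thou * MM_PER_INCH)          # about 0.0254 mm, i.e. ~25 microns
print(tenth * MM_PER_INCH * 1000)  # ~2.54 microns

# The gage-pin example from above: 0.210" nominal, caliper reads 0.2095"
print((0.210 - 0.2095) / thou)     # ~0.5 thou of disagreement
```

So a caliper reading in "hundredths of a mm" (10 microns) sits between thou and tenth resolution, which matches the comment below about the mixed-unit displays.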
Yes it is. It and my mics are calibrated to at least the thousandth. The shop has a 6" mic set on the ten-thousandth. But nobody uses that one because that's a level of detail that is just amazingly frustrating to chase.
>between several persons
This is just wrong. Rule one of precision equipment - do not touch my precision equipment.
Hum... Since I bought a digital caliper that allows reading sizes with actual 0.01mm precision instead of "guessing it's around X", I've noticed I have no object around that won't compress by more than that amount when I apply a reasonable amount of force with the caliper.
I'm also sure it's not calibrated to this level, but that doesn't hurt relative measurements.
The nice thing about analog is they don’t confuse you about precision - a digital caliper could be only precise down to .01 and yet show .0001 increments.
For many things like a bathroom scale precision isn’t really needed anyway.
I try to never buy something that uses batteries and has a version that doesn't. My bathroom scale is a spring one. I do notice it isn't as accurate. The ones with weights are best, but those are quite expensive and take up a lot of room. For the kitchen I use electric for the extra precision while saving space.
My digital bathroom scale doesn't have batteries; it has a big capacitor and a foot-operated push-button that charges it. I tap the button maybe three times and that's enough energy to use it. It was also dirt-cheap.
I have an oven thermometer that has two active parts which are linked via some sort of wireless system.
One part is the one you plug the actual thermal probes into, and that then transmits data to a separate display. You can turn that one off.
Then there is the separate display that lights up and beeps and what have you: this one doesn't turn off, presumably so you can admire it saying "--" whenever you're not actively measuring the temperature of something...
I had some digital bathroom scales, and I weigh myself pretty much every morning. The battery lasted a month, and those button cells are pretty expensive.
I decided to take it to be recycled and just bought a mechanical one. It was a little annoying at first as it's not as precise, but for my purposes that works fine.
Things that sit for much of the year I tend to remove the batteries when possible because I assume the batteries will leak.
It might not be true, but it feels like battery quality has gone downhill recently (especially AA and AAA). I had very few leaks from the late 90's to about 2015. Since then, a lot more.
Someone at my work has a really nice electronic caliper that is solar powered (think like a calculator: low enough power requirement that ambient light is more than enough) and thus will effectively never need a battery change.
That is really clever, should really be used more often.
When it comes to kitchen scales, most Kenwood food processors double as very accurate, premium kitchen scales with a backlight, and as they are plug-in they don't need a battery - a great option if you have the counter space - you just place a platform where the chopping bowl goes.
My wife has a calculator that is solar powered--unfortunately, the light requirement to reasonably use the calculator is lower than the light requirement to run it.
For all of those, I would argue the best design is one that eats a few batteries a year.
Calipers need to hold memory specifically.
But more generally, most of those are using alkaleak batteries, and you really want to be changing those out on at least a yearly basis. So it actually makes sense to let them have a small always-on drain. (Some might be using a 2032 lithium, but I can almost guarantee at least the calipers are a 357/LR44 alkaline battery.)
You don’t need power to retain a tiny bit of settings data.
I believe the real reason is because most of them have the “helpful feature” of turning on when you move it. Without something monitoring the sensor, no way to know to turn on automatically.
No, most calipers can only do relative measurements. It’s constantly counting how many 0.01mm the head has moved so it doesn’t lose track of where 0mm is located. Turning on the LCD when the volatile counter changes value is a free feature since the mcu is always on.
Of course better calipers do absolute measurements, but they’re too expensive to scribe lines with and therefore useless for hobbyists.
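The relative-counting scheme described above is essentially quadrature decoding: the MCU watches two phase-shifted channels and bumps a counter on every valid transition, which is why it can never fully power down. A rough sketch (the 0.01 mm step size and the decoding details are illustrative, not any particular caliper's firmware):

```python
# Minimal quadrature decoder sketch: two phase-shifted channels (A, B)
# produce a Gray-code sequence; each valid transition moves the count
# by one step (0.01 mm here, purely illustrative).

STEP_MM = 0.01

# Gray-code order of the (A, B) states as the head moves forward.
ORDER = [(0, 0), (0, 1), (1, 1), (1, 0)]

def decode(samples, start=(0, 0)):
    """Count signed steps from a stream of (A, B) samples."""
    count = 0
    prev = start
    for state in samples:
        if state == prev:
            continue  # no movement since last sample
        diff = (ORDER.index(state) - ORDER.index(prev)) % 4
        if diff == 1:
            count += 1   # one step forward
        elif diff == 3:
            count -= 1   # one step backward
        # diff == 2 would mean a missed transition; ignored here
        prev = state
    return count * STEP_MM

# One full forward Gray cycle = 4 steps = 0.04 mm
print(decode([(0, 1), (1, 1), (1, 0), (0, 0)]))
```

If the counter is volatile, losing power means losing the zero reference, which is why turning the display on when the count changes is essentially free: the decoder has to run continuously anyway.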
I use the kitchen scales (and a smaller set of digital scales) for measuring out things like resin where you do need a reasonable degree of precision, especially when you're measuring out small amounts e.g. 5g of A and 6g of B.
Ever tried baking with some of your ingredients 20-30% off? Doesn't work well for lots of recipes. What's even worse is when the error isn't a linear x%, but changes with increasing weight.
It comes down to the electronic design: specifically, low current draw when "asleep". Obviously it's possible to make electronics that consume a tiny amount of power (think of a quartz watch that lasts for years on a very small button cell). It's just poor workmanship by the designers, or penny-pinching.
I got this fancy tire gauge+filler. It uses a couple coin batteries. They live around 6 months with it "off". I use it a couple times a year, so basically every time I go to use it it is dead. Removing the batteries requires disassembly. A year or so ago I soldered in a couple AA batteries in place of the coin batteries, it's been good so far, but I'm waiting to see.
On the other hand, I have a Mitutoyo caliper, which also uses a couple coin batteries. I sometimes forget to turn it off when I put it back in the case, and so far, over 6+ months, it hasn't run the batteries out.
So it CAN be done. Both devices are basically the same: they read a sensor and drive an LCD display.
Back when I was in HS, I had a Casio calculator watch, but it ate through batteries (3-6 months IIRC). The bracket for the strap broke, so I just taped it up in my locker. Soldered it to a D battery, and never had to replace it again. :-)
User control has a big benefit here. I want certain devices to sacrifice battery power in exchange for fast wireless re-connections over bluetooth. My headphones for example - I don't want to wait, use the bluetooth antenna all you want.
But I don't want that 100% of the time. With a power switch, I get my ideal user experience (fast reconnects) and I have control when to apply that user experience (on/off switch).
It’s totally possible to engineer devices that last a lot longer.
I’m currently working on a new device that will monitor my weight for 3 years without recharging. (It reads continuously, it’s a scale for Bottomless.)
My most ridiculous experience was when an MP3 player froze back in the day. We had no way to switch off, or pull the battery out, so we waited 2 weeks, until the battery was dead. Then it worked fine after a recharge.
my son's hoverboard had a similar malfunction. literally the handbook said there is no reset, and we just need to let it drain the battery. it took DAYS for the LED lights to drain the battery...
Accidentally fork bombed a MacBook and had to wait for it to die as well. Had to put it in the fridge for part of that time due to overheating.
Every Mac laptop I’ve used in the last 15 years has supported forcing a shutdown by holding down the power button for a few seconds. If the model is new enough that there’s no power button, you can hold down the Touch ID sensor instead.
…And one should be able to turn off every LED, even when powered on… At Ikea I almost bought a nice brushed steel extractor hood, except it had this obnoxiously bright blue LED on the front. Why? I know when that thing is on without it drawing my attention constantly.
Same with my Lenovo X13 gen 2, there is a bright white led next to the power button… just why…???
I don't know about the X13, but a lot of ThinkPads you can control the LEDs with some settings; /sys/class/leds/tpacpi::power/brightness on Linux (also some other knobs/LEDs in there).
Thus far this has worked on every ThinkPad I've had; but again, I don't know if it will work on your X13 (or how to do it from Windows or other systems), but something to look at if you haven't already.
Looking at the Linux source "tpacpi::power" is in the "TPACPI_SAFE_LEDS" list; the setting mentioned in that thread only protects things like "tpacpi::dock_active", "tpacpi::dock_status1", etc.
I checked, and my kernel doesn't have that option, and I can change it just fine ("echo 0 >/sys/class/leds/tpacpi::power/brightness", as root of course).
I like how modern routers have a button on the front to just turn off any LEDs, that's a great improvement I think.
Anyway I've got a speaker in my bedroom with an LED that won't turn off, and a USB-C hub that has a tiny blue LED that requires me to disconnect the whole thing first. Really annoying.
I have a TV with a flood light on the bottom when it's off and the IR receiver is also behind the translucent plastic for the light. It's in my bedroom and is bright enough to keep me up, so I have electrical tape over it which I then have to remove to use the TV because it doesn't have buttons.
An off switch doesn't convince me more though. If the company really wants things always on, they could easily wire the hardware off switch to a software off switch and elevator-close-button it.
In fact, even a non-nefarious company might do that so a sudden power loss doesn't cause data corruption.
When my iPhone's battery is depleted the screen goes blank. If I try to turn it on, an empty battery logo appears and it says "Find My enabled" underneath. So, I assume not.
Sorry, the following rant is only tangentially related.
The new Windows "sleep" states are bewildering.
My S/O's new laptop only supports "S0" or "Modern Standby"[0]. It can sit there closed up in its deepest supported level of "sleep" and have its fans spinning and keyboard backlight lit while downloading updates or whatever else it deems important.
Sure, introduce something like S0 and spend time improving it. Try to "perfect" it like how Apple has seemingly perfected its low energy use modes on its devices. It will probably take MSoft 10 more years. But don't also artificially restrict the use of other sleep states. On the new laptop it is supposedly a hardware restriction, but I'm guessing it's more just some firmware toggle (IIRC some laptops have had S1-3 re-enabled via firmware updates, but I'm not entirely sure now).
Edit: it seems that MS has patched a bunch of regedit tweaks that used to allow re-enabling more sleep states[1], though it depends on exactly what laptop model you are on as well I suppose.
Note that the charge controllers for many lithium cells have volatile state memory--the current draw is minuscule but it's there and if the state memory is lost the battery is bricked.
The battery will shut down when the voltage reaches a cutoff, not when it is truly depleted.
Me too, for more than a decade. I never trust anything that cannot be physically unpowered. Long before IoT, IPMI and smart-stuff attacks became a thing. The more I learn about hw/sw implants, the less I believe in device security.
Related to the topic, besides the energy argument and the ability to cycle system state, I'd argue the huge importance of being able to power things off for personal freedom, especially for things with connectivity. We still have the power to shut off devices and largely the ability to block their connectivity (e.g. routing TV ads through a network filter).
With 5G and future connectivity, it feels as if there will be a push towards not being able to block the network access of individual devices, such that a TV or fridge will just speak to its server directly via some XG/IoT network which the user does not control, without any user ability to block this communication.
After losing control over connectivity, it seems as power switches will be the next thing we'll start to lose control over, for engineering and ergonomic reasons, but also for control reasons. Such a shift may be further pushed by more ubiquitous and long-range wireless charging, if it gets to the point where devices such as our phones can be sufficiently charged simply by being inside buildings.
Just thought I'd throw in my $0.02 vote for the ability to power off things.
In a somewhat similar manner, my CPAP has a 4G modem that's a huge double-edged sword (for Americans only, you can see where this is going...).
On one hand, it is nice to have an app that shows my stats, and my doctor can also remotely access how my therapy is going.
On the other hand, it also phones home to the manufacturer & my insurance. When I don't hit my usage target for a specific time period, insurance stops paying for it and I have to personally pay out of pocket. So now you add the stress of not only complying with sleeping with an uncomfortable apparatus on your face but also if you don't keep it on enough (I take it off unconsciously in my sleep) you start paying thru the nose for it. Vicious cycle...
Not really. The device belongs to the insurance company and they're only willing to pay for it if it's actually being used. They are quite expensive machines, and they don't want people having them in their house, and not actually using them (CPAP has a very low patient compliance rate).
And that's why they have tracking built-in. You're perfectly able to buy the machine yourself without insurance and then you don't have to turn on the modem.
And also after you've used it for a while they give it to you and then you can turn off the modem.
IMO it's a misuse of the technology. Controlled remote access was a good idea for doctors to see how therapy was working and adjust when necessary. I'd argue that newer APAP machines are so good at self-adjusting that it's less clear that remote access is necessary. And using it for insurance purposes should never have been permitted to begin with. Price it into the premium and skip the dystopian monitoring part.
Personally, I just bought my machine outright. Yeah, it was close to a thousand bucks. But it's been going strong for years, so I'm happy with the investment. It has never been allowed to connect back to resmed.
I agree APAP is so good you no longer need a Dr. involved. People should be allowed to purchase an APAP machine if they want it, with no prescription.
However I disagree about insurance - they are paying for it, so they can chose how they want to monitor usage. If you don't like it, then pay for it yourself, exactly like you did.
Also, the older machines don't have remote access, they have a little card you have to bring periodically to the Dr. so they can confirm to insurance you are actually using it. Remote access was meant to make it less annoying for patients, and cheaper too since you don't have to go to the Dr. so often.
Pricing it into the premium is a terrible idea - it would raise premiums for everyone, for very little gain.
> Pricing it into the premium is a terrible idea - it would raise premiums for everyone, for very little gain.
Enough to matter? Let's put this in context. A typical CPAP machine will go for years before needing to be replaced, and can be purchased retail for about $800. Compared to routine medical expenses, that's nothing. Heck, the reduction in costs for monitoring compliance would probably lead to -lower- premiums, not higher.
I think the modem worked its way in there because some employers mandate compliance and that's how they check. (Think safety-critical jobs like driving a train; if you're diagnosed with sleep apnea and don't use the prescribed mitigation, you can't drive the train. Too many people feeling "fine", falling asleep, and running their train around a 30mph curve at 80mph.) Insurance companies obviously love the excuse to not pay, but that's probably not the primary purpose for telemetry.
You can buy your CPAP outright for around ~$1500. This is a problem for many people, but probably not HN readers. The telemetry basically facilitates an interest-free loan that lets you pay for the machine over a couple years. More favorable financing than a credit card, so people aren't getting super screwed here.
CPAP: Continuous Positive Airway Pressure. A device to push air into your lungs usually during sleep to treat or give support with different conditions.
> With 5G and future connectivity, it feels as if there will be a push towards not being able to block the network access of individual devices, such that a TV or fridge will just speak to its server directly via some XG/IoT network which the user does not control, without any user ability to block this communication
How does one ensure that a product is built this way, to avoid it?
I understood that a number of current 5G capable devices - e.g. telephones - can be disconnected the usual way (a button in the (quick)settings that toggles the connection).
Yes, for a telephone currently it is simple, both as it is necessary for the user to be able to disable the 5G functionality, for legacy and usability reasons.
However for a device such as a TV, once it is sold with a network plan arranged by the manufacturer, there doesn't remain much of a point to allow the user to normally control whether this connection is active. For a while, an option would remain to disable or configure such an option, but over time it seems likely that the user would no longer be able to control this directly.
It gets even easier for devices such as a smart lightbulb or a smart toaster - devices for which there wouldn't be any direct user interface. Once these communicate via wireless wider-area networks, the trend seems to be clear.
True. My car came with ATT sim installed. It uses that to remote start via App, and to tell me the location, miles, other data in app. Also, to show me directions on its display screen. There is no way for a user to switch off that data connection (not that I want it, its free of cost, and it helped me few times).
I hope it can be disabled by the manufacturer upon request, like the E-Call system (microphone+GPS+radio device that should trigger an emergency call in case of actioned airbags).
It is something that some workers in some department at the manufacturer's should know. Like for the E-Call: it is in the European law that it may be disabled (by the manufacturer and not by the car owner), but your interfaces to the manufacturer will easily not know.
> With 5G and future connectivity, it feels as if there will be a push towards not being able to block the network access of individual devices, such that a TV or fridge will just speak to its server directly via some XG/IoT network which the user does not control, without any user ability to block this communication.
Amazon Sidewalk [1] is pretty much already there (assuming uptake is high enough, and I believe it's opt-out, so it probably is). From what I recall, the bandwidth is low, but that didn't seem to be a technical requirement but a political one to prevent leeching so much bandwidth the user notices.
I think you're right, though. Usage is going to proliferate until laws rein it in, or someone develops an "electromagnetic radiation firewall" that allows fine-grained control over physical signals. The only other option I can think of is a soldering iron, but that's not terribly feasible. I'll take a soldering iron to a $40 part, but I'm not going to risk frying a nice new refrigerator or other expensive appliance.
It rains 9 months a year here. Much of the US gets quite cold or hot for a significant part of the year. I really don't want to step outside to use my phone. WFH means I need to leave it on my porch in order to receive work calls, I guess.
Does that actually work for 5G? I believe that, in urban areas, you can even get cell reception from inside a microwave because a phone needs very little signal.
I can still get cell reception in my bedroom that has three coats of the paint with proper copper grounding running through it all. But it barely registers on my meter now whereas before it was 100x. I'm sensitive to emf and do sleep noticeably better now. I still turn my phone and wifi off at night. I'm about 400 metres from a cell tower. Rural so no 5g yet I think. Yeah I have the window film and metal window screens and metal fabric curtains too. Wasn't a trivial project.
Can you explain more about emf sensitivity? Just asking because I have never heard of such a thing. And, yeah, caging an entire house is probably arduous and costs a fortune.
It's just anecdotal, but I definitely sleep better the further I am from cell phones, cell towers and pulsed wifi routers. I've got a meter, and I used emf shielding clothes prior to addressing my bedroom. It's pretty hard to do decent studies on it since it's so prevalent. Prone to quackery and people making a quick buck on it too.
You should check out Better Call Saul; the main character is a lawyer whose brother lives with debilitating emf sensitivity, though he is the only one who believes him. They go really in depth about it for like 2 seasons.
Isn't it because the microwave is shielded against a very specific frequency, that is much lower than the 5G one? So that the 5G frequency penetrates the shielding without much trouble?
(I'm really not sure, that's my initial guess, but the questions are not rhetorical.)
Microwave ovens are at 2.4 GHz. Normal 5G ranges from 800 MHz to about 5 GHz. That is about the same ballpark, and Faraday cages are not incredibly frequency-sensitive devices.
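A rough sanity check on that (a rule of thumb only, not a real shielding model): a mesh enclosure starts leaking once its openings approach roughly a tenth of the wavelength, so the "safe" hole size shrinks as frequency rises:

```python
# Rule of thumb: a Faraday cage attenuates well while its largest
# opening stays below ~1/10 of the wavelength. Illustrative only;
# real shielding depends on geometry, seams, and materials.

C = 299_792_458  # speed of light, m/s

def max_opening_cm(freq_hz, fraction=0.1):
    """Largest 'safe' hole size for a given frequency (rule of thumb)."""
    wavelength_m = C / freq_hz
    return wavelength_m * fraction * 100  # convert m -> cm

for name, f in [("800 MHz", 800e6), ("2.4 GHz", 2.4e9), ("5 GHz", 5e9)]:
    print(f"{name}: openings below ~{max_opening_cm(f):.1f} cm")
```

By this rule a microwave door mesh (holes of a millimetre or two) comfortably covers all of these bands, so any reception inside an oven is presumably down to leakage around the door seal rather than through the mesh itself.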
Having worked in a former alphabet agency building that did have a faraday cage built into it, don’t worry too much about the windows. They weren’t a big enough hole to allow connections even in the middle of the city. Even standing at one with your cell phone was iffy.
Cars already do this kind of comms; Teslas are in permanent communication with head office, and I think the Volkswagen ID3 electric will be as well. Charging by harvesting ambient signals is a thing without needing ubiquitous dedicated charging stations, for very low power uses: https://duckduckgo.com/?q=electronics+charging+from+ambient+...
It won't be many years until the bluebottle fly which came in the window, round the room and off into your house, is indistinguishable from a camera drone from the census bureau, your insurance company, your nosey neighbour. And then many more years until Vernor Vinge's "smart dust".
The trouble with a mechanical off switch is that you cannot easily / periodically turn it on, so it depends on your usecase.
AVR Picopower has been around for about 10y now. These devices dramatically drop power consumption to the point where batteries should last a very long time, definitely >1y, more like 3-5y.
The fact that modern devices drain a lot of juice is either sloppy SW design or sloppy HW design, since I believe the cost for microcontrollers is very similar.
That's the point though, in this case off means off. There is no reason his controller should have to periodically turn on, when you want to use it you turn it on.
Everything with batteries should have user-replaceable batteries. Ideally double A's. I am a huge fan of rechargeable double A's.
It's the main reason my Xbox sees so much more use than my PlayStation. My PlayStation controller is seemingly always dead when I go to use it, but I can just pop some new rechargeable batteries in the Xbox.
> Everything with batteries should have user replaceable batteries
There should be a law for this. Even for cellphones. When corporations market themselves as environmentally conscious, but don't allow the replacement of common wear items in their gadgets and products, then that's how you know they are full of shit.
No. User-replaceable batteries don't work too well with IP68 ratings. Nor do they work too well with devices where miniaturization is important.
I'm wearing a Fitbit as I write this. User-replaceable battery?? It's 30 grams, not counting the band. How much bigger and heavier would it be with an openable seal? The seal has to take a lot--it's got 50m water resistance. Likewise, the battery needs to be secured against motion--that either means glue or mechanical pressure against it. The latter likewise increases size and weight.
Other than these cases I agree with you. Batteries should be readily swapable.
Sometimes you don't want that, actually. For example, if the reason the device has a battery is an RTC (real-time clock) it uses to keep track of time.
Or with things where battery life is a much smaller issue than forgetting to turn the thing on (e.g. smoke detectors).
Or with things that draw unspeakably low amounts of current and where such switches would be hard to implement (e.g. digital wrist watches).
Having an off switch on a life saving device like that sounds like a profoundly bad idea to me, so not sure why it would be brought up in this context. I personally would’ve used an example of something you might actually want to turn off, but it would just be a bad idea, not a potentially fatal idea, if you failed to turn back on.
I was listing things where I would not want an off-switch along with the reasons. As you rightly mentioned yourself having an off-switch on a smoke detector is a very bad idea.
The only place where smoke detectors are frequently switched off are remote huts that might be empty for the better part of a year. And there the best strategy is to unmount the whole thing, pull the battery out and place both things somewhere really obvious so the next guys notice immediately what needs to be done.
Most fire alarms I know have a button that switches off the activated sound when long-pressed. I also do not know of smoke detectors without a removable battery (outside of wired ones).
A deactivation button is different from a power switch, where the thing is off when the switch is in the wrong position. And if you don't notice, people die. I didn't check the ISO norms on smoke detectors, but I would be surprised if power switches weren't explicitly forbidden in those norms.
And sure you can get a detector that doesn't conform to the norm, but then the insurance won't pay in case of a fire.
> They never seem to just use the bluetooth module as the sole microcontroller
There are more reasons than just being able to plug in a different BT controller (which is great in times of supply chain problems)
You're also usually stuck with a different toolchain than you're used to, so you will have to learn all the quirks. And they may not have enough pins or not the right type..
But I get the point. It's kinda funny to read the "w000t my C64 now has Bluetooth". When the BT controller is actually much more powerful than the C64 :P
Just this last weekend, I was woken up at 4:00 in the morning on Saturday by a cheap toy drone remote controller that was incessantly beeping because its batteries were running low. So I'm feeling particularly strong about this subject at the moment, and I'd add required off switches for anything with a camera or microphone while we're at it, battery-powered or not.
Number one on the list of "gifts your family can give you to make you hate them": the BeatBo[0] child's toy. It rolls around, has no on/off or volume control, and is motion/vibration activated, so it will indeed start blaring music and rolling around if you take actions like "walk across the floor".
I like to walk into cell phone stores and ask if they have any phones with an off switch. These are of course rare and only sold by niche companies like Pine and Purism. Everything else just looks like it is off while it is still transmitting in low energy mode.
Wait... If I power off (not sleep) a typical phone, it still transmits?
I know there's the baseband processor that does its own thing, but I would've thought the userland "off" state shuts that off, along with everything else except for whatever watches the power button and battery charging.
I would be very curious to know how one could test this so easily. https://www.imobie.com/icloud-unlock/locate-a-lost-cell-phon... and many articles demonstrate that even when a Google or iOS phone is off it can still transmit its location. This means it is not truly off and could possibly transmit more information than you realize. My custom OS phone does not do this TMK, and I am just SOL if I lose it. This is the price I am willing to pay.
An RF power meter or spectrum analyzer. It would be very clear if a device is radiating enough RF energy to make a connection.
Battery life is another giveaway, I've had android phones turned off in a drawer for months and they still have nearly 100% charge when turned back on. If they were using enough power to make any kind of cell connection, the battery would be dead in a few days.
The link you have there even says "When your phone is turned off or runs out of battery, you won’t be able to track its real-time location"
And this just posted to HN today to explain in detail how the iPhone stays on and transmitting when you think it is off. https://arxiv.org/abs/2205.06114v1
Forget about "with a battery" - everything should have an off switch. Rebooting hardware like the Amazon Fire TV is so annoying, and it's not like it doesn't lock up or crash...
Interesting philosophical idea. I would argue that the position of the OP is really that things should have an off switch for when they're not being used. A watch is expected to be correct every time it's worn or looked at. It's never not being used.
i was just thinking how it always annoyed me that my garmin smartwatch doesn't have an off switch, but that's good context. though quartz watches don't have gps sensors..
Another hot take: on every device where Bluetooth isn't the sole method of interaction (unlike, say, AirPods), you should be able to turn off Bluetooth.
You haven't seen hell before you've seen a speaker with Bluetooth that you can't turn off and you can't stop your neighbours from accessing. Or more specifically, heard hell.
My phone is paired with my car--but the bluetooth audio in the car will pick up whatever signal it hears regardless. Including from the car next to me.
Super bright LEDs are one of my biggest pet peeves on electronics. And 99% of them seem to be blue. This one is red, but pulsing - oh god.
For SMD LEDs it's pretty easy to destroy them directly with a hot enough iron - they're heat sensitive and die easily. Just touch one with your iron and boom, annoying light gone.
Soft switches are fine if the power is really, really, low. For real.
Like, if the battery is rechargeable, 5-25uA is pretty much zero drain.
If it's disposable, you get about 1-5uA before it's A Problem(Because the only good use for a disposable is on stuff that only gets changed once in multiple years).
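Those thresholds map onto shelf life pretty directly. A back-of-envelope estimate (nominal capacities, ignoring self-discharge and cutoff voltage, so real numbers are somewhat worse):

```python
# Back-of-envelope standby life from a constant quiescent drain.
# Capacities are nominal; self-discharge and the low-voltage cutoff
# are ignored, so real-world life is shorter.

HOURS_PER_YEAR = 24 * 365

def standby_years(capacity_mah, drain_ua):
    """Years until the cell is empty at a constant quiescent drain."""
    return capacity_mah * 1000 / drain_ua / HOURS_PER_YEAR

# CR2032 coin cell (~220 mAh) vs AAA alkaline (~1000 mAh)
for cell, cap in [("CR2032", 220), ("AAA", 1000)]:
    for ua in (1, 5, 25):
        print(f"{cell} at {ua:>2} uA: ~{standby_years(cap, ua):.1f} years")
```

At 1 uA a AAA would outlast its own shelf life, which is roughly how the decades-long timers mentioned elsewhere in this thread are possible; at 25 uA the same coin cell is gone in about a year.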
In a lot of cases the switch is the least reliable part, and it makes case design harder, and maybe compromises waterproofing.
Beware of alkaline cell leakage, though. They fare much better if they are isolated when not used for long periods. But you could use a low self discharge rechargeable cell like NiMH instead of alkaline, of course.
Lithium batteries, even when they don't provide power, slowly lose power.
They can also "degrade" when they're not in designated voltage range. This is one reason why most batteries are soldered, because they need to be regulated at all times.
So no, you cannot "turn off" a battery. A battery should either provide power, or be plugged to a charger.
According to the table in the following article, lithium batteries have the lowest self-discharge rate. When you don't have credentials to present, please present citations.
Well, that's because I am a theorist who lately has had to look at industry technical engineering websites about matters that academics don't care about, and I find them to be mostly hearsay and heuristic garbage. I need to know whether your "credentialed" experiences can be written off.
Citation: search for the term "self-discharge". I dare you to find something less kosher than the Wikipedia page. I already find the wiki page less than kosher.
There's also the safety issue. In the event of an internal short, the battery can actually catch fire. Would be nice to have a hardware power button that physically disconnects the battery from that load.
Sure, the battery degrades once its voltage drops to low or rises too high. But it also degrades from charge/discharge cycles. And without the ability to turn the device off the battery loses power even more quickly, either causing more charge/discharge cycles or bringing the battery quicker into the unhealthy zone.
you can snip off smd LEDs pretty easily if you have a set of flush cutters. I really like the knipex ones, they're high quality, truly flush, and very durable
PS: foone knows what a chip on board is, right? I'm assuming they just used weird phrasing, but the fact that they called it a "blobbed over IC" kind of threw me off
One of the most annoying on-off interfaces I deal with for an electronic device is my insulin pump. When one intentionally turns off delivery or it runs out of insulin, it beeps loudly every few minutes. To turn it off, it has to be plugged in, and then the shallow, stiff button on the top held for 20-30 seconds. Turning it on is easier, but opaque - connecting it to power starts it up after a few seconds.
I understand why a crucial medical device doesn't have an off switch that could be engaged accidentally, but I don't understand why they tie the function to being connected to a power source.
It's cool that they managed to make the device work for them.
But on the other hand, I think it should not be the job of the consumer to "fix" shitty design choices. If we accept devices that are borderline defective by design instead of sending them back to where they came from, what we'll end up with are more devices that are similarly broken for the sake of saving a few cents.
The only way we can truly fix bad design choices is by making them unprofitable for the manufacturers.
Hidden benefit of this--my wife still can't understand how I can get up and go to the bathroom at night without turning on the light. The various dots of light provide me enough reference that I can navigate correctly even though I can't actually see what is in front of me.
When I first read the title I thought about security reasons. I think this problem is more about how the products are designed. Why does a controller need to be always listening for connections? For wearables this could be OK, but not for a gadget that I'm going to leave in a drawer once I finish playing.
Cars are ubiquitous. They are everywhere. We all relate to them without thinking. Cars run on batteries. Even IC cars run various systems on battery power when parked. But do cars have an off switch? To turn them truly off, such as when storing for a long period, the procedure is to take a wrench and physically disconnect the 12V battery. So I disagree with the author. At least some battery-powered products are doing perfectly well without off switches.
Other IC vehicles, notably aircraft, do have true off switches that fully cut electrical power at the source. So it isn't anything inherent to IC technology. Consumers have spoken: some things do not need off switches.
My car's entertainment system is stuck in a crash loop 24/7. It is slowly draining the battery, to the point where I worry my car won't start if it has been off for more than a week or two.
And I don't agree that "consumers have spoken". Consumers don't design cars; their feedback loop is long, lossy, and easily ignored. Profitability drives car design, not what consumers want.
I have a Ford that sometimes has a similar problem: a freeze/crash in the entertainment system makes it partially unresponsive (so it can be stuck playing whatever it was playing at whatever volume it was at) and it will just eat the battery in no time. The solution is to pull the fuse to restart the thing.
There's a big Microsoft logo badge right on the center console letting me know who to thank for that.
So true. I just found out the hard way, by letting the battery in a home safebox expire. I cannot find the physical key, and the electronic lock does not work without power. I will have to call a specialist to break in.
I'm sure you had an array of considerations when buying an electronic safebox, but this is the bottom line as to why I would never buy one.
As a programmer, my day job is "this isn't working the way it's supposed to, let's figure out why and fix it." I can't fathom ever wanting to deal with that stuff when I get off work. A lock and key have been working just fine since Roman times: it's relatively easy to conceptualize how they work, the list of things you can troubleshoot is relatively short, and the UX surface area is small and predictable (it's just a lock and a key). I could go on for a long time. If something got along just fine before Bluetooth and WiFi existed, then I never want to own a version that requires Bluetooth or WiFi.
A lock and a key have other failure modes. I have broken a key in a lock more than once in my life. I also once accidentally fouled up a lock with some kind of dirt/fluff so badly that it stopped working with the key.
First look up the manual to make sure the battery isn't replaceable from the outside. My fire safe has an electronic lock, but you can replace the battery by twisting off the face of the lock (which isn't obvious from looking at it)
I have a safe with an electronic lock. The batteries that run that lock are on the *outside* of the secure area. There's a little plate you can slide off and fish out the battery pack. That's how it's supposed to be done.
It'd be very, very nice if devices with re-flashable memory had a physical write-enable switch on it. No, not a software switch. With physical write-enable, no more malware getting installed on them.
This goes doubly for kids' toys. At least let me turn them off when piling them all into the toy box, without them pressing each other's buttons, making noises, and draining the batteries.
Just scanning my desk which has a laptop, mac mini, monitor, phone, nintendo switch, a bunch of ipods ... the only things I can spot that have a power switch are 4 of the 5 ipods.
You're right — a few of them are hold, not power. Even the OG shuffle, which definitely looks like it's a power switch, appears not to be — when I switch it "off" during playback, there's a suspicious delay before the music stops.
It’s been a while since I used my iPod Video/Classic, but I think I remember being able to turn them off by holding the Play button on the dial for three to five seconds. Maybe that was just the hard reset; I don’t remember.
An off switch is always handy. I have a monitor without a power button, so I can't shut it off without disconnecting it. That also makes it harder to troubleshoot when there's no signal: is the power or the data at fault? I can't tell, because there are no buttons at all, so I just have to keep connecting and disconnecting cables until it works, which is harder still because I'm looking at the back where the cables go rather than at the screen where the output is displayed.
I would say there are a few reasonable exceptions to the rule. For example I wouldn’t want smoke detectors or carbon monoxide detectors to have off switches.
Of course there are always exceptions. Smoke detectors usually have replaceable batteries though, which kind of work like an off switch. Pacemakers however should definitely not have an off switch, and a replaceable battery would be tricky. You'd have to do the switch like that BMO character.
> Smoke detectors usually have replaceable batteries though, which kind of work like an off switch.
That is changing. Several states, including California and New York, no longer allow replaceable battery smoke alarms. They require long life sealed batteries. Other states are in the midst of doing so.
They are doing this because smoke detectors have a limited lifetime. You are supposed to replace them 10 years after manufacture. However, they don't suddenly stop working at 10 years...they just get less and less effective over time.
Many people don't realize they are supposed to replace them, or know it but forget. I didn't know, for instance, until mine was ~17 years old. It still went off when I'd sear a steak (if I forgot to close the door between the kitchen and the room it was in), so there was nothing obvious to tell me that it was likely losing effectiveness.
By going to units with a sealed battery meant to last the lifetime of the unit they hope to make it more likely that people will remember to replace them. When it is time the units will beep like the replaceable battery units do when the battery is getting low.
I would guess there are also people with the replaceable battery units who when it starts signaling low battery take out the battery planning to pick up a new one next time they are shopping, forget, and take a long time to remember if ever. Sealed units will help there, too.
BTW, when buying smoke alarms it is probably best to avoid Kidde (also avoid them for fire extinguishers). They are prominently featured at Home Depot and Walmart, and can be a little hard to avoid if you don't take a bit of care. Here's why to avoid them, from the Wirecutter.com article on basic smoke detectors:
> With placement on the shelves of Home Depot, Kidde is the most prominent competitor to First Alert, but its overall track record is, in a word, disturbing. In 2018, the company recalled more than 450,000 dual-sensor smoke alarms; in 2016, it recalled 3.6 million smoke/CO alarms, and in 2014, it recalled 1.2 million smoke/CO alarms. In addition, since 2005, Kidde has enacted three separate fire extinguisher recalls of 470,000 units (2005), 4.6 million units (2015), and 40 million units (2017). Worst of all, in early 2021, a federal judge ordered Kidde to pay a $12 million civil penalty “in connection with allegations that the company failed to timely inform the Consumer Product Safety Commission (CPSC) about problems with fire extinguishers manufactured by the company.” For these reasons, we can’t, in good conscience, recommend any Kidde products to our readers.
> Although First Alert is not immune to recalls (it recalled nearly 150,000 smoke alarms in 2006 and roughly 600,000 fire extinguishers in 2000), none of them are recent or on the scale of what Kidde has had to do.
Apple AirPods Max: my biggest complaint about these is that they don't have a proper power switch that absolutely cuts the power. This is especially bad since the case they ship with is awful and I don't want to use it (but it, or some other magnetic solution, is required to put them in low-power mode). I don't want low-power mode, I want no-power mode. Just put a damn switch on them!
At some point when Apple introduced the Touch Bar, someone decided that you don't need to truly turn the keyboard off… and it automatically turns on when you press a key. Meaning that you can't clean the keyboard without draining the battery first. (I tried looking for undocumented configs to turn this off… for years, more or less since 2017.)
I used KeyboardCleanTool but it didn't work for me. What it does is put up a window that captures keyboard events, but you can still press certain keys (like a lock-screen shortcut on the Touch Bar) to bypass it. Often I just open vi in the terminal instead… it does the same thing without the extra app.
Is the battery removable? Best tip (if not) is to put it inside an ice chest, and put that in your car or refrigerator, then try to go back to sleep once the adrenaline rush subsides.
How about when my desktop won't stay asleep because, I don't know, some update is scheduled to run, or some unknown number of errant processes with schedules exist somewhere? And yes, I turn off my mouse each time, so table vibrations or gusts of wind are ruled out.
Assuming that you use Windows, run the Event Viewer as admin. In the Event Viewer go to "Windows Logs"->"System" and look for events from source "Kernel Power". Find a wake-up event and read the report. It will tell you what caused this event. Happy debugging!
Especially those which supposedly are turned off, but still use battery power for low-power Bluetooth for device discovery. So many times I was driven mad by this very shitty and very expensive device, because a deep-drained battery is no joke!
Recently my trimmer turned on by itself and then stayed on. I couldn't turn it off. I had to put it inside a jar until it drained its battery, so that we could bear the noise.
Don't you find it annoying losing all the settings in your car? Radio stations, etc.? If you're leaving your car for a long time without driving it, I would suggest adding a pigtail to make it easy to plug in a battery tender instead.
Not really. I lose the presets, but I play music from a USB stick anyway. It's far, far less trouble than coming back from a trip and finding your car dead in the airport parking lot. Many times I've also discovered a dead battery in the morning because the door wasn't closed all the way and the interior light was on. Ugh.
Having a separate "tender battery" for car-is-off power needs is a fine idea. Another idea is for a battery voltage monitor that will cut off the battery if the voltage starts to drop.
They're not cheap, but Antigravity batteries (1) implement that. It monitors battery voltage and disconnects when it reaches a given level. To start the car you just press a button on a fob to internally connect to the reserve, and off you go. Doubly useful for Lithium batteries since they really don't like being over-discharged.
wtf? how else are our Silicon Valley overlords supposed to check in on us? how else do they prematurely wear down our gadgets so we have to pay their tech team a lot of money to un-glue and then re-glue a battery inside, or to replace the gadget entirely (with various components still functional, but turned into e-waste due to their fundamental black-box proprietary nature)?
People say everything should have an off switch (and everything should have user-replaceable batteries).
I would not like a pacemaker with an off switch you could trigger by accident, or one with user-replaceable batteries; that seems dangerous to your health.
>Buys lowest common denominator/cheapest product
>"oh no this product is really bad and the features don't make sense"
Alright buddy. There are plenty of devices I have without an off switch that work perfectly fine. In terms of controllers, the Xbox controllers I have will go to sleep after not having been used for a little while and conserve battery that way.
The problem isn't lack of off switches, it's the lack of features related to the lack of an off switch.
Except that the PDP folks are the biggest name in blessed Switch controllers, making many of the officially licensed character controllers such as the Mario and Luigi branded ones. They were given access to the original GameCube controller molds and design files and created the better SSB controllers. The hierarchy is
Nintendo, PDP, hyperkin, 8bitdo, everyone else.
This is endemic to the PDP controllers.
If I want to turn off an Xbox One controller, I can pop the batteries or it'll turn off after it's not connected for 30 seconds. PDP wireless Switch controllers? 10 minutes later and they're still hunting for their paired console, often with no way to forcibly turn it off.
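The difference in behavior described above boils down to whether the firmware ever gives up hunting for a paired console. A minimal sketch of an idle-timeout policy (the 30-second window mirrors the Xbox behavior mentioned; all values here are illustrative assumptions, not actual firmware):

```python
# Sketch of a disconnect-timeout power policy for a wireless controller.
# The timeout values are illustrative, not real firmware constants.
def should_power_down(seconds_disconnected, timeout=30):
    """Power down once the controller has been unpaired/disconnected
    for at least `timeout` seconds, instead of hunting forever."""
    return seconds_disconnected >= timeout

print(should_power_down(45))        # True: Xbox-style, gives up quickly
print(should_power_down(45, 600))   # False: still hunting 10 minutes in
```

The whole fix is one constant and one comparison, which is what makes its absence so frustrating.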
Even 8bitdo, across several iterations, has Xbox-style battery compartments, and their fighter pad has a physical switch.
PDP can ship these cheap because they can have loss leaders among the low-cost options, making up the difference with their high-end controllers and their rather extensive selection of other console controllers. Ironically, their Switch Joy-Con charging dock is one of the best, and some of their headsets now rival even the better gaming ones.
Not necessarily true. Embedded software with extremely well defined process loops should never need rebooting. In practice there can be bugs, but this is not a given. There's probably substantially more embedded software than you're aware that never needs or gets rebooted.
(Background: I work in product design/new product development, and conceivably might have designed this thing! If any of those were still designed in the Western world. Which they aren't. I have designed some stuff like it, though!)
Oh, great, this debate again! I shall use this post as ammunition next time I am arguing with our stupid design team about why we need an off switch on the stupid product we are designing, because it is a giant pain to live with devices you cannot turn off.
(It is especially fun to put something together for the Very First Time, then realize you can't turn it off... and you need to turn it off because it's the Very First Time this thing has ever existed at all, and it isn't quite right, and you need to fix something. But it can't be turned off, and this is somehow by design.)
(Our design team will never listen. This is because they, one, don't listen to anyone ever; and two, never use the products they design. Oh, sure, there might be a user study or two. But they never actually have to live with or work with the damn things.)
> They never seem to just use the bluetooth module as the sole microcontroller
We actually do this! It works pretty well.
> and also it's not like microcontrollers are expensive these days. with a blobbed-on IC of some 8-bit custom non-programmable thing, they're probably spending like 1-2 cents each
Yeah I wish we could get those prices. But we don't do COB (chip-on-board) and we only use microcontrollers we can get and read datasheets for, so we pay at least 20x that price. Maybe that's why we don't have any clients?
> This is because they, one, don't listen to anyone ever; and two, never use the products they design. Oh, sure, there might be a user study or two. But they never actually have to live with or work with the damn things.
I wonder what percentage of consumer (and business - have you ever used a POS in a pub?!) electronics this applies to? It's got to be in the high 90s. I've lost track of the number of times I've lamented "did anyone actually use this?"
> I've lost track of the number of times I've lamented "did anyone actually use this?"
We have a combination oven and microwave (yes...) that seems to have been designed with the primary purpose of enraging the owner.
The microwave can only be set in 15 second increments. But you can fine-tune it, in five-second increments, by spinning a dial. Except that you can't do that while the oven is on.
The microwave makes a noise while it's operating. When it's finished, it will beep unobtrusively and then continue to make the operating noise. You can stop it from making the operating noise by opening the door.
The microwave can't be replaced, because it's part of the oven.
When we moved into our current house, it had a microwave like that. When we replaced it, my primary requirements were 1. multiple fan speeds, and 2. a nice, simple touch pad for entering the time; preferably one where you didn't need to press a button before starting to enter the cooking time.
The dial thing is... infuriating. I never once found a situation where it made using the device better, and in most situations it made it worse.
> The dial thing is... infuriating. I never once found a situation where it made using the device better, and in most situations it made it worse.
Digital dials can be nice in some cases.
My parents had an oven with a dial for the temperature. Spin it either way to increase or decrease the temperature by 5 degrees. I loved it. It was so much better than the touchpad design. The range of actual temperatures in an oven is pretty small, and I found this much friendlier than punching in the temperature, getting it wrong, cancelling, and punching it in again. It was a “finger tip” dial, so it was quick to spin. Probably horrible for people with bad arthritis or other hand mobility issues, though.
I wouldn’t mind a microwave with a similar dial for time. Again, the actual range of times in typical use (for me) is not that high. The design described with a two tier system for adjusting time seems moronic though.
For temperature, I can see it working... though I don't see how it would actually be _better_ than tapping in the number.
For the microwave time, it was a horrible experience pretty much every time I interacted with the device. A keypad is simple, intuitive, and easy to use.
Keypads should be simple and easy to use, but often they are not. The buttons are inconsistently responsive, and there's always at least one other key you have to press, often two. And those other keys are named whatever the designers decided and placed wherever they thought would be most confusing.
“Okay, I want to cook for 3 minutes.”
‘300’
”Ok, the numbers don’t work until I press something else. Hmmm”
Maybe 250. 300-550. I rarely go up to 550 and never below 300 that I can recall. With 5 degree increments, that’s only 50 clicks, fewer if the system is speed sensitive.
My microwave oven uses a dial, and it works great. It uses increasing intervals between dial ticks (5 seconds for cooking times under 1 minute, 10 seconds for cooking times under 3 minutes or so, then 30 seconds, then 1 minute, then 5 minutes; I don't know the cut-off points by heart). The intervals are chosen so that they're precise where they need to be, and large where they don't need to be precise. It's really easy and fast to set the time you need.
It's a bit limited, in the sense that you can't set the time to e.g. 17:27. But who needs that?
When my old mechanical kitchen timer died on me, I bought a digital one with a dial, in the hope that it would work the same way as the one on my microwave oven. Boy, was I wrong. It kind of increases the interval when you spin it faster, but in a completely unpredictable and impractical way.
So I guess it's easy to make impractical dials, but it certainly is possible to make very well functioning dials as well.
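A well-functioning dial like the one described above is straightforward to sketch. The cut-off points below follow the rough description in the comment but are illustrative guesses, not actual firmware values:

```python
# Sketch of a microwave dial with coarsening time increments.
# Cut-off points are illustrative assumptions, not real firmware values.
def step_for(seconds):
    """Return the increment (in seconds) one dial tick adds,
    given the currently displayed cooking time."""
    if seconds < 60:
        return 5       # fine-grained below one minute
    if seconds < 180:
        return 10
    if seconds < 600:
        return 30
    if seconds < 1800:
        return 60
    return 300         # very coarse for long cook times

def tick(seconds, direction=+1):
    """Advance the displayed time by one dial tick (never below zero)."""
    return max(0, seconds + direction * step_for(seconds))

t = 0
for _ in range(12):    # spin the dial twelve ticks up from zero
    t = tick(t)
print(t)               # 60: one minute reached in a dozen quick ticks
```

The key property is that the step depends only on the current value, so the dial feels predictable, unlike a speed-sensitive dial whose increment depends on how fast you happen to spin it.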
If "the operating noise" is the cooling fan for the magnetron, it keeps going when the time is up to cool it down. Mine does the same, but actually says "COOL" on the screen when it does so.
It's more reliable if you connect the cooling fan to a thermostat rather than the power switch. Otherwise the hottest time is actually after you turn it off (as heat from the inside leaks to the outside but is no longer being cooled by the fan.)
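The thermostat approach suggested above amounts to gating the fan on temperature rather than on the power switch. A minimal sketch, where the threshold is a made-up value for illustration:

```python
# Minimal sketch of thermostat-driven cooldown vs. switch-driven fan.
# The threshold temperature is an illustrative assumption.
COOL_ENOUGH_C = 45.0

def fan_should_run(magnetron_temp_c, cooking):
    """Run the fan while cooking, and keep it running after the
    power switch is off until the magnetron has actually cooled."""
    return cooking or magnetron_temp_c > COOL_ENOUGH_C

# The fan keeps spinning right after cooking stops...
print(fan_should_run(80.0, cooking=False))   # True
# ...and only stops once the temperature falls below the threshold.
print(fan_should_run(40.0, cooking=False))   # False
```

Compared to a fixed cool-down timer, this stops the fan exactly when it is no longer needed, at the cost of a temperature sensor.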
No doubt it was some combination of lower power, physically larger or simply more prone to failure. Do you suppose the designers go to the trouble of a cool-down timer just for the hell of it?
I always use Reverse Hanlon's Razor, so I suppose they do it in bad faith. /s
Actually, I guess they both shave off weight/cost and design for the built-in model first (which is naturally more heat-constrained), making the standalone variant an afterthought.
That must be frustrating, but was quite funny to read.
I would happily invest in kitchen appliances that have a silent mode for apartments and shared living spaces.
My microwave beeps at the end, and then beeps again 10 minutes later once the food has been removed. My old toaster beeped 5 times when the toast popped, which is unnecessary when the popping action is loud on its own.
Have you looked to see if your microwave has a volume control menu? After years of listening to my microwave blare across the house I saw a tweet that informed me that most microwaves have a menu setting somewhere to control the volume and sure enough, my Frigidaire microwave actually does. I had just never thought to look.
> The microwave can only be set in 15 second increments. But you can fine-tune it, in five-second increments, by spinning a dial
Mine is similar, also a combi. You spin the dial and it will go up in 10 second increments to a point, then in minutes, then in 5 minutes, then ten minutes, etc. Fine, I get it, it's like floating point. But it starts doing 5 minutes at the 10 minute mark! So you can't do 11-14 minutes. Which is actually about right for some things in an oven.
You also can't add more than 10 minutes to a short time without stopping the oven (this, I assume, is intended to be a safety thing to avoid being able to accidentally add hours of cooking without positive user intent).
And I do wish it would just stop the fan once it stops cooking, but I suppose the electronics next to the hot oven don't suddenly need less cooling just because the hot metal box doesn't have food in it.
I do like it though because it's much more efficient for small things than a big oven and I don't have space for a toaster oven and a microwave. And it has a much better UI for setting a delay start than the 6-button interface of the big oven.
> I do like it though because it's much more efficient for small things than a big oven and I don't have space for a toaster oven and a microwave.
You may have gotten the wrong idea. This isn't a combination where the same box functions as an oven and a microwave. This is a microwave that is for no reason physically attached to a separate oven - it takes up all the space of a big oven plus a normally sized microwave. The only thing they share is the display panel. And yet that didn't stop anyone from disabling microwave functionality while the oven is on.
There is absolutely no advantage to this compared to having an oven and a separate microwave, and many, many, many disadvantages. It continually boggles my mind that anyone ever designed, manufactured, or purchased this thing.
> The microwave makes a noise while it's operating. When it's finished, it will beep unobtrusively and then continue to make the operating noise. You can stop it from making the operating noise by opening the door.
Sure. Sibling comments to my comment explain that the continued "operating noise" is a cooling mechanism that's required by the microwave oven.
So, it wouldn't be a good idea to just rip that part out of the design. Even if it may be annoying. Perhaps a better explanation is needed, like some models showing "cool" in the display. But it's per se not a design oversight.
Chesterton's Fence is about not changing things before understanding what led to the current state of affairs. I think the principle easily extends to complaining as well.
No, absolutely not. This is not a problem that other microwaves have. It seriously impairs the functioning of the microwave. Chesterton's Fence tells us that there is no cost to eliminating this ridiculously awful functionality, because other microwaves already don't do it. An explanation doesn't help in any way. The solution is to stop misbehaving.
Maybe those other microwaves have a different design that doesn't make it necessary. So, sure, you can re-design the microwave oven, but that may entail significantly more work than to just deactivate the fan.
I could criticize cars with combustion engines for having one and being so noisy/stinky/... Then I could go on claiming that they should just turn them off, because there are cars without combustion engines which are less noisy/stinky/... I'd be missing that there is more to it than just turning off the engine. I'd need to replace it with something else, like an EV. Maybe that did not exist yet when that car was built. So, claiming that this old Ford from 1970 having a combustion engine is stupid because "this is not a problem that other cars have" misses the point. I could rip the engine out because I don't like that it's noisy/stinky/... But I'd be in for a surprise: the vehicle won't move anymore. Something I'd have understood had I followed the principle put forward by Chesterton's Fence.
> Chesterton's Fence tells us that there is no cost to eliminating this ridiculously awful functionality
No. It tells you to first understand what you are doing. Blindly claiming something misses that part.
A friend of a friend retired from the tech industry to become a joint-investor in a small chain of pubs. He ended up writing his own Linux-based POS software for the pub tills, AIUI...
Are you me? Lol. Sometimes I use software where I wonder "am I the only one that bought this and tried to use it?" I think part of the issue is that requirements get written by people who have no experience. They've never done the job the software will be used to augment and they don't know anything about software development. Then the requirements are used to outsource the actual design and development.
Using POS as an example (in a country with tipping culture). Imagine if a requirements document for the payment system said "must include an option to tip", but the design and development was done in a country where tipping culture isn't a thing. You'd likely end up with the "tip" option hidden away in a hard to reach place while everyone who uses the software knows it should be prominent and easy to use and could immediately tell you the design is poor.
You might enjoy [1] about a microcontroller that's ~3 cents, in quantity 10.
Of course it's only one-time-programmable, and generally pretty basic. And that's a pre-supply-chain-crisis price - these days prices have risen to 9 cents [2]. Still very cheap though!
Have you developed anything with that microcontroller, or tried to use that datasheet?
An ATTiny10 or PIC16F is so much better documented, so much better supported, so much more featureful, and so much better in every way that the average small-volume product design house will never make up the 40-cent price difference per part. How many development hours or support hours can you bill before you're penny-wise and pound foolish?
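The trade-off in the comment above can be put into rough numbers. The hourly rate and hours here are hypothetical, chosen only to show the shape of the break-even calculation:

```python
# Back-of-envelope: how many units must ship before a cheaper but
# poorly documented MCU pays for the extra engineering time.
# All figures below are illustrative assumptions.
def breakeven_units(extra_dev_hours, hourly_rate=150.0, part_savings=0.40):
    """Units to ship before per-part savings cover the extra dev cost."""
    return extra_dev_hours * hourly_rate / part_savings

# e.g. 40 extra hours of datasheet archaeology at $150/hr:
print(breakeven_units(40))  # 15000.0 units
```

For a small-volume product shipping a few thousand units, even a modest amount of extra debugging time wipes out the per-part savings.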
Because the chips are so simple, I actually found the datasheets for Padauks to be sufficient. In particular, I don't think it's any more confusing than say STM32 or even NRF52 datasheets, mostly because the chips are so much easier and thus the datasheets are sooooo much shorter. Granted, I have not worked with 8-bit MCUs other than Padauks.
Their compiler/macro-assembler also seems to work just fine, once you get past the UI from the early 2000s. The error messages must have been auto-translated from Traditional Chinese, but you squint a little and get the idea.
Even at a higher level than microcontrollers, I've worked with ARM chips whose only documentation was random posts in Chinese on sites I had never heard of before. They were cheap, but for the small-volume devices I was working on, dev time became significant.
Make the battery removable, then you have an off-switch and a repairable device. For bonus points, use a battery in a standardized form factor so I can buy spares.
I know, I know, the industry doesn't want me to keep my device working longer than the lifespan of a Li-ion cell.
the real problem is the design team doesn't give two shits about either users or developers of the product... developers aren't treated as customers by design teams, and then everybody is confused why the thing doesn't work. (hint: maybe it's stupidly hard to make something work if you are actively prevented from working on it?)
the real problem is that you can't keep all those little features you'd wish for in your head, so while you are making a buying decision they don't factor in as much as the lower price achievable without those features.
I'd love this - simultaneously beneficial to consumers, improving the market (reducing information asymmetry), and not something that laissez-faire-market-types can complain about (because you're not restricting what sellers can do, technically - you're just forcing them to reveal more information about their product, and then it's market forces that actually deal the killing blow).
More ideas: "online account required for use", "collects location data", "phone call required to cancel subscription", "memory use: 350 MB" (just like calorie counts!), and so many others...
We'd need some regulation for the regulations. My son's laptop battery died last week. Today, it is considered non-removable. But, he did remove it, after removing about 20 screws, carefully prying open the case, removing a few more screws and layers of insulation inside. Eventually, he was able to remove it and test that the battery was the fault and order a new one.
If we require labels, without the necessary strictly defined definitions, today's non-removable battery becomes tomorrow's "removable" battery. Because, hey, it is possible to remove it. Good luck getting everything back together if you aren't an engineer or extremely careful in how you disassembled things.
If a battery is "removable", that means replacing the battery is a normal, reasonable use of the device. Ergo, if the device breaks while the customer is replacing the battery, the manufacturer would need to replace it under warranty.
"Field replaceable" might be better way to describe it. If my flashlight runs out of power, I unscrew the tailcap, take out the 14500 cell, and put another one in it. I don't need tools and I can do it in the dark; it would be a major flaw for a flashlight if I could not.
I've advocated similar as a possible way of countering other consumer-hostile practices in the past but for something like batteries I think we're already past that point.
E-waste is a huge but so far mostly overlooked problem. Built-in obsolescence is great for exploitative businesses and bad for basically everyone and everything else. Unnecessarily restricting hardware repair or replacement might be the largest contributory factor in this problem.
That varies a lot based on product type, price point, and target market.
For something long-lived and expensive, I will pay extra and accept a less sleek design for standardized removable batteries. I research most purchases of durable goods. Of course, I may not be the most profitable customer since I don't plan to buy a new widget every six months.
This is the real problem. Otherwise the markets would have solved the crappy products problem a long time ago.
When I use something at home, I usually have a constant stream of minor annoyances that could easily be fixed by the manufacturer. However, when I'm shopping for a new product, I don't remember any of them.
The odd part is that for many people that's even the case if it's software _and_ they are programmers themselves _and_ it's open source. But they go complain instead of just fixing the minor annoyance.
ikr? They have these things now, they call them "AAA" batteries...holy heck, you can take them out, and they even make rechargeable ones. Like, if your thing doesn't work, you can just swap the batteries instead of waiting N hours for it to charge itself....what a concept.
Rechargeable AAAs are crap though. Power density sucks. Voltage level sucks (and voltage regulators will suck the battery dry before your micro does). And the form factor sucks.
I'd sooner have a 18650-family (e.g. 14240 or 14500) become standard than a bunch of AAAs.
Panasonic Eneloops and similar modern low-self-discharge NiMH batteries are really nice. Their capacity is not quite as high as alkalines, but their internal resistance is an order of magnitude lower. At loads over 100mA, they're a far superior battery in both ability to deliver power and effective capacity. They have low self-discharge, so you can charge them after using them instead of discovering you need to charge them before using them. And I've never had one leak, but I assume they can.
> Rechargeable AAAs are crap though. Power density sucks. Voltage level sucks (and voltage regulators will suck the battery dry before your micro does). And the form factor sucks.
Not in my experience: our main TV is connected to a 2008 computer running Linux Mint that we use for netflix and amz prime. I've been using it in this way since 2016. The remote is an off-the-shelf wireless pointer thingy with a few buttons on it (works like a Wii controller) that takes 2x AAA batteries.
I've been recharging those 2x AAA batteries since 1996, and it still works fine enough that I don't feel compelled to replace them.
Not compared to alkalines, not since about 2005. Voltage regulation (i.e. a boost converter) plays well with NiMH rechargeables too, though not with alkalines.
That's assuming one of them is an appropriate size and voltage for the application. If more than one starts to seem like a good idea, consider a larger cell, higher voltage chemistry, or both.
I use a lot of rechargeable AAAs and AAs. For AAA, 1100mAh are pretty good, which is almost up to the ~1250mAh of an Energizer alkaline. For AA, there are 2800mAh rechargeables available, which is on par with an Energizer alkaline.
It was a valid point. There's a significant performance/weight reason most of the devices with non-removable batteries and built-in charging are using Li-ion instead of NiMH.
Getting a bunch of consumers used to buying spare Li-ion cells and using external chargers might be a non-starter, but it's possible to have both onboard charging and removable cells so the upgrade path is there for those who want it.
I've had a series of Canon powershot cameras that have all used more-or-less the same Li-ion cell design[1] and I've been able to swap batteries with them. They're big enough that third parties have even offered replacement batteries.
There is probably room for standardization of a compact, rectangular, rechargeable battery of roughly those dimensions. Is someone pushing for it?
I don't think anyone is pushing for that, though sometimes a company will design a device around another company's battery design. There's quite a bit of small photo/video lighting that uses Sony NP series camera batteries. Those are just a pair of 18650s in series inside a plastic box.
It seems to me the size of the G7X could have been designed around a 14500 cell instead.
Sure, but you have a manager who thinks... against better judgement... again... that they are gonna sell 20 million units of the thing, so their math suggests that you should... yep... prematurely optimize.
> This is because they… never use the products they design. Oh, sure, there might be a user study or two. But they never actually have to live with or work with the damn things.
Yeah, I do software and I've found the same thing: if I’m building it, then I’m actually the first user of the software, and I have all sorts of feedback for the designers. Luckily I’ve worked with some great designers who were usually more than happy to adjust the designs though.
> (Our design team will never listen. This is because they, one, don't listen to anyone ever; and two, never use the products they design. Oh, sure, there might be a user study or two. But they never actually have to live with or work with the damn things.)
If you don't mind me asking .. if it's that bad, why do you still work there? Is there a shortage in suitable jobs? (Honest question..)
I hope you don't mind this sentiment but you seem like one of the "good ones." A lot of designers seemingly do things like this for some insular reasons rather than for market reasons (better for consumers or it gives them an edge). Just another example how some developments in tech are top-down rather than bottom up from market demand.
Can you tell whether you work in a product design company, or an engineering company?
Does that come from clients?
I used the same 3-in-1 latch circuit for power on/off, pairing, and software power off for 12 years.
A good enhancement mode NMOS FET can be as good as a physical switch at low voltages.
If you really need nano-amp level leakage current AND very low drop, you can consider turning to beefy depletion mode devices, whose gate the user will discharge with the button press. A nice benefit is that you can directly sense the button press with an MCU.
I once even did a Bluetooth mouse with this "hardware" switch, and a graceful shutdown: we sense voltage on the buck capacitor, and detect when the battery disconnect triggers.
The few microfarads in the capacitor are enough to send the last update with the 0% battery level, and a disconnect command.
> Can you tell whether you work in a product design company, or an engineering company?
Both! There aren't many of those out there, so I probably just outed myself. Oh well. Keep it quiet please?
> A good enhancement mode NMOS FET can be as good as a physical switch at low voltages.
Yes, it can work very well. You are correct that it need not be a physical switch. But it still does nothing if it is never brought out to the device's interface. And physical switches are still preferable if the software is broken, as it often happens to be during development.
> you can consider turning to beefy depletion mode devices
Depletion mode devices are not cost effective. If you think they are, point me to one cheaper than the LND150. (And, yes, I do know how to use them. Many designers do not.)
Not applicable - efficient market hypothesis requires a host of conditions to be true, and all of the following are unmet: information symmetry, sufficient number of suppliers, lack of monopoly effects, and rational consumers.
Sure, but the point is that you can keep making these things and people will keep buying them, so "no one wants another e-waste" is clearly not true. Discussions about "the market", its effects on the environment, etc. are all important and worth having, but don't really have much bearing on whether or not "people want another e-waste".
I mean, if you look at it that way then people really want corrupt politicians and exploitative businesses because they keep electing the former and paying the latter.
I think people really don't want e-waste. They don't want oceans (and now bodies) full of plastic. The industries have just done a great job of convincing them nothing can be done about the problem.
> I think people really don't want e-waste. They don't want oceans (and now bodies) full of plastic. The industries have just done a great job of convincing them nothing can be done about the problem.
I think it's a bit more complex than that; a lot of properties that distinguish a "solid product" from "e-waste" are not at all obvious when you buy it, and it's not illegal and it is cheaper, and people got to make rent too, so... yeah. I agree: in the abstract most people don't "want e-waste", but they also look at costs and other factors, and they do "want e-waste" once you factor that in.
It probably shouldn't be strictly illegal either, but now you have a situation where:
1. Consumers blame industry and say that government should do something.
2. Industry says they're just producing what consumers want and that it's not illegal.
3. Government says they want consumers and industry to have free choice and call for different consumer actions and "self-regulation".
So basically, everyone is pointing at everyone else, and nothing gets done. And the arguments from the various sides aren't necessarily bad or malicious either: regulation does come with a cost, and cheap stuff does have its use.
Personally I think factoring external environmental costs into the product price would be good, but it's not so easy as it will make things more expensive, and already enough people are struggling to make ends meet (as the current high inflation rates show) so it all ties in to a lot of other issues as well.
> Oh, great, this debate again! I shall use this post as ammunition next time I am arguing with our stupid design team about why we need an off switch on the stupid product we are designing, because it is a giant pain to live with devices you cannot turn off.
I increasingly subscribe to the thesis that many designers and design teams design primarily in an effort to impress other designers. They don't have to use the things. They often don't even really care if the things work at all. They care that other designers - their artistic peers - are impressed.
This does not generally result in what we the customers or users would regard as good design. The phenomenon reminds me of software engineers I've worked with who were more interested in building something with event sourcing and stream processing and functional languages rather than solving the problem more readily.
This used to be how I assumed that awful UX keeps coming about, repeating the same mistakes and then making new ones by removing existing conveniences for no clear reason. But surely, I thought, I must be assuming the worst of an assuredly more rational situation.
I then heard more about the designers at Microsoft for Vista through Windows 11 which deeply confirmed most of my prior negative assumptions. It sometimes really is as simple as folks who think they know better than anyone who uses their product; people who refused to dogfood or even touch the thing they ostensibly work on, on any given day.
Does anyone remember Office 2013 with the UPPER CASE MENUS?
They were bad enough, but then Visual Studio 2013 HAD THEM TOO.
That was triply bad: ugly, took more screen width than Mixed Case, and perhaps worst of all, to a programmer they looked like errors.
Most of the languages we work with are case sensitive, so FILE and File look like two different things. It was a constant irritant.
The chief designer for VS2013 had a blog post about the new design, and naturally in the comments there were a lot of complaints about the uppercase menus.
The designer eventually replied: "Thanks for your feedback. We decided to keep the uppercase menus because we want more energy in this part of the interface."
More energy. I am not making this up.
They eventually came to their senses, and the next releases of Office and Visual Studio went back to conventional Mixed Case menus, just like every Windows app has used since the beginning.
(Note: my use of uppercase at the top of this message is not for emphasis or shouting, it's for illustrative purposes.)
> "We decided to keep the uppercase menus because we want more energy in this part of the interface."
It always seems to be new-age mumbo jumbo with these sort of design people. Design needs to be given back over to the engineers again. I know that suggestion is going to draw ridicule; it's easy and popular to point to the worst engineer-designed interfaces to ridicule the design abilities of all engineers. But form needs to be balanced with function and these artsy new-age designers who justify everything they do with faux-poetic metaphors have been a disaster. There needs to be balance.
Engineers can be taught the pragmatic virtue of interfaces designed for common people. Using standardized widgets and guidelines, engineers are perfectly capable of creating interfaces that are intuitive to users, even if they don't satisfy the dedicated designer's sense of aesthetics. Just look at the GUI libraries from the 90s and compare them with the dumpster fire of modern interfaces. In the 90s an engineer designing an interface would use a button widget and the button would look like a button. Today designers make everything look sleek and special to leave their artistic mark and I am left with no fucking clue what I can even click.
> Design needs to be given back over to the engineers again.
No, no, then you get the open source approach:
In case the desktop shortcut for your application is not available with the /usr/share/applications/ directory you have and option to create the Desktop launcher manually. In this example we will create and Desktop application shortcut for Skype application. Obtain the following information for any given application you wish to create shortcut for. Below you can find an example:
To obtain a full path to executable binary of any program use the which command eg.:
$ which skype
/snap/bin/skype
In regards to the application icon, the choice is yours. You can either head over to /usr/share/icons/hicolor/ directory and search for any relevant icon to use, or simply download new icon from the web. Now that we have all the necessary information, create a new file Skype.desktop within ~/Desktop directory using your favourite text editor and paste the following lines as part of the file’s content. Change the code where necessary to fit your application specific details.
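(The `.desktop` file the quoted tutorial builds up to looks roughly like the following — the icon path here is illustrative, since the tutorial's actual snippet wasn't included in the quote:)

```
[Desktop Entry]
Version=1.0
Type=Application
Name=Skype
Exec=/snap/bin/skype
Icon=/usr/share/icons/hicolor/256x256/apps/skype.png
Terminal=false
```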
Yeah yeah, typical. "it's easy and popular to point to the worst engineer-designed interfaces to ridicule the design abilities of all engineers."
The general dysfunction of open source design is much broader than UI design and has little to do with engineers in general being incapable of good UI design. You might as well point to Linux Bluetooth woes to claim that engineers shouldn't be allowed to write device drivers. Whether it's designing a GUI, an API, or any other system, there will be some engineers who are good at it and others who suck. To get good design of any sort out of engineers you need an organizational structure that promotes talent and weeds out the hacks. The open source scene broadly lacks such structure and struggles to do this.
What you're talking about is a deficiency with community driven model of development, not a general deficiency of all engineers. Put 20 random chefs into a kitchen with no organizational structure imposed on them and your restaurant will be a disaster. But using that to claim chefs can't run a restaurant is absurd.
> 1) to keep Visual Studio consistent with the direction of other Microsoft user experiences
In other words, we jumped off this bridge because all our friends did too. Consistency... with other Microsoft products. What a joke. What about consistency with virtually every other Windows application, both from 3rd parties and Microsoft's own history? Citing consistency to justify such a bizarre departure from the 20 year norm is truly insulting.
> We decided to keep the uppercase menus because we want more energy in this part of the interface.
Translation: Here is a vague explanation that distracts from the fact that I made this choice emotionally rather than rationally.
I see this time and time again. People are simply reacting emotionally. If they try to explain why they behaved a certain way, they offer an explanation with something that sounds rational but isn't actually the reason.
In this case, the designer probably was hurt that people didn't like their work and stubbornly resisted it.
> It sometimes really is as simple as folks who think they know better than anyone who uses their product; people who refused to dogfood or even touch the thing they ostensibly work on, on any given day.
The default white title bars that blend seamlessly into any white content behind them is a perfect example of this and so many people don't even realize they can change it.
Windows usability has declined since Windows 7 which had a lot of things that seemed like the designers were trying to improve their own lives while using the software. My favorite was the thick borders that were easy to grab for resizing, etc., even on high DPI displays. Compare it to Linux UIs where you often have to grab a single pixel border on a 4k monitor to resize a window.
Thankfully Win 10 still has large targets for grabbing window edges even though the border is only 1px.
> Compare it to Linux UIs where you often have to grab a single pixel border on a 4k monitor to resize a window.
On Linux you hold the super/windows key and click/drag anywhere on the window to move it, or right-click/drag anywhere on the window to resize it. It's much easier than using thick borders. Discoverability is admittedly not great, but this feature easily coexists with window borders. Microsoft could keep the thick borders by default and also implement this.
“Don’t do it the way everyone else does it or the way you’ve always done it; our designer decided to make that hard for <inscrutable reasons not related to delighting existing users> and instead we offer this hard-to-discover non-idiomatic workaround.
“You will grow to love the workaround because it doesn’t hurt nearly as much, kind of like stopping after you’ve been hitting yourself in the head with a hammer.”
I guess you (deliberately) missed the part where the two features coexist perfectly fine. There is no "“Don’t do it the way everyone else does it", you pulled that out of your ass. All the Linux DEs worth mentioning have draggable window borders that are familiar to any Windows or MacOS user. The default width of those borders varies, but thick borders by default are common.
> I increasingly subscribe to the thesis that many designers and design teams design primarily in an effort to impress other designers. They don't have to use the things. They often don't even really care if the things work at all. They care that other designers - their artistic peers - are impressed.
To put things in perspective, this is how we get stuff like the Eiffel tower.
The Eiffel Tower was built as a temporary display for the World's Fair. It was very much intentionally designed to impress other designers; that was kind of the point
> The Eiffel Tower was built as a temporary display for the World's Fair. It was very much intentionally designed to impress other designers; that was kind of the point
I'm not sure I got the point across. The whole point is that focusing on design is how we get outstanding, high-quality results. It makes the product more appealing and can even make it transcend its status. High quality design creates value with negligible impact on cost.
The Eiffel Tower exemplifies that. It was designed as a temporary exhibition but its impact transcends that of a mere temporary structure. It became the landmark of a European capital, and the symbol of one of the richest, most successful and culturally dominant nations on earth, and in the process generates fortunes in revenue due to tourism.
Another good example is the Sagrada Família cathedral in Barcelona, or pretty much all of Gaudí's work. Even Gaudí's Park Güell, which failed as a real estate project, is a resounding landmark and tourist attraction.
Not at all. And I understand the now obsolete technical show-off value of the tower at its time. But now I think it's just a humdrum industrial eyesore.
I think most people disagree with me so I don't think it's going anywhere. But that's my view of it.
I know, I've seen a few of those. Most of them are tiny replicas built ironically for the novelty factor. Besides the ones built for tourist traps, most are justified as radio towers. But as radio towers the design is laughably obsolete. Compare with guyed masts, which virtually every developed town and city around the world has numerous examples of.
Oh man, do I believe this. I think every profession does this, tbh, I saw a lot of developers try to do stuff just to impress the rest of the team, but designers are one of the most impactful because they don't do stuff that is _so obvious_ like this one
I experienced this in programming when I got into Scala 13-14 years ago. For an outsider, it looked like the community was more interested in impressing each other than solving real problems.
I have to say that scala was the most fun in programming I’ve had. You can make beautifully elegant code, that is oh so clever. It is easy to get seduced into thinking about elegant code instead of good solutions.
Yes, every profession does that to some degree. But most professionals are impressed by things that make the results of their work better, even if at completely unreasonable costs.
Designers nowadays are in a fashion to get impressed by things that make the results of their work worse. That leads to completely different problems.
> I increasingly subscribe to the thesis that many designers and design teams design primarily in an effort to impress other designers. They don't have to use the things. They often don't even really care if the things work at all. They care that other designers - their artistic peers - are impressed.
I've been a designer for a long time. I've done many dumb things, but never that. I have definitely had the irrational thought that "users will think I'm a bad designer if we ship this product", but that's usually because it was a bad product that shouldn't have been shipped.
I've never gotten the sense that my peers do this either. I have definitely gotten the sense that some designers want their designs to impress Apple, and Steve Jobs specifically.
Everybody wants to impress their peers to some degree, but to be clear, that usually translates into just doing a really good job. I've never had another designer show me something and say "Nobody else will appreciate this, but I thought you'd like it".
Agreed. I find dumb design usually comes from two pools: inexperienced designers and cost cutting measures. What's a switch to a company but another point of failure that they can't as readily blame the user on?
Chris Bangle, who was the chief designer at BMW for the E65 generation of 7 series and E46 generation of 3 series, said that he was designing to the other designers. Which is why he ignored the feedback that the E65 was ugly. The term "Bangle Butt" was applied to the E65. His successor took the radical position that this thing called "beauty" was important. I don't have sources at hand but it was discussed in print and web publications of the consumer side of the auto industry.
It was applied to a lot more than that. Even the F11 6er still had it.
What is interesting, though, is how much flame surfacing started to flood car designs after Bangle's BMWs had been out for a few years. I'm not arguing that the E65 looked good, but he certainly made an impact on all of car design, seemingly overnight.
Sometimes they're not entirely wrong though. The E46 design is now decently well-regarded by many car enthusiasts. Granted, I'm probably not the best person to ask about BMW aesthetics since I think the best looking one they've made is the clownshoe and thought the hood bulge was exceptionally lazy.
It depends, I worked briefly for Flex(tronics), and my projects were entirely focused on battery life and size (or at least, the client design requirements were).
There's a difference between a customer who plans on selling a billion units, and some company like this game controller design team that is clearly making crappy kids' toys. I think the OP twitter person just learned a valuable lesson.
Learn to talk to designers. Have a copy of Tog on Interface around. Know about Fitts' Law. Know who Susan Kare is.[1] Know what Bang and Olufsen audio gear looks like. (They took the black-on-black look to the extreme. Designers really like that look. You can't find the controls.)
Go on to learn what Bauhaus architecture looks like, how that led to modernism, Frank Lloyd Wright, then Mies Van Der Rohe, then brutalism, and finally Frank Gehry in ridiculous mode.[2] Understand why this didn't result in liveable cities.
Literally every industry has industry circle jerks like that.
You always get some people who build asinine crap in order to show off their "mastery" of their craft. It's just that product design and web development have more money sloshing around to provide for that kind of indulgence than residential plumbing and semi trailer manufacturing do.
>> (Our design team will never listen. This is because they, one, don't listen to anyone ever; and two, never use the products they design. Oh, sure, there might be a user study or two. But they never actually have to live with or work with the damn things.)
This is so obvious from everyday life that someone really ought to write a paper about it and have it published in Nature. Because it will certainly be published. It will probably become the most cited paper of all time, just because of all the pent-up pissed-offery that everyone carries in them, going around in a world where nothing. ever. fucking. works! dammit.
Pfh. Sorry for the vent. This is what impractical design does to people.
I get that you're pissed, but there's plenty of technology that you seemingly forget about (being pissed off makes you a bit irrational) that works most of the time. Saying nothing ever fucking works is just a dumb take
It’s nice sometimes to have a deep dive into technology that does work, and works so well you barely even notice it.
When it’s failing is when it comes to the forefront. Crappy touchscreens seem to be a major component in lots of them, had an experience with an elevator “system” with a touchscreen that was mildly annoying.
Btw, here is the litany of broken systems that led to my outburst above.
I'm two months (at least) without data on my phone. I didn't need to use it and I'm a lazy ass so I didn't try to fix it. When I call them, a kind young man on the other end of the chat says he's reset my service and within a couple of hours I'll definitely have data.
In a couple of hours I have no data, but now I also have no service: "emergency calls only". Result!
Of course I can't log into my account to see what's up, because it's trying to send me a TFA code. It can't send it to my mobile, since it's dead, but it sends it to my landline which is in the UK. I'm in Greece now.
I try to get help by clicking on a link in the page where I fail to login. I'm taken to a page with an "AI assistant" called TOBi. The page has the assistant's icon, smiling at me encouragingly and a wheel that's spinning, spinning, spinning... and spinning.
I think I pass out from all the spinning and all the screaming and pulling at my pigtails. I finally realise I can actually call a number to ask for help. Fortunately my friend still has service.
I have to blow a few raspberries down the line to convince the (other) chatbot that it can't understand my speech, but I'm finally connected to a human being.
The kind young woman on the other end of the line tells me she has reset my service and I will definitely have my service back in 24 to 48 hours.
Three days later I definitely don't have service. I try to log in to my university email to pick up some work I need to do, but it asks me for a TFA code. That it sends to my mobile.
I call Vodafone for a third time. This time, I'm put through to a technician. He asks me to take my sim out of my phone and read the S/N off it "to make sure it's the right sim".
Hang on. To do what?
I put the sim back in the phone and I have service. Ah. So that's what he wanted me to do. He could have said so.
And I should have thought of it.
So, nothing works. Not even my goddamn brain works anymore. If that was a dumb take, that's because I'm dumb and useless and so is everyone else, it seems. Except for the guy who doesn't know how to tell you "did you try taking it out and putting it back in again?".
I could make and receive iPhone calls, I could hear the other party, but nobody could hear me. This went on for a couple days, I was ready to go get a new phone.
Then, silly me, I remembered the good old DOS days. I did a cold reboot of the phone, and voila! people could hear me.
What I don't get is that my phone doesn't have a Windows OS. It has an Android OS. Like yours, it should not need a reset. Let alone three. So what's up with all those resets?
Alright, I exaggerate. But when there's so many things that just don't work, and so few things that just work, then I think I'm justified to say "nothing".
Kind of like Yogi Berra says "nobody goes there anymore, it's too crowded".
That's such a godawful book. In my view it epitomizes everything that is wrong with design today. And the worst part is that comments like these make it come off as the go-to reference to actually solving these problems.
The reasons it's the worst design guidance out there is that it teaches:
1) that engineers should NOT be entrusted/involved with user design.
2) to conceptualize users as blissful idiots.
3) that obscuring errors in a system is actually a good thing.
All of these are things that I see the lasting effects of in products I use every day. Don't get me wrong, I bought this with the best of intentions. It was the highest rated design book on Amazon on the topic. Reading through was an extremely painful exercise in essentially coming to realize that everything I see broken in the engineering teams I work with and products I use is actually codified in words.
Take this excerpt as an example -- p. 65:
"Eliminate all error messages from electronic and computer systems. Instead, provide help and insight."
Are you f'ing kidding me? You cannot seriously think this is right. For the longest of time I wondered why I hated using Apple products. It turns out Norman was schooled at Apple and they absolutely practice this teaching. I bought an early-day iPod touch circa 2007 and proceeded on to loading all my music on it. A few upgrades later, some albums disappeared without any error. I searched and searched and searched, trying to describe my problem in a number of different ways on Google to see what I could find. And all I could find were semi-answers. But you know what? If it had actually displayed "error 0xFHJAD1234" when failing during the upgrade, I could've actually googled that. And I'll spare you the story of what I had to do when the family Mac's HDD failed ...
So this is what we actually get, devices that SILENTLY fail, designed for people who are assumed not to know what they're doing and who likely do NOT have any technically-literate person around them that could assist them if they could in fact identify the problem.
Worse, this philosophy probably hints at a level of intellectual ignorance/arrogance that is quite fascinating. You see, in my line of work I have to be able to walk engineers I work with from the gate-level silicon all the way up to cloud and AI. Without any false modesty, I think I have a fairly good idea of how modern computer architecture works. Yet, I will be the first to tell you that I have no idea how any of these systems work in full. In fact, no one does. Not the best of kernel developers nor the best of chip designers. To claim that you can somehow inventory every possible problem in a computerized system and then provide a graceful exit from that or useful info is either ignorant or arrogant or both. But since you've eliminated engineers from the design loop (see #1 above), no one will tell you you're full of it.
So yeah, no. Do read Norman's book. But look at it as everything you should NOT be doing.
> So this is what we actually get, devices that SILENTLY fail, designed for people who are assumed not to know what they're doing and who likely do NOT have any technically-literate person around them that could assist them if they could in fact identify the problem.
It sounds like your complaint isn't with the advice from the book, but rather that Apple didn't follow it. They did hide away the error messages, but they didn't provide help and insight.
If they had done so, you'd have (presumably) been told in the software that there was a problem with certain albums, and been walked through a flow that would fix said problem. Or maybe a link to a support page that'd do the same. This, if it was done, would have been much better than showing "error 0xFHJAD1234".
Thus I think it's fairly good advice, as long as you read the "instead" as a critical instruction. You should eliminate opaque error messages so long as you can provide help and insight for those cases you have removed.
> It sounds like your complaint isn't with the advice from the book, but rather that Apple didn't follow it. They did hide away the error messages, but they didn't provide help and insight.
> If they had done so, you'd have (presumably) been told in the software that there was a problem with certain albums, and been walked through a flow that would fix said problem. Or maybe a link to a support page that'd do the same. This, if it was done, would have been much better than showing "error 0xFHJAD1234".
The parent post addresses this: there are so many ways for the system to fail that it’s not feasible to “inventory every possible problem in a computerized system and then provide a graceful exit”. So where an error is not gracefully handled, the system should allow for a human to see the error and solve the problem, rather than throw away the error.
>> Without any false modesty, I think I have a fairly good idea of how modern computer architecture works. Yet, I will be the first to tell you that I have no idea how any of these systems work in full. In fact, no one does. Not the best of kernel developers nor the best of chip designers. To claim that you can somehow inventory every possible problem in a computerized system and then provide a graceful exit from that or useful info is either ignorant or arrogant or both. But since you've eliminated engineers from the design loop (see #1 above), no one will tell you you're full of it.
> The parent post addresses this: there are so many ways for the system to fail that it’s not feasible to “inventory every possible problem in a computerized system and then provide a graceful exit”. So where an error is not gracefully handled, the system should allow a human to see the error and solve the problem, rather than throw the error away.
Yes, that's what I said as well. It's why I said the problem was that Apple didn't follow the book's advice, insofar as they removed error messages without providing help for them... and so it's unfair for the grandparent-comment to blame the book when its rule wasn't followed.
It's definitely fair to blame the book when the book gives advice that is impossible to follow.
If it's not possible to always give help and advice, sometimes you have to give an error message and unless the book mentions that then it is definitely fair to blame the book.
Eh, I read the book as presenting a maximalist vision: in a perfect world, all error messages would instead be help that lets you resolve your issue.
Since it says to hide error messages and instead provide help, anyone who hides error messages and doesn't provide help can't really be said to be following the book's advice. It feels wrong to me to blame the book for not, in every bit of advice it offers, saying "don't half-ass this incorrectly". That seems inherent to me.
It's not a very useful book if it doesn't provide advice that works in the real world. I have not read the book - does it give advice on what to do if it's not possible to give help? Or does it just assume that it's always possible to be perfect?
In Ye Olde Dayes, Macs would in fact give you messages like "error 39!" when something went wrong. Only, in those days, there was almost no way to figure out what that meant.
I don't mean to nitpick, but please refer to my comment regarding the impossibility of inventorying everything that can possibly go wrong in a computerized system. That's especially true given that if you take any successful engineering project, you'll see that its useful lifetime usually spans several individual engineering careers. Take something like Windows, or even Android, for example. The people who were involved in its early days usually end up moving on to other projects. It's in fact not uncommon to change teams or companies every 3-5 years. There is generally no way for newcomers to understand everything about the system they're inheriting, and they'll typically be encouraged not to break legacy stuff.
So not only are systems increasingly complex, but those maintaining them at any given point in time likely don't understand them nearly enough to accomplish what you suggest: understand all potential outcomes and provide useful error recovery. Therefore, the "provide help and guidance" advice provided by Norman effectively becomes "silent failure" for those cases not anticipated by the current-day release team. And since anticipating all outcomes is by definition impossible, Norman's advice is awful.
> And since anticipating all outcomes is by definition impossible, Norman's advice is awful.
Yes, that's why I said that I read the advice as saying it should only be followed when you can provide help and insight. You don't have to do it in an all-or-nothing way.
You can have common problems with a helpful flow that tells the user how to fix them. You can have less-common problems which just show an error code. Removing the error without providing help is the problem here, and is the thing that I feel goes against the book's advice.
For the example you gave, as a user, what would you do with "Error: 0xFHJAD1234"?
The best you could do is report it to Apple (or whoever). But, the error alone doesn't achieve anything. You would have been better off with "Something broke, click here to send log to Apple." (which is roughly what the book recommends - replace the error with something actionable).
I have copied-and-pasted the strangest of kernel errors, compiler errors, god-knows-what-obscure-tool errors, etc. over the past 20+ years into search engines and have almost always been delighted and amazed at how somehow someone somewhere at some point in time had not only encountered the same thing but actually wrote a description of possible root causes and potential solutions that often unstuck me right away. The success of a site like stackoverflow is exactly this.
It turns out that if you actually do take the time to report the error, no matter how obscure it is, someone in a dark corner of the internet will spend enough time on it that they'll even see fit to document it for others to help their own selves out. And given that, per my initial post, nobody is smart enough to understand how the computer system/software they're creating is going to work under all circumstances, providing error verbosity is sometimes the most empowering thing you can give to your users.
I call it "Somebody Else Has Had This Problem". It's a heuristic that became useful starting about twenty years ago, eh? Before the internet you had to have (and write) fairly complete manuals for software, now you put the error message into a search engine and, hopefully, amortize the suffering.
However, I suspect this has also made it easier to write and (kinda sorta) maintain larger and more complex systems. Like the phenomenon where you widen a road to ease traffic congestion and it works for a while but then encourages more traffic. The reduced cost of creating and maintaining complexity may have encouraged it overall.
- - - -
As an aside, can I quote you, like, on my blog?
> in my line of work I have to be able to walk engineers I work with from the gate-level silicon all the way up to cloud and AI. Without any false modesty, I think I have a fairly good idea of how modern computer architecture works. Yet, I will be the first to tell you that I have no idea how any of these systems work in full. In fact, no one does. Not the best of kernel developers nor the best of chip designers. To claim that you can somehow inventory every possible problem in a computerized system and then provide a graceful exit from that or useful info is either ignorant or arrogant or both.
That's an interesting take. Thanks for sharing. Sure, I can definitely see the "google this error" effect as having permitted a higher degree of complexity. Or, to be more accurate, as having lowered the barrier to entry for creating more complex systems. There are probably pros and cons to this. It takes less sophistication to build something very complex, which democratizes the ability to build sophisticated systems/services, but it also means there are many more complex systems out there than there are people who have a more-or-less "good" understanding of what they do; systems whose overall behaviour is likely only partially understood even by their own designers. Cue all sorts of security implications, etc. Food for thought.
I don't mind the quoting, but, just in terms of mental hygiene, I generally dislike posting unequivocal opinions on the internet ... because it's often come back to bite me ;) I'm especially skeptical of my own writing when I use labels such as "ignorant" or "arrogant". So long as you understand that I don't take myself very seriously, I mostly stand by that quote you mention.
No, the best that you can do is paste it into Google.
Giving a user an error ID gives them a partially-if-not-completely unique identifier that they can then use to find other people struggling with the same error and possible solutions.
This is infinitely better than having no error message and having to try to type different permutations of your symptoms into Google in order to try to win search engine bingo.
Sure, if Apple quickly and responsively fixed issues, then you wouldn't need the error code. But they don't! The problem is that "hide the error message and replace it with actionable advice" only works in the idealistic case where the vendor quickly fixes the issue and/or the advice consistently fixes the problem. But they don't, and it doesn't: the advice doesn't work unless implemented flawlessly; it's not robust.
The robust approach that will actually survive contact with the real world is "include an error message and ID code - even if you have to hide it behind a "more details" button".
Or worse, the fix suggestion being wrong: "Please try again later", when the root cause is a misconfiguration, so it'll never work until the configuration is corrected.
And if it's a program, display it in some fashion where it can be selected, so you can ^C^V it rather than having to retype it. Such errors are all too often displayed on non-selectable labels.
Bonus points to the mythical developer that includes a google-this button on the error display.
And then what? Is Apple going to receive the error log and dispatch a technician to your house to fix it?
In my experience sending crash data to software companies has never resulted in my satisfaction, nor has using some 'wizard' to diagnose an error. As alluded to above these systems are just not smart enough to actually know what's wrong. If the original programmers could create a fully automated flow to repair all error conditions then why bother notifying the user at all?
Apple doesn't have to lift a finger beyond showing an error message with some kind of identifier. The user isn't expecting an onsite visit anyway. They can go to the local Apple store and explain the issue to an employee there, instead of wasting time trying to reproduce the error.
If they're not tech-savvy (and you would never talk to Apple store employees besides at the checkout counter if you were), they likely won't be able to reproduce the error anyway. They'd fumble around trying to explain what went wrong, and the employee would run through multiple scenarios trying to reproduce it.
You are making the same error as the book: assume that every user is non-knowledgeable, helpless and stupid, and that no user has access to a trusted expert to help them (whether for free or for a fee). And by designing for that error, it will become true. And knowledgeable, capable and smart people will resent you for it.
If you have to say "Oh it's good advice but you have to interpret it right," when the advice has an obvious misinterpretation pothole that clearly wrecks many prominent applications of it, maybe that reflects poorly on the original advice and indicates that it shouldn't be offered without the necessary critical context.
It’s perfectly reasonable to point out the second half of the sentence. “It says to provide help and insight” isn’t law school levels of interpretation.
Why not just include an error code and help and insight?
That way, when the help and insight inevitably fails to solve the problem at some point, the user can still dump the error code into Google to see if other people have run into/solved the problem.
First off, Norman wasn't schooled at Apple; he basically created the HCI guidelines with Nielsen. Apple's design guidelines have long been based on what Norman worked on for years, so you have it the other way around.
In no part of the book does it say that engineers shouldn't be part of the design process. What the book advocates is that if engineers are entrusted with design, they should at least understand the users and design their products based on their needs and for the users. Anyone can be a designer, anyone can apply design thinking. No one is excluded.
As for your comment about error codes: I would always say that an error message is more useful than an error code. You say that you would Google the code; wouldn't it be more useful if you could just read an actual error message and figure out what went wrong without having to look it up somewhere else first?
What Norman advocates for is to help the users help themselves, show the state of the application and don't hide things behind error codes.
Again, silently failing is definitely not recommended by the book or by anyone really, so if that's been your experience with Apple, it's not because they follow the teachings of Norman's book, it's because they _don't_ follow it.
See the "Acknowledgements" section of his book -- pp. 299-304. Yes, Norman had written "The Psychology of Everyday Things" prior to joining apple. But in his acknowledgements to this particular book (i.e. "The Design of Everyday Things" p. 301) he very heavily credits his experience at Apple: "I have learned a lot in the years that have passed since the first edition of this book ... The most important experience was at Apple ... I learned about industrial design first from Bob Brunner, then from Jonathan (Joni) Ive. ... Steve Wozniak, by a peculiar quirk, was an Apple employee with me as his boss, ..."
So while Norman wasn't a neophyte when joining Apple, he clearly credits his experience at Apple for having heavily influenced the specific book we are discussing.
Norman certainly doesn't recommend silent failure verbatim. But what I'm saying is that this is the net effect of his recommendations.
Hum... He clearly says that engineers have ideas that are antagonistic to interaction design. There is an implied message that they can learn not to, but one message is explicit, the other is implicit.
And it gets much worse in "The Inmates are Running the Asylum", that says engineers shouldn't do design right in the title, on the most sensationalist way possible.
Later he toned down that message a lot. Nothing from his group will say that anymore, and you will get a clear message that making your engineers think about design is better than nobody thinking about it. It still carries a message that you should leave design to experts, but you won't find anything there saying that engineers can't be UX experts anymore.
Regardless of what the book says, anyone too close to the product during its development will have difficulty designing it appropriately for the user.
If you know how the product works then you already have a mental model of interacting with it. That mental model is NOT the same as the user will have, and thus an engineer will think that something is obvious even though it is not obvious to the user.
The same goes with anyone else who is close to it during the development.
You need outside users to make it clear how people new to a product can understand and interact with it.
Unless you're an engineer who can forget everything they know about their own product, you shouldn't try to design everything yourself with no outside feedback.
That's correct, and the designer is too close to the product too.
That is the correct message, and the one every article about design should push. Neither the engineers nor the designers can design a product in a vacuum.
Those are both old books that do not deny this message, but focus on less relevant subjects, and have less than clear advice. We shouldn't recommend those books for people without previous knowledge on UX design, because they will be harmful.
Besides, given that Nielsen was himself a very important voice on the creation of the modern user-focused design, there is very likely a newer book from him to recommend instead (I stopped reading his books and started reading his papers at the time of the change, so I don't know one).
> To claim that you can somehow inventory every possible problem in a computerized system and then provide a graceful exit from that or useful info is either ignorant or arrogant or both.
I don't think you read the book. I read the book in 2000/2001. The emphasis was on helping the user.
Not a single line in that entire book, IIRC, advocates silently failing. I have no idea how you came up with that interpretation, when almost every single example in that book highlights "silent failure" as something to avoid.
There's an endless debate in programming languages about how to handle errors in software. Error codes, exceptions, optional types, NaNs.
One solution I try is to redesign the cause of the error out of the system. For example, compilers often have maximum quantities of language constructs that are supported, like the maximum length of a string literal. Then, when the length is exceeded, an error message is concocted and generated, then error recovery has to be done, then the compiler has to not generate an object file, etc.
I don't know what other compilers do, but one day I realized that it was less work in the compiler to not have a limit, but to keep enlarging the string literal buffer. There was only one limit left on all these things, that was globally running out of memory. Globally running out of memory is a fatal error for compilers, and so error recovery isn't necessary. Just print a message and exit.
This works great. Large numbers of errors just go away, like "line length too long", "string literal too long", "too many cases in switch statement", "too many symbols", etc.
There are, of course, still some limits, like the object file formats often have hard limits, and of course you don't want to overflow the program stack.
Yup, these days for any serious use you can afford to throw memory at the problem. So many things become so much easier when the limits are pushed back into out-of-memory or integer-overflow territory.
30 years ago I had to live with problematic memory limits and made some design decisions that over time I would come to hate because I had to shoehorn data into EMS memory banks. Data objects ended up sliced and diced into separate arrays, never did they point to the relevant things because such pointers would always have been into a different bank and the only possible allocation was the whole bank.
I had a funny discussion with Don Norman about pie menus and SimCity, in which I blamed the nuclear meltdown on the linear pull-down "Disaster" menu, and blamed round pie menus for putting out fires quickly but ultimately leading to urban sprawl:
Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop
Talks by Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop. Organized and introduced by Ted Selker. Talks and demonstrations by Don Hopkins and Don Norman.
Norman: "And then when we saw SimCity, we saw how the pop-up menu that they were doing used pie menus, made it very easy to quickly select the various tools we needed to add to the streets and bulldoze out fires, and change the voting laws, etc. Somehow I thought this was a brilliant solution to the wrong problems. Yes it was much easier to now to plug in little segments of city or put wires in or bulldoze out the fires. But why were fires there in the first place? Along the way, we had a nuclear meltdown. He said "Oops! Nuclear meltdown!" and went merrily on his way."
Hopkins: "Linear menus caused the meltdown. But the round menus put the fires out."
Norman: "What caused the meltdown?"
Hopkins: "It was the linear menus."
Norman: "The linear menus?"
Hopkins: "The traditional pull down menus caused the meltdown."
Norman: "Don't you think a major cause of the meltdown was having a nuclear power plant in the middle of the city?"
(laughter)
Hopkins: "The good thing about the pie menus is that they make it really easy to build a city really fast without thinking about it."
(laughter)
Hopkins: "Don't laugh! I've been living in Northern Virginia!"
Norman: "Ok. Isn't the whole point of SimCity how you think? The whole point of SimCity is that you learn the various complexities of controlling a city."
As indicated in the thread via their own words and pictures, this is an off-brand Switch controller, not an official Nintendo one. Off-brand controllers don't meet the same requirements, or even the same feature set, as the official Nintendo Pro or Joy-Con controllers.
I don’t have this device, but reading the manual there is no mention of any off-switch or off procedure. There is only syncing, charging, programming back buttons, adjusting the lights, changing light modes. And as for the lights, there are only 4 modes, none of which is “off”.
Who reads the manual? I suspect the HN crowd tends to read them more often than most; even so, I suspect many of us avoid it until "necessary" (i.e., we've been stuck on a problem we care about for too long).
What's more disrupting, my one little comment, or you bickering with me about it? it's not language policing, it's treating folks with respect. Also, glad you created your account just to say this.
for what it's worth, i was grateful to be called out on my gender assumption. that assumption can cause widespread harm. i'm less concerned about my pronoun assumption.
I'm glad people are connecting with this comment (it's mine), but it's actually not about the article! My apologies, there was another, pro-car article that this should be attached to, but now I cannot find on hn. Carry on!
Obligatory addendum every time foone is posted here: foone goes by they/them pronouns and foone hates HN and all it stands for so please don't bother them with twitter replies just because you saw this thread on here. If you think about following them on twitter because of a cool tech thread, please be aware that they're not a tech novelty account and don't complain when they tweet about politics.
Well, this thread was less focussed than the last ones that got posted here so this seemed less likely to prompt the usual "Twitter threads are unreadable, why doesn't foone just write a proper article" replies, but yes. Foone has stated in the past that Twitter threads are the right medium for their ADHD-fueled bursts of writing and that blogs aren't.
Why should we care whether Foone's mad at HN? Posting to a medium you don't own and expecting to be able to control who responds and how never really works out. If you want control over who sees your content and how they can respond, you have to use a medium you control, and I don't see why Foone is special in that regard.
The reason I said what foone wants and doesn't want is precisely that they can not control what others do. Yes, you can disregard all that, follow them on twitter and angrily chastise them for not sticking to cool tech projects and tell them to use a proper blogging platform, but I was providing this information based on the expectation that most people reading it would not want to be intentionally rude.
If the only thing stopping you from being rude would be the police literally dragging you away from your computer, yes, by all means, try to prove a point by being intentionally rude just because they can't stop you.
My point is that when Foone posts are shared here, someone always follows up with The Official Rules for Interacting With Foone. Others neither expect nor get such treatment. What's different about Foone?
Most of the other people who get their tweets shared on HN don't vocally hate HN as much? They're also usually men, so there's no need to clarify pronouns.
> I went out and bought this as it was the cheapest bluetooth controller I could get on a same-day notice and boy I'm not happy with the quality results
Man buys cheapest product, is surprised by its lack of quality.
"Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
> Not to humblebrag or anything, but my favorite part of getting posted on hackernews or reddit is that EVERY SINGLE TIME there's one highly-ranked reply that's "jesus man, this could have been a blog post! why make 20 tweets when you can make one blog post?"
> CAUSE I CAN'T MAKE A BLOG POST, GOD DAMN IT.
> I have ADHD. I have bad ADHD that is being treated, and the treatment is NOT WORKING TERRIBLY WELL.
> I cannot focus on writing blog posts. it will not happen.
You can append to a blog post as you go the same way you can append to a Twitter feed. It's functionally the same, the medium just isn't a threaded hierarchy. There's no reason it has to be posted fully formed as he declares.
My own blog posts often have 10+ revisions after I've posted them.
"Fix" what? That the medium or style they choses to express themselves doesn't work well for you? Well, tough; that's your problem, not his. They're not your bitch. The entitlement...
> That the medium or style they choose to express themselves in doesn't work well for you
It doesn't work well for thousands of people, which is why there are always complaints.
You can say the exact same things about his post asking them to fix the controller. Oh "the entitlement" and "that's his problem, not the controller manufacturers". We're asking him to fix his posts. He's asking them to fix the controller.
When something is suboptimal, you're well within your rights to complain about it. Posting long rants as Twitter threads is suboptimal for the consumers of said threads, just as a controller you can't turn off is suboptimal for the consumer of the controller.
> "When something is suboptimal, you're well within your rights to complain about it."
You can complain about it on your own Twitter. Here, it's explicitly against the HN guidelines: "Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting." - https://news.ycombinator.com/newsguidelines.html
generally speaking coming out hot with a "whatever you're doing is wrong and you need to do it the way I want you to do it" is always a hard sell - regardless of how valid your original argument may be
It’s strange I have no issue reading posts like these but every time they come up someone complains about the format of twitter and I still have no idea what makes it so difficult for people and I definitely did not grow up with it.
Well, first of all you need to be familiar with the format. Or better said, with the abuse of the format. Because while it might look like replies, you need to figure out that it was used to post an article, not actual additions/corrections, which would be the common reason for someone replying to themselves. But it's still hard to figure out where it ends, as the replies from the author seamlessly blend into other users' replies. It doesn't help that this particular author doesn't number those tweets (which in itself is ridiculous) or even use punctuation or capitals. It just seems like a random rant, not something we're supposed to be reading when someone shares a tweet.
And then there's the noise. Why do I need to skip icons, names, numbers, padding, and dividers after every sentence? And while our brains are relatively good at it, I still have to process all of it for clues about where the "article" ends.
Twitter is the worst format to post articles. At least it was until someone came up with the idea to post them as videos with computer voiceover.
(I'm using the word article here, but I'm still not convinced it isn't just a random rant)
I just tried and I could read that whole thread without being logged in. Maybe twitter lets you read the whole thing if you've ever logged in with that browser?
I guess if this is actually a problem, anyone who's bothered by it, who doesn't have a twitter login could look into rewriting twitter.com links to nitter.net instead.
If you scroll a little too far at the end, it'll lock you out and prompt you to login. Basically as soon as the first "recommended" tweet is displayed mid-screen.
The way I scroll as I read, I often have to go back and reload the page just to read the last tweet in a thread.
Because it's actively user-hostile. It's not that we can't figure it out, it's that it takes extra effort and creates a problem that shouldn't be there in the first place. If things like ThreadReaderApp or Nitter can present the content in a more accessible & pleasant form, Twitter should be able to do so as well.
I see this and would argue it's not user-hostile. Rather, Twitter's target users are those who sign up and log in to view and consume content, so that Twitter can target them. It's not hostility towards users, but rather non-catering to its non-users. For the majority of content, tweets and threads are not an issue. People easily consume them, and if they don't want to deal with the format, they use tools built on Twitter's API.
I kind of appreciate the signal: When someone chooses to blog on twitter you know it's facile at best, and more likely simply stupid (as in this case).
It's fine to prefer different approach to content publishing, but there are lots of solutions to this problem – use them instead of complaining about the ways that other people choose to publish their content.
ahahahah how ridiculous is this idea.
I just want to see the author having off buttons on all their remotes: TV, AC, etc. Enjoy manually turning them off and having a redundant button =)
The remote to my gas fireplace runs dead every few weeks. The remote uses radio, so I assume it is constantly communicating with the fireplace. I don't use it everyday, so I keep the back off to dislodge the battery when not in use, and reconnect the battery to turn the thing on. I would love a simple on/off switch.
While I get the motivation, I'd rather have my Apple devices which have some economic value to be findable using Find My, which can't function with a physical "hard" off switch.
I'd rather have Bluetooth use an infinitesimal amount of my iPhone/MacBook battery than have them lost/stolen; with "soft off" they can potentially still be tracked even when off.
But besides some use cases like this, I agree with the post.
I'd rather have my Apple devices which have some economic value to be findable using Find My, which can't function with a physical "hard" off switch.
Why not? I mean obviously they wouldn't be findable if they were off, but that's a feature. Apple could add a physical switch and leave everything else the same. Your devices would function as they do now in "On" mode, but you'd also be able to turn the off properly. Isn't that better?
That would literally kill the whole point of being able to find it if it's stolen.
I know there are other ways to defeat it (a sophisticated thief might just put it in a Faraday cage), but otherwise Find My just works in its current form, and putting in a physical switch that physically cuts power kills the whole purpose.
HOT TAKE: maybe not. Just make it default off. Official switch pro controller does not have this issue because it's always off, unless you press buttons, then it connects. Corollary: you cannot store it in a place where buttons might be pressed, or it will indeed drain the battery like in the post.
(Edit: if you disagree, please tell me why. With proper design, it's possible for every button on this type of controller to act effectively as an on button, and I even cited a working, shipping example.)
You can have an electronic circuit that is open, unless you press a button, then it's powered on and keeps power until a timeout/event. I know, it requires some more complex concepts than a bidirectional physical switch, but it's doable.
The twitter thread complains about controllers which empty their battery between uses because they keep trying to connect; but it's not something that's mandatory: an at-rest device can stay at rest without user interaction. I've used hardware on which the battery was still usable after years without use; it does requires very careful design, though.
Bathroom scale: lovely that it powers up when you tap it, but it eats two button cells a year for a few dozen uses a year
Kitchen scale: a bit more use, at least once a day, but again two button cells a year
Electronic caliper: same, eats one button cell a year just sitting in my toolbox
Strangely enough, none of our kids' toys with batteries seem to suffer from this problem
How much extra cost does a hard on/off switch add to the bill of materials?