
We had a similar problem at work in the late 90s. A member of staff reported that their mouse would stop working between certain hours of the day. It had apparently been okay in the morning, stopped working over lunchtime, then started again later.

On some days it would work perfectly all day long, but on others it would stop working between those hours.

The biggest clue was that it would always work perfectly on overcast days, but on sunny days this strange behaviour would manifest again.

Turns out the problem was that it was a cheap mouse: the case had very thin plastic.

The mouse was a ball mouse, and it worked by shining an LED into a sensor on each of the X and Y axes. On sunny days the sun would completely overpower the sensor due to the plastic case being very thin and on overcast days it would not. On sunny days the mouse would only work when the sun had moved around the sky to cast a shadow over where the mouse was being used.

Perfectly logical but baffling at first.
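
For the curious: a ball mouse typically has not one but two photo-sensors per axis, slightly offset behind a slotted encoder wheel, and decodes movement from the order in which they change (quadrature). Here's a minimal sketch of that decoding with hypothetical sensor samples - when sunlight floods the sensors they stop transitioning, and no transitions means the cursor simply freezes:

    # Transition table: (previous AB state, new AB state) -> step
    QUAD_STEPS = {
        (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
        (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
    }

    def decode(samples):
        """samples: iterable of (a, b) photo-sensor bits; returns net movement."""
        position = 0
        prev = None
        for a, b in samples:
            state = (a << 1) | b
            if prev is not None and state != prev:
                position += QUAD_STEPS.get((prev, state), 0)  # 0 = invalid jump
            prev = state
        return position

    # A sensor washed out by sunlight is pinned at "lit" - no transitions:
    print(decode([(1, 1)] * 100))  # -> 0: the cursor just stops
    print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # -> +4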




Reminds me of a problem that I had (many years ago) with my iPhone 4 - if I tried to boot it in a dark place, it would get stuck on the Apple logo in an infinite boot loop.

Turns out some versions of the Pangu jailbreak for iOS 7.1.x would crash during boot if the reading from the ambient light sensor was below some threshold. To this day I don't know the exact explanation of this bug, but it seems that Pangu included some unnecessary code that messed with the light sensor [1].

If you don't believe me, there is a huge reddit thread[2] with a lot of people confirming this.

[1] https://www.reddit.com/r/jailbreak/comments/294wob/jailbreak...

[2] https://www.reddit.com/294wob/


Engineers love problem solving. I always see it as a challenge, no matter how unimportant.


That's funny, there exists a similar issue with the LG G7 that a friend of mine ran into several years ago. The fingerprint sensor on his phone just straight-up completely stopped working, and subsequent OS updates did nothing to fix it. At first we assumed it was hardware failure, and he was ready to send it to a repair shop. While investigating it I saw a comment somewhere that it had something to do with the light sensor, and after holding my thumb over it for 10 seconds it "magically" started working again after 4 months of being completely non-functional.


Seems unlikely. I don't have access to the paste but from the comment below it I think it's probably a false positive that Pangu was doing something with the sensor. (Not that I doubt the sensor could be the problem; it's just that the code is not very conclusive.)


The intent might have been to prevent it turning on while in your pocket.


I had a similar problem, but in the opposite direction. My cable internet speeds at home were fairly good (for the US, anyway), but sometimes would absolutely bottom out. Not dead, just glacially slow. After troubleshooting everything under the sun, I came to realize that the problems would happen not when it was raining per se, but when it was heavily foggy or misting. Normal to heavy rain was fine.

Called the cable company, tech came out. Everything inside was fine, but the cable from the main line to the house had a tiny cut in one spot, not enough to really affect the connection, but enough for ambient moisture to work its way in and foul the connection.


On dslreports or broadbandreports there are at least two instances of me complaining about two cable companies where, in the end, it was figured out there was moisture ingress in the LE (line extender, usually on cable lines on poles). The first common denominator was that it happened during prime time, every night, and went away around midnight.

The other common denominator was the cable company refusing to believe it was an issue with their equipment; this meant it took a couple of months of calling them every night until they finally sent a technician and a manager to my house to verify that I wasn't wrong, leaving my house, coming back 15 minutes later to say "it'll be fixed tomorrow, there's a problem with the LE balance up the road" - and then the issue was resolved.

Now this doesn't sound so bad, until you learn that the first time this happened to me, I had only VoIP - so the internet would start to foul, I'd call the cable company, and the tier 1 would reset my modem at some point, and then I wouldn't be able to call back until after midnight (or whatever), when there was no longer a problem. So after a week of this, I would walk 30 minutes - one way - to a pay phone (remember those?) once the internet slowed, call them, and explain that I couldn't do anything they wanted me to do physically, since they disconnected my phone line every time I called.

This is what happens with a de facto monopoly.

I will never pay Suddenlink another dime, even if they end up being the only terrestrial provider for whatever reason.


Interesting, I wonder if I’m experiencing something a little bit similar that Comcast can’t seem to debug.

Almost every day, in the heat of summer, I get one to five 10-minute outages as soon as the temp gets over about 80F. More when it’s hotter, usually. Usually it results in a modem reset, so it’s hard to tell how long the actual outage is.

Been happening for going on 5 years. They replaced the under-street cable from our house to the junction box across the street to no effect. I suspect it’s that junction box, but afaict, none of my neighbors that share that junction box have the same issue. Not very fun to have your WFH day collapse unexpectedly in the middle of the afternoon.

Strangely, for the last month we’ve had several days of 80+ temps with no sign of outage. So fun.

Edit: yes of course multiple modem replacements and inside cable checks, to no avail.


We had the same problem at an old house. There was a cable splitter in the attic that was expanding in the heat and losing connection to the cable. We bought a heftier one and moved it under the insulation in the attic.


Interesting, I removed a splitter from the attic many years ago and replaced it with an F-F coupler.

I wonder if the coupler’s center conductor contacts could be expanding just enough to break the connection?


I wouldn’t be surprised if that was happening.


Yeah, probably a very similar thing if that pattern is true. It's the thermal shift forcing your modem to change speeds, with neither side willing to accept the change.

If you can, try forcing a level at/below the speed you get during the breakage and see if it just rides it out. If it does, shift it back up and plan your coffee breaks around it. Or don't, I'm not your mother


Several days of 80+ temps, meaning it hasn't dropped below 80? Possibly it's just getting above 80 before you start your work day, and not dropping below until after your work day has ended?

I've experienced something similar, except for temperatures below ~32-36 degrees. At this particular location it would result in a ~1hr outage when going below that temperature, but not when it went back above it, for some reason.


I think you’d get this problem, monopoly or not, whenever cost saving measures are in place (and they always are, for good reason) at the customer-interface level.

Maybe there should always be a hidden option that only people that meet a certain troubleshooting ability threshold get access to when calling in for tech support….


> I think you’d get this problem, monopoly or not, whenever cost saving measures are in place (and they always are, for good reason) at the customer-interface level.

I'm guessing that these scripts that we're all complaining about solve 95% of problems customers call in about. Sure makes things painful for the 5% of cases, though.

I've been a (grudging) Comcast customer for ~17 years, and I have been impressed by how their monitoring has improved over that time. It's been quite a number of years since I've had to convince them that I had an actual problem that their systems didn't automatically detect.



There is a hidden option - you can call it "proof of work", or "proof of determination". You keep calling, and trying ways to escalate, maybe even send a paper letter; eventually, something in the customer "support" process will yield and you'll get through to someone who can actually help you.


That time is an intersection of heavy home use and when the dew hits.


A customer's DSL connection failed every evening during December - but worked fine the rest of the year... Culprit: interference from nearby Christmas decorations leaking EM all over the place.

Another customer's DSL connection failed more and more often in the mornings and evenings. Culprit: the lift's electric motor leaking EM all over the place.

A bunch of DSL connections degraded whenever traffic increased... Crosstalk in big cables, of course!

The sort of fun incidents that take a good while to troubleshoot... I'm glad we are migrating away from DSL to fiber: either it works or it doesn't!


It's not like fiber doesn't have its own weird failure modes. My favourite one I heard of was shoddy below-ground work while crossing a street. No problem with ordinary car traffic, but heavy trash-haul trucks could interrupt the link.


An ISP my friend worked at was having weird outages in one area, and it turned out that they had an apartment block built right in the way of their free-space optical link. Surprisingly, it was fine at first because the link went straight through it without obstructions, window to window. But when they started to add window panes, finishing the construction, the link became spotty, and adding the doors blocked the signal completely.


How does one even attempt to troubleshoot that, without resorting to a questionably legal drone or chopper flight?!


Line of sight is easy - just aim a sight along the link and find that you're seeing a building rather than the far-end device.


Having had to debug many of such cable issues in the past, it's baffling to me that cable companies aren't proactively monitoring for things like this.

They have all the data available on their end, as far as I can tell! (Unless DOCSIS modems somehow don't have a standard "signal receive report" functionality?)
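
For example, a minimal logging sketch from the customer side, assuming a modem that serves a diagnostics page at the common 192.168.100.1 address - the page layout varies by model, so both the URL and the (deliberately naive) regex are assumptions to adapt:

    # Periodically scrape the modem's status page and log whatever dB
    # figures appear (downstream power / SNR), so flaps can later be
    # correlated with weather. Errors are logged too - they're data.
    import csv, re, time
    import urllib.request
    from datetime import datetime

    STATUS_URL = "http://192.168.100.1/"           # hypothetical; check your modem
    DB_RE = re.compile(r"(-?\d+(?:\.\d+)?)\s*dB")  # naive: grabs any dB figure

    def sample():
        html = urllib.request.urlopen(STATUS_URL, timeout=10).read()
        return DB_RE.findall(html.decode(errors="replace"))

    with open("modem_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            try:
                readings = sample()
            except OSError as exc:
                readings = [f"ERROR {exc}"]
            writer.writerow([datetime.now().isoformat()] + readings)
            f.flush()
            time.sleep(60)                         # one sample per minute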


Telcos used to monitor their copper outside plant for moisture. This was called Automatic Line Insulation Testing in the Bell System. The ALIT system ran in the hours before dawn. It would connect to each idle line, and apply, for tens of milliseconds, about 400 volts limited to very low current between the two wires, and between each wire and ground, measuring the leakage current. This would detect moisture in the cable. This was dealt with by hooking up a tank of dry nitrogen to the cable to dry it out.
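
The pass/fail arithmetic behind such a test is just Ohm's law - apply a known voltage, measure the leakage current, infer the insulation resistance. A minimal sketch, with an illustrative threshold rather than an actual Bell System spec:

    TEST_VOLTS = 400.0
    MIN_INSULATION_OHMS = 1_000_000.0  # flag anything under ~1 megohm (assumed)

    def insulation_ohms(leakage_amps: float) -> float:
        return TEST_VOLTS / leakage_amps if leakage_amps > 0 else float("inf")

    def needs_nitrogen(leakage_amps: float) -> bool:
        """True if the measured leakage suggests moisture in the cable."""
        return insulation_ohms(leakage_amps) < MIN_INSULATION_OHMS

    print(needs_nitrogen(50e-9))  # 50 nA -> 8 gigohms   -> False (dry)
    print(needs_nitrogen(2e-3))   # 2 mA  -> 200 kilohms -> True (wet)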

Here's a 1960s vintage Automatic Electric line insulation test system at work in a step-by-step central office.[1] Here's the manual for automatic line insulation testing in a 5ESS switch.[2] 5ESS is still the major AT&T switch for copper analog phone lines. After that, it's all packet switching.

For fiber, of course, moisture doesn't affect the signal.

This led to an urban legend: "bell tap". While Western Electric phones were designed to not react to the ALIT test signal, many cheap phones would emit some sound from the "ringer" when the 400V pulses came through, some time before dawn.

[1] https://www.youtube.com/watch?v=Wt1GGdDa5jQ

[2] https://www.manualslib.com/manual/2755956/Lucent-Technologie...


Great comment, thanks!

(I've sent a quick email suggesting it be added to https://news.ycombinator.com/highlights :)


If you're really into telephony history, the Internet Archive has "The History Of Engineering and Science in the Bell System" (3 volumes) online.

If you have to build reliable distributed systems, it's worth understanding how this was done in the electromechanical era of telephony, where the component reliability was much worse than the system reliability. "Number 5 Crossbar"[1] is worth reading, but hard to follow if you have no idea how telephone switching worked and are unfamiliar with the terminology.

Number 5 Crossbar, in current terms, was a collection of microservices. There was a big, dumb switch fabric, and "markers" which told it what to connect. Other microservices included trunks, originating registers (which listen to incoming dial digits), senders (which sent dial digits to the next switch), billing punches (which recorded toll call data for later billing), translators (which held routing tables), and trouble recorders (which logged errors.) Central offices had at least two of each resource, for redundancy. Resources were "seized" as needed from resource pools, with a hardware timeout and alarms to prevent resource lockup. If something went wrong in setting up a call, it was retried once, using different resources. If it failed on the second try, the caller got a fast busy and there was an alarm and a trouble recorder dropped a trouble card. Markers did not have persistent state. They started each call with a reset. So they could not get stuck in a bad state.

In the entire history of the Bell System, no electromechanical switching office was ever down for more than 30 minutes for any reason other than a natural disaster or a fire. It's worth understanding how they did that.

[1] https://telephoneworld.org/mdocs-posts/number-5-crossbar-sys...
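
A minimal sketch of that "seize from a redundant pool, hard timeout, retry once with different resources, drop a trouble card" discipline, in modern terms - all names illustrative, not how a marker actually worked:

    import queue, random

    class Pool:
        """A pool of redundant units; seizing has a hard timeout."""
        def __init__(self, units):
            self.q = queue.Queue()
            for u in units:
                self.q.put(u)
        def seize(self, timeout=0.5):
            return self.q.get(timeout=timeout)  # queue.Empty = lockup alarm
        def release(self, unit):
            self.q.put(unit)

    def try_setup(marker, register):
        return random.random() > 0.1  # stand-in for the real call setup

    def place_call(markers, registers, trouble_log):
        used = []
        for attempt in (1, 2):  # at most one retry
            m, r = markers.seize(), registers.seize()
            used.append((m, r))
            ok = try_setup(m, r)
            # No persistent state is kept; everything is released each try.
            markers.release(m); registers.release(r)
            if ok:
                return True
            # FIFO pools mean the retry naturally draws different units.
        trouble_log.append(f"trouble card: failed twice via {used}")
        return False  # caller hears fast busy

    markers = Pool(["marker-0", "marker-1"])
    registers = Pool(["reg-0", "reg-1", "reg-2"])
    log = []
    place_call(markers, registers, log)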


Not truly related to the post content, but there is something about the way these old manuals are formatted/printed that immediately inspires confidence in the contents.

Maybe because you know that someone spent a lot of time on it before it was published since no adjustments could be made after the fact.


> trouble recorders

This feels like a term a sci-fi author would invent in an alternate history setting to replace "error log" and I find it very humorous.


No, just practical.

The previous version was a panel of blinking lights called the "trouble indicator". When an alarm sounded, someone had to go to the panel and record by hand which lights were on. There were about 200 lights. So the trouble recorder, which recorded that info automatically, was added in larger central offices as an upgrade.[1]

[1] https://hackaday.com/2022/12/02/stack-trace-from-the-1950s-p...


We still have a land line. When a call comes through the phone often gives a gentle "peep", then a pause, then goes full-on ring. I've started to react to the "peep".

But every evening, mostly around 21:00 or so, the phone gives a gentle "peep" without then ringing.

I wonder if it's a line test?


Crap electronic ringer, probably. If you put a scope on the line, you should be able to see what's happening. Remember to be prepared for higher voltages, up to 400V.

There are various weird, obsolete signals in analog phones. Ring pulse alerting signal. ALIT test. Polarity reversal. Ring to ground. Ground start. Caller ID (1200 baud FSK between the first and second rings). DSL. Basic talk and ring was standardized around 1900, and everything else is backwards compatible. Ringers are supposed to ignore all that stuff. People who implement Asterisk PBXs are into this.

Here are some actual waveforms, if anybody cares.[1]

[1] http://www.adventinstruments.com/Products/AI-5120/Screenshot...
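
The caller ID signal is concrete enough to sketch: Bell 202 FSK uses 1200 Hz for mark ("1") and 2200 Hz for space ("0") at 1200 baud. A minimal noncoherent detector, assuming the line audio has already been captured at 48 kHz; bit sync, framing, and the message checksum are all omitted:

    import numpy as np

    FS = 48_000       # assumed sample rate
    BAUD = 1200
    SPB = FS // BAUD  # 40 samples per bit

    def tone_energy(window, freq):
        t = np.arange(len(window)) / FS
        return abs(np.sum(window * np.exp(-2j * np.pi * freq * t)))

    def fsk_bits(audio):
        bits = []
        for i in range(0, len(audio) - SPB + 1, SPB):
            w = audio[i:i + SPB]
            bits.append(1 if tone_energy(w, 1200) > tone_energy(w, 2200) else 0)
        return bits

    # Self-test: synthesize a few bits and decode them back.
    t = np.arange(SPB) / FS
    mark = np.sin(2 * np.pi * 1200 * t)
    space = np.sin(2 * np.pi * 2200 * t)
    print(fsk_bits(np.concatenate([mark, space, mark, mark, space])))  # [1, 0, 1, 1, 0]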


Ah, so that's why there were always nitrogen tanks on NYC sidewalks.


Yup, here's a Tom Scott video on the very same: https://www.youtube.com/watch?v=juZqGU9iuq0


> Wait! Was that an old adding machine?

At 02:40.

And yes, it is an adding machine.


In my observation, to a first approximation, cable operators take off-the-shelf equipment, connect it, power it on, and bill customers for it. They don't really have the R&D capability to innovate and create new monitoring solutions quickly.

It might happen that an equipment manufacturer sees an opportunity and builds something, but then they have to go through a long sales cycle to convince operators to use it. Operators are in a duopoly situation in most places, so quality of service is kind of a secondary concern for them - customers may get annoyed, but as long as the competition is not vastly superior, few actually switch. It is not a market prone to innovation.


Common issue in Ireland for DSL customers. Damaged copper cabling would leak water when it rained causing dropouts and lower speeds. Telecom engineers would call out on days when the copper had dried out and be unable to find any fault. Turns out correlating such reports with weather reports is hard. :/


I’d suspected that, but kept it to myself because it sounded a bit mad


I ran into a similar issue, except internet and phone would get really bad on a cold morning.

Tech showed up around noon, saw I was indeed having a bad connection, went and checked the signal at the junction box for the street (can't remember what you call those) and everything was normal there, so he closed it back up and double-checked the signal at the house again, but it was fine. He walked the lines to double-check, but everything looked normal.

His best guess was that moisture was condensing ever so slightly inside the junction box that morning, and was let out as soon as he opened it at around noon, which fixed the problem.


Moisture in copper cables is what slowed me down too. It was in a section up the road from me. However now that fibre is installed, it’s glorious and works in the rain.


I had a similar problem, due to an old line running to my house; liquid getting in, etc. And when it acted up, I'd call the cable company and be like "look, I can show you I'm losing packets right now... I need you to run tests on your end to confirm". And every time, they'd tell me they could schedule a tech to come out and take a look at it. Only, I couldn't "schedule" the problem to occur when the tech came out.. so they'd come out, declare all was fine, and leave. It was infuriating.

Eventually I called so many times and had so many appointments that the tech lead gave me his direct number and told me to call him directly the next time it happened. When it did, I did, and he ran some tests, and confirmed there was a problem. I don't know that we ever got it sorted out (it was a while ago), but just getting them to agree there was an issue was a very long process.


We have a countertop ice maker that gets jammed up and overloaded with ice on sunny days for a similar reason.

There's an infrared beam and sensor. When the ice tray is full, it is supposed to block the beam, and then the machine stops making ice.

On a sunny day, there's enough bright light in our kitchen to fool the sensor so it keeps making ice.

We have a random magazine that we put on top of it to make it work correctly.


I have a garage door that will not close on sunny days.

Same sort of problem. The obstruction sensor at the bottom of the door is confused by the strong sunlight and the door stops closing part way and re-opens.

I've tried a toilet-paper tube around the sensor but that isn't always successful. I really wish there was a laser sensor to replace it with.


The sad thing is there are certain IR wavelengths that are a lot less affected by the sun and nobody bothered to check for an outdoor product...


The garage door obstruction sensors are located inside the garage, so it technically might be an indoor product.

Although a garage being oriented such that sunlight directly hits the sensors while the garage door is open seems like it could be a not infrequent occurrence.


Paint your garage floor black: less light will reflect.

It also is a lot easier to see fallen bolts and shit on a black floor than on a white/gray one.


What sucks is those sensors are designed so you can't just jump a wire to permanently defeat them.

You can, however, tape the sender and receiver together.


Is it a Genie system? My old Genie system had that exact same problem. Even making large sun shields out of Amazon boxes didn't fix it.

I had to replace my opener and door anyway and had a conversation with a tech about it. We decided on a LiftMaster in part because their sensors are very good at dealing with sunlight.


Depending on the orientation of your garage door, exchanging the sensor could put it in an unaffected position.

It looks like industrial photoelectric sensors, including laser-based ones, run around $100, so maybe that could be a realistic swap.

https://www.automationdirect.com/adc/shopping/catalog/sensor...


I have this exact problem and (mostly) fixed it by swapping the sensor and transmitter. I just cut the wires and spliced with electrical tape. Now the problem still happens, but only sometimes in the fall and spring when the sun's angle is just right. This is with a west-facing garage at about 41°N latitude in the USA.

But yeah, why isn't this laser-based, or using a light frequency that is less affected by sunlight? Probably cost, or ignorance.


It varies by brand- some brands are better at filtering out sunlight than others. The home builder should know not to use certain brands in garages that face the sun... but they often don't.

It's not laser-based so the sensors don't have to be perfectly aligned. Keeps your garage door working when your kid knocks it with their foot.


Very basic engineering would be to modulate the sender at a specific frequency.
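
A minimal sketch of that idea: drive the emitter with a carrier (38 kHz is what consumer IR receiver modules expect) and only accept the beam when the detector output actually contains that frequency. Steady sunlight has no 38 kHz component, so it stops reading as a beam. The sampled values here are hypothetical:

    import numpy as np

    FS = 200_000    # assumed ADC sample rate
    F_MOD = 38_000  # carrier frequency
    N = 400         # 2 ms detection window

    def beam_present(samples, threshold=0.1):
        """Lock-in style detection: correlate with the carrier."""
        t = np.arange(len(samples)) / FS
        ref = np.exp(-2j * np.pi * F_MOD * t)
        return abs(np.sum(samples * ref)) / len(samples) > threshold

    t = np.arange(N) / FS
    beam = 0.5 * (1 + np.sign(np.sin(2 * np.pi * F_MOD * t)))  # square wave
    sunlight = np.full(N, 3.0)                                 # big DC offset

    print(beam_present(beam + sunlight))  # True: carrier detected
    print(beam_present(sunlight))         # False: bright, but no modulation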


Similar problem here; the toilet-paper hack worked.


Maybe experiment with filters.

Also it could be 'fun' to swap out the LEDs?


I had a VCR back in the day that refused to function if you opened its case. It turned out that instead of using physical switches inside it used pairs of lights and detectors that would give false positive results when ambient light shined on them.


as in an optocoupler? Those are the coolest, especially for dealing with different voltages.


No, there's no call for high voltages in a VCR, outside the feedback loop of a SMPS if it's fancy. The most common use of light sensors is just detecting the difference between tape and clear leader at the ends of the tape (or broken tape).


Your mouse story makes me think of the day the CI system at work turned out not to be robust to vibrations.

One day we started having flaky tests, seemingly out of nowhere. We quickly identified that the issue affected tests involving graphical X client applications, but then we struggled to make further progress. The issue was just impossible to reproduce in other conditions... Well, as it happens, the CI jobs were running on some desktop machines we had installed somewhere within our premises. It turned out that some gentleman had plugged a mouse into one of the machines, and left it lying around on the shelf. Since then, when one of the machines was under a heavy load, the fans would spin faster, causing more vibrations, in turn causing the mouse to move, ever so slightly. And for ungodly reasons, this had side effects on tests.

Fun fact: the machines were not on my site, I managed to diagnose this over SSH. I was quite proud :-)


> And for ungodly reasons, this had side effects on tests.

Let me guess - tests with very tight timings?


Sadly I can't remember this part; I'm pretty sure there were comical bits to it.

This makes me want to dig out the gitlab issue, and turn it into a better write-up! This'll have to wait until I'm back from holidays though.


When it’s sunny, my wife’s car can’t open the garage door, and my car requires getting extremely close. Once the sun goes down, we can both open the door from the street.

It turns out our solar panels (or the optimizers, or the inverter) emit radio frequencies that interfere with our garage door opener. When the sun is out and they are producing energy, the interference is stronger than the HomeLink garage door opener's signal.

A few years ago the garage door openers started working fine. It took a few days to realize it was because the inverter had failed.

I’m fairly certain there are some FCC regulations that would require our installer to fix it, but that relationship soured during installation and I’d rather deal with an unusable garage remote than dealing with them for warranty work.


Some clip on ferrites on the inverter cables might help.

If you have any amateur radio neighbours they'd probably love to help you with a project like this.


If they had a HAM in direct neighborhood, I imagine said HAM would already pay them a visit - the interference from the inverter is likely not constrained to the ISM band.


Not to be a pedant, but just for your info, Ham is not an acronym :)


I always saw it written as either "HAM" or "ham", and I assumed the latter is the "young generation doesn't give a damn about spelling or punctuation" spelling, and therefore that the former is the correct one.


On the cables coming in from the panels or wiring going back to the main panel?


I'd start with the wiring going back to the main panel first but be open to anything. Fixing RF noise is more of an art than science in my experience.


If you get an SDR, you can watch the interference and try things to help reduce it. An SDR should be like $20, and you just plug it into a computer.
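
For example, a minimal sketch with the pyrtlsdr package, assuming a US-typical 315 MHz opener (the frequency is an assumption - check the FCC label on your remote):

    # Capture a chunk of spectrum around the opener's frequency and print
    # the noise floor, so you can compare inverter-on vs inverter-off.
    import numpy as np
    from rtlsdr import RtlSdr  # pip install pyrtlsdr

    sdr = RtlSdr()
    sdr.sample_rate = 2.048e6
    sdr.center_freq = 315e6    # also try 390e6; openers vary
    sdr.gain = "auto"
    try:
        samples = sdr.read_samples(256 * 1024)
    finally:
        sdr.close()

    power_db = 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(samples))) + 1e-12)
    print(f"median noise floor: {np.median(power_db):.1f} dB (relative)")
    print(f"peak:               {power_db.max():.1f} dB (relative)")
    # Run once while the inverter is producing and once at night; a raised
    # floor or strong spurs in the daytime capture point at the inverter.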


Just curious but did you try any bodged shielding?


I used a coax cable to move the antenna closer to the exterior wall and didn’t see an improvement, however, I might not have grounded the shielding properly. I’ve had to replace the control board since then and didn’t replace the antenna on the new board, but may try that again.


How old is the garage door opener? Older ones used frequencies that are more susceptible to interference from certain sources. It's possible to buy new receivers to connect to your existing door opener.


The garage door openers aren’t terribly old, but they are terrible!

I have a HomeKit opener attached to it that we use during the day. Fortunately that’s been reliable enough to get around the issue.


Argh. My own mouse not working one:

- Used to fix PCs professionally in the early 90s.

- Guy comes in with PC. Right-mouse button stopped working.

- Replace mouse. Still not working.

- Play with Windows 3.1 drivers. Nothing helps.

- Pull HDD from another PC, install, boot. Mouse button still broken. WTF.

- Pull whole mobo, put another spare mobo in, with replacement HDD and replacement mouse. Still doesn't work.

- Replace PSU. Right-button works.

- Give up on computers, live in wilderness, eat squirrels.


I used to know an older woman who did trap and eat squirrels as her main source of protein, so... this isn't entirely unlikely.


What was her trap like?


I have similar stories.

The first was a VDSL connection I had at home. It worked great (fast, for the time) except when it didn't. It always failed in the evening. Techs would come out, bless it as being good, and leave -- because of course it worked while they were there. Unless they showed up and it was broken and then they'd declare that it was an outside problem, and that they'd have to get someone else to fix it (because the residential techs can't do overhead work).

I made lots of (very polite) phone calls, which resulted in more refunds and more service calls. More than once, my driveway and the street in front of my house looked like an AT&T convention.

This went on for months.

I had direct numbers and emails for tier 3 support and the local manager who oversaw this plant. We were all getting to know each other too well, and there were boots on the ground addressing this problem as many as three times in a week.

I eventually noticed that as the days got shorter so did the evening outages...and that if it was a cloudy day, then that day was often outage-free.

I had an epiphany: The problem might correlate with the angle of the sun, and the duration of exposure!

I checked my logs and the past weather, and sure enough: It lined up.

So I reported my findings, even though they seemed like nonsense as the words came out of my mouth, and they sent out some crazy-haired guy with bluejeans and an untucked shirt who was clearly not used to wearing a uniform, and who was also obviously not normally customer-facing.

"I know exactly why they can't find the problem," he said after I reiterated what I'd learned. "Your neighborhood still has old lead-sheathed overhead lines, and nobody knows how to work on that anymore."

"But I'm certified on that. I'm going to go back to the shop, pick up a bucket truck and get your line fixed. It will take me most of a day to do this, but I will be back when I'm done."

And it was getting pretty late, but he did come back to let me know that he found some things and fixed them. And I don't know what those things were, but it was fine after that -- and it stayed fine.

Thermal expansion letting cosmic rays leak into copper pairs wrapped in paper, tar, and lead? Who knows. I certainly don't know.

I've never encountered that stuff professionally (and it isn't your grandfather's 25-pair cable) and as this dude said, "nobody knows how to work on that anymore."
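
For anyone who wants to run the same sun-angle check against their own outage log, a minimal sketch using the astral package - the coordinates and timestamps are made up for illustration:

    from datetime import datetime, timezone
    from astral import Observer
    from astral.sun import azimuth, elevation

    HOME = Observer(latitude=37.77, longitude=-122.42)  # hypothetical

    outages = [  # hypothetical outage timestamps, pulled from your logs
        datetime(2015, 6, 20, 18, 40, tzinfo=timezone.utc),
        datetime(2015, 6, 21, 18, 55, tzinfo=timezone.utc),
        datetime(2015, 7, 2, 19, 10, tzinfo=timezone.utc),
    ]

    for when in outages:
        print(when.isoformat(),
              f"elevation={elevation(HOME, when):6.1f}",
              f"azimuth={azimuth(HOME, when):6.1f}")

    # If the bad stretches all fall in a narrow elevation/azimuth band, the
    # "sun cooking one stretch of lead-sheathed line" theory gets stronger.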


We had a similar problem with a label printer.

On some days, exclusively in the morning hours, the printer would fail to detect the start of a new label, printing over several labels.

After connecting remotely and checking the usual (queue, network connection, drivers, etc.), I asked my colleague to call me as soon as it happened again.

When I went there, I saw that a ray of sunlight hit the printer. The windows had shutters, but there was a gap.

Label printers detect the gap between labels using a laser. And for some reason, the printer's case had a clear window at the top.

I printed an empty label and stuck it on the little window.


Printer, fix thyself!


I'm amazed an IT department would troubleshoot deeply enough to figure out it was the thin plastic letting in interfering light on sunny days.

I would have guessed they'd shrug at the first sign of trouble, swap it out with a known-working mouse and mark the ticket resolved... unless all the replacement mice were thin plastic too, I suppose.


I can't leave something like that unexplained, and I've been an IT department before.

It would bother me until I figured something out.


I'm assuming it was a trackball mouse from the description. The OP said it was cheap so I don't think cost was an issue but from my experience some employees are very particular about their peripherals (don't blame them one bit!). If they're important enough (or maybe just nice enough) I could absolutely imagine spending the time to make sure their preferred device is working properly.


Mice used to be expensive enough to troubleshoot.


For myself, I can't remember a time when a mouse cost more than an hour of an IT guy's time.

I suppose a good office-computer mouse in 1990 would cost $100 ~ $200 (say $350 today). In that case, yes troubleshooting it for a day would make sense, especially if it's not an isolated problem.


> For myself, I can't remember a time when a mouse cost more than an hour of an IT guy's time.

This is not a sensible comparison to make. Support staff have a lot of free time. They have to, because they're support staff -- if they were always busy, then whenever a problem arose, it would be impossible to get support.

So to have the IT department playing with the mouse is unlikely to cost the company anything. If something comes in that's more important, the mouse problem will be put aside. If they have nothing better to do, they can play with the mouse.


I love these kinds of problems. There's an old story about a bug where some people couldn't send an email further than 500 miles. Huh?

https://web.mit.edu/jemorris/humor/500-miles



Your car repeatedly doesn't start so instead of taking it to the shop, you... write a letter to the CEO of Pontiac who not only actually reads the letter but also personally dispatches an engineer to waste a week going out for ice cream? And Pontiacs have a known vapor lock design flaw that only you, the letter writer, are experiencing? And you've only experienced it on your ice cream runs? And not once did you get the vanilla ice cream but take a little extra long, so the vapor lock dissipated and disproved your cute theory about vanilla ice cream?

Seriously, this is one of those dumbass stories that come from your boomer relatives with the subject line "fwd: fwd: fwd: fwd: fwd: fwd: fwd: fwd: re: fwd: fwd: fwd: fwd: fwd: fwd: vanilla ice cream"

Nobody actually believes this story is true, right?


While I appreciate your point, I think anyone who has spent sufficient time troubleshooting complex systems has dealt with similar types of problems, and can grasp the _spirit_ of the story.

In fact, I'd argue the quaint style of the story does geeks a favor: if it's appealing to normies, maybe they'll appreciate us technical folks' perspective a little more.


actually the story is detrimental to helping people understand how technical systems and troubleshooting work, because it's so poorly invented.

lots of people, both technical professionals, and non-engineers who are observant and have an appropriate level of belief in causality, troubleshoot transient failures like this all the time. a difference in the amount of time between shutting down the engine and starting it up is one of the first things that someone like this would test, or control for. it's beyond implausible that the second time the guy got vanilla (after riding along for 4 trips, two long and two short), the engineer didn't raise the question of how long he was in the store.

people troubleshoot things like this by being able to separate causes which are plausible, although unlikely and surprising, from things which aren't remotely plausible. the 500-mile email story and the stories above about sunlight interfering with sensors demonstrate this.

if you're the sort of person who believes that the type of ice cream you get might affect your car's ignition - the type of person who buys ice cream often but never thinks about how long the errand takes - you simply never get to the point of being able to make a pattern between those two things. the second time your car doesn't start, you blame it on the scratch-off lottery ticket you won $2 on which used up your supply of luck for the day. the third time, you conclude that the car ignition knew you were late and likes to choose its failures to cause maximum annoyance. and the fourth time, you realize that your mother-in-law gave your car the evil eye that morning.

the story as told, especially when presented as a real parable about engineering rather than an amusing myth, is frankly insulting to the other type of person. the untrained, not necessarily educated person who cares about machines and believes in material reality. the person who starts checking their watch each time they go to the store and a couple of weeks later is telling their mechanic friend "if it's more than 3 minutes or so, it's fine. but if you try and start it before 2 minutes, then you have to wait another 5 before it's ready to go".


And having a stranger along for the shopping trip doesn't affect the timing more than the extra walk to the back of the store?

And the engineer is sitting in the car on the first night and it "wouldn't start," which signals the end of the episode for the day. Were they stranded? Did the car start after a few tries, which would have given a huge hint about the root cause? Surely the engineer who had reproduced the issue would quickly narrow it down by running diagnoses on the car itself.

But the family dynamics are the most improbable part here. How does the family have this predictable routine and not simply stock up on ice cream? The family has enough kids that they consume a whole $unit of ice cream per day. So with that much chaos in the house, how does the dad justify going out for a drive after dinner when the chaos of family multi-tasking (cleanup, chores, homework, bedtime) is at its peak? "Oh, look. Out of ice cream again. I'll be back in a few!"


Also ridiculously implausible is that the supermarket keeps vanilla ice cream in a completely different location in the store.


Aisle caps commonly feature a product that would ordinarily be found alongside a bunch of closely-related products somewhere else in the store. But I don't think I've ever seen a refrigerated aisle cap.


Obviously, they'd put the most popular flavor way at the back of the store, and the least popular flavors at the front. The store is not in the business of maximizing throughput of customers - quite the opposite, they want customers to spend more time walking and getting lost between shelves, as this maximizes the amount of wares moved.


I believe the aisle caps are bid for, and then arranged by, the manufacturers. The store just sells them the space.

The manufacturer is unlikely to see a problem with putting a display of very popular products right at the front of the store where people can't help but see it.


So you think Vanilla Inc, the company that only makes vanilla flavor ice cream, is paying for a whole chiller that they fill with vanilla, since all the other flavors are not as popular?

Or perhaps, unbeknownst to everyone in the world who uses the word 'vanilla' to mean 'mundane', vanilla ice cream actually enjoys far higher profit margins than all other ice cream.


No, I think Dreyers, the company that makes ice cream, uses the limited space available in the aisle cap to showcase one or two of their most popular flavors. (Or one or two flavors that are seasonally relevant.) That's a totally normal use of the aisle cap, just like how a Triscuits aisle cap is all original Triscuits and the many secondary flavors that Triscuits come in have to be found in the crackers aisle.

It would be literally impossible for an aisle cap to feature every ice cream flavor available - there are so many that each flavor would have very little representation in the display, and the concept would fall apart as soon as anyone bought something from it. At that point, you're paying a bunch of extra money to send the message "check out our least popular flavors".


It's unclear whether you're claiming 1. the vanilla ice cream is sometimes kept in a separate chiller much nearer the entrance and this obviously made-up story is perfectly true, 2. at least some stores exist where the vanilla ice cream (and only the vanilla) is kept in a separate chiller much nearer the entrance, or 3. the vanilla being kept by itself in a separate chiller has elements of plausibility, and can not be dismissed out of hand, even though it's possible that this specific set-up has never existed in any actual store in real life, ever. Which is it?


If you read my comments before deciding you needed to respond to them, it would be pretty apparent that I am claiming none of those things. I specifically noted that a refrigerated aisle cap is implausible.

What's not implausible is the idea that one flavor of a product with several flavors might be located far away from all the other flavors. That happens all the time.


> At that point, you're paying a bunch of extra money to send the message "check out our least popular flavors".

That's what I expect them to do, though. The most popular flavors, by definition, need advertising the least.


The most popular flavors, by definition, benefit from advertising the most. The goal isn't to achieve equality of popularity between every product you sell. It's to sell the greatest number of products!

The advertising costs the same whether you advertise popular flavors or unpopular ones, but you'll get a lot more sales by advertising the popular ones.

Go check out a grocery store, see whether the aisle caps slant towards popular or unpopular product varieties.


...nor would the featured product simply be one flavor of ice cream, as opposed to, say, all the Ben & Jerry's.



Another similar anecdote I heard before was related to a wireless device, and some employees flying a drone during their break, generating interference.


My parents have some old Gateway amplified computer speakers. Came with the 386!

They still work perfectly... except for a regular pop of noise that would intermittently show up every few seconds and scaled with the volume setting.

It turned out, their portable phone (read: landline with short-distance wireless RF handset) would ping from the base station to the handset, if it were off the cradle, which was being picked up by the unshielded line-level audio cable and amplified.

Moved the base station further from the cable, pop disappeared.


Remember how old speakers would let you know if you were about to get a cellphone call? It was like digital precognition haha


You can still hear cellphone and wifi noise in crappy amplifiers - I have two sets of active muff hearing protection (the ones with microphones on each earpiece), and if you get in between a beamforming WAP, or near any wifi antenna, or near a cellphone, you get "brrz bz bz bz bzzzzzz tiktiktiktiktik".

But this is different than the old <2G/EDGE phones, which wouldn't interfere unless you were about to get a call - because the tower said "where's this phone?" and your phone would max out its tx and say "here I am!" and that's what you heard. This is probably incorrect, but based on my observations this is what occurred.

Remember the doodads you could put on the antenna bit of your StarTAC-style phones, with LEDs in them? They'd light up when you were about to get a call as well, by design!


Speakers haven't really changed in 50-60 years, it's the phones that changed there. If you got a call on 2G today it'd still happen.



I found this and had to find this thread... https://www.youtube.com/shorts/Q-6M2P6mAx4


OMG had forgotten all about that.


We did a lot of wireless (2.4GHz range) sensor development at my last job. It was a rule of thumb to avoid any testing at lunch time since the microwave generated so much interference, everything would fail when someone wanted to heat up their meal.


“Microwave oven to blame for mystery signal that left astronomers stumped”: https://www.theguardian.com/science/2015/may/05/microwave-ov...


Ugh... Mom and the microwave were a scourge back in the days of yore when I hosted servers for friends in various games. I was hardwired into the router as "the keeper of the hardware", but the wireless would just get schlocked every time she heated up her coffee.

And Mom ran on coffee. Lotsa coffee. Sometimes I think she just did it because she didn't hear enough complaints coming out of the speakers.

I rejoiced the day that thing died. She's now got an even beefier one, but it doesn't interfere on the 5 GHz bands at all, and I'm not testing the 2.4 out of respect to the spirit of that ole menace.


Had a similar effect with garage door sensors and TVs which use infrared. During sunset, the sun would line up juust right to blank out the sensors


Reminds me of testing out a pinewood derby track in my backyard for our Cub Scout pack. It had an IR sensor that would detect the cars at the finish line. When someone was standing next to the finish line, it worked flawlessly. Otherwise, it was very flaky and would randomly trigger without the cars tripping it. I pretty quickly surmised it was interference from direct sunlight, so we put up a pop-up shade over it and it worked flawlessly (without someone standing nearby, coincidentally casting a shadow). The other dads were amazed that I figured it out, but it's just one of those things you learn from experience (and some background knowledge).


This happened to me in my first job, and it took me weeks to figure it out. The penny dropped, and I put tape around the thin gap in the casing where the top joined the bottom, and it fixed it immediately.



