
Most residential power is dirty to some extent, and AC/DC converters are supposed to be able to cope with a wide range of input distortions. For the problem to have worked its way through all of the safety components in both the laptop's power brick and its internal electronics, the distortions must have been particularly high amplitude, and at frequencies that couldn't be easily filtered out (very high, very low, or both).

Electricians who aren't particularly well trained or particularly experienced are probably not going to be able to diagnose this kind of problem.

This is really a problem for an electrical engineer.

Unfortunately, in most jurisdictions, engineers aren't allowed to perform these kinds of repairs because they don't necessarily have the certifications to perform work on live wires.

Most electricians can only trace wires, identify visible faults, and replace things that aren't up to building code.


You can buy a $10 tester[0] that will detect open grounds, open neutrals, and most of the other common wiring faults you'd care to think of. It doesn't take an experienced electrician to use; you plug it in and see which lights turn on.

What I think is more likely in this case is the A/C turning off. All electric motors induce a current whose direction opposes the originally applied current (Lenz's Law). This is called back-EMF or counter-EMF, and it's proportional to the rate of change of the magnetic flux. You can see this when you turn on a motor like a vacuum cleaner and observe the lights briefly dim. The same thing happens when a motor turns off.

Additionally, things like vacuum cleaners or fans present relatively little mechanical load, so when they turn off they can coast down: small rate of change, small impact on voltage levels. An A/C or refrigerator compressor presents a relatively large load to its driving motor, so when it shuts off, it shuts off HARD. Big rate of change, big induced voltage spike. I don't know about a small motor like you'd see in a household, but large industrial motors can generate hundreds of volts when turned off. This is usually handled with a circuit called a snubber, but it doesn't generally matter in residential applications, so it's rarely included, as a cost-saving measure.
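For a rough sense of scale, here is a back-of-the-envelope version of the "big rate of change, big spike" point, modelling the motor winding as an ideal inductor; the inductance, current, and switch-off time below are purely illustrative numbers, not measurements of any real compressor:

    \varepsilon = -\frac{d\Phi}{dt} \approx -L \frac{dI}{dt},
    \qquad \text{e.g. } L = 50\,\mathrm{mH},\quad \Delta I = 10\,\mathrm{A},\quad \Delta t = 1\,\mathrm{ms}
    \;\Rightarrow\; |\varepsilon| \approx 0.05 \times \frac{10}{0.001} = 500\,\mathrm{V}.

A snubber works by giving that collapsing current somewhere to go, which keeps dI/dt (and with it the voltage spike) small.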

[0]: https://www.kleintools.com/catalog/electrical-testers/recept...


Hasn't radiation from black holes been observed? It's called Hawking radiation I think.


This is not because the publishers can't afford to spend a bit more, but because they would rather capture every last dollar of profit.

There's an argument to be made that getting some of these details right would improve their profitability. For sure people will be talking about the strange pigs wandering around outside the villages.


'Middle Management' is also not a well-defined term.

For some, managers like the ones you've described would be thought of as front-line managers, or supervisors.

And middle management is the layers, usually many, that separate the front-line managers from the senior executives.

The role of middle management is often to oversee budgets, and be accountable for financial performance in certain geographic regions or business areas.

Middle managers don't employ workers. They employ other managers. Lots of these kinds of jobs will probably be made so simple that entire layers of management will just be removed.

Some examples from chain retail might be:

The franchisee of, say, a McDonald's is in some sense a middle manager. They hire store managers to run the stores in the area they're responsible for, and they're accountable for the financial performance of all the McDonald's locations that they own.

The 'Regional Manager' of some large company. They won't be involved in the day to day activity of the business, but they're often an important cog in the wheel because they have local visibility on the financial performance of the business in the region, and are best positioned to allocate the budget in ways that maximize financial performance. Again, they don't hire workers, they hire other managers.


Isn't Apple very successful with centralised product/tech departments?


I was wondering the same thing. I suspect that Apple has a strong top-down culture that's been set by Steve Jobs, so you likely have a "single-threaded owner" model, regardless of how the org chart really looks. Companies that rely on more distributed models where you have peers that have final say over their product decisions end up with splintering and features that don't align well.


This is a wild oversimplification of what chaos theory actually studies and the statements it makes.

Chaos theory in general states that, for some deterministic systems, small changes in the initial conditions can lead to wildly different, though still fully deterministic, outcomes.

A closer metaphor than the butterfly metaphor is a car on a wet road.

Think of driving your car on a wet road. This is a completely deterministic system; there is nothing we don't know about how cars handle on wet roads. When you take that one corner too fast, however, your car loses traction, and whether it spins out or just fishtails for a bit before straightening out depends a great deal on the speed and angle at which you entered the corner. If you don't know that speed and angle accurately, you don't know whether you can bring the car back under control or not.
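To make the sensitivity concrete, here is a minimal toy sketch in Python, using the logistic map (a standard textbook chaotic system, chosen for illustration rather than anything specific to car dynamics):

    # Sensitive dependence on initial conditions with the logistic map
    # x -> r * x * (1 - x), which is chaotic at r = 4.
    def trajectory(x0, r=4.0, steps=40):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.2000000)   # "entered the corner" at one speed/angle
    b = trajectory(0.2000001)   # a barely different entry
    for i in range(0, 41, 10):
        print(i, round(a[i], 6), round(b[i], 6))
    # Every step is perfectly deterministic, yet after a few dozen
    # iterations the two trajectories bear no resemblance to each other.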

I don't think anybody in the field of Quantum Mechanics doubts that it is possible for Quantum effects to be entirely deterministic. But it may be so sensitive to small changes in initial conditions (which may be as far back as the birth of the universe), that even if we knew the exact deterministic equations to solve for Quantum Mechanical systems, they would never make an accurate prediction.


> I don't think anybody in the field of Quantum Mechanics doubts that it is possible for Quantum effects to be entirely deterministic.

That quantum effects cannot be deterministic is not only something people in the field believe, it is the majority position. The alternative goes under the name of "hidden variable" theory, and hidden variable theories aren't doing so well: https://en.wikipedia.org/wiki/Hidden-variable_theory


In gp's particular case, it is the subclass of hidden variable theories known as pilot wave theories.

All the local ones are dead due to Bell, except superdeterminism I guess (both "sides" don't agree on whether it is local or non-local).

The nonlocal ones live, but have few real supporters.

(The article, unlike the grandparent comment, is closer to superdeterminism)


Bernardo Kastrup took Sabine Hossenfelder to task over this, and to say that she did not like it would be an understatement.


What are you referring to?


They're a philosopher (Kastrup) and physicist (Hossenfelder) who have been having a long-running back-and-forth internet debate about superdeterminism and hidden variables. Google it and you will find plenty of YouTube videos from both of them. It's quite interesting if you're into that sort of thing.


It's the apparent (as it doesn't really happen) collapse of the wave function that seems to transcend determinism. However, that's not intrinsic to the wave function itself; it's only a supervenience of subjective consciousness, as it were. The wave function itself is very much deterministic, and nothing can violate that.


The issue isn't that deterministic models are necessarily untrue, but that it's meaningless to investigate the phenomenon as a deterministic one.

Even if we knew all of the hidden variables, and had the exact deterministic mathematical formula for the universe, the formula might still be so unstable that it would make exactly zero predictions.


You'd think something like "the uncertainty principle" by its name alone would be a pretty good hint.


The name of the "uncertainty" principle is pretty stupid and it has nothing to do with its meaning.

The so-called "uncertainty" is just a trivial property of the Fourier transform. It is the same property which requires a great frequency bandwidth for transmitting a pulse that is short in time.

The fact that certain quantities that appear in quantum mechanics are related by Fourier transforms is something that is completely orthogonal to the interpretation of the wave function amplitudes as probabilities of intrinsically random phenomena or as interaction efficiencies caused by real angles of rotation of the interacting particles/waves, which are unknown due to unknown initial conditions.
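For reference, the relation being invoked is, in the usual conventions (stated from memory, for the variances of a unit-normalised function and of its Fourier transform):

    \sigma_t\,\sigma_\omega \ge \tfrac{1}{2} \quad \text{(the bandwidth theorem for signals)},
    \qquad
    \sigma_x\,\sigma_k \ge \tfrac{1}{2} \;\xrightarrow{\;p = \hbar k\;}\; \sigma_x\,\sigma_p \ge \tfrac{\hbar}{2}.

In that sense the Heisenberg inequality is the Fourier inequality plus the de Broglie relation p = ħk, which is the parent's point.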


Technically you're correct, but you seem to be shortchanging the import of the uncertainty principle being a consequence of the Fourier transform.

That "trivial" property of the Fourier transform fundamentally limits our ability to measure both position and momentum, or time and energy of a particle. Without that aspect of QM we could get effectively unlimited certainty of those various properties. It deeply implies the universe is non-local in nature.


What is the difference between "unknown (and unknowable) initial conditions" and "uncertainty"? You can't go back in time to check the initial conditions.


The uncertainty in the initial conditions has no relationship with the "uncertainty" word used in the phrase "uncertainty principle", which refers to the relationship between the variances of two functions, one of which is the Fourier transform of the other.

I have refrained from using a phrase like "uncertainty in the initial conditions" and I have said "unknown initial conditions", precisely to not imply any connection with "the uncertainty principle", because no such connection exists.

Moreover, "uncertainty" is typically used about the difference between the true value of a physical quantity and its estimated value, but in quantum mechanics problems there are many cases when a value is completely unknown (i.e. all the possible values are equi-probable at the initial time, like the angular coordinate of the position of an electron bound in an atom) and not only uncertain.

Saying "unknown initial conditions" covers such cases.


Max Bohr explained the uncertainty principle in his Nobel laureate speech quite well (and in nontechnical terms) - even though it might be interpreted as some law today, it definitely began as a practical obstacle.

Also, the hidden variable theories seem to be about practicality; for all intents and purposes, because it would be practically impossible to know that such hidden variables exist, they are deemed as non-existent.


> Also, the hidden variable theories seem to be about practicality; for all intents and purposes, because it would be practically impossible to know that such hidden variables exist, they are deemed as non-existent.

Definitely not. Everyone would strongly prefer a model of the universe that was deterministic. Hidden variables don't get ignored just because we don't need them to make predictions. We really really really want them, for their own sake, but we cannot make them work.


> We really really really want them, for their own sake, but we cannot make them work.

Sure we can. Bohmian mechanics showed how back in the 50s.


I think you are folding two things together that are related but not the same. Hidden variables are about the state of an entangled system. The uncertainty principle can be applied to measurements taken of an entangled system, but it can also just refer to two quantities of a single particle, such as its position and momentum.


> Max Bohr

Max Born


I think the description in the article was actually good. In particular, the part where they say:

> This is the metaphor used to describe the unpredictability of chaotic systems: tiny uncertainties in the initial conditions of a system grow and grow until they completely destroy the accuracy of any forecast

This doesn't seem too far off from what you say.


I'm so confused by this change. fgrep and egrep are already just shell scripts that do little more than alias grep -F and grep -E. Why is the GNU team suddenly so worried about these shell scripts?

If the maintenance of these scripts is so hard, they could just deprecate them entirely. It's pretty unlikely that they would stop working or become insecure any time soon, and sysadmins would just source the aliases into their shell environments anyway...


A more layman-friendly way to describe it, without oversimplifying too much, is that these learning models try to group things and apply probabilities to sequences of groupings (a toy sketch follows the examples below).

E.g., a word is a grouping of letters; try to find the sequences of letters with the highest probabilities.

A phrase is a grouping of words, with punctuation marks. Try to find the sequences of words with the highest probabilities.

A sentence is a grouping of phrases. Try to find the highest probability sequences.

A paragraph is a sequence of sentences. And so on and so on.
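Putting those groupings together, a toy sketch of the idea in Python might look like the following word-bigram counter. Real models learn distributed representations rather than counting raw word pairs, but the "score the most likely next item in a sequence" intuition is the same, and the tiny corpus here is purely illustrative:

    # A toy "grouping + sequence probability" model: word bigrams.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def most_likely_next(word):
        # Highest-probability follower of `word` seen in the corpus.
        followers = counts[word]
        nxt, c = followers.most_common(1)[0]
        return nxt, c / sum(followers.values())

    print(most_likely_next("the"))  # e.g. ('cat', 0.25), ties broken arbitrarily
    print(most_likely_next("sat"))  # ('on', 1.0)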

Within very narrow domains (specific writing styles, say technical or legal writing), these models can be very accurate, since the sequencing of words into phrases, phrases into sentences, and sentences into paragraphs is very predictable. People call this kind of predictable sequencing a 'style', and it aids us in understanding text more quickly. More generally, across all domains, it's much harder to accurately predict these sequences, because the AI has to identify the 'style' of a text purely from the text itself. No context surrounding the text is given to the AI, and so it guesses.

For example:

A political press release will be written in one style of writing, and a company marketing press release will be written in a slightly different style of writing. As humans, we can easily distinguish between what is commercial marketing and what is political, because we are given that information upfront. In Latin (the language of choice for some mathematicians and logicians, for historical reasons), we have the information 'a priori'. A learning algorithm isn't given that information up front, and must determine only from the text itself whether it is more likely to be a marketing release selling some product, and therefore it should adopt a certain language style, or a political release selling an ideology, and therefore should adopt a slightly different language style.

When we don't know the right answer, and have no way to determine it, the solution that most computers are programmed to adopt is a minimax solution, i.e., minimise the maximum possible error. It does this by sort of mixing and matching both marketing and political styles.

When a human reads it, sometimes it looks very strange and funny. Usually this is because it has some distinguishing feature that we can immediately recognise as placing it as either a political or a marketing document, e.g., a company name, a political party, a corporate or political letterhead, a famous person's name, etc. The computer naturally doesn't know who Donald Trump is, since we haven't taught it who or what a Trump is, so it doesn't give it any precedence over any other word on the page. Actually, in the case of Donald Trump, I bet if you took the dates off of all of his tweets, even humans would have a hard time distinguishing whether they were political or commercial in nature.


More likely, I think it's that the first-principles approach they have used to find this optimisation has applications in domains far outside of optimising matrix multiplications.

Only because the Google paper crossed their desk did they even think to try it on matrix multiplications.

And then, more or less by luck, it happened to turn out to be slightly better. If it had produced exactly the same algorithm as the machine-learning approach, it would have been less interesting.

The speed of the response makes me feel that the authors are extremely good at their domain, and that finding this algorithm was a reasonably trivial problem for them. The slowness of the follow-up paper, I believe, is because the authors understand that their mathematical domain is wildly far outside of what most people can easily understand, so they're going to take their time to write it up well, instead of dropping a whole bunch of maths definitions that nobody else in the world understands.


The authors were already working on other small-matrix optimizations. They have a 2020 preprint about choosing good pivots for up to 8x8 matrices.

https://arxiv.org/abs/2006.01623


I'd say they're more algebraic representations of regular geometries.

Algebraic Geometry as a mathematical field is interested in solving highly non-trivial geometric problems (think of problems from differential geometry, functional analysis, etc.) using tools from Abstract Algebra (think Galois fields, Lie groups, etc.).

It forms a bridge between traditional geometry and abstract algebra, and allows insights from one field of mathematics to be applied to the other. As such, it allows practitioners skilled in these tools to make many useful inferences about incredibly complicated systems.

It's also incredibly dense, in part because many of the tools of algebra are incredibly involved, but also in part because defining an algebraic object in a way that is equivalent to a geometric object sometimes requires a fairly complicated definition.


Algebraic geometry existed before Grothendieck lol. It's still algebraic geometry even if it looks boring next to etale cohomology of infinity stacks or whatever.


Even before Grothendieck's scheme theory, algebraic geometry was already pretty focused on varieties defined by polynomial equations, polynomial rings, and other such constructions.

The heart of the domain is still using abstract algebraic arguments to solve geometric problems.

E.g., Euclid's method for finding the midpoint of a line segment is to draw two circles centred at the two endpoints, each with radius equal to the length of the segment. The straight line that passes through the two intersection points of the circles also passes through the midpoint of the segment.

This is the same as saying the midpoint lies on the line through the intersection of one algebraic variety (a circle centred at one endpoint) with another algebraic variety (a circle centred at the other endpoint).
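Concretely, with coordinates chosen purely for illustration, put the endpoints at (0, 0) and (d, 0); the two circles and the subtraction of their defining equations give:

    x^2 + y^2 = d^2, \qquad (x - d)^2 + y^2 = d^2
    \;\Rightarrow\; x^2 - (x - d)^2 = 0
    \;\Rightarrow\; 2dx - d^2 = 0
    \;\Rightarrow\; x = \tfrac{d}{2}.

The line x = d/2 is exactly the line through the two intersection points, and it passes through the midpoint (d/2, 0): a purely algebraic manipulation that reproduces Euclid's compass construction.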

You don't need schemes, or projective curves, or local rings to prove it.

