I'm glad this article included an example of what happens to the author's eyes when they read text in dark mode. I get the same afterimages, and it's incredibly disorienting when it happens; dark mode makes it far worse than usual. As an aside, does anyone know if that effect has a name?
Yes, I'm glad to see this post, because I had felt like I was alone with this ‘symptom’.
I can’t bear dark mode; it gives me disorientation and nausea.
I wasn’t able to explain this to my ophthalmologist.
Douyin is the Chinese TikTok equivalent. China isn't opposed to the concept of short form video, they just want to segregate Chinese users into their own app
I think they're trying to say that you don't respond to bad behavior (China banning apps) with your own bad behavior (US banning apps). If America is opposed to the way China handles social media then we shouldn't seek to emulate them
> I think they're trying to say that you don't respond to bad behavior (China banning apps) with your own bad behavior (US banning apps).
That rests on an assumption I don't share. I don't consider using a ban as a lever to ensure the US controls TikTok to be bad behavior. America has a vested interest in snooping on and having direct control over popular mediums of communication. Giving Chinese ownership access to the methods used (like the physical devices, et al.) is a security issue. It's a cold war game that seems a little sophisticated for this day and age (somehow). The lack of understanding explains a lot of these wandering conversations about tangents.
Couldn't you argue the opposite? That is, if we are so opposed to China then shouldn't we do the opposite of them? I don't think it seems very American to change our policy to be more like the "enemy"
This is like saying, "well sure they invaded us with their military, but we don't want to be like them, so let's not take any military action in response."
Fundamentally, aggressive action as a response is not equivalent to being the initiator of aggression. Hence: turnabout is fair play. If someone punches you economically, it's entirely fair and reasonable to punch them back. It does not make you "just like them" to defend yourself.
No, it's a perfect analogy, you just don't like it. If you actually had a valid point, you'd bother to explain the issue, but you didn't. Telling.
The US isn't censoring it based on content anyway -- in fact, the US government's ability to censor much of anything based on content is severely constrained by the First Amendment. The issue is that the US doesn't like the fact that it's controlled by the PRC. But blocking businesses from a rival nation is a trade issue, not a speech issue.
China is a rival and opponent of the US on the geopolitical stage. It's entirely reasonable to respond to trade restrictions with trade restrictions.
The HHKB variants can be nice keyboards, but they're definitely not for everyone. I have a Leopold FC660C that has the same silenced Topre switches as some of the HHKB variants and they're really nice, but the boards that come with them are quite expensive for what they provide. The HHKB layout can also be a bit annoying to get used to, which is why I use the FC660C, since it has arrow keys.
In practice I find modern custom keyboards to have higher upside than Topre keyboards, since you can build one with your preferred layout and switches. If you get a QMK enabled board you can also fully customize the functionality of the keyboard, which I find immensely useful. Unless you specifically want a 60% keyboard with light tactile switches I wouldn't recommend an HHKB, but if that's what you want it's hard to do better.
Agreed. I like my Leopold FC660C too, but the Topre switches don’t even come close to today’s quiet/lubricated mechanical switches. My custom Python M60 board feels significantly better to type on and is far quieter than the Topre switch board. The Topre switches feel like cheap junk by comparison. I used it for years, but there has just been so much development in this area lately that the Topre switches are outclassed.
On QMK, I disagree. The QMK Bluetooth/USB support still kinda sucks in every recent board I’ve tried, and you can get most of the best QMK functionality on any keyboard using Karabiner anyway.
How can you verify a proof though? Pure math isn't really about computations, and it can be very hard to spot subtle errors in a proof that an LLM might introduce, especially since they seem better at sounding convincing rather than being right.
To code proofs in Lean, you have to understand the proof very well. That doesn't seem very reasonable for someone learning the material for the first time.
The examples in this book are extraordinarily simple, and cover material that many proof assistants were designed to be extremely good at expressing. I wouldn't be surprised if an LLM could automate the exercises in this book completely.
Writing nontrivial proofs in a theorem prover is a different beast. In my experience (as someone who writes mechanized mathematical proofs for a living) you need to not only know the proof very well beforehand, but you also need to know the design considerations for all of the steps you are going to use beforehand, and you also need to think about all of the ways your proof is going to be used beforehand. Getting these wrong frequently means redoing a ton of work, because design errors in proof systems are subtle and can remain latent for a long time.
> think about all of the ways your proof is going to be used beforehand
What do you mean by that? I don't know much about theorem provers, but my POV would be that a proof is used to verify a statement. What other uses are there one should consider?
The issue is that there are lots of ways to write down a statement.
One common example is whether you're going to internalize or externalize a property of a data structure: e.g. represent it with a dependent type, or as a property about a non-dependent type. This comes with design tradeoffs: some lemmas might expect internalized representations only, and some rewrites might only be usable (e.g. no horrifying dependent type errors) with externalized representations. For math in particular, which involves rich hierarchies of data structures, your choice about internalization can impact which structures from your mathematical library you can use, or the level of fragile type coercion magic that needs to happen behind the scenes.
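To make the tradeoff concrete, here's a minimal hypothetical sketch in Lean 4 (the names `nonempty_ext` and `nonempty_int` are made up for illustration): the same trivial fact stated against an externalized length hypothesis versus an internalized subtype. Lemmas proved against one form don't directly apply to the other without some glue.

    -- Externalized: a plain list, with the length constraint carried as a
    -- separate hypothesis wherever it's needed.
    theorem nonempty_ext (l : List Nat) (h : l.length = 3) : l ≠ [] := by
      intro hnil
      subst hnil
      simp at h

    -- Internalized: the constraint is baked into the type via a subtype
    -- (a dependent type), so statements about it no longer mention the length.
    theorem nonempty_int (v : { l : List Nat // l.length = 3 }) : v.val ≠ [] :=
      nonempty_ext v.val v.property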
The premise is to have the LLM put up something that might be true, then have Lean tell you whether it is true. If you trust Lean, you don't need to understand the proof yourself to trust it.
The issue is that a hypothetical answer from an LLM is not even remotely easy to directly put into Lean. You might ask the LLM to give you an answer together with a Lean formalization, but the issue is that this kind of 'autoformalization' is at present not at all reliable.
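To illustrate the division of labor being discussed (a trivial, hypothetical Lean 4 example, not output from any actual LLM run): once a statement is formalized, the kernel does the trusting for you; the fragile step is producing a faithful formal statement in the first place.

    -- Once a statement is formalized, Lean's kernel checks the proof; trusting
    -- the result doesn't require re-reading the argument.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- The risk is in the formalization step: a subtly wrong statement can be
    -- proved while meaning something different from the informal claim.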
Tao says that isn't the case for all of it, and that on the massive collaborative projects he's done, many non-mathematicians did sections of them. He says someone who understands it well needs to do the initial proof sketch and key parts, but that lots of parts of the proof can be worked on by non-mathematicians.
If Tao says he's interested in something being coded in Lean, there are literal teams of people who will throw themselves at him. Those projects are very well organized from the top down by people who know what they're doing, so it's no surprise that they are able to create some space for people who don't understand the whole scope.
This is also the case for other top-profile mathematicians like Peter Scholze. Good luck to someone who wants to put ChatGPT answers to random hypotheticals into Lean to see if they're right; I don't think they'll have so easy a time of it.
Anecdotally I have found this to be the case for the students I tutor. When I introduce a new topic I always start with worked examples, and I find that students are able to learn much more effectively when they have a reference. Poor pedagogy is also one of my biggest gripes with my undergraduate math program, where the professors and textbooks often included too few worked problems and proofs, and the ones they did include were not very useful. What I found especially frustrating was when a worked example solved a special case with a unique approach, and the general case required a much more involved method that wasn't explained particularly well. Differential equations seems to be a particularly bad offender here, since I've had the same issue with the examples in many texts.
> What I found especially frustrating was when a worked example solved a special case with a unique approach, and the general case required a much more involved method that wasn't explained particularly well.
Amusingly, many people think the solution to this is "abandon worked examples and focus exclusively on trying to teach general problem-solving skills," which doesn't really work in practice (or even in theory). That seems to be the most common approach in higher math, especially once you get into serious math-major courses like Real Analysis and Abstract Algebra.
What actually works in practice is simply creating more worked examples, organizing them well, and giving students practice with problems like each worked example before moving them onto the next worked example covering a slightly more challenging case. You can get really, really far with this approach, but most educational resources shy away from it or give up really early because it's so much damn work! ;)
Interestingly, there have been studies showing that students who are lectured to feel like they’ve learned more, and self-report that they have, while students learning the same material in self-guided labs report feeling like they’ve learned less but perform better on assessments.
This description confounds two independent variables: "active vs passive learning" and "direct vs unguided instruction."
The studies you refer to are demonstrating that active/unguided is superior to passive/direct.
But the full picture is that active/direct > active/unguided > passive/direct. (I didn't include passive/unguided here because I'm not sure it's possible to create such a combination.)
Other studies -- that only manipulate one variable at a time -- support this big picture.
Well, sure. But very few formal educational settings are purely active/unguided. Unfortunately passive/direct is much more common.
To me though the more interesting result isn’t really about pedagogy, it’s that people’s (undergrad physics students, in the case of the specific study I’m thinking of) subjective impressions of the effectiveness of instruction are unreliable.
Anecdotally, teaching in a manner that forces students to discover a key or difficult concept on their own is a way to weed out those who "can" from those who "will", if you get my meaning.
My undergraduate math professor was like that, and he was pretty brutal, but by the end of the 2nd semester it was pretty clear who was going to end up majoring in something to do with math and who wasn't. From a pure selection standpoint, this makes sense to me. On the other hand, for those who "won't" it can make the experience pretty miserable.
Imo, the need to weed out is counterproductive from a societal perspective. Imagine if in military conscription they weeded out everybody who didn't want to be there. They'd probably fall short of their service requirements quickly. In the same way, if America wants to bridge the supposed gap in math from Asia, it's not a matter of who is willing, it's a matter of whether they can teach or not.
Well, in a conscription scenario you don't weed out everyone who doesn't want to be there. That's ... what makes it conscription. In the AVF (All Volunteer Force) we do in fact weed out people who don't want to be there, and the relative pressure of that weed-out process increases the more elite the unit is that we're talking about. The state of military recruiting in the United States is the worst it's ever been, or close to it, but that is unrelated to that process described above since the problem is upstream from basic training.
I'm probably confusing people with my use of the word "will" in this context, since it can mean several things in English. What I'm really saying is that those who have the actual aptitude "will derive complex concepts on their own, and will be likely to pursue further their math education". It's already difficult to identify those people when they're young enough, and even harder if you teach math in a "lowest common denominator" approach, which is essentially what the American strategy is (with notable exceptions that probably just prove the rule).
Lots of fields have a 'weed out' class early on. I majored in CS, and it essentially weeded out all those that had no real interest in the field but had thought they'd like it because it paid well or they wanted to make video games. Those sorts of classes don't necessarily need to be overly hard, because the people who 'get it' won't struggle much and those who don't will find it hard regardless. Although I imagine in math specifically, even those who get it might need to struggle a bit.
I think there's a lot to unpack here. Teaching someone how to write a for loop is easy and can be done in a straightforward way, but teaching them when it's best to use, and getting them to understand why, is different. Even further, getting them to evaluate novel situations, apply it correctly, and be able to communicate why they did it that way is another thing.
At what point would you say they've actually acquired the skill?
+100 for "Please, Just Work More Examples, I Swear It Helps".
I don't have nearly as impressive a backstory as you do here, but I did apply spaced repetition to my abstract algebra class in my math minor a few years back. I didn't do anything fancy, I just put every homework problem and proof into Anki and solved/rederived them over and over again until I could do so without much thinking.
I ended up walking out with a perfect score on the 2 hour final - in about 15 minutes. Most of the problems were totally novel things I had never seen before, but the fluency I gained in the weeks prior just unlocked something in me. A lot of the concepts of group actions, etc. have stuck with me to this very day, heavily informing my approach to software engineering. Great stuff.
Great story! This is exactly the kind of thing that we see all the time at Math Academy, that I saw in the classes I taught, and that many MA users report experiencing -- but unfortunately, lots of people find it counterintuitive and have a hard time understanding/believing it until they experience it firsthand.
Eh, I think that’s setting students up for failure once they enter graduate studies or face more open-ended problems that don’t come from a problem bank. Productive struggle is a perfectly valid approach to teaching; it’s just less pleasant in the moment (since the students are expected to struggle).
This is true (i.e., the struggle is productive) only if the struggle allows for students to develop the intuition of the subject required for synthesis.
Even then, before you get to that point, you have to prime students for it. Throwing them into the deep end without teaching them to float first will only set them up to drown. This does typically mean lots of worked motivating (counter-)examples at the outset.
It's a big reason why we spent so long on continuity and differentiability in my undergraduate real analysis class and why most of the class discussion there centered on when a function could be continuous everywhere but nowhere differentiable. Left to our own devices and without that guidance, our intuition would certainly be too flawed for such a fundamental part of the material.
I would argue that understanding the pathological behavior in something is critical to developing an accurate intuition for it, yes. These cases don't show up often, but when it comes to having a good sense of smell for when part of a proof is flawed, it really helps to have that olfactory memory.
Aside from that, understanding counterexamples teaches you to understand the definitions and theorems better. Which matters for proving future results.
> Productive struggle is a perfectly valid approach to teaching
Is this supported by research though? As I understand it, for students (not experts), empirical results point in the opposite direction.
One key empirical result is the "expertise reversal effect," a well-known phenomenon whereby instructional techniques that promote the most learning in experts promote the least learning in beginners, and vice versa.
It's true that many highly skilled professionals spend a lot of time solving open-ended problems, and in the process, discovering new knowledge as opposed to obtaining it through direct instruction. But I don't think this means beginners should do the same. The expertise reversal effect suggests the opposite – that beginners (i.e., students) learn most effectively through direct instruction.
Here are some quotes elaborating on why beginners benefit more from direct instruction:
1. "First, a learner who is having difficulty with many of the components can easily be overwhelmed by the processing demands of the complex task. Second, to the extent that many components are well mastered, the student will waste a great deal of time repeating those mastered components to get an opportunity to practice the few components that need additional practice.
A large body of research in psychology shows that part training is often more effective when the part component is independent, or nearly so, of the larger task. ... Practicing one's skills periodically in full context is important to motivation and to learning to practice, but not a reason to make this the principal mechanism of learning."
2. "These two facts -- that working memory is very limited when dealing with novel information, but that it is not limited when dealing with organized information stored in long-term memory -- explain why partially or minimally guided instruction typically is ineffective for novices, but can be effective for experts. When given a problem to solve, novices' only resource is their very constrained working memory. But experts have both their working memory and all the relevant knowledge and skill stored in long-term memory."
Intuitively, too: in an hour-long session, you're going to make a lot more progress by solving 30 problems that each take 2 minutes given your current level of knowledge, than by attempting a single challenge problem that you struggle with for an hour. (This assumes those 30 problems are grouped into minimal effective doses, well-scaffolded & increasing in difficulty, across a variety of topics at the edge of your knowledge profile.)
To be clear, I'm not claiming that "challenge problems" are bad -- I'm just saying that they're not a good use of time until you've developed the foundational skills that are necessary to grapple with those problems in a productive and timely fashion.
> What I found especially frustrating was when a worked example solved a special case with a unique approach, and the general case required a much more involved method that wasn't explained particularly well.
That was the bane of my University degree. "And, since our function f happens to be of this form, all the difficult stuff cancels out and we're left with this trivial stuff" and then none of the problems have these "happy accident" cancellations and you're none the wiser on how to proceed.
The statistics book we used was an especially egregious offender in this regard.
I think often the reason this happens is that the chosen examples[1] are just more advanced topics in disguise. Eg maybe you are given some group with a weird operation and asked to prove something about it, and the hidden thing is that this is a well-known property of semi-direct products and that’s what the described group is.
Two I remember were:
- In an early geometry course there was a problem to prove/determine something described in terms of the Poincaré disc model of the hyperbolic plane. The trick was to convert to the upper half-plane model (where there was an obvious choice for which point on the boundary of the disc maps to infinity in the uhp). There I was annoyed because it felt like a trick question, but the lesson was probably useful.
- In a topology course there was a problem like ‘find a space which deformation-retracts to a möbius strip and to an annulus’. This is easy to imagine in your head: a solid torus = S1*D2 can contain an embedding of each of those spaces into R3. I ended up carefully writing those retractions by hand, but I think the better solution was to take the product space and apply some theorems (I think I’m misremembering this – the product space works for an ordinary retraction, but for the deformation retraction I don’t think it works. I guess both retract to S1 and you could glue the two spaces together along that, or use the proof that homotopy equivalence <=> deformation retracts from a common space, but I don’t think we had that). I felt less annoyed at missing the trick there.
[1] I’m really talking about exercises here. I don’t really recall having problems with the examples.
> Differential equations seems to be a particularly bad offender here
I think that’s a problem with differential equations as a subject. The only ones we know how to solve are special cases. Solving them in general is an open problem.
My education was basically made of worked examples. What was missing was WHY they were worked THAT way. The thought process of the person solving the problem was missing. Yes, intermediate steps were all there - and still no answer to "why go in THAT direction - from the onset?".
It's not "problem solving", it's the deeper understanding of a discipline which makes the experienced practionner go one way rather than the other.
The downside of teaching using worked examples is that it teaches only one problem solving skill to students: mimicking.
Many students will look only at examples in the textbook and happily ignore definitions, theorems, and proofs. They don't know whether the strategy they picked works, only that it worked on a similar looking problem.
Sure, when (good) teachers explain the example they do go through the effort of referring to the definitions and theorems, but that is not necessarily what the students remember.
They skip definitions, theorems, and proofs for good reason too. Students are spending a lot of time and money they don't have to get a degree, and they have an obligation to work efficiently. With a limited amount of time and energy they would be actively hurting themselves by focusing on things that aren't graded. In general I've found that teachers grade quite harshly on things you could have memorized, and find little value in understanding or intuition.
Your grades don't matter as much as your understanding does, for most people. In some cases your grades will be the deciding factor, but in most cases it is worth more to you to get a better understanding and worse grades.
There's a significant contingent of people who find typing numbers with the number row cumbersome (me included). If, for whatever reason, you need to type a lot of numbers, it can be quite inconvenient to not have a numpad.
It always looks stupid to me when I see those people typing their password and it has numbers in it.
Switching their hands between the alphabet and number keys, sometimes multiple times, with bonus points for not knowing the numlock status and having to do it all over again.
With the amount of time all of us spend on keyboards, it just seems lazy/inefficient to not learn to touchtype, including the numbers and at least the common symbols.
Can someone explain the last paragraph? The author gives the example of trying to find a 70 Ohm resistor and how the 68 Ohm and 75 Ohm are a little off. They conclude by saying you should just use 33 and 47 Ohm resistors, but wouldn't that give a resistance of 80, not 70?
I think the author maybe doesn’t know how to order 1% resistors from Digi-Key??
My intro circuit analysis prof gave these wise words to live by: “If you need more than one significant digit, it isn’t electrical engineering, it’s physics”
Fun fact: AFAIK component values are often distributed in a bimodal way, because +-5% often means that the +-1% parts have already been sorted out to sell as a different, more expensive batch. At least it used to be that way; I wonder if it is still worth doing in production. So I guess one could also measure to average things out; otherwise the errors will stay the same, relatively speaking.
If you can measure them with that precision, would it make sense to sell them with that accuracy too? So if you tried to manufacture a resistor at 68kΩ +/- 20%, and it actually ended up at 66kΩ +/- 1%, couldn't you now sell it as an E192 product which according to TFA are more expensive?
Selling with different tolerances only makes sense to me if the product can't be reliably measured to have a tighter tolerance, perhaps if the low-quality ones are expected to vary over their life or if it's too expensive to test each one individually and you have to rely on sampling the manufacturing process to guess what the tolerances in each batch should be.
Resistors with worse tolerances may be made out of cheaper, less refined wire, which will vary in resistance more with temperature. The tolerance and resistance are good over a temperature range. For more reading, look up "constantan".
Most resistors don't use wire, but a film of carbon (cheaper, usually the E12 / 5% tolerance parts) or metal (E24, or 1% and tighter tolerances) deposited onto a non-conducting body. Wire means winding into a coil, which means increased inductance.
I suspect in most cases the tolerances are a direct result from the fabrication process. That is: process X, within such & such parameters, produces parts with Y tolerance. But there could be some trimming involved (like a laser burning off material until component has correct value). Or the parts are measured & then binned / marked accordingly.
Actual wire is used for power resistors, like rated for 5W+ dissipation. Inductance rarely matters for their applications.
Accuracy depends on the technology used. Carbon comp tends to have less accuracy than carbon film. And it's not true that higher accuracy is always better.
Some accurate resistors are essentially wound coils and have high inductance, and will also induce and pick up magnetic interference. Stuff like that often matters a lot.
Thanks, always good to remember that the tolerance of a resistor is not just a manufacturing number but also defined over the specified temperature range.
Depends on where in the production line they are being tested. If they are tested after they've had their color bands applied, then you wouldn't be able to sell one as a 66kΩ, since the markings would be for a 68kΩ.
The issue is probably volume. Very few applications need a resistor that's exactly 66kΩ, but a lot of applications need resistors that are in the ballpark of 68kΩ (but nobody would really notice if some 56kΩ resistors slipped in there).
For every finely tuned resonance circuit there are a thousand status LEDs where nobody cares if one product ships with a brighter or dimmer LED.
Unless the components are expensive, that proposition seems dubious. It's much more economical to take a process that produces everything within 12% centered on the desired value and sell it as ±20%. 100% inspection is generally to be avoided in mass production, except in cases where the process cannot reach that capability, chip manufacturing being the classic example. For parts that cost a fraction of a penny, nobody is inspecting to find the jewels in the rough.
Actually it seems to really be the case that multimodal distributions are rather the result of different batches not having the same mean. So it is rather the effect of systematic error [1]. I guess it is really a myth (we did low cost RF designs back in 2005 and had some real issues with frequencies not aligning due to component spread, and I really remember that bimodality problem, but I guess Occam's razor should have told me that it makes no economical sense).
disclaimer: it will be a relatively small effect for just two resistors
aleph's comment is also correct. the bounds they quote are a "worst-case" bound that is useful enough for real world applications. typically, you won't be connecting a sufficiently large number of resistors in series for this technicality to be useful enough for the additional work it causes.
Note that tolerance and uncertainty are different. Tolerance is a contract provided by the seller that a given resistor is within a specific range. Uncertainty is due to your imprecise measuring device (as they all are in practice).
You could take a 33k Ohm resistor with 5% tolerance, and measure it at 33,100 +/- 200 Ohm. At that point, the tolerance provides no further value to you.
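Spelling that out with rough numbers (a trivial Python sketch, using the values above):

    r_nominal = 33_000
    tol_low, tol_high = r_nominal * 0.95, r_nominal * 1.05   # 31,350 .. 34,650: the seller's contract
    meas_low, meas_high = 33_100 - 200, 33_100 + 200         # 32,900 .. 33,300: your measurement
    # The measured interval sits entirely inside the tolerance band, so the
    # tolerance tells you nothing further about this particular part.
    print((tol_low, tol_high), (meas_low, meas_high))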
It’s not nearly that simple :) Component values change with environmental factors like temperature and humidity. Resistors that have a 1% rating don’t change as much over a range of temperatures as 5% or 10% components do. This is typically accomplished by making the 1% resistors using different materials and construction techniques than the looser-tolerance parts. Just taking a single measurement is not enough.
If values are normally distributed, random errors accumulate with the square root of the number of components. Four components in series have 2x the uncertainty overall, etc., but if you divide that doubled uncertainty by four times the resistance, it's half the percentage uncertainty as before. (I avoid using the word "tolerance" because someone will argue whether it really works this way.)
In reality, some manufacturers may measure some components, and the ones within 1% get labeled as 1%; then it may be that when you're buying 5% components, all of them are at least 1% off, and the math goes out the window since it isn't a normal distribution.
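If it helps, here's a rough Monte Carlo sketch of the square-root behavior in Python, assuming independent, normally distributed parts (which, as noted above, binned real-world parts may not be):

    # Relative spread of n nominally identical resistors in series, assuming
    # independent Gaussian variation around the nominal value.
    import random
    import statistics

    def series_rel_spread(nominal, rel_sigma, n, trials=50_000):
        totals = [sum(random.gauss(nominal, rel_sigma * nominal) for _ in range(n))
                  for _ in range(trials)]
        return statistics.stdev(totals) / statistics.mean(totals)

    for n in (1, 4, 16):
        print(n, round(series_rel_spread(10_000, 0.01, n), 5))
    # Roughly 0.01, 0.005, 0.0025: halves each time the count quadruples.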
In the article's example, I'd prefer 2 resistors in parallel. That way the result is less dramatic if one resistor were to be knocked off the board or fail.
E.g. one resistor slightly above the desired value, and a much higher value in parallel to fine-tune the combination. Or ~210% and ~190% of the desired value in parallel.
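For example, a quick Python sketch of the arithmetic (values are made up to land near the ~70 Ohm case discussed elsewhere in the thread):

    # Parallel combination: R1*R2 / (R1 + R2)
    def parallel(r1, r2):
        return r1 * r2 / (r1 + r2)

    print(parallel(75, 1000))   # one value slightly high, pulled down by a large one: ~69.8
    print(parallel(147, 133))   # ~210% and ~190% of the 70 Ohm target: ~69.8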
That said: it's been a long time since I used a 10% tolerance resistor. Or where a 1% tolerance part didn't suffice. And 1% tolerance SMT resistors cost almost nothing these days.
This might be why pretty much all LED lightbulbs/fixtures have two resistors in parallel. They're used for the driver chip's control pin, which sets the current to deliver via some specific resistance value.
It's always a small and a large resistor. The higher this control resistance, the lower the driving current.
Cut off the high value resistor to increase the resistance a bit. In my experience this often almost halves the driving current, and cuts up to 30% of the light output (yes, I measured).
Not only are most modern lights too bright to start with anyway, this also fixes the intentional overdriving of the LEDs for planned obsolescence. The light will last pretty much forever now.
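A rough sketch of the arithmetic behind that claim, with made-up values (and assuming the driver sets the LED current roughly proportional to 1/R on the control pin, which is common but chip-specific; check the datasheet):

    def parallel(r1, r2):
        return r1 * r2 / (r1 + r2)

    K = 100.0                    # hypothetical driver constant: I_led ~ K / R_set
    r_small, r_large = 1.2, 1.8  # kOhm; the "small and large" pair (made up)

    i_before = K / parallel(r_small, r_large)  # both resistors on the board
    i_after = K / r_small                      # high-value resistor cut off
    print(i_before, i_after, i_after / i_before)
    # The current drops by the ratio parallel(r_small, r_large) / r_small; how
    # close that gets to "half" depends on how comparable the two values are.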
So I will postulate, without much evidence, that if you link N^2 resistors with average resistance h in a way that would theoretically give you a resistor with resistance h, you get an error that is O(1/N).
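A quick Python simulation of one such construction (N parallel branches, each a series chain of N resistors, which nominally gives back h), assuming independent Gaussian spread, just to sanity-check the O(1/N) scaling:

    import random
    import statistics

    def grid_value(nominal, rel_sigma, n):
        # n parallel branches of n series resistors: nominal value is n*h / n = h
        branches = [sum(random.gauss(nominal, rel_sigma * nominal) for _ in range(n))
                    for _ in range(n)]
        return 1.0 / sum(1.0 / r for r in branches)

    def rel_spread(nominal, rel_sigma, n, trials=10_000):
        vals = [grid_value(nominal, rel_sigma, n) for _ in range(trials)]
        return statistics.stdev(vals) / statistics.mean(vals)

    for n in (1, 2, 4, 8):
        print(n, round(rel_spread(100.0, 0.05, n), 4))
    # Spread shrinks roughly like 1/n: ~0.05, ~0.025, ~0.0125, ~0.006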
> tolerance should actually go down since the errors help cancel each other out.
Complete nonsense. The tolerance doesn't go down, it's now +/- 2x, because component tolerance is the allowed variability, by definition, worst case, not some distribution you have to rely on luck for.
Why do they use allowed variability? Because determinism is the whole point of engineering, and no EE will rely on luck for their design to work or not. They'll understand that, during a production run, they will see the combinations of the worst case value, and they will make sure their design can tolerate it, regardless.
Statistically you're correct, but statistics don't come into play for individual devices, which need to work, or they cost more to debug than produce.
The total tolerance is not +/- 2x, because the denominator of the calculation also increases. You can add as many 5% resistors in series as you want and the worst case tolerance will remain 5%. (Though the likely result will improve due to errors canceling.)
For example, say you're adding two 10k resistors in series to get 20k, and both are in fact 5% over, so 10,500 each. The sum is then 21000, which is 5% over 20k.
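(A trivial numeric check of that ratio argument, in Python:)

    nominal = 10_000
    for k in (2, 5, 10):
        worst_total = k * nominal * 1.05            # every resistor at +5%
        print(k, worst_total / (k * nominal) - 1)   # always 0.05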
The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.
Correct me if I'm wrong, but if your resistor factory has a constant skew making all the resistances higher than their nominal value, a bunch of 6.8K + 6.8K resistors will not on average approximate a 13.6K resistor. It will start converging on something much higher than that.
Tolerances don't guarantee any properties of the statistical distribution of parts. As others have said, oftentimes it can even be a bimodal distribution because of product binning; one production line can be made to make different tolerances of resistors. An exactly 6.8K resistor gets sold as 1% tolerance while a 7K gets sold as 5%.
> Tolerances don't guarantee any properties of the statistical distribution of parts.
That's incorrect. They, by definition, guarantee the maximum deviation from nominal. That is a property of the distribution. Zero "good" parts will be outside of the tolerance.
> It will start converging on something much higher than that.
Yes, and that's why tolerance is used, and manufacturer distributions are ignored. Nobody designs circuits around a distribution, which requires luck. You guarantee functionality by a tolerance, worst case, not a part distribution.
> The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.
That's kind of overstating and understating the issue at the same time. If you have a skewed distribution you might not be able to use the central limit theorem at all.
> If you have a skewed distribution you might not be able to use the central limit theorem at all.
The CLT only requires finite variance. Skew can be infinite and you still get convergence to normality ... eventually. Finite skew gives you 1/sqrt(N) convergence.
Very true, I was writing in terms of absolute value, not % (magnitude is where my day job is). My point still stands: it is complete nonsense that tolerance goes down.
They said it "should" go down, but that another comment saying the worst case is the same is "also correct".
I do not see any "complete nonsense" here. I suppose they should have used a different word from "tolerance" for the expected value, but that's pretty nitpicky!
I'm sorry, but it's incorrect, as stated. It's a false statement that has no relation to reality, with the context provided.
Staying the same, as a percentage, is not "going down". If you add two things with error together, the absolute tolerance adds. The relative tolerance (percentage) may stay the same, or even reduce if you mix in a better tolerance part, but, as stated, it's incorrect.
It's a common misunderstanding, and misapplication of statistics, as some of the other comments show. You can't use population statistics for low sample sizes with any meaning, which is why tolerance exists: the statistics are not useful, only the absolutes are, when selecting components in a deterministic application. In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.
It only stays the same if you have the worst luck.
> You can't use population statistics for low sample sizes with any meaning
Yes you can. I can say a die roll should not be 2, but at the same time I had better not depend on that. Or more practically, I can make plans that depend on a dry day as long as I properly consider the chance of rain.
> In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.
Sounds like they calculated the probabilities incorrectly. Especially because more precise electrical components are cheap. Pretending probability doesn't exist is one way to avoid that mistake, but it's not more correct like you seem to think.
I've repeatedly used a certain word in what I wrote, since it has incredible meaning in the manufacturing and engineering world, which is the context we're speaking within. It's a word that determines the feasibility of a design in mass production, and a metric for whether an engineer is competent or not: determinism. That is the goal of a good design.
> It only stays the same if you have the worst luck.
And, you will get that "worst luck" thousands of times in production, so you must accommodate it. Worse, as others have said, the distributions are not normal. Most of the << 5% devices are removed from the population and sold at a premium. There's a good chance your components will be close to +5% or -5%.
> Yes you can. I can say a die roll should...
No you cannot. Not in the context we're discussing. If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision. It would be ridiculous because you know the worst case, so you can accommodate it in your design. You don't build something within the failure point (population statistics). You don't build something at the failure point (tolerance), you make the result of the tolerance negligible in your design.
> Sounds like they calculated the probabilities incorrectly.
Or, you could look at it as a poorly engineered system that couldn't accommodate the components they selected, where changing the values of some same-priced peripheral components would have eliminated it completely.
Relying on luck for a device to operate is almost never a compromise made. If that is a concern, then there's IQC or early testing to filter out those parts/modules, to make sure the final device is working with a known tolerance that the design was intentionally made around.
Your perspective is very foreign to the engineering/manufacturing world, where determinism is the goal, since non-determinism is so expensive.
> If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision.
Now this is complete nonsense. Lots of production processes do that. It depends on the cost of better tooling and components, and the cost of testing.
And... the actual probabilities! You're right that you can't assume a normal distribution. But that wouldn't matter if this was such a strict rule because normal distributions would be forbidden too.
Determinism is a good goal but it's not an absolute goal and it's not mandatory. You are exaggerating its importance when you declare any other analysis as "complete nonsense".
> since non-determinism is so expensive.
But your post gives off some pretty strong implications that you need a 0% defect rate, and that's not realistic either. There's a point where decreasing defects costs more than filtering them. This is true for anything, including resistors. It's just that high quality resistors happen to be very cheap.
Please remain within the context we're speaking in: the final design, not components. When manufacturing a component, like a resistor or chip, you do almost always have a normal distribution. You're making things with sand, metal, etc. Some bits of crystal will have defects, maybe you ended up with the 0.01% in your 99.99% purity source materials, etc. You test and bin those components so they fall within certain tolerances, so the customer sees a deterministic component. You control the distribution the customer sees as much as possible.
Someone selecting components for their design will use the tolerance of the component as the parameter of that design. You DO NOT intentionally choose a part with a tolerance wider than your design can accommodate. As I said, if you can't source a component within the tolerance you need, you force that tolerance through IQC, so that your final design is guaranteed to work, because it's always cheaper to test a component than to test something that you paid to assemble with bad parts. You design based on a tolerance, not a distribution.
> Determinism is a good goal but it's not an absolute goal and it's not mandatory.
As I said, choosing not to be deterministic, by choosing a tolerance your design can't accommodate, is rare, because it's baking malfunction and waste into the design. That is sometimes done (as I said), but it's very rare, and abso-fucking-lutely never done with resistor tolerance selection.
> But your post gives off some pretty strong implications that you need a 0% defect rate, and that's not realistic either.
> There's a point where decreasing defects costs more than filtering them.
No, defects are not intentional, by definition. There will always be defects. A tolerance is something that you can rely on, when choosing a component, because you are guaranteed to only get loss from actual defects, with the defects being bad components outside the tolerance. If you make a design that can't accommodate a tolerance that you intentionally choose, it is not a defect, it's part of the design.
0% defect has nothing to do with what I'm saying. I'm saying intentionally choosing tolerances that your design can't accommodate is very very rare, and almost always followed by IQC to bin the parts, to make sure the tolerance remains within the operating parameters of the design.
I feel like this has led us in circles. I suggest re-reading the thread.
Maybe you could give an example where you see this being intentionally done (besides Wish.com merchandise, where I would question if it's intentional).
To give an example, let's say you've got two resistors of 100 Ohm +/- 5%. That means each is actually 95-105 Ohm. Two of them is 190-210 Ohm. Still only a 5% variance from 200 Ohm.
Tolerance is a specification/contractual value - it's the "maximum allowable error". It's not the error of a specific part, it's the "good enough" value. If you need 100 +/- 5%, any value between 95 and 105 is good enough.
Using two components may cancel out the error, as you describe. On average, most of the widgets you make by using 2 resistors instead of one may be closer to nominal, but any total value between 95 and 105 would still be acceptable, since the tolerance is specified at 5%.
To change the tolerance you need to have the engineer(s) change the spec.