
Sorry, I should have responded to this comment, but I wrote a separate response in the parent thread. I didn't feel the pdf / paper was really trying to mimic spiking biological networks in anything but the loosest sense (there is a sequence of activations and layers of "neurons"). I think the major contribution is just using the dot product on output transpose output; the rest is just diffusion / attention on inputs. It's conceptually a combination of "input attention" and "output attention" using a kind of stepped recursive model.


In my reading of the paper, I don't feel this is really like biological / spiking networks at all. They keep a running history of inputs and use multi-headed attention to form an internal model of how the past "pre-synaptic" inputs factor into the current output (post-synaptic). This is just a modified transformer (keep a history of inputs, use attention on them to form an output).

Then the "synchronization" is just using an inner product of all the post activations (stored in a large ever-growing list and using subsampling for performance reasons).

But it's still being optimized by gradient descent, except the time step at which the loss is applied is chosen to be the time step with minimum loss, or minimum uncertainty (uncertainty being described by the entropy of the output term).

I'm not sure where people are reading that this is in any way similar to spiking neuron models with time simulation (time is just the number of steps the data is cycled through the system, similar to a diffusion model or how an LLM processes tokens recursively).

The "neuron synchronization" is also a bit different from how its meant in biological terms. Its using an inner product of the output terms (producing a square matrix), which is then projected into the output space/dimensions. I suppose this produces "synchronization" in the sense that to produce the right answer, different outputs that are being multiplied together must produce the right value on the right timestep. It feels a bit like introducing sparsity (where the nature of combining many outputs into a larger matrix makes their combination more important than the individual values). The fact that they must correctly combine on each time step is what they are calling "synchronization".

Techniques like this are the basic mechanism underlying attention (produce one or more outputs from multiple subsystems, then dot product to combine).


I would say one weakness of the paper is that they primarily compare performance with an LSTM (a simpler recurrent model), rather than with similar attention / diffusion models. I would be curious how well a model that just has N layers of attention in/out would perform on these tasks (using a recursive time-stepped approach). My guess is performance would be very similar, and the network architecture would also be quite similar (although a true transformer is a bit different from the input attention + U-Net they employ).


The time complexity of a large matrix multiplication is still much higher than that of a Fourier-based approach; for large matrices the FFT has superior performance.


Exactly this. I was thinking exactly like GP, but I've been doing a large amount of benchmarking on this, and the FFT rapidly overcomes direct convolution with cuDNN, cuBLAS, CUTLASS... I think I'd seen a recent PhD thesis exploring and confirming this. NlogN beats N^2 or N^3 quickly, even with tensor cores. At some point complexity overcomes even the best hardware optimizations.

And the longer the convolution, the more the matrices look tall-skinny (less optimized). Also, you have to duplicate a lot of data to make your matrix Toeplitz/circulant so it fits into matmul kernels, of which convolution is a special case...


> for large matrices the FFT has superior performance.

how large? and umm citation please?


You can mostly compute it for yourself or get an asymptotic feeling. Matmul is N^3, even with tensor cores eating a large part of it (with diminished precision, but OK), and FFT-based convolution is NlogN mostly. At some point - not that high - even the TFLOPS available on tensor cores can't keep up. I was surprised by how far cuDNN will take you, but at some point the curves cross: matmul-based convolution time keeps growing at a polynomial rate while FFT-based stays mostly linear (until it gets memory bound), and even that can be slashed down with block-based FFT schemes. Look up the thesis I posted earlier; there's an introduction in it.
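To make the crossover concrete, here's a minimal CPU-side sketch of the two approaches (illustrating the O(N^2)-vs-O(NlogN) cost of direct vs FFT-based 1-D convolution, not a tensor-core benchmark; sizes are arbitrary):

    import numpy as np

    def conv_direct(x, h):
        # Direct 1-D convolution: O(N*M) multiply-adds.
        return np.convolve(x, h)

    def conv_fft(x, h):
        # Convolution theorem: pointwise product in the frequency domain.
        n = len(x) + len(h) - 1          # full linear-convolution length
        X = np.fft.rfft(x, n)            # zero-padding makes the circular
        H = np.fft.rfft(h, n)            # convolution equal the linear one
        return np.fft.irfft(X * H, n)    # O(n log n) overall

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)
    h = rng.standard_normal(4096)
    assert np.allclose(conv_direct(x, h), conv_fft(x, h))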


> Matmul is N^3

Are we talking about matmul or conv? Because no one, today, is using matmul for conv (at least not GEMM - igemm isn't the same thing).

Like I said - the arms race has been going for ~10 years and everyone already discovered divide-and-conquer and twiddle factors a long time ago. If you do `nm -C libcudnn.so` you will see lots of mentions of Winograd, and it's not because the cuDNN team likes grapes.

> Look up the thesis I posted earlier; there's an introduction in it.

There are a billion theses on this, including mine. There are infinite tricks/approaches/methods for different platforms. That's why saying something like "FFT is best" is silly.


You ask how large and for a citation, and when given one with sizes, you say you already have a billion of them. The article is about long convolutions (Winograd being still a matmul implementation for square matrices, a pain to adapt to long convolutions, and still N^2.3), and given comparisons of 10-years-optimized (as you say) cuDNN against very naive, barely running FFT-based long convolutions, it shows that actual big-O complexity matters at some point - even with the best hardware tricks and implementations.

I don't know what more to say. Don't use FFT-based convolution for long convolutions if it doesn't work for your use cases, or if you don't believe it would or should. And those of us who have benchmarked against SOTA direct convolution and found that FFT-based convolution worked better for our use cases will keep using it, and talk about it when people ask on forums.


> I don't know what more to say.

Do you understand that you can't magic away the complexity bound on conv and matmul by simply taking the FFT? I know that you do, so given this very, very obvious fact, there are only two options for how an FFT approach could beat some conventional kernel XYZ:

1. The fft primitive you're using for your platform is more highly optimized due to sheer attention/effort/years prior to basically 2010. FFTW could fall into this category on some platforms.

2. The shape/data-layout you're operating on is particularly suited to the butterfly ops in Cooley-Tukey.

That's it. There is no other possibility, because again, the FFT isn't some magical oracle for conv - it's literally just a linear mapping, right?

So taking points 1 and 2 together, you arrive at the implication: whatever you're doing/seeing/benching isn't general enough for anyone to care. I mean, think about it: do you think the cuDNN org doesn't know about Cooley-Tukey? It just so happens they've completely slept on a method that's taught in every single undergrad signals-and-systems class? So it's not a coincidence that FFT doesn't rate as highly as you think it does. If you disagree, just write your fftdnn library that revolutionizes conv perf for the whole world and collect your fame and fortune from every single FAANG that currently uses cuDNN/cuBLAS.


Yes, this is common knowledge in econ (supply/demand and money supply).

In other currencies, 1M in base notes is not a lot (e.g., 1M dinar). You can just add/remove zeros, but prices have adjusted; those people can't live like "millionaires" on 1M dinar.

There was a time when goods like meat cost pennies; now it's $10 per pound. In those times $10,000 would be a life-altering amount of money; today most people have $10,000 in assets. The price of goods is related to the money supply: they get more expensive if people have more money.

Money has no intrinsic value; it is balanced by whatever goods and services can be bought with it. If you add money but no goods and services, money is worth less (see COVID policy: increased money supply and decreased goods and services).
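One standard way to formalize this is the quantity-theory identity MV = PQ (money supply × velocity = price level × real output); a toy calculation with made-up numbers:

    # Quantity theory of money: M * V = P * Q. Numbers are illustrative only.
    M, V, Q = 1_000, 2.0, 500   # money supply, velocity, real output
    P = M * V / Q               # implied price level: 4.0

    M2 = 2 * M                  # double the money supply...
    P2 = M2 * V / Q             # ...with V and Q fixed, prices double: 8.0
    print(P, P2)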


Thank you! The COVID policy example feels especially relevant because it illustrates the kind of sudden economic shift I'm curious about, rather than just changes in nominal currency values. To clarify my thought experiment: imagine a stable economy where suddenly every person worldwide is gifted $1 million USD. I'm interested in exploring how this kind of immediate influx would impact prices and standards of living, beyond just inflation.


If every person is gifted USD 1M, prices of all things will go up by a lot. Furthermore, prices of necessities will rise by larger fractions, because most people in the world are significantly poorer than the median of this forum.

More generally, gifting every person worldwide the same amount of money seems roughly equivalent to taxing every above-average wealth person a fixed percentage of their surplus and giving every below-average-wealth person a fixed percentage of their deficit...


I think that's not quite true, because rich people mostly keep their wealth in assets, not cash. Stocks will just rise.


Which of my two claims is not true? Why should stock rise more than basic necessities? Of course, all prices would rise a lot. But I think stocks would rise by a smaller factor.


Ah sorry, I wasn't very clear. I was talking about your second claim - that giving people lots of money would redistribute a fixed proportion of wealth from the rich to the poor. My point was that the rich have most of their wealth in stocks and the like, so the redistribution would only affect the cash portion of their wealth, which is quite small.


I see. The "taxation" scheme I had in mind was intended to apply to "any and all wealth", which I didn't make clear. Do you think it works out then?

On a more realistic note, gifting every human 1M would probably completely break the financial system and cause a global recession...


To me this is the clearest case of using lawfare to try to suppress honest competition. Yes, Palworld borrows some creative ideas from Pokemon, but it is so clearly a different game that there's no question about it (if that's not different enough, then what is?).

In this case, I think Nintendo realizes their biggest cash cow is Pokemon, so they _have_ to make a play to suppress any competition. However, this is very bad for the market imo and should be disincentivized somehow.


If they succeed, imagine the chilling effect to game makers who have to pause and think "if I succeed, will I be destroyed because my idea shares some commonalities with other games?"


Yeah, this would be grim, because this is how genres are born.



From what I understand, their lawfare approach is using Japanese software patent law; they have notably not tried to do the same using trademark or copyright infringement.

If they could get away with what you're suggesting, I imagine they would have tried it on digimon decades ago: https://digi-battle.com/Content/CardScans/CP-22.png vs https://archives.bulbagarden.net/media/upload/e/ed/0098Krabb...

There's a laundry list of these comparisons actually https://preview.redd.it/what-if-digimon-adventure-was-pokemo...


No one is looking at Digimon and going "Hey, that's Pokemon!" -- but to the uninitiated, Palworld's creatures look so very much like Pokemon creatures that most people I know have confused it for Pokemon.

There's a HUGE difference between being influenced by something and blatantly copying its inspiration and design. It took Nintendo decades to come up with creature designs, and Palworld less than a year - and they could do that because they likely went through each creature one by one and said, "How can we make it just ever so slightly different?"


No one is looking at Digimon and saying it looks like Pokemon now, but in the 90s they sure as heck were. People's mothers (i.e. the key demographic for "the uninitiated") commonly confused one for the other, even the TV show. This is no longer the case purely because Digimon is far less popular.

Bearing in mind pretty much all Pokemon designs follow the rule of "what if a mythical animal existed in our art style", it is in fact shockingly easy to accidentally ape a Pokemon design just by cartoonifying mythos.

This is incidentally also true of Pokemon, who were accused of ripping off Dragon Quest when the first games started coming out. Does anyone remember that at this point?

https://pbs.twimg.com/media/GEYrZuzXUAAjpkB?format=jpg&name=...


Similarity in design is not necessarily infringement. Invincible is obviously based on the DC comics universe, with Omni-Man very similar to Superman both in appearance and background, and there are basically one-to-one equivalents of most of the Justice League (e.g. Darkwing for Batman). Yet that doesn't mean it infringes on DC's IP.


At this point Marvel and DC actually need each other in order to have a comic book market. The more readers you get at Marvel, the more potential ones you would get in DC (because they cover different social issues) and vice versa.


Invincible is actually published by Image Comics.


It does look bad, but this is about the core design of the creatures, not the gameplay. I think we should be concerned if this meant that making a game clone - the way first-person shooters originated as "Doom clones" - was off the table.


Could you imagine a world where only the first company to develop a game mechanic is allowed to use that mechanic in their games?

That would make games like iPhones: only the smallest change allowed between each generation. Atari would rule the game universe, and the kids would be playing Jumpman in 4K resolution (now with 256 colors and 12 unique levels!).


The creatures are the biggest part of the Pokémon IP. You don't sell plushies and merch of the gameplay.


The entire lawfare approach thus far has been via patent dispute from what I understand, so the gameplay is actually what's being argued.


Agreed, the creatures look similar, but that isn't at issue in the case; it's gameplay mechanics (catching creatures and then using them to battle).


They probably took some inspiration, but you could also argue that there are so many Pokémon nowadays that you can't create any creature from scratch that wouldn't look like one of them.

And some of them in the Pokémon world aren't really that inspired either...


For any aspiring inventors / engineers out there, take a good look at how Hall was treated by GE. He literally invented game-changing tech with every obstacle thrown in his way by management, and was given a 10% raise and a $10 savings bond.

Had he done it on his own, he would have been extremely wealthy, being the supplier of synthetic diamonds to the world (assuming he wouldn't have faced legal challenges from his former employer). He would have also been able to pursue this full time; who knows how much he could have improved the tech.

Just because the powers that be don't think it's a good idea doesn't mean it isn't (it also doesn't mean it is). And if they don't want you building it, for goodness' sake, don't just give them your amazing idea; build it yourself so you can profit when it turns out to be a golden nugget.


There's a similar story for the inventor of the blue LED!

https://en.wikipedia.org/wiki/Shuji_Nakamura#Careers


Love the gist of this, but I just wanted to point out that there's no need to draw the line between buildings and gardening. Anyone who has built a house or done a major remodel knows that it too suffers from fractal complexity. It may not be a nail that becomes a wormhole of complexity (just as it's not simple arithmetic operations in programming), but all kinds of things can crop up. The soil has shifted since the last survey, the pipes from the city are old, the wiring is out of date, the standards have changed, the weather got in the way, the supplies changed in price / specification, etc. Everything in the world is like that; software isn't special in that regard. In fact, software only has such complexity because it's usually trying to model some real-world data or decision. For totally arbitrary toy examples, the code is usually predictable, simple, and clean; the mess starts once we try to fit it to real-world use cases (such as building construction).


I've seen little evidence that the smartest humans are able to dominate or control society as it is now. We have 250 IQ people alive right now, they haven't caused imminent destruction, they've actually helped society. Also gaining power / wealth / influence only seems a little connected to intelligence (see current presidential race for most powerful position in the world, finger on nuke trigger).


I loved the author's example of orcas, which may have more raw intelligence than a person -- still waiting for world domination (crashing yachts doesn't count).


We have zero people with an IQ over 200, due to how IQ is defined: http://www.wolframalpha.com/input/?i=6.67%CF%83
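Back-of-the-envelope for that 6.67σ figure, assuming the usual mean-100, SD-15 scale:

    import math

    # IQ 200 on a mean-100, SD-15 scale is (200 - 100) / 15 ≈ 6.67 sigma.
    z = (200 - 100) / 15
    p = 0.5 * math.erfc(z / math.sqrt(2))   # upper tail of a standard normal
    print(f"P(IQ > 200) ≈ {p:.1e}")         # ≈ 1.3e-11
    print(f"expected among 8e9 people ≈ {p * 8e9:.2f}")  # ≈ 0.1 people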

We also can't reliably test IQ scores over 130, a fact which I wish I'd learned sooner than a decade after getting a test result of 148.

Most humans are motivated to help other humans; the exceptions often end up dead, either in combat (perhaps vs. military, perhaps vs. police), or executed for murder, or as a result of running a suicidal cult like Jim Jones. But not all of them, as seen in the hawks on both sides of the cold war. "Better dead than red" comes to mind.

> Also gaining power / wealth / influence only seems a little connected to intelligence

On the contrary, they are correlated: https://www.vox.com/2016/5/24/11723182/iq-test-intelligence

Trump inherited a lot and reportedly did worse than the market average with that inheritance, so I'm not sure you can draw inferences from him using that inherited money to promote himself as the Republican candidate, beyond the fact that other eligible rich people (i.e. not Musk, because he wasn't born in the US) don't even want to be president — given the high responsibility and relatively low pay, can you say they're wrong? They've already got power and influence. Musk can ban anyone he wants from Twitter; the actual POTUS isn't allowed to do that. And given Musk's diverse businesses, even if he were allowed to run, he'd have a hard time demonstrating that he wasn't running the country to benefit himself (an accusation that has also been made against Trump). Sure, the POTUS has a military, but how many of the billionaires are even that interested in having their own, given what they can do without one?


Why? If you knew 100% someone was guilty, why would you defend them? Isn't the point of the "strong defense" that we haven't established guilt? If someone is guilty, then give them the consequence of the crime, unless you think having some guilty people get away with it (at substantial cost) is in the best public interest?

I understand providing a way to determine guilt and innocence without bias, but if everyone knows someone is guilty, isn't trying them and/or defending them just a waste of resources, with the best outcome being "same as plea bargain" and worst possible outcome being "goes free without punishment, despite having committed crime"?


It's a token effort to catch the false positives. If you put in no effort, you'd start to become overconfident in the guilty assessment, and an increasingly large percentage of defendants would be considered guilty even though more of them would in fact be innocent.

Plus there is more to it than guilty/not guilty and the state should not be permitted to take shortcuts.


Our whole concept of justice enshrined in the US Constitution is that the truth must be ascertained by a jury of your peers from a fair presentation of the evidence and arguments in a court of law. And it is entirely to prevent people from being "rubber stamped" guilty and discarded. At its heart the trial is about ascertaining the truth, with the understanding that the verdict has some powerful gravity for everyone involved.


> If you knew 100% someone was guilty, why would you defend them?

Ask defense attorneys, I think you'll be surprised at the number of "yes" responses. How do you establish that "everyone knows" someone is guilty? You're assuming the conclusion as true and then reasoning from there. Put yourself in the shoes of a defendant who "everyone knows" is guilty. Would you not still want a robust and aggressive defense?


OK, I get what people are saying, but you are describing the common case, where guilt is in question. I'm talking about the parent comment, which is saying "even if someone is known to be guilty they still deserve a defense." You're talking about the common case -- where there is doubt as to guilt and/or someone maintains innocence. Do you also believe someone who admits guilt should still be defended (e.g., hide this fact from the jury and proceed as though they didn't admit to the crime)? The only reason for defending someone is that they may be innocent; there is no advantage to excusing a guilty person. Or do you believe there is an advantage to excusing the guilty?

I also understand why we have our current system, and that there may be false positives. I'm merely commenting on the fact that we should not defend those who are 100% known to be guilty (and I'm not claiming that it's easy to ascertain, but in cases where people plead/confess, maybe it's for the best).


Let's enter fantasy land for a moment and assume that we even can ascertain whether someone is "100% known to be guilty" without a trial. Say we have a crystal ball that we can just ask. That person still is entitled to representation to ensure that all the proper procedure was followed (was the law somehow broken when the crystal ball was consulted? was the crystal ball accurately calibrated and configured? is it a real crystal ball and not a knock-off that always says "guilty"?).

Even if there was no crystal ball! The defendant admitted and signed a confession. He still needs a defense. Was the confession forced or obtained under duress? Did the defendant know what he was confessing to?

And even if there was no crystal ball, the defendant confessed voluntarily and understood fully what he confessed to, cooperatively admitted everything to the point where there is zero chance of reasonable doubt. He still needs a defense. Who is going to ensure that a fair punishment is imposed? Without a defense attorney and proper procedure, what stops the judge from simply imposing the maximum sentence for everyone?


Thanks this changed my mind and I agree with your assessment.


Game theory at work? Someone needs to maintain legacy code for free that hosts thousands of sites and gets nothing but trouble (pride?) in return. Meanwhile, the forces of the world offer riches and power in return for turning to the dark side (or maybe just letting your domain lapse and doing something else).

If security means every maintainer of every OSS package you use has to be scrupulous, tireless, and never screw up for life, I'm not sure what to say when this kind of thing happens other than "isn't that the only possible outcome given the system and incentives, on a long enough timeline?"

Kind of like the "why is my favorite company monetizing now and using dark patterns?" Well, on an infinite timeline did you think service would remain high quality, free, well supported, and run by tireless, unselfish, unambitious benevolent dictators for the rest of your life? Or was it a foregone question that was only a matter of "when" not "if"?


It seems that when proprietary resources get infected, hackers are the problem, but when open source resources get infected, it's a problem with open source.

But there isn't any particular reason why a paid/proprietary host couldn't just as easily end up being taken over / sold to a party intending to inject malware. It happens all the time really.


Yes, the economic problem of reward absence is exclusive to open source, and private software does not have it. They may have others, like an excess of rewards to hackers in the form of crypto ransoms, to the point that the defense department had to step in and ban payouts.


Private software always wants more rewards, leading to identical symptoms.


Private software already has rewards that may be threatened by certain types of behaviour, leading to reduced symptoms.


Hasn't stopped Larry Ellison from laughing all the way to the bank.


private != profitable

As long as business is not going as well as the owners want, the same economic problem exists in private software too - in fact, private companies get acquired all the time, and they get shut down, causing a DoS for many of their clients.

(See for example: https://www.tumblr.com/ourincrediblejourney )

One difference is that closed-source software is usually much less efficient; I cannot imagine "100K+" customers from a commercial org with just a single developer. And when there are dozens or hundreds of people involved, it's unlikely that new owners would turn to outright criminal activity like malware; they are much more likely to just shut down.


Agreed, but if a company is making millions from the security of its software, the incentive is to keep it secure so customers stick with it. Remember the LastPass debacle: a big leak, and they lost many customers...


Directly security-focused products like LastPass are the only things that have any market pressure whatsoever on this, and that's because they're niche products for which security is the only value-add, marketed to explicitly security-conscious people and not insulated by a whole constellation of lock-in services. The relevant security threats for the overwhelming majority of people and organizations are breaches caused by the practices of organizations that face no such market pressure, including constant breaches of nonconsensually-harvested data, which aren't even subject to market pressure from their victims in the first place.


Even for security-related products the incentives are murky. If they're not actually selling you security but a box on the compliance bingo then it's more likely that they actually increase your attack surface because they want to get their fingers into everything so they can show nice charts about all the things they're monitoring.


Aye. My internal mythological idiolect's trickster deity mostly serves to personify the game-theoretic arms race of deception and is in a near-constant state of cackling derisively at the efficient market hypothesis


I wouldn’t point to LastPass as an exemplar…

https://www.theverge.com/2024/5/1/24146205/lastpass-independ...


I didn't, and my point was exactly that it's not a great one, so I think we largely agree here


Only if they have competition. Which long term is not the default state in the market.


> Which long term is not the default state in the market

Why not?


Why did Facebook buy Instagram and Whatsapp? Why did Google buy Waze? Why did Volkswagen buy Audi and Skoda and Seat and Bentley and so on?

Might as well ask why companies like money.


Sure, but the car industry is pretty old and there's plenty of competition still.


I’m not sure about plenty when you actually look at it, but in any case it’s because of government intervention.


Think about it for a second, mate: competition leads to winners, and winners don't like competition; they like monopolies.

And no, the car industry has almost no competition. It's an oligopoly with very few players and a notoriously hard industry to get in.


Oh yeah, corporations are so accountable. We have tons of examples of this.


I think, for some reason, some people still buy Cisco products, so this reasoning doesn't seem to be applicable to the real world.


Real solution?

We’re in a complexity crisis and almost no one sees it.

It’s not just software dependencies of course. It’s everything almost everywhere.

No joke, the Amish have a point. They were just a few hundred years too early.


I 100% agree. I feel a huge part of my responsibility as a software "engineer" is to manage complexity. But I feel I'm fighting a losing battle; most everyone seems to pull in the opposite direction.

Complexity increases your surface area for bugs to hide in.

I've come to the conclusion it's tragedy-of-the-commons incentives: People get promotions for complex and clever work, so they do it, at the cost of a more-complex-thus-buggy solution.

And yes, it's not just software; it's everywhere. My modern BMW fell apart, in many, many ways, at the 7-year mark, for one data point.


Right, no one is incentivizing simple solutions. Or making sure that the new smart system just cannot be messed up.

We need thinner, faster, lighter, leaner everything… because, IDK why, MBAs have decided that reliability will just not sell.


It's a competence crisis not a complexity one.

https://www.palladiummag.com/2023/06/01/complex-systems-wont...


We haven’t gotten smarter or dumber.

But we have exceeded our ability to communicate the ideas and concepts, let alone the instructions of how to build and manage things.

Example: a junior Jiffy Lube high school dropout in 1960 could work hard and eventually own that store. Everything he would ever need to know about ICE engines was simple enough to understand over time… but now? There are 400 oil types, there are closed-source computers on top of computers, there are specialty tools for every vehicle brand, and you can't do anything at all without knowing 10 different do-work-just-to-do-more-work systems. The high school dropout in 2024 will never own the store. Same kid. He hasn't gotten dumber. The world just left him behind in complexity.

Likewise… I suspect that Boeing hasn’t forgotten how to build planes, but the complexity has exceeded their ability. No human being on earth could be put in a room and make a 747 even over infinite time. It’s a product of far too many abstract concepts in a million different places that have come together to make a thing.

We make super complex things with zero effort put into communicating how or why they work a way they do.

We increase the complexity just to do it. And I feel we are hitting our limits.


The problem w/ Boeing is not the inability of people to manage complexity but of management's refusal to manage complexity in a responsible way.

For instance, MCAS on the 737 is a half-baked implementation of the flight envelope protection facility on modern fly-by-wire airliners (all of them, except for the 737). The A320 had some growing pains with this; in particular, it had at least two accidents where pilots tried to fly the plane into the ground, thought the attempt would fail because of the flight envelope protection system, but succeeded and crashed anyway. Barring that bit of perversity right out of the Normal Accidents book, people understand perfectly well how to build a safe fly-by-wire system. Boeing chose not to do that, and they refused to properly document what they did.

Boeing chose not to develop a 737 replacement, so all of us are suffering: in terms of noise, for instance, pilots are going deaf, passengers have their heads spinning after a few hours in the plane, and people on the ground have no idea that the 737 is much louder than its competitors.


Okay, but your entire comment is riddled with mentions of complex systems (flight envelope system?), which proves the point of the parent comment. "Management" here is a group of humans who need to deal with all the complexity of corporate structures, government regulations, etc., while also dealing with the complexities of the products themselves. We're all fallible beings.


Boeing management is in the business of selling contracts. They are not in the business of making airplanes. That is the problem. They relocated headquarters from Seattle to Chicago and now DC so that they can focus on their priority, contracts. They dumped Boeing's original management style and grafted on the management style of a company that was forced to merge with Boeing. They diversified supply chain as a form of kickbacks to local governments/companies that bought their 'contracts'.

They enshittified every area of the company, all with the priority/goal of selling their core product, 'contracts', and filling their 'book'.

We are plenty capable of designing engineering systems, PLMs to manage EBOMs, MRP/ERP systems to manage MBOMs, etc., to handle the complexities of building aircraft. What we can't help is the human desire to prioritize enshittification if it means a bigger paycheck. Companies no longer exist to create a product; the product is becoming secondary and tertiary in management's priorities, with management expecting someone else to take care of the 'small details' of why the company exists in the first place.


> Example: a junior Jiffy Lube high school dropout in 1960

Nowadays the company wouldn't hire a junior to train. They'll only poach already experienced people from their competitors.

Paying for training isn't considered worthwhile to the company because people won't stay.

People won't stay because the company doesn't invest in employees; it only poaches.


Boeing is a kickbacks company in a really strange way. They get contracts based on including agreements to source partly from the contractee's local area. Adding complexity for contracts' and management bonuses' sake: not efficiency, not redundancy, not expertise. Add onto that a non-existent safety culture and a non-manufacturing/non-aerospace-focused management philosophy, grafted on from a company that failed and had to be merged into Boeing, replacing the previous Boeing management philosophy. Enshittification in every area of the company. Heck, they moved headquarters from Seattle to Chicago, and now from Chicago to DC. Prioritizing being where the grift is over, you know, being where the functions of the company are, so that management has a daily understanding of what the company does. Because to management, what the company does is win contracts, not build aerospace products. 'Someone else' takes care of that detail, according to Boeing management. Building those products is now secondary/tertiary to management.

I did ERP/MRP/EBOM/MBOM/BOM systems for aerospace. We have that stuff down. We have systems for this kind of communication down really well. Within a small window we can build an airplane with thousands of parts, with lead times from 1 day to 3 months to over a year for certain custom config options, and with each part's design/FAA approval/manufacturing/installation tracked and audited. Boeing's issue is culture, not humanity's ability to make complex systems.

But I do agree that there is a complexity issue in society in general, and a lot of systems are coasting on the efforts of those that originally put them in place/designed them. A lot of government seems to be this way too. There's also a lot of overhead for overheads sake, but little process auditing/iterative improvement style management.


I see both, incentivized by the cowboy developer attitude.


Ironically I think you got that almost exactly wrong.

Avoiding "cowboyism" has instead lead to the rise of heuristics for avoiding trouble that are more religion than science. The person who is most competent is also likely to be the person who has learned lessons the hard way the most times, not the person who has been very careful to avoid taking risks.

And let me just say that there are VERY few articles so poorly written that I literally can't get past the first paragraph, and an article that cherry-picks disasters to claim generalized incompetence scores my very top marks for statistically incompetent disingenuous bullshit. There will always be a long tail of "bad stuff that happens" and cherry-picking all the most sensational disasters is not a way of proving anything.


I'm predisposed to agree with the diagnosis that incompetence is ruining a lot of things, but the article boils down to "diversity hiring is destroying society" and seems to attribute a lot of the decline to the Civil Rights Act of 1964. Just in case anybody's wondering what they would get from this article.

> By the 1960s, the systematic selection for competence came into direct conflict with the political imperatives of the civil rights movement. During the period from 1961 to 1972, a series of Supreme Court rulings, executive orders, and laws—most critically, the Civil Rights Act of 1964—put meritocracy and the new political imperative of protected-group diversity on a collision course. Administrative law judges have accepted statistically observable disparities in outcomes between groups as prima facie evidence of illegal discrimination. The result has been clear: any time meritocracy and diversity come into direct conflict, diversity must take priority.

TL;DR "the California PG&E wildfires and today's JavaScript vulnerability are all the fault of Woke Politics." Saved you a click.


A more fundamental reason is that society is no longer interested in pushing forward at all costs. It's the arrival at an economic and technological equilibrium where people are comfortable enough, along with the end of the belief in progress as an ideology, or way to salvation somewhere during the 20th century. If you look closely, a certain kind of relaxation has replaced the quest for efficiency everywhere. Is that disappointing? Is that actually bad? Do you think there might be a rude awakening?

Consider: it was this sci-fi-fueled dream of an amazing high-tech, high-competency future that also implied machines doing the labour, and an enlightened future relieving people of all kinds of unpleasantries like boring work, thereby preventing them from attaining high competency. The fictional starship captain, navigating the galaxy and studying alien artifacts, was always saving planets full of humans in a desolate mental state...


My own interpretation of the business cycle is that growth causes externalities that stop growth. Sometimes you get time periods like the 1970s where efforts to control externalities themselves caused more problems than they solved, at least some of the time (e.g., see the trash 1974 model year of automobiles, where they hadn't figured out how to make emission controls work).

I'd credit Reagan's success at managing inflation in the 1980s to a quiet policy of degrowth the Republicans could get away with because everybody thinks they are "pro business". As hostile as Reagan's rhetoric was towards environmentalism, note we got new clean air and clean water acts in the 1980s, but that all got put on pause under Clinton, when irresponsible monetary expansion restarted.


> My own interpretation of the business cycle is that growth causes externalities that stop growth.

The evidence seems to be against this.

https://eml.berkeley.edu/~enakamura/papers/plucking.pdf


> along with the end of the belief in progress as an ideology, or way to salvation somewhere during the 20th century.

That 20th century belief in technological progress as a "way to salvation" killed itself with smog and rivers so polluted they'd catch on fire, among other things.


Thank you for summarizing (I actually read the whole article before seeing your reply and might have posted similar thoughts). I get the appeal of romanticizing our past as a country, looking back at the post-war era, especially the space race with a nostalgia that makes us imagine it was a world where the most competent were at the helm. But it just wasn't so, and still isn't.

Many don't understand that the Civil Rights Act describes the systematic LACK of a meritocracy. It defines the ways in which merit has been ignored (gender, race, class, etc) and demands that merit be the criteria for success -- and absent the ability for an institution to decide on the merits it provides a (surely imperfect) framework to force them to do so. The necessity of the CRA then and now, is the evidence of absence of a system driven on merit.

I want my country to keep striving for a system of merit but we've got nearly as much distance to close on it now as we did then.


>Many don't understand that the Civil Rights Act describes the systematic LACK of a meritocracy. It defines the ways in which merit has been ignored (gender, race, class, etc) and demands that merit be the criteria for success

Stealing that. Very good.


The word "meritocracy" was invented for a book about how it's a bad idea that can't work, so I'd recommend not trying to have one. "Merit" doesn't work because of Goodhart's law.

I also feel like you'd never hire junior engineers or interns if you were optimizing for it, and then you're either Netflix or you don't have any senior engineers.


FWIW, Michael Young, Baron Young of Dartington, the author of the 1958 book The Rise of the Meritocracy, popularised the term, which rapidly lost the negative connotations he put upon it.

He didn't invent the term, though; he lifted it from another British sociologist, Alan Fox, who apparently coined it two years earlier, in a 1956 essay.

https://en.wikipedia.org/wiki/The_Rise_of_the_Meritocracy


I think this is the wrong takeaway.

Everything has become organized around measurable things and short-term optimization. "Disparate impact" is just one example of this principle. It's easy to measure demographic representation, and it's easy to tear down the apparent barriers standing in the way of proportionality in one narrow area. Whereas, it's very hard to address every systemic and localized cause leading up to a number of different disparities.

Environmentalism played out a similar way. It's easy to measure a factory's direct pollution. It's easy to require the factory to install scrubbers, or drive it out of business by forcing it to account for externalities. It's hard to address all of the economic, social, and other factors that led to polluting factories in the first place, and that will keep its former employees jobless afterward. Moreover, it's hard to ensure that the restrictions apply globally instead of just within one or some countries' borders, which can undermine the entire purpose of the measures, even though the zoomed-in metrics still look good.

So too do we see with publicly traded corporations and other investment-heavy enterprises: everything is about the stock price or another simple valuation, because that makes the investors happy. Running once-venerable companies into the ground, turning mergers and acquisitions into the core business, spreading systemic risk at alarming levels, and even collapsing the entire economy don't show up on balance sheets or stock reports as such and can't easily get addressed by shareholders.

And yet now and again "data-driven" becomes the organizing principle of yet another sector of society. It's very difficult to attack the idea directly, because it seems to be very "scientific" and "empirical". But anecdote and observation are still empirically useful, and they often tell us early on that optimizing for certain metrics isn't the right thing to do. But once the incentives are aligned that way, even competent people give up and join the bandwagon.

This may sound like I'm against data or even against empiricism, but that's not what I'm trying to say. A lot of high-level decisions are made by cargo-culting empiricism. If I need to choose a material that's corrosion resistant, obviously having a measure of corrosion resistance and finding the material that minimizes it makes sense. But if the part made out of that material undergoes significant shear stress, then I need to consider that as well, which probably won't be optimized by the same material. When you zoom out to the finished product, the intersection of all the concerns involved may even arrive at a point where making the part easily replaceable is more practical than making it as corrosion-resistant as possible. No piece of data by itself can make that judgment call.


Probably both, IMHO.


Well, on the web side, it'd be a lot less complex if we weren't trying to write applications using a tool designed to create documents. If people compiled Qt to WASM (for instance), or for a little lighter weight, my in-development UI library [1] compiled to WASM, I think they'd find creating applications a lot more straightforward.

[1] https://github.com/eightbrains/uitk


Most apps don’t need to be on the web. And the ones that need to be can be done with the document model instead of the app model. We added bundles of complexity to an already complex platform (the browser).


> Real solution?

I don't think there's any. Too many luminaries are going to defend the fact that we can have things like "poo emojis" in domain names.

They don't care about the myriad of homograph/homoglyph attacks made possible by such an idiotic decision. But they've got their shiny poo, so at least they're happy idiots.

It's a lost cause.


> Too many luminaries are going to defend the fact that we can have things like "poo emojis" in domain names. They don't care about the myriad of homograph/homoglyph attacks made possible by such an idiotic decision.

There is nothing idiotic about the decision to allow billions of people with non-latin scripts to have domain names in their actual language.

What's idiotic is to consider visual inspection of domain names a necessary security feature.


DNS could be hosted on a blockchain, with each person using his own rules for validating names and rejecting, accepting, or renaming any ambiguous or dangerous part of the name, in a totally secure and immutable way.

Blockchain has the potential to be the fastest and cheapest network on the planet, because it is the only "perfect competition" system on the internet.

"Perfect competition" comes from game theory, and "perfect" means that no one is excluded from competing. "Competition" means that the best performing nodes of the network put the less efficient nodes out of business.

For the moment, unfortunately, there is no blockchain that is the fastest network on the planet, but that's gonna change. Game theory suggests that there will be a number of steps before that happens, and it takes time. In other words, the game will have to be played for a while for some objectives to be achieved.

UTF-8 and glyphs are not related to supply chains, and that's a little bit off topic, but I wanted to mention that there is a solution.


In a strange way, this almost makes the behavior of hopping onto every new framework rational. The older and less relevant the framework, the more the owner's starry-eyed enthusiasm wears off. The hope that bigcorp will pay $X million for the work starts to fade. The tedium of bug fixes and maintenance wears on, and the game theory takes its toll. The only rational choice for library users is to jump ship once the number of commits and the hype start to fall -- that's when the owner is most vulnerable to the vicissitudes of Moloch.


> In a strange way, this almost makes the behavior of hopping onto every new framework rational.

Or maybe not doing that and just using native browser APIs? Many of these frameworks are overkill and having so many "new" ones just makes the situation worse.


Many of them predate those native browser APIs. Polyfills, the topic at hand, were literally created to add modern APIs to all browsers equally (most notably old Safaris, Internet Explorers, etc.).


Good point. What's often (and sometimes fairly) derided as "chasing the new shiny" has a lot of other benefits too: increased exposure to new (and at least sometimes demonstrably better) ways of doing things; ~inevitable refactoring along the way (otherwise much more likely neglected); use of generally faster, leaner, less dependency-bloated packages; and an increased real-world userbase for innovators. FWIW, my perspective is based on building and maintaining web-related software since 1998.


To be fair, there is a whole spectrum between "chasing every new shiny that gets a blog post" and "I haven't changed my stack since 1998."

There are certainly ways to get burned by adopting shiny new paradigms too quickly; one big example on the web is the masonry layout that Pinterest made popular, which in practice is extremely complicated, to the point where no browser has a full implementation of the CSS standard.


CSS Masonry is not even standardized yet. There is a draft spec: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_grid_la... and ongoing discussion about whether it should be part of CSS grid or a new `display` property.


Would you consider yourself as to have "chased the new shiny"? If you don't mind, how many changes (overhauls?) would you say you've made?


To be fair, when it comes to React, I don't think there is a realistic "new shiny" yet. NextJS is (was?) looking good, although I have heard it being mentioned a lot less lately.


Perhaps. I view it as the squalor of an entirely unsophisticated market. Large organizations build and deploy sites on technologies with ramifications they hardly understand or care about because there is no financial benefit for them to do so, because the end user lacks the same sophistication, and is in no position to change the economic outcomes.

So an entire industry of bad middleware created from glued together mostly open source code and abandoned is allowed to even credibly exist in the first place. That these people are hijacking your browser sessions rather than selling your data is a small distinction against the scope of the larger problem.


I believe tea, the replacement for homebrew, is positioning itself as a solution to this but I rarely see it mentioned here https://tea.xyz/


Tea is not the “replacement for homebrew” apart from the fact that the guy that started homebrew also started tea. There’s a bunch of good reasons not to use tea, not least the fact that it’s heavily associated with cryptocurrency bullshit.


Alternatively, if you rely on some code, then download a specific version and check it before using it. Report any problems found. This makes usage robust and supports open source support and development.
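In practice that can be as simple as pinning an exact version and verifying a hash before the artifact ever enters your build; a minimal sketch (the path and hash are placeholders, not a real package):

    import hashlib

    # Hash recorded when this exact version was originally reviewed.
    EXPECTED_SHA256 = "0123...placeholder..."

    def verify_artifact(path, expected=EXPECTED_SHA256):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        if h.hexdigest() != expected:
            raise RuntimeError(f"{path}: hash mismatch; refusing to use it")

    # verify_artifact("vendor/somelib-0.3.13.tgz")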


I'm afraid this is hitting the other end of inviolable game-theory laws. A dev who is paid for features and business value wants to read, line by line, a random package upgrade from version 0.3.12 to 0.3.13 in a cryptography or date lib they likely don't understand? And this should be done for every change of every library for all software, by all devs, who will always be responsible, not lazy, and very attentive and careful?

On the flip side, there is "doing as little as possible and getting paid" for the remainder of a 40-year career where you are likely to be shuffled off when the company has a bad quarter anyway.

In my opinion, if that were incentivized by our system, we'd already be seeing more of it; we have the system we have due to the incentives we have.


Correct. I don't think I have ever seen sound engineering decisions being rewarded at any business I have worked for. The only reason any sound decisions are made is that some programmers take the initiative, but said initiative rarely comes with a payoff and always means fighting with other programmers who have a fetish for complexity.

If only programmers had to take an ethics oath so they have an excuse not to just go along with idiotic practices.


Then there are the programmers who read on proggit that “OO drools, functional programming rules” or the C++ programmers who think having a 40 minute build proves how smart and tough they are, etc.


Lmao, you jogged a memory in my brain... in the final semester of my senior year in college, our professor had us agree to the IEEE Code of Ethics:

https://www.computer.org/education/code-of-ethics

In retrospect I can say I've held up for the most part, but in some cases have had to quit certain jobs due to overwhelming and accelerating nonsense.

Usually it's best for your mental well-being to just shut up and get paid ;)


"Fetish for complexity" haha. This sounds so much better than "problem engineer".


> Report any problems found. This makes usage robust and supports open source support and development.

Project maintainers/developers are not free labor. If you need a proper solution to any problem, make a contract and pay them. This idea that someone will magically solve your problem for free needs to die.

https://www.softwaremaxims.com/blog/not-a-supplier


Vendoring should be the norm, not the special case.

Something like this ought to be an essential part of all package managers, and I'm thinking here that the first ones should be the thousands of devs cluelessly using NPM around the world:

https://go.dev/ref/mod#vendoring


We've seen a lot more attacks succeed because somebody has vendored an old vulnerable library than supply chain attacks. Doing vendoring badly is worse than relying on upstream. Vendoring is part of the solution, but it isn't the solution by itself.


Not alone, no. That's where CI bots such as Dependabot help a lot.

Although it's also worrying how we seemingly need more technologies on top of technologies just to keep a project alive. It used to be just including the system's patched headers & libs; now we need extra bots surveying everything...

Maybe a Linux-distro-style community dependency management scheme would make sense. Keep a small group of maintainers busy with security patches for basically everything, and as a downstream developer just install the versions they produce.

I can visualize the artwork..."Debian but for JS"


In the old ways, you mostly relied on a few libraries that each solved a complete problem and was backed by a proper community. The odd dependency was usually small and vendored properly. Security was mostly the environment's concern (the OS), as the data was either client-side or on some properly managed enterprise infrastructure. Now we have npm with its microscopic and numerous packages, everyone wants to be on the web, and they all want your data.


I guess that would work if new exploits weren’t created or discovered.

Otherwise your whole plan to "run old software" is questionable.


That isn't the plan. For this to work new versions have to be aggressively adopted. This is about accepting that using an open source project means adopting that code. If you had an internal library with bug fixes available then the right thing is to review those fixes and merge them into the development stream. It is the same with open source code you are using. If you care to continue using it then you need to get the latest and review code changes. This is not using old code, this is taking the steps needed to continue using code.


> did you think service would remain high quality, free, well supported, and run by tireless, unselfish, unambitious benevolent dictators for the rest of your life

I would run some of the things I run free forever if, once in a while, one user were grateful. In reality that doesn't happen, so I usually end up monetising and then selling it off. People whine about everything and get upset if I don't answer tickets within a working day, etc. Mind you, these are free things with no ads. The thing is, they expect me to fuck them over in the end as everyone does, so it becomes a self-fulfilling prophecy. Just a single email or chat saying thank you for doing this once in a while would go a long way, but alas, it's just whining and bug reports and criticism.


Yes, it's inevitable given how many developers are out of a job now. Everyone eventually has a price.


This is an insightful comment, sadly

