I went to a talk of his on CCC ages ago, and it was such a fascinating combination of geometry, causality, and asymptotics. I have absolutely no clue whether it's reasonable physically, but independent of that, it's just a really elegant fusion of topics in a fun-to-think-about way. Worth a read for anyone who just appreciates elegant new ways of combining mathematical structures.
I've also seen this talk, at the behest of some spaced-out friends of mine. It was an amazing experience, and I still think about the universe through the lens of that talk!
My understanding of this idea is that once the universe reaches a state of maximum entropy (this is the “heat death” of the universe, where everything is a uniform, undifferentiated cloud of photons), time stops being meaningful because there can be no change from moment to moment. In a sense, time _is_ the change from low to high entropy - if you don’t have any entropy gradient, you can’t have any time either.
I've always rejected the idea that time is entropy change.
First, in many local processes entropy moves from high to low (e.g. life). Nobody says that time is moving backwards for living things. Entropy only increases if you consider the system they're embedded in as well. So this idea that entropy is time is something that only applies to the entire universe?
It's true that we don't see eggs unbreaking, or broken coffee cups flying off the floor and reassembling. This increase in entropy seems to give an "arrow" of time, but to my mind this view (ironically) confuses cause with effect.
If you have any causal system (cause preceding effects) then you will always see this type of entropic increase, by simple statistics. There are just many, many more ways for things to be scrambled and high entropy than ordered and low entropy.
So yes, entropy does tend to increase over time, but that's an effect of being in a causal system, not the system itself. At least, that's my view.
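To put a number on "many, many more ways", here's a toy microstate count (my own sketch, not from the comment above) using 50 two-state particles. The nearly-all-ordered macrostates are a vanishing sliver of the state space, while the scrambled middle dominates, so random evolution drifts toward high entropy without any built-in arrow:

    from math import comb

    N = 50                 # toy system: 50 two-state particles (coin flips)
    total = 2 ** N         # total number of microstates

    # Each macrostate "k heads out of N" contains C(N, k) microstates.
    counts = {k: comb(N, k) for k in range(N + 1)}

    # "Ordered": within 5 flips of all-heads or all-tails; "scrambled": near 50/50.
    ordered = sum(c for k, c in counts.items() if k <= 5 or k >= N - 5) / total
    scrambled = sum(c for k, c in counts.items() if 20 <= k <= 30) / total

    print(f"ordered:   {ordered:.1e}")    # ~4e-09 of all microstates
    print(f"scrambled: {scrambled:.2f}")  # ~0.88 of all microstates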
Could you expand on your comment that life has entropy moving from high to low? Doesn't aging increase the entropy in our biological system? I have always thought that we are at our most structured in the early phases of conception with entropy increasing constantly as we age.
Life is essentially a process of creating order (lower entropy), building complex cells and so on using energy and matter from its environment.
Perfectly true that entropy gets us in the end as we age, as the system breaks down and cannot sustain itself any longer. Although if we could fix those systems, there's no reason in principle we couldn't halt aging entirely.
I took it as capital-L Life moving from high to low. As evolution continues, Life seems to evolve toward ever lower-entropy, more-ordered organisms (as more complex organisms depend on the systems created by simpler organisms before them).
I am slightly blending the concept of entropy and complexity. But "ordered complexity" is how I imagine it.
I don’t think entropy ever moves from high to low overall; it only ever distills some local low out of a higher-entropy area, and in doing so, the overall entropy increases.
It works a bit like air conditioning: yeah, you can make one room cold, but only by dumping even more heat outside the room. The overall heat of the system increases.
This sounds sort of like the "if a tree falls in a forest and no one hears it, did it make a sound?" question.
If time passes and there's no observable difference, did it pass? I guess it makes no meaningful difference, but it's not really answering the underlying question of whether some variable is advancing or not.
If nobody logs in to a multiplayer game, does the game world still exist?
Sure there are files sitting on a server somewhere waiting to be read when the first user logs in, there may even be a physics engine polling abstract data structures for updates, but the game world doesn't render without players present with their computers bringing all this data into a coherent structure.
Also, for an extra existential kick, realize that it renders /independently/ in the GPU/CPU/RAM of each player's computer.
I remember the book "Now - Physics of Time" by Richard Muller (a Berkeley physics professor) touching on the subject of entropy linked to time, but I never got to finish the book and sadly I can't provide more insight.
And potentially leads to things like Boltzmann Brains, given enough time! Quantum fluctuations can still create wildly improbable things, even if only briefly.
If everything is massless, everything travels at the speed of light, and nothing experiences any time (photons travel null geodesics with zero spacetime interval).
This is required to make Penrose's end state Conformal, i.e. scale-invariant, so that it can arbitrarily Cycle to a small scale to make a new Big Bang Cosmology (CCC).
I don’t know if some people are just wired differently, but I can back up the feeling of not caring at all where I fall in a hierarchy or how much people respect or don’t respect me.
The things I find most thrilling always relate to being challenged. Finding someone better than me qualifies. Having my ideas challenged or being proven wrong are among the most positive experiences I’ve had, especially being forced to change deeply held beliefs. I mention this because it’s one of those things that I always hear people say everyone hates, but I’ve always felt the opposite, just from a pure chemical-feeling perspective. I don’t think I could possibly be unique in that experience.
Wouldn't this depend a lot on how management responds to your use? For example, if you just kept a log of prompts and outputs with notes about why the output wasn't acceptable, that could be considered productive use in this early stage of LLMs, especially if management's goal was to have you learning how to use LLMs. Learning how not to use something is just as important in the process of adapting any new tool.
If management is convinced of the benefits of LLMs and the workers are all just refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out, but if management isn't completely incompetent, the people tasked with using LLMs could do a lot to help the situation by testing them and providing constructive feedback, rather than making a stand by refusing to try and offering grand narratives about damaging the artistic integrity of something that has been commoditized from inception, like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.
When you shelter your children from the world to an extreme degree, you end up getting one of the most popular stories in both Europe and Asia for the last 2500 years:
I'm hoping my kids become digital buddhas. :) More seriously, I hope they learn that online is not better than IRL. But that means giving them IRL experiences in addition to Roblox. Yet here I am posting on Hacker News... I did have the neighbors over for dinner the other day, though. I think a good thing parents can do is have dinner parties. Show kids how to be social.
But dinner parties mean time away from Roblox and YouTube! I know this struggle, especially in winter. I think kids still want to socialize but struggle more often these days with how to do it. It’s easier for many of them to do it online when they can leave a situation at any time they want, and it’s hard to adapt to the pressure of being stuck in a social situation. What used to be “rage quitting” in games seems to be normal now. Maybe kids just need to learn more about the art of the excuse to leave.
Why do responsible parents even allow their kids to play Roblox? Roblox is plagued by freemium style gambling games that are harmful to children. I’m interested to hear from HN parents why they feel Roblox is a safe environment for their kids. For further reading:
It's the network effect. When all the other kids are playing a game, it's tough for some kids to be the only one not on it. Then it becomes one of their primary means of socializing. To a lot of kids, there are only two games in the universe: Roblox and Fortnite. That's all any of their peers play. They're not getting into other ones where their friends aren't. It's the same as social networks.
Whether it's irresponsible to let kids play the same games as their friends is of course up to individual parents. I think it's possible to both be exposed to these types of traps and learn how to avoid them. They can't gamble without access to money from parents anyway.
Thank you for the thoughtful response. I too have struggled with fighting these network effects. And I am disheartened to see so many parents who just blindly let their children play these harmful games. Then parents like us, who do see the negatives, feel forced to let our children play so that they don’t become ostracised from their social groups.
Fortnite is another excellent example of introducing gambling to minors with its sales of loot boxes, for which the FTC fined Epic $245M. Recently one of my children asked to play Genshin Impact because their classmates were all hooked on the game. I was firm in saying that I did not want my children playing gacha games, which are designed to fool players into gambling on loot boxes and paying to win. Instead I tried to get them to switch to another game without these poisonous mechanics.
I’ve always been hesitant to be too forceful in getting my children off these bad game platforms because I didn’t want to be labeled as the bad parent who took away their fun, in turn causing issues for my children at school. But I think my new strategy is just to buy their friends games I feel are more constructive, such as Minecraft, instead of freemium mobile games.
I just hope more parents become aware of the harmful and addictive aspects these games present to children. I strongly believe that one day we will look back at this industry and compare it to the tobacco industry and the harm it caused.
Is there any harm in the games if the kids can't spend any money?
That is my solution. I allow my kids to play games, but I am not spending a single cent on them. Their accounts never even get the ability to spend money, so the kids can waste their time, but they can't gamble because they don't have access to money. I know my son tried to earn some Robux, but he didn't get far, and he focused on games that were playable without it. Eventually the kids lost interest...
Same goes for Genshin Impact. We even played that together for a while. My oldest made it to level 48 out of 50 by just grinding. Money was never an issue because he knew that I'd be firm on that, so he never asked. (I just asked him about it, and he found that the benefits from spending money wouldn't really have been worth it. They didn't make the game much easier, so why bother?)
That is a good question: is there any harm in playing freemium games if the kids aren’t allowed to spend any money?
My view is that freemium games tend to be engineered to hook people into playing for long periods of time. They use strategies similar to how casinos hook gamblers: behavioral conditioning that gives intermittent rewards for long play time; basically, timed dopamine hits.
So going back to your question about not spending money: even though they’re not spending money now, they are being conditioned to see such behaviors in games as the norm, and one day, when they have a source of income themselves, those dopamine hits are just a few dollars away.
But my kids are strong-willed and won’t fall for these tricks, you may say. That may be so, but the fact that they’re participating in these game platforms is drawing in other children who may not have the same mental fortitude.
I guess my long winded rant is just to say that we shouldn’t be promoting these casino-like games. We should be promoting games that foster creativity and a sense of achievement without pay to win shortcuts and gambling for rewards.
It is a good question. YouTube is a similar phenomenon. Below that would be cable TV. Everything has been hyper-optimized for attention/dopamine reward.
There are some folks who seem to only let kids watch old movies and old shows on DVD.
Also most kids are in school in person which may help mitigate the brain rot.
YouTube is another dangerous vehicle for toxic ideology affecting our children. If you’re not careful with censoring the algorithm, it is very easy to fall into a self-reinforcing loop of disinformation. And even after you block a bad channel, new ones sprout up daily like weeds. Parenting today has become very challenging.
Obviously, you will be able to find plenty of examples of things that don't work, and you probably have a bank app or some other thing that you need Google for, but alternatives do exist, and I'd argue that you can have a healthier, more productive, and more enjoyable experience if you can have all your needs met by software that isn't treating you as a product.
My opinion is you should use whatever works; I do. But try not to absolutely need software that you can't control.
As you say, due to banking, this works more or less depending on which country you live in.
Some countries have tied their banking to their phones, with apps that use SafetyNet to check how Googled you are.
Somehow corporations and nations have given sovereignty away for convenience, so you may need two phones, the Google one and the good one, to be properly F-Droid-only.
I agree with you that most consumers probably do want things that are bad for them. I would at least be cautious of services provided by one of the companies with the most antitrust lawsuits this century; I really don't think they're your friend.
It’s dangerous to assume this much about someone’s thoughts from their comments, but let me offer a supporting opinion to the other point:
People like things to be consistent and reliable. When we’re talking about technology, they probably don’t know what specific coding or licensing or development practices lead to that, but they know that they don’t like it when something they use gets worse over time.
When things they use everyday change at the whim of one company that has full control, they don’t always like it. Having software that’s free to modify and distribute makes it so people will always have options and not depend on one company or another having the same opinions about what makes software good.
"Having software that’s free to modify and distribute makes it so people will always have options and not depend on one company or another having the same opinions about what makes software good"
Yeah, it's called Android. Companies like Samsung, Xiaomi, Huawei, etc. literally modify Android, and it comes out of the box with the phone.
Are you saying that Android without Google is the answer? Android is still Google. Saying you don't want Google while still using Android doesn't really live up to the opinion, since Google can change Android's core anyway.
This isn’t about wanting or not wanting Google but wanting freedom. I don’t avoid Google completely, but I don’t want to be dependent on them. I always want to have free and open alternatives to what they provide.
The AOSP works for those different companies because it’s free to modify. Huawei had to move away from Android (still using parts) because Google services were not free or available to them. That was fine because at least the core parts remained free.
If the basic functions of life like paying for things don’t work without Google, it’s a problem. That’s bad for people and too much pressure for Google to do the right thing for people who have different needs.
Yeah, it's called hypocrisy. You want all the good things without the bad things, but they come with an associated cost. People are free to use anything other than Android or iOS; no one forces them to use Google services either.
Google is an objectively evil company, ever since they removed their “Don’t be evil” slogan. Android is, conceptually, a good idea. There is no “emotion” behind that statement, they themselves have said they are evil, and their actions regarding Android reflect that. It is not hypocrisy to desire that there be more good than bad in this world, and I urge you to read a dictionary.
Yeah, it's called "don't use Android at all". It's hypocrisy when you still use Android without the Google services.
Same with iOS: I bet a lot of people would like iOS on non-iPhone devices, but that's not going to happen soon because Apple's profit is from iPhone sales.
Same with YouTube: people don't like watching ads, but YouTube's business model is not going to survive without ads (who is going to pay when people can upload unlimited videos for free???).
Google benefits from Android's development cost because it generates revenue when people use Google services.
It's called BUSINESS; there is no evil in doing business. Don't talk to me about right and evil when we're talking on HN, where most people want to create unlimited subscriptions for their services.
It’s not hypocrisy to remove the cancer from my phone; tumors like Google Services exist to be removed. I use an ad blocker, I download all the videos I like to personal storage after watching them on YouTube via a shared Invidious frontend my friends and I use (meaning Google takes the data hit twice), and I always fill up my GDrives. This way I can slowly do my part in removing evil from this world.
There is absolutely evil in doing business, unless you view things such as slavery as A-OK. And trust me, I fucking hate subscriptions and view those who resort to forcing them onto their users as less than scum. They are just as evil as Google or Apple.
Civilization doesn't even have to collapse for a project like this to be useful. Like you said, there's lots of e-waste. If you happened to live in a place that ended up with a lot of this stuff but didn't have a lot of infrastructure, you could possibly build up some convenient services with something like this. I like the idea of building software to make hardware less reliant on external resources in general. Over time, it could be useful to have more resilient electronics, because we seem to be designing machines to be more reliant on specific networked infrastructure every year.
What services? For most modern services the big cost inputs are things like human labor, utilities, and real estate. Reusing obsolete hardware doesn't gain anything. It's likely to be a net negative if it takes up more space, uses more electricity, and requires more maintenance.
A lot of functions of electronics don't require tons of processing power or efficiency. Microcontrollers can be used for just about anything. General purpose computing can be put to whatever purpose you can imagine.
Things supported by this OS like the Z80, 8086, 6502 etc. use around 5-10W. Using simple parts to control complicated machines is a standard operation, and even advanced electronics tend to use a lot of parts using older manufacturing techniques because it's more efficient to keep the old processes running.
If you're running a tractor, sure, 5 watts is not a big deal. But there are a lot of hypothetical post-collapse circumstances where such a high power usage would be prohibitive. Consider, for example, the kinds of radio stations you'd need for the kinds of weather and telecommunications uses I discussed in https://news.ycombinator.com/item?id=43484415, which benefit from being placed on inaccessible mountaintops and running unattended for years on end.
5 watts will drain a 100-amp-hour car battery in 10 days and is basically infeasible to get from improvised batteries made with common solid metals. Current mainstream microcontrollers like an ATSAMD20 are not only much nicer to program but can use under 20 μW, twenty thousand times less. A CR2032 coin cell (220mAh, 3V) can provide 20 μW for about 4 years. But the most the coin cell can provide at all is about 500 μW, so to run a 5-watt computer you'd need 10,000 coin cells. Totally impractical.
And batteries are a huge source of unreliability. What if you make your computing device more reliable by eliminating the battery? Then you need a way to power it, perhaps winding a watchspring or charging a supercapacitor.
Consider winding up such a device by pulling a cord like the one you'd use to start a chainsaw. That's about 100 newtons over about a meter, so 100 joules. That energy will run a 5W Z80 machine for 20 seconds, so you have to yank the cord three times a minute, or more because of friction. That yank will run a 20 μW microcontroller for two months.
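For anyone who wants to check those figures, here's the back-of-envelope arithmetic as a quick sketch (all capacities and loads are the rough values assumed above):

    DAY = 86400  # seconds

    # 100 Ah car battery at a nominal 12 V feeding a 5 W Z80-class machine:
    car_battery_J = 100 * 12 * 3600
    print(car_battery_J / 5 / DAY)              # ~10 days

    # CR2032 coin cell (220 mAh at 3 V) feeding a 20 uW microcontroller:
    coin_cell_J = 0.220 * 3 * 3600
    print(coin_cell_J / 20e-6 / DAY / 365)      # ~3.8 years

    print(5 / 500e-6)                           # ~10,000 cells to source 5 W

    # One chainsaw-style yank: ~100 N over ~1 m = ~100 J, ignoring friction.
    print(100 / 5)                              # 20 s of Z80 runtime
    print(100 / 20e-6 / DAY)                    # ~58 days of 20 uW MCU runtime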
I agree with your point! Old electronics aren't going to be appropriate for every situation, and modern alternatives are superior for lots of situations. But that doesn't mean that it isn't worth maintaining projects to keep the old ones useful. Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient. It just suits their situation better. Putting one of these things in something like a tractor or a dam or anything that has enough energy to spare is exactly the use case. And the relative simplicity of old technology can be a benefit if someone is trying to apply it to a new situation with limited resources or knowledge.
What cases are you thinking of when you say "Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient"? I considered hand sewing, cultivation with digging sticks instead of tractors, cooking over wood fires, walking, execution by stoning, handwriting, and several other possibilities, but none of them fit your description. In most cases the modern alternatives are less efficient but easier to use, but in every case I can think of where the efficiency ratio reaches a thousand or more in favor of the new technology, the thousands-of-years-old technology is abandoned, except by tiny minorities who are either impoverished or deliberately engaging in creative anachronisms.
I don't think "the relative simplicity of old technology" is a good argument for attempting to control your tractor with a Z80 instead of an ATSAMD20. You have to hook up the Z80 to external memory chips (both RAM and ROM) and an external clock crystal, supply it with 5 volts (regulated with, I think, ±2% precision), provide it with much more current (which means bypassing it with bigger capacitors, which pushes you towards scarcer, shorter-lived, less-reliable electrolytics), and program it in assembly language or Forth. The ATSAMD20 has RAM, ROM, and clock on chip and can run on anywhere from 1.62 to 3.63 volts, and you can program it in C or MicroPython. (C compilers for the Z80 do exist but for most tasks performance is prohibitively poor.) You can regulate the ATSAMD20's voltage adequately with a couple of LEDs and a resistor, or in many cases just a resistor divider consisting of a pencil lead or a potentiometer.
It would be pragmatically useful to use a Z80 if you have an existing Z80 codebase, or if you're familiar with the Z80 but not anything current, or if you have Z80 documentation but not documentation for anything current, or if you can get a Z80 but not anything current. (One particular case of this last is if the microcontrollers you have access to are all mask-programmed and don't have an "external access" pin like the 8048, 8051, and 80C196 family to force them to execute code from external memory. In that case the fact that the Z80 has no built-in code memory is an advantage instead of a disadvantage. But, if you can get Flash-programmed microcontrollers, you can generally reprogram their Flash.)
Incidentally, the Z80 itself "only" uses about 500 milliwatts, and there are Z80 clones that run on somewhat less power and require less extensive external supporting circuitry. (Boston Scientific's pacemakers run on a Z80 softcore in an FPGA, for example, so they don't have to take the risk of writing new firmware.) But the Z80's other drawbacks remain.
The other draw of an established "old architecture" is that it's fairly fixed and sourcable.
There are a bazillion Z80s and 8051s, and many of them are in convenient packages like DIP. You can probably scavenge some from your nearest landfill using a butane torch to desolder them from some defunct electronics.
In contrast, there are a trillion flavours of modern MCUs, not all drop-in interchangeable. If your code and tooling are designed for an ATSAMD20, great, but I only have a bag of CH32V305s. Moreover, you're moving towards finer pitches and more complex mounting: going from DIP to TSSOP to BGA, I'd expect every level to represent a significant dropoff in how many devices can be successfully removed and remounted by low-skill scavengers.
I suppose the calculus is different if you're designing for "scavenge parts from old games consoles" versus proactively preparing a hermetically sealed "care package" of parts pre-selected for maximum usability.
It's a good point that older hardware is less diverse. The dizzying number of SKUs with different pinouts, different voltage requirements, etc., is potentially a real barrier to salvage. I have a 68000 and a bunch of PALs I pried out of sockets in some old lab equipment; not even desoldering was needed. And it's pretty common for old microprocessors to have clearly distinguishable address and data buses, with external memory. And I think I've mentioned the lovely "external access" pin on the 8048, 8051, and 80C196 family, though on the 80c196 it's active low.
On the other hand, old microcontrollers are a lot more likely to be mask-programmed or OTP PROM programmed, and most of them don't have an EA pin. And they have a dizzying array of NIH instruction sets and weird debugging protocols, or, often, no debugging protocol ("buy an ICE, you cheapskate"). And they're likely to have really low speeds and tiny memory.
Most current microcontrollers use Flash, and most of them are ARMs supporting OCD. A lot of others support JTAG or UPDI. And SMD parts can usually be salvaged by either hot air or heating the board up on a hotplate and then banging it on a bucket of water. Some people use butane torches to heat the PCB but when I tried that my lungs were unhappy for the rest of the day.
I was excited to learn recently that current Lattice iCE40 FPGAs have the equivalent of the 8051's EA pin. If you hold the SPI_SS pin low at startup (or reset) it quietly waits for an SPI master to load a configuration into it over SPI, ignoring its nonvolatile configuration memory. And most other FPGAs always load their configuration from a serial Flash chip.
The biggest thing favoring recent chips for salvage, though, is just that they outnumber the obsolete ones by maybe 100 to 1. People are putting 48-megahertz reflashable 32-bit ARMs in disposable vapes and USB chargers. It's just unbelievable.
In terms of hoarding "care packages", there is probably a sweet spot of diversity. I don't think you gain much from architectural diversity, so you should probably standardize on either Thumb1 ARM or RISC-V. But there are some tradeoffs around things like power consumption, compute power, RAM size, available peripherals, floating point, GPIO count, physical size, and cost, that suggest that you probably want to stock at least a few different part numbers. But more part numbers means more pinouts, more errata, more board designs, etc.
I appreciate the thought and detail you put into these responses. That's beyond the scope of what I anticipated discussing.
The types of things I had in mind are old techniques that people use for processing materials, like running a primitive forge or extracting energy from burning plant material or manual labor. What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor? Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher, but it relies on a lot of infrastructure to get to that point. The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
In the same way, while old computers are much less efficient, models like these that have been manufactured for decades and exist all over might end up being a better fit in some cases, even with less efficiency. I can appreciate that the integration of components in newer machines like the ATSAMD20 can reduce complexity in many ways, but projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
The Z80 voltage is 5 V +/- 5%, so right around what you were thinking. Considering the precision required for voltage regulation is smart, but if you were having to replace crystals, they are simple and low-frequency (2-16 MHz), lots of them have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
Your point about documentation is a good one. It does require more complicated programming, but there are plenty of paper books out there (also digitally archived) that in many situations might be easier to locate because they have been so widely distributed over time. If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like this: https://archive.org/details/Programming_the_Z-80_2nd_Edition...
Anyway, thank you again for taking so much time to respond so thoughtfully. You make great points, but I'm still convinced that it's worthwhile to make old hardware useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available.
Projects like this one will hopefully never be used for their intended purpose, but they may form a basis for other interesting uses of technology and finding ways to take advantage of available computing resources even as machines become more complicated.
In my sibling comment about the overall systems aspects of the situation, I asserted that there was in fact enormously more information available for how to program in the 32-bit ARM assembly used by the ATSAMD20 than in Z80 assembly. This is an overview of that information, starting, as you did, from the Internet Archive's texts collection.
But the Archive isn't the best place to look. The most compact guide to ARM assembly language I've found is chapter 2 of "Archimedes Operating System: A Dabhand Guide" https://www.pagetable.com/docs/Archimedes%20Operating%20Syst..., which is 13 pages, though it doesn't cover Thumb and more recently introduced instructions. Also worth mentioning is the VLSI Inc. datasheet for the ARM3/VL86C020 https://www.chiark.greenend.org.uk/~theom/riscos/docs/ARM3-d... sections 1 to 3 (pp. 1-3 (7/56) to 3-67 (45/56)), though it doesn't cover Thumb and also includes some stuff that's not true of more recent processors. These are basically reference material like the ARM architectural reference manual I linked above from the Archive; learning how to program the CPU from them would be a great challenge.
I also appreciate your responses! I especially appreciate the correction about the Z80's power supply requirements.
> What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor?
A hand crank is about 95% efficient. An electromechanical generator is about 90% efficient. Your muscles are about 25% efficient. Putting it together, the energy efficiency of generating electricity with a hand crank is about 21%. Nuclear reactors are about 40% efficient, though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc. The advantages of the nuclear reactor are that it's more convenient (requiring less human attention per joule) and that it can be fueled by uranium rather than potatoes.
> Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher. (...) The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
The term for that ratio, which I guess is a sort of efficiency, is "ERoEI" or "EROI". https://en.wikipedia.org/wiki/Energy_return_on_investment#Nu... says nuclear power plants have ERoEI of 20–81 (that is, 20 to 81 joules of output for every joule of input, an "efficiency" of 2000% to 8100%). A hand crank is fueled by people eating biomass and doing work at energy efficiencies within about a factor of 2 of the best power plants. Biomass ERoEI varies but is generally estimated to be in the range of 3–30. So ERoEI might improve by a factor of 30 or so at best (≈81 ÷ 3) in going from hand crank to nuclear, and possibly get slightly worse. It definitely doesn't change by factors of a thousand or more.
Even if it were, I don't think hand-crank-generated electricity is used by "plenty of people".
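For concreteness, here's the arithmetic behind both comparisons as a quick sketch, using the rough figures above:

    # Hand crank: mechanism * generator * muscle efficiency
    print(0.95 * 0.90 * 0.25)   # ~0.21, i.e. about 21%

    # Best-case ERoEI ratio: nuclear (up to ~81) over biomass (as low as ~3)
    print(81 / 3)               # 27, a factor of ~30, nowhere near thousands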
> projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
I don't think CollapseOS really helps you with debugging the EMI on your RAM bus or reducing your power-supply ripple, and I don't think "ease of use" is one of its major goals. Anti-goals, maybe. Hopefully Virgil will correct me on that if he disagrees.
> if you were having to replace crystals, they are simple and low frequency, 2-16Mhz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
I don't think a widely-distributed crystal makes assembly or maintenance easier than using an on-chip RC oscillator instead of a crystal. It does have real advantages for timing precision, but you can use an external crystal with most modern microcontrollers just as easily as with a Z80, the only drawback being that the cheaper ones are rather short on pins. Sacrificing two pins of a 6-pin ATTiny13 to your clock really reduces its usefulness by a lot.
> If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like...
Oh, that's because you're looking for the part number rather than the CPU architecture. If you don't know that the ATSAMD20 is a Cortex-M0(+) running the ARM Thumb1 instruction set, you are going to have a difficult time programming it, because you won't know how to set up your C compiler.
There is in fact enormously more information available for how to program in 32-bit ARM assembly than in Z80 assembly, because it's the architecture used by the Acorn, the Newton, the Raspberry Pi, almost every Android phone ever made, and old iPhones. See my forthcoming sibling comment for information about ARM programming.
Aside from being a much better compilation target for high-level languages like C, ARM assembly is much, much easier than Z80 assembly. And embedded ARMs support a debugging interface called OCD which dramatically simplifies the task of debugging broken firmware.
> models like [Z80s and 6502s] that have been manufactured for decades and exist all over might end up being a better fit
There are definitely situations where Z80s or 6502s, or entire computers already containing them, are more easily available than current ARM microcontrollers. (For example, if you're at my cousin's house—he's a collector of obsolete computers.) However, it's difficult to overstate how much more ubiquitous ARM microcontrollers are. The heyday of the Z80 and 6502 ended in about 01985, at which point a computer using one still cost about US$2000 and only a few million such computers were sold per year. The most popular 6502 machine was the Commodore 64, whose total lifetime production was 12 million units. The most popular 8080-family machine (supporting a few Z80 instructions) was probably the Gameboy, with 119 million units. We can probably round up the total of deployed 8080 and 6502 family machines to 1 billion, most of which are now in landfills.
Arm's partners have reportedly been shipping on the order of 25 billion chips a year, which means about as many ARMs were being produced every two weeks as 8080 and 6502 machines in all of history, a pace that has probably only accelerated since then. Most of those are embedded microcontrollers, and I think most of those microcontrollers are reflashable.
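As a sanity check on that rate (a sketch; the ~25-billion-chips-a-year figure is Arm's reported order of magnitude, not something established elsewhere in this thread):

    arm_per_year = 25e9      # assumed annual ARM chip shipments
    legacy_total = 1e9       # generous lifetime total of 8080/6502-family machines
    print(legacy_total / arm_per_year * 52)   # ~2 weeks of ARM production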
Other microcontroller architectures like the AVR are also both more pleasant to program and more abundant than Z80s and 6502s. They also feature simpler and more consistent sets of peripherals than typical Z80 and 6502 machines, in part because the CPU itself is so fast that a lot of the work these obsolete chips need special-purpose hardware for can instead be done in software.
So, I think that, if you want something useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available, you should focus on ARM microcontrollers. Z80s and 6502s are rarely available, much less useful, fragile rather than resilient, inflexible, and unnecessarily difficult to use.
> though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc.
Rereading this, I don't know in what sense it could be true.
What I was thinking of was that the cost of energy from a nuclear power plant is on the order of ten times as many dollars as the cost of the fuel, largely as a result of the costs of building it, which represents a sort of inefficiency. However, what's being consumed inefficiently there isn't energy; it's things like concrete, steel, human attention, bulldozer time, human lives, etc., collectively "money".
If, as implied by my 4% figure, what was being consumed by the plant construction were actually 22.5x as much energy as comes out of the plant over its lifetime, rather than money, its ERoEI would be about 0.044. It would require the lifetime output of twenty or thirty 100-megawatt power plants to construct a single 100-megawatt nuclear power plant. That is not the case. In fact, as I explained later down in the same comment, the ERoEI of nuclear energy is generally accepted to be in the range of about 10 to 100.
About the return on investment, the methodology is interesting, and I’m surprised that a hand crank to nuclear would increase so little in efficiency. But although the direct comparison of EROI might be small, I wonder about this part from that article:
“It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability,[22] while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art.”
So different values of EROI can yield vastly different civilizational results, the difference between base sustainability and a society with high art and technology. The direct energy outputs might not be thousands of times different, but the information output of different EROI levels could be considered thousands of times different. Without a massive efficiency increase, society over the last few thousand years got much more complex in its output. I’m not trying to change terms here just to win an argument but trying to qualify the final results of different capacities of harnessing energy and technology.
I think this gets to the heart of the different arguments we’re making. I’m not in any way arguing that these old architectures are more common in total quantity than ARM. That difference in production is only going to increase. I wouldn’t have known the specific difference, but your data is great for understanding the scope.
My argument is that projects meant to make technology that has been manufactured for a long period of time and has been widely distributed more useful and sustainable are worthwhile, even when we have more common and efficient alternatives. This doesn’t in any way contradict your point about ARM architecture being more common or useful, and I’d be fully in favor of someone extending this kind of project to ARM.
In response to some of the other points: using an external crystal is just an example of how you could use available parts to maintain the Z80 if it needed fixing but you had limited resources. In overall terms, it might be easier to throw away an ARM microcontroller and find 100 replacements for it than even trying to use an external crystal for either one, but again I’m not saying it’s a specific advantage to the Z80 that you could attach a common crystal, just something that might happen in a resource-constrained situation using available parts. Better than the kid in Snowpiercer sitting and spinning the broken train parts at least.
Also, let me clarify the archive.org part. I wasn’t trying to demonstrate the best process for getting info. I just picked that because they have lots of scanned books to simulate someone who needed to look up how to program a part they found. I know it’s using ARM, but the reason I mentioned that had to do with the distribution of paper books on the subject and how they’re organized. The book I linked to starts with very basic concepts for someone who has never programmed before and moves quickly into the Z80, all in one old book, because it was printed in a simpler time when no prior knowledge was assumed.
There are plenty of paper books on ARM too, and probably easier to find, but now that architectures are becoming more complicated, you’re more likely to find sources online that require access to a specific server and have specialized information requiring a certain familiarity with programming and the tools needed for it. More is assumed of the reader.
If you were able to find that one book, you could probably get pretty far in using the Z80 without any familiarity with complex tools. Again, ARM is of course popular and well-documented, but the old Z80 stuff is still out there and simple enough to understand and even analyze with your bare eyes in more detail than you could analyze an ARM microcontroller without some very specific tools.
So all that info about ARM is excellent, but this isn’t necessarily a competition. It’s someone’s passion project who chose a few old, simple, and still-in-production technologies to develop a resilient and translatable operating system for. It makes sense to start with the earlier technology because it’s simpler and less proprietary, but it would also make sense to extend it to modern architectures like ARM or RISC-V. I wouldn’t be surprised if sometime in the future some person or AI did just that. This project just serves as a nice starting point for an idea on resilient electronics.
What's your point? A lot of simple devices are still being manufactured with cheap microcontrollers. Most of them don't even have an OS as such. If society collapses it's not like people are going to scavenge the microcontroller out of their washing machine and use it to reboot civilization.
In https://news.ycombinator.com/item?id=43484415 I outlined some extremely advantageous uses for automatic computation even in unlikely deep collapse situations, for most of which the microcontroller out of your washing machine (or, as I mention in https://news.ycombinator.com/item?id=43487644, your disposable vape or USB charger) is more than sufficient if you can manage to reprogram it.
Even if your objectives are humbler than "rebooting civilization" (an objective I think Virgil opposes), you might still want to, for example, predict the weather, communicate with faraway family members, automatically irrigate plants and build other automatic control systems, do engineering and surveying calculations, encrypt communications, learn prices in markets that are more than a day's travel away, hold and transmit cryptocurrencies, search databases, record and play back music and voice conversations, tell time, set an alarm, carry around photographs and books in a compact form, and duplicate them.
Even a washing-machine microcontroller is enormously more capable of these tasks than an unaided human, though, for tasks requiring bulk data storage, it would need some kind of storage medium such as an SD card.
I loved both of these games and spent a lot of time with them. The Baldur's Gate review here stays pretty close to others I've read, but the Fallout 2 review is the more interesting part because it's more opinionated, and in ways I disagree with. It's interesting how much everyone involved seemed to dislike the game, but that very much fits the late-'90s vibe. It's also what made it one of my all-time favorites. I never felt like I was playing through someone else's story. It felt like a set of content within which you could actually create your own character and play through, directed by your character's own motivations rather than the writer's. It's a quality that's rare to find in games, but the conditions of a bunch of people creating as much content as they could without a specific direction make sense for how this kind of thing would have to come together. Baldur's Gate is great, but it's much more linear and gives fewer opportunities for choices and role-playing. I wish more games would revisit the freedom of Fallout 2.
It sounds great, but I'm still not sure what they'll have to do to make it worth an upgrade for people who already use a Steam Deck and an HMD. I already use the Steam Deck like this often, just using an HDMI cable and an adapter to connect it to a Quest 3 for a giant display, but I can also run all the Quest window management on the side without taking resources from the Deck, and if I feel like continuing on the Deck alone, I can unplug the cable and keep using it on the small screen. It's a pretty nice setup.
I'm wondering if it would be worth just getting whatever adapter they come up with and the next gen Steam Deck to use the same way rather than investing in Deckard, but I'm interested in seeing their case!
Now we just need a perpetual entropy-powered photo-electric computer that uses a contained low-light LED array for internal data transfer, storage, and computation mechanisms as well as a power source. Okay, maybe not that, but this could lead to some interesting applications.
That's fascinating, and I had no idea web dev influencers were so big. I checked, and there really are people with millions of followers doing development. Personally, the idea of learning anything related to coding through a video is extremely frustrating. It's a text medium. I want to look at things, take time, think it over, compare code, follow references, look up functions.
That people like video formats isn't really surprising to me since it's everywhere, but I still don't fully understand the appeal. Even if you were raised on video content and started coding that way, at some point you have to reference text documentation, right? At that point, I would think you would just stick to the text and not go back to the video, but maybe it's just more entertaining the other way.
> That people like video formats isn't really surprising to me since it's everywhere, but I still don't fully understand the appeal.
Me neither, but I have a hunch about why.
Are you a fast reader?
I am, at least compared to the population at large. And one of the reasons I can't stand video as a format for learning about coding topics is that it is so frustratingly slow compared to my reading speed. To get anywhere close, I have to crank the playback speed up so high that I start having trouble understanding what the presenter is saying. That's on top of other things like poor searchability and no way to copy-paste code snippets.
The decline of reading skills, at least in the US, is pretty well-documented. And my hunch is that for the increasingly large number of people coming into the industry who don't read quickly or well, the efficiency of learning from videos is closer to parity with text. What's more, I suspect there's high correlation between lower reading skills and dislike of the act of reading, so videos are a way to avoid having to do something unpleasant.
I have no solid evidence to back any of this up, but it seems at least as plausible to me as any other explanations I've run across.
That’s a really interesting take. I say that as I’m the opposite — a slow reader — and I, too, cannot stand learning via video.
I’m by no means a weak reader, I love reading and do so often. I just find myself re-reading complex sections to ensure that I understand 100%.
I also like to be able to read something and then follow it down a train of thought. For example, if a post/article says that X causes Y because of Z, I want to find out why Z causes it, what causes Z to be, etc.
With a video I find this sort of learning to be inefficient and less effective, while also making the whole experience a bit rigid. I also find that videos tend to leave out the less glamorous details, as they don't video well, if that makes sense.
I'm also a slow reader by your standards; re-reading, to me, is part of the learning process. Going over text with your eyes is not reading, let alone learning.
I think your dislike of video compared to text is because you're a quick learner. Like you said, going off on a tangent and researching some word or sentence or statement makes you a thorough learner, I think. Eventually you have a quicker and bigger grasp of the subject at hand, which is the whole point, if you ask me.
Thanks mate! I think I consider myself a slow reader as I’ve grown up with my mother and sister, who both read at some ungodly pace. They’ll finish 5 books for every one I finish.
I do agree with the thorough learner aspect. I think having come from physical engineering backgrounds helps a lot with that.
When studying aerospace, for example, there was a lot of ‘but why’ which usually ended up leading to ‘natural phenomenon’ after abstracting far enough.
Alternatively: you can listen to audio while commuting or driving or cleaning or working out. I love audio for higher level things and to get an overview of the topic. Then text to dive into the details.
Another big driver of the move from text to video: it is easier to monetise video via YouTube than a blog. People with millions of subscribers on YouTube aren't creating FE learning material out of the goodness of their hearts; it is a big business. Also, video almost always has lower information density than text, so it is easier for your net to capture more customers.
And you can't just search in it. It's a truly trashy format for anything other than a presentation or lecture. For simple information sharing it's horrible.
I have a fairly fast reading speed, but I mostly consume my non-fiction (non-technical) books in audio format.
Why? Attention span. If someone is reading to me, I tend to get 'pushed along', and it makes it easy to slog through a non-fiction book that really could have been a pamphlet but that the author needed to be 400 pages. If I space out while listening, it's usually not a problem because most non-fiction books are so repetitive. I suspect that's the secret behind video's popularity: people's attention is in short supply.
I’m a pretty slow reader. I tend to reread sections, pause and play around with the ideas that come into my head, get lost while doing that and have to start over… I still prefer reading specifically because it allows me to do all that at my own pace. I don’t have to feel rushed along by a presenter or actively pause, rewind, try to scrub the timeline to find a point I want to rehash etc.
I really think you've got a point. I'd add, however, that reading takes more cognitive effort than watching a video at a basic level (that is, setting aside the information in the text or video).
Just see how hard it is to read more than a few paragraphs when tired before bed vs. how hard it is to watch something in the same state.
I think this gets added to the point you are making about reading skills declining.
People learn best in different ways. Some learn best by reading, some by tinkering, some by watching and listening. I heard this over and over in school and college.
I don’t think it has anything to do with reading speed. When taking in complex technical information, you spend more time thinking and trying to understand than actually reading.
If you’re finding that you can quickly race through content, it probably just means you find the content easy to understand because you’re already familiar with many concepts.
I happen to agree with the conclusion also. And you don't need a rigorous proof to do what you want to do. But I often find that people appeal/resort to "common sense" when they don't have a coherent argument, and just can't conceive of any other point of view.
> Personally, the idea of learning anything related to coding through a video is extremely frustrating. It's a text medium. I want to look at things, take time, think it over, compare code, follow references, look up functions.
> That people like video formats isn't really surprising to me since it's everywhere, but I still don't fully understand the appeal.
I like (some) programming videos and I'll give my perspective as someone who learned 100% from books and 3-ring binders for old languages like C/C++/C#/Javascript/Python/bash/etc. (The 1980s Microsoft C Compiler manuals were 3-ring binders.)
The newer languages I learned with a hybrid of videos + traditional books would be HTML/CSS, Apple Swift, and PyTorch with the latest AI toolkits and libraries.
The extra dimension that videos offer besides plain text is the live usage of IDE, tools, troubleshooting, etc. For me, watching a dynamic screen with a moving mouse cursor and voiceover seems to activate extra neurons moreso than just reading static text in a book.
There's also a lot of "activities in-between the coding" that's helpful such as seeing the programmer looking up something in various pages of documentation, scrolling around, navigating etc.
Another useful aspect that's underappreciated is seeing the mistakes the programmer makes during the video recording. E.g., the code doesn't compile because of invalid syntax, or a config setting is wrong and he troubleshoots what's preventing it from working. In contrast, virtually all textbooks or coding blogs show "perfect happy path" outcomes. But real-world programming is messy, with broken intermediate states. A lot of videos show the messy steps to get to a working state.
The videos that are not that helpful would be the videos of C++ CppCon conference sessions where there are a bunch of static slides with bullet points and the speaker just reads them aloud word-for-word.
Although I learned C++ from textbooks, I found videos of Matt Godbolt showing tips & tricks of how to use his Compiler Explorer (http://godbolt.org) very helpful.
In summary, the artifacts of coding may be the text, but the activity of coding involves a lot more than just the text and that's why some videos can enhance learning.
Definitely. As long as the videos are uncut, they can be a confidence booster that I'll be able to replicate the result, because I can follow them knowing they won't skip over those little steps that often go without mention. Well, unless they're being sneaky with hotkeys.
These videos are edutainment at best, which is generally not a good way to learn something well enough to be able to actually work with it. A lot of them are pretty much straight up entertainment, where the entertainment value comes from drama and strong opinions. They're totally fine if you know that, but some of their audience does not know that.
I've been seeing more and more of a certain kind of person who is into these videos on some Discord servers, and it is clear that they are driven more by culture and style than by the goal of creating something, or of having a strong understanding of how to make computers do certain things.
> That people like video formats isn't really surprising to me since it's everywhere
That’s because those “people” are either larping students or kids who want to become programmers. I have never in my 10-year career met a person who said “yeah, I learn my craft from Fireship videos”.
Likely these videos did not exist when your reference / age group was acquiring these skills.
Videos are sort of easier to produce (via screen capture), and it's much easier to show the effect of FE things: look, we write code like this, now we can click here, and the page reacts that way. No need to muck with JSFiddle or something.
I'm not a fan of videos as a reference medium, but they can be elucidating to those who needs more hand-holding, literally "click here, see the result". Back in the day, a few short videos about Blender helped me quite bit to grasp certain things that were not obvious from descriptions.
This relates to my parent post - when my generation started with Flash around 2000, there was no literature on how to program in Flash; it just happened.
So we went to the nearest bookstore and got a bunch of other books on programming. For many Flash developers the bible was Thinking in Java by Bruce Eckel. Most of the source material for game programming (and that was the lion's share of Flash programming) was in C++.
I'm not claiming that we were smarter, but by sheer coincidence, most people, even folks like me who skipped school, had very solid fundamentals. That was partially due to the fact that it wasn't that lucrative back then.
Today most people don't care, IT is just easy money, kids have short attention spans, and trends are tailored by TikTok videos. All in all, it's just fashion-driven development.
>I'm not claiming that we were smarter, but by sheer coincidence, most people, even folks like me who skipped school, had very solid fundamentals.
A higher barrier to entry should statistically lead to fewer people making it past, and those who do make it past aren't a random sample of the initial group making the attempt. While the selection isn't only for intelligence, specifically the subsets of intelligence related to programming, I would doubt any claim that it wasn't a factor at all.
It's not about learning (anymore). It's about consuming content. People spending (wasting) their time on X and YT are not there to learn something but to get their social media (dopamine) fix.
I hate YT, X, Insta. Don't even have an account. Some years ago there was really great content on YT, now it's mostly clickbait.
There's still lots of great YT content, much of it by the same producers you allude to, and they need your support more than ever with all the slop around them.
These grifters sell entire courses on the product; that's their game. So when you find an unmaintained Remix app at your company, well, the grifters have the ears of your junior devs :(
Pure grift. But since most people are decent, they don't recognize it and fall for it, and an influencer like this emerges. They have entire Discords of customers, the same as crypto scams.
Edit: I don't know why people would downvote calling out a notable grifter in a thread that extended into a discussion about influencers. WHICH influencers? Are we scared of that topic? The climate of the JS ecosystem didn't happen accidentally.
People fall victim to this shit right here on HN, and then write blog posts about what the fuck is wrong with frontend:
I find Remix really nice to work with, it’s a framework that embraces and utilizes web standards (what the article is arguing we should get back to doing more), and I’ve learned everything I know about it (and the majority of everything else I know about front end dev) for free. It’s not like you need to purchase courses to learn. At the same time, I don’t think there’s anything wrong with selling courses to teach people about a framework. But the idea that the entire thing was created just to sell courses about it is not true.
But I do agree that there's just way too much fast-moving change on the front end in general: breaking changes, frameworks released every other week, etc…
It does. It bridges a purely server-rendered architecture with a SPA really nicely, and does it mostly with web standards. You don't need to run any client-side JS with a Remix app. It's not perfect, but there are a lot of benefits to its approach.
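To make that concrete, here's a minimal sketch of the shape of a Remix route, using Remix's documented loader/action/Form APIs; the route path, data layer, and field names are hypothetical stand-ins:

```tsx
// app/routes/contacts.tsx (hypothetical route)
import { json, type ActionFunctionArgs } from "@remix-run/node";
import { Form, useLoaderData } from "@remix-run/react";

// Hypothetical in-memory stand-in for a real data layer.
const contacts: { id: number; name: string }[] = [{ id: 1, name: "Ada" }];

// Runs on the server; the page is fully rendered even with JS disabled.
export async function loader() {
  return json({ contacts });
}

// Handles a plain HTML form POST via the standard FormData API.
export async function action({ request }: ActionFunctionArgs) {
  const formData = await request.formData();
  contacts.push({ id: contacts.length + 1, name: String(formData.get("name")) });
  return json({ ok: true });
}

export default function Contacts() {
  const data = useLoaderData<typeof loader>();
  return (
    <main>
      {/* <Form> degrades to a regular <form> when client JS is absent */}
      <Form method="post">
        <input name="name" />
        <button type="submit">Add</button>
      </Form>
      <ul>
        {data.contacts.map((c) => (
          <li key={c.id}>{c.name}</li>
        ))}
      </ul>
    </main>
  );
}
```

The primitives here (Request, FormData, plain form POSTs) are web standards, even if the framework glue around them isn't.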
I won't try to argue there's no front-end treadmill: there absolutely is, and I had to laugh reading the current top comment because I just had to migrate off Apollo CLI at work.
But this "The web was perfect in 1999--stop changing things!" take is tedious and intellectually lazy. (I'm not accusing you of it, but it's certainly a common sentiment.)
We should be working together to solve concrete problems, and avoid both chasing the latest fads and pretending there's no room for improvement.
It’s a nuanced topic. If we want to dive in, I can provide a glimpse into the first layer of the anus as we stick our head into it.
When we shepherded a lot of sheep into frontend via these courses and boot camps and quasi courses/bootcamps in the form of certain frameworks (hey, you only know this one framework?), we created a cohort of something.
Now what is that something? It’s not really the tinkerer that loves doing this stuff and would have found a way to express themselves (please pay attention to the word “express”, as in, can’t help it). That something was … a pragmatic identity. A pragmatic identity was formed where “I am now a software engineer because I and my cohort agree, we really know how to do our stuff”.
Such a cohort can only be fueled by identity, not passion. This cohort can’t innovate and must cling to the identity of their original accreditation, so they will always be defensive.
That’s the first layer of the asshole as we enter it; it goes deeper. The second layer involves large amounts of money and people’s livelihoods, which they’d defend unto death.
Okay? I'm having a lot of fun talking about some of the parts of our circus. I can't change anything. There will be new cult leaders (evangelists) for frameworks, and new cohorts, we can't change the past. Just pay attention to the rough framework (no pun intended, swear) as it happens again, and try our best to call it out, because it didn't always lead to great outcomes.
Money will be made on all sides regardless and we will all be fine financially. I'm talking about something else, inner. The infinite anus, asshole, is real - but now I'm just projecting.
IMO, the pain from "mostly" starts to show when integrating React Router v6 with legacy frameworks and applications. I'm sure if you go all in on React Router v6 it's great.
At my $DAYJOB we are migrating to Remix w/ GraphQL Federation. It's been a pain.
Especially because we haven't finished any of these migrations:
* ExtJS -> jQuery
* jQuery -> React class components
* React class components -> MobX-observed components
* MobX-observed components -> functional React components with context
* Functional React components with context -> React Router v6
* React Router v6 -> Remix w/ GraphQL Federation
I understand my situation is unique - I'm just bitter from needing to know ~6 different frontend technologies at once, to say nothing of all the Not-Invented-Here-Syndrome abominations in our codebase.
It's not that unique. The one enterprise app I worked on (that was started with Rails 1) had all of: Prototype, jQuery, Backbone, Angular, React, Handlebars AND mustache, vanilla CSS, SASS, CSS in JS (or whatever it's called). I wouldn't be surprised if they've introduced Tailwind at this point.
It actually wasn't even THAT bad considering how huge it is. People still complained (admittedly myself included), but it had been TDD'd from the start so had very good test coverage, at least. Also, some people who had worked on really massive Java applications called it "really good!" so it's all about perspective, I suppose :)
Your last note adds not-invented-here abominations… if chasing the endless framework of the month is bad, and building stuff in-house is bad, then what do you propose to avoid making this mess?
Skip a couple of framework versions, and indeed entire frameworks. Maybe go a couple of years before you "upgrade" to something else. It is entirely possible you could go as much as 5 or 10 years on something. You'll still have to evaluate and potentially mitigate some CVEs, but that could actually be less work and less aggravating.
My point being, it's "based on" Web Standards, it is _not_ Web Standards.
What if I use `fetcher.submit(data, { encType: "application/json" })`? Why not just use fetch directly at that point? I believe it adds a layer of indirection that just wasn't there before.
If web standards are so important, why don't we use `window.fetch` and `new FormData()` directly instead of wrapping it?
My favorite example of this being JSON gets converted to FormData on the frontend, which then gets POST-ed to the server, which then converts it to JSON on the backend.
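To illustrate the roundtrip being described - a rough sketch, with a hypothetical endpoint and payload:

```ts
const data = { name: "Ada", email: "ada@example.com" }; // hypothetical payload

// The form-data route: serialize the object into FormData on the client...
const formData = new FormData();
for (const [key, value] of Object.entries(data)) {
  formData.set(key, value);
}
// ...submit it (e.g. via fetcher.submit(formData, { method: "post" })),
// and on the server, reassemble the object you started with:
//   const body = Object.fromEntries(await request.formData());

// The direct web-standards alternative: JSON stays JSON end to end.
await fetch("/api/contacts", { // hypothetical endpoint
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(data),
});
```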
I think you're mistaken. I can't comment on the quality of Kent C. Dodds' educational content, but his formal affiliation with Remix was short-lived. The courses that he sells have no apparent affiliation with Remix (the open source project or the company).
Incidentally, Remix is an open source project started by the React Router devs to create more framework features around React Router. React Router is probably one of the most widely deployed JavaScript libraries ever and is the furthest thing imaginable from a project created by grifters to sell online courses.
Remix was also a company that raised a $3 million seed round and then was acquired by Shopify (for presumably much more than $3 million). Shopify appears to continue to invest heavily in Remix and React Router development, and appears to use Remix heavily.
I don't think it's weird to like a piece of software and have that lead you to work at the company that builds the software and also to develop an educational course about that software.
There are only a few popular, promoted alternatives to NextJS right now (that I know of): Remix and TanStack. That is, if you're fully React-focused, ofc. I don't see promoting Remix as a red flag.
Promoting it? No problem. But promoting something you profited from without disclosing it violates FCC rules for broadcasting. I would say influencers aren't technically broadcasting but they are in principle.
> React Router is probably one of the most widely deployed JavaScript libraries ever and is the furthest thing imaginable from a project created by grifters to sell online courses.
This is a funny example (to me) because in 2017, one of the two co-creators of React Router (Michael) came to my job and gave a two or three-day in-person training course on React. I think he also covered Redux and React Router. We had a great time getting to know him.
It turns out that Ryan and Michael spent a substantial amount of time and effort on a side business called React Training. It is fair to say that their speaking engagements were a solid revenue stream, but agreed - definitely not grifters.
In case anyone isn't familiar with Remix: bloomingkales seemingly has no familiarity with the framework. Obviously it hasn't been created as a conspiracy to sell training courses. The idea is ludicrous.
It's quite a nice framework. It's easy to learn, it's straightforward, and the people in their Discord are very helpful. It has the backing of a large company (Shopify) who are using it extensively.
It is, I'll say again, obviously not a conspiracy to sell training courses.
I get why you might feel that way. Ryan and Michael used to run a company based around React training. They created React Router which some people love to complain about. They've since moved over to working for Shopify. Shopify pays for their development on React Router/Remix. They do NOT sell training anymore.
Kent, on the other hand, worked with them for a short time. He makes his living selling training. Filling in a gap (selling training) isn't really a grift, is it? The dude's got a family and he's found something he can sell.
E.g. react-router was ready 5990 commits ago. It is a grift: they keep rewriting it and re-engineering the API over and over and over again just to be able to sell more training.
Look at wouter for what is possible if your motivation isn't selling training material. It was written and left alone, it works just as well, it's stable and doesn't change for no reason.
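For reference, this is roughly what wouter's documented surface looks like - a minimal sketch, with placeholder page components:

```tsx
import { Link, Route, Switch } from "wouter";

// Placeholder page components (hypothetical).
const Home = () => <h1>Home</h1>;
const About = () => <h1>About</h1>;

export default function App() {
  return (
    <>
      <nav>
        <Link href="/">Home</Link> <Link href="/about">About</Link>
      </nav>
      {/* Switch renders the first matching Route */}
      <Switch>
        <Route path="/" component={Home} />
        <Route path="/about" component={About} />
      </Switch>
    </>
  );
}
```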
You asked if the react-router team sold courses; they sold consulting services. It seems like a conflict of interest to sell consulting on a tool you built while introducing breaking changes (but hey, if you need quick help, throw us a few dozen grand).
I guess that's fine for you but it's very smarmy IMO.
I think that adds to my point. How does it have so many stars on GitHub? The customers "star" it. Who uses this on a real app? It's alright to slowly accept the bitter truth that grifting scales.
Not really sure that's relevant. Grift implies an intentional value extraction without providing anything. Using your example: I'm confident that the time spent working on Remix and courses related to it resulted in far less monetary gain than spinning out courses on React. If you think Remix is misguided or a bad framework, etc., that is very different from grifting.
A corollary: is Deno a grift because it shares the same creator as Node and has a paid product attached to it? In my opinion no, but you might disagree... I'm mostly opposed to the idea that Remix in particular exists purely as a grift - love it or hate it, there are far easier ways for someone with the influence of Kent to make money.
That was the end goal for this whole thing. I do look at the pricing page (what are you trying to sell constantly?) on anything people put up on the internet and judge from there. You can have the last word and put in a testimonial for Remix, since I won't be budging on this. It's a rabbit hole for both you and me to keep going at this, as I've seen enough of this pattern. Consider me a neural net on this front (end).
I'm not interested in writing a testimonial for Remix, merely commenting on the absurdity of calling a project of this scale as nothing more than a grift to sell educational content. There's no reference to these paid courses anywhere on the landing page, there's no callout for paid courses in the main navigation. The only mention of tutorials at all is buried in the community section which leads to: https://remix.guide/ which seems to be unaffiliated with the Remix team, and has no section advertising paid courses anywhere. You're talking about a framework that has been acquired and subsequently used in production by a global company in Shopify - clearly there is something to the framework beyond being a vehicle for tutorial sales.
Again, I want to be clear: This is NOT an endorsement of Remix. Your line of thinking seems to be conspiratorial and not grounded in reality. You mention repeatedly about pricing and the end goal of funneling noobs toward course purchases... One would assume that in conspiring to sell courses the team behind Remix might actually advertise that they have courses for sale on their website.
I have to be honest, as a third party who (a) doesn't work with Remix, (b) doesn't know anyone who works on Remix, and (c) doesn't know you: it seems like you have a personal vendetta.
No personal vendetta. We sit here and punch the mysterious air as to why things are the way they are. I thought maybe we'd punch up at something that is plausibly a culprit. I'll admit it may be punching down, since this is just one dude. But then again, it's one dude who influenced a lot of people ...
We can't just keep sitting here and blaming developers for being
1) New
2) Dumb
3) FOMO
4) Dumb
5) Unqualified
You understand? It's worth looking at what content they are consuming and where the mindshare is being promoted from. It's worth asking who is selling them the idea of these frameworks.
> We can't just keep sitting here and blaming developers for being New / Dumb …
Well, as a cohort, I think the ratio of inept programmers to skilled programmers stays mostly constant regardless of stuff like this. Like, if programming is hard to learn, fewer people will try and learn it. But also the skill bar goes up - so people spend more time as inept developers before they’re skilled. Likewise if programming gets easier to learn, we get a swell of fresh faces eager to become frontend developers. And the ratio stays more or less the same. It’s kinda like a sales funnel, or a hiring funnel. You always have more leads in your funnel than conversions. (And if you don’t, you’re in trouble!)
We live in an anti-gatekeeper era. Content is free, but nobody protects you from wasting your time watching edutainment. The downside of that is real - lots of people waste countless hours larping as students. But the upside is real too: it's easier than ever to learn anything.
>Grift implies an intentional value extraction without providing anything.
Is it without providing anything, or a value extraction greater than what one is providing?
If the former, it makes the definition very easy to check, but it also makes it very easy to avoid grifting by providing even the most minimal value, and it leads us to needing a new word for providing some value but extracting more than provided (perhaps intention should be included). If that is the case, might I suggest "jrift"?
I can tell you that your response is at least relevant for me because I happen to be working with Remix right now, not because of any influencers but just because I happen to be working on a Shopify project. I've seen lots of frameworks come and go and evolve, so I'm not surprised that this one changes a lot, but I always enjoy getting opinions from people with experience. Whether or not I'll end up resenting it in the future, I don't know, but at least I'll have been warned.
The fact that there's influencers for everything nowadays made me realize I'm old.
It's super useful that everyone is sharing their opinions and expertise to get that sweet 5 minutes of fame - I just learned how to tile my bathroom after watching a slew of TikToks on the subject, some with millions of views.