As a society we find violence or harm against children to be extremely shocking and tragic. As a society we would do almost anything to prevent it.
Giving children the kinds of freedoms discussed in this article would lead to some harm coming to them. Accidents, violence, kidnap, etc.
Therefore, society won't give them those freedoms.
This tendency has been exaggerated by mass media in the modern era. Every single case, every piece of anecdata, makes massive headlines and instills fear in parents everywhere.
It's impossible for society to reverse course because that would mean acknowledging, implicitly or explicitly, that some level of harm for some children is justified by the developmental benefits to all children of increased freedom.
Well, this doesn't exactly check out when you compare with countries and cultures that already give children these freedoms and have the built environment to support it (transport, walkability, parks/spaces to play, etc.). They are not only "just fine", but their children are generally much more mature and confident (as in skilled) than their Anglo counterparts.
France, the Netherlands, Denmark, the former communist bloc come to mind. It was one of the things that really stuck out to me: how mature/well behaved and independent/confident the children were there (for the most part). There are even books on the topic (Bringing Up Bébé, The Danish Secret to Happy Kids), which I was not aware of until after spending a lot of time in those places. A lot of it has to do with the built environment - kids can literally walk out the front door to parks and stuff, unlike in suburban places - but also culture.
> Giving children the kinds of freedoms discussed in this article would lead to some harm coming to them. Accidents, violence, kidnap, etc.
Not really. Most of the violence against children originates from adults within and near the family.
The American public has allowed some rare stories that made it to the news, and the Satanic panic of the 80s, to shape their worldview.
A survey in 2021 found that 15% of Americans, some 31 million people, believed the government was run by Satan-worshiping pedophiles.
To sum up, children do face risks of violence and sexual abuse, but it's mostly from trusted people in their environment; the risk of some random person kidnapping a child off the street is rather low.
Now, given that society has decided to keep children locked away, letting your kid run around is not really a viable option; it's a collective action problem.
I think you're onto something here. We do a lot to protect children, but we've outsourced that protection to institutions—police, laws, politicians—rather than building it into our communities. If we had stronger communal networks where neighbors actively looked out for each other's kids, parents would feel comfortable letting them roam. Without that social fabric, we're stuck with a binary choice: either rely on law enforcement to intervene after something goes wrong, or keep kids sheltered at home.
The fact that you think heaven and earth have to be moved over a single school shooting (as terrible as it was) is a symptom. School is the safest place where children are. They're more likely to be murdered and abused at home.
I'm 50, but when I was a kid I took the bus by myself: when I was 12, on the South Side of Chicago, which had an order of magnitude more violence than your kids will ever be exposed to. I ran around with my friends within a block or two, we were big into BMX bikes. Kids got hurt periodically, but sometimes kids will get hurt; sometimes kids will die. A few hundred years ago, most kids would die. Now we (not we, but mostly a certain demographic whose aesthetic is imposed on everyone else) find it unacceptable to hear about kids dying on television, nowhere near us.
It was bad when 50% of kids died, but I've come to believe that whatever the number is now, it's too low. We need the number of kids who die through "death by misadventure" to go up. Raising kids in a box leaves them dumb and unsocialized. Kids need independence, and to be allowed to make some bad decisions with real consequences. They need to be able to fall off the monkey bars.
We're close to the same age and I grew up on the south side of Chicago, so I'm cheekily going to ask if by "a bus on the south side" you meant Hyde Park or Beverly. :)
>The fact that you think heaven and earth have to be moved over a single school shooting (as terrible as it was) is a symptom.
no, the symptom is that kids have to normalize lockdown drills and learn how to barricade a door to protect against shooters. I had earthquake and fire drills as a kid, but we weren't acculturated to accept that school shootings are a thing.
Now we can expect 4-5 school shootings a year, documented year after year. Trying to make this equivalent to "well, kids get hurt" completely misses the point. But I'm not surprised at this point.
Maybe I'm an idiot, but I was really struggling to understand how this worked from the website, which seems to have no photos or video of it actually operating.
I searched youtube and found a youtube short of it being used.
First you have to use another small machine to slice up a plastic bottle into a spool of roughly formed plastic spaghetti. Then you feed it through the nozzle on this machine to draw it into a properly formed filament.
Having to slice up every plastic bottle by hand before you can run it through seems like a real cost to using it in any kind of bulk (I was originally trying to work out whether it melted them down, which would have alleviated that).
There's a small 3D-printing niche of PET #1 bottle cutting specifically for filament making. For whatever reason, they mostly seem to organize on Facebook.[1] A lot of these builds can be made out of scrap components.
The Recreator3D[2] somewhat automates the bottle cutting, feeding, and filament making. Older versions repurpose a specific 3D printer's hardware, but the MK6[3] uses largely off-the-shelf parts.
Igor Tylman sells kit and fully-assembled PETmachines that cut, feed, and pullstrude filament.[4]
(Worth noting that these pullstruders are all separate from the methods of shredding or grinding plastics into granules/pellets and feeding an extruder, e.g. https://www.youtube.com/watch?v=nQ0UyWKafAw, whose videos on the subject are good overviews of the process but tend to annoy me because they use pretty hobby-accessible tools for everything except the critical grinding step, for which he wheels out a sponsor's $10,000 commercial plastic grinder.)
I think this is kind of a bad take. There have been plenty of "monster catching game"/"pokemon with the serial numbers filed off" games that have been somewhat successful, but no smash hits. "Cassette Beasts", "Nexomon" and "Temtem" spring to mind. From larger studios there has been stuff like "Monster Hunter Stories" and "World of Final Fantasy".
Palworld is really a "survival crafting" game and is closer to a game like "Conan Exiles" which has a similar gameplay mechanic of capturing slaves to put to work in your base.
What made Palworld stand out was the shock factor elements of "Pokemon with guns", "make pokemon work as slaves in a factory" and "grind up pokemon for meat", which streamers were able to convert into clickbait thumbnails and views.
> There have been plenty of "monster catching game"/"pokemon with the serial numbers filed off" games that have been somewhat successful, but no smash hits.
Notably, the Megami Tensei series, which predates Pokemon.
I enjoy playing Palworld. I can’t stomach the thought of playing yet another Pokémon as I know it will be the same game with differently named gym leaders. Palworld is fun because it isn’t Pokémon. That YouTubers will be YouTubers has nothing to do with it in my case.
My biggest question is "why would I buy this and not Aseprite?". Aseprite is very well established in this space and this new tool doesn't seem to have anything unique to offer. Not that building a new tool isn't a good project, but when you're selling it you probably need some distinctive features, certainly if you want to win over existing users.
Aseprite is a very good product! For me personally I don't like the interface, but that's just personal taste, and maybe there are others out there who might prefer something like what I've created; I would hope so. Lightcube imports PSD, which last I checked Aseprite doesn't do(?), text editing is inline as opposed to a text field in a window, and the shape tool includes an isometric ellipse, which is a pain in the butt to draw by hand. Bottom line is 80% of any pixel art editor is the same functionality as any other editor, but once you've established that you can start to add the innovative ideas and start to differentiate. Well, that's the plan anyhow. Either way, competition is a good thing!
I think creating something for fun is good in itself, and the app looks good.
Personally I wouldn't use it since I use Linux and prefer OSS, but the app looks really cool. Congrats!
Aseprite files can also be ingested directly by tools down the line. Godot, for instance, with a plugin, can set up all your animations directly from an Aseprite file.
I think both Aseprite and Graphics Gale are great tools for pixel art. If you have Aseprite on Steam, you can try the experimental branches to try features before they are fully fledged. I've been using the tile branch and playing with it for some time now.
You can also compile it yourself if you want - even though I bought it on Steam, I have my own build too, which I use on an old Linux computer when I want to draw in a non-distracted, almost offline way.
I recently tried Literate Programming and I've found that it has some downsides not commonly discussed by its advocates. (It also has upsides which are valid, this post may come across as overly negative because I'm only covering negatives.)
* It messes with tooling. If you're lucky then your editor will be smart enough to syntax highlight inside code blocks, or can be taught to do so easily. It's unlikely that more advanced IDE-style features will work. Obviously tools could be adapted with custom plugins but literate programming isn't popular enough for these to already exist as far as I'm aware.
* It adds overhead to refactoring. Refactoring code now means editing and rewriting a book, doing a good job requires significantly more effort. If you have designed the program thoroughly ahead of time you can avoid this, but that's not generally the way most programmers work.
* The "includes" problem. Most language require you to include/import/require packages used in a file. These are usually all placed at the start. This means a lot of chapters will start with "here are all the includes we'll need" if you're using a linear format. More complex formats that rearrange the code to generate outputs can do a better job but it's still a little clunky.
Overall my conclusion was that literate programming works best if you write code like Donald Knuth: work solo, have a good idea of the entire design and a fixed scope which you can complete to declare both book and program "done forever". Unfortunately that doesn't cover most real world programming. I don't think it works well in the most common commercial/programming team style settings.
> It adds overhead to refactoring. Refactoring code now means editing and rewriting a book
In my opinion, this is a good thing. The problem with ordinary refactoring is that it's too easy to just shuffle things around until they work/seem nicer/do whatever you want. However, with literate programming, you have to think about why you're refactoring and how it reflects in the explanation.
I use it in a commercial team to write technical documentation. I write it while, for example, exploring an API that I am integrating with. It produces a nice little document with the important integration APIs, limitations, open questions, and integration decisions, as well as interactive ways to check certain parts of the integration.
I do this with Typescript (deno) and a neovim plugin called sniprun.
> It messes with tooling. If you're lucky then your editor will be smart enough to syntax highlight inside code blocks, or can be taught to do so easily. It's unlikely that more advanced IDE-style features will work.
Probably a niche example and the exception that proves the rule, but in Racket, the included literate programming (Scribble/LP2) is itself a language implemented in Racket. Racket’s IDE and tools for exposing and inspecting syntax work just as well in that environment as in any other Racket-implemented language.
Org Babel is really nice, but it doesn't really solve the problem; rather, it sidesteps it by giving you the option to edit a block of code in a dedicated buffer. I don't know how well that works with LSP (I don't use LSP), but it does allow you to make full use of SLIME.
There is no tooling for it because no one has tried to build it. All you need is a markdown file with links to the code bookmarks. Every modern editor can resolve those links to the actual location of the code.
>* The "includes" problem. Most language require you to include/import/require packages used in a file. These are usually all placed at the start. This means a lot of chapters will start with "here are all the includes we'll need" if you're using a linear format. More complex formats that rearrange the code to generate outputs can do a better job but it's still a little clunky.
Real literate programming, rather than rich text comments, can order the code blocks in any order. You can add them at the very end of the book/chapter/section if you feel like it.
I did some literate programs about a previous generation of our program. Every new hire reads them and asks me for copies.
> Real literate programming, rather than rich text comments, can order the code blocks in any order. You can add them at the very end of the book/chapter/section if you feel like it.
Does it matter though? The primary purpose of using literate programming is expository. If Knuth feels that putting the includes at the start makes sense, then that's what he does. If you don't, then fine you move it to the end or the middle. That's the benefit of WEB (and WEB-derived systems). The presentation order is not dependent on the code order, it's dependent on what makes the most expository sense.
When I've written literate programs, I almost always shuffle long lists of includes to the end. They add little or no value at the top and distract from the material I want to present. It's also a simple cut and paste to move them back to the top if I wanted to. The only time I leave them at the top is if there's something actually informative about having them at the top, or it's a short program, or I've not actually finished working on it (a lot of my literate programs start as traditional "live in source files" programs that I slurp into org files).
> The primary purpose of using literate programming is expository.
Right. A big, zero-context block of includes, prefaced with a comment that you should just "skip ahead" past it to the "interesting stuff" and offering no insight about the stuff you're skipping over, is the opposite of expository. The main program, what it actually does, is sufficient to serve as exposition for the includes it ends up needing. So actually write it that way.
> That's the benefit of WEB[...] The presentation order is not dependent on the code order, it's dependent on what makes the most expository sense.
The argument is that it doesn't make expository sense to write the includes the way Knuth does. It's just yet another form, as taeric observes, of boilerplate, which makes us a slave to what the (LP-unaware) compiler expects, and which LP is supposed to be liberating us from...
* * * *
It strikes me that there are probably only two ways to really deal with includes in the spirit of literate programming: either (a) put them at the end after having already shown us why they're necessary (accompanied with, e.g., a comment along the lines of "Recall that our program is using printf, a part of the C standard library defined in stdio.h, so it's necessary, therefore, that we include it in order for our program to compile"), or (b) use WEB/CWEB's appending facilities: after having written a routine that uses printf for the first time, you write an interlude, not altogether different from what I just described, that adds that include to the "running sum" of necessary includes, which the tangle step will take care of as part of the build. A sketch of (b) follows below.
(I say "only" because these are the only ways that occur to me that it can reasonably be done. I'm not committed to that being true—I'm open to the possibility of there being more—and it's not as if I started with that in mind, but it's hard for to imagine others. The only thing I'm really committed to is that what the article says about Knuth's two examples being wrong is accurate. I know Kartik has softened his stance since originally writing it, but I haven't. The examples given are individually each an awful way to demonstrate LP considering how antithetical that is to the whole thing. Totally indefensible.)
My point, which you missed: It doesn't matter what Knuth does.
thr-nrg rightly pointed out in their comment that you can place the code wherever you want. You then say, "Knuth puts it at the top" (meant as a criticism of his style).
My point (so you don't miss it again): You can place the code wherever you want when doing a literate programming style. It also doesn't matter what Knuth does with his includes because his style does not dictate your style or my style or anyone else's style. So put it wherever you want. His style does not matter.
I didn't miss it. I replied to your question of whether it matters: yes, it does.
Your point doesn't make any sense. ("_You_ can place the code wherever _you_ want when doing a literate programming style.") Literate programs are supposed to be read. Of course a writer can write however they want. You could write a book where everything that isn't a proper noun ends with an uppercase letter, except for proper nouns, which are all lowercase. It will be shit.
What matters is the exposition. Does the presentation (order, style, etc.) communicate what you intend for it to communicate, and does it communicate effectively. That's what matters.
Does it matter what Knuth does? Nope. I don't care for that aspect of his style. I already said I put the includes (when a text gets past the draft stage at least) toward the end. We're in violent agreement with respect to how we (if I understood you correctly) often or generally put them into LP texts.
You're just weirdly caught up on criticizing Knuth's style for some reason. Have fun shouting into the wind. The rest of us know, it doesn't matter. The goal is to communicate effectively and not whine about things that don't matter.
I don't think I'm the one missing the point here. You're adamant that I'm complaining about Knuth's style. I'm complaining that programs he's written don't show us that exposition matters (rather than him just telling us that it does).
Exposition does matter. He's not doing it. That's the problem.
Dumping a big lump of include statements or defines at the beginning of the text and then saying don't worry about it isn't an example of someone who actually believes that exposition matters. It's incontrovertibly devoid of exposition—not just a matter of it being one's "style".
> Exposition does matter. He's not doing it. That's the problem.
I agree. Here's an example I give of an emacs configuration that does use exposition (organized around use-cases), as opposed to the typical package-by-package style, which I don't find especially useful:
* My emacs use-cases
** Version control with Git
*** use magit as a porcelain
#+begin_src elisp
(use-package magit)
#+end_src
*** enable easily showing history of a file
#+begin_src elisp
(use-package git-timemachine)
#+end_src
I'm not entirely sure that trying to not have boilerplate of any kind is a worthy goal. Consider: no book wants to get rid of the title page. Even a dedication page has grown to be a near-required page that we literally teach students to write.
To that end, the goal is to not have parts that can be the same between programs vary. That has pedagogical value. It's a type of stability that we don't really value much anymore.
Though, I also give a +1 to jtsummers' point. Knuth has a style of writing literate programs. And if you find some old videos of him reviewing students' code, he explicitly calls out points of style that he likes from different students and how they work together. He didn't necessarily want to push a "this is how you style your programs," but instead was working out how to turn the way you talk about programs into a way you can also write them, without necessarily having to relearn either.
> I'm not entirely sure that trying to not have boilerplate of any kind is a worthy goal.
I think it's a worthy goal for nearly all readers. However, I also believe that literate programs should be written with many different readers in mind.
So you could have one version that is written in a "typical code" style.
That doesn't need to be where it ends though and arguably shouldn't be. Write multiple books using the same code but represented differently as it's justified.
For example:
* typical program style
** library 1
*** module 1
**** source file 1
*** module 2
**** source file 2
* new to language X presentation of program
** link to various parts of library 1 alongside prose to describe concepts to language newcomer
* typical hire presentation of program
** business domain focused exposition presenting code via transclusion as needed from typical program style
People will inevitably complain that what you're advocating for involves too much duplication, but I've independently reached the same conclusion that you have. Besides, in a good LP system, the toolchain would be able to consume the second part and transform and emit the first part.
> Consider: no book wants to get rid of the title page.
Lists of imports are not title pages.
> the goal is to not have parts that can be the same between programs vary
Lists of imports vary. (And in today's world of extreme software reuse, which other pieces of software a program depends on is of greater interest than ever.)
I'd take your point on boilerplate if it were necessary and unavoidable. I've already described, however, that it isn't and how.
Depends on the list? The ones you linked from Knuth are basically fixed imports that are, in fact, very common and easy to take as a whole.
I think the disconnect is still that you are writing a C program with CWEB. If I'm using a literate style to write JavaScript, I'm still writing a JavaScript program. As such, the point isn't to just throw any established norms out, but to rearrange for presentation. If some items go well together, might as well keep them together.
Most programming tools that can do this require you to "bundle" things such that you have to have well formed containers, if you will. Having the "add this to a section" constructs really help, as then you can say things like "add this to the imports" the first time you have need of a new import. But, if you don't add any non-standard imports, you can forego doing that.
The other approach is to try and make everything implied, such that new users can skip any of the boilerplate and jump straight to programming. That is fine, for what it is, but does little to build understanding of the programs as a whole.
That said, I may have missed your described solution. I'm kind of scattershot today and not keeping fully on top of this. Apologies if I'm talking past you.
I missed this conversation earlier, but see my response to that post here: https://news.ycombinator.com/item?id=29871047 (also see https://news.ycombinator.com/item?id=30762055) — in short, literate programming is writing; writing is done with an audience in mind; you don't necessarily have to explain everything; Knuth only chooses to explain the tricky stuff rather than language features or obvious (to him) #includes, especially in these throwaway programs that he wrote for himself.
Note the end of the sentence you quoted: “if you feel like it” — Knuth clearly doesn't feel like it in these programs. (BTW, it appears he generally adds a comment next to each #include mentioning why it's being included.)
Literate programming, for Knuth, is supposed to be a fun way to write programs that leads to them being easier to write and read and debug and maintain; it's not some sort of purity discipline that requires everything to be maximally expository.
> Now, writing is always (best) done with a specific reader in mind: you assume the reader has a certain background/prerequisites: some things that don't need explaining, and some things that do.
I don't buy it. The same explanation can be used to excuse the use of traditional (non-LP) programming systems—raising questions about why make a fuss about LP at all at that point if an LP text is going to treat the same types of shortcomings as a given.
As the commenters on Ward's wiki pointed out, Knuth's examples come across as still being written for the compiler—what he's mostly succeeded at is just coming up for a different syntax for it to consume in a roundabout way.
> Depending on the reader you're targeting (e.g., yourself a few years from now), and how polished you're trying to make your presentation, you may well choose to take it for granted and not bother explaining that a C program will have some obvious #includes at the top.
But of all the things that Knuth could explain, that's the one thing he does explain. The problem isn't a lack of an explanation that the program will have some includes. It's the impertinence of immediately dumping a list of includes on the reader, a total failure of exposition.
We don't have to focus exclusively on the includes to see this problem. The define at the top of the Symmetric Hamiltonian cycles exhibits the same thing just as clearly.
I have another theory that I've floated, which is that Knuth realized that there's something wrong with how traditional Pascal and C compilers force you to write/read your programs bottom-up, but by the time he came up with literate programming he was already so warped and tainted from years of doing work in the bottom-up tradition to satisfy the compiler that it ends up clouding his vision, even when he knows that's the mindset he's trying to actively work against.
It seems that your main complaint is that in Knuth's literate programs, he does not explain enough. To which my answer is: Sure! LP doesn't mean you have to explain everything; you can choose to explain as much as you want. It's not a "moralism" like "structured programming"; it's just a tool.
I think everything you're saying is along the same lines:
> The same explanation can be used to excuse the use of traditional (non-LP) programming systems—raising questions about why make a fuss about LP at all at that point if an LP text is going to treat the same types of shortcomings as a given.
Firstly, "excuse" seems to suggest an accusation that something is wrong with non-literate programming, which has not been made except as a joke — LP is just presented as a tool, with the expectation that it won't work for everyone. (Knuth is against moralism in programming style, and has often complained about it, e.g. in the context of defending GOTO and comparing pointers and what not.) So your question is basically asking: why attempt to explain at all, if one is not going to explain everything? The answer is, even without explaining everything, explaining as much as he does seems to work for him; he's been doing this for 40 years now and continues to rave about it.
> But of all the things that Knuth could explain, that's the one thing he does explain.
(I didn't understand this part.)
> immediately dumping a list of includes on the reader
As a rule of thumb, Knuth has said somewhere (maybe in the early days of LP, for TeX), that he targeted around 12 lines per LP section. So I would imagine that if he ever wrote a program that had more than about 12 includes (which is pretty hard to imagine :-)), he would split up the list of includes into multiple sections, presented separately. Below that (like the four or five includes in these examples), there's not much value in splitting it up further. I guess there's a lesson/hint here that splitting something up into sections is not "free" and below some threshold starts having more cost than benefit. (Just like the cost with having lots of short functions in modern-day non-LP programs: Ousterhout's A Philosophy of Software Design has some succinct words about that: https://web.stanford.edu/~ouster/cgi-bin/aposd2ndEdExtract.p....)
In any case, not having separate sections for #includes is not a failure of imagination: if you look at the programs on his webpage, the very first example ("used as a handout for a lecture on literate programming") has #includes separated out: https://shreevatsa.github.io/knuth-literate-programs/program... — note that in section 2, after a self-mocking joke ("First we brush up our Shakespeare by quoting from The Merchant of Venice … This makes the program literate."), he uses printf, and in the next section includes stdio.h ("Since we’re using the printf routine"), and then in section 7 he has another include, with the words "UNIX’s localtime function does most of the work for us, but we need to include another system header file before we can use it." So the fact that he presented includes like this in his early demonstration program (October 1992), and does not bother to do so in later programs (including last week: May 2023) seems worth thinking about.
> The define at the top of the Symmetric Hamiltonian cycles exhibits the same thing just as clearly.
As mentioned before, this is one of the programs that "use the conventions and library of The Stanford GraphBase". If you look at the program (https://shreevatsa.github.io/knuth-literate-programs/program...), it starts with "We use a utility field to record the vertex degrees" and then has "#define deg u.I". This is in fact part of those conventions — in the SGB book index, there's an entry for "utility fields" pointing to pages 38–39 and 284, where this is explained:
> Every Vertex record contains eight subfields. We have already mentioned name and arcs; the other six subfields are called utility fields because they can be used for many different purposes. […] The six utility fields are named u, v, w, x, y, z, and their five possible interpretations are distinguished by adding one of the respective suffixes .I, .S, .G, .V, .A; thus, for example, v->w.I stands for the integer in utility field w of the Vertex record pointed to by v.
> Utility field names are usually given meaningful aliases by means of macro definitions. For example, GB-GAMES defines nickname to mean y.S.
and so on, and the book is full of definitions like "#define source a.V" and "#define back_arcs z.A" — so when the SHAM program begins with "we use a utility field" and then "#define deg u.I", this is established convention, not some wild thing pulled out of nowhere. When Kartik says “presumably a struct whose definition — whose very type name — we haven't even seen yet. (The variable name for the struct is also hard-coded in”, all of these presumptions are wrong: `u` is not a hard-coded variable name but the name of a field in Vertex (over the course of the program we can see v->deg, x->deg, u->deg, a->tip->deg: there's an index entry for "deg"), and the Vertex struct's definition and type name are well-documented (in a published book). I find it easy to take at face value that this was really the order in which Knuth thought of things, and also the level of exposition that he finds most useful (for his intended audience, which is himself).
This points at a problem with literate programming, that I also mentioned in the other thread: because it can be so personal, everyone who uses LP has their own idea of what is most worth explaining, and even ends up building their own LP tools. (Similar things are said about every Lisp programmer ending up with their own idioms and mini-languages.)
> he was already so warped and tainted from years of doing work in the bottom-up tradition to satisfy the compiler that it ends up clouding his vision
Another way of saying this is that Knuth never believes in hiding the fact that you're writing code that a compiler will read and that a computer will execute — in fact he continues to annotate variables with "register" even though compilers ignore it, simply because he likes to be conscious about what instructions the (ideal) machine will execute — he's not a big believer in abstraction and hiding the details by passing to a higher level; he believes in (and somehow manages) constantly being aware of all levels at once.
Yet another way of saying this (my theory) is that when machine code / assembly programs became too hard to maintain, the rest of the world solved the problem by, over time, settling on abstraction, interfaces, higher-level languages, style conventions, information-hiding, and all that. Instead, Knuth has forged his own programming path/style, still basically writing machine-sympathetic programs, but "explaining to human beings what we want a computer to do". It is up to others to merge LP's human-orientation with mainstream ideas… if it will ever happen.
Anyway, it seems that the criticisms we're discussing are mostly based on first impressions. While they are valuable (and indicative of what others' first impressions will be), ultimately I think criticisms based on studying these programs more closely would be more interesting. E.g. has his LP style evolved over time? What can we learn from studying recent programs (there are over a dozen since 2020 alone, like the program above or others posted on his webpage and which I just typeset yesterday at https://shreevatsa.github.io/knuth-literate-programs/program...) — what can we learn from what he chooses to explain and not; why does he make those choices; what way does this seem to help him: can we use these insights (not the same style) for our own programming practice? Things like that.
> It's been 9–10 days here and I'm not sure anyone will read this (I'm surprised HN still lets me reply)
HN gives you 14 days.
(I wish it had a feature to "mark" a comment in lieu of writing a reply then and there. When 12 or 13 days have passed, it'd send you an email reminding you that this is your last chance to add what you have to say to the record. In this way, it would favor people going away and coming back to leave more thoughtful comments instead of impelling them to reply instantaneously i.e. to avoid the risk of not being heard.)
I suppose the hack would be to set the delay in your profile to 60 × 24 × 12.5 = 18000 (minutes), post your reply, then reset it to its initial value. You'd progressively edit your comment as the opportunity comes up, and then it would be autoposted in whatever form it's in when the delay elapses. You can only edit posts for up to an hour (or something), though, and I'm not sure whether that timer starts from the time you hit submit or when the comment appears (i.e. after the delay has elapsed).
And? Knuth also used Pascal for the original Web and TeX. Both are still Pascal programs that get transpiled to C before being compiled. Just because he does something doesn't mean you should copy it religiously.
Web and its derivatives are sufficiently advanced that they suffer from the Lisp curse. What in other systems are major fundamental engineering problems - try adding an include in the middle of a C file - in Web derivatives are a matter of taste. Do you have a chunk that picks up includes as you need them? Is it a big one at the front or back? It's up to you.
And nothing. For programs that are ostensibly meant to be read, Knuth's examples are poor ones.
> in Web derivatives are a matter of taste
Dumping a bunch of includes at the top and saying the equivalent of "don't worry about this boring stuff for now; you can skip it if you want" (and thereby compromising the entire LP experiment) is a matter of taste in the way that putting beef in a vegan casserole is.
That's only for the telemetry that happens during the install process (if I've read the link correctly). Seems quite reasonable as long as we accept them sending telemetry during install. ("A single telemetry entry is also sent by the .NET SDK installer when a successful installation happens")
For telemetry during actual use, you can set that flag any time, and a message is shown on first use to inform you about it.
Microsoft even provides a page which showcases summaries of some of the metrics they collect from you if you don't disable this feature. These metrics even include (hashed) MAC addresses.
> Seems quite reasonable as long as we accept them sending telemetry during install.
There is nothing reasonable about this. You should not be required to have tribal knowledge on how to use arcane tricks prior to running an application just to avoid being spied upon. It's a dark pattern, and one that conveys a motivation to spy upon unsuspecting users whether they approve it or not.
But they do mention that you can disable it at any time; it's only the telemetry sent by the installer that requires you to set the flag beforehand (obviously):
> The .NET SDK telemetry feature is enabled by default. To opt out of the telemetry feature, set the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 or true.
> A single telemetry entry is also sent by the .NET SDK installer when a successful installation happens. To opt out, set the DOTNET_CLI_TELEMETRY_OPTOUT environment variable before you install the .NET SDK.
MAC addresses are only 48 bits and sparsely allocated (i.e. the first half identifies the vendor). I wouldn't be surprised if the hashes for all normal hardware (i.e. with known vendors) could be easily brute-forced.
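To put a number on that, here's a rough sketch of the search. It is not the SDK's actual scheme (which I haven't verified); it assumes an unsalted SHA-256 over the usual colon-separated MAC string, and the OUIs and target hash are placeholders. The point is only that, once the vendor half is known, the remaining space is tiny.

/* Sketch: brute-forcing an (assumed) unsalted SHA-256 of a MAC string.
 * Compile with: cc crack.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Search the 24-bit device half under one known vendor prefix (OUI). */
static int crack_oui(const char *oui, const unsigned char target[32], char out[18])
{
    unsigned char digest[32];
    for (int a = 0; a < 256; a++)
        for (int b = 0; b < 256; b++)
            for (int c = 0; c < 256; c++) {
                snprintf(out, 18, "%s:%02X:%02X:%02X", oui, a, b, c);
                SHA256((const unsigned char *)out, strlen(out), digest);
                if (memcmp(digest, target, sizeof digest) == 0)
                    return 1;
            }
    return 0;   /* 2^24 = ~16.7M hashes per OUI: a few seconds on one core */
}

int main(void)
{
    unsigned char target[32] = {0};                   /* the collected hash would go here */
    const char *ouis[] = { "00:1A:2B", "00:1B:63" };  /* hypothetical vendor prefixes */
    char mac[18];

    for (size_t i = 0; i < sizeof ouis / sizeof *ouis; i++)
        if (crack_oui(ouis[i], target, mac)) {
            printf("MAC recovered: %s\n", mac);
            return 0;
        }
    puts("not found under the OUIs tried");
    return 1;
}

With a real hash cracker (or a GPU) each vendor prefix falls in well under a second, so walking the tens of thousands of registered OUIs is entirely feasible for a motivated party.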
In Magic, cards come in one of 5 colours* - white, blue, black, red, and green. Each colour has specific mechanics associated with it as well as a general play style. Magic players often have a preference for specific colours as a result.
The colours are analogous to factions in strategy games, or to the different heroes in Hearthstone, but with more flexibility to mix colours in a single deck.