aoanla's comments

I think that's sort of true, but unlike Disco Elysium - which I simply loved - the bits of Outer Wilds I loved were at odds with the fact that basically all of the worlds gave me anxiety from their specific quirks (plus, them all being so small - especially the ones closest to the sun - made me constantly worried about falling off), and I couldn't finish it. (I did later watch a YouTube playthrough to get some of the experience without the terror.)


I mean, it does - people search stuff all the time now, rather than thinking about it.


IIRC, that was Socrates' complaint to Phaedrus about writing: that reading (because it was "high tech" at the time?) led only to an illusion of understanding.

Elsewhere the Phaedrus echoes a very modern complaint (even though search engines wouldn't arrive for another 2,300 years): "They would say in reply that he is a madman or a pedant who fancies that he is a physician because he has read something in a book, or has stumbled on a prescription or two, although he has no real understanding of the art of medicine."

https://www.gutenberg.org/files/1636/1636-h/1636-h.htm


Socrates wasn't wrong. Reading a lot gives you a partial understanding but it isn't complete without experiencing the thing for yourself. Arguably the Internet is the natural home of authoritatively stated but uninformed opinions - the exact result of reading a lot about a subject without having any experience of it.


Ironically we can only know he was right by making the exact mistake he was warning us against.


I think the Phaedrus is all about the importance of practice. I've been reading a lot of math books lately, but I don't actually grok anything well until I sit down and try to reason through the material myself, write my own little proofs, and try to deconstruct what's being said actively with pen and paper. Similarly, I understand a work of literature far more deeply if I take active notes and/or write a small essay about my interpretation. I become a better writer by reading good writers and emulating them in my own writing practice. Writing was a threat to poets when the goal was still to recite a compelling live performance, which, to do well, required memorization and practice; even today, we ask that actors not have paper scripts in front of them when performing in a film.

This is kind of the threat that tools like LLMs pose. Their power to generate decent results means that far more people will eschew practice for "good enough" LLM-produced results. Creation will become even more transactional, and (many) people will quickly fail to "see the point" in practicing, until we have a culture that's degraded even further than it already is today.


As the hallucinated Euclid said to Ptolemy, there is no LLM for geometry?


N-body problems can't be solved analytically. However, you can still integrate them numerically into the future (with some acceptable error); you just need to step through all the intermediate states along the way.

In the case of the solar system, yes, it helps that the Sun is much more massive than everything else (and then Jupiter is 4 times more massive than Saturn, the next biggest) - you can go a long way to a "reasonable" solution by starting with the 2-body solution if only the Sun affected each planet, and then adding in the perturbation caused by Jupiter and Saturn. (In fact, that's how we predicted the existence of Neptune, by noticing that there were extra perturbations on Uranus beyond those, and hence another massive planet must exist, far enough away from the sun to only significantly affect Uranus).
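A minimal sketch of that stepping approach (not any particular production integrator - the units, masses and initial conditions are placeholders you'd have to fill in):

    # Leapfrog (kick-drift-kick) integration of point masses under Newtonian
    # gravity: step the whole system forward through every intermediate state.
    import numpy as np

    G = 1.0  # gravitational constant in arbitrary code units (placeholder)

    def accelerations(pos, masses):
        """Pairwise Newtonian acceleration on each body."""
        acc = np.zeros_like(pos)
        for i in range(len(masses)):
            for j in range(len(masses)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
        return acc

    def leapfrog(pos, vel, masses, dt, steps):
        """Advance the system by `steps` timesteps of size `dt`."""
        acc = accelerations(pos, masses)
        for _ in range(steps):
            vel += 0.5 * dt * acc   # half kick
            pos += dt * vel         # drift
            acc = accelerations(pos, masses)
            vel += 0.5 * dt * acc   # half kick
        return pos, vel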


It's also a weird thing to bring up (Numba being great because it can jit-compile Python to any arch, including GPUs) when the author discounted Julia... which has exactly the same property.
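For the CPU case, the kind of thing meant is roughly this (a minimal sketch assuming Numba is installed; the function and data are just illustrative):

    import numpy as np
    from numba import njit

    @njit  # compiles this function to native code on first call
    def array_sum(x):
        total = 0.0
        for v in x:
            total += v
        return total

    x = np.random.rand(1_000_000)
    print(array_sum(x))  # first call pays the compile cost; later calls run the machine code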


The difference is uptake. Julia's good and it's out there, but relative to the users of Python... How many people care how portable the Julia code they aren't writing is? The existence of a tool to jit-compile Python is more useful to a lot more engineers than the existence of another language that is nicely jit-compilable.


Right, except the author also mentions two obscure languages with very little uptake at all, so it can't simply be a popularity thing - they're not useful at all, by that limited metric.


Effectively, it does - one of the things recent releases of Julia have done is to add more precompilation caching on package install. Julia 1.10 feels considerably snappier than 1.0 as a result - that "time to first plot" is now only a couple of seconds thanks to this (and subsequent plots are, of course, much faster than that).


Yeah, this is why Quake's logic for a lot of game things - monsters, weapons, moving platforms - is written in a bytecode-interpreted language (QuakeC). The idea was to separate it from the engine code so modders could easily make new games without needing access to the full engine source. (And QuakeC was supposed to be simpler as a language than C, which it... is, but at the cost of weird compromises like a single number type (float), which is also used to store bitfields by directly manipulating power-of-two values. Which works, of course, until your power of 2 is big enough to force the precision to drop below 1...)
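A quick illustration of that failure mode (Python/NumPy standing in for QuakeC's float; the specific flag values are just examples):

    import numpy as np

    ok = np.float32((1 << 0) + (1 << 23))      # both flags fit in the 24-bit significand
    print(int(ok) & 1)                         # -> 1, the low flag survives

    broken = np.float32((1 << 0) + (1 << 25))  # the high flag pushes past the significand
    print(int(broken) & 1)                     # -> 0, the low flag was rounded away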


The classic of this field of books is Abramowitz and Stegun's "Handbook of Mathematical Functions" - although the two listed names are merely those of the compilation editors, as the calculations of the numerous tables of values (and sheets of mathematical identities) required hundreds of human computers working for years. Ironically, on publication in 1964 it was just in time to see the dawn of the electronic computer age that would supplant it.


I still use it when testing implementations of mathematical functions. Like if all I need is a Bessel function, why pull in a whole CAS to do that?
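A minimal example of that kind of spot check, assuming SciPy for the computed value (the reference figure is J_0(1.0) to ten decimal places, as the Handbook's Bessel tables give it):

    from scipy.special import j0

    tabulated = 0.7651976866       # J_0(1.0) from the tables
    computed = j0(1.0)
    print(computed, abs(computed - tabulated))  # disagreement should be ~1e-10 or smaller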


Yeah, I've had the same thing every time I have tried Anki - it's fine for a while, but once you get enough cards added, or have just been memorising a deck for long enough, even missing one session generates a huge, insurmountable backlog. It's bad and consistent enough for me that I just stopped trying to use Anki at all.


FSRS seems to reduce the number of reviews required; it might be worth enabling it and seeing if it helps.


Is there no way to skip a day? And simply pause?


You can suspend cards and decks, but you can also skip days.

It'll create a backlog of cards to review, but that's surmountable. The number can become intimidating (I think my worst was around 2k cards), but there are a few ways to clear it. You can "just" get back into the daily habit: at the default limits, for a single deck, it'll take roughly (# of cards)/100 days to clear out (some cards may come up for review again in that period, so the actual number might be a bit higher). You can also raise the daily review limit to clear it faster (I did this when I hit 2k; 200/day was about my personal time limit to spend on it, so it took just over 10 days to clear).


I get that skipping days is against the algorithm, but wouldn't it be better to simply freeze the algorithm for a few days, as opposed to getting a backlog at all? A backlog would break the habit for me, making me unlikely to return.


Skipping days is not against the algorithm; it accounts for skipped days.

In a simplified system, suppose that we just double the review interval for easy cards and reset hard cards back to 1 day. You have a card up for review today; you last saw it N days ago. Two scenarios:

1. You don't delay. You review the card now. It's either hard and reset to 1 (see it again tomorrow) or easy and you see it again in 2N days.

2. You delay. You review it M days from now. When you finally do it's either hard and reset to 1 (see it again the next day) or easy and you see it again in 2(N + M) days.

That's it. The algorithm has you covered if you delay. It doesn't do something silly like say "This card was supposed to be reviewed after 2 days, but you waited a month. You remembered it, but we're going to show it to you again in 4 days." The algorithm will take the delay into account (maybe not one-for-one), as I illustrated above.
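A minimal sketch of that simplified scheme (not Anki's actual scheduler; the names and the plain doubling rule are just for illustration):

    from dataclasses import dataclass

    @dataclass
    class Card:
        interval: int = 1       # days until the card is next due
        last_reviewed: int = 0  # day number of the last review

    def review(card: Card, today: int, remembered: bool) -> None:
        elapsed = today - card.last_reviewed  # this is N + M if you delayed by M days
        card.interval = 2 * elapsed if remembered else 1
        card.last_reviewed = today

    # Reviewing on time (elapsed = N) or late (elapsed = N + M) both feed the real
    # gap into the doubling, matching the two scenarios above.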


Good point. The real risk in skipping days is that you might forget altogether some of the cards that were due for review. But the Anki default is to review often enough that probability of recall for each card is very high, so if you're only skipping a few days at a time this is not a huge concern. Capping the amount of cards you do per day has a similar effect; Anki will prompt you with the highest-risk cards first, and some will be left unreviewed for the day (hence, practically skipped).


Adding to the replies which list SF examples of this idea:

The Collapsium (Wil McCarthy) has a plot built on a combination of this idea and an extrapolation of the consequences of the fringe theory that gravity is really due to high-frequency quantum oscillations. (Almost everything in its future is made of configurable quantum-dot "pseudo-atoms" that can be reconfigured into states that don't exist in natural atoms.)


Back in 2011, the UK Government commissioned an "independent review" of copyright etc. (the "Hargreaves Review of Intellectual Property").

It broadly agrees with your ideas - at least in that copyright terms are far too long (the report thinks 20 to 30 years would be long enough, possibly in two phases with a renewal needed in between) - but notes that the UK is bound by international law to keep copyright longer than that.

It also recommends weakening copyright with more exemptions...

...and, in the case of patent law, making it more expensive to renew patents (to discourage renewing by default) and not extending patentability to additional categories of things (cf. US software patents).

It's also very grumpy about the way copyright law just happened whenever a wealthy industry asked for it.


Excellent reference and apparently the first time it's been cited at all on HN.

