
My mother's chickens once caught a snake, stampeded it to death and then promptly ate it. Dinosaurs for sure.

Well Ghost is free if you self-host - but I agree that while hosting it is simple, it is still a barrier to entry.


Self hosting isn't free. It costs money to run a server.


GitHub Pages is free.


The problem isn't that resources like this didn't exist in the past (Calculus Made Easy was written in 1910 after all), it's that they aren't well known - and that is unlikely to change even today.


"that is unlikely to change even today" - huh? Doesn't the internet make it much easier to discover and access resources?


While the internet does make it easier to access educational and useful materials, it also makes it easier to access X,000 hours of $videoGame or X0,000 dopamine hits from endless TikTok videos. Perhaps parent comment meant that there does not appear to be a significant trend towards more people reading or making use of these types of resources.


Yes and no. Yes, there are more resources, and yes, they're easier to discover, but you took that to be all GP meant; his point was that they can still be hard to find - at least resources that try to give you intuition, and that's the "no" part. More availability of resources in general doesn't mean better odds of finding good-to-great resources for building those intuitions. The problem we have today, I believe, is that the signal-to-noise ratio for building intuition is low.

As one of my professors once explained, so much of grade school math, and even the math required of most college students, is filled with a "solve this equation" mentality that you kind of miss the purpose of a lot of what higher-level math is about. Sure, part of it is still solving equations, but other parts are about discovering truths (through a proof, for example) that we didn't know or hadn't built the intuition for beforehand - and that part of math is fun, though you need a foundation for it too. It can be disheartening to learn that so much of math kills that joy and focuses only on the foundation: solving the equations.

We often find ourselves sifting through materials that assume you already have (or can figure out) the intuition, which makes it hard to find something that digs a little deeper and lets anything stick. This happens in math, programming, you name it: we tend to assume the intuition is already developed. Writing depends on its audience. If you're a programmer like me, imagine someone trying to explain a modern, complex algorithm to you while simultaneously explaining every small part of how a program executes, all the way down to machine code. That would be a horrific way to learn just the algorithm itself, it would be drudgery to fight through an article or book containing that much information, and so it would be a waste of the author's time to write such a thing. The usual answer is either to stick with simpler examples and try to build intuition, like this book does (which doesn't mean the example is free of complexity, since it is dug into from a fresh perspective), or to write a book full of rules and examples worked through problem sets (think of a math textbook).

I swear this is related, but my favorite course in college was Discrete Mathematics, sometimes titled something like Mathematics of Computer Science. The interesting thing is that it was the first time in math, or in school at all, that someone explained to me the meaning of the things that build a foundation in mathematical logic: "for all", "there exists", the logical operators, and the negation of each of them. It was enlightening, to say the least. It gave me an intuition for the language of proofs, and it allowed me to write my own proofs and to read others'. Proofs are everywhere in mathematics, and understanding a proof is exactly like building an intuition for the underlying math. After taking that one course I could hardly believe how much more often I actually understood textbooks, and I realized that most of the learning didn't happen by going example to example and solution to solution, but by understanding the underlying rule. Math had always seemed so much more ambiguous to me before then, as if the rules didn't clearly define every edge case and outcome - but they almost always do.


My opinion is that the problem (the difference) is that the digital format and the internet are not the same as paper and libraries.


I would argue that the internet makes good resources even harder to find - there is so much of everything, with no curation, that any single resource's chance of gaining long-term recognition is practically zilch.


For myself (not a mathematician), I would find it hard to argue that in 1920 it would have been easier to get access to this specific information. I was sitting on my sofa when I found this.

Also, it should be easy to ask people attending school now whether they would prefer books to the internet. And to me, dismissing their answer on the grounds that the youth don't know what they are talking about is like denying progress.


In Slovenia the weather is like a rollercoaster this year. On 24th March we had snow, and yesterday we had a record high of 32°C.

The forecast shows sub-zero morning temperatures coming back in a bit more than a week.


The minimum FIDE rating is 1400, so you really cannot be 800 FIDE. Also, the gap between online and FIDE ratings is smaller than people think.


That’s not correct, as of 2022:

They only publish numbers above 1000, but they still come up with a number before deciding not to publish it. https://www.fide.com/docs/regulations/FIDE%20Rating%20Regula...

Not all clubs follow those rules exactly, but you still need to track sub-1000 ratings to know when people cross that threshold.


It is since the new rating regulations, which have been in effect since 2024: https://handbook.fide.com/chapter/B022024

See 7.1.2. Ratings aren't tracked by clubs even when players are under this cutoff - all games for rating are submitted to FIDE.


Which only came into effect on March 1, 2024, so people still have sub-1400 FIDE ratings from 2024.


They don't - the ratings of everyone under 2000 got increased according to the formula: player rating + 0.4 * (2000 - player rating).

So everyone now has a rating of at least 1400, except if their rating has fallen below that in the meantime, in which case they will be removed from the next rating list.
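
To make the compression concrete, here is a quick Python sketch of that formula (my own illustration, assuming ratings of 2000 and above were left untouched):

    # Sketch of the 2024 compression formula quoted above (not FIDE's own code).
    def compressed_rating(old_rating):
        if old_rating >= 2000:
            return old_rating  # assumption: 2000+ ratings were not adjusted
        return old_rating + 0.4 * (2000 - old_rating)

    print(compressed_rating(1000))  # 1400.0 - the old 1000 floor lands exactly on the new 1400 floor
    print(compressed_rating(1600))  # 1760.0
    print(compressed_rating(1950))  # 1970.0 - the boost shrinks as you approach 2000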


Chess.com isn't really more sophisticated than lichess - it's only trying to appear so.

Its definition of a blunder etc. is still based only on engine evaluation. For example, it marks all sound sacrifices as brilliant, even the most routine ones. This is good marketing, but I doubt it's good analysis.


It's just marketing to make players feel better about themselves. For example, when I tried chess.com out they marked a simple queen sacrifice to deliver a back-rank checkmate as brilliant, even though it's an obvious move for any intermediate player with half a year of experience. Lichess does none of that marketing bs.


When playing against a much better player, there are two possible strategies in practice:

1. try to simplify the position as much as possible and hope for a draw (and inevitably lose the endgame due to playing too passively);

2. complicate the position so much that neither you nor your opponent can calculate or understand it, then hope that your opponent messes up before you do.

Take from that what you will.


These evaluation-centric definitions of blunder are a bit awkward though.

Traditionally, blunders were defined in a more player-centric way: a player blundered when he made a mistake obvious enough that a player of his strength would be very unlikely to make it. So what is a blunder for a strong player may merely be a mistake for a weaker player.

The problem with the evaluation-centric definition is that not all moves that worsen the position by 14% are equally obvious - if you hang a queen, that is certainly a blunder; if you miss a non-trivial sacrificial combination, on the other hand...


Chess.com is also definitely using an evaluation-centric definition to label moves as blunders. The issue is that this definition is also some function of the change in winning probability.

> So what is a blunder for a strong player may merely be a mistake for a weaker player.

Statistically this intuition appears to be correct. Your winning probability is still more than 25% when down a queen against an 800-rated player, but under 10% if playing a 2200: https://web.chessdigits.com/articles/when-should-you-resign#...

So it would make sense for the definition to take into account the opponent's Elo rating.
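
For illustration, here is a rough Python sketch of what an evaluation-plus-Elo-aware classifier could look like (the logistic eval-to-win-probability mapping, the thresholds, and the opponent scaling are all made up for this example; it is not chess.com's actual formula):

    import math

    def win_probability(cp_eval):
        # Hypothetical logistic mapping from engine centipawn eval to win probability.
        return 1 / (1 + math.exp(-0.004 * cp_eval))

    def classify(eval_before_cp, eval_after_cp, opponent_elo=1500):
        # Drop in the mover's winning chances caused by the move.
        drop = win_probability(eval_before_cp) - win_probability(eval_after_cp)
        # Made-up refinement: stronger opponents convert advantages more reliably,
        # so the same drop counts as a bigger error against them.
        severity = drop * (opponent_elo / 1500)
        if severity > 0.30:
            return "blunder"
        if severity > 0.15:
            return "mistake"
        if severity > 0.05:
            return "inaccuracy"
        return "ok"

    # Hanging a queen (roughly +100 cp down to -800 cp) against different opponents:
    print(classify(100, -800, opponent_elo=800))   # "mistake" - a beginner may not convert it
    print(classify(100, -800, opponent_elo=2200))  # "blunder" - an expert almost certainly will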


I agree - all attempts at automatic classification of blunders have the same problem. This is why analysing games without an engine still matters and is going to matter for the foreseeable future.

Also don't forget the impact of time control - shorter games lead to more mutual mistakes. While in 90+30 the first big blunder should decide the game, in blitz it's just the beginning.

An amusing example is Chessbrah speedrunning to 2000 while hanging his queen in every game: https://www.twitch.tv/videos/593176969


I think Assange would disagree with this statement.


When I last used CL, using Emacs+SLIME helped a lot. Especially C-x C-e (evaluate the expression before the cursor) and C-M-x (evaluate the form you are inside of). With this you get all the benefits of a classic editor together with REPL-like instant feedback.

https://slime.common-lisp.dev/


In a Lisp-aware editor, such as Emacs with SLIME, you can send an arbitrary block of text containing multiple separate expressions to the REPL for evaluation. See for example M-x slime-eval-region (C-c C-r).


When I read “I feel like I'm fundamentally missing something about iterative development in Common Lisp” in the GP, I thought of exactly what’s in these replies. I’ve only recently started learning CL via Practical Common Lisp, and while I liked Emacs+SLIME, I’m a vim guy (I know) and switched to vim+VLIME instead, and so far I’m loving it. This to me has actually been the “secret sauce” of Lisp in my early experience, because now when I go to write code or use the REPL for languages like Python and Ruby, I find myself missing the SLIME/VLIME experience. I find it to be a very intuitive and efficient way to write code interactively.


Any chance you could drop the Common Lisp equivalent of the Python program in the original post here? That is, the code that goes in Vim, then what commands or key bindings you use to execute and find the "add two lists" syntax error.



Even VS has a send-to-REPL kind of experience for Python - maybe vim isn't the right tool, and using something else would serve better.


For the record, you can do the same with various Emacs Python modes, and it is vastly superior to using the Python REPL alone.

