
True, looking at real proofs is what changed the game for me.

Before I actually went through 3-4 books on the basics of proofs, math felt... almost meaningless, a game of remembering the right thing at the right time.

Saying that as somebody who oscillated between being "good at math" and "top of the class" for all 18 years of studying.


To me proofs were never the interesting part of maths - the ideas and intuition which made the proof possible were.

Proofs were a way of formalizing something and, well, making sure the intuition was actually correct, but they were just a tool and not the game itself.

The best math teachers/professors I had were the ones who focused on the ideas.


Yup, and once again, it depends on how we learn. I'm a strongly "learn by doing" kind of person. For instance, I'd get almost nothing out of reading a math book that was full of ideas but no problems or proofs. Doing problems and proofs is how I wrestle with the structure of the subject matter, and internalize the ideas.

Well, I don't disagree with you! In my book, proofs are formalized intuition.

Oh don't you worry, most of them are like that almost everywhere.


Lua is definitely a likeable language and due to its (very) limited nature can be used as a first language.

But as a Linux CLI language, or for simple one-off scripts... Python is better. Scripts (and also competitive programming) favour languages capable of compact solutions and a universal stdlib. Lua, because it has no stdlib to speak of and only a limited amount of syntax sugar, is just so much less useful for these purposes.

As for competitive flavours of programming... I recently gave it a go for Advent of Code 2024.

My impressions: https://www.reddit.com/r/adventofcode/comments/1hvnou1/2024_...

My repo: https://github.com/vkazanov/advent-of-code-2024?tab=readme-o...

TL;DR Python is better here.
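To make the stdlib point concrete, here's a toy sketch (mine, not from the write-up above): a frequency count that Python's stdlib turns into a near one-liner, where Lua would need a hand-rolled counting loop and sort over a table:

  # Toy illustration of the stdlib gap, not from the linked repo.
  from collections import Counter

  words = "the quick the lazy the".split()
  print(Counter(words).most_common(2))  # [('the', 3), ('quick', 1)]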


Lua's best place is as the language "on top of" another language.

I.e. you have a big application that has a configurable feature, and you also need that configurable feature to be super flexible, since the number of possible variations on that feature is so large that it's not worth maintaining it in the "main" language. An example could be, say, an enemy AI decision script in a turn-based RPG. While you can hardcode the decisions into the language itself, that doesn't make them easy to modify (especially if you want to test iteratively, since each iteration needs a recompilation of the entire program).

That's where Lua shines - instead you just put it in a Lua file that is dynamically loaded and interpreted when needed, allowing you to rapidly modify it without recompiling the entire game.

Python is amazing for scripts that are also software in their own right; Lua is amazing for snippets (a layer smaller than scripts), since at that level none of its shortcomings are a problem. The C interop and sandboxing also make it fairly easy to set up a DSL/importable module that runs inside your "main" language if that's what you need as well.
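For illustration, here's the hot-reload pattern described above sketched in Python (all names are hypothetical; a real Lua host would re-run the script file through the C API instead):

  import importlib

  import enemy_ai  # hypothetical module: decision logic lives in enemy_ai.py

  def take_turn(enemy, world):
      # Re-read the script each turn so edits take effect without
      # restarting (let alone recompiling) the host application.
      importlib.reload(enemy_ai)
      return enemy_ai.decide(enemy, world)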


Fair point about the Linux CLI, especially the lack of a comprehensive stdlib. As for the syntax sugar and code terseness, maybe it's just me, but I prefer verbosity over compactness. Sure, it takes more time to write, but personally I find it more readable.


Why? Only because of LuaJIT?


Most likely, and also sadly.


What's wrong with LuaJIT? (I've never used it.)

edit: sorry, I should have searched; LuaJIT only supports up to Lua 5.1.


There is a design tension between the language designers and the VM implementers. AFAIK some features post-5.1 make performant VM implementations difficult. Lua was originally a configuration language, where performance didn't matter all that much. Because LuaJIT is so fast, much more of the application code can be done in the scripting environment, so now performance matters much more than it used to.

So the spectrum runs from the newer, slower language to the older, faster one. If performance is an issue, the cost of the newer language features could be that more of the application code has to be written in C++ instead of Lua - in that context the Lua language shouldn't be considered independently of the host language. Performance matters to me, so an increase in C++ code would not be worth the newer language features. Using Rust instead of C++ as the host language might change the landscape again, since Rust is so much more ergonomic than C++.


Heh...

I remember my 10th doctor said something like: "typing less should fix these". Typing less..!


I've been using Emacs for 20+ years and counting. Over the years I had all the usual programmers' health-related problems, and saw my friends getting the same kinds of problems as well.

I no longer have these problems, but I see people around me going full steam ahead towards getting these.

Here's some boring dad-level wisdom:

1. Vim/Emacs/VSCode/your favourite editor has nothing to do with the correct, least straining way to use the keyboard. Spreading the load over all 10 fingers is the only way to avoid RSI and related problems. Proper touch typing, Caps Lock remapping, pressing Ctrl with the right hand... Whatever it takes to balance things. Typing less also helps.

2. We programmers sit a lot. Or stand a lot. This leads to back problems, faster RSI development, etc. All the usual workarounds (massage, etc.) will only postpone the issue a bit. The only way to counter these is to make exercise a routine. Walking, lifting, swimming, climbing. Anything that makes blood go faster - it helps with recovery, and should be a habit.

3. Being overweight kills and makes all of the above problems 10 times as painful. There is just no way around learning to control one's weight. It's not even that hard.

My father (also an Emacs user, btw) taught me all of this too, but for the usual young people's reasons I had to learn things by, ehm, doing.


As somebody who has written maybe half a million lines of Python code, I still must say that it is Rails that became the model.

Django was (is?) the default tool for Python, but I don't remember it being discussed as widely.


Yes, Rails definitely on a bigger scale; Django within Python's ecosystem.

IME FastAPI has eroded some of Django's hold, but it's still chugging along nicely. The hype has certainly died down because it is ancient by today's standards. Still a very good tool, and quite a lot of work available around it.


While I have been using Linux since 1996 or so, and do have quite an opinionated workflow, I never could agree with this kind of ultraconservative approach to things. History never stops. Things change. Linux changes. Not every day, not every month, but every couple of years something has to go. And that's ok.


I agree. At some point in the past Unix was also new. There is a time for stability, but also a time for changes. In fact, the most popular distributions, such as Debian, Ubuntu, Fedora or Arch, largely operate on principles that have not changed since the 90s. There is definitely space to do things better now. I'm personally excited about GNU Guix; I think it is one of the most innovative distributions, on the basis of its consistency alone. They use a single programming language to implement all aspects of the OS: configuration, system services, packaging. NixOS is obviously another notable one, though it is not as tightly integrated, because it still relies on systemd and the Nix language is quite arcane to use.


> They use a single programming language to implement all aspects of the OS: configuration, system services, packaging.

I can understand the appeal of the idea, but this feels like a significant mistake. It would be like an automaker saying, "We are using exclusively 17 mm bolts for fasteners." Sure, it saves you time with finding a wrench. But I can't begin to imagine the number of compromises or additional complexities you introduce this way.

It seems like a goal founded in an academic ideal rather than a design benefiting from engineering practicalities.


I totally understand the concern, but I think your analogy overstates the issue. I haven't given it all that much thought, but it seems reasonable to me that there are enough different use cases that are similar enough to still get a net benefit from a shared language. How much of this language fragmentation is just a reflection of Linux ecosystem fragmentation, as opposed to e.g. "the right tool for each job"? I'd bet a fair bit.


Guile is not C; the stdlib is small, and it can be adapted for any task. The difference is using s-expressions everywhere instead of an Algol-like language.


> But I can't begin to imagine the number of compromises or additional complexities you introduce this way.

So can you be more specific about the kind of compromises you have in mind and whether they are currently affecting Guix?


> "We are using exclusively 17 mm bolts for fasteners." Sure, it saves you time with finding a wrench. But I can't begin to imagine the number of compromises or additional complexities you introduce this way.

I spend 10 hrs a week under cars, and I say: hell yeah! I want this! For all cars!


You want to use 17 mm bolts to hold your fuel cap on? And your wiper blades? And your rear view mirror?

Some standardization is a great idea, but including the word "all" is what makes it academic and impractical. And if you're not going to be absolutist about it, then you're just using marketspeak.


Yes, I do! At least I'd want all of their heads to be 17 mm - IMHO a pretty good and already common size.


Not sure if you chose 17mm on purpose but the tolerances on most sockets and wrenches make 17mm and SAE 11/16" sizes nearly interchangeable.


I would say that software is meant to be chopped up and glued together much more flexibly than the parts of a car. It is more like saying "all LEGO pieces have studs 5mm apart center to center".


> History never stops. Things change. Linux changes.

For the better, right? Right? The last two years brought me such horrible regressions that I'm again considering giving up on Linux.


First, what distro are you on?

Second, have you tried windows or macOS recently?


The distro doesn't matter that much; it's mostly the desktop environment (panels and settings) and kernel regressions. Like half of my ThinkPad fleet now boots into a blank screen due to a regression in the Linux i915 driver.

I used to run Alpine Linux on servers, decided I wanted to change to something less exotic, and found that Debian is no less buggy. No idea how to go on.

Windows is consistently worse; I haven't tried macOS, as it is not really popular here.


Try linux-lts. The latest "stable" kernel releases (6.10 onwards) have felt like they weren't tested at all - major regressions in every single version. I report them, but new problems keep coming. I've never seen anything like it in two decades of being a mostly/only Linux user.

The LTS is fine, no problems at all.


The Linux Foundation cut funding for LTS releases: https://www.zdnet.com/article/long-term-support-for-linux-ke... They spend only a small percentage of their money on maintaining the kernel, so I think if you want a stable kernel you need to find someone downstream willing to do that work.


I see Alpine 3.20 still ships 6.6; I'll grab an ISO and check if it works, thanks!


I suggest going with a Red Hat-like OS such as CentOS Stream. It's boring, but my experience is that it's rock solid (when paired with good hardware).

What were the issues you faced with Debian on your servers?


Distro matters a lot for kernel regressions.

I run Arch, so I bump into those once in a blue moon, but it's rare.

Debian runs older versions so you miss recent bug fixes but at the same time you should see minimal regressions. Pick your poison.

You might be extra sensitive to bugs. I'm that way too but at least I can fix them when I have the source.

I also only use a few apps (Firefox, Emacs, VLC, GIMP) and i3 as my window manager. It's been a long time since I hit a bug that actually impacted usability.


Debian is supposed to be stable, but the last time apt hosed itself was barely two weeks ago.

The suggestion with the bug sensitivity is belittling, cut that out.


I've seen pacman and opkg hose themselves. I've never seen apt & dpkg hose themselves since 2006, when I started using them. Usually when people say apt is hosed, it's actually successfully detecting and preventing breakage to the system that other package managers would happily let you commit, and it's letting you unwrap and fix things without having to blow away the whole system and start from scratch.

I have the utmost respect for apt, especially since I switched my daily workstation to Arch and learned what life without it looks like.


What does apt do that pacman doesn't?


Gracefully handle edge cases. I've seen pacman continue as normal and pretend that everything is fine, burying the error in the middle of several screens of logs, when free disk space temporarily went down to zero during a system upgrade. That just doesn't happen with apt, where you're usually one `dpkg --configure -a` away from recovering from most disasters.

There's also a matter of packaging practices, which isn't entirely a pacman vs. apt thing but rather Arch vs. Debian (although package manager design does influence and is influenced by packaging practices). In Arch, the package manager will happily let you install, or keep installed during an upgrade, an out-of-epoch package that will just fail to function. apt usually won't let you proceed with an upgrade that would lead to such an outcome in the first place. It's a thing that's ridiculously easy to stumble upon as soon as you use the AUR, but since the user's discovery of the issue is delayed, most people probably don't attribute it to package management at all - they just see an application getting broken one day for some unknown reason, while apt screams at them and appears broken right away when they try to use apt.

To be frank, I don't know for sure that the relations between packages that Debian uses couldn't all be expressed with pacman; maybe it's possible. What I do know is that I've never seen a Debian-like system that used pacman, and makepkg-based tooling is very far from debhelper, so even if it's theoretically possible with pacman, you'd have a long way to go with your tooling anyway.


> the last time apt hosed itself was barely two weeks ago

How did you manage to do that? I use Debian on about half my home fleet (a dozen machines or so) and apt has caused me no issues in the past decade and a half.


How is it belittling when I told you I'm that way too?

What I'll actually cut out is responding. Good luck with your bugs.


Agreed with you, I'm also sensitive to bugs and it's not belittling.


I'm addicted to GNOME on Fedora since Asahi gave me the option. Having one button that brings up a combination of Mission Control and Spotlight has soured me on macOS - why are these two different actions?

I haven't had to go into the shell to change anything yet; the default Files app and software center all work as I expect out of the box, including mounting USB drives, which has always been an annoyance for me.

Now I'm investing in learning CentOS Stream and SELinux, happy with the learning curve thus far.


> Debian is no less buggy.

On servers? How do you notice? Maybe you are doing things we don't?


I'm happy with macOS - I know what to tweak, and the display support is great. Ubuntu was very bad with fractional scaling on 4K displays. Maybe it's a skill issue, but the ARM Macs are just so fast; I don't want to give that up.


The fact that you think "panels and settings" _is_ Linux tells me you don't know the basics of the OS itself. Linux is the kernel and drivers. Everything else is an application; if you don't like the UI/UX, that's between you and the FOSS maintainers, as well as your choice of interface. Take some time to read up on the various options before you try to blame (what you think is) an entire OS.


Hey, Windows is pretty nice since they added built-in Linux VMs.


They also popped open a full screen Windows 11 ad after I closed a game last year. That's when I went back to Linux.


As in, Windows is nice when you ignore the Windows bits and run a Linux VM (which is what WSL2 is).


Windows GUI, Linux CLI. The mullet of dev stacks but it works out well.


Or better yet, run Windows in a Linux KVM.


I mean sure, if you want to give away your data via Recall.


I've been quite happy with EndeavourOS for the last couple of years; it even plays well with Nvidia cards. Having played with multiple distros since installing Ubuntu from the CDs they used to send out, I definitely feel things have improved a lot. And a sincere thanks to all who've pushed things forward.


Likewise... Things do not have to change just for the sake of change. If things _improve_, I'll adopt them. If they don't, then I'll stick with my old code.


That isn't console; that is X11. I've known people who avoided that and did not use a GUI. Or maybe a framebuffer (occasionally!). When I use Linux, I use Wayland. You say i3wm? I say SwayWM. Even if you were insisting on X11, there was QubesOS. And there was a time when being conservative in Unix-land meant you did not run Linux. Or you stuck to Unix instead of WinNT. It is, and always will be, two steps forward, one step back. Case in point: Wayland, QubesOS.


Same here. I do think UNIX got some stuff right, but it also got lots of stuff wrong (otherwise The UNIX-HATERS Handbook wouldn't even be a thing), and while the command line is useful for some tasks (which can also be done in a scripting language REPL), it is hardly something to settle life on.

We already have enough UNIX clones, and moved away from TUIs 40 years ago for a reason.


Yes, I have a personal unix haters list as well. And I also love the whole tech subculture to death.

Funnily enough, init.rc used to be at the top of it, along with all the numerous process/job management gotchas. Systemd + control groups = step forward. Plan9-style FUSE-based systems = step forward. Kernel data structures exposed as files = step forward. And so on.

CLI utils will probably have their place like forever, TUIs as well, with the main benefit being the ease of development and staying in the CLI context.


The problem is that Linux is, as he puts it, hard to learn and hard to master. So once I've gone through the learning phase for fun and learned what to do, I really want to just keep using it and not have all my hard work undone at a whim.

Perhaps ironically, systemd is one case I would point to as an acceptable breakage. The software itself definitely fulfils the license's promise of "NOT FIT FOR ANY PURPOSE", but as an idea it's mostly sound. It suffers from bad design in that e.g. it has no concept of "ready state", so there is no way to express "the VPN service needs the network to be online" and "the NFS mount needs the VPN to be connected"; thus it also has no way to express "you must wait for the NFS to be cleanly unmounted before stopping the VPN" - only "you must execute umount before tearing down the VPN (but without waiting)".

Similarly, if you have a bind mount, you can't make it wait for the target to be mounted before the bind mount is executed. I.e. if I have an NFS mount at /mnt/nfs/charlie and bind mount /mnt/nfs/charlie/usr/autodesk to /usr/autodesk, I could find no way to make systemd wait for the NFS mount to be done before bind-mounting a nonexistent directory - contrary to the man page for /etc/fstab, it executes all mounts in parallel rather than serially. All that said, you can work around it by sticking to bash scripts, which is the good part - it still retains a good bit of the old interface.

The problem really comes when a completely new way of doing things is invented to replace the old way, e.g. ip vs ifconfig, or nftables vs iptables - now you have to learn the new tool while keeping knowledge of both the new and the old one for a while (a decade or two) until the old tool has gone completely out of use on every system you administer.

This was the kind of thing we used to make fun of Microsoft for in the '00s. Every year a new framework replacing the old framework, asking you to rewrite everything. In the end people just kept using the Win32 API, and Microsoft actually kind of stabilised their churn. Now Linux is making the same mistakes and alienating existing users. I'm not sure how things will play out this time; I just gave up about ten years ago and run Windows on my PC. My worry is that the Linux world will get stuck in a cycle of perpetual churn, chasing the One True Perfect Form of Linux, repeating all the same mistakes Microsoft made twenty or thirty years ago, except without the massive funding behind it.

Or put another way, I can no longer trust Free Software. The people writing it have shown over and over again that they do not respect users at all, certainly much less than a commercial vendor does. Idealism trumps practicality in the Free Software world.


> Similarly if you have a bind mount you can't make it wait for the target to be mounted before the bind mount is executed

Have you tried RequiresMountsFor=/WantsMountsFor=? You'd have to create a new unit that just does the bind mount, though.
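For reference, a minimal sketch of such a unit using the paths from the comment above (assumptions: the unit file name must be the systemd-escaped form of Where=, and the NFS share is mounted at /mnt/nfs/charlie):

  # /etc/systemd/system/usr-autodesk.mount
  [Unit]
  # Pulls in and orders after the mount units covering this path,
  # i.e. the NFS mount, before the bind mount is attempted.
  RequiresMountsFor=/mnt/nfs/charlie/usr/autodesk

  [Mount]
  What=/mnt/nfs/charlie/usr/autodesk
  Where=/usr/autodesk
  Options=bind

  [Install]
  WantedBy=multi-user.target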


On macOS I have a compatibility layer for Linux's ip (since I've grown used to it, and besides, the BSD ifconfig and route (and friends) have always been different from Linux's). But when I am on OPNsense, the only other BSD I use, I don't have such a layer, sadly.

With regard to Windows, I use ways from the NT era, Windows Vista/7, and Windows 10 to configure it, and I bet they added stuff in 11, too. It is a mess - supposedly from a company which makes a super user-friendly UI (/s).

NFS is a very simple yet archaic filesystem with nice throughput, but it comes from a LAN era where LAN clients were trusted. I don't know if it got modernized, but I just use SSH over FUSE or CIFS over WireGuard.


NFS v4 has proper security, but it's arguably bloated and difficult to configure compared to v3, and worse than ksmbd in every way.


Why did it have to win?

Emacs is fundamentally different; VSCode is only convenient if there is no intention to change things.


Uh... given the beauty of relational algebra, I don't understand how we ended up with the ugly mess that is SQL.




Some people might think it is crazy, but I like wrapping queries like that up in jOOQ so I can write

  recursiveQuery(table, linkClause, selectClause)
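For reference, here's the kind of recursive CTE such a wrapper might generate, sketched against Python's bundled SQLite driver (the table and column names are made up for the example):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER);
      INSERT INTO node VALUES (1, NULL), (2, 1), (3, 2);
  """)
  rows = db.execute("""
      WITH RECURSIVE tree(id) AS (
          SELECT id FROM node WHERE parent_id IS NULL  -- the root select clause
          UNION ALL
          SELECT n.id FROM node n
          JOIN tree t ON n.parent_id = t.id            -- the link clause
      )
      SELECT id FROM tree
  """).fetchall()
  print(rows)  # [(1,), (2,), (3,)]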


Life finds a way, I suppose :-)


I have spent a lot of time trying to understand how we ended up with SQL. Best I can determine, we got SQL because it isn't relational, it is tablational. Tables are a lot easier than relations for the layman to understand, and the laymen successfully pushed for what they were comfortable with, even if to the chagrin of technical people.


Here you go: https://www.red-gate.com/simple-talk/opinion/opinion-pieces/...

"RM: What was key to SQL becoming the standard language for relational databases in the mid- 1980s? Was all down to good marketing?

CJD: In other words, why did SQL become so popular? Especially given all its faults? Well, I think this is rather a sorry story. I said earlier that there has never been a mainstream DBMS product that's truly relational. So the obvious question is: Why not? And I think a good way for me to answer your questions here is to have a go at answering this latter question in their place, which I'll do by means of a kind of Q&A dialog. Like this:

    Q:
        Why has no truly relational DBMS ever been widely available in the marketplace?
    A:
        Because SQL gained a stranglehold very early on, and SQL isn’t relational. 
    Q:
        Why does SQL have such a stranglehold? 
    A:
        Because SQL is “the standard language for RDBMSs.” 
    Q:
        Why did the standard endorse SQL as such and not something else-something better? 
    A:
        Because IBM endorsed SQL originally, when it decided to build what became DB2. IBM used to be more of a force in the marketplace than it is today. One effect of that state of affairs was that-in what might be seen as a self-fulfilling prophecy-competitors (most especially Relational Software Inc., which later became Oracle Corp.) simply assumed that SQL was going to become a big deal in the marketplace, and so they jumped on the SQL bandwagon very early on, with the consequence that SQL became a kind of de facto standard anyway. 
    Q:
        Why did DB2 support SQL? 
    A:
        Because (a) IBM Research had running code for an SQL prototype called System R and (b) the people in IBM management who made the decision to use System R as a basis on which to build DB2 didn’t understand that there’s all the difference in the world between a running prototype and an industrial strength product. They also, in my opinion, didn’t understand software (they certainly didn’t understand programming languages). They thought they had a bird in the hand. 
    Q:
        Why did the System R prototype support SQL? 
    A:
        My memory might be deficient here, but it’s my recollection that the System R implementers were interested primarily in showing that a relational-or “relational”-DBMS could achieve reasonable performance (recall that “relational will never perform” was a widely held mantra at the time). They weren’t so interested in the form or quality of the user interface. In fact, some of them, at least, freely admitted that they weren’t language designers as such. I’m pretty sure they weren’t all totally committed to SQL specifically. (On the other hand, it’s true that at least one of the original SQL language designers was a key player in the System R team.) 
    Q:
        Why didn’t “the true relational fan club” in IBM-Ted and yourself in particular-make more fuss about SQL’s deficiencies at the time, when the DB2 decision was made? 
    A:
        We did make some fuss but not enough. The fact is, we were so relieved that IBM had finally agreed to build a relational-or would-be relational-product that we didn’t want to rock the boat too much. At the same time, I have to say too that we didn’t realize how truly awful SQL was or would turn out to be (note that it’s much worse now than it was then, though it was pretty bad right from the outset). But I’m afraid I have to agree, somewhat, with the criticism that’s implicit in the question; that is, I think I have to admit that the present mess is partly my fault."

Discussed on HN (probably posted many times): https://news.ycombinator.com/item?id=39189015


> Why has no truly relational DBMS ever been widely available in the marketplace?

Postgres was "truly relational" for a significant portion of its life before finally losing the battle with the SQL virus. There is probably no DMBS more widely available. Granted, it wasn't widely used until the SQL transition.

> SQL isn’t relational.

This is key. Relations are too complicated for the layman, the one who is paying for it, to understand. Tables are more in tune with what is familiar to them. The hardcore math/software nerds might prefer relationality, but they aren't the ones negotiating multi-million dollar contracts with Oracle/IBM.

I remember when Postgres moved to SQL. People started billing it as being Oracle, but free. That got non-technical managers' attention. Without that marketing success appealing to the layman, I expect nobody would be using it today.


I know the story of System R really well.

It was a breakthrough system in almost all aspects. It defined what DBs would look like, both internally and externally, for decades.

In many ways SQL is like C: good for what the authors wanted it to be, but with severe consequences much later.

But history doesn't care. SQL (and C) still have many, many years ahead of them.

