Hacker News | safetytrick's comments

Quoted: "Likewise, you can repeat yourself but classes only have a single (duplicated, but subtly different) responsibility."

I'm proud of my little joke on dogmas: "The only dogma I believe is that all dogmas are wrong".


vim is from late 1991, vi was 1976, Sublime Text was 2008. It just seems like garbage writing (it very easily could be LLM output).


In the article the author mentions wanting to benchmark a GPU and using ChatGPT to write the CUDA. Benchmarks are easy to mess up and easy to misinterpret without real understanding. I see this as an example where a subtly wrong idea could cause cascading problems.


With a dependency on a font. Wow, it's been a while since I've seen that.


Fairly commonplace for TUIs that use icons. Tons of vim plugins rely on Nerd Fonts (regular monospace fonts patched to include tons of icons).


Is there any technical reason the font can't be shipped in the binary and installed automatically?


Which font? What license? Which one does the user want?

Where does it get installed? Will the user consent?

I did not use a Nerd Font until I tried superfile out today. It was not hard to build my own Nerd Font version of Berkeley Mono with FontForge.


I very well might be in the minority of Linux users, but I don't particularly care about the answers to most of these questions. I just want it to work. Give me solid defaults[0]. I'm not saying you shouldn't be able to override those defaults. That's an important feature of Linux.

My first experience running a cool-looking TUI file manager yesterday (I actually ended up trying yazi first) was that I got a lot of blank squares in place of icons and emoji due to missing fonts. I had to spend 20 minutes figuring that out before I got a good experience.

Interestingly, I also tried wezterm[1] in the process. It actually ships with the required fonts as fallback, but the version from my distro's package manager didn't work (the AppImage did). I'm guessing my distro removed them, maybe for some of the reasons you cited. I started installing the nerd-fonts group for my distro. 6.5GB... no thanks. After manually poking through them and some googling I finally installed a couple and it's working now.

My overall point is that it's possible for app developers to provide good defaults like wezterm does. It's also possible for distros to break those defaults. And if size is a concern, then at least detect that I don't have a working font and offer to download one for me.

[0]: https://blog.codinghorror.com/the-power-of-defaults/

[1]: https://wezfurlong.org/wezterm/


Regardless of wanting sane defaults, this is not something superfile can do on its own: it runs in a terminal, and terminal programs do not normally get to choose which font is used.

So the "best" it could do is bundle the font file, but then you would still have to configure your terminal to use it. At that point, it's easier to just tell you you need a nerd font and link to their repo.

That being said, I kind of agree that, since NerdFonts are pretty good and by now quite widespread, it wouldn't be a bad idea for major distros to patch their default monospace fonts so that you get NerdFonts out of the box in the default terminal.

But, in general, if you go out of your way to install a different terminal emulator, it's unlikely you'd have much trouble changing its font anyway; still, getting everything to look nice and pretty is sometimes harder, so I suppose wezterm is commendable for including fonts and colorschemes.

(The above really mostly applies to fonts as they are an additional dependency and also highly dependent on user preference. For pretty much everything else I agree that good defaults are under-emphasized in CLI/TUI utilities. Probably because options usually get added incrementally and breaking historical defaults is not a good idea.)


Would it be possible for the program to detect that the current font doesn't support all the features you need and tell you?


Besides, you still have to set your terminal to use that font. Not as if typing `spf` on the command line can reconfigure whichever terminal you are on.


I think they are talking about kilometers.


I have a '97 Jeep XJ at ~214,000 miles. They do exist :)


Hasn't rusted through?


Nope. That's mostly (I guess?) an issue in places with a lot of winter months that use salt on the roads instead of sand. It's had a bit of rust, but nothing a buzzbox welder and some steel sheets couldn't patch.


Who is doing this and where can I read more? What are the tradeoffs?

I imagine you get a dataset that is significantly smaller, but it is much trickier to keep the dataset in memory the way you could with MySQL.

It's like having a free implicit index on the customer (because you had to look up the SQLite db file before you could start querying).
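To make that "implicit index" point concrete, here's a rough sketch (the directory layout, tenant id, and schema are all made up) of db-per-tenant routing with SQLite in Python:

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical layout: one SQLite file per tenant in a single directory.
DATA_DIR = Path(tempfile.mkdtemp())  # stand-in for e.g. /var/lib/app/tenants

def connect_tenant(tenant_id: str) -> sqlite3.Connection:
    # The tenant id picks the file, so every query that follows is
    # already scoped to one tenant -- the "free implicit index".
    conn = sqlite3.connect(DATA_DIR / f"{tenant_id}.db")
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    return conn

conn = connect_tenant("acme")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99)")
conn.commit()
rows = conn.execute("SELECT total FROM orders").fetchall()
```

Because the file itself is the partition, there's no `WHERE tenant_id = ?` to forget anywhere.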

I spend a lot of time thinking about tenancy and how to handle it. Tenancy is such a common problem.

Performance is the number one reason tickets are hard to estimate. The second in my experience is security.

Time and tenancy are the biggest opportunities for SQL to just be better (I always need tenancy, and my ORDER BY, or at least one constraint, can typically be satisfied with time).


Salesforce uses a (postgres) database per tenant ('org'). Imo db per tenant is the way to go for most SaaS problems.

The ease of backup/restore, the extra layer of separation, the ability to 'easily' move something off cloud -> on premises or local


Also it should, in theory, make it nicer to do gradual rollouts/migrations.
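For example, with one database file per tenant, you can run a migration against a canary subset first and only then roll it out everywhere. A minimal sketch (the migration statements and versioning scheme are hypothetical), using SQLite's `user_version` pragma to track progress per database:

```python
import sqlite3

# Hypothetical migration list: applying entry N bumps user_version to N+1.
MIGRATIONS = [
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)",
    "ALTER TABLE orders ADD COLUMN created_at TEXT",
]

def migrate(conn: sqlite3.Connection) -> int:
    """Bring one tenant database up to date; returns the new version."""
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    for statement in MIGRATIONS[version:]:
        with conn:  # each migration step commits atomically
            conn.execute(statement)
            version += 1
            conn.execute(f"PRAGMA user_version = {version}")
    return version

# Gradual rollout: migrate a canary tenant, observe, then do the rest.
canary = sqlite3.connect(":memory:")
version = migrate(canary)
```

Since each database carries its own version, tenants can be at different schema versions mid-rollout without any global coordination.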


I forget that this has been a very popular strategy for a long time with more traditional databases.


I wonder if Turso https://turso.tech/ supports that use case. They support 10k databases in the step above free pricing tier.


I'm doing it, though I haven't written anything up. Happy to share my opinion though, with a bit more experience than you have.

The databases I'm working with are pretty small - ballpark 4MB of data per "tenant". So, I guess, a single large database server with half a terabyte of RAM could keep well over a hundred thousand tenants in memory at the same time (I don't have anywhere near that many tenants, so I haven't tested that... and honestly if I did have that many I'd probably split them up between different servers).

Without getting into too much detail - "tenant" isn't really a good fit for how we split them up. Our business is largely based on events that happen at a specific date, with maybe a few months of activity before that date. We have an sqlite database for each event (so ~4MB per event). Once the event passes, it's essentially archived and will almost never be accessed. But it won't actually never be accessed, so we can't delete it.

I haven't run into any performance issues so far, just with regular sqlite databases on the filesystem. I expect the kernel is doing its thing and making sure "hot" databases are in RAM, as with any other frequently accessed file on disk.

My understanding (it's a theoretical problem I haven't actually encountered...) is that SQLite only really struggles when you have a bunch of simultaneous writes. Our business model doesn't have that. The most actively written table is the one where we record credit card payments... and unfortunately we don't make tens of thousands of sales per second.

If we did have that "problem" I'm sure we could allocate some of our billions of dollars per day in profits to finding a way to make it work... my gut instinct would be to continue to use SQLite with some kind of cache in front of it. All writes would go to something faster than SQLite, then be copied to SQLite later. Reads would check the write cache first, and SQLite if the cache misses.
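As a toy illustration of that write-cache idea (the table and class names are entirely made up, and a real version would need durability for the buffer), absorbing writes in memory and flushing them to SQLite in batches might look like:

```python
import sqlite3

class BufferedWriter:
    """Naive write-behind cache: absorb writes in memory, flush to SQLite
    in one transaction. A sketch only -- a crash loses unflushed writes."""

    def __init__(self, conn, flush_every=100):
        self.conn = conn
        self.pending = []          # the fast write cache
        self.flush_every = flush_every

    def record_payment(self, order_id, amount):
        self.pending.append((order_id, amount))
        if len(self.pending) >= self.flush_every:
            self.flush()

    def read_payments(self):
        # Reads merge the buffer with what's already on disk (cache first).
        on_disk = self.conn.execute(
            "SELECT order_id, amount FROM payments").fetchall()
        return on_disk + self.pending

    def flush(self):
        with self.conn:  # one transaction (one fsync) for the whole batch
            self.conn.executemany(
                "INSERT INTO payments (order_id, amount) VALUES (?, ?)",
                self.pending)
        self.pending.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (order_id INTEGER, amount REAL)")
w = BufferedWriter(conn, flush_every=2)
w.record_payment(1, 9.99)
assert w.read_payments() == [(1, 9.99)]  # visible before it hits SQLite
w.record_payment(2, 5.00)                # reaches the batch size, flushes
```

Batching like this sidesteps SQLite's one-writer-at-a-time bottleneck by turning many small writes into a few large transactions.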

My experience working with a single large database is you end up with a lot of stale data that is almost never needed. When a table has a hundred million rows, with indexes on multiple columns, even the simplest operation like adding a new row can get slow. My approach with SQLite eliminates that - I'll often have just hundreds of rows in a table and access is blazingly fast. When I need to access another database that hasn't been touched in a long time (years, possibly), having to wait, what, an entire millisecond for the SSD to load that database off the filesystem into memory isn't a big deal. No user is going to notice or complain.

Obviously that's more challenging with some data sets and if you're constantly accessing old data, those milliseconds will add up to significant iowait and things will fall over. I definitely don't use SQLite for all of my databases... but in general if you're doing enough writes for SQLite's simultaneous write performance issue to be a problem... then chances are your data set is going to get very large, very quickly, and you're going to have performance headaches no matter what database you're using.

Finding some way to divide your database is an obvious performance win... and SQLite makes that really easy.



It's okay to erect a fence that is less than ideal but please explain yourself.


Whenever I am in that expedient hack mode I've started commenting to explain my dilemma. My most valuable comments typically come out at that time.

Such and such can't be done right now because of such and such.

Often it doesn't matter if I ever resolve those comments or solve that problem I hacked through. The value comes from avoiding the rabbit hole next time.


Have you tried drawing a logical diagram/flow-chart of some kind?


If it's simple enough, I'll embed a Mermaid diagram into a Markdown file committed with the change.
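For anyone who hasn't seen this, a minimal (made-up) example of the kind of diagram you can embed in a Markdown file, which renderers like GitHub's will display inline:

```mermaid
flowchart TD
    A[Request arrives] --> B{Feature flag on?}
    B -- yes --> C[New code path]
    B -- no --> D[Legacy path]
```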

