Hacker News | overbytecode's comments

One of those points is not like the others: Marimo's feature to deploy a notebook as WASM is a very nice feature, IMO.

Who are comparable providers, in your opinion? Hetzner has KYC and needs ID for you to use their services; I'd rather go with a service that doesn't. Any recommendations?


There are some Hetzner resellers which accept cryptocurrency instead.

OVH (and subsidiaries like Server 4 You, Kimsufi) is priced a bit higher but comparable (in some regions). But last time I used OVH, Hetzner also didn't require ID verification; maybe they've changed since then.

Ionos is also similarly priced and didn't need ID last time I used them.


OVH wants ID as well in some cases. If you're in the US, you aren't getting an OVH server overseas anymore, to my knowledge. Although you can get 2 Gbps unmetered on your servers, which is awesome.

I've just been lazy: buying a domain from Namecheap and getting the VPS Pulsar (6 GB RAM, 4 cores, 250 Mbps up/down) when I do a project. One server usually does fine for multiple projects.

OVH is still the grand-daddy, IMO.


Confirming this is the case. OVH closed all my non-US accounts approximately 1 year ago.


> There are some Hetzner resellers which accept cryptocurrency instead

Who? I would assume Hetzner would close those accounts pretty quickly.

What is their markup?


Curious, what was your process debugging/diagnosing this? How did you reach the conclusion that it was packet congestion?


Wish I could say it was more sophisticated than slow trial and error. I tried changing many different aspects: MTU, forcing different routes/peering through different VPSs, various reverse proxy configurations.

I guess what started leading me down the right path was a more methodical approach to benchmarking the different legs of the route with iperf: client <-> reverse proxy, reverse proxy <-> Jellyfin server. I started testing those legs separately, with and without WireGuard, over both TCP and UDP. The results showed that the problem manifested at the host level (nothing to do with Jellyfin or the reverse proxy), and only for high-latency TCP. The discrepancies between TCP and UDP were weird enough that I started researching Linux sysctl networking tunables.
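For context, the kind of sysctl tunables involved look something like this (illustrative values from common high-bandwidth-delay-product tuning guides, not necessarily the right ones for any given link):

```
# /etc/sysctl.d/99-tcp-tuning.conf -- illustrative values only
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 131072 33554432
net.ipv4.tcp_wmem = 4096 131072 33554432
net.ipv4.tcp_congestion_control = bbr
```

The buffer maximums matter because throughput over TCP is capped at roughly window size / round-trip time, so a high-latency link needs large windows to go fast.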

There might be something smart to say about the general challenges of achieving stable high throughput over high-latency TCP connections, but I don't have the knowledge to articulate it.


How do you decide when to reach for Polars vs DuckDB?


In Python, I think of them like this: DuckDB is for getting the data you want, in the form you want it, from a file or a remote source. Polars is for when you want to do something with that data, like visualize it.

`duckdb.sql("SELECT foo, bar * 2 as bar_times_2 FROM ...").pl()` (now in polars format) -> other stuff

In Rust, it's a bit fuzzier to me, as DuckDB is a pretty heavy dependency. I'm looking more and more fondly at DataFusion.


Do you mean Polars depends on/uses DuckDB pretty heavily in Rust? I'm only just now dabbling in Rust myself so I'm not familiar.


No, not at all. Polars does not depend on DuckDB.

DuckDB is a heavy dependency in terms of size: it's written in C++, so you can't work with it like a native Rust dep.


Or you use Ibis and switch between the two at will!


Can any SREs tell us how applicable this book is today? Is it still a useful read?


Yeah, it is, but there's also a lot more to being an SRE than this book. This book more or less tells you how to stand up a reliability program; what it doesn't really convey is what SREs do. A lot of people I meet think SRE is just the new title for "operator", which couldn't be further from the truth. Whether you're using an embedded model, like the one referenced in the book, or you have a central org, both are made up of software and systems engineers focused on performance and reliability. They build software, do analysis, and write policy that improves the bottom-line reliability of the organization.


Not an SRE, but I think the main contribution of this book was to popularize operations terminology (e.g., SLAs) and to give an opinionated perspective on how to handle operations at scale.

More practically, I don't think the book is as useful anymore, as it generally only makes sense once you reach a scale that few organizations ever do (IMO).

However, we are heading into a future where computing will be everywhere and there will be sensors in everything, so in maybe a decade even the "smallest" of organizations may be responsible for large-scale distributed systems, and operating those would require the concepts this book provides.


As a non-Googler myself: it still is, if you want to know how to set up an SRE team and introduce SRE (i.e., good sysadmin, for lack of a better word) best practices. The focus on actual indicators such as SLIs and SLOs, the importance of reducing "toil" (boring repetitive tasks) and automating it... these are all valid concerns.
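To make the SLO/error-budget idea concrete, the arithmetic is simple (illustrative numbers, not from the book):

```python
# Error budget for a 99.9% availability SLO over a 30-day month
minutes_per_month = 30 * 24 * 60        # 43200
slo = 0.999
error_budget_minutes = minutes_per_month * (1 - slo)
print(round(error_budget_minutes, 1))   # 43.2 minutes of allowed downtime
```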

If you want more about system design and how to design reliability, I suggest reading https://google.github.io/building-secure-and-reliable-system...


yes, but not as a checklist of things you have to do; instead, it's a valuable discussion of lots of problems and how they were solved in specific circumstances.

learn from it, don't copy from it.


The front half is good for introducing ideas. The back chapters were never that great, IMHO: they get too far into the weeds while at the same time missing actionable advice.


The age-old question, do I strive to learn new tools that are better designed (Xonsh, Nushell, Fish)? Or old tools that are omnipresent (Bash/Posix)?

I like Xonsh, it’s pretty nice to work with, but it makes going back to Bash, when I have to, even more painful.


Better to live in paradise most of your life, even if you have to go back to bash hell once in a while, rather than endure that pain all the time.


Seconded. I am very young, never bothered to learn bash, went straight for fish. ChatGPT is there to help if I ever encounter unintelligible “$@“&2>?!¿…

Gonna try nushell soon as that seems even more productive.


Nushell is great fun, but be prepared to encounter and fix errors when you paste code designed for sh/bash/zsh/fish. It's a much bigger step away from convention than the others.

I think the one that catches me off guard the most is:

    export FOO=bar
    BAZ=qux
Is now

    $env.FOO = "bar"
    let BAZ = "qux"
It makes more sense the nu way, but old habits die hard.


I don't see much use for such pasting and fixing up; I generally find it more painful than other methods.

For short snippets, better to read, understand, and translate as you type.

For some shell integrations or environment setup tools, use fenv or babelfish.

For scripts of any length, just save it and run it. The shebang'll invoke bash or whatever is needed.


nushell requires double quotes around values? looks like powershell, where everything is an object of something else


If you want those values to be strings, yes. For instance, it knows that this is a list of ints:

    $ [1, 2] | math max
      2
And here are some strings:

    $ ["a", "c"] | str join "b"
    abc
And there are tables

    $ [
       {name: "foo", value: 6},
       {name: "bar", value: 9}
      ] | where value > 7 | get name
    bar
And that's about as weird as the types get.


I also never bothered to learn Bash... `#!/bin/sh` all the way!


as such, doing that yields different results based on whether i'm on a mac, on debian, in a container, or on a zebra-on-the-moon...

it's all a bunch of loops and string manipulation in the end. in fact, awk can handle a whole lot of it too! and octave... and others, lol. so long as it does not turn into the python2-to-python3 fun we've had in the past, and we have some stability between versions; this is why i still choose bash (and the whole set of gnu binaries)

edit (formatting i hope, and python)


I'll admit that I haven't tried this on other platforms, but I was under the impression that the opposite should happen: `/bin/sh` loads a basic POSIX-compliant shell that should work the same across platforms, with Bash, Zsh, and whatever else being extensions of a POSIX-compliant shell.

Blegh, now I have to do some digging...


sh will load whatever the OS decides to open, LOL; there is no rule. again, this is why i just go with gnu bash, as it is pretty feature-rich and exec's other cli apps really well. the loops it can do, and arrays, are just a bonus; if i need a more complex data structure, then there's another app for the job that is not an interactive shell. just beware of the dragons when dealing with macos's old-ass bash; check the version, and go with a newer one if you can.
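for example, checking what /bin/sh actually resolves to on a given box (output varies by OS, which is the point):

```shell
# What is /bin/sh here? Often a symlink: dash on Debian/Ubuntu,
# bash (running in POSIX mode) on many others, an ancient bash on macOS.
ls -l /bin/sh
# Resolve the link target where readlink -f is available (GNU coreutils)
readlink -f /bin/sh 2>/dev/null || true
```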


Nah, hell is everywhere. I'm better off using Bash when I need a basic script, then upgrading to Python if I need anything more substantial.


Take a look at https://github.com/xonsh/xonsh/issues before deciding to abandon the devil you know.

I prefer sticking with bash where necessary (where a script is the only thing that will reasonably work), and elsewhere using a programming language with testing, type checking, modularity, and compilation into something with zero or minimal runtime dependencies.


I use zsh on my work and personal computers. I'm not ssh'ing into boxen these days. But when I do, I'm not doing anything more than reading logs to figure out why the userdata on an EC2 didn't work as expected.

I try to use posix standards when convenient, but I'll switch to bash at the first sign of posix complexity.

Xonsh seems like I'd have to type a lot more than I do with zsh. I would also be concerned about not being able to give my team members the same command I used without forcing them into a non-standard shell.

I don't use fish because I've only met one other person IRL who used it. Everyone I've worked with has used bash, zsh, or ksh (I'm glad I left the ksh company before they had to rewrite all those ksh scripts).

Also, Bash is staying for now; POSIX will most likely keep working for the foreseeable future. Zsh seems to be the new Bash, but I have yet to see anyone put zsh in a shebang at work.


> I have yet to see anyone put zsh in a shebang at work.

I put it in shebangs for macOS scripts nowadays, since it's been the default shell on macOS for a little while. That's a niche for sure, but still.


I found xonsh can be configured to be extremely similar to zsh. I don't see that it's «more typing».


I recently (i.e., yesterday) migrated my 15+ year old bash config to zsh. zsh has some great quality of life improvements compared to bash and is basically 1-1 compatible. I had to spend about an hour migrating my prompt, but other than that it was a smooth transition.

zsh is now the default shell in macOS, so I'd say it's a safe bet if that's what you work with.


I switched a few years ago, and while there's a lot to like about (and power in) zsh, there's a lot I really dislike about it. For starters, it adds so much additional functionality and compatibility that the documentation (the man pages) is terrible. Also, the additional history and variable expansion capabilities are messy/ugly in shell syntax (IMHO). I think ultimately the problem is that, for shell scripting, bash has clearly won, but other shells show that there's room for an alternative aimed specifically at a user's interactive interface... but zsh didn't get the memo, so it's trying to be all things to all people. One of these days, I'll probably swing over to fish... if I can find the energy to change my environment yet again.


"basically 1-1 compatible"

Careful, there are footguns in those words. It may seem like it's 1-1, but it's not. There are subtle differences, especially in prompt escapes: Bash uses \ escapes, while Zsh uses % escapes. Zsh also has its own built-in wildcard (glob) expansion behavior. There are other differences as well, but you can use the `emulate sh` command so that it actually is close to 1-1.
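For instance, the same prompt spelled in each (illustrative ~/.bashrc vs ~/.zshrc fragments):

```
# bash (~/.bashrc): backslash escapes
PS1='\u@\h \w \$ '
# zsh (~/.zshrc): percent escapes
PROMPT='%n@%m %~ %# '
```

Both show user@host and the current directory, but paste one into the other shell and you get literal escape characters in your prompt.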

Also, once you've made the switch to zsh - checkout oh-my-zsh (https://ohmyz.sh/)


it's pretty funny that Zsh only came out one year after Bash... in 1990


Learn the new tools. I’m working on my own machine 99% of the time, and if I’m on a remote machine, there’s a 90% chance I’m running something automated there. I’m not going to handcuff myself to the baseline for the sake of that .1%.

There’s no way I’d go back from Fish to Zsh or Bash on my daily driver. It’s just too pleasant to give up just because of “what if?”.


If it's Python, is it really a new tool?

The big problem is that bash is more or less portable (almost everything has bash in the box). They'd need to start convincing distros to include and/or default to xonsh to really make it worthwhile.


In my view, yes. It would allow you to extend your shell easily; for example, you could do something with the history, like setting a blacklist of commands that should not appear when cycling with the up key, while still recording their execution in the full history for poor man's auditing.
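That blacklist idea could be sketched in plain Python like this (illustrative logic only, not actual xonsh API; the prefixes are made-up examples):

```python
# Hide blacklisted commands from up-arrow recall, but keep everything
# in a full audit log for poor man's auditing.
BLACKLIST = ("export AWS_SECRET", "mysql -p")

audit_log = []       # everything that was run
recall_history = []  # what the up key would cycle through

def record(cmd):
    audit_log.append(cmd)
    if not any(cmd.startswith(p) for p in BLACKLIST):
        recall_history.append(cmd)

for c in ["ls", "export AWS_SECRET_ACCESS_KEY=...", "git status"]:
    record(c)

print(recall_history)  # ['ls', 'git status']
print(len(audit_log))  # 3
```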


If you're new and still trying to get into the industry, try all the new tools. Drive them hard and try to break them, so that you can find bug fixes you can contribute. Just go nuts, and let yourself steadily build up a backlog of unique, public, referencable commits you can show employers.

Once you're already established and comfortable, it's up to you if you want to keep trying the new flavor of the week. People gravitate towards novelty at different parts of the stack: Some people love running FreeBSD or Alpine, but stick to Bash on top of those; others, like me, try to stick with Ubuntu whenever possible, and mess around with things like shells and tiling window managers. Others even return to Windowsland and instead focus all of their efforts on innovating at the highest levels of what they can do with C# and actually making money with an innovative business model.

But you'll never learn where you don't enjoy the thrill of something new breaking on you unless you go through that initial "question everything" phase.


Personally I'd prefer to have to learn no shells at all, but since that's not possible I'll stick with the one that's most commonly installed, which is bash. Similar feeling with an editor - vanilla vi instead of learning and depending on a slew of fragile extensions that only work on a personal laptop. If the challengers supplant the defaults on servers in these domains at some point, I'll learn them then.


I don't think Fish counts as "new" at this point, given that it's been around for almost 20 years now. "Boutique", perhaps?


Just go for it. I've been using Fish as a daily driver for something like 14 or 15 years now. I have no regrets about it.

If I want it on a server I'm using, I (*gasp!*) just install it.

(I still write Bash sometimes and that's not really a problem, either.)


I feel like having ipython available whenever I feel like it from my shell is the best of both worlds


you can use both.

whenever I have a task involving data manipulation, i.e. fetch a JSON, map/filter/reduce over it, save it in some format, I reach for nushell.

If it's just process management or day-to-day running of commands in a folder, I use zsh.


Quickemu gives me the ability to instantly spin up a full blown VM without fiddling with QEMU configurations, just by telling it what OS I want.

This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
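A minimal session looks something like this, per the Quickemu docs (the OS and version are just an example):

```
quickget ubuntu 24.04              # fetch the image and write a .conf
quickemu --vm ubuntu-24.04.conf    # boot the VM with sane defaults
```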


I find nnn and Ranger a lot more ergonomic as a UX, but MC's virtual file system is so good: the ability to browse folders seamlessly, whether local, remote, zip, tar, or jar, all in the same interface, is really useful.


Okay..what is the alternative?


> Okay..what is the alternative?

Just ... open the file and handle the error?

What are you getting by checking that the file exists that you don't get from the 'file not found' error that the `openFile()` routine returns?

Because it doesn't matter if `doesFileExist()` returns true, you still have to handle the `file not found` error[1] when calling `openFile()` on the next line anyway.

[1] Just because the file exists when the program checked, that doesn't mean that the program can open it (permissions), that the file still exists (it could have been removed between the check and the open), that the file is in fact something that can be opened (and not a directory, for example), or that the file is not exclusively locked (on Windows) by some other process. `doesFileExist()` tells you nothing that would change how the subsequent code is written.
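In Python terms, the pattern above might look like this (a sketch; the path and fallback behaviour are examples, not from the thread):

```python
# "Just open it and handle the error": no doesFileExist() check needed,
# and no race between the check and the open.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None                     # expected case: no config yet, use defaults
    except OSError as e:                # permissions, it's a directory, locked, ...
        raise RuntimeError(f"cannot read config {path}: {e}") from e

print(read_config("/no/such/config.conf"))  # None
```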


That makes sense.

I guess we’re accustomed to thinking that absence of a file is not “exception-worthy” because it is expected under normal circumstances. But the cases you raised make sense.


And if it's the initial configuration file?


> And if it's the initial configuration file?

You're still going to have to open it and handle the errors (in case it's the wrong permissions, or the filename is a directory, etc).

Checking if it exists doesn't make the subsequent code any easier or shorter - you still have to check for errors even if it exists.


I’m curious how you’re integrating it with Sveltekit? Are you using Sveltekit just as a static generator?


Depends upon the project.

For CRUD apps, SvelteKit's progressive enhancement and form actions make it quick to add simple functionality to a page. You can store the PocketBase instance, pb, in locals and reference it all over the application.

For more multiplayer-style things, sticking a client-side subscription on a collection allows live updates of elements that can be worked with/added/moved around, etc.

