SpaceSim (pavelsevecek.github.io)
359 points by Luc 2 days ago | 46 comments





I just did this install, then went to remove it, and it attempted to remove `/usr/local/bin`.

Well, that's just one way to get "space" :)

Typing 'rm' in any script I write scares the bejeebus out of me. I tend to write 'echo rm' so I get a chance to review while testing, to catch this specific type of issue.
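
i.e., during testing the destructive line becomes something like this (a sketch; `$target` is a placeholder):

  echo rm -- "$target"   # prints the rm invocation instead of running it; drop the echo once it looks right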

Instead of deleting anything, my scripts usually mv files to a timestamped folder under /tmp. In practical terms, it’s rarely a noticeable difference in performance or disk usage. Also makes scripts easier to debug when you can inspect transient artifacts.
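
Concretely, the pattern is something like this (a sketch; the trash path and naming scheme are illustrative, `$f` is a placeholder):

  # instead of: rm -- "$f"
  trash="/tmp/trash-$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$trash"
  mv -- "$f" "$trash/"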

I manage large video/audio assets, so disk usage is very noticeable. I've done the mv to a designated trash folder, with another script that finds files in that folder older than a designated time-to-live and then does the `-exec rm -f {} \;` type of stuff. Even typing that out still gives me pause. In my experience, nobody ever needs a deleted file back so urgently that the designated time-to-live window (say, 24 hours) isn't enough.
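
Schematically (the trash folder path and 24-hour TTL are placeholders for whatever you configure):

  # purge anything in the designated trash folder past its time to live
  find /data/.trash -type f -mmin +1440 -exec rm -f {} \;   # 1440 min = 24 h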

My workstation machines take hourly (plus at-boot and on-demand) snapshots of the filesystem. Doing it at the system level is a lot simpler than repeating the logic over and over, and /tmp is often a different mount than where the files first resided, so moving things there is really a copy+delete.
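
For the curious, on Btrfs each snapshot is one cheap copy-on-write command (assuming /home is a subvolume; ZFS and LVM have equivalents):

  # read-only snapshot named by timestamp, e.g. fired hourly from cron or a systemd timer
  mkdir -p /home/.snapshots
  btrfs subvolume snapshot -r /home "/home/.snapshots/$(date +%Y%m%d-%H%M)"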

In case you're interested, I have adopted a pattern that works for me in bash (I don't use zsh, so caveat shellator):

  N=${N:-}  # default N to empty so `set -u` doesn't trip over it
  $N rm ./whatever
and then you can exercise the script via

  N=echo ./something-dangerous
but without N defined it will run as expected. More nuanced commands (e.g. `rsync --delete --dry-run`, which will provide a lot more detail about what it thinks it is going to do) could be written as a `rsync --delete ${N:+--dry-run}` type of deal.
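
Put together, a minimal end-to-end sketch of the pattern (the script contents and paths are made up for illustration):

  #!/usr/bin/env bash
  set -u
  N=${N:-}                                      # empty by default; survives set -u
  $N rm -- ./whatever                           # N=echo turns this into a printed preview
  rsync -a ${N:+--dry-run} --delete src/ dst/   # rsync gets --dry-run only when N is set

Rehearse with `N=echo ./script.sh`; run it for real with plain `./script.sh`.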

You can also use -i to confirm deletions, so you don't have to edit and re-run the command. The downside is being asked about every file individually rather than confirming one (big) list, so I'm not sure it fits your use case.

The -i option doesn't really work well for scripts intended to run unattended and headless, so no, it doesn't fit the use case.

Now I'm waiting for the suggestions to use -rf.


Did it try with `rm -r` or with `rmdir`? `rmdir` seems perfectly OK to me; it will keep /usr/local/bin intact if there are files left in it.
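
For anyone unsure of the distinction, `rmdir` refuses non-empty directories by design (GNU coreutils wording shown):

  $ mkdir -p demo && touch demo/keepme
  $ rmdir demo
  rmdir: failed to remove 'demo': Directory not empty
  $ rm demo/keepme && rmdir demo   # succeeds once empty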

The installer and game work perfectly on Intel integrated graphics on Linux with Wine 9.22.

Note that it does build natively for Linux (and other platforms) from source, and the GitHub page:

https://github.com/pavelsevecek/OpenSPH

includes an old Debian package you can install (although it's for Debian 10 and doesn't work on recent Ubuntu/Mint installs either)...
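
If you'd rather build it yourself, something along these lines should work (a sketch only; check the repo's README for the actual build system and dependencies):

  git clone https://github.com/pavelsevecek/OpenSPH
  cd OpenSPH
  # assumes a CMake build with dependencies from the README already installed
  cmake -B build -DCMAKE_BUILD_TYPE=Release
  cmake --build build -j"$(nproc)"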


Certainly feels like a missed opportunity not to advertise that somewhere official.


It would be fun if we could define planets with our own materials, like bananas (influenced by xkcd), diamonds or whatever other silly substance we like :-)

Or chocolate (Terry Pratchett: Thief of Time), IIRC.

Are there any easy examples one can just run once installed?

Or can anyone on HN give me any hints on a valid flow chart?


This is really impressive.

I worked with the author at Corona Renderer; guy's a genius, no sweat.

Tangentially related: Gravity Wars - a fun 2-player physics artillery game where planets affect the projectile path.

https://github.com/whyboris/Gravity-Wars


Moonshot[1] is another similar game, and a bit more polished. It was abandoned, unfortunately, but it's fun for a few minutes.

I just noticed there's Orbit Outlaws[2] from the same developer, which builds on the same concept (for better or worse), but is also abandoned.

[1]: https://store.steampowered.com/app/426930/Moonshot/

[2]: https://store.steampowered.com/app/1319100/Orbit_Outlaws/


Oh wow, I wish I'd seen this in the list of projects when LÖVE was featured here the other day.

This looks super fun.

Edit: I'm loving the explosion-revenge last-ditch effort to counterstrike when hit. Fantastic concept.


> Tangentially related

I see what you did there.


Very, very cool. It's also so rare these days to see the scientific crowd bother building Windows installers. Now people whose only skillset is using Microsoft Word and cheating in games can get a glimpse of what modern compute is capable of; hopefully it will inspire some of them to think beyond badly formatted text documents.

Although at this point they are more likely to call it science fiction because they all know the earth is flat.


Looks great, but GitHub metrics indicate that, unfortunately, the project has stalled. The last commit was six months ago on master and two months ago on develop.

source: https://github.com/pavelsevecek/OpenSPH/graphs/contributors


Two months between commits seems fine for a hobby project. I wouldn't call it dead until a couple of years had passed without one.

Two months without a commit could still be quite active and useful software, especially for a personal project. Where would you draw the line?

Yes, I don’t question the usefulness of the project by any means. To be frank, I’m personally very interested in it—I studied celestial mechanics at university many years ago and am still curious about simulations.

The contributor graph I shared suggests that the peak of contributions was a couple of years ago, with occasional changes since then. This doesn't make much sense to me, as the rendering quality looks great (at least in the videos—I'll try the software a bit later), and it's head and shoulders above what the scientific community is currently using.


I don't think that it's fair to compare the rendering to what is currently in use in the scientific community, for two main reasons:

The first is that different types of rendering have different uses; typically in scientific visualization this is broken down into essentially "viz for self, viz for peers, viz for others" and oftentimes the most well-used rendering engines are targeted squarely at the first and second categories. The visual language in those categories is qualitatively different than that used for more "outward facing" renderings.

The second reason is that I disagree with your assertion about the quality of the visualization techniques in use within science. There are some truly spectacular visualization engines for cosmology and galaxy formation -- just to pick two examples off the top of my head, the work done by Ralf Kaehler or that by Dylan Nelson. (There are many really good examples, however, and I feel guilty not mentioning more.)

As I said in another, rather terse and unelaborated comment, this is really, really impressive work. I think it's important, though, that in praising it we don't discount the work that's been done elsewhere. This need not be zero-sum.


I don't mean to discount any other work. I have already disclosed that I don't work in academia and rely on second-hand feedback from my classmates (in Europe)—for example, the Fortran implementation of Yoshida's method from N years ago that nobody could modify, or the pressure to publish. Building (or learning) a new rendering engine would be a losing strategy in an academic career, as it is a much more difficult path to getting published. There are far fewer postdoc positions than PhD positions, and rendering skills won't help in that competition.

Regarding the work of Ralf Kaehler: I have seen his renderings and looked through his articles, but to the best of my knowledge, no source code is publicly available. I don’t consider it fair to count it as something actively used in the field, beyond his lab and affiliated projects.

Disclaimer: that doesn't mean there are no others, but their availability to researchers is too limited for them to become widely adopted.


You can't imagine that someone working on something like this would slow down as the work neared completion? Why must a piece of software / code constantly be changing? What's your specific concern? You're making a very strong claim that the "project has stalled" without any real evidence. Furthermore, the project "stalling" makes it less... what, exactly?

Yes, I can imagine multiple reasons why an author might change their pace. My observation was simply that it changed.

Based on my experience (both personal and from colleagues), when a project is not in active development, the team starts losing knowledge of the codebase along with its context. For example, something that was at your fingertips while actively working on the project would be much more difficult to recall after a year. The difficulty of maintaining or extending the project grows over time if it is not actively worked on.

‘Stalled’ = contributions become less and less frequent.

If a project has stalled, there isn't much new happening. For a simulation like this, the sky is the limit—you can make it as accurate as possible (e.g., accounting for light pressure, especially significant around a black hole accretion disk, the Yarkovsky effect, etc.).


Update: based on the author's activity on YouTube, he still works on it: https://www.youtube.com/@pavelsevecek/videos?view=0&sort=dd&...

I dunno, I have active hobby projects that go weeks to months without commits. Sometimes you need to experiment with things for a while to get a feel for whether or not it should be committed. Sometimes you need to take a break.

The bullshit amounts of churn-for-the-sake-of-it in the JavaScript ecosystem aren't normal.


It depends on the complexity of the project. I assumed something nontrivial, like this project. I outlined some thoughts on the effects of consistent development and what the project might become in a comment above (current state vs. becoming a go-to visualization tool in the field for years to come).

Regarding the JavaScript ecosystem—I never mentioned it. Replacing one tool with another has nothing to do with the evolution of a single project.


> hobby projects that go weeks to months without commits

Just months? :D Last week, a hobby project took down various unrelated services on my server (like receiving email) by suddenly filling up the disk. The root cause was bad handling of an expired third-party domain. I had last touched that code in 2012!

Or the grocery list software I use daily: its main activity period was probably 2015 through 2018, with features/bugfixes being added maybe once every 2-3 years nowadays. Back in September I added a small feature we now use on most grocery trips, and since it gets daily use by the developer, it's not like it's unsupported.

One of my few projects with regular users besides family is a ~2013 rewrite of a 2011 file uploader. Sometimes over a year goes by without any change at all, but whenever someone came along with a bug report, I think the fix was never more than a few days away. Come to think of it, just today a friend reported being happy that I still provide it.

Although 'stalled' perhaps isn't inaccurate, I feel it gives the wrong vibe if used to describe these daily-used projects, where the bug reporting method pops a silent notification on my phone and I act on every report. No offense to u/apetrov; I get what they meant when reading their subsequent replies elsewhere in the subthread.


Developing for a single platform in 2025 is like developing for a single web-browser in 2005.

Developing for a single platform in 2025 is like developing for a single platform in 2005, if you don’t care about mobile.

The desktop marketshare of the various platforms hasn’t fundamentally shifted since then. Mobile was all additive, and Microsoft lost it. But Mac and Linux remain roughly where they were.


I believe the GP comment is too dismissive, but it is true. In 2005, when the dominance of IE started to dissolve, it was not the best move to develop for a single browser. People still did it, though.

Today we see a rise of ARM on desktops. Developing only for x86 excludes Mac users, and the situation is moving in a direction where x86-exclusive software will exclude an assortment of users across different OSes who chose to buy an ARM desktop/laptop.

But I completely understand the choice made by the author; supporting vector extensions on two (or three? RISC-V?) architectures would be a lot of additional work. The project is FOSS, so anyone can jump in and add support for ARM vector extensions. Hopefully that will be easier than writing it from scratch, because you can compare intermediate results bit for bit and catch mistakes red-handed.


This chart indicates lots of growth for OS X/macOS since at least 2009 (which is as far back as their data goes):

https://gs.statcounter.com/os-market-share/desktop/worldwide...


It would be interesting if they took out enterprise machines and/or computers people were forced to use rather than chose to purchase.

Not everything is a VC funded thing. This is clearly a research project at a university - notice the ff.cuni.cz links in the images.

It's mff.cuni.cz. ff.cuni.cz is faculty of philosophy, mff is faculty of math and physics :)

Paraphrasing somebody: Win32 is the most stable Linux API.

Unironically true.

As one of your sibling comments points out: it works perfectly on Wine.


