Hacker News | opengears's comments

Selfhosting

Actually, I don't think self-hosting is a viable solution for many people. Most people who run servers are not security experts, and IT security is genuinely hard because of the many attack vectors you have to be knowledgeable about. The article assumes an attacker has already compromised a server, and I don't see how a layman can keep a server safe against experts determined to compromise it long term if they just follow the recommended maintenance. One day you will slip up.

You might get a security-related update late, miss news of the latest breach, or not realize how it applies to you; all sorts of scenarios. The only way to make compromise much more difficult is to not connect your self-hosted cloud solution to the internet. But then it's not really a cloud solution anymore.

And that's before you consider that not everyone has the knowledge, time, or interest to self-host.


If that 3+ Tbps attack CF just mitigated starts aiming at the entire IPv4 range (probably more spread out and cyclical), self-hosting could die :(

At least anything on UDP


Hmm, yeah that's true


I used to daily drive this. Works quite well.

yeah but it’s a bigger pain

and yet how do i install this gaming support on nixos?

You'd have to package FEX :D

For the kind of person who wants to run NixOS on Apple Silicon or do Linux gaming on Apple Silicon in the first place, that's probably interesting and not too hard

but if you're allergic to that, you might be able to figure something else out with Box64, which is already packaged in Nixpkgs

x86_64 gaming on NixOS is of course well supported and has been for a long time. There's a 'native' package that I've always used and the Steam Flatpak is also available and works as well as it does anywhere


we are talking about asahi linux. i think it is pretty clear that nixos isn't supported like a first-class citizen, because you have to do a fair bit of work to make all of the more recent userspace fixes work on NixOS. i run NixOS Asahi, so i know.

it was easier when Arch was a first class citizen but the advice nowadays you get upon encountering a problem on Arch is to switch to Fedora


> you have to do a fair bit of work to make all of the more recent userspace fixes work on NixOS

Is it fundamentally different from any other Nix packaging work in some way?


Perhaps if the NixOS folks want better support they should invest more time in keeping up with Asahi Fedora?

The developers can use what they want. Marcan famously used Gentoo for many years


Using LibreDrive-supported devices, would we get some other advantages? Like being able to read from old and broken CD-ROM and DVD drives more reliably?

No, it actually makes it "worse" in that most usual DVD players and drives will do a "best effort, but keep going" type of read, which may result in a pop, skip, or momentary desync on playback. These tools, however, are archival and refuse to read if they can't read correctly.

It's actually quite annoying at times. For example, it's often better to rip audiobooks with iTunes and then grab the files and delete them from iTunes than to use something like XLD directly.


The mitigation section says 'Deploy Runtime Protection: Use advanced anti-malware and behavioral detection tools that can detect rootkits, cryptominers, and fileless malware like perfctl.' -- which tools can we currently use to detect perfctl?

I hear Crowdstrike is king (≖ ͜ ≖)

To be fair, a system that rebooted and won't come back up IS pretty secure.

No, because it's a denial of service.

C-I-A triad: Confidentiality, Integrity, Availability.

A dead system is confidential, and if that's your criterion, then fine, but legitimate users may require access to intact data and services.


Sure, and availability in this sense is often forgotten, but I was only joking about Crowdstrike's ability to block malware.

A dead machine is difficult to infect with malware. You'd have to go out of your way to do so.


are there any scripts or steps to 100% detect perfctl yet?

The article mentions a couple of constant paths that are used, like /root/.config/cron/perfcc.

Also, it mentions that ~/.profile is modified (EDIT: and many others, actually), so an IDS like AIDE, if operated correctly, should alert you to that. I don't see any mention of attempts to circumvent a locally run IDS. I wonder if/why the malware author did not attempt any evasive actions here, given how hard they try otherwise. Maybe the cost/benefit ratio is too low?
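The "an IDS should alert on a modified ~/.profile" idea can be sketched with plain checksums. This is a toy stand-in for a real tool like AIDE; the file contents and the injected line are made up for the demo, which runs entirely in a throwaway directory:

```shell
#!/bin/sh
# Toy integrity check in the spirit of AIDE: record a baseline of
# checksums, then alert when a watched file (here a .profile) changes.

check_integrity() {
    # $1 = directory holding the watched file and baseline.sha256
    if ( cd "$1" && sha256sum --check --quiet baseline.sha256 ) >/dev/null 2>&1
    then echo "unchanged"
    else echo "WARNING: modified since baseline"
    fi
}

demo=$(mktemp -d)
printf 'export PATH="$PATH"\n' > "$demo/.profile"

# 1. Record the known-good state.
( cd "$demo" && sha256sum .profile > baseline.sha256 )

check_integrity "$demo"     # prints: unchanged

# 2. Simulate the kind of persistence edit the malware makes.
printf '# injected persistence line\n' >> "$demo/.profile"

check_integrity "$demo"     # prints: WARNING: modified since baseline
```

A real IDS additionally protects the baseline itself (e.g. stores it offline or signs it), since malware that can edit ~/.profile can just as easily regenerate a checksum file sitting next to it.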


From the text, tons! This rootkit does not seem very stealthy at all.

IMHO, the simplest one is to check $PATH. If there are suspicious entries, like /bin/.local/bin, it's a sign of infection.

You can also check for the presence of the specific files mentioned near the end of the article.
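A quick triage for the two indicators named in this thread (the hidden PATH entry and the dropped cron file) might look like this; it only checks the paths mentioned here, so a full scan would need the complete IoC list from the article:

```shell
#!/bin/sh
# Quick triage for the perfctl indicators mentioned above:
# a suspicious PATH entry (/bin/.local/bin) and a dropped file
# (/root/.config/cron/perfcc). Not a complete IoC scan.

scan_perfctl() {
    found=0
    # PATH hijack: a hidden bin directory shadowing real binaries.
    case ":$PATH:" in
        *":/bin/.local/bin:"*)
            echo "suspicious PATH entry: /bin/.local/bin"
            found=1 ;;
    esac
    # Known dropped file (you need root to see inside /root).
    if [ -e /root/.config/cron/perfcc ]; then
        echo "indicator present: /root/.config/cron/perfcc"
        found=1
    fi
    [ "$found" -eq 0 ] && echo "no known perfctl indicators found"
}

scan_perfctl
```

The function's exit status is nonzero when an indicator is found, so it can drop straight into a cron job or monitoring check.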


> In all the attacks observed, the malware was used to run a cryptominer

I assume detection starts with noticing continuous 100% utilization of the CPUs.


Supposedly it tones down its activity while a user is logged in and waits for the machine to go idle. Another reason to have centralized performance monitoring.

Yes, but tools like htop show the average load over the last 1, 5, and 15 minutes, so I assume that will still show high utilization.
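Those load averages come straight from the kernel, so you don't even need an interactive htop session; centralized monitoring can read /proc/loadavg directly. A Linux-only sketch (the 0.9-per-core threshold is an arbitrary example, not anything from the article):

```shell
#!/bin/sh
# Read the same 1/5/15-minute load averages htop displays.
# /proc/loadavg looks like: "0.42 0.37 0.30 1/123 4567"
read -r one five fifteen _ < /proc/loadavg
echo "load averages: 1m=$one 5m=$five 15m=$fifteen"

# A miner that pauses on login still leaves a footprint in the
# 15-minute average for a while; compare it against the core count.
cores=$(nproc)
# awk does the float comparison, since sh arithmetic is integer-only
if awk -v l="$fifteen" -v c="$cores" 'BEGIN { exit !(l > 0.9 * c) }'; then
    echo "15m load near/above core count: worth investigating"
fi
```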

plaintextaccounting to the rescue

I have good experiences with Syncthing for file syncing.


Sadly no sigrok and pulseview support. See https://forum.redpitaya.com/viewtopic.php?t=22866


Here is another good thread about the Red Pitaya ("sobering") https://www.reddit.com/r/embedded/comments/y9oxqb/red_pitaya...


It is such a shame the RPi Zero2 does not support "traditional" sleep modes like the ESP32 for example - which is why we have to optimize the Linux boot process. https://forums.raspberrypi.com/viewtopic.php?t=243719


This is easy with Syncthing. Just create a bidirectional sync and move the data away with cron.


i don't believe that will work reliably. cron risks moving a file that is not complete, or a race condition where the complete file is moved away before Syncthing verified that it is complete, causing it to transfer it again. this can be avoided by only moving files that are a few minutes old, but issues like this just make the process more brittle.

i want to use a tool that is reliable, not hack together a custom solution. if i did that, why even bother with syncthing?


That doesn't provide a guarantee that a file is on multiple replicas before deletion from the phone.


You could use Syncthing just to ingest the incoming files from your phone and then move the photos via cron to a second folder (also Syncthing) which is shared only with the replicas.
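A minimal sketch of that cron step, with the age threshold from the earlier comment to avoid racing in-flight transfers (directory names and the 10-minute cutoff are examples, not anything Syncthing prescribes):

```shell
#!/bin/sh
# Two-stage move: drain a Syncthing ingest folder into an archive
# folder, touching only files that have been stable for a while.

drain_ingest() {
    # $1 = ingest dir, $2 = archive dir
    # -mmin +10: only files untouched for 10+ minutes, so we never
    # grab an in-progress transfer; Syncthing's temporary files
    # (named .syncthing.*.tmp by convention) are skipped explicitly.
    find "$1" -type f -mmin +10 ! -name '.syncthing.*' \
        -exec mv -n {} "$2"/ \;
}

# Demo on throwaway directories:
ingest=$(mktemp -d); archive=$(mktemp -d)
touch -t 202001010000 "$ingest/old-photo.jpg"   # looks 10+ min old
touch "$ingest/fresh-photo.jpg"                 # still settling
drain_ingest "$ingest" "$archive"
ls "$archive"    # old-photo.jpg was moved; fresh-photo.jpg stays put
```

mv -n refuses to overwrite, so a name collision in the archive is left for you to resolve rather than silently clobbered.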

Another approach would be to push the files from Syncthing to borg (borgmatic can do replicas) https://torsion.org/borgmatic/


I import files from Syncthing folders into a git-annex on my NAS, where multiple copies are eventually guaranteed via sync to off-site mirrors (remotes).

