
> I’m no security person, so the reasoning felt a bit weird to me, as I guess the .zip TLD can’t hurt anybody; downloading a .zip might, which you can attach to any link name?

Turn off all of your developer knowledge for a minute.

You click on a link "very-trustworthy-ceo-information.zip" in a mail, because you want to download this very important information from your CEO. Sure, your browser pops up, but it does that all the time, so who cares, and then there is a file "very-trustworthy-ceo-information.zip" in your downloads folder. Native Outlook might usually open attachments in a slightly different way, but who cares? In OWA you won't notice a difference in the UI at all. But anyway, important CEO information. Open the zip, open the PDF, oops, your workstation is compromised.

If we turn our technical knowledge back on, it's rather simple. A user was phished into opening a link to "https://very-trustworthy-ceo-information.zip". This returned a file download, obviously called "very-trustworthy-ceo-information.zip", containing whatever I want it to contain, based on the requester's IP and whatever I can stuff into the link in a hidden fashion the average user won't notice.

A lot of people would not be able to distinguish between https://foo.zip answering with a binary content type and naming the file foo.zip through a Content-Disposition header, and a foo.zip that actually comes from a trusted source.
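For illustration, here is a minimal sketch of the server side of that trick (domain, payload and port are made up): the page behind the .zip domain simply answers every request with a download whose filename matches the domain, via a Content-Disposition header, so the victim only ever "sees" a zip file.

    # Hypothetical responder: any GET returns a "zip" download whose name
    # matches the domain. Domain, payload bytes and port are invented.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAYLOAD = b"PK..."  # attacker-controlled archive bytes, elided here

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/zip")
            # The filename shown in the download prompt comes from this header,
            # not from anything the user can easily inspect in the URL.
            self.send_header("Content-Disposition",
                             'attachment; filename="very-trustworthy-ceo-information.zip"')
            self.send_header("Content-Length", str(len(PAYLOAD)))
            self.end_headers()
            self.wfile.write(PAYLOAD)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()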

And honestly, I would personally have to double-check what's going on there if it happened to me.


My point was that the person fooled by https://foo.zip/ would also have been fooled by https://foo.com/bar.zip, so the existence of .zip wouldn't change much.

But now I've understood that the auto-linkification of a simple non-link mention like update.zip can indeed be dangerous.


Imagine a URL like http://www.chase.com.auth.statement.zip

Average users would never suspect a thing.


I genuinely do not see how that is different than http://www.chase.com.auth.statement.com

As a developer, the difference might be minimal: you see the http:// part.

For the average computer user, I'm guessing they don't "see" the http:// part. It looks like a link to an attachment in the email, so safer than a random URL.
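As a quick illustration of why the hostname above belongs to "statement.zip" rather than chase.com (purely a toy example, not a real domain):

    # The browser reads the hostname right to left: only the last labels
    # decide which site you actually connect to.
    from urllib.parse import urlsplit

    host = urlsplit("http://www.chase.com.auth.statement.zip").hostname
    print(host)                             # www.chase.com.auth.statement.zip
    print(".".join(host.split(".")[-2:]))   # statement.zip  <- the registrable domain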


Music isn't very different either. A common recommendation is to first learn to play songs you like, and then to start diverging a bit: adjust the things you don't like and merge in the ideas you like later on.

Our worst outage occurred when we were deploying some kernel security patches: we grew complacent and updated the main database and its replica at the same time. We had a maintenance window with downtime at the same time anyway, so whatever. The update had worked on the other couple hundred systems.

Except, unknown to us, our virtualization provider had a massive infrastructure issue at exactly that moment, preventing VMs from booting back up... That wasn't a fun night, failing over services into the secondary DC.


This makes me glad I finally talked people at work into running our annual pentests of our products on production, and putting the entire production infrastructure in scope. Focus may be on a specific product or system, but everything is in scope.

And the first test is running, and no one is screaming yet, so fingers crossed.


when you say yearly I assume you're not conducting regular internal pentests?

any pentesting companies that you could recommend which do more than just drive-by shooting with metasploit?


We pentest all of our developed applications annually, and on top of that, a few customers have internal regulations to pentest applications they use, so some of our applications run through 3-4 pentests per year. This is pretty useful for staying up to date on our TLS configs and the latest security headers, and they have found some issues in authorization structures and such.

However, what I'd really like is budget and time for a dedicated infrastructure pentest. I'd like to give the pentesters the same access as our office has, to see if that's fine. And since I like pain, I'd also like to simulate compromise of an application server: deploy some reverse shell / Kali container as an unprivileged container with some middleware service access, and later on deploy a privileged container as well. Ideally the first simulation should only lead to loss of the data the service needs to function, but as the article shows: who knows. Maybe we also lose everything.
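As a rough sketch of what I mean by that first simulation, assuming Docker and the Python Docker SDK (image, network and container names below are placeholders, not our real setup):

    # Drop an unprivileged shell container next to a middleware service and see
    # how far the pentesters get; round two repeats this with privileged=True.
    import docker

    client = docker.from_env()
    foothold = client.containers.run(
        "kalilinux/kali-rolling",
        command="sleep infinity",
        name="assumed-breach-foothold",
        network="middleware-net",   # hypothetical network the real app container sits in
        privileged=False,           # first round: unprivileged, app-level access only
        cap_drop=["ALL"],
        detach=True,
    )
    print(foothold.id)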

Regarding companies, at my current job we're having good experiences with Secuvera[1] from Germany. They do the usual ZAP/Metasploit drive-bys, but they also poke and prod at various security boundaries and the services behind the application. We're getting good results from them.

At my previous job, we also had a few encounters with Redteam Pentesting[2]. Those guys exploited an incorrectly used cipher mode in the links that let users "single-sign-on" (only in spirit, not in current tech) from the game client to the forum, hijacking arbitrary forum accounts by modifying the encrypted forum-account-id inside the link. And other fun hijinks.

1: https://www.secuvera.de/ (I can't find an English version of that site)

2: https://www.redteam-pentesting.de/en/
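To illustrate the cipher-mode point: the sketch below is not the scheme they actually broke (key, ID format and numbers are invented), it just shows why an unauthenticated mode like plain CBC lets an attacker rewrite an encrypted account-id without knowing the key.

    # Toy example: an "encrypted" account id in a link, tampered with via CBC bit-flipping.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)  # invented key; the attacker never learns it

    def encrypt_id(account_id: int) -> bytes:
        iv = os.urandom(16)
        block = account_id.to_bytes(16, "big")   # toy fixed-width encoding
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(block) + enc.finalize()

    def decrypt_id(token: bytes) -> int:
        iv, ct = token[:16], token[16:]
        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        return int.from_bytes(dec.update(ct) + dec.finalize(), "big")

    token = encrypt_id(1234)   # the id a user legitimately gets in their link
    # CBC decrypts the first block as D(ct) XOR iv, so XORing the IV with
    # (old_id ^ new_id) changes the decrypted id without touching the key.
    delta = (1234 ^ 99999).to_bytes(16, "big")
    forged = bytes(a ^ b for a, b in zip(token[:16], delta)) + token[16:]
    print(decrypt_id(forged))  # 99999 -> someone else's forum account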


The article lost me at a certain point, somewhere around "solving the conundrum".

It lost me because we have two estimates: an overall size guess for an epic, and an actual implementation estimate for it. The overall size guess is just 2-3 seniors looking at an issue and wondering whether it takes days, weeks, months or years to implement.

The actual implementation discussion is what the article is talking about, however. We get most or all of the team into a meeting, talk through what needs to be done, and structure all of that into individual, concrete tasks everyone has an idea how to implement. And then we estimate those tasks.

And this estimation, in turn, is communication to management. Like, we've realized that about 21 points is what one of us can do in a usual monthly iteration, outside of massive outages and such (we're an operational team). So if an epic turns out to require some 3 21's and 3 13's... that can easily take 6-12 months unless we put exceptional focus on it. With high focus... as a team of 4-5, it will still take 3-6 months to do.

On the other hand, something that falls into a bunch of 5's and 9's tends to get muddled and struggled through much more reliably, regardless of whatever crap happens in the team. It needs smaller chunks of overall attention to get done.

And note that this communication is not about deadlines. This is more of a bottom-up estimate of how much more or less uninterrupted engineering time it takes to do something. A 21 in our place by now means that other teams have to explicitly make room for the assigned person to have enough headspace to do it. Throw two interruptions at them and that task won't happen.

It's more bin-packing than adding, tbh.
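A minimal sketch of that bin-packing view, with made-up capacities (21 points per person-month, as above) and the task sizes from the example:

    # First-fit-decreasing packing: each bin is one person-month of mostly
    # uninterrupted focus. Numbers are illustrative, not our real planning data.
    def pack_iterations(tasks, capacity=21):
        bins = []
        for size in sorted(tasks, reverse=True):
            for b in bins:
                if sum(b) + size <= capacity:
                    b.append(size)
                    break
            else:
                bins.append([size])
        return bins

    epic = [21, 21, 21, 13, 13, 13]        # "3 21's and 3 13's"
    print(len(pack_iterations(epic)))      # 6 bins -> 6+ person-months before interruptions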


I have similar experiences. I've been digging into this more over the years and my two conclusions are: (a) Linux memory management is overall rather complex and contains many rather subtle decisions that speed up systems. (b) Most recommendations you find about it are old, rubbish, or not nuanced enough.

Like one thing I learned some time ago: swap-out in itself is not a bad thing. Swap-out on its own means the kernel is pushing memory pages it currently doesn't need to disk. It does this to prepare for a low-memory situation, so that if push comes to shove, some pages are already written to disk. And if a page is dirtied later on before it needs to be swapped back in, alright, we wasted some iops. Oh no. This happens quite a bit, for example, with long-running processes that have rarely used code paths, or with processes that do something once a day or so.

swap-in on the other hand is nasty for the latency of processes. Which, again, may or may not be something to care about. If a once-a-day monitoring script starts a few milliseconds slower because data has to be swapped in... so what?

It only becomes an issue if the system starts thrashing and rapidly cycling pages in and out of swap. But in such a situation, a system without swap would start randomly killing services, which is also not entirely conducive to a properly working system. Especially because it'll start killing the stuff using a lot of memory... which, on a server, tends to be the thing you want running.
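If you want to see which of the two situations you're in, something along these lines works (thresholds are arbitrary illustration values, not tuning advice):

    # Sample pswpin/pswpout from /proc/vmstat: swap-out alone is usually harmless,
    # pages cycling rapidly in *and* out is the thrashing case.
    import time

    def swap_rates(interval=5):
        def read():
            with open("/proc/vmstat") as f:
                stats = dict(line.split() for line in f)
            return int(stats["pswpin"]), int(stats["pswpout"])
        in1, out1 = read()
        time.sleep(interval)
        in2, out2 = read()
        return (in2 - in1) / interval, (out2 - out1) / interval

    swpin, swpout = swap_rates()
    if swpin > 100 and swpout > 100:
        print(f"possible thrashing: {swpin:.0f} pages/s in, {swpout:.0f} pages/s out")
    else:
        print(f"swap activity looks benign: {swpin:.0f} in/s, {swpout:.0f} out/s")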


It is not just advice.

Default configs of most distros are set up for server-style work, even on workstation distros. So they’ll have CPU and IO schedulers optimized for throughput instead of latency, meaning a laggy desktop under load. The whole virtual memory system still runs things like it is on spinning rust (multiple page files in cache, low swappiness, etc).

The only distro without this problem is Asahi. It’s bespoke for MacBooks, so it’s been optimized all the way down to the internal speakers(!).
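For anyone curious what their own machine is doing, a quick way to check the defaults being described (standard Linux procfs/sysfs paths; values vary by distro):

    # Print the current swappiness and the active I/O scheduler per block device.
    from pathlib import Path

    print("vm.swappiness =", Path("/proc/sys/vm/swappiness").read_text().strip())
    for sched in Path("/sys/block").glob("*/queue/scheduler"):
        print(sched.parent.parent.name, "->", sched.read_text().strip())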


> Default configs of most distros are set up for server-style work, even on workstation distros. So they’ll have CPU and IO schedulers optimized for throughput instead of latency, meaning a laggy desktop under load. The whole virtual memory system still runs things like it is on spinning rust (multiple page files in cache, low swappiness, etc).

LOL. A Con Kolivas problem, circa 2008, still there :-)))


Mh, I'm probably comparing apples to oranges and such.

But the last 2-3 times I set up a config management system, I made sure to configure the local firewalls as deny-all by default, except for some necessities like SSH access. Then you provide some convenient way to poke the necessary holes into the firewall to make stuff work. Then you add reviews and/or linting to make sure no one just goes "everything is public to everyone".

This way things are secure by default. No access - no security issues. And you have to make a conscious decision to allow access to something. Given decent developers, this results in a pretty good minimum-privilege setup. And if you fuck up... in this day and age, it's better to hotfix too little access than to lose all of your data, imo.
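A minimal sketch of what that looks like in practice, assuming nftables as the backend (service names, ports and source ranges are invented examples, not a real ruleset):

    # The config management repo holds a small, reviewable allowlist; everything
    # else is dropped by default.
    ALLOWED = [
        {"name": "ssh-from-mgmt", "port": 22,   "proto": "tcp", "source": "10.0.10.0/24"},
        {"name": "app-http",      "port": 8080, "proto": "tcp", "source": "10.0.20.0/24"},
    ]

    def render_nftables(rules):
        lines = [
            "table inet filter {",
            "  chain input {",
            "    type filter hook input priority 0; policy drop;",  # deny-all by default
            "    ct state established,related accept",
            "    iif lo accept",
        ]
        for r in rules:  # every hole is one explicit, reviewable line
            lines.append(f'    ip saddr {r["source"]} {r["proto"]} dport {r["port"]} '
                         f'accept comment "{r["name"]}"')
        lines += ["  }", "}"]
        return "\n".join(lines)

    print(render_nftables(ALLOWED))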


> necessities, like SSH access.

SSM for life. Fun fact: one can also register non-AWS assets as SSM targets, so I could imagine a world in which it makes sense to create an AWS account and wire up federated auth just to dispense with the hoopjumpery of SSH attack surface and Internet exposure.

Break-glass access is always a consideration, so it's no panacea, but I still hope one day the other clouds adopt the SSM protocol the same way they did with the S3 API.

I believe a lot of folks have had good experiences with Wireguard and similar, but thus far I haven't had hand-to-hand combat with it to comment. We use Teleport for its more fine-grained access and auditing, but I've had enough onoz with it to not recommend it in the same way as SSM
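For reference, the "register non-AWS assets" part is done with a hybrid activation; a rough boto3 sketch, with a placeholder role and region (the on-prem box then registers its agent using the printed ID and code):

    # Create an SSM hybrid activation so non-AWS machines can register as managed instances.
    import boto3

    ssm = boto3.client("ssm", region_name="eu-central-1")   # placeholder region
    activation = ssm.create_activation(
        Description="on-prem hosts",
        IamRole="SSMHybridInstanceRole",   # hypothetical role with AmazonSSMManagedInstanceCore
        RegistrationLimit=10,
    )
    print(activation["ActivationId"], activation["ActivationCode"])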


Mh. One thing I observe at work is: increasing what I call the technical relativistic speed of deployments to insane levels is either impossible or trivial. Like, we have C++ code in the company with bindings to Win32 APIs. We're not talking about speed with those things. But for a lot of relatively modern software, it's pretty easy to implement automation that makes a fairly robust deployment take a few minutes. Containers make this easier, but you can have the same thing with Ruby (Capistrano), Python (liberal use of venvs), Java and so on on VMs as well. Most of this is just a bit of config management or container orchestration config.

However, quite a few dev teams make many very stupid decisions, and suddenly your 2-minute deployment without downtime requires 4 weeks of coordination with customers, because people decided to include a breaking change in their API, as opposed to some incremental evolution. Or because people implemented a big-bang database migration, which will take hours of downtime, as opposed to some incremental 2-3 step database model evolution. Or they pile on 23 steps you "have to manually do after a deployment, but just maybe and not always and it's not obvious when". Or because people get scared, because of either bad decisions or dumb stuff just happening. Or because people don't understand different rollout strategies to hedge risk, and then they get scared, and then they don't deploy, and then everything explodes once they deploy a crapton of stuff at once. Which, naturally, catches on fire.
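For the database part, one way to read "incremental 2-3 step database model evolution" is the usual expand/backfill/contract pattern; a sketch with invented table and column names:

    # Each step ships with its own small deployment; the schema stays valid for both
    # the old and the new code in between, so there is never a big-bang cutover.
    MIGRATION_STEPS = [
        # Step 1 (deploy N): expand -- add the new column, keep writing the old one too.
        "ALTER TABLE orders ADD COLUMN customer_ref BIGINT NULL;",
        # Step 2 (deploy N+1): backfill in small batches while both schemas are in use.
        """UPDATE orders SET customer_ref = legacy_customer_id
           WHERE customer_ref IS NULL AND id BETWEEN %(lo)s AND %(hi)s;""",
        # Step 3 (deploy N+2): contract -- nothing reads the old column anymore, drop it.
        "ALTER TABLE orders DROP COLUMN legacy_customer_id;",
    ]

    for step in MIGRATION_STEPS:
        print(step)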

The question of how to roll out a change smoothly, silently and with little coordination seems to be entirely alien black magic to some people.

And from there, a bunch of our new applications and services with less baggage actually end up shipping changes more slowly than our old veteran products with loads of experience and really ugly baggage behind them.

It's just utterly strange to me that we are approaching a level at which we in ops can confidently reconfigure, fail over and restart database clusters on git-push, while some dev teams are so worried about swapping out stateless code in their systems.


So many "Security Researchers" are just throwing ZAP at websites and dumping the results into the security@ mailbox, because there might be a minor security improvement from setting yet another obscure browser security header for cases that might not even be applicable.

Or there is no real consideration of whether something is actually an escalation of privileges. Like, "Oh, if I can change these postgres configuration parameters, I can cause a problem", or "Oh, if I can change values in this file, I can cause huge trouble". Except modifying that file or that config parameter requires root/superuser access, so there is no escalation, because you have full access already anyhow.

I probably wouldn't have to look at the documentation too much to get postgres to load arbitrary code from disk if I have superuser access to the cluster already. Some COPY into some preload plugin, some COPY / ALTER SYSTEM, some query to crash the node, and off we probably go.

But yeah, I'm frustrated that we were forced to route our security@ address to support to filter out this nonsense. I wouldn't be surprised if we miss some actually important issue unless it's demonstrated like this, but it costs too much time otherwise.


Maybe I'm naive and brainwashed and so on. But one of the big selling points of crypto was that you don't end up "under the oppressive control of banks and the state" and such. And I can see the merit of that idea. Banks can have a massive amount of control over some people's lives.

But at some point, a bunch of financial sharks got involved in the crypto pond. These guys have decades of experience at exploiting financial markets, and a bunch of the regulations and rules we have around banks and the state are there to protect naive people like me from sharks like those. And it stops working beyond a certain scale, sure. If you can pay millions to people who find loopholes to make billions, alright. But at my level of money, I can trust the system to some degree. At least more than an unregulated market full of people with decades of experience at exploiting companies and people financially.

And while I like the idea behind crypto, that thought has shaken my confidence in the idea beyond repair, to be honest.


> But at some point, a bunch of financial sharks got involved into the crypto pond.

It’s not like things were any better before the sharks arrived. In the old days you had Mt. Gox where they lost your money through rank incompetence. Now you have FTX/Terraform/etc. where they lose your money through evil (and also rank incompetence). At no point was there a golden age when crypto wasn’t a shitshow.


There was a very small window of time after Bitcoin was created. It indeed didn't last long.


Has there existed a time since 2011 when the crypto pond wasn’t filled with sharks?


Banks are simply basic human nature exploded into large corporate entities.

In reality, everyone would behave the same way if they were similarly scaled.

The idea that crypto lets you avoid banks is akin to the idea of thinking you can have an economy without humans.


Very well said

