dylan604's comments | Hacker News

Wouldn't a generic 400 be better? It's not that the page wasn't found; you've sent something that was not an accepted request. "Fix your request and try again" is how I've read it, and that's how I use it in the APIs I provide. I prefer it over 406, since it's not my end that can't process it. If your query string is tacking on extra stuff to try to break things, or your request just wasn't crafted per the docs, then it's on you.

406 would be wrong for me, as it's meant for when the client sends an Accept: header that the server cannot fulfill. HTTP return codes get quite specific when you read the actual description and not just the name.
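
To make it concrete, here's a hedged sketch with curl against a hypothetical API (api.example.com, the endpoint, and the response lines are all made up for illustration):

    # 406: the client's Accept: header asks for a representation the server can't produce
    curl -i -H 'Accept: application/xml' 'https://api.example.com/v1/items'
    # -> HTTP/1.1 406 Not Acceptable

    # 400: the request itself is malformed (bad query string); fix it and try again
    curl -i 'https://api.example.com/v1/items?limit=banana'
    # -> HTTP/1.1 400 Bad Request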

I remember reading about the blue LED when it first started to appear. It was interesting to read how they thought it would be used at the time; being able to do true R/G/B was the thing everyone was talking about. I'm not sure how much later it was before they were used for shiny round discs, but that wasn't part of the article's prognostications. This is all a bit nostalgic, as I read about it in a printed magazine back when those were still a thing.

Are you suggesting somehow Microsoft or Apple would be doing this? That seems pretty perverted if you are.

If I were Red Hat or any other distro maintainer, this seems like something I'd want to be doing internally to lock it down.


Every time I venture into the web server's error log, I see all of the skiddies' attempts at accessing the most common things, most of them .php files. Lots of /wp/admin.php and /phpadmin/ type requests. Of course, none of those are available, which is why the requests end up in the error log. I've never paid attention, but I wonder how long (as in how little time) it takes for a new server to come online before it starts getting probed by a skiddie. Whether they're just war dialing IPs or paying attention to new domain announcements, I'd put it at a few hours tops.
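
If you want to put numbers on it, here's a quick sketch against an nginx access log (the log path and format vary by distro, so adjust accordingly):

    # tally the most-requested paths; the probe targets float to the top
    awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head

    # count the classic CMS probes specifically
    grep -Ec 'wp-login|wp-admin|phpmyadmin' /var/log/nginx/access.log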

Dismissing these as script kiddie attempts is no longer correct. This is a real industry now. It’s not like the large scale actors are going to pass up a valid unpatched vector just because it’s old hat.

They're skiddies if they're trying WordPress attacks on domains that have never hosted anything remotely close to a CMS before...

Imagine this: ~40% of public websites run WordPress (based on some AI-generated summary; even if it's actually fewer, it's still a significant percentage).

So a newly spun-up instance is running WordPress with roughly 40% probability. In mass vulnerability exploitation and detection, it makes sense to aim for the highest success rate first.

Especially when the IPv4 space is so easy to scan nowadays. And you have services like Shodan that do just that daily.


Yes, but how often otherwise would I get to use the word skiddie?

If you get a Let's Encrypt certificate, it will get probed within a minute.

I've tested this recently (this past week). I had a DNS entry up and pointing to an nginx server for ~12 hours: zero requests. Seventeen seconds after the Let's Encrypt cert was issued, the floodgates opened. Over a dozen requests per second.

I don't think it's necessarily specific to LE, but rather to public Certificate Transparency logs. LE being free and easy to automate means it's very widely used these days, but if you theoretically went to a paid root CA and got a cert covering thing.com and www.thing.com, the same probing would happen on the same timescale.
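
You can watch the same feed the scanners watch. As a sketch, crt.sh fronts the CT logs and has a JSON output mode (thing.com is a placeholder):

    # list certificates logged for a domain; new issuances show up within minutes
    curl -s 'https://crt.sh/?q=thing.com&output=json' | head -c 1000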

22 minutes. I got fibre from my new ISP and placed my web server online. 22 minutes later, my honeypot got stung.

Didn't really help Fukushima though. In fact, the ocean came to it. They didn't have to go get it.

Not according to POTUS math. You can have 200%, 500%, 600%, 1200%. You just have to say it enough, and people will start to question whether they really understand percentages, and just go with it.

OK, but cooling systems don't run on POTUS math.

Nor does the rest of the world

I can see both sides. As someone automating, I could see getting the malformed-command -h result on stderr. I can also see that sending -h output to stdout would be expected, since that's the legitimate output the user requested. At the least, if -h goes to stdout for a malformed command, then by gawd it had better come with a nonzero exit. With that, I think (and boy did it hurt) that it's an okay rule to break.
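
As a sketch of how I'd test that rule (mytool and its flag are hypothetical):

    # a malformed invocation may print usage text wherever it likes,
    # but the exit code had better be nonzero
    mytool --bogus-flag > out.txt 2> err.txt
    echo "exit: $?"   # expect nonzero, even if the usage text landed in out.txt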

> 3. Generally, you can always tell if something is going wrong by grepping for errors or warnings in a single stream (stderr), or by looking for a nonzero exit code.

I'll use ffmpeg as an example of an edge case. It's hard to get ffmpeg to give a nonzero exit code. What's a problem for the user isn't necessarily a problem for the app, so the app thinks it completed and exits with zero. For example, if an input file is corrupted such that ffmpeg can no longer read from the source, it will happily close your output file cleanly so it's usable (just shorter than expected) and report that it completed successfully. If all you do is check the exit code, you'll think your file is complete. Much more due diligence is necessary to be sure.
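
As a sketch of that due diligence (filenames are placeholders): don't trust the exit code alone, check the output's duration against what you expected.

    ffmpeg -i input.mp4 -c copy output.mp4
    echo "ffmpeg exit: $?"   # can be 0 even if the source died partway through

    # compare this against the expected duration before declaring success
    ffprobe -v error -show_entries format=duration -of csv=p=0 output.mp4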


Last time I had that problem, -xerror helped.
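
For anyone finding this later: -xerror makes ffmpeg stop and exit nonzero on the first error instead of soldiering on. Something like:

    ffmpeg -xerror -i input.mp4 -c copy output.mp4 || echo "transcode failed"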

> Does no one know what exit codes and stderr vs. stdout are anymore?

Sounds like you don't use ffmpeg very often. Because ffmpeg can send its output to stdout to be piped to other apps, its verbose text output can't use stdout as one might expect. Non-error text goes to stderr instead. So when you want to capture the text output, you have to route stderr to a file. It takes some getting used to, but it's normal for me now.

So, yeah, I still know stderr vs. stdout, but it's not as simple as you want it to be. In the real world, things are not as clean as they are in schoolbooks.
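
The usual pattern looks something like this (filenames are placeholders); stdout carries the media stream, so the console text has to go somewhere else:

    # pipe the encoded stream to a player while capturing the log from stderr
    ffmpeg -i input.mp4 -f matroska - 2> encode.log | ffplay -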


That is exactly what I expect, or did I misread you? I expect its text output on stderr, with stdout reserved for its video (or whatever) output.

Maybe the name “stderr” is a bit misleading. It's totally common for non-error output to go to stderr as well, like verbose/debug logging.


I've just checked the man page for ffmpeg and they have a `-report` flag to capture the log. There should be no need to redirect stderr.
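
For reference, that looks something like this; it writes a full log to an ffmpeg-<date>-<time>.log file in the current directory:

    ffmpeg -report -i input.mp4 output.mp4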

Does any of it have to do with new spectrum becoming available? After 2.4GHz and 5GHz, I have no idea what the latest/future gens of WiFi are using. As some tech like 2G went out of operation, that spectrum was opened up. There are other frequencies that have become available where operating the older equipment that used to live there is now a big no-no. There was a frequency range used by old wireless microphone systems that is now banned at certain locations.

Just taking a swing at it, but I don't play that sport, so probably a big whiff.


In regulatory regions where it is usable, Wifi 6 (802.11ax) added some 6GHz channels. Wifi 6e extended that to roughly the entire 6GHz band, for ~1GHz of contiguous RF bandwidth in that area alone.

The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)


> In regulatory regions where it is usable, Wifi 6 (802.11ax) added some 6GHz channels.

'Plain' Wifi 6 (non-E) had zero 6 GHz. If you think otherwise, can you produce a citation?

Edit:

* https://en.wikipedia.org/wiki/List_of_WLAN_channels


You're right. 6GHz wasn't usable as a part of standardized wifi until 6e.

I'd like to choose option C: I thought otherwise, and I was wrong in thinking that. I'd like to submit my previous comment, just above, as a citation demonstrating the incorrect thought process. ;)


