Exactly. Change "json" to "relational DB" and it's nowhere near as ridiculous as the author implies. Every question mentioned could be answered with a single SQL query, although it would take quite a while to run. And it's much more extensible than this "huge array" approach.
The only thing is that saying "Postgres" tells you almost nothing: the real question is how you'd structure the DB. 300M rows a month is pretty huge, so it's worth thinking about whether to just pile everything into one table (unique key ip+month, all ports as columns) or whether something can be denormalized.
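A minimal sketch of that single-table layout, assuming a fixed list of scanned ports (table and column names here are made up for illustration, not from the original discussion):

```sql
-- Hypothetical single-table layout: one row per (ip, month),
-- one boolean column per port in the scan list.
CREATE TABLE scan_results (
    ip       inet    NOT NULL,
    month    date    NOT NULL,  -- first day of the scan month
    up       boolean NOT NULL,
    port_22  boolean,           -- NULL = not scanned, true/false = open/closed
    port_80  boolean,
    port_443 boolean,
    -- ...one column per scanned port
    PRIMARY KEY (ip, month)
);
```

The wide-column shape keeps rows narrow and fixed-size, which matters at 300M rows a month; the trade-off is that changing the port list means an ALTER TABLE.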
Well, that will be quite a table to run your SELECTs against. Assuming the hardware for such a project is fairly modest, after a couple of months the index won't really fit in memory anymore, and even fetching a record by key will cost extra jumps over that table just four months in. And most of the tasks in question require scanning the whole thing. Not a tragedy, but well worth considering aggregating some data as it arrives, especially since it costs almost nothing at that point (nmap takes longer anyway). Counting the number of IPs with port 22 open, for example, would just be incrementing a number in a dedicated table via a simple rule in the ingestion code. Maybe it's even worth inventing some more interesting denormalization so that more complex queries become easy to run.
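The ingest-time aggregation really can be that simple: bump a per-port counter as each scan result comes in. A toy sketch in Python, where a dict stands in for the small dedicated counter table:

```python
from collections import defaultdict

# Running aggregate: number of hosts seen with each port open,
# updated as results are ingested (a dict standing in for a DB table).
open_counts = defaultdict(int)

def ingest(ip, open_ports):
    """Record one scanned host; open_ports is the set of ports found open."""
    for port in open_ports:
        open_counts[port] += 1

# A few hypothetical scan results:
ingest("10.0.0.1", {22, 80})
ingest("10.0.0.2", {22})
ingest("10.0.0.3", {443})

print(open_counts[22])  # → 2 hosts with port 22 open, no table scan needed
```

Answering "how many hosts have port 22 open?" then becomes a single-row lookup instead of a pass over hundreds of millions of rows.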
Maybe I'm wrong: I'd have to try it out to see how fast it actually computes, but I remember having problems with much smaller tables, maybe around these 300M rows in total. Although that one was much "wider", with varchars and such.
Not straight Postgres, but this dataset would be ideal for a columnstore index with compression: huge runs of the same value compress readily with run-length encoding.
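To see why run-length encoding does so well here: a column of port flags, where nearly every host has the port closed, collapses into a handful of (value, run_length) pairs. A toy sketch:

```python
from itertools import groupby

def rle(column):
    """Compress a sequence into (value, run_length) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

# A port-22 column for 1000 hosts, only one of which has it open:
column = [False] * 600 + [True] + [False] * 399
print(rle(column))  # → [(False, 600), (True, 1), (False, 399)]
```

A thousand booleans become three pairs; the same effect at 300M rows is what makes the columnstore approach attractive.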
Partition the whole thing by the status of the up flag, perhaps additionally by month, and the SELECTs can be optimized pretty well.
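In Postgres terms, that partitioning scheme might look something like this, using declarative partitioning (a sketch with made-up names, not a tested schema):

```sql
-- Partition on the up flag first, then sub-partition the "up" rows by month,
-- so queries over live hosts in a given month hit one small partition.
CREATE TABLE scans (
    ip    inet    NOT NULL,
    month date    NOT NULL,
    up    boolean NOT NULL
) PARTITION BY LIST (up);

CREATE TABLE scans_down PARTITION OF scans FOR VALUES IN (false);

CREATE TABLE scans_up PARTITION OF scans FOR VALUES IN (true)
    PARTITION BY RANGE (month);

CREATE TABLE scans_up_2024_01 PARTITION OF scans_up
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
```

With partition pruning, a query filtered on `up` and `month` never touches the (presumably much larger) down-host partitions at all.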