
During the scan we monitored the bandwidth, and we sent control pings to check continuously that the server could still send and receive. We took certain measures to monitor and slow things down. Sure, it was not perfect, but reliability was considered during the experiment.



So, what measures were taken?

The problem is not sending packets fast enough. It's not about bandwidth. The problem is sending them just fast enough, which is impossible if you're scanning statelessly with just ICMP echoes.

Let's say you're on 100 Mbit Ethernet but your uplink is only 8 Mbit. If you send packets at a rate of 10 Mbit, packet loss will happen. And you're not the only one using the network, so loss can set in even earlier. And that's only the part of the network you control: there may be many hops between you and the host you're sending packets to. With your approach (the way I understand it), you're not going to notice that packet loss.
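The arithmetic behind that example can be sketched in a few lines. This is a simplification (it assumes the bottleneck queue is already full and drops overflow uniformly); the 10/8 Mbit figures come from the example above:

```python
def loss_fraction(send_rate_mbit: float, bottleneck_mbit: float) -> float:
    """Fraction of traffic the narrowest link on the path cannot carry.

    Simplified model: a full queue at the bottleneck drops everything
    beyond the link's capacity.
    """
    if send_rate_mbit <= bottleneck_mbit:
        return 0.0
    return 1.0 - bottleneck_mbit / send_rate_mbit

# Sending at 10 Mbit over an 8 Mbit uplink loses roughly a fifth of the probes.
print(loss_fraction(10, 8))
```

With a stateless one-shot echo per host, every one of those dropped probes silently becomes a "host down" result.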

I might be making too many assumptions here, but ten hours is just too short a time period for a network of that size to yield a reliable result. I'm very sceptical. But please prove me wrong, because it would definitely make my job easier.
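A back-of-the-envelope check of that timing claim, under the hypothetical assumption that "a network of that size" means the full IPv4 space probed once with one minimal ICMP echo per address (none of these figures are from the original post):

```python
# Rough sustained rate needed to probe every IPv4 address once in ten hours.
# All constants are illustrative assumptions, not numbers from the thread.
ADDRESSES = 2**32          # full IPv4 space, no exclusion lists
DURATION_S = 10 * 3600     # the ten-hour window
FRAME_BYTES = 64           # minimal Ethernet frame carrying an ICMP echo

pps = ADDRESSES / DURATION_S          # packets per second
mbit = pps * FRAME_BYTES * 8 / 1e6    # sustained line rate in Mbit/s

print(f"~{pps:,.0f} packets/s, ~{mbit:.0f} Mbit/s sustained")
```

That comes out to roughly 120,000 packets per second sustained for ten straight hours, before counting replies or retries, which is exactly the regime where the uplink-overrun problem above starts to bite.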

I guess you could publish the code, so I could test it myself.



