That's actually the opposite of good practice; good security practice is to base your planning on facts and research. Throwing away your whole setup after every gig works for Mission: Impossible, and I guess it makes people feel extra-super-ninja, but in practice it just perpetuates the endless (and pointless) culture of I-know-something-you-don't.
Opsec should be based on reality and threat modeling, not endless rounds of whatabout.
Edit: if you (the rhetorical you, not parent specifically) actually know something here, chime in!
That really is the difference between "proven secure" and "not proven insecure". Which would you consider best practice?
As far as fingerprinting WiFi devices goes: it is an RF device, and all RF devices vary in behaviour due to component tolerances. This shows up in things such as spurious emissions, power variations across the transmission spectrum, oscillator drift, etc. These are fairly easy to detect remotely. One example is shown in this paper: https://www.cs.ucr.edu/~zhiyunq/pub/infocom18_wireless_finge...
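A toy sketch of the idea (my own illustration, not the paper's method): match a transmitter to a known device by its carrier-frequency offset, just one of the hardware tolerances mentioned above. The device names and offset values (in ppm) are made up for the example.

```python
import random

random.seed(0)

# Made-up oscillator offsets (ppm) previously measured for known devices.
known_offsets = {"device_a": -4.2, "device_b": 1.3, "device_c": 6.8}

def observe(device, noise_sigma=0.3):
    """One noisy offset measurement taken from a device's transmissions."""
    return known_offsets[device] + random.gauss(0, noise_sigma)

def identify(measurement):
    """Nearest-centroid match of a measurement against known fingerprints."""
    return min(known_offsets, key=lambda d: abs(known_offsets[d] - measurement))

print(identify(observe("device_b")))
```

Real systems combine many such features (spurious emissions, power curves, drift over time), which is why the accuracy figures in the literature climb so high.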
That paper states that the accuracy could be as high as 95%. Apple has sold over a billion iOS devices with WiFi radios in them. I'll let you Google the base-rate fallacy for yourself, and decide if that risk is worth it.
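To make the base-rate point concrete, a quick back-of-the-envelope. The 5% false-positive rate is my assumption for illustration (the paper reports accuracy, not a full confusion matrix):

```python
# Hunting one specific device among ~a billion with a 95%-accurate fingerprint.
population = 1_000_000_000   # rough count of WiFi-capable iOS devices sold
fpr = 0.05                   # assumed false-positive rate (illustrative)
tpr = 0.95                   # true-positive rate, per the paper

false_matches = (population - 1) * fpr   # innocent devices flagged anyway
true_matches = 1 * tpr                   # the actual target, usually flagged
precision = true_matches / (true_matches + false_matches)
print(f"~{false_matches:,.0f} false matches; "
      f"odds any given match is your device: {precision:.1e}")
```

Tens of millions of false matches for one real one: the fingerprint narrows the field, but on its own it identifies nobody.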
The paper covers only one such method; there are countless others, and such methods have been in documented use in signals intelligence since at least WW2. Combined, your accuracy increases. And this is on top of all the other known methods of fingerprinting network devices. Besides, most of the time you only care whether the same device was used, and 95% gives you a lot of certainty.
Within proper constraints, "proven secure" certainly is possible.
Good security practice is considering all devices insecure until proven otherwise. Also, mitigating known unknowns where a general problem happens a lot: devices snooping on you, misleading you, interdiction, hacks on firmware, etc. Then you mitigate in situations where you're unsure of what's going on, just in case, so long as mitigation isn't too costly.
I used to buy and get rid of WiFi devices and throwaway computers for that reason. Also, buy them in person at random places with cash. You can even turn it into charity by using FDE, wiping them afterwards, and reselling cheap or donating to others who can't afford full price. Put Ubuntu and Firefox on them to spread some other good things.
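The wipe-before-resale step boils down to overwriting the data with random bytes. A minimal single-file sketch of the idea (a real machine wipe should target the whole block device, e.g. with `shred` or the drive's secure-erase command, on top of the FDE already mentioned):

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file in place with random bytes, then unlink it.

    Toy illustration of 'wipe before resale'; for a whole computer you
    would wipe the full device, not individual files.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # force the overwrite to storage
    os.remove(path)
```

With FDE in place beforehand, even a lazy wipe of the key material renders the rest of the disk unreadable, which is what makes the donate-afterwards scheme practical.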
Well that's impossible (see also the halting problem) so that's pretty clearly not good security practice.
Nothing in that says anything about what your threat model is. What risk are you mitigating by doing this? This sounds like the type of "ignore the words and listen to the sound of my voice" security espoused by management and vendor sales people.
It sounds like you have a diverting pastime, and I wish you the best with that, but this isn't what security is about. Security is about identifying and mitigating specific risks. This goes doubly for operational security. All else is security theater.
Extra comment to add something I left off. There are at least two types of static analysis and solver tools: unsound and sound. The sound ones, especially RV-Match and the Astree Analyzer, use a formal semantics of the code, a formal statement of the property, and automatic analysis to determine whether the property holds. Relatedly, SPARK Ada and Frama-C turn their formal specs and code into verification conditions that check the code's conformance to the specs. The VCs go through Why3, which sends them to multiple automated solvers to check them logically. It's far easier to scale and get adoption of these automated methods than of manual proofs.
The main drawback is potential errors in the implementations of the analyzers or solvers that invalidate what they prove. Designs for certifying solvers exist, which are essentially verified themselves or produce something verifiable as they go; examples include verSAT and Verasco. The tech is there to assure the solvers. Personally, I'm guessing it hasn't been done for industrial solvers due to academic incentives: funding authorities push quantity of papers published over quality, and new stuff over re-using good old stuff. Like infrastructure code, everyone is probably hoping someone else does the tedious, boring work of improving the non-novel code everyone depends on.
Also, given my background in high-assurance research, I want each of these tools and methods, mathematical or not, proven over many benchmarks of synthetic and real-world examples to assess effectiveness; LAVA is one example. I want them proven in theory and in practice. The techniques preventing or catching the most bugs get the most trust.
"Well that's impossible (see also the halting problem) so that's pretty clearly not good security practice."
No, it's not. It's been done many times. The halting problem applies to a more general issue than the constrained proofs you need for specific computer programs. If you were right, tools like RV-Match and Astree Analyzer wouldn't be finding piles of vulnerabilities with mathematical analyses, and SPARK Ada code would be as buggy as similar C. Clearly, the analyses are working as intended despite not being perfect.
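The distinction is between deciding a property for arbitrary programs (impossible) and soundly over-approximating a restricted one (routine). A toy illustration of the latter, my own sketch and nothing like the internals of RV-Match or Astree: interval analysis that can prove a division never hits zero.

```python
def eval_interval(expr, env):
    """Return a (lo, hi) interval guaranteed to contain expr's value.

    expr is a tiny AST: ("const", n), ("var", name),
    ("add", e1, e2), or ("div", e1, e2).
    """
    kind = expr[0]
    if kind == "const":
        return (expr[1], expr[1])
    if kind == "var":
        return env[expr[1]]
    lo1, hi1 = eval_interval(expr[1], env)
    lo2, hi2 = eval_interval(expr[2], env)
    if kind == "add":
        return (lo1 + lo2, hi1 + hi2)
    if kind == "div":
        # Sound: refuse to certify unless zero is provably excluded.
        if lo2 <= 0 <= hi2:
            raise ValueError("cannot prove divisor nonzero")
        candidates = [lo1 / lo2, lo1 / hi2, hi1 / lo2, hi1 / hi2]
        return (min(candidates), max(candidates))
    raise NotImplementedError(kind)

# With x in [1, 10], the divisor x + 1 lies in [2, 11]: provably nonzero.
env = {"x": (1, 10)}
safe = ("div", ("const", 100), ("add", ("var", "x"), ("const", 1)))
print(eval_interval(safe, env))
```

Because the input language is constrained and the analysis over-approximates, it proves the property for every execution without ever running the program; the halting problem only rules out deciding such properties for arbitrary code.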
"Security is about identifying and mitigating specific risks. "
Computer security, when it was invented in the 1970s, was about proving that a system followed a specific security policy (the security goals) in all circumstances, or failed safe. The policy was usually isolation; there are others, such as guaranteed ordering or forms of type safety. High-assurance security's basic approach was turned into certification criteria applied to production systems as early as 1985, with SCOMP being the first certified. The NSA spent five years analyzing and trying to hack that thing; most get about two years with minimal problems. I describe some of the prescribed activities here in my own framework from way back when:
Note that projects in the 1960s were hitting lower defect rates than projects achieve today. For higher cost-benefit, I identified the combination of Design-by-Contract, Cleanroom (optional), multiple rounds of static analysis by tools with low false-positive rates, test generators (especially contract-aware ones), and fuzzing with the contracts compiled in as runtime checks (think asserts). That, with a memory-safe language, should knock out most major problems with minimal effort on developers' part (some annotations). Most of it would run in the background or on build servers.
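A tiny example of the contracts-as-runtime-checks idea (my own sketch, not taken from any of the tools named above): preconditions and postconditions as asserts, so that even a dumb fuzz loop turns any contract violation into an immediate, visible crash.

```python
import random

def isqrt_floor(n: int) -> int:
    # Precondition (Design-by-Contract style): caller must pass n >= 0.
    assert isinstance(n, int) and n >= 0, "precondition violated: n >= 0"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is exactly the floor of sqrt(n).
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

# A crude fuzzer: any input that breaks a contract raises right here.
random.seed(1)
for _ in range(1000):
    isqrt_floor(random.randrange(10**6))
print("1000 fuzz cases passed the contracts")
```

The same asserts double as machine-checkable specs for the static tools and as oracles for the test generators, which is why the combination is cheap for developers.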
Modern OSes, routers, basic apps, etc. aren't as secure as software designed in the 1960s-1980s. People define secure as "mitigates some specific things hackers are doing" (they'll just do something else) instead of as properties the system must maintain in all executions on all inputs. We have tools and development methods to do this, but they're just not applied in general. Some still do, like INTEGRITY-178B and the Muen Separation Kernel. Heck, even IRONSIDES DNS and TrustDNS, done in SPARK Ada and Rust respectively. Many tools to achieve higher quality/security are free. Don't pretend it's just genius mathematicians or Fortune 25 companies that can, say, run a fuzzer after developing in a disciplined way with Ada or Rust.
It's less a culture of I-know-something-you-don't than a culture of someone-may-know-something-I-don't. I don't understand your implication of intellectual delusions of grandeur here; I see it as the opposite.
If you read the other reply to my comment, you'll see that it was in fact a case of I-know-something-you-don't, although in this instance they are in fact wrong about the implications of the thing that they know. The gate keeping that goes on in security (saying that there's a threat but not saying what it is) is extremely frustrating to me.
Your security profile needs to exceed the one set for the highest level of clearance you could possibly gain. In practice, that means exceeding the highest level of security used in an organisation. You wouldn't want to inadvertently exfiltrate a client's data, would you?
Aside from that, it is not uncommon for, say, a department to be unaware it is being pen-tested with the consent of its management, and you don't want to trigger countermeasures.
I upvoted you because your first sentence is a useful observation, but I'm having a hard time using any of that to justify throwing away a WiFi adapter. Even if it were possible to fingerprint the adapter beyond its MAC address, there's no global database of whitehat pentester WiFi adapter fingerprints, and such a thing would be worthless anyway. You're not going to trigger countermeasures by reusing a WiFi adapter. The only threat model that remotely makes sense for that kind of precaution is fear of nation-state-level resources trying to identify and catch you. And that's well outside the realm of "pentesting".
(And the idea of accidentally exfiltrating data through a reused wifi adapter is ludicrous)
I bought an Alfa adapter 10+ years ago because you can use them in promiscuous mode. So you can snoop WiFi traffic, listen for handshakes, and in doing so crack WEP/WPA (WiFi) encryption.
They have a little. 5 GHz is more common now, so you won't get anything there. WPA2 is significantly harder to crack, and I usually do it on a GPU with pyrit or hashcat-ocl and a wordlist. WPA3 is out now, too, and there aren't really any well-established procedures for it yet.
Just FYI, WPA2 is pretty solidly and quickly broken (look up the KRACK attacks). WPA3 is unfortunately already partially broken as well (though joining the network / password breaking isn't fully broken yet; see the Dragonblood attacks).
KRACK was a nonce re-use issue, not a core protocol flaw. WPA2's flaws are more around unencrypted control packets; i.e., I can de-auth you without having to get session keys.
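For anyone wondering why nonce re-use alone is so damaging: re-using a (key, nonce) pair re-uses the keystream, and XORing two such ciphertexts cancels the keystream out entirely. An illustrative toy with a random stand-in keystream, not WPA2's actual CCMP code:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)   # stands in for keystream derived from (key, nonce)
p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"
c1 = xor(p1, keystream)      # two messages encrypted under the same keystream
c2 = xor(p2, keystream)

# An eavesdropper never learns the key, yet c1 XOR c2 == p1 XOR p2,
# which known-plaintext and crib-dragging techniques can unravel.
leak = xor(c1, c2)
print(leak == xor(p1, p2))   # True
```

This is why KRACK could decrypt traffic without ever recovering the password: forcing a nonce reset was enough.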
> Description
The best wireless adapter for those who use the penetration platform Kali Linux & BackTrack. The wireless USB adapter has been tested to work with Aircrack-ng and supports packet injection along with monitor mode.
The Alfa AWUS1900 is a nice model, but if you want something cheaper (and will put up with 2.4 GHz only), the TP-Link TL-WN722N is cheap, but get the v1 chipset! It's the best supported by Linux drivers. Oh yeah, and you will probably need to use monitor mode on Linux.
Don't know about counterfeiting, but when I tried to order YubiKeys via German Amazon, every single one of the blisters looked suspiciously as though it had been tampered with[1]. They were opened juuust slightly on the side, enough to potentially slide the key out and in again, and definitely something you could miss if you weren't paying close attention. I placed a second order and the exact same thing happened. It was quite weird, and I've since ordered from Yubico directly.
That's not true, since CAs don't have "the keys to decrypt all traffic." They have the ability to sign website operators' public keys, but they do not have access to the website operators' private keys.
Of course, the CA could also issue a fake certificate with attacker-controlled keys, but if they tried to do so, they would get caught by Certificate Transparency.
I guess there could be two attack vectors: one that is easier to avoid, and one that is not.
The first is a targeted attack. Any ordering of YubiKeys can leave you vulnerable, since the supply chain can be intercepted (they see it's you and switch out the key for a counterfeit one). This can be solved by going to an in-person store and buying one there; then there is no risk of being personally targeted, as you can go to any store.
The second is all keys sold being counterfeit, which you cannot solve by going to an in-person store or ordering online. Not sure how you could avoid this vector.
While this is a theoretical problem anywhere, it's a practical problem when ordering from Amazon far more often than anywhere else. Going to a reputable physical store likely shields you from the second scenario nearly as well as the first. Also, in the case of Yubico at least, you can order directly from their website, which presumably minimizes the number of hands the product has to go through, thus minimizing opportunities for a counterfeit to be swapped in.
It recommended "20 ORANGE SNAPPY GRIP -Bucket Handles -Mining-Gold Prospecting-Gardening" under "customers also bought"... I guess I'm just not leet enough. This is in spite of the fact that I've actually bought a few external WiFi adapters from Amazon.
- ALFA AWUS036NEH Long Range WIRELESS 802.11b/g/n Wi-Fi USB Adapter
- Yubico - YubiKey 5 NFC - Two Factor Authentication USB and NFC Security Key, Fits USB-A Ports and Works with Supported NFC Mobile Devices
EDIT: formatting