No, because with symmetric encryption you'd have to protect the key against theft and tampering (i.e., against reading and writing by a third party), while you only have to protect an asymmetric public key against tampering, which is a lot easier in practice.
(also not an expert, just my understanding of it)
Choice 1: use symmetric keys. This means one key per device, which you have to manage. Quite cumbersome. You could instead have one symmetric key for everyone, but then if even one IoT device gets compromised (which over time is a virtual certainty), the whole cryptosystem is compromised.
Choice 2: use public/private key pairs. One for the server, and one for each device. Now the system is only broken when the server key is compromised. If a device gets compromised, the attacker merely learns the server's public key and that device's private key, and can impersonate only that particular device.
The main advantage of choice 2 over choice 1 is that you can use the same server key for everything. You'd still use a protocol with ephemeral keys, but you wouldn't have to manage many, many keys. And if the IoT devices are untrusted (that is, assumed compromised or anonymous), the whole system only has to manage one key.
Now sure, you could still make it more performant by keeping symmetric keys around. You'd have to perform fast key erasure (replacing the key with a hash of the key) from time to time to ensure forward secrecy, but with public keys around the symmetric key can act as a cache, which can safely be erased whenever you restart or update your system.
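The "fast key erasure" trick is tiny in code. A hedged sketch in Python (the function name is mine; a real system would use a proper KDF like HKDF rather than a bare hash, and would overwrite the old key material in memory):

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    # Fast key erasure: the next key is a hash of the current one.
    # Hashes are one-way, so once the old key is erased, an attacker who
    # seizes the machine cannot recover it from the new key.
    return hashlib.sha256(key).digest()

key = bytes(32)     # placeholder; a real key comes from a CSPRNG or key exchange
key = ratchet(key)  # rotate; the previous key should be securely erased here
```

Messages encrypted under the old key stay unreadable as long as the old key (and the plaintexts) were actually deleted, which is the point being made above.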
I guess it doesn't protect against something attacking and snooping on your machine though.
If there are no public keys pinned to clients (say secure messaging apps like Signal where each user generates their own keys), users need to check the public key fingerprints to make sure there's no MITM attack taking place.
So very likely, unless Bob is comfortable with this situation, he still needs a mechanism to find out who he's talking to. On the upside, he does now have an encrypted channel on which to continue the work.
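The fingerprint check mentioned above is conceptually just hashing the key material and comparing the result out of band. A sketch under my own naming (real apps shorten or word-encode the digest so humans can read it aloud):

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    # A fingerprint is just a short, stable digest of the public key bytes.
    return hashlib.sha256(public_key).hexdigest()

def matches(shown: str, expected: str) -> bool:
    # Constant-time comparison, so timing doesn't leak how close a forgery is.
    return hmac.compare_digest(shown, expected)

alice_key = b"placeholder public key bytes"  # hypothetical key material
fp = fingerprint(alice_key)
# Alice and Bob compare fp over a separate channel (phone, in person);
# a MITM who swapped in his own key would produce a different fingerprint.
```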
At scale the only practical answer is an Authority, a Trusted Third Party: people _so_ trustworthy that Alice, Bob and maybe even Mallory agree that they know who is who. In one sense this is so hard it might be impossible. But then again, maybe it works anyway?
If you don't need scale, for example maybe you're a conspiracy of a few dozen people trying to bring down the Authority, then you have lots of other options depending on your circumstances including Out of Band verification and the Socialist Millionaires Protocol.
If you are a college kid and convinced that everybody on your Facebook friends list, and everybody on their Facebook friends lists, is a fundamentally good person - but that the Authority is a shadowy conspiracy against you all, you can use the Web of Trust, right up until the guy who once lived with a friend of your cousin's housemate steals your life savings and leaves you in a bathtub filled with ice with a hole where one kidney used to be.
The difficulty you are describing assumes a user base of cryptography pedants who make assumptions about third parties that don't matter to 99% of non-technical users (nor even to technical users in many cases).
One example: you work for the American government, and you witness something very wrong, very illegal going on. You'd better be sure, when contacting Laura Poitras, that you are indeed contacting Laura Poitras, and not some counter-intelligence operative from the NSA.
And it has to work even if you don't have Ed Snowden's skills. Without reliable crypto the rest of us can use, people will get caught, arrested, tortured, killed, blackmailed… just for speaking up.
Maybe we don't want reliable crypto to be widely available. Maybe we want to have mass surveillance. But that's another debate. (Personally, I'd rather everyone have reliable crypto, and I'm willing to make wiretapping impossible in the process.)
How do you beat Laura Poitras publishing a public key all over the place?
Without forward secrecy, getting Laura Poitras' key will enable the NSA to read all past communications. They only have to seize her computer while it's still on, with the key still in memory somewhere, or compel the poor journalist to give up her keys (possibly using that "non-invasive" waterboarding torture, and justifying it with suspicion of helping terrorists).
Now if Laura kept the decrypted messages on her laptop, forward secrecy wouldn't do anything; but if she properly deleted them, it would be a shame if the messages were nevertheless at the mercy of the attacker.
As for key finding, well… the simple solutions do work pretty well. Snowden, for instance, didn't find Poitras' keys lying around on the internet. He asked someone he trusted to give him the right key.
It isn't really ambiguous.
For instance, Snowden had someone tweet a key fingerprint: https://theintercept.com/2014/10/28/smuggling-snowden-secret...
Hint: TOFU (trust on first use) is a lot like what I described above, with the added usability that you don't have to type "yes" every time like a chump.
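For the curious, TOFU fits in a few lines: pin the key fingerprint the first time you see a peer, stay silent as long as it matches, and complain loudly only when it changes. A sketch with made-up names (a real client persists the pin store and offers out-of-band verification on a change):

```python
import hashlib
import hmac

pins: dict[str, str] = {}  # peer name -> pinned key fingerprint (persist this!)

def check(peer: str, public_key: bytes) -> str:
    fp = hashlib.sha256(public_key).hexdigest()
    pinned = pins.get(peer)
    if pinned is None:
        pins[peer] = fp                 # first use: silently trust and pin
        return "pinned"
    if hmac.compare_digest(pinned, fp):
        return "ok"                     # same key as last time, no prompt
    return "key changed"                # possible MITM: alert, don't proceed

check("laura", b"key-one")   # first contact: pinned
check("laura", b"key-one")   # same key: ok
check("laura", b"key-two")   # different key: key changed
```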
Oh god, but how do you ask her without a guarantee that she's really the one who said yes?
Even if you meet Alice in real life to ask, how can you be sure the meeting isn't a dream or a simulation and the Alice before your eyes isn't a chosen plaintext attack by a cosmic man in the middle?!
The entire science of cybersecurity is bankrupt and founded upon untenable foundations!!
The point of the article is all the ways RSA blows up. libsodium addresses that style of problem. Saying there are other problems ("how do we agree on a key") may be true, but isn't responsive to the discussion.
Same goes with crypto. It's fine to learn the concepts of how to use a trusted library, but you really aren't likely to understand the underlying tradeoffs and mathematics.
People here aren't just saying "just use product X", they're saying, "learn the concepts of product X and use that". That's about as good as it's going to get for any specialized, complicated domain.
Yes, of course it is. Doctors know a lot, but they're not competent to assess the pharmaceutical literature.
So here's how it works. In med school, drug manufacturers provide free equipment, food, liquor, etc. And push their drugs. Because once you have someone in the habit of prescribing some drug, they tend to keep prescribing that drug.
Also, you pay influential doctors to deliver talks at conferences, praising your drug(s). You also pay ghostwriters to draft papers, which said influential doctors can submit for publication.
And last, you pay sexy, charismatic young things (of both sexes) to visit doctors' offices, pushing your drugs, and giving away food and stuff.
It is possible (even more likely) to have experts with legitimate opinions that are scientifically valid and have measurable outcomes that also correspond to commercial interests. I.e., "it helps people and makes money".
That model has worked for over a century. Yes, it's under attack by unethical people (more than in the past?), but I'm not sure what the alternative would be.
For example, I recall reading that well over half (maybe 70%-90%) of the medical literature on Neurontin (gabapentin) were ghostwritten advertorial spam funded by Novartis.
When someone says something that doesn’t make sense to you, do you just put whatever they’ve handed you into your mouth without asking questions?
On the other hand, getting crypto advice from random bloggers and HN commenters is akin to getting a medical treatment plan and drugs from random bloggers and HN commenters.
If you can't be bothered to read their profiles or understand who they are, I refer back to my point about arguing from a position of ignorance.
To be honest, this is literally the level of discourse of most Hacker News discussions about healthcare.
I have doctors in the family. The scariest stories of "marketing wankateering" in medicine I hear is from them.
Obviously not. So, where is the line drawn of what professional opinions are or are not trusted?
There's no doubt that "marketing wankateering" happens in all complex domains. Any "Market for Lemons" (i.e. a market with information asymmetry - a domain so complicated or obfuscated that consumers can't understand its fundamentals) will be exploited by charlatans. This is why we have professional (imperfect but functioning) backstops such as medical scientific research and the security/crypto research community.
OP was claiming that not even the professionals on this thread can be trusted to not be "wankateers" for a free/open source library, with no evidence, or even a hint of moderate understanding of the problem domain (i.e. why it's hard to distribute a public key), or desire to learn. Perhaps they were just frustrated with the complexity of the domain, but flaming people trying to help as being "wankateers" is rather fatalist.
Arguing from a position of ignorance when people say something that doesn't make sense to you is literally how anti-vaxxing happens.
Just because arguing from a position of ignorance can sometimes produce outcomes which align with your personal anecdotes doesn't make it an intellectually valid method of discourse.
Anti-vaxxer beliefs aren't caused by people questioning the first medical advice they get from a medical professional when it doesn't sound right to them. Anti-vaxxer beliefs come from either not verifying and going with your gut, or verifying and then ignoring what you've learned.
Doctors are human and make mistakes sometimes, and your own health is your own responsibility. So is the safety of your own application, so you shouldn't plug in someone else's crypto if you don't feel comfortable with it; instead, try to understand the domain as much as you need to start feeling comfortable.
...which is exactly what "arguing from a position of ignorance" means. Once you attempt to verify medical advice (in good faith) you are no longer ignorant.