I’ve been in your shoes. I found three options that work (ordered worst to best):
1) Become an “overnight expert,” at the expense of my work-life balance and, frankly, the long-term system architecture.
2) Push back on management with tangible risks: “a rough estimate to implement is XX, and we are not confident in the solution, which could mean rework cost or, worse, long-term scalability/maintenance cost. We propose outside consultation (which could be an expert on another team) to close the risk gap.”
3) Say: “we need to reassign resource X, who is our most knowledgeable in the adjacent tech; this takes them off feature Y, and we will run a spike of duration Z to verify our approach.”
When you do #3 enough, your management will appreciate you using existing resources and providing the trade-offs, and you will also upskill that person.
The discipline for program management on the government side is shit, and as a result contractors can be too, some in a malicious or wasteful way. SV startups don’t operate with that mindset (at least during their scrappy phase). It’s just two ends of the spectrum. There is opportunity; it’s just not going to produce these headlines.
I’ve always been interested in, but confused by, these concepts, mostly trying to understand people’s intent. On many websites with logins we create our identity, but have to “validate” it against a one-time email token. Me claiming to be someone famous on most websites is mostly innocent and mostly not trusted. But how do we bind a real-world identity to a key pair? Through some centralized authority we have trusted to “validate” said identity, aka public key infrastructure.
So, in the best case, we have some proof that our key is created/controlled by “me” via a trusted channel rather than a centralized authority. Do I upload a video of myself showing my public key to the world on some hosting site? Couldn’t a deepfake of me do that too? Then of course the GPG web of trust model comes to mind: if we attend key-signing parties and sign each other’s keys, we can verify through associative trust rather than centralized trust.
Or is the point really to have no real-world-to-key identity linkage at all, for “privacy reasons,” so we all just do our business online with full anonymity?
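The web-of-trust idea above can be sketched as a graph search: I trust a key if there is a short chain of signatures from my own key to it. This is a toy model, not how GPG actually computes validity; the names stand in for key fingerprints, and the signature data is made up for illustration.

```python
from collections import deque

# Hypothetical signature graph: alice has signed bob's and carol's keys, etc.
# In reality these would be GPG key fingerprints, not names.
signatures = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave"},
    "dave": set(),
}

def reachable_trust(my_key: str, target: str, max_hops: int = 3) -> bool:
    """True if target's key is reachable from my_key via a chain of
    signatures no longer than max_hops (associative trust)."""
    frontier = deque([(my_key, 0)])
    seen = {my_key}
    while frontier:
        key, hops = frontier.popleft()
        if key == target:
            return True
        if hops == max_hops:
            continue
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                frontier.append((signed, hops + 1))
    return False

print(reachable_trust("alice", "dave"))  # True: dave is two signatures away
```

Note that trust here is directional and capped at a few hops, which mirrors why key-signing parties matter: each new signature shortens someone's chain.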
This is exactly it. LinkedIn harasses you to turn this on every time you launch the mobile app, and it works if their number is in your contact list and you enabled this setting.
I don’t know about Android, but on iOS you can check whether you’ve granted it access to your contacts. It’s also entirely possible your contact info (e.g. your email address) is in their phone and they enabled it.
Google isn’t interested in truth; they are interested in information. They provide a search engine, not a truth engine.
Sarcasm aside, how do we propose they determine what is true? If we assume the internet is full of information and more truthful than not, then Google’s assumption could be accurate. Of course they do try to solve this with the knowledge graph and expert curation. Connections to verified information might lend validity to that information, but not always.
Google isn’t interested in truth; they are interested in information. They provide a search engine, not a truth engine.
Google has been transitioning toward a "truth engine" for several years. Whenever I try a complex keyword search, it suggests a question format for it (often with worse results, but sometimes OK). When I have finally got the keywords down to filter just what I want, Google whines "Not very many results, here's what you should do..." And, of course, Google often gives explicit answers to questions in its search results (a notable percentage of which are wrong, as noted).
And Google being a half-assed truth engine is all sorts of bad...
That doesn't need to be sarcasm, because it's true: at its core, Google's search is a method of finding information, not a method of directly ascertaining truth. It's not really possible for it to be a truth engine, and if you realize that, it's not even a flaw. You're left with a way of finding information you will need to evaluate for yourself, which is fine.
The problem is in the presentation: Google's tools in general, not limited to search, present themselves as though they can identify truth. That's the flaw, the lie, if you prefer.
Given that they are actively curating the information and censoring "misinformation", they certainly think they are a truth engine, and they present it that way. Of course, you'd only believe it if you believe Google is omniscient, omnipotent, and benevolent.
That's the problem, isn't it? Most people want to be good, so most information on the internet is the-truth-as-they-know-it. It lulls you into a false sense of security.
If you peek outside of math/physics, it's pretty much a landscape of relative truths. IMHO an ideal truth/fake-news-detecting machine would simply require axiomatic/weighted input, e.g. "I trust MIT with public key 0x..., YouTube Jesus from LA with public key 0x..., and my childhood mate with 0x..." and, based on that, answer "is X true, false, or undefined?" (weighted output). Because I trust MIT with a weight of, say, 500%, and MIT trusts Caltech, my trust graph will favour Caltech's view of the world. Yes, you can throw blockchain and AI into it, and it actually makes sense.
Depending on the industry you are in, it could be an attractive credential. As a hiring manager, it’s not what I use to determine proficiency or likelihood of success, but lots of our contracts require them.
If you are using it to learn, it’s not a bad tool, unless you have some pet projects you’d rather try out on the platform of your choice.
I’ve been working with AWS for the last 6 years and have been certified for the last 5. I learned new things through certification, but my practical experience tells me where the sharp edges are.