That's definitely a fair concern. We believe that open science and transparency are the right approach here. By releasing it, we ensure that everyone is on the same page with respect to abilities and defense.
Defending against such an "attack" is much easier if the technology is widely available and many people can play around with it and explore the limits.
At some point you have to let the rest of society in on it. As technology advances, if you keep the power away from the public, you will eventually split into two civilizations, with a vast divide creating an asymmetry that will be exploited by those on the elite side. Figuring out how to keep a technology's usage safe once it is widespread has to be part of its rollout.
You are assuming that such tech is not already being (or has been) developed covertly by malicious actors. Developing this and making it open source brings more awareness about the subject and will make it easier to develop defense models against such bots (whether already in existence or that will be developed in future).
> You are assuming that such tech is not already being (or has been) developed covertly by malicious actors.
I fail to see where playing out the "but others are doing it too" card exempts the responsibility of those who either lower or eliminate the barrier to entry to these attacks.
Every AI sound/word/picture editor I've run into says something along the lines of "we're releasing this data set to help stay secure in this day and age of easy counterfeiting of X", but they never really mention how you apply the data in an adversarial way against itself -- they just sort of hand-wave that part.
Same with the fake AI-generated Obama video and audio, and the earlier data-set-generated chatbots; plastered all over those projects are things like "Since these methods are available we think it's important that this data is disseminated so that others can use it to validate real-world data sources", but again -- how?
We have the real data, we have the fake data -- how is this diff done, exactly?
I'm willing to bet it isn't as easy as all the AI researchers who release this stuff claim it may be.
If it's secret or not publicly available, people will argue via Occam's razor that only "state actors" could use it, with the subtext being that you're not important enough to be a target.
With the data public, it's more akin to drive-by SSH login attempts. Not being important doesn't mean you're not under attack, and people can take the necessary precautions.
That's a bit like saying that nuclear secrets should be made public so that people can "take precautions" because "anyone can have a nuclear weapon, not just state actors".
There are few reasonable ways to "take precautions" against nuclear weapons, and there are few reasonable ways to "take precautions" against something like this short of swearing off social media entirely.
Without reasonable defences, all you really accomplish is ramping up proliferation.
I don’t think weapons of mass destruction are comparable. This is more like a security vulnerability for the mind: you can no longer be sure it's a human on the other side.
> there is no online tool that approaches the bandwidth of two people in a room with a whiteboard.
VR will change that at some point. Even in today's infancy, there are some VR apps that give you that sense of presence with all the other tools needed and then some (whiteboards, screen-sharing, projecting your desktop on a huge screen, etc...)
Yes and no. For example, "type: LoadBalancer" works fine on almost every cloud, but various annotations need to be added for things like SSL termination on an AWS ALB. The annotations don't collide, though, so you can have a load balancer with both AWS and Google Cloud annotations, and it will work fine on either cloud.
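A minimal sketch of what that looks like, assuming a hypothetical app named `my-app` (the annotation keys are real, but the certificate ARN is a placeholder, and the exact set of supported annotations varies by controller version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # AWS-specific: ACM certificate for SSL termination (ignored on GCP)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    # GCP-specific: request an internal load balancer (ignored on AWS)
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```

Each cloud's controller only reads the annotations it recognizes, which is why the same manifest can carry both sets without conflict.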
Volume classes are probably the best example of being cloud-specific, but this problem is solved by creating a different volume class for each cloud provider with the same name, so that a deployment can always grab a disk regardless of which cloud it's living in.
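In Kubernetes terms the "volume class" is a StorageClass. A sketch of the same-name trick, assuming a class arbitrarily named `fast` (the provisioner names and parameters are the standard in-tree ones for each cloud):

```yaml
# Applied on AWS clusters:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                      # same name on every cloud
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

```yaml
# Applied on GCP clusters, same name, different backing provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

A PersistentVolumeClaim then just asks for `storageClassName: fast` and gets an appropriate disk on whichever cloud the cluster runs in.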
They are. Kubernetes has abstractions at exactly the right layers (e.g. Service to create a load balancer) so that you can exchange configs between cloud providers.
There can of course be some difference in the capabilities that each cloud provider supports (e.g. not all load balancer implementations may support UDP) but the abstraction is definitely there.
I thought load balancers popped out the end of Services and it was plugins that handled the specific cloud environment? I'd say that still constitutes cross platform.
If they were marketed as different products there wouldn't be quite as much panic, but since they're labeled V1 and V2, people are afraid that the product they like will be unsupported soon since it's the older version.
Also, while V1 still exists for the time being, it is fair to assume that future updates will only apply to V2. One user on here already found this to be true: although a new region, GRU, was added, they could not deploy to it with V1 and were told it was supported only on V2.
Google Identity is not going to be the right solution for integration with other AD/LDAP stores.
Really big companies will often have multiple AD/LDAP identity stores with enterprise grade provisioning systems (Sailpoint, Oracle Identity Manager) to keep accounts in sync.
This offering from Google is meant as a replacement for AD/Exchange (obviously) for small-to-mid-sized companies. It also offers provisioning to other apps, but only those that support SAML Just-In-Time provisioning. I did not immediately see a way to add custom apps for provisioning, so it might be limited to just the OOTB apps they provide (list: https://support.google.com/cloudidentity/topic/7661972)
Auth0 today offers SAML integrations, and if the target system supports JIT provisioning, it will work the same as Google's.
And just to clarify, Google's offering is 100% meant for internal users (employees, anyone that you'd give a Google Apps account to)
It's the most confusing part of Kubernetes, IMO. It's a load balancer with a very restricted feature set, so what is it good for?
The main issue it tries to solve is how to get traffic from outside the cluster to the inside. The Ingress resource is also supposed to be orthogonal to the ingress controller, so that the same config works whether your app is deployed on AWS or GCP (in practice that's not quite true, though).
With the nginx ingress controller, the main advantage I see is that you can share port 80 on the nodes between multiple Ingress resources.
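A sketch of that sharing, assuming hypothetical apps `app-a` and `app-b` behind one nginx ingress controller (host names and ports are made up; this uses the current networking.k8s.io/v1 API, which postdates some older `extensions/v1beta1` examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
spec:
  rules:
    - host: a.example.com        # routed by Host header on shared port 80
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-b
spec:
  rules:
    - host: b.example.com        # different host, same controller, same port 80
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b
                port:
                  number: 8080
```

The controller merges both resources into one nginx config, so a single listener on port 80 fans out to both Services by hostname.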