You want Google Search, Google Docs, Chrome, Android, and Google Cloud to all share the same blog? Not to mention lesser-known areas like Google Education and so forth...?
That tells me there isn't a single security research group, but at least six of them. Which doesn't surprise me.
Shouldn't their blogs follow the org chart? When I want to follow updates, it's generally from a particular part of the org. Each group has its own separate mission and its own audience.
That makes as much sense as saying every Y Combinator startup should post on a single shared blog, with tags to filter by company.
No -- a single blog should revolve around a single group of authors writing around a single, concrete theme -- an individual product, product suite, initiative, or similar.
The idea of a single blog with 500 posts a day from 500 different people sounds terrifying, tags or not. It's too many tags -- like, you'd need tags for the tags!
And? It has a couple of posts a day on average. It's extremely high-level. It's not aggregating the probably hundreds of posts you'd get across the entire corporation.
You'd never in a million years want content like this mixed in with that:
We're also introducing GoStringUngarbler, a command-line tool written in Python that automatically decrypts strings found in garble-obfuscated Go binaries.
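For context on what's being reversed here: garble's literal obfuscation roughly replaces string constants with encrypted bytes plus a small runtime decryption stub, and GoStringUngarbler automates undoing that. A toy Python sketch of the general XOR-style idea (illustrative only -- garble's actual scheme is more involved, and these function names are made up):

```python
def obfuscate(s: str, key: int = 0x5A) -> bytes:
    # Replace the plaintext literal with XOR-encrypted bytes,
    # so the string no longer appears verbatim in the binary.
    return bytes(b ^ key for b in s.encode())

def deobfuscate(blob: bytes, key: int = 0x5A) -> str:
    # The inverse routine; a tool like GoStringUngarbler works by
    # locating the real decryption stubs and recovering plaintexts.
    return bytes(b ^ key for b in blob).decode()

blob = obfuscate("api.example.com")
assert deobfuscate(blob) == "api.example.com"
```

The point of the thread below follows directly from this: since the decryption routine must ship with the binary, recovery is always mechanically possible.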
garble actually sounds like an excellent utility to add some protection around things like keys/secrets in a binary. Is there anything like this for Swift binaries?
Obfuscation tools like these only slow attackers down; they can never stop them. Even the best in the game, with strong financial incentives on the line, typically fall to attackers within a matter of months.
As such, you should never use them to protect data that needs to stay secret indefinitely (or for a long time), such as keys.
That was my reasoning as well. I used to work for a company that insisted on obfuscated code because they were terrified of corporate espionage -- even though the product I worked on was just a configuration interface, the configuration was plain text files, and the application didn't do anything special, just complicated (mobile network routing/protocols: lots of domain-specific knowledge, but as far as I know nothing secret or difficult to reproduce with enough resources).
Apps and websites get copied all the time. Somebody throws up a duplicate with ads and steals your traffic and search rankings and customers and whatever.
Adding code that stops your product from working when it's not on the right app/domain, and obfuscating your code to hide those checks, can sadly be necessary. It doesn't need to defeat a determined attacker; it just needs to be hard enough that they'll spend their time cloning something else instead.
There are occasions where you just want to make it a little harder to impersonate an official client, and there it can be useful to store a secret in the binary. It's still vulnerable, but extracting it requires intent and actual effort.
Might have the opposite effect -- like a Streisand effect. A hacker sees that the app is mysteriously hiding a secret? That makes them want to hack it just for the challenge, even if they had no intention before.
A much better solution would probably be to store those as environment variables. I can't think of any sane way in which adding secrets to a binary would be useful, unless you want to do something malicious with it.
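A minimal sketch of the runtime-injection idea being suggested here, in Python (the variable name `API_KEY` and the helper are hypothetical, not from any particular app):

```python
import os

def load_secret(name: str = "API_KEY") -> str:
    # Read the secret from the process environment at startup instead of
    # baking it into the binary, where any user can extract it.
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} not set; refusing to start")
    return value
```

As the replies below note, this only shifts the problem: someone still has to put the value into the environment on the machine where the binary runs.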
Unless you're launching the binary from C&C infrastructure that receives remote commands to start it, I don't see how you would obtain the values to inject into environment variables.
But even that case doesn't make much sense. I'd expect that, instead of embedding the secrets inside the binary, you would go the more traditional route: ensure the client is logged in and keep the secrets on the server.
Unless you want your app to be used anonymously, but then why have secrets?
The use case I encountered was for anonymous users, where the company wanted to prevent unauthorized clients (copies of the app) from relying on the same server-side HTTP API used by the official app. The point wasn't to make it impossible for an unofficial client to be used, but to make it harder than "trivial".
So the app signed its requests with a key that was obfuscated and embedded in the binary. With anonymous users, I don't know how else you could prevent use of the private API.
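The scheme described above can be sketched as HMAC-based request signing (the key name, header idea, and function names are assumptions for illustration, not the commenter's actual API):

```python
import hashlib
import hmac

# Hypothetical key that would ship obfuscated inside the official client --
# and, as the thread notes, stops being secret once someone extracts it.
EMBEDDED_KEY = b"extractable-with-enough-effort"

def sign_request(body: bytes) -> str:
    # The official client attaches this, e.g. in an X-Signature header.
    return hmac.new(EMBEDDED_KEY, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    # The server recomputes the MAC and rejects clients that lack the key.
    expected = hmac.new(EMBEDDED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

This raises the bar from "copy the URL" to "reverse-engineer the binary", which is exactly the harder-than-trivial goal described -- and no more.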
I'm not saying it can't be done, but I still find it a flawed solution. It probably works if your product isn't very popular, but once you have anything remotely interesting and popular, you can be sure people will be analyzing your binaries and leaking your secrets faster than you can replace them.