Salesforce enables ‘modify all’ in user profiles (reddit.com)
280 points by LinuxBender 66 days ago | 89 comments



Presumably this should cause most companies in the EU to announce, within 72 hours, a data breach, since it allows any Salesforce user to gain any permissions and view/leak/copy/steal any data on any customer in the org.

That will be a lot of companies, considering pretty much every big company uses Salesforce.


While many companies were affected by downtime, only those that were using or had previously used Pardot were affected by the permissions issue.


Doesn't seem to be true, going by the Reddit discussion.

Potentially, everyone that happens to share an instance with an org that has ever used Pardot is affected.


I have no inside knowledge but I’m a contract developer on the Salesforce platform so I was on a lot of the public calls today.

Every org on an instance that had an org that used Pardot (huge percentage) had a service outage today.

The security breach was limited to the actual orgs (much smaller percentage) that have actually enabled Pardot in the past.

So, many (I’d guess half) of Salesforce's customers were affected with an outage, but a much smaller percentage also had a possible data breach.


Does it count as a data breach if unauthorized users were allowed to access data, but nobody actually did? I.e., server logs showed that nobody actually read anything they weren't supposed to.


EU definition of a personal data breach.

> "‘personal data breach’ means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed;

Based upon this definition, just allowing the access is a breach.


Well it seems like a gray area. There's a meaningful difference between "accessing" something and "having access to" something. To me this seems like it would only cover the former.


I’m not aware of any access logs existing; perhaps history tracking, if it's enabled on that object (limited to 10 fields per object).

It would absolutely be considered a breach if customer data was accessible by unauthorized staff, more so if by partners.


Salesforce has a feature called Event Monitoring. Event Monitoring does show lots of information, like pages accessed, reports exported, API endpoints used, etc., but it needs to be purchased. However, Salesforce does track and retain that data for some time, so you can decide to purchase it retroactively if you ever want to.


TIL, thanks. Maybe they ought to give that away to those who have an affected org.


> Presumably this should cause most companies in the EU to announce, within 72 hours, a data breach, since it allows any Salesforce user to gain any permissions and view/leak/copy/steal any data on any customer in the org.

Presumably most users already have an NDA signed? Doesn't that cover PII?


An NDA can be breached. What matters is whether the data was leaked or not, not whether it was done legally (data breaches seldom are).


Oopsie. Some poor bastard is having the worst day of their career.

I find it concerning that it was even possible for this to happen, regardless of whether it was intentional.


This is just a reminder that no matter how big or successful you are, shit like this can always happen. And it’s usually not the fault of a single person, but rather some lack of process/review/control that made it possible in the first place.

I feel terrible for whoever initiated this. I’ve been in that boat and it -really- sucks.


It's hard to make things totally impossible, but hopefully a good post-mortem will identify the systemic issues that led to it happening, fix them, and not throw anyone under the bus.


"I'll just ask engineering to fix your wonky account"

"Ah yeah - it got in a bad state somehow, let me fix it manually"

    UPDATE permissions SET allow = 1 WHERE user=671156 AND permission=16 AND org=101 OR 102;
Classic SQL blunder...


For anyone reading this who has login access to a production SQL database...:

* Change your account to readonly (a rough sketch follows after this list). Make a new admin account, and put its credentials somewhere hard to get (and audited!).

* Make a directory in git for one-off SQL statements. Make them all go through code review and have an automated system run them on merge/deploy.

* Enforce style rules with a linter/test, like "UPDATE must have a LIMIT"

* Anything for which the above process is too burdensome should have an API or admin interface built for the purpose.

* Aim to eventually get rid of your readonly account. A leaked customer data dump could kill the company and shouldn't be available to any malware on your machine. You aren't as secure as you think you are.
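
For the first point, a minimal sketch of what that could look like on PostgreSQL (the role, database, and schema names here are made up, and the exact commands differ per database):

    -- Day-to-day account that can only read.
    CREATE ROLE support_readonly LOGIN PASSWORD 'change-me';
    GRANT CONNECT ON DATABASE prod TO support_readonly;
    GRANT USAGE ON SCHEMA public TO support_readonly;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO support_readonly;
    -- Also cover tables created later.
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO support_readonly;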


I agree with everything you said except "a customer database leak could kill a company".

Do we have any example of any company going under due to a data breach? Unfortunately, there seem to be a lot of examples of enormous breaches that did essentially nothing. (like Experian)


That Italian malware vendor closed down after all its internal documents were leaked. As did the law firm behind the Panama Papers.


Hacking Team is still alive and kicking.


Well yes, but I would attribute the first to simply doing illegal things and getting caught. The data breach just helped them get caught faster. The second, yes, although privacy was literally their number one product and I wouldn't categorize a leak quite the same as a breach.


You forgot one thing: if you’re ever executing one-off statements, make sure they’re in a transaction if it’s an insert or update.

    BEGIN; <statement>; COMMIT; -- or ROLLBACK

etc


You can also do changes in the transaction and check that it did what you expected before committing.


Fair warning about this: if your database's transaction isolation level is set to serializable, hanging out mid-transaction while you verify your changes will block other calls to the table(s)!

I always recommend wrapping calls in transactions if you are touching prod data. I only left the WHERE clause off an update query once in my life before I started doing that.


If you fear this, you can put it into a script like this:

    BEGIN;
    UPDATE ...;
    SELECT ...;  -- for verification
    ROLLBACK;

And then just change ROLLBACK to COMMIT once you're satisfied. There's no need to do it interactively.
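
For example, a filled-in version of that script, reusing the made-up permissions table from the example upthread (purely schematic; "user" is a reserved word in some databases and would need quoting):

    BEGIN;

    UPDATE permissions
       SET allow = 1
     WHERE user = 671156
       AND permission = 16
       AND org IN (101, 102);

    -- Verify that only the expected rows changed before committing.
    SELECT user, permission, org, allow
      FROM permissions
     WHERE user = 671156
       AND permission = 16;

    -- Swap ROLLBACK for COMMIT once the SELECT above looks right.
    ROLLBACK;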


That is smart. I would add "LIMIT 2" to a single-row update, so if it returns 2 rows updated I know it was wrong.
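
For instance (MySQL-style, since standard SQL has no LIMIT on UPDATE; same made-up table as above):

    -- If this reports 2 rows changed, the WHERE clause matched more than intended.
    UPDATE permissions SET allow = 1
     WHERE user = 671156 AND permission = 16 AND org = 101
     LIMIT 2;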


> * Enforce style rules with a linter/test, like "UPDATE must have a LIMIT"

As someone who has never had access to a DB with any serious number of users, can you explain this one further? What if you really do want to update every row? Do you just do LIMIT INT_MAX or the like, and just force people to write that so that they always know they're updating the entire table? Or are you saying you should only ever use UPDATE on a known finite (and small?) number of rows?


Updating every row is usually a bad idea anyway. That means every row needs to be rewritten, which involves reading and writing all the data in the table. The update will take locks, and the old data needs to be kept (in case of a transaction abort), doubling disk usage. Since half of the rows are now dead, all other queries to the table will take roughly double the disk access time.

Basically, if your table is over a few gigabytes, updating every row in one query on a production instance is a really bad plan.


So how do you make the change then? Do SQL dialects provide built-in tools to do an UPDATE to a large table in a saner way, or do you end up having to write an application to do so?


I think it might just be a psychological thing to try to prevent mistakes, like a dialog box asking "Sure?" before you delete your hard drive.


It's the notion of a staggered rollout. You should never initiate a global change without first validating it on some small, controlled-impact subset.
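
A simplified sketch of that idea for a big UPDATE is to batch by key range instead of touching the whole table in one statement (the table and column names here are made up):

    -- Touch a bounded slice, verify the result, then move on to the next range.
    UPDATE big_table
       SET flag = 1
     WHERE id >= 0 AND id < 10000;

    -- Next batch: WHERE id >= 10000 AND id < 20000; and so on, pausing between batches.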


I'm sorry for being a SQL noob. Can you elaborate on what is wrong with this, and what would be the way to write it without causing the scale of problem described in the OP?


The problem is that the condition is parsed as

  (user=671156 AND permission=16 AND org=101) OR 102
The right way is to use something like

  org IN (101, 102)


There are lots of good answers already posted, but if I can make a further suggestion...

It is often worth running a SELECT on the WHERE clause you are about to use for your UPDATE. That way you can make sure only a limited amount of data comes back before you launch something catastrophic.
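
For example, with the (corrected) WHERE clause from the statement at the top of the thread:

    -- Dry run: see how many rows the WHERE clause actually matches.
    SELECT count(*)
      FROM permissions
     WHERE user = 671156 AND permission = 16 AND org IN (101, 102);

    -- Only then run the UPDATE with the same, now-verified WHERE clause.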


The last OR clause, "OR 102", will always evaluate to true.

It should be

    UPDATE permissions SET allow = 1 WHERE user=671156 AND permission=16 AND org=101 OR org=102;


You and fazzone have posted different answers. I'm a little unclear on which matches the original intent, but your where clause is equivalent to:

  (user=671156 AND permission=16 AND org=101) OR (org=102)


That is still wrong and would apply to everyone in org 102


Applying to everyone in org 102 is the intended behavior no? The slip-up would be the 'OR 102' which would just evaluate to true.


The example query is supposed to match a single account, not an org.


> In addition to the SQL-standard privilege system available through GRANT, tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. This feature is also known as Row-Level Security

https://www.postgresql.org/docs/11/ddl-rowsecurity.html
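
A minimal PostgreSQL sketch of that feature (the table, policy, and setting names are illustrative, not anything Salesforce actually uses):

    -- Only rows whose org matches the current session's org are visible or modifiable.
    ALTER TABLE permissions ENABLE ROW LEVEL SECURITY;

    CREATE POLICY org_isolation ON permissions
        USING (org = current_setting('app.current_org')::int);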


As a manager, if you see a poor bastard having the worst day of their career because of something similar, it means you have inadequate safeguards in place and you’re not doing your job correctly.


The worst day of their career so far.


I can imagine it happening, but it's hard to imagine how they applied it to their disaster recovery backups before noticing the issue.


Right. Like add a unit test or something. Geez.


I’m more curious how a script that does this even made it through review. And if there wasn’t a review... why not?


Stuff like this is always possible wherever you have a fully centralized architecture. I don't know how many massive cloud failures it will take for the IT community at large to realize this.


Salesforce launched in 1999, before cloud was a thing. It's basically just a big Oracle database.


That's not very fair to a company that was one of the pioneers of a lot of the things we consider normal in today's software as a service.

It's also incorrect; they have great cloud and devops practices. If anything, it's likely this bug's impact would be limited due to how decentralised SFDC operates.

Still a massive fuck-up, I'm interested in seeing if they'll release any more detail on why it happened.


It's a little unfair, yes, and Salesforce is a decent product with nice dev tooling (apart from the weird, ancient Java-ish custom language), but under the hood it really is just an Oracle database per org.


Weird ancient proprietary language.

Impossible to run locally.

No debuggers.

Virtually impossible to put an entire org in source control.

No package manager.

More undefined behavior than a C compiler.


> decent product with nice dev tooling

whaaaaaaaaaaaaaaaaat hahahaha i can't take that seriously; I've used it and it felt like a giant pit of despair


Well, they do own Heroku: https://www.heroku.com/


They purchased Heroku.


In fairness, they did that long enough ago that they would have massively messed it up by now if they didn't at least understand something about running software.

SFDC hate is pretty common, maybe because of how big they are. I think that their tech is actually pretty impressive.


The hate is likely propelled in part because their sales process is the work of Satan.

Like seriously I would rather lick alcohol soaked razor blades than do a standard annual renewal of a Salesforce contract.


Other than being pretty public at this point, this incident could have easily happened in an upgrade/rollout in on-premise settings as well (at least without a good staging environment and test process).


You get all kinds of new and exciting failure and fuck up modes in a decentralized architecture.


If I add a service listening on port 1234 that pipes text input to a root terminal, most likely no one will ever know.

That's the advantage of decentralized architecture. It's a disadvantage too though ...


"The Salesforce Technology team is investigating an issue impacting Salesforce customers who use Pardot, or have used Pardot in the past. The deployment of a database script resulted in granting users broader data access than intended. To protect our customers, we have blocked access to all instances that contain impacted customers until we can complete the removal of the inadvertent permissions in the impacted customer orgs. As a result, customers who were not impacted may experience service disruption. In parallel, we are working to restore the original permissions as quickly as possible. Customers should continue to check Trust for updates."


Reading between the lines, I think someone forgot a WHERE clause in their UPDATE statement....


I mean, you would hope it's not possible for it to be that simple but... you could see it, couldn't you?


Considering that just yesterday I had to quickly cancel an operation that had the wrong criteria specified, I can absolutely see it...


How about turning autocommit off?


Salesforce incident: https://status.salesforce.com/incidents/3815

Our product syncs data to Salesforce. We're seeing hit-and-miss connectivity across our customers' instances; some API calls are still working, but I'm unable to sign in to a developer instance on NA49.


It'd be nice if we could get the HN link pointing to that instead, since it actually has a bit more detail than the Reddit post, and the Reddit post doesn't seem to be getting updated, so it's only going to get more "stale".


FWIW that link doesn't load for me. I think they're getting hammered.


For people not familiar with Salesforce, what does this mean?


For a time, users on many instances were able to read/modify data that they shouldn't have been. They got full CRUD access to -all data-. This includes some external users of things like Customer and Partner portals (where functionality and data are made available to external users via Salesforce). When they decided to try to mitigate the issue, they locked down all access and took away CRUD permissions from all users/profiles in Salesforce on those affected instances.

We woke up to a bunch of users unable to do their jobs because they suddenly started receiving "NO ACCESS" errors, effectively. We also haven't been able to modify the profiles and fix the access effectively.


When I saw this headline, my first reaction was that a salesperson in an org could download all customer contact info and immediately go to a competitor and start poaching customers extremely efficiently. How likely is this scenario? What sorts of recourse, legal or otherwise, would the org have? Non-competes are hard to enforce, and I don't know enough about trade secret laws to have a good opinion on this.


The way most companies operate, most of their Sales people could do this on any given day anyway. Salesforce can track those activities though.

This issue did not open access to other companies’ data. Just all the data in their own Org.


If I understand it correctly, it's much worse: A customer of Company A could download all internal data of Company A (e.g. all customer info) if Company A is using a Salesforce based support ticket system and the customer had an account in there.

So if a sales person at Company B happens to be a Customer at Company A...


> How likely is this scenario?

It's difficult to say. Exporting data in bulk isn't totally straightforward for your average user, and I'm not sure how long total CRUD access was granted. That being said, given that full access was granted, it's not impossible to imagine someone creating a report and doing a dump of customer data.

Though typically that may fall under NDA and not non-compete. NDAs are a little easier to enforce as far as I understand, but I'm not a lawyer.


It's sort of like if you walked by a bank and saw a bundle of money unattended in the lobby rather than the vault. Technically maybe you could get outside with the bundle, but keeping out of trouble long enough to enjoy the big-screen TV you try to buy? ... Don't bet on it.


Nah, it’s more akin to an employee coming in one day and seeing that the vault door is just open, and thinking they might poke around a bit / pocket some bundles of cash... Except they usually handle most of those bundles of cash day to day anyway.


This is absolutely disastrous for healthcare companies.


As someone with a big healthcare client on Salesforce, you're not wrong. This was a massive issue for Salesforce Healthcloud users.


Only if they use Pardot or have used it in the past...


I wonder how many lost lives could be attributed to this bug.


When computer networks go down, hospitals switch to good old pen and paper.

It'll be 0; it's more about the possible breach of PHI and the resulting HIPAA violations.


Does this allow a salesman from company A to access customer info from company B (where A and B are both customers of salesforce) or is it just intra-organization?

Still bad if the latter, but catastrophic if the former


Nah, definitely not that. Just ups their permissions to that of a super user / admin in their company.


Looks like they shut down a big part of it, our instances are all down.


This is the documented solution to any significant security issue. They'd much rather be down than expose any customer data.


We're on ap4 and it looks like our stuff is still up.


I mostly just get a loading spinner at:

https://status.salesforce.com/

Not really what you want with something like this... but the folks holding the keys to a site like that are often not around, or not fast enough, to make those sites helpful.


It didn’t affect any of the 3 orgs within our firm. It was limited to firms who’ve deployed Pardot, so a much smaller audience.

Still, epic screw-up.


Can confirm my org is impacted.

Our SF instance is accessible, but no permissions on login.

Just got an update from our admin; no ETA.


Why would anybody trust Salesforce after this?


Doesn’t really matter.

Moving off Salesforce is a many month project for a small company and possibly years for a larger company depending on the add-ons and everything.

I doubt they'll lose any real customers over this, but they're definitely going to be cutting some checks/credits to a ton of people for the next few months. It'll fade.



