What Happened: Adobe Creative Cloud Update Bug (backblaze.com)
397 points by slyall on Feb 15, 2016 | 116 comments



It is unbelievable how invasive Adobe software is. I try very, very hard to avoid installing anything by Adobe, because I know that the software will do whatever it wants on my drive, install itself all over the place, install various (ugly) menu bar items which are non-removable, run things at login, create a "Creative Cloud Files" folder on my Desktop (!), make me a member of the "Community", demand my password for administrative privileges without explaining what for, make lots of network connections to various servers sending unknown information about me, run various processes in the background (AAM Updates Notifier, anyone?), and run various updaters with annoying popups whenever it feels like it. I think CC will also pester you with upselling popups and new-feature announcements.

Back when Flash was still a thing, they stooped so low as to try to sneak in more of their products when I just wanted to update Flash.

It's crappy behavior and should be called out for what it is. We are too quiet about it, and we have gotten so used to taking crap from software producers that many people don't even complain.


On OS X with just Photoshop CC installed, there are around 10 Adobe-related processes running at any given time, all the while making connections and sending data to both Adobe and non-Adobe (CDN) IPs. And that's AFTER you disable all of its settings, including run at startup. There's even an Adobe Node.js instance (wtf?) constantly running and using CPU.

I don't take kindly to my system being commandeered, so I cancelled my CC subscription, removed their bloated, slimy POS software and bought Affinity Photo.


Go to /Library/LaunchDaemons/ and delete what you don't want to run at launch. (At your own risk of course.)


Better to run "launchctl list | grep -iv com\.apple" for both yourself and root; there are multiple places that startup items can be placed.
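
For example, a sketch of that check on OS X (these are the standard launchd locations):

    launchctl list | grep -iv 'com\.apple'          # per-user jobs
    sudo launchctl list | grep -iv 'com\.apple'     # system-wide jobs
    ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons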


Don't delete. Instead, run "launchctl unload -w <file name>". This will stop the service and disable it from loading on boot.
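
A sketch, with hypothetical plist names (use whatever non-Apple jobs "launchctl list" actually shows on your machine):

    # plist names below are illustrative only
    launchctl unload -w ~/Library/LaunchAgents/com.adobe.ExampleAgent.plist
    sudo launchctl unload -w /Library/LaunchDaemons/com.adobe.ExampleDaemon.plist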


Tried that, didn't work. My guess is that Photoshop ran some kind of script that re-enabled it. I went to great lengths trying to gracefully disable all the CC crap before I gave up and rm -rf'd every damn file/directory containing adobe.


> It is unbelievable how invasive Adobe software is.

Have you ever attempted to analyze the periodic data transfer that happens from your computer to Adobe servers when you have one of their products running? They log every action that you perform: from opening a file to clicking a menu item.

Hint: The "encryption" scheme employed to _encrypt_ the user actions is a substitution cipher.


Adobe Creative Cloud is the Tyler Durden of software.


Actually it's the Windows 10 of software.

Which is the problem: if you hire a builder or a plumber and they do something stupid which causes you significant harm, you can sue them. And you'll probably win.

If you buy a car and it's defective and causes you significant harm, you'll likely get compensation from a class action suit.

How did destructive software errors get an exemption from the legal principle of accountability? (I know the EULAs say whatever, but that doesn't mean they can't be challenged in court.)

Adobe, MS, Apple and the rest don't need to care about software quality because they don't need to worry about expensive legal action by customers.

Stupid crap like this will continue to happen until that changes.


I agree that some stronger regulation may be needed, but applying a broad "punish for a bug" strategy will have terrible effects on our industry, as it will lead to stagnation in innovation and a reluctance to experiment. (We can already see this with the "Error 53" bug, where Apple is arguably doing the right thing by not letting the secure enclave be compromised, and is getting sued for it.)

Look, bugs in software are as old as software itself. Comparing software to a builder is not helpful because it is simply not the same thing at all: software is a lot more abstract, and the effects of doing certain things are far less measurable at composition time. Also, builders have several thousand years of tradition; programmers have less than a hundred.


I think it's fair to put the blame for data loss on users as long as it isn't done maliciously. That's because users are the ones in the best position to protect against it by making backups in proportion to how valuable their data is. It would be a nightmare if Adobe had to make a backup of every customer's computer just in case their software had a bug that deleted something. Much more efficient for the user to just have a single backup protecting them from bugs in all their software rather than all their software redundantly doing it.


This is a fairly common programming mistake to make, nothing to do with Adobe being invasive. I remember when EVE Online's updater deleted C:\boot.ini in 2007, for example:

http://community.eveonline.com/news/dev-blogs/about-the-boot...

That said I completely agree Adobe's software is disgustingly invasive, just not for this particular reason.


A common mistake...

No. It's an idiot mistake. It's an idiot mistake that should have been caught by code review. It should have been caught by release engineering. It should never be made at all, because deleting files that your software didn't create in the first place is asinine.

If your software does this you should be fired.

End users should be free to install software without having to sort through their backup/recovery process.


This is not a common programming mistake to make at all, and I'd also say it has everything to do with being invasive. The EVE reference you mentioned was 9 years ago, for a video game in an early release stage.


I don't know the exact nature of the Adobe bug, but Steam had a similar problem recently too "Moved ~/.local/share/steam. Ran steam. It deleted everything on system owned by user" https://github.com/valvesoftware/steam-for-linux/issues/3671

Honestly (and imnsho) the prevalence of that sort of thing makes it even less excusable for a large company like Adobe.


That is incredible.

> Line 468: rm -rf "$STEAMROOT/"*

Without a check for $STEAMROOT validity. I'm not sure you couldn't sue Valve for the incredible recklessness. That is just beyond the pale.


In general you can't. Almost all software EULAs pretty much say you agree there is no warranty and no liability, no matter what happens.


EULAs are not magic, at least in sane jurisdictions. A lot of stuff they contain is not legally binding.


Holy shit, how have I never heard of this? That's an insane bug for Steam to let slip.


EVE wasn't early release stage at that point. But: EVE at the time had a startup configuration file `boot.ini` inside the game folder. They tried to delete a game file, but if the game was not installed on C:\, a couple of path variables were empty and it realpathed to `C:\boot.ini`, if I remember correctly.


EVE had already been out for 4 years at that point - much younger than it is today, but hardly at an early release stage.


Can you more clearly define "common" for me?


Kudos to BB for their swift actions.

Under no situation should any software randomly delete folders and files it doesn't own, period, especially something so widely used by professionals. This is not just a little bug on a rarely used OS platform, and it's not limited to people using Backblaze on the Mac; it should have been a show-stopper, and should never have left the test environment (do they even have one?). It reflects the abysmal quality of Adobe's software and their QA.

There's no reason Adobe should even touch the root directory "/" on the Mac, let alone delete anything from there. Why does Adobe CC need root permissions to install on the Mac in the first place?

I hope the people responsible for this disaster, their managers and the QA team were fired. In my business, this sort of mess can sink our company.


> I hope the people responsible for this disaster, their managers and the QA team were fired. In my business, this sort of mess can sink our company.

I hope something very different: I hope they're required to write a thorough post-mortem analysis of what went wrong, and what in their development & testing process led to the failure. And then required to implement improvements to both that would prevent this class of bugs from happening again.

Firing people for a mistake is rarely a good idea. It makes them not want to admit mistakes. Firing them for making the same mistake several times and not learning from it -- fire away.

cf. http://itrevolution.com/uncovering-the-devops-improvement-pr... and the "blameless post-mortem"


Yev from Backblaze here -> That would be cool, though there's no real "requiring body" to require post-mortems. The company has to want them. We do them ALL the time internally at Backblaze; they help us piece together what the hell went wrong and give us next steps to work on. We find it really useful, especially with software/hardware issues (less so UI glitches and stuff).


When you find yourself writing code that deletes directories, you need to be extraordinarily careful. It should be subject not only to extensive testing, but a design review that questions if there isn't a better way to achieve your goal because of the many ways code that runs rm -rf can go wrong.

If code like this escapes code review and testing and whacks actual customer data, firing is an appropriate response.


Are you sure, without seeing the bug? Imagine two different manifestations of the bug.

(a): Pull out the first directory found in readdir("/"), system("rm -rf ${dirname}") (in rough pseudocode).

(b) removeCacheDirs(); --> recursiveDelete(getLocalCacheDirList()); --> ReadConfig(GetUserPrefLocation(CACHE_DIR)), .... <some horribly nested mess of functionality created by 10 different programmers over as many years, where somehow the code misbehaves under a specific set of environment attributes, etc.>

Sure - (a) would be awesomely stupid, but it's probably not what happened. Oh, it might be. But (b) is more likely, and reflects as much an organizational and dev process mess as anything. And if (a) happened, you need to ask yourself how: Why was someone with so little clue allowed to implement something like this with no code review / why did it bypass code review? What's the longer-term policy and mechanism to prevent this?

When a three year old shoots someone with a gun, do you put the three year old in jail, or do you figure out how the $*! they got a gun in the first place?


When I wrote similar functionality, we had (as you mentioned) various helper functions. All of them were heavily tested and marked for extra code review if they were changed. And if you need a temp or cache dir, systems provide safe ways to create one -- e.g. mktemp, GetTempPath, etc.
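
A minimal sketch of leaning on the system for that, in shell (mktemp -d is available on both Linux and OS X):

    tmpdir=$(mktemp -d) || exit 1        # the system hands back a unique, empty scratch directory
    trap 'rm -rf -- "$tmpdir"' EXIT      # cleanup only ever touches the directory mktemp created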

And it's more than fair for the lead who approved the code change, and perhaps even the lead who approved the entire design to have their jobs on the line for this.


In the code I've written that deletes directories, I never call the system level delete functions directly, and instead have abstracted them away to a function with added checks to the beginning that fail/assert with big warning messages if parameters ever try to delete root or other system directories.

It's never been triggered except by my unit tests, but I'm sure glad it's there.
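
For what it's worth, a minimal sketch of that kind of guard in shell; the protected-path list here is illustrative, not exhaustive:

    # Refuse to delete anything that resolves to a protected location.
    guarded_rm_rf() {
        local target="$1" resolved
        resolved=$(cd "$target" 2>/dev/null && pwd -P) || return 0   # nothing there, nothing to do
        case "$resolved" in
            "/"|"/Applications"|"/Library"|"/System"|"/Users"|"$HOME")
                echo "REFUSING to delete protected path: $resolved" >&2
                return 1 ;;
        esac
        rm -rf -- "$resolved"
    }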


  It reflects on the abysmal quality of Adobe's software
  and their QA. [...] I hope the people responsible for
  this disaster, their managers and the QA team were fired.
Eh, I can kind of see how it might happen myself. Remember that time Valve had a bug in Steam for Linux that would wipe the user's hard drive? [1]

It's easy to imagine how this sort of thing would happen - manual test guys don't have hidden folders in their root directories because their boxes are locked down by corporate IT and they can't create any folders in the root directory. Developers don't have them because they understand the unix way and know they shouldn't be putting files there. CI servers don't have such folders because, well, why would they? None of the test environments include the specific condition needed to trigger this bug - a hidden folder in the root directory - so the bug never appears in testing.

Is it bad and serious? Absolutely. But I don't think the fact testing missed this is a sign of testing incompetence - you could have cutting edge testing following every industry best practice and still miss this bug.

[1] https://github.com/ValveSoftware/steam-for-linux/issues/3671


Honestly? Adobe has been pulling this sort of bad stuff for a while now, it is not just a dumb mistake.

I am a hardcore fan of Macromedia Fireworks, and Adobe releases caused so many problems that I decided to use Macromedia Fireworks in the literal sense (i.e., on my current machine I have Macromedia Fireworks 8 installed).

When Adobe bought Macromedia, I installed their Adobe Fireworks thinking it would be just an updated Fireworks... I was right, and wrong: right because it barely had any new features or bug fixes, wrong because the install for some reason needed "Adobe common files" anyway and put 3 GB of crap on my disk (when disks were still 20 GB, so it was a huge waste of space), AND somehow it conflicted with their own Adobe Reader and made it stop working.

When Adobe Fireworks CS3 was released, I was in university and excited to try it... and not only did it again have almost no new features, old features didn't work correctly either. Most aggravatingly, the selection boxes would always render in the wrong place, and not just 1 pixel off, they would be completely off: when I tried to input text, for example, the text box appeared 300 pixels away from the actual text.

When a friend installed it on his computer, the computer started misbehaving.

And the problems kept coming, to the point that eventually I gave up. I have no idea what version Fireworks is on now, because when CS6 came out I decided to install Macromedia Fireworks on my machine and not install any Adobe product on my computer (eventually I found it was hard to avoid Acrobat Reader, so now I install it, but only that).

EDIT: Also, I always disable Adobe Updater. The first thing I do after installing any Adobe product on any computer, for any reason, is shut down the updater process and forbid it from running.


Old Macromedia Fireworks (and current Adobe Cloud) user here. Fireworks still has not changed in any positive way since Macromedia developed it.

The bright side is that they haven't completely ruined it. I haven't noticed the text box problem, but I skipped many versions before I switched over to OSX and had to get the new one.


I go one step further - I block all access to Adobe's update servers at the router so it can never get there.


Can relate to that. Adobe's arrogant shit show destroyed the best Fireworks.


Adobe EOLd Fireworks in 2012.


Except that it's not just a hidden folder in the root directory. It's the first folder sorted by name in the root directory...


Which, if you have no hidden files in /, ends up being /Applications!


> Remember that time Valve had a bug in Steam for Linux that would wipe the user's hard drive? [1]

Counter-point:

* Let's be honest, Linux is not a first class gaming platform. Linux is not going to be Steam's #1 platform. Whereas here it's an Adobe product on OSX.

* The Steam bug was triggered by the user moving ~/.local/share/steam. That's not something that users will be doing regularly. Whereas this Adobe bug wasn't caused by any uncommon behaviour.

> Developers don't have them because they understand the unix way and know they shouldn't be putting files there.

Developers are very definitely going to have dot-files: .ssh, .vim, .bashrc, .local, .gnome, .mozilla, etc. If something wiped my SSH private keys on my desktop, that would be a massive problem.


Perhaps a bit pedantic (HN!)... We may be remembering a different bug, but wasn't the Steam bug a failure to check that a variable existed before using it to build a directory path that was to be deleted?

The variable was empty when a user moved the Steam root, but that would have been fine with proper input checking - there were probably other scenarios where that failure could result in catastrophe too.

http://m.theregister.co.uk/2015/01/17/scary_code_of_the_week...


  Developers are very definitly going to have dot-files. .ssh
The article only mentions the system root directory, never the user's home folder. If you're storing your ssh keys in your system root directory, you're doing something very unusual.


I'll be running chattr +i on my private keys when I get a chance. I wonder why that bit isn't set by default?


The Steam bug was the first thing that popped into my mind...But does application testing have to be so dependent on testers having all possible permutations of a system configuration? I mean, I can't offer a mathematical proof for this, but it seems to me possible to write code that pre-empts the situation where an arbitrary directory is blown away, willy-nilly. In the same way that while it's hard to effectively blacklist all the dangers, it's still possible to effectively implement security by whitelisting allowed behaviors.


That's exactly the point of application sand boxing, which Apple's been pushing for a while.


For the Steam bug, it could have been fixed by running the shell script with options that prevent use of undefined variables.

It really isn't hard to have a company policy about nounset, errexit, and, in the extreme, noclobber.
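
For example, such a policy is a couple of lines at the top of every script (a sketch):

    set -o errexit     # -e: abort on any failing command
    set -o nounset     # -u: expanding an unset variable is a fatal error
    set -o noclobber   # -C: '>' will not silently overwrite existing files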


Don't forget companies with a nolock policy.


What's nolock?



> Eh, I can kind of see how it might happen myself

Exactly! Any programmer worth their salt should be able to see how it might happen... and prevent it!

> you could have cutting edge testing following every industry best practice and still miss this bug

I disagree. If you are following industry best practice (checking files/directories before you delete them) you would not have this bug.


> It's easy to imagine how this sort of thing would happen - manual test guys don't have hidden folders in their root directories because their boxes are locked down by corporate IT and they can't create any folders in the root directory.

This is what QA is for. It should be thorough, more thorough than you imagine and performed by someone other than who wrote the code.


Adobe is the company who once thought modifying the boot sector of a hard drive was OK [1].

I'm just glad they didn't migrate to messing around with EFI variables for their copy protection.

1: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/441941/...


FWIW FlexNet is not an Adobe product, it's by Flexera[0] (Acresso at the time, prior owner Macrovision). It is/was used by a lot of products for license enforcement. I wasn't personally aware of the boot sector crap until now though.

Doesn't really excuse Adobe.

[0] https://en.wikipedia.org/wiki/FlexNet_Publisher


I've implemented FlexNet licensing in some of our software, and my first thought when I read about this Adobe bug was that it was an error in the FlexNet licensing implementation, since it likes to just own a folder and wipe its contents before refreshing licensing info from the server (which would make sense after an install/update).

The only thing I could find from Adobe was the basic release notes, which really don't say much. Has anyone seen more details on how this happened? If it is licensing, I doubt they'd admit it.


Messing around in /? Pretty certain it is licensing, putting information in obscure places most users won't look for them.


> I hope the people responsible for this disaster, their managers and the QA team were fired.

This is actually a rather typical response, made with full awareness of the consequences of an action, by people not under the same constraints as the participants in an incident. Diane Vaughan (an expert on risk, causes of incidents, and how organizations come to be error-prone) terms these actions 'ritualistic':

> Causes must be identified, and in order to move forward, the organization must appear to have resolved the problem. The direct benefit of identifying the cause of an accident as Operator Error is that the individual operator is the target of change. The responsible party can be transferred, demoted, retrained, or fired, all of which obscure flaws in the organization that may have contributed to the mistakes made by the individual who is making judgments about risk. The dark side of this ritualistic practice is that organizational and institutional sources of risk and error are not systematically subject either to investigation or other technologies of control.

To make it even better, you're now throwing away all the people most experienced in the conditions that lead to a specific incident. Unless very specific conditions are true, this is a sub-optimal recommendation. The evidence seems to be against building a culture of fear.


> There's no reason Adobe should even touch the root directory "/" on the Mac,

There is no reason why an application should touch anything outside its own sandbox (unless requested by a user via a open/save dialog). Incidents like this show why developers should distribute their Mac applications sandboxed by default.


> I hope the people responsible for this disaster, their managers and the QA team were fired.

Yes, call out the lynch mob.

Coincidentally, this bug was released a day before a holiday weekend. It's practically guaranteed that people were crunched on their deadlines and faced with either a) working all weekend b) getting reprimanded for failing to meet their deadlines or c) calling it good, releasing with inadequate review, and enjoying the long weekend (or so they thought). Firing minions and writing technical post-mortems won't help with creating a sane work environment.


I saw something interesting last week where upgrading OS X to El Capitan screwed up the Adobe folder permissions. Adobe's suggested fix was to give full access to everyone. I don't suggest upgrading to El Capitan in place; clean install only. I also suggest people use standard accounts, and authenticate updates and installs with a separate admin account. I'm eager to see what the real reasoning behind the bug was; an interesting programmer error using an OS X API, I imagine.


It sounds like it touched the homedir, not the root dir.


Is there any kind of Docker-like container solution for Mac OS X? In a Docker-like world, this wouldn't happen, would it? An installed application (in a container) wouldn't have access to delete things in a willy-nilly manner.


Adobe is handling this terribly. From the forum post [0], a number of people are reporting losing data, with one poster losing "all of my working files for my clients" and another saying "197 GB of working files Gooooone!". Adobe simply says "You can go ahead and update the desktop application. The issue has been rectified."

Surely there are others who didn't even know they lost data; when asked about how they'll notify those people, Adobe responds "We have advised customers to take local backup of their files just to be on a safe-side."

Some talk coming from a company that deleted user files with wanton disregard. I'm aware that legal culpability is a complicated topic, but Adobe is doing themselves no favors here.

[0] https://forums.adobe.com/thread/2089459?start=0&tstart=0


It's deplorable. I follow Adobe on Twitter and pay attention to various sites. I heard about the problem from BackBlaze.

Absolutely 100% unacceptable. Massive props to the BackBlaze team for handling Adobe's shitstorm so well.


Yev from Backblaze here -> Thanks! We tried to act pretty quickly since it was causing deletions of data. As a backup company that's a thing we try to help people avoid :P


Good postmortem from BB. Crappy situation for them to have to be in since it was all Adobe's fault, but I think they handled it well. They didn't off-load problem-solving to Adobe, but made it clear what the root cause of the bzvol error was.

Meanwhile, I'd love to see the postmortem from Adobe. What file(s) were they expecting to delete and why did they feel it prudent to not do a strict/stricter check for the files/directories to delete? Presumably some sort of "adobe" directory, but I would hope to at least find out that they couldn't reliably expect the folder to be a specific name and the issue was that their searching was just too loose.


They obviously made no check. The goal was probably to delete all Adobe configuration files. And somehow the first hidden folder in home became a target. Because .adobe is obviously going to be first, right?


Deleting the first directory seems like a harder problem than deleting a folder called .adobe.


I don't know about that. It's harder to do it right, but they probably didn't bother with doing it right.

For example:

    ls / | head -n 1 | xargs rm -rf


That's easier than

    rm -rf /.adobe
?

My point being that things indicate a more interesting problem they were trying to solve than just deleting a directory. What that actually was, and how it led to such an awful solution, would be interesting to know.


I agree it's pretty far fetched. I'd love to know what the real story is too.


I wonder if it was something like .adobe-A3DF0910 that they were targeting, so they couldn't completely hard code the path and neglected to hardcode the prefix. Or maybe it was a purely randomly named directory without any prefix.


Maybe they were targeting .adobe* in home, and assumed that it'd be first, but didn't account for that possibility that it'd be missing. That would have been insane, I admit. But we do know that initial dot directories were getting hit.


Yev from Backblaze here -> To their credit Adobe did put out this blog post/update -> http://blogs.adobe.com/adobecare/2016/02/12/creative-cloud-d....


Why is BB writing to ~/ instead of ~/Library/Application Support/BackBlaze?


Brian from Backblaze here, here is an explanation I typed in elsewhere:

I'm the guy who decided to put things in a folder at the root level of the drive. Originally this was a system for EXTERNAL drives that get plugged in, then unplugged, then come back later. We need to know what state the backup is in related to that drive, and if the customer has several external drives that come and go we need a unique ID for each one. So I created a top-level folder on the external drive called ".bzvol" (hidden) and then placed two files inside of it: a README that explained what the folder was, who created it, and why you should not delete it, and a 100-byte XML file that had the drive's unique ID AND ALSO the identifier of the backup that owns this drive (some customers have two computers both backed up by Backblaze and they carry a hard drive between them).

There are many designs that would work, in retrospect a better design might be to have used the drive's internal serial number (which is globally unique) and maybe a little mapping database either stored on the Backblaze website or somewhere down under /usr/local or /Library that maps the globally unique drive back to the backup it is associated with.

That's the problem with developing software. After you have worked on a problem for 8 solid years, you are finally qualified to BEGIN working on it and should rewrite it from scratch. :-)


Thanks for the reply - I completely understand that historical design decisions sometimes lead us to end up in non-ideal situations.


Maybe they hired a bunch of Stack Overflow-driven developers and it was the result of a copy-paste.


A common response here is to bash Adobe for its lax approach to software quality. I think it would be better for us to look in the mirror and ask ourselves what we can do to improve the robustness of our own software. A few ideas:

- For production code, categorically reject any language that will happily treat an unset variable as an empty string. Or, if possible, configure the language implementation to run in a stricter mode that fails fast instead. Question: Is that possible with shell or PHP?

- Favor manipulation of data structures over concatenation of strings. See Glyph Lefkowitz's blog post "Data In, Garbage Out" [1]. When manipulation of strings is practically unavoidable, as with filesystem paths, use a well-tested library such as Python's os.path module or .NET's System.IO.Path class.

- Embrace the principle of least authority at every level. For example, when developing for the Mac, work within the Mac app sandbox as much as possible.

Any other ideas on how to prevent this kind of catastrophic bug?

[1]: https://glyph.twistedmatrix.com/2008/06/data-in-garbage-out....


> Is that possible with shell [...]

    #!/bin/bash
    set -eu

    <actual work>

Explained:

   -e  Exit immediately if a simple command exits with a non-zero status, unless
       the command that fails is part of an until or  while loop, part of an
       if statement, part of a && or || list, or if the command's return status
       is being inverted using !.  -o errexit


   -u  Treat unset variables as an error when performing 
       parameter expansion. An error message will be written 
       to the standard error, and a non-interactive shell will exit. -o nounset
Worth a read: http://ss64.com/bash/set.html


There is also the variable-expansion abort, even with optional custom error message:

    "${VAR:?Error message goes here}"
Will make the shell exit (printing that error message) if VAR is unset or the empty string.

Or, which I often use: build the path you want to delete, call realpath on it, then check whether the base directory you never have any business leaving is a prefix of the realpath-resolved path. This still allows symlinking stuff into subdirectories, but will catch it if something suddenly resolves to somewhere else entirely.
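
A sketch of that realpath prefix check; the base and target paths here are purely illustrative:

    BASE="/Library/Application Support/ExampleApp"     # hypothetical base directory we must never leave
    target=$(realpath -- "$BASE/cache/old") || exit 1  # resolve symlinks, ., ..
    case "$target" in
        "$BASE"/*) rm -rf -- "$target" ;;                              # still inside BASE
        *) echo "refusing: '$target' escapes '$BASE'" >&2; exit 1 ;;   # resolved somewhere else entirely
    esac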


File delete bugs are (obviously) dangerous and sometimes have surprising causes.

A long time ago, we had a Java web application that we'd deploy, and after a couple of days it would stop working. We'd investigate and find it hadn't been deployed properly, yell at the person who installed it, and go on our way.

Until the day data disappeared. That was weird.

Then we saw files physically disappearing while we had Windows Explorer open in front of us.

So it turned out that a library we were using was using exception handling for flow control, and in a limited set of circumstances this meant a file wasn't closed. That was fine. Usually.

Except that Java 1.4.1_01 had a bug on Windows where if you had more than 2036 files open, THE NEXT FILE THAT WAS OPENED WOULD BE DELETED[1].

I'll never forget the pain.

[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4779905


Several sources (including Adobe's own blog post on the issue [1]) suggest that the problem is in the update code, but the Backblaze guys say (and show, in videos) pretty clearly that the deletion recurs every time the user signs into the Adobe Creative Cloud account. Is this actually some DRM bullshit?

[1] https://blogs.adobe.com/adobecare/2016/02/12/creative-cloud-...


Yev from Backblaze -> no idea really, we initially thought that it was "on update", but then when we dug further it appeared that it would fire any time a sign-in occurred on the broken version. Not sure what the exact code was though, or what the intent there was.


Hello gang! Yev from Backblaze here to answer any questions. Sorry, just woke up. Who posted this? Australians? :D


Once again, Adobe demonstrates that they have the worst software quality of any company at their tier of the market.

Such a shame that they're the market leader in so many ways, because once you get past the pretty UIs and impressive features, their software is just a train wreck.


If anything, a testament to the importance of a good UI if you want your software to be used by people.


This just shows we really do need a step towards an 'appstore-like' permissions model for desktop operating systems. There is just no reason for all these software packages to have full write access to everything in your user account.


You know, Thinstall (later VMware ThinApp) has existed since 2001. I'm surprised its particular breed of containerization (effectively an LD_PRELOAD shim that defines virtualized versions of libc calls) never took off. It was perfect for providing this kind of "isolation": the kind where you want the app to "not make a mess of things"—but aren't aiming for container-like isolation-based security, because the apps you're running are known quantities (if stupid ones.)

It's interesting, also, that Windows' WoW16, and then WoW64, both provide their own levels of filesystem virtualization for "messy" apps... but those same constraints aren't pushed on "native" apps.

I still don't really understand why no OS just virtualizes every app's filesystem, without having to opt into something like sandboxing. It'd actually be able to provide a much nicer programming model, a lot like Plan9: just spew all your program's files into the virtualized equivalents of system directories, because they're directories that are really just for you. No subdirectories; you just put configuration right in /etc, manuals right in /usr/share/doc, etc.

That could then be combined really well with a database-filesystem: going in the file manager to /usr/share/doc would display a "virtual library directory" with virtual subdirectories for each app-container that had made use of the directory. (Or you could skip the virtual subdirectories and get a merged view. Good for e.g. a Fonts directory.)


Honestly, the best part of this story for me is I am finding out the details from Backblaze. I remember when I worked at Zetta.net and I chatted with a couple of members of the founding Backblaze team at the time about their DC issues (we were in the backup biz as well). Thoughtful, pragmatic, and straight forward.

I was using Backblaze back then and still do.

A great company and well worth supporting.

Thank you for the analysis, and sorry for the troubles you encountered due to Adobe -- a company that has had multiple "we are doing better for the employee" HR moments over the last 5-7 years that really meant "you are screwed".


Yev the non-founder here -> Hah, that's great to hear! The founding members are still a silly bunch.


It's a shame Apple is doing such a bad job with the Mac app store -- it would be really nice to have a system that limited the damage to the app's own playground and files.

Giving random 3rd party applications access to your whole computer feels like a relic from another age.


The MAS is in a strange place of having to push out some stuff (e.g. Xcode) that is entitled to mess around with your system, for necessary reasons (e.g. to install a debugger.)

There are only a few ways they could have worked around this, and I don't blame them for their choice:

• Require apps to sandbox themselves to the degree they're able, and then request "entitlements" for anything they need to do outside of the sandbox. Use entitlement list on app submission to guide the app review process.

This is what Apple chose; it's not very costly for them, since they were already doing review on app submission, and the entitlement list can actually make review faster than before, since it gives each review-iteration more of a clear picture of the app's architecture.

• Divide apps into "system" vs. "pure-userland" apps. Run all "pure-userland" apps in containers. Optionally, require that "system" apps are as minimal as possible—more like "system extensions"—and set it up so that most "system apps" end up as "system extension" + "pure-userland app" pairs that use IPC to coordinate, where one has some sort of official UX to trigger the app store to download+install the other. Require, additionally, that the "pure-userland app" still retains some sort of usefulness even without the system-extension.

This is a cool solution, and the one I'd expect Linux (or Android) to use in the future, pairing "app" packages with "system" packages. It can be nearly completely automated with little-to-no review required. But it is very restrictive: either you're containerized or you're not. The components that require elevation can't benefit from isolation at all. There's no partial sandbox in the sense of Chrome's "even if you get write-memory access to the browser, you're not getting anywhere" sandbox.

• Do what iOS does: have only pure-userland "apps", but where some apps can prompt you to install configuration profiles, that then apply a layer of (reversible-by-uninstalling) changes to the OS, which the app can rely on. This is how e.g. the MaaS360 app "takes over" and manages corporate iOS devices.

I could see OSX being written this way. OSX has configuration profiles(!), though they're not used for nearly anything other than enterprise management (and, I think, the recent-ish OSX beta program.) Personally, I think they're a cool system, a bit like if the Windows Registry was composed in terms of immutable, introspectible patch-layers rather than being a single mutable tree. (By analogy: applying a configuration profile is like applying a filter in Flash or Fireworks, while installing a .reg file is like applying a filter in Photoshop.)

The flaw with configuration profiles is that you have to anticipate every way an app might want to modify the system, and provide an API to request that the system modify itself in that manner instead. You might be able to anticipate e.g. MacFUSE's desire to install filesystem drivers, or f.lux's desire to play with screen gamma, or even Synergy's desire to take over your input device handling. But you'll inevitably miss things, like e.g. Dropbox's desire to inject a shared object into the Finder that overrides some of its exported functions with new ones.


Amazing that the only contact from Adobe mentioned in this blog post is from their PR folks:

>At 12PM I received emails from Adobe’s PR who wanted to make sure we were on the same page with how we were addressing the issue


Has anyone figured out the reason for this bug? What was this "delete the first directory" code actually supposed to do, and how did it fail? I can't figure out any plausible scenario.


I'd guess that the cause is very similar to the Steam bug that also deleted random files. Probably a shell script with a glob like '$SOMETHING/*' where the $SOMETHING variable was accidentally unset.
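
A harmless way to see that failure mode (echo instead of actually deleting; $SOMETHING is just an illustrative name):

    SOMETHING=""                  # imagine an installer step that was supposed to set this
    echo rm -rf "$SOMETHING"/*    # the glob expands to everything in /: rm -rf /Applications /Library /Users ...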


Some previous discussion on this most unusual bug here: https://news.ycombinator.com/item?id=11090630


All because Adobe's shit is running with privileges it doesn't need.


Exactly. People get "Adobe Creative Cloud" because it's the only way to get Photoshop, Illustrator, and a few other related tools. None of those tools need root privileges. But Adobe, in their arrogance, think they have the right to take over the user's machine, run tasks in the background, communicate with servers, and do other things they have no business doing.


This is exactly the kind of mistake that a sandbox was designed for.

In a sandbox, by default your software can ONLY write to a specified place like the "/some-random-globally-unique-string/yourapp/yourcrap" folder. Then, guess what: the most that "rm -Rf /" can do is obliterate your own stupid app's files. And, if you further compartmentalize features, it may only damage a segment of your app.

Adobe's still stuck in the decades-old software development model of "well, guess I need full root permission" with no thought invested beyond that. No wonder Apple further locked down root folders in El Capitan so that you now need to restart from a special partition and essentially invoke "sudo --act-from-god --yes-I-really-know-what-the-hell-I-am-doing --no-really-yes-I-do --yes-really" to remove the protections from most root folders.


Is there a viable alternative to Lightroom? It's the only reason I'm stuck with Creative Cloud.


I've heard good things about Capture One from PhaseOne, especially in regards to the RAW rendering. I'm still on the Lightroom bus myself.


Thanks for the tip! Looks like they have a perpetual license. :)


Nice write up. I wish they wouldn't use light grey text, however. Form over function?

Preventative steps: https://backblaze.zendesk.com/entries/98786348--bzvol-is-mis...


We've gotten one or two mentions of this before, but our designer really likes it. Creative types...what are you gonna do?


I'm guessing your unsympathetic "creative types" have nice Retina displays where the thin "Open Sans" font is more readable, even at #4D4D4D. If I switch to Lucida Grande, #4D4D4D is OK. http://contrastrebellion.com/


You'd be right!


Is there really no official press release from Adobe?


Well, they did release this official blog post -> http://blogs.adobe.com/adobecare/2016/02/12/creative-cloud-d...


Hey, it's something. Even if this is a blatant lie:

> In a small number of cases, the updater may incorrectly remove some files from the system root directory with user writeable permissions.


I got super lucky and happened to see the original tweet about this. Immediately ran the terminal commands.


_Uncheck_ "Always keep Creative Cloud desktop up to date"

Ahh, much better.


Yet another reason to use GIMP: it doesn't randomly delete your files.


Really?

You're suggesting people should switch to a GNU project program because GNU has a better track record for not doing counterintuitive insane, dangerous bullshit?

`strings` was exploitable. `strings`, for fuck's sake.

And the autotools are still a thing.

The day I assume GNU software is well written is the day GNU HURD ceases to be a punchline.


GIMP will become a more viable option once they implement non-destructive editing.


Considering Photoshop just erased an entire folder of files, it can hardly be called non-destructive. You're basically at the mercy of Adobe to not mess up and not delete everything when "the cloud" updates. The workflow being slightly more convenient doesn't matter at this point.


So to supposedly avoid one-off bugs (which can be entirely mitigated with proper backups), graphics professionals would rather suffer every day?


Pixelmator ($30) is a much more usable alternative on OS X


Probably.



