Adobe joins the Chromebook party, starting with Photoshop (chrome.blogspot.com)
368 points by zastrowm on Sept 29, 2014 | 245 comments

Well this is interesting.

Their FAQ[0] contains some more information, but still does not explain everything. They say "however, instead of being installed on your local machine, it is running in a virtualized environment so can be accessed from any Chrome browser or Chromebook. Because this version of Photoshop is running in a virtualized environment, you open, save, export and recover files from/to your Google Drive rather than your local file share."

It would seem that they are indeed streaming the video (VNC-style). Moreover, the space requirement is only 350MB (Photoshop is normally much bigger), and there's also this:

"If network connectivity is lost, you will need to launch a new session. A recovery folder called ‘Photoshop Recovery’ will be created in the root of your Google drive. To recover files, simply double click to open a file."

Overall, this doesn't sound good at all. If they are streaming screen "video", color correction and pixel-level precision in design is going to be tough. Photoshop seems like one of the most difficult programs to work via VNC.

[0]: http://edex.adobe.com/projectphotoshopstreaming/faq

Who's to say this is VNC-based? VNC is very old technology. RDP/NX solve a lot of these problems. Or this could be an X-server-like application. Who knows, but I doubt Adobe is launching something that uses VNC-type technology that samples and over-compresses the screen. I imagine things like color precision are well taken care of.

Also, anyone else notice this amusing note at the bottom of the FAQ:

Please post your issues to [Insert Forum Link]

If you were building this for a modern Linux client, wouldn't SPICE be the logical technology to leverage?


Or the server/client both implement custom compression on top of VNC.

The color correction can be done on the server. The client just needs to upload your monitor profile, and Photoshop on the server returns color-corrected pixels. Of course, Chrome cannot even read the monitor profile on Linux yet. I'm not sure if this also applies to Chromebooks, though. Since it's a fixed device, the monitor profile is probably factory-stored somewhere. Or Adobe gets the files for the few dozen or so Chromebook models from Google, and the browser only sends an identifier.
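To make the idea concrete, here's a toy sketch of per-channel correction via a lookup table. This is my own illustration, not Adobe's pipeline; real ICC profile handling is far more involved than a single gamma exponent, and the profile values here are assumptions.

```python
# Toy per-channel monitor compensation: re-encode pixels that assume a
# standard 2.2 gamma for a display whose measured gamma differs.

def build_gamma_lut(monitor_gamma, target_gamma=2.2):
    """Build a 256-entry LUT mapping target-gamma values to the monitor's gamma."""
    lut = []
    for v in range(256):
        linear = (v / 255.0) ** target_gamma         # decode the assumed working gamma
        corrected = linear ** (1.0 / monitor_gamma)  # re-encode for this monitor
        lut.append(round(corrected * 255))
    return lut

def correct_pixel(rgb, lut):
    return tuple(lut[c] for c in rgb)

lut = build_gamma_lut(monitor_gamma=2.4)  # hypothetical measured display gamma
print(correct_pixel((128, 64, 255), lut))  # → (136, 72, 255)
```

The point is only that the server can do this cheaply per frame once it knows the client's profile; whether Adobe actually does is speculation.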

And then H.264 artifacts probably undo that color correction.

Adobe has spent decades worrying about color accuracy in their applications, in every path it can take to your eyeball, and they're not going to screw up the last hop just because it didn't occur to them to account for the effect of whatever remoting technology they're using.

Based on the backlash over Creative Cloud, I don't think they really care.

Has there been backlash over color accuracy in Creative Cloud?

Because there are a lot of things Adobe might not care about as much as you'd like- but color accuracy is one of the things they care about A LOT.

(I spent a summer working in an art computing lab, nearly every minute of which was spent managing color accuracy)

I guess they are using VP9, which supports 4:4:4 chroma (i.e., no chroma subsampling): http://youtu.be/xo_R40C7RTo?t=3m35s

Correction: H.264 also supports 4:4:4 (in its High 4:4:4 Predictive profile). If you see artifacts, that probably means it wasn't compressed correctly.
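For anyone wondering why subsampling matters here, a toy illustration (not any real codec's code): in 4:2:x modes, chroma is stored at reduced resolution, so a one-pixel-wide colored detail gets averaged with its neighbors, which is exactly the kind of bleeding you'd notice in a pixel-precise design tool.

```python
# Toy 4:4:4 -> 4:2:x horizontal chroma subsampling on a single chroma row.

def subsample_chroma(row):
    """Average each horizontal pair of chroma samples, then duplicate back up."""
    out = []
    for i in range(0, len(row), 2):
        avg = (row[i] + row[i + 1]) // 2
        out.extend([avg, avg])  # upsample by duplication, as a decoder would
    return out

# Cb channel: neutral gray (128) with one saturated sample (255).
row = [128, 128, 255, 128, 128, 128]
print(subsample_chroma(row))  # → [128, 128, 191, 191, 128, 128]
```

The saturated spike gets smeared across two pixels; with 4:4:4 it would survive intact.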

  Photoshop seems like one of the most difficult programs to work via VNC.
I think it's actually the opposite. Usually only a small amount of the screen is changing, so response is fast. For full-screen or complicated effects they can provide more CPU horsepower than your average laptop, so the operation may actually complete faster, even if screen redraw takes more time.

FWIW I've used Photoshop over Remote Desktop in the past and with a good connection it feels perfectly usable. There's a little input lag which is only noticeable with sweeping brush effects, not UI interactions, and like I say large area screen redraw (e.g. scrolling) suffers. There's no visible compression or artifacts and I don't see why color correction would be an issue.
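The "only a small amount of the screen is changing" point is what tile-based dirty-region tracking exploits in most remoting protocols. A minimal sketch, purely illustrative (real protocols like RDP use far more sophisticated update encodings):

```python
# Compare two frames tile by tile and report only the tiles that changed,
# which is all a remoting server would need to re-encode and send.

def dirty_tiles(prev, curr, tile=2):
    """Return (row, col) indices of fixed-size tiles that differ between frames."""
    dirty = []
    for r in range(0, len(prev), tile):
        for c in range(0, len(prev[0]), tile):
            block_prev = [prev[r + i][c:c + tile] for i in range(tile)]
            block_curr = [curr[r + i][c:c + tile] for i in range(tile)]
            if block_prev != block_curr:
                dirty.append((r // tile, c // tile))
    return dirty

prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[3][3] = 9  # a single changed pixel dirties exactly one tile
print(dirty_tiles(prev, curr))  # → [(1, 1)]
```

Typical UI interaction dirties a few tiles; full-screen scrolling dirties all of them, which matches the behavior described above.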

> Overall, this doesn't sound good at all. If they are streaming screen "video", color correction and pixel-level precision in design is going to be tough. Photoshop seems like one of the most difficult programs to work via VNC.

This would be a huge issue for the professional market. But it appears that they are targeting education with this first (or maybe just that market). Color precision is likely not the highest of priorities.

I don't see pixel-level precision being hard. As for color-correction, I would bet they have settings to allow you to adjust based on your monitor.

Doing color correction based on monitor colors would be impossible: sometimes you adjust, for instance, RGB curves for one specific layer.

It could mean that half of the screen is changing with every step of the slider, and you want exact color, immediately.

I think you're making this overly complicated.

For one, doing client-side color correction (for example with WebGL) is technically possible. It's probably not in this solution, but let's not use words like "impossible" unless they're actually impossible.

But more importantly, I think you're just complaining about latency, but then you're using the phrase "impossible."

EDIT: I thought you were talking about gamma correction and the color space of the display, where the program running on the server wouldn't know about the gamma and color characteristics of the monitor on your current browser. Thus, "monitor settings." But no, you're just worried in general about whole-screen fill rate latency. Again, I think using terms like "impossible" is hyperbole. It will be fine for some users, and not for others.

I suspect that it's built using Native Client (https://developer.chrome.com/native-client), not using VNC. "Streaming" in this context probably means that the code and files are dynamically served by the server, rather than referring to streaming video.

No, it's actually streaming. As the link says, they run managed VMs and stream it. As someone else said, VNC is really old technology. We can do better.

Nothing in this discussion indicates that anyone knows what streaming technology they're using. However, of course, there are better and worse technologies. Streaming RDP from a Windows box a thousand miles away, for instance, is a world more enjoyable and usable than VNCing to a Mac connected via 1Gbps twisted pair. App-level virtualization through the same, interacting and engaging with the rest of your desktop, is close to magical.

"Nothing in this discussion indicates that anyone knows what streaming technology they're using"

I actually do, but I don't know whether it's public, so I won't talk about it.

However, I'll point out that Google already has something called Chrome Remote Desktop that does streaming desktops.

Is it possible that there is a localized virtual machine setup and the image itself is what is streamed over the network? I.e., you would have a locally installed version of Linux in a virtual machine, and an additional drive containing the PS application is essentially streamed over the network via SMB or something?

I can't see any practical reason why this would be the case. Photoshop is a pretty hefty application in terms of RAM/CPU, so it makes a lot more sense to run it remotely, since Chromebooks are built to be cheap, not powerful.

I agree, although perhaps Chromebooks are "cheap and weak" because there are no applications that use the power (hence the failure of the Chromebook Pixel). If Chromebooks are to become ubiquitous, I think local VMs are a great solution for legacy applications.

Or perhaps NaCl?

I wonder if they are going to pay for all the Windows or Mac licenses to run it on, or whether they have a Linux version working in-house.

I suspect this is using Mainframe2 which I recall reading about here: http://www.cringely.com/2013/10/15/mainframe2-runs-super-pow...

You are probably spot on, thanks! And nice to read/see Mr. Cringely (of Silicon Valley documentary fame) again.

Also, I wonder how this affects bandwidth. Are you just constantly streaming HD? Are there latency issues? What about if the network gets congested? It sounds pretty terrible.

Also on the bottom of that FAQ it says "Please post your issues to [Insert Forum Link]" heh.
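Some back-of-envelope math on the bandwidth question (my own assumed numbers, not anything Adobe has published):

```python
# Raw 1080p video is enormous; a well-compressed desktop stream is far smaller.

width, height, fps, bits_per_pixel = 1920, 1080, 30, 24
raw_mbps = width * height * fps * bits_per_pixel / 1e6
print(round(raw_mbps))  # → 1493 Mbps uncompressed

# Typical H.264 desktop-sharing bitrates are on the order of 1-5 Mbps.
compressed_mbps = 3  # assumed mid-range figure
print(round(raw_mbps / compressed_mbps))  # → 498 (compression ratio)
```

So "constantly streaming HD" is plausible on a decent connection, but latency and congestion are a separate problem from raw throughput.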

"Also, I wonder how this affects bandwidth. Are you just constantly streaming HD? Are there latency issues? What about if the network gets congested? It sounds pretty terrible."

I can imagine that, given experience with VNC or even some crappy RDP implementations, it sounds like it would be terrible. But those are pretty old and bad technologies for remote desktop usage, with known issues. You can do better.

I'm sure there are situations where the experience is not ideal.

But remember that you are talking about people using Chromebooks here. They are computers basically built with the expectation that you have good internet, and they are targeting the education market here.

I would point out that neither Adobe nor Google would want people in the education market (one of Adobe's prime targets) to have a crappy experience with Photoshop, as it would be really bad for both parties (if Google makes a bad first impression with Chromebooks, people write them off and never try them again; if Adobe makes a bad impression with Photoshop, people won't pay hundreds later on to buy it).

So one would imagine they would not release a terrible experience :)

the expectation that you have good internet, and they are targeting the education market

Sadly I don't think I've ever been in an educational institution where the wifi was particularly reliable. Wired ethernet yes, so a computer-lab setup can work well with remote-desktop type stuff. But that's getting increasingly old-fashioned, and everyone connects on wifi now, which seems to often not be up to the task. We have some huge pipe at the university I work at, and my wired office connection is great, but the wifi? Frequent dropouts and latency spikes, especially if a bunch of people in one room are doing network-intensive stuff. I can believe our IT infrastructure isn't among the best, but it's not the only place I've had that trouble either.

"If network connectivity is lost, you will need to launch a new session."

As an Australian, I fear this is yet another one of those things we won't be able to use on our spotty 250KB/s connections.

It obviously isn't VNC-style. After all, if you lose a connection when using VNC, you don't end up with files on your local machine to recover - they would still be out on the net.

The processing is local, that is for sure.

I am a little bit concerned about the way technology is trending these days.

It feels like sooner or later, big companies will have all the hardware and software hosted on their side and people will only get access to play it according to the companies' rules.

This is actually a concern to my freedom as well.

I want to own my hardware and software. I want to run everything offline if I want to. I don't want to click an "I agree" button for every action I take.

It also means everybody is kind of forced to use their devices however these big companies have decided. They are selling you an apple, but they won't let you eat it however you want.

I will be happy as long as they give me the option (and do not force me to use the cloud) to keep my hardware and software on my side.

"It feels like sooner or later, big companies will have all the hardware and software hosted on their side and people will only get access to play it according to the companies' rules."

I agree. The "PC revolution" has gone into full reverse. It's the revenge of the mainframe and the whole mentality behind it.

There are many reasons for this, but the biggest one, I think, is piracy and the general mentality that "everything should be free" (as in beer). The problem is that nothing is free. Everything costs money. Really polished software costs a lot of money.

So far three strategies for dealing with this misalignment have emerged:

(1) Push everything back to the mainframe and return the user to serfdom.

(2) Create completely jailed devices (iOS, jailed Android deployments) linked to completely closed app stores and use cryptography to effectively achieve #1 without having to offload all computation.

(3) Create "free" products and services for which the user is the product -- this is the surveillance and manipulation business model of sites like Facebook.

I've come to believe that free is actually the enemy of freedom, and that the "pirate party" crowd is worse than wrong. What they're promoting is actively pushing us toward a radically less free world. The "everything should be free" ideology creates a complete decoupling between the user and the economics of software and content creation -- so now the only viable business models are either serfdom or exploitation.

In a sense free places the user in the ultimate position of disempowerment. Since you're not paying there is no reason anyone should care about you or your needs/preferences. It works for content too. In a world where people pay for music, music is made for you. In a world where nobody pays for music, it has to be made for someone else. I envision a future where pretty much all non-hobbyist / starving artist music is made at the behest of advertisers or political propagandists. It's all full of either product placements or messages crafted to get you to vote or think a certain way, and it has negative artistic integrity.

A comprehensive post. Just to add to (2): Chromebooks are DRM-friendly, and Chromecasts are locked as well. Google bought Widevine DRM, and my guess is DRM can pretty much be expected to win for media/software, since hardware makers are on board and they've had enough tries to get the crypto implemented right. Even Intel CPUs have DRM on board (Intel Insider). The only threat to DRM now is that it relies on human beings keeping secrets [edit: and having as few people/companies who know the master secret as possible], so if the economic value in breaking it is high enough, someone will get paid off / extorted to leak it... whereas (1) is probably safer from attack, and (3) can get hit by Adblock Plus right now...

Definitely fun watching the battle play out, and as a programmer with a family to feed I must admit I'm hoping for a way to value virtual creative goods that isn't $0.00 :-)

I agree with you that this mentality is likely how we got into this mess, but I think the way out is less clear. Thanks to the whole piracy dilemma, there was motivation to pursue your three strategies and develop the technology and services they require. But now the technologies are there and aren't going away, even if everyone suddenly started paying again for everything.

Take Adobe for example: They moved from selling Photoshop as a software to a subscription-based model where they are effectively selling the same thing as a service. They didn't just use the cloud technology to secure their old business model, they actually made it even more restrictive.

Similarly, when you buy a phone, you don't get to choose between a cheap, locked down phone and an expensive free phone - you're paying the full price and still get the locked down phone.

So I believe it is necessary that we get an alternative to the "free" culture, but it isn't enough. As things are today, it makes the most economic sense to abandon the "free" culture while still keeping your users in serfdom. Something else has to happen to shift the balance, but I have no idea what that could be.

I think piracy is one motivation, but I suspect it'd be going in this direction even without it: a yearly recurring revenue stream is just more attractive than selling software. Companies in many industries prefer the "license/rent you something" model to the "sell you something" model, and it's mostly a question of whether culture/economics/technology/competition/law makes that practical.

In the enterprise space it's long been standard: IBM won't sell you a mainframe outright, but rather rents you computing services. In the consumer space it's traditionally been harder, because it wasn't obvious why someone should pay you a subscription fee for a CD-ROM; though some companies did try to sell subscriptions to yearly updates. With a software delivery model more like a classic mainframe+terminal, it's easier to pitch the software-rental approach to consumers, so unsurprisingly they're doing so.

"alternative to the "free" culture" ( for both software and hardware)

- Reasonable price tags on products.

- Companies standing behind their products instead of letting them be bought by the three biggies. (this is hard)

- Learning how to make beautiful and useful user experiences from Google and Apple. You need to be passionate about what you are doing. If your own people don't buy it, then you cannot sell it to others.

There are already some companies doing these and getting pretty good results. I keep my faith in them.

As long as we do not improve ourselves, we are kind of helping these big companies rule the environment, I am guessing.


Making things even harder is how deeply entrenched the piracy-is-good mentality still is. Try arguing against it over at Reddit or on any thread about Popcorn Time or Kim.Com (what a douche) and see how far you get. You'll get attacked pretty viciously and downvoted into lower Hades, even at Hacker News.

If you don't pay, you don't matter. Free means serfdom. This is one of those things that absolutely nobody gets, IMHO, and I see no sign of more people getting it for a while. It's gonna have to get a lot worse before it gets better.

Free can also be Free/Libre/Open. You don't pay for Wikipedia and you're not a serf as a reader / user, you're not the product.

That only works for a subset of things, namely those things that can be effectively crowdsourced at small increments. I'd also argue that Wikipedia is heavily indirectly subsidized by academia and government.

Large coherent software products and great works of art are by definition the products of individuals or small teams focusing intensely for long periods of time. That is inherently expensive. Only the independently wealthy or the otherwise subsidized can afford to focus for thousands upon thousands of hours and then give the product away... not unless they have another idea for revenue like freemium, SaaS, etc. (... and those don't work for music, movies, books, etc.)

... and "otherwise subsidized" takes us back to some of the pathologies I mention... propaganda, advertising, etc. driving art and surveillance/manipulation based business models driving software.

Do we really want a future where only trust fund kids and propagandists create art? Do we really want a future where software is created exclusively to monetize its users?

The Wikipedia model can only work for everything in a true post-scarcity society where the marginal cost of everything is zero and everyone can live with no real "income." Until someone finds a way to draw unlimited energy from the quantum vacuum, we are stuck with some form of economy where everything I said applies.

Another way of thinking about the pirate economy is as a deflationary spiral in which monetary velocity collapses and all kinds of pathologies emerge from that. Deflationary spirals are actually really great for uber-capitalists that already own lots of rentier assets, but they're bad for workers, entrepreneurs, and other non-rentiers. Sound familiar?

Wikipedia happens to be supported by a huge quantity of individual supporters around the world, both in writing and in funding the servers and development. They actively avoid reaching out to wealthy philanthropists and governments and get all their support from the general global community.

Wikipedia indeed requires a smaller dedicated team to keep the system running. But they are not subsidized in any pathological way. They are funded by a large quantity of small donations. There's nothing wrong with that.

We don't need to reach a true post-scarcity world in order to better allocate the scarce resources we have. It's completely feasible to provide a basic income or other basic resources just like we do with clean water. We can have mechanisms to charge for use beyond the basic level. And we can have the community fund people to make the resources we want.

The "pirate economy" you talk about is the "freerider economy" (pirate is a stupid term here). Freeriding is a REAL problem. We can't live in a freeriding economy. But we can still have Free/Libre/Open resources if we can get people to fund them without compelling them through artificial restrictions. Wikipedia works because the scale is SO large that they get adequate support despite freeriding. There's also lots of ways to discourage freeriding / encourage donation better than we do now. The first step is to get everyone to STOP wasting their limited funds on proprietary products so that they actually can afford to better support Free/Libre/Open ones.

You should donate to and become a member of the Free Software Foundation.


I'd gladly do so if Stallman wasn't part of it.

Childish meltdowns on stage, comments about pedophilia and toe jam eating aside, RMS might well have been a luminary 30 years ago. But AFAIK he hasn't done any coding in years, and mostly just travels around giving talks, riding on his past fame to continue to enable his lazy, selfish lifestyle (cf. refusing to own a cell phone but happily using someone else's, same for loyalty programs, accepting volunteers who probably want software dev intern experience and making them update his blog, etc.)

The guy's just an embarrassment nowadays.

I went to his talk when he was in Vancouver, and it was… interesting. It came across as very tone-deaf, the tech equivalent of the man standing on the side of the street screaming about how our sinful behaviour will invite the lord's wrath.

He's described, in recent history, the method in which he accesses the internet; namely, to queue up a list of URLs, batch-fetch them, and then read them offline at a later date. That the person arguing about how our software is taking our control from us doesn't use it in any manner remotely recognizable to us is odd, but for him to argue against web apps which he has, apparently, never used and presumably does not fully understand (beyond his experiences with mainframes and virtual terminals, which are conceptually similar) seems impractical and somewhat tone-deaf.

The man is opposed to the way technology is going, and good on him for sticking to his principles, but he's basically staying ignorant of the last 15 years of technology development. He refuses to use any technology which isn't free software from the top to the bottom, which means he basically can't use sufficiently modern technology, and yet he wants to lecture us about what it does and how it works.

Yet, his position is still somewhat inconsistent. He'll use other people's cell phones, he doesn't refuse to use servers which aren't entirely free software, and there's no way that the networking gear between him and other servers (e.g. his mail server) is 100% free software. He seems to draw the line at paying for it, but not at using or benefiting from it indirectly.

He uses other people's phones because it doesn't give any information about him when he does that. If nobody else around him had a phone, he would just get on with life without making that phone call, or he would wait until he found a landline to make that call. But if the convenience already exists, he'll use it.

He's actually quite internally self-consistent. The principle he's adhering to is (1) he doesn't want to be tracked himself but (2) if someone else is already tracked, that doesn't necessarily track him. He still thinks you should choose to not be tracked yourself, but if you already made the choice to carry a phone, it's too late.

He will use servers or devices which aren't entirely free software because it's not practical to modify them. This is why he makes a big distinction between free software and hackable hardware. He considers these to be two different things. He believes that once you have the actual device and it's modifiable, you should be allowed to modify it. If the device is soldered and hardwired from the beginning, then it's simply impractical to require that hardware manufacturers allow you to modify it. This is why a free BIOS originally wasn't such a big deal - until all BIOSes became flashable, so now Coreboot is one of the FSF's causes.

Saying that he will use any server that runs non-free software is a simplification of his position. He considers that sometimes you are merely being provided a service, and you should consider the conditions under which this service is provided. That's why he eschews the word "cloud", because it conflates a bunch of different things and makes people stop thinking about their relationship to a server. He does think that merely requesting a web page from a server running, say, Microsoft Server is ok, because that's just information the server is sending him, and it doesn't matter how that information got generated. He might consider that Microsoft Server would harm the sysadmins using it, not him.

His positions make sense. He's a huge aspie, so he spends a lot of time making sure everything he says falls into place logically. Part of his aspie condition is that he has tantrums and eats toe cheese. You should view those acts as that: a condition he has to live with. But this condition also makes him think very long and hard about the ethical and technical implications of the decisions we make about our computers.

I don't think he's internally consistent in the manner a rational person would expect. I think he's internally consistent in the same way a patient with multiple-personality disorder and delusions finds ways to justify where their lover Eduardo has gone and why he comes and goes a lot like the Pokaroo.

> I'd gladly do so if Stallman wasn't part of it.

The FSF is a lot more than Stallman. John Sullivan runs the day-to-day affairs, Chrissie Himes organises a lot of the fund raising, and Jasimin Huang is the one who gives my GSoC students extra travel money from the donations that have come to Octave.

The opinions of rms are not the opinions of the FSF.

All I'm saying is, if you actually like the FSF, then you must like something about it other than rms. He's not the FSF. It's difficult for our primitive monkey brains to do this, but consider an idea independently of the person espousing it.

Oh come on, that's a far more childish attitude than any of RMS's eccentricities. Do you really think he travels around and gives talks because he's lazy, or because he's long recognized that it's the best way he can contribute to his goals of promoting free software?

Because if you'd ask me which I'd rather be doing, travelling around and giving talks, or being at home and coding, I'd pick my home and my neighborhood and my routine, every day of the week.

Name an organisation you respect and I will find you an asshat that they are affiliated with.

Refusing to support the FSF just because of RMS is, in my opinion, childish.

> I'd gladly do so if Stallman wasn't part of it.

If it weren't for him, the tools that many around here love so much would never exist.

On the flip side, you can consider that traditional proprietary software installs usually have full access to read and modify your system, often in ways you don't expect or wouldn't consent to. Running in a virtualized environment in a browser significantly reduces access to your system. So if you run a free software local install with proprietary programs running in a jailed browser environment, you are probably more secure than you would be if you had installed them locally.

This is true in theory; in practice, not so much.

I believe if the current trend continues, you'll soon likely not have much on your local machine worth protecting. The philosophy behind devices like the Chromebook or streaming apps like Photoshop-in-the-cloud only works if all your files are also in the cloud. So if this catches on, expect your Dropbox/Google Drive/etc. to turn from a convenient place to back up and move your files into the primary location where you store them.

Even if you have valuable files stored locally, you'd be correct if access to cloud services really only ran through the browser. But check how all recent Android or iOS releases (or really almost all modern consumer OSes) work: most regularly phone home to their respective vendor and are usually heavily locked down - the devices are little more than physical endpoints of the cloud. On such devices, the company that created the device likely has more access to the system than you do.

Obviously there are the usual exceptions from the Free Software movement - however, for average consumer devices, the trend seems clear.

But think this through...if you don't run anything locally then what, exactly, are you protecting on your local system by running things remotely?

Although honestly, for something like Photoshop I really don't see a problem. It's a creative tool, and probably isn't producing interesting data for the NSA et al. The real security concern here is Adobe's, in that they want to protect their software from you, the user. They want to set up an annuity-style revenue stream, and the only way to do that is to protect the binary image of their software. So they did that.

Really, the only objection I would have to this is that it adds a (sometimes rather difficult to meet) requirement that you have a solid internet connection to run what used to be a stand-alone application. More philosophically, I object to the horrible waste this kind of runtime requires, particularly of network bandwidth. Perhaps, though, this is the price of saving the premium proprietary software market.

You are protecting your private information in cloud A from company B. I can trust different groups with different information without giving them all access to the whole set.

Except for Google, who will (eventually) actively mine all of it, likely by way of some plausible universal search feature.

You'd be preventing your local credentials from being phished / stolen, keyloggers installed, your machine from being in a botnet...

And your local credentials matter for exactly what? The only useful ones are increasingly the ones you type in the web browser.


While the cartoon is cute, it's wrong. I don't remain logged into services I'm not using. If I'm not doing banking right now, I'm not logged into my bank account. I don't have my browser remember my password, either, so I have to enter it every time I go to their website. I can also locally encrypt data on my drive if I feel it would benefit me. I realize many people are lazy about these things, but I like having the option to not be. (I also don't use many of those services because they are big security risks.)

I know that Randall is focusing on the stored-credentials aspect, but the wider picture remains true - local credentials don't matter at all for an average user, because everything that matters is stored and operated remotely.

I should have said "all credentials". The local one may unlock all of the other remote ones. Or you just enter the remote ones directly. Point being, there's a lot to protect. Is a KeePassX password a "local credential"?

>On the flip side, you can consider that traditional proprietary software installs usually have full access to read and modify your system, often in ways you don't expect or wouldn't consent to.

Perhaps for non-technical users. For technical users this is a solved problem for a few decades now. You can restrict read/write access for any software in any which way you want.

>Running in a virtualized environment in a browser significantly reduces access to your system.

Why would I want to kill my software's performance by running it on an inferior browser platform when the kernel is perfectly capable of compartmentalizing software?

That's the whole point of the Cloud. To the business, it reduces the customer's control and increases the customer's dependence on the vendor.

Think about why Skype is no longer P2P. Think about how some video/audio formats are proprietary, so you have to keep using the same service or risk losing all your past recordings.

>Think about why Skype is no longer P2P...

Yes, please do think, then read what Skype's principal architect has to say about it http://www.listbox.com/member/archive/247/2013/06/sort/time_...

From what I gather, Skype moving to a centralized model is an effort to make it work better with mobile devices, where p2p/always-on is inefficient and power-hungry.

Mobile is the thin end of the wedge.

Your own/free software will run on your Raspberry Pi at home, or at your independent hosting provider in Iceland, etc.

It just doesn't happen that much on the desktop any more. It's still up to you if you want to use the software from adobe.com or the "Gimp cloud" on your home server.

This could even be an opportunity, because giving a link to non-technical users is easier than installing software locally. In that sense it could close a gap that existed previously.

On the other hand, all this cloud software means that you can use really demanding applications on cheap hardware, like Photoshop on a Chromebook.

But for the most part, I agree with you. Companies can change their pricing or terms at any time and you can't do anything about it. On top of that, to use the software you're paying for you also have to pay your fees to the ISPs.

> On the other hand, all this cloud software means that you can use really demanding applications on cheap hardware, like Photoshop on a Chromebook.

But Adobe can charge literally anything! And they can cut your access overnight at a whim.

But it's suddenly a viable business plan to start competing in the DTP space if you can find the talent.

Adobe can charge anything <-> suddenly anyone can realistically start to compete with Adobe.

I would say that evens out the playing field.

Except you don't own most software you have on your machine locally. You agree to a license to use it.

It's a theoretical/legal thing. In practice, you own software by having executables on your drive. You don't have to ask anyone whether you're allowed to run it or not. If authors have the audacity to enforce some checks on you, you can go ahead and crack the app. It's still code saved and running on your local machine.

You own the license, which gives you the right to use the software. You don't need to own the software to use it; you need to have a license.

The real problem is that you can't own a license anymore for a lot of software. You can only just rent one.

"I want to own my hardware and software power"

I think this states what I am trying to say.

The parent is specifically addressing that point. You don't own most of the software you are purchasing. The vendor retains either the technical or legal ability to reach into your machine and disable the software via the licensing agreement.

I understand the point, which is really reasonable.

But how about this way:

In order to disable software that I installed on my machine, they need to know who I am and which computer it is installed on, plus deal with lots of other procedures. That is time-consuming and not easy.

In order to disable cloud hosted software,

- User.find(me).licence.valid = false
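To make the asymmetry concrete, here is a toy, runnable sketch of the one-liner above. All names (`User`, `Licence`, the in-memory "database") are hypothetical stand-ins for whatever a real vendor's backend looks like; the point is only that revocation is a single server-side write, with no need to locate the customer's machine or fight local tampering.

```python
# Hypothetical model of a vendor's user/licence store. In a real
# service this would be a database row, not an in-memory dict.
class Licence:
    def __init__(self):
        self.valid = True  # every new licence starts out valid


class User:
    _db = {}  # stand-in for the vendor's user database

    def __init__(self, name):
        self.name = name
        self.licence = Licence()
        User._db[name] = self

    @classmethod
    def find(cls, name):
        return cls._db[name]


# Sign-up happens once...
User("me")

# ...and cutting off access is a single write, done entirely
# on the vendor's side.
User.find("me").licence.valid = False

print(User.find("me").licence.valid)  # prints: False
```

Contrast this with locally installed software, where the vendor first has to identify the installation and then somehow reach into the machine, and where any check it plants can in principle be patched out by the user.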

License or not, source or not, if I have something installed on my computer, I can modify it or at least attempt to do so. I can also run it without a reliable network connection. Not so much for streaming services.

In most countries EULAs are void.

Yeah, how does that work? In the US, any terms made after money is tendered are void; this was done to prevent someone from adding terms to a contract on delivery. So is it true for software too? What about electronic downloads?

In most European countries yes, as you can only read such licenses after buying the product.

For the EULA to be enforceable, it needs to be made available at the store in such a way that the customer can read it before buying.

Same thing for digital goods.

If you can only read the license after buying the product, it is not valid.

And this is the logical conclusion of the death of the 56K baud modem. There was a prophetic (if early) contest at Sun to 'imagine the future' and the winner got a SPARCstation. As a networking guy I tried to imagine what was going to change when the 'big yellow hose' (which is what 10Mb Ethernet looked like at the time) came right into your living room. And one of the things that changed was that you could work at 'home' like you did at work, which at the time meant most of the stuff on a beefy server with lots of CPU + storage and just the X windows on the local machine. (This worked fine with a lot less than 10Mb of bandwidth, of course, but it was conceptually a 'return to mainframes' pitch.)

The economics are pretty sweet, given the marginal cost of one additional subscriber to a cloud 'hosted' environment. Google Drive / Dropbox gets the storage requirement out of the way. Company X acquires/operates a small server farm connected with a generic 10G pipe to the 'web' (this is an off-the-shelf config at places like Switch in Las Vegas). Sign up a bunch of subscribers, with a 'free' tier to sop up excess compute and a paid tier for folks who care enough about response times to pay for them.

The dependency on the network has always been troubling, but once the network is like household 'power' (so many things one does already depend on the house having power available), making it a requirement becomes less and less onerous. Combined with things that make traveling with data annoying (like cross-border inspections), I can see this as the future of application-level computing for a lot of things.

Interesting that terminals are the future once again.

The power/electricity analogy here is what's really key - there will come a time when the ubiquity of being "online" will be the same as the ubiquity of "the power is on." It's so prevalent in modern society we really don't think about it.

It took me a while to realize that this is where Google is headed with ChromeOS/Chromebooks. Ultimately, "applications/apps" will not exist; there will just be the web, and all personal computing devices become thin clients. Want to use Photoshop? Great, go to //photoshop.adobe (or something); Office? //office.microsoft; play the latest COD? //callofduty.activision. [EDIT: also note the importance of search in this scenario...]

I think Google's bet is that just like CompUSA and other computer retail outlets have died, so too will "app stores" that sell digital equivalents of physical boxed apps. All apps will become services available openly on the web.

It's definitely exciting, though I realize it makes a lot of people scared and there are definitely security risks that need to be kept in mind. But overall I see it as another step in the democratization of access and knowledge. A few hundred years ago, only the privileged had access to resources enabling them to read and write and create things. A few years ago, only those with some serious cash could (legally) purchase a license to operate a copy of Adobe's products. Now, any kid with 100 bucks and a decent laptop can spend 3 months with the entire Adobe catalog available to them. Pretty soon, that "high-powered" hardware requirement will be gone and the market opens up even further and pricing will drop even more.

That's pretty awesome.

It's not awesome at all. It's the death of personal computing.

Want to use Photoshop? Pay a perpetual monthly fee forever, with no control over which version of the software you are using. Adobe jacked up the price this year for the new version despite a lack of new features? Tough, nothing you can do about it. It's either pay up or lose access. The new version is buggy? Tough luck.

Want to modify any of the software on your computer in any way or install any local software? That disables the trusted DRM and none of your remote apps will load anymore because your machine is no longer trusted. (We already have this on Chromebooks for DRM video).

Want to cancel your Microsoft Office subscription? Fine, now you've lost the ability to view any of your documents any more.

Want to continue using SuperAwesomeApp? Too bad, they just shut down the servers forever. Sorry, no refunds.

All your files hosted in the cloud and data mined for terrorist terms, for your safety of course.

All your files hosted in the cloud and data mined to build marketing profiles.

Every piece of software having the ultimate in user lock-in - total control over the users data.

I find it hard to imagine a more dystopian future for software.

Uh no.

You see the reason personal computing exists is because it was people who wanted this level of control and "their own computer" started using these cheezy "calculator chips" to build their own computers. What they could do was a lot less than what the mainframes/minicomputers/workstations could do but you could own your own.

All of those things are still true.

But there is a rub. Today you can go on some warez forum, get a cracked Adobe CSx suite, and run it on your machine at home. That option goes away. Your only option is to run GIMP, and if it doesn't have the features you need, then you get to write those features yourself because, well, it's open and you have the source code.

So the world will change, it always does, but the ability for engineers and hobbyists to have their own computers that do exactly what they want them to do, that won't change.

Surely you must understand that the intersection of people that want to run graphical manipulation software and have the time and expertise to work on the GIMP codebase is rather small?

Sure, the Steve Wozniaks of the world have always had the ability to write their own GIMP plugins or whatever, and you're right: that hasn't changed.

But there were a few decades there where the rest of the world could actually own commercial software, and that had a lot of upsides. Those days are drawing to a close.

(Yes, I know that EULAs attempted to "license" software to you instead of truly letting you own the software. But for all practical purposes, you did at least own it to the extent that it would continue to work even if the software vendor's servers went down, or they decided they didn't like you using their software, or your government decided they didn't like you using the software vendor's software, or whatever.)

Even so, in this scenario lots of people will discover that GIMP is good enough and so its popularity will rise, interoperability may become better and more people will take an interest in working on it.

Photoshop piracy was the worst thing that ever happened to the GIMP.

If your vision of how people can continue to own their data in the future relies on GIMP being improved to the point that's not unbearably painful to use, you're failing to reassure anyone.

Daily GIMP user here, pretty much the only thing you can't do is print CMYK.

...or use most classes of non-destructive adjustment layers. Or use many of the more desirable and time-saving (read: cost-effective, even when compared to "free as in beer") plugins. GIMP isn't useless by any means, but it's not a Photoshop replacement yet, any more than Corel's Paint Shop Pro is. (But it will be soon, right? It's been "going to be" for a very long time.)

The problem with GIMP isn't its featureset, but its interface. It's so foreign to anyone who works with graphics that whatever UI metaphors it uses fail to resonate with people, so they just end up confused.

What GIMP probably needs is an overhaul that copies what Photoshop does. People understand Photoshop, and it's the industry standard.

I've never understood the problem with the GIMP UI; what is it about it that is so foreign?

From what I can tell the UIs look fairly similar too: https://gtchan1.files.wordpress.com/2012/06/5-gui-photoshop.... http://www.gimpchat.com/files/196_2012-05-12_214936.jpg There are various widgets that contain e.g. tools, layers, history, fonts, tool options; and of course each tool might have its own options.

(Also, the team itself has said in various ways that they'll never have as their goal to be a "Photoshop clone"; as I understand it, that's an unreachable and thus depressing goal. It's a lot more motivating to simply try to make something that works well than to try to cater to every whim of those who don't want to pay for Photoshop.)

From what I can tell the UIs look fairly similar

They may look superficially similar, and at a high level have the same tools, but from a fundamental workflow point of view there are too many things that are just very different (non-destructive editing, smart objects, etc.).

the team itself have said in various ways that they'll never have as their goal to be a "Photoshop clone"

And that would be fine if they'd done something unique or innovative with the UI and really made it their own. Unfortunately they didn't, and they're stuck looking kind of like a half-baked Photoshop clone. And as long as that is kind of what they look like, then that is how they'll be judged.

> they're stuck looking kind of like a half-baked Photoshop clone

So does the GIMP look like Photoshop, or does it not?

It looks like Photoshop at a glance, but doesn't really behave like Photoshop once you get close.

It's so foreign to anyone

It's foreign to Photoshop users; GIMP is just different. I really don't like open-source software that tries to copy some proprietary program; it tends to copy the other software's mistakes too. Maybe GIMP doesn't have all the features Photoshop has, but copying Photoshop is not the way to go.

Not sure if you know or not, but GIMPshop (GIMP with the Photoshop interface) has been around for a long, long time:


Still not sure why Adobe hasn't gone after the Linux market. They could completely dominate.

Photoshop's interface is awful, and the reason you like it is because you're used to it. GIMP should spend absolutely no time in imitating its mistakes.

Of course, we're offering zero evidence that either position is true, just bald-assed assertions. That I vastly prefer multi-window is my only evidence.

>People understand Photoshop, and it's the industry standard.

They've been trained to. It's as intuitive to you as the way home from work. It's as intuitive to me as the way to your house from your job.

> That I vastly prefer multi-window is my only evidence.

That's ironic, given that the latest Gimp (2.8) defaults to a single-window interface. I'm having a hard time adjusting, but this doesn't mean it was a bad idea.

Agreed, GIMP is a horrendous piece of software. I know it is extremely powerful (for example, combined with G'MIC), but the fact that you can't do the most simple transformations (rescale, move, stretch and rotate) on the freehand tool says a lot about its usability.

Inkscape on the other hand really is a viable alternative to Illustrator.

Every time I try Inkscape it has been a big disappointment. It doesn't come close to old versions of Freehand, not even speaking of current Illustrator. Even indie devs like http://bohemiancoding.com/sketch/ do a lot better job at illustrating.

"Surely you must understand that the intersection of people that want to run graphical manipulation software and have the time and expertise to work on the GIMP codebase is rather small?"

Of course I do, and those people will be faced with an uncomfortable choice, suffer the 'free' tool or pay for access to the 'non free' tool. And the people who run the 'non free' tool will use that money to pay engineers to improve it. And the engineers working on the 'free' tool will do so without remuneration or reward other than knowing they built something they like.

> get a cracked Adobe CSx

- What about when Adobe makes a change when going CS2 => CS3 that I don't like, or that is detrimental to me? If I 'own a copy' I can just continue using it. With 'CS in the cloud' I have to upgrade because that's what Adobe wants.

- To step outside of software for a second, what happens when (e.g.) George Lucas decides that the "special edition" version of Star Wars is the 'real one' and replaces all cloud copies of Star Wars with the special edition version?

> To step outside of software for a second, what happens when (e.g.) George Lucas decides that the "special edition" version of Star Wars is the 'real one' and replaces all cloud copies of Star Wars with the special edition version?

Yes, this is huge. The cloud is making our culture ephemeral.

Just yesterday I was playing Escape Velocity Nova, a video game from back in 2002. It's an excellent game, and since its initial release, people have made tons of mods for it. I heard that EVE Online pretty much copied its story.

I'm pretty sure that if this game had been made in a "cloud version", as a service and not a downloadable product, I wouldn't be able to play it today. I couldn't enjoy the story, I couldn't live through the experiences that influenced gaming culture ten years ago. It wouldn't make any economic sense for the current copyright owners to maintain such abandonware. It would just disappear into oblivion, like most SaaS startups do after a year.

With 'CS in the cloud' I have to upgrade because that's what Adobe wants.

Not that I'm a supporter of Adobe's cloud strategy, but as of the past six months or so you have a choice between using the current version and the previous version. We'll see when the next version comes out whether they'll only support two versions or let you choose any previous cloud-supported version.

There are people still running PS6 because it suits their needs. That can't happen in the cloud. Well, it can, but Adobe is unlikely to enable this to happen.

Sure, but you then end up with two entirely divergent software ecosystems.

As it stands two people, one of whom uses Linux/LibreOffice/GIMP and another who uses Windows/MS Office/Photoshop are able (bar some possible formatting issues) to share data and collaborate together because they can share files via dropbox or USB dongles.

Once Photoshop and Office move entirely to the cloud there will no longer necessarily be "files" to share.

That is true, and to the extent that those cloud applications don't use an open interchange standard you won't be able to share documents with them. Having lived through this from before[1], it isn't as dire as you might imagine. What happened last time is that it forced a lot of interchange formats. Will be interesting to see if it does that again. All of the market forces are still there, person A wants to send a document to person B, etc. Of course everyone can send HTML or RTF documents like they do today.

[1] Intel used MultiMate for its documents (as did a bunch of lawyers) while nearly everybody in the tech space was using Interleaf. Xerox was trying to be the paperless glue for the middle. We got PostScript out of it, which was an interesting compromise.

The difference here is that you actually had a copy of the data. If the vendor (e.g. Microsoft) decides to lock you in, they will keep the data in the cloud, with their app being the only access point. Then, locally you will only have a copy of "application state" at any point in time.

Not totally divergent - those of us who are able to maintain our own PCs will have the best of both worlds. We'll be able to use either our own local apps, or the cloud ones.

Most people will prefer the managed computing experience provided by the cloud, and that's a good thing. I've had to ask "so did you keep backups?" with a sinking heart, and heard "My computer's running really slow - can you have a quick look?" too many times.

>We'll be able to use either our own local apps, or the cloud ones.

Not necessarily. Chromebooks are already locked down so that any modification of the software on them disables DRM video support. It's only a small leap from video DRM to software DRM. This will probably be sold to users on the basis of security: we won't let your machine log into your Photoshop account to steal/delete your photos if it has been compromised.

Web intents would allow new competitors to open existing docs.

In a way controlled by yet someone else.

Contrast with files, which I own, can open in anything I want and that I can reverse-engineer in order to write a custom app for them.

I realize that I'm going to be a bit pedantic here, and it is not an argument I would support, but your statement "Contrast with files, which I own ..." is only true in a limited sense. If you have a book, in a file, which is stored on your device for your Kindle app, Amazon will tell you that you don't own it; what you "own" is a right to view it in the Kindle app for as long as you maintain your side of the agreement and Amazon doesn't feel like using one of their escape clauses.

Now I completely get how crazy that makes people, who argue "I paid money, I got this thing, I own it." but the only reason that it is in a "file" today is because Amazon hasn't figured out how to give you access to it without giving you a file. Nothing in the Adobe announcement changes anything, except that it provides for them, what they consider to be a better implementation of their rights management paradigm. Again, I don't condone or support it, but it's important to note that their thinking hasn't changed, only their implementation has. This also helps them to avoid you reverse engineering their files which they don't like because it allows you to circumvent their rights management tools. Again, I don't condone or support, just report.

So what may happen here is that something which has some value to a person, will be put outside their reach behind a price they are unwilling to pay. I understand that this situation sucks, but on the positive side it adds energy to the 'free' side of the equation because if there really is value there, it is only extractable if stealing is less easy than paying, and paying is at a market price that can support the energy of making it available. That price may be in programmer hours, not necessarily in dollars.

> the only reason that it is in a "file" today is because Amazon hasn't figured out how to give you access to it without giving you a file. (...) but it's important to note that their thinking hasn't changed, only their implementation has

Yes, I understand that and I agree with you here. I'm not complaining that they changed their thinking; I'm saying that their thinking sucks (for the user), and that the more they improve their implementation details, the worse-off we (users) are.

> So what may happen here is that something which has some value to a person, will be put outside their reach behind a price they are unwilling to pay. I understand that this situation sucks (...)

This suckiness is what I'm complaining about.

I need to think about it more to come up with a coherent view of that problem. However, let me share my current perspective.

I grew up in a world where software was owned by me and effectively free. What I needed but couldn't afford I could crack if I cared. Most of the time I didn't care, or there were better free tools. But sometimes it mattered. I learned my graphics skills as a kid on cracked Corel Draw and Photoshop (in the end I switched to Paint.NET + GIMP + Inkscape combo, as I don't want to publish - even for free - things done on "stolen" software, but all those tools are inferior compared to paid ones). I learned my Office skills as a kid on pirated MS Office.

I started programming around 13 years ago. Programming tools were already mostly free at that time (thank you Microsoft for MSVC++2003 Toolkit, though I loved my pirated Visual C++ 6.0). But it's not about piracy, it's about access. I learned how to code because I wanted to make games, and my primary inspiration and motivation throughout the teenage years was the ability to dig in and tweak various games. I knew my way around StarCraft binaries. Hell, my first serious application of Assembly was patching SC using StarGraft. I read UnrealScript files extracted from Unreal Tournament games. I hex-edited saves, tweaked data files, poked and twisted many games. All of this was possible because I owned the data. That is, the files were there, on my hard drive, unprotected. I built my whole career and half of my life on top of that.

To quote pg[0], "It is by poking about inside current technology that hackers get ideas for the next generation. No thanks, intellectual homeowners may say, we don't need any outside help. But they're wrong. The next generation of computer technology has often—perhaps more often than not—been developed by outsiders.".

What I'm really afraid of is that the next generation, the generation of my children, will not be able to poke inside anything, because everything will be accessed remotely. In order to learn and grow I didn't need a credit card when I was 13, but I fear the next generation will not have that luxury.

TL; DR: think of the children.

[0] - http://www.paulgraham.com/gba.html

It's so much easier to poke about inside a web app.

Yeah right. Especially one with a backend.

You control what you open stuff in with web intents. The site the document is stored on doesn't.

Say Google Drive supports web intents:

If you have a doc on Google Drive, and you install an intent for a Google competitor word processor in your Google drive, you can open it, edit it on that service, and save it back to Google drive.

It is certainly true that most users will prioritize convenience over privacy and control.

It is still true that in the environment we are moving towards, those consumer choices are leading to a situation where _we don't control our own workstations_. Everything you do will be at the forbearance, and under the observation, of a corporation you pay.

You can say that that's the natural free market consequence of consumer preferences combined with current technology all you want. That is still a scary situation for us as a society, where more and more of our lives involve interacting with software, and we have less and less control over that software.

Your first paragraph is spot-on: People wanted and got control of their own computer.

However this is clearly changing now:

As data, and now functionality, keeps moving towards the cloud, quality goes up again (a polished app running on all kinds of devices), but there's little chance to tinker, to do it your own way.

Or, where would you say Stallman's 4 Freedoms (https://en.wikipedia.org/wiki/The_Free_Software_Definition) fit in these days?

Why do you feel entitled to use proprietary software with no restrictions?

Yes, Adobe and Microsoft are going to charge you rent. So what? Just use free (as in freedom) software instead. Adobe and Microsoft were never your friends, and if you feel bad about this recent development I don't know why you were using their software in the first place. They were always looking for a way to do this to you, all that's changed is that they've figured out how.

> Why do you feel entitled to use proprietary software with no restrictions?

Because without laws regulating the sale of products to private citizens, we'd have chaos? So we need some protections to ensure the rights of users? Like the rights to re-sell used software (or media) etc.

Now, that said, without such reasonable rules in place, levelling the playing field, it's hard to fault companies for trying to give as little and charge as much as they can get away with.

If you buy a hammer, you have a lot of reasonable consumer protection. If you buy a laptop many of the same rules apply -- why shouldn't you have similar protections regarding software?

A hammer isn't going to cease working when the company that sold it goes out of business (or is bought up, or refocuses on a different market) -- why should your office suite? I'm not saying one should be guaranteed upgrades in perpetuity -- but the ability to install the same software on a reasonably similar configuration should be a no-brainer, really. Or be able to run it under an emulator.

But again, these things really needs to be legislated as consumer protection, so some company can't get away with not providing some of these freedoms.

>I don't know why you were using their software in the first place

Who says I'm concerned about myself here? Maybe I'm concerned about other people, especially those with less technical understanding who don't fully appreciate what it means for all their files to be locked away on a cloud server.

>They were always looking for a way to do this to you

So we can't complain when bad people do bad things now? Also, I was replying to a post that was celebrating this as the future of all software (and democracy and human knowledge in general it seemed).

  > Just use free (as in freedom) software instead.
If the last several decades have shown us anything, it's that the free software movement is excellent at creating some kinds of software (operating systems, development tools, servers) and generally pretty bad at many other kinds of software (games, content creation software, etc.).

The reason for this is that those kinds of software require lots of money to pay developers to work on them full time, because they aren't straightforward applications of the sort of thing you'd get in an engineering/computer science education.

Recent years have suggested that "software as a service" is the only way to get this money due to rampant piracy. If the free software community doesn't like this state of affairs, they need to step up their game.

Completely agree. Although the open source movement has plenty of great engineers and is great at solving complex software problems, it doesn't tend to have everything else that great software companies have.

UI designers, customer researchers, graphic artists, strategy teams (to set an overall vision), etc. tend to be under-represented compared to other software companies.

I also think that open source software tends to be less innovative, as 'design by committee' isn't something that is followed inside for-profit companies with closed software.

Wait, which of those two (kernels/compilers/etc. or games/content/etc) are you suggesting have developers who work for free?

Free (as in freedom) software developers often write software that serves some purpose for themselves. That is, there are any number of people writing GPL software for...software development and similar things.

Consider the grandparent post again:

> If the last several decades have shown us anything, it's that the free software movement is excellent at creating some kinds of software (operating systems, development tools, servers) and generally pretty bad at many other kinds of software (games, content creation software, etc.).

This seems to suggest that, if the last several decades have shown us anything, free (as in freedom) software developers are more likely to be willing to write software used for software development, and not very likely to write high quality games or content creation software like Photoshop.

It's well known that those in the free software community are often not paid very well, if at all. Consider the problems OpenSSL has had getting funding, despite millions of people all over the world using it every day.

The people who _are_ writing Photoshop are very much in it for the money, and are taking every step they can to ensure a steady income stream. Look at how Adobe is now trying to rent people Photoshop instead of giving them their own copy.

Speaking out against something you don't like or think will have a bad effect on something you hold dear IS NOTHING LIKE being entitled.

> They were always looking for a way to do this to you, all that's changed is that they've figured out how.

And that's why things start to suck. What they were "always looking to do" to us is bad, but we had a period of happiness when they couldn't effectively implement their vision.

OK those are all valid downsides, but here's a potential upside:

A software company will make more profit selling its software as a service than a single-fee product. Therefore consumers will find software developed for them in markets which were previously not viable.

To take a personal example: I develop traditional desktop accounting software for the UK market. There is absolutely no way that I could produce a version for India due to piracy. I also cannot produce a version for Ireland because I simply would not get a good return on my investment. But if I provide my accounting software as a service I can capture more of the value my software generates for my users. This would mean people in India and Ireland getting more accounting software choices.

Thank you, I had this dynamic of cost-to-market in mind and you provided the perfect example.

I would see the propagation of the SaaS model as advantageous to competition, not detrimental, and thus beneficial to all who consume software, as it will reduce the cost to market and make profit models actually deterministic.

I work in a company whose products deal in structural design for construction. The energies put into retail, licencing, fighting piracy etc. are very off-putting for anyone thinking of starting a software business, but they seem to be an essential part of selling expensive desktop software. Also, the de-facto standard licencing server solution (I'm looking at you, FlexLM) is ... not so good.

I actually only now start to believe there could emerge viable competitors to Adobe in publishing/graphics design, Microsoft in Office and Autocad in 3D design. The age of the dinosaurs is finally coming to an end. Yay!

To the average consumer it does not matter whether the software lives in the providers server or in the local substrate. It's the added value, usability and the trust in the software that matters.

Adobe's Creative Cloud pricing pisses you off? Fine, start up your own desktop publishing software company, implement, say, 20% of the features you need and expand. Of course, a race to the bottom is never good, but a product that implements say 10% of the Creative Cloud feature set does not need to be as expensive - it's in a totally different segment - and once it has a user base and traction the company can grow, implement new features and products. When this virtualization-as-a-service really kicks in, it will actually be just as realistic an option to start a desktop-equivalent product as to make a web app, as the dynamics of propagation _will_be_the_same_ as for any web service. Good products will propagate and creativity will flourish.

A huge proportion of the cost of desktop software development goes into licencing, fighting bugs in different user desktop configurations, and other non-value-adding, labour-intensive things that you need to do to provide a usable desktop product, and which are mostly about platform fragmentation. Suddenly you can develop software on top of _only_one_platform_ that will be more or less _stable_ in terms of your core functionality.

People, this is not a catastrophe and return to the age of the mainframe behemoths, this will be a new renaissance (... just as long as the pricing for the core technologies will be such that they do not lock out small time players... fingers crossed).

The only downside here is software patents, which probably could and will be used by large companies to fight scary small competitors who will not sell, but I'm optimistic their time will go away... at some point.

I dunno... all of this sounds like fearmongering and some of these things are perfectly analogous to the way stuff already works.

For example: "Want to continue using SuperAwesomeApp? Too bad..." Want to continue using TwitPic? Too bad. Obsolescence happens, companies shut down, products go off the market and cease to be supported every day. This isn't new. If there's enough demand for an alternative, it will happen. Where there's a market, there will be product.

Or: "It's either pay up or lose access." When I stop paying my gym membership I lose access. Nobody complains that "it's either pay up or get fat." Photoshop used to be a product, now it's a service. This isn't an "injustice", it's normal business.

Data safety in the cloud though is a reasonable concern. You don't know what's going on with your data on remote servers. But then, most people don't know what's going on with their data on their own machines. We've found ways to implement safety and security on our local machines (monitor for malware etc...), I imagine we will eventually find ways to ensure it on remote machines as well. Maybe.

> Photoshop used to be a product, now it's a service. This isn't an "injustice", it's normal business.

Well, it isn't injustice. But it doesn't change the fact that this sucks, very badly.

The thing about personal computing was that when a company goes out of business, the software they made still remains on machines. It can be 3rd-party-patched. File formats can be reverse-engineered, so that you can use your data with a new program. Software can be cracked, so if a company tries to make you pay too much it is the company, not users, that is going to have a bad day.

That whole move towards the cloud serves companies well, but sucks for the user. You don't get to tinker anymore. You don't get to do things your way anymore. Technologically, we are literally being enslaved.

A small silver lining may exist in a combination of open isolation platforms like Qubes OS / Genode and "cloudlets", http://elijah.cs.cmu.edu, http://qubes-os.org, http://genode.org

Latency / Speed of light is still a major factor in human-quality UX, even with GPU virtualization. Qubes/Genode can isolate "DRM zones" from "freedom zones" on the personal computer. DRM zones can run Adobe / Microsoft / Google / Netflix cloudlets and algos, with local caches that are DRM protected. User benefits from improved UX.

Freedom zones can run open software that secures user data and requires the cloudlets to use network-isolated sandboxes when they need access to user data. If the user wants to sell/lease their data to a cloudlet, then an audited copy is allowed to leave the local PC, moving to the public cloud.

Many permutations are possible in a multi-tenant client architecture. The key is to support the coexistence of open and closed enclaves on the same local device (a home server/bridge), isolated by hardware-assisted security that has an _open_ architecture TCB and root of trust.

I think it's great. I'm paying about $30 more per year for Photoshop and Lightroom combined than I was paying for Lightroom by itself before. Plus it means that I now get to use a 64-bit version of Photoshop. It's a huge win for me without any reservations.

>Want to continue using TwitPic? Too bad.

That is just an illustration of my point really…

The difference is that if the company that makes my local image editing software shuts down tomorrow, I still have all my files and I can still run the software. Eventually I'll have to find a replacement, but I've got a reasonable window of time to do that, and no corporation blackmailing me (and who knows what happens to the data held by defunct companies…).

>This isn't an "injustice" it's normal business.

Business can be, and often is, unethical. Somebody who argues that "we changed the EULA and now you have to pay a monthly fee or we delete all your data" isn't an injustice is pretty warped in my view. I've come to expect such psychopathy from business types though.

> Obsolescence happens, companies shut down, products go off the market and cease to be supported every day. This isn't new.

Yet I can (and do) still run TextMate 1, OmniFocus 1, LineForm, NewsFire, Office:mac 2011 and Photoshop CS3 on my Mac. Even when it comes to the "cloud", I can still use Skype 4.x on my iPhone which doesn't force a shitty Windows Mobile interface down my throat. It's not that I mind paying for subscriptions, I just don't want my tools to be taken away while I am trying to get work done with them.

Not quite the same as a gym membership, however. Many, many designers use Adobe products to create original work. The prospect of being forced to continue to pay for access to your own created works feels a bit like a strong-arm tactic... and that's honestly how many large and small agencies are beginning to feel. It's not easy to just switch to another product, since a lot of proprietary aspects of the Adobe universe make it nigh impossible to open or access your work in another application.

I haven't used Photoshop (and will never subscribe to any Adobe software) in over a year because there are plenty of alternatives that meet my needs, and web/graphics design is my livelihood.

Between Sketch, Pixelmator, and just designing in CSS/HTML I do not need Photoshop.

I haven't been hampered by DRM, subscriptions, or anything else with these apps/technologies and that is because personal computing is alive and well.

> I find it hard to imagine a more dystopian future for software.

Really? The dystopian parts of that vision are already here: laws protecting software as intellectual property and laws prohibiting the circumvention of DRM. The other parts are almost certainly better for the vast majority of users.

Kids in the first world having access to mobile broadband at speeds and prices that make Photoshop-as-a-service feasible on their chromebooks is explicitly not awesome unless you're one of those privileged kids.

This is a "rich get richer" situation that is developing.

The problem with the centralized model is that it rather fucks over even people in developing or under-developed countries, and also places like, say, South Africa where broadband speeds are internally decent but are notoriously slow to the rest of the planet.

And not all of us have super-reliable constantly-available mobile broadband.

Here in Aberdeen, when I walk down the street, it varies from HSDPA to GPRS. There's occasionally University WiFi, but that only helps if I'm near a University building. This access is far from constant or consistently speedy.

I bet this argument was used in the early days of dial-up modems and cell phones as well.

History proved that these things get cheaper and become accessible to the masses.

I don't see how lowering the barrier to entry (access to tech) has any impact on the "rich getting richer".

If I understand na85 correctly, they are suggesting that the terminal model of Chromebooks requires decent internet and thus increases the barrier to entry. I agree with you that Chromebooks lower the barrier to entry because hardware cost is such a big factor (and many things still work offline).

And there are other barriers (I actually think the bandwidth problem is solvable, by caching large parts of the program). Before SaaS, a poor kid from Egypt could (although it's illegal) get a pirated copy of Photoshop, Office or Visual Studio and improve their skills with modest means. Now they need software subscriptions that they can't afford.

I can see two things happening: it increases the popularity of FLOSS software in such countries and/or companies will adjust the monthly subscription fee to be proportional to e.g. the median income.

Anyway, I think things can be said for both models. Regardless of my worries about SaaS, I also like always having the latest version, not having to install and maintain programs, etc.

Do you not understand analogies or something?

ok... so kids in the first world have mobile broadband and can sit on a bench in central park and photoshop away on their fancy chromebook pixel.

Meanwhile, kids in Egypt who don't have mobile broadband sit at a desk in an internet cafe and photoshop away on their cheap chromebook.

I don't see what the problem is?

The internet cafe would still be slow, and not being able to work from home is worse than their current situation.

"There is no “Cloud”: There are only Other People’s Hard Drives."

-- http://www.loper-os.org/?p=44

Having no local state is a wonderful vision. I see why that makes you excited, but "the cloud" is the death of programmatic computing.

What we should move towards is personal servers, either physical machines or (for 98% of the population) some kind of "gmail for servers" like https://sandstorm.io/.

> Ultimately, "applications/apps" will not exist, there will just be the web

Except this isn't the web at all, this is some closed Google-only internet. This isn't the death of personal computing, this is the death of the web.

It should be pretty clear by now to most techies that Chrome* is the neoMSIE.

We should stop applauding these complete violations of web standards. This is the opposite of awesome.

The direction the web is heading these days literally makes me sad: less open, less standards-based, less cross-platform. Everything which made the web a good thing in the first place is being driven away.

How is this "Google-only", and how are they completely violating web standards?

Also, what exactly makes Chrome the "neoMSIE"?

Here's the problem with the power analogy: While you can replace power with batteries to truly get it everywhere, being online truly everywhere with a high enough bandwidth to work (consistent >= 3 MBit) is still a long way out. Overbooked base stations, airplanes, train lines with tunnels, inconsistent coverage - these are all hard problems to solve and it will take at least another 10 years to get there. Until then, I hope that your vision doesn't become true too soon, because it would mean a loss of mobility compared to what we have now.

I am not sure it is that awesome. You get a closed platform that only runs what hosting companies decide is worth supporting. What is this going to do for niche open source apps?

The dependency on the network has always been troubling, but once the network is like household 'power' (so many things one does already depend on the house having power available), making it a requirement becomes less and less onerous.

Well, the other troubling aspect of getting all your software from the network is that you don't own anything. Adobe can end your access to the software if they just decide to. If you have paid, you might be able to sue them for the software back or you might get damages but neither will help if you have a job on a tight deadline.

This optimistic outlook ignores the pesky reality of latency. Using VNC, X11, or an equivalent protocol over a connection to a data center hundreds or thousands of miles away isn't the same as an X terminal connected to a server in the same office building.

While I entirely agree and preach latency often, there are two major points I would like to make:

1) Remote computing doesn't have to use X11. The local machine can do some of the lifting. Wolfram Alpha on mobile phones is a good example of this: a server does the crunching, but the client side isn't a dumb terminal either.

2) I cross my fingers that one day latency will be prioritized more... IMO the majority of latency problems are not caused by the limits of physics, but rather the simple fact that not many people care about it.

The theoretical minimum ping time across the USA and back is something like 20ms, an entirely reasonable figure for remote computing - if only it were ever anywhere close to realized.
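A quick back-of-envelope check on that figure (assuming a ~4,000 km coast-to-coast path and light in optical fiber at roughly two-thirds of c; both numbers are rough assumptions) suggests 20 ms is about the one-way fiber time, with the physical floor for a round trip closer to 40 ms:

```python
# Propagation-delay sketch; the distance and fiber speed are assumptions.
DISTANCE_KM = 4_000        # rough NYC-to-LA path length
FIBER_KM_PER_S = 200_000   # signal speed in fiber, ~2/3 the speed of light

one_way_ms = DISTANCE_KM / FIBER_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# → one-way: 20 ms, round trip: 40 ms (before any queueing or routing delay)
```

Real paths add routing detours, serialization, and queueing on top of that floor, which is why observed pings are well above it.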

> IMO the majority of latency problems are not caused by the limits of physics

If you look at network QoS research in the past 20 years, you'll see that a lot of people do care. It's just not an easy problem to solve.

And with modern browsers and AJAX we already have clients in place that can deal with latency better than X11 servers.

True, but isn't the real solution to remove the latency in the first place and just run it on your local machine without a network connection, like the ordinary desktop software we have at the moment? The terminal approach seems to be attempting to fix a problem that has already been fixed: just run it locally.

It's not really the same in this case: it updates in the background, and stuff runs locally using NaCl.


Claiming that "terminals are the future once again" is hardly controversial. How much (time/money/mind share) is invested in non-networked, desktop-style apps? Some, yes, but "everyone" seems to be (continuing) the push for networked apps (MS Office and Exchange/Sharepoint, Google (everything), Adobe/Macromedia with Creative Cloud, Apple iCloud ...). Mozilla is doing a cloud-push too, but has thankfully so far had a very strong self-host, do-what-you-please story. Not sure about Firefox OS, though -- will we be able to (easily) host our own app store? Back-end services? At least for now the things I use (Firefox Sync) are easy and feasible to self-host.

The one thing I could see giving a resurgence to "off-line" for mainstream use, is latency -- more concretely -- the low latency tolerance of VR. But then again, as long as you can stream enough data/logic to the "terminal" -- even that shouldn't be a problem.

I do hope the industry can help shift things towards locally cacheable/locally hostable solutions, though. I'd much rather run some "cloud" office suite on my server, with the option of taking it with me off-line from time to time (and then perhaps sync up whenever I'm on-line) -- than having my data on a third-party server. Or servers.

I'm afraid the more likely future, is a future where a majority of users are locked-in to the whims of mega-corps -- and the scariest part of that, is that those that are leading in providing "cloud" services actually have a business model that centres around (ab)using the data, not charging for the services they provide.

Networked apps running on the desktop != Terminals

True enough. But if every one of those apps is just a poorly constructed X server equivalent (nothing going on locally if the connection drops) ... the difference seems a little semantic?

Yeah, but usually that is not the case with proper desktop applications.

Not yet, it isn't.

And it will never be.

The problem with this formulation of the future is limiting cycles for the free tier sopping up resources, as you say. How does that work exactly? Unless your implementation is an interpreted language, I can't imagine a solution.

There is actually a pretty interesting solution in production today: you can use Vagrant and Docker to implement a containerized service system where 'priority' containers get resources in preference to non-priority ones. From the perspective of mainframe computing this problem was solved pretty much as a minimum viable feature (at the time, mainframes had hard limits on things like getting billing jobs done before 6AM, and other batch jobs got 'best effort' sorts of treatment). We have so many more tools now than they did, and hardware support too.

Good points. Supposing you're referring to setrlimit/ulimit syscalls for example, do those apply per thread or per process? My understanding is they're per process. Then the math comes down to how many user processes per remote machine can the service run? A few hundred? That's a hard limit on the scalability of the service. Is process really the ideal level of isolation?
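On the per-thread vs. per-process question: POSIX rlimits apply to the whole process and are shared by all of its threads (and inherited by child processes), which can be checked quickly in Python on Linux/macOS:

```python
# rlimits are per-process, shared by all threads -- a quick demonstration.
import resource
import threading

orig_soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Lower the soft open-files limit from the main thread.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(64, orig_soft), hard))

seen = []
t = threading.Thread(
    target=lambda: seen.append(resource.getrlimit(resource.RLIMIT_NOFILE)[0]))
t.start()
t.join()

print(seen[0])  # the lowered limit is visible from the other thread
resource.setrlimit(resource.RLIMIT_NOFILE, (orig_soft, hard))  # restore
```

So the isolation unit really is the process, and the scalability question becomes how many sandboxed user processes a machine can host.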

Sun had a specific phrase for these workstations. It was quite geeky to carry one of those id cards embedded with a chip. Practically everyone outside Sun made a mockery of these cards.

I telework two days a week. I VPN into work, then RDP into a Windows server and start up Eclipse. It works.

Did you win?

No sadly. The winning entry involved video conferencing. (which also came to pass but not in quite the way that person envisioned it, I still think there is a chance we'll see it in games).

For those wondering how the "streaming" version of Photoshop is implemented:

Project Photoshop Streaming is identical to the Photoshop you’d install locally with a few notable exceptions. This build can be accessed from any Chrome browser (Windows only) or Chromebook and does not require a full download and install. In other words, this is the same build of Photoshop you’d typically download and install from Creative Cloud, however, instead of being installed on your local machine, it is running in a virtualized environment so can be accessed from any Chrome browser or Chromebook. Because this version of Photoshop is running in a virtualized environment, you open, save, export and recover files from/to your Google Drive rather than your local file share. Also this Beta version of the virtualized environment does not have support for GPU consequently GPU dependent features are not yet available (coming soon). This build also does not yet support for print.


I'm pretty excited to see how utterly wretched the performance will be!

If you can play an FPS remotely (and I have - and it works) then I think this is a walk in the park.

Maybe if you're playing a 30FPS "cinematic experience."

Meanwhile, the hardcore PC FPS gamers with 120hz+ monitors running the game at 2x displayable framerate can't even tolerate the latency from enabling vsync.

Yeah, low-latency streaming using x264 was solved in 2010. http://x264dev.multimedia.cx/archives/249
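For flavor, the low-latency techniques that post describes (intra refresh instead of keyframes, capped slice sizes, zero-latency tuning) map onto stock ffmpeg/libx264 flags. A hedged sketch, where the capture source and UDP destination are placeholders:

```shell
# Capture a display and stream it with x264's low-latency settings.
# :0.0 and udp://client.example:1234 are placeholder source/destination.
ffmpeg -f x11grab -framerate 60 -i :0.0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -x264opts intra-refresh=1:slice-max-size=1500 \
  -f mpegts udp://client.example:1234
```

Intra refresh spreads I-frame bits across many frames so no single frame stalls the pipe, and small slices keep each packet independently decodable.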

Yeah, all those people who bought chromebooks for their bleeding edge performance will be really upset.

Yep. Absolutely wretched: http://shield.nvidia.com/play-pc-games/

Local home wifi has latencies on the order of a few milliseconds. Not to mention the bandwidth is also typically MUCH higher than the bandwidth from Adobe's (or anyone else's) server farms to your PC. Please try and understand the problem we're talking about before posting irrelevant links.

This sort of tech is showing up in various places, and cloud appears to be a big market for it. OnLive had quite a bit of technical success with it, though gaming probably wasn't the best market. I know what I'm talking about. Comparing this to 25-year-old technology like VNC is a bit silly.

Do you think that works well over airport or coffee-shop wifi?

This is stupid. Latency (which can't be reduced below the speed of light without circumventing the laws of physics) is incredibly detrimental to drawing and digital painting, two of Photoshop's most popular use cases. A frame or two of lag really hurts responsiveness (a frequent concern when using a large brush on a slow PC), and network latency to nearby servers in the US starts at around 30ms and only gets worse. Client side prediction helps a lot in video games, but predicting a graphics editor basically entails having the entire editor, at which point the "cloud" is doing nothing but storing your files.

I will never understand why people are so obsessed with the extreme of the thin client ideal. They are a good choice in a world where the network is fast and low latency, while client devices are underpowered and expensive (VT100s in a lab with minicomputer in the next room). Meanwhile, we've lived for at least the past two decades in a world where the network (on the cellular and home broadband ends) is slow and high latency but our client devices are incredibly powerful and cheap. The period of time where this makes any sense for anyone except for proprietary software vendors that want to close off any possibility of pirating their products has long since passed.

The appeal of "compute clusters" for most "power user" tasks especially diminishes when you realize that shitty off-the-shelf PCs from 10+ years ago were perfectly capable of running programs that did most of the same stuff as their modern versions. New functionality has been added, of course, but most of the increase in resource requirements came from selfish programming by generations of programmers that never learned how to optimize. There's no reason that a Chromebook with a cheapo ARM or low-end Intel SoC shouldn't be able to natively run a better-optimized graphics program like Paint Tool Sai with CPU time/battery to spare.
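To put rough numbers on the "frame or two of lag" point: at 60 Hz, even an optimistic network round trip plus codec overhead puts a brush stroke several frames behind the stylus. A sketch where the RTT and codec figures are illustrative assumptions, not measurements:

```python
# Latency budget for a remote brush stroke; all figures are illustrative.
FRAME_MS = 1000 / 60     # one frame at 60 Hz, ~16.7 ms
NETWORK_RTT_MS = 30      # optimistic round trip to a nearby data center
CODEC_MS = 20            # assumed encode + decode overhead

total_ms = NETWORK_RTT_MS + CODEC_MS
frames_behind = total_ms / FRAME_MS
print(f"{total_ms} ms total, about {frames_behind:.1f} frames of added lag")
# → 50 ms total, about 3.0 frames of added lag
```

That lag comes on top of whatever latency the editor itself already has, which is why large-brush responsiveness is the first thing to suffer.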

Agreed. But only if that's how it's implemented (essentially VNC to a centralized app). If it runs code locally, then latency should be fine. I've not seen any details on what they're actually providing, so it's hard to tell what they're offering. I'd assume it's a continuation of their "creative cloud" offering, which as far as I know caches the application code (and assets) locally?

The article and comments ITT suggest that it is not like the existing "Creative Cloud", but instead works like VNC, or perhaps renders UI locally while doing all the actual image editing on a server. If it was all client code, there's no way it would run acceptably on cheapo Chromebooks (because Photoshop is a behemoth, mind, not because it's inherently impossible to edit graphics on a low-end device).

Am I the only one who thinks it's crazy that after all these years, Chromebook is the one that made Adobe port Photoshop to a Linux-based OS?

Photoshop used to run on IRIX ...

No thanks.

A glorified X-Windows/RDP/VNC/... terminal in form of a browser.

I love my cores, my GPU, my hard disk, ...

I'd rather stay on my beloved island, watching the ships sail away to a certain destiny of doom.

All your pictures are belong to us.

(From p. 117 of the EULA (http://wwwimages.adobe.com/content/dam/Adobe/en/legal/licens...): "All rights not expressly granted are reserved by Adobe and its suppliers.")

You're misunderstanding what that sentence means. In context:

3. Intellectual Property Ownership

The Software and any authorized copies that Customer makes are the intellectual property of and are owned by Adobe Systems Incorporated and its suppliers. The structure, organization, and source code of the Software are the valuable trade secrets and confidential information of Adobe Systems Incorporated and its suppliers. The Software is protected by law, including but not limited to the copyright laws of the United States and other countries, and by international treaty provisions. Except as expressly stated herein, this agreement does not grant Customer any intellectual property rights in the Software. All rights not expressly granted are reserved by Adobe and its suppliers.

The rights being referred to are rights that Adobe (and its suppliers) have in the software. Courts long ago decided that, absent specific wording to the contrary, implicit rights and obligations can be read into contracts. This type of specific wording tries to prevent that from happening.

I haven't read the rest of the document so I can't speak as to whether the EULA does indeed transfer/license your rights to Adobe, but if there is such a provision, this is not it.

Probably has something to do with the BLAST browser client that VMware is doing with chromebooks.


This is big news. Chrome OS now has the potential to destroy Windows.

It will soon run a large range of Android apps, and a good virtualization solution would mean that business applications could be written for Linux and run on any platform that runs Chrome.

If I was Microsoft I'd be very worried right now.

I'm not Microsoft, but as a user I'm worried. I certainly will not be supporting these kinds of anti-user lock-in SaaS platforms. I'm hoping users will ignore them, and they will fail.

I hope so, but I doubt this will happen, thanks to business users.

When you run a company, you don't want your graphics staff to tinker around with software and talk about freedom or whatever. You want them to get their job done, to make one graphic after another, to fulfil contracts. You need to pay for the software anyway, and as long as you have cashflow you can pay for the "services" (if you can't, you have bigger problems).

So, like always, companies will trade off freedom for convenience. The rest will have to follow, and the world will be worse off.

It's not clear to me why this supports only Chromebooks and Chrome on Windows. It seems like they would have had to go to the trouble to explicitly disable support for Chrome on Linux.

Are Chromebooks so different than the shipping version of Chrome for Linux?

> Streaming Photoshop can be run on any Chromebook or Chrome Browser.

It doesn't necessarily mean that they disable support for chrome on linux. It's possible that it can run on Chrome for Linux but Adobe won't necessarily provide support in the same way that they provide support for the Windows version.

That's probably the main reason. It's too bad though because it would be great to have it available on linux, even in an unsupported offering.

chances are that once it comes out someone will hack it/reverse it to "run" on a standard Linux setup

Probably just heavy collaboration with Google, and Google wanting to push Chromebooks.

ChromeOS is Gentoo-based Linux. I consider this a Photoshop port. :-)

Didn't Google show off Chrome extensions that were low-level like C code a few years back? If I remember correctly, they even showed off an in-browser image editor and video games; whatever happened with that?

NaCl is very much alive, much to my chagrin (I hate Google's pushing of proprietary stuff like NaCl, it's like Microsoft with ActiveX a decade or so ago).

However, I suspect Adobe are unwilling to port Photoshop to NaCl.

It has been integrated into Chrome, but hardly anyone uses it.

My understanding is that Adobe's previous cloud offerings download the programs to your machine, where they execute locally, saving files to the cloud. Is this how Photoshop works here? Or are they doing VNC/remote desktop style access?

I really hope it's the former - having virtual machines / LXC instances synchronized down to the Chromebook would solve those last few use-cases. (For me, my IDE is the only thing missing from a Chromebook and stopping me using it as my primary machine).

The current version of Creative Cloud is really just a different license; indeed you do still download (say) Photoshop, it just pings the Adobe servers to ensure that you have paid your monthly or annual licensing. To call it "the cloud" is really in name only, as it's no different than the Photoshop that you pay a one-time fee for and install from DVD.

This is totally different, as you suggest, it's a VNC-style access for Photoshop. (Since there is no Photoshop for Linux.)

Really the cloud offering in the name applies to their storage (20 GB as individual, 100 GB as a business account [1 TB should be standard at this point imo]), Typekit and Kuler. The applications, like you said are pretty much the same as always with full local installations.

But this suggests Adobe might move to remote execution SaaS at some point in the next few years.

I wouldn't be surprised if that's always been the ultimate goal of CC.

A lot of SaaS is waiting for faster broadband. CC SaaS will be limited at 10Mbps, but as speeds creep up to 100Mbps and beyond it's going to start making more sense.

I think the rent-your-cycles model sucks for users, and is against everything personal computing was supposed to do.

But it's a no-brainer for corporates with a captive audience for life, zero piracy, and easy surveillance.

Yeah it's not a change I'm excited for if it does materialize.

So, it is the latter you prefer? -- VNC/remote desktop

No, I want the virtual machine to run on my Chromebook. And it sounds like that is what they're doing!

No, they specifically say that if your internet connection is terminated you will need to reconnect and restore your session. It is a virtual machine running in the cloud and you are VNC'ing in from your browser.

Wouldn't it make more sense to port this to a "real" Linux first? Photoshop is often quoted as a killer app that keeps people on Windows.

CS6 already has gold status under Wine and works quite well. They could "port" it to Linux just by wrapping it in a Wine container and fixing a few bugs, much like GOG does with games (using both Wine and DOSBox).

Well, it's being ported to run in a virtual environment, not necessarily even Linux :/

Honestly I wouldn't be surprised if Apple or Microsoft were paying Adobe large sums of money to not do that.

It would be interesting to see what happens when people start to load the server. Rendering images, and any other form of image processing, can be really heavy. On my desktop workstation it can take large amounts of processing time when working with large files.

IMO, this is where cloud based solutions don't work well. It would be cheaper to have your own workstation which you can count on always having access to the GPU.

Interesting - so it's Adobe's Photoshop on a VPC, but only for Windows/Chrome.

Remote storage of your designs/files, remote execution, like manipulating via remote desktop/VNC, but hopefully more usable.

My two concerns would be a) privacy of data and b) upload/download costs/time - some of the files involved can be huge.

I wonder what "initially with a streaming version of Photoshop" actually means. In a roundabout way, this is Photoshop on Linux, which is interesting.

This is a web application version of Photoshop. Nothing more (and it should run on any OS with a modern browser).

Are you sure? It sure sounds like this is a version of Photoshop, running in a managed VM in the cloud by Adobe.

The client is still Chrome, though.

No, it isn't ;)

This is huge, no matter what kind of technology or compromises being made, it shows the acceptance and maturation of chromebooks in the general public.

Only if you define "general public" as the intersection of Google Apps for Education customers and Adobe Creative Cloud licensees.

I really, really wish they would port Photoshop to Linux - it's going to be the killer app for Linux, hugely driving up adoption.

I don't see how, can you elaborate?

As a web developer, I need Photoshop, so I'm stuck with either a Mac or a Windows system.

I'd much prefer to drop the VMs and go full Linux. I can't, because of Photoshop.

Totally agree with you. I would just use Linux if I could run Illustrator without any special setup. Instead I have a MacBook Pro.

I think you two are a minority, that is why Adobe doesn't care.

Most programmers aren't artists/designers.

This has nothing to do about programmers.

Artists/designers have two choices today to use Photoshop - Windows or Mac. In third world countries, buying either is quite expensive. You get a whole range of laptops/desktops in Asia with Ubuntu installed (including Dell, HP, Lenovo, etc.). It would be really cool to open them up to the Adobe ecosystem.

Even if third world countries represent a meaningful fraction of artists and designers, thinking that they actually buy software is not realistic. Their piracy rates are above 90%. Why do you think Adobe is switching to a network model?

We went from Computer terminals to independent computing and now back to terminals (just on a network).

This would have been more interesting if they'd ported Photoshop to asm.js, or even NaCL...

It's really hard to distinguish the Chromebook in the photo from a MacBook Pro. The Chromebook designers should really come up with something of their own instead of copying Jony Ive's design for Apple if they want the laptop to have a better image than "poor man's MacBook".

If I had a nickel for every Apple fanboi claiming Google steals from Apple, I'd be rich. Please, get over it; let's not act like Apple doesn't copy.

I can get over it. Even if it's more like cloning than copying.

I found an old issue of Wallpaper magazine from 1997 with ads for Marc Newson's watch, the Ikepod. Both the name and the watch are similar to the iPod, which first appeared in 2001. Apple has of course copied a lot of its design from Braun's Dieter Rams. Rams has retired, but Newson has just started working for Apple.

I don't know if I like the idea of depending on a zillion different companies for everything I do or own on the computer, to be honest.

I wonder how it saves the file, i.e. just uploading the diff, or the whole file. The first would sound a lot nicer, especially when working with 1 GB PSDs.
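
A diff-based save could work something like rsync: split the file into fixed-size chunks, hash each, and only re-upload the chunks whose hash changed. A minimal sketch (chunk size and function names are made up for illustration):

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # 4 MiB chunks -- an arbitrary choice for this sketch

def chunk_hashes(blob: bytes) -> list[str]:
    """Hash each fixed-size chunk of a blob."""
    return [hashlib.sha256(blob[i:i + CHUNK]).hexdigest()
            for i in range(0, len(blob), CHUNK)]

def chunks_to_upload(old: bytes, new: bytes) -> list[int]:
    """Indices of chunks in `new` that differ from `old` and need re-uploading."""
    old_h = chunk_hashes(old)
    new_h = chunk_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]
```

Editing a few layers would then cost only the changed chunks rather than the whole file, though PSD's internal layout determines how localized an edit really is on disk.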

I have some serious doubts about the usefulness of this -- considering how long it will take to "open" a large PSD file.

There's also the issue of sensitive/confidential content -- Adobe is putting a very large bulls-eye on themselves. Since your PSD/etc files are being uploaded to Adobe to "open" Photoshop, they'll have to be stored there, and that is very attractive to nefarious individuals.

I understand that in actuality the files are stored on Google Drive, but the data itself eventually exits Google's network to enter Adobe's. I wouldn't be surprised if Adobe keeps copies around for their own convenience, eg: caching or historical metrics.

How is that any different from trusting Google Drive with your data? Or Dropbox? Or any other cloud backup services?

I'm not saying that it's not a problem, because it clearly is, but it sounds like it's a bit too late to be this worried about our business-related files stored on remote servers/services.

How did they move C++ code with a GUI toolkit running an event loop from OS 7? How do the weeks spent optimizing each C++ function translate to the web? Is this a complete rewrite? The technical details are mind-boggling.

This is the natural progression of the most pirated software app on the Internet.

If you buy a Chromebook you must be inherently stupid to think you will get something good out of it. Besides stealing data there is nothing good about it. Chromebook == trash.

