Here's what Amazon does with products it no longer wants:
Consider SimpleDB, one of the first Amazon web services and effectively dead. It still has a live page, and the service still works and is online: https://aws.amazon.com/simpledb/
But it is not listed in the Amazon Web Services product lineup.
Presumably if you had built something on it many years ago (which I did), it would still be working today. Maybe the lesson for Google is that instead of killing things, leave them working but without future development. And instead of radical pricing changes, change pricing for new customers and leave existing customers on the old pricing. Sure, people might be unhappy about the changes, but they are less likely to be enraged and lose trust in Google.
This post from the Amazon Web Services CTO ( https://www.allthingsdistributed.com/2016/03/10-lessons-from... ) says:
> 5. APIs are forever
> This was a lesson we had already learned from our experiences with Amazon retail, but it became even more important for AWS’s API-centric business. Once customers started building their applications and systems using our APIs, changing those APIs becomes impossible, as we would be impacting our customer’s business operations if we would do so. We knew that designing APIs was a very important task as we’d only have one chance to get it right.
Microsoft does a lot of work in that direction too; there are workarounds and fallbacks to older API behavior all over the place.
I think this is also compatible with Linus's mantra: "Don't break userspace". (Obviously there is a limit; Linus did elaborate on those limits recently.)
I really don't like it when APIs stop working. There are plenty of ways to get people off an old API without disrupting or annoying anyone; as you mention, one option is to simply stop advertising the old version and maybe make it more expensive (or less accessible) to new customers.
Having a stable API that I can bet a product on for 10 years (or more) is something developers (and users) value, and it should be a huge plus when deciding between product offerings from different companies.
2016: 11 (actually 8 if you don't count things that got merged into other things)
I get it, it's expensive to run maps; it's not free. But I've been burned multiple times by these guys, so my threshold of tolerance is lower with Google than with other businesses/services.
Nothing free from Google is ever going on any more of my sites or software unless it is set up with an alternative already in place, so that when they do _anything_ ridiculous, they can be dropped instantly. But as of right now, the plan is a complete purge of Google.
I too am cautious with Google APIs after the Maps price hike. I've also abandoned the Twitter API after their streaming API shutdown.
OpenStreetMap ( https://www.openstreetmap.org ) is the data source for most other mapping systems, though.
Suffice to say, I am setting it up so that it can be more easily replaced if need be.
Does Google shut down stuff at a greater rate than anyone else? Are the odds better or worse if X had been run by company Y instead?
As mentioned by others Amazon seems to have a policy along the lines of "APIs are forever".
That number is crazy high. That there has been a slight trend over the last 2 years in no way makes up for the significant risk shown by those other numbers.
You have to be crazy to base your business on the Google platform.
Seriously, it should be a rule of thumb: if you are not big enough to negotiate secured pricing with Google, don't rely exclusively on their API in your project.
> Innovate, don't duplicate
> Don’t make a substitute for Google Photos or use this API to create, train, or improve (directly or indirectly) a similar or competing product or service. For instance, if your app’s primary aim is to provide a general purpose photo gallery app, it’s a substitute for Google Photos.
IANAL, but this seems to rule out the use of this API in a number of places where it would be useful (a plugin for open-source photo management software seems to be directly targeted, etc.).
I can understand why they don't want to allow me to train my image recognition software using their API, but this phrasing makes me really wonder what it is for except syncing.
Also, the wording is generally vague enough to cover almost anything, which means that sooner or later you might have to rely on the judgement of some random future Googler to prevent all your integrations from being destroyed.
I am building it: https://PhotoStructure.com
Cleaning up the mess left from early-adopting N photo apps and websites that subsequently shut down is why I took on this project. I've got 20-odd hard drives that have accumulated over the years, filled with backups and libraries from Apple Photos, Aperture, Picasa, several hundred gigs of Google Takeout tarballs, and other ancient DAM apps. I wanted a single, organized, deduped copy of my photos and videos, skipping the thumbnails, the files that are missing original EXIF headers, and those that have suffered bitrot.
Finally, I've got a single folder hierarchy I can rsync to my NAS or wherever, and know I got everything. There's a simple SQLite db I use for persistence, and a web server that sits on top of it that makes browsing and searching your whole library feel serendipitous.
So yeah, it's Google Photos that lives on your bookshelf. Viva the distributed web! I'm looking into the applicability of dat and ipfs for secure sharing soon.
I've got a limited number of beta users trying it out right now. If you're willing to share your feedback, please consider signing up. The beta is free.
So this looks fantastic! Subscribed ... very willing to be a beta tester and provide detailed feedback.
However, the problem I'm finding is a small percentage of file corruption from all the storage upgrades and copying over the years, meaning no given file can be 100% trusted to be a valid original.
I haven't found any file or photo deduplication tools with the savvy to figure out which of two identically sized and timestamped files is the least corrupt image.
In many cases, a second generation is viewable while the original is present but unusable. This most often applies to very old Aperture libraries that got copied from NAS to NAS over the years, where a "master" may be corrupt but it still has a viewable generated high res cache as a JPEG.
The implication is that the "structure" of the image files themselves has to be analyzed ... is this an uncorrupted, viewable image?
Note that with JPEGs and various flavors of RAW, renderers will still happily open and display the file, but what humans view can show evidence of bit rot. Conversely, some files are detected as corrupt by file examination, but can be viewed without problem.
To offer "principle of least loss" for mass merge of diverse collections, this would have to be figured out.
What I've found on my older hard drive backups is file corruption due to bitrot or file truncation.
I use `jpegtran` to validate JPEG bytestreams, `dcraw` to validate RAW images, and `ffmpeg` to validate videos. At least for my quarter-million-file corpus, those tools detect corruption well enough for me to want to skip the file. I actually had to write a bit rotter to write tests for this and to do glitch inspection.
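The gist is just shelling out to those tools and treating a non-zero exit (or anything on stderr) as "probably corrupt." Here's a minimal sketch of the idea, not PhotoStructure's actual code; the flags are from memory and worth double-checking against your installed versions:

```
import subprocess
from pathlib import Path

# Commands that parse/decode the file; corruption usually shows up
# as a non-zero exit code or output on stderr.
VALIDATORS = {
    (".jpg", ".jpeg"): ["jpegtran", "-copy", "all", "-outfile", "/dev/null"],
    (".cr2", ".nef", ".dng", ".arw"): ["dcraw", "-c"],          # decode RAW to stdout
    (".mp4", ".mov", ".avi"): ["ffmpeg", "-v", "error", "-i"],  # "-f null -" appended below
}

def looks_valid(path: Path) -> bool:
    """Return True if the file decodes cleanly, False if it seems corrupt."""
    ext = path.suffix.lower()
    for exts, cmd in VALIDATORS.items():
        if ext in exts:
            if cmd[0] == "ffmpeg":
                args = cmd + [str(path), "-f", "null", "-"]
            else:
                args = cmd + [str(path)]
            result = subprocess.run(args, stdout=subprocess.DEVNULL,
                                    stderr=subprocess.PIPE)
            return result.returncode == 0 and not result.stderr.strip()
    return True  # no validator for this type; assume it's fine
```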
> To offer "principle of least loss" for mass merge of diverse collections, this would have to be figured out
Every unique SHA gets copied into your library (if you have copies enabled), but any given asset may have one or more asset files (which are merged in the UI and DB). To minimize risk from bugs^H^H^H^H "undiscovered features," PhotoStructure never moves or deletes files, excluding its own cache and db.
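Conceptually, the copy-unique-SHAs step is simple: hash every candidate and only import bytes you haven't seen before. A rough sketch of that idea (not the actual implementation; the names and library layout here are made up):

```
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large videos don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def import_unique(sources: list[Path], library: Path) -> None:
    """Copy each byte-for-byte-unique file into the library; never touch the sources."""
    seen: set[str] = set()
    library.mkdir(parents=True, exist_ok=True)
    for src in sources:
        digest = sha256_of(src)
        if digest in seen:
            continue  # exact duplicate of something already imported
        seen.add(digest)
        shutil.copy2(src, library / f"{digest[:16]}{src.suffix.lower()}")
```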
I'm in a similar boat. What I'd like to know is: where are the duplicates and what can I safely delete? Anything that can help me clear it up would be a godsend!
The approach I've settled on, which should work for most people, is to establish a new library with unique copies of each of your originals, skipping exact SHA matches and invalid files.
In your case, though, you'd run PhotoStructure in its "don't copy into the library" mode. Once it finishes scanning your drives, you can run a simple SQL query against your SQLite db to get a list of duplicate files. That query will be in the FAQ.
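Roughly speaking, the query groups asset files by content hash and keeps anything with more than one row. A sketch of the idea (the shipped schema may differ; `asset_files`, its columns, and the db filename here are just illustrative):

```
import sqlite3

# Hypothetical schema: asset_files(asset_id, path, sha). Adjust names to the real db.
QUERY = """
SELECT sha, COUNT(*) AS copies, GROUP_CONCAT(path, ' | ') AS paths
FROM asset_files
GROUP BY sha
HAVING COUNT(*) > 1
ORDER BY copies DESC;
"""

with sqlite3.connect("photostructure.db") as db:  # placeholder path to the library db
    for sha, copies, paths in db.execute(QUERY):
        print(f"{copies}x {sha[:12]}  {paths}")
```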
I could manage that query, but your average user wouldn't. How about a way to export it to a CSV file so it can be viewed and filtered in the user's choice of spreadsheet app?
Result: the corrupted uploaded files filled my Google Drive, so I couldn't receive emails in Gmail anymore! I lost 7 days of messages until I diagnosed the problem.
> All media items uploaded to Google Photos using the API are stored in full resolution at original quality. They count toward the user’s storage.
Is there some commercial decision behind this?
Currently the Google uploader is worse than the previous backup tool, and you can't make a third-party uploader with this API.
Our primary concern is ensuring that users have full control over the quality of media that's added to their library (so currently the API defaults to always uploading in original quality).
Clearly there are ways that problem can be solved, so please suggest this use case in the issue tracker and star the issue (so we have a sense of developer interest and you can be kept in the loop on updates).
As it stands it seems people are interpreting it as if all images, regardless of size, will count against the quota.
If this is just a documentation issue, I'd be happy to see it fixed. Anyway, as mentioned upthread, it would be great to have an option in the API or something.
As I read it now, it limits usage so broadly and vaguely that I'd be hesitant to put any serious effort into it.
Google Photos is a great product, but for me it has one huge drawback: you cannot remove a person from a shared album after adding them. You, or anyone an album is shared with, can accidentally share the album with any other person, and once that happens, the only way to stop them from accessing your photos is to delete the whole album. Deleting the album means losing all photos, comments, and likes added by others. Also, sharing an album always generates a link that can be used by anyone to access the album. I haven't found a way to share only with specific people without generating the link.
> Photo storage and quality
It would be nice to be able to specify that a photo be stored as "high resolution," as you can as a Google Photos user. Every photo uploaded via the API counting toward a user's storage might deter them from using the integration.
However... I noticed a clause that requires a "Commercial license" if your app prints photos:
> If your product transfers a user’s photos onto physical goods (such as photo prints or t-shirts) and you charge money for this service, you must have a Commercial license. For more information, see the Google Photos partner program.
Anyone know how much this costs or why this is the case?
My photos, my contacts. What's the big deal?
I have to do a tedious manual export and upload to get photos from the Apple Photos desktop app to the Google Photos cloud
(and I don't want a sync app that uploads everything)
Come on Google, you can do better than this.
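(For reference, the new Library API does at least expose a two-step upload flow, so a small third-party uploader is possible in principle; I'd still have to script it myself. A rough sketch with Python's `requests`, where the access token and file path are placeholders and the headers are worth double-checking against the current docs:)

```
import requests

ACCESS_TOKEN = "ya29...."          # placeholder: obtained via the normal OAuth2 flow
PHOTO_PATH = "/path/to/photo.jpg"  # placeholder

# Step 1: upload the raw bytes and get back an upload token.
with open(PHOTO_PATH, "rb") as f:
    upload_token = requests.post(
        "https://photoslibrary.googleapis.com/v1/uploads",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/octet-stream",
            "X-Goog-Upload-File-Name": "photo.jpg",
            "X-Goog-Upload-Protocol": "raw",
        },
        data=f,
    ).text

# Step 2: turn the upload token into a media item in the user's library.
resp = requests.post(
    "https://photoslibrary.googleapis.com/v1/mediaItems:batchCreate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"newMediaItems": [{"simpleMediaItem": {"uploadToken": upload_token}}]},
)
resp.raise_for_status()
```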
It's been months and my Play Music library still isn't accessible from YTM. The artists in my library are still a mix of YouTube channels I follow and music artists I actually care about. I have to long-press on an album to add it to my library, which is cumbersome. The recommendations are useless compared to GPM's. YTM doesn't surface artists playing in my area, which was an awesome feature that GPM still has.
I really hope they decide to go the Google Pay route: see how stupid it was to change the branding of something from Google to $otherproduct, then see a decline in users, go back to the Google brand, and just overhaul the existing apps and infrastructure.
Google is a really weird company sometimes, and I'd love to be a fly on the wall during the meetings where they decided on shit like YTM.