
Google lost the plot with G+. The reason Google is where it is is that it made things people wanted to use; people came to those things naturally.

Around this time Google simply forgot this: it killed Reader, killed iGoogle, killed Labs, and then tried to herd people into an inferior social network, shoving all of Google's other products into this terrible product nobody wanted to use.

This is different from what happened during the search engine wars, where Google's product was simply better than all the others, and that's why people used it.


At some point you began to have to use Google+ to post comments or ratings to Maps' POIs. (So I stopped.)

If you have to force people to use your product, and they still won't, the product is dead. There is no product. That Google couldn't see that is a little disturbing.

And let's not forget the damage Google+ did to search. You used to be able to use the + sign to force a word to appear in results documents, but this was changed to clumsy double quotes (on both sides of the word) because of G+...


I think the progression is:

Provider: Here, use X. Customers: Nope, don't wanna. Provider: Okay, now you have to use X to use Y. (lol, got you) Customers: Stop using Y.


So how do you like the new Apple Music?

I think this may have had a lot to do with the guy they picked to run it. Vic Gundotra brought his Microsoft mentality with him, and proceeded to use it on Google.

As someone who worked at Google when Google+ was released, IMO the failure was all about Vic Gundotra.

1. He's the elitist who went against Google's culture and sequestered the Google+ team in its own sooper sekrit headquarters with its own cafeteria.

2. He's the backstabbing liar who first said the company-wide annual bonus was going to be based on the success of Google+, and then, when it briefly looked like it would be a Facebook killer, went back on his word, thus alienating pretty much everyone.

3. He's the out-of-touch dumba$$ behind the real-names policy that sucked the appeal out of Google+ and left it a dumping ground for stuff that didn't fit anywhere else.

The day Google+ was released, I was besieged with invite requests from friends and acquaintances. And IMO the Google+ UI was nicer than Facebook's (it still is). However, as soon as Google+ made it its business to out gay people who were using pseudonyms to stay in the closet, those requests stopped.

I'm sure there are many more reasons for the failure, but this was what I saw from within Google at the time.


Not to mention gaming the usage metrics and OKRs to give the impression G+ was far more popular than it really was, then taking the whole team to Hawaii to celebrate the supposed success. I'll never understand what Larry saw in him; with his smooth-talking used-car-salesman style, he came across to many as a huge culture mismatch from the very start.

When project leaders spoke internally at Google (at least back then) they typically did so with a level of candor that Vic never provided. He would get on stage and just speak in unconvincing platitudes.

Bradley Horowitz still carries that torch unfortunately.


This is dead on. Anybody who looks at the failure of G+ without looking at Gundotra is missing a huge part of the plot.

As someone else there at the same time, I pretty much concur. I still remember the TGIF where someone asked him a very pointed question about real names and he completely dodged it. Vic was a very bad thing for the company.

Steve Jobs deserves a lot of the credit for this as well:

http://googlesystem.blogspot.com/2011/10/how-steve-jobs-infl...

Also, not in the article but reported in the press at the same time, Jobs told Page that he should be less afraid of ruffling feathers and more willing to command the company to do what he wants. Unfortunately, what worked for Steve Jobs wouldn't necessarily work for Larry Page.


With a search engine the cost of switching is pretty small. I don't care if anyone else switches search engine when I do. Learning a new one is easy. Adding features and continuously making a slightly better product will make it a winner in the long run.

With social networks you just can't do that. The cost of switching is enormous for the individual user, and no new users will join the smaller network. It's a natural monopoly.

The only chance you have is to offer something else, such as a professional network (LinkedIn) or a similar thing to completely set it apart from all existing networks.

When I created my G+ account I expected to find features for adding/importing/inviting my FB contacts. Nothing. Google must have been too scared of legal fallout to just mass import users' networks.

The way G+ worked at launch, FB could have a year of downtime without risk of losing many users to other networks.


I regret that I have but one +1 for this comment. This is exactly what happened. The thinking was, "We're going to make a successful social network ... what can we do that looks like successful products?" So the target audience is just "everyone", since of course everyone wants to use a successful social network. And if the stats don't look like they're successful, make up new stats ("the fastest growing social network ever" bullshit). And when that fails, we force as many products as possible to integrate with it, which of course they have to do because we're making a "successful" network and who wouldn't want to integrate with "success"?

People really wanted Facebook the product without Facebook the company. G+ had a golden opportunity, and they blew it.

I assure you that nobody outside our bubble ever thought anything like that.

I sincerely doubt that, for people who wanted Facebook without Facebook Inc., Facebook with Google Inc. would be a winning proposition.

It doesn't matter; they'll burn most of it up, raise another round, and pivot into a B2B client-customer IM app with patented business emoji or something.

No lessons will be learned, nobody will understand physics any better because the things they (re)learned about basic science will be lost behind the veil of "stealth mode," and all of society will be a bit poorer in the end.

It doesn't matter, because the money will vaporize, nobody will learn anything, and we'll get another useless app that nobody will use (unless people do use it, in which case they'll be lauded for great execution, their investors will be called visionaries, and the pivot from ear-shattering charge-tech to chat app will be just a footnote in everybody's success story).

Then the whole thing will be acquired by IBM or something and will disappear into history.


Yeah exactly. Emoji might charitably be called an extension of punctuation, but some of the stuff that ends up in the Unicode spec seems pretty crazy.

I mean, there's a Moai head in this set.


On the plus side, this is really great work. On the minus side, I sometimes think the people working on Unicode, a system intended to allow all the world's written languages to be represented in a single encoding, have lost their minds.

I understand that emoji are used in-line with text, but they aren't text.

Once you start down the path of encoding pictures into a text representation, you're going to start missing things that people need in these kinds of pictorial representations.

I'm sympathetic in a sense; I wrote a paper in college demonstrating why emoji/emoticons are a natural extension of text (up to a point), because they can encode emotion and intent, which natural-language punctuation doesn't really allow for. But at the same time, is it really necessary to have half a dozen fish, trains, several food items, an alien head, a full set of numbers from 1-10, squares of different sizes, and other non-emotional iconography?

And then the argument is that this is just intended to encode Japanese carrier emoji. Great, let's spend brain cycles building icons used by phone carriers in one country into Unicode.

I'm already bothered by the separate encodings I've seen from time to time for the exact same script in slightly different font variations. That's supposed to be handled by the font rendering the script, not by the encoding.

This really needs to be a separate encoding that's not Unicode. I figure at some point Unicode will simply turn into a generic image format with a bunch of extra character-encoding baggage that people will complain about.


I'm far from an expert on Unicode, but I definitely have the feeling that they've had trouble sticking to their original mission. A few months ago we had a discussion here on HN about some people still not being able to write their names in their native script -- Burmese, maybe? I forget -- but now we have skin tone modifiers for emoji.

That said, I guess I agree, if you're going to have emoji at all, you have to at least think about this issue. My inclination would have been to make them all green from the beginning, so there's no question of ethnic favoritism, but I guess it didn't happen that way.


> On the plus side, this is really great work. On the minus side, I sometimes think the people working on Unicode, a system intended to allow all the world's written languages to be represented in a single encoding, have lost their minds.

Why? It's a useful and convenient pictographic addition to text, and it brings attention to astral characters and combining characters, whose handling is routinely screwed up by Western developers (let alone Anglo-centric ones) who don't routinely deal with them and don't care to test for these cases. I mean, it took until 2010 for MySQL to stop destroying astral characters (with the introduction of "utf8mb4" in 5.5.3). And oddly enough, one of the big worldwide changes to text content between 2006 (MySQL 5.1) and 2010 was emoji spreading outside of Japan, starting around 2008. I'd like to say it's a coincidence, but that's not really likely.
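
For the curious, here's a minimal Python 3 sketch (my own illustration, not something from the parent comment) of what "astral" means in practice: the character lives outside the Basic Multilingual Plane, so it needs a 4-byte UTF-8 sequence, which is exactly what MySQL's old 3-byte "utf8" charset couldn't store before utf8mb4.

    # Minimal sketch: why astral characters (code points above U+FFFF) need
    # 4-byte UTF-8 sequences, which MySQL's pre-utf8mb4 "utf8" could not store.
    bmp_char = "é"        # U+00E9, inside the Basic Multilingual Plane
    astral_char = "😀"    # U+1F600, outside the BMP ("astral")

    print(hex(ord(bmp_char)))                # 0xe9
    print(hex(ord(astral_char)))             # 0x1f600 -- doesn't fit in 16 bits

    print(len(bmp_char.encode("utf-8")))     # 2 bytes
    print(len(astral_char.encode("utf-8")))  # 4 bytes -- needs utf8mb4 in MySQL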

> And then the argument is that this is just intended to encode Japanese carrier emoji. Great, let's spend brain cycles building icons used by phone carriers in one country into Unicode.

That's an argument nobody is making, because emoji escaped Japan back in 2008, when people outside Japan started unlocking the emoji iOS keyboard and using emoji elsewhere.

> I'm already bothered by the separate encodings I've seen from time to time for the exact same script in slightly different font variations. That's supposed to be handled by the font rendering the script, not by the encoding.

That doesn't really work, because then you can't mix the two languages anymore. This is actually a big issue with Han unification: mixing Chinese and Japanese in the same text becomes a pain in the ass, because they can't both look right without a bunch of font-based hackery, which expects the client to have all the right fonts in the first place.
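
To make the Han unification point concrete, here's a small Python sketch (my own illustration): many characters share a single code point across Chinese and Japanese, so the encoding alone can't say which regional glyph to draw; that information has to travel out of band, via fonts or language tags.

    # Minimal sketch: Han unification gives Chinese and Japanese one shared
    # code point even where typographic conventions differ.
    char = "\u76F4"   # U+76F4 ("straight/direct"), a commonly cited example
    print(char, f"U+{ord(char):04X}")

    # The code point carries no language information, so mixed CJK text needs
    # out-of-band hints (illustrative markup, not part of Unicode itself):
    #   <span lang="ja">直</span>  vs.  <span lang="zh-Hans">直</span>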

> This really needs to be a separate encoding that's not Unicode.

And so you couldn't mix emoji and text. Now that would be convenient, and it absolutely wouldn't lead to proprietary emoji implementations in Unicode's private use areas at all.

Well, it actually would, because that's how emoji were first integrated into Unicode in the first place.


You talk about this issue as if text and images have never shared space next to one another in the same GUI control on a computer screen.

More importantly, your history is all wrong. Emoticons have been used in the West since the telegraph era, and kaomoji (emoji) showed up in the '80s and became known outside of Japan not long after.

These images are simply a graphical interpretation of common emoji; they literally map something like ^_^ to a smiley face.

Somewhere along the line, someone thought it a good idea to let people just choose from common emoticons and have them map to the character implementation, rather than doing the reverse (because remembering things is hard). On the receiving end, the ^_^ is simply replaced with whatever image the chat app has chosen to represent it.
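
A rough Python sketch of that substitution step (the mapping table is hypothetical; real chat clients ship far larger ones):

    # Hypothetical emoticon-to-emoji substitution, as a chat client might do it:
    # the text emoticon is swapped for a code point, and the receiving app
    # draws whatever glyph its emoji font provides for that code point.
    EMOTICON_MAP = {
        "^_^": "\U0001F60A",  # smiling face with smiling eyes
        ":-)": "\U0001F642",  # slightly smiling face
        "T_T": "\U0001F62D",  # loudly crying face
    }

    def replace_emoticons(text: str) -> str:
        for emoticon, emoji in EMOTICON_MAP.items():
            text = text.replace(emoticon, emoji)
        return text

    print(replace_emoticons("thanks! ^_^"))  # thanks! 😊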

Later, DoCoMo, KDDI and SoftBank decided to further formalize the emoji code mapping as part of Shift-JIS (and ISO-2022-JP), and that's why we have encodings for snail, minidisc and chicken leg, but not mosque, pork chop and blue jeans. Here's the mapping table: https://docs.google.com/viewer?url=http%3A%2F%2Fwww.unicode....

What the logic was in doing this is anybody's guess, but it was probably bandwidth efficiency.

It would be more appropriate for there to be an entirely separate "expressions" standard for encoding the various emoticon systems (there are several) and providing a standard iconography. You can already mix and match fonts and languages with images in most rich-text controls.

Unicode is not the correct place to do this. And amazingly, text controls, as we have already established, can support fonts and images next to each other. So extending a text control to support Unicode, images and expressions would seem more logical than shoehorning Shift-JIS carrier emoji, which are not a human written language, into Unicode.

Here's the original proposal that started this madness, so you can read the rationale. There are lots of people to blame for this, but we can start with the authors of this proposal:

http://www.unicode.org/L2/L2009/09025r2-emoji.pdf


Do emoji always look the same, or are there different sets? I often wonder whether the emotion I feel some icon perfectly captures will be displayed the same way for the recipient. I know that fonts and icon sets usually differ per app. Is it just as unpredictable here?

Emoji are like letters: they have a basic meaning, but the exact representation depends on the font. So there are "different sets" in the sense that U+1F46E "POLICE OFFICER" may not look the same across all systems, in the same way "A" will not always look the same unless you specify the font to use.
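
A quick Python illustration of that point (mine, not the parent's): only the code point travels over the wire; the glyph is whatever the recipient's emoji font draws for it.

    import unicodedata

    # Only the code point is transmitted; the picture comes from the recipient's
    # emoji font, which is why vendors' renderings of the same emoji differ.
    officer = "\U0001F46E"
    print(unicodedata.name(officer))   # POLICE OFFICER
    print(f"U+{ord(officer):04X}")     # U+1F46E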

I urge technical people to explore non-technical subjects and general "well roundedness". I've gotten immense relaxation and satisfaction from community art classes, martial arts, yoga, etc. There's a powerful argument that technical work is inherently creative, but creative work without the technical is something else entirely.

I also urge technical people to study history, language, speech, public performance and public speaking. All of those things give a sense of perspective, and the ability to confer with partners and customers on a level that most technical folks don't understand.

Public performance in particular enables one to overcome lots of fears and be able to talk in front of both crowds and executives. This capability is often rewarded in important ways that build one's career...and the only way to get good at it is to do it.


My father is in his mid-70s and after a short stint in retirement went back to work about 5 years ago. He claims it's because of the money, but there's not a lot preventing my parents from selling their home and moving to a cheaper part of the country and living out their retirement years.

On the plus side, he was remarkably sharp and agile for a 70-year-old. But his legs are starting to fail, and that's made him age very quickly.

Now I can see it being about money (and health care): he's going to need knee replacement surgery and months of rehab, something my parents will struggle to afford.

He's lived a fascinating life, and is full of stories, but to me he's also a warning about the need to cultivate non-work interests and stash away enough money to enjoy a long retirement enjoying those interests.


I work for a small enterprise (under 2,000 people), and we get a choice of a few different Macs (Air, MacBook, Pro), Dells in various configurations, as well as Surface tablets.

The tech folks usually get a MacBook Pro or a high-spec'd Dell (the Dell has better specs than any Mac laptop available at the moment, but is about the size and weight of a TV tray). The business guys and PMs go for the Airs and Surfaces; I'd say the Surfaces outnumber the Airs about 2:1 on uptake, and that's mostly because of the form factor and pen.

There have been a couple of rough spots, but our leadership simply decided it was time to figure it out, and that if we did, we'd be years ahead of larger enterprises, giving us a competitive advantage.


So what happens when you need to seriously use Excel and the Mac version falls over?

Excel on Mac works fine, and we do some fairly sophisticated numerical modelling work.

In terms of Office, the bigger problem has been legacy Access databases. We solve this by setting up some VMs with RDP access for the Mac users. But there are also a couple of Mac utilities that let you export Access tables to CSV, which can then be loaded into something sane.
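
For illustration only: one possible way to script such an export is the open-source mdbtools package, whose mdb-export command writes an Access table to CSV. This is an assumption about tooling, not the specific utilities we use, and the file and table names below are made up.

    import subprocess

    # Sketch: dump an Access table to CSV via mdbtools' mdb-export
    # (assumes mdbtools is installed; paths and table name are hypothetical).
    def export_access_table(mdb_path: str, table: str, csv_path: str) -> None:
        with open(csv_path, "w") as out:
            subprocess.run(["mdb-export", mdb_path, table], stdout=out, check=True)

    export_access_table("legacy.mdb", "Customers", "customers.csv")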

A surprising number of people also use Project, Visio and OneNote, and there's not really a good portable solution for those cases. I've been pushing people to use yEd for portable charting (instead of Visio), OneNote is now available for Office 365, and people are finding Excel a good-enough Gantt chart tool for many purposes.

Office on Mac is quite good compared to the Windows version; it's just that not all of the products people expect to use are available.

The bigger-picture point is that this is going to happen, and enterprises need to figure it out. Non-cross-platform applications and processes need to be abandoned. This also happens in reverse, with remote Linux servers being easier to work with from Macs than from Windows (PuTTY and WinSCP sort most of the issues out, but they aren't perfect).

In my experience people really don't care that much about the desktop experience (IMHO Windows is better than OS X for most of that anyway), but they do care about "getting shit done," and that means apps, and these days web-based apps. The OS just isn't that big a factor so long as the things people need to GSD are available. They're choosing hardware and form factor over OS, and they pick the one they think will work best for them.


Well, until a few weeks ago you were limited to 65k rows in Mac Excel.

Having struggled with OS X and Apple's obvious abandonment of any professional use other than the one-man-band designer, I hate Apple hardware with a vengeance.


Run it through a VM.

Here's what the signs look like, for those interested; most appear to be from London:

https://encrypted.google.com/search?q=%22no+irish%22+window+...

Here's a Reddit thread on the issue:

https://np.reddit.com/r/badhistory/comments/3erkxv/nina_no_i...


My desktop is basically filled with an octopus of dongles these days. It's ugly and annoying, and increases the footprint of my laptop 2x.

I guess USB-C is supposed to help with this?


an octopus of dongles

Who do we need to talk to for that to be the official collective noun?


Consistently talk to many people in publicly available texts over a period of a few years. That should do it.

Tell me where to vote and I will.

I am interested in this query as well.

I wish I knew why Apple's top-quality engineers didn't embed or bundle a USB-C to 60 GHz WiGig docking station. Having only a single USB-C port is absolutely cool if you have a wireless 7 Gbps docking station.

Is there some reason why Apple doesn't promote WiGig?

Here's an overview of the technology:

http://ultrabooknews.com/tag/wigig/

http://www.slashgear.com/intel-wigig-docking-station-in-2015...

http://www.cnet.com/news/60ghz-tech-promises-wireless-dockin...

Personally, I wish they had used ultra-wideband radio (>500 MHz bandwidth). There are numerous advantages to UWB radio that fellow RF enthusiasts will recognize. But I am happy with whatever technology allows me to have a cable-free desktop 😌 This is the reason I loved the original Ubuntu Phone (with its powerful specs).


Docking stations are awesome.

And pretty freaking expensive too: $250 for a Thunderbolt docking station, give or take. Despite the fact that the company will freely provide a $2,000 laptop and a $600+ monitor, getting them to pony up for a docking station has proven remarkably difficult.

As a WFH employee I have the opposite issue: I've got a ~$2,000 ThinkPad W540 and matching dock, but I have to provide my own monitor (which honestly is fine; due to amblyopia in my left eye as a child, I lack the peripheral vision for large displays, so it's easier for me to just shop for my own).

Docking stations ARE awesome. It seems like USB-C is going to kill docking stations, though. Already they feel a bit like a relic from the '00s.

Bad news: Ethernet ain't going anywhere in the foreseeable future. So you'll still be stuck with a USB-C-to-Ethernet dongle :(

But at least you'll be able to hide all the dongles behind some wall and just have a single USB-C cable running from your laptop to a powered hub.


This is basically it. There's a thing I've noticed that I like to call the "inverted reason list": an upside-down ordered list of reasons to do something, delivered to a skeptical or hostile audience. The list is ordered such that reasons the audience will like are delivered first, as if they are the most important, and the reasons the audience will hate are delivered at the end, as an afterthought.

"and oh....it turns out open offices will save us $5m a year in facilities costs." is usually delivered at the end, when nobody is paying attention anymore, because it was preceded by "free snacks at every 25th cube!" and "will allow you to configure seating to keep your team near you"

For work that requires lots of talking (or not much deep thinking), they work "OK," but for knowledge work they're basically the worst idea ever conceived, short of sending all your coders to rock concerts all workday and wondering why nothing is getting done.

Where I work, most people get or split offices, but we can work in a few other places, including an open work area. Most people hate working there, because it will inevitably have somebody on speakerphone having an argument with their health insurance company over an unpaid bill, and that conversation will go on for two or three weeks solid. Appeals to management have basically been ignored, and now we're getting ready to move to a new facility that's all open space.
