Google launches an AI contest for social good (ai.google)
164 points by saranshk 3 months ago | 97 comments



Summary: $25M for projects in a wide array of categories. Grants are $500K to $2M for one to three years of deliverables.

The application questions can be found here: https://ai.google/static/documents/impact-challenge-applicat...


This is weird.

1. Why does it need AI? Why not just fund stuff that does social good, instead of giving out computing credits that will eventually run out?

2. The successful projects join a startup accelerator. Wtf?

These guys have really lost track of what charity means.


Re 1, rephrase that as being about funding mosquito nets specifically:

1. Why does it need mosquito nets? Why not just fund stuff that does social good, instead of giving out mosquito nets that will eventually deteriorate?


Because it's an obvious PR move to try to improve the standing of "AI", which is mostly a technology for companies like Google to exploit your data to your detriment.


This is awesome! "Crowdsourcing" AI tech is a pretty smart business move, especially considering it's wrapped in "social good". This initiative can bring a lot of talented and idealistic minds together, and who knows... it could kick-start the next Google product! And if nothing comes of it... who cares? Still a marketing win (assuming this gains some traction). I expected nothing less from the idle minds at this corp.


You said so much without pointing out any material benefits to humanity.


I honestly can't tell if you're being sarcastic or ironic or serious here. The rules state I have to take the most charitable reading, which I guess would be sarcasm. In any case, it's very difficult to tell.


[flagged]


Drop the advertising business, social good automatically increases, no AI needed :)


Are you saying that the world was better off before Google?

Or that there's a practical, socially preferable way for Google to sustain its search engine without ad revenue?


Is this anything like "don't be evil"? I want to try to not be cynical. But so many ethical concerns, especially regarding privacy, have come out of the Google corner in the past few years that the "for social good" part instantly makes me paranoid about what it will really be used for eventually.

And the story [1] about Google patenting a person's work after an interview comes to mind.

Having got that off my chest, hopefully the participants read the legal terms very carefully and might even consider having a lawyer review them.

[1] https://patentpandas.org/stories/company-patented-my-idea


I hate the term "AI for social good", because it reminds me that the default use of AI today is actually far from being a social good. I wish "AI for social good" was not a thing, and the default use cases of AI were for good, social or otherwise.


I’ll just leave this here.

https://www.faception.com/


It's impossible to tell if this is a joke, or YC '19. Exciting verticals!


I thought it was a joke/satire on AI and facial recognition tech. Now I can't tell either!


It could be it started out as a social satire ("faception", really?), but then someone sent them a genuine inquiry about how many terrorists their system could catch for umpteen million dollars.

How many would resist the temptation to throw some code together, ship it, and pretend it's real? After all, who's going to argue with top-notch scientific facts like:

"Researchers previously knew that genetics played a large role in determining face shape, since identical twins share DNA."

Criminals and terrorist-sympathisers, that's who!


That is horrifying.

I don't suppose there'd be any way to know if companies are using this neo-phrenology during interviews?



>...Our technology allows predictive screening solutions and enables Preventive Actions in the public safety, smart cities and homeland security...

That could never run into problems, whatsoever. /s


Sounds like the plot of Condor, the recent TV series, waiting to happen.


They claim to be able to recognize a terrorist. Seems more like a joke to me.


What they probably do (among other things) is racial profiling: brown person with beard -> higher chance of being a terrorist, white person with hipster glasses -> lower chance of being a terrorist.


Madness!

I guess with images of this sort it might judge that the person could be dangerous, but judging on the look of your face alone is just stereotyping done as a [very suspect] business.

https://www.google.com/search?q=silence+of+the+lambs+hanniba...


Plenty of research suggests that the human brain already does this, and does so effectively, like it or not. But translating this into an algorithm is disturbing.


Of course it does - "I don't like the look of that guy". "Dodgy looking fellow". "He had an honest face". etc. The basic principle as a first-pass heuristic is tried and tested. Kneejerk reactions to this kind of tech always seem to completely ignore the question of whether it's actually effective.


I think people do realize that it's codifying the "science" of "Ah reckon", and that that same heuristic is famously terrible. Coding bias intentionally into an algorithm seems more like a thin justification for pre-existing profiling than any breakthrough in reading faces. It's not hard to imagine the same behavior we see today, but now with the excuse that "The algorithm did it."


Cynically I suspect that may be the point. I refer to such things as bias laundering. There is a very long history of "objective" measures that were tailor-made to collar hated groups.
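
To make that concrete, here's a deliberately crude, hypothetical sketch (every feature name and weight below is invented for illustration) of what bias laundering can look like in code: prejudice goes in as "features", and a number comes out that gets presented as an objective "risk score".

  # Hypothetical sketch of "bias laundering": stereotype in, "objective" score out.
  # None of these features or weights come from any real system.
  WEIGHTS = {"beard": 0.4, "dark_skin": 0.5, "hipster_glasses": -0.3}

  def threat_score(face_features):
      # A weighted sum of hand-picked traits: the prejudice IS the model.
      return sum(WEIGHTS.get(f, 0.0) * v for f, v in face_features.items())

  print(threat_score({"beard": 1.0, "dark_skin": 1.0}))  # 0.9 - "the algorithm did it"

A model trained on biased arrest or watchlist labels learns the same lookup table implicitly; writing it out explicitly just makes the laundering visible.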


Really, we have enough problems with prejudices based on stupid gut feelings without adding Garbage In, Garbage Out from magical thinkers misusing machine learning.



I must strongly dispute the "effective" part. Sociopaths know how to play the system, and many people are fooled by style over substance - they listen 99% to how something is said and ignore the content. I have found it shocking what complete bullshit people fall for, bullshit that sounds incredibly stupid if repeated verbatim.


I'm curious whether, after they founded the company and developed the technology, they ran it on their own founders and employees to find out who is what - and perhaps got rid of people who were flagged.


This example didn't even come to my mind when writing the above comment. I think even run-of-the-mill applications of AI and ML, such as feed "personalization" in YouTube, Twitter, and Facebook, are pretty evil. They're clearly not optimizing for value for the user but for monetizability, even if the user is being shown garbage content to get them addicted. I'm a deep learning researcher, and these applications make me seriously reconsider my choices and the impact I'm having on the world.


Is this for real? Or a satire like that "fake a vacation for Instagram" kind of thing?


Excerpted from their principles[1]:

[...] we will not design or deploy AI in the following application areas:

1. that cause or are likely to cause overall harm [...]
2. [...] Weapons [...]
3. [...] that gather or use information for surveillance [...]
4. [...] whose purpose contravenes widely accepted principles of international law and human rights [...]

[...] As our experience in this space deepens, this list may evolve. [...]

The last sentence cracked me up; it was definitely generated by some DeepMind AI called DeepSarcasm.

[1] https://ai.google/principles/


As a small social enterprise with ambitions to use data to improve the work of local health/activity/wellbeing charities, the offer of help appeals to us, so we're applying. I get the various arguments here, but on balance, what would we achieve by refusing their help?


Quick, someone suggest an AI-based project to defeat Chinese censorship. :-)


The negativity here is saddening. So what if it is for PR? How many other companies are doing these things even for that? Isn't it eventually promoting AI projects that have at least some social good in their objectives?


Google collects sensitive data on every hapless internet user. A contest "for social good" should start by helping Google brainstorm a new business model. The one they have now is poison.


Given Google is already doing that (collecting anonymized data), wouldn't it be overall better if it also puts that to some good use?


> Given Google is already doing that (collecting anonymized data), wouldn't it be overall better if it also puts that to some good use?

Given Google is already doing surreptitious stuff and is generally anti-privacy, it ought to look at stopping those first. Stealing from people and doing "good" with that is not an excuse for stealing, especially when most of the victims are common folk (even if one follows a Robin Hood principle).

Also, I didn't get the part about Google "collecting anonymized data". Google collects precise, personally identifiable data because that's what pays the bills for its chosen business model (similar to Facebook, which is far worse). Anonymization may happen down the line for certain purposes and anonymized data may be all that Google is able to get from certain sources.


Stealing from people? huh?


No. Look at what's happening at a meta level and at nonlinear effects. Yes, the social good projects will have some small positive effect in the world, but they also end up buying a lot of positive PR for Google, thus allowing them to continue their malpractices in their main business. Skepticism of the kind in this thread informs Google that their positive image right now is actually fragile, and that they should up the ante if they want to continue to be looked at as having a positive impact on the world, which is a critical advantage they use to gather the best talent.


The analogy for this comment is:

"The defendant is already doing that (committing crimes), wouldn't it be better if we permitted them to continue doing so as long as they're putting a portion of their spoils to good use?"

And don't forget to pat them on the back for their community service.

Even that is a very generous analogy, as it assumes that Google's AI contest will actually result in outcomes that produce some social benefit, rather than a few temporary initiatives that look good, provide near-zero real benefit to society, and really provide Google with technical AI innovations that are of some small use to their business.

You may not consider collecting "anonymised" data to be a bad thing, but you'll note that you inexplicably introduced the "anonymised" descriptor, twisting the original commenter's "sensitive data"!?


We pitched our AI project to Google about a year ago. They moved us to the next stage, asked for business documents and a proof of concept that it works, and then just vanished.

Just like what other people complain about with their job interview process: there was no response from any of their team members, no email, no rejection. I know ideas are a dime a dozen, but please don't stomp on a startup's dreams, no matter how tiny we are.


While I don't completely disagree with you,

If somebody known to kidnap dogs opened a free doggy daycare service, I would expect people to question his motives.

By doing this, I suspect Google will get access to a significant number of social datasets that were previously not available to it.


Well that's not a rhetorical question really. I think based on recent past experience with Google people have actual concerns about whether it will be for social good in the long term. For me this is similar to what's happened in the GMO space. Potentially great technology that gets more of a bad name than it deserves because the players involved have done some really disturbing stuff before they ever got involved in GMO. Getting back to tech, would it be overly negative to have a little healthy paranoia about Facebook contributing to privacy research?


I'm out of the loop. What has the food/GMO industry done that is disturbing?


The industry as a whole? I'm not sure I could make that claim. And I'm somewhat well informed having close family that work in agriculture. But take one major player as an example, Monsanto (now owned by Bayer):

- Major producer of DDT. Evidence that DDT was causing widespread environmental damage was clear by the 1960s but it took the US government until the 1970s to finally ban it. Then the industry sued the US government to try to overturn the ban. Who tries to continue selling a product that has caused widespread environmental damage?

- Producer of Agent Orange. Another "perfectly safe" chemical. Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems as a result of Agent Orange contamination. I'm not sure how many US veterans were affected.

- Major producer of PCBs, which were later banned for being highly environmentally toxic and for causing cancer in animals, and probably humans too.

- Producer of bovine growth hormone, another "perfectly safe chemical". rBST has been banned from the market in Canada since at least 2000, and in the European Union since 1990. Other countries have banned it too. I don't think the studies on how it might affect humans are conclusive, but there are concerns. The studies on how it can negatively affect cows are certainly more solid.

- They have been convicted of bribery in order to get government approval.

At what point do you stop trusting? And again, I don't mean the science and GMOs themselves. I mean the company that is making claims that GMOs are perfectly safe for both humans and the environment. Scientists are not so bold. Read the studies and you'll see disclaimers and healthy doubt, usually specifically spelled out. Disclaimers and doubts that are never mentioned by Monsanto when they publicly speak about their products. If they started being far more careful with their "perfectly safe" rhetoric, I might change my mind about them. So far, that hasn't happened.

I can't say if on the whole Monsanto has been somewhat responsible and well-behaved as a company. But I can say that based on just the list above, I would be very hesitant to trust any claims they make about the safety of any agricultural product, especially those involving chemicals.

I'm sure that someone can / will come in with a defense of Monsanto or arguments about each point above. The devil surely is in the details. But who has time to sort through hundreds of studies and court cases? Fair or not, we will continue using heuristics to decide who we trust. Maybe Facebook does in fact treat my data ethically and each time they get caught with privacy concerns or taken to court it was "just an honest mistake". Are you willing to spend tens or hundreds of hours of your free time trying to figure that out? Or will you use a heuristic and just be cautious with trusting what Facebook will do with your data?


They managed to revolutionize agriculture and stave off the starvation apocalypse that the people of the '50s and '60s were predicting.

... oh, I misread your question; I thought you asked "How did the food / GMO industry save the world?"


And now everybody has cancer and the food is tasteless.


I'm seeing little negativity, just the realities of dispensing with unwarranted optimism. Short of pithing myself, it's going to be hard to ignore what Google is and assume this will work just because they throw a (for them) small amount of money at something which so far only sounds good. My experience of Google is that they excel at marketing their own supposed best intentions while delivering the worst of them.

It's not negativity to have been conscious and capable of forming long-term memories for the past decade, sorry. I'm also wondering if you have now, or have previously had, any relationship with Google?


No, I don't have and have never had a relationship with Google, except as a (non-commercial) user of their services. I would have given a disclaimer had I had one.

> My experience of Google is that they excel at marketing their own supposed best intentions while delivering the worst of them.

My personal experience has actually been quite the opposite. I think had they been marketing their services properly (and not just to geeks), they'd be in a much better place than they are today. I think Apple is the world-class standard when it comes to marketing.


[flagged]


I work in Google Cloud and this comment makes me sad. Not everyone is a sellout and I didn't forfeit my soul for a nice paycheck.

What motivates me to go to work every day is making our customers' lives as easy as possible. And not because I want to make Google even more profitable, but because running on GCP is greener. In fact, since 2017 all Google operations have been offset by renewable energy: https://sustainability.google/reports/environmental-report-2... .

So, please don't generalize. I firmly believe that the vast majority of Googlers still care and are not evil.

(These are my personal thoughts, and by no means do I speak on behalf of Google itself.)


Who would you rather have it governed by? Facebook? Microsoft? Amazon? Wall Street? DARPA? US Govt? EU? China? Russia?

At an abstract level: any organization, run by humans, with a lot of power over critical aspects of our life is bound to get corrupted by that power. It's just that some organizations are less bad than others. And I believe Google isn't much worse than many other realistic alternatives.


That's a false, uh, decachotomy (?), it doesn't have to be a centralized organization of any kind at all


It doesn't have to be, but things seem to be inevitably gravitating towards a centralized model because it seems people care the most about their User Experience and everything else is secondary (privacy, monopoly, monoculture, worker abuse, etc).


The word you are looking for is dichotomy.


I think the parent commenter wanted to emphasize that there are more than 2 options (di- = 2, deca- = 10) to choose from...


But false decachotomy would imply there are more than 10 options. Seems weirdly specific.


Such an organization, designed to respect input from consumers, workers, and global community members affected by negative externalities might be called a "co-op."


It's a matter of oversight, transparency, accountability and motivation. In the case of Google, they have very little oversight, nearly zero transparency, they are accountable only to their shareholders, and their motives are exclusively profit.

They have no interest in, nor even the means of, producing a publicly good internet. They can and will only produce an internet that is best suited to make them money, and with that there will be inherent conflicts with what is best for the users.

At least with a government you get some transparency and oversight, public control, and no profit motive (if you cap campaign financing, that is).

Different organizational structures are better suited to different things.


That's exactly the kind of unreasonable hatred I'm talking about. Wise people would understand that these companies (including the ones you think are much better) are only as 'evil' as the law of the land allows them to be, and most of them are 'for profit' and will do everything they can to maximize profit while staying within the legal framework. Which company do you like instead?


They are spending millions of dollars lobbying and poisoning the law.


And again, all companies with enough money to be able to do that are doing that.


That is why I avoid all these companies. Everybody has the option to do that as well.


Chill, dude. I have almost never seen such generalizations be justified. "Sellout"? Please.


Are they going to patent the idea behind your back as well?


> Who owns the intellectual property created by the grant recipients?

> We believe that projects supported by our funding should be able to benefit everyone. If you are selected to receive a grant, the standard grant agreement will require any intellectual property created with grant funding from Google be made available for free to the public under a permissive open source license.


And for the projects that get denied?


*handwaves* These are not the ideas you thought of.


> be made available for free to the public under a permissive open source license.

so nobody gets trade secrets and nobody gets patent rights

which also sucks


I disagree. If the goal is to encourage people to build things that benefit everyone rather than just those who built it originally, this is a good way to do that. If it were me I would go a step further and require that it be released under a copyleft license rather than a permissive one, but Google seems fairly averse to copyleft licenses, and they're the ones funding this.

I'm sure there are plenty of VCs out there that would fund the development of proprietary machine learning software. This is something different.


Do you have any more info or reading on Google and copylefts?


If you look at their FOSS projects you'll notice they generally tend to be released under the Apache license. A look at their license requirements will tell you why - Google employees are required to use it unless they have a reason not to[0] and appear to be strongly discouraged from using copyleft licenses[1].

[0] https://opensource.google.com/docs/releasing/preparing/#lice...

[1] https://opensource.google.com/docs/thirdparty/licenses/#rest...


Thank you.


"AI for social good" is one of those things that is kind of absurd in its very premise.

AI is not a thing we program to then deliver abstract terms like social good. It's a thing we program to do specific things which might then end up being used to do social good.


Yeah... who ever suggested "AI IS a thing we program to then deliver abstract terms like social good"? Are you refuting a notion that no one holds? Of course the contest at hand is to use AI technology to solve various problems we deem to be socially good. Not to rigorously define social good and have an AI solve it.


The deadline for applications is in two days: January 22.

Looks like this was launched a few months ago?


Yes, this was online before Christmas (at least).


What is AI?


SELECT with GROUP BY.
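
For the uninitiated, a minimal sketch of that definition of "AI", using Python's built-in sqlite3 (the table and data here are made up): aggregate historical data with GROUP BY and call the most frequent outcome a "prediction".

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE clicks (user_segment TEXT, ad TEXT)")
  conn.executemany("INSERT INTO clicks VALUES (?, ?)",
                   [("geek", "gpu"), ("geek", "gpu"), ("geek", "keyboard")])

  # The "model": for each segment, predict the most-clicked ad so far.
  query = """SELECT user_segment, ad, COUNT(*) AS n
             FROM clicks GROUP BY user_segment, ad
             ORDER BY n DESC LIMIT 1"""
  print(conn.execute(query).fetchone())  # ('geek', 'gpu', 2)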


That is an advanced one. I thought the current state of the art is the fine skill of throwing things against a wall and seeing if they stick.


What you just described is called reinforcement learning.
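
To be fair, the joke is not far off: here is a minimal sketch (all names and probabilities invented) of epsilon-greedy trial and error, i.e. "throw things against the wall and keep what sticks", with some bookkeeping.

  import random

  STICKINESS = {"spaghetti": 0.2, "magnet": 0.5, "tape": 0.8}  # hidden from the learner
  value = {a: 0.0 for a in STICKINESS}  # running estimate of each action's payoff
  count = {a: 0 for a in STICKINESS}

  for _ in range(1000):
      # Mostly exploit the best estimate so far, occasionally explore at random.
      a = (random.choice(list(STICKINESS)) if random.random() < 0.1
           else max(value, key=value.get))
      reward = 1.0 if random.random() < STICKINESS[a] else 0.0  # did it stick?
      count[a] += 1
      value[a] += (reward - value[a]) / count[a]  # incremental average

  print(max(value, key=value.get))  # almost always "tape"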


Hands down the best (and most painful) joke today.


Here's the simplest AI example I have found so far. It is called Binary Driver AI.

  if (car.distance < 5) {
      [car slowDown];
  } else if (car.distance >= 5) {
      [car speedUp];
  }

This AI was created by Apple. You can see it on page 81 of this PDF:

https://devstreaming-cdn.apple.com/videos/wwdc/2015/608rpwq1...

So "AI" apparently means any algorithm that can be run by a computer.


Bigger question: what is I?


Can't answer that question with a thought, because it precedes thought.


Imperfection, inability, incorrectness... running out of ideas.


apple


No, Google in 2003 may have, but today's Google is only interested in exploiting your friends and family.


Spec work?


AI communism is here


[flagged]


Yikes, keep this shit off HN; nobody wants to know what you think about this. I am partial to Marx myself - capitalism is an amoral excuse for greed that has repeatedly destroyed itself and had to be saved many times - and I would hate to have to fight with people on the internet when I was hoping to enjoy today.


It seems like you, sir, either don't have enough facts or experience, or are just plainly evil and openly advocating for a large government. I'm going to go ahead and politely disagree. We can leave it at that.


I’d love to discuss it some other place, but this seems counterproductive to civility here.


Off the main topic, but I'm a scientific method believer at heart and I hope we can improve or invent a better system than capitalism one day. If we treat social science more like a science then there's no need to use words such as "crap" or "only". Instead we should keep our heart open for its improvements, maybe even borrowing ideas from existing theories such as socialism.


Absolutely! All hands up for improvement (there are always opportunities for further enhancements) and keeping our minds (not hearts) open.


[flagged]


I don't imagine a whole lot of altruists donate significant sums of money to projects that run contrary to their beliefs.



