- much more multitask learning and multimodal learning
- more interesting on-device models (on consumer devices, like phones) working more effectively.
- AI principles-related work is going to be important.
- ML for chip design
- ML in robots
"Dean said the company is thinking of including more information in Google search results to give users a predicted carbon output for choices they make, like ordering a certain product."
An AI system that predicts carbon output for consumer choices and delivers that to consumers could be useful in battling climate change, even if only in a small way.
Of course the engineering is important, but politics is 1000% the roadblock on climate action, and that is where efforts should be focused.
Are all humans fungible? You don't think it's valuable to have some errant geniuses trying to think of weird and creative ways to solve climate change through unconventional engineering? It's not as though they'd be good politicians.
That needs to be made abundantly clear to everybody.
If Google is asked "what are you doing to address Climate Change?" and we hear predominantly engineering talk, then they're being disingenuous.
- UK: Met Office https://www.metoffice.gov.uk/about-us/careers/vacancies
- France: Institut Pierre Simon Laplace https://www.ipsl.fr/en/ (their job page is busted)
- Germany: Max Planck Institute https://www.mpimet.mpg.de/en/institute/opportunities/
- Netherlands: KNMI https://www.werkenvoornederland.nl/organisaties/ministerie-v...
I'm sure there are more, but these are the ones I've worked with in the past. European climate research agencies seemed to have their act together a lot better than US ones. Before I left the industry, they were much further along in moving computation to the commercial cloud (instead of trying to keep up in the supercomputer arms race, which always seemed like a losing proposition to me), and ML has been increasingly integrated into climate models as a way to approximate complex dynamical systems.
In terms of which of these labs had their head screwed on straight from a technical side, it was probably KNMI / Met Office, followed by Max Planck and then IPSL.
One of the clear recommendations: contribute to Julia. A lot of what Bret Victor said in 2015 has actually panned out. Julia has become a powerful language for scientific computing and machine learning, and as a result it is being used in climate projects such as the Climate Machine (MIT and Caltech). The Julia Lab at MIT participates in this project:
Climate Machine: https://clima.caltech.edu/
Julia Lab: http://julia.mit.edu/
Contributions to Julia packages (compiler, stdlibs, math packages, ML packages, parallel computing) will end up finding their way into climate research, because of the extensive reuse of code within the Julia ecosystem. In particular, capabilities such as Zygote.jl (https://github.com/FluxML/Zygote.jl) for differentiable programming have the potential to make it dramatically easier to apply ML techniques to scientific codebases. Compiler contributors are hard to come by, so all contributions to compiler technology are incredibly valuable. The DiffEqFlux.jl ecosystem in Julia is a good example of combining mechanistic models with ML (https://github.com/JuliaDiffEq/DiffEqFlux.jl). Hop onto the Julia Slack or Discourse to dig in deeper.
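To make "differentiable programming" concrete: Zygote.jl does reverse-mode, source-to-source AD on ordinary Julia code. As a rough illustration of the underlying idea (not of Zygote's actual mechanism), here is a minimal forward-mode sketch in Python using dual numbers; all names in it are made up for this sketch:

```python
# Forward-mode automatic differentiation via dual numbers.
# The point: you write a plain numeric program, and exact derivatives
# fall out of overloaded arithmetic, with no symbolic math and no
# finite differences.

class Dual:
    def __init__(self, val, der=0.0):
        self.val = val   # function value
        self.der = der   # derivative w.r.t. the seeded input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed x with derivative 1.0 and run the unmodified program."""
    return f(Dual(x, 1.0)).der

# Any plain numeric "model" works unchanged, e.g. a cubic term:
model = lambda x: 3 * x * x * x + 2 * x + 1

print(derivative(model, 2.0))  # d/dx (3x^3 + 2x + 1) at x=2 -> 38.0
```

This is why differentiable programming matters for scientific codebases: the existing simulation code can stay as-is, and gradients (for fitting, control, or embedding in an ML model) come from the tooling rather than from rewriting the physics.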
Another thing Bret Victor speaks about in his blog post is working with agencies such as ARPA-E on advanced projects. Julia Computing is participating in an ARPA-E project to bring these capabilities to many energy-related simulation and development technologies. This press release gives a broad idea:
ARPA-E press release: https://www.energy.gov/articles/department-energy-announces-...
Funded projects: https://arpa-e.energy.gov/sites/default/files/documents/file...
Julia Computing press release: https://juliacomputing.com/communication/2019/12/09/arpa-e.h...
I'm also working on something similar to this. Would you leave your contact info?
This paper seems relevant to the climate change part. This is more about what to do rather than technologies.
An example:
Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine from Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use ...
> The size of the direct rebound effect is dependent on the price elasticity of demand for the good. In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If in response, the amount of travel purchased more than doubles (i.e. demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur. If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.
> Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use
In other words, I can ask: does aggregate fuel usage depend on fuel efficiency? If I state the question this way and try to answer it, it becomes obvious that the naive model is incomplete.
This may seem pedestrian, but I find it genuinely hard to conceptualize what structure was needed to recognize a poor prediction model and to consider a better one. That seems very different from what AI is today.
The bad intuition is if X units of a resource are being used to achieve an outcome then providing the option of getting the same outcome with (X - something) will result in less resource use.
However, that is ignoring the economic principles of supply and demand to only focus on current use and demand. Reframing it from the supply side: it used to make sense to supply X units to do so much. Now supplying X units can do even more than so much.
So the economics of the situation are unlikely to cause a reduction in supply, because if it made sense to supply X units of resource before, it really makes economic sense to supply it now. In fact, since the resource is now more useful (efficiency rose) it probably makes sense to supply even more of it.
If Jevons's Paradox actually appears paradoxical, the root cause is a misunderstanding of supply and demand. If efficiency is modelled by moving the supply/demand curves around on a supply/demand chart it is pretty obvious what is going on; efficiency gains are equivalent to moving the supply curve under that model.
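The elasticity condition quoted above can be checked with round numbers. A minimal sketch, assuming a constant-elasticity demand curve (the functional form and all numbers are mine, purely for illustration):

```python
# Constant-elasticity demand sketch of the Jevons paradox.
# Assumption: service demanded S = k * p^(-e), where p is the price per
# unit of service and e is the price elasticity of demand.
# Doubling fuel efficiency halves the price of the service, so fuel
# use F = S / efficiency changes by a factor of 2^(e - 1):
#   e > 1 (elastic)   -> fuel use rises (Jevons paradox)
#   e < 1 (inelastic) -> fuel use falls

def fuel_use(efficiency, elasticity, k=1.0, fuel_price=1.0):
    service_price = fuel_price / efficiency
    service_demanded = k * service_price ** (-elasticity)
    return service_demanded / efficiency

for e in (0.5, 1.0, 1.5):
    before = fuel_use(1.0, e)
    after = fuel_use(2.0, e)
    print(f"elasticity {e}: fuel use x{after / before:.2f}")
```

With e = 1.5, doubling efficiency raises fuel use by about 41%; with e = 0.5, it cuts fuel use by about 29%. That is exactly the dividing line the quoted passage describes.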
Otherwise I cannot understand how he can really think that “more AI” is going to help with deforestation. “More AI” probably means lower overall costs for bad actors in the Amazon (where your major costs are people-related; it’s physically very demanding cutting down trees in an equatorial climate), which in turn means more trees being cut down. And this is just the beginning of it.
Training a machine learning model for any common task is getting radically cheaper and more energy efficient.
This is a combination of both better hardware (eg, Google TPUs) and better optimisation of training techniques (which is mostly done outside Google).
E.g., DAWNBench benchmarks training ResNet-50 to 93% accuracy. This used to take days.
The fast.ai group then showed you can do it in 18 minutes on 16 p3.16xlarge AWS Spot instances. This is a huge energy saving.
Huawei has since shown you can do it on 16×8 V100s (128 GPUs) on their cloud in less than 3 minutes.
That's the paradox. The cheaper energy becomes, the more uses we find for it.
> all the stuff we trained in our Google Data Center — the carbon footprint is zero. Because … basically, all of our energy usage comes from renewable sources.
This is what we have been seeing the past 10+ years, and one of the main reasons why renewables are starting to be directly competitive with fossil fuels in some regions, even without subsidies. This is a huge turning point that we are on the cusp of.
> VentureBeat: One of the things that’s come up a lot lately, you know, in the question of climate change — I was talking with Intel AI general manager Naveen Rao recently and he mentioned this idea [that] compute-per-watt should become a standard benchmark, for example, and some of the organizers here are talking about the notion of people being required to share the carbon footprint of the model that they trained for submissions here.
> Dean: Yeah, we’d be thrilled with that because all the stuff we trained in our Google Data Center — the carbon footprint is zero.
That is a lie unless Jeff Dean lives under different physics laws.
(disclaimer: work at G)
By way of analogy, the total industrialised world as of 1900 was mostly the US, UK, France, and Germany, with a few odd other millions from Canada, Australia, and bits of Europe.
Not only did those 200 million or so souls see a tremendous personal increase in services (and resources) consumed, but:
- The population of industrialised nations swelled by a factor of about 5, to 1 billion (the G7 + EU).
- Another two billion people grew in number and in economic activity in China and India.
- The Rest of the World grew to another 5 billion.
The G7 per capita GDP is about $40k. Depending on what figures you look at, China lands somewhere between about $8k and $14k/person. The rest of the world (including India) is lucky to hit an average of $5k, or about 1/8th the G7, whilst being 5 times its population.
Put another way, raising the RoTW to G7 standards would swallow 40x current global economic activity, and that's presuming the G7 + EU hold still, something few economic models allow for.
Or: Jevons has a lot of mileage yet.
Regarding Google's "zero carbon" datacentres: if you don't account for the rest of the market's response, this simply lowers costs for other entrants to stake claims Google has effectively abandoned, absent some means of otherwise increasing prices. Fighting resource utilisation by increasing efficiency alone is like fighting for peace or fcking for virginity. It tends to self-defeat.
To decrease overall utilisation, you've got to increase costs or decrease demand. Taxes, sink-utilisation charges (e.g., carbon cap-and-trade), or some form of suasion against resource utilisation might work. Efficiency itself simply lowers costs, which is to say, increases the supply function and moves total quantity consumed up.
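This point can be seen with a toy linear supply/demand model. A minimal sketch (all curves and numbers invented for illustration): an efficiency gain shifts the supply curve out, which raises equilibrium quantity; a per-unit tax pushes it back down.

```python
# Linear supply/demand sketch (illustrative numbers, not data).
# Demand: Q = 100 - 2p.  Supply: Q = d0 + 3p, where raising d0 models an
# efficiency gain (more supplied at every price), and a per-unit tax t
# means producers respond to the price p - t that they actually receive.

def equilibrium(d0, tax=0.0):
    # Set demand equal to supply: 100 - 2p = d0 + 3(p - tax),
    # then solve for the market price p and quantity q.
    p = (100 - d0 + 3 * tax) / 5
    q = 100 - 2 * p
    return p, q

p0, q0 = equilibrium(10)            # baseline
p1, q1 = equilibrium(25)            # efficiency gain: supply shifts out
p2, q2 = equilibrium(25, tax=10)    # same gain, plus a per-unit tax

print(q0, q1, q2)  # -> 64.0 70.0 58.0
```

The efficiency gain alone moves quantity consumed from 64 to 70 (prices fall, consumption rises, as with Jevons); adding the tax drops it to 58, below the baseline. Hence the comment's point: efficiency shifts supply out, and only a cost increase actually curbs total utilisation.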
Some highlights include:
- Jeff Dean's PIN is the last 4 digits of pi.
- He once shifted a bit so hard it ended up on another computer.
- He wrote an O(n^2) algorithm once. It was for the Traveling Salesman Problem.
- Jeff Dean once implemented a web server in a single printf() call. Other engineers added thousands of lines of explanatory comments but still don't understand exactly how it works. Today that program is known as GWS.
- There is no 'Ctrl' key on Jeff Dean's keyboard. Jeff Dean is always in control.
- Jeff Dean's watch displays seconds since January 1st, 1970. He is never late.
- Jeff's code is so fast the assembly code needs three HALT opcodes to stop it.
At a Hacker Jeopardy some time ago (I think it was the one at 29C3), the final round ended in a tie, so a tie-breaker was needed. The tie-breaker question was "What is the current Unix timestamp?" The contestants struggled hard, leading the moderator to exclaim "For god's sake, don't you ever check the clock!?"