Modern Weather Forecasts Are Stunningly Accurate (theatlantic.com)
172 points by scott_s 21 days ago | 117 comments

Just a little knowledge to drop, as a bachelor's-degree meteorologist and also a coder / HN'er.

Don't use your built-in phone app for forecasts. Don't expect hour-by-hour precision (maybe for the next few hours, sure, but not 2, 3, 4, 5 days out). Trust nobody except the trained professionals at the NWS. These people look at ALL the models; they watch things like a hawk. I work with some of these nerds and it's crazy. They are like 'oh Jimbob, did you see the latest GFS kicking the low up into Canada?' 'Oh yeah Doug, that is crazy, but the Euro says blah blah and the NAM says blah blah...' Their job is literally to make a forecast, for the same area, every day. Some people do it for years and years! Think of the experience, wisdom, little tricks and tips, etc. Just go to weather.gov and look at it for 2 minutes. Read the forecast discussion if you want to get your details.

I find Dark Sky to be the best forecasting tool for weather within the hour. It's almost always ahead of all the other apps in terms of the latest forecast.

I bought the app because of its hyperlocal forecasting, but it's been known to predict precipitation where there's none, and vice versa.

I wonder if it does better in some geographies than others (e.g. high pressure system areas where things are more stable)

Hyperlocal is a joke, sorry to say it. Just because you can downscale your grid to a very small number doesn't mean good data. It just means you can do statistics and mathy stuff to interpolate. Everyone forecasts better in stable times, such as under a strong high-pressure ridge. The tricky part is the edge areas between air masses, the transition zones. This is where all the juicy weather actually happens, and it is the hardest to model.

Totally agreed on it being a marketing term, not a real prediction improvement - it's essentially the same as any other app in terms of "will it rain" / etc.

But they do a much better job of notifying about changes based on where I physically am, instead of for the city / region as a whole. Other weather apps I've used don't differentiate between "rain falling now 50 miles away" and "rain falling on my head", even though the radar map clearly gives them that info at a much finer level of detail.

It's pretty accurate here, when it works. I'd guess at least 40% of the time it states that the local radar is down and refuses to give that kind of forecast.

One of the things I loved best about Wunderground was their prominent display of the textual SFD. Now it's gotten a lot more mainstream, and its forecasts are still increasingly accurate, but the SFD link has been dropped from the forecast page for any particular area.

Now my source is directly from NWS page, which is really nice since forecaster terms can be clicked to bring up a definition. https://forecast.weather.gov/product.php?site=LSX&issuedby=L...

I think Metcheck is the UK equivalent of this. To pick a few random pages of turbo-nerdery (if you need a UK postcode to try, the National Museum of Computing is at MK3 6EB):




You can still find a link to it on the 10-day tab, but they did hide it.

In the Hudson Valley, we have (a) local weather nerd(s): https://hudsonvalleyweather.com .

It's especially nice for upcoming storms because they break down the region logically based on the relevant features (the river valleys and mountains), for instance: https://hudsonvalleyweather.com/wp-content/uploads/2019/01/1...

Don't all those apps just get their data ultimately from the NWS?

Some commercial forecasters have a "wet bias" (https://en.wikipedia.org/wiki/Wet_bias). They intentionally overestimate the odds for precipitation on the theory that people are happier if it doesn't rain when they were planning on rain than they would be if it rained when they did not expect it.
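To make that theory concrete, here's a toy decision-cost sketch. The costs, the 30% decision threshold, and the probabilities are all invented for illustration; this is not any forecaster's actual method, just the asymmetric-regret argument in code:

```python
# Toy illustration of why a "wet bias" can look rational: users plan around
# rain when the stated chance crosses some threshold, and an unexpected
# soaking annoys them far more than an unnecessary umbrella.

COST_UMBRELLA = 1.0   # mild annoyance: carried an umbrella for nothing
COST_SOAKED = 10.0    # major annoyance: caught in the rain unprepared
THRESHOLD = 0.3       # user carries an umbrella if stated chance >= 30%

def expected_annoyance(true_prob: float, stated_prob: float) -> float:
    """Average annoyance for a user who trusts the stated probability."""
    if stated_prob >= THRESHOLD:
        # User prepares; pays the umbrella cost if it stays dry.
        return (1 - true_prob) * COST_UMBRELLA
    # User doesn't prepare; pays the soaking cost if it rains.
    return true_prob * COST_SOAKED

true_p = 0.2  # actual chance of rain
honest = expected_annoyance(true_p, stated_prob=0.2)     # user skips umbrella
inflated = expected_annoyance(true_p, stated_prob=0.35)  # user carries one

print(f"honest forecast:   expected annoyance {honest:.1f}")    # 2.0
print(f"inflated forecast: expected annoyance {inflated:.1f}")  # 0.8
```

Under these made-up costs, nudging a 20% chance up to 35% leaves the average user less annoyed, which is exactly the incentive the wet-bias article describes.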

Yes, this is true to some degree. However, you don't know what you are getting with an app. If you go read the official NWS page, the forecasts are human-curated by a pro. Apps are likely just taking model output directly and feeding it to you as gospel.

If you really care about short-term weather forecasts, learn to read surface pressure charts and rainfall radar. It's not particularly difficult and it gives you a huge amount of additional information.

A summary forecast can only tell you so much, especially in changeable conditions and complex microclimates. Knowing the range of probable conditions is often much more useful than knowing the most probable condition.

How do you suggest really learning to read the surface pressure charts? I have gotten to the point I understand what I'm looking at, but can't really turn the factual knowledge of watching the highs and lows and winds moving around into real understanding & forecasting.

I made a website that displays hourly weather forecasts from the NWS: https://www.subraizada.com/weather/kbmg

You can make a version for your location here: https://bitbucket.org/subraizada3/weather-generator/src - it should just generate a single HTML file which you can host anywhere.

An older version also showed the 'normal' weather.gov forecast embedded to the left of the hourly forecast, you can get that if you copy/paste the deleted part of this commit back into the python file: https://bitbucket.org/subraizada3/weather-generator/commits/...

I get my weather straight from the UK Met Office page for that reason.

It has much more detail which helps as a cyclist, there is more than just average wind speed and temp.

Many folks, when hearing forecasts have improved, will disagree. They can think of many times when the forecast was wrong, perhaps hilariously or disruptively so.

This is textbook availability bias. You overestimate the failure rate because the failures were memorable, especially the major failures. But the far more frequent accurate predictions are not recalled, because they don't leave as much of an emotional impression. For the same reason folks regularly overestimate crime rates, deaths from terrorism or rare diseases.

Meteorology has an entire subfield devoted to studying the quality of forecasts[0]. Before you bandy about numbers and anecdotes, stop and tell me what measure you are using. Is it percentage of degrees centigrade? Well you can't, degrees centigrade is an interval measure, not a ratio measure. If you use Kelvins, which has a mathematically meaningful zero, the percentage accuracy is suddenly very good. How do you count near misses of very intense weather? Direct hits by systems that were less intense than expected? How do you account for systems with very gradual gradients over very wide areas? How do you account for being early by an hour or late by an hour, but nailing the storm surge? How do you score for confidence? What's your rule for weighting false alarms? How important is mean error vs absolute error vs variance, and why?

Solve these and dozens of other problems of describing "accuracy" and weather forecasters might take you seriously. But until then it's probably worth accepting that they are the most effective profession of their kind and that we have a lot we can learn from them.

[0] http://www.cawcr.gov.au/projects/verification/
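Two of the standard scores from that verification literature can be sketched in a few lines: mean absolute error for temperature (which sidesteps the centigrade-percentage problem, since differences on an interval scale are meaningful even when ratios are not) and the Brier score for probability-of-precipitation forecasts. All numbers below are invented:

```python
# Minimal sketch of two standard forecast verification scores.

def mean_absolute_error(forecast, observed):
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(forecast)

def brier_score(prob_forecast, rained):
    # rained: 1 if measurable precip was observed, else 0; lower is better.
    return sum((p - o) ** 2 for p, o in zip(prob_forecast, rained)) / len(prob_forecast)

temps_fcst = [72.0, 68.0, 75.0, 80.0]  # forecast highs, deg F
temps_obs = [70.0, 69.0, 78.0, 79.0]   # observed highs, deg F
print(mean_absolute_error(temps_fcst, temps_obs))  # 1.75

pop_fcst = [0.9, 0.1, 0.5, 0.3]  # stated chances of rain
rain_obs = [1, 0, 1, 0]          # did it actually rain?
print(round(brier_score(pop_fcst, rain_obs), 4))  # 0.09
```

Note that neither score, by itself, answers the harder questions above (near misses, timing errors, false-alarm weighting); real verification uses a whole battery of measures.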

Often I can't even say for sure if the forecast is wrong or not.

For example, what is a 10% chance of rain at 10? That could mean 10% chance that rain will occur at some point from 10 to 11, 10% chance that it will be raining at a randomly selected point between 10 and 11, or 10% chance that it will continuously rain from 10 to 11. Also, it could be any of those three, except from 9:30 to 10:30. Also, it could be a 10% chance of raining at exactly 10.

So right there, I see 7 possible interpretations.

Add in a map, and we can apply those to an area. It may be a 10% chance of raining at some place within this region, but any one place gets a smaller chance. It may be a 10% chance everywhere in the region, but a 99% chance that some part of the region gets rain.
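For what it's worth, the NWS's published definition resolves part of this ambiguity: PoP is the forecaster's confidence that measurable precipitation will occur somewhere in the forecast area, times the fraction of the area expected to receive it. A sketch (the scenarios are invented) of how two very different situations produce the same number:

```python
# NWS probability of precipitation: PoP = confidence x areal coverage.

def pop(confidence: float, areal_coverage: float) -> float:
    """Chance of measurable precip at any given point in the forecast area."""
    return confidence * areal_coverage

# 50% sure precip develops at all, but if it does, it covers 80% of the area:
print(pop(0.5, 0.8))  # 0.4 -> reported as "40% chance of rain"

# Certain that storms will fire, but only over 40% of the area:
print(pop(1.0, 0.4))  # 0.4 -> also "40% chance of rain"
```

So two forecasts that feel completely different to a forecaster can legitimately reach the public as the same percentage.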

What it really means in terms of exact timing is that an AWOS (weather station) will record measurable precip between the 1000 report and the 1100 report.

Sometimes percent means forecast confidence, sometimes it means that they can't predict what area will get hit with rain. Like synoptic conditions will say "this region will definitely get a few thunderstorms", but when you drop down to mesoscale then you can't tell which city will actually get the rain.

In longer-term forecasts, 10% usually means "I have no idea what's going to happen, so I guess precip is possible then." At mid range, like 3-5 days, it usually means "we haven't nailed down the timing of this thing yet".

If you actually want to know what the logic is, go read the forecast discussions.

Validating the wx is extremely hard. For instance, if you make a forecast, what do you use as the 'truth'? A weather station? Trust me, I've worked with plenty of station data, and it is trash. So now you are comparing a weather forecast against some suspect trash validation set. Nobody knows!

The fact is, predictions will keep getting gradually better, more high-res. But the fact is our atmosphere & earth system is insanely complex. The system is chaotic, so tiny small changes in each model run produce different results... We have code to represent some basic physics but still don't understand many of the important things. We still don't know exactly the rain process & snow formation process. We are trying to pinpoint the exact moment when dust & water vapor magically form into a cloud droplet so we can better describe the rain process. THEN we have to translate all of this insane math and physics into code.

Hi fellow meteorologist hacker. I too have a B.S. in atm sci and enough years in grad school that I'm still paying for it lol.

Yeah, the extent to which models are verified against their own data is pretty interesting -- even the reanalysis data only represents a best guess of a place with physics similar to Earth. It includes a lot of advected observations, thousands of miles from the original observation site, but still useful because at least you know something about the air over the oceans.

Even if we really do get a good handle on cloud physics, ultimately there are limits on how much we can model. Parameterizations have to happen, because of limits to computer time -- a 24h forecast completed in 26h is worthless. I'm honestly surprised that they've been pushing to get models to third order and higher approximations, because the errors in your initial conditions are larger, but apparently every little bit helps.

But forecasts are pretty good considering that weather modeling is an initial value problem where we only know the initial conditions at a few observation points of varying quality, a boundary value problem where the boundaries can have complex interactions with the internal state, and a simulation whose run time really matters.

You can use this site to compare the different models using statistics.


ForecastWatch, a company solely dedicated to analyzing the accuracy of weather forecasts, has a good study available: https://www.forecastwatch.com/wp-content/uploads/High_Temper...

You can also check accuracy for your local zip code (in US) at: https://www.forecastadvisor.com/

tldr: for the best US consumer weather forecasters, the mean error on high/low temps for 1-5 day forecasts is around 3F, and they hit within 3F about 70% of the time.

7-day has declining accuracy and 10-day is not much better than statistical analysis on historical trends.
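That last point, a long-range forecast barely beating history, is usually framed as "skill versus climatology": a forecast only has skill if it beats a naive baseline like the historical average for that date. A toy sketch with invented temperatures:

```python
# Compare forecast error against a climatology baseline. A forecast with a
# larger mean absolute error than climatology has no skill. Numbers invented.

def mae(pred, observed):
    return sum(abs(p - o) for p, o in zip(pred, observed)) / len(pred)

observed    = [61.0, 58.0, 70.0, 64.0, 55.0]   # actual daily highs
day1_fcst   = [63.0, 57.0, 68.0, 65.0, 53.0]   # short-range forecast
day10_fcst  = [55.0, 65.0, 62.0, 70.0, 61.0]   # long-range forecast
climatology = [60.0] * 5                       # historical average high

for name, pred in [("day 1", day1_fcst), ("day 10", day10_fcst),
                   ("climatology", climatology)]:
    print(name, round(mae(pred, observed), 1))
```

In this made-up sample the day-1 forecast easily beats climatology while the day-10 forecast does not, which is the shape of the real result: accuracy decays with lead time until you might as well consult the historical record.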

If you live near where the jetstream boundary usually lies, forecasts are generally less accurate because small, less predictable shifts can have a large impact on daily weather.

Yes! As a west coast native and sometime visitor to the midwest, I'd say that is true for the whole western US at least. If you learn to read the forecasts and maps you can actually start to grok it and intuit the mechanics.

You can start to imagine the jet stream whipping like a hose and understand that the exact rate of its swing might vary, causing weather fronts to sweep across the plains a little earlier or a little later, or steering the atmospheric river a little north or south as it sprays the coast. This is the mechanism behind those sudden wild changes in Chicago, when you switch from northern to southern weather in a single day.

And in the short-term data, you can watch a radar animation and get pretty good at predicting when a rain wall or big thunder cell is going to cross your location, and what to expect on a quick errand.

The only thing you cannot reason your way through is when the maps indicate large areas of instability and you really just have to set your mental threat condition and wait to see whether something erupts over your head or not. Afternoon thunderstorms in the mountains or coasts often have this uncertainty to them.

Nah, midlatitude west coasts are worse because of the lack of good data over the oceans. Satellites can't tell you as much as you think, especially in the vertical direction. Knowing how much water is in a column of air is useful, but not as useful as if you knew precisely which layers it was in.

A considerable amount of data in models for California is from observations that were made in China and Japan, that flow with the wind across the Pacific. Or conversely, you can look at it as a data hole that advected across the ocean, too.

I'd like to note that of the few blown forecasts we get in Southern California, almost none of them have to do with the jet stream. It's actually the cut-off lows offshore that don't have steering winds, which makes it difficult to predict where they will go next. The same is true with hurricanes.

Certainly weather forecasts are generally more right than wrong, but for a science, you expect it to be closer to 100% correct than 80% correct.

We don't care if weather forecasters take us seriously. The weather forecasters should care whether we take them seriously.

For example, just the other day in NY, we had a forecast of light rain. We got a snow squall instead. That sure was fun.

People aren't complaining about a forecast of 80 degrees but it was 81 instead. It's those forecast that call for light rain and you get a nice fun snow squall.

I worked outside every day for 4 years at least. Weather forecasts are so unreliable I stopped listening. I had better luck figuring out whether I'd need rain gear that day by just waking up and looking out the window in the morning than by looking up or listening to a weather forecast.

You really start to notice how wrong they are when you spend every day outside. I don't notice any more, working inside again. But when you rely on weather forecasts every day to figure out how miserable you're going to be... you really notice... they might as well just be making things up half the time.

Hmmm. If you need down-to-the-minute accurate timing of when rain is going to start and stop, sure, nobody can predict that, for every location in the USA, all the time. I have to question where you are getting your weather forecast information? I am a bachelor's-degree meteorologist, and I don't use any apps that try to be 'hyper local' (omg I hate that word, it is the weather version of AI/ML hype). Just go to the government site, www.weather.gov

We can't predict exactly when it will rain, but we can get daily trends pretty well for 3-5 days out. This means you are five days ahead of the weather. You can't know "it will rain from 5:00am to 9:30am on February 4th, 2019". But you can know if we have a synoptic-scale system moving through your area on that day, and know if there is a probability of rain.

Where do you work outside every day? I would be happy to give you a 5-10 day forecast right now, actually, and can tell you it will be good. I am supposed to be writing some tests and documentation, but I can drop some WX knowledge if you need.

I get the feeling that this "hyper-local" mentality contributed to NYC and surrounding areas being caught flat-footed by the first-of-season snowfall at the end of last year. I have little understanding of meteorology, but it was clear, just from the forecast maps, that a small change in the location and timing of the rain/snow line would make a big difference to the outcome, and that something more than was forecast was a distinct possibility.

As with many things, having a little understanding of the big picture can help you get more out of weather forecasts.

Your last statement is so true. An amateur forecaster can spend ten minutes looking at some big-picture (synoptic-scale) weather and know regional trends (this part of the country is dry, these guys are wet, humid, etc). This would be a pressure & wind map showing the location of highs/lows, maybe some 850mb moisture charts too... But you are painting broad strokes. All of these dumbos on HN want to know exactly when the rain is going to hit their exact location, or want the surface temperature of their street modeled to within 3 degrees C so they can decide whether or not to ride their bike. We just can't do that with the current science & computational landscape!

After you have the synoptic-scale picture, then you zoom in on your target area and get specific if you want to. This local expertise is where your NWS office comes into play. You want the local forecast from these guys!

It's not that, it's selective memory. Everyone remembers the blown forecasts. Even I do. Three or four years back NWS predicted a big Christmas storm in SoCal, but the damn cut-off didn't come on shore, and the resulting offshore flow made the day nice and warm.

And, Americans only: if you forecast temperatures of 99°F and the observed temperature is 97°, good job. But if the observed temperature is 100°, then you made a huge mistake. People need to know when you hit the century mark!

That snowfall last year was bizarre. We all knew it was going to happen - I remember thinking days in advance that I would probably leave work early. So I'm not sure why there was no salt on the roads in the NY area.

I don't work outside any more. But it would be more things like entire weeks of rain or sun forecast and instead would be the opposite all week. I could be sitting in the sun listening to someone on the local radio station using data from the local weather stations telling me it's raining. Or ya know, see it's raining in the forecast and open my door to a blizzard.

Hi brootstrap, since you're a specialist in this field, I'd be interested in your views...

Here in the UK, I am often interested in whether or not it will rain in, e.g., a 3-hour, 6-hour or 12-hour period. Either because I am going rock climbing, laying concrete, spraying weedkiller, painting outdoors, etc. Often this is very important (not just 'oh, if only I took an umbrella today').

I have a strong impression that the ability of the UK's Met Office to forecast this is very poor, such that it is only just worth consulting their forecasts. OK, sometimes we have settled weather and forecasting is easy (in these cases, looking towards the prevailing wind is also good). But in most cases the weather changes several times a day and the forecasts are poor. This can also be seen by, e.g., checking how the forecast for a specific day changes as the day gets nearer: every day you look, it tends to have changed markedly. Also, the UK Met Office and the Norwegian met office (significantly better than the UK one, but still not so good) generally disagree with each other.

I really cannot square my 10+ years of 'studying' weather forecasts and the weather with what you are saying (in the case of the UK, specifically the Pennine hills of northern England). And 'stunningly accurate' is just a bad joke.

I presume that precipitation is somehow hard to model, compared to windspeed/pressure/temperature (which seem to be forecast more accurately, though I care about them much less)?

Howdy from across the pond! To be honest, I have almost no experience forecasting for Europe, so sadly I can't help too much with the specifics. You are right, predicting rainfall at small scales is very hard.

I know that here in the midwest, we never have any days where the forecast at 5am says 'all sunny and zero chance of rain', and then randomly a huge thunderstorm pops up. The atmosphere just doesn't work that way. People get upset about probabilities of rainfall and all that. Just think of it from the model perspective. The model sees square grid cells that are many km across. If the storm shifts from one grid cell to the next one, that's not a big deal in the grand scheme of things. However, if you live in that grid cell, your forecast just changed from getting dumped on to clear skies (or vice versa).

Similar experience in the UK, in terms of whether it will rain or not. 3-hour, 6-hour etc forecasts are of little use: no better than looking out the window. Meanwhile wind speed and temperature seem OK on that timescale, but are (usually) less important to me than precipitation.

It depends a lot on what part of the world you are in - I think that the UK is particularly hard to forecast.

As an amateur astronomer/astrophotographer for the past 25+ years I've relied on long- to short-term forecasts. Will it be clear for an event I want to see? I can certainly agree the models have improved and are pretty good, much to my dislike on occasion, such as with the recent lunar eclipse: I was clouded out, and that looked likely days ahead of time.

One has to appreciate the complexity of modeling such a large and dynamic system as the atmosphere! There are cases where forecasting is more difficult; getting the temperature correct within a degree or two is certainly one of them.

Another issue is geography. I'm in the middle of the USA, and there is a significant amount of data collected over the continental US as systems approach me. For those on the west coast I can see why forecasting is more challenging, since there is far less data available to input into models (the current atmospheric state hundreds of kilometers out over the ocean). I'd suspect some European countries experience the same situation; those further inland benefit from increased modelling data.

Reading the comments here I'm a bit surprised at how many quibble over slight details. The improvements over the past decades in forecasting models and the supercomputers that crunch the data have been significant. In most cases, like the recent cold temperatures and snowfall amounts, models converge on a good solution as the event nears. This is consistent with the idea that better data input yields better data out.

The system is complex beyond our comprehension. If we really wanted to model weather and climate better, we would need to know more things. Things like: how many cow farts are there? I would be happy to build a system to ingest cow-fart obs and feed them into the numerical models. FPS, farts per sec, yo

> Meteorologists are increasingly uniting weather models and climate models, allowing them to project the general contours of a season as it begins.

Weather and climate models are gradually converging and both are getting incredibly good.

They work even in astronomy. You take a climate model, set the parameters for Mars or some exoplanet, and what you get is a relatively good Mars climate model, or a good principled guess at the climate of a tidally locked planet around an ultra-cool red dwarf star (TRAPPIST-1).

> In 2009, a back-of-the-envelope study estimated that U.S. adults check the weather forecast about 300 billion times per year.

That's every single adult in the United States checking the weather four times a day. A bit more than I would have expected…

This number probably includes things like swiping to the leftmost home screen on your phone to see the weather widget at the top. When it's that easy you can often check the weather unintentionally.

That's a 2009 study, so more like every time you open your flip phone.

Actually, now that I think about it, I have a weather complication on my watch–it's probably grabbing the weather every hour or so to keep it up-to-date.

I'm not in the US, but as I like spending my weekends doing outdoor activities I check weather forecasts during the week a lot to try and plan what I'm going to do and where. Four times a day doesn't seem that much to me - I'd imagine people who work outdoors and farmers have even more reason to check the weather frequently.

[NB I even check multiple sources of weather information: BBC, Windy and MWIS].

December-April, I check a few times a day and different sources for snow/ice since I do not feel comfortable driving and getting caught in it. The number of times increases especially if I'm traveling. So when you time-average this over the year, it makes more sense.

When there's a hurricane incoming during the season, I check more often than that, at least every time the NHC has a new advisory. I imagine severe weather events cause huge surges in activity, it's probably not normally distributed throughout the year.

I check the Air Quality Index probably 20X per day where I am in Thailand (Chiang Mai). Guess Americans are just neurotic when it comes to the elements.

Goes back to shopping for that NASA spacesuit...clean air wherever you go.

Floridian. During the summer months I check more often (when I'm heading out) so that I know if a T-Storm will be developing in the next couple hours.

Wait what? I grew up in Miami: the answer is always yes.

My weather check is normally looking out the window. I don't use forecasts very often.

Anyone who is a weather buff should try meteogram (https://play.google.com/store/apps/details?id=com.cloud3squa...). You can plot a huge host of parameters (temperature, dew point, pressure, humidity, precipitation, wind speed/direction, sun/moon azimuth/elevation, etc.) over time, and the insights are very revealing. You can also choose multiple data sources, and it even has options to display METAR data.

Just follow a few models for a couple of weeks and you will notice how inaccurate the weather forecast is.*

I follow ECMWF and ICON and a bit of the OpenWeather forecast, and the differences in forecasted temperatures are often 2-3 degrees C. And everything with a horizon over 72 hours is just numerology. (OpenWeather showed -19C for the early days of Feb just 2 days ago; today it is -2C.)

Now a model is just a model (I had some experience with industrial and financial models) so they are good at this and not so good at that. And they are recalculated and self-adjusted over time. I can understand.

But from the user perspective there is a huge difference between +1C tomorrow and -3C.

The Windy app uses 4 models and has a nice graphical comparison of forecasts.

* What I mean here is that meteorologists have a good understanding of the processes, but the numbers we are getting could just as well be random.
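One simple way to use disagreement like that, rather than despairing over it: treat the spread across models as a crude uncertainty bar. A sketch with invented temperatures (the model names are just labels here, not real feeds):

```python
# When models disagree, the spread itself is information: report the
# consensus plus the min-max range instead of trusting any single number.

forecasts_c = {"ECMWF": -1.0, "ICON": 1.0, "GFS": -3.0, "OpenWeather": 2.0}

lo, hi = min(forecasts_c.values()), max(forecasts_c.values())
mean = sum(forecasts_c.values()) / len(forecasts_c)

print(f"consensus {mean:+.1f}C, range {lo:+.1f}C to {hi:+.1f}C")
if lo < 0.0 <= hi:
    # The disagreement straddles freezing, the case that actually matters.
    print("models straddle freezing: plan for ice either way")
```

This is essentially what ensemble and multi-model products do graphically; when the range straddles freezing, the honest answer is "prepare for ice", not "+1C".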

The article discusses a scientific paper that has tried to quantify the accurateness of the forecasts, and you just hand-wave that away with your own anecdotes?

> But from the user perspective there is a huge difference between +1C tomorrow and -3C.

Really? I'd read those both as "near freezing, going to need a decent coat".

+1 means that the packed snow is going to melt on top and then probably re-freeze, making everything super slippery. So I need to remove as much snow from the paths as possible and spread sand, and remind the kids to be careful.

-3 means relax.

The temperatures given are usually air temperatures. If the ground is warm enough -- which it usually is in the cities -- then it doesn't really matter, the snow will melt in both cases.

Not here mate, not in one day that's for sure.

It is unfortunate for us that even tiny prediction misses around the freezing point have drastic differences in our experience. But it does not change that they are tiny prediction misses. We take for granted how good they have become.

Really. -3C is slippery roads and pavements. It matters if you walk or bike etc. But it is dry and feels warm. +1C is humid and feels cold.

The difference between +1 and 0 is the difference between me making the turn on my bicycle yesterday, and me sliding across the road on my side and then spending 15 minutes trying to fix the chain.

If they're both relatively rare for you you might not notice a difference, but if you encounter them more often you'd notice I think.

The windy.com site is really useful. It is the only way I know to get the ECMWF forecasts for free. As far as I know all the AccuWeather and similar spyware apps use the freely available but inferior GFS model.

There's a few places you can get ECMWF from, but usually your state meteorology institute will use it, even if it's not clearly advertised. (In Portugal they use ECMWF together with AROME)

Tropical Tidbits is a more US focused site with an emphasis on tropical storm season (hence the name) but they keep the models running & updating all year.

weather.us as well, they even have the 50 different ensembles for the Euro. Very surprised it's free but I ain't complaining.

oh man - you want to know something funny: at my startup we had a guy hack in a random weather generator to use in our models. I was like WTF, man!

The numbers you are getting are not random. Sure, the models are chaotic and non-linear, but they are still giving you hints at potential patterns, especially at long-range time scales. People need to understand that wx forecasts, especially 3+ days out, should be read as guidelines for what is LIKELY to happen. Say the GFS predicts two inches of rain for your city over the next week as a wave comes through. What if the next model run shifts that blob of rainfall 25 miles north? In model terms, global terms, 25 miles is nothing. But for you that is the difference between dry & soaked, and you think the model sucks.
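A toy sketch of that grid-cell point, with invented grid spacing and rainfall amounts:

```python
# A small shift in where the model puts a rain blob is negligible on the
# model's grid, but flips the forecast completely for anyone living in the
# affected cell.

GRID_KM = 25  # roughly the scale of a global-model grid cell

# Rainfall (inches) along a line of grid cells, run A vs run B, where run B
# shifted the whole blob one cell (~25 km) over:
run_a = [0.0, 0.0, 2.0, 2.0, 0.0]
run_b = [0.0, 2.0, 2.0, 0.0, 0.0]

total_a, total_b = sum(run_a), sum(run_b)
print(total_a == total_b)  # True: regionally, nothing changed

my_cell = 3  # index of the cell your city sits in
print(run_a[my_cell], "->", run_b[my_cell])  # 2.0 -> 0.0: soaked vs dry
```

Regionally the two runs are identical; locally they are opposite forecasts, which is why run-to-run wobble reads as "the model sucks" to anyone watching a single point.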

Do you know where to get historical forecasts easily (to gauge accuracy)?

Windy's NEMS model is awesome for complex alpine terrain.

In Italy forecasts are so unreliable they cannot predict rain in the evening from the morning.

I don't know if it's because current climate change is throwing off the models, but short-term forecast accuracy is terrible, and I'm recently acutely aware of that since I switched to public transport for my commute.

This is anecdotal, but I have observed, in my experience in Europe, that short-term forecasts are really inaccurate in countries like Belgium and Italy while, for example, they are really, really spot-on in mainland Poland. So my guess is that it really depends on the kind of climate and, perhaps, on the impact of climate change in that area.

I would attribute that to the substantial Soviet investment in meteorology.

I would rather attribute it to longer-term static patterns, especially in summer or winter, on the large plains in Poland, vs. more of an unpredictable sea climate in Belgium and Italy.

I know. I had been tending a large garden (roses and lawn) at my friend's house in Toscana this summer. And I had been looking for rain every day :-) and observing the change in forecasts and the actual result.

I've seen similar in California. Forecasts are really not always that accurate even within 24h.

I’m sure it’s easier to predict weather in some places versus others, but in San Francisco I find the weather forecast to be wrong more often than not. I’m often told it’s currently sunny or raining when it’s the opposite. And forget about accurate high temperature predictions for tomorrow. At this point I don’t know why I even bother checking.

If you're looking for a graph that shows more than a projected minimum/maximum for every day, take a look at Foreca's 15-day forecast page [0]. It shows percentile lines for the forecasted minimum and maximum, which give a much better idea of what to expect. It also forecasts global vs. local precipitation. Unfortunately it is only available in Finnish.

[0] https://www.foreca.fi/Canada/Vancouver/15vrk

Disclaimer: I used to work for MSN Weather, where we used Foreca's feed as the source for our weather data.

Back in the seventies and eighties there were four sources of impetus for improvement in highly parallel supercomputing. Military/Nuclear, high energy physics, oil and gas exploration (seismic analysis) and weather.

The weather people were the nicest. They could talk to almost anyone about their work!

I remember somebody at CSIRO talking to me about models becoming accurate at the 1 km square granular scale for a day ahead. I think we're well beyond that now, for both cell size and duration.

Everyone loves to talk about the weather, and Bob Dylan was wrong about not needing a weatherman to know which way the wind blows.

I skimmed this article and the linked Science magazine article, and they seem to talk about large-scale weather events (e.g. hurricanes) rather than the typical things people look for in forecasts (yes/no rain, min/max temperature). A Freakonomics article from 2008 found that, at least for precipitation, weather forecasts don't fare much better than a simple probabilistic model. Have forecasts like these improved in accuracy?

This is an interesting example of an article about gradual improvements in our lives that we take for granted at this point. It's striking because most articles are about dramatic deterioration. For example, it's unlikely we will read articles about the increase in the percentage of children who are vaccinated against basic diseases: it was 22% in 1980 and 88% in 2016. Far more likely is reading reams of text devoted to the (for now) niche anti-vaxxer movement.

I don’t think using outbreaks of a preventable disease due to stupid people thinking they know more than they do and causing harm to society as a whole is overly dramatic. If anything, there should be more coverage about it so we can get laws passed to deny non medical exemptions.

What's the impact in terms of affected people? Hundreds of millions of children being vaccinated, millions of them not dying young, dramatically changing the dynamics of societies where women can now choose not to have multiple children... or a few hundred people getting measles. I'm not saying the anti-vax thing isn't a problem. I'm saying that the improvements and advancements have a higher impact on society than what a few idiots choose to do. And yet we read much more about the latter.

The issue of people not vaccinating is specifically one which can spiral out of control very easily, as the tools we have to fight the disease don't work once herd immunity is lost, hence sounding the alarm bells is warranted.

Actually, now that I think about it, perhaps if the history of vaccines and the success had been more widely disseminated, then we may not have had this anti vaccine problem. I agree that it would be nice to read more about successes.

> [...] most articles are about dramatic [...]

This small snippet of your comment, combined with an awareness of the click/ad-based revenue model of most outlets, is really all you need to keep in mind nowadays.

Drama is what drove gossip for millennia and with the advent of media it began to drive sales. Doesn't matter if book, TV show or news article. It's all entertainment.

The actual research this is based on is "The Quiet Revolution of Numerical Weather Prediction", Bauer, Thorpe & Brunet, Nature, 2015.

"Stunningly Accurate" means that 7-day forecasts are now at the lower boundary of being considered "useful" so the bar is not being set very high here.

Still not a lot to quell the skepticism that some reasonable people have about the ability of scientists to accurately predict weather decades in the future.

> Still not a lot to quell the skepticism that some reasonable people have about the ability of scientists to accurately predict weather decades in the future.

It's a common misconception: climatology doesn't predict weather, it predicts climate. And predicting the average of weather is easier than predicting specific weather.

It's the same as in a casino. They cannot tell what the next throw of the dice will be (weather). But they can predict that, in the long-term average, you will lose (climate).
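The dice analogy is easy to demonstrate numerically. A minimal sketch, using roulette-style odds as a stand-in for the casino (all numbers here are illustrative):

```python
import random

random.seed(42)

def player_edge(n_rolls):
    """Average net return per unit bet on an even-money wager
    that wins on 18 of 38 outcomes (double-zero roulette odds)."""
    wins = sum(1 for _ in range(n_rolls) if random.randint(1, 38) <= 18)
    return 2 * wins / n_rolls - 1

# A single roll (the "weather") is anyone's guess...
print(player_edge(10))
# ...but the long-run average (the "climate") reliably converges
# to the house edge of -2/38, about -0.053.
print(player_edge(1_000_000))
```

No individual outcome is predictable, yet the long-run average is, which is the whole point of the weather-vs-climate distinction.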

Weather ≠ climate.

Right. Climate is a more complex system: more variables, more unknowns, an exponentially larger prediction area, unknown base states. So it should be much easier to predict, right?

> so it should be much easier to predict right?

Yes, because the goals of the forecast are different. A weather forecast seeks to predict a future position within the phase space of the system; a climate forecast seeks to predict the overall shape of that phase space.

"It will be 1C warmer in February on average" is a useless prediction if I'm deciding whether or not to wear a heavy coat tomorrow, since day-by-day variability swamps that average. But it is a very useful prediction if I'm designing infrastructure that needs to last 50 years.

Climatology works at a different granularity in time and space from meteorology. You can abstract away a lot of detail in climate models and still come out with useful predictions (the article has an example). You can include factors like Antarctic ice-sheet melt and tundra methane release that have almost zero 72-hour effect but will matter deeply in 5, 50, 200 years.

I only see forecast errors for hurricanes listed as the cited data. Maybe I missed the other cited sources for data measuring forecast accuracy, but hurricane forecast accuracy, which is a storm that covers hundreds of square kilometers, doesn't seem like enough data to justify making the general claim 'modern weather forecasts' are stunningly accurate.

Shout out to Climendo.

I use the android app and it gives me a simple comparison of 4-5 weather forecasts, an "average" forecast, and a "certainty" (a measure of consensus).

Obviously the average of the ensemble averages isn't as good as having the underlying ensembles themselves, but it's nice to see days when the forecasts agree and days when they have no idea!
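The consensus/certainty idea could be sketched like this (the scoring formula below is my own invention for illustration, not Climendo's actual method):

```python
import statistics

def consensus(forecasts):
    """Average several services' forecasts and turn their spread
    into a rough agreement score (1.0 = identical forecasts)."""
    avg = statistics.mean(forecasts)
    spread = statistics.pstdev(forecasts)
    certainty = 1.0 / (1.0 + spread)  # hypothetical scoring choice
    return avg, certainty

# Five services agree on tomorrow's high -> high certainty:
print(consensus([3.0, 3.5, 3.2, 3.1, 3.4]))
# Five services all over the place -> low certainty:
print(consensus([1.0, 6.0, 3.0, 8.0, -2.0]))
```

Even this crude spread measure is useful: a day where the services disagree is a day to distrust any single forecast.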

Best forecasts that I know (accurate enough that people come and ask me): https://www.windguru.cz/


- Only GFS27KM is reliable on the free version, and only within the next 48-72h.

- Temperatures are always conservative on extremes. When it says 37C, expect 40C. When 2C expect 0C.

Intriguing. Barely a mention (buried and oblique) that there are different weather models, and almost no discussion of their relative accuracy or the reasons for the differences. Especially jarring given that the last paragraph is explicitly "rah rah America", when the US NWS's models are less accurate than the European ones (Hurricane Sandy was a huge wake-up call).

They also don't even reference any US models in the article. The GFS was also showing this big cold snap, not just the Euro!

What's missing in most weather services is the recent history of forecast vs. observation.

why is my weatherman always wrong? :(

Because we tend to remember the few times where they get it very wrong (which is always going to happen because of the chaotic nature of weather) and ignore the large percentage of the time when they're right.

Confirmation bias.

Being "wrong" can be no big deal (48F vs. 50F) or hugely significant. My town will be getting precipitation this weekend. If the temps stay at 36, it will be a typical cold, wet weekend. If temps fall to 33 or lower, my town will be shutting down for a day or two, since we don't have snowplows.

I use the forecast put out by weather.gov that's supposedly tailored to the square mile, because it gets the 12-hour forecast right about 40% of the time. The others are worse.

Of course, when they say "chance of precipitation is 80%, less than an inch possible" and it doesn't rain, the forecast is semantically "correct" either way.

Like the El Nino impact on the SE US, where they forecast a 50% chance of drier, colder weather and a 50% chance of warmer, wetter weather: it's nearly impossible to be wrong.

Maybe this is the sort of obfuscatory probabilistic forecast Mr. Meyer is counting as "accurate."

> it gets the 12 hour forecast right about 40% of the time

Where are these figures from? Your emotions, or something scientifically rigorous? If the former, having this discussion is meaningless.

"chance of precipitation is 80%, less than an inch possible"

How would you determine, in a scientifically rigorous manner, the limits on conditions that would validate that forecast as "right"? Or the inverse: what conditions would invalidate it as "not right"?

In a top-level comment upthread, jacques_chester posted a link to an overview of forecast verification methods: http://www.cawcr.gov.au/projects/verification/
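For a concrete taste of what such verification looks like, here is a minimal Brier-score sketch (the forecast numbers below are made up for illustration):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts: 0 is perfect,
    always saying 50% scores 0.25, confidently wrong approaches 1."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A made-up week of rain forecasts vs. what actually happened
# (1 = it rained, 0 = it didn't):
probs = [0.8, 0.8, 0.1, 0.6, 0.9, 0.2, 0.7]
rained = [1, 0, 0, 1, 1, 0, 1]
print(brier_score(probs, rained))  # about 0.14 here
```

This is exactly why a single "80% chance" forecast can't be judged right or wrong on its own: verification only works over a long series of forecasts, where a calibrated forecaster's "80%" days should see rain about 80% of the time.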

> I use the forecast put out by weather.gov that's supposedly tailored to the square mile because it gets the 12 hour forecast right about 40% of the time.

Unfortunately, this is an example of false precision. The highest resolution numerical forecasts run by NOAA have a grid of about 3km, already coarser than your one-square-mile "tailored" output. The effective resolution of numerical weather models is also 2-3 times coarser than their grid spacing (because of numerical diffusivity and similar effects).

What you're seeing isn't a "tailored" output, but instead an interpolated result from a coarser grid.

Forecasting of very high-resolution effects is the subject of active and ongoing research, but unfortunately popular meteorology does not do a good job of discussing current limitations.

Look at the confusion in this set of comments, for example, about what degree of forecast error is normal/acceptable.

If I move my forecast location a few hundred yards, my elevation changes rather drastically, changing the forecast. Elevation isn't a factor most forecasts even consider. Is it false precision to use elevation to tailor the forecast?

In that case, the interpolation includes a vertical component as well. You'll see the effects of the lapse rate (change in temperature with height).

That's great for you, but it means that your forecast (of [my elevation, my coordinates]) is not much more accurate than one for ([my elevation, my coordinates plus a few hundred yards]).

More technically: "surface" isn't a smooth variable when elevation changes quickly, but interpolation like that performed by weather.gov necessarily works on smooth fields. Applying post-facto elevation is great and worthwhile, but it doesn't improve the accuracy in a technical sense.
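For illustration, the elevation adjustment being described is essentially a lapse-rate shift applied after horizontal interpolation. This sketch assumes the standard-atmosphere lapse rate; the actual weather.gov downscaling is more elaborate:

```python
STD_LAPSE_C_PER_M = 0.0065  # standard atmosphere: ~6.5 C cooler per km of ascent

def adjust_for_elevation(temp_c, grid_elev_m, true_elev_m,
                         lapse=STD_LAPSE_C_PER_M):
    """Shift a temperature interpolated at the model's smoothed
    terrain height to the user's actual elevation."""
    return temp_c - lapse * (true_elev_m - grid_elev_m)

# The grid cell's smoothed terrain sits at 500 m with a 10 C forecast;
# a user 300 m higher up the ridge gets a cooler adjusted value:
print(adjust_for_elevation(10.0, 500, 800))  # 10 - 0.0065 * 300 = 8.05
```

Note this adds no new information: the correction is a deterministic function of the same coarse forecast, which is the "false precision" point above.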

No doubt.

That's like saying horoscopes and fortune-tellers are stunningly accurate: "a positive opportunity will present itself to you today - you only have to open the door" "you are holding a secret pain that is preventing you from moving on. Find peace and let go"

The recent snow storm in my area: 4 days in advance they were predicting 30cm of snow. 3.5 days in advance, 25cm of snow. 3 days in advance, 15cm of snow. 2 days in advance, 5-10cm of snow. 1 day in advance, 15cm of snow. We got about 10cm of snow.

Sorry but that's stunningly inaccurate.

4 days in advance they correctly predicted you would get snow and that it would be on the order of tens of centimeters thick. That's really good. Over the intervening days they revised their estimate down and the snowfall was eventually lower than the initial estimate so the accuracy improved. Their final estimate was again correct that you did get snow, which is the key prediction, and they even got the depth at your precise location right to within 33%. That's fantastic, and far better than was possible for the vast majority of my lifetime.

Also, the fact that their predictions were all rounded to the nearest 5cm should give you a clue at the expected accuracy and granularity of the final prediction. Missing the target by only one 'unit' of measure is pretty decent.

There's also the fact that predicting amount of snow is particularly difficult because the same amount of water will yield vastly different snowfalls with single-degree deviations of the temperature.
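To illustrate the sensitivity: snow depth is liquid-equivalent precipitation times a snow-to-liquid ratio that swings sharply with temperature. The ratios below are rough textbook ballparks, not an operational formula:

```python
def snow_depth_cm(liquid_mm, temp_c):
    """Very rough snow depth from liquid-equivalent precipitation."""
    if temp_c > 0:
        ratio = 0   # falls as rain, no accumulation
    elif temp_c > -3:
        ratio = 8   # wet, dense snow
    elif temp_c > -10:
        ratio = 12  # "average" snow
    else:
        ratio = 20  # cold, fluffy powder
    return liquid_mm * ratio / 10.0  # mm of liquid -> cm of snow

# The same 10 mm of liquid water, at slightly different temperatures:
print(snow_depth_cm(10, 1))    # 0 cm: just rain
print(snow_depth_cm(10, -1))   # 8 cm of heavy snow
print(snow_depth_cm(10, -12))  # 20 cm of powder
```

So a forecast can nail the precipitation amount perfectly and still miss the snow depth badly on a one- or two-degree temperature error.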

Which model? Try ECMWF on windy.com, that's usually the most accurate one for long-term.

Wow, what a nice looking weather website. Thanks. Didn't know about it.

While that's an anecdote, it's true too often for me to feel that forecasts are "stunningly accurate". That label implies near-perfection, which is laughable.

No, both our weather forecasting and climate model forecasting are still very very weak and poorly understood. Predicting the future is still hard for us humans.

