300 meters resolution SF Bay Area Forecast (atmo.ai)
169 points by johmathe on Oct 27, 2022 | 115 comments



Growing up in Germany, before I moved to the Bay Area, I wondered why weather apps and widgets were so ubiquitous. Sure, knowing the forecast for next weekend was nice, but for anything sooner I'd just get out of bed and look out the window. That pretty much told me what the weather was, and it would usually change only slowly, over a few days or so.

Then I moved to the Bay Area, where the weather not only changes quickly, it may also be vastly, vastly different just a short distance away. Temperature differences of 10°C or more within just 40 miles are interesting enough that they're a frequent topic of conversation with friends in Germany.

Everything suddenly made sense. The weather widgets. The hoodies: easy to put on or take off.


Another anecdote: in the south of Germany at least, long stretches of sunny days are often followed by sudden thunderstorms with equally sudden bursts of rain. That "fact" had been so deeply ingrained in me that it was subconscious. You'd grow wary if it stayed hot for too long; suddenly you might find yourself running for the nearest awning to escape the torrential rain.

Sunny weather was a bit like building up a sort of pressure that must eventually release violently.

Took a while to let go of that feeling in California. It basically rains in winter, and does not rain in summer. Like, at all.

(Note that I moved away about a decade ago, climate may have changed in the meantime.)


I hadn’t even thought of that until I read this. I had thought thunderstorms after hot weather were just a fact of life. I guess in places near German latitudes that get thunderstorms, the hot weather is caused by high-pressure systems, but maybe that isn't really the cause in California.


Coastal weather often lacks the conditions that create thunderstorms. I live near the coast now too and haven't experienced the kind of thunderstorms I knew from Germany.


I guess maybe in northern Europe, where the climate doesn't vary much, but in the south we do get thunderstorms and floods in coastal areas. https://www.euronews.com/green/2021/09/15/watch-as-more-floo...


Depends on the coast I guess...NYC, for example, gets some real whopper summer thunderstorms https://www.youtube.com/watch?v=wk3Gz9o9yw4&t=12s


Have you been to the Midwest? I grew up near Cape Cod, and I have to say -- I'll take a nor'easter any day, give me the snow. That "maybe a tornado will just come down and demolish your house" type thunderstorm is pretty scary! I'm glad we've got a calm atmosphere up here.


The Midwestern USA has similar stretches of hot summer days followed by thunderstorms.


May (followed by April and June) is the most active month for storms, especially tornado-producing ones, in virtually every state in the Midwest, well before the "hot summer" days.


> It basically rains in winter, and does not rain in summer. Like, at all.

You needn't have moved so far from southern Germany to experience that! Lack of rain in the summer is a defining characteristic of the Mediterranean climate. You would have experienced the same a few hundred miles south. :-)


That's what we used to have in Australia, and now it just rains. It's killing my morale atm.


When I frequented Portola Valley, we called it six months of mud, six months of dust. Those golden hills are actually brown.


The climate did change. Now it hardly rains in the winter either.


Yes. I’m from Denmark, but I check the weather every night before stepping out, as I have experienced 13 C nights where the previous night was 20 C. And this is not uncommon.

When I first got here I was stunned by how noticeably nicer the weather was when driving from Santa Clara to Palo Alto, and more than once I have forgotten to bring a sweater to SF.


Yeah. Living in SF and working in the South Bay, it's common in the evening to get into my car sweating and come out freezing. I always pack a hoodie.

On the bright side, a hoodie is often all I ever need, all year long.


My hometown is close to Portugal's westernmost tip, about 30km from Lisbon. It's not uncommon to see differences of over 10C between the two, and the car ride will often include rain or fog only in a specific spot of the highway, halfway through.

Even if you stay right next to the ocean/river during the whole trip by avoiding the highway you'll notice a big difference most days.


If I had to guess, they are running the WRF model [1][2]. The AI part is post-processing the model output. With a fair amount of reading the manual, anybody can run their own WRF. WRF scales from running on a laptop to supercomputers with thousands of cores.

[1] https://www.mmm.ucar.edu/models/wrf [2] https://github.com/wrf-model/WRF
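
For anyone curious what "running your own WRF" involves, here is a minimal sketch of the standard WPS/WRF executable chain, driven from Python for readability. It assumes an already-compiled WRF/WPS installation with prepared namelist.wps and namelist.input files (and GRIB input already linked); the install paths are placeholders.

    import subprocess
    from pathlib import Path

    WPS_DIR = Path("~/WPS").expanduser()      # placeholder install locations
    WRF_DIR = Path("~/WRF/run").expanduser()

    def run(exe, cwd):
        """Run one executable of the chain and stop if it fails."""
        subprocess.run([f"./{exe}"], cwd=cwd, check=True)

    # Preprocessing (WPS): static terrain fields, decode GRIB input, interpolate to the model grid.
    for step in ["geogrid.exe", "ungrib.exe", "metgrid.exe"]:
        run(step, WPS_DIR)

    # The model itself: build initial/boundary files, then run the forecast.
    for step in ["real.exe", "wrf.exe"]:
        run(step, WRF_DIR)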


Having worked with WRF for 13 years now, contributed patches to many releases, and built a SaaS business centered on WRF (https://cloudrun.co), I still discover new things about it and run into interesting scientific and engineering challenges. It's not the kind of software that's easily picked up and run by a non-expert. It's a large framework that's more niche, more obscure, and not as well documented as something like, say, Tensorflow. There's still a ton of value to derive from making WRF and similar models more usable by non-experts and without access to supercomputers.


I mean, up- or downsampling is trivial. The question isn't whether you can make a raster at any resolution, it's whether you can make a raster that's accurate and precise at that resolution.

It's not clear to me that this is either.
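
To illustrate the point: a coarse forecast grid can be interpolated to a 300 m raster in one line, but the extra pixels carry no extra physical information. A minimal sketch with made-up numbers (the 20x20 field and the 10x factor are assumptions):

    import numpy as np
    from scipy.ndimage import zoom

    # Pretend this is a 3 km temperature field over a 60 km x 60 km domain (20 x 20 cells).
    coarse = 15 + 5 * np.random.rand(20, 20)

    # "Downscaling" by pure interpolation: 10x in each dimension -> a 300 m raster.
    fine = zoom(coarse, 10, order=3)      # cubic spline interpolation

    print(coarse.shape, fine.shape)       # (20, 20) (200, 200)
    # 100x the data points, but zero added information about real microclimates.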


One of the interesting things the model captures at this resolution is the dynamics of the wind going in the bay through the golden gate. See for instance: https://sf.atmo.ai/wind@37.80911,-122.44543,11.68,36,0,16669...


The Global Forecast System (GFS), i.e. the model presently used at NCEP, has a grid resolution of 18 miles (28 km). It is (and has been, for years) the second-best global forecast system, right behind the European ECMWF (sometimes outperforming it, but on average slightly underperforming it, in terms of accuracy).

I don't know how the ECMWF model works, but even as someone who did not study meteorology (I studied electrical engineering, which forms the theoretical basis of weather forecasting via the Kalman filter), I can say the following, having spent a number of years working at NCEP:

1. Initial conditions/parameters are fundamental in setting up a model run.

2. Forecasts have for a long time relied on ensembles, which are repeat model runs with slightly varying parameters. The idea of ensembles is that, if you run enough of them, you will frequently notice one or more convergences produced by various sets of parameters, e.g. some sets of parameters predict one movement pattern for a hurricane, while others produce a different movement pattern. Historically, such discrepancies were resolved by actual forecasters, who decided based on their knowledge and experience which one was more likely. There were also meetings every morning between scientists (developing the model) and forecasters (who relied more on general knowledge and experience), which occasionally involved heated discussions between the groups. But I digress.

3. Considering it involves a chaotic system, I cannot say how much value something like deep learning might bring to the table that is consistently above and beyond what's already obtained by using ensembles of Kalman predictive filtering. It is, however, worth pointing out that if the grid resolution is 28,000 meters, then it may not make much sense to set the resolution of the model itself substantially finer (like 300 meters), because any resulting data is more likely to be an artifact of the model itself rather than reflective of real-life information. Luckily, this issue has been and is being addressed through the development of rigorous testing standards, which indicate the inherent quality of forecasts produced by a particular model (this is how an objective rank can be assigned to e.g. the GFS and the ECMWF, when forecast quality is generally very close and the model producing the most accurate prediction varies between the two). To put it plainly, the degree to which the website mentioned above has any value is based not on its best predictions, but on the overall variance, i.e. how close predicted data comes to actual measurements of the same, which is necessarily retrospective.

4. That said, it's worth pointing out that just because it doesn't involve a government agency with something like a thousand employees, hundreds of scientists (in the case of NCEP alone), and very powerful supercomputers does not necessarily mean it's bunk (even if it frequently does). For example, I recall Panasonic (IIRC) showing up out of the blue with its own forecasting system, which was shown to be competitive after requisite, rigorous testing. I don't remember many details and this was years ago, and its disappearance alone is suspect, but it's worth adding for completeness.
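
To make point 2 concrete, here is a toy illustration (not how NCEP actually does it): run an ensemble of a small chaotic system such as Lorenz-63 with slightly perturbed initial conditions and watch the members spread apart, which is exactly why forecasters look at where members converge rather than trusting a single run. All numbers below are made up for illustration.

    import numpy as np

    def lorenz63_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One crude forward-Euler step of the Lorenz-63 system, a classic chaotic toy model."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    rng = np.random.default_rng(0)
    n_members = 20
    # Ensemble: one base state plus tiny perturbations, mimicking initial-condition uncertainty.
    members = np.array([1.0, 1.0, 1.05]) + 1e-3 * rng.standard_normal((n_members, 3))

    for step in range(4000):                       # integrate 20 "time units"
        members = np.array([lorenz63_step(m) for m in members])
        if step % 1000 == 0:
            print(f"t={step * 0.005:5.1f}  ensemble spread (x,y,z) = {members.std(axis=0).round(3)}")
    # Early on the spread stays tiny; later the members diverge completely,
    # so any single deterministic run becomes an unreliable point forecast.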


While a good set of initial conditions is indeed critical, a finer-resolution model is helpful for modeling microclimates such as the ones you see in the Bay Area. At this resolution you can have a much more detailed representation of relief and water, which are two of the biggest drivers behind the beautiful dynamics we observe here.

Kalman filtering is only one part of the process, and plays a critical role during the data assimilation step. Classical Kalman filtering is optimal for Gaussian-distributed linear dynamical systems, but needs tweaks for non-Gaussian distributions and nonlinear systems.

Classical NWP models, for instance, will integrate the primitive partial differential equations in time and space and run various parameterizations (which can in some cases be even more expensive than integrating the primitive equations). ECMWF, on their end, use IFS, which solves the PDEs with a spectral method.

The whole process of solving these models accurately has definitely been some of the most fascinating science and engineering I’ve had the pleasure to work with. It’s extremely humbling :)


Germany actually has discrete-cloud forecasts (modeling radar-detected convective cells as polygons) covering the next few hours, combined with some further nowcasting aspects, in SINFONY, for localized flash-flood warnings (+2h precipitation predictions updated every 5 minutes): https://www.dwd.de/EN/research/researchprogramme/sinfony_iaf...

There's also the ICON-D2 prediction system with a native 2.2km grid, run every 3 hours, with a reach of +27h (the 3am UTC run reaches +48h). It is also available as an ensemble of 20 possible futures: https://www.dwd.de/EN/ourservices/nwp_forecast_data/nwp_fore... (open data; feel free to check it out)


I would be curious how far DeepMind could get if they moved into this field. It would fit perfectly into the list of fields they have turned upside down (or at least rotated 45 degrees) in recent years.


They have already begun exploring this field, starting with precipitation nowcasting - like every other AI shop out there (see https://www.deepmind.com/blog/nowcasting-the-next-hour-of-ra...).

Numerical weather prediction is a _very_ well established field. In fact, large tranches of modern computer science and computing in general owe their existence in direct ways to the importance of numerical weather prediction, since this was one of the original applications of digital computers! Modern weather forecasting models are extraordinarily sophisticated scientific and engineering achievements. It's not obvious that AI actually offers any significant, immediate benefit over these tools save for niche, simplified forecasts (e.g. precipitation nowcasts) - certainly, given the prowess of modern NWP, the ROI is likely to be very low for research investments into general purpose AI weather forecasts.

One might then argue that perhaps AI can be useful to help refine or post-process these existing forecast systems? But of course - we've been doing just that since the 1970s. In fact, even the basic weather forecast you might get from your national weather service these days is based on sophisticated statistical post-processing machinery applied to not one but dozens of weather forecasts.

Weather prediction is unlikely to be a field where AI practitioners stumble across a significant improvement to the status quo. It would be far wiser to work closely with meteorology experts to solve practical and _useful_ weather forecast problems - like, is that thunderstorm I see on radar likely to produce a tornado in the next 45 minutes?


Sure, anyone can do it, but it takes a lot of computers to do it at a reasonable refresh rate. HRRR is 3km and available for free; no supercomputers required by the end user. Then you can apply whatever AI/statistical downscaling you want without having to try to run a better/more reliable forecast than the NWS.


I remember tinkering with the WRF model in university using a single server. Depending on the resolution, generating a forecast for the next day could take more than one day! Imagine getting the forecast for tomorrow two days later! You need quite a bit of computing resources (and GPUs) if you want to use its forecasting feature meaningfully. In my case, I was interested in downscaling some historical data, so my limited compute was enough.


This needs to be done for more cities in California, especially Los Angeles with its huge geographic area and all the diverse microclimates contained within. Sometimes the weather changes by more than 15 degrees in half as many miles from the coast. It therefore doesn't make sense to e.g. check "weather in LA" when it's going to be somewhat wrong most of the time depending on where in LA you happen to be, since the little widget that pops up on a Google search for "weather in LA" doesn't exactly tell you where in the 500 square miles of LA they are putting the temperature probe.


Same thing in San Diego. I feel like the popular weather services can't make accurate predictions or even tell what is happening most of the time. My iPhone will tell me it's raining when it's sunny outside.


I feel like you could make a whole movie about the weather in LA. It would change your life. Twice.


Recognized the reference immediately. What a movie.

https://tvtropes.org/pmwiki/pmwiki.php/Film/LAStory


That's true, but an upstate New Yorker would wonder what the fine resolution forecast looks like for a place that really has weather.


Not weather, sun, sun, sun, sun, sun!


Shameless plug of my little rain map that takes the opposite approach --

zero interpolation, zero forecasting. Just the real data from the radar feed at 10 meter resolution at the current time (plus 3 past snapshots at 1 hour intervals):

https://truweather.link

and the app versions:

https://play.google.com/store/apps/details?id=net.conceptual...

https://apps.apple.com/ca/app/truweather/id1537614881

I find it useful because every other weather app is wrong in some way ;) instead, it lets you form your own mental models


Running over in Rodeo valley (Marin) there was a very noticeable (and unusual) inversion this morning around 7:30 am. It must have been 5-10 degrees colder on the valley floor compared to up higher -- after ~100ft of elevation gain up out of the valley it warmed up very rapidly.

I don't see that reflected in this map at all fwiw. https://sf.atmo.ai/temperature@37.83200,-122.51075,13.53,20,...


It's likely because services like these use models as data feeds and not live/recorded data.


Weather forecasts are so hard for a user to evaluate... Are you going to check it every day and remember how many days it was right or wrong?

Please can weather providers just publish a headline statistic of "Our rain/no rain one day ahead forecast is right 85% of the time. That is better than NOAA (80%), Met Office (72%) and weather.com (65%)."


This is kind of the purpose of the "50% chance of rain" figures. The process is called calibration and is usually done with linear regression; it means that in historical forecasts like this one, the actual outcome was rain 50% of the time. Surface precipitation is notoriously hard to predict, so this is what we've got right now.
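
A minimal sketch of what such calibration can look like, using the plain linear regression the parent mentions and entirely synthetic history (the 0.2 + 0.6*raw relationship is an assumption, not real data):

    import numpy as np

    rng = np.random.default_rng(1)

    # Fake history: a raw model "rain score" in [0, 1] and whether it actually rained.
    raw = rng.random(5000)
    rained = (rng.random(5000) < 0.2 + 0.6 * raw).astype(float)   # synthetic truth

    # Linear-regression calibration: fit the observed outcome (0/1) against the raw score.
    slope, intercept = np.polyfit(raw, rained, deg=1)

    def calibrated_probability(raw_score):
        """Map a raw score to a probability matching historical rain frequency."""
        return float(np.clip(slope * raw_score + intercept, 0.0, 1.0))

    print(calibrated_probability(0.5))    # ~0.5: historically it rained about half the time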


But they should publish that... And then compare that figure to their competitors... To demonstrate to their users that their service is actually better, not just has a shinier UI...


Would be nice to have some extra error information. Like a probability for a number of mm precipitation buckets.


Atmo built their business on beating their competitors on a particular ranking metric that does exactly as you describe. I discussed it with the founder but am not sure how much of that convo was public info so I hope they post it here.


In part, that's what ensembles are about: they run 20 or so simulations of the near future, randomizing the start values and simulation parameters within the range of uncertainty, and get one potential future per simulation out of the process.

A 30% rain chance for every hour of the entire afternoon until sunset tends to be less actionable than "85% chance of a 50-80 minute rain shower during the afternoon, unclear when, but likely a bit longer if it starts very late", as one can often adapt around the latter, for example by scheduling the homework to get done whenever it's raining (at the latest so that it's finished by dinner time) and spending the dry time on outdoor physical activity that doesn't care about wet ground (but getting wet from above isn't nice).
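
As a toy illustration of how both kinds of numbers fall out of the same ensemble (all data here is made up): the per-hour probability is the fraction of members raining in that hour, while the more actionable "rain at some point this afternoon" figure counts members that rain in any hour.

    import numpy as np

    rng = np.random.default_rng(2)
    n_members, n_hours = 20, 6            # 20 ensemble members, a 6-hour afternoon

    # Synthetic ensemble: most members produce one ~2-hour shower at a random time.
    rain = np.zeros((n_members, n_hours), dtype=bool)
    for m in range(n_members):
        if rng.random() < 0.85:                    # 85% of members produce a shower
            start = rng.integers(n_hours - 1)
            rain[m, start:start + 2] = True

    per_hour = rain.mean(axis=0)          # the "~30% chance at 1pm, 2pm, ..." style numbers
    any_hour = rain.any(axis=1).mean()    # chance of rain at some point in the afternoon

    print(per_hour.round(2), any_hour)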


Very interesting data but infuriating UI.

With so much heat map display, why is there no key? For wind, red/purple are counterintuitively lower speed than orange and yellow?

Also, why do so many applications force oblique views?


> Also, why do so many applications force oblique views?

I'm able to adjust it by dragging with two fingers on my trackpad, which I think is the standard behavior for that (albeit hard to discover). But I do agree, it's weird for that to be the default.

There's also probably no key because the colors are mostly transparent, so it would be hard to make a key easy to understand. Labelling the contour lines seems like a reasonable approach imo.


Interesting, I don't get contour labels on mine, so I have to continually reposition to get a sense of the magnitude.


A few thoughts from a geographer (be prepared to shoot me):

- Basically every comment is wowed by this, but nobody questions what the accuracy is. I, too, can krige interpolated surfaces to any resolution.

- off-nadir view doesn't seem to offer much

- We've been dealing with janky tile loading for like 20 years now. I really hope we'll get a much smoother approach for viewing these tiles as they load. The dissolve transition hides it a bit, but makes the data uncomfortable to view when playing the timeseries.

- I'm deeply curious about the Picnic data layer. Can someone share the ArcGIS/QGIS model for that one? =D


This seems to be a thing with weather models more generally. Somewhat relatedly, I've spent quite a bit of time evaluating weather models for use in India and Africa, and while predictions are easy to find, validation results for those predictions are very hard to find. And when you do find them, the results are pretty poor, with many models performing worse than simply predicting the temperature on date X to be the average observed temperature on the same date over the past 10 years. But people still sell (and buy) these predictions!

Weather predictions seem to be accepted quite uncritically. Perhaps people have a lot of confidence in the smart people that built these predictions (a bit like how AI predictions can sometimes be accepted uncritically).


100% agree. Scientists and engineers all know that you must provide validation results, accuracy/uncertainty calculations, etc. or your data is just a pretty guess. I think weather forecast models are so commoditized and useful for laypersons that we've UX'd all of the complexity (scrutinizing the data) out of the product. The most scrutiny I ever see are people discussing what "Probability of Precipitation" values really mean.

My grad thesis advisor encouraged me to actually get the Environment Canada models and learn how to run them (they're in FORTRAN). I could never make them spit out data consistent with what EC publishes. That's probably on me, but it was a real eye-opener to this whole domain's complexity.


I've been working with weather models for 10 years and I often get asked "How accurate is X?" or "Which model is more accurate?" Many people think "accuracy" is a single number or a single thing - it is more complex than this and depends on your needs.

This chapter on Numerical Weather Prediction [0] is great, especially the section on "Forecast Quality and Verification" (p777). The eye-opener for me was "Binary/Categorical Events". An example of a binary event is rain: one model could predict rain at your location correctly while a second model predicts no rain there at all. This doesn't mean the second model was completely wrong; it still predicted the rain, it just had it passing further to the south.

[0] https://www.eoas.ubc.ca/books/Practical_Meteorology/mse3/Ch2...

I've also noticed some models are better than others at predicting one phenomenon, while other models might be better in certain regions. For example, many people report that Canada's GDPS is better at higher latitudes whereas NOAA's GFS is better in equatorial regions.

One final note: just because someone is running a WRF model without verifying the results doesn't mean it's wrong. Many numerical techniques and physical models within WRF have been validated against analytical and experimental models. But it is also true that someone can naively set up a WRF model that gives bad results.

I use a 900m WRF model that predicts the wind shadow around an island, and we use it to find the best beach for a picnic - and it works. But while this same model predicts the general pattern of rain, it doesn't get the start and stop times of the rain correct.

People get fixated on accuracy as a single thing and use it as the sole basis for argument, but to quote from the chapter [0] above: "One of the least useful measures of quality is forecast accuracy" (ref. p777, Forecast Quality and Verification, third paragraph).
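
For the "binary/categorical event" verification mentioned above, here is a small sketch of the standard contingency-table scores (probability of detection, false alarm ratio, critical success index) for a rain/no-rain forecast. The two series are placeholder data, not real forecasts:

    import numpy as np

    # Placeholder daily rain/no-rain series: model forecast vs. station observation.
    forecast = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0], dtype=bool)
    observed = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0], dtype=bool)

    hits         = np.sum(forecast & observed)     # forecast rain, it rained
    false_alarms = np.sum(forecast & ~observed)    # forecast rain, it stayed dry
    misses       = np.sum(~forecast & observed)    # forecast dry, it rained

    pod = hits / (hits + misses)                   # probability of detection (hit rate)
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    csi = hits / (hits + misses + false_alarms)    # critical success index

    print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")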


> other models might be better in certain regions

The US Navy's COAMPS model is good for littoral regions.


Meteoblue was dramatically more accurate in Chamonix last spring than the GFS.


You have to be careful you aren't comparing apples to oranges. You might be looking at Meteoblue's MOS (statistically corrected) predictions, which might be based on their regional weather simulation. This regional simulation might be nested in a larger global model, probably from ECMWF. If you compare this ECMWF model to the GFS, then you are comparing apples with apples.

I find global models like GFS are great for understanding the large scale weather systems. The regional high-resolution models, which are usually nested in a global model, give better definition of local weather phenomena like wind shadows or cooler temperatures in valleys.

Due to averaging, weather simulations usually have a bias error in temperature predictions. These errors are corrected using statistics (look up Model Output Statistics), but the correction is hyper-local, i.e., you lose the big picture. This is probably what you're looking at with Meteoblue.


Given this is in the SF Bay, there are a number of high-quality observations you can use to validate the forecast skill (unlike India and Africa). I have not bothered doing this here since… well, that's too much like my day job.

I’m always excited to see new forecast products, generally. If I were to guess (as a comment above did), it looks like they are applying some dynamical downscaling on top of either a custom WRF model (expensive and complicated) or, more likely, already-available weather model data like the HRRR, which would still represent a 10x resolution increase.

I’m more curious what the refresh rate is. Anyone can get a super accurate forecast for the next 3 hours that takes 10 hours to run, but at that point it’s no longer a forecast by the time the data is available.

I still think that Windy has set the standard as far as modern weather visualization goes. Not saying everything has to be particles, but other things (like the inclusion of isobars) are really clean and not trivial to execute.

Either way this has definitely piqued my interest and I will be keeping an eye on it; their advisory board looks legit (at least on the meteorology end).


The website claims to be using DL which may mean less of a model-centric approach? The expertise of the people at the top of the organization, on this problem, seems a little thin, TBH. And, no stated validation results at all? Without such details, this is just marketing.

It would be interesting to see how this behaves for longer prediction times and across a range of difficult forcing conditions off the ocean in the BA.


I agree, this generally left me feeling skeptical. I know of Luca Delle Monache on the advisory team through colleagues who have researched under him at Scripps, and they spoke highly of him. But yes, there is a lot left to the imagination here.

With regard to the SF Bay specifically, I used to work with a fairly high-resolution wind model for the bay (a more traditional dynamics-based simulation) and it worked pretty well overall, but every time a storm blew through it would crash. This ultimately had to do with the relatively steep terrain in the bay (and the physics configurations we were using in the actual model).

Even if they are using DL, they still need initial and boundary conditions. As I said, there are a ton of weather stations around, so I could imagine a DL-type approach that looked at terrain elevation and recent + historical observations to initialize a forecast, but I still imagine the boundary conditions would have to be provided by nesting this in a larger model somehow. Then again, I'm not a DL expert at all, so there is probably some newer work in this field that I'm just out of date on.

It's really expensive to run your own dynamical forecast model, at a refresh rate acceptable for an actual forecast, at this resolution. That's why I suspect it's taking existing weather models and downscaling them with DL techniques, but I can't really know just by looking.


(For clarity, I was referring to the company leadership proper, not the advisory team.)


We are currently integrating with ForecastWatch, a third party that analyzes and compares various forecasting systems [1]. Please stay tuned until we integrate our APIs. I will be updating this thread when it is ready.

[1] https://www.forecastwatch.com/


A few thoughts as a sailor in the SF Bay Area.

- Accuracy seems at least somewhat correct. That wedge shape you see in the late afternoon is what sailors call "the wind engine". The local sailing magazine Latitude 38 has a special PDF about doing a sailing trip around the bay accounting for this local wind phenomenon.

Correct stuff:

- The SF waterfront, out to the edge of the piers, is mostly calm, which is correct

- Berkeley, Oakland, and Emeryville getting blasted in the late afternoon is correct

- The back side of Treasure Island, immediately to the east, gets much lower wind than the west side, particularly near Clipper Cove

- The vast majority of the Alameda estuary is dead calm, which is correct for this time of year

- There's a big blast of wind between Daly City and OAK international where there's a gap in the mountains

Weird stuff:

- Most noticeable: the wind is still strong up to and south of the Bay Bridge. The Bay Bridge has been described by many as "a wall" when it comes to the wind. There is a drop-off, but it's not in line with the Bay Bridge. At all. It's at least 45 degrees off from the true wind direction.

- There's a very windy patch between the Golden Gate Coast Guard station and Belvedere; it's usually really patchy wind there, but I guess if the wind direction is just right it'll blow there

- At Point Bonita (the lighthouse about 2 miles west of the GG Bridge, on the north coast) they are modeling the gap in the rocks, and you can see the wind funnel through, which is neat

It's a cool visualization though; it gives you a great idea of where the wind is, and more importantly where it won't be. There are a bunch of races that start in the bay and head south towards Santa Cruz and Monterey, so it's nice to better visualize where the wind just dies off on the coast as it skips over the mountains.

Anyone who wants to see what the wind is like in the bay, I recommend reaching out to YRA.org; they can most likely put you in touch with a boat that needs crew. There are races 4-5 days a week through November, all around the bay. The model does show a distinct drop-off of wind speed on the south side of the bay.


Also, as a person with experience sailing on the Bay, this model immediately seems unusually accurate. It shows, for example, the Angel Island wind shadow moving around correctly as it does during the day, which I have never seen in another model.

Could anyone with more understanding of meteorology (or OP) please explain what is different about this model vs say the ECMWF model that you can see in apps like Windy, that are supposedly great, but just don't seem to get these features right? Those models are incredibly bad when dealing with the unusual local geographic features on the bay. What resolution are they operating at?


Thanks for the feedback :) The highest-resolution model you can currently find on Windy is around 3x3 km (HRRR), and that resolution is unfortunately too coarse to capture fine terrain and water features. 300x300 m gives you 100 times more data points to work with.


How often are you planning to re-run this model? Can I convince you to rerun it before this weekend? I'm in a situation to get you a lot of new followers/users if I had a new model for Sat/Sun.


The model is currently run every day. Hope you were able to use it for this weekend :)


It's purely spatial resolution and the representation of topography that high-resolution simulation permits. Any NWP model like WRF or WRF-LES will produce topographic-driven wind fields with high fidelity pretty much out of the box, without any customization required. The result may be pretty, but it's really nothing of note from a weather modeling perspective.

ECMWF's model is 9km spatial resolution, so Angel Island probably doesn't even show up in the model domain.


Very valuable feedback - thank you!


Thanks for chiming in. I can make an animation say whatever I want. I am very curious about the accuracy.

Unrelated: The site breaks my back button in Chrome, which is an unforgivable UI sin.


Thanks for the feedback. We will have the back button fixed quickly :)


Same thing in firefox, and pleeease have a units switch. Would be infinitely more useful if I could view the site in the units my brain normally works in rather than having to think about the conversion all the time. Very cool site otherwise, also very cool bay area right now.


All fixed up and deployed.


<3 thank you for the quick fix! no unit switch yet right, or am I missing it?


It will come within the next few days - please stay tuned. Being from Europe I am more used to Celsius myself :)


I had been wondering whether this was actually something special, because I thought I remembered that the German Meteorological Service ("Deutscher Wetterdienst") had offered accurate forecasts on a sub-kilometer grid for years already, at least if you are ready to spend money on it, because that service is not free. So maybe that's the innovation here.


Where did you get the existence of sub-km (horizontal) grid forecasts from? This page https://www.dwd.de/EN/research/weatherforecasting/num_modell... contains the following sentence: "The operational NWP models of DWD currently employ horizontal grid mesh widths between 2.8 and 13 km."

The convective cell tracking for nowcasting seems finer, IMO reasonable as it's about predicting watersheds down to <10km² area flash-flooding and causing the local creek to swell to actually dangerous levels/requiring partial evacuation of a valley.


>Basically every comment is wowed by this, but nobody questions what the accuracy is

I am very skeptical. Does the San Mateo bridge really block 10 knot winds for the entire south bay? Similarly, the land temperatures all seem the same close to sea level.


Tall bridges do weird things to the wind. I can confirm the bay bridge at surface level, there is functionally no wind for about half a mile downwind from it. Just glassy smooth.


Most of the San Mateo bridge is quite low, especially the stretch crossing the bay. This is why I was so shocked it had a wind shadow of 10+ miles.


I'm not seeing much of a wind shadow for that bridge, particularly at 4pm Friday. Maybe they updated the model already. Most of the onshore wind flow begins after 11am and goes from the cold (high-pressure) Pacific through the GG bridge, wraps around the east side of Angel Island, and heads north past Richmond and Vallejo towards the hot (low-pressure) Central Valley. South of SFO, Silicon Valley is surrounded by tall geographic features and there's not much of a path to hotter (low-pressure) zones, so it's unusual to see high winds there unless there's a special offshore wind event coming from the south (most often in the winter).


What does "300 meters resolution" mean in this context?


Weather models divide the atmosphere into cells where each cell has its own forecast. There are several ways to do it, but if you imagine the cells as cubes, the north-south and east-west dimensions of each cube are 300m long. 300m is very high resolution for weather models, with most running at a few kilometers to a few dozen kilometers.
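
To put the resolutions discussed in this thread side by side, a quick back-of-the-envelope count of surface cells over a Bay-Area-sized domain (the 100 km x 100 km extent is an assumption):

    domain_km = 100                            # assumed square domain edge length

    for spacing_m in (28_000, 3_000, 300):     # roughly GFS-, HRRR-, and 300m-style grid spacings
        cells_per_side = domain_km * 1000 // spacing_m
        print(f"{spacing_m:>6} m grid -> {cells_per_side * cells_per_side:>9,} surface cells")
    # Going from 3 km to 300 m multiplies the cell count (and the compute) by about 100x.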


It means that the underlying weather data is computed and validated at a 300x300 meters resolution. Hope this helps :)


How are you doing validation?


Also note that it should be "300 meter resolution". You don't use a plural when using it as an adjective. I only point this out to help people because this is a common grammar mistake for non-native speakers.


The secret sauce to making accurate forecasts at 300 meter resolution has to be something more than “AI” right? I mean, there’s little reason to think AI should be meaningfully better at filling in details at 300m resolution than it would be at picking lottery numbers, no? Am I thinking about this wrong?


Nah, just run the WRF model at a 300m horizontal resolution and use a regional model or GCM for the boundary conditions. No AI necessary.


Love the app. The details that the map is displaying are really important if you want to know the forecast for a specific location, which matters a lot if you're at sea or in the air. Looking forward to having the forecast for even more locations around the world.


That is impressive. Pretty accurate about the early morning cold spots. Not sure how much of this is just matching historical trends (SF has a pretty consistent climate), but the level of spatial resolution on the data is amazing. Not sure how they did it really.


There are lots of weird little vortices where the wind seems to start or end in a spiral. I'm guessing it's an artifact of the modelling. Funny, but overall very cool.


Interesting. Their website has a real "Delos" feel, as in the Westworld corporation. I'd love a narrowcast for my area too. You could sell access to these, maybe.


Displays nothing but garbage on mac/firefox.

I did briefly see something that looked like a map, but then it smeared into an abstract art looking blob.


I've been wanting this for a long time. Any way I could get it as a live updating widget for my Android home screen?


fun - but I am reminded of the excoriation that Windy-dot-com got here on YNews when facts-oriented people started comparing the visualization to more quantitative sources.. snipes aside, no doubts here that they will sell this and be paid for it.


Opportune day you picked to share this, given this nasty cold front. Very cool.


How much computational power does it take to make something like this?


Pretty cool, but the map really needs to have a street label option.


Wow, this is the future.


I wish it was.

I see no sign that forecasting has improved at all in my part of the world. Auckland, New Zealand.


This release is just for the SF bay area, their models for elsewhere are not released publicly yet.


Literally


Is there a site that predicts the movement of SF fog?


This is pretty cool if I do say so myself.

What does the picnic data show?


It’s a forecast of the best spots to have a picnic in the bay :) Basically a proxy for the best place and time to be outdoors.


So what's the RMSE, and how do we know that?


Yes, yes, yes. Great. Thanks. Bookmarked!


Not working on my iphone. Slashdotted?


Looks amazing! However, this model (like all models by default) is not validated and produces garbage unless proven otherwise.


I’m loving the picnic mode!!!!


Is there a Celsius mode?


TIL Mt Diablo gets cold



Very cool.

Pedantic but suggest title rename to say “SF Bay Area” or “inner SF Bay Area”. SF is just a small portion of the coverage area.


How do you switch to normal (for 97% of humans) units for temperature and wind?


The heaviest users of wind forecasting use knots


I learned to interpret wind speeds in m/s on a visit to Iceland and found it to be an intuitive unit for hiking and other outdoor purposes. And 1m/s is pretty close to 2 knots so it's easy to convert.


We will be adding the Celsius switch shortly. We started with F since we expected most users of the forecast to be in the Bay Area, but it is great to see interest from non-Fahrenheit users :)


The bay area is probably the place with the fewest native users of Fahrenheit in the whole USA.

All those US customary units should never be the default in 2022. It's time to move on people. The whole world and the majority of the industry in the USA share the same system of measure. Let's get the American population there too.


Sorry, I’m from the present. How does one change it to Celsius?



