johmathe's comments

Thanks for the great pointers!


One of the authors here. Thanks for your comment! While a lot of this research is theoretical and does not have immediate use cases, we have tried to summarize some applications in the last paragraph of the paper (VII. Applications of Non-Euclidean Geometry; see page 26). We present examples in Chemistry and Drug Development, Structural Biology and Protein Engineering, Computer Vision, Biomedical Imaging, Recommender Systems and Social Networks, and Physics.


Thanks for the feedback! This model will evolve over time; we are planning to push a much higher resolution soon. As for skill, we will publish a paper in the next few months.


We are currently integrating with ForecastWatch, a third party that analyzes and compares various forecasting systems [1]. Please stay tuned until our APIs are integrated; I will update this thread when it is ready.

[1] https://www.forecastwatch.com/


We will be adding a Celsius switch shortly. We started with Fahrenheit since we expected most users of the forecast to be in the Bay Area, but it is great to see interest from non-Fahrenheit users :)


The Bay Area is probably the place with the fewest native Fahrenheit users in the whole USA.

US customary units should no longer be the default in 2022. It's time to move on, people. The whole world and the majority of industry in the USA already share the same system of measurement; let's get the American population there too.


Thanks for the feedback :) The highest-resolution model you can currently find on Windy is around 3x3 km (HRRR), and that resolution is unfortunately too coarse to capture fine terrain and water features. A 300x300 m grid gives you 100 times more data points to work with (10 times finer in each horizontal direction).


How often are you planning to re-run this model? Can I convince you to rerun it before this weekend? I'm in a position to get you a lot of new followers/users if I have a fresh model for Sat/Sun.


The model is currently run every day. Hope you were able to use it this weekend :)


Very valuable feedback - thank you!


One of the interesting things the model captures at this resolution is the dynamics of the wind flowing into the bay through the Golden Gate. See for instance: https://sf.atmo.ai/wind@37.80911,-122.44543,11.68,36,0,16669...


The Global Forecast System (GFS), i.e. the model presently used at NCEP, has a grid resolution of 18 miles (28 km). It is (and has been, for years) the second-best global forecast system, right behind the European ECMWF model, sometimes outperforming it but on average slightly underperforming it in terms of accuracy.

I don't know how the ECMWF model works, but even as someone who did not study meteorology (I studied electrical engineering, which supplies part of the theoretical basis of weather forecasting via the Kalman filter), I can say the following, having spent a number of years working at NCEP:

1. Initial conditions/parameters are fundamental in setting up a model run.

2. Forecasts have long relied on ensembles, which are repeat model runs with slightly varying parameters. The idea is that if you run enough of them, you will frequently notice one or more convergences produced by various sets of parameters, e.g. some sets of parameters predict one movement pattern for a hurricane while others produce a different one (a toy sketch below illustrates this). Historically, such discrepancies were resolved by actual forecasters, who decided based on their knowledge and experience which outcome was more likely. There were also meetings every morning between scientists (who develop the model) and forecasters (who rely more on general knowledge and experience), occasionally involving heated discussions between the two groups. But I digress.

3. Since this is a chaotic system, I cannot say how much consistent value something like deep learning might bring above and beyond what is already obtained by using ensembles of Kalman predictive filtering. It is noteworthy, however, that if the grid resolution is 28,000 meters, it may not make much sense to set the resolution of a derived model substantially finer (like 300 meters), because any resulting detail is more likely to be an artifact of the model itself than a reflection of real-life information. Luckily, this issue has been and is being addressed through the development of rigorous testing standards, which quantify the inherent quality of the forecasts a particular model produces (this is how an objective rank can be assigned to, e.g., the GFS and the ECMWF, when forecast quality is generally very close and the more accurate model varies between the two). To put it plainly, the degree to which the website mentioned above has any value rests not on its best predictions, but on its overall variance, i.e. how close predicted data comes to actual measurements of the same, which is necessarily retrospective.

4. That said, it is worth pointing out that just because something does not involve a government agency with a thousand employees, hundreds of scientists (in the case of NCEP alone), and very powerful supercomputers does not necessarily mean it is bunk (even if it frequently does). For example, I recall Panasonic (IIRC) showing up out of the blue with its own forecasting system, which was shown to be competitive after the requisite rigorous testing. I don't remember many details, this was years ago, and its disappearance alone is suspect, but it is worth adding for completeness.
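To make the ensemble idea in point 2 (and the chaos caveat in point 3) concrete, here is a toy sketch using the Lorenz-63 system, a standard stand-in for atmospheric chaos rather than an actual NWP model: start the same model from slightly perturbed initial conditions and watch tiny input differences grow into large spread.

    import numpy as np

    def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz-63 equations, a classic chaotic toy model.
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def integrate(s, dt=0.005, steps=3000):
        # Crude forward-Euler stepping; fine for illustration, not for science.
        for _ in range(steps):
            s = s + dt * lorenz63(s)
        return s

    rng = np.random.default_rng(0)
    base = np.array([1.0, 1.0, 1.0])

    # An "ensemble": the same model started from 20 slightly perturbed states.
    members = np.array([integrate(base + 1e-3 * rng.standard_normal(3))
                        for _ in range(20)])
    print("member spread after 15 time units:", members.std(axis=0))

Real ensembles perturb physics parameterizations as well as initial conditions, but the mechanism (spread as a proxy for uncertainty) is the same.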


While a good set of initial conditions is indeed critical, a finer-grained model is helpful for capturing microclimates such as the ones you see in the Bay Area. At this resolution you get a much more detailed representation of relief and water, which are two of the biggest drivers behind the beautiful dynamics we observe here.

Kalman filtering is only one part of the process; it plays a critical role in the data assimilation step. Classical Kalman filtering is optimal for linear dynamical systems with Gaussian noise, but needs modifications for non-Gaussian distributions and nonlinear systems.
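For readers who haven't seen it, the textbook linear-Gaussian Kalman step is tiny. Here is a minimal numpy sketch (illustrative only; operational data assimilation uses variants such as the ensemble Kalman filter or 4D-Var to cope with the nonlinearity mentioned above):

    import numpy as np

    def kalman_step(x, P, F, Q, H, R, z):
        # Predict: propagate the state mean and covariance through the dynamics.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: blend prediction and observation, weighted by their uncertainties.
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy usage: track a scalar random walk from noisy measurements.
    x, P = np.array([0.0]), np.eye(1)
    F = H = np.eye(1)
    Q, R = 0.01 * np.eye(1), 0.5 * np.eye(1)
    for z in [0.9, 1.4, 1.1]:
        x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
    print("state estimate:", x, "uncertainty:", P)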

Classical NWP models, for instance, integrate the primitive partial differential equations in time and space and run various parameterizations (which can in some cases be even more expensive than integrating the primitive equations themselves). ECMWF, for their part, use the IFS, which solves the PDEs with a spectral method.
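Just to illustrate the spectral idea with a 1D toy (nothing like the actual IFS, which works on the sphere with far more sophisticated numerics): on a periodic domain, differentiation becomes multiplication by ik in Fourier space, so advection can be stepped like this.

    import numpy as np

    # Toy spectral solver for 1D advection u_t + c u_x = 0 on a periodic domain.
    N, c, dt, steps = 128, 1.0, 1e-3, 2000
    x = np.linspace(0, 2 * np.pi, N, endpoint=False)
    k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi  # integer wavenumbers
    u = np.exp(-10 * (x - np.pi) ** 2)                  # initial Gaussian bump

    for _ in range(steps):
        u_x = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative
        u = u - dt * c * u_x                                # forward-Euler step

    # The bump should have advected by c * steps * dt = 2 radians.
    print("bump now centered near x =", x[np.argmax(u)])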

The whole process of solving these models accurately has definitely been some of the most fascinating science and engineering I’ve had the pleasure to work with. It’s extremely humbling :)


Germany actually has discrete-cloud forecasts (modeling radar-detected convective cells as polygons) covering the next few hours, combined with further nowcasting techniques, in SINFONY, which is used for localized flash-flood warnings (+2h precipitation predictions updated every 5 minutes): https://www.dwd.de/EN/research/researchprogramme/sinfony_iaf...

There is also the ICON-D2 prediction system, with a native 2.2 km grid, run every 3 hours, with a reach of +27h (the 3am UTC run reaches +48h). It is also available as an ensemble of 20 possible futures: https://www.dwd.de/EN/ourservices/nwp_forecast_data/nwp_fore... (open data; feel free to check it out)


I would be curious how far DeepMind could get if they moved into this field. It would fit perfectly into the list of fields they have turned upside down (or at least 45 degrees) in recent years.


They have already begun exploring this field, starting with precipitation nowcasting, like every other AI shop out there (see https://www.deepmind.com/blog/nowcasting-the-next-hour-of-ra...).

Numerical weather prediction is a _very_ well established field. In fact, large swaths of modern computer science and computing in general owe their existence in direct ways to the importance of numerical weather prediction, since it was one of the original applications of digital computers. Modern weather forecasting models are extraordinarily sophisticated scientific and engineering achievements. It's not obvious that AI actually offers any significant, immediate benefit over these tools, save for niche, simplified forecasts (e.g. precipitation nowcasts). Certainly, given the prowess of modern NWP, the ROI is likely to be very low for research investments into general-purpose AI weather forecasts.

One might then argue that AI could be useful for refining or post-processing these existing forecast systems. But of course: we've been doing just that since the 1970s. In fact, even the basic weather forecast you get from your national weather service these days is based on sophisticated statistical post-processing machinery applied to not one but dozens of weather forecasts.
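To give a flavor of what that post-processing can look like, here is a bare-bones MOS-style (Model Output Statistics) sketch with made-up numbers; operational MOS uses many predictors and far more careful statistics. Fit a linear correction from past raw model output to matched observations, then apply it to new forecasts:

    import numpy as np

    # Hypothetical training data: raw model 2 m temperature forecasts vs. what
    # a station actually observed at the matching valid times (degrees C).
    raw = np.array([12.1, 15.3, 9.8, 20.5, 17.0, 11.2])
    obs = np.array([13.0, 16.1, 10.9, 21.0, 18.2, 12.5])

    # Fit obs ~ a * raw + b by least squares: the simplest possible "MOS".
    a, b = np.polyfit(raw, obs, 1)

    # Post-process a new raw forecast with the learned correction.
    print("corrected forecast:", a * 14.0 + b)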

Weather prediction is unlikely to be a field where AI practitioners stumble across a significant improvement to the status quo. It would be far wiser to work closely with meteorology experts to solve practical and _useful_ weather forecast problems - like, is that thunderstorm I see on radar likely to produce a tornado in the next 45 minutes?


It means that the underlying weather data is computed and validated at a 300x300 m resolution. Hope this helps :)
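For a rough idea of what point validation of a gridded forecast can involve (a generic sketch with hypothetical numbers, not a description of our actual pipeline), you match forecast grid cells to station observations and compute standard scores:

    import numpy as np

    def verify(forecasts, observations):
        # Standard point-verification scores for matched forecast/observation pairs.
        err = np.asarray(forecasts) - np.asarray(observations)
        return {
            "bias": err.mean(),                  # systematic over/under-forecast
            "mae": np.abs(err).mean(),           # mean absolute error
            "rmse": np.sqrt((err ** 2).mean()),  # penalizes large misses
        }

    # Hypothetical matched pairs (e.g., station temperatures in degrees C).
    print(verify([14.2, 9.8, 21.1], [13.5, 10.4, 20.0]))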


How are you doing validation?


Thanks for the feedback. We will have the back button fixed quickly :)


Same thing in Firefox. And pleeease add a units switch; the site would be infinitely more useful if I could view it in the units my brain normally works in, rather than having to think about the conversion all the time. Very cool site otherwise; also very cool Bay Area right now.


All fixed up and deployed.


<3 Thank you for the quick fix! No unit switch yet, right, or am I missing it?


It will come within the next few days; please stay tuned. Being from Europe, I am more used to Celsius myself :)

