

Cleaning Radar Images using Neural Nets and Computer Vision - jparise
http://blog.forecast.io/cleaning-radar-images-using-neural-nets-computer-vision/

======
josephhardin
This is neat, though there are far better methods for doing this type of
thing. Now that the NEXRADs are dual-pol, you can look at the correlation
between the horizontal and the vertical polarizations to filter out the vast
majority of the noise.

The noise you are filtering out is primarily insect returns. If you watch it,
it will get worse at some times of day than others. When inversions happen,
you'll also see some beam bending that can cause you to pick up powerlines,
roads, etc. You can verify it is mostly bugs by looking at the differential
reflectivity (the difference between the horizontal and vertical returned
powers): it will be roughly random for ground clutter, and higher for bugs.

With this approach, without knowing exactly how you do it, I'd be worried that
it would have a tendency to filter out the initial formation of stratiform
clouds and just leave convective formations. I can point you to some theory on
how a lot of this filtering is done in atmospheric research if you'd like;
feel free to message me (I'm working on my Ph.D. in weather radar).
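
A rho_hv thresholding pass like the one described can be sketched in a few
lines (the gate values and the cutoff here are made up purely for
illustration; operational thresholds vary by radar and scan):

```python
import numpy as np

# Hypothetical per-gate dual-pol data: reflectivity (dBZ) and rho_hv,
# the copolar correlation coefficient between the H and V channels.
# Precipitation tends to have rho_hv near 1; bugs and clutter sit lower.
reflectivity = np.array([35.0, 20.0, 42.0, 18.0])
rho_hv = np.array([0.99, 0.80, 0.97, 0.65])

RHOHV_THRESHOLD = 0.95  # assumed cutoff, not an operational value

def filter_nonmet(dbz, rhohv, threshold=RHOHV_THRESHOLD):
    """Mask gates whose low channel correlation suggests non-weather
    echo (insects, clutter) rather than precipitation."""
    cleaned = dbz.copy()
    cleaned[rhohv < threshold] = np.nan
    return cleaned

cleaned = filter_nonmet(reflectivity, rho_hv)  # masks gates 1 and 3
```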

~~~
thegrossman
Dual pol wasn't deployed when we first started Dark Sky. Now that it's
becoming available, it's definitely going to make our job a lot easier!

Not knowing much about it, though, it'll take some time for us to become
comfortable with the data.

~~~
josephhardin
Well it's very impressive that you got that performance without using any
dual-pol parameters. Abstract submission for the AMS radar conference is next
week, and I'm sure they'd be interested in this.
<http://www.ametsoc.org/MEET/fainst/201336radar.html>

~~~
thegrossman
Also, correct me if I'm wrong (and I very well could be), but the most
egregious noise -- the giant Bagel Blobs -- aren't insect reflections, but
rather night-time temperature inversions. Can dual-pol correlation help
identify those?

~~~
josephhardin
It's likely a combination of both. A quick way to discern them would be to
look at velocity, Zdr, and rho_hv. For ground clutter and returns caused by
the inversion, you would expect to see very low velocities (close to 0)
relative to the background precipitation, and Zdr will end up looking like a
roughly random field. If it is insects, the velocity will roughly match the
background precipitation, but you will have a high Zdr (insects look like
very, very oblate bags of water). In both cases you should see a drop in
rho_hv, the correlation coefficient between the channels, which helps
differentiate them from actual precipitation.

Also, it looks like you're primarily concerned with pulling out the rainfall
in several cases. For this, I'd look at the specific differential phase
(K_DP), as it is a much better estimator of rainfall than reflectivity. In
general, it is nearly linearly proportional (a power law with an exponent
close to 1) to the rainfall rate. I'm not sure how good the NEXRAD estimators
are for K_DP, though.
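
The decision rules above could be sketched roughly like this (all thresholds
and the K_DP rain-rate coefficients are illustrative assumptions; real values
depend on the radar band, the scan strategy, and the drop-size model):

```python
def classify_echo(velocity, zdr, rho_hv, bg_velocity):
    """velocity, bg_velocity in m/s; zdr in dB; rho_hv unitless [0, 1].
    Thresholds are illustrative guesses, not operational values."""
    if rho_hv > 0.95:
        # High H/V channel correlation: likely precipitation.
        return "precipitation"
    if abs(velocity) < 1.0:
        # Nearly stationary echo: ground clutter or inversion-bent beam.
        return "ground clutter / inversion"
    if abs(velocity - bg_velocity) < 2.0 and zdr > 3.0:
        # Moves with the background flow and strongly oblate: insects.
        return "insects"
    return "unknown"

def rain_rate_from_kdp(kdp):
    """Power-law R(Kdp) with an exponent close to 1; the coefficient
    pair here is an assumed S-band fit and would need tuning."""
    return 44.0 * kdp ** 0.822  # mm/h for kdp in deg/km
```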

------
thegrossman
Seeing this on HN, I realize I didn't go into a lot of technical detail in the
blog post (we still aren't quite sure who our audience is with the Forecast
blog). So if anyone has any specific questions, ask away.

~~~
MBCook
I thought it was great.

Could you comment on why all the radar noise in the sample image seems to be
almost perfectly confined to the eastern half of the US? Is that just an
artifact of the time of day (perhaps the sun was setting in the middle of the
US at the time)?

I'd love to know more about the neural network too, but I have no experience
with them so I'm not even sure what to ask.

~~~
thegrossman
Exactly. Those enormous Bagel Blobs tend to occur after the sun goes down,
and only in the warmer months (two months ago, they weren't there).

The image was generated around 10:30 PM EST, so they hadn't had time to
propagate further west.

Regarding neural nets, I have to admit that I didn't have much experience with
them either before working on Dark Sky. They do, however, seem to be a little
more forgiving than other kinds of classifiers (Support Vector Machines and
the like).

Check out the examples for the FANN C library: <http://leenissen.dk/fann/wp/>

Or in ruby: <https://github.com/tangledpath/ruby-fann>
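
For a feel of what's under the hood, here's a minimal from-scratch sketch of
the kind of network FANN builds: one hidden layer of sigmoid units trained by
backpropagation on a toy two-feature classification task (the features and
labels are invented for illustration, not our actual radar features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two made-up features per radar pixel (say, intensity and a
# local texture measure); label 1 = precipitation, 0 = noise.
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

# One hidden layer of 8 sigmoid units, trained by plain gradient descent.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass (mean-squared-error loss).
    dp = 2.0 * (p - y) / len(X) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```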

~~~
josephhardin
You might find <http://www.chill.colostate.edu/w/Non-
precipitation_echoes:_insects_and_smoke_particles> to be interesting. It shows
some of the dual-pol stuff on insects.

------
Maxious
See also Making Clouds Go Away on MapBox Satellite:
<http://mapbox.com/blog/improving-mapbox-satellite-by-making-clouds-disappear/>
<https://news.ycombinator.com/item?id=5475571>

------
m_mueller
Hey there. I've done a GPGPU implementation of a physical weather prediction
model[1]. I'm interested in how you get your weather data. Do you use WSM for
modeling? Also, have you considered licensing out Dark Sky for data
assimilation purposes to weather agencies? From what I hear from weather
researchers, this is one of the hardest aspects of the field. I think that
once you have a product that can integrate an automated feedback loop into
existing weather models for all kinds of weather data, you've basically won
the weather game.

[1]<https://github.com/muellermichel/Hybrid-Fortran>

~~~
saidajigumi
Regarding data, have you looked at the API[1] they provide, also used to power
Forecast.io and Dark Sky?

[1] <https://developer.forecast.io/>

------
graupel
Meteorologist turned HN junkie here: this is really cool and I love the
commenter suggesting using dual pol data to do the heavy lifting for you.

My question about your radar data is this: it's beautiful, but from a weather
perspective I absolutely hate the color palette you are using. Why not just
stick with the 'standard' green-to-red colors we all know and love? The
purples are different, for sure, but really hard to make sense of!

~~~
thegrossman
The color palette of the images in the blog post was adjusted to highlight
areas of precipitation at the expense of intensity discrimination. Dark Sky
itself uses a wider range, but it's still a similar blue to purple to salmon
to yellow color table.

The reason we went with that over the standard green-to-red color table is
that I personally find the standard colors jarring, and more confusing to the
casual user because of the increased contrast between intensity bands (at
least, it certainly was to me back when I was a casual weather app user!).

The tradeoff is that there isn't as fine an intensity discrimination at lower
intensities. In theory I can see how that can be a problem, but I haven't
noticed it in practice.

------
davidw
Interesting... as part of a little site I threw together, I have a ton of
radar data for the region of Italy where I live, and have always thought it'd
be fun to do something with it:

<http://www.meteo-veneto.net/>

