Great presentation, but I do wish they'd throw in an equation or two. When they talk about the "channel objective", which they describe as "layer_n[:,:,z]", do they mean they are finding parameters that maximize the sum of the activations of RGB values of each channel? I'm not quite sure what the scalar loss function actually is here. I'm assuming some mean. (They discuss a few reduction operators, L_inf, L_2, in the preconditioning part but I don't think it's the same thing?)

The visualizations of image gradients were really fascinating; I'd never thought about plotting the gradient of each pixel channel as an image. I take it these gradients are for a particular (and identical) random starting value and step size? It's not totally clear.

(I have to say, "second-to-last figure.." again.. cool presentation but being able to say "figure 9" or whatever would be nice. Not everything about traditional publication needs to be thrown out the window.. figure and section numbers are useful for discussion!)

You're right: we are taking the mean of the activations of a given channel `z` over all its `x,y` coordinates. (We could sum, but we use mean so that step sizes are comparable between channel and neuron objectives.) Thanks for the feedback that this notation is not super clear, we will consider rewriting those expressions.
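For anyone following along, here's a minimal NumPy sketch of that objective as I understand it (the function name and shapes are mine, not the article's):

```python
import numpy as np

def channel_objective(layer_n, z):
    """Mean activation of channel z over all spatial (x, y) positions.

    layer_n: activation tensor of shape (height, width, channels),
    matching the article's layer_n[:, :, z] indexing.
    """
    return layer_n[:, :, z].mean()

# Toy activations: a 4x4 spatial grid with 8 channels.
acts = np.zeros((4, 4, 8))
acts[:, :, 3] = 2.0                # channel 3 is uniformly active

print(channel_objective(acts, 3))  # 2.0
print(channel_objective(acts, 0))  # 0.0
```

Using the mean rather than the sum just rescales the objective by the number of spatial positions, so (as noted above) the gradient magnitudes stay comparable across objectives with different spatial extents.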

When we do feature visualization we do start from a random point/noise. For the diagram showing steepest descent directions, however, the gradient is evaluated on an input image from the dataset, shown as the leftmost image. There's no real step size either as we're showing the direction. You can think of the scale as arbitrary and chosen for appearance.
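Since only the direction matters, displaying a gradient as an image just requires mapping it into valid pixel range. A sketch of one way to do that (this is an illustration, not the article's actual code):

```python
import numpy as np

def gradient_to_image(grad):
    """Rescale a per-pixel gradient of shape (H, W, 3) into [0, 1] for display.

    The absolute magnitude is arbitrary -- only the direction is meaningful --
    so we center zero gradient at mid-gray (0.5) and divide by the largest
    absolute value so everything fits in [0, 1].
    """
    scale = np.abs(grad).max()
    if scale == 0:
        return np.full_like(grad, 0.5)
    return 0.5 + 0.5 * grad / scale

grad = np.random.randn(8, 8, 3)            # stand-in for a real image gradient
img = gradient_to_image(grad)
print(img.min() >= 0.0, img.max() <= 1.0)  # True True
```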

Section numbers are on their way—and figure numbers also sound helpful! I've added a ticket. (https://github.com/distillpub/template/issues/63) For now you can already link to figures like this: https://distill.pub/2017/feature-visualization/#steepest-des...

Ah, thanks for the explanation re the gradient images; I get it now! I think the text actually does say that more or less, and I was just understanding it a bit wrong, my bad. For me the preconditioning part of the article is the hardest to get an intuition for.