I'm not sure if this is directly mentioned in the paper, but I didn't see any mention of the conflation between a validation set and a test set. When people actually make a distinction between the two (which seemingly isn't all that common nowadays), you're meant to perform model selection on the validation set, i.e. find the best hyperparameters such that you minimise `loss(model, valid_set)`. Once you've found your most performant model according to that, you evaluate it on the test set once, and that's your unbiased estimate of generalisation error. Since the ML community (and reviewers) are obsessed with "SOTA", "novelty", and bold numbers, a table of results composed purely of test set numbers is not easily controllable (when you're trying to be ethical) from the point of view of actually "passing" peer review. Conversely, what is easily controllable is a table full of validation set numbers: just perform extremely aggressive model selection until your model gets higher numbers than everything else. Even simpler solution: why not just ditch the distinction between the validation and test set to begin with? (I'm joking, btw.) Now you see the problem.
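For the record, the intended protocol looks something like this: a minimal sketch using scikit-learn and a toy dataset (both my own choices for illustration). Hyperparameters are picked on the validation set; the test set is touched exactly once at the very end.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)

# Split into train / validation / test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_valid, X_test, y_valid, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Model selection: pick the hyperparameter that does best on the validation set.
best_score, best_C = -1.0, None
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    score = model.score(X_valid, y_valid)
    if score > best_score:
        best_score, best_C = score, C

# Evaluate the selected model on the test set once; this is the only number you report.
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print("test accuracy:", final_model.score(X_test, y_test))
```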
Your description of tanh isn't even correct, it squashes a real number to `(-1, 1)`, not "less than one".
You're curious about whether there is any gain in parameterising activation functions and learning them, or rather, why that isn't used much in practice. That's an interesting academic question, and it seems like you're already experimenting with your own kinds of activation functions. However, people in this thread (including myself) wanted to clarify some misunderstandings you seemed to have about nonlinearities and "why" they are used in DNNs, and why "squashing function" is a misnomer: `g(x) = x/1000` squashes its input but doesn't introduce any nonlinearity. Yet you continue to fixate and double down on your knowledge of "what" a tanh is, and even that is incorrect.
When discussing `tanh` squashing among other AI experts, it's generally assumed that even the most pedantic and uncharitable parsing of words won't misinterpret "squashing to less than one" as an incorrect statement, because the "one", in that context, obviously refers to distance from zero.
Of course they do exist. A parameterized activation function is the most obvious thing to try in NN design, and has certainly been invented/studied by thousands of researchers.
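PReLU is probably the most familiar example: the slope on the negative side is itself a trained parameter, updated by backprop like any other weight. A minimal PyTorch sketch (layer sizes are just illustrative):

```python
import torch.nn as nn

# A tiny MLP with a learnable activation: PReLU learns the negative-side
# slope `a` in f(x) = x if x > 0 else a * x, alongside the linear weights.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.PReLU(),   # one learnable slope (default init 0.25), trained like any weight
    nn.Linear(32, 1),
)

# The slope shows up as an ordinary parameter next to the linear weights/biases.
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
```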
How was that person derailing the convo? Nothing says an activation function has to "squash" a number into some range. Leaky ReLUs, for instance, do `f(x) = x if x > 0 else a*x` (for some small coefficient `a != 0`, typically `0.01`); that doesn't squash `x` into any range (unless you want to be peculiar about your precise definition of what it means to squash a number). The function takes a real in `(-inf, inf)` and produces a number in `(-inf, inf)`.
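A quick numpy illustration of that difference, with toy inputs of my own choosing:

```python
import numpy as np

def leaky_relu(x, a=0.01):
    # f(x) = x for x > 0, a*x otherwise -- no bounded range anywhere
    return np.where(x > 0, x, a * x)

x = np.array([-1000.0, -1.0, 0.0, 1.0, 1000.0])
print(leaky_relu(x))  # -10, -0.01, 0, 1, 1000: unbounded in both directions
print(np.tanh(x))     # every value squashed into (-1, 1)
```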
> Sure there's a squashing function on the output to keep it in a range from 0 to 1 but that's done BECAUSE we're just adding up stuff.
It's not because you're "adding up stuff"; there is a specific mathematical or statistical reason why it's used. For neural networks it's there to stop your multi-layer network collapsing to a single-layer one (i.e. a linear algebra reason). You can choose whatever function you want; for hidden layers, tanh generally isn't used anymore, it's usually some variant of ReLU. In fact, Leaky ReLUs are very commonly used, so OP isn't changing the subject.
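To make the "collapsing" point concrete, here's a tiny numpy sketch (biases omitted, shapes arbitrary): with no nonlinearity in between, two linear layers are exactly equivalent to a single one.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=(3,))

# Two "linear" layers stacked with no nonlinearity in between...
two_layers = W2 @ (W1 @ x)
# ...are exactly one linear layer whose weight matrix is W2 @ W1.
one_layer = (W2 @ W1) @ x
print(np.allclose(two_layers, one_layer))  # True

# Insert a nonlinearity between them and the composition no longer collapses.
with_nonlinearity = W2 @ np.tanh(W1 @ x)
print(np.allclose(with_nonlinearity, one_layer))  # False (in general)
```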
If you define a "perceptron" (`g(Wx+b)`, where `W` is a `1xP` matrix of weights) and train it as a logistic regression model, then you want `g` to be a sigmoid. Its purpose is to ensure that the output can be interpreted as a probability (given that you use the correct statistical loss), which means squashing the number. The converse isn't true: if I take random numbers from the internet and squash them to `[0,1]`, I don't get to call them probabilities.
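A toy numpy version of that setup (random numbers standing in for real data, sizes my own choice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(size=(5,))  # P = 5 input features, one output unit
b = 0.0
x = rng.normal(size=(5,))

# g(Wx + b) with g = sigmoid: the score is squashed into (0, 1) so it can be
# read as P(y = 1 | x), provided we pair it with the matching log loss.
p = sigmoid(W @ x + b)
y = 1  # observed label
log_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
print(p, log_loss)
```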
> and not only is it's PRIMARY function to squash a number, that's it's ONLY function.
Squashing the number isn't the reason, it's the side effect. And even then, I just said that not all activation functions squash numbers.
> All the training does is adjust linear weights tho, like I said.
Not sure what your point is. What is a "linear weight"?
We call layers of the form `g(Wx+b)` "linear" layers, but that's an abuse of terminology: if `g()` is non-linear then the output is not linear. Who cares if the inner term `Wx + b` is linear? With enough of these layers you can approximate fairly complicated functions. If you're arguing about whether there is a better fundamental building block, then that's another discussion.
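As a toy illustration of that approximation claim (PyTorch; the target function, layer sizes, and training length are all arbitrary choices of mine):

```python
import torch
import torch.nn as nn

# Fit a small stack of g(Wx + b) layers to a wiggly 1-D target function.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(3 * x) + 0.5 * x

net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(loss.item())  # approaches zero: the stacked layers fit the curve closely
```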
In the context of discussing linearity vs. non-linearity, adding the word "linear" in front of "weight" is clearer, which is what my top-level post on this thread was all about too.
It's astounding to me (and everyone else who's being honest) that LLMs can accomplish what they do when linear "factors" (i.e. weights) are all that gets adjusted during training, yet genuine reasoning is achieved. During training we're not [normally] adjusting any parameters or weights of any non-linear functions. I include the caveat "normally" because I'm speaking of the basic Perceptron NN using a squashing-type activation function.
> It's astounding to me (and everyone else who's being honest) that LLMs can accomplish what they do when linear "factors" (i.e. weights) are all that gets adjusted during training, yet genuine reasoning is achieved.
When such basic perceptrons are scaled enormously, it becomes less surprising that they can achieve some level of 'genuine reasoning' (e.g., accurate next-word prediction), since the goal with such networks is, at the end of the day, just function approximation. What's more surprising to me is how we found ways to train such models, i.e., advances in hardware accelerators combined with massive data; those factors are just as significant in my opinion.
Yeah, no one is surprised that LLMs do what they're trained to do: predict tokens. The surprise comes from the fact that merely training to predict tokens ends up with model weights that generate emergent reasoning.
If you want to say reasoning and token prediction are just the same thing at scale you can say that, but I don't fall into that camp. I think there's MUCH more to learn, and indeed a new field of math or even physics that we haven't even discovered yet. Like a step change in mathematical understanding analogous to the invention of Calculus.
ACE 7, recently got my PhD in ML and am doing reasonably well, at least through an 'objective' lens (I have a well-paying job and bought my first condo recently). My mind is as chaotic as ever (I have a mix of OCD / Tourettes / PTS) and it's still often hard for me to concentrate without overthinking some portion of my life or something completely unrelated to the task at hand.
If I can plug a Youtuber who really 'gets it' (mental health and depression), it's Dr Scott Eilers. Everything else is just clichéd garbage.
(Disclaimer: I am not 'gifted' but did very academically well during high school and uni)
I wish I had the ability to toggle PiP for any open window, while I am in full screen mode. For instance, I have both Chrome and Emacs side by side full screen and I can use a hotkey to drop down my iTerm window over both of them. (Basically like a Quake terminal but that feature is specific to iTerm).
Mostly theoretical-ish deep learning stuff as of late (I'm a PhD candidate in that field). But I want to expand it into really anything: psychology, dating, video game reviews, etc.