This book aims to provide an introduction to the topic of deep learning algorithms. We review essential components of deep learning algorithms in full mathematical detail including different artificial neural network (ANN) architectures (such as fully-connected feedforward ANNs, convolutional ANNs, recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different optimization algorithms (such as the basic stochastic gradient descent (SGD) method, accelerated methods, and adaptive methods). We also cover several theoretical aspects of deep learning algorithms such as approximation capacities of ANNs (including a calculus for ANNs), optimization theory (including Kurdyka-Łojasiewicz inequalities), and generalization errors. In the last part of the book some deep learning approximation methods for PDEs are reviewed including physics-informed neural networks (PINNs) and deep Galerkin methods. We hope that this book will be useful for students and scientists who do not yet have any background in deep learning and would like to gain a solid foundation, as well as for practitioners who would like to obtain a firmer mathematical understanding of the objects and methods considered in deep learning.
The personal responsibility argument really falls apart when you consider the forces working against the user. The conversation has long since moved on from this line of thinking because the power is so lopsided now.
I'm not quite sure I understand your analogy because I think most would agree that knives should not be regulated and we shouldn't investigate and punish knife sellers/makers for stabbings.
To play along with your analogy: if social media sites are well-designed, they do what they're supposed to do: communicate information or misinformation. And just like how we don't crack down on knives even though there are plenty of stabbings, we shouldn't be cracking down on social media & online speech even though there are plenty of liars and idiots who spread misinformation.
The harm from a person wielding a knife for malicious purposes is far more limited than the harm social media can fuel, and it spreads far more slowly. If you genuinely believe scale doesn't matter and everything is the same as everything else, I don't know what to tell you.
I don't doubt that scale matters, it's just a bad analogy. Knives aren't comparable to social media, but if the commenter wants to play the knife analogy game then this is the conclusion.
But the object of knives isn't to stab people to death; a vanishingly small number of the knives bought are used to stab people to death, and "lunatics" are not a core base for any knife seller.
A well-designed system does what it's supposed to. Facebook's either supposed to be addictive, or it just is by accident. If it's by accident, it's poorly designed. If it's not, it has an objectionable purpose.
Facebook is not designed to be addictive; it's designed to be engaging. Lunatics aren't the core base for Facebook either; only a small minority of Facebook's total user base causes problems. It looks like you've already made up your mind that Facebook is addictive and bad; you barely have to dress it up with this analogy.
There's no solid evidence that it is addictive except in a colloquial sense. Addictive substances like opiates, nicotine, etc. consistently induce an addiction response in humans with very predictable outcomes. Facebook isn't anything like that, but its detractors love to use the word "addiction" to imply a medical phenomenon that doesn't happen. The "harm" of Facebook usage is not even close to the reliably brutal consequences of actual addictive substance abuse, nor is it induced with any consistency. Unlike actual addictive substances, Facebook does not alter one's ability to inhibit one's own behavior.
There's no doubt that Facebook can negatively impact one's self-esteem and life, but that's like any entertainment. Even alcohol addiction is better understood and arguably more brutal, yet the only restriction on alcohol is an age limit.
I was a patient, in hospital for more than two months. I always thought that if classical music echoed in the corridor or in my room, the pain would be less.
Pre-Scheme (https://en.wikipedia.org/wiki/PreScheme) is a GC-free (LIFO) subset of Scheme
newLISP (http://www.newlisp.org/) uses "One Reference Only" memory management
Linear Lisp (http://home.pipeline.com/%7Ehbaker1/LinearLisp.html) produces no garbage (and seems to have no implementation); see the sketch after this list for the basic idea
Dale (https://github.com/tomhrr/dale) is basically C in S-Exprs (but with macros)
ThinLisp (https://web.archive.org/web/20160324213055/http://www.thinlisp.org/, https://github.com/ska80/thinlisp) is a subset of Common Lisp that can be used without GC
Bone Lisp (https://github.com/wolfgangj/bone-lisp) uses semi-automatic memory management
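For anyone unfamiliar with the linear / "One Reference Only" style that Linear Lisp and newLISP lean on, here is a minimal, hypothetical Python sketch of the discipline; it is not code from any of the projects above, just an illustration of why "every value has exactly one owner and is consumed on use" removes the need for a tracing GC:

```python
# Illustrative sketch only: a "use exactly once" (linear) value.
# Not code from Linear Lisp or newLISP; it just demonstrates the discipline
# that every value has a single owner and is consumed on use, so nothing is
# shared and nothing needs a tracing garbage collector.

class Linear:
    def __init__(self, payload):
        self._payload = payload
        self._consumed = False

    def take(self):
        """Consume the value; using it a second time is an error."""
        if self._consumed:
            raise RuntimeError("linear value used more than once")
        self._consumed = True
        return self._payload

def linear_cons(head, tail):
    # Building a pair consumes both arguments: ownership moves into the pair.
    return Linear((head.take(), tail.take()))

if __name__ == "__main__":
    a = Linear(1)
    b = Linear(2)
    pair = linear_cons(a, b)
    print(pair.take())   # (1, 2)
    # a.take() here would raise: the value was already moved into the pair.
```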
See also Conal Elliott's "Compiling to categories" (http://conal.net/papers/compiling-to-categories/), where he automatically converts Haskell to point-free form (like a concatenative language, but not quite) and then instantiates that over different categories to get different correct programs from the same expression.
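The flavor of the idea can be shown without Haskell or Elliott's GHC plugin: write one point-free expression against an abstract vocabulary, then instantiate it against different interpretations. A hypothetical Python sketch (the names `FnCat`, `ShowCat`, and `double` are invented for the example and are not from the paper):

```python
# Illustrative sketch: one "point-free" program, two interpretations.
# This only mimics the flavor of compiling-to-categories; names are invented.

class FnCat:
    """Interpret the combinators as ordinary Python functions."""
    def compose(self, g, f): return lambda x: g(f(x))
    def add(self):           return lambda xy: xy[0] + xy[1]
    def dup(self):           return lambda x: (x, x)

class ShowCat:
    """Interpret the same combinators as strings describing the pipeline."""
    def compose(self, g, f): return f + " ; " + g
    def add(self):           return "add"
    def dup(self):           return "dup"

def double(cat):
    # The program is written once, point-free, against the abstract vocabulary.
    return cat.compose(cat.add(), cat.dup())

if __name__ == "__main__":
    print(double(FnCat())(21))   # 42        -- the "function" interpretation
    print(double(ShowCat()))     # dup ; add -- a "description" interpretation
```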
Cool project! I've been working on typing for stack-based languages, although I got stuck trying to make the inferencer work on higher-order functions. Kleffner's Master's thesis supposedly explains how to accomplish that, but I haven't been able to wrap my head around it yet. I'd be interested to hear how you have solved the problem in Thun.
My original implementation is in Python and I documented a bit of research and implementation of type inference here: "The Blissful Elegance of Typing Joy" http://joypy.osdn.io/notebooks/Types.html The “Type Inference in Stack-Based Programming Languages” talk given by Rob Kleffner informed it. I don't recall now whether I read his thesis but it's likely. I probably couldn't wrap my head around it either.
For typing combinators (Joy's higher-order functions) I tried making a hybrid inferencer and interpreter that just evaluated them and it worked. (Incidentally that's what drove home to me the categorical nature of Joy. When I read Conal Elliott's "Compiling to Categories" I recognized what I had done.) In Joy the higher order combinators "don't care" if they are working on e.g. values or types. In other words they only care about the shape or structure (structural typing) of the data on the stack.
When I wrote the interpreter in Prolog and then wrote the inferencer in Prolog, I noticed they were the same code, so I deleted one of them. In Prolog you can pass a stack and compute values, or pass logic variables and it will tell you what kind of stack a given expression expects/generates. If you implement the math ops with CLP(FD) you get a nice constraint compiler that can generate new Prolog implementations of Joy expressions. Sick, eh?
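To make the "interpreter and inferencer are the same code" point concrete without Prolog, here is a small hypothetical Python sketch. Each word is just a stack-to-stack function: feed it concrete values and you evaluate; feed it symbolic placeholders and you read off the stack effect. None of these names come from Thun/Joypy itself:

```python
# Illustrative sketch of the value/type duality described above.
# Each Joy-like word is a stack -> stack function (top of stack at index 0).
# Concrete values in  -> the program is evaluated.
# Symbolic values in  -> the program's stack effect (structural "type") falls out.
# All names here are invented for the example.

class Var:
    """A symbolic stack item standing in for 'some value of unknown type'."""
    def __init__(self, name): self.name = name
    def __repr__(self):       return self.name
    def __add__(self, other): return Var(f"({self}+{other})")
    def __mul__(self, other): return Var(f"({self}*{other})")

def dup(stack): x, *rest = stack; return [x, x, *rest]
def add(stack): x, y, *rest = stack; return [y + x, *rest]
def mul(stack): x, y, *rest = stack; return [y * x, *rest]

def run(program, stack):
    for word in program:
        stack = word(stack)
    return stack

square = [dup, mul]

if __name__ == "__main__":
    # As an interpreter: concrete values in, concrete values out.
    print(run(square, [7]))         # [49]
    # As an inferencer: the same code reveals the stack effect  a -- (a*a).
    print(run(square, [Var("a")]))  # [(a*a)]
```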
In both Prolog and Python I haven't yet closed the loop for recursive combinators, meaning the type inferencer generates the base case and then the case for recurring once, then twice, and so on. I know the answer is some simple application of fixed-point theory or something, but I'm an idiot, and I've been working on other aspects (because I'm sure the solution is decades old in the "compiling FP languages" literature: "Somebody else has had this problem.")
In any event, I don't think I'll have to figure it out, because I just found out that the next steps I was going to take have already been done by the "Seven Sketches" folks and then some: https://news.ycombinator.com/item?id=20376325
I'm pretty sure most of that stuff would make great Joy combinators. And something in there would be the way to deal with e.g. genrec and x combinators.