I think the book is still too long. Some of the passages are huge, with long blocks of text. There are a lot of filler words in there, like "which is a bummer", and a lot of "say..." ellipses. As a reader, you have to spend mental energy figuring out the key point of each paragraph. That would be fine if the book were meant to be dense, but since you are trying to make this a concise intro, it would be best to reduce that mental energy requirement to the absolute minimum.
A bit of feedback: read the paragraphs you wrote, ask yourself "what is the exact point I am trying to convey here?", and then remove every word that can be removed without taking away from that point. You want each paragraph to be as short and concise as possible, since that's what makes your book different from all the other "dense" books out there on the same topic.
If someone in the bookstore opens your book and skims it, and your paragraphs are visibly small with lots of whitespace between them, then even without reading the text in detail they can immediately tell your book is special and completely different from all the other NN books in the store, and they will be more likely to buy it right away.
I think the beginning was great. The everyday examples and analogies made it very quick and simple for the reader to "get" what you mean. But as the chapters progress, e.g. chapter 3, the text gets denser and the examples get fewer. In the later chapters it looks like you got more excited about the technical details, the calculus, etc., and those chapters no longer lean on examples as much or stay as concise as before. They look more and more like the other, denser literature in the field.
In any case, overall it's a great book. It's very unique; I have never seen this condensed approach before, and it's very special compared to all the other NN literature out there. It's very refreshing. I'm sure it will inspire a new generation of machine learning scientists who will remember it for years to come!
Thanks for writing this!
For technical documentation, it's actually better to hire a really good technical editor than a tech writer. Have the engineer spew out the right ideas, and then the editor adds the magic that makes it an effective read. That's an easier process than trying to teach the subject to a tech writer.
(A warning if you click this link: it's not python.)
The amazon affiliate panel shows an item-by-item breakdown of any purchases made by cookied users; goodbye, private purchases.
Here is a non-tracked link:
> goldenkey:
So no, he can't.
It seems a good way into ML is to hack your way around with libraries until you get a feel for it, and only after that start reading up on theory or taking some ML classes. The other way around is dry.
Thank you for writing this!
EDIT: And reply to this message.
- Assume that your target audience is going to be very eager to learn about DL but has no clue what exactly to learn or where to even start. That's why they are buying your book in the first place and not some other, denser text.
- Hence, telling your readers what to learn and where to find more info is just as important as the subject matter itself. This can be as easy as e.g. telling the readers about certain keywords that they can use in their Google searches.
- The very best texts that I've read on complicated subjects were always "coarse-to-fine", i.e. give the readers the big picture as early as possible, then enable them to go into details at their own pace.
- Conversely, the worst texts that I've read on complicated subjects were either fine-to-coarse (trying to explain individual components in detail before getting to the big picture), never explained the big picture at all, or were too verbose in the beginning (slowing down the eager readers and killing their motivation). A good example of the latter is Apple's "Programming with Objective-C". Horrible text IMO.
- Following what was said above, sometimes details aren't even necessary to include in your text, as long as the readers are confident that they can find their way around and get the details elsewhere.
- The very, very best texts I've read also always had a motivational component. For someone who's just starting out, the field looks vast, unconquerable, and scary. If you show them, in simple words, the boundaries of the field, which areas the experts are working on, and even where current research is struggling, you give confidence and a trajectory to your readers, so they can strive to become experts too.
"A neural network learns a function. This might seem confusing since I just told you that it is
a funtion. However, every neural network starts out predicting randomly. In other words, our
starting weight values are random... thus our function predicts randomly. It's a random function.
As you may remember from the previous chapter, a neural network learns how to take
an input dataset and convert it into an output dataset. For example, it might take an input dataset
of Farenheit temperatures and learn to convert it into an output dataset of Celsius temperatures.
It might covert a pixel values dataset"
funtion, Farenheit, covert
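For what it's worth, the point the quoted passage is making can be shown in a couple of lines. This is my own toy sketch, not the book's code: a randomly initialized weight gives a function whose outputs are effectively random.

import random

weight = random.random()            # random starting weight -> random function
fahrenheit = [32.0, 68.0, 212.0]    # made-up input dataset
predictions = [f * weight for f in fahrenheit]
print(predictions)                  # meaningless numbers until the weight is learned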
"We just take each weight... compute its affect on the error... and move it
in the right direction so that the error goes down (to 0)."
affect vs effect
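Incidentally, the update rule that quote describes can be sketched in a few lines. This is a toy example of my own, not the book's actual code: measure how far the prediction is from the goal, then nudge the weight in the direction that shrinks the error.

input_value, goal = 2.0, 0.8
weight = 0.5                              # arbitrary starting weight
for _ in range(20):
    pred = input_value * weight
    delta = pred - goal                   # how far off the prediction is
    weight -= 0.1 * delta * input_value   # step against the error's slope w.r.t. the weight
print(input_value * weight)               # ~0.8, i.e. error close to 0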
A nitpick is that you use the words "matrices" and "differentiable" at the end of chapter 2. Maybe this is okay because you are signposting that these concepts will be explained, but if you are aiming for readers with high school algebra and some python experience, this could intimidate people.
IMO this Ycombinator topic is heavy blogspam for this book and should be deleted.
It also stung a bit that the link said 'Click To See the First Few Chapters' when in fact you click to see the first chapter and pay for the rest.
Also, FWIW, I'm offering free Q&A for feedback on those chapters (assuming I don't get totally overwhelmed).
Welcome to the internet.
def neural_network(input_data, weight):
    prediction = input_data * weight   # scale the input by the single weight
    return prediction
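A quick usage sketch of the function above; the numbers are example values I made up, not from the book:

number_of_toes = [8.5, 9.5, 10, 9]   # hypothetical input data
weight = 0.1                         # hypothetical starting weight
pred = neural_network(number_of_toes[0], weight)
print(pred)                          # 0.85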
Most importantly, numpy makes it really easy to deal with matrices (~ arrays of arrays). You just operate on them as if they were classic numbers (so you can do `a + b`, where both a and b are matrices).
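A tiny example of what that looks like (the values are just for illustration):

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[10.0, 20.0], [30.0, 40.0]])
print(a + b)   # elementwise sum, as if they were plain numbers
print(a * 3)   # scalars broadcast over every cell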
While translating it into other languages that don't have numpy is indeed possible, expect a bit of intellectual gymnastics.
I don't know which language you targeted, but for those who wish to use golang, I made this matrix library: https://github.com/oelmekki/matrix
EDIT: oh, btw, I'm initially a ruby dev. I've learnt python just enough to be able to understand NN code, and that was easy (took me an afternoon). I won't pretend this makes me a python developer, but learning just enough to translate code into another language is straightforward.
When numpy uses the * operator between two matrices, it basically just does a cell-by-cell multiplication:
result_matrix[x][y] = matrix1[x][y] * matrix2[x][y]
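If it helps, here is that behaviour spelled out with made-up values (note that "real" matrix multiplication is a separate operation, np.dot or the @ operator):

import numpy as np

matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])
result_matrix = matrix1 * matrix2   # cell-by-cell product
print(result_matrix)                # [[ 5 12], [21 32]]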
I hope you will use spaced repetition, so the reader has a base from which to move on to a deeper level of understanding. Can't wait to buy this book soon.
Thanks for your initiative, I will buy the book as soon as it is available.
"Everything you need to know to undrstand Deep Learning will be explained like you would to a 5 year old, including..."
Love the intro
Googling around usually gets you at least 39% off. Here is one that worked for me: ctwgeopytw
Signing up for their email list usually gets you one of those soon after the book is announced, and again once it's close to the publication date.
This should be flagged as blogspam.
No, you don't "get it". And I may "love the book", if I ever read the supposed "book".
I believe this is merely a marketing test for a book which likely does not yet exist, and news.ycombinator is not a forum for market testing or advertising books, even books on NN.
Furthermore, a post in trask's defense by another author who also has a book published by Manning is most unsavory. Publishers, authors, and their agents should cease spamming news.ycombinator.
Yup, I have written a book with Manning (Grokking Algorithms), which is why I commented about empathy towards authors.
"Please don't submit comments complaining that a submission is inappropriate for the site. If you think a story is spam or off-topic, flag it by clicking on its 'flag' link. If you think a comment is egregious, click on its timestamp to go to its page, then click 'flag' at the top. (Not all users see flag links; there's a small karma threshold.)"
" If you flag something, please don't also comment that you did.
" Please resist commenting about being downvoted. It never does any good, and it makes boring reading."
I apologize to all for these violations.
Nonetheless, I find the initial "chapter" of the aforementioned "book" to be devoid of significant content, and I will await the full publication before spending any money.
(but I don't control the website download offer)
Why, may I ask?
You're insulting one of your peers here. Insults, clever or otherwise, do not belong on HN, especially not in threads where those peers offer their works for limited review.
If you're shocked that someone here is writing a book with an actual audience, even more shocked that they would have the temerity to charge for their work, complain that they won't give you 'access', and then insult them to boot when they offer to do just that, then you go from 'clever' to 'asshole'.
I've flagged your comments, and I would really appreciate it if you found it in you to apologize to the topic starter. Subthreads like these make me sad.
This is like trying to teach monads without having taught lambda calculus, functors, and applicatives.
There is a clear order to knowledge, and people should master the books dealing with prereqs if they want to grok deep learning.
Not jump into deep learning just because it's the hot shit.
These efforts to cheaply popularize CS are part of what makes computer science look like not a real field, just a fad.
Nobody will write a book called "Grokking Quantum Physics" claiming that explaining quantum physics like "you are a five year old" will somehow make up for the necessary mastery of classical physics.
If you read such a book and think you understand quantum physics, you are terribly misguided.
Dunning-Kruger gets people addicted to feeling like they have mastered a subject without putting in the effort.
A little learning is a dangerous thing.