Hacker News
How Transformers Work – Model Used by OpenAI and DeepMind (medium.com)
216 points by giacaglia 71 days ago | 17 comments



Here are two more great articles about Transformers:

The Illustrated Transformer (referenced in the parent): http://jalammar.github.io/illustrated-transformer/

The Annotated Transformer: http://nlp.seas.harvard.edu/2018/04/03/attention.html


I reference one of the articles, but I hadn’t looked at the other one! Very interesting. Thanks for sharing.


You do more than reference it -- you've copied a bunch of text and figures from it as well. Search for "The encoder’s inputs first flow through a self-attention layer" and read on from there. Most of the article is a word-for-word copy.


I’ve tried to use a bunch of the figures and information from these articles. I hope it was useful for some people.


Regardless of whether or not it's useful, it's substantially plagiarized from another source. You don't have an inline citation or visual indication that many of the figures are copied from another article on the topic. The same goes for copying paragraphs with extremely minimal modifications.

Slightly changing sentence structure is not paraphrasing or stating in your own words. Pointing a reader to an article for further reading is not the same as a citation.

To put it bluntly, your arrangement of the material, substantial paragraphs, and a significant number of your figures/graphics are copied from elsewhere without citation.


The rest of the images are largely from colah’s blog posts; it’s plagiarized from a mix of sources.

Providing links billed as additional reading material doesn’t count as a citation.


I cite them at the end. The idea was to summarize all the posts and videos referenced there into a single blog post. I will add a note to the self-attention section; that is the part I took from Jay's blog. I hope it was useful for some people.


That's not good enough. Citations must be in the text, so that nobody mistakes their work for yours. This can be done informally, like 'Jay describes' or 'Sally's article says', followed by your own words, or as a direct quote 'in quotation marks' with a link to their work.


I'm adding a note to the self-attention section stating that it was taken from another blog post.


Note that unless the other blog post is published under a license that allows reuse (such as Creative Commons Attribution-ShareAlike), you are simply not permitted to copy images or text without explicit permission from the authors. It's not enough to state that it was taken from another post.

Of course, you are allowed to quote excerpts, but then the text should be clearly marked as a quotation.


The right thing to do is to either take down the blog post, replacing it with references to the articles you used, or quickly transform (pun intended) the paragraphs and graphics into your own. As it stands, it’s plagiarism.


I'm going to be honest, I thought this was going to be about Transformers characters designed by a Deep Learning AI.


I thought this was about the electrical components. I was like, why would you need a neural network when there are physical laws?


I too was expecting to see some derivations of Maxwell's equations, inductor behavior etc.


And I thought it was an application of monad transformers to AI, even though I don't even know if that makes sense.


I thought "designed by a Deep Learning AI"


same here, very disappointing



