Show HN: Testing HN titles against a neural network (github.com)
285 points by atum47 25 days ago | 209 comments



Congratulations on reaching first rank on the front page.

Congratulations on getting your hands dirty and doing everything yourself: computing gradients manually, a flawed shuffle (non-Fisher-Yates), a flawed JS transpose (double swapping). It is a great way to learn.
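For reference, the textbook versions of those two fixes look something like this in plain JS (a sketch of the standard techniques, not the author's actual code):

  // Fisher-Yates: walk from the end, swapping each element
  // with a uniformly random position at or before it.
  function shuffle(a) {
    for (let i = a.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [a[i], a[j]] = [a[j], a[i]];
    }
    return a;
  }

  // In-place square-matrix transpose: only visit j > i, so no
  // pair gets swapped twice (the "double swapping" bug).
  function transpose(m) {
    for (let i = 0; i < m.length; i++) {
      for (let j = i + 1; j < m.length; j++) {
        [m[i][j], m[j][i]] = [m[j][i], m[i][j]];
      }
    }
    return m;
  }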

Congratulations on completing a full pipeline; that's the hard part. After that, it's just swapping pieces for better pieces.

I advise non-technical readers not to attach much value to the results of this neural network, as it is probably inferior to the even simpler naive Bayes.

The model of the neural network is simplistic:

Concat(Word Vectors)-Dense(120,act=sigmoid)-Dense(60,act=sigmoid)-Dense(2,act=sigmoid)

The Concat operation means it is especially sensitive to dropping or adding a word, as doing so offsets the remaining words and gives a totally different vector.

Using word vectors means it doesn't forgive any spelling mistake, as a misspelled word will usually correspond to the <unknown> vector.

Using a feed-forward neural network means formulaic titles, made by substituting a single word in a positive example from the training set, will often score well.

It is trained by gradient descent with a squared-error loss on ~1000 examples, one example at a time, without cross-validation, using a custom-written neural network library. (Almost all of these bad choices can be fixed by using a framework.)
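For example, roughly the same architecture in TensorFlow.js gets shuffling, mini-batching, a cross-entropy loss, and a held-out validation split almost for free. A sketch only: the layer sizes come from the description above, while inputDim, xs and ys are assumed tensors of concatenated word vectors and one-hot labels:

  const tf = require('@tensorflow/tfjs');

  async function train(xs, ys, inputDim) {
    // Dense(120)-Dense(60)-Dense(2) as above, but with a
    // softmax output so the two scores sum to 1.
    const model = tf.sequential();
    model.add(tf.layers.dense({inputShape: [inputDim], units: 120, activation: 'sigmoid'}));
    model.add(tf.layers.dense({units: 60, activation: 'sigmoid'}));
    model.add(tf.layers.dense({units: 2, activation: 'softmax'}));
    // Cross-entropy instead of squared error; mini-batches
    // instead of one example at a time; 20% held out.
    model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']});
    await model.fit(xs, ys, {epochs: 20, batchSize: 32, shuffle: true, validationSplit: 0.2});
    return model;
  }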

It seems to have successfully overfit, as it returns Good ~1.0 for positive examples from the training set.


This really puts the work in perspective. Thank you for the summary!


Thank you for summarising, and thanks to OP for publishing his work. The combination of both was a nice opportunity for me (and others?) to learn something from mistakes. :-)


I really like this comment. Thanks a lot for taking the time to go through the code and figure all that out. The way I learn things is by doing. Since I started college, the only way I can understand a mathematical equation is by turning it into code. The machine learning class I took was a basic one. We learned about knn, kmeans, adaline, perceptron, linear regression... the semester ended with a multilayer perceptron with only one hidden layer. This Dejavu NN you see in my code is like the fourth iteration of me trying to really understand NNs. Some of the things you said I already knew but didn't take the time to apply, like cross-validation. That shuffle algorithm I didn't know; thanks for that. Anyway, if all the criticism I got on my projects were like yours, I'd be so much better by now.

When I learned about KNN, I made this project: https://github.com/victorqribeiro/budget

When I learned about Kmeans, I made this one: https://github.com/victorqribeiro/groupImg

When I learned about Perceptrons: https://github.com/victorqribeiro/carGamePerceptron (the training is done when you run the project)

When I learned about MLP: https://github.com/victorqribeiro/jokenpo

When I learned about Neural Evolution: https://github.com/victorqribeiro/aimAndShoot

None of these projects are perfect, but they helped me to better understand things. I did implement a professional solution for a company using machine learning, but I used scikit-learn (back then). I have some experience with tensorflow as well, but implementing things myself makes me feel like I'm in control. But I know when a solution is good enough for production and when it's not. I made it clear in the README of this project that this is not at all good for production. I took the time to show every single step of the process and tell it how it is. I had limited time and limited resources, and with the little I had I made a "fun" experiment. I enjoy doing this kind of thing. I have to squeeze this kind of project into my free time, because I have a full time job and I'm finishing my final thesis (I'll defend it two weeks from now).

Sorry if I turned this into a journal entry, but I got kind of hurt yesterday when the other guy shat all over my project. That's not at all what you did, and I appreciate that. Thanks again.


This. Exactly this. No sophisticated tokenization. No interesting architecture using attention. And the author is completely clueless about overfitting... and even cross-entropy loss. He could have gotten better results just using a bag-of-words approach.
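A bag-of-words baseline just counts words and feeds the counts to a linear classifier, which makes it insensitive to word order and to inserted words. A minimal sketch of the featurizer in plain JS, where vocab is an assumed Map from word to index built over the training titles:

  function bagOfWords(title, vocab) {
    const v = new Array(vocab.size).fill(0);
    for (const w of title.toLowerCase().split(/\s+/)) {
      // Unknown words are simply dropped instead of shifting
      // every later word's position in the input.
      if (vocab.has(w)) v[vocab.get(w)] += 1;
    }
    return v;
  }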

But this ends up on frontpage anyway. Welcome to HN.


What tools would you use to detect overfitting in this case and in general?


My brain.

You will overfit an NN trained on only 1000 examples.

Also a simple train/test split will tell you that. But the author failed to take any time to learn the basics before spewing out this drivel.


By now it's clear from your history of behaving aggressively in HN threads that you don't mean to use the site as intended. I've banned the account. https://news.ycombinator.com/item?id=21245546

I'm sad to do that because you are knowledgeable about a number of things. Many of us could learn from you if you would share what you know without putting other people down. But the aggression subtracts more than the knowledge adds. We can't have users behaving like this sort of asshole in comments, least of all in Show HN threads, where the idea is to teach people things and expressly not to shit on them and their work: https://news.ycombinator.com/showhn.html.

Other users here know things and are willing to talk about them without being mean. GistNoesis modeled this wonderfully in the GP comment. We'll learn what we can from them instead. But if you decide that you want to use HN in the intended spirit, as described in the site guidelines and especially the one that says Be kind, you're welcome to email hn@ycombinator.com and let us know.

https://news.ycombinator.com/newsguidelines.html


Current top 10:

1. "Apple introduces 16-inch MacBook Pro, the world’s best pro notebook" Bad: 0.9964 - Good: 0.0038

2. "Developing open-source FPGA tools" Bad: 0.3381 - Good: 0.6652

3. "Show HN: Can a neural network predict if your HN post title will get up votes?" Bad: 0.0598 - Good: 0.9307

4. "How internet ads work" Bad: 1.0000 - Good: 0.0000

5. "More Intel speculative execution vulnerabilities" Bad: 0.7413 - Good: 0.2306

6. "OpenSwiftUI – An Open Source Re-Implementation of SwiftUI" Bad: 0.9994 - Good: 0.0005

7. "How VCs Make Money" Bad: 0.9997 - Good: 0.0003

8. "OpenBSD: Why and How (2016)" Bad: 0.9988 - Good: 0.0013

9. "The Perl Master Plan: How to Put Perl Back on Top" Bad: 0.9997 - Good: 0.0003

10. "Jerry (YC S17) Is Hiring Senior Software Developers (Toronto)" Bad: 0.3142 - Good: 0.6800

So all in all, only 3 of today's top 10 had good titles... Either the titles could have been better but the content was too interesting, or this tool has very low recall.


"Show HN" Bad: 0.0002 - Good: 0.9998

"Warning: bad economist" Bad: 0.0001 - Good: 0.9999

"Warning: bad artificial intelligence" Bad: 1.0000 - Good: 0.0000


Seems like the judge has a small conflict of interest.


So the answer "Can a neural network predict if your HN post title will get up votes?" is a clear "no", at least for this tool.


The only one it predicts well...is itself.


Perhaps because the author had access to the tool before writing the title? In which case, less of a "prediction."


"My YC app: Dropbox - Throw away your USB drive" (Bad: 0.9970 - Good: 0.0029)


I think this is a prime example of where AI could go wrong. When people talk about social media AI curation, they don't really understand it. But I personally really wish social media would do less AI curation; who knows what gems we've missed just because they're maximising for our instant satisfaction.

Kinda spooky, even. Who knows: social media might already have killed companies that sounded too different, or even political ideas that differ from mainstream (or sponsored) views.


I think the problem is that the title is not a good indicator for current-event related submissions. "More Intel speculative execution vulnerabilities" may be a bad blogpost, but it's an important current event, so it still gets to the top regardless of the title selection.

Categorize submissions into different types and repeat the experiment, and you'll find the program may predict blog/article and "Show HN" submissions with higher accuracy.


> This project is far from credible. All the things I did were to satisfy my own curiosity. With that being said, the bigger limitation I can see is that I only had access to a few stories. I also cannot validated the neural network prediction, cause in order for me to do that, I would have to write a content, come up with a title and then post it choosing words that triggers a good value on the neural network and post that history on a Friday noon, to see if my story succeed.

This is from the Github project.


  new mac
seven characters, 0.0060 bad, 0.9940 good


"I know"

6 characters Bad: 0.0085 - Good: 0.9915

EDIT: found a higher score than yours at one character less

"I went"

6 characters Bad: 0.0025 - Good: 0.9975

for 5 characters:

"I won" Bad: 0.0031 - Good: 0.9976

===

at 4 characters:

"J ML" Bad: 0.0002 - Good: 0.9998


"ruby rails nodejs bad" -> good at 0.9975 or something


"Show HN: Cow Robots Neural Network Smells"

Bad: 0.0267 - Good: 0.9720


This comment is going to collect the most votes yet is predicted to be Bad: 0.9320 - Good: 0.0800


I am very torn. I really want to upvote this comment, but I equally want you to be proved wrong.


This is somewhat interesting. If what is stated in the comment itself is true, then the poster had to solve for x and y such that 'This comment is going to collect the most votes yet is predicted to be Bad: x - Good: y' would give Bad: x and Good: y as the output when passed through the neural net. Maybe they also manipulated other parts of the comment to find x and y.


Though a comment is not a post title.


Does this not add up to 1.0, by the way?


I didn't read the code in the post and don't have any deep familiarity with machine learning, but I have implemented a naive Bayes classifier to do something similar for tweets. The scores you get from that method don't add up to 1 either.
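Same cause here, most likely: per the model description elsewhere in this thread, the last layer is Dense(2) with independent sigmoids rather than a softmax, so nothing constrains the two outputs to sum to 1. A toy illustration:

  const sigmoid = x => 1 / (1 + Math.exp(-x));

  // Two independent sigmoid outputs: no reason to sum to 1.
  const [bad, good] = [1.2, 0.8].map(sigmoid); // ~0.77 and ~0.69, sum ~1.46

  // A softmax over the same logits would sum to exactly 1.
  const exps = [1.2, 0.8].map(Math.exp);
  const total = exps.reduce((a, b) => a + b);
  const probs = exps.map(e => e / total); // ~0.60 and ~0.40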


It's basically a buzzword detector.

"this is just a tool for detecting buzzwords"

=> Bad: 0.9991 - Good: 0.0011

"this is merely a device for detecting artificially sophisticated words"

=> Bad: 0.0019 - Good: 0.9980


"linux for sohpisticated when if contextual what but then"

=> Good: 0.9986

"linux for sohpisticated when if contextual what but then what"

=> Good: 0.0201


sohpisticated ?


I think whether the title is spelled correctly is a valid thing to take into consideration when predicting whether it will get upvotes.

Put another way, I think misspelled titles are well within the problem domain here.


Is it within the problem domain? Yes.

Is it interesting for this project? Based on my skimming of the README, probably not. I don't think it handles misspellings intelligently.


>> "I plan to rewrite Linux in Rust - Linus Torvalds" <<

Maybe it's sohpisticated because Linus is going to RIIR?


deoxyribonucleic cloned asynchronously

Bad: 0.0001 - Good: 0.9999


So that's a neural network thing? If the input is 'word space' it trains to detect buzzwords?

I'm honestly interested - do we hobble our neural networks with our choice of training space? If, for instance, the HN-NN input space included 'prepositions and word-frequency statistics', would the network train for sentence sophistication, just because those stats were in front of it?


The neural network used here is very weak, it's not going to do much better than word correlations.


That is how it works here: you collect upvotes if you use fancy words. That's why everybody uses the word "orthogonal" here all the time. Have you ever seen that word anywhere else?


> That's why everybody uses the word "orthogonal" here all the time. Have you ever seen that word anywhere else?

Yes, frequently. It is common in various areas of math -- mathematical background is not something "normal people" usually flaunt, but there are good reasons to expect programmers to have much more such background than average.

A good rule of thumb is that people are probably using the words they use because those words make sense to them. See the first panel here: http://www.basicinstructions.net/basic-instructions/2009/1/2...


Yes. I'll take orthogonal any day over HN things like "foot gun" or the cryptic invocation of so and so's "law"


isn't that orthogonal to the discussion?


i'm orthogonally inclined to disagree, but mainly because i'm sitting on a chair


> Have you ever seen that word anywhere else?

Unfortunately far too often, both in tech and in CS academia (and I'm talking about informal conversations, i.e. "this is an orthogonal idea")


Interesting, tried with:

I made cheese sandwiches for a week, here's what happened

And got: Bad: 0.0001 - Good: 0.9999

Not sure what to make of that.


VC licked my balls, here's what happened Bad: 1.0000 - Good: 0.0000


Insert "for a week" to get "Bad: 0.0016 - Good: 0.9986"


"I made, here's what happened": Bad: 0.0022 - Good: 0.9976


I've got only 0.9773 good with:

Can you pass the butter?

But exactly the same score with:

Cannot your passed their butter?

Couldn't hill-climb past that.


wait, what happened? i need to know.


He had lots of cheese sandwiches.


Well, here's the thing: a good Samaritan offered 2.6M stories from HN with scores. I've downloaded the file (almost 500M) and I'm now processing it. It is taking a long time just to process it. I don't know if I'll be able to train the neural network with all that data. As I said on the repo, the project was a quick thing, just to test a theory. My question is: do you think it is worth feeding the NN more data so it can make better predictions? Please upvote this comment so more people can give their opinion.


Depends - if you really want to explore the tech, this is precisely the way to do it. I would be interested in the results, especially comparing them to your initial results.
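On the processing side: if the dump is newline-delimited JSON (an assumption; adjust to the actual format), node can stream it line by line instead of holding all ~500M in memory. A minimal sketch:

  const fs = require('fs');
  const readline = require('readline');

  const rl = readline.createInterface({
    input: fs.createReadStream('hn_stories.json'),
    crlfDelay: Infinity,
  });

  rl.on('line', (line) => {
    const story = JSON.parse(line);
    // e.g. label by a score threshold and append {title, label}
    // to a much smaller preprocessed training file here.
  });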


Show HN: Can a neural network predict if your HN post title will get up votes?

Bad: 0.0598 - Good: 0.9307

Can an AI predict if your HN post will go viral? The results will shock you

Bad: 0.9539 - Good: 0.0401

Seems to work!


Lol, you forgot to put "Show HN" in the second one :p


Not quite:

>> Show HN: Can an AI predict if your HN post will go viral? The results will shock you

Bad: 0.8326 Good: 0.1575


Apple is down (Bad: 0.0006 - Good: 0.9992)

Facebook is down (Bad: 0.0079 - Good: 0.9921)

HN pg (Bad: 0.0133 - Good: 0.9886)

Zuckerberg (Bad: 1.0000 - Good: 0.0000)


> Zuckerberg (Bad: 1.0000 - Good: 0.0000)

At least it gets something right ...


"I enjoy politics at work" (Bad: 0.0024 - Good: 0.9971)

I think the AI needs a little work


Are you tired of getting 0 up votes on your post? Wouldn't it be nice to have a tool to test if your title will draw people's attention?

Well, this project doesn't offer this tool, but it tries.


Interesting:

"A mouse killed our network engineer" - Bad: 0.2068 - Good: 0.7895

VS

"A rat killed our network engineer" - Bad: 0.9698 - Good: 0.0322


"A mouse killed our network engineer" - Bad: 0.2068 - Good: 0.7895

"A network engineer killed our mouse" - Bad: 0.0397 - Good: 0.9529


Perhaps "mouse" is better than "rat" because of computer mice?


ha, didn't think of that, you might actually be right :)


Mouse is a user interface object.


A RAT is a remote access tool.


It would be nice to include an analysis of how good this is with real data. Perhaps pick all the submissions from yesterday and show the correlation between the real points and the predictions.

The distribution of points is very zero-heavy, which can be a problem to represent and to model.


From the github page example: "Bill Gates ate my tuna sandwich" Bad:0.9273 Good:0.0632

My try: "deep neural network ate my tuna sandich" Bad: 0.0000 - Good: 1.0000

Perfect score!!!!


"creating linux network socketss" -> Good 0.99

"creating linux network sockets" -> Good: 0.01


Interestingly, a couple of months back I saw a reddit thread where someone had collected data showing that posts with small spelling errors gain (in some cases significantly) more upvotes - for whatever reason.


I hypothesize it's because Reddit is filled with karma trap bots that like slight spelling errors. It's a shibboleth for them.


"I plan to rewrite Linux in Rust - Linus Torvalds"

Bad: 0.9988 - Good: 0.0013

Pretty sure HN would break if this actually happened.


"I plan to rewrite Linux in Go - Linus Torvalds"

Bad: 0.9999 - Good: 0.0001

Rust is 13x better than Go in this benchmark.


"I plan to rewrite Linux in C# - Linus Torvalds"

Bad: 0.1897 - Good: 0.7988

Looks like we have a winner.


the # will be discarded after the tokenization of the title, so the winner really is C, rsrs.


Clojure scores exactly the same.


as does

in assembly, in VHDL, in brainfuck, in America, in McDonalds, in clickbait

but "in legalese" scores surprisingly high


How many Hacker News stories was this network trained on? From the code, it wasn't many, and you need a lot of stories.

A year ago I made a Hacker News submission score prediction notebook on the full HN corpus: https://www.kaggle.com/minimaxir/hacker-news-submission-scor...

Even with hundreds of thousands of data points, the R^2 was effectively 0.


Hmmm.

I tried feeding https://rachelbythebay.com/fun/hrand/ to this. The results were... well, the input was very repetitive and the NN was only trained on 1.5k titles, so what can I say.

Some of the best:

1.0000 0.0001 The police found programming to raise sheep

1.0000 0.0001 The police found PS4 in the cloud

1.0000 0.0001 The future of API in the cloud

1.0000 0.0001 The economics of iPad in the cloud

1.0000 0.0001 The economics of iPad in the cloud

1.0000 0.0001 My framework for Windows is patented

1.0000 0.0001 My framework for SDK is patented

1.0000 0.0001 My framework for Heroku is patented

1.0000 0.0001 I bootstrapped my Heroku in 2 years

1.0000 0.0001 Google kills Y Combinator in space

1.0000 0.0001 Coming soon: bigger blog is patented

Some of the worst:

0.0000 1.0000 Apple has Y Combinator in 2 years

0.0000 1.0000 China creating PS4 at a coffee shop

0.0000 1.0000 China creating PS4 at a coffee shop

0.0000 1.0000 Choose Heroku for developers

0.0000 1.0000 Fixed: NoSQL at a coffee shop

0.0000 1.0000 Followup to the Arduino without warning

0.0000 1.0000 Followup to the marriage without warning

0.0000 1.0000 How I made Android at Stanford

0.0000 1.0000 How I made Obama on the freeway

0.0000 1.0000 How I made bitcoins in the cloud

0.0000 1.0000 How I made iPod at Stanford

0.0000 1.0000 How I made iPod on the freeway

I did this by porting this project to node (which was shockingly easy - took 3 minutes - because there were no JS libraries or frameworks used :D :D :D :D) and running it at the console. 5000 lines of output and repro instructions over at https://gist.github.com/exikyut/1714ad98a136d77d8674944410a4... (the output got pasted first for some reason, sorry)


While reading this, I see a lot of negative comments, but here's something to think about.

Machine Learning and AI aren't about producing a perfect solution; chasing that results in things like overfitting. Instead, think of Machine Learning as a human being. No person is going to be perfect, and no single person is going to be able to please everyone. Instead, it's about building a solution that, generally speaking, provides "good results". Subjective, 100%.

The project just started. Once the OP starts adding new features to the data set and improving the data set itself, I'm sure the results will start getting better and better. At which point, it'll be a good system to "test your subject lines" before posting.

What's the worst it could do? Tell you a subject is great when a bunch of people say it sucks? I've clicked on many headlines that I thought "sucked" but ended up finding the content very useful.

Great job to the OP and keep it up. This type of work isn't easy but certainly can be fun. Fk all of the negative opinions and keep on keeping on.


You forgot to mention the time-zone in your analysis.


Yup, that's a big one. A topic where it is very noticeable is when it concerns monopolies. Negative post or comment about monopolies when Americans are the online majority? Expect to get downvoted into oblivion. Posted when Europeans are the online majority? Up you go.

To be clear: this isn't a complaint about downvoting, I'm just pointing out the phenomenon. For example, it's possible to go to bed with +7 and wake up to -4 (a rather stark difference) because a certain comment was posted during the European evening, and thus was 'exposed' to American HNers longer than it was to European ones.


Yep, I've seen the same phenomenon. There really is too much meaning packed into that one number.


Done.



"How to better waste time by using your time less efficiently."

Bad: 0.0005 - Good: 0.9995


"The future of emulation in compiler optimization LLVM haskell"

Bad: 0.0004 - Good: 0.9996


Bill Gates Talks Philanthropy, Microsoft, and Taxes: Bad: 1.0000 - Good: 0.0000

Here is a link I posted; let's see. It's 100% bad. https://news.ycombinator.com/item?id=21523295


Don't worry, I've commented so hopefully it will do better than 0.


Could you please upvote also :D Haha


"Can a neural network predict if your HN post title will get up votes?"

Bad: 0.9917 - Good: 0.0076

I'm sorry guys!


But:

"Show HN: Can a neural network predict if your HN post title will get up votes?"

Bad: 0.0598 - Good: 0.9307

Just in case, I've tested some more titles to make sure that "Show HN" doesn't (EDIT: typo, ugh) just boost any title when prepended.


> "Show HN" just boosts any title when prepended.

"Internet": Bad: 0.4528 - Good: 0.5472

"Show HN: Internet": Bad: 0.9987 - Good: 0.0013


Oh my, I edited a comment and didn't check if its meaning is not reversed...

On the other hand, "Show HN" does seem to greatly affect the score, either positively or negatively.


react-penis-app

Bad: 0.0015 - Good: 0.9986

Not sure what I'm going to build, but it's going to make me a lot of money.


Makes me wonder if there is ever any non-toy usage of AI sentiment analysis. People (managers, customers, marketing, etc.) think it's the hottest thing since sliced bread, but every time I've looked at it, the results are meaningless noise, like in the myriad of examples in the comments here.

It's a cool bullet point in the slide deck, and it gives you some metrics to graph on dashboards, but I'm unconvinced that it means anything.


Let me tell you a catchy title for HN.

"How I switched my AI bot from JavaScript to Go (webassembly) and it's 500x times better"


The real winner


I'd argue that titles, as long as they're neutral and accurate, have nothing to do with the upvotes you'll get. This is a perfect example of using data to try to find something that simply doesn't exist, or that bears so little weight in comparison to the other variables that you can safely ignore it.


> I'd argue that titles, as long as they're neutral and accurate, have nothing to do with the upvotes you'll get.

This is nonsense. If the title is accurate, then it is closely related to the content, and the content has a lot to do with the upvotes you get.


Well yeah, that's exactly what I said. Gaming the system by tweaking your title according to a random github "neural network" won't help you if your content is shit, and if it's not shit the title will be good enough.


You're assuming that the use case for this tool is "I'm submitting an article; what should I title it?"

Don't overlook "Should I submit this article?"


> I'd argue that titles, as long as they're neutral and accurate, have nothing to do with the upvotes you'll get.

I'd argue that titles that are neutral and accurate have been edited from the original linkbait title that got the link to the front page.


this neural network is politically biased and possibly racist. it also prefers swedish girls over swedish boys. I do have to agree, though, that cheese is better than mouldy cheese. and the fact that "comacho for president for ever" nets a score of "Bad: 0.0415 - Good: 0.9614" is, i think, a fair reflection of voter behaviours and general consensus among humans. 'man' and 'not man' it's not so fussy about; 'woman' gets the same score as 'man', but 'not woman', oh that's just bad (Bad: 0.9909 - Good: 0.0097)

all in all a great tool to analyse your titles, it will surely help this community grow and mature over time, finally. Thanks a lot for creating this.


The results for this particular HN post are:

Bad: 0.0598 - Good: 0.9307.


The results for "The results for this particular HN post are:" are:

Bad: 0.9800 - Good: 0.0220

NB. Sorry for that, I just had to do it :)


The results for "The results for "The results for this particular HN post are:" are:" are:

Bad: 0.8083 - Good: 0.1910


Is it turtles all the way down?


That would not be noticeable to HN people apparently

"Scientists discover it is turtles all the way down" Bad: 0.9998 - Good: 0.0002


well, I typed "Court: Suspicionless Searches of Travelers’ Phones and Laptops Unconstitutional"

Bad: 0.9969 - Good: 0.0032


I spent significant time on binary text classification, specifically with HN titles. You can actually get up to 65-70% post popularity accuracy just by looking at the post title.

I am currently creating a filter for HN news and similar sources using a similar classifier. It learns on the fly, and the accuracy of guessing my 'taste' is about 75%-80%. The better accuracy here is explained by the fact that my interests are more focused, so the classifier has an easier time predicting posts I would be interested in.
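The on-the-fly learning can be as simple as one logistic-regression SGD step per example over bag-of-words features; a sketch of the general idea in JS, not the exact code:

  // One online update of logistic regression.
  // w: weight object, x: sparse features {index: value}, y: 0 or 1.
  function update(w, x, y, lr = 0.1) {
    let z = 0;
    for (const i in x) z += (w[i] || 0) * x[i];
    const p = 1 / (1 + Math.exp(-z)); // predicted P(interesting)
    // Gradient of the log loss w.r.t. w is (y - p) * x.
    for (const i in x) w[i] = (w[i] || 0) + lr * (y - p) * x[i];
    return p;
  }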


Cool! I'm really curious about what you're up to since I'm doing something similar. Mine's up at https://www.onlyvetted.com/

Hit me up at kevmod at gmail, would love to hear more.


In response to this article, I was expecting insightful comments on the neural network itself and how to improve it, but instead I'm mostly reading funny attempts to play with the scoring :)


Well, step 1 is finding an answer to the question in the title. Can it?


Found a completely good one!

"Elon Musk's advice on success"

Bad: 0.0000 - Good: 1.0000


"Artificial Intelligence Officially Solved, Singularity Achieved" - Bad: 0.4175 - Good: 0.5826

Split up

"Artificial Intelligence Officially Solved" - Bad: 0.2722 - Good: 0.7477

"Singularity Achieved" - Bad: 0.4528 - Good: 0.5472

With "Show HN" they're universally bad:

"Show HN: Singularity Achieved" - Bad: 0.9999 - Good: 0.0001

"Show HN: Artificial Intelligence Officially Solved" - Bad: 0.9886 - Good: 0.0150

"Show HN: Artificial Intelligence Officially Solved, Singularity Achieved" - Bad: 0.9998 - Good: 0.0002


"Show HN:" gets "Bad: 0.0002 - Good: 0.9998". Interestingly, entering a one-word title will get it stuck on "Bad: 0.4528 - Good: 0.5472".


"Show HN: Computer": Bad: 0.0850 - Good: 0.9141

"Show HN: Internet": Bad: 0.9987 - Good: 0.0013


Finally, I've managed to get a perfectly meaningful and top-rated headline:

I've developed a cute Bittorrent client. Upvote, you leisurely butts!

Bad: 0.0001 - Good: 0.9999

Not sure if I should give it a try?



Nope, does not work.

More precisely, it received 3 votes, but then someone apparently was insulted by being called a leisurely butt and flagged the topic.


Bill gates dead. Bitcoin buy now

Good: 1.0 - Bad: 0.0

What do I win?


And the inverse:

https://i.imgur.com/tJPp31G.png

"Steve Jobs is resurrected. Buy bitcoin"

Good 0.0, Bad 1.0


This is actually 1.0 bad, apparently. :)

Each word added, "bitcoin", then "buy", then "now", drops the score lower until it hits 1.0 bad.


I'd like to see HN add a classic tool: the kicker (https://www.easymedia.in/kickers-newspapers-use-even-today/ ). "It provides [headline writers] the extra space that they desperately need to pack meaning in headlines."

This could easily be implemented as a (hover) tooltip.


I fed it a bunch of articles currently on the front page, and tabs I had open that I might submit, and got bad on all of them. Then I started typing stereotypical Hacker News clickbait and got a pretty solid score.

Paul Graham Rust VC IPO growth Bad: 0.0026 - Good: 0.9976

So far, adding any kind of grammatical structure to the random list of keywords, turning it into a title that makes sense, completely ruins the score...


Very cool. I had the same idea a couple of years back and implemented a very similar interactive tool [1]. If you find the topic interesting, then you might also enjoy the analysis explained in that blog post.

- [1] - https://intoli.com/blog/hacker-news-title-tool/


"Announcing styled-components v5": 0.9682 Good

"Show HN: Announcing styled-components v5": 0.9973 Bad

So, don't use "Show HN"?


Probably a false positive. Many Show HN posts don't receive many votes.


Fun.

BTW, I tried a bunch of single word titles (example: red, green, blue, title), and I always seem to get the same result: Bad: 0.4528 - Good: 0.5472

So, apparently, if you want to maximize your "score" with the lowest mental effort, just spam thousands of single word title posts, and then, it's a coin flip for each one :)


> I also cannot validated the neural network prediction, cause in order for me to do that, I would have to write a content, come up with a title and then post it choosing words that triggers a good value on the neural network and post that history on a Friday noon, to see if my story succeed.

This seems very doable.


A few 1.0s after some trials:

Adam Neuman got away with billions with WeWork bankrupt

Apple and Google help China's surveillance of dissents

Google Chrome monitors straight viewers

Solar panels can not slow climate change, only nuclear power can

SpaceX to to build new Silicon Valley on Mars


It would have been nice if the tool showed which keywords are likely to increase or lower the predicted score. For instance, popular things like "rust" would probably increase popularity.

e.g: "Rihanna concert cancelled" Bad: 0.0006 - Good: 0.9992

vs "Rust 1.33 released" Bad: 0.9896 - Good: 0.0108




To be fair, I'd probably click on that


I kinda doubt this title would get up votes:

> spam can't ml neural network ai btc crypto

> Bad: 0.0013 - Good: 0.9986


"Can a neural network predict how many up votes your HN post will have?" or "Can an optimizer find a parametrization of a nonlinear manifold corresponding to tech zeitgeist?"

If it can't, then it's just playing along


737 Max Flaw Liberal SUV Gun IOT Bad: 0.0000 - Good: 1.0000

737 Max gun has flaw in SUV liberal IOT Bad: 1.0000 - Good: 0.0000

[edit] a challenge .. the (non) prize is for reversing the outcome with the smallest diff. i believe i am the current leader :)


"Stephen Hawking has died": Bad: 0.9727 - Good: 0.0322

https://news.ycombinator.com/item?id=16582136


This is an interesting question of what context titles should be evaluated in.

If your model is "pick a title, and then predict how it will do", with the title being an independent variable, then the overwhelmingly negative assessment is quite correct. Most of the time, Stephen Hawking (or any other celebrity) hasn't just died, and that title would be a lie or a hoax.

To predict these obituary articles better, you'd need to build in an assumption that the content of the title was true, which would cause a lot of problems in the general case.


"someone ate my sandwhich" Bad: 0.3876 - Good: 0.6616

"someone ate my sandwich" Bad: 0.9999 - Good: 0.0001

Hacker News can't spell, or maybe it associates 'sandwhich' and someone eating it with some repository.


Apple buys IBM: Bad: 0.3192 - Good: 0.6914

IBM buys Apple: Bad: 0.9757 - Good: 0.0249


> In order to check that, I got 1256 stories from HN API

Only? If you want to make useful guesses about the timing of a post, better take a much larger number, to ensure you get enough randomness.


AI will take over the world by 2025 - Bad: 0.0000 - Good: 1.0000

That NN predicted.


IDK, but I know for a fact that I can post the same message and get upvoted or downvoted accordingly just by wording the comment differently.


Your own title seems to be performing quite badly...

Bad: 0.9917 - Good: 0.0076


Next step, make it rewrite the title to get more upvotes.


"neural network achieves sentience" - bad: 1.0 good: 0.0

"neural network fails Turing test" - bad: 0.9964 good: 0.0035

I think this AI might be a bit biased against AIs


This reinforces the well-known rule that, if an article asks "Can...?" or "Does...?", the answer is always 'No'.


Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like?

>Bad: 0.1236 - Good: 0.8886

This says something.. though I'm not sure what :)


I noticed timing plays an important role in upvotes on HN, as does whether there are other interesting submissions within the same window.


Absolutely. Odds increase after around 10:30/11am NY time since you get the attention of both coasts for the US audience.


India bans the internet. Bad: 0.9942 - Good: 0.0056

India bans the internet in kashmir. Bad: 0.9408 - Good: 0.0566

India bans youtube and whatsapp. Bad: 0.0059 - Good: 0.9930


Why is it so sensitive to word order?

Esp8266 USB input Manufacturing: Bad: 0.0775 - Good: 0.9290

Manufacturing Esp8266 USB input: Bad: 0.9992 - Good: 0.0009


“Facebook data hacked by Google intern”

Bad: 1.0000 - Good: 0.0000

It really disliked this one, even though it probably would get attention on HN.


"Big boys doing big boy things"

Bad: 0.0009 Good: 0.9983

Somehow I don't believe this. On the other hand, I believe this.


Can a neural net predict whether Google's stock goes up or down?

t. Google employee wanting to know what to do with my stocks


People have certainly tried. If someone figures out how to do it, they usually don't tell everyone because they will have less of an edge.


More training data required, I think, or else HNers like 4chan memes from yesteryear:

>frosted butts

>Bad: 0.1214 - Good: 0.8659


What are its results on historical submissions? Presumably a subset was used for training?


HN downvoted the news of Elon Musk's passing as well as the DIY portable fusion reactor.


Tried:

Can a neural network predict if your HN post title will get up votes? - Bad: 0.9917 - Good: 0.0076

I think that sums it all up.


Show HN: Can a neural network predict if your HN post title will get up votes?

Bad: 0.0598 - Good: 0.9307

It's all about that Show HN.


Well

"Show HN:" Bad: 0.0002 - Good: 0.9998


"Bad JSON Parsers" Bad: 0.0006 - Good: 0.9992

"Bad XML Parsers" Bad: 0.9998 - Good: 0.0003


"Failing Fast with Web Components" Bad: 0.0122 - Good: 0.9874

I might have to write that blog post.


It seems a single word (that exists) always scores "Bad: 0.4528 - Good: 0.5472".


You should have used just "Show HN: HN Titlenator", that one gets Good: 0.9560


"Elon Musk writes Redis blockchain Golang" -> Bad: 0.0041 - Good: 0.9955 :D


1.0 Bad: Facebook Has a Great New Product

0.94 Good: Facebook Literally Kills Puppies

Yup, that sounds like the HN I know.


Anything that doesn’t factor in the time of the posting will be pretty much useless


Your graph would be more legible as a matrix of (day of week) × (time of day).


Implementing an operating system for Risc-v in Rust Bad: 0.0542 - Good: 0.9655


"Supercalifragilisticexpialidocious" Bad: 1.0000 - Good: 0.0000

This thing is broken.


Probably all the words that are not in the dictionary, like 'uihdwaiuwhdakhkjdgu', get the same result. They are probably treated as spam.


"Google Uber for Carbon Sequestration" Bad: 0.1631 - Good: 0.8615


NASA leads the race to the bottom of Saturns sea Bad: 1.0000 - Good: 0.0000


You are trying to use a small, simple neural network to predict the behavior of thousands of other, completely different and significantly more complex neural networks: the average HN user.

Of course you will fail.


"Bad: 0.9007 - Good: 0.1067" - Bad: 0.9007 - Good: 0.1067


cute donkeys : Bad: 0.0002 - Good: 0.9998

fluffy cats : Bad: 0.9921 - Good: 0.0090


"The Professor eats cake" Bad: 0.0001 - Good: 0.9999


"Quick, click here!" gets Bad: 0.0037 - Good: 0.9970


"blockchain ai butts": Bad: 0.0048 - Good: 0.9952


>No Birth Control Creampie for Violet Myers

>Bad: 0.0000 - Good: 1.0000


Show HN: Testing HN titles against a neural network

Bad: 0.1585 Good: 0.8280


Haha. Thanks for the joke for a dark day in Hong Kong.


https://news.ycombinator.com/item?id=21522739 - Made my title with your tool; why only 2 upvotes?


oh sorry, 6 now. it works!


SCOTUS Rules Constitution Unconstitutional

Bad: 0.9999 - Good: 0.0001


ShowHN: Stock market is thriving (1930)))))

Bad: 0.0012 Good: 0.9986


So if it's bad, it's gonna work better!


"Show HN:" (Bad: 0.0002 - Good: 0.9998)


Works for me:

Bill Gates killed by a bug in Windows XP

Bad: 0.0026 - Good: 0.9975


"fuck facebook fake and unsexy"

=> Good: 0.9885


Are y'all voting on the title alone?


Elon Musk unveils brand new technology

Good: 1.0000 :)


> Paul Graham hates LISP

Bad: 0.0579 - Good: 0.9427


"rand made work"

Bad: 0.0046 - Good: 0.9951


According to this tool, HN would not give a fuck about "Windows 11 will use a Linux kernel", "Mark Zuckerberg's lizard tail falls off during interview" or "Google Search to be rewritten in PHP", but "Trump nukes North Korea" would at least be 0.5719 good.

So... I guess the neural network has some learning to do?


Steve jobs alive again show hn

Good: 0.83


isn't this a bit like psychohistory from Foundation?


well, the post has hit the top of HN


neural networks are fun

the history of X scores:

power: good 0.91

wealth: good 0.99

sex: good 0.99

squirrels: good 0.99

cables: bad 0.97

keys: good 0.99

why cables?


> why cables?

Maybe something to do with the wikileaks leaked cables?


Fuck cables.


Turns out Trump was onto something:

"Planning sophisticated covfefe" Bad: 0.0002 - Good: 0.9998


Although it turns out "covfefe" by itself would not have been a hit with the HN crowd. That is probably why it went on Twitter.


"Trump" - Bad: 0.4528 - Good: 0.5472 "Can Trump" - Bad: 0.9862 - Good: 0.0142 "Will Trump" - Bad: 0.9896 - Good: 0.0108 "Show HN: Trump" - Bad: 0.9735 - Good: 0.0225


"Obama": Bad: 1.0000 - Good: 0.0000


Donald Trump reelected President

Bad: 0.9879 - Good: 0.0129



