
It appears not to be an economics journal at all; at least, it is not featured in RePEc's journal rankings: https://ideas.repec.org/top/top.journals.simple.html


As with many other things, the first step should be to fix your sleep, diet, and exercise.


Not sure if you're including podcasts in your question - if yes, check out "How I Built This": https://wondery.com/shows/how-i-built-this/


Here's a pretty interesting piece of research into the countless fakers this channel has inspired:

"How Primitive Building Videos Are Staged" - https://www.youtube.com/watch?v=Hvk63LADbFc


http://www.ibash.de for Germany :-)


Newfangled stuff... german-bash.org was the original.

Has also been dead for 2 years now. I found a 20 year old quote of mine on archive.org. How time flies.


Not quite. The very first German bash.org clone is https://bash.pilgerer.org/


I think I recognize some of these from bash.org. Are all of these just translated? xD Or maybe I read them somewhere else ... god, it's been so long.


There was another site, called german-bash.org, but it went down last year :/


Can anyone comment on the issue of hallucinations? The author only mentions them briefly and I cannot gather how big of a problem this is. Apart from the literal quote the LLM hallucinated, wouldn’t all the other information have to be double-checked as well?


IMO, hallucinations make it basically unusable for things it should be very good at. For example, I have asked two different AIs what the option is for changing the block size with HashBackup (I'm the author). This is clearly documented in many places on the HashBackup site.

The first time, the AI said to use the -b option to the backup program, with examples etc. But there is no -b option, and never has been.

The second time, the AI said to set the blocksize ("or something similar"; WTF good is it to say that?) parameter in hashbackup.conf. But there has never been a hashbackup.conf file.

From examples I've seen, AI tends to do a passable job of spewing out the kind of long-winded response you'd get from asking several different humans: similarly long-winded, full of judgement and opinions, some of which may or may not be valid.


It is documented on your site.

But before showing your site, Google 'features' ChatGPT's hallucination from one of your earlier HN comments [0].

https://i.imgur.com/yxXo3GI.png

[0] https://news.ycombinator.com/item?id=38321168#:~:text=To%20c....


I'll echo this and say that I've run into very similar issues when evaluating local LLMs as the author of a popular-ish .NET package for Shopify's API. They almost always spit out things that look correct but don't actually work, either because they're using incorrect parameters or they've just made up classes/API calls out of whole cloth.

But if I set aside my own hubris and assume that my documentation just sucks (it does) or that the LLM just gets confused with the documentation for Shopify's official JS package, my favorite method for testing LLMs is to ask them something about F#. They fall flat on their faces with this language and will fabricate the most grandiose code you've ever seen if you give them half a chance.

Even ChatGPT using GPT-4 gets things wrong here, such as when I asked it about covariant types in F# a couple of days ago. It made up an entire spiel about covariance, complete with example code and a "+" operator that could supposedly enable covariance. It was a flat-out hallucination as far as I can tell.

https://chat.openai.com/share/6166dd9f-cf67-4d9a-a334-0ba30d...
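
For what it's worth, here is a minimal sketch of what actual F# behaviour looks like, as I understand it (the Animal/Dog types are invented purely for illustration): generic type parameters in F# are invariant, there is no "+" variance annotation like the one ChatGPT produced, and the usual workaround is an explicit upcast.

    // Hypothetical types, made up just for this illustration.
    type Animal() = class end
    type Dog() = inherit Animal()

    // Something like the hallucinated annotation, e.g.
    //     type Container<+'T> = { Value: 'T }
    // is not valid F# syntax at all.

    let dogs : Dog list = [ Dog(); Dog() ]

    // A Dog list is not accepted where an Animal list is expected;
    // each element has to be upcast explicitly with :>
    let animals : Animal list = dogs |> List.map (fun d -> d :> Animal)

    printfn "%d animals" (List.length animals)

(Flexible types such as #Animal at the consuming site are the other common way around the lack of variance.)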


Yes, this. If the form of a plausible answer is known, it is likely to be invented. API method names, fields in structures, options to command lines: all plausible inventions that have a known form.

Similarly, references of any kind that have a known form: case law, literature, science, even URLs.


I’ve had very similar problems asking technical things. I wish it would do what humans do and say, "Not sure, have you tried an option that might be called Foo?" A good human tech rep doesn't always have the precise answer, and knows it. Unfortunately, LLMs have mostly been trained on text that isn't as likely to have these kinds of clues that the info might not be as accurate as you'd like.

I’ve found that for technical things, I'm happier with the results if I use it for clues toward the right answer, rather than looking for an exact string to copy and paste.


There are many recently published or preprint research papers around that; not necessarily that hard to read, I think. As a consultant, this totally prevents me from making any professional use of LLMs at the moment (edit: aside from actual creative work, but then you may hit copyright issues). But even without hallucination, using non-scholarly sources for training is also a problem: Wikipedia is great for common knowledge but becomes harmful past the point where you need nuanced and precise expertise.


The other problem with Wikipedia is that it is a target of hostile, politically motivated attacks that attempt to rewrite history. It will normally self-correct, but from time to time there are pieces of information that are maliciously incorrect.


In addition to the suggestions here, check http://github.com/trending for interesting repos.


Margin Call is amazing! Also, I recommend Charles Ferguson's documentary "Inside Job" on the financial crisis.


Apparently this was an architectural fad in the Middle Ages: rich families built "skyscrapers" to show off their wealth and to look out for enemies. At one point, Bologna had more than 100 of these towers [1]. Today, the small town of San Gimignano [2] is well known for these structures, which do not seem to have a name in English (in German, they are called "Geschlechtertürme" and have their own Wikipedia entry [3]).

Also, I encourage everyone to visit Bologna, it's an amazing city!

[1] This is a figure you will often read, but it seems to be a bit inflated - apparently it double-counts towers from different time periods.

[2] https://en.wikipedia.org/wiki/San_Gimignano

[3] https://de.wikipedia.org/wiki/Geschlechterturm


From https://de.wikipedia.org/wiki/Geschlechterturm:

https://de.wikipedia.org/wiki/Datei:Bologna_Middleage.jpg. The tower in the front, a bit to the right of the center, is seriously leaning; I estimate about ten degrees.


See also the "Folly", a more generic term for a "vanity structure".

https://en.m.wikipedia.org/wiki/Folly


Medieval epeens.


I heart Bologna! Going back for sure.


I highly recommend Bethany McLean's "The Smartest Guys in the Room" to read up on Enron. Also, Lucy Prebble, who is now a consultant on HBO's Succession, has written a nice play on the matter.


SGITR is a great book and gives a real appreciation of how Enron could have happened and will happen again: the perfect storm of hands-off management and the belief that ideas are more important than execution, plus the usual greed and the slippery slope of covering small frauds with increasingly bigger ones. Other than Skilling and Fastow, they weren't all bad guys. I hope I'm never caught in a situation like that.

Also interesting is that SGITR is almost identical to "Bad Blood" by John Carreyrou, about Theranos. Most of the same elements were present in that fraud.


Alex Gibney ftw if you're into docu films


I recently listened to the Acquired podcast episode on Enron: https://www.acquired.fm/episodes/enron

It's pretty long, but it's a good podcast, IMO. It also mentioned the book, and I think it was one of the main sources of information.

