Hacker News
AI tidies up Wikipedia's references and boosts reliability (nature.com)
156 points by clockworksoul on Oct 20, 2023 | 87 comments



When Sci-Hub made a lot of papers available to the public, I started clicking through to more references on Wikipedia.

My goal was to learn more and go deeper on subjects, but I was stunned by how often the linked citation didn't support the claim in the Wikipedia article. There were many times where the linked citation said the opposite of the Wikipedia article.

My theory was that overly competitive Wikipedia authors were skimming PubMed abstracts and assuming the paper would support their assertion. Ironically, some of the statements with 5 or more citations were the most incorrect.

Trying to correct these articles is sometimes like going to war with editors who refuse to admit they were wrong.


Not only do some papers not support the claim at all (I found this to often be the case when I used to read the news regularly), but there is often a critical detail left out of the abstract that is key to how the study should be interpreted. The vast majority of people citing papers, by my measure, are relying solely on the abstract even when the entire paper is available for free.

This is often the case outside of Wikipedia as well. Truthfully, I'm a layman because I don't have an advanced degree that suggests I'm qualified to interpret papers, but things are bad enough IMO that part of me wishes people would stop trying to communicate science to the public until this broken system heals at least somewhat. On a related note, it doesn't help that folks like Andrew Huberman are normalizing this idea that one or two studies are good enough to form conclusions about how average people can "optimize" (micromanage) their lives in ways that are clinically relevant. This isn't to take away from the good that Huberman does, but I think it sends the wrong message to people who don't actually have any experience reading papers.

One thing that might help is forbidding abstracts and conclusion sections in papers published in journals, which might cut down on some of the misunderstandings and make it harder to pass off a paper as support for a claim.

> Trying to correct these articles is sometimes like going to war with editors who refuse to admit they were wrong.

There is virtually no point in trying to fix Wikipedia pages until Wikipedia fixes itself. They have an extremely hard job, but their prolific editors are too bully-like and contradict themselves all the time.


Even when I was in grad school (sadly a while ago), it was understood that to really understand the validity of a paper, you needed to look at its conclusion and methods sections. Generally speaking the abstract was just used as a filter. It helped you answer the question of whether the paper was even worth taking a look at or not. Then you mined the conclusion and methods to see if they supported what you were looking for. Only then did you read the paper in detail.

As you say, there's a pretty big gap between the way researchers read through papers and the way the science media communicates about research with laypeople. It's a hard problem that needs more attention than it gets.

(Probably one of the most useful classes I took in grad school was a class at the beginning that forced us to read 10 papers a week and write summaries. The class was hell, as obviously there was no way to properly understand each paper on top of your research and other classes. But it made us great at quickly winnowing papers down to their meat, and also at finding papers with... dubious methods.)


Yup.

I often will read something like "A connection between X and Y"

Then skim down to the conclusion and it will say "In conclusion, we were only able to see some weak connection and we aren't sure if it was one at all, but this warrants further research..."

I know negative results are still valuable science, but I feel like a good chunk of the papers I'm reading are basically worthless CV padding.


While it focuses a bit more on pop-culture, a relevant comic on "the science news-cycle": https://phdcomics.com/comics/archive.php?comicid=1174


hah - I was with you until the "ten papers a week" part. You say it yourself, there are parts to the paper, and also levels of concentration and re-reads. Drilling is useful for hyperactive monkey minds who are doing something else instead of studying


It was an "intro to grad school" type class. The point was simply to build endurance. They made very clear in the class that there was no way we were going to absorb the depth of the papers and told us to bound the amount of time we spent on it clearly so that we don't kill ourselves over it. I like to think of it as marathon practice or breathing practice for folks who played woodwinds; it was just helping us get into the muscle of winnowing papers effectively.


> One thing that might help is forbidding abstracts and conclusion sections in papers published in journals, which might cut down on some of the misunderstandings and make it harder to pass off a paper as support for a claim.

I wouldn't go this far; abstracts are needed to determine if the paper is of interest to what you're researching. Perhaps a more modest refinement that could work would be to forbid conclusions in abstracts.


You need an abstract if you are looking up a topic and you’re scanning through articles though, and I think even if articles didn’t have abstracts, people would still link to papers to spin a narrative.

I think this is all really the cost of admission for the ease of self-publishing. Now any jerk off can post about their opinions on keeping abstracts while using the bathroom on company time.


> This is often the case outside of Wikipedia as well.

Often they just read the wikipedia article that cited the paper.


You do not need an advanced degree to evaluate a paper. That’s absolute nonsense.


There are many papers I can’t evaluate even with a degree in musical composition…


> Not only do some papers not support the claim at all (I found this to often be the case when I used to read the news regularly), but there is often a critical detail left out of the abstract that is key to how the study should be interpreted. The vast majority of people citing papers, by my measure, are relying solely on the abstract even when the entire paper is available for free.

This happens in the HN comments all the time. People just assume the other person isn't actually going to stop and read it.


> My goal was to learn more and go deeper on subjects, but I was stunned by how often the linked citation didn't support the claim in the Wikipedia article. There were many times where the linked citation said the opposite of the Wikipedia article.

WP is, unfortunately, not obviously any worse than researchers in general are: https://gwern.net/leprechaun#miscitation

> Trying to correct these articles is sometimes like going to war with editors who refuse to admit they were wrong.

Yep. Because now you are doing 'OR' (original research) by interpreting the paper, especially when the abstract is just lying/spin.


Don't even have to be about science, it can just be random little myths that spring up that people try to correct. I happen to know stuff about firearms, got quite a few, from one of those places people can have quite a few. One day I'm scrollin down reading history on the M16 trying to remember something and I find the dumbest goddamn thing I ever heard, this claim the M16 got issued without cleaning kits and Colt claimed it was "entirely self-cleaning". I don't know if ya ever used firearms but that would never happen, and is the absolute dumbest claim I ever saw in my life. "reduced fouling" don't mean "entirely self-cleaning".

Now I don't really know how to use wikipedia but I thought it was one of those fake edits people might do as a joke. Went checking edit history and stuff to find out and turns out someone else tried to point out the myth too and some jackass with authority is jealously guarding that myth to keep it on the page forever.

Went and checked the government documents myself, among other things there is no Colt material making that claim in that context. Government also ordered cleaning kits, Colt supplied cleaning kits, so it's also contradicted by fact there too. Simple thing is there just weren't enough to go around, supply shortages, and someone at some point created the myth by misunderstanding that a design to "reduce fouling" thereby "reduce cleaning" don't in any way no hell no how imply "self-cleaning" in that way. Yet the claim remains on wikipedia, without clarification, because I guess some idiots repeating the myth in a book makes the myth okay.

I went and checked that book, too. No source of the myth in the cited book. It's completely fuckin made up and anyone with half a brain and a day using a firearm would know that, but there it is. All because someone, for some reason only God knows, personally wants it to be there and has the authority to keep it there.


I tracked down the book wikipedia cites for that self-cleaning claim (it's on Library Genesis).

Wikipedia presently says:

> However, the rifle was initially delivered without adequate cleaning kits[43] or instructions because advertising from Colt asserted that the M16's materials made the weapon require little maintenance, and was capable of self-cleaning.[67]

The book, The M16 by Gordon L. Rottman, says on page 20:

> Most Marine units began receiving the XM16E1 in April 1967 and immediately experienced problems arising from several factors. Most units received little if any cleaning gear beyond some cleaning rods and bore brushes. Some units had never heard of chamber brushes. Colt is said to have hyped the weapon as futuristic, requiring little maintenance owing to new materials. This was interpreted to mean the black rifle was “self-cleaning.”

So Colt supposedly saying the rifle required "little maintenance" was then interpreted (by the Marines, I think) to mean the rifle was "self-cleaning". The book doesn't say Colt made the "self-cleaning" claim, but whoever wrote that part on wikipedia is attributing the claim to Colt.

Hard to say if even the book's claim is right... "Colt is said to have..." said by who? The book doesn't actually cite any Colt marketing material or anything like that.


See what I mean? Book just repeats the myth in my mind, though as you say I guess a tweak of the myth. But you're exactly right. Said by who? According to what? It's just a claim, and in my opinion contradicted by every single public document and contemporary marketing or Colt-related material I could find from the time period. Same thing with the military; of course that would be contradicted by any/every military service rifle training program or materials ever.

Still on wikipedia though. Because writing bullshit without any source is fine as long as it's in print, I guess?


Well, at some point Wikipedia became less about writing an encyclopedia and more about creating and enforcing a set of rules that (hopefully) would eventually lead to a "high-quality" encyclopedia. So if you want to fix Wikipedia, you can't just edit Wikipedia (it'll be reverted); you have to justify your edits according to Wikipedia's core policies (Verifiability, no original research, and neutral point of view). Probably you could also explain why those policies are wrong and need to be changed, but that would be a longer discussion and wouldn't necessarily take place in the context of a specific article.

Just guessing, but probably in this case the issue is a lack of secondary/tertiary sources - there is material from that period, but nobody besides this author has analyzed it and come to a conclusion on this sub-subject. So because (per verifiability) secondary trumps primary, the secondary source is what's in the article. I think all you could do is get those claims removed, because synthesizing a different conclusion than the book's would be original research.


Seems to me it'd be a lot higher quality if you could simply have points about things being true, or opinions, or speculation, or rumors, or myth, stuff like that. Or simply noting there's disagreement about what's true. Maybe you can, but I ain't about to spend my whole year learning bureaucratic jargon just to "sneak in" what should be effortless to put there. Get my frustration?


I can kind of get Wikipedia's stance here. They're really not supposed to be litigators of "what actually happened," which may not be knowable. They have defined rules for what counts as an authoritative source, and nonfiction books that are generally taken seriously are in that set. So if a book says it, it's good enough to be on a Wikipedia page.

Does that mean it's true? I highly doubt it. I wasn't alive in the 60s, but I've known a lot of Marines in my life. I was in the Army myself. I've known people who were Marines in Vietnam. I've never known anyone who ever operated a rifle who would interpret a claim this way or believe a self-cleaning rifle was a possible thing.

But unfortunately, even if you are personally an expert, Wikipedia doesn't let you come in and tell a page it's wrong. You have to publish your knowledge in an authoritative archival source and then it can be cited. I get that it can be frustrating as a subject-matter expert, but I don't know what better sourcing and citation rules an encyclopedia can have. They don't internally litigate the validity of a claim. They define what outside sources count as citable and then trust those sources.

Hell, I experienced a fairly stupid version of this a few months ago. I edited the page for Slayer's Reign in Blood in the section for pop culture references, adding a mention that Angel of Death was used in The Leftovers when Nora pays a prostitute to shoot her and uses the song to cover the gunshot sound. I linked to an episode summary on another wiki, and that got deleted because apparently Wikipedia doesn't allow other wikis to be cited as sources. Fair enough as a general rule, though I think it doesn't make sense in this context because Wikipedia has episode summaries of television shows that don't cite sources at all. So I just removed the cite and linked to HBO's home page for the episode, and that stuck, even though it doesn't have a description and you'd have to actually watch the episode to confirm I'm not lying.

But hey, whatever, rules are rules. No rules are perfect. Courts get it wrong sometimes, too.


> They're really not supposed to be litigators of "what actually happened," which may not be knowable

Seems like the best thing would be to be able to say "this is not verifiable" or similar. Seems to me these issues people have could be solved or improved a lot by having more and more-nuanced info, not less. Like if people are just so damn insistent the myth stay up there, it should also be fine to point out "though there seems to be no evidence supporting this claim or claimed notion" when the books or stuff cited have nothing but more claims. More info, more context, not less.


The image of wikipedia editors jealously guarding pages (which makes me think of a dragon hoarding treasure) is entirely accurate and widespread. I've seen it happen on all kinds of pages, from pages about some religious event/person to pages about highly technical subjects. I honestly don't know what the solution is, because there are definitely people who continually try to edit history by changing wikipedia pages, but the scales right now are way tilted toward preserving bad info, which gets in the way of both fixing errors and adding expansions.


Well maybe I'm a bumpkin, but seems to me the fix is the truth, and if the truth ain't clear there's no reason you can't have explanations, or explain that claims repeated elsewhere aren't evident in something like source materials. It isn't like you got a floppy disk and have to cram an encyclopedia on it.

Simple enough fix to my mind. If them "dragons", and I like that image there I think it fits, pull bullshit like that you just remove them. Which you can now easily do because whether or not Colt claimed something is a matter of truth that can be checked. In this case obviously not. Simplifies everything and gets people talking about the truth and how to best represent what's true, even if it's disputed, instead of having what's very clearly false with no hope of even clarifying "this appears to be a myth".


There's an incentive for scholarly publishers and authors to have their works cited on Wikipedia, since there is a belief (supported by some research [0]) that citation on Wikipedia increases subsequent scholarly citations to those articles. Such a benefit could be used to increase a Journal Impact Factor & similar metrics [1,2] or the citations to one's own works, which is beneficial to academics competing for promotion, tenure, and research grants which are partially based on citation metrics.[3] I wouldn't be surprised if there was some indiscriminate citation on Wikipedia to bolster the metrics.

[0] https://scholarlykitchen.sspnet.org/2022/11/01/guest-post-wi...

[1] https://en.wikipedia.org/wiki/Impact_factor

[2] https://en.wikipedia.org/wiki/Citation_impact

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6668985/


Unfortunately, many people treat citations as a formality rather than something to help someone judge credibility. I try to cite as specifically as possible. Cite the page if possible (and which column if available), section if possible, and if you want to go the extra mile, the paragraph or line number if possible. That makes checking easy. This was common when I worked at the USPTO as a patent examiner, but wasn't standard practice when I was in academia.

Previous comment of mine on this issue (including a bit about how to make URLs more specific): https://news.ycombinator.com/item?id=23897686


Over time, I've come to see that Wikipedia mirrors the broader internet: initially heralded as a game-changer, but when it really counts, it can be just as unreliable as the things that preceded it, if not worse.

I keep thinking of that quote from the Office: "Wikipedia is the best thing ever. Anyone in the world can write anything they want about any subject, so you know you are getting the best possible information."


I had a math professor who would occasionally correct an equation in the Wikipedia article about some theorem he invented, but someone would always change it back. He eventually gave up.


Wikipedia doesn't accept primary sources. You could be the inventor of a thing, but you'd be unqualified to write a wikipedia entry about it. You'd have to get someone else to write about it, get that published, and then cite that in wikipedia.


>Wikipedia doesn't accept primary sources

wikipedia accepts published papers from journals as references, even though those are primary sources. (I thought they originally didn't, but they do now)

How would this guy's theorems be on wikipedia if he hadn't published them? Wikipedia is leery of people editing "their own page" in the sense of conflict of interest, but clearly if you are correcting a published theorem, that is entirely within bounds. You could say on the Talk page "I published this theorem, wikipedia has it wrong."


There was plenty written about it. Whoever wrote the article was confidently incorrect about how they interpreted what was written.


There is a talk page where you're supposed to hash this kind of thing out.

What was the equation?


He used the talk page. I have it in my notes in a box in storage. I'll dig it up probably in the next 10-20 years, if you can wait.


It's also a result of Wikipedia's insistence on providing a citation or reference for every single thing/claim.

Turns out it's easy to find a reference for anything if you search long enough.


What do you suggest, not to provide a citation for every claim?


I don't think they're arguing against the use of citations in general, but just pointing out that anyone with sufficient motivation acting in bad faith can find something official looking that seems to support their opinion.

In other words, citations alone are insufficient


> anyone with sufficient motivation acting in bad faith can find something official looking that seems to support their opinion

you just summed up wikipedia on any political issue.


How about in between: don't require a citation until a claim is disputed, then go into a "let's weigh and vote on the citations that support A or B, and go with the one that has the most".

My big problem and theory of what will be the downfall of Wikipedia: No Original Research.


That would lead to levels of stone walling so extreme that no one would actually bother engaging. We already have that right now, with editors treating some articles like fiefdoms and being very combative to any change. Usually using pedantry or the revert rules. Imagine if you could just make any claim, and then there needs to be a process just to disprove it. Plus with the 3 revert rule it would basically turn the entire website into hell.


I am still irrationally annoyed that somebody made a Wikipedia article for their own made-up definition of high vs. low fantasy ("another world" vs. "our world", as opposed to the common definition, which was basically Tolkien vs. GRR Martin), based on a single decades-old article that didn't even mention the phrase "low fantasy", and managed to unilaterally change the definition.

No one could find a reference to contest it with, because nobody was writing articles defining terms that everyone understood; and the original editor's reference was not available online at the time, so no one could look at it and say "Uh, this is just some rando saying it would be cool if we called secondary-world stories 'high fantasy'."

Now the original, bogus reference is gone from the article, replaced with 20 low-effort articles that are obviously sourced from the same Wikipedia page they're being used to support. Boom, citogenesis! https://xkcd.com/978/


Feedback: I love fantasy and do not see the problem. The only thing you need to remember is that communication usually fails, except by accident. I have seen many definitions change meaning during my life; it is just part of life. If you are confident that you have the same definition, you usually do not.

https://en.wikipedia.org/wiki/Wiio%27s_laws


> Ironically, some of the statements with 5 or more citations were the most incorrect.

There is nothing ironic about it. Wikipedia is run by bullies who make up references.


Did you fix it?


> Trying to correct these articles is sometimes like going to war with editors who refuse to admit they were wrong.


Wikipedia's policy is that errors in existing pages should not be fixed.


The actual policy is to improve accuracy: https://en.m.wikipedia.org/wiki/Wikipedia:Accuracy_dispute


Are Wikipedia's "actual" policies the things that Wikipedia says it does, or the things that Wikipedia actually does?


I don't know where you are going, but I did fix several articles already and never had a single issue with it because my sources were correct.

Yes, there might be bad actors, but it doesn't mean that everything is bad and impossible. You should try first, at least.


Wait what? Is the whole point of wiki not to improve and correct?


Try it and see what happens.


Honestly, I think the introduction and conclusion are almost always more important than the abstract. If you read the abstract, introduction, and conclusion, you should be able to understand the scope of what was done and what the paper actually shows.


I feel like this right here is what the singularity actually feels like.

With minimal effort, humans hook up AI to do some job, and things "just get better" rather than entropy taking its natural course and many things (without maintenance) trending towards "worse".

Once you have a bunch of these human / super-human level systems doing mundane things on wikipedia, they're now there in perpetuity, constantly improving.

I suspect this is what's going to start to happen across the economy: all of a sudden, the sidewalks seem cleaner, trains run on time more often, traffic seems less congested, and latency in your favorite software product starts going down (with AI turned loose on that legacy software's code base, gradually refactoring and optimizing it in the background).

Effectively, what typically is happening due to entropy (decay, latency, quality, dirtiness) will start to move in the opposite direction due to automation and background AI.

This reversal of perceived entropy will start gradual, and then accelerate, and then on a day-to-day basis many things you touch in your daily life will be improving and then... singularity.


Some things might get micro-optimized by AI in ways that benefit people in general, but a lot will be micro-optimized to extract more profit out of customers at their expense and to replace or coerce workers in various ways. People will wield AI the way people wield every technology. Some techno-optimists thought that TV, personal computers, the internet and other technologies would bring some bright enlightened future, pretty sure AI optimists will see the same results as the others.


> Some techno-optimists thought that TV, personal computers, the internet and other technologies would bring some bright enlightened future

They were right, on a case by case basis. It meant a brighter, more enlightened future for me personally.

I was born in nowhere Appalachia with access to nothing. A third tier failed retail store was exciting to have access to; a McDonalds was exciting to have access to. The Web appeared like god damn magic cast down from the heavens circa 1994-1995. I had almost zero interest in computers growing up in the 1980s and early 1990s. I immediately understood what could be done with the Internet: I didn't need to ask permission, I could just immediately start creating anything I was capable of building on this new canvas; no licensing, no heavy government restrictions, no enormous bureaucratic walls to climb over: dial up and go. Go learn, go create, go chat with people from Australia or France, go meet thousands of people on WebChatBroadCasting or IRC, go become part of the Doom & Quake mod scene and learn how to program/script/design levels/whatever.

I think only highly privileged people would fail to grasp how extraordinary it was, they already had access to really nice things.


I think it's extremely valuable to hear perspectives like this and wish they were more commonly shared. We tend to focus on the downsides/problems of a lot of new tech, which are, to be fair, very real and in some cases quite serious, but we tend to forget the often very large good that these technologies can provide as well. That doesn't mean we should just let things go willy-nilly "because they do good stuff too", but it should temper our attempts to "fix" everything, and probably make us more cautious not to throw out the baby with the bathwater when we are trying to "improve" things.


Personally I'm doing great, I've used the internet to train for jobs and have quite a nice job now, and also I've used the internet to learn about all sorts of things that have been quite fulfilling for me. Still, we should be realistic about who is benefiting and who isn't, and be realistic about the effects of technological changes on externalities like the environment, politics, workers, and the marginalized where it's the Grim Meathook Future™. https://www.jwz.org/blog/2005/09/the-grim-meathook-future/


>micro-optimized to extract more profit out of customers at their expense

Why would that be a bad thing? Cases where corporation win, customers lose, are newsworthy because they're so much rarer than the mundane and taken for granted state of affairs where corporations make money by helping customers. Are your best examples of times that technological progress hurt us TV and the internet? They clearly helped a great deal, on net.


>Effectively, what typically is happening due to entropy (decay, latency, quality, dirtiness) will start to move in the opposite direction due to automation and background AI.

Which of course begs the question, "Where's all that entropy going, shouldn't it still be going up?"


There is no rule about how fast entropy has to go up. Systems can just be made universally more efficient without shifting that entropy elsewhere as long as they are just decreasing the rate of entropy increase as opposed to actually decreasing entropy.


Datacenter exhaust fans and chillers


Only in a closed system.


I have the opposite feeling. I only see AI accelerating entropy, as evidenced by the accelerated enshittification of everything online in the last few years.


> A neural network can identify references that are unlikely to support an article’s claims, and scour the web for better sources.

That seems like the wrong approach? The claims of an article should be informed by all relevant sources, not the selection of sources be informed by the claims of an article.


There is no bit of information on the planet that is corroborated by all relevant sources. There will always be some percentage of people critical of the prevailing theory, and if you want to go for 100% consensus for every article then there would be no Wikipedia.


The OP didn't say a claim can only be made if 100% of sources agree. He said that one should first google a subject, gather information, and aggregate multiple sources into a summary, rather than writing a summary first, and then googling for the sources that support it…


xkcd #978 as a Service


Urman points out that the Wikipedia users who tested the SIDE system were twice as likely to prefer neither of the references as they were to prefer the AI-suggested ones. “This would mean that in these cases, they would still go and search for the relevant citation online,” she says.

This seems like an optimistic interpretation.


What I don't understand is how we can't just feed papers into some sort of text-to-logical-fallacy analysis which checks for the known fallacies, checks argument logic, checks sources (scores based on study size and other requirements which qualify good studies) and just stops things right in their tracks before they're added to the "corpus" of knowledge? I'm just talking out of my ass here - I'm sure politics and other human factors stop something so seemingly simple from being implemented...

https://www.researchgate.net/publication/340531396_Automated...
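

The "does the cited source actually support the claim" piece is something you can prototype crudely today with an off-the-shelf natural-language-inference model. Below is only a sketch of that idea - not how SIDE or any production checker works - and the model choice, example texts, and interpretation of the scores are my assumptions:

    # Rough sketch: score whether a source passage entails a claim with an
    # off-the-shelf NLI model. Illustrative only; not SIDE's actual pipeline.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    source = ("Colt is said to have hyped the weapon as requiring "
              "little maintenance owing to new materials.")   # premise (made-up example)
    claim = "Colt advertised the M16 as self-cleaning."        # hypothesis (made-up example)

    inputs = tok(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]

    # Label order comes from the checkpoint's config: CONTRADICTION / NEUTRAL / ENTAILMENT
    print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
    # A low ENTAILMENT score would only flag the citation for human review, not prove it wrong.

The hard parts you're really asking about - finding candidate sources, deciding what counts as a "claim", and judging study quality - are exactly what a toy like this doesn't touch.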


Perhaps someone who has read the paper itself can address this question:

If you input a claim that is unsupported, and you search for support, isn't that a fundamental error? That is, is the software assuming the claim is accurate?


Absolutely! It seems like searching for evidence to fit a conclusion, rather than the scientific approach of trying to disprove a hypothesis.


It seems like it would be the same search if you were looking for evidence that it’s false, and you wouldn’t find anything if it’s unsupported?


Maybe you would find contrary evidence?


   When SIDE’s results were shown to a group of Wikipedia users, 21% preferred the citations found by the AI, 10% preferred the existing citations and 39% did not have a preference.
and 30% didn't respond?


I was wondering about that too. Later there is "[...] Wikipedia users who tested the SIDE system were twice as likely to prefer neither of the references as they were to prefer the AI-suggested ones". Not clear if this is the same as "did not have a preference". And if it's not, now we've got 110%
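
Making both tallies explicit (figures copied from the article as quoted above; whether "prefer neither" is the same bucket as "did not have a preference" is exactly the ambiguity):

    # Figures as quoted from the article
    prefer_ai, prefer_existing, no_preference = 21, 10, 39
    print(prefer_ai + prefer_existing + no_preference)   # 70 -> 30 points unaccounted for
    prefer_neither = 2 * prefer_ai                        # "twice as likely to prefer neither"
    print(prefer_ai + prefer_existing + no_preference + prefer_neither)  # 112, if "neither" is a separate bucket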


SIDE looks impressive. I wonder how it works and finds these resources.

I think the evaluation is flawed; these subjective numbers would mean nothing if I did the survey. Looking at some Wikipedia pages in the SIDE demo (linked in comments here), it is clear that in some cases they fail to identify what claims are made in the article, and that the subjective choice of references is too binary.

I double-check references on Wikipedia in subjects where I have a basic understanding; it is usually easy to find better references, but it takes so much time. So, very impressive.


Isn’t this kind of like saying “Adding a blur filter makes people look younger.”


I’m sure I’m not the only person that read it as AI tiddies, so I’ll take the hit here just to let you know it wasn’t just you.


Was looking for you lol.


Same here. I was afraid that maybe nobody said anything and I'd have to be the one to say it first.


What country are you from? I would consider "tiddies" to be a somewhat strange misspelling of "titties".

Compare https://en.wiktionary.org/wiki/titty_bar .


It’s internet slang. Looks like the earliest urban dictionary entry is 2012.

https://www.urbandictionary.com/define.php?term=tiddy


just for the record, nitty gritty has become niddy griddy


Is there a link to this SIDE project?


Code availability The code to reproduce our experiments is available at https://github.com/facebookresearch/side under MIT License and Zenodo https://doi.org/10.5281/zenodo.8252866


Somewhat ironically, your link to Zenodo is malformed and goes to a non-existent page. The correct URL is https://doi.org/10.5281/zenodo.8252866


thank you! also ironically, together we fixed it -- no AI


AI is the replacement for low-effort work.

What annoys me is that all the managers who were accruing technical debt were right. AI will clean it up for them, decades later.


This publicity does not include the WMF's internal efforts over the last six+ years? Only a Meta-owned research project?



