dxroshan's comments

What is happening is very scary. Many people don't seem to care about any evidence or sources. They blindly follow whatever lies their leaders tell them. I think this has always been the case throughout history. However, now, with the internet, it is easy to spread such lies to the masses and easy for such leaders to create blind followers.


Clearly people care very deeply about sources and evidence - and they're attacking the things (Wikipedia, various government websites) that can be used as objective sources.

If you don't have objective sources, it's easier to lead people around by the nose - hence the attacks.


Here's the root of the problem though: Wikipedia isn't an objective source by its very nature. Wikipedia requires mainstream established news sources for a lot of articles that aren't academic in nature, and especially for articles about people. You cannot include information that isn't supported by corporate news articles, which means corporate news is now the arbiter of truth, and corporate news lies all the time about everything.

Wikipedia is, and always has been, the encyclopedia of the elite and billionaire narrative, and especially the left-wing narrative, which dominates nearly all corporate news groups. I say this as a far-left person myself.


Corporate news rarely lies outright. Libel is illegal. Articles will spin and speculate, emphasize and elide, omit and opine, but that's not lying, it's spin, and a careful reading can extract the facts of the matter.

Yes, you have to cite reliable sources on Wikipedia. Yes, this means AP is considered more reliable than someone's Substack. You can, however, cite NPR or PBS, the BBC or the Guardian. If two reliable sources differ, you cite both and describe the conflict.

How do you know that "corporate" news lies all the time about everything? Who told you that? Why do you trust them? Why should I trust them?


If you characterize something with such incredible bias, and do so knowing that the resulting impression and information someone will leave with does not match objective facts in reality, then that is dishonest and, to me, equivalent to outright lying. This mischaracterization is in basically every single political article, including literally the top story on cnn dot com right now.


> Many people don't seem to care about any evidence or sources. They blindly follow whatever lies that their leaders say.

I’m one of those people you complain about. When I did deep research on DEI, I presented evidence and sources to people like you, including judges that I knew in my private life.

It seems you didn’t care, to the point that I had in my hand a document printed from the Department of Justice’s own website (about mothers’ own violence against their children, which is as high as men’s given the scope you choose) and the person, who in his public life is a judge, didn’t even bother discussing the thesis and just told me: “This document is false. You changed the figures before printing it”.

You may say that Trump is bad for dismantling your administration, but you guys don’t care an inch about truth, evidence, sources, honesty, bad faith, or even about the number of children who are beaten to death by their mothers.


Yeah, I think you might be over-generalizing a little there.


Depends on the extent of the subjects I’ve studied and the number of good-faith and bad-faith people I’ve met.

I literally wrote a book on one of those subjects and made it to a national news channel in two countries about it.

The cause is lost for science; people don’t respond to logic.


"given the scope you decide to choose"

By changing the scope, you changed the effect. Unless you did every statistical validation here... yeah, that reads exactly like data manipulation. The t-distribution approaches the standard normal distribution as the degrees of freedom increase. That's not something anyone should ignore and give credit to. It's the same bullshit that Donald has repeatedly tried to do, to prove he's doing the right thing, even as everything falls apart.

Caring about the truth requires caring about the methodology, and not just the conclusions.
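
For what it's worth, here is a minimal sketch (assuming SciPy is available) of the point about the t-distribution: its critical values converge to the standard normal's as the degrees of freedom grow, which is exactly why the chosen scope and sample size matter.

    from scipy import stats

    z_crit = stats.norm.ppf(0.975)  # two-sided 5% critical value for N(0, 1)
    for df in (2, 5, 10, 30, 100, 1000):
        t_crit = stats.t.ppf(0.975, df)
        print(f"df={df:5d}  t critical={t_crit:.3f}  normal critical={z_crit:.3f}")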


That’s not what the judge argued. He accused me of falsifying the document by doctoring it before printing.

Which shows:

- How much bad faith you have, assuming I argued a false hypothesis to a judge,

- Condescension, in assuming that I’m not a scientist who understands p-values,

- And ultimately, you confirm the hypothesis that you conduct your research in bad faith, knowing full well the true level of violence from women and hiding it, which leads to more child deaths. You are an accessory to criminality.

Your attitude also confirms that it’s good this entire field of research is being defunded; it is a net win for science.


I'd really appreciate hearing about your research and where I could read about the violence. My Gmail username is the same as my HN username. Thank you!


The p-value is useless where the t-value does not hold up; one depends upon the other. If there are too many degrees of freedom, it doesn't matter whether the p-value looks accurate. The data is probably no longer normally distributed, requiring non-parametric testing.

You've leapt to the conclusion that I'm a researcher acting in bad faith, blaming me for a whole industry. As for defunding an entire field of research, it sounds like you'd like statistics or mathematics defunded? I'm afraid they will persist regardless. Too many industries depend upon them.
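
For illustration only, a minimal sketch (assuming SciPy and NumPy) of what "requiring non-parametric testing" looks like in practice: running a parametric t-test alongside the Mann-Whitney U test, which does not assume normally distributed data. The samples here are made up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical, heavily skewed samples that violate the normality
    # assumption behind the t-test.
    group_a = rng.lognormal(mean=0.0, sigma=1.0, size=200)
    group_b = rng.lognormal(mean=0.2, sigma=1.0, size=200)

    t_stat, t_p = stats.ttest_ind(group_a, group_b)     # parametric
    u_stat, u_p = stats.mannwhitneyu(group_a, group_b)  # non-parametric

    print(f"t-test:       p = {t_p:.4f}")
    print(f"Mann-Whitney: p = {u_p:.4f}")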


> The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts)

AI generated tests are a bad idea.


AI generated tests are genuinely fantastic, if you treat them like any other AI generated code and review them thoroughly.

I've been writing Python for 20+ years and I still can't use unittest.mock without looking up the details every time. ChatGPT and Claude are great at that, which means I use it more often because I don't have to deal with the frustration of figuring it out.
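
As a concrete (made-up) example, this is the sort of unittest.mock boilerplate that's easy to forget and that an LLM can fill in quickly; the WeatherClient class and describe_weather function here are hypothetical.

    from unittest import TestCase, main
    from unittest.mock import patch

    class WeatherClient:
        def fetch_temperature(self, city):
            raise RuntimeError("would hit the network in real code")

    def describe_weather(client, city):
        return f"{city}: {client.fetch_temperature(city)} C"

    class DescribeWeatherTests(TestCase):
        def test_formats_temperature(self):
            client = WeatherClient()
            # Patch the instance method so the test never touches the network.
            with patch.object(client, "fetch_temperature", return_value=21) as mock_fetch:
                self.assertEqual(describe_weather(client, "Oslo"), "Oslo: 21 C")
                mock_fetch.assert_called_once_with("Oslo")

    if __name__ == "__main__":
        main()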


Just as with anything else from AI, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.


Well, maybe they just need X lines of so-called "tests" to satisfy some bullshit-job metrics.


> What I've noticed in that respect is that I just read what it does and then immediately reason why it's there ....

What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.


> What if it hallucinates and gives you wrong code

Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, invalid code gets detected and the LLM keeps re-running until something valid comes out.

> It is better to read documentation and tutorials first.

I "trust" LLMs more than tutorials; there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.


Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".

As for my editor saying it is invalid...? That is just as untrustworthy as an LLM.

> I "trust" LLMs more than tutorials; there's so much garbage out there.

Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.


> Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".

I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.

Yes, AI can suggest syntactically valid code that does the wrong thing. If it obviously does the wrong thing, then that's not really an issue either because it should be immediately obvious that it's wrong.

The problem is when it suggests something that is syntactically valid and looks like it works but is ever so slightly wrong. But in my experience, it's pretty common to come across stuff like that in "tutorials" as well.

> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.

I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late 00's was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated though.


> Yes, AI can suggest syntactically valid code that does the wrong thing

To be fair, as a dev with ten or fifteen years of experience, I do that too. That's why I always have to thoroughly test the results of new code before pushing to production. People act as if using AI should remove that step, or alternatively, as if it suddenly got much more burdensome. But honestly it's the part that has changed least for me since adopting an AI-in-the-loop workflow. At least the AI can help with writing automated tests now, which helps a bit.


> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.

Emphatic no.

There were heaps of rubbish being generated by people for years before the advent of AI, in the name of SEO and content marketing.

I'm actually amazed at how well LLMs work given what kind of stuff they learned from.


Wait, are you saying you don't trust language servers embedded in IDEs to tell you about problems? How about syntax highlighting or linting?


Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?

Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.

[0]: https://knowyourmeme.com/memes/how-to-draw-an-owl


The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.

Also, some things are meant to be approached with the correct foundational knowledge (you can’t do 3D without geometry, trigonometry, and matrices, and a healthy dose of physics). Almost every time I see people struggling with documentation, it’s because they lack domain knowledge.


Fair question. So far I've seen two things:

1. Code doesn't compile. In this case it's obvious what to do.

2. Code does compile.

I don't work in Cursor. I read the code quickly to see the intent, and when I'm done with that I copy/paste it and test the output.

You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars, even though I didn't know polars could do that, I now know what it does because I know SQL. Then I need to check the output; if the output corresponds to what I expect a group-by to do, I'll move on.
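
To make that concrete, a minimal sketch (assuming a recent polars version where the method is named `group_by`; the columns are made up) of the kind of call described above, which mirrors a SQL GROUP BY with SUM:

    import polars as pl

    df = pl.DataFrame({
        "city": ["Oslo", "Oslo", "Bergen"],
        "sales": [10, 20, 5],
    })

    # Equivalent in spirit to: SELECT city, SUM(sales) FROM df GROUP BY city
    totals = df.group_by("city").agg(pl.col("sales").sum())
    print(totals)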

There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do; I know a few hours' worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things. Moreover, by just reading the code, I get a rough idea of how beginner-to-intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in the direction of how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.

AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)


What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?

That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering and devops are all about putting in guardrails to detect such "hallucinations".


Even when it hallucinates it still solves most of the unknown unknowns which is good for getting you unblocked. It's probably close enough to get some terms to search for.


Have you tried using AI only for things you already know, for a while? I almost only do so (because I haven't found that LLMs speed up my actual process much), and I can tell you that the things LLMs generally leave out/forget/don't "know" about are plentiful; they result in tons of debugging and usually require me to "metagame" heavily and ask pointed questions that someone who didn't have my knowledge simply wouldn't know to ask in order to solve the issues with the code they generate. An LLM can't even give you basic OpenGL code in C for doing some basic framebuffer blitting without missing stuff that'll cost you potentially hours or a whole day in debugging time.

Add to this that someone who uses an LLM to "just do things" for them like this is very unlikely to have much useful knowledge and so can't really resolve these issues themselves; it's a recipe for disaster and not at all a time saver over simply learning and doing it yourself.

For what it's worth, I've found that LLMs are pretty much only good for well-understood basic theory that can give you a direction to look in, and that's about it. I used to use GitHub Copilot (which years ago was (much?) better than Cursor with Claude Sonnet just a few months ago) to tab-complete boilerplate and stuff, but concluded that overall I wasn't really saving time and energy, because as nice as tab-completing boilerplate sometimes was, it also invariably turned into "it suggested something interesting, let's see if I can mold it into something useful", taking up valuable time, leading nowhere good in general and just generally being disruptive.


I don't think so. How can you be so sure it solves the 'unknown unknowns'?


Sample size of 1, but it definitely did in my case. I've gained a lot more confidence when coding in domains or software stacks I've never touched before, because I know I can trust an LLM to explain things like the basic project structure and unfamiliar parts of the ecosystem, to bounce ideas off of, and to produce a barebones one-file prototype that I rewrite to my liking. It covers a whole lot of tasks that simply wouldn't justify the time expenditure otherwise, where it would be effort-prohibitive to even try to automate or build a thing.


Because I've used it for problems where it hallucinated some code that didn't actually exist, but that was good enough to figure out the right terms to search for in the docs.


I interpreted that as you rushing to code something you should have approached with a book or a guide first.


Most tutorials fail to add meta info like the system they're using and the versions of things, which can be a real pain.


> A few months work by one guy and already more capable than the Hurd

It is in no way more capable than Hurd. It is a cool project though. Have you used Hurd recently? It can run a modern desktop.


I searched YouTube for actual evidence of Hurd booting to a desktop and only found two videos of Hurd freezing during boot, and a third video of RMS explaining to a very confused convention attendee that he's "never installed GNU slash lynn-ox" because he could just ask someone else to do it.

No videos of Hurd running Doom either, but anyone is welcome to create one and share.


The programming interface that Hurd provides is similar to that of any modern operating system. So it can run pretty much any program that runs on Linux or BSD, but you have to port it. Doom is no exception. If you cannot find a video of Doom running on Hurd on YouTube, that doesn't mean Hurd can't run Doom.

Hurd is certainly not a successful project, but it is a capable operating system. Linux comes with a lot of device drivers for all sorts of hardware, so Linux nowadays can run almost everywhere. That is not the case with Hurd, because only a small number of people are contributing to the project and it is largely eclipsed by the success of Linux. But it is an extensible system, so if you want support for some hardware, you can develop a driver for it. But nobody is interested.

If you haven't seen Hurd running a desktop, I will introduce you to Debian Hurd (https://www.reddit.com/r/linuxmasterrace/comments/18i6e94/de...). It is a Debian distribution with Hurd as the kernel instead of Linux. It comes with Xorg, and you can install XFCE or Openbox. Basically, you can install any desktop that renders on the CPU. Desktops like GNOME and KDE need more infrastructure: they rely on modern GPUs and use direct rendering. In Linux, we have DRI and Mesa for this. As of now, Hurd doesn't have any such infrastructure. As I have already said, a lot of people are contributing to Linux and only a handful of people are contributing to Hurd.


If it can run nginx and node.js on a VPS, it could be a nice alternative to Linux.


The current Debian/Hurd port of nginx is outdated, but runs fine.

There have been a few problems with nodejs, as libfuse compatibility isn't at the latest version yet. Some libraries work fine. Some explode. So you'll have to compile it yourself.

Python and Go, however, should run out of the box just fine.


It's been a few years, but I ran Hurd in a VM and it ran a nice X Window System. It's been a few years though, so I don't know what Hurd is capable of today.


Hurd is for sure a failed project and it can't realistically run a modern desktop as claimed, but it's more capable than you give it credit for. For example, see:

https://www.debian.org/ports/hurd/


YouTube isn't evidence of anything.

Set it up yourself.


FWIW, GNU/Hurd has been able to run a desktop environment for the last 20+ years.

And as someone who got Doom working on GNU/Hurd decades ago … I’m going to wager that the GP has never used it. :-)


Fabrice is an amazing programmer, and does cool things. He is an inspiration to us all.


I disagree with you. According to your definition of the best developer, is one with a skill for persuading managers enough for building complex web apps and services? A developer is hired to write code, and they achieve business goals by writing code and building software. Writing code is the skill they have, and that is why they are hired.


So cooool!


I was also asking exactly the same question.


The menubar widget was removed in GTK 4. There is GtkPopoverMenuBar in GTK 4, but it is not equivalent to the traditional menubar.

https://www.reddit.com/r/GTK/comments/xdfgjr/api_changes_in_...


Yes! GNOME 2 was great. I miss those days. The whole of GNOME is a disappointment now. GTK 4 doesn't even have the traditional menu and menubar widgets. I don't know what the GIMP people and the Inkscape people are going to do.



Both GtkMenu and GtkMenuBar are gone in GTK 4. Now there are GtkPopoverMenu and GtkPopoverMenuBar. GtkPopoverMenu is not a drop-in replacement for GtkMenu.

https://discourse.gnome.org/t/using-gtkpopovermenu-as-a-gtkm...
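
For a rough idea of what the replacement looks like, here is a minimal sketch (assuming PyGObject with GTK 4 installed) of a menu bar built from a GMenu model and shown with GtkPopoverMenuBar; the application id and menu contents are made up.

    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk, Gio

    def on_activate(app):
        win = Gtk.ApplicationWindow(application=app)

        # Menus are described as a GMenu model referencing actions,
        # not built out of widgets as with the old GtkMenuBar.
        file_menu = Gio.Menu()
        file_menu.append("Quit", "app.quit")
        model = Gio.Menu()
        model.append_submenu("File", file_menu)

        quit_action = Gio.SimpleAction.new("quit", None)
        quit_action.connect("activate", lambda *_: app.quit())
        app.add_action(quit_action)

        bar = Gtk.PopoverMenuBar.new_from_model(model)
        box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
        box.append(bar)
        win.set_child(box)
        win.present()

    app = Gtk.Application(application_id="org.example.MenuDemo")
    app.connect("activate", on_activate)
    app.run(None)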


Mate is a nice option if you want Gnome 2 but don't want to live in the past. It's very pleasant.


Yes, Mate is indeed a decent desktop for people who like GNOME 2. But GTK people are removing features left and right. I don't know how Mate is going to cope with that.

