I suspect that one difference that gives the impression that the characters in Peanuts are "untouched by failure" is that for the most part they don't have real character arcs. Once their archetypes are established they stay the same. Combine that with being the longest running comic written by a single person of all time and it feels like nothing ever changes.
That's not a critique - being a comforting source of unchanging familiarity is part of the point of a newspaper comic. But it is very different from H2G. Arthur Dent might be a bumbling failure who is flung around by forces beyond his control, but his life still changes and he still changes. He still grows a little as a person.
It's really sad that Kishimoto is so terrible at writing female characters. I'm not being a hater when I say this; he has even complained about it himself!
In terms of character concepts he's always been really great - Naruto is one of the few series I read almost from the start, all the way to the finish. At the beginning of the series the concepts for the male and female characters were all really interesting. But the female characters barely got any development compared to the male ones, and it got worse as the series went on. Partially because it ended up focusing more and more on Naruto and Sasuke, and partially because the majority of the female character development was reduced to how they relate to the male characters.
I don't think it's intentional or that Kishi has any malice towards women or anything - if that was the case I doubt he would have been able to come up with interesting character concepts for women in the first place. But the fact that they're sidelined like that still sucks, especially since the potential is there.
I'm glad that Sakura got to be a bad-ass in a few of the side-stories after the main series ended at least.
I constantly get the impression that it's a yaoi manga dressed up as a shonen, and so from that perspective I can understand why women may not be the focus.
Naruto and Sasuke spend much of the entire series pining after each other, and when they do finally - uhm - "resolve their differences", the show tacks on two female counterparts to marry them off to like an afterthought.
I don't blame Kishimoto for this; I blame the shonen crowd more for imposing their expectations on what is clearly a yaoi story.
OMG that's amazing. This is the first time I hear that take but I can actually see it, hahaha. They did have each other's first kiss after all, lmao.
Having said that, the whole issue with the women is that they're flat romantic characters most of the time (or, apparently, beards) instead of being allowed to pass the Bechdel test. I don't think being a yaoi manga really excuses that (although I can't say I've ever read any so I can't really comment on its genre conventions - surely there are female friends in the better written ones though?)
Kind of by definition we will not see the people who do not submit frivolous PRs that waste the time of other people. So keep in mind that there's likely a huge amount of survivor bias involved.
Just like with email spam I would expect that a big part of the issue is that it only takes a minority of shameless people to create a ton of contribution spam. Unlike email spam these people actually want their contributions to be tied to their personal reputation. Which in theory means that it should be easier to identify and isolate them.
That works if more overdraw = more intensity is all you care about, and may very well be good enough for many kinds of charts. But with heat map plots one usually wants a proper mapping of some intensity domain to a color map and a legend with a color gradient that tells you which color represents which value. Which requires binning, counting per bin, and determining the min and max values.
Wait, does additional blending let you draw to temp framebuffers with high precision and without clamping? Even so you'd still need to know the maximum value of the temp framebuffer though.
That's what EXT_float_blend does. It's true, though, that you can't find the global min/max in webgl2. This could be done, theoretically, with mipmaps if only those mipmaps supported the max function.
Couldn't you do that manually with a simple downscaling filter? I'd be very shocked if fragment shaders did not have a min or max function.
Repeatedly shrinking by a factor of two means log2(max(width, height)) passes, each pass is a quarter of the pixels of the previous pass so that's a total of 4/3 times the pixels of the original image. Should be low enough overhead, right?
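The same reduction is easy to sketch on the CPU (a hypothetical stand-in for the GPU passes, assuming a power-of-two square grid): each pass halves the grid, taking the max of every 2x2 block, until one value remains.

```javascript
// CPU analogue of the log2(max(width, height)) max-reduction passes:
// repeatedly halve the grid, keeping the max of each 2x2 block.
// Assumes a square grid whose side is a power of two.
function maxReduce(grid, size) {
  let cur = grid, n = size;
  while (n > 1) {
    const half = n / 2;
    const next = new Float32Array(half * half);
    for (let y = 0; y < half; y++) {
      for (let x = 0; x < half; x++) {
        next[y * half + x] = Math.max(
          cur[2 * y * n + 2 * x],       cur[2 * y * n + 2 * x + 1],
          cur[(2 * y + 1) * n + 2 * x], cur[(2 * y + 1) * n + 2 * x + 1]
        );
      }
    }
    cur = next;
    n = half;
  }
  return cur[0]; // the global maximum
}
```

On the GPU each `while` iteration would be one fullscreen pass into a half-size framebuffer, with the `Math.max` done in the fragment shader over four texel fetches.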
Sure, that will work, but it's log2 passes + temp framebuffers. As for overhead, I'm afraid it will eat a couple fps if you run it on every frame. In practice, though, I'm not sure that finding the exact maximum is that valuable for rendering: a good guess based on the dataset type will do. For example, if you need to render N points that tend to congregate in the center, using sqrt(N) as the heuristic for the maximum works very well.
That looks interesting. I'm not likely to write any jQuery any time soon, but I'll check out the source code to see if I can learn anything from it.
Regarding adoption levels, the JsViews website made me think I had accidentally toggled the "Desktop Site" option in my Iceweasel browser, and I wonder if that scared people off. Or perhaps it's because, as others mentioned, most jQuery development these days is in legacy codebases where the devs are not allowed to add any new libraries, reducing the adoption rates of any new jQuery libraries even more than you'd expect based on the raw numbers of jQuery users.
(the website does work though, and it loads fast. Which is something I've always appreciated about jQuery based sites still alive today. The only thing I'm missing is any indication of how big it is when minified + gzipped. EDIT: jsrender.js is 33.74 kB, jsrender.min.js a mere 12.82 kB)
I’ve been collaborating with Boris, the author of JsViews, and we do have plans to modernize the website—which speaks directly to your point about first impressions and adoption. You’re absolutely right that presentation matters; if something looks dated, people may disengage before digging any deeper.
I also raised the jQuery dependency concern with Boris for exactly the reason you mentioned: many teams automatically rule out anything that requires jQuery, especially outside of legacy codebases. That’s a real barrier today.
For what it’s worth, a jQuery-free version may happen. Boris is actively exploring it, but he’s making no promises—it’s a non-trivial problem and would effectively require a full rewrite rather than a simple refactor.
I remember that incident! As a side-effect I discovered that beautiful panorama picture[0], which was perfect for my two-monitors-plus-laptop-screen set-up aside from the low resolution, so I used my stippling notebook[1] to hide that a little bit[2]. I could probably tweak the stippling settings a bit to have prettier output, but it's been my wallpaper for over two years now.
I can't speak for this conference, but fake conference scams do exist.
I have a friend who fell for one. He won the Dutch "Student of the Year" award in... I think 2006. When our then-minister of education proposed to make maths more popular in high school by dumbing down the curriculum, he organized a giant STEM activity day on a nation-wide scale to popularize it. We were both physics students at the time.
Anyway, as part of the award he was allowed to pick a conference to go to. He chose one in Spain. When he arrived it turned out he had fallen for an internet scam: the conference only existed as a scam website and the money had disappeared. He still had a nice weekend with his best buddy (now husband) though, and at least it didn't cost him anything.
Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology, the article is about the marketing of the technology.
The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown and have been shown. Whether or not the actual LLM based tools genuinely help you has nothing to do with that.
The point of the article is to paint LLMs as a confidence trick, the keyword being trick. If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick? If a street performer was doing the cup and ball scam, but I actually won and left with more money than I started with then I'd say that's a pretty bad trick!
Of course it is a little more nuanced than this and I would agree that some of the marketing hype around AI is overblown, but I think it is inarguable that AI can provide concrete benefits for many people.
The marketing hype is economy defining at this point, so calling it overblown is an understatement.
Simplifying the hype into 2 threads, the first is that AI is an existential risk and the second is the promise of “reliable intelligence”.
The second is the bugbear, and the analogy I use is factories and assembly lines vs power tools.
LLMs are power tools. They are being hyped as factories of thoughts.
String the right tool calls, agents, and code together and you have an assembly line that manufactures research reports, gives advice, or does whatever white-collar work you need. No holidays, HR, work hours, overhead, etc.
I personally want everyone who can see why this second analogy does not work to do their part in disabusing people of this notion.
LLMs are power tools, and impressive ones at that. In the right hands, they can do much. Power tools are wildly useful.
But power tools do not automatically make someone a carpenter. They don't ensure you've built a house to spec. Nor is a planar saw going to evolve into a robot.
The hype needs to be taken to task, preferably clinically, so that we know what we are working with, and can use them effectively.
> If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick?
Yes, yes you can. As I’ve mentioned elsewhere on this thread:
> When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.
LLMs are being sold as miracle technology that does way more than it actually can.
And at a cost I'm not sure most fully understand. We've allowed these companies to externalise all the negative outcomes. Now we're seeing consumer electronics stock dry up, huge swaths of raw resources used, and massive invasions of privacy, all so this one guy can do his corpo job 10x faster? Nah, I'm good.
A huge amount of tech is a confidence trick. Not one aimed at the <50-year-old crowd, but at innumerate and STEM-ignorant political leaders.
It's not LLMs they care about, it's datacenter ownership. US political norms empower owners. If you think of a DC as a mega church and the remote users as the disciples, it makes the desired network effect obvious. That is leveraged to sway Congress and states.
These tech projects are not intended for users. They're designed to gain confidence of politicians, preferential political support.
Gen pop is not the market. DC is.
Most people's individual data-crunching problems can be resolved with a TI graphing calculator.
Big Tech convinced Congress that a culture of helpless consumers of their data center outputs is simpler and will lead humanity to a forever-growth future!... never mind that they will all be dead, unable to verify.
A con trick that worked great on older, more religious leaning Americans. One that's not working so well on the younger generation who know how these systems work.
But saying it's a confidence trick is saying it's a con - that they're trying to sell someone something that doesn't work. The OP is saying it makes them 10x more productive, so how is that a con?
The marketing says it does more than that. This isn't a problem unique to LLMs either. We have laws about false advertising for a reason. It's going on all the time. In this case the tech is new, so the lines are blurry. But to the technically inclined, it's very obvious where they are. LLMs are artificial, but they are not literally intelligent. Calling them "AI" is a scam.

I hope that it's only a matter of time until that definition is clarified and we can stop the bullshit. The longer it goes, the worse it will be when the bubble bursts. Not to be overly dramatic, but economic downturns have real physical consequences. People somewhere will literally starve to death. That number of deaths depends on how well the marketers lied. Better lies lead to bigger bubbles, which when burst lead to more deaths. These are facts. (Just ask ChatGPT, it will surely agree with me, if it's intelligent. ;p)
How does one go about competing at the IMO without "intelligence", exactly? At a minimum it seems we are forced to admit that the machines are smarter than the test authors.
"LLM" as a marketing term seems rational. "Machine learning" also does. We can describe the technology honestly without using a science fiction lexicon. Just because a calculator can do math faster than Isaac Newton doesn't mean it's intelligent. I wouldn't expect it to invent a new way of doing math like Isaac Newton, at least.
> Just because a calculator can do math faster than Isaac Newton doesn't mean it's intelligent.
Exactly, and that's the whole point. If you lack genuine mathematical reasoning skill, a calculator won't help you at the IMO. You might as well bring a house plant or a teddy bear.
But if you bring a GPT5-class LLM, you can walk away with a gold medal without having any idea what you're doing.
Consequently, analogies involving calculators are not valid. The burden of proof rests firmly on the shoulders of those who claim that an LLM couldn't invent new mathematical techniques in response to a problem that requires it.
In fact, that appears to have just happened (https://news.ycombinator.com/item?id=46664631), where an out-of-distribution proof for an older problem was found. (Meta: also note the vehement arguments in that thread regarding whether or not someone is using an LLM to post comments. That doesn't happen without intelligence, either.)
That doesn't appear to be what happened. But the marketing sure has a lot of people working quick to presume so.
I would guess it's only a matter of days before that proof, or one very similar, is found in the training data, if that hasn't happened already, just as has been the case every time.
No fundamental change in how the LLM functions has been made that would lead us to expect otherwise.
Similar "discoveries" occurred all the time with the dawn of the internet connecting the dots on a lot of existing knowledge. Many people found that someone had already solved many problems they were working on. We used to be able to search the web, if you can believe that.
The LLMs are bringing that back in a different way. It's functional internet search with an uncanny language model, that sadly obfuscates the underlying data while making guesswork to summarize it (which makes it harder to tell which of its findings are valuable, and which are not).
It's useful for some things, but that's not remotely what intelligence is. It doesn't literally understand.
> if you bring a GPT5-class LLM, you can walk away with a gold medal without having any idea what you're doing.
I won't be betting my money on your GPT5-class business advice unless you have a really good idea what you're doing.
It requires some (a lot of) intelligence and experience to usefully operate an LLM in virtually every real world scenario. Think about what that implies. (It implies that it's not by itself intelligent.)
I didn't say anything about cheating. In fact, if it did cheat, that would make for a much stronger argument in your favor.
If scoring highly on an exam implies intelligence then certainly I'm not intelligent and the Super Nintendo from the 90s is more sentient than myself, given I'm terrible at chess.
I personally don't agree with that definition, nor does any dictionary I'm familiar with, nor do any software engineers with whom I'm familiar, nor any LLM specialists, including the forefront developers at OpenAI, xAI, Google, etc. as far as I'm aware.
But for some reason (it's a very obvious reason $$$), marketers, against the engineers' protest, appear to be claiming otherwise.
This is what you're up against and what you'll find the courts, and lawyers, will go by when this comparison comes to a head.
In my opinion, I can't wait for this to happen.
I'd be thrilled to learn I shouldn't have to wait for that. If you're directly involved with credible research to the contrary, I would love to hear more.
But IMO, in this case at least, has nothing to do with intelligence. It's performing a search against its own training data, and piecing together a response in line with that data, while including the context of the search term (aka the question). This is run through a series of linear regressions, and a response is produced. There is nothing really groundbreaking here, as best I can tell.
These arguments usually seem to come down to disagreements about definitions, as you suggest. You've talked about what you don't consider evidence of intelligence, but you haven't said anything about the criteria you would apply. What evidence of intelligent reasoning would change your mind?
It is unsupportable to claim that ML researchers at leading labs share your opinion. Since roughly 2022, they understand that they are working with systems capable of reasoning: https://arxiv.org/abs/2205.11916
Based on an English dictionary definition, I would expect an intelligence exhibits understanding, don't you? I would hope people are reading the dictionary before they market a multibillion dollar product set to reach the masses. It seems irresponsible not to.
The article you linked discussed reasoning. That's really cool. But consider that we can say a chess-game computer opponent is reasoning: it uses a preprogrammed set of instructions to search some number of possible moves ahead and chooses the most reasonable one. It's essentially a calculator, and it is in fact reasoning. But that doesn't have much to do with intelligence. As we read in the dictionary, intelligence implies understanding, and we certainly can't say that the Chessmaster opponent from the Super Nintendo literally understands me, right?
More to the point, I don't see that any LLM has thus far exhibited remotely any inkling of understanding, nor can it. It's a linear regression calculator - much like a lot of TI-84 graphing calculators running linear algebraic functions on a grand scale. It's impressive that basic math can achieve results across word archives that sound like a person, but it's still not understanding what it outputs - and really, not what it inputs either, beyond graphing it algebraically.
It doesn't literally understand. So, it is not literally intelligent, and it will require some huge breakthroughs to change that. I very much doubt that such a discovery will happen in our lifetime.
It might be more likely that the marketers will succeed in revising the dictionary. We've often seen that if you use words wrong enough, it becomes right. But so far at least, that hasn't happened with this word.
OK, now let's talk about what it means to "understand" something.
Let's say a kid who's not unusually gifted/talented at math somehow ends up at the International Math Olympiad. Smart-enough kid, regularly gets 4.0+ grades in normal high school classes, but today Timmy got on the wrong bus. He does have a great calculator in his backpack -- heck, we'll give him a laptop with Mathematica installed -- so he figures, why not, I'll take the test and see how it goes. Spoiler: he doesn't do so well. He has the tools, but he lacks understanding of how and when to apply them.
At the same time, the kid at the next desk also doesn't understand what's going on. She's a bright kid from a talented family -- in fact Alice's old man works for OpenAI -- but she's a bit absent-minded. Alice not only took the wrong bus this morning, but she grabbed the wrong laptop on the way out the door. She shrugs, types in the problems, and copies down what she sees on the screen. She finishes up, turns in the paper, and they give her a gold medal.
My point: any definition of "understanding" you can provide is worthless unless it can somehow account for the two kids' different experiences. One of them has a calculator that does math, the other has a calculator that understands math.
> I very much doubt that such a discovery will happen in our lifetime.
So did I, and then AlphaGo happened, and IMO a few years later. At that point I realized I wasn't very good at predicting what was and was not going to be possible, so I stopped trying.
Calculators do not understand math, while both kids understand each other and the world around them. The calculator relies on an external intelligence.
Don't stop trying. Predictability is an indicator of how well a theory describes the universe. That's what science is.
The engineers have long predicted this stuff. LLM tech isn't really new. The size and speed of the machines is new. The more you understand about a topic, the better your predictions.
I'm not sure what your level of expertise is with software but I got a lot out of some free tutorials on developing your own LLM and on ML. These are even available, free, directly from Google among many other sources.
I feel that my expectations surrounding "AI" are much more realistic than they were before building the tools.
If you haven't already, it's very much worth giving them a run through.
Exactly. It’s like if someone claimed to be selling magical fruit that cures cancer, and they’re just regular apples. Then people like your parent commenter say “that’s not a con, I eat apples and they’re both healthy and tasty”. Yes, apples do have great things about them, but not the exaggerations they were being sold as. Being conned doesn’t mean you get nothing, it means you don’t get what was advertised.
The claims being made that are cited are not really in that camp, though.
It may be extremely dangerous to release. True. Even search engines had the potential to be deemed too dangerous in the nuclear pandoras box arguments of modern times. Then there are high-speed phishing opportunities, etc.
It may be an essential failure to miss the boat. True. If calculators had been upgraded, produced, and disseminated at modern Internet speeds, someone who did accounting by hand would have been fired if they refused to learn for a few years.
Its communication builds an unhealthy relationship that is parasitic. True. But the Internet and the way content is critiqued is a source of this even if it is not intentionally added.
I don't like many of the people involved, and I don't think they will be financially successful on merit alone, given that anyone can create an LLM. But LLM technology is being sold via the same organic "con" by which all technology, calculators included, ends up spreading for individuals to evaluate and adopt. A technology everyone is primarily brutally honest about is a technology that has died, because no one bothers to check whether the brutal honesty has anything to do with their own possible uses.
Such claims are not cited in the article. It may be possible to write a good article on the topic but this article could just as well be on the organic uptake of the PC and how most wealthy nontechnical people adopted a need for a PC on "cons" that preceded their ability to gain more worth than trouble.
Ok, but did you also run it for a while and compare how snappy or laggy the different DEs are? Because with xfce my old T440s ThinkPad from 2013 is still perfectly fine for browsing the web, or watching movies on the HD projector, without sacrificing too many modern interface conveniences.
I can't say the same for Gnome or KDE, and that's no slight against either (I happily use the latter on my more recent work laptop).
Having said that VSCode runs perfectly fine on the T440s too, so Electron and JavaScript aren't the fundamental reason those DEs are too demanding to use.
xfce and gnome are by far my most used desktops, and I probably would put effort into using xfce more if it supported Wayland etc., but I guess I like to tinker with newer stuff.
To me the only reason to use xfce would be if I wanted to run a lightweight desktop app on a borderline useless computer, because once you start trying to load websites you're going to blow through so much memory that desktop environments are irrelevant.
Currently gnome-shell is taking 135MB of ram, with other gdm/gnome related background services ranging in 700KB-3.2MB each to like 20MB together.
And it's as snappy as my sway config I log into depending on the needs.
I just spammed virtual desktop changes, opening Files, browsing, and it's as snappy as it is in sway.
I think gnome is getting a lot of unfair performance criticism online as it looks like something that would be slow.
Maybe it was slow back in the starting gnome3 days.
Maybe there are some heavy differences in how distros package it? (arch btw)
... though I will say that in my experience it's KDE that's the slow one. I don't have it installed currently on this machine, but I had it in the past and have it on my Steam Deck (which is stronger than this laptop).
It feels sluggish and I have this bouncing cursor wait animation in my head right now just thinking about it.