Reading between the lines, I think a key part of what makes chatbots attractive, regarding their lack of judgment, is that they're like talking to a new stranger every session.
In both IRL and online discussions, a stranger is sometimes the perfect person to talk to about certain things, as they have no history with you. In ideal conditions they have no greater context about who you are and what you've done, which is a very freeing thing (it can also be taken advantage of in bad faith).
Online spaces, and now LLMs, add an extra freeing element, assuming anonymity: they have no prejudices about your appearance, age, or abilities either.
Sometimes it's hard to talk about certain things when one feels that judgment is likely from another party. In that sense chatbots are being used as perfect strangers.
Again, I think they have utility as a “perfect stranger” as you put it (if it stays anonymous), or “validation machine” (depending on the sycophancy level), or “rubber duck”.
I just think it’s irresponsible to pretend these are doing the same thing skilled therapists are doing, just like I think it’s irresponsible to treat all therapists as equivalent. If you pretend they’re equivalent, you’re basically flooding the market with a billion free therapists that are bad at their job, which will inevitably reduce the supply of good therapists, since fewer will enter the field in an oversaturated market.
Also important is simply that the AI is not human.
We all know that however "non-judgmental" another human claims to be, they are having all kinds of private reactions and thoughts that they aren't sharing. And we can't turn off the circuits that want approval and status from other humans (even strangers), so it's basically impossible not to mask and filter to some extent.
I've always assumed fingerprinting was already ubiquitous. I look at the absolute absurdity of tracking/fingerprinting permission dialogs on sites, stating up-front data sharing with 'trusted partners' numbering in the hundreds (thingiverse.com with over 900, theverge.com on mobile with over 800), and find it surprising that blocking everything by default isn't the default state of all clients.
Edit: for clarity, I believe anything with the ability to analyze the user environment via JavaScript/etc. on major sites is likely fingerprinting regardless. Blocking, environment isolation, and spoofing are already necessary to mitigate this.
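To make "analyze the user environment" concrete, here's a minimal sketch of the kind of passive signals a script can read. The property names are standard Web APIs, but the FNV-1a aggregation is purely illustrative; real fingerprinters also probe canvas, WebGL, audio rendering, and installed fonts.

```javascript
// Sketch of browser-environment signals a fingerprinting script can read.
// Collapsing them into one string/hash is illustrative only.
function collectSignals(env) {
  const n = env.navigator || {};
  const s = env.screen || {};
  return [
    n.userAgent,                    // browser + OS build string
    n.language,                     // UI language
    n.hardwareConcurrency,          // logical CPU cores
    n.deviceMemory,                 // approximate RAM in GB
    s.width + "x" + s.height,       // display resolution
    env.devicePixelRatio,           // display scaling
    new Date().getTimezoneOffset()  // timezone
  ].join("|");
}

// Naive 32-bit FNV-1a hash, just to show how signals collapse into an ID.
function fingerprint(env) {
  let h = 0x811c9dc5;
  for (const ch of collectSignals(env)) {
    h ^= ch.codePointAt(0);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}
```

In a real page this would be called as `fingerprint(window)`; spoofing works precisely because every one of these values can be faked or normalized by the client.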
> Given that Miyazaki is known to hate GenAI I really can't condone
Not that I wouldn't expect something similar from Miyazaki about generative art in general, but the actual source of all the articles/memes about his quote is a 2016 video in which he's shown a demo of a disturbing 3D simulation of an oily-looking human figure crawling along the ground by its head, while the dev explains to Miyazaki and others that 'it feels no pain, so it learned to move by its head' and that it could be used for horror games.
It's then that Miyazaki expresses the 'insult to life itself' quote and explains that the devs have no idea what human pain is. Makes one wonder how the devs thought the reaction would be any different tbh.
Edit: reading further, he clarified in an interview[1] a couple of years later that his distaste came from believing the dev was aiming to play such body contortions of realistic human figures for laughs, which he took issue with.
The core of the article is that LLM chatbots, having zero IRL emotional states/responsibilities/distractions, respond in predictable, non-judgmental ways, which may create a preference for and even attachment to them over messy humans. That's worth considering for how it may shape interactions across all ages tbh.
Consider for example the process of casually learning about a subject in a non-academic scenario. People frequently will search online, ask on forums/chat and watch videos. Search results are widely critiqued for being poor these days, so people often turn to asking communities for specific questions.
Sometimes the only communities available are beginner hostile, expecting some prerequisite understanding and even being antagonistic toward knowledge gaps (mostly because it's uninteresting to them and they've seen so many of such questions it introduces a jadedness).
It's scenarios like that where non-judgmental, always available LLMs stand out in contrast most. I have wondered though how that might shape the dependency on LLMs broadly if people get used to not having to deal with the rough edges of humans.
They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.
One distinctive quality I've observed with OpenAI's models (at least with the cheapest tiers of 3, 4, and o3) is their human-like face-saving when confronted with things they've answered incorrectly.
Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame, even when it's an inarguable factual error about conceptually non-heated things like API methods.
It's an annoying behavior of their models and in complete contrast to, say, Anthropic's Claude, which ime will immediately and directly admit to things it responded incorrectly about when the user mentions it (perhaps too eagerly).
I have wondered whether this is something it's learned from training on places like Reddit, whether OpenAI deliberately trained or system-prompted it to seem more infallible, or whether models like Claude were deliberately tuned to reduce that tendency.
> It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it
I don't know what's better here. ChatGPT did have a tendency to reply with things like "Oh, I'm sorry, you are right that x is wrong because of y. Instead of x, you should do x"
> Rather than directly admit fault they'll regularly respond in subtle (moreso o3) to not so subtle roundabout ways that deflect blame rather than admit direct fault
Human-level AI is closer than I'd realised... at this rate it'll have a seat in the senate by 2030.
The upside of Reddit data is you have updoots and downdoots, so you can positively train your AI model on what people would typically upvote and train against what they might downvote.
Now, that's the upside. The downside is you end up with an AI catering to the typical redditor. Since many claims there are accepted on the basis of "confident, sounds reasonable, speaks with authority, gets angry when people disagree", hallucinations happen. Rather, we want something that "produces evidence-based claims with unbiased data sources".
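As a toy illustration of the upvote/downvote idea (not how production preference training actually works, and the thresholds here are made up), vote counts could be binned into "preferred" and "rejected" training examples like this:

```javascript
// Toy sketch: bin comments into preferred/rejected training examples by
// net vote score. Comments scoring at least `hi` are treated as examples
// to train toward, those at or below `lo` as examples to train against,
// and the noisy middle band is discarded. Thresholds are arbitrary.
function labelByVotes(comments, hi = 10, lo = -2) {
  const preferred = [];
  const rejected = [];
  for (const c of comments) {
    const score = c.upvotes - c.downvotes;
    if (score >= hi) preferred.push(c.text);
    else if (score <= lo) rejected.push(c.text);
  }
  return { preferred, rejected };
}
```

Which, of course, optimizes for whatever the voting population rewards, popularity rather than correctness.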
But this is the opposite of fair use. They're licensing the content, which means they're paying for it in some fashion, not just scraping it and calling it fair use.
If you don't like the fair use of open information, I would expect you to be cheering this rather than losing respect for those involved.
Again, like others have contested with you, how is this The Guardian's fault to have issue with? They convinced ClosedAI to give them money in a licensing deal to use their content as training data without having it scraped for free.
Your sense of injustice or whatevs you want to call it is aimed in the opposite direction.
Do you mean in terms of open source vector editors? There are a wide variety of tools with SVG authoring/editing capability, among the most well-known being Adobe Illustrator, Sketch, and Affinity Photo/Designer; even some web apps were made for online SVG editing (e.g. SVGator).
Inkscape, like some tools such as Affinity's, adds its own XML namespace with custom attributes and values, though for arrows I would expect it to use native `marker` elements.
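For reference, arrows in hand-written SVG need no editor-specific namespace at all; a minimal example using the native `marker` element (the id, sizes, and coordinates here are arbitrary):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 50">
  <defs>
    <!-- Arrowhead reusable by any stroked shape via marker-end -->
    <marker id="arrow" viewBox="0 0 10 10" refX="9" refY="5"
            markerWidth="6" markerHeight="6" orient="auto-start-reverse">
      <path d="M 0 0 L 10 5 L 0 10 z"/>
    </marker>
  </defs>
  <line x1="10" y1="25" x2="85" y2="25" stroke="black"
        stroke-width="2" marker-end="url(#arrow)"/>
</svg>
```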
It's certainly true that with SVG's flexibility and particularly with cross-browser handling differences/bugs it can become its own task to get consistent presentation when doing more complex things with it. Still very fond of the format.
Inkscape is the only major vector graphics editor that relies on SVG as its native file format. Most other apps are merely allowing you to import/export SVG files which is often a lossy process (e.g. vector objects with filter effects might get rasterized).
SVGator is focused primarily on animation and it's rather pricey. Boxy SVG might be a better choice if you are looking for a web-based SVG editor (disclaimer: I'm the developer).
> You have to now look at the distribution of reviews rather than the overall average since fake ones at 4 and 5 star swamp all else.
An unfortunate recent change for anyone using Amazon as a general review resource is that they've login-gated links to everything except the summary page. So without an account, or without logging in, one can't view lower-starred reviews.
My favorites are ones which list the most relevant pages it can find (catching scenarios where URLs have changed but no redirect has been added) but only if the site has a functional search feature.
The problem is too many on-site search implementations are anywhere from useless to completely broken, even on some household-name brand sites. My go-to test, while on an existing product page, is to search verbatim for part of the product title or a snippet of a bullet-point feature, to see if the page I'm currently on gets returned at all. It's stunning how many sites fail this test.
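That test is easy to state as code. A sketch, where `searchFn` is a hypothetical stand-in for whatever the site's search endpoint returns:

```javascript
// Self-retrieval test: given a site's search function and a page known to
// exist, check whether searching for a verbatim snippet of that page's own
// title returns the page at all. `searchFn` is a hypothetical stand-in:
// (query) => array of result URLs.
function pageFindsItself(searchFn, pageUrl, titleSnippet) {
  return searchFn(titleSnippet).includes(pageUrl);
}

// Toy in-memory "site search" to illustrate: a naive case-insensitive
// substring match over page titles, which is roughly the bar many real
// site searches fail to clear.
function makeNaiveSearch(pages) {
  return (query) =>
    Object.entries(pages)
      .filter(([, title]) => title.toLowerCase().includes(query.toLowerCase()))
      .map(([url]) => url);
}
```

With a hypothetical catalog `{ "/p/42": "Cordless Drill 18V Brushless" }`, `pageFindsItself(search, "/p/42", "18V Brushless")` should come back `true`; on too many real sites, the equivalent query comes back empty.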
It's an interesting use case. They show two examples of the accent modification: Indian and Filipino. In my experience, every Filipino call center I've interacted with (via auto-redirected customer support numbers) has used a learned American accent, while Indian-redirected calls ime have kept their native accent.
Philippine English is apparently exported to much of Asia via Filipino English teachers (as they're native speakers of their variety of English), so this is likely not just about the Philippines.
Indian English on the other hand is of course about India, Pakistan, Bangladesh, etc.
It's pretty obvious what this product is for and who the target market is. Companies don't outsource their call centers to countries where these varieties of English are spoken by the majority of English speakers because they are looking for the best talents.
> Companies don't outsource their call centers to countries where these varieties of English are spoken by the majority of English speakers because they are looking for the best talents.
Any company anywhere always has to look for the best talent at the best price.
Otherwise, you'd only hire Nobel laureates, and go bankrupt immediately.
> Any company anywhere always has to look for the best talent at the best price.
Surely you understand the difference between wanting to hire superstars but then adjusting your targets downward based on what salaries you can offer versus wanting to pay as little as possible for call center agents and then adjusting your targets upward based on what you can get away with. "I want the best I can get but I need to be able to afford it" vs "I want to pay nothing but I need to spend enough to get something that works".
Expectations for customer service tend towards rock bottom for many businesses. It's something they have to provide or their customers will get very upset (or they might even get them into legal trouble) but it only has to be good enough to be serviceable. And for call centers this usually means you end up having people closely follow a script anyway so you're literally just paying people to be human dialog trees. It's a nuisance but you can't get away with not having it, so you want to pay as little as possible.
You're talking about cut-offs for price ranges - that's indeed a given which is why I thought it doesn't bear mentioning. I'm talking about whether you go over the resulting list sorting from lowest to highest ("best of the cheapest") or highest to lowest ("cheapest of the best").
> Surely you understand the difference between wanting to hire superstars but then adjusting your targets downward based on what salaries you can offer versus wanting to pay as little as possible for call center agents and then adjusting your targets upward based on what you can get away with. "I want the best I can get but I need to be able to afford it" vs "I want to pay nothing but I need to spend enough to get something that works".
Sure, different companies run different business models.
There's space for both Singapore Airlines and RyanAir in the world.
You should pick what your budget and preferences agree with. No need to be judgemental.
I wonder if this is a recent thing. I haven't visited the Philippines, but IRL I haven't heard Filipinos speak with this accent, nor did Filipino friends ~15 years ago (they instead sounded more like the unadjusted Filipino example on the website here).
It's weird reading that claim, as there have been plenty of routers designed for OpenWRT in years prior, some even quasi-bespoke. A selling point of the product is that they release their source code, but afaik this just means OpenWRT itself (and I suppose the bootloader), as various chips have closed-source firmware anyway (much like other products that try to be as open source as possible).