Good lord, what an atrocious Gish gallop of selective quotes and evidence. This might be one of the worst displays of sharpshooter logic I've ever seen.
AND it features a quote from William Pierce, an infamous neo-Nazi. Probably more, but I gave up after the umpteenth unverifiable quote. Just goes to show how much modern right-wing propaganda aligns with traditional neo-Nazi propaganda.
> Good lord, what an atrocious Gish gallop of selective quotes and evidence. This might be one of the worst displays of sharpshooter logic I've ever seen.
Ease up on the throttle there, LessWrong. You've blown the transaxle.
Sorry, I meant to leave that on the comment I originally called out (the one pushing a neo-Nazi-adjacent conspiracy theory, the one you rushed to defend).
You are just terminally online and woefully overly opinionated. Not entirely harmless when paired with someone like greenavocado, but mostly benign.
You've been downvoted to death, but you are correct. The types of conspiracy theories typical of those who believe in the "Zionist Occupation Government" have tremendous parallels with theories like Pizzagate and, as in this case, the conspiracy theories around the Clintons.
I say this as someone who both detests the Clintons (and their ilk) and thinks the timing of this suicide is a bit fishy.
This isn't a test environment; it's a production scenario where a bunch of people trying to invent a new job for themselves role-played with an LLM. Their measured "defections" were an LLM replying with "well, I'm defecting".
OpenAI wants us to see "5% of the time, our product was SkyNet", because that's sexier tech than "5% of the time, our product acts like the chaotic member of your DnD party".
> The error messages helpfully suggested fields I hadn’t known about by “correcting” my typos.
Glad to see this being called out. Sure, I get why it's convenient; misspelling a field by one character is a daily occurrence ("activty" and "heirarchy" are my regulars). The catch is that spellchecking queries and returning valid field names in the error effectively reduces the entropy an attacker has to search, along both the character space and the message length, with the exact reduction depending on the distance metric the spellcheck uses.
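To make the leak concrete, here's a minimal sketch (Python stdlib only; the VALID_FIELDS list and the error wording are made up) of the kind of "did you mean" helper I'm describing:

```python
# Hypothetical "did you mean" helper; VALID_FIELDS stands in for a schema
# the server never publishes directly.
import difflib

VALID_FIELDS = ["activity", "hierarchy", "account_id", "created_at"]

def error_for_unknown_field(field: str) -> str:
    # A single near-miss probe like "activty" confirms "activity" exists:
    # the error message itself leaks schema information.
    suggestions = difflib.get_close_matches(field, VALID_FIELDS, n=1, cutoff=0.6)
    if suggestions:
        return f"Unknown field '{field}'. Did you mean '{suggestions[0]}'?"
    return f"Unknown field '{field}'."

print(error_for_unknown_field("activty"))    # leaks "activity"
print(error_for_unknown_field("heirarchy"))  # leaks "hierarchy"
```

Two typo probes and an attacker has confirmed two real field names without ever having to guess them character-for-character.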
Could it be that language patterns themselves embed truthfulness, especially when that language is sourced from forums, wikis, etc? While I know plenty of examples exist to the contrary (propaganda, advertising, disinformation, etc), I don't think it's too optimistic to assert that most people engage in language in earnest, and thus, most language is an attempted conveyance of truth.
I've seen multiple companies over the past couple of years drop some really interesting projects to spend several months trying to make LLMs do things they weren't made for. Now most are simply settling for chat agents running on dedicated capacity.
The real "moat" OpenAI dug was overselling its potential in order to convince so many to halt real AI research, to only end up with a chat bot.
Poor phrasing on my part. OpenAI ended up with the mantle of "the Amazon of AI". Everybody else ended up with a chat bot. The rest of their services are standard NLP/ML behind an API, built up from all the money thrown at them and subsequently used to bolster their core offerings: a chat bot and an automated mood board for artists.
Really? They are a full platform for most popular applied AI, similar to AWS Bedrock and its other AI services, or Google Vertex. They cover vision, language translation, text generation and summarization, text-to-speech, speech-to-text, audio generation, image generation, function calling, vector stores for RAG, an AI agent framework, embeddings, and, recently with o1, reasoning and advanced math. All of this sits on top of the general knowledge base.
You might be a wee dismissive of how much a developer can do with OpenAI (or the competitors).
I think the point was that, despite all this, the only thing you can reliably build is a fancy chat bot. A human has to be in the seat making the real decisions and simply consulting OpenAI.
I mean, there's TTS and some translation stuff in there, but it's hard to call that "AI", even though it uses neural networks and the like to solve the problem.
Since when do you need a human in the mix? For example, there are financial risk analysis applications that use prompt templates and function calling, and have no chat bot interface to the end user. This is one of many examples. I think the leap people miss is that you have to talk to the AI in some way; natural language is how LLMs fundamentally work, so you have to express the problem space in that mode to get it to solve problems for you as a developer. For some coders, I guess that is uncomfortable.
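For the skeptical, a minimal sketch of the pattern I mean, assuming the current openai Python SDK (v1-style client). The model name, tool schema, and transaction text are illustrative placeholders, not a real risk product:

```python
# An LLM used as a backend component, not a chat bot: the model's only
# output surface is a structured tool call routed into a normal pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "flag_for_review",
        "description": "Flag a transaction for manual risk review.",
        "parameters": {
            "type": "object",
            "properties": {
                "transaction_id": {"type": "string"},
                "reason": {"type": "string"},
            },
            "required": ["transaction_id", "reason"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a transaction risk screener."},
        {"role": "user", "content": "txn 8842: $9,900 wire, new payee, 2am local time"},
    ],
    tools=tools,
)

# If the model chose to call the tool, hand it to the existing review queue.
# No human and no chat interface anywhere in the loop.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```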
Are you sure making those jobs more efficient is the right goal? David Graeber may have disagreed, or at least argued that the most efficient action is to remove those jobs altogether.
A customer service agent isn't a bullshit job. They form a user interface between a complex system and a user who isn't an expert in the domain. The customer service agent understands the business domain, as well as how to apply that expertise to what the customer wants and needs. Consider the complexity of what a travel agent or airline agent does: they need to understand the business domain of flight availability and pricing, plus technical details of the underlying systems, and be able to communicate comfortably in both directions with a customer who knows little or none of the above. This role serves a useful purpose and doesn't really qualify as a bullshit job. But in principle, all of this could be done by a well-crafted system built on OpenAI's APIs (which others in these threads have said are "just chatbots").
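A rough sketch of that bidirectional translation, again assuming the openai Python SDK; the fare payload and prompt wording are invented, not a real airline feed. Structured domain data goes in, a plain-English answer for the non-expert comes out:

```python
# The return direction of the interface: domain system -> customer.
import json

from openai import OpenAI

client = OpenAI()

fares = [
    {"flight": "UA 512", "depart": "2024-06-03T07:15", "stops": 0, "usd": 412},
    {"flight": "DL 889", "depart": "2024-06-03T11:40", "stops": 1, "usd": 268},
]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Explain fare options to a customer who knows nothing "
                    "about airline systems. Be brief and concrete."},
        {"role": "user",
         "content": "What's the cheapest way to get there Monday morning?\n"
                    + json.dumps(fares)},
    ],
)
print(resp.choices[0].message.content)
```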
Interfacing with people and understanding business domain knowledge is in fact something we can do with LLMs. There are countless business domains/job areas that fit the shape I described above, enough to keep engineers busy for a real long time. There are other problem shapes we can attack with LLMs as well, such as deep analysis that recommends process improvements (Six Sigma kinds of things). Process improvement, some might say, gets closer to the kinds of things Graeber might call bullshit jobs, though...
In theory, I agree that LLMs could perform those jobs.
I may just be less of a techno-optimist. If history is any guide, the automation of front-line human interfaces will lead to worse customer service in the name of lowering labor costs as a means of increasing profits. That seems to make things worse for everyone except shareholders. In those cases, we're not making the customer's experience more efficient; we're making the generation of profit more efficient at the cost of the customer's experience.
Well their chatbot helped me write a tabbed RDS manager with saved credentials and hosts in .NET last night in about 4 hours. I've never touched .NET in my life. It's probably going to save me 30 minutes per day. Pretty good for a chat bot.
Do you really think I'm suggesting it was a long-laid plan behind someone choosing a name ~9 years ago?
I'm saying that the current relationship between the company and end-users--especially when it comes to "open" monikers--has similarities to an "Open Beta": A combination of PR/marketing, free testing, and consumer data collection, where users should be cautious of becoming reliant on something that may be yanked back behind a monetization curtain.
"In some sense", any word can mean anything you want. "Open" carries with an accepted meaning in technology that in no way relates to what you're describing. You may as well call McDonald's "open".
Parameterizing notebooks is a feature common to modern data platforms, and most of its usefulness comes from saving the output. That makes it easier to debug ML pipelines and such, because the code, documentation, and last output are all in one place. However, I don't see any mention of what happens to the outputs with this tool.
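For comparison, here's the convention as papermill implements it (the file names and the parameter are placeholders I made up):

```python
# Executing the notebook writes a new .ipynb in which the injected
# parameters and every cell's output are saved together.
import papermill as pm

pm.execute_notebook(
    "train_model.ipynb",               # source notebook with a "parameters" cell
    "runs/train_model_lr0.01.ipynb",   # executed copy: code, params, and outputs
    parameters={"learning_rate": 0.01},
)
```

That saved output copy is exactly the part I'm asking about here.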
Focusing on support windows seems to treat the symptom rather than the cause. Why not instead focus on the practices that attempt to gouge customers on the secondhand market, e.g. Peloton's used-equipment activation fee?