
Yay.

WFH autonomy isn’t trivial; it’s about reclaiming control of your time, environment, and productivity. If collective action focuses narrowly yet powerfully on securing that benefit, the leverage is clear. Companies resisting WFH often rely on isolated dissent; collective solidarity flips that script. Risky? Sure. But meaningful rights rarely arrive quietly. Worth the fight.


It's a game changer, no doubt, but unionizing is powerful AND risky.

The benefit must be compelling enough and/or the would-be unionizers have to be risk-tolerant (i.e., willing to be illegally fired).


No.

Your app might gamify speech practice, but it overlooks crucial elements: nuanced human judgment, emotional rapport, and adaptive interpersonal communication. Speech therapists don’t just correct sounds; they navigate psychological nuances, adjust dynamically based on subtle cues, and foster genuine motivation through trust. AI might imitate, but can’t authentically replicate this.

Parents wary of therapy’s cost and engagement issues might initially bite, but sustained improvement demands personalised professional insight. Edtech and AI thrive as complements, not replacements.

Reframe your positioning clearly as a supplemental practice tool, not a replacement for professional therapy, or risk selling parents a mirage.


Thanks for sharing.

I completely agree that AI can’t replace a trained speech therapist. My goal is to bridge the gap between sessions, giving children more frequent and engaging practice opportunities.

Right now, many kids only see a therapist once a month. I think the app can be an AI-powered supplemental tool that provides gamified speech exercises to reinforce what they learn in therapy.

Would reframing our wording to emphasize this "complementary" role make it clearer?

Would love to hear more from you.


Ah, I thought you were proposing something to use as an alternative to traditional speech therapy.


My 2004 BMW 5 Series had this (it was called iDrive): a command-style knob that could move along an axis of sorts (up and down, left and right), and you could also press it in. I absolutely hated it.


They have removed it in their newest models. They have truly lost their minds at BMW. Everything is on the touch screen, and the panel that housed all the tactile buttons is now one big LED strip. Truly dumb design and a ton of wasted space.


That was among the first generations of iDrive. I was (and to some extent still am) skeptical, particularly given how overwhelmingly negative the reception was at the time. But, FWIW, the motoring press was later – as BMW apparently improved the system significantly – swayed to accept, and sometimes even praise, the iDrive. At least from about 2015-20, or possibly as early as 2010-15.

Can't give any personal evaluation, as I've never (AFAICR) driven a BMW.


Small review: 2013 BMW 335xi (took it for a test drive). Loved iDrive. It remained my benchmark for car infotainment UI for 10+ years. The main thing: you could DRIVE the car, aggressively, and give commands to the infotainment without missing a beat.

Truly a lost art today. Fond memories.


Aggressive BMW driver. You don't say.


The first iDrive version was CCC, and it was reviled back then and still is. I think the opinion shift happened with the CIC version, which started rolling out around 2009 and had a massively different user interface. And you can tell that CIC was a much better system, since the next version, NBT, introduced around 2013, only tweaked the CIC UI instead of completely replacing it.


I learned this lesson the hard way with my early 2000s BMW 5 Series (a 2004 model). It had a single joystick-style knob (iDrive, if I remember correctly) controlling a screen that handled everything—climate, settings, and more. The problem? It was an all-in-one system, completely integrated with vehicle functions, which meant you couldn’t swap it out for a newer or better OEM system. You were stuck with aging tech, and once the screen or computer started acting up, there were no simple fixes. No cheap button replacement, no easy upgrades.

Compare that to an old LandCruiser or similar vehicle from the ’80s. Physical controls still work decades later, and worst-case scenario, you replace a button or a switch for pocket change. Meanwhile, modern cars are turning into disposable tech products, destined for obsolescence the moment their proprietary systems fail. It's for this reason that when I bought a new car a couple of years ago, I opted for a Toyota LandCruiser; the use of physical buttons (despite it coming with touchscreens now) makes a huge difference when you're driving and want to press a button to change music or turn the volume up or down.


Having a standard DIN/double-DIN slot for a head unit also makes upgrades/changes easier. With all the new cars having such bespoke and integrated systems, stuff like changing out the head unit is much harder.

E.g., putting a new CarPlay double-DIN head unit in is much easier in older cars, and difficult/impossible in newer cars.


New-enough passenger cars tend to have connectivity like CarPlay built in.

Old-enough passenger cars tend to have a standard-enough DIN-ish hole to fit a modern aftermarket unit with modern connectivity.

In between those two, there's a world of cars that have touch-screen controls but lack modern connectivity. This is a subset of vehicles that cannot do anything but shrink as they age out.

But there are some aftermarket solutions for these, too, which add modern connectivity while retaining the stock electronics and physical appearance. (There's actually quite a diverse array of these upgrades available, though the origin of these devices feels very strange to me compared to traditional car audio aftermarket, and it is also absolutely exclusively Chinese.)


My 2021 BMW has iDrive and physical buttons and dials. It’s a fantastic system.

A lot of iDrive systems can be replaced, though I do worry where the cheap ebay components come from.

Later iDrive was so much better than the earlier versions, probably the best system I have used in a car, but now they have gone with huge touch screens. Yes, it looks impressive, and no, I don't want it.


iDrive 7 (what you probably have in your 2021, and what I've had in 2022 and 2020 BMWs) is peak iDrive to me. The important stuff (climate control, volume, radio on/off) still has physical non-capacitive buttons, while the radio is entirely controllable with the touchscreen OR iDrive Controller (the puck in the center console). I barely have to take my eyes off the road to do anything, even in the iDrive system, since I have muscle memory for the iDrive Controller motions.

And now they're rolling out iDrive 8 vehicles that move climate control to the touchscreen and don't even have the puck (looking at you, X1) and I can't imagine buying one of those cars. I'm happy with my 2020.


Not sure if people picked up on it, but this is being powered by the unreleased o3 model, which might explain why it leaps ahead in benchmarks considerably and aligns with the claims that o3 is too expensive to release publicly. Seems to be quite an impressive model and the leading one out of those from Google, DeepSeek and Perplexity.


> Which might explain why it leaps ahead in benchmarks considerably and aligns with the claims o3 is too expensive to release publicly

It's the only tool/system (I won't call it an LLM) in their released benchmarks that has access to tools and the web. So, I'd wager the performance gains are strictly due to that.

If an LLM (o3) is too expensive to be released to the public, why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model. Most likely o3-mini or o1-mini, combined with 4o-mini for some tasks.
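To make the cost argument concrete, here's a minimal sketch (Python, using the openai client) of how a research loop could put a cheap model on the many intermediate calls and save a bigger model for the final synthesis. This is purely my assumption about how such a tool might be split up, not how OpenAI actually built this; the model names and prompts are placeholders.

    from openai import OpenAI

    client = OpenAI()

    def ask(model: str, prompt: str) -> str:
        # One chat completion call; returns the reply text.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def deep_research(question: str, sources: list[str]) -> str:
        # Many cheap intermediate calls: summarize each fetched source
        # with a small model (placeholder name).
        notes = [
            ask("o3-mini", f"Extract the facts relevant to {question!r}:\n{src}")
            for src in sources
        ]
        # One expensive call at the end: synthesize the final report
        # with a bigger model (placeholder name).
        return ask("o1", f"Answer {question!r} using these notes:\n" + "\n\n".join(notes))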


>why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model.

The same reason a lot of people switched to GPT-4 when it came out even though it was much more expensive than GPT-3: it doesn't matter how cheap a model is if it isn't good enough or is much worse.


It was expensive because they wanted to charge more for it, but DeepSeek has forced their hand.


They’ve only released o3-mini, which is a powerful model but not the full o3 that is being claimed as too expensive to release. That being said, DeepSeek for sure forced their hand to release o3-mini to the public.


o3-mini was previewed in December. DeepSeek maybe made them release it a few weeks early, but it was already on its way.


I guess the question is, did DeepSeek force them to rethink pricing? It's crazy how much cheaper it (V3 and R1) is, but considering they (DeepSeek) can't keep up with demand, the price is kind of moot right now. I really do hope they get the hardware to support the API again. The V3 and R1 models that are hosted by others are still cheap compared to the incumbents, but nothing can compete with DeepSeek on price and performance.


No, they didn't; this was literally all announced in December with a release date for January.


Rightfully so, some models are getting super efficient.


Interesting, thanks for highlighting! I did not pick up on that. Re: "leading", though:

Effectiveness in this task environment goes well beyond the specific model involved, no? Plus they'd be fools (IMHO) to only use one size of model for every step in a research task -- sure, o3 might be an advantage when synthesizing a final answer or choosing between conflicting sources, but there are many, many steps required to get to that point.


I don't believe we have any indication that the big offerings (claude.ai, Gemini, Operator, Tasks, Canvas, ChatGPT) use multiple models in one call (other than for different modalities, like having Gemini create an image). It seems to actually be very difficult technically, and I'm curious as to why.

I wonder how much of an impact our still being so early in the productization phase of all this has. It takes a ton of work and training and coordination to get multiple models synced up into an offering, and I think the companies are still optimizing for getting new ideas out there rather than truly optimizing them.


...or it's all a farce, for now.


I'm sure o3 will be a generation ahead of whatever deepseek, google and meta are doing today when it launches in 10 months, super impressive stuff.


I’m not sure if you’re implying this subtly in your comment or not, as it’s early here, but it does of course need to be a generation ahead of where their competitors will be after 10 months of moving forward too. Nobody is standing still.


I read a fair amount of sarcasm in the parent comment ;)


> but this is being powered by the unreleased o3 model

What makes you believe that?


They explicitly stated it in the launch announcement.


The linked article says,

> Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.

If that's what you're referring to, then it doesn't seem that "explicit" to me. For example, how do we know that it doesn't use less thinking than o3-mini? Google's version of deep research uses their "not cutting edge version" 1.5 model, after all. Are you referring to something else?


o3-mini is not really "a version of the o3 model"; it is a different model (fewer parameters). So their language strongly suggests, imo, that Deep Research is powered by a model with the same number of parameters as o3.


Has anyone here tried it out yet?


Pro user. No access like everyone else.

OpenAI is very much in an existential crisis, and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release, right?


Per the below, seems it's not available to many yet.

https://news.ycombinator.com/item?id=42913575


I love the Sublime Text editor. I've been using it for 15 years now, and despite the fact that most of my development is done inside VSCode or other editors, I still use ST for large files and notes. I can confidently open a 1 GB SQL dump in ST and it won't break a sweat; try that in VSCode and you'll see it freeze up for a bit, and that's on a decent machine too.


This is like watching a carpenter blame their hammer because they didn’t measure twice. AI is a tool, like a power tool for a tradesperson: it'll amplify your skills, but if you let it steer the whole project? You’ll end up with a pile of bent nails.

LLMs are jittery apprentices. They'll hallucinate measurements, over-sand perfectly good code, or spin you in circles for hours. I’ve been there, back in the GPT-4 days especially; nothing stings like realising you wasted a day debugging the AI’s creative solution to a problem you could've solved in 20 minutes.

When you treat AI like a toolbelt, not a replacement for your own brain? Magic. It’s killer at grunt work like explaining regex, scaffolding boilerplate, or untangling JWT auth spaghetti. You still gotta hold the blueprint. AI ain't some magic wand: it’s a nail gun. Point it wrong, and you’ll spend four days prying out mistakes.

Sucks it cost you time, but hey, now you know to never let the tool work you. It's hopefully a lesson OP learns once and doesn't let it sour their experience with AI, because when utilised properly, you can really get things done, even if it's just the tedious/boring stuff or things you'd spend time bashing into Google, reading docs, or finding on StackOverflow.


So apparently, Sam Altman is so skilled he can cast magic spells on people? This all just adds credence to the idea that he was fired because of a coup.


Or he is great at manipulating people, so it's hard to point to a single example that proves it.


Depends on whether he was entirely candid about which spells he was casting on them, I suppose.


most people don’t save their receipts. it is a hassle until you need it.

most interactions aren’t between “the good side” and “the bad side.”

there seems to be a hesitance to entertain the possibility that a bad board pulled a coup on a bad ceo. harder to pick sides.

more likely than a good board pulling a coup on a good ceo, but that’s also possible. everyone got too afraid and now they hug it out.

so, no, not magic. just mass amounts of detail outside the public eye. makes for a lot of possibilities that people will speculate about online. even argue over it.


The fact the new CEO can't even get answers from the board is quite telling. Looks like the OpenAI board wants those investor lawsuits. And allegedly the Quora guy Adam D'Angelo is the ringleader of all this?


This whole saga has helped me see how rumors grow. (And I know you used the word allegedly, but still.) First it was Ilya who was the ringleader. Now it is Adam. There has been a small amount of new information (Ilya seems to have backtracked), but there is no real information to suggest Adam was a ringleader. It is pure speculation from people trying to make sense of the whole thing.


There is no evidence that Adam is the ringleader.

All four are possible ringleaders.

Given Ilya's change of heart he is slightly less probable as the ringleader.


I have no evidence, but I do have faith that anyone who turned Quora into what it is today could totally be the ringleader of this clusterflack.


Adam runs a clone of ChatGPT (the Poe platform). It's right there on his Twitter account. Isn't this a conflict of interest and a motive?


First, the board allowed the for-profit corp to form, and now it's firing the guy who did it. Second, they allow a board member to build a competing startup. What kind of AI safety/alignment/save-the-world crap is that?


yes, yes it is.


The detective from Knives Out would have solved this by now


> Given Ilya's change of heart he is slightly less probable as the ringleader.

I tend to believe that was exactly his strategy with his change of heart...


I'm not sure if lawsuits against the non-profit will be possible, as the investors didn't invest in it. More likely, making public the facts behind who was responsible for the shenanigans and what evidence they had (if any), combined with pressure from employees, will force their hand.


Or Dustin Moskovitz; it seems many of the board members may be linked to him.


No https://www.threads.net/@moskov/post/Cz482XgJBN0?hl=en

"A few folks sent me a Hacker News comment speculating I was involved in the OpenAI conflict. Totally false. I was as surprised as anyone Friday, and still don’t know what happened."


HTMX is cool, but it's honestly only in the conversation because the developer has leveraged memes and garnered popularity on Twitter.


yeah, i'm not google or facebook or the ny times or whatever, i'm a lone dev in montana

what other options did i have?


People don't realize that they use many of their dev tools more because of those tools' large marketing reach than because of any objective decision to use them.

Entire companies have been built around the premise of good developer marketing, and if you're FAANG, you simply get to impose whatever developer trends you want to see in the market.

I'll never forget how wildly popular Stripe became overnight because of how easy their SDKs were - people were happy to give them a higher % of each transaction (all the Stripe competitors at the time were cheaper -- this isn't true anymore, but was at the time). It always blew my mind that people were so willing to give up a % of each transaction to save an extra couple of days of development.

Developers in general are notoriously susceptible to marketing trends, and if you're building a dev tool that you want to gain traction, you absolutely have to play that game.


Yep. Vercel has raised $300m.


I am not saying HTMX is terrible or anything. What you've built is cool, and I've built something with it. The point I was making wasn't made very well. What I meant was that good options are often left out of the conversation because of React, Vue and, for a while there, Svelte. There are a lot of great libraries and frameworks that nobody talks about, HTMX included. I just feel like HTMX isn't being hyped because it's good, but because of the memes/marketing aspect. I think it's a disservice to your work, which deserves to be assessed on its merits. It's a sad indictment of front-end that building something good is no longer enough to get recognition.


yeah, it is a little unfair

i've tried to produce a lot of technical content, arguing for htmx on its merits:

https://htmx.org/essays

https://hypermedia.systems

but the reality is that marketing is what gets people to that content. I tried for years to convince people on pure technical merit alone, and only made halting progress.

i also got very lucky that a few things all came together at once:

* the primeagen & fireship_dev both covered htmx

* we released our book

* the twitter algorithm changed to boost funny stuff/memes

