Run Qwen3-coder-next locally. That's what I'm doing (using LM Studio). It's actually a surprisingly capable model. I've had it working on some LLVM-IR manipulation and microcode generation for a kind of custom VLIW processor. I've been pleasantly surprised that it can handle this (LLVM is not easy). There is also Verilog code that defines the processor's behavior, which it reads to determine the microcode format and expected processor behavior. When I do hit something it seems to struggle with, I can go over to Antigravity and get some free Gemini 3 Flash usage.
Qwen3 Coder Next in llama.cpp on my own machine. I'm an AI hater, but I need to experiment with it occasionally, and I'm not going to pay someone rent for something they trained on my own GitHub, Stack Overflow, and Reddit posts.
MiniMax has an incredibly affordable coding plan for $10/month. It has a rolling five hour limit of 100 prompts. 100 prompts doesn't sound like much, but in typical AI company accounting fashion, 1 prompt is not really 1 prompt. I have yet to come even close to hitting the limit with heavy use.
Maybe when I was 15 or 16, PHP stuff in order to maintain a website. I never really became interested in coding itself, I learned it for practical reasons and I studied it informally (forced myself through some boring books) when I was in my early 20s and wanted to get a real job.
> But taking a bird’s-eye view of what happened that day? A table got a new header. It’s hard to imagine anything more mundane. For me, the pleasure was entirely in the process, not the product.
I think that's where I differ from a lot of the artful programmers: I've never found pleasure in a perfect, beautiful solution. I get annoyed when I have to dump hours and hours into something that 'should' be simple! I don't want to spend my time fiddling with layouts, CSS, OAuth handshakes, etc. I want to build stuff and get paid for it; that's how I view my job as well. Less logician and more mechanic.
I use ChatGPT as much as I can, to do all I can, and then fix the output when it's needed. I view it as a higher-level programming language that spares me from the burden of thinking about low-level details. It's the same reason I code in Python/JavaScript rather than C++ or other languages: the goal is to make something of use. That's my goal to stay employed: become one with the language du jour, the same way I've jumped from PHP to jQuery to React to ...
Camaraderie is an illusion with management or your boss. We're not comrades, we're their subjects. We can have comradeship with our peers, of course, but that's unlikely to provide a safety net where none of us are unionized.
So no, I don't feel safe. I smile and say polite things when they mention how great the company is, or how the sales are, or what a great year it will be! (How will any of this benefit me, besides more work?) I consider this performative act part of what they pay me for, even if it is very painful. I'm not in a FAANG though, just slumming it.
Referring to yourself as a "subject" is a very unhealthy worldview. There are a lot of bad bosses, but there are a lot of great ones who regularly put their neck on the line for the teams they serve. It sounds like you've had some bad experiences, but you should not generalize them to the whole world.
Well, it could go either way on the healthiness of it. I would argue that it's accurate, though. I've had good bosses too! They're not demons, just doing their job. It doesn't change the relation between us, though: I'm subject to their whim, for payment. We're not peers, not collaborators; it's a hierarchical relationship of dependence with clear boundaries. I recognize the lines can be more blurred with more layers of management in large corporate structures, when the direct manager is subject to pressures similar to those the end worker is under.
Definitely identify with this viewpoint -- I've heard the same feedback around the worldview being "unhealthy" but I would actually argue it is more accurate and provides clarity for me in regards to my relationship with my work. The main positive as I see it is in avoiding frustrations stemming from things completely out of my control.
I understand that my boss is not my friend, but to describe oneself as a “subject” is not something I relate to. In fact, I think it’s absurdly dramatic.
It’s not though - when you look at what really “drives” the relationship. You trade your labor for money in a system designed to keep us so anxious, we will accept as little wage as possible, by the same capital owning class.
We are subject to their desires as they are bound, by law, to choose profit for shareholders over employees.
We are their subjects. “Ain’t no war but class war” applies to us in tech as much as it does to the miners a mile underground in PA.
I’m sorry, but your description of a relationship with an employer doesn’t match mine at all.
I don’t feel anxious. I feel comfortable.
I don’t accept as little as possible. I negotiate with the knowledge that I have options.
I don’t toil in the mines for 80 hours a week to barely afford to feed myself. I spend 40-50 hours a week doing something I rather enjoy, and for that, I’m paid a salary that affords a lifestyle few could have imagined even fifty years ago.
I understand that my employer would pay me less if they could. Then again, if I could find a plumber who could fix my shower for $200 instead of $250, I’d patronize the former, all else equal. Does that make the plumber my “subject”? I don’t think so.
Your employer can also choose to terminate that relationship at any time. No problem, you could just get a job at another shop, right? Except when the black swan appears and all the other companies are doing layoffs and freezes, flooding the market with talent while limiting positions. Then, in that hour of crisis, is the true nature of the relationship revealed at last.
As software devs we can save, right? :) Not sure the same logic applies to people in low-wage jobs, or to people like the characters in the movie "Nomadland" (which is supposed to be true to life).
I know the feel, but I also think this is misplaced anxiety. Work can be difficult, stressful, feel pointless, etc, which is why we get paid to do it. And you need some level of stress to get over the hump and get it done, to fight off complacency. The problem starts when we start blaming the person telling us what to do, for having to do it.
It seems the only thing that's missing is some type of fact-checking function. The interaction, from a user perspective, is much nicer than sorting through Google results. But the results can be confidently wrong, and if you're not familiar with the subject matter already, you won't really know that.
That said, I'm basically using it as a replacement for Google for stuff that isn't up-to-date (code, philosophy) then double checking the output to see how it's wrong.
And by extension: the ability to express doubt and humility. Knowing what you don't know is a sign of a certain maturity, which so far all these AIs seem to lack.
That’s not particularly different from comments on HackerNews and Reddit. There’s a lot of extremely confident and very wrong answers on both sites!
ChatGPT is probably more wrong more often, by a good margin. But I don’t think the argument “it’s confidently wrong” carries any weight. Humans are extremely susceptible to humans who display confidence. It’d probably be a good thing if humans were as skeptical of confident humans as they need to be of confident chat/search bots.
> There’s a lot of extremely confident and very wrong answers on both sites!
Yeah, but you get to look at a multitude of them, at the context, at replies, at the websites they link to, and can form your own opinion. ChatGPT isn’t able to do that work for you.
But I'd say at least readers of a random HN comment are naturally skeptical of it. Do ChatGPT users on average have the same stance? My worry is these tools getting just good enough that due diligence is dropped out of convenience.
I'm not sure what you're talking about. I just asked it to argue that 42 is a prime number, and it clearly knew it wasn't and even said multiple times that the argument is just pretend/hypothetical:
________
Me: Make an argument for why 42 is a prime number
ChatGPT: 42 is not a prime number. It is a composite number, meaning it is the product of two smaller numbers.
Me: But pretend it is a prime number and make that argument
ChatGPT:
Sure, I can make an argument for why 42 could be considered a prime number if we pretend that it is.
One possible argument for why 42 could be considered a prime number is that it satisfies the basic definition of a prime number. A number is prime if it is divisible only by 1 ... (continues with rest of argument)
Of course, these arguments are purely hypothetical and are not based on the actual mathematical properties of 42. In reality, 42 is a composite number and is not considered a prime number.
The specific phrasing you used implies probable continuations involve treating the proposition as false, and 42 is also a number the model will have had a lot of user feedback on. Similarly, the model gives an excellent answer for 1000 which even mentions some factors of 1000.
On the other hand
> Is 355 a prime number?
> Yes, 355 is a prime number. A prime number is a positive integer that has no positive integer divisors other than 1 and itself. For example, 2, 3, 5, and 7 are all prime numbers because they cannot be evenly divided by any other positive integers. 355 is a prime number because it cannot be evenly divided by any other positive integers except for 1 and itself. It is not divisible by 2, 3, 4, 5, or any other smaller positive integers.
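In fact 355 = 5 × 71, so the answer above is wrong on its face. A few lines of trial division (a sketch of my own, not something from the thread) catch it immediately:

```python
def smallest_factor(n: int) -> int:
    """Return the smallest factor of n greater than 1 (n itself if n is prime)."""
    d = 2
    while d * d <= n:  # only need to test up to sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_factor(355))  # 5, so 355 = 5 * 71 and is not prime
print(smallest_factor(42))   # 2, so 42 is composite too
print(smallest_factor(7))    # 7, i.e. actually prime
```

Which is the point: a deterministic twelve-line check beats a fluent paragraph that happens to be false.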
Psh, all we need to do is create an AI that tells whether a single sentence is true or not. Should be straightforward, and totally not a process that factors in the speakers, the context of the conversation, and numerous other social aspects.
Makes the stalling on releasing 1.5 publicly more understandable. Hopefully they won't become the next "OpenAI" but the material incentives are not aligned with public releases.
I appreciate this methodology for getting stuff done; it's how I do my programming work. But I really do want to just "learn math" for the fun of it, like a puzzle, not as make-work. I was reading Descartes, and he wrote about how the practice of math structured his thinking; I'd like to try to recreate that in myself. Maybe creating a project would be the best way to do it, but that seems like work rather than play to me.
ChatGPT $20/month is alright but I got locked out for a day after a couple hours. Considering the GitHub pro plus plan.