kshacker's comments | Hacker News

I was reading, and number 2 (Srouji) is 61 years old. While that is not too old, it does explain why he may not be the choice for next CEO (besides any other considerations). You want someone to helm the ship for a decade.

I am a novice programmer -- I have programmed for 35+ years now, but I build and lose the skills moving from coder to manager to sales -- multiple times over. Fresh IC again since last week :) I started coding with Fortran, RPG and COBOL, and I have also coded Java and Scala. I know modern architecture but haven't done enough grunt work to make it work or to debug (and fix) a complex problem. Needless to say, sometimes my eyes glaze over the code.

I write some code for my personal enjoyment, and I gave it to Claude 6-8 months back for improvement. It gave me a massive change log that felt quite risky, so I abandoned it.

I tried this again with Gemini last week. I was more prepared and asked it to improve the code class by class, and for whatever reason I got better answers -- changed code, with explanations -- and when I asked it to split the refactor into smaller steps, it did so. It was a joy working on this over the Thanksgiving holidays. It could break the changes into small pieces, talk through them as I built on concepts I had learned previously, take my feedback and prioritization, and also give me a nuanced explanation of the business objectives I was trying to achieve.

This is not to downplay Claude; that is just the sequence of events as they happened. So while it may or may not work well for experienced programmers, it is such a helpful tool for people who know the domain or the concepts (or both) but struggle with details, since the tool can iron out a lot of those details for you.

My goal now is to have another project for the winter holidays and then think through 4-6 hour AI-assisted refactors over the weekends. Do note that this is a project of personal interest, so I am not spending weekends working for the big man.


> I was more prepared and asked it to improve class by class, and for whatever reasons I got better answers

There is a learning curve with all of the LLM tools. It's basically required for everyone to go through the trough of disillusionment when you realize that the vibecoding magic isn't quite real in the way the influencers talk about it.

You still have to be involved in the process, steer it in the right direction, and review the output. Rejecting a lot of output and re-prompting is normal. From reading comments, I think it's common for new users to expect perfection and reject the tools when they aren't vibecoding the app for them autonomously. To be fair, that's what the hype influencers promised, but it's not real.

If you use it as an extension of yourself that can type and search faster, while also acknowledging that mistakes are common and you need to be on top of it, there is some interesting value for some tasks.


For me the learning curve was learning to choose what is worth asking Claude. After 3 months on it, I can reap the benefit: Claude produces the code I want, correctly, about 80% of the time. I usually ask it to create new functions from scratch (it truly shines at understanding the context of these functions by reusing other parts of the code I wrote), refactor code, or create little tools (for example, a chart viewer).

It really depends on what you're building. As an experiment, I started having Claude Code build a real-time strategy game a bit over a week ago, and it's done an amazing job, with me writing no code whatsoever. It's an area with lots of tutorials for code structure etc., and I'm guessing that helps. And so while I've had to read the code and tell it to refactor things, it has managed to do a good job of it with just relatively high-level prodding, and produced a well-architected engine with trait-based agents for the NPCs and a lot of well-functioning game mechanics. It started as an experiment, but now I'm seriously toying with building an actual (but small) game with it just to see how far it can get.

In other areas, it is as you say and you need to be on top of it constantly.

You're absolutely right re: the learning curve, and you're much more likely to hit an area where you need to be on top of it than one it can do autonomously, at least without a lot of scaffolding in the form of sub-agents, rules to follow, agent loops with reviews, etc., which takes a lot of time to build up and often includes a lot of things specific to what you want to achieve. Figuring out how much of that effort is worth it for a given project also takes time.


I suspect the meta-architecture can also be done autonomously, though no one has got there yet; figuring out the right fractal dimension for sub-agents and the right prompt context can itself be thought of as a learning problem.

I appreciate this narrative; it's relatable to how I have experienced, and watched others around me experience, the last few years. It's as if we're all kinda-sorta following a similar "Dunning–Kruger effect" curve at the same time. It feels similar to growing up mucking around with a PPP connection and Netscape in some regards. I'll stretch it: "multimodal", meet your distant analog "hypermedia".

My problem with Gemini is how token hungry it is. It does a good job but it ends up being more expensive than any other model because it's so yappy. It sits there and argues with itself and outputs the whole movie.

Breaking down requirements, functionality and changes into smaller chunks is going to give you better results with most of the tools. If it can complete smaller tasks within the context window, the quality will likely hold up. My go-to has been to develop task documents with multiple pieces of functionality and sub-tasks. Build one piece of functionality at a time. Commit, clear context, and start the next piece. If something goes off the rails, back up to the commit, fix and rebase future changes, or abandon and branch.

That's if I want quality. If I just want to prototype and don't care, I'll let it go, see what I like and don't like, and start over as detailed above.


Interesting. In my experience, Claude is somehow much better at stuff involving frontend design compared to other models (GPT is pretty bad). Gemini is also good, but often the "thinking" mode just adds stuff to my code that I did not ask it to add, or modifies stuff to make it "better". It likes to one-up the objective a lot, which is not great when you just want it to do precisely what you asked and nothing else.

I have never considered trying to apply Claude/Gemini/etc. to Fortran or COBOL. That would be interesting.

I was just giving my history :) but yes, I am sure this could actually get us out of the COBOL lock-in that requires 70-year-old programmers to continue working.

The last article I could find on this is from 2020 though: https://www.cnbc.com/2020/04/06/new-jersey-seeks-cobol-progr...


Or you could just learn COBOL. Using an LLM with a language you don't know is pretty risky. How do you spot the subtle but fatal mistakes they make?

You can actually use Claude Code (and presumably the other tools) on non-code projects, too. If you launch Claude Code in a directory of files you want to work on, like CSVs or other data, you can ask it to do planning and analysis tasks, editing, and other things. It's fun to experiment with, though for obvious reasons I prefer to operate on a copy of the data I'm using rather than let Claude Code go wild.

I use Claude Code for "everything", and just commit most things into git as a fallback.

It's great to then just have it write scripts, and then write skills to use those scripts.

A lot of my report writing etc. now involves setting up a git repo and using Claude to do things like process the call transcripts from discovery calls and turn them into initial outlines, questions that need follow-up, and task lists, and to write scripts to do the necessary analysis, so I can focus on the higher-level stuff.


Side note from someone who just used Claude Code today for the first time: Claude Code is a TUI, so you can run it in any folder / with any IDE and it plays along nicely. I thought it was just another VS Code clone, so I was pleasantly surprised that it didn't try to take over my entire workflow.

It's even better: it's a TUI if you launch it without options, but you can embed it in scripts too. The "-p" option takes a prompt, in which case it returns the answer; you can also provide a conversation ID to continue a conversation, and give it options to return the response as JSON or stream it.
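For illustration, here is a minimal sketch of driving it from a Python script. The flag names ("-p", "--output-format") and the shape of the JSON are from my own setup and may differ by version, so treat them as assumptions and check `claude --help` first:

    # Rough sketch: run Claude Code non-interactively and parse its JSON reply.
    # Assumes the `claude` CLI is on PATH; flags and output fields may vary by version.
    import json
    import subprocess

    result = subprocess.run(
        ["claude", "-p", "Summarize the TODO comments in this repo",
         "--output-format", "json"],
        capture_output=True, text=True, check=True,
    )
    reply = json.loads(result.stdout)
    print(reply)  # typically the answer text plus metadata such as a conversation/session ID

The conversation ID that comes back is what you feed into the next invocation to keep the same session going.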

Many of the command line agent tools support similar options.


They also have a VS Code extension that is comparable to GitHub Copilot now, just so you know.

How do the sequels affect this? I have seen this mentioned once more in the same discussion, so I am curious.

Let's assume the 1st book enters the public domain. I should be able to use those characters and their known relationship in any which way, no? What's wrong with that, copyright-wise?


> voice simply isn't a great way to interact with computers in the general case

You know, I have talked to ChatGPT for maybe 100 hours over the past 6 months. It gets my accent, it switches languages, it humors me. It understands what I am saying even if it hallucinates once in a while.

If you can have ChatGPT-level comprehension, you can do a lot with computers. Maybe not vim-level editing, but every single function in a car you're driving should be controllable by voice, and so could a lot of phone and computer functions.


I think the utility of voice commands in a car is marginal at best. In isolation, voice commands don't make sense if you have passengers: you basically have to tell everyone to shut up to ensure the car understands your commands over any ongoing conversation. And compared with old-fashioned knobs and buttons, voice is seriously a lot of complex engineering to solve problems that have long been non-issues.

Not to mention the likely need for continuous internet connectivity and service upkeep. Car companies aren't exactly known for good software governance.


Modern cars have several microphones or directional microphones and can isolate a speaker.

I think well-done voice commands are a great addition to a car, especially for rentals. When figuring out how to do something in a new car, I have to choose between safety, interruption (stopping briefly), or not getting the function change I want.

Most basic functions can be voice-controlled without Internet connectivity. You should only need that for conversational topics, not for controlling car functions.


> Not to mention the likely need for continuous internet connectivity and service upkeep. Car companies aren't exactly known for good software governance.

I don't own a car, but I rent one occasionally on vacation. In every one I've rented that I can remember since they started having the big touch screens that connect with your phone, the voice button on the steering wheel would just launch Siri (on CarPlay), which seems optimal: just have the phone software deal with it, because the car companies are bad at software.

It seems to work fine for changing music when there's no passenger to do that, subject only to the usual limitations of Siri sucking -- but I don't expect a car company to do better, and honestly the worst case I can remember with music is that it played the title track of an album rather than the album, which is admittedly ambiguous. Now I just say explicitly "play the album 'foo' by 'bar' on Spotify" and it works. It's definitely a lot safer than fumbling around with the touchscreen (and Spotify's CarPlay app is very limited for browsing anyway, for safety I assume, but then my partner can't browse music either, which would otherwise be fine) or trying to juggle CDs back in the day.


How I miss old-fashioned knobs and buttons. The utility of voice commands goes up when all your HVAC controls and heated car elements are only accessible on a touchscreen that you can't use with the mitts you need to wear when it's cold.

Again, I disagree. I almost entirely use Siri to get directions to places using Google Maps with my voice when I'm on CarPlay. I also use Siri to respond to texts in my car, not as frequently but often enough.

Why on earth would you want to accelerate, brake, and steer by voice?

I'm assuming they meant things like "change temperature" or "seat massage" or "play Despacito" - things you might need to look for in a rental car

Or more helpfully “find me a fuel station near my destination”

Step on it Jeeves, and go through the park.

You'd better hope Jeeves doesn't use MapQuest or you might have to drive through a river before you get to the park!

"Car, turn right. Shit, no left, left!" slams into wall

Deferring sales is one data point. But setting records is another data point, and it runs counter to deferring, IMO.

Edit: there is talk below of inflation-adjusted numbers being high, but meh.


So I go to Costco and buy wine, and next week I get a letter from the DMV reminding me of the blood alcohol requirements, people visit my home selling wine glasses, when I go to work someone hands me a best-of-wines coffee table book, and so on.

OK, but you say it is not this overt online. Well, if we live a digital life, a lot of things are not overt, but we know we need to clear cookies, some of us create containers, and some of us use Tor. So the sensibilities in the digital world are different from real life, and I am showing roughly equivalent examples / metaphors of what it would look like if the same level of intrusiveness existed in real life.


Don't forget that time Target accidentally informed a teenage girl's parents she was pregnant by sending her coupons despite her not buying anything explicitly maternity-related. And how Target's solution was "oh, we'll just add ads for other stuff too so it feels less creepy" -____-

https://archive.is/kK1V8


I am also not advocating anything, but ... wasn't the famous Spock line that "the needs of the many outweigh the needs of the one"? The question is one of empirically proving it, and that's the challenge. The jury may not be co-opted, but the judiciary is. I wonder how we go about proving this.


Utilitarianism is a dangerous mistress when it comes to justifying moral and ethical transgressions. Sounds great until TPTB decide that the half dozen lives that can be saved with your organs matter more than your one life.


If we followed the rules strictly, with no different rules for the rich, why would that be a problem?


You don't see a problem with involuntary organ donations from living people?


AI spell check, or rather sentence improvement, is awesome.

But by AI, people mean an LLM plus context. "Remember what I told you -- yesterday I was booking a flight, can we check the prices again?" "What happened to that hotel booking?" Dozens of other use cases. A private AI with awesome memory and zero hallucinations will be ... awesome.


No, I think the other side has a point. If I were doing 10 services on my car, I would have muscle memory of a lot of things. If I am doing only brakes, and maybe another thing, I do not have that muscle memory. While the work may not be harder, the familiarity is gone for a lot of people.

BTW, just before Covid, or during Covid, I took a car mechanic course from the local De Anza College - no hands-on work, which is why I think it was during Covid. But after 5 years and no experience, I have forgotten everything except the abstract concepts. Then imagine people who never had to look under the hood -- ever.


I took several car mechanic classes from De Anza College. Great instructor, and I did do some hands-on stuff.

But my primary takeaway was that this is hard and dirty work, and there are numerous ways you can make mistakes that ruin the car and/or endanger your safety, so paying a professional to do it is generally the more sensible route.

Of course, if you enjoy doing this, or have a very old car, or more time than money, the trade-offs are different.


Which is true for a lot of things around the house. (Although I did get the whole house painted a while ago because insurance was paying for it after a fire.) There are some things I have experience with because I've done them a bunch of times -- and often do them myself -- and there are others that I've never done, may not have the right tools for, and, YouTube notwithstanding, would probably take me a long time to do a very imperfect job on.


> the capital and running cost of a purpose-built datacenter is far cheaper per rack than putting machines in existing office-class buildings, as long as it's a reasonable size - ours is ~1000 racks, but you might get decent scale at a quarter of that.

Just want to confirm what I am reading: you are talking about ~1000 racks as the facility size, not what a typical university requires?

