
I recently exclaimed that “vibe coding is BS” to one of my coworkers before explaining that I’ve actually been using GPT, Claude, llama (for airplanes), Cline, Cursor, Windsurf, and more for coding for as long as they’ve been available (more recently playing with Gemini). Cline + Sonnet 3.7 has been giving me great results on smaller projects with popular languages, and I feel truly fortunate to have AWS Bedrock on tap to drive this stuff (no effective throttling/availability limits for an individual dev). Even llama + Continue has proven workable (though it will absolutely hallucinate language features and APIs).

That said, 100% pure vibe coding is, as far as I can tell, still very much BS. The subtle ugliness that can come out of purely prompt-coded projects is truly a rat hole of hate, and results can get truly explosive when context windows saturate. Thoughtful, well-crafted architectural boundaries and protocols call for forethought and presence of mind that isn’t yet emerging from generative systems. So spend your time on that stuff and let the robots fill in the boilerplate. The edges of capability are going to keep moving/growing, but it’s already a force multiplier if you can figure out ways to operate.

For reference, I’ve used various degrees of assistance for color transforms, computer vision, CNN network training for novel data, and several hundred smaller problems. Even if I know how to solve a problem, I generally run it through 2-3 models to see how they’ll perform. Sometimes they teach me something. Sometimes they violently implode, which teaches me something else.


> That said, 100% pure vibe coding is, as far as I can tell, still very much BS.

I don't really agree. There's certainly a showboating factor, not to mention there is currently a gold rush to capitalize on this movement. However, I personally managed to create a fully functioning web app from scratch with Copilot + VS Code, using a mix of GPT-4 and o1-mini. I'm talking about both backend and frontend, with basic auth in place. I am by no means an expert, but I did it in an afternoon. Call it BS, but the truth of the matter is that the app exists.


People were making front- and backend web apps in half a day using Ruby on Rails way before LLMs were ever a thing, and their code quality was still much better than yours!

So vibe coding, sure you can create some shitty thing which WORKS, but once it becomes bigger than a small shitty thing, it becomes harder and harder to work with because the code is so terrible when you're pure vibe coding.


> People were making front- and backend web apps in half a day using Ruby on Rails way before LLMs were ever a thing, and their code quality was still much better than yours!

A few people were doing that.

With LLMs, anyone can do that. And more.

It's important to frame the scenario correctly. I repeat: I created everything in an afternoon just for giggles, and I challenged myself to write zero lines of code.

> So vibe coding, sure you can create some shitty thing which WORKS (...)

You're somehow blindly labelling a hypothetical output as "shitty", which only serves to show your bias. In the meantime, anyone who is able to churn out a half-functioning MVP in an afternoon is praised as a 10x developer. There's a contrast in there, where the same output is described as shitty or outstanding depending on who does it.


There's an ambiguity here between palindromic words and palindromic sentences (where punctuation and spacing are non-breaking):

"Go hang a salami, I'm a lasagna hog"

or

"A man, a plan, a canal, Panama"

though it seems that all of the answers (in the one tutorial and one round that I played) are single words, often slid in as character insertions. Just go light on the spacebar, apparently.
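The sentence-level convention (strip punctuation and spacing, then compare) is easy to make precise; here's a small illustrative Python sketch with hypothetical function names:

```python
import re

def is_sentence_palindrome(text: str) -> bool:
    """Palindrome at the sentence level: punctuation and spacing are
    non-breaking, so drop everything but letters/digits, lowercase,
    and compare against the reversal."""
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower())
    return cleaned == cleaned[::-1]

def is_word_palindrome(word: str) -> bool:
    """Palindrome at the word level: spacing and punctuation count,
    so multi-word palindromic sentences generally fail this check."""
    lowered = word.lower()
    return lowered == lowered[::-1]
```

Both classic examples above pass the sentence-level check but fail the word-level one.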


You wouldn't happen to be a fan of Jon Agee?


It’s actually just jargon-ified. When pressing a piece of sheet metal into a die (concave form) with a punch (convex form) to stretch and compress the metal into a deep shape (e.g., a cup, can, etc.), using ultrasound to induce some wiggles can reduce the chance of tearing, likely allowing for less material use and greater yield rates for a machining process.

Basically, if you’re forming metal with high force, give it some high speed micro-wiggles and things will be better.

This should feel pretty natural for folks who have tried to squeeze into skinny jeans after Thanksgiving.


I actually understood the jargon fine. It just felt like a puff piece.


I think that's fair, though "vibration is kind of like lubrication" is a solid move.


There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.

It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.

I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
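For flavor, the core primitive of such a library is schoolbook arithmetic on little-endian "limb" arrays. A minimal Python sketch (illustrative only, not the original C++ code; the function name and base are my own choices):

```python
def big_add(a: list[int], b: list[int], base: int = 10**9) -> list[int]:
    """Schoolbook addition on little-endian digit ("limb") lists,
    the kind of primitive an arbitrary-precision library builds on.
    Each element is a digit in the given base; carries propagate up."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % base)
        carry = s // base
    if carry:
        out.append(carry)
    return out
```

Multiplication, division, and modular reduction follow the same pattern, and modular arithmetic on top of these is what elliptic-curve key generation ultimately reduces to.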


That’s really the core problem here. As is often the case with rent control models, the natural outcome of an incentive structure came to pass in spite of the best intentions of legislators/regulators.


Yep. Government-imposed price controls and caps should actually be illegal. Like actually unconstitutional, with the requisite constitutional amendments in California (and the other States) and the US Constitution being passed.

No matter what anybody’s intentions are, price controls always end up creating market distortions that eventually lead to more societal issues than the ones they’re supposed to alleviate. It’s just bad economics, and it shouldn’t even be a tool in politicians’ (or voters’; ballot-box lawmaking should be illegal too) toolboxes, so they’re forced to address the upstream concerns that cause prices on land, housing, rents, services, and goods to climb to the point of being unaffordable to most of the population.


You may be thinking of the US Forest Service’s suspension of prescribed burns.

https://cepr.net/publications/us-forest-service-decision-to-...

However, state and local agencies undertake prescribed burns regularly. To be fair, clarity (read: honesty) from political figures on this topic (and others) is at an all time low.


I once had my boss at a new job sit me down, a couple of weeks into the job, and tell me "Matt, we have a pace that we work at, and we need you to work at that pace... If leadership gets the idea that we can deliver at a much faster pace, they'll expect it from us all the time." By the time I quit that job, I was able to finish my whole week's work before lunch on Monday. Then I'd just spend the rest of the time working on exploratory projects to stay sharp. So I've seen the utterly pathological version of Parkinson's law. However...

Shipping the wrong thing fast is just shipping the wrong thing, and it's a natural impact of trying to squeeze blood from a stone.

I often give something along the lines of this lecture:

You're trying to solve a $5M problem for $1M, so you're going to push people to make poor choices that don't actually address your problem, and it's going to end up costing $10M after they back up the train and re-lay the tracks. In many cases, the "shortcuts" that teams jump to end up being "longcuts" that drag the program out due to being insufficient to address all of the feature requirements, unplanned due to rushing, etc.


Yep.

A corollary of this is the following paper

https://web.mit.edu/nelsonr/www/Repenning=Sterman_CMR_su01_....

Basically, it states that people fail to see the value in reinvesting time and resources in improvement. Being idle is not a failure but a way to think and be ready if a period of higher intensity comes. And it is healthy to sometimes have more time for a menial task.

People get so crazy about the idea of optimization, but fail to account for the severe issues that arise when time is always occupied by something, which seems to happen more and more these days...


Every system can be placed on a spectrum. One end of this spectrum is perfect efficiency. The other end is perfect resilience.


A system without slack is brittle.


> Matt, we have a pace that we work at, and we need you to work at that pace... If leadership gets the idea that we can deliver at a much faster pace, they'll expect it from us all the time.

Work/life balance is a part of the compensation package. If you suddenly start expecting your employees to work harder, the most talented ones will simply jump ship, because from their perspective, if they're already forced to spend long hours on complex projects, why not at least do it for lots of money. So the only people who stay will be hard-working ones, but less skilled. Meanwhile, in a healthy company, you need a mix of people willing to do demanding, yet simple work, and those who rescue a dumpster fire of a project and then take five coffee breaks.


On the other hand, if you expect your employees to produce every week only what they can do in an afternoon and no more, many of the most talented ones will also jump ship, because they are motivated by accomplishing high-quality work.

It may have less to do with talent than motivation. Motivations people can have at work (and many can and do have a combination in different proportions) can include: making as much money as possible; doing as little work as possible; producing as high-quality work (in their own opinion) as possible; recognition from others for contribution; social relationships and/or being liked; learning new things and developing new skills; being challenged; not being challenged; being part of a collaborative team working well together or being left alone to work independently; etc.

But I think few "most talented" people would last very long at workplace GP described, where doing more than a half-day's worth of work in a week was restrained.


Intrinsic motivation can be better than extrinsic motivation. Google won a lot of fans and dedicated employees with its original version of 20% time, which encouraged people to find stuff they were highly motivated to work on and could do so relatively safely in the company's sandbox. It is possible this sort of "everything the company needs you to do can be done in one afternoon each week" could be an incredible "80% time" company. There are lots of people who would find "80% of my time at the company is my own" to be quite motivating.

Of course, certainly not in the example scenario above where one layer of management is effectively lying to another layer. That's not a healthy "80% time" when it involves duplicity and covert play acting.

But if you were feeling crazy enough to try to build an overt "80% time" company you likely wouldn't have a hard time finding some of the "most talented" people.


wasn't Xerox PARC effectively like that?


That's certainly how Xerox PARC is described in some of the tales of their classic discoveries/inventions/Demos.

Certain decades of Bell Labs, too, have that glimmer of a bunch of the smartest people allowed to explore stuff they cared about with only the oversight from fellow technical people.

I've even heard Microsoft Research can still sometimes feel a bit like that, though with all the pressures of academia like grant finding and patent applications and other such hustles.


This. Motivation is the key to getting employees to produce over a period of time. Give the good ones the maximum amount of ownership they can handle and they will set an example for everyone else. Grow the talent over time.

Fire the whiny shit-talkers as soon as possible. They drag everyone down.


> if you expect your employees to produce every week only what they can do in an afternoon and no more

... then they will have time to fix problems that arrive randomly, rework the parts of their work they got wrong on the first try, and study and train to stay sharp to do the next piece of work. Ironically, all of those will make them produce their next piece of work faster, making this one "problem" worse and worse.

But I left the real benefit for the end: those people will have enough time to think about how to improve something in their workplace. You know, the people who actually do the work thinking about how to apply it, because they do it, instead of people who know nothing about it being specialized in thinking about how to apply other people's work.

The GP saw a good manager trying to keep his people safe from a dysfunctional organization. The dysfunctional organization is a red flag, for sure, but it doesn't automatically mean the environment is a bad one.


Good software shops leave a good amount of "slack" time for their engineers. It's how you handle shit hitting the fan without burnout.

I currently work on a team that has been basically solving multiple crises per year for years now. In the blips where we've returned to normalcy, we didn't push hard. Because we knew that was time to recover because another crisis was definitely coming.

The crises were external / due to leadership being aggressive without our input... so at that point, all you can do is execute. And part of executing is resting.


Geordi La Forge: Yeah, well, I told the Captain I'd have this analysis done in an hour.

Scotty: How long will it really take?

Geordi La Forge: An hour!

Scotty: Oh, you didn't tell him how long it would 'really' take, did ya?

Geordi La Forge: Well, of course I did.

Scotty: Oh, laddie. You've got a lot to learn if you want people to think of you as a miracle worker.


Sandbagging is a superpower within reason under many circumstances. Most people would much prefer to get a quality promised deliverable on time or a bit early even if it's a bit longer than they would have preferred in an ideal world. (And, if they really do want to pull something in, you sigh and say you'll do your best but no promises.) If something really has a hard aggressive deadline, OK maybe.

But usually it's a case of it will probably take me this long to do a task. But there are some unknowables and someone I might have to lean on could have a sick kid for a couple days. Generally, everyone is happier if you underpromise and overdeliver including you.

An engineering manager I used to work with would drive me crazy because he had this idea of 90% schedules he got from somewhere. Which basically meant there was a 90% chance of meeting the schedule if nothing went wrong. (Which naturally mostly never happened.)


Sandbagging can also lead to nothing ever being worth doing.

Of course, it is shocking how many projects are started without anyone knowing anything other than the end date.

In my view, if the project is of such marginal value that a precise estimate is needed, don't even start the project - instead, find something more valuable to work on.


Maybe that's true of major projects, especially when there are maybe unknown unknowns. (And maybe not.) But there are a ton of deliverables that should be fairly predictable that come with people who have downstream dependencies on them. (Promotion, launch plans, etc.) When something is late it can cause a lot of scrambling and wasted time/money.

[And just to add, I'm mostly speaking to work I'm creating and delivering--for the most part individually. And to the degree that I'm depending on client reviews, etc. that's in the contract.]


The 90% schedule is not a bad idea, but you need to look at it via metrics - what percentage are you actually meeting that schedule? If you are not doing so 9 out of 10 times, you are at a 20% schedule or a 10% schedule.


+1. The 90% schedule should be 90% assuming the normal amount of things go wrong.
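The metric the parent describes is trivial to track; a small sketch (the function name is my own):

```python
def schedule_hit_rate(estimates: list[float], actuals: list[float]) -> float:
    """Fraction of tasks delivered within their estimate. If a '90%'
    schedule is only met 1 time in 10, it is really a 10% schedule,
    whatever the planning spreadsheet calls it."""
    if not estimates:
        raise ValueError("need at least one completed task")
    hits = sum(1 for est, act in zip(estimates, actuals) if act <= est)
    return hits / len(estimates)
```

Feeding it a backlog of past estimates versus actual completion times tells you what confidence level your "90%" schedules really carry.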


Well, inflating estimates has a short lifespan, and Scotty only got away with it because Kirk was a meathead.

I liked that Kirk was shoot-first-ask-later, unlike Picard, but Picard was always "oh, 1 hour - make it in 30 mins" or "1 hour - make it quick, you have 15 mins"


No, it’s because Kirk respected Scotty and gave him leeway.

Picard is the meathead, if anyone. Actually watching all the episodes in detail and figuring out their characters, you'll realise that Kirk is twice the scholar Picard pretended to be, and then some, especially accounting for the movies and later series. Kelvin timeline doesn't count, of course.

Kirk >>> Picard.


Well, I've only watched the series so far, not the movies.

To me it looked like Kirk knew what he didn't know, so he was a meathead with the ability to defer on things outside his expertise rather than try to control them.

Picard acted like he knew everything and had to maintain that aura, but he operated at a different level of sophistication, which I value much more.

Even though I love Kirk's guns-blazing style.


Oh come on, we all know that lady captain who smoked 3 packs a day was the GOAT.


> You're trying to solve a $5M problem for $1M, so you're going to push people to make poor choices that don't actually address your problem

...you push people to only solve the actual problem.

This is a very strange take to enshrine as advice. If you try to solve a $50 problem for $5k, you're out ~$5k and the next project will cost $7k.


The issue with people setting deadline is that they can rarely tell the difference between a $50 job and a $5k job.

If they were good, they would work on improving the flow, not pulling random numbers from their hat.


I would say that it's practically a worse situation. They aren't just ignorant, but willfully so because of a lack of measurable consequences for most. Even worse when there is market capture.


In a distributed setting where a participant may wish to join the party late and receive a non-forged copy, it’s important. The crypto is there to stand in for an authority.


> In a distributed setting where a participant may wish to join the party late and receive a non-forged copy, it’s important. The crypto is there to stand in for an authority.

Yeh, but that's kinda my point: if your primary use case is not "needs to be distributed" then there's almost never a benefit, because there is always a trusted authority and the benefits of centralisation outweigh (massively, IMO) any benefit you get from a blockchain approach.


100% agreed there. A central authority can just sign stuff. Merkle trees can still be very valuable for integrity and synchronization management, but burning a bunch of energy to bogo-search nonces is silly if the writer (or federated writers) can be cryptographic authorities.
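To make the distinction concrete, here's a minimal Merkle-root sketch in Python (illustrative only; real systems add domain separation between leaf and interior hashes, and a trusted or federated writer would sign the root rather than mine a nonce):

```python
import hashlib

def _h(data: bytes) -> bytes:
    """SHA-256 digest, the hash underlying each tree node."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf payloads into a single root hash. A late
    joiner who trusts a signed copy of the root can verify any leaf
    with only a logarithmic number of sibling hashes - no proof-of-work
    needed when an authority can vouch for the root."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Integrity and cheap synchronization come from the tree itself; the expensive nonce search only buys you leaderless consensus, which a signing authority makes unnecessary.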


This is a deeply problematic way to operate. En masse, it has the right result, but, for the individual that will have their life turned upside down, the negative impact is effectively catastrophic.

This ends up feeling a lot like gambling in a casino. The casino can afford to bet and lose much more than the individual.


I think the full reasoning here is something like

1. It was unclear if a warrant was necessary

2. Any judge would have given a warrant

3. You didn't get a warrant

4. A warrant was actually required.

Thus, it's not clear that any harm was caused because the right wasn't clearly enshrined and had the police known that it was, they likely would have followed the correct process. There was no intention to violate rights, and no advantage gained from even the inadvertent violation of rights. But the process is updated for the future.


Yeah that is about my understanding as well.


I don't care nearly as much about the 4th amendment when the person is guilty. I care a lot when the person is innocent. Searches of innocent people are costly for the innocent person, so we require warrants to ensure such searches are minimized (even though most warrants are approved, the act of getting one forces the police to be careful). If a search were completely costless to the innocent, I wouldn't be against them, but there are many ways a search that finds nothing is costly to the innocent.


If the average person is illegally searched, but turns out to be innocent, what are the chances they bother to take the police to court? It's not like they're going to be jailed or convicted, so many people would prefer to just try to move on with their life rather than spend thousands of dollars litigating a case in the hopes of a payout that could easily be denied if the judge decides the cops were too stupid to understand the law rather than maliciously breaking it.

Because of that, precedent is largely going to be set with guilty parties, but will apply equally to violations of the rights of the innocent.


There is the important question.


I want guilty people to go free if their 4th amendment rights are violated, thats the only way to ensure police are meticulous about protecting peoples rights


It doesn’t seem like it was wrong in this specific case however.


The physics computer lab in Chamberlin Hall at UW in the 90's was a secret treasure trove of idle NeXTstation Turbo machines in an almost always empty room cooled to near refrigeration temperatures. I used to light up at least half of that room to run distributed simulations. There's probably still a 30 year old key to that lab in a junk drawer somewhere.

Eventually I realized that it just made sense to suck it up and get my own hardware, as it was either going to be esoteric "workstation" hardware with a fifth of the horsepower of a Pentium 75 or it was going to be in a room like the UPL jammed with CRT's and the smell of warm Josta.

How do students operate these days? Unless one is interacting with hardware, I'd be very tempted to stay in "fits on a laptop" space or slide to "screw it, cloud instances" scale. Anyone with contact in the last 5 years have a sense of how labs are being used now?


It's been nearly a decade now, but we shared a machine with 128 newish physical cores, a terabyte of RAM, and a lot of fast disk. Anyone with a big job just coordinated with the 1-2 other people who might need it at that level and left 10% of the RAM and disk for everyone else (OS scheduling handled the CPU sharing, though we rarely had real conflicts).

It's firmly in "not a laptop" scale, and for anything that fit it was much faster than all the modern cloud garbage.

The other lab I was in around that time just collected machines indefinitely and allocated subsets of them for a few months at a time (the usual amount of time a heavily optimized program would take to finish in that field) to any Ph.D. with a reasonable project. They all used the same in-house software for job management and whatnot, with nice abstractions (as nice as you can get in C) for distributed half-sparse half-dense half-whatever linear algebra. You again only had to share between a few people, and a few hundred decent machines per person was solidly better than whatever you could do in the cloud for the same grant money.


> Unless one is interacting with hardware, I'd be very tempted to stay in "fits on a laptop" space or slide to "screw it, cloud instances" scale. Anyone with contact in the last 5 years have a sense of how labs are being used now?

In my recent physics experience, this is basically how it was, unless you had to rely on some proprietary software only available on the lab machines, like *shudders* LabVIEW.


In my university you could technically use any computer, but you had to ensure your code would work/compile on the lab PCs, because that's where the TAs would check it. As a result, during labs most people would just use the computers there (too much hassle otherwise).


I went through community college about 6 years ago. And they still had bona fide computer labs with in-person tech support.

Computers were also ubiquitous in places like the coffeehouse, the library, practically every classroom, etc. And, of course, there were ubiquitous WiFi and USB charging ports, so that students with BYOD could get by (although WiFi was often overloaded and contentious.)

Within the main computer lab I was using, there was also a networking hardware lab, with genuine Cisco equipment such as routers and switches. The Cisco certification prep classes would go in there and do experiments on the hardware, so that students could get accustomed to seeing it in action, however outdated it may be.

The lab itself was chock-a-block with both Apples and Windows PCs, as well as scanners and printers available, and even headphones you could borrow from the desk attendant. You'd need to sign in and sign out. There were strict rules about silence and not leaving your station unattended. There was always space for more users and a generally relaxed atmosphere, where people could feel comfortable studying or doing homework.

I believe that there was also an A/V lab where students could get access to cameras and recording equipment, as well as software for that kind of thing.

The library, in addition to allocating lots of space for Windows PCs and Apples, would also loan out Chromebooks to any student, and I believe they had other things for loan, such as WiFi hotspots, for kids who couldn't afford to carry around their own Internet.

There were also Tutoring Centers, such as the Math one, where most of the desks featured a computer where you could log in to your collegiate account, and access your online course materials.

And the Testing Center was essentially a big computer lab, with cameras and in-person proctors monitoring it. It was partnered with Pearson and CompTIA, so I took more than one certification exam in there.

There is a fully-staffed IT Help Center on campus, so during office hours, you could count on a 1:1 in-person interaction to help you get logged in, debug your device's WiFi, or whatever.

Despite having a great computer setup in the comfort of my own home, and plenty of online courses on my schedule, I still appreciated the immersion of collegiate computer labs, and especially the relaxed coffeehouse access, where I could use Apple systems to work on my English homework and essays.

During the COVID-19 pandemic, all this went topsy-turvy, and a lot of these labs closed down, or took extreme health precautions, and of course, a lot more classes went online-only. But I was done with classes by that time.


I can only speak for the UPL, but, yeah, it was a hallmark of labs at the time that one of the benefits you were getting was the equipment. Nowadays, most people just come in with their laptops -- we have a kubernetes cluster for projects, but most of the actual computing equipment is brought in by students when they want to hang

