Hacker News | MattGaiser's comments

Why not ask for something so outrageous in scope that only effective wielding of Claude Code could yield it in an hour or so?

I imagine it is about this:

> But Brazil lacks the human skin, pig skin, and artificial alternatives that are widely available in the US.

This is not an improvement on existing methods (it may turn out to be one, but that is not the motivation) but rather a case of it being all they have to work with.

Tilapia skin is probably better than no skin at all.


> This is not an improvement on existing methods... a case of it being all they have to work with.

But the article says tilapia skin is better in multiple aspects:

> "We got a great surprise when we saw that the amount of collagen proteins, types 1 and 3, which are very important for scarring, exist in large quantities in tilapia skin, even more than in human skin and other skins," Maciel said. "Another factor we discovered is that the amount of tension, of resistance in tilapia skin is much greater than in human skin. Also the amount of moisture."


It says it's different to human skin in multiple aspects.

Do I need more collagen or more moisture in my skin? I would expect evolution made some pretty good choices around default human skin for typical human activities, and if more moisture was obviously good, I would already have it.

Maybe tilapia skin is better for people who spend 24 hours a day swimming in lakes.


> It says it's different to human skin in multiple aspects.

No, it says "even more than in human skin and other skins". Not different.

> Do I need more collagen or more moisture in my skin?

For this context? Yes? Clearly the article answers that already. I even included it in my first reply, but you'll have a third chance to read it:

> ...which are very important for scarring...

And your attempt to move the goal post fails miserably as well. Or do you think humans evolved to perfection by thinking this:

> I would expect evolution made some pretty good choices around default human skin for typical human activities, and if more moisture was obviously good, I would already have it.

I don't think you are debating in good faith. Good luck.


Let Claude have access to everything, have it run a local environment as close to production as possible (Docker with the real dependencies, not some locally installed finicky Postgres), and tell it to thoroughly test all its work and make sure it is set up to do that.

Paste in ticket, get 95% of the way to the end.
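A minimal sketch of that kind of local environment, assuming a Django app backed by Postgres (the service names, image versions, and credentials here are illustrative, not from the original comment):

```yaml
# docker-compose.yml: a local stack mirroring production dependencies,
# so an agent can run migrations and tests against a disposable database.
services:
  db:
    image: postgres:16          # match the production major version
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app    # throwaway credentials, local only
    tmpfs:
      - /var/lib/postgresql/data  # disposable: a deleted DB costs nothing
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
```

Seeding the database from fixtures on startup means that even if the agent drops it, `docker compose down && docker compose up` restores the whole state.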

I will add the caveat that this is only with the code. The overhead of communication has not improved at all beyond pasting code review comments into Claude to get replies.

What I really need is something to listen in at meetings for my name, summarize what was asked, and suggest an answer based on the codebase.


Live transcription tools like MacWhispr that run local transcription can output to a text file. Probably a weekend project to do what you're saying: have a model read the text in realtime and trigger a code search when it hears your name.
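A rough sketch of that weekend project, assuming the transcription tool appends lines to a plain text file (the file path, the `git grep` trigger, and the keyword heuristic are all hypothetical stand-ins; a real version would hand the matched line to an LLM instead):

```python
import re
import subprocess
import time
from pathlib import Path

def find_mentions(lines, name):
    """Return transcript lines that mention `name` (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
    return [line for line in lines if pattern.search(line)]

def search_codebase(term, repo="."):
    """Hypothetical trigger: grep the repo for a term pulled from the meeting."""
    result = subprocess.run(
        ["git", "grep", "-n", "-i", term],
        cwd=repo, capture_output=True, text=True,
    )
    return result.stdout

def watch(transcript_path, name, poll_seconds=2.0):
    """Poll the growing transcript file and react whenever the name appears."""
    seen = 0
    while True:
        lines = Path(transcript_path).read_text().splitlines()
        for hit in find_mentions(lines[seen:], name):
            print("Mentioned:", hit)
            # Naive keyword extraction; the interesting part (summarize the
            # ask, draft an answer from the codebase) is left to a model.
            for word in hit.split():
                if word.isidentifier() and word.lower() != name.lower():
                    print(search_codebase(word))
        seen = len(lines)
        time.sleep(poll_seconds)
```

The word-boundary regex avoids firing on substrings ("Matt" should not trigger on "formatting").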

I imagine the problem is the same with all other free to the user platforms. The user won't pay, so their needs are subordinate to the actual customers.

Indeed, unless it's required by law to surface the information, those who want it will need technical means to enrich the data themselves. Zillow gets paid by real estate professionals; real estate buyers and sellers are the product on its platform.

But the agents are paid by the actual buyers and sellers. It’s just that they’re organized as a group. This is why the monopolistic control of listing services and realtor organizations is a problem.

> Why is the independence of the Federal Reserve sacrosanct?

Then the value of the USD is entirely at the whim of whoever is elected and should be priced accordingly.


Claude has twice now thought that deleting the database is the right thing to do. It didn't matter, as the database was local and created with fixtures in the Docker container (in anticipation of such a scenario), but it was an inappropriate way of handling Django migration issues.

I wish companies had a pay $50 to speak to a human option if need be.

No spamming abuser will pay that, but it should easily cover the cost of an overseas support agent to handle edge cases.


That would cause companies to provide the shittiest possible service in order to monetize their support. And they would still outsource the support for $1/hr.

The problem with rate limiting with money is always that it's also going to be too much for someone else to pay (or too little for others).

I'm so glad ICANN banned silent auctions for the next round of gTLDs.


I think what we really need is an Internet Users' Bill of Rights. The power and information asymmetry is too great to obtain fair treatment for the user.

I wish governments had a "pay $100 million if you don't have a human option for all your customers" option.

So they will hire a "support agent" that costs $30 and leave their service as buggy as bearable so they make those extra $20 more often.

You know they'd put you in touch with someone who has no clue about anything.

The shift is from tarpit to unemployment. A Jira ticket processing dev still has use. Probably not for much longer.

Doubt we'll see that in the short term. Long term, possibly, especially if you add a financial crisis.

Truth is, most larger software development organizations could have downsized significantly even before LLMs and not lost much productivity.

The X formerly known as Twitter did this and has been chugging along on a fraction of its original staff count. It's had some brand problems since its acquisition, but those are more due to Mr Musk's eccentricities and political ventures than the engineering team.

The reason this hasn't happened to any wider degree is quarterly capitalism and institutional inertia. It looks weird to investors if the organization claims to be doing well but is also slashing its employee count by 90%. Even if you bring in a new CEO who has these ideas, the org chart will fight it tooth and nail, as managers would lose reportees and clout.

Consultancies in particular are incredibly inefficient by design since they make more money if they take more time and bring a larger headcount to the task: They don't sell productivity, but man hours. Hence horrors like SAFe.


The thing is, Twitter is stuck. It's not growing, most likely it's shrinking. We also have no idea if it's profitable or not.

Twitter had a lot of engineers on its payroll to look for the next big thing.

If you give that up and keep a skeleton crew, sure, that works.

Most businesses don't want to become husks of their former selves.


At the same time, that quest for the next big thing is performative, it's theater for the investors, and exactly what I'm talking about.

It was objectively bad for Facebook's net profits to pour over $10 billion into the metaverse, what they gambled would be the next big thing, but the perception that they were cooking up something new, even if that was a massive waste of money, was better for their valuation than the sense that they were just resting on their laurels.


> Twitter had a lot of engineers on its payroll to look for the next big thing.

What was the "next big thing" before the acquisition? There seem to be more features added after the acquisition than before it.


There are plenty of devs who do nothing beyond taking a Jira ticket scoped by others, implementing it, and then grabbing the next ticket.

While they may not have been very successful, they did have a place.


You’re right, but I have always preferred people who can do a little more. Nothing against the socially awkward and conflict-avoidant nature of many of these friends, but people who push back and fight to communicate their views and passions often got our team better outcomes than someone who just turns up and does the work they’re asked to do.

As long as it is not the opposite set of skills (talks a lot without the knowledge to back it up, essentially using charisma to convince people to do the wrong thing most of the time), then yes, a little bit of negotiation can save you a whole lot of work in the long run (XY problems being one example).

For sure, I’ve been tricked into hiring those people before too. It’s good that there’s still something hard in running an organization, the whole “what is value?” question feels like it’ll be one of the few things we have to maintain work for humans over the next little while.

Looks very robotic to me; I have never worked at a place where meetings and dealing with other humans weren't part of the job.

I’ve been on plenty of teams where meetings didn’t actually require any meaningful participation from most people.

Meetings without any meaningful participation from most people? I guess too many people in the meetings?

That is likely referring to what has become known as the standup, where developers read off the commit log for the "manager" who hasn't yet figured out how to use a computer.

Or for the product manager too lazy to check Jira.

Never been the case for me, additionally I have always worked in shared desks or offices.

Is this genuinely common? I’ve only ever seen that level of hand holding extended to new grad hires.

It definitely happens at bloated organizations that aren’t really good at software development. I think it is especially more common in organizations where software is a cost center and business rules involve a specialized discipline that software developers wouldn’t typically have expertise in.

I have 13 years of professional experience, and I work in a small company (15 people). Apart from one or two weekly meetings, I mostly just work on stuff independently. I'm the solo developer for a number of projects ranging from embedded microcontrollers to distributed backend systems. There's very little handholding; it's more like requirements come in, and results come out.

I have been part of some social circles before but they were always centered around a common activity like a game, and once that activity went away, so did those connections.

As I started working on side hustles, it occurred to me that not having any kind of social network (not even social media accounts) may have added an additional level of difficulty.

I am still working on the side hustles, though.


> it's more like requirements come in, and results come out.

Wow, someone is very good at setting requirements. I have never seen that in 25 years of dev life.


I've seen it many many times, a few from myself.

It's not so hard if you're an expert in the field or concept they're asking the solution for, especially if you've already implemented it in the past in some way, so you know all the hidden requirements that they aren't even aware of. If you're in a senior position, in a small group, it's very possible you're the only one who can even reason about the solution, beyond some high-level desires. I've worked in several teams with non-technical people/managers, where a good portion of the requirements must be ignored, with the biggest soft-skill requirement being pretending their ideas are reasonable.

It's also true if it's more technical than product based. I work in manufacturing R&D, where a task might be "we need this robot, with this camera, to align to and touch this thing with this other thing within some µm of error."

Software touches every industry of man. Your results may vary.


I've seen that plenty of times. I suspect that you haven't seen it because you live in a place with high cost of living, which induces a high turnover in personnel, or perhaps you've been working in very dynamic markets such as SaaS.

When I was starting my career in Europe as a freelance sysadmin, I worked several times for small companies that were definitely not at the forefront of technology, were specialised in some small niche and pretty small (10-15 engineers), but all their engineers had been there for 10-20 years. They were pretty well paid compared to the rest of the country, and within their niche (in one case microcontroller programming for industrial robots) they were world experts. They had no intention of moving to another city or another company, nor of getting a promotion or learning a new trade. They were simply extremely good at what they were doing (which in the grand scheme of things was probably pretty obsolete technology), and whenever a new project came they could figure out the requirements and implement the product without much external input. The first time I met a "project manager" was when I started working for a US company.


>I worked several times for small companies that were definitely not at the forefront of technology, were specialised in some small niche and pretty small (10-15 engineers), but all their engineers had been there for 10-20 years. They were pretty well paid compared to the rest of the country

This isn't possible in the USA. Companies like this (small, and not in tech hub cities) always try to take advantage of their location and pay peanuts, with the excuse "the cost of living is lower here!", even though it's not that much lower (and not as low as they think), and everything besides houses costs the same nationwide.


I agree that something like that is very unlikely in the US, which is why so many people in this thread (I presume Americans) were incredulous as to whether that was even possible, but elsewhere in Europe good software and electronic/electrical engineers can be making very good money for the local standards in stable jobs, while at the same time being paid a lot less than they would be in a similar job in one of the US major tech hubs.

Of course, sometimes people realize that what they asked for wasn't actually what was needed.

I mean... This "realization" is what triggered the advent of agile, 2 decades ago, right?

People almost never know what they want, so put SOMETHING in front of them, fast, and let's go from there


I've heard this, and I've even seen it in plenty of poorly performing businesses, but I've never actually seen it in a highly performing, profitable tech company. Other than at the new grad level but it's treated as net-negative training while they learn how to build consensus and scope out work.

Not coincidentally, the places I've seen this approach to work are the same places that have hired me as a consultant to bring an effective team to build something high priority or fix a dumpster fire.


A lot of highly performing teams don't even use tickets.

Do any highly performing teams use tickets?

A fly-by-night charlatan successfully pushed ticketing into our organization in the past year, and I would say it was a disaster. I only have the experience of one, but from that experience I am now not sure you can even build good software that way.

I originally hoped it was growing pains, but I see more and more fundamental flaws.


I’ve worked at one, but it required a PM who was ruthless about cutting scope and we focused on user stories after establishing a strong feedback pipeline, both technically through CI/CD/tests and with stakeholders. Looking back, that was the best team I’ve ever worked in. We split up to separate corners of the company once the project was delivered (12 month buildout of an alpha that was internally tested and then fleshed out).

Maybe I had greenfield glasses but I came in for the last 3 months and it was still humming.


How do you keep track of tasks that need to be done, of reported bugs and feature requests?

Previously? There was an understanding of the problem trying to be solved. The gaps left the pangs of "this isn't right".

Now I have no way to know where things stand. It's all disconnected and abstracted. The ticket may suggest that something is done, but if the customer isn't happy, it isn't actually. Worse, now we have people adding tickets without any intent to do the work themselves and there isn't a great way to determine if they're just making up random work, which is something that definitely happens sometimes, or if it truly reflects on what the customer needs.

You might say that isn't technically a problem with ticketing itself, and I would agree. The problems are really with what came with the ticketing. But what would you need tickets for other than to try and eliminate the customer from the picture? If you understand the problem alongside the customer, you know what needs to be done just as you know when you need to eat lunch. Do you create 'lunchtime' tickets for yourself? I've personally never found the need.


You must be working in projects with a relatively small number of “problems to be solved” at any given time, and with the problems having relatively low complexity. In general there’s no way to keep everything in your head and not organize and track things across the team. That doesn’t mean that a lot of communication doesn’t still have to happen within the team and with the customers. Tickets don’t replace communication. But you have to write down the results of the communication, and the progress on tasks and issues that may span weeks or months.

> In general there’s no way to keep everything in your head

I imagine everyone's capacity is different, but you wouldn't want anyone with a low capacity on your team, so that's moot. Frankly, there is no need to go beyond what you can keep in your head, unless your personal capacity is naturally limited I guess, because as soon as you progress in some way the world has changed and you have to reevaluate everything anyway, so there was no reason to worry about the stuff you can't focus on to begin with.


I find that the current way we do Scrum is way more waterfall-ish than what we had before. Managers just walked around and talked, and knew what each person was doing.

We traded properly working on problems for the Kafkaesque nightmare of modern development.


Thing is, Scrum isn't supposed to be something you do for long.

As you no doubt know, Agile is ultimately about eliminating managers from the picture, thinking that software is better developed when developers work with each other and the customer themselves without middlemen. Which, in hindsight, sounds a lot like my previous comment, funnily enough, although I didn't have Agile in mind when I wrote it.

Except in the real world, one day up and deciding no more managers on a whim would lead to chaos, so Scrum offered a "training wheels" method to facilitate the transition, defining practices that push developers into doing things they normally wouldn't have to do with a manager behind them. Once developers are comfortable and into a routine with the new normal Scrum intends for you to move away from it.

The problem: What manager wants to give up their job? So there has always been an ongoing battle to try and bastardize it such that the manager retains relevance. The good news, if you can call it that, is that we as a community have finally wised up to it, and most now recognize it for what it is instead of allowing misappropriation of the "Agile" label. The bad news is that, while we're getting better at naming it, we're not getting better at dealing with it.


I don’t think people invested in Scrum believe it’s “temporary” or ever marketed it as such.

And agile teams are supposed to be self-managed but there’s nothing saying there should be no engineering managers. It sounds counter intuitive, but agile is about autonomy and lack of micro-management, not lack of leadership.

If anything, the one thing those two things reject are “product managers” in lieu of “product owners”.


> I don’t think people invested in Scrum believe it’s “temporary” or ever marketed it as such.

It is officially marketed as such, but in the real world it is always the managers who introduce it into an organization to get ahead of the curve, allowing them to sour everyone on it before there is a natural movement to push managers out, so everyone's exposure to it is always in the bastardized form. Developers and reading the documentation don't exactly mix, so nobody ever goes back to read what it really says.

> And agile teams are supposed to be self-managed but there’s nothing saying there should be no engineering managers.

The Agile Manifesto is quite vague, I'll give you that, but the 12 Principles make it quite clear that they were thinking about partnerships. Management, of any kind, is at odds with that. It does not explicitly say "no engineering managers", but having engineering managers would violate the spirit of it.

> not lack of leadership.

Leadership and management are not the same thing. The nature of social dynamic does mean that leadership will emerge, but that does not imply some kind of defined role. The leader is not necessarily even the same person from one day to the next.

But that is the problem. One even recognized by the 12 Principles. Which is that you have to hire motivated developers to make that work. Many, perhaps even most, developers are not motivated. This is what that misguided ticketing scheme we spoke of earlier is trying to solve for, thinking that you can get away with hiring only one or two motivated people if they shove tickets down all the other unmotivated developers' throats, keeping on them until they are complete.

It is an interesting theory, but one I maintain is fundamentally flawed.


I've realized it's a different paradigm in (very loosely) the Kuhn sense. You wouldn't track tasks if you're fundamentally not even thinking of the work in terms of tasks! (You might still want a bug tracker to track reported bugs, but it's a bug tracker, not a work tracker.)

What you actually do is going to depend on the kind of project you're working on and the people you're working with. But it mostly boils down to just talking to people. You can get a lot done even at scale just by talking to people.


People gotta remember it's a job just like anything else. I don't see any other profession going above and beyond, so why should that be levied upon programmers? I don't see PMs trying to understand code, or CEOs trying to understand the customer more than the investor.

There is still enormous value in cleaning up the long tail of somewhat important stuff. One of the great benefits of Claude Code to me is that smaller issues no longer rot in backlogs, but can be at least attempted immediately.

The difference is that Claude Code actually solves practical problems, but pure (as opposed to applied) mathematics doesn't. Moreover, a lot of pure mathematics seems to be not just useless, but also without intrinsic epistemic value, unlike science. See https://news.ycombinator.com/item?id=46510353

I’m an engineer, not a mathematician, so I definitely appreciate applied math more than I do abstract math. That said, that’s my personal preference and one of the reasons that I became an engineer and not a mathematician. Working on nothing but theory would bore me to tears. But I appreciate that other people really love that and can approach pure math and see the beauty. And thank God that those people exist because they sometimes find amazing things that we engineers can use during the next turn of the technological crank. Instead of seeing pure math as useless, perhaps shift to seeing it as something wonderful for which we have not YET found a practical use.

Even if pure math is useless, that’s still okay. We do plenty of things that are useless. Not everything has to have a use.

I’m not sure I agree. Pure math is not useless because a lot of math is very useful. But we don’t know ahead of time what is going to be useless vs. useful. We need to do all of it and then sort it out later.

If we knew that it was all going to be useless, however, then it’s a hobby for someone, not something we should be paying people to do. Sure, if you enjoy doing something useless, knock yourself out… but on your own dime.


Applications for pure mathematics can't necessarily be known until the underlying mathematics is solved.

Just because we can't imagine applications today doesn't mean there won't be applications in the future which depend on discoveries that are made today.


Well, read the linked comment. The possible future applications of useless science can't be known either. I still argue that it has intrinsic value apart from that, unlike pure mathematics.

There are many cases where pure mathematics became useful later.

https://www.reddit.com/r/math/comments/dfw3by/is_there_any_e...


So what? There are probably also many cases where seemingly useless science became useful later.

Exactly, you're almost getting it. Hence the value of "pure" research in both science and math.

You are not yet getting it, I'm afraid. The point of the linked post was that, even assuming an equal degree of expected uselessness, scientific explanations have intrinsic epistemic value, while proving pure math theorems does not.

I think you lost track of what I was replying to. Thorrez noted that "There are many cases where pure mathematics became useful later." You replied by saying "So what? There are probably also many cases where seemingly useless science became useful later." You seemed to be treating the latter as if it negated the former which doesn't follow. The utility of pure math research isn't negated by noting there's also value in pure science research, any more than "hot dogs are tasty" is negated by replying "so what? hamburgers are also tasty". That's the point you made, and that's what I was responding to, and I'm not confused on this point despite your insistence to the contrary.

Instead of addressing any of that you're insisting I'm misunderstanding and pointing me back to a linked comment of yours drawing a distinction between epistemic value of science research vs math research. Epistemic value counts for many things, but one thing it can't do is negate the significance of pure math turning into applied research on account of pure science doing the same.


"You replied by saying "So what? There are probably also many cases where seemingly useless science became useful later." You seemed to be treating the latter as if it negated the former"

No, "so what" doesn't indicate disagreement, just that something isn't relevant.

Anyway, assume hot dogs taste not good at all, except in rare circumstances. It would then be wrong to say "hot dogs taste good", but it would be right to say "hot dogs don't taste good". Now substitute pure math for hot dogs. Pure math can be generally useless even if it isn't always useless. Men are taller than women. That's the difference between applied and pure math. The difference between math and science is something else: Even useless science has value, while most useless math (which consists of pure math) doesn't. (I would say the axiomatization of new theories, like probability theory, can also have inherent value, independent of any uselessness, insofar as it is conceptual progress, but that's different from proving pure math conjectures.)


So when you said "so what, hamburgers (science) taste good (is useful)", you were implicitly making a point about how bad (mostly not useful) the hot dogs (math research) was? And that's the thing that supposedly wasn't being followed on the first pass?

That brings us full circle, because you're now saying you were using one to negate the other, yet you were claiming that interpretation was a "failure to follow" what you were saying the first time around.


It really speaks to the weakness of your original claim that you're applying this level of sophistry to your backpedaling.

There are 1135 Erdős problems. The solution to how many of them do you expect to be practically useless? 99%? More? 100%? Calling something useful merely because it might be in rare exceptions is the real sophistry.

It's hard to know beforehand. Like with most foundational research.

My favorite example is number theory. Before cryptography came along it was pure math, an esoteric branch just for number nerds. Turns out, super applicable later on.


You’re confusing immediately useful with eventually useful. Pure maths has found very practical applications over the millennia - unless you don’t consider it pure anymore, at which point you’re just moving goalposts.

No, I'm not confusing that. Read the linked comment if you're interested.

You are confusing that. The biggest advancements in science are the result of the application of leading-edge pure math concepts to physical problems. Newtonian physics, relativistic physics, quantum field theory, Boolean computing, Turing's notions of devices for computability, elliptic-curve cryptography, and electromagnetic theory all derived from the practical application of what was originally abstract math play.

Among others.

Of course you never know which math concept will turn out to be physically useful, but clearly enough do that it's worth buying conceptual lottery tickets with the rest.


Just to throw in another one, string theory was practically nothing but a basic research/pure research program unearthing new mathematical objects which drove physics research and vice versa. And unfortunately for the haters, string theory has borne real fruit with holography, producing tools for important predictions in plasma physics and black hole physics among other things. I feel like culture hasn't caught up to the fact that holography is now the gold rush frontier that has everyone excited that it might be our next big conceptual revolution in physics.

There is a difference between inventing/axiomatizing new mathematical theories and proving conjectures. Take the Riemann hypothesis (the big daddy among the pure math conjectures), and assume we (or an LLM) prove it tomorrow. How high do you estimate the expected practical usefulness of that proof?

That's an odd choice, because prime numbers routinely show up in important applications in cryptography. To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography. And solving it would be valuable in its own right, a kind of mathematical equivalent to discovering a fundamental law in physics which permanently changes what is known to be true about the structure of numbers.

Ironically this example turns out to be a great object lesson in not underestimating the utility of research based on an eyeball test. But it shouldn't even have to have any intuitively plausible payoff whatsoever in order to justify it. The whole point is that even if a given research paradigm completely failed the eyeball test, our attitude should still be that it very well could have practical utility, and there are so many historical examples to this effect (the other commenter already gave several, and the right thing to do would have been to acknowledge them), and besides I would argue it still has the same intrinsic value that any and all knowledge has.


> To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography.

I doubt that this is true.


It already has! The progress that's been made thus far involved the development of new ways to probabilistically estimate the density of primes, which in turn have already been used in cryptography for secure keys, based on a deeper understanding of how to quickly and efficiently find large prime numbers.

It's unclear to me what point you are making.
