
I think there's a narrow, unregulated space where this could be true. I have to stretch my imagination to picture it: automations built with obscured responsibility as the intended outcome. And I could see profit as a possible driver of that outcome.

As an extreme end of the spectrum, there's been worry and debate for decades over automating military capabilities to the point where it becomes "push button to win war". There used to be, and hopefully still is, a lot of restraint about heading in that direction, in recognition of the need for ethics validation in automated judgements. The topic comes up now and then around Teslas, and the impossible decisions FSD will have to make.

So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability runs into ethical constraints. In reality it's the ruthless, bottom-line-focused corps - which don't seem to be the norm, but may have an outsized impact - that actually push up against those constraints. But even then, as an executive at one of those shops, I'd be wary of documenting a decision to disregard potential harms. That line is being tested, but it's still there.

In my actual experience with automations, they've always been driven by laziness / reducing effort for everyone, or "because we can", and sometimes a need to reduce human error.


I wonder if that quote is still applicable to systems that are hardwired to learn from decision outcomes and new information.

LLMs do not learn as they go the way people do. People's brains are plastic and immediately adapt to new information, but for LLMs:

1. Past decisions and outcomes get into the context window, but that doesn't actually update any model weights.

2. Your interaction possibly, eventually, gets into the training data for a future LLM. But this is an incredibly diluted form of learning (a toy sketch of the distinction follows).
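
As a rough illustration of that distinction (a toy sketch with made-up functions, not any real library's API): inference reads the context but never writes to the weights, while only a later training run, over a much larger pool of data, produces new weights.

  # Toy sketch, not a real LLM: in-context "learning" vs. actual weight updates.
  weights = {"w": 1.0}  # fixed at training time

  def generate(weights, context):
      # Inference reads the context but never writes to the weights.
      return f"reply using {len(context)} chars of context (w={weights['w']})"

  def finetune(weights, dataset):
      # A future training run produces *new* weights from a large pooled dataset.
      return {"w": weights["w"] + 0.001 * len(dataset)}

  # 1. Past decisions land in the context window; the weights are untouched.
  context = "previous decision: X. outcome: it failed."
  print(generate(weights, context))
  print(weights)  # still {"w": 1.0}

  # 2. Our interaction may eventually be one item in an enormous training set.
  dataset = ["...millions of other interactions..."] * 1_000_000 + [context]
  new_weights = finetune(weights, dataset)  # a different model; our influence is diluted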


What (or who) would have been responsible for the Holodomor if it had been caused by an automated system instead of deliberate human action?

This is also a problem that exists within countries. My RSS feed is littered with independent Canadian (national) news agencies not specifying which municipality a headline relates to. E.g. "Mayor pushes back against province on xyz issue". Okay, that might be huge news for Timmins, Ontario, but maybe BAU for Toronto. Even skimming the lead paragraph often doesn't identify the city.

*Editing to add a point: Perhaps everyone assumes a local audience.


The author suggested that if senior leadership had a development background, it would be easier to get support and resources to deal with tech debt. Between the lines, I'm reading that the risks are just inherently understood by someone with a tech background.

Then the author suggests that senior leadership without a tech background will usually need to be persuaded by a value proposition - the numbers.

I'm seeing these as the same thing - the risks of specific tech debt just need to be understood before it gets addressed. Senior leaders with a development background might be better at predicting the relationship between tech debt and its impact on company finances. Non-technical leaders just require an extra translation step to understand that relationship.

Then considering that some level of risk is tolerated, and some risk is consciously taken on to achieve things, both might ultimately choose to ignore some tech debt while addressing other bits.


The risk of tech debt is that the marginal cost of adding features goes up as the debt goes unpaid.
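
A toy model of that compounding (every number here is made up for illustration):

  # Toy model: each unpaid unit of tech debt adds friction to every future feature.
  BASE_COST = 10        # effort to ship a feature in a clean codebase (arbitrary units)
  FRICTION = 0.15       # assumed extra effort per unit of unpaid debt

  def marginal_cost(debt_units):
      return BASE_COST * (1 + FRICTION) ** debt_units

  for debt in (0, 5, 10):
      print(debt, round(marginal_cost(debt), 1))
  # 0 -> 10.0, 5 -> 20.1, 10 -> 40.5: the same feature costs ~4x once debt piles up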


> If no one asked and no one is on the hook to change anything: Stop talking.

It seems like a matter of knowing who to talk to about what. I don't think the solution is to stop talking to everyone.

A rationale for something worth addressing (a need, problem, or opportunity) has to be communicated somehow, and convincingly: in person, in writing, or as a simple business case.

From my non-tech background: priorities are fluid, and things that are rationalized as urgent and important are given resources and attention.

If there is someone like the author spinning their wheels in frustration, then maybe there's a problem with the organization aligning everyone on goals/objectives/outcomes -> leading to misaligned solutions being raised, and falling on deaf ears. Or maybe there's no opportunity to raise solutions with the right people.

