
You could frame the entire project of building a political system as just solving this problem without extra side effects.

If you want the right to be left alone, you have to concentrate power, so that people who don't leave other people alone can be effectively punished. That concentrated power attracts the kind of people who don't like leaving other people alone. To these power-seekers, people left alone are an opportunity cost, they could be forced to work towards some goal set by the power-seekers.


I mean you could, but we're inherently dependent on each other. I'm not sure I see the value of building a society around personal liberty. It wouldn't make much of a society, would it?

Sounds like you don't want to leave people alone.

I'm quite the introvert. I'm just flummoxed as to why you would build around something so much less substantial than equitable distribution of resources, which we necessarily have to bother each other to enforce.

If you want to be left alone above all else, you can always turn into Ted Kaczynski.

But reaping the benefits of society comes with costs. One of them is accepting annoying neighbors and other social contracts that might not specifically cater to your desires.


There's a lot of blame being assigned to Microsoft, the entire corporation. But I doubt this was a heavily contemplated decision by a room full of executives, or voted on by the shareholders.

More likely, this is a way for someone to get ahead in their career at Microsoft by passing off a successful open source project as their own accomplishment. They can steal users from the original project and justify using Microsoft's resources to maintain it, which puts more resources under their control, and gives them something to talk about during performance reviews.

The open source community should have a way to enforce professional consequences on individuals in situations like this. They are motivated by professional gains after all. That's the only way this will stop happening. Professional consequences do not mean doxxing or other personal attacks; they mean losing career opportunities, losing contributor privileges, and becoming known as untrustworthy. These consequences have to be greater than the expected gain from passing a project off as your own at work.

I wonder if a new kind of license could be created which includes projects in some kind of portfolio and violating the license means losing access to the entire portfolio. Similar to how the tech companies added patents to a shared portfolio and patent treachery meant losing access to the portfolio.


Just because the shareholders didn't vote on it, or an exec didn't explicitly say "hey steal this" does not absolve the company. Leadership doesn't get to throw up their hands and say "not my fault" when something bad happens.

It is ultimately the responsibility of the company and its people to create a system where things like this are discouraged or prohibited. Not doing so is tacit approval, especially in this case where they have a significant history of doing the same thing.


It's fine that you think corporations are supposed to work that way, and I don't necessarily disagree. But they don't in practice. They don't feel the consequences of bad actions because of legal economies of scale. They also don't backpropagate consequences from the company's bottom line to the individuals responsible. If you were to rectify this so that it works exactly as you envision, you would have made incredible advances in the Principal-Agent problem as it pertains to corporate compensation.

Most corporate actions that 3rd parties consider "bad" are the result of someone inside the corporation having an asymmetric payoff from directing the corporation to do the bad thing. They get the upside from a success, but not the downside from failure.

If you want to stop a certain bad behavior, your best bet is to change individual incentives.


I think the point being made is that the executives are either responsible for the company, or they're not actually running the company at all.

Like this isn't some tragedy of the commons situation. This isn't some situation where the company is a cooperative confederation of equal partners. Either shit rolls uphill, or you don't have leadership at all. You don't get to pass the buck on criticism because you made a decision out of self interest, either.

"It's not technically illegal," is the most blasé, low-effort rule for behavior. It's why only twelve-year-olds and lawyers use it as a defense of poor behavior and poor ethics.

Being a POS earns you a reputation for being a POS, and that includes people pointing you out as one in public forums.


> or they're not actually running the company at all

Executives are not micro-managing day-to-day implementation decisions of every team, no. They set broad strategic goals, the management layers below them decide how to best operationalize those goals, and the layers below those middle managers make specific implementation decisions to execute those operations.

If you want to think of this as "not actually running the company at all", you're free to. The point is that's how the world works.


You don't have to be personally making the decisions in order to be responsible for them.

That's also the way the world works.


Microsoft has north of 100k SWEs working for them; the idea that corporate management could be personally responsible for the decisions of every single one is absurd.

It’s not “the CEO must know everything a junior does”, but more “if a junior messes up doing something for the company, the CEO is ultimately answerable” - be it to the board, the govt, or the public.

Rephrasing it - there’s a reason it’s Zuckerberg and Pichai and Tim Cook who go to congress, and not the folks implementing it on the ground level.


What initiative will executives at Microsoft take now that this post has become popular?

No initiative? Then it's 100% their fault.


This post isn't popular, it has already fallen off the HN frontpage never to be seen again in any context. It did not and will never break into any sort of traditional media.

Not a single Microsoft C-suite exec, or anyone within spitting distance of the C-suite, will ever hear about this. Do not mistake your personal media bubble for the general media ecosystem.


Yeah managers aren't supposed to learn what's going on in their company from the press :D

Of all the bad arguments, this is the worst.


In reality executives are responsible when the company is doing well. When mistakes happen it is either handled by insurance or by firing an employee who was only partially involved.

The tricky part is how we, as a community, actually build those levers of individual accountability without veering into mob justice.

Exactly. And this is why I think all US voters should be held to account for Abu Ghraib. Prison time at the least. The death penalty should be on the table.

My observation (from other, similar war events) is that investigations by the instigator's own country lead to far less serious punishments for the instigators and downplaying of the harm from such events.

I think you misunderstand what direction leadership flows in our political system.

If you’ve funded Abu Ghraib (by paying the US government) then you’re criminally culpable. And don’t try the Nuremberg Defense on me: “I was just following orders to pay every April 15”

You're just repeating what you said above without incorporating what I posted.

Why don't you state directly how you think leadership flows, and we can see? It's impossible to incorporate your vagueposting.

> by paying the US government

Directly, or indirectly through taxes?


Indirectly is sufficient. You're paying for it to happen.

Yeah, but try not paying taxes. :/ You pay taxes even when you buy products at the grocery stores, too.

Indeed. Hence "just following orders". Ultimately, I don't believe in this kind of strong culpability but it's clear the people who claim they do don't either and just bring it up when convenient.

Do you have any solutions?

Yes, have a moral philosophy which does not lead to total contamination across the interaction graph. It’s okay to pay taxes into the US Government even if some representatives of it act poorly.

But you said "Indirectly is sufficient. You're paying for it to happen." when I was talking about taxes. What if I have a moral philosophy but my taxes still go to whatever it is we are against? I am indirectly paying for it, but it is coercion, IMO. "Vote for someone else" does not apply here; that's another head of the same dragon.

I was taking that position to illustrate that moral contagion inevitably leads to a declaration of everyone being immoral. Therefore, moral contagion is not a useful differentiator between people.

Reductio ad absurdum.


Yeah, but Microsoft's response to this will actually be an official company position.

It's a space worth watching.


A flash in the pan about a random fork they have on GitHub with <100 stars and no significant public usage, which fails to correctly follow the reproduction requirement of the MIT license, will not generate a C-suite response. It won't get outside the local management of the team responsible for the fork. Maybe a few dozen people at MS will ever know about this, most of them from seeing it on HN, and they have zero connection to the responsible team.

It baffles me that HN has no idea how large organizations work. The boss's boss's boss has no idea what random worker bees are doing.


The way you underestimate how companies deal with potential PR problems tells me all I need to know about your corporate experience.

This is not a PR problem, no one cares about this. It's barely a thing on HN, and not something any traditional media cares about.

So what's your point? That megacorps shouldn't be accountable for the actions of their employees? That people saying otherwise are clueless and should shut up?

I don't have a point beyond thinking that the framing of this as "Microsoft", the corporate entity, making a strategic decision is wrong. This is Aditya, the random software engineer with 5 years of experience, making a decision.

How you reckon with that, what you take away from it, is up to you. If you want to hold MS corporate responsible for every decision Aditya and Piotr and Zhong make, you can feel free to, but it won't help you understand how these decisions are made because it's wrong.


> More likely, this is a way for someone to get ahead in their career at Microsoft by passing off a successful open source project as their own accomplishment.

No, it was a whole team at MSFT: https://news.ycombinator.com/item?id=43755745


It's my personal experience that toxic behaviour is tolerated (and even encouraged) by toxic leadership.

Whilst there are always bad apples in a big company, a good company stamps out bad behaviour as soon as it becomes aware of it.


At my job the management sees not violating copyright as a nuisance. Then when a customer wants to know whether we're violating someone's copyright, they suddenly go insane.

Licenses don’t matter and are rarely challenged in court.

This is the nature of OSS. Outright theft in hopes you will never know until it’s too late.

Very rarely do large corporations contribute their fair share back to any project.

Does this make me money and/or solve a problem quickly? Fork it and it’s mine.

Until we stop giving money to large corporations that profit off the free work of others, it will never stop.

And it won’t because we like low cost solutions that work.


I think it’s a bit charitable to assume that something published under an official Microsoft public channel wouldn’t have some sort of legal review, at least for the initial publication.

They created the atmosphere that encourages or even necessitates shenanigans like these. Absolutely blame the corporation.

Exactly. If you don't hold managers responsible for the results of the incentives they set, you give the most powerful people in a company the most moral leeway. It should be the other way around.

This kind of forced practice can create the appearance of a certain level of competence, but it rarely produces a deep understanding or innate appreciation of any of those subjects.

Take music, for example. Many high schoolers play an instrument as part of the college admissions game. Almost none of those kids can play music with their friends and just enjoy it. To them music is this structured activity where they get paper with dots on it, and they have to play the right notes at the right time to pass the class. These kids never develop a true understanding or appreciation for music. They don't keep their instruments or practice as adults.

There are so many things to learn to be good at; why not find something that you actually like?


This is one of the reasons I'm really happy that my daughter found show choir. Choir sucks. My kids hated it. I hated going to watch it. Bunch of terrible old songs that no one knows. Now she's singing and dancing to pop songs and show tunes on the stage and it's far more engaging for her. I do think it also helps that show choir is a tryout based program so the floor for interest and talent is far higher than with the regular choir.

Different people clearly mean different things when they talk about software quality. There is quality as perceived by the user: few bugs, accurately models the problem they have, no more complicated than necessary, etc. Then there is this other notion of quality as something to do with how the software is built. How neat and clear it is. How easy it is to extend or change.

The first kind of quality is the only kind that matters in the end. The second kind has mattered a lot up until now because of how involved humans are in typing up and editing software. It doesn't need to matter going forward. To a machine, the entire application can be rewritten just as easily as making a small change.

I would gladly give up all semblance of the second kind of quality in exchange for formal specifications and testing methods, which an AI goes through the trouble of satisfying for me. Concepts and models matter in the problem domain (assuming humans are the ones using the software), but they will increasingly have no place in the solution domain.


The second type of quality is necessary to achieve the first type of quality for systems with nontrivial levels of complexity. It doesn’t need to be perfect, or even close to perfect, but it does need to be “good enough”:

Your end users will eventually notice how long bugs take to get fixed, how long and how often outages occur, and how long it takes to get new functionality into your software.

But beyond your end-users, you likely have competitors: and if your competitors start moving faster and build a reputation of dependability and responsiveness, your business WILL suffer. You will see attrition, your CAC will go up, and those costs get absorbed somewhere: either in less runway, less capex/opex (layoffs), higher prices, or all of the above. And that’s an entire domain AI isn’t (yet) suited to assist with.

There’s no free lunch.


Software architecture was never about code elegance; it’s about making it easier to get reliable results. And that’s mostly about using automated tooling to check correctness, and about modules that are easy to understand and to modify.

That’s the easiest way to get both definitions of quality, as it’s way easier to test isolated modules and their integration than to test the system as a whole. And way easier to correct wrong code.


> Software architecture was never about code elegance, it’s about making it easier to get reliable results. And that’s mostly about using automated tooling to check correctness and easy to understand and to modify modules.

The first isn't entirely true, and automated tooling is actually quite poor at doing any sort of architecture analysis, since the "right" architecture is heavily dependent on factors you cannot see in the code itself: What sort of SLO are you setting? What is your service's load and usage pattern (read heavy? write heavy? a blend?)? Read heavy and high availability? You may want CQRS, but if the service is only experiencing light load, that could easily be over-engineering. Tooling won't help you identify that; you'll need experienced engineers to make judgements.


Isn’t that system design? And automated tooling is for correctness, not performance or availability. So things like linting, tests runner, static analysis and the like.

> The first kind of quality is the only kind that matters in the end.

How easy it is to maintain and extend does absolutely matter, in a world where software is constantly growing and evolving and never "finished".


I'm not disagreeing with you.

Just an observation though:

There seems to be a world in software where "it works" well enough to grow a large user base, achieve a large valuation, and then dip out is also a viable option.

Growing/evolving the code does not matter because the product no longer matters after the founders have made a large sum of money.


I hear what you're saying; but the implications seem ... net harmful? If you're actively hacking something together with the intent of boosting a valuation, selling your company and GTFOing before your purchasers figure out the bag of unmaintainable garbage you've sold them, that ...

You're harming:

  * your customers who trusted you
  * the people that purchased your product

I think "grift" is a good term (GPT recommended 'predatory exit' as an alternative) for what a team that's done that has done.

There’s nothing wrong with iterating fast or building MVPs. But when teams knowingly pass off a brittle mess to others, they’ve stopped building products and started selling lies.


> but the implications seem ... net harmful?

I was not trying to imply that it was not. Just observing that when money is on the table there appears little incentive in some cases to build a good product.

Or to follow good engineering practices because the founders & investors already made money without it.


> The first kind of quality is the only kind that matters in the end.

Yes. But the first kind of quality is enabled with the second kind.

Until we live in a faultless closed loop[1] where, with AI, "the entire application can be rewritten just as easily as making a small change", you still need the second kind.

[1] and it's debatable if we ever will


The problem domain is part of the solution domain: writing a good specification and tests is a skill.

Moreover, I suspect the second kind of quality won't completely go away: a smart machine will develop new techniques to organize its code (making it "neat and clear" to the machine), which may resemble human techniques. I wouldn't bet much on it, but maybe even, buried within the cryptic code output by a machine, there will be patterns resembling popular design patterns.

Brute force can get results faster than careful planning, but brute force plus planning gets results faster than either alone. AI will keep being optimized (even if one day it starts optimizing itself), and organization is presumably a good optimization.

Furthermore: LLMs think differently than humans, e.g. they seem to have much larger "context" (analogous to short-term memory) but their training (analogous to long-term memory) is immutable. Yet there are similarities as demonstrated in LLM responses, e.g. they reason in English, and reach conclusions without understanding the steps they took. Assuming this holds for later AIs, the structures those AIs organize their code into to make it easier to understand, probably won't be the structures humans would create, but they'll be similar.

Although a different type of model and much smaller, there's evidence of this in auto-encoders: they work via compression, which is a form of organization, and the weights roughly correspond to human concepts like specific numbers (MNIST) or facial characteristics (https://www.youtube.com/watch?v=4VAkrUNLKSo&t=352).


> The first kind of quality is the only kind that matters in the end.

From a business perspective, this is what's exciting to a lot of people. I think we have to recognize that a lot of products fail not because the software was written poorly, but because the business idea wasn't very good.

If a business is able to spin up its product using some aspect of vibe coding to test out its merits, and is able to explore product-market fit more quickly, does it really matter if the code quality is bad? Likewise, a well-crafted product can still fail because either the market shifted (maybe it took too long to produce) or because there really wasn't a market for it to begin with. Obviously, there's a middle ground here, and if you go too far with vibe coding and produce something that constantly fails or is hard to maintain, then maybe you've gone too far, but it's a balance that needs to be weighed against business risk.


Low/no code MVP solutions have existed for a long time. Vibe coding seems like you'll get worse results than just using one of those, at least from a bug/support standpoint.

You are talking about an imagined future, not current reality.

An AI will be as flustered by spaghetti as a human. Or, not so much flustered: it will just make willy-nilly changes and end up in an expensive infinite loop of test failures and drunken changes to try and fix them.


> The second kind has mattered a lot up until now because of how involved humans are in typing up and editing software. It doesn't need to matter going forward.

Tell that to the vagus nerve in the giraffe.


The problem is that the first kind of quality is something that's hard for even human programmers to do well, while AI is, like the rest of the tools that came before, much better at the second.

If you want to understand the language of stochastic calculus as mathematicians have formalized it, then you need all of their jargon. Probability, Diff Eqs, Integrals, and Derivatives. If you are trying to tick a box on a resume, then that's what you have to do. If you have a CS degree then you have a little slice of Probability from combinatorics and information theory. You'll have to build up from there.

Stochastic Calculus was invented to understand stochastic processes analytically rather than experimentally. If you just want to build an intuition for stochastic processes, you should skip all that and start playing with Monte Carlo simulations, which you can do easily in Excel, Mathematica, or Python. Other programming languages will work too, but those technologies are the easiest to go from 0 to MC simulation in a short amount of time.
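As a minimal Monte Carlo sketch in Python (the function names and parameters here are illustrative, not from any particular text): estimate the spread of a symmetric random walk's endpoint empirically, and compare it to the analytic answer sqrt(steps).

```python
import random
import statistics

def simulate_walk(steps: int) -> int:
    """One sample path of a symmetric random walk: the sum of +/-1 steps."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

def monte_carlo_std(steps: int, trials: int, seed: int = 0) -> float:
    """Estimate the standard deviation of the walk's endpoint by brute force."""
    random.seed(seed)
    endpoints = [simulate_walk(steps) for _ in range(trials)]
    return statistics.pstdev(endpoints)

# Analytically the endpoint's standard deviation is sqrt(steps) = 10 here;
# the simulated estimate should land close to that.
print(monte_carlo_std(steps=100, trials=5000))
```

The experimental answer converges on the analytic one, which is exactly the intuition-building loop: guess a statistic, simulate, compare.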


If you just want some intuition, I found this previous HN submission https://jiha-kim.github.io/posts/introduction-to-stochastic-... pretty approachable at giving you some key ideas without being too rigorous. It's not useful for calculating anything practical, of course, but it can either be a starting point or just a way to satisfy that curiosity.

Alright, I'll bite. What's a reasonable price for Reddit? Aren't most of their users bots?

Doesn't matter. Subreddits create vast islands of value. A single sub overrun with bots is quarantined effectively.

That is why Reddit is one of my favourite social sites. It is algorithmic, but if you go to r/assholedesign you get asshole design (and an anal mod who keeps it like that). Etc.

Value $44bn ;)


I think people are going to interpret "Don't Guess" in a way that is totally impractical and not what the best programmers do.

You should have a strong sense of the model that a tool or library presents to you as the consumer. And you should use that model to "guess" about the behavior of the tool. You should choose tools that are coherent, so that your guesses are more accurate than not, and avoid using libraries/tools with many special cases that make it hard to "guess" what they do.

The best programmers do not double check the docs or implementation for every function that they call. They are good at writing tests that check lots of their assumptions at once, and they are good at choosing tools that let them guess reliably, and avoiding tools that cause them to guess incorrectly.

Leverage in programming comes from the things you don't have to understand and the code you don't have to read in order to accomplish a goal.
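The "tests that check lots of assumptions at once" idea can be sketched like this in Python (the specific guesses about the built-in sorted are just an illustration of encoding your mental model as assertions):

```python
# Illustrative assumption-checking test: pin down the model you are guessing
# a tool follows, instead of re-reading the docs for every call.
def check_sort_model():
    data = [("b", 2), ("a", 2), ("a", 1)]
    # Guess 1: sorting is stable, so equal keys keep their relative order.
    assert sorted(data, key=lambda p: p[1]) == [("a", 1), ("b", 2), ("a", 2)]
    # Guess 2: sorted returns a new list and leaves its input untouched.
    assert data == [("b", 2), ("a", 2), ("a", 1)]
    return True

print(check_check := check_sort_model())  # True
```

If a guess is wrong, the test fails immediately and tells you exactly which part of your model of the tool to repair.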


Can anyone comment on whether this or MCP are at all well designed? Is there any sort of elegance to them? Or is it exactly what I would expect from a multi-corporation committee: lots of different ways to do the same thing, use-case bloat, complicated to implement, complicated to test, etc.


There seems to be a fundamental mismatch between how sane people think about sandboxing, and how linux manages namespaces.

A Linux-naive developer would expect to spawn a new process from a payload with access to nothing. It can't see other processes, it has a read-only root with nothing in it, there are no network devices, no users, etc. Then they would expect to read documentation to learn how to add things to the sandbox. They want to pass in a directory, or a network interface, or some users. The effort goes into adding resources to the sandbox, not taking them away.

Instead there is this elaborate ceremony where the principal process basically spawns another version of itself endowed with all the same privileges and then gives them up, hopefully leaving itself with only the stuff it wants the sandboxed process to have. Make sure you don't forget to revoke anything.


> a read only root with nothing in it

A lot of things break if there's no /proc/self. A lot more things break if the terminfo database is absent. More things break if there's no timezone database. Finally, almost everything breaks if the root file system has no libc.so.6.

When you write Dockerfiles, you can easily do it FROM scratch. You can then easily observe whether the thing you are sandboxing actually works.

> no users

Now you are breaking something as fundamental as getuid.


The modern statically linked languages (I'm thinking of Go and Zig specifically) increasingly need less and less of the cruft you mentioned. Hopefully, that trend continues.

> no users

I mean running as root. I think all processes on Linux have to have a user id. Anything inside a sandbox should start with all the permissions for that environment. If the sandbox process wants to muck around with the users/groups authorization model then it can create those resources inside the sandbox.


The things that break in C if /proc/self or the terminfo DB are missing will break in Go and Zig too.

What I think you might mean is something like: "in modern statically linked applications written in languages like Go and Zig, it is much less likely for them to call on OS services that require these sorts of resources".


That is pretty much what jails are in FreeBSD, especially thin jails.


Or capabilities. Additive security has been known for decades; Linux really dropped the ball here. Linux file descriptors (open file descriptions, whatever) are close to a genuine capability model, except there's plenty of leakage where you can get at the insecure base.

> Instead there is this elaborate ceremony where the principal process basically spawns another version of itself endowed with all the same privileges and then gives them up

The flags to unshare are copies of clone3 args, so you're actually free to do this. There's some song and dance though, because it's not actually possible to exec an arbitrary binary with access to nothing.

But I think the big discrepancy is that there is inherently a two-step process to "spawn a new process with a new executable." It doesn't work that way - you clone3/fork into a new child process, inheriting what you will from the parent based on the clone args/flags (which could be everything, or nothing), do some setup work, and then exec.
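That fork-then-exec two-step can be sketched in Python (a simplified illustration; `spawn` is a hypothetical helper, and the child-side sandboxing setup is only marked by a comment, since real namespace calls need privileges):

```python
import os
import sys

def spawn(argv):
    """Fork, do child-side setup, then exec argv; return the child's exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: on Linux, namespace/privilege setup (unshare, dropping
        # capabilities, closing fds) would happen here, in the window
        # before exec replaces this process image.
        os.execv(argv[0], argv)
    # Parent: wait for the child and report how it exited.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

print(spawn([sys.executable, "-c", "raise SystemExit(7)"]))  # 7
```

The point of the pattern is exactly that window between fork and exec: the child still runs the parent's code, so it can reconfigure itself before the new program starts.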


> There seems to be a fundamental mismatch between how sane people think about sandboxing, and how linux manages namespaces.

What bothers me most about sandboxing with linux namespaces is that edge cases keep turning up that allow them to trick the kernel into granting more privileges than it should.

I wonder if Landlock can/will bring something more like FreeBSD jails to the table. (I haven't made time to read about it in detail yet.)


This is why I would still rather isolate using QEMU, Docker, or VirtualBox rather than a very thin chroot-like environment.

Docker uses namespaces by default. Are you using an add-on that makes it use a hypervisor instead?

I believe this is because on POSIX systems the only way to create a new process is fork().


There is the later-added posix_spawn, which could be implemented with a system call, even if on Linux it is emulated with clone + exec.

posix_spawn can do much, but not all, of what is possible with clone + exec. Presumably the standards editors have been scared to add too-complex function parameters for its invocation, though that should not have been a problem if all parameters had reasonable default values.
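For comparison, Python ships a binding to posix_spawn; unlike fork + exec there is no window in the child for arbitrary setup code, only what the optional spawn attributes allow. A minimal sketch (`spawn_and_wait` is an illustrative helper, not a standard function):

```python
import os
import sys

def spawn_and_wait(argv):
    """Launch argv via posix_spawn and return the child's exit code."""
    pid = os.posix_spawn(argv[0], argv, os.environ)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

print(spawn_and_wait([sys.executable, "-c", "raise SystemExit(3)"]))  # 3
```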


> While the Linux syscalls themselves are very stable and reliable, the c library on top of them is not.

Maybe just don't use that library then? Or don't do that ridiculous thing where you fumble around at runtime desperately looking for executable pages that should just be included in your binary.


It's not "some c library on top of them", it's glibc. You can use another libc, but that means you're going to be incompatible with the distro expectations in terms of configuration, because that's handled by glibc, so you just push off the instability to a different part of your system.


