mlthoughts2018's comments

> “ Programmers need to think long and hard about your process invocation model. Consider the use of fewer processes and/or consider alternative programming languages that don't have significant startup overhead if this could become a problem (anything that compiles down to assembly is usually fine).”

This is backwards. Writing those invocations in an AOT-compiled language costs extra developer overhead and code overhead. The trade-off is usually that occasional minor slowness from the interpreted language pales in comparison to the develop-time slowness, fights with the compiler, and long-term maintenance of more total code. So even though every run is a few milliseconds slower, adding up to hours of slowness over hundreds of thousands of runs, that speed savings would never realistically amortize the 20-40 hours of extra developer labor lost up front, plus the additional, larger time lost to maintenance.
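To put rough numbers on that amortization claim (all figures are illustrative assumptions, not measurements: ~5 ms of extra interpreter startup per run, and the 20-40 hour rewrite estimate above):

```python
# Back-of-envelope amortization check with assumed, illustrative numbers.
slowdown_per_run_s = 0.005   # assume 5 ms of extra startup overhead per run
rewrite_cost_hours = 30      # midpoint of the 20-40 hour rewrite estimate

# Number of runs before the per-run savings pay back the rewrite cost.
runs_to_break_even = rewrite_cost_hours * 3600 / slowdown_per_run_s
print(f"{runs_to_break_even:,.0f} runs to break even")  # 21,600,000 runs
```

Even at hundreds of thousands of runs, you're orders of magnitude short of break-even under these assumptions; the numbers only flip if runs are enormous or the per-run penalty is far larger than startup overhead.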

People who say otherwise usually have a personal, parochial attachment to some specific “systems” language and always feel they personally could code it up just as fast (or, more laughably, even faster thanks to the compiler’s help). They are naively frustrated that other programmers don’t have the same level of command, which would render the develop-time trade-off moot. Except that’s just hubris, and it ignores tons of factors that take “skill with a particular systems language” out of the equation, ranging from “well, good luck hiring only people who want to work like that” to “yeah, zero of the required domain-specific libraries for this use case exist in anything besides Python.”

This is a case where this speed optimization actually wastes time overall.


> This is a case where this speed optimization actually wastes time overall.

That's too much of an absolute to be a good rule. If your heavyweight runtime is being launched 1000s of times to get a job done but you only do this once every few months, sure, don't do much optimizing, and certainly don't worry about a rewrite in another language; the savings probably aren't worth it. If your heavyweight runtime is being launched 1000s of times to get a job done every day or multiple times a day, consider optimizing, which may include changing the language. That's hardly controversial; it's the same thing we consider with every other programming task.

Is X expensive in your language and do you have to do this frequently? Then minimize X or rewrite in a language that handles it better.


> “ If your heavyweight runtime is being launched 1000s of times to get a job done every day or multiple times a day, consider optimizing. Which may include changing the language. That's hardly controversial”

No, that is controversial, because the time saved per run (even with 1000s of launches per run and multiple runs per day) is never going to come close to amortizing the upfront sunk cost of that migration and its future maintenance.

I’m specifically saying in the exact case you highlighted, people will short-sightedly think it’s a clear case to migrate out of the easy-but-slow interpreted language or never start with it to begin with, and they would be quantitatively wrong, missing the forest for the trees.


Think from the point of view of a tools/productivity engineer at a large company.

Yes, you invest some of your time to create the faster tool. Then hundreds to tens of thousands of people all use that faster tool to save time, day in and day out.

Just to put some concrete numbers to this, if you have a 100-person engineering team and you ask one of them to spend all their work time ensuring that the others are 1% more efficient than they would be otherwise (so saving each of them 5 minutes per typical 8 hour workday), you about break even. If you have a 500-person team, you come out ahead.

Now it's possible that the switch to a compiled language we are discussing would not save people 5 minutes per day, or that you don't have 100+ engineers. Obviously for a tool that only the developer of the tool will use the calculus is very different!
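The break-even arithmetic above can be sketched directly (one engineer's full time traded for a 1% gain across the rest of the team):

```python
def tooling_payoff(team_size, efficiency_gain=0.01):
    """Net engineer-equivalents gained: the efficiency gain across the
    rest of the team, minus the one engineer dedicated to tooling."""
    return (team_size - 1) * efficiency_gain - 1

print(tooling_payoff(100))  # ~ -0.01: roughly break even
print(tooling_payoff(500))  # ~ 3.99: clearly ahead
```

The 1% figure and the one-dedicated-engineer framing are the assumptions from the comment above; with a smaller team or a smaller per-person gain the payoff goes negative fast.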


Docker / containers are necessary but not sufficient. For example, in a machine learning CI / CD system, there could be a fundamental difference between executing the same step, with the same code, on CPU hardware vs GPU hardware.


Many CI systems try to strictly enforce hermetic build semantics and make non-idempotent steps impossible. For example, by associating build steps with an exact source code commit and categorically disallowing a repeat of a successful step for that commit.
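A minimal sketch of that commit-keyed gating, assuming a hypothetical in-memory registry (a real CI system would persist this in a build database):

```python
# Hypothetical sketch: record each successful step against its exact commit,
# and refuse to run the same step for the same commit twice.
completed = set()  # (commit_sha, step_name) pairs that already succeeded

def run_step(commit_sha, step_name, action):
    key = (commit_sha, step_name)
    if key in completed:
        raise RuntimeError(f"step {step_name!r} already ran for {commit_sha}")
    result = action()          # only reached if the step hasn't run yet
    completed.add(key)         # only record the step on success
    return result
```

So `run_step("abc123", "build", ...)` succeeds once, and any retry for commit `abc123` is categorically rejected rather than silently re-executed.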


Isn’t Drone also an example of the author’s ideal solution?


I do agree with the consistency thing. One bad culture failure mode is a case where one or two senior executives run amok enforcing their view of culture, while the CEO fails to enforce consistency and a bunch of other executives, directors, etc., just try to “stay out of it” and remain culturally neutral (which isn’t really possible).

I saw this in one company where it was the CTO running amok with an aggressive culture of yelling in meetings, slamming doors, and emphasizing arbitrary deadlines.

I saw it in another org where it was the CPO instead, trying to install Dilberty consultant snake oil with constant reorgs and zero accountability for product managers.


It’s really sad that you are blind to your own severe discrimination. If you think in sweeping generalizations like, “leftists make the mistake...” you really need to step back and spend time not commenting at all and work harder to gain more self-awareness about your own prejudices.

I know you will not like this comment and you will want to knee-jerk reply to “refute” it, but you need to resist that urge and admit you have really significant and really troubling prejudices that come through like a megaphone to others observing you, and spend time just dealing with that.


Where in the world is the “severe discrimination” and prejudice in that comment? You’re knee-jerk character charging without addressing the main points.


This user stalked me from a previous comment chain. Essentially they are bought into critical theory and I'm bought into systems theory so we're talking past each other.

It's kind of amusing given we're presumably after the same thing (reducing power differentials) but by different means.


[flagged]


We've banned this account for repeatedly breaking the site guidelines, including personal attacks and ideological flamewar, and ignoring our many requests to stop. Not cool.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


then consider me called out. For what it's worth, I don't think you are being particularly pleasant either. That's just disliking each other, and I'm OK with that.

> For example, you made a recent comment talking about how almost all software developers are horrible at their jobs and that a small set of gatekeeper senior engineers should guard all code review.

For what it's worth it was something of an ironic joke related to the idea that "half of all people are below average at X", but also pointing at the idea that the industry gets better as the individuals who make it up get better at what they do.

I support the idea that people who are better at a job should help to educate others. I try to humble myself to better engineers than myself.


[flagged]


you're allowed to be biased against sets of ideas. I hold no ill will against leftists and as I've already said, I agree with most of the problems they identify, but I am pointing out a logical issue that I have noticed commonly occurs within the group of people who hold these ideas. I could be totally off-base, I'm not arrogant enough to say that I know the truth for certain - I'm expressing a belief, in sincerity and with positive intent.

I don't think we have anything else to talk about. You don't like the way I think and talk - I accept and am OK with that, we don't have to like each other.


I think you misunderstand where I'm coming from. I agree with the goals of leftists - and with many left criticisms of late capitalism. My criticism comes from wanting those goals to be achieved more effectively. In other words, I think the left has successfully identified many problems in our society but lacks the tools to fix them, and I think I have an idea of what those tools might be (systems thinking primarily).

I don't particularly care about your recommendation to self-crit. I spend enough time trying to criticise my own mental models as it is.

I could cite some individual leftists that I think are doing great work in the space if you would like. I know the phrases I use are over generalising here, so feel free to imply the word "some" in front of the nouns.


hi!!!!!!


Hey there


Posts that masquerade as “legitimate dissenting opinions just not getting a fair shake from the left by golly” but which really seek to undermine structures by which systemic racism can possibly be called out and held accountable for harm (like the framework of microaggressions) deserve to be flagged and shut down. They don’t serve any free speech or intellectual honesty purpose to preserve discourse and the psychological safety to disagree, and they are roundly disingenuous tools for filibustering actual legit discourse and progress. It’s just taking crass tools of hate groups and dressing them up with five dollar words and armchair discussion of academic freedom of expression, when really it’s just hostile noise jamming against progress.


[flagged]


Flamewar comments and personal attacks are not cool and will get you banned here, regardless of how right you are or feel you are. Please review https://news.ycombinator.com/newsguidelines.html and use HN as intended.


sounds suspiciously actually like you don’t know what systemic racism is...


"no u" isn't a very good comeback.

Systemic _anything_ is that thing perpetuated through systemic effects. In the case of racism it's a structural effect that lasts long after the prolonged and intentional oppression of certain minority groups. Things like a lack of generational wealth, de facto segregation, racial profiling in policing and so on.

The thing with systemic effects in general is that they're not perpetuated intentionally, but are emergent properties of relationships within a complex system. There are ways to unpick and understand why these problems are occurring and prevent their continuation, but I'm sorry to say that I'm just not seeing that coming from the progressive movement at the moment. I care deeply about systems theory and its applicability in solving some of our pressing social problems, but I'm not seeing its use in social activism. Instead the tactics of corporate D&I seem actively designed to look good while not really doing anything. Same with microaggression and unconscious bias training; all they're going to do is put well meaning people on the defensive when interacting with marginalised people for fear of saying something offensive, and that's not going to help anybody.


> a lack of generational wealth, de facto segregation, racial profiling in policing and so on.

That's what's always bugged me about people saying "systemic racism". They never could tell me what it is. You are the first person I have read who can actually say something concrete, something I can actually understand.

I had assumed that, because nobody could actually say what it is, that it was just a nebulous term being tossed around to make whites feel guilty. Now I'm going to have to re-think that.

So, congratulations. You made me think. That's about the highest compliment possible on HN...


Thanks, I'm glad I could help.

FWIW sometimes I'm not sure that the people using the term know what it means either. They often use it but then the attempted solution is basically to call out perceived racism in individuals, which will obviously do nothing to help these problems. The root self perpetuating problem is the poverty trap, combined with the racial aspects of segregation and profiling.

Systemic solutions would be things like investing more into inner city schools than the national average, targeted educational and entrepreneurial assistance for enterprising people from poor black backgrounds (I have a sneaking suspicion that many of the current progressive outreach programs are taken up by already-middle-class black people, which doesn't help solve the problem, but I hope I'm wrong) and other measures designed to provide ladders out of poverty to a large number of people at once. You basically need to provide elevated opportunities.


That you feel you are identifying solutions, or even a definition, not understood or pursued by people creating things like microinequity training, just reveals your ignorance.

I think my earlier comment which you described as a “no u” comeback was actually really apt and crystal clearly accurate based on your follow-ups.


You have not provided any evidence for this claim. And in fact, the very concept of "microinequity" (whatever that might mean) is at the opposite end of the spectrum from anything systemic.


K


> Instead the tactics of corporate D&I seem actively designed to look good while not really doing anything

You just described the Shirky Principle :)


Thankfully this getting flagged is still routine. Can you imagine how sickening a place this would be if this kind of post didn’t get flagged? It’s horrific, yet parading around as if it was merely legit discourse that is snubbed unfairly by the left.


This is simply the best kind of humor: ambiguous parody


Robust potential functions are a huge part of ML; many people have researched them for use as loss functions. Their use in ML algorithms predates their use for SLAM, tracking algorithms, etc., which adopted them only after classical ML.

Neural nets typically don’t benefit much from it because you can use batch normalization, dropout and clever activation functions to achieve the same results, by having the network learn diminished sensitivity to outliers that produce neurons which saturate the low end of an activation function.

This is preferable because many of the robust potential functions involve absolute values, order statistics, and other non-differentiable quantities that are hard to put into backpropagation-based optimizers. You would almost always need to relax the loss function to something that trades off smoothness against outlier robustness, where convergence gets slower and slower as you crank the trade-off toward outlier robustness.
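For a concrete example of such a relaxation (the comment doesn't name one; the Huber loss is a standard smooth surrogate for the absolute value, with `delta` controlling exactly this smoothness/robustness trade-off):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic (smooth, easy to backprop) near zero,
    linear (outlier-robust) in the tails.  Smaller delta -> closer to
    |r| and more robust, but with flatter gradients in the tails, which
    is the slower-convergence trade-off described above."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

print(huber(np.array([0.5, 3.0])))  # [0.125 2.5]
```

At `delta=1.0` a residual of 0.5 is penalized quadratically (0.125) while a residual of 3.0 is penalized only linearly (2.5), instead of the 4.5 a pure squared loss would give.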


