Hacker News new | past | comments | ask | show | jobs | submit | notpachet's comments login

Some of us are still shipping Redux apps with React as the V and life is good.


> this could be the moment that Google realises it needs to stick to what it knows best - Search

You misspelled "ads"


It's HN so you can't be entirely sure.


Aren't we already doing that?


> In a capitalist system, with enough bad decisions, your company might go bankrupt.

Or your company might lobby against regulations that safeguard public health. Your company might make carcinogenic products and pressure government groups to downplay their effects[0]. Your company might consistently resist efforts to install better safety measures for chemical transport, resulting in hundreds of gallons of extremely toxic chemicals spilling and catching fire in your neighborhood[1]. Your company might contribute to pushing humanity into a doom loop of global externalities that we can't escape.

In other words, the separation of "money" from "physical power" cuts both ways. Where it breaks down in capitalism is when corporations are able to use their money to evade or neuter the government's physical power so that they can continue doing harm.

[0] https://en.wikipedia.org/wiki/Monsanto [1] https://en.wikipedia.org/wiki/East_Palestine,_Ohio,_train_de...


They can't use their money to evade or neuter the government's physical power. The people they pay are the government. The government is the problem in that situation, as it's responsible for making sure those things don't happen.


> The government is the problem in that situation, as it's responsible for making sure those things don't happen.

The government would certainly be complicit in this scenario, but is the corporation doing the act not still responsible for its actions? The corporation is still the actor motivated to do "the bad thing" (i.e. cut safety features to save money), and it then lobbies the complicit government to turn a blind eye. The phrasing here seems to imply that corporations can do whatever they want as long as they play by the rules, and then puts the full blame on the government for either failing to provide a "perfect" ruleset or being actively complicit in turning a blind eye.

I know that technically "doing whatever they want as long as they play by the rules" is, in practice, just following the letter of the law, but do they not bear some degree of responsibility beyond that?


I would say: the state has all the power in the situation. All the company can do is present its position as well as possible, and as long as they're not breaking the law (e.g. bribery, when it would be joint culpability) then that's fine. The company is there to provide value for its customers and shareholders, and employment for its employees. The state's whole job, for which it takes free money from every transaction in its country, is to be impartial. Outside of bribery, at best its culpability is 99%. At worst it's 100%.


> some people seem to have wildly different thresholds for the propositional coherence they can spot

This sums up the last decade remarkably well.


They attach electrodes to your head and monitor the alpha waves.


Just be careful not to do any inadvertent organizational damage from up there.


Exactly. It's that damned JS "class" keyword! ...right?


Actually, yes, I think stuff like this makes programming hard. A half-assed implementation of "class" that doesn't behave like a class brings unnecessary confusion. Programming in the real world is full of these details that you have to know to be productive. 0.1 + 0.2 = 0.30000000000000004 in many languages is another one.
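The 0.1 + 0.2 example is easy to reproduce in JavaScript, whose numbers are IEEE 754 doubles:

```javascript
// The classic floating-point surprise: 0.1 and 0.2 have no exact
// binary representation, so the rounding error shows up in the sum.
console.log(0.1 + 0.2);              // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);      // false

// The stored value of 0.1 is actually slightly above 0.1:
console.log((0.1).toPrecision(21));  // "0.100000000000000005551"
```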

(And semicolons are ugly and I avoid them wherever I can get away with it, but no, they are probably not the reason.)


I agree that the JS implementation of "class" is bolted on and obscures the underlying prototypal inheritance, and that this kind of thing makes programming harder. I wish JS had leaned further into the theory of prototypes, possibly discovering new ideas there, instead of pretending to use the same inheritance scheme as other languages (although perhaps we should have expected that from a language whose literal name came from bandwagoning Java).

The way to reduce this difficulty is to make better programming languages, by improving the underlying theory of programming language design, software engineering, etc. Cleaner, purer languages, closer to the math (math being the study of self-consistent systems). This is the opposite direction from "low code". It's more like "high code". Low code is chock full of this kind of poorly thought-out, bolted-on, leaky, inconsistent abstraction, because its entire point is to eschew ivory-tower theory; it avoids the math, and so becomes internally inconsistent and full of extraneous complexity.
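A quick sketch of the complaint (names here are made up): `class` is syntactic sugar over the prototype chain rather than a separate inheritance model, and the sugar leaks in small ways.

```javascript
// Hypothetical example: a class and its rough prototype-based equivalent.
class Dog {
  constructor(name) { this.name = name; }
  speak() { return `${this.name} barks`; }
}

// Roughly what the class desugars to, written against the prototype directly:
function Cat(name) { this.name = name; }
Cat.prototype.speak = function () { return `${this.name} meows`; };

// Both dispatch the same way, through the prototype chain:
const d = new Dog("Rex");
const c = new Cat("Tom");
console.log(d.speak());                                  // "Rex barks"
console.log(c.speak());                                  // "Tom meows"
console.log(Object.getPrototypeOf(d) === Dog.prototype); // true
console.log(Object.getPrototypeOf(c) === Cat.prototype); // true

// ...but the sugar hides differences: class methods are non-enumerable,
// while manually assigned prototype methods are enumerable.
console.log(Object.keys(Cat.prototype)); // ["speak"]
console.log(Object.keys(Dog.prototype)); // []
```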

I also agree that 0.1 + 0.2 != 0.3 is another thing that makes programming hard. This is intrinsic complexity, because it is a fundamental limitation of how computers represent numbers. The way around it is -- you guessed it -- better programming languages that help you "fall into the pit of success". Perhaps floating-point equality comparisons should even be a compiler error. Again, low code goes the opposite direction, by simply pretending this kind of fundamental complexity doesn't exist. You are given no power to keep it from biting you, nor to figure out what's going on when it does. Low code's entire premise is that you shouldn't need to understand how computers work in order to program them, but understanding how floating-point numbers are represented is exactly how you avoid this issue.
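As a sketch of what "pit of success" tooling could steer you toward (the function name is made up): if exact `===` on floats is a footgun, a tolerance-based comparison is the usual escape hatch.

```javascript
// Compare floats within a tolerance instead of exactly. The tolerance is
// scaled by the magnitudes involved so it works for large values too.
function nearlyEqual(a, b, eps = 1e-9) {
  return Math.abs(a - b) <= eps * Math.max(1, Math.abs(a), Math.abs(b));
}

console.log(0.1 + 0.2 === 0.3);           // false: exact comparison bites
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(nearlyEqual(1.0, 1.1));       // false: genuinely different
```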


I think it is pessimistic to say that number precision is a problem fundamental to computing. The bitter lesson gives me hope that someday no one will have to care about the limits of non-arbitrary-precision math. A great platform could simplify programming that much.


I suspect that if you dive deeply into arbitrary-precision math (although I don't mean to assume you haven't), you'll probably find that a programming language that supports such a thing forces quite a bit more thought into exactly what numbers are and how they work. Arbitrarily precise arithmetic is deeply related to computability theory and even the fundamental nature of math (e.g. constructivism). A language that tried to ignore this connection would fail as soon as someone tried to compare (Pi / 2) * 2 == Pi; such a comparison would run out of memory on all finite computers. In fact it's not clear that such a language could support Pi or the exponential function at all.

A language that was built around the philosophy of constructivist math in order to allow arbitrary precision arithmetic would basically treat every number as a function that takes a desired precision and returns an approximation to within that precision, or something very similar to that. All numbers are constructed up to the precision they're needed, when they're needed. But it would still not be able to evaluate whether (Pi / 2) * 2 == Pi exactly in finite time -- you could only ask if they were equal up to some number of digits (arbitrarily large, but at a computational cost). If you calculate some complex value involving exponentials and cosines and transcendentals using floating point, you can just store the result and pass it off to others to use. If you do it with arbitrary precision, you never can, unless you know ahead of time the precision that they're going to need. There are no numbers: only functions. You could probably even come up with a number that suddenly fails at the 900th digit, which works perfectly fine until someone compares it to a transcendental in a completely different part of the software and it blows up.
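A toy sketch of that idea (all names hypothetical): represent a real number x as a function that, given n, returns a BigInt approximation of x·10^n. Equality then only ever exists "up to n digits", exactly as described above.

```javascript
// A constructive real: n => an integer k such that k/10^n approximates x.
const oneThird = (n) => 10n ** BigInt(n) / 3n;

// Adding two such numbers: evaluate both with two guard digits so the
// accumulated rounding error stays below one unit in the requested place.
const add = (a, b) => (n) => (a(n + 2) + b(n + 2)) / 100n;

// There is no exact ==; we can only compare out to n digits.
const equalToDigits = (a, b, n) => {
  const diff = a(n) - b(n);
  return diff >= -1n && diff <= 1n;
};

const twoThirds = add(oneThird, oneThird);
const directTwoThirds = (n) => 2n * 10n ** BigInt(n) / 3n;
console.log(equalToDigits(twoThirds, directTwoThirds, 50)); // true
```

Note that `equalToDigits` at a higher n means more computation, and a true equality check would never terminate -- which is the point being made.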

This does not sound like it's simplifying anything. Genuinely, a healthily sized floating-point format is the simplest way to represent non-integer math; this is why Excel, many programming languages, and most science and engineering software use it as their only non-integer number format. It's actually hard to come up with a situation where arbitrary precision is what users need; if it really seems like you need it, then you probably want a symbolic math package like Mathematica/Wolfram Alpha (or MATLAB's symbolic toolbox) instead.


I'm sorry, but 0.1+0.2 != 0.3 is fundamental. It creates difficulty, but you are not capable of doing math in a computer if you don't understand it and why it happens. Even if your environment uses decimals, rationals, or whatever.

SQL's `numeric` type makes the right choice here, putting the problem right up front so you can't ignore it.

That said, I completely agree with your main point. Modern software development is almost completely made of unnecessary complexity.


> it will probably mostly help big picture architecture devs compete with people who are really good at Leetcode type algorithm stuff.

The competition should be happening in the other direction.

