Hacker News new | past | comments | ask | show | jobs | submit | speedplane's comments login

Saw both the movie and the real thing multiple times. The subject of numerous heist fantasies… Even today, though, the MTA guys collecting money from the machines are escorted by NYPD officers conspicuously showing their firearms while giving you the eye. The attitude remains.


The question is whether consciousness is emergent from large enough models, or whether it’ll be something we’ll need to design/build.


“Power is not a material possession that can be given, it is the ability to act. Power must be taken, it is never given.”

“‘This country, with its institutions, belongs to the people who inhabit it. Whenever they shall grow weary of the existing Government, they can exercise their constitutional right of amending it, or their revolutionary right to dismember or overthrow it.’ Abraham Lincoln”


> Power must be taken, it is never given.

Seems self-evident that you should not let anyone who says this become a leader. And similarly, the problem solves itself if you elect someone who doesn't believe it.


> Instead, we should be using game-style "data oriented programming" a.k.a. "column databases" for a much higher performance.

This makes logical sense, but I don’t buy it in practice. Most of the heavy data reads are handled by databases, which do optimize for this stuff. I just doubt that, in most software, a significant share of performance issues stems from poor memory layout of data structures.


Cache misses can lead to major slowdowns.

Everything depends on the access patterns in critical loops. If you need most of the fields of a struct in each iteration, the classic way is beneficial. If you need a narrow subset (a "column") of fields in a large collection of objects, splitting objects into these columns speeds things up.
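A minimal sketch of the two layouts described above, in Python (the field names are just illustrative). CPython's interpreter overhead masks most of the cache effect, so treat this as a shape comparison rather than a benchmark; the real wins come from columnar layouts in systems languages or array libraries.

```python
# Array-of-structs ("classic" layout): each object carries all its fields.
particles_aos = [{"x": float(i), "y": 0.0, "mass": 1.0} for i in range(1000)]

# Struct-of-arrays ("columns"): one contiguous sequence per field.
particles_soa = {
    "x": [float(i) for i in range(1000)],
    "y": [0.0] * 1000,
    "mass": [1.0] * 1000,
}

# A loop that only needs "x" touches just that one column in the SoA layout,
# instead of striding over whole objects as in the AoS layout.
total_aos = sum(p["x"] for p in particles_aos)
total_soa = sum(particles_soa["x"])
assert total_aos == total_soa
```

In the SoA form, a pass over one field reads memory sequentially with nothing else interleaved, which is exactly the access pattern caches and prefetchers reward.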


The most upvoted Stack Overflow question is about branch prediction (which is caching, in a way):

https://stackoverflow.com/questions/11227809/why-is-processi...


When your datasets are quite large, a SQL database becomes an absurd bottleneck. Think of weather or aerodynamics simulations.


> I don't think that being able to replace a library at runtime is a useful enough feature to justify the high maintenance cost of shared libraries.

We’re moving to a world where every program is containerized or sandboxed. There is no more real “sharing” of libraries, everything gets “statically linked” in a Docker image anyway.


I bet someone will invent shared content for Docker containers in the next few years as a disk-saving measure.


That's called volume mounts and/or base images?


If I do an `apt-get install` of the same packages in different containers, with anything different at all before it in the Dockerfiles, there's no way to share a common layer that includes the package contents between those images.

You could volume mount /usr and chunks of /var to help, but I'm not sure that would be sane.


There are storage systems which do this: de-dupe at the disk level.


In the cloud. Otherwise it is very seldom justified.


Not using Docker, but Snap/Flatpak use similar approaches.


Snap is one really sick idea. I do not understand how it could ever take off.


> Can someone explain the value of such a list, am I missing something?

You can glean information about your competitors’ usage and advertising, which can help you optimize your own app’s advertising spend.


Scale it


> There are literally savings accounts with interest rate over my mortgage interest, meaning there is a zero risk arbitrage opportunity.

Many utilities give an 8-9% interest rate on bonds, with very little risk (Comcast isn't going bankrupt soon). That said, there's no such thing as zero-risk arbitrage.


When the savings in the account are state-guaranteed, the risk is basically limited by the credit rating of the nation. It’s never zero, but for stable economies it’s as low as it gets.


> Wealth inequality is not a problem in itself. Poverty or theft is.

Inequality leads to power centers. Power centers lead to systemic corruption. Systemic corruption is a problem.

So yes, extreme inequality is itself a problem.


> Inequality leads to power centers

Yep, though I'd flip it: Concentrations of power create opportunities for wealth inequality. Or they're not even different things.

Whenever something is centralized (Google), or a chokepoint is created (Suez, Panama), or network effects stabilize a monopoly position, then the people who control those contested single points of failure become wealthy; their wealth is their control of those resources (e.g., Bezos' shares in Amazon). Prices are Lagrange multipliers; they're forces impinging on a constraint (e.g., limited land in SV).

Whereas systems that are more local and distributed create less concentration of power and wealth. Unfortunately they also tend to be less efficient. Where they win is in redundancy and robustness. I suppose this means that if you want more of the latter, then you need more chaotic conditions. Which might be even worse.


Why focus on a politically fraught issue like taxes/wealth redistribution as a path to addressing issues which are not constrained by party politics? Surely it would be easier to gain a broad base of bipartisan support for legislation addressing corruption, if corruption is the problem, or concentration of power, if that's the problem.

It seems to me that wealth inequality is nothing more than a handy wedge issue. The concept of 'fairness' is baked into our monkey brains[0], which makes it an easy mechanism for engendering anger, in the hopes of turning anger into votes.

[0] https://www.pnas.org/content/110/6/2070


> It is often best to avoid non-ASCII characters in source code. ...

>> Depends on the language. In Python 3, files are expected to be utf8 by default, and you can change that by adding a "# coding: <charset>" header.

It's interesting that many languages avoid unicode and non-ASCII text, yet they make assumptions about the file and directory structure of the underlying system. It's as if interpreting directory and file system structures is "okay", but interpreting file formats is not.

> In fact, it's one of the reasons it was a breaking release in the first place, and being able to put non-ASCII characters in strings and comments in my source code are a huge plus.

Sorry, but as a Python dev that went from 2 to 3, yes native unicode features are nice, but no, it was not worth breaking two decades of existing code.


As somebody living in Europe, I think it's a perspective you can have only if you live mostly in an English speaking world.

Up to 2015, my laptop was plagued with Python software (and others) crashing because they didn't handle unicode properly. Reading from my "Vidéos" directory, entering my first name, dealing with the "février" month in date parsing...

For the USA, things just work most of the time. For us, it's a nightmare. It's even worse because most software is produced by English speaking people that have your attitude, and are completely oblivious about the crashing bugs they introduce on a daily basis.

In Asia it's even more terrible.

And I've heard the people saying you can perfectly code unicode aware software in Python 2. Yes, you can, just like you can code memory safe code in C.

In fact, just teaching Python 2 is painful in Europe. The students write their name in a comment? Crashes if you forget the encoding header. Using raw_input() with unicode to ask a question? Crashes. Reading bytes from an image file? You get back a string object. Got garbage bytes in a string object? It silently concatenates with a valid string and produces garbage.
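For comparison, a tiny Python 3 sketch of the failure modes listed above no longer failing (the names and folder are just illustrative):

```python
# In Python 3, str is unicode by default: accented text needs no encoding header.
name = "février"    # a month name with an accent
folder = "Vidéos"   # an accented directory name

# Encoding to bytes and back is explicit and lossless.
assert folder.encode("utf-8").decode("utf-8") == folder

# Bytes and str no longer mix silently: concatenation raises instead of
# producing garbage.
try:
    _ = "clip: " + folder.encode("utf-8")
except TypeError:
    pass  # Python 3 refuses, instead of silently corrupting the string
```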


> As somebody living in Europe, I think it's a perspective you can have only if you live mostly in an English speaking world.

I live in Europe and I (mostly) agree that (most) code shouldn't (usually) contain any codepoint greater than 127. It's a simple matter of reducing the surface area of possible problems. Code needs to work across machines and platforms, and it's basically guaranteed that someone, somewhere is going to screw up the encoding. I know it shouldn't happen, but it will happen, and ASCII mostly works around that problem.

Another issue is readability. I know ASCII by heart, but I can't memorize Unicode. If identifiers were allowed to contain random characters, it would make my job harder for basically no reason.

Furthermore, the entire ASCII charset (or at least the subset of ASCII that's relevant for source code) can easily be typed from a keyboard. Almost everyone I know uses the US layout for programming because European ones frankly just suck, and that means typing accents, diacritics, emoji or what have you is harder, for no real discernible reason.

String handling is a different issue, and I agree that native functions for a modern language should be able to handle utf8 without crashing. The above only applies to the literal content of source files.


I understand not wanting utf8 for identifiers, as you type them often and want the lowest common keyboard-layout denominator, but for comments and hardcoded strings, certainly not.

Otherwise, Chinese coders can't put their names in comments? Or do they need to invent ASCII names for themselves, like we forced them to do in the past for immigration?

And no hardcoded values for any string in scripts then? I mean, names, cities, user messages and labels, all in config files even for the smallest quick-and-dirty script, because you can't type "Joyeux Noël" in a string in your file? Can't hardcode my "Téléchargements" folder when doing some batch work in my shell?

Do I have to write all my monetary symbols with escape sequences? Keep a table of constants filled with EURO_SIGN=b'\xe2\x82\xac' instead of using "€"? Same for emoji?

I disagree, and I'm certainly glad that not only Python3 allows this, but do it in a way that is actually stable and safe to do. I rarely have encoding issues nowadays, and when I do, it's always because something outside is doing something fishy.

Utf8 in my code file, yes please.


Lol, just realized my table should be "EURO_SIGN=b'\xe2\x82\xac'.decode('utf8')".

Damn, a perfect example of why it's nice to be able to just write "€" in my file. It also means fewer bugs.


If we're still talking about Python 3, you can also write,

  EURO_SIGN = '\N{EURO SIGN}'
Which is ASCII, and very clear about which character the escape is. I wish more languages had \N. That said, I'm also fine with a coder choosing to write a literal "€" — that's just as clear, to me.

While \N only works in unicode literals (u'\N{EURO SIGN}') in Python 2, explicitly declaring the file encoding is backwards compatible, and you can have a UTF-8 encoded source file with Python 2 that way.

I'd also note the box drawing characters are extremely useful for diagrams in comments.

(And… it's 2021. If someone's tooling can't handle UTF-8, it's time for them to get with the program, not for us to waste time catering to them. There's enough real work to be done as it is…)
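All the spellings discussed above name the same character; a quick Python 3 check:

```python
# Three ways to spell the euro sign in Python 3 source; all compare equal.
by_name = "\N{EURO SIGN}"                   # ASCII-only named escape
by_bytes = b"\xe2\x82\xac".decode("utf-8")  # the utf-8 byte sequence
literal = "€"                               # a literal in a utf-8 source file

assert by_name == by_bytes == literal
assert ord(by_name) == 0x20AC               # U+20AC EURO SIGN
```

The named escape keeps the source file pure ASCII while staying self-documenting, which is the trade-off being argued for above.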


You can do Unicode in Python 2. You can do Unicode faster and easier in Python 3. But by gaining that ability, they set the existing community back by 10 years.

