Hacker News | toth's comments

> An example you'll see in, say, a modern C compiler is that if you write the obvious loop to calculate how many bits are set in an int, the actual machine code on a brand new CPU should be a single population-count instruction. C provides neither intrinsics (like Rust) nor a dedicated "popcount" feature, so you can't write that, but it's obviously what you want here, and yup, an optimising C compiler will do that.

C compilers definitely have intrinsics for this; in GCC, for instance, it is `__builtin_popcount`.

And there is even standard language support for it since C23: `stdc_count_ones` [1]. In C++ you have `std::popcount` [2].

[1] https://en.cppreference.com/w/c/numeric/bit_manip.html [2] https://en.cppreference.com/w/cpp/numeric/popcount.html


The existence of platform-specific hacks is not interesting. In reality, software that has at any point cared about being portable doesn't use them.

But yes stdc_count_ones is indeed the intrinsic you'd want here, and only a few years after I stopped writing C, so thanks for mentioning that.

std::popcount is C++, but it's also kind of miserable that it took until C++20, and even then they landed it only for the unsigned integer types. That's despite C++20 also insisting that signed integers have two's complement representation, so the signed integers do have these desirable properties in fact; you just can't use them.


> In reality what happens is that software which has at any point cared about being portable doesn't use them.

I don't think this generalization is actually true. Fast portable software compiles conditionally based on the target platform, picking the fast platform-specific intrinsic, and falls back to a slow but guaranteed portable software implementation. This pattern is widespread in numerical linear algebra, media codecs, data compressors, encryption, graphics, etc.


Maybe we are just quibbling over semantics, but the compiler intrinsic here is `__builtin_popcount`. `stdc_count_ones` is a standard library facility that will presumably be implemented using the intrinsic.

And FWIW, all major C/C++ compilers have long had an intrinsic for this. In Clang it even has the same name; in Visual Studio it's something like `__popcnt`. So it has long been easy to roll your own macro that works everywhere.


Yes, just semantics. But I don't think I can agree that because you could have ensured this works portably people actually did. That's not been my experience.

Yesterday I watched that "Sea of Thieves" C++14-to-C++20 upgrade story on YouTube; that feels much more like what I've seen - code that shouldn't have worked but did, kept alive by people whose priority is a working game.


__builtin_popcount is not platform specific.


OK, sure, vendor specific then. C23 does not promise this incantation, it's presumably a GCCism.


I think you shared the wrong link. Based on a quick youtube search I think you meant this one

https://youtu.be/EL7Au1tzNxE


For PyTorch the analogue is Named Tensors, but it's a provisional feature and not supported everywhere.

https://docs.pytorch.org/docs/stable/named_tensor.html


That seems uncharitable. I for one enjoy the tidbits he posts in his blog.


> In a different world, we settled on a system of notation based on continued fractions rather than decimals for writing non-integers. In this world, nobody marvels at the irregularity of pi or e, the fact that they seem to go on for ever without a pattern - both numbers have elegant and regular representations as infinite sums of fractions.

There are some less widely-known topics in math that seem to make some of those who learn them want to "evangelize" about them and wish they had a bigger, starring role. Continued fractions are one.

Now, don't get me wrong. Continued fractions are very cool, and some of the associated results are very beautiful. More people should know about them. But they will never be a viable alternative to decimals. Computation is too hard with them, for one.

Also, while e has a nice regular continued fraction expansion [1], that is not the case for pi [2]. There is no known formula for the terms, they are as irregular as the decimal digits. There are nice simple formulas for pi as infinite sums of fractions (simplest is probably [3]) but those are not continued fractions.

[1] https://oeis.org/A003417 [2] https://oeis.org/A001203 [3] https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
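To spell out the contrast (the expansions here are the ones from the OEIS entries above):

```latex
e = [2;\, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \dots]
  = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cdots}}}}}

\pi = [3;\, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, \dots]
\quad \text{(no known pattern in the terms)}

\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
\quad \text{(Leibniz: an infinite sum, not a continued fraction)}
```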


> But they never will be a viable alternative to decimals. Computation is too hard with them for one.

I think this is too narrow-minded.

You could make the same argument for ideograms vs alphabetic writing: that one is clearly superior and you could never have a technological superpower relying primarily on the other. Yet, thanks to historical path dependency, we actually have both.

I could imagine a world where the SI system never took off in engineering, due to stubborn people at inopportune moments. Engineers and physicists would still get their jobs done in imperial units, just like American carpenters do today.

Also I did elide the distinction between continued fractions and infinite sums of fractions, but again we can use our imagination and say that if continued fractions were commonplace, we'd all be a lot more familiar with the infinite sums too.


Well, actually... the "main" function is handled specially in the standard. It is the only function with a non-void return type from which you don't need to explicitly return - if control reaches the end of main, it is treated as if you had returned 0. (Fall off the end of any other non-void function and you get undefined behavior, and most definitely a compiler warning.)

You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.

And, for what it's worth, the uber-bean counter didn't miss a bean here...


I can't understand how you reach your conclusion.

At present, if you have financial capital and need intellectual capital you need to find people willing to work for you and pay them a lot of money. With enough progress in AI you can get the intellectual capital from machines instead, for a lot less. What loses value is human intellectual capital. Financial capital just gained a lot of power, it can now substitute for intellectual capital.

Sure, you could pretend this means you'll be able to launch a startup without any employees, and so will everyone. But why wouldn't Sam Altman or whomever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?


> But why wouldn't Sam Altman or whomever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?

AI is useful in the same way as Linux:

- can run locally

- empowers everyone

- need to bring your own problem

- need to do some of the work yourself

The moral is that you need to bring your own problem to benefit. The model by itself does not generate much benefit. This means AI benefits are distributed like open-source ones.


Those points are true of current AI models, but how sure are you they will remain true as technology evolves?

Maybe you believe that they will always stay true, that there's some ineffable human quality that will never be captured by AI and value creation will always be bottle-necked by humans. That would be nice.

But even if you still need humans in the loop, it's not clear how "democratizing" this would be. It might sound great if in a few years you and everyone else can run an AI on your laptop that is as good as a great technical co-founder that never sleeps. But note that this means someone who owns a data-center can run the equivalent of the entire current technical staff of Google, Meta, and OpenAI combined. Doesn't sound like a very level playing field.


Very nice, didn't know about that one!

In a similar vein, Ramanujan famously proved that e^(sqrt(67) pi) is an integer.

And obviously exp(i pi) is an integer as well, but that's less fun.

(Note: only one of the above claims is correct)


The number you are looking for is e^(sqrt(163) pi). According to Wikipedia:

In a 1975 April Fool article in Scientific American magazine,[8] "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it – hence its name.

It is not an integer of course.


Actually `e^(sqrt(n) pi)` is very close to being an integer for a couple of different `n`s, including 67 and 163. For 163 it's much closer to an integer, but for 67 you get something you can easily check is close to an integer in double-precision floats, so I thought it worked better as a joke answer :)

FYI, the reason you get these almost integers is related to the `n`s being Heegner numbers, see https://en.wikipedia.org/wiki/Heegner_number.


> The number you are looking for is e^(sqrt(163) pi) […] It is not an integer of course.

Of course? I’m not aware that we have some theorem other than “we computed it to lots of decimals, and it isn’t an integer” from which that follows.


It's not really "of course", and I don't think we have such a theorem in general. But in this case, I believe the fact that it's not an integer follows from the same theorem that says it's very close to an integer. See eg https://math.stackexchange.com/questions/4544/why-is-e-pi-sq...

Basically e^(sqrt(163)*pi) is the leading term in a Laurent series for an integer, and the other (non-integer) terms are really small but not zero.
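To sketch the mechanism (this follows the standard presentation of the j-invariant's q-expansion; see the linked answer for details):

```latex
j(\tau) = \frac{1}{q} + 744 + 196884\,q + 21493760\,q^2 + \cdots,
\qquad q = e^{2\pi i \tau}.

\text{At } \tau = \tfrac{1 + \sqrt{-163}}{2}:\qquad
q = -e^{-\pi\sqrt{163}}, \qquad j(\tau) = -640320^3,

\text{so}\qquad
e^{\pi\sqrt{163}} = 640320^3 + 744 - 196884\,e^{-\pi\sqrt{163}} + \cdots
```

The correction term 196884 e^(-pi sqrt(163)) is about 7.5e-13: tiny, but nonzero, which is exactly why the number is almost, yet not exactly, an integer.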


You didn't know that one because it's a lie. He's telling lies.


Charitably it was a joke, as was my quip about `e^(sqrt(67) pi)`. It is a funnier joke without a disclaimer at the end, but unlike GP I couldn't bring myself to leave one out and potentially mislead some people...

What I meant was that I didn't know that `e^pi - pi` is another transcendental expression that is very close to an integer. You might think this is just an uninteresting coincidence, but there's some interesting mathematics around such "almost integers". Wikipedia has a quick overview [1]. I didn't realize it before, but they have GP's example and also the awesome `e + pi + e pi + e^pi + pi^e ~= 60`.

[1] https://en.wikipedia.org/wiki/Almost_integer


> Are you just being overawed by IQ tests, which are notorious for measuring only ability to pass IQ tests?

People like to say things like this, but nothing could be further from the truth. There is a vast literature showing that IQ predicts things like job performance, school performance, income and wealth [1]. IQ is highly persistent across time for fixed individuals. Yes, "intelligence" is not a precisely defined concept, but that doesn't mean it isn't real. A lot of useful concepts have some vagueness about them, even "height", to take the example parodied in the OP.

And "super intelligence" is admittedly even vaguer, it just means sufficiently smarter than humans. If you do have a problem with that presentation just think of specific capabilities a "super intelligence" would be expected to have. For instance, the ability to attain super-human performance in a game (e.g., chess or go) that it had never seen before. The ability to produce fully functional highly complex software from a natural language spec in instants. The ability to outperform any human at any white-collar job without being specifically trained for it.

Are you confident that a machine with all those capabilities is impossible?

[1] https://en.wikipedia.org/wiki/Intelligence_quotient#Social_c...


I think those capabilities are made out of ideas. I think yes, a machine could have ideas, and then it would be a person, and an anticlimax.


> and then it would be a person, and an anticlimax.

It might be a person (but: can you prove those things are sufficient for personhood?), but even then it sure isn't a human person.

And what do you mean by an anticlimax?

To circumvent any question of what it takes to make an AI work, let's posit a brain upload. Just one person, so it's a memetic monoculture — no matter how many instances you make, they'll all have the same skills and same flaws.

Transistors are faster than synapses by about the same ratio by which a marathon runner is faster than continental drift, so even if the only difference is speed, not quality of thought, that's such a huge chasm of difference that I can't see how it would be an anticlimax, even if the original person is extraordinarily lazy.


You make a valid point, but I feel there is something in the direction the article is gesturing at...

The mean of the n-dimensional gaussian is an element of R^n, an unbounded space. There's no uninformed prior over this space, so there is always a choice of origin implicit in some way...

As you say, you can shrink towards any point and you get a valid James-Stein estimator that is strictly better than the naive estimator. But if you send the point you are shrinking towards to infinity you get the naive estimator again. So it feels like the fact you are implicitly selecting a finite chunk of R^n around an origin plays a role in the paradox...
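Concretely, for the standard setup x ~ N_n(theta, I_n) with n >= 3, the family of estimators I mean is (a sketch, writing nu for the shrink target):

```latex
\hat{\theta}_{\nu}(x) \;=\; \nu + \left(1 - \frac{n-2}{\lVert x - \nu \rVert^2}\right)(x - \nu).
```

Every choice of nu gives an estimator whose risk is below that of the naive estimator x for all theta; but for fixed x, sending the norm of nu to infinity drives the shrink factor to 1 and the correction term to zero, recovering x in the limit.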


> But if you send the point you are shrinking towards to infinity you get the naive estimator again.

You get close to it but strictly speaking wouldn’t it always be better than the naive estimator?


Right, it's a limit at infinity


> There's no uninformed prior over this space, so there is always a choice of origin implicit in some way...

You could use an uninformed improper prior.


You would just need to come up with a way to pick a point at random uniformly from an unbounded space.


You can just use the function that is constantly 1 everywhere as your improper prior.

Improper priors are not distributions so they don't need to integrate to 1. You cannot sample from them. However, you can still apply Bayes' rule using improper priors and you usually get a posterior distribution that is proper.
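For instance, with a flat improper prior on the mean of a Gaussian (a standard textbook computation):

```latex
\pi(\theta) \propto 1
\quad\Longrightarrow\quad
p(\theta \mid x) \;\propto\; p(x \mid \theta)\,\pi(\theta)
\;\propto\; \exp\!\left(-\tfrac{1}{2\sigma^2}\lVert x - \theta \rVert^2\right),
```

i.e. the posterior is N(x, sigma^2 I): a proper distribution, even though the prior can't be sampled from, and its mean is exactly the naive estimator.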


Sure.

The point is that you wrote that « you can pick any point […] » and when toth pointed out that « there is always a choice of origin implicit in some way » you replied that « you could use an uninformed improper prior. »

However, it seems that we agree that you cannot pick a point using an uninformed improper prior - and in any method for picking a point there will be an implicit departure from that (improper) uniform distribution.


Oh.

When I said "you can pick any point P", I meant universal quantification, i.e "for all points P", rather than a randomly chosen P.

I did say "choose P", which was pretty bad phrasing on my part.

