Hacker News | devlovstad's comments

AkademikerPension is a quite unusual pension fund, even by Danish standards, because it is quite political in its holdings. This stance is based on surveys of its members, many of whom have indicated that investing their money ethically is as important as getting a good return and minimizing risk. In March 2025, the fund dropped its Tesla shares, citing union-busting, a lack of board independence, and Elon Musk's actions[1].

Last year, AkademikerPension had a return of between 3 and 6 percent, which is lower than that of other Danish pension funds[2].

[1] https://akademikerpension.dk/nyheder/vi-ekskluderer-tesla/ (Danish)

[2] https://akademikerpension.dk/nyheder/afkast-mellem-3-og-6-pr... (Danish)


I've read through most of the first paper mentioned.

Here, the authors have set up two synthetic experiments where transformers have to learn the probability of observing events sampled from a "ground truth" Bayesian model. If the probability the transformer assigns to the event space matches the Bayesian posterior predictive distribution, the authors infer that the model is performing Bayesian inference for these tasks. Furthermore, they use this to argue that transformers perform Bayesian inference in general (belief propagation through the layers).

The transformers are trained on thousands of different "ground truth" Bayesian models, each randomly initialized, which means there is no underlying signal to be learned besides the belief-propagation mechanism itself. This makes me wonder whether any sufficiently powerful maximum-likelihood-based model would meet this criterion of "doing Bayesian inference" in this scenario.

The transformers in this paper do not intrinsically perform inference simply because they are transformers. They perform inference because the optimal solution to the problems in the experiments is precisely to do inference, and transformers are powerful enough to model belief propagation. I find it hard to extrapolate from this that the same is happening in LLMs, for example.
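To make the matching criterion concrete, here is a toy sketch (my own construction, not from the paper) with a Beta-Bernoulli model: when ground-truth coins are drawn from a prior, the optimal prediction for the next flip, averaged over those ground truths, is exactly the Bayesian posterior predictive. So any sufficiently powerful model trained by maximum likelihood across many sampled ground truths is pushed toward it.

```python
import random

# Toy illustration (not from the paper): for coins theta ~ Beta(1, 1)
# = Uniform(0, 1), the Bayesian posterior predictive after k heads in
# n flips is (k + 1) / (n + 2). A Monte Carlo average over sampled
# ground-truth coins recovers the same number, which is the target any
# loss-minimizing predictor converges to in this setup.

def posterior_predictive(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """P(next flip is heads | k heads in n flips) under a Beta(a, b) prior."""
    return (k + a) / (n + a + b)

def empirical_next_flip(k: int, n: int, trials: int = 200_000, seed: int = 0) -> float:
    """Draw theta ~ Uniform(0, 1), keep runs with exactly k of n heads,
    and record how often the next flip comes up heads."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        theta = rng.random()
        heads = sum(rng.random() < theta for _ in range(n))
        if heads == k:
            total += 1
            hits += rng.random() < theta
    return hits / total

print(posterior_predictive(3, 4))   # 0.666... = (3 + 1) / (4 + 2)
print(empirical_next_flip(3, 4))    # ~0.667, matching the posterior predictive
```

By the paper's criterion, the averaging in `empirical_next_flip` already "does Bayesian inference" without any explicit posterior computation, which is the crux of my doubt above.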


uv has made working with different python versions and environments much, much nicer for me. Most of my colleagues in computational genomics use conda, but I've yet to encounter a scenario where I've been unable to just use uv instead.
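For anyone curious, the basic workflow looks roughly like this (package names are just examples):

```shell
# Install a specific interpreter and create an isolated environment
uv python install 3.12
uv venv --python 3.12

# Install packages through the pip-compatible interface
uv pip install numpy pandas

# Run a script; uv resolves the environment for you
uv run analysis.py
```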


I took a course on massively parallel programming, taught by one of the authors of this paper, that used Futhark and CUDA extensively. While I have not used either language since, I have used JAX[1] quite a lot, and what I learned in the course has been very helpful. Many people end up writing code for GPUs through different levels of abstraction, but those who can reason about the semantics through functional primitives may have an easier time understanding what is happening under the hood.
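The functional primitives I mean are map, reduce, and scan, which can be sketched in plain Python (my own toy sketch, not course material). Map is independent per element and so perfectly parallel; reduce over an associative operator parallelizes as a tree; and scan, despite looking sequential, also has a parallel formulation, which is why JAX exposes things like `jax.vmap` and `jax.lax.associative_scan`.

```python
from functools import reduce
from operator import add

xs = [1, 2, 3, 4]

# map: apply a function independently per element (embarrassingly parallel)
squares = list(map(lambda x: x * x, xs))    # [1, 4, 9, 16]

# reduce: combine with an associative operator (parallelizable as a tree)
total = reduce(add, squares)                # 30

def scan(op, xs, init):
    """Inclusive prefix scan: [a, op(a, b), op(op(a, b), c), ...]."""
    out, acc = [], init
    for x in xs:
        acc = op(acc, x)
        out.append(acc)
    return out

# scan: all prefix reductions; sequential here, but parallel on a GPU
prefix_sums = scan(add, xs, 0)              # [1, 3, 6, 10]
```

Once you see a kernel as a composition of these shapes, the memory-access and synchronization behavior of the generated GPU code becomes much easier to predict.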


I think the intended footnote was accidentally left out. Were you talking about this Python library?

https://docs.jax.dev/en/latest/index.html


There's a JAX for AI/ML too:

https://github.com/jax-ml/jax

but yeah no idea which the OP meant


> I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.

PMPH? :)


Slightly OT: I'm a master's student in computer science focusing mostly on machine learning. Still, the best course I've ever taken was one on semantics and types, which presented many of the ideas in this article. Learning proof writing with natural deduction from a ruthlessly rigorous teacher has made me much more precise when writing proofs in general, and learning the theory of computation and logic has given me a good mental model of how my programs execute.
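For a small taste of the natural deduction style (my own toy example, not from the course notes), here is a derivation of the tautology A → (B → A), where each →I step discharges a labeled assumption:

```latex
% Assume A (label 1); B -> A follows by implication introduction with
% a vacuous discharge of B (label 2); a final implication introduction
% discharges A itself.
\[
  \dfrac{\dfrac{[A]^{1}}{B \to A}\;{\to}\mathrm{I}^{2}}
        {A \to (B \to A)}\;{\to}\mathrm{I}^{1}
\]
```

Drilling derivations like this until every discharged assumption is accounted for is exactly what made the course so good for proof hygiene.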

While the course is an elective aimed mostly at students interested in programming languages, I think all computer scientists can benefit from taking one like it. In a time when everyone wants to do AI, the course had only around 12 students out of a class of maybe 200.

Even more OT: Phil Wadler gave a talk at the programming-languages section of my university not long ago, which I was very excited to see. Sadly, he chose to give a vague pop-science talk on AI that felt quite a bit outside his expertise.


Hey, do you have any interesting slides or homework to share? I'd be interested in taking a look.


The course did not use slides; the lecturer wrote everything on a whiteboard. The lecture notes are not public, as far as I know.

The lecturer did suggest the following supplementary material:

- Michael Huth and Mark Ryan. Logic in Computer Science: Modelling and Reasoning about Systems (2nd ed.). Cambridge University Press, 2004. (Mainly chapters 1 and 2.)

- Glynn Winskel. The Formal Semantics of Programming Languages: An Introduction. MIT Press, 1993. (Mainly chapters 1, 2, 3, 6, and 11.)

- Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002. (Mainly chapters 5, 8, 9, 11, and 12.)


While I can accept the notion that we should reject Toyota, I don't really see how Lean is related to it.

Is Lean the cause of Toyota's failure to protect its workers? How can we know that the misfortunes of the actual TPS also apply to the companies merely inspired by it?

