michielderhaeg's comments

I do agree that the pace is reasonable. Both with opaque pointers and the new pass manager API there was quite a long transition period where both were supported before the old system was deprecated/removed. But with every new non-patch release, some part of the public API is changed and we have to adapt.


> But with every new non-patch release, some part of the public API is changed and we have to adapt.

bumping only on major releases is a recipe for pain and misery. every project i'm familiar with bumps weekly against HEAD. much more manageable.


I think the bitcode parsers (binary and textual) are quite backwards compatible. Old bitcode IR will be upgraded transparently. So you might not have to keep up.


Absolutely not. They do invasive changes all the time. At work we maintain an LLVM compatibility library that provides a stable interface across different LLVM versions. It has grown quite big over the years. The pure C interface is in comparison much more stable, but also quite limited.

Not just the API changes, the LLVM IR syntax can change drastically as well. E.g. not too long ago they switched from typed pointers to opaque pointers. `i32*` just becomes `ptr`. [1]

[1] https://llvm.org/docs/OpaquePointers.html
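For illustration, here's what the change looks like for one function (a sketch based on the doc above, not taken from any real codebase):

```llvm
; LLVM <= 14: typed pointers
define i32 @load_val(i32* %p) {
  %v = load i32, i32* %p
  ret i32 %v
}

; LLVM >= 15: opaque pointers -- every pointer is just `ptr`,
; and the pointee type moves onto the operation itself
define i32 @load_val(ptr %p) {
  %v = load i32, ptr %p
  ret i32 %v
}
```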


IR can be auto-upgraded between LLVM versions.


citation? i work on LLVM (like llvm/llvm-project) and i'm not aware of such a tool.


The textual IR is not backwards compatible, but the bitcode format has been best-effort auto-upgradeable for as long as I've been involved (2006). The policy is documented here: https://llvm.org/docs/DeveloperPolicy.html#ir-backwards-comp...


bitcode policy i'm aware of - it's the "IR" part that caught my attention.


a pretty evident case of not almost got caught


This reminds me of when I was still in university. During our compilers course we used the "Modern Compiler Implementation in Java" book. Because I was really into FP at the time, I instead used the "Modern Compiler Implementation in ML" version of this book (which was the original; the Java and C versions were made to make it more accessible). We noticed during the course that my version was missing a whole chapter on visitors, likely because ML's pattern matching makes this kind of thing trivial.

One of the other students made a similar remark, that software design patterns are sometimes failures of the language when they have no concise way of expressing these kinds of concepts.
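Modern Java has since grown the same capability. Here's a hypothetical AST sketched with sealed interfaces (Java 17+) and pattern-matching switch (Java 21+), which gets what the ML version gets from pattern matching, with no visitor machinery; all names are illustrative:

```java
// Hypothetical expression AST: the sealed hierarchy plus an
// exhaustive switch replaces the visitor pattern entirely.
sealed interface Expr permits Num, Add {}
record Num(int value) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}

class Eval {
    static int eval(Expr e) {
        // Exhaustive over the sealed hierarchy: no default needed,
        // and adding a new node type becomes a compile error here.
        return switch (e) {
            case Num n -> n.value();
            case Add a -> eval(a.left()) + eval(a.right());
        };
    }

    public static void main(String[] args) {
        Expr e = new Add(new Num(1), new Add(new Num(2), new Num(3)));
        System.out.println(eval(e)); // prints 6
    }
}
```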


«Design patterns are bug reports against your programming language» (Peter Norvig)

This is pithy, but only applies to some rote design patterns which you have to code every time because the language lacks a good way to factor them out.

Wider-scale patterns often express key approaches enabled by the language, and they can't go into a one-size-fits-all library or language feature, because you tailor their implementation to your specific needs. They are less like the repetitive pattern of a brick wall, and more like themes or motifs that keep reappearing because they are fruitful.


Yes, I wouldn't want all of these https://en.wikipedia.org/wiki/Software_design_pattern to be enabled by the language. I personally like my languages to be simple.

Also, I'm not sure how some of these patterns could be realized as language features. Like, particularly the common UI patterns.


> that software design patterns are sometimes failures of the language when they have no concise way of expressing these kinds of concepts.

This is _the_ most common comment you hear in any language design discussion, and it's so boring. Java doesn't have visitors because it's the best the language could come up with. It has visitors because of a particular dogmatic adherence to OOP principles. A fundamentalist reading that says you're never supposed to have any control flow, only virtual method calls. Visitors come from the same place as Spring and the infamous AbstractSingletonProxyFactoryBean.

Java is perfectly capable of expressing an enum, and you can easily switch on that enum. The limitation that you have to express that enum as a type hierarchy, and therefore encounter the double dispatch problem is entirely OOP brainrot.


> Java is perfectly capable of expressing an enum

I think this is a bit of an exaggeration, unless you're talking about very modern versions of Java (sealed interfaces and better pattern matching). Java has no good way to attach different data types to each enum variant (a la ADTs) except inheritance, and until recently switching over class types was a PITA.


Enums were added in java 5 and I'm pretty sure you could switch on them from day one. From that point you just need a single class (call it TreeNode) that holds a value of that enum (say the node type) and some fields, and you can navigate it and switch over the node type without any sort of visitors.

"But what if I have a leaf node. Wont that have a child pointer, even though it doesn't need it" Sure it will. Who cares. You don't need to make it harder than it has to be. All the new stuff just enables you to get more type safety, it was never necessary.


You're getting downvoted, but I don't think anyone is explaining the disagreement to you. The benefit of a visitor is for when you need multiple different TreeNode classes with different aspects, either different member variables or different methods/implementations. In that case, the unused child pointers are the smaller of the class-modeling difficulties. For example, if you had a single base TreeNode class with the enum, after switching on the enum you would need to explicitly downcast to the concrete type. The dual-dispatch approach bakes this into the visit-accept protocol.

The last time I worked on a complex syntax tree structure, I used the approach you suggested because _editing_ a Visitor suite is a pain in the rear. Only terms that had special payload data (numbers, pointers to metadata objects, etc.) need downcasting.
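For contrast, a minimal sketch of the dual-dispatch visitor described above (NumNode, AddNode, Evaluator are illustrative names): accept() dispatches on the node's dynamic type and visitX() on the visitor's, so there's no enum switch and no explicit downcast.

```java
// The classic visit-accept protocol.
interface Visitor<R> {
    R visitNum(NumNode n);
    R visitAdd(AddNode a);
}

abstract class Node {
    abstract <R> R accept(Visitor<R> v);
}

class NumNode extends Node {
    final int value;
    NumNode(int value) { this.value = value; }
    // First dispatch: the JVM picks this override by dynamic type...
    <R> R accept(Visitor<R> v) { return v.visitNum(this); }
}

class AddNode extends Node {
    final Node left, right;
    AddNode(Node left, Node right) { this.left = left; this.right = right; }
    // ...and the second dispatch hands the statically-typed `this`
    // to the matching visitor method, so no downcast is needed.
    <R> R accept(Visitor<R> v) { return v.visitAdd(this); }
}

class Evaluator implements Visitor<Integer> {
    public Integer visitNum(NumNode n) { return n.value; }
    public Integer visitAdd(AddNode a) {
        return a.left.accept(this) + a.right.accept(this);
    }
}
```

The cost is exactly the editing pain mentioned above: adding a node type touches the interface and every existing visitor.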


Indeed there should be no control flow, only computing values of function application :)

Indeed, this is also absolutism, just of a different kind, that proved to be somehow more fruitful in the application programming domain. And, to my mind, it's mostly because functions can be pure and normally return a value, while methods are usually procedures mutating some hidden state.

OTOH in a different domain, system programming, control flow is a good abstraction over the close-by hardware, mutable state is all over the place in the form of hardware registers, and it's more fruitful to think in terms of state machines, not function application.

If a tool forces you to do repetitive, irrelevant motions, it's just the wrong tool; you need to find or create a better one.


My experience was some 20-odd years ago, when I was trying to learn design patterns by implementing them in PHP. Specifically, I built the whole array of abstract and concrete factories and all that, until I realised that it can all be replaced by something like:

  $className = "MyClass";
  $classInstance = new $className();


Flask is FOSS, maybe it is not too difficult to add support for this yourselves.


If I switch to the light theme and back again, the diagrams become white and readable.


He is inverting the embedded SVG's colors using CSS. However, that CSS is applied via JavaScript and doesn't trigger on page load. SVG accepts the 'currentColor' keyword, which resolves to the inherited text color. Replacing stroke='#000000' with stroke='currentColor' would fix this bug without requiring any JS.
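The change would look roughly like this (element and path data are illustrative, not from the actual page):

```html
<!-- Before: hard-coded black stroke, invisible on a dark theme -->
<path stroke="#000000" d="..."/>

<!-- After: follows the surrounding text color, no JS required -->
<path stroke="currentColor" d="..."/>
```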


The only downside is, I believe, the electricity bill. The reason to use an RPi is that it has enough computational power for many useful self-hosting tasks while consuming only a fraction of the power of a beefy computer.


Mac Studio idle is 11 W (max 115 W) compared to RPI 4 idling at 4 W (max 6 W).

It's reasonable to assume that the 115 W maximum on a Mac Studio will never be hit on a workload an RPi could support, so compare idle draw: (11 − 4) W × 8760 h/yr ≈ 61 kWh/yr, which at typical rates of $0.15–0.30/kWh works out to an extra $10–20/yr to operate the Mac Studio.

The biggest issue is just the upfront cost, since of course the Mac Studio is proportionally expensive to its capability.


Idle power consumption for an Apple Silicon Mac is low even though it's a workstation; it's an outlier. The average Intel desktop consumes similar or slightly more power at idle, which is acceptable IMO for the flexibility and reuse. A DIY PC may consume a bit more than an average Dell desktop even with the same CPU. An average Intel workstation/server consumes much more than either, so it's wasteful for a mostly idle server. So it depends on what the previous PC was.


Oh for sure, if you pick out a high-consumption CPU then it'll be high.


There are many places in the world where this is not a thing. I live in a densely populated country where it is simply not possible to get that far away from any city. If you drive 50 km in any direction, anywhere in the country, you'll pass quite a few cities. I would have to cross several borders to get to a place without light pollution.


Netherlands? Germany (1 border) or France (2 borders) are fine. NL (I am from there but moved away a long time ago for this and other reasons) is more of an exception than a rule I would say.


This is what the Glasgow Haskell Compiler runtime does. If you run a Haskell application, you'll see it uses 1 TB of virtual memory.

This way you never have to talk to the kernel if you need more memory.


Also Clozure Common Lisp.


Can you explain the privacy argument to me? Bitcoin is a massive public ledger, I have a local copy of literally every transaction ever made. How is that better for privacy?


You are anonymous until you are linked to your wallet.

Anyone who has transacted with you can do that, so then anything sent/received from that wallet address is linked to you.

You can get around this by having many wallets. If you don't reuse those wallets, then a counterparty can't directly identify your other transactions. That means you need to have your balance distributed among many wallets, in denominations small enough that they would be consumed in a single spend.

But you need to get the funds into those wallets in the first place. You can't simply split up your wallet, as the new wallets would be linked to each other via a common ancestor. Instead, you would need to receive from a third party, but then they would be able to identify transactions from that wallet as coming from you...

So to remain anonymous, you simply never transact with anyone other than yourself ;)


Is this FUD or just a lack of understanding? Staying anonymous is very easy. You can choose between a number of ways to move coins into a separate address in an untraceable way. E.g. (1) CoinJoin transactions (which are natively supported in the protocol), (2) through Lightning, which is also an official extension, usable in production right now, or (3) using a third-party mixing service. Those are just off the top of my head.


Knowledge a few years out of date I guess. Still counterparty issues with mixers, somewhat with coinjoin too, but lightning seems pretty cool.


Yes you can have privacy with bitcoin only if you never transact with it... lol

