
Swift has packages and a module system.


I think the complaint is the global runtime namespace, not source modules. Statics live forever, and extensions on a type from any library apply to every use of that type at runtime (with no guarantees about conflicts).

Mostly that's minimizable with good practice, but it can be annoying to those used to hierarchical namespacing and to static memory being reclaimed once a class becomes unreachable.
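To make the extension point concrete, here's a minimal sketch (the helper name is made up): an extension declared anywhere applies to the type everywhere, and nothing namespaces it.

```swift
// A minimal sketch of the runtime-wide extension surface (hypothetical helper):
// an extension on a standard-library type, declared in *any* linked module,
// applies to every use of that type in the process.
extension String {
    // If a second library also ships a `shouted()` with different semantics,
    // call sites that import both hit an ambiguity, and there is no
    // hierarchical namespace to pick the intended one.
    func shouted() -> String { uppercased() + "!" }
}

print("hello".shouted())  // "HELLO!" -- visible everywhere, for the life of the process
```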


What I meant is that if you have a larger project with hundreds of files, every top-level component in every file is globally available, which is a problem.


No submodules however.


Programmers can't be trusted with submodules, as you can see from C# and Java, whose standard libraries name everything like System.Standard.Collections.Arrays.ArrayList.

Of course, taking them away doesn't stop them from other kinds of over-organization like ArrayFactoryFactoryStrategies, but it helps a little.


"Programmers can't be trusted with..." isn't the best argument here IMO. You already gave one reason why. Programmers will create a mess regardless IMO, despite how nice the language is. Adding to that (1) among all he things I didn't like about Java, nested modules were least of it. (2) Lot of it has to do with how reference code in that ecosystem are written which are then adapted as standard practice. Its all good for stuff that do one thing, but if you are building things in same repo with several components, they are nice to have. Rust/Python/Racket are few languages I can think of which have submodules/nested modules and I've not heard people complain about that there.


It’s actually less discussion than here.


I don’t agree with much in this writing other than that eventual consistency is a bad choice. Distributed systems are hard but in 2024 there are enough known patterns and techniques to make them less icky. Systems built on total ordering are much more tractable than weaker protocols. Mahesh Balakrishnan’s recent paper[0] on the Shared Log abstraction is a great recipe, for example.

As an aside, I've never enjoyed the defeatist culture that permeates operations and distributed-systems pop culture, which this post seems to reinforce.

0 - https://maheshba.bitbucket.io/papers/osr2024.pdf
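To sketch what "built on total ordering" buys you, here's the rough shape of the abstraction (not the paper's API, just an illustrative sketch):

```swift
import Foundation

// A rough sketch of the shared-log shape (not the paper's API): every write goes
// through append(), which returns a globally agreed position, and every replica
// applies entries in that single total order.
protocol SharedLog {
    /// Durably append an entry and return its position in the total order.
    func append(_ entry: Data) -> UInt64
    /// Read the entry at a given position, or nil if nothing is there yet.
    func read(at position: UInt64) -> Data?
}

// Replicas stay consistent by replaying the same sequence deterministically:
func catchUp(replicaState: inout [Data], log: any SharedLog) {
    var next = UInt64(replicaState.count)
    while let entry = log.read(at: next) {
        replicaState.append(entry)   // apply in log order -- replicas cannot diverge
        next += 1
    }
}
```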


I think the defeatism comes from practicality. I'm at a 30-person IT org trying to do distributed systems and eventual consistency. Don't take on complexity unless you have to, and eventual consistency requires a LOT of scale before it becomes a have-to.


I don't think it's necessarily a question of scale. Where I work, we have a lot of strategic partnerships, and all those partners have their own IT systems with their own master data. It's intractable to enforce strong consistency between all of these disparate systems that don't speak to one another, and you expressly don't want to take the whole substrate offline when a single partner has a network issue. The best you can do is really eventual consistency.


I guess I'm talking about internal eventual consistency. I'm not arguing against using another system outside your own single-instance database.


> I don’t agree with much in this writing other than that eventual consistency is a bad choice

Does it really matter whether it's bad or not? As far as I know, every database that scales beyond a single node (for performance) is eventually consistent. Otherwise you've got to wait for a sync between nodes before a response can be given, which would effectively force your cluster to have worse performance than running on a single node again.


There are a bunch that offer strong consistency, e.g. Cloud Spanner and DynamoDB.


Don't forget FoundationDB


> As far as I know, every database that scales beyond a single node (for performance) is eventually consistent

That's SO not true. Spanner and Amazon's S3 are some of the biggest databases on the planet, and they are strongly consistent.

> Otherwise you've gotta wait for a sync between nodes before a response can be given, which would effectively force your cluster to have worse performance then running on single node again.

Yes, you are trading latency for fault-tolerance, but so what? What if the resulting latency is still more than good enough? There is no shortage of real large-scale applications where this is the case.


Debian is going to look as ridiculous for doing this as Alma Linux is for insisting btrfs isn’t an “enterprise file system” due to it lacking RAID 5/6.


AlmaLinux has made no such statement, for what it's worth.


Red Hat Enterprise Linux excludes btrfs support in its distribution because Red Hat does not believe btrfs to be stable enough to be worth considering. That decision trickles down to recompilation projects that pretty much amount to "RHEL, but without having to pay for it".


And rightfully so, IMO. btrfs is the only filesystem I've used that got so corrupted, due to faulty memory on a computer, that I had to recover the files and reinstall from scratch after replacing the memory (obviously not using btrfs the second time...).


Thanks for that correction. I misunderstood a forum post that was probably someone being snarky/cynical.


btrfs lacking RAID 5/6 is exactly the sort of thing that makes it get written off as a toy. ZFS has had working raidz[12] since 2005, raidz3 since 2007, and draid since 2021. Completely stable and enterprise-ready.


Let’s not misrepresent Kent over a single incident of sending too much after a merge window. He’s extremely helpful and nice in every interaction I’ve ever read.


He’s been CEO for 3 years and some change. Turning around the Titanic takes a ton of effort. I think they’re doing better. Intel Core (edit: Core Ultra) and Raptor Lake are great platforms, better than their AMD equivalents (and that hurts to say as an AMD fan).


> Intel Core and Raptor Lake are great platforms, better than their AMD equivalents

"Intel Core"?

"Raptor Lake" is the codename for the 13th and 14th generation processors, which are in the current news cycle for being buggy and Intel not recalling them.


Edited. I meant Intel "Core Ultra."


Not sure what the guy is huffing. I got lucky and went in on Alder Lake and was really close to getting a Raptor, but the half price for Alder swung me.


What makes them better than AMD in your view?


Well, the AMD ones aren't as good at barbecuing food, for one! (The "Raptor Lake" he quoted is actually the generation of Intel chips currently making the news for melting themselves through insane power usage, only to barely keep up with AMD.)


AMD and Intel have flipped their approaches a bit. Back then, AMD worked to smartly integrate two cores on the X2 chip, and Intel came along with the Pentium D, which was basically just two Pentium dies next to each other.

These days AMD plays that same tactic up in spades. Strix Point is a very, very nice monolithic APU, but everywhere else they have an IO die and a varying number of Core Complex Dies. They're just dropping in variable numbers of cores.

Intel, by comparison, is building interesting bespoke chiplet configurations and taking on X2-like challenges, and I believe in the eventual gains here. They have parcelled up responsibilities in a really interesting way with Meteor Lake: a CPU, a GPU, an SoC (with its own E-cores as well!), and an IO chiplet. Intel gets tons of value-add from this: gobs of USB4 that AMD is nowhere near delivering, a massive image/video processor, and the ability to easily drop in new cores or new GPUs as they increment. The modular design is ambitious and interesting. https://www.anandtech.com/show/20046/intel-unveils-meteor-la...

And Intel seems well ahead in the packaging game. Rather than big interposer dies, Intel is using smaller and much finer EMIB bridges between chips, which helps them save power as well as reduce size. They have Foveros for much more heterogeneous chip stacking than what most are pulling off.

Architecturally it feels like Intel has been refining and iterating across the multichip era for a long time, from the drastically underrated old Lakefield to the very, very highly integrated upcoming Lunar Lake. AMD is doing a great job making cores and GPUs, but Intel has been doing remarkably well, especially considering their 10nm+++ Intel 7 process, and especially with the E-cores being modestly sized, nicely performing cores.

I also want to compliment Intel on their really interesting architecture innovations. But I have severe doubts that their various on-chip accelerators will reach critical adoption levels where the developers that matter are excited about spending time optimizing for these awesome luxuries. Like Ponte Vecchio: very interesting tech, and something the hyperscalers and supercomputers can be excited about, but it's hard to see a path towards long-term success.

I'd love to see Intel ship photonics-integrated solutions again. Their EMIB tech should complement that well, and that used to be a huge high-value offering they had.


Even more fun are asymmetric network degradations or partitions.


I know a lot of people into fishing that would love this as an app to quickly check the moon phase on a given date when planning a multi-day fishing trip or deciding when to go out. Apex predators are typically less hungry around and during a full moon due to the extra light making hunting at night easy.
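Something like this is a rough back-of-the-envelope version (it assumes a reference new moon of 2000-01-06 18:14 UTC and a mean synodic month of 29.53059 days; accurate to within a day or so, which is plenty for trip planning):

```swift
import Foundation

/// Rough moon-phase estimate: fraction of the synodic cycle elapsed,
/// where 0.0 is a new moon and 0.5 is a full moon.
/// Assumes a reference new moon at 2000-01-06 18:14 UTC and a mean
/// synodic month of 29.53059 days.
func moonPhaseFraction(for date: Date) -> Double {
    let referenceNewMoon = Date(timeIntervalSince1970: 947_182_440) // 2000-01-06 18:14 UTC
    let synodicMonth = 29.53059 * 86_400.0                          // seconds
    let elapsed = date.timeIntervalSince(referenceNewMoon)
    let fraction = elapsed.truncatingRemainder(dividingBy: synodicMonth) / synodicMonth
    return fraction < 0 ? fraction + 1 : fraction
}

let phase = moonPhaseFraction(for: Date())
print(String(format: "Cycle fraction: %.2f (0 = new, 0.5 = full)", phase))
```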


The iOS weather app has moon info for the current, previous, and next month.


The point being made is that with nimble infrastructure, the A in CAP can be designed down to such a small window that you may as well be a CP system, unless you have a really good reason to go after that 0.005% of availability. Not being CP means sacrificing the wonderful benefits that being consistent (linearizability, sequential consistency, strict serializability) makes possible. It's hard to disagree with that sentiment, and it's likely why the Local First ideology is centered on data ownership rather than that extra 0.0005 ounces of availability. Once availability is no longer the center of attention, the design space can be focused on durability or latency: how many copies to read/write before acking.
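A back-of-the-envelope sketch of that "copies before acking" knob (illustrative only; real systems layer consensus, leases, etc. on top of this):

```swift
// With N replicas, acking writes after W copies and serving reads from R copies
// gives read/write overlap whenever R + W > N. The choice of W and R is the
// durability/latency trade-off mentioned above.
struct QuorumConfig {
    let n: Int   // total replicas
    let w: Int   // copies that must ack a write
    let r: Int   // copies consulted on a read

    var overlapsOnRead: Bool { r + w > n }        // every read intersects the latest acked write
    var toleratedWriteFailures: Int { n - w }     // replicas that can be down while writes still ack
}

let latencyLeaning = QuorumConfig(n: 3, w: 2, r: 2)    // majority both ways
let durabilityLeaning = QuorumConfig(n: 3, w: 3, r: 1) // slow writes, fast reads
print(latencyLeaning.overlapsOnRead, durabilityLeaning.toleratedWriteFailures) // true 0
```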

Unfortunately the point is lost because of the usage of the word "cloud," a somewhat contrived example of solving problems by reconfiguring load balancers (in the real world certain outages might not let you reconfigure!), and a missing bit of empathy: you can't tell people not to care about the semantics that thinking about, or not thinking about, availability imposes on the correctness of their applications.

As for the usage of the word cloud: I don't know when a set of machines becomes a cloud. Is it the APIs for management? Or when you have two or more implementations of consensus running on the set of machines?


> Nicaragua and Dominican Republic basically make the best ones last I checked.

You’re going to need sources on that. Cubans are still the best dollar for dollar.

