chipdart's comments | Hacker News

> You can learn from everyone around you, regardless of their status. There is no "universal developer experience curve", everyone has more or less knowledge on a field or with a specific tool/framework.

There's a big difference between learning from someone and having someone teach you something. The latter expedites your progress and clarifies your learning path, whereas the former can even waste your time, with political fights pulling you into dead-ends.


> But a warning: you will spend years in the weeds, focusing on things that don't matter.

That sums up anyone's college experience.

The hard part is telling apart what doesn't matter from what does. More often than not, what dictates which is which is the project you find yourself working on.


> C# has this:

This is only syntactic sugar that allows using object initializers to initialize specific member variables of a class instance, instead of simply using a constructor and/or setting member variables in follow-up statements. It's hardly the feature OP was describing.
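
For context, a minimal sketch of the kind of object-initializer sugar being described; the type, property names and values are made up for illustration, not taken from the thread:

    // Illustrative names only.
    public class ServerOptions
    {
        public int Port { get; set; } = 8080;
        public string Host { get; set; } = "localhost";
    }

    public static class Demo
    {
        public static void Main()
        {
            // Object-initializer syntax...
            var a = new ServerOptions { Port = 9000, Host = "0.0.0.0" };

            // ...is shorthand for constructing the object and setting the
            // members in follow-up statements:
            var b = new ServerOptions();
            b.Port = 9000;
            b.Host = "0.0.0.0";

            System.Console.WriteLine($"{a.Host}:{a.Port} {b.Host}:{b.Port}");
        }
    }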


Mutation of init-only properties is sometimes done by e.g. serializers, through private reflection or unsafe accessors, but otherwise it can lead to unsound behavior if the class implementation does not expect it. You cannot bypass the restriction through normal language means.

Same applies to readonly instance fields.

Where does "syntax sugar" end and "true features" begin?


Anyway, trying to actually prevent a program from modifying its own memory is really hopeless, right? So any promises beyond “syntactic sugar” would be resting on a poor foundation, perhaps even dangerously misleading.


You can always mmap a memory range, place a struct there, then mprotect it. And on the language side, you cannot overwrite readonly structs observed through readonly refs (aside from unsafe, which would trigger a segfault in this case); see the sketch at the end of this comment.

There are ways to do it. What matters is the user ergonomics, otherwise, by this logic, most higher-level languages would have even less of a claim to immutability, and yet somehow it's not an issue?

If there are exotic requirements, there are exotic tools for them. FWIW, static readonly fields are blocked from modification even through reflection, and modifying their memory with unsafe is huge UB.
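
As a minimal sketch of the language-side guarantee mentioned above (the identifiers are mine, purely for illustration): a readonly struct received through an 'in' (readonly) reference cannot be mutated or reassigned without dropping to unsafe code.

    // Illustrative sketch: readonly struct observed through a readonly ref.
    public readonly struct Point
    {
        public readonly int X;
        public readonly int Y;
        public Point(int x, int y) { X = x; Y = y; }
    }

    public static class Demo
    {
        // 'in' passes the struct by readonly reference.
        static void Inspect(in Point p)
        {
            // p.X = 42;              // compile-time error: readonly field
            // p = new Point(1, 2);   // compile-time error: readonly parameter
            System.Console.WriteLine($"{p.X},{p.Y}");
        }

        public static void Main()
        {
            var origin = new Point(0, 0);
            Inspect(in origin);
        }
    }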


> The problem is that programming languages have always focused on the definition side of types, which is absolutely necessary and good, but the problem is that only limiting use by, e.g., "protected, private, friend, internal, ..." on class members, as well as the complicated ways we can limit inheritance, are barely useful.

Virtually all software ever developed managed just fine with that alone.

> I don't know of any programming environment that facilitates properly specifying calculating something even that basic in the init phase of running the system, (...)

I don't know what I'm missing, but it sounds like you're describing the constructor of a static object whose class only provides const/getter methods (see the sketch at the end of this comment).

> or even a db table's row(s).

I don't think you're describing programming language constructs. This sounds like a framework feature that can be implemented with basic inversion of control.
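
To make the static-object idea concrete, here is a hypothetical sketch (all names and values below are made up): something calculated once in the init phase and exposed only through getters afterwards.

    // Hypothetical example: computed once at startup, read-only afterwards.
    public sealed class StartupConfig
    {
        // Constructed exactly once, the first time the type is used.
        public static readonly StartupConfig Instance = new StartupConfig();

        public string Region { get; }
        public int MaxConnections { get; }

        private StartupConfig()
        {
            // Whatever "basic calculation in the init phase" is needed goes here.
            Region = System.Environment.GetEnvironmentVariable("REGION") ?? "local";
            MaxConnections = 4 * System.Environment.ProcessorCount;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var cfg = StartupConfig.Instance;
            System.Console.WriteLine($"{cfg.Region}: {cfg.MaxConnections}");
        }
    }

The same shape covers the framework angle too: a DI container can build such an object once at startup and hand it out as a singleton, which is the inversion-of-control point above.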


I loved the article. Insightful, and packed with real world applications. What a gem.

I have a side-question pertaining to cost-cutting with Kubernetes. I've been musing over the idea of setting up Kubernetes clusters similar to these ones but mixing on-premises nodes with nodes from the cloud provider. The setup would be something like:

- vCPUs for bursty workloads,

- bare metal nodes for the performance-oriented workloads required as base-loads,

- on-premises nodes for spiky performance-oriented workloads, and dirt-cheap on-demand scaling.

What I believe will be the primary unknown is egress costs.

Has anyone ever toyed around with the idea?


For dedicated servers they say this:

>All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic.

>Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge € 1/TB for overusage.

So it sounds like it depends. I have used them for (I'm guessing) 20 years and have never had a network problem with them or a surprise charge. Of course, I mostly worked in the low double-digit terabytes. But I have had servers with them that handled millions of requests per day with zero problems.


20 TB of egress on AWS runs you almost $2,000, btw. One of the biggest benefits of Hetzner.


1 Gbit/s is 0.125 GB/s, and 0.125 * 3600 * 24 * 30 = 324,000 GB, so that 1 Gbit/s server could conceivably push 324 TB of traffic per month "for free". It obviously won't, but even a tenth of that is more than the traffic included with the 10G uplink.


They do have a fair use policy on the 1GBit uplink. I know of one report[1] of someone using over 250TB per month getting an email telling them to reduce their traffic usage.

The 10GBit uplink is something you need to explicitly request, and presumably it is more limited because if you go through the trouble of requesting it, you likely intend to saturate it fairly consistently, and that server's traffic usage is much more likely to be an outlier.

[1]: https://lowendtalk.com/discussion/180504/hetzner-traffic-use...


> We will charge € 1/TB for overusage.

It sounds like a good tradeoff. The monthly cost of a small vCPU is equivalent to a few TB of bandwidth.


We've toyed around with this idea for clients that do some data-heavy data-science work. Certainly I could see that running an on-premises MinIO cluster could be very useful for providing fast access to data within the office.

Of course you could always move the data-science compute workloads to the cluster, but my gut says that bringing the data closer to the people who need it would be ideal.


> Has anyone ever toyed around with the idea?

Sidero Omni have done this: https://omni.siderolabs.com

They run a Wireguard network between the nodes so you can have a mix of on-premise and cloud within one cluster. Works really well but unfortunately is a commercial product with a pricing model that is a little inflexible.

But at least it shows it's technically possible so maybe open source options exist.


You could make a mesh with something like Netmaker to achieve something similar using FOSS. Note I haven't used Netmaker in years, but I was able to achieve this in some of their earlier releases. I found it to be a bit buggy and unstable at the time, due to it being such young software, but it may have matured enough by now to work in an enterprise-grade setup.

The sibling comments recommendation, Nebula, does something similar with a slightly different approach.


> They run a Wireguard network between the nodes so you can have a mix of on-premise and cloud within one cluster.

Interesting.

A quick search shows that some people already toyed with the idea of rolling out something similar.

https://github.com/ivanmorenoj/k8s-wireguard


I believe the Cilium CNI has this functionality built in. Other CNIs may as well.


Slack’s Nebula does something similar, and it is open source.


I'm a bit sad the aggressive comment by the new account was deleted :-(

The comment was making fun of the wishful thinking and the realities of networking.

It was a funny comment :-(


Enable "showdead" on your profile and you can see it.


It wasn't funny. I can still see it. The answer was VPN. If you want to go fancy you can do Istio with VMs.


And if you wanna be lazy, there is a Tailscale integration to run the cluster communication over it.

https://tailscale.com/kb/1236/kubernetes-operator

They've even improved it, so you can now actually resolve the services etc. via the tailnet DNS:

https://tailscale.com/learn/managing-access-to-kubernetes-wi...

I haven't tried that second part though, only read about it.


Okay, vpn it is.


I just wanted to provide the link in case someone was interested, I know you already mentioned it 。 ◕ ‿ ◕ 。

(Setting up a k8s cluster over software VPN was kinda annoying the last time I tried it manually, but super easy with the tailscale integration)


yes, like i said, throw an overlay on that motherfucker and ignore the fact that when a customer request enters the network it does so at the cloud provider, then is proxied off to the final destination, possibly with multiple hops along the way.

you can't just slap an overlay on and expect everything to work in a reliable and performant manner. yes, it will work for your initial tests, but then shit gets real when you find that the route from datacenter a to datacenter b is asymmetric and/or shifts between providers, altering site to site performance on a regular basis.

the concept of bursting into on-prem is the most offensive bit about the original comment. when your site traffic is at its highest, you're going to add an extra network hop and proxy into the mix with a subset of your traffic getting shipped off to another datacenter over internet quality links.


a) Not every Kubernetes cluster is customer facing.

b) You should be architecting your platform to accommodate these very common networking scenarios, i.e. having edge caching. Because slow backends can be caused by a range of non-networking issues as well.

c) Many cloud providers (even large ones like AWS) are hosted in or have special peering relationships with third party DCs e.g. [1]. So there are no "internet quality links" if you host your equipment in one of the major DCs.

[1] https://www.equinix.com.au/partners/aws


> yes, like i said, (...)

I'm sorry, you said absolutely nothing. You just sounded like you were confused and for a moment thought you were posting on 4chan.


Nobody said "do it guerrilla-style". Put some thought into it.


> Demoralized or denormalized?

The database is denormalized. The developers are demoralized.


> I'm an independent developer right now, building systems for businesses and there is literally no better way to deliver line-of-business internal applications than via a monolith.

This is the same sort of myopic, naive, clueless take that led people armed with this blend of specious reasoning to dive head-first into microservices architectures without looking at what they were doing or thinking about the problems they were solving.

The main problems that microservices solve are a) organizational, b) resilience, c) scalability.

If you work on single-person "teams" maintaining something that is barely used, does not even have SLAs, and can be shut down for hours, then there's nothing preventing you from keeping all your eggs in a single basket.

If you work in a professional environment where distinct features are owned by separate teams, then you are way better off running separate services, and perhaps peeling out shared responsibilities into a separate support service. This is a fact.

But let's take it a step further. You want to provide a service, but some of the features are already provided by a separate service, either supplied by a third party or available as a project you can simply download and run as part of your deployment. Does this count as a microservices architecture to you, or is it a monolith?

Consider also that your client teams have a very specific set of requirements and they rolled out services to provide them. Is this a microservices architecture or a monolith?

Consider also that you start with a monolith and soon notice that some endpoints trigger workflows so computationally demanding they cause brownouts, and to mitigate that these are peeled out of the monolith into dedicated services to help manage load. Is this a monolith or microservices?

Consider that you run a monolith and suddenly you have a new set of requirements that forces you to do a major rewrite. You start off with a clone of the original monolith and gradually change functionality, and to avoid regressions you deploy both instances and have all traffic go through an API gateway so you can dial traffic up gradually. Is this microservices or a monolith?

The main problem with these vacuous complaints about monoliths is that they start from a place of clueless buzzwords, without understanding what they are talking about or what problems are being addressed and solved. This blend of specious reasoning invariably leads to jumps from one absolutism to another. And they are always wrong.

I mean, if problems are framed in terms of fashion tips, how can they possibly be right?


> If you work on single-person "teams" maintaining something that is barely used and does not even have SLAs and can be shut down for hours then there's nothing preventing you from keeping all your eggs into a single basket.

There's a whole spectrum between that and "needs to go down for less than a minute per year". For every project/job/app that needs AWS levels of resilience and availability, there are maybe a few hundred thousand that don't, and none of those are the "barely used, down for hours" type of thing either.

Having been a developer since the mid-90s, I am always fascinated by the thought that computer, server and/or network resilience is something that humanity only discovered in the last 15 years.

The global network handling payments and transactions worked with unnoticeable downtime for 30-odd years; millions of transactions per second, globally, and it was resilient enough to support that without noticeable or expensive downtime.


> For every project/job/app that needs the AWS levels of resilience (...)

I don't think you're framing the issue from an educated standpoint. You're confusing high availability with not designing a brittle service, i.e. paying attention to very basic things that are trivial to do. For example, supporting very basic blue-green deployments, which come for free in virtually any conceivable way of deploying services. You only need a reverse proxy and just enough competence to design and develop services that can run in parallel. This is hardly an issue, and in this day and age not being able to pull this off is a hallmark of incompetence.
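
To illustrate the reverse-proxy point, a deliberately bare sketch (made-up ports, GET-only, no header or method forwarding, not a production proxy): flipping a single upstream address is enough to cut traffic from the "blue" instance to the "green" one.

    // Hypothetical minimal blue/green cutover: forward every request to
    // whichever upstream is currently "live".
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class BlueGreenProxy
    {
        // Flip this (e.g. from a config reload or admin endpoint) to cut over.
        static volatile string live = "http://localhost:5001"; // blue; green on :5002

        public static async Task Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();
            using var client = new HttpClient();

            while (true)
            {
                var ctx = await listener.GetContextAsync();
                // GET-only for brevity; a real proxy also forwards the method,
                // body and headers of the incoming request.
                var bytes = await client.GetByteArrayAsync(live + ctx.Request.RawUrl);
                ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
                ctx.Response.Close();
            }
        }
    }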


> I don't think you're framing the issue from an educated standpoint.

And I think you'd make a better point without the personal remarks and/or skepticism over my competence.

I mean, was all this really necessary to make your point?

> myopic, naive, clueless take

> specious reasoning

> If you work on single-person "teams" maintaining something that is barely used and does not even have SLAs and can be shut down for hours

> a place of clueless buzzwords

> not understanding what they are talking about

> hallmark of incompetence

I also think you're ignoring what I said about 30-odd years of resilience that came before microservices.

> For example, supporting very basic blue-green deployments that come for free in virtually any conceivable way to deploy services.

I'm genuinely confused here: what does that have to do with creating monoliths? Are you claiming that monoliths prevent a whole lot of good practices (blue-green, canary deployments, whatever)?

Because monoliths have been deployed in a gradual rollout fashion before, have been multi-sited for DR rollovers onto secondaries, have been deployed with hot rollovers, etc.

There are, right now, COBOL "monoliths" running and managing a significant part of your life.


What an appallingly bad article. It starts with a premise backed only by an unsubstantiated and outright false appeal to authority ("the likes of Amazon are moving to monoliths!1") and proceeds to list a few traits that are so wrong they fall into "not even wrong" territory. For example, things like "incorrect boundary domains" and circular dependencies are hardly related to how distributed services are designed.

This nonsense reads like badly prompted machine-generated text.


> ("the likes of Amazon are moving to monoliths!1")

I've been at an amazon-scale company, and the thing is: yes, such companies do use a service-oriented architecture... but they do split the services into microservices because that means they can a) further optimise throughput/latency and b) they can delegate responsibilities (ie: split teams when they get too large).

The throughput gains you can get when your software only does one thing are really incredible. FAANG-sized companies then optimize everything: software, operating systems, hardware. And they can do that because their software is highly specialized. But most non-FAANG companies? They barely optimize the software, and they don't even consider optimizing the OS or the hardware much.

Outside of FAANGs, many companies split stuff into microservices mostly because they want to be trendy and stay on whatever the latest craze is, and only secondarily to delegate responsibility and split teams.

I think most "microservices" could be a module or a library within a monolith. The boundary would largely be the same (API contracts) minus the operating overhead. Integration testing would cover the usual issues, and needless to say there would be less "distributed-systems-headache".

Don't get me wrong, I'm not against microservices: it's just that it's often over-used in my opinion.


I agree that microservices are vastly overused, and I would add that they are often misused.

If you can't set up a development environment without running a bunch of local microservices then you are probably misusing the concept. They are too tightly coupled to run independently, so they probably should not be separated.

All that does is slow everything down by introducing network requests where there shouldn't be any, imo.

It also leads to situations where layoffs leave services behind that are running and mission critical but have no owner anymore in the company.


> (..) you really want those 2x 12 memory channels a Dual EPYC system offers (...)

I had to check and I was amazed that there are companies selling workstations with dual EPYC processors, providing a whopping 256 CPU cores and over 2TB of DDR5. All in a desktop form factor. Amazing.


This article reads like an AMD advertisement for their EPYC processor line.


It was written by Robert Hormuth, AMD's Corporate Vice President of Architecture & Strategy, Data Center Solutions Group, and published on amd.com.

Are you one of the reasons why SEO spam sites are clicked on so often?


> Are you one of the reasons why SEO spam sites are clicked on so often?

I don't think your poorly thought-through personal attack has any relevance to the topic. I clicked on the article because there was a submission on HN with the title "Myths and Urban Legends About Dual-Socket Servers". What leads you to believe SEO holds any relevance?


It does but it also seems like AMD is saying just buy one CPU, which is weird because you’d think they would want you to buy two to double the profit.


They’re fighting a calcified perception in the corporate IT market that the standard “unit of scale” should be a dual-socket system, because they have a differentiated product in EPYC that shines single-socket.

This likely still remains a major market barrier for them: Outside of the always-be-optimizing hyperscalers, “ordinary” datacenter buyers tend to follow old patterns and rules of thumb from generation to generation.


AMD chips have more cores than Intel chips, so pushing for a single powerful CPU means "Buy AMD, not Intel."

Of course they'll like it even better if you bought two AMD chips instead of one, but they probably don't care as much whether you put those chips into one server or two.


Intel will have more cores shortly.


> Intel will have more cores shortly.

The article was posted in 2023.


More weak cores with lower overall performance.


> It does but it also seems like AMD is saying just buy one CPU, which is weird because you’d think they would want you to buy two to double the profit.

They are saying "buy a single EPYC instead of two from our competition".


I think that's because it is.


True, but it reminds us that advertising can be educational, technical, and straightforward instead of manipulative, emotional, and focus-grouped.

There's nothing wrong per se with writing an article about your awesome product and why everyone should use it


It’s on amd.com

