
Unpopular opinion: the domain-driven stack.

We have been living in the golden era of the software industry, thanks to Moore's law. We could afford RISC (i.e. general-purpose CPU architectures), general-purpose operating systems, general-purpose languages, general-purpose databases, etc., all because the hardware was going to keep getting faster anyway.

Now, with Moore's law showing signs of death, the future of better computing will be the domain-driven stack. A quick thought experiment: cloud applications will be written in cloud-friendly languages, using cloud-friendly databases, on cloud-ready operating systems and processors architected for heavy cloud workloads, much like gaming has relied on a custom stack for performance (GPUs, PlayStation, Xbox, etc.).

The advent of Google's TPUs is a symptom of this pattern too. Of course, personal computers with general-purpose everything will keep existing, but industry will start shifting towards domain-driven stacks slowly and steadily, for obvious reasons.




Not sure this adds up for me. So long as network latency and throughput remain orders of magnitude behind in-machine and CPU-cache speeds, the implementation detail of data locality will bleed into anything you write. At that point, what makes a language, database, or OS any more "cloud-friendly" than what we have today? I can already get 90% of the way there with Kubernetes, Aurora, and any language of choice.
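
To put a rough number on the locality point, here's a toy sketch (the RemoteStore client and the ~1 ms figure are made up stand-ins for any networked database, not a real API):

    import time

    # Toy illustration: whatever the stack, a chatty access pattern pays a
    # network round trip per call, so application code ends up batching
    # anyway -- data locality leaks into the program.

    class RemoteStore:
        """Stand-in for any networked database client (hypothetical)."""
        LATENCY_S = 0.001  # ~1 ms round trip, vs. ~nanoseconds for CPU cache

        def get(self, key):
            time.sleep(self.LATENCY_S)      # one round trip per key
            return f"value-for-{key}"

        def get_many(self, keys):
            time.sleep(self.LATENCY_S)      # one round trip for the whole batch
            return {k: f"value-for-{k}" for k in keys}

    store = RemoteStore()
    keys = [f"user:{i}" for i in range(100)]

    t0 = time.perf_counter()
    naive = [store.get(k) for k in keys]    # 100 round trips
    t1 = time.perf_counter()
    batched = store.get_many(keys)          # 1 round trip
    t2 = time.perf_counter()

    print(f"per-key: {t1 - t0:.3f}s, batched: {t2 - t1:.3f}s")

No choice of language, database, or OS makes that gap go away; you end up designing around it either way.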


Unpopular answer: Meh.

I see your point, but we are already using specialized algorithms to solve problems on generic hardware (CPUs). You can move to different generic hardware (GPU/OpenCL/...), which might be better suited depending on the problem, or use/rent more generic hardware on demand (cloud computing).
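
To make the "specialized algorithms on generic hardware" part concrete, here's roughly the ease-of-use bar a mature ecosystem sets (NumPy is just my arbitrary pick of a well-worn library):

    import numpy as np

    # A domain-specific kernel (a small smoothing filter) running on a
    # commodity CPU via a mature, vectorized library -- a few lines, no
    # special toolchain required.
    signal = np.random.rand(1_000_000).astype(np.float32)
    taps = np.array([0.25, 0.5, 0.25], dtype=np.float32)

    filtered = np.convolve(signal, taps, mode="same")
    print(filtered[:5])

That's the kind of friction-free workflow any accelerator toolchain has to compete with.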

What you're implying is already happening: using/programming "generic" FPGAs to act as specialized accelerators seems to be slowly trending (e.g. Xilinx UltraScale), and if that works out, "larger" process nodes seem to be getting cheaper these days (e.g. >= 45nm ASICs). But as far as I'm aware, the tooling and ecosystem for all of this is still pretty bad, especially compared to how far C/C++ compilers have come, JS's ease of accessibility, or Python's trove of libraries. (Disclaimer: I don't work in that field, so I might be out of date.)

So, to refine your suggestion: improving the ecosystem around hardware synthesis could be a thing?

However, that doesn't seem to be what user richtapestry was thinking of(?).


I just wanted to clarify your example. When you say cloud applications, I assume you mean applications written to run a cloud (i.e. the cloud infrastructure itself), as opposed to applications running IN a cloud?

Because if it's the latter, that doesn't sound like a domain-driven stack to me.


Not sure what more recent Xboxes are like, but the original Xbox was a cut-down Windows 2000 running on Intel and Nvidia hardware. It was really close to commodity hardware and software.


In terms of hardware, they're running on AMD x86 CPUs. AFAIK there isn't anything special about them, other than having a wider memory bus (they use GDDR5 as opposed to DDR4).


But it had a really thin OS layer, and everybody had virtually the same box, so you could micro-optimize for the exact architecture. IMO it does fit the concept of a "domain-specific stack"; it's just that homogeneity, rather than unbridled performance, is one of the stack's important properties.


The current Xbox runs Windows 10 (the "one kernel" design), while the PS4 runs a patched-up FreeBSD.

Only Nintendo bothers with writing custom kernels, and historically Sony did too, with the PS2 and its exotic "Cell" processor units.


You mean the PS3? It had the CBE (Cell Broadband Engine).


The Switch also runs on FreeBSD, not a custom kernel.


Is it still true that most cloud-provider datacenters house racks of commodity hardware? If so, I could definitely imagine a shift to hardware designed to run virtualized environments while keeping power and cooling costs down.

I'm not sure what that would look like. Mainframe-esque, perhaps?


Re: "cloud applications will be written with cloud-friendly languages"

Careful: you invest your code base in a "cloud-friendly" language, and then clouds could fall out of style. That goes for the other components as well.


I feel like the language choice isn't what's going to bite people here; the danger is more in architecting for a specific platform, relying on its SDKs, and optimizing for a provider's specific resource idiosyncrasies.
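
For what it's worth, the usual mitigation is a thin seam of your own between application code and the provider SDK. Rough sketch (all names here are hypothetical, and the real SDK call is deliberately left out):

    from typing import List, Protocol

    class Queue(Protocol):
        """The narrow interface the application is allowed to see."""
        def publish(self, message: str) -> None: ...

    class InMemoryQueue:
        """Used in tests and local runs; no provider involved."""
        def __init__(self) -> None:
            self.messages: List[str] = []

        def publish(self, message: str) -> None:
            self.messages.append(message)

    class ProviderQueue:
        """Adapter that wraps some cloud provider's SDK."""
        def __init__(self, queue_url: str) -> None:
            self.queue_url = queue_url

        def publish(self, message: str) -> None:
            # The provider-specific SDK call would live here, and only here.
            raise NotImplementedError("wire up the real SDK client")

    def handle_signup(queue: Queue, user_id: str) -> None:
        # Application logic depends only on the Queue interface.
        queue.publish(f"signup:{user_id}")

    q = InMemoryQueue()
    handle_signup(q, "42")
    print(q.messages)

It doesn't save you from a provider's resource idiosyncrasies, but it keeps the SDK coupling contained to the adapters.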


That is true, but somebody did mention a "cloud-friendly language". Most of the languages in common use were designed pre-cloud. This new thing, whatever it is, could thus have cloud dependencies that current languages don't.


So DSLs and ASPs?



