markhahn's comments

in-package ram is certainly not an apple thing. if anything, it's a phone thing.


Indeed, but I don't know of any other PC manufacturers who were widely using it before Apple went down the soldered-RAM path?


wonder when/how this will be used in desktop and server contexts - the mounting/shape seems to emphasize planar area. with current servers needing 12 channels/socket, is there space for 12x LPCAMM in even a standard 1U dual-socket server?

Rumor has it that AMD is pushing a 256b memory interface in Strix Halo, perfect for 2x LPCAMM: we might see some exciting developments in desktops as well.


Is Strix Halo launching on desktops? I'd have assumed that the 256b bus would restrict it to a unique socket.


being desktop doesn't imply AM5. there was a 4-channel threadripper, for instance.

but I was thinking more of a mini-pc, since legacy form-factors don't make that much sense for a high-end APU. being a mini-pc would also make it more palatable to solder on the CPU like a laptop - and of course soldered-on RAM gets to run at higher speed.


Threadrippers exist, but would Strix Halo then use Threadripper sockets? Would it use its own sockets? I don't expect Strix Halo to appear in any socketed form.

I think you're right that it'll appear in SFF PCs, like most of AMD's mobile parts do. And that is pretty exciting IMO.


but programming it is "import torch" - nothing nvidia-specific there.

the mainstream press is very impressed by CUDA, but at least if we're talking AI (and this article is, exclusively), it's not the right interface.

and in fact, Nv's lead, if it exists, is because they pushed tensor hardware earlier.


Someone does, in fact, have to implement everything underneath that `import` call, and that work is _very_ hard to do for things that don't closely match Nvidia's SIMT architecture. There's a reason people don't like using dataflow architectures, even though from a pure hardware PoV they're very powerful -- you can't map CUDA's, or Pytorch's, or Tensorflow's model of the world onto them.
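
Roughly what that looks like from the user's side (a sketch; the shapes and the hypothetical "my_accelerator" backend are made up):

    import torch

    def run(device: str) -> torch.Tensor:
        # The same user code, whatever the device string says...
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        # ...but the matmul and the relu each dispatch to a kernel someone
        # had to implement and register for that specific backend.
        return torch.relu(a @ b)

    out_cpu = run("cpu")          # reference kernels maintained by the PyTorch team
    if torch.cuda.is_available():
        out_gpu = run("cuda")     # cuBLAS/CUDA kernels, years of Nvidia's work
    # run("my_accelerator") only becomes possible once a vendor has supplied
    # kernels for every op a real model touches on that backend.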


I'm talking about adding Pytorch support for your special hardware.

Nv's lead is due to them having Pytorch support.


Eh if you're running in production you'll want something lower level and faster than pytorch.
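
One concrete, hedged reading of "lower level": export the trained model out of eager PyTorch into a static graph and serve it with a dedicated runtime. Toy model, placeholder shapes and file name; assumes the onnx package is installed.

    import torch
    import torch.nn as nn

    # Toy stand-in for a trained model.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
    example_input = torch.randn(1, 128)

    # Export a static graph; serving it (ONNX Runtime, TensorRT, etc.) keeps the
    # Python interpreter and the eager dispatcher off the hot path.
    torch.onnx.export(
        model,
        example_input,
        "model.onnx",
        input_names=["x"],
        output_names=["logits"],
    )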


"Lightweight Java" hah!


I don't see the snark here -- Java libs really are lightweight, on the order of kilobytes. Because Java runs on a VM, you're expected to have an OS-specific JVM installed to run it. Exactly the same as Python.

Compare that to Go, where Hello World weighs 1.8 MB, more than 100x heavier than the lib in question.


Comparing fairly with Go would mean adding the JRE to the lib, because the lib itself is useless. Go produces executables.


Why? If there are 2 adumbra libs, the total size won't change much, but 2 Go hello worlds would take 2x the space (rough numbers below).

Also, we'd want to include libc.so and even the OS kernel, which both Go and the JVM depend on.
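
Rough arithmetic to make the fixed-versus-marginal cost explicit (the 1.8 MB and ~16 KB figures come from this thread; the JRE size is an assumed ballpark, not a measurement):

    # Assumed sizes in MB; tweak to taste.
    JRE = 60.0          # shared runtime, installed once per machine (guess)
    JAVA_LIB = 0.016    # ~16 KB library, per the thread
    GO_BIN = 1.8        # statically linked Go hello world, per the thread

    def java_total(n_apps: int) -> float:
        return JRE + n_apps * JAVA_LIB    # runtime cost paid once

    def go_total(n_apps: int) -> float:
        return n_apps * GO_BIN            # every binary carries its own runtime

    for n in (1, 2, 50):
        print(n, round(java_total(n), 2), round(go_total(n), 2))

Which side "wins" depends entirely on how many apps end up sharing that JRE.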


Knock knock

Who's there?

Long pause

Java


Really weird to see "opinionated" used as a good thing.


Most people, including tech people, just want a sensible set of defaults out of the box from their software. You're installing Calico, Ingress-Nginx, CoreDNS, MetalLB, cert-manager and ArgoCD out of the box? Cool, some deployment/service/ingress YAML later (see the sketch below) and my workloads are cooking.

As an SRE who deals with a ton of Kubernetes clusters, I see a lot of needlessly complex clusters because the rookies setting them up didn't understand the implications of their choices and grabbed whatever a blog post said was a good idea.
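
The "some YAML later" step, sketched under assumptions: it presumes the bundled ingress-nginx and cert-manager defaults mentioned above, and the app name, image, hostname and issuer are placeholders.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
            - name: web
              image: nginx:1.27          # placeholder workload
              ports: [{ containerPort: 80 }]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector: { app: web }
      ports: [{ port: 80, targetPort: 80 }]
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt   # assumes such a ClusterIssuer exists
    spec:
      ingressClassName: nginx
      rules:
        - host: web.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service: { name: web, port: { number: 80 } }
      tls:
        - hosts: [web.example.com]
          secretName: web-tls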


Glymphatic drainage is the "point" (in the sense of "kill you if you don't do it").

And it's typical of evolution to use the behavior for several other purposes...


isn't that expected? or do you mean "I'd buy a minipc designed like a laptop"?


If I wanted another minipc, I'd be looking for one designed like a laptop. The vast majority of consumer computers spend most of their time idle, even when it is something we are directly interacting with at the time (like laptops). In my experience, this is even more true for minipcs. I'm wondering what magic has gone into laptops and hasn't made it into electron guzzlers like the intel minipc in this article.


what does "your distro" mean here? the distro in the container, or the container host, or the client host?

containers are just a packaging/isolation technique. you can keep using an obsolete stack in a container, regardless of what changes outside it. rebuilding containers from scratch is certainly not easier than rebuilding an install via ansible.


> what does "your distro" mean here? the distro in the container, or the container host, or the client host?

Container host.

> rebuilding containers from scratch is certainly not easier than rebuilding an install via ansible.

How so? The OP is giving an example of ansible scripts breaking because of OS version change, and having to fix them. With containers, the container OS is very slim, so fewer things to break with upgrades, and you can upgrade the host OS easily since docker is quite stable across OS versions.


Yes, it was traditional "philosophers of mind" who found him dismissive, mainly because those are all basically Mysterians.

For instance, he cut Chalmers no slack on the incoherency of Philosophical Zombies.


well, the interesting part here is that the tools (verilog and openlane) seem to work pretty well and are highly accessible. designing a cpu hasn't been a complicated exercise for decades, but implementing it has been.


Verilog is kind of trash by modern standards. Unfortunately we are stuck with it (well SystemVerilog) until tool vendors support something else.

It's kind of a similar situation to JavaScript actually. And in a similar way, you can compile to Verilog, but just like with JS it makes debugging much more painful.

There was this interesting project but it seems inactive: https://llhd.io/

There are also various alternative HDLs that, to varying degrees, seem to be solving the wrong problem (SpinalHDL, MyHDL, Chisel). This one looks quite interesting though: https://filamenthdl.com/

