jetbalsa's comments

Looks like it was dropped after sitting too long[1].

[1] https://www.smh.com.au/world/europe/assange-has-not-been-vin...


It /used/ to take forever back in the Core 2 Duo days; with the number of cores and the sheer IPC gains since then, it's gotten a ton better.


I've used SSDB[0] in the past for some really stupidly large datasets (20TB) and it worked really well in production.

[0] https://github.com/ideawu/ssdb


I switched from SSDB to Kvrocks recently, because SSDB is abandoned and the author has been missing for 3 years now. I used to recommend SSDB, but now there are better alternatives available:

https://github.com/apache/kvrocks

https://github.com/sabledb-io/sabledb


These are great recommendations, thanks!


It's also worth checking out Kvrocks, which is a Redis interface on top of RocksDB that's part of the Apache project, and very well maintained.
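
Since it speaks the Redis protocol, existing Redis clients work against it unchanged. A minimal sketch with redis-py, assuming a local Kvrocks instance on its default port (6666); the host, port, and key names are just illustrative:

    # Kvrocks speaks the Redis protocol, so the standard client works as-is
    import redis

    client = redis.Redis(host="localhost", port=6666)
    client.set("greeting", "hello from rocksdb-backed storage")
    print(client.get("greeting"))  # b'hello from rocksdb-backed storage'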


And which is not in-memory at all.


It can cache in-memory using RocksDB's caching mechanisms.
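
Roughly, the hot working set sits in RocksDB's in-memory block cache while the full dataset lives on disk. A rough sketch of the underlying mechanism using the python-rocksdb bindings (the binding names are from memory and may need checking; Kvrocks itself exposes this through its server config rather than application code):

    # Sketch: give RocksDB a 512MB in-memory LRU block cache for hot data
    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.table_factory = rocksdb.BlockBasedTableFactory(
        block_cache=rocksdb.LRUCache(512 * 1024 * 1024)
    )

    db = rocksdb.DB("example.db", opts)
    db.put(b"key", b"value")
    print(db.get(b"key"))  # reads hit the in-memory memtable/cache before disk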


Nice to see you using voip.ms for all your science. The number I just left for you is also hosted on it; it's a fun little IVR maze.


Android Auto is not yet supported in GrapheneOS, and that's pretty much half the usage my phone gets on a daily basis.


Is that up to date? Their site describes support:

https://grapheneos.org/features#android-auto


Android Auto support has been added recently and it seems to work well.


Alpine in diskless mode is the closest thing I can think of to a good way of doing this.

Also, building a custom Windows PE with the Windows ADK is how HBCD is made; it loads a Windows desktop with a bunch of tools off a CD-ROM.


The marquee tag was never dropped, and it's wild: Firefox's is more like the old days, a little jittery, while Chrome/Edge is a ton smoother in its scroll.


It's mostly that Jellyfin is king of the open source media library stuff. XBMC morphed into Kodi over time and is still used as an OS / media library as well.


A 6c/12t dedicated server with 32GB of RAM is $65 a month with OVH.

I do get that it is a bare server, but if you deploy even just bare containers to it, you would save a good bit of money and get better performance from it.


Another interpretation is that the so-called dedicated servers are too good to be true.


It depends on what the 6 cores are. Like, I have an 8C/8T dedicated server sitting in my closet that costs $65 per the number of times you buy it. (Usually once.) The cores are not as fast as the highest-end Epyc cores, however ;)


At the $65/month level for an OVH dedicated server, you get a 6-core CPU from 2018 and a 500Mbps public network limit. Doesn't even seem like that good a deal.

There is also a $63/month option that is significantly worse.


Don't forget the tooling; ROCm still hasn't taken off very well.


ROCm runs PyTorch and TensorFlow. It seems to have more or less caught up on the technical capability front.
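
For what it's worth, the ROCm builds of PyTorch reuse the familiar torch.cuda device API, so CUDA-targeted code largely runs unmodified. A rough sketch, assuming a ROCm PyTorch build and a supported AMD GPU:

    # On ROCm builds the "cuda" device name maps to the AMD GPU via HIP
    import torch

    print(torch.version.hip)          # set on ROCm builds, None on CUDA builds
    print(torch.cuda.is_available())  # True if a usable AMD GPU is visible

    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x.t()                     # matmul runs on the GPU
    print(y.shape)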

There are outstanding problems; in particular I've found it very crash-prone on a consumer desktop, and wouldn't recommend an AMD card for research compute tasks where you are also running an X server on the same card. But there aren't $30 billion opportunities for custom chips on the consumer desktop right now - I'm guessing these will be for SaaS businesses, which is where AMD is focusing. I.e., it won't matter that they can't run X.org and multiply matrices at the same time, because servers won't use the cards for graphics.


People don't seem to understand that running neural network inference is very easy. It's not the machine learning frameworks and libraries that are difficult to get right. Those are the trivial part.

The hard part is getting a culture that gives a damn about developing software that works and designing the hardware to support the features that the software needs.

AMD has not figured out how to run both graphics and compute on the same GPU. There can be many reasons for that, but honestly it is probably either because they don't have the necessary virtualization hardware or because two different drivers are conflicting with one another.


> The hard part is getting a culture that gives a damn about developing software that works and designing the hardware to support the features that the software needs.

NVIDIA isn't missing the mark on the programming model and toolkit framework (PTX and forward/backward compat) either. They have a good, lean GPU design with a lot of features, a good programming model, a good ecosystem, etc.

You're right, it's not just the matrix math (that's not rocket science), but there's a ton of little glue code around it. And you need something GPU-like for that anyway, plus a bunch of scheduler and shader-execution-reordering stuff for your tensor threads and glue code, etc. You end up with something broadly similar to a GPU anyway.

It's the ProgPOW theorem, right? That there is not some major gain to be squeezed by implementing a smaller/different machine on the instruction set. That GPUs are relatively close to some kind of computational optimum for parallel workloads (in terms of programmability/flexibility and performance).

NVIDIA's model isn't far off the global optimum imo, it's certainly in a great local minimum, and that's really true of a lot of their designs these days. It is always a little wild how everyone trivializes the idea that AMD/etc are going to catch up with some 80% solution in RT or tensor etc... like just maybe NVIDIA did the math and figured out what they think a reasonable ray performance level is, and how much they'd need to upscale, and what parts of the pipeline make sense to have accelerated by units vs emulated on shaders/etc, and there's not some massive gain to be squeezed by just putting a handful of devs on a project for a year?

Same thing for prices too. Everyone wants to assume that AMD is just choosing to follow them in gouging or whatever. The null hypothesis is that both nvidia and AMD are subject to the same industry cost trends and can’t actually do significantly better (not like 2x perf/$ or whatever), and that nvidia is in some kind of reasonable price structure after all. People are going to find that a lot of electronics prices are going to go up in the coming years. There’s no more 1600AF for $85 or 3600 for $160 either, or Radeon 7850 for $150 etc.

Not talking about original 4080 pricing etc., but the 4070 and 4060 are actually fairly reasonable products, and the 4070 quickly fell even further below MSRP. The 7800 XT, 7900 XT, and 7600 XT are all fine as well. That's about what the price increases have been since the last leading-edge products.


CUDA runs on essentially all NVIDIA consumer GPUs. ROCm is supported on a single 5-year-old model and two 2-year-old models.

https://rocm.docs.amd.com/projects/install-on-linux/en/docs-...


The people who write their documentation aren't very good. "Support" seems to mean something like tested + enterprise grade support. It works on a fair number of officially unsupported cards. Eventually it'll work well on everything even though the support matrix is unlikely to ever get bigger.

Although, as mentioned, it works with a cycloptic vision where the card is only doing compute tasks. I'd be interested to know whether even supported cards can multitask graphics and pure compute tasks without crashing, because it looks like it might be a design issue. Hard to tell with driver corruption. Maybe the testing catches that on supported cards; who knows.
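
The usual workaround people report for nominally unsupported consumer cards is overriding the GFX version the runtime sees before anything ROCm-related loads. A rough sketch; HSA_OVERRIDE_GFX_VERSION and the "10.3.0" value are the commonly passed-around override for many RDNA2 parts, not something from the official support matrix, and the right value depends on your card:

    # Assumption: HSA_OVERRIDE_GFX_VERSION is the unofficial override; the
    # value is card-specific and must be set before the ROCm runtime loads.
    import os
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

    import torch  # import only after setting the override
    print(torch.cuda.is_available())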


The Mesa folks are working on the tooling situation with Rusticl, which has the potential to support SYCL in the future, and quite possibly HIP. Not just for ROCm-supported devices either, but across the board (subject to pure hardware constraints).

