Hacker News

Recommending Upmem's video:


They produce RAM with a processor core embedded in the same silicon. So, instead of fetching bits over to the CPU (~650 pJ per access), some of the computation can be done locally in the memory (~150 pJ).
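A quick back-of-the-envelope check on those energy figures (the ~650 pJ and ~150 pJ numbers are taken from the comment above, not independently verified):

```python
# Rough energy comparison based on the figures quoted above:
# ~650 pJ to move data out to the CPU vs ~150 pJ to operate on it in place.
fetch_pj = 650   # cost of fetching bits to the CPU
local_pj = 150   # cost of computing locally in the memory

savings = fetch_pj / local_pj
print(f"In-memory compute is ~{savings:.1f}x cheaper per operation")
```

So even before counting latency, the energy argument alone is roughly a 4x win per operation, assuming those per-operation figures hold.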

Some programming models are already well suited to this kind of computing: Spark's MapReduce workloads (and other associatively expressed computations), in which the Map portion becomes practically free and instantaneous.
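To make the "associatively expressed" point concrete, here is a minimal MapReduce-style sketch in plain Python (not Spark or UPMEM code): the map step touches each chunk independently, so it could run wherever the data lives, and the reduce step is associative, so partial results can be combined in any order.

```python
from collections import Counter
from functools import reduce

# Each chunk could live in a separate memory module; the map step
# needs only its own chunk, so it parallelizes trivially.
chunks = ["to be or not to be", "that is the question"]

# Map: word-count each chunk independently.
partials = [Counter(chunk.split()) for chunk in chunks]

# Reduce: Counter addition is associative, so the combine order
# (and grouping) of partial results does not matter.
total = reduce(lambda a, b: a + b, partials)
```

Any workload with this shape — independent per-chunk map, associative combine — gets the map step "for free" in a processing-in-memory design.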

Spark could be configured to use 32 MB partitions, and any workload expressed with .mapPartitions() could be pushed down to the RAM chunks (which are 64 MB in size; say we reserve the other 32 MB for storing the results).
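A sketch of that partitioning scheme in plain Python (emulating Spark's .mapPartitions() semantics rather than calling Spark, and using the 32 MB-input / 32 MB-results split suggested above — the sizing constants are this comment's assumption, not anything UPMEM specifies):

```python
# 64 MB chunk = 32 MB of input records + 32 MB reserved for results.
PARTITION_BYTES = 32 * 1024 * 1024

def partition(records, record_size):
    """Group fixed-size records into partitions of at most PARTITION_BYTES."""
    per_partition = PARTITION_BYTES // record_size
    for i in range(0, len(records), per_partition):
        yield records[i:i + per_partition]

def map_partitions(partitions, fn):
    """Apply fn to each whole partition, like RDD.mapPartitions()."""
    for part in partitions:
        yield from fn(part)

# Example: square 100k records of a hypothetical 1 KB size,
# processing each partition 'locally'.
records = list(range(100_000))
parts = partition(records, record_size=1024)
result = sum(map_partitions(parts, lambda part: (x * x for x in part)))
```

Because fn receives a whole partition at once, this is exactly the granularity at which work could be shipped to a memory chunk: one partition in, one result buffer out.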
