Shedding Light on Dark Bandwidth (nextplatform.com)
78 points by bob_rad on Sept 16, 2017 | 10 comments



As ARM Research architect Jonathan Beard describes it, the way systems work now is a lot like ordering a tiny watch battery online and having it delivered in a box that could fit something an order of magnitude larger, with a lot of unnecessary packaging around it to account for the extra space.

Seems that's how most stuff we order online IS delivered. I can see the point, but maybe "unnecessary" is more a matter of perspective?


I think the point refers to getting overly large boxes, ones way bigger than necessary. It's easier for the people shipping and stocking boxes to have a fixed set of boxes to pick from versus stocking customized sizes. From that perspective it's great. Probably not so good for packing as much as possible in the lorry...


Well, we're up against limits of nature now. We can't go smaller; CPUs won't get much faster automatically. They'll only get wider and use existing space more efficiently.


Exactly. Architects are a bit like that kid who took all the easy classes, got all A's, and just now realized they have to work in the real world. Things are likely going to change rapidly in microarchitecture and architecture over the next ten years, far more than in the past thirty. Frequency scaling and node shrinkage killed the art. I can't wait to see more from this guy; the paper referenced in another comment seems to be very close to what is described by Beard in this position piece. I wonder what else he has going on.


If a theoretical technology exists that can remove that "unnecessary" waste, wouldn't it be valid to call it unnecessary waste, even if only in theory?


It would be quite interesting if we ended up with a query language that treats our memory as a remote device, the same way we have query languages for our databases.
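As a toy illustration of that idea (entirely my own sketch; `MemoryDevice` and `query` are invented names, not anything from the article): a "memory query" would push the predicate down to the device and ship back only the requested fields, rather than whole cache lines of packaging.

```python
# Toy model: memory as a "remote" store that answers queries,
# returning only the bytes asked for instead of whole records.
# All names here are hypothetical, purely for illustration.

class MemoryDevice:
    def __init__(self, records):
        self.records = records  # list of dicts, standing in for structs in DRAM

    def query(self, fields, predicate):
        """Evaluate the predicate 'near memory' and return only the
        requested fields -- the useful bytes, not the packaging."""
        return [tuple(r[f] for f in fields)
                for r in self.records if predicate(r)]

mem = MemoryDevice([{"id": i, "weight": i * 0.5, "flag": i % 3 == 0}
                    for i in range(10)])
hits = mem.query(["id", "weight"], lambda r: r["flag"])
print(hits)  # only 4 small tuples cross the "bus", not 10 full records
```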


This might be interesting for you then: http://www.jonathanbeard.io/pdf/b17a.pdf


> The key technology presented in this paper is the Sparse Data Reduction Engine (SPDRE)

It's a shame you missed the opportunity to call this a Sparse Data Engine (SpaDE) - you would then get nice terminology about shovelling data around. On the other hand, the work around cache invalidation looks solid, and one out of two ain't bad :).

As you note in the paper, a key difference between SPDRE and things like Impulse is that SPDRE works in batch, whereas Impulse is on-demand. That means a higher up-front cost for setting up a reorganisation, but a lower cost for accessing it. Do we know how that advantages and disadvantages the two approaches in different domains?

I can imagine that for classic HPC stuff like working on matrices, batch is better. You have a matrix of structures, and you're going to access some particular field in every one of them, potentially several times. So, all of the work done during reorganisation is useful.
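A minimal sketch of that access pattern (my own example, not from the paper): with an array of structures, touching one field per element drags along every neighbouring field, while a one-time batch gather into a dense buffer makes the repeated passes touch only useful bytes.

```python
# Array-of-structures: each "record" is (x, y, z, mass), but the hot
# loop only reads mass. A batch gather pays once to densify that field.
records = [(i, 2 * i, 3 * i, float(i)) for i in range(8)]  # toy AoS

# Without reorganisation, every access to mass also fetches x, y, z
# (in hardware terms, the rest of the cache line is wasted bandwidth).

# Batch gather, done once up front (what a SPDRE-style engine would do):
mass = [r[3] for r in records]  # dense, structure-of-arrays view

# Repeated passes over the dense view touch only the field we need:
total = sum(mass) + sum(m * m for m in mass)
print(total)
```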

On the other hand, i can imagine that for searchy tasks, as in databases, access might be much sparser and harder to predict. I might have a graph of data that i want to find a path through; i expect to only touch a tiny number of the nodes in the graph, but i don't know which ones upfront. Reorganising the relevant data out of all the nodes would be a huge amount of wasted work.

The programmer interface to both approaches seems like it could be pretty similar: define a reorganisation that should exist somewhere in memory, wait for a signal that it is ready, access it, tear it down. Does that open the door to hybrid approaches which combine on-line calculation with speculative bulk work? That would limit the interface to a lowest-common-denominator way to specify reorganisations; would that sacrifice too much power?
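That define/wait/access/teardown lifecycle can be sketched as a host-side API (all names hypothetical; the paper does not specify this interface). It also shows where a hybrid engine could hide the batch-vs-on-demand choice behind the same calls:

```python
# Hypothetical host-side interface for a reorganisation engine.
# define -> wait -> access -> teardown; whether the engine fills the
# view eagerly (batch) or lazily (on-demand) is hidden behind it.

class ReorgHandle:
    def __init__(self, source, gather_fn, eager=True):
        self.source, self.gather_fn = source, gather_fn
        self.view = gather_fn(source) if eager else None  # batch vs lazy

    def wait(self):
        """Block until the reorganised view is ready (a no-op here)."""
        if self.view is None:              # on-demand: build on first touch
            self.view = self.gather_fn(self.source)
        return self.view

    def teardown(self):
        self.view = None                   # release the scratch region

data = [(i, i * i) for i in range(5)]
h = ReorgHandle(data, lambda src: [b for _, b in src], eager=False)
squares = h.wait()  # the "signal that it is ready"
print(squares)      # [0, 1, 4, 9, 16]
h.teardown()
```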


It'll be called SPiDRE (pronounced spider) at MEMSYS. Name coined by a colleague, I can't take credit.

Thanks for reading! You'll have to wait for the presentation and follow-on papers for some of those answers :). If you read the Dark Bandwidth paper, there are some solutions mentioned there and in the presentation (http://www.jonathanbeard.io/slides/beard_hcpm2017.pdf) that could apply to what you suggest.


I've always wondered about such schemes. The references at the end are welcome.



