
This seems like a really interesting approach and I think the numbers are promising. Given that it's an entirely different type of model-building than traditional methods, I think it's fine for it to just be up to par with a basic shallow model. If the constructive approach turns out to be comparable to current state-of-the-art models with sufficient refinement, it could be really valuable for low-compute applications like IoT devices, etc.

To be honest, I can't say I know enough about the math here to do more than vaguely follow their explanations, despite my ML/NLP background. I'm curious - other ML researchers out there, how much of this are you able to understand? My impression is that this math is well beyond what ML folks typically know, though I'm on the lower end of the spectrum as far as math knowledge goes, so I may be totally wrong (and need to spend more time reading textbooks haha). I do wonder whether the complexity might slow down progress if this kind of geometric construction turns out to compete with iterative training. It sounds like the approach could support more complex networks by working further on the geometric representation, so I certainly hope this paper serves its purpose of motivating people with the right skillsets to explore further.

In this case the fundamental idea is not especially complicated, but the math may make it seem more complex than it is to somebody unfamiliar with the notation/vocabulary.

Imagine the case where an input dataset has three dimensions: each piece of input data could be represented as a point in 3D space. We want to classify each point with a category. Imagine a cube containing all such points, subdivided into many tiny cubes of the same size. If these tiny cubes were sufficiently small, each would only contain points belonging to a single category (sufficiently small cubes would contain at most a single point, making this trivially true). We could then assign each cube a category based on the points in it; to classify a new point, we'd just check which cube it falls in and look up that cube's category.
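To make that concrete, here's a minimal sketch of the uniform-grid idea in Python. This is my own toy illustration, not code from the paper; the `cell_size` value and the majority-vote labeling are arbitrary choices:

    import numpy as np

    # Toy version of the uniform-grid classifier sketched above.
    # Each point is bucketed into a fixed-size cell; a new point is
    # classified by looking up the cell it falls in.

    def build_grid(points, labels, cell_size):
        """Map each occupied cell to the majority label of its points."""
        grid = {}
        for p, y in zip(points, labels):
            cell = tuple(np.floor(p / cell_size).astype(int))
            grid.setdefault(cell, []).append(y)
        return {c: max(set(ys), key=ys.count) for c, ys in grid.items()}

    def classify(grid, point, cell_size):
        """Return the label of the cell containing `point` (None if empty)."""
        cell = tuple(np.floor(point / cell_size).astype(int))
        return grid.get(cell)

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                      # 100 points in the unit cube
    y = (X[:, 0] > 0.5).astype(int).tolist()      # toy binary labels
    grid = build_grid(X, y, cell_size=0.1)
    print(classify(grid, np.array([0.9, 0.2, 0.2]), cell_size=0.1))

The obvious problem, as the next paragraph gets at, is that the number of cells blows up as the grid gets finer (and exponentially with dimension), and most of them end up empty.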

The above process doesn't scale well (it would be too slow in practice). An alternative is, rather than subdividing the cube into tiny evenly-sized cubes, to use an iterative process: start with the whole cube, find an ideal dimension along which to split it, split it (subject to the constraint that splitting wouldn't make the children too small), then repeat with the two 3D rectangles that result. However, this could produce extremely long, thin 3D rectangles, which might not generalise well to new data, so we also cap the aspect ratio of the child rectangles so they can't get too thin. This approach may leave some areas of the original cube unclassified, so for each such area we find the category of the 3D rectangle it touches most, assign that category to the unclassified area, and repeat until no unclassified areas remain. The end result approximates subdividing the cube into tiny cubes, but is significantly faster to produce.
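And a rough sketch of the recursive splitting. Again this is my own illustration: the widest-axis/median split rule and the MIN_WIDTH/MAX_ASPECT constants are placeholders, not the paper's actual criteria.

    import numpy as np

    MIN_WIDTH = 0.05   # children can't be thinner than this on any axis
    MAX_ASPECT = 8.0   # cap on a child's longest/shortest side ratio

    def ok(lo, hi):
        """Check the size and aspect-ratio constraints for a child box."""
        w = hi - lo
        return w.min() >= MIN_WIDTH and w.max() / w.min() <= MAX_ASPECT

    def split(points, labels, lo, hi, leaves):
        """Recursively split the box [lo, hi] until each leaf is pure."""
        if len(set(labels)) <= 1:
            # Pure (or empty) region: record it. Empty regions get a
            # None label; they're the "unclassified areas" above.
            leaves.append((lo, hi, labels[0] if labels else None))
            return
        axis = int(np.argmax(hi - lo))             # split the widest axis
        mid = float(np.median(points[:, axis]))
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[axis] = right_lo[axis] = mid
        if not (ok(lo, left_hi) and ok(right_lo, hi)):
            # Splitting would violate the constraints: stop here and
            # take a majority vote instead of forcing purity.
            leaves.append((lo, hi, max(set(labels), key=list(labels).count)))
            return
        mask = points[:, axis] < mid
        split(points[mask], [l for l, m in zip(labels, mask) if m],
              lo, left_hi, leaves)
        split(points[~mask], [l for l, m in zip(labels, mask) if not m],
              right_lo, hi, leaves)

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int).tolist()
    leaves = []
    split(X, y, np.zeros(3), np.ones(3), leaves)
    print(len(leaves), "leaf boxes")

Note the leaf boxes tile the whole cube; the None-labeled (empty) leaves are what the fill step below cleans up.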

The above approach is roughly what's described in the paper, except they describe an n-dimensional hypercube rather than just a 3-dimensional one, and provide more detail on how to determine the ideal axis along which to split a hyperrectangle into child hyperrectangles.
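To finish the toy version: the unclassified-area step could look something like this, reusing `leaves` from the previous snippet. I've swapped the paper's "touches most" rule for a cruder nearest-labeled-box-center assignment, purely to keep the sketch short:

    import numpy as np

    def fill_unlabeled(leaves):
        """Give each None-labeled leaf the label of the nearest labeled
        leaf (by box center) - a crude proxy for the touches-most rule.
        Assumes at least one leaf is labeled."""
        centers = [(lo + hi) / 2 for lo, hi, _ in leaves]
        labeled = [(c, lab) for c, (_, _, lab) in zip(centers, leaves)
                   if lab is not None]
        filled = []
        for (lo, hi, lab), c in zip(leaves, centers):
            if lab is None:
                lab = min(labeled, key=lambda t: np.linalg.norm(t[0] - c))[1]
            filled.append((lo, hi, lab))
        return filled

    def classify_point(leaves, point):
        """Linear scan for the box containing `point`; a real
        implementation would walk the tree instead."""
        for lo, hi, lab in leaves:
            if np.all(lo <= point) and np.all(point <= hi):
                return lab

    leaves = fill_unlabeled(leaves)
    print(classify_point(leaves, np.array([0.9, 0.9, 0.1])))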


This sounds like a K-d tree. Am I understanding it right or have I missed something?

Yep, pretty much, except it's constructed differently to how we'd normally construct a K-d tree.

Cool, thanks for the translation! I'm gonna go read the paper now...

Thanks for the tldr. This sounds rather a lot like a tree-based method. How fair is that analogy?

It seems fair to me, in the sense that they are effectively constructing a kind of decision tree. There may be some subtle-yet-significant differences, but they don't contrast their model with other decision trees in the paper.

Not an ML researcher, but none of the non-ML math here is beyond someone who has taken introductory classes in topology and linear algebra. Rather than being too advanced for you, it may just be somewhat orthogonal to the aspects of the problem space you're used to thinking about.


