Computational Algebraic Topology, lecture notes [pdf] (ox.ac.uk)
127 points by MAXPOOL 30 days ago | 35 comments



Does anyone have a reference to actual computational applications of Algebraic Topology?

These notes define various chain complexes and groups (cohomology/homotopy) that are in principle computable. However, I have not seen these applied efficiently anywhere, nor have I been able to come up with interesting applications myself. Topics I have considered are:

1. Triangulation of solution sets of equations (e.g. for homology computations) does not seem to be very popular, at least in dimension >3. I suspect this is a hard problem, but maybe I am just missing pointers to the right literature.

2. CAD applications and computer games have lots of triangulated objects, but their topology does not seem to be very interesting. Again: in dimension <=3 there is really not that much going on. And since you constructed the object yourself, you probably know its geometry already.

3. Graphs appear everywhere and can be viewed as 1-dimensional chain complexes, but they do not have interesting (co)homology groups (see the sketch after this list).

4. Did anyone compute homology groups for "Manifold Learning"?
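As a minimal sketch of point 3: a graph has only two Betti numbers, b0 (connected components, via union-find) and b1 = |E| - |V| + b0 (independent cycles). The example graph below is made up.

    # Betti numbers of a graph: b0 = #components, b1 = |E| - |V| + b0.
    def betti_of_graph(num_vertices, edges):
        parent = list(range(num_vertices))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

        b0 = sum(1 for i in range(num_vertices) if find(i) == i)
        b1 = len(edges) - num_vertices + b0  # independent cycles
        return b0, b1

    # A triangle plus an isolated vertex: two components, one cycle.
    print(betti_of_graph(4, [(0, 1), (1, 2), (2, 0)]))  # (2, 1)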

It's also not clear to me how much interesting information can be extracted from those homology groups. Applications of homotopy/homology in (semi-classical) physics are already quite slim (apart from quantum field theory, string theory, gauge theory?!), as most of it takes place in contractible spaces $\mathbb{R}^n$.


The field of topological data analysis attempts to use homology classes that persist across multiple scales (= "persistent homology") as learnable features, but frankly nothing I have seen from this field seemed to work particularly well.
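For concreteness, a minimal sketch of what "persisting across scales" means in dimension 0: grow balls around the points and record when connected components are born (scale 0) and when they merge (die). Long-lived pairs are the "persistent" features. All names here are my own.

    import itertools, math

    def persistence_h0(points):
        """Birth/death pairs of connected components in a Rips filtration."""
        n = len(points)
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        edges = sorted((math.dist(points[i], points[j]), i, j)
                       for i, j in itertools.combinations(range(n), 2))

        pairs = []  # every component is born at scale 0
        for d, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                pairs.append((0.0, d))  # a component dies at scale d
        pairs.append((0.0, math.inf))  # the last component never dies
        return pairs

    # Two well-separated clusters -> one long-lived pair besides the infinite one.
    print(persistence_h0([(0, 0), (0.1, 0), (5, 5), (5.1, 5)]))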

The one practical algorithm I've seen which uses topological arguments for justification is UMAP for dimensionality reduction, but the core idea, IMO*, is very much like density estimation, and I'm not sure the mathematical justification gives more insight.

* The core idea, IMO: while absolute distances lose meaning in high-dimensional space, relative distances between neighboring points are good yardsticks. This insight is also what powers DBSCAN and other density-based algorithms. I don't think using the language of simplicial complexes when explaining the algorithm simplifies it or adds insight.
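A minimal sketch of that relative-distance idea, in the spirit of UMAP's local scaling (my own simplification, not UMAP's actual construction): rescale each point's distances by the distance to its own k-th nearest neighbor.

    import numpy as np

    def relative_distances(X, k=3):
        """Rescale each point's distances by its k-th nearest-neighbor distance."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        sigma = np.sort(D, axis=1)[:, k]  # column 0 is the point itself (distance 0)
        return D / sigma[:, None]  # row i is now in units of point i's local scale

    X = np.random.default_rng(0).normal(size=(100, 50))  # a high-dimensional blob
    R = relative_distances(X)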


I'm not sure if this is the kind of example you mean, but there have been interesting applications in analyzing electroencephalographic readings in various cognitive and neurologic disorders: e.g. https://journals.plos.org/plosone/article?id=10.1371/journal...


Thanks. This is something!


I'm always interested in applications to signal processing:

https://link.springer.com/book/10.1007/978-3-642-36104-3

You can download it for free from academic libraries that have SpringerLink.


For anyone looking for the "computational" aspects, the only bit I could find was the algorithm presented in section 6. Otherwise these are pretty standard algebraic topology notes.


I assume it's called computational since the focus is on simplicial complexes, where everything can in principle be computed. Some of the filtration material on finite point sets (a sketch of such a filtration is below) looks like machine learning / topological data analysis, which definitely wasn't covered in my AT class.
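A minimal sketch of that kind of filtration (a Vietoris-Rips complex on made-up points): the complex at scale eps contains every simplex whose vertices are pairwise within eps, so growing eps produces a nested family of simplicial complexes.

    import itertools, math

    def rips_complex(points, eps, max_dim=2):
        """Simplices of the Vietoris-Rips complex at scale eps (clique complex)."""
        n = len(points)
        close = lambda i, j: math.dist(points[i], points[j]) <= eps
        simplices = [(i,) for i in range(n)]
        for k in range(2, max_dim + 2):  # edges, then triangles, ...
            simplices += [s for s in itertools.combinations(range(n), k)
                          if all(close(i, j) for i, j in itertools.combinations(s, 2))]
        return simplices

    pts = [(0, 0), (1, 0), (0, 1), (3, 3)]
    print(rips_complex(pts, eps=1.5))  # growing eps only ever adds simplices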


The hardest thing for me to grok while trying to learn enough AT for TDA (but: I mean really learn, not just understand in a loose Medium-blog kind of way) was the ker/im definition of homology. I pestered people on IRC for days and just worked hard to stretch my brain enough to see how it applies in many "small" cases.

If you're coming from topology to AT and can grok the ker/im stuff, TDA should be easy peasy for you. I flunked real analysis (I think I scraped a couple of points on appeal by sophistry and desperation, so I passed, but really), so I never moved into topology beyond skimming books to "learn about" key concepts.

You simply can't have too much math in life. I'm having a kid soon and I struggle between what seems truer to my core values (let the kid have a childhood and then personal inclinations; advise but not push; if he wants to do the gifted kid thing, yay) and what seems practical advice (learn as many human languages as you can, at least English, French and German; learn math deeply, suffering along the way).


Yeah, my own mental picture of homology and cohomology tends to default to de Rham (co)homology or the axiomatic approach, versus the combinatorial approaches that are normally taught (I'm also bad at combinatorics).

I'm going to be hand-wavy here, but in de Rham cohomology, the "boundary" operator is basically just the derivative. So the de Rham cohomology of a manifold (the ker/im) is the functions that differentiate to zero (locally constant functions) modulo the functions that are the derivative of some other function (i.e. that can be integrated up). You are left with the functions that are "interesting" and define the geometry of the manifold in some way. Since manifolds are patched-together pieces of Euclidean space, locally constant functions don't need to be globally constant, which is why the quotient operation isn't trivially zero. If you've never seen quotient rings/groups/topologies before, though, I can see how ker/im would be a major trip-up no matter the homology flavor.
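For reference, the non-hand-wavy version (standard definitions, in LaTeX):

    % k-th de Rham cohomology: closed k-forms modulo exact k-forms
    H^k_{dR}(M) = \ker(d : \Omega^k(M) \to \Omega^{k+1}(M))
                  / \operatorname{im}(d : \Omega^{k-1}(M) \to \Omega^k(M))
    % e.g. H^0_{dR}(M) is the locally constant functions, so its dimension
    % counts the connected components of M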

The crazy part of algebraic topology (to me) is that all these (co)homology theories are isomorphic. De Rham cohomology and simplicial cohomology (with real coefficients) give you the same groups despite not having much to do with each other at first glance. (De Rham's theorem provides the isomorphism, via integration, with Stokes' theorem making it well-defined.)


Wait until you learn about modern homotopy theory. It turns out that basically every cohomology theory can be described as homotopy classes of maps into some space.

The standard cohomology groups with integer coefficients, H^n(X; Z), are just homotopy classes of maps from X into the Eilenberg-MacLane space K(Z,n). Want K-theory instead? No problem. The cohomology group K(X) is homotopy classes of maps [X, Z x BU], where Z is the integers and BU is the classifying space of the unitary group.

It turns out essentially all cohomology theories are represented this way: h(X) = [X, S_h], where [ , ] is homotopy classes of maps and S_h is the spectrum of the cohomology theory (roughly, a sequence of spaces, one per degree).
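In LaTeX, this is Brown representability (standard statement, one representing space per degree):

    % Brown representability for a (reduced, generalized) cohomology theory h:
    h^n(X) \cong [X, E_n], \qquad E_n \simeq \Omega E_{n+1}
    % Ordinary cohomology: E_n = K(\mathbb{Z}, n)
    % Complex K-theory: E_{2n} = \mathbb{Z} \times BU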

Even wackier, the various algebra operations you can do on cohomology classes are mirrored by / derived from algebraic operations on the spectra. Which means that much of the machinery of algebraic geometry can be used to study cohomology theories.


Wikipedia is wonderful at giving you the hand-wavy picture, and then you can even skim some more advanced papers with hand-wavy ideas (e.g. there's a rigorous theory of PDEs on graphs relating the Kirchhoff/incidence matrix to the de Rham operator, and then developing a theory of how graph Laplacians are really the Laplacians from calculus).
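A minimal sketch of that correspondence (the graph is made up): with B the oriented vertex-edge incidence matrix playing the role of d, the graph Laplacian is L = B B^T, mirroring the Hodge Laplacian on functions.

    import numpy as np

    # Path graph 0 - 1 - 2; B is the oriented incidence matrix, a discrete "d".
    B = np.array([[-1,  0],
                  [ 1, -1],
                  [ 0,  1]])

    L = B @ B.T  # graph Laplacian = "d* d"
    print(L)
    # Constant vectors are in the kernel, like locally constant functions:
    print(L @ np.ones(3))  # [0. 0. 0.]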

But I kind of have a BDSM relationship with mathematics. I'm not happy until it makes me suffer.


> The crazy part of algebraic topology (to me) is that all these homology theories are isomorphic.

This should not be crazy at all. The general recipe is that you take something for which there is a local solution, say on a ball or square, and then ask what obstructions prevent you from patching the local solutions into a global solution. That is all that's going on with the ker/im.

For example, differential equations have local solutions (just the contraction mapping theorem). But can you patch them into global solutions? Locally spaces can be described as a ball or disk, but can you globally patch them together? So the obstructions end up measuring something topological. A series of local solutions on, say, a circle can patch together smoothly but when you wrap around to where you started you can be off - there is an obstruction.

So the technique of finding local solutions can involve solving a differential equation or constructing a simplex but the obstructions of both have only to do with the consistency of patching these together, which is a topological question.
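The classic worked example of such an obstruction (standard, just restated here): on the punctured plane, the angle form is locally the derivative of an angle function, but the local choices cannot be patched into a global one.

    % \omega is locally d\theta, but no global \theta exists on \mathbb{R}^2 \setminus \{0\}:
    \omega = \frac{x\,dy - y\,dx}{x^2 + y^2}, \qquad d\omega = 0, \qquad
    \oint_{S^1} \omega = 2\pi \neq 0
    % the nonzero integral around the cycle is exactly the patching obstruction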

As a historical note, mathematicians understood that counting the dimensions of these obstructions reveals something about the underlying space, and there was hope that they would end up measuring finer structures -- e.g. the differential structure rather than just the topological structure. But the above discussion should at least motivate that this is unlikely. All the various *-homology theories ended up being mostly equivalent. It wasn't until the 1980s, with the work of Simon Donaldson, that true differential invariants were found, and the mechanism was to take the old homology invariants but apply them to an associated object whose own topology is shaped by the differential structure of the original space, for example the moduli space of vector bundles associated to a space. Then the homological invariants of that moduli space can detect differential invariants of the original space. There is a beautiful lecture series here: https://cmsa.fas.harvard.edu/donaldson-thomas-gromov-witten/


Did anyone suggest thinking of ker/im as "boundaries modulo cycles"? That is, homology detects "holes," loosely speaking, and a hole is exactly something that you can draw a cycle around, where that cycle is not the boundary of some disk (in 2 dimensions, for example).

To be even more concrete, if I cut out a disk from a piece of paper, I've created a cycle (the circle bounding the disk) that is not the boundary of any disk in the piece of paper (because I cut it out).

Admittedly, this gets a little funkier in higher dimensions, but for 2 dimensions, it's a good guide. (I think Hatcher's book has a discussion of how to make this idea precise in higher dimensions.)


Small correction - it's "cycles mod boundaries". In general, a quotient object R/I is read "R mod I" (top mod bottom).

What's a cycle? It's something in the kernel of the boundary map. To be in the kernel means the boundary map takes it to 0. So heuristically, cycles are "things with no (zero) boundary".

For the quotient cycles/boundaries to make sense - for any quotient to make sense - the bottom should be contained in the top. Like in Z/6Z, the integers mod 6, the "6Z" (the integers which are a multiple of 6) is contained in Z (the integers).

In this case, for "cycles/boundaries" to make sense, a boundary should be a cycle - that is, a boundary should have no boundary. This is (again heuristically) what the definition of a boundary map (d^2 = 0) means.
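A quick worked check of d^2 = 0 on a single filled triangle [a,b,c] (standard computation, just spelled out):

    \partial_2 [a,b,c] = [b,c] - [a,c] + [a,b]
    \partial_1 (\partial_2 [a,b,c]) = (c - b) - (c - a) + (b - a) = 0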

Consider your disk/circle analogy. The circle is a boundary (the boundary of the disk). Does the circle have a "boundary"? No - the boundary of a boundary is 0.

When the quotient cycles/boundaries is nonzero, you have nontrivial homology. Think of nontrivial homology as "having a hole". To see an example of this, take the plane but remove the origin (0, 0). Consider your circle again (say the unit circle x^2 + y^2 = 1). It's still a cycle, because its boundary is 0 (no boundary). But it's not the boundary of "something" (where "something" means roughly "something nice" - it's hard to explain this better without getting technical). So the cycle represented by the circle represents a nontrivial homology class (in the homology of plane minus origin).
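To make this computable, here is a minimal sketch with a hollow triangle (a triangulated circle, playing the role of the circle in the punctured plane): rank-nullity on the boundary matrices gives b_k = dim ker(d_k) - rank(d_{k+1}), and the nontrivial b_1 = 1 is the hole.

    import numpy as np

    # Hollow triangle: 3 vertices a, b, c; 3 edges [ab], [bc], [ac]; no 2-simplex.
    # d1 sends edge [u, v] to v - u (rows = vertices, columns = edges).
    d1 = np.array([[-1,  0, -1],
                   [ 1, -1,  0],
                   [ 0,  1,  1]])

    n_vertices, n_edges = d1.shape
    rank_d1 = np.linalg.matrix_rank(d1)
    b0 = n_vertices - rank_d1     # dim ker d0 - rank d1  (d0 = 0)
    b1 = (n_edges - rank_d1) - 0  # dim ker d1 - rank d2  (no triangles, so d2 = 0)
    print(b0, b1)  # 1 1: one connected component, one hole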


This is just the same as my own conception; then again, I also used (and can highly recommend) Hatcher.

He kindly provides a pdf on his webpage here: https://pi.math.cornell.edu/~hatcher/AT/ATpage.html


It was the "modulo" I had trouble with (the idea of quotient spaces). I understood well how kernels corresponded to cycles (people on IRC made me do lots of calculations with chain complexes both in Z/2 and R), and had some idea of groups (or, at least, of when something is a semigroup and not a group...) but didn't have an internal feeling of what quotient groups were.


Do you grok equivalence classes? Because quotients (of anything) are basically sets of equivalence classes.

And on the parenting front, luckily you start with a baby, not a ten-year-old, so you have lots of chances to grow as a parent before making those sorts of decisions - a lot of parenting is deciding when to be directive/non-directive, when to take charge based on your superior understanding of the world (not letting your toddler walk into the road!), and when to let them learn from their own mistakes.


I'm sold. I'm not very good at maths, but modding the kernel of ∂ₙ by the image of ∂ₙ₊₁ is something I "understand" in the von Neumann sense. That is to say, I've become "used to it" and now it seems to me utterly obvious. (I accept Dunning-Kruger objections may apply.)
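Written out (the standard definition):

    H_n = \ker \partial_n / \operatorname{im} \partial_{n+1}
    % cycles (things with no boundary) modulo boundaries (of something one dimension up)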

Anyway, where is this Topological Data Analysis that's going to be easy for me?


My tip is to look for slide decks before diving into papers.


Looks like “simplicial” would be a better adjective, or “combinatorial”, as opposed to “continuous”.


The webpage for the course [1] includes not just these lecture notes, but also 4 problem sets and links to videos of the lectures on YouTube.

[1]: http://people.maths.ox.ac.uk/nanda/cat/


This book looks like it may be a nice complement to these CAT notes:

https://www2.math.upenn.edu/~ghrist/notes.html

(I haven't gone through either.)


I wrote a series of blog posts about persistent homology and topological data analysis for absolute beginners as I was obsessed with it for some time: http://outlace.com/TDApart1.html


Just briefly skimming over your post, a quick correction to the Continuity section:

> Definition (Homomorphism) There is a homomorphism (i.e. an equivalence relation) between two topological spaces if there exists a function f:X→Y, where X and Y are topological spaces (X,τX) and (Y,τY) with the following properties

This should be "homeomorphism", not "homomorphism". A homomorphism is a structure-preserving map, like a continuous function (in the context of topological spaces) or a linear map (in the context of vector spaces). Homomorphisms are very much one-directional. Homeomorphisms, on the other hand, are specifically equivalences between topological spaces, and are defined as continuous functions with continuous inverses.

Edit: finished reading it. I'm impressed by how much you cover in such a short space. Overall it looks good, so thanks for taking the time to write it. Some of these ideas are at a pretty high level of abstractness, so I sympathize with anyone who struggles to get any kind of intuition for them at a first go.


Thanks for the correction!


This looks great, thanks!


Anyone have a glossary or index for newbs? Like a shortcut to reading the symbolic notation? I have some expository knowledge of sets (e.g. sets are denoted by capital letters) and of some of the symbols (e.g. the vertical pipe means "given" or "where" and indicates a predicate assumption of sorts). Anyone have the cheat code for laypeople and undergrads?


Sorry, there is no cheat code here. Understanding this stuff requires a real and significant time investment, and is beyond arm's reach (though not beyond sight) of undergrads, and especially of laypeople. If you put in the investment and understand what a module, a quotient operation, an exact sequence, etc. are, you will be able to follow. However, this is not trivial, and it is not just an issue of notation: there are real, fundamental concepts here that one must learn to grasp.


What would be your "Pareto list", i.e. the list of concepts, symbols, notation, etc. that would get one 80% of the way with 20% of the knowledge?


I don't think that there is any, sorry.


These notes (on a different yet somewhat related subject) seem a little more friendly in the way they explain the basics:

https://www.cs.cmu.edu/~kmcrane/Projects/DDG/


Cool notes, I also recommend this paper which actually implements persistent homology :P http://fodava.gatech.edu/files/reports/FODAVA-08-02.pdf


27 MB for just 100 pages of lecture notes? On a certain Russian site you can find 600-page textbooks weighing 2-3 MB.


There's quite a few pictures in there.


Examples please



