
A Brief Guide to a Few Algebraic Structures - edgarvm
https://argumatronic.com/posts/2019-06-21-algebra-cheatsheet.html
======
SAI_Peregrinus
The Wikipedia page on algebraic structures _is_ rather more detailed than I
usually want, but the page "Outline of algebraic structures"[1] is a great
quick reference.

[1]
[https://en.wikipedia.org/wiki/Outline_of_algebraic_structures](https://en.wikipedia.org/wiki/Outline_of_algebraic_structures)

------
_hardwaregeek
Interesting seeing this defined from a functional programming/category theory
perspective. Personally I like examples for my algebraic structures along with
some simple proofs why they're that structure. What was extremely useful for
my Algebra class was a Wikipedia note detailing the hierarchy of rings via set
inclusion:

    
    
        commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ finite fields
    

Found here:
[https://en.wikipedia.org/wiki/Integral_domain](https://en.wikipedia.org/wiki/Integral_domain)
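
To see the far end of that chain concretely, here's a quick sketch (my own Python, not from the article or the Wikipedia page): ℤ/5ℤ is a finite field because every nonzero element has a multiplicative inverse, while ℤ/6ℤ is only a commutative ring (2 and 3 are zero divisors).

```python
# Check whether Z/nZ is a field: every nonzero element
# must have a multiplicative inverse mod n. (Brute force,
# fine for small n; this holds exactly when n is prime.)
def is_field(n):
    return all(
        any((a * b) % n == 1 for b in range(1, n))
        for a in range(1, n)
    )

print(is_field(5))  # True:  Z/5Z is a finite field
print(is_field(6))  # False: 2 has no inverse mod 6 (2 * 3 = 0, a zero divisor)
```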

------
nilkn
> Rng

I really think names without a fairly obvious way of saying them out loud
should just be avoided/deprecated. I can't pronounce this as "ring" (because
then you'd think I was referring to a ring, not a rng). Do I just spell it
out? R-N-G? I'd rather just call it a non-unital ring. That involves more
letters but is much more descriptive (to a mathematician, anyway).

~~~
amatecha
Yeh, she calls it out right in the definition:

"If we may speak frankly, the rig and rng nomenclatures are abominations.
Nevertheless, you may see them sometimes, but we will speak of them no more."

:)

~~~
nilkn
Sure, but I also wanted to suggest a simple descriptive name that sidesteps
the problem.

------
quickthrower2
That is beautifully written and a nice recap of those things, having studied
maths a long, long time ago. The Heyting algebra is quite interesting. I
learned about that more recently from some online lectures about logic and
category theory (I forget where now).

------
mcphage
I really like this, but one change I'd like to see: given how many structures
are defined in terms of simpler structures (which are defined in terms of yet
simpler ones), it would be nice if a structure listed _all_ of its laws, not
just the ones that simpler structures don't already cover. That way, you
wouldn't have to jump around the page to get a complete picture of a single
structure. Maybe have the inherited ones be a different color, so you can see
what is new vs. what is inherited?

~~~
joker3
It's somewhat confusing to see it this way the first time, but it's absolutely
the right way to think about these structures. It's better to do the work to
get used to it early on.

~~~
mcphage
It's the right way to think about the structures, sure. But when you want to
look up the laws that make something a Ring, for instance, having them all in
one place, instead of having to jump between Monoid, Semigroup, Group, Abelian
Group, Quasiring, Nearring, and Ring (and some of those laws occur twice, for
different operators), makes it a lot more useful as a reference document.

~~~
BoiledCabbage
That's like saying that when implementing a function you shouldn't call
another function; you should copy and paste its implementation so you don't
have to flip to another location in the code.

The right way is to learn what the function means, so you can see calls to it
and know what it does.

~~~
mcphage
The purpose of the site is to be:

> So, that’s the intent of this post – to be that fast reference to the kinds
> of algebraic structures I care about.

I learned these structures in the early 90s; I don’t need to be told the right
way to learn them. However, a “fast reference” would be useful.

~~~
srean
For arithmetic properties and definitions do you suggest tracing them right to
Peano axioms ?

~~~
mcphage
If the page had arithmetic structures, then obviously yes. Groups don’t
include those properties, which is why they are not listed, and that is a very
important distinction.

------
JoelMcCracken
Oh wow, this looks great. I had started working on my own FP glossary, but
this has a ton of information and is clear enough that I'm sure many will find
it very very helpful.

~~~
GrumpyNl
It is very clearly written. I've never really done algebra, but after this, it
starts to make sense.

------
crimsonalucard
I'm learning category theory to understand haskell better.

I'm finding category theory to be almost a theory of program structure in the
context of composition. It's giving me a whole new perspective on one of the
least concrete things about programming, namely design.

How relevant is abstract algebra to programming? Will it change my perspective
on everything related to programming? How much of a mind bender is it compared
to category theory?

~~~
merijnv
> I'm learning category theory to understand haskell better.

I would consider myself a fairly expert Haskell programmer, and I have (for
fun/curiosity) spent some time reading up on/studying category theory, and I
can say, without doubt or hesitation, that if your goal is to either 1)
understand Haskell better and/or 2) become better at writing Haskell, then
studying category theory is a MAJOR waste of your time. I would advise you to
spend that time instead on reading up on the lambda calculus, type theory, and
some basic algebra (like this post).

Haskell is not/has never been based on category theory (and I keep being
baffled by how many people on social media claim that it is, given how well
documented its origins are), and the common terminology of Functor/Monad that
was pilfered from CT via Wadler has only a passing resemblance/relation to its
CT counterparts.

Some things that I instead _would_ recommend reading up on:

- Type theory. Benjamin Pierce's "Types and Programming Languages" is the de
facto introduction. It covers everything from the untyped lambda calculus to
things way more complex than standard Haskell, including example
implementations of type checkers, etc.

- Computer-assisted proofs/formal verification of programs. The Software
Foundations book series, co-authored by Pierce, is a good (and free!) intro:
[https://softwarefoundations.cis.upenn.edu/](https://softwarefoundations.cis.upenn.edu/)

- The Spineless Tagless G-machine. If you are a more low-level/C-minded
person, this talks about how we compile a lazy functional language like
Haskell to an Intel CPU:
[http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.3729](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.3729)

- The Typeclassopedia, which talks about how the various CT-inspired classes
relate to each other and their laws:
[https://wiki.haskell.org/Typeclassopedia](https://wiki.haskell.org/Typeclassopedia)

EDIT: All of the above is not to say that you shouldn't learn category theory,
but that you should have realistic reasons/expectations (even if that reason
is just "I'm curious and it's cool"). I just hate seeing people get burned out
trying to "get" category theory and (as a result) deciding Haskell must not be
for them...

~~~
crimsonalucard
Your insight is valuable and interesting. I'm certainly not an expert on
Haskell and I'll take a look at your book recommendations. Thank you.

I have to say, though, I can't agree with you on category theory yet,
especially the part about functors. The fmap for a functor f looks to me to be
exactly the definition of the part of a functor that maps morphisms in the
source category to morphisms in f. The resource I am reading on category
theory makes it seem highly relevant to Haskell and software design in
general. I'm curious as to your opinion on why the Functor in Haskell has only
a passing relationship to functors in category theory.

~~~
merijnv
> I'm curious as to your opinion on why the functor in Haskell only has a
> passing relationship to functors in category theory.

If we assume that Hask, the universe of Haskell types and the functions
between them, is a category (which is hotly debated with some regularity, as
there is no formal definition of Hask), then the Functor class in Haskell
really only describes one specific subset of functors: endofunctors on Hask
(i.e., functors mapping objects (types) in Hask to objects in Hask).

This is a rather limited subset of all functors. You can also see this in the
laws, for example. You probably know the two functor laws, "fmap id x = id x"
and "fmap f . fmap g = fmap (f . g)", but Haskell's Functor is limited enough
that one of them is actually redundant: in Haskell, having a proof of just the
identity law gives you the composition law for free (as a free theorem;
Wadler's "Theorems for Free!" is a good intro, and it basically covers "why is
more generic code easier to reason about").
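
To make the two laws concrete, here's a quick sketch (in Python rather than Haskell, just to keep it self-contained; lists play the functor and a comprehension plays fmap; Python can't show the free-theorem part, only that both laws hold):

```python
# Lists form a functor; "fmap" is just mapping a function over the list.
def fmap(f, xs):
    return [f(x) for x in xs]

identity = lambda x: x
f = lambda x: x + 1
g = lambda x: x * 2
compose = lambda f, g: (lambda x: f(g(x)))

xs = [1, 2, 3]

# Law 1 (identity): fmap id = id
assert fmap(identity, xs) == xs

# Law 2 (composition): fmap (f . g) = fmap f . fmap g
assert fmap(compose(f, g), xs) == fmap(f, fmap(g, xs))
```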

> The resource I am reading on category theory makes it seem highly relevant
> to Haskell and software design in general.

I'm not saying there aren't useful ideas to be gleaned at both the superficial
and deeper levels. The whole lawfulness of functors is great for being able to
reason about code without knowing what it actually does, and all of lens is
deeply inspired by category theory.

All of CT is about composition and reasoning about it, which we could
certainly use more of in programming. So I encourage anyone curious to look
into it. I'm just trying to provide some nuance to the... enthusiastic
overhyping of CT, because I don't want people to be scared away from wonderful
tools and languages like Haskell just because they think they're not smart
enough to get all that fuzzy math :)

~~~
antisemiotic
> If we assume Hask, the universe of Haskell types and functions operating, is
> a category (which is hotly debated with some regularity, as there is no
> formal definition of Hask).

Isn't the consensus that "Hask is a category as long as you wave your hands
profusely and pretend that exceptions and non-terminating functions never
happen"?

------
amelius
I was wondering about something. If I have the sum x + x + ... + x (n times),
that is the same as x * n. If I have the product x * x * ... * x (n times),
that is the same as x ^ n.

What is it called when this is generalized? E.g., call + op1, call * op2, and
call ^ op3. What would op0 be? And what would op0.5 be?

How does the unit element for these operations behave?

And what are the rules for associativity and commutativity as the order of the
operation increases?

~~~
sa1
It's called a hyperoperation:
[https://en.wikipedia.org/wiki/Hyperoperation](https://en.wikipedia.org/wiki/Hyperoperation),
and it's defined only for natural numbers, so op0.5 would still be undefined.
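
op0, by the way, turns out to be the successor function. A quick Python sketch of the recursive definition (my own illustration, not taken from the Wikipedia page):

```python
# Hyperoperation H_k(a, b): H_0 is successor, H_1 is addition,
# H_2 is multiplication, H_3 is exponentiation, H_4 is tetration, ...
# Each level iterates the one below it.
def hyper(k, a, b):
    if k == 0:
        return b + 1          # successor ignores a
    if k == 1:
        return a + b
    if k == 2:
        return a * b
    if k == 3:
        return a ** b
    if b == 0:
        return 1              # base case for k >= 4
    return hyper(k - 1, a, hyper(k, a, b - 1))

print(hyper(3, 2, 3))  # 8   (2 ^ 3)
print(hyper(4, 2, 3))  # 16  (tetration: 2 ^ (2 ^ 2))
```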

------
wsxcde
Is there a good textbook that covers these? I am primarily interested in a
computer science perspective to these algebraic structures.

~~~
nandemo
My favourite abstract algebra textbook is Fraleigh's _A First Course in
Abstract Algebra_. Unlike the article here, which just dumps a bunch of
definitions on you, the book introduces each structure with proper motivation.
It's not a CS approach, though.

[https://www.amazon.com/First-Course-Abstract-Algebra-7th/dp/0201763907](https://www.amazon.com/First-Course-Abstract-Algebra-7th/dp/0201763907)
(the 1-star reviews apply to the Kindle version, not the contents. Just get
the paperback and you'll be fine.)

If you want a CS approach, I suggest learning basic Haskell and then tackling
the fantastic Typeclassopedia. The downside is you'll be missing out on
structures/theorems that are super useful in math but not that useful in
programming.

[https://wiki.haskell.org/Typeclassopedia](https://wiki.haskell.org/Typeclassopedia)

~~~
commandlinefan
Wow, if the kindle edition looks anything like the preview, I understand the
1-star reviews - it's not just bad, it's a travesty. The pages aren't even in
order, and some of the diagrams are missing.

------
7373737373
A lambda-cube-like visualization with the various properties (associativity,
etc.) as the axes could be helpful.

