SageMath – Open-Source Mathematical Software System (sagemath.org)
446 points by lainon on Sept 15, 2018 | 80 comments

Sage is wonderful. It has a huge number of uses, but I mostly use it for cryptography. Sage has the best support I know of (certainly among open-source tools) for a myriad of things like group theory and elliptic curves. Here's a short example of how easy it is to play around with a tiny elliptic curve:

  sage: p = 19; p.is_prime()
  True
  sage: K = GF(p); K
  Finite Field of size 19
  sage: E = EllipticCurve(K, [5, 9]); E
  Elliptic Curve defined by y^2 = x^3 + 5*x + 9 over Finite Field of size 19
  sage: E.count_points(), E.order(), E.gens()
  (19, 19, [(4 : 6 : 1)])
There are far too many things Sage can do to name them all, but the CLI has great autocomplete. Here's an example: let's say you have ECDH with point compression, and you specify only an x coordinate. Point compression limits the effectiveness of invalid curve attacks, where an attacker gives you a maliciously picked Diffie-Hellman value that isn't actually on the curve you're supposed to be on. However, if the x coordinate doesn't map to a point on the curve, it's necessarily on the curve's "nontrivial quadratic twist". Sage makes this easy to play with, because Sage makes pretty much everything easy to play with:

  sage: E.lift_x(6, all=True)
  []
  sage: E.quadratic_twist()
  Elliptic Curve defined by y^2 = x^3 + 6*x + 13 over Finite Field of size 19
  sage: E.quadratic_twist().lift_x(6, all=True)
  [(6 : 6 : 1), (6 : 13 : 1)]
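The curve-or-twist dichotomy is simple enough to check without Sage. Here's a plain-Python sketch using the toy parameters from the session above (p = 19, a = 5, b = 9) and Euler's criterion for squareness:

```python
# Plain-Python sketch of the curve-or-twist dichotomy, using the toy
# parameters from the session above: y^2 = x^3 + 5x + 9 over GF(19).
p, a, b = 19, 5, 9

def is_square(n, p):
    # Euler's criterion: for odd prime p, n is a square mod p
    # iff n == 0 or n^((p-1)/2) == 1 (mod p).
    n %= p
    return n == 0 or pow(n, (p - 1) // 2, p) == 1

def on_curve(x):
    # x lifts to a point on the curve iff x^3 + ax + b is a square mod p;
    # otherwise the corresponding point lives on the quadratic twist.
    return is_square(x**3 + a * x + b, p)

print(on_curve(4))   # True: x = 4 is the generator's x-coordinate above
print(on_curve(6))   # False: x = 6 only lifts on the twist, as shown above
```

Every x falls into exactly one of the two buckets, which is why an x-only compressed coordinate can never be "off-curve" entirely -- the worst case is the twist.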
If you want to do a full-on invalid curve attack, the easiest way to do it is with Sage. You look up how the explicit formulas work in the EFD[efd], you write a ladder, you figure out how to create other elliptic curves for which the short Weierstrass doubling formulas still work (which parameter doesn't appear in the formula?), and then just let Sage generate every possible curve and see which ones have the poor cryptographic properties you're after.
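A hedged sketch of that enumeration step, on the same toy field (p = 19, a = 5; in a real attack p would be a cryptographic prime and you'd factor the orders): the short Weierstrass doubling formula never uses b, so any curve y^2 = x^3 + ax + b' "works" in a victim's doubling code, and you hunt for b' values whose curves have bad order structure. We don't bother filtering out the singular b' values in this toy.

```python
# Enumerate curves y^2 = x^3 + 5x + b' over GF(19) and count their points.
# The doubling formula depends only on a, so all of these are "invalid
# curves" that a victim's point arithmetic would happily compute on.
p, a = 19, 5

def curve_order(b):
    count = 1                                 # the point at infinity
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        if rhs == 0:
            count += 1                        # one point with y = 0
        elif pow(rhs, (p - 1) // 2, p) == 1:
            count += 2                        # two points, +/- y
    return count

orders = {bp: curve_order(bp) for bp in range(p)}
print(orders[9])   # 19: the original curve, matching E.order() above
```

In the real attack you'd pick curves whose orders have small prime factors, send points of small order, and recover the victim's secret modulo each factor.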

There's a reason the introduction to Cryptopals Set 8[set8] ends with the words:

> By the time you're done, you will have written an ad hoc, informally-specified, bug-ridden, slow implementation of one percent of SageMath.

[efd]: https://www.hyperelliptic.org/EFD/g1p/auto-shortw.html

[set8]: https://cryptopals.com/sets/8

Aren't both of those attacks all-but-negated by djb's Curve25519?

Effectively, yes. Curve25519 has a few tricks that make this hard to do: it defines a compression format that mandates 32-byte public keys, and it uses a single-coordinate ladder. It has the same property that points not on the curve are on the twist, but the twist is also secure. Curve25519's main innovation was to make the secure thing the default and the obvious thing to do: it was not the first twist-secure curve, and it was not the first curve with point compression that forces a point onto the curve or the twist; plain old NIST P-256 had both of those. We also had single-coordinate ladders for Weierstrass curves prior to Curve25519, but they were far less used than effective point compression. So it is not incorrect to use P-256 without point compression, which makes it easy to have these sorts of problems, but it is incorrect to do so with Curve25519, because the spec encompasses more than just the mathematical object of the curve.

Curve25519 and its sister curve Ed25519 are really good at what they were designed for. Unfortunately people also use them for things that they're not good at, and then you get bugs like the Monero double spend. Montgomery curves necessarily don't have cofactor 1 -- so that's an example of a bug that could only have occurred on X25519.

Hi - I am the guy who started Sage. If you have any questions I can try to answer them here.

Hello! I am a mathematician as well as a programmer, and I have been very interested in Sage for many years; it has inspired me to make various calculator-type projects. You are going to love my new one, which I am preparing to release, promise! Thanks so much for all your hard work; I have followed your career.

I have almost completely finished a visualization block coding system which sits on top of NumPy, SciPy, and Sage. It is designed to help kids who are learning math on YouTube to create "problem set"-style save files which can be shared to supplement math education videos, and to create animations for my upcoming YouTube channel for math education. It also has a fully functional debugger and console; it's like the web inspector, but for math, so I have named it Math Inspector.

I think it's really exciting what is happening in the world of math education and the way that the medium of video is being used for the first time to communicate the joy of mathematics to the world, especially to young people. It's revolutionary. My goal has been to create a tool which enables people to interact with Sage without needing to know Python at all. I wish you the best!

Hey calhoun137,

I'm doing some research on ed-tech tools and helping students learn by creating and experimenting! I'd love to check out your tool and talk to you about it, but I see no way to contact you! Could you add an email or Twitter handle to your profile? Or you could contact me; my details are on my HN user page.

Hi deepakkarki,

I prefer to only correspond with serious mathematicians, and for fun recently I have been using Sage to work through Mochizuki's paper on Inter-universal Teichmüller theory. I got a little stuck on this passage; if you can explain it to me, I will tell you more about my project:

"In the first three papers of the series, we introduced and studied the theory surrounding the log-theta-lattice, a highly non-commutative two-dimensional diagram of “miniature models of conventional scheme theory”, called Θ±ell NF-Hodge theaters, that were associated, in the first paper of the series, to certain data, called initial Θ-data. This data includes an elliptic curve EF over a number field F , together with a prime number l ≥ 5. Consideration of various properties of the log-theta-lattice led naturally to the establishment, in the third paper of the series, of multiradial algorithms for constructing “splitting monoids of LGP-monoids”.

Just Kidding!!!!! I sent you an email =) Thanks so much for your interest in this project, looking forward to hearing more about what you are working on as well. Cheers!

Just want to say thanks. Sage is the best thing that exists when it comes to creating a proof of concept of a cryptographic attack. I've used your thing for years, and pretty much everyone in the cryptanalysis field uses it. Good job.

Thanks a lot for your work, sage is awesome!

I think Sage should advertise more that it can be viewed as a big library plus a modified-but-compatible Python 2, i.e. all your weird file parsers just work. I am looking forward a lot to finally getting Python 3 compatibility, for that reason (I write as much of my code as possible to run on CPython as well, not just Sage's Python) and for the possibility of using packages that only have Python 3 bindings.

Python3 support for Sage is close.

I came here hoping to read this.

Thank you.

FWIW, although there are no binary releases that include Python 3 yet, if you build from scratch with ./configure --with-python=3 you can use Sage with Python 3 and you'll find that the vast majority of stuff works (though with no promises).

Hi William! I'm a paying customer at CoCalc, and I'm grateful for the amazing tools, great community and amazing team you have built! Keep up the great work!

I will join the chorus in saying thanks. It’s not perfect, and for me it was complementary to Mathematica, but I defended my dissertation in 2013 and need to revisit.

It attacks a hard, hard problem and I’m super happy that it will eventually replace Mathematica so that anyone who wishes can see how the black box works.

So I used Sage a lot when I was an undergraduate, but now I use anything else available, because the Debian archive never seems to have a working sagemath package (e.g., I just typed "sagemath" into my terminal right now and got a crash on initialisation).

Something seems a bit weird about that to me, because when I used it ~8 years ago Sage was great and none of the features I care about changed.

Can I get a meta-comment on why Sage might struggle to get packaged for Debian in the way that, e.g., Octave/Maxima/R/etc. do?

Scientific software has strong dependencies on the specific versions of software used. The debian package needs to play nice with a couple of hundred packages which are not the ones sage is building against upstream.

In sage 5.x the source code was quite small compared to today and could be patched with some effort. Today the project is huge and this would be impossible for one person to do.

It's vastly easier to just install sage using their own scripts from source and have some libraries duplicated. The sage source code build script is one of the simplest to use.

(I package sage for nix)

The problem is that Sage actually depends on Octave, Maxima, R, and many other packages. It parses the binary output of many dependencies and often needs the exact version it expects because of that. It also has relatively brittle doctests testing everything, and as a result they often break on minor changes in dependencies.

If you have too much time, there is a very lengthy discussion on this on sage-packaging: https://groups.google.com/forum/#!topic/sage-packaging/ZJmJZ...

I've been looking at how today's wiki frameworks are really separated from data processing and mathematics. I'm wondering if Sage could be used to solve arbitrary equations and output data that can then be graphed in custom ways, something like how WolframAlpha can take any equation and allows manipulation of data sets.

I'm a software engineer that has an MS in Math (ABD actually). I was good at analysis and differential geometry. Any parts of the codebase/deployment I would be best suited to contribute to?

I tend to use pari/gp, octave and gnuplot independently (and I am fairly proficient in all three). What would I gain by moving to Sage ?

I use Sage on a daily basis rather than Mathematica as an undergrad. Love it and plan on contributing in the future!

I made this comment below, but about 6 or so years ago, I found it could be used for science and engineering purposes like solving ODEs but it seemed a little bit rough at that. Today, is it improved for use for science types? Do Sage and Cocalc find users in this area?

I am a pure mathematician (number theorist) and the vast majority of the 600+ Sage developers over the years come from research mathematics. We generally view the numerical python ecosystem (numpy, scipy, pandas, etc.) as the place where development on engineering code happens for Python. So Sage itself isn't directly better for engineering, but I think the Python ecosystem has overall improved during the last six years...

When I was in college, about six or so years ago, I used Sage for my physics classes. At some point I stopped using it because it didn't really stand up to Mathematica, which we got for free. My professor said, funnily enough, "sage has been taken over by mathematicians!", talking about how it has a lot of support for math uses but not as much for things like solving ODEs, which are more of use to physicists and engineers. It's funny now seeing most of the comments here talking about crypto, which I read about in examples for Sage at the time (finite fields) whilst not knowing what it could be used for other than research.

These days I'm a computational scientist, so I end up using numeric tools anyway (numpy is my life). That said, it'd be great to have access to an open source symbolic tool for the times I need them. How has Sage improved for engineer/science types in the last few years?

If you know what you're doing, the Axiom bindings make it the most powerful CAS in terms of symbolic manipulation.

Mathematica is optimized for giving you results you need in undergrad classes. Sage is not optimized at all, but can solve problems useful to researchers.

It's the difference between OS X and Linux.

>Mathematica is optimized for giving you results you need in undergrad classes

What makes you think this? Mathematica is used in all sorts of research projects, technical fields, finance, and so on. I've seen Mathematica required or recommended in many, many job listings, especially for quants. I've never seen Sage there.

For evidence, here [1] is what you get for searching for Mathematica on Indeed.com. Note pretty much every one of those jobs lists several math related packages. Not one mentions Sage.

Unfortunately you cannot run the same search on Sage, since there is an ERP package that shows up instead. However, from browsing those listings again, I find zero hits for the math package Sage.

[1] https://www.indeed.com/jobs?q=Mathematica&l=


The languages themselves. To evaluate most simple integrals in Mathematica, you just put them in and all the corner cases are assumed away for you. For Sage and the Axiom binding, you need to make those assumptions explicit. Pretty much everything in Mathematica assumes you're working over the real numbers; pretty much nothing in Sage does: http://doc.sagemath.org/html/en/tutorial/tour_rings.html
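A loose plain-Python analogue of that ring-awareness (this is my illustration, not from the Sage tutorial): in Sage you name the ring explicitly (ZZ, QQ, RR, GF(p)); here Python's types stand in for the ring, and the same "7 divided by 2" means different things in each:

```python
from fractions import Fraction

# The "ring" you compute in changes the answer. Sage makes the choice
# explicit (ZZ, QQ, RR, GF(p)); in plain Python the types stand in.
print(7 / 2)            # floats, i.e. approximate reals: 3.5
print(7 // 2)           # floor division in the integers ZZ: 3
print(Fraction(7, 2))   # exact rationals, like QQ: 7/2
print(pow(7, -1, 19))   # the inverse of 7 in GF(19): 11  (Python >= 3.8)
```

Mathematica hides this choice by defaulting to symbolic reals; Sage forces you to make it, which is exactly what you want when corner cases matter.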

Most math quants use is at the undergrad level; the finance industry in general is pretty backwards. A very large chunk of it is run on spreadsheets that are passed around in emails. I've worked in it, and having seen what it takes to run some of the clients' spreadsheets still gives me nightmares.

Again, the difference between OS X and Linux.


To clarify what I mean by undergrad level: Something that is taught to some undergrads in some degree.

Someone who specialized in pure symbolic mathematics is likely to see integrals in their last year that other people will not see until they start doing postdoc work.

>Most math quants use is at the undergrad level, the finance industry in general is pretty backwards

Do you have any idea what a quant is? I have a PhD in math and am decently well versed in the math quants use, and it's nothing like undergrad math. Many of the people I got PhDs with became quants, and we've had plenty of discussions on the math they use.

Quants build models using math far beyond what an undergrad learns, including tools such as martingales, stochastic calculus, Black-Scholes (and vast generalizations), Brownian motion, stochastic differential equations, numerical methods (usually much more advanced than an undergrad will see), and more.

Quant jobs usually want a PhD in math or related field. If an undergrad math degree covered what they needed they'd not require a PhD for most positions.

What do you think a quant does?

Here's a site [1] for quant jobs. Probably none for someone with only an undergrad math degree.

[1] http://www.quantfinancejobs.com/

>tools such as martingales, stochastic calculus, Black-Scholes (and vast generalizations), Brownian motion, Stochastic differential Equations

These are all practically the same thing, i.e. the single field of stochastic processes. The "meat" of stochastic processes is the underlying topology, measure theory and probability upon which it's built, all of which an undergrad learns (e.g. I'd expect a good undergrad to be able to follow and understand the proof of Ito's lemma, and the proof of Ito's lemma is way more mathematically interesting and involved than that of Black-Scholes).

It also probably depends on whether the firm in question is HFT or not, and whether it trades options. It'd be perfectly possible to be a quant pricing futures at a HFT without even understanding stochastic calculus, as most of the logic ends up boiling down to just some variant of "oh shit, would you look at the size of that trade tick, better giddy up and follow it!".

>Quants build models using math far beyond what an undergrad learns, including tools such as martingales, stochastic calculus, Black-Scholes (and vast generalizations), Brownian motion, stochastic differential equations, numerical methods

You absolutely don't need a PhD to learn those things. Black-Scholes and Brownian motion were covered in my undergrad courses, and the rest of the topics you mentioned were covered in my master's degree. Certainly most of the quants I know in Europe only have a master's degree.

>You absolutely don't need a PhD to learn those things.

True. You can learn anything without a PhD. A PhD requirement reduces the cost to hire qualified people. Instead of having to pay to interview 1000 people to get 5 you can interview 20.

Companies recruit PhDs because they have amassed many techniques, and have modelling skills on average that are better than non-PhDs.

There's also quite a difference between seeing these topics in class, having been exposed to them, and being able to wield them at a fundamental level. This difference is also increased by the rigor of a PhD program teaching far more skills than a master's or undergrad program does.

A good example is the Fourier transform. Many people have seen it and have a rough idea what it is. Very few of them truly understand it at a basic level and all it can do, its generalizations, etc.

>covered in my Masters degree.

Bingo. The claim I was refuting above was that these things are rarely taught to undergrads.

Wow that's so absolutely contradicted by my experience, I'm almost speechless.

Having a degree does diddly-squat to reduce the need to interview. It can even (famously) be counter-productive. The only good indicator of what somebody knows is to ask them to demonstrate facility.

>The only good indicator of what somebody knows, is to ask them to demonstrate facility.

That's true. But not making the pool as concentrated as possible before asking them is a waste of limited resources.

Candidates cost money. When resources are limited, such as money to fly candidates out, time spent screening instead of building, time for senior people to work on interviews instead of getting billable work done, time to get a project done, then it is very valuable to winnow the search to places where there is a higher chance of finding a candidate.

So, one doesn't interview people randomly sampled from the entire populace for a reason. Sure you will get the best candidate, eventually, but the cost to do so is silly. Finding predictors that make your search smaller is extremely valuable.

Thus many jobs have educational requirements. Having hired probably 100-ish people in my career (as have many friends of mine), I can tell you without a doubt that demonstrated academic aptitude is a good predictor of overall candidate quality for these types of jobs.

>Having a degree does diddly-squat to the requirement to interview.

So you would claim for jobs using advanced math there is zero difference in mean skill among those with only a 6th grade education and those with a PhD?

Pointless strawman. I'd take evidence of working successfully with math, regardless of degree. Choose your pool that way, instead of lazily reading only the first two lines of the resume.

>Pointless strawman.

How so? Do you think that for some jobs, if they didn't require a PhD, they'd still find the candidate they want without having to interview more people?

>I'd take evidence of working successfully with math, regardless of degree.

A PhD in math, especially from a good school, most certainly gives evidence of working successfully with math. It's easier to fake resume experience than to get a PhD, something I've found quite common when interviewing people. And it's trivial to check that someone actually got a PhD; I've yet to find a candidate lie about that part.

I've probably interviewed ~100 people over the years. I've found from experience that anything that reduces the pool quickly saves time and money, and I've still ended up with excellent candidates. Before learning how to weed quickly, I spent far more time and money on people who had very little ability to solve the problems we wanted solved.

Done much hiring?

>Companies recruit PhDs because they have amassed many techniques...

While everything you say is no doubt true, I've got a lot of friends who have had no problem getting quant jobs in most major finance centers in Europe despite only having master's degrees.

My master's thesis (a German Diplom, to be precise) 25 years ago comprised the application of the qualitative theory of stochastic differential equations to specific functional-analytic problems. So this is not a compelling argument.

Say I am enrolled in a CS PhD program but want to get hired as a quant. What math courses should I take to get the required background? Would I even be looked at if I had a CS PhD instead of a Math PhD?

Plenty of CS PhDs get hired as quants. Be sure to take as much math as possible. Google around to find which things you should take. If your school offers (graduate if possible) classes in financial modelling take some of those.

Quant jobs involve making models of markets (or other financial instruments) using the best math and tools available. For practical performance these models need to be implemented, so you'll program. Developing these models is often done in math packages like Mathematica, then, once nicely tested, ported to high-performance code in C/C++/asm or sometimes even into FPGAs or ASICs.

Quant jobs are a mix between math and programming. The better you are at both, the more valuable you become. If you're really strong at one compared to the other, you'll drift that way. If you're terrible at either, you'll not get hired.

There are a number of things to disentangle in that post.

1). The domain you posted is for sale and appears to be broken or slow. Given it uses aspx and a Microsoft stack that's ten years out of date, I'd say it's perfectly representative of the state of the larger finance industry.

2). What is a quant? Depends on the job. I was hired as a quant for risk management, then got switched to algos when the PM found out I have a lot of hard-real-time experience with Linux. The first job was all discrete math; the second was optimizing x86 assembly.

3). Credential inflation is real. The application of the maths you talked about was something I saw first hand. The assumptions under which the formal models would work were so threadbare that a stiff breeze would tear them down. To quote Keynes:

>The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error. It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis, such as we shall set down in section vi of this chapter, that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep 'at the back of our heads' the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials 'at the back' of several pages of algebra which assume that they all vanish. Too large a proportion of recent 'mathematical' economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

The above is still excellent advice, most often never followed. Of course the lunch talks we had were very interesting, completely divorced from reality, but very interesting.

4). Most undergrads will never see the mathematics. True. But some will. My fourth-year numerical methods project was a symplectic integrator for the gravitational interaction between Jupiter, Mars, the Sun, and the asteroid belt. It wasn't new research -- the results were known 50 years ago -- but it was on par with the state of the art in industry in terms of complexity of implementation (cleanness of implementation was a completely different thing; while not spaghetti code, it's rightly never seen the light of day).
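For flavor, here's a minimal sketch of the symplectic idea on a toy problem (a leapfrog/velocity-Verlet step on a 2D Kepler orbit with GM = 1; the parameters are illustrative, not from the actual project): the energy error oscillates but doesn't drift, which is why these integrators are used for long gravitational runs.

```python
# Minimal leapfrog (velocity Verlet) on a toy 2D Kepler orbit, GM = 1.
# Symplectic integrators keep the energy error bounded instead of letting
# it drift, which is what makes long N-body integrations feasible.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / (x * x + y * y) ** 0.5

x, y, vx, vy = 1.0, 0.0, 0.0, 1.0        # circular orbit of radius 1
e0 = energy(x, y, vx, vy)
dt = 0.01
ax, ay = accel(x, y)
for _ in range(10_000):                  # ~16 full orbits
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx;        y += dt * vy          # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick

print(abs(energy(x, y, vx, vy) - e0))    # stays small, no secular drift
```

Swap the kick-drift-kick structure for plain forward Euler and the same run spirals outward as the energy grows steadily.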

The quality of a website hardly has anything to do with the state of the quantitative finance industry.

1) Irrelevant. Care to post all these quant jobs that can be done with undergrad math? I just posted a lot requiring a PhD.

2) >The first job was all discrete math, the second was optimizing x86 assembly.

I suspect you didn't have the math so ended up pushed to the programming side, right? If so, how can you judge what math a quant uses?

3) >Credential inflation is real.

Job requirements are real too. Keynes was not a quant, and his quote is irrelevant compared to simply addressing the point. If you're simply going to start quoting random people instead of addressing the issue, then we'll be here forever.

4). > Most undergrads will never see the mathematics. True.

Didn't you just write "Most math quants use is at the undergrad level"? If it's undergrad level math, won't most undergrads see it? If most won't see it, perhaps it's taught more often in graduate level courses (which it is).

Just curious - since you worked as a quant, can you explain in your own words the importance and mechanism of Black-Scholes? And if that's too easy, how about explaining more recent generalizations?

>I suspect you didn't have the math so ended up pushed to the programming side, right? If so, how can you judge what math a quant uses?

The compensation was 30% higher.

You're arguing in bad faith here so I'm done.

>The compensation was 30% higher.

Then you weren't what most places call a quant. You were a programmer at a financial company with a toe in finance.

Did you use the math tools above or not? If you didn't how are you able to judge their efficacy?

>You're arguing in bad faith here so I'm done.

Convenient, considering that above you routinely failed to support your claims, reversed positions, and discounted jobs on a quant site because the tech stack was aspx.

I figured your position may have been born of not knowing the other side and of career sour grapes.

Despite being the Sage fanboy in the thread, I have to admit that all the use I've seen of Sage has been academic or experimental; I haven't heard of anyone using Sage for statistics in any context, and not for quants or actuarial science or anything of the sort.

I'd suggest that Sage is likely less good for those uses. First, Sage is good at the kinds of things academics use it for (e.g. tons of discrete math and most things cryptography). Second, there's very little pressure on high-end commercial users to worry about the licensing cost of a mathematics package like JMP or Mathematica.

Indeed. Sage was written by and for academics in academia doing research in pure mathematics. There's also been a fair amount of work to make it usable for undergrad math courses (started originally by some undergrads I hired actually). The only significant overlap with industry in terms of dev support is probably crypto research... But the Python ecosystem overall is pretty strong in industry, and Sage is part of that ecosystem...

If I had the funding to do so, I would love to improve use of Sage for computational physics, particularly for particle physics and QFT. It would be great if Sage could easily serve as a full replacement for Mathematica in those areas, but it currently lacks some library support.

I downloaded Sage a few years ago, when I was getting into algebraic number theory. That was my first experience with Python, notebooks, numpy/scipy etc. Now I do almost all my programming - which is mostly maths-related graphics and image manipulation/processing, but all kinds of stuff - with Sage, in Cython. Usually with the CLI (and a text editor), occasionally with the notebook (to run individual lines and for the better error messages). I love being able to choose the C or Python way to do something. I can quickly get from initial idea to working program, and it runs fast.

It's just a pleasure to use. Thank you williamstein!!

It's great to see lots of amazing projects, and that people get to have more choice and can avoid being locked into one solution. But one thing I'd like to see more often in the description of any product (open source or not), especially when there's a learning curve involved, is the set of use cases where it would make more sense to take on that learning curve for this particular product rather than for others that seem similar to it (at least to the uninitiated).

I'll give it a shot: Sage is what happens when you build a computer algebra system on a giant pile of existing open source libraries and give it a nice, consistent interface and a ton of starving postdocs to write cool integrations and features.

If you're a programmer and you've used Mathematica or Maple and you've ever found yourself wishing that you'd have something like that except in a real programming environment, Sage is it. Sage is Python (with some syntactical extensions).

Sage is something I keep meaning to learn.

Every time I've tried it though, it just didn't feel as polished and integrated as Mathematica.

I used Sage a couple years ago for a cryptography course. I share your opinion, many times Sage was confusing or even frustrating. But I'd still say it's a great project, very capable, and... Mathematica is really expensive. Sage being open source and free is a real asset. This said, I don't think you should "learn" much Sage. At least if you know Python already. Just search for what you need when you need it, write the script you want... and that's it. Unless you are using it daily, at least.

I am unfortunately forced to use Mathematica because my scientific collaborators do so. While I have become very proficient at it, I don't like the interface or the language very much. It's a functional language put into an interface suited to iterative programming. Loading external libraries is very non-intuitive, and just being able to separate a single program over many files that you can simultaneously edit is hard.

I am looking forward to the day I can switch to Sage and return to coding in emacs.

I got my PhD in computational plasma physics in a group that heavily used matlab. I'm lucky enough that my advisor was okay with me using numpy and friends instead as long as my plots and figures looked okay. Today, I run a one-man show for my post-doc doing simulations for an experimental group, so I thankfully get to continue this mode of work. And thank god I didn't have to face losing the ability to run my scripts after leaving the university (this happened to a couple of fellow graduates I know!) and today, when I want to run something on the supercomputer, I just run it; I don't have to worry about how many licenses are still on the license server or whatever the fuck. I get how Matlab is a thing and people like it, but potentially having analysis delayed because of lack of licenses seems somewhere between primitive and barbaric in 2018.

I used Matlab in undergrad and Python a lot in industry. I got a Mathematica license recently, impressed by the documentation and the sheer breadth of stuff built into the language in a consistent manner. Then I ran into an issue when I found out that my license could only have 4 kernels activated at a time. I was able to close out the old sessions with the task manager, but I almost (exaggeration) punched my computer when the message popped up about licensing. Wtf? How much did I pay for this? Maybe I'll just stick with Python & Julia for what I need. I just traded one frustration for another.

Fair, but it would be really interesting to pit an avid MATLAB person against an avid numerical Python person in a data analysis race.

I've found that you can generally do most things you can do with MATLAB in Python, but the experience in MATLAB is buttery smooth while the experience with numerical Python is clunky and obnoxious: all the useless textual noise in the code itself, the inferior quality of the plotting and visualization capabilities, and the fact that the Python scientific-computing library ecosystem is so disjoint (this thing can handle pandas frames, this other thing needs numpy vectors, this other thing expects Python lists, whatever).

I had high hopes for Julia, and I have a test for it. How annoying is it to plot(sin(0:0.01:6*pi)) out of the box? For the extreme challenge question, how annoying is it to also open a new plot window and plot(cos(0:0.01:6*pi)) alongside it? I want to be able to pan around and inspect these complicated functions.

So far, it has not yet passed the qualifiers.
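For comparison, here is roughly what that smoke test looks like in Python. The data construction below uses only the stdlib; the plotting calls, which assume Matplotlib is installed, are left as comments:

```python
import math

# Julia's 0:0.01:6*pi range, built with the stdlib
xs = [0.01 * i for i in range(int(6 * math.pi / 0.01) + 1)]
sins = [math.sin(x) for x in xs]
coss = [math.cos(x) for x in xs]

# With Matplotlib installed, two separate pannable windows would be roughly:
#   import matplotlib.pyplot as plt
#   plt.figure(); plt.plot(xs, sins)
#   plt.figure(); plt.plot(xs, coss)
#   plt.show()
```

Matplotlib's default interactive window does support panning and zooming, which is the part of the test the parent cares about.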

>I had high hopes for Julia, and I have a test for it. How annoying is it to plot(sin(0:0.01:6*pi)) out of the box? For the extreme challenge question, how annoying is it to also open a new plot window and plot(cos(0:0.01:6*pi)) alongside it? I want to be able to pan around and inspect these complicated functions.

Just use the plot command in Makie.jl? These are nice first commands to check the install.

In 0.6, I tried Plots.jl. It seemed pretty nice, but then if I tried to do a whos() to see what variables were in the workspace, it would go out to lunch for >20s. Not just the first time, every time I simply wanted to see what I had in RAM. This didn't seem very useful to me for interactive use.

In 1.0, I tried Plots.jl again. I tried adding it using the package manager and it blew up trying to build. This is with no special state from me: I unpacked the 1.0.0 release, started it, and did Pkg.add("Plots")... boom.

There's such a great story around it and I really want to like it, but this is extremely basic suitability-to-task stuff... Who knows, maybe I'm using it wrong.

Sorry you had a bad experience. I found that 1.0 generally works, but there were so many changes in the language that several bugs have cropped up. 1.0.1 should fix a lot of them.

My sentiment wasn't really to pit "matlab people" against "python people". Tools are tools, with different aspects and uses, and people and their abilities aren't tied to their tools for the most part.

I don't use Matlab, but I don't consider "Matlab people" bad scientists, or limited in what they can potentially do, because of their tool. The limited access to Matlab due to licensing and expensive external modules is the problem in my view. That isn't a critique of Matlab's actual numerical and scientific utility; of course it's apt in that arena, otherwise people wouldn't use it.

Unfortunately, the Mathematica language is a 'write-only' language; reading and debugging code months after use is... difficult. And that means it's hard to build up libraries which depend on libraries. The Sage community is great, and pushes a lot of code for research problems; things that are technically possible in Mathematica, which someone might work up into a notebook at some point, with a lot of effort, grow into reliable libraries in Sage.

It is true that stuff like (to take a random example)

    Total@Map[Function[{L}, Min @@ Table[dist[func[Array[c[#, k]&, n]]][L], {k, m}]], dd]
is normal and common in Mathematica code. Like with Perl, the culture has started from small scripts making heavy use of shorthands provided by the language, and slowly evolved to writing larger codebases.

Some of this can, on an individual level, be fixed by adopting a clearer coding style. But like Perl 5, the language kind of lures you to use the shorthands a lot.

It's also cultural: line breaks, indentation, and parentheses help a lot to make Mathematica more readable, but there's a strong tendency to do things in one-liners like you show. Even the official docs are like this. I think it can still have readability problems even then, but it would certainly improve things quite a bit if the community adopted a better style.
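To illustrate, the one-liner above is the same expression whether or not it's broken across lines; reformatted with indentation it might read:

```
Total@Map[
  Function[{L},
    Min @@ Table[
      dist[func[Array[c[#, k] &, n]]][L],
      {k, m}
    ]
  ],
  dd
]
```

Nothing about the language forces the one-line form; it's purely a style choice.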

This isn’t true. It depends how you write your Mathematica.

If you define modules, it's nearly the same as a scripting language.

Mathematica has lots of drawbacks, chief among them: if you write Mathematica code, you're building on top of an expensive, closed-source bunch of opaque algorithms that are neither guaranteed to be correct (you can't see what they do) nor to keep behaving the way they did when you wrote your code (you're at the mercy of Wolfram changing what the code does).

As far as I'm concerned, this essentially precludes doing anything publishable using Mathematica.

You can say that, but I know a lot of scientists and mathematicians who use Mathematica extensively in their research.

Don't other languages cause issues as well? If you wrote your python code in 2.7, it probably won't run in 3.6 without some modifications. Although, with Python everything is free and you can work anywhere without your boss needing to buy a license.
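For instance, two of the classic 2-to-3 breaks are print-as-statement and integer division:

```python
# Python 2.7:  print "hi"   -> SyntaxError in Python 3 (print is a function)
# Python 2.7:  1 / 2 == 0   -> true division in Python 3
print("hi")          # works in both with the parentheses
half = 1 / 2         # 0 in Python 2, 0.5 in Python 3
floor_half = 1 // 2  # explicit floor division behaves the same in both: 0
```

Small scripts often need only mechanical changes like these, which tools like 2to3 can automate.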

The main point is auditability. Closed source libraries can have errors that can't be audited by the scientific community.

That I would agree with. I probably won't peek under the hood and fix any bugs with numpy, but someone else probably will.

Kip Thorne (a pretty famous physicist) uses Mathematica almost exclusively for his work, if I understand an interview with him correctly.

The initial issue with Sage was its packaging due to its large number of dependencies. That is largely a non-issue now for the end user.

Now there is also SageMathCloud, that makes it even more convenient to adopt.

You are waaaay outdated. "SageMathCloud is Now CoCalc" since May 2017. Go google it. And it's not free _at all_, except for very basic stuff. Your pricing depends on what you need, from hundreds to thousands of dollars. Not a criticism, only a heads up for the guys reading the comments.

By very basic stuff you mean "doing everything a CAS can do". The expenses come when you want to connect it to the internet (people started using the full Linux distro you get as a botnet) and when you want more performance.

CoCalc is an open source system. It is convenient to pay for a setup, which is how they hope to get revenue to pay developers.

We also provide a nice easy to install Docker image of Cocalc here: https://github.com/sagemathinc/cocalc-docker

wow, downvoted many times for pointing people to the up-to-date situation so they can choose the plan best suited to their needs. Gosh, HN is getting brutal. Btw, very nice job by the guys at CoCalc in general, as a collaboration tool, and especially for courses based on Jupyter notebooks. EDIT: and now some guys are going through my old posts in different threads giving me negative votes. Really? Wow. Just wow.

You don't have any other comments from the last 24 hours, so how are people supposed to be downvoting your old comments?

Please donate. I just donated $50 to start things out.


SageMath has always been Open Source.
