PEP 450: Adding A Statistics Module To The Standard Library (python.org)
185 points by petsos on Aug 11, 2013 | 83 comments



It's not a terrible idea to support the absolute basics like mean & variance, but anything beyond that (particularly things like models or tests) is not a good idea for a standard library. Once you hit even something simple like a linear regression, you have to decide how to represent missing or discrete variables, how to handle collinearity, and whether to use online or batch algorithms, which can give different results. Tests in particular are fraught, because if you're going to make them available for general consumption they need a good explanation of when they're appropriate, which is basically a semester course in statistics and well out of scope for standard library docs.

Basically, the idea of "batteries included" should also mean that if something looks like you can put a D-cell in there, you're unlikely to blow your arm off.


The PEP suggests that the functionality would be comparable to that in a high school calculator or in Excel/LibreOffice/Gnumeric. The existence of that functionality suggests to me that a stats package can be useful even if it doesn't handle things like missing data.

Similarly, Excel/etc. support these functions without a "semester course in statistics." Instead, you'll find that there are many web pages from semester courses in statistics which end up teaching how to use Excel. The same would no doubt happen with Python.

I don't see why a standard library statistics module needs to provide a "good explanation of when they're appropriate" to a higher standard than any other module. Python provides trigonometric and hyperbolic functions without teaching trigonometry. It provides complex numbers and cmath without teaching people about complex numbers. It provides several different random distribution functions without teaching anything about Pareto, Weibull, or von Mises distributions.

For that matter, data structures is a semester course as well, but the Python documentation doesn't teach the tradeoffs between deque, stack, hash table, etc., nor explain the algorithms behind heapq and bisect.

"whether to do online or batch modes which can give different results". The PEP says it will prefer batch mode:

      Concentrate on data in sequences, allowing two-passes over the data,
      rather than potentially compromise on accuracy for the sake of a one-pass
      algorithm
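
For a concrete sense of why the two-pass approach matters, here is a minimal sketch (illustrative only, not the PEP's reference implementation; the function names are made up) contrasting the textbook one-pass formula with a two-pass version on data with a large common offset:

    def variance_one_pass(data):
        # E[x^2] - E[x]^2 in a single pass: suffers catastrophic cancellation.
        n = len(data)
        s = sum(data)
        ss = sum(x * x for x in data)
        return (ss - s * s / n) / n

    def variance_two_pass(data):
        # Compute the mean first, then sum squared deviations: numerically stable.
        n = len(data)
        mu = sum(data) / n
        return sum((x - mu) ** 2 for x in data) / n

    data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]
    print(variance_one_pass(data))   # nonsense on IEEE-754 doubles (a negative "variance")
    print(variance_two_pass(data))   # 22.5, the correct population variance

Both functions compute the population variance (divide by n) purely for brevity; the point is the numerical behaviour, not the exact API the PEP ends up with.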


By that logic, you should get rid of just about everything in standard libraries. A simple RNG? Anything related to floating point? Multi-threading? Character encodings? Unicode normalization? Time and date handling? All fraught with danger for those who do not know, and "basically a semester course and well out of scope for standard library docs".

Surely it would be better to supply good implementations of algorithms than to refrain from doing so and let programmers write and use bad ones instead?

IMO, the discussion should be about what could/should end up in the _standard_ library, and what is better put in a separate product/download.


Agreed. I'm studying statistics at the moment and I'm continually reminded of how easy it is to choose the wrong model / distribution and be incorrect because of some non-obvious and technical reason. For example, just the other day, I wanted to use the binomial distribution to solve a problem. To use this distribution, the trials must be independent of one another. In that particular problem, there was a subtle condition that made the trials non-independent. I arrived at correct-appearing answers (0 <= P <= 1) that were actually all wrong. Statistics is way too easy to break to be used naively.


> Statistics is way too easy to break to be used naively.

Fair enough, but the same argument could be made about using an unskewed standard distribution on non-symmetrical datasets, a common error even among people who should know better.

I think binomial functions should be included, on the grounds that they're very useful and their probability of misuse is no greater than that of the continuous statistical forms.


Hell, sometimes they use a dictionary when they should be using a list. Almost everything can be used wrongly by a beginner, which doesn't mean it shouldn't be there.

I think having a basic stats module always handy would be very convenient.


> I think having a basic stats module always handy would be very convenient.

I absolutely agree. My only point was that these tools are sometimes misapplied, not at all to argue that they shouldn't be readily available. They should be.


While you are correct, it sounds like his point was that even to someone moderately skilled it is easy to make a mistake that makes your work __completely__ invalid, rather than merely inefficient or too-complicated.


  ...how to represent missing or discrete variables...
Don't. Just say no. Just give me the simple easy stuff. Most of us will be fine, and everyone else will know they need something better and won't bother.


Batteries included is a fine philosophy when starting a language to encourage early adoption, but at this point, I don't think it's worth adding new libraries to the stdlib. Here's why:

- It's very easy to find and install third party modules

- Once a library is added to the stdlib, the API is essentially frozen. This means we can end up stuck with less-than-ideal APIs (shutil/os, urllib2/urllib, etc.) or Guido & co are stuck in a time-consuming PEP/deprecate/delete loop for even minor API improvements.

- Libraries outside of the stdlib are free to evolve. Users of those libraries who don't want to stay on the bleeding edge are free to stay on old versions.


The PEP acknowledges the existence of high-end statistics libraries. It also notes that the alternative to such libraries is a DIY implementation, which is often incorrect.

The PEP proposes adding simple but correct support for statistics.

Apart from high-end libraries being overkill and DIY implementations being incorrect, the PEP also cites resistance to third-party software in corporate environments. This problem is more social than technical, though, and I'm not sure what weight should be attached to it.


There is a different perspective on the frozen APIs. Using an API from the stdlib gives you the certainty that your program is not going to break with a minor Python version bump. This might not matter for some software, but it is crucial for others.


Those are good reasons for rejecting any addition to the standard library. However, new libraries are sometimes added to the standard library, which means the reasons you listed can be overcome by even better reasons for inclusion.

What are those reasons for why a new library can be included, and why aren't those reasons appropriate justification for including this proposed statistics package?


> However, new libraries are sometimes added to the standard library, which means the reasons you listed can be overcome by even better reasons for inclusion.

Or overcome by bad judgement.


Just out of curiosity, I submitted this yesterday:

https://news.ycombinator.com/item?id=6190603

The URL was

    http://www.python.org/dev/peps/pep-0450/
While this is

    http://www.python.org/dev/peps/pep-0450
That is, exactly the same except for a trailing slash. Doesn't the deduplication algorithm handle this case?


Technically speaking, they are separate URLs that may lead to separate resources. For example, Google's search engine treats them as separate URLs.

That's the reason why opening http://www.python.org/dev/peps/pep-0450 redirects to http://www.python.org/dev/peps/pep-0450/ . The HN engine should follow redirects to avoid situations like this.
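
For what it's worth, following the redirect is nearly a one-liner with the stdlib. A hypothetical sketch (not HN's actual code; the helper name is made up):

    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    def canonical_url(url):
        # urlopen follows 3xx redirects by default; geturl() reports the URL
        # that was ultimately fetched.
        return urlopen(url).geturl()

    print(canonical_url("http://www.python.org/dev/peps/pep-0450"))
    # -> http://www.python.org/dev/peps/pep-0450/  (assuming the server redirects as described)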


Technically speaking, there are no equivalent URLs in general; different strings may lead to different resources.

Still, there are a number of common sense heuristics to normalize URLs, that HN applies to do de-duplication. I was wondering what is the rationale for not having trailing slash removal among them. I mean, is there any legitimate website that serves a different resource if you remove the trailing slash?


Per RFC 3986/7, http://example.com/%61 and http://example.com/a are equivalent. (Indeed, all major browsers will request the latter regardless of which is input.) Equally, punycode-encoded IRIs and the original IRI are equivalent. There is a whole section on equivalence in both of the RFCs (3987 includes 3986 by reference, so is a superset).
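
The unreserved-character case is easy to see from the stdlib; a tiny sketch (using urllib.parse from Python 3; Python 2's urllib has the same unquote function):

    from urllib.parse import unquote

    # %61 is the percent-encoding of the unreserved character 'a', so RFC 3986
    # treats the two spellings as equivalent; decoding makes that concrete.
    print(unquote("http://example.com/%61"))   # http://example.com/a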


Browser equivalence is another thing entirely. Most browsers will accept http://www。google。com (because in Japanese '。' is '.'). But if you try to request that actual resource, it doesn't lead anywhere.

But yeah, HN should just use browser equivalence.


But that's totally undefined, and all browsers do their own thing with what's entered in the address bar — there's more consistency for URLs found in content, and that handling doesn't do things like normalising '。' but does handle the percent-encoded case (for unreserved characters, as the spec says).

Following what the spec says about equivalence makes sense, at least. Anything more drastic is technically treating distinct URLs as equivalent.


Without actually checking redirects, not breaking a few edge cases is much better than preventing the occasional duplicated submission.


Or it could check for a 3xx HTTP status.


I posted an ASK PG [1] about this last month and got an angry email from (presumably) a mod. His point was totally valid - that PG 'aint go time fo' that' - which is true, but I was just hoping to draw attention to it, as opposed to demanding PG drop what he's doing right now and fix it. To be honest, I was quite surprised by the tone of the email.

[1] https://news.ycombinator.com/item?id=5908075


The email wasn't from one of us.


Why should it?

For one, it provides the welcome ability to bring topics up on Hacker News again, where they might get accepted better the second or third time (e.g. because more people are online at the time of the second submission).

If the "deduplication algorithm" had "handled this case", then we would only be left with the first submission (a dead discussion), whereas as it is, HN users have now caught on to this PEP news and we have a discussion going on.


> For many people, installing numpy may be difficult or impossible. For example, people in corporate environments may have to go through a difficult, time-consuming process before being permitted to install third-party software.

I do not regard this as a good justification for putting something in the standard library! If you don't have root access, use virtualenv (which you might want to do anyway) and install the package somewhere under your home directory.


Never underestimate how much more helpful it can be to have something in the standard library. I do a lot of embedded development, and we have Python scripts all over the place on various embedded targets to perform all sorts of functions. Often it is incredibly inconvenient, expensive, or even impossible to install extra packages on those machines, due to lack of permissions, lack of bandwidth, lack of time, or even a read-only disk, to name but a few reasons.

If your work involves a lot of scientific computing then yes, you're going to need numpy; but if you're just updating an existing script that's doing some performance monitoring, having an accurate version of mean and standard deviation available seems like a great idea.

Python aims to be batteries-included; this is part of its philosophy and one of the reasons for its popularity. I'm glad this is being extended into a new area.


NumPy is not nearly portable enough to do what you are describing for a user on a Windows machine. You cannot simply do a `pip install numpy` into a virtualenv. Instead you must either install the package system-wide, or compile it yourself, which means getting a working MinGW environment or similar.

Edit: Although, I do agree that NumPy being difficult to install is not, on its own, a good justification for the PEP.


numpy is quite portable; I am not sure what you mean by "not nearly portable enough".

The reason you can't do pip install numpy is pip's fault; there is nothing numpy can do to make that work. Note that easy_install numpy does work on Windows (without the need for a C compiler).


The problems I have had with numpy are endless. Usually I'll just prefer to write my own, because it's quicker. If you have tried to get numpy running on a cloud machine you'll know what I'm talking about: basically you have to know how to compile from source, know some gcc, etc. The last time I tried to get it running I promised myself never to use numpy again.


I suggest you try again. I think the situation has changed. I have been using Numpy/Scipy for a few years now, with all the negative experience of having to sort out whether to use g77 or gfortran and the right gcc flags etc. The most recent version, Numpy 1.7.0, is the first one that installed for me smoothly on Windows and OS X (official packages). YMMV, but I think it is alright now.


Many people have been able to use numpy in the cloud. You only need gcc and the python-dev package on Linux.

Frankly, building numpy is not very complicated (scipy is a bit complicated, and only if you are not on Linux).


> basically you will have to know how to compile from source, know some gcc, etc.

If you're happy with the system-wide Python, try apt-get install python-numpy next time.


I'm back at work, and have had a chance to verify; it turns out that you were correct. I was confused by my bad memories of trying to install SciPy into a virtualenv.


Having the technical capability to do something is not the same as having permission to do it. The PEP refers to corporate policies. Installing third-party software in your home directory would be just as much a violation as doing it in a location that requires root.


Son, I hope you never have the displeasure of working in an enterprise environment.

First of all, as others mentioned, your personal machine runs on Windows, so right off the bat there go your instant virtualenv and pip install.

Even on the Unix app server, you're probably behind a firewall that's tight as a duck's ass, so chances are you're downloading the tarball and building the package yourself.

Third, wtf are you doing littering the app server with all these binaries? And what is Python? I'm sorry, no, rewrite this in Java, please.

If you do manage to convince your manager and the rest of your team that Python is not black magic, the first time numpy breaks or some small issue crops up or you have to migrate to a new server and reinstall numpy but now there's a new version and... GTFO of here with that black magic.

I agree with you: developing websites on your MacBook Pro, there's no excuse for not being able to install numpy. In the real world, though, having basic necessities in the stdlib does absolute wonders.


Not everybody using Python is writing freestanding software for machines they control. Lots of software, for example, ships with a Python install for scripting, automation, or writing plugins.


Great idea, but while assembling this library, don't leave out permutations, combinations, and the binomial Probability Mass Function (PMF) and Cumulative Distribution Function (CDF). Small overhead, easy to implement, very useful. More here:

http://arachnoid.com/binomial_probability
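
For a sense of how small this is, here is a rough pure-Python sketch (the function names are illustrative, not taken from the PEP or the linked page):

    from math import factorial

    def ncr(n, k):
        # Number of k-element combinations of n items ("n choose k").
        return factorial(n) // (factorial(k) * factorial(n - k))

    def binomial_pmf(k, n, p):
        # Probability of exactly k successes in n independent trials.
        return ncr(n, k) * p**k * (1 - p)**(n - k)

    def binomial_cdf(k, n, p):
        # Probability of at most k successes.
        return sum(binomial_pmf(i, n, p) for i in range(k + 1))

    print(binomial_pmf(2, 10, 0.5))   # 45/1024, about 0.0439
    print(binomial_cdf(2, 10, 0.5))   # 56/1024, about 0.0547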


Permutations and combinations already exist within the itertools module.


There is something deep about software that thwarts our dreams of making it re-usable, and makes debates about what primitives to put into the stdlib difficult. For one thing, it seems like we should be able to collectively determine a DAG of concepts, starting from simplest primitives and progressively building up derived concepts, like the tech tree in a game like Civilization, or Principia Mathematica. But that tree is so vast, and the most-useful points so sparse along it (ex: I don't think anyone's asking for Church numerals in their stdlib) that we end up debating which are the most-useful derived concepts to surface. Moreover, there's no language or compiler that is sufficiently smart to efficiently map all those concepts to the hardware we have, we instead go with expertly-wrought implementations.

To take this example, you _could_ count combinations and permutations by passing a range(n) list to itertools and then counting how many actual results you get back, but that's silly when you could also just use the binomial coefficient to get there directly. A compiler that could generally perform such transformations would be miraculous -- well beyond the territory of automated proof assistants like Mathematica or gcc -O3 that trundle along cultivated routes of expert-system rules, into the realm of actually discovering deep linkages at the frontier of our knowledge.

Until then it seems like stdlibs will just fracture along lines of strain among the userbase. Presumably, most Python users don't need anything beyond what a financial calculator would provide, and anyone else should head to numpy.


You make some good points. I've been in this field for almost 40 years -- I can remember a time when having a dedicated floating-point processor was seen as an unnecessary gimmick, only available at extra cost. Much has changed, and I think what constitutes a normal set of mathematical functions is changing as well, for a number of reasons:

1. Better math training in school.

2. More kinds of applied math problems being routinely evaluated by Python and other languages.

3. More available memory and storage capacity.

All of which argue for larger math libraries with more functions and classes of functions.

> Presumably, most Python users don't need anything beyond what a financial calculator would provide, and anyone else should head to numpy.

I would normally agree, but the argument has been made in this thread that numpy can't be installed in some environments -- environments that easily support Python, but that accommodate numpy only with great difficulty.


> Permutations and combinations already exist within the itertools module.

Not exactly. Given argument lists, itertools provides result lists (actually, iterators) with the original elements permuted and combined, but it doesn't provide numerical results for numerical arguments, as shown here: http://arachnoid.com/binomial_probability

I was referring to permutation and combination mathematical functions, not generator functions.
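
Concretely, the distinction being drawn (with a hypothetical counting helper, since the stdlib doesn't provide one here):

    import itertools
    from math import factorial

    def n_permutations(n, k):
        # P(n, k) = n! / (n - k)!  -- the count, not the arrangements themselves.
        return factorial(n) // factorial(n - k)

    # itertools generates every arrangement; counting them means materialising all of them.
    print(len(list(itertools.permutations(range(5), 2))))   # 20
    # The formula returns the same number directly.
    print(n_permutations(5, 2))                              # 20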


Hmm. Python has automatic bignum promotion, so the naive N choose K implementation will not suffer from overflow. I guess it could be slow for large values, but how many casual users need to calculate N choose K for large N?


Put differently, you want the functions that count the number of permutations and combinations possible, not functions that yield/generate the actual permutations and combinations.


Yes, exactly, for a number of reasons including the problem of large arguments and results.


About damned time. Writing your own stats library is like writing your own crypto.


You wouldn't write your own - numpy / scipy have everything you'll need.


Some of the things I need are:

- fewer dependencies for my package

I've written the average() and standard_deviation() functions at least a couple of dozen times, because it doesn't make sense to require numpy in order to summarize, say, benchmark timing results.

- reduced import time

NumPy and SciPy were designed with math-heavy users in mind, who start Python once and either work in the REPL for hours or run non-trivial programs. They were not designed for lightweight use in command-line scripts.

"import scipy.stats" takes 0.25 second on my laptop. In part because it brings in 439 new modules to sys.modules. That's crazy-mad for someone who just wants to compute, say, a Student's t-test, when the implementation of that test is only a few dozen lines long. (Partially because it depends on a stddev() as well.)

Sure, 0.25 seconds isn't all that long, but that's also on a fast local disk. On one networked filesystem I worked with (Lustre), the stat calls were so slow that just starting Python took over a second. We fixed that by switching to zip import of the Python standard library and deferring imports unless they were needed, but there's no simple solution like that for SciPy. (A rough sketch for measuring this kind of import overhead is at the end of this comment.)

- less confusing docstring/help

Suppose you read in the documentation that Student's t is implemented as scipy.stats.t.

    >>> import scipy.stats
    >>> scipy.stats.t
    <scipy.stats.distributions.t_gen object at 0x108f87390>
It's a bit confusing to see scipy.stats.distributions.t_gen appear, but okay, it's some implementation thing.

Then you do help(scipy.stats.t) and see

    Help on t_gen in module scipy.stats.distributions object:
    
    class t_gen(rv_continuous)
     |  A Student's T continuous random variable.
     |  
     |  %(before_notes)s
     |  
        ...
     |  
     |  %(example)s

Huh?! What are %(before_notes)s and %(example)s?

The answer is that scipy.stats auto-generates many of the distribution objects, including their docstrings. Only, help() gets confused by that, because help() uses the class docstring while SciPy modifies the generator instance's docstring. To see the correct docstring you have to print it directly:

    >>> print scipy.stats.t.__doc__
    A Student's T continuous random variable.
    
        Continuous random variables are defined from a standard form and may
        require some shape parameters to complete its specification.  Any
        optional keyword parameters can be passed to the methods of the RV
        object as given below:
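
As mentioned under the import-time point above, here is a rough way to measure that overhead (a sketch only; the timing and module count will differ from the 0.25 s / 439 modules quoted above depending on machine and SciPy version):

    import sys
    import time

    before = len(sys.modules)
    start = time.time()
    import scipy.stats
    elapsed = time.time() - start

    print("import scipy.stats: %.3f seconds, %d new modules"
          % (elapsed, len(sys.modules) - before))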


scipy.stats distribution objects are a bit particular; it's a bit unfair to single them out.

Generally, numpy and scipy have much better docstrings than python stdlib itself.


Well, help(scipy.optimize.nonlin.Anderson) has the same problem, but you're right in that that failure mode is rare, and that numpy/scipy has good documentation. However, in the context of a stats library, I think it's okay to point out that scipy.stats has some annoying parts. ;)

In all honesty, I seldom use NumPy and rarely use SciPy, so I can't judge that deeply. I know that when I read their respective code bases I get a bit bewildered by the many "import *" and other oddities. It doesn't feel right to me. I know the reason for most of the choices - to reduce API hierarchy and simplify usability for their expected end-users - but their expectations don't match mine.

So I looked at more of the documentation. I started with scipy/integrate/quadpack.py. The docstring for quad() says, in essence, "this docstring isn't long enough, so call quad_explain() to get more documentation." I've never seen that technique used before. The Python documentation says "see this URL" for those cases.

Again, this is a difference in expectations. I argue that NumPy and Python have different end-users in mind. Which is entirely reasonable - they do! But it means that it's very difficult to simply say "add numpy to part of the standard library."

There's also a level of normalization that I would want should numpy become part of the standard library. For example, does out-of-range input raise ValueError or RuntimeError? scipy/ndimage/filters.py does both, and I don't understand the distinction between the two.

Now, in the larger sense, I know the history. RuntimeError was more common in Python, and used as a catch-all exception type. Its existence in numpy reflects its long heritage. It's hard to change that exception type because programs might depend on it.

But it means that integrating all of numpy into the standard library is not going to work: either it breaks existing numpy-based programs, or the merge inherits a large number of oddities that most Python programmers will not be comfortable with.


Actually, I don't think the import * in numpy is anything other than a historical artefact. Numpy just happens to be one of the oldest still-widely-used Python libraries (considering numpy started as Numeric), as you point out. As for import speed, have you considered using lazy imports in your script?

I don't see numpy being integrated into Python anytime soon. I don't think it would bring much, and one would have to drop the performance enhancements that rely on BLAS/LAPACK.

I think installing has improved a lot, and once pip + wheel matures, it should be easy to pip install numpy on Windows.


I've asked on the numpy mailing list. The "import * " was a design decision, now irrevocable without breaking existing code.

For example, from http://mail.scipy.org/pipermail/numpy-discussion/2008-July/0... :

Robert Kern: Your use case isn't so typical and so suffers on the import time end of the balance

Stéfan van der Walt: I.e. most people don't start up NumPy all the time -- they import NumPy, and then do some calculations, which typically take longer than the import time. ... You need fast startup time, but most of our users need quick access to whichever functions they want (and often use from an interactive terminal).

I went back to the topic last year. Currently 25% of the import time is spent building some functions which are then exec'ed. At every single import. I contributed a patch, which has been hanging around for a year. I came back to it last week. I'll be working on an updated patch.

There's also about 7% of the startup time because numpy.testing imports unittest in order to get TestCase, so people can refer to numpy.testing.TestCase. Even though numpy does nothing to TestCase and some of numpy's own unit tests use unittest.TestCase instead. sigh. And there's nothing to be done to improve that case.

Regarding the age - yes, you're right. BTW, parts of PIL started in 1995, making it the oldest widely used package, I think. Do you know of anything older?


Well, one of the things I need is for it to be in the standard library, so no, numpy/scipy doesn't have everything I need.


Reminds me of the story that made the rounds here a couple of years ago: The Python Standard Library - Where Modules Go To Die

https://news.ycombinator.com/item?id=3913182


Nice proposal. I think the problem is numpy itself. If you could just do pip install numeric_package then nobody could complain. I don't quite understand why a package has to depend on LAPACK. I will probably switch to julia-lang, because numpy is (at least for me) not that great to work with.


NumPy is a full MATLAB for Python, not a simple drop-in statistics library. It has to depend on LAPACK because writing a full linear algebra library that performs well is damn hard and takes several researcher-years. Most serious scientific computing libraries and utilities depend on it, including Julia, I think. There is certainly room for simpler libraries for people not seriously into numeric computation, as the linked document indicates.


Numpy does not depend on LAPACK, only scipy does (numpy only uses BLAS if you have one installed; it is optional).

The reason scipy (and Julia, BTW) needs BLAS/LAPACK is that it's the only way to get decent performance and reasonably accurate linear algebra. The alternative is writing your own implementation of something that has been used and debugged for 30 years, which does not seem like a good idea.


This is what I don't understand at all. Imagine somebody in a different area of computing said: oh, we solved that 30 years ago and now there is no room for improvement at all. Why can't this be done at least in C?


There is room for improvement, and it gets improved all the time (e.g. OpenBLAS is a recent contender). LAPACK is essentially an API for linear algebra, which is what allowed people to improve implementations and to benefit from them in older programs.

Think of it as the C library of numerical computing.


I'm confused how a library written in Fortran 90 can be called "the C library of numerical computing".


I was comparing LAPACK to the C library: nobody claims it is weird to use the C library, written > 30 years ago, instead of using something 'more modern'. Everybody uses the C library for system/low-level programming, and the same is true in numerical computing w.r.t. BLAS/LAPACK. Numpy/scipy use it, R uses it, Julia uses it, MATLAB uses it, Octave uses it.


You'll be disappointed to learn Julia depends on BLAS, LAPACK and librmath. That shouldn't dissuade you from trying it, though: it's a pretty cool language.


numpy has all sorts of awful C bindings which make it less than versatile in environments where you want pure Python. It's great from a performance point of view, but horrible for compatibility.

Google App Engine used to suffer because of this (more specifically, it still restricts your runtime to pure Python, but now you can at least import numpy). I believe the PyPy folks have also had their own set of struggles with numpy compatibility, although I'm not sure what the state of that is at present.

In any case, I think these compatibility concerns alone make a strong argument for including simple Statistics tooling into the standard library.


How does that work for SQLite 3, which _is_ part of the Python standard library?

I would actually prefer to have numpy included before those statistics functions.


OK, that's a pretty good counterexample. Touché. :-p

GAE just avoids it altogether (except locally, where you have CPython and use it to stub out core services hosted on the cloud runtime). You simply can't import sqlite3 on GAE when running on the cloud runtime, nor can you really use it as an external dependency.

I'm not really up on the details, but the PyPy website claims they've gotten around this by implementing a pure Python equivalent of the CPython stdlib library (http://pypy.org/compat.html).

I would put forward that SQLite3 is probably a pretty easy include in most C projects compared to whatever numpy would require. That said, I'm not qualified to assess this, being neither a numpy, Python core, nor sqlite3 dev.

All of this aside, it's worth mentioning that the entire standard lib includes and depends on some other C-only libraries. So it's not unprecedented. In principle, you'd want the standard lib to have as much pure Python as possible (PyPy kind of takes this to the ultimate extreme from what I can gather), but this isn't always practical (great example of "practicality beats purity" if you ask me).

Speaking of which, if it's cool to have `sqlite3` in the standard lib as part of the included batteries, why not mean and variance and the like? :D


PyPy has its own implementation of the sqlite module based on CFFI, as well as of other stdlib modules that wrap C libraries (a lot of the libraries written in C for the sake of performance in CPython are just pure Python in PyPy — CPython normally includes both the pure Python and the C version in those cases). numpy is far more complex because it's a far bigger API than anything like sqlite, though work is progressing on it.


numpy allows you to do some pretty low-level stuff (like creating a buffer from a raw pointer), which GAE wants to avoid for obvious security reasons. Same rationale as why ctypes (part of the stdlib) is not available.

I have never used numpy on GAE, but I would suspect some of it is not enabled for those reasons.


I'm in favor. I was surprised and annoyed to find there wasn't a standard library module for doing Excel-level statistics. If you throw basic least-squares linear regression in there too, I can eliminate Excel from my physics classes.


One side effect is that this would accelerate the adoption of Python 3 in the scientific community.


Kudos to PHP for apparently being ahead of the curve among dynamic languages with regard to statistics. Another interesting, yet unmentioned option is Clojure/Incanter.


I upvoted this comment, then I realized that those PHP stats functions aren't in the standard library. They're in a PECL extension (equivalent to Python C extensions).


And another option is PDL - http://pdl.perl.org

NB: I believe the Perl 6 spec includes PDL support - http://perlcabal.org/syn/S09.html#PDL_support


No mention of Pandas?

http://pandas.pydata.org/


They probably left out pandas because it depends on numpy and also this point:

> For many people, installing numpy may be difficult or impossible.

That's as true, and arguably more so, for pandas.


Although the solution, imho, is to make numpy easier to install across many platforms (ideally `pip install numpy pandas` should just work).


There are two types of potential limitations on installing software: technical and policy. Just solving the technical one won't necessarily make it easier to install numpy/pandas on a user's computer from a policy perspective. Being able to rely on basic statistics functions being available in a default Python install would be nice.


I think Pandas is a great candidate for inclusion in the stdlib if this ever happens - and hopefully, numpy/parts of scipy will also be thrown in :)


I think the idea is to include a small, independent stats package, not a full-featured, still-developing third-party library. Any number of people need std dev easily and reliably available. If you need numpy on top of that, you know you do and can afford the effort.

For 99% of my work, numpy and the associated compilation overhead is unneeded - fits my brain, fits my needs.


So let's amortize the cost of compiling and/or installing fast binaries by only relying on plain Python.

It would be great if there were a natural progression (and/or compat shims) for porting from this new stdlib library to NumPy[Py] (and/or from LibreOffice), e.g. "Is it called 'cummean'?"


I guess that's the point of the stats battery - pure Python stats with no/minimal cost to migrate to numpy, e.g.

  from stats import mean

  ...

  from numpy import mean


I'm against this. Either you have to create a new statistics module or you would have to include numpy/pandas/statsmodels in the standard library. In either case it would essentially freeze the modules against further development outside the Python release cycle...


I would like having these simple functions, but I think they could just go into the 'math' module.


Nice idea



