
Maya – Python Datetimes for Humans - kenneth_reitz
https://www.kennethreitz.org/essays/introducing-maya-datetimes-for-humans
======
BoppreH
Awesome! Timekeeping is hard and I'm glad we now have one more tool to deal
with it.

One thing that is bothering me is that when you ask for
`maya.when('tomorrow')`, or give only a date, you get back a timestamp with
millisecond precision, representing 00:00 of that day. I understand this
simplifies the implementation, but shouldn't `tomorrow` be a _range_ , from
00:00 to 23:59?

Treating imprecise dates as ranges would allow for stuff like

    
    
        # Did this event happen "yesterday" according to the US/Eastern time zone?
        timestamp in maya.when('yesterday', to_timezone='US/Eastern')
    
        # Get me all events that happened last Friday (UTC)
        [event for event in events if event.time in maya.when('2016-12-16')]
    

Maybe I'm being naive, and there's a reason why this won't work, but this
seems the way most humans deal with time.
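
For what it's worth, the range idea can be sketched with just the standard
library (`zoneinfo`, Python 3.9+). The `day_range` and `within` helpers below
are hypothetical, not part of Maya:

```python
from datetime import date, datetime, timedelta
from zoneinfo import ZoneInfo

def day_range(d: date, tz: str = "UTC"):
    """Return the (start, end) UTC instants covering calendar date `d` in `tz`."""
    zone = ZoneInfo(tz)
    start = datetime.combine(d, datetime.min.time(), tzinfo=zone)
    end = start + timedelta(days=1)  # wall-clock midnight of the next day
    return start.astimezone(ZoneInfo("UTC")), end.astimezone(ZoneInfo("UTC"))

def within(ts: datetime, d: date, tz: str = "UTC") -> bool:
    """Did the aware instant `ts` fall on calendar date `d` in timezone `tz`?"""
    start, end = day_range(d, tz)
    return start <= ts < end
```

With something like this, a membership test such as
`within(event.time, date(2016, 12, 16), "US/Eastern")` means what the prose
above means, rather than comparing against a single midnight timestamp.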

PS: it failed to install on Windows, so I opened an issue at
[https://github.com/kennethreitz/maya/issues/10](https://github.com/kennethreitz/maya/issues/10)

~~~
tokenizerrr
How would you serialize that range to, for example, ISO8601?

~~~
cheapsteak
From the wiki: ISO 8601 has four ways to express a time interval:

1. Start and end, such as "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z".

2. Start and duration, such as "2007-03-01T13:00:00Z/P1Y2M10DT2H30M".

3. Duration and end, such as "P1Y2M10DT2H30M/2008-05-11T15:30:00Z".

4. Duration only, such as "P1Y2M10DT2H30M", with additional context
information.

[http://en.wikipedia.org/wiki/ISO_8601#Time_intervals](http://en.wikipedia.org/wiki/ISO_8601#Time_intervals)
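
Form 1 round-trips easily in Python; a minimal sketch (the `parse_interval`
helper is hypothetical, and note that `datetime.fromisoformat` on Python < 3.11
wants `+00:00` rather than `Z`):

```python
from datetime import datetime

def parse_interval(s: str):
    """Split an ISO 8601 start/end interval into two aware datetimes.
    Only handles the start/end form, not the duration forms."""
    start_s, end_s = s.split("/")
    return datetime.fromisoformat(start_s), datetime.fromisoformat(end_s)

start, end = parse_interval(
    "2007-03-01T13:00:00+00:00/2008-05-11T15:30:00+00:00")
```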

------
sametmax
I wish Kenneth would have contributed to an existing project for once.

Arrow and Pendulum (my current favorite) have very decent APIs. The latter is
especially well tested for the numerous corner cases of date handling, which I
doubt Kenneth got right on the first try.

For Requests, a full rewrite made sense because urllib sucked so much and we
had no good alternatives. But for datetimes, alternatives exist and they are
good. Do not split the open source effort; join forces!

~~~
jsnathan
Worrying about split effort makes most sense when the problem is large and the
amount of effort dedicated to the problem is limited, or when there are
network effects that come from having more people involved in the project. I
don't think that applies here.

This library is not going to divert much (any?) effort that could have
improved other date-time libraries.

~~~
sametmax
If you contribute to any open source project, you quickly learn code is only a
small (yet important) part of the effort.

Other things that get split:

\- attention;

\- visibility;

\- pull requests;

\- documentation edits;

\- bug reports;

\- installs (and hence testing on the field);

\- compatibility layers;

\- compatible tooling;

\- 3rd party tutorials, snippets and examples;

\- help from integrators;

Now if you don't have any very good alternatives, it's good: you want to
steal that from them. But when you DO have good existing projects, those
are the bread and butter of their success.

Now I know it's way harder to contribute to something than to roll your own. I
know it's also more fun to roll your own. And there is ego at play as well.

I understand all that, and I respect Kenneth as a professional. But I don't
think it's a good call on his part here.

It is especially true for him, because he now has such a good rep in the
Python community that everything he does gets put under the spotlight.

~~~
jsnathan
Have a look at the code [1]. There's maybe 20 lines of actual logic in there.

Even taking all of what you mentioned into account, it's still not more than a
single afternoon.

I agree with you in general, splintering in open source is a real problem
worth tackling, but this is just not a case where it makes any sense to worry
about that.

[1]:
[https://github.com/kennethreitz/maya/blob/master/maya.py](https://github.com/kennethreitz/maya/blob/master/maya.py)

------
ianamartin
I think it's fascinating that the community has no consensus about a datetime
library.

A lot of Python is really solved. We don't argue about using requests (a not-
coincidental example). If you're using Python, and you need to deal with http,
you use requests. Everyone knows this.

There are basically 3 platforms for web frameworks. Flask, Pyramid, and
Django. Maybe we're a little more dissolute than C# or Ruby folks, but that's
pretty impressive considering how much we Python people like to roll our own.

The fact that there is real disagreement among ourselves about this particular
issue says to me that it is more about the difficulty of the problem than
anything else.

~~~
NelsonMinar
Related: the datetime library in pretty much every language is terrible. The
only one I've ever used that I remember not actively hating was MomentJS.

Python's history of time types was terrible, as well. Particularly the era
when we just used tuples of length 8 or 9 that weren't timezone aware.

~~~
masklinn
> The only one I've ever used that I remember not actively hating was
> MomentJS.

Boy do we disagree. Between the mutable API, the fuzzy parsing, the lack of tz
support and the yet-another-reinvention of date/time formatting DSL[0] moment
was one of those things I'd rather have not had in my life.

[0] which, to add insult to injury, is really similar to but not quite
compatible with the LDML's

~~~
STRiDEX
I've used timezones with momentjs.
[http://momentjs.com/timezone/](http://momentjs.com/timezone/) Its great. The
parsing also has a strict mode [http://momentjs.com/docs/#/parsing/string-
format/](http://momentjs.com/docs/#/parsing/string-format/)

When did you last use it? I love working with momentjs.

~~~
masklinn
> I've used timezones with momentjs.
> [http://momentjs.com/timezone/](http://momentjs.com/timezone/) It's great.
> The parsing also has a strict mode:
> [http://momentjs.com/docs/#/parsing/string-format/](http://momentjs.com/docs/#/parsing/string-format/)

Yes, relatively recent options or completely separate projects allow doing
these things. Do you know what happens to options when the default behaviour
looks like it works? They don't get used, and the original garbage remains.

> I love working with momentjs.

And as I wrote earlier, boy do we disagree.

------
jMyles
What's up buddy?

My only real question:

> rand_day = maya.when('2011-02-07', timezone='US/Eastern')

This returning an object representing a DateTime on the 6th (in UTC time)
strikes me as perhaps "not for humans."

If I just see that line casually, I think I expect to get a Date and for it to
be the 7th.

It looks like, in order to get this (arguably expected) object, I need to take
the resulting MayaDT epoch and run its `datetime` method, passing naive=True?

And I also see that `_tz` can only ever be `pytz.timezone('UTC')` - is this
the result of some belief that timezones are illusions or something? :-)

For a while, I have kinda thought that timezones foment a confused mental
model of time and teamwork. I prefer to think in terms of the astronomy - it's
not actually a _different time_ anywhere else, it's just that the sun is at a
different position relative to the rest of the earth (and thus, ones faraway
teammates and loved ones).

Anyway, thanks for yet another set of interesting ideas. Hope you are well.

~~~
guitarbill
It's worth thinking of it the same way as strings. Strings can have several
encodings, e.g. ASCII, UTF-8 and a hundred others. These are external
representations. You don't want to parse the string every time you have to
deal with it, so you parse it once and store it using (ideally) a single
internal representation (e.g. UCS-2, UCS-4). Then, when you need to pass the
string to something else, you encode it again.

Same with dates. Instead of trying to deal with and store all possible
representations (format, timezones, etc), you convert all dates to a single
representation (hence UTC). When you need to output the date, encode it into
whatever format and timezone you need.

It's much cleaner this way, because there's less chance you'll mix up
representations. So the "for humans" part is more "for developers, who are
also humans but like to pretend they aren't and never make mistakes".
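
The decode/store/encode discipline described above can be sketched with the
stdlib alone (`zoneinfo`, Python 3.9+; the sample string is made up):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "Decode": parse the external representation once,
# normalising to a single internal form (aware UTC).
raw = "2016-12-16 18:23:45-05:00"
internal = datetime.fromisoformat(raw).astimezone(ZoneInfo("UTC"))

# "Encode": render for a particular audience only at the output boundary.
display = (internal.astimezone(ZoneInfo("US/Eastern"))
           .strftime("%Y-%m-%d %H:%M %Z"))
```

Everything in between the two boundaries deals with exactly one
representation, which is where the reduced chance of mix-ups comes from.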

------
krautsourced
I wonder if the naming isn't a bit unfortunate, seeing that Maya is one of the
major 3d packages out there and googling for Maya and Python will almost
always lead there (also, not sure whether Autodesk might object...)

~~~
duncanawoods
Yep. Picking cute short brand names for libraries as though they were startups
clutters up the namespace of programmers' conversational language, causing
ambiguity and obfuscation. It would make life easier if we adopted an informal
naming convention for small libraries, e.g. <lang><brand><purpose>. Lang is
useful when libraries get ported, so you get e.g. PyMayaTime, GoMayaTime etc.
It still sounds cute, but you know what it's for when you pick up someone
else's code, and it's easy to google.

~~~
Senji
Picking names which collide with stuff seems to be the standard these days.
Don't get me started on Google(Chrome, Go)

------
etanol
Not to undermine its merit, but most of the dirty work and heavy lifting is
done by its dependencies:

[https://github.com/kennethreitz/maya/blob/d57a78c6bc6b5295f7...](https://github.com/kennethreitz/maya/blob/d57a78c6bc6b5295f7d7207be0ee7cf0d62f063f/setup.py#L14-L20)

And i18n support in humanize is a bit lacking, as it only translates to
French, Korean and Russian. Given that most of the translations needed to
render human dates can be found in the CLDR database, maintaining their own
looks like a bit of a wasted effort.

Reference:

[http://cldr.unicode.org/](http://cldr.unicode.org/)

~~~
masklinn
> Given that most of the translations needed to render human dates can be
> found in the CLDR database, maintaining their own looks like a bit of a
> wasted effort.

You still need to parse the CLDR (the formats are pretty wonky) and provide
APIs for its functions. Of course that's what Babel
([http://babel.pocoo.org](http://babel.pocoo.org)) does.

~~~
thrownblown
and here I thought babel was for es2015

------
Goopplesoft
Kenneth Reitz is a testament to how important good interfaces are in the
developer community. There are literally 0 interesting things in the code (all
dependency driven, [https://git.io/v15i3](https://git.io/v15i3)). It does have
a nice interface and because of this it will probably become one of the more
popular python datetime libs.

~~~
vegabook
Arguably, bringing multiple dependencies under one roof, with a nice, human-
readable interface, is exactly what the value is here. This SO answer from
Pandas lead Wes McKinney says it better than anything I could comment. Python
datetime/timestamp management: "Welcome to Hell"

[http://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64](http://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64)

I run batch and real-time financial feeds into Python from multiple sources,
inevitably from different programming language paradigms and with different
time/date/timezone conventions, and then I allow users to interact with it all
in their own lingo for dates and times. My experience is that it is indeed
hell consolidating all this into a common tongue. The "slang time" idea
suggests to me that this library understands the need for flexibility and
malleability in this very disjointed and often frustrating part of the Python
ecosystem.

~~~
dom0
I think there are two separate issues here.

One thing is that some libraries bring their own not-really-different-just-
different-enough-to-break-things date/time types to the table. That can be
worked out over time, not that big of a problem, just inconvenient.

The other is that I've grown to think that the abstraction commonly used,
seeing date/time as something like (year, month, day, hour, minute, second,
subsecond, timezone) is in itself not really suitable, because the derived
interfaces (like accessing and modifying days, months etc. separately) mean
that most code working with these will inevitably be bug-ridden.

The root cause for this is imho that date and time are extremely complicated
and it's usually unclear what the correct behaviour would be in any of the
many, many edge cases.

Some examples:

1.) One week after `this` date. Does this mean, add seven days? What about
leap days? Should the result be the same weekday, just in the next week
instead?

2.) Start and end time of a process should be recorded. The process starts and
finishes. All good. The duration of the process is -1 hour and 5 minutes.
Should instead a start timestamp be recorded and a monotonic duration? Or
perhaps we should rather note the beginning of a monotonic period and record
an end timestamp, deriving the start timestamp? Which is correct?

3.) One ^W two words: _recurrence rules_

4.) DST

5.) Combining 3.) and 4.)!

 _dom's rule of thumb_: if code does addition or subtraction with some
measure of time, it's probably wrong.
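
Example 1.) above is easy to demonstrate with the stdlib (`zoneinfo`,
Python 3.9+): across the US spring-forward, "add a day" gives two different
answers depending on whether you mean wall-clock time or elapsed time.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
start = datetime(2021, 3, 13, 12, 0, tzinfo=tz)  # day before spring-forward

# Wall-clock arithmetic: same local time next day, but only 23 elapsed hours.
wall = start + timedelta(days=1)

# Elapsed-time arithmetic: 24 real hours later, the local clock reads 13:00.
elapsed = (start.astimezone(ZoneInfo("UTC"))
           + timedelta(hours=24)).astimezone(tz)
```

Neither answer is wrong; the bug is in code that doesn't decide which of the
two it means.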

~~~
smallnamespace
Regarding recurrence rules specifically, I worked in the financial services
industry and bonds and derivatives obviously recur (usually monthly) for
decades.

Not only do you need to specify the precise dates of recurrences, but you also
need to map time intervals to fractions of a year (e.g. if I owe you $1mm a
year, is 'one month' equal to 1/12 of a year, or do I count the actual number
of days in between? What about leap days, etc.). Getting it wrong even
slightly means you get all the cashflows wrong.

Welcome to the joy of 'day count conventions' [1] and 'date rolling' [2].

[1]
[https://en.wikipedia.org/wiki/Day_count_convention](https://en.wikipedia.org/wiki/Day_count_convention)
[2]
[https://en.wikipedia.org/wiki/Date_rolling](https://en.wikipedia.org/wiki/Date_rolling)

Funny thing is these have _all_ probably been reimplemented from scratch at
every single bank.
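
For a flavour of what a day-count convention looks like, here is a simplified
30/360 year fraction (illustrative only; the real 30U/360 and 30E/360 rules
add end-of-month special cases that this hypothetical helper ignores):

```python
from datetime import date

def day_count_30_360(start: date, end: date) -> float:
    """Simplified 30/360 day-count fraction: every month counts as 30 days,
    every year as 360. End-of-month special cases are deliberately omitted."""
    d1 = min(start.day, 30)
    d2 = min(end.day, 30) if d1 == 30 else end.day
    days = ((end.year - start.year) * 360
            + (end.month - start.month) * 30
            + (d2 - d1))
    return days / 360
```

Under this convention six calendar months is exactly 0.5 of a year regardless
of how many actual days elapsed, which is precisely the kind of choice that
differs between markets.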

~~~
vegabook
I've personally re-implemented bond daycount for multiple markets several
times at different shops. Maybe quantlib has this? Have never been allowed to
use it.

~~~
smallnamespace
Quantlib has a relatively small set implemented:

[https://github.com/lballabio/QuantLib/tree/master/ql/time/da...](https://github.com/lballabio/QuantLib/tree/master/ql/time/daycounters)

------
bndr
How does it differ from Arrow which was made several years ago? [1]

[1] [http://crsmithdev.com/arrow/](http://crsmithdev.com/arrow/)

~~~
smegel
That would be nice to know. Arrow still seems active, and has been
consistently so over the last 3 years.

I have a bit of doubt about a time library that defaults to UTC but uses human
phrases like "tomorrow". Who thinks about UTC tomorrow?

~~~
kenneth_reitz
that's why there's `maya.when('yesterday', timezone='US/Eastern')` :)

~~~
Drdrdrq
That's great, but why this then:

    
    
        >>> tomorrow = maya.when('tomorrow') 
        <MayaDT epoch=1481919067.23> 
        >>> tomorrow.slang_date() 
        'tomorrow' 
        >>> tomorrow.slang_time() 
        '23 hours from now'
    

Huh? Given that we never specified timezone, I would expect 24h (unless DST
change happens).

This is actually my biggest gripe with date+time libs. Imho API should always
be explicit in what its default TZ is.

EDIT: still, appreciate what you are doing. Requests rule, and there are many
other areas (including datetime) that need libs with better APIs. Thumbs up!

~~~
misnome
Why on earth is 'tomorrow' just +24 (local) hours. I'd expect it to be the
next calendar date.

~~~
jibsen
I suppose due to DST, the same clock time on the next calendar date might not
exist or be ambiguous.

~~~
pherq
Conversely, +24 hours may fall on the same calendar date or two calendar days
on from now.

Really, "tomorrow" should be the interval from the beginning of the next
calendar date to the end.

------
Drdrdrq
I have found that no matter what language/platform I use, the one thing that
is always supported is UNIX timestamp. That makes date+time operations much
easier:

    
    
      1) Whenever dealing with users, use local tz.
      2) Always save and manipulate in utc.

~~~
masklinn
Except (2) doesn't necessarily work when events are _local_ and you factor in
timezone variations (both DST and actual TZ changes). There are classes of
events where you're much better off with zoned local dates e.g. local
meetings.

Example: in early 2011, Samoa announced that on December 29th at midnight
_local_ they would switch their timezone offset from -11 to +13. Before that
announcement (or at least before your timezone database has been updated),
store as UTC a meeting in local Samoa time, and the user will miss their
meeting by a day.

Of course storing a meeting on December 30th local would also have been
fraught as there is no December 30th 2011 in Pacific/Apia but that is a
_common occurrence_ historically due to the unsynchronised julian/gregorian
switches e.g. none of the dates between February 16th and February 28th 1923
(included) exist in Greece, and the US doesn't have a 9/11 in 1752.

~~~
appleflaxen
I think the recommendation still holds.

You want a meeting at 1100 local time.

Meeting is stored for that time in UTC, on the correct date (before Samoa
changed, we all understood what date you meant)

Samoa changes the rules.

The calendar doesn't change.

This stuff is mind-bending, so I could be missing something, but a more
detailed walk through your mental debugger might clarify.

~~~
achamayou
Classic counter example: storing business hours. If the store/exchange opens
at 8:00AM every weekday in Somethingania/Foocity, you want to store 8:00 and
the name/reference of the timezone. Because the store/exchange will still open
at 8:00AM local time, even across DST, government-mandated timezone changes
etc (largely, I'm sure there are exceptions but for stock exchanges there
definitely aren't).

School times, times for religious services, lots of times are relative to
local time, which itself is subject to change against UTC (sometimes
predictable, sometimes not, the government gets to update the timezone and the
DST scheme arbitrarily). So only ever storing UTC and using local time purely
for display isn't a sustainable one-size-fits-all option.
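
A sketch of that storage scheme with the stdlib (`zoneinfo`, Python 3.9+; the
dict layout and the `next_opening_utc` helper are made up for illustration):

```python
from datetime import date, datetime, time
from zoneinfo import ZoneInfo

# Store the wall-clock time plus an IANA zone name, NOT a UTC instant.
opening = {"local_time": time(8, 0), "tz": "Pacific/Apia"}

def next_opening_utc(on_date: date) -> datetime:
    """Resolve the stored local opening time to a UTC instant for a given
    date, using whatever tz rules are installed at resolution time."""
    local = datetime.combine(on_date, opening["local_time"],
                             tzinfo=ZoneInfo(opening["tz"]))
    return local.astimezone(ZoneInfo("UTC"))
```

If the government then moves the zone, the stored `8:00` stays correct; only
the conversion done at query time changes.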

------
dom0
> Datetimes are a headache to deal with in Python, especially when dealing
> with timezones, especially when dealing with different machines with
> different locales.

Anything with date/time calculations is always a pain. It probably doesn't
have much to do with the library/language itself; rather, the abstraction
level that's used (and typically used in other libraries) means that the
complexities of calendar and time systems are sprinkled all over application
code.

I do have to notice here that always using UTC is _not_ always the right thing
to do. For example, evaluating rrules in UTC is rather error-prone (DST).

~~~
mjevans
If you're mixing UTC and DST you are doing something wrong.

~~~
deathanatos
I think what dom0 means is a situation like the following. Say you want to
schedule, on a calendar, an event that happens daily at 2PM in your local
timezone, say America/New_York. If you record that event as happening at 1900
UTC each day, when DST rolls around, it'll shift to 1PM or 3PM in the local
timezone. This is not what the user wants.

Here, despite storing everything in UTC and attempting to apply DST as late as
possible as if it were a "display issue" — normally the right thing — here
results in the wrong outcome.

~~~
mjevans
In your example what I said is still true.

The user didn't want a precise coordinated time (DST is not a factor for
storage/recall); the user wanted a fuzzy reference to local time (DST is a
factor for evaluation).

------
ak217
I like Kenneth's work, but "I wrote a new datetime library" is a cliche now.
We have datetime, dateutil, pytz, babel, arrow, pendulum, delorean, a bunch of
lesser known stuff, and now this. I have yet to see the need for anything but
the first four.

~~~
toyg
Seems to be a case of "Python Web Framework"-itis: it's relatively simple to
hack up something from scratch that covers one's particular use-case (in this
case, it seems, a slightly smarter parser), but it's actually hard to properly
cover all use-cases.

This is what happened with Python web frameworks, a scene which was heavily
fragmented before Django and Flask basically solidified the two main
communities of users ("I need everything and the kitchen sink" / "I need the
bare minimum to get going"). The same happened with urllib2/urllib3/httplib/
etc before Requests appeared.

Nobody seems to have pulled this trick for datetime libraries yet, so here's a
new contestant.

------
foxhop
My first commit to ago.py was 'Fri Jun 29 19:25:55 2012'.

    
    
        >>> import ago
        >>> import dateutil.parser
        >>> ago.human(dateutil.parser.parse('Fri Jun 29 19:25:55 2012'))
        '4 years, 172 days ago'
    

The current implementation is 66 lines of code including docstrings:

* [https://bitbucket.org/russellballestrini/ago/src/tip/ago.py](https://bitbucket.org/russellballestrini/ago/src/tip/ago.py)

* [https://pypi.python.org/pypi/ago](https://pypi.python.org/pypi/ago)

------
japhyr
I'm competent with strptime(), but this looks really nice:

    
    
        # Automatically parse datetime strings and generate naive datetimes.
        >>> scraped = '2016-12-16 18:23:45.423992+00:00'
        >>> maya.parse(scraped).datetime(to_timezone='US/Eastern', naive=True)
        datetime.datetime(2016, 12, 16, 13, 23, 45, 423992)
    

I'm happy not to have to write formatting arguments to strptime() anymore. Do
the other datetime libraries have similar parsing functions?
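
For strictly ISO 8601 input like the string above, the stdlib alone can do the
same conversion without format strings (a sketch using `zoneinfo`,
Python 3.9+; note that `fromisoformat` is far pickier about input formats than
`dateutil`'s parser):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

scraped = '2016-12-16 18:23:45.423992+00:00'
naive_eastern = (datetime.fromisoformat(scraped)
                 .astimezone(ZoneInfo("US/Eastern"))
                 .replace(tzinfo=None))
# datetime.datetime(2016, 12, 16, 13, 23, 45, 423992)
```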

~~~
detaro
Maya actually uses the dateutil.parser.parse function from dateutil under the
hood, which as far as I remember is the most comprehensive such function among
the typical Python datetime libs.

~~~
japhyr
Wow, I've been using datetime for years, without ever noticing the dateutil
library. I'll use this from now on.

------
charlex815
I suppose this beats breaking out timedelta, but I think it'll be hard for me
to find an actual use in my projects that I couldn't accomplish with maybe
just a couple of extra lines.

~~~
kenneth_reitz
you'd be surprised at the amount of headache-inducing code that timezones can
cause, especially when doing timezone algebra on servers with different
locales than your development machine.

This API avoids that problem entirely.

~~~
SFJulie
It does not: timezone (TZ) definitions are political, inaccurate, stupid, and
unversioned.

The problem with TZ is TZ in its core definition.

1) They always change: if you have not updated your TZ data in the last 3
months, it is probably inaccurate.

2) Your TZ definitions may be accurate, but they may not have been applied for
real in the concerned zone.

3) You can have different local times at the same longitude.

4) You can have different days at the same longitude...

5) TZ data is not versioned: if the TZ changed between 2 records you made, you
have inaccurate intervals stored. We do NOT have an API to take TZ changes
over time into consideration.

6) CEST/DST breaks the axiom that time is a monotonically increasing function.

References: 8.5.3 Time Zones in the PostgreSQL manual
[https://www.postgresql.org/docs/9.2/static/datatype-datetime.html](https://www.postgresql.org/docs/9.2/static/datatype-datetime.html)

Computerphile, what's wrong with timezones
[https://www.youtube.com/watch?v=-5wpm-gesOY](https://www.youtube.com/watch?v=-5wpm-gesOY)

~~~
DasIch
The TZ database has versions and you could at least in theory expose that at a
higher level in libraries.

~~~
nicpottier
Pretty sure pytz will hand you the appropriate timezone for a given date.
There aren't version numbers after all but it can do the right thing based on
when you say it is.

There is some weird behaviour due to this when trying to get a timezone
without an associated date as it doesn't default to now.

Then again you probably have all sorts of DST bugs if you have places in your
code that do that.
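
That history-awareness is in the stdlib too; the Samoa switch mentioned
elsewhere in the thread shows up directly in `zoneinfo` (Python 3.9+; the
exact offsets depend on the tzdata installed on your system):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

apia = ZoneInfo("Pacific/Apia")
# Last day before the Dec 2011 switch: UTC-11 standard, UTC-10 with DST.
before = datetime(2011, 12, 29, 12, 0, tzinfo=apia)
# First full day after: UTC+13 standard, UTC+14 with DST.
after = datetime(2011, 12, 31, 12, 0, tzinfo=apia)

assert before.utcoffset() == timedelta(hours=-10)
assert after.utcoffset() == timedelta(hours=14)
```

The zone data, not the library, carries the history: the same wall-clock time
two days apart maps to offsets a full day apart.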

~~~
Stasis5001
Pytz handles a certain class of funkiness in TZ changes, but not the one
mentioned here. Consider, a user in footopia enters a datetime 3 months from
now which is saved in UTC in your database. Then footopia changes their TZ
definition. The only way you could get this right is if you also know when the
datetime in the database was created. You would need to tell your datetime
library the datetime and _when it was created_, and pytz doesn't do that (I
don't know any that do, actually).

~~~
guitarbill
You'll never get this right, even if you had all of this info, because as
mjevans points out[0] to make it work you need to know:

> P) The event is at a precise internationally recognized moment (better for
> co-ordination globally).

> R) The event is in local time (like a lunch date) and expected to remain
> colloquially fixed.

In the case you mention, it's somewhat arguable that the burden of changing
the colloquially fixed dates (R) falls on the citizens of footopia, in the
same way as changing the time of a purely mechanical clock would. Caveat
emptor.

[0]
[https://news.ycombinator.com/item?id=13206671](https://news.ycombinator.com/item?id=13206671)

------
meltingwax
This is designed for programmers of user-facing applications. For science and
engineering, it does not address the problems, e.g. the lack of leap seconds.

~~~
masklinn
> For science and engineering, it does not address the problems, eg lack of
> leap seconds.

What do you mean by "lack of leap seconds"? A non-unix-timestamp-based time
library?

~~~
meltingwax
When the mapping between a time_t and a "human readable format" (hours,
minutes, seconds, etc) takes into account extra leap seconds. You could use
time_t underneath, if that's what you really wanted to do.

As my co-worker once put it, "time is a four letter word."

At the very least, there is a need for showing the occasional 61st second of a
minute, and for adding a time and a timedelta with leap seconds taken into
account.

~~~
masklinn
> When the mapping between a time_t and a "human readable format" (hours,
> minutes, seconds, etc) takes into account extra leap seconds.

I still don't understand what you mean.

> You could use time_t underneath, if that's what you really wanted to do.

Well you've got unix timestamp (UTC) which "skips" leap seconds but as a
result is very easy to map to "human time", or you've got TAI[0] which
includes leap seconds but doesn't allow dates in the future since you don't
know where and when new leap seconds will be added long in advance, and now
you need regular updates/permanent connectivity so you can remap TAI onto
human time. _And_ you need an NTP replacement, though I guess having a GPS
unit in everything would do.

[0] and GPS time which is just TAI + 19s

~~~
deathanatos
What I believe he's trying to say is that sometimes, you need a library
capable of the following:

    
    
        2016-12-31T23:59:30Z + 60s == 2017-01-01T00:00:29Z
    

(The above is a true statement, due to the presence of a leap second in the
duration.)

Python does not do this:

    
    
      In [5]: datetime.datetime(2016, 12, 31, 23, 59, 30, tzinfo=pytz.UTC) + datetime.timedelta(seconds=60)
      Out[5]: datetime.datetime(2017, 1, 1, 0, 0, 30, tzinfo=<UTC>)
    

(Nor does Arrow. Maya doesn't really seem to support arithmetic on a MayaDT
short of converting to a datetime, so in that regard, it behaves like Python.)
The above output is not terribly surprising, as Python and a lot of software
tend to follow Unix/POSIX time. Whether it is "right" depends.

You also mix up the various timescales:

> _Well you 've got unix timestamp (UTC) which "skips" leap seconds_

Unix/POSIX time might not skip leap seconds; some will repeat a second as the
leap second occurs. (I.e., while UTC counts 23:59:59, 23:59:60, 00:00:00,
POSIX will count 23:59:59, 23:59:59, 00:00:00.) Linux falls into this latter
case, I believe, during which time adjtimex() will return TIME_OOP.

However, Unix/POSIX time is not UTC. UTC never "skips" leap seconds, as leap
seconds are an inherent property of UTC.

> _or you 've got TAI which includes leap seconds_

TAI does not include leap seconds[1]:

> _the name International Atomic Time (TAI) was assigned to a time scale based
> on SI seconds with no leap seconds._

> _TAI is exactly 36 seconds ahead of UTC. The 36 seconds results from the
> initial difference of 10 seconds at the start of 1972, plus 26 leap seconds
> in UTC since 1972._

UTC: has leap seconds as a property of how it works

TAI: does not include leap seconds

POSIX/Unix time: an integer that can be mapped to UTC except during leap
seconds, where it becomes ugly (unless you know if TIME_OOP was set, in which
case it can still be mapped to UTC).

> _but [TAI] doesn 't allow dates in the future since you don't know where and
> when new leap seconds will be added long in advance, and now you need
> regular updates/permanent connectivity so you can remap TAI onto human time_

With the above, it should be obvious that this is a property of UTC, not TAI.
TAI timestamps in the future should be stable/usable without surprises. UTC's
might not be, due to leap seconds.

[1]:
[https://en.wikipedia.org/wiki/International_Atomic_Time](https://en.wikipedia.org/wiki/International_Atomic_Time)

~~~
masklinn
> You also mix up the various timescales:

No, but I think we're interpreting the word "skipping" differently, which
leads to confusion. To me, in TAI a "leap second" is nothing special and is
included in the normal time stream; in UTC the leap second is removed from the
normal time stream and tacked on as a special case on some specific days,
hence skipping it, hence UTC running late compared to TAI: UTC has "removed"
36 seconds from the timestream compared to TAI.

> Unix/POSIX time might not skip leap seconds

UNIX/POSIX time can only ignore leap seconds; how it does so doesn't matter,
because it's defined in terms of constant-length days: the timestamp is 86400
× days since epoch + seconds since midnight. The day after a leap second, that
leap second has "disappeared" from the timestamp sequence.

> However, Unix/POSIX time is not UTC.

Unix time is specifically defined in terms of UTC (at least after January 1st
1972, since that's when UTC was defined; pre-72 it's ambiguous), though it is
not truly UTC since it can't _represent_ leap seconds. Outside of the extent
of leap seconds, UNIX time tracks UTC exactly.

> as leap seconds are an inherent property of UTC.

They're an inherent property of UTC in that it removes them from normal time
treatment.

> TAI does not include leap seconds[1]:

It includes them in the sense that it treats them as perfectly normal seconds
without anything special about them.

> With the above, it should be obvious that this is a property of UTC, not
> TAI.

The paragraph you're quoting here (and my entire comment, really) is talking
about how these systems _map to human time_, aka
year/month/day/hour/minute/second. Leap seconds were introduced specifically
for that purpose, and UTC (and unix timestamps) are thus trivially mappable
_by definition_.

Because it does not, TAI can't be mapped to human time without leap second
definitions. That is, May 7th, 2078 at 17:54:33 is known in UTC, but not known
in TAI, because we don't know how many leap seconds will be introduced in the
next 61 years, and that's necessary to map TAI to and from human time.

------
diyseguy
But does it get DST correct? I haven't found any Python time libraries that
do.

~~~
sdispater
You can take a look at pendulum:
[https://github.com/sdispater/pendulum](https://github.com/sdispater/pendulum)

A lot of effort has been put into getting DST right so I hope it will be what
you are looking for.

Disclaimer: I am the author of Pendulum :-)

------
Animats
Bikeshedding. The Python "datetime" module does almost all of this. (Although
I did try to get ISO8601 parsing put in, after finding eight libraries for it,
all of which were broken in some way.)
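For what it's worth, the stdlib did eventually grow native parsing for the
common ISO 8601 profile; a quick sketch (`fromisoformat` is Python 3.7+, and
it only accepts a bare `Z` suffix from 3.11 onward):

```python
from datetime import datetime, timezone

# Parse an ISO 8601 string with an explicit offset and round-trip it.
dt = datetime.fromisoformat("2016-12-18T19:24:50.212663+00:00")
print(dt.tzinfo == timezone.utc)  # True
print(dt.isoformat())             # 2016-12-18T19:24:50.212663+00:00
```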

------
ben_jones
People are asking why this is needed compared to Arrow and a few other
libraries. Personally I was never happy using the python time libraries, it
always felt like my use cases were slightly different than the library's, and I
always found myself getting frustrated over little things here and there. I
think there are a LOT of use cases for time in python applications and there
is plenty of room for small libraries to satisfy these conditions, versus one
monolithic time library that attempts to solve all.

------
djoser
I understand the link with the Maya beliefs, but for a "digital" product,
this sounds a lot like an Autodesk product... That said, nice API!

------
RubyPinch

        <MayaDT epoch=1481850660.9>
    

not the most human readable is it?

I guess it would help with like, "X happened before Y" situations, but I don't
think I'd trust my eyes for that!

I think I would have preferred the "this happened in X month" case instead; I
find it easier to trust my eyes for that, instead of trying to look for
differing digits!

\- - -

UTC default is a godsend though

~~~
jvdh
Fortunately it is really easy to print it in other formats, which is
certainly not the case for the standard datetime library.

------
partycoder
[offtopic] I do not like the "for humans" thing. What could it be for instead?
for kangaroos or giraffes?

~~~
frou_dh
Popo: An Artisanal Handcrafted Curated Simple-yet-Powerful Library (for
Waterfowl). By L'Wren Scott.

------
krick
Uh, Arrow? I'm not a big fan of it, really, as I've seen it break silently
on some edge cases and its API is not totally awesome, but this one doesn't
seem to be any better. Still not sure if I should use this or that or write
one of my own.

------
sebastibe
Hopefully we will have an "API for Humans" for each of the standard library
modules.

------
niftich
_The link to its github was posted on HN when it was brand new, and I left a
comment [6], which I'm reproducing here -- since most of my points still
apply._

(...) take my very early comments with a grain of salt -- they refer to the
progress as of this commit [1].

I love Requests -- its API design is fantastic, and manages to distill down
most of a complex problem domain to a clean, dare-I-say, elegant API. So I can
eagerly anticipate this design applied to datetimes. But the progress being
shown so far is definitely not it.

 _> >> tomorrow = maya.when('tomorrow')_

 _< MayaDT epoch=1481919067.23>_

Why is "tomorrow" a precise-to-centisecond, infinitesimally small point on a
giant cosmic timeline? I'm reasonably sure it's an abstract concept that
describes the calendar day that starts after the current calendar day ends.
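The calendar-day reading can be modeled with the stdlib alone as a half-open
interval; a rough sketch (the `tomorrow_range` helper is hypothetical, not
Maya's API):

```python
from datetime import datetime, time, timedelta, timezone

def tomorrow_range(tz=timezone.utc):
    """'Tomorrow' as the half-open interval [00:00 next day, 00:00 day after)."""
    next_day = datetime.now(tz).date() + timedelta(days=1)
    start = datetime.combine(next_day, time.min, tzinfo=tz)
    return start, start + timedelta(days=1)

start, end = tomorrow_range()
event = start + timedelta(hours=13)
print(start <= event < end)  # True: the event falls on "tomorrow"
```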

At least, Pendulum normalizes tomorrow() and its ilk to represent midnight on
the given day [2], while Delorean's natural language methods [3] like
next_day() advance the day while leaving the time-of-day unchanged, but the
method name makes this fairly clear.

Even Arrow, which is heavily inspired by Moment.js to the point of conflating
every single datetime idea into the same class, opts for mutators that are
still more clear [4].

 _> Timezones fit in here somewhere..._

Yeah, this needs more work.

Java 8 / Threeten, and its predecessor Joda-Time took the approach of clearly
modeling each and every concept that humans actually use to describe time;
even if you take issue with their API, the designers have clearly done their
homework, and their data model is solid.

Formats like TOML wrestled with datetimes and realized [5] that datetimes
aren't just woefully underspecified in most other specs and APIs, but that
they're frequently mis-modeled, so they adopted large portions of Threeten's
data model. Cases like this should merit strong consideration from anyone
trying to propose new datetime APIs today.

[1]
[https://github.com/kennethreitz/maya/commit/ecd0166ba215c1a5...](https://github.com/kennethreitz/maya/commit/ecd0166ba215c1a5..).
[2]
[https://pendulum.eustace.io/docs/#instantiation](https://pendulum.eustace.io/docs/#instantiation)
[3]
[http://delorean.readthedocs.io/en/latest/quickstart.html#nat...](http://delorean.readthedocs.io/en/latest/quickstart.html#nat..).
[4] [http://crsmithdev.com/arrow/#replace-shift](http://crsmithdev.com/arrow/#replace-shift) [5]
[https://news.ycombinator.com/item?id=12364805](https://news.ycombinator.com/item?id=12364805)
[6]
[https://news.ycombinator.com/item?id=13190314#13190657](https://news.ycombinator.com/item?id=13190314#13190657)

------
aivosha
I don't trust any piece of software that has these made-up, unrelated,
non-descriptive names. I mean, come on: Arrow? Maya? The main challenge of
software engineering is correct naming. If you fail there, you pretty much
fail at the rest. The same goes for the actual builtin name, "datetime".
There is no such thing in real life as a datetime. There is a date and there
is a time. They are very separate notions, and the root of the problem IMO
is in trying to pile them together.

------
d0m
Yeah.. datetime is one of the rare parts of the Python API that I really hate
using. It's just badly designed, e.g. simple things are hard and confusing.

------
smilekzs
Slightly off topic:

> Maya never panics, and always carrys a towel.

Nice reference!

------
jchassoul
I don't understand the author's effort in solving this "problem"... I wonder
if he considered at some point making a contribution to a stable, existing
open-source library like Arrow, which even claims inspiration from one of
his projects for its API. What's not "for humans" about the Arrow API? What
are the arguments for starting yet another time library from scratch? Why is
making a contribution not an option? The author of Arrow would probably love
your contributions and would love to hear the arguments for the changes you
propose.

~~~
scrollaway
As rich as it is to tell Kenneth to "consider contributing to existing open
source libraries", I have to agree: what's so exceptional about this library
when Arrow is already a thing?

[http://crsmithdev.com/arrow/](http://crsmithdev.com/arrow/)

------
batbomb
Kenneth: It would be really cool if you could put in support for TAI and MJD.

------
aibottle
Great job, Mr. Reitz. I think I will use this in every project from now on.
Thanks!

------
Walkman
This is a joke. The whole "library" is 200 lines, has 9 tests, and Kenneth
probably wrote it in a couple of hours. It is totally unnecessary because
there are already a gazillion datetime libraries, and still this is on the
front page?

Also:

    
    
        >>> dt = maya.now()
        >>> dt.datetime()
        datetime.datetime(2016, 12, 18, 19, 24, 50, 212663, tzinfo=<UTC>)
    
        >>> dt.datetime('Europe/Budapest')
        datetime.datetime(2016, 12, 18, 20, 24, 50, 212663, tzinfo=<UTC>)
    

I would not use it...
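For comparison, the same conversion done with the stdlib keeps the tzinfo
consistent with the shifted wall time (`zoneinfo` is Python 3.9+; earlier
versions would use `pytz` instead):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

utc_dt = datetime(2016, 12, 18, 19, 24, 50, 212663, tzinfo=timezone.utc)
local = utc_dt.astimezone(ZoneInfo("Europe/Budapest"))
# December is CET, so the offset shifts along with the hour:
print(local.isoformat())  # 2016-12-18T20:24:50.212663+01:00
```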

~~~
tedmiston
Just wanted to remind you that the entire Node community depends on one
11-line module. Lines of code is still a pretty meaningless metric.

~~~
jasoncchild
Idk, pointing to issues like deep dependency within the Node community isn't
the best argument. It's healthy to be skeptical of dependencies imo,
especially in light of issues like the infamous left pad problem.

~~~
tedmiston
My comment wasn't about deep dependencies; it was about parent suggesting 200
lines is too few to be a legitimate library.

~~~
jasoncchild
Ah, I see. Noted!

------
joaoqalves
Yet another library to handle dates in Python. Oh, boy...
[https://xkcd.com/927/](https://xkcd.com/927/)

------
fnord123
Python datetimes used with timezones, even UTC, are ridiculously slow and
bloated. It's puzzling why this doesn't wrap np.datetime64 instead. Or wrap
boost.datetime. There are many good options to claw some performance back so
it's really head-scratch inducing that someone would recognize that stdlib
datetime is a dog and then wrap it instead of scrapping it.

~~~
aangjie
> It's puzzling why this doesn't wrap np.datetime64 instead.

Perhaps to avoid the numpy dependency. I've found numpy a bit heavy and
wouldn't want to depend on it simply for the sake of some datetime syntax.
Maybe boost.datetime or even numpy's implementation of datetime, but that
sounds like a lot of maintenance.

~~~
guitarbill
Agree, 100%. E.g. numpy in a virtual environment can be a real pain. Best
case, it takes a bit longer to install. Worst case, it's on a system that's
missing some OS specific dependencies. "Works for me" is not okay for stuff on
PyPI.

~~~
aangjie
Hmm.. that's making me rethink the 'datascienceutils' project I put up on
PyPI. Maybe I should work on getting it to a stable version before pushing
to PyPI.

