Removing Python 2.x support from Django for version 2.0 (github.com)
720 points by ReticentMonkey on Jan 19, 2017 | 392 comments



The next release, Django 1.11, will be a long-term support release, and the one after that, Django 2.0, will no longer support Python 2.

https://www.djangoproject.com/weblog/2015/jun/25/roadmap/

I've grown to highly respect the Django project for its good documentation, its healthy consideration for backwards compatibility, security, steady improvements and all-round goodness.


Interestingly, I have the exact opposite view on Django.

I hate their API and overall architecture, which I find to be the result of gluing features on top of features for many years. The internal code is just like that too: it looks like every single method is riddled with out-of-band conditionals, which is the result of a community that prefers to hack things into working instead of rethinking/refactoring.


I'd prefer to withhold judgement on the internals of Django; however, as a developer using the API, I think it's a pleasure to use. There is a very high level of consistency, with similar patterns used throughout, and the architecture results in a project structure that I can explain in minutes to a new developer who has never used Django before. I find few frameworks really scale in terms of structure and consistency, and working on a large web application, Django has proven to be very valuable for us.

I'm curious, what's an example of a framework that you think has a better architecture, and in what context are you evaluating this?


I would bet my horse on https://trypyramid.com/ when it comes to API consistency. I've updated my applications from 0.9 to 1.7 and that was a breeze. Over the years it has been an exceptionally great experience.


I always expected Pyramid to offer some performance benefits over Django, given the origins (taking the best of framework X and Y), esp given that you can choose your own ORM (e.g., SQLAlchemy), etc.

However, once you're out of the unrealistic scenarios (single-query benchmarks, etc.), it doesn't do that well[1]. It's not prohibitively slow, but to make the jump from something as well documented and with as large a community as Django, most developers would need to see major gains in one or more areas: performance, docs, community, coding efficiency, etc.

Pyramid doesn't offer these gains. It only makes promises about maintainability, which are hard to verify, unless you personally know somebody that you trust as a skilled developer, and who has worked on a sufficiently large project in Pyramid to make such claims... it's easy to see how this creates a hole that Pyramid has to dig itself out of.

Also, Pyramid still talks about "supporting your decisions", like Jinja2, etc., as if that's a problem people are still dealing with. Django supports Jinja2 even in the admin now, let alone being able to use whatever you want elsewhere. As a side note, I don't see many Python developers wanting to use anything other than Django templates (for simplicity and separation of concerns) or Jinja2 (for speed and flexibility) these days.

If the argument, in response to the aforementioned difficulty in verifying claims about maintainability, is "well, it's up to you; Pyramid stays out of the way", then the immediate response is that you're better off using Flask or Falcon; the former if you need third-party tools, and the latter if you need maximum raw speed, but still want Python. Both of those frameworks will drastically outperform Pyramid and stay out of your way.

IOW, I don't think Pyramid fills any reasonably sized and easily understood market gap.

1. https://www.techempower.com/benchmarks/#section=data-r13&hw=...


Flask was an April Fools' joke from Armin that became a serious thing. The thread-locals hacks and use of globals are pretty horrible. I do quite like Falcon, however, if I want something != Django.

http://lucumr.pocoo.org/2010/4/3/april-1st-post-mortem/

http://mitsuhiko.pocoo.org/flask-pycon-2011.pdf
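
To be concrete about the thread-locals complaint: in Flask, `request` is a context-local global that you import, rather than an argument passed to your view. A minimal sketch (hypothetical route, not from the articles above):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/echo")
    def echo():
        # `request` is never passed in; it's a proxy bound to the current
        # thread/context behind the scenes.
        return request.args.get("msg", "")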


What is your opinion of bottle.py?


I used Pyramid for a couple of large projects back when it was Pylons, and around the time of the merger that resulted in Pyramid.

Django has grown up a lot in the last several years. It used to be really hard to use a different templating language, and I personally really dislike Django's templating language. That was the biggest turn off for me every time I looked into Django. If it had been pluggable earlier, this probably would've changed the calculation for me.

I've always thought of Pyramid as the ideal framework, conceptually. It feels like you're writing Python, not Pyramid, which is GREAT. Pyramid only shows up when you say "OK, I have my Python stuff; I need to make this show up in the web browser now." It is entirely unobtrusive. All frameworks should strive for that.

Unfortunately, the vast majority of frameworks take the opposite approach. Django at least used to be an example of that by forcing DTL (is this the abbreviation for "Django Templating Language"?) down everyone's throat, among other things.

I've also been very impressed by Chris McDonough and the rest of the Pyramid team. They're extremely helpful in IRC, they keep up the great work even after spending years as a small/niche framework, and their code is very clean and well-tested, not obnoxious or overcomplicated. It's a tight, consistent framework that stays out of the way and is just there to serve the developer, not to force him/her to comply. It is so everything a framework should be and so everything most frameworks aren't that I can't help but love it.

I haven't done a major Python-based web project for many years (have one in progress, porting a Rails-based site to a Mezzanine-based site, but work on it only rarely), so I'm sure some of this is outdated.


Thanks for the reply.

What makes Pyramid better for you than e.g., Falcon?

From your comment, it sounds like a lot of the value you've gleaned from Pyramid is based on the framework staying out of your way, so why not use something 5x faster that does the same thing? Does Pyramid provide some unmatched abstractions? (I'm coming from a position of no true experience in Pyramid)


As far as I know Falcon didn't exist last time I seriously looked into Python frameworks, so I don't know what would make it better or worse. From a quick glance, it looks promising and I'll definitely want to evaluate it next time I seriously engage in a custom Python web application, though I don't expect to do so for a long time (current project depends on the also-superb Mezzanine, a Django-based CMS).


Yes, there isn't feature parity between pyramid and falcon.

You have an event system, security abstraction, and all kinds of overloading helpers for views and internal machinery that Falcon doesn't seem to provide, at least from my cursory look. Compare the size of the documentation and the configuration options between the two projects.

Also, looking at this http://klen.github.io/py-frameworks-bench/ the 5x better speed of Falcon seems to be completely made up, or to occur only in some very specific situation. More realistically, it looks like Falcon can be 30% faster in some scenarios not involving storage - but at the price of giving you fewer options.

Which probably means that if you use any kind of storage the difference between those two can be ignored.


Pyramid is actually more of a continuation of repoze.bfg[1], which inherited a little bit from Zope. The overall experience is still very Pylons-like, and the Zope stuff is very transparent, though.

[1]: http://docs.repoze.org/


> However, once you're out of the unrealistic scenarios (single query benchmarks, etc.), it doesn't do that well[1].

I'm curious about the result since it didn't match my experience, so I proceeded to take a look at the source code for both Pyramid's[1] and Django's[2] benchmarks. From the look of it, I feel like there are too many differences in implementation to call this a realistic comparison.

From a quick glance, Django's benchmark seems to contain a little bit of micro-optimization and is rendering the JSON response directly with uJSON[3] (which looks to be a lot faster than the native JSON module)[4], while Pyramid's benchmark does not seem to go through any optimizations and is returning a list of SQLAlchemy objects that get passed into a custom renderer[5]¹ and rendered using the native JSON module.

So I don't think it's entirely fair to say Pyramid is slow in a real world scenario based on this benchmark alone.

[1]: https://github.com/TechEmpower/FrameworkBenchmarks/blob/c8a5...

[2]: https://github.com/TechEmpower/FrameworkBenchmarks/blob/c8a5...

[3]: https://pypi.python.org/pypi/ujson

[4]: http://artem.krylysov.com/blog/2015/09/29/benchmark-python-j...

[5]: https://github.com/TechEmpower/FrameworkBenchmarks/blob/c8a5...

¹ Using pyramid.renderers.JSON here probably makes more sense, but to match Django's implementation, one can wrap a serialized JSON string in a `Response` object.
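
For illustration, a rough sketch of that idea (the rows here are made up and just stand in for the SQLAlchemy results in the real benchmark): serialize with ujson and return a plain Response, mirroring what the Django benchmark does.

    import ujson
    from pyramid.response import Response

    def json_view(request):
        # Stand-in data for whatever SQLAlchemy would return.
        rows = [{"id": i, "randomNumber": i * 7} for i in range(20)]
        return Response(ujson.dumps(rows), content_type="application/json")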


I've been talking to some of the developers on Freenode about their support of old releases. In particular, regarding statements like "Once 1.8 comes out, 1.6 will no longer receive patches". 1.7 came out 9 months ago and 1.8 is in beta. In other words, they don't really have an LTS process.

And the developers seem to think this is totally ok.

In my experience, anything that doesn't have a commitment to at least fixing security issues for ~5 years isn't a good choice for business. Businesses don't want to tie their internal development and release cadence to an external development team with less than 3-year cycles, ideally 5+.

I've seen it time and again over dozens of companies. There is no time to qualify apps for new major/minor releases. Anything that doesn't have a LTS release story is just not something I can feel good about deploying in my business.


> And the developers seem to think this is totally ok.

To be clear, the core maintainership of Pyramid is about 2-3 people. We would love more contributors to the core codebase, but with that level of commitment we're obviously not able to provide a comparable level of support to a project with more contributors. This is the nature of open source, and it is the rare exception to the rule to find packages/projects that have grown enough to be able to offer such a level of support.


Sure, but this is in a thread that starts with "Django has a new release", so I do think it is relevant considering that Django DOES have LTS releases.

(Aside: mmerickel is the developer I was talking to in Freenode)

Honestly, I didn't know that Pyramid had such a small amount of manpower behind it. It presents itself, and has a reputation in the Python community (at least in my experience), as a bigger project.

But, as far as not being able to commit to security releases for old versions: it's obviously a choice in where to spend the available resources, not one of the size of those resources. Choosing new development over providing critical fixes of older releases is a choice, it's just one I need to understand before I commit to using a component. I understand why one would make either of those choices, I just need to know which one has been selected. It is great for people who want to choose forward development focus, I'm just not one of them.


While I'd say that's true as a general rule, an aging system requires a disproportionate amount of time as it ages and as the diversity of the platform's underlying stack changes and evolves. This is an issue that grows exponentially worse over time, and since their team is that small, they need to consider the risk to their user base when choosing the longevity of their product support. It's quite possible they could support normal issues for up to 5 years, but what about the '20-year flood' scenario, or the 'once in a lifetime flood' if you prefer? What if something that level of bad hits near the 5-year mark, when the platform diversity is nearing its maximum?

I believe it's better for them to focus on what they know they can do day in and day out regardless of the circumstances and let companies make the decisions that make sense to them.

In a world where hardware is written off by the business in 3 years, websites age out at 3 years, and car manufacturers are increasingly trying to hit a 3-year model revision mark, it seems to me it's not too much to ask that companies investing hundreds of thousands in an application using Django or Pyramid also invest in maintaining it and keeping it modern - or perhaps their existing business model needs additional revision too.

Just a thought as the least informed member of HN.


But you're thinking about this in the wrong way - Pyramid is a "glue" framework, unlike Django.

- request/response (webob) gets its own updates.

- templates get their own updates.

- SQLAlchemy gets its own updates. Etc.

You WILL get security fixes for most of your application even without upgrading the framework itself (the attack surface of Pylons/Pyramid itself is smaller than a monolith's).

By being on a lower version you don't really miss much - the APIs are very stable. And 99% of the time you can update the Pyramid version itself without risking application breakage (the chance of it is probably a lot lower than with a monolithic framework).

I know of very heavy websites that are still using Pylons (the precursor to Pyramid) in production without issues (8 years now).


... then isn't it not a big deal to commit to security updates?


Sure, as far as I can tell it has always been updated so far - even without a written commitment.


Pyramid is great, but for years they had a ridiculous Iron Maiden-style heavy metal branding [0] which made it hard to sell in the corporate world. I'm glad they evolved on this point.

[0] http://keitheis.github.io/use-pyramid-like-a-pro/?full#Cover


I don't mind the artwork (and actually like it), but I think the real issue is the text:

    Use Pyramid Like a Pro
"Like a pro(fessional)" implies that you, the target audience, are not "professionals". However, if you use it in the corporate world, you do use it as a professional.

This makes it sound like an advanced toy rather than a bullet-proof work of engineering you can rely on. Which is really a pity, because Pyramid _is_ a framework you can rely on.


Not gonna lie, that cover(?)/artwork(?) looks pretty sweet. But I'm probably biased because I really like Iron Maiden.


I agree, but you know what; I think the new branding is worse :(

It's a shame, because I've wanted to try Pyramid since I watched their talk at PyCon 2011, but the branding (including the new 99 Designs looking logo / theme) has always made me think, "This will never really take off."


Not to mention, a few parts / typical components of pyramid are kind of nuts, like auth. Colander is also a struggle to use.

Still my go-to framework though. The websauna framework linked above might be something I have to try.

As I build more and more side projects I find myself wishing pyramid were just a bit more opinionated so I could get the basics up and running a tad faster. At this point I mostly cut up old projects and port them to each next one, heh


One of the off-putting remarks I get is, "what is this, a pyramid scheme?". I don't like the name, they should have stuck with Pylons.


If like me you really liked the slides, this is the framework/engine https://github.com/shower/shower

The theme is one of the included ones, Ribbon.


My problem with Pyramid (although having never used it in production) is the same as my problem with Flask.

They say "Start small, finish big", and push the fact that Pyramid scales from a single file codebase, up to many files. This is something that I just haven't seen in practice.

A Django project requires quite a few files, so it's never a great answer when the total amount of business logic is <100, or maybe even <1000 lines of code. However, a 10k line Django site and a 100k line Django site basically look the same in my experience, and that is hugely valuable.

The biggest red flag from Pyramid for me is their point: "Use Pyramid as a "framework framework" to craft your own special-purpose, domain-specific web system".

In my experience, this means that every project is a special snowflake of a project that works in a very different way to other projects. With Django you have a tried and tested architecture, with Flask and Pyramid, it often seems that you have to create your own architecture in some ways, and that results in each project being really quite different.


Well, for what it's worth, 1.7 ships with new default scaffolds that have better separation between models, views and templates - and it is now a very decent starting point with everything preconfigured for you (certainly much better than starting with Flask from scratch, in my opinion, if you are not really experienced). You do not start with a single-file codebase anymore - if you have a moment, try it again; you might like it.


WebSauna looks promising: https://websauna.org/docs/narrative/background/intro.html

It offers Django feature parity with best-of-breed components.


You'll get Django parity with Pyramid's modern design patterns.

If you have any questions please pop into the chat http://gitter.im/websauna/websauna


Minutes is a bit of an understatement, but given the amount of complexity involved, the API is acceptably consistent.


> instead of rethinking/refactoring.

And breaking backwards compatibility

Django is not RoR. Their users rely on being able to upgrade seamlessly.

The overall API makes sense, there are some rough corners (yes, Sites, I'm talking about you), and the docs are OK once you get the hang of them.


When you say "Django is not RoR", are you saying Django doesn't break backwards compatibility but Ruby on Rails does?

If so, I have to very strongly disagree. It's almost mind-boggling to me how much they break backwards compatibility as a framework. They usually warn users with a deprecation warning in one version and then they make the backwards-incompatible change in the next version, but the sheer number of these kinds of changes makes upgrading an infuriating process. Especially when upgrading a large codebase.


> warn users with a deprecation warning in one version and then they make the backwards incompatible change in the next version

That's exactly how _not_ to break backwards compatibility.


That is, by definition, breaking backwards compatibility. I.e., the code that I wrote against version X will not run against version X + n. They just give a warning about it in version X + (n - 1). Backwards compatibility means that my code will run against X + n for all values of n >= 0 without modifications.


How should you break backwards compatibility then? This is about as good a solution as I can come up with.


> That's exactly how _not_ to break backwards compatibility.

No, no, no. Backwards compatibility means that there are no breaking changes at all - announced by deprecation warnings or not. Think Linux's public API.


Oh, they do break things, but it's a much slower process than RoR's (as you described).

IIRC the most impactful changes I remember were in Config, the replacement of South with a native solution, and some timezone issues.


The upgrade process away from South was pretty shitty.

The documentation said something like: ensure you have applied all previous migrations and then start from scratch. That's not an upgrade path. It may work for standalone web applications, but I have a Python project (packaged as a deb/rpm package and using the packaged Django version of different distributions) that should work with multiple Django versions, and the user may upgrade the Django version at any point.

I had to write raw SQL statements to get the migration status of South (after the upgrade, without a working version of South anymore) and "fake" apply the migrations of the new native solution.
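
For the curious, a rough sketch of that workaround (it assumes South's bookkeeping table, south_migrationhistory, is still in the database after the upgrade):

    from django.db import connection

    def south_applied_migrations(app_label):
        # South itself is gone, but its history table is still there; read it raw.
        with connection.cursor() as cursor:
            cursor.execute(
                "SELECT migration FROM south_migrationhistory WHERE app_name = %s",
                [app_label],
            )
            return [row[0] for row in cursor.fetchall()]

    # If the history shows the schema is already up to date, mark the new
    # native migrations as applied without running them:
    #     python manage.py migrate myapp --fake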


If you were supporting applications which needed to work with or without South at the time of the transition, the best thing to do was probably ship two separate sets of migrations (one set written for South, one written for Django's built-in migration framework).


Yes, that's what I am doing. The problem is, how to preserve the state of the migrations when the user updates the django version.


I've upgraded a large (~700K LOC) Django project across several versions (1.5-1.9). I think Django does a reasonably good job of maintaining a good upgrade path, though they frequently break backwards compatibility, and at a pretty fast rate (now 8-month release cycles, which barely gives the community time to catch up). Some of the changes, like module renames, seem pointless from an end-user perspective, so it feels like the Django core team shovels busywork onto users. I also find their method naming policies annoying, since they don't annotate "unsupported" features in any way (e.g. with an _attribute, and also some _attributes ARE supported). This makes it difficult to figure out whether you're "staying within the lines" of their support policy. I keep a checkout of the Django repo around so I can grep the docs for mentions of a method before I use it.

That said, having started on updating that same codebase to Python 3, the Django team tries hard to provide the KEY thing you need to seamlessly update a codebase: a version which supports both the old and the new incompatible features. Without this, you need to have one giant branch with a bunch of renames and other changes. They actually introduce deprecation warnings, and have something non-deprecated you can use in the same version. I think if Python core had introduced a version of Python including both versions of all the module renames with deprecation warnings, the community would be much further along on the upgrade path.


How else would you introduce backwards incompatible changes? Especially with the LTS upgrade path that alasdairnicol mentioned I wonder what there is still to improve.


If you want to be backwards compatible, then you "simply" do not introduce such changes, there's no "how", you don't.

A platform that works hard to be backwards compatible does only additions and extensions, and keeps maintaining the old API as well so that old apps run without changes; possibly keeping around multiple deprecated ways to do the same thing (e.g. as Win32 does).

It is debatable whether backwards compatibility is worth that cost (and it often isn't), but it's certainly a choice.


Mark it as deprecated, change the documentation to tell the user to use the new function Y instead, and - the most important part - keep the old API even if it's crufty. Wait a few years until everyone has upgraded and no longer uses any deprecated functions. Maybe make a final LTS release for those who don't want to upgrade. Then, and only then, should you ever break backwards compatibility.
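
A minimal sketch of what that looks like in practice (the function names are made up): the old name keeps working and just warns.

    import warnings

    def fetch_items(queryset):      # the new API
        return list(queryset)

    def get_items(queryset):        # the old API, kept around even if crufty
        warnings.warn(
            "get_items() is deprecated; use fetch_items() instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return fetch_items(queryset)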


> Wait a few years until everyone has upgraded and no longer uses any deprecated functions.

Then view HN threads of people saying they have thirty-million-line codebases which utterly rely on deprecated functionality and they never ever plan to upgrade or do even basic maintenance ever for any reason ever, and will abandon your platform and completely rewrite in something that treats them "better".


I worked in a C shop for like 8 or 10 years and we never introduced a backwards-incompatible API release. There were a few changes that required all servers to be restarted (not at once) to change the network protocol (16-bit fields overflowing, that sort of thing). But it was inconceivable that you'd break something that is working. To be sure, we took a couple of weeks before designing an API, and for no one was the API they wrote and shared with others the first API they designed as an adult. Even when the implementation mutated out of all recognition, the API owner would just do whatever re-coding/re-implementation was needed behind the existing API.


> How else would you introduce backwards incompatible changes?

Not at all. That would have the added benefit of core developers thinking twice before introducing a new API - exactly as happens with Linux syscalls.


Breaking changes get a two-major-version release timeline. That's like 18 months (on average) of warning to update your shit. How could you possibly do this better?


If you keep up with releases it usually is quite alright. Switching LTS versions causes a lot of drama indeed. But that is to be expected.


Agreed, 1.4 LTS to 1.8 LTS was tricky, but future LTS-to-LTS upgrades like 1.8 to 1.11 should be easier. From 1.8 onwards, if your code runs without deprecation warnings in one LTS, then it should work on the next.


> there are some rough corners

content_types, generic foreign keys, and basically any other uncommon use pattern for an RDBMS are very rough in Django. It's the tradeoff of their ORM being heavily streamlined for the 90% use cases (typical SQL selects and upserts).
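
For anyone who hasn't hit this: a generic foreign key needs roughly this much bookkeeping per model (a sketch using the documented contenttypes API; Comment is a made-up model):

    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class Comment(models.Model):
        # Three fields just to say "this points at some row of some model".
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        content_object = GenericForeignKey("content_type", "object_id")
        body = models.TextField()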


But you can use another ORM if you want, or no ORM. However, the benefits of using Django come mostly from using their highly integrated components, I admit. In any case, we swapped the ORM for our own implementation of the data layer and we are doing fine. I like the API overall, especially class-based views.


Yes

Might be easier to just write SQL in those cases


Could you give a description of some subsystem that has been architected in this way, and then provide a concrete example of some methods that implement this pattern and why it is bad?

We are users of Django, and it helps us to deliver projects, quickly. Interested to know how you think things could be improved.


I disagree with the parent post; Django's codebase is overall pretty high quality. It definitely used not to be that way, though.

But there are components that fit that description. The entire form subsystem is awful to work with. Working with JavaScript, webpack apps, etc. is a huge pain.

The template syntax is also a failed design experiment, based on the premise that backend coders and template authors are not the same people and should not have the same level of power -- some of this is based on a good idea at its core, but Django ended up backtracking on it half-way, and we're now left with a very inconsistent template system, having to implement kludgy template tags and filters, or new object methods that do not take parameters, whenever we want to do anything remotely complex. At this point I wish it'd just move towards supporting Jinja syntax (that is, alongside the regular syntax, not as a separate engine like is currently possible).
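
For example, something that would be a one-line expression in Jinja2 has to become a registered filter in DTL (a hypothetical sketch; the module path and names are made up):

    # myapp/templatetags/pricing.py
    from django import template

    register = template.Library()

    @register.filter
    def discount(price, percent):
        # Used in a template as {{ product.price|discount:20 }}.
        return price * (100 - percent) / 100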


I think the form abstraction is one of the better versions of it that I've seen in any of the web frameworks I've worked with. It's simple and elegant for most forms, and can even handle complicated ones (multipart forms, file attachments, complex validation scenarios, etc.) reasonably well.

On a really fundamental level, working with complicated forms is just a complicated thing to do, no matter what kind of technology you're doing it with.


Django added Jinja2 as a built-in template backend some versions ago. The Django ecosystem hasn't really caught up to supporting it with reusable templates yet though.


Yeah, I mentioned that. I don't think it can ever really work the way it's implemented right now. I think the only way it can work is slowly transitioning the Django template language to be syntactically similar to Jinja; that will give a progressive transition path. But it's a ton of work.


> Working with Javascript, webpack apps etc is a huge pain.

Could you explain what you mean? Django doesn't need to have any connection at all to your JavaScript stack.


If you want to use the Django staticfiles system (which you do - it's good and useful for non-js stuff as well), it kinda does.

We use Django Webpack Loader for our sites: https://github.com/owais/django-webpack-loader - This allows us to do `{% render_bundle ... %}`, which pulls in the appropriate script tag.

But even then you can really feel how painful it is to work with, especially if you're adding typescript/scss to the mix. Solutions like Django Compressor are not really the right model anymore.

And there are other issues as well. If you want to share settings between Django and the JS stack, for example, you'll need to build your own pipeline for that. Building a form in React? Say goodbye to DRY on the form fields. Just, in general, Django predates JS apps being anything more than in-place enhancements, and it shows.


If you're building a form in React, why do you even bother with Django forms when there's Django REST Framework?


I do use DRF, but it's quite painful to have to set up all the intermingling between the API URLs and the actual form definitions. I don't use Django forms; you will generally want a single source of truth for the forms and it kind of has to live in the JS, but then that makes it inaccessible to Django...

For large apps it makes sense to do it this way. For smaller apps it really sucks.


The template system is sometimes a pain, but sometimes the fact that it pushes you to move complexity out of the HTML and into Python code can make for more maintainable code in the long run. That's at least been my experience over the years.


It's annoying at first, but if you maintain a code base long enough, you'll be glad it's done the way it is.

This also has the side-effect of making the transition to client-side templating easier.


Not sure what it's like now, but the insides of the admin used to be fairly terrible (generating HTML with strings for instance).

There was a patch to change this to use templates, but it was rejected at the time as it slowed things down too much.

FormWizard (which I hear doesn't exist now) was a nightmare to use for any sort of complex form - I tried to do just this and had to call many private methods.

I understand metaclasses, but found the implementation of the ORM a little overcomplex.

Trying to use the ORM API to work out the structure of the DB was a bit of a pain last time I tried (had to call private APIs).


The core Django developers will freely admit (and have admitted) that the admin interface implementation is a bit of a mess. There is an enormous amount of technical debt in it. It is however not a "core" part of Django and there are many alternatives. I think most people would agree that the correct long-term solution for the admin interface is a complete rewrite, however the cost of doing so is too high.

The HTML forms (used in the admin and everywhere else) have until now been generated from strings; however, in the next release, 1.11 (which went to alpha yesterday), this has been changed to use the template tools.


While the admin code is kinda messy it's still easy to extend and customize. I'd be somewhat surprised if you could make something that dynamic and not be at least a bit messy.

I myself, and other people I know, have written quite a few domain-specific applications almost entirely within the Django admin, which are as a result secure, fast and easy to use, and in some cases more than just a bit better than the existing commercial tooling in these domains.


Not sure why you hate it? It is one thing to say Django is bad at X and then specify X, and another thing to say "I hate Django because its internals are shit" and shoot your points off like a machine gun. The second approach is not going to be helpful to anyone.


I honestly think Django was a victim of its own success. It started as a Python (explicit is better than implicit) framework, but with a Ruby on Rails (convention over configuration) mentality. It became too successful too fast, and then it was too late to change things without breaking them. I remember it had all sorts of double-import issues, a very limited (compared to Jinja2) template engine and a very limited (compared to SQLAlchemy) ORM. It also coupled everything with everything, which made it very hard to fix those issues.

So as people grew into their projects, Django got in the way. Most things were hidden in the framework and lazily loaded, so when you had some weird issue you didn't know where to look. I found it hard to digest that I couldn't write a simple script that accesses the database without having to go through the entire autodiscovery process, hence incurring a 2-3 second delay on every execution.

We also stopped upgrading at some point, because by the time Django started supporting our special needs (especially around user handling), we had already hacked around it, and upgrading would have been near impossible. The core developers probably knew all of this and they did more and more "de-magic-ification" with every version, and things are much better now, but we are stuck on an old version.


In my opinion the user-facing layer of Django (where the user is a web developer) is among the best I've ever worked with. I will agree with you that the internals are very messy and hard to read and work with.

The overall architecture is sound though: tight coupling of core components, loose coupling of non-core components. Keep in mind that it's an "opinionated" framework, so they safely assume that if you're using Django you want to be using its core components as a bundle, and will work with them as designed.


I'd say if you hate it, you should propose the changes and how to refactor. There's only so much people can do, and they certainly would benefit from first-time contributors who may have fresh ideas. Otherwise this is just plain complaining and nothing else.


Last time I looked a lot of the insides were terrible.

A project I was really impressed with the internals of is Celery (was expecting the worst having seen Django).


> Last time I looked a lot of the insides were terrible.

But is it really? Speaking from personal experience, it is easy to compare a project with a large featureset (and one with heritage) to one scoped to doing a single thing, and come to the conclusion that the smaller, focused codebase is more consistent and better implemented. At the end of the day what matters is whether those terrible bits actually bite back:

- Is this code changed frequently? Does it need to be changed frequently?

- Is it written in a way that makes fixes and improvements unbearably costly?

- Is it written in a way that allows it to be taken apart? How costly are those individual parts to improve?

Django is a large codebase that is worked on by different people, and when time permits, which means different parts differ in their age, practices and, ultimately, quality. This eventually results in a codebase that may give the appearance of being messy.

Joel Spolsky explains this nicely in his article about old and large codebases appearing as hairy and messy to developers:

https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Especially the part that follows the quote below is valuable wisdom to keep in mind:

> When programmers say that their code is a holy mess (as they always do), there are three kinds of things that are wrong with it.


Celery is an interesting one. The code looks well written, but I found contributing to be a real pain and the documentation a mess, especially compared to Django's docs.

Also, up until recently it was a one-man job by Ask Solem. He's done incredible work keeping this project alive and I am grateful for that. However, I fear others have found it hard to maintain as well, slowing down progress on a very popular backend Python project.


Things were a little bit hairy back in the 1.3 days, but they've done a ton of cleanup in modern times. I find the source much easier to follow these days. Many of the systems have been more clearly isolated to allow for replacement.


When was the last time you looked?


> its healthy consideration for backwards compatibility

Is this a joke? They break backwards compatibility with every minor version, wasting tens of thousands of man-hours all over the world - time that, if we're honest, is not exactly billable.

Most people don't upgrade because of this and keep on using vulnerable versions. It's good that they are trying to kill the project. It saves newbies from stepping into the tar pit.


> They break backwards compatibility

For the most part, they think things through a lot and, most importantly, they document breaking changes. Their approach is an absolute dream compared to, say, updating Xcode/iOS apps. It's a total shit show at Apple.


Ha, if you think that updating iOS apps is bad, try updating all your node modules to the latest version without anything breaking.


It's like Russian Roulette with your code. Maybe something will break, maybe you'll get off easy!


This is why continuous integration is good. If the latest version of a dependency breaks something, you'll know sooner rather than later.


Whilst your menacing tone is not really in tune with HN's guidelines (and you'd be wise to edit it and soften it up I think), I think this is an important point to discuss. I don't have many Django projects, but even I have experienced weird, sometimes intermittent API breakages on minor version bumps that I've had to spend hours debugging, only to find that Django upstream changed the way some function behaved.

I think the project is venerable and I have respect for it in general. It's one of the most accessible major open-source projects for newcomers; like I said, I'm not a heavy user and I've already submitted a couple of patches and had them merged. But issues like this one are worth pointing out.

Is there a deficiency in the test suite? Is the open and accepting nature of the community that I just praised the cause of this issue, because it leads to a bias against thorough vetting? I really don't know enough about the project to know, but it'd be nice if they figured it out.


In my experience, many reasonably large projects suffer the same issue. The "deficiency" is always in the test suite, but that's because a particular code pattern or property wasn't envisioned and isn't properly tested. The test suite is limited by the patterns of the developers, and if you do something that's technically possible but outside of what they are thinking, they could make seemingly innocuous changes that break your code. If you start from the docs every time, following the recommendations, Django is pretty good.

I manage a modest project and there are plenty of issues with corner cases that I knew were possible but never could reproduce. As people stumble upon them and report issues I can update the test suite to avoid stepping on the landmine again, but that doesn't change the fact that we've barely scratched the surface.


I usually work lower down the stack, and, frankly, that level of code quality would not be acceptable. (But what you describe matches my experience writing application-level code.)

Here is a strawman low-level test: randomly generate a sequence of nonsensical (but legal) API calls by randomly generating some data layout, then feed it into a state machine of legal API calls (e.g. CRUD), and check the return values. Once that runs for 10 minutes without crashing, extend the test to be multithreaded and run it with 1000 threads for a few hours. Dial back the runtime to 60 seconds, and stick it in the regression suite.
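
Something like this toy sketch, say, against a hypothetical key-value store with upsert/read/delete methods (the shape is the point: random legal calls checked against a plain dict acting as the reference model):

    import random

    def fuzz_crud(store, rounds=10_000, seed=0):
        rng = random.Random(seed)
        model = {}  # reference model of the expected state
        for _ in range(rounds):
            key = rng.randrange(100)
            op = rng.choice(("create", "update", "read", "delete"))
            if op in ("create", "update"):
                value = rng.randrange(1_000_000)
                store.upsert(key, value)   # hypothetical API, upsert semantics
                model[key] = value
            elif op == "delete":
                # delete() is assumed to report whether the key existed
                assert store.delete(key) == (key in model)
                model.pop(key, None)
            else:  # read
                assert store.read(key) == model.get(key)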

As long as there is a well-defined API, this finds most bugs (and I also write targeted tests to exercise tricky / error-prone paths).

Any thoughts on why it is so much harder to implement reliable high level frameworks? Is it dynamically typed languages / lack of encapsulation, or something more fundamental?


You are finding crash bugs with your test. Most Django regressions are logic bugs. Like some library overrode a method and that method got a new param in Django, breaking the library. Good libraries with good tox suites catch this. Not all libraries are good.


The example you gave (wrong number of parameters in an override) would be caught by any sane statically typed language. (Though if you are distributing updates to .so's you need to explicitly check for ABI compatibility)

I had to search for Tox. It seems like analogous tools would be nice for statically typed "systems" languages too, though there is less need for them there (more bugs are caught at build time, instead of after deploy).

Debian sort of does the same thing when it builds packages, but only checks compatibility with current versions of dependencies. It would be nice if they also checked / tracked compatibility breakage (the "not all libraries are good" observation is language independent, and a "n days since we broke users of this library" label would be great).


I put all sorts of asserts in the state machine logic (this is the 'check the return values' part). When sufficiently clever, such checks can confirm a surprising range of high and low level behavior.


I don't have any projects using Django but they have an LTS model - surely you don't need to update to avoid being vulnerable? Just security patches?


For three years, if you release your project the day the LTS comes out.


How long do you expect?

Java 7 had less than four years of support.

Ubuntu Desktop LTS was three years, and starting with 12.04 it increased to five years.

That's a language runtime and an operating system known for their stability, forming the bottom of very wide and deep ecosystems.

Show me a web framework with a 3 year LTS, and I'm overjoyed.


Symfony has a 3-year LTS plus a year of security support after that. http://symfony.com/doc/current/contributing/community/releas...


Well the new model will have a new LTS every 2 years, with guaranteed compatibility as long as you didn't have any deprecation warnings. So you get 2 years to fix warnings, then a guaranteed working new version. I find it hard to believe anybody is doing anything better in this space.


If you start your project on the day a LTS comes out, you'll theoretically be compatible with the _next_ LTS, 2 years later, so in some ways it's up to 5 years until you need to change your code.


Django releases tend to add useful features/optimizations, and have very light upgrade effort. Most devs will WANT to upgrade to the latest stable.


>its healthy consideration for backwards compatibility

Like screwing the huge majority of users with Python 2.x Django projects and having them update or be left behind?


It was pretty clear for multiple years that Django would eventually migrate to Python 3 only. Even regarding library support, I started going Python 3-only with my Django projects at the beginning of 2015 and had practically zero problems, as only some esoteric packages were not supporting Python 3 (and I wouldn't have used them for client projects anyway, because esoteric).

I still have one customer with a Django/python2 code base. When migrating to Django 1.11 in 2017, that project gets security updates until at least 2020. Plenty of time to gradually transition to py3. I honestly don't see the problem.


You're complaining about not being able to use the latest major version of a library when you refuse to change to the corresponding latest version of the language? So you want to stay cutting edge here but not there?

That's illogical and inconsistent. They're providing an LTS release for you. Get over it.


Well it has been on the roadmap for around 3 years, and they will have support for Python 2 in active maintenance for the next ~3 years, so I think this is actually fantastic support from the core team.


It's going to be more and more of a burden for the developers to maintain compatibility with a 7-year old version of Python.

I think this is smart and will help them focus on the road forward. Django 1.11 is an LTS release, so legacy stuff can stick with it and be just fine.


>It's going to be more and more of a burden for the developers to maintain compatibility with a 7-year old version of Python.

That's the version of Python that most of their users actually use -- with no major plans of mass updating.


Yeah no.

I know that there are domains that are mostly on Python 2, and of course you'll always have legacy / unmaintained things lying around. But "zeee majoritie is Python 2.7!!1111" does not become true by some people chanting it over and over again. The simple fact that frameworks and libraries are moving away from Python 2 already proves that the majority does, in fact, not use Python 2. Otherwise maintainers would also be in an approximate majority to block/veto such changes.


>But "zeee majoritie is Python 2.7!!1111" does not become true by some people chanting it over and over again.

Maybe check your language a little? "zee majoritie", "chanting it over and over", etc. gets tiresome and offensive quickly.

That aside, there are actual numbers for Pypi supporting that. What do you have to counter these?

>The simple fact that frameworks and libraries are moving away from Python 2 already proves that the majority does, in fact, not use Python 2.

It just proves that after 7+ years, some frameworks and libs managed to justify porting over to 3 too. It doesn't say much about which is used more.


> there are actual numbers for Pypi supporting that. What do you have to counter these?

99% of Pypi traffic is composed of mirrors and bots. Python 3 toolchains are more likely to use tools like devpi, wheel, and Docker to cache their packages, while Python 2 toolchains are often going to hit Pypi directly.

We're concerned about which version has the majority of users, not about which has more downloads on Pypi.


Exactly. PyPi numbers are totally meaningless. It's commonly used for installing applications (that's how packages like supervisor end up in the top 20) and used in all sorts of CI scenarios.

Every OpenStack build, for example, pulls in hundreds of packages from PyPi.

In terms of real-world use, all of the Python devs I personally know moved to Python 3.


There are some major products, like Ansible, that are still only compatible with Python 2.7.


Yeah, that really annoyed me about Ansible. You have to bootstrap systems like Ubuntu Xenial, which only ships Python 3, using raw tasks.

As seen with Django, they were able to support both. I've been able to support both with the codebase of some of my projects as well.

Currently I have one project I'd like to be Python 3, but it will involve forking two dependencies (owfs and phidgets) and making them support Py3 (phidgets actually builds an entire Python 3 tree and installs it, and yet you can't import anything from it because it's all Python 2 syntax -_-).

Python 3 has been out for quite some time. I don't see why everyone is still holding out.


https://docs.ansible.com/ansible/python_3_support.html

> Ansible 2.2 features a tech preview of Python 3 support.

So they are working on it.


But the Python developers will abandon v2 sooner or later, so IMHO it's wise to start following the bilingual development guidelines https://wiki.python.org/moin/PortingToPy3k/BilingualQuickRef and eventually upgrade to v3.

What are the alternatives? Forking the language or switching to another language look to have a higher cost.


Guide announced at PyCon last year that Python 2's end of life is extended to 2020.

https://www.python.org/dev/peps/pep-0373/


*Guido

(auto-correct)


The ruby community moved on after their last breaking change in less than 2 years.

The JS community deals with breaking changes every 3 months or so.

From Python 3.0 to the end of support, you have __15 freaking years__, plus warnings, tutorials and excellent tooling at your disposal. Oh, and it's free software, half of it made by volunteers.

Now you made a choice, and there are very good reasons to have made it. We won't criticize it.

But you lost the right to complain.


Ruby also BROKE A LOT LESS, and offered immediate, tangible reasons to upgrade.


The reasons to upgrade are often dependent upon your usage case. To me, the Unicode improvements are reason enough on their own before you even get into the other great stuff (async work, type hinting, stdlib cleanup, pyc rework, etc).

asyncio is going to be a slow build, as the ecosystem starts unifying around it. But as that happens, we'll be much better off for it.


Ruby offered an immediate 25% performance increase. That's a lot less niche than Unicode.


LOL, "niche", says the english native.

Plus, a 25% performance increase, while nice, is not really something that would matter for most Ruby projects. They are web projects, and their bottleneck is not Ruby. Same for Python. Whereas, I'm telling you, good Unicode support in most European countries is a HUGE deal.

But that's not all. Often, people thinking about Python 3 think Unicode, but for me there are two other things that made my life much nicer:

- Better debugging. Error handling is a hell of a lot better, with better error messages, greater granularity, more safety nets... E.g.: you can't compare some objects anymore, you have several exceptions to handle file opening, imports are absolute by default, division is what you expect, a lot more operations are lazy, the stdlib has been cleaned of many redundancies, there are encoding parameters everywhere, etc.

- Less verbosity. Writing is consistently reduced. You get thinner file boilerplate, shortcuts for OO, unpacking generalization, yield from, f-strings, etc.
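
A few of those quality-of-life wins in one tiny snippet (needs Python 3.6+ for the f-string):

    merged = {**{"a": 1}, **{"b": 2}}        # unpacking generalizations
    first, *rest = [1, 2, 3, 4]              # extended unpacking
    label = f"first={first}, rest={rest}"    # f-strings

    def flattened():
        yield from range(3)                  # yield from, no manual loop

    assert 7 / 2 == 3.5                      # division is what you expect
    print(label, merged, list(flattened()))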

Now those are things that are not easy to sell. You don't see them as a reason to migrate when you hear about them. But once you're used to them, going back to Python 2.7 feels so bad.


Unicode is niche?


We'll just have to see. That's the beauty of this. Don't want to upgrade? Stick with Django 1.11, since you don't want or need new things. It's LTS, so you won't miss out on critical fixes.


Django and most libraries were already working seamlessly 2 years ago

Python 2.7 will EOL in 2020. But I guess even then some luddites will complain about how they've been left hanging.


When I started with Python - around 2011ish - Python 2.7 on Windows (with Qt 4) was a reasonable choice. Not much later almost everything I did was on Python 3. No regrets, no looking back, and no missed opportunities.


They aren't screwing anyone; if you want to stay on an outdated version of a language that won't be receiving updates, you can do the same with Django.


Python 2.7 will soon no longer be maintained by the PSF anyway.


The Python 2.x EOL was extended to 2020.


Which, I believe, is also the support timeline for the Django 1.x LTS release before 2.0. So Django plans to support Python 2.x as long as the PSF does.


Yeah, that's 3 years from now... that's very little time - especially if you have to migrate old projects.


The same argument could be made for any arbitrary amount of time though.

1 week is not enough

1 month is not enough

1 year is not enough

5 years is not enough

10 years is not enough

20 years is not enough...

Do you really have Python projects that can't be migrated in 3 years?

At some point the line has to be drawn to move everyone forward. Extending the EOL for the last long-term of 2.7.x by 5 additional years is pretty generous.

https://pythonclock.org/


I think you completely misunderstood my comment ;-) And yes, 3 years for some projects with small teams is barely enough :)


I suspect Python 2.x will be extended like copyright.


I doubt it, and that argument makes no sense. Copyright is extended thanks to the lobbying efforts of huge businesses like Disney in order to make a ton more money.

Unless you see some big corporate support contracts coming over the horizon for PSF, not to mention they'd have to be worth the money vs the technical debt, I don't think Py2 support will get extended.


Backlash from the tons of big companies still using it will become lobbying to keep it supported. I don't expect them to stop in the next couple years, or the next few, etc.

There's less muscle behind keeping Python 2, but there's a lot less muscle needed to keep it going than to extend copyright.


Protip: Python 2 is EOL'd in 2020.

Start your migration now.


This call was made a while back, and it makes perfect sense. Python 2 is slowly being EOL'd and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

Sure legacy projects still need support and for that they get the 1.11 LTS, but otherwise it's really time to move on.


Easy to say when you don't depend on C extensions only compatible with 2.7.


You're calling code that depends on C extensions directly from inside the request-response cycle?

One possible solution would be to offload this work to a Celery worker, which can call Python 2 as necessary.
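
A hedged sketch of that idea (the queue name, script path and broker URL are all made up): a Python 3 Celery worker that shells out to the legacy Python 2 interpreter for the C-extension-bound work.

    import subprocess
    from celery import Celery

    app = Celery("legacy_bridge", broker="redis://localhost:6379/0")

    @app.task
    def run_legacy_job(payload):
        # legacy_job.py wraps the C extension that only builds on Python 2.
        result = subprocess.run(
            ["python2", "legacy_job.py", payload],
            capture_output=True, text=True, check=True,
        )
        return result.stdout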


I code against both Django and Tornado. My Tornado-based HTTP servers call C extensions while they're handling GETs and POSTs. And that's why I'm staying on 2.7.8 until a customer pays me to port to 3.


I'm curious why 2.7.8? What is preventing you from using 2.7.9+?


2.7.8 was the latest 2.x when I built the current codebase. Currently I'm testing before a cloud launch. Fixing bugs in my product's functionality is a higher priority than porting to the latest 2.7


Python 2.7.9

Release Date: 2014-12-10

https://www.python.org/downloads/release/python-279/


2 years is not an unusual time between starting product development and seeing any sort of major customer uptake, and if you're in that time period, your priority really should be getting the customers rather than futzing with code migrations.


It's still the case that a bad call was made when the codebase was started, picking a language version that was already on its way out. It's like starting a new project now with Java 7 or .NET 2.0.


Of all the bad calls you can make when starting a business, picking a bad language or platform is a relatively minor one. Google was originally on Python 1.6 (or technically Java 1.02, if you go back to when it was Larry's dissertation project), Facebook chose PHP (!!), basically every Android app is forced to use Java 7, and a good number of iOS apps are still in Objective-C.


That doesn't seem too surprising. E.g. for Tornado, you have tornaduv, a C replacement for the core ioloop running everything.

Replacing hot spots with C is a tried and true tactic. I don't see how django would substantially change that equation.


It's really not that hard to make a C extension support Python 3; you're either using an extension that is no longer maintained, or its author(s) are stubborn, which is a different kind of problem.

Making code compatible with Python 2.7 onward means that you can't use any new features from 3+ and the code is plain ugly. I don't know if you've written 2/3-compatible code, but it is not as enjoyable to do.

I think Django's approach, where the LTS will still work on 2.7 (and the LTS is for 3 years, which is until Python 2 itself stops being maintained), is fair.

You still have 3 years to fix things, and if you want to use the latest Django and be cutting edge, you probably should use the latest Python as well.
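
To illustrate the ugliness: the kind of shim 2/3-compatible code ends up carrying everywhere (a sketch; many projects just pull in `six` for this):

    import sys

    PY2 = sys.version_info[0] == 2

    if PY2:
        from StringIO import StringIO        # noqa: F401
        text_type = unicode                  # noqa: F821 (Python 2 only)
    else:
        from io import StringIO              # noqa: F401
        text_type = str

    def ensure_text(value):
        # Coerce bytes to text the same way on both interpreters.
        if isinstance(value, bytes):
            return value.decode("utf-8")
        return text_type(value)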


How hard is it to port a C extension? I don't really know the APIs, but is it impossible to transform with a script?


Not hard. You can support 3 and 2 in the same file without much hassle.

Practical example: https://github.com/zopefoundation/BTrees/blob/master/BTrees/... https://github.com/zopefoundation/BTrees/blob/master/BTrees/...

There are a couple #if PY3K, but not much, really.

I ported a bunch of extension modules, total a couple thousand LOC, and it was pretty much a matter of reading the docs (see guide at https://docs.python.org/3/howto/cporting.html ) and adding a few #ifs. Total time maybe an hour or two.


My experience is that it is drastically easier to port C extensions than to port python code itself - the C API hasn't really changed a lot and it's usually very easy to reason about C code due to it being more strongly typed than python.

The only reason why it might be hard is if you are unfamiliar with the extension's code and/or C.


Wouldn't the problem be that you're already relying on unmaintained software?


How many are still at that stage?


>Python 2 is slowly being EOL'd and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

Except that there are still some critical packages that aren't on Python 3 yet. Not to mention a lot of functionality breaks even if the libraries do exist, which means you have to code things up quite differently sometimes.


That argument has grown stale. If the devs of your favorite package are still too lazy to port to Py3, there are bigger issues with the existing code you should be concerned about. 2to3 fixes almost everything other than a few incompatible changes to the stdlib API.

http://py3readiness.org/


What critical packages still need to be ported to Python 3? Might be a fun project if they're open source.


For me, the google api client libraries, and AWS Lambda. That last one isn't totally Django related, but we use it for certain service calls, and it'd be nicer to be able to maintain one version of the language across the django app and related services.

We ended up writing a service in PHP to use Google's APIs, and are mostly using Scala or JS instead of Python for the Lambda services, because this project is basically a ton of unicode mangling, and you can pry Python 3's sane unicode support from my cold dead hands! But the libraries still aren't totally perfect. It just takes one dependency being out of date to screw up your plans.


The Google API Client is at least Python 3.4 compatible:

https://github.com/google/google-api-python-client/

I see only one Py3 bug in the open issues. And I think even their ancillary library Python Flags is now officially Py3 compatible, even though there has been a Py3 fork of it for years.


Nice! Looks like that was just added in December; prior to that they had a statement saying it would probably work on 3.3+ but wasn't tested.

I didn't mean the one literally called google-api-python-client though (forgot they had that, ha). The googleads-python-lib is the critical one for me. PyPI says it supports 3+, but do one search in the repo for "print" and you'll see that's clearly not true.

Looking through the history, seems like they've claimed support for it for a while. We've tried it twice and had issues both times, though I never tried it in Py2, so maybe it just has problems in general.


I agree on Lambda.

There's a "hack" of running Python 3 on AWS Lambda via subprocess until it's officially supported.

http://stackoverflow.com/questions/36143563/using-python-3-w...
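
The gist of it, as a rough sketch (the handler name and the bundled binary path here are made up, not taken from that answer):

    import json
    import subprocess

    def handler(event, context):
        # Hand the event to a python3 binary bundled with the deployment
        # package; whatever worker.py prints is returned as the result.
        out = subprocess.check_output(
            ["./python3/bin/python3", "worker.py", json.dumps(event)])
        return json.loads(out)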


Almost everything critical that's still holding out has a good reason for it. For example Twisted is being ported... slowly.


Much of Twisted already works on Python 3 even though they're not done porting yet!


IDAPython (the python scripting plugin included with IDA Pro) still only supports Python 2.7.


Then contact IDA and ask when they are releasing a Python 3 version of their package. You're a paying customer, after all...


There's a long list here. Feel free to dive in! :) http://fedora.portingdb.xyz/

Included in the list: nodejs, chromium, trac, bugzilla, bazaar


1) Super super SUPER old argument (that's completely wrong and out of date FWIW)

2) Let's see some proof of your argument, because many packages and platforms are happily supporting Python 3 now. Put your money where your mouth is.


Still having issues with AWS-related packages not being fully ported. A lot of our workflows in Pandas broke: it operates quite differently in Python 3 for certain operations, like typing data frames, which messed up some of our ETL. Also, more explicit encoding was needed.


Agree. The argument was semi-relevant but on the way out when I learned to program 6 years ago.

If in half a decade things haven't been ported there are more serious issues.


which critical packages?

https://python3wos.appspot.com/

almost everything on the list has gone green now


> some critical packages that aren't on Python 3 yet

[citation needed]

Pretty much all critical packages now work on 3, as witnessed by the Wall of Superpowers.

Maybe some django-specific lib is still 2-only? In which case, rewrites/forks should be relatively trivial and I'm even happy to have a look myself.


>and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

How about millions of lines of code in your company in Python 2, and several Python 2-based services and websites?

Why on earth would you go to Python 3 at huge rewriting costs? To get some fancy syntactic sugar and improved unicode?


>> and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

> Why on earth would you go to Python 3 at huge rewriting costs?

If you're starting a brand new Django project you're not rewriting anything. If you're rewriting something, it's not really new.

My company is a Django shop and while we have no intention of upgrading Python 2.x codebases to 3, we do start all our new projects on Python 3. It just doesn't make sense not to do it.


Why can't your Python 2 services talk to your Python 3 web app?


Why should I maintain code, servers and libs in 2 versions of a backend programming language?


Then don't. Stick with Python 2 and accept that you're going to stop getting work done for you for free in 3 years' time.


Because one is essentially EOL?


Well, I'm for user-driven EOL, as opposed to top-down, we-know-better EOLing.

And the latter hasn't worked so well so far for Python 3.


If enough people like Python 2 that much, I guess someone else can take over maintaining and developing this language version past the official EOL date.


Like Guido's employer, for example. (see pyston)


You see this argument about Python 3 "not working so well" used a lot. I don't see the maintainers changing their minds about the EOL of Python 2, so either switch to Python 3 or to another language. There is so much toxicity in the Python community because people think they are entitled to have everyone keep supporting Python 2.


Because it is and always has been a sensible middle ground between rewriting everything for every release and never upgrading.


Why do you have to rewrite all the code in your company if you are starting a brand new Django project? Seemingly the brand new means it is separate from currently existing projects.


I'm glad they are making a clean break from Python 2 and I hope this pushes other projects in the ecosystem to fix those remaining libraries without Python 3 support. It does get a bit frustrating when things break between Django releases, but they have a good system of deprecating things for a couple of releases beforehand. And at the end of the day, Django is for people who want to build websites, not life support machines... and I think they're doing a decent job of striking a balance between breakage and stagnation.


I have a Python 2.7 project that has been running smoothly for many years now and I'm having trouble finding a reason to upgrade to Python 3. The project uses the unicode type to represent all strings, and encodes/decodes as necessary (usually to UTF-8) when doing I/O. I haven't really had any of the Unicode handling problems that people seem to complain about in Python 2.
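
(The pattern in question, as a minimal sketch, assuming UTF-8 files:)

    import io

    def read_text(path):
        # decode on the way in: bytes on disk -> unicode
        with io.open(path, encoding="utf-8") as f:
            return f.read()

    def write_text(path, text):
        # encode on the way out: unicode -> UTF-8 bytes on disk
        with io.open(path, "w", encoding="utf-8") as f:
            f.write(text)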

Can someone explain what benefit I would actually gain from upgrading to Python 3 if I'm already "handling Unicode properly" in Python 2? So far it still seems rather minimal at the moment, and the risk of breaking something during the upgrade process (either in my own code or in one of my dependencies) doesn't seem like it's worth the effort.


You gain access to continued language support beyond 2020. New features involving strings will have less risk of bugs. The "range" function is more memory efficient. True division of integers returns a float by default (with // for explicit flooring), reducing bug risk. Dictionaries with guaranteed ordering. Thousands separators in string formatting. Bit length on integers. Combinations with replacement in itertools. New, faster I/O library and faster json. Concurrent futures module. Ability to define a stable ABI for extensions. New CLI option parsing module. Dictionary-based logging configuration. Index and count on ranges. Barrier synchronization for threads. Faster sorting. Async I/O. Support for spawn and forkserver start methods in multiprocessing. Contexts in multiprocessing. Hash collision cost reduced. Significantly faster startup. Type hints. Faster directory traversal. Faster regular expression parsing. Faster I/O for bytes. Faster dumps. Reduced method memory usage through better caching. Dramatically less memory usage by random. Faster string manipulation. Faster property calls. Formatted string literals (f-strings). Asynchronous generators. Asynchronous comprehensions.

How much faster might your code run just by upgrading to Python 3? How much memory might you save?
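
To make a few of those concrete (Python 3.6+ for the f-string; the values are made up):

    name, total = "report", 10000 / 3    # / is true division; use // to floor
    print(f"{name}: {total:,.2f}")       # f-string with thousands separator
    big = range(10**9)                   # lazy range object, not a billion-item list
    print((1023).bit_length())           # 10 -- bit_length() on ints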


> Dictionaries with guaranteed ordering.

I don't think you're supposed to depend on the ordering of dictionaries. It's an implementation detail which might get changed, although it won't actually ever be changed because people will come to depend on it.


I'm specifically referring to OrderedDict in this case, which does have guaranteed, insertion-based ordering. It was introduced in 3.1, circa 2009, via PEP 372.
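
A quick example:

    from collections import OrderedDict   # available in 2.7 and 3.1+

    od = OrderedDict()
    od["b"] = 1
    od["a"] = 2
    print(list(od))   # ['b', 'a'] -- insertion order is preserved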



That's a good point. I didn't realize that it was also introduced into 2.7 at the same time.


> I don't think you're supposed to depend on the ordering of dictionaries. It's an implementation detail which might get changed, although it won't actually ever be changed because people will come to depend on it.

I came here to make the same distinction - though I will say I hope you are wrong about people coming to depend on it, since OrderedDict is still there for a reason. The docs still plainly state that dict should be considered unordered and make no mention of this implementation detail (nor should they).


You forgot functools.lru_cache. I've found in some Python 2.7 projects you can speed things up with a hacky dictionary-cache, but the proper functools.lru_cache gives better results and is far more flexible.
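
For instance, a minimal sketch of swapping a hand-rolled dict cache for lru_cache (the fib function is just an illustration):

    from functools import lru_cache   # 3.2+

    @lru_cache(maxsize=None)
    def fib(n):
        # results are memoized automatically -- no manual dict bookkeeping
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))
    print(fib.cache_info())  # hit/miss/size statistics for free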


Like many database-backed apps performance is currently limited by I/O, and memory is limited based on the size of the data sets I load (machine learning). A faster, lower memory Python does sound nice but I'm not sure how much effect it would actually have in real life.


What py3 does is enforce the things that py2 only suggests as far as unicode goes. That means on py2 you can go for the proper handling, and if you did do everything right, then the py3 migration should be almost boring.

The difference is when you didn't handle all the cases correctly. In that situation py2 will silently do either the right or the wrong thing and you'll never know. Py3 will likely throw an exception and tell you where the issue is.

So if you're handling lots of unicode text and think you've handled all the strings correctly already - that's the reason to move. Now you'll be sure it's correct. The problem with py2 wasn't so much that it handled anything badly - it was that the default, easy way was incorrect and just waiting to blow up in people's faces (or maybe just silently corrupt something).
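
A minimal illustration of that "blow up later" behaviour (the values are made up):

    title = u"caf\xe9"   # text
    suffix = b" menu"    # bytes
    print(title + suffix)
    # Python 2: "works" by silently ASCII-decoding the bytes -- until the bytes
    #           are non-ASCII, when it dies at runtime with UnicodeDecodeError.
    # Python 3: always a TypeError (can't mix str and bytes), so the mix-up
    #           surfaces immediately instead of in production.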


I'm fairly sure I'm handling strings correctly under Python 2. What I'm not sure of is the risk of something breaking if I upgrade to Python 3, especially when it comes to upgrading external dependencies. Even if it's just non-backwards compatible API changes, it's more risk to deal with.


You're fairly certain, but with Python 3 you can be completely certain. That's the magic of it.

Python 2 will sometimes allow this to work; Python 3 will make sure it never works, because there's a bytes/text mismatch. It's nice to rule out entire classes of bugs like this:

    from sys import argv
    import json
    with open(argv[1], 'rb') as f:
        json.loads(f.read())
Porting is still difficult. But when we ported from Py2 to Py3, we found a couple issues like this (despite dealing with weird encodings all the time).


Exactly this. When I ported my stuff to Python 3, I fixed many encoding bugs in the process.

You can do this correctly in Python 2, it's just much harder.


Do you think you will ever have to move to Python 3?


I don't know. I will probably try to by 2020 once it gets officially EOLed. Until then I have plenty of other work to do.


Isn't that what test suites are for?


Tests will tell you something broke; you still have to go through the work of figuring out why and how to fix it.


It has always seemed to me that Python 3 was mainly a fix to the philosophy of handling strings, but that it didn't offer a clear practical advantage for programmers already handling strings with care. I don't think there is a practical reason to upgrade to Python 3 in terms of language design. The reason will be in terms of survival, as the community seems willing to follow the Python 3 movement and official support for Python 2 ends in 2020.


There are a lot of reasons to upgrade! There is so much more useful stuff in python 3!

Even if you think you would not use those features, other libraries you may use might benefit a lot from it.

A few features: async/await, many builtins returning iterators/views instead of lists, no variable leaking from list comprehensions, super().my_method() instead of super(MyClass, self).my_method(), class MyClass: instead of class MyClass(object):, improved exception handling, keyword-only arguments, ", ".join(["etc"]* 1000)

Besides that: a much improved standard library, although that technically is not language design.
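
A quick taste of a few of those (the class and function names are just for illustration):

    class Base:                      # no explicit (object) needed
        def greet(self):
            return "hi"

    class Child(Base):
        def greet(self):
            return super().greet() + "!"   # no super(Child, self) boilerplate

    def connect(host, *, timeout=5):       # keyword-only argument
        return host, timeout

    squares = [x * x for x in range(3)]
    # print(x)  # NameError in Python 3: the comprehension variable doesn't leak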


Of which the improved exception handling is my favourite.

You could do ", ".join(["etc"]* 1000) in python2 - what am I missing?


Ah well, it was just a geeky way to say etc etc etc... no Python 3 stuff intended to be used there. Sorry :) This one then, although it sort of works in Python 2 as well:

", ".join(['ètç']* 1000)


Ha! Sorry about that :-)


Yes, that is my impression as well. In hindsight, it is unfortunate how much it ended up fracturing the Python ecosystem.


If everything is fine with Python 2 on the current project, why bother to upgrade to Django 2.0, which will break compatibility?


That's pretty much true of any Django release... you can stay on it if you so choose. It will get security updates/bug fixes for a bit and then they will stop coming. You're free to stay where you are if you want.

It turns out that most developers have a desire to move to the next version if it's not too hard. There are still COBOL programmers out there too, and that's perfectly fine.

Django has made the process as smooth as it can be. You can upgrade to python3 while maintaining your Django version. Then update to the next Django version as a separate step. It's fine to have waited until now. You can keep waiting if you want but it's getting to the point where you should really just do it. It's not so bad.

We switched and python2 --> python3 was bumpier and more work than most Django updates we've done (we've done pretty much every one since 1.0) but it was still entirely reasonable. We're much happier now.


Right, that's more or less my question. (I'm not actually using Django though.)


My feeling is that you want to be using dependencies that are actively being maintained. These probably already support py3 - or there's a newer / maintained alternative available.

What's annoying is discovering that it's a struggle to upgrade to a newer OS because you're using some old python dependency that has some C component linking to some library that you're going to spend a week getting working (and then have to continue to maintain).

It sounds like you might not have too much trouble upgrading anyway. The strings and missing libs are the places most people get caught out and it sounds like you're already handling the worst of those.

If I were you, I'd try switching to python3 and see what breaks. When I did it, it took about a day to get up and running again on a reasonably complex project (numpy, scipy etc). One of the main things I ran into was places where python3 had swapped lists for iterators/views, etc. (e.g. some_dict.keys()[0] no longer works).
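
e.g. the kind of one-line fix that usually covers it:

    d = {"a": 1, "b": 2}
    # d.keys()[0]               # TypeError in Python 3: dict views aren't lists
    first = next(iter(d))       # works on both 2 and 3
    first_sorted = sorted(d)[0] # or materialise explicitly: list(d.keys())[0]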


Well, there are some new nifty things, e.g. not having to write u in front of every string, a more pleasant subprocess API, or the fact that dicts in 3.6 keep their insertion order, which makes debugging a bit easier. It adds up.
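
The subprocess bit, for instance (3.5+; the command is just an example):

    import subprocess

    result = subprocess.run(["git", "--version"],
                            stdout=subprocess.PIPE, universal_newlines=True)
    print(result.returncode, result.stdout.strip())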

But the big one is that Python 2.7 will go away at some point.


"I have a Python 2.6 project that has been running smoothly, why upgrade to 2.7?"

If you don't need any of the new features and fixes, don't, that's perfectly fine.


The difference between going from 2.6 -> 2.7 and 2.7 -> 3 is enormous though.


Not if you don't need it.


I meant in terms of work needed to upgrade.


> Dictionary-based logging configuration

I believe this already exists: https://docs.python.org/2.7/library/logging.config.html#logg...

