WRT private class members, I can understand why this might be frustrating for library writers. But it's just so damned useful to be able to reach into a library and get functionality the library's writer didn't think I'd need, that I'll personally never consider this to be bad.
Multi-line lambdas would certainly be nice, but most of it can be handled with a scope-level function. It adds, on average, one line of boilerplate. As for syntax, no need to use Ruby's syntax, just incorporate parentheses (a well established continuation construct within Python):
    with(open("foo.txt"), lambda f: (
        print("I am a teapot")
        return "I am a tea pot"
    ))
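For contrast, a minimal sketch of the scope-level-function workaround that works today (`handle` and the `StringIO` stand-in are illustrative, not from the original comment):

```python
import io

# A scope-level function in place of the multi-line lambda: one line of
# boilerplate (the def), but any number of statements.
def handle(f):
    print("I am a teapot")
    return f.read().upper()

# io.StringIO stands in for open("foo.txt") so the sketch is self-contained.
result = handle(io.StringIO("tea"))
```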
As for the lack of a type system, yeah, that is definitely Python's biggest weakness for large programs. A lot of it is resolved with the type hints in Python 3, but a lot of it is just Python itself. Love it or hate it, that's how Python is, was, and will be.
As for the example error - there is a typo. Of course it's going to throw a traceback. The error message is even more explicit than I expected, frankly. It pointed out the missing function name, which would make the typo rather easy to find.
I was ready to accept this as a matter of opinion, but then with this:
> Metaprogramming is the source of exponentially more technical debt (and bugs) than is reasonable, frankly.
Once you've jumped the shark and play with introspection and internals, are you really going to make the point that metaprogramming (or almost anything else, for that matter) should be banned?
As a fan of the concept of the "Catfish" developer (even though I'm a "corporate drone"), absolutely.
Or, if not banned, kept to an absolute minimum. It's like the preprocessor in C. You can do anything and everything in it... but should you?
You can be exceedingly clever with metaprogramming, but clever code is exceedingly hard to read and maintain. Give me boring, explicit code any day of the week.
Boring, explicit code which depends directly on the unspecified, unsupported, implementation details of the libraries you're using...?
So, while occasionally underspecified, they are quite well supported and stable.
I do miss out on the fun of sorting out the requests dependency chain with every release of my code, but I'm mostly OK with that.
It's almost like the qualification is that the reader can't even imagine writing the abstraction and therefore they can ignore it.
By the same token, if someone implements TensorFlow in a different language, there is little room for complaint assuming the same functional abstraction (and minimization of intellectual overhead) is present in that language's implementation.
Third-party resources are where it's at for taking advantage of things that require skills you don't have. I haven't written enough frameworks and abstract libraries to expertly debug, test, and write my own.
Indeed. You're more likely to be able to use and debug it if you wrote it yourself rather than relying on the weirdom of the crowd from SO.
> I haven't written enough frameworks and abstract libraries to expertly debug, test, and write my own.
I suspect you'd do just fine if you tried, assuming you don't try and displace django.
Lisps, C Preprocessor (within certain limits), and so forth are a bit different, since you can expand the macros without having to run the code itself.
Does that indicate a strength of Python or a weakness? A complete value judgement there. In either case, metaprogramming in Python (and many other languages, even those with support for metaprogramming resolution in their IDEs) is still simply harder to understand and debug.
Metaprogramming also isn't necessarily "hard to read and maintain". Take a look at macros in Common Lisp. They read almost the same as a normal function, yet they do metaprogramming.
> Give me boring, explicit code any day of the week.
With macro metaprogramming you can eliminate boilerplate code / unnecessary copy-paste repetition of code, and this does improve maintainability of the code.
At the same time, replacing 10 repetitions of the same big boilerplate block (similar code, written with slight variations each of the 10 times) with 10 very simple one-line calls to a macro improves readability: the differences between those 10 calls become easy to see.
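The same effect is available in Python without macros; a hedged sketch of collapsing repeated accessor boilerplate into a factory (all names here are hypothetical):

```python
# Ten near-identical lookup functions collapse into one factory;
# each call site then shows only what actually differs.
def make_getter(field, default):
    def getter(record):
        return record.get(field, default)
    return getter

get_name = make_getter("name", "unknown")
get_age = make_getter("age", 0)

record = {"name": "Ada"}
print(get_name(record))  # Ada
print(get_age(record))   # 0
```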
There is no such problem with Common Lisp macros.
C programmers gluing together symbols from pieces so that MACRO(PREFIX) expands to some PREFIX_var or whatever are often not complete idiots; they are desperately doing whatever they can to simplify what they are doing in the best way that is supported by the portable language.
I'm not trying to dunk on your editor, but people have had this problem solved since the early 90s. If your programmer's editor doesn't have some affordances for your language, then it's not a programmer's editor at all, it's a text editor.
Most modern editors are able to horizontally scroll and thus handle long lines in source code. That doesn't make a 500-character-long line of code acceptable by any stretch.
Either you want your editor aware of your language's semantics or you don't. I rather do, because it saves a ton of time and basic integrations are usually quite straightforward in 2017.
> Or perhaps better documentation on your program's part and/or better education/familiarization on my part is warranted.
Barring my opinions on this, I don't get why this would exclude editor semantics.
> Most modern editors are able to horizontally scroll and thus handle long lines in source code. That doesn't make a 500-character-long line of code acceptable by any stretch.
Okay, but this is a false equivalence. You're telling me not to use metaprogramming tools because they make it hard for a specific editor feature to be used in conjunction with them in the most difficult case for any system (introducing new run-time-dependent lexical bindings into a scoped block and handling that at compile time).
But unlike code visibility, in most cases good solutions exist to work with these systems (within reason).
In many cases with common macros I'd add decls for Cursive Clojure + IDEA to let me autocomplete introduced bindings. For example, I helped my editor understand our unique structure, e.g.,
(defauthedendpoint get-frobnaz [request user]
I guess that without the right environment ifdef stuff falls out, but I dunno what to tell you there other than making builds contingent on the environment is a sketchy practice.
Seems, um, like a pretty reasonable idea to me if you care about C-style macros (I sure don't.)
That said, my (admittedly few) days spent debugging bad macros were pretty damned bad. Having to inspect the results of the macro expansion and figure out what went wrong is not my idea of a good time. And that doesn't even account for reader macros.
It takes extra time and effort to make macros that are suitable for broader use (people other than the macro writer).
But ultimately, bugs in software happen, and bugs in macros can be a pain in the butt to debug. The extra steps to go in and expand the macro, and then translate the fix in that expansion into a fix for the macro itself gets complicated, fast.
It's no different from any other "input -> process -> output" computing situation where the output is wrong, the right output is obvious, and you have to work back into the process to make that output come out.
You shouldn't be fixing macro output. If the macro is wrong, the first step is to find the simplest macro invocation which reproduces the problem. Use a dummy argument for anything irrelevant, and use an atom for any argument that allows one. For any argument position which accepts a list of items, see if the problem reproduces with an empty list, or a list of just one. If you can see the problem in (mymac (a b) c d), you have it made. Hey look, b is being wrapped in an extra list, and d should be quoted.
Hacking something with private library parts doesn't involve any more "introspection" and "internals" than reading the library's code and finding useful loot to steal; metaprogramming is a completely different and orthogonal way to compromise code maintainability.
"Banned", no. Just that's not simply this big trove of goodies that it's often cracked up to be. And in particular, it isn't strictly speaking necessary or essential for a language to be successful -- or even among the top 5 considerations that make a language successful or not.
I'll second that. I'll share my own experience to give some perspective. Older versions of the JDK had an LDAP library that for some reason I can't understand (actually I'm pretty sure it was a mistake, because it was changed in a subsequent version) specified a File as the type instead of a Stream for its configuration file. There was no way for me to modify the configuration during run time without restarting the JVM. This wasn't acceptable for the software we were writing because it supported multi-tenancy.
What did I end up doing? I used introspection to grab the "private" configuration table (Vector or Hashmap?) and wrote an API around it to allow modification on the fly.
Lesson here is that "private" things aren't really so private and Python style discretionary "private" variables would have required a lot less work.
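In Python, the discretionary version of the same trick is a one-liner; a sketch using name mangling (the Config class is hypothetical):

```python
class Config:
    def __init__(self):
        self.__table = {"host": "localhost"}  # name-mangled, "private" by convention only

cfg = Config()
# Discretionary privacy: the mangled name is still reachable when you need it.
cfg._Config__table["host"] = "example.com"
print(cfg._Config__table)
```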
    class PricePredictionCalculator:
        def __init__(self, price_prediction_calculator_settings: 'PricePredictionCalculatorSettings'):
            self.x = price_prediction_calculator_settings.x
            self.y = price_prediction_calculator_settings.y

        def calculate_something(self) -> int:
            return self.x + self.y

    class PricePredictionCalculatorSettings:
        def __init__(self, x: int, y: int):
            self.x = x
            self.y = y

versus the shorter name:

    class Settings:
        def __init__(self, x: int, y: int):
            self.x = x
            self.y = y

    class Calculator:
        def __init__(self, settings: Settings):
            self.x = settings.x
            self.y = settings.y
I'm asking because I'm usually torn between short names that assume the reader can understand them and long overly descriptive names that (in theory) require less implicit understanding.
Is the name price_prediction_calculator_settings too long? If we call it settings I can see an issue come up if we need to pass in another type of settings object calculator_format_settings or something like that.
The other general issue is why have the redundancy? There's only one settings object here. If you need to pass something different in later, refactor! Remember the principle of You Ain't Gonna Need It.
There's no IDE support for "plain" dictionaries, they resulted in code duplication and broken encapsulation since you can't add any logic to a dictionary, and they put unnecessary cognitive load on the developer by forcing them to keep track of which fields go in which dictionary.
I saw it as a form of design-by-contract and self-documenting code. The app I worked on had a ton of these mappings being passed around between different subsystems and it was becoming nightmarish to deal with. With this style, if you were on Team A and needed to plug your work into Team B's FooBar class, all you had to do was look at the FooBarInputData class to see exactly what the FooBar class needs.
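A minimal sketch of that style, using the FooBarInputData name from the comment (the fields are hypothetical):

```python
class FooBarInputData:
    """Everything FooBar needs from its callers, spelled out in one place."""
    def __init__(self, user_id: int, query: str, limit: int = 10):
        self.user_id = user_id
        self.query = query
        self.limit = limit

# Team A can read the full contract at a glance instead of chasing dict keys.
data = FooBarInputData(user_id=42, query="frobnicate")
```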
The name ``settings`` is too generic. There's probably a better way, but to demonstrate that we'd need a real example, not a hypothetical one.
At this point, The Zen of Python has been absorbed into many late-gen programming languages; it's easy to see a heavy Pythonic influence on Swift and Go, for example, and some languages, like Nim, even draw the parallel as a promotional thing.
Sadly, the technical underpinnings of the Python runtime itself have not kept up, and it leaves people asking why they shouldn't just enjoy the same advantages they'd get with Python through a newer language with a modern implementation, providing better performance and package/dependency resolution. As much as it pains me to say it, even cutting-edge ECMAScript code can be made to look pretty Pythonic these days.
Python will always hold a special place in my heart, but I'm not sure that a Python implementation is the obvious choice for a dynamic application anymore.
Anyone who says his behavior hasn't been troublesome in #scala at times over the last 5 years isn't paying attention.
I still remember my first run in with him where he demanded I take a "test" to let him "grade" me. In the middle of a larger conversation about a sbt bug.
I can only really think of one instance where I've used this, and it was to work around an outright bug in the library (which was unmaintained, and I was using out of a certain amount of desperation).
FWIW, most of the complaints around private in my post would actually be solved just by having a separate namespace for "private," even if there was a back door. I'd be pretty willing to forgive that.
> Multi-line lambdas would certainly be nice, but most of it can be handled with a scope-level function.
In the absence of the with statement can you really imagine yourself writing this all the time:
print("I am a teapot")
Strikes me as pretty ugly, despite not actually adding much code.
> As for syntax, no need to use Ruby's syntax, just incorporate parenthesis (a well established continuation construct within Python):
Yeah, that's a bit nicer.
> As for the example error - there is a typo. Of course it's going to throw a traceback. The error message is even more explicit than I expected, frankly. It pointed out the missing function name, which would make the typo rather easy to find.
Yeah, that was kinda my point; I was comparing this to what happens if you try to "take duck typing to heart", where you don't get the error until you try to call request. The earlier you find out about mistakes the better.
Honestly, no. But not because of any measures of beauty, but because I just don't do callbacks all that often.
    f = open("foo.txt")
    print("I am a teapot")
    map(lambda x: print(x))  # whoops, impossible
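If the "impossible" refers to Python 2's print statement, it's worth noting that in Python 3 print is a function, so that particular lambda is legal; the wall you still hit is a second statement. A quick sketch:

```python
xs = [1, 2, 3]
doubled = list(map(lambda x: x * 2, xs))  # a single expression is fine

# But there is no spelling for a second *statement* in the body:
# lambda x: (y = x * 2; print(y))   # SyntaxError
```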
I don't want this though. This doesn't do what multi-line lambdas do. It can never do what lambdas do. It's not a lambda function if it's named and fully pre-computed.
> Adding multi line lambdas would make the ultra simple structure harder to quickly parse.
This is just about the least compelling argument I've ever heard. "The parser writers are not going to have fun."
>>> def splatter(t):
...     def f(): print(t)
...     return f
Still, the existence of this sorta demands we ask, "So why can't we use it then?"
But your complaint isn't without merit here, a profusion of unique nameless verbs can get overbearing sometimes. It's often better than the alternatives (e.g., dependency injection vs lambdas in an environment) but Haskell, OCaml, and Lisps all seek to have a small but very robust set of combining and base primitives to help make this kind of code avoid a complexity explosion similar to OO ontology explosions.
If I can simulate anonymous functions with locally scoped functions that then behave as lambdas, then what hill is Python dying on here? The "we don't want to submit a patch to make our parser do this" hill?
It's definitely possible to do. So...
It's an extremely contrived example, but from the code I posted, it should just be:
for x in xs:
Lambdas are everywhere BUT Python now.
If Python 3 could just get over this conceit, its maintainers could start solving major Python problems really quickly. E.g., Python's janky iteration primitives could be replaced in an afternoon or three with equivalent but cleaner and clearer options.
Or, of course, admit that whether or not a particular programming language conforms to your personal preferences has relatively little bearing on the success of that programming language in the world at large.
Also, preach on duck typing; it's a Python myth that is absurd. "If it quacks like a duck" just means I had to figure out what it was myself.
This language is slowly destroying my soul, one painful unit test at a time.
For example, I used a decorator the other day that I called `@logperf` that I can pin to any function, which will `logger.info` the approximate time it takes to run the whole function. It doesn't mutate what the function does, it just adds a side effect.
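A hedged sketch of what such a decorator could look like (not the commenter's actual code):

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)

def logperf(func):
    """Log the approximate wall-clock time of each call as a side effect."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            logger.info("%s took %.4fs", func.__name__, time.perf_counter() - start)
    return wrapper

@logperf
def add(a, b):
    return a + b
```

The decorated function's return value and name (via functools.wraps) are untouched; only the logging side effect is added.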
    def make_money(*args, **kwargs):
        return with_jazz_hands(make_money_impl, *args, **kwargs)
Please, please learn about what higher order functions are in a more general setting before invoking them.
This "please, please learn" bullshit comes off as so smug and patronizing, did you realize? And when you're misunderstanding something it's even worse.
Even languages with labeled effects allow for this sort of construct. If they can properly sequence and isolate it, they allow mutation as well!
If you dislike call-by-name, why is Python's practice of insisting all non-trivial functions be called by name not distasteful to you?
No, I was trying to isolate the important part. "@f def a: return b" would be the Python syntax, and I think it's misleading as sugar for, hmm, you'd have to write it as "a = f(lambda: b)" (hmm, I'd forgotten how different defining a function "natively" or as a lambda value are in Python).
You seem to be using a different definition of call-by-name to the one I'm used to. What bothers me is that a decorator can make a function behave completely differently (e.g. its body might never even be executed) but the syntax doesn't look like it can make that big a change to the function.
I guess in Python land, blueprint's entire purpose is to selectively fire your handlers.
You can take any code with decorators and replace them with this syntax. If you have arguments you must call the factory first, that's the only exception.
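A sketch of that factory-first rule (`repeat` is a hypothetical example): a decorator with arguments is really a function that returns a decorator.

```python
import functools

def repeat(times):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def greet():
    return "hi"

# @repeat(times=3) is sugar for: greet = repeat(times=3)(greet)
```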
So we're two! I'm trying out Rust for a side project now and so far it's fun. Lots of compile-time checks, non-crippled lambdas, and you can use itertools too!
Literally, please do not do this shit. Your function should be modular and usable with or without being decorated.
At the end of the day, Python is just an abstraction. It isn't destroying your soul, or doing anything else to it.
OTOH, your conscious decision to stick it out in a job that makes you work with a language that you find "absurd", and where people do things you "absolutely loathe" but have to put up with, anyway? That's what's destroying your soul, man.
Maybe you meant to say TMP?
TMP, on the other hand, is an entirely separate language within C++ that allows you to create monstrous, arcane code that's incredibly hard to debug or understand. Some of the STL uses TMP. But it sounds like custom TMP is the thing you don't like.
I don't quite understand the author's complaint about this specific case of decorators. An initial read makes me wonder why he doesn't use dynamic routes, which is where the decorator / function-name pattern really starts to shine.
Preach it. Also, you don't have to worry that a method whose signature is `public int DoWork(string input)` actually expects char as its input and returns a long, the way you do when the types are only documented through comments.
Having a required type system for this huge part of the community would be a great let down.
Those criticisms come from people coming from a strong dev background. They completely ignore the rest of the world.
That's why having the current OPTIONAL type system is so good. Because you can use it and benefit from it, and many do, but you don't screw half of your community.
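A sketch of that optionality: hints document intent and enable static checking (e.g. with a tool like mypy), but CPython ignores them at runtime, so untyped callers keep working.

```python
from typing import List

def total(prices: List[float], tax: float = 0.0) -> float:
    # The annotations cost nothing at runtime; an unannotated caller
    # can still pass whatever it likes.
    return sum(prices) * (1.0 + tax)

print(total([1.0, 2.0]))        # 3.0
print(total([10.0], tax=0.5))   # 15.0
```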
Python's strength is versatility. It's not the best at anything. But it's very good at most things.
That's why if you know it, you can solve most problems decently. That's what makes it fantastic.
I'm still waiting for people doing data analysis in Go, sysadmin in JS, 3D scripting in Haskell, batch testing in Erlang or geography in Lisp.
Because Python... well it can. And AI. And Web. And embedded systems. And IoT. And... And...
The question is more about "what kind of problem do you want to have and what price are you willing to pay to solve them ?". After that, choose the tech that matches this wish.
For me the iteration speed, the ease of adding new members and the flexibility are key.
But what I discovered is that most people just code bad Python and say they have problems, just like most people write bad C/java/whatever and blame the language.
You are supposed to write Python in a certain way, like any language, to scale.
For small, local needs to pass around data, a plain dict or tuple usually suffices. If you need stricter contracts, you define good classes and interfaces.
There's a need for discipline in your coding style, but I've frankly not found this to be an issue with moderately competent coders who understand this. Yes, you can shoot yourself in the foot with this, but if anything those bugs are usually obvious and solved in the first pass of testing.
Yeah, but they're often solved after the code goes into production and doesn't work, rather than before the code is ever committed.
Nowadays, Ruby is dying in its RoR and Japan niches, so the question is moot.
> Having a required type system for this huge part of the community would be a great let down.
... Why? These days types generate much more code than they cost in modern FP languages. E.g., https://github.com/haskell-servant/example-servant-minimal/b...
> That's why if you know it, you can solve most problems decently. That's what makes it fantastic.
Name a general domain for which I can't find at least one well-maintained project on NPM. I dare you.
> And embedded systems.
Who.. who is doing IoT and Esys in Python outside of toymakers?
one is named
> Maybe, not sure how that is relevant.
Man, it was your dare!
Please tell me what pieces you need and I'll do my best to make good on my claim. At a minimum, tensorflow, nltk and spark bindings exist. And in fact, the popular notebook software packages are reaching out to other runtimes already.
Aren't most IoT things basically toys anyway?
That NPM has churn is an example of how massive that ecosystem is compared to PyPI, which is comparatively microscopic and is much more dependent on specific corporate actors to continue investing.
I'm still amazed that y'all put up with Python's trashy numeric tower, coming from other contexts.
They have 30 scripts, 200 lines long each, and couldn't care less about code quality. They just want the result.
Sorry to be pedantic, but you probably mean static type systems, not strong type systems.
See: "What to know before debating type systems" - http://blogs.perl.org/users/ovid/2010/08/what-to-know-before...
Instead of the compiler telling me exactly what something is, what I can do with it, and if it makes sense to pass it to some function, I have to figure all of that out myself, greatly slowing me down.
Edit: It occurs to me I have yet to hear an answer to my question of how types hinder anything, especially if they are inferred. The only exception is the trivial requirement of sometimes having to cast between numeric types.
However, it isn't free. Type theory is a kind of math that most people have very little exposure to, so there's going to be a lot of work in order to start becoming proficient.
Additionally, there's more than one type of type theory. Are you using a system F like system? Are you going to have sub-typing? Is it going to be structural sub-typing? Maybe you want row polymorphism. Is there going to be a kind system? What about higher order poly kinds? Dependent typing? Intensional or extensional?
Additionally, there's more than one type of implementation of these type systems. Ocaml functors ... is it a generative or applicative functor? Haskell ... are you using gadts, functional dependencies, or type families?
In the end I think that type systems will eventually be able to get you a completely typed experience that feels exactly like a completely dynamic experience, but with compile and design time checks that always make you feel good about the experience. However, I don't think we are quite there yet and I don't think you can expect everyone to be able to take the time to get sufficiently familiar with an arbitrary type system in order to be productive with it.
Do you realize how insane that sounds? Static typing makes programs harder to debug? Harder to maintain??
On the contrary, static typing helps debugging and maintenance: changes that break invariants are more likely to be caught by the type system.
This speak of tradeoff sounds wise on the surface, but this is hardly a tradeoff at all. For many people (including me), a good static type system makes prototyping and maintenance easier.
> Yeah, you [DarkKomunalec] basically [use static typing to] make initial implementation easier at the expense of maintenance and debugging
It was not clear that "Yeah" was an approval (and not a dismissal), and it was not obvious that "you" was a general "you" (and not a personal "you" directed at DarkKomunalec).
Nevertheless, you were still talking about a tradeoff, and I personally see none: in my experience, dynamic typing makes initial implementations harder (or longer), because so many errors are caught too late. Static type systems have a much tighter feedback loop.
A normal person doesn't care. He just wants, and expects, 2+2.5 to yield 4.5. He doesn't want to use a cast, or write the 2 as 2.0, or use some sort of baroque type conversion procedure, or anything like that.
This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.
So does anyone who's done math. They also know 3/5ths is different. It's not unreasonable to ask for addition to be defined in a reasonable way though.
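For what it's worth, Python's numeric tower already gives the "normal person" answer without casts; a quick check:

```python
from fractions import Fraction

assert 2 + 2.5 == 4.5                 # int silently promoted to float
assert isinstance(2 + 2.5, float)
assert 3 / 5 == 0.6                   # true division always yields float
assert 3 // 5 == 0                    # floor division stays integral
assert Fraction(3, 5) + Fraction(1, 5) == Fraction(4, 5)  # exact rationals
```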
> This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.
Besides OCaml, who actually does this for general programming? I can't think of many examples at all.
P.S., "A normal person doesn't care. He just wants". Stop this. The community here might give you a pass for being tedious and correct. Being tedious and incorrect is pretty much unforgivable.
Hmmm... I could have had an undergrad math degree in addition to the CS degree if I'd stuck around one more semester, but decided to head off for CS grad school instead. And yeah, I understand cardinality, and why there are more real numbers than integers (and could even write the proof for you from memory).
I also completely understand that 2.5 in a computer isn't actually represented as a non-integral real number, or anything like it. The computers we have now can't represent arbitrary real numbers (quantum computers can, I think, but I haven't studied those to any great degree). At one time I even wrote some non-trivial asm programs that ran on the 80x86 FPU, but I'd have to do a fair amount of review before doing that again.
So yeah, I'd say I've both "done some math" and have a good handle on how integers and floats are represented in a computer.
That still doesn't mean I want to have to put in a freakin' type cast when I add 2 and 2.5 on a pocket calculator. Nor does anyone else.
Or is this about Pascal again? Did OCaml bite you and you still have a mark? I'm trying to give you an opportunity to suggest this isn't a straw man. My most charitable hypothesis is that you really don't know much about modern static and strong typing techniques.
Everyone's numeric tower accounts for this and does sensible (if not optimal) type conversions. The best of the bunch give you fine grained control on what happens when. That something must happen is inescapable.
"You can only push off the complexity for so long if you want to do things that aren't trivial."
There are a lot of things that aren't "trivial" that nonetheless don't require a totalitarian type system.
Having your share of holiday costs come out as NaN is fiddlier than getting an exception at the point where you actually divided by zero.
The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one. Excellent way to get your milkshake drunk, that.
Sure, the point is that using integer arithmetic for integer calculations gets you better error reporting that saves you time when tracking down other bugs.
> The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one.
My experience is that I can iterate a lot faster if the compiler's able to help me catch errors faster. It doesn't slow down writing the code; almost anything I'd write in say Python translates directly into the same Scala. (I guess the language pushes me into defining my data structures a little more explicitly, but that's something I'd want to do in Python anyway if only for documentation reasons).
You are living in your bubble. The bubble of people who know what they are doing.
Get out; you'll be surprised how amateurish the world is.
Yet it runs.
The common response is, "But then I have to learn something new." But this is the curse of technology, and pretty much inevitable. Some learning needs to occur because tech changes so fast.
But you don't, necessarily. Dealing with type wankery takes time. And no, it has nothing to do with "learning something new". Languages that tout "type safety" have been around since at least Pascal (47 years old)... arguably even before that, with some of the Algol variants.
Yet they've never made it to mainstream acceptance. It's not even about hype -- Pascal was pushed HARD, by just about every university. Where is it now? Turbo Pascal had a small degree of success, but that's only because it chucked a lot of real Pascal's rigidity out the window.
If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming. TypeScript is the fastest-growing language, Haskell's never been more boring to use, Scala's edging out Clojure even though it has very fragmented community leadership. C++ has adopted a lot of powerful new features and you're seeing functional programmers speaking at C++ conferences because the folks there can use it. Java has more capable and composable lambdas than Python.
Systems not using these techniques are plagued by security flaws, while those that are work on stabilizing performance under various workloads.
It's never been a better time to be using a strong, static type system.
I would make it "Pascal was never popular", but yes.
"If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming."
This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.
As for the supposed booming popularity of TypeScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.
Then why... why bring it up? Should I discount all of dynamic typing because Io and Pike never took off? C++ did stick around, Oak became Java. APL is still in active use.
> This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.
"It never works" is a pretty bold claim given that the majority of code you interact with on a daily basis has SOME sort of type system. I'd say C++ is better endowed than most in this dimension.
> As for the supposed booming popularity of TypesScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.
My dude, it would be extremely suspicious if it did. Instead, look at the growth stats: https://octoverse.github.com/. It's the fastest-growing language with a non-trivial presence on GitHub (and that is the correct way to phrase it: a brand-new language can appear to have quadruple-digit growth percentages).
This seems profoundly disingenuous. Is that your intent?
As for "the case", Java does reduce the # of NPEs you get by not letting you deref invalid keys on objects, and it makes it easier to handle objects in the abstract.
But moving on to your claim in this post: nobody ever said "compile-time checks eliminate errors altogether." What they do do is reduce errors and completely eliminate certain classes of errors. They also make maintenance and debugging much easier, because types define clear expectations for a function's arguments and return values. The length of stack traces is a completely orthogonal concern.
It's pretty rare to have measurements that are accurate to more than a few decimal places.
Types are great for large projects, but tend to add verbosity and development time for small scripts (which is why there are so few strongly typed scripting languages). SML/OCaml show that there is a nice middle ground where most types are inferred, so you keep your types without too much work. Unfortunately, they've been around for decades with little industry adoption.
My first programming job was writing map reports (choropleth maps) using AutoLisp.
In fact, Lisp can be much simpler than Python; it's just like using an RPN calculator if you don't venture into macros and other advanced stuff.
Worse, I didn't have the docs -- just the header file. That one cost me a fair amount of head scratching before I figured it out.
More info on this in the pep: https://www.python.org/dev/peps/pep-0484/
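To make the PEP 484 discussion concrete, here is a minimal sketch of what those type hints look like. The function name is made up for illustration; the point is that annotations are ignored at runtime and only checked by external tooling such as mypy.

```python
from typing import List

def mean(values: List[float]) -> float:
    # Annotations are not enforced at runtime; they only inform tooling.
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))   # runs fine
# mean("hello")                # a checker like mypy flags this before runtime
```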
I've written Python for several years, and as a result the way I reason about problems has been heavily influenced by it.
I'm fluent in Ruby as well, but it takes mental effort for me to conform to its "flow", for lack of a better term. My time in Ruby has made me a better Python programmer, too - I understand more fully the idea of a DSL and how it should work. Ruby is great for writing DSLs and writing concise code that's guided by the language of the domain in which you're working.
I'm competent in Clojure, which if nothing else has given me a distinct distaste for impure functions - if a Python function takes an array, it should not modify that array in place unless it's very clear from the name that it's going to do so, and a function should not both modify its input and return it.
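A small sketch of the convention described above, with made-up helper names: a function either returns a new list (pure) or advertises in its name that it mutates, and never does both.

```python
def sorted_copy(items):
    """Pure: leaves the caller's list untouched."""
    return sorted(items)

def sort_in_place(items):
    """Mutates its argument; the name says so, and it returns None."""
    items.sort()

data = [3, 1, 2]
result = sorted_copy(data)
assert data == [3, 1, 2]      # original untouched
sort_in_place(data)
assert data == [1, 2, 3]      # explicitly mutated
```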
At the end of the day, Python is still the language that I reach for whenever I'm writing pretty much any personal project. It's clean, easy to read, and the overall feel of the language guides me to write maintainable code.
For instance, `1 + "hello"` will throw a `TypeError` at runtime because the `+` operator is not supported for use with `str` and `int` types. In a statically typed language, the code would not compile or run unless a `+` operator has been defined that takes a `str` type as its first argument and an `int` type as its second argument.
I guess you could say that a `TypeError` at runtime isn't really "type checking" in a sense. I guess it would be more like "type enforcement".
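A small demonstration of that "enforcement" behavior: nothing is checked when the code is defined or imported; the error surfaces only when the bad expression actually executes.

```python
# This function defines and imports without complaint; the failure
# only appears when it is called.
def teapot():
    return 1 + "hello"   # raises TypeError, but only at call time

try:
    teapot()
except TypeError as exc:
    print(exc)   # e.g. unsupported operand type(s) for +: 'int' and 'str'
```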
No, you're referring to type-related errors at runtime. That's not checking. Nothing is checked. Code breaks and may recover, but it has no idea what the types of the arguments to the offending expression were, only that it didn't work with that bit of code.
This is not type checking.
> I guess you could say that a `TypeError` at runtime isn't really "type checking" in a sense. I guess it would be more like "type enforcement".
All it does is say, "This code cannot execute with this implicit prior state." It has nothing to do with types except in the most tangential way.
1 + 'hello' resulting in an Exception would be a product of dynamic type checking.
A language would not be able to explicitly throw a type related error unless it had some information regarding the type of the data.
Most resources show little confusion on this; I think you are wrong. https://stackoverflow.com/questions/1347691/static-vs-dynami...
Incorrect. It might be the product of a runtime type check. It is not inherently so. You can't even be sure you actually had a type error when you get a TypeError.
But even then, this is using "type check" in the most vacuous and equivocal way possible. It's not the concept most people are referring to.
> A language would not be able to explicitly throw a type related error unless it had some information regarding the type of the data.
You (and this resource to some extent) are confusing strong vs weak typing with static vs dynamic typing. A + method might have a type check embedded (esp python since + is magical), but it's actually quite rare that a.foo(b) is anything other than an assertion that the object A's vtable-analogue has an entry.
This is sort of dynamic typing, in the sense that I can think of formalisms that model this (named extensible row types come to mind), but this is a profoundly useless definition of "type."
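To illustrate why + in Python is "magical": the interpreter dispatches to `__add__`/`__radd__`, and any explicit type check lives inside those methods. Only when both operands return `NotImplemented` does the interpreter itself raise `TypeError`. The `Meters` class is a made-up example.

```python
class Meters:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        if isinstance(other, Meters):   # explicit runtime type check
            return Meters(self.n + other.n)
        return NotImplemented           # let Python try the other operand

assert (Meters(2) + Meters(3)).n == 5
try:
    Meters(2) + "hello"
except TypeError:
    pass  # raised by the interpreter after both operands declined
```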
There are many more important problems with Python: packaging, no way to know which exception you are gonna get, no nice async web framework and so on.
But this? You have those in any languages.
In the past, I've created scripts that have not needed packaging. Packaging is what is making me fall out of love with Python, especially as I'm learning Rust and seeing how great it can be.
There are many different metadata files with slightly overlapping information without a clear place that documents all of them.
I'd love to see poet take off.
- no clean way to freeze / update
- no good way to distribute a standalone executable
- binary packaging is still a mess
- big libs like GUI frameworks are still terrible to depend on
- compilation solutions like the awesome Nuitka require a lot of manual work
- too many things to know
For a beginner it's hell. I haven't been a beginner in 10 years, but I train people in Python for a living, and I know it's a pain point.
- pip and setuptools maintainers break everything quite often*
- basically no mobile packaging support whatsoever
(* quite often you say, skeptically? Well, they completely screwed it up twice this year already. That's twice more than any other package manager I use)
I'm on Win10, and pyinstaller made it easy to create an .exe for Windows, but I could find no way on Earth to assemble an executable for Mac.
As it happened I just asked a tech-savvy colleague on a Mac to use pyinstaller on their machine, and it worked, but still - I'm really impressed with Python generally, but this seems like a surprising and considerable oversight.
Are you a beginner in general maybe? Because that's either arcane, damn difficult to set up, or nigh impossible in tons of other languages too -- even ones that actually do produce executables (which Python by default does not).
(And of course you can always just create the executable on a vm -- no need to buy a computer running the other OS).
But my work has, until this year, rarely involved compiling software for other people to use. So that may be the noobness you're detecting.
Whereas, from my experience, it's usually a pain in the ass to set up.
I just thought it was interesting quite how hard it turned out to be! As someone who's usually on the games/VR side of things, I may have taken some of the magic that Unity, for example, uses to compile to all sorts of platforms for granted.
Other operating systems apart from Mac OS should be much easier to run in a VM, and where they're not, I'd blame that more on the creators of the OSes themselves rather than on Python.
But hey, you can use VMs to hit a lot of these targets.
> At one point a simple letsencrypt refresh ended up re-installing some critical python component eventually resulting in a full re-install of a server.
The letsencrypt developers explicitly recommend using a virtualenv to run the certbot script.
The best part of virtualenvs is that you can almost use them like little containers. For instance, let's say you have a script running a web interface and another script that performs data calculations both on the same server.
Your web application can have its own virtualenv and be listening for proxied requests from an nginx instance. The data processing script can be running in its own virtualenv as well and listening on a local unix socket for incoming data to process. The data processing script is 100% independent of the web application script and vice-versa. They each have their own interpreter and dependencies. Hell, you could even run your data processing script in Python 2 and your web application in Python 3 if you needed to.
Yes, but virtualenv is not always available and updating a certificate should not require a large amount of software to be installed on the sly on a machine, it should just upgrade the certificate and be done with it.
pip install virtualenv
virtualenv -p python3 venv
If you have a heavily locked-down server or something, talk to the administrator.
> updating a certificate should not require a large amount of software to be installed on the sly on a machine, it should just upgrade the certificate and be done with it.
I respectfully disagree. First, updating a certificate can be done by hand without the use of any software. The point of letsencrypt is to automate the process. Automation requires software. If letsencrypt were written in C you would still need to ensure that the executable was compiled for your architecture and that you have the correct header files available in the correct locations.
I'm also not sure what you mean by "on the sly" here either. If we assume that you mean that the letsencrypt package automatically creates a virtualenv, how is this any different from postgres installing libxml2 as a dependency for example?
> First, updating a certificate can be done by hand without the use of any software.
Yes, I'm aware of that.
> The point of letsencrypt is to automate the process. Automation requires software.
Exactly. So how difficult can it be to upgrade a certificate that was already there? Nothing on that machine needed 'upgrading' over and beyond the certificate, and certainly not in an irreversible way. All the software required to do the upgrade was in place because it worked 90 days before then.
You'll have to explain. Any problems that arise would be problems that would arise with installing any package from any packaging system. I fail to see your point. Errors and bugs are always possible in any situation. This isn't really an argument against virtualenvs, it's an argument against software in general.
> Exactly. So how difficult can it be to upgrade a certificate that was already there? Nothing on that machine needed 'upgrading' over and beyond the certificate, and certainly not in an irreversible way. All the software required to do the upgrade was in place because it worked 90 days before then.
When dealing with security measures such as SSL, it's extremely important that all packages involved in the process are secure and up to date. Therefore, it makes sense to me that an SSL library would want to ensure that all of its dependencies have the latest bugfixes and security patches.
virtualenv helped a lot when that came out but it still doesn't save you from shared system libraries and dependencies
You should check out aiohttp: https://github.com/aio-libs/aiohttp
I've built a couple of tester projects with it in Python 3.5. It's actually quite pleasant.
For example, I slowly go insane dealing with the dumpster full of clownshoes that is python iteration. It's so dumb. I know it's a relatively small thing and only a few small tweaks would fix it, but after about 100 lines or so where I have to deal with it I'm ready to rage-eat my office chair.
There is nothing half as good as Django in the async world.
In Python, you are encouraged to follow Easier-to-Ask-Forgiveness-than-Permission (EAFP) and catch specific exceptions. Doing either / both requires you actually know which exceptions to catch so you don't hide bugs. The problem is that the docs rarely mention the expected exceptions. You have to resort to discovering them experimentally and hope that the exception was an intended part of the API contract and not an implementation detail.
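A minimal sketch of the EAFP pattern with a specific exception, using a made-up config reader: only the documented, expected failure mode is caught, so a typo inside the try block still surfaces as a bug instead of being swallowed.

```python
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:     # the one failure mode we expect
        return ""                 # fall back to defaults
    # A bare `except Exception:` here would also swallow NameError,
    # AttributeError, etc., masking real defects.

assert read_config("/no/such/file/hopefully") == ""
```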
The Python exception hierarchy is not as good as I'd like. It should be harder to match NameError, AttributeError, etc because there's rarely a sane way to handle those.
People often write code expecting to "handle" exceptions but usually end up masking design defects. Most of the time you can only sanely handle a handful of exceptions way back at the base of the stack. And the handling at that level is usually "log" and/or "limited backoff/retry mechanism". Most of the code you're writing shouldn't do much with exceptions other than perhaps add context and re-throw.
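The "log and limited backoff/retry at the base of the stack" pattern described above might be sketched like this. All names here are illustrative, and `print` stands in for a real logger; most code simply lets exceptions propagate up to this one handler.

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Run `operation`, retrying transient failures with capped backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except OSError as exc:               # only expected, transient errors
            if attempt == attempts - 1:
                raise                        # out of retries: re-raise
            print(f"attempt {attempt + 1} failed: {exc}; retrying")
            time.sleep(base_delay * 2 ** attempt)

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise OSError("transient failure")
    return "ok"

assert with_retries(flaky) == "ok"
assert len(calls) == 3
```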
Even if you're not required by the compiler to handle an Exception, knowing which ones are possible to be thrown in a part of the code is still handy.
The same way Optionals and Enums in languages like Rust force you to handle all cases.