I'm not in love with this aspect of Go either, and I also find that the idiom for dealing with it (multiple return values and multi-statement if conditional clauses) doesn't play well with Go's scoping rules, so I find myself regularly having to decide between a cleaner conditional and an explicit variable declaration. I also don't love how it makes my code look like my teenage-years C code.
But I also think this is a very silly reason to adopt or not adopt a tool. No other approach to error handling is less fraught.
If Python is your only language, Go or something like Go is probably a very useful thing to have in your back pocket: compiled native binaries, fine-grained control over memory layout, and a simple and effective concurrency model are a good combination to have in one package.
There are a lot of things that annoy me about Go (and for that matter Python, which irritates me for very similar reasons). What I tell myself to get over that and keep an open mind is, Go may not be my idea of an elegant language (yet; I'm still learning to appreciate it), but it is an excellent tool. I can get over the language stuff if the tool works well enough, and Go seems to for me.
Incidentally, who "leaves" a language? I have a very hard time seeing how, even from the label on the tin, anyone could believe Go is a great solution for every problem. Python sure as hell isn't either.
I started programming almost 30 years ago. Some languages I haven't touched in decades: BASIC, Fortran or Pascal. These days every time I have to use Java I hope it's the last. On the other hand, I used C for the first time 25 years ago. I don't use it very often but I wouldn't say I've left it.
So yes, people leave languages. And in many cases for good reason.
What a refreshingly even-handed perspective. And you even talk about semantics instead of syntax.
Go isn't particularly elegant in my opinion, either, at least not compared to a lot of languages. But it's pretty good for getting stuff done, and handles a lot of common problems in a way that is direct and simple. I can perceive that utilitarianism as a form of elegance.
> No other approach to error handling is less fraught.
The thing that kills me about Go's error handling is you return error AND value all the time. It's like an Either that always has Left and Right. I think it's a bummer, especially coming from a group that has developed languages in the past.
I imagine that this is to keep compatibility with C. And also, this ensures that when a Left value is accidentally read as a Right one, you don't get a "random" bit pattern due to type punning, but a correct (albeit meaningless) value such as NULL.
I'm not sure how that follows; C doesn't have multiple return values either. I also don't really understand the second part. How is that any worse than the current situation of not checking the error side and reading an undefined (or whatever it is) value?
Well, the choice is between structures or tagged unions.
I'm not a Go user, but if I understand correctly ("It's like an Either that always has Left and Right."), functions return a structure with both an error code and a value.
To wrap a C function into a Go function with this signature, you can initialize a structure with { code: success; value:f() }; and then of course error checking has to be done C-style (errno, lib_last_error(), etc).
The problem with tagged unions is that when you access the wrong field (when it's not enforced by the language), anything can happen: you can build ill-formed values, jump to random places, etc. Having an explicit NULL in those cases is better.
In a nutshell, unsafe tagged unions < explicit NULLs < safe tagged unions.
Go actually returns true multiple values, not a structure. You can't assign it to a single variable, for example, you actually have to assign it to N variables, with N being the number of values the function returns.
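A tiny illustration, using strconv.ParseFloat as a convenient stand-in:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    f, err := strconv.ParseFloat("3.14", 64) // both values must be received
    fmt.Println(f, err)

    // v := strconv.ParseFloat("3.14", 64)
    // would not compile: one variable, two return values
}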
>> when a Left value is accidentally read as a Right one, you don't get a "random" bit pattern due to type punning, but a correct (albeit meaningless) value such as NULL
> How is that any worse than the current situation of not checking the error side and reading an undefined (or whatever it is?) value?
You get deterministic failure modes instead of dragons flying out of your nose.
Also, assigning the error to a variable and then ignoring it sticks out a lot more. The single-return-value C version doesn't tell you whether it can fail or not.
> You get deterministic failure modes instead of dragons flying out of your nose
What Go currently has does not solve this problem, which is what I'm pointing out. Your argument is basically "not handling the error Looks Bad". Mine is "it should be impossible to do wrong". Something like ADTs would have been very welcome; then we could just have a classy option type or either type.
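Go itself has no sum types, but just to make the wish concrete, here is a rough sketch of the shape being asked for, written with Go generics (a much later addition) and a made-up Result type; it only approximates what ML or Haskell actually enforce:

package main

import (
    "errors"
    "fmt"
    "strconv"
)

// Result is a hypothetical either-style type, not part of Go's standard
// library; it only approximates what a real sum type would guarantee.
type Result[T any] struct {
    val T
    err error
}

func Ok[T any](v T) Result[T]       { return Result[T]{val: v} }
func Fail[T any](e error) Result[T] { return Result[T]{err: e} }

// Match is the only way to reach the value, so the error branch
// cannot be forgotten the way an unchecked err can.
func (r Result[T]) Match(ok func(T), fail func(error)) {
    if r.err != nil {
        fail(r.err)
        return
    }
    ok(r.val)
}

func parse(s string) Result[float64] {
    f, err := strconv.ParseFloat(s, 64)
    if err != nil {
        return Fail[float64](errors.New("not a number: " + s))
    }
    return Ok(f)
}

func main() {
    parse("3.14").Match(
        func(f float64) { fmt.Println("got", f) },
        func(e error) { fmt.Println("error:", e) },
    )
}

Even so, nothing stops a caller from ignoring the whole Result, so it's an approximation rather than the guarantee ML gives you.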
Do you mean that restarts help with the clutter and distraction that error handling brings? Or is there something else that makes the condition system less fraught than exceptions?
It's less fraught than exceptions in the sense that it offers more options for recovery. With "regular" exceptions, by the time your handler executes the stack has been unwound and the context of the error has vanished; you only have what information the exception itself provides. Not so with CL's conditions.
I have mixed feelings about errors as return codes. Then again, I have mixed feelings about exceptions.
There are two general use cases for exceptions:
1. Unexpected (typically fatal) problems;
2. As an alternative to multiple return values.
(1) is things like out of memory errors. (2) is things like you're trying to parse a user input into a number and it fails. I despise (2) for exceptions. It means writing code like:
try:
    f = float(someText)
except ValueError:
    # I just parsed you, this is crazy,
    # here's an exception, throw it maybe?
    pass
where this gets particularly irritating is when you start writing code like this:
try:
    doSomething()
except ValueError:
    pass
I nearly always end up writing wrapper functions around that crap.
Java is worse for this because some libraries (including standard ones) abuse checked exceptions for this. I actually prefer:
if f, err := strconv.ParseFloat(text, 64); err != nil {
    // do something
}
or even:
f, _ := strconv.ParseFloat(text, 64)
for this kind of scenario.
For the truly bad--typically fatal--error conditions and cleanup, IMHO defer/panic actually works quite well. I certainly prefer this:
f, err := os.Open("foo")
if err != nil { /* handle */ }
defer f.Close()
// do stuff
to:
f = None
try:
    f = open('foo')
    # do stuff
finally:
    if f:
        f.close()
as Go puts the two relevant things together.
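And for the truly bad cases, the deferred Close still runs while a panic unwinds; a quick sketch of one way to arrange it (the recover-at-main boundary is just one choice):

package main

import (
    "fmt"
    "os"
)

func process() {
    f, err := os.Open("foo")
    if err != nil {
        panic(err) // the truly-can't-continue case
    }
    defer f.Close() // still runs while a panic unwinds
    // do stuff that might also panic
}

func main() {
    defer func() {
        if p := recover(); p != nil {
            fmt.Fprintln(os.Stderr, "fatal:", p)
            os.Exit(1)
        }
    }()
    process()
}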
Don't get me wrong: I like Python too but I do think Go has a lot going for it and has a bright future.
One problem with Go is that it uses multiple return values to indicate errors instead of alternative return values. When you call strconv.ParseFloat, you always get back two values, the error code and... wait, what float do you get back when there's an error?
If you're going to use return values to indicate errors (and certainly I feel that there are reasons to use them, exceptions vs error codes is not an either/or proposition IMO), you should use sum types and return either an error or the correct value and have a mechanism to make sure you handle both cases.
ML and Haskell get this right, it's a shame Go overlooked them in this regard.
> One problem with Go is that it uses multiple return values to indicate errors instead of alternative return values. When you call strconv.ParseFloat, you always get back two values, the error code and... wait, what float do you get back when there's an error?
Why would you even be interested in the float when there's an error?
But if there was an error, WHY WOULD YOU EVEN BOTHER?! Read the documentation what it says about the value in case of an error, and stop inflating a non-issue.
Human beings are capable of making mistakes, that's the entire point. If you forgot to check the error code in Go, then you will end up using that non-float float that shouldn't exist as if it were really a float. In a decent language, you get either an error or a float; you have no way to accidentally use the float if an error occurred.
>If you don't handle the error (e.g. by forgetting), the compiler will show an error because you declared err, but didn't use it.
But if I make a mistake like I mentioned previously, I could very well use err, thus not getting a compiler warning, and still also use foo, which is a whoknowswhat. It should be an Either, not two separate return values.
>Making these specifics a criteria for decency is nothing but arrogance.
Having opinions is not arrogance. You should consider that when someone shares their opinion, they are not declaring that everyone everywhere must acknowledge it as a universal truth.
As you point out, in both cases the program will crash, which for many people is preferable to the program churning along with invalid data. In Go with multiple return values, it is possible to have a code path where there's an error but still use what would've been the result of the procedure. In ML or Haskell when using sum types, that code path does not exist.
> In ML or Haskell when using sum types, that code path does not exist.
Of course it exists: don't unpack the result and ignore it or pass it along, or use it through its monadic or applicative interface (to proxy-with-transform to the caller).
I feel we may be arguing without me having properly defined what I meant, so I will try to remedy this. Consider the following code:
a, _ := some_computation_that_may_fail()
b := f(a)
In Go, if there was an error in `some_computation_that_may_fail`, the call to `f` will be made with an undetermined value for `a`, and we end up with an undetermined value for `b`. This is why I say that there is a path where we can use `a` even when we shouldn't.
Compare with OCaml:
let Some a = some_computation_that_may_fail () in
let b = f a
First of all, the compiler will warn you that the pattern matching on line 1 is non-exhaustive and will give you an example of a failing case (here, it will simply be `None`). If the programmer ignores that warning and runs the program anyway and the computation executes with an error, the program will fail at the first line, and thus the second function `f` will never be called and `b` will not contain an undetermined value.
I hope this clears up what I meant, I apologize that I wasn't clearer in the first place. Cheers!
I don't know what you mean by your first example, how does that lead to using undefined data? And your second example is incorrect, if you use an Either in monadic style then each successive function call after the error occurs is doing nothing. It simply returns the existing error from the Either, it doesn't try to run a computation on the non-existent value.
That isn't the same. If you do that you will get an Irrefutable pattern failed exception. You don't actually get access to x if justOrNone returns Nothing, because you are pattern matching and the Just pattern doesn't match. So you won't be able to use a non-value in further computations like you can with go.
I agree that exceptions can be cumbersome at times. And I do prefer the Go parsing float example to the try/catch one if my entire program is only one line.
But in order to live in peace you have to either:
* Force every programmer to always check all the error values.
or
* Allow errors to pass silently.
So either you accept Go as a heavy-duty language laden with error-handling boilerplate, or you accept it as a flaky, risky one.
Exceptions are the better compromise in my opinion. You get much cleaner code.
If this is the more common pattern for your programming life:
try:
    f = float(someText)
    use_it_here(f)
    use_it_there(f)
except Exception:
    # I wanted to do things, but something went wrong.
    # Here's what to do now, this is easy
    pass
Then you're better off using exceptions. If you need to check every single call (e.g. NASA or Kernel programming) then you might prefer this:
f, err := strconv.ParseFloat(text, 64)
if err != nil {
    // do something
}
if err := use_it_here(f); err != nil {
    // do something
}
if err := use_it_there(f); err != nil {
    // do something
}
So, I rarely see, e.g., Java code that actually handles the exceptions it receives. Usually it's just "Oh, I got an exception, so I'll print an error (or silently fail) and blindly keep going".
Given this, I don't see how Go is any worse with respect to sloppy programmers. They'll Always Find A Way.
Also, why couldn't your second example be nested? It seems like either of those two examples could be structured identically to the other.
I find Java exceptions quite good for cases where you want to fail at a much coarser level than the specific problem, but finer than the whole program. E.g. my previous^2 job was essentially a message-processing system; it had a bunch of loops that took messages off queues and processed them one at a time. If processing any given message threw an (uncaught) exception, it marked that message for retry or as failed and carried on.
You could certainly argue for handling each message in a separate process a la erlang, but this approach worked well for us.
I think checked exceptions were a mistake (IIRC Gosling agrees), and Java could do with better support for multiple return values (which exceptions get abused for), but I like Java-style runtime exceptions.
> You could certainly argue for handling each message in a separate process a la erlang, but this approach worked well for us.
I'm pretty sure you wouldn't do that. You'd have a shallow tree of processes, each leaf process would be tasked with doing message processing e.g. provided by its supervisor. The supervisor would "manage the queue" so to speak, and depending on the semantics of the queue it would have 1 to n children; and it would be tasked with marking failed messages when a child process would blow up (and restarting a new child).
Modelling messages as processes would likely be impractical.
Errors should never pass silently
Unless explicitly silenced
The problem with go is that you have 3 error modes:
1. Explicitly handle everything
2. Implicitly silence sometimes
3. Panic/recover explodes sometimes (who knows where/when?)
Mode #2 is dangerous.
Java/Python give you only 2 modes:
1. Implicitly explode on error.
2. Explicitly silence sometimes.
Where both modes aren't inherently dangerous, i.e. they won't directly cause undefined states to execute.
--------------------
Concerning the nesting in my examples - I'm used to the style of C programming where failure is mostly handled by a return, so as to keep the program as readable (and thus flat) as possible. So pardon my French, but I assumed "// do something" would somehow prevent further usage of 'f'.
Note that the Go example, although tedious, isn't bad in cases where you really do need to check every single possible error.
I think this is not what you're actually seeing. When you get an exception there is rarely much you can do other than log the error or alert the user. It is much more important what you do not do in such a case - you do not let bad data flow into your program, and exceptions are great at preventing that.
Regarding your last Python example - the preferred method of doing resource cleanup in Python is the 'with' mechanism:
with open("x.txt") as f:
    # do stuff with f
That's it. Obviously, compared to 'defer', it hides stuff - the actual magic happens in special methods __enter__ and __exit__ of the object passed to 'with'.
Ruby by convention uses blocks for the same situation:
def withSomeResource
  handle = acquireSomeResource
  yield handle
ensure
  freeSomeResource(handle) if handle
end

withSomeResource do |h|
  h.whatever
end
File.open and other parts of the standard library do that by default. I imagine the Python implementation of "with" is pretty similar, given that it can be implemented pretty much like this in Ruby, and "(begin) .. ensure .." is pretty much equivalent to "try ... finally ..":
def with r
  r.__enter__
  yield(r)
ensure
  r.__exit__
end
The two mechanisms are not very similar. With the blocks mechanism (if I understand correctly) the resource itself implements a method "execute this code and clean up yourself". This only uses the standard mechanisms of the language (i.e. blocks.) The control is with the resource.
Python's 'with' inverts the control - the control is with the language runtime and the resource is passive here. In fact 'with' is a special syntax extension, with calls to hard coded method names (__enter__ and __exit__) done by the Python runtime. The resource itself (or the contextmanager representing it) only implements __enter__ and __exit__ and is passive with regards to this mechanism.
> With the blocks mechanism (if I understand correctly) the resource itself implements a method "execute this code and clean up yourself".
Where the method is doesn't matter, e.g. the (incomplete) example I gave of implementing "with" in Ruby that shows a method that can be defined in the global "main" scope or wherever you want.
Some classes implement class-methods to instantiate an object, pass it to a block, and free the resource as a convenience, such as File.open, but it could go wherever else.
The resource itself most certainly does not need to know a thing about it as long as its API lets you do the cleanup you want/need.
> The control is with the resource.
The control is with whatever calls yield. Whether that be a method on the global "main" object (closest thing Ruby has to a global, freestanding function) or on the class of the resource itself, or an explicit "begin ... ensure ... end" block if you only need it once.
> In fact 'with' is a special syntax extension, with calls to hard coded method names (__enter__ and __exit__) done by the Python runtime.
Which is why I gave my example of how I thought "with" would look like in Ruby (as it turns out I missed some exception handling in order for it to be equivalent to the Python version in functionality).
> Which is why I gave my example of how I thought "with" would look like in Ruby (as it turns out I missed some exception handling in order for it to be equivalent to the Python version in functionality).
What you can't do, though, is express ruby's blocks using python's with statement; ruby is strictly more powerful here. One simple example of this is ruby's fork statement, which runs the code in the block in a process of its own:
fork { puts "child" }
puts "parent"
That will print "child" and "parent" once each, and it works because the fork method has control over the block's execution, so it can choose to only evaluate the block in the child's context. In python, the approximate code would be:
with fork():
    print 'child'
print 'parent'
However, python doesn't allow the resource to control block execution, so this code can't work (within standard python; there's a bytecode hack that can make it work, but that's outside the spec). Anyhow, I guess we've diverged pretty hard from the article, but I can never pass up the opportunity to vent against python's with statement.
> given that it can be implemented pretty much like this in Ruby, and "(begin) .. ensure .." is pretty much equivalent to "try ... finally ..":
Note that there's a difference in the handling of exceptions: if an exception is triggered from the `with` block, it is intercepted, provided to __exit__ and can be silenced if needed or desired (by returning a truthy value). So a closer approximation would be:
def with r
  r.__enter__
  begin
    yield(r)
  rescue Exception => e
    raise unless r.__exit__(e)
  else
    r.__exit__(nil)
  end
end
Yeah, I don't expect it would be very useful in Ruby, but it's not uncommon for Python (and its users) to use exceptions for flow control (see GeneratorExit and StopIteration)
You can do the same with Java. Both are in a sense trying to re-gain the most useful parts of RAII semantics from C++: making it much more difficult to forget to clean up resources after you've finished with them.
> try:
>     f = float(someText)
> except ValueError:
>     # I just parsed you, this is crazy,
>     # here's an exception, throw it maybe?
No, you have it exactly backwards. It's with return values that you have to check after every call to be safe. With exceptions you can truly say: if I don't know what to do if this fails then I don't do anything. You can let the exception bubble up higher, all the way up to crashing the program if you like (which gives a nice stack trace that can then be debugged).
With return values, if you don't check then you could be building up more and more trash and you won't even know it until you either hit a point of code that finally does look at what's returned, or if an exception occurs (e.g. segfault).
But not checking is invisible. I can't grep for it. It doesn't stand out in a code review. It's almost never what I meant to do, so it shouldn't have been the default or easily overlooked, much less both.
If you use a language that uses error codes, then you're forced to explicitly think about what to do in every single error case (or not, and accept the consequences). This can add a lot of mental overhead, but can result in a much more robust and well-thought-out program. But because more logic is involved period (to handle the robustness), the program is necessarily more complex.
If you use exception handling, then suddenly catching errors becomes much easier, and writing code is faster, but it's not like your program is any more robust. If you catch an error up at the top, your program or function is probably just going to quit. Of course, you can add more code to deal individually with each of the exceptions in more robust ways, even to the point of wrapping each statement in its own try/catch block, but then it's no different in practice from returning error codes.
In the end, I just don't see much of a practical difference. In both cases, if you want to be a lazy or rapid programmer, errors will lead to your software just not working. And if you want to be conscientious about your errors, then you can do that in both cases.
Exception handling lets you handle errors "at a distance". It's convenient, but harder to keep track of. Returning errors keeps your code execution where you can see it, and you always know where it is. It's simpler, but more verbose.
The default behavior for an error in a programming language with exceptions is to crash. This gets your program back to a known state (i.e. not running). It's impossible for code to blindly continue when something bad happens.
If you want to handle some error in a way other than crashing, handle it. Web frameworks usually catch exceptions that arise while a request is being handled, finish the request, and keep handling new requests. You handle things more specifically only as needed.
I like that behavior so much more than having a program keep running in an unknown state unless you handle errors line by line (and value it more than keeping code execution where I can see it). I don't think my programming language knows what errors are very important, either — that depends way too much on context.
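For what it's worth, Go ends up re-creating this pattern at those few boundaries with panic/recover; a sketch (the handler names are made up):

package main

import (
    "fmt"
    "log"
    "net/http"
)

// recoverHandler is the "catch at the top, keep serving" pattern done
// with Go's panic/recover instead of exceptions.
func recoverHandler(next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if p := recover(); p != nil {
                log.Printf("request %s panicked: %v", r.URL.Path, p)
                http.Error(w, "internal error", http.StatusInternalServerError)
            }
        }()
        next(w, r)
    }
}

func flaky(w http.ResponseWriter, r *http.Request) {
    panic("something truly unexpected") // the request dies, the server does not
}

func main() {
    http.HandleFunc("/", recoverHandler(flaky))
    fmt.Println("listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

The net/http server does recover handler panics on its own, but an explicit wrapper like this lets you log and answer the request on your own terms.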
> The default behavior for an error in a programing language with exceptions is to crash. This gets your program back to a known state (i.e. not running). It's impossible for code to blindly continue when something bad happens.
In Smalltalk, the default behavior for an error is pretty much business as usual, because exception handling is mostly just Smalltalk code running normally. In some Smalltalks, you can override the "top level handler" and ensure that an app never crashes. (An infinitely growing collection will exhaust heap and crash the program that way.)
I don't know about Go, but lint is commonly used to enforce checking of return values in C. Presumably Go could bake an option into the compiler to enforce this too.
Sort of. You can actually get away with invoking a function and not assigning its result at all. I think assigning the error to _ is only necessary when you have multiple return values (one of which is the error) and you are interested in at least one of them.
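For illustration, a small sketch of both cases (ParseFloat is just a convenient stand-in):

package main

import (
    "fmt"
    "strconv"
)

func main() {
    // A call used as a bare statement compiles fine even though both
    // return values, including the error, are silently discarded.
    strconv.ParseFloat("3.14", 64)

    // The blank identifier is only needed once you want one of the
    // values but not the other.
    f, _ := strconv.ParseFloat("3.14", 64)
    fmt.Println(f)
}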
The difference for me is whether I'm trying to reduce MTBF (mean time between failures) or MTTR (mean time to recovery).
I'm in the first mode when I'm writing high-quality code to solve stable, well-understood problems.
I'm in the latter when I'm doing almost anything else. E.g., prototyping, exploring, pushing out an MVP for user testing, adding a quick-and-dirty version of a feature to get real-world feedback.
In the latter mode, I'm very tolerant of errors just as long as I can diagnose the problem quickly. The way I start is to catch exceptions at a very high level, tell the user something nice, and have myself paged. If something blows up, I fix it quickly. There's no sense in making code robust if I don't know if it will exist next week.
Basically, the two approaches are the same only if you have infinite time and money. Google does, so I guess Go makes sense for them. But if this fellow's take is correct, Go's much less interesting to me as a general-purpose language.
Though it works, one potential issue I found with it is that it does not give enough context as to where in the program the exception occurred. E.g. if I have more than one open() call (all for read mode) in the code, it will give the same error message for an exception occurring on any of those open() calls (as long as the exceptions all happen for the same underlying reason, such as "file not found"). E.g.:
Exception occurred:
IOError(2, 'No such file or directory')
Had you come across this and found any way to handle it? I thought of passing some unique code for each case, but there seems to be no way to do it, because we are not calling the except clause ourselves - Python does it.
Edit: just thought of a (crude) way to handle that issue: declare a global variable, say, "location", and set it to a different numeric or string value at each place in the code where an exception may be thrown, or at least at one place, say the top of the function, in each function or method. And then in the single except clause, print the value:
print "location=", location
This will at least help narrow down the area of code in which the exception was thrown.
Huh? Most languages with exceptions provide stack traces, so you can find which function (and line) caused the exception and also what functions were called up to the function that caused the exception. This makes it pretty easy to find out which call to open() caused the exception.
Heck, I commented in a hurry without thinking :( I did know about stack traces (used so many times), and you're right, of course. Thanks for pointing it out, though :). Now that I think of it more, since stack traces exist, there isn't even a need for that top-level try/except, for early prototypes. You can just write your main code, run it and let it fail, and fix the errors as you find them, by using try/except/finally etc.
I use the high-level error handler to let me know when a failure has some impact besides dumping a stack trace on my screen. E.g., failed web request, batch job barfing, etc. And to present the user with some reasonable "we're on it" thing.
I would guess that most (99%) of C programmers are lazy or have a rapid programming style then. It's a rare thing to see C source code that even does simple things like wrapping every write/read/print call with a loop that detects EINTR. In my years of programming, I have yet to stumble on another programmer who even knows that one should be doing this.
Looking at C libraries and their example code, almost every time the example lacks error handling. It's almost like people avoid doing error checking in C code because of the cumbersome amount needed in order to check each and every call. If the example code did include error checking, the number of lines could easily increase to 3-4 times the original size.
In Go if you use the return value of a function you must also assign the error to something, either a variable or explicitly ignore it by assigning it to _. It was designed this way specifically to avoid the problem of not checking error conditions in C code.
That's true, but C's error reporting also differs greatly between libraries. Each library has a learning curve in how it wants its errors checked. From the top of my head, 10 years after writing any C code: check against NULL - guess the error; check against negative values - look the value up in a table; check against 0 - call a method to get the error string; and many more.
Go standardises this in a super clean way. Every method that can error will return an error. If you try to write a method that calls methods that can error and you don't return an error then you will be extremely aware of that. After a few days of Go it will be physically painful to write code that eats errors.
So what Go did was to take the return-parameter style of error reporting, mix in multiple return values to make that not suck, and then optimize towards the local maximum of programming comfort within this paradigm.
The result - as anyone who has been writing Go for a few months can tell you - is something that works incredibly well. I'd say it works better than exception style overall, but that may be my personal preference.
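A sketch of the shape this produces in practice (loadConfig, parse, and Config are invented names):

package main

import (
    "fmt"
    "os"
)

// A function that calls fallible functions almost inevitably grows an
// error return of its own, so its caller can't miss that it may fail.
type Config struct{ Raw string }

func parse(data []byte) (*Config, error) {
    if len(data) == 0 {
        return nil, fmt.Errorf("empty config")
    }
    return &Config{Raw: string(data)}, nil
}

func loadConfig(path string) (*Config, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err // pass the buck upward, explicitly
    }
    return parse(data)
}

func main() {
    if _, err := loadConfig("app.conf"); err != nil {
        fmt.Fprintln(os.Stderr, "could not load config:", err)
    }
}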
I don't doubt that Go is usable and reflects an attempt to rationalize C's (non) conventions, which is a goal I appreciate. But it certainly isn't false for Python that 'the result - as anyone who has been writing [Python] for a few months can tell you - is something that works incredibly well.'
To be fair, most sample code I've seen, regardless of language, usually does not include error handling. Why not? Probably because it would detract from the main point being illustrated.
> In my years of programming, I have yet to stumble on another programmer who even knows that one should be doing this
That's a bold statement. Nice to meet ya! I'm no PC-Luser! ;-)
The PC-lusering problem is mentioned in "worse is better", so I would hope a fair number of folks know about it. Every good code base I've seen uses EINTR wrappers, but it is true that many folks I've met don't know about it.
Hey now, it's not just C programmers, though. We have to use the same EINTR-retry loops in Python too, sometimes.
EINTR is a flaw in Unix's syscall model: it forces on every single application the responsibility for dealing with something that should have been handled at a lower level. That's unrelated to C's error handling style; it would be just as ugly if every write() call had to be wrapped in a try-catch block to retry when an InterruptedException was thrown, and people would still forget to write that ugly boilerplate.
"Of course, you can add more code to deal individually with each of the exceptions in more robust ways, even to the point of wrapping each statement in its own try/catch block, but then it's no different in practice from returning error codes."
It is very different. With a try/catch block, you can see the normal logic flow of your code in one place, separated from the error handling logic. If you wrap each and every line that can throw an exception with its own try/catch block, then yes, that is pretty much the same as handling explicit error codes. Which is why I consider that poor style in languages that have exceptions.
Separating error handling code apart from the logic flow of normal operation is simply not possible with error codes as function return values.
Um... errors ARE normal operation. Consult historical output from your C++ compiler, if you don't believe me!
(If anything, that should probably be "errors", quotes included, because what people usually mean by the term is "easily-forseeable occurrence that I couldn't be bothered to write the code for".)
Compiler errors aren't usually implemented as exceptions because compilers these days don't stop at the first error.
Exceptions are ideal for things that require aborting and unwinding to a point - usually a loop, like a server request handler or UI event dispatcher - high up on the stack. If the general behaviour is not to abort and unwind, it's not a good fit for an exception.
It's also why exception handlers should be very rare. There shouldn't be any more than a half-dozen in most programs. Error codes that need checking and aborting by hand at every function call boundary are a poor fit for replacing exceptions, because they optimize for the case where the error can be handled, rather than the usual case for exceptions, where the error cannot be handled.
> Error codes that need checking and aborting by hand at every function call boundary are a poor fit for replacing exceptions, because they optimize for the case where the error can be handled, rather than the usual case for exceptions, where the error cannot be handled.
From what I understand, the Go language team wanted to write robust servers, thus they wanted to "optimize for the case where the error can be handled."
My point was that programmers are happy to push the "error" handling off to some other part of the code, far away from the point at which the "error" arises, in some place that usually has no real idea how to handle it, or present it, or what have you. All as if there's some default privileged path where everything is running normally, and then this occasional strange special case that crops up now and again. Exceptions explicitly encourage this style of programming - keeping the error handling apart from the main logic is their very purpose.
But were programmers to consult the history of their compiler's output (representative summary: "ERROR ... ERROR ... FAILED ... WARNING ... ERROR"), it would be obvious that in fact errors are highly likely, even if you know what you're doing. So what makes the error case different from all the other cases?
Errors in compilation are not exceptional; one of the main reasons to use a compiler rather than an interpreter is specifically to search for errors. So exceptions are a poor fit.
You're quite right that exceptions promote giving up control to a higher level. And that's the right thing to do for exceptional situations. The ultimate higher level is the programmer who's going to have to fix the bug, or the admin who needs to fix the configuration problem - because they are the two main classes of errors for which exceptions are great. (The third class is a logical abort, and depending on the language and how well its idioms support exceptions, this may or may not be good practice.)
You get rich information about the error, including a call stack, the ongoing operation gets cancelled as soon as it occurs rather than muddling onwards and causing more damage. If you can foresee the error such that you can write an error handler, using the method that throws exceptions is probably not the right one - unless it's unavoidable owing to race conditions, or impoverished libraries.
Not at all. If a function can't or shouldn't be responsible for handling an error, it can return that error. The result is that the error passes back up the call stack until something handles it.
It's very similar to exceptions, except you have to consciously choose to do it. That's a virtue to me.
I certainly agree that if you are thinking carefully about the code, either way should work just fine. As someone who likes C and Python I understand that both ways can make robust software, and it took me a little time to accept exceptions at all. More broadly, most of the languages that are widely used can write decent software.
I think this is a matter of what you prefer the program to do by default, when something clearly wrong has happened, but no error-handling code has been written: you may prefer such code to halt, or you may prefer it to keep on trucking.
If you use exceptions, you have to think about it IF you want the program to keep on trucking. Otherwise you can just let exceptional circumstances take care of themselves - if the program stops immediately, it cannot then behave incorrectly.
As a default failure mode, I do prefer the program to halt and decline to further cock up the situation (forcing me and my tests to deal with the condition) than for the program to silently swallow the error and continue as a default (forcing me to thoroughly check error codes everywhere up front out of pure fear that I will miss a condition).
This is just a convention and either one works.
If Go can become a better version of C then I might switch out C for Go. This isn't by any means going to cause me to give up Python because I use C for very specific reasons (when it is totally OK for reasons of speed or control to write a decent amount more code, and have it read more cryptically, and have to take care of many more low-level details).
I basically agree with you. I think return codes are simpler, and exceptions appear simpler because we're used to them. In reality they are a magical, out-of-band mechanism compared to plain old return. I contend that local and explicit is more strongly correlated with simplicity than remote and implicit.
Put another way, verbose code can be simpler if it is direct and explicit — what happens is what's on the page — just as succinct code can be more complex if it is magical or implicit — what actually happens depends on who is calling it, say.
I used return codes before I ever used exceptions, and even chafed at exceptions. Exceptions CAN be simpler. It's just an expressive tool, it matters more what you do with it.
I do hate to read code which uses exceptions as a normal part of program flow, it's very GOTO-like and remote and implicit.
But I don't write code that way and nothing is forcing me to write code that way. When exceptions are reserved for really exceptional conditions where a requested performance CANNOT continue, they are used much less frequently, and more locally and explicitly.
In short I think the problem is a matter of philosophy more than language facilities, and the big differences are within-language rather than between-language.
The reason I like to have exceptions used as a convention is that I think a better default for programs which have not yet covered some corner case is for them to decline to run, rather than to run in dishonor.
Say you need to migrate a database to a new schema. You need 10 SQL statements to do this. Each statement may fail, in which case you want to write stuff in a log and abort the transaction. In this case, wouldn't you agree that writing your statements in a try/catch/finally block would result in clean, easy to understand code, as opposed to adding explicit error handling after each statement?
No need for exceptions at all, and it's very clean and clear what's going on.
I showed the same code to my young, exceptions-indoctrinated colleague, who asked the same thing as you, and at first he couldn't believe that that's all -- that's how unused he was to plain control flow. It's even obvious how it can be generalized for any sequence of commands:
for ( int i = 0; i < n; i++ ) {
    if ( !callSql( s[ i ] ) ) {
        logAndAbortTransaction();
        return false;
    }
}
endTransaction();
return true;
The loop variant does know which call failed, and could easily be modified to know what the error was (if the underlying callSql had more than just a boolean failed/success return value).
If you can, write more concise code with exceptions which will automatically show you which call failed and what the error was. If we include all the declarations, I claim that your solution wouldn't be shorter or more readable than the C variant I'll write, which would only declare error code values, or even return strings as error reports.
There's no advantage in exceptions, except when you use them for "hardware-like" generated exceptions (which is where they belong).
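For reference, the same loop translated into Go's error-return idiom looks roughly like this (a sketch; the database/sql transaction and statement list are assumed to exist elsewhere):

package migrate

import (
    "database/sql"
    "fmt"
    "log"
)

// run applies each statement in order, logging and rolling back the
// transaction at the first failure; the caller decides what to do with
// the returned error.
func run(tx *sql.Tx, statements []string) error {
    for i, stmt := range statements {
        if _, err := tx.Exec(stmt); err != nil {
            log.Printf("statement %d failed: %v", i, err)
            _ = tx.Rollback() // best effort; we are already failing
            return fmt.Errorf("migration aborted at statement %d: %w", i, err)
        }
    }
    return tx.Commit()
}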
Now I have an exception to log, which means I have a stack trace that is going to tell me which statement failed rather than having to play the guessing game. I may even get an explanatory message for free if the exception came with a message.
For me, having exceptions over return codes has a number of advantages:
* Within a single object, you get much more information (error message and stack trace). You need to return two different values if you need both a message and a return code
* You don't want to handle the exception at the call site? No problem, just propagate it up the stack. On the other hand, if you want to try the same thing with return codes, you need a way to differentiate the return codes due to an error in your migrate() function from the error codes resulting from a given callSql statement.
* Which brings us to... another benefit: enforced compile-time checking of the exception. You can't write code that is going to check for a DogException when the callee throws a CatException. But it's very easy to check for -1 when the actual code is -2. Of course you can alleviate this somewhat using static constants, but it's still a safety issue.
Obviously, exceptions are not all roses either, whenever you need a new one, you need to create a class for it. But the workflow interruption is not greater by an order of magnitude than adding a static constant somewhere to represent your new error code.
Doing exactly the opposite of what I asked, you omitted all the declarations of all the classes you use. Note that if I use C++ I can also use classes with the same semantics that also contain error messages. The stack trace you mention is a debugger feature, not something your classes do by the language definition (I'm speaking about C++; I don't know what's in Java). If you'd include all the code needed for your lines, you'd see that I have more freedom, not fewer expressive possibilities. If you claim that you already have everything in libraries, OK, but C++ libraries without exceptions can have equivalent functionality. Exceptions are absolutely not needed to write clean code that handles all the error conditions.
The example code is not using a new exception, it's handling the callee's, the same way that your code does not use a new return code but relies on whatever the callee returns and does not include the code for "callSql".
I assume you are right about C++ exceptions (the little C++ I do doesn't use them). Modern managed languages incorporate the stack trace in the exception (including Java).
I'm not sure how you propose to have both return code and error message in C++, return a struct with an error and a message? This also does not answer my points 2 and 3 about handling errors occurring at different layers and compile-time checking.
I'm sorry but I firmly believe you completely missed the point. I wrote "If you can, write a more concise code with exceptions which will automatically show you which call failed and what the error was. If we include all the declarations, I claim that your solution wouldn't be shorter or more readable than my C variant I'll write" and you managed to omit everything I proposed as needed in order to demonstrate what is actually involved in exceptions -- not the use, but all the classes etc needed. In short, if you actually use library exceptions, there are thousands of lines of declarations. You can claim "but they are already there, I don't have to write them" still it's a library thing, not the "language as such" thing.
Errors can be handled at different layers (using normal control flow the same way as I already demonstrated -- simply writing ifs) and the error declaration can be checked at compile time in C++ exactly as C++ has (who would have thought that!) classes. Java also uses class infrastructure for that, exceptions are only "out of normal flow path" mechanism.
import com.foo.SomeOtherException;

public class MyException extends SomeOtherException {
    public MyException(String message) {
        super(message);
    }
}
That's about as complicated as it gets (considering the example extends a custom exception instead of inheriting from java.lang.Exception). I'll note that your example doesn't declare any return code anywhere. Now, I'm curious to know how you would do the equivalent of this with error codes:
public class MigrationException extends Exception {
    public MigrationException(String msg) { super(msg); }
}

public class SqlException extends Exception {
    public SqlException(String msg) { super(msg); }
}

public class Migration {
    public void migrate(Statement stat, int migrationNumber)
            throws MigrationException, SqlException {
        if (migrationNumber <= 5) throw new MigrationException("Wrong migration number " + migrationNumber + ", should be > 5");
        // throws SqlException
        stat.executeNonQuery("INSERT foo INTO bar");
    }
}

public class MigrationRunner {
    public void run(Statement stat, Migration migration, int migrationNumber) {
        try {
            migration.migrate(stat, migrationNumber);
        }
        catch (MigrationException ex) {
            migrationLogger.error("Bad migration", ex);
        }
        catch (SqlException ex) {
            migrationLogger.error("DB error", ex);
            sqlLogger.error("Error during migration", ex);
        }
    }
}
I'm on purpose not reimplementing an SQL driver here :)
I'm not quite sure why you are arguing about "error declarations which can be checked at compile time in C++". Sure, C++ has exceptions, but I thought you were trying to make a point about return codes, not C++ exceptions vs language X exception. My point, on the other hand, is that it's very easy to do this:
#define SQL_ERROR 1
// whoops, same error code due to copy-paste
#define MIGRATION_ERROR 1

void handleError(int errorCode) {
    // Too bad, this was actually an SQL error
    if (errorCode == MIGRATION_ERROR) {
        ...
    }
    else {
        // We'll never enter this branch
        ...
    }
}
No compile-time check for that. Or even:
#define MIGRATION_ERROR 1
#define SQL_ERROR 2

void handleError(int errorCode) {
    // I actually meant 2 here
    if (errorCode == 1) {
        logSqlError();
    }
}
Throwing errors up the return chain using error values is really very little different than throwing up the return chain using exceptions, except that now there is only one way to return, and it is always clearly marked. You return where you say "return". And you return what you said you would.
What am I missing here? This appears to be fully in-band and contains a redundant code path (constituting a NaN bug in languages with incomparable NaN—no idea if that applies to Go) for no apparent reason.
What you're missing is that I'm showing how error throwing and error returning are functionally the same, when you have a standard error type, multiple valued return, and an idiom of returning errors you chose not to handle.
Sure, you can handle errors and you can explicitly pass the buck. That's no different than what we do in C. Exceptions are out-of-band (from a regular return value), and what you're talking about is entirely in-band. What you're doing, functionally, is returning a tuple. This is as old as the sun and nothing at all like exceptions.
I'm not sure what you think idiom means (Hello, my name is Inigo Montoya), but you never need one to do anything in code. In human language they capture some context and their meaning must be understood by rote or in terms of that context, but in code they are entirely explicit and work just as well before and after becoming idiomatic—which is solely a form of classification that people use and computers don't.
Idiom just means "a standard way to do things, that everyone recognizes and uses".
Normal exceptions, the kind that unroll the stack (and not deep magic like call/cc) are only "out of band" as an optimization. There is no behavioral difference between "save the last place you decided to handle errors, and jump there directly unrolling the stack at once" versus "unroll the stack by a series of buck-passing in band error returns, until you get to the last place you decided to handle errors". A compiler could implement exceptions by silently transforming them into Go form. And a Go compiler could silently transform buck-passing into traditional exceptions.
In particular idiom does not mean something deliberately entered into the language to solve a particular problem—that's just regular semantics. Idioms are by nature emergent. For example I might be conscious of 3 idioms among all the languages I know (which is much more than 3), yet I write idiomatic code because I know the languages well. In fact, outside of Go circles no one talks of idioms, they talk of idiomatic code.
Who cares about compiler transformations? The issue is the semantic difference between return codes and exceptions. Nobody is asking for exceptions because they can't or don't know how to get to the correct scope to handle their errors, they're asking for them because they don't want to write all the boilerplate necessary to do so.
And yes, you can transform exceptions to chained returns. That would be a lot of work for terrible performance. You can't do the other transformation though. It would require being able to statically analyze the code path to determine where the errors are eventually handled, which is impossible in the general case.
My point is that whether it's "A() error" or "A() (int, error)", whether it calls one function or three, what it's doing with the error is throwing it up the line rather than handling it locally. Exactly as if it had said "throw err". So yeah, your version is what I'd shorten it to in practice, it just wasn't quite so illustrative.
It's not really the same though. The two differences I can think of offhand are that 1) you have to manually pass the error up the call stack until you want to handle it, while exceptions are automatically propagated, meaning you don't have to remember to do this until the point where you actually want to handle the error, and 2) exceptions usually contain a stack trace to where they were raised while this method won't (without additional work anyway), so contextual information is lost.
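For what it's worth, in later Go versions that "additional work" usually takes the form of error wrapping; a sketch (ErrNotFound and the functions here are invented, and errors.Is with the %w verb postdate this discussion):

package main

import (
    "errors"
    "fmt"
    "os"
)

var ErrNotFound = errors.New("not found")

func fetch(key string) error {
    // pretend the lookup failed
    return fmt.Errorf("fetch %q: %w", key, ErrNotFound)
}

func handleRequest(key string) error {
    if err := fetch(key); err != nil {
        // each layer adds its own context as the error is passed up,
        // which is the manual counterpart of a stack trace
        return fmt.Errorf("handle request: %w", err)
    }
    return nil
}

func main() {
    err := handleRequest("user:42")
    fmt.Fprintln(os.Stderr, err)             // handle request: fetch "user:42": not found
    fmt.Println(errors.Is(err, ErrNotFound)) // true
}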
Heh, I thought the answer was "Because my blog is named uberpython and it would be way confusing." :-)
But error handling is one of those topics that really gets people going. From ABEND in the old IBM batch days, to uncatchable signals like SEGV in unix and uncaught exceptions in C++ or Java.
I tend to come down on the "decide what you are going to do in the code, right where the error occurs" flavor of the argument. So checking for errors when they occur is important to me, what to do next gets to be sticky.
So there are three things you want to do:
1) Easy - you just want to quit/die/exit and pretend you never existed. This is pretty easy and most OSes and embedded toolkits have something along the lines of 'exit' or 'exit_with_dump', so that later you can try to reconstruct what happened.
2) "This is bad, but it might be OK later," where you want to unwind to the point where this started but not completely exit. Exceptions are pretty good for this if used well, since you can catch them at the 'unwind' point, and if you can attach the unwinding actions (closing files, freeing memory, etc) to the return stack then it can be manageable.
3) You want to plod along in a degraded mode, which means you need some way of communicating with the rest of your code that you're damaged goods at some level.
Go has an interesting mix which I haven't used extensively but I wouldn't dismiss it out of hand. The folks writing it run services that continue through partial outages so if it were too egregious folks would rebel inside of Google.
Just some more oil for the fire. Having worked primarily in Python for over 10 years (and generally been a huge enthusiast during that time), I'm pretty comfortable at this stage with the notion that exception handling languages contribute significantly in only one respect - they allow newbies to write code without ever properly understanding how errors should be managed without violating layering, where retries should be inserted, and so on.
The culture of writing code, and that code being fine once a unit test is passing seems to propagate the myth that it's fine for any code to throw any exception at any time – it doesn't matter as long as those errors expected to occur from testing are the only conditions that ever occur. As for the rest, well. Kaboom!
Starting off from C, it doesn't take you more than your first 100 lines before you discover pain due to missing a return value. In my experience, eventually 60% of your time is spent wondering how this particular return value propagates throughout the rest of the code you've written, and libraries in use.
The end result is my C code tends to be much more robust in the face of braindamage (say, network errors, IO errors) than my Python, simply as a result of a cultural mindset that basically encourages ignorance of error conditions.
You can also trace one of the biggest pains from Python 2 back to this culture: deferred error handling is at the very core of the Unicode/strings mess, probably the single most motivating factor for moving to Python 3.
[Extremely tired, aware this is poorly written and sways between points, but hope it makes some sense]
You are right that #2 is well served by exceptions when done right. It's also well served by checking locally and returning your own error status. Both ways can be done well. IME exceptions tend to pass the buck to the caller when they shouldn't, in the hands of mediocre programmers.
The problem I found is #1 is hard. In C# if I want to just exit/return an HTTP error I ignore the exception and it happens. In Go for the majority of cases I have to check the error and call a panic function. I wrote a helper function to do this - it takes one parameter and panics if not nil.
2 and 3 I've found to be a little less verbose as there's no try {...} catch {...} around it.
It seems a nice way of avoiding the stack sentinels or setjmp/longjmp that C++ has, which means the output is less architecture dependent.
Personally I don't think Go is quite ready for prime time, but it's definitely promising. I'm doing a couple of projects with it to see how it runs.
It's not "tempting to ignore errors". You're the programmer, if you want to ignore it, do so. I don't know why you'd want to ignore the returned value and possible error on a GET request - given that 1) it can fail for so many reasons, and 2) the suggestion that you might be "GETTING" something. Ignoring the error of a print statement is more acceptable since it's unlikely to occur and there's rarely a lot you can do about it anyway.
Error handling is explicit, like the quote you quoted said it should be. If you ignore the returns (either explicitly or implicitly) you pay the price in debugging.
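To make the GET example concrete, the usual checked shape looks roughly like this (the URL is a placeholder):

package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    // A GET can fail for many reasons (DNS, refused connection,
    // timeout), so the error is checked rather than dropped.
    resp, err := http.Get("https://example.com/")
    if err != nil {
        log.Fatal("GET failed: ", err)
    }
    defer resp.Body.Close()

    // Ignoring io.Copy's error here is exactly the kind of shortcut
    // that gets paid for later in debugging.
    if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
        log.Fatal("read failed: ", err)
    }
}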
"leaving me confused as to which error happens when" Panics happen when something very bad happens. You shouldn't be accessing off the end of an array, it signals that you've probably programmed it incorrectly, rather than it being a problem with your input (~ish). Same with divide-by-zero and type assertions. However, they provide another control path that in a limited number of situations, can give elegant results.
That's his whole point. He doesn't want to ignore errors but feels that Go is so verbose about it that it's a pain for programmers to deal with. Case in point: print. Would you check its return value every time you call it? No.
Contrast that to other languages, such as python, where you don't ignore the errors because you know that if something goes wrong, an exception will be raised.
Note that, in that case, we don't check if 'some-param' is in POST, nor what happens if it's not a number. Basically, an exception will be raised and a 404 will be shown.
> Basically, an exception will be raised and a 404 will be shown.
I understand your point in general, but have to disagree with the example you chose. A 404 implies Page Not Found.
The Python framework I use (http://bottlepy.org/docs/0.9/_modules/bottle.html) makes this a 500 Internal Server Error, with the text "Critical error while processing request: /foo-bar". IMO, this is ugly, and not super user-friendly, so I usually end up writing try catches around "one-liners" to log a better error than a stack dump, and present a friendlier message.
If he really wants this kind of framework-provided error message, then a panic would be more appropriate, because the HTTP server in Go will do the same thing as bottle. It's definitely not going to be as succinct as the Python version, but it'll be more succinct than handling the error by hand.
It's actually worse than that. The blog has error handling code like:
if err := datastore.Get(c, key, record); err != nil {
    return &appError{err, "Record not found", 404}
}
This is pretty typical Google Go error handling, where errors are only actually used as boolean flags. Errors are universally returned as type 'error', so to actually do anything with the error you have to first cast it to some specific type. But that type isn't part of the method signature, so you have to dig around in the source or, if you are very lucky, it was documented someplace. You end up having actual error handling, plus even more error handling for when you get the error type wrong.
For instance, if the datastore could return 'record not found' or 'io error', they would usually be treated the same way by the code, because doing otherwise would be even more of a hassle.
I think this is exactly the point. You have to read the comment to get the error types (isn't compiler accessible), it doesn't list what all the possible error types are (doesn't say the ones documented are exhaustive), and the code returns a singleton error (no state) presumably so the caller doesn't have to do a bunch of casts..
> You have to read the comment to get the error types (isn't compiler accessible), it doesn't list what all the possible error types are (doesn't say the ones documented are exhaustive)
You're absolutely right. People should be able to use libraries without reading their documentation, and instead rely on compile failures to slowly iterate their code towards perfection. /s
Seriously, though, I believe expecting programmers to read documentation of a library they're using is a pretty low bar.
Also, no it doesn't list what all the possible error types are. To do this would require what amounts to checked exceptions in Java. The issues with these are relatively well known.
First, it complicates versioning as adding a new error type is a breaking change for all clients. When they get an error from an API, most clients either return that error verbatim, decorate that error slightly and return the decorated error, or handle a few specific error situations and return an error in all other cases. Declaring all possible error types makes your programs brittle, as it is easy for libraries you are using to break your code.
Second, they are a hassle for larger programs that touch many systems. It is easy to declare that you return an EntityNotFound error. But it is not so easy to declare that you return an EntityNotFound, MemcacheCASConflict, FileNotFound, FileExists, PermissionDenied, Timeout, InvalidJSON, ConnectionClosed, or TemplateRenderingFailed error. This is a perfectly reasonable set of errors for a simple method that gets a value from the datastore (possibly caching it), decodes a field as JSON, and writes a template to an HTTP connection. Any 'simple' wrapper of this method then inherits all of these declarations. Now certainly with Java, IDEs will "fill in all the forms" for you, so this problem is a little more palatable, but Go does not require programmers to use (often heavyweight) tools to make them productive.
> code returns a singleton error (no state) presumably so the caller doesn't have to do a bunch of casts
This is a little disingenuous, as you don't really know why they made it a singleton error. I can't think of any state that would be useful in this situation, can you?
The Java version of this API throws an exception with a single field, the Key that could not be found. I think this is supremely unhelpful and just adds clutter to the documentation, as the user of this method clearly already has this information in hand.
> Errors are universally returned as type 'error' so to actually do anything with the error you have to first cast it to some specific type. But that type isn't part of the method signature so [...]
This is quite correct and what's really broken with Go's approach to error handling and documentation.
I respectfully disagree. You almost never cast the error. Two approaches are common in the Go community.
1. Error constants. You can use == or a switch statement on these to do control flow.
2. Custom Error Types. With these you typically use a type switch, which uses reflection to dispatch on the concrete type. In this case you might also cast the error if you need specific data off of it, but most of the time you don't need any more than the string the Error() method returns.
Both of these use errors the way Go intends them to be used: for control flow at the location where it makes the most sense.
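For what it's worth, here's a rough sketch of approach 1 with a home-grown sentinel value (ErrNotFound and Lookup are made-up names, not a real API; approach 2 is what the type switch a few comments down shows):

var ErrNotFound = errors.New("record not found") // package-level sentinel; needs "errors"

func handle(key string) {
    v, err := Lookup(key) // hypothetical func returning (string, error)
    switch err {
    case nil:
        fmt.Println(v) // needs "fmt"
    case ErrNotFound:
        // an expected condition, handled as ordinary control flow
    default:
        // anything else is unexpected
    }
}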
You should note that the above test code is actually far less verbose than the analogous end-user code, given that the timeout_test.go clearly expects the 'err' var to be a net.Error.
Now consider the case of a network subsystem, with funcs having in args of type 'bufio.Reader' and/or 'bufio.Writer'. At some prior point you may have set 'net.Conn.SetReadDeadline(aPointInTime)', and in your stream processing funcs you may naturally encounter either net.Error (due to timeout) or buffer overruns, etc.
What would call site look like? Can you safely cast to (net.Error) like the test code? Not unless you can live with panics on interface type mismatch.
// hmm. what is the error type?
_, e := readMessage(reader)
switch errType(e) { // you write this reflective func
case NET_ERROR:
    if e.(net.Error).Timeout() { /* handle timeout */ }
default:
    /* deal with other errors */
}
You are using an internal testing package as an example?
First, in real code you wouldn't cast to net.Error the way the test code does. That code is testing internal details of the net package and isn't meant to be a guide for idiomatic consumption of the package at the level you are describing.
Here is what you would actually do in production code:
_, e := readMessage(reader)
switch et := e.(type) {
case *net.OpError:
    // handle OpErrors specifically
    if et.Timeout() { /* handle timeout */ }
case *net.AddrError:
    // handle AddrErrors specifically
case *net.DNSError:
    // handle DNS errors specifically
case net.Error:
    // handle all the rest of the net package's errors
default:
    // deal with other errors
}
Of course that's just for low-level code. If you want to see whether net.Error code is leaking through the wrapping package, then you would perhaps use cases looking for all the wrapper package's specific errors and then look for net.Error types after that.
Most of the time you won't be dealing with the net package either. Instead you will be dealing with the http package for instance where your errors will mostly look like this: http://golang.org/pkg/net/http/#variables for which you can use a regular switch. I have just listed both of my cases taken directly from the stdlib including the case you listed.
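Concretely, a "regular switch" against those package variables can look like this (just a sketch; it assumes an *http.Request named r is in scope):

c, err := r.Cookie("session")
switch err {
case nil:
    _ = c // use the cookie
case http.ErrNoCookie:
    // the cookie simply isn't there; hardly an exceptional situation
default:
    // anything else is unexpected
}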
IO read/write functions are not limited to the net package. net.Conn supports a subset of these (interface types), and it is entirely idiomatic and possible for a net.Conn flavor to end up at some deep layer of your code, in a function accepting generic io in-args.
Functions taking non-net "in args" and returning "error" cannot be assumed to always return "net.Error". You will need to test the type (reflectively) and then safely cast.
To be clear, the two-valued cast I used in my example never panics. It returns (casted value, true) or (nil, false). No need to write any reflective functions.
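In other words, something like this (readMessage and reader being the same hypothetical names from the example above):

_, e := readMessage(reader)
if nerr, ok := e.(net.Error); ok && nerr.Timeout() {
    // handle the timeout; ok is simply false (no panic) when e isn't a net.Error
} else if e != nil {
    // deal with other errors
}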
As to your example of io.ReadFull: what about it? If it gets an EOF before filling the buffer you passed in, it'll return ErrUnexpectedEOF per the documentation. If it gets any other error, it'll return it instead. I haven't actually looked at the source in a while but that's how almost all of the byte stream functions work.
I suppose it's all a matter of taste. The error handling design decision is one thing I really like about Go. I write high reliability embedded software and "throw/catch" just doesn't cut it for me since it obscures possible error conditions thrown from subroutines. I want to have to explicitly deal with those errors whenever they might occur and error returns are a good way of doing that.
For every language feature someone likes there's someone else who hates it, and vice-versa.
Checked exceptions need to be handled properly. It depends on the sanity of the API.
However, the standard way of handling checked exceptions is a lot easier to maintain than error return values.
The only mainstream language that has checked exceptions is Java, and it is almost universally agreed, even within much of the Java programming community, that checked exceptions cause more problems than they solve.
Of course, unchecked exceptions have the problem that it is basically impossible to keep track of what exceptions a function might throw (because it would have to document all the exceptions that any functions it calls might throw too, and so on and on).
In the end, of all error handling mechanisms around, Go is by far one of the best.
Easier to maintain how? I don't think I see it. Return values, at least as a mechanism, are strictly simpler; they do not introduce new control flow. They just use "return." Control always reverts to the caller.
For an exception, you don't know whether it's the caller, the caller's caller, some generic exception handler, or what. It's no fun trying to track down all of the callers of A, some subset of which might not catch exception X, then track down the subset of those callers which catch X.
Otherwise adding or removing error codes seems equivalent to adding or removing exceptions. Same with trying to decide which exceptions/errors to handle or bubble up.
I write Python and Go code every week, and have since I started using Go almost 2 years ago.
I appreciate Go because I'm working hard to become an engineer who builds robust software rather than a sloppy hacker that throws scripts together. One significant difference between the former and the latter is carefully handling errors versus not.
Python's error handling seems much more succinct only because most of us don't bother handling errors at all! Every other line throws many exceptions, but we ignore this for convenience. (Those ugly "except ___:" statements ruin our oh-so-cool one-liners!)
Bottom line:
When I feel like having fun making something simple and getting it done _fast_, I use Python.
When I feel like building something that _needs_ to work -- especially anything that does more than one thing at a time, or should use all cores efficiently -- I use Go. And yes, that means taking error handling seriously.
I don't understand why anyone would "leave" a language that was working well for them just because of some hype. Don't get me wrong, I think we should all be striving to learn new things, but that doesn't mean we should just drop everything to jump on the next bandwagon.
Your statement seems self-contradictory. Should we strive to learn new things, or should we not jump on the next bandwagon? Are you saying that we should try new things but not really try them very hard, or what?
There is such a thing as progress, and in my opinion the way you get it is by spending lots of time trying new things and seeing which ones work. So I'm in favor of "leaving a language that was working well for them just because of some hype."
Because hype != progress. Hype is basically a bag full of air.
You can try new things, but abandoning the old ones because of hype (or thinking something's better 'cuz "ppl who don't know anything about languages said so") instead of true technical merit is pretty fucked up :)
Then again, Go ain't bad, but I don't see it being enough better than Python to switch in this case. Python is better for many things as well. Not a large enough step.
Hype is a bunch of people claiming something is good. If there are enough people, you should probably go find out whether it really is good. If it really is, then voila! Progress happened. If it's not, you stop. This guy is busy finding out, that's all.
Fair enough, but I fear that most of the time hype comes from people who fear missing the latest boat, or who think it'll help them if they say stuff like XX is cool (a misplaced positive attitude always helps more than misplaced criticism).
With mass globalization of the hype, we have countless examples of hype - even for programming - around things that aren't especially good.
Garbage collection relieves you from the drudgery, accounting and bureaucracy of manual memory management and indirectly leads to more expressive forms of programming once function boundaries are freed from having to specify who owns the data being passed back and forth. But you still need to be aware of space vs time usage, you still need to know where memory is being allocated, used and kept alive in your program, and you can still have "leaks" where a data structure is rooting too much dead data. This isn't immediately obvious to the novice, but that isn't IMHO a good reason to dislike the "magic" behaviour of GC. In the hands of someone who knows what's going on, programs get much simpler.
Exceptions relieve you from the drudgery, accounting and bureaucracy of manual error passing and recovery, and indirectly leads to more expressive forms of programming once function boundaries are freed from having to specify how rich error conditions are passed back along the chain of callers. But you still need to be aware of errors that require aborting and errors that can be fixed and resumed. You can still ignore exceptions that leads to programs crashing when they shouldn't. Looking at a program using exception handling, it sometimes isn't immediately obvious to the novice how the exception plumbing is working (especially since you don't usually see it at the function call level), but that isn't IMHO a good reason to dislike the "magic" behaviour of exceptions. In the hands of someone who knows what's going on, programs get much simpler.
>In the hands of someone who knows what's going on, programs get much simpler
I never really got this argument. With RAII types that have value semantics (ala shared_ptr) what is so difficult about memory management in C++? I guess there are reference cycles, but weak_ptr can help there. Manual ref-counting (like a COM AddRef/Release pattern) can be tricky, but that is where attention to detail, code reviews and basic competence come in. RAII wrappers with value semantics help here too. Most problems I have seen with that have been people simply not bothering to learn even the basics or using "smart" objects without understanding how they work or how to use them properly, which really isn't that hard, honestly.
>that isn't IMHO a good reason to dislike the "magic" behaviour of exceptions.
Agreed, my primary pain point around exceptions is people that write functions that mutate some state and are interrupted in the middle via an exception and stack unwind leaves objects in some franken-state. Transactional semantics in the face of exceptions don't just happen except in trivial cases (i.e. functions that don't mutate state or have trivial unwind semantics where they must only restore say a single mutated value).
That said the same problem exists anytime there is state mutation and multiple, failure exit paths. For some reason I guess the code I have seen using error code return style tends to handle this better. Could just be some kind of selection bias.
In my experience of web server apps, it's been fairly easy to guarantee that either the whole request succeeds, or - if there was an exception - every change is rolled back. Both user state and database state has been transactional, and state that lived on in memory in between requests was either read-only or caches.
In UI apps, it's much much harder to guarantee transactional semantics unless you're using persistent data structures or similar techniques that make in-memory transactions / undo trivial to make correct. So it makes more sense for exceptions there to save user data where possible, log the error (potentially back to the vendor), and restart the app.
(I don't want to talk about C++. I think both C++ exception handling and memory allocation are broken by C++'s design. You can only make it sort of work with coding standards, and even then it's labour intensive. If you don't yet understand why GC increases productivity, it's an epiphany you'll need to look forward to. Bonus: GC also makes exception safety far easier. For example, you can write a stack.pop() that returns the value popped - one of my favourite examples of C++ being broken.)
>If you don't yet understand why GC increases productivity, it's an epiphany you'll need to look forward to.
Oh I fully understand/agree with that, I just don't find C++ memory management to be too hard. The concurrency concerns pointed out by others are true, though. Whether they (the concerns) are offset by the lack of deterministic destruction and the extreme triviality of creating reference chains that root things far longer than needed due to the root being live while the rest of the chain should be dead, is another question.
As you pointed out, in some domains transactional code is fairly straightforward, or at least tractable without inducing massive depression. In some other domains, not so much :)
The domain of programs for which C++ memory management is not hard has only a partial intersection with programs that are small, expressive and easy to understand.
That is, if you take C++'s techniques for memory management for granted, simpler approaches to problems won't even occur to you.
The problem with shared_ptr is that you can't use it for fine grained stuff because it's terribly inefficient. It also has problematic concurrency implications.
But more importantly, what makes C++ memory management difficult is that different libraries do it in completely different ways. Bridging that mess is error prone and a huge mental burden.
> With RAII types that have value semantics (ala shared_ptr) what is so difficult about memory management in C++?
For me, C++ memory management becomes challenging in the face of concurrency. Particularly when you write applications that are event-driven, instead of thread-based, and you've surrendered to an event loop. It becomes challenging to keep track of object lifetimes.
Rejecting something because it's "from the 1970s" seems like a particularly weak way to roll. Automatically equating "old" with "broken" is just as lazy as automatically equating "new" with "pointless".
The author of the blog post is overlooking a major feature here. Go does multiple return values, so you can do things like this:
i, err := getValue()
However, if you want to ignore a return variable - in this case the returned error - you use an underscore, like this (same as in Haskell):
i, _ := getValue()
This is Go's mechanism for handling (or ignoring) errors, which imho is really nice because you don't have to learn anything extra, like throwing/catching error mechanisms.
Handling errors in one spot doesn't work for all situations, though, right? Especially in an OO environment an object may need context for retrying a request, or some other kind of error decision making. So in practice some exceptions percolate through the object graph, while others are handled in-house, so to speak.
It works for most situations. If you need retry logic - exceptions is the wrong way to go. Errors returned by the standard library are most likely not errors you want to retry by default.
That depends, though, doesn't it? Just about anything network-related is subject to flakiness, disconnection, and timeouts. That means HTTP, database connections, sockets, etc.
If you were to handle the same error cases in Python that you gave examples for in Go, it would probably be even more verbose with the try/catch wrapped around.
> Errors passing silently – ticking time bombs to go
Are you kidding? Unless exceptions are documented well, which they rarely are, this is a much greater problem when using exceptions.
>Are you kidding? Unless exceptions are documented well, which they rarely are, this is a much greater problem when using exceptions.
No it's not. If you get an exception you didn't prepare for, your code fails, not silently but very loudly. The argument to have is whether this is better or worse than your code silently continuing to do all the next steps even though the first one failed.
(FWIW I agree with the article, but will say it's kind of surprising to see that point of view from a python programmer given the python approach to type-checking)
> If you were to handle the same error cases in Python that you gave examples for in Go, it would probably be even more verbose with the try/catch wrapped around.
Not sure what you are talking about:
Is catching all exceptions really what you wanted to do here? A generic appError() is not equivalent to handling each possible failure mode by type, as the Go example does.
As usual in these threads about Go, I really wish people would consider Haskell as a nice alternative. A lot of people write Haskell off as "academic" or "impractical", which I feel is not an entirely fair assessment.
Particularly: Haskell is fast, concurrent by design (whatever that means, I'm sure Haskell is :P), typed but not cumbersome or ugly (less cumbersome and ugly than Go's types, even) and--most importantly for this article--does error handling really well.
In a certain sense, Haskell's main method for handling errors is actually similar to Go's. It returns a type corresponding to either an error or a value (this type, naturally, is called Either). This method is special, oddly enough, because it isn't special: Either is just a normal Haskell type so the error checking isn't baked into the language.
Coming from another language, you would expect to have to check your return value each time. That is, you fear your code would look like this pseudocode:
if isError val1
then pass val1
else val2 <- someFunction val1
if isError val2
...
However, this is not the case! The people implementing Either noticed that this was a very common case: most of the time, you want to propagate an error value outwards and only do any work on a valid value. So you can actually just write code like this:
do val1 <- someValue
   val2 <- someFunction val1
   val3 <- someFunction val2
   someFunction val3
then, if any of the values return an error, it gets propagated to the end of the code. Then, when you want to check if you actually got a valid value--maybe in some central location in your code, or wherever is convenient--you can just use a normal case analysis.
Additionally, while the errors do pass through essentially silently, the type checker does ensure you deal with them at some point before using or returning them. If you ever want to get the Int out of an Either Error Int, you have to either handle the error case somehow or explicitly ignore it (e.g. with a partial pattern match). The latter option will generate a compiler warning, so you can't do it by accident without being notified.
So the mechanism is simple, but it can also stay out of your way syntactically. So what else is this good for? Well, it's just a normal data type, nothing special; you can use existing library functions with these values. For example, you can use the alternation operator <|> to find the first non-error value:
val1 <|> val2 <|> someFunction 5 <|> ...
This is often a very useful idiom which would be harder to write with a different way of handling errors. There are more utility functions like this (e.g. optional) that let you make your intents very clear at a high level. The optional function, for example, lets you do exactly what it claims: you can mark a value as "optional", meaning that any error from it will be ignored rather than propagated.
You can also layer on this error-handling logic on other similar effects. For example, there are some types (like LogicT) that represent backtracking search. Combining error-handling with nondeterminism gives you a question: should an error cause the whole computation to fail, or only that particular branch? The beauty is that you can choose either option: if you wrap LogicT with ErrorT (this is just like Either except it can be combined with other types) an error will cause the whole computation to fail; if you wrap ErrorT with LogicT, the error will only cause the current branch to fail. This not only makes it easy to choose one or the other, but also makes it very clear which one you did choose: it's right there in the type system in a very declarative fashion.
Haskell also has a bunch of other advantages which aren't worth going into here. I think anybody looking for something like Go--a fast, high-level, concise, typed alternative to Python--should definitely consider Haskell. While it may not seem so at first, I believe that Go and Haskell are good for a very similar set of problems and so actually overlap quite a bit.
If you really dislike some particular things about Haskell, you should also consider some similar languages like OCaml and F#. I personally prefer Haskell, but there are definitely good cases to be made for either of the other two.
Haskell (IMHO) won't ever be mainstream for much the same reasons Lisp never was (or will be?): it has an incredibly high learning curve (eg [1]).
This is really the problem of the "pure" functional languages. Functional programming is suited to some tasks. With others the fit is almost tortuous. Imperative programming is well-suited to how we think and how we break down tasks.
If you look at the popular languages they're pretty much all multi-paradigm (eg Ruby, Python, Go, C#, arguably even C++/Java) meaning they give you the low-hanging fruit of functional programming while still being in a relative sweet spot of being easy to learn yet reasonably expressive.
Why I think Go has a very bright future ahead of it is that it is the first of these multi-paradigm languages to combine all of these features:
- easy to learn (seriously, go look at how short [2] is; you can knock that out in an afternoon);
- it is minimal. I LOVE Go's minimal OO model for example; and
- it is statically typed.
If you look at the other statically typed multi-paradigm languages you have C#, which is generally well-regarded... except (Mono notwithstanding) it is very Windows-centric. Java is, well, Java. C++ is incredibly complicated.
The only thing Go is missing is a mode for (semi-)manual memory management. I'm thinking of something like Obj-C's ARC (in iOS 5+), and it could well supplant C/C++ for the vast majority of their (already shrinking) use cases... eventually (Go has some work to do on speed).
> Haskell (IMHO) won't ever be mainstream for much the same reasons Lisp never was (or will be?): it has an incredibly high learning curve (eg [1]).
Bingo. Anybody that thinks Haskell will ever escape its niche has not spent enough time working with rank & file programmers. That doesn't mean you shouldn't consider it or that it can't find a large enough niche to sustain itself but it does mean that it's unlikely to ever overcome what Carmack refers to as "externalities" for a lot of prospective users.
This kind of comment kind of misses the point. The monadic style of threading error values is perfectly compatible with imperative programming if only language designers knew about it.
Problem is that you then have to explain monads to average programmers who'll be using the language.
I love monads (I love arrows more, but that's another issue). I'm a language design geek. My level of expertise is different from someone who has just been hired into a new job and wants to get things done.
I suspect that most language designers know about monads - I'm not sure Rob Pike did, but it's pretty common knowledge among folks who do this stuff. Until you can explain them to the folks who'll be using the language in a way that doesn't make them seem like arcane black magic, though, you'll find programmers will just say "I don't get this" and use what they're familiar with.
>Problem is that you then have to explain monads to average programmers who'll be using the language.
No you don't, this is exactly what you don't need to do. Do we explain the finer points of stream implementation to would-be C++ programmers? No, when they need to do something non-standard with a stream/monad, that's the time to talk about them. For the majority of programmers it's just "when you need to write to a file you do it like this".
The Either type described in the first comment is also a Monad. They are more important in Haskell than just "when you need to write to a file..." and I don't think you can totally grasp the error handling method without understanding them.
I couldn't disagree more. The Either type may have a Monad interface, but it's a Sum type and languages that don't have thousands of Monad tutorials also have Sum types. There's no need to explain Monad theory to someone just to explain Sum types or even why chaining do expressions doesn't do unnecessary computations in the face of errors. Just show them the code for join and >> and they'll see why it works. No need to bring up Monads.
> As usual in these threads about Go, I really wish people would consider Haskell as a nice alternative. A lot of people write Haskell off as "academic" or "impractical", which I feel is not an entirely fair assessment.
As someone with 10+ years of programming experience in the industry but no formal college education, I see the problem with Haskell (and other, similar functional programming languages) as being that in order to fully understand the language, you need to understand its theoretical foundation. The same goes for monads, combinators, etc. Without understanding the theory they're based on, it's impossible to use them and also hard to read programs that employ them.
The theoretical foundation of Go on the other hand is much smaller, and way easier to understand, especially for people without a formal CS education.
So, yes, I admit that I don't "get" Haskell in all its glory, and I'm not ashamed of it because I know that most people in IT don't and that's why it will always remain relatively obscure even if its approaches to a number of programming language problems are technically and theoretically sound.
"So you can actually just write code like this:
do val1 <- someValue
   val2 <- someFunction val1
   val3 <- someFunction val2
   someFunction val3
then, if any of the values return an error, it gets propagated to the end of the code."
I have seen this (the error monad) mentioned before as a "nice" way of handling errors, even with explicit error returns. I beg to differ - the error messages produced by such a program will be obscure, as all the context is lost - if the first call to someFunction fails, for example, there's not necessarily any indication that the error came from that call rather than the one after it.
Go's explicit error handling means that it's easy to add meaningful context wherever relevant - the error messages printed by such a program are likely to be considerably more useful.
In the end, adding error checks is doing useful work. Each error case should be considered individually, and I've often found it to be the case that it's useful to treat errors as regular values (for example by collecting a bunch of errors, or returning the most important error only).
I can understand the control flow of a Go function by inspecting it on the page (without a glance at the documentation). That's a huge plus.
In theory the lack of context problem only happens if you use Maybe's Nothing to signal errors. Something else such as Either should allow you to include extra data on the error case.
As for the error collecting bit, I think Haskell can do that as well. Errors are just regular values in Haskell, and it's just a matter of not using do notation, or of using a different error monad, if you want to collect a list of errors instead of bailing out when the first one occurs.
I'll have to agree with you on the control-flow-at-a-glance issue, though. This gets really tricky in Haskell, especially when you take lazy evaluation into account.
>I beg to differ - the error messages produced by such a program will be obscure, as all the context is lost - if the first call to someFunction fails, for example, there's not necessarily any indication that the error came from that call rather than the one after it.
This is completely incorrect. If you use Maybe, then you get Just value or Nothing, in which case there is no indication of what error occurred. This is used when you don't want to consider what error occurred, simply that the computation failed. If you use Either, then you get an error or a value. The error obviously gives the context you are looking for.
I love Haskell too, but I always run into problems with distributing my compiled haskell binaries to other systems that don't have a GHC compiler available to them.
For example, if I compile a binary on my Ubuntu machine and SCP it to a server running centOS Linux, the binary just fails to run because of shared lib issues.
Even when I try to compile the binary statically, I still run into similar problems.
This is in stark contrast to any Go/D/C/C++ binary, which works fine.
That'd kind of defeat the whole point of linking a binary. Were you using cabal to build executables? What shared libs were missing on remote machines?
When I just do a simple hello world app, I get this error
ghc helloworld.hs -o helloworld
<scp binary and ssh to remote machine>
./helloworld
./helloworld: error while loading shared libraries: libgmp.so.10: cannot open shared object file: No such file or directory
However if I compile statically, I get a different error
ghc helloworld.hs --make -optc-static -optl-static -optl-pthread -static -o helloworld
<scp binary and ssh to remote machine>
./helloworld
FATAL: kernel too old
Segmentation fault
Amusingly enough, I'm pretty sure "FATAL: kernel too old" is a problem with glibc, so it's actually the C standard library that's preventing you from running your statically-linked Haskell binary on your server.
So the haskell RTS requires the same version of glibc on both machines to make it work?
This wouldn't be too much of a problem but I don't have administrative privileges to these servers. So when I want to write throwaway scripts or programs, I find myself turning to Go or D and they work without any quibbles.
Nope. In theory you'd have exactly the same problem with any statically-linked binaries compiled on that machine and run on that server, regardless of what language they're written in. glibc has a minimum kernel version requirement and the glibc you're statically linking against just plain isn't compatible with the kernel on the machine you're running it on. Dynamically linking to a newer glibc and running against an older one doesn't generally work either, and IIRC may even result in apps that appear to start but crash unexpectedly when they try to access versions of library calls that aren't there, again regardless of language used.
Your problems with the non-statically-linked version, on the other hand, can probably be solved by copying the appropriate libraries to a directory on the server and pointing LD_LIBRARY_PATH at it... at least until you run into glibc problems.
Basically, what you're doing isn't supported and the fact that it worked for you with languages other than Haskell is mostly luck.
Running 2.6 I believe, but these servers are not managed by me, they're owned and administered by my workplace, so I don't have any say on upgrades etc
Of course there is a difference between an "index out of bounds" error and a network connection error: one is a programming error and the other is not.
On the other hand, those of us who have loved and embraced C don't mind this at all. It's up to us how to arrange the error handling.
Different situations warrant different strategies. Sometimes it's ok to return NULL, sometimes it's ok to return true/false and pass the actual result back indirectly, sometimes it's ok to return the value but use an external error flag. Sometimes it's ok to mix control flow and external flagging of errors, namely what an exception is, effectively.
However, in my opinion, languages that point too strongly to one single error handling mechanism are more irritating than languages that leave it up to the programmer. I'm particularly wary of exceptions per se: they're a really nice, clean concept, but on the other hand I rarely hit a use case that would be a perfect fit for exceptions, and even those perfect cases for exception handling wouldn't look too shabby if designed with other error handling strategies.
On the other hand, many cases where errors are handled with exceptions get awfully ugly in the normal case. For example, the "try: ... except <name-of-error>: pass" idiom in Python. I can't count the number of times I've written a small wrapper function around a trivial exception handling case, that just flattens the result and error into a suitable value if that fits my use case.
When dealing with C and error handling, I have a rather simple rule: if, in the process of writing a simple program (1k+ lines), you did not find a bug in the libc documentation, then you are not looking hard enough at return values.
I find Python's exceptions model to be something that does occasionally frustrate me. For example, I'm used to using NSIS for some things (the PortableApps.com Launcher being one thing); its model is that a command may set the global error flag. This means that some sorts of things will fail silently - occasionally bad, but generally good. Some styles of Python scripts (generally automation scripts rather than full-blown programs) would do much better ignoring most errors. When you have a script merrily chugging along and all of a sudden due to some obscure corner case (perhaps not even documented) in a small, incidental part of the script the whole thing quits, it can be rather annoying.
The Go approach lets you catch errors if you want, or drop them, much more easily than the Python approach. My inclination at present is that the Go approach is more likely to end up with error-resilient code, as it encourages dealing with errors each time, while Python typically encourages you to let the caller deal with an exception if you can't - but often I think the caller doesn't realise that he should. Or, after a couple of levels of indirection, it's not even documented that that exception can occur. (This sort of thing, I believe, is something that Java's checked exceptions were supposed to help with - either deal with the exception or declare it.)
For myself, I've had lots of experience with Python and am now dabbling with Go, using it for two smaller projects to decide which to use for my next major project. At the very least, I think the Go approach is worth trying in a serious project, so that its merits and those of the Python approach can be more completely analysed.
I'll agree that this is perhaps the single thing that I don't like about Go. Now, you can argue it helps with "explicit error handling" and all that, but in that case you might as well have must-be-caught exceptions.
What is it about panic/recover that makes forking the language necessary, rather than merely using Go in a non-idiomatic way?
I could see forking it to add generics in a way that the official guys aren't satisfied with, but why is that needed to use another style of error handling?
Anyone using panic/recover across API boundaries to emulate exceptions should have their brains gofmt-d.
But so far I have not noticed anyone doing this, perhaps because actually returning and handling errors properly is not as much of an issue as most people make it out to be.
I personally like Go's error handling so I wouldn't consider doing it, but what exactly would be the issue with doing so?
The reasons not to that I have seen all basically boil down to "because you're not supposed to; it's not what it's for". Because of how recover works, it certainly wouldn't make sense to use it exactly the way try/catch is generally used, but what exactly would be the problem with using panic/recover for more common (less 'exceptional') cases if we've already decided not to value writing idiomatic code?
It strikes me as a terrible idea, but try/catch does too. I guess I just don't understand why it would be even worse than that.
Because one of the nice things about Go is that I don't have to worry about this.
It's not just unidiomatic code in itself; it creates unidiomatic APIs that behave in ways Go programmers don't usually have to worry about.
Unlike Python programmers, who should always be worrying about what exceptions any function or method they call could throw, hell, even setting a property or adding an item to a map or list can throw all kinds of exceptions. And this is rarely documented, and when it is documented it is usually incomplete, because whoever wrote that code doesn't know what exceptions might be thrown by any other code he calls.
Just to be clear though, there is no point "adding" exceptions to Go because the functionality is already there, just named something else and meant to be used in a completely different way?
What would forking the language do to solve that? What changes would you actually make? Would it just be a matter of giving it a new name so that people didn't expect idiomatic Go things to work with it?
Are we talking about just forking its standard library and not actually changing the language?
So instead of
x, err := something()
if err != nil {
    handleError(err)
}
n, err := somethingElse(x)
if err != nil {
    handleError(err)
}
andSoOn()
you could write
switch x := something().(type) {
case error:
    handleError(x)
case string:
    switch n := somethingElse(x).(type) {
    case error:
        handleError(n)
    case int:
        andSoOn()
    }
}
?
There's also the option of doing it on an explicitly tagged union, Erlang-style: `something` returns not `Value` or `Either(Error, Value)` but `{ok, Value} | {error, Reason}`.
This means you can handle the error:
case something() of
    {ok, Value} ->
        %% ... handle Value ...
        Value;
    {error, Reason} ->
        %% ... handle Reason ...
        Reason
end
or you can "ignore" it
{ok, Value} = something()
but (and this is important) the latter *will not* pass silently if `something()` returns an error.
Instead, it will raise a "badmatch" error, similar to a Haskell-ish partial pattern match failure.
I propose a new HTTP verb, MEH, to address the belief that the HTTP verb mechanism is unnecessary or over specified. The semantics of HTTP MEH are application-defined, so it can be used to fetch, update, replace, query, delete, or whatever the application defines to be necessary. Services honoring HTTP MEH implicitly signal that interoperability is not supported, but that intermediate proxies 'should do the right thing' regardless.
On the topic of when to use error-returns vs. when to use panic:
I struggled with this aspect of Go coding too and what I decided was that, in order to choose your approach to errors, you should think about how important it is that your function can be composed into an expression:
x := foo(a) + bar(b)
vs.
c, err := foo(a)
if err != nil {
    ...
}
d, err := bar(b)
if err != nil {
    ...
}
x := c + d
It's a trade-off. By using "panic", you enable more concise and compositional code. By using error-returns, you are being more explicit and making it easier for the caller to handle errors at the call-site.
Usually, functions that can fail are not the kind you desperately want to use in expressions anyway. So error-returns are more common. But there are many situations where errors can happen and where composition is a big win. In those cases, you reach for "panic".
If you look at some of the operations that can panic--array indexing, division, and regexp.MustCompile(), for example--you notice that these would become very cumbersome if they used error-returns instead.
There's also the issue of initialization. That's the justification given in the case of regexp.MustCompile(). An initializer must be a single expression.
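To make that concrete (a sketch; the pattern itself is arbitrary): with MustCompile the package-level declaration stays a single expression, while the error-returning Compile forces you into an init function, where you usually end up panicking anyway.

// single expression; panics at startup if the pattern is bad (needs "regexp")
var validID = regexp.MustCompile(`^[a-z][a-z0-9]*$`)

// the error-return equivalent
var validID2 *regexp.Regexp

func init() {
    var err error
    validID2, err = regexp.Compile(`^[a-z][a-z0-9]*$`)
    if err != nil {
        panic(err)
    }
}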
The words "panic" and "error" probably lead people to think that you should make the decision based on severity but I don't think that's right.
(That's just my take on it. I'm no Go expert so there are likely better ways to think about it.)
As I understand it, it is about severity. I think "panic" is meant for situations where there is no obvious, correct answer, like with array indexing. A nil pointer is another gimme example. Some languages have unchecked exceptions — maybe conceptualize it somewhat like that. panic() is for disasters, for serious program errors.
By contrast, a network timeout is not a disaster. It's not "normal" or desired behavior but it's well within widely-known modes of behavior.
Composition is nice to have, but in general you ought to be able to write Go as if everything that does not return errors will succeed under the vast majority of circumstances, or will only fail where failure is inevitable.
Recover is, among other things, a failsafe of sorts for cases when code you do not control (e.g. a library) panics. It might be catastrophic failure for the library, but it may not be for your code, and you need an escape hatch.
...I am not an expert, either, but I've spent a fair amount of time on the Go mailing lists. I feel like I'm actually repeating something I've read, but Effective Go doesn't talk much about this. (It does say that library functions should avoid panic.)
Panic() should only be used to signal either programmer error, or truly panic-worthy situations where state is so messed up that crashing is the only good option.
Put another way: when you call an API correctly, it should be safe to assume it will never panic().
Panic/recover can be used in rare occasions within libraries to do more exception-ish style error handling, but those panics should never be allowed to escape and cross API boundaries.
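A rough sketch of that internal-panic pattern (the names are made up; I believe parts of the standard library, the JSON decoder for example, do something similar internally): the exported function installs a recover and converts any panic back into an ordinary error before it can cross the API boundary.

// Exported entry point: panics from the internals never escape.
func Decode(input string) (n int, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("decode: %v", r) // needs "fmt"
        }
    }()
    return mustInt(input), nil
}

// Internal helper: free to panic instead of threading errors around.
func mustInt(s string) int {
    n, err := strconv.Atoi(s) // needs "strconv"
    if err != nil {
        panic(err)
    }
    return n
}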
I would argue that there is an obvious correct answer for array indexing and nil pointer dereference. Go could have been designed to work like the following:
x, err := *p
Where err is nil unless p is nil. And for array indexing:
x, err := a[n]
The runtime could do its bounds check and return (zero, IndexOutOfBoundsError) if the check fails, where zero is the zero-value for the type. It seems to me that these solutions are perfectly workable except for the massive code-size/verbosity explosion they would induce.
In such cases, the code should effectively prove that p is not nil and n is within bounds before performing the risky operations. Maybe a good plan is to always validate input first so that you can write expression-oriented code that only fails in the case of programmer error.
A situation from my experience was a recursive transformation where an intermediate call had no good way to deal with an error except pass it along to its caller--so I used a panic within the package for that. In hindsight, I think a better solution may have been to validate the input in an earlier pass so that the recursive transformation should always succeed.
So, in cases where the program can validate inputs first, it should do so and then be free to use compositional code that panics when the validation was broken. In cases where validation cannot remove error conditions, error-returns should be used.
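A small sketch of the validate-first idea (sumPairs is a made-up example): the outer function checks its input once and reports problems as an error; after that, the inner code is free to index compositionally, and an out-of-range panic there can only mean a programmer error.

// Validate up front and report problems as an ordinary error...
func sumPairs(a []int) (int, error) {
    if len(a)%2 != 0 {
        return 0, fmt.Errorf("sumPairs: need an even number of values, got %d", len(a)) // needs "fmt"
    }
    return sumValidated(a), nil
}

// ...so the compositional code below can assume well-formed input.
func sumValidated(a []int) int {
    total := 0
    for i := 0; i < len(a); i += 2 {
        total += a[i] * a[i+1]
    }
    return total
}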
Reserving panic for programmer-error, as luriel recommends in his reply, seems like a good maxim. I think I'll try to use that from now on.
>That problem is errors are handled in return values. 70′s style.
>This is one of the things I can’t stand in C.
Having to check error values for every function call is indeed a pain. But C has macros, and I think they are a nice and elegant way to handle errors. I personally prefer macros to exceptions when writing C/C++ code. But to each his own.
Does anyone know if Go supports macros? If it does not, that is one ugly problem to have!
EDIT I just realized you can write custom exceptions that can provide similar information about line numbers, functions etc. So removed a line saying that was a plus for macros.
I am eternally asking myself how people handle exceptions in multi-threaded programs.
Not that error codes solve the issue at all, but it looks easier (less complex) to handle thread errors with error codes.
Well, given how hopeless it is to try to do threaded programming with Python, I guess that is not a consideration Python programmers usually worry about.
The same way you handle error return values in multi-threaded programs. Exceptions don't mean you simply ignore all errors all the time. They're just a different mechanism for communicating failures across function calls. You can code with exceptions the same way you do with errors.
Really the two styles of error communication are very similar. The default behavior for exceptions is to abort and let the parent deal with it, but you have the option of continuing on instead (catch). The default behavior for returned errors is to keep going, but you have the option of aborting and informing the caller. In either case you have to examine the code you're calling or its documentation to find out the details of what could happen when things go wrong.
Exceptions have the advantage of the abort-behavior requiring zero code, but it may be less obvious to beginners what is going on. In either case, people just don't like to be surprised, so they prefer what is familiar.
In C#, which has Tasks for concurrency (very similar to goroutines), an uncaught exception in a Task will bring down the entire program at some non-deterministic time, unless you make sure to observe it somewhere. So generally you're using some extension method so that invoking doSomething() in a Task looks like:
Common Lisp has an interesting condition (exception) system where errors have a chance to be handled without unwinding the stack. Basically, functions also receive an "error handling" object as an extra argument, which they consult when they encounter an exception. The error handling object then decides whether to continue operation or to unwind back.
Someone can fix this by creating and popularizing a simple preprocessor that implements a macro called "safely" that expands to something like "call this function, and then if it errors, report the error to STDERR and abort execution". Then just prefix the majority of your commands with "safely", and implement your own error-handling in the few cases where you care.
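In Go you don't even need a preprocessor for the error-only case; a small helper along these lines (the name is the parent comment's, the code is just a sketch) does roughly what that "safely" macro describes:

// safely reports err on stderr and aborts execution if it is non-nil.
func safely(err error) {
    if err != nil {
        log.Fatal(err) // log writes to stderr by default; Fatal then calls os.Exit(1). needs "log"
    }
}

// e.g. safely(os.Remove("/tmp/scratch")); functions that also return a
// value still need a small per-type wrapper.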
What happens in Go if there is an exception (division by zero or whatnot)? Does it just exit, no stack trace? I am so used to exceptions that I don't even know how it used to work without them. I think getting a stack trace for debugging is really important...
* It's really slow.
* Most people write their code statically, but it's dynamically typed language anyway.
* Things like '__init__'... what?
* Having something you can't see (whitespace) be significant is a bad idea.
* Even worse, the community likes to use spaces instead of tabs, which makes the 'significant whitespace' a significant problem.
* With the exception of Flask, important design patterns and lessons: DRY, SOC, DI, SRP, etc are largely ignored and called "Java-like."
Sure, python is slow. For problems where this matters, there are usually python libraries available that will do it much faster than pure python (e.g. for scientific number crunching you use numpy, which is using FORTRAN behind the scenes). There are probably some problems where performance is critical and no libraries are available, but not many.
Sure, python is dynamic; at the time it was first created it was hard to provide the level of flexible ("duck") typing python has in a static environment. Now that typing has advanced, we are starting to see some level of type annotation support in python; I would expect it to get static typing eventually, but we can't break existing code. Still, I can totally understand leaving python for a static language with good type inference, like haskell or scala. I don't think go's type system is powerful or elegant enough though.
__init__ is fantastic; it removes all the special cases around constructors you see in other languages, and makes it "just another method" that follows all the usual method resolution rules etc. This means less to learn, less to bite you, and much easier to e.g. refactor a constructor into a factory method (or vice versa).
Whitespace isn't significant, indentation is. And when you look at a piece of code, you can see the indentation far more quickly and easily than you can see the braces. At least I can.
DRY, SOC, SRP are very much part of the python philosophy. I find python's standard library is much better at following "tell, don't ask" than most languages'. I agree that many python programs should follow the DI principle better and possibly the language should have better support for it, but IME the java/spring-approach as it's usually implemented doesn't really provide DI, it's just a complex reimplementation of singletons. I've yet to see a language or framework that can really help programmers do DI right if they don't already know.
The principle behind Go's design decision is based on actual experience, and it needs to be understood by amateurs.
The idea is quite simple: errors are not some rare special cases, they are ordinary, general events.
That means there is no need for some special mechanism for handling them. They should be handled as ordinary events within a common FSM.
There is no contradiction in using the general mechanism of examining returned values to determine the state of procedure execution.
Any FSM requires if (or cond) statements. That is the essence of a program's logic.
The mechanism of exceptions was wrong. It is wrong in Java, it is wrong in C++. It is just a bad design.
The good design is to use the same mechanism the underlying OS uses. In the case of UNIX derivatives, that is the explicit checking of a return value. Because errors are common and ordinary events.
Amateurs believe that errors are rare and that they will avoid them. This is over-optimistic - in practice errors are nothing special. An error is just the alternative branch of a condition whose consequent is assumed to be success.
The error handling style is certainly different - but it's not so bad. It's a matter of style and getting used to stuff.
Maybe a better error handling mechanism is needed - but I'd prefer that it not be exception handling. Perhaps Lisp-style error handling? Whatever that is - I'm not familiar with it - but I keep hearing Lispers brag about how awesome it is!
Think exceptions, but instead of blowing away the stack on the way up to your exception handler, it gives you the option of resuming execution in some way where you left off. It also lets you encapsulate error handling better.
I am glad to see the CL condition system pop up here. This article basically turned me off from Go, and reading people's support for C-style error handling has made me question what the heck they are thinking.
There are actually several nice things about the CL system, but nothing there is going to convince someone who not only thinks that having every return value in your language serve the role of an error signal is reasonable, but actually argues that it is equivalent from a programmer's perspective, or even a better way of programming.
Instead of insulting people and declaring that you can't convince them, try posting some useful information about CL condition system and why you think it's better than everything else.
First let me apologize for the length, second I will say that to my knowledge none of this is about the CL condition system in particular, third this is basically a brain dump because I need to get back to work and still don't understand the error code camps points...
I definitely wasn't trying to insult but I recognize that I wasn't being helpful either. I was basically throwing up my hands. And while I like CL's system, I haven't come close to trying everything out there so I can't be a judge; I mean this not like a cop out but seriously, I am very poorly informed about what other languages have to offer.
But since you ask: I guess if I were trying to convince someone that exceptions are inherently a better mechanism, I would start by exploring how arithmetic is handled in the C language. Let's say we have a function called "add" that adds two numbers. Typically we would expect the return value of such a function to hold the result of that addition. It would be naive, however, to expect add to always complete without something like an error happening. While addition of two numbers seems safe, it isn't once we start using our imperfect number representations, such as floating point numbers (susceptible to overflow, underflow, loss of precision, and more), machine-sized integers (susceptible to overflow), and even bigints (what happens if we attempt to add two numbers and exhaust system heap space?). The way these are currently handled in the C language (to the best of my knowledge) is by three different mechanisms: 1) floats return special float values, like NaN, that indicate there was an error somewhere (though you can probably instruct your OS/hardware/whatever to issue these as signals instead of silently spitting out a NaN), 2) wrap-around is loosely taken as a standard for integers, but it truly has undefined consequences, so your code must guarantee that it cannot happen, and 3) the program will probably exit. This is messy, but we have learned to deal with it and it has become second nature to C programmers; that doesn't mean it is good.
But this doesn't need to be the case, we could use the return value checking that is being promulgated by some here. We can define add as...
error_code add(int a, int b, int *result)
...where int could be any number type and translate every occurrence of "a+b" in our code into something like...
error_code err;
int ret;
if ((err = add(a, b, &ret)) != 0)
{
    /* handle err here */
}
/* ret holds the answer here */
I cannot believe this is preferable to anybody; in fact, I am going to go out on a limb and say that it isn't. Instead, we just trust the plain "a+b" to work, even though we know there are cases where it won't, and in floating point math it is extremely common to hit those cases. If we were serious about writing code that handled these errors gracefully, our C code would be unmaintainable. If we were honest about the source of this laziness, I think we would admit it comes from the syntactic and mental overhead of the error handling. The fact that we don't code like this, and the very fact that C will spit out a NaN and propagate it along indefinitely, is a case in point that people dislike this kind of "check the return value" error handling. C is more or less pushing us to write code that is less robust.
If we really want to consider what happens in a Common Lisp system: you have a function "add" that takes two numbers and either returns the correct result of that addition or doesn't return at all. That means no wrap-around overflow and no NaNs; this is vastly cleaner. As far as I know, though, this has nothing to do with CL in particular; it is presumably how Python or any language with exceptions works. This is what I cannot fathom: that there are people willing to throw away this simplifying assumption because it means their code might have controlled non-local transfer of execution. That is what Joel on Software directly says (linked elsewhere in this thread): exception handling is like goto, goto is bad, even when it is contained within an apparatus that fixes its problems, namely the exception handling machinery. Or do people really think the same amount of work is involved in either system? How does that make sense? Which way would you rather write something like "add(a, add(b, add(c, d)))"? To my eye the C-style handling adds a pile of boilerplate that I'd rather not write while adding little to no value (see the sketch below). Like I said, I cannot fathom the alternative stance, so it is hard to argue against a position I do not understand. Perhaps there are places where we want execution to stay so close to the structure of the code that handling errors C-style makes sense, but I have to imagine they are few and far between.
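To make the boilerplate concrete, here is a rough Go sketch. add is a hypothetical checked addition (not anything from the standard library) that reports overflow instead of wrapping, and sum4 is what "add(a, add(b, add(c, d)))" turns into once every call can fail:

package main

import "fmt"

// add is a hypothetical checked addition on int8 that reports overflow
// instead of silently wrapping.
func add(a, b int8) (int8, error) {
    sum := a + b
    if (b > 0 && sum < a) || (b < 0 && sum > a) {
        return 0, fmt.Errorf("overflow adding %d and %d", a, b)
    }
    return sum, nil
}

// sum4 is the one-liner add(a, add(b, add(c, d))) spelled out with
// a check after every call.
func sum4(a, b, c, d int8) (int8, error) {
    s, err := add(c, d)
    if err != nil {
        return 0, err
    }
    s, err = add(b, s)
    if err != nil {
        return 0, err
    }
    return add(a, s)
}

func main() {
    if s, err := sum4(1, 2, 3, 4); err != nil {
        fmt.Println("error:", err)
    } else {
        fmt.Println(s) // 10
    }
}

One expression has become a dozen lines, and every caller of sum4 inherits the same obligation.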
Compare a more common place where people put error handlers: writing to a file. Is it better to have writefile(file, data) return an int that indicates an error, or to just say "writefile puts this data in file, period"? Here are two errors that might come up when writing to a file: we don't have write permission for the file, or the disk is full. Where do you want to handle these errors? One happens when we open the file, the other when we attempt to write to it further down the call tree, but presumably we might want to catch both and deal with them in the same place. With exceptions, you catch and handle the exceptions you are interested in. With error codes, you have to manually propagate the error up the call tree, affecting the interface of every calling function up to the highest one that can actually handle the error. So add the boilerplate to combine and propagate errors up the call tree to your task list as well. At the root of this is exactly the same issue as before: it adds syntactic and mental overhead for the programmer.
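Here is what that manual propagation tends to look like in Go; writeReport and generateReport are made-up names, but the shape is the usual one, with each intermediate layer forced to carry the error in its signature even though it can't do anything about it:

package main

import (
    "fmt"
    "os"
)

// writeReport does the actual write; it can fail on permissions,
// a full disk, and so on.
func writeReport(path string, data []byte) error {
    return os.WriteFile(path, data, 0o644)
}

// generateReport sits in the middle of the call tree; it can't do anything
// useful with the error except pass it upward, but its signature has to
// carry it anyway.
func generateReport(path string) error {
    if err := writeReport(path, []byte("report contents\n")); err != nil {
        return fmt.Errorf("generating report: %w", err)
    }
    return nil
}

func main() {
    // Only here, at the top, do we actually decide what to do about it.
    if err := generateReport("/no/such/dir/report.txt"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}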
> ...if there is an exception inside wait_for_our_men_to_come_in then we'll never close the gate.
While this is true for the code he has written, most people would (or should) consider that code buggy. In my experience, and the comments on that post confirm it, most languages have a mechanism to ensure certain side effects happen even under non-local control transfer. Where are the points of intermediate state that he claims render exceptions equally bad as error codes? Are they the states that we explicitly account for in any bug-free program, like closing the gate if there is a problem bringing the men in? I will reiterate: I truly am at a loss; this is not a rhetorical question. It seems like I am missing something.
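In Go that mechanism is spelled defer (try/finally plays the same role in other languages). A minimal sketch, using stand-ins for the gate functions from the quoted example (none of these are real APIs):

package main

import (
    "errors"
    "fmt"
)

// Stand-ins for the functions in the quoted example.
func openGate()  { fmt.Println("gate open") }
func closeGate() { fmt.Println("gate closed") }

func waitForOurMenToComeIn() error {
    return errors.New("the men were ambushed")
}

// The cleanup is attached once, up front, and runs no matter how we leave
// the function: normal return, early return, or panic.
func bringMenIn() error {
    openGate()
    defer closeGate()
    return waitForOurMenToComeIn()
}

func main() {
    if err := bringMenIn(); err != nil {
        fmt.Println("error:", err) // printed after "gate closed"
    }
}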
To sum up: one approach seems like a clean way of dealing with errors that will happen; the other seems like a syntactic nightmare and consists largely of code that I deeply feel should be up to a compiler to write.