My poor use of golang's defer woke me up (daemonl.com)
44 points by daemonl on June 23, 2014 | 24 comments



Never delete code. This is why you have git or svn, or whatever your tool of choice is. Never, ever delete code. You may think it's dumb, you may think it's crap, or useless, or whatever, but in two years you'll think, "Damn, I remember doing this already; don't I have some code for this somewhere?" And you will.

You may look at it and rewrite huge chunks because you're a far better programmer now, but trust me, re-writing code is way easier than writing it from scratch.


Yep. I know. I even deleted the github repo. I'm seriously not sure what I was doing. Remove all traces of a bad idea?


Ouch, why on earth...

Anyway, good blog post.


Nah, SCM tools were not made to be your personal snippet collector; you're better off getting a real one if you're the hoarder kind of programmer.


Sometimes you just need to burn the pictures of you with your ex and move on. In some sense you could "KEEP" everything; after all, digital space is cheap, right? But you can end up keeping a lot of code around that you will never revisit.

> but trust me, re-writing code is way easier than writing it from scratch

Not always true, and not even often true.


> > but trust me, re-writing code is way easier than writing it from scratch

> Not always true, and not even often true.

In my experience, virtually always true. Just rereading the code you wrote before will bring back the understanding you had when you wrote it (unless you intentionally wrote obfuscated code, I suppose?), and it'll be immediately obvious to several-years-on you what the shortcomings were of that idea. If you have the time, a full rewrite almost always turns out to be better code than the old version, as long as you can hold off on trying new experiments in the process.


You can always take the experience with you, but often the old code exists in such a misguided architecture that it is better to scrap it than to unwind a multitude of bad, uninformed decisions (because you know better now!).

Re-writing code is actually almost always harder than writing it from scratch, but we do it for other benefits: interoperability with legacy components, preserving expected behaviour (warts and all), risk (the old code is debugged), and culture (programmers on the team know that code). But if you don't have those requirements, you will often come out behind by rewriting old code rather than starting from a green field.

It also depends on whether the work one is doing is cutting edge (lots of experimentation and learning required) or basic dev work over relatively well known concepts.


As Brooks said, "Plan to throw one away; you will, anyhow."

There is value in a prototype - even if you don't actually use any piece of the prototype in the final product.


If you're going to quote Brooks, it might be worth noting that he's had a slight change of heart on that point: http://www.wired.com/2010/07/ff_fred_brooks/


grep -a 50 -b 50 string /dev/disk0 ftw


I assume you mean grep -C 50. Lowercase -a and -b are "treat binary as text" and "print byte offset"; lines after and before are the uppercase -A and -B.
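For the record, the context flag is easy to check against a throwaway file (paths here are made up for illustration):

```shell
# Build a small file, then ask for one line of context either side of the match.
printf 'one\ntwo\nneedle\nthree\nfour\n' > /tmp/grep-demo.txt
grep -C 1 needle /tmp/grep-demo.txt
# prints:
# two
# needle
# three
```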


> [...] sudo less /var/log/upstart/app.log, 99999... oh, this log ACTUALLY has 99999 lines. Waiting, Waiting (note to self: google the command to jump to the end, there must be one).

G (as in shift-g) jumps to the end of the file in less. Or use tail instead.


If things are configured correctly the 'end' key on the keyboard will do that too.

Also, less has built-in help if you press 'h'.


Easier to remember: 100%

0%, 45%, etc work like you'd expect


I've been bitten by similar complexities around indirectly managing the database connection pool in Go, too. There might be a little too much magic in the library (such as successfully iterating to the end of a resultset implicitly releasing the results).


That one I'm still not clear on. I THINK I should be closing that particular rows set already, the SQL is LIMIT 1 and I check if !rows.Next(){return ...}, yet... here we are :-)


> In my haste, I'd not noticed, I'm queueing up all of my rows close statements for the function end, which happens after the for loop, which opens way more than the allowed connection limit (about 100 in this case).

Here I thought this was going to be about defer and how it is error-prone compared to RAII; how it is a modern-day alloca with the same type of scoping problems, like being unsafe to use in loops; and how it has a weird order of execution, with arguments being evaluated immediately and the statement evaluated later on.

Instead it's just about having poor project management. A missed opportunity I guess.
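For anyone curious about the argument-evaluation quirk mentioned above, a minimal sketch (names made up for illustration):

```go
package main

import "fmt"

// deferredValue demonstrates that a deferred call's arguments are
// evaluated when the defer statement runs, not when the deferred
// function finally executes.
func deferredValue() (seen int) {
	i := 1
	defer func(v int) { seen = v }(i) // v is fixed at 1 right here
	i = 2                             // this later change is never seen by the defer
	_ = i
	return
}

func main() {
	fmt.Println(deferredValue()) // prints 1, not 2
}
```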


Or... lack of project management.

There is definitely something to be written on that, but - well I'm not the guy. Yet.


There are a couple of good resources you can use here, the first is just about the best reference for using Go with a RDBMS:

http://go-database-sql.org/

The second speaks of database connections:

http://jmoiron.net/blog/gos-database-sql/

The general approach:

1. Use a single *sql.DB handle; it pools connections automatically

2. Use this pattern for all single row queries:

    err = db.QueryRow(`...`, ...).Scan(&...)
    if err == sql.ErrNoRows {
    	// Handle no rows
    } else if err != nil {
    	// Handle actual error
    }
    // All fine
3. Use this pattern for all multi-row queries where you want to return a slice of structs containing the row values. Note that it is fine to call rows.Close() as soon as possible in addition to deferring it: the defer takes care of cleanup whenever something goes wrong, and the explicit call returns the connection as soon as possible:

    rows, err := db.Query(`...`, ...)
    if err != nil {
    	// Handle connection or statement error
    }
    defer rows.Close()
    
    things := []rowStruct{}
    for rows.Next() {
    	thing := rowStruct{}
    	err = rows.Scan(
    		&thing.id,
    		&thing.value,
    	)
    	if err != nil {
    		// Handle row parsing error
    	}
    
    	things = append(things, thing)
    }
    err = rows.Err()
    if err != nil {
    	// Handle any errors within rows
    }
    rows.Close()
4. Treat transactions as serial. If you need to call another query while in a loop where you can't rows.Close(), read the rows into a slice and range over the slice. You must never have two queries running in the same transaction, so code to do one thing before you do another, and be mindful of this if you are passing the transaction to other funcs.

An extra bit of info:

5. defer doesn't just have to be used to call rows.Close(); if you want to know when things happen, you can wrap the defer and log:

    rows, err := db.Query(`...`,...)
    if err != nil {
    	// Handle connection or statement error
    }
    defer func() {
    	log.Println(`Closing rows`)
    	rows.Close()
    }()
On which point, beware that there are some theoretically uncaught errors. For example, tx.Rollback() can return an error http://golang.org/pkg/database/sql/#Tx.Rollback but if you have called it with defer tx.Rollback() after creating a transaction, you'll never know. I hope that the only reason it might error is that something has already ended the transaction, but there is definitely scope for deferred finalisation within a func to cause errors that you might miss, and it's worth considering the pattern above if you have any mysterious behaviour going on.
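To make that concrete, here's a runnable sketch of the wrapped-defer pattern; fakeTx is a stand-in for *sql.Tx, invented so the example runs without a database:

```go
package main

import (
	"errors"
	"fmt"
)

// fakeTx is a stand-in for *sql.Tx; its Rollback can fail just
// like the real one can.
type fakeTx struct{ done bool }

func (t *fakeTx) Rollback() error {
	if t.done {
		return errors.New("transaction already finalised")
	}
	t.done = true
	return nil
}

// doWork wraps the deferred rollback in a func so the error is at
// least logged instead of being silently discarded.
func doWork(tx *fakeTx) {
	defer func() {
		if err := tx.Rollback(); err != nil {
			fmt.Println("rollback failed:", err)
		}
	}()
	tx.done = true // pretend a commit already finalised the tx
}

func main() {
	doWork(&fakeTx{}) // prints: rollback failed: transaction already finalised
}
```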


With defer and named return values, it is actually possible to alter the return value inside a defer. I had to do this recently to properly log an error (and to abort in the calling code). Also to do with database operations, of course:

https://github.com/aktau/gomig/blob/a63d309848907a72782dd94e...

It's not the prettiest, but I needed it fixed soon. Will refactor later :).

Read about it here as well: http://blog.golang.org/defer-panic-and-recover
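A stripped-down sketch of the same trick; cleanup here is a hypothetical stand-in for whatever deferred operation can fail:

```go
package main

import (
	"errors"
	"fmt"
)

// cleanup is a hypothetical deferred operation that can fail.
func cleanup() error { return errors.New("cleanup failed") }

// migrate uses a named return value so the deferred func can
// replace a nil error with the cleanup failure, making it
// visible to the caller.
func migrate() (err error) {
	defer func() {
		if cerr := cleanup(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	return nil // the body itself succeeded
}

func main() {
	fmt.Println(migrate()) // prints: cleanup failed
}
```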


That's really detailed feedback, thanks for that!

I'm actually fighting some interesting error-handling issues now; I wasn't aware I had to do my own retry on deadlocks.

Ugh, it's just one of those days I feel like I'm not as good at this as I thought I was.


If you know where the deadlocks are likely to occur, consider turning to a sync.Mutex and wrapping the statement in a lock. It will cause other goroutines to wait until the lock is free.

It all depends where the deadlocks are though, you can easily achieve them in the Go code as well as the database queries.
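A minimal sketch of that suggestion; counter stands in for the contended statement:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu      sync.Mutex
	counter int // stands in for the contended statement/resource
)

// runExclusive serialises the critical section: other goroutines
// block on mu until the lock is free.
func runExclusive() {
	mu.Lock()
	defer mu.Unlock()
	counter++
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			runExclusive()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // prints 100
}
```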

I'm not around much today as I'm with a client this morning and lunch, but if you're stuck later I may well be in https://gophers.slack.com/ . Happy to help out if I can, as I'm sure most others will be.


Is there a club for people like us? I, too, do a bunch of work at two startups. I'm in the same exact situation, in fact.

Thanks for sharing.


We could start one?

I think that's what HN kind of is...



