Deploy Go with Capistrano

I’ve been using Capistrano for a while now, even for non-Rails projects, and I honestly think it’s too tightly coupled with Rails. Still, I’ve always managed to deploy my projects, be it through really ugly hacks or by simply sticking to the “Capistrano way”.

In this post I’d like to cover how I managed to deploy a Go/Gin web service while sticking to Capistrano as much as possible. The approach is fairly general purpose, but it all started from the following two requirements:

  • Check out the code locally, rather than on the target machine
  • Build the code locally, for the target machine, and then only push the binary

As you may or may not know, Capistrano handles version-controlled code rather well but assumes you want to check out your source on the target machine. In my case I didn’t fancy the idea of setting up a Go environment on the target machine: partly out of laziness, but also because installing a whole Go toolchain there would have widened the machine’s attack surface.

The code

Last time I checked, this feature wasn’t really documented in the Capistrano docs, but it turns out you can override the default Git strategy that handles your code checkout on the target machine.
To be honest, the strategy itself doesn’t enforce the checkout on the target machine; it’s Capistrano that wraps every method of the strategy in an on block.
To bypass this behaviour I simply copied the Git strategy from the Capistrano sources and wrapped each method back in a run_locally block.
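
The original script isn’t reproduced here, but the idea looks roughly like the sketch below. The module name is mine, only the checkout-related methods are shown, and I use a plain clone instead of Capistrano’s mirror clone so that a working tree is available for the build later on.

    # Rough sketch, not the original script: the stock Git strategy methods,
    # re-wrapped so every git invocation runs on the deploying machine.
    module LocalCheckoutGitStrategy
      def clone
        run_locally do
          execute :git, :clone, repo_url, repo_path
        end
      end

      def update
        run_locally do
          within repo_path do
            execute :git, :fetch, :origin
            execute :git, :checkout, fetch(:branch)
            execute :git, :reset, '--hard', "origin/#{fetch(:branch)}"
          end
        end
      end

      def fetch_revision
        run_locally do
          within repo_path do
            capture :git, 'rev-parse', '--short', 'HEAD'
          end
        end
      end

      # check, test and release follow the same pattern in the full version
    end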

Copy the script to wherever you keep your deployment code and be sure to load it; a good place is your Capfile. After loading it, make sure to set the appropriate strategy in your recipe:
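
The original configuration snippet isn’t shown here either; assuming the strategy module from the sketch above lives under lib/capistrano/, the wiring could look like this (paths and values are illustrative):

    # Capfile
    require_relative 'lib/capistrano/local_checkout_git_strategy'

    # config/deploy.rb
    set :git_strategy, LocalCheckoutGitStrategy
    set :repo_url,     'git@example.com:me/my-go-service.git'
    set :repo_path,    File.expand_path('tmp/local_repo')  # assumes the strategy honours this setting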

 

As you can see from the snippet, the repository will be cloned into repo_path, so be sure to specify a valid path. Now that we have instructed Capistrano on how to check out the code on our local machine, we have to instruct it on how to build and then push our Go project. The following snippet exposes a couple of Rake tasks to do exactly that. The script assumes that your deploy links the log and pid directories. The pid directory will hold the pid file of your binary once it’s running, which is useful across deployments to automatically shut the old process down and restart it with the new version.
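
The original rake file isn’t embedded here; below is a minimal sketch of the idea. It assumes the binary is named after the :application setting, that the pid file lives under shared/pids, and that the log goes to the linked log directory; none of those details come from the original script.

    # lib/capistrano/tasks/go.rake (a sketch, not the original script)
    namespace :go do
      desc 'Cross-compile the binary on the local machine'
      task :build do
        run_locally do
          # go_build_env carries e.g. GOOS/GOARCH so the binary targets the server
          execute "cd #{repo_path} && #{fetch(:go_build_env)} go build -o #{fetch(:application)}"
        end
      end

      desc 'Push the freshly built binary to the new release'
      task :push do
        on roles(:app) do
          upload! "#{repo_path}/#{fetch(:application)}", "#{release_path}/#{fetch(:application)}"
          execute :chmod, '+x', "#{release_path}/#{fetch(:application)}"
        end
      end

      desc 'Stop the old process via its pid file, then start the new binary'
      task :restart do
        on roles(:app) do
          pid_file = "#{shared_path}/pids/#{fetch(:application)}.pid"
          execute "[ -f #{pid_file} ] && kill $(cat #{pid_file}) || true"
          execute "cd #{current_path} && #{fetch(:go_run_env)} nohup ./#{fetch(:application)} #{fetch(:go_run_args)} >> log/#{fetch(:application)}.log 2>&1 & echo $! > #{pid_file}"
        end
      end
    end

    # hook into the regular deploy flow: build and push after the code is fetched,
    # restart once the new release is live
    after 'deploy:updating',   'go:build'
    after 'go:build',          'go:push'
    after 'deploy:publishing', 'go:restart'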

This rake file handles three useful options that you can specify using the set method provided by Capistrano:

  • go_build_env
  • go_run_env
  • go_run_args

go_build_env is used to specify the target operating system and architecture so that Go can cross-compile your code for the target machine. In my case, for example, I want to deploy to a Linux x64 box:
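
Assuming the build task simply prefixes the go build command with this value, as in the sketch above, that boils down to:

    set :go_build_env, 'GOOS=linux GOARCH=amd64'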

You can check for more environments here.

Use go_run_env to specify environment variables to be set when starting your executable, and go_run_args to provide the executable with the startup arguments it needs. As you can see, the rake file already hooks into the right Capistrano tasks to make sure it builds and uploads after fetching the right repository revision.
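
For example, something along these lines (the values are purely illustrative):

    set :go_run_env,  'GIN_MODE=release PORT=8080'
    set :go_run_args, '--config /var/www/my-service/shared/config.json'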

Wrapping up

If you’ve correctly included the previous snippets, configuring your deployment should be straightforward. We have managed to deploy our Go code without having to set up another Go environment on the target machine. I think this is great, since cross-compilation produces a static executable with zero runtime dependencies on the target machine.

As I said already, I think the folks over at Capistrano could do a little more to make the tool truly general purpose and more helpful for integrations like this one. But overall, this integration didn’t require too much code after all.

I hope you enjoyed this quick overview on how to deploy your Go code with Capistrano.

GitHug

Hey ya all,
I’ve spent the last weekend coding with Ruby again for the RailsRumble contest, working on a crazy idea we came up with a couple of days before the contest actually started.

It’s GitHug. We’ve always enjoyed GitHub, but we’ve never been able to discover new, interesting repositories through the site itself. So we created this simple service, which applies a really basic (quite dumb) machine learning algorithm to learn from your activity on GitHub and suggest repositories that may be of interest to you.

I’d really like you to give it a try, especially if you use GitHub on a daily basis.

I promise there’ll be a version of this service unrelated to the contest, in order to deliver better performance and results. But this won’t happen any time this week 🙂

Cheers!

I’d rather Go complain about Rails

[SPOILER ALERT]

Actually, this is not a complaint. I don’t root for any particular technology. It’s just that I found Go outperforming Rails in this particular use case.

[END OF SPOILER]

I’ve had this post in the queue for a while but never managed to finish it. Part of the reason is due to what I’m going to tell you in the rest of this post.

I’m writing this post to share my experience, but also to ask for comments. I might really have misunderstood something along the way, so I want to hear back from you.

I had this little notification service that mostly did this:

  1. accept an HTTP connection
  2. set up a thread listening on a channel over Redis
  3. set up a thread that pings the client to detect premature disconnects
  4. join the two threads with the current one and wait for a notification from the Redis channel

My Rails controller was pretty much like the following:
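
The original controller isn’t embedded here; what follows is a sketch reconstructed from the description below, using ActionController::Live and the redis gem. Class, channel and parameter names are made up, and the real code may have differed in the details.

    class NotificationsController < ApplicationController
      include ActionController::Live

      def index
        response.headers['Content-Type'] = 'text/event-stream'

        begin
          redis = Redis.new

          # thread blocked on a Redis channel, pushing whatever arrives to the client
          listener = Thread.new do
            redis.subscribe("notifications:#{params[:user_id]}") do |on|
              on.message do |_channel, payload|
                response.stream.write("data: #{payload}\n\n")
              end
            end
          end

          # thread pinging the client so a premature disconnect surfaces as an IOError
          pinger = Thread.new do
            loop do
              sleep 10
              response.stream.write(": ping\n\n")
            end
          end

          # join both threads with the current one and wait
          [listener, pinger].each(&:join)
        rescue IOError
          # the client went away while we were writing
        ensure
          listener.kill if listener
          pinger.kill if pinger
          redis.quit if redis
          response.stream.close
        end
      end
    end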

This controller simply kept the incoming connection open as long as new notification records were ready to be dispatched to the client. The begin, rescue, ensure dance was there to handle premature client disconnects: clients that closed the connection early kept the controller busy until the timeout was reached, while I needed to save resources and release them as soon as the client disconnected. I found it pretty hard in Rails to hook into the client disconnection; it seems to be largely implementation-dependent and to vary from server to server.

Anyway, this worked pretty nicely. As long as there were no more than ~4 connected clients. :/

More than ~4 connected clients made my nginx+passenger setup freeze. I couldn’t figure it out and never managed to fix it. I also wasn’t sure how many MySQL connections Rails was using: connections to the database were being opened and never closed until the server was restarted.

So, given that I’d had pretty nice results with similar Go services in the past, I decided to spend the night porting the notifications controller to a tiny Go-based (Martini-powered) web service.

Here’s the result (actually a lighter version of it).

I feel this version of the service is more consistent. The language also gives me the means to detect when the client disconnects, and this time even the Redis library helps me understand when the unsubscription occurs.

This, of course, gets compiled into a native server that only handles the notification system, and as a result it outperforms the Rails-based version of the service. I also found it pretty stable: the service has been proxied through Nginx and hasn’t crashed since it went live (~5 months ago). I don’t serve many clients, roughly 30-40 simultaneous connections, but even that wasn’t sustainable with Rails. Also, the MySQL connections were opened and closed as expected and the number of simultaneous connections stayed reasonable.

I’d like your feedback on this experience. I know Go will always outperform Ruby in cases like this, but I’m interested in understanding what was wrong with my Rails implementation. Also, can you confirm that detecting client disconnects is pretty hard with Rails (4.2)?

Cheers everybody.