Update (again): Not described here are the security group settings necessary for accessing the Jupyter notebook running on the AWS instance (or SSH-ing in). Again, something I might add in future.
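For reference, a minimal sketch of what those rules might look like via the AWS CLI, assuming you manage the instance's security group by ID (the group ID below is a placeholder, and you'd normally restrict the CIDR to your own IP rather than the whole internet):

```bash
# Placeholder security group ID; open SSH and the Jupyter notebook port
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 22 --cidr 0.0.0.0/0    # SSH
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 8888 --cidr 0.0.0.0/0  # Jupyter notebook
```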
Update: I tried saving this setup as an AMI to then launch a new spot instance from. It turns out that for the Nvidia drivers to keep working in an instance launched from the AMI, Nouveau needs to be blacklisted (as described here). I also only just discovered that nvidia-docker has specific documentation for deployment on AWS EC2. I might check that out and make a fixed Nvidia-Docker + Tensorflow-GPU AMI. Alternatively, I might work on a non-Docker Tensorflow-GPU install, since Docker might be even more hassle than CUDNN, etc.
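For completeness, a sketch of the Nouveau blacklisting step, following the standard approach on Ubuntu (the file name is just a convention):

```bash
# Stop the Nouveau driver from loading so the Nvidia driver survives in the AMI
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
sudo update-initramfs -u
# A reboot (or relaunching from the AMI) is needed for this to take effect
```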
I might elaborate on this later, but for now this is a barebones script for getting Nvidia-Docker up and running on an Ubuntu 14.04 AWS instance (I used a spot g2.2xlarge in Oregon) and then running the Tensorflow-GPU Docker image. It puts in one place the commands that are given separately (with more commentary) at the following links:
Unlike most of the other steps in the script, installing the Nvidia drivers requires supervision and interaction from the user. I found the default settings worked fine, though it was pointed out to me that if you want to use X to access the AWS instance graphically, you may want to let the Nvidia installer overwrite the X configuration where it asks to.
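Roughly, that step looks like the following; the driver version and download URL here are assumptions on my part, so check Nvidia's site for the release that matches the GRID K520 in the g2 instances:

```bash
# Download and run the Nvidia driver installer -- this part is interactive
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/NVIDIA-Linux-x86_64-367.57.run
chmod +x NVIDIA-Linux-x86_64-367.57.run
sudo ./NVIDIA-Linux-x86_64-367.57.run
```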
gcc is required, so install build-essential. nvidia-modprobe is apparently also needed, but I didn't try without it, so I don't know (a) whether it is strictly necessary, or (b) what error messages arise if it's missing.
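In other words, something like the following, assuming the nvidia-modprobe package is available from the standard Ubuntu repositories (if not, it may need to come from elsewhere):

```bash
sudo apt-get update
sudo apt-get install -y build-essential   # provides gcc, needed to build the driver's kernel module
sudo apt-get install -y nvidia-modprobe   # used by nvidia-docker to load the Nvidia kernel modules
```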
The final step launches a Jupyter notebook server, which you can access by substituting the instance's public IP address (see the AWS instance details panel) into the notebook URL that the terminal output shows.
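That step is roughly the following; the image tag reflects what the TensorFlow docs suggested at the time and may have changed since:

```bash
# Launch the GPU TensorFlow image and expose the notebook port
sudo nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
# Then browse to http://<public-ip>:8888/ (plus any token shown in the terminal output)
```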
An alternative is to open a bash terminal inside the Docker container, where you can run Python scripts directly without using a notebook, but I will leave writing about that for another day.
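For the impatient, a minimal sketch (the container ID is a placeholder):

```bash
# Start a bash shell in a fresh container instead of the notebook server
sudo nvidia-docker run -it gcr.io/tensorflow/tensorflow:latest-gpu /bin/bash
# Or attach a shell to a container that is already running the notebook
sudo docker exec -it <container-id> /bin/bash
```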