Machine learning engineer based in London, UK.
2017-02-23 13:09:00 +0000
Work in progress
The most appropriate GPU instances are g2.2xlarge and p2.xlarge. The g2.2xlarge uses NVIDIA GRID K520 GPUs, “each with 1,536 CUDA cores and 4GB of video memory”; the p2.xlarge uses NVIDIA K80 GPUs, “each with 2,496 parallel processing cores and 12GiB of GPU memory”.
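A quick way to reason about which of the two to launch is to filter on GPU memory first, then price. This is a minimal sketch; the core counts and memory come from the quoted AWS descriptions, but the hourly prices are illustrative assumptions, not live AWS quotes:

```python
# GPU specs from the AWS instance descriptions quoted above.
# Hourly prices are ASSUMED for illustration -- check the current price list.
INSTANCES = {
    "g2.2xlarge": {"cuda_cores": 1536, "gpu_mem_gb": 4, "hourly_usd": 0.65},
    "p2.xlarge":  {"cuda_cores": 2496, "gpu_mem_gb": 12, "hourly_usd": 0.90},
}

def pick_instance(required_gpu_mem_gb):
    """Return the cheapest instance type with enough GPU memory, or None."""
    candidates = [
        (spec["hourly_usd"], name)
        for name, spec in INSTANCES.items()
        if spec["gpu_mem_gb"] >= required_gpu_mem_gb
    ]
    return min(candidates)[1] if candidates else None

print(pick_instance(3))   # fits on the g2.2xlarge's 4GB
print(pick_instance(8))   # needs the p2.xlarge's 12GiB
```

With these assumed prices, a model needing under 4GB of GPU memory goes to the cheaper g2.2xlarge; anything bigger forces the p2.xlarge.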
Pricing is a function of the instance type and the region you run your instance in. Not all instance types are available in all regions; I use US East (N. Virginia) and US West (Oregon).
TL;DR: if you have an AMI you are happy with, don’t mind mounting disks when launching new instances, and don’t mind that spot instances cannot be temporarily stopped (only terminated), then spot instances are not scary and are very cheap!
My personal workaround to use spot instances:
TL;DR:
The main argument to use Docker, in my view, is to not have to:
Of course, Nvidia handily provide step-by-step instructions for both alternatives:
I have tried:
I have not tried:
I haven’t done this yet but intend to do so soon.