Saki Shinoda

Machine learning engineer based in London, UK.


Approaches to running TensorFlow on AWS

2017-02-23 13:09:00 +0000

Work in progress

Contents

Instance types, regions, and pricing

Instance types

The most appropriate GPU instances are g2.2xlarge and p2.xlarge. The g2.2xlarge uses NVIDIA GRID K520 GPUs, “each with 1,536 CUDA cores and 4GB of video memory”; the p2.xlarge uses NVIDIA K80 GPUs, “each with 2,496 parallel processing cores and 12GiB of GPU memory”.
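Once you are logged into a GPU instance (with the NVIDIA driver installed), `nvidia-smi` will confirm which GPU you actually got:

```shell
# Report the GPU model and total memory; expect a GRID K520 (~4 GB) on
# g2.2xlarge and a Tesla K80 (~12 GiB) on p2.xlarge.
nvidia-smi --query-gpu=name,memory.total --format=csv
```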

Regions and pricing

Pricing is a function of the instance type and the region in which you run your instance. Not all instance types are available in all regions. I use US East (N. Virginia) and US West (Oregon).
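To see how spot prices for a given instance type vary between regions, you can query the price history with the AWS CLI (a sketch; assumes the CLI is installed and configured with credentials):

```shell
# Recent Linux/UNIX spot prices for p2.xlarge in two regions.
for region in us-east-1 us-west-2; do
  aws ec2 describe-spot-price-history \
    --region "$region" \
    --instance-types p2.xlarge \
    --product-descriptions "Linux/UNIX" \
    --max-items 5
done
```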

On-demand vs. spot instances

TL;DR: If you have an AMI you are happy with, don’t mind mounting disks when launching new instances, and can live without the ability to temporarily stop instances (as opposed to terminating them), spot instances are not scary and are very cheap!
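As a back-of-envelope comparison (the rates below are illustrative assumptions based on early-2017 us-east-1 pricing, not authoritative figures; check the current price list):

```shell
# 100 GPU-hours on p2.xlarge: on-demand at ~$0.90/hr vs spot at ~$0.20/hr.
hours=100
on_demand=$(awk -v h="$hours" 'BEGIN { printf "%.2f", h * 0.90 }')
spot=$(awk -v h="$hours" 'BEGIN { printf "%.2f", h * 0.20 }')
echo "on-demand: \$${on_demand}  spot: \$${spot}"
# prints: on-demand: $90.00  spot: $20.00
```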

My personal workaround to use spot instances:
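For reference, a bare-bones spot request from the AWS CLI looks roughly like the following sketch; the AMI ID, key pair name, and maximum price are all placeholders, not a recommendation:

```shell
# Request one p2.xlarge spot instance, bidding up to $0.30/hr.
# ami-xxxxxxxx and my-key are placeholders for your own AMI and key pair.
aws ec2 request-spot-instances \
  --spot-price "0.30" \
  --instance-count 1 \
  --launch-specification '{
    "ImageId": "ami-xxxxxxxx",
    "InstanceType": "p2.xlarge",
    "KeyName": "my-key"
  }'
```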

Installation types

TL;DR:

Docker

The main argument for using Docker, in my view, is not having to:

Of course, NVIDIA handily provides step-by-step instructions for both alternatives:
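With nvidia-docker installed, pulling and running the official GPU-enabled TensorFlow image is a one-liner (a sketch; the `latest-gpu` tag reflects what was current at the time of writing):

```shell
# Run the GPU build of the official TensorFlow image and check that
# TensorFlow can see the GPU from inside the container.
nvidia-docker run --rm tensorflow/tensorflow:latest-gpu \
  python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```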

Prebuilt AMIs

I have tried:

I have not tried:

From scratch

I haven’t done this yet but intend to do so soon.

Miscellaneous

Security groups on AWS
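A minimal security group that allows inbound SSH can be created from the CLI (a sketch; the group name and CIDR below are examples, and you should restrict the CIDR to your own IP range):

```shell
# Create a security group and open port 22 for SSH.
aws ec2 create-security-group \
  --group-name tensorflow-ssh \
  --description "SSH access for TensorFlow instances"
aws ec2 authorize-security-group-ingress \
  --group-name tensorflow-ssh \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24
```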

SSH from iPhone