The Cluster GPU Instance: Amazon's New EC2 Instance Type

Amazon has introduced its latest EC2 instance type, the "Cluster GPU Instance". Now any AWS user can develop and run GPGPU applications on a cost-effective, pay-as-you-go basis. Similar to the Cluster Compute Instance type introduced earlier this year, the Cluster GPU Instance (cg1.4xlarge if you are using the EC2 APIs) has the following specs:

  • A pair of NVIDIA Tesla M2050 "Fermi" GPUs.
  • A pair of quad-core Intel "Nehalem" X5570 processors offering 33.5 ECUs (EC2 Compute Units).
  • 22 GB RAM.
  • 1690 GB of local instance storage.
  • 10 Gbps Ethernet, with ability to create low latency, full bisection bandwidth HPC clusters.

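For readers new to GPGPU development, the sketch below shows the kind of data-parallel CUDA code these Tesla M2050 "Fermi" GPUs are built to run. It is purely illustrative: the kernel name, vector size, and launch configuration are arbitrary choices for the example, not anything prescribed by Amazon or NVIDIA.

```cuda
// Minimal CUDA sketch: data-parallel vector addition, illustrative only.
// Compile with the CUDA toolkit, e.g.: nvcc vector_add.cu -o vector_add
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);          // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```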
"Each Tesla M2050s contains 448 cores and 3 GB of ECC RAM and are designed to deliver up to 515 gigaflops of double-precision performance when pushed to the limit. Since each instance contains a pair of these processors, you can get slightly more than a trillion FLOPS per Cluster GPU instance. With the ability to cluster these instances over 10Gbps Ethernet, the compute power delivered for highly data parallel HPC, rendering, and media processing applications is staggering," stated Amazon.

More Info: GPGPU

[Source]