Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Some of the world's most important challenges need to be solved today, but they require tremendous amounts of computing to become reality. Today's data centers rely on many interconnected commodity compute nodes, which limits the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads.

NVIDIA® Tesla® P100 GPU accelerators are the most advanced ever built for the data center. They tap into the new NVIDIA Pascal™ GPU architecture to deliver the world's fastest compute node, with higher performance than hundreds of slower commodity nodes. Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money. With over 400 HPC applications accelerated, including 9 of the top 10, as well as all deep learning frameworks, every HPC customer can now deploy accelerators in their data centers.
The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.
| PERFORMANCE SPECIFICATION FOR NVIDIA TESLA P100 ACCELERATORS | |
|---|---|
| Double-Precision Performance | 4.7 TeraFLOPS |
| Single-Precision Performance | 9.3 TeraFLOPS |
| Half-Precision Performance | 18.7 TeraFLOPS |
| NVIDIA NVLink™ Interconnect Bandwidth | - |
| PCIe x16 Interconnect Bandwidth | 32 GB/s |
| CoWoS HBM2 Stacked Memory Capacity | 16 GB or 12 GB |
| CoWoS HBM2 Stacked Memory Bandwidth | 720 GB/s or 540 GB/s |
| Enhanced Programmability with Page Migration Engine | ● |
| ECC Protection for Reliability | ● |
| Server-Optimized for Data Center Deployment | ● |
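The Page Migration Engine listed in the table above refers to Pascal's hardware support for unified memory page faulting, which lets code share a single allocation between CPU and GPU and work on datasets without explicit copies. A minimal CUDA sketch of that programming model is shown below; the kernel, array size, and scaling factor are illustrative only and not taken from the datasheet.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel that scales an array in place.
__global__ void scale(float *x, size_t n, float a) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const size_t n = 1 << 24;          // ~16M floats, illustrative size
    float *x = nullptr;

    // Unified memory: one allocation visible to both CPU and GPU.
    // On Pascal, pages migrate on demand via the Page Migration Engine.
    cudaMallocManaged(&x, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;   // first touched on the CPU

    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // pages fault over to the GPU
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);                  // pages migrate back on CPU access
    cudaFree(x);
    return 0;
}
```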
| TECHNICAL SPECIFICATIONS | Tesla P100 (12GB) | Tesla P100 (16GB) |
|---|---|---|
| Peak double-precision floating point performance (board) | 4.7 TeraFLOPS | 4.7 TeraFLOPS |
| Peak single-precision floating point performance (board) | 9.3 TeraFLOPS | 9.3 TeraFLOPS |
| Number of GPUs | 1x GP100 | 1x GP100 |
| Number of CUDA cores | 3584 | 3584 |
| Memory size per board | 12 GB HBM2 | 16 GB HBM2 |
| Memory interface | 3072-bit | 4096-bit |
| Memory bandwidth for board (ECC off) | 540 GB/s | 720 GB/s |
| Thermal solution | Passive | Passive |
| Max power consumption | 250 W | 250 W |
| Form factor | 4.376" H x 10.5" L | 4.376" H x 10.5" L |
| System | | |

\* Please check with your server vendor for CPU power connector usage before turning on the server, to avoid potential damage.
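The memory size, memory interface width, and CUDA core count listed above can be confirmed at runtime with a standard CUDA device query; a short sketch follows. The 64-cores-per-SM figure used in the estimate is the Pascal GP100 value (56 SMs x 64 = 3584 cores); everything else is reported directly by the driver.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query the first GPU in the system

    printf("Device:            %s\n", prop.name);
    printf("Global memory:     %.1f GB\n", prop.totalGlobalMem / 1e9);
    printf("Memory bus width:  %d-bit\n", prop.memoryBusWidth);
    printf("Multiprocessors:   %d\n", prop.multiProcessorCount);
    // GP100 has 64 FP32 CUDA cores per SM, so 56 SMs -> 3584 cores.
    printf("CUDA cores (est.): %d\n", prop.multiProcessorCount * 64);
    printf("ECC enabled:       %s\n", prop.ECCEnabled ? "yes" : "no");
    return 0;
}
```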