The demand for visualization, rendering, data science and simulation continues to grow as businesses tackle larger, more complex workloads than ever before. However, enterprises looking to scale up their visual compute infrastructure face mounting budget constraints and deployment requirements.
Meet your visual computing challenges with the power of NVIDIA Quadro RTX™ GPUs and NVIDIA virtual GPU software in the data center. Built on the NVIDIA Turing™ architecture and the NVIDIA RTX™ platform, the Quadro RTX 6000 Passive features RT Cores and multi-precision Tensor Cores for real-time ray tracing, AI, and advanced graphics capabilities. Tackle graphics-intensive mixed workloads, complex designs, photorealistic renders, and augmented and virtual environments at the edge with NVIDIA Quadro RTX, designed for enterprise data centers.
Provides a compute-based geometry pipeline to speed processing and culling of geometrically complex models and scenes, improving performance by up to 2x.
Offers more granular control over how GPU horsepower is distributed (i.e., more cycles applied to the detailed areas of a scene and fewer to the less detailed areas) to increase performance at the same image quality, or to produce similar image quality with a 50% reduction in the time spent generating shaded pixels.
Provides more control over the pixel shading rate, which is efficient for effects like motion blur and foveated shading. This capability enables shading and geometry samples to be processed at different rates for more efficient execution.
Connect a pair of Quadro RTX 6000 cards with NVLink to double the effective memory footprint and scale application performance by enabling GPU-to-GPU data transfers at rates up to 100 GB/s (total bidirectional bandwidth).
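As a hedged illustration of how a linked pair is typically used, the sketch below enables CUDA peer-to-peer access between two GPUs so one device can copy directly into the other's memory; with NVLink present, the transfer bypasses host memory. The device indices and buffer size are assumptions for this example.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Assumed topology: devices 0 and 1 are the NVLink-connected pair.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not available.\n");
        return 1;
    }

    const size_t bytes = 1 << 28;  // 256 MiB example buffer
    float *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 map GPU 1's memory
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);  // and vice versa
    cudaMalloc(&dst, bytes);

    // Direct GPU-to-GPU copy; over NVLink this avoids a round trip
    // through system memory.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```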
Dramatically reduce visual aliasing artifacts, or "jaggies," with up to 64x FSAA (128x with SLI) for unparalleled image quality and highly realistic scenes.
Texture from and render to 32K x 32K surfaces to support applications that demand the highest resolution and quality image processing.
Deep learning frameworks such as Caffe2, MXNet, CNTK, TensorFlow, and others deliver dramatically faster training times and higher multi-node training performance. GPU-accelerated libraries such as cuDNN, cuBLAS, and TensorRT deliver higher performance for both deep learning inference and High-Performance Computing (HPC) applications.
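As one concrete illustration of these libraries, the sketch below runs a single-precision matrix multiply through cuBLAS and opts in to Tensor Core math; the matrix size is an arbitrary assumption, and inputs are left uninitialized for brevity.

```
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 1024;               // assumed square matrix size
    const float alpha = 1.0f, beta = 0.0f;
    float *A, *B, *C;
    cudaMalloc(&A, n * n * sizeof(float));
    cudaMalloc(&B, n * n * sizeof(float));
    cudaMalloc(&C, n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Allow cuBLAS to use Tensor Cores where precision rules permit.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    // C = alpha * A * B + beta * C, all column-major n x n matrices.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```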
A software framework that makes real-time ray tracing possible, portable, and presentable. It provides interoperability between rasterization, ray tracing, compute, and AI/deep learning, with new Turing ray tracing acceleration exposed through OptiX, DXR, and Vulkan. NVIDIA MDL, now open source, and support for Pixar’s Universal Scene Description (USD) promote portability and consistency.
Natively execute standard programming languages like C/C++ and Fortran, and APIs such as OpenCL, OpenACC, and DirectCompute, to accelerate techniques such as ray tracing, video and image processing, and computational fluid dynamics.
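To make the programming model concrete, here is a minimal CUDA C++ sketch of the kind of data-parallel kernel this enables: a per-pixel brightness adjustment standing in for a simple image-processing step. The image dimensions and gain value are assumptions.

```
#include <cuda_runtime.h>

// Scale every pixel of a grayscale image by a constant gain.
__global__ void brighten(float* img, int width, int height, float gain) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        img[y * width + x] *= gain;
    }
}

int main() {
    const int width = 1920, height = 1080;  // assumed image size
    float* img = nullptr;
    cudaMalloc(&img, width * height * sizeof(float));
    cudaMemset(img, 0, width * height * sizeof(float));

    // One thread per pixel, in 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    brighten<<<grid, block>>>(img, width, height, 1.2f);
    cudaDeviceSynchronize();

    cudaFree(img);
    return 0;
}
```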
A single, seamless 49-bit virtual address space allows for the transparent migration of data between the full allocation of CPU and GPU memory.
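A minimal sketch of this unified memory model, assuming nothing beyond the CUDA runtime: a single managed allocation is touched by both the CPU and the GPU, with page migration handled transparently by the driver.

```
#include <cstdio>
#include <cuda_runtime.h>

__global__ void doubleAll(int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2;
}

int main() {
    const int n = 1 << 20;
    int* data = nullptr;
    // One allocation visible to both CPU and GPU in the shared
    // virtual address space; pages migrate on demand.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;      // CPU writes

    doubleAll<<<(n + 255) / 256, 256>>>(data, n); // GPU reads/writes
    cudaDeviceSynchronize();

    printf("data[42] = %d\n", data[42]);          // CPU reads result
    cudaFree(data);
    return 0;
}
```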
GPU Architecture | Turing
CUDA Parallel Processing Cores | 4608
NVIDIA Tensor Cores | 576
NVIDIA RT Cores | 72
Frame Buffer Memory | 24 GB GDDR6
RTX-OPS | 80T
Rays Cast | 10 Giga Rays/sec
Peak Single Precision (FP32) Performance | 14.9 TFLOPS
Peak Half Precision (FP16) Performance | 29.9 TFLOPS
Peak Integer Operation (INT8) Performance | 238.9 TOPS
Deep Learning TFLOPS¹ | 119.4 Tensor TFLOPS
Memory Interface | 384-bit
Memory Bandwidth | 624 GB/s
Max Power Consumption | 250 W
Graphics Bus | PCI Express 3.0 x16
Form Factor | 4.4” H x 10.5” L, Dual Slot
Product Weight | 1200 g
Thermal Solution | Passive
NVLink Interconnect | 100 GB/s (bidirectional)
¹ FP16 matrix multiply with FP16 or FP32 accumulate.
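The footnote above describes the Tensor Core operation itself; the sketch below shows one way to invoke it directly from CUDA via the WMMA API, multiplying FP16 tiles into an FP32 accumulator. The 16x16x16 tile shape is the standard WMMA configuration; everything else here is an illustrative assumption.

```
#include <mma.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
using namespace nvcuda;

// One warp computes a 16x16 tile: D = A * B + C, with FP16 inputs
// and FP32 accumulation, matching the footnote above.
__global__ void wmma_16x16(const half* A, const half* B, float* D) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;

    wmma::fill_fragment(cFrag, 0.0f);           // C = 0
    wmma::load_matrix_sync(aFrag, A, 16);       // leading dimension 16
    wmma::load_matrix_sync(bFrag, B, 16);
    wmma::mma_sync(cFrag, aFrag, bFrag, cFrag); // Tensor Core MMA
    wmma::store_matrix_sync(D, cFrag, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B;
    float *D;
    cudaMallocManaged(&A, 256 * sizeof(half));
    cudaMallocManaged(&B, 256 * sizeof(half));
    cudaMallocManaged(&D, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) {
        A[i] = __float2half(1.0f);
        B[i] = __float2half(1.0f);
    }
    wmma_16x16<<<1, 32>>>(A, B, D);  // one warp handles the tile
    cudaDeviceSynchronize();         // every D[i] is now 16.0f
    cudaFree(A); cudaFree(B); cudaFree(D);
    return 0;
}
```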