posted by Neuralrack AI 4 months ago
$1.00 / gpu / hour - 192 RTX PRO 6000s available from 01/15/2026 to 07/15/2027 (NC/FL/TX, US)
Unverified
Interconnect: Ethernet 100GbE
Cores per node: 96
RAM per node: 1024 GB
NVME storage per node: 19200 GB
Location: NC/FL/TX, US
Cloud provider: Neuralrack AI
Cluster interface: Bare metal SSH (deploy your own clustering software)
Minimum bookable GPUs: 4
Minimum bookable duration: 4 weeks
Additional details
Neuralrack AI: 3 years of AI infra experience, 1000+ GPUs deployed in total, redundant BGP 10Gbps internet (upgradable), and Tier 3 DC partners (SOC 2/ISO 27001/HIPAA). All equipment is fully insured. Blackwell available on neuralrack.ai in March/July 2025, 60 days after official launch.
We have up to 192 RTX Pro 6000 96GB (600w SE or WS variants) cards available for rent in 4x, 8x or 12x bare metal configurations, or with virtualization. Pay $0.45-$1.00/GPU/hr with long-term commitments, upfront payment or ownership. Bulk pricing available on request: https://neuralrack.ai/contact
All servers are in stock and deployable within 48 hours. Locations: Raleigh, NC; Tampa, FL; DFW, TX. Accepted payment methods: ACH, wire, or crypto.
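As a rough sanity check on the quoted range, the cost of the minimum booking (4 GPUs for 4 weeks) works out directly from the listed rates; the sketch below uses the $0.45 and $1.00/GPU/hr endpoints and assumes 4 weeks means exactly 28 days.

```python
# Minimum-booking cost at the listed rate endpoints.
# Assumption: "4 weeks" = exactly 28 days of continuous use.
GPUS = 4                     # minimum bookable GPUs
HOURS = 4 * 7 * 24           # 4 weeks = 672 hours
gpu_hours = GPUS * HOURS     # 2,688 GPU-hours

for rate in (0.45, 1.00):    # listed $/GPU/hr endpoints
    print(f"${rate:.2f}/GPU/hr -> ${gpu_hours * rate:,.2f} total")
# $0.45/GPU/hr -> $1,209.60 total
# $1.00/GPU/hr -> $2,688.00 total
```

So a minimum booking runs roughly $1,200-$2,700 depending on commitment length and payment terms.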
NVIDIA Blackwell RTX Pro 6000 96GB: an H100 alternative at half the price. Performance equal to or better than the H100 on the most popular LLM inference workloads, with Blackwell NVFP4 support, ray tracing cores, confidential computing (CC), and MIG support.
System specs for 8x Pro 6000 (upgradable): 1TB DDR5-4800+ ECC memory, 2x AMD EPYC 9474F 48-core or 2x Intel Xeon 6767P 64-core CPUs, 2x 1.92TB Gen4 NVMe boot + 2x 7.68TB or 1x 15.36TB Gen4 NVMe data storage, 100GbE/25GbE local network, dedicated PCIe Gen5 x16 to every GPU (100GB/s per GPU)
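For context on the per-GPU link figure: the PCIe Gen5 spec runs 32 GT/s per lane with 128b/130b encoding, so an x16 slot delivers about 63 GB/s in each direction (~126 GB/s bidirectional); the listed 100GB/s presumably counts traffic in both directions. A quick check of the arithmetic:

```python
# PCIe Gen5 x16 theoretical bandwidth (spec constants, not vendor data).
GT_PER_S = 32e9          # Gen5 raw signaling rate per lane
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
LANES = 16

per_direction = GT_PER_S * ENCODING / 8 * LANES / 1e9  # GB/s, one way
print(f"{per_direction:.1f} GB/s per direction, "
      f"{2 * per_direction:.1f} GB/s bidirectional")
# 63.0 GB/s per direction, 126.0 GB/s bidirectional
```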
Custom clusters available (RTX Pro 6000 96G, RTX Pro 5000 48/72G, RTX 5090 32G, RTX 5080 16G, B200 180G, H200SXM 141G, H200NVL 141G, MI325X 288G, and more) at https://neuralrack.ai
seller email domain is neuralrack.ai