Build and run custom inference servers for under €10,000: private, on-premise AI without recurring cloud costs.
Recommended server configurations for different budgets and workloads.
Target: Small businesses, experimentation, light inference workloads
| Component | Spec | Est. Price |
|---|---|---|
| GPU | 2x Tesla P100 16GB | €600-800 |
| Server Chassis | Used Dell/HP/Supermicro 2U | €300-500 |
| CPU | Xeon E5-26xx (included) | — |
| RAM | 64GB DDR4 ECC | €150-200 |
| Storage | 1TB NVMe SSD | €100-150 |
| Total | | €1,150-1,650 |
What it runs:
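Whether a given model fits in one of this build's 16 GB cards can be estimated from parameter count and quantization width. A minimal sketch; the 1.2x runtime overhead factor is an assumed rule of thumb, not a measured value:

```python
def model_vram_gb(params_billions: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights x quantization width x runtime overhead.

    The 1.2x overhead for activations and buffers is an assumption.
    """
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8-bit = 1 GB
    return weight_gb * overhead

def fits(params_billions: float, bits_per_weight: float,
         vram_gb: float = 16.0) -> bool:
    """Check whether a quantized model fits on a single 16 GB card."""
    return model_vram_gb(params_billions, bits_per_weight) <= vram_gb

print(fits(7, 4))    # 7B at 4-bit needs ~4.2 GB -> True
print(fits(13, 8))   # 13B at 8-bit needs ~15.6 GB -> True, but tight
print(fits(34, 8))   # 34B at 8-bit needs ~40.8 GB -> False on one card
```

With two cards, larger models can be split across GPUs, but per-card limits still bound what runs without model parallelism.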
Target: Production inference, multiple concurrent users
| Component | Spec | Est. Price |
|---|---|---|
| GPU | 2x Tesla V100 16GB | €1,600-2,400 |
| Server Chassis | Used Dell/HP/Supermicro 2U | €400-600 |
| CPU | Xeon Silver/Gold (included) | — |
| RAM | 128GB DDR4 ECC | €300-400 |
| Storage | 2TB NVMe SSD | €200-300 |
| Total | | €2,500-3,700 |
What it runs:
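For the "multiple concurrent users" target, the practical limit is often KV-cache memory: whatever VRAM is left after the weights bounds how many sequences can be in flight at once. A sketch using the standard per-token KV-cache formula; the default geometry (32 layers, 32 KV heads, head dim 128, fp16) is an assumed Llama-2-7B-like example, not universal:

```python
def kv_cache_mb_per_token(n_layers: int = 32, n_kv_heads: int = 32,
                          head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Per-token KV-cache footprint: a K and a V tensor for every layer.

    Defaults assume Llama-2-7B-like geometry in fp16; adjust for your model.
    """
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem / 2**20

def max_concurrent_seqs(free_vram_gb: float, ctx_len: int = 2048, **kw) -> int:
    """How many full-context sequences fit in the VRAM left after weights."""
    per_seq_gb = kv_cache_mb_per_token(**kw) * ctx_len / 1024
    return int(free_vram_gb // per_seq_gb)

# Example: 2x16 GB minus ~14 GB of fp16 weights leaves ~18 GB for KV cache
print(kv_cache_mb_per_token())   # 0.5 MB per token
print(max_concurrent_seqs(18))   # 18 GB / (0.5 MB x 2048 tokens) -> 18 sequences
```

Grouped-query attention (fewer KV heads) shrinks the cache substantially, which is why newer models serve many more users on the same hardware.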
Target: Always-on inference, low power consumption, quiet operation
| Component | Spec | Est. Price |
|---|---|---|
| GPU | 2x Tesla T4 16GB | €3,000-4,000 |
| Server/Workstation | Quiet tower or 2U | €500-800 |
| CPU | Xeon or Ryzen (included) | — |
| RAM | 64-128GB DDR4 | €200-400 |
| Storage | 1-2TB NVMe SSD | €150-250 |
| Total | | €3,850-5,450 |
Why this build:
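The appeal of this always-on tier is power draw: the T4 is rated at 70 W TDP, far below the V100's ~250-300 W. A quick annual-cost sketch; the €0.30/kWh rate and the 130 W allowance for the rest of the system are assumptions, so substitute your own tariff and measured wall power:

```python
def annual_cost_eur(watts: float, eur_per_kwh: float = 0.30,
                    hours: float = 24 * 365) -> float:
    """Annual electricity cost for a continuously running load.

    0.30 EUR/kWh is an assumed rate; check your local tariff.
    """
    return watts / 1000 * hours * eur_per_kwh

# 2x T4 at 70 W TDP each, plus an assumed ~130 W for CPU, RAM, and fans
print(round(annual_cost_eur(2 * 70 + 130)))   # ~710 EUR/year
# Compare a 2x V100 box drawing ~600 W under sustained load (assumed figure)
print(round(annual_cost_eur(600)))            # ~1577 EUR/year
```

Over a few years of 24/7 operation, the electricity delta can rival the hardware price difference between tiers.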
Target: Larger models, heavier workloads
| Component | Spec | Est. Price |
|---|---|---|
| GPU | 2x Tesla V100 32GB | €4,000-5,000 |
| Server Chassis | Quality 2U/4U | €600-1,000 |
| CPU | Dual Xeon Gold | €400-600 |
| RAM | 256GB DDR4 ECC | €600-800 |
| Storage | 4TB NVMe RAID | €400-600 |
| Total | | €6,000-8,000 |
What it runs:
All builds include: