
Distributed GPU Sharing for Sustainable AI Training

Unify heterogeneous GPUs (2GB–32GB+) into a single compute pool. Train AI models at lower cost with measurable sustainability impact.

200 Active Nodes  ·  20 Active Users  ·  12.4 PF Compute Power

Why Choose Hugin?

Democratizing AI infrastructure with a community-powered compute grid.

💸

Cost Efficiency

Access GPU compute at 30-50% lower cost than centralized cloud providers. Our distributed network eliminates data center overhead, passing savings directly to you.

Faster Iteration

No more waiting in queues for H100s. Launch jobs instantly across thousands of consumer and prosumer GPUs, ideally suited for fine-tuning and inference.

📈

Flexible Scaling

From single-GPU prototypes to multi-shard distributed training. Seamlessly scale your workload across a heterogeneous grid of NVIDIA, AMD, and Intel hardware.

How Hugin Works

Six-step pipeline from job definition to verified delivery with transparent billing.

01
📋

Job Definition

Data classification, target metric, and pool selection.

02
💰

Cost Estimate

HU & GPU-seconds estimate with upper bound and SLA.

03
🧩

Sharding & Scheduling

Capability-aware distribution, node selection, redundant execution.

04
⚙️

Execution

Sandboxed micro-shard execution with telemetry, health checks, auto-retry.

05

Aggregation & Verification

Validation via quality thresholds, spot checks, and anomaly detection.

06
📄

Billing & Payment

Verified job billing, automatic GPU owner payout.
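The six stages above can be sketched as a simple ordered progression. This is a minimal illustration only; the stage names and the `run_job` helper are hypothetical, not Hugin's actual API:

```python
# Illustrative sketch of the six-stage job pipeline described above.
# Stage names and run_job are hypothetical, not Hugin's real interface.

PIPELINE = [
    "job_definition",            # 01: data classification, target metric, pool
    "cost_estimate",             # 02: HU & GPU-seconds estimate, upper bound, SLA
    "sharding_scheduling",       # 03: capability-aware shard distribution
    "execution",                 # 04: sandboxed micro-shards with telemetry
    "aggregation_verification",  # 05: quality thresholds, spot checks
    "billing_payment",           # 06: verified billing, owner payout
]

def run_job(job: dict) -> dict:
    """Walk a job through every stage in order, recording the trail."""
    job = dict(job, completed_stages=[])
    for stage in PIPELINE:
        job["completed_stages"].append(stage)
    job["status"] = "delivered"
    return job

result = run_job({"name": "finetune-llm"})
print(result["status"])                  # delivered
print(len(result["completed_stages"]))   # 6
```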

Trusted Pools

Choose the right trust level for your workload — from open community to dedicated enterprise capacity.

Community

Best-effort execution for non-sensitive workloads. Open participation from all verified node agents.

  • Open to all GPU owners
  • Best-effort SLA
  • Public/non-sensitive data only
  • Lowest cost tier

Verified

KYC/KYB verified nodes with policy enforcement, optional EU/EEA geo-restrictions, and secure boot signals.

  • KYC & KYB verified
  • Policy enforcement
  • EU/EEA geo-restriction
  • Secure boot & TPM
  • Higher completion targets

Dedicated

Customer-reserved capacity with custom policies. Maximum predictability and control for enterprise workloads.

  • Reserved capacity
  • Custom policies
  • Highest predictability
  • Customer-specific allowlist
  • Enterprise SLA
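Since job definition includes data classification (step 01) and pool selection, the tier rules above can be pictured as a simple compatibility check. Everything here is a hedged sketch: the field names, sensitivity labels, and `pool_allows` helper are hypothetical, not Hugin's schema:

```python
# Hypothetical pool-selection check; labels and fields are illustrative,
# not Hugin's actual job schema.
POOL_RULES = {
    "community": {"max_sensitivity": "public"},        # non-sensitive data only
    "verified":  {"max_sensitivity": "internal"},      # KYC/KYB nodes, policy enforcement
    "dedicated": {"max_sensitivity": "confidential"},  # customer-reserved capacity
}

SENSITIVITY_ORDER = ["public", "internal", "confidential"]

def pool_allows(pool: str, classification: str) -> bool:
    """True if the pool's trust tier covers the job's data classification."""
    cap = POOL_RULES[pool]["max_sensitivity"]
    return SENSITIVITY_ORDER.index(classification) <= SENSITIVITY_ORDER.index(cap)

print(pool_allows("community", "public"))    # True
print(pool_allows("community", "internal"))  # False: needs Verified or Dedicated
```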

Transparent HU-Based Pricing

HU (Hugin Unit) is your simple billing unit. 1 HU ≈ 3,600 normalized GPU-seconds. Get a pre-estimate and a guaranteed upper bound before every job.

0.21 EUR / HU
Transparent Pricing for Training
  • More iterations with the same budget
  • Transparent HU + GPU-seconds billing
  • Pre-estimate & upper bound guaranteed
  • Community, Verified, or Dedicated pools
  • ESG reporting: kWh/HU & CO₂e/HU
Start Training Now
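To make the HU arithmetic concrete, here is a back-of-the-envelope estimator using the figures above (1 HU ≈ 3,600 normalized GPU-seconds at €0.21/HU). The function name and the safety margin for the upper bound are illustrative assumptions, not Hugin's published estimation policy:

```python
HU_PRICE_EUR = 0.21          # list price per Hugin Unit (from the pricing above)
GPU_SECONDS_PER_HU = 3_600   # 1 HU ≈ 3,600 normalized GPU-seconds

def estimate_cost(normalized_gpu_seconds: float, margin: float = 0.25) -> dict:
    """Pre-estimate and upper bound for a job, in HU and EUR.

    `margin` is an illustrative safety factor for the guaranteed upper
    bound; Hugin's real margin policy may differ.
    """
    hu = normalized_gpu_seconds / GPU_SECONDS_PER_HU
    estimate_eur = hu * HU_PRICE_EUR
    upper_bound_eur = estimate_eur * (1 + margin)
    return {"hu": hu,
            "estimate_eur": round(estimate_eur, 2),
            "upper_bound_eur": round(upper_bound_eur, 2)}

# A fine-tuning run consuming 50 normalized GPU-hours:
print(estimate_cost(50 * 3_600))  # 50 HU ≈ €10.50, upper bound ≈ €13.13
```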
Share Your Compute
Turn idle devices into passive value. The Hugin network harnesses compute power from any connected device, anywhere in the world.
  • 📱 Smartphones (Browser or App)
  • 🚗 Electric Vehicles (EV Compute)
  • 🌐 Any Web Browser Tab
  • 💻 Desktop & Server GPUs
Join the Network

HU Rate by Device Class

Owner Payout Rate: 1 HU = €0.153 net  ·  73% owner share  ·  ~20% above RunPod / Vast.ai

Device Class | ~HU/hour | Examples
Smartphones (Idle) | 0.05–0.15 | iPhone 15 Pro, Pixel 8
Electric Vehicles (EV) | 0.2–0.4 | Tesla MCU, Polestar 2
4–6 GB Consumer | 0.4–0.6 | GTX 1650, RTX 3050
8–12 GB Consumer | 0.6–0.9 | RTX 3060, RTX 4070
16–24 GB Workstation | 1.6–2.8 | RTX 3090, RTX 4090
24–48 GB+ Datacenter | 2.8–4.5 | A100, H100
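Combining the device-class rates with the €0.153/HU net owner payout quoted above gives a rough earnings range per device. This is a sketch under stated assumptions: the class keys, the 30-day month, and continuous earning while online are illustrative simplifications, and real payouts depend on job demand and verification:

```python
OWNER_PAYOUT_EUR_PER_HU = 0.153  # net owner rate quoted above (73% of €0.21)

# ~HU/hour ranges from the device-class table above (keys are illustrative)
DEVICE_CLASSES = {
    "smartphone_idle":     (0.05, 0.15),
    "ev":                  (0.2, 0.4),
    "consumer_4_6gb":      (0.4, 0.6),
    "consumer_8_12gb":     (0.6, 0.9),
    "workstation_16_24gb": (1.6, 2.8),
    "datacenter_24_48gb":  (2.8, 4.5),
}

def monthly_earnings(device: str, hours_per_day: float = 8.0) -> tuple:
    """Rough monthly payout range (EUR, low/high) for a device class.

    Assumes 30 days/month and continuous earning while online; actual
    earnings depend on job demand and verified completion.
    """
    lo, hi = DEVICE_CLASSES[device]
    hours = hours_per_day * 30
    return (round(lo * hours * OWNER_PAYOUT_EUR_PER_HU, 2),
            round(hi * hours * OWNER_PAYOUT_EUR_PER_HU, 2))

# An RTX 3060-class GPU online 8 h/day earns roughly €22–33/month:
print(monthly_earnings("consumer_8_12gb"))
```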