Secure Your Data
With Local Hardware
Companion Intelligence Servers arrive with Companion HUB, which provides access to an OSS marketplace of locally hosted apps that can be downloaded and accessed from anywhere.
Expanding Potential
Hardware Comparison
This sample layout is designed for Squarespace sections and includes a responsive comparison matrix for Companion Core and key alternative platforms.
| Category | Companion Core | NVIDIA DGX Spark | Mac mini (M4) | Mac Studio (M4 Max, comparable RAM) |
|---|---|---|---|---|
| CPU Platform | AMD Ryzen™ AI Max+ 395 (soldered) · 3.0GHz base · Up to 5.1GHz boost · 16-core/32-thread · 64MB L3 Cache · 120W sustained / 140W boost | NVIDIA Grace Blackwell 20-core Arm · 10 Cortex-X925 + 10 Cortex-A725 | Apple M4 14-core CPU | Apple M4 Max 16-core CPU |
| GPU | AMD Radeon™ 8060S · Up to 2.9GHz · 40 Compute Units · 32MB MALL Cache | Blackwell GPU | Integrated GPU (20-core) | Integrated GPU (40-core) |
| Memory | 128GB unified · ~256 GB/s | 128GB unified · ~273 GB/s | 64GB unified · ~273 GB/s | 128GB unified · ~273 GB/s |
| Storage | 2TB NVMe (Upgradable to 16 TB) | 4 TB NVMe (Upgradable to 8 TB) | 2 TB (Not upgradable) | 2 TB (Not upgradable) |
| AI Performance | Up to ~200B parameter models | Up to ~200B parameter models | Up to ~80B parameter models | Up to ~200B parameter models |
| Price | ~$3,600 | $4,699 | ~$2,899 | ~$4,099 |
Compare Companion
Explore the three core comparison layers and jump directly to the most relevant section.
The Ownership Layer
Companion is built for personal AI sovereignty: local-first by default, minimal cloud reliance, and a built-in memory system. Most alternatives keep data local, but aren’t designed for personal ownership end-to-end.
The Capability Layer
Companion supports local AI workflows with agents and persistent memory, so automation improves over time. Run models with more durable context, multi-step actions, and a cohesive local-first stack.
120B vs 32B Models
120B-class models are better for complex, multi-step tool use and long-context reasoning, but require far more VRAM. 32B models are faster and cheaper to run, yet less reliable when tasks get deep or orchestration-heavy.
Ownership Layer (Core Differentiator)
The core differentiator: data ownership, system design for personal AI, and independence from cloud services.
| Factor | Companion Core | NVIDIA DGX Spark | Mac mini (M4) | Mac Studio (M4 Max) |
|---|---|---|---|---|
| Data stays local | ✓ | ✓ | ✓ | ✓ |
| System designed for personal AI | ✓ | ✗ (enterprise) | ✗ | ✗ |
| No cloud dependency | ✓ | Partial | ✗ | ✗ |
| Built-in memory system | ✓ | ✗ | ✗ | ✗ |
| Turnkey AI system | ✓ | ✗ | ✗ | ✗ |
Capability Layer
What each platform can do: core AI capabilities, agent orchestration, and system readiness.
| Capability | Companion Core | NVIDIA DGX Spark | Mac mini (M4) | Mac Studio (M4 Max) |
|---|---|---|---|---|
| Local AI (LLMs) | ✓ | ✓ | Moderate (memory constrained) | ✓ |
| Multi-agent workflows | ✓ | Limited | Limited | ✗ |
| Persistent memory system | ✓ | ✗ | ✗ | ✗ |
| App ecosystem (local-first) | ✓ | Enterprise-focused | macOS apps | macOS apps |
| Out-of-box AI system | ✓ | Partial (enterprise setup) | ✗ | ✗ |
120B vs 32B Model Comparison
Tradeoffs between 120B-class models and 32B alternatives for Companion deployments.
| Feature | 120B Model | 32B Model |
|---|---|---|
| Primary Use | High-reasoning agentic tasks | Standard conversation & light automation |
| Context Window | ~131K tokens | ~41K tokens |
| Reliability | High; manages multi-step tool calls | Moderate; may fail at complex tool use |
| Reasoning | Native "Thinking" / configurable depth | Generally limited to base instructions |
| Hardware Req. | 60GB–80GB VRAM (e.g., H100) | 24GB+ VRAM (e.g., RTX 3090/4090) |
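The VRAM figures in the table follow from simple arithmetic: memory for weights scales with parameter count times bytes per parameter, plus overhead for the KV cache and activations. A rough back-of-envelope sketch (the 20% overhead factor and byte-per-parameter values are illustrative assumptions, not measured figures):

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for serving an LLM.

    params_billion  -- model size in billions of parameters
    bytes_per_param -- 2.0 for FP16/BF16, 1.0 for FP8/INT8, ~0.5 for 4-bit
    overhead        -- assumed multiplier (~20%) for KV cache and activations
    """
    return params_billion * bytes_per_param * overhead

# 120B model at 4-bit quantization: 120 * 0.5 * 1.2 = 72 GB
# (within the 60GB-80GB range quoted above)
print(estimate_vram_gb(120, bytes_per_param=0.5))

# 32B model at 4-bit: 32 * 0.5 * 1.2 = 19.2 GB, fitting a 24GB RTX 3090/4090
print(estimate_vram_gb(32, bytes_per_param=0.5))
```

Under these assumptions, a 4-bit 120B model lands in the 60GB–80GB range while a 4-bit 32B model fits on a single 24GB consumer GPU, matching the table's hardware requirements.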