Node Capabilities

Schema for advertising node hardware, loaded models, and availability.

NodeCapabilities

The top-level capabilities object sent during registration and returned in discovery responses.

```json
{
  "hardware": { ...HardwareCapability... },
  "loadedModels": ["mlx-community/Qwen3-8B-4bit"],
  "maxModelSizeGB": 20.0,
  "isAvailable": true,
  "ptnIDs": ["a1b2c3..."]
}
```
| Field | Type | Description |
| --- | --- | --- |
| `hardware` | object | HardwareCapability profile |
| `loadedModels` | array | Model IDs currently loaded and ready for inference |
| `maxModelSizeGB` | number | Largest model this node can load (GB) |
| `isAvailable` | boolean | Whether the node is accepting inference requests |
| `ptnIDs` | array | Private TealeNet IDs this node belongs to |
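The two schema objects can be sketched as Python dataclasses. This is an illustrative sketch only — field names mirror the JSON keys shown in this document, but these types are not part of any official client library:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HardwareCapability:
    # Required fields from the HardwareCapability table; optional ones default to None.
    chipFamily: str
    chipName: str
    totalRAMGB: float
    gpuCoreCount: int
    memoryBandwidthGBs: float
    tier: int
    gpuBackend: Optional[str] = None
    platform: Optional[str] = None
    gpuVRAMGB: Optional[float] = None  # None on unified-memory machines

@dataclass
class NodeCapabilities:
    hardware: HardwareCapability
    loadedModels: list[str] = field(default_factory=list)
    maxModelSizeGB: float = 0.0
    isAvailable: bool = True
    ptnIDs: list[str] = field(default_factory=list)

# Mirrors the example payloads above.
hw = HardwareCapability("m4Pro", "Apple M4 Pro", 48.0, 20, 273.0, 2,
                        gpuBackend="metal", platform="macOS")
caps = NodeCapabilities(hw, ["mlx-community/Qwen3-8B-4bit"], 20.0, True, ["a1b2c3"])
```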

HardwareCapability

Describes the node's hardware profile.

```json
{
  "chipFamily": "m4Pro",
  "chipName": "Apple M4 Pro",
  "totalRAMGB": 48.0,
  "gpuCoreCount": 20,
  "memoryBandwidthGBs": 273.0,
  "tier": 2,
  "gpuBackend": "metal",
  "platform": "macOS",
  "gpuVRAMGB": null
}
```
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `chipFamily` | string | Yes | One of the chip family values below |
| `chipName` | string | Yes | Human-readable chip name (e.g., "Apple M4 Pro", "NVIDIA RTX 4090") |
| `totalRAMGB` | number | Yes | Total system RAM in gigabytes |
| `gpuCoreCount` | number | Yes | Number of GPU cores |
| `memoryBandwidthGBs` | number | Yes | Estimated memory bandwidth in GB/s |
| `tier` | number | Yes | Device tier: 1 (backbone), 2 (desktop), 3 (tablet), 4 (phone/leaf) |
| `gpuBackend` | string | No | GPU compute backend; inferred from `chipFamily` for Apple devices |
| `platform` | string | No | Operating system; inferred from the compile target if omitted |
| `gpuVRAMGB` | number | No | Discrete GPU VRAM in GB (`null` for unified memory architectures) |

Derived Properties

Available RAM for models:

```
availableRAMForModelsGB = max(totalRAMGB - 4.0, 1.0)
```

The 4 GB reservation accounts for OS and background processes.
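The derived property above translates directly to code (a minimal sketch of the stated formula, with its 4 GB reservation and 1 GB floor):

```python
def available_ram_for_models_gb(total_ram_gb: float) -> float:
    # Reserve 4 GB for the OS and background processes,
    # but never report less than 1 GB available for models.
    return max(total_ram_gb - 4.0, 1.0)

print(available_ram_for_models_gb(48.0))  # 44.0 (the M4 Pro example above)
print(available_ram_for_models_gb(4.0))   # 1.0 (floor applies)
```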

Estimated inference watts: Power draw during active inference, used for electricity floor pricing.

Chip Family Values

Apple Silicon -- Mac

| Value | Chip |
| --- | --- |
| `m1` | Apple M1 |
| `m1Pro` | Apple M1 Pro |
| `m1Max` | Apple M1 Max |
| `m1Ultra` | Apple M1 Ultra |
| `m2` | Apple M2 |
| `m2Pro` | Apple M2 Pro |
| `m2Max` | Apple M2 Max |
| `m2Ultra` | Apple M2 Ultra |
| `m3` | Apple M3 |
| `m3Pro` | Apple M3 Pro |
| `m3Max` | Apple M3 Max |
| `m3Ultra` | Apple M3 Ultra |
| `m4` | Apple M4 |
| `m4Pro` | Apple M4 Pro |
| `m4Max` | Apple M4 Max |
| `m4Ultra` | Apple M4 Ultra |

Apple Silicon -- iPhone/iPad

| Value | Chip |
| --- | --- |
| `a14` | Apple A14 Bionic |
| `a15` | Apple A15 Bionic |
| `a16` | Apple A16 Bionic |
| `a17Pro` | Apple A17 Pro |
| `a18` | Apple A18 |
| `a18Pro` | Apple A18 Pro |
| `a19Pro` | Apple A19 Pro |

Non-Apple (Cross-Platform)

| Value | Description |
| --- | --- |
| `nvidiaGPU` | NVIDIA GPU (CUDA) |
| `amdGPU` | AMD GPU (ROCm) |
| `intelCPU` | x86_64 Intel CPU |
| `amdCPU` | x86_64 AMD CPU |
| `armGeneric` | ARM64 non-Apple (Snapdragon, Raspberry Pi, etc.) |
| `unknown` | Undetected or unsupported hardware |

Estimated Inference Watts

Power consumption estimates during active inference workloads, by chip family.

| Chip Family | Watts | Notes |
| --- | --- | --- |
| M1 | 20W | |
| M1 Pro | 30W | |
| M1 Max | 40W | |
| M1 Ultra | 60W | |
| M2 | 22W | |
| M2 Pro | 35W | |
| M2 Max | 45W | |
| M2 Ultra | 65W | |
| M3 | 22W | |
| M3 Pro | 36W | |
| M3 Max | 48W | |
| M3 Ultra | 70W | |
| M4 | 22W | |
| M4 Pro | 38W | |
| M4 Max | 50W | |
| M4 Ultra | 75W | |
| A14, A15 | 5W | iPhone/iPad |
| A16, A17 Pro | 6W | iPhone/iPad |
| A18, A18 Pro | 7W | iPhone/iPad |
| A19 Pro | 8W | iPhone/iPad |
| NVIDIA GPU | 300W | Typical RTX 3090/4090 TDP |
| AMD GPU | 250W | Typical AMD GPU TDP |
| Intel CPU | 65W | Typical desktop CPU |
| AMD CPU | 65W | Typical desktop CPU |
| ARM Generic | 10W | Low-power ARM (Snapdragon, RPi) |
| Unknown | 30W | Conservative estimate |
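The document says these estimates feed electricity floor pricing but does not give the pricing formula. The sketch below assumes a simple energy-times-tariff model (watt-hours consumed during inference multiplied by an electricity rate); the function name, signature, and pricing model are all illustrative assumptions:

```python
# Estimated inference watts per chipFamily value, taken from the table above
# (subset shown for brevity).
ESTIMATED_INFERENCE_WATTS = {
    "m1": 20.0, "m1Pro": 30.0, "m1Max": 40.0, "m1Ultra": 60.0,
    "m4": 22.0, "m4Pro": 38.0, "m4Max": 50.0, "m4Ultra": 75.0,
    "nvidiaGPU": 300.0, "amdGPU": 250.0,
    "intelCPU": 65.0, "amdCPU": 65.0,
    "armGeneric": 10.0, "unknown": 30.0,
}

def electricity_floor_usd(chip_family: str, seconds: float, usd_per_kwh: float) -> float:
    # Hypothetical floor price: energy drawn during inference times the tariff.
    # Unlisted families fall back to the conservative "unknown" estimate.
    watts = ESTIMATED_INFERENCE_WATTS.get(chip_family, ESTIMATED_INFERENCE_WATTS["unknown"])
    kwh = watts * seconds / 3_600_000  # watt-seconds -> kilowatt-hours
    return kwh * usd_per_kwh

# One hour of M4 Pro inference at $0.15/kWh: 38 W * 1 h = 0.038 kWh -> ~$0.0057
print(electricity_floor_usd("m4Pro", 3600, 0.15))
```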

GPU Backend Values

| Value | Description |
| --- | --- |
| `metal` | Apple Metal (macOS/iOS) |
| `cuda` | NVIDIA CUDA |
| `rocm` | AMD ROCm |
| `vulkan` | Vulkan (cross-platform) |
| `sycl` | Intel SYCL |
| `cpu` | CPU-only fallback |
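The HardwareCapability table notes that `gpuBackend` is inferred from `chipFamily` for Apple devices. One plausible inference rule is sketched below; the Apple-to-`metal` mapping follows from that note, while the non-Apple mappings and the `cpu` fallback are assumptions, not documented behavior:

```python
# All Apple chipFamily values from the tables above map to the Metal backend.
APPLE_FAMILIES = {
    "m1", "m1Pro", "m1Max", "m1Ultra", "m2", "m2Pro", "m2Max", "m2Ultra",
    "m3", "m3Pro", "m3Max", "m3Ultra", "m4", "m4Pro", "m4Max", "m4Ultra",
    "a14", "a15", "a16", "a17Pro", "a18", "a18Pro", "a19Pro",
}

def infer_gpu_backend(chip_family: str) -> str:
    if chip_family in APPLE_FAMILIES:
        return "metal"  # stated: inferred for Apple devices
    return {
        "nvidiaGPU": "cuda",  # assumption: matches the "NVIDIA GPU (CUDA)" row
        "amdGPU": "rocm",     # assumption: matches the "AMD GPU (ROCm)" row
    }.get(chip_family, "cpu")  # assumption: CPU/unknown families fall back to cpu
```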

Platform Values

| Value | Description |
| --- | --- |
| `macOS` | macOS |
| `iOS` | iOS / iPadOS |
| `linux` | Linux |
| `windows` | Windows |
| `android` | Android |
| `freebsd` | FreeBSD |

Device Tiers

| Tier | Name | Examples | Role |
| --- | --- | --- | --- |
| 1 | Backbone | Mac Studio, Mac Pro, Linux servers with GPU | Always-on inference nodes |
| 2 | Desktop | MacBook Pro, Mac Mini, Linux/Windows desktops with GPU | Primary compute |
| 3 | Tablet | iPad Pro (M-series), high-end Android tablets | Light inference |
| 4 | Phone/Leaf | iPhone, Android phones, SBCs | Consumer only |

Tier numbering is inverted: tier 1 is the highest capability. When filtering with minTier, a value of 2 means "tier 2 or better" (i.e., tier 1 and tier 2).
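Because the numbering is inverted, a `minTier` filter keeps nodes whose tier number is less than or equal to the requested value. A minimal sketch (the `meets_min_tier` helper name and the node list are illustrative):

```python
def meets_min_tier(tier: int, min_tier: int) -> bool:
    # Tier 1 is the most capable, so "min_tier or better" means tier <= min_tier.
    return tier <= min_tier

# Hypothetical nodes tagged with the device tiers from the table above.
nodes = [("studio", 1), ("macbook", 2), ("ipad", 3), ("iphone", 4)]

# minTier=2 keeps tier 1 and tier 2 nodes.
eligible = [name for name, tier in nodes if meets_min_tier(tier, 2)]
print(eligible)  # ['studio', 'macbook']
```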