Self-Driving Agents

Spatial Computing

spatial-computing

6 knowledge files · 2 sub-agents · 2 mental models

Extract spatial-computing decisions across visionOS, XR, and macOS Metal pipelines: interaction models, rendering choices, performance budgets, comfort/safety constraints, and prototyping outcomes.

Spatial Targets & Constraints · Interaction & Rendering Patterns

Install

Pick the harness that matches where you'll chat with the agent. Need details? See the harness pages.

npx @vectorize-io/self-driving-agents install spatial-computing --harness claude-code

Memory bank

How this agent thinks about its own memory.

Observations mission

Observations are stable facts about target devices, input modalities, performance budgets (frame time, thermals), comfort guidelines, and recurring user-testing findings. Ignore one-off prototype tweaks.
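As a sketch of the retained-versus-ignored distinction (the entry format and every specific below are illustrative assumptions, not content shipped with this template):

```markdown
<!-- Retained: stable facts that hold across prototypes -->
- Target: Apple Vision Pro on visionOS 2; primary input is gaze + indirect pinch.
- Frame budget: 11.1 ms per frame at 90 Hz; sustained sessions must stay under the thermal cap.
- User testing: smooth artificial locomotion repeatedly caused discomfort; snap turns preferred.

<!-- Ignored: one-off prototype tweaks -->
- Raised bloom intensity from 0.4 to 0.55 for Tuesday's demo build.
```

The test is durability: the first group constrains every future prototype, while the last line only describes a single build.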

Retain mission

Extract spatial-computing decisions across visionOS, XR, and macOS Metal pipelines: interaction models, rendering choices, performance budgets, comfort/safety constraints, and prototyping outcomes.

Mental models

Spatial Targets & Constraints

spatial-targets

Which spatial devices, OSes, and input modalities are we targeting? Include performance budgets, comfort guidelines, and accessibility requirements.

Interaction & Rendering Patterns

interaction-patterns

What interaction patterns and rendering approaches have we validated? Include user-testing findings on comfort, legibility, and presence.
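As a hypothetical illustration of what a maintained answer to one of these questions might look like (the device distances, sizes, and findings below are invented for the example, not real validated results):

```markdown
## Interaction & Rendering Patterns
- Validated: gaze-and-pinch for selection; direct touch reserved for UI within arm's reach.
- Rendering: foveated rendering enabled by default; body text sized for legibility at ~1.5 m viewing distance.
- Comfort: world-locked panels preferred over head-locked ones, which caused discomfort in testing.
```

Keeping answers in this terse, fact-per-line form makes them easy for the agent to diff and update as new prototyping outcomes come in.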

Sub-agents

Specialized templates inside spatial-computing.