Weaves, Looms, Strands, and Fabric: A Recursive Knowledge Schema
The Frame Codex schema (Weave → Loom → Strand) isn't just organizational sugar. It's a carefully designed recursive structure that unlocks graph algorithms, enables efficient caching, and maps naturally to how humans actually organize knowledge.
Recursion All the Way Down
At first glance, the hierarchy seems simple:
Fabric
└── Weave (universe)
    └── Loom (collection)
        └── Strand (content)

But here's the key insight: strands can reference other strands. A strand about "recursion" can link to a strand about "induction", which links to "mathematical proof", which links back to "recursion". The graph is cyclic, not a tree.
Looms and weaves are just metadata containers that group strands. They don't break the recursion—they provide boundaries for scoped queries and caching.
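Because the reference graph can contain cycles, any traversal has to track visited nodes or it will loop forever. Here's a minimal sketch of that idea, using the recursion → induction → mathematical proof cycle from above (the `Strand` shape and `refs` field are illustrative, not the actual Codex schema):

```typescript
// Minimal sketch of the strand reference graph (field names are
// illustrative, not the actual Codex schema).
type Strand = { id: string; refs: string[] };

const strands: Record<string, Strand> = {
  recursion: { id: "recursion", refs: ["induction"] },
  induction: { id: "induction", refs: ["mathematical-proof"] },
  "mathematical-proof": { id: "mathematical-proof", refs: ["recursion"] }, // cycle closes here
};

// Reachability must track visited nodes, or the cycle loops forever.
function reachable(from: string, to: string): boolean {
  const seen = new Set<string>();
  const stack = [from];
  while (stack.length > 0) {
    const id = stack.pop()!;
    if (id === to) return true;
    if (seen.has(id)) continue;
    seen.add(id);
    stack.push(...(strands[id]?.refs ?? []));
  }
  return false;
}
```

The `seen` set is what makes a cyclic graph safe to walk; a naive recursive descent would never terminate here.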
Why Three Tiers?
Why not just files in folders? Why the explicit weave/loom/strand distinction?
1. Scoped Queries
When you search for "machine learning", you probably want results from the "technology" weave, not the "cooking" weave. Weaves give us natural query boundaries:
-- Search within a weave (10-100x faster)
SELECT * FROM strands
WHERE weave = 'technology'
  AND content MATCH 'machine learning'

-- vs global search (slow)
SELECT * FROM strands
WHERE content MATCH 'machine learning'
2. Hierarchical Caching
We cache aggregate statistics at the loom level:
- Total strands in loom
- Unique keywords (TF-IDF vectors)
- Average difficulty
- Common subjects/topics
When a single strand changes, we only recompute that loom's stats, not the entire weave. This is how we achieve 85-95% cache hit rates.
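A sketch of that invalidation pattern, assuming invented names (`LoomStats`, `statsFor`, `onStrandChanged`) and a toy in-memory cache:

```typescript
// Hedged sketch: loom-level stat caching with per-loom invalidation.
// Names and shapes here are illustrative, not the production schema.
type Strand = { loom: string; difficulty: number };
type LoomStats = { count: number; avgDifficulty: number };

const cache = new Map<string, LoomStats>();

function statsFor(loom: string, strands: Strand[]): LoomStats {
  const hit = cache.get(loom);
  if (hit) return hit; // cache hit: no recompute needed

  const inLoom = strands.filter((s) => s.loom === loom);
  const stats: LoomStats = {
    count: inLoom.length,
    avgDifficulty:
      inLoom.reduce((sum, s) => sum + s.difficulty, 0) / (inLoom.length || 1),
  };
  cache.set(loom, stats);
  return stats;
}

// Editing a strand invalidates only its own loom, not the whole weave.
function onStrandChanged(strand: Strand): void {
  cache.delete(strand.loom);
}
```

The key property: a write to one loom leaves every other loom's cached stats intact, which is where the high hit rate comes from.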
3. Natural Sharding
Each weave is independent. No cross-weave relationships. This means:
- Weaves can be deployed as separate microservices
- Horizontal scaling: add more weaves without coordination
- Namespace isolation: "intro.md" in weave A ≠ "intro.md" in weave B
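Namespace isolation falls out of fully-qualified strand keys. A tiny sketch (the `strandKey` helper is ours; the path shape follows the `weaves/…/looms/…/strands/…` convention used elsewhere in this post):

```typescript
// Sketch of namespace isolation via fully-qualified strand keys.
// The helper name is illustrative; the path shape follows the
// weaves/<weave>/looms/<loom>/strands/<file> convention.
const strandKey = (weave: string, loom: string, file: string): string =>
  `weaves/${weave}/looms/${loom}/strands/${file}`;

// "intro.md" in one weave is a different key than "intro.md" in another.
const a = strandKey("technology", "basics", "intro.md");
const b = strandKey("cooking", "basics", "intro.md");
```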
Graph Algorithms Enabled
The recursive structure unlocks powerful graph operations:
Shortest Path (Concept Navigation)
"How do I get from 'variables' to 'closures' in the JavaScript loom?"
// Dijkstra on the strand graph
path = shortestPath({
  start: 'variables',
  end: 'closures',
  edgeWeight: (a, b) => {
    // Prefer same-loom edges
    if (a.loom === b.loom) return 1
    return 5
  }
})
// Result: variables → functions → scope → closures

Personalized PageRank (Recommendations)
"Show me strands similar to what I've read, weighted by my interests."
// Bipartite graph: user ↔ strand
graph = {
  nodes: [...strands, ...users],
  edges: [
    { user: 'alice', strand: 's1', weight: 5 },      // rating
    { strand_a: 's1', strand_b: 's2', weight: 0.8 }, // similarity
  ]
}

recommendations = personalizedPageRank(graph, {
  startNode: 'alice',
  dampingFactor: 0.85
})

Community Detection (Auto-Loom Suggestions)
"Which strands should be grouped into a new loom?"
// Louvain algorithm on strand similarity graph
communities = detectCommunities(strandGraph)

// Suggests: "These 15 strands about 'React hooks'
// should become a new loom"
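Full Louvain is a modularity optimizer and too long to inline here, but the grouping idea can be sketched with a much simpler stand-in: drop weak similarity edges, then take connected components via union-find. The threshold and strand IDs below are invented for illustration:

```typescript
// Simplified stand-in for community detection: connected components of
// the similarity graph after dropping weak edges (not actual Louvain).
type Edge = { a: string; b: string; score: number };

function detectCommunities(ids: string[], edges: Edge[], minScore = 0.7): string[][] {
  // Union-find over strand IDs.
  const parent = new Map<string, string>();
  for (const id of ids) parent.set(id, id);
  const find = (x: string): string => {
    while (parent.get(x) !== x) x = parent.get(x)!;
    return x;
  };

  // Only strong edges merge components.
  for (const { a, b, score } of edges) {
    if (score >= minScore) parent.set(find(a), find(b));
  }

  // Collect members by root.
  const groups = new Map<string, string[]>();
  for (const id of ids) {
    const root = find(id);
    groups.set(root, [...(groups.get(root) ?? []), id]);
  }
  return [...groups.values()];
}
```

Each returned group is a candidate loom; Louvain refines the same intuition by optimizing modularity instead of using a hard threshold.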
Fabric: The Materialized View
When you union all weaves together, you get the fabric—the complete knowledge graph. This is what OpenStrand operates on when doing cross-domain queries:
// Find connections between cooking and chemistry
path = shortestPath({
  start: 'weaves/cooking/looms/techniques/strands/emulsification.md',
  end: 'weaves/science/looms/chemistry/strands/molecular-bonds.md',
  graph: fabric
})

// Discovers: emulsification → lipids → molecular-bonds
The fabric is expensive to materialize (O(n²) for relationship edges), so we only do it for specific queries. Most operations stay within a single weave or loom.
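To make the cross-domain traversal concrete, here's a runnable sketch over a toy fabric. The adjacency below is invented (the real edge set comes from strand references), and with unweighted edges the shortest path reduces to breadth-first search:

```typescript
// Toy fabric: adjacency over strand IDs (edges invented for illustration).
const fabric: Record<string, string[]> = {
  emulsification: ["lipids"],
  lipids: ["emulsification", "molecular-bonds"],
  "molecular-bonds": ["lipids"],
};

// Unweighted shortest path = breadth-first search with backpointers.
function shortestPath(start: string, end: string): string[] | null {
  const prev = new Map<string, string>();
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (node === end) {
      // Walk backpointers to reconstruct the path.
      const path = [end];
      while (path[0] !== start) path.unshift(prev.get(path[0])!);
      return path;
    }
    for (const next of fabric[node] ?? []) {
      if (seen.has(next)) continue;
      seen.add(next);
      prev.set(next, node);
      queue.push(next);
    }
  }
  return null; // no connection between the two strands
}
```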
Recommendations with Ratings
Movie- and book-style recommendation engines map naturally onto strands. Here's how it works:
Data Model
// User ratings (stored in OpenStrand, not Codex)
ratings = [
{ user: 'alice', strand: 'recursion-intro', score: 5 },
{ user: 'alice', strand: 'functional-programming', score: 4 },
{ user: 'bob', strand: 'recursion-intro', score: 5 },
]
// LLM-generated similarity (stored in Codex metadata)
similarities = [
{ strand_a: 'recursion-intro', strand_b: 'induction', score: 0.85 },
{ strand_a: 'recursion-intro', strand_b: 'loops', score: 0.60 },
]

Collaborative Filtering
// Find users similar to Alice
similarUsers = users
  .filter(u => cosineSimilarity(u.ratings, alice.ratings) > 0.7)

// Recommend strands they liked that Alice hasn't seen
recommendations = similarUsers
  .flatMap(u => u.ratings)
  .filter(r => !alice.ratings.some(own => own.strand === r.strand))
  .sort((a, b) => b.score - a.score)
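A self-contained version of that pipeline, with a concrete `cosineSimilarity` over the strands both users rated (user data and the 0.7 cutoff are illustrative):

```typescript
// Runnable sketch of the collaborative-filtering step.
// Users, ratings, and the 0.7 cutoff are illustrative.
type Rating = { strand: string; score: number };
type User = { name: string; ratings: Rating[] };

// Cosine similarity between two sparse rating vectors.
function cosineSimilarity(a: Rating[], b: Rating[]): number {
  const bMap = new Map<string, number>();
  for (const r of b) bMap.set(r.strand, r.score);
  let dot = 0;
  for (const r of a) dot += r.score * (bMap.get(r.strand) ?? 0);
  const norm = (rs: Rating[]) =>
    Math.sqrt(rs.reduce((sum, r) => sum + r.score * r.score, 0));
  return dot / (norm(a) * norm(b) || 1);
}

function recommendFor(target: User, others: User[]): string[] {
  const seen = new Set(target.ratings.map((r) => r.strand));
  return others
    .filter((u) => cosineSimilarity(u.ratings, target.ratings) > 0.7)
    .flatMap((u) => u.ratings)
    .filter((r) => !seen.has(r.strand))
    .sort((a, b) => b.score - a.score)
    .map((r) => r.strand);
}
```

Note the `seen` set: membership is checked by strand ID, not by object identity, which is the bug a naive `ratings.includes(...)` would introduce.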
Graph Walk
// Start from Alice's highest-rated strand
startStrand = 'recursion-intro'

// Walk the similarity graph
recommendations = breadthFirstSearch({
  start: startStrand,
  maxDepth: 2,
  edgeFilter: (edge) => edge.score > 0.7,
  nodeFilter: (node) => !alice.hasRead(node)
})
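The same walk, sketched as a depth-limited BFS over similarity edges (edge data, function name, and thresholds are invented for illustration):

```typescript
// Depth-limited BFS over similarity edges; data and names are invented.
type SimEdge = { a: string; b: string; score: number };

function graphWalk(
  start: string,
  edges: SimEdge[],
  maxDepth: number,
  minScore: number,
  alreadyRead: Set<string>
): string[] {
  const strong = edges.filter((e) => e.score > minScore); // edgeFilter
  const out: string[] = [];
  const seen = new Set([start]);
  let frontier = [start];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of strong) {
        const nbr = e.a === node ? e.b : e.b === node ? e.a : null;
        if (!nbr || seen.has(nbr)) continue;
        seen.add(nbr);
        next.push(nbr);
        if (!alreadyRead.has(nbr)) out.push(nbr); // nodeFilter
      }
    }
    frontier = next;
  }
  return out;
}
```

Results surface in breadth order, so strands one hop from the seed appear before two-hop discoveries.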
Why This Matters
Most knowledge bases are flat. You search, you get results, done. Frame Codex is a graph. Every strand is a node, every reference is an edge. This unlocks:
- Concept navigation: "Explain X assuming I know Y"
- Learning paths: Shortest path from beginner to expert
- Knowledge gaps: Missing edges = content opportunities
- Semantic clustering: Auto-discover related content
And because it's all git-native, you can fork the entire fabric, experiment with new algorithms, and PR your improvements back upstream.