MyContextLibrary Differences

How MCL compares to other knowledge management and AI memory approaches.

How This Differs from the Second Brain Approach

A second brain (PARA, Zettelkasten, Obsidian vaults, etc.) is primarily for you to read and think with. The Context Library is primarily for you and AI to reason with, safely. If you already have a second brain, MCL can be treated as an extension of it.

Key Differences

  • Audience & Interface – Second brains optimize for human browsing/search; the Context Library optimizes for machine-readable slices that can be mixed into prompts.
  • Privacy Scopes Built-in – We tag docs public / private / secret to route them to cloud, local, or nowhere. Second brains rarely encode this decision logic.
  • Minimal Required Header – We insist on only a few fields so tools can interoperate; second brains often have no consistent metadata or rely on ad-hoc tags.
  • Promotion Pattern – Any file can become a folder with an index, keeping IDs stable; second brains often restructure freely but lack a convention for tool-friendly evolution.
  • Git as a First-Class Citizen – Version history and diffing aren’t optional add-ons—they're core to provenance and model auditing.
  • AI Context Packets – We emphasize assembling tiny, purpose-fit packets for each conversation, not dumping an entire vault.
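The minimal header and privacy-scope ideas above can be sketched in a few lines. This is an illustrative example, not an MCL specification: the field names (`id`, `sensitivity`) and the routing destinations are assumptions chosen for clarity.

```python
# Minimal sketch: route a doc by a hypothetical `sensitivity` header field.
# Field names and destinations are illustrative, not part of any MCL spec.

HEADER = {
    "id": "notes/health-goals",   # stable ID that survives file-to-folder promotion
    "sensitivity": "private",     # public | private | secret
}

def route(doc_header: dict) -> str:
    """Decide where a doc may travel based on its sensitivity tag."""
    destinations = {
        "public": "cloud",     # safe to include in prompts to hosted models
        "private": "local",    # local models only
        "secret": "nowhere",   # never leaves disk
    }
    # Unknown or missing tags fail closed to the most restrictive scope.
    return destinations.get(doc_header.get("sensitivity", "secret"), "nowhere")

print(route(HEADER))  # -> local
```

Failing closed (unknown tags treated as secret) is the point of encoding the decision logic in the header rather than in each tool: every consumer applies the same contract.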

How This Differs from RAG / Memory Frameworks

RAG (Retrieval-Augmented Generation) and agent "memory" systems (MemGPT/Letta, vector DBs) are implementation patterns for apps. The Context Library is a human-owned substrate these systems can plug into—but it doesn’t require them.

Key Differences

  • File-First, Not Vector-First – You don’t need embeddings or a DB to start; folders and indexes are enough. Embeddings are optional, layered on later.
  • Human-Curated Navigation – Index docs and promotion rules give deterministic, explainable retrieval; RAG often relies on opaque similarity scores.
  • Explicit Sharing Contracts – Sensitivity levels and share policies travel with the content. Typical RAG pipelines ignore privacy metadata unless you bolt it on.
  • Decoupled from Any Single Model/Stack – Use local or cloud models, MCP or manual copy/paste. RAG frameworks often tie you to a particular SDK or vector store.
  • Auditable Context Assembly – Git + minimal headers let you reproduce exactly what the model saw. Memory frameworks often mutate state invisibly.
  • Composable with (Not Replaced by) RAG – You can still build a RAG layer on top of the Library—treat each doc/chunk as a resource to embed. The MCL manifest just says you don’t have to start there.
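The packet-assembly and sharing-contract points above can be sketched together. Everything here is a hypothetical illustration: the in-memory `LIBRARY` stands in for a folder of files, and the doc IDs and fields are invented for the example.

```python
# Hypothetical sketch of context-packet assembly: pick a handful of docs,
# enforce their sharing contract, and join them into one prompt-ready slice.
# The library structure, IDs, and field names are illustrative assumptions.

LIBRARY = {
    "projects/site-redesign": {"sensitivity": "public",  "body": "Goals: ship v2 landing page."},
    "journal/2024-06":        {"sensitivity": "private", "body": "Felt stuck on the copy."},
}

def assemble_packet(doc_ids, allowed=frozenset({"public"})):
    """Return a prompt-ready packet from only the docs cleared for sharing."""
    cleared = [
        f"## {doc_id}\n{LIBRARY[doc_id]['body']}"
        for doc_id in doc_ids
        if LIBRARY[doc_id]["sensitivity"] in allowed
    ]
    return "\n\n".join(cleared)

packet = assemble_packet(["projects/site-redesign", "journal/2024-06"])
# Only the public doc survives; the private journal entry stays home.
```

Because the filter runs before anything reaches a model (or an embedding pipeline), the same packet logic works whether retrieval downstream is a human picking files, an MCP server, or a RAG layer treating each doc as a chunk to embed.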