The Worth-Sharing Workflow

After solving a problem or discovering a pattern, should you share it? Not every solution needs to be public—but genuinely innovative approaches or solutions in unique contexts can save others significant time.

The /worth-sharing workflow automates this decision. It searches for similar approaches, evaluates the documented solution for novelty, and, if the solution is worth sharing, creates both a GitHub gist and a full article.

Why this exists

Every’s Compound Engineering workflow includes a /workflows:compound command that documents solutions and patterns. This creates internal documentation—useful for your own agents and future development.

But some of that documentation has broader value. The question is: which parts?

Manually evaluating novelty is tedious. You’d need to:

  • Search the web for similar solutions
  • Check GitHub for existing implementations
  • Compare your approach to what’s already out there
  • Decide if your context or approach adds something new

The /worth-sharing workflow automates all of this. If a solution passes the novelty check, it generates shareable content automatically.

How it works

The workflow runs in six phases:

Phase 1: Input detection and context extraction

The workflow accepts a path to a solution or pattern document (from docs/solutions/ or docs/patterns/). It extracts:

  • Problem statement
  • Solution approach
  • Code examples
  • Domain context (LangGraph, async Python, stores, etc.)
  • Innovation points

All context comes from the document itself—no conversation history required.
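
In code terms, the extracted context might look something like this minimal sketch (the field names are illustrative, not the workflow's actual schema):

from dataclasses import dataclass

@dataclass
class SolutionContext:
    problem: str               # the problem statement
    approach: str              # the solution approach
    code_examples: list[str]   # code blocks lifted from the document
    domains: list[str]         # e.g. ["langgraph", "async-python"]
    innovations: list[str]     # points the document flags as novel
    source_path: str = ""      # e.g. "docs/solutions/my-solution.md"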

Phase 2: Parallel search for similar solutions

Multiple searches run simultaneously:

  • External search: Web search and GitHub code search for similar approaches
  • Internal search: Grep through existing docs and patterns
  • Best practices research: Agent-driven search for established patterns in the domain

Results are consolidated with similarity scores and “easy to find” flags.
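
Conceptually, the fan-out is plain asyncio concurrency. Here's a sketch, assuming each search is exposed as an async function; the names and stub bodies are hypothetical:

import asyncio

async def search_external(ctx):
    # Web search + GitHub code search (stub).
    return []

async def search_internal(ctx):
    # Grep through existing docs and patterns (stub).
    return []

async def search_practices(ctx):
    # Agent-driven best-practices research (stub).
    return []

async def find_similar(ctx):
    # All three searches run concurrently rather than one after another.
    external, internal, practices = await asyncio.gather(
        search_external(ctx), search_internal(ctx), search_practices(ctx)
    )
    # Consolidation then attaches a similarity score and an
    # "easy to find" flag to each hit before Phase 3 sees it.
    return [*external, *internal, *practices]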

Phase 3: Novelty analysis

The key decision point. The workflow proceeds only if:

  • No easy-to-find, high-quality solution exists, OR
  • The solution exists but in a significantly different context, OR
  • The approach is novel even if the problem is common

If an easy-to-find, high-quality solution already exists in the same domain, the workflow stops—no point sharing something that’s already well-documented elsewhere.
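
The gate reduces to a small predicate. Here's a sketch of that logic, assuming each consolidated hit carries flags from Phase 2 (the key names are illustrative):

def worth_sharing(hits: list[dict], approach_is_novel: bool) -> bool:
    for hit in hits:
        if hit["easy_to_find"] and hit["high_quality"] and hit["same_context"]:
            # Already well documented in the same domain: stop, unless
            # the approach itself is new for this common problem.
            return approach_is_novel
    # Nothing comparable is easy to find, or it lives in a
    # significantly different context: proceed.
    return True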

Phase 4: Deep review

If the solution passes novelty analysis, multiple code review agents run in parallel:

  • Always: Code simplicity reviewer, general Python reviewer
  • Conditional: LangGraph reviewer, async Python reviewer, LLM interaction specialist, data integrity guardian, architecture strategist

These agents refine the code examples and ensure quality before sharing.
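
Reviewer selection can be pictured as a mapping from detected domains to agents. A sketch, with illustrative slugs for the reviewers listed above:

ALWAYS_RUN = ["code-simplicity-reviewer", "python-reviewer"]

BY_DOMAIN = {
    "langgraph": "langgraph-reviewer",
    "async-python": "async-python-reviewer",
    "llm": "llm-interaction-specialist",
    "data": "data-integrity-guardian",
    "architecture": "architecture-strategist",
}

def select_reviewers(domains: list[str]) -> list[str]:
    # Base reviewers always run; domain reviewers join when relevant.
    return ALWAYS_RUN + [BY_DOMAIN[d] for d in domains if d in BY_DOMAIN]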

Phase 5: Content generation

Two outputs are created:

GitHub Gist: Concise, self-documenting code, capped at three files. The focus is the core solution pattern, not every edge case.

Quartz Article: Full write-up with:

  • Problem context and why it matters
  • Detailed solution explanation
  • Usage examples
  • Links to related resources
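
One way to picture the pair of outputs is as a single result object. A minimal sketch with illustrative field names:

from dataclasses import dataclass

@dataclass
class ShareableContent:
    gist_files: dict[str, str]   # filename -> contents, three files at most
    article_markdown: str        # the full Quartz write-up

    def __post_init__(self):
        # The gist stays deliberately small: core pattern only.
        assert 1 <= len(self.gist_files) <= 3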

Phase 6: Post-creation

The source document is updated with cross-references to the gist and article. A summary displays what was created.
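
Mechanically, the cross-referencing amounts to appending links to the source document. A sketch, assuming the gist and article come back as plain URLs:

from pathlib import Path

def add_cross_references(doc: Path, gist_url: str, article_url: str) -> None:
    # Point the internal doc at its public counterparts.
    with doc.open("a", encoding="utf-8") as f:
        f.write(f"\nShared publicly: gist {gist_url} | article {article_url}\n")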

Integration with Compound Engineering

The workflow integrates with /workflows:compound:

# Document a solution
/workflows:compound docs/solutions/my-solution.md
 
# Evaluate for sharing
/workflows:worth-sharing docs/solutions/my-solution.md
 
# Or evaluate the most recent
/workflows:worth-sharing latest

When /workflows:compound finishes, it can offer to evaluate the new document for sharing automatically.

Agent architecture

The agents used in this workflow are spun off from Every’s compound-engineering-plugin, customized for thala’s needs. Key agents include:

  • best-practices-researcher: Searches for external best practices
  • learnings-researcher: Searches internal docs and patterns
  • pattern-recognition-specialist: Identifies if a pattern is novel
  • code-simplicity-reviewer: Ensures minimal, clean code
  • Domain-specific reviewers for LangGraph, async Python, LLM interactions, etc.

These agents run in parallel where possible, reducing total execution time.

Get the workflow

The full workflow specification is available as a public gist:

worth-sharing.md

To use it with Claude Code, save the file as .claude/commands/workflows/worth-sharing.md in your project.