
AI Skills API

Local infrastructure for AI context management: store skills, snippets, and project conventions, and cache LLM responses to reduce token consumption.

Quick Start

```sh
# Copy env file
cp .env.example .env

# Run with Docker
docker compose up -d

# Or run locally
pip install -r requirements.txt
uvicorn main:app --reload
```

The API is available at http://helm:8675. Interactive docs are at http://helm:8675/docs.

Endpoints

| Endpoint | Description |
|----------|-------------|
| `GET /skills` | List all skills |
| `GET /skills/{id}` | Get skill (increments `usage_count`) |
| `POST /skills` | Create skill |
| `PUT /skills/{id}` | Update skill |
| `DELETE /skills/{id}` | Delete skill |
| `GET /skills/search?q=query` | Search skills |
| `GET /snippets` | List snippets |
| `GET /snippets/{id}` | Get snippet |
| `POST /snippets` | Create snippet |
| `DELETE /snippets/{id}` | Delete snippet |
| `GET /conventions` | List conventions |
| `GET /conventions?project=/path` | Get conventions for a project |
| `POST /conventions` | Create convention |
| `PUT /conventions/{id}` | Update convention |
| `DELETE /conventions/{id}` | Delete convention |
| `POST /cache/lookup` | Check cache for a prompt |
| `POST /cache/store` | Store a response in the cache |
| `GET /cache/stats` | Cache statistics |
| `GET /memory` | List memory entries |
| `GET /memory?project=name` | Get memory for a project |
| `POST /memory` | Create memory entry |
| `PUT /memory/{id}` | Update memory |
| `DELETE /memory/{id}` | Delete memory |
| `GET /context?project=/path&skills=id1,id2` | Get full context bundle |

Example Usage

Create a skill

```sh
curl -X POST http://helm:8675/skills \
  -H "Content-Type: application/json" \
  -d '{
    "id": "homelab-docker-compose",
    "name": "Docker Compose Standard",
    "category": "homelab",
    "content": "Always use docker-compose v3.8+. Include health checks, restart policies, and resource limits.",
    "tags": ["docker", "compose", "infrastructure"]
  }'
```

Get context bundle

```sh
curl "http://helm:8675/context?project=/home/server/apps/media-server&skills=homelab-docker-compose,react-v2"
```
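Once the bundle comes back, it has to be flattened into prompt text. A minimal sketch of that step, assuming (this shape is not documented above) the bundle is JSON with top-level `skills`, `conventions`, and `memory` lists whose entries carry `name`/`content` fields:

```python
def build_system_prompt(bundle: dict) -> str:
    """Assemble a context bundle into a single system-prompt block.

    The field names ("skills", "conventions", "memory", "name",
    "content") are assumptions about the response shape, not taken
    from the API docs.
    """
    sections = []
    for skill in bundle.get("skills", []):
        sections.append(f"## Skill: {skill['name']}\n{skill['content']}")
    for conv in bundle.get("conventions", []):
        sections.append(f"## Convention\n{conv['content']}")
    for note in bundle.get("memory", []):
        sections.append(f"## Memory\n{note['content']}")
    return "\n\n".join(sections)
```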

Check cache

```sh
curl -X POST http://helm:8675/cache/lookup \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "How do I configure traefik?",
    "model": "claude-3-opus"
  }'
```

Integration Pattern

In your agent's system prompt or pre-request hook:

  1. Call GET /context?project={current_project}&skills={skill_ids}
  2. Inject the returned content into the prompt
  3. Before sending to the LLM, check the cache with POST /cache/lookup
  4. After receiving a response, optionally store it with POST /cache/store

This avoids re-sending your standards with every request and lets repeated queries be served from the cache.
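Steps 3 and 4 above can be sketched as a small wrapper around your model call. This is illustrative only: `ask_llm`, the `post` parameter, and the cache response fields `hit` and `response` are assumed names, not part of the documented API.

```python
import json
import urllib.request

BASE_URL = "http://helm:8675"  # host/port from the Quick Start above


def _post(path: str, payload: dict) -> dict:
    """POST JSON to the Skills API and return the decoded response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def ask_llm(prompt: str, model: str, call_model, post=_post) -> str:
    """Consult the cache before calling the model; store fresh answers.

    `call_model` is the caller's own LLM function. The "hit" and
    "response" field names are assumptions about the cache payload.
    """
    cached = post("/cache/lookup", {"prompt": prompt, "model": model})
    if cached.get("hit"):
        return cached["response"]
    answer = call_model(prompt)
    post("/cache/store", {"prompt": prompt, "model": model, "response": answer})
    return answer
```

The injectable `post` parameter keeps the wrapper testable without a running server.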

Database

The service stores data in a SQLite database, `ai.db`, with the following tables:

  • `skills` - Reusable patterns and instructions
  • `snippets` - Code snippets
  • `conventions` - Project-specific conventions
  • `cache` - LRU cache of LLM responses
  • `memory` - Project memory/notes
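For reference, "LRU" here means the least recently *used* entry is evicted first, so a lookup refreshes an entry's position. A minimal in-memory illustration of those semantics (the real service presumably implements this in SQLite, not like this):

```python
from collections import OrderedDict


class LRUCache:
    """Toy model of the eviction policy the `cache` table implies."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # a hit marks the entry most recently used
        return self._data[key]

    def put(self, key, value) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used
```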