
# AI Skills API

Local infrastructure for AI context management. Store skills, snippets, and conventions, and cache LLM responses to reduce token consumption.

## Quick Start

```bash
# Copy env file
cp .env.example .env

# Run with Docker
docker compose up -d

# Or run locally
pip install -r requirements.txt
uvicorn main:app --reload
```

The API is available at http://localhost:8080; interactive docs are at http://localhost:8080/docs.

## Endpoints

| Endpoint | Description |
| --- | --- |
| `GET /skills` | List all skills |
| `GET /skills/{id}` | Get skill (increments `usage_count`) |
| `POST /skills` | Create skill |
| `PUT /skills/{id}` | Update skill |
| `DELETE /skills/{id}` | Delete skill |
| `GET /skills/search?q=query` | Search skills |
| `GET /snippets` | List snippets |
| `GET /snippets/{id}` | Get snippet |
| `POST /snippets` | Create snippet |
| `DELETE /snippets/{id}` | Delete snippet |
| `GET /conventions` | List conventions |
| `GET /conventions?project=/path` | Get conventions for a project |
| `POST /conventions` | Create convention |
| `PUT /conventions/{id}` | Update convention |
| `DELETE /conventions/{id}` | Delete convention |
| `POST /cache/lookup` | Check cache for a prompt |
| `POST /cache/store` | Store a response in the cache |
| `GET /cache/stats` | Cache statistics |
| `GET /memory` | List memory entries |
| `GET /memory?project=name` | Get memory for a project |
| `POST /memory` | Create memory entry |
| `PUT /memory/{id}` | Update memory |
| `DELETE /memory/{id}` | Delete memory |
| `GET /context?project=/path&skills=id1,id2` | Get full context bundle |
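The endpoints above can also be called from Python rather than curl. The sketch below is a minimal client, assuming the local deployment from Quick Start; the class and method names are illustrative, not part of the API.

```python
# Minimal Python client sketch for the endpoints above. The class and
# method names are illustrative (not part of the API); the base URL
# assumes the local deployment from Quick Start.
import json
import urllib.parse
import urllib.request


class SkillsClient:
    def __init__(self, base="http://localhost:8080"):
        self.base = base.rstrip("/")

    def _request(self, method, path, payload=None):
        # All endpoints speak JSON; GET and DELETE send no body.
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(
            self.base + path,
            data=data,
            method=method,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def search_skills(self, query):
        # GET /skills/search?q=query
        return self._request(
            "GET", "/skills/search?" + urllib.parse.urlencode({"q": query})
        )

    def get_skill(self, skill_id):
        # GET /skills/{id} (the API increments usage_count on each read)
        return self._request("GET", f"/skills/{skill_id}")

    def create_skill(self, skill):
        # POST /skills with a JSON body like the curl example below
        return self._request("POST", "/skills", skill)
```

For example, `SkillsClient().search_skills("docker")` issues `GET /skills/search?q=docker`.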

## Example Usage

### Create a skill

```bash
curl -X POST http://localhost:8080/skills \
  -H "Content-Type: application/json" \
  -d '{
    "id": "homelab-docker-compose",
    "name": "Docker Compose Standard",
    "category": "homelab",
    "content": "Always use docker-compose v3.8+. Include health checks, restart policies, and resource limits.",
    "tags": ["docker", "compose", "infrastructure"]
  }'
```

### Get context bundle

```bash
curl "http://localhost:8080/context?project=/home/server/apps/media-server&skills=homelab-docker-compose,react-v2"
```

### Check cache

```bash
curl -X POST http://localhost:8080/cache/lookup \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "How do I configure traefik?",
    "model": "claude-3-opus"
  }'
```

## Integration Pattern

In your agent's system prompt or pre-request hook:

1. Call `GET /context?project={current_project}&skills={skill_ids}`
2. Inject the returned content into the prompt
3. Before sending to the LLM, check `POST /cache/lookup`
4. After receiving a response, optionally `POST /cache/store`
This avoids re-sending your standards with every request and serves repeated queries from the cache.
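The four steps above can be sketched end to end. This is a minimal sketch, assuming the local deployment from Quick Start; the `hit`/`response` field names in the cache payloads and the way the bundle is injected into the prompt are assumptions to adapt to the actual response shapes.

```python
# Sketch of the four-step integration flow. The cache payload field
# names ("hit", "response") and the prompt-injection format are
# assumptions, not documented API contracts.
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8080"


def context_url(project, skill_ids):
    # Step 1: build GET /context?project=...&skills=id1,id2
    query = urllib.parse.urlencode(
        {"project": project, "skills": ",".join(skill_ids)}
    )
    return f"{BASE}/context?{query}"


def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def post_json(path, payload):
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def run_with_context(project, skill_ids, prompt, model, call_llm):
    bundle = get_json(context_url(project, skill_ids))
    # Step 2: inject the context bundle ahead of the user prompt.
    full_prompt = f"{json.dumps(bundle)}\n\n{prompt}"
    # Step 3: check the cache before paying for an LLM call.
    cached = post_json("/cache/lookup", {"prompt": full_prompt, "model": model})
    if cached.get("hit"):
        return cached["response"]
    # Step 4: call the LLM, then store the response for next time.
    response = call_llm(full_prompt)
    post_json(
        "/cache/store",
        {"prompt": full_prompt, "model": model, "response": response},
    )
    return response
```

`call_llm` is left as a caller-supplied function so the sketch stays independent of any particular LLM SDK.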

## Database

The SQLite database `ai.db` contains the following tables:

- `skills` - Reusable patterns and instructions
- `snippets` - Code snippets
- `conventions` - Project-specific conventions
- `cache` - LRU cache of LLM responses
- `memory` - Project memory/notes