Commit graph

24 commits

SHA1 Message Date
63f41c318a Simplify MCP instructions: remove workflow coaching, trust AI's local tools 2026-03-23 00:46:57 -04:00
a23ae2c4e0 docs: add note about first session project identifier prompt 2026-03-23 00:40:39 -04:00
9e3d5ae550 MCP: coach AI to ask for project ID at session start; add project-setup-guide seed skill 2026-03-23 00:40:13 -04:00
bae42fb141 docs: update project scoping to use git remote for cross-machine consistency 2026-03-23 00:33:26 -04:00
2a87dfafcf docs: clarify ask-first approach for skill/memory creation 2026-03-23 00:27:59 -04:00
22dbdeffef MCP instructions: require explicit permission before creating skills/memories 2026-03-23 00:27:42 -04:00
39d891e9ce docs: note about auto-coaching in MCP integration 2026-03-23 00:25:25 -04:00
5adfa7472b Add MCP server instructions for autonomous learning and tool use 2026-03-23 00:24:59 -04:00
db27c0d64b Fix MCP SSE: use mcp.sse_app() + uvicorn.run() instead of run_sse() 2026-03-23 00:12:37 -04:00
d9231e23a0 Increase pip timeout to 100s for slow network builds 2026-03-23 00:03:54 -04:00
4351b56a95 Fix mcp dependencies: loosen version constraints 2026-03-23 00:02:08 -04:00
e346d356e5 Add SSE MCP server, comprehensive docs, and OpenCode integration
- Implement SSE mode for MCP server (mcp/skills.py)
- Add MCP service to docker-compose.yml on port 3000
- Add uvicorn dependency to mcp/requirements.txt
- Create SETUP.md, USAGE.md, OPENCODE-MCP.md
- Update README with quick links and MCP section
- Remove semantic cache references throughout
- Add cross-platform Python MCP setup script to template repo
2026-03-22 23:59:33 -04:00
95805dfc86 Fix config loading to return proper dataclass objects instead of dicts 2026-03-22 23:21:31 -04:00
62637acb6f Remove exposed Ollama port to avoid conflict with host systemd service 2026-03-22 23:03:54 -04:00
5505d2b217 Fix compress endpoint to use request.messages correctly 2026-03-22 22:47:49 -04:00
9ad11f5be4 Fix compression endpoint request validation and message schema 2026-03-22 22:47:07 -04:00
6853999534 Add Ollama service to docker-compose, expand seed skills with D&D and monitoring, create entrypoint for auto-model-pull 2026-03-22 22:41:49 -04:00
e4dd4da188 Update MCP server (remove cache tool), fix readme endpoints, add template reference 2026-03-22 22:35:02 -04:00
3dce79e818 Add agent template for Forgejo 2026-03-22 22:33:39 -04:00
b8edf40010 Major refactor: remove semantic cache, add config, auth, improve RAG performance, fix tags JSON 2026-03-22 22:32:44 -04:00
62c875c9a6 Change API port from 8080 to 8675 across all configs and docs 2026-03-22 21:54:51 -04:00
82fd963577 Add token-saving patterns: semantic cache, RAG, compression
- semantic_cache.py: Semantic similarity matching for cache hits
- rag.py: RAG-based context selection with local embeddings
- compression.py: Conversation history summarization
- New endpoints: /cache/semantic-lookup, /cache/semantic-store, /context/rag, /compress
- Uses sentence-transformers (all-MiniLM-L6-v2) - no external API calls
- No vector DB needed - cosine similarity on small datasets is fast enough
- Expected savings: 50-70% token reduction
2026-03-22 21:32:08 -04:00
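The semantic-cache lookup added in 82fd963577 can be sketched as plain cosine similarity over stored embeddings, which is why no vector DB is needed for small datasets. This is a minimal sketch, not the repo's `semantic_cache.py`: the entry shape and the 0.85 threshold are assumptions, and the real code derives the vectors with sentence-transformers (all-MiniLM-L6-v2) rather than taking them as inputs.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_lookup(query_vec, cache, threshold=0.85):
    """Return the cached response whose embedding is most similar to
    query_vec, or None if nothing clears the threshold (a cache miss)."""
    best, best_sim = None, threshold
    for entry in cache:
        sim = cosine(query_vec, entry["embedding"])
        if sim >= best_sim:
            best, best_sim = entry["response"], sim
    return best
```

A near-duplicate query scores close to 1.0 against its cached twin and returns the stored response; an unrelated query falls below the threshold and misses, so the caller proceeds to the LLM as usual.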
7f7699ff94 Initial commit: Skills API with MCP servers
- FastAPI backend with SQLite (ai.db)
- Tables: skills, snippets, conventions, cache, memory
- MCP servers: homelab, gameservers, skills
- Docker Compose setup
- Seed data with 8 skills, 2 conventions, 2 snippets
- Token savings patterns via context bundles and caching
2026-03-22 21:18:23 -04:00
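The initial commit names the five SQLite tables in ai.db (skills, snippets, conventions, cache, memory) but not their columns. The sketch below is a hypothetical layout to show the shape of the store; every column name here is an assumption, not the repo's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical columns -- the commit only names the tables, not their schemas.
conn.executescript("""
CREATE TABLE skills      (id INTEGER PRIMARY KEY, name TEXT, body TEXT, tags TEXT);
CREATE TABLE snippets    (id INTEGER PRIMARY KEY, language TEXT, code TEXT);
CREATE TABLE conventions (id INTEGER PRIMARY KEY, topic TEXT, rule TEXT);
CREATE TABLE cache       (key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE memory      (id INTEGER PRIMARY KEY, note TEXT, created_at TEXT);
""")
# Seeding works the same way regardless of the exact columns: insert rows,
# then let the FastAPI layer query them back out.
conn.execute(
    "INSERT INTO skills (name, body, tags) VALUES (?, ?, ?)",
    ("docker-compose-restart", "How to restart a compose stack", '["homelab"]'),
)
row = conn.execute("SELECT name FROM skills").fetchone()
```

Note that tags are stored as a JSON string in a TEXT column, which matches the later "fix tags JSON" work in commit b8edf40010.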
114b3b1628 Initial commit 2026-03-22 21:13:03 -04:00