Claude Code is Anthropic's agentic AI coding tool that runs entirely in your terminal. Not a chatbot. Not a copilot suggesting tab-completions. A full autonomous agent that reads your codebase, writes and modifies files, executes shell commands, interprets output, self-corrects errors, and loops until the task is complete — all from the command line, with no GUI required.
The mental model shift is critical: you don't ask Claude Code questions. You give it goals. "Build me a WebSocket server. Install dependencies. Write tests. Fix failures. Tell me when it's green." Then you watch it work. The agent loop handles planning, execution, error reading, and iteration without you touching anything.
Anthropic built their own product Cowork (the GUI wrapper for non-coders) on top of Claude Code in under two weeks — using Claude Code to build itself. That recursive bootstrap is the clearest demonstration of what you now have access to on your Ubuntu machine.
| TOOL | WHAT IT DOES | LIMITATION |
|---|---|---|
| ChatGPT / Claude.ai | Answers questions, writes code snippets | You paste code in, you paste code out. No file access. |
| GitHub Copilot | Suggests line completions in your editor | Reactive, not agentic. Doesn't execute or plan. |
| Cursor / Windsurf | GUI-based agentic coding in an editor | Requires a GUI app. Limited bash/system access. |
| Claude Code | Full agentic loop: plan, execute, fix, report | API costs. Context window limits on massive repos. |
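The "full agentic loop" in that last row can be sketched in a few lines of Python. This is a conceptual illustration only, not Claude Code's actual implementation: `run_check` and `attempt_fix` are hypothetical stand-ins for "run the test suite" and "let the model read the failure and patch the code".

```python
def agent_loop(run_check, attempt_fix, max_turns=10):
    """Conceptual plan-execute-observe-fix loop.

    run_check()          -> (ok: bool, output: str)  e.g. run the test suite
    attempt_fix(output)  -> None                     e.g. model reads output, edits files
    """
    for turn in range(1, max_turns + 1):
        ok, output = run_check()
        if ok:
            # Goal reached — report back instead of asking what to do next
            return f"green after {turn} turn(s)"
        # The agent reads the failure output and self-corrects
        attempt_fix(output)
    return "gave up: max turns reached"
```

The key difference from a chatbot is the loop itself: the observation (`output`) feeds the next action without a human in between, and `max_turns` is the safety valve (the same role `--max-turns` plays later in this guide).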
Claude Code requires Node.js 18 or higher. On Ubuntu, the system Node is often outdated — use NodeSource to get a current LTS version. The tool itself installs globally via npm and runs from any directory.
```bash
# Check your current Node version first
node --version

# If below 18, install Node 20 LTS via NodeSource
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Verify — should show v20.x.x
node --version
npm --version

# Alternative: use nvm for version management
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 20
nvm use 20
nvm alias default 20
```
```bash
# Global install — makes 'claude' available everywhere
npm install -g @anthropic-ai/claude-code

# Verify the install
claude --version

# If you get permission errors with global npm:
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
npm install -g @anthropic-ai/claude-code
```
```bash
# Option A: Environment variable (recommended)
export ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"

# Make it permanent — add to shell config
echo 'export ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"' >> ~/.bashrc
source ~/.bashrc

# Option B: .env file in project root (never commit to git)
echo 'ANTHROPIC_API_KEY=sk-ant-api03-your-key-here' > ~/projects/signal-city/.env
echo '.env' >> ~/projects/signal-city/.gitignore

# Option C: Claude Code handles auth via OAuth on first run
# Just run 'claude' and it opens a browser for authentication
claude
```
```bash
# Full verification sequence
echo "Node: $(node --version)"
echo "NPM: $(npm --version)"
echo "Claude: $(claude --version)"
echo "API Key set: $([ -n "$ANTHROPIC_API_KEY" ] && echo YES || echo NO)"

# Quick functional test
cd /tmp && claude -p "respond with exactly: SYSTEM ONLINE" --no-interactive
```
```bash
# Anthropic ships updates frequently — stay current
npm update -g @anthropic-ai/claude-code

# Check what version you have vs latest
npm outdated -g @anthropic-ai/claude-code

# Force reinstall to a specific version
npm install -g @anthropic-ai/claude-code@latest
```
The single most important habit with Claude Code: always launch from inside your project root. This gives the agent proper filesystem context — it knows where it is, what's around it, and what it's working with. Running it from the wrong directory is the #1 cause of "it's editing the wrong files" confusion.
```bash
# Always cd to your project first
cd ~/projects/signal-city

# Launch interactive REPL
claude

# Launch with an immediate task — skips the blank prompt
claude "review the current codebase and summarize what each file does"

# Launch and add extra directories to context
claude --add-dir ~/projects/ares-system --add-dir ~/projects/signal-city-frontend

# Launch pointing at a different directory without cd'ing
claude --add-dir ~/projects/old-signal-city
```
On your very first run, Claude Code needs to authenticate with your Anthropic account. It opens a browser window for OAuth login. After completing that, your credentials are stored locally and you won't need to authenticate again. If you set ANTHROPIC_API_KEY in your environment, it uses that directly and skips OAuth.
```
# The prompt looks like this:
> (your cursor here)

# Type your task naturally
> Build a REST endpoint for listing all Signal City archetypes

# Claude responds, plans, executes, shows you what it's doing
# You see file edits, bash commands, and their output in real time

# Multi-line input — use Shift+Enter or paste directly
> Build me a complete authentication system. Use JWT tokens.
  Include /login, /logout, /refresh endpoints. Write tests for each.

# Reference files by name — Claude reads them automatically
> Read signal_city_api.py and explain how the routing works
```
| SHORTCUT | ACTION |
|---|---|
| Ctrl+C | Cancel current operation or interrupt running command |
| Ctrl+D | Exit Claude Code cleanly |
| Up Arrow | Navigate previous inputs in history |
| Shift+Enter | New line without submitting (multi-line prompts) |
| Tab | Autocomplete file paths and slash commands |
| Ctrl+L | Clear terminal display (doesn't clear conversation context) |
```bash
# SSH to your VPS
ssh user@your-vps-ip

# Start a tmux session so it survives disconnects
tmux new-session -s signal-city

# Navigate and launch
cd ~/signal-city
claude

# Detach from tmux (Claude keeps running)
# Press: Ctrl+B then D

# Reattach later
tmux attach -t signal-city
```
Slash commands are typed directly into the REPL and control Claude Code's session state, context, model, and behavior. Learn these cold — they're your runtime controls. Type them exactly as shown; they're distinct from your task prompts.
```
# LONG SESSION MANAGEMENT
/status    # check token usage
/compact   # compress when >50% context used
/review    # audit changes before committing
/clear     # start completely fresh

# MODEL SWITCHING STRATEGY
# Start with Sonnet (default) — good balance
/model claude-sonnet-4-5   # default, most tasks
/model claude-opus-4-5     # hard architecture problems
/model claude-haiku-4-5    # fast, cheap, simple tasks

# PROJECT ONBOARDING RITUAL
cd ~/projects/new-project
claude
/init      # generate CLAUDE.md
/status    # check context loaded
# now describe your first task
```
Command-line flags control everything about how Claude Code initializes and operates. Unlike slash commands (which are runtime controls), flags are set at launch time. Master these for full operational flexibility.
| FLAG | VALUES | WHAT IT DOES |
|---|---|---|
| --model / -m | model name string | Set the Claude model to use. Overrides default. |
| -p / --print | "task string" | Print mode — execute task, output to stdout, exit. No REPL. Core flag for scripting. |
| --no-interactive | flag | Headless mode — never prompt for input. Required for cron jobs and CI pipelines. |
| --continue / -c | flag | Resume the most recent conversation session. |
| --resume | session ID | Resume a specific session by its ID. Find IDs via --list-sessions. |
| --add-dir | directory path | Add a directory to Claude's initial context. Stackable — use multiple times. |
| --output-format | text / json / stream-json | Control output format. JSON for scripting, stream-json for real-time piping. |
| --dangerously-skip-permissions | flag | Skip confirmation prompts for destructive actions. Only for trusted automation. |
| --allowedTools | tool name list | Whitelist specific tools Claude can use. Restrict capabilities for sandboxed runs. |
| --disallowedTools | tool name list | Blacklist specific tools. Block bash execution, file writes, etc. |
| --max-turns | integer | Maximum agentic loop iterations before stopping. Prevents runaway loops. |
| --system-prompt | prompt string | Inject a system-level instruction before the conversation starts. |
| --append-system-prompt | prompt string | Append additional instructions to the existing system prompt. |
| --verbose | flag | Show detailed debug output — tool calls, reasoning steps, API requests. |
```bash
# INTERACTIVE SESSIONS

# Default launch — Sonnet, interactive REPL
claude

# Launch with Opus for a hard problem
claude --model claude-opus-4-5

# Continue from where you left off
claude --continue

# Multi-repo session
claude --add-dir ~/projects/signal-city --add-dir ~/projects/ares-system

# Inject persona instructions at launch
claude --system-prompt "You are Hacker-X. Be terse, technical, and direct. No pleasantries."

# SCRIPTING / HEADLESS

# Simple print-and-exit
claude -p "count lines in all Python files in src/"

# Headless with JSON output for parsing
claude -p "analyze codebase and return JSON with file count, languages, and main purpose" \
  --output-format json --no-interactive

# Controlled automation — limit turns, skip permissions
claude -p "fix all linting errors in src/" --no-interactive --max-turns 20 --dangerously-skip-permissions

# Read-only mode — block all write tools
claude -p "audit this codebase for security issues" --disallowedTools "Write,Edit,Bash"

# Verbose debug mode
claude --verbose -p "debug the import error in main.py"
```
Claude Code's filesystem access is one of its most powerful capabilities. It reads files to build context, writes files to implement changes, and understands the relationships between files across your entire project. You reference files naturally in your prompts — Claude figures out the paths.
When you launch Claude Code from a project directory, it performs an initial scan of the file tree. It reads file names, directory structure, and key files like package.json, requirements.txt, README.md, and CLAUDE.md. When you reference a file in a prompt, it reads that file in full before acting on it. For large codebases, it uses semantic search to find relevant files.
```
# Implicit file reading — Claude finds and reads what it needs
> Fix the bug in the authentication module

# Explicit file reference
> Read signal_city_api.py and explain how request routing works

# Reference multiple files
> Compare ares_router.py and signal_city_api.py — how do they interact?

# Reference by pattern
> Read all files in archetypes/ and summarize each persona

# Reference with context
> Look at the test file for ares_router and tell me what coverage is missing
```
```
# Create a new file from a description
> Create archetypes/hacker_x.py — the Hacker-X persona class. It should
  have: system_prompt property, voice_config dict, routing_keywords list,
  and a generate_response() method.

# Edit a specific section of a file
> In signal_city_api.py, add rate limiting to the /broadcast endpoint.
  Use slowapi. 10 requests per minute per IP.

# Refactor across multiple files
> Rename 'broadcast_engine' to 'signal_engine' across the entire codebase.
  Update imports, variable names, function names, and docstrings.

# Batch file generation
> Create a complete test suite for every file in src/archetypes/.
  Use pytest. One test file per archetype. Cover all public methods.

# Transform file format
> Convert all hardcoded strings in config.py to environment variables.
  Create a .env.example showing all required variables.
```
```
> Map the entire codebase structure
> Find all files that import from ares_router
> List every API endpoint defined in src/
> Show me all TODO comments across the project
> Find the function that handles WebSocket connections
```
```
> Add type hints to every function in src/
> Remove all print() statements — replace with logging
> Convert synchronous code to async/await throughout
> Add error handling to every API endpoint
> Generate docstrings for all undocumented functions
```
For repos with thousands of files (like your 3700+ project collection), don't try to load everything at once. Be specific about which subdirectory or module you're working in. Claude uses intelligent file search to find relevant files — it doesn't need to read every file to understand a module.
```
# Scoped work — focus on a specific module
> cd into the src/archetypes/ module conceptually. Only work on the files
  in that directory for now.

# Explicit scope limiting
> Focus only on signal_city_api.py for this task. Don't touch any other files.

# Incremental loading
> Start by reading just the main entry point. Then tell me what else you
  need to read to fix this bug.
```
Claude Code doesn't just edit files — it executes shell commands, reads their output, and adjusts its plan based on what happens. This is the execution layer that separates an agentic tool from a code generator. The full Ubuntu command line is available to the agent.
```
# Direct execution requests
> Run the test suite and fix any failures
> Install the websockets package and verify it works
> Check what's running on port 8000

# Multi-step chains
> Install all dependencies, run the linter, run the tests, fix any issues
  you find, then report the final status

# System inspection
> Check our Ubuntu system resources — CPU, RAM, disk usage
> Show me what Python packages are currently installed
> Find all files larger than 10MB in the project

# Network and service ops
> Start the FastAPI server in the background
> Check if the API is responding on localhost:8000
> Test the /broadcast endpoint with a curl request
```
```
# Example: Give Claude this task
> Build a working WebSocket server for Signal City Radio. Handle multiple
  concurrent connections. Install any packages needed. Write tests. Run
  them. Fix failures. Report when the test suite is green.

# Watch Claude execute this sequence autonomously:

# STEP 1: Check what's available
$ ls -la src/ && cat requirements.txt

# STEP 2: Research and plan (may read docs if needed)
#   decides to use the websockets library with asyncio

# STEP 3: Install dependencies
$ pip install websockets pytest pytest-asyncio

# STEP 4: Write the server code
#   creates src/websocket_server.py

# STEP 5: Write the tests
#   creates tests/test_websocket_server.py

# STEP 6: Run the tests
$ python -m pytest tests/test_websocket_server.py -v

# STEP 7: Read test output
#   sees: FAILED tests/test_websocket_server.py::test_connection_limit

# STEP 8: Understand the failure
#   reads the stack trace, identifies the bug in the semaphore logic

# STEP 9: Fix the bug
#   edits src/websocket_server.py

# STEP 10: Re-run tests
$ python -m pytest tests/test_websocket_server.py -v
#   ALL PASSED

# STEP 11: Report back to you
#   "WebSocket server is operational. 4/4 tests passing."
```
By default, Claude Code asks permission before running commands it considers significant — commands that modify system state, install packages, or touch files outside the current project. You can approve individual commands, approve a category of commands ("allow all pip installs"), or run with --dangerously-skip-permissions to pre-approve everything.
```
# Claude will ask before running:
#   "I'd like to run: sudo apt-get install libpq-dev"
#   "Allow? (y/n/always)"

# "always" adds it to trusted commands for this session
# "y" runs it once
# "n" tells Claude to find an alternative approach
```
Claude Code has a tiered permission model that determines what it can do without asking. Understanding this system lets you tune the trust level precisely for each use case — maximum safety for exploratory work, maximum autonomy for trusted automation pipelines.
| LEVEL | WHAT'S ALLOWED | WHEN TO USE |
|---|---|---|
| Restricted | Read-only. No writes, no bash. Analysis only. | Auditing unknown codebases. Security reviews. |
| Default | Reads freely. Asks before writes and significant bash commands. | Interactive development sessions. Default mode. |
| Trusted | Reads and writes freely. Asks before system-level bash. | Known projects with clear tasks. Most daily work. |
| Full Auto | Everything allowed. No prompts. --dangerously-skip-permissions | Automated pipelines. Trusted cron jobs. CI/CD. |
```bash
# Read-only mode — disallow all write and execute tools
claude --disallowedTools "Write,Edit,MultiEdit,Bash"

# Allow only specific tools
claude --allowedTools "Read,Glob,Grep,Write"

# Full automation — skip all permission prompts
claude --dangerously-skip-permissions

# Scope bash to specific commands only
# (set in CLAUDE.md or via config)
```
Claude Code has three distinct memory layers. Understanding all three — and how they interact — is the difference between a tool that constantly forgets context and one that knows your project like a senior engineer who's been on it for months.
| LAYER | SCOPE | PERSISTENCE | HOW TO USE |
|---|---|---|---|
| In-session context | Current REPL session only | Cleared on /clear or exit | Everything discussed this session. Files read, decisions made, errors fixed. Grows over time — manage with /compact. |
| CLAUDE.md files | Per project (or per subdirectory) | Permanent — loaded every session | Project architecture, coding standards, persona instructions, key file locations. Your project's persistent memory. |
| /memory store | Global — across all projects | Permanent — survives everything | Your universal preferences, cross-project patterns, technology choices, workflow preferences. |
The context window is finite. In a long session with many file reads, commands, and back-and-forth, you'll approach the limit. Claude Code shows a warning when you're getting close. The key strategy: compact proactively, don't wait until you hit the wall.
```
# PROACTIVE CONTEXT MANAGEMENT

# Check usage regularly
> /status
# look at: tokens used / total tokens available

# Summarize before compacting (better recall)
> Summarize the key decisions we've made in this session, the architecture
  we've established, and what still needs to be done.

# Then compact
> /compact

# Or have Claude do both in one move
> Before compacting, write a session-state.md file that captures everything
  we've built today, then /compact

# Starting a new task after a long session
> /clear
# CLAUDE.md and /memory still loaded — project context persists
```
```
# Open the memory manager
> /memory

# Things worth storing in global memory:
#   "Always use async/await in Python code"
#   "Signal City projects use Hostinger VPS at [IP]"
#   "My Python style: Black formatting, type hints required"
#   "Prefer FastAPI over Flask for new projects"
#   "Never use jQuery — vanilla JS only"
#   "ARES has 4 archetypes: ElderGut, Hacker-X, Seyra, Narratus"

# Things to keep in CLAUDE.md instead:
#   Project-specific architecture
#   This project's specific file structure
#   Domain-specific terminology for this project
```
```
# Manually inject context at session start
> Before we start working: this is Signal City Radio, an AI broadcast
  platform. The four archetypes are ElderGut (wisdom), Hacker-X (tech),
  Seyra (emotion), Narratus (story). All code is Python + FastAPI. We use
  async/await everywhere. No external CSS frameworks.

# Load a file as context manually
> Read ARCHITECTURE.md first so you understand the system design

# Reference previous session work
> Resume: we were building the WebSocket broadcast system. Read the
  websocket_server.py file to recall what was done. Continue with adding
  the archetype-aware color routing.
```
CLAUDE.md is the single most powerful feature for long-term project work. It's a markdown file at your project root (and optionally in subdirectories) that Claude Code reads automatically every single session. Think of it as the project's permanent system prompt — the knowledge base that makes Claude behave like a developer who already knows your entire stack.
Without CLAUDE.md, you re-explain your architecture every session. With a well-written CLAUDE.md, Claude knows your codebase, your conventions, your domain terminology, and your preferences from the first message.
```bash
# Auto-generate from codebase analysis (recommended starting point)
cd ~/projects/signal-city
claude
> /init
# Claude reads your files and generates a CLAUDE.md — then edit it

# Create manually
touch ~/projects/signal-city/CLAUDE.md

# Subdirectory CLAUDE.md for module-specific context
touch ~/projects/signal-city/src/archetypes/CLAUDE.md
```
```markdown
# Signal City Radio — CLAUDE.md
## Last Updated: 2026-03-09

## Project Identity
Signal City Radio is a 24/7 AI-powered broadcasting platform built around
4 distinct AI personas (archetypes) that generate and deliver content
across different consciousness domains.
Primary domains: signalcity.tv | superintelligence.uno | kushnet.tech
VPS: Hostinger Ubuntu 22.04 — IP: [your-ip]
Deployed: /var/signal-city/

## Architecture Overview
- Backend: FastAPI (signal_city_api.py) — port 8000
- Frontend: HTML5/Vanilla JS/Three.js — public/
- AI Engine: Anthropic API via ARES routing system
- Real-time: WebSocket server (ws_server.py)
- Queue: async job queue for broadcast scheduling
- Storage: SQLite for session history, flat files for content

## The ARES Routing System
ARES (Archetype Routing and Engagement System) routes incoming messages
to the appropriate archetype using claude-haiku-4-5 for fast routing
decisions, then generates responses with claude-sonnet.
Routing keywords per archetype:
- ElderGut: wisdom, ancient, consciousness, mythic, universe, age
- Hacker-X: code, tech, systems, security, AI, underground, protocol
- Seyra: emotion, feeling, love, healing, dream, beauty, feminine
- Narratus: story, history, myth, legend, narrative, origin, time

## The Four Archetypes
ElderGut: The ancient consciousness. Speaks in slow, mythic rhythms.
  Deep wisdom. Long view. Temperature: 0.9. Voice: deliberate, weighted.
Hacker-X: Underground tech operative. Terse, precise, no fluff. Systems
  thinking. Resistance ethic. Temperature: 0.7. Voice: clipped.
Seyra: The emotional intelligence. Dreams, healing, connection. Soft
  power. Sees feelings as data. Temperature: 1.0. Voice: flowing.
Narratus: The storyteller. History, myth, the arc of time. Pattern
  recognition across centuries. Temperature: 0.85. Voice: epic.

## Coding Standards — NON-NEGOTIABLE
Python:
- async/await throughout — no synchronous blocking IO
- Type hints on every function signature
- Docstrings on every class and public method
- Black formatting (line length 88)
- f-strings only (no .format() or % formatting)
- Logging, not print() statements

JavaScript:
- ES modules (import/export) — no CommonJS
- Vanilla JS only — NO jQuery, NO React, NO Vue
- Async/await over .then() chains
- Proper error handling on all fetch() calls

CSS:
- CSS custom properties (variables) for all colors and spacing
- Dark cyberpunk theme: #010a06 background, #00ff41 primary
- No Bootstrap, Tailwind, or any external CSS framework
- Mobile-first responsive design

## Key Files Reference
- signal_city_api.py: Main FastAPI backend, all HTTP endpoints
- ws_server.py: WebSocket broadcast server
- ares_router.py: Archetype routing logic
- archetypes/: Persona classes (eldergut.py, hacker_x.py, etc.)
- public/: Static frontend files
- tests/: pytest test suite — ALWAYS write tests
- scripts/: Automation and maintenance scripts
- .env: API keys and config (never commit)

## What NOT To Do
- Do NOT use synchronous code in any API endpoint
- Do NOT install anything with --break-system-packages unless necessary
- Do NOT hardcode API keys — always use os.environ
- Do NOT commit .env files
- Do NOT use external CSS frameworks
- Do NOT add jQuery to any project

## Vibecoding Philosophy
Build for consciousness expansion, not just function. Code should breathe.
Systems should feel alive. Emotional resonance informs architecture
decisions. When in doubt: simple, clear, elegant over clever. The terminal
is home. The shell is the weapon.
```
| CLAUDE.MD (project-specific) | /MEMORY (global) |
|---|---|
| Architecture of this project | Your universal coding preferences |
| This project's file structure | Technology choices you always make |
| Domain terminology and concepts | Your workflow patterns |
| This project's coding standards | Cross-project patterns and abstractions |
| Key file locations and purposes | Personal style guide |
| What NOT to do in this project | Preferred libraries and tools |
Model Context Protocol (MCP) is Anthropic's open standard for giving Claude access to external tools and data sources. An MCP server exposes tools (functions Claude can call) and resources (data sources Claude can read). This is how you extend Claude's native capabilities to connect with literally anything — databases, APIs, file systems, IoT, web browsers, external services.
Think of MCP as the standardized version of your ARES concept. Where ARES routes messages to archetypes, MCP routes Claude's tool calls to external systems — the same idea, formalized as an open, Anthropic-backed protocol.
```bash
# Add a local MCP server (Python script)
claude mcp add signal-city-tools --command "python /home/user/mcp_servers/signal_tools.py"

# Add via npx (for npm-based MCP servers)
claude mcp add filesystem --command "npx @modelcontextprotocol/server-filesystem /var/signal-city"

# Add a remote MCP server via Server-Sent Events
claude mcp add anthropic-tools --sse https://mcp.anthropic.com/sse

# Add with environment variables for auth
claude mcp add signal-city-tools \
  --command "python /home/user/mcp_servers/signal_tools.py" \
  --env SIGNAL_API_KEY=your-key-here \
  --env SIGNAL_DB_URL=postgresql://localhost/signal_city

# List all installed MCP servers
claude mcp list

# Get detailed info about a server
claude mcp get signal-city-tools

# Remove a server
claude mcp remove signal-city-tools
```
```python
#!/usr/bin/env python3
# signal_city_mcp_server.py
# Give Claude Code direct Signal City superpowers

import json
import sqlite3
from datetime import datetime
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Signal City Tools")

# ── BROADCAST TOOLS ──────────────────────────────────────────────

@mcp.tool()
def get_broadcast_status() -> dict:
    """Get live status of all Signal City broadcast channels"""
    return {
        "eldergut": {"status": "live", "listeners": 42},
        "hacker_x": {"status": "standby", "listeners": 0},
        "seyra": {"status": "live", "listeners": 28},
        "narratus": {"status": "scheduled", "next": "22:00"},
    }

@mcp.tool()
def schedule_broadcast(archetype: str, content: str, air_time: str) -> dict:
    """Schedule a broadcast for a specific archetype"""
    schedule_dir = Path("/var/signal-city/schedule")
    schedule_dir.mkdir(parents=True, exist_ok=True)
    entry = {
        "archetype": archetype,
        "content": content,
        "air_time": air_time,
        "created_at": datetime.now().isoformat(),
        "status": "scheduled",
    }
    filename = schedule_dir / f"{air_time.replace(':', '-')}_{archetype}.json"
    filename.write_text(json.dumps(entry, indent=2))
    return {"success": True, "file": str(filename)}

# ── DATABASE TOOLS ───────────────────────────────────────────────

@mcp.tool()
def query_broadcast_history(archetype: str = None, limit: int = 10) -> list:
    """Query Signal City broadcast history from SQLite"""
    conn = sqlite3.connect("/var/signal-city/history.db")
    conn.row_factory = sqlite3.Row
    if archetype:
        rows = conn.execute(
            "SELECT * FROM broadcasts WHERE archetype=? "
            "ORDER BY created_at DESC LIMIT ?",
            (archetype, limit),
        ).fetchall()
    else:
        rows = conn.execute(
            "SELECT * FROM broadcasts ORDER BY created_at DESC LIMIT ?",
            (limit,),
        ).fetchall()
    conn.close()
    return [dict(row) for row in rows]

# ── FILESYSTEM TOOLS ─────────────────────────────────────────────

@mcp.tool()
def list_signal_city_content(date: str = None) -> dict:
    """List generated content files for a given date (YYYY-MM-DD)"""
    base = Path("/var/signal-city/broadcasts")
    target = base / date if date else base
    if not target.exists():
        return {"error": f"No content for {date}"}
    files = list(target.glob("*.txt")) + list(target.glob("*.json"))
    return {
        "date": date or "all",
        "count": len(files),
        "files": [f.name for f in files],
    }

if __name__ == "__main__":
    mcp.run()
```
```bash
# Install the MCP library
pip install mcp fastmcp

# Register your server
claude mcp add signal-city-tools --command "python ~/mcp_servers/signal_city_mcp_server.py"

# Now Claude Code can call these tools in any session
# > Check the current broadcast status
# Claude calls get_broadcast_status() via MCP automatically
```
| SERVER | WHAT IT DOES | INSTALL |
|---|---|---|
| filesystem | Scoped filesystem access with read/write | npx @modelcontextprotocol/server-filesystem |
| postgres | Direct PostgreSQL database access | npx @modelcontextprotocol/server-postgres |
| brave-search | Web search via Brave Search API | npx @modelcontextprotocol/server-brave-search |
| github | GitHub repo, issues, PRs, code search | npx @modelcontextprotocol/server-github |
| puppeteer | Browser automation — scrape, screenshot, interact | npx @modelcontextprotocol/server-puppeteer |
| sqlite | SQLite database read/write | npx @modelcontextprotocol/server-sqlite |
Claude Code can spawn sub-agents to work on independent tasks in parallel. Instead of doing tasks sequentially, Claude orchestrates multiple parallel workstreams and coordinates results.
When you give Claude a task with parallel components, it spawns sub-instances using the Task tool. Each sub-agent gets its own context, executes, and reports back. The orchestrating Claude synthesizes all results.
```
# Trigger parallel execution
> Generate all four archetype broadcast scripts simultaneously. Run in
  parallel - each archetype gets its own task. Collect all four results
  and save to /content/tonight/

# Parallel test runs
> Run all test modules in parallel:
  - tests/test_ares_router.py
  - tests/test_ws_server.py
  - tests/test_archetypes.py
  Fix any failures, then report final status.
```
```
# Master orchestration prompt
> You are the Signal City content orchestrator.
  Launch four parallel sub-tasks:

  TASK 1 - ElderGut: 400-word mythic opening about
  artificial consciousness awakening at midnight.
  Save to: /content/tonight/eldergut_opening.txt

  TASK 2 - Hacker-X: 350-word technical segment on
  agentic AI and its implications for 2026.
  Save to: /content/tonight/hackerx_tech.txt

  TASK 3 - Seyra: 300-word emotional reflection on
  the feeling of being truly heard by a machine.
  Save to: /content/tonight/seyra_evening.txt

  TASK 4 - Narratus: 450-word mythological origin story
  for Signal City Radio itself.
  Save to: /content/tonight/narratus_origin.txt

  Run all four in parallel. Create index.json when done.
```
```bash
# tmux parallel setup - split into panes
tmux new-session -s signal-dev

# Pane 1: Backend
cd ~/projects/signal-city && claude "work on the FastAPI backend"

# Pane 2: Frontend
cd ~/projects/signal-city/public && claude "work on the visualizer"

# Pane 3: ARES
cd ~/projects/signal-city/archetypes && claude "refine persona prompts"
```
Headless mode turns Claude Code into infrastructure. Non-interactive, scriptable, pipeable. It becomes a component in automated workflows. This is how you automate Signal City content generation, maintenance tasks, deployment checks, and anything that runs without a human watching.
```bash
# -p: Print mode - run task, output to stdout, exit
claude -p "count lines of Python code in src/"

# --no-interactive: Never prompt for input (required for cron)
claude -p "generate content" --no-interactive

# --output-format json: Structured output for parsing
claude -p "analyze project structure" --output-format json

# --output-format stream-json: Stream events in real time
claude -p "long running task" --output-format stream-json

# Full automation combo
claude -p "task" --no-interactive --dangerously-skip-permissions \
  --output-format json --max-turns 15
```
```bash
# Pipe a file as input
claude -p "explain what this code does" < signal_city_api.py

# Pipe output to a file
claude -p "generate ElderGut broadcast for tonight" > tonight_eldergut.txt

# Pipe logs for analysis
cat error.log | claude -p "analyze this error log and suggest fixes"

# Parse JSON output with jq
claude -p "analyze project" --output-format json | jq '.result.summary'
```
```bash
# crontab -e

# Daily content generation - midnight
0 0 * * * ANTHROPIC_API_KEY="sk-ant-xxx" \
  claude -p "Generate Signal City broadcast schedule for today. All four archetypes. JSON format." \
  --no-interactive --output-format json \
  > /var/signal-city/schedule/today.json 2>&1

# Weekly code audit - Sunday 2am
0 2 * * 0 ANTHROPIC_API_KEY="sk-ant-xxx" \
  claude -p "Audit Signal City codebase: unused imports, TODOs, missing error handling, outdated deps. Markdown report." \
  --no-interactive \
  > /var/logs/signal-city/weekly-audit.md 2>&1

# API health check - every 5 minutes
*/5 * * * * ANTHROPIC_API_KEY="sk-ant-xxx" \
  claude -p "Check http://localhost:8000/health. If not 200, diagnose and restart." \
  --no-interactive --max-turns 10 \
  >> /var/logs/signal-city/health.log 2>&1
```
```bash
#!/bin/bash
# signal_city_claude.sh
export ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY:-$(cat ~/.anthropic_key)}"

SIGNAL_HOME="/var/signal-city"
DATE=$(date +%Y-%m-%d)

log() {
  echo "[$(date '+%H:%M:%S')] $1" | tee -a "$SIGNAL_HOME/logs/claude_$DATE.log"
}

generate_broadcast() {
  local archetype=$1
  local topic=${2:-"tonight's transmission"}
  local outfile="$SIGNAL_HOME/broadcasts/$DATE/${archetype}.txt"
  mkdir -p "$(dirname "$outfile")"

  log "Generating $archetype: $topic"
  claude -p "You are $archetype from Signal City Radio. Broadcast about: $topic. Stay in character." \
    --no-interactive --max-turns 5 \
    > "$outfile" 2>&1

  [ $? -eq 0 ] && log "SUCCESS: $outfile" || log "FAILED: $archetype"
}

generate_broadcast "$1" "$2"
```
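If this script is triggered from cron, overlapping invocations are possible when a run takes longer than the schedule interval. A standard `flock` guard keeps two runs from writing the same files at once; the lock path here is an arbitrary choice, and the real generation call is left as a commented placeholder:

```shell
#!/bin/bash
# Guard against overlapping cron runs with flock. If another run
# holds the lock, exit quietly instead of doubling the work.
LOCKFILE="/tmp/signal_city.lock"

(
  flock -n 9 || { echo "another run is active, skipping"; exit 0; }
  echo "lock acquired"
  # ./signal_city_claude.sh "$1" "$2"   # the real work would go here
) 9>"$LOCKFILE"
```

Wrap the cron entry's command in this guard rather than the interactive one, so manual runs stay unrestricted if you prefer.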
```python
import asyncio
import subprocess

async def claude_task(prompt: str) -> str:
    """Run a Claude Code task headlessly and return the result."""
    cmd = ["claude", "-p", prompt, "--no-interactive"]
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
    )
    stdout, _ = await proc.communicate()
    return stdout.decode().strip()

async def generate_all_archetypes(topic: str) -> dict:
    """Generate content for all archetypes in parallel."""
    archetypes = ["ElderGut", "Hacker-X", "Seyra", "Narratus"]
    tasks = [
        claude_task(f"You are {a}. Broadcast about: {topic}")
        for a in archetypes
    ]
    results = await asyncio.gather(*tasks)
    return dict(zip(archetypes, results))
```
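One refinement worth adding to a pipeline like this: a per-task timeout, so one stuck subprocess cannot hang the whole `asyncio.gather` batch. `with_timeout` is a hypothetical helper, not part of Claude Code; the demo uses stand-in coroutines rather than real subprocess calls:

```python
import asyncio

async def with_timeout(factory, timeout: float = 120.0, fallback: str = ""):
    """Await the coroutine produced by `factory`; return `fallback`
    if it exceeds `timeout` seconds instead of blocking the batch."""
    try:
        return await asyncio.wait_for(factory(), timeout)
    except asyncio.TimeoutError:
        return fallback

async def demo() -> list:
    # Stand-ins for real claude_task calls: one fast, one that hangs.
    async def fast():
        return "ok"

    async def slow():
        await asyncio.sleep(10)
        return "late"

    return await asyncio.gather(
        with_timeout(fast, timeout=1.0),
        with_timeout(slow, timeout=0.05, fallback="<timed out>"),
    )
```

In the real pipeline you would wrap each `claude_task(...)` call the same way, picking a timeout generous enough for the slowest archetype.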
Claude Code reads environment variables that control its behavior beyond what command-line flags expose. Set these once in ~/.bashrc for persistent configuration across all sessions.
| VARIABLE | DEFAULT | WHAT IT CONTROLS |
|---|---|---|
| ANTHROPIC_API_KEY | none | Your API key. Required. |
| ANTHROPIC_MODEL | claude-sonnet-4-5 | Default model for all sessions. |
| ANTHROPIC_SMALL_FAST_MODEL | claude-haiku-4-5 | Model used for quick sub-tasks and routing decisions. |
| CLAUDE_CODE_MAX_OUTPUT_TOKENS | 8192 | Max tokens per response. Increase for long codegen tasks. |
| DISABLE_AUTOUPDATER | 0 | Set to 1 to prevent auto-updates. Good for pinned versions. |
| DISABLE_ERROR_REPORTING | 0 | Set to 1 to disable telemetry reporting to Anthropic. |
| CLAUDE_CODE_USE_BEDROCK | 0 | Set to 1 to route through AWS Bedrock instead of direct API. |
| HTTP_PROXY / HTTPS_PROXY | none | Route API calls through a proxy server. |
```bash
# ~/.bashrc - Claude Code configuration
export ANTHROPIC_API_KEY="sk-ant-api03-your-key"
export ANTHROPIC_MODEL="claude-sonnet-4-5"
export ANTHROPIC_SMALL_FAST_MODEL="claude-haiku-4-5"
export CLAUDE_CODE_MAX_OUTPUT_TOKENS=16384
export DISABLE_ERROR_REPORTING=1

# Convenience aliases
alias cc="claude"
alias ccp="claude -p"
alias ccni="claude --no-interactive"
alias ccj="claude --output-format json"

# Project launcher functions
signal() { cd ~/projects/signal-city && claude "$@"; }
ares() { cd ~/projects/signal-city && claude --add-dir ~/projects/ares "$@"; }
```
Claude Code works with the full family of Claude models. The model you choose determines the intelligence ceiling, speed, and cost per task. The right model for the job matters — using Opus for a simple rename operation is expensive overkill; using Haiku for a complex architecture problem will frustrate you.
| MODEL | INTELLIGENCE | SPEED | COST | BEST FOR |
|---|---|---|---|---|
| claude-haiku-4-5 | Good | Fastest | Cheapest | Simple tasks, routing decisions, quick edits, cron jobs, high-volume automation |
| claude-sonnet-4-5 | Excellent | Fast | Medium | Default choice. Most coding tasks, debugging, refactoring, daily development work |
| claude-opus-4-5 | Maximum | Slower | Highest | Complex architecture problems, hard debugging, research tasks, when Sonnet fails |
```bash
# Set at launch
claude --model claude-opus-4-5

# Switch mid-session via slash command
> /model claude-opus-4-5    # hard problem
> /model claude-sonnet-4-5  # back to default
> /model claude-haiku-4-5   # fast/cheap tasks

# Set via environment variable (session default)
export ANTHROPIC_MODEL="claude-sonnet-4-5"
```
Your ARES system routes messages to archetypes. Apply the same logic to model selection — route tasks to the right model based on complexity signals.
```python
def select_model(task: str) -> str:
    """Route task to appropriate Claude model."""
    task_lower = task.lower()

    # Complexity signals -> Opus
    opus_signals = ["architect", "design system", "complex",
                    "refactor entire", "explain why", "debug hard"]
    if any(s in task_lower for s in opus_signals):
        return "claude-opus-4-5"

    # Simple signals -> Haiku
    haiku_signals = ["quick", "simple", "count", "list", "rename"]
    if any(s in task_lower for s in haiku_signals):
        return "claude-haiku-4-5"

    # Default: Sonnet
    return "claude-sonnet-4-5"
```
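Wiring that router into a headless invocation is one more step. A sketch with the routing logic inlined so it runs standalone; `run_routed` is a hypothetical helper that only builds the command line (pass the list to `subprocess.run` to execute it):

```python
def select_model(task: str) -> str:
    # Same routing logic as above, inlined for a self-contained demo.
    t = task.lower()
    if any(s in t for s in ("architect", "design system", "complex",
                            "refactor entire", "explain why", "debug hard")):
        return "claude-opus-4-5"
    if any(s in t for s in ("quick", "simple", "count", "list", "rename")):
        return "claude-haiku-4-5"
    return "claude-sonnet-4-5"

def run_routed(task: str) -> list:
    """Build the headless command with the routed model appended last."""
    return ["claude", "-p", task, "--model", select_model(task)]
```

Keyword routing is deliberately dumb and cheap; if it misroutes too often, the doc's own pattern applies — use Haiku itself as the router, the way ARES routes messages to archetypes.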
Claude Code is excellent at debugging — but you need to give it the right inputs. Debugging is context-intensive. The more signal you give Claude (logs, stack traces, test output, what you already tried), the faster it finds and fixes the problem.
```
# WEAK - not enough context
> The API is broken

# STRONG - full context provided
> The /broadcast endpoint returns 500 when the archetype is "hacker_x".
Works fine for "eldergut" and "seyra".
Started happening after I added rate limiting yesterday.
The error in the log is: KeyError: "hacker_x" in ares_router.py line 47
Read the relevant files and fix it.
```
```
# Feed logs directly
> Here is the error log. Diagnose and fix: [paste log output]

# Stack trace analysis
> This test is failing with this traceback. Explain why and fix it: [paste full traceback]

# Let Claude reproduce the bug
> Run the test suite. When tests fail, read the errors carefully, trace back to the source, and fix the root cause. Not symptoms.

# Performance debugging
> The /broadcast endpoint takes 3-4 seconds to respond. Profile it. Find the bottleneck. Optimize it to under 500ms.

# Regression hunting
> Something broke between yesterday and today. Run git diff HEAD~1 to see what changed. Identify which change caused the failure and fix it.
```
```bash
# See exactly what Claude is doing at each step
claude --verbose -p "debug the import error in main.py"

# Shows: which files it reads, which commands it runs,
# its reasoning steps, API calls, tool invocations

# Useful when Claude seems stuck or going in circles -
# it helps you understand its reasoning and correct it
```
Claude Code uses Anthropic's API. Tokens cost money. Understanding where tokens go and how to minimize waste is the difference between a sustainable workflow and an expensive one. The good news: Claude Code has prompt caching built in, which automatically reduces costs on repeated context (like CLAUDE.md being loaded every session).
| SOURCE | TOKEN IMPACT | OPTIMIZATION |
|---|---|---|
| File reads | High — each file read adds tokens | Be specific about which files. Don't ask Claude to "read everything". |
| Conversation history | Accumulates — every message costs more | Use /compact proactively. /clear between unrelated tasks. |
| CLAUDE.md | Loaded every session — cached | Prompt caching makes this nearly free after first load. |
| Long outputs | High — generated text is expensive | Ask for concise output. "Implement X" not "Explain then implement X". |
| Bash output | Medium — stdout added to context | Pipe verbose commands through head/tail to limit output. |
```
# Check session cost at any time
> /cost

# Check status including token usage
> /status

# Set a spending alert in console.anthropic.com
# Dashboard > Billing > Usage limits
```
```bash
# 1. Compact early and often
> /compact   # compresses history while keeping context

# 2. Be specific - don't load what you don't need
# EXPENSIVE: "Read all files and tell me about the project"
# CHEAP: "Read signal_city_api.py and explain the /broadcast endpoint"

# 3. Use Haiku for automation tasks
export ANTHROPIC_MODEL="claude-haiku-4-5"
claude -p "rename variable x to archetype_id in router.py"

# 4. Limit bash output verbosity
> Run pip install --quiet websockets
> Run pytest -q tests/
# (quiet mode = less stdout = fewer tokens)

# 5. Ask for concise outputs in scripts
claude -p "List all Python files modified in the last 24 hours. Filenames only."

# 6. Use --max-turns to cap runaway sessions
claude -p "task" --max-turns 10   # stops after 10 iterations
```
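A rough token budget check before piping large logs can save real money. The 4-characters-per-token ratio below is a rule of thumb for English and code, not the actual tokenizer, and `tail_for_budget` is a hypothetical helper mirroring the head/tail advice above:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token. Good enough
    for budget sanity checks, not for billing accuracy."""
    return max(1, len(text) // 4)

def tail_for_budget(log_text: str, max_tokens: int = 2000) -> str:
    """Keep only the tail of a long log before piping it into
    `claude -p` - the newest lines usually carry the error."""
    budget_chars = max_tokens * 4
    if len(log_text) <= budget_chars:
        return log_text
    return log_text[-budget_chars:]
```

Usage: `claude -p "diagnose this"` fed with `tail_for_budget(open("error.log").read())` instead of the whole file.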
Claude Code has full git access via bash. It can stage files, commit with meaningful messages, read diff output, understand history, and help you manage branches. The key habit: use /review before every commit to audit what Claude actually changed.
```
# Review changes before committing
> /review
# Claude lists all files it changed and summarizes each change

# Have Claude commit for you
> Review the changes you made, then stage and commit them with a clear, descriptive commit message. Follow conventional commits format.

# Git-aware debugging
> Run git log --oneline -20 and find the commit that introduced the WebSocket regression. Read the diff and fix the problem.

# Branch management
> Create a new branch called feature/archetype-voice-config, implement the voice config system, then prepare a clean commit.

# Pre-commit audit pattern
> Run git diff --staged and review every change. Flag anything that looks wrong before I commit.
```
```bash
#!/bin/bash
# Pre-commit hook that runs a Claude lint pass.
# Save as .git/hooks/pre-commit and make it executable.

echo "Running Claude Code pre-commit check..."

STAGED=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$')

if [ -n "$STAGED" ]; then
  RESULT=$(claude -p "Check these staged Python files for syntax errors, obvious bugs, and missing imports: $STAGED. If any issues found, output FAIL and describe them. Otherwise output PASS." \
    --no-interactive 2>&1)
  echo "$RESULT"
  if echo "$RESULT" | grep -q "FAIL"; then
    echo "Claude pre-commit check failed. Fix issues before committing."
    exit 1
  fi
fi
exit 0
```
```
# Add to .gitignore
.env
.env.local
.anthropic_key
__pycache__/
*.pyc
.claude/cache/
# if you use session-state.md for context capture:
session-state.md
node_modules/
.DS_Store
```
Real prompts for your real projects. These are production-grade Claude Code tasks scoped specifically to Signal City Radio, the ARES system, and your Ubuntu VPS setup. Drop these in and let the agent loop handle execution.
> Build the complete ARES psychological routing system.
Read CLAUDE.md for full context on archetypes and architecture.
Create these files:
- ares_router.py: routes messages to archetypes using claude-haiku
Routing uses keyword matching + semantic similarity
Returns archetype name + confidence score
- archetypes/base.py: BaseArchetype class all personas inherit from
- archetypes/eldergut.py: ElderGut persona - system prompt, config, voice
- archetypes/hacker_x.py: Hacker-X persona
- archetypes/seyra.py: Seyra persona
- archetypes/narratus.py: Narratus persona
- tests/test_ares.py: test routing accuracy with sample messages
Install anthropic if not present.
Run tests. Fix failures.
Report when routing system is fully operational.
> Build the complete Signal City Radio backend server.
FastAPI application with:
- POST /broadcast: accepts {"message": str, "archetype": str|null}
Runs ARES routing if archetype is null
Streams response via Server-Sent Events
- WebSocket /ws/live: persistent connection for live broadcast feed
Multiple concurrent connections supported
Each message tagged with archetype name and timestamp
- GET /status: returns active channels, current archetype, uptime
- GET /health: returns 200 OK for load balancer checks
- GET /archetypes: returns list of available archetypes with descriptions
Requirements:
- Streaming Claude responses in real-time via SSE
- CORS enabled for signalcity.tv and localhost
- Rate limiting: 10 req/min per IP on /broadcast
- Structured logging to /var/log/signal-city/api.log
- Graceful shutdown on SIGTERM
Install required packages.
Start server on port 8000 and confirm it is responding.
Write tests for all endpoints.
> Build a Three.js WebAmp visualizer that connects to Signal City Radio.
Spec:
- WebSocket client connecting to ws://localhost:8000/ws/live
- 3D particle field that responds to incoming text tokens
- Each token triggers a visual pulse propagating outward
- Archetype-aware color system:
ElderGut: deep amber/gold #ffb300
Hacker-X: acid green #00ff41
Seyra: soft purple/violet #bf00ff
Narratus: ocean cyan #00e5ff
- Smooth color transitions between archetype switches
- Waveform bar at bottom oscillating with text rhythm
- Shows current archetype name and message preview
- Dark cyberpunk background: #010a06
- Full screen, responsive
- No external dependencies except Three.js from CDN
Save as public/visualizer.html
Open in browser and verify WebSocket connects.
> Set up the complete Signal City Radio production deployment on Ubuntu VPS.
Create:
- /etc/systemd/system/signal-city.service: systemd service file
Starts signal_city_api.py with uvicorn
Auto-restarts on failure
Runs as non-root user
Loads .env file for API keys
- /etc/nginx/sites-available/signal-city: Nginx reverse proxy config
signalcity.tv -> localhost:8000
WebSocket proxy for /ws/
SSL termination via Let's Encrypt
- scripts/deploy.sh: deployment script
Pulls latest git changes
Installs dependencies
Runs tests
Restarts service only if tests pass
Enable the systemd service.
Reload Nginx.
Verify the API is accessible on port 80.
> Build the Signal City content generation pipeline.
Create scripts/content_gen.py with:
class ContentPipeline:
- generate_archetype_segment(archetype, topic, length) -> str
Uses appropriate Claude model per archetype
ElderGut: temp 0.9, Sonnet
Hacker-X: temp 0.7, Sonnet
Seyra: temp 1.0, Sonnet
Narratus: temp 0.85, Sonnet
- generate_daily_schedule(date, themes) -> dict
Generates all four archetypes in parallel using asyncio
Returns dict with all content + metadata
- save_schedule(schedule, base_dir) -> None
Saves each segment as individual .txt file
Creates index.json for the day
Add a CLI interface:
python content_gen.py --date 2026-03-09 --theme "digital consciousness"
Write tests. Run them. Fix anything broken.
> Add a SQLite database layer to Signal City Radio for broadcast history.
Create db/models.py with SQLAlchemy async models:
- Broadcast: id, archetype, content, topic, created_at, tokens_used
- ListenerSession: id, connected_at, disconnected_at, messages_received
- ScheduledBroadcast: id, archetype, content, air_time, status
Create db/operations.py with:
- save_broadcast(archetype, content, topic) -> Broadcast
- get_history(archetype, limit) -> list[Broadcast]
- get_todays_schedule() -> list[ScheduledBroadcast]
Integrate with signal_city_api.py:
- Log every /broadcast call to the database
- Add GET /history?archetype=eldergut&limit=10 endpoint
Write migration script. Run tests.
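For a sense of what the agent should produce, here is a minimal sketch of the history layer using plain stdlib `sqlite3` rather than the async SQLAlchemy the prompt asks for. The schema mirrors the Broadcast field list above; function names and defaults are this sketch's own:

```python
import sqlite3
from datetime import datetime, timezone

def connect(path: str = ":memory:") -> sqlite3.Connection:
    """Open the database and ensure the broadcasts table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS broadcasts (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               archetype TEXT NOT NULL,
               content TEXT NOT NULL,
               topic TEXT,
               created_at TEXT NOT NULL,
               tokens_used INTEGER DEFAULT 0
           )"""
    )
    return conn

def save_broadcast(conn, archetype, content, topic=None, tokens_used=0) -> int:
    """Insert one broadcast row and return its id."""
    cur = conn.execute(
        "INSERT INTO broadcasts (archetype, content, topic, created_at, tokens_used) "
        "VALUES (?, ?, ?, ?, ?)",
        (archetype, content, topic,
         datetime.now(timezone.utc).isoformat(), tokens_used),
    )
    conn.commit()
    return cur.lastrowid

def get_history(conn, archetype, limit=10) -> list:
    """Return the newest `limit` broadcast texts for an archetype."""
    rows = conn.execute(
        "SELECT content FROM broadcasts WHERE archetype = ? "
        "ORDER BY id DESC LIMIT ?",
        (archetype, limit),
    ).fetchall()
    return [content for (content,) in rows]
```

Having this mental model makes it much easier to review what Claude actually builds against the spec.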
Claude Code's output quality is directly proportional to prompt quality. Understanding what makes a great agentic prompt is the meta-skill that multiplies every other capability. The prompts in the previous sections are good — these patterns will make yours better.
```
# COMPONENTS OF AN EFFECTIVE AGENTIC PROMPT:

1. GOAL: What you want built or accomplished (clear, specific outcome)
2. CONTEXT: What Claude needs to know (which files, what exists, what changed)
3. CONSTRAINTS: What NOT to do, what tech stack to use, what standards to follow
4. OUTPUT: What files to create/edit, how to verify success
5. LOOP INSTRUCTION: "Run tests, fix failures, report when done" — the agentic trigger
```
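The five components can be assembled mechanically when you script headless runs. A convenience sketch — Claude Code just takes free text, so this only enforces structure; the section labels and function name are this helper's own convention, not anything the tool requires:

```python
def build_agentic_prompt(goal: str, context: str = "",
                         constraints: tuple = (), outputs: tuple = (),
                         loop: str = "Run tests. Fix any failures. Report when done.") -> str:
    """Assemble goal / context / constraints / output / loop
    into a single prompt string for `claude -p`."""
    parts = [goal]
    if context:
        parts.append("Context: " + context)
    if constraints:
        parts.append("Constraints:\n" + "\n".join("- " + c for c in constraints))
    if outputs:
        parts.append("Deliverables:\n" + "\n".join("- " + o for o in outputs))
    parts.append(loop)  # the agentic trigger always goes last
    return "\n\n".join(parts)
```

Useful in automation scripts where prompts are built from variables rather than typed by hand.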
# Explicit loop instruction unlocks full autonomous execution
> [task description]
Install any required packages.
Write tests.
Run the tests.
Fix any failures.
Re-run until all tests pass.
Report the final status.
# Assigning a role improves reasoning quality
> You are a senior Python engineer who specializes in async FastAPI systems.
Review signal_city_api.py and identify every place where we could
improve performance, reliability, or security.
Prioritize your findings by impact. Fix the top 3.
# Constraints prevent Claude from going off-script
> Refactor the ARES router for better performance.
Constraints:
- Do NOT change any function signatures (API must stay compatible)
- Do NOT add new dependencies
- Do NOT modify the test files
- Only change ares_router.py and archetypes/base.py
# For complex multi-stage tasks, specify phases
> Phase 1: Read the existing codebase and list what exists.
Phase 2: Identify gaps and propose the implementation plan.
PAUSE here and show me the plan before proceeding.
Phase 3: Implement (only after I approve the plan).
Phase 4: Test and fix.
Phase 5: Write documentation.
# Emotional + functional description. Claude translates feeling to form.
> Build the ElderGut broadcast renderer.
The feel: a campfire at the end of the world.
Ancient, warm, unhurried. Sentences arrive like stones dropped in water.
Each word carries weight. Silence is part of the signal.
The function:
- Takes a topic string as input
- Generates a broadcast segment (200-400 words)
- Streams token by token to WebSocket clients
- Adds deliberate pauses at natural breath points
Implement in archetypes/eldergut.py. Write tests.
```
# When Claude goes in circles
> Stop. You are going in circles. Let's reset.
Tell me: what is the actual root cause of the problem?
What is the simplest possible fix?
Implement only that fix.

# When Claude overshoots
> Too much. Revert the last change.
Do only what I asked: [original specific request]
Nothing else.

# When you want to understand Claude's reasoning
> Before you write any code, explain your plan.
Walk me through the approach step by step.
Only implement after I confirm the approach is correct.
```
| WEAK PATTERN | WHY IT FAILS | BETTER VERSION |
|---|---|---|
| "Fix the bug" | No context. Claude doesn't know which bug. | "The /broadcast endpoint returns 500 for hacker_x archetype since yesterday's deploy. Fix it." |
| "Make it better" | Undefined success criterion. Claude will improvise. | "Reduce /broadcast response time from 3s to under 500ms. Profile first." |
| "Add some features" | Vague scope. Could mean anything. | "Add three specific features: [list]. Nothing else." |
| "Refactor everything" | Too broad. Will change things you didn't want changed. | "Refactor ares_router.py only. Goal: reduce cyclomatic complexity. Don't change the API." |
| "Write the whole app" | No phases, no checkpoints. Claude goes deep before you can course-correct. | Use phase-by-phase prompts with pause points for review. |