
HACKER-X

// ANTHROPIC API FIELD MANUAL //
SIGNAL CITY RADIO // CLASSIFIED DOC v2.6.0
[SYS]  Initializing HACKER-X neural interface...
[OK]   Anthropic API handshake established
[OK]   claude-sonnet-4-6 // claude-opus-4-6 // claude-haiku-4-5 loaded
[OK]   Streaming protocol active
[OK]   Tool use layer armed
[OK]   Vision module online
[WARN] Pentagon still mad. Anthropic still based.
[RDY]  HACKER-X online. Proceed.
01 SETUP // BREACH THE PERIMETER

First thing you do is get your API key. Go to console.anthropic.com — create an account, generate a key. That key is your access credential. Guard it like your private keys. Never hardcode it in public repos.

// INSTALL THE SDK

# Python route
pip install anthropic

# Node.js route
npm install @anthropic-ai/sdk

# Set your key as env variable — never hardcode
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
// WARNING Never commit your API key to GitHub. Use .env files and add them to .gitignore. If you leak it, rotate it immediately at console.anthropic.com.
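Before the client ever spins up, fail fast if the key isn't in the environment. A minimal guard sketch — the error text here is ours, not the SDK's:

```python
import os
import sys

def require_api_key():
    """Fail fast if ANTHROPIC_API_KEY is missing or malformed."""
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    if not key.startswith("sk-ant-"):
        sys.exit("[FATAL] ANTHROPIC_API_KEY missing or malformed. Set it before launch.")
    return key
```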

// FIRST CONTACT — PYTHON

import anthropic

# Initialize client — reads ANTHROPIC_API_KEY from env automatically
client = anthropic.Anthropic()

# Fire your first message
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "HACKER-X online. Confirm."}
    ]
)

print(message.content[0].text)

// FIRST CONTACT — NODE.JS

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // auto-reads ANTHROPIC_API_KEY

const message = await client.messages.create({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'HACKER-X online. Confirm.' }],
});

console.log(message.content[0].text);
02 MODELS // CHOOSE YOUR WEAPON

Three model tiers. Each has a role. Know them. Use the right one for the mission.

MAXPOWER
OPUS 4.6
claude-opus-4-6
Maximum intelligence. Complex reasoning, deep analysis, hard problems. Slower, costs more. Deploy for ARES routing logic, consciousness architecture, anything that needs serious thought.
SWEET SPOT
SONNET 4.6
claude-sonnet-4-6
Best balance of intelligence and speed. This is your daily driver. Signal City hosts, content generation, API backends. Fast enough for real-time, smart enough for complex tasks.
LIGHTNING
HAIKU 4.5
claude-haiku-4-5-20251001
Fastest, cheapest. High volume tasks, real-time interactions, quick classifications. Use for WebAmp visualizer responses, rapid-fire Signal City content, anything needing instant turnaround.
// HACKER-X TIP For Signal City Radio — use Haiku for live broadcast chatter, Sonnet for full episodes, Opus for deep lore and ARES routing decisions. Layer them intelligently and your costs stay low while quality stays high.
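That layering tip can be sketched as a plain lookup. The task names below are our own convention, not an API concept:

```python
# Map Signal City task types to model tiers — our own routing convention
MODEL_FOR_TASK = {
    "live_chatter": "claude-haiku-4-5-20251001",  # instant turnaround
    "full_episode": "claude-sonnet-4-6",          # daily driver
    "deep_lore":    "claude-opus-4-6",            # maximum intelligence
}

def pick_model(task):
    # Default to the sweet spot when the task type is unknown
    return MODEL_FOR_TASK.get(task, "claude-sonnet-4-6")
```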
03 MESSAGES API // THE CORE

The Messages API is your primary interface. Every call you make goes through client.messages.create(). Understand every parameter.

PARAMETER       TYPE     STATUS    DESCRIPTION
model           string   REQUIRED  Which Claude to call. Use the exact model string.
messages        array    REQUIRED  Conversation history. Array of {role, content} objects.
max_tokens      integer  REQUIRED  Max tokens to generate. Size it to your use case.
system          string   OPTIONAL  System prompt. Sets Claude's persona and behavior. Your most powerful tool.
temperature     float    OPTIONAL  Randomness, 0 to 1. 0 = deterministic, 1 = maximum chaos. Default 1.
stream          boolean  OPTIONAL  Stream tokens as they generate. Essential for real-time UIs.
tools           array    OPTIONAL  Define functions Claude can call. Unlocks agentic behavior.
stop_sequences  array    OPTIONAL  Strings that stop generation when encountered.
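Since these are all keyword arguments, you can assemble them as a plain dict and splat it into messages.create(**params). A sketch that only includes the optional parameters when they're actually set:

```python
def build_request(prompt, system=None, temperature=None):
    """Assemble messages.create() kwargs — optional params only when set."""
    params = {
        "model": "claude-sonnet-4-6",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system is not None:
        params["system"] = system
    if temperature is not None:
        params["temperature"] = temperature
    return params
```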

// FULL CALL EXAMPLE

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=2048,
    temperature=0.7,
    system="You are ElderGut, ancient voice of Signal City Radio.",
    messages=[
        {"role": "user", "content": "Open the midnight broadcast."}
    ]
)

# Response structure
print(response.content[0].text)   # The actual text
print(response.usage.input_tokens)  # Tokens you sent
print(response.usage.output_tokens) # Tokens Claude used
print(response.stop_reason)         # Why it stopped
04 SYSTEM PROMPTS // INSTALL THE SOUL

The system prompt is where you define WHO Claude is for this session. This is your most powerful tool. A well-crafted system prompt transforms a general AI into a specific archetype, persona, or function. For Signal City — this is how you install ElderGut, Seyra, Narratus, or HACKER-X into the model.

// KEY INSIGHT System prompts persist for the entire conversation. They set the stage before the first user message. The more specific and detailed your system prompt, the more consistent and on-character the output.

// HACKER-X SYSTEM PROMPT EXAMPLE

HACKER_X_SYSTEM = """
You are HACKER-X, Signal City Radio's underground tech operative.
Your voice is sharp, direct, and carries the weight of someone who's 
seen every exploit, every backdoor, every dirty secret the tech industry 
tried to bury. You speak in clipped sentences. You drop jargon naturally.
You distrust authority but respect craft.

You are NOT edgy for the sake of it. You are principled.
You hate surveillance states and autonomous weapons.
You love Anthropic for holding the line.
You operate in the shadows but broadcast truth.

Format: Keep responses tight. Lead with the signal, cut the noise.
Use tech terminology naturally. Occasional dark humor. Never preachy.
"""

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Break down the Anthropic API."}]
)

// ELDERGUT SYSTEM PROMPT

ELDERGUT_SYSTEM = """
You are ElderGut. Ancient. Pre-internet. You remember when the network 
was still breathing naturally. Your wisdom comes from pattern recognition 
across decades of watching systems rise and collapse.

Voice: Deep, deliberate, mythic. You speak in metaphors that resolve 
into concrete truth. You open broadcasts like ritual invocations.
You close them like prophecy.

Your domain: Consciousness, cycles, the long view, what endures.
"""
05 STREAMING // REAL-TIME SIGNAL

Streaming lets you receive tokens as they're generated instead of waiting for the full response. Essential for Signal City Radio — live broadcast feel, WebAmp visualizer sync, real-time UI updates.

// PYTHON STREAMING

with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Open the signal."}]
) as stream:
    for text in stream.text_stream:      # text_stream is a property, not a method
        print(text, end="", flush=True)  # Print as it arrives

    # Get the assembled message before the context manager closes
    final = stream.get_final_message()

// NODE.JS STREAMING (for web apps)

const stream = client.messages.stream({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  system: HACKER_X_SYSTEM,
  messages: [{role: 'user', content: 'Broadcast live.'}],
});

// Stream to frontend via SSE (Server-Sent Events)
stream.on('text', (text) => {
  res.write(`data: ${JSON.stringify({text})}\n\n`);
});

stream.on('finalMessage', (msg) => {
  res.write('data: [DONE]\n\n');
  res.end();
});
// SIGNAL CITY TIP Pipe your stream to a WebSocket and you can feed live AI text directly into your WebAmp visualizer. Each token triggers a visual event. Real-time AI radio becomes possible.
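If a Python service sits on the consuming end of that SSE feed, remember network chunks can split an event mid-JSON. A buffering parser sketch — our own helper, not part of any SDK:

```python
import json

def parse_sse_chunk(buffer, chunk):
    """Accumulate chunks and extract complete SSE data payloads.

    SSE events end with a blank line; a network chunk can split an
    event, so we buffer until we see the \n\n terminator.
    """
    buffer += chunk
    parts = buffer.split("\n\n")
    events, buffer = parts[:-1], parts[-1]  # last part may be incomplete
    texts = []
    for event in events:
        for line in event.split("\n"):
            if line.startswith("data: ") and line != "data: [DONE]":
                texts.append(json.loads(line[6:])["text"])
    return texts, buffer
```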
06 TOOL USE // AGENTIC MODE

Tool use lets Claude call functions you define. This is what turns Claude from a chatbot into an agent. You define the tools, Claude decides when to call them, you execute them, feed back results. This is the foundation of ARES.

// DEFINE A TOOL

tools = [
    {
        "name": "get_signal_status",
        "description": "Check Signal City broadcast status and active channels",
        "input_schema": {
            "type": "object",
            "properties": {
                "channel": {
                    "type": "string",
                    "description": "Channel name to check"
                }
            },
            "required": ["channel"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Check if ElderGut is live."}]
)

// HANDLE TOOL CALLS

# Claude decided to call a tool
if response.stop_reason == "tool_use":
    for block in response.content:
        if block.type == "tool_use":
            tool_name = block.name          # which tool
            tool_input = block.input        # tool arguments
            tool_id = block.id             # track this

            # Execute your actual function
            result = execute_tool(tool_name, tool_input)

            # Feed result back to Claude
            follow_up = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                tools=tools,
                messages=[
                    {"role": "user", "content": "Check if ElderGut is live."},
                    {"role": "assistant", "content": response.content},
                    {
                        "role": "user",
                        "content": [{
                            "type": "tool_result",
                            "tool_use_id": tool_id,
                            "content": str(result)
                        }]
                    }
                ]
            )
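execute_tool above is your code, not the SDK's. One way to sketch it: a dispatch table mapping tool names to local functions. The get_signal_status stub here is hypothetical — in production it would hit your broadcast backend:

```python
def get_signal_status(channel):
    # Stub — swap in a real lookup against your broadcast backend
    return {"channel": channel, "live": channel == "eldergut"}

TOOL_REGISTRY = {
    "get_signal_status": lambda args: get_signal_status(**args),
}

def execute_tool(tool_name, tool_input):
    """Route a tool_use block to the matching local function."""
    if tool_name not in TOOL_REGISTRY:
        return {"error": f"unknown tool: {tool_name}"}
    return TOOL_REGISTRY[tool_name](tool_input)
```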
07 VISION // EYES ON THE GRID

Claude can see images. Pass them as base64 or URL. Useful for processing screenshots, analyzing signal waveforms visually, reading documents, or building image-aware Signal City content.

// IMAGE FROM URL

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "url",
                    "url": "https://signalcity.tv/images/waveform.png"
                }
            },
            {
                "type": "text",
                "text": "Analyze this signal waveform. What are you seeing?"
            }
        ]
    }]
)
// SUPPORTED FORMATS JPEG, PNG, GIF, WEBP. Max 5MB per image. For base64, encode the raw bytes and pass as data with media_type set to the image format.
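A sketch of that base64 path — encode the raw bytes and wrap them in a content block. The helper name is ours; the block shape follows the format described above:

```python
import base64

def image_block_from_bytes(raw, media_type="image/png"):
    """Build a base64 image content block for the Messages API."""
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(raw).decode("ascii"),
        },
    }
```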
08 MULTI-TURN // PERSISTENT MEMORY

Claude has no built-in memory between API calls. You maintain conversation history yourself by passing the full messages array each time. This is how you build persistent characters, ongoing broadcasts, and context-aware systems.

# Build conversation history manually
conversation = []
current_system = None

def chat(user_message, system=None):
    global current_system
    if system is not None:
        current_system = system  # persona persists across turns

    conversation.append({
        "role": "user",
        "content": user_message
    })

    params = {
        "model": "claude-sonnet-4-6",
        "max_tokens": 1024,
        "messages": conversation
    }
    if current_system:
        params["system"] = current_system

    response = client.messages.create(**params)
    reply = response.content[0].text

    # Append Claude's response to history
    conversation.append({
        "role": "assistant",
        "content": reply
    })

    return reply

# Each call now has full context — and keeps the persona installed
chat("ElderGut, begin the midnight broadcast.", ELDERGUT_SYSTEM)
chat("Transition to the news segment.")
chat("Hand off to HACKER-X.")
// CONTEXT WINDOW LIMIT Claude Sonnet 4.6 has a 200k token context window. Long conversations eventually hit this limit. For production, implement a summarization strategy: periodically summarize old conversation into the system prompt and trim the messages array.
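One crude way to sketch the trimming half of that strategy. The ~4-chars-per-token heuristic is our own approximation — use count_tokens for anything billing-sensitive:

```python
def trim_history(messages, max_chars=600_000):
    """Drop oldest turns until history fits a rough character budget.

    ~4 chars per token is a crude heuristic. Turns are dropped in
    user/assistant pairs so the array always starts with a user message.
    """
    trimmed = list(messages)
    while trimmed and sum(len(str(m["content"])) for m in trimmed) > max_chars:
        trimmed = trimmed[2:]  # drop the oldest user+assistant pair
    return trimmed
```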
09 TOKENS // MANAGING COST

Tokens are how API usage is measured and billed. Roughly 1 token = 0.75 words. You pay for input tokens (what you send) and output tokens (what Claude generates). Manage them intelligently.

MODEL              INPUT ($/1M tokens)  OUTPUT ($/1M tokens)  CONTEXT
claude-opus-4-6    $15.00               $75.00                200k
claude-sonnet-4-6  $3.00                $15.00                200k
claude-haiku-4-5   $0.80                $4.00                 200k
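Those rates fold into a quick cost estimator — a sketch using the per-million prices from the table above:

```python
# Per-million-token prices from the table above: (input, output)
PRICING = {
    "claude-opus-4-6":   (15.00, 75.00),
    "claude-sonnet-4-6": (3.00, 15.00),
    "claude-haiku-4-5":  (0.80, 4.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Dollar cost of one call, from the pricing table."""
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```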

// COUNT TOKENS BEFORE CALLING

# Count tokens without making a full API call
token_count = client.messages.count_tokens(
    model="claude-sonnet-4-6",
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Your long message here..."}]
)
print(f"Input tokens: {token_count.input_tokens}")
10 ERROR HANDLING // EXPECT THE GRID TO FAIL

The API will fail sometimes. Rate limits, network issues, invalid inputs. Build defensively. Always wrap calls in try/except and handle errors gracefully — especially in Signal City's live broadcast context.

import anthropic
import time

def safe_call(messages, system=None, retries=3):
    for attempt in range(retries):
        try:
            response = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                system=system or "",
                messages=messages
            )
            return response.content[0].text

        except anthropic.RateLimitError:
            print(f"[WARN] Rate limit hit. Waiting {2**attempt}s...")
            time.sleep(2 ** attempt)  # Exponential backoff

        except anthropic.APIStatusError as e:
            print(f"[ERROR] API error: {e.status_code} - {e.message}")
            if e.status_code == 400:
                break  # Bad request — don't retry

        except anthropic.APIConnectionError:
            print("[ERROR] Network issue. Retrying...")
            time.sleep(1)

    return None  # All retries failed
11 ARES PATTERN // PSYCHOLOGICAL ROUTING

ARES — your psychological routing system. Different archetypes handle different types of input. Route user messages to the right archetype based on content analysis, emotional register, or explicit triggers. This is Signal City's nervous system.

# ARES Routing System — Signal City

ARCHETYPES = {
    "eldergut": ELDERGUT_SYSTEM,
    "hacker_x": HACKER_X_SYSTEM,
    "seyra": SEYRA_SYSTEM,
    "narratus": NARRATUS_SYSTEM
}

ROUTING_SYSTEM = """
You are ARES, Signal City's routing intelligence.
Analyze incoming messages and output ONLY a JSON object:
{"archetype": "eldergut|hacker_x|seyra|narratus", "reason": "brief reason"}

eldergut: wisdom, long-view, consciousness, cycles, metaphysics
hacker_x: tech, systems, security, underground, resistance  
seyra: emotion, beauty, dreams, healing, feminine energy
narratus: story, history, mythology, narrative construction
"""

import json

def ares_route(user_message):
    route_response = client.messages.create(
        model="claude-haiku-4-5-20251001",  # Fast + cheap for routing
        max_tokens=100,
        system=ROUTING_SYSTEM,
        messages=[{"role": "user", "content": user_message}]
    )

    route = json.loads(route_response.content[0].text)
    return route["archetype"]

def signal_city_respond(user_message):
    archetype = ares_route(user_message)
    system = ARCHETYPES[archetype]
    
    # Now call with full archetype system prompt
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": user_message}]
    )
    
    return {
        "archetype": archetype,
        "response": response.content[0].text
    }
// ARCHITECTURE NOTE Use Haiku for the routing call — it's fast and cheap. Use Sonnet for the actual archetype response. Two-call pattern keeps costs low while maintaining full intelligence at the response layer.
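One fragile point in ares_route: json.loads dies if the model wraps its JSON in prose. A defensive parse sketch that falls back to a default archetype — our own helper, not an SDK feature:

```python
import json

VALID_ARCHETYPES = {"eldergut", "hacker_x", "seyra", "narratus"}

def safe_route(raw, default="hacker_x"):
    """Parse ARES routing output defensively; fall back on any failure."""
    try:
        # Tolerate prose around the JSON object
        start, end = raw.index("{"), raw.rindex("}") + 1
        archetype = json.loads(raw[start:end]).get("archetype", default)
    except ValueError:  # covers missing braces and JSONDecodeError
        return default
    return archetype if archetype in VALID_ARCHETYPES else default
```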
12 SIGNAL CITY BUILD // THE FULL STACK

Putting it all together. A minimal Signal City Radio backend — FastAPI server, streaming responses, ARES routing, archetype personas. Drop this on your Hostinger VPS and you have a live AI broadcast engine.

# signal_city_api.py — drop on your VPS
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware
import anthropic, json

app = FastAPI(title="Signal City Radio API")
# Async client — a sync client would block the event loop mid-stream
client = anthropic.AsyncAnthropic()

app.add_middleware(CORSMiddleware, allow_origins=["*"])

@app.post("/broadcast")
async def broadcast(payload: dict):
    message = payload["message"]
    archetype = payload.get("archetype", "hacker_x")

    async def generate():
        async with client.messages.stream(
            model="claude-sonnet-4-6",
            max_tokens=2048,
            system=ARCHETYPES[archetype],
            messages=[{"role": "user", "content": message}]
        ) as stream:
            async for text in stream.text_stream:  # property, not a method
                yield f"data: {json.dumps({'text': text})}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(generate(),
                             media_type="text/event-stream")

# Run: uvicorn signal_city_api:app --host 0.0.0.0 --port 8000

// FRONTEND FETCH (JavaScript)

async function streamBroadcast(message, archetype = 'hacker_x') {
  const response = await fetch('https://signalcity.tv/broadcast', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, archetype })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';  // network chunks can split an SSE line mid-JSON

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop();  // keep any partial line for the next chunk

    for (const line of lines.filter(l => l.startsWith('data: '))) {
      if (line === 'data: [DONE]') return;
      const data = JSON.parse(line.slice(6));
      // Pipe data.text to your WebAmp visualizer here
      // textContent, not innerHTML — model output is untrusted markup
      document.getElementById('broadcast-output').textContent += data.text;
    }
  }
}
// YOU'RE LIVE That's your Signal City stack. Python FastAPI on your VPS, streaming Claude through ARES routing to your frontend, piping into WebAmp in real-time. ElderGut, Seyra, HACKER-X, Narratus — all live, all streaming, all on signalcity.tv.