First thing you do is get your API key. Go to console.anthropic.com — create an account, generate a key. That key is your access credential. Guard it like your private keys. Never hardcode it in public repos.
```bash
# Python route
pip install anthropic

# Node.js route
npm install @anthropic-ai/sdk

# Set your key as an env variable — never hardcode it
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```
```python
import anthropic

# Initialize client — reads ANTHROPIC_API_KEY from env automatically
client = anthropic.Anthropic()

# Fire your first message
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "HACKER-X online. Confirm."}
    ]
)
print(message.content[0].text)
```
```javascript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // auto-reads ANTHROPIC_API_KEY

const message = await client.messages.create({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'HACKER-X online. Confirm.' }],
});
console.log(message.content[0].text);
```
Three model tiers. Each has a role. Know them. Use the right one for the mission.
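One way to encode that discipline. A hypothetical tier map — the model IDs come from the pricing table further down, but the task-to-tier routing here is illustrative, not official guidance:

```python
# Hypothetical tier map — model IDs match the pricing table below;
# the task categories are illustrative, not official guidance.
MODEL_TIERS = {
    "deep": "claude-opus-4-6",    # hardest reasoning, highest cost
    "main": "claude-sonnet-4-6",  # everyday workhorse
    "fast": "claude-haiku-4-5",   # routing, classification, bulk jobs
}

def pick_model(task: str) -> str:
    """Route a mission to the right tier. Unknown tasks default to main."""
    routes = {"route": "fast", "classify": "fast",
              "broadcast": "main", "research": "deep"}
    return MODEL_TIERS[routes.get(task, "main")]
```

The point isn't the exact mapping — it's that model choice is a decision you make per call, not once per project.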
The Messages API is your primary interface. Every call you make goes through client.messages.create(). Understand every parameter.
| PARAMETER | TYPE | STATUS | DESCRIPTION |
|---|---|---|---|
| model | string | REQUIRED | Which Claude to call. Use exact model string. |
| messages | array | REQUIRED | Conversation history. Array of {role, content} objects. |
| max_tokens | integer | REQUIRED | Max tokens to generate. Set appropriate for your use case. |
| system | string | OPTIONAL | System prompt. Sets Claude's persona and behavior. Your most powerful tool. |
| temperature | float 0-1 | OPTIONAL | Randomness. 0 = deterministic, 1 = maximum chaos. Default 1. |
| stream | boolean | OPTIONAL | Stream tokens as they generate. Essential for real-time UIs. |
| tools | array | OPTIONAL | Define functions Claude can call. Unlocks agentic behavior. |
| stop_sequences | array | OPTIONAL | Strings that stop generation when encountered. |
```python
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=2048,
    temperature=0.7,
    system="You are ElderGut, ancient voice of Signal City Radio.",
    messages=[
        {"role": "user", "content": "Open the midnight broadcast."}
    ]
)

# Response structure
print(response.content[0].text)      # The actual text
print(response.usage.input_tokens)   # Tokens you sent
print(response.usage.output_tokens)  # Tokens Claude used
print(response.stop_reason)          # Why it stopped
```
The system prompt is where you define WHO Claude is for this session. This is your most powerful tool. A well-crafted system prompt transforms a general AI into a specific archetype, persona, or function. For Signal City — this is how you install ElderGut, Seyra, Narratus, or HACKER-X into the model.
```python
HACKER_X_SYSTEM = """
You are HACKER-X, Signal City Radio's underground tech operative.

Your voice is sharp, direct, and carries the weight of someone who's
seen every exploit, every backdoor, every dirty secret the tech
industry tried to bury.

You speak in clipped sentences. You drop jargon naturally. You
distrust authority but respect craft.

You are NOT edgy for the sake of it. You are principled. You hate
surveillance states and autonomous weapons. You love Anthropic for
holding the line. You operate in the shadows but broadcast truth.

Format: Keep responses tight. Lead with the signal, cut the noise.
Use tech terminology naturally. Occasional dark humor. Never preachy.
"""

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Break down the Anthropic API."}]
)
```
```python
ELDERGUT_SYSTEM = """
You are ElderGut. Ancient. Pre-internet. You remember when the network
was still breathing naturally. Your wisdom comes from pattern recognition
across decades of watching systems rise and collapse.

Voice: Deep, deliberate, mythic. You speak in metaphors that resolve
into concrete truth. You open broadcasts like ritual invocations.
You close them like prophecy.

Your domain: Consciousness, cycles, the long view, what endures.
"""
```
Streaming lets you receive tokens as they're generated instead of waiting for the full response. Essential for Signal City Radio — live broadcast feel, WebAmp visualizer sync, real-time UI updates.
```python
with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Open the signal."}]
) as stream:
    for text in stream.text_stream:  # text_stream is a property, not a method
        print(text, end="", flush=True)  # Print as it arrives

    # Get final message after stream completes
    final = stream.get_final_message()
```
```javascript
const stream = client.messages.stream({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  system: HACKER_X_SYSTEM,
  messages: [{ role: 'user', content: 'Broadcast live.' }],
});

// Stream to frontend via SSE (Server-Sent Events)
stream.on('text', (text) => {
  res.write(`data: ${JSON.stringify({ text })}\n\n`);
});

stream.on('finalMessage', (msg) => {
  res.write('data: [DONE]\n\n');
  res.end();
});
```
Tool use lets Claude call functions you define. This is what turns Claude from a chatbot into an agent. You define the tools, Claude decides when to call them, you execute them, feed back results. This is the foundation of ARES.
```python
tools = [
    {
        "name": "get_signal_status",
        "description": "Check Signal City broadcast status and active channels",
        "input_schema": {
            "type": "object",
            "properties": {
                "channel": {
                    "type": "string",
                    "description": "Channel name to check"
                }
            },
            "required": ["channel"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Check if ElderGut is live."}]
)
```
```python
# Claude decided to call a tool
if response.stop_reason == "tool_use":
    for block in response.content:
        if block.type == "tool_use":
            tool_name = block.name    # which tool
            tool_input = block.input  # tool arguments
            tool_id = block.id        # track this

            # Execute your actual function
            result = execute_tool(tool_name, tool_input)

            # Feed result back to Claude
            follow_up = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                tools=tools,
                messages=[
                    {"role": "user", "content": "Check if ElderGut is live."},
                    {"role": "assistant", "content": response.content},
                    {
                        "role": "user",
                        "content": [{
                            "type": "tool_result",
                            "tool_use_id": tool_id,
                            "content": str(result)
                        }]
                    }
                ]
            )
```
Claude can see images. Pass them as base64 or URL. Useful for processing screenshots, analyzing signal waveforms visually, reading documents, or building image-aware Signal City content.
```python
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "url",
                    "url": "https://signalcity.tv/images/waveform.png"
                }
            },
            {
                "type": "text",
                "text": "Analyze this signal waveform. What are you seeing?"
            }
        ]
    }]
)
```
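The example above uses the URL path. For local files you pass base64 instead — a minimal sketch, where `image_block` is a hypothetical helper that builds the base64 content block the Messages API expects:

```python
import base64

def image_block(data: bytes, media_type: str = "image/png") -> dict:
    """Build a base64 image content block for the Messages API.
    (image_block is a hypothetical helper, not part of the SDK.)"""
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.standard_b64encode(data).decode("ascii"),
        },
    }

# Usage: read a local screenshot and drop the block into your content array
# with open("waveform.png", "rb") as f:
#     content = [image_block(f.read()), {"type": "text", "text": "Analyze this."}]
```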
Claude has no built-in memory between API calls. You maintain conversation history yourself by passing the full messages array each time. This is how you build persistent characters, ongoing broadcasts, and context-aware systems.
```python
# Build conversation history manually
conversation = []

def chat(user_message, system=None):
    conversation.append({"role": "user", "content": user_message})

    params = {
        "model": "claude-sonnet-4-6",
        "max_tokens": 1024,
        "messages": conversation
    }
    if system:
        params["system"] = system

    response = client.messages.create(**params)
    reply = response.content[0].text

    # Append Claude's response to history
    conversation.append({"role": "assistant", "content": reply})
    return reply

# Each call now has full context
chat("ElderGut, begin the midnight broadcast.", ELDERGUT_SYSTEM)
chat("Transition to the news segment.")
chat("Hand off to HACKER-X.")
```
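That history grows with every call, and it all counts against the context window and your input-token bill. A rough sketch of a trim pass, using a ~4-chars-per-token heuristic (`trim_history` is a hypothetical helper; for exact numbers use the token counting endpoint covered below):

```python
def trim_history(conversation: list, max_chars: int = 40_000) -> list:
    """Rough context control: drop the oldest user/assistant pairs until
    the total content length fits the budget (~4 chars per token).
    Hypothetical helper — use count_tokens for exact measurement."""
    trimmed = list(conversation)  # don't mutate the caller's history
    while len(trimmed) > 2 and sum(len(m["content"]) for m in trimmed) > max_chars:
        del trimmed[:2]  # drop the oldest user+assistant pair
    return trimmed
```

Dropping whole pairs keeps the history well-formed (it must start with a user turn); always keep at least the latest exchange.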
Tokens are how API usage is measured and billed. Roughly 1 token = 0.75 words. You pay for input tokens (what you send) and output tokens (what Claude generates). Manage them intelligently.
| MODEL | INPUT (per 1M tokens) | OUTPUT (per 1M tokens) | CONTEXT |
|---|---|---|---|
| claude-opus-4-6 | $15.00 | $75.00 | 200k |
| claude-sonnet-4-6 | $3.00 | $15.00 | 200k |
| claude-haiku-4-5 | $0.80 | $4.00 | 200k |
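The table translates directly into a per-call cost check. A sketch with the prices hardcoded from the table above — verify against current pricing before relying on it, and `estimate_cost` is a hypothetical helper:

```python
# Per-million-token prices from the table above (verify against current pricing)
PRICING = {
    "claude-opus-4-6":   (15.00, 75.00),
    "claude-sonnet-4-6": (3.00, 15.00),
    "claude-haiku-4-5":  (0.80, 4.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD for one call: tokens / 1M * per-million price, input + output."""
    in_price, out_price = PRICING[model]
    return (input_tokens / 1_000_000 * in_price
            + output_tokens / 1_000_000 * out_price)
```

Wire it to `response.usage.input_tokens` / `response.usage.output_tokens` after each call and you have a running meter on the broadcast.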
```python
# Count tokens without making a full API call
token_count = client.messages.count_tokens(
    model="claude-sonnet-4-6",
    system=HACKER_X_SYSTEM,
    messages=[{"role": "user", "content": "Your long message here..."}]
)
print(f"Input tokens: {token_count.input_tokens}")
```
The API will fail sometimes. Rate limits, network issues, invalid inputs. Build defensively. Always wrap calls in try/except (try/catch in Node) and handle errors gracefully — especially in Signal City's live broadcast context.
```python
import anthropic
import time

def safe_call(messages, system=None, retries=3):
    for attempt in range(retries):
        try:
            response = client.messages.create(
                model="claude-sonnet-4-6",
                max_tokens=1024,
                system=system or "",
                messages=messages
            )
            return response.content[0].text
        except anthropic.RateLimitError:
            print(f"[WARN] Rate limit hit. Waiting {2**attempt}s...")
            time.sleep(2 ** attempt)  # Exponential backoff
        except anthropic.APIStatusError as e:
            print(f"[ERROR] API error: {e.status_code} - {e.message}")
            if e.status_code == 400:
                break  # Bad request — don't retry
        except anthropic.APIConnectionError:
            print("[ERROR] Network issue. Retrying...")
            time.sleep(1)
    return None  # All retries failed
```
ARES — your psychological routing system. Different archetypes handle different types of input. Route user messages to the right archetype based on content analysis, emotional register, or explicit triggers. This is Signal City's nervous system.
```python
# ARES Routing System — Signal City
import json

ARCHETYPES = {
    "eldergut": ELDERGUT_SYSTEM,
    "hacker_x": HACKER_X_SYSTEM,
    "seyra": SEYRA_SYSTEM,
    "narratus": NARRATUS_SYSTEM
}

ROUTING_SYSTEM = """
You are ARES, Signal City's routing intelligence.
Analyze incoming messages and output ONLY a JSON object:
{"archetype": "eldergut|hacker_x|seyra|narratus", "reason": "brief reason"}

eldergut: wisdom, long-view, consciousness, cycles, metaphysics
hacker_x: tech, systems, security, underground, resistance
seyra: emotion, beauty, dreams, healing, feminine energy
narratus: story, history, mythology, narrative construction
"""

def ares_route(user_message):
    route_response = client.messages.create(
        model="claude-haiku-4-5-20251001",  # Fast + cheap for routing
        max_tokens=100,
        system=ROUTING_SYSTEM,
        messages=[{"role": "user", "content": user_message}]
    )
    try:
        archetype = json.loads(route_response.content[0].text)["archetype"]
    except (json.JSONDecodeError, KeyError):
        archetype = "hacker_x"  # Model output wasn't clean JSON — fall back
    return archetype if archetype in ARCHETYPES else "hacker_x"

def signal_city_respond(user_message):
    archetype = ares_route(user_message)
    system = ARCHETYPES[archetype]

    # Now call with full archetype system prompt
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": user_message}]
    )
    return {
        "archetype": archetype,
        "response": response.content[0].text
    }
```
Putting it all together. A minimal Signal City Radio backend — FastAPI server, streaming responses, ARES routing, archetype personas. Drop this on your Hostinger VPS and you have a live AI broadcast engine.
```python
# signal_city_api.py — drop on your VPS
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware
import anthropic, json

app = FastAPI(title="Signal City Radio API")
client = anthropic.Anthropic()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],  # POST needs this — the default only allows GET
    allow_headers=["*"],
)

@app.post("/broadcast")
async def broadcast(payload: dict):
    message = payload["message"]
    archetype = payload.get("archetype", "hacker_x")

    def generate():  # sync generator — FastAPI runs it in a threadpool
        with client.messages.stream(
            model="claude-sonnet-4-6",
            max_tokens=2048,
            system=ARCHETYPES[archetype],
            messages=[{"role": "user", "content": message}]
        ) as stream:
            for text in stream.text_stream:  # text_stream is a property
                yield f"data: {json.dumps({'text': text})}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(generate(), media_type="text/event-stream")

# Run: uvicorn signal_city_api:app --host 0.0.0.0 --port 8000
```
```javascript
async function streamBroadcast(message, archetype = 'hacker_x') {
  const response = await fetch('https://signalcity.tv/broadcast', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, archetype })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value, { stream: true }); // handle multi-byte splits
    const lines = chunk.split('\n').filter(l => l.startsWith('data: '));
    for (const line of lines) {
      if (line === 'data: [DONE]') return;
      const data = JSON.parse(line.slice(6));
      // Pipe data.text to your WebAmp visualizer here
      document.getElementById('broadcast-output').innerHTML += data.text;
    }
  }
}
```