
Getting Started with SpecWeave Kafka

Get your first Kafka cluster running in under 15 minutes

This guide will walk you through installing the SpecWeave Kafka plugin suite, starting a local Kafka cluster, and producing/consuming your first messages.

Prerequisites

Before starting, ensure you have:

  • Node.js 18+ installed (node --version)
  • Docker Desktop running (docker ps)
  • SpecWeave CLI installed (npm install -g @specweave/cli)
  • Claude Code (latest version)

Step 1: Initialize SpecWeave (2 minutes)

# Create a new project directory
mkdir my-kafka-project
cd my-kafka-project

# Initialize SpecWeave
specweave init

# Select your AI coding tool
# → Choose "Claude Code" (recommended)

# Plugins are automatically installed during init

What happens during init:

  • ✅ Creates .specweave/ directory structure
  • ✅ Installs all 4 Kafka plugins (kafka, confluent, kafka-streams, n8n)
  • ✅ Registers skills and agents
  • ✅ Configures default templates

Step 2: Start Local Kafka Cluster (3 minutes)

# Start Kafka with Docker Compose
/sw-kafka:dev-env start

# Wait for cluster to be ready (~60 seconds)
# ✓ Kafka broker (KRaft mode) on port 9092
# ✓ Schema Registry on port 8081
# ✓ Kafka UI on port 8080

Verify the cluster is running:

# Check Docker containers
docker ps | grep kafka

# Expected output:
# kafka-broker
# schema-registry
# kafka-ui

Access Kafka UI in your browser at http://localhost:8080.

Step 3: Configure MCP Server (2 minutes)

The MCP (Model Context Protocol) server enables AI-powered Kafka operations.

# Auto-detect and configure MCP server
/sw-kafka:mcp-configure

# The command will:
# 1. Detect available MCP servers (kanapuli, tuannvm, etc.)
# 2. Generate .mcp.json configuration
# 3. Test connection to localhost:9092

If you don't have an MCP server:

# Install kanapuli MCP server (simplest option)
npm install -g @kanapuli/mcp-kafka

# Or use tuannvm (more features)
go install github.com/tuannvm/kafka-mcp-server@latest
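For reference, a Claude Code `.mcp.json` for a Kafka MCP server looks roughly like this. The server name, command, and env var below are illustrative; `/sw-kafka:mcp-configure` generates the actual values for whichever server it detects:

```json
{
  "mcpServers": {
    "kafka": {
      "command": "mcp-kafka",
      "args": [],
      "env": {
        "KAFKA_BROKERS": "localhost:9092"
      }
    }
  }
}
```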

Step 4: Produce Your First Message (2 minutes)

Option A: Using kcat (Command Line)

# Install kcat if not already installed
# macOS: brew install kcat
# Linux: sudo apt-get install kafkacat

# Produce a message
echo '{"user": "alice", "action": "login", "timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"}' | \
kcat -P -b localhost:9092 -t user-events

# Verify message was sent
kcat -C -b localhost:9092 -t user-events -c 1 -o beginning

Option B: Using Node.js Code

Create producer.js:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'my-first-producer',
  brokers: ['localhost:9092']
});

const producer = kafka.producer();

async function sendMessage() {
  await producer.connect();

  const result = await producer.send({
    topic: 'user-events',
    messages: [
      {
        key: 'user-alice',
        value: JSON.stringify({
          user: 'alice',
          action: 'login',
          timestamp: new Date().toISOString()
        })
      }
    ]
  });

  console.log('Message sent:', result);

  await producer.disconnect();
}

sendMessage().catch(console.error);

Run it:

npm install kafkajs
node producer.js

Step 5: Consume Messages (2 minutes)

Create consumer.js:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'my-first-consumer',
  brokers: ['localhost:9092']
});

const consumer = kafka.consumer({ groupId: 'my-consumer-group' });

async function consumeMessages() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'user-events', fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        topic,
        partition,
        offset: message.offset,
        key: message.key?.toString(),
        value: message.value?.toString(),
      });
    },
  });
}

consumeMessages().catch(console.error);

Run it:

node consumer.js

# Expected output:
# {
#   topic: 'user-events',
#   partition: 0,
#   offset: '0',
#   key: 'user-alice',
#   value: '{"user":"alice","action":"login",...}'
# }

Step 6: Setup Monitoring (4 minutes)

# Deploy Prometheus + Grafana stack
/sw-kafka:monitor-setup

# This deploys:
# - Prometheus with JMX Exporter
# - Grafana with 5 pre-built dashboards
# - 14 alerting rules

# Wait for deployment to complete (~2 minutes)

Access Grafana dashboards at http://localhost:3000.

Pre-built dashboards:

  1. Kafka Overview - Cluster-wide metrics
  2. Consumer Lag - Per-group lag tracking
  3. Broker Health - CPU, memory, disk, network
  4. Topic Metrics - Per-topic throughput
  5. Producer/Consumer Metrics - Client-level stats

Verification Checklist

At this point, you should have:

  • ✅ Kafka cluster running locally
  • ✅ MCP server configured
  • ✅ Messages produced and consumed
  • ✅ Monitoring stack deployed
  • ✅ Grafana dashboards accessible

Next Steps

Learn Advanced Features

Exactly-Once Semantics:

// See: .specweave/docs/public/guides/kafka-advanced-usage.md
const producer = kafka.producer({
  transactionalId: 'my-transaction-id',
  maxInFlightRequests: 1,
  idempotent: true
});
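The transaction itself is driven through kafkajs's `producer.transaction()`. A minimal sketch, assuming the `user-events` topic from this guide (the event shape and helper are illustrative, and the broker is only contacted when `KAFKA_BROKERS` is set):

```javascript
// Pure helper: turn events into Kafka messages (usable without a broker)
function toMessages(events) {
  return events.map((e) => ({ key: e.user, value: JSON.stringify(e) }));
}

async function sendTransactionally(events) {
  const { Kafka } = require('kafkajs'); // required lazily so the helper above stays broker-free
  const kafka = new Kafka({ clientId: 'txn-producer', brokers: [process.env.KAFKA_BROKERS] });
  const producer = kafka.producer({
    transactionalId: 'my-transaction-id',
    maxInFlightRequests: 1,
    idempotent: true
  });

  await producer.connect();
  const txn = await producer.transaction();
  try {
    await txn.send({ topic: 'user-events', messages: toMessages(events) });
    await txn.commit(); // all messages become visible atomically
  } catch (err) {
    await txn.abort(); // read_committed consumers never see the aborted messages
    throw err;
  } finally {
    await producer.disconnect();
  }
}

// Only touch the broker when explicitly configured
if (process.env.KAFKA_BROKERS) {
  sendTransactionally([{ user: 'alice', action: 'login' }]).catch(console.error);
}
```

Consumers must set `readCommitted: true` in `consumer.run` options to skip uncommitted messages.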

Schema Registry (Avro):

// Register schema and serialize messages
// See: plugins/sw-kafka/examples/avro-schema-registry/
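With the `@kafkajs/confluent-schema-registry` package the flow is: register the Avro schema once, then encode values with the returned schema id. A sketch under the assumption that Schema Registry runs on port 8081 as in Step 2 (the schema itself is illustrative, and network calls only run when `SCHEMA_REGISTRY_URL` is set):

```javascript
// Avro schema for the user-events payload (illustrative)
const userEventSchema = {
  type: 'record',
  name: 'UserEvent',
  namespace: 'com.example',
  fields: [
    { name: 'user', type: 'string' },
    { name: 'action', type: 'string' },
    { name: 'timestamp', type: 'string' }
  ]
};

async function encodeEvent(event) {
  // Lazy require keeps the schema definition above usable without the package installed
  const { SchemaRegistry, SchemaType } = require('@kafkajs/confluent-schema-registry');
  const registry = new SchemaRegistry({ host: process.env.SCHEMA_REGISTRY_URL });

  const { id } = await registry.register({
    type: SchemaType.AVRO,
    schema: JSON.stringify(userEventSchema)
  });
  // Returns a Buffer: magic byte + schema id + Avro-encoded payload
  return registry.encode(id, event);
}

if (process.env.SCHEMA_REGISTRY_URL) {
  encodeEvent({ user: 'alice', action: 'login', timestamp: new Date().toISOString() })
    .then((buf) => console.log('Encoded bytes:', buf.length))
    .catch(console.error);
}
```

The consumer side calls `registry.decode(message.value)`, which reads the schema id from the buffer and fetches the matching schema automatically.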

Kafka Streams:

# Generate Kafka Streams app
/sw-kafka:app-scaffold

# See: .specweave/docs/public/guides/kafka-streams.md

Deploy to Production

AWS MSK:

/sw-kafka:deploy aws-msk

# See: .specweave/docs/public/guides/kafka-terraform.md

Confluent Cloud:

/sw-confluent:cluster-create

# See: plugins/sw-confluent/README.md

Explore Examples

# Working code examples
ls plugins/sw-kafka/examples/

# - simple-producer-consumer/
# - avro-schema-registry/
# - exactly-once-semantics/
# - kafka-streams-app/
# - n8n-workflow/

Common First-Time Issues

Kafka Won't Start

Problem: Docker containers exit immediately

Solution:

# Check port 9092 is available
lsof -i :9092

# If occupied, kill the process or change port in docker-compose.yml
docker-compose -f plugins/sw-kafka/templates/docker/kafka-kraft/docker-compose.yml down

# Restart
/sw-kafka:dev-env start

MCP Server Connection Failed

Problem: Cannot connect to MCP server

Solution:

# Check if the MCP server is installed
npm list -g @kanapuli/mcp-kafka

# If not installed:
npm install -g @kanapuli/mcp-kafka

# Reconfigure
/sw-kafka:mcp-configure

Consumer Not Receiving Messages

Problem: Consumer runs but no messages appear

Solution:

# Check topic exists and has messages
kcat -L -b localhost:9092

# Verify messages in topic
kcat -C -b localhost:9092 -t user-events -c 10 -o beginning

# Check consumer group
kcat -b localhost:9092 -G my-consumer-group user-events

Grafana Dashboards Not Loading

Problem: Grafana shows no data

Solution:

# Check Prometheus is scraping Kafka metrics
curl localhost:9090/api/v1/targets

# Restart monitoring stack
/sw-kafka:monitor-setup --restart

# Verify JMX Exporter port (7071)
curl localhost:7071/metrics

Learning Resources

Documentation

Skills (Ask Claude Code)

  • "How do I configure SASL authentication?" → kafka-architecture skill
  • "Show me kcat examples" → kafka-cli-tools skill
  • "Deploy Kafka to AWS" → kafka-iac-deployment skill
  • "Setup monitoring" → kafka-observability skill

Commands

  • /sw-kafka:deploy - Deploy to cloud
  • /sw-kafka:monitor-setup - Setup monitoring
  • /sw-kafka:dev-env - Local development
  • /sw-confluent:cluster-create - Confluent Cloud

Examples

# Browse working examples
cd plugins/sw-kafka/examples/

# Run example
cd simple-producer-consumer
npm install
npm start

Quick Reference

Start/Stop Local Cluster

# Start
/sw-kafka:dev-env start

# Stop
/sw-kafka:dev-env stop

# Reset (deletes all data)
/sw-kafka:dev-env reset

# View logs
/sw-kafka:dev-env logs

Produce/Consume with kcat

# Produce (interactive)
kcat -P -b localhost:9092 -t my-topic
> message 1
> message 2
> ^D

# Consume (from beginning)
kcat -C -b localhost:9092 -t my-topic -o beginning

# Consume (tail)
kcat -C -b localhost:9092 -t my-topic -o end

Topic Management

# List topics
kcat -L -b localhost:9092

# Create topic (via Kafka UI)
# → http://localhost:8080

# Delete topic (via CLI)
kafka-topics --bootstrap-server localhost:9092 --delete --topic my-topic
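Topics can also be managed programmatically with the kafkajs admin client. A sketch (the partition count is illustrative, and the broker is only contacted when `KAFKA_BROKERS` is set):

```javascript
// Pure helper: build a topic definition for createTopics (usable without a broker)
function topicConfig(name, partitions = 3) {
  return { topic: name, numPartitions: partitions, replicationFactor: 1 };
}

async function createTopic(name) {
  const { Kafka } = require('kafkajs'); // required lazily so the helper above stays broker-free
  const kafka = new Kafka({ clientId: 'topic-admin', brokers: [process.env.KAFKA_BROKERS] });
  const admin = kafka.admin();

  await admin.connect();
  try {
    // createTopics resolves to false if the topic already exists
    const created = await admin.createTopics({ topics: [topicConfig(name)] });
    console.log(created ? `Created ${name}` : `${name} already exists`);
    console.log('Topics:', await admin.listTopics());
  } finally {
    await admin.disconnect();
  }
}

if (process.env.KAFKA_BROKERS) {
  createTopic('user-events').catch(console.error);
}
```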

Monitor Consumer Lag

# Via CLI (shows CURRENT-OFFSET, LOG-END-OFFSET, and LAG per partition)
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-group

# Via Grafana
# → http://localhost:3000
# → Consumer Lag dashboard
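Lag for a partition is the log-end (high-water) offset minus the group's last committed offset; summing across partitions gives total group lag. A sketch of the arithmetic, using offset shapes like those returned by kafkajs's `admin.fetchTopicOffsets()` and `admin.fetchOffsets()` (the sample numbers are illustrative):

```javascript
// endOffsets: [{ partition, high }] — log-end offsets per partition (strings in kafkajs)
// committed:  [{ partition, offset }] — the consumer group's committed offsets
function computeLag(endOffsets, committed) {
  const byPartition = new Map(committed.map((c) => [c.partition, Number(c.offset)]));
  return endOffsets.reduce((total, p) => {
    const c = byPartition.get(p.partition);
    // No commit yet (missing or -1) means the group is behind by the full log
    const committedOffset = c == null || c < 0 ? 0 : c;
    return total + (Number(p.high) - committedOffset);
  }, 0);
}

// Illustrative sample: partition 0 fully caught up, partition 1 is 5 behind
const lag = computeLag(
  [{ partition: 0, high: '10' }, { partition: 1, high: '20' }],
  [{ partition: 0, offset: '10' }, { partition: 1, offset: '15' }]
);
console.log('Total lag:', lag); // → 5
```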

Congratulations! 🎉

You've successfully:

  • ✅ Set up a local Kafka cluster
  • ✅ Configured MCP server integration
  • ✅ Produced and consumed messages
  • ✅ Deployed monitoring stack
  • ✅ Accessed Grafana dashboards

Total time: Under 15 minutes

Ready for production? See the Terraform Guide to deploy to AWS, Azure, or Confluent Cloud.

Need help? Check the Troubleshooting Guide or ask in GitHub Discussions.


Next: Advanced Usage Guide →