## Hello World

Create a file called `hello.keel`:
```keel
agent Hello {
  role "A friendly greeter"
  model "claude-haiku"

  every 5.seconds {
    notify user "Hello from Keel!"
  }
}

run Hello
```
Run it:

```sh
KEEL_OLLAMA_MODEL=gemma4 keel run hello.keel
```
Output:

```
⚡ LLM provider: Ollama (http://localhost:11434)
▸ Starting agent Hello
    role: A friendly greeter
    model: gemma4 (ollama @ http://localhost:11434)
    ⏱ polling every 5 seconds
▸ Hello from Keel!
▸ Agent running. Press Ctrl+C to stop.
▸ Hello from Keel!
▸ Hello from Keel!
```
The agent prints “Hello from Keel!” every 5 seconds until you press Ctrl+C.
## What just happened?

- `agent Hello` — declares an agent named `Hello`
- `role "..."` — describes what the agent does (used as system prompt context)
- `model "claude-haiku"` — which LLM model to use (mapped to your local Ollama model)
- `every 5.seconds { ... }` — runs the block every 5 seconds
- `notify user "..."` — prints a message to the terminal
- `run Hello` — starts the agent
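Conceptually, `every 5.seconds { ... }` is just a timed loop: run the block, wait the interval, repeat until interrupted. A rough Python sketch of that behavior (not Keel's actual runtime; the `max_ticks` parameter is added here only so the sketch can terminate instead of running until Ctrl+C):

```python
import time

def every(interval_seconds, block, max_ticks=None):
    """Sketch of Keel's `every N.seconds { ... }`: run `block`,
    sleep for the interval, and repeat. `max_ticks` is a hypothetical
    escape hatch for demonstration; a real agent loops forever."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        block()
        ticks += 1
        if max_ticks is None or ticks < max_ticks:
            time.sleep(interval_seconds)

# Collect three "notifications" instead of printing forever.
messages = []
every(0.01, lambda: messages.append("Hello from Keel!"), max_ticks=3)
print(messages)  # ['Hello from Keel!', 'Hello from Keel!', 'Hello from Keel!']
```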
## Using AI

Let’s make the agent actually use AI:
```keel
type Mood = happy | neutral | sad

task analyze(text: str) -> Mood {
  classify text as Mood fallback neutral
}

agent MoodBot {
  role "Analyzes the mood of text"
  model "claude-haiku"

  every 10.seconds {
    mood = analyze("I love building programming languages!")
    notify user "Mood: {mood}"
  }
}

run MoodBot
```
The `classify` keyword sends the text to the LLM and maps the response to one of the enum variants. The `fallback neutral` clause ensures you always get a valid `Mood`, even when the model’s reply doesn’t match any variant.
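The mapping step can be pictured as: normalize the model’s free-text reply, look for a known variant in it, and fall back to the default when nothing matches. A minimal Python sketch of that idea (purely illustrative, not Keel’s actual implementation; the variant-matching rule here is an assumption):

```python
def classify_with_fallback(response: str, variants: list[str], fallback: str) -> str:
    """Sketch of `classify text as Mood fallback neutral`:
    map an LLM's free-text reply onto one of the enum variants,
    returning the fallback when no variant is recognized."""
    normalized = response.strip().lower()
    for variant in variants:
        if variant in normalized:
            return variant
    return fallback

MOOD = ["happy", "neutral", "sad"]

print(classify_with_fallback("Happy!", MOOD, "neutral"))          # happy
print(classify_with_fallback("The mood is: sad", MOOD, "neutral"))  # sad
print(classify_with_fallback("I cannot tell.", MOOD, "neutral"))    # neutral
```

The point of the fallback is that downstream code always receives a valid `Mood` value, so a rambling or off-topic model reply can never propagate as an invalid state.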