# DynaPrompt
Prompt management that grows with your LLM application.
DynaPrompt brings the configuration philosophy of Dynaconf to the world of LLM prompt engineering — zero I/O at import time, environment-aware layering, and Jinja2-powered templates.
## Why DynaPrompt?
```python
# Prompts are scattered — mixed into application logic
def analyze_call(transcript: str) -> str:
    prompt = f"""You are an expert call analyzer.
Analyze the following call transcript and extract:
- Key issues raised
- Customer sentiment (1-5)
- Agent performance

Transcript:
{transcript}

Respond in JSON."""
    return call_llm(prompt, model="gpt-4o", temperature=0.1)
```
- Hard to update prompts without touching code
- No environment-specific overrides (dev vs. prod model)
- No validation or token limit guards
- No version tracking or audit logs
`prompts/analyze_call.md`:

```md
---
model: gpt-4o
temperature: 0.1
---
You are an expert call analyzer.
Analyze the following call transcript and extract:
- Key issues raised
- Customer sentiment (1-5)
- Agent performance

Transcript: {{ transcript }}

Respond in JSON.
```
`app.py`:

```python
from dynaprompt import DynaPrompt

prompts = DynaPrompt(settings_files=["prompts/"])
result = prompts.analyze_call.render(transcript=transcript)
```
- Prompts are versioned and reviewable like code
- Switch models per environment (dev → gpt-3.5, prod → gpt-4o)
- Token limit validation built-in
- Every render produces a SHA-256 hash for audit trails
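For illustration, an audit hash like the one in the last point could be derived as follows. This is only a sketch, not DynaPrompt's actual implementation; the `prompt_hash` helper and the choice of hashed fields are assumptions:

```python
import hashlib
import json


def prompt_hash(text: str, config: dict) -> str:
    # Serialize deterministically (sorted keys) so the same rendered
    # text and config always produce the same hash.
    canonical = json.dumps({"text": text, "config": config}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


h = prompt_hash("You are a helpful assistant.", {"model": "gpt-4o", "temperature": 0.1})
print(h)  # 64-character hex digest, stable across runs
```

Logging this digest alongside each LLM call lets you later prove exactly which prompt and configuration produced a given response.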
## Installation
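Assuming the package is published under the same name used in the imports above:

```shell
pip install dynaprompt
```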
## 5-Minute Quick Start
### Step 1: Create a Prompt File

`prompts/greeting.md`:

```md
---
model: gpt-4o
temperature: 0.7
max_tokens: 512
---
You are a helpful assistant for {{ app_name }}.

Greet the user named {{ user_name }} warmly and ask how you can help them today.
```
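The file above pairs a `---`-delimited front-matter header with a template body. A rough, stdlib-only sketch of how such a file could be split (DynaPrompt's real parser is not shown, and a real one would coerce value types rather than leaving them as strings):

```python
def split_front_matter(source: str) -> tuple[dict, str]:
    """Split '---'-delimited front matter from the template body."""
    config: dict = {}
    body = source
    if source.startswith("---"):
        _, header, body = source.split("---", 2)
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip()  # values stay strings here
    return config, body.strip()


raw = """---
model: gpt-4o
temperature: 0.7
---
You are a helpful assistant for {{ app_name }}.
"""
config, template = split_front_matter(raw)
print(config)    # {'model': 'gpt-4o', 'temperature': '0.7'}
print(template)  # You are a helpful assistant for {{ app_name }}.
```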
### Step 2: Load and Render

```python
from dynaprompt import DynaPrompt

prompts = DynaPrompt(settings_files=["prompts/"])

rendered = prompts.greeting.render(
    user_name="Ahmed",
    app_name="TechTrax",
)

print(rendered.text)         # The final prompt string
print(rendered.config)       # {'model': 'gpt-4o', 'temperature': 0.7, ...}
print(rendered.prompt_hash)  # SHA-256 for audit logging
```
### Step 3: Inspect What's Loaded

```python
# See all available prompts
print(prompts.keys())  # ['greeting', 'support.chat', ...]

# Rich debug info
prompts.inspect()
```
## Core Concepts

```mermaid
graph LR
    A["📁 Prompt Files<br/>.md / .toml / .py"] --> B[DynaPrompt]
    C["🌍 Environment<br/>dev / staging / prod"] --> B
    D["📦 Variables<br/>.json / .yaml / dicts"] --> B
    B --> E[PromptNode]
    E --> F["🎨 .render(**kwargs)"]
    F --> G["RenderedPrompt<br/>text + config + hash"]
```
| Concept | Description |
|---|---|
| `DynaPrompt` | The central manager — lazy-loads and caches everything |
| `PromptNode` | A single prompt template with its configuration |
| `RenderedPrompt` | The output of `.render()` — text, config, and a SHA-256 hash |
| Environment | A named context (`development`, `production`) that controls which values win |
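To make the pipeline concrete, here is one way the objects in the table could relate. This is a simplified model with a toy regex substitution standing in for Jinja2; the field names mirror the quick-start output but everything else is an assumption:

```python
import hashlib
import re
from dataclasses import dataclass, field


@dataclass(frozen=True)
class RenderedPrompt:
    text: str
    config: dict
    prompt_hash: str


@dataclass
class PromptNode:
    template: str
    config: dict = field(default_factory=dict)

    def render(self, **variables) -> RenderedPrompt:
        # Substitute {{ name }} placeholders (a toy stand-in for Jinja2).
        text = re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables[m.group(1)]),
            self.template,
        )
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return RenderedPrompt(text=text, config=self.config, prompt_hash=digest)


node = PromptNode("Greet {{ user_name }} warmly.", {"model": "gpt-4o"})
out = node.render(user_name="Ahmed")
print(out.text)  # Greet Ahmed warmly.
```

The real library layers environments and variable files on top of this, but the core flow is the same: a template plus config goes in, an immutable rendered result with a hash comes out.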
## What's Next?
- Getting Started — Full guide to formats, directories, and variables
- Environment Layering — Override prompts per environment
- Hooks & Validation — Intercept renders and enforce constraints
- Async Support — Use DynaPrompt in FastAPI & async agents