# Security Best Practices

Secure your agents: API key management, input validation, and other security essentials.
## 🧒 Explained like I'm 5
Imagine you have a secret diary. You wouldn't leave it where everyone can read it, right? You'd lock it up!
AI agents handle important information too - API keys, user data, secrets. We need to protect them just like a locked diary!
Security means keeping your agent and its data safe from bad people.
> **Security is Critical**
> A security breach can expose user data, cost money, and damage trust. Always prioritize security.
## ❓ Why we need this
Agents handle sensitive information:
- API keys and secrets
- User personal data
- Financial information
- Business data
Without security:
- Hackers can steal API keys
- User data can be exposed
- Agents can be manipulated
- Systems can be compromised
Security protects your agent, your users, and your reputation!
## 🔧 How it works

### Security Principles
- **Secrets Management**: Never expose API keys
- **Input Validation**: Check all inputs
- **Authentication**: Verify who can use the agent
- **Encryption**: Protect data in transit and at rest
- **Rate Limiting**: Prevent abuse
- **Monitoring**: Watch for attacks
### 1. API Key Security

```python
# ❌ Bad: hardcoded key
api_key = "sk-1234567890"

# ✅ Good: environment variable
import os
api_key = os.getenv("OPENAI_API_KEY")

# ✅ Better: use a secrets manager
from google.cloud import secretmanager

def get_secret(project_id, secret_name):
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_name}/versions/latest"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

api_key = get_secret("my-gcp-project", "openai_api_key")  # placeholder project ID
```
### 2. Input Validation

```python
# ✅ Good: validate inputs
def safe_agent_input(user_input):
    # Check length
    if len(user_input) > 10000:
        raise ValueError("Input too long")

    # Check for injection attempts
    dangerous_patterns = ['<script', 'javascript:', 'eval(']
    for pattern in dangerous_patterns:
        if pattern in user_input.lower():
            raise ValueError("Invalid input detected")

    # Sanitize
    sanitized = user_input.strip()
    return sanitized
```
### 3. Authentication

```python
# ✅ Good: require authentication
from functools import wraps

def require_auth(f):
    @wraps(f)
    def wrapper(request, *args, **kwargs):
        token = request.headers.get('Authorization')
        if not validate_token(token):  # validate_token: your token check
            return {'error': 'Unauthorized'}, 401
        return f(request, *args, **kwargs)
    return wrapper

@require_auth
def agent_endpoint(request):
    # Protected endpoint
    return agent.process(request.data)
```
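The decorator above assumes a `validate_token` helper. Here is one minimal sketch of such a helper, assuming tokens of the form `payload.signature` signed with a shared secret; the `sign_token` name and `SECRET_KEY` value are illustrative, not a real library API. Real systems typically use JWTs or a session store instead.

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice, load this from the environment.
SECRET_KEY = b"replace-with-a-real-secret"

def sign_token(payload: str) -> str:
    """Build a token: the payload plus an HMAC-SHA256 signature."""
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def validate_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    if not token or "." not in token:
        return False
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

`hmac.compare_digest` matters here: a plain `==` comparison can leak timing information that helps attackers forge signatures byte by byte.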
### 4. Rate Limiting

```python
# ✅ Good: limit requests
import time

request_times = {}

def rate_limit(user_id, max_requests=10, window=60):
    now = time.time()
    if user_id not in request_times:
        request_times[user_id] = []

    # Drop requests older than the window
    request_times[user_id] = [
        t for t in request_times[user_id] if now - t < window
    ]

    # Check limit
    if len(request_times[user_id]) >= max_requests:
        raise Exception("Rate limit exceeded")

    # Record the current request
    request_times[user_id].append(now)
```
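The sliding-window limiter above is simple, but it permits bursts right at window boundaries. A token-bucket limiter smooths traffic out; this is a sketch (the `TokenBucket` name and parameter defaults are illustrative, not from any particular library):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill continuously;
    each request spends one token."""

    def __init__(self, capacity=10, refill_rate=10 / 60):
        self.capacity = capacity          # maximum burst size
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the request may proceed, spending one token."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In production you would keep one bucket per user or IP (e.g. in a dict or in Redis) rather than a single global instance.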
## 🧪 Example

### Secure Agent Setup

```python
import os
from crewai import Agent, Task, Crew
from dotenv import load_dotenv

# Load environment variables securely
load_dotenv()

# Get API keys from the environment
openai_key = os.getenv('OPENAI_API_KEY')
if not openai_key:
    raise ValueError("OPENAI_API_KEY not set")

# Create an agent with security considerations
agent = Agent(
    role='Secure Assistant',
    goal='Help users safely',
    backstory='You are a secure AI assistant',
    max_iter=10,             # Limit iterations
    allow_delegation=False,  # Control delegation
    verbose=True
)

# Validate inputs before processing
def secure_process(user_input):
    # Validate
    if not user_input:
        return "Please provide input"
    if len(user_input) > 5000:
        return "Input too long"
    # Process safely via a Task and Crew
    task = Task(
        description=user_input,
        expected_output="A safe, helpful answer",
        agent=agent,
    )
    crew = Crew(agents=[agent], tasks=[task])
    return crew.kickoff()
```
## 🎯 Real-World Case Studies

### API Key Exposure Incident

**📋 Scenario**

A developer accidentally committed API keys to GitHub. The keys were exposed publicly, leading to unauthorized usage and unexpected costs.

**💡 Solution**

Implemented: (1) immediate key rotation, (2) environment-variable enforcement, (3) pre-commit hooks to block key commits, (4) secrets scanning in CI/CD, (5) team training on security.

**✅ Outcome**

No further key exposures. Costs controlled, team educated, and security practices improved. Automated scanning prevents future incidents.

**📚 Key Lessons**

- Never commit secrets to git
- Use environment variables
- Automate secret scanning
- Rotate keys immediately if exposed
- Train the team on security
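The "automate secret scanning" lesson can start with something as small as a regex pass over files before commit. A minimal sketch; the patterns are illustrative and far from exhaustive, and production teams typically rely on dedicated tools such as gitleaks or detect-secrets:

```python
import re

# Illustrative patterns for common key shapes; extend for your providers.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{8,}['\"]"),  # generic assignments
]

def scan_text(text):
    """Return (line_number, matched_text) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in KEY_PATTERNS:
            match = pattern.search(line)
            if match:
                findings.append((lineno, match.group(0)))
    return findings
```

Wired into a pre-commit hook or a CI step, a scanner like this fails the build whenever `scan_text` returns any findings for staged files.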
## 📝 Hands-on Task

Secure your agent:

- **Audit Secrets**: Find all API keys and secrets
- **Move to Environment**: Put secrets in `.env`
- **Add Validation**: Validate all inputs
- **Add Rate Limiting**: Prevent abuse
- **Enable Logging**: Monitor for attacks
- **Test Security**: Try to break your own agent
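The "test security" step can begin as plain assertions that known attacks are rejected. A sketch that reproduces the `safe_agent_input` rules from section 2 so it runs standalone:

```python
# Red-team your own validator with known-bad inputs.
def safe_agent_input(user_input):
    """Validation rules from section 2, inlined for a self-contained test."""
    if len(user_input) > 10000:
        raise ValueError("Input too long")
    for pattern in ['<script', 'javascript:', 'eval(']:
        if pattern in user_input.lower():
            raise ValueError("Invalid input detected")
    return user_input.strip()

ATTACKS = [
    "<SCRIPT>alert(1)</SCRIPT>",   # case-varied script tag
    "javascript:stealCookies()",   # javascript: URL scheme
    "eval(__import__('os'))",      # code-injection attempt
    "x" * 10001,                   # oversized input
]

def test_attacks_rejected():
    for attack in ATTACKS:
        try:
            safe_agent_input(attack)
            raise AssertionError(f"attack slipped through: {attack[:30]}")
        except ValueError:
            pass  # rejected, as expected

def test_benign_passes():
    assert safe_agent_input("  What is the weather?  ") == "What is the weather?"
```

Run with `pytest`, these become a regression suite: every bypass you discover later gets added to `ATTACKS` so it can never slip through again.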
## 🚨 Common Pitfalls & Solutions

### Pitfall 1: Committing Secrets

**Problem:** API keys committed to git and exposed publicly.

**Solution:** Use `.gitignore`, environment variables, and pre-commit hooks.

```gitignore
# .gitignore
.env
.env.local
*.key
secrets/
```
### Pitfall 2: No Input Validation

**Problem:** Malicious inputs can break or exploit the agent.

**Solution:** Validate and sanitize all inputs.

> **Never Trust Input**
> Always validate user input. Assume it is malicious until proven safe.
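One way to follow this rule is an allow-list: rather than blocking known-bad patterns, which attackers can often evade with encoding tricks, accept only the characters you expect. A sketch; the allowed character set and length cap are illustrative and should be tuned to your domain:

```python
import re

# Accept letters, digits, whitespace, and basic punctuation, up to 5000 chars.
ALLOWED = re.compile(r"[\w\s.,?!'\-]{1,5000}")

def is_safe(user_input: str) -> bool:
    """True only if the whole input consists of allowed characters."""
    return bool(user_input) and ALLOWED.fullmatch(user_input) is not None
```

The trade-off: an allow-list rejects some legitimate input (e.g. URLs or code snippets here), so widen it deliberately rather than falling back to a blocklist.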
### Pitfall 3: No Rate Limiting

**Problem:** The agent can be abused, causing high costs.

**Solution:** Implement rate limiting per user or IP.
## 💡 Best Practices

- **Never Commit Secrets**: Use environment variables
- **Validate Inputs**: Check everything users provide
- **Use HTTPS**: Encrypt data in transit
- **Limit Access**: Grant only the necessary permissions
- **Monitor Logs**: Watch for suspicious activity
- **Rotate Keys**: Change keys regularly
- **Keep Updated**: Update dependencies for security patches
## 🔒 Security Checklist

Before deploying:

- [ ] All secrets in environment variables
- [ ] `.env` in `.gitignore`
- [ ] Input validation implemented
- [ ] Rate limiting enabled
- [ ] HTTPS enabled
- [ ] Error messages don't leak internal details
- [ ] Logging configured
- [ ] Dependencies updated
- [ ] Security headers set
- [ ] Authentication required
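For the "error messages don't leak" and "logging configured" items, a common pattern is to log full details server-side while returning only a generic message plus a correlation ID the user can quote in a support request. A sketch; the `handle_request` wrapper is hypothetical:

```python
import logging
import uuid

logger = logging.getLogger("agent")

def handle_request(process, data):
    """Run `process(data)`; on failure, log the traceback server-side
    and return a generic error with a correlation ID."""
    try:
        return {"result": process(data)}
    except Exception:
        error_id = uuid.uuid4().hex[:8]  # ID to correlate user reports with logs
        logger.exception("request failed (error_id=%s)", error_id)
        # No stack trace, file paths, or key fragments reach the caller.
        return {"error": "Internal error", "error_id": error_id}
```

Returning raw exception text is dangerous because tracebacks can reveal file paths, library versions, or even fragments of secrets.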
## 📚 Additional Resources

- **OWASP AI & LLM Security Guide**: Security best practices for building AI applications using large language models
## 🚀 Challenge for GitHub

Create a security guide for your agent:

- Document all security measures
- Create a security checklist
- Add security tests
- Document an incident-response plan
- Share security best practices
> **Security Expert!**
> You now understand how to secure agents. This protects your users, your data, and your reputation!