# Environment Setup

## Workshop Modules

This workshop is structured as a progressive, hands-on journey. You'll start by building a simple conversational agent and incrementally add capabilities (tools, memory, multi-agent orchestration, safety guardrails, and production deployment) until you have a fully functional SupportBot running on AWS. Each module builds directly on the previous one, so by the end you'll understand how every piece fits together.
| Module | Topic | Duration |
|---|---|---|
| 0 | Environment Setup & Intro | 20 min |
| 1 | Your First Agent with Strands | 25 min |
| 2 | Custom Tools & MCP Servers | 35 min |
| 3 | Memory & Context Management | 30 min |
| 4 | Multi-Agent Patterns | 30 min |
| 5 | Evals, Safety & Observability | 30 min |
| 6 | Deploy to AWS AgentCore | 25 min |
## Prerequisites

Before the workshop, make sure you have the following installed:

- **Python 3.12+** (Download Python)
- **Git**: for cloning the workshop repository
- A code editor (VS Code recommended)
- One of the following model providers:
  - AWS credentials + Bedrock access (for Claude on Bedrock), or
  - OpenAI API key (`OPENAI_API_KEY`), or
  - OpenRouter API key (`OPENROUTER_API_KEY`), or
  - Any other provider supported by the Strands Agents SDK
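To confirm your interpreter meets the minimum before the workshop starts, a small stdlib-only check like the following works (the `(3, 12)` floor comes from the prerequisite above; the function name is just for illustration):

```python
import sys

def meets_minimum(min_version=(3, 12)):
    """Return True when the running interpreter is at least min_version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    status = "OK" if meets_minimum() else "please upgrade to 3.12+"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {status}")
```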
## Setup Steps

1. **Clone the workshop repository**

   ```bash
   git clone <workshop-repo-url>
   cd agentic-ai-workshops
   ```

2. **Create a Python virtual environment**

   ```bash
   cd workshop
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```
4. **Configure your model**

   Open `shared/model.py` and uncomment one model option. This is the single place that controls which LLM is used across all modules.

   | Option | Provider | What you need |
   |---|---|---|
   | 1 | AWS Bedrock (Claude Sonnet, default) | AWS credentials configured |
   | 2 | AWS Bedrock (Claude Haiku, faster/cheaper) | AWS credentials configured |
   | 3 | OpenAI (GPT-4o, GPT-4o-mini) | `export OPENAI_API_KEY="..."` |
   | 4 | LiteLLM (Anthropic, Ollama, Cohere, etc.) | Depends on provider |
   | 5 | OpenRouter via LiteLLM | `export OPENROUTER_API_KEY="..."` |

   For example, to use AWS Bedrock (default), uncomment Option 1:

   ```python
   from strands.models.bedrock import BedrockModel

   model = BedrockModel(
       max_tokens=1000,
       temperature=0.7,
   )
   ```

   Or to use OpenAI, uncomment Option 3:

   ```python
   from strands.models.openai import OpenAIModel

   model = OpenAIModel(
       model_id="gpt-4o",
       params={"max_tokens": 1000, "temperature": 0.7},
   )
   ```

   Change it here once, and every module picks it up automatically.
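   The OpenRouter route (Option 5) is not shown here; a hedged sketch of what it might look like, assuming the Strands SDK's LiteLLM wrapper lives at `strands.models.litellm` and that LiteLLM's `openrouter/<vendor>/<model>` naming convention applies (check `shared/model.py` for the exact names the workshop uses):

   ```python
   # Assumed import path; verify against shared/model.py in the repo.
   from strands.models.litellm import LiteLLMModel

   # LiteLLM routes "openrouter/..." model IDs through OpenRouter,
   # reading OPENROUTER_API_KEY from the environment.
   model = LiteLLMModel(
       model_id="openrouter/anthropic/claude-3.5-sonnet",
       params={"max_tokens": 1000, "temperature": 0.7},
   )
   ```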
5. **AWS setup** (only if using Bedrock or for Module 6 deployment)

   Configure credentials and enable Bedrock model access:

   ```bash
   aws configure
   # Enter your AWS Access Key ID, Secret Access Key, and region (us-east-1)
   ```

   Go to the Amazon Bedrock console in `us-east-1` and enable access to Anthropic models. Anthropic models require submitting brief use case details in the console; approval is automatic and typically takes a few seconds. Bedrock uses serverless cross-region inference profiles, so once your use case is approved, inference is available across regions.

   No AWS setup is needed for Modules 1–5. Just set the appropriate API key environment variable for your chosen provider and configure `shared/model.py`.
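   If you want a quick sanity check that credentials are discoverable at all before opening the console, a stdlib-only heuristic like this covers the two common locations (the helper name is hypothetical, and it only checks presence, not whether the keys are valid):

   ```python
   import os
   from pathlib import Path

   def aws_credentials_present(env=None, credentials_file=None):
       """Heuristic: env vars are set, or a shared credentials file exists."""
       env = os.environ if env is None else env
       if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
           return True
       path = credentials_file or (Path.home() / ".aws" / "credentials")
       return Path(path).exists()

   if __name__ == "__main__":
       print("AWS credentials found" if aws_credentials_present()
             else "No AWS credentials detected (fine for Modules 1-5)")
   ```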
6. **Verify your setup**

   ```bash
   python module_00_setup/verify_setup.py
   ```

   You should see checks passing. AWS-specific checks (credentials, Bedrock) will show as `[SKIP]` if you're using a non-AWS provider; that's fine for Modules 1–5.
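If the verifier script is unavailable, the core of what it checks can be approximated with a short import probe. The module names below are assumptions based on the package list in the next section; adjust them to match `verify_setup.py`:

```python
import importlib.util

# Assumed importable module names for the workshop's dependencies.
REQUIRED_MODULES = ["strands", "litellm", "mcp", "boto3", "rich"]

def missing_modules(names):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_modules(REQUIRED_MODULES)
    if missing:
        print("[FAIL] missing:", ", ".join(missing))
    else:
        print("[PASS] all workshop packages importable")
```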
## What's Installed

| Package | Purpose |
|---|---|
| `strands-agents` | Core agent framework |
| `strands-agents-tools` | Pre-built tool collection |
| `litellm` | Unified LLM gateway (OpenRouter, Anthropic, Ollama, etc.) |
| `mcp` | Model Context Protocol client |
| `fastmcp` | Fast MCP server builder |
| `boto3` | AWS SDK for Python (needed for Bedrock & Module 6) |
| `opentelemetry-*` | Observability and tracing |
| `rich` | Pretty terminal output |
## Project Structure

```
workshop/
├── module_00_setup/          # This setup module
├── module_01_first_agent/    # Basic agent
├── module_02_tools_mcp/      # Tools + MCP server
├── module_03_memory/         # Memory patterns
├── module_04_multi_agent/    # Multi-agent system
├── module_05_evals/          # Evals + safety
├── module_06_deploy/         # AgentCore deployment
├── shared/                   # Shared config & data
│   ├── model.py              # ⬅ Configure your LLM here (one place for all modules)
│   └── data.py               # Mock product/order data
├── requirements.txt
└── pyproject.toml
```