
Environment Setup

This workshop is structured as a progressive, hands-on journey. You’ll start by building a simple conversational agent and incrementally add capabilities (tools, memory, multi-agent orchestration, safety guardrails, and production deployment) until you have a fully functional SupportBot running on AWS. Each module builds directly on the previous one, so by the end you’ll understand how every piece fits together.

| Module | Topic | Duration |
| --- | --- | --- |
| 0 | Environment Setup & Intro | 20 min |
| 1 | Your First Agent with Strands | 25 min |
| 2 | Custom Tools & MCP Servers | 35 min |
| 3 | Memory & Context Management | 30 min |
| 4 | Multi-Agent Patterns | 30 min |
| 5 | Evals, Safety & Observability | 30 min |
| 6 | Deploy to AWS AgentCore | 25 min |

Before the workshop, make sure you have the following installed:

  • Python 3.12+: Download Python
  • Git: For cloning the workshop repository
  • A code editor (VS Code recommended)
  • One of the following model providers:
    • AWS credentials + Bedrock access (for Claude on Bedrock), or
    • OpenAI API key (OPENAI_API_KEY), or
    • OpenRouter API key (OPENROUTER_API_KEY), or
    • Any other provider supported by Strands Agents SDK
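A quick way to confirm your interpreter meets the version requirement (a small standalone check, not part of the workshop code):

```python
import sys

# The workshop requires Python 3.12 or newer.
version = ".".join(map(str, sys.version_info[:3]))
if sys.version_info >= (3, 12):
    print(f"Python {version} - OK")
else:
    print(f"Python {version} - please upgrade to 3.12+")
```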
  1. Clone the workshop repository

    ```shell
    git clone <workshop-repo-url>
    cd agentic-ai-workshops
    ```
  2. Create a Python virtual environment

    ```shell
    cd workshop
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    ```
  3. Install dependencies

    ```shell
    pip install -r requirements.txt
    ```
  4. Configure your model

    Open shared/model.py and uncomment one model option. This is the single place that controls which LLM is used across all modules.

    | Option | Provider | What you need |
    | --- | --- | --- |
    | 1 | AWS Bedrock (Claude Sonnet — default) | AWS credentials configured |
    | 2 | AWS Bedrock (Claude Haiku — faster/cheaper) | AWS credentials configured |
    | 3 | OpenAI (GPT-4o, GPT-4o-mini) | `export OPENAI_API_KEY="..."` |
    | 4 | LiteLLM (Anthropic, Ollama, Cohere, etc.) | Depends on provider |
    | 5 | OpenRouter via LiteLLM | `export OPENROUTER_API_KEY="..."` |

    For example, to use AWS Bedrock (default), uncomment Option 1:

    ```python
    from strands.models.bedrock import BedrockModel

    model = BedrockModel(
        max_tokens=1000,
        temperature=0.7,
    )
    ```

    Or to use OpenAI, uncomment Option 3:

    ```python
    from strands.models.openai import OpenAIModel

    model = OpenAIModel(
        model_id="gpt-4o",
        params={"max_tokens": 1000, "temperature": 0.7},
    )
    ```

    Change it here once, and every module picks it up automatically.
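    As a sketch of how that works (assuming `shared/model.py` exposes the configured instance as `model`, and that the Strands `Agent` constructor accepts a `model` argument, as in the SDK's examples), a module can simply do:

    ```python
    # Hypothetical module code: reuse the single configured model.
    from shared.model import model  # whichever option you uncommented
    from strands import Agent

    agent = Agent(model=model)
    response = agent("Hello!")  # uses the provider set in shared/model.py
    ```

    Swapping providers then never requires touching module code, only `shared/model.py`.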

  5. AWS setup (only if using Bedrock or for Module 6 deployment)

    Configure credentials and enable Bedrock model access:

    ```shell
    aws configure
    # Enter your AWS Access Key ID, Secret Access Key, and region (us-east-1)
    ```

    Go to the Amazon Bedrock console in us-east-1 and enable access to Anthropic models. Anthropic models require submitting brief use case details in the console; approval is automatic and typically takes a few seconds. Bedrock uses serverless cross-region inference profiles, so once your use case is approved, inference is available across regions.
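    You can sanity-check credentials and model access before the workshop with a short boto3 script (a sketch; assumes the default credential chain and us-east-1):

    ```python
    import boto3

    # Confirm your credentials resolve to an AWS identity.
    identity = boto3.client("sts").get_caller_identity()
    print("Authenticated as:", identity["Arn"])

    # Confirm Anthropic models are visible in Bedrock (us-east-1).
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    response = bedrock.list_foundation_models(byProvider="Anthropic")
    for summary in response["modelSummaries"]:
        print(summary["modelId"])
    ```

    If the second call fails with an access error, model access has not been enabled in the Bedrock console yet.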

  6. Verify your setup

    ```shell
    python module_00_setup/verify_setup.py
    ```

    You should see checks passing. AWS-specific checks (credentials, Bedrock) will show as [SKIP] if you’re using a non-AWS provider — that’s fine for Modules 1–5.
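The dependency portion of those checks can be approximated with a few lines of standard-library Python (a sketch, not the actual `verify_setup.py`; the import names are assumed from the package list below):

```python
import importlib.util

# Rough dependency check: each name is the assumed import name
# for the corresponding PyPI package from requirements.txt.
required = ["strands", "litellm", "mcp", "fastmcp", "boto3", "rich"]
for name in required:
    status = "PASS" if importlib.util.find_spec(name) else "FAIL"
    print(f"[{status}] {name}")
```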

The key packages installed from requirements.txt:

| Package | Purpose |
| --- | --- |
| strands-agents | Core agent framework |
| strands-agents-tools | Pre-built tool collection |
| litellm | Unified LLM gateway (OpenRouter, Anthropic, Ollama, etc.) |
| mcp | Model Context Protocol client |
| fastmcp | Fast MCP server builder |
| boto3 | AWS SDK for Python (needed for Bedrock & Module 6) |
| opentelemetry-* | Observability and tracing |
| rich | Pretty terminal output |
Repository layout:

```
workshop/
├── module_00_setup/        # This setup module
├── module_01_first_agent/  # Basic agent
├── module_02_tools_mcp/    # Tools + MCP server
├── module_03_memory/       # Memory patterns
├── module_04_multi_agent/  # Multi-agent system
├── module_05_evals/        # Evals + safety
├── module_06_deploy/       # AgentCore deployment
├── shared/                 # Shared config & data
│   ├── model.py            # ⬅ Configure your LLM here (one place for all modules)
│   └── data.py             # Mock product/order data
├── requirements.txt
└── pyproject.toml
```