How to Structure Your Python Projects for AWS Lambda, APIs, and CLI Tools


When a Python project is just one file, structure doesn’t matter. But the moment you start adding a second helper module, environment config, or a Dockerfile, things get messy fast if you don’t have a clear layout from the start. A good Python project structure makes it easy to find things, test things, and deploy things — and it doesn’t need to be complicated.

This post shows three practical layouts I use for the most common types of Python projects: AWS Lambda functions, REST APIs (Flask/FastAPI), and CLI tools. Each one includes the directory tree, what each file does, and sample code so you can set one up from scratch.

1. AWS Lambda Project

Lambda functions tend to grow from a single handler file into several modules once you add logging, validation, and external API calls. Here’s a structure that keeps things organized without overengineering it.

Directory Layout

my-lambda-project/
├── src/
│   ├── __init__.py
│   ├── handler.py            # Lambda entry point
│   ├── services.py           # Business logic / external API calls
│   ├── config.py             # Environment variables
│   ├── utils.py              # Shared helper functions
│   ├── validator.py          # Input validation
│   └── exceptions.py         # Custom exceptions
├── tests/
│   ├── __init__.py
│   └── test_handler.py
├── requirements.txt
├── Dockerfile                # If deploying as container image
└── README.md

Key Files Explained

handler.py — This is the Lambda entry point. Keep it thin — it should receive the event, validate input, call your service layer, and return a response. Don’t put business logic here.

import json
from src.services import process_order
from src.validator import validate_order_event
from src.exceptions import ValidationError


def lambda_handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")  # "body" can be None, not just missing
        validate_order_event(body)
        result = process_order(body)
        return {
            "statusCode": 200,
            "body": json.dumps(result),
        }
    except ValidationError as e:
        return {"statusCode": 400, "body": json.dumps({"error": str(e)})}
    except Exception:
        # Log the real exception here before returning a generic 500
        return {"statusCode": 500, "body": json.dumps({"error": "Internal error"})}

services.py — Where the actual work happens. Database queries, API calls to third-party services, data transformations. This is the layer you test most.

import boto3
from src.config import TABLE_NAME


def process_order(order_data):
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(TABLE_NAME)
    table.put_item(Item=order_data)
    return {"message": "Order created", "order_id": order_data["order_id"]}

config.py — Reads environment variables in one place. This way you’re not scattering os.environ calls across every file.

import os

TABLE_NAME = os.environ.get("TABLE_NAME", "orders")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
REGION = os.environ.get("AWS_REGION", "ap-southeast-1")
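The handler above also imports validate_order_event and ValidationError, which haven't been shown. Here's a minimal sketch of validator.py and exceptions.py — the required field names are just examples, so match them to your actual event schema:

```python
# exceptions.py -- custom error types for the project
class ValidationError(Exception):
    """Raised when an incoming event fails validation."""


# validator.py -- check required fields before hitting the service layer
REQUIRED_FIELDS = ("order_id", "item", "quantity")  # example fields only


def validate_order_event(body):
    missing = [field for field in REQUIRED_FIELDS if field not in body]
    if missing:
        raise ValidationError(f"Missing required fields: {', '.join(missing)}")
```

Raising a custom exception instead of returning an error dict keeps the 400-vs-500 decision in the handler, where it belongs.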

If you need to package dependencies as a layer, check out How to Build and Deploy Python Libraries for AWS Lambda Layers.

2. REST API Project (Flask / FastAPI)

For APIs, you want a clear separation between routes (what endpoints exist), controllers (what they do), and services (how they talk to databases or external APIs).

Directory Layout

my-api-project/
├── src/
│   ├── __init__.py
│   ├── main.py               # App entry point
│   ├── routes.py             # Route definitions
│   ├── controllers.py        # Request handling logic
│   ├── services.py           # Database / external API calls
│   ├── models.py             # Pydantic models or DB schemas
│   ├── config.py             # Settings and env vars
│   ├── utils.py
│   └── exceptions.py
├── tests/
│   ├── __init__.py
│   ├── test_routes.py
│   └── test_services.py
├── requirements.txt
├── .env                      # Local env vars (don't commit this)
├── Dockerfile
└── README.md

Sample Files (FastAPI)

main.py — Creates the app and includes routes. Run it locally with uvicorn src.main:app --reload:

from fastapi import FastAPI
from src.routes import router

app = FastAPI(title="My API")
app.include_router(router)

routes.py — Defines endpoints. Keeps route definitions separate from the logic:

from fastapi import APIRouter, HTTPException
from src.controllers import get_user, create_user
from src.models import UserCreate, UserResponse

router = APIRouter(prefix="/users", tags=["users"])


@router.get("/{user_id}", response_model=UserResponse)
async def read_user(user_id: int):
    user = get_user(user_id)
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user


@router.post("/", response_model=UserResponse, status_code=201)
async def add_user(user: UserCreate):
    return create_user(user)

models.py — Pydantic models for request/response validation:

from pydantic import BaseModel, EmailStr


class UserCreate(BaseModel):
    name: str
    email: EmailStr


class UserResponse(BaseModel):
    id: int
    name: str
    email: str

controllers.py — The actual logic behind each endpoint. Calls services, does transformations:

from src.services import db_get_user, db_create_user
from src.models import UserCreate


def get_user(user_id: int):
    return db_get_user(user_id)


def create_user(user: UserCreate):
    return db_create_user(user.model_dump())
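controllers.py imports db_get_user and db_create_user from services.py, which isn't shown above. Here's a minimal sketch with an in-memory dict standing in for a real database — swap it for your actual DB client:

```python
# services.py -- data access layer; a module-level dict stands in for a real DB
_users = {}
_next_id = 1


def db_get_user(user_id):
    """Return the stored user dict, or None if it doesn't exist."""
    return _users.get(user_id)


def db_create_user(data):
    """Assign an id, store the user, and return the stored record."""
    global _next_id
    user = {"id": _next_id, **data}
    _users[user["id"]] = user
    _next_id += 1
    return user
```

Because the controllers only know these two function names, replacing the dict with DynamoDB or Postgres later doesn't touch the routes at all.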

The split between routes, controllers, and services might feel like overkill for small apps. But once you have 10+ endpoints, you’ll be glad the routing logic isn’t mixed with database calls.

3. CLI Tool Project

CLI tools have a different entry point — usually argparse or click for argument parsing — but the rest follows the same pattern.

Directory Layout

my-cli-tool/
├── src/
│   ├── __init__.py
│   ├── main.py               # Entry point
│   ├── cli.py                # Argument parsing / click groups
│   ├── commands.py           # Command implementations
│   ├── utils.py
│   ├── config.py
│   └── exceptions.py
├── tests/
│   ├── __init__.py
│   └── test_commands.py
├── pyproject.toml            # Modern packaging config
├── requirements.txt
└── README.md

Sample Files (using click)

main.py — Simple entry point:

from src.cli import cli

if __name__ == "__main__":
    cli()

cli.py — Defines the CLI group and registers commands:

import click
from src.commands import deploy_cmd, status_cmd


@click.group()
def cli():
    """My deployment tool."""
    pass


cli.add_command(deploy_cmd)
cli.add_command(status_cmd)

commands.py — Each command’s logic lives here:

import click
from src.utils import run_deploy, check_status


@click.command("deploy")
@click.option("--env", required=True, help="Target environment (dev, staging, prod)")
@click.option("--dry-run", is_flag=True, help="Preview without executing")
def deploy_cmd(env, dry_run):
    """Deploy to the specified environment."""
    if dry_run:
        click.echo(f"[DRY RUN] Would deploy to {env}")
        return
    run_deploy(env)
    click.echo(f"Deployed to {env}")


@click.command("status")
@click.option("--env", required=True)
def status_cmd(env):
    """Check deployment status."""
    status = check_status(env)
    click.echo(f"Status for {env}: {status}")

To make the CLI installable as a command (so users can run mytool deploy --env prod instead of python src/main.py deploy --env prod), add a [project.scripts] section to pyproject.toml:

[project]
name = "my-cli-tool"
version = "0.1.0"

# Explicit, because a top-level package literally named "src"
# confuses setuptools' automatic package discovery
[tool.setuptools]
packages = ["src"]

[project.scripts]
mytool = "src.main:cli"

Then install it locally with pip install -e . and the mytool command becomes available.

Common Patterns Across All Three

No matter which type of project, a few things stay the same:

Keep config in one file. Every project has environment-specific values. Put them all in config.py so they’re easy to find and change. Don’t scatter os.environ.get() calls across your codebase.

Separate entry point from logic. Your handler, main, or routes file should be thin — just wiring things together. The actual work happens in services, controllers, or commands. This makes testing way easier because you can test the logic without invoking the full entry point.
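To see why this pays off, here's a hypothetical tests/test_services.py. The service function is inlined so the example is self-contained — in the real layout it would be imported with from src.services import calculate_total:

```python
# In the real project: from src.services import calculate_total
def calculate_total(items):
    """Example service-layer function: sum quantity * price across items."""
    return sum(item["quantity"] * item["price"] for item in items)


def test_calculate_total():
    items = [{"quantity": 2, "price": 5.0}, {"quantity": 1, "price": 3.0}]
    assert calculate_total(items) == 13.0


def test_calculate_total_empty():
    assert calculate_total([]) == 0
```

Run it with pytest tests/ — no Lambda event, app server, or CLI invocation needed, because the logic never touched the entry point.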

Add __init__.py files. These turn directories into Python packages. They can be empty — just need to exist so from src.services import ... works.

Use a tests/ directory. Even a single test file is better than none. Mirror the structure of src/ — if you have src/services.py, create tests/test_services.py.

Don’t commit secrets. Add .env to .gitignore. Use environment variables or AWS Secrets Manager for credentials. If you need cross-account secret access, see How to Access AWS Secrets Manager from Another Account.

Adding Code Quality Tools

Once you have a project structure, add linting and formatting. Black reads its settings from pyproject.toml, but flake8 doesn't support pyproject.toml natively (you'd need the flake8-pyproject plugin), so keep its config in a .flake8 or setup.cfg file instead. In pyproject.toml:

[tool.black]
line-length = 88

And in .flake8:

[flake8]
max-line-length = 88
extend-ignore = E203, W503

For a full walkthrough on setting up linting, see How to Use Flake8 and Black for Python Code Quality and Style Consistency. And to run these automatically before every commit, How to Install and Use Pre-commit on Ubuntu WSL 2 covers the pre-commit hook setup.

Quick Reference

File                    Purpose
handler.py / main.py    Entry point — keep it thin
routes.py               API endpoint definitions (APIs only)
controllers.py          Request handling logic (APIs only)
services.py             Business logic, DB calls, external APIs
models.py               Pydantic models or DB schemas
config.py               Environment variables in one place
utils.py                Shared helper functions
validator.py            Input validation (Lambda)
exceptions.py           Custom error classes
cli.py / commands.py    CLI argument parsing and command logic

Conclusion

You don’t need a perfect structure from day one, but having a basic pattern to follow prevents the “everything in one file” problem that makes projects painful to maintain later. Pick the layout that matches your project type, start building, and adjust as the project grows.

If you’re managing Python versions across projects, How to Install and Manage Python Versions on WSL Ubuntu covers setting up pyenv so each project can use the right Python version.