Hexagonal Architecture Meets Serverless: Designing Event-Driven Systems with Ports and Adapters
You’ve probably heard about hexagonal architecture. Maybe you’ve used it in a traditional web app. But what happens when you try to apply it to serverless functions? That’s where things get interesting.
Most teams I work with are moving to serverless. AWS Lambda, Azure Functions, Google Cloud Functions - they’re all over the place. The problem is that serverless functions can turn into a mess of vendor-specific code mixed with business logic. You end up with functions that are hard to test, hard to move between clouds, and hard to maintain.
Hexagonal architecture can help. But you need to adapt it for the serverless world. Let me show you how.
What is Hexagonal Architecture Again?
Hexagonal architecture, also called ports and adapters, separates your business logic from everything else. Think of it like this:
- Core: Your business logic lives here. It doesn’t know about databases, web frameworks, or cloud services.
- Ports: Interfaces that define what your core needs. Like “I need to save an order” or “I need to send an email.”
- Adapters: The actual implementations. A database adapter, an email service adapter, etc.
The core talks to the outside world through ports. Adapters plug into those ports. This keeps your business logic clean and testable.
In a traditional app, you might have a web controller that calls a service, which calls a repository. The service is your core, the controller is an input adapter, and the repository is an output adapter.
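To make that concrete, here's a minimal sketch of the pattern (all names illustrative): a core service that depends only on a port interface, and an in-memory adapter plugged into that port.

```typescript
// Port: what the core needs, expressed as an interface
interface OrderStore {
  save(orderId: string): Promise<void>;
}

// Core: business logic that depends only on the port
class CheckoutService {
  constructor(private store: OrderStore) {}

  async placeOrder(orderId: string): Promise<string> {
    // The core has no idea whether this is Postgres, DynamoDB, or memory
    await this.store.save(orderId);
    return `order ${orderId} placed`;
  }
}

// Adapter: one concrete implementation of the port
class InMemoryOrderStore implements OrderStore {
  readonly saved: string[] = [];
  async save(orderId: string): Promise<void> {
    this.saved.push(orderId);
  }
}

// Wiring: the adapter plugs into the port
const store = new InMemoryOrderStore();
const checkout = new CheckoutService(store);
```

Swapping `InMemoryOrderStore` for a real database adapter changes nothing in `CheckoutService` — that's the whole trick.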
Why Serverless Makes This Harder
Serverless functions are different. They’re event-driven, stateless, and ephemeral. This creates some problems:
Cold starts: Your function wakes up, loads dependencies, and then processes an event. If you’re not careful, you’re loading the entire world every time.
Vendor lock-in: AWS Lambda handlers look different from Azure Functions. Your business logic gets mixed up with cloud-specific code.
Integration sprawl: Each function might need to talk to databases, message queues, APIs, and monitoring systems. Without good boundaries, this becomes a mess.
Testing complexity: How do you test a function that’s tightly coupled to AWS SDKs or Azure services?
The result? Functions that are hard to test, hard to move between clouds, and hard to reason about.
Applying Hexagonal Principles to Serverless
Here’s how to make hexagonal architecture work with serverless functions:
Ports as Event Abstractions
Instead of thinking about HTTP requests, think about events. Your core doesn’t care if the event came from SQS, Kafka, or an HTTP trigger. It just knows “something happened that I need to process.”
// Input port - what events can trigger this function?
interface OrderEvent {
  orderId: string;
  customerId: string;
  items: OrderItem[];
  timestamp: Date;
}

interface OrderEventHandler {
  handle(event: OrderEvent): Promise<void>;
}
Adapters for Function I/O
Your adapters handle the cloud-specific stuff. An SQS adapter converts SQS messages into domain events. A DynamoDB adapter converts domain objects into database records.
// SQS adapter - hands domain events to the core service
class SQSOrderEventAdapter implements OrderEventHandler {
  constructor(
    private orderService: OrderService,
    private logger: Logger
  ) {}

  async handle(event: OrderEvent): Promise<void> {
    try {
      await this.orderService.processOrder(event);
    } catch (error) {
      this.logger.error('Failed to process order', { orderId: event.orderId, error });
      throw error;
    }
  }
}
Framework-Agnostic Domain Logic
Your core business logic stays the same whether you’re running on AWS, Azure, or Google Cloud. The adapters handle the differences.
// Core domain service - no cloud dependencies
class OrderService {
  constructor(
    private orderRepository: OrderRepository,
    private inventoryService: InventoryService,
    private notificationService: NotificationService
  ) {}

  async processOrder(event: OrderEvent): Promise<void> {
    // Business logic here - same everywhere
    const order = await this.orderRepository.findById(event.orderId);
    if (!order) {
      throw new Error(`Order ${event.orderId} not found`);
    }

    const isAvailable = await this.inventoryService.checkAvailability(event.items);
    if (!isAvailable) {
      await this.notificationService.sendOutOfStockNotification(event.customerId);
      return;
    }

    await this.orderRepository.updateStatus(event.orderId, 'confirmed');
    await this.notificationService.sendConfirmation(event.customerId, event.orderId);
  }
}
Implementation Walkthrough
Let’s build a real example. We’ll create an order processing microservice that gets triggered by SQS events.
Step 1: Define the Domain
First, let’s create our domain entities and services:
// Domain entities
export class Order {
  constructor(
    public readonly id: string,
    public readonly customerId: string,
    public readonly items: OrderItem[],
    public readonly status: OrderStatus,
    public readonly createdAt: Date
  ) {}

  get totalAmount(): number {
    return this.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

export class OrderItem {
  constructor(
    public readonly productId: string,
    public readonly quantity: number,
    public readonly price: number
  ) {}
}

export enum OrderStatus {
  PENDING = 'pending',
  CONFIRMED = 'confirmed',
  SHIPPED = 'shipped',
  CANCELLED = 'cancelled'
}

// Domain errors
export class OrderNotFoundError extends Error {
  constructor(orderId: string) {
    super(`Order ${orderId} not found`);
  }
}

export class InvalidOrderStatusError extends Error {
  constructor(orderId: string, status: OrderStatus) {
    super(`Order ${orderId} is ${status}, expected ${OrderStatus.PENDING}`);
  }
}

// Domain service
export class OrderProcessingService {
  constructor(
    private orderRepository: OrderRepository,
    private inventoryService: InventoryService,
    private paymentService: PaymentService,
    private notificationService: NotificationService
  ) {}

  async processOrder(orderId: string): Promise<void> {
    const order = await this.orderRepository.findById(orderId);
    if (!order) {
      throw new OrderNotFoundError(orderId);
    }
    if (order.status !== OrderStatus.PENDING) {
      throw new InvalidOrderStatusError(orderId, order.status);
    }

    // Check inventory
    const inventoryCheck = await this.inventoryService.reserveItems(order.items);
    if (!inventoryCheck.success) {
      await this.orderRepository.updateStatus(orderId, OrderStatus.CANCELLED);
      await this.notificationService.sendOutOfStockNotification(order.customerId, inventoryCheck.unavailableItems);
      return;
    }

    // Process payment
    const paymentResult = await this.paymentService.chargeCustomer(order.customerId, order.totalAmount);
    if (!paymentResult.success) {
      await this.inventoryService.releaseItems(order.items);
      await this.orderRepository.updateStatus(orderId, OrderStatus.CANCELLED);
      await this.notificationService.sendPaymentFailedNotification(order.customerId);
      return;
    }

    // Confirm order
    await this.orderRepository.updateStatus(orderId, OrderStatus.CONFIRMED);
    await this.notificationService.sendOrderConfirmation(order.customerId, orderId);
  }
}
Step 2: Define the Ports
Now let’s create the interfaces that our core will use:
// Input port - what events can trigger order processing?
export interface OrderEvent {
  orderId: string;
  timestamp: Date;
  source: string;
}

export interface OrderEventHandler {
  handle(event: OrderEvent): Promise<void>;
}

// Result types returned by the output ports
export interface InventoryResult {
  success: boolean;
  unavailableItems: OrderItem[];
}

export interface PaymentResult {
  success: boolean;
}

// Output ports - what external services does our core need?
export interface OrderRepository {
  findById(id: string): Promise<Order | null>;
  updateStatus(id: string, status: OrderStatus): Promise<void>;
  save(order: Order): Promise<void>;
}

export interface InventoryService {
  reserveItems(items: OrderItem[]): Promise<InventoryResult>;
  releaseItems(items: OrderItem[]): Promise<void>;
}

export interface PaymentService {
  chargeCustomer(customerId: string, amount: number): Promise<PaymentResult>;
}

export interface NotificationService {
  sendOrderConfirmation(customerId: string, orderId: string): Promise<void>;
  sendOutOfStockNotification(customerId: string, items: OrderItem[]): Promise<void>;
  sendPaymentFailedNotification(customerId: string): Promise<void>;
}
Step 3: Create the Adapters
Here’s where we handle the cloud-specific stuff:
// SQS input adapter
export class SQSOrderEventAdapter implements OrderEventHandler {
  constructor(
    private orderProcessingService: OrderProcessingService,
    private logger: Logger
  ) {}

  async handle(event: OrderEvent): Promise<void> {
    this.logger.info('Processing order event', { orderId: event.orderId });
    try {
      await this.orderProcessingService.processOrder(event.orderId);
      this.logger.info('Order processed successfully', { orderId: event.orderId });
    } catch (error) {
      this.logger.error('Failed to process order', {
        orderId: event.orderId,
        // In TypeScript the caught value is unknown, so narrow before reading .message
        error: error instanceof Error ? error.message : String(error)
      });
      throw error;
    }
  }
}
// DynamoDB output adapter
import {
  DynamoDBClient,
  GetItemCommand,
  PutItemCommand,
  UpdateItemCommand
} from '@aws-sdk/client-dynamodb';

export class DynamoDBOrderRepository implements OrderRepository {
  constructor(
    private dynamoClient: DynamoDBClient,
    private tableName: string
  ) {}

  async findById(id: string): Promise<Order | null> {
    const result = await this.dynamoClient.send(new GetItemCommand({
      TableName: this.tableName,
      Key: { id: { S: id } }
    }));
    if (!result.Item) {
      return null;
    }
    return this.mapToOrder(result.Item);
  }

  async save(order: Order): Promise<void> {
    await this.dynamoClient.send(new PutItemCommand({
      TableName: this.tableName,
      Item: {
        id: { S: order.id },
        customerId: { S: order.customerId },
        items: { S: JSON.stringify(order.items) },
        status: { S: order.status },
        createdAt: { S: order.createdAt.toISOString() }
      }
    }));
  }

  async updateStatus(id: string, status: OrderStatus): Promise<void> {
    await this.dynamoClient.send(new UpdateItemCommand({
      TableName: this.tableName,
      Key: { id: { S: id } },
      UpdateExpression: 'SET #status = :status',
      ExpressionAttributeNames: { '#status': 'status' },
      ExpressionAttributeValues: { ':status': { S: status } }
    }));
  }

  private mapToOrder(item: any): Order {
    // Map DynamoDB item to domain object
    return new Order(
      item.id.S,
      item.customerId.S,
      JSON.parse(item.items.S),
      item.status.S as OrderStatus,
      new Date(item.createdAt.S)
    );
  }
}
// AWS SNS notification adapter
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

export class SNSNotificationService implements NotificationService {
  constructor(
    private snsClient: SNSClient,
    private topicArn: string
  ) {}

  private async publish(message: Record<string, unknown>): Promise<void> {
    await this.snsClient.send(new PublishCommand({
      TopicArn: this.topicArn,
      Message: JSON.stringify(message)
    }));
  }

  async sendOrderConfirmation(customerId: string, orderId: string): Promise<void> {
    await this.publish({
      type: 'order_confirmation',
      customerId,
      orderId,
      timestamp: new Date().toISOString()
    });
  }

  async sendOutOfStockNotification(customerId: string, items: OrderItem[]): Promise<void> {
    await this.publish({
      type: 'out_of_stock',
      customerId,
      items,
      timestamp: new Date().toISOString()
    });
  }

  async sendPaymentFailedNotification(customerId: string): Promise<void> {
    await this.publish({
      type: 'payment_failed',
      customerId,
      timestamp: new Date().toISOString()
    });
  }
}
Step 4: Wire Everything Together
Now let’s create the Lambda function that ties it all together:
// Lambda function handler
export class OrderProcessingFunction {
  private orderProcessingService: OrderProcessingService;
  private eventHandler: OrderEventHandler;

  constructor() {
    // Initialize dependencies once per container, not once per invocation
    const dynamoClient = new DynamoDBClient({});
    const snsClient = new SNSClient({});
    const logger = new ConsoleLogger();

    // Create adapters
    const orderRepository = new DynamoDBOrderRepository(dynamoClient, process.env.ORDERS_TABLE!);
    const inventoryService = new AWSInventoryService();
    const paymentService = new AWSPaymentService();
    const notificationService = new SNSNotificationService(snsClient, process.env.NOTIFICATIONS_TOPIC!);

    // Create core service
    this.orderProcessingService = new OrderProcessingService(
      orderRepository,
      inventoryService,
      paymentService,
      notificationService
    );

    // Create event handler
    this.eventHandler = new SQSOrderEventAdapter(this.orderProcessingService, logger);
  }

  async handle(event: SQSEvent): Promise<void> {
    for (const record of event.Records) {
      // JSON.parse leaves timestamp as a string, so revive it into a Date
      const parsed = JSON.parse(record.body);
      const orderEvent: OrderEvent = { ...parsed, timestamp: new Date(parsed.timestamp) };
      await this.eventHandler.handle(orderEvent);
    }
  }
}

// Export the handler (construction happens at module scope, so it survives warm invocations)
const orderProcessingFunction = new OrderProcessingFunction();
export const handler = orderProcessingFunction.handle.bind(orderProcessingFunction);
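One caveat with the loop above: if any record throws, Lambda retries the entire batch, including records that already succeeded. SQS event source mappings support reporting partial batch failures (the `ReportBatchItemFailures` setting), where the handler returns only the failed message IDs. Here's a sketch, with minimal local types standing in for the `aws-lambda` ones and the processing function injected for testability:

```typescript
// Minimal stand-ins for the aws-lambda types
interface SQSRecord { messageId: string; body: string; }
interface SQSEvent { Records: SQSRecord[]; }
interface SQSBatchResponse { batchItemFailures: { itemIdentifier: string }[]; }

export async function handler(
  event: SQSEvent,
  // Injected for testability; in a real function this is your event adapter
  process: (body: string) => Promise<void>
): Promise<SQSBatchResponse> {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of event.Records) {
    try {
      await process(record.body);
    } catch {
      // Report only this message; successfully processed ones are deleted from the queue
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
}
```

With `ReportBatchItemFailures` enabled on the event source mapping, only the reported messages return to the queue for retry.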
Patterns and Best Practices
Testing Strategy
Test your core business logic separately from your adapters. This is where hexagonal architecture really shines.
// Unit test for core business logic
describe('OrderProcessingService', () => {
  let service: OrderProcessingService;
  let mockOrderRepository: jest.Mocked<OrderRepository>;
  let mockInventoryService: jest.Mocked<InventoryService>;
  let mockPaymentService: jest.Mocked<PaymentService>;
  let mockNotificationService: jest.Mocked<NotificationService>;

  beforeEach(() => {
    mockOrderRepository = createMockOrderRepository();
    mockInventoryService = createMockInventoryService();
    mockPaymentService = createMockPaymentService();
    mockNotificationService = createMockNotificationService();
    service = new OrderProcessingService(
      mockOrderRepository,
      mockInventoryService,
      mockPaymentService,
      mockNotificationService
    );
  });

  it('should process order successfully when inventory and payment are available', async () => {
    // Arrange
    const orderId = 'order-123';
    const order = new Order(orderId, 'customer-456', [], OrderStatus.PENDING, new Date());
    mockOrderRepository.findById.mockResolvedValue(order);
    mockInventoryService.reserveItems.mockResolvedValue({ success: true, unavailableItems: [] });
    mockPaymentService.chargeCustomer.mockResolvedValue({ success: true });

    // Act
    await service.processOrder(orderId);

    // Assert
    expect(mockOrderRepository.updateStatus).toHaveBeenCalledWith(orderId, OrderStatus.CONFIRMED);
    expect(mockNotificationService.sendOrderConfirmation).toHaveBeenCalledWith('customer-456', orderId);
  });

  it('should cancel order when inventory is not available', async () => {
    // Arrange
    const orderId = 'order-123';
    const order = new Order(orderId, 'customer-456', [], OrderStatus.PENDING, new Date());
    mockOrderRepository.findById.mockResolvedValue(order);
    mockInventoryService.reserveItems.mockResolvedValue({
      success: false,
      unavailableItems: []
    });

    // Act
    await service.processOrder(orderId);

    // Assert
    expect(mockOrderRepository.updateStatus).toHaveBeenCalledWith(orderId, OrderStatus.CANCELLED);
    expect(mockNotificationService.sendOutOfStockNotification).toHaveBeenCalled();
  });
});

// Integration test for adapters (requires DynamoDB Local on port 8000)
describe('DynamoDBOrderRepository', () => {
  let repository: DynamoDBOrderRepository;
  let dynamoClient: DynamoDBClient;

  beforeEach(() => {
    dynamoClient = new DynamoDBClient({
      endpoint: 'http://localhost:8000', // Local DynamoDB
      region: 'us-east-1'
    });
    repository = new DynamoDBOrderRepository(dynamoClient, 'test-orders');
  });

  it('should save and retrieve order', async () => {
    // Arrange
    const order = new Order('test-123', 'customer-456', [], OrderStatus.PENDING, new Date());

    // Act
    await repository.save(order);
    const retrieved = await repository.findById('test-123');

    // Assert
    expect(retrieved).toEqual(order);
  });
});
Vendor Independence
One of the biggest benefits of this approach is that you can swap out adapters without changing your core logic. Want to move from AWS to Google Cloud? Just create new adapters.
// Google Cloud Pub/Sub adapter (replaces SQS)
export class PubSubOrderEventAdapter implements OrderEventHandler {
  constructor(
    private orderProcessingService: OrderProcessingService,
    private logger: Logger
  ) {}

  async handle(event: OrderEvent): Promise<void> {
    // Same logic, different cloud provider
    await this.orderProcessingService.processOrder(event.orderId);
  }
}

// Google Cloud Firestore adapter (replaces DynamoDB)
export class FirestoreOrderRepository implements OrderRepository {
  constructor(private firestore: Firestore) {}

  async findById(id: string): Promise<Order | null> {
    const doc = await this.firestore.collection('orders').doc(id).get();
    if (!doc.exists) {
      return null;
    }
    return this.mapToOrder(doc.data());
  }

  // ... rest of implementation
}
Monitoring and Observability
Treat monitoring as another adapter. Your core doesn’t need to know about CloudWatch, Azure Monitor, or Google Cloud Monitoring.
// Monitoring port
export interface MonitoringService {
  recordMetric(name: string, value: number, tags?: Record<string, string>): void;
  recordError(error: Error, context?: Record<string, any>): void;
}

// Monitoring adapter
export class CloudWatchMonitoringService implements MonitoringService {
  constructor(private cloudWatch: CloudWatchClient) {}

  recordMetric(name: string, value: number, tags?: Record<string, string>): void {
    // Send metric to CloudWatch
  }

  recordError(error: Error, context?: Record<string, any>): void {
    // Send error to CloudWatch
  }
}

// Use in your core service
export class OrderProcessingService {
  constructor(
    private orderRepository: OrderRepository,
    private inventoryService: InventoryService,
    private paymentService: PaymentService,
    private notificationService: NotificationService,
    private monitoring: MonitoringService
  ) {}

  async processOrder(orderId: string): Promise<void> {
    const startTime = Date.now();
    try {
      // ... business logic
      this.monitoring.recordMetric('order.processing.duration', Date.now() - startTime);
    } catch (error) {
      // Narrow the unknown caught value before passing it to the port
      this.monitoring.recordError(error instanceof Error ? error : new Error(String(error)), { orderId });
      throw error;
    }
  }
}
The Benefits
This approach gives you several advantages:
Maintainable code: Your business logic is separate from cloud-specific code. You can understand and modify it without getting lost in AWS SDK calls.
Testable systems: You can unit test your core logic with mocks. Integration tests can focus on specific adapters.
Portable applications: Want to move from AWS to Azure? Just swap the adapters. Your core business logic stays the same.
Clear boundaries: It’s obvious what belongs where. New team members can understand the structure quickly.
Easier debugging: When something goes wrong, you know whether it’s in your business logic or in an adapter.
The Trade-offs
This approach isn’t free. There are some costs:
Complexity: For simple functions that just move data around, this might be overkill. You’re adding layers of abstraction.
Cold start overhead: Dependency injection and adapter initialization happen on every cold start. This can add latency.
Learning curve: Team members need to understand hexagonal architecture. Not everyone is familiar with it.
More code: You’re writing more interfaces and adapters. This means more files to maintain.
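The cold start cost is also manageable: build the object graph once at module scope and defer the expensive pieces until first use. A small lazy-singleton helper makes this explicit (names here are illustrative, not from the example above):

```typescript
// Run an expensive factory at most once per container, on first use
function once<T>(factory: () => T): () => T {
  let value: T | undefined;
  return () => (value ??= factory());
}

// Counter just to demonstrate the factory runs a single time
let constructions = 0;

const getService = once(() => {
  constructions++; // stands in for SDK clients, config parsing, connection setup
  return { name: 'order-service' };
});
```

Every invocation calls `getService()`, but the factory body runs only on the first call after a cold start; warm invocations reuse the cached instance.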
When to Use This Pattern
Use hexagonal architecture in serverless when:
- Your business logic is complex
- You need to test your core logic thoroughly
- You might need to support multiple cloud providers
- You have multiple functions that share business logic
- Your team values clean architecture
Don’t use it when:
- Your functions are simple data transformers
- You’re prototyping and need to move fast
- Your team is small and doesn’t care about architecture
- You’re certain you’ll never leave your current cloud provider
Conclusion
Hexagonal architecture can work well with serverless functions. The key is to think of your function as just another adapter in the system. Your core business logic stays clean and testable, while adapters handle the cloud-specific stuff.
This approach takes more upfront work, but it pays off as your system grows. You get maintainable, testable, and portable code. Your business logic doesn’t get tangled up with AWS SDKs or Azure services.
The pattern works best when you have complex business logic that you want to test thoroughly. For simple functions, it might be overkill. But for systems that need to scale and evolve, it’s worth the investment.
Start small. Pick one function and try this approach. See how it feels. If it works for your team, gradually apply it to more functions. The goal isn’t to have perfect architecture everywhere - it’s to have code that you can understand, test, and maintain.