Microservices Architecture Guide 2026: Design, Communication, and Deployment

Sanjeev Sharma · 5 min read

Microservices 2026: When to Use and How to Build

Most teams should start with a monolith. But when you hit team scaling problems or truly independent scaling needs, microservices are the right tool. Here's how to do it properly.

Monolith vs Microservices Decision

Start with a monolith if:
- Team < 15 engineers
- Domain not well understood
- Early-stage product

Consider microservices when:
- Independent deployment of features is blocked
- Different services need different scaling
- Multiple teams need autonomy
- Technology diversity is required

Service Decomposition Principles

Decompose by business capability, not by technical layer:

BAD decomposition (by layer):
├── frontend-service
├── api-service
└── database-service

GOOD decomposition (by domain):
├── user-service          (auth, profiles, preferences)
├── product-service       (catalog, inventory, pricing)
├── order-service         (cart, checkout, order management)
├── notification-service  (email, SMS, push)
└── search-service        (search, recommendations)
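
One way to keep those boundaries honest is to give each service an explicitly published contract (the events it emits and the API it exposes) and treat everything else as private. A sketch of what that could look like for order-service; the file and type names are illustrative, not a framework convention:

// order-service/contracts.ts: the only module other services may depend on.
// Consumers use these events and endpoints; they never read orders_db directly.
export interface OrderCreatedEvent {
  orderId: string
  userId: string
  items: { productId: string; quantity: number; unitPrice: number }[]
  total: number
  occurredAt: string  // ISO 8601 timestamp
}

// The REST surface, documented as types rather than shared code:
// GET /api/orders?userId=... returns OrderSummary[]
export interface OrderSummary {
  orderId: string
  status: 'pending' | 'paid' | 'shipped'
  total: number
}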

Synchronous Communication: REST and gRPC

// REST between services (simple, but creates tight runtime coupling)
// user-service calls order-service
// getTraceId() and ServiceError are assumed helpers defined elsewhere in the service
async function getUserOrders(userId: string) {
  const response = await fetch(`${ORDER_SERVICE_URL}/api/orders?userId=${userId}`, {
    headers: { 'X-Service-Name': 'user-service', 'X-Trace-ID': getTraceId() },
    signal: AbortSignal.timeout(5000),  // 5s timeout
  })

  if (!response.ok) {
    if (response.status === 404) return []
    throw new ServiceError('order-service', response.status)
  }

  return response.json()
}
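
gRPC is the other common synchronous option: a typed contract in a .proto file, binary transport, and deadlines built in. A minimal client sketch with @grpc/grpc-js; the proto path, package name (orders), and GetUserOrders RPC are assumptions for illustration, not an existing contract:

// Hypothetical order.proto: service OrderService { rpc GetUserOrders(...) returns (...) }
import * as grpc from '@grpc/grpc-js'
import * as protoLoader from '@grpc/proto-loader'

const packageDefinition = protoLoader.loadSync('proto/order.proto')
const proto = grpc.loadPackageDefinition(packageDefinition) as any

const orderClient = new proto.orders.OrderService(
  process.env.ORDER_SERVICE_GRPC_URL ?? 'order-service:50051',
  grpc.credentials.createInsecure()  // use TLS credentials outside the cluster
)

function getUserOrdersViaGrpc(userId: string): Promise<unknown[]> {
  return new Promise((resolve, reject) => {
    // The deadline plays the same role as the 5s fetch timeout above
    const options = { deadline: new Date(Date.now() + 5000) }
    orderClient.GetUserOrders({ userId }, new grpc.Metadata(), options, (err: grpc.ServiceError | null, res: any) => {
      if (err) return reject(err)
      resolve(res.orders)
    })
  })
}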

// Circuit breaker pattern
class CircuitBreaker {
  private failures = 0
  private lastFailure = 0
  private state: 'CLOSED' | 'OPEN' | 'HALF_OPEN' = 'CLOSED'

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'OPEN') {
      if (Date.now() - this.lastFailure > 30000) {
        this.state = 'HALF_OPEN'
      } else {
        throw new Error('Circuit breaker OPEN')
      }
    }

    try {
      const result = await fn()
      // Any success resets the breaker; otherwise sporadic failures
      // accumulated over hours would eventually trip it
      this.failures = 0
      this.state = 'CLOSED'
      return result
    } catch (err) {
      this.failures++
      this.lastFailure = Date.now()
      // Trip the breaker after 5 consecutive failures
      if (this.failures >= 5) this.state = 'OPEN'
      throw err
    }
  }
}
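
Wrapping the REST call from above, with one breaker per downstream dependency so a failing order-service doesn't take every user-service request down with it:

// One breaker instance per downstream service
const orderServiceBreaker = new CircuitBreaker()

async function getUserOrdersSafe(userId: string) {
  try {
    return await orderServiceBreaker.call(() => getUserOrders(userId))
  } catch {
    // Fallback: degrade gracefully instead of cascading the failure
    return []
  }
}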

Asynchronous Communication: Message Queues

// Event-driven communication with BullMQ (Redis-backed queues)
import { Queue, Worker } from 'bullmq'
import { redis } from './lib/redis'

// order-service publishes events.
// Note: BullMQ is a work queue, not a broadcast bus. Two workers on the same
// queue would split the jobs between them, so fan-out to multiple consumers
// needs one queue per consumer (or a broker with topic exchanges, e.g. RabbitMQ).
const notificationQueue = new Queue('notification-events', { connection: redis })
const inventoryQueue = new Queue('inventory-events', { connection: redis })

// When an order is placed
async function placeOrder(orderData: CreateOrderInput) {
  const order = await createOrderInDB(orderData)

  // Publish events — don't call other services directly!
  const event = {
    orderId: order.id,
    userId: order.userId,
    items: order.items,
    total: order.total,
  }
  await Promise.all([
    notificationQueue.add('order.created', event),
    inventoryQueue.add('order.created', event),
  ])

  return order
}

// notification-service subscribes
const notificationWorker = new Worker(
  'notification-events',
  async (job) => {
    if (job.name === 'order.created') {
      const { userId, orderId, total } = job.data
      const user = await getUser(userId)  // assumed helper that fetches the email from user-service
      await sendEmail({
        to: user.email,
        subject: 'Order Confirmation',
        template: 'order-confirmation',
        data: { orderId, total },
      })
    }
  },
  { connection: redis, concurrency: 10 }
)

// inventory-service subscribes
const inventoryWorker = new Worker(
  'inventory-events',
  async (job) => {
    if (job.name === 'order.created') {
      for (const item of job.data.items) {
        await decrementStock(item.productId, item.quantity)
      }
    }
  },
  { connection: redis }
)
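
Queues give at-least-once delivery: a worker can crash after decrementing stock but before the job is acknowledged, and the job will run again. Let BullMQ retry failures and make handlers idempotent so a redelivered event is a no-op. A sketch; markEventProcessed is a hypothetical helper (e.g. an INSERT ... ON CONFLICT DO NOTHING keyed by event id):

import type { Job } from 'bullmq'

// Producer side (inside placeOrder): retry failed handlers with exponential backoff
await inventoryQueue.add('order.created', event, {
  attempts: 5,
  backoff: { type: 'exponential', delay: 1000 },
})

// Consumer side: record the event id before doing side effects,
// so a redelivered job becomes a no-op
async function handleOrderCreated(job: Job) {
  const firstDelivery = await markEventProcessed(`order.created:${job.data.orderId}`)
  if (!firstDelivery) return

  for (const item of job.data.items) {
    await decrementStock(item.productId, item.quantity)
  }
}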

API Gateway

// Single entry point for all microservices
// Using Express as a simple gateway

import express from 'express'
import rateLimit from 'express-rate-limit'
import { createProxyMiddleware } from 'http-proxy-middleware'
import jwt from 'jsonwebtoken'

const gateway = express()

// Auth middleware: verify the JWT once at the edge, pass identity downstream
gateway.use(async (req, res, next) => {
  // Strip any identity headers sent by the client; only the gateway may set them
  delete req.headers['x-user-id']
  delete req.headers['x-user-role']

  const token = req.headers.authorization?.split(' ')[1]
  if (token) {
    try {
      const user = jwt.verify(token, process.env.JWT_SECRET!)
      req.headers['X-User-ID'] = (user as any).sub
      req.headers['X-User-Role'] = (user as any).role
    } catch {
      // Invalid token: forward without identity headers; services enforce authorization
    }
  }
  next()
})

// Rate limiting
gateway.use(rateLimit({ windowMs: 60_000, max: 100 }))

// Route to services
gateway.use('/api/users', createProxyMiddleware({
  target: process.env.USER_SERVICE_URL,
  changeOrigin: true,
  pathRewrite: { '^/api/users': '/api/v1/users' },
  on: {
    error: (err, req, res) => {
      (res as any).status(503).json({ error: 'User service unavailable' })
    },
  },
}))

gateway.use('/api/orders', createProxyMiddleware({
  target: process.env.ORDER_SERVICE_URL,
  changeOrigin: true,
}))

gateway.listen(3000)

Service Discovery

// Service registry with consul or use Kubernetes DNS
// Kubernetes: services are discovered by DNS name

// With Docker Compose (development)
// Services refer to each other by service name:
// user-service → http://order-service:3002/api/orders

// Kubernetes service DNS:
// http://order-service.default.svc.cluster.local:3002

// Environment-based (simple, works everywhere)
const SERVICES = {
  user: process.env.USER_SERVICE_URL || 'http://user-service:3001',
  order: process.env.ORDER_SERVICE_URL || 'http://order-service:3002',
  product: process.env.PRODUCT_SERVICE_URL || 'http://product-service:3003',
}
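
Whichever mechanism resolves the name, calling code should only ever see a base URL, so swapping Compose for Kubernetes changes configuration, not code:

// Same pattern as the earlier REST call, resolved through the SERVICES map
async function getProduct(productId: string) {
  const response = await fetch(`${SERVICES.product}/api/products/${productId}`, {
    signal: AbortSignal.timeout(5000),
  })
  if (!response.ok) throw new ServiceError('product-service', response.status)
  return response.json()
}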

Distributed Tracing with OpenTelemetry

// Add tracing to every service
import { NodeSDK } from '@opentelemetry/sdk-node'
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { Resource } from '@opentelemetry/resources'
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'

const sdk = new NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'order-service',
    [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
  }),
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
  }),
  // Without this, nothing is instrumented automatically
  instrumentations: [getNodeAutoInstrumentations()],
})

sdk.start()

// Auto-instrumentation: HTTP, gRPC, databases are traced automatically
// Traces appear in Jaeger/Grafana Tempo/Datadog

// Manual span
import { trace, SpanStatusCode } from '@opentelemetry/api'

async function processOrder(orderId: string) {
  const tracer = trace.getTracer('order-service')
  const span = tracer.startSpan('processOrder')
  span.setAttribute('orderId', orderId)

  try {
    const result = await doWork(orderId)  // doWork stands in for the actual business logic
    span.setStatus({ code: SpanStatusCode.OK })
    return result
  } catch (err) {
    span.recordException(err as Error)
    span.setStatus({ code: SpanStatusCode.ERROR })
    throw err
  } finally {
    span.end()
  }
}
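
HTTP hops are stitched together automatically by the instrumentation, but a queue hop is not: the worker starts a fresh trace unless the context travels in the job payload. A sketch using the propagation API; the traceContext field name is just a convention, not part of BullMQ:

import { context, propagation } from '@opentelemetry/api'
import type { Queue, Job } from 'bullmq'

// Producer: serialize the active trace context into the job data
async function publishWithTrace(queue: Queue, name: string, data: Record<string, unknown>) {
  const traceContext: Record<string, string> = {}
  propagation.inject(context.active(), traceContext)
  await queue.add(name, { ...data, traceContext })
}

// Consumer: restore it so the worker's spans join the original trace
async function runWithJobTrace<T>(job: Job, fn: () => Promise<T>): Promise<T> {
  const parentContext = propagation.extract(context.active(), job.data.traceContext ?? {})
  return context.with(parentContext, fn)
}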

Shared Nothing: Data Isolation

Each service owns its data:
├── user-service      → users_db (PostgreSQL)
├── order-service     → orders_db (PostgreSQL)
├── product-service   → products_db (PostgreSQL)
└── search-service    → search_db (Elasticsearch)

Never share databases between services: a shared DB creates tight coupling, and you've effectively built a distributed monolith.
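
Owning your data also means no cross-service JOINs. Either call the owning service at read time or keep a local copy of the few fields you need, updated from events. For example, order-service can snapshot product data at checkout (field names are illustrative):

// order-service: each order item carries a snapshot of product data at purchase time,
// so reads never touch products_db and historical prices stay correct
interface OrderItem {
  productId: string     // reference by ID; product-service owns the rest
  productName: string   // copied at checkout
  unitPrice: number     // copied at checkout; later price changes don't rewrite history
  quantity: number
}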

Deployment: Docker Compose → Kubernetes

# kubernetes/order-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: myregistry/order-service:v1.2.0
          ports:
            - containerPort: 3002
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: order-db-secret
                  key: url
          livenessProbe:
            httpGet: { path: /health, port: 3002 }
            initialDelaySeconds: 10
          readinessProbe:
            httpGet: { path: /health/ready, port: 3002 }
          resources:
            requests: { cpu: '100m', memory: '128Mi' }
            limits: { cpu: '500m', memory: '512Mi' }
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 3002
      targetPort: 3002

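The probes above assume the service exposes health endpoints. A minimal sketch for order-service: liveness only says the process is up, while readiness checks dependencies so Kubernetes stops routing traffic when, say, the database is unreachable (checkDatabaseConnection is an assumed helper):

// order-service: endpoints backing the liveness and readiness probes
import express from 'express'

const app = express()

// Liveness: the process is running; failing this gets the pod restarted
app.get('/health', (_req, res) => res.status(200).json({ status: 'ok' }))

// Readiness: dependencies are reachable; failing this removes the pod from the Service
app.get('/health/ready', async (_req, res) => {
  try {
    await checkDatabaseConnection()  // assumed helper, e.g. SELECT 1 against orders_db
    res.status(200).json({ status: 'ready' })
  } catch {
    res.status(503).json({ status: 'not ready' })
  }
})

app.listen(3002)
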
Microservices add significant complexity. Only adopt them when team scaling, not raw performance, is the bottleneck. Start with a modular monolith, then extract services when teams need autonomy.

Written by Sanjeev Sharma, Full Stack Engineer · E-mopro