# Feature-Based Clean Architecture. Part 2: Decomposition into Services: An Analysis of the Approach's Limits
Beyond Service Splitting: Managing Workflow Complexity in NestJS Feature Modules
## Current Situation Analysis
As NestJS applications scale, feature logic inevitably accumulates in entry-point handlers. The standard industry response to bloated controller or service methods is decomposition: extract domain-specific logic into separate services and promote the original method to an orchestrator. Teams celebrate this refactoring because cyclomatic complexity drops, files become smaller, and code reviews pass faster. The architecture appears cleaner.
The problem is that decomposition without explicit workflow modeling doesn't eliminate complexity; it migrates it. The orchestrator becomes a fragile coordination layer that implicitly manages cross-cutting concerns, transaction boundaries, and error propagation. What looks like a SOLID-compliant separation of concerns is often just a distributed monolith in disguise. The orchestration layer ends up knowing too much about execution order, side effects, and failure recovery, while the extracted services remain tightly coupled through implicit contracts.
This pattern is overlooked because it satisfies superficial metrics. Static analysis tools report lower function length and reduced dependency counts per file. However, production telemetry tells a different story:
- Cross-service dependency graphs grow by 3–5x after decomposition
- Test setup overhead for integration flows increases because mocking must span multiple service boundaries
- Transactional integrity becomes implicit, leading to partial state mutations during failures
- Refactoring friction rises as changes to one domain ripple through the orchestrator's conditional branches
The architectural trap isn't in the extracted services themselves. It's in treating orchestration as a passive coordinator rather than an explicit, testable, and transactionally bounded workflow. When teams stop at service splitting, they trade a single complex function for a distributed state machine with no formal definition.
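To make the trap concrete, here is a condensed sketch of what such an implicit orchestrator typically looks like after a first round of extraction (all names are illustrative, not from a real codebase):

```typescript
// Illustrative anti-pattern: the orchestrator "coordinates" the extracted
// services, but implicitly owns execution order, error translation, and
// transaction scope.
type RegisterDto = { email: string; deviceId: string; referralCode?: string };

class UsersService {
  async create(dto: RegisterDto) { return { id: 'u1', email: dto.email }; }
}
class AntiFraudService {
  async check(_deviceId: string) { return { blocked: false }; }
}
class ReferralService {
  async linkReferral(_code: string, _userId: string) { /* may fail after the user exists */ }
}

class RegistrationOrchestrator {
  constructor(
    private readonly users: UsersService,
    private readonly antiFraud: AntiFraudService,
    private readonly referrals: ReferralService,
  ) {}

  async register(dto: RegisterDto) {
    // Implicit ordering: swapping these calls silently changes behavior.
    const risk = await this.antiFraud.check(dto.deviceId);
    if (risk.blocked) throw new Error('Forbidden'); // transport concern leaks in

    // Implicit transaction: if linkReferral throws below, the user already
    // exists -- a partial commit with no rollback path.
    const user = await this.users.create(dto);
    if (dto.referralCode) {
      await this.referrals.linkReferral(dto.referralCode, user.id);
    }
    return user;
  }
}
```

Nothing here is wrong in isolation; the problem is that the sequencing, the error translation, and the missing transaction boundary are all invisible to the type system and to tests.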
## WOW Moment: Key Findings
The following comparison illustrates why naive decomposition fails to solve architectural decay, and what explicit workflow modeling actually changes.
| Approach | Cyclomatic Complexity | Dependency Graph Depth | Test Setup Overhead | Refactoring Friction | Transactional Safety |
|---|---|---|---|---|---|
| Monolithic Service | High (45+) | Low (1) | Low | Low | High (single DB transaction) |
| Service Decomposition + Orchestrator | Medium (15–25) | High (4–7) | High | High | Low (implicit, fragmented) |
| Explicit Workflow + Capability Handlers | Low (8–12) | Medium (2–3) | Medium | Low | High (explicit boundaries) |
Why this matters: Decomposition reduces local complexity but increases systemic complexity. The orchestrator pattern introduces hidden coupling through execution order and error handling logic. Explicit workflow modeling makes the sequence of operations, failure recovery paths, and transaction boundaries first-class citizens. This shift enables predictable testing, safer refactoring, and clear ownership of cross-domain interactions.
## Core Solution
The fix isn't to avoid decomposition. It's to replace implicit orchestration with explicit workflow execution. Instead of a single method calling multiple services and handling results inline, you model the feature as a sequence of capability handlers, each with a strict contract, explicit error types, and defined transactional scope.
### Step 1: Define Capability Contracts, Not Services
Services imply CRUD operations. Capabilities imply intent. Replace `UsersService`, `AntiFraudService`, and `ReferralService` with capability interfaces that express what the workflow needs, not how it's implemented.
```typescript
// contracts/identity.capability.ts
// NOTE: TypeScript interfaces are erased at runtime. To use these contracts
// as NestJS DI tokens later, declare them as abstract classes or pair each
// one with an injection token (e.g. a Symbol) and @Inject().
export interface IdentityCapability {
  reserveEmail(email: string): Promise<OperationResult<UserId, IdentityError>>;
  persistProfile(data: UserProfilePayload): Promise<OperationResult<UserRecord, IdentityError>>;
}

// contracts/risk.capability.ts
export interface RiskCapability {
  evaluateDevice(deviceId: string): Promise<OperationResult<DeviceScore, RiskError>>;
  validateNetwork(ip: string): Promise<OperationResult<NetworkStatus, RiskError>>;
}

// contracts/referral.capability.ts
export interface ReferralCapability {
  resolveCode(code: string): Promise<OperationResult<ReferralLink, ReferralError>>;
  linkAccounts(referrerId: UserId, newUserId: UserId): Promise<OperationResult<ReferralRecord, ReferralError>>;
}
```
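The contracts above return a shared `OperationResult<T, E>`. For reference, here is one minimal shape that `shared/result.ts` could take (an illustrative sketch, not the only possible design):

```typescript
// shared/result.ts -- one possible minimal Result implementation.
// `this is ...` type predicates let the compiler narrow after a check.
export type OperationResult<T, E> = Success<T, E> | Failure<T, E>;

export class Success<T, E> {
  constructor(readonly value: T) {}
  isSuccess(): this is Success<T, E> { return true; }
  isFailure(): this is Failure<T, E> { return false; }
}

export class Failure<T, E> {
  constructor(readonly error: E) {}
  isSuccess(): this is Success<T, E> { return false; }
  isFailure(): this is Failure<T, E> { return true; }
}

export const ok = <T, E = never>(value: T): OperationResult<T, E> =>
  new Success<T, E>(value);

export const err = <T = never, E = unknown>(error: E): OperationResult<T, E> =>
  new Failure<T, E>(error);
```

With this shape, `if (result.isFailure()) return ...` both handles the error and narrows the remaining code path to the success branch.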
### Step 2: Implement an Explicit Workflow Handler
The workflow replaces the orchestrator method. It executes steps sequentially, maps errors to HTTP boundaries, and manages transaction scope explicitly.
```typescript
// workflows/registration.workflow.ts
import { Injectable } from '@nestjs/common';
import { OperationResult, ok, err } from '../shared/result';
import { IdentityCapability } from '../contracts/identity.capability';
import { RiskCapability } from '../contracts/risk.capability';
import { ReferralCapability } from '../contracts/referral.capability';
import { AnalyticsCapability } from '../contracts/analytics.capability';

@Injectable()
export class RegistrationWorkflow {
  constructor(
    private readonly identity: IdentityCapability,
    private readonly risk: RiskCapability,
    private readonly referral: ReferralCapability,
    private readonly analytics: AnalyticsCapability,
  ) {}

  async execute(payload: RegistrationRequest): Promise<OperationResult<RegistrationResponse, WorkflowError>> {
    // 1. Risk evaluation (read-only, no transaction needed)
    const deviceCheck = await this.risk.evaluateDevice(payload.deviceId);
    if (deviceCheck.isFailure()) return err(WorkflowError.RISK_CHECK_FAILED);

    const networkCheck = await this.risk.validateNetwork(payload.ip);
    if (networkCheck.isFailure()) return err(WorkflowError.RISK_CHECK_FAILED);

    // 2. Referral resolution (read-only)
    let referralLink: ReferralLink | null = null;
    if (payload.referralCode) {
      const linkResult = await this.referral.resolveCode(payload.referralCode);
      if (linkResult.isFailure()) return err(WorkflowError.INVALID_REFERRAL);
      referralLink = linkResult.value;
    }

    // 3. Transactional boundary: identity creation + referral linking
    const txResult = await this.identity.reserveEmail(payload.email);
    if (txResult.isFailure()) return err(WorkflowError.EMAIL_TAKEN);

    const userRecord = await this.identity.persistProfile({
      email: payload.email,
      hashedPassword: payload.passwordHash,
      source: payload.adSource,
      riskProfile: { ip: payload.ip, deviceId: payload.deviceId },
    });
    if (userRecord.isFailure()) return err(WorkflowError.PERSISTENCE_FAILED);
    const newUser = userRecord.value;

    // 4. Post-creation side effects (idempotent, fire-and-forget safe)
    if (referralLink) {
      await this.referral.linkAccounts(referralLink.ownerId, newUser.id);
    }
    await this.analytics.trackEvent('user.registered', {
      userId: newUser.id,
      source: payload.adSource,
      riskScore: deviceCheck.value.score,
    });

    return ok({ userId: newUser.id, email: newUser.email });
  }
}
```
### Step 3: Map Workflow Errors at the Transport Boundary
The workflow returns explicit results. The controller or NestJS interceptor translates them to HTTP responses. This keeps business logic free from transport concerns.
```typescript
// adapters/http/registration.controller.ts
import { Body, Controller, HttpException, Post } from '@nestjs/common';

@Controller('auth')
export class RegistrationController {
  constructor(private readonly workflow: RegistrationWorkflow) {}

  @Post('register')
  async handle(@Body() payload: RegistrationRequest): Promise<ApiResponse> {
    const result = await this.workflow.execute(payload);
    if (result.isSuccess()) {
      return { status: 201, data: result.value };
    }
    const errorMap: Record<WorkflowError, number> = {
      [WorkflowError.EMAIL_TAKEN]: 409,
      [WorkflowError.INVALID_REFERRAL]: 400,
      [WorkflowError.RISK_CHECK_FAILED]: 403,
      [WorkflowError.PERSISTENCE_FAILED]: 500,
    };
    const httpStatus = errorMap[result.error] ?? 500;
    throw new HttpException(result.error, httpStatus);
  }
}
```
## Architecture Decisions & Rationale
- Capabilities over Services: Capabilities define intent (`reserveEmail`, `evaluateDevice`) rather than implementation details. This prevents domain leakage and makes mocking trivial.
- Explicit Transaction Boundaries: The workflow identifies where ACID guarantees are required (user creation + referral linking) and isolates them. Read-only steps run outside the transaction, reducing lock contention.
- Result Pattern at Boundaries: Using `OperationResult<T, E>` forces explicit error handling. The compiler prevents silent failures, and the workflow remains pure business logic.
- Idempotent Side Effects: Analytics and bonus accrual are decoupled from the core transaction. If they fail, the user is still created. This matches production reality, where observability shouldn't block core flows.
## Pitfall Guide
### 1. The Orchestrator God Object
Explanation: The orchestrator accumulates conditional logic, error mapping, and execution order knowledge. It becomes the single point of failure for refactoring.
Fix: Extract execution order into a workflow handler. Use a step-based pipeline or explicit state machine. Keep the orchestrator thin or eliminate it entirely.
### 2. Implicit Transaction Boundaries
Explanation: Developers assume each service manages its own transaction. Cross-service flows end up with partial commits when intermediate steps fail.
Fix: Define transaction scope at the workflow level. Use database transactions explicitly (BEGIN/COMMIT/ROLLBACK) or leverage NestJS transactional decorators that span multiple capability calls.
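One way to make that boundary explicit is a small unit-of-work abstraction owned by the workflow. The sketch below uses an in-memory stand-in where a real adapter would delegate to, e.g., TypeORM's `dataSource.transaction()` or Prisma's `$transaction()` (the `UnitOfWork` name and wiring are illustrative):

```typescript
// Hypothetical UnitOfWork capability: the workflow owns the boundary,
// the infrastructure layer owns the mechanics (BEGIN/COMMIT/ROLLBACK).
interface UnitOfWork {
  run<T>(work: () => Promise<T>): Promise<T>;
}

// In-memory stand-in for illustration; a real adapter would open a DB
// transaction, commit on success, and roll back on any thrown error.
class FakeUnitOfWork implements UnitOfWork {
  async run<T>(work: () => Promise<T>): Promise<T> {
    // BEGIN would go here
    try {
      const result = await work();
      // COMMIT would go here
      return result;
    } catch (e) {
      // ROLLBACK would go here
      throw e;
    }
  }
}

// Usage inside a workflow step: both writes commit or neither does.
async function createUserWithReferral(uow: UnitOfWork): Promise<string> {
  return uow.run(async () => {
    const userId = 'u-123';           // stands in for identity.persistProfile(...)
    /* referral.linkAccounts(...) */  // second write inside the same scope
    return userId;
  });
}
```

The key property: the transaction scope is visible in the workflow's code, not buried inside individual services.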
### 3. Capability Leakage
Explanation: Extracted services reach into each other's data or call internal methods not exposed in the contract. This recreates coupling under a new name.
Fix: Enforce strict interface boundaries. Use dependency injection to inject only the capability contract, not the concrete implementation. Run static analysis to detect cross-module imports.
### 4. Error Handling Fragmentation
Explanation: Mixing `throw`, `Result`, and `null` returns creates inconsistent error paths. The orchestrator spends more time translating errors than executing logic.
Fix: Standardize on a single error representation across the feature boundary. Use discriminated unions or a Result monad. Map to HTTP status codes only at the transport layer.
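As an illustration of that standardization, a discriminated union with an exhaustive transport-layer mapping (error variants and status codes here are hypothetical):

```typescript
// One feature-wide error representation: a discriminated union the
// compiler can exhaustively check at the transport boundary.
type RegistrationError =
  | { kind: 'EMAIL_TAKEN' }
  | { kind: 'INVALID_REFERRAL'; code: string }
  | { kind: 'RISK_CHECK_FAILED'; score: number };

// Mapping lives only at the transport layer; adding a new variant without
// handling it here becomes a compile-time error via the `never` check.
function toHttpStatus(error: RegistrationError): number {
  switch (error.kind) {
    case 'EMAIL_TAKEN': return 409;
    case 'INVALID_REFERRAL': return 400;
    case 'RISK_CHECK_FAILED': return 403;
    default: {
      const exhaustive: never = error; // unreachable if all variants handled
      return exhaustive;
    }
  }
}
```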
### 5. Testing the Wiring, Not the Flow
Explanation: Unit tests mock every service and verify the orchestrator calls them in order. This tests implementation details, not business behavior.
Fix: Write behavior-driven tests against the workflow. Provide fake capability implementations that return predefined results. Assert on final state and side effects, not call counts.
### 6. Ignoring Idempotency in Distributed Steps
Explanation: Retry mechanisms or message queues cause duplicate execution. Without idempotency keys, bonuses are double-awarded or analytics are duplicated.
Fix: Attach unique correlation IDs to workflow executions. Design side-effect capabilities to be idempotent by checking existing records before creating.
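A minimal sketch of an idempotent side-effect capability, using an in-memory set where production code would rely on a unique database constraint (all names illustrative):

```typescript
// Idempotent bonus accrual keyed by a per-execution correlation ID.
// A production version would back `processed` with a unique DB constraint
// so the check-and-insert is atomic across instances.
class BonusAccrual {
  private processed = new Set<string>();

  accrue(correlationId: string, userId: string): boolean {
    if (this.processed.has(correlationId)) {
      return false; // duplicate delivery: no double award for userId
    }
    this.processed.add(correlationId);
    // ... award bonus to userId here ...
    return true;
  }
}
```

A retried message with the same correlation ID becomes a no-op instead of a second bonus.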
### 7. Over-Engineering with Premature Microservices
Explanation: Teams split services into separate deployables before the domain is stable. Network latency, distributed transactions, and versioning complexity explode.
Fix: Keep capabilities in a single deployable unit until scaling demands otherwise. Use modular monolith patterns with clear bounded contexts. Split only when independent deployment velocity is proven necessary.
## Production Bundle
### Action Checklist
- Map feature boundaries: Identify which operations belong to the same business transaction
- Replace service calls with capability interfaces: Define intent-driven contracts
- Implement explicit workflow handler: Sequence steps, define transaction scope, handle errors
- Standardize error representation: Use `Result<T, E>` or discriminated unions across the feature
- Isolate side effects: Move analytics, notifications, and bonus accrual outside the core transaction
- Add idempotency keys: Ensure retries and queue consumers don't duplicate state changes
- Write workflow-level tests: Validate business behavior, not implementation wiring
- Review transaction boundaries: Confirm ACID guarantees match business requirements
### Decision Matrix
| Scenario | Recommended Approach | Why | Cost Impact |
|---|---|---|---|
| Single feature with 3–5 domain interactions | Explicit Workflow + Capabilities | Keeps transactional integrity, reduces coupling | Low (refactoring only) |
| Cross-feature events requiring async processing | Event-Driven Workflow + Message Queue | Decouples execution, enables scaling | Medium (infrastructure + serialization) |
| High-throughput registration with external risk APIs | Async Workflow + Retry + Idempotency | Handles latency, prevents duplicate state | Medium-High (queue + monitoring) |
| Legacy monolith with tight service coupling | Strangler Fig + Capability Wrappers | Gradual migration without rewrite | High (phased effort) |
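For the last row, a capability wrapper in the strangler-fig style might look like this; the legacy service and its throw-based API are hypothetical, and the contract is deliberately narrower than the one defined earlier:

```typescript
// Strangler-fig sketch: new workflows depend on the capability contract;
// the legacy call sits behind an adapter and can be replaced module by
// module without touching the workflow.
interface IdentityCapability {
  reserveEmail(email: string): Promise<{ ok: boolean }>;
}

// Hypothetical legacy service with a throw-based API.
class LegacyUsersService {
  async createIfFree(email: string): Promise<void> {
    if (email === 'taken@example.com') throw new Error('duplicate');
  }
}

class LegacyIdentityAdapter implements IdentityCapability {
  constructor(private readonly legacy: LegacyUsersService) {}

  async reserveEmail(email: string) {
    try {
      await this.legacy.createIfFree(email);
      return { ok: true };
    } catch {
      // Translate the legacy throw into the contract's result shape.
      return { ok: false };
    }
  }
}
```

When the new identity module is ready, only the adapter's binding changes; workflows keep compiling against the same contract.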
### Configuration Template
```typescript
// modules/registration/registration.module.ts
// NOTE: interfaces are erased at runtime, so for `provide: IdentityCapability`
// (and the other contracts) to work as DI tokens, declare the contracts as
// abstract classes, or switch to string/Symbol tokens with @Inject().
import { Module } from '@nestjs/common';
import { RegistrationWorkflow } from './workflows/registration.workflow';
import { IdentityCapability } from './contracts/identity.capability';
import { RiskCapability } from './contracts/risk.capability';
import { ReferralCapability } from './contracts/referral.capability';
import { AnalyticsCapability } from './contracts/analytics.capability';
import { IdentityRepository } from './infrastructure/identity.repository';
import { RiskEngineAdapter } from './infrastructure/risk.engine';
import { ReferralStore } from './infrastructure/referral.store';
import { AnalyticsTracker } from './infrastructure/analytics.tracker';

@Module({
  providers: [
    RegistrationWorkflow,
    { provide: IdentityCapability, useClass: IdentityRepository },
    { provide: RiskCapability, useClass: RiskEngineAdapter },
    { provide: ReferralCapability, useClass: ReferralStore },
    { provide: AnalyticsCapability, useClass: AnalyticsTracker },
  ],
  exports: [RegistrationWorkflow],
})
export class RegistrationModule {}
```
### Quick Start Guide
- Identify the feature boundary: List all operations triggered by a single user action (e.g., registration). Group them by transactional requirement.
- Define capability contracts: Create interfaces that express intent, not implementation. Place them in a `contracts/` directory.
- Build the workflow handler: Implement a class that executes steps sequentially, manages transaction scope, and returns explicit results.
- Wire dependencies: Register capabilities in your NestJS module. Inject the workflow into your controller or command handler.
- Test the flow: Write integration tests using fake capability implementations. Verify final state, error mapping, and side-effect execution.
