Migrating Spring Boot Monoliths to Microservices via DDD

Most enterprise teams hit a breaking point when their Spring Boot monolith reaches a telltale scale: a 20-minute CI/CD pipeline. I recently led a migration for a fintech platform where a single OrderService.java had grown to 4,500 lines of code and was injected with 22 different repositories. This "Big Ball of Mud" made it impossible to update the Spring Boot version or the underlying Hibernate dialect without breaking unrelated modules. We shifted to a Domain-Driven Design (DDD) approach to carve out microservices without stopping feature delivery.

📋 Tested Environment: Spring Boot 2.7.x migrating to 3.2.x (JDK 17)
Key Discovery: Extracting the database schema is 3x harder than extracting code. We found that breaking @OneToMany relationships between the legacy User entity and Order entity was the primary cause of DataIntegrityViolationException during the migration phase.


Mapping Bounded Contexts to Avoid the Distributed Monolith

The biggest mistake I see teams make is "Folder-based Microservices." They simply move packages into new repos. This results in a distributed monolith where every service calls every other service via REST, leading to 500ms+ latency and cascading failures. Instead, we spent three weeks on Event Storming to find our Bounded Contexts. We realized that what we called a "Product" in the Inventory context was actually an "Item" in the Shipping context. They have different identifiers and different lifecycles.

In our legacy Spring Boot app, these were all one giant @Entity. To decouple, we first had to break the JPA entity relationships. We replaced hard @ManyToOne object references with simple Long productId fields. This immediately broke the build in hundreds of places—which is exactly what you want. It forced the compiler to show us where the "messy middle" was. If a service was reaching deep into another service's logic to calculate a price, that logic needed to be moved or replicated as a read-model.
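A minimal sketch of that decoupling step, using the article's Order/Product names. The JPA annotations are shown only in comments so the sketch compiles without a persistence provider on the classpath; the class shape is illustrative, not our actual entity:

```java
// BEFORE (conceptually): the entity held a direct object reference,
//   @ManyToOne private Product product;
// which let any code traverse order.getProduct().getPrice() across contexts.

// AFTER: the aggregate stores only the foreign identifier. In the real
// entity this field carries @Column(name = "product_id") instead of a join.
class Order {
    private final Long productId;   // identity only; no object graph to lazy-load
    private final int quantity;

    Order(Long productId, int quantity) {
        this.productId = productId;
        this.quantity = quantity;
    }

    // Callers that need product data must now go through the owning
    // service's API (or a local read model), making the dependency explicit.
    Long getProductId() { return productId; }
    int getQuantity() { return quantity; }
}
```

The compiler errors this produces are the map of your hidden coupling: every call site that breaks is a place where one context was reaching into another's data.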

We found that identifying the "Aggregates" was the turning point. An Aggregate is a cluster of domain objects that can be treated as a single unit. In our case, the Order and its OrderLineItems formed a single aggregate. By ensuring that transactions never spanned multiple aggregates, we prepared the code for a physical split. This is the core of DDD applied to extraction: the boundary of the transaction becomes the boundary of the service.
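The aggregate-root idea can be sketched in a few lines. The names here are hypothetical; the point is that line items are only reachable through the root, so one transaction touches exactly one aggregate:

```java
import java.util.ArrayList;
import java.util.List;

// Line items have no public lifecycle of their own.
class OrderLineItem {
    final Long productId;
    final int quantity;
    final long priceCents;
    OrderLineItem(Long productId, int quantity, long priceCents) {
        this.productId = productId;
        this.quantity = quantity;
        this.priceCents = priceCents;
    }
}

// The root enforces every invariant; nothing outside it mutates lineItems.
class OrderAggregate {
    private final List<OrderLineItem> lineItems = new ArrayList<>();
    private boolean placed = false;

    void addLineItem(Long productId, int quantity, long priceCents) {
        if (placed) throw new IllegalStateException("order already placed");
        if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        lineItems.add(new OrderLineItem(productId, quantity, priceCents));
    }

    long totalCents() {
        return lineItems.stream().mapToLong(li -> li.quantity * li.priceCents).sum();
    }

    void place() {
        if (lineItems.isEmpty()) throw new IllegalStateException("empty order");
        placed = true; // persisted in ONE transaction together with its line items
    }
}
```

Once every write path looks like this, "split the service here" becomes a mechanical refactor rather than an archaeology project.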

Executing the Strangler Fig Pattern with Spring Cloud Gateway

We didn't do a "Big Bang" rewrite. Instead, we used the Strangler Fig pattern. We placed a Spring Cloud Gateway in front of our monolith. Initially, 100% of the traffic went to the monolith. When we extracted the "Notification" service, we simply updated the gateway routes to point /api/v1/notifications/** to the new microservice while keeping everything else pointed at the legacy .jar. This allowed us to test the new service in production with real traffic while having a 10-second rollback plan.
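The routing setup can be sketched as an application.yml fragment for Spring Cloud Gateway; service hostnames and ports here are illustrative, not our production values:

```yaml
spring:
  cloud:
    gateway:
      routes:
        # Carved-out traffic goes to the new service first
        - id: notifications
          uri: http://notification-service:8080
          order: 0
          predicates:
            - Path=/api/v1/notifications/**
        # Catch-all: everything else still hits the legacy jar
        - id: legacy-monolith
          uri: http://legacy-monolith:8080
          order: 1
          predicates:
            - Path=/**
```

Rollback is a one-line change: point the notifications route's uri back at the monolith and redeploy the gateway.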

The "Messy Middle" occurred when the new Notification service needed data that still lived in the monolith's database. We initially tried direct database access, but that was a disaster for schema migrations. We settled on using Debezium for Change Data Capture (CDC). Whenever the legacy database updated a user's email, Debezium streamed that change to Kafka, and the new Notification service updated its local projection. This kept the services decoupled at the data layer.

// Example of a shim client used during the transition
import java.time.Duration;

import org.springframework.http.HttpStatusCode;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

@Service
public class LegacyProductClient {
    private final WebClient webClient;

    // Constructor injection; the WebClient bean is preconfigured
    // with the monolith's base URL
    public LegacyProductClient(WebClient webClient) {
        this.webClient = webClient;
    }

    // Retries with backoff to ride out transient monolith downtime
    public Mono<ProductDTO> getProduct(String id) {
        return webClient.get()
            .uri("/internal/products/{id}", id)
            .retrieve()
            // HttpStatusCode is the Spring 6 / Boot 3.x type; on 2.7 use HttpStatus
            .onStatus(HttpStatusCode::is5xxServerError, res -> Mono.error(new LegacySystemException()))
            .bodyToMono(ProductDTO.class)
            .retryWhen(Retry.backoff(3, Duration.ofSeconds(2))); // resilience is key
    }
}

Solving Data Gravity and Shared Databases

Data gravity is the hardest part of any Spring Boot migration. Our legacy database had 200+ tables with complex foreign key constraints. We couldn't just "split" the database overnight. We followed a three-step process: Logical Separation, Table Mirroring, and finally, Physical Extraction. First, we moved the microservice tables into a separate schema within the same PostgreSQL instance. This allowed us to maintain joins if absolutely necessary while enforcing a "no-join" rule in the application code.
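The first step, logical separation, can be sketched as a Flyway-style migration script; the schema and table names below are hypothetical placeholders for the service's tables:

```sql
-- V42__extract_notification_schema.sql (illustrative file name)
-- Step 1: logical separation inside the SAME PostgreSQL instance
CREATE SCHEMA IF NOT EXISTS notifications;

-- Move the service's tables out of the shared 'public' schema
ALTER TABLE public.notification_templates SET SCHEMA notifications;
ALTER TABLE public.notification_log SET SCHEMA notifications;

-- Cross-schema joins still work at this stage; the "no-join" rule is
-- enforced in the application code, not by the database.
```

Because the data has not physically moved, this step is cheap to revert, which is exactly what you want while the application code is still being untangled.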

During this phase, we encountered massive performance hits. A single SELECT that used to take 5ms now took 150ms because it required three network calls to different services. We solved this by introducing Materialized Views on the query side (CQRS). If the Search Service needed data from the Catalog Service, it wouldn't call the Catalog API on every request. It would maintain a localized, flattened version of the catalog data optimized for searching.
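A minimal sketch of such a flattened read model, assuming a hypothetical SearchDocument record and in-memory storage (a real implementation would back this with the Search Service's own database or index):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One denormalized record per product: everything the query side needs, precomputed.
record SearchDocument(Long productId, String name, String category, long priceCents) {}

class CatalogReadModel {
    private final Map<Long, SearchDocument> docs = new ConcurrentHashMap<>();

    // Called from the CDC/event consumer whenever the catalog changes
    void upsert(SearchDocument doc) { docs.put(doc.productId(), doc); }

    // Query side: a local scan instead of a cross-service REST call per request
    List<SearchDocument> findByNameContaining(String term) {
        String t = term.toLowerCase(Locale.ROOT);
        List<SearchDocument> hits = new ArrayList<>();
        for (SearchDocument d : docs.values()) {
            if (d.name().toLowerCase(Locale.ROOT).contains(t)) hits.add(d);
        }
        return hits;
    }
}
```

The trade is eventual consistency for latency: the projection may lag the catalog by a moment, but the 150ms fan-out call chain disappears from the hot path.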

The final step was the physical migration. We used Flyway to manage migrations for the new services independently. One specific pain point was handling the Sequence generators. If both the monolith and the new service were inserting into the same logical entity space, primary key collisions were inevitable. We switched to UUIDs or TSIDs for all new services to avoid the headache of centralized sequence management.
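The decentralized-ID idea needs nothing beyond the JDK for UUIDs; TSIDs add time-ordering but require a library (e.g. tsid-creator), so this sketch sticks to stdlib UUIDv4 and the class name is illustrative:

```java
import java.util.UUID;

// Each service mints its own keys, so the monolith and the new service can
// insert concurrently with no shared database sequence to coordinate.
class IdGenerator {
    static String newOrderId() {
        return UUID.randomUUID().toString(); // 36-char, collision-safe in practice
    }
}
```

The cost is index size and lost insertion order on the primary key, which is why time-sorted schemes like TSID are worth evaluating before committing to plain UUIDv4.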


Frequently Asked Questions

Q. Should I use Spring Cloud Netflix or Spring Cloud Gateway?

A. Use Spring Cloud Gateway. Netflix Zuul 1.x is in maintenance mode and doesn't support non-blocking APIs. Spring Cloud Gateway is built on Project Reactor (WebFlux), offering better throughput and easier integration with modern Spring Boot 3.x security filters and rate limiting.

Q. How do I handle Distributed Transactions across services?

A. Avoid 2PC (Two-Phase Commit) at all costs. Instead, implement the Saga Pattern. Use an Orchestrator-based approach for complex flows or Choreography (Event-driven) for simple ones. If a step fails, you must issue "Compensating Transactions" to undo previous steps, as there is no global rollback.
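An orchestrator-based saga with compensations can be sketched in a few lines; the Saga/Step names are illustrative, not a real framework API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Each completed step pushes its compensation; on failure the stack unwinds
// in reverse order, because no global rollback exists across services.
class Saga {
    interface Step { void execute(); void compensate(); }

    private final Deque<Step> completed = new ArrayDeque<>();

    // Returns true if every step committed, false if we had to compensate
    boolean run(Step... steps) {
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException e) {
                // Undo prior steps explicitly, most recent first
                while (!completed.isEmpty()) completed.pop().compensate();
                return false;
            }
        }
        return true;
    }
}
```

A production orchestrator also has to persist saga state and make each step idempotent, since the process can crash mid-compensation; this sketch shows only the control flow.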

Q. Is DDD overkill for small Spring Boot applications?

A. Yes. If your team is small and the domain is simple, the overhead of defining aggregates, value objects, and domain events will slow you down. DDD is a tool for managing "complexity," not a mandatory standard for every backend project.

Migrating a Spring Boot monolith is a marathon, not a sprint. By focusing on Bounded Contexts first and using the Strangler Fig pattern, you can move your enterprise architecture toward a scalable, maintainable microservice ecosystem without the "Big Bang" failure risk. Always prioritize data decoupling over code decoupling; your future self will thank you when you don't have to coordinate a database deployment across ten different teams.
