
From an architectural perspective, MSAs can be broken down into three main parts.

  • Infrastructure architecture
    • On-premises
    • Cloud
  • System architecture
    • Monolith
    • Microservices
  • Application internal organization
    • Tightly coupled, locked-in technology stack
    • Loosely coupled, flexible technology stack

If I were an architect, these are all things I would care deeply about.
Microservices, however, don’t dictate or force any particular infrastructure on you.
That said, if you run an MSA on bare-metal hardware, it is hard to scale the infrastructure up and down flexibly. It’s probably fair to say that the only practical way to use bare metal for MSA is to build a private cloud environment on top of it.

If you’re using a virtualized environment, you can broadly categorize it into virtual machines and containers.
So what is the difference between virtual machines and containers?

First of all, virtual machines don’t come preconfigured the way container images do, so you have to set up the OS version and install libraries yourself.

Therefore, for independent environments such as microservices, containers are more appropriate than VMs.

Also, if containers are the natural choice, you’ll need container orchestration skills: a tool that can scale containers up or down, load-balance them, handle failover, and so on.

Why Netflix moved to the cloud

Around 2007, Netflix started its streaming business on a monolithic system; a massive database corruption then crippled its streaming DB, which started the journey toward MSA. The company chose EC2 on AWS.

It wasn’t all smooth sailing. When you break one big system apart and spread it across multiple services, you also have to prepare for failures propagating between them.

Netflix open-sourced a number of tools to address these issues.
Docker was released in 2013 and Kubernetes followed in 2014,
and these container and orchestration technologies continue to drive the evolution of the MSA ecosystem.


Service Discovery Pattern

How does a client call multiple backend microservices?
And if a service is replicated across multiple instances, how do we balance the load across them?
The pattern that answers this is the service discovery pattern, which ensures requests are routed and load-balanced properly.

Routers track IP addresses, but in a cloud MSA environment instances come and go, so we have to handle floating IPs that change dynamically.
We need a place to hold this mapping information, and that is the service registry pattern. Netflix’s Eureka is a tool that actually implements it.

  1. Each service instance registers its service name and floating IP with Eureka
  2. When a client makes a call, the router queries Eureka
  3. Using that information, the router load-balances across the registered instances
  4. The chosen instance is then called
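The registry flow above can be sketched with a toy in-memory registry (the class and addresses here are illustrative; Eureka’s real interface is HTTP-based):

```python
import itertools

# A toy in-memory service registry in the spirit of Eureka:
# register instances under a service name, resolve with round-robin.
class ServiceRegistry:
    def __init__(self):
        self._instances = {}   # service name -> list of "host:port" strings
        self._cursors = {}     # service name -> round-robin iterator

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)
        # Rebuild the round-robin cursor over the updated instance list.
        self._cursors[name] = itertools.cycle(self._instances[name])

    def resolve(self, name):
        # Round-robin load balancing over the registered instances.
        return next(self._cursors[name])

registry = ServiceRegistry()
registry.register("order-service", "10.0.0.5:8080")
registry.register("order-service", "10.0.0.9:8080")

print(registry.resolve("order-service"))  # 10.0.0.5:8080
print(registry.resolve("order-service"))  # 10.0.0.9:8080
```

A real registry also expires instances via heartbeats; that bookkeeping is omitted here.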

API Gateway Pattern

The Gateway pattern creates a single entry point to provide different APIs to different types of clients.
If you encounter latency issues during a service request, you might want to reroute the request to a different service.
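The single-entry-point idea can be sketched with a hypothetical path-based route table (the service names and ports are illustrative):

```python
# Minimal sketch of path-prefix routing in an API gateway.
# Real gateways add auth, rate limiting, retries, and rerouting on latency.
ROUTES = {
    "/orders":    "http://order-service:8080",
    "/customers": "http://customer-service:8080",
}

def route(path):
    # Forward the request to the backend whose prefix matches the path.
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no backend for {path}")

print(route("/orders/42"))  # http://order-service:8080/orders/42
```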

BFF Pattern

Rather than one generic entry point like a plain API gateway, we provide APIs optimized for each type of client. As a result, each client category gets its own BFF (Backend for Frontend) and goes through an API gateway.

External Configuration Store Pattern

A pattern that lets you change information about the resources an MSA uses without touching the code. We can refer to this as the config principle:
configuration information used by the code is managed completely separately from the code.
Development, test, and production IP addresses, ports, and so on are supplied separately as environment variables.
Going further, Spring Cloud Config can inject this configuration information into containers across multiple MSA environments at runtime.
Kubernetes provides this external configuration store pattern with its ConfigMap resource.
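A small sketch of the config principle: the code reads everything environment-specific from environment variables (DB_HOST/DB_PORT are illustrative names; in Kubernetes a ConfigMap would inject them):

```python
import os

# The code holds no environment-specific values; it reads them at startup.
# In Kubernetes, a ConfigMap can populate these environment variables.
def load_config():
    return {
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
    }

# Simulate configuration injected by the platform (illustrative value).
os.environ["DB_HOST"] = "orders-db.prod.internal"
print(load_config())
```

The same image can then run in dev, test, and production with nothing but the injected variables changing.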

Authentication/Authorization Patterns

It would be inefficient to duplicate authentication/authorization across multiple microservices.
In the traditional monolith pattern, you store user information in a server-side session.
In MSA, however, instead of each service keeping its own session, sessions are stored in shared storage so that all services see the same user data.
We use Redis or Memcached for that session storage.

Again, we’ll use an API gateway to handle the client-side JWT so that each service doesn’t have to handle authentication/authorization.

  1. The client calls a service through the API gateway
  2. The service responds that the client is not authorized to access it
  3. The client then requests authentication from the authentication service via the API gateway
  4. The authentication service responds with a JWT
  5. The client carries the JWT and requests the service again through the API gateway
  6. The service validates the JWT and responds with success
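The token step can be illustrated with a toy HMAC-signed token. This is a simplified JWT-like format built from the standard library, not a real JWT library; the secret and field names are illustrative:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative; a real service keeps this out of code

def issue_token(payload: dict) -> str:
    # JWT-style structure: base64 body + HMAC-SHA256 signature
    # (the JWT header and standard claims are omitted for brevity).
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str) -> dict:
    body, sig = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"sub": "user-1", "role": "customer"})
print(verify_token(token)["sub"])  # user-1
```

The point is that any service holding the secret can verify the token locally, without a shared session store.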

Circuit Breaker Pattern

The circuit breaker pattern is like the stock market in South Korea, where price limits and trading halts are set to keep an overheated market in check.
The idea is to isolate a failing service and prevent its failure from spreading to the other services.

  1. The client sends a request
  2. Service A calls service B
  3. Service B fails
  4. The circuit breaker pattern isolates service B
  5. Service A returns a fallback response instead
  6. Failover proceeds
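The steps above can be sketched as a minimal circuit breaker (the thresholds and names are illustrative):

```python
import time

# Minimal circuit breaker: after `max_failures` consecutive errors the
# circuit opens, and further calls fail fast with a fallback response,
# isolating the failing service instead of hammering it.
class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()      # circuit open: fail fast
            self.opened_at = None      # half-open: allow one trial call
        try:
            result = fn()
            self.failures = 0          # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()

breaker = CircuitBreaker(max_failures=2)

def call_service_b():              # stand-in for a call to failing service B
    raise ConnectionError("service B is down")

for _ in range(3):
    print(breaker.call(call_service_b, fallback=lambda: "cached response"))
```

Service A keeps answering ("cached response") while B stays isolated until the reset window elapses.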

Log Collection Patterns

How should I manage logs for my microservices?

Depending on usage, microservice instances are spawned and deleted at any time, and logs stored locally can disappear with them.

The typical stack we use in this case is the ELK stack.

It is a set of tools for collecting and examining logs from services in the form of event streams: Elasticsearch, Logstash, and Kibana.

  1. Collect logs with Logstash
  2. Move the collected logs into a queue area (such as Redis)
    • An in-memory buffer sits in between to absorb bursts, because if logs pile up too quickly the log store runs into performance issues
  3. Store the logs in Elasticsearch
    • The logs are indexed here
  4. Visualize with Kibana to view dashboards
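The collect, buffer, index flow can be sketched with plain in-memory stand-ins for Logstash, the queue, and Elasticsearch (all names here are illustrative):

```python
from collections import deque

buffer = deque()   # queue area absorbing bursts (Redis in the text)
index = []         # stand-in for the Elasticsearch index

def collect(service, message):
    # Logstash stand-in: normalize the event and enqueue it.
    buffer.append({"service": service, "message": message})

def index_logs():
    # Indexer stand-in: drain the buffer into the searchable store.
    while buffer:
        index.append(buffer.popleft())

collect("order-service", "order 42 created")
collect("customer-service", "limit check passed")
index_logs()
print(len(index))  # 2
```

The buffer decouples bursty producers from the indexing rate, which is exactly the role the queue plays in the ELK pipeline.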

Service Mesh Pattern

In the early days of MSA,
it was very cumbersome to build each of the operations-management services mentioned above (API gateway, service registry, config, and so on) separately.

And those pieces are tied to their stack: services built on other technology stacks can’t simply import the Spring Cloud components.

So recently, the service mesh pattern, which handles all of this at the network infrastructure layer, has been gaining favor.
A service mesh is part of the infrastructure layer that handles communication between services, and it solves many of the problems mentioned above.

A prime example of this is Google’s Istio.
It uses a sidecar pattern, deployed as a separate container alongside the application.
It is centrally controlled by a control plane, which communicates with the sidecars to manage operations. As such, the sidecars are completely independent of the business logic.
In Kubernetes it is deployed in a pod with the service container plus an Envoy container, which is the sidecar implementation.

Composite Pattern

There is a methodology for splitting the frontend into multiple services and making them independently deployable.
Just like backend microservices, they are separated by function and work in combination.
In other words, they become micro-frontends, each of which can be used to compose a flexible UI.
This means that the menu bar, header, content, footer, etc. can each be flexibly configured and operated separately from each other.

Microservice Connection Pattern

Which method should I use to make microservice calls between frontend and backend?

  1. Synchronous
    • When a request is made, the caller waits for the response or retries the call (a high degree of dependency)
  2. Asynchronous
    • You don’t wait for a response to a request and move on to the next thing.

You’ll probably need a message broker like Kafka: you pass messages to the broker asynchronously, and the broker guarantees delivery. The additional concern is that the message broker itself must scale with the volume of messages it processes (the load issue).

This approach has the advantage of looser coupling and weaker dependencies among the communicating services, with Kafka mediating between them.
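The asynchronous hand-off can be sketched with a thread-safe queue standing in for the broker (Kafka’s actual API is very different; this only shows that the producer does not wait for the consumer):

```python
import queue
import threading

broker = queue.Queue()   # stand-in for the message broker
handled = []             # records what the consumer processed

def consumer():
    # The consumer processes messages at its own pace.
    while True:
        msg = broker.get()
        if msg is None:          # shutdown signal for this demo
            break
        handled.append(msg)

t = threading.Thread(target=consumer)
t.start()

# The producer enqueues and moves on immediately; no waiting for a response.
broker.put({"event": "OrderCreated", "order_id": 42})
broker.put(None)
t.join()
print(handled)
```

The producer and consumer never call each other directly; the queue is the only coupling point, which is the looser dependency the text describes.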

Storage Separation Pattern

Even if you split the system into multiple microservices,
if they all share a single unified database, that database stays busy even during a lull in requests to any one service.

In that case, the automatic scaling of microservices is meaningless.
The complementary storage separation pattern has each microservice own its own data, meaning it passes data to other services only through API responses.

The benefits are:

  • An information-hiding effect, since data is obtained only through APIs
  • Freedom to choose polyglot storage
  • Reduced data-level impact between services, since each service is independent

Distributed Transaction Processing Pattern

The distributed transaction processing pattern helps ensure data consistency.
As mentioned in Chapter 1, you can use techniques like two-phase commit, but they tend to have performance issues due to locking.
The Saga pattern handles distributed transactions while keeping the microservices independent.

Example:

  1. The ordering service creates a draft order and publishes an event (ending its local transaction)
  2. The customer service sees the draft-order event and looks up the credit limit
    • Approve the credit if the order is within the limit
    • Publish a limit-exceeded result if it is not
  3. The order service sees the event published by the customer service and processes it
    • Order approved
    • Order rejected
Instead of bundling these into one big transaction as above, consistency is reached through the chain of separate local transactions.
It works even better when you apply them together with a message broker.
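The order/credit saga above can be sketched as three local transactions exchanging events through a shared in-memory list (a message broker would carry these events in practice; all names and the limit are illustrative):

```python
CREDIT_LIMIT = 100  # illustrative credit limit

def ordering_service(events, amount):
    # Local transaction 1: create a draft order and publish an event.
    events.append({"type": "OrderDrafted", "amount": amount})

def customer_service(events):
    # Local transaction 2: check the credit limit and publish the result.
    draft = next(e for e in events if e["type"] == "OrderDrafted")
    ok = draft["amount"] <= CREDIT_LIMIT
    events.append({"type": "CreditApproved" if ok else "CreditRejected"})

def order_finalizer(events):
    # Local transaction 3: approve, or reject as the compensating outcome.
    if any(e["type"] == "CreditApproved" for e in events):
        return "APPROVED"
    return "REJECTED"

events = []
ordering_service(events, amount=80)
customer_service(events)
print(order_finalizer(events))  # APPROVED
```

Each step commits on its own; a rejection is not a rollback but a compensating decision recorded as another event.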

CQRS Pattern (Command Query Responsibility Segregation)

CQRS stands for Command Query Responsibility Segregation.
Rather than performing all CRUD against the same DB, it separates the two sides.
It is used to separate the DB’s view tables from its writable tables, so that query services are split off from write services. Separating them reduces load and latency.
You can also apply a message broker to split them into separate microservices.

  1. Microservices that handle the commands: Create, Update, Delete
  2. A query microservice that serves reads from a view table only

The lookup service might use a higher-performance stack,
or it might scale instances differently than the command-side service.

This CQRS pattern also solves other problems.

  1. The order-history view pulls together products, orders, customers, and shipments
  2. The microservices in the order-history service area each have their own storage
  3. Event handlers reflect each service’s changes into the view

We get another benefit: loose dependencies and coupling.
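A compact sketch of the command/query split, with an in-process event handler keeping a denormalized view table for the query side (all names are illustrative; in practice the event would travel over a broker):

```python
write_store = {}   # command side: the writable model
view_table = {}    # query side: denormalized rows for fast reads

def handle_create_order(order_id, customer, product):
    # Command side: persist the write model, then publish an event.
    order = {"customer": customer, "product": product}
    write_store[order_id] = order
    on_order_created(order_id, order)   # in-process event delivery here

def on_order_created(order_id, order):
    # Event handler: project the change into the read-side view table.
    view_table[order_id] = f'{order["customer"]} ordered {order["product"]}'

def query_order(order_id):
    # Query side: reads touch only the view table, never the write model.
    return view_table[order_id]

handle_create_order(1, "Alice", "book")
print(query_order(1))  # Alice ordered book
```

Because reads and writes use different stores, each side can be scaled or re-implemented on its own stack, as the text notes.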

Event Sourcing Pattern

As above, with the Saga and CQRS patterns you must always be concerned about consistency. To ensure it, event messages must be published constantly, and the application must pull objects apart and convert them into SQL statements. Considering the many concurrent updates and potential deadlocks that can occur before the data is processed, this may not be safe from the data’s perspective.

So why not just store the state-change transactions themselves?

This is the event sourcing pattern: store state-change events directly in an event store, and when needed, compute the current state by replaying the changes from the starting point up to now.
You can also reduce the amount of computation by running a batch job that snapshots the computed state.

The idea is that a CQRS-style command/query service then doesn’t need to handle full CRUD against the event store; create and read (CR) are enough.

Example:

  1. The event’s ID, type, and object data (JSON, etc.) are stored in the event store
  2. Other services then only need to perform reads

As shown above, you only need to fold over the accumulated transactions at the application level. It’s amazing.
And because it stores the state transactions explicitly, it is very easy to integrate with external applications.
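A minimal event-sourcing sketch for an account balance (the event types and amounts are illustrative): state changes are appended as events, and the current state is computed by replaying them.

```python
event_store = []   # append-only log of state-change events

def append_event(event_type, amount):
    # Only appends (the "C" in CR); nothing is ever updated or deleted.
    event_store.append({"type": event_type, "amount": amount})

def current_balance():
    # Replay the events from the start to derive the current state.
    balance = 0
    for e in event_store:
        balance += e["amount"] if e["type"] == "Deposited" else -e["amount"]
    return balance

append_event("Deposited", 100)
append_event("Withdrawn", 30)
append_event("Deposited", 5)
print(current_balance())  # 75
```

A snapshot batch, as the text mentions, would simply cache this fold up to some event and replay only the tail.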