Spring Boot Microservices Kubernetes Tutorial - Deploying to Kubernetes
Normally, we create Docker images using Dockerfiles: we write a Dockerfile, provide the necessary instructions, and then run the docker build -t <image-name> . command to generate the image. However, creating and maintaining Dockerfiles can be challenging. An alternative approach is Cloud Native Buildpacks, which automatically create Docker images directly from our source code.
Buildpacks are an industry-standard way to create container images. One advantage of buildpacks is that we don't need to create and maintain a Dockerfile.
<image>
    <name>lakshmiperumal/new-${project.artifactId}</name>
    <builder>dashaun/builder:tiny</builder>
    <publish>true</publish>
</image>
<docker>
    <publishRegistry>
        <username>lakshmiperumal</username>
        <!--suppress UnresolvedMavenProperty -->
        <!-- the password here was given manually, for testing -->
        <password>${env.DOCKER_PASSWORD}</password>
    </publishRegistry>
</docker>
Builders are part of the Cloud Native Buildpacks project and are used for creating container images without needing a Dockerfile. The builder used here, dashaun/builder:tiny, is a lightweight builder image.
After setting the password, we have to build our project; for that we have the spring-boot:build-image Maven goal. Running it builds the complete project, creates the image, and automatically pushes it to Docker Hub.
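The build can then be run from the project root; a sketch assuming the Maven wrapper is present and using the DOCKER_PASSWORD environment variable referenced in the plugin configuration above:

```shell
# Provide the registry password that the plugin reads via ${env.DOCKER_PASSWORD}
export DOCKER_PASSWORD=<your-docker-hub-password>
# Build the project, create the image with Buildpacks, and publish it
./mvnw spring-boot:build-image
```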
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
To work with Kubernetes, first we have to create a k8s folder under the root project, then create one more folder named kind under k8s, and create the files below.
kind-config.yaml —> .yaml is the conventional extension for Kubernetes manifests (kubectl also accepts .yml)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # port forward 80 on the host to 80 on this node
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "127.0.0.1"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
The above configuration can be obtained from https://kind.sigs.k8s.io/
kind create cluster --name microservices --config kind-config.yaml
Now we have to install Go, and then install kind (instructions at https://kind.sigs.k8s.io/) using the command below.
go install sigs.k8s.io/kind@v0.26.0
Even after using the above command it may not install correctly; in that case we need to install it manually and set the path in the environment variables.
cd k8s/kind
./create-kind-cluster.sh —> a permission denied error will come
chmod +x create-kind-cluster.sh
After running the above, the cluster will be created. Before creating it, make sure Docker Desktop is open and running.
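The create-kind-cluster.sh script itself is not shown in these notes; a minimal sketch, assuming it simply wraps the kind command with the config file in this folder:

```shell
#!/bin/sh
# Create a local kind cluster named "microservices" from kind-config.yaml
kind create cluster --name microservices --config kind-config.yaml
```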
Now we have to create manifest files to run our services. For that, under k8s we have to create a manifests —> infrastructure folder and enter into that path.
In the docker-compose.yaml file we added all the infrastructure containers (mysql, zookeeper, etc.); we need to create a deployment for each of them in Kubernetes.
Before creating the deployments we have to install kubectl. To install kubectl, follow the steps below:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
After that, go to the path below:
C:\SoftwareProjects\Microservices_Application\Microservices_Application\k8s\manifests\infrastructure>
To create a deployment for mysql, run the command below.
kubectl create deployment mysql --image=mysql:8.3.0 --port=3307 --replicas=1 --dry-run=client -o yaml
kubectl create deployment:
- This command is used to create a new Deployment in Kubernetes.
--image=mysql:8.3.0:
- This option specifies the Docker image to be used for the container running inside the Deployment. Here, mysql:8.3.0 means we are using MySQL image version 8.3.0, which contains the MySQL database server with the configuration and software needed to run it.
--port=3307:
- This option sets the container port that the pod exposes; it becomes containerPort in the generated manifest. Note that this is a port inside the cluster, not an external one; to reach it from outside the cluster you still need a Service and port forwarding.
--replicas=1:
- This sets the number of replicas (pods) that should be running for this Deployment. Here, --replicas=1 means we want one instance of the MySQL container running in the cluster.
--dry-run=client:
- This flag allows you to see the Kubernetes resource configuration without actually creating it.
-o yaml:
- This option tells kubectl to output the resource configuration in YAML format.
So we got the output in YAML format. Now we have to create a mysql.yaml file under manifests and paste the copied output into it, removing any unnecessary generated fields.
After creating the deployment we have to create a service as well, using the command below.
kubectl create service clusterip mysql --tcp=3307:3307 --dry-run=client -o yaml
This also generates a YAML manifest; we have to paste it into the same mysql.yaml file, separated with ---.
Below are both the deployment and the service. Inside the deployment we added the password as well; in production it is not good practice to hardcode a password.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.3.0
        name: mysql
        ports:
        - containerPort: 3307
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "mysql"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - port: 3307
    protocol: TCP
    targetPort: 3307
  selector:
    app: mysql
  type: ClusterIP
We have to turn that password into a Secret.
kubectl create secret generic mysql-secrets --from-literal=mysql_root_password=mysql --dry-run=client -o yaml
Running the above command outputs a Secret manifest with the password base64-encoded. This is not a fully secure approach; since we are only using it for an internal deployment it is acceptable here, but a production-grade application would store such sensitive data in AWS Secrets Manager or Vault.
After the Secret YAML is generated, we have to paste it into the same mysql.yaml file and reference the password in the deployment YAML as below.
name: mysql
ports:
- containerPort: 3307
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      key: mysql_root_password
      name: mysql-secrets
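Note that a Secret stores values base64-encoded, not encrypted. This is easy to verify locally; the commented kubectl line additionally assumes the mysql-secrets Secret has been applied to the cluster:

```shell
# base64 is reversible encoding, not encryption:
echo -n mysql | base64        # prints bXlzcWw=
echo -n bXlzcWw= | base64 -d  # prints mysql
# Read the stored value back from the cluster:
# kubectl get secret mysql-secrets -o jsonpath='{.data.mysql_root_password}' | base64 -d
```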
Now we have to create a persistent volume by adding the code below to the same mysql.yaml file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: 'standard'
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/mysql
Now we have to create a persistent volume claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  storageClassName: 'standard'
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Now we have to map our init.sql file into this mysql.yaml file as a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config
data:
  initdb.sql: |
    CREATE DATABASE IF NOT EXISTS order_service;
    CREATE DATABASE IF NOT EXISTS inventory_service;
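The PersistentVolumeClaim and ConfigMap above only take effect if the Deployment mounts them. A sketch of the additions to the pod spec, assuming the mysql-pvc and mysql-initdb-config names defined above (the mount paths are the standard ones for the mysql image):

```yaml
# added under the mysql container in the Deployment:
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql               # MySQL data directory
        - name: mysql-initdb
          mountPath: /docker-entrypoint-initdb.d  # init scripts run on first start
# added under the pod spec (sibling of containers):
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-initdb
        configMap:
          name: mysql-initdb-config
```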
After adding all these details, we have to run the command below.
kubectl apply -f mysql.yaml
output:
To check whether it deployed or not, use the command → kubectl get all
output:
To get the secrets → kubectl get secrets
To get the ConfigMaps created → kubectl get configmaps
We can check the PVC as well → kubectl get pvc
1Gi → means 1 gibibyte (2^30 bytes)
We can also get the logs of our deployment.
kubectl get pods
→ lists the running pods
kubectl logs -f <pod-name>
kubectl port-forward svc/mysql 3307:3307
—> The kubectl port-forward command is used to forward local port requests to a service in a Kubernetes cluster. This is useful for accessing applications running within the cluster from your local machine.
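With the port forward running, the database can be reached from the local machine. A quick check, assuming a local mysql client is installed and that the server inside the container is actually listening on 3307 (the mysql image defaults to 3306 unless configured otherwise):

```shell
# In another terminal, connect through the forwarded port
mysql -h 127.0.0.1 -P 3307 -u root -p
# enter the root password from mysql-secrets, then run e.g. SHOW DATABASES;
```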
Likewise we have to implement this for the other services (Grafana, Postgres, Loki, MongoDB, etc.).
Now we have to remove any hardcoded values we have in our project; using the @Value annotation we can externalize them. In the API gateway service we added the URLs as hardcoded values, so now we can remove those.
import static org.springframework.cloud.gateway.server.mvc.filter.BeforeFilterFunctions.setPath;
import static org.springframework.cloud.gateway.server.mvc.handler.GatewayRouterFunctions.route;

import java.net.URI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.gateway.server.mvc.filter.CircuitBreakerFilterFunctions;
import org.springframework.cloud.gateway.server.mvc.handler.HandlerFunctions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;
import org.springframework.web.servlet.function.RequestPredicates;
import org.springframework.web.servlet.function.RouterFunction;
import org.springframework.web.servlet.function.ServerResponse;

@Configuration
public class Routes {

    @Value("${product.service.url}")
    private String productServiceUrl;

    @Value("${order.service.url}")
    private String orderServiceUrl;

    @Value("${inventory.service.url}")
    private String inventoryServiceUrl;

    @Bean
    public RouterFunction<ServerResponse> productServiceRoute() {
        return route("product_service")
                .route(RequestPredicates.path("/api/product"), HandlerFunctions.http(productServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("productServiceCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> productServiceSwaggerRoute() {
        return route("product_service_swagger")
                .route(RequestPredicates.path("/aggregate/product-service/v3/api-docs"), HandlerFunctions.http(productServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("productServiceSwaggerCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .filter(setPath("/api-docs"))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> orderServiceRoute() {
        return route("order_service")
                .route(RequestPredicates.path("/api/order"), HandlerFunctions.http(orderServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("orderServiceCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> orderServiceSwaggerRoute() {
        return route("order_service_swagger")
                .route(RequestPredicates.path("/aggregate/order-service/v3/api-docs"), HandlerFunctions.http(orderServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("orderServiceSwaggerCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .filter(setPath("/api-docs"))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> inventoryServiceRoute() {
        return route("inventory_service")
                .route(RequestPredicates.path("/api/inventory"), HandlerFunctions.http(inventoryServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("inventoryServiceCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> inventoryServiceSwaggerRoute() {
        return route("inventory_service_swagger")
                .route(RequestPredicates.path("/aggregate/inventory-service/v3/api-docs"), HandlerFunctions.http(inventoryServiceUrl))
                .filter(CircuitBreakerFilterFunctions.circuitBreaker("inventoryServiceSwaggerCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .filter(setPath("/api-docs"))
                .build();
    }

    @Bean
    public RouterFunction<ServerResponse> fallbackRoute() {
        return route("fallbackRoute")
                .GET("/fallbackRoute", request -> ServerResponse.status(HttpStatus.SERVICE_UNAVAILABLE).body("service unavailable, please try again later"))
                .build();
    }
}
The corresponding defaults in application.properties (in Kubernetes these should be overridden to point at the in-cluster service names):
product.service.url=http://localhost:8080
order.service.url=http://localhost:8083
inventory.service.url=http://localhost:8084
Similarly, we have to implement YAML files for the main services (order service, inventory service, notification service, etc.).
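As an illustration, a manifest for one such service could look like the sketch below; the image name and port are assumptions, not taken from the actual repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
      - name: product-service
        image: lakshmiperumal/new-product-service  # image published by spring-boot:build-image (assumed tag)
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-service
  ports:
  - port: 8080
    targetPort: 8080
```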
After adding them, run the command below.
kubectl apply -f infrastructure
—> it will apply all files under the infrastructure directory
kubectl get pods
→ to check the running pods.
In the output above, postgres is not running; we have to check its details to find out the issue.
kubectl describe pod postgres-54845b8f4f-xmw6m
→ to check the events and details for that particular pod
There was a port-related configuration mismatch; after correcting it, postgres runs successfully.
ports:
- name: postgres
  port: 5433       # Service port used for access within the cluster
  targetPort: 5432 # Container's PostgreSQL port
  protocol: TCP
kubectl get pods
kubectl get svc
Now we can verify whether we are able to access Keycloak. Since it runs inside the cluster, we cannot reach it directly on http://localhost:8080; we have to do port forwarding.
kubectl port-forward svc/keycloak 8080:8080
After the above command, open http://localhost:8080 → it will open Keycloak successfully.
We have to do port forwarding for all such services; then you will be able to see Grafana as well.
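Each UI can be forwarded to a local port in the same way; the service names and ports below are assumptions and should be checked with kubectl get svc first:

```shell
# Grafana commonly listens on 3000; then open http://localhost:3000
kubectl port-forward svc/grafana 3000:3000
```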
complete source code: https://github.com/Malalakshmi/Microservices_Application_Main.git