I’ve finally been looking into Kubernetes properly of late. When learning about containers, pods, and services, pretty much every doc, article, and video tells you that:
- Containers in the same pod can just use localhost and the port to communicate.
- If a pod needs to talk to another pod, it can do so if it knows the IP address and the exposed port(s) of that pod.
- Pods tend to be transient, so depending on their IP addresses for communication makes the whole process brittle/unreliable. To solve this, one can create services which act as an abstraction over the pods. A service has a single DNS name and load balances requests among the pods.
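To make the contrast in that last point concrete, here's a minimal sketch; the pod IP and service name below are hypothetical, just to show the two addressing styles side by side:

```python
# Hypothetical in-cluster URLs. The pod IP below is made up; it changes
# whenever the pod is rescheduled, while a Service's DNS name stays stable.
pod_url = "http://10.244.0.12:9090/quote"   # brittle: the IP dies with the pod
svc_url = "http://quotes-svc:9090/quote"    # stable: resolved by the cluster's DNS

# A client that bakes in pod_url breaks on every pod restart; svc_url keeps
# working because the Service routes to whatever pods match its selector.
print(svc_url)
```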
But what exactly would such an app with multiple pods talking to each other look like? Let’s try out a simple app that demonstrates the third scenario described above, since that’s the recommended approach. Let’s consider a simple app with a single endpoint, /quote, which simply returns a quote.
Our app would consist of the following services:
- A quotes service – this would return a quote along with various metadata associated with it (e.g., author, category, length, etc.). To keep it simple, we’ll just call https://quotes.rest/qod to get the quote and return it from the /quote endpoint of this service.
- A proxy service – this would sit in between the quotes API and the users, and present the users with a simplified response, with all the additional metadata stripped from the payload.
For ease of implementation and clarity, let’s use Ballerina to implement these 2 services. In addition to having network abstractions built into the language, it also provides support for generating Docker and Kubernetes artifacts for our applications. But for the purpose of this exercise, we’ll just stick to generating the Docker images.
We’ll then deploy this locally on Minikube as follows:
- Quote Service Pod – for running the quotes API
- ClusterIP Service – a ClusterIP type service which will act as the abstraction for the quote service pods
- Quote Proxy pod – for running the proxy service for the quotes API
- NodePort Service – a service which will act as the abstraction for the quote proxy service. Since this is a NodePort type service, it will also make the quote proxy accessible from outside the cluster.
The Quote Service
Create a new Ballerina project for the quote service as follows:
$ bal new quote_service
This will initialize a new Ballerina project for the service. In the generated Ballerina.toml file, add the build option cloud="docker". The complete Ballerina.toml should look something like the following:
[package]
org = "pubudu"
name = "quotes"
version = "1.0.0"
distribution = "2201.1.1"
[build-options]
cloud="docker"
Create a new file at the project root called Cloud.toml and add the details about the Docker image you want to produce to it. It should look something like the following:
[container.image]
repository="pubuduf"
name="quotes"
tag="1.0.0"
[settings]
buildImage=true
Finally, replace the auto-generated source code with the following service:
import ballerina/http;

final http:Client ep = checkpanic new ("https://quotes.rest");

service on new http:Listener(9090) {
    resource function get quote() returns json|error {
        return check ep->get("/qod");
    }
}
The above code does the following:
- Opens port 9090 and starts listening for incoming HTTP connections
- Adds a new endpoint, /quote, to the listener, which accepts HTTP GET requests
- Within the /quote resource, simply gets the quote of the day from the https://quotes.rest API
The Quote Proxy
Steps for creating the proxy would be pretty similar to the above. Create a new project for the proxy and add details/code to it as follows:
Ballerina.toml
[package]
org = "pubudu"
name = "quote_proxy"
version = "1.0.0"
distribution = "2201.1.1"
[build-options]
cloud="docker"
Cloud.toml
[container.image]
repository="pubuduf"
name="quote-proxy"
tag="1.0.0"
[settings]
buildImage=true
Source code
import ballerina/log;
import ballerina/http;

final http:Client ep = checkpanic new ("http://quotes-svc:9090");

service on new http:Listener(8080) {
    resource function get quote() returns json|http:BadGateway|http:InternalServerError {
        json|error resp = ep->get("/quote");
        if resp is error {
            return http:INTERNAL_SERVER_ERROR;
        }
        do {
            json[] quotes = <json[]>check resp.contents.quotes;
            return {"quote": check quotes[0].quote};
        } on fail var err {
            log:printError("Invalid JSON payload received from backend", 'error = err, payload = resp);
            return http:BAD_GATEWAY;
        }
    }
}
The noteworthy part in this code is the HTTP client initialization. Note the URL: http://quotes-svc:9090. The host name here is the name we will give to the Kubernetes ClusterIP service we create for the quotes API. As you can see, we simply use the ClusterIP service’s name when we want to invoke the quotes API. This URL can be made configurable, but for now, we’ll hard-code it for simplicity of illustration.
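To see exactly what the proxy's transformation does, here's a Python sketch of the same logic. The payload shape is the one the Ballerina code above parses (contents.quotes[0].quote); the sample values and metadata fields are made up for illustration:

```python
# A made-up sample in the shape the proxy expects from the quotes backend:
# { "contents": { "quotes": [ { "quote": ..., plus metadata fields } ] } }
backend_response = {
    "contents": {
        "quotes": [
            {
                "quote": "Simplicity is the soul of efficiency.",
                "author": "Austin Freeman",   # metadata the proxy strips out
                "category": "inspire",
                "length": 38,
            }
        ]
    }
}

def simplify(resp: dict) -> dict:
    # Mirrors the Ballerina resource: pick quotes[0].quote, drop everything else.
    quotes = resp["contents"]["quotes"]
    return {"quote": quotes[0]["quote"]}

print(simplify(backend_response))
```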
Building the Services
Once we are done with coding the 2 services, we can build them. But before doing that, make sure that:
- Minikube is running
- You have run eval $(minikube docker-env) on the terminal. This allows you to build and push Docker images to the Docker registry inside the Minikube instance instead of the registry on your host machine. Otherwise, when you try to run a pod with the image, Kubernetes will complain that the image cannot be found.
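If the eval $(...) idiom is unfamiliar: minikube docker-env prints shell export statements (DOCKER_HOST and friends), and eval executes that printed text in the current shell, so subsequent docker commands talk to Minikube's daemon. The same mechanism, demonstrated with a harmless stand-in command instead of minikube:

```shell
# `minikube docker-env` prints lines like `export DOCKER_HOST=...`;
# `eval` runs that printed text in the *current* shell session.
# Stand-in demo of the same mechanism:
eval "$(echo 'export DEMO_VAR=set_via_eval')"
echo "$DEMO_VAR"
```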
To build a Ballerina project, simply run the bal build command from the root of the project directory. You should see an output similar to the following:
$ bal build
Compiling source
pubudu/quotes:1.0.0
Generating executable
Generating artifacts...
@kubernetes:Docker - complete 2/2
Execute the below command to run the generated Docker image:
docker run -d -p 9090:9090 pubuduf/quotes:1.0.0
target/bin/quotes.jar
Deploying in Kubernetes
Now that we have our Docker images, we can go ahead and deploy our app in Kubernetes. Let’s take a quick look at what our Kubernetes artifacts would be and what they would look like. For the purpose of this illustration, let’s create and deploy the pods manually, instead of using a Deployment.
We will run the 2 services in separate pods. So we’ll need pod definitions for both.
quotes-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: quotes-service
  labels:
    app: quotes-service
spec:
  containers:
  - name: quotes-pod
    image: pubuduf/quotes:1.0.0
    imagePullPolicy: Never
quotes-proxy-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: quotes-proxy-pod
  labels:
    app: quotes-proxy
spec:
  containers:
  - name: quotes-proxy
    image: pubuduf/quote-proxy:1.0.0
    imagePullPolicy: Never
Then, we’ll also need 2 services for abstracting these 2 pods.
quotes-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: quotes-svc
  labels:
    svc: quotes-svc
spec:
  type: ClusterIP
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: quotes-service
For the quotes service, we select ClusterIP as the service type since we don’t need to expose it outside of the cluster. We are exposing the service on the same port number (9090). Also note the name of the service. It’s the same name we used when initializing the HTTP client within our proxy app.
quotes-proxy-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: quotes-proxy-svc
  labels:
    svc: quotes-proxy-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
  selector:
    app: quotes-proxy
Here we set the service type to NodePort since we do want to make the proxy accessible from outside the cluster, and we expose it to the outside on port 30000.
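The three port fields in a NodePort service are easy to mix up. Annotated, with the values from the manifest above:

```yaml
spec:
  type: NodePort
  ports:
  - port: 8080        # the Service's own port, used by in-cluster clients
    targetPort: 8080  # the container port traffic is forwarded to
    nodePort: 30000   # the port opened on every node's IP for external access
```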
We can then deploy these using the following commands:
$ kubectl apply -f quotes-pod.yaml
$ kubectl apply -f quotes-proxy-pod.yaml
$ kubectl apply -f quotes-svc.yaml
$ kubectl apply -f quotes-proxy-svc.yaml
Now you should have these 2 pods and 2 services up and running. To verify that everything is working as expected, let’s try sending a request to our app. First, find out the IP address of the node by running either kubectl cluster-info or minikube ip. Then we can send the request as follows:
$ curl -v http://192.168.49.2:30000/quote
* Trying 192.168.49.2:30000...
* Connected to 192.168.49.2 (192.168.49.2) port 30000 (#0)
> GET /quote HTTP/1.1
> Host: 192.168.49.2:30000
> User-Agent: curl/7.84.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 152
< server: ballerina
< date: Thu, 11 Aug 2022 19:58:53 GMT
<
* Connection #0 to host 192.168.49.2 left intact
{"quote":"Time to improve is limited. The clock is always on and doesn't care if you don't feel like it. Someone else does and they're passing you by."}
If everything is in order, you should see a response similar to the above.
So there we have it. In this blog post, we:
- created an API and deployed it in Kubernetes
- created a Kubernetes service to abstract the API and gave it a consistent DNS name: quotes-svc
- created a proxy for the API and deployed it
- created a Kubernetes service to abstract the proxy and made it accessible to the outside
So basically, we talk to the proxy service through the NodePort service, and the proxy in turn talks to the quotes API through the ClusterIP service.