
Friday, March 10, 2023

Going Serverless with KNative


Serverless computing has rapidly gained popularity in recent years as it provides a platform for developers to deploy their applications without worrying about infrastructure management. KNative is an open-source platform that simplifies the deployment and management of serverless workloads on Kubernetes, allowing developers to focus on building and scaling their applications. In this article, we will take a closer look at KNative and its benefits.
  • What is KNative?
  • The Architecture of KNative
  • Install KNative on Docker Desktop
  • KNative serving example in Java with Quarkus
  • Conclusion

What is KNative?

KNative is a set of components that extend Kubernetes to provide a platform for building, deploying, and managing serverless applications. It provides a range of features, such as auto-scaling, event-driven processing, and routing, which simplify the deployment and management of serverless workloads. KNative is built on top of Kubernetes, making it easy to integrate with existing Kubernetes clusters.

Benefits of KNative

  1. Simplified Deployment: KNative simplifies the deployment of serverless applications by abstracting away the underlying infrastructure. Developers can focus on writing code and let KNative handle the deployment and management of their application.
  2. Autoscaling: KNative provides auto-scaling capabilities that automatically scale applications up or down based on demand, including scaling to zero. This keeps applications responsive under load while freeing resources when demand drops.
  3. Event-Driven Processing: KNative supports event-driven processing, allowing developers to create functions that respond to events triggered by external systems. This makes it easy to create serverless applications that react to real-time events.
  4. Language-Agnostic: KNative supports a range of programming languages, making it easy for developers to write code in their language of choice.
  5. Open-Source: KNative is an open-source platform, which means that developers can contribute to its development and customize it to meet their specific requirements.

Use Cases of KNative

  1. Serverless Applications: KNative is ideal for building serverless applications that can be scaled up or down based on demand.
  2. Event-Driven Applications: KNative's event-driven processing capabilities make it ideal for building real-time applications that respond to events triggered by external systems.
  3. Microservices: KNative can be used to build and deploy microservices, allowing developers to break down their applications into smaller, more manageable components.

The Architecture of KNative

The architecture of KNative can be divided into three main components: Serving, Eventing, and Build. Each component provides a set of functionalities that simplify the deployment and management of serverless workloads.

Serving

The Serving component of KNative provides a platform for deploying and managing serverless applications. It allows developers to deploy containerized applications and functions, automatically scaling them based on demand. Serving is built on top of Kubernetes and relies on a pluggable networking layer, such as Istio or Kourier, for traffic management and security. Serving consists of the following components:
  • Knative Serving: The core component of KNative Serving, responsible for deploying and managing serverless applications.
  • Networking Controller: A component that manages the networking-layer resources (for example, Istio or Kourier objects) required for routing and traffic management.
  • Activator: A component that activates containers on demand and routes traffic to them.
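To make this concrete, here is a minimal sketch of a Knative Service manifest; the name, image, and annotation values are illustrative. Serving expands this single resource into a revision, a route, and an autoscaled deployment:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                     # illustrative service name
spec:
  template:
    metadata:
      annotations:
        # Scale between 0 and 3 replicas based on request load
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "3"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
```

Applying this with kubectl apply is all that is needed; Knative handles the rest of the rollout.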

Eventing

The Eventing component of KNative provides a platform for building event-driven applications. It allows developers to create functions that respond to events triggered by external systems, such as message queues or databases. Eventing is built on top of Kubernetes and supports pluggable channel and broker implementations, including Apache Kafka, for event streaming. Eventing consists of the following components:
  • Knative Eventing: The core component of KNative Eventing, responsible for managing event sources and subscriptions.
  • Apache Kafka: A distributed event streaming platform that KNative Eventing can use for event processing.
  • Knative Eventing Sources: A set of components that connect to external event sources and generate events.
  • Knative Eventing Channels: A set of components that route events between event sources and event sinks.
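As a sketch of how these pieces fit together, the following illustrative manifests declare a Broker and a Trigger that routes events of a particular CloudEvent type to a Knative Service (the names and the event type are assumptions for the example):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default                   # events flow into this broker
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-trigger             # illustrative trigger name
spec:
  broker: default
  filter:
    attributes:
      # Only deliver events whose CloudEvent "type" matches
      type: dev.example.greeting
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display         # the Knative Service that receives the events
```

Sources publish CloudEvents to the broker, and each trigger filters and fans them out to its subscriber.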

Build

The Build component of KNative provides a platform for building container images from source code. It allows developers to build and package container images using their preferred build tools and then deploy them to Kubernetes. Note that Knative Build has since been deprecated in favor of Tekton, a Kubernetes-native CI/CD project that grew out of it. Build consists of the following components:
  • Knative Build: The core component of KNative Build, responsible for managing build templates and pipelines.
  • Tekton: A Kubernetes-native framework for building and deploying container images.
  • Kaniko: A tool for building container images from source code inside a cluster, without a Docker daemon.
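As an illustrative sketch of the Tekton-plus-Kaniko combination, the following Task builds and pushes an image with the Kaniko executor; the task name, parameter, and workspace path are assumptions for the example:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image               # illustrative task name
spec:
  params:
    - name: IMAGE                 # fully-qualified image reference to push
      type: string
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        # Kaniko builds from the checked-out source and pushes the result
        - --context=/workspace/source
        - --destination=$(params.IMAGE)
```

A TaskRun supplying the IMAGE parameter would execute this build entirely in-cluster.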

Install KNative on Docker Desktop

Installing KNative on Docker Desktop with Kourier involves a few straightforward steps, which we walk through below.

Requirements

Before you begin, ensure that you have the following:
  • Docker Desktop installed on your machine
  • Kubernetes enabled in Docker Desktop
  • kubectl command-line tool installed
  • kn command-line tool installed
  • helm package manager installed
Step 1: Install KNative Serving

Install KNative Serving using the helm package manager. Run the following command to add the KNative Serving chart repository:
helm repo add knative https://knative.dev/helm-charts 
Then, run the following command to install KNative Serving:
kubectl create namespace knative-serving 
helm install knative-serving knative/serving --namespace knative-serving 
Verify that KNative Serving is running by running the following command:
kubectl get pods --namespace knative-serving 
Step 2: Install Kourier

Kourier is a lightweight ingress controller for KNative that provides a simple way to route traffic to and from KNative services. Run the following command to add the Kourier chart repository:
helm repo add kourier https://storage.googleapis.com/kourier-release 
Then, run the following command to install Kourier:
kubectl create namespace kourier-system 
helm install kourier kourier/kourier --namespace kourier-system --set service.type=NodePort --set service.nodePorts.http=31080 --set service.nodePorts.https=31443 
Verify that Kourier is running by running the following command:
kubectl get pods --namespace kourier-system 
Step 3: Verify the Installation

To verify that KNative and Kourier are running correctly, create a sample KNative service and expose it using Kourier. Create the sample service by running the following command:
kubectl apply -f https://knative.dev/docs/serving/samples/hello-world/helloworld-go.yaml 
Expose the service using Kourier by running the following command:
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/01-crds.yaml 
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/02-default-backend.yaml 
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/03-kourier.yaml 
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/04-namespace.yaml 
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/05-example.yaml 
To access the sample service, get the IP address of the Docker Desktop Kubernetes cluster by running the following command:
kubectl cluster-info | grep 'Kubernetes control plane' | awk '/http/ {print $NF}' | sed 's/.*\/\/\([^:]*\):.*/\1/' 
Then, open a web browser and navigate to the following URL, replacing IP_ADDRESS with the IP address of the Kubernetes cluster:
http://IP_ADDRESS/hello-world 
If everything is set up correctly, you should see a message that says "Hello World!".

KNative serving example in Java with Quarkus

Create a new Quarkus project using the following command:
mvn io.quarkus.platform:quarkus-maven-plugin:2.16.4.Final:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=knative-example \
    -DclassName="com.example.MyKnativeService" \
    -Dextensions="resteasy-jsonb,kubernetes,container-image-docker"

This will create a new Quarkus project with the necessary extensions for Knative serving and JSON serialization using RESTEasy and JSON-B.

Next, add the following properties to the application.properties file:

quarkus.container-image.build=true
quarkus.container-image.group=dev.local
quarkus.container-image.push=false
quarkus.container-image.builder=docker
quarkus.knative.image-pull-policy=never
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes-client.namespace=mynamespace
The Quarkus scaffold generates the following Java class; customize it as needed.
package com.example;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class MyKnativeService {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello, Knative";
    }
}

This class defines a simple REST endpoint that returns the message "Hello, Knative".

The KNative deployment specification is generated in target/kubernetes/knative.yml. Finally, deploy your Quarkus application to Knative serving using the following command:

./mvnw clean package -Dquarkus.kubernetes.deploy=true 
This builds and packages your Quarkus application and deploys it to Knative serving. Check that the service is installed:
kn service list -n mynamespace
NAME              URL                                                   LATEST                  AGE     CONDITIONS   READY   REASON
knative-example   http://knative-example.mynamespace.127.0.0.1.nip.io   knative-example-00001   6m15s   3 OK / 3     True
You can access your Knative service using the URL provided by Knative serving. To access it from the host machine, set up port forwarding:
kubectl port-forward --namespace kourier-system $(kubectl get pod -n kourier-system -l "app=3scale-kourier-gateway" --field-selector=status.phase=Running --output=jsonpath="{.items[0].metadata.name}") 8080:8080
Sample curl command to access the service from the host machine:
curl --location --request GET 'http://localhost:8080/hello' \
--header 'Host: knative-example.mynamespace.127.0.0.1.nip.io'
That's it! You now have a Quarkus application running on Knative serving. You can modify the MyKnativeService class to add additional REST endpoints and functionality as needed. We will explore KNative eventing example in the next article.

Conclusion

KNative is an open-source platform that simplifies the deployment and management of serverless applications on Kubernetes. With its auto-scaling, event-driven processing, and language-agnostic capabilities, KNative has become a popular choice for building and scaling cloud-native applications. Its flexible architecture and open-source nature make it a powerful tool for building custom solutions that meet specific business requirements. If you're looking for a platform to build and deploy serverless applications, KNative is definitely worth exploring.