What is Knative?
Knative is a set of components that extends Kubernetes into a platform for building, deploying, and managing serverless applications. It provides features such as autoscaling, event-driven processing, and request routing, which simplify the deployment and management of serverless workloads. Because Knative is built on top of Kubernetes, it integrates easily with existing Kubernetes clusters.

Benefits of Knative
- Simplified Deployment: Knative abstracts away the underlying infrastructure, so developers can focus on writing code while Knative handles deploying and managing the application.
- Autoscaling: Knative automatically scales applications up or down based on demand, including scaling to zero when there is no traffic, so applications stay responsive under load without consuming resources when idle.
- Event-Driven Processing: Knative lets developers create functions that respond to events triggered by external systems, making it easy to build serverless applications that react to real-time events.
- Language-Agnostic: Knative runs any containerized workload, so developers can write code in the language of their choice.
- Open-Source: Knative is an open-source project, which means developers can contribute to its development and customize it to meet their specific requirements.
Use Cases of Knative
- Serverless Applications: Knative is ideal for building serverless applications that scale up or down based on demand.
- Event-Driven Applications: Knative's event-driven processing capabilities make it well suited to real-time applications that respond to events triggered by external systems.
- Microservices: Knative can be used to build and deploy microservices, letting developers break applications down into smaller, more manageable components.
The Architecture of Knative
The architecture of Knative can be divided into three main components: Serving, Eventing, and Build. Each component provides a set of functionalities that simplify the deployment and management of serverless workloads.

Serving
The Serving component of Knative provides a platform for deploying and managing serverless applications. It lets developers deploy containerized applications and functions and automatically scales them based on demand. Serving is built on top of Kubernetes and uses a pluggable networking layer, such as Istio or the lighter-weight Kourier, for traffic management and security. Serving consists of the following components:
- Knative Serving: The core component, responsible for deploying and managing serverless applications.
- Knative Istio Controller: A component that manages the Istio resources required for routing and traffic management.
- Activator: A component that activates containers on demand and routes traffic to them.
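To make the Serving model concrete, here is a minimal Knative Service manifest based on the official hello-world sample (the image and the TARGET variable come from that sample; substitute your own workload as needed):

```yaml
# A minimal Knative Service. Serving creates the Route, Configuration,
# and Revisions for it, and autoscales the pods (down to zero) on demand.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Applying this single resource with kubectl is all that is needed; Serving derives the routing and scaling objects from it.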
Eventing
The Eventing component of Knative provides a platform for building event-driven applications. It lets developers create functions that respond to events triggered by external systems, such as message queues or databases. Eventing is built on top of Kubernetes and can use Apache Kafka, among other channel implementations, for event streaming. Eventing consists of the following components:
- Knative Eventing: The core component, responsible for managing event sources and subscriptions.
- Apache Kafka: A distributed event streaming platform that Knative Eventing can use for event delivery.
- Knative Eventing Sources: A set of components that connect to external event sources and generate events.
- Knative Eventing Channels: A set of components that provide a platform for routing events between event sources and event sinks.
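As a concrete illustration of an event source, a PingSource is one of the simplest: it emits a CloudEvent on a cron schedule to a sink. A sketch (the names ping-every-minute and event-display are placeholders; the sink must be an existing addressable service):

```yaml
# Sends a small JSON payload once a minute to the named Knative Service.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"
  contentType: "application/json"
  data: '{"message": "Hello, Eventing"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```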
Build
The Build component of Knative provides a platform for building container images from source code. It lets developers build and package container images using their preferred build tools and then deploy them to Kubernetes. Build runs on top of Kubernetes and uses Tekton for building container images; note that the standalone Knative Build project has since been deprecated in favor of Tekton Pipelines. Build consists of the following components:
- Knative Build: The core component, responsible for managing build templates and pipelines.
- Tekton: A Kubernetes-native framework for building and deploying container images.
- Kaniko: A tool used by KNative Build for building container images from source code.
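To show how Tekton and Kaniko fit together, here is a minimal Tekton Task that builds an image from a Dockerfile and pushes it to a registry. This is a sketch: the task name, parameter, and paths are placeholders, and a real task would also declare a workspace (or a git-clone step) that places the source under /workspace/source.

```yaml
# Builds an image with Kaniko and pushes it to the registry named by IMAGE.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko-build
spec:
  params:
    - name: IMAGE
      type: string
      description: Fully qualified image name to build and push
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=/workspace/source/Dockerfile
        - --context=/workspace/source
        - --destination=$(params.IMAGE)
```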
Install Knative on Docker Desktop
Installing Knative on Docker Desktop with Kourier is a straightforward process that involves a few steps. In this guide, we walk through those steps.

Requirements
Before you begin, ensure that you have the following:
- Docker Desktop installed on your machine
- Kubernetes enabled in Docker Desktop
- kubectl command-line tool installed
- kn command-line tool installed
- helm package manager installed
Step 1: Install Knative Serving
Run the following command to add the Knative chart repository:
helm repo add knative https://knative.dev/helm-charts
Then, run the following command to install Knative Serving:
kubectl create namespace knative-serving
helm install knative-serving knative/serving --namespace knative-serving
Verify that Knative Serving is running by running the following command:
kubectl get pods --namespace knative-serving

Step 2: Install Kourier
Kourier is a lightweight ingress controller for Knative. It provides a simple way to route traffic to and from Knative services. Run the following command to add the Kourier chart repository:
helm repo add kourier https://storage.googleapis.com/kourier-release
Then, run the following command to install Kourier:
kubectl create namespace kourier-system
helm install kourier kourier/kourier --namespace kourier-system \
  --set service.type=NodePort \
  --set service.nodePorts.http=31080 \
  --set service.nodePorts.https=31443
Verify that Kourier is running by running the following command:
kubectl get pods --namespace kourier-system

Step 3: Verify the Installation
To verify that Knative and Kourier are running correctly, create a sample Knative service and expose it using Kourier. Create a sample Knative service by running the following command:
kubectl apply -f https://knative.dev/docs/serving/samples/hello-world/helloworld-go.yaml
Expose the service using Kourier by running the following commands:
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/01-crds.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/02-default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/03-kourier.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/04-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/net-kourier/main/config/ingress/contour/05-example.yaml
To access the sample service, get the IP address of the Docker Desktop Kubernetes cluster by running the following command:
kubectl cluster-info | grep 'Kubernetes control plane' | awk '/http/ {print $NF}' | sed 's/.*\/\/\([^:]*\):.*/\1/'
Then, open a web browser and navigate to the following URL, replacing IP_ADDRESS with the IP address of the Kubernetes cluster:
http://IP_ADDRESS/hello-world
If everything is set up correctly, you should see the message "Hello World!".
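The grep/awk/sed pipeline above can be sanity-checked locally against a sample line of kubectl cluster-info output (the URL below is Docker Desktop's usual default; yours may differ):

```shell
# Hypothetical sample of the 'Kubernetes control plane' line:
line='Kubernetes control plane is running at https://127.0.0.1:6443'

# awk prints the last field of the matching line (the URL);
# sed then strips the scheme and the port, leaving just the host.
ip=$(printf '%s\n' "$line" | awk '/http/ {print $NF}' | sed 's/.*\/\/\([^:]*\):.*/\1/')
echo "$ip"   # prints 127.0.0.1
```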
Knative Serving example in Java with Quarkus
Create a new Quarkus project using the following command:
mvn io.quarkus.platform:quarkus-maven-plugin:2.16.4.Final:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=knative-example \
    -DclassName="com.example.MyKnativeService" \
    -Dextensions="resteasy-jsonb,kubernetes,container-image-docker"
This will create a new Quarkus project with the necessary extensions for Knative serving and JSON serialization using RESTEasy and JSON-B.
Next, add the following properties to the application.properties file:
quarkus.container-image.build=true
quarkus.container-image.group=dev.local
quarkus.container-image.push=false
quarkus.container-image.builder=docker
quarkus.knative.image-pull-policy=never
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes-client.namespace=mynamespace
The mvn command above already generated the following Java class; customize it as needed.
package com.example;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class MyKnativeService {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello, Knative";
    }
}
This class defines a simple REST endpoint that returns the message "Hello, Knative".
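When the project is packaged, the Quarkus Kubernetes extension generates a Knative manifest under target/kubernetes/. Given the properties above, knative.yml should look roughly like this (a sketch: the exact labels, fields, and image tag vary with the Quarkus version and your project's version):

```yaml
# Approximate shape of the generated Knative Service manifest.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-example
  namespace: mynamespace
spec:
  template:
    spec:
      containers:
        - image: dev.local/knative-example:1.0.0-SNAPSHOT
```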
The Knative deployment specification is generated in target/kubernetes/knative.yml. Finally, deploy your Quarkus application to Knative Serving using the following command:
./mvnw clean package -Dquarkus.kubernetes.deploy=true
This builds and packages your Quarkus application and deploys it to Knative Serving. Check that the service is installed:
kn service list -n mynamespace
NAME              URL                                                   LATEST                  AGE     CONDITIONS   READY   REASON
knative-example   http://knative-example.mynamespace.127.0.0.1.nip.io   knative-example-00001   6m15s   3 OK / 3     True
You can access your Knative service using the URL provided by Knative Serving. To access it from the host machine, we need to set up port forwarding.
kubectl port-forward --namespace kourier-system \
  $(kubectl get pod -n kourier-system -l "app=3scale-kourier-gateway" \
    --field-selector=status.phase=Running \
    --output=jsonpath="{.items[0].metadata.name}") 8080:8080
A sample curl command to access the service from the host machine:
curl --location --request GET 'http://localhost:8080/hello' \
  --header 'Host: knative-example.mynamespace.127.0.0.1.nip.io'
That's it! You now have a Quarkus application running on Knative Serving. You can modify the MyKnativeService class to add additional REST endpoints and functionality as needed. We will explore a Knative Eventing example in the next article.