# YAML-based installation

You can install Knative by applying YAML files using the `kubectl` CLI.
## Prerequisites

- You have a cluster that uses Kubernetes v1.18 or newer.
- You have installed the `kubectl` CLI.
- If you have only one node in your cluster, you will need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage.
- If you have multiple nodes in your cluster, each node will need at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage.
- Your Kubernetes cluster must have access to the internet, since Kubernetes needs to be able to fetch images.
## Installing the Serving component
1. Install the required custom resources:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-crds.yaml
   ```
2. Install the core components of Serving (see below for optional extensions):

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-core.yaml
   ```
3. Pick a networking layer (alphabetical):

   **Ambassador**

   The following commands install Ambassador and enable its Knative integration.
   1. Create a namespace to install Ambassador in:

      ```bash
      kubectl create namespace ambassador
      ```
   2. Install Ambassador:

      ```bash
      kubectl apply --namespace ambassador \
        -f https://getambassador.io/yaml/ambassador/ambassador-crds.yaml \
        -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml \
        -f https://getambassador.io/yaml/ambassador/ambassador-service.yaml
      ```
   3. Give Ambassador the required permissions:

      ```bash
      kubectl patch clusterrolebinding ambassador -p '{"subjects":[{"kind": "ServiceAccount", "name": "ambassador", "namespace": "ambassador"}]}'
      ```
   4. Enable Knative support in Ambassador:

      ```bash
      kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
      ```
   5. Configure Knative Serving to use Ambassador by default:

      ```bash
      kubectl patch configmap/config-network \
        --namespace knative-serving \
        --type merge \
        --patch '{"data":{"ingress.class":"ambassador.ingress.networking.knative.dev"}}'
      ```
   6. Fetch the External IP or CNAME:

      ```bash
      kubectl --namespace ambassador get service ambassador
      ```

      Save this for configuring DNS below.
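      The output is similar to the following; the `EXTERNAL-IP` value here is illustrative, and on some providers the column shows a CNAME rather than an IP address:

      ```
      NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
      ambassador   LoadBalancer   10.0.100.10   35.233.41.212   80:31234/TCP,443:32021/TCP   2m
      ```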
   **Contour**

   The following commands install Contour and enable its Knative integration.
   1. Install a properly configured Contour:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-nightly/net-contour/latest/contour.yaml
      ```
   2. Install the Knative Contour controller:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-nightly/net-contour/latest/net-contour.yaml
      ```
   3. Configure Knative Serving to use Contour by default:

      ```bash
      kubectl patch configmap/config-network \
        --namespace knative-serving \
        --type merge \
        --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
      ```
   4. Fetch the External IP or CNAME:

      ```bash
      kubectl --namespace contour-external get service envoy
      ```

      Save this for configuring DNS below.
   **Gloo**

   For a detailed guide on Gloo integration, see Installing Gloo for Knative in the Gloo documentation.

   The following commands install Gloo and enable its Knative integration.
   1. Make sure `glooctl` is installed (version 1.3.x or higher is recommended):

      ```bash
      glooctl version
      ```

      If it is not installed, you can install the latest version using:

      ```bash
      curl -sL https://run.solo.io/gloo/install | sh
      export PATH=$HOME/.gloo/bin:$PATH
      ```

      Or follow the Gloo CLI install instructions.
   2. Install Gloo and the Knative integration:

      ```bash
      glooctl install knative --install-knative=false
      ```
   3. Fetch the External IP or CNAME:

      ```bash
      glooctl proxy url --name knative-external-proxy
      ```

      Save this for configuring DNS below.
   **Istio**

   The following commands install Istio and enable its Knative integration.
   1. Install a properly configured Istio (see the Advanced installation instructions for other options):

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-nightly/net-istio/latest/istio.yaml
      ```
   2. Install the Knative Istio controller:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-nightly/net-istio/latest/net-istio.yaml
      ```
   3. Fetch the External IP or CNAME:

      ```bash
      kubectl --namespace istio-system get service istio-ingressgateway
      ```

      Save this for configuring DNS below.
   **Kong**

   The following commands install Kong and enable its Knative integration.
   1. Install Kong Ingress Controller:

      ```bash
      kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/0.9.x/deploy/single/all-in-one-dbless.yaml
      ```
   2. Configure Knative Serving to use Kong by default:

      ```bash
      kubectl patch configmap/config-network \
        --namespace knative-serving \
        --type merge \
        --patch '{"data":{"ingress.class":"kong"}}'
      ```
   3. Fetch the External IP or CNAME:

      ```bash
      kubectl --namespace kong get service kong-proxy
      ```

      Save this for configuring DNS below.
   **Kourier**

   The following commands install Kourier and enable its Knative integration.
   1. Install the Knative Kourier controller:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-nightly/net-kourier/latest/kourier.yaml
      ```
   2. Configure Knative Serving to use Kourier by default:

      ```bash
      kubectl patch configmap/config-network \
        --namespace knative-serving \
        --type merge \
        --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
      ```
   3. Fetch the External IP or CNAME:

      ```bash
      kubectl --namespace kourier-system get service kourier
      ```

      Save this for configuring DNS below.
4. Configure DNS

   **Magic DNS (xip.io)**

   We ship a simple Kubernetes Job called "default domain" that will (see caveats) configure Knative Serving to use xip.io as the default DNS suffix.

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-default-domain.yaml
   ```

   Caveat: This will only work if the cluster LoadBalancer service exposes an IPv4 address or hostname, so it will not work with IPv6 clusters or local setups like Minikube. For these, see "Real DNS" or "Temporary DNS" below.
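   Before moving on, you can wait for the Job to finish. A quick check, assuming the manifest names the Job `default-domain` (adjust if your manifest differs):

   ```bash
   kubectl wait --for=condition=complete job/default-domain \
     --namespace knative-serving --timeout=120s
   ```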
   **Real DNS**

   To configure DNS for Knative, take the External IP or CNAME from setting up networking, and configure it with your DNS provider as follows:

   - If the networking layer produced an External IP address, then configure a wildcard `A` record for the domain:

     ```
     # Here knative.example.com is the domain suffix for your cluster
     *.knative.example.com == A 35.233.41.212
     ```

   - If the networking layer produced a CNAME, then configure a CNAME record for the domain:

     ```
     # Here knative.example.com is the domain suffix for your cluster
     *.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com
     ```

   Once your DNS provider has been configured, direct Knative to use that domain:

   ```bash
   # Replace knative.example.com with your domain suffix
   kubectl patch configmap/config-domain \
     --namespace knative-serving \
     --type merge \
     --patch '{"data":{"knative.example.com":""}}'
   ```
   **Temporary DNS**

   If you are using `curl` to access the sample applications, or your own Knative app, and are unable to use the "Magic DNS (xip.io)" or "Real DNS" methods, there is a temporary approach. This is useful for those who wish to evaluate Knative without altering their DNS configuration, or who cannot use the "Magic DNS" method because they are, for example, running minikube locally or using an IPv6 cluster.

   To access your application using `curl` with this method:

   1. After starting your application, get the URL of your application:

      ```bash
      kubectl get ksvc
      ```

      The output should be similar to:

      ```
      NAME            URL                                         LATESTCREATED         LATESTREADY           READY   REASON
      helloworld-go   http://helloworld-go.default.example.com    helloworld-go-vqjlf   helloworld-go-vqjlf   True
      ```

   2. Instruct `curl` to connect to the External IP or CNAME defined by the networking layer in section 3 above, and use the `-H "Host:"` command-line option to specify the Knative application's host name. For example, if the networking layer defines your External IP and port to be `http://192.168.39.228:32198` and you wish to access the above `helloworld-go` application, use:

      ```bash
      curl -H "Host: helloworld-go.default.example.com" http://192.168.39.228:32198
      ```

      In the case of the provided `helloworld-go` sample application, the output should, using the default configuration, be:

      ```
      Hello Go Sample v1!
      ```

   Refer to the "Real DNS" method for a permanent solution.
5. Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`:

   ```bash
   kubectl get pods --namespace knative-serving
   ```
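   The output looks something like the following; the exact set of pods depends on which optional components you installed, and the names, counts, and ages here are illustrative:

   ```
   NAME                          READY   STATUS    RESTARTS   AGE
   activator-7855b8b66-6lqm7     1/1     Running   0          2m
   autoscaler-5c648f7465-26zqx   1/1     Running   0          2m
   controller-7d7d6c6b8-jv5nk    1/1     Running   0          2m
   webhook-6b5d7f8c9d-xq2ln      1/1     Running   0          2m
   ```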
At this point, you have a basic installation of Knative Serving!
## Optional Serving extensions
**HPA autoscaling**

Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA) for driving autoscaling decisions. The following command will install the components needed to support HPA-class autoscaling:

```bash
kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-hpa.yaml
```
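Once installed, a Service opts into HPA-class autoscaling through annotations on its revision template. A minimal sketch using Knative's standard autoscaling annotations; the Service name is hypothetical, and the image is the `helloworld-go` sample used elsewhere in these docs:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-hpa            # hypothetical name for illustration
spec:
  template:
    metadata:
      annotations:
        # Use the Kubernetes HPA instead of Knative's default autoscaler (KPA).
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        # Scale on CPU, targeting 80% utilization.
        autoscaling.knative.dev/metric: cpu
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```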
**TLS with cert-manager**

Knative supports automatically provisioning TLS certificates via cert-manager. The following steps install the components needed to support the provisioning of TLS certificates via cert-manager.

1. First, install cert-manager version `0.12.0` or higher (one way is sketched below).
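   A sketch, under the assumption that you want the upstream release manifest from the cert-manager project (check the cert-manager installation docs for the currently recommended version and URL):

   ```bash
   # Installs cert-manager v0.12.0 cluster-wide; the URL is the project's
   # release manifest and may differ for newer versions.
   kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
   ```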
Next, install the component that integrates Knative with cert-manager:
kubectl apply -f https://storage.googleapis.com/knative-nightly/net-certmanager/latest/release.yaml
3. Now configure Knative to automatically configure TLS certificates; one way to do this is sketched below.
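   A minimal sketch of that configuration, assuming cert-manager is the certificate provider; the `certificate.class` value below is the net-certmanager integration's class name, and you still need a cert-manager `ClusterIssuer` configured separately:

   ```bash
   # Point Knative at the cert-manager certificate implementation.
   kubectl patch configmap/config-network \
     --namespace knative-serving \
     --type merge \
     --patch '{"data":{"certificate.class":"cert-manager.certificate.networking.knative.dev"}}'

   # Turn on automatic TLS certificate provisioning.
   kubectl patch configmap/config-network \
     --namespace knative-serving \
     --type merge \
     --patch '{"data":{"autoTLS":"Enabled"}}'
   ```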
**TLS via HTTP01**

Knative supports automatically provisioning TLS certificates using Let's Encrypt HTTP01 challenges. The following commands will install the components needed to support that.

1. First, install the `net-http01` controller:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/net-http01/latest/release.yaml
   ```

2. Next, configure the `certificate.class` to use this certificate type:

   ```bash
   kubectl patch configmap/config-network \
     --namespace knative-serving \
     --type merge \
     --patch '{"data":{"certificate.class":"net-http01.certificate.networking.knative.dev"}}'
   ```

3. Lastly, enable auto-TLS:

   ```bash
   kubectl patch configmap/config-network \
     --namespace knative-serving \
     --type merge \
     --patch '{"data":{"autoTLS":"Enabled"}}'
   ```
**TLS wildcard support**

If you are using a Certificate implementation that supports provisioning wildcard certificates (e.g. cert-manager with a DNS01 issuer), then the most efficient way to provision certificates is with the namespace wildcard certificate controller. The following command will install the components needed to provision wildcard certificates in each namespace:

```bash
kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-nscert.yaml
```

Note: This will not work with HTTP01, either via cert-manager or the net-http01 option.
**DomainMapping CRD**

The `DomainMapping` CRD allows a user to map a domain name that they own to a specific Knative Service.

```bash
kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-domainmapping-crds.yaml
kubectl wait --for=condition=Established --all crd
kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-domainmapping.yaml
```
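Once installed, a mapping is a small resource that points a domain you own at a Service. A minimal sketch, assuming a Service named `helloworld-go` in the `default` namespace; the names and domain are illustrative, and the API version may differ in your release:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: DomainMapping
metadata:
  name: hello.example.org          # the domain name you own
  namespace: default               # must match the Service's namespace
spec:
  ref:
    name: helloworld-go            # the Knative Service to expose
    kind: Service
    apiVersion: serving.knative.dev/v1
```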
## Installing the Eventing component
1. Install the required custom resources:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/eventing-crds.yaml
   ```
2. Install the core components of Eventing (see below for optional extensions):

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/eventing-core.yaml
   ```
3. Install a default Channel (messaging) layer (alphabetical):

   **Apache Kafka Channel**

   First install Apache Kafka on your cluster, then install the Apache Kafka Channel:

   ```bash
   curl -L "https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka/latest/channel-consolidated.yaml" \
     | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \
     | kubectl apply -f -
   ```

   To learn more about the Apache Kafka channel, try our sample.
   **Google Cloud Pub/Sub Channel**

   Install the Google Cloud Pub/Sub Channel:

   ```bash
   # This installs both the Channel and the GCP Sources.
   kubectl apply -f https://storage.googleapis.com/google-nightly/knative-gcp/latest/cloud-run-events.yaml
   ```

   To learn more about the Google Cloud Pub/Sub Channel, try our sample.
   **In-Memory (standalone)**

   The following command installs an implementation of Channel that runs in-memory. This implementation is nice because it is simple and standalone, but it is unsuitable for production use cases.

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/in-memory-channel.yaml
   ```
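   Once installed, a channel can be created directly. A minimal sketch, using the same `messaging.knative.dev/v1` API shown in the ConfigMap examples below; the channel name is hypothetical:

   ```yaml
   apiVersion: messaging.knative.dev/v1
   kind: InMemoryChannel
   metadata:
     name: test-channel             # hypothetical name for illustration
     namespace: default
   ```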
   **NATS Channel**

   First install NATS Streaming on your cluster, then install the NATS Streaming Channel:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-natss/latest/300-natss-channel.yaml
   ```
4. Install a Broker (eventing) layer:

   **Apache Kafka Broker**

   The following commands install the Apache Kafka broker, and run event routing in a system namespace, `knative-eventing`, by default.

   1. Install the Kafka controller by entering the following command:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka-broker/latest/eventing-kafka-controller.yaml
      ```

   2. Install the Kafka Broker data plane by entering the following command:

      ```bash
      kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka-broker/latest/eventing-kafka-broker.yaml
      ```

   For more information, see the Kafka Broker documentation.
   **MT-Channel-based Broker**

   The following command installs an implementation of Broker that utilizes Channels and runs event routing components in a system namespace, providing a smaller and simpler installation:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/mt-channel-broker.yaml
   ```

   To customize which Broker channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: config-br-defaults
     namespace: knative-eventing
   data:
     default-br-config: |
       # This is the cluster-wide default broker channel.
       clusterDefault:
         brokerClass: MTChannelBasedBroker
         apiVersion: v1
         kind: ConfigMap
         name: imc-channel
         namespace: knative-eventing
       # This allows you to specify different defaults per-namespace,
       # in this case the "some-namespace" namespace will use the Kafka
       # channel ConfigMap by default (only for example, you will need
       # to install Kafka also to make use of this).
       namespaceDefaults:
         some-namespace:
           brokerClass: MTChannelBasedBroker
           apiVersion: v1
           kind: ConfigMap
           name: kafka-channel
           namespace: knative-eventing
   ```

   The referenced `imc-channel` and `kafka-channel` example ConfigMaps would look like:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: imc-channel
     namespace: knative-eventing
   data:
     channelTemplateSpec: |
       apiVersion: messaging.knative.dev/v1
       kind: InMemoryChannel
   ---
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: kafka-channel
     namespace: knative-eventing
   data:
     channelTemplateSpec: |
       apiVersion: messaging.knative.dev/v1alpha1
       kind: KafkaChannel
       spec:
         numPartitions: 3
         replicationFactor: 1
   ```

   In order to use the KafkaChannel, make sure it is installed on the cluster, as discussed above.
5. Monitor the Knative components until all of the components show a `STATUS` of `Running`:

   ```bash
   kubectl get pods --namespace knative-eventing
   ```
At this point, you have a basic installation of Knative Eventing!
## Optional Eventing extensions
**Kafka Sink**

1. Install the Kafka controller:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka-broker/latest/eventing-kafka-controller.yaml
   ```

2. Install the Kafka Sink data plane:

   ```bash
   kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka-broker/latest/eventing-kafka-sink.yaml
   ```

For more information, see the Kafka Sink documentation.
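A `KafkaSink` resource then exposes a Kafka topic as an addressable event sink. A minimal sketch, reusing the illustrative bootstrap address from the channel instructions above; the resource name and topic are hypothetical, and the API version may differ in your release:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink              # hypothetical name for illustration
  namespace: default
spec:
  topic: my-topic                  # an existing Kafka topic
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
```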
**Sugar Controller**

The following command installs the Eventing Sugar Controller:

```bash
kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/eventing-sugar-controller.yaml
```

The Knative Eventing Sugar Controller will react to special labels and annotations and produce Eventing resources. For example:

- When a Namespace is labeled with `eventing.knative.dev/injection=enabled`, the controller will create a default Broker in that namespace.
- When a Trigger is annotated with `eventing.knative.dev/injection=enabled`, the controller will create a Broker named by that Trigger in the Trigger's namespace.

The following command enables the default Broker on a namespace (here `default`):

```bash
kubectl label namespace default eventing.knative.dev/injection=enabled
```
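A sketch of the Trigger-annotation variant; the Trigger name, Broker name, and subscriber Service are illustrative:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger                 # hypothetical name for illustration
  namespace: default
  annotations:
    # Ask the Sugar Controller to create the referenced Broker
    # if it does not already exist.
    eventing.knative.dev/injection: enabled
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display          # hypothetical subscriber Service
```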
**GitHub Source**

The following command installs the single-tenant GitHub source:

```bash
kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-github/latest/github.yaml
```

The single-tenant GitHub source creates one Knative service per GitHub source.

The following command installs the multi-tenant GitHub source:

```bash
kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-github/latest/mt-github.yaml
```

The multi-tenant GitHub source creates only one Knative service that handles all GitHub sources in the cluster. This source does not support logging or tracing configuration yet.

To learn more about the GitHub source, try our sample.
**Apache Camel-K Source**

The following command installs the Apache Camel-K Source:

```bash
kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-camel/latest/camel.yaml
```

To learn more about the Apache Camel-K source, try our sample.
**Apache Kafka Source**

The following command installs the Apache Kafka Source:

```bash
kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-kafka/latest/source.yaml
```

To learn more about the Apache Kafka source, try our sample.
**GCP Sources**

The following command installs the GCP Sources:

```bash
# This installs both the Sources and the Channel.
kubectl apply -f https://storage.googleapis.com/google-nightly/knative-gcp/latest/cloud-run-events.yaml
```

To learn more about the Cloud Pub/Sub source, try our sample.
To learn more about the Cloud Storage source, try our sample.
To learn more about the Cloud Scheduler source, try our sample.
To learn more about the Cloud Audit Logs source, try our sample.
**Apache CouchDB Source**

The following command installs the Apache CouchDB Source:

```bash
kubectl apply -f https://storage.googleapis.com/knative-sandbox-nightly/eventing-couchdb/latest/couchdb.yaml
```

To learn more about the Apache CouchDB source, read the documentation.
**VMware Sources and Bindings**

The following command installs the VMware Sources and Bindings:

```bash
kubectl apply -f https://storage.googleapis.com/vmware-tanzu-nightly/sources-for-knative/latest/release.yaml
```

To learn more about the VMware sources and bindings, try our samples.