Ingress in Kubernetes
Ingress in Kubernetes controls external access to cluster services over HTTP and HTTPS. Ingress operates at Layer 7 (the application layer), while NodePort and LoadBalancer operate at Layer 4. This matters because Ingress can “look inside” traffic and route it intelligently based on request headers, hostnames, and URL paths.
The Core Mechanism: Resources and Controllers
Ingress is unusual in that it is split into two components: the Ingress Resource and the Ingress Controller.
- Ingress Resource: The user defines routing rules in the Ingress Resource, a YAML configuration file, such as which hostnames or routes should go to which backend services.
- Ingress Controller: The Ingress Controller is the software daemon that actually implements those rules. Kubernetes does not ship with a controller built in; you must install one from the ecosystem, such as NGINX, HAProxy, Contour, or Istio.
Creating Ingress resources without a controller accomplishes nothing. The controller sits at the cluster’s edge, watches the Kubernetes API for new Ingress resources, and configures an underlying reverse proxy or load balancer to manage incoming traffic.
The Problem Solved by Ingress
In production settings, direct application exposure through LoadBalancer Service types can get costly and complicated. For example, a cluster with 25 microservices that face the internet would typically need 25 different cloud load balancers, each of which would cost money. This is addressed by Ingress, which lowers costs and centralizes management by enabling a cluster to employ a single cloud load balancer as a “traffic cop” to allocate requests to several internal services.
Basic Ingress Configuration Example
A minimal Ingress resource requires the apiVersion, kind, metadata, and spec fields. Below is an example of a default Ingress resource that routes traffic to a specific service:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: / # Annotation to configure controller behavior
spec:
  ingressClassName: nginx # Identifies which controller should implement the rules
  rules:
  - http:
      paths:
      - path: /demofilepath
        pathType: Prefix
        backend:
          service:
            name: demoservice
            port:
              number: 99
```
The spec here contains the information needed to configure the load balancer. The ingressClassName field references an IngressClass resource that carries additional settings, such as the name of the controller that should implement the class. If ingressClassName is not specified, a default Ingress class should be defined in the cluster.
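A minimal sketch of such an IngressClass resource is shown below (the class name is illustrative; `k8s.io/ingress-nginx` is the controller identifier used by the NGINX Ingress controller). The `is-default-class` annotation marks it as the cluster default, so Ingress resources that omit ingressClassName will use it:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Marks this class as the default for Ingress resources
    # that do not set spec.ingressClassName
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```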
Types of Ingress Routing
Path-Based Ingress Routing (Simple Fanout)
A fanout configuration routes traffic from a single IP address to multiple services based on the requested HTTP URI. This is useful when an application consists of several microservices or subcomponents.
Example YAML for Path-Based Routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```
In this configuration, requests to myapp.com/foo are routed to service1, while requests to myapp.com/bar are sent to service2. If multiple paths match, the system typically uses the longest prefix match to determine the destination.
Name-Based Virtual Hosting
Name-based virtual hosts route HTTP traffic to several host names at one IP address. This lets multiple sites share a load balancer.
Example YAML for Host-Based Routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: first.bar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: second.bar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service2
            port:
              number: 80
```
Requests to first.bar.com are routed to service1, while requests to second.bar.com are routed to service2.
Key Capabilities and Features
TLS Termination
Ingress can manage secure HTTPS connections using SSL/TLS certificates stored in Kubernetes Secrets. Centralizing certificate management at the Ingress point lets backend services handle plain HTTP traffic while encryption and decryption happen at the edge.
To secure an Ingress, you must reference a Secret that contains a tls.crt certificate and a tls.key private key.
Example YAML for TLS Configuration:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64_encoded_cert
  tls.key: base64_encoded_key
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
```
Referencing this Secret instructs the Ingress controller to secure the channel between the client and the load balancer with TLS.
Path Types
Every path in an Ingress must have a corresponding pathType. Three types are supported:
- Exact: Case-sensitive and precisely matches the URL route.
- Prefix: Matches based on a URL path prefix split by /. For example, /foo matches /foo/bar but not /foobar.
- ImplementationSpecific: Matching logic is determined by the specific IngressClass/controller in use.
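The difference between Exact and Prefix can be sketched in a paths fragment like the following (service names are hypothetical):

```yaml
paths:
- path: /login         # Exact: matches only /login, not /login/ or /login2
  pathType: Exact
  backend:
    service:
      name: login-svc
      port:
        number: 80
- path: /foo           # Prefix: matches /foo and /foo/bar, but not /foobar
  pathType: Prefix
  backend:
    service:
      name: foo-svc
      port:
        number: 80
```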
Default Backend
Requests that do not match any path defined in the spec are typically handled by a defaultBackend. If no rules are defined at all, a .spec.defaultBackend must be specified to handle all traffic.
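A minimal sketch of an Ingress that sends all unmatched traffic to a catch-all service (the resource and service names here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-example
spec:
  # No rules defined, so every request goes to the default backend
  defaultBackend:
    service:
      name: fallback-service
      port:
        number: 80
```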
Comparison with Other Service Types
Comparing Ingress to other Kubernetes networking objects helps choose when to utilize it:
| Feature | ClusterIP | NodePort | LoadBalancer | Ingress |
|---|---|---|---|---|
| OSI Layer | Layer 4 | Layer 4 | Layer 4 | Layer 7 |
| Accessibility | Internal only | External via Node IP:Port | External via Public IP | External via Rules |
| Cost | Low | Low | High (Per Service) | Medium (Consolidated) |
| Routing | None | None | None | Host/Path-based |
| TLS | Managed at App | Managed at App | Managed at App/LB | Centralized |
Operational Considerations
Three phases make up the fundamental workflow for deploying Ingress:
- Install an Ingress Controller: Install a controller, like NGINX, onto the cluster using a tool like Helm.
- Define Ingress Resources: Make a YAML file with your unique routing rules.
- Apply Configuration: Use kubectl apply -f <filename> to deploy the rules to the API server.
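On a cluster with Helm available, the three steps above might look like the following sketch (the release name and the my-ingress.yaml filename are illustrative):

```shell
# 1. Install the NGINX Ingress controller from its official Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

# 2. Define routing rules in a YAML file (e.g., my-ingress.yaml), then
# 3. apply them to the API server
kubectl apply -f my-ingress.yaml

# Verify the Ingress resource was created
kubectl get ingress
```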
Limitations and the Future: Gateway API
Ingress is popular and stable, but it has limitations. Because it supports only HTTP and HTTPS, it cannot route Layer 4 protocols such as raw TCP or UDP for databases or SSH. And because advanced capabilities like rate limiting and timeouts are exposed through provider-specific annotations, Ingress configurations are often not portable between controllers.
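As a concrete example of the portability problem, rate limiting and timeouts on the NGINX Ingress controller are configured through vendor annotations that other controllers ignore:

```yaml
metadata:
  annotations:
    # NGINX-specific; HAProxy, Contour, etc. expose the same features
    # through different, incompatible mechanisms
    nginx.ingress.kubernetes.io/limit-rps: "10"          # requests per second per client IP
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60" # backend read timeout in seconds
```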
To address these problems, the Kubernetes project created the Gateway API as a more modern, flexible, and protocol-agnostic successor. The Ingress API remains fully supported with no plans for removal, although it has been “frozen” and will receive no new functional improvements. Ingress is still the norm for Layer 7 load balancing in the majority of production systems.
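For comparison, a minimal sketch of the Gateway API equivalent of path-based routing looks like this (it assumes a Gateway named example-gateway already exists in the cluster):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
  - name: example-gateway   # the Gateway this route attaches to
  hostnames:
  - "myapp.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /foo
    backendRefs:
    - name: service1
      port: 80
```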
