NGINX Ingress Controller Archives - NGINX
https://www.nginx.com/blog/tag/nginx-ingress-controller/

The Ingress Controller: Touchstone for Securing AI/ML Apps in Kubernetes
https://www.nginx.com/blog/the-ingress-controller-touchstone-for-securing-ai-ml-apps-in-kubernetes/ (February 28, 2024)

One of the key advantages of running artificial intelligence (AI) and machine learning (ML) workloads in Kubernetes is having a central point of control for all incoming requests through the Ingress Controller. It is a versatile module that serves as a load balancer and API gateway, providing a solid foundation for securing AI/ML applications in a Kubernetes environment.

As a unified tool, the Ingress Controller is a convenient touchpoint for applying security and performance measures, monitoring activity, and mandating compliance. More specifically, securing AI/ML applications at the Ingress Controller in a Kubernetes environment offers several strategic advantages that we explore in this blog.

Diagram of Ingress Controller ecosystem

Centralized Security and Compliance Control

Because the Ingress Controller acts as a gateway to your Kubernetes cluster, it gives MLOps and platform engineering teams a centralized point for enforcing security policies. This reduces the complexity of configuring security settings on a per-pod or per-service basis. By centralizing security controls at the Ingress level, you simplify the compliance process and make it easier to manage and monitor compliance status.

Consolidated Authentication and Authorization

The Ingress Controller is also the logical location to implement and enforce authentication and authorization for access to all your AI/ML applications. By adding strong certificate authority management, the Ingress Controller is also the linchpin of building zero trust (ZT) architectures for Kubernetes. ZT is crucial for ensuring continuous security and compliance of sensitive AI applications running on highly valuable proprietary data.
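
To make this concrete, here is a minimal sketch of JWT enforcement at the Ingress layer, written as raw NGINX Plus configuration (NGINX Ingress Controller expresses the same idea declaratively through its Policy resources). The auth_jwt and auth_jwt_key_file directives are standard NGINX Plus; the location path, realm name, key file path, and upstream name are illustrative assumptions:

# Hypothetical sketch: require a valid JWT before requests reach AI/ML backends
location /v1/ {
    auth_jwt "ml-api";                         # realm name (assumed)
    auth_jwt_key_file /etc/nginx/ml-api.jwk;   # JWK set holding signing keys (assumed path)
    proxy_pass http://model-backend;           # assumed upstream of model-serving pods
}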

Rate Limiting and Access Control

The Ingress Controller is an ideal place to enforce rate limiting, protecting your applications from abuse such as DDoS attacks or excessive API calls – crucial for public-facing AI/ML APIs. With the rise of novel AI threats like model theft and data leakage, rate limiting and access control become even more important for protecting against brute-force attacks. They also help prevent adversaries from abusing business logic or jailbreaking guardrails to extract training data, model weights, or other proprietary information.
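
As a hedged illustration of what this looks like in NGINX terms, the sketch below caps each client IP at a fixed request rate. The limit_req directives are standard NGINX, while the zone name, rate, path, and upstream name are assumptions (NGINX Ingress Controller exposes the same capability through its rate-limiting policies):

# Hypothetical sketch: cap each client IP at 10 requests per second
limit_req_zone $binary_remote_addr zone=ml_api:10m rate=10r/s;   # http context

server {
    listen 443 ssl;

    location /v1/inference {
        limit_req zone=ml_api burst=20 nodelay;   # absorb short bursts, reject sustained abuse
        proxy_pass http://model-backend;          # assumed upstream of model-serving pods
    }
}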

Web Application Firewall (WAF) Integration

Many Ingress Controllers support integration with WAFs, which are table stakes for protecting exposed applications and services. WAFs provide an additional layer of security against common web vulnerabilities and attacks like the OWASP Top 10. Even more crucial, when properly tuned, WAFs protect against more targeted attacks aimed at AI/ML applications. A key consideration for AI/ML apps, where latency and performance are crucial, is the potential overhead introduced by a WAF. Also, to be effective for AI/ML apps, the WAF must be tightly integrated into the Ingress Controller’s monitoring and observability dashboards and alerting structures. Ideally, the WAF and Ingress Controller share a common data plane.
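
As a sketch of the shared-data-plane approach, the configuration below enables NGINX App Protect in the same NGINX Plus instance that proxies the traffic. The app_protect_* directives are standard App Protect; the policy file, log profile, and syslog destination are assumptions:

# Hypothetical sketch: WAF colocated with the proxy on a single data plane
load_module modules/ngx_http_app_protect_module.so;

http {
    server {
        listen 443 ssl;

        location / {
            app_protect_enable on;            # turn on WAF inspection for this location
            app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";
            app_protect_security_log_enable on;
            app_protect_security_log "/etc/app_protect/conf/log_default.json" syslog:server=127.0.0.1:514;
            proxy_pass http://model-backend;  # assumed upstream
        }
    }
}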

Conclusion: Including the Ingress Controller Early in Planning for AI/ML Architectures

Because the Ingress Controller occupies such an important place in Kubernetes application deployment for AI/ML apps, it is best to consider its capabilities while architecting AI/ML applications. This avoids duplicating functionality and leads to a better-informed choice of an Ingress Controller that will scale and grow with your AI/ML application needs. For MLOps teams, the Ingress Controller becomes a central control point for many of their critical platform and ops capabilities, with security among the top priorities.

Get Started with NGINX

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and observability of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

Scale, Secure, and Monitor AI/ML Workloads in Kubernetes with Ingress Controllers
https://www.nginx.com/blog/scale-secure-and-monitor-ai-ml-workloads-in-kubernetes-with-ingress-controllers/ (February 22, 2024)

AI and machine learning (AI/ML) workloads are revolutionizing how businesses operate and innovate. Kubernetes, the de facto standard for container orchestration and management, is the platform of choice for powering scalable large language model (LLM) workloads and inference models across hybrid, multi-cloud environments.

In Kubernetes, Ingress controllers play a vital role in delivering and securing containerized applications. Deployed at the edge of a Kubernetes cluster, they serve as the central point of handling communications between users and applications.

In this blog, we explore how Ingress controllers and F5 NGINX Connectivity Stack for Kubernetes can help simplify and streamline model serving, experimentation, monitoring, and security for AI/ML workloads.

Deploying AI/ML Models in Production at Scale

When deploying AI/ML models at scale, out-of-the-box Kubernetes features and capabilities can help you:

  • Accelerate and simplify the AI/ML application release life cycle.
  • Enable AI/ML workload portability across different environments.
  • Improve compute resource utilization efficiency and economics.
  • Deliver scalability and achieve production readiness.
  • Optimize the environment to meet business SLAs.

At the same time, organizations might face challenges with serving, experimenting, monitoring, and securing AI/ML models in production at scale:

  • Increasing complexity and tool sprawl make it difficult for organizations to configure, operate, manage, automate, and troubleshoot Kubernetes environments on-premises, in the cloud, and at the edge.
  • Poor user experiences because of connection timeouts and errors due to dynamic events, such as pod failures and restarts, auto-scaling, and extremely high request rates.
  • Performance degradation, downtime, and slower and harder troubleshooting in complex Kubernetes environments due to aggregated reporting and lack of granular, real-time, and historical metrics.
  • Significant risk of exposure to cybersecurity threats in hybrid, multi-cloud Kubernetes environments because traditional security models are not designed to protect loosely coupled distributed applications.

Enterprise-class Ingress controllers like F5 NGINX Ingress Controller can help address these challenges. By leveraging one tool that combines Ingress controller, load balancer, and API gateway capabilities, you can achieve better uptime, protection, and visibility at scale – no matter where you run Kubernetes. In addition, it reduces complexity and operational cost.

Diagram of NGINX Ingress Controller ecosystem

NGINX Ingress Controller can also be tightly integrated with an industry-leading Layer 7 app protection technology from F5 that helps mitigate OWASP Top 10 cyberthreats for LLM Applications and defends AI/ML workloads from DoS attacks.

Benefits of Ingress Controllers for AI/ML Workloads

Ingress controllers can simplify and streamline deploying and running AI/ML workloads in production through the following capabilities:

  • Model serving – Deliver apps non-disruptively with Kubernetes-native load balancing, auto-scaling, rate limiting, and dynamic reconfiguration features.
  • Model experimentation – Implement blue-green and canary deployments, and A/B testing to roll out new versions and upgrades without downtime.
  • Model monitoring – Collect, represent, and analyze model metrics to gain better insight into app health and performance.
  • Model security – Configure user identity, authentication, authorization, role-based access control, and encryption capabilities to protect apps from cybersecurity threats.

NGINX Connectivity Stack for Kubernetes includes NGINX Ingress Controller and F5 NGINX App Protect to provide fast, reliable, and secure communications between Kubernetes clusters running AI/ML applications and their users – on-premises and in the cloud. It helps simplify and streamline model serving, experimentation, monitoring, and security across any Kubernetes environment, enhancing the capabilities of cloud provider and pre-packaged Kubernetes offerings with a higher degree of protection, availability, and observability at scale.

Get Started with NGINX Connectivity Stack for Kubernetes

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and visibility of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

Dynamic A/B Kubernetes Multi-Cluster Load Balancing and Security Controls with NGINX Plus
https://www.nginx.com/blog/dynamic-a-b-kubernetes-multi-cluster-load-balancing-and-security-controls-with-nginx-plus/ (February 15, 2024)

You’re a modern Platform Ops or DevOps engineer. You use a library of open source (and maybe some commercial) tools to test, deploy, and manage new apps and containers for your Dev team. You’ve chosen Kubernetes to run these containers and pods in development, test, staging, and production environments. You’ve bought into the architectures and concepts of microservices and, for the most part, it works pretty well. However, you’ve encountered a few speed bumps along this journey.

For instance, as you build and roll out new clusters, services, and applications, how do you easily integrate or migrate these new resources into production without dropping any traffic? Traditional networking appliances require reloads or reboots to implement configuration changes to DNS records, load balancers, firewalls, and proxies, so those changes can’t be made without a “service outage” or “maintenance window”. More often than not, you have to submit a dreaded service ticket and wait for another team to approve and make the changes.

Maintenance windows can drive your team into a ditch, stall application delivery, and make you declare, “There must be a better way to manage traffic!” So, let’s explore a solution that gets you back in the fast lane.

Active-Active Multi-Cluster Load Balancing

If you have multiple Kubernetes clusters, it’s ideal to route traffic to both clusters at the same time. An even better option is to perform A/B, canary, or blue-green traffic splitting and send a small percentage of your traffic as a test. To do this, you can use NGINX Plus with ngx_http_split_clients_module.

K8s with NGINX Plus diagram

The HTTP Split Clients module comes from NGINX Open Source and distributes requests between upstreams in a ratio based on a key. In this use case, the clusters are the “upstreams” of NGINX. So, as client requests arrive, the traffic is split between the two clusters. The key can be any available NGINX client $variable; to control the split for every request, use the $request_id variable, which is a unique number assigned by NGINX to every incoming request.

To configure the split ratios, determine which percentages you’d like to go to each cluster. In this example, we use K8s Cluster1 as a “large cluster” for production and Cluster2 as a “small cluster” for pre-production testing. If you had a small cluster for staging, you could use a 90:10 ratio and test 10% of your traffic on the small cluster to ensure everything is working before you roll out new changes to the large cluster. If that sounds too risky, you can change the ratio to 95:5. Truthfully, you can pick any ratio you’d like from 0 to 100%.

For most real-time production traffic, you likely want a 50:50 ratio where your two clusters are of equal size. But you can easily provide other ratios, based on the cluster size or other details. You can easily set the ratio to 0:100 (or 100:0) and upgrade, patch, repair, or even replace an entire cluster with no downtime. Let NGINX split_clients route the requests to the live cluster while you address issues on the other.


# NGINX Multi-Cluster Load Balancing
# HTTP Split Clients Configuration for Cluster1:Cluster2 ratios
# Provide 100, 99, 50, 1, 0% ratios (add/change as needed)
# Based on
# https://www.nginx.com/blog/dynamic-a-b-testing-with-nginx-plus/
# Chris Akker – Jan 2024
#

split_clients $request_id $split100 {
   * cluster1-cafe;                     # All traffic to cluster1
}

split_clients $request_id $split99 {
   99% cluster1-cafe;                   # 99% cluster1, 1% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split50 {
   50% cluster1-cafe;                   # 50% cluster1, 50% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split1 {
   1.0% cluster1-cafe;                  # 1% to cluster1, 99% to cluster2
   * cluster2-cafe;
}

split_clients $request_id $split0 {
   * cluster2-cafe;                     # All traffic to cluster2
}

# Choose which cluster upstream based on the ratio

map $split_level $upstream {
   100 $split100;
   99 $split99;
   50 $split50;
   1.0 $split1;
   0 $split0;
   default $split50;
}

You can add or edit the configuration above to match the ratios that you need (e.g., 90:10, 80:20, 60:40, and so on).

Note: NGINX also has a Split Clients module for TCP connections in the stream context, which can be used for non-HTTP traffic. This splits the traffic based on new TCP connections, instead of HTTP requests.
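
A minimal sketch of the stream variant follows. Because $request_id is an HTTP-only variable, the client address serves as the hash key here; the split_clients and proxy_pass directives are standard NGINX, while the listen port and upstream names are assumptions:

# Hypothetical sketch: split new TCP connections between two clusters
stream {
    split_clients $remote_addr $stream_upstream {
        90% cluster1-tcp;              # 90% of new connections to cluster1 (assumed upstream)
        *   cluster2-tcp;              # remainder to cluster2 (assumed upstream)
    }

    server {
        listen 8443;                   # assumed TCP listener
        proxy_pass $stream_upstream;   # route each connection per the computed split
    }
}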

NGINX Plus Key-Value Store

The next feature you can use is the NGINX Plus key-value store. This is a key-value object in an NGINX shared memory zone that can be used for many different data storage use cases. Here, we use it to store the split ratio value mentioned in the section above. NGINX Plus allows you to change any key-value record without reloading NGINX. This enables you to change this split value with an API call, creating the dynamic split function.

Based on our example, it looks like this:

{"cafe.example.com":90}

This KeyVal record reads:

  • The Key is the "cafe.example.com" hostname
  • The Value is "90" for the split ratio

Instead of hard-coding the split ratio in the NGINX configuration files, you can use the key-value store. This eliminates the NGINX reload otherwise required to change a static split value in the configuration.

In this example, NGINX is configured to use 90:10 for the split ratio with the large Cluster1 for the 90% and the small Cluster2 for the remaining 10%. Because this is a key-value record, you can change this ratio using the NGINX Plus API dynamically with no configuration reloads! The Split Clients module will use this new ratio value as soon as you change it, on the very next request.

Create the KV record, starting with a 50:50 ratio:

Add a new record to the KeyValue store by sending an API command to NGINX Plus:

curl -iX POST -d '{"cafe.example.com":50}' http://nginxlb:9000/api/8/http/keyvals/split

Change the KV record to a 90:10 ratio:

Change the KeyVal split ratio to 90 by using an HTTP PATCH method to update the KeyVal record in memory:

curl -iX PATCH -d '{"cafe.example.com":90}' http://nginxlb:9000/api/8/http/keyvals/split

Next, once the pre-production testing team verifies the new application code is ready, you deploy it to the large Cluster1 and change the ratio to 100%. This immediately sends all the traffic to Cluster1, and your new application is “live” without any disruption to traffic – no service outages, maintenance windows, reboots, reloads, or piles of tickets. It takes only one API call to change this split ratio at the time of your choosing.
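
Assuming the same key-value zone and API endpoint as in the examples above, that final cut-over is a single call:

curl -iX PATCH -d '{"cafe.example.com":100}' http://nginxlb:9000/api/8/http/keyvals/split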

Of course, being that easy to move from 90% to 100% means you also have an easy way to change the ratio from 100:0 to 50:50 (or even 0:100). So, you can have a hot backup cluster or scale your clusters horizontally with new resources. At full throttle, you can even build an entirely new cluster with the latest hardware, software, and patches – deploying the application and migrating the traffic over a period of time without dropping a single connection!

Use Cases

Using the HTTP Split Clients module with the dynamic key-value store can deliver the following use cases:

  • Active-active load balancing – For load balancing to multiple clusters.
  • Active-passive load balancing – For load balancing to primary, backup, and DR clusters and applications.
  • A/B, blue-green, and canary testing – Used with new Kubernetes applications.
  • Horizontal cluster scaling – Adds more cluster resources and changes the ratio when you’re ready.
  • Hitless cluster upgrades – Ability to use one cluster while you upgrade, patch, or repair the other cluster.
  • Instant failover – If one cluster has a serious issue, you can change the ratio to use your other cluster.

Configuration Examples

Here is an example of the key-value configuration:

# Define Key Value store, backup state file, timeout, and enable sync
 
keyval_zone zone=split:1m state=/var/lib/nginx/state/split.keyval timeout=365d sync;

keyval $host $split_level zone=split;

And this is an example of the cafe.example.com application configuration:

# Define server and location blocks for cafe.example.com, with TLS

server {
   listen 443 ssl;
   server_name cafe.example.com;

   status_zone https://cafe.example.com;

   ssl_certificate /etc/ssl/nginx/cafe.example.com.crt;
   ssl_certificate_key /etc/ssl/nginx/cafe.example.com.key;

   location / {
      status_zone /;

      proxy_set_header Host $host;
      proxy_http_version 1.1;
      proxy_set_header "Connection" "";
      proxy_pass https://$upstream;   # traffic split to upstream blocks
   }
}

# Define 2 upstream blocks – one for each cluster
# Servers managed dynamically by NLK, state file backup

# Cluster1 upstreams
 
upstream cluster1-cafe {
   zone cluster1-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster1-cafe.state; 
}
 
# Cluster2 upstreams
 
upstream cluster2-cafe {
   zone cluster2-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster2-cafe.state; 
}

The upstream server IP:ports are managed by NGINX Loadbalancer for Kubernetes, a new controller that also uses the NGINX Plus API to configure NGINX Plus dynamically. Details are in the next section.

Let’s take a look at the HTTP split traffic over time with Grafana, a popular monitoring and visualization tool. You use the NGINX Prometheus Exporter (based on njs) to export all of your NGINX Plus metrics, which are then collected and graphed by Grafana. Details for configuring Prometheus and Grafana can be found here.

There are four upstream servers in the graph: two for Cluster1 and two for Cluster2. We use an HTTP load generation tool to create HTTP requests and send them to NGINX Plus.

In the three graphs below, you can see the split ratio is at 50:50 at the beginning of the graph.

LB Upstream Requests diagram

Then, the ratio changes to 10:90 at 12:56:30.

LB Upstream Requests diagram

Then it changes to 90:10 at 13:00:00.

LB Upstream Requests diagram

You can find working configurations of Prometheus and Grafana on the NGINX Loadbalancer for Kubernetes GitHub repository.

Dynamic HTTP Upstreams: NGINX Loadbalancer for Kubernetes

You can change the static NGINX Upstream configuration to dynamic cluster upstreams using the NGINX Plus API and the NGINX Loadbalancer for Kubernetes controller. This free project is a Kubernetes controller that watches NGINX Ingress Controller and automatically updates an external NGINX Plus instance configured for TCP/HTTP load balancing. It’s very straightforward in design and simple to install and operate. With this solution in place, you can implement TCP/HTTP load balancing in Kubernetes environments, ensuring new apps and services are immediately detected and available for traffic – with no reload required.

Architecture and Flow

NGINX Loadbalancer for Kubernetes sits inside a Kubernetes cluster. It is registered with Kubernetes to watch the NGINX Ingress Controller (nginx-ingress) Service. When there is a change to the Ingress controller(s), NGINX Loadbalancer for Kubernetes collects the Worker IPs and the NodePort TCP port numbers, then sends the IP:ports to NGINX Plus via the NGINX Plus API.

The NGINX upstream servers are updated with no reload required, and NGINX Plus load balances traffic to the correct upstream servers and Kubernetes NodePorts. Additional NGINX Plus instances can be added to achieve high availability.

Diagram of NGINX Loadbalancer in action

A Snapshot of NGINX Loadbalancer for Kubernetes in Action

In the screenshot below, there are two windows that demonstrate NGINX Loadbalancer for Kubernetes deployed and doing its job:

  1. Service Type – LoadBalancer for nginx-ingress
  2. External IP – Connects to the NGINX Plus servers
  3. Ports – NodePort maps to 443:30158 with matching NGINX upstream servers (as shown in the NGINX Plus real-time dashboard)
  4. Logs – Indicates NGINX Loadbalancer for Kubernetes is successfully sending data to NGINX Plus

NGINX Plus window

Note: In this example, the Kubernetes worker nodes are 10.1.1.8 and 10.1.1.10

Adding NGINX Plus Security Features

As more and more applications running in Kubernetes are exposed to the open internet, security becomes necessary. Fortunately, NGINX Plus has enterprise-class security features that can be used to create a layered, defense-in-depth architecture.

With NGINX Plus in front of your clusters and performing the split_clients function, why not leverage that presence and add some beneficial security features? Here are a few of the NGINX Plus features that could be used to enhance security, with links and references to other documentation that can be used to configure, test, and deploy them.
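
For example, the sketch below layers IP allowlisting and mutual TLS (client certificate verification) onto the same cafe.example.com server block that performs the split. The directives are standard NGINX; the network range and certificate paths are assumptions:

# Hypothetical sketch: defense-in-depth additions to the split-clients server
server {
    listen 443 ssl;
    server_name cafe.example.com;

    ssl_certificate /etc/ssl/nginx/cafe.example.com.crt;
    ssl_certificate_key /etc/ssl/nginx/cafe.example.com.key;

    # Require client certificates issued by a trusted internal CA (assumed path)
    ssl_client_certificate /etc/ssl/nginx/internal-ca.crt;
    ssl_verify_client on;

    # Accept traffic only from known front-end networks (assumed range)
    allow 10.1.1.0/24;
    deny all;

    location / {
        proxy_pass https://$upstream;   # same dynamic split as above
    }
}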

Get Started Today

If you’re frustrated with networking challenges at the edge of your Kubernetes cluster, consider trying out this NGINX multi-cluster solution. Take the NGINX Loadbalancer for Kubernetes software for a test drive and let us know what you think. The source code is open source (under the Apache 2.0 license) and all installation instructions are available on GitHub.

To provide feedback, drop us a comment in the repo or message us in the NGINX Community Slack.

Automate TCP Load Balancing to On-Premises Kubernetes Services with NGINX
https://www.nginx.com/blog/automate-tcp-load-balancing-to-on-premises-kubernetes-services-with-nginx/ (August 22, 2023)

You are a modern app developer. You use a collection of open source and maybe some commercial tools to write, test, deploy, and manage new apps and containers. You’ve chosen Kubernetes to run these containers and pods in development, test, staging, and production environments. You’ve bought into the architectures and concepts of microservices, the Cloud Native Computing Foundation, and other modern industry standards.

On this journey, you’ve discovered that Kubernetes is indeed powerful. But you’ve probably also been surprised at how difficult, inflexible, and frustrating it can be. Implementing and coordinating changes and updates to routers, firewalls, load balancers and other network devices can become overwhelming – especially in your own data center! It’s enough to bring a developer to tears.

How you handle these challenges has a lot to do with where and how you run Kubernetes (as a managed service or on premises). This article addresses TCP load balancing, a key area where deployment choices impact ease of use.

TCP Load Balancing with Managed Kubernetes (a.k.a. the Easy Option)

If you use a managed service like a public cloud provider for Kubernetes, much of that tedious networking stuff is handled for you. With just one command (kubectl apply -f loadbalancer.yaml), the Service type LoadBalancer gives you a Public IP, DNS record, and TCP load balancer. For example, you could configure Amazon Elastic Load Balancer to distribute traffic to pods containing NGINX Ingress Controller and, using this command, have no worries when the backends change. It’s so easy, we bet you take it for granted!

TCP Load Balancing with On-Premises Kubernetes (a.k.a. the Hard Option)

With on-premises clusters, it’s a totally different scenario. You or your networking peers must provide the networking pieces. You might wonder, “Why is getting users to my Kubernetes apps so difficult?” The answer is simple but a bit shocking: The Service type LoadBalancer, the front door to your cluster, doesn’t actually exist on premises – there is no cloud controller behind it to provision an external load balancer.

To expose your apps and Services outside the cluster, your network team probably requires tickets, approvals, procedures, and perhaps even security reviews – all before they reconfigure their equipment. Or you might need to do everything yourself, slowing the pace of application delivery to a crawl. Even worse, you dare not make changes to any Kubernetes Services, for if the NodePort changes, the traffic could get blocked! And we all know how much users like getting 500 errors. Your boss probably likes it even less.

A Better Solution for On-Premises TCP Load Balancing: NGINX Loadbalancer for Kubernetes

You can turn the “hard option” into the “easy option” with our new project: NGINX Loadbalancer for Kubernetes. This free project is a Kubernetes controller that watches NGINX Ingress Controller and automatically updates an external NGINX Plus instance configured for load balancing. Being very straightforward in design, it’s simple to install and operate. With this solution in place, you can implement TCP load balancing in on-premises environments, ensuring new apps and services are immediately detected and available for traffic – with no need to get hands on.

Architecture and Flow

NGINX Loadbalancer for Kubernetes sits inside a Kubernetes cluster. It is registered with Kubernetes to watch the nginx-ingress Service (NGINX Ingress Controller). When there is a change to the backends, NGINX Loadbalancer for Kubernetes collects the Worker IPs and the NodePort TCP port numbers, then sends the IP:ports to NGINX Plus via the NGINX Plus API. The NGINX upstream servers are updated with no reload required, and NGINX Plus load balances traffic to the correct upstream servers and Kubernetes NodePorts. Additional NGINX Plus instances can be added to achieve high availability.
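
Under the hood, this is an ordinary NGINX Plus API call. Here is a hedged sketch of the kind of request the controller sends – the host, API version, and upstream name are assumptions, while the worker IP and NodePort match the example below:

# Hypothetical sketch: add a worker IP:NodePort to an NGINX Plus stream upstream
curl -X POST -d '{"server": "10.1.1.8:30158"}' \
    http://nginxlb:9000/api/8/stream/upstreams/nginx-ingress/servers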

Diagram of NGINX Loadbalancer in action

A Snapshot of NGINX Loadbalancer for Kubernetes in Action

In the screenshot below, there are two windows that demonstrate NGINX Loadbalancer for Kubernetes deployed and doing its job:

  1. Service Type – LoadBalancer (for nginx-ingress)
  2. External IP – Connects to the NGINX Plus servers
  3. Ports – NodePort maps to 443:30158 with matching NGINX upstream servers (as shown in the NGINX Plus real-time dashboard)
  4. Logs – Indicates NGINX Loadbalancer for Kubernetes is successfully sending data to NGINX Plus

Note: In this example, the Kubernetes worker nodes are 10.1.1.8 and 10.1.1.10

A screenshot of NGINX Loadbalancer for Kubernetes in Action

Get Started Today

If you’re frustrated with networking challenges at the edge of your Kubernetes cluster, take the project for a spin and let us know what you think. The source code for NGINX Loadbalancer for Kubernetes is open source (under the Apache 2.0 license) with all installation instructions available on GitHub.  

To provide feedback, drop us a comment in the repo or message us in the NGINX Community Slack.

Shaping the Future of Kubernetes Application Connectivity with F5 NGINX
https://www.nginx.com/blog/shaping-future-of-kubernetes-application-connectivity-with-f5-nginx/ (June 8, 2023)

Application connectivity in Kubernetes can be extremely complex, especially when you deploy hundreds – or even thousands – of containers across various cloud environments, including on-premises, public, private, or hybrid and multi-cloud. At NGINX, we firmly believe that integrating a unified approach to manage connectivity to, from, and within a Kubernetes cluster can dramatically simplify and streamline operations for development, infrastructure, platform engineering, and security teams.

In this blog, we want to share some reflections and thoughts on how NGINX created one of the most popular Ingress controllers today, and the ways we plan to continue delivering best-in-class capabilities to manage Kubernetes app connectivity in the future.

Also, don’t miss a chance to chat with our engineers and architects to discover the latest cool and exciting projects that NGINX is working on and see these technologies in action. NGINX, a part of F5, is proud to be a Platinum Sponsor of KubeCon North America 2023, and we hope to see you there! Come meet us at the NGINX booth to discuss how we can help enhance security, scalability, and observability of your Kubernetes platform.

Before anything, we want to note the importance of putting the customer first. NGINX does so by looking at each customer’s specific scenario and use cases, goals they aim to achieve, and challenges they might encounter on their journey. Then, we develop a solution leveraging our technology innovations that helps the customer achieve those goals and address any challenges in the most efficient way.

Ingress Controller

In 2017, we released the first version of NGINX Ingress Controller to answer the demand for enterprise-class Kubernetes-native app delivery. NGINX Ingress Controller helps improve user experience with load balancing, SSL termination, URI rewrites, session persistence, JWT authentication, and other key application delivery features. It is built on the most popular data plane in the world – NGINX – and leverages the Kubernetes Ingress API.

After its release, NGINX Ingress Controller gained immediate traction due to its ease of deployment and configuration, low resource utilization (even under heavy loads), and fast and reliable operations.

Ingress Controller ecosystem diagram

As our journey advanced, we reached limitations with the Ingress object in the Kubernetes API, such as its lack of support for protocols other than HTTP and its inability to attach customized request-handling policies like security policies. Due to these limitations, we introduced Custom Resource Definitions (CRDs) to enhance NGINX Ingress Controller capabilities and enable advanced use cases for our customers.

NGINX Ingress Controller provides the CRDs VirtualServer, VirtualServerRoute, TransportServer, and Policy to enhance performance, resilience, uptime, and security, along with observability for the API gateway, load balancer, and Ingress functionality at the edge of a Kubernetes cluster. In support of frequent app releases, these NGINX CRDs also enable role-oriented self-service governance across multi-tenant development and operations teams.

Ingress Controller custom resources

With our most recent release at the time of writing (version 3.1), we added JWT authorization and introduced Deep Service Insight to help customers monitor the status of their apps behind NGINX Ingress Controller. This helps implement advanced failover scenarios (e.g., from on-premises to cloud). Many other features are planned in the roadmap, so stay tuned for the new releases.

Learn more about how you can reduce complexity, increase uptime, and provide better insights into app health and performance at scale on the NGINX Ingress Controller web page.

Service Mesh

In 2020, we continued our Kubernetes app connectivity journey by introducing NGINX Service Mesh, a purpose-built, developer-friendly, lightweight yet comprehensive solution to power a variety of service-to-service connectivity use cases, including security and visibility, within the Kubernetes cluster.

NGINX Service Mesh Control and Data Planes

NGINX Service Mesh and NGINX Ingress Controller leverage the same data plane technology and can be tightly and seamlessly integrated for unified connectivity to, from, and within a cluster.

Prior to the latest release (version 2.0), NGINX Service Mesh used SMI specifications and a bespoke API server to deliver service-to-service connectivity within a Kubernetes cluster. With version 2.0, we decided to deprecate the SMI resources and replace them by mimicking the resources from Gateway API for Mesh Management and Administration (GAMMA). With this approach, we ensure unified north-south and east-west connectivity that leverages the same CRD types, simplifying and streamlining configuration and operations.

NGINX Service Mesh is available as a free download from GitHub.

Gateway API

The Gateway API is an open source project intended to improve and standardize app and service networking in Kubernetes. Managed by the Kubernetes community, the Gateway API specification evolved from the Kubernetes Ingress API to solve limitations of the Ingress resource in production environments. These limitations include defining fine-grained policies for request processing and delegating control over configuration across multiple teams and roles. It’s an exciting project – and since the Gateway API’s introduction, NGINX has been an active participant.

Gateway API Resources

That said, we intentionally didn’t want to include the Gateway API specifications in NGINX Ingress Controller because it already has a robust set of CRDs that cover a diverse variety of use cases, and some of those use cases are the same ones the Gateway API is intended to address.

In 2021, we decided to spin off a separate new project that covers all aspects of Kubernetes connectivity with the Gateway API: NGINX Kubernetes Gateway.

We decided to start our NGINX Kubernetes Gateway project, rather than just using NGINX Ingress Controller, for these reasons:

  • To ensure product stability, reliability, and production readiness (we didn’t want to include beta-level specs into a mature, enterprise-class Ingress controller).
  • To deliver comprehensive, vendor-agnostic configuration interoperability for Gateway API resources without mixing them with vendor-specific CRDs.
  • To experiment with data and control plane architectural choices and decisions with the goal to provide easy-to-use, fast, reliable, and secure Kubernetes connectivity that is future-proof.

In addition, the Gateway API formed a GAMMA subgroup to research and define capabilities and resources of the Gateway API specifications for service mesh use cases. Here at NGINX, we see the long-term future of unified north-south and east-west Kubernetes connectivity in the Gateway API, and we are heading in this direction.

The Gateway API is truly a collaborative effort across vendors and projects – all working together to build something better for Kubernetes users, based on experience and expertise, common touchpoints, and joint decisions. There will always be room for individual implementations to innovate and for data planes to shine. With NGINX Kubernetes Gateway, we continue working on native NGINX implementation of the Gateway API, and we encourage you to join us in shaping the future of Kubernetes app connectivity.

Ways you can get involved in NGINX Kubernetes Gateway include:

  • Join the project as a contributor
  • Try the implementation in your lab
  • Test and provide feedback

To join the project, visit NGINX Kubernetes Gateway on GitHub.

Even with this evolution of the Kubernetes Ingress API, NGINX Ingress Controller is not going anywhere and will stay here for the foreseeable future. We’ll continue to invest into and develop our proven and mature technology to satisfy both current and future customer needs and help users who need to manage app connectivity at the edge of a Kubernetes cluster.

Get Started Today

To learn more about how you can simplify application delivery with NGINX Kubernetes solutions, visit the Connectivity Stack for Kubernetes web page.

The Mission-Critical Patient-Care Use Case That Became a Kubernetes Odyssey
https://www.nginx.com/blog/mission-critical-patient-care-use-case-became-kubernetes-odyssey/ (May 17, 2023)

Downtime can lead to serious consequences.

These words are truer for companies in the medical technology field than in most other industries – in their case, the "serious consequences" can literally include death. We recently had the chance to dissect the tech stack of a company that’s seeking to transform medical record keeping from pen-and-paper to secure digital data that is accessible anytime and anywhere in the world. These data range from patient information to care directives, biological markers, medical analytics, historical records, and everything else shared between healthcare teams.

From the outset, the company has sought to address a seemingly simple question: “How can we help care workers easily record data in real time?” As the company has grown, however, the need to scale and make data constantly available has made solving that challenge increasingly complex. Here we describe how the company’s tech journey has led them to adopt Kubernetes and NGINX Ingress Controller.

Tech Stack at a Glance

Here’s a look at where NGINX fits into their architecture:

Diagram how NGINX fits into their architecture

The Problem with Paper

Capturing patient status and care information at regular intervals is a core duty for healthcare personnel. Traditionally, they have recorded patient information on paper, or more recently on a laptop or tablet. There are a few serious downsides:

  • Healthcare workers may interact with dozens of patients per day, so it’s usually not practical to write detailed notes while providing care. As a result, workers end up writing their notes at the end of their shift. At that point, mental and physical fatigue make it tempting to record only generic comments.
  • The workers must also depend on their memory of details about patient behavior. Inaccuracies might mask patterns that, if documented correctly and consistently over time, could facilitate diagnosis of larger health issues.
  • Paper records can’t easily be shared among departments within a single facility, let alone with other entities like EMTs, emergency room staff, and insurance companies. The situation isn’t much better with laptops or tablets if they’re not connected to a central data store or the cloud.

To address these challenges, the company created a simplified data recording system that provides shortcuts for accessing patient information and recording common events like dispensing medication. This ease of access and use makes it possible to record patient interactions in real time as they happen.

All data is stored in cloud systems maintained by the company, and the app integrates with other electronic medical records systems to provide a comprehensive longitudinal view of resident behaviors. This helps caregivers provide better continuity of care, creates a secure historical record, and can be easily shared with other healthcare software systems.

Physicians and other specialists also use the platform when admitting or otherwise engaging with patients. There’s a record of preferences and personal needs that travels with the patient to any facility. These can be used to help patients feel comfortable in a new setting, which improves outcomes like recovery time.

There are strict legal requirements about how long companies must store patient data. The company’s developers have built the software to offer extremely high availability with uptime SLAs that are much better than those of generic cloud applications. Keeping an ambulance waiting because a patient’s file won’t load isn’t an option.

The Voyage from the Garage to the Cloud to Kubernetes

Like many startups, the company initially saved money by running the first proof-of-concept application on a server in a co-founder’s home. Once it became clear the idea had legs, the company moved its infrastructure to the cloud rather than manage hardware in a data center. Being a Microsoft shop, they chose Azure. The initial architecture ran applications on traditional virtual machines (VMs) in Azure App Service, a managed application delivery service that runs Microsoft’s IIS web server. For data storage and retrieval, the company opted to use Microsoft’s SQL Server running in a VM as a managed application.

After several years running in the cloud, the company was growing quickly and experiencing scaling pains. It needed to scale infinitely, and horizontally rather than vertically because the latter is slow and expensive with VMs. This requirement led rather naturally to containerization and Kubernetes as a possible solution. A further point in favor of containerization was that the company’s developers need to ship updates to the application and infrastructure frequently, without risking outages. With patient notes being constantly added across multiple time zones, there is no natural downtime to push changes to production without the risk of customers immediately being affected by glitches.

A logical starting point for the company was Microsoft’s managed Kubernetes offering, Azure Kubernetes Services (AKS). The team researched Kubernetes best practices and realized they needed an Ingress controller running in front of their Kubernetes clusters to effectively manage traffic and applications running in nodes and pods on AKS.

Traffic Routing Must Be Flexible Yet Precise

The team tested AKS’s default Ingress controller, but found its traffic-routing features simply could not deliver updates to the company’s customers in the required manner. When it comes to patient care, there’s no room for ambiguity or conflicting information – it’s unacceptable for one care worker to see an orange flag and another a red flag for the same event, for example. Hence, all users in a given organization must use the same version of the app. This presents a big challenge when it comes to upgrades. There’s no natural time to transition a customer to a new version, so the company needed a way to use rules at the server and network level to route different customers to different app versions.

To achieve this, the company runs the same backend platform for all users in an organization and does not offer multi-tenancy with segmentation at the infrastructure layer within the organization. With Kubernetes, it is possible to split traffic using virtual network routes and cookies on browsers along with detailed traffic rules. However, the company’s technical team found that AKS’s default Ingress controller can split traffic only on a percentage basis, not with rules that operate at the level of customer organization or individual user as required.

In its basic configuration, the NGINX Ingress Controller based on NGINX Open Source has the same limitation, so the company decided to pivot to the more advanced NGINX Ingress Controller based on NGINX Plus, an enterprise-grade product which supports granular traffic control. Recommendations for NGINX Ingress Controller from Microsoft and the Kubernetes community, based on its high level of flexibility and control, helped solidify the choice. The configuration better supports the company’s need for pod management (as opposed to classic traffic management), ensuring that pods are running in the appropriate zones and traffic is routed to those services. Sometimes traffic is routed internally, but in most use cases it is routed back out through NGINX Ingress Controller for observability reasons.

Here Be Dragons: Monitoring, Observability and Application Performance

With NGINX Ingress Controller, the technical team has complete control over the developer and end user experience. Once users log in and establish a session, they can immediately be routed to a new version or reverted back to an older one. Patches can be pushed simultaneously and nearly instantaneously to all users in an organization. The software isn’t reliant on DNS propagation or updates on networking across the cloud platform.

NGINX Ingress Controller also meets the company’s requirement for granular and continuous monitoring. Application performance is extremely important in healthcare. Latency or downtime can hamper successful clinical care, especially in life-or-death situations. After the move to Kubernetes, customers started reporting downtime that the company hadn’t noticed. The company soon discovered the source of the problem: Azure App Service relies on sampled data. Sampling is fine for averages and broad trends, but it completely misses things like rejected requests and missing resources. Nor does it show the usage spikes that commonly occur every half hour as caregivers check in and log patient data. The company was getting only an incomplete picture of latency, error sources, bad requests, and unavailable services.

The problems didn’t stop there. By default, Azure App Service preserves stored data for only a month – far short of the dozens of years mandated by laws in many countries. Expanding the data store as required for longer preservation was prohibitively expensive. In addition, the Azure solution cannot see inside the Kubernetes networking stack. NGINX Ingress Controller can monitor both infrastructure and application parameters as it handles Layer 4 and Layer 7 traffic.

For performance monitoring and observability, the company chose a Prometheus time-series database attached to a Grafana visualization engine and dashboard. Integration with Prometheus and Grafana is pre-baked into the NGINX data and control plane; the technical team had to make only a small configuration change to direct all traffic through the Prometheus and Grafana servers. The information was also routed into a Grafana Loki logging database to make it easier to analyze logs and give the software team more control over data over time. 

This configuration also future-proofs against incidents requiring extremely frequent and high-volume data sampling for troubleshooting and fixing bugs. Addressing these types of incidents might be costly with the application monitoring systems provided by most large cloud companies, but the cost and overhead of Prometheus, Grafana, and Loki in this use case are minimal. All three are stable open source products which generally require little more than patching after initial tuning.

Stay the Course: A Focus on High Availability and Security

The company has always had a dual focus, on security to protect one of the most sensitive types of data there is, and on high availability to ensure the app is available whenever it’s needed. In the shift to Kubernetes, they made a few changes to augment both capacities.

For the highest availability, the technical team deploys an active-active, multi-zone, and multi-geo distributed infrastructure design for complete redundancy with no single point of failure. The team maintains N+2 active-active infrastructure with dual Kubernetes clusters in two different geographies. Within each geography, the software spans multiple data centers to reduce downtime risk, providing coverage in case of any failures at any layer in the infrastructure. Affinity and anti-affinity rules can instantly reroute users and traffic to up-and-running pods to prevent service interruptions. 

For security, the team deploys a web application firewall (WAF) to guard against bad requests and malicious actors. Protection against the OWASP Top 10 is table stakes provided by most WAFs. As they created the app, the team researched a number of WAFs including the native Azure WAF and ModSecurity. In the end, the team chose NGINX App Protect with its inline WAF and distributed denial-of-service (DDoS) protection.

A big advantage of NGINX App Protect is its colocation with NGINX Ingress Controller, which both eliminates a point of redundancy and reduces latency. Other WAFs must be placed outside of the Kubernetes environment, contributing to latency and cost. Even minuscule delays (say 1 millisecond extra per request) add up quickly over time.

Surprise Side Quest: No Downtime for Developers

Having completed the transition to AKS for most of its application and networking infrastructure, the company has also realized significant improvements to its developer experience (DevEx). Developers now almost always spot problems before customers notice any issues themselves. Since the switch, the volume of support calls about errors is down about 80%!

The company’s security and application-performance teams have a detailed Grafana dashboard and unified alerting, eliminating the need to check multiple systems or implement triggers for warning texts and calls coming from different processes. The development and DevOps teams can now ship code and infrastructure updates daily or even multiple times per day and use extremely granular blue-green patterns. Formerly, they were shipping updates once or twice per week and having to time them for low-usage windows, a stressful proposition. Now, code is shipped when ready, and the developers can monitor the impact directly by observing application behavior.

The results are positive all around – an increase in software development velocity, improvement in developer morale, and more lives saved.

Making Better Decisions with Deep Service Insight from NGINX Ingress Controller
https://www.nginx.com/blog/making-better-decisions-with-deep-service-insight-from-nginx-ingress-controller/ (April 6, 2023)

We released version 3.0 of NGINX Ingress Controller in January 2023 with a host of significant new features and enhanced functionality. One new feature we believe you’ll find particularly valuable is Deep Service Insight, available with the NGINX Plus edition of NGINX Ingress Controller.

Deep Service Insight addresses a limitation that hinders optimal functioning when a routing decision system such as a load balancer sits in front of one or more Kubernetes clusters – namely, that the system has no access to information about the health of individual services running in the clusters behind the Ingress controller. This prevents it from routing traffic only to clusters with healthy services, which potentially exposes your users to outages and errors like 404 and 500.

Deep Service Insight eliminates that problem by exposing the health status of backend service pods (as collected by NGINX Ingress Controller) at a dedicated endpoint where your systems can access and use it for better routing decisions.

In this post we take an in‑depth look at the problem solved by Deep Service Insight, explain how it works in some common use cases, and show how to configure it.

Why Deep Service Insight?

The standard Kubernetes liveness, readiness, and startup probes give you some information about the backend services running in your clusters, but not enough for the kind of insight you need to make better routing decisions all the way up your stack. Lacking the right information becomes even more problematic as your Kubernetes deployments grow in complexity and your business requirements for uninterrupted uptime become more pressing.

A common approach to improving uptime as you scale your Kubernetes environment is to deploy load balancers, DNS managers, and other automated decision systems in front of your clusters. However, because of how Ingress controllers work, a load balancer sitting in front of a Kubernetes cluster normally has no access to status information about the services behind the Ingress controller in the cluster – it can verify only that the Ingress controller pods themselves are healthy and accepting traffic.

NGINX Ingress Controller, on the other hand, does have information about service health. It already monitors the health of the upstream pods in a cluster by sending periodic passive health checks for HTTP, TCP, UDP, and gRPC services, monitoring request responsiveness, and tracking successful response codes and other metrics. It uses this information to decide how to distribute traffic across your services’ pods to provide a consistent and predictable user experience. Normally, NGINX Ingress Controller is performing all this magic silently in the background, and you might never think twice about what’s happening under the hood. Deep Service Insight “surfaces” this valuable information so you can use it more effectively at other layers of your stack.

How Does Deep Service Insight Work?

Deep Service Insight is available for services you deploy using the NGINX VirtualServer and TransportServer custom resources (for HTTP and TCP/UDP respectively; a minimal sketch of each resource follows the list below). Deep Service Insight uses the NGINX Plus API to share NGINX Ingress Controller’s view of the individual pods in a backend service at a dedicated endpoint unique to Deep Service Insight:

  • For VirtualServer – <IP_address>:<port>/probe/<hostname>
  • For TransportServer – <IP_address>:<port>/probe/ts/<service_name>

where

  • <IP_address> belongs to NGINX Ingress Controller
  • <port> is the Deep Service Insight port number (9114 by default)
  • <hostname> is the domain name of the service as defined in the spec.host field of the VirtualServer resource
  • <service_name> is the name of the service as defined in the spec.upstreams.service field in the TransportServer resource
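
To make the path components concrete, here is a hedged sketch of the two resource types, marking where <hostname> and <service_name> come from (all names and ports are illustrative, and the TransportServer assumes a matching listener in a GlobalConfiguration resource):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com     # <hostname> in /probe/<hostname>
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  routes:
  - path: /
    action:
      pass: tea

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-ts
spec:
  listener:
    name: dns-udp            # assumes a listener defined in GlobalConfiguration
    protocol: UDP
  upstreams:
  - name: dns-app
    service: dns-svc         # <service_name> in /probe/ts/<service_name>
    port: 5353
  action:
    pass: dns-app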

The output includes two types of information:

  1. An HTTP status code for the hostname or service name:

    • 200 OK – At least one pod is healthy
    • 418 I’m a teapot – No pods are healthy
    • 404 Not Found – There are no pods matching the specified hostname or service name
  2. Three counters for the specified hostname or service name:

    • Total number of service instances (pods)
    • Number of pods in the Up (healthy) state
    • Number of pods in the Unhealthy state

Here’s an example where all three pods for a service are healthy:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Day, DD Mon YYYY hh:mm:ss TZ
Content-Length: 32
{"Total":3,"Up":3,"Unhealthy":0}

For more details, see the NGINX Ingress Controller documentation.

You can further customize the criteria that NGINX Ingress Controller uses to decide whether a pod is healthy by configuring active health checks. You can configure the path and port to which the health check is sent, the number of failed checks that must occur within a specified time period for a pod to be considered unhealthy, the expected status code, timeouts for connecting or receiving a response, and more. To do so, include the Upstream.Healthcheck field in the VirtualServer or TransportServer resource.
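
For example, a hedged fragment of a VirtualServer upstream with an active health check might look like this (the path, thresholds, and status code are illustrative; verify the field names against the documentation for your release):

upstreams:
- name: tea
  service: tea-svc
  port: 80
  healthCheck:
    enable: true
    path: /healthz       # probe path on the pod
    interval: 10s        # how often to probe
    fails: 3             # failures before a pod is marked Unhealthy
    passes: 2            # successes before it is marked Up again
    statusMatch: "200"   # expected status code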

Sample Use Cases for Deep Service Insight

One use case where Deep Service Insight is particularly valuable is when a load balancer is routing traffic to a service that’s running in two clusters, say for high availability. Within each cluster, NGINX Ingress Controller tracks the health of upstream pods as described above. When you enable Deep Service Insight, information about the number of healthy and unhealthy upstream pods is also exposed on a dedicated endpoint. Your routing decision system can access the endpoint and use the information to divert application traffic away from unhealthy pods in favor of healthy ones.
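
As an illustration, an external NGINX Plus load balancer could consume the endpoint with an active health check similar to this hedged sketch (the addresses, hostname, and upstream name are illustrative):

match insight_ok {
    status 200;   # at least one pod behind that Ingress controller is healthy
}

upstream k8s_clusters {
    zone k8s_clusters 64k;
    server 10.0.0.10:443;   # NGINX Ingress Controller in cluster 1
    server 10.0.1.10:443;   # NGINX Ingress Controller in cluster 2
}

server {
    listen 443 ssl;
    # ... TLS and logging configuration ...

    location / {
        proxy_pass https://k8s_clusters;
        # Probe the Deep Service Insight port instead of the traffic port
        health_check port=9114 uri=/probe/cafe.example.com match=insight_ok;
    }
}

Because the match block requires status 200, any other response causes NGINX Plus to mark that cluster’s Ingress controller unhealthy and send traffic only to the other cluster.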

The diagram illustrates how Deep Service Insight works in this scenario.

Diagram showing how NGINX Ingress Controller provides information about Kubernetes pod health on the dedicated Deep Service Insight endpoint where a routing decision system uses it to divert traffic away from the cluster where the Tea service pods are unhealthy

You can also take advantage of Deep Service Insight when performing maintenance on a cluster in a high‑availability scenario. Simply scale the number of pods for a service down to zero in the cluster where you’re doing maintenance. The lack of healthy pods shows up automatically at the Deep Service Insight endpoint and your routing decision system uses that information to send traffic to the healthy pods in the other cluster. You effectively get automatic failover without having to change configuration on either NGINX Ingress Controller or the system, and your customers never experience a service interruption.
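
In practice, that maintenance workflow can be as simple as this hedged sketch (the deployment names are illustrative):

$ kubectl scale deployment coffee tea --replicas=0          # drain the cluster under maintenance
$ curl -i <NIC_IP_address>:9114/probe/cafe.example.com      # the probe now reports no healthy pods, so traffic shifts to the other cluster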

Enabling Deep Service Insight

To enable Deep Service Insight, include the -enable-service-insight command‑line argument in the Kubernetes manifest, or set the serviceInsight.create parameter to true if using Helm.

There are two optional arguments you can include to tune the endpoint for your environment (both appear in the Helm sketch after this list):

  • -service-insight-listen-port <port> – Change the Deep Service Insight port number from the default, 9114 (<port> is an integer in the range 1024–65535). The Helm equivalent is the serviceInsight.port parameter.
  • -service-insight-tls-string <secret> – A Kubernetes secret (TLS certificate and key) for TLS termination of the Deep Service Insight endpoint (<secret> is a character string with format <namespace>/<secret_name>). The Helm equivalent is the serviceInsight.secret parameter.
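
Putting it together, a Helm installation might look like this hedged sketch (the release name is illustrative, and depending on your chart version the parameters may not need the controller. prefix):

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm install my-release nginx-stable/nginx-ingress \
    --set controller.serviceInsight.create=true \
    --set controller.serviceInsight.port=9114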

Example: Enable Deep Service Insight for the Cafe Application

To see Deep Service Insight in action, you can enable it for the Cafe application often used as an example in the NGINX Ingress Controller documentation.

  1. Install the NGINX Plus edition of NGINX Ingress Controller with support for NGINX custom resources and with Deep Service Insight enabled:

    • If using Helm, set the serviceInsight.create parameter to true.
    • If using a Kubernetes manifest (Deployment or DaemonSet), include the -enable-service-insight argument in the manifest file.
  2. Verify that NGINX Ingress Controller is running:

    $ kubectl get pods -n nginx-ingress
    NAME                                          READY   STATUS    RESTARTS   AGE
    ingress-plus-nginx-ingress-6db8dc5c6d-cb5hp   1/1     Running   0          9d
  3. Deploy the Cafe application according to the instructions in the README.
  4. Verify that the NGINX VirtualServer custom resource is deployed for the Cafe application (the IP address is omitted for legibility):

    $ kubectl get vs 
    NAME   STATE   HOST               IP    PORTS      AGE
    cafe   Valid   cafe.example.com   ...   [80,443]   7h1m
  5. Verify that there are three upstream pods for the Cafe service running at cafe.example.com:

    $ kubectl get pods 
    NAME                     READY   STATUS    RESTARTS   AGE
    coffee-87cf76b96-5b85h   1/1     Running   0          7h39m
    coffee-87cf76b96-lqjrp   1/1     Running   0          7h39m
    tea-55bc9d5586-9z26v     1/1     Running   0          111m
  6. Access the Deep Service Insight endpoint:

    $ curl -i <NIC_IP_address>:9114/probe/cafe.example.com

    The 200 OK response code indicates that the service is ready to accept traffic (at least one pod is healthy). In this case all three pods are in the Up state.

    HTTP/1.1 200 OK
    Content-Type: application/json; charset=utf-8
    Date: Day, DD Mon YYYY hh:mm:ss TZ
    Content-Length: 32
    {"Total":3,"Up":3,"Unhealthy":0}

    The 418 I’m a teapot status code indicates that the service is unavailable (all pods are unhealthy).

    HTTP/1.1 418 I'm a teapot
    Content-Type: application/json; charset=utf-8
    Date: Day, DD Mon YYYY hh:mm:ss TZ
    Content-Length: 32
    {"Total":3,"Up":0,"Unhealthy":3}

    The 404 Not Found status code indicates that there is no service running at the specified hostname.

    HTTP/1.1 404 Not Found
    Date: Day, DD Mon YYYY hh:mm:ss TZ
    Content-Length: 0

Resources

For the complete changelog for NGINX Ingress Controller release 3.0.0, see the Release Notes.

To try NGINX Ingress Controller with NGINX Plus and NGINX App Protect, start your 30-day free trial today or contact us to discuss your use cases.

Managing Kubernetes Cost and Performance with Kubecost and NGINX https://www.nginx.com/blog/managing-kubernetes-cost-performance-with-kubecost-nginx/ Wed, 05 Apr 2023 15:08:18 +0000

Balancing cost and risk is top of mind for enterprises today. But without sufficient visibility, it is impossible to know if resources are being used effectively or consistently.

Kubernetes enables complex deployments of containerized workloads, which are often transient and consume variable amounts of cluster resources. That makes cloud environments a great fit for Kubernetes, because they offer pricing models where you only pay for what you use, instead of having to overprovision in anticipation of peak loads. Of course, cloud vendors charge a premium for that convenience. What if you could unlock the dynamic load balancing of public cloud, without the cost? And what if you could use the same solution for your on‑premises and public cloud deployments?

Now you can. Kubecost and NGINX are helping Kubernetes users reduce complexity and costs in countless deployments. When you use these solutions together, you get optimum performance and the ultimate visibility into that performance and associated costs.

With the insight from Kubecost, you can dramatically reduce the cost of your Kubernetes deployments while increasing performance and security. Examples of what you can achieve with Kubecost include:

  • Identify misconfigurations, such as a pod creating significant egress traffic to a storage bucket in another region.
  • Consolidate load balancer and Ingress controller tooling across a multi‑cluster Kubernetes footprint to reduce costs and improve performance.
  • Understand how your containers are performing so you can correctly size them to reduce costs without risks.

NGINX Delivers the Performance You Need

NGINX Ingress Controller is one of the most widely used Ingress technologies – with more than a billion pulls on Docker Hub to date – and is synonymous with high‑performance, scalable, and secure modern apps running in production.

NGINX Ingress Controller runs alongside NGINX Open Source or NGINX Plus instances in a Kubernetes environment. It monitors standard Kubernetes Ingress resources and NGINX custom resources to discover requests for services that require Ingress load balancing. NGINX Ingress Controller then automatically configures NGINX or NGINX Plus to route and load balance traffic to these services.

NGINX Ingress Controller can be used as a universal tool to combine API gateway, load balancer, and Ingress controller functions, simplifying operations and reducing cost and complexity.

Kubecost Reveals the True Cost of Network Operations

Kubecost gives Kubernetes users visibility into the cost of running each container in their clusters. This includes the obvious CPU, memory, and storage costs on each node. But Kubecost goes beyond those basics to reveal per‑pod network transfer costs which are typically incurred on data egress from the cloud provider.

There are two configuration options that determine how accurately Kubecost allocates costs to the correct workloads.

The first option is integrated cloud billing. Kubecost pulls billing data from the cloud provider, including the network transfer costs associated with the node that handled the traffic. Kubecost distributes this cost among the pods on that node by their share of container traffic.

While the total reported network cost is accurate, this method is not ideal: for many pods the only significant traffic stays within their own zone (and is thus free), yet Kubecost still attributes network costs to those workloads.

The second option, network cost configuration, addresses this limitation of the cloud billing integration by inspecting the source and destination of all traffic, so costs are attributed only to the workloads that actually generate billable transfer. The Kubecost Allocations dashboard then displays the proportion of spend across multiple categories, including Kubernetes concepts – like namespace, label, and service – and organizational divisions – like team, product, project, department, and environment.
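
If you want to experiment, the network cost feature is typically switched on through the Kubecost Helm chart; here is a hedged sketch (chart values vary by version, so check the Kubecost documentation):

$ helm repo add kubecost https://kubecost.github.io/cost-analyzer/
$ helm upgrade --install kubecost kubecost/cost-analyzer \
    --namespace kubecost --create-namespace \
    --set networkCosts.enabled=true   # deploys the per-node network cost DaemonSet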

Kubecost Allocations dashboard showing cumulative costs for past 60 days, categorized by namespace

Get All the Details at Our Upcoming Webinar

Join us on April 11 at 10:00 a.m. Pacific Time for a joint webinar, Managing Kubernetes Cost and Performance with NGINX & Kubecost. In live demos and how‑tos, we’ll show you how to implement the Kubecost configuration options mentioned here to reduce the cost and optimize the performance of your Kubernetes deployments.

Get Me to the Cluster…with BGP? https://www.nginx.com/blog/get-me-to-the-clusterwith-bgp/ Tue, 28 Feb 2023 16:02:59 +0000

Creating and managing a robust Kubernetes environment demands smooth collaboration between your Network and Application teams. But their priorities and working styles are usually quite different, leading to conflicts with potentially serious consequences – slow app development, delayed deployment, and even network downtime.

Only the success of both teams, working towards a common goal, can ensure today’s modern applications are delivered on time with proper security and scalability. So, how do you leverage the skills and expertise of each team, while helping them work in tandem?

In our whitepaper Get Me to the Cluster, we detail a solution for enabling external access to Kubernetes services that enables Network and Application teams to combine their strengths without conflict.

How to Expose Apps in Kubernetes Clusters

The solution works specifically for Kubernetes clusters hosted on premises, with nodes running on bare metal or traditional Linux virtual machines (VMs) and standard Layer 2 switches and Layer 3 routers providing the networking for communication in the data center. It doesn’t extend to cloud‑hosted Kubernetes clusters, because cloud providers allow control over neither the core networking in their data centers nor the networking in their managed Kubernetes environments.

Diagram of Kubernetes clusters hosted on premises, with nodes and standard Layer 2 switches and Layer 3 routers providing the networking for communication in the data center.

Before we go over the specifics of our solution, let’s review why other standard ways to expose applications in a Kubernetes cluster don’t work for on‑premises deployments:

  • Service – Groups together the pods running an app behind a stable virtual IP address. This is great for internal pod-to-pod communication, but the address is visible only inside the cluster, so it doesn’t help expose apps externally.
  • NodePort – Opens a specific port on every node in the cluster and forwards traffic to the corresponding app. While this allows external users to access the service, it’s not ideal because the configuration is static and you have to use high‑numbered TCP ports (instead of well‑known lower port numbers) and coordinate port numbers with other apps. You also can’t share common TCP ports among different apps.
  • LoadBalancer – Uses the NodePort definitions on each node to create a network path from the outside world to your Kubernetes nodes. It’s great for cloud‑hosted Kubernetes, because AWS, Google Cloud Platform, Microsoft Azure and most other cloud providers support it as an easily configured feature that works well and provides the required public IP address and matching DNS A record for a service. Unfortunately, there’s no equivalent for on‑premises clusters.

Enabling External User Access to On‑Premises Kubernetes Clusters

That leaves us with the Kubernetes Ingress object, which is specifically designed for traffic that flows from users outside the cluster to pods inside the cluster (north‑south traffic). The Ingress creates an external HTTP/HTTPS entry point for the cluster – a single IP address or DNS name at which external users can access multiple services. This is just what’s needed! The Ingress object is implemented by an Ingress controller – in our solution the enterprise‑grade F5 NGINX Ingress Controller based on NGINX Plus.
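
For reference, a minimal standard Ingress object looks like this hedged sketch (the host and service names are illustrative); NGINX Ingress Controller also offers VirtualServer and VirtualServerRoute custom resources with richer capabilities:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx        # handled by NGINX Ingress Controller
  rules:
  - host: app.example.com        # single external entry point
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80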

It might surprise you that another key component of the solution is Border Gateway Protocol (BGP), a Layer 3 routing protocol. But a great solution doesn’t have to be complex!

The solution outlined in Get Me to the Cluster actually has four components:

  1. iBGP network – Internal BGP (iBGP) is used to exchange routing information within an autonomous system (AS) in the data center and helps ensure the network is reliable and scalable. iBGP is already in place and supported by the Network team in most data centers.
  2. Project Calico CNI networking – Project Calico is an open source networking solution that flexibly connects environments in on‑premises data centers while giving fine‑grained control over traffic flow. We use the CNI plug‑in from Project Calico for networking in the Kubernetes cluster, with BGP enabled. This allows you to control IP address pools allocated for pods, which helps to quickly identify any networking issues. (A minimal peering sketch follows this list.)
  3. NGINX Ingress Controller based on NGINX Plus – With NGINX Ingress Controller you can watch the service endpoint IP addresses of the pods and automatically reconfigure the list of upstream services with no interruption of traffic processing. Application teams can also take advantage of the many other enterprise‑grade Layer 7 HTTP features in NGINX Plus, including active health checks, mTLS, and JWT‑based authentication.
  4. NGINX Plus as a reverse proxy at the edge – NGINX Plus sits as a reverse proxy at the edge of the Kubernetes cluster, providing a path between switches and routers in the data center and the internal network in the Kubernetes cluster. This functions as a replacement for the Kubernetes LoadBalancer object and uses Quagga for BGP.
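
The whitepaper walks through the full configuration; as a flavor of the Calico side, here is a hedged sketch of a BGP peering resource (the IP address and AS number are illustrative):

apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: nginx-plus-edge
spec:
  peerIP: 192.168.1.1      # the NGINX Plus edge host running Quagga
  asNumber: 64512          # private AS shared with the data center iBGP mesh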

The diagram illustrates the solution architecture, indicating which protocols the solution components use to communicate, not the order in which data is exchanged during request processing.

Diagram illustrating the solution architecture, indicating which protocols the solution components use to communicate

Download the Whitepaper for Free

By working together to implement a solution with well‑defined components, Network and Application teams can easily deliver optimal performance and reliability.

Our solution uses modern networking tools, protocols, and existing architectures. Because it is designed to be inexpensive and easy to implement, manage, and support, it reduces friction and builds bridges between your teams.

To see the code in action and learn step-by-step how to deploy our solution, download Get Me to the Cluster for free.

Shifting Security Left with F5 NGINX App Protect on Amazon EKS https://www.nginx.com/blog/shifting-security-left-f5-nginx-app-protect-amazon-eks/ Tue, 22 Nov 2022 16:00:17 +0000

According to The State of Application Strategy in 2022 report from F5, digital transformation in the enterprise continues to accelerate globally. Most enterprises deploy between 200 and 1,000 apps spanning across multiple cloud zones, with today’s apps moving from monolithic to modern distributed architectures.

Kubernetes first hit the tech scene for mainstream use in 2016, a mere six years ago. Yet today more than 75% of organizations world‑wide run containerized applications in production, up 30% from 2019. One critical issue in Kubernetes environments, including Amazon Elastic Kubernetes Service (EKS), is security. All too often security is “bolted on” at the end of the app development process, and sometimes not even until after a containerized application is already up and running.

The current wave of digital transformation, accelerated by the COVID‑19 pandemic, has forced many businesses to take a more holistic approach to security and consider a “shift left” strategy. Shifting security left means introducing security measures early into the software development lifecycle (SDLC) and using security tools and controls at every stage of the CI/CD pipeline for applications, containers, microservices, and APIs. It represents a move to a new paradigm called DevSecOps, where security is added to DevOps processes and integrates into the rapid release cycles typical of modern software app development and delivery.

DevSecOps represents a significant cultural shift. Security and DevOps teams work with a common purpose: to bring high‑quality products to market quickly and securely. Developers no longer feel stymied at every turn by security procedures that stop their workflow. Security teams no longer find themselves fixing the same problems repeatedly. This makes it possible for the organization to maintain a strong security posture, catching and preventing vulnerabilities, misconfigurations, and violations of compliance or policy as they occur.

Shifting security left and automating security as code protects your Amazon EKS environment from the outset. Learning how to become production‑ready at scale is a big part of building a Kubernetes foundation. Proper governance of Amazon EKS helps drive efficiency, transparency, and accountability across the business while also controlling cost. Strong governance and security guardrails create a framework for better visibility and control of your clusters. Without them, your organization is exposed to greater risk of security breaches and the accompanying long‑tail costs associated with damage to revenue and reputation.

To find out more about what to consider when moving to a security‑first strategy, take a look at this recent report from O’Reilly, Shifting Left for Application Security.

Automating Security for Amazon EKS with GitOps

Automation is an important enabler for DevSecOps, helping to maintain consistency even at a rapid pace of development and deployment. Like infrastructure as code, automating with a security-as-code approach entails using declarative policies to maintain the desired security state.

GitOps is an operational framework that facilitates automation to support and simplify application delivery and cluster management. The main idea of GitOps is having a Git repository that stores declarative policies of Kubernetes objects and the applications running on Kubernetes, defined as code. An automated process completes the GitOps paradigm to make the production environment match all stored state descriptions.

The repository acts as a source of truth in the form of security policies, which are then referenced by declarative configuration-as-code descriptions as part of the CI/CD pipeline process. As an example, NGINX maintains a GitHub repository with an Ansible role for F5 NGINX App Protect which we hope is useful for helping teams wanting to shift security left.

With such a repo, all it takes to deploy a new application or update an existing one is to update the repo. The automated process manages everything else, including applying configurations and making sure that updates are successful. This ensures that everything happens in the version control system for developers and is synchronized to enforce security on business‑critical applications.

When running on Amazon EKS, GitOps makes security seamless and robust, while virtually eliminating human errors and keeping track of all versioning changes that are applied over time.
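
As a concrete flavor, a playbook consuming the Ansible role mentioned above might look like this hedged sketch (the host group is illustrative, and the role name is assumed from the repository, so verify it before use):

- hosts: app_protect_hosts
  become: true
  roles:
    - role: nginxinc.nginx_app_protect   # assumed Galaxy name for the NGINX App Protect role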

Diagram showing how to shift left using security as code with NGINX App Protect WAF and DoS, Jenkins, and Ansible
Figure 1: NGINX App Protect helps you shift security left with security as code at all phases of your software development lifecycle

NGINX App Protect and NGINX Ingress Controller Protect Your Apps and APIs in Amazon EKS

A robust design for Kubernetes security policy must accommodate the needs of both SecOps and DevOps and include provisions for adapting as the environment scales. Kubernetes clusters can be shared in many ways. For example, a cluster might have multiple applications running in it and sharing its resources, while in another case there are multiple instances of one application, each for a different end user or group. This implies that security boundaries are not always sharply defined and there is a need for flexible and fine‑grained security policies.

The overall security design must be flexible enough to accommodate exceptions, must integrate easily into the CI/CD pipeline, and must support multi‑tenancy. In the context of Kubernetes, a tenant is a logical grouping of Kubernetes objects and applications that are associated with a specific business unit, team, use case, or environment. Multi‑tenancy, then, means multiple tenants securely sharing the same cluster, with boundaries between tenants enforced based on technical security requirements that are tightly connected to business needs.

An easy way to implement low‑latency, high‑performance security on Amazon EKS is by embedding the NGINX App Protect WAF and DoS modules with NGINX Ingress Controller. None of our competitors provides this type of inline solution. Using one product with synchronized technology provides several advantages, including reduced compute time, costs, and tool sprawl. Here are some additional benefits.

  • Securing the application perimeter – In a well‑architected Kubernetes deployment, NGINX Ingress Controller is the only point of entry for data‑plane traffic flowing to services running within Kubernetes, making it an ideal location for a WAF and DoS protection.
  • Consolidating the data plane – Embedding the WAF within NGINX Ingress Controller eliminates the need for a separate WAF device. This reduces complexity, cost, and the number of points of failure.
  • Consolidating the control plane – WAF and DoS configuration can be managed with the Kubernetes API, making it significantly easier to automate CI/CD processes. NGINX Ingress Controller configuration complies with Kubernetes role‑based access control (RBAC) practices, so you can securely delegate the WAF and DoS configurations to a dedicated DevSecOps team.

The configuration objects for NGINX App Protect WAF and DoS are consistent across both NGINX Ingress Controller and NGINX Plus. A master configuration can easily be translated and deployed to either platform, making it even easier to manage WAF configuration as code and deploy it to any application environment.

To build NGINX App Protect WAF and DoS into NGINX Ingress Controller, you must have subscriptions for both NGINX Plus and NGINX App Protect WAF or DoS. A few simple steps are all it takes to build the integrated NGINX Ingress Controller image (Docker container). After deploying the image (manually or with Helm charts, for example), you can manage security policies and configuration using the familiar Kubernetes API.
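
The build itself is driven by the Makefile in the NGINX Ingress Controller repository; here is a hedged sketch (the make target, registry prefix, and tag are illustrative, so check the Makefile for your release):

$ make debian-image-nap-dos-plus PREFIX=registry.example.com/nginx-plus-ingress TAG=mytag
$ docker push registry.example.com/nginx-plus-ingress:mytag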

Diagram showing topology for deploying NGINX App Protect WAF and DoS on NGINX Ingress Controller in Amazon EKS
Figure 2: NGINX App Protect WAF and DoS on NGINX Ingress Controller routes app and API traffic to pods and microservices running in Amazon EKS

The NGINX Ingress Controller based on NGINX Plus provides granular control and management of authentication, RBAC‑based authorization, and external interactions with pods. When the client is using HTTPS, NGINX Ingress Controller can terminate TLS and decrypt traffic to apply Layer 7 routing and enforce security.

NGINX App Protect WAF and NGINX App Protect DoS can then be deployed to enforce security policies to protect against point attacks at Layer 7 as a lightweight software security solution. NGINX App Protect WAF secures Kubernetes apps against OWASP Top 10 attacks, and provides advanced signatures and threat protection, bot defense, and Dataguard protection against exploitation of personally identifiable information (PII). NGINX App Protect DoS provides an additional line of defense at Layers 4 and 7 to mitigate sophisticated application‑layer DoS attacks with user behavior analysis and app health checks to protect against attacks that include Slow POST, Slowloris, flood attacks, and Challenge Collapsar.

Such security measures protect both REST APIs and applications accessed using web browsers. API security is also enforced at the Ingress level following the north‑south traffic flow.

NGINX Ingress Controller with NGINX App Protect WAF and DoS can secure Amazon EKS traffic on a per‑request basis rather than per‑service: this is a more useful view of Layer 7 traffic and a far better way to enforce SLAs and north‑south WAF security.

Diagram showing NGINX Ingress Controller with NGINX App Protect WAF and DoS routing north-south traffic to nodes in Amazon EKS
Figure 3: NGINX Ingress Controller with NGINX App Protect WAF and DoS routes north-south traffic to nodes in Amazon EKS

The latest High‑Performance Web Application Firewall Testing report from GigaOm shows how NGINX App Protect WAF consistently delivers strong app and API security while maintaining high performance and low latency, outperforming the other three WAFs tested – AWS WAF, Azure WAF, and Cloudflare WAF – at all tested attack rates.

As an example, Figure 4 shows the results of a test where the WAF had to handle 500 requests per second (RPS), with 95% (475 RPS) of requests valid and 5% of requests (25 RPS) “bad” (simulating script injection). At the 99th percentile, latency for NGINX App Protect WAF was 10x less than AWS WAF, 60x less than Cloudflare WAF, and 120x less than Azure WAF.

Graph showing latency at 475 RPS with 5% bad traffic at various percentiles for 4 WAFs: NGINX App Protect WAF, AWS WAF, Azure WAF, and Cloudflare WAF
Figure 4: Latency for 475 RPS with 5% bad traffic

Figure 5 shows the highest throughput each WAF achieved at 100% success (no 5xx or 429 errors) with less than 30 milliseconds latency for each request. NGINX App Protect WAF handled 19,000 RPS versus Cloudflare WAF at 14,000 RPS, AWS WAF at 6,000 RPS, and Azure WAF at only 2,000 RPS.

Graph showing maximum throughput at 100% success rate: 19,000 RPS for NGINX App Protect WAF; 14,000 RPS for Cloudflare WAF; 6,000 RPS for AWS WAF; 2,000 RPS for Azure WAF
Figure 5: Maximum throughput at 100% success rate

How to Deploy NGINX App Protect and NGINX Ingress Controller on Amazon EKS

NGINX App Protect WAF and DoS leverage an app‑centric security approach with fully declarative configurations and security policies, making it easy to integrate security into your CI/CD pipeline for the application lifecycle on Amazon EKS.

NGINX Ingress Controller provides several custom resource definitions (CRDs) to manage every aspect of web application security and to support a shared responsibility and multi‑tenant model. CRD manifests can be applied following the namespace grouping used by the organization, to support ownership by more than one operations group.

When publishing an application on Amazon EKS, you can build in security by leveraging the automation pipeline already in use and layering the WAF security policy on top.

Additionally, with NGINX App Protect on NGINX Ingress Controller you can configure resource usage thresholds for both CPU and memory utilization, to keep NGINX App Protect from starving other processes. This is particularly important in multi‑tenant environments such as Kubernetes which rely on resource sharing and can potentially suffer from the ‘noisy neighbor’ problem.

Configuring Logging with NGINX CRDs

The logs for NGINX App Protect and NGINX Ingress Controller are separate by design, to reflect how security teams usually operate independently of DevOps and application owners. You can send NGINX App Protect logs to any syslog destination that is reachable from the Kubernetes pods by setting the app-protect-security-log-destination annotation to the cluster IP address of the syslog pod. Additionally, you can use the APLogConf resource to specify which NGINX App Protect logs you care about, and by implication which logs are pushed to the syslog pod. NGINX Ingress Controller logs are forwarded to the local standard output, as for all Kubernetes containers.
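
For example, on a standard Ingress resource the destination is set with annotations like the following hedged sketch (the syslog address is illustrative; confirm the annotation names against the App Protect documentation for your release):

annotations:
  appprotect.f5.com/app-protect-enable: "True"
  appprotect.f5.com/app-protect-security-log-enable: "True"
  appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.105.238.128:514"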

This sample APLogConf resource specifies that all requests are logged (not only malicious ones) and sets the maximum message and request sizes that can be logged.

apiVersion: appprotect.f5.com/v1beta1
kind: APLogConf
metadata:
  name: logconf
  namespace: dvwa
spec:
  content:
    format: default
    max_message_size: 64k   # cap on the size of each logged message
    max_request_size: any   # log requests of any size
  filter:
    request_type: all       # log all requests, not only violations

Defining a WAF Policy with NGINX CRDs

The APPolicy object is a custom resource that defines a WAF security policy with signature sets and security rules, following a declarative approach. The approach applies to both NGINX App Protect WAF and DoS; the following example focuses on WAF. Policy definitions are usually stored in the organization’s source of truth as part of the SecOps catalog.

apiVersion: appprotect.f5.com/v1beta1 
kind: APPolicy 
metadata: 
  name: sample-policy
spec: 
  policy: 
    name: sample-policy 
    template: 
      name: POLICY_TEMPLATE_NGINX_BASE 
    applicationLanguage: utf-8 
    enforcementMode: blocking 
    signature-sets: 
    - name: Command Execution Signatures 
      alarm: true 
      block: true
[...]

Once the security policy manifest has been applied on the Amazon EKS cluster, create an APLogConf object called log-violations to define the type and format of entries written to the log when a request violates a WAF policy:

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
  name: log-violations
spec: 
  content: 
    format: default 
    max_message_size: 64k 
    max_request_size: any 
  filter: 
    request_type: illegal

The waf-policy Policy object then references sample-policy for NGINX App Protect WAF to enforce on incoming traffic when the application is exposed by NGINX Ingress Controller. It references log-violations to define the format of log entries sent to the syslog server specified in the logDest field.

apiVersion: k8s.nginx.org/v1 
kind: Policy 
metadata: 
  name: waf-policy 
spec: 
  waf: 
    enable: true 
    apPolicy: "default/sample-policy" 
    securityLog: 
      enable: true 
      apLogConf: "default/log-violations" 
      logDest: "syslog:server=10.105.238.128:5144"

Deployment is complete when DevOps publishes a VirtualServer object that configures NGINX Ingress Controller to expose the application on Amazon EKS:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: eshop-vs
spec:
  host: eshop.lab.local
  policies:
  - name: default/waf-policy   # attaches the WAF policy defined above
  upstreams:
  - name: eshop-upstream
    service: eshop-service
    port: 80
  routes:
  - path: /
    action:
      pass: eshop-upstream

The VirtualServer object makes it easy to publish and secure containerized apps running on Amazon EKS while upholding the shared responsibility model, where SecOps provides a comprehensive catalog of security policies and DevOps relies on it to shift security left from day one. This enables organizations to transition to a DevSecOps strategy.
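
The hand-off between the two teams then reduces to applying manifests from the shared repository, as in this hedged sketch (the file names are illustrative):

$ kubectl apply -f ap-logconf.yaml -f ap-policy.yaml   # SecOps: log formats and WAF policies
$ kubectl apply -f waf-policy.yaml                     # SecOps: the Policy object referencing them
$ kubectl apply -f eshop-virtualserver.yaml            # DevOps: expose the app with the policy attached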

Conclusion

For companies with legacy apps and security solutions built up over years, shifting security left on Amazon EKS is likely a gradual process. But reframing security as code that is managed and maintained by the security team and consumed by DevOps helps deliver services faster and make them production ready.

To secure north‑south traffic in Amazon EKS, you can leverage NGINX Ingress Controller embedded with NGINX App Protect WAF to protect against point attacks at Layer 7 and NGINX App Protect DoS for DoS mitigation at Layers 4 and 7.

To try NGINX Ingress Controller with NGINX App Protect WAF, start a free 30-day trial on the AWS Marketplace or contact us to discuss your use cases.

To discover how you can prevent security breaches and protect your Kubernetes apps at scale using NGINX Ingress Controller and NGINX App Protect WAF and DoS on Amazon EKS, please download our eBook, Add Security to Your Amazon EKS with F5 NGINX.

To learn more about how NGINX App Protect WAF outperforms the native WAFs for AWS, Azure, and Cloudflare, download the High-Performance Web Application Firewall Testing report from GigaOm and register for the webinar on December 6 where GigaOm analyst Jake Dolezal reviews the results.
