The News Archives - NGINX
https://www.nginx.com/category/news/
The High Performance Reverse Proxy, Load Balancer, Edge Cache, Origin Server

Announcing NGINX Gateway Fabric Release 1.2.0
https://www.nginx.com/blog/announcing-nginx-gateway-fabric-release-1-2-0/
Fri, 29 Mar 2024

We are thrilled to share the latest news on NGINX Gateway Fabric, our conformant implementation of the Kubernetes Gateway API. We recently updated it to version 1.2.0, with several exciting new features and improvements. This release focuses on enhancing the platform’s capabilities and ensuring it meets our users’ demands. We have included F5 NGINX Plus support and expanded our API surface to cover the most requested use cases. We believe these enhancements will create a better experience for all our users and help them achieve their goals more efficiently.

Figure 1: NGINX Gateway Fabric’s design and architecture overview

NGINX Gateway Fabric 1.2.0 at a glance:

  • NGINX Plus Support – NGINX Gateway Fabric now supports NGINX Plus for the data plane, which offers additional stability, more efficient resource utilization, extra metrics, and observability dashboards.
  • BackendTLSPolicy – TLS verification allows NGINX Gateway Fabric to confirm the identity of the backend application, protecting against potential hijacking of the connection by malicious applications. Additionally, TLS encrypts traffic within the cluster, ensuring secure communication between the client and the backend application.
  • URLRewrite – NGINX Gateway Fabric now supports URL rewrites in Route objects. With this feature, you can easily modify the original request URL before routing it to a more appropriate destination. That way, as your backend applications undergo API changes, you can keep the APIs you expose to your clients consistent.
  • Product Telemetry – With product telemetry now present in NGINX Gateway Fabric, we can help further improve operational efficiency of your infrastructure by learning about how you use the product in your environment. Also, we are planning to share these insights regularly with the community during our meetings.

We’ll take a deeper look at the new features below.

What’s New in NGINX Gateway Fabric 1.2.0?

NGINX Plus Support

NGINX Gateway Fabric 1.2.0 adds support for NGINX Plus as the data plane. With this upgrade, users can leverage advanced NGINX Plus features in their deployments, including additional Prometheus metrics, dynamic upstream reloads, and the NGINX Plus dashboard.

This upgrade also gives you the option to get support directly from NGINX for your environment.

Additional Prometheus Metrics

While using NGINX Plus as your data plane, additional advanced metrics are exported alongside the metrics you get with NGINX Open Source. Highlights include metrics for HTTP requests, streams, and connections, among many others. For the full list, see the NGINX Prometheus exporter, but note that the exporter itself is not required by NGINX Gateway Fabric.

With any installation of Prometheus or a Prometheus-compatible scraper, you can collect these metrics into your observability stack and build dashboards and alerts using one consistent layer within your architecture. Prometheus metrics are automatically exposed by NGINX Gateway Fabric on HTTP port 9113. You can change the default port by updating the Pod template.
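
If you manage your own Prometheus configuration, the scrape job for these metrics can be quite small. The following is a minimal sketch, not the project’s official configuration: the nginx-gateway namespace and the pod-based service discovery are assumptions for illustration, and the port matches the default 9113 mentioned above.

scrape_configs:
  - job_name: nginx-gateway-fabric
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [nginx-gateway]   # assumed namespace for NGF pods
    relabel_configs:
      # Point the scrape target at each pod IP on the default metrics port
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        replacement: ${1}:9113
        target_label: __address__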

If you are looking for a simple setup, you can visit our GitHub page for more information on how to deploy and configure Prometheus to start collecting metrics. Alternatively, if you just want to view the metrics and skip the setup, you can use the NGINX Plus dashboard, described in the next section.

After installing Prometheus in your cluster, you can access its dashboard by running port-forwarding in the background.

kubectl -n monitoring port-forward svc/prometheus-server 9090:80

Figure 2: Prometheus graph showing NGINX Gateway Fabric connections accepted

The above setup also works if you are using the default NGINX Open Source as your data plane; however, you will not see the additional metrics that NGINX Plus provides. As the size and scope of your cluster grow, we recommend looking at how NGINX Plus metrics can help you quickly resolve capacity planning issues, incidents, and even backend application faults.

Dynamic Upstream Reloads

Dynamic upstream reloads, enabled automatically when NGINX Gateway Fabric is installed with NGINX Plus, allow NGINX Gateway Fabric to apply updates to NGINX configurations without an NGINX reload.

Traditionally, when an NGINX reload occurs, existing connections are handled by the old worker processes while the newly configured workers handle new ones. When all the old connections are complete, the old workers are stopped and NGINX continues with only the newly configured workers. In this way, configuration changes are handled gracefully even in NGINX Open Source.

However, when NGINX is under high load, maintaining both old and new workers can create resource overhead that may cause problems, especially if you are trying to run NGINX Gateway Fabric as lean as possible. The dynamic upstream reload feature in NGINX Plus avoids this problem by providing an API endpoint for configuration changes, which NGINX Gateway Fabric uses automatically when present, eliminating the extra resource overhead of running old and new workers during the reload process.

As you begin to make changes to NGINX Gateway Fabric more often, reloads will occur more frequently. If you are curious how often or when reloads occur in your current installation of NGF, you can look at the Prometheus metric nginx_gateway_fabric_nginx_reloads_total. For a full, deep dive into the problem, check out Nick Shadrin’s article here!

Here’s an example of the metric in an environment with two deployments of NGINX Gateway Fabric in the Prometheus dashboard:

Figure 3: Prometheus graph showing the NGINX Gateway Fabric reloads total
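
Because this metric is a counter, you can also alert on its rate of change. Below is a minimal sketch of a Prometheus alerting rule; the group name, threshold, and durations are illustrative assumptions rather than recommendations.

groups:
  - name: ngf-reloads
    rules:
      - alert: FrequentNginxReloads
        # Fires if NGF has averaged more than one NGINX reload per minute
        # over the last 10 minutes (threshold chosen only for illustration)
        expr: rate(nginx_gateway_fabric_nginx_reloads_total[10m]) > 1 / 60
        for: 10m
        labels:
          severity: info
        annotations:
          summary: NGINX Gateway Fabric is reloading NGINX frequently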

NGINX Plus Dashboard

As previously mentioned, if you are looking for a quick way to view NGINX Plus metrics without a Prometheus installation or observability stack, the NGINX Plus dashboard gives you real-time monitoring of performance metrics you can use to troubleshoot incidents and keep an eye on resource capacity.

The dashboard gives you different views of all the metrics NGINX Plus provides right away and is easily accessible on an internal port. To see the dashboard’s capabilities for yourself, check out our dashboard demo site at demo.nginx.com.

To access the NGINX Plus dashboard on your NGINX Gateway Fabric installation, forward connections to port 8765 on your local machine (replace the placeholder with your NGINX Gateway Fabric pod name):

kubectl -n nginx-gateway port-forward <nginx-gateway-pod-name> 8765:8765

Next, open your preferred browser and type http://localhost:8765/dashboard.html in the address bar.

Figure 4: NGINX Plus Dashboard overview

BackendTLSPolicy

This release comes with the much-awaited support for BackendTLSPolicy. BackendTLSPolicy introduces encrypted TLS communication between NGINX Gateway Fabric and the backend application, greatly enhancing the security of the communication channel. The example below shows how to apply the policy so that server certificates are validated against a trusted certificate authority (CA).

BackendTLSPolicy enables users to secure their traffic between NGF and their backends; you can also set the minimum TLS version and cipher suites. This protects against malicious applications hijacking the connection and encrypts the traffic within the cluster.

To configure backend TLS termination, first create a ConfigMap with the CA certificate you want to use. For help with managing internal Kubernetes certificates, check out this guide.


kind: ConfigMap
apiVersion: v1
metadata:
  name: backend-cert
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <certificate contents>
    -----END CERTIFICATE-----

Next, we create the BackendTLSPolicy, which targets our secure-app Service and refers to the ConfigMap created in the previous step:


apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendTLSPolicy
metadata:
  name: backend-tls
spec:
  targetRef:
    group: ''
    kind: Service
    name: secure-app
    namespace: default
  tls:
    caCertRefs:
    - name: backend-cert
      group: ''
      kind: ConfigMap
    hostname: secure-app.example.com
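
The policy only governs how NGINX Gateway Fabric connects to the backend; client traffic still reaches the Service through a normal route. As a minimal, illustrative sketch (the parent Gateway name, hostname, and backend port below are assumptions), an HTTPRoute targeting the secure-app Service might look like this:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: secure-app
spec:
  parentRefs:
  - name: gateway              # assumed Gateway name
    sectionName: http
  hostnames:
  - "secure-app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: secure-app         # the Service targeted by the BackendTLSPolicy
      port: 8443               # assumed HTTPS port on the backend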

URLRewrite

With a URLRewrite filter, you can modify the original URL of an incoming request and route it to a different URL with zero performance impact. This is particularly useful when your backend applications change their exposed APIs but you want to maintain backwards compatibility for your existing clients. You can also use this feature to expose a consistent API URL to your clients while routing requests to different applications with different API URLs, providing an “experience” API that combines the functionality of several different APIs for your clients’ convenience and performance.

To get started, let’s create a Gateway for NGINX Gateway Fabric that defines an HTTP listener on port 80:


apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP

Let’s create an HTTPRoute resource and configure request filters to rewrite any requests for /coffee to /beans. We also provide a /latte endpoint that is rewritten to strip the /latte prefix before the request reaches the backend (“/latte/126” becomes “/126”).


apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /beans
    backendRefs:
    - name: coffee
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /latte
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: coffee
      port: 80
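
With this route applied, a request for cafe.example.com/coffee reaches the coffee backend with the path /beans, while a request for cafe.example.com/latte/126 arrives at the backend as /126.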

The HTTP rewrite feature helps ensure flexibility between the endpoints you expose to clients and how they map to your backends. It also allows traffic to be rewritten from one URL to another, which is particularly helpful when migrating content to a new website or redirecting API traffic.

Although NGINX Gateway Fabric supports path-based rewrites, it currently does not support path-based redirects. Let us know if this is a feature you need for your environment.

Product Telemetry

We have decided to include product telemetry as a mechanism to passively collect feedback as a part of the 1.2 release. This feature will collect a variety of metrics from your environment and send them to our data collection platform every 24 hours. No PII is collected, and you can see the full list of what is collected here.

We are committed to providing complete transparency around our telemetry functionality. We document every field we collect, you can verify what is collected by reading our code, and you always have the option to disable telemetry completely. We also plan to regularly review interesting observations from the statistics we collect with the community in our community meetings, so make sure to drop by!
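
As a sketch of what opting out might look like in a Helm-based installation, the values snippet below illustrates the idea. The exact value key is an assumption here; consult the NGINX Gateway Fabric documentation for the authoritative setting.

# values.yaml (key name is an assumption; verify against the NGF docs)
nginxGateway:
  productTelemetry:
    enable: false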

Resources

For the complete changelog for NGINX Gateway Fabric 1.2.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub!

We have bi-weekly community meetings on Mondays at 9AM Pacific/5PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Our Design Vision for NGINX One: The Ultimate Data Plane SaaS
https://www.nginx.com/blog/our-design-vision-for-nginx-one-the-ultimate-data-plane-saas/
Wed, 13 Mar 2024

A Deeper Dive into F5 NGINX One, and an Invitation to Participate in Early Access

A few weeks ago, we introduced NGINX One to our customers at AppWorld 2024. We also opened NGINX One Early Access, and a waiting list is now building. The solution is also being featured at AppWorld EMEA and AppWorld Asia Pacific. Events throughout both regions will continue through June.

So the timing seems appropriate, in the midst of all this in-person activity, to share a bit more of our thinking and planning for NGINX One with our blog readers and to re-extend that early access invitation to our global NGINX community.

Taking NGINX to Greater Heights

At the heart of all NGINX products lies our remarkable data plane. Designed and coded originally by Igor Sysoev, the NGINX data plane has stood the test of time. It is remarkably self-contained and performant. The code base has remained small and compact, with few dependencies and rare security issues. Our challenge was to make the data plane the center of a broader, complete product offering encompassing everything we build — and make that data plane more extensible, accessible, affordable, and manageable.

We also wanted to make NGINX a more accessible option for our large base of F5 customers. These are global teams for enterprise-wide network operations and security, many of which are responsible for NGINX deployments and ensuring that application development and platform ops teams get what they need to build modern applications.

Core Principles: No Silos, Consumption-Based, One Management Plane, Global Presence

With all this in mind, when we started planning NGINX One, we laid out a handful of design conventions that we wanted to follow:

  • Non-opinionated and flexible — NGINX One will be easy to implement across the entire range of NGINX use cases (web server, reverse proxy, application delivery, Kubernetes/microservices, application security, CDN).
  • Simple API interface — NGINX One will be easy to connect to any existing developer toolchain, platform, or system via RESTful APIs.
  • A single management system – NGINX One will provide one console and one management plane to run and configure everything NGINX. The console will be delivered “as-a-service” with zero installation required and easy extensibility to other systems, such as Prometheus.
  • Consumption-based — With NGINX One, users will pay only for what they consume, substantially reducing barriers to entry and lowering overall cost of ownership.
  • Scales quickly, easily, and affordably in any cloud environment — NGINX One will be cloud and environment agnostic, delivering data plane, app delivery, and security capabilities on any cloud, any PaaS or orchestration engine, and for function-based and serverless environments.
  • Simplified security — NGINX One will make securing your applications in any environment easier to implement and manage, utilizing NGINX App Protect capabilities such as OneWAF and DDoS protection.
  • Intelligence for optimizing configurations — NGINX One will leverage all of NGINX’s global intelligence to offer intelligent suggestions on configuring your data plane, reducing errors, and increasing application performance.
  • Extensibility — NGINX One will be easy to integrate with other platforms for networking, observability and security, and application delivery. NGINX One will simplify integration with F5 Big-IP and other products, making it easier for network operations and security operations teams to secure and manage their technology estate across our product families.

NGINX One Is Designed to Be the Ultimate Data Plane Multi-Tool

We wanted to deliver all this while leveraging our core asset — the NGINX data plane. In fact, foundational to our early thinking on NGINX One was an acknowledgment that we needed to return to our data plane roots and make that the center of our universe.

NGINX One takes the core NGINX data plane software you’re familiar with and enhances it with SaaS-based tools for observability, management, and security. Whether you’re working on small-scale deployments or large, complex systems, NGINX One integrates seamlessly. You can use it as a drop-in replacement for any existing NGINX product.

For those of you navigating hybrid and multicloud environments, NGINX One simplifies the process. Integrating into your existing systems, CI/CD workflows, and cloud services is straightforward. NGINX One can be deployed in minutes and is consumable via API, giving you the flexibility to scale as needed. This service includes all essential NGINX products: NGINX Plus, NGINX Open Source, NGINX Instance Manager, NGINX Ingress Controller, and NGINX Gateway Fabric. NGINX One itself is hosted across multiple clouds for resilience.

In a nutshell, NGINX One can unify all your NGINX products into a single management sphere. Most importantly, with NGINX One you pay only for what you use. There are no annual license charges or per-seat costs. For startups, a generous free tier will allow you to scale and grow without fear of getting whacked with “gotcha” pricing. You can provision precisely what you need when you need it. You can dial it up and down as needed and automate scaling to ensure your apps are always performant.

NGINX One + F5 Big-IP = One Management Plane and Global Presence

To make NGINX easier to manage as part of the F5 product family, NGINX One integrates more closely with F5 while leveraging F5’s global infrastructure. To start, NGINX One will be deployed on F5 Distributed Cloud, giving NGINX One users access to many additional capabilities. They can easily network across clouds with our Multicloud Network fabric without enduring complex integrations. They can configure granular security policies for specific teams and applications at the global firewall layer with less toil and fewer tickets. NGINX One users will also benefit from our global network of points-of-presence, bringing applications much closer to end users without having to bring in an additional content delivery network layer.

F5 users can easily leverage NGINX One to discover all instances of NGINX running in their enterprise environments and instrument those instances for better observability. In addition, F5’s security portfolio shares a single WAF engine, commonly referred to as “OneWAF”. This allows organizations to migrate the same policies they use in BIG-IP Advanced WAF to NGINX App Protect and to keep those policies synchronized.

A View into the Future

As we continue to mature NGINX One, we will ensure greater availability and scalability of your applications and infrastructure. We will do this by keeping your apps online with built-in high availability and granular traffic controls, and by addressing predictable and unpredictable changes through automation and extensibility. And when you discover issues, you can automatically apply supervised configuration changes to multiple instances simultaneously, dramatically reducing your operational costs.

You will be able to resolve problems before your customers notice any disruptions by leveraging detailed AI-driven insights into the health of your apps, APIs, and infrastructure. Identifying trends and cycles with historical data will enable you to accurately assess upcoming requirements, make better decisions, and streamline troubleshooting.

You can secure and control your network, applications, and APIs while ensuring that your DevOps teams can integrate seamlessly with their CI/CD systems and tooling. Security will move closer to your application code and APIs, delivering on the shift-left promise. Organizations implementing zero trust will be able to validate users from edge to cloud without introducing complexity or unnecessary overhead. Moreover, you’ll further enhance your security posture by immediately discovering and quickly mitigating NGINX instances impacted by common vulnerabilities and exposures (CVEs), ensuring uniform protection across your infrastructure.

NGINX One will also change the way you consume our product. We are moving to a SaaS-delivered model that allows you to pay for a single product and deliver our services wherever your teams need them: in your data center, the public cloud, or F5 Distributed Cloud. In the future, more capabilities will come to our data plane, such as WebAssembly, and we will introduce new use cases like AI gateway. We are making it frictionless and ridiculously easy for you to consume these services with consumption-based tiered pricing.

There will even be a free tier for a small number of NGINX instances and first-time customers. With consumption pricing you have a risk-free entry with low upfront costs.

It will be easier for procurement teams, because NGINX One will be included in all F5’s buying programs, including our Flexible Consumption Program.

No longer will pricing be a barrier for development teams. With NGINX One, they will get all the capabilities and management they need to secure, deliver, and optimize every app and API, everywhere.

When Can I Get NGINX One, and How Can I Prepare?

In light of our recent news, many NGINX customers have asked when they can purchase NGINX One and what they can do now to get ready.

We expect NGINX One to be commercially available later this year. However, as mentioned above, customers can raise their hands now to get early access, try it out, and share their feedback for us to incorporate into our planning. In the meantime, all commercially available NGINX products will be compatible with NGINX One, so there is no need to worry that near-term purchases will soon be obsolete. They won’t.

In preparation to harness all the benefits of NGINX One, customers should ensure they are using the latest releases of their NGINX instances and ensure they are running NGINX Instance Manager as prescribed in their license.

NGINX One: A SaaS Solution for Modern Application Management and Delivery
https://www.nginx.com/blog/nginx-one-a-saas-solution-for-modern-application-management-and-delivery/
Wed, 07 Feb 2024

NGINX One will soon be available “as-a-service,” with a single license and a unified management console for enterprise-wide security, availability, observability, and scalability—with a friendly pay-as-you-go pricing model.

Today at AppWorld, we are introducing and opening “early access” for NGINX One: a global SaaS offering for deploying, securing, monitoring, scaling, and managing all NGINX instances (whether they are on-prem, in the cloud, or at the edge), and all from a single management interface. It includes support for all our data plane components—NGINX Plus, NGINX Open Source, NGINX Unit, NGINX Gateway Fabric, and Kubernetes Ingress Controller—under a single enterprise license.

Breaking down silos, making NGINX easier for all

Unlike previous NGINX pricing models, NGINX One is completely consumption-based—you only pay for what you use. For every organization from startups to global enterprises, NGINX One makes deploying any NGINX use case simpler, faster, more secure, more scalable, easier to observe and monitor, and more cost effective.

Moreover, we are unifying our management plane into a more cohesive package on the F5 Distributed Cloud Platform, as enterprises have come to expect. This benefits not only traditional NGINX users in application development and DevOps roles, but also the broader community of F5 customers with other responsibilities, most of whom have NGINX deployed somewhere in their organizations. CIOs, CISOs, and their teams—network operations, security operations, and infrastructure—frequently share overlapping responsibilities for enterprise application delivery and security.

On the Distributed Cloud Platform, NGINX One users will benefit from many adjacent security and network capabilities that hybrid, multicloud enterprises are demanding. They can easily network across clouds with our multicloud network fabric without enduring complex integrations. They can configure granular security policies for specific teams and applications at a global level with less toil and fewer tickets. F5’s security portfolio shares a single WAF engine, commonly referred to as “OneWAF,” which allows organizations to migrate the same policies they were using in BIG-IP Advanced WAF to NGINX App Protect. And the Distributed Cloud Platform’s global network of points-of-presence can bring applications much closer to end users without having to bring in an additional content delivery network layer.

We envision NGINX One meeting our customers wherever they are, with a rich ecosystem of supported providers that can be used to seamlessly integrate existing systems, from automation to monitoring and beyond.

Figure 1: NGINX One unites NGINX’s data plane components, a SaaS management console hosted on F5 Distributed Cloud, and pay-as-you-go pricing into a single product offer.

(F5 + NGINX) * SaaS = Better together

F5 and NGINX are truly better together. And bringing our capabilities together will make life easier for everyone, particularly our customers. NetOps and SecOps teams can shift multicloud deployment left to app dev teams without sacrificing security or control. Developers and DevOps teams can more easily experiment and innovate, building and deploying apps more quickly. Best of all, DevOps and platform ops teams, NetOps, and SecOps can better collaborate by utilizing the same set of tools for security, observability, and scalability.

As a SaaS, NGINX One enables our product team to accelerate innovation and feature development. And we can safely and easily make NGINX more extensible, allowing our users to seamlessly integrate NGINX with all their developer tools, CI/CD workflows, and other infrastructure and application delivery systems.

When we brought NGINX into the F5 fold almost five years ago, we had a vision—a single management plane for cloud applications spanning all environments and numerous use cases. We saw a unified system that made it possible for everyone responsible for and dependent on NGINX to use it more easily, securely, and efficiently. It has taken a while. NGINX One is the giant leap on that journey.

Sign up for access and tell us what you think

Today at AppWorld, we are beginning an early access phase with a select group of customers. We are initially inviting existing NGINX customers to participate in these early phases for feedback. As soon as we can, we’ll expand access to more F5 customers and move to general availability later in 2024. Interested NGINX customers can connect with us here and we’ll be in touch soon. We’ll be onboarding additional customers throughout the next several months, and we expect a waiting list to develop.

I want to thank both our NGINX and F5 communities for being so generous with their feedback and time in helping us shape NGINX One. Our work as product builders is principally driven to reflect your needs and wisdom. We hope that you will continue to provide input and guide the future development of NGINX One. We are listening and more excited than ever. Thanks again.

Announcing NGINX Plus R31
https://www.nginx.com/blog/nginx-plus-r31-released/
Tue, 19 Dec 2023

We’re happy to announce the availability of NGINX Plus Release 31 (R31). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R31 include:

  • Native NGINX usage reporting – NGINX Plus now has native support for reporting on NGINX deployments across your organization, enabling a consolidated view of your NGINX infrastructure in NGINX Instance Manager. This feature enables enhanced governance of NGINX instances for compliance purposes.
  • Enhancements to SNI configuration – Previously, the server name passed through Server Name Indication (SNI) was taken from the proxy_ssl_name directive and applied to all servers in the upstream group. NGINX Plus R31 enables this SNI to be set per selected upstream server.
  • Periodic task execution with NGINX JavaScript – NGINX JavaScript introduces the js_periodic directive to allow running content at periodic intervals. This enhancement eliminates the need to set up a cron job and can be configured to run on all or specific worker processes for optimal performance.
  • A better NGINX startup experience – NGINX Plus R31 improves the overall NGINX startup experience in cases where the configuration contains a high number of “locations.”
  • QUIC+HTTP/3 optimizations and improvements – NGINX Plus R31 adds many enhancements and performance optimizations to the QUIC implementation, including support for path maximum transmission unit (MTU) discovery, congestion control improvements, and the ability to reuse the cryptographic context across your entire QUIC session.

Rounding out the release are new features and bug fixes inherited from NGINX Open Source and updates to the NGINX JavaScript module.

Important Changes in Behavior

Note: If you are upgrading from a release other than NGINX Plus R30, be sure to check the Important Changes in Behavior section in previous announcement blogs for all releases between your current version and this one.

Deprecation of the OpenTracing Module

The OpenTracing module that was introduced in NGINX Plus R18 is now deprecated and is scheduled for removal in NGINX Plus R34. The package will remain available with all NGINX Plus releases until then. We strongly advise switching to the OpenTelemetry module that was introduced in NGINX Plus R29.

Warning Message for Not Reporting NGINX Usage

NGINX Plus users are required to report their NGINX usage to F5 for compliance purposes. With the release of NGINX Plus R31, the ability to report your NGINX usage to NGINX Instance Manager is natively present and is enabled by default. A warning message is logged if the NGINX instance is not able to provide its usage information to NGINX Instance Manager for any reason.

Refer to the Native NGINX Usage Reporting section for details on how to configure this feature in your environment.

Changes to Platform Support

New operating systems supported:

  • FreeBSD 14
  • Alpine 3.19

Older operating systems removed:

  • Alpine 3.15, which reached end-of-life (EOL) on Nov 1, 2023

Older operating systems deprecated and scheduled for removal in NGINX Plus R32:

  • FreeBSD 12, which will reach EOL on Dec 31, 2023

New Features in Detail

Native NGINX Usage Reporting

NGINX Plus R31 introduces native communication with NGINX Instance Manager on your network to automate licensing compliance. If you participate in the F5 Flex Consumption Program, you will no longer need to manually track your NGINX Plus instances.

By default, NGINX Plus attempts to discover NGINX Instance Manager on startup via a DNS lookup of the nginx-mgmt.local hostname. While the hostname is configurable, for simplicity we suggest adding an A record to your local DNS that associates the default hostname with the IP address of the system running NGINX Instance Manager. NGINX Plus then establishes a TLS connection to NGINX Instance Manager, reporting its version number, hostname, and unique identifier every thirty minutes.

For an added layer of security, we also suggest provisioning this connection with mTLS by using the optional mgmt configuration block. At a regular cadence, NGINX Instance Manager will then report the total usage of NGINX Plus instances to an F5 service.

You will see a warning message in your error log if NGINX Plus experiences any problems resolving the nginx-mgmt.local hostname or communicating with NGINX Instance Manager.

This is an example of an error message indicating that the NGINX Plus instance is unable to resolve nginx-mgmt.local:

2023/12/21 21:02:01 [warn] 3050#3050: usage report: host not found resolving endpoint "nginx-mgmt.local"

And here is an example of an error message indicating that the NGINX Plus instance is experiencing difficulties communicating with NGINX Instance Manager:

2023/12/21 21:02:01 [warn] 3184#3184: usage report: connection timed out

Customizing the mgmt Configuration Block Settings

If you prefer to fine tune how your NGINX Plus instance communicates with NGINX Instance Manager, you may opt to use the new mgmt configuration block and associated directives. Doing so allows you to define a custom resolver, use an IP address or alternate hostname to identify your NGINX Instance Manager system, specify TLS options, use mTLS for enhanced security, and specify other custom parameters.

The following is a sample custom configuration:

mgmt {
    usage_report endpoint=instance-manager.local interval=30m;
    resolver 192.168.0.2; # Sample internal DNS IP

    uuid_file /var/lib/nginx/nginx.id;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers DEFAULT;

    ssl_certificate          client.pem;
    ssl_certificate_key      client.key;

    ssl_trusted_certificate  trusted_ca_cert.crt;
    ssl_verify               on;
    ssl_verify_depth         2;
}

For additional details on these directives, please see the product documentation.

For more information on downloading and installing NGINX Instance Manager, see the installation guide.

Note: If you are using an earlier version of NGINX Plus, you can still report your instances by following these instructions.

Enhancements to SNI Configuration

Prior to this release, NGINX Plus assumed that all servers in an upstream group were identical. This means they needed to be able to answer the same requests, respond to the same SNI name (when proxy_ssl_server_name is used), and return SSL certificates matching the same name.

However, scenarios exist where this behavior is not sufficient. For example, multiple virtual servers may be hosted behind an upstream server and need to be distinguished by a different SNI and/or Host header to route requests to specific resources. It’s also possible that the same certificate can’t be used on all servers in the upstream group, or that there are limitations preventing upstream servers from being split into separate upstream groups.

NGINX Plus R31 introduces support for SNI to be configured per upstream server. The variable $upstream_last_server_name refers to the name of the selected upstream server, which can then be passed to the proxied server using the proxy_ssl_server_name and proxy_ssl_name directives.

Here is how you set proxy_ssl_server_name to on, enabling a server name to pass through SNI:
proxy_ssl_server_name on;

And this is how to pass the selected upstream server name using proxy_ssl_name:
proxy_ssl_name $upstream_last_server_name;

Periodic Task Execution with NGINX JavaScript

NGINX JavaScript v0.8.1 introduced a new directive js_periodic that is available in both the http and stream contexts. This directive allows specifying a JavaScript content handler to run at regular intervals. This is useful in cases where custom code needs to run at periodic intervals and might require access to NGINX variables. The content handler receives a session object as an argument and also has access to global objects.

By default, the content handler runs on worker process 0, but it can be configured to run on specific or all worker processes.

This directive is available in the location context:

example.conf:

location @periodics {

    # to be run at 15 minute intervals in worker processes 1 and 3
    js_periodic main.handler interval=900s worker_affinity=0101;

    resolver 10.0.0.1;
    js_fetch_trusted_certificate /path/to/certificate.pem;
}

example.js:

async function handler(s) {
    let reply = await ngx.fetch('https://nginx.org/en/docs/njs/');
    let body = await reply.text();

    ngx.log(ngx.INFO, body);
}

For syntax and configuration details, please refer to the NGINX JavaScript docs.

A Better NGINX Startup Experience

In scenarios where an NGINX configuration contains a high number of “locations,” NGINX startup can take a considerable amount of time. In many cases, this might not be acceptable. The root issue lies in the sorting algorithm used to sort the list of locations.

NGINX Plus R31 introduces an enhancement that swaps the existing sorting algorithm from insertion sort, which has a time complexity of O(n^2), to merge sort, which has a time complexity of O(n log n).

In a test configuration with 20,000 locations, it was observed that the total startup time was reduced from 8 seconds to 0.9 seconds after this update.

QUIC+HTTP/3 Optimizations and Improvements

NGINX Plus R31 introduces several enhancements and performance optimizations to the QUIC+HTTP/3 implementation, such as:

  • Path maximum transmission unit (MTU) discovery when using QUIC+HTTP/3 – Path MTU is a measurement in bytes of the largest size frame or data packet that can be transmitted across a network. Prior to this change, the QUIC implementation used a path MTU of 1200 bytes for all datagrams. NGINX Plus now has support to discover the path MTU size, which is then used for all outgoing datagrams.
  • Reuse cryptographic context across the entire QUIC session – This optimization relates to the encryption and decryption behavior of QUIC packets. Previously, a separate cryptographic context was created for each encryption or decryption operation. Now, the same context gets used across the whole QUIC session, resulting in better performance.

Additional performance optimizations include reducing potential delays when sending acknowledgement packets, putting acknowledgement (ACK) frames in the front of the queue to reduce frame retransmissions and delays in delivery of ACK frames, and improvements to the congestion control behavior in Generic Segmentation Offload (GSO) mode.

Other Enhancements and Bug Fixes in NGINX Plus R31

Additional mgmt Module

In NGINX Plus R31, ngx_mgmt_module enables you to report NGINX usage information to NGINX Instance Manager. This information includes the NGINX hostname, NGINX version, and a unique instance identifier.

The module provides several directives to fine tune how your NGINX instance communicates with NGINX Instance Manager. For a complete list of available directives and configuration options, refer to the NGINX Docs.

Bug Fixes in the MQTT Module

Message Queuing Telemetry Transport (MQTT) support was introduced in NGINX Plus R29 and this release contains a few bug fixes for issues observed in the MQTT module.

One important fix addresses an issue of CONNECT messages being rejected when a password was not provided. Previously, we unconditionally expected the username field to be followed by a password. There are, however, special cases in the MQTT specification, such as anonymous authentication, where providing a password is not mandatory. The fix conditionally checks whether a password is expected by looking at the cflags field of the packet; if the flag is not set, the password is not mandatory.

Another bug fix stops the parsing of MQTT CONNECT messages when the message length is less than the number of bytes received.

HTTP/3 server_tokens support with variables

NGINX Plus R31 adds support for the previously missing server_tokens variables for HTTP/3 connections. The string value, which can contain variables, can be used to explicitly set the signature on error pages and the “Server” response header field value. If the string is empty, emission of the “Server” field is disabled.

Changes Inherited from NGINX Open Source

NGINX Plus R31 is based on NGINX Open Source 1.25.3 and inherits functional changes, features, and bug fixes made since NGINX Plus R30 was released (in NGINX 1.25.2 and 1.25.3).

Features

  • Path MTU discovery when using QUIC – Previously, a default size of 1200 MTU was used for all datagrams. As part of the QUIC+HTTP/3 improvements, we added support to discover the path MTU size which is then used for all outgoing datagrams.
  • Performance optimizations in QUIC – NGINX mainline version 1.25.2 introduced optimizations in the QUIC implementation to reuse the cryptographic context for the entire QUIC session. This reduces delays in sending ACK packets and puts ACK frames in the front of the queue to lessen frame retransmissions and delays in delivery of ACK frames.
  • Support for the TLS_AES_128_CCM_SHA256 cipher suite when using HTTP/3 – This enhancement adds TLS_AES_128_CCM_SHA256 support to QUIC, previously the only cipher suite not supported by the NGINX QUIC implementation. It’s disabled by default in OpenSSL and can be enabled with this directive: ssl_conf_command Ciphersuites TLS_AES_128_CCM_SHA256;
  • Provide nginx appName while loading OpenSSL configs – When using the OPENSSL_init_ssl() interface, instead of checking OPENSSL_VERSION_NUMBER, NGINX now tests for OPENSSL_INIT_LOAD_CONFIG to be defined and true. This ensures that the interface is not used with BoringSSL and LibreSSL, as they do not provide additional library initialization settings (notably, the OPENSSL_INIT_set_config_appname() call).

Changes

  • Change to the NGINX queue sort algorithm – As detailed above, NGINX now uses merge sort, which has a time complexity of O(n log n). This creates a better NGINX startup experience, especially when there is a very high number of “locations” in the configuration.
  • HTTP/2 iteration stream handling limit – This improvement ensures early detection of flood attacks on NGINX by imposing a limit on the number of new streams that can be introduced in one event loop. The limit is twice the value configured with http2_max_concurrent_streams. It is applied even if the maximum threshold of allowed concurrent streams is never reached, to account for cases when streams are reset immediately after sending the requests.

Bug Fixes

  • Fixed buffer management with HTTP/2 autodetection – As part of HTTP/2 autodetection on plain TCP connections, initial data is first read into a buffer specified by the client_header_buffer_size directive that does not have state reservation. This caused an issue where the buffer could be overread while saving the state. The current fix allows reading only the available buffer size instead of the fixed buffer size. This bug first appeared in NGINX mainline version 1.25.1 (NGINX Plus R30).
  • Incorrect transport mode in OpenSSL compatibility mode – Prior to this release, the OpenSSL Compatibility Layer caused the connection to be delayed when an incorrect transport parameter was sent by the client. The fix handles this behavior by first notifying the user about the incorrect parameter and then closing the connection.
  • Fixed handling of Status headers without reason-phrase – A Status header with an empty “reason phrase,” like Status: 404, is valid per the Common Gateway Interface (CGI) specification, but the trailing space was lost during parsing. This resulted in an HTTP/1.1 404 status line in the response, which violates the HTTP specification due to a missing trailing space. With this bug fix, only the status code is taken from such short Status header lines, and NGINX generates the status line itself with the space and an appropriate reason phrase if available.
  • Fixed memory leak on configuration reloads with PCRE2 – This issue occurred when NGINX was configured to use PCRE2 in version 1.21.5 or higher.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX CHANGES file.

Changes to the NGINX JavaScript Module

NGINX Plus R31 incorporates changes from the NGINX JavaScript (njs) module version 0.8.2. Here is a list of the notable changes in njs since 0.8.0 (which was part of the NGINX Plus R30 release).

Features

  • Introduced the console object with the methods error(), info(), log(), time(), timeEnd(), and warn().
  • Introduced the js_periodic directive for http and stream that allows specifying a JS handler to run at regular intervals.
  • Implemented items() method of a shared dictionary. This method returns all the non-expired key-value pairs.

Changes

  • Extended the “fs” module. Added existsSync() method.

Bug Fixes

  • Fixed the “xml” module. Fixed broken XML exception handling in parse() method.
  • Fixed RegExp.prototype.exec() with global regular expression (regexp) and Unicode input.
  • Fixed the size() and keys() methods of a shared dictionary.
  • Fixed erroneous exception in r.internalRedirect() that was introduced in 0.8.0.
  • Fixed incorrect order of keys in Object.getOwnPropertyNames().
  • Fixed HEAD response handling with large Content-Length in fetch API.
  • Fixed items() method for a shared dictionary.

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changes log.

Upgrade or Try NGINX Plus

If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R31 as soon as possible. In addition to all the great new features, you’ll pick up several additional fixes and improvements, and being up to date will make it easier for NGINX to help you if you need to raise a support ticket.

If you haven’t tried NGINX Plus, we encourage you to check it out. You can use it for security, load balancing, and API gateway use cases, or as a fully supported web server with enhanced monitoring and management APIs. Get started today with a free 30-day trial.

Watch: NGINX Gateway Fabric at KubeCon North America 2023
https://www.nginx.com/blog/watch-nginx-gateway-fabric-at-kubecon-north-america-2023/
Thu, 30 Nov 2023

This year at KubeCon North America 2023, we were thrilled to share the first version of NGINX Gateway Fabric. Amidst the sea of exciting new tech, the conference served as the ideal stage for unveiling our implementation of the Kubernetes Gateway API.

Booth attendees were excited to learn about our unified app delivery fabric approach to managing app and API connectivity in Kubernetes. NGINX Gateway Fabric is a conformant implementation of Kubernetes Gateway API specifications that provides fast, reliable, and secure Kubernetes app and API connectivity leveraging one of the most widely used data planes in the world – NGINX.

As always, F5 DevCentral was there covering the action. Here is the moment we got into talking about NGINX Gateway Fabric:

Another hot topic at KubeCon this year was multi-cluster configuration. As organizations adopt distributed architectures like Kubernetes, multi-cluster setups play a crucial role in scalability and availability. One option for achieving a multi-cluster setup is adding NGINX Plus in front of your Kubernetes clusters. By leveraging the cloud-native, easy-to-use features of NGINX Plus, including its reverse proxy, load balancing, and API gateway capabilities, users can enhance the performance, availability, and security of their multi-cluster Kubernetes environments. Stay tuned for more info on this topic soon!

How to Try NGINX Gateway Fabric

If you’d like to get started with our new Kubernetes implementation, visit the NGINX Gateway Fabric project on GitHub to get involved:

  • Try the implementation in your lab
  • Test and provide feedback
  • Join the project as a contributor

Announcing NGINX Gateway Fabric Version 1.0
https://www.nginx.com/blog/announcing-nginx-gateway-fabric-version-1-0/
Mon, 06 Nov 2023

Today, we reached a significant milestone and are very excited to announce the first major release of NGINX Gateway Fabric – version 1.0!

NGINX Gateway Fabric provides fast, reliable, and secure Kubernetes app connectivity leveraging Gateway API specifications and one of the most widely used data planes in the world, NGINX.

With NGINX Gateway Fabric, we’ve created a new tool category for Kubernetes – a unified application delivery fabric that is designed to streamline and simplify app, service, and API connectivity in Kubernetes, reducing complexity, improving availability, and providing security and visibility at scale.

NGINX Gateway Fabric, a part of Connectivity Stack for Kubernetes, is our conformant implementation of the Gateway API that is built on the proven NGINX data plane. The Gateway API is a cross-vendor, open source project intended to standardize and improve app and service networking in Kubernetes, and NGINX is an active participant of this project.

The Gateway API evolved from the Kubernetes Ingress API to address the limitations of using Ingress objects in production, including complexity and error proneness when configuring advanced use cases and supporting multi-tenant teams in the same infrastructure. In addition, the Gateway API formed the Gateway API for Mesh Management and Administration (GAMMA) subgroup to research and define capabilities and resources of the Gateway API specifications for service mesh use cases.

At NGINX, we see the long-term future of unified app and API connectivity to, from, and within a Kubernetes cluster in the Gateway API, and NGINX Gateway Fabric is the reflection of our vision. It is architected to enable both north-south and east-west Kubernetes app and service connectivity use cases, effectively combining Ingress controller and service mesh capabilities in one unified tool that leverages the same control and data planes with centralized management across any Kubernetes environment.

With version 1.0, we are focusing on advanced connectivity use cases at the edge of a Kubernetes cluster, such as blue-green deployments, TLS termination, and SNI routing. In the future roadmap, there are plans to expand these capabilities with more security and observability features, including addressing service-to-service communications use cases.

What Is NGINX Gateway Fabric?

NGINX Gateway Fabric is architected to deliver future-proof connectivity for apps and services to, from, and within a Kubernetes cluster with its built-in support for advanced use cases, role-based API model, and extensibility that unlocks the true power of NGINX.

NGINX Gateway Fabric standardizes on three primary Gateway API resources (GatewayClass, Gateway, and Routes) with role‑based access control (RBAC) mapping to the associated roles (infrastructure providers, cluster operators, and application developers).

Clearly defining the scope of responsibility and separation for different roles streamlines and simplifies administration. Specifically, infrastructure providers define GatewayClasses for Kubernetes clusters while cluster operators deploy and configure Gateways within a cluster, including policies. Application developers are then free to attach Routes to Gateways to expose their applications externally while sharing the same underlying infrastructure. When clients connect to their apps, NGINX Gateway Fabric routes these requests to the respective application.
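
To make the role split concrete, here is a minimal sketch of the GatewayClass an infrastructure provider would define. The controller name below follows the project’s naming conventions but should be treated as illustrative; check the NGINX Gateway Fabric documentation for the exact value.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  # Illustrative controller name; verify against the NGF documentation
  controllerName: gateway.nginx.org/nginx-gateway-controller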

To learn more on how NGINX Gateway Fabric processes the complex routing rules, read our blog How NGINX Gateway Fabric Implements Complex Routing Rules.

NGINX Gateway Fabric Benefits

NGINX Gateway Fabric helps increase uptime and reduce complexity of your Kubernetes environment from edge to cloud. It is designed to simplify operations, unlock advanced capabilities, and provide seamless interoperability for Kubernetes environments, delivering improved, future-proof Kubernetes app and service connectivity.

Benefits of NGINX Gateway Fabric include:

  • Data plane – Built on one of the world’s most popular data planes, NGINX Gateway Fabric provides fast, reliable, and secure connectivity for Kubernetes apps. It simplifies and streamlines Kubernetes platform deployment and management by leveraging the same data and control planes across any hybrid, multi-cloud Kubernetes environment, reducing complexity and tool sprawl.
  • Extensibility – Unlike Kubernetes Ingress resources, many advanced use cases are available “out of the box” with NGINX Gateway Fabric, including blue-green and canary deployments, A/B testing, and request/response manipulation. It also defines an annotation-less extensibility model with extension points and policy attachments to unlock advanced NGINX data plane features that are not supported by the API itself.
  • Interoperability – NGINX Gateway Fabric is a dedicated and conformant implementation of the Gateway API, which provides high-level configuration compatibility and portability for easier migration across different implementations. Its Kubernetes-native design ensures seamless ecosystem integration with other Kubernetes platform tools and processes like Prometheus and Grafana.
  • Governance – NGINX Gateway Fabric features a native role-based API model that enables self-service governance capabilities to share the infrastructure across multi-tenant teams. As an open source project, it operates compliant with established community governance procedures, delivering full transparency in its development process, features roadmap, and contributions.
  • Conformance – NGINX Gateway Fabric is tested and validated to conform with the Gateway API specifications in accordance with standardized conformance tests, ensuring a consistent experience with API operations.

NGINX Gateway Fabric Architecture

Rather than shoehorn Gateway API capabilities into NGINX Ingress Controller, we created NGINX Gateway Fabric as an entirely separate project to implement the Kubernetes Gateway API. If you are curious about the reasoning behind that, read our blog Why We Decided to Start Fresh with Our NGINX Gateway Fabric.

An NGINX Gateway Fabric pod consists of two containers, sketched in the manifest fragment after this list:

  • nginx container – Provides the data plane and consists of an NGINX master process and NGINX worker processes. The master process controls the worker processes, which handle the client traffic and load balance the traffic to the backend applications.
  • nginx-gateway container – Provides the control plane, watches Kubernetes objects (Services, Endpoints, Secrets, and Gateway API CRDs), and configures NGINX.
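
For illustration, the relevant fragment of the pod template might look like the sketch below; the image names and tags are illustrative and vary by release.

# Hypothetical fragment of an NGINX Gateway Fabric pod spec
containers:
- name: nginx-gateway  # control plane: watches Kubernetes objects and configures NGINX
  image: ghcr.io/nginxinc/nginx-gateway-fabric:1.0.0
- name: nginx          # data plane: NGINX master and worker processes
  image: ghcr.io/nginxinc/nginx-gateway-fabric/nginx:1.0.0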

For the detailed description of NGINX Gateway Fabric’s design, architecture, and component interactions, refer to the project documentation.

Getting Started with NGINX Gateway Fabric

If you are interested in NGINX’s implementation of the Gateway API, check out the NGINX Gateway Fabric project on GitHub. You can get involved by:

  • Joining the project as a contributor
  • Trying the implementation in your lab
  • Testing and providing feedback

To learn more about how you can enhance application delivery with NGINX Kubernetes solutions, visit the Connectivity Stack for Kubernetes web page.

Are you still wondering why you should try the Gateway API? Read our blog 5 Reasons to Try the Kubernetes Gateway API to get the answer.

Also, don’t miss the chance to visit the NGINX booth at KubeCon North America 2023 to chat with the developers of NGINX Gateway Fabric. NGINX, part of F5, is proud to be a Platinum sponsor of KubeCon NA 2023, and we hope to see you there!

Which NGINX Ingress Controllers Are Impacted by CVE-2022-4886, CVE-2023-5043, and CVE-2023-5044? https://www.nginx.com/blog/which-nginx-ingress-controllers-are-impacted-by-cve-2022-4886-cve-2023-5043-and-cve-2023-5044/ Fri, 03 Nov 2023 22:45:46 +0000

On October 25, 2023, three CVEs were reported by the National Institute of Standards and Technology (NIST) that affected NGINX Ingress Controller for Kubernetes:

  • CVE-2022-4886 – ingress-nginx path sanitization can be bypassed with log_format directive.
  • CVE-2023-5043 – ingress-nginx annotation injection causes arbitrary command execution.
  • CVE-2023-5044 – Code injection occurs via nginx.ingress.kubernetes.io/permanent-redirect annotation.

That report and subsequent publications (such as Urgent: New Security Flaws Discovered in NGINX Ingress Controller for Kubernetes) caused some confusion (and a number of support inquiries) pertaining to which NGINX Ingress controllers are actually affected and who should be concerned about addressing vulnerabilities described by these CVEs.

The confusion is totally understandable – did you know that there is more than one Ingress controller based on NGINX? To start, there are two completely different projects named “NGINX Ingress Controller”:

  • Community project – Found in the kubernetes/ingress-nginx repo on GitHub, this Ingress controller is based on the NGINX Open Source data plane but developed and maintained by the Kubernetes community, with docs hosted on GitHub.
  • NGINX project – Found in the nginxinc/kubernetes-ingress repo on GitHub, NGINX Ingress Controller is developed and maintained by F5 NGINX with docs on docs.nginx.com. This official NGINX project is available in two editions:
    • NGINX Open Source‑based (free and open source option)
    • NGINX Plus-based (commercial option)

There are also other Ingress controllers based on NGINX, such as Kong. Fortunately, their names are easily distinguished. If you’re not sure which one you’re using, check the container image of the running Ingress controller, then compare the Docker image name with the repos listed above.
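
For example, the image reference in each project's pod spec typically points to a different registry; the tags below are illustrative.

# Community project (kubernetes/ingress-nginx)
containers:
- name: controller
  image: registry.k8s.io/ingress-nginx/controller:v1.9.4

# F5 NGINX project (nginxinc/kubernetes-ingress)
containers:
- name: nginx-ingress
  image: nginx/nginx-ingress:3.3.0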

The vulnerabilities (CVE-2022-4886, CVE-2023-5043, and CVE-2023-5044) described above only apply to the community project (kubernetes/ingress-nginx). NGINX projects for NGINX Ingress Controller (nginxinc/kubernetes-ingress, both open source and commercial) are not affected by these CVEs.

For more information about the differences between NGINX Ingress Controller and Ingress controller projects, read our blog A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options.

How NGINX Gateway Fabric Implements Complex Routing Rules https://www.nginx.com/blog/how-nginx-gateway-fabric-implements-complex-routing-rules/ Thu, 02 Nov 2023 15:00:17 +0000

NGINX Gateway Fabric is an implementation of the Kubernetes Gateway API specification that uses NGINX as the data plane. It handles Gateway API resources such as GatewayClass, Gateway, ReferenceGrant, and HTTPRoute to configure NGINX as an HTTP load balancer that exposes applications running in Kubernetes to clients outside the cluster.

In this blog post, we explore how NGINX Gateway Fabric uses the NGINX JavaScript scripting language (njs) to simplify the implementation of HTTP request matching based on a request’s headers, query parameters, and method.

Before we dive into NGINX JavaScript, let’s go over how NGINX Gateway Fabric configures the data plane.

Configuring NGINX from Gateway API Resources Using Go Templates

To configure the NGINX data plane, we generate configuration files from Go templates, based on the Gateway API resources created in the Kubernetes cluster. To generate the files, we process the Gateway API resources, translate them into data structures that represent NGINX configuration, and then execute the NGINX configuration templates by applying them to those data structures. The NGINX data structures contain fields that map to NGINX directives.

For the majority of cases, this works very well. Most fields in the Gateway API resources can be easily translated into NGINX directives. Take, for example, traffic splitting. In the Gateway API, traffic splitting is configured by listing multiple Services and their weights in the backendRefs field of an HTTPRouteRule.

This configuration snippet splits 50% of the traffic to service-v1 and the other 50% to service-v2:


backendRefs:
- name: service-v1
  port: 80
  weight: 50
- name: service-v2
  port: 80
  weight: 50

Since traffic splitting is natively supported by the NGINX HTTP split clients module, it is straightforward to convert this to an NGINX configuration using a template.

The generated configuration would look like this:


split_clients $request_id $variant { 
    50% upstream-service-v1; 
    50% upstream-service-v2; 
}  
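
A Go template that could produce this output might look roughly like the sketch below; the field names are hypothetical and are not NGINX Gateway Fabric's actual data structures.

split_clients $request_id ${{ .VariableName }} {
    {{- range .Distributions }}
    {{ .Percent }}% {{ .UpstreamName }};
    {{- end }}
}

Executing this template against a data structure whose VariableName is variant and whose Distributions list the two weighted upstreams yields the split_clients block above.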

In cases like traffic splitting, Go templates are simple yet powerful tools that enable you to generate an NGINX configuration that reflects the traffic rules that the user configured through the Gateway API resources.

However, we found that more complex routing rules defined in the Gateway API specification could not easily be mapped to NGINX directives using Go templates, and we needed a higher-level language to evaluate these rules. That’s when we turned to NGINX JavaScript.

What Is NGINX JavaScript?

NGINX JavaScript is a general-purpose scripting framework for NGINX and NGINX Plus that’s implemented as a Stream and HTTP NGINX module. The NGINX JavaScript module allows you to extend NGINX’s configuration syntax with njs code, a subset of the JavaScript language designed to be a modern, fast, and robust high-level scripting language tailored for the NGINX runtime. Unlike standard JavaScript, which is primarily intended for web browsers, njs is a server-side language. This approach was taken to meet the requirements of server-side code execution and to integrate with NGINX’s request-processing architecture.

There are many use cases for njs (including response filtering, diagnostic logging, and joining subrequests) but this blog specifically explores how NGINX Gateway Fabric uses njs to perform HTTP request matching.

HTTP Request Matching

Before we dive into the NGINX JavaScript solution, let’s talk about the Gateway API feature being implemented.

HTTP request matching is the process of matching requests to routing rules based on certain conditions (matches) – e.g., the headers, query parameters, and/or method of the request. The Gateway API allows you to specify a set of HTTPRouteRules that will result in client requests being sent to specific backends based on the matches defined in the rules.

For example, if you have two versions of your application running on Kubernetes and you want to route requests with the header version:v2 to version 2 of your application and all other requests to version 1, you can achieve this with the following routing rules:


rules: 
  - matches: 
      - path: 
          type: PathPrefix 
          value: / 
    backendRefs: 
      - name: v1-app 
        port: 80 
  - matches: 
      - path: 
          type: PathPrefix 
          value: / 
        headers: 
          - name: version 
            value: v2 
    backendRefs: 
      - name: v2-app 
        port: 80 

Now, say you also want to send traffic with the query parameter TEST=v2 to version 2 of your application. You can add another rule that matches that query parameter:


- matches:
  - path:
      type: PathPrefix
      value: /
    queryParams:
      - name: TEST
        value: v2

These are the three routing rules defined in the example above (assembled into a single manifest sketch after this list):

  1. Matches requests with path / and routes them to the backend v1-app.
  2. Matches requests with path / and the header version:v2 and routes them to the backend v2-app.
  3. Matches requests with path / and the query parameter TEST=v2 and routes them to the backend v2-app.
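
Assembled into a single manifest, the complete HTTPRoute might look roughly like this sketch; the resource name and parent Gateway reference are illustrative.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-routes
spec:
  parentRefs:
  - name: gateway
  rules:
  - matches:  # rule 1: all requests with path /
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: v1-app
      port: 80
  - matches:  # rule 2: path / and header version:v2
    - path:
        type: PathPrefix
        value: /
      headers:
      - name: version
        value: v2
    backendRefs:
    - name: v2-app
      port: 80
  - matches:  # rule 3: path / and query parameter TEST=v2
    - path:
        type: PathPrefix
        value: /
      queryParams:
      - name: TEST
        value: v2
    backendRefs:
    - name: v2-app
      port: 80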

NGINX Gateway Fabric must process these routing rules and configure NGINX to route requests accordingly. In the next section, we will use NGINX JavaScript to handle this routing.

The NGINX JavaScript Solution

To determine where to route a request when matches are defined, we wrote a location handler function in njs – named redirect – which redirects requests to an internal location block based on the request’s headers, arguments, and method.

Let’s look at the NGINX configuration generated by NGINX Gateway Fabric for the three routing rules defined above.

Note: this config has been simplified for the purpose of this blog.


# nginx.conf
load_module /usr/lib/nginx/modules/ngx_http_js_module.so; # load NGINX JavaScript module

events {}

http {
    js_import /usr/lib/nginx/modules/httpmatches.js; # Import the njs script

    server {
        listen 80;

        location /_rule1 {
            internal; # Internal location block that corresponds to rule 1
            proxy_pass http://upstream-v1-app$request_uri;
        }

        location /_rule2 {
            internal; # Internal location block that corresponds to rule 2
            proxy_pass http://upstream-v2-app$request_uri;
        }

        location /_rule3 {
            internal; # Internal location block that corresponds to rule 3
            proxy_pass http://upstream-v2-app$request_uri;
        }

        location / {
            # This location block handles the client requests to the path /
            set $http_matches "[{\"redirectPath\":\"/_rule2\",\"headers\":[\"version:v2\"]},{\"redirectPath\":\"/_rule3\",\"params\":[\"TEST=v2\"]},{\"redirectPath\":\"/_rule1\",\"any\":true}]";
            js_content httpmatches.redirect; # Executes the redirect njs function
        }
    }
}

The js_import directive specifies the file that contains the redirect function, and the js_content directive executes the redirect function.

The redirect function depends on the http_matches variable, which contains a JSON-encoded list of the matches defined in the routing rules. Each JSON match holds the required headers, query parameters, and method, as well as the redirectPath, which is the path the request is redirected to when it satisfies that match. Every redirectPath must correspond to an internal location block.

Let’s take a closer look at each JSON match in the http_matches variable (shown in the same order as the routing rules above):

  1. {"redirectPath":"/_rule1","any":true} – The “any” boolean means that all requests match this rule and should be redirected to the internal location block with the path /_rule1.
  2. {"redirectPath":"/_rule2","headers"[“version:v2”]} – Requests that have the header version:v2 match this rule and should be redirected to the internal location block with the path /_rule2.
  3. {"redirectPath":"/_rule3","params"[“TEST:v2”]} – Requests that have the query parameter TEST=v2 match this rule and should be redirected to the internal location block with the path /_rule3.

One last thing to note about the http_matches variable is that the order of the matches matters. The redirect function will accept the first match that the request satisfies. NGINX Gateway Fabric will sort the matches according to the algorithm defined by the Gateway API to make sure the correct match is chosen.

Now let’s look at the JavaScript code for the redirect function (the full code can be found here):


// httpmatches.js 
function redirect(r) { 
  let matches; 

  try { 
    matches = extractMatchesFromRequest(r); 
  } catch (e) { 
    r.error(e.message); 
    r.return(HTTP_CODES.internalServerError); 
    return; 
  } 

  // Matches is a list of http matches in order of precedence. 
  // We will accept the first match that the request satisfies. 
  // If there's a match, redirect request to internal location block. 
  // If an exception occurs, return 500. 
  // If no matches are found, return 404. 
  let match; 
  try { 
    match = findWinningMatch(r, matches); 
  } catch (e) { 
    r.error(e.message); 
    r.return(HTTP_CODES.internalServerError); 
    return; 
  } 

  if (!match) { 
    r.return(HTTP_CODES.notFound); 
    return; 
  } 

  if (!match.redirectPath) { 
    r.error( 
      `cannot redirect the request; the match ${JSON.stringify( 
        match, 
      )} does not have a redirectPath set`, 
    ); 
    r.return(HTTP_CODES.internalServerError); 
    return; 
  } 

  r.internalRedirect(match.redirectPath); 
} 

The redirect function accepts the NGINX HTTP request object as an argument and extracts the http_matches variable from it. It then finds the winning match by comparing the request’s attributes (found on the request object) to the list of matches and internally redirects the request to the winning match’s redirect path.

Why Use NGINX JavaScript?

While it’s possible to implement HTTP request matching using Go templates to generate an NGINX configuration, it’s not straightforward when compared to simpler use cases like traffic splitting. Unlike the split_clients directive, there’s no native way to compare a request’s attributes to a list of matches in a low-level NGINX configuration.

We chose njs to implement HTTP request matching in NGINX Gateway Fabric for these reasons:

  • Simplicity – Makes complex HTTP request matching easy to implement, enhancing code readability and development efficiency.
  • Debugging – Simplifies debugging by allowing descriptive error messages, speeding up issue resolution.
  • Unit Testing – Code can be thoroughly unit tested, ensuring robust and reliable functionality.
  • Extensibility – High-level scripting nature enables easy extension and modification, accommodating evolving project needs without complex manual configuration changes.
  • Performance – Purpose-built for NGINX and designed to be fast.

Next Steps

If you are interested in our implementation of the Gateway API using the NGINX data plane, visit our NGINX Gateway Fabric project on GitHub to get involved:

  • Join the project as a contributor
  • Try the implementation in your lab
  • Test and provide feedback

And if you are interested in chatting about this project and other NGINX projects, stop by the NGINX booth at KubeCon North America 2023. NGINX, part of F5, is proud to be a Platinum Sponsor of KubeCon NA, and we hope to see you there!

To learn more about njs, check out additional examples or read this blog.

Why We Decided to Start Fresh with Our NGINX Gateway Fabric https://www.nginx.com/blog/why-we-decided-to-start-fresh-with-our-nginx-gateway-fabric/ Thu, 26 Oct 2023 15:00:59 +0000

In the world of Kubernetes Ingress controllers, NGINX has had a very successful run. NGINX Ingress Controller is widely deployed for commercial Kubernetes production use cases while also being developed and maintained as an open source version. So, you might think that when a big improvement came to Kubernetes networking – the Gateway API – we’d keep a good thing going and implement it in our existing Ingress products.

Instead, we chose a different path. Looking at the new Gateway API’s amazing possibilities and our chance to completely reimagine how to handle connectivity in Kubernetes, we realized that shoehorning a Gateway API implementation into our existing Ingress products would limit this boundless future.

This is why we decided to launch our own Gateway API project – NGINX Gateway Fabric. The project is open source and will be operated transparently and collaboratively. We’re excited to work with outside contributors and to share this journey with others, as we hope to create something that is special and unique.

How We Arrived at Our Gateway API Decision

While the decision to create an entirely new project around the Gateway API comes from optimism and excitement, it’s grounded in sound business and product strategy logic.

Longtime Kubernetes followers likely already know about NGINX Ingress Controller’s open source and commercial versions. Both deploy the same battle-tested NGINX data plane that runs in the NGINX Plus and NGINX Open Source reverse proxies. Before Kubernetes, NGINX’s data plane already worked great for load balancing and reverse proxying. In Kubernetes, our Ingress controllers achieve the same types of critical request routing and application delivery tasks.

NGINX prides itself on building commercial products that are lightweight, high-performance, well-tested, and ready for demanding environments. So, the product strategy for Kubernetes Ingress control mirrored our product strategy for reverse proxies – make a robust open source product for simpler use cases and a commercial product with additional features and capabilities for production Ingress control in business-critical application environments. That strategy worked in the world of Ingress control, partially because Ingress control lacked standardization and required significant custom resource definitions (CRDs) to deliver the advanced load-balancing and reverse-proxy capabilities that developers and architects enjoyed in networking products outside of Kubernetes.

Our customers rely on and trust NGINX Ingress Controller, and the commercial version already has many of the key advanced capabilities that the Gateway API was designed to address. Additionally, NGINX has been participating in the Gateway API project since early on, and we recognized that it was going to take a few years for the Gateway API ecosystem to fully mature. (In fact, many of the Gateway API’s specifications continue to evolve, such as the GAMMA specification to make it better able to integrate with service meshes.)

But we decided that shoehorning beta-level Gateway API specifications into NGINX Ingress Controller would inject unnecessary uncertainty and complexity into a mature, enterprise-class Ingress controller. Anything we sell commercially must be stable, reliable, and 100% production ready. Gateway API solutions will get there too, but the process is still only at its beginning.

Our Goals with NGINX Gateway Fabric

With NGINX Gateway Fabric, our primary goal is to create a product that stands the test of time, in the way that NGINX Plus and NGINX Open Source have. To reach the point where we felt comfortable labeling our Gateway API project “future-proof,” we realized we’d need to experiment with architectural choices for its data and control planes. For example, we might need to look at different ways to manage Layer 4 and Layer 7 connectivity or minimize external dependencies. Such experimentation is best performed on a blank slate, free of historical precedents and requirements. While we’re using the tried and tested NGINX data plane as a foundational component of NGINX Gateway Fabric, we’re open to new ideas beyond that.

We also wanted to deliver comprehensive, vendor-agnostic configuration interoperability for Gateway API resources. One of the Gateway API’s biggest improvements over the existing Kubernetes Ingress paradigm is that it standardizes many elements of service networking. This standardization should, in theory, lead to a better future where many Gateway API resources can easily interact and connect.

However, a key to building this future is to leave behind the world of vendor-specific CRDs (which can result in vendor lock-in). That can get very challenging in blended products that must support CRDs designed for the world of Ingress control. And it’s easier in an open source project that focuses on interoperability as a first-order concern. To ditch the tightly linked CRDs, we needed to build something that solely focuses on the new surfaces exposed by the Gateway API and its constituent APIs.

Join Us on the Gateway API Journey

We’re still in the very early days. Only a handful of projects and products have implemented the Gateway API specification, and most of them have elected to embed it within existing projects and products.

That means it’s a time of great opportunity – the best time to start a new project. We’re running the NGINX Gateway Fabric project completely in the open, with transparent decision-making and project governance. Because the project is written in Go, we invite the massive Gopher community to make suggestions, start filing PRs, or reach out to us with ideas.

It’s possible the Gateway API will shift the whole Kubernetes landscape. Entire classes of products may no longer be necessary, and new products might pop up. The Gateway API offers such a rich set of possibilities that we honestly don’t know where this will end up – but we’re really looking forward to the ride. Come along for the journey – it’s going to be fun!

You can start by:

  • Joining the project as a contributor
  • Trying the implementation in your lab
  • Testing and providing feedback

To join the project, visit NGINX Gateway Fabric on GitHub.

If you want to chat live with our experts on this and other NGINX projects, stop by the NGINX booth at KubeCon North America 2023! NGINX, a part of F5, is proud to be a Platinum sponsor of KubeCon NA this year, and we hope to see you there!

HTTP/2 Rapid Reset Attack Impacting F5 NGINX Products https://www.nginx.com/blog/http-2-rapid-reset-attack-impacting-f5-nginx-products/ Tue, 10 Oct 2023 12:00:25 +0000

This blog post centers on a vulnerability that was recently discovered related to the HTTP/2 protocol. Under certain conditions, this vulnerability can be exploited to execute a denial-of-service attack on NGINX Open Source, NGINX Plus, and related products that implement the server-side portion of the HTTP/2 specification. To protect your systems from this attack, we’re recommending an immediate update to your NGINX configuration.

The Problem with HTTP/2 Stream Resets

After establishing a connection with a server, the HTTP/2 protocol allows clients to initiate concurrent streams for data exchange. Unlike previous iterations of the protocol, if an end user decides to navigate away from the page or halt data exchange for any other reason, HTTP/2 provides a method for canceling the stream. It does this by issuing an RST_STREAM frame to the server, saving it from executing work needlessly.

The vulnerability is exploited by initiating and rapidly canceling a large number of HTTP/2 streams over an established connection, thereby circumventing the server’s concurrent stream maximum. This happens because incoming streams are reset faster than subsequent streams arrive, allowing the client to overload the server without ever reaching its configured threshold.

Impact on NGINX

For performance and resource consumption reasons, NGINX limits the number of concurrent streams to a default of 128 (see http2_max_concurrent_streams). In addition, to optimally balance network and server performance, NGINX allows the client to persist HTTP connections for up to 1000 requests by default using an HTTP keepalive (see keepalive_requests).

By relying on the default keepalive limit, NGINX prevents this type of attack. Creating additional connections to circumvent this limit exposes bad actors via standard layer 4 monitoring and alerting tools.

However, if NGINX is configured with a keepalive that is substantially higher than the default and recommended setting, the attack may deplete system resources. When a stream reset occurs, the HTTP/2 protocol requires that no subsequent data is returned to the client on that stream. Typically, the reset results in negligible server overhead in the form of tasks that gracefully handle the cancellation. However, circumventing NGINX’s stream threshold enables a client to take advantage of this overhead and amplify it by rapidly initiating thousands of streams. This forces the server CPU to spike, denying service to legitimate clients.

Figure: Denial-of-service by establishing HTTP/2 streams, followed by rapid stream cancellations under abnormally high keepalive limits.

Steps for Mitigating Attack Exposure

As a fully featured server and proxy, NGINX provides administrators with powerful tools for mitigating denial-of-service attacks. To take advantage of these features, it is essential that the following updates are made to NGINX configuration files, minimizing the server’s attack surface:

  • keepalive_requests should be kept at the default setting of 1000 requests.
  • http2_max_concurrent_streams should be kept at the default setting of 128 streams.

We also recommend that these safety measures are added as a best practice (a combined configuration sketch follows the list):

  • limit_conn enforces a limit on the number of connections allowed from a single client. This directive should be added with a reasonable setting balancing application performance and security.
  • limit_req enforces a limit on the number of requests that will be processed within a given amount of time from a single client. This directive should be added with a reasonable setting balancing application performance and security.
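
As a minimal sketch of these recommendations in one place, the configuration might look like the following; the zone names, sizes, and limit values are illustrative and should be tuned to balance application performance and security.

http {
    # Shared-memory zones keyed on the client address
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=100r/s;

    server {
        listen 443 ssl http2;

        # Keep the defaults discussed above
        keepalive_requests 1000;
        http2_max_concurrent_streams 128;

        # Cap concurrent connections and request rate per client
        limit_conn perip_conn 100;
        limit_req  zone=perip_req burst=200 nodelay;
    }
}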

How We’re Responding

We experimented with multiple mitigation strategies that helped us gain an understanding of how this attack could impact our wide range of customers and users. While this research confirmed that NGINX is already equipped with all the necessary tools to avoid the attack, we wanted to take additional steps to ensure that users who do need to configure NGINX beyond recommended specifications are able to do so.

Our investigation yielded a method for improving server resiliency under various forms of flood attacks that are theoretically possible over the HTTP/2 protocol. As a result, we’ve issued a patch that increases system stability under these conditions. To protect against such threats, we recommend that NGINX Open Source users rebuild binaries from the latest codebase and NGINX Plus customers update to the latest packages (R29p1 or R30p1) immediately.

How the Patch Works

To ensure the early detection of flood attacks on NGINX, the patch imposes a limit on the number of new streams that can be introduced within one event loop. This limit is set to twice the value configured using the http2_max_concurrent_streams directive – for example, at the default of 128 concurrent streams, at most 256 new streams are allowed per event loop. The limit will be applied even if the maximum threshold is never reached, like when streams are reset right after sending the request (as in the case of this attack).

Affected Products

This vulnerability impacts the NGINX HTTP/2 module (ngx_http_v2_module). For information about your specific NGINX or F5 product that might be affected, please visit: https://my.f5.com/manage/s/article/K000137106.

For more information on CVE-2023-44487 – HTTP/2 Rapid Reset Attack, please see: https://www.cve.org/CVERecord?id=CVE-2023-44487

Acknowledgements

We would like to recognize Cloudflare, Amazon, and Google for their part in the discovery and collaboration in identifying and mitigating this vulnerability.
