Announcing NGINX Gateway Fabric Release 1.2.0

We are thrilled to share the latest news on NGINX Gateway Fabric, which is our conformant implementation of the Kubernetes Gateway API. We recently updated it to version 1.2.0, with several exciting new features and improvements. This release focuses on enhancing the platform’s capabilities and ensuring it meets our users’ demands. We have included F5 NGINX Plus support and expanded our API surface to cover the most demanded use cases. We believe these enhancements will create a better experience for all our users and help them achieve their goals more efficiently.

Figure 1: NGINX Gateway Fabric’s design and architecture overview


NGINX Gateway Fabric 1.2.0 at a glance:

  • NGINX Plus Support – NGINX Gateway Fabric now supports NGINX Plus for the data plane, which offers additional stability, higher resource utilization, additional metrics, and observability dashboards.
  • BackendTLSPolicy – TLS verification allows NGINX Gateway Fabric to confirm the identity of the backend application, protecting against potential hijacking of the connection by malicious applications. Additionally, TLS encrypts traffic within the cluster, ensuring secure communication between the client and the backend application.
  • URLRewrite – NGINX Gateway Fabric now supports URL rewrites in Route objects. With this feature, you can easily modify the original request URL and redirect it to a more appropriate destination. That way, as your backend applications undergo API changes, you can keep the APIs you expose to your clients consistent.
  • Product Telemetry – With product telemetry now present in NGINX Gateway Fabric, we can help further improve operational efficiency of your infrastructure by learning about how you use the product in your environment. Also, we are planning to share these insights regularly with the community during our meetings.

We’ll take a deeper look at the new features below.

What’s New in NGINX Gateway Fabric 1.2.0?

NGINX Plus Support

NGINX Gateway Fabric version 1.2.0 has been released with support for NGINX Plus, providing users with many new benefits. With this upgrade, users can leverage the advanced features of NGINX Plus in their deployments, including additional Prometheus metrics, dynamic upstream reloads, and the NGINX Plus dashboard.

This upgrade also allows you the option to get support directly from NGINX for your environment.

Additional Prometheus Metrics

While using NGINX Plus as your data plane, additional advanced metrics are exported alongside the metrics you normally get with NGINX Open Source. Highlights include metrics for HTTP requests, streams, and connections, among many others. For the full list, check the NGINX Prometheus exporter, which provides a convenient reference; note that the exporter itself is not required for NGINX Gateway Fabric.

With any installation of Prometheus or a Prometheus-compatible scraper, you can pull these metrics into your observability stack and build dashboards and alerts using one consistent layer within your architecture. Prometheus metrics are automatically exposed by NGINX Gateway Fabric on HTTP port 9113. You can change the default port by updating the Pod template.
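As an illustration, here is a minimal sketch of a Prometheus scrape job that discovers the NGINX Gateway Fabric Pod and scrapes its metrics port. The job name, namespace, and port below are assumptions based on the defaults mentioned above; adjust them to match your installation.

scrape_configs:
  - job_name: nginx-gateway-fabric
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - nginx-gateway
    relabel_configs:
      # Keep only containers that expose the default metrics port 9113
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        regex: "9113"
        action: keep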

If you are looking for a simple setup, you can visit our GitHub page for more information on how to deploy and configure Prometheus to start collecting. Alternatively, if you are just looking to view the metrics and skip the setup, you can use the NGINX Plus dashboard, explained in the next section.

After installing Prometheus in your cluster, you can access its dashboard by running port-forwarding in the background.

kubectl -n monitoring port-forward svc/prometheus-server 9090:80

Figure 2: Prometheus Graph showing NGINX Gateway Fabric connections accepted

The above setup also works if you are using the default NGINX Open Source as your data plane. However, you will not see the additional metrics that NGINX Plus provides. As the size and scope of your cluster grows, we recommend looking at how NGINX Plus metrics can help you quickly resolve capacity planning issues, incidents, and even backend application faults.

Dynamic Upstream Reloads

Dynamic upstream reloads, enabled automatically by NGINX Gateway Fabric when installed with NGINX Plus, allow NGINX Gateway Fabric to make updates to NGINX configurations without an NGINX reload.

Traditionally, when an NGINX reload occurs, the existing connections are handled by the old worker processes while the newly configured workers handle new ones. When all the old connections are complete, the old workers are stopped and NGINX continues with only the newly configured workers. In this way, configuration changes are handled gracefully, even in NGINX Open Source.

However, when NGINX is under high load, maintaining both old and new workers can create a resource overhead that may cause problems, especially if trying to run NGINX Gateway Fabric as lean as possible. The dynamic upstream reloads featured in NGINX Plus bypass this problem by providing an API endpoint for configuration changes that NGINX Gateway Fabric will use automatically if present, reducing the need for extra resource overhead to handle old and new workers during the reload process.
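For a sense of what that API-driven update looks like, here is a sketch using the NGINX Plus upstream configuration API. The host, port, upstream name, and server address are placeholders, and NGINX Gateway Fabric issues equivalent calls automatically; you would not normally run these by hand.

# List the servers currently in an upstream group
curl -s http://<nginx-plus-host>:<api-port>/api/9/http/upstreams/my-backend/servers

# Add a server to the group with no NGINX reload
curl -s -X POST -d '{"server": "10.0.0.12:8080"}' \
  http://<nginx-plus-host>:<api-port>/api/9/http/upstreams/my-backend/servers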

As you begin to make changes more often to NGINX Gateway Fabric, reloads will occur more frequently. If you are curious how often or when reloads occur in your current installation of NGF, you can look at the Prometheus metric nginx_gateway_fabric_nginx_reloads_total. For a full, deep dive into the problem, check out Nick Shadrin’s article here!
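If you want to query the metric directly, here are a couple of example PromQL expressions; the exact labels attached to the metric depend on your installation.

# Reload rate over the last 30 minutes, per NGINX Gateway Fabric Pod
rate(nginx_gateway_fabric_nginx_reloads_total[30m])

# Total reloads so far, summed across all deployments
sum(nginx_gateway_fabric_nginx_reloads_total)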

Here’s an example of the metric in an environment with two deployments of NGINX Gateway Fabric in the Prometheus dashboard:

Figure 3: Prometheus graph showing the NGINX Gateway Fabric reloads total

NGINX Plus Dashboard

As previously mentioned, if you are looking for a quick way to view NGINX Plus metrics without a Prometheus installation or observability stack, the NGINX Plus dashboard gives you real-time monitoring of performance metrics you can use to troubleshoot incidents and keep an eye on resource capacity.

The dashboard gives you different views for all the metrics NGINX Plus provides right away and is easily accessible on an internal port. If you would like a quick look at what the dashboard can do, check out our dashboard demo site at demo.nginx.com.

To access the NGINX Plus dashboard on your NGINX Gateway Fabric installation, you can forward connections to Port 8765 on your local machine via port forwarding:

kubectl port-forward -n nginx-gateway <nginx-gateway-pod-name> 8765:8765

Next, open your preferred browser and type http://localhost:8765/dashboard.html in the address bar.

Figure 4: NGINX Plus Dashboard overview

BackendTLSPolicy

This release now comes with the much-awaited support for the BackendTLSPolicy. The BackendTLSPolicy introduces encrypted TLS communication between NGINX Gateway Fabric and the application, greatly enhancing the communication channel’s security. Below, we show how to apply the policy, specifying settings such as TLS ciphers and protocols used when validating server certificates against a trusted certificate authority (CA).

The BackendTLSPolicy enables users to secure their traffic between NGF and their backends. You can also set the minimum TLS version and cipher suites. This protects against malicious applications hijacking the connection and encrypts the traffic within the cluster.

To configure backend TLS termination, first create a ConfigMap with the CA certificate you want to use. For help with managing internal Kubernetes certificates, check out this guide.


kind: ConfigMap
apiVersion: v1
metadata:
  name: backend-cert
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <certificate contents>
    -----END CERTIFICATE-----

Next, we create the BackendTLSPolicy, which targets our secure-app Service and refers to the ConfigMap created in the previous step:


apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendTLSPolicy
metadata:
  name: backend-tls
spec:
  targetRef:
    group: ''
    kind: Service
    name: secure-app
    namespace: default
  tls:
    caCertRefs:
    - name: backend-cert
      group: ''
      kind: ConfigMap
    hostname: secure-app.example.com
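Once both resources are applied, you can check whether the policy has been accepted by inspecting its status conditions. A quick check, using the resource name from the example above:

kubectl describe backendtlspolicies.gateway.networking.k8s.io backend-tls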

URLRewrite

With a URLRewrite filter, you can modify the original URL of an incoming request and redirect it to a different URL with zero performance impact. This is particularly useful when your backend applications change their exposed API, but you want to maintain backwards compatibility for your existing clients. You can also use this feature to expose a consistent API URL to your clients while redirecting the requests to different applications with different API URLs, providing an “experience” API that combines the functionality of several different APIs for your clients’ convenience and performance.

To get started, let’s create a Gateway resource for NGINX Gateway Fabric. This defines an HTTP listener on port 80:


apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP

Let’s create an HTTPRoute resource and configure request filters to rewrite any requests for /coffee to /beans. We can also provide a /latte endpoint whose /latte prefix is stripped before the request reaches the backend (“/latte/126” becomes “/126”).


apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /beans
    backendRefs:
    - name: coffee
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /latte
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: coffee
      port: 80

The HTTP rewrite feature helps ensure flexibility between the endpoints you expose to clients and how they map to your backends. It also allows traffic redirection from one URL to another, which is particularly helpful when migrating content to a new website or moving API traffic.
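To see the rewrites from the example above in action, you can send test requests through the gateway. A quick sketch, assuming GW_IP holds the public address of your gateway:

# Request to /coffee reaches the backend as /beans
curl --resolve cafe.example.com:80:$GW_IP http://cafe.example.com/coffee

# Request to /latte/126 reaches the backend as /126
curl --resolve cafe.example.com:80:$GW_IP http://cafe.example.com/latte/126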

Although NGINX Gateway Fabric supports path-based rewrites, it currently does not support path-based redirects. Let us know if this is a feature you need for your environment.

Product Telemetry

We have decided to include product telemetry as a mechanism to passively collect feedback as a part of the 1.2 release. This feature will collect a variety of metrics from your environment and send them to our data collection platform every 24 hours. No PII is collected, and you can see the full list of what is collected here.

We are committed to providing complete transparency around our telemetry functionality. We will document every field we collect, you can verify what we collect by reading our code, and you always have the option to disable telemetry completely. We are planning to regularly review interesting observations based on the statistics we collect with the community in our community meetings, so make sure to drop by!

Resources

For the complete changelog for NGINX Gateway Fabric 1.2.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub!

We have bi-weekly community meetings on Mondays at 9AM Pacific/5PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Our Design Vision for NGINX One: The Ultimate Data Plane SaaS

A Deeper Dive into F5 NGINX One, and an Invitation to Participate in Early Access

A few weeks ago, we introduced NGINX One to our customers at AppWorld 2024. We also opened NGINX One Early Access, and a waiting list is now building. The solution is also being featured at AppWorld EMEA and AppWorld Asia Pacific. Events throughout both regions will continue through June.

So the timing seems appropriate, in the midst of all this in-person activity, to share a bit more of our thinking and planning for NGINX One with our blog readers and re-extend that early access invitation to our global NGINX community.

Taking NGINX to Greater Heights

At the heart of all NGINX products lies our remarkable data plane. Designed and coded originally by Igor Sysoev, the NGINX data plane has stood the test of time. It is remarkably self-contained and performant. The code base has remained small and compact, with few dependencies and rare security issues. Our challenge was to make the data plane the center of a broader, complete product offering encompassing everything we build — and make that data plane more extensible, accessible, affordable, and manageable.

We also wanted to make NGINX a more accessible option for our large base of F5 customers. These are global teams for enterprise-wide network operations and security, many of which are responsible for NGINX deployments and ensuring that application development and platform ops teams get what they need to build modern applications.

Core Principles: No Silos, Consumption-Based, One Management Plane, Global Presence

With all this in mind, when we started planning NGINX One, we laid out a handful of design conventions that we wanted to follow:

  • Non-opinionated and flexible — NGINX One will be easy to implement across the entire range of NGINX use cases (web server, reverse proxy, application delivery, Kubernetes/microservices, application security, CDN).
  • Simple API interface — NGINX One will be easy to connect to any existing developer toolchain, platform, or system via RESTful APIs.
  • A single management system – NGINX One will provide one console and one management plane to run and configure everything NGINX. The console will be delivered “as-a-service” with zero installation required and easy extensibility to other systems, such as Prometheus.
  • Consumption-based — With NGINX One, users will pay only for what they consume, substantially reducing barriers to entry and lowering overall cost of ownership.
  • Scales quickly, easily, and affordably in any cloud environment — NGINX One will be cloud and environment agnostic, delivering data plane, app delivery, and security capabilities on any cloud, any PaaS or orchestration engine, and for function-based and serverless environments.
  • Simplified security — NGINX One will make securing your applications in any environment easier to implement and manage, utilizing NGINX App Protect capabilities such as OneWAF and DDoS protection.
  • Intelligence for optimizing configurations — NGINX One will leverage all of NGINX’s global intelligence to offer intelligent suggestions on configuring your data plane, reducing errors, and increasing application performance.
  • Extensibility – NGINX One will be easy to integrate with other platforms for networking, observability and security, and application delivery. NGINX One will simplify integration with F5 BIG-IP and other products, making it easier for network operations and security operations teams to secure and manage their technology estate across our product families.

NGINX One Is Designed to Be the Ultimate Data Plane Multi-Tool

We wanted to deliver all this while leveraging our core asset — the NGINX data plane. In fact, foundational to our early thinking on NGINX One was an acknowledgment that we needed to return to our data plane roots and make that the center of our universe.

NGINX One takes the core NGINX data plane software you’re familiar with and enhances it with SaaS-based tools for observability, management, and security. Whether you’re working on small-scale deployments or large, complex systems, NGINX One integrates seamlessly. You can use it as a drop-in replacement for any existing NGINX product.

For those of you navigating hybrid and multicloud environments, NGINX One simplifies the process. Integrating into your existing systems, CI/CD workflows, and cloud services is straightforward. NGINX One can be deployed in minutes and is consumable via API, giving you the flexibility to scale as needed. This service includes all essential NGINX products: NGINX Plus, NGINX Open Source, NGINX Instance Manager, NGINX Ingress Controller, and NGINX Gateway Fabric. NGINX One itself is hosted across multiple clouds for resilience.

In a nutshell, NGINX One can unify all your NGINX products into a single management sphere. Most importantly, with NGINX One you pay only for what you use. There are no annual license charges or per-seat costs. For startups, a generous free tier will allow you to scale and grow without fear of getting whacked with “gotcha” pricing. You can provision precisely what you need when you need it. You can dial it up and down as needed and automate scaling to ensure your apps are always performant.

NGINX One + F5 BIG-IP = One Management Plane and Global Presence

To make NGINX easier to manage as part of F5 products, NGINX One better integrates with F5 while leveraging F5’s global infrastructure. To start with, NGINX One will be deployed on the F5 Distributed Cloud, giving NGINX One users access to many additional capabilities. They can easily network across clouds with our Multicloud Network fabric without enduring complex integrations. They can configure granular security policies for specific teams and applications at the global firewall layer with less toil and fewer tickets. NGINX One users will benefit from our global network of points-of-presence, bringing applications much closer to end users without having to bring in an additional content delivery network layer.

F5 users can easily leverage NGINX One to discover all instances of NGINX running in their enterprise environments and instrument those instances for better observability. In addition, F5’s security portfolio shares a single WAF engine, commonly referred to as “OneWAF”. This allows organizations to migrate the same policies they use in BIG-IP Advanced WAF to NGINX App Protect and to keep those policies synchronized.

A View into the Future

As we continue to mature NGINX One, we will ensure greater availability and scalability of your applications and infrastructure. We will do this by keeping your apps online with built-in high availability and granular traffic controls, and by addressing predictable and unpredictable changes through automation and extensibility. And when you discover issues and automatically apply supervised configuration changes to multiple instances simultaneously, you dramatically reduce your operational costs.

You will be able to resolve problems before your customers notice any disruptions by leveraging detailed AI-driven insights into the health of your apps, APIs, and infrastructure.
Identifying trends and cycles with historical data will enable you to accurately assess upcoming requirements, make better decisions, and streamline troubleshooting.

You can secure and control your network, applications and APIs while ensuring that your DevOps teams can integrate seamlessly with their CI/CD systems and tooling. Security will be closer to your application code and APIs and will be delivered on the shift-left promise. Organizations implementing zero trust will be able to validate users from edge to cloud without introducing complexity or unnecessary overhead. Moreover, you’ll further enhance your security posture by immediately discovering and quickly mitigating NGINX instances impacted by common vulnerabilities and exposures (CVEs), ensuring uniform protection across your infrastructure.

NGINX One will also change the way that you consume our product. We are moving to a SaaS-delivered model that allows you to pay for a single product and deliver our services wherever your teams need them – in your datacenter, the public cloud, or F5 Distributed Cloud. In the future, more capabilities will come to our data plane, such as WebAssembly. We will introduce new use cases like an AI gateway. We are making it frictionless and ridiculously easy for you to consume these services with consumption-based, tiered pricing.

There will even be a free tier for a small number of NGINX instances and first-time customers. With consumption pricing you have a risk-free entry with low upfront costs.

It will be easier for procurement teams, because NGINX One will be included in all F5’s buying programs, including our Flexible Consumption Program.

No longer will pricing be a barrier for development teams. With NGINX One they will get all the capabilities and management that they need to secure, deliver, and optimize every App and API everywhere.

When Can I Get NGINX One, and How Can I Prepare?

In light of our recent news, many NGINX customers have asked when they can purchase NGINX One and what they can do now to get ready.

We expect NGINX One to be commercially available later this year. However, as mentioned above, customers can raise their hands now to get early access, try it out, and share their feedback for us to incorporate into our planning. In the meantime, all commercially available NGINX products will be compatible with NGINX One, so there is no need to worry that near-term purchases will soon be obsolete. They won’t.

In preparation to harness all the benefits of NGINX One, customers should ensure they are using the latest releases of their NGINX instances and ensure they are running NGINX Instance Manager as prescribed in their license.

The Ingress Controller: Touchstone for Securing AI/ML Apps in Kubernetes

One of the key advantages of running artificial intelligence (AI) and machine learning (ML) workloads in Kubernetes is having a central point of control for all incoming requests through the Ingress Controller. It is a versatile component that serves as a load balancer and API gateway, providing a solid foundation for securing AI/ML applications in a Kubernetes environment.

As a unified tool, the Ingress Controller is a convenient touchpoint for applying security and performance measures, monitoring activity, and mandating compliance. More specifically, securing AI/ML applications at the Ingress Controller in a Kubernetes environment offers several strategic advantages that we explore in this blog.

Diagram of Ingress Controller ecosystem

Centralized Security and Compliance Control

Because the Ingress Controller acts as a gateway to your Kubernetes cluster, it allows MLOps and platform engineering teams to implement a centralized point for enforcing security policies. This reduces the complexity of configuring security settings on a per-pod or per-service basis. By centralizing security controls at the Ingress level, you simplify the compliance process and make it easier to manage and monitor compliance status.

Consolidated Authentication and Authorization

The Ingress Controller is also the logical location to implement and enforce authentication and authorization for access to all your AI/ML applications. By adding strong certificate authority management, the Ingress Controller is also the linchpin of building zero trust (ZT) architectures for Kubernetes. ZT is crucial for ensuring continuous security and compliance of sensitive AI applications running on highly valuable proprietary data.

Rate Limiting and Access Control

The Ingress Controller is an ideal place to enforce rate limiting, protecting your applications from abuse such as DDoS attacks or excessive API calls, which is crucial for public-facing AI/ML APIs. With the rise of novel AI threats like model theft and data leakage, enforcing rate limiting and access control becomes even more important for protecting against brute force attacks. It also helps prevent adversaries from abusing business logic or jailbreaking guardrails to extract training data or model weights.
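Under the hood, NGINX-based Ingress Controllers express rate limits as standard limit_req configuration. The sketch below shows the underlying idea; the zone name, paths, addresses, and limits are illustrative, and in practice you would set this through your Ingress Controller’s annotations or policy resources rather than editing NGINX configuration directly.

# Allow each client IP 10 requests per second to the inference API;
# excess requests beyond a small burst receive 429 responses.
limit_req_zone $binary_remote_addr zone=ai_api:10m rate=10r/s;

upstream model-backend {
    server 10.0.0.20:8000;    # placeholder backend address
}

server {
    listen 80;
    server_name models.example.com;

    location /v1/predict {
        limit_req zone=ai_api burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://model-backend;
    }
}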

Web Application Firewall (WAF) Integration

Many Ingress Controllers support integration with WAFs, which are table stakes for protecting exposed applications and services. WAFs provide an additional layer of security against common web vulnerabilities and attacks like the OWASP Top 10. Even more crucial, when properly tuned, WAFs protect against more targeted attacks aimed at AI/ML applications. A key consideration for AI/ML apps, where latency and performance are crucial, is the potential overhead introduced by a WAF. Also, to be effective for AI/ML apps, the WAF must be tightly integrated with the Ingress Controller for monitoring and observability dashboards and alerting structures. If the WAF and Ingress Controller can share a common data plane, this is ideal.

Conclusion: Including the Ingress Controller Early in Planning for AI/ML Architectures

Because the Ingress Controller occupies such an important place in Kubernetes application deployment for AI/ML apps, it is best to include its capabilities when architecting AI/ML applications. This avoids duplication of functionality and leads to a better-informed choice of an Ingress Controller that will scale and grow with your AI/ML application needs. For MLOps teams, the Ingress Controller becomes a central control point for many of their critical platform and ops capabilities, with security among the top priorities.

Get Started with NGINX

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and observability of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

Scale, Secure, and Monitor AI/ML Workloads in Kubernetes with Ingress Controllers

AI and machine learning (AI/ML) workloads are revolutionizing how businesses operate and innovate. Kubernetes, the de facto standard for container orchestration and management, is the platform of choice for powering scalable large language model (LLM) workloads and inference models across hybrid, multi-cloud environments.

In Kubernetes, Ingress controllers play a vital role in delivering and securing containerized applications. Deployed at the edge of a Kubernetes cluster, they serve as the central point of handling communications between users and applications.

In this blog, we explore how Ingress controllers and F5 NGINX Connectivity Stack for Kubernetes can help simplify and streamline model serving, experimentation, monitoring, and security for AI/ML workloads.

Deploying AI/ML Models in Production at Scale

When deploying AI/ML models at scale, out-of-the-box Kubernetes features and capabilities can help you:

  • Accelerate and simplify the AI/ML application release life cycle.
  • Enable AI/ML workload portability across different environments.
  • Improve compute resource utilization efficiency and economics.
  • Deliver scalability and achieve production readiness.
  • Optimize the environment to meet business SLAs.

At the same time, organizations might face challenges with serving, experimenting, monitoring, and securing AI/ML models in production at scale:

  • Increasing complexity and tool sprawl make it difficult for organizations to configure, operate, manage, automate, and troubleshoot Kubernetes environments on-premises, in the cloud, and at the edge.
  • Poor user experiences because of connection timeouts and errors due to dynamic events, such as pod failures and restarts, auto-scaling, and extremely high request rates.
  • Performance degradation, downtime, and slower and harder troubleshooting in complex Kubernetes environments due to aggregated reporting and lack of granular, real-time, and historical metrics.
  • Significant risk of exposure to cybersecurity threats in hybrid, multi-cloud Kubernetes environments because traditional security models are not designed to protect loosely coupled distributed applications.

Enterprise-class Ingress controllers like F5 NGINX Ingress Controller can help address these challenges. By leveraging one tool that combines Ingress controller, load balancer, and API gateway capabilities, you can achieve better uptime, protection, and visibility at scale – no matter where you run Kubernetes. In addition, it reduces complexity and operational cost.

Diagram of NGINX Ingress Controller ecosystem

NGINX Ingress Controller can also be tightly integrated with an industry-leading Layer 7 app protection technology from F5 that helps mitigate OWASP Top 10 cyberthreats for LLM Applications and defends AI/ML workloads from DoS attacks.

Benefits of Ingress Controllers for AI/ML Workloads

Ingress controllers can simplify and streamline deploying and running AI/ML workloads in production through the following capabilities:

  • Model serving – Deliver apps non-disruptively with Kubernetes-native load balancing, auto-scaling, rate limiting, and dynamic reconfiguration features.
  • Model experimentation – Implement blue-green and canary deployments, and A/B testing to roll out new versions and upgrades without downtime.
  • Model monitoring – Collect, represent, and analyze model metrics to gain better insight into app health and performance.
  • Model security – Configure user identity, authentication, authorization, role-based access control, and encryption capabilities to protect apps from cybersecurity threats.

NGINX Connectivity Stack for Kubernetes includes NGINX Ingress Controller and F5 NGINX App Protect to provide fast, reliable, and secure communications between Kubernetes clusters running AI/ML applications and their users – on-premises and in the cloud. It helps simplify and streamline model serving, experimentation, monitoring, and security across any Kubernetes environment, enhancing the capabilities of cloud-provider and pre-packaged Kubernetes offerings with a higher degree of protection, availability, and observability at scale.
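As an illustration of the model experimentation capability mentioned above, here is a sketch that splits traffic 90/10 between two model versions using the NGINX Ingress Controller VirtualServer resource. The hostname, Service names, and weights are placeholders; adjust them to your own deployments.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: model
spec:
  host: model.example.com
  upstreams:
  - name: model-v1
    service: model-v1-svc
    port: 80
  - name: model-v2
    service: model-v2-svc
    port: 80
  routes:
  - path: /
    # Canary split: 90% of requests to v1, 10% to the new v2
    splits:
    - weight: 90
      action:
        pass: model-v1
    - weight: 10
      action:
        pass: model-v2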

Get Started with NGINX Connectivity Stack for Kubernetes

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and visibility of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

Dynamic A/B Kubernetes Multi-Cluster Load Balancing and Security Controls with NGINX Plus

You’re a modern Platform Ops or DevOps engineer. You use a library of open source (and maybe some commercial) tools to test, deploy, and manage new apps and containers for your Dev team. You’ve chosen Kubernetes to run these containers and pods in development, test, staging, and production environments. You’ve bought into the architectures and concepts of microservices and, for the most part, it works pretty well. However, you’ve encountered a few speed bumps along this journey.

For instance, as you build and roll out new clusters, services, and applications, how do you easily integrate or migrate these new resources into production without dropping any traffic? Traditional networking appliances require reloads or reboots when implementing configuration changes to DNS records, load balancers, firewalls, and proxies. These changes cannot be made without causing downtime, because a “service outage” or “maintenance window” is required to update DNS, load balancer, and firewall rules. More often than not, you have to submit a dreaded service ticket and wait for another team to approve and make the changes.

Maintenance windows can drive your team into a ditch, stall application delivery, and make you declare, “There must be a better way to manage traffic!” So, let’s explore a solution that gets you back in the fast lane.

Active-Active Multi-Cluster Load Balancing

If you have multiple Kubernetes clusters, it’s ideal to route traffic to both clusters at the same time. An even better option is to perform A/B, canary, or blue-green traffic splitting and send a small percentage of your traffic as a test. To do this, you can use NGINX Plus with ngx_http_split_clients_module.

K8s with NGINX Plus diagram

The HTTP Split Clients module is part of NGINX Open Source and allows the ratio of requests to be distributed based on a key. In this use case, the clusters are the “upstreams” of NGINX, so as client requests arrive, the traffic is split between the two clusters. The key can be any available NGINX client $variable; to control the split for every request, use the $request_id variable, which is a unique number assigned by NGINX to every incoming request.

To configure the split ratios, determine which percentages you’d like to go to each cluster. In this example, we use K8s Cluster1 as a “large cluster” for production and Cluster2 as a “small cluster” for pre-production testing. If you had a small cluster for staging, you could use a 90:10 ratio and test 10% of your traffic on the small cluster to ensure everything is working before you roll out new changes to the large cluster. If that sounds too risky, you can change the ratio to 95:5. Truthfully, you can pick any ratio you’d like from 0 to 100%.

For most real-time production traffic, you likely want a 50:50 ratio where your two clusters are of equal size. But you can easily provide other ratios, based on the cluster size or other details. You can easily set the ratio to 0:100 (or 100:0) and upgrade, patch, repair, or even replace an entire cluster with no downtime. Let NGINX split_clients route the requests to the live cluster while you address issues on the other.


# Nginx Multi Cluster Load Balancing
# HTTP Split Clients Configuration for Cluster1:Cluster2 ratios
# Provide 100, 99, 50, 1, 0% ratios  (add/change as needed)
# Based on
# https://www.nginx.com/blog/dynamic-a-b-testing-with-nginx-plus/
# Chris Akker – Jan 2024
#
 
split_clients $request_id $split100 {
   * cluster1-cafe;                     # All traffic to cluster1
   } 

split_clients $request_id $split99 {
   99% cluster1-cafe;                   # 99% cluster1, 1% cluster2
   * cluster2-cafe;
   } 
 
split_clients $request_id $split50 { 
   50% cluster1-cafe;                   # 50% cluster1, 50% cluster2
   * cluster2-cafe;
   }
    
split_clients $request_id $split1 { 
   1.0% cluster1-cafe;                  # 1% to cluster1, 99% to cluster2
   * cluster2-cafe;
   }

split_clients $request_id $split0 { 
   * cluster2-cafe;                     # All traffic to cluster2
   }
 
# Choose which cluster upstream based on the ratio
 
map $split_level $upstream { 
   100 $split100; 
   99 $split99; 
   50 $split50; 
   1.0 $split1; 
   0 $split0;
   default $split50;
}

You can add or edit the configuration above to match the ratios that you need (e.g., 90:10, 80:20, 60:40, and so on).

Note: NGINX also has a Split Clients module for TCP connections in the stream context, which can be used for non-HTTP traffic. This splits the traffic based on new TCP connections, instead of HTTP requests.
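A minimal sketch of the stream-context variant follows; the upstream names, ports, and server addresses are placeholders. Because $request_id is not available in the stream context, the client address and port are used as the split key here.

stream {
    # Split new TCP connections 90/10 between the two clusters
    split_clients "${remote_addr}${remote_port}" $stream_upstream {
        90%     cluster1-cafe-tcp;
        *       cluster2-cafe-tcp;
    }

    upstream cluster1-cafe-tcp {
        server 10.1.1.8:30443;     # placeholder NodePort endpoint
    }

    upstream cluster2-cafe-tcp {
        server 10.1.1.10:30443;    # placeholder NodePort endpoint
    }

    server {
        listen 8443;
        proxy_pass $stream_upstream;
    }
}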

NGINX Plus Key-Value Store

The next feature you can use is the NGINX Plus key-value store. This is a key-value object in an NGINX shared memory zone that can be used for many different data storage use cases. Here, we use it to store the split ratio value mentioned in the section above. NGINX Plus allows you to change any key-value record without reloading NGINX. This enables you to change this split value with an API call, creating the dynamic split function.

Based on our example, it looks like this:

{"cafe.example.com":90}

This key-value record reads:
The key is the "cafe.example.com" hostname.
The value is "90" for the split ratio.

Instead of hard-coding the split ratio in the NGINX configuration files, you can instead use the key-value memory. This eliminates the NGINX reload required to change a static split value in NGINX.

In this example, NGINX is configured to use 90:10 for the split ratio with the large Cluster1 for the 90% and the small Cluster2 for the remaining 10%. Because this is a key-value record, you can change this ratio using the NGINX Plus API dynamically with no configuration reloads! The Split Clients module will use this new ratio value as soon as you change it, on the very next request.

Create the KV Record, start with a 50/50 ratio:

Add a new record to the KeyValue store, by sending an API command to NGINX Plus:

curl -iX POST -d '{"cafe.example.com":50}' http://nginxlb:9000/api/8/http/keyvals/split

Change the KV Record, change to the 90/10 ratio:

Change the KeyVal Split Ratio to 90, using an HTTP PATCH Method to update the KeyVal record in memory:

curl -iX PATCH -d '{"cafe.example.com":90}' http://nginxlb:9000/api/8/http/keyvals/split

Next, the pre-production testing team verifies the new application code is ready, you deploy it to the large Cluster1, and change the ratio to 100%. This immediately sends all the traffic to Cluster1 and your new application is “live” without any disruption to traffic, no service outages, no maintenance windows, reboots, reloads, or lots of tickets. It only takes one API call to change this split ratio at the time of your choosing.
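For example, promoting Cluster1 to take all of the traffic is the same single call with a value of 100, using the same endpoint as the earlier examples:

curl -iX PATCH -d '{"cafe.example.com":100}' http://nginxlb:9000/api/8/http/keyvals/split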

Of course, being that easy to move from 90% to 100% means you have an easy way to change the ratio from 100:0 to 50:50 (or even 0:100). So, you can have a hot backup cluster or can scale your clusters horizontally with new resources. At full throttle, you can even completely build a new cluster with the latest software, hardware, and software patches – deploying the application and migrating the traffic over a period of time without dropping a single connection!

Use Cases

Using the HTTP Split Clients module with the dynamic key-value store can deliver the following use cases:

  • Active-active load balancing – For load balancing to multiple clusters.
  • Active-passive load balancing – For load balancing to primary, backup, and DR clusters and applications.
  • A/B, blue-green, and canary testing – Used with new Kubernetes applications.
  • Horizontal cluster scaling – Adds more cluster resources and changes the ratio when you’re ready.
  • Hitless cluster upgrades – Ability to use one cluster while you upgrade, patch, or repair the other cluster.
  • Instant failover – If one cluster has a serious issue, you can change the ratio to use your other cluster.

Configuration Examples

Here is an example of the key-value configuration:

# Define Key Value store, backup state file, timeout, and enable sync
 
keyval_zone zone=split:1m state=/var/lib/nginx/state/split.keyval timeout=365d sync;

keyval $host $split_level zone=split;
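You can read back the current contents of the key-value zone at any time with a GET request to the same API endpoint used in the curl examples above:

curl -s http://nginxlb:9000/api/8/http/keyvals/split
# returns the current hostname-to-ratio mapping, e.g. the cafe.example.com entry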

And this is an example of the cafe.example.com application configuration:

# Define server and location blocks for cafe.example.com, with TLS

server {
   listen 443 ssl;
   server_name cafe.example.com; 

   status_zone https://cafe.example.com;
      
   ssl_certificate /etc/ssl/nginx/cafe.example.com.crt; 
   ssl_certificate_key /etc/ssl/nginx/cafe.example.com.key;
   
   location / {
      status_zone /;

      proxy_set_header Host $host;
      proxy_http_version 1.1;
      proxy_set_header "Connection" "";
      proxy_pass https://$upstream;   # traffic split to upstream blocks
   }
}

# Define 2 upstream blocks – one for each cluster
# Servers managed dynamically by NLK, state file backup

# Cluster1 upstreams
 
upstream cluster1-cafe {
   zone cluster1-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster1-cafe.state; 
}
 
# Cluster2 upstreams
 
upstream cluster2-cafe {
   zone cluster2-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster2-cafe.state; 
}

The upstream server IP:ports are managed by NGINX Loadbalancer for Kubernetes, a new controller that also uses the NGINX Plus API to configure NGINX Plus dynamically. Details are in the next section.

Let’s take a look at the HTTP split traffic over time with Grafana, a popular monitoring and visualization tool. You use the NGINX Prometheus Exporter (based on njs) to export all of your NGINX Plus metrics, which are then collected and graphed by Grafana. Details for configuring Prometheus and Grafana can be found here.

There are four upstream servers in the graph: two for Cluster1 and two for Cluster2. We use an HTTP load generation tool to create HTTP requests and send them to NGINX Plus.

In the three graphs below, you can see the split ratio is at 50:50 at the beginning of the graph.

LB Upstream Requests diagram

Then, the ratio changes to 10:90 at 12:56:30.

LB Upstream Requests diagram

Then it changes to 90:10 at 13:00:00.

LB Upstream Requests diagram

You can find working configurations of Prometheus and Grafana on the NGINX Loadbalancer for Kubernetes GitHub repository.

Dynamic HTTP Upstreams: NGINX Loadbalancer for Kubernetes

You can change the static NGINX Upstream configuration to dynamic cluster upstreams using the NGINX Plus API and the NGINX Loadbalancer for Kubernetes controller. This free project is a Kubernetes controller that watches NGINX Ingress Controller and automatically updates an external NGINX Plus instance configured for TCP/HTTP load balancing. It’s very straightforward in design and simple to install and operate. With this solution in place, you can implement TCP/HTTP load balancing in Kubernetes environments, ensuring new apps and services are immediately detected and available for traffic – with no reload required.

Architecture and Flow

NGINX Loadbalancer for Kubernetes sits inside a Kubernetes cluster. It is registered with Kubernetes to watch the NGINX Ingress Controller (nginx-ingress) Service. When there is a change to the Ingress controller(s), NGINX Loadbalancer for Kubernetes collects the worker node IPs and the NodePort TCP port numbers, then sends the IP:port pairs to NGINX Plus via the NGINX Plus API.

The NGINX upstream servers are updated with no reload required, and NGINX Plus load balances traffic to the correct upstream servers and Kubernetes NodePorts. Additional NGINX Plus instances can be added to achieve high availability.

Diagram of NGINX Loadbalancer in action

A Snapshot of NGINX Loadbalancer for Kubernetes in Action

In the screenshot below, there are two windows that demonstrate NGINX Loadbalancer for Kubernetes deployed and doing its job:

  1. Service of type LoadBalancer for nginx-ingress
  2. External IP – Connects to the NGINX Plus servers
  3. Ports – NodePort maps to 443:30158 with matching NGINX upstream servers (as shown in the NGINX Plus real-time dashboard)
  4. Logs – Indicates NGINX Loadbalancer for Kubernetes is successfully sending data to NGINX Plus

NGINX Plus window

Note: In this example, the Kubernetes worker nodes are 10.1.1.8 and 10.1.1.10

Adding NGINX Plus Security Features

As more and more applications running in Kubernetes are exposed to the open internet, security becomes necessary. Fortunately, NGINX Plus has enterprise-class security features that can be used to create a layered, defense-in-depth architecture.

With NGINX Plus in front of your clusters and performing the split_clients function, why not leverage that presence and add some beneficial security features? Here are a few of the NGINX Plus features that could be used to enhance security, with links and references to other documentation that can be used to configure, test, and deploy them.

Get Started Today

If you’re frustrated with networking challenges at the edge of your Kubernetes cluster, consider trying out this NGINX multi-cluster solution. Take the NGINX Loadbalancer for Kubernetes software for a test drive and let us know what you think. The source code is open source (under the Apache 2.0 license) and all installation instructions are available on GitHub.

To provide feedback, drop us a comment in the repo or message us in the NGINX Community Slack.

Updating NGINX for the Vulnerabilities in the HTTP/3 Module

Today, we are releasing updates to NGINX Plus, NGINX Open Source, and NGINX Open Source Subscription in response to internally discovered vulnerabilities in the HTTP/3 module ngx_http_v3_module. These vulnerabilities were discovered based on two bug reports in NGINX Open Source (trac #2585 and trac #2586). Note that this module is not enabled by default and is documented as experimental.

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database and the F5 Security Incident Response Team (F5 SIRT) has assigned scores to them using the Common Vulnerability Scoring System (CVSS v3.1) scale.

The following vulnerabilities in the HTTP/3 module apply to NGINX Plus, NGINX Open Source Subscription, and NGINX Open Source.

CVE-2024-24989: The patch for this vulnerability is included in following software versions:

  • NGINX Plus R31 P1
  • NGINX Open Source Subscription R6 P1
  • NGINX Open Source mainline version 1.25.4. (The latest NGINX Open Source stable version 1.24.0 is not affected.)

CVE-2024-24990: The patch for this vulnerability is included in following software versions:

  • NGINX Plus R30 P2
  • NGINX Plus R31 P1
  • NGINX Open Source Subscription R5 P2
  • NGINX Open Source Subscription R6 P1
  • NGINX Open Source mainline version 1.25.4. (The latest NGINX Open Source stable version 1.24.0 is not affected.)

You are impacted if you are running NGINX Plus R30 or R31, NGINX Open Source Subscription packages R5 or R6, or NGINX Open Source mainline version 1.25.3 or earlier. We strongly recommend that you upgrade your NGINX software to the latest version.
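If you are unsure whether a particular instance is exposed, you can check whether the binary was built with the experimental HTTP/3 module and whether QUIC is enabled anywhere in its configuration. A quick sketch, assuming the configuration lives in the usual /etc/nginx directory:

# Was the binary compiled with the HTTP/3 module?
nginx -V 2>&1 | grep -o with-http_v3_module

# Is QUIC/HTTP/3 enabled in the configuration?
grep -rn "quic" /etc/nginx/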

For NGINX Plus upgrade instructions, see Upgrading NGINX Plus in the NGINX Plus Admin Guide. NGINX Plus customers can also contact our support team for assistance at https://my.f5.com/.

NGINX’s Continued Commitment to Securing Users in Action

F5 NGINX is committed to a secure software lifecycle, including design, development, and testing optimized to find security concerns before release. While we prioritize threat modeling, secure coding, training, and testing, vulnerabilities do occasionally occur.

Last month, a member of the NGINX Open Source community reported two bugs in the HTTP/3 module that caused a crash in NGINX Open Source. We determined that a bad actor could cause a denial-of-service attack on NGINX instances by sending specially crafted HTTP/3 requests. For this reason, NGINX just announced two vulnerabilities: CVE-2024-24989 and CVE-2024-24990.

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database, and the F5 Security Incident Response Team (F5 SIRT) has assigned them scores using the Common Vulnerability Scoring System (CVSS v3.1) scale.

Upon release, the QUIC and HTTP/3 features in NGINX were considered experimental. Historically, we did not issue CVEs for experimental features and instead would patch the relevant code and release it as part of a standard release. For commercial customers of NGINX Plus, the previous two versions would be patched and released. We felt that not issuing a similar patch for NGINX Open Source would be a disservice to our community. Additionally, fixing the issue in the open source branch would have exposed users to the vulnerability without providing a patched binary.

Our decision to release a patch for both NGINX Open Source and NGINX Plus is rooted in doing what is right – to deliver highly secure software for our customers and community. Furthermore, we’re making a commitment to document and release a clear policy for how future security vulnerabilities will be addressed in a timely and transparent manner.

Meetup Recap: NGINX’s Commitments to the Open Source Community

Last week, we hosted the NGINX community’s first San Jose, California meetup since the outbreak of COVID-19. It was great to see our Bay Area open source community in person and hear from our presenters.

After an introduction by F5 NGINX General Manager Shawn Wormke, NGINX CTO and Co-Founder Maxim Konovalov detailed NGINX’s history – from the project’s “dark ages” through recent events. Building on that history, we looked to the future. Specifically, Principal Engineer Oscar Spencer and Principal Technical Product Manager Timo Stark covered the exciting new technology WebAssembly and how it can be used to solve complex problems securely and efficiently. Timo also gave us an overview of NGINX JavaScript (njs), breaking down its architecture and demonstrating ways it can solve many of today’s intricate application scenarios.

Above all, the highlight of the meetup was our renewed, shared set of commitments to NGINX’s open source community.

Our goal at NGINX is to continue to be an open source standard, similar to OpenSSL and Linux. Our open source projects are sponsored by F5 and, up until now, have been largely supported by paid employees of F5 with limited contributions from the community. While this has served our projects well, we believe that long-term success hinges on engaging a much larger and diverse community of contributors. Growing our open source community ensures that the best ideas are driving innovation, as we strive to solve complex problems with modern applications.

To achieve this goal, we are making the following commitments that will guarantee the longevity, transparency, and impact of our open source projects:

  • We will be open, consistent, transparent, and fair in our acceptance of contributions.
  • We will continue to enhance and open source new projects that move technology forward.
  • We will continue to offer projects under OSI-approved software licenses.
  • We will not remove and commercialize existing projects or features.
  • We will not impose limits on the use of our projects.

With these commitments, we hope that our projects will gain more community contributions, eventually leading to maintainers and core members outside of F5.

However, these commitments do present a pivotal change to our ways of working. For many of our projects that have a small number of contributors, this change will be straightforward. For our flagship NGINX proxy, with its long history and track record of excellence, these changes will take some careful planning. We want to be sensitive to this by ensuring plenty of notice to our community, so they may adopt and adjust to these changes with little to no disruption.

We are very excited about these commitments and their positive impact on our community. We’re also looking forward to opportunities for more meetups in the future! In the meantime, stay tuned for additional information and detailed timelines on this transition at nginx.org.

The post Meetup Recap: NGINX’s Commitments to the Open Source Community appeared first on NGINX.

NGINX One: A SaaS Solution for Modern Application Management and Delivery https://www.nginx.com/blog/nginx-one-a-saas-solution-for-modern-application-management-and-delivery/ Wed, 07 Feb 2024 21:00:11 +0000 https://www.nginx.com/?p=72877 NGINX One will soon be available “as-a-service,” with a single license and a unified management console for enterprise-wide security, availability, observability, and scalability—with a friendly pay-as-you-go pricing model. Today at AppWorld, we are introducing and opening “early access” for NGINX One: a global SaaS offering for deploying, securing, monitoring, scaling, and managing all NGINX instances [...]

NGINX One will soon be available “as-a-service,” with a single license and a unified management console for enterprise-wide security, availability, observability, and scalability—with a friendly pay-as-you-go pricing model.

Today at AppWorld, we are introducing and opening “early access” for NGINX One: a global SaaS offering for deploying, securing, monitoring, scaling, and managing all NGINX instances (whether they are on-prem, in the cloud, or at the edge), and all from a single management interface. It includes support for all our data plane components—NGINX Plus, NGINX Open Source, NGINX Unit, NGINX Gateway Fabric, and Kubernetes Ingress Controller—under a single enterprise license.

Breaking down silos, making NGINX easier for all

Unlike previous NGINX pricing models, NGINX One is completely consumption-based—you pay only for what you use. For every organization, from startups to global enterprises, NGINX One makes deploying any NGINX use case simpler, faster, more secure, more scalable, easier to observe and monitor, and more cost-effective.

Moreover, we are unifying our management plane into a more cohesive package on the F5 Distributed Cloud Platform, as enterprises have come to expect. This benefits not only traditional NGINX users in application development and DevOps roles, but also the broader community of F5 customers with other responsibilities, most of whom have NGINX deployed somewhere in their organizations. CIOs, CISOs, and their teams—network operations, security operations, and infrastructure—frequently share overlapping responsibilities for enterprise application delivery and security.

On the Distributed Cloud Platform, NGINX One users will benefit from many adjacent security and network capabilities that hybrid, multicloud enterprises are demanding. They can easily network across clouds with our multicloud network fabric without enduring complex integrations. They can configure granular security policies for specific teams and applications at a global level with less toil and fewer tickets. F5’s security portfolio shares a single WAF engine, commonly referred to as “OneWAF,” which allows organizations to migrate the same policies they were using in BIG-IP Advanced WAF to NGINX App Protect. And the Distributed Cloud Platform’s global network of points-of-presence can bring applications much closer to end users without adding a separate content delivery network layer.
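
As a rough illustration of what that migration target looks like on the NGINX side, the sketch below shows the general shape of enabling NGINX App Protect in an nginx configuration; the policy path, hostname, and upstream address are placeholders, and the policy file itself would be produced from your existing BIG-IP Advanced WAF policies.

    # Load the NGINX App Protect WAF module.
    load_module modules/ngx_http_app_protect_module.so;

    http {
        upstream backend {
            server 10.0.0.10:8080;    # placeholder application server
        }

        server {
            listen 80;
            server_name app.example.com;    # placeholder hostname

            # Enable App Protect and reference a JSON policy exported from the
            # shared WAF engine (the path shown here is illustrative).
            app_protect_enable on;
            app_protect_policy_file /etc/app_protect/conf/NginxDefaultPolicy.json;

            location / {
                proxy_pass http://backend;
            }
        }
    }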

We envision NGINX One meeting our customers wherever they are, with a rich ecosystem of supported providers that can be used to seamlessly integrate existing systems, from automation to monitoring and beyond.

Figure 1: NGINX One unites NGINX’s data plane components, a SaaS management console hosted on F5 Distributed Cloud, and pay-as-you-go pricing into a single product offer.

(F5 + NGINX) * SaaS = Better together

F5 and NGINX are truly better together. And bringing our capabilities together will make life easier for everyone, particularly our customers. NetOps and SecOps teams can shift multicloud deployment left to app dev teams without sacrificing security or control. Developers and DevOps teams can more easily experiment and innovate, building and deploying apps more quickly. Best of all, DevOps and platform ops teams, NetOps, and SecOps can better collaborate by utilizing the same set of tools for security, observability, and scalability.

As a SaaS, NGINX One enables our product team to accelerate innovation and feature development. And we can safely and easily make NGINX more extensible, allowing our users to seamlessly integrate NGINX with all their developer tools, CI/CD workflows, and other infrastructure and application delivery systems.

When we brought NGINX into the F5 fold almost five years ago, we had a vision—a single management plane for cloud applications spanning all environments and numerous use cases. We saw a unified system that made it possible for everyone responsible for and dependent on NGINX to use it more easily, securely, and efficiently. It has taken a while, but NGINX One is a giant leap on that journey.

Sign up for access and tell us what you think

Today at AppWorld, we are beginning an early access phase with a select group of customers. We are initially inviting existing NGINX customers to participate in these early phases for feedback. As soon as we can, we’ll expand access to more F5 customers and move to general availability later in 2024. Interested NGINX customers can connect with us here and we’ll be in touch soon. We’ll be onboarding additional customers throughout the next several months, and we expect a waiting list to develop.

I want to thank both our NGINX and F5 communities for being so generous with their feedback and time in helping us shape NGINX One. Our work as product builders is driven principally by your needs and wisdom. We hope you will continue to provide input and guide the future development of NGINX One. We are listening, and we are more excited than ever. Thanks again.

The post NGINX One: A SaaS Solution for Modern Application Management and Delivery appeared first on NGINX.

Building the Next NGINX Experience, Designed for the Reality of Modern Apps https://www.nginx.com/blog/building-the-next-nginx-experience-designed-for-the-reality-of-modern-apps/ Tue, 23 Jan 2024 16:00:27 +0000 https://www.nginx.com/?p=72861 As the VP of Product at NGINX, I speak frequently with customers and users. Whether you’re a Platform Ops team, Kubernetes architect, application developer, CISO, CIO, or CTO – I’ve talked to someone like you. In our conversations, you gave me your honest thoughts about NGINX, including our products, pricing and licensing models, highlighting both [...]

As the VP of Product at NGINX, I speak frequently with customers and users. Whether you’re on a Platform Ops team or are a Kubernetes architect, application developer, CISO, CIO, or CTO – I’ve talked to someone like you. In our conversations, you gave me your honest thoughts about NGINX, including our products and our pricing and licensing models, and you highlighted both our strengths and weaknesses.

The core thing we learned is that our “NGINX is the center of the universe” approach does not serve our users well. We had been building products that aimed to make NGINX the “platform” – the unified management plane for everything related to application deployment. We knew that some of our previous products geared toward that goal had seen only light use and adoption. You told us that NGINX is a mission-critical component of your existing platform, homegrown or otherwise, but that NGINX is not the platform. Therefore, we needed to integrate better with the rest of your components and make our products easier to deploy, manage, and secure, with (and this is important) transparent pricing and consumption models. And to make it all possible via API, of course.

The underlying message was straightforward: make it easier for you to integrate NGINX into your workflows, existing toolchains, and processes in an unopinionated manner. We heard you. In 2024, we will take a much more flexible, simple, repeatable, and scalable approach to use-case configuration and management for the data plane and security.

That desire makes complete sense. Your world has changed and continues to change. You have moved through various stages, from cloud to hybrid to multi-cloud and hybrid multi-cloud setups, and from VMs to Kubernetes, APIs, microservices, and serverless. Many of you have shifted left, and that has added complexity. More teams have more tools that require more management, observability, and robust security – all powering apps that must be able to scale out in minutes, not hours, days, or weeks. And the latest accelerant, artificial intelligence (AI), puts significant pressure on legacy application and infrastructure architectures.

What We Plan to Address in Upcoming NGINX Product Releases

While the bones of our NGINX products have always been rock solid, battle-tested, and performant, the way users could consume, manage, and observe all aspects of NGINX didn’t keep up with the times. We are moving quickly to remedy that with a new product launch and a slew of new capabilities. We will announce more at F5’s AppWorld 2024 conference, happening February 6 through 8. Here are the specific pain points we plan to address in upcoming product releases.

Pain Point #1: Modern Apps Are Challenging to Manage Due to the Diversity of Deployment Environments

Today, CIOs and CTOs can pick from a wide variety of application deployment modalities. This is a blessing because it enables far more choice in terms of performance, capabilities, and resilience. It’s also a curse because diversity leads to complexity and sprawl. For example, managing applications running in AWS requires different configurations, tools, and tribal knowledge than managing applications in Azure Cloud.

While containers have standardized large swaths of application deployment, everything below containers (or going into and out of containers) remains differentiated. As the de facto container orchestration platform, Kubernetes was supposed to clean that process up. But anyone who has deployed on Amazon EKS, Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) can tell you – they’re not at all alike.

You have told us that managing NGINX products across this huge diversity of environments requires significant operational resources and leads to waste. And, frankly, pricing models based on annual licenses collapse in dynamic setups where you might launch an app on a serverless platform, scale it up in Kubernetes, and maintain a small internal deployment in the cloud for development purposes.

Pain Point #2: Apps Running in Many Environments and Spanning License Types Are Challenging to Secure

The complexity of diverse environments can make it difficult to discover and monitor where modern apps are deployed and then apply the right security measures. Maybe you deployed NGINX Plus as your global load balancer and NGINX Open Source for various microservices, with each running in a different cloud or on top of a different type of application. Additionally, each may have different requirements for privacy, data protection, and traffic management.

Each permutation adds a new security twist. There is no standard, comprehensive solution, and that injects operational complexity and the potential for configuration errors. Admittedly, we’ve added to that complexity by making it confusing which types of security can be applied to which NGINX solutions.
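
To make the scenario concrete, the hypothetical snippet below shows the kind of configuration that sits at the heart of each such deployment, whether the data plane is NGINX Plus or NGINX Open Source; the upstream name and addresses are illustrative only, and the security policy layered on top (App Protect, a cloud WAF, or something else) is what varies from environment to environment.

    # Inside the http {} context; names and addresses are placeholders.
    upstream api_backend {
        zone api_backend 64k;       # shared memory zone, also used by the NGINX Plus API and dashboard
        server 10.0.0.11:8080;      # microservice replica 1
        server 10.0.0.12:8080;      # microservice replica 2
    }

    server {
        listen 80;

        location /api/ {
            proxy_pass http://api_backend;
        }
    }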

We understand. Customers need a single way to secure all applications that leverage NGINX. This unified security solution must cover the vast majority of use cases and deploy the same tools, dashboards, and operational processes across all cloud, on-prem, serverless, and other environments. We also recognize the importance of moving toward a more intelligent security approach, one that leverages the collective intelligence of the NGINX community and the unprecedented view of global traffic that we are fortunate to have.

Pain Point #3: Managing the Cost of Modern Apps Is Complex and Results in Waste

In a shift-left world, every organization wants to empower developers and practitioners to do their jobs better without filing a ticket or sending a Slack message. The reality has been different. Kubernetes, serverless, and other mechanisms for managing distributed applications spanning on-prem, cloud, and multi-cloud environments have achieved only a marginal abstraction of complexity. And that progress has largely been confined inside the container and the application; it has not translated well to the layers around applications, like networking, security, and observability, nor to CI/CD.

I have hinted at these issues in the previous pain points, but the bottom line is this: complexity carries real costs in hours and toil, compromised security, and reduced resilience. Maintaining increasingly complex systems is fundamentally challenging and resource-intensive. Pricing and licensing complexity adds another unhappy layer. NGINX has never been a “true-up” company that sticks it to users when they mistakenly overconsume.

But in a world of SaaS, APIs, and microservices, you want to pay as you go, not by the year, the seat, or the site license. You want an easy-to-understand pricing model based on consumption, for all NGINX products and services, across your entire technology infrastructure and application portfolio. You also want a way to incorporate support and security for any open source modules your teams run, paying for just the bits you want.

This will require some shifts in how NGINX packages and prices its products. The ultimate solution must offer simplicity, transparency, and pay-for-what-you-consume pricing, just like any other SaaS. We hear you. And we have something great in store that will address all three of the pain points above.

Join Us at AppWorld 2024

We will be talking about these exciting updates at AppWorld 2024 and will be rolling out pieces of the solution as part of our longer-term plan and roadmap over the next twelve months.

Join me on this journey and tune in to AppWorld for a full breakdown of what’s in store. Early bird pricing is available through January 21; please check out the AppWorld 2024 registration page for further details. You’re also invited to join NGINX leaders and other members of the community on the night of February 6 at F5’s San Jose office for an evening of looking ahead to the future of NGINX, connecting with the community, and indulging in the classics: pizza and swag! See the event page for registration and details.

We hope to see you next month in San Jose!

The post Building the Next NGINX Experience, Designed for the Reality of Modern Apps appeared first on NGINX.
