NGINX Open Source Archives - NGINX
https://www.nginx.com/blog/tag/nginx-open-source/
The High Performance Reverse Proxy, Load Balancer, Edge Cache, Origin Server

NGINX’s Continued Commitment to Securing Users in Action
https://www.nginx.com/blog/nginx-continued-commitment-to-securing-users-in-action/
Wed, 14 Feb 2024 15:15:34 +0000


The post NGINX’s Continued Commitment to Securing Users in Action appeared first on NGINX.

F5 NGINX is committed to a secure software lifecycle, including design, development, and testing optimized to find security concerns before release. While we prioritize threat modeling, secure coding, training, and testing, vulnerabilities do occasionally occur.

Last month, a member of the NGINX Open Source community reported two bugs in the HTTP/3 module that caused a crash in NGINX Open Source. We determined that a bad actor could cause a denial-of-service attack on NGINX instances by sending specially crafted HTTP/3 requests. For this reason, NGINX just announced two vulnerabilities: CVE-2024-24989 and CVE-2024-24990.

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database, and the F5 Security Incident Response Team (F5 SIRT) has assigned them scores using the Common Vulnerability Scoring System (CVSS v3.1) scale.

Upon release, the QUIC and HTTP/3 features in NGINX were considered experimental. Historically, we did not issue CVEs for experimental features; instead, we would patch the relevant code and ship it as part of a standard release. For commercial NGINX Plus customers, the two most recent versions would be patched and released. However, we felt that not issuing a similar patch for NGINX Open Source would be a disservice to our community. Additionally, committing a fix to the open source branch without a corresponding release would have disclosed the vulnerability while leaving users without a patched binary.

Our decision to release a patch for both NGINX Open Source and NGINX Plus is rooted in doing what is right – to deliver highly secure software for our customers and community. Furthermore, we’re making a commitment to document and release a clear policy for how future security vulnerabilities will be addressed in a timely and transparent manner.


SSL/TLS Certificate Rotation Without Restarts in NGINX Open Source
https://www.nginx.com/blog/ssl-tls-certificate-rotation-without-restarts-in-nginx-open-source/
Tue, 26 Sep 2023 19:10:19 +0000


The post SSL/TLS Certificate Rotation Without Restarts in NGINX Open Source appeared first on NGINX.

In the world of high-performance web servers, NGINX is a popular choice because its lightweight and efficient architecture enables it to handle large loads of traffic. With the introduction of the shared dictionary function as part of the NGINX JavaScript module (njs), NGINX’s performance capabilities reach the next level.

In this blog post, we explore the njs shared dictionary’s functionality and benefits, and show how to set up NGINX Open Source without the need to restart when rotating SSL/TLS certificates.

Shared Dictionary Basics and Benefits

The new js_shared_dict_zone directive allows NGINX Open Source users to enable shared memory zones for efficient data exchange between worker processes. These shared memory zones act as key-value dictionaries, storing dynamic configuration settings that can be accessed and modified in real time.

Key benefits of the shared dictionary include:

  • Minimal Overhead and Easy to Use – Built directly into njs, it’s easy to provision and utilize with an intuitive API and straightforward implementation. It also helps you simplify the process of managing and sharing data between worker processes.
  • Lightweight and Efficient – Integrates seamlessly with NGINX, leveraging its event-driven, non-blocking I/O model. This approach reduces memory usage, and improves concurrency, enabling NGINX to handle many concurrent connections efficiently.
  • Scalability – Leverages NGINX’s ability to scale horizontally across multiple worker processes so you can share and synchronize data across those processes without needing complex inter-process communication mechanisms. The time-to-live (TTL) setting allows you to manage records in shared dictionary entries by removing them from the zone due to inactivity. The evict parameter removes the oldest key-value pair to make space for new entries.
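The TTL and eviction behavior described above correspond to the directive’s timeout and evict parameters. A minimal sketch (the zone name kv and the 1 MB size are arbitrary choices for illustration):

```nginx
# Hypothetical nginx.conf fragment: a 1 MB shared dictionary whose
# entries expire after 60 seconds of inactivity; "evict" removes the
# oldest key-value pairs when the zone runs out of space.
http {
    js_shared_dict_zone zone=kv:1m timeout=60s evict;
}
```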

SSL Rotation with the Shared Dictionary

One of the most impactful use cases for the shared dictionary is SSL/TLS certificate rotation. When using js_shared_dict_zone, there’s no need to restart NGINX when an SSL/TLS certificate or key is updated. Additionally, it gives you a REST-like API for managing certificates on NGINX.

Below is an example of the NGINX configuration file that sets up the HTTPS server with the js_set and ssl_certificate directives. The JavaScript handlers use js_set to read the SSL/TLS certificate or key from a file.

This configuration snippet uses the shared dictionary to store certificates and keys in shared memory as a cache. If the key is not present, then it reads the certificate or key from the disk and puts it into the cache.

You can also expose a location that clears the cache. Once files on the disk are updated (e.g., the certificates and keys are renewed), the shared dictionary enforces reading from the disk. This adjustment allows rotating certificates/keys without the need to restart the NGINX process.

http {
    ...
    js_shared_dict_zone zone=kv:1m;

    server {
        ...

        # Sets an njs function for the variable. Returns a value of cert/key
        js_set $dynamic_ssl_cert main.js_cert;
        js_set $dynamic_ssl_key main.js_key;

        # use variable's data
        ssl_certificate data:$dynamic_ssl_cert;
        ssl_certificate_key data:$dynamic_ssl_key;

        # a location to clear cache
        location = /clear {
            js_content main.clear_cache;
            # allow 127.0.0.1;
            # deny all;
        }

        ...
    }
}

And here is the JavaScript implementation for rotation of SSL/TLS certificates and keys using js_shared_dict_zone:

import fs from 'fs';

function js_cert(r) {
  if (r.variables['ssl_server_name']) {
    return read_cert_or_key(r, '.cert.pem');
  } else {
    return '';
  }
}

function js_key(r) {
  if (r.variables['ssl_server_name']) {
    return read_cert_or_key(r, '.key.pem');
  } else {
    return '';
  }
}
/**
 * Retrieves the key/cert value from shared memory or falls back to disk
 */
function read_cert_or_key(r, fileExtension) {
  let data = '';
  const zone = 'kv';
  const certName = r.variables.ssl_server_name;
  const prefix = '/etc/nginx/certs/';
  const path = prefix + certName + fileExtension;
  r.log(`Resolving ${path}`);
  const key = ['certs', path].join(':');
  const cache = zone && ngx.shared && ngx.shared[zone];

  if (cache) {
    data = cache.get(key) || '';
    if (data) {
      r.log(`Read ${key} from cache`);
      return data;
    }
  }
  try {
    data = fs.readFileSync(path, 'utf8');
    r.log(`Read ${path} from disk`);
  } catch (e) {
    data = '';
    r.log(`Error reading from file: ${path}. Error=${e}`);
  }
  if (cache && data) {
    try {
      cache.set(key, data);
      r.log('Persisted in cache');
    } catch (e) {
      r.log(`Error writing to shared dict zone: ${zone}. Error=${e}`);
    }
  }
  return data;
}

By sending a request to /clear, the cache is invalidated and NGINX loads the SSL/TLS certificate or key from disk on the next SSL/TLS handshake. Additionally, you can implement a js_content handler that accepts an SSL/TLS certificate or key in the request while persisting it and updating the cache.
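For illustration, the main.clear_cache handler referenced in the configuration might look like the following. This is a hypothetical sketch of an njs handler that runs inside NGINX (the actual implementation is in the njs GitHub repo); it assumes the shared dictionary zone is named kv, as above.

```javascript
// Hypothetical njs sketch of the cache-clearing handler; assumes the
// shared dictionary zone is named "kv" as in the configuration above.
function clear_cache(r) {
    const cache = ngx.shared && ngx.shared['kv'];
    if (cache) {
        cache.clear();  // drop all cached certs and keys
    }
    r.return(200, 'Cache cleared\n');
}
```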

The full code of this example can be found in the njs GitHub repo.

Get Started Today

The shared dictionary function is a powerful tool for your application’s programmability that brings significant advantages in streamlining and scalability. By harnessing the capabilities of js_shared_dict_zone, you can unlock new opportunities for growth and efficiently handle increasing traffic demands.

Ready to supercharge your NGINX deployment with js_shared_dict_zone? You can upgrade your NGINX deployment with js_shared_dict_zone to unlock new use cases and learn more about this feature in our documentation. In addition, you can see a complete example of a shared dictionary function in the recently introduced njs-acme project, which enables the njs module runtime to work with ACME providers.

If you’re interested in getting started with NGINX Open Source and have questions, join NGINX Community Slack – introduce yourself and get to know this community of NGINX users!


QUIC+HTTP/3 Support for OpenSSL with NGINX
https://www.nginx.com/blog/quic-http3-support-openssl-nginx/
Wed, 13 Sep 2023 15:24:32 +0000


The post QUIC+HTTP/3 Support for OpenSSL with NGINX appeared first on NGINX.

Developers usually want to build applications and infrastructure using released, official, and supported libraries. Even with HTTP/3, there is a strong need for a convenient library that supports QUIC and doesn’t increase the maintenance costs or operational complexity in the production infrastructure.

For many QUIC+HTTP/3 users, that default cryptographic library is OpenSSL. Installed on most Linux-based operating systems by default, OpenSSL is the number one Transport Layer Security (TLS) library and is used by the majority of network applications.

The Problem: Incompatibility Between OpenSSL and QUIC+HTTP/3

Even with such wide usage, OpenSSL does not provide the TLS API required for QUIC support. Instead, the OpenSSL Management Committee decided to implement a complete QUIC stack on its own. This is a considerable effort planned for OpenSSL v3.4 but, according to the OpenSSL roadmap, it likely won’t happen before the end of 2024. Furthermore, the initial minimum viable product of the OpenSSL implementation won’t contain the QUIC API, so there is no clear path for users to get HTTP/3 support with OpenSSL.

Options for QUIC TLS Support

In this situation, there are two options for users looking for QUIC TLS support for their HTTP/3 needs:

  • OpenSSL QUIC implementation – As mentioned above, OpenSSL is currently working on implementing a complete QUIC stack on its own. This development will encapsulate all QUIC functionality within the implementation, making it much easier for HTTP/3 users to use the OpenSSL TLS API without worrying about QUIC-specific functionality.
  • Libraries supporting the BoringSSL QUIC API – Various SSL libraries like BoringSSL, quicTLS, and LibreSSL (all of which started as forks of OpenSSL) now provide QUIC TLS functionality by implementing the BoringSSL QUIC API. However, these libraries aren’t as widely adopted as OpenSSL. This option also requires building the SSL library from source and installing it on every server that needs QUIC+HTTP/3 support, which might not be feasible for everyone. That said, this is currently the only option for users wanting to use HTTP/3, because the OpenSSL QUIC TLS implementation is not ready yet.

A New Solution: The OpenSSL Compatibility Layer

At NGINX, we felt inspired by these challenges and created the OpenSSL Compatibility Layer to simplify QUIC+HTTP/3 deployments that use OpenSSL and help avoid complexities associated with maintaining a separate SSL library in production environments.

Available with NGINX Open Source mainline since version 1.25.0 and NGINX Plus R30, the OpenSSL Compatibility Layer allows NGINX to run QUIC+HTTP/3 on top of OpenSSL without needing to patch or rebuild it. This removes the dependency of compiling and deploying third-party TLS libraries to get QUIC support. Since users don’t need to use third-party libraries, it also alleviates the dependency on schedules and roadmaps of those libraries, making it a comparatively easier solution to deploy in production.
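With a build that includes the compatibility layer, enabling HTTP/3 becomes an ordinary configuration change. A minimal sketch (the certificate paths and file names are placeholders):

```nginx
# Hypothetical nginx.conf fragment enabling QUIC+HTTP/3 alongside
# HTTPS over TCP (requires NGINX 1.25.0+ with HTTP/3 support built in).
server {
    listen 443 quic reuseport;   # HTTP/3 over UDP
    listen 443 ssl;              # HTTPS over TCP as a fallback

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.3; # QUIC requires TLS 1.3

    location / {
        # advertise HTTP/3 availability to clients
        add_header Alt-Svc 'h3=":443"; ma=86400';
    }
}
```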

How the OpenSSL Compatibility Layer Works

The OpenSSL Compatibility Layer implements these steps:

  • Converts a QUIC handshake to a TLS 1.3 handshake that is supported by OpenSSL.
  • Passes the TLS handshake messages in and out of OpenSSL.
  • Gets the encryption keys for handshake and application encryption levels out of OpenSSL.
  • Passes the QUIC transport parameters in and out of OpenSSL.

Given how widely OpenSSL is adopted today, and knowing the status of its official QUIC+HTTP/3 support, we believe an easy and scalable option to enable QUIC is a step in the right direction. It will also promote HTTP/3 adoption and generate valuable feedback. Most importantly, we trust that the OpenSSL Compatibility Layer will help us provide a more robust and scalable solution for our enterprise users and the entire NGINX community.

Note: While we are making sure NGINX users have an easy and scalable option with the availability of the OpenSSL Compatibility Layer, users still have options to use third-party libraries like BoringSSL, quicTLS, or LibreSSL with NGINX. To decide which one is the right path for you, consider what approach best meets your requirements and how comfortable you are with compiling and managing libraries as dependencies.

A Note on 0-RTT

0-RTT is a feature in QUIC that allows a client to send application data before the TLS handshake is complete. 0-RTT functionality is made possible by reusing negotiated parameters from a previous connection. It is enabled by the client remembering critical parameters and providing the server with a TLS session ticket that allows the server to recover the same information.

While this feature is an important part of QUIC, it is not yet supported in the OpenSSL Compatibility Layer. If you have specific use cases that need 0-RTT, we welcome your feedback to inform our roadmap.

Learn More about NGINX with QUIC+HTTP/3 and OpenSSL

You can begin using NGINX’s OpenSSL Compatibility Layer today with NGINX Open Source or by starting a 30-day free trial of NGINX Plus. We hope you find it useful and welcome your feedback.

More information about NGINX with QUIC+HTTP/3 and OpenSSL is available in the resources below.


Announcing the Open Source Subscription by F5 NGINX
https://www.nginx.com/blog/announcing-open-source-subscription-f5-nginx/
Wed, 14 Jun 2023 15:01:00 +0000


The post Announcing the Open Source Subscription by F5 NGINX appeared first on NGINX.

As a reader of the NGINX blog, you’ve likely already gathered that NGINX Open Source is pretty popular. But it isn’t just because it’s free (though that’s nice, too!) – NGINX Open Source is so popular because it’s known for being stable, lightweight, and the developer’s Swiss Army Knife™.

Tweet screenshot: "Ok world. What say you? Favorite webserver? @nginx , Apache or are you using @caddyserver ?" and the response "Nothing compares to nginx. Used it yesterday to emergency fix a problem by reverse proxying in a handful of lines of config. Swiss army knife of hosting software."

Whether you need a web server, reverse proxy, API gateway, Ingress controller, or cache, NGINX (which is lightweight enough to be installed from a floppy disk) has your back. But there’s one thing NGINX Open Source users have told us is missing: enterprise support. So, that (and more) is what we’re excited to introduce with the new Open Source Subscription!

What Is the Open Source Subscription?

The Open Source Subscription is a new bundle that includes enterprise support for NGINX Open Source, automatic access to NGINX Plus, and fleet management with NGINX Management Suite Instance Manager.

Enterprise Support for NGINX Open Source

NGINX Open Source has a reputation for reliability and the community provides fantastic support, but sometimes more is necessary. With the Open Source Subscription, F5 adds enterprise support to NGINX Open Source, including:

  • SLA options of business hours or 24/7
  • Security patches and bug fixes
  • Security notifications
  • Debugging and error correction
  • Clarification of documentation discrepancies

Next, let’s dive into some of the benefits of having enterprise support.

Timely Patches and Fixes

A common vulnerability with any open source software (OSS) is the time it can take to address Common Vulnerabilities and Exposures (CVEs) and bugs. In fact, we’ve seen forks of NGINX Open Source take weeks, or even months, to patch. For example, on October 19, 2022, we announced fixes to CVE-2022-41741 and CVE-2022-41742 but the corresponding Ubuntu and Debian patches weren’t made available until November 15, 2022.

As a customer of the Open Source Subscription, you’ll get immediate access to patches and fixes, proactive notifications of CVEs, and more, including:

  • Security patches in the latest mainline and stable releases
  • Critical bug fixes in the latest mainline release
  • Non-critical bug fixes in the latest or a future mainline release

Regulatory Compliance

An increasing number of companies and governments are concerned about software supply chain issues, with many adhering to the practice of building a software bill of materials (SBOM). As the SBOM concept matures, regulators are starting to require patching "on a reasonably justified regular cycle", with timely patches for serious vulnerabilities found outside of the normal patch cycle.

With the Open Source Subscription, you can ensure that your NGINX Open Source instances meet your organization’s OSS software requirements by demonstrating due diligence, traceability, and compliance with relevant regulations, especially when it comes to security aspects.

Confidentiality

Getting good support requires sharing configuration files. However, if you’re sharing configs with a community member or in forums, then you’re exposing your organization to security vulnerabilities (or even breaches). Just one simple piece of NGINX code shared on Stack Overflow could offer bad actors insight into how to exploit your apps or architecture.

The Open Source Subscription grants you direct access to F5’s team of security experts, so you can be assured that your configs stay confidential. To learn more, see the NGINX Open Source Support Policy.

Note: The Open Source Subscription includes support for Linux packages of NGINX Open Source stable and mainline versions obtained directly from NGINX. We are exploring how we might be able to support packages customized and distributed by other vendors, so tell us in the comments which distros are important to you!

Enterprise Features Via Automatic Access to NGINX Plus

With the Open Source Subscription, you get access to NGINX Plus at no added cost. The subscription lets you choose when to use NGINX Open Source or NGINX Plus based on your business needs.

NGINX Open Source is perfect for many app delivery use cases, and is particularly outstanding for web serving, content caching, and basic traffic management. And while you can extend NGINX Open Source for other use cases, this can result in stability and latency issues. For example, it’s common to use Lua scripts to detect endpoint changes (where the Lua handler chooses which upstream service to route requests to, thus eliminating the need to reload the NGINX configuration). However, Lua must continuously check for changes, so it ends up consuming resources which, in turn, increases the processing time of incoming requests. In addition to causing timeouts, this also results in complexity and higher resource costs.
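As a point of contrast, NGINX Plus handles this natively: upstream servers kept in a shared memory zone can be modified at runtime through the NGINX Plus API, with no Lua polling and no configuration reload. A hypothetical sketch (the upstream name and addresses are placeholders):

```nginx
# Hypothetical NGINX Plus fragment: the zone directive keeps upstream
# state in shared memory so the /api endpoint can change it on the fly.
upstream backend {
    zone backend 64k;
    server 10.0.0.1:8080;
}

server {
    listen 8080;

    location /api {
        api write=on;       # enable the read-write NGINX Plus API
        allow 127.0.0.1;    # restrict who can reconfigure upstreams
        deny all;
    }
}
```

A new server could then be added at runtime with, for example, curl -X POST -d '{"server":"10.0.0.2:8080"}' against the upstream’s servers endpoint under /api (the API version segment varies by release).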

NGINX Plus can handle advanced use cases and provides out-of-the-box capabilities for load balancing, API gateway, Ingress controller, and more. Many customers choose NGINX Plus for business-critical apps and APIs that have stringent requirements related to uptime, availability, security, and identity.

Maintain Uptime and Availability at Scale

Uptime and availability are crucial to mission-critical apps and APIs because your customers (both internal and external) are directly impacted by any problems that arise when scaling up.

NGINX Plus includes capabilities purpose-built for meeting these uptime and availability requirements at scale.

Improve Security and Identity Management

By building non-functional requirements into your traffic management strategy, you can offload those requirements from your apps. This reduces errors and frees up developers to work on core requirements.

With NGINX Plus, you can enhance security by:

  • Using JWT authentication, OpenID Connect (OIDC), and SAML to centralize authentication and authorization at the load balancer, API gateway, or Ingress controller
  • Enforcing end-to-end encryption and certificate management with SSL/TLS offloading and SSL termination
  • Enabling FIPS 140-2 for the processing of all SSL/TLS and HTTP/2 traffic
  • Implementing PCI DSS best practices for protecting consumers’ credit card numbers and other personal data
  • Adding NGINX App Protect for Layer 7 WAF and denial-of-service (DoS) protection

Fleet Management with Instance Manager

Administering an NGINX fleet at scale can be difficult. With NGINX Open Source, you might have hundreds of instances (maybe even thousands!) at your organization, which can introduce a lot of complexity and risk related to CVEs, configuration issues, and expired certificates. That’s why the Open Source Subscription includes NGINX Management Suite Instance Manager, which enables you to centrally inventory all of your NGINX Open Source, NGINX Plus, and NGINX App Protect WAF instances so you can configure, secure, and monitor your NGINX fleet with ease.

Diagram showing how NGINX Instance Manager manages your fleet of NGINX Open Source, Plus, and App Protect WAF

Understand Your NGINX Estate

With Instance Manager you can get an accurate count of your instances in any environment, including Kubernetes. Instance Manager allows you to:

  • Inventory instances and discover software versions with potential CVE exposures
  • Learn about configuration problems and resolve them with a built-in editor that leverages best practice recommendations
  • Visualize protection insights, analyze possible threats, and identify opportunities for tuning your WAF policies with Security Monitoring

Manage Certificates

Expired certificates have become a notorious cause of breaches. Use Instance Manager to ensure secure communication between NGINX instances and their clients. With Instance Manager, you can track, manage, and deploy SSL/TLS certificates on all of your instances (including by finding and updating expiring certificates) and rotate the encryption keys regularly (or whenever a key has been compromised).
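The kind of check Instance Manager automates can be illustrated with a short sketch. This is hypothetical code, not Instance Manager’s implementation; days_until_expiry and fetch_not_after are illustrative names. It fetches a server’s certificate over a TLS handshake and computes how many days remain before the notAfter timestamp.

```python
# Hypothetical sketch of an expiry check like the one Instance Manager
# performs across a fleet; not part of any NGINX product API.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days left on a cert, given its notAfter string, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def fetch_not_after(host, port=443):
    """Retrieve the peer certificate's notAfter field via a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

A fleet-wide scan is then a loop over hosts, flagging any result below a chosen threshold (say, 30 days).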

Simplify Visibility

The amount of data you can get from NGINX instances can be staggering. To help you get the most out of that data and your third-party tools, Instance Manager provides events and metrics data, helping you collect valuable NGINX metrics and forward them to commonly used monitoring, visibility, and alerting tools via API. In addition, you can get unique, curated insights into the protection of your apps and APIs, such as when NGINX App Protect is added.

Get Started with the Open Source Subscription

If you’re interested in getting started with the new Open Source Subscription, contact us today to discuss your use cases.



How to Scan Your Environment for NGINX Instances
https://www.nginx.com/blog/how-to-scan-your-environment-for-nginx-instances/
Thu, 01 Jun 2023 15:01:07 +0000


The post How to Scan Your Environment for NGINX Instances appeared first on NGINX.

As the core module of F5 NGINX Management Suite, Instance Manager is an invaluable resource that enables you to locate, manage, and monitor all your NGINX Open Source and NGINX Plus instances easily and efficiently. Keeping track of NGINX instances is now simple with Instance Manager – the easy-to-use interface allows organizations to conveniently monitor all instances from a single pane of glass.

Instance Manager can also identify instances affected by Common Vulnerabilities and Exposures (CVEs) and instances with potentially expired SSL certificates. This wide scanning capability is crucial to ensuring the security and safety of your Information Technology (IT) assets. The module also notifies you when a new version is available to help resolve these vulnerabilities, making it essential for anyone who wants to proactively manage and secure NGINX instances.

With Instance Manager, you can be certain that your assets are being precisely tracked – leading to better management and enhanced overall security.

How NGINX Management Suite Instance Manager Works

Instance Manager makes it easy to scan your environment for NGINX instances by identifying active hosts using the Internet Control Message Protocol (ICMP).

Instance Manager identifies active hosts in one of two ways, depending on whether ICMP is enabled or disabled in your environment.

To scan for an instance, navigate to the scan page and provide the IP address along with the port number. This process is straightforward and can be accomplished by following the steps provided on the scan page.

Figure 1. Overview of an NGINX scan when ICMP is enabled

When ICMP is enabled, Instance Manager first verifies host reachability using ICMP echo requests and then performs a TCP handshake on the requested ports. To detect NGINX, it analyzes the Server HTTP response header.

Note: If HTTP is enabled in NGINX Plus, your scan may reveal any CVE vulnerabilities. However, disabling HTTP on NGINX Plus could potentially affect the accuracy of this approach. If you choose to disable it, your scan will not be able to identify any CVEs. Therefore, it is recommended to keep HTTP enabled on NGINX Plus to achieve the most comprehensive and effective results in identifying active hosts.

Figure 2. Wireshark capture when ICMP is enabled

When ICMP is disabled, Instance Manager verifies that a port is functioning through the TCP handshake alone, assessing the port’s response to confirm it is working as expected. If the SYN request is answered, Instance Manager can determine whether the port is running NGINX and whether the certificate has expired.

Note: If the SYN request goes unanswered, the process may be delayed and can potentially cause port exhaustion issues.

Figure 3. Overview of an NGINX scan when ICMP is disabled

Instance Manager has the capability to check the SSL certificate date of any server, whether or not it is part of NGINX servers. The module conducts a comprehensive evaluation of each server’s SSL certificate date to identify any potential expirations. Scans done by Instance Manager cover all requested ports, alert you of any expired SSL certificates, and provide valuable insights to help keep your enterprise safe.

Figure 4. Wireshark capture when ICMP is disabled

Lastly, implementing role-based access control (RBAC) affords you complete control over who can initiate a scan and who has granted access to your scan results. With this feature, your sensitive information remains confidential and secure, as only authorized personnel can access the results.

Additional Resources

Complete documentation on NGINX Management Suite Instance Manager is available in the NGINX documentation.

If you are interested in exploring Instance Manager today, you can reach out to us to discuss your specific use cases.


Managing NGINX Configuration at Scale with Instance Manager
https://www.nginx.com/blog/managing-nginx-configuration-at-scale-with-instance-manager/
Mon, 20 Mar 2023 15:20:18 +0000


The post Managing NGINX Configuration at Scale with Instance Manager appeared first on NGINX.

Since releasing NGINX Instance Manager in early 2021, we have continually added functionality based on feedback from our users about their top priorities and pain points. Instance Manager is now the core module of NGINX Management Suite, our collection of management‑plane modules which make it easier to manage and monitor NGINX at scale. After two years of focused work, today’s Instance Manager is, quite simply, better than ever.

Some of the most notable recent enhancements to Instance Manager are:

  • Remote configuration and configuration groups to help you scale
  • Robust and granular role‑based access control (RBAC) to empower multiple teams to manage their deployments
  • Improved monitoring options that offer more flexibility and deeper insight
  • Enhanced security with capabilities for monitoring and managing NGINX App Protect WAF

In this post we focus on the enhancements to Instance Manager’s configuration‑management features. One of the biggest reasons for NGINX’s popularity is the wide range of use cases it covers, from web serving and content caching to reverse proxying and load balancing. As you scale out your NGINX deployment across more use cases, configurations grow more complex and diverse across your NGINX estate, and setting and tracking them accurately by hand becomes tedious and error-prone.

Instance Manager greatly eases scaling as a centralized control station for remote management of your entire NGINX fleet. It helps ensure that your customers have a consistent and high‑quality user experience no matter how complex your systems become.

Below, we look at three Instance Manager configuration‑management features that help you scale.

Remote Configuration Management Saves Time

With Instance Manager, you manage all your NGINX Open Source and NGINX Plus configurations remotely from a single pane of glass. You can navigate among hundreds of managed NGINX instances to make updates and monitor status and traffic, either using the web interface or the API. The API makes it easy to integrate NGINX configuration management into your CI/CD pipeline.
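As a sketch of what an API‑driven workflow might look like, a CI/CD step could assemble and publish a configuration payload along these lines. The endpoint path, payload fields, and group name below are illustrative assumptions, not the documented Instance Manager API; consult the Instance Manager API reference for the real interface.

```javascript
// Hypothetical sketch of publishing an NGINX config through a management API.
// The payload shape and endpoint are assumptions for illustration only.
function buildConfigPayload(instanceGroup, files) {
  return {
    // Target every instance in the named group.
    instanceGroup: instanceGroup,
    // Each file is sent as a path plus base64-encoded contents.
    configFiles: files.map(function (f) {
      return { name: f.name, contents: Buffer.from(f.text).toString("base64") };
    }),
  };
}

const payload = buildConfigPayload("edge-proxies", [
  { name: "/etc/nginx/nginx.conf", text: "worker_processes auto;\n" },
]);

// A CI/CD job would then POST `payload` to the management plane, e.g.:
// POST https://instance-manager.example.com/api/<version>/configs  (illustrative URL)
```

Because the same request works from any pipeline runner, configuration changes can be reviewed, versioned, and rolled out like any other code change.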

The configuration editor built into the web interface is powered by the open source Monaco editor and makes it easy to edit NGINX configuration. The Instance Manager Analyzer automatically highlights errors as you edit and recommends fixes based on best practices.

Screenshot of the configuration editor in NGINX Management Suite Instance Manager 2.8.0

With instance groups, you can apply the same configuration to multiple instances. This makes scaling much easier because you maintain just a single copy of the configuration and apply it to all instances in the group with the single press of a button. As you create additional instances, add them to an instance group during onboarding for instant application of the correct configuration.

With staged configurations, you can create a configuration from scratch or copy the configuration from an individual instance, and save it to be deployed later on one or more instances.

Efficient SSL/TLS Certificate Management Maintains Security

Secure communication between NGINX instances and their clients relies on proper management of SSL/TLS certificates and their associated keys. Expired or invalid certificates can put the integrity of your entire organization at risk.

With Instance Manager you can conveniently track, manage, and deploy SSL/TLS certificates on all your instances, using either the web interface or API. The web interface highlights any certs that are expired or are expiring soon, helping you avert costly and time‑consuming outages.
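The “expiring soon” check behind that highlighting is simple date arithmetic. Here is a generic sketch of the idea (not Instance Manager’s implementation; the 30‑day threshold is an illustrative choice):

```javascript
// Generic sketch: classify certificates that are expired or expire within 30 days.
// Instance Manager performs this kind of check for you automatically.
function daysUntilExpiry(notAfter, now) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((notAfter.getTime() - now.getTime()) / msPerDay);
}

function certStatus(notAfter, now) {
  const days = daysUntilExpiry(notAfter, now);
  if (days < 0) return "expired";
  if (days <= 30) return "expiring soon";
  return "ok";
}

const now = new Date("2023-03-20T00:00:00Z");
console.log(certStatus(new Date("2023-03-25T00:00:00Z"), now)); // "expiring soon"
console.log(certStatus(new Date("2024-03-20T00:00:00Z"), now)); // "ok"
```

Running such a check centrally, across every instance’s certificates, is what turns an easily forgotten chore into a dashboard warning you can act on before an outage.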

Role-Based Access Control Improves Workflows

As they adopt DevOps practices, organizations are increasingly delegating responsibility for app and infrastructure management to development teams. This makes it more complicated, but no less critical, to ensure that the right users have the right levels of access. With Instance Manager, you can seamlessly integrate your single sign‑on (SSO) solution and use role‑based access control (RBAC) to create “swim lanes” for different teams.

RBAC ensures that security and compliance policies are properly enforced, but also empowers teams to manage their own resources. You can create roles that assign permissions broken down along both functional and resource lines.

With RBAC, teams can focus on their areas of expertise and therefore work faster and more efficiently. At the same time, administrators can be assured that the entire organization is adhering to important guidelines and policies.

Get Started

The recent enhancements to Instance Manager’s tools for managing remote configurations, SSL/TLS certificates, and access control make it easier and more convenient than ever to manage your NGINX fleet as it scales.

To try Instance Manager for yourself, start a 30-day free trial of NGINX Management Suite. You can also trial NGINX Plus for production‑grade traffic management.

The post Managing NGINX Configuration at Scale with Instance Manager appeared first on NGINX.

5 Videos from Sprint 2022 that Reeled Us In https://www.nginx.com/blog/5-videos-from-sprint-2022-that-reeled-us-in/ Tue, 10 Jan 2023 15:35:43 +0000

The oceanic theme at this year’s virtual Sprint conference made for smooth sailing – all the way back to our open source roots. The water was indeed fine, but the ship would never have left the dock without all of the great presentations from our open source team and community. That’s why we want to highlight some of our favorite videos, from discussions around innovative OSS projects to demos involving writing poetry with code. Here, we’ve picked five of our favorites to tide you over until next year’s Sprint.

In addition to the videos below, you can find all the talks, demos, and fireside chats from NGINX Sprint 2022 in the NGINX Sprint 2022 YouTube playlist.

Best Practices for Getting Started with NGINX Open Source

NGINX Open Source is the world’s most popular web server, but also much more – you can configure it as a reverse proxy, load balancer, API gateway, and cache. In this breakout session, Alessandro Fael Garcia, Senior Solutions Engineer for NGINX, simplifies the journey for anyone who’s just getting started with NGINX. Learn best practices, including installing NGINX from the official NGINX repo, the key NGINX commands to know, and how small adjustments to NGINX directives can improve performance.
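A couple of those “small adjustments” can be seen in a minimal nginx.conf fragment like the one below. The directive names are standard NGINX, but the values are illustrative starting points, not universal recommendations:

```nginx
# Let NGINX size its worker pool to the available CPU cores.
worker_processes auto;

events {
    # Raise the per-worker connection limit from the conservative default.
    worker_connections 4096;
}

http {
    # Reuse client connections instead of paying TCP setup costs per request.
    keepalive_timeout 65;

    # Compress text responses to cut bandwidth.
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;
}
```

After any edit, `nginx -t` validates the configuration and `nginx -s reload` applies it without dropping in-flight connections – two of the key commands worth committing to memory.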

For more resources on installing NGINX Open Source, check out our blog and webinar Back to Basics: Installing NGINX Open Source and NGINX Plus.

How to Get Started with OpenTelemetry

In cloud‑native architectures, observability is critical for providing insight into application performance. OpenTelemetry has taken observability to the next level with the concept of distributed tracing. As one of the most active projects at the Cloud Native Computing Foundation (CNCF), OpenTelemetry is quickly becoming the standard for instrumentation and collection of observability data. If you can’t already tell, we believe it’s one of the top open source projects to keep on your radar.

In this session, Steve Flanders, Director of Engineering at Splunk, covers the fundamentals of OpenTelemetry and demos how you can start integrating it into your modern app infrastructure.

To learn how NGINX is using OpenTelemetry, read Integrating OpenTelemetry into the Modern Apps Reference Architecture – A Progress Report on our blog.

The Future of Kubernetes Connectivity

Kubernetes has become the de facto standard for container management and orchestration. But as organizations deploy Kubernetes in production across multi‑cloud environments, complex challenges often emerge. Processes need to be streamlined so teams can manage connectivity to Kubernetes services from cloud to edge. In this Sprint session, Brian Ehlert, Director of Product Management for NGINX, discusses the history of Kubernetes networking, current trends, and the potential future for providing client access to applications in a shared, multi‑tenant Kubernetes environment.

At NGINX, we recognize the importance of Kubernetes connectivity, which is why we developed a Secure Kubernetes Connectivity solution. With NGINX’s Secure Kubernetes Connectivity you can scale, observe, govern, and secure your Kubernetes apps in production while reducing complexity.

Fun Ways to Script NGINX Using the NGINX Javascript Module

Feeling overwhelmed by all the open source offerings and possibilities? Take a break! In this entertaining talk, Javier Evans, Solutions Engineer for NGINX, guides you through some fun ways to script NGINX Open Source using the NGINX JavaScript module (njs). You’ll learn how to generate QR codes, implement weather‑based authentication (WBA) to compose a unique poem based on the location’s current weather, and more. Creativity abounds and Javier holds nothing back.
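For a taste of what njs code looks like, here is a minimal handler in the spirit of the talk – a hypothetical example, not Javier’s actual demo code. In njs, NGINX passes each request as an object `r` that exposes the URI, query arguments, and an `r.return()` method for sending a response:

```javascript
// Minimal njs-style handler (hypothetical example). In nginx.conf you would
// load the module with js_import and route to it with js_content.
function greeting(r) {
  // r.args holds the parsed query string; fall back to a default name.
  const name = (r.args && r.args.name) || "world";
  r.return(200, "Hello, " + name + "! Here is your poem.\n");
}

// An njs module would use `export default { greeting }`; module.exports is
// used here only so the sketch also runs outside of NGINX.
module.exports = { greeting };
```

Swap the string-building for a weather lookup and you have the skeleton of the weather‑based poetry demo from the session.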

New features and improvements are regularly added to njs to make it easier for teams to work and share njs code. Recent updates help make your NGINX config even more modular and reusable.

A Journey Through NGINX and Open Source with Kelsey Hightower

We were beyond excited to have renowned developer advocate Kelsey Hightower join us at Sprint for a deep dive into NGINX Unit, our open source, universal web app server. In a discussion with Rob Whiteley, NGINX Product Group VP and GM, Kelsey emphasizes how one of his primary goals when working with technology is to save time. Using a basic application inside a single container as his demo environment, Kelsey shows how NGINX Unit enables you to write less code. And while Kelsey’s impressed by NGINX Unit, he also offers feedback on how it can improve. We appreciate that, as we are committed to refining and enhancing our open source offerings.

Watch On Demand

Again, thank you for diving into the open source waters with us this year at Sprint! It was a blast and we loved seeing all of your comments, insight, and photos from the virtual booth.

Reminder: You can find all these videos, and more insightful talks, in the NGINX Sprint 2022 YouTube playlist and watch them on demand for free.

The post 5 Videos from Sprint 2022 that Reeled Us In appeared first on NGINX.

Observability and Remote Configuration with NGINX Agent https://www.nginx.com/blog/observability-and-remote-configuration-nginx-agent/ Thu, 22 Dec 2022 16:42:45 +0000

NGINX Agent - An NGINX Project

At NGINX Sprint 2022, we committed to modernizing the way we manage NGINX open source projects and engage with our community. As part of that promise, we announced the upcoming release of NGINX Agent, a daemon that manages individual NGINX deployments as companion software, providing observability and a configuration API. Today we’re proud to follow through on that promise by launching NGINX Agent under the Apache 2 license.

At F5 NGINX, our vision is to build an ecosystem that extends into every facet of application deployment and management. NGINX Agent plays a pivotal role in that vision by providing Development and Platform Ops teams with granular controls and added functionality for configuring, monitoring, and managing NGINX instances.

What Does NGINX Agent Do?

NGINX Agent is a lightweight daemon that can be deployed alongside your NGINX Open Source or NGINX Plus instance. Significantly, NGINX Agent enables a number of capabilities not provided by NGINX Open Source:

  • Reporting and monitoring of NGINX instances
    NGINX Agent provides broader visibility into NGINX Open Source and NGINX Plus instances with an extended set of metrics you can use to detect, investigate, and correct infrastructure issues. Along with operating system metrics, NGINX Agent automatically collects metrics from the NGINX access and error logs; for NGINX Plus instances, it also collects metrics from the RESTful NGINX Plus API. NGINX Agent also reports on key sets of events happening on the NGINX instance. The result is a richly detailed picture of the performance, health, and usage of your NGINX instance, which can be exported in Prometheus format for visualization by third‑party tools such as Grafana.
  • Remote NGINX configuration management
    NGINX Agent provides HTTP (REST) and HTTP/2 (gRPC) interfaces for remotely applying NGINX configuration to an NGINX instance. Automating the remote deployment of NGINX configuration greatly reduces operational overhead and saves time, especially when you manage numerous instances.
  • Management plane integration
    As businesses scale, infrastructure deployment and management becomes more complex. We’re glad the NGINX community isn’t shy about sharing their scaling and delivery challenges, and the NGINX Agent roadmap aims to address them. NGINX Agent enables you to develop advanced mechanisms to control and manage NGINX in your environment – both with your own management solution which interfaces with NGINX instances and with NGINX Management Suite for its enterprise‑grade data plane management capabilities.

How Does NGINX Agent Work?

NGINX Agent runs alongside an NGINX instance, exposing both REST and gRPC interfaces for remote interaction with the instance from both the control and management planes, enabling you to build sophisticated monitoring and automation capabilities.

Diagram showing how NGINX Agent is colocated on the data plane with NGINX instances and communicates with a server on the control/management plane for metrics collection and configuration management

Why Are We Open Sourcing NGINX Agent?

We have several goals in open sourcing NGINX Agent.

Complement NGINX Open Source

We want to empower the community to use NGINX Open Source in more use cases and with far more flexibility. Open sourcing NGINX Agent helps fill in some current functional gaps in NGINX Open Source, and opens a completely new avenue for us to extend NGINX Open Source and bring features to the community more quickly. It can be installed alongside your NGINX Open Source instance to let you manage NGINX configuration using a REST or gRPC interface, or enable you to develop sophisticated visualizations from NGINX events and metrics.

Be Transparent

We take pride in bringing industry‑leading open source software to our community and enabling you to build highly scalable, resilient infrastructures to power your business. One of the core pillars of this success is the trust the open source community places in NGINX software. Our design philosophy with NGINX Agent is to be completely open and transparent about how and what data it touches in your NGINX infrastructure. We think being fully transparent with the community and bringing in features which delight you is a key enabler to realizing our open source vision.

Make NGINX Developer‑Friendly

Staying true to another promise we made at Sprint – to optimize the developer experience – NGINX Agent accelerates the “time to value” of NGINX products by providing controls and functionality that we hope make NGINX more attractive to more adopters. NGINX Agent provides granular controls so developers can make smart decisions about managing, deploying, and configuring NGINX in their environment. Our goal is to meet developers where they are by enabling them to integrate with NGINX’s suite of products on the control and management planes or bring in their own.

Get Started with NGINX Agent

NGINX Agent started out as the agent used by NGINX Management Suite Instance Manager to find all the NGINX instances in your environment. And while it will continue to serve that function, by open sourcing it in version 2.17.0 we’ve set it on an independent path to serve the broader NGINX open source community. Given that history, we expect there are many ways NGINX Agent needs to grow, so we are inviting you to visit the NGINX Agent repo on GitHub to get started and learn how to contribute, make suggestions, and report issues.

The post Observability and Remote Configuration with NGINX Agent appeared first on NGINX.

Back to Basics: Installing NGINX Open Source and NGINX Plus https://www.nginx.com/blog/back-to-basics-installing-nginx-open-source-and-nginx-plus/ Thu, 13 Oct 2022 16:00:45 +0000

Today, NGINX continues to be the world’s most popular web server – powering more than a third of all websites and nearly half of the 1000 busiest as of this writing. With so many products and solutions, NGINX is like a Swiss Army Knife™ you can use for numerous website and application‑delivery use cases, but we understand it might seem intimidating if you’re just getting started.

If you’re new to NGINX, we want to simplify your first steps. There are many tutorials online, but some are outdated or contradict each other, only making your journey more challenging. Here, we’ll quickly point you to the right resources.

Resources for Installing NGINX

A good place to start is choosing which NGINX offering is right for you:

  • NGINX Open Source – Our free, open source offering
  • NGINX Plus – Our enterprise‑grade offering with commercial support

To find out which works best for you or your company, look at this side-by-side comparison of NGINX Open Source and NGINX Plus. Also, don’t be afraid to experiment – NGINX users often try and learn tricks they pick up from others or the NGINX documentation. There’s always a lot to learn about how to get the most out of both NGINX Open Source and NGINX Plus.

If NGINX Open Source is your choice, we strongly recommend you install it from the official NGINX repo, as third‑party distributions often provide outdated NGINX versions. For complete NGINX Open Source installation instructions, see Installing NGINX Open Source in the NGINX Plus Admin Guide.
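On Ubuntu, for example, pointing apt at the official repo comes down to a single source entry along the lines of the fragment below. The codename is illustrative (substitute your release’s), and the signing‑key setup is covered in the linked installation instructions:

```nginx
# /etc/apt/sources.list.d/nginx.list -- official NGINX repo for Ubuntu
# ("jammy" is the Ubuntu 22.04 codename; use your own release's codename)
deb https://nginx.org/packages/ubuntu/ jammy nginx
```

With the official repo in place, `apt upgrade` keeps you on current NGINX releases instead of whatever version your distribution froze at.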

If you think NGINX Plus might work better for your needs, you can begin a 30-day free trial and head over to Installing NGINX Plus for complete installation instructions.

For both NGINX Open Source and NGINX Plus, we provide specific steps for all supported operating systems, including Alpine Linux, Amazon Linux 2, CentOS, Debian, FreeBSD, Oracle Linux, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu.

Watch the Webinar

Beyond documentation, you can advance to the next stage on your NGINX journey by watching our free on‑demand webinar, NGINX: Basics and Best Practices. Go in‑depth on NGINX Open Source and NGINX Plus and learn:

  • Ways to verify NGINX is running properly
  • Basic and advanced NGINX configurations
  • How to improve performance with keepalives
  • The basics of using NGINX logs to debug and troubleshoot
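On the keepalives point, the upstream side is easy to miss: keepalive connections to backends require both the `keepalive` directive and HTTP/1.1 with a cleared `Connection` header, as in this fragment (addresses are illustrative):

```nginx
upstream app_servers {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    # Keep up to 32 idle connections open to the backends per worker process.
    keepalive 32;
}

server {
    location / {
        proxy_pass http://app_servers;
        # Both directives are required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Without the last two directives, NGINX speaks HTTP/1.0 to the upstream and closes every connection after a single request, and the `keepalive` directive has no effect.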

If you’re interested in getting started with NGINX Open Source and still have questions, join NGINX Community Slack – introduce yourself and get to know this community of NGINX power users! If you’re ready for NGINX Plus, start your 30-day free trial today or contact us to discuss your use cases.

The post Back to Basics: Installing NGINX Open Source and NGINX Plus appeared first on NGINX.

The Future of NGINX: Getting Back to Our Open Source Roots https://www.nginx.com/blog/future-of-nginx-getting-back-to-our-open-source-roots/ Tue, 23 Aug 2022 15:30:25 +0000

Time flies when you’re having fun. So it’s hard to believe that NGINX is now 18 years old. Looking back, the community and company have accomplished a lot together. We recently hit a huge milestone – as of this writing 55.6% of all websites are powered by NGINX (either by our own software or by products built atop NGINX). We are also the number one web server by market share. We are very proud of that and grateful that you, the NGINX community, have given us this resounding vote of confidence.

We also recognize, more and more, that open source software continues to change the world. A larger and larger percentage of applications are built using open source code. From Bloomberg terminals to the Washington Post, Slack, Airbnb, Instagram, and Spotify, thousands of the world’s most recognizable brands and properties rely on NGINX Open Source to power their websites. In my own life – between Zoom for work meetings and Netflix at night – I probably spend 80% of my day using applications built atop NGINX.

Image reading "Open Source Software Changed the World" with logos of prominent open source projects

NGINX is only one element in the success story of open source. We would not be able to build the digital world – and increasingly, to control and manage the physical world – without all the amazing open source projects, from Kubernetes and containers to Python and PyTorch, from WordPress to Postgres to Node.js. Open source has changed the way we work. There are more than 73 million developers on GitHub who have collectively merged more than 170 million pull requests (PRs). A huge percentage of those PRs have been on code repositories with open source licenses.

We are thrilled that NGINX has played such a fundamental role in the rise and success of open source – and we intend to both keep it going and pay it forward. At the same time, we need to reflect on our open source work and adapt to the ongoing evolution of the movement. Business models for companies profiting from open source have become controversial at times. This is why NGINX has always tried to be super clear about what is open source and what is commercial. Above all, this meant never, ever trying to charge for functionality or capabilities that we had included in the open source versions of our software.

Open Source is Evolving Fast. NGINX Is Evolving, Too.

We now realize that we need to think hard about our commitment to open source, provide more value and capabilities in our open source products, and, yes, up our game in the commercial realm as well. We can’t simply keep charging for the same things as in the past, because the world has changed – some features included only in our commercial products are now table stakes for open source developers. We also know that open source security is top of mind for developers. For that reason, our open source projects need to be just as secure as our commercial products.

We also have to acknowledge reality. Internally, we have had a habit of saying that open source was not really production‑ready because it lacked features or scalability. The world has been proving us wrong on that count for some time now: many thousands of organizations are running NGINX open source software in production environments. And that’s a good thing, because it shows how much they believe in our open source versions. We can build on that.

In fact, we are doing that constantly with our core products. To those who say that the original NGINX family of products has grown long in the tooth, I say you have not been watching us closely:

  • For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating system platforms. Two critical capabilities for the security and scalability of web applications and traffic, HTTP/3 and QUIC, are coming in the next version we ship.
  • A quiet but incredibly innovative corner of the NGINX universe is NGINX JavaScript (njs), which enables developers to integrate JavaScript code into the event‑processing model of the NGINX HTTP and TCP/UDP (Stream) modules and extend NGINX configuration syntax to implement sophisticated capabilities. Our users have done some pretty amazing things, everything from innovative cache purging and header manipulations to support for more advanced protocols like MQTTv5.
  • Our universal web application server, NGINX Unit, was conceived by the original author of NGINX Open Source, Igor Sysoev, and it continues to evolve. Unit occupies an important place in our vision for modern applications and a modern application stack that goes well beyond our primary focus on the data plane and security. As we develop Unit, we are rethinking how applications should be architected for the evolving Web, with more capabilities that are cloud‑native and designed for distributed and highly dynamic apps.
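On the HTTP/3 and QUIC point above, enabling the new protocol is expected to look roughly like the fragment below. The directives reflect HTTP/3 support as it later landed in mainline NGINX (1.25+), so treat this as a forward-looking sketch rather than shipping syntax:

```nginx
server {
    # QUIC/HTTP-3 runs over UDP; keep the TCP listener for HTTP/1.1 and HTTP/2.
    listen 443 quic reuseport;
    listen 443 ssl;

    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    # Advertise HTTP/3 support to clients via the Alt-Svc header.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Clients connect over TCP first, see the Alt-Svc advertisement, and upgrade to QUIC on subsequent requests.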

The Modern Apps Reference Architecture

We want to continue experimenting and pushing forward on ways to help our core developer constituency more efficiently and easily deploy modern applications. Last year at Sprint 2.0 we announced the NGINX Modern Apps Reference Architecture (MARA), and I am happy to say it recently went into general availability as version 1.0.0. MARA is a curated and opinionated stack of tools, including Kubernetes, that we have wired together to make it easy to deploy infrastructure and application architecture as code. With a few clicks, you can configure and deploy a MARA reference architecture that is integrated with everything you need to create a production‑grade, cloud‑native environment – security, logging, networking, application server, configuration and YAML management, and more.

Diagram showing topology of the NGINX Modern Apps Reference Architecture

MARA is modular by design. You can choose your own adventure and assemble from the existing modules a customized reference architecture that can actually run applications. The community has supported our idea and we have partnered with a number of innovative technology companies on MARA. Sumo Logic has added their logging chops to MARA and Pulumi has contributed modules for automation and workflow orchestration. Our hope is that, with MARA, any developer can get a full Kubernetes environment up and running in a matter of minutes, complete with all the supporting pieces, secure, and ready for app deployment. This is just one example of how I think we can all put our collective energy into advancing a big initiative in the industry.

The Future of NGINX: Modernize, Optimize, Extend

Each year at NGINX Sprint, our virtual user conference, we make new commitments for the coming year. This year is no different. Our promises for the next twelve months can be captured in three words: modernize, optimize, and extend. We intend to make sure these are not just business buzzwords; we have substantial programs for each one and we want you to hold us to our promises.

Promise #1: Modernize Our Approach, Presence, and Community Management

Obviously, we are rapidly modernizing our code and introducing new products and projects. But modernization is not just about code – it encompasses code management, transparency around decision making, and how we show up in the community. While historically the NGINX Open Source code base has run on the Mercurial version control system, we recognize that the open source world now lives on GitHub. Going forward, all NGINX projects will be born and hosted on GitHub because that’s where the developer and open source communities work.

We also are going to modernize how we govern and manage NGINX projects. We pledge to be more open to contributions, more transparent in our stewardship, and more approachable to the community. We will follow all expected conventions for modern open source work and will be rebuilding our GitHub presence, adding Codes of Conduct to all our projects, and paying close attention to community feedback. As part of this commitment to modernize, we are adding an NGINX Community channel on Slack. We will staff the channel with our own experts to answer your questions. And you, the community, will be there to help each other, as well – in the messaging tool you already use for your day jobs.

Promise #2: Optimize Our Developer Experience

Developers are our primary users. They build and create the applications that have made us who we are. Our tenet has always been that NGINX is easy to use. And that’s basically true – NGINX does not take days to install, spin up, and configure. That said, we can do better. We can accelerate the “time to value” that developers experience on our products by making the learning curve shorter and the configuration process easier. By “value” I mean deploying code that does something truly valuable, in production, full stop. We are going to revamp our developer experience by streamlining the installation experience, improving our documentation, and adding coverage and heft to our community forums.

We are also going to release a new SaaS offering that natively integrates with NGINX Open Source and will help you make it useful and valuable in seconds. There will be no registration, no gate, no paywall. This SaaS will be free to use, forever.

In addition, we recognize that many critical features which developers now view as table stakes are on the wrong side of the paywall for NGINX Open Source and NGINX Plus. For example, DNS service discovery is essential for modern apps. Our promise is to make those critical features free by adding them to NGINX Open Source. We haven’t yet decided on all of the features to move and we want your input. Tell us how to optimize your experience as developers. We are listening.

Promise #3: Extend the Power and Capabilities of NGINX

As popular as NGINX is today, we know we need to keep improving if we want to be just as relevant ten years from now. Our ambitious goal is this: we want to create a full stack of NGINX applications and supporting capabilities for managing and operating modern applications at scale.

To date, NGINX has mostly been used as a Layer 7 data plane. But developers have to put up a lot of scaffolding around NGINX to make it work. You have to wire up automation and CI/CD capabilities, set up proper logging, add authentication and certificate management, and more. We want to extend NGINX so that every major requirement for testing and deploying an app is satisfied by one or more high‑quality open source components that integrate seamlessly with NGINX. In short, we want to provide value at every layer of the stack and make it free. For example, if you are using NGINX Open Source or NGINX Plus as an API gateway, we want to make sure you have everything you need to manage and scale that use case – API import, service discovery, firewall, policy rules and security – all available as high‑quality open source options.
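To make the API gateway example concrete, a minimal rate‑limited gateway configuration might look like this (hostnames, paths, and limits are illustrative, and the `api_backend` upstream is assumed to be defined elsewhere):

```nginx
# Allow each client IP 10 requests per second against the API.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name api.example.com;

    location /api/ {
        # Absorb short bursts of up to 20 extra requests, then reject with 503.
        limit_req zone=api_limit burst=20 nodelay;
        # Route API traffic to the backend service.
        proxy_pass http://api_backend;
    }
}
```

Everything beyond this core – API definition import, service discovery, security policy – is the scaffolding we want to provide as integrated open source components.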

To summarize, our dream is to build an ecosystem around NGINX that extends into every facet of application management and deployment. MARA is the first step in building that ecosystem and we want to continue to attract partners. My goal is to see, by the end of 2022, an entire pre‑wired app launch and run in minutes in an NGINX environment, instrumented with a full complement of capabilities – distributed tracing, logging, autoscaling, security, CI/CD hooks – that are all ready to do their jobs.

Introducing NGINX Kubernetes Gateway, a Brand New Amplify, and NGINX Agent

We are committed to all this. And here are three down payments on my three promises.

  1. Earlier this year we launched NGINX Kubernetes Gateway, based on the Kubernetes Gateway API specification. This modernizes our product family and keeps us in line with the ongoing evolution of cloud native. The NGINX Kubernetes Gateway is also something of an olive branch we’re extending to the community. We realize it complicated matters when we created both a commercial and an open source Ingress controller for Kubernetes, both different from the community Ingress solution (also built on NGINX). The range of choices confused the community and put us in a bad position.

    It’s pretty clear that the Gateway API is going to take the place of the Ingress controller in the Kubernetes architecture. So we are changing our approach and will make the NGINX Kubernetes Gateway – which will be offered only as an open source product – the focal point of our Kubernetes networking efforts (in lockstep with the evolving standard). It will both integrate and extend into other NGINX products and optimize the developer experience on Kubernetes.

  2. A few years back, we launched NGINX Amplify, a monitoring and telemetry SaaS offering for NGINX fleets. We didn’t really publicize it much. But thousands of developers found it and are still using it today. Amplify was and remains free. As part of our modernization pledge, we are adding a raft of new capabilities to Amplify. We aim to make it your trusted co‑pilot for standing up, watching over, and managing NGINX products at scale in real time. Amplify will not only monitor your NGINX instances but will help you configure, apply scripts to, and troubleshoot NGINX deployments.
  3. We are launching NGINX Agent, a lightweight app that you deploy alongside NGINX Open Source instances. It will include features previously offered only in commercial products, for example the dynamic configuration API. With NGINX Agent, you’ll be able to use NGINX Open Source in many more use cases and with far greater flexibility. It will also include far more granular controls that you can use to extend your applications and infrastructure. Agent helps you make smarter decisions about managing, deploying, and configuring NGINX. We’re working hard on NGINX Agent – keep an eye out for a blog coming in the next couple of months to announce its availability!
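For readers who haven’t yet worked with the Gateway API resources that NGINX Kubernetes Gateway implements, a minimal example pairs a Gateway with an HTTPRoute. The resource names and hostname below are hypothetical, and the `gatewayClassName` must match whatever class the controller you deploy registers:

```yaml
# Hypothetical Gateway API example; names and gatewayClassName
# are illustrative, not taken from any specific release.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
  - name: demo-gateway
  hostnames:
  - "demo.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: demo-service
      port: 80
```

Unlike the monolithic Ingress resource, the Gateway API splits infrastructure concerns (the Gateway, owned by platform operators) from routing concerns (the HTTPRoute, owned by application teams) – which is part of why it is expected to supersede Ingress.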

Looking Ahead

In a year, I hope you ask me about these promises. If I can’t report real progress on all three, then hold me to it, please. And please understand – we are engaged and ready to talk with all of you. You are our best product roadmap. Please take our annual survey. Join NGINX Community Slack and tell us what you think. Comment and file PRs on the projects at our GitHub repo.

It’s going to be a great year, the best ever. We look forward to hearing more from you and please count on hearing more from us. Help us help you.

The post The Future of NGINX: Getting Back to Our Open Source Roots appeared first on NGINX.

]]>