security Archives - NGINX
https://www.nginx.com/blog/tag/security/

QUIC+HTTP/3 Support for OpenSSL with NGINX
https://www.nginx.com/blog/quic-http3-support-openssl-nginx/
September 13, 2023

Developers usually want to build applications and infrastructure using released, official, and supported libraries. Even with HTTP/3, there is a strong need for a convenient library that supports QUIC and doesn’t increase the maintenance costs or operational complexity in the production infrastructure.

For many QUIC+HTTP/3 users, that default cryptographic library is OpenSSL. Installed on most Linux-based operating systems by default, OpenSSL is the number one Transport Layer Security (TLS) library and is used by the majority of network applications.

The Problem: Incompatibility Between OpenSSL and QUIC+HTTP/3

Even with such wide usage, OpenSSL does not provide the TLS API required for QUIC support. Instead, the OpenSSL Management Committee decided to implement a complete QUIC stack on its own. This is a considerable effort planned for OpenSSL v3.4, and according to the OpenSSL roadmap it is unlikely to arrive before the end of 2024. Furthermore, the initial Minimum Viable Product of the OpenSSL implementation won't contain the QUIC API, so there is currently no clear path for users to get HTTP/3 support with OpenSSL.

Options for QUIC TLS Support

In this situation, there are two options for users looking for QUIC TLS support for their HTTP/3 needs:

  • OpenSSL QUIC implementation – As mentioned above, OpenSSL is currently working on implementing a complete QUIC stack on its own. This development will encapsulate all QUIC functionality within the implementation, making it much easier for HTTP/3 users to use the OpenSSL TLS API without worrying about QUIC-specific functionality.
  • Libraries supporting the BoringSSL QUIC API – Various SSL libraries like BoringSSL, quicTLS, and LibreSSL (all of which started as forks of OpenSSL) now provide QUIC TLS functionality by implementing the BoringSSL QUIC API. However, these libraries aren't as widely adopted as OpenSSL. This option also requires building the SSL library from source and installing it on every server that needs QUIC+HTTP/3 support, which might not be feasible for everyone. That said, it is currently the only option for users who want HTTP/3, because the OpenSSL QUIC TLS implementation is not ready yet.

A New Solution: The OpenSSL Compatibility Layer

At NGINX, we felt inspired by these challenges and created the OpenSSL Compatibility Layer to simplify QUIC+HTTP/3 deployments that use OpenSSL and help avoid complexities associated with maintaining a separate SSL library in production environments.

Available in the NGINX Open Source mainline branch since version 1.25.0 and in NGINX Plus R30, the OpenSSL Compatibility Layer allows NGINX to run QUIC+HTTP/3 on top of OpenSSL without needing to patch or rebuild it. This removes the need to compile and deploy a third-party TLS library to get QUIC support, and it frees users from the schedules and roadmaps of those libraries, making it a comparatively easier solution to deploy in production.
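For illustration, here is a minimal sketch of what enabling HTTP/3 might look like once you are running NGINX 1.25.0 or later built with the HTTP/3 module; the domain, certificate paths, and config file location are placeholders, not values from this post:

# minimal sketch -- assumes NGINX 1.25.0+ built with --with-http_v3_module
cat > /etc/nginx/conf.d/http3-example.conf <<'EOF'
server {
    listen 443 quic reuseport;   # QUIC+HTTP/3 over UDP
    listen 443 ssl;              # HTTP/1.1 and HTTP/2 over TCP on the same port

    server_name example.com;                    # placeholder
    ssl_certificate     /etc/ssl/example.crt;   # placeholder
    ssl_certificate_key /etc/ssl/example.key;   # placeholder
    ssl_protocols TLSv1.3;                      # QUIC requires TLS 1.3

    # advertise HTTP/3 to clients that first connect over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
EOF
nginx -t && nginx -s reload

If your curl build includes HTTP/3 support, curl --http3 -I https://example.com/ is a quick way to confirm that responses arrive over HTTP/3.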

How the OpenSSL Compatibility Layer Works

The OpenSSL Compatibility Layer implements these steps:

  • Converts a QUIC handshake to a TLS 1.3 handshake that is supported by OpenSSL.
  • Passes the TLS handshake messages in and out of OpenSSL.
  • Gets the encryption keys for handshake and application encryption levels out of OpenSSL.
  • Passes the QUIC transport parameters in and out of OpenSSL.

Given how widely OpenSSL is adopted today, and knowing the current status of its official QUIC+HTTP/3 support, we believe an easy and scalable option to enable QUIC is a step in the right direction. It will also promote HTTP/3 adoption and generate valuable feedback. Most importantly, we trust that the OpenSSL Compatibility Layer will help us provide a more robust and scalable solution for our enterprise users and the entire NGINX community.

Note: While we are making sure NGINX users have an easy and scalable option with the availability of the OpenSSL Compatibility Layer, users still have options to use third-party libraries like BoringSSL, quicTLS, or LibreSSL with NGINX. To decide which one is the right path for you, consider what approach best meets your requirements and how comfortable you are with compiling and managing libraries as dependencies.

A Note on 0-RTT

0-RTT is a feature in QUIC that allows a client to send application data before the TLS handshake is complete. 0-RTT functionality is made possible by reusing negotiated parameters from a previous connection. It is enabled by the client remembering critical parameters and providing the server with a TLS session ticket that allows the server to recover the same information.

While this feature is an important part of QUIC, it is not yet supported in the OpenSSL Compatibility Layer. If you have specific use cases that need 0-RTT, we welcome your feedback to inform our roadmap.

Learn More about NGINX with QUIC+HTTP/3 and OpenSSL

You can begin using NGINX’s OpenSSL Compatibility Layer today with NGINX Open Source or by starting a 30-day free trial of NGINX Plus. We hope you find it useful and welcome your feedback.

Using 1Password CLI to Securely Build NGINX Plus Containers
https://www.nginx.com/blog/using-1password-cli-to-securely-build-nginx-plus-containers/
August 29, 2023

If you’re a regular user of F5 NGINX Plus, it’s likely that you’re building containers to try out new features or functionality. And when building NGINX Plus containers, you often end up storing sensitive information like the NGINX repository certificate and key on your local file system. While it’s straightforward to add sensitive files to a repository’s .gitignore file, that process is neither ideal nor secure – in fact, there are many examples of engineers accidentally committing sensitive information to a repository.

A better method is to use a secrets management solution. Personally, I’m a longtime fan of 1Password and recently discovered their CLI tool. This tool makes it easier for developers and platform engineers to interact with secrets in their day-to-day workflow.

In this blog post, we outline how to use 1Password CLI to securely build an NGINX Plus container. This example assumes you have an NGINX Plus subscription, a 1Password subscription with the CLI tool installed, access to an environment with a shell (Bash or Zsh), and Docker installed.

Store Secrets in 1Password

The first step is to store your secrets in 1Password, which supports multiple secret types like API credentials, files, notes, and passwords. In this NGINX Plus use case, we leverage 1Password’s secure file feature.

You can obtain your NGINX repository certificate and key from the MyF5 portal. Follow the 1Password documentation to create a secure document for both the NGINX repository certificate and key. Once you have created the two secure documents, follow the steps to collect the 1Password secret reference for each one.

Note: At the time of this writing, 1Password does not support multiple files on the same record.

Build the NGINX Plus Container

Now it’s time to build the NGINX Plus container that leverages your secure files and their secret reference Uniform Resource Identifiers (URIs). This step uses the example Dockerfile from the NGINX Plus Admin Guide.
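The Admin Guide Dockerfile isn’t reproduced here, but the detail that matters for this workflow is that it consumes the repository certificate and key as Docker BuildKit secrets rather than copying them into an image layer. As a rough, simplified sketch (the base image and install step are placeholders – use the full Dockerfile from the Admin Guide in practice):

cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
# Simplified sketch only -- see the NGINX Plus Admin Guide for the real Dockerfile.
FROM debian:bookworm-slim
# The cert and key are mounted only for the duration of this RUN step
# and are never written into an image layer.
RUN --mount=type=secret,id=nginx-crt,target=/etc/ssl/nginx/nginx-repo.crt \
    --mount=type=secret,id=nginx-key,target=/etc/ssl/nginx/nginx-repo.key \
    ls -l /etc/ssl/nginx/   # placeholder for the apt/yum steps that install nginx-plus
EOF

The secret IDs (nginx-crt and nginx-key) must match the --secret id= values passed to docker build in the next step.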

Prepare the docker build Process

After saving the Dockerfile to a new directory, prepare the docker build process. To pass your 1Password secrets into the docker build, first store each secret reference URI in an environment variable. Then, open a new Bash terminal in the directory where you saved your Dockerfile.

Enter these commands into the Bash terminal:

export NGINX_CRT="op://Work/nginx-repo-crt/nginx-repo.crt"
export NGINX_KEY="op://Work/nginx-repo-key/nginx-repo.key"

Replace Secret Reference URIs

The op run command enables your 1Password CLI to replace secret reference URIs in environment variables with the secret’s value. You can leverage this in your docker build command to pass the NGINX repository certificate and key into the build container.

To finish building your container, run the following command in the same terminal used in the previous step:

op run -- docker build --no-cache --secret id=nginx-key,env=NGINX_KEY --secret id=nginx-crt,env=NGINX_CRT -t nginxplus --load .

In this command, op run executes the docker build command, detects the two environment variables (NGINX_CRT and NGINX_KEY) that contain 1Password secret reference URIs, and replaces each URI with the secret’s actual value for the duration of the build.
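As a quick sanity check (assuming the image was tagged nginxplus as above), you can confirm that the certificate and key were not baked into the final image – with BuildKit secret mounts they exist only during the RUN step that uses them:

# the repo credentials should not be present anywhere in the built image
docker run --rm --entrypoint sh nginxplus -c 'ls /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key 2>/dev/null || echo "repo credentials are not in the image"'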

Get Started Today

By following the simple steps and using 1Password CLI, you can build NGINX Plus containers against the NGINX Plus repository without storing the certificate and key on your local file system – creating an environment for better security.

If you’re new to NGINX Plus, you can start your 30-day free trial today or contact us to discuss your use cases.

Multi-Cloud API Security with NGINX and F5 Distributed Cloud WAAP
https://www.nginx.com/blog/multi-cloud-api-security-with-nginx-and-f5-distributed-cloud-waap/
August 1, 2023

The question is no longer if you’re in the cloud, but how many clouds you’re in. Most enterprises today recognize there isn’t a “one cloud fits all” solution and have shifted toward a hybrid or multi-cloud architecture. According to data from F5’s State of Application Strategy in 2023 report, 85% of enterprises operate applications with two or more different architectures.

For development and API teams, this creates a lot of pressure. They’re tasked with securely delivering APIs at scale in complex, distributed environments. Connections are no longer simply between clients and backend services – they are now between applications deployed in different clouds, regions, data centers, or edge locations. Meanwhile, every API must meet the organization’s security and compliance requirements, regardless of where it is deployed and what tools are used to deliver and secure it.

Securing APIs in these highly distributed environments requires a unique set of capabilities and best practices. I previously wrote about the importance of a two-pronged approach to API security: “shifting left” to build security in from the start and “shielding right” with a set of global posture management practices. In this blog post, we’ll look at how to put that strategy into practice while securely delivering APIs across cloud, on-premises, and edge environments.

Hybrid and Multi-Cloud API Security Reference Architecture

Hybrid and multi-cloud architectures have clear advantages – especially for agility, scalability, and resilience. But they add an extra layer of complexity. In fact, F5’s State of Application Strategy in 2023 report showed that increased complexity is the most common challenge facing organizations today. The second most common challenge? Applying consistent security.

The problem today is that some security solutions, like certain WAFs, lack the context and protection APIs need. At the same time, dedicated API security solutions lack the ability to create and enforce policies to stop attacks. You need a solution that treats your architecture and technology as an interconnected stack that spans discovery, observability, management, and enforcement.

Practically, API security needs to be incorporated across three tiers to provide protection as API traffic traverses critical infrastructure points:

  • Global tier – Edge protection from bot and DoS attacks, as well as discovery and visibility
  • Site tier – Protection within an individual cloud, data center, or edge deployment
  • App tier – Fine-grained access control and threat protection deployed near the API runtime

The reference architecture below provides an overview of how F5 Distributed Cloud Services and F5 NGINX work together to provide comprehensive API protection in multi-cloud and hybrid architectures:

F5 Distributed Cloud provides a global tier of protection across edge, cloud, and on-premises deployments.

In this reference architecture, F5 Distributed Cloud provides a global tier of protection across edge, cloud, and on-premises deployments. NGINX Plus with NGINX App Protect WAF provides fine-grained protection at the site tier and/or app tier by integrating into software development lifecycles to enforce runtime security.

Let’s look at the security protections provided by each component of this architecture.

API Discovery and Monitoring with F5 Distributed Cloud

To start, API traffic from public clients traverses through the F5 Distributed Cloud Web Application and API Protection (WAAP), which is deployed at the edge. Critically, this provides global protection from DDoS attacks, bot abuse, and other exploits. It also provides important global visibility into API traffic entering different clouds, on-premises data centers, and edge deployments.

API traffic is increasing rapidly and most API attacks unfold slowly over weeks or even months. Finding malicious traffic inside the flood of regular API requests and responses can be like finding a needle in a haystack. To solve this problem, F5 Distributed Cloud uses artificial intelligence (AI) and machine learning (ML) to generate insights into API traffic, including API discovery, endpoint mapping, and active learning and detection of anomalies that could represent emerging threats.

Acting as the global tier of app and API security, F5 Distributed Cloud WAAP provides the following benefits:

  • Automatic API discovery – Detects and maps APIs for a complete view into your ecosystem, including visibility into third-party and shadow APIs, authentication status, and more.
  • Sensitive data leak prevention – Detects, characterizes, and masks sensitive data like social security numbers, credit card numbers, and other personally identifiable information (PII) to prevent exposure.
  • Monitoring and anomaly detection – Continuously inspects and analyzes traffic to detect anomalies and vulnerabilities with AI and ML tools.
  • Enhanced API visibility – Observes how traffic flows across all API endpoints to understand connectivity across edge APIs, internal services, and third-party integrations.
  • Enforced security across environments – Uses a positive security model by enforcing schema validation, rate limiting, and blocking of undesirable or malicious traffic.

To get started with F5 Distributed Cloud WAAP, you can request a free enterprise trial of F5 Distributed Cloud Services, which includes API security, bot defense, edge compute, and multi-cloud networking.

Access Control and Runtime Protection with F5 NGINX

Once API traffic flows through the global tier, it arrives at the site tier and/or app tiers. While the global tier is typically managed by IT networking and security teams, individual APIs in the site tier and app tier are built and managed by software engineering teams.

When it comes to access control, an API gateway is a common choice because it enables developers to offload some of the most common security requirements to a shared infrastructure tier above the application. This reduces duplicated effort (e.g., having each developer or team build their own authentication and authorization service).

F5 NGINX Management Suite API Connectivity Manager enables platform engineering and DevOps teams to provide access to shared infrastructure, such as API gateways and developer portals, without requiring developers to file request tickets or navigate other cumbersome processes.

With API Connectivity Manager, you can set security policies to configure NGINX Plus as an API gateway and configure and monitor NGINX App Protect WAF policies. Together, they provide critical API runtime protection, including the ability to:

  • Enforce access control – Manage fine-grained access (authentication and authorization) to API endpoints and create access control lists to allow or deny traffic based on IP address or JWT claims (see the configuration sketch after this list).
  • Encrypt and mask sensitive data – Secure communications between APIs with mTLS and end-to-end encryption, and detect and mask sensitive data like credit card numbers in API responses.
  • Detect and block threats – Go beyond protection from the OWASP API Security Top 10 with advanced protection from more than 7,500 threat campaigns and attack signatures.
  • Monitor WAFs and API traffic at scale – Visualize API traffic across all your API gateways with NGINX App Protect WAF to detect false positives and potential threats.
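As a hedged illustration of what such access control can look like on NGINX Plus acting as an API gateway, the sketch below validates a JWT and applies a simple IP allowlist; the hostname, key file, network range, and backend URL are placeholders rather than values from this post:

cat > /etc/nginx/conf.d/api-gateway-acl.conf <<'EOF'
server {
    listen 443 ssl;
    server_name api.example.com;                 # placeholder
    ssl_certificate     /etc/ssl/api.crt;        # placeholder
    ssl_certificate_key /etc/ssl/api.key;        # placeholder

    location /api/ {
        auth_jwt          "api";                   # NGINX Plus: reject requests without a valid JWT
        auth_jwt_key_file /etc/nginx/jwk/api.jwk;  # placeholder key set
        allow 10.0.0.0/8;                          # example ACL: internal clients only
        deny  all;
        proxy_pass https://backend.example.com;    # placeholder upstream
    }
}
EOF
nginx -t && nginx -s reload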

You can start a free 30-day trial of the NGINX API Connectivity Stack to access NGINX Management Suite and its API Connectivity Manager, Instance Manager, and Security Monitoring modules, in addition to NGINX Plus as an API gateway and NGINX App Protect for WAF and DoS protection.

Conclusion

NGINX provides excellent runtime protection across cloud and on-premises data center environments. When combined with F5 Distributed Cloud, security and platform engineering teams gain continuous visibility into API endpoints regardless of where the associated apps are deployed. Together, F5 Distributed Cloud and NGINX provide complete flexibility to both build and secure your architecture in any way you need.

Announcing the Open Source Subscription by F5 NGINX
https://www.nginx.com/blog/announcing-open-source-subscription-f5-nginx/
June 14, 2023

As a reader of the NGINX blog, you’ve likely already gathered that NGINX Open Source is pretty popular. But it isn’t just because it’s free (though that’s nice, too!) – NGINX Open Source is so popular because it’s known for being stable, lightweight, and the developer’s Swiss Army Knife™.

Tweet screenshot: "Ok world. What say you? Favorite webserver? @nginx , Apache or are you using @caddyserver ?" and the response "Nothing compares to nginx. Used it yesterday to emergency fix a problem by reverse proxying in a handful of lines of config. Swiss army knife of hosting software."

Whether you need a web server, reverse proxy, API gateway, Ingress controller, or cache, NGINX (which is lightweight enough to be installed from a floppy disk) has your back. But there’s one thing NGINX Open Source users have told us is missing: Enterprise support. So, that (and more) is what we’re excited to introduce with the new Open Source Subscription!

What Is the Open Source Subscription?

The Open Source Subscription is a new bundle that includes enterprise support for NGINX Open Source, access to NGINX Plus, and fleet management with NGINX Management Suite Instance Manager – each covered in the sections that follow.

Enterprise Support for NGINX Open Source

NGINX Open Source has a reputation for reliability and the community provides fantastic support, but sometimes more is necessary. With the Open Source Subscription, F5 adds enterprise support to NGINX Open Source, including:

  • SLA options of business hours or 24/7
  • Security patches and bug fixes
  • Security notifications
  • Debugging and error correction
  • Clarification of documentation discrepancies

Next, let’s dive into some of the benefits of having enterprise support.

Timely Patches and Fixes

A common risk with any open source software (OSS) is the time it can take to address Common Vulnerabilities and Exposures (CVEs) and bugs. In fact, we’ve seen forks of NGINX Open Source take weeks, or even months, to patch. For example, on October 19, 2022, we announced fixes for CVE-2022-41741 and CVE-2022-41742, but the corresponding Ubuntu and Debian patches weren’t made available until November 15, 2022.

As a customer of the Open Source Subscription, you’ll get immediate access to patches and fixes, proactive notifications of CVEs, and more, including:

  • Security patches in the latest mainline and stable releases
  • Critical bug fixes in the latest mainline release
  • Non-critical bug fixes in the latest or a future mainline release

Regulatory Compliance

An increasing number of companies and governments are concerned about software supply chain issues, with many adhering to the practice of building a software bill of materials (SBOM). As the SBOM concept matures, regulators are starting to require patching "on a reasonably justified regular cycle", with timely patches for serious vulnerabilities found outside of the normal patch cycle.

With the Open Source Subscription, you can ensure that your NGINX Open Source instances meet your organization’s OSS software requirements by demonstrating due diligence, traceability, and compliance with relevant regulations, especially when it comes to security aspects.

Confidentiality

Getting good support requires sharing configuration files. However, if you’re sharing configs with a community member or in forums, then you’re exposing your organization to security vulnerabilities (or even breaches). Just one simple piece of NGINX code shared on Stack Overflow could offer bad actors insight into how to exploit your apps or architecture.

The Open Source Subscription grants you direct access to F5’s team of security experts, so you can be assured that your configs stay confidential. To learn more, see the NGINX Open Source Support Policy.

Note: The Open Source Subscription includes support for Linux packages of NGINX Open Source stable and mainline versions obtained directly from NGINX. We are exploring how we might be able to support packages customized and distributed by other vendors, so tell us in the comments which distros are important to you!

Enterprise Features Via Automatic Access to NGINX Plus

With the Open Source Subscription, you get access to NGINX Plus at no added cost. The subscription lets you choose when to use NGINX Open Source or NGINX Plus based on your business needs.

NGINX Open Source is perfect for many app delivery use cases, and is particularly outstanding for web serving, content caching, and basic traffic management. And while you can extend NGINX Open Source for other use cases, this can result in stability and latency issues. For example, it’s common to use Lua scripts to detect endpoint changes (where the Lua handler chooses which upstream service to route requests to, thus eliminating the need to reload the NGINX configuration). However, Lua must continuously check for changes, so it ends up consuming resources which, in turn, increases the processing time of incoming requests. In addition to causing timeouts, this also results in complexity and higher resource costs.

NGINX Plus can handle advanced use cases and provides out-of-the-box capabilities for load balancing, API gateway, Ingress controller, and more. Many customers choose NGINX Plus for business-critical apps and APIs that have stringent requirements related to uptime, availability, security, and identity.
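As a hedged example of that difference, NGINX Plus can track upstream endpoint changes through DNS re-resolution instead of relying on a polling Lua handler; the resolver address, domain name, and config path below are placeholders:

cat > /etc/nginx/conf.d/dynamic-upstream.conf <<'EOF'
resolver 10.0.0.2 valid=10s;            # placeholder DNS server, re-resolve every 10s

upstream backend {
    zone backend 64k;                   # shared memory zone required for runtime changes
    server backend.example.com resolve; # NGINX Plus re-resolves this name without a reload
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
EOF
nginx -t && nginx -s reload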

Maintain Uptime and Availability at Scale

Uptime and availability are crucial to mission-critical apps and APIs because your customers (both internal and external) are directly impacted by any problems that arise when scaling up.

You can use NGINX Plus capabilities such as active health checks, session persistence, and dynamic reconfiguration of upstreams without reloads to keep services available as you scale.

Improve Security and Identity Management

By building non-functional requirements into your traffic management strategy, you can offload those requirements from your apps. This reduces errors and frees up developers to work on core requirements.

With NGINX Plus, you can enhance security by:

  • Using JWT authentication, OpenID Connect (OIDC), and SAML to centralize authentication and authorization at the load balancer, API gateway, or Ingress controller
  • Enforcing end-to-end encryption and certificate management with SSL/TLS offloading and SSL termination
  • Enabling FIPS 140-2 for the processing of all SSL/TLS and HTTP/2 traffic
  • Implementing PCI DSS best practices for protecting consumers’ credit card numbers and other personal data
  • Adding NGINX App Protect for Layer 7 WAF and denial-of-service (DoS) protection

Fleet Management with Instance Manager

Administration of an NGINX fleet at scale can be difficult. With NGINX Open Source, you might have hundreds of instances (maybe even thousands!) at your organization, which can introduce a lot of complexity and risk related to CVEs, configuration issues, and expired certificates. That’s why the Open Source Subscription includes NGINX Management Suite Instance Manager, which enables you to centrally inventory all of your NGINX Open Source, NGINX Plus, and NGINX App Protect WAF instances so you can configure, secure, and monitor your NGINX fleet with ease.

Diagram showing how NGINX Instance Manager manages your fleet of NGINX Open Source, Plus, and App Protect WAF

Understand Your NGINX Estate

With Instance Manager you can get an accurate count of your instances in any environment, including Kubernetes. Instance Manager allows you to:

  • Inventory instances and discover software versions with potential CVE exposures
  • Learn about configuration problems and resolve them with a built-in editor that leverages best practice recommendations
  • Visualize protection insights, analyze possible threats, and identify opportunities for tuning your WAF policies with Security Monitoring

Manage Certificates

Expired certificates have become a notorious cause of breaches. Use Instance Manager to ensure secure communication between NGINX instances and their clients. With Instance Manager, you can track, manage, and deploy SSL/TLS certificates on all of your instances (including finding and updating expiring certificates) and rotate encryption keys regularly (or whenever a key has been compromised).

Simplify Visibility

The amount of data you can get from NGINX instances can be staggering. To help you get the most out of that data and your third-party tools, Instance Manager provides events and metrics data, helping you collect valuable NGINX metrics and then forward them to commonly used monitoring, visibility, and alerting tools via API. In addition, you get unique, curated insights into the protection of your apps and APIs when NGINX App Protect is added.

Get Started with the Open Source Subscription

If you’re interested in getting started with the new Open Source Subscription, contact us today to discuss your use cases.

Shaping the Future of Kubernetes Application Connectivity with F5 NGINX
https://www.nginx.com/blog/shaping-future-of-kubernetes-application-connectivity-with-f5-nginx/
June 8, 2023

Application connectivity in Kubernetes can be extremely complex, especially when you deploy hundreds – or even thousands – of containers across various cloud environments, including on-premises, public, private, or hybrid and multi-cloud. At NGINX, we firmly believe that integrating a unified approach to manage connectivity to, from, and within a Kubernetes cluster can dramatically simplify and streamline operations for development, infrastructure, platform engineering, and security teams.

In this blog, we want to share some reflections and thoughts on how NGINX created one of the most popular Ingress controllers today, and ways we plan to continue delivering best-in-class capabilities for managing Kubernetes app connectivity in the future.

Also, don’t miss a chance to chat with our engineers and architects to discover the latest cool and exciting projects that NGINX is working on and see these technologies in action. NGINX, a part of F5, is proud to be a Platinum Sponsor of KubeCon North America 2023, and we hope to see you there! Come meet us at the NGINX booth to discuss how we can help enhance security, scalability, and observability of your Kubernetes platform.

Before anything, we want to note the importance of putting the customer first. NGINX does so by looking at each customer’s specific scenario and use cases, goals they aim to achieve, and challenges they might encounter on their journey. Then, we develop a solution leveraging our technology innovations that helps the customer achieve those goals and address any challenges in the most efficient way.

Ingress Controller

In 2017, we released the first version of NGINX Ingress Controller to answer the demand for enterprise-class Kubernetes-native app delivery. NGINX Ingress Controller helps improve user experience with load balancing, SSL termination, URI rewrites, session persistence, JWT authentication, and other key application delivery features. It is built on the most popular data plane in the world – NGINX – and leverages the Kubernetes Ingress API.

After its release, NGINX Ingress Controller gained immediate traction due to its ease of deployment and configuration, low resource utilization (even under heavy loads), and fast and reliable operations.

Ingress Controller ecosystem diagram

As our journey advanced, we ran into limitations of the Ingress object in the Kubernetes API, such as the lack of support for protocols other than HTTP and the inability to attach customized request-handling policies like security policies. Due to these limitations, we introduced Custom Resource Definitions (CRDs) to enhance NGINX Ingress Controller capabilities and enable advanced use cases for our customers.

NGINX Ingress Controller provides the CRDs VirtualServer, VirtualServerRoute, TransportServer, and Policy to enhance performance, resilience, uptime, and security, along with observability for the API gateway, load balancer, and Ingress functionality at the edge of a Kubernetes cluster. In support of frequent app releases, these NGINX CRDs also enable role-oriented self-service governance across multi-tenant development and operations teams.

Ingress Controller custom resources
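As a brief, hedged illustration of these custom resources, a minimal VirtualServer definition might look like the sketch below; the names, hostname, and service are placeholders, and the full schema is documented with NGINX Ingress Controller:

kubectl apply -f - <<'EOF'
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe                     # placeholder name
spec:
  host: cafe.example.com         # placeholder hostname
  upstreams:
  - name: tea
    service: tea-svc             # placeholder Kubernetes Service
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea                  # route /tea to the tea upstream
EOF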

With our most recent release at the time of writing (version 3.1), we added JWT authorization and introduced Deep Service Insight to help customers monitor the status of their apps behind NGINX Ingress Controller. This helps implement advanced failover scenarios (e.g., from on-premises to cloud). Many other features are planned on the roadmap, so stay tuned for new releases.

Learn more about how you can reduce complexity, increase uptime, and provide better insights into app health and performance at scale on the NGINX Ingress Controller web page.

Service Mesh

In 2020, we continued our Kubernetes app connectivity journey by introducing NGINX Service Mesh, a purpose-built, developer-friendly, lightweight yet comprehensive solution to power a variety of service-to-service connectivity use cases, including security and visibility, within the Kubernetes cluster.

NGINX Service Mesh Control and Data Planes

NGINX Service Mesh and NGINX Ingress Controller leverage the same data plane technology and can be tightly and seamlessly integrated for unified connectivity to, from, and within a cluster.

Prior to the latest release (version 2.0), NGINX Service Mesh used SMI specifications and a bespoke API server to deliver service-to-service connectivity within a Kubernetes cluster. With version 2.0, we decided to deprecate the SMI resources and replace them by mimicking the resources from Gateway API for Mesh Management and Administration (GAMMA). With this approach, we ensure unified north-south and east-west connectivity that leverages the same CRD types, simplifying and streamlining configuration and operations.

NGINX Service Mesh is available as a free download from GitHub.

Gateway API

The Gateway API is an open source project intended to improve and standardize app and service networking in Kubernetes. Managed by the Kubernetes community, the Gateway API specification evolved from the Kubernetes Ingress API to solve limitations of the Ingress resource in production environments. These limitations include defining fine-grained policies for request processing and delegating control over configuration across multiple teams and roles. It’s an exciting project – and since the Gateway API’s introduction, NGINX has been an active participant.

Gateway API Resources

That said, we intentionally didn’t want to include the Gateway API specifications in NGINX Ingress Controller because it already has a robust set of CRDs that cover a diverse variety of use cases, and some of those use cases are the same ones the Gateway API is intended to address.

In 2021, we decided to spin off a separate new project that covers all aspects of Kubernetes connectivity with the Gateway API: NGINX Kubernetes Gateway.

We decided to start our NGINX Kubernetes Gateway project, rather than just using NGINX Ingress Controller, for these reasons:

  • To ensure product stability, reliability, and production readiness (we didn’t want to include beta-level specs into a mature, enterprise-class Ingress controller).
  • To deliver comprehensive, vendor-agnostic configuration interoperability for Gateway API resources without mixing them with vendor-specific CRDs.
  • To experiment with data and control plane architectural choices and decisions with the goal to provide easy-to-use, fast, reliable, and secure Kubernetes connectivity that is future-proof.

In addition, the Gateway API formed a GAMMA subgroup to research and define capabilities and resources of the Gateway API specifications for service mesh use cases. Here at NGINX, we see the long-term future of unified north-south and east-west Kubernetes connectivity in the Gateway API, and we are heading in that direction.

The Gateway API is truly a collaborative effort across vendors and projects – all working together to build something better for Kubernetes users, based on experience and expertise, common touchpoints, and joint decisions. There will always be room for individual implementations to innovate and for data planes to shine. With NGINX Kubernetes Gateway, we continue working on native NGINX implementation of the Gateway API, and we encourage you to join us in shaping the future of Kubernetes app connectivity.

Ways you can get involved in NGINX Kubernetes Gateway include:

  • Join the project as a contributor
  • Try the implementation in your lab
  • Test and provide feedback

To join the project, visit NGINX Kubernetes Gateway on GitHub.

Even with this evolution of the Kubernetes Ingress API, NGINX Ingress Controller is not going anywhere and will stay here for the foreseeable future. We’ll continue to invest into and develop our proven and mature technology to satisfy both current and future customer needs and help users who need to manage app connectivity at the edge of a Kubernetes cluster.

Get Started Today

To learn more about how you can simplify application delivery with NGINX Kubernetes solutions, visit the Connectivity Stack for Kubernetes web page.

The Mission-Critical Patient-Care Use Case That Became a Kubernetes Odyssey
https://www.nginx.com/blog/mission-critical-patient-care-use-case-became-kubernetes-odyssey/
May 17, 2023

Downtime can lead to serious consequences.

These words are truer for companies in the medical technology field than in most other industries – in their case, the "serious consequences" can literally include death. We recently had the chance to dissect the tech stack of a company that’s seeking to transform medical record keeping from pen-and-paper to secure digital data that is accessible anytime, and anywhere, in the world. These data range from patient information to care directives, biological markers, medical analytics, historical records, and everything else shared between healthcare teams.

From the outset, the company has sought to address a seemingly simple question: “How can we help care workers easily record data in real time?” As the company has grown, however, the need to scale and make data constantly available has made solving that challenge increasingly complex. Here we describe how the company’s tech journey has led them to adopt Kubernetes and NGINX Ingress Controller.

Tech Stack at a Glance

Here’s a look at where NGINX fits into their architecture:

Diagram showing how NGINX fits into their architecture

The Problem with Paper

Capturing patient status and care information at regular intervals is a core duty for healthcare personnel. Traditionally, they have recorded patient information on paper or, more recently, on a laptop or tablet. There are a couple of serious downsides:

  • Healthcare workers may interact with dozens of patients per day, so it’s usually not practical to write detailed notes while providing care. As a result, workers end up writing their notes at the end of their shift. At that point, mental and physical fatigue make it tempting to record only generic comments.
  • The workers must also depend on their memory of details about patient behavior. Inaccuracies might mask patterns that, if documented correctly and consistently over time, could facilitate diagnosis of larger health issues.
  • Paper records can’t easily be shared among departments within a single facility, let alone with other entities like EMTs, emergency room staff, and insurance companies. The situation isn’t much better with laptops or tablets if they’re not connected to a central data store or the cloud.

To address these challenges, the company created a simplified data recording system that provides shortcuts for accessing patient information and recording common events like dispensing medication. This ease of access and use makes it possible to record patient interactions in real time as they happen.

All data is stored in cloud systems maintained by the company, and the app integrates with other electronic medical records systems to provide a comprehensive longitudinal view of resident behaviors. This helps caregivers provide better continuity of care, creates a secure historical record, and can be easily shared with other healthcare software systems.

Physicians and other specialists also use the platform when admitting or otherwise engaging with patients. A record of preferences and personal needs travels with the patient to any facility. These details can be used to help patients feel comfortable in a new setting, which improves outcomes like recovery time.

There are strict legal requirements about how long companies must store patient data. The company’s developers have built the software to offer extremely high availability with uptime SLAs that are much better than those of generic cloud applications. Keeping an ambulance waiting because a patient’s file won’t load isn’t an option.

The Voyage from the Garage to the Cloud to Kubernetes

Like many startups, the company initially saved money by running the first proof-of-concept application on a server in a co-founder’s home. Once it became clear the idea had legs, the company moved its infrastructure to the cloud rather than manage hardware in a data center. Being a Microsoft shop, they chose Azure. The initial architecture ran applications on traditional virtual machines (VMs) in Azure App Service, a managed application delivery service that runs Microsoft’s IIS web server. For data storage and retrieval, the company opted to use Microsoft’s SQL Server running in a VM as a managed application.

After several years running in the cloud, the company was growing quickly and experiencing scaling pains. It needed to scale infinitely, and horizontally rather than vertically because the latter is slow and expensive with VMs. This requirement led rather naturally to containerization and Kubernetes as a possible solution. A further point in favor of containerization was that the company’s developers need to ship updates to the application and infrastructure frequently, without risking outages. With patient notes being constantly added across multiple time zones, there is no natural downtime to push changes to production without the risk of customers immediately being affected by glitches.

A logical starting point for the company was Microsoft’s managed Kubernetes offering, Azure Kubernetes Services (AKS). The team researched Kubernetes best practices and realized they needed an Ingress controller running in front of their Kubernetes clusters to effectively manage traffic and applications running in nodes and pods on AKS.

Traffic Routing Must Be Flexible Yet Precise

The team tested AKS’s default Ingress controller, but found its traffic-routing features simply could not deliver updates to the company’s customers in the required manner. When it comes to patient care, there’s no room for ambiguity or conflicting information – it’s unacceptable for one care worker to see an orange flag and another a red flag for the same event, for example. Hence, all users in a given organization must use the same version of the app. This presents a big challenge when it comes to upgrades. There’s no natural time to transition a customer to a new version, so the company needed a way to use rules at the server and network level to route different customers to different app versions.

To achieve this, the company runs the same backend platform for all users in an organization and does not offer multi-tenancy with segmentation at the infrastructure layer within the organization. With Kubernetes, it is possible to split traffic using virtual network routes and cookies on browsers along with detailed traffic rules. However, the company’s technical team found that AKS’s default Ingress controller can split traffic only on a percentage basis, not with rules that operate at level of customer organization or individual user as required.

In its basic configuration, the NGINX Ingress Controller based on NGINX Open Source has the same limitation, so the company decided to pivot to the more advanced NGINX Ingress Controller based on NGINX Plus, an enterprise-grade product that supports granular traffic control. Recommendations for NGINX Ingress Controller from Microsoft and the Kubernetes community, based on its high level of flexibility and control, helped solidify the choice. The configuration better supports the company’s need for pod management (as opposed to classic traffic management), ensuring that pods are running in the appropriate zones and traffic is routed to those services. Sometimes traffic is routed internally, but in most use cases it is routed back out through NGINX Ingress Controller for observability reasons.
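As a hedged sketch of the kind of rule-based routing described here (the version names, cookie, and services are illustrative, not taken from the company's actual configuration), an NGINX Ingress Controller VirtualServer can route users to a specific app version based on a cookie:

kubectl apply -f - <<'EOF'
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: patient-app              # illustrative name
spec:
  host: app.example.com          # illustrative hostname
  upstreams:
  - name: app-v1
    service: app-v1-svc          # illustrative Services
    port: 80
  - name: app-v2
    service: app-v2-svc
    port: 80
  routes:
  - path: /
    matches:
    - conditions:
      - cookie: app_version      # users whose session cookie says v2...
        value: v2
      action:
        pass: app-v2             # ...go to the new version
    action:
      pass: app-v1               # everyone else stays on v1
EOF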

Here Be Dragons: Monitoring, Observability and Application Performance

With NGINX Ingress Controller, the technical team has complete control over the developer and end user experience. Once users log in and establish a session, they can immediately be routed to a new version or reverted back to an older one. Patches can be pushed simultaneously and nearly instantaneously to all users in an organization. The software isn’t reliant on DNS propagation or updates on networking across the cloud platform.

NGINX Ingress Controller also meets the company’s requirement for granular and continuous monitoring. Application performance is extremely important in healthcare. Latency or downtime can hamper successful clinical care, especially in life-or-death situations. After the move to Kubernetes, customers started reporting downtime that the company hadn’t noticed. The company soon discovered the source of the problem: Azure App Service relies on sampled data. Sampling is fine for averages and broad trends, but it completely misses things like rejected requests and missing resources. Nor does it show the usage spikes that commonly occur every half hour as care givers check in and log patient data. The company was getting only an incomplete picture of latency, error sources, bad requests, and unavailable service.

The problems didn’t stop there. By default Azure App Service preserves stored data for only a month – far short of the dozens of years mandated by laws in many countries.  Expanding the data store as required for longer preservation was prohibitively expensive. In addition, the Azure solution cannot see inside of the Kubernetes networking stack. NGINX Ingress Controller can monitor both infrastructure and application parameters as it handles Layer 4 and Layer 7 traffic.

For performance monitoring and observability, the company chose a Prometheus time-series database attached to a Grafana visualization engine and dashboard. Integration with Prometheus and Grafana is pre-baked into the NGINX data and control plane; the technical team had to make only a small configuration change to direct all traffic through the Prometheus and Grafana servers. The information was also routed into a Grafana Loki logging database to make it easier to analyze logs and give the software team more control over data over time. 
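For reference (the details here are assumptions about a typical setup, not the company's exact configuration), NGINX Ingress Controller exposes a Prometheus scrape endpoint when started with the -enable-prometheus-metrics argument, on port 9113 by default; one quick way to check it from a workstation is:

# assumes the controller runs as deployment "nginx-ingress" in namespace "nginx-ingress"
kubectl port-forward -n nginx-ingress deployment/nginx-ingress 9113:9113 &
curl -s http://localhost:9113/metrics | head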

This configuration also future-proofs against incidents requiring extremely frequent and high-volume data sampling for troubleshooting and fixing bugs. Addressing these types of incidents might be costly with the application monitoring systems provided by most large cloud companies, but the cost and overhead of Prometheus, Grafana, and Loki in this use case are minimal. All three are stable open source products which generally require little more than patching after initial tuning.

Stay the Course: A Focus on High Availability and Security

The company has always had a dual focus: security, to protect one of the most sensitive types of data there is, and high availability, to ensure the app is available whenever it’s needed. In the shift to Kubernetes, they made a few changes to strengthen both.

For the highest availability, the technical team deploys an active-active, multi-zone, and multi-geo distributed infrastructure design for complete redundancy with no single point of failure. The team maintains N+2 active-active infrastructure with dual Kubernetes clusters in two different geographies. Within each geography, the software spans multiple data centers to reduce downtime risk, providing coverage in case of any failures at any layer in the infrastructure. Affinity and anti-affinity rules can instantly reroute users and traffic to up-and-running pods to prevent service interruptions. 

For security, the team deploys a web application firewall (WAF) to guard against bad requests and malicious actors. Protection against the OWASP Top 10 is table stakes provided by most WAFs. As they created the app, the team researched a number of WAFs including the native Azure WAF and ModSecurity. In the end, the team chose NGINX App Protect with its inline WAF and distributed denial-of-service (DDoS) protection.

A big advantage of NGINX App Protect is its colocation with NGINX Ingress Controller, which both eliminates a point of redundancy and reduces latency. Other WAFs must be placed outside of the Kubernetes environment, contributing to latency and cost. Even miniscule delays (say 1 millisecond extra per request) add up quickly over time.

Surprise Side Quest: No Downtime for Developers

Having completed the transition to AKS for most of its application and networking infrastructure, the company has also realized significant improvements to its developer experience (DevEx). Developers now almost always spot problems before customers notice any issues themselves. Since the switch, the volume of support calls about errors is down about 80%!

The company’s security and application-performance teams have a detailed Grafana dashboard and unified alerting, eliminating the need to check multiple systems or implement triggers for warning texts and calls coming from different processes. The development and DevOps teams can now ship code and infrastructure updates daily or even multiple times per day and use extremely granular blue-green patterns. Formerly, they were shipping updates once or twice per week and having to time them for low-usage windows, a stressful proposition. Now, code is shipped when ready and the developers can monitor the impact directly by observing application behavior.

The results are positive all around – an increase in software development velocity, improvement in developer morale, and more lives saved.

2 Ways to View and Manage Your WAF Fleet at Scale with F5 NGINX
https://www.nginx.com/blog/2-ways-view-manage-waf-fleet-at-scale-f5-nginx/
March 23, 2023

As organizations transform digitally and grow their application portfolios, security challenges also transform and multiply. In F5’s The State of Application Strategy in 2022, we saw how many organizations today have more apps to monitor than ever – often anywhere from 200 to 1000!

That high number creates more potential attack surfaces, making today’s apps particularly susceptible to bad actors. This vulnerability worsens when a web application needs to handle increased amounts of traffic. To minimize downtime (or even better, eliminate it!), it’s crucial to develop a strategy that puts security first.

WAF: Your First Line of Defense

In our webinar Easily View, Manage, and Scale Your App Security with F5 NGINX, we cover why a web application firewall (WAF) is the tool of choice for securing and protecting web applications. By monitoring and filtering traffic, a WAF is the first line of defense to protect applications against sophisticated Layer 7 attacks like distributed denial of service (DDoS).

The following WAF capabilities ensure a robust app security solution:

  • HTTP protocol and traffic validation
  • Data protection
  • Automated attack blocking
  • Easy policy integration into CI/CD pipelines
  • Centralized visualization
  • Configuration management at scale

But while the WAF is monitoring the apps, how does your team monitor the WAF? And what about when you deploy multiple WAFs in a fleet to handle numerous attacks? In the webinar, we answer these questions and also do a real‑time demo.

As a preview of the webinar, in this post we look into two key findings to help you get started managing your WAF fleet at scale:

  1. How to increase visibility
  2. How to enable security-as-code

Increase Visibility with NGINX Management Suite

The success of any WAF strategy depends on the level of visibility available to the teams implementing and managing the WAFs during creation, deployment, and modification. This is where a management plane comes in. Rather than making your teams look at each WAF through a separate, individual lens, it’s important to have one, centralized pane of glass for monitoring all your WAFs. With centralized visibility, you can make informed decisions about current attacks and easily gain insights to fine‑tune your security policies.

Additionally, it’s critical that your SecOps, Platform Ops, and DevOps teams share a clear and cohesive strategy. When these three teams work together on both the setup and maintenance of your WAFs, you achieve stronger app security at scale.

Here’s how each team benefits from using our management plane, F5 NGINX Management Suite, which easily integrates with NGINX App Protect WAF:

  • SecOps – Gains centralized visibility into app security and compliance, the ability to apply uniform policies across teams, and support for a shift‑left strategy.
  • Platform Ops – Can provide app security support to multiple users, centralized visibility across the entire WAF fleet, and scalable DevOps across the entire enterprise.
  • DevOps – Can automate security within the CI/CD pipeline, easily and quickly deploy app security, and provide better customer experience by building apps that are more reliable and less subject to attack.

Enable Security as Code with NGINX App Protect WAF

Instance Manager is the core module in NGINX Management Suite and enables centralized management of NGINX App Protect WAF security policies at scale. When your DevOps team can easily consume SecOps‑managed security policies, it can start moving towards a DevSecOps culture, immediately integrating security at all phases of the CI/CD pipeline, shifting security left.

Shifting left and centrally managing your WAF fleet means:

  • A declarative security policy (in JSON from SecOps) enables DevOps to use CI/CD tools natively (see the sketch after this list).
  • Your security policy can be pushed to the application from a developer tool.
  • SecOps and DevOps can independently own their files.
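As a hedged sketch of what such a declarative policy can look like (the policy name is a placeholder, and a real policy would typically extend this minimal base), a SecOps team might version a JSON file like this in Git and have the pipeline deploy it alongside the app:

cat > app-protect-policy.json <<'EOF'
{
  "policy": {
    "name": "ci_pipeline_policy",
    "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
    "applicationLanguage": "utf-8",
    "enforcementMode": "blocking"
  }
}
EOF

NGINX App Protect WAF then references the file from the NGINX configuration with the app_protect_policy_file directive, so the same JSON artifact can be promoted through each stage of the CI/CD pipeline.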

With platform-agnostic NGINX App Protect WAF, you can easily shift left and automate security into the CI/CD pipeline. You can learn more in the full webinar, described below.

Watch the Full Webinar On Demand

To dive deeper into these topics and see the ten‑minute real‑time demo, watch our on‑demand webinar Easily View, Manage, and Scale Your App Security with F5 NGINX.

In addition to the findings discussed in this post, the webinar covers:

  • Additional considerations for managing a WAF fleet at scale
  • How visibility of top WAF violations, attacks, and CVEs helps you determine how to tune policies
  • Ways to reduce policy errors with centralized WAF visibility and management
  • Details on automation of security-as-code

Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

The post 2 Ways to View and Manage Your WAF Fleet at Scale with F5 NGINX appeared first on NGINX.

]]>
NGINX Tutorial: How to Securely Manage Secrets in Containers https://www.nginx.com/blog/nginx-tutorial-securely-manage-secrets-containers/ Tue, 14 Mar 2023 15:06:21 +0000 https://www.nginx.com/?p=71340 This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices: How to Deploy and Configure Microservices How to Securely Manage Secrets in Containers (this post) How to Use GitHub Actions to Automate Microservices Canary Releases How to Use OpenTelemetry Tracing to Understand Your Microservices Many [...]

Read More...

The post NGINX Tutorial: How to Securely Manage Secrets in Containers appeared first on NGINX.

]]>
This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices:

  • How to Deploy and Configure Microservices
  • How to Securely Manage Secrets in Containers (this post)
  • How to Use GitHub Actions to Automate Microservices Canary Releases
  • How to Use OpenTelemetry Tracing to Understand Your Microservices

Many of your microservices need secrets to operate securely. Examples of secrets include the private key for an SSL/TLS certificate, an API key to authenticate to another service, or an SSH key for remote login. Proper secrets management requires strictly limiting the contexts where secrets are used to only the places they need to be and preventing secrets from being accessed except when needed. But this practice is often skipped in the rush of application development. The result? Improper secrets management is a common cause of information leakage and exploits.

Tutorial Overview

In this tutorial, we show how to safely distribute and use a JSON Web Token (JWT), which a client container uses to access a service. In the four challenges in this tutorial, you experiment with four different methods for managing secrets, to learn not only how to manage secrets correctly in your containers but also about methods that are inadequate:

  1. Hardcode secrets in your app (not recommended!)
  2. Pass secrets as environment variables (again, not recommended!)
  3. Use local secrets
  4. Use a secrets manager

Although this tutorial uses a JWT as a sample secret, the techniques apply to anything for containers that you need to keep secret, such as database credentials, SSL private keys, and other API keys.

The tutorial leverages two main software components:

  • API server – A container running NGINX Open Source and some basic NGINX JavaScript code that extracts a claim from the JWT and returns a value from one of the claims or, if no claim is present, an error message
  • API client – A container running very simple Python code that simply makes a GET request to the API server

Watch this video for a demo of the tutorial in action.

The easiest way to do this tutorial is to register for Microservices March and use the browser‑based lab that’s provided. This post provides instructions for running the tutorial in your own environment.

Prerequisites and Set Up

Prerequisites

To complete the tutorial in your own environment, you need:

Notes:

  • The tutorial makes use of a test server listening on port 80. If you’re already using port 80, use the -p flag to map a different host port to the test server when you start it with the docker run command, then include the :<port_number> suffix on localhost in the curl commands (see the sketch after these notes).
  • Throughout the tutorial the prompt on the Linux command line is omitted, to make it easier to cut and paste the commands into your terminal. The tilde (~) represents your home directory.
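
For example, here’s a minimal sketch of that adjustment; host port 8080 is just an illustration, and apiserver is the image you build later in this tutorial:

# Publish the test server on host port 8080 instead of 80
docker run -d -p 8080:80 apiserver

# ...and include the port in the curl commands
curl -X GET http://localhost:8080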

Set Up

In this section you clone the tutorial repo, start the authentication server, and send test requests with and without a token.

Clone the Tutorial Repo

  1. In your home directory, create the microservices-march directory and clone the GitHub repository into it. (You can also use a different directory name and adapt the instructions accordingly.) The repo includes config files and separate versions of the API client application that use different methods to obtain secrets.

    mkdir ~/microservices-march
    cd ~/microservices-march
    git clone https://github.com/microservices-march/auth.git
  2. Display the secret. It’s a signed JWT, commonly used to authenticate API clients to servers.

    cat ~/microservices-march/auth/apiclient/token1.jwt
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2Nz UyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"

While there are a few ways to use this token for authentication, in this tutorial the API client app passes it to the authentication server using the OAuth 2.0 Bearer Token Authorization framework. That involves prefixing the JWT with Authorization: Bearer as in this example:

"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"

Build and Start the Authentication Server

  1. Change to the authentication server directory:

    cd apiserver
  2. Build the Docker image for the authentication server (note the final period):

    docker build -t apiserver .
  3. Start the authentication server and confirm that it’s running (the output is spread over multiple lines for legibility):

    docker run -d -p 80:80 apiserver
    docker ps
    CONTAINER ID   IMAGE       COMMAND                  ...
    2b001f77c5cb   apiserver   "nginx -g 'daemon of..." ...  
    
    
        ... CREATED         STATUS          ...                                    
        ... 26 seconds ago  Up 26 seconds   ... 
    
    
        ... PORTS                                      ...
        ... 0.0.0.0:80->80/tcp, :::80->80/tcp, 443/tcp ...
    
    
        ... NAMES
        ... relaxed_proskuriakova

Test the Authentication Server

  1. Verify that the authentication server rejects a request that doesn’t include the JWT, returning 401 Authorization Required:

    curl -X GET http://localhost
    <html>
    <head><title>401 Authorization Required</title></head>
    <body>
    <center><h1>401 Authorization Required</h1></center>
    <hr><center>nginx/1.23.3</center>
    </body>
    </html>
  2. Provide the JWT using the Authorization header. The 200 OK return code indicates the API client app authenticated successfully.

    curl -i -X GET -H "Authorization: Bearer `cat $HOME/microservices-march/auth/apiclient/token1.jwt`" http://localhost
    HTTP/1.1 200 OK
    Server: nginx/1.23.2
    Date: Day, DD Mon YYYY hh:mm:ss TZ
    Content-Type: text/html
    Content-Length: 64
    Last-Modified: Day, DD Mon YYYY hh:mm:ss TZ
    Connection: keep-alive
    ETag: "63dc0fcd-40"
    X-MESSAGE: Success apiKey1
    Accept-Ranges: bytes
    
    
    { "response": "success", "authorized": true, "value": "999" }

Challenge 1: Hardcode Secrets in Your App (Not!)

Before you begin this challenge, let’s be clear: hardcoding secrets into your app is a terrible idea! You’ll see how anyone with access to the container image can easily find and extract hardcoded credentials.

In this challenge, you copy the code for the API client app into the build directory, build and run the app, and extract the secret.

Copy the API Client App

The app_versions subdirectory of the apiclient directory contains different versions of the simple API client app for the four challenges, each slightly more secure than the previous one (see Tutorial Overview for more information).

  1. Change to the API client directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one with a hardcoded secret – to the working directory:

    cp ./app_versions/very_bad_hard_code.py ./app.py
  3. Take a look at the app:

    cat app.py
    import urllib.request
    import urllib.error
    
    jwt = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"
    authstring = "Bearer " + jwt
    req = urllib.request.Request("http://host.docker.internal")
    req.add_header("Authorization", authstring)
    try:
        with urllib.request.urlopen(req) as response:
            the_page = response.read()
            message = response.getheader("X-MESSAGE")
            print("200  " + message)
    except urllib.error.URLError as e:
        print(str(e.code) + "  " + e.msg)

    The code simply makes a request to a local host and prints out either a success message or failure code.

    The request adds the Authorization header on this line:

    req.add_header("Authorization", authstring)

    Do you notice anything else? Perhaps a hardcoded JWT? We will get to that in a minute. First let’s build and run the app.

Build and Run the API Client App

We’re using the docker compose command along with a Docker Compose YAML file – this makes it a little easier to understand what’s going on.

(Notice that in Step 2 of the previous section you copied the Python file for the API client app that’s specific to Challenge 1 (very_bad_hard_code.py) to app.py. You’ll also do this in the other three challenges. Using app.py each time simplifies logistics because you don’t need to change the Dockerfile. It does mean that you need to include the --build flag on the docker compose command to force a rebuild of the container each time.)

The docker compose command builds the container, starts the application, makes a single API request, and then shuts down the container, while displaying the results of the API call on the console.

The 200 Success code on the second-to-last line of the output indicates that authentication succeeded. The apiKey1 value is further confirmation, because it shows the auth server was able to decode that value from a claim in the JWT:

docker compose -f docker-compose.hardcode.yml up --build
...
apiclient-apiclient-1  | 200  Success apiKey1
apiclient-apiclient-1 exited with code 0

So hardcoded credentials worked correctly for our API client app – not surprising. But is it secure? You might think so, given that the container runs this script just once before exiting and doesn’t include a shell.

In fact – no, not secure at all.

Retrieve the Secret from the Container Image

Hardcoding credentials leaves them open to inspection by anyone who can access the container image, because extracting the filesystem of a container is a trivial exercise.

  1. Create the extract directory and change to it:

    mkdir extract
    cd extract
  2. List basic information about the container images. The --format flag makes the output more readable (and the output is spread across two lines here for the same reason):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID   NAMES                   IMAGE       ...
    11b73106fdf8   apiclient-apiclient-1   apiclient   ...
    ad9bdc05b07c   exciting_clarke         apiserver   ...
    
    
        ... CREATED          STATUS
        ... 6 minutes ago    Exited (0) 4 minutes ago
        ... 43 minutes ago   Up 43 minutes
  3. Extract the most recent apiclient image as a .tar file. For <container_ID>, substitute the value from the CONTAINER ID field in the output above (11b73106fdf8 in this tutorial):

    docker export -o api.tar <container_ID>

    It takes a few seconds to create the api.tar archive, which includes the container’s entire file system. One approach to finding secrets is to extract the whole archive and parse it, but as it turns out there is a shortcut for finding what’s likely to be interesting – displaying the container’s history with the docker history command. (This shortcut is especially handy because it also works for containers that you find on Docker Hub or another container registry and thus might not have the Dockerfile, but only the container image).

  4. Display the history of the container:

    docker history apiclient
    IMAGE         CREATED        ...
    9396dde2aad0  8 minutes ago  ...                    
    <missing>     8 minutes ago  ...   
    <missing>     28 minutes ago ...  
                   
        ... CREATED BY                          SIZE ... 
        ... CMD ["python" "./app.py"]           622B ...   
        ... COPY ./app.py ./app.py # buildkit   0B   ... 
        ... WORKDIR /usr/app/src                0B   ...   
                 
        ... COMMENT
        ... buildkit.dockerfile.v0
        ... buildkit.dockerfile.v0
        ... buildkit.dockerfile.v0

    The lines of output are in reverse chronological order. They show that the working directory was set to /usr/app/src, then the file of Python code for the app was copied in and run. It doesn’t take a great detective to deduce that the core codebase of this container is in /usr/app/src/app.py, and as such that’s a likely location for credentials.

  5. Armed with that knowledge, extract just that file:

    tar --extract --file=api.tar usr/app/src/app.py
  6. Display the file’s contents and, just like that, we have gained access to the “secure” JWT:

    cat usr/app/src/app.py
    ...
    jwt="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"
    ...

Challenge 2: Pass Secrets as Environment Variables (Again, No!)

If you completed Unit 1 of Microservices March 2023 (Apply the Twelve‑Factor App to Microservices Architectures), you’re familiar with using environment variables to pass configuration data to containers. If you missed it, never fear – it’s available on demand after you register.

In this challenge, you pass secrets as environment variables. As with the method in Challenge 1, we don’t recommend this one! It’s not as bad as hardcoding secrets, but as you’ll see it has some weaknesses.

There are four ways to pass environment variables to a container:

  • Use the ENV statement in a Dockerfile to do variable substitution (set the variable for all images built). For example:

    ENV PORT $PORT
  • Use the ‑e flag on the docker run command. For example:

    docker run -e PASSWORD=123 mycontainer
  • Use the environment key in a Docker Compose YAML file.
  • Use a .env file containing the variables (see the sketch after this list).
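
As a quick sketch of that last option (the file contents and container name below are illustrative, mirroring the earlier -e example, and aren’t part of this tutorial’s repo):

# A .env file holds one KEY=value pair per line
echo "PASSWORD=123" > .env

# docker run loads it explicitly with the --env-file flag
docker run --env-file .env mycontainer

# docker compose reads ./.env automatically and substitutes ${PASSWORD} in the YAML
docker compose up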

In this challenge, you use an environment variable to set the JWT and examine the container to see if the JWT is exposed.

Pass an Environment Variable

  1. Change back to the API client directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one that uses environment variables – to the working directory, overwriting the app.py file from Challenge 1:

    cp ./app_versions/medium_environment_variables.py ./app.py
  3. Take a look at the app. In the relevant lines of output, the secret (JWT) is read as an environment variable in the local container:

    cat app.py
    ...
    jwt = ""
    if "JWT" in os.environ:
        jwt = "Bearer " + os.environ.get("JWT")
    ...
  4. As explained above, there’s a choice of ways to get the environment variable into the container. For consistency, we’re sticking with Docker Compose. Display the contents of the Docker Compose YAML file, which uses the environment key to set the JWT environment variable:

    cat docker-compose.env.yml
    ---
    version: "3.9"
    services:
      apiclient:
        build: .
        image: apiclient
        extra_hosts:
          - "host.docker.internal:host-gateway"
        environment:
          - JWT
  5. Run the app without setting the environment variable. The 401 Unauthorized code on the second-to-last line of the output confirms that authentication failed because the API client app didn’t pass the JWT:

    docker compose -f docker-compose.env.yml up --build
    ...
    apiclient-apiclient-1  | 401  Unauthorized
    apiclient-apiclient-1 exited with code 0
  6. For simplicity, set the environment variable locally. It’s fine to do that at this point in the tutorial, since it’s not the security issue of concern right now:

    export JWT=`cat token1.jwt`
  7. Run the container again. Now the test succeeds, with the same message as in Challenge 1:

    docker compose -f docker-compose.env.yml up --build
    ... 
    apiclient-apiclient-1  | 200  Success apiKey1
    apiclient-apiclient-1 exited with code 0

So at least now the base image doesn’t contain the secret and we can pass it at run time, which is safer. But there is still a problem.

Examine the Container

  1. Display information about the container images to get the container ID for the API client app (the output is spread across two lines for legibility):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID   NAMES                   IMAGE      ...
    6b20c75830df   apiclient-apiclient-1   apiclient  ...
    ad9bdc05b07c   exciting_clarke         apiserver  ...
    
    
        ... CREATED             STATUS
        ... 6 minutes ago       Exited (0) 6 minutes ago
        ... About an hour ago   Up About an hour
  2. Inspect the container for the API client app. For <container_ID>, substitute the value from the CONTAINER ID field in the output above (here 6b20c75830df).

    The docker inspect command lets you inspect all launched containers, whether they are currently running or not. And that’s the problem – even though the container is not running, the output exposes the JWT in the Env array, insecurely saved in the container config.

    docker inspect <container_ID>
    ...
    "Env": [
      "JWT=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA...",
      "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LANG=C.UTF-8",
      "GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
      "PYTHON_VERSION=3.11.2",
      "PYTHON_PIP_VERSION=22.3.1",
      "PYTHON_SETUPTOOLS_VERSION=65.5.1",
      "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/1a96dc5acd0303c4700e026...",
      "PYTHON_GET_PIP_SHA256=d1d09b0f9e745610657a528689ba3ea44a73bd19c60f4c954271b790c..."
    ]

Challenge 3: Use Local Secrets

By now you’ve learned that hardcoding secrets and using environment variables is not as safe as you (or your security team) need it to be.

To improve security, you can try using local Docker secrets to store sensitive information. Again, this isn’t the gold‑standard method, but it’s good to understand how it works. Even if you don’t use Docker in production, the important takeaway is how you can make it difficult to extract the secret from a container.

In Docker, secrets are exposed to a container via the file system mount /run/secrets/ where there’s a separate file containing the value of each secret.
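
For example, here’s a hypothetical sketch of what that looks like from inside a container that has a secret named jot mounted (the apiclient container in this tutorial exits after a single request, so you’d need a long‑running container to try this; <container_ID> is a placeholder):

# List the mounted secret files and read one of them
docker exec <container_ID> ls /run/secrets
docker exec <container_ID> cat /run/secrets/jot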

In this challenge you pass a locally stored secret to the container using Docker Compose, then verify that the secret isn’t visible in the container when this method is used.

Pass a Locally Stored Secret to the Container

  1. As you might expect by now, you start by changing to the apiclient directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one that uses secrets from within a container – to the working directory, overwriting the app.py file from Challenge 2:

    cp ./app_versions/better_secrets.py ./app.py
  3. Take a look at the Python code, which reads the JWT value from the /run/secrets/jot file. (And yes, we should probably be checking that the file only has one line. Maybe in Microservices March 2024?)

    cat app.py
    ...
    jotfile = "/run/secrets/jot"
    jwt = ""
    if os.path.isfile(jotfile):
        with open(jotfile) as jwtfile:
            for line in jwtfile:
                jwt = "Bearer " + line
    ...

    OK, so how are we going to create this secret? The answer is in the docker-compose.secrets.yml file.

  4. Take a look at the Docker Compose file, where the secret file is defined in the secrets section and then referenced by the apiclient service:

    cat docker-compose.secrets.yml
    ---
    version: "3.9"
    secrets:
      jot:
        file: token1.jwt
    services:
      apiclient:
        build: .
        extra_hosts:
          - "host.docker.internal:host-gateway"
        secrets:
          - jot

Verify the Secret Isn’t Visible in the Container

  1. Run the app. Because we’ve made the JWT accessible within the container, authentication succeeds with the now‑familiar message:

    docker compose -f docker-compose.secrets.yml up --build
    ...
    apiclient-apiclient-1  | 200 Success apiKey1
    apiclient-apiclient-1 exited with code 0
  2. Display information about the container images, noting the container ID for the API client app (for sample output, see Step 1 in Examine the Container from Challenge 2):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
  3. Inspect the container for the API client app. For <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step. Unlike the output in Step 2 of Examine the Container, there is no JWT= line at the start of the Env section:

    docker inspect <container_ID>
    "Env": [
      "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LANG=C.UTF-8",
      "GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
      "PYTHON_VERSION=3.11.2",
      "PYTHON_PIP_VERSION=22.3.1",
      "PYTHON_SETUPTOOLS_VERSION=65.5.1",
      "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/1a96dc5acd0303c4700e026...",
      "PYTHON_GET_PIP_SHA256=d1d09b0f9e745610657a528689ba3ea44a73bd19c60f4c954271b790c..."
    ]

    So far, so good, but our secret is in the container filesystem at /run/secrets/jot. Maybe we can extract it from there using the same method as in Retrieve the Secret from the Container Image from Challenge 1.

  4. Change to the extract directory (which you created during Challenge 1) and export the container into a tar archive:

    cd extract
    docker export -o api2.tar <container_ID>
  5. Look for the secrets file in the tar file:

    tar tvf api2.tar | grep jot
    -rwxr-xr-x  0 0      0           0 Mon DD hh:mm run/secrets/jot

    Uh oh, the file with the JWT in it is visible. Didn’t we say embedding secrets in the container was “secure”? Are things just as bad as in Challenge 1?

  6. Let’s see – extract the secrets file from the tar file and look at its contents:

    tar --extract --file=api2.tar run/secrets/jot
    cat run/secrets/jot

    Good news! There’s no output from the cat command, meaning the run/secrets/jot file in the container filesystem is empty – no secret to see in there! Even if there is a secrets artifact in our container, Docker is smart enough to not store any sensitive data in the container.

That said, even though this container configuration is secure, it has one shortcoming. It depends on the existence of a file called token1.jwt in the local filesystem when you run the container. If you rename the file, an attempt to restart the container fails. (You can try this yourself by renaming [not deleting!] token1.jwt and running the docker compose command from Step 1 again.)
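
If you want to try that experiment, here’s a sketch (the exact error text depends on your Docker Compose version):

# Temporarily rename the file that the secrets section of the Compose file points to
mv token1.jwt token1.jwt.bak

# Compose now fails to start the container because the secret's source file is missing
docker compose -f docker-compose.secrets.yml up --build

# Restore the file so the rest of the tutorial keeps working
mv token1.jwt.bak token1.jwt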

So we are halfway there: the container uses secrets in a way that protects them from easy compromise, but the secret is still unprotected on the host. You don’t want secrets stored unencrypted in a plain text file. It’s time to bring in a secrets management tool.

Challenge 4: Use a Secrets Manager

A secrets manager helps you manage, retrieve, and rotate secrets throughout their lifecycles. There are a lot of secrets managers to choose from, and they all fulfill a similar purpose:

  • Store secrets securely
  • Control access
  • Distribute them at run time
  • Enable secret rotation

Your options for secrets management include:

For simplicity, this challenge uses Docker Swarm, but the principles are the same for many secrets managers.

In this challenge, you create a secret in Docker, copy over the secret and API client code, deploy the container, see if you can extract the secret, and rotate the secret.

Configure a Docker Secret

  1. As is tradition by now, change to the apiclient directory:

    cd ~/microservices-march/auth/apiclient
  2. Initialize Docker Swarm:

    docker swarm init
    Swarm initialized: current node (t0o4eix09qpxf4ma1rrs9omrm) is now a manager.
    ...
  3. Create a secret and store it in token1.jwt:

    docker secret create jot ./token1.jwt
    qe26h73nhb35bak5fr5east27
  4. Display information about the secret. Notice that the secret value (the JWT) is not itself displayed:

    docker secret inspect jot
    [
      {
        "ID": "qe26h73nhb35bak5fr5east27",
        "Version": {
          "Index": 11
        },
        "CreatedAt": "YYYY-MM-DDThh:mm:ss.msZ",
        "UpdatedAt": "YYYY-MM-DDThh:mm:ss.msZ",
        "Spec": {
          "Name": "jot",
          "Labels": {}
        }
      }
    ]

Use a Docker Secret

Using the Docker secret in the API client application code is exactly the same as using a locally created secret – you can read it from the /run/secrets/ filesystem. All you need to do is change the secret qualifier in your Docker Compose YAML file.

  1. Take a look at the Docker Compose YAML file. Notice the value true in the external field, indicating we are using a Docker Swarm secret:

    cat docker-compose.secretmgr.yml
    ---
    version: "3.9"
    secrets:
      jot:
        external: true
    services:
      apiclient:
        build: .
        image: apiclient
        extra_hosts:
          - "host.docker.internal:host-gateway"
        secrets:
          - jot

    So, we can expect this Compose file to work with our existing API client application code. Well, almost. While Docker Swarm (or any other container orchestration platform) adds a lot of value, it also introduces some additional complexity.

    Since docker compose does not work with external secrets, we’re going to have to use some Docker Swarm commands, docker stack deploy in particular. Docker Stack hides the console output, so we have to write the output to a log and then inspect the log.

    To make things easier, we also use a continuous while True loop to keep the container running.

  2. Copy the app for this challenge – the one that uses a secrets manager – to the working directory, overwriting the app.py file from Challenge 3. Displaying the contents of app.py, we see that the code is nearly identical to the code for Challenge 3. The only difference is the addition of the while True loop:

    cp ./app_versions/best_secretmgr.py ./app.py
    cat ./app.py
    ...
    while True:
        time.sleep(5)
        try:
            with urllib.request.urlopen(req) as response:
                the_page = response.read()
                message = response.getheader("X-MESSAGE")
                print("200 " + message, file=sys.stderr)
        except urllib.error.URLError as e:
            print(str(e.code) + " " + e.msg, file=sys.stderr)

Deploy the Container and Check the Logs

  1. Build the container (in previous challenges Docker Compose took care of this):

    docker build -t apiclient .
  2. Deploy the container:

    docker stack deploy --compose-file docker-compose.secretmgr.yml secretstack
    Creating network secretstack_default
    Creating service secretstack_apiclient
  3. List the running containers, noting the container ID for secretstack_apiclient (as before, the output is spread across multiple lines for readability).

    docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID  ...  
    20d0c83a8b86  ... 
    ad9bdc05b07c  ... 
    
        ... NAMES                                             ...  
        ... secretstack_apiclient.1.0e9s4mag5tadvxs6op6lk8vmo ...  
        ... exciting_clarke                                   ...                                 
    
        ... IMAGE              CREATED          STATUS
        ... apiclient:latest   31 seconds ago   Up 30 seconds
        ... apiserver          2 hours ago      Up 2 hours
  4. Display the Docker log file; for <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step (here, 20d0c83a8b86). The log file shows a series of success messages, because we added the while True loop to the application code. Press Ctrl+c to exit the command.

    docker logs -f <container_ID>
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    ...
    ^c

Try to Access the Secret

We know that no sensitive environment variables are set (but you can always check with the docker inspect command as in Step 2 of Examine the Container in Challenge 2).
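
A quicker way to check just the environment variables is with the --format flag (substitute the ID of the apiclient container from the docker ps output for the <container_ID> placeholder):

# Print only the Env array from the container's configuration
docker inspect --format '{{json .Config.Env}}' <container_ID>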

From Challenge 3 we also know that the /run/secrets/jot file is empty, but you can check. For <container_ID>, substitute the ID of the apiclient container from the docker ps output:

cd extract
docker export -o api3.tar <container_ID>
tar --extract --file=api3.tar run/secrets/jot
cat run/secrets/jot

Success! You can’t get the secret from the container, nor read it directly from the Docker secret.

Rotate the Secret

Of course, with the right privileges we can create a service and configure it to read the secret into the log or set it as an environment variable. In addition, you might have noticed that communication between our API client and server is unencrypted (plain text).

So leakage of secrets is still possible with almost any secrets management system. One way to limit the possibility of resulting damage is to rotate (replace) secrets regularly.

With Docker Swarm, you can only delete and then re‑create secrets (Kubernetes allows dynamic update of secrets). You also can’t delete secrets attached to running services.
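
For comparison, here’s a hedged sketch of the Kubernetes approach, where a Secret can be updated in place (the Secret and file names mirror this tutorial and are illustrative; this isn’t part of the Docker Swarm flow):

# Regenerate the Secret manifest from the new token and apply it over the existing object
kubectl create secret generic jot --from-file=jot=token2.jwt --dry-run=client -o yaml | kubectl apply -f -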

  1. List the running services:

    docker service ls
    ID             NAME                    MODE         ... 
    sl4mvv48vgjz   secretstack_apiclient   replicated   ... 
    
    
        ... REPLICAS   IMAGE              PORTS
        ... 1/1        apiclient:latest
  2. Delete the secretstack_apiclient service.

    docker service rm secretstack_apiclient
  3. Delete the secret and re‑create it with a new token:

    docker secret rm jot
    docker secret create jot ./token2.jwt
  4. Re‑create the service:

    docker stack deploy --compose-file docker-compose.secretmgr.yml secretstack
  5. Look up the container ID for apiclient (for sample output, see Step 3 in Deploy the Container and Check the Logs):

    docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
  6. Display the Docker log file, which shows a series of success messages. For <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step. Press Ctrl+c to exit the command.

    docker logs -f <container_ID>
    200 Success apiKey2
    200 Success apiKey2
    200 Success apiKey2
    200 Success apiKey2
    ...
    ^c

See the change from apiKey1 to apiKey2? You’ve rotated the secret.

In this tutorial, the API server still accepts both JWTs, but in a production environment you can deprecate older JWTs by requiring certain values for claims in the JWT or checking the expiration dates of JWTs.

Note also that if you’re using a secrets system that allows your secret to be updated, your code needs to reread the secret frequently so as to pick up new secret values.

Clean Up

To clean up the objects you created in this tutorial:

  1. Delete the secretstack_apiclient service.

    docker service rm secretstack_apiclient
  2. Delete the secret.

    docker secret rm jot
  3. Leave the swarm (assuming you created a swarm just for this tutorial).

    docker swarm leave --force
  4. Kill the running apiserver container.

    docker ps -a | grep "apiserver" | awk '{print $1}' | xargs docker kill
  5. Delete unwanted containers by listing and then deleting them.

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    docker rm <container_ID>
  6. Delete any unwanted container images by listing and deleting them.

    docker image list   
    docker image rm <image_ID>

Next Steps

You can use this blog to implement the tutorial in your own environment or try it out in our browser‑based lab (register here). To learn more about managing secrets in microservices environments, follow along with the other activities in Unit 2: Microservices Secrets Management 101.

To learn more about production‑grade JWT authentication with NGINX Plus, check out our documentation and read Authenticating API Clients with JWT and NGINX Plus on our blog.


The post NGINX Tutorial: How to Securely Manage Secrets in Containers appeared first on NGINX.

]]>
Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management https://www.nginx.com/blog/why-managing-wafs-at-scale-requires-centralized-visibility-and-configuration-management/ Wed, 11 Jan 2023 17:00:34 +0000 https://www.nginx.com/?p=70939 In F5’s The State of Application Strategy in 2022 report, 90% of IT decision makers reported that their organizations manage between 200 and 1,000 apps, up 31% from five years ago. In another survey by Enterprise Strategy Group about how Modern App Security Trends Drive WAAP Adoption (May 2022, available courtesy of F5), the majority of IT decision makers said application [...]

Read More...

The post Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management appeared first on NGINX.

]]>
In F5’s The State of Application Strategy in 2022 report, 90% of IT decision makers reported that their organizations manage between 200 and 1,000 apps, up 31% from five years ago. In another survey by Enterprise Strategy Group about how Modern App Security Trends Drive WAAP Adoption (May 2022, available courtesy of F5), the majority of IT decision makers said application security has become more difficult over the past 2 years, with 72% using a WAF to protect their web applications. As organizations continue their digital transformation and web applications continue to proliferate, so too does the need for increased WAF protection. But as with most tools, the more WAFs you have, the harder they are to manage consistently and effectively.

The challenges of managing WAFs at scale include:

  • Lack of adequate visibility into application‑layer attack vectors and vulnerabilities, especially given the considerable number of them
  • Balancing WAF configurations between overly permissive and overly protective; it’s time‑consuming to fix the resulting false positives or negatives, especially manually and at scale
  • Ensuring consistent application policy management at high volumes, which is required to successfully identify suspicious code and injection attempts
  • Potential longtail costs – some extremely damaging – of failure to maintain even a single WAF in your fleet, including monetary loss, damage to reputation and brand, loss of loyal customers, and penalties for regulatory noncompliance
  • The need to support and update WAF configuration over time

WAF management at scale means both security and application teams are involved in setup and maintenance. To effectively manage WAFs – and secure applications properly – they need proper tooling that combines holistic visibility into attacks and WAF performance with the ability to edit and publish configurations on a global scale. In this blog, we explore the benefits of centralized security visualization and configuration management for your WAF fleet.

Actionable Security Insights at Scale with Centralized WAF Visibility

To easily manage WAFs at scale and gain the insight needed to make informed decisions, you need a management plane that offers visibility across your WAF fleet from a single pane of glass. You can view information about top violations and attacks, false positives and negatives, apps under attack, and bad actors. You can discover how to tune your security policies based on attack graphs – including geo‑locations – and drill down into WAF event logs.

How NGINX Can Help: F5 NGINX Management Suite Security Monitoring

We are happy to announce the general availability of the Security Monitoring module in F5 NGINX Management Suite, the unified traffic management and security solution for your NGINX fleet which we introduced in August 2022. Security Monitoring is a visualization tool for F5 NGINX App Protect WAF that’s easy to use out of the box. It not only reduces the need for third‑party tools, but also delivers unique, curated insights into the protection of your apps and APIs. Your security, development, and Platform Ops teams gain the ability to analyze threats, view protection insights, and identify areas for policy tuning – making it easier for them to detect problems and quickly remediate issues.

NMS Security Monitoring dashboard showing web attacks, bot attacks, threat intelligence, attack requests and top attack geolocations
Figure 1: The Security Monitoring main dashboard provides security teams overview visibility of all web attacks, bot attacks, threat intelligence, attack requests, and top attack geolocations, plus tabs for further detailed threat analysis and quick remediation of issues.

With the Security Monitoring module, you can:

  • Use dashboards to quickly see top violations, bot attacks, signatures, attacked instances, CVEs, and threat campaigns triggered per app or in aggregate. Filter across various security log parameters for more detailed analysis.
  • Make tuning decisions with insights into signature‑triggered events, including information about accuracy, level of risk, and what part of the request payload triggered signatures for enforcement.
  • Discover top attack actors (client IP addresses), geolocation vectors, and attack targets (URLs) per app or in aggregate.
  • See WAF events with details about requests and violations, searchable by request identifiers and other metrics logged by NGINX App Protect WAF.

Configuration Management for Your Entire NGINX App Protect WAF Fleet

While awareness and visibility are vital to identifying app attacks and vulnerabilities, they’re of little value if you can’t also act on the insights you gain by implementing WAF policies that detect and mitigate attacks automatically. The real value of a WAF is defined by the speed and ease with which you can create, deploy, and modify policies across your fleet of WAFs. Manual updates require vast amounts of time and accurate recordkeeping, leaving you more susceptible to attacks and vulnerabilities. And third‑party tools – while potentially effective – add unnecessary complexity.

A centralized management plane enables configuration management with the ability to update security policies and push them to one, several, or all your WAFs with a single press of a button. This method has two clear benefits:

  • You can quickly deploy and scale policy updates in response to current threats across your total WAF environment.
  • Your security team has the ability to control the protection of all the apps and APIs your developers are building.

How NGINX Can Help: F5 NGINX Management Suite Instance Manager – Configuration Management

You can now manage NGINX App Protect WAF at scale with the Instance Manager module in NGINX Management Suite. This enhancement gives you a centralized interface for creating, modifying, and publishing policies, attack signatures, and threat campaigns for NGINX App Protect WAF, resulting in more responsive protection against threats and handling of traffic surges.

NMS Instance Manager showing policies selection for a publication to a WAF instance group.
Figure 2: Instance Manager enables security teams to create, modify, and publish policies to one, several, or an entire fleet of NGINX App Protect WAF instances. This image shows policies being selected for publication to a WAF instance group.

With the Instance Manager module, you can:

  • Define configuration objects in a single location and push them out to the NGINX App Protect WAF instances of your choosing. The objects include security policies and deployments of attack signature updates and threat campaign packages.
  • Choose a graphical user interface (GUI) or REST API for configuration management. With the API, you can deploy configuration objects in your CI/CD pipeline (see the sketch after this list).
  • See which policies and versions are deployed on different instances.
  • Use a JSON visual editor to create, view, and edit NGINX App Protect WAF policies, with the option to deploy instantly.
  • Compile NGINX App Protect WAF policies before deployment, to decrease the time required for updates on WAF instances.
  • View WAF logs and metrics through NGINX Management Suite Security Monitoring.
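
As a rough, hypothetical sketch of what that CI/CD integration could look like from a pipeline job (the host, API path, token variable, and policy file below are placeholders rather than documented endpoints; consult the Instance Manager API reference for the actual resource paths):

# Push a JSON security policy to the management plane from a CI/CD job (placeholder URL and token)
curl -k -X POST "https://nms.example.com/api/platform/v1/security/policies" \
     -H "Authorization: Bearer $NMS_API_TOKEN" \
     -H "Content-Type: application/json" \
     -d @app-protect-policy.json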

Take Control of Your WAF Security with NGINX Management Suite

To learn more, visit NGINX Management Suite and Instance Manager on our website or check out our documentation:

Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

The post Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management appeared first on NGINX.

]]>
2022 NGINX State of App and API Delivery Report https://www.nginx.com/blog/2022-nginx-state-of-app-api-delivery-report/ Tue, 13 Dec 2022 16:44:19 +0000 https://www.nginx.com/?p=70833 December is a natural time for reflection and introspection. As the year draws to a close, many organizations – including NGINX – are thinking about lessons learned over the past 12 months. Like us, you might be asking questions like: What insights can our data provide? What did we learn? What will we do differently and where should [...]

Read More...

The post 2022 NGINX State of App and API Delivery Report appeared first on NGINX.

]]>
December is a natural time for reflection and introspection. As the year draws to a close, many organizations – including NGINX – are thinking about lessons learned over the past 12 months. Like us, you might be asking questions like:

  • What insights can our data provide?
  • What did we learn?
  • What will we do differently and where should we keep powering forward?

At NGINX, our retrospective includes analyzing the input and feedback that our community shares with us in our annual survey. In 2022, the survey both yielded surprises and confirmed trends we’d been picking up throughout the year. In this blog, we surface key insights and share the 2022 NGINX State of App and API Delivery Report.

2022 Insights

Insight #1: Security (still) isn’t everybody’s job…and that’s ok.

As is typical in most surveys, we asked respondents to select from a list of job roles so we can understand who is completing the survey and detect interesting trends. We then used “job role” as a filter for a question about the extent to which respondents are responsible for security in their organizations. A mere 15% of respondents overall say they have nothing to do with security, with the expected slight variations across job roles (for example, data scientists are less likely than Platform Ops engineers to deal with security).

However, it gets more interesting when we segmented by size of organization: 44% of employees at large enterprises said they have nothing to do with security. This may sound alarming, but it doesn’t indicate that security isn’t important at large enterprises. Instead, the data tells us that large enterprises are more likely to have dedicated security teams.

Insight #2: Hardware isn’t dead and hybrid cloud is here to stay.

Not that long ago it was predicted that organizations delivering modern apps would shift their investments from on‑premises hardware to cloud deployments. We observed the rise of microservices architectures and containerization technologies (including Kubernetes) reflected in our annual surveys, and like others, thought that someday this might result in the end of hardware.

But the growing usage of clouds and modern app technologies only tells part of the story. Our 2022 survey data reinforces what we’re hearing from customers: most organizations use hybrid architectures. In fact, respondents indicated the usage of both on‑premises hardware and public cloud increases in parallel with their workload size. The reason behind this trend is simple: cloud can be expensive and hard to secure. When an app doesn’t need the flexibility and agility offered by a cloud architecture, then it makes sense to choose a traditional architecture located on premises where it’s generally easier to secure.

Insight #3: System Administrators are not an endangered species.

The title System Administrator – or SysAdmin – is often associated with hardware, and there have been murmurs for years that the role was becoming obsolete. Given that hardware isn’t going away, it’s logical to think that SysAdmin jobs are also “safe”, but the survey data tells us an even more interesting and reassuring story. Of the 562 people who said they do SysAdmin work:

  • 57% also claimed a Development team role
  • 43% also claimed a Platform Operations role
  • 38% also claimed a Leadership role
  • 31% also claimed a SecOps role
  • 17% also claimed a Data Science role

So what does this mean? When we add a filter by company size, we discover respondents from small companies (under 50 employees) are more likely to hold a SysAdmin role and unsurprisingly, are responsible for multiple job functions. Our conclusion is that if you’re a SysAdmin looking to broaden your skill set for modern app development, a startup is a great place to leverage what you already know and potentially transition towards a new career. #StartupLife

Insight #4: Organizational constraints are more prevalent than tool/technology limitations.

Each year we seek to learn about the barriers facing people working on app and API delivery projects. This year uncovered a list of problems that can be divided into two categories: work challenges and tool challenges.

While 17% reported they aren’t facing any challenges (great news!), the remaining 83% are encountering problems. The most common challenge is a lack of technical skills, and within that group, many told us they’re also facing a steep learning curve for tools along with a lack of staffing or resources. These work challenges typically relate to an organization’s practices around skills development, processes, and tool sprawl.

2022 NGINX State of App and API Delivery Report

The four insights we’ve shared are just a few of the many you shared with us in your survey responses. Check out the full report:

Thank You!

A huge thank you to everyone who participated in our survey!

Did you miss the survey or have more feedback? We still want to hear from you! Just drop us a note in the blog comments, DM us on social media (we’re on LinkedIn, Twitter, and Facebook), or join NGINX Community Slack.

You can view the findings from past surveys here:

The post 2022 NGINX State of App and API Delivery Report appeared first on NGINX.

]]>