NGINX App Protect Archives – NGINX

Scale, Secure, and Monitor AI/ML Workloads in Kubernetes with Ingress Controllers
https://www.nginx.com/blog/scale-secure-and-monitor-ai-ml-workloads-in-kubernetes-with-ingress-controllers/ (February 22, 2024)

AI and machine learning (AI/ML) workloads are revolutionizing how businesses operate and innovate. Kubernetes, the de facto standard for container orchestration and management, is the platform of choice for powering scalable large language model (LLM) workloads and inference models across hybrid, multi-cloud environments.

In Kubernetes, Ingress controllers play a vital role in delivering and securing containerized applications. Deployed at the edge of a Kubernetes cluster, they serve as the central point of handling communications between users and applications.

In this blog, we explore how Ingress controllers and F5 NGINX Connectivity Stack for Kubernetes can help simplify and streamline model serving, experimentation, monitoring, and security for AI/ML workloads.

Deploying AI/ML Models in Production at Scale

When deploying AI/ML models at scale, out-of-the-box Kubernetes features and capabilities can help you:

  • Accelerate and simplify the AI/ML application release life cycle.
  • Enable AI/ML workload portability across different environments.
  • Improve compute resource utilization efficiency and economics.
  • Deliver scalability and achieve production readiness.
  • Optimize the environment to meet business SLAs.

At the same time, organizations might face challenges with serving, experimenting, monitoring, and securing AI/ML models in production at scale:

  • Increasing complexity and tool sprawl make it difficult for organizations to configure, operate, manage, automate, and troubleshoot Kubernetes environments on-premises, in the cloud, and at the edge.
  • Poor user experiences because of connection timeouts and errors due to dynamic events, such as pod failures and restarts, auto-scaling, and extremely high request rates.
  • Performance degradation, downtime, and slower and harder troubleshooting in complex Kubernetes environments due to aggregated reporting and lack of granular, real-time, and historical metrics.
  • Significant risk of exposure to cybersecurity threats in hybrid, multi-cloud Kubernetes environments because traditional security models are not designed to protect loosely coupled distributed applications.

Enterprise-class Ingress controllers like F5 NGINX Ingress Controller can help address these challenges. By leveraging one tool that combines Ingress controller, load balancer, and API gateway capabilities, you can achieve better uptime, protection, and visibility at scale – no matter where you run Kubernetes – while also reducing complexity and operational cost.

Figure: Diagram of the NGINX Ingress Controller ecosystem

NGINX Ingress Controller can also be tightly integrated with an industry-leading Layer 7 app protection technology from F5 that helps mitigate OWASP Top 10 cyberthreats for LLM Applications and defends AI/ML workloads from DoS attacks.

Benefits of Ingress Controllers for AI/ML Workloads

Ingress controllers can simplify and streamline deploying and running AI/ML workloads in production through the following capabilities:

  • Model serving – Deliver apps non-disruptively with Kubernetes-native load balancing, auto-scaling, rate limiting, and dynamic reconfiguration features.
  • Model experimentation – Implement blue-green and canary deployments, and A/B testing to roll out new versions and upgrades without downtime (see the sketch after this list).
  • Model monitoring – Collect, represent, and analyze model metrics to gain better insight into app health and performance.
  • Model security – Configure user identity, authentication, authorization, role-based access control, and encryption capabilities to protect apps from cybersecurity threats.
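As an illustration of the model experimentation capability above, here is a minimal sketch of a canary rollout using the NGINX Ingress Controller VirtualServer resource, which supports weighted traffic splits. The host, service names, and weights are hypothetical; the split that makes sense depends on your rollout policy:

    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: inference
    spec:
      host: inference.example.com
      upstreams:
        # Current production model server (hypothetical service names)
        - name: model-v1
          service: model-v1-svc
          port: 80
        # Canary version receiving a small share of traffic
        - name: model-v2
          service: model-v2-svc
          port: 80
      routes:
        - path: /predict
          splits:
            - weight: 90
              action:
                pass: model-v1
            - weight: 10
              action:
                pass: model-v2

Adjusting the weights shifts traffic between model versions without redeploying pods, which is what makes gradual rollouts and A/B tests practical.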

NGINX Connectivity Stack for Kubernetes includes NGINX Ingress Controller and F5 NGINX App Protect to provide fast, reliable, and secure communications between Kubernetes clusters running AI/ML applications and their users – on-premises and in the cloud. It helps simplify and streamline model serving, experimentation, monitoring, and security across any Kubernetes environment, enhancing the capabilities of cloud provider and pre-packaged Kubernetes offerings with a higher degree of protection, availability, and observability at scale.

Get Started with NGINX Connectivity Stack for Kubernetes

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and visibility of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

Multi-Cloud API Security with NGINX and F5 Distributed Cloud WAAP
https://www.nginx.com/blog/multi-cloud-api-security-with-nginx-and-f5-distributed-cloud-waap/ (August 1, 2023)

The question is no longer if you’re in the cloud, but how many clouds you’re in. Most enterprises today recognize there isn’t a “one cloud fits all” solution and have shifted toward a hybrid or multi-cloud architecture. According to data from F5’s State of Application Strategy in 2023 report, 85% of enterprises operate applications with two or more different architectures.

For development and API teams, this creates a lot of pressure. They're tasked with securely delivering APIs at scale in complex, distributed environments. Connections are no longer simply between clients and backend services – they are now between applications deployed in different clouds, regions, data centers, or edge locations. Meanwhile, every API must meet the organization's security and compliance requirements, regardless of where it is deployed and what tools are used to deliver and secure it.

Securing APIs in these highly distributed environments requires a unique set of capabilities and best practices. I previously wrote about the importance of a two-pronged approach to API security: “shifting left” to build security in from the start and “shielding right” with a set of global posture management practices. In this blog post, we’ll look at how to put that strategy into practice while securely delivering APIs across cloud, on-premises, and edge environments.

Hybrid and Multi-Cloud API Security Reference Architecture

Hybrid and multi-cloud architectures have distinct advantages – especially for agility, scalability, and resilience. But they add an extra layer of complexity. In fact, F5's State of Application Strategy in 2023 report showed that increased complexity is the most common challenge facing organizations today. The second most common challenge? Applying consistent security.

The problem today is that some security solutions, like certain WAFs, lack the context and protection APIs need. At the same time, dedicated API security solutions lack the ability to create and enforce policies to stop attacks. You need a solution that treats your architecture and technology as an interconnected stack that spans discovery, observability, management, and enforcement.

Practically, API security needs to be incorporated across three tiers to provide protection as API traffic traverses critical infrastructure points:

  • Global tier – Edge protection from bot and DoS attacks, as well as discovery and visibility
  • Site tier – Protection within an individual cloud, data center, or edge deployment
  • App tier – Fine-grained access control and threat protection deployed near the API runtime

The reference architecture below provides an overview of how F5 Distributed Cloud Services and F5 NGINX work together to provide comprehensive API protection in multi-cloud and hybrid architectures:

F5 Distributed Cloud provides a global tier of protection across edge, cloud, and on-premises deployments.

In this reference architecture, F5 Distributed Cloud provides a global tier of protection across edge, cloud, and on-premises deployments. NGINX Plus with NGINX App Protect WAF provides fine-grained protection at the site tier and/or app tier by integrating into software development lifecycles to enforce runtime security.

Let’s look at the security protections provided by each component of this architecture.

API Discovery and Monitoring with F5 Distributed Cloud

To start, API traffic from public clients traverses F5 Distributed Cloud Web Application and API Protection (WAAP), which is deployed at the edge. Critically, this provides global protection from DDoS attacks, bot abuse, and other exploits. It also provides important global visibility into API traffic entering different clouds, on-premises data centers, and edge deployments.

API traffic is increasing rapidly, and most API attacks unfold slowly over weeks or even months. Finding malicious traffic inside the flood of regular API requests and responses can be like finding a needle in a haystack. To solve this problem, F5 Distributed Cloud uses artificial intelligence (AI) and machine learning (ML) to generate insights into API traffic, including API discovery, endpoint mapping, and active learning and detection of anomalies that could represent emerging threats.

Acting as the global tier of app and API security, F5 Distributed Cloud WAAP provides the following benefits:

  • Automatic API discovery – Detects and maps APIs for a complete view into your ecosystem, including visibility into third-party and shadow APIs, authentication status, and more.
  • Sensitive data leak prevention – Detects, characterizes, and masks sensitive data like social security numbers, credit card numbers, and other personally identifiable information (PII) to prevent exposure.
  • Monitoring and anomaly detection – Continuously inspects and analyzes traffic to detect anomalies and vulnerabilities with AI and ML tools.
  • Enhanced API visibility – Observes how traffic flows across all API endpoints to understand connectivity across edge APIs, internal services, and third-party integrations.
  • Enforced security across environments – Uses a positive security model by enforcing schema validation, rate limiting, and blocking of undesirable or malicious traffic.

To get started with F5 Distributed Cloud WAAP, you can request a free enterprise trial of F5 Distributed Cloud Services, which includes API security, bot defense, edge compute, and multi-cloud networking.

Access Control and Runtime Protection with F5 NGINX

Once API traffic flows through the global tier, it arrives at the site tier and/or app tiers. While the global tier is typically managed by IT networking and security teams, individual APIs in the site tier and app tier are built and managed by software engineering teams.

When it comes to access control, an API gateway is a common choice because it enables developers to offload some of the most common security requirements to a shared infrastructure tier above the application. This reduces duplicated effort (e.g., having each developer or team build their own authentication and authorization service).
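For example, validating JWTs at the gateway takes only a few directives in NGINX Plus. This is a minimal sketch rather than the configuration from the post; the location path, realm name, key file path, and upstream name are assumptions:

    location /api/ {
        # Reject requests that do not carry a valid JWT signed by a trusted key
        auth_jwt "api";
        auth_jwt_key_file /etc/nginx/api_keys.jwk;

        # Only authenticated traffic reaches the backend services
        proxy_pass http://api_backend;
    }

Because the gateway performs the check, individual services never see unauthenticated requests and teams avoid re-implementing token validation per service.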

F5 NGINX Management Suite API Connectivity Manager enables platform engineering and DevOps teams to provide access to shared infrastructure, such as API gateways and developer portals, without requiring developers to file request tickets or work through other cumbersome processes.

With API Connectivity Manager, you can set security policies to configure NGINX Plus as an API gateway and configure and monitor NGINX App Protect WAF policies. Together, they provide critical API runtime protection, including the ability to:

  • Enforce access control – Manage fine-grained access (authentication and authorization) to API endpoints and create access control lists to allow or deny traffic based on IP address or JWT claims.
  • Encrypt and mask sensitive data – Secure communications between APIs with mTLS and end-to-end encryption, and detect and mask sensitive data like credit card numbers in API responses (see the mTLS sketch after this list).
  • Detect and block threats – Go beyond protection from the OWASP API Security Top 10 with advanced protection from more than 7,500 threat campaigns and attack signatures.
  • Monitor WAFs and API traffic at scale – Visualize API traffic across all your API gateways with NGINX App Protect WAF to detect false positives and potential threats.
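As a sketch of what the mTLS piece of the second capability can look like in NGINX Plus configuration (certificate paths and the upstream name are hypothetical):

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/gateway.crt;
        ssl_certificate_key /etc/nginx/certs/gateway.key;

        # Require and verify client certificates (the client side of mTLS)
        ssl_client_certificate /etc/nginx/certs/client_ca.crt;
        ssl_verify_client on;

        location /api/ {
            # Present a certificate to the upstream and verify the upstream's certificate
            proxy_ssl_certificate         /etc/nginx/certs/gateway-client.crt;
            proxy_ssl_certificate_key     /etc/nginx/certs/gateway-client.key;
            proxy_ssl_trusted_certificate /etc/nginx/certs/upstream_ca.crt;
            proxy_ssl_verify on;
            proxy_pass https://api_backend;
        }
    }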

You can start a free 30-day trial of the NGINX API Connectivity Stack to access NGINX Management Suite and its API Connectivity Manager, Instance Manager, and Security Monitoring modules, in addition to NGINX Plus as an API gateway and NGINX App Protect for WAF and DoS protection.

Conclusion

NGINX provides excellent runtime protection across cloud and on-premises data center environments. When combined with F5 Distributed Cloud, security and platform engineering teams gain continuous visibility into API endpoints regardless of where the associated apps are deployed. Together, F5 Distributed Cloud and NGINX provide complete flexibility to build and secure your architecture in any way you need.

Additional Resources

Tutorial: Deliver and Secure GraphQL APIs with F5 NGINX
https://www.nginx.com/blog/tutorial-deliver-secure-graphql-apis-with-f5-nginx/ (July 20, 2023)

Developers are increasingly embracing GraphQL as a preferred method to build APIs. GraphQL simplifies retrieving data from multiple sources, streamlining data access and aggregation. By querying multiple data sources with one POST request from a single endpoint, developers using GraphQL can precisely request the data they need from various sources. This approach helps solve limitations encountered in REST API architectures, where problems like under-querying (requests lacking all the necessary data) or over-querying (requests going to multiple endpoints and gathering excessive data) can occur.

GraphQL is the optimal choice for microservices architectures, as it grants clients the ability to retrieve only the essential data from each service or data source. This fosters heightened flexibility and agility, which are critical components to thrive in a modern business environment.

Security Is Critical for GraphQL APIs

With a greater degree of access and flexibility, GraphQL APIs also present a more extensive attack surface that’s enticing to bad actors. Despite being relatively new, GraphQL is still prone to many of the same vulnerabilities found in other API architectures. Fortunately, you can protect GraphQL APIs from these common threats by leveraging some of your existing infrastructure and tools.

Tutorial Overview

This tutorial helps build an understanding of how to deliver and secure GraphQL APIs. We illustrate how to deploy an Apollo GraphQL server on F5 NGINX Unit with F5 NGINX Plus as the API gateway. In addition, we show how to deploy F5 NGINX App Protect WAF at the API gateway for advanced security and use F5 NGINX Management Suite to configure your WAF and monitor for potential threats. This setup also allows you to use NGINX App Protect WAF to detect attacks like SQL injection.

Follow the steps in these sections to complete the tutorial:

  • Install and Configure NGINX Management Suite Security Monitoring
  • Deploy NGINX Unit and Install the Apollo GraphQL Server
  • Deploy NGINX Plus as an API Gateway and Install NGINX App Protect WAF
  • Test the Configuration

Figure 1: Architecture with NGINX Plus and NGINX App Protect WAF providing security and authentication for GraphQL APIs, NGINX Management Suite monitoring for attacks, and the Apollo GraphQL server running on NGINX Unit

Prerequisites

Before you begin this tutorial, you need access to the products covered in the sections below: NGINX Management Suite (with the Security Monitoring module), NGINX Unit, NGINX Plus, and NGINX App Protect WAF.

Install and Configure NGINX Management Suite Security Monitoring

NGINX Management Suite integrates several advanced features into a unified platform to simplify the process of configuring, monitoring, and troubleshooting NGINX instances. It also facilitates the management and governance of APIs, optimizes application load balancing, and enhances overall security for organizations.

Follow these steps to install and configure NGINX Management Suite:

  • Follow the instructions on the Virtual Machine or Bare Metal page of the installation guide. The guide covers the Security Monitoring module (used in this tutorial) and other modules you may install at your discretion.
  • Add the license for each installed module.
  • Install the NGINX Agent package from the NGINX Management Suite host and set up Security Monitoring for NGINX App Protect instances by following the instructions here.

Deploy NGINX Unit and Install the Apollo GraphQL Server

NGINX Unit is an efficient, streamlined, and lightweight application runtime, making it an ideal choice for organizations seeking high performance without compromising speed or agility. It's an open-source server that can handle TLS, route requests, and run application code. You can learn more about NGINX Unit on its Key Features page.

In this tutorial, we use Express as a Node.js web application framework on NGINX Unit that offers extensive capabilities for constructing an Apollo GraphQL server. At the time of this writing, the current version is Apollo Server 4.

Follow these steps to deploy NGINX Unit and install the Apollo GraphQL server:

  1. Install NGINX Unit on a supported operating system.
  2. Follow the GitHub repo to build an Apollo GraphQL server and create your Apollo GraphQL hello app (a sketch of a possible NGINX Unit configuration for the app follows this list).
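The demo.json used in the Test the Configuration section below is not reproduced in the original post. The following is a plausible minimal NGINX Unit configuration for a Node.js app using Unit's standard listeners/applications layout; the port, working directory, and entry-point name are assumptions, and the app itself must be linked against the unit-http module per NGINX Unit's Node.js documentation:

    {
        "listeners": {
            "*:4003": {
                "pass": "applications/graphql"
            }
        },
        "applications": {
            "graphql": {
                "type": "external",
                "working_directory": "/srv/apollo-server",
                "executable": "app.js"
            }
        }
    }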

Deploy NGINX Plus as an API Gateway and Install NGINX App Protect WAF

Select a suitable environment to deploy an NGINX Plus instance. In this tutorial, we use an AWS Ubuntu instance and set up an API gateway reverse proxy using NGINX Plus. We then deploy NGINX App Protect WAF in front of our API gateway for added security.

Follow these instructions to install NGINX Plus and NGINX App Protect WAF:

  1. Install NGINX Plus on a supported operating system.
  2. Install the NGINX JavaScript module (njs).
  3. Add the load_module directives to the main context of the nginx.conf file:

     load_module modules/ngx_http_js_module.so;
     load_module modules/ngx_stream_js_module.so;

  4. Install NGINX App Protect WAF on a supported operating system.
  5. Add the NGINX App Protect WAF module to the main context of the nginx.conf file:

     load_module modules/ngx_http_app_protect_module.so;

  6. Enable NGINX App Protect WAF in an http, server, or location context of the nginx.conf file:

     app_protect_enable on;

  7. Create a GraphQL policy configuration in the directory /etc/app_protect/conf. For further information on how to create an NGINX App Protect WAF policy, please refer to the relevant guidelines.

    Here is an example GraphQL policy configuration:

     {
         "name": "graphql_policy",
         "template": {
             "name": "POLICY_TEMPLATE_NGINX_BASE"
         },
         "applicationLanguage": "utf-8",
         "caseInsensitive": false,
         "enforcementMode": "blocking",
         "blocking-settings": {
             "violations": [
                 {
                     "name": "VIOL_GRAPHQL_FORMAT",
                     "alarm": true,
                     "block": false
                 },
                 {
                     "name": "VIOL_GRAPHQL_MALFORMED",
                     "alarm": true,
                     "block": false
                 },
                 {
                     "name": "VIOL_GRAPHQL_INTROSPECTION_QUERY",
                     "alarm": true,
                     "block": false
                 },
                 {
                     "name": "VIOL_GRAPHQL_ERROR_RESPONSE",
                     "alarm": true,
                     "block": false
                 }
             ]
         }
     }
  8. To enforce the GraphQL settings, set the app_protect_policy_file field to the GraphQL policy file in the nginx.conf file. Once you have updated the file, reload NGINX to enforce the GraphQL settings.

    Here is an example of the nginx.conf file that includes an NGINX App Protect policy:

     user nginx;
     worker_processes 4;
     load_module modules/ngx_http_js_module.so;
     load_module modules/ngx_stream_js_module.so;
     load_module modules/ngx_http_app_protect_module.so;
     error_log /var/log/nginx/error.log debug;

     events {
         worker_connections 65536;
     }

     http {
         include       /etc/nginx/mime.types;
         default_type  application/octet-stream;
         sendfile      on;
         keepalive_timeout 65;

         server {
             listen       <port>;
             server_name  <name>;
             app_protect_enable on;

             # This section enables the logging capability
             app_protect_security_log_enable on;

             # The remote logger is defined in terms of logging options (set in the
             # referenced file), log server IP, and log server port
             app_protect_security_log "/etc/app_protect/conf/log_sm.json" syslog:server=127.0.0.1:514;
             app_protect_security_log "/etc/app_protect/conf/log_default.json" /var/log/app_protect/security.log;

             proxy_http_version 1.1;

             location / {
                 client_max_body_size 0;
                 default_type text/html;
                 proxy_pass http://<ip addr>:<port>$request_uri; # <ip addr> of the NGINX Unit server
             }

             location /graphql {
                 client_max_body_size 0;
                 default_type text/html;
                 app_protect_policy_file "/etc/app_protect/conf/graphql_policy.json";
                 proxy_pass http://<ip addr>:<port>$request_uri; # <ip addr> of the NGINX Unit server
             }
         }
     }
  9. Reload NGINX Plus by running this command:

     $ nginx -s reload

Test the Configuration

Now you can test your configuration by following these steps:

  1. Start the Apollo GraphQL application by navigating to the NGINX Unit server and running this command:

     $ curl -X PUT --data-binary @demo.json --unix-socket /var/run/control.unit.sock http://localhost/config

  2. After a successful update, you should see that the app is available on the listener's IP address and port:

     {
         "success": "Reconfiguration done."
     }

  3. To access the Apollo GraphQL server, open your web browser and paste in the public IP address of your server, for example: http://3.X.X.X:4003/graphql (this example uses port 4003).
  4. To test the application, enter a valid GraphQL query and run it.

     Figure 2: Apollo GraphQL UI playground
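For the hello app built earlier, the query in step 4 can be as simple as the following sketch (the field name comes from the hello app's schema):

     query {
         hello
     }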
  5. Consider a situation where an individual inputs an SQL injection into a query. In this case, NGINX App Protect WAF safeguards the Apollo GraphQL server and produces a support ID that describes the nature of the attack:

     $ curl -X POST http://3.X.X.X:4003/graphql/ -H "Content-Type:application/json" -d '{"query": "query {hello OR 1=1;} "}'
     <html><head><title>Request Rejected</title></head><body>The requested URL was rejected. Please consult with your administrator.<br><br>Your support ID is: 7313092578494613509<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>

  6. To check the attack's details, refer to the NGINX Plus instance's logs, which are located in /var/log/app_protect/security.log:

     attack_type="Non-browser Client,Abuse of Functionality,SQL-Injection,Other Application Activity,HTTP Parser Attack",blocking_exception_reason="N/A",date_time="2023-07-05 21:22:38",dest_port="4003",ip_client="99.187.244.63",is_truncated="false",method="POST",policy_name="graphql_policy",protocol="HTTP",request_status="blocked",response_code="0",severity="Critical",sig_cves="N/A,N/A",sig_ids="200002147,200002476",sig_names="SQL-INJ expressions like ""or 1=1"" (3),SQL-INJ expressions like ""or 1=1"" (6) (Parameter)",sig_set_names="{SQL Injection Signatures},{SQL Injection Signatures}",src_port="64257",sub_violations="HTTP protocol compliance failed:Host header contains IP address",support_id="7313092578494613509",

  7. In the NGINX Management Suite Security Monitoring module, you can monitor the data of your instances and review potential threats, as well as adjust policies as needed for optimal protection.

     Figure 3: Overview of the NGINX Management Suite Security Monitoring module
     Figure 4: Comprehensive summary of security violations in the Security Monitoring module

Conclusion

In this tutorial, you learned how to set up an Apollo GraphQL Server on NGINX Unit, deploy NGINX Plus as an API gateway, and secure your GraphQL APIs with NGINX App Protect WAF in front of your API gateway.

You can also use NGINX Management Suite Security Monitoring to identify and block common advanced threats before they compromise your GraphQL APIs. This simple architecture defends GraphQL APIs from some of the most common API vulnerabilities, including missing authentication and authorization, injection attacks, unrestricted resource consumption, and more.

Test drive NGINX today with a 30-day free trial of the API Connectivity Stack, which includes NGINX Plus, NGINX App Protect, and NGINX Management Suite.

Additional Resources

The Mission-Critical Patient-Care Use Case That Became a Kubernetes Odyssey
https://www.nginx.com/blog/mission-critical-patient-care-use-case-became-kubernetes-odyssey/ (May 17, 2023)

Downtime can lead to serious consequences.

These words are truer for companies in the medical technology field than in most other industries – in their case, the "serious consequences" can literally include death. We recently had the chance to dissect the tech stack of a company that’s seeking to transform medical record keeping from pen-and-paper to secure digital data that is accessible anytime, and anywhere, in the world. These data range from patient information to care directives, biological markers, medical analytics, historical records, and everything else shared between healthcare teams.

From the outset, the company has sought to address a seemingly simple question: “How can we help care workers easily record data in real time?” As the company has grown, however, the need to scale and make data constantly available has made solving that challenge increasingly complex. Here we describe how the company’s tech journey has led them to adopt Kubernetes and NGINX Ingress Controller.

Tech Stack at a Glance

Here’s a look at where NGINX fits into their architecture:

Figure: Where NGINX fits into the company's architecture

The Problem with Paper

Capturing patient status and care information at regular intervals is a core duty for healthcare personnel. Traditionally, they have recorded patient information on paper, or more recently on laptop or tablet. There are a couple serious downsides:

  • Healthcare workers may interact with dozens of patients per day, so it's usually not practical to write detailed notes while providing care. As a result, workers end up writing their notes at the end of their shift. At that point, mental and physical fatigue make it tempting to record only generic comments.
  • The workers must also depend on their memory of details about patient behavior. Inaccuracies might mask patterns that facilitate diagnosis of larger health issues if documented correctly and consistently over time.
  • Paper records can’t easily be shared among departments within a single department, let alone with other entities like EMTs, emergency room staff, and insurance companies. The situation isn’t much better with laptops or tablets if they’re not connected to a central data store or the cloud.

To address these challenges, the company created a simplified data recording system that provides shortcuts for accessing patient information and recording common events like dispensing medication. This ease of access and use makes it possible to record patient interactions in real time as they happen.

All data is stored in cloud systems maintained by the company, and the app integrates with other electronic medical records systems to provide a comprehensive longitudinal view of resident behaviors. This helps caregivers provide better continuity of care, creates a secure historical record, and can be easily shared with other healthcare software systems.

Physicians and other specialists also use the platform when admitting or otherwise engaging with patients. There's a record of preferences and personal needs that travels with the patient to any facility. These can be used to help patients feel comfortable in a new setting, which improves outcomes like recovery time.

There are strict legal requirements about how long companies must store patient data. The company’s developers have built the software to offer extremely high availability with uptime SLAs that are much better than those of generic cloud applications. Keeping an ambulance waiting because a patient’s file won’t load isn’t an option.

The Voyage from the Garage to the Cloud to Kubernetes

Like many startups, the company initially saved money by running the first proof-of-concept application on a server in a co-founder’s home. Once it became clear the idea had legs, the company moved its infrastructure to the cloud rather than manage hardware in a data center. Being a Microsoft shop, they chose Azure. The initial architecture ran applications on traditional virtual machines (VMs) in Azure App Service, a managed application delivery service that runs Microsoft’s IIS web server. For data storage and retrieval, the company opted to use Microsoft’s SQL Server running in a VM as a managed application.

After several years running in the cloud, the company was growing quickly and experiencing scaling pains. It needed to scale infinitely, and horizontally rather than vertically because the latter is slow and expensive with VMs. This requirement led rather naturally to containerization and Kubernetes as a possible solution. A further point in favor of containerization was that the company’s developers need to ship updates to the application and infrastructure frequently, without risking outages. With patient notes being constantly added across multiple time zones, there is no natural downtime to push changes to production without the risk of customers immediately being affected by glitches.

A logical starting point for the company was Microsoft’s managed Kubernetes offering, Azure Kubernetes Services (AKS). The team researched Kubernetes best practices and realized they needed an Ingress controller running in front of their Kubernetes clusters to effectively manage traffic and applications running in nodes and pods on AKS.

Traffic Routing Must Be Flexible Yet Precise

The team tested AKS’s default Ingress controller, but found its traffic-routing features simply could not deliver updates to the company’s customers in the required manner. When it comes to patient care, there’s no room for ambiguity or conflicting information – it’s unacceptable for one care worker to see an orange flag and another a red flag for the same event, for example. Hence, all users in a given organization must use the same version of the app. This presents a big challenge when it comes to upgrades. There’s no natural time to transition a customer to a new version, so the company needed a way to use rules at the server and network level to route different customers to different app versions.

To achieve this, the company runs the same backend platform for all users in an organization and does not offer multi-tenancy with segmentation at the infrastructure layer within the organization. With Kubernetes, it is possible to split traffic using virtual network routes and cookies on browsers along with detailed traffic rules. However, the company’s technical team found that AKS’s default Ingress controller can split traffic only on a percentage basis, not with rules that operate at level of customer organization or individual user as required.

In its basic configuration, the NGINX Ingress Controller based on NGINX Open Source has the same limitation, so the company decided to pivot to the more advanced NGINX Ingress Controller based on NGINX Plus, an enterprise-grade product which supports granular traffic control. Recommendations for NGINX Ingress Controller from Microsoft and the Kubernetes community, citing its high level of flexibility and control, helped solidify the choice. The configuration better supports the company's need for pod management (as opposed to classic traffic management), ensuring that pods are running in the appropriate zones and traffic is routed to those services. Sometimes traffic is routed internally, but in most use cases it is routed back out through NGINX Ingress Controller for observability reasons.

Here Be Dragons: Monitoring, Observability and Application Performance

With NGINX Ingress Controller, the technical team has complete control over the developer and end user experience. Once users log in and establish a session, they can immediately be routed to a new version or reverted back to an older one. Patches can be pushed simultaneously and nearly instantaneously to all users in an organization. The software isn’t reliant on DNS propagation or updates on networking across the cloud platform.

NGINX Ingress Controller also meets the company's requirement for granular and continuous monitoring. Application performance is extremely important in healthcare. Latency or downtime can hamper successful clinical care, especially in life-or-death situations. After the move to Kubernetes, customers started reporting downtime that the company hadn't noticed. The company soon discovered the source of the problem: Azure App Service relies on sampled data. Sampling is fine for averages and broad trends, but it completely misses things like rejected requests and missing resources. Nor does it show the usage spikes that commonly occur every half hour as caregivers check in and log patient data. The company was getting only an incomplete picture of latency, error sources, bad requests, and unavailable services.

The problems didn’t stop there. By default Azure App Service preserves stored data for only a month – far short of the dozens of years mandated by laws in many countries.  Expanding the data store as required for longer preservation was prohibitively expensive. In addition, the Azure solution cannot see inside of the Kubernetes networking stack. NGINX Ingress Controller can monitor both infrastructure and application parameters as it handles Layer 4 and Layer 7 traffic.

For performance monitoring and observability, the company chose a Prometheus time-series database attached to a Grafana visualization engine and dashboard. Integration with Prometheus and Grafana is pre-baked into the NGINX data and control plane; the technical team had to make only a small configuration change to direct all traffic through the Prometheus and Grafana servers. The information was also routed into a Grafana Loki logging database to make it easier to analyze logs and give the software team more control over data over time. 
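As a sketch of what that configuration change can look like: NGINX Ingress Controller exposes a Prometheus endpoint when started with the -enable-prometheus-metrics flag (port 9113 by default), and Prometheus then scrapes it. The job name and service address below are hypothetical, not the company's actual values:

    # NGINX Ingress Controller container argument
    args:
      - -enable-prometheus-metrics

    # Prometheus scrape job pointing at the controller's metrics endpoint
    scrape_configs:
      - job_name: nginx-ingress
        static_configs:
          - targets: ["nginx-ingress.nginx-ingress.svc:9113"]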

This configuration also future-proofs against incidents requiring extremely frequent and high-volume data sampling for troubleshooting and fixing bugs. Addressing these types of incidents might be costly with the application monitoring systems provided by most large cloud companies, but the cost and overhead of Prometheus, Grafana, and Loki in this use case are minimal. All three are stable open source products which generally require little more than patching after initial tuning.

Stay the Course: A Focus on High Availability and Security

The company has always had a dual focus, on security to protect one of the most sensitive types of data there is, and on high availability to ensure the app is available whenever it’s needed. In the shift to Kubernetes, they made a few changes to augment both capacities.

For the highest availability, the technical team deploys an active-active, multi-zone, and multi-geo distributed infrastructure design for complete redundancy with no single point of failure. The team maintains N+2 active-active infrastructure with dual Kubernetes clusters in two different geographies. Within each geography, the software spans multiple data centers to reduce downtime risk, providing coverage in case of any failures at any layer in the infrastructure. Affinity and anti-affinity rules can instantly reroute users and traffic to up-and-running pods to prevent service interruptions. 

For security, the team deploys a web application firewall (WAF) to guard against bad requests and malicious actors. Protection against the OWASP Top 10 is table stakes provided by most WAFs. As they created the app, the team researched a number of WAFs including the native Azure WAF and ModSecurity. In the end, the team chose NGINX App Protect with its inline WAF and distributed denial-of-service (DDoS) protection.

A big advantage of NGINX App Protect is its colocation with NGINX Ingress Controller, which both eliminates a redundant hop and reduces latency. Other WAFs must be placed outside of the Kubernetes environment, contributing to latency and cost. Even minuscule delays (say 1 millisecond extra per request) add up quickly over time.

Surprise Side Quest: No Downtime for Developers

Having completed the transition to AKS for most of its application and networking infrastructure, the company has also realized significant improvements to its developer experience (DevEx). Developers now almost always spot problems before customers notice any issues themselves. Since the switch, the volume of support calls about errors is down about 80%!

The company’s security and application-performance teams have a detailed Grafana dashboard and unified alerting, eliminating the need to check multiple systems or implement triggers for warning texts and calls coming from different processes. The development and DevOps teams can now ship code and infrastructure updates daily or even multiple times per day and use extremely granular blue-green patterns. Formerly, they were shipping updates once or twice per week and having to time there for low-usage windows, a stressful proposition. Now, code is shipped when ready and the developers can monitor the impact directly by observing application behavior.

The results are positive all around – an increase in software development velocity, improvement in developer morale, and more lives saved.

Secure Your GraphQL and gRPC Bidirectional Streaming APIs with F5 NGINX App Protect WAF
https://www.nginx.com/blog/secure-graphql-grpc-bidirectional-streaming-apis-with-f5-nginx-app-protect-waf/ (April 27, 2023)

The digital economy continues to expand since the COVID-19 pandemic, with 90% of organizations growing their modern app architectures. In F5’s 2023 State of Application Strategy Report, more than 40% of the 1,000 global IT decision makers surveyed describe their app portfolios as "modern". This percentage has been growing steadily over the last few years and is projected to exceed 50% by 2025. However, the increase in modern apps and use of microservices is accompanied by a proliferation of APIs and API endpoints, exponentially increasing the potential for vulnerabilities and the surface area for attacks.

According to Continuous API Sprawl, a report from the F5 Office of the CTO, there were approximately 200 million APIs worldwide in 2021, a number expected to approach 2 billion by 2030.  Compounding the complexity resulting from this rapid API growth is the challenge of managing distributed applications across hybrid and multi-cloud environments. Respondents to the 2023 State of Application Strategy Report cited the complexity of managing multiple tools and APIs as their #1 challenge as they deploy apps in multi-cloud environments. Applying consistent security policies and optimizing app performance were tied in a close second place.

Figure 1: Top challenges of deploying apps in a multi-cloud environment – complexity and security issues continue, while visibility (number 1 in 2022) fell to seventh (source: 2023 State of Application Strategy Report).

Why API Security Is Critical to Your Bottom Line

Not only are APIs the building blocks of modern applications, they’re at the core of digital business – 58% of organizations surveyed in the F5 2023 report say they derive at least half of their revenue from digital services. APIs enable user-to-app and app-to-app communication, and the access they provide to private customer data and internal corporate information make them lucrative targets for attackers. APIs were the attack vector of choice in 2022.

Protecting APIs is paramount in an overall application security strategy. Attacks can have devastating consequences that go far beyond violating consumer privacy (bad as that is), to an increased level of severity that harms public safety and leads to loss of intellectual property. Here are some examples of each of these types of API attacks that occurred in 2022.

  • Consumer privacy – Twitter experienced a multi-year API attack. In December 2022, hackers stole the profile data and email addresses of 200 million Twitter users. Four months earlier, 3,207 mobile applications leaking valid Twitter API keys and secrets were discovered by CloudSEK researchers. And a month prior to that, hackers had exploited an API vulnerability to seize and sell data from 5.4 million users.
  • Public safety – A team of researchers found critical API security vulnerabilities across approximately 20 top automotive manufacturers, including Toyota, Mercedes, and BMW. With so many cars today acting like smart devices, hackers can go well beyond stealing VINs and personal information about car owners. They can track car locations and control the remote management system, allowing them to unlock and start the car or disable the car completely.
  • Intellectual property – A targeted employee at CircleCI, a CI/CD platform used by over 1 million developers worldwide to ship code, was the victim of a malware attack. This employee had privileges to generate production access tokens, and as a result hackers were able to steal customers’ API keys and secrets. The breach went unnoticed for nearly three weeks. Unable to tell whether a customer’s secrets were stolen and used for unauthorized access to third-party systems, CircleCI could only advise customers to rotate project and personal API tokens.

These API attacks serve as cautionary tales. When APIs have security vulnerabilities and are left unprotected, the longtail consequences can go far beyond monetary costs. The significance of API security cannot be overstated.

How F5 NGINX Helps You Secure Your APIs

The NGINX API Connectivity Stack solution helps you manage your API gateways and APIs across multi-cloud environments. By deploying NGINX Plus as your API gateway with NGINX App Protect WAF, you can help prevent and mitigate common API exploits that address the top three API challenges identified in the F5 2023 State of Application Strategy Report – managing API complexity across multi-cloud environments, ensuring security policies, and optimizing app performance – as well as the types of API attacks discussed in the previous section. NGINX Plus can be used in several ways, including as an API gateway where you can route API requests quickly, authenticate and authorize API clients to secure your APIs, and rate limit traffic to protect your API‑based services from overload.

NGINX Plus provides out-of-the-box protection that goes beyond the OWASP API Security Top 10 vulnerabilities: it also checks for malformed cookies, JSON, and XML, validates allowed file types and response status codes, and detects evasion techniques used to mask attacks. An NGINX Plus API gateway ensures protection for APIs built on HTTP or HTTP/2, including REST, GraphQL, and gRPC.

NGINX App Protect WAF provides lightweight, high-performance app and API security that goes beyond basic protection against the OWASP API Security Top 10 and OWASP (Application) Top 10, with protection from over 7,500 advanced signatures, bot signatures, and threat campaigns. It enables a shift-left strategy and easy automation of API security for integrating security-as-code into CI/CD pipelines. In testing against the AWS, Azure, and Cloudflare WAFs, NGINX App Protect WAF was found to deliver strong app and API security while maintaining better performance and lower latency. For more details, check out this GigaOm Report.  

NGINX App Protect WAF is embedded into the NGINX Plus API gateway, resulting in one less hop for API traffic. Fewer hops between layers reduces latency, complexity, and points of failure. This is in stark contrast with typical API-management solutions which do not integrate with a WAF (you must deploy the WAF separately and, once it is set up, API traffic must traverse the WAF and API gateway separately). NGINX’s tight integration means high performance without compromise on security.

GraphQL and gRPC Are on the Rise

App and API developers are constantly looking for new ways to increase flexibility, speed, and ease of use and deployment. According to the 2022 State of the API Report from Postman, REST is still the most popular API protocol used today (89%), but GraphQL (28%) and gRPC (11%) continue to grow in popularity. Ultimately, the choice of API protocol depends on the purpose of the application and the best solution for your business. Each protocol has its own benefits.

Why Use GraphQL APIs?

Key benefits of using GraphQL APIs include:

  • Adaptability – The client decides on the data request, type, and format.
  • Efficiency – There is no over-fetching, requests are run against a created schema, and the data returned is exactly (and only) what was requested. The formatting of data in request and response is identical, making GraphQL APIs fast, predictable, and easy to scale.
  • Flexibility – Supports over a dozen languages and platforms.

GitHub is one well-known user of GraphQL. They made the switch to GraphQL in 2016 for scalability and flexibility reasons.
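As a simple illustration of the efficiency benefit above, a client that needs only a user's name asks for exactly that field (the schema and field names here are hypothetical):

    query {
        user(id: "42") {
            name
        }
    }

and the response mirrors the request shape, with nothing extra:

    {
        "data": {
            "user": {
                "name": "Ada"
            }
        }
    }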

Why Use gRPC APIs?

Key benefits of using gRPC APIs include:

  • Performance – The lightweight, compact data format minimizes resource demands and enables fast message encoding and decoding.
  • Efficiency – The protobufs data format streamlines communication by serializing structured data.
  • Reliability – HTTP/2 and TLS/SSL are required, improving security by default.

Most of the power comes from the client side, while management and computations are offloaded to a remote server hosting the resource. gRPC is suited for use cases that routinely need a set amount of data or processing, such as traffic between microservices or data collection in which the requester (such as an IoT device) needs to conserve limited resources.

Netflix is an example of a well-known user of gRPC APIs.

Secure Your GraphQL APIs with NGINX App Protect WAF

NGINX App Protect WAF now supports GraphQL APIs in addition to REST and gRPC APIs. It secures GraphQL APIs by applying attack signatures, eliminating malicious exploits, and defending against attacks. GraphQL traffic is natively parsed, enabling NGINX App Protect WAF to detect violations based on GraphQL syntax and profile and apply attack signatures. Visibility into introspection queries enables NGINX App Protect WAF to block them, as well as block detected patterns in responses. This method helps to detect attacks and run signatures in the appropriate segments of a payload, and by doing so, helps to reduce false positives.
 
Learn how NGINX App Protect WAF can defend your GraphQL APIs from attacks in this demo.

Benefits of GraphQL API security with NGINX App Protect WAF:

  • Define security parameters – Set in accordance with your organizational policy the total length and value of parameters in the GraphQL template and content profile as part of the app security policy
  • Reduce false positives – Improve accuracy of attack prevention with granular controls for better detection of attacks in a GraphQL request
  • Alleviate malicious exploits – Define maximum batched queries in one HTTP request to reduce the risk of malicious exploitation and attacks
  • Eliminate DoS attacks – Configure maximum structure depth in content profiles to stop DoS attacks caused by recursive queries
  • Limit API risk exposure – Enforce constraints on introspection queries to prevent hackers from understanding the API structure, which can lead to a breach

Secure gRPC Bidirectional Streaming APIs with NGINX App Protect WAF

NGINX App Protect WAF now supports gRPC bidirectional streaming in addition to unary message types, enabling you to secure gRPC-based APIs that use message streams (client, server, or both). This provides complete security for gRPC APIs regardless of the communication type.

NGINX App Protect WAF secures gRPC APIs by enforcing your schema, setting size limits, blocking unknown files, and preventing resource-exhaustion types of DoS attacks. You can import your Interface Definition Language (IDL) file to NGINX App Protect WAF so that it can enforce the structure and schema of your gRPC messages and scan for attacks in the right places. This enables accurate detection of attempts to exploit your application through gRPC and avoids false positives that can occur when scanning for security in the wrong places without context.
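For reference, such an IDL file is a protobuf definition along these lines; the service and message names below are hypothetical, and the stream keywords are what distinguish bidirectional streaming from unary calls:

    syntax = "proto3";

    service ChatService {
      // Unary: one request message, one response message
      rpc GetStatus (StatusRequest) returns (StatusReply);

      // Bidirectional streaming: both client and server send message streams
      rpc Chat (stream ChatMessage) returns (stream ChatMessage);
    }

    message StatusRequest { string client_id = 1; }
    message StatusReply { string status = 1; }
    message ChatMessage { string text = 1; }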

Learn how NGINX App Protect WAF can defend your gRPC bidirectional APIs from attacks in this demo.

Benefits of gRPC API security with NGINX App Protect WAF:

  • Comprehensive gRPC protection – From unary to bidirectional streaming, complete security regardless of communication type
  • Reduce false positives – Improved accuracy from enforcement of gRPC message structure and schema, for better detection of attacks in a gRPC request
  • Block malicious exploits – Enforcement that each field in the gRPC message has the correct type and expected content, with the ability to block unknown fields
  • Eliminate DoS attacks – Message size limits to prevent resource-exhaustion types of DoS attacks

Both SecOps and API Dev Teams Can Manage and Automate API Security

In Postman’s 2022 State of the API Report, 20% of the 37,000 developers and API professionals surveyed stated that API incidents occur at least once a month at their organization, resulting in loss of data, loss of service, abuse, or inappropriate access. In contrast, 52% of respondents suffered an API attack less than once per year, underscoring the importance of incorporating security early as part of a shift-left strategy for API security. With APIs being published more frequently than applications, a shift left strategy is increasingly being applied to API security. When organizations adopt a shift-left culture and integrate security-as-code into CI/CD pipelines, they build security into each stage of API development, enable developers to remain agile, and accelerate deployment velocity.

Figure 2: NGINX App Protect WAF enables API security integration into CI/CD pipelines (for example, with Jenkins and Ansible) for automated protection that spans the entire API lifecycle.

A key area where protection must be API-specific is the validation of API schemata, including gRPC IDL files and GraphQL queries. Schemata are unique to each API and change with each API version, so any time you update an API you also need to update the corresponding WAF configuration. WAF configurations can be deployed in an automated fashion to keep up with API version changes. NGINX App Protect WAF can validate schemata, verifying that requests comply with what the API supports (methods, endpoints, parameters, and so on). It enables consistent app security with declarative policies that SecOps teams create and API dev teams can manage and deploy for more granular control and agility. If you are looking to automate your API security at scale across hybrid and multi-cloud environments, NGINX App Protect WAF can help.

Summary

Modern app portfolios continue to grow, and with the use of microservices comes an even greater proliferation of APIs. API security is complex and challenging, especially for organizations operating in hybrid or multi-cloud environments. Lack of API security can have devastating longtail effects beyond monetary costs. NGINX App Protect WAF provides comprehensive API security that includes protection for your REST, GraphQL, and gRPC APIs and helps your SecOps and API teams shift left and automate security throughout the entire API lifecycle and across distributed environments.

Test drive NGINX App Protect WAF today with a 30-day free trial.

Additional Resources

Blog: Secure Your API Gateway with NGINX App Protect WAF
eBook: Modern App and API Security
eBook: Mastering API Architecture from O’Reilly
Datasheet: NGINX App Protect WAF

2 Ways to View and Manage Your WAF Fleet at Scale with F5 NGINX
https://www.nginx.com/blog/2-ways-view-manage-waf-fleet-at-scale-f5-nginx/ (March 23, 2023)

Read More...

The post 2 Ways to View and Manage Your WAF Fleet at Scale with F5 NGINX appeared first on NGINX.

]]>
As organizations transform digitally and grow their application portfolios, security challenges also transform and multiply. In F5’s The State of Application Strategy in 2022, we saw how many organizations today have more apps to monitor than ever – often anywhere from 200 to 1000!

That high number creates more potential attack surfaces, making today’s apps particularly susceptible to bad actors. This vulnerability worsens when a web application needs to handle increased amounts of traffic. To minimize downtime (or even better, eliminate it!), it’s crucial to develop a strategy that puts security first.

WAF: Your First Line of Defense

In our webinar Easily View, Manage, and Scale Your App Security with F5 NGINX, we cover why a web application firewall (WAF) is the tool of choice for securing and protecting web applications. By monitoring and filtering traffic, a WAF is the first line of defense to protect applications against sophisticated Layer 7 attacks like distributed denial of service (DDoS).

The following WAF capabilities ensure a robust app security solution:

  • HTTP protocol and traffic validation
  • Data protection
  • Automated attack blocking
  • Easy policy integration into CI/CD pipelines
  • Centralized visualization
  • Configuration management at scale

But while the WAF is monitoring the apps, how does your team monitor the WAF? And what about when you deploy multiple WAFs in a fleet to handle numerous attacks? In the webinar, we answer these questions and also do a real‑time demo.

As a preview of the webinar, in this post we look into two key findings to help you get started managing your WAF fleet at scale:

  1. How to increase visibility
  2. How to enable security-as-code

Increase Visibility with NGINX Management Suite

The success of any WAF strategy depends on the level of visibility available to the teams implementing and managing the WAFs during creation, deployment, and modification. This is where a management plane comes in. Rather than making your teams look at each WAF through a separate, individual lens, it’s important to have one, centralized pane of glass for monitoring all your WAFs. With centralized visibility, you can make informed decisions about current attacks and easily gain insights to fine‑tune your security policies.

Additionally, it’s critical that your SecOps, Platform Ops, and DevOps teams share a clear and cohesive strategy. When these three teams work together on both the setup and maintenance of your WAFs, you achieve stronger app security at scale.

Here’s how each team benefits from using our management plane, F5 NGINX Management Suite, which easily integrates with NGINX App Protect WAF:

  • SecOps – Gains centralized visibility into app security and compliance, the ability to apply uniform policies across teams, and support for a shift‑left strategy.
  • Platform Ops – Can provide app security support to multiple users, centralized visibility across the entire WAF fleet, and scalable DevOps across the entire enterprise.
  • DevOps – Can automate security within the CI/CD pipeline, easily and quickly deploy app security, and provide better customer experience by building apps that are more reliable and less subject to attack.

Enable Security as Code with NGINX App Protect WAF

Instance Manager is the core module in NGINX Management Suite and enables centralized management of NGINX App Protect WAF security policies at scale. When your DevOps team can easily consume SecOps‑managed security policies, it can start moving towards a DevSecOps culture, immediately integrating security at all phases of the CI/CD pipeline, shifting security left.

Shifting left and centrally managing your WAF fleet means:

  • A declarative security policy (in JSON from SecOps) enables DevOps to use CI/CD tools natively.
  • Your security policy can be pushed to the application from a developer tool.
  • SecOps and DevOps can independently own their files.

With platform‑agnostic NGINX App Protect WAF, you can easily shift left and automate security into the CI/CD pipeline.

Watch the Full Webinar On Demand

To dive deeper into these topics and see the ten‑minute real‑time demo, watch our on‑demand webinar Easily View, Manage, and Scale Your App Security with F5 NGINX.

In addition to the findings discussed in this post, the webinar covers:

  • Additional considerations for managing a WAF fleet at scale
  • How visibility of top WAF violations, attacks, and CVEs helps you determine how to tune policies
  • Ways to reduce policy errors with centralized WAF visibility and management
  • Details on automation of security-as-code

Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

Shifting Security Left with F5 NGINX App Protect on Amazon EKS https://www.nginx.com/blog/shifting-security-left-f5-nginx-app-protect-amazon-eks/ Tue, 22 Nov 2022 16:00:17 +0000

According to The State of Application Strategy in 2022 report from F5, digital transformation in the enterprise continues to accelerate globally. Most enterprises deploy between 200 and 1,000 apps spanning across multiple cloud zones, with today’s apps moving from monolithic to modern distributed architectures.

Kubernetes first hit the tech scene for mainstream use in 2016, a mere six years ago. Yet today more than 75% of organizations world‑wide run containerized applications in production, up 30% from 2019. One critical issue in Kubernetes environments, including Amazon Elastic Kubernetes Service (EKS), is security. All too often security is “bolted on” at the end of the app development process, and sometimes not even until after a containerized application is already up and running.

The current wave of digital transformation, accelerated by the COVID‑19 pandemic, has forced many businesses to take a more holistic approach to security and consider a “shift left” strategy. Shifting security left means introducing security measures early into the software development lifecycle (SDLC) and using security tools and controls at every stage of the CI/CD pipeline for applications, containers, microservices, and APIs. It represents a move to a new paradigm called DevSecOps, where security is added to DevOps processes and integrates into the rapid release cycles typical of modern software app development and delivery.

DevSecOps represents a significant cultural shift. Security and DevOps teams work with a common purpose: to bring high‑quality products to market quickly and securely. Developers no longer feel stymied at every turn by security procedures that stop their workflow. Security teams no longer find themselves fixing the same problems repeatedly. This makes it possible for the organization to maintain a strong security posture, catching and preventing vulnerabilities, misconfigurations, and violations of compliance or policy as they occur.

Shifting security left and automating security as code protects your Amazon EKS environment from the outset. Learning how to become production‑ready at scale is a big part of building a Kubernetes foundation. Proper governance of Amazon EKS helps drive efficiency, transparency, and accountability across the business while also controlling cost. Strong governance and security guardrails create a framework for better visibility and control of your clusters. Without them, your organization is exposed to greater risk of security breaches and the accompanying longtail costs associated with damage to revenue and reputation.

To find out more about what to consider when moving to a security‑first strategy, take a look at this recent report from O’Reilly, Shifting Left for Application Security.

Automating Security for Amazon EKS with GitOps

Automation is an important enabler for DevSecOps, helping to maintain consistency even at a rapid pace of development and deployment. Like infrastructure as code, automating with a security-as-code approach entails using declarative policies to maintain the desired security state.

GitOps is an operational framework that facilitates automation to support and simplify application delivery and cluster management. The main idea of GitOps is having a Git repository that stores declarative policies of Kubernetes objects and the applications running on Kubernetes, defined as code. An automated process completes the GitOps paradigm to make the production environment match all stored state descriptions.

The repository acts as a source of truth in the form of security policies, which are then referenced by declarative configuration-as-code descriptions as part of the CI/CD pipeline process. As an example, NGINX maintains a GitHub repository with an Ansible role for F5 NGINX App Protect, which we hope is useful for teams that want to shift security left.
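
As a sketch of what consuming such a role could look like in practice, the playbook below installs NGINX App Protect on a group of hosts. The role name matches the NGINX-maintained Galaxy role, but the host group is a hypothetical placeholder, and the license and repository variables the role requires are omitted here.

# Illustrative playbook; the host group name is hypothetical
- name: Install NGINX App Protect via the NGINX-maintained Ansible role
  hosts: waf_hosts
  become: true
  roles:
    - role: nginxinc.nginx_app_protect

In a GitOps workflow, this playbook and the policy files it distributes live in the same repository, so a merge to the main branch is what triggers enforcement of an updated policy.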

With such a repo, all it takes to deploy a new application or update an existing one is to update the repo. The automated process manages everything else, including applying configurations and making sure that updates are successful. This ensures that everything happens in the version control system for developers and is synchronized to enforce security on business‑critical applications.

When running on Amazon EKS, GitOps makes security seamless and robust, while virtually eliminating human errors and keeping track of all versioning changes that are applied over time.

Figure 1: NGINX App Protect helps you shift security left with security as code at all phases of your software development lifecycle

NGINX App Protect and NGINX Ingress Controller Protect Your Apps and APIs in Amazon EKS

A robust design for Kubernetes security policy must accommodate the needs of both SecOps and DevOps and include provisions for adapting as the environment scales. Kubernetes clusters can be shared in many ways. For example, a cluster might have multiple applications running in it and sharing its resources, while in another case there are multiple instances of one application, each for a different end user or group. This implies that security boundaries are not always sharply defined and there is a need for flexible and fine‑grained security policies.

The overall security design must be flexible enough to accommodate exceptions, must integrate easily into the CI/CD pipeline, and must support multi‑tenancy. In the context of Kubernetes, a tenant is a logical grouping of Kubernetes objects and applications that are associated with a specific business unit, team, use case, or environment. Multi‑tenancy, then, means multiple tenants securely sharing the same cluster, with boundaries between tenants enforced based on technical security requirements that are tightly connected to business needs.

An easy way to implement low‑latency, high‑performance security on Amazon EKS is by embedding the NGINX App Protect WAF and DoS modules with NGINX Ingress Controller. No competing product provides this type of inline solution. Using one product with synchronized technology delivers several advantages, including reduced compute time, cost, and tool sprawl. Here are some additional benefits:

  • Securing the application perimeter – In a well‑architected Kubernetes deployment, NGINX Ingress Controller is the only point of entry for data‑plane traffic flowing to services running within Kubernetes, making it an ideal location for a WAF and DoS protection.
  • Consolidating the data plane – Embedding the WAF within NGINX Ingress Controller eliminates the need for a separate WAF device. This reduces complexity, cost, and the number of points of failure.
  • Consolidating the control plane – WAF and DoS configuration can be managed with the Kubernetes API, making it significantly easier to automate CI/CD processes. NGINX Ingress Controller configuration complies with Kubernetes role‑based access control (RBAC) practices, so you can securely delegate the WAF and DoS configurations to a dedicated DevSecOps team.

The configuration objects for NGINX App Protect WAF and DoS are consistent across both NGINX Ingress Controller and NGINX Plus. A master configuration can easily be translated and deployed to either platform, making it even easier to manage WAF configuration as code and deploy it to any application environment.

To build NGINX App Protect WAF and DoS into NGINX Ingress Controller, you must have subscriptions for both NGINX Plus and NGINX App Protect WAF or DoS. A few simple steps are all it takes to build the integrated NGINX Ingress Controller image (Docker container). After deploying the image (manually or with Helm charts, for example), you can manage security policies and configuration using the familiar Kubernetes API.
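
For instance, when deploying with Helm, enabling the embedded modules comes down to a few chart parameters. The following values.yaml excerpt is a sketch: the appprotect and appprotectdos switches follow the NGINX Ingress Controller chart's conventions, while the image repository and tag are placeholders for the custom image you built.

# values.yaml excerpt (sketch) for the NGINX Ingress Controller Helm chart
controller:
  nginxplus: true                  # the App Protect modules require NGINX Plus
  image:
    repository: registry.example.com/nginx-plus-ingress   # placeholder: your custom image
    tag: "with-nap"                                       # placeholder tag
  appprotect:
    enable: true                   # embed NGINX App Protect WAF
  appprotectdos:
    enable: true                   # embed NGINX App Protect DoS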

Figure 2: NGINX App Protect WAF and DoS on NGINX Ingress Controller routes app and API traffic to pods and microservices running in Amazon EKS

The NGINX Ingress Controller based on NGINX Plus provides granular control and management of authentication, RBAC‑based authorization, and external interactions with pods. When the client is using HTTPS, NGINX Ingress Controller can terminate TLS and decrypt traffic to apply Layer 7 routing and enforce security.

NGINX App Protect WAF and NGINX App Protect DoS can then be deployed as a lightweight software security solution that enforces security policies and protects against point attacks at Layer 7. NGINX App Protect WAF secures Kubernetes apps against OWASP Top 10 attacks, and provides advanced signatures and threat protection, bot defense, and Data Guard protection against exploitation of personally identifiable information (PII). NGINX App Protect DoS provides an additional line of defense at Layers 4 and 7 to mitigate sophisticated application‑layer DoS attacks with user behavior analysis and app health checks, protecting against attacks that include Slow POST, Slowloris, flood attacks, and Challenge Collapsar.

Such security measures protect both REST APIs and applications accessed using web browsers. API security is also enforced at the Ingress level following the north‑south traffic flow.

NGINX Ingress Controller with NGINX App Protect WAF and DoS can secure Amazon EKS traffic on a per‑request basis rather than per‑service: this is a more useful view of Layer 7 traffic and a far better way to enforce SLAs and north‑south WAF security.

Figure 3: NGINX Ingress Controller with NGINX App Protect WAF and DoS routes north-south traffic to nodes in Amazon EKS

The latest High‑Performance Web Application Firewall Testing report from GigaOm shows how NGINX App Protect WAF consistently delivers strong app and API security while maintaining high performance and low latency, outperforming the other three WAFs tested – AWS WAF, Azure WAF, and Cloudflare WAF – at all tested attack rates.

As an example, Figure 4 shows the results of a test where the WAF had to handle 500 requests per second (RPS), with 95% (475 RPS) of requests valid and 5% of requests (25 RPS) “bad” (simulating script injection). At the 99th percentile, latency for NGINX App Protect WAF was 10x less than AWS WAF, 60x less than Cloudflare WAF, and 120x less than Azure WAF.

Figure 4: Latency for 475 RPS with 5% bad traffic

Figure 5 shows the highest throughput each WAF achieved at 100% success (no 5xx or 429 errors) with less than 30 milliseconds latency for each request. NGINX App Protect WAF handled 19,000 RPS versus Cloudflare WAF at 14,000 RPS, AWS WAF at 6,000 RPS, and Azure WAF at only 2,000 RPS.

Figure 5: Maximum throughput at 100% success rate

How to Deploy NGINX App Protect and NGINX Ingress Controller on Amazon EKS

NGINX App Protect WAF and DoS leverage an app‑centric security approach with fully declarative configurations and security policies, making it easy to integrate security into your CI/CD pipeline for the application lifecycle on Amazon EKS.

NGINX Ingress Controller provides several custom resource definitions (CRDs) to manage every aspect of web application security and to support a shared responsibility and multi‑tenant model. CRD manifests can be applied following the namespace grouping used by the organization, to support ownership by more than one operations group.

When publishing an application on Amazon EKS, you can build in security by leveraging the automation pipeline already in use and layering the WAF security policy on top.

Additionally, with NGINX App Protect on NGINX Ingress Controller you can configure resource usage thresholds for both CPU and memory utilization, to keep NGINX App Protect from starving other processes. This is particularly important in multi‑tenant environments such as Kubernetes, which rely on resource sharing and can potentially suffer from the ‘noisy neighbor’ problem.
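
As an illustration, those thresholds are set through the controller's ConfigMap. This sketch assumes the App Protect threshold keys exposed by NGINX Ingress Controller; the numeric values are arbitrary examples, not tuning recommendations.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config        # the ConfigMap consumed by NGINX Ingress Controller
  namespace: nginx-ingress
data:
  # high/low pairs mark when protection backs off and when it resumes
  app-protect-cpu-thresholds: "high=80 low=60"
  app-protect-physical-memory-util-thresholds: "high=80 low=60"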

Configuring Logging with NGINX CRDs

The logs for NGINX App Protect and NGINX Ingress Controller are separate by design, to reflect how security teams usually operate independently of DevOps and application owners. You can send NGINX App Protect logs to any syslog destination that is reachable from the Kubernetes pods by setting the app-protect-security-log-destination annotation to the cluster IP address of the syslog pod. Additionally, you can use the APLogConf resource to specify which NGINX App Protect logs you care about, and by implication which logs are pushed to the syslog pod. NGINX Ingress Controller logs are forwarded to the local standard output, as for all Kubernetes containers.
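
For example, on an Ingress resource the logging annotations might look like the following sketch. The annotation names follow the appprotect.f5.com convention used by NGINX Ingress Controller; the resource name is a placeholder, and the destination address stands in for your syslog service's cluster IP.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress      # placeholder name
  annotations:
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    appprotect.f5.com/app-protect-security-log: "dvwa/logconf"
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.105.238.128:5144"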

This sample APLogConf resource specifies that all requests are logged (not only malicious ones) and sets the maximum message and request sizes that can be logged.

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
 name: logconf 
 namespace: dvwa 
spec: 
 content: 
   format: default 
   max_message_size: 64k 
   max_request_size: any 
 filter: 
   request_type: all

Defining a WAF Policy with NGINX CRDs

The APPolicy object is a CRD that defines a WAF security policy with signature sets and security rules based on a declarative approach. This approach applies to both NGINX App Protect WAF and DoS, while the following example focuses on WAF. Policy definitions are usually stored in the organization’s source of truth as part of the SecOps catalog.

apiVersion: appprotect.f5.com/v1beta1 
kind: APPolicy 
metadata: 
  name: sample-policy
spec: 
  policy: 
    name: sample-policy 
    template: 
      name: POLICY_TEMPLATE_NGINX_BASE 
    applicationLanguage: utf-8 
    enforcementMode: blocking 
    signature-sets: 
    - name: Command Execution Signatures 
      alarm: true 
      block: true
[...]

Once the security policy manifest has been applied on the Amazon EKS cluster, create an APLogConf object called log-violations to define the type and format of entries written to the log when a request violates a WAF policy:

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
  name: log-violations
spec: 
  content: 
    format: default 
    max_message_size: 64k 
    max_request_size: any 
  filter: 
    request_type: illegal

The waf-policy Policy object then references sample-policy for NGINX App Protect WAF to enforce on incoming traffic when the application is exposed by NGINX Ingress Controller. It references log-violations to define the format of log entries sent to the syslog server specified in the logDest field.

apiVersion: k8s.nginx.org/v1 
kind: Policy 
metadata: 
  name: waf-policy 
spec: 
  waf: 
    enable: true 
    apPolicy: "default/sample-policy" 
    securityLog: 
      enable: true 
      apLogConf: "default/log-violations" 
      logDest: "syslog:server=10.105.238.128:5144"

Deployment is complete when DevOps publishes a VirtualServer object that configures NGINX Ingress Controller to expose the application on Amazon EKS:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: eshop-vs
spec:
  host: eshop.lab.local
  policies:
  - name: default/waf-policy
  upstreams:
  - name: eshop-upstream
    service: eshop-service
    port: 80
  routes:
  - path: /
    action:
      pass: eshop-upstream

The VirtualServer object makes it easy to publish and secure containerized apps running on Amazon EKS while upholding the shared responsibility model, where SecOps provides a comprehensive catalog of security policies and DevOps relies on it to shift security left from day one. This enables organizations to transition to a DevSecOps strategy.

Conclusion

For companies with legacy apps and security solutions built up over years, shifting security left on Amazon EKS is likely a gradual process. But reframing security as code that is managed and maintained by the security team and consumed by DevOps helps deliver services faster and make them production ready.

To secure north‑south traffic in Amazon EKS, you can leverage NGINX Ingress Controller embedded with NGINX App Protect WAF to protect against point attacks at Layer 7 and NGINX App Protect DoS for DoS mitigation at Layers 4 and 7.

To try NGINX Ingress Controller with NGINX App Protect WAF, start a free 30-day trial on the AWS Marketplace or contact us to discuss your use cases.

To discover how you can prevent security breaches and protect your Kubernetes apps at scale using NGINX Ingress Controller and NGINX App Protect WAF and DoS on Amazon EKS, please download our eBook, Add Security to Your Amazon EKS with F5 NGINX.

To learn more about how NGINX App Protect WAF outperforms the native WAFs for AWS, Azure, and Cloudflare, download the High-Performance Web Application Firewall Testing report from GigaOm and register for the webinar on December 6 where GigaOm analyst Jake Dolezal reviews the results.

Automate Security with F5 NGINX App Protect and F5 NGINX Plus to Reduce the Cost of Breaches https://www.nginx.com/blog/automate-security-f5-nginx-app-protect-f5-nginx-plus-to-reduce-cost-of-breaches/ Thu, 07 Jul 2022 17:12:58 +0000

It might surprise you to learn that the money you spend on improving your security posture with automation and artificial intelligence (AI) ends up saving you much greater amounts of money. In its Cost of a Data Breach 2021 Report, the IBM Security team reveals that a security breach costs organizations without security automation and AI a whopping 80% more on average than organizations with fully deployed automation and AI – $6.71 million versus $2.90 million, a difference of $3.81 million. By prioritizing security automation and AI, organizations can identify and contain a breach faster, saving both money and time.

Data breaches cost organizations without security automation and AI millions more (Source: IBM Cost of a Data Breach 2021 Report)

As you integrate security into your CI/CD pipeline, however, it is important not to overload your tools. The fewer times you inspect traffic, the less latency you introduce. The business corollary is that technical complexity is the enemy of agility.

At F5 NGINX, we offer a security platform that is unique in its seamless integration and ability to help teams “shift security left” during the software development lifecycle. When organizations integrate “security as code” into their CI/CD pipeline, they enable security automation and AI, leading to the huge savings described by IBM. With security built into each stage of application and API development, configuration and security policy files are consumed as code and your SecOps team can create and maintain a declarative security policy for DevOps to use when building apps. These same policies can be repeatedly applied to new apps, thereby automating your security into the CI/CD pipeline.

Let’s peel back the onion and look at three phases of traffic processing where F5 NGINX App Protect and F5 NGINX Plus help you automate security protections:

  • Phase 1 – F5 NGINX App Protect DoS detects and defends against denial-of-service (DoS) attacks
  • Phase 2 – F5 NGINX App Protect WAF protects against OWASP Top 10 attacks and malicious bots
  • Phase 3 – F5 NGINX Plus authenticates app and API clients and enforces authorization requirements using native features and integrations with third‑party single sign‑on (SSO) solutions

A three-phase security solution for efficient application traffic management to block illegitimate traffic and automate security, saving time and money

This three-phase security solution saves you money in two ways, especially in a public cloud environment:

  1. Only legitimate traffic reaches your apps because NGINX App Protect DoS filters out traffic from DoS attacks and then NGINX App Protect WAF eliminates multiple attack vectors like the OWASP Top 10.
  2. NGINX Plus’s highly efficient, event-driven design means it can process huge numbers of requests per second with low CPU consumption. Processing only legitimate application traffic on a highly efficient platform requires significantly fewer compute resources, saving you time and money and ensuring protection against multiple attack vectors without overwhelming your tools.

If you are currently using F5 BIG-IP Advanced WAF to secure apps running in your data center, it is straightforward to add NGINX Plus as an Ingress controller for Kubernetes, with NGINX App Protect DoS and WAF, as a comprehensive solution for scaling, securing, and orchestrating your modern applications in the cloud. Using F5’s security-as-code approach, you can define infrastructure and security policy controls as code via a declarative API or declarative JSON‑formatted definitions in a file, and you can convert BIG‑IP XML files into JSON files as well. Your policies – the standard corporate security controls owned and maintained by SecOps – can reside in a code repository from which they are fetched and integrated into your development pipeline just like any other piece of code. This approach helps DevOps and SecOps bridge operational gaps and bring apps to market faster with lower cost and better security.

F5 incorporates WAF policy construction and baselining into the development process, an important part of “shifting security left” in the application development pipeline and automating application deployment.

Visibility tools in BIG‑IP and NGINX complement one another and enable SecOps to bake automation processes early into your DevOps lifecycle. BIG‑IP allows teams to convert XML files into JSON files used by NGINX to maintain your consistent security policies. NGINX allows teams to fine‑tune apps already in place, while bringing modern app security automation to help offset future breaches and the cost of those potential attacks.

Phase 1: NGINX App Protect DoS Defends Against DoS Attacks

The first stop on our secured traffic management journey is weeding out denial-of-service (DoS) attacks. Shutting this abuse down at the outset is a necessary first line of defense.

We have previously noted that attackers are increasingly switching from traditional volumetric attacks to using HTTP and HTTPS requests or API calls to attack at Layer 7. Why? They are following the path of least resistance. Infrastructure engineers have spent years building effective defenses against Layer 3 and Layer 4 attacks, making them easier to block and less likely to be successful. Layer 7 attacks can bypass these traditional defenses.

Not all DoS attacks are volumetric. Attacks designed to consume server resources through “low and slow” methods like Slow POST or Slowloris can be easily hidden in legitimate traffic. And while open source HTTP/2 remote procedure call frameworks like gRPC provide the speed and flexibility needed for modern apps, their characteristically open nature potentially makes them more vulnerable to DoS attacks than proprietary protocols.

Unlike traditional DoS protections, NGINX App Protect DoS detects today’s attacks by leveraging automated user and site behavior analysis, proactive health checks, and no‑touch configuration. It is a low‑latency solution for stopping common attacks including HTTP GET flood, HTTP POST flood, Slowloris, Slow read, and Slow POST.
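
To make this concrete, here is a minimal configuration sketch for enabling NGINX App Protect DoS on an NGINX Plus proxy. The directives follow the module's conventions, but the protected-object name, monitored URI, and upstream address are placeholders rather than values from the original post.

# Sketch: NGINX App Protect DoS on NGINX Plus (placeholder names throughout)
load_module modules/ngx_http_app_protect_dos_module.so;

events {}

http {
    upstream app_backend {
        server 10.0.0.10:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            app_protect_dos_enable  on;                       # turn on Layer 7 DoS protection
            app_protect_dos_name    "example-app";            # name of the protected object
            app_protect_dos_monitor uri=http://example.com/;  # target for proactive health checks
            proxy_pass http://app_backend;
        }
    }
}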

To combat these attacks, SecOps and DevOps teams need to integrate “security as code” automation into their CI/CD workflow – part of the shift‑left mindset. NGINX App Protect DoS enables this. It secures modern, distributed apps and APIs with advanced protection from DoS threats and helps align the sometimes‑clashing priorities of SecOps and DevOps by facilitating this model with a lightweight software package, continuous threat mitigation feedback loop, and integration with familiar DevOps tools via RESTful APIs.

NGINX App Protect DoS integrates the machine learning (ML) technology that the IBM Security report highlights as key to significant cost savings. It analyzes client behavior and application health to model normal traffic patterns, uses unique algorithms to create a dynamic statistical model that provides the most accurate protections, and deploys dynamic signatures to automatically mitigate attacks. It also continuously measures mitigation effectiveness and adapts to changing behavior or health conditions. These features enable NGINX App Protect DoS to block DoS attacks where each attacking request looks completely legal, and a single attacker might even generate less traffic than the average legitimate user.

Security Solution Phase 1 and Phase 2: Application security with efficient traffic processing using NGINX App Protect DoS and WAF

Phase 2: NGINX App Protect WAF Protects Against the OWASP Top Ten

While DoS protection clearly stops malicious traffic from entering your infrastructure, attacks can still make their way through. That is why you need a web application firewall (WAF) for the next phase of successful defense, where it focuses on traffic from bad actors that is masked as legitimate.

Lightweight and with high performance, NGINX App Protect WAF provides comprehensive security protection that inspects responses, enforces HTTP protocol compliance, detects evasion techniques, masks credit card numbers and other sensitive personal information with Data Guard, and checks for disallowed metacharacters and file types, malformed JSON and XML, and sensitive parameters. It also protects against the updated OWASP Top 10.

It is no surprise that cyberattacks against OWASP Top 10 vulnerabilities such as A03:2021 Injection remain popular. In July 2021, the open source e‑commerce site WooCommerce announced that many of its plug‑ins were vulnerable to SQL injection, and several attacks were occurring at that time. With businesses and customers operating primarily online, it makes sense that attackers focus on web‑based apps, which are often complex, composed of microservices, and span distributed environments with many APIs communicating with one another, increasing the number of endpoints vulnerable to exploitation.

Modern attacks also shift and adapt quickly. This is where AI comes in, and why IBM noted its importance. As in NGINX App Protect DoS, the rich ML system in NGINX App Protect WAF makes it easy for Platform Ops, DevOps, and SecOps teams to share attack trends and data. One new capability – the Adaptive Violation Rating feature – will further leverage ML by detecting when an application’s behavior changes. With this ML capability, NGINX App Protect WAF constantly assesses predictive behavior for each application. Based on this learning, it can enable client requests that otherwise would be blocked, lowering an app’s violation rating score and significantly reducing false positives for a better user experience with lower management costs.

NGINX App Protect WAF also provides bot protection. Today, nearly 50% of Internet traffic comes from bots. By eliminating known malicious traffic up front, NGINX App Protect WAF can quickly block bot traffic using its Bot Signature database.

Introducing WAF as a security layer early in your CI/CD pipeline helps mitigate security risk. Because NGINX App Protect WAF is CI/CD‑friendly, you can bake in and automate security as code early in your application development process. With early security awareness and the right collaboration among teams, you also eliminate bottlenecks like delivery risk. Multi‑stage DoS and WAF protection creates many points of inspection, giving Security teams visibility into app usage and App teams knowledge of how they are being maintained.

Phase 3: NGINX Plus Authenticates and Authorizes App and API Clients

Even after NGINX App Protect DoS and NGINX App Protect WAF weed out malicious traffic, you still need to verify that clients are legitimate and are authorized to access the resources they are requesting. That is where NGINX Plus enters the picture, handling authentication and authorization and then routing requests to the appropriate servers. By deploying NGINX Plus as an API gateway, you can provide one consistent entry point for multiple APIs and, again, simplify your stack.

Authentication and authorization can also be automated with single sign‑on (SSO) to enable DevOps teams to maintain their desired agility. NGINX Plus supports OpenID Connect (OIDC), an identity layer atop the OAuth 2.0 protocol. In the NGINX docs, we explain how to use OIDC to enable SSO for applications proxied by NGINX Plus.

Security Solution Phase 3: Authentication and authorization using NGINX Plus and OAuth 2.0/OIDC IdP‑compliant examples for SSO

Given their characteristic open nature, APIs are vulnerable targets. In their annual report, Gartner Research predicted that APIs would become the most common attack vector during 2022, causing countless data breaches for enterprise web apps. That prediction rings true as we make our way through 2022 and observe the API attack surface continuing to grow across organizations.

The API Authentication Incidents: 2020 Application Protection Report from F5 Labs highlights three common reasons for API incidents:

  1. No Authentication at API Endpoints
  2. Broken API Authentication
  3. Broken API Authorization

No Authentication at API Endpoints

When you implement authentication of API traffic, clients that successfully prove their identity receive a token from a trusted identity provider. The client then presents the access token with each HTTP request. Before the request is passed to the application, NGINX Plus ensures the authentication and authorization of the client by validating the token and extracting identity and other data (group membership, for example) encoded in the token. Assuming the token is validated and the client is authorized to access the resource, the request is passed to the application server. There are several methods to accomplish this validation, but OpenID Connect (built on the OAuth 2.0 protocol) is a popular way to enable third‑party authentication of API requests.

However, many APIs on the market are unprotected at the authentication layer. In 2021, interactive fitness platform Peloton was revealed to have a leaky API. A security researcher discovered it was possible to make unauthenticated requests to Peloton’s API, easily retrieving user data without authentication. While Peloton fixed the code before any major breach, this mishap highlights how a monolithic approach to security does not consider the inherent multiplicity of API structures and the consequent need for agility in defending them.

Broken API Authentication

APIs are designed for connecting computer to computer, so many DevOps teams assume humans do not communicate with the API endpoint. One example in the F5 Labs report involves a researcher who chained together several API requests to “earn” hundreds of thousands of dollars in credits on a mobile app. The app continuously generated tokens designed to prevent abuse, but did not set expiration dates on them, allowing them to be used over and over again.

If you don’t properly validate API authentication tokens, attackers can exploit API vulnerabilities. If this type of vulnerability is discovered by a bad actor, rather than a researcher, it can compromise an entire business.

Broken API Authorization

Unsuccessful API authentication naturally leads to broken API authorization. The F5 Labs report also describes an incident where a bug in the operating system allowed malicious HTTP requests to the API, giving bad actors easy access to authorization tokens. Once the attackers acquired an authorization token, they had administrator permissions.

NGINX offers several approaches for protecting APIs and authenticating API clients. For more information, see the documentation for IP address‑based access control lists (ACLs), digital certificate authentication, and HTTP Basic authentication. Additionally, NGINX Plus natively supports the validation of JSON Web Tokens (JWTs) for API authentication. Learn more in Authenticating API Clients with JWT and NGINX Plus on our blog.
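
As a brief illustration of the native JWT support, the following NGINX Plus configuration excerpt validates a token before proxying an API request; the key file path, realm, and upstream group are placeholder assumptions.

# Excerpt (sketch): native JWT validation in NGINX Plus
upstream api_backend {
    server 10.0.0.20:8080;
}

server {
    listen 80;

    location /api/ {
        auth_jwt          "api";                     # realm; requests without a valid JWT get 401
        auth_jwt_key_file /etc/nginx/keys/api.jwk;   # JSON Web Key file used to verify signatures
        proxy_pass http://api_backend;
    }
}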

Get Started Today

Automating security makes it everyone’s responsibility. By prioritizing security automation, your organization can build more reliable apps, mitigate risk, reduce OpEx, and accelerate release velocity. This means your microservices, apps, and APIs get agile security that is scalable and fast enough to keep pace with today’s competition.

This three‑phase security structure also processes traffic in the most efficient order: you do not want to bog down your WAF inspecting traffic from a DoS attack or waste valuable resources trying to authenticate and authorize malicious actors. By eliminating easily identified attacks early, you save time and money and accelerate your app performance.

Ready to try NGINX Plus and NGINX App Protect for yourself? Start a 30-day free trial today or contact us to discuss your use cases.

Secure Your gRPC Apps Against Severe DoS Attacks with NGINX App Protect DoS https://www.nginx.com/blog/secure-grpc-apps-against-severe-dos-attacks-nginx-app-protect-dos/ Thu, 10 Feb 2022 23:31:07 +0000

Customer demand for goods and services over the past two years has underlined how crucial it is for organizations to scale easily and innovate faster, leading many of them to accelerate the move from a monolithic to a cloud‑native architecture. According to the recent F5 report, The State of Application Strategy 2021, the number of organizations modernizing applications increased 133% in the last year alone. Cloud‑enabled applications are designed to be modular, distributed, deployed, and managed in an automated way. While it’s possible simply to lift-and-shift an existing monolithic application, doing so provides no advantage in terms of costs or flexibility. The best way to benefit from the distributed model that cloud computing services afford is to think modular – enter microservices.

Microservices is an architectural approach that enables developers to build an application as a set of lightweight application services that are structurally independent of each other and the underlying platform. Each microservice runs as a unique process and communicates with other services through well‑defined and standardized APIs. Each service can be developed and deployed by a small independent team. This flexibility provides organizations greater benefits in terms of performance, cost, scalability, and the ability to quickly innovate.

Developers are always looking for ways to increase efficiency and expedite application deployment. APIs enable software-to-software communication and provide the building blocks for development. To request data from servers using HTTP, web developers originally used SOAP, which sends details of the request in an XML document. However, XML documents tend to be large, require substantial overhead, and take a long time to develop.

Many developers have since moved to REST, an architectural style and set of guidelines for creating stateless, reliable web APIs. A web API that obeys these guidelines is called RESTful. RESTful web APIs are typically based on HTTP methods to access resources via URL‑encoded parameters and use JSON or XML to transmit data. With the use of RESTful APIs, applications are quicker to develop and incur less overhead.

Advances in technology bring new opportunities to advance application design. In 2015 Google developed Google Remote Procedure Call (gRPC) as a modern open source RPC framework that can run in any environment. While REST is built on the HTTP 1.1 protocol and uses a request‑response communication model, gRPC uses HTTP/2 for transport and protocol buffers (protobuf) as the interface description language (IDL) for describing services and data. Protobuf serializes structured data and is designed for simplicity and performance. gRPC is approximately 7 times faster than REST when receiving data and 10 times faster when sending data, due to the efficiency of protobuf and the use of HTTP/2. gRPC also allows streaming communication and serves multiple requests simultaneously.
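
To illustrate (this example is not from the original post), a gRPC service and its messages are declared in a .proto IDL file like the minimal sketch below, which defines one unary RPC and one server-streaming RPC with placeholder names:

syntax = "proto3";

package demo;

service Notifier {
  rpc Send (Message) returns (Ack);                 // unary request-response
  rpc Subscribe (Topic) returns (stream Message);   // server-side streaming
}

message Message { string text = 1; }
message Ack     { bool ok = 1; }
message Topic   { string name = 1; }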

Developers find building microservices with gRPC an attractive alternative to using RESTful APIs due to its low latency, support for multiple languages, compact data representation, and real‑time streaming, all of which make it especially suitable for communication among microservices and over low‑power, low‑bandwidth networks. gRPC has increased in popularity because it makes it easier to build new services rapidly and efficiently, with greater reliability and scalability, and with language independence for both clients and servers.

Although the open nature of the gRPC protocol offers several positive benefits, the standard doesn’t provide any protection from the impact that a DoS attack can have on an application. A gRPC application can still suffer from the same types of DoS attacks as a traditional application.

Why Identifying a DoS Attack on a gRPC App Is Challenging

While microservices and containers give developers more freedom and autonomy to rapidly release new features to customers, they also introduce new vulnerabilities and opportunities for exploitation. One type of cyberattack that has gained in popularity is the denial-of-service (DoS) attack, which in recent years has been responsible for an increasing number of common vulnerabilities and exposures (CVEs), many caused by the improper handling of gRPC requests. Layer 7 DoS attacks on applications and APIs have spiked by 20% in recent years, while the scale and severity of their impact have risen by nearly 200%.

A DoS attack commonly sends large amounts of traffic that appears legitimate, to exhaust the application’s resources and make it unresponsive. In a typical DoS attack, a bad actor floods a website or application with so much traffic that the servers become overwhelmed by all the requests, causing them to stall or even crash. DoS attacks are designed to slow or completely disable machines or networks, making them inaccessible to the people who need them. Until the attack is mitigated, services that depend on the machine or network – such as e‑commerce sites, email, and online accounts – are unusable.

Increasingly, we have seen more DoS attacks using HTTP and HTTP/2 requests or API calls to attack at the application layer (Layer 7), in large part because Layer 7 attacks can bypass traditional defenses that are not designed to defend modern application architectures. Why have attackers switched from traditional volumetric attacks at the network layers (Layers 3 and 4) to Layer 7 attacks? They are following the path of least resistance. Infrastructure engineers have spent years building effective defense mechanisms against Layer 3 and Layer 4 attacks, making them easier to block and less likely to be successful. That makes such attacks more expensive to launch, in terms of both money and time, and so attackers have moved on.

Detecting DoS attacks on gRPC applications is extremely hard, especially in modern environments where scaling out is performed automatically. A gRPC service may not be designed to handle high‑volume traffic, which makes it an easy target for attackers to take down. gRPC services are also vulnerable to HTTP/2 flood attacks with tools such as h2load. Additionally, gRPC services can easily be targeted when the attacker exploits data definitions that are properly declared in a protobuf specification.

A typical, if unintentional, misuse of a gRPC service is when a bug in a script causes it to produce excessive requests to the service. For example, suppose an automation script issues an API call when a certain condition occurs, which the designers expect to happen every two seconds. Due to a mistake in the definition of the condition, however, the script issues the call every two milliseconds, creating an unexpected burden on the backend gRPC service.

Other examples of DoS attacks on a gRPC application include:

  • The insertion of a malicious field in a gRPC message may cause the application to fail.
  • A Slow POST attack sends partial requests in the gRPC header. Anticipating the arrival of the remainder of the request, the application or server keeps the connection open. The concurrent connection pool might become full, causing rejection of additional connection attempts from clients.
  • An HTTP/2 SETTINGS flood (CVE-2019-9515), in which the attacker sends a stream of empty SETTINGS frames to the gRPC service, consumes NGINX resources, making it unable to serve new requests.

Unleash the Power of Dynamic User and Site Behavior Analysis to Mitigate gRPC DoS Attacks with NGINX App Protect DoS

Securing applications from today’s DoS attacks requires a modern approach. To protect complex and adaptive applications, you need a solution that provides highly precise, dynamic protection based on user and site behavior and removes the burden from security teams while supporting rapid application development and competitive advantage.

F5 NGINX App Protect DoS is a lightweight software module for NGINX Plus, built on F5’s market‑leading WAF and behavioral protection. Designed to defend against even the most sophisticated Layer 7 DoS attacks, NGINX App Protect DoS uses unique algorithms to create a dynamic statistical model that provides adaptive machine learning and automated protection. It continuously measures mitigation effectiveness and adapts to changing user and site behavior and performs proactive server health checks. For details, see How NGINX App Protect Denial of Service Adapts to the Evolving Attack Landscape on our blog.

Behavioral analysis is provided for both traditional HTTP apps and modern HTTP/2 app headers. NGINX App Protect DoS mitigates attacks based on both signatures and bad actor identification. In the initial signature‑mitigation phase, NGINX App Protect DoS profiles the attributes associated with anomalous behavior to create dynamic signatures that then block requests that match this behavior going forward. If the attack persists, NGINX App Protect DoS moves into the bad‑actor mitigation phase.

Based on statistical anomaly detection, NGINX App Protect DoS successfully identifies bad actors by source IP address and TLS fingerprints, enabling it to generate and deploy dynamic signatures that automatically identify and mitigate these specific patterns of attack traffic. This approach is unlike traditional DoS solutions on the market that detect when a volumetric threshold is exceeded. NGINX App Protect DoS can block attacks where requests look completely legitimate and each attacker might even generate less traffic than the average legitimate user.

The following Kibana dashboards show how NGINX App Protect DoS quickly and automatically detects and mitigates a DoS flood attack on a gRPC application.

Figure 1 displays a gRPC application experiencing a DoS flood attack. In the context of gRPC, the critical metric is datagrams per second (DPS) which corresponds to the rate of messages per second. In this image, the yellow curve represents the learning process: when the Baseline DPS prediction converges toward the Incoming DPS value (in blue), NGINX App Protect has learned what “normal” traffic for this application looks like. The sharp rise in DPS at 12:25:00 indicates the start of an attack. The red alarm bell indicates the point when NGINX App Protect DoS is confident that there is an attack in progress, and starts the mitigation phases.

Figure 1: A gRPC application experiencing a DoS attack

Figure 2 shows NGINX App Protect DoS in the process of detecting anomalous behavior and thwarting a gRPC DoS flood attack using a phased mitigation approach. The red spike indicates the number of HTTP/2 redirections sent to all clients during the global rate‑mitigation phase. The purple graph represents the redirections sent to specific clients when their requests match a signature that models the anomalous behavior. The yellow graph represents the redirections sent to specific detected bad actors identified by IP address and TLS fingerprint.

Figure 2: NGINX App Protect DoS using a phased mitigation approach to thwart a gRPC DoS flood attack

Figure 3 shows a dynamic signature created by NGINX App Protect DoS that is powered by machine learning and profiles the attributes associated with the anomalous behavior from this gRPC flood attack. The signature blocks requests that match it during the initial signature‑mitigation phase.

Figure 3: A dynamic signature

Figure 4 shows how NGINX App Protect DoS moves from signature‑based mitigation to bad‑actor mitigation when an attack persists. By analyzing user behavior, NGINX App Protect DoS has identified bad actors by the source IP address and TLS fingerprints shown here. Instead of looking at every request for specific signatures that indicate anomalous behavior, here service is denied to specific attackers. This enables generation of dynamic signatures that identify these specific attack patterns and mitigate them automatically.

Figure 4: NGINX App Protect DoS identifying bad actors by IP address and TLS fingerprint

With gRPC APIs, you can use the gRPC interface – the type library (IDL) file and the proto definition files for the protobuf – to set security policy. NGINX App Protect DoS also provides a zero‑touch security policy option, meaning you don’t have to rely on the protobuf definition to protect the gRPC service from attacks. gRPC proto files are frequently used as part of CI/CD pipelines, aligning security and development teams by automating protection and enabling security-as-code for full CI/CD pipeline integration. NGINX App Protect DoS ensures consistent security by seamlessly integrating protection into your gRPC applications so that they are always protected by the latest, most up-to-date security policies.

While gRPC provides the speed and flexibility developers need to design and deploy modern applications, the inherent open nature of its framework makes it highly vulnerable to DoS attacks. To stay ahead of severe Layer 7 DoS attacks that can result in performance degradation, application and website outages, lost revenue, and damage to customer loyalty and brand, you need a modern defense. That’s why NGINX App Protect DoS is essential for your modern gRPC applications.

To try NGINX App Protect DoS with NGINX Plus for yourself, start your free 30-day trial today or contact us to discuss your use cases.

For more information, check out our whitepaper, Securing Modern Apps Against Layer 7 DoS Attacks.

F5 and NGINX Together Extend Robust Security Across Your Hybrid Environment https://www.nginx.com/blog/f5-nginx-together-extend-robust-security-across-your-hybrid-environment/ Thu, 20 Jan 2022 17:52:09 +0000

When one of the world’s most successful premium car makers picks an application security solution, you can be confident they’ve made sure it meets their standards for performance and reliability. That’s why we’re proud that the Audi Group – active in more than 100 markets worldwide – recently chose F5 NGINX App Protect WAF to secure its Kubernetes‑based platform for modern application development.

NGINX App Protect is a prime example of how F5 enables customers on their digital transformation journeys by integrating its industry‑leading security expertise into tools for modern apps. In this case, we’ve ported the security engine from F5 Advanced Web Application Firewall (WAF) – tried and tested over decades by our BIG‑IP customers – into NGINX, known as an ideal platform for modern app delivery thanks to its exceptional performance, flexible programmability, and ease of deployment in any environment.

Like many F5 customers, Audi relies on both BIG‑IP and NGINX. By leveraging a common security engine in products with the right form factor for different environments, Audi can be confident that its entire infrastructure is protected from the OWASP Top 10 and other advanced threats. It also means that Audi’s DevOps and SecOps teams can operate in harmony with robust support from F5.

F5 acquired NGINX in 2019 because it recognized the changes it was seeing in the app‑delivery landscape as inexorable. NGINX App Protect is one of the first demonstrations of the synergy that makes F5 and NGINX better together. We look forward to building further on that synergy, strengthening both F5’s security portfolio and its role in the modern application landscape.

How NGINX Helps Make F5 Better

In the mid‑2010s, F5 realized that to keep succeeding in the modern app‑delivery landscape it needed to build out its product portfolio. The shifts that drove that realization are accelerating today, as evidenced by these trends:

  • Enterprises are adopting Kubernetes for modern app environments
  • Enterprises are adopting DevSecOps, which shifts security left, closer to developers
  • Developers prefer – and often insist on – open source software

As enterprises move to modern app deployments and architectures, the world of application security is also witnessing a shift away from models that treat infrastructure as a shared service. Increasingly, microservices and Kubernetes dominate the modern app landscape, with security tools fully integrated into the delivery process. According to the 2021 Kubernetes Adoption report, 89% of IT professionals expect Kubernetes to expand its role in infrastructure management over the next two to three years as Kubernetes adoption and functionality continue to grow.

BIG‑IP and NGINX provide similar core application‑delivery functionality but are suited to different app development and delivery environments. BIG‑IP’s relatively large footprint isn’t ideal for all application types, especially highly distributed and dynamic ones. Particularly as DevSecOps shifts security left – and developers deploy new and updated software faster than ever – enterprises need a solution with a smaller footprint that integrates easily into DevOps workflows.

F5 provides that solution in the form of NGINX App Protect and other NGINX products. NGINX also satisfies the strong preference of today’s app developers – and anyone who focuses on building applications rather than managing networks and security – for open source technology. The DevSecOps culture likewise leans toward open source, and NGINX brought to F5 its large, enthusiastic open source community and modern mindset. Beyond that, NGINX’s modular architecture makes it easy to incorporate F5 security technology in the form of modules.

With its open source roots, NGINX has put a community‑forward mindset front and center in its app development and microservices architectures. Now NGINX is helping influence F5 to extend its more traditional culture and embrace open source as part of product development. As a clear example, at Sprint 2.0, F5 announced its expanded participation in open source projects like the Kubernetes Gateway API SIG and community.

How F5 Helps Make NGINX Better

F5 Advanced WAF is a perfect fit for security‑focused organizations that want to self‑manage and tailor granular controls for traditional apps. Its WAF and DoS security engines have long been available to BIG‑IP customers as modules, but not in a lightweight form factor suitable for microservices architectures and environments. NGINX customers, on the other hand, had trouble finding a WAF with the rich feature set of Advanced WAF that didn’t drive up latency.

After the NGINX acquisition, F5 made it a top priority to port its trusted application security solutions to NGINX, offering enterprise‑grade security expertise in a high‑performance and lightweight form factor that serves the needs of DevOps and DevSecOps teams building modern applications. NGINX App Protect is the result. Immediately upon its release in 2020, it set new benchmarks for low latency, high performance, and resistance to bypass techniques.

The many benefits from integrating Advanced WAF’s power into NGINX include:

  • Protecting and scaling mission‑critical, advanced front‑end services in your modern application stack
  • Achieving the time-to-market benefits of a microservices architecture without compromising reliability and security controls
  • Providing consistent, robust, and high‑performance application security wherever application traffic moves – whether through BIG‑IP or through microservices architectures enabled by NGINX

NGINX App Protect WAF provides high performance in a small footprint, optimized for microservices architectures, cloud, and containers. NGINX App Protect DoS defends against hard-to-detect Layer 7 attacks.

And how does F5 serve enterprises that want to shift left? By enabling them to inject battle‑tested application security into their CI/CD pipelines, reducing the inherent risks of rapid and frequent releases. The F5 NGINX Controller App Security add‑on for both API management and application delivery enables AppDev and DevOps teams to implement WAF protection in their development pipelines in a self‑service manner while still complying with corporate security requirements. You can also apply consistent policies across all of your BIG‑IP and NGINX deployment environments with the NGINX App Protect Policy Converter.
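As a rough sketch of what that self‑service model looks like, a team can version a declarative WAF policy alongside its application code and have the pipeline deploy both; the paths, hostnames, and filenames below are illustrative placeholders, not prescribed names.

    # nginx.conf fragment – NGINX App Protect WAF attached to an application.
    server {
        listen 443 ssl;
        server_name app.example.com;

        app_protect_enable on;                                        # turn on WAF enforcement
        app_protect_policy_file "/etc/app_protect/app-policy.json";   # declarative policy from the app repo
        app_protect_security_log_enable on;                           # emit security events
        app_protect_security_log "/etc/app_protect/log-config.json" syslog:server=waf-logs.example.com:514;

        location / {
            proxy_pass http://app_backend;                            # forward clean traffic upstream
        }
    }

Because the policy is just a file in the repository, SecOps can review changes to it in the same pull requests developers already use.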

Improving Governance and Observability with Machine Learning and Portable Policies

Of course, technology never stops evolving, and F5 and NGINX plan to continue innovating.

F5’s “Adaptive Applications” Vision Promises Comprehensive Security

As modern threats become increasingly complex, an app’s ability to adapt to threats and other changes becomes ever more crucial. In an ideal world, app services independently scale based on demand. F5 sees this as entering a new world of “Adaptive Applications” – one where a consistent, declarative API layer enables easy management of applications that learn to take care of themselves and avoid evolving security threats, allowing customers to safely deliver modernized experiences.

Acquisitions like Shape and Threat Stack Enrich F5 with ML and Observability

Further expanding its world‑class portfolio of application security and delivery technology, F5 acquired Shape Security, a leader in online fraud and abuse prevention, in 2020, and Threat Stack, a cloud‑ and container‑native observability solution, in 2021. Incorporating Shape and Threat Stack technology gives F5 an end-to-end application security solution with proactive risk identification and real‑time threat mitigation, plus enhanced visibility across application infrastructures and workloads. Dashboards and monitoring are already in the works, along with projects focusing on machine learning (ML). F5 sees the need for sophisticated, adaptive protection and is dedicated to expanding its offerings in that area.

One WAF Engine Across Platforms Ensures Effective Security Everywhere

Using common WAF technology, F5 customers can maintain their standardized security policies when migrating from traditional environments to containerized and cloud environments, and from F5 Advanced WAF to NGINX App Protect. A shared declarative API for WAF policy makes policies portable across our WAF products, ensuring continued security and confidence for F5 customers. To stay close to application workloads, F5 is committed to delivering WAF capabilities in the form factors best suited to each application and its architecture.
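To give a feel for that shared declarative model, here is a minimal App Protect WAF policy sketch in the declarative JSON format; the policy name is a placeholder, and a production policy would typically extend the base template with signature sets, violation settings, and other controls.

    {
      "policy": {
        "name": "portable_base_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking"
      }
    }

The policy file itself is the portable artifact: the same declarative structure is consumed on the NGINX side, and tools like the Policy Converter translate existing F5 Advanced WAF policies into it.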

Get Started with F5 NGINX Today

To stay up to date with F5 NGINX, engage with your trusted technology advisors – whether that’s your account team or a partner. We are constantly streamlining our product environments for easier management, and with our focus on community it’s easier than ever to stay plugged in and subscribe for updates. Whether you’re shifting left, need sophisticated protection, or are looking for time-to-market benefits, F5 NGINX’s proven technology, small footprint, and high‑performance solutions deliver agile, lightweight security now and into the future.

Regardless of where you are in your app development journey, you can get started with a free 30-day trial of our commercial security solutions, including NGINX App Protect WAF and NGINX App Protect DoS.

The post F5 and NGINX Together Extend Robust Security Across Your Hybrid Environment appeared first on NGINX.
