NGINX Management Suite Archives – NGINX
https://www.nginx.com/blog/tag/nginx-management-suite/

Use Infrastructure as Code to Deploy F5 NGINX Management Suite
https://www.nginx.com/blog/use-infrastructure-as-code-to-deploy-f5-nginx-management-suite/ – Tue, 08 Aug 2023
Unlocking the full potential of F5 NGINX Management Suite can help your organization simplify app and API deployment, management, and security. The new NGINX Management Suite Infrastructure as Code (IaC) project aims to help you get started as quickly as possible, while also encouraging the best practices for your chosen deployment environment.

If you are responsible for building software infrastructure, you’re likely familiar with IaC as a modern approach to getting consistent results. However, because there are many ways to achieve an IaC setup, it may be daunting to get started or time consuming to create from scratch.

This blog post introduces the NGINX Management Suite Infrastructure as Code repository and outlines how to set up its individual modules to quickly get them up and running.

Project Overview

There are two established methods for designing your IaC. One is the baked approach, where images are created with the required software and configuration. The other is the fried approach, where you deploy your servers and continuously configure them using a configuration management tool. You can watch this NGINX talk to learn about immutable infrastructure, including the differences between baked and fried images.

In the NGINX Management Suite IaC repository, we take the baked approach – using Packer to bake the images and then Terraform to deploy instances of these images. By creating a pre-baked image, you can speed up the deployment of your individual NGINX Management Suite systems and improve the consistency of your infrastructure.

Baked Approach – using Packer to bake and then Terraform to deploy instances.
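
As a rough sketch of that workflow in shell form – the template and variable file names below are illustrative assumptions, not the repository's actual file names – the baked approach reduces to two stages:

    # Stage 1: bake the image with Packer (hypothetical template/var-file names)
    $ packer build -var-file=aws.pkrvars.hcl nms-aws.pkr.hcl

    # Stage 2: deploy instances of the baked image with Terraform,
    # passing the image ID that Packer reported
    $ terraform init
    $ terraform apply -var="image_id=<image-id-from-packer-output>"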

Working with the GitHub Repo

The Packer output is an image/machine with NGINX Management Suite and all supported modules installed (at the time of writing: Instance Manager, API Connectivity Manager, Security Monitoring, and Application Delivery Manager). The license you apply determines which modules you can use. You can find your license information in the MyF5 Customer Portal or, if you’re not already a customer, you can request a 30-day free trial of API Connectivity Stack or App Delivery Stack to get started.

Confidential information, such as passwords and certificates, is removed during the image generation process. The images can be built for any OS that NGINX Management Suite supports, and customized by modifying build parameters. NGINX supports several cloud and on-premises environments for both image building and deployment, with the intent to actively add support for more. At the time of writing, the setup types listed below are supported.

  • Packer for NGINX Management Suite
  • Packer for NGINX Plus
  • Terraform for the basic reference architecture
  • Terraform for standalone NGINX Management Suite

These setups span the AWS, GCP, Azure, and vSphere environments; the repository documents exactly which combinations each cloud provider supports.

The basic reference architecture deploys an NGINX Management Suite instance with the required number of NGINX Plus instances. The deployed network topology adheres to best practices for the targeted cloud provider.

For example, if you are using Amazon Web Services (AWS), you can deploy this infrastructure:

AWS Infrastructure example

How to Get Started

To start using IaC for NGINX Management Suite, clone this repository and follow the README for building your images. For the basic reference architecture, you will need to follow the Packer guides to generate an NGINX Management Suite and NGINX Plus image.

After you have generated your images, you can use them to deploy your reference architecture. The Terraform stack uses sensible defaults with configuration options that can be edited to suit your needs.
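
For example, you might keep your overrides in a variables file that Terraform loads automatically; the variable names below are hypothetical stand-ins for the options documented in the repository's README:

    # terraform.tfvars -- loaded automatically by terraform apply
    # (variable names are illustrative; check the repo for the real ones)
    region         = "us-west-2"
    instance_count = 2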

How to Contribute

This repository is in active development, and we welcome contributions from the community. For more information, please see our contributing guide.


Building a Docker Image of NGINX Plus with NGINX Agent for Kubernetes
https://www.nginx.com/blog/building-docker-image-nginx-plus-with-nginx-agent-kubernetes/ – Tue, 18 Apr 2023

F5 NGINX Management Suite is a family of modules for managing the NGINX data plane from a single pane of glass. By simplifying management of NGINX Open Source and NGINX Plus instances, NGINX Management Suite streamlines your processes for scaling, securing, and monitoring applications and APIs.

To enable communication with the control plane and remote configuration management, you need to install NGINX Agent on each NGINX instance you want to manage from NGINX Management Suite.

For NGINX instances running on bare metal or a virtual machine (VM), we provide installation instructions in our documentation. In this post we show how to build a Docker image for NGINX Plus and NGINX Agent, to broaden the reach of NGINX Management Suite to NGINX Plus instances deployed in Kubernetes or other microservices infrastructures.

There are three build options, depending on what you want to include in the resulting Docker image; they are detailed in Step 3 of Building the Docker Image below.

[Editor – This post was updated in April 2023 to clarify the instructions and to add the ACM_DEVPORTAL field in Step 1 of Running the Docker Image in Kubernetes.]

Prerequisites

We provide a GitHub repository of the resources you need to create a Docker image of NGINX Plus and NGINX Agent, with support for version 2.8.0 and later of the Instance Manager module from NGINX Management Suite.

To build the Docker image, you need:

  • A Linux host (bare metal or VM)
  • Docker 20.10+
  • A private registry to which you can push the target Docker image
  • A running NGINX Management Suite instance with Instance Manager, and API Connectivity Manager if you want to leverage support for the developer portal
  • A subscription (or 30-day free trial) for NGINX Plus and optionally NGINX App Protect

To run the Docker image, you need:

  • A running Kubernetes cluster
  • kubectl with access to the Kubernetes cluster

Building the Docker Image

Follow these instructions to build the Docker image.

  1. Clone the GitHub repository:

    $ git clone https://github.com/nginxinc/NGINX-Demos 
    Cloning into 'NGINX-Demos'... 
    remote: Enumerating objects: 126, done. 
    remote: Counting objects: 100% (126/126), done. 
    remote: Compressing objects: 100% (85/85), done. 
    remote: Total 126 (delta 61), reused 102 (delta 37), pack-reused 0 
    Receiving objects: 100% (126/126), 20.44 KiB | 1.02 MiB/s, done. 
    Resolving deltas: 100% (61/61), done.
  2. Change to the build directory:

    $ cd NGINX-Demos/nginx-agent-docker/
  3. Run docker ps to verify that Docker is running and then run the build.sh script to include the desired software in the Docker image. The base options are:

    • ‑C – Name of the NGINX Plus license certificate file (nginx-repo.crt in the sample commands below)
    • ‑K – Name of the NGINX Plus license key file (nginx-repo.key in the sample commands below)
    • ‑t – The registry and target image in the form

      <registry_name>/<image_name>:<tag>

      (registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 in the sample commands below)

    • ‑n – Base URL of your NGINX Management Suite instance (https://nim.f5.ff.lan in the sample commands below)

    The additional options are:

    • ‑d – Add data‑plane support for the developer portal when using NGINX API Connectivity Manager
    • ‑w – Add NGINX App Protect WAF

    Here are the commands for the different combinations of software:

    • NGINX Plus and NGINX Agent:

      $ ./scripts/build.sh -C nginx-repo.crt -K nginx-repo.key \
      -t registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 \
      -n https://nim.f5.ff.lan
    • NGINX Plus, NGINX Agent, and NGINX App Protect WAF (add the ‑w option):

      $ ./scripts/build.sh -C nginx-repo.crt -K nginx-repo.key \
      -t registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 -w \
      -n https://nim.f5.ff.lan
    • NGINX Plus, NGINX Agent, and developer portal support (add the ‑d option):

      $ ./scripts/build.sh -C nginx-repo.crt -K nginx-repo.key \ 
      -t registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 -d \ 
      -n https://nim.f5.ff.lan

    Here’s a sample trace of the build for a basic image. The Build complete message at the end indicates a successful build.

    $ ./scripts/build.sh -C nginx-repo.crt -K nginx-repo.key -t registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 -n https://nim.f5.ff.lan 
    => Target docker image is nginx-plus-with-agent:2.7.0 
    [+] Building 415.1s (10/10) FINISHED 
    => [internal] load build definition from Dockerfile
    => transferring dockerfile: 38B
    => [internal] load .dockerignore 
    => transferring context: 2B 
    => [internal] load metadata for docker.io/library/centos:7
    => [auth] library/centos:pull token for registry-1.docker.io
    => CACHED [1/4] FROM docker.io/library/centos:7@sha256:be65f488b7764ad3638f236b7b515b3678369a5124c47b8d32916d6487418ea4
    => [internal] load build context 
    => transferring context: 69B 
    => [2/4] RUN yum -y update  && yum install -y wget ca-certificates epel-release curl  && mkdir -p /deployment /etc/ssl/nginx  && bash -c 'curl -k $NMS_URL/install/nginx-agent | sh' && echo "A  299.1s 
    => [3/4] COPY ./container/start.sh /deployment/
    => [4/4] RUN --mount=type=secret,id=nginx-crt,dst=/etc/ssl/nginx/nginx-repo.crt  --mount=type=secret,id=nginx-key,dst=/etc/ssl/nginx/nginx-repo.key  set -x  && chmod +x /deployment/start.sh &  102.4s  
    => exporting to image 
    => exporting layers 
    => writing image sha256:9246de4af659596a290b078e6443a19b8988ca77f36ab90af3b67c03d27068ff 
    => naming to registry.ff.lan:31005/nginx-plus-with-agent:2.7.0 
    => Build complete for registry.ff.lan:31005/nginx-plus-with-agent:2.7.0

    Running the Docker Image in Kubernetes

    Follow these instructions to prepare the Deployment manifest and start NGINX Plus with NGINX Agent on Kubernetes.

    1. Using your preferred text editor, open manifests/1.nginx-with-agent.yaml and make the following changes (the code snippets show the default values that you can or must change):

      • In the spec.template.spec.containers section, replace the default image name (your.registry.tld/nginx-with-nim2-agent:tag) with the Docker image name you specified with the ‑t option in Step 3 of Building the Docker Image (in our case, registry.ff.lan:31005/nginx-plus-with-agent:2.7.0):

        spec:
          ...
          template:
            ...    
            spec:
              containers:
              - name: nginx-nim
                image: your.registry.tld/nginx-with-nim2-agent:tag
      • In the spec.template.spec.containers.env section, make these substitutions in the value field for each indicated name:

        • NIM_HOST – (Required) Replace the default (nginx-nim2.nginx-nim2) with the FQDN or IP address of your NGINX Management Suite instance (in our case nim2.f5.ff.lan).
        • NIM_GRPC_PORT – (Optional) Replace the default (443) with a different port number for gRPC traffic.
        • NIM_INSTANCEGROUP – (Optional) Replace the default (lab) with the instance group to which the NGINX Plus instance belongs.
        • NIM_TAGS – (Optional) Replace the default (preprod,devops) with a comma‑delimited list of tags for the NGINX Plus instance.
        spec:
          ...
          template:
            ...    
          spec:
            containers:
              ...
              env:
                - name: NIM_HOST
                ...
                  value: "nginx-nim2.nginx-nim2"
                - name: NIM_GRPC_PORT
                  value: "443"
                - name: NIM_INSTANCEGROUP
                  value: "lab"
                - name: NIM_TAGS
                  value: "preprod,devops"
      • Also in the spec.template.spec.containers.env section, uncomment these name/value field pairs if the indicated condition applies:

        • NAP_WAF and NAP_WAF_PRECOMPILED_POLICIES – NGINX App Protect WAF is included in the image (you included the -w option in Step 3 of Building the Docker Image), so the value is "true".
        • ACM_DEVPORTAL – Support for the App Connectivity Manager developer portal is included in the image (you included the -d option in Step 3 of Building the Docker Image), so the value is "true".
        spec:
          ...
          template:
            ...    
          spec:
            containers:
              ...
              env:
                - name: NIM_HOST
                ...
                #- name: NAP_WAF
                #  value: "true"
                #- name: NAP_WAF_PRECOMPILED_POLICIES
                #  value: "true"
                ...
                #- name: ACM_DEVPORTAL
                #  value: "true"
    2. Run the nginxWithAgentStart.sh script with the start argument to apply the manifest and start two pods (as specified by the replicas: 2 instruction in the spec section of the manifest), each with NGINX Plus and NGINX Agent. The stop argument removes the deployment.

      $ ./scripts/nginxWithAgentStart.sh start
      $ ./scripts/nginxWithAgentStart.sh stop
    3. Verify that two pods are now running: each pod runs an NGINX Plus instance and an NGINX Agent to communicate with the NGINX Management Suite control plane.

      $ kubectl get pods -n nim-test  
      NAME                        READY  STATUS   RESTARTS  AGE 
      nginx-nim-7f77c8bdc9-hkkck  1/1    Running  0         1m 
      nginx-nim-7f77c8bdc9-p2s94  1/1    Running  0         1m
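
      If a pod is not reporting to the control plane, checking its logs is a good first step. A minimal sketch, using one of the pod names from the output above:

      $ kubectl logs -n nim-test nginx-nim-7f77c8bdc9-hkkck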
    4. Access the NGINX Instance Manager GUI in NGINX Management Suite and verify that two NGINX Plus instances are running with status Online. In this example, NGINX App Protect WAF is not enabled.

      Screenshot of Instances Overview window in NGINX Management Suite Instance Manager version 2.7.0

    Get Started

    To try out the NGINX solutions discussed in this post, start a 30-day free trial today or contact us to discuss your use cases.

    Download NGINX Agent – it’s free and open source.

    2 Ways to View and Manage Your WAF Fleet at Scale with F5 NGINX
    https://www.nginx.com/blog/2-ways-view-manage-waf-fleet-at-scale-f5-nginx/ – Thu, 23 Mar 2023
    As organizations transform digitally and grow their application portfolios, security challenges also transform and multiply. In F5’s The State of Application Strategy in 2022, we saw how many organizations today have more apps to monitor than ever – often anywhere from 200 to 1000!

    That high number creates more potential attack surfaces, making today’s apps particularly susceptible to bad actors. This vulnerability worsens when a web application needs to handle increased amounts of traffic. To minimize downtime (or even better, eliminate it!), it’s crucial to develop a strategy that puts security first.

    WAF: Your First Line of Defense

    In our webinar Easily View, Manage, and Scale Your App Security with F5 NGINX, we cover why a web application firewall (WAF) is the tool of choice for securing and protecting web applications. By monitoring and filtering traffic, a WAF is the first line of defense to protect applications against sophisticated Layer 7 attacks like distributed denial of service (DDoS).

    The following WAF capabilities ensure a robust app security solution:

    • HTTP protocol and traffic validation
    • Data protection
    • Automated attack blocking
    • Easy policy integration into CI/CD pipelines
    • Centralized visualization
    • Configuration management at scale

    But while the WAF is monitoring the apps, how does your team monitor the WAF? And what about when you deploy multiple WAFs in a fleet to handle numerous attacks? In the webinar, we answer these questions and also do a real‑time demo.

    As a preview of the webinar, in this post we look into two key findings to help you get started managing your WAF fleet at scale:

    1. How to increase visibility
    2. How to enable security-as-code

    Increase Visibility with NGINX Management Suite

    The success of any WAF strategy depends on the level of visibility available to the teams implementing and managing the WAFs during creation, deployment, and modification. This is where a management plane comes in. Rather than making your teams look at each WAF through a separate, individual lens, it’s important to have one centralized pane of glass for monitoring all your WAFs. With centralized visibility, you can make informed decisions about current attacks and easily gain insights to fine‑tune your security policies.

    Additionally, it’s critical that your SecOps, Platform Ops, and DevOps teams share a clear and cohesive strategy. When these three teams work together on both the setup and maintenance of your WAFs, you achieve stronger app security at scale.

    Here’s how each team benefits from using our management plane, F5 NGINX Management Suite, which easily integrates with NGINX App Protect WAF:

    • SecOps – Gains centralized visibility into app security and compliance, the ability to apply uniform policies across teams, and support for a shift‑left strategy.
    • Platform Ops – Can provide app security support to multiple users, centralized visibility across the entire WAF fleet, and scalable DevOps across the entire enterprise.
    • DevOps – Can automate security within the CI/CD pipeline, easily and quickly deploy app security, and provide better customer experience by building apps that are more reliable and less subject to attack.

    Enable Security as Code with NGINX App Protect WAF

    Instance Manager is the core module in NGINX Management Suite and enables centralized management of NGINX App Protect WAF security policies at scale. When your DevOps team can easily consume SecOps‑managed security policies, it can start moving towards a DevSecOps culture, immediately integrating security at all phases of the CI/CD pipeline, shifting security left.

    Shifting left and centrally managing your WAF fleet means:

    • A declarative security policy (in JSON from SecOps; see the sketch after this list) enables DevOps to use CI/CD tools natively.
    • Your security policy can be pushed to the application from a developer tool.
    • SecOps and DevOps can independently own their files.
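
    To make “declarative security policy in JSON” concrete, here is a minimal NGINX App Protect WAF policy of that kind. Treat it as a sketch: the policy name is arbitrary, and the fields shown are common base-template fields rather than a complete SecOps-authored policy.

    {
      "policy": {
        "name": "secops_baseline",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking"
      }
    }

    A file like this can live in version control alongside application code, which is what lets a CI/CD pipeline pick up SecOps-owned policy changes automatically.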

    With platform‑agnostic NGINX App Protect WAF, you can easily shift left and automate security into the CI/CD pipeline.

    Watch the Full Webinar On Demand

    To dive deeper into these topics and see the ten‑minute real‑time demo, watch our on‑demand webinar Easily View, Manage, and Scale Your App Security with F5 NGINX.

    In addition to the findings discussed in this post, the webinar covers:

    • Additional considerations for managing a WAF fleet at scale
    • How visibility of top WAF violations, attacks, and CVEs helps you determine how to tune policies
    • Ways to reduce policy errors with centralized WAF visibility and management
    • Details on automation of security-as-code

    Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

    Managing NGINX Configuration at Scale with Instance Manager
    https://www.nginx.com/blog/managing-nginx-configuration-at-scale-with-instance-manager/ – Mon, 20 Mar 2023
    Since releasing NGINX Instance Manager in early 2021, we have continually added functionality based on feedback from our users about their top priorities and pain points. Instance Manager is now the core module of NGINX Management Suite, our collection of management‑plane modules which make it easier to manage and monitor NGINX at scale. After two years of focused work, today’s Instance Manager is, quite simply, better than ever.

    Some of the most notable recent enhancements to Instance Manager are:

    • Remote configuration and configuration groups to help you scale
    • Robust and granular role‑based access control (RBAC) to empower multiple teams to manage their deployments
    • Improved monitoring options that offer more flexibility and deeper insight
    • Enhanced security with capabilities for monitoring and managing NGINX App Protect WAF

    In this post we focus on the enhancements to Instance Manager’s configuration‑management features. One of the biggest reasons for NGINX’s popularity is the wide range of use cases it covers, from web serving and content caching to reverse proxying and load balancing. As you scale out your NGINX deployment across more use cases, configurations grow more complex and diverse across your NGINX estate, and accurately setting and tracking them manually becomes tedious and prone to errors.

    Instance Manager greatly eases scaling as a centralized control station for remote management of your entire NGINX fleet. It helps ensure that your customers have a consistent and high‑quality user experience no matter how complex your systems become.

    In this post we focus on three Instance Manager configuration‑management features that help you scale.

    Remote Configuration Management Saves Time

    With Instance Manager, you manage all your NGINX Open Source and NGINX Plus configurations remotely from a single pane of glass. You can navigate among hundreds of managed NGINX instances to make updates and monitor status and traffic, either using the web interface or the API. The API makes it easy to integrate NGINX configuration management into your CI/CD pipeline.
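
    As a sketch of that integration, a pipeline step might push a prepared configuration payload to Instance Manager over HTTPS. The endpoint path and payload file below are assumptions for illustration only; consult the Instance Manager API reference for the actual routes and request bodies.

    # Hypothetical endpoint and payload -- check the Instance Manager API docs
    $ curl -sk -u admin:$NIM_PASSWORD \
        -X POST "https://nim.example.com/api/platform/v1/systems/<system-id>/configs" \
        -H "Content-Type: application/json" \
        -d @nginx-config-payload.json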

    The configuration editor built into the web interface is powered by the open source Monaco editor and makes it easy to edit NGINX configuration. The Instance Manager Analyzer automatically highlights errors as you edit and recommends fixes based on best practices.

    Screenshot of configuration editor in NGINX Management Suite Instance manager 2.8.0

    With instance groups, you can apply the same configuration to multiple instances. This makes scaling much easier because you maintain just a single copy of the configuration and apply it to all instances in the group with the single press of a button. As you create additional instances, add them to an instance group during onboarding for instant application of the correct configuration.

    With staged configurations, you can create a configuration from scratch or copy the configuration from an individual instance, and save it to be deployed later on one or more instances.

    Efficient SSL/TLS Certificate Management Maintains Security

    Secure communication between NGINX instances and their clients relies on proper management of SSL/TLS certificates and their associated keys. Expired or invalid certificates can put the integrity of your entire organization at risk.

    With Instance Manager you can conveniently track, manage, and deploy SSL/TLS certificates on all your instances, using either the web interface or API. The web interface highlights any certs that are expired or are expiring soon, helping you avert costly and time‑consuming outages.

    Role-Based Access Control Improves Workflows

    As they adopt DevOps practices, organizations are increasingly delegating responsibility for app and infrastructure management to development teams. This makes it more complicated, but no less critical, to ensure that the right users have the right levels of access. With Instance Manager, you can seamlessly integrate your single sign‑on (SSO) solution and use role‑based access control (RBAC) to create “swim lanes” for different teams.

    RBAC ensures that security and compliance policies are properly enforced, but also empowers teams to manage their own resources. You can create roles that assign permissions broken down along both functional and resource lines.

    With RBAC, teams can focus on their areas of expertise and therefore work faster and more efficiently. At the same time, administrators can be assured that the entire organization is adhering to important guidelines and policies.

    Get Started

    The recent enhancements to Instance Manager’s tools for managing remote configurations, SSL/TLS certificates, and access control make it easier and more convenient than ever to manage your NGINX fleet as it scales.

    To try Instance Manager for yourself, start a 30-day free trial of NGINX Management Suite. You can also trial NGINX Plus for production‑grade traffic management.

    Building a Docker Image for Deploying NGINX Management Suite Without Helm
    https://www.nginx.com/blog/building-a-docker-image-for-deploying-nginx-management-suite-without-helm/ – Tue, 28 Feb 2023
    Earlier this year we introduced NGINX Management Suite as our new control plane for NGINX software solutions, enabling you to configure, scale, secure, and monitor user applications and REST APIs on the NGINX data plane from a single pane of glass.

    NGINX Management Suite has a modular design: at its core is the Instance Manager module, which provides tracking, configuration, and visibility for your entire fleet of NGINX Open Source and NGINX Plus instances. As of this writing, API Connectivity Manager is the other available module, used to manage and orchestrate NGINX Plus running as an API gateway.

    NGINX Management Suite can run on bare metal, as a Linux virtual machine, or containerized. The recommended way to deploy it on Kubernetes is using the Helm chart we provide, but for specific purposes you might need to build your own Docker image and manage its lifecycle through a custom CI/CD pipeline that doesn’t necessarily rely on Helm.

    [Editor – This post was updated in February 2023 to fully automate the process of building the Docker image.]

    Prerequisites

    We provide a GitHub repository of the resources you need to create a Docker image for NGINX Management Suite, with support for these versions of Instance Manager and API Connectivity Manager:

    • Instance Manager 2.4.0+
    • API Connectivity Manager 1.0.0+
    • Security Monitoring 1.0.0+

    To build the Docker image, you need:

    • A Linux host (bare metal or VM)
    • Docker 20.10+
    • A private registry to which you can push the target Docker image
    • A subscription (or 30-day free trial) for NGINX Management Suite

    To run the Docker image, you need:

    • A running Kubernetes cluster
    • kubectl with access to the Kubernetes cluster
    • A subscription (or 30-day free trial) for the NGINX Ingress Controller based on NGINX Plus

    Building the Docker Image

    Follow these instructions to build the Docker image.

    Note: We have made every effort to accurately represent the NGINX Management Suite UI at the time of publication, but the UI is subject to change. Use these instructions as a reference and adapt them to the current UI as necessary.

    1. Clone the GitHub repository:

      $ git clone https://github.com/nginxinc/NGINX-Demos
      Cloning into 'NGINX-Demos'... 
      remote: Enumerating objects: 215, done. 
      remote: Counting objects: 100% (215/215), done. 
      remote: Compressing objects: 100% (137/137), done. 
      remote: Total 215 (delta 108), reused 171 (delta 64), pack-reused 0 
      Receiving objects: 100% (215/215), 2.02 MiB | 1.04 MiB/s, done. 
      Resolving deltas: 100% (108/108), done.
    2. Change to the build directory:

      $ cd NGINX-Demos/nginx-nms-docker/
    3. Run docker ps to verify that Docker is running and then run the buildNIM.sh script to build the Docker image. The ‑i option sets the automated build mode, the ‑C and ‑K options are required and name the NGINX Management Suite certificate and key respectively, and the ‑t option specifies the location and name of the private registry to which the image is pushed.

      $ ./scripts/buildNIM.sh -i -C nginx-repo.crt -K nginx-repo.key -t registry.ff.lan:31005/nginx-nms:2.5.1 
      ==> Building NGINX Management Suite docker image 
      Sending build context to Docker daemon  92.19MB 
      Step 1/18 : FROM ubuntu:22.04 
      ---> a8780b506fa4 
      Step 2/18 : ARG NIM_DEBFILE 
      ---> Running in 0f2354280c34 
      Removing intermediate container 0f2354280c34
      [...]
      ---> 0588a050c852 
      Step 18/18 : CMD /deployment/startNIM.sh 
      ---> Running in d0cc5466a43d 
      Removing intermediate container d0cc5466a43d 
      ---> 25117ec0410a 
      Successfully built 25117ec0410a 
      Successfully tagged registry.ff.lan:31005/nginx-nms:2.5.1 
      The push refers to repository [registry.ff.lan:31005/nginx-nms] 
      9c4918474e3a: Pushed
      42543d044dbb: Pushed
      1621b2ec0a5e: Pushed
      c6a464fc6a79: Pushed
      75fa1d3c61bb: Pushed
      3501fcf5dbd8: Pushed
      d4a221057e67: Pushed
      9ad05eafed57: Pushed
      f4a670ac65b6: Pushed
      2.5.1: digest: sha256:9a70cfdb63b71dc31ef39e4f20a1420d8202c85784cb037b45dc0e884dad74c9 size: 2425

    Running NGINX Management Suite on Kubernetes

    Follow these instructions to prepare the Deployment manifest and start NGINX Management Suite on Kubernetes.

    1. Base64‑encode the NGINX Management Suite license you downloaded from the MyF5 Customer Portal, and copy the output to the clipboard:

      $ base64 -w0 nginx-mgmt-suite.lic
      TulNRS1WZXJz...
    2. Using your favorite editor, open manifests/1.nginx-nim.yaml and make the following changes:

      • In the spec.template.spec.containers section, replace the default image name (your.registry.tld/nginx-nim2:tag) with the Docker image name you specified with the ‑t option in Step 3 of the previous section (in our case, registry.ff.lan:31005/nginx-nms:2.5.1):

        spec:
          ...
          template:
            ...
            spec:
              containers:
              - name: nginx-nim2
                image: your.registry.tld/nginx-nim2:tag
      • In the spec.template.spec.containers.env section, configure authentication credentials by making these substitutions in the value field for each indicated name:

        • NIM_USERNAME – (Optional) Replace the default admin with an admin account name.
        • NIM_PASSWORD – (Required) Replace the default nimadmin with a strong password.
        • NIM_LICENSE – (Required) Replace the default <BASE64_ENCODED_LICENSE_FILE> with the base64‑encoded license you generated in Step 1 above.
        spec:
          ...
          template:
            ...
              spec:
                containers:
                  ...
                  env:
                    ...
                    - name: NIM_USERNAME
                      value: admin
                    - name: NIM_PASSWORD
                      value: nimadmin
                    - name: NIM_LICENSE
                      value: "<BASE64_ENCODED_LICENSE_FILE>"
    3. Check and modify files under manifests/certs to customize the TLS certificate and key used for TLS offload by setting the FQDN you want to use. By default, the nimDockerStart.sh startup script publishes the containerized NGINX Management Suite through NGINX Ingress Controller.
    4. Optionally, edit manifests/3.vs.yaml and customize the hostnames used to reach NGINX Management Suite.

    5. Run nimDockerStart.sh to start NGINX Management Suite in your Kubernetes cluster. As indicated in the trace, it runs as the nginx-nim2 pod. The script also initializes pods for ClickHouse as the backend database and Grafana for analytics visualization. For more information, see the README at the GitHub repo.

      $ ./scripts/nimDockerStart.sh start 
      namespace/nginx-nim2 created 
      ~/NGINX-NIM2-Docker/manifests/certs ~/NGINX-NIM2-Docker 
      Generating a RSA private key 
      .....................................+++++ 
      .....................................+++++ 
      writing new private key to 'nim2.f5.ff.lan.key' 
      ----- 
      secret/nim2.f5.ff.lan created 
      configmap/clickhouse-conf created 
      configmap/clickhouse-users created 
      persistentvolumeclaim/pvc-clickhouse created 
      deployment.apps/clickhouse created 
      service/clickhouse created 
      deployment.apps/nginx-nim2 created 
      service/nginx-nim2 created 
      service/nginx-nim2-grpc created 
      persistentvolumeclaim/pvc-grafana-data created 
      persistentvolumeclaim/pvc-grafana-log created 
      deployment.apps/grafana created 
      service/grafana created 
      virtualserver.k8s.nginx.org/nim2 created 
      virtualserver.k8s.nginx.org/grafana created 
      ~/NGINX-NIM2-Docker
    6. Verify that three pods are now running:

      $ kubectl get pods -n nginx-nim2 
      NAME                        READY     STATUS    RESTARTS   AGE 
      clickhouse-759b65db8c-74pn5   1/1     Running   0          63s 
      grafana-95fbbf5c-jczgk        1/1     Running   0          63s 
      nginx-nim2-5f54664754-lrhmn   1/1     Running   0          63s

    Accessing NGINX Management Suite

    To access NGINX Management Suite, navigate in a browser to https://nim2.f5.ff.lan (or the alternate hostname you set in Step 4 of the previous section). Log in using the credentials you set in Step 2 of the previous section.
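
    You can also confirm reachability from the command line before logging in. A minimal check against the self-signed TLS setup created by the startup script (the -k flag skips certificate verification):

    $ curl -k -I https://nim2.f5.ff.lan/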

    Stopping NGINX Management Suite

    To stop and remove the Docker instance of NGINX Management Suite, run this command:

    $ ./scripts/nimDockerStart.sh stop 
    namespace "nginx-nim2" deleted
    Get Started

    To try out the NGINX solutions discussed in this post, start a 30-day free trial today or contact us to discuss your use cases.

    The Benefits of an API-First Approach to Building Microservices
    https://www.nginx.com/blog/benefits-of-api-first-approach-to-building-microservices/ – Thu, 19 Jan 2023
    APIs are the connective tissue of cloud‑native applications – the means by which an application’s component microservices communicate. As applications grow and scale, so does the number of microservices and APIs. While this is an unavoidable outcome in most cases, it creates significant challenges for the Platform Ops teams responsible for ensuring the reliability, scalability, and security of modern applications. We call this problem API sprawl and wrote about it in a previous blog post.

    As a first attempt to solve API sprawl, an organization might try to use a top‑down approach by implementing tools for automated API discovery and remediation. While this is effective in the near term, it often imposes an undue burden on the teams responsible for building and operating APIs and microservices. They either have to rework existing microservices and APIs to address security and compliance issues or go through an arduous review process to obtain the required approvals. This is why many large software organizations adopt a decentralized approach that uses adaptive governance to give developers the autonomy they need.

    Rather than putting in last‑minute safeguards, a bottom‑up approach to the problem is more effective over the long term. The teams building and operating APIs for different microservices and applications are the first to be involved, and often begin by adopting an API‑first approach to software development across the organization.

    What Is API-First?

    APIs have been around for decades. But they are no longer simply “application programming interfaces”. At their heart, APIs are developer interfaces. Like any user interface, APIs need planning, design, and testing. API‑first is about acknowledging and prioritizing the importance of connectivity and simplicity across all the teams operating and using APIs. It prioritizes communication, reusability, and functionality for API consumers, who are almost always developers.

    There are many paths to API‑first, but a design‑led approach to software development is the end goal for most companies embarking on an API‑first journey. In practice, this approach means APIs are completely defined before implementation. Work begins with designing and documenting how the API will function. The team relies on the resulting artifact, often referred to as the API contract, to inform how they implement the application’s functionality.

    Explore design techniques to support an API‑first approach to software development that is both durable and flexible in Chapter 1 of the eBook Mastering API Architecture from O’Reilly, compliments of NGINX.

    The Value of API-First for Organizations

    An API‑first strategy is often ideal for microservices architectures because it ensures application ecosystems begin life as modular and reusable systems. Adopting an API‑first software development model provides significant benefits for both developers and organizations, including:

    • Increased developer productivity – Development teams can work in parallel, able to update backend applications without impacting the teams working on other microservices which depend on the applications’ APIs. Collaboration is often easier across the API lifecycle since every team can refer to the established API contract.
    • Enhanced developer experience – API‑first design prioritizes the developer experience by ensuring that an API is logical and well‑documented. This creates a seamless experience for developers when they interact with an API. Learn why it’s so important for Platform Ops teams to take the API developer experience into consideration.
    • Consistent governance and security – Cloud and platform architects can organize the API ecosystem in a consistent way by incorporating security and governance rules during the API design phase. This avoids the costly reviews required when issues are discovered later in the software process.
    • Improved software quality – Designing APIs first ensures security and compliance requirements are met early in the development process, well before the API is ready to be deployed to production. With less need to fix security flaws in production, your operations, quality, and security engineering teams have more time to work directly with the development teams to ensure quality and security standards are met in the design phase.
    • Faster time to market – With fewer dependencies and a consistent framework for interservice communication, different teams can build and improve their services much more efficiently. A consistent, machine‑readable API specification is one tool that can help developers and Platform Ops teams to work better together.

    Overall, adopting an API‑first approach can help a company build a more flexible, scalable, and secure microservices architecture.

    How Adopting a Common API Specification Can Help

    In the typical enterprise microservice and API landscape, there are more components in play than a Platform Ops team can keep track of day to day. Embracing and adopting a standard, machine‑readable API specification helps teams understand, monitor, and make decisions about the APIs currently operating in their environments.

    Adopting a common API specification can also help improve collaboration with stakeholders during the API design phase. By producing an API contract and formalizing it into a standard specification, you can ensure that all stakeholders are on the same page about how an API will work. It also makes it easier to share reusable definitions and capabilities across teams.

    Today there are three common API specifications, each supporting most types of APIs:

    • OpenAPI – JSON or YAML descriptions of all web APIs and webhooks
    • AsyncAPI – JSON or YAML descriptions of event‑driven APIs
    • JSON Schema – JSON descriptions of the schema objects used for APIs

    REST APIs make up the bulk of APIs in production today and the OpenAPI Specification is the standard way to write an API definition for a REST API. It provides a machine‑readable contract that describes how a given API functions. The OpenAPI Specification is widely supported by a variety of API management and API gateway tools, including NGINX. The rest of this blog will focus on how you can use the OpenAPI Specification to accomplish a few important use cases.

    The OpenAPI Specification is an open source format for defining APIs in either JSON or YAML. You can include a wide range of API characteristics, as illustrated by the following example, in which an HTTP GET request returns a list of items on an imaginary grocery list.

    openapi: 3.0.0
    info:
      version: 1.0.0
      title: Grocery List API
      description: An example API to illustrate the OpenAPI Specification

    servers:
      - url: https://api.example.io/v1

    paths:
      /list:
        get:
          description: Returns a list of stuff on your grocery list
          responses:
            '200':
              description: Successfully returned a list
              content:
                application/json:
                  schema:
                    type: array
                    items:
                      type: object
                      properties:
                        item_name:
                          type: string

    Definitions that follow the OpenAPI Specification are both human‑ and machine‑readable. This means there is a single source of truth that documents how each API functions, which is especially important in organizations with many teams building and operating APIs. Of course, to manage, govern, and secure APIs at scale you need to make sure that the rest of the tools in your API platform – API gateways, developer portals, and advanced security – also support the OpenAPI Specification.

    Dive deeper into how to design REST APIs using the OpenAPI Specification in Chapter 1 of Mastering API Architecture.

    Benefits of Adopting a Common API Specification

    Using a common API specification, such as the OpenAPI Specification, has several benefits:

    • Improved interoperability – A common, machine‑readable specification means different systems and clients can consume and use the API contract. This makes it easier for Platform Ops teams to integrate, manage, and monitor complex architectures.
    • Consistent documentation – The API contract is documented in a standard format, including the endpoints, request and response formats, and other relevant details. Many systems can use the contract to generate comprehensive documentation, providing clarity and making it easier for developers to understand how to use the API.
    • Better testing – API specifications can be used to automatically generate and run tests, which can help ensure the API implementation adheres to the contract and is working as expected. This can help identify issues with an API before it is published to production.
    • Improved security – Advanced security tools can use the OpenAPI Specification to analyze API traffic and user behavior. They can apply positive security by verifying that API requests comply with the methods, endpoints, and parameters supported by the API endpoint. Non‑conforming traffic is blocked by default, reducing the number of calls your microservices have to process.
    • Easier evolution – API specifications can help facilitate the evolution of the API contract and application itself over time by providing a clear and standard way to document and communicate changes in both machine‑ and human‑readable formats. When coupled with proper versioning practices, this helps minimize the impacts of API changes on API consumers and ensures that an API remains backward compatible.

    Overall, using a common API specification can help to improve the interoperability, documentation, testing, security, and gradual evolution of an API.
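
    As one concrete example of the testing benefit, an OpenAPI definition can be linted in CI before it ships. Spectral is one open source tool for this; the file name below is a placeholder for your own API definition:

    # Lint an OpenAPI definition against Spectral's built-in OpenAPI ruleset
    $ npm install -g @stoplight/spectral-cli
    $ spectral lint grocery-list-api.yaml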

    How NGINX Supports API-First Software Development

    NGINX provides a set of lightweight, cloud‑native tools that make it easy to operate, monitor, govern, and secure APIs at scale. For example, API Connectivity Manager, part of F5 NGINX Management Suite, provides a single management plane for your API operations. With it you can configure and manage API gateways and developer portals. As an API‑first tool itself, every function is accessible via REST API, making CI/CD automation and integration easy for DevOps teams.

    Using the OpenAPI Specification, you can use NGINX products to publish an API to the API gateway, publish its documentation to the developer portal, and set security policies for the WAF, via CI/CD pipelines or the user interface.

    Diagram: API Connectivity Manager leverages an OpenAPI Specification for three uses – publishing the API to an API gateway, publishing documentation at the developer portal, and setting security policies on a WAF.

    Publish APIs to the API Gateway

    API Connectivity Manager uses the OpenAPI Specification to streamline API publication and management. API developers can publish APIs to the API gateway using either the NGINX Management Suite user interface or the fully declarative REST API. APIs are added to the gateway as API proxies, which contain all the ingress, backend, and routing configurations the API gateway needs to direct incoming API requests to the backend microservice. You can use the REST API to deploy and manage APIs as code by creating simple CI/CD automation scripts with tools like Ansible.

    For complete instructions on using the OpenAPI Specification to publish an API, see the API Connectivity Manager documentation.

    Generate API Documentation for the Developer Portal

    Maintaining documentation is often a headache for API teams. But out-of-date documentation on developer portals is also a major symptom of API sprawl. API Connectivity Manager uses the OpenAPI Specification to automatically generate documentation and publish it to the developer portal, saving API developers time and ensuring API consumers can always find what they need. You can upload OpenAPI Specification files directly via the API Connectivity Manager user interface or REST API.

    For complete instructions on publishing API documentation to the developer portal, see the API Connectivity Manager documentation.

    Apply Positive Security to Protect API Endpoints

    You can also use the OpenAPI Specification to verify that API requests to the NGINX Plus API gateway comply with what an API supports. By applying positive security (a security model that defines what is allowed and blocks everything else), you can prevent malicious requests from probing your backend services for potential vulnerabilities.

    At the time of writing, you can’t use API Connectivity Manager to configure NGINX App Protect WAF; this functionality will be available later in 2023. You can, however, use Instance Manager (another NGINX Management Suite module) and the OpenAPI Specification to write custom policies for your WAF. For additional information, see the documentation for NGINX App Protect WAF and Instance Manager.
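
    As a sketch of how such a custom policy can reference an API definition, NGINX App Protect WAF policies support an open-api-files field that points at an OpenAPI Specification, so requests the specification doesn’t describe are blocked. The policy name and URL below are placeholders; verify the exact schema in the NGINX App Protect WAF documentation.

    {
      "policy": {
        "name": "api_positive_security",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "enforcementMode": "blocking",
        "open-api-files": [
          { "link": "https://api.example.io/specs/openapi.yaml" }
        ]
      }
    }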

    Learn more about API security and threat modeling, and how to apply authentication and authorization at the API gateway in Chapter 7 of Mastering API Architecture.

    Summary

    An API‑first approach to building microservices and applications can benefit your organization in many ways. Aligning teams around the OpenAPI Specification (or another common API specification that is both human‑ and machine‑readable) helps enable collaboration, communication, and operations across teams.

    Modern applications operate in complex, cloud‑native environments. Adopting tools that enable an API‑first approach to operating APIs is a critical step towards realizing your API‑first strategy. With NGINX you can use the OpenAPI Specification to manage your APIs at scale across distributed teams and environments.

    Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

    Apply Fine-Grained Access Control and Routing with API Connectivity Manager
    https://www.nginx.com/blog/apply-fine-grained-access-control-and-routing-with-api-connectivity-manager/ – Thu, 12 Jan 2023
    An important part of managing APIs across their lifecycle is fine‑grained control over API access and traffic routing. Access tokens have emerged as the de facto standard for managing access to APIs. One of the advantages of authentication schemes based on JSON Web Tokens (JWTs) is being able to leverage the claims in the JWT to implement that fine level of access control. Permissions can be encoded as custom claims, which API owners can use to control access to their APIs. Once the API proxy has validated the JWT, it has access to all the fields in the token as variables and can base access decisions on them.

    In a previous post, we discussed how API Connectivity Manager can help operators and developers work better together. The teams from different lines of business that own and operate APIs need full control as they develop and enhance the experience of their APIs and services.

    API Connectivity Manager provides policies that enable API owners to configure service‑level settings like API authentication, authorization, and additional security requirements. In this post we show how API owners can use the Access Control Routing policy to enforce fine‑grained control for specific routes and further fine‑tune it to apply per HTTP method and per route based on specific claims in the token.

    Prerequisites

    You must have a trial or paid subscription of F5 NGINX Management Suite, which includes Instance Manager and API Connectivity Manager along with NGINX Plus as an API gateway and NGINX App Protect to secure your APIs. To get started, request a free, 30-day trial of NGINX Management Suite.

    For instructions on how to install NGINX Management Suite and API Connectivity Manager, see the Installation Guide.

    Granting Access and Routing Traffic to a Specific Service

    Let’s say we have published a warehouse API proxy with several endpoints such as inventory, orders, and so on. Now we want to introduce a new service called pricing, but make it accessible only to a few clients who have signed up for a beta trial. Such clients are identified by a claim called betatester. In this sample access token, that claim is true for the user identified in the sub claim, user1@abc.com.

    Header
    
    {
      "kid": "123WoAbc4xxxX-o8LiartvSA9zzzzzLIiAbcdzYx",
      "alg": "RS256"
    }
    Payload
    
    {
      "ver": 1,
      "jti": "AT.xxxxxxxxxxxx",
      "iss": "https://oauthserver/oauth2/",
      "aud": "inventoryAPI",
      "iat": 1670290877,
      "exp": 1670294477,
      "cid": "AcvfDzzzzzzzzz",
      "uid": "yyyyyyyWPXulqE4x6",
      "scp": [
        "apigw"
      ],
      "auth_time": 1670286138,
      "sub": "user1@abc.com",
      "betatester": true,
      "domain": "abc"
    }

    For user2@abc.com, who was not chosen for the beta program, the betatester claim is set to false:

    "sub": "user2@abc.com",
    "betatester": false,

    Now we configure the Access Control Routing policy (access-control-routing) to grant access to the pricing service for users whose JWT contains the betatester claim with value true.

    For brevity, we show only the policy payload. This policy works only with token‑based policies in API Connectivity Manager, such as JWT Assertion and OAuth2 Introspection.

    "policies": {
      "access-control-routing": [
        {
          "action": {
            "conditions": [
              {
                "allowAccess": {
                  "uri": "/pricing"
                },
                "when": [
                  {
                    "key": "token.betatester",
                      "matchOneOf": {
                        "values": [
                          "true"
                        ]
                      }
                  }
                ]
              }
            ],
            "returnCode": 403
          }
        }
      ]
    }

    Once we apply the policy, the API proxy validates the claims in the JWT for authenticated users, performing fine‑grained access control to route requests from user1@abc.com and reject requests from user2@abc.com.
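
    A quick way to exercise the policy is to call the new endpoint with each user’s token; the hostname and token variables here are illustrative:

    # user1's JWT carries "betatester": true -- the request is routed to pricing
    curl -H "Authorization: Bearer $USER1_JWT" https://api.example.com/pricing

    # user2's JWT carries "betatester": false -- the gateway returns 403
    curl -H "Authorization: Bearer $USER2_JWT" https://api.example.com/pricing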

    Controlling Use of Specific Methods

    We can further fine‑tune the access-control-routing policy to control which users can use specific HTTP methods. In this example, the API proxy allows only admins (users whose token includes the value Admin) to use the DELETE method.

    "policies":{
      "access-control-routing":[
        "action":{
          "conditions":[
            {
              "when":[
                {
                  "key":"token.{claims}",
                  "matchOneOf":{
                    "values":[
                      "Admin"
                    ]
                  }
                }
              ],
              "allowAccess":{
                "uri":"/v1/customer",
                "httpMethods":[
                  "DELETE"
                ]
              },
              "routeTo" : {
                "targetBackendLabel" : ""
              }
            }
          ]
        }
      ]
    }

    Header-Based Routing

    Yet another use of the access-control-routing policy is to make routing decisions based on header values in incoming requests. API owners configure conditions that specify the header values a request must match to be routed. Matching requests are forwarded; non‑matching requests are rejected with the configured return code.

    In this example, a request is routed to the /seasons endpoint only when the version request header has value v1. The returnCode value specifies the error code to return when version is not v1 – in this case, it’s 403 (Forbidden).

    "access-control-routing": [
      {
        "action": {
          "conditions": [
            {
              "allowAccess": {
                "uri": "/seasons"
              },
              "when": [
                {
                  "key": "header.version",
                  "matchOneOf": {
                    "values": [
                      "v1"
                    ]
                  }
                }
              ]
            }
          ],
          "returnCode": 403
        }
      }
    ]

    With this sample curl command, we send a request to the seasons service with the version header set to v2:

    curl -H "version: v2" http://example.com/seasons

    Because the value of the version header is not v1 as required by the policy, the service returns status code 403.
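
    Adding curl's -i flag makes the rejection visible in the status line. Here is a sketch of the exchange (the response body, not shown, depends on any Error Response Format policy applied to the cluster):

    $ curl -i -H "version: v2" http://example.com/seasons
    HTTP/1.1 403 Forbidden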

    Including Multiple Rules in the Policy

    You can include multiple rules in an access-control-routing policy to control routing based on one, two, or all three of the criteria discussed in this post: JWT claims, valid methods, and request header values. A request must match the conditions in all rules to be routed; otherwise, it is blocked.
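
    As an illustrative sketch – reusing the routes and claim names from the examples above and following the same policy schema – a rule that requires both a JWT claim and a header value before routing to /pricing might look like this:

    "policies": {
      "access-control-routing": [
        {
          "action": {
            "conditions": [
              {
                "allowAccess": {
                  "uri": "/pricing"
                },
                "when": [
                  {
                    "key": "token.betatester",
                    "matchOneOf": {
                      "values": [
                        "true"
                      ]
                    }
                  },
                  {
                    "key": "header.version",
                    "matchOneOf": {
                      "values": [
                        "v1"
                      ]
                    }
                  }
                ]
              }
            ],
            "returnCode": 403
          }
        }
      ]
    }

    Whether multiple when entries are evaluated as a logical AND in exactly this form is an assumption here; verify the combined‑condition syntax against the access-control-routing schema in the API Connectivity Manager documentation.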

    Summary

    API Connectivity Manager enables API owners to control and enhance the experience of their APIs and services with API‑level policies that apply fine‑grained access control and make dynamic routing decisions.

    To get started with API Connectivity Manager, request a free, 30-day trial of NGINX Management Suite.

    The post Apply Fine-Grained Access Control and Routing with API Connectivity Manager appeared first on NGINX.

    ]]>
    Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management https://www.nginx.com/blog/why-managing-wafs-at-scale-requires-centralized-visibility-and-configuration-management/ Wed, 11 Jan 2023 17:00:34 +0000 https://www.nginx.com/?p=70939 In F5’s The State of Application Strategy in 2022 report, 90% of IT decision makers reported that their organizations manage between 200 and 1,000 apps, up 31% from five years ago. In another survey by Enterprise Strategy Group about how Modern App Security Trends Drive WAAP Adoption (May 2022, available courtesy of F5), the majority of IT decision makers said application [...]

    Read More...

    The post Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management appeared first on NGINX.

    ]]>
    In F5’s The State of Application Strategy in 2022 report, 90% of IT decision makers reported that their organizations manage between 200 and 1,000 apps, up 31% from five years ago. In another survey by Enterprise Strategy Group about how Modern App Security Trends Drive WAAP Adoption (May 2022, available courtesy of F5), the majority of IT decision makers said application security has become more difficult over the past 2 years, with 72% using a WAF to protect their web applications. As organizations continue their digital transformation and web applications continue to proliferate, so too does the need for increased WAF protection. But as with most tools, the more WAFs you have, the harder they are to manage consistently and effectively.

    The challenges of managing WAFs at scale include:

    • Lack of adequate visibility into application‑layer attack vectors and vulnerabilities, especially given the considerable number of them
    • Balancing WAF configurations between overly permissive and overly protective; fixing the resulting false positives and negatives is time‑consuming, especially manually and at scale
    • Ensuring consistent application policy management at high volumes, which is required to successfully identify suspicious code and injection attempts
    • Potential longtail costs – some extremely damaging – of failure to maintain even a single WAF in your fleet, including monetary loss, damage to reputation and brand, loss of loyal customers, and penalties for regulatory noncompliance
    • The need to support and update WAF configuration over time

    WAF management at scale means both security and application teams are involved in setup and maintenance. To effectively manage WAFs – and secure applications properly – they need proper tooling that combines holistic visibility into attacks and WAF performance with the ability to edit and publish configurations on a global scale. In this blog, we explore the benefits of centralized security visualization and configuration management for your WAF fleet.

    Actionable Security Insights at Scale with Centralized WAF Visibility

    To easily manage WAFs at scale and gain the insight needed to make informed decisions, you need a management plane that offers visibility across your WAF fleet from a single pane of glass. You can view information about top violations and attacks, false positives and negatives, apps under attack, and bad actors. You can discover how to tune your security policies based on attack graphs – including geo‑locations – and drill down into WAF event logs.

    How NGINX Can Help: F5 NGINX Management Suite Security Monitoring

    We are happy to announce the general availability of the Security Monitoring module in F5 NGINX Management Suite, the unified traffic management and security solution for your NGINX fleet, which we introduced in August 2022. Security Monitoring is a visualization tool for F5 NGINX App Protect WAF that’s easy to use out of the box. It not only reduces the need for third‑party tools, but also delivers unique, curated insights into the protection of your apps and APIs. Your security, development, and Platform Ops teams gain the ability to analyze threats, view protection insights, and identify areas for policy tuning – making it easier for them to detect problems and quickly remediate issues.

    NMS Security Monitoring dashboard showing web attacks, bot attacks, threat intelligence, attack requests and top attack geolocations
    Figure 1: The Security Monitoring main dashboard provides security teams overview visibility of all web attacks, bot attacks, threat intelligence, attack requests, and top attack geolocations, plus tabs for further detailed threat analysis and quick remediation of issues.

    With the Security Monitoring module, you can:

    • Use dashboards to quickly see top violations, bot attacks, signatures, attacked instances, CVEs, and threat campaigns triggered per app or in aggregate. Filter across various security log parameters for more detailed analysis.
    • Make tuning decisions with insights into signature‑triggered events, including information about accuracy, level of risk, and what part of the request payload triggered signatures for enforcement.
    • Discover top attack actors (client IP addresses), geolocation vectors, and attack targets (URLs) per app or in aggregate.
    • See WAF events with details about requests and violations, searchable by request identifiers and other metrics logged by NGINX App Protect WAF.

    Configuration Management for Your Entire NGINX App Protect WAF Fleet

    While awareness and visibility are vital to identifying app attacks and vulnerabilities, they’re of little value if you can’t also act on the insights you gain by implementing WAF policies that detect and mitigate attacks automatically. The real value of a WAF is defined by the speed and ease with which you can create, deploy, and modify policies across your fleet of WAFs. Manual updates require vast amounts of time and accurate recordkeeping, leaving you more susceptible to attacks and vulnerabilities. And third‑party tools – while potentially effective – add unnecessary complexity.

    A centralized management plane enables configuration management with the ability to update security policies and push them to one, several, or all your WAFs with a single press of a button. This method has two clear benefits:

    • You can quickly deploy and scale policy updates in response to current threats across your total WAF environment.
    • Your security team has the ability to control the protection of all the apps and APIs your developers are building.

    How NGINX Can Help: F5 NGINX Management Suite Instance Manager – Configuration Management

    You can now manage NGINX App Protect WAF at scale with the Instance Manager module in NGINX Management Suite. This enhancement gives you a centralized interface for creating, modifying, and publishing policies, attack signatures, and threat campaigns for NGINX App Protect WAF, resulting in more responsive protection against threats and handling of traffic surges.

    NMS Instance Manager showing policies selection for a publication to a WAF instance group.
    Figure 2: Instance Manager enables security teams to create, modify, and publish policies to one, several, or an entire fleet of NGINX App Protect WAF instances. This image shows policies being selected for publication to a WAF instance group.

    With the Instance Manager module, you can:

    • Define configuration objects in a single location and push them out to the NGINX App Protect WAF instances of your choosing. The objects include security policies and deployments of attack signature updates and threat campaign packages.
    • Choose a graphical user interface (GUI) or REST API for configuration management. With the API, you can deploy configuration objects in your CI/CD pipeline (see the sketch after this list).
    • See which policies and versions are deployed on different instances.
    • Use a JSON visual editor to create, view, and edit NGINX App Protect WAF policies, with the option to deploy instantly.
    • Compile NGINX App Protect WAF policies before deployment, to decrease the time required for updates on WAF instances.
    • View WAF logs and metrics through NGINX Management Suite Security Monitoring.
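
    As a minimal sketch of that CI/CD integration – assuming a policy saved in policy.json, an admin account, and an Instance Manager host named nms.example.com, with the endpoint path to be verified against your version’s REST API reference – publishing a WAF policy could look like:

    # Upload an NGINX App Protect WAF policy via the Instance Manager REST API
    # (illustrative host, credentials, and endpoint path - confirm in the API reference)
    curl -sk -u admin:$NMS_PASSWORD \
      -X POST "https://nms.example.com/api/platform/v1/security/policies" \
      -H "Content-Type: application/json" \
      -d @policy.json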

    Take Control of Your WAF Security with NGINX Management Suite

    To learn more, visit the NGINX Management Suite and Instance Manager pages on our website or check out our documentation.

    Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

    The post Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management appeared first on NGINX.

    ]]>
    Why You Need an API Developer Portal for API Discovery https://www.nginx.com/blog/why-you-need-api-developer-portal-for-api-discovery/ Tue, 06 Dec 2022 16:00:51 +0000 https://www.nginx.com/?p=70811 Enterprises increasingly rely on APIs to connect applications and data across business lines, integrate with partners, and deliver customer experiences. According to TechRadar, today the average enterprise is leveraging a total of 15,564 APIs, up 201% year-on-year. As the number of APIs continues to grow, the complexity of managing your API portfolio increases. It gets harder to [...]

    Read More...

    The post Why You Need an API Developer Portal for API Discovery appeared first on NGINX.

    ]]>
    Enterprises increasingly rely on APIs to connect applications and data across business lines, integrate with partners, and deliver customer experiences. According to TechRadar, today the average enterprise is leveraging a total of 15,564 APIs, up 201% year-on-year.

    As the number of APIs continues to grow, the complexity of managing your API portfolio increases. It gets harder to discover and track what APIs are available and where they are located, as well as find documentation about how to use them. Without a holistic API strategy in place, APIs can proliferate faster than your Platform Ops teams can manage them. We call this problem API sprawl and in a previous post we explained why it’s such a significant threat. In this post we explore in detail how you can fight API sprawl by setting up an API developer portal with help from NGINX.

    Build an Inventory of Your APIs

    Ultimately, APIs can’t be useful until they are used – which means API consumers need a way to find them. Without the proper systems in place, API sprawl makes it difficult for developers to find the APIs they need for their applications. At best, lists of APIs are kept by different lines of business and knowledge is shared across teams through informal networks of engineers.

    One of the first steps toward fighting API sprawl is creating a single source of truth for your APIs. That process starts with building an inventory of your APIs. An accurate inventory is a challenge, though – it’s a constantly moving target as new APIs are introduced and old ones are deprecated. You also need to find any “shadow APIs” across your environments – APIs that have been forgotten over time, were improperly deprecated, or were built outside your standard processes.

    Unmanaged APIs are one of the most insidious symptoms of API sprawl, with both obvious security implications and hidden costs. Without an accurate inventory of available APIs, your API teams must spend time hunting down documentation. There’s significant risk of wasteful duplicated effort as various teams build similar functionality. And changes to a given API can lead to costly cascades of rework or even outages without proper version control.

    Techniques like automated API discovery can help you identify and treat the symptom of unmanaged APIs. But to solve the problem, you need to eliminate the root causes: broken processes and lack of ownership. In practice, integrating API inventory and documentation into your CI/CD pipelines is the only approach that ensures visibility across your API portfolio in the long term. Instead of having to manually track every API as it comes online, you only need to identify and remediate exceptions.

    Streamline API Discovery with an API Developer Portal

    Streamlining API discovery is one area where an API developer portal can help. It provides a central location for API consumers to discover APIs, read documentation, and try out APIs before integrating them into their applications. Your API developer portal can also serve as the central API catalog, complete with ownership and contact info, so everyone knows who is responsible for maintaining APIs for different services.

    A core component of our API reference architecture, an effective API developer portal enables a few key use cases:

    • Streamline API discovery – Publish your APIs in an accessible location so developers can easily find and use your APIs in their projects
    • Provide clear, up-to-date documentation – Ensure developers always have access to the most up-to-date documentation about how an API functions
    • Ensure proper versioning – Introduce new versions of an API without creating outages for downstream applications, with support for versioning
    • Generate API credentials – Streamline the onboarding process so developers can sign in and generate credentials to use for API access
    • Try out APIs – Enable developers to try out APIs on the portal before they integrate them into their projects

    As part of your API strategy, you need to figure out how to maintain your API developer portal. You need an automated, low‑touch approach that seamlessly integrates publishing, versioning, and documenting APIs without creating more work for your API teams.

    Create a Single Source of Truth for Your APIs with NGINX

    To enable seamless API discovery, you need to create a single source of truth where developers can find your APIs, learn how to use them, and onboard them into their projects. That means you’ll need a developer portal – and up-to-date documentation.

    API Connectivity Manager, part of F5 NGINX Management Suite, helps you integrate publication, versioning, and documentation of APIs directly into your development workflows, so your API developer portal is never out of date. In addition to making it easy to create API developer portals to host your APIs and documentation, API Connectivity Manager lets you add custom pages and completely customize the developer portal to match your branding.

    Let’s look at how API Connectivity Manager helps you address some specific use cases. Refer to the API Connectivity Manager documentation for detailed instructions about setting up a developer portal cluster and publishing a developer portal.

    Automatically Generate API Documentation

    There is often a wide gulf between the level of quality and detail your API consumers expect from documentation and what your busy API developers can realistically deliver with limited time and resources. Many homegrown documentation tools fail to integrate with the development lifecycle or other engineering systems. This doesn’t have to be the case.

    How NGINX can help: API Connectivity Manager uses the OpenAPI Specification to publish APIs to the API gateway and automatically generate the accompanying documentation on the developer portal, saving API developers time and ensuring API consumers can always find what they need. You can upload OpenAPI Specification files directly via the API Connectivity Manager user interface, or by sending a call via the REST API. This makes it easy to automate the documentation process via your CI/CD pipeline.
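
    As a sketch of that pipeline step – reusing the example-api Workspace shown below, with a hypothetical host name and an endpoint path that should be confirmed against the API Connectivity Manager API reference – uploading a spec could look like:

    # Publish an OpenAPI Specification file as API documentation from CI/CD
    # (illustrative host and endpoint path - confirm in the ACM API reference)
    curl -sk -u admin:$ACM_PASSWORD \
      -X POST "https://acm.example.com/api/acm/v1/services/workspaces/example-api/api-docs" \
      -H "Content-Type: application/json" \
      -d @openapi-spec.json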

    To publish documentation in API Connectivity Manager, click Services in the left navigation column to open the Services tab. Click the name of your Workspace or create a new one.

    Once you are in the Workspace, click API Docs below the box that has the name and description of your Workspace (example-api in the screenshot). Simply click the  Add API Doc  button to upload your OpenAPI Specification file. Click the  Save  button to publish the documentation to the developer portal.

    Screenshot of window for adding API documentation in API Connectivity Manager
    Figure 1: Creating documentation by uploading an OpenAPI Specification file to
    API Connectivity Manager

    Ensure Proper Versioning

    Version changes must always be handled with care, and this is especially true in microservices environments where many services might be interacting with a single API. Without a careful approach to introducing new versions and retiring old ones, a single breaking change can lead to a cascading outage across dozens of microservices.

    How NGINX can help: Using OpenAPI Specification files with API Connectivity Manager enables easy version control for your APIs. In addition to setting the version number, you can provide documentation for each version and manage its status (latest, active, retired, or deprecated).

    To publish a new version of an API, click Services in the left navigation column. Click the name of your Workspace in the table, and then click the name of your Environment on the page that opens. Next, click the  + Add Proxy  button. From here you can upload the OpenAPI Specification, set the base path and version to create the URI (for example, /api/v2/), and input other important metadata. Click the  Publish  button to save and publish your API proxy.

    The original version of the API remains available alongside your new version. This gives your users time to gradually migrate their applications or services to the most recent version. When you are ready, you can fully deprecate the original version of your API. Figure 2 shows two versions of the Sentence Generator API published and in production.

    Screenshot of Services Workspace in API Connectivity Manager with two API versions
    Figure 2: Managing active API versions within API Connectivity Manager
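
    Declaratively, the version is simply metadata on the proxy object. A hypothetical fragment – the field names here are illustrative rather than the exact API Connectivity Manager schema – shows how the base path and version combine into the /api/v2/ URI mentioned above:

    {
      "name": "sentence-generator",
      "version": "v2",
      "basePath": "/api"
    }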

    Generate API Credentials

    To drive adoption of your APIs, you need to make the onboarding process as simple as possible for your API consumers. Once users find their APIs, they need a method to securely sign into the developer portal and generate credentials. These credentials grant them access to the functionality of your API. Most often you’ll want to implement a self‑managed workflow so users can sign up on their own.

    How NGINX can help: API Connectivity Manager supports self‑managed API workflows on the developer portal so users can generate their own resource credentials for accessing APIs. Resource credentials can be managed on the portal using API keys or HTTP Basic authentication. You can also enable single sign‑on (SSO) on the developer portal to secure access and allow authenticated API consumers to manage resource credentials.

    To quickly enable SSO on the developer portal, click Infrastructure in the left navigation column. Click the name of your Workspace in the table (in Figure 3, it’s team-sentence).

    Screenshot of Infrastructure Workspaces tab in API Connectivity Manager
    Figure 3: List of Workspaces on the Infrastructure tab

    In the table on the Workspace page, click the name of the Environment you want to configure (in Figure 4, it’s prod).

    Screenshot of list of Environments on Infrastructure Workspaces tab in API Connectivity Manager
    Figure 4: List of Environments in a Workspace

    In the Developer Portal Clusters section, click the icon in the Actions column for the developer portal you are working on and select Edit Advanced Config from the drop‑down menu. In Figure 5, the single Developer Portal Cluster is devportal-cluster.

    Screenshot of how to edit a Developer Portal Cluster to define a policy in API Connectivity Manager
    Figure 5: Selecting Edit Advanced Config option for a Developer Portal Cluster

    Next, click Global Policies in the left navigation column. Configure the OpenID Connect Relying Party policy by clicking on the icon in the Actions column of its row and selecting Edit Policy from the drop‑down menu. For more information, see the API Connectivity Manager documentation.

    Screenshot of how to activate a policy for single sign-on in API Connectivity Manager
    Figure 6: Configuring the OpenID Connect Relying Party global policy to enable single sign‑on

    Try Out APIs on the Developer Portal

    One way you might measure the success of your API strategy is to track the “time to first API call” metric, which reveals how long it takes a developer to send a basic request with your API.

    We’ve established that clear, concise documentation is essential as the first entry point for your API, where your users get a basic understanding of how to work with an API. Usually, developers must then write new code to integrate the API into their application before they can test API requests. You can help developers get started much faster by providing a way to directly interact with an API on the developer portal using real data – effectively making their first API call without writing a single line of code for their application!

    How NGINX can help: Once you enable SSO for your API Connectivity Manager developer portals, API consumers can use the API Explorer to try out API calls on your documentation pages. They can use API Explorer to explore the API’s endpoints, parameters, responses, and data models, and test API calls directly with their browsers.

    Figure 7 shows the API Explorer in action – in this case, trying out the Sentence Generator API. The user selects the appropriate credentials, creates the request, and receives a response with actual data from the API.

    Screenshot of testing an API in a developer portal with the API Connectivity Manager API Explorer tool
    Figure 7: Testing an API on the developer portal

    Summary

    APIs are crucial to your organization. And the first step towards governing and securing your APIs starts with taking an inventory of every API, wherever it is. But API discovery is only part of the solution – you need to build API inventory, documentation, and versioning into your development and engineering lifecycle to address the root causes of API sprawl.

    Start a 30-day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

    The post Why You Need an API Developer Portal for API Discovery appeared first on NGINX.

    ]]>
    Adaptive Governance Gives API Developers the Autonomy They Need https://www.nginx.com/blog/adaptive-governance-gives-api-developers-the-autonomy-they-need/ Thu, 10 Nov 2022 19:29:38 +0000 https://www.nginx.com/?p=70698 Today’s enterprise is often made up of globally distributed teams building and deploying APIs and microservices, usually across more than one deployment environment. According to F5’s State of Application Strategy Report, 81% of organizations operate across three or more environments ranging across public cloud, private cloud, on premises, and edge. Ensuring the reliability and security of [...]

    Read More...

    The post Adaptive Governance Gives API Developers the Autonomy They Need appeared first on NGINX.

    ]]>
    Today’s enterprise is often made up of globally distributed teams building and deploying APIs and microservices, usually across more than one deployment environment. According to F5’s State of Application Strategy Report, 81% of organizations operate across three or more environments ranging across public cloud, private cloud, on premises, and edge.

    Ensuring the reliability and security of these complex, multi‑cloud architectures is a major challenge for the Platform Ops teams responsible for maintaining them. According to software engineering leaders surveyed in the F5 report, visibility (45%) and consistent security (44%) top the list of multi‑cloud challenges faced by Platform Ops teams.

    With the growing number of APIs and microservices today, API governance is quickly becoming one of the most important topics for planning and implementing an enterprise‑wide API strategy. But what is API governance, and why is it so important for your API strategy?

    What Is API Governance?

    At the most basic level, API governance involves creating policies and running checks and validations to ensure APIs are discoverable, reliable, observable, and secure. It provides visibility into the state of the complex systems and business processes powering your modern applications, which you can use to guide the evolution of your API infrastructure over time.

    Why Do You Need API Governance?

    The strategic importance of API governance cannot be overestimated: it’s the means by which you realize your organization’s overall API strategy. Without proper governance, you can never achieve consistency across the design, operation, and productization of your APIs.

    When done poorly, governance often imposes burdensome requirements that slow teams down. When done well, however, API governance reduces work, streamlines approvals, and allows different teams in your organization to function independently while delivering on the overall goals of your API strategy.

    What Types of APIs Do You Need to Govern?

    Building an effective API governance plan as part of your API strategy starts with identifying the types of APIs you have in production, and the tools, policies, and guidance you need to manage them. Today, most enterprise teams are working with four primary types of APIs:

    • External APIs – Delivered to external consumers and developers to enable self‑service integrations with data and capabilities
    • Internal APIs – Used for connecting internal applications and microservices and only available to your organization’s developers
    • Partner APIs – Facilitate strategic business relationships by sharing access to your data or applications with developers from partner organizations
    • Third-Party APIs – Consumed from third‑party vendors as a service, often for handling payments or enabling access to data or applications

    Each type of API in the enterprise must be governed to ensure it is secure, reliable, and accessible to the teams and users who need to access it.

    What API Governance Models Can You Use?

    There are many ways to define and apply API governance. At NGINX, we typically see customers applying one of two models:

    • Centralized – A central team reviews and approves changes; depending on the scale of operations, this team can become a bottleneck that slows progress
    • Decentralized – Individual teams have autonomy to build and manage APIs; this speeds time to market but sacrifices overall security and reliability

    As companies progress in their API‑first journeys, however, both models start to break down as the number of APIs in production grows. Centralized models often try to implement a one-size-fits-all approach that requires various reviews and signoffs along the way. This slows dev teams down and creates friction – in their frustration, developers sometimes even find ways to work around the requirements (the dreaded “shadow IT”).

    The other model – decentralized governance – works well for API developers at first, but over time complexity increases. Unless the different teams deploying APIs communicate frequently, the overall experience becomes inconsistent across APIs: each is designed and functions differently, version changes result in outages between services, and security is enforced inconsistently across teams and services. For the teams building APIs, the additional work and complexity eventually slows development to a crawl, just like the centralized model.

    Cloud‑native applications rely on APIs for the individual microservices to communicate with each other, and to deliver responses back to the source of the request. As companies continue to embrace microservices for their flexibility and agility, API sprawl will not be going away. Instead, you need a different approach to governing APIs in these complex, constantly changing environments.

    Use Adaptive Governance to Empower API Developers

    Fortunately, there is a better way. Adaptive governance offers an alternative model that empowers API developers while giving Platform Ops teams the control they need to ensure the reliability and security of APIs across the enterprise.

    At the heart of adaptive governance is balancing control (the need for consistency) with autonomy (the ability to make local decisions) to enable agility across the enterprise. In practice, the adaptive governance model unbundles and distributes decision making across teams.

    Platform Ops teams manage shared infrastructure (API gateways and developer portals) and set global policies to ensure consistency across APIs. Teams building APIs, however, act as the subject matter experts for their services or line of business. They are empowered to set and apply local policies for their APIs – role‑based access control (RBAC), rate limiting for their service, etc. – to meet requirements for their individual business contexts.

    Adaptive governance allows each team or line of business to define its workflows and balance the level of control required, while using the organization’s shared infrastructure.
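
    For instance, a team might attach a rate‑limiting policy to its own API proxy without involving Platform Ops. A hypothetical payload – the policy name and fields are illustrative, not the exact API Connectivity Manager schema – could look like:

    "policies": {
      "rate-limit": [
        {
          "rate": "10r/s",
          "key": "client.ip"
        }
      ]
    }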

    Implement Adaptive Governance for Your APIs with NGINX

    As you start to plan and implement your API strategy, follow these best practices to implement adaptive governance in your organization: provide shared infrastructure, give teams agency, and balance global policies with local control.

    Let’s look at how you can accomplish these use cases with API Connectivity Manager, part of F5 NGINX Management Suite.

    Provide Shared Infrastructure

    Teams across your organization are building APIs, and they need to include similar functionality in their microservices: authentication and authorization, mTLS encryption, and more. They also need to make documentation and versioning available to their API consumers, be those internal teams, business partners, or external developers.

    Rather than requiring teams to build their own solutions, Platform Ops teams can provide access to shared infrastructure. As with all actions in API Connectivity Manager, you can set this up in just a few minutes using either the UI or the fully declarative REST API, which enables you to integrate API Connectivity Manager into your CI/CD pipelines. In this post we use the UI to illustrate some common workflows.
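
    For example, creating the infrastructure Workspace shown below could be scripted as a single call (a sketch with a hypothetical host name; confirm the endpoint path in the API Connectivity Manager API reference):

    # Create an infrastructure Workspace from a CI/CD pipeline
    # (illustrative host and endpoint path - confirm in the ACM API reference)
    curl -sk -u admin:$ACM_PASSWORD \
      -X POST "https://acm.example.com/api/acm/v1/infrastructure/workspaces" \
      -H "Content-Type: application/json" \
      -d '{"name": "team-sentence"}'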

    API Connectivity Manager supports two types of Workspaces: infrastructure and services. Infrastructure Workspaces are used by Platform Ops teams to onboard and manage shared infrastructure in the form of API Gateway Clusters and Developer Portal Clusters. Services Workspaces are used by API developers to publish and manage APIs and documentation.

    To set up shared infrastructure, first add an infrastructure Workspace. Click Infrastructure in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, it’s team-sentence – an imaginary team building a simple “Hello, World!” API).

    Screenshot of Workspaces page on Infrastructure tab of API Connectivity Manager UI
    Figure 1: Add Infrastructure Workspaces

    Next, add an Environment to the Workspace. Environments contain API Gateway Clusters and Developer Portal Clusters. Click the name of your Workspace and then the icon in the Actions column; select Add from the drop‑down menu.

    The Create Environment panel opens as shown in Figure 2. Fill in the Name (and optionally, Description) field, select the type of environment (production or non‑production), and click the + Add button for the infrastructure you want to add (API Gateway Clusters, Developer Portal Clusters, or both). Click the  Create  button to finish setting up your Environment. For complete instructions, see the API Connectivity Manager documentation.

    Screenshot of Create Environment panel in API Connectivity Manager UI
    Figure 2: Create an Environment and onboard infrastructure

    Give Teams Agency

    Providing logical separation for teams by line of business, geographic region, or other logical boundary makes sense – if that doesn’t deprive them of access to the tools they need to succeed. Having access to shared infrastructure shouldn’t mean teams have to worry about activities at the global level. Instead, you want to have them focus on defining their own requirements, charting a roadmap, and building their microservices.

    To that end, Platform Ops teams can provide services Workspaces where teams organize and operate their services and documentation. These Workspaces create logical boundaries and provide access to different environments – development, testing, and production, for example – for developing services. The process is similar to creating the infrastructure Workspace in the previous section.

    First, click Services in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, api-sentence for our “Hello, World” service), and optionally provide a description and contact information.

    Screenshot of Workspaces page on Services tab of API Connectivity Manager UI
    Figure 3: Create a services Workspace

    At this point, you can invite API developers to start publishing proxies and documentation in the Workspace you’ve created for them. For complete instructions on publishing API proxies and documentation, see the API Connectivity Manager documentation.

    Balance Global Policies and Local Control

    Adaptive governance requires a balance between enforcing global policies and empowering teams to make decisions that boost agility. You need to establish a clear separation of responsibilities by defining the global settings enforced by Platform Ops and setting “guardrails” that define the tools API developers use and the decisions they can make.

    API Connectivity Manager provides a mix of global policies (applied to shared infrastructure) and granular controls managed at the API proxy level.

    Global policies available in API Connectivity Manager include:

    • Error Response Format – Customize the API gateway’s error code and response structure
    • Log Format – Enable access logging and customize the format of log entries
    • OpenID Connect – Secure access to APIs with an OpenID Connect policy
    • Response Headers – Include or exclude headers in the response
    • Request Body Size – Limit the size of incoming API payloads
    • Inbound TLS – Set the policy for TLS connections with API clients
    • Backend TLS – Secure the connection to backend services with TLS

    API proxy policies available in API Connectivity Manager include:

    • Allowed HTTP Methods – Define which request methods can be used (GET, POST, PUT, etc.)
    • Access Control – Secure access to APIs using different authentication and authorization techniques (API keys, HTTP Basic Authentication, JSON Web Tokens)
    • Backend Health Checks – Run continuous health checks to avoid failed requests to backend services
    • CORS – Enable controlled access to resources by clients from external domains
    • Caching – Improve API proxy performance with caching policies
    • Proxy Request Headers – Pass select headers to backend services
    • Rate Limiting – Limit incoming requests and secure API workloads

    In the following example, we use the UI to define a policy that secures communication between an API Gateway Proxy and backend services.

    Click Infrastructure in the left navigation column. After you click the name of the Environment containing the API Gateway Cluster you want to edit, the tab displays the API Gateway Clusters and Developer Portal Clusters in that Environment.

    Screenshot of Environment page on Infrastructure tab of API Connectivity Manager UI
    Figure 4: Configure global policies for API Gateway Clusters and Developer Portal Clusters

    In the row for the API Gateway Cluster to which you want to apply a policy, click the icon in the Actions column and select Edit Advanced Configuration from the drop‑down menu. Click Global Policies in the left column to display a list of all the global policies you can configure.

    Screenshot of Global Policies page in API Connectivity Manager UI
    Figure 5: Configure policies for an API Gateway Cluster

    To apply the TLS Backend policy, click the icon at the right end of its row and select Add Policy from the drop‑down menu. Fill in the requested information, upload your certificate, and click Add. Then click the  Save and Submit  button. From now on, traffic between the API Gateway Cluster and the backend services is secured with TLS. For complete instructions, see the API Connectivity Manager documentation.
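
    Declaratively, the same policy is just another entry in the cluster’s policies object. A hypothetical fragment – the policy name and fields are illustrative, not the exact API Connectivity Manager schema – might look like:

    "policies": {
      "tls-backend": [
        {
          "data": {
            "trustedRootCertificate": "<CA certificate PEM>"
          }
        }
      ]
    }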

    Summary

    Planning and implementing API governance is a crucial step in ensuring your API strategy is successful. By working towards a distributed model and relying on adaptive governance to address the unique requirements of different teams and APIs, you can scale and apply uniform governance without sacrificing the speed and agility that make APIs, and cloud‑native environments, so productive.

    Get Started

    Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

    The post Adaptive Governance Gives API Developers the Autonomy They Need appeared first on NGINX.

    ]]>