NGINX Unit Archives - NGINX
https://www.nginx.com/blog/tag/nginx-unit/

Server-Side WebAssembly with NGINX Unit
https://www.nginx.com/blog/server-side-webassembly-nginx-unit/ (Tue, 05 Sep 2023)

WebAssembly (abbreviated to Wasm) has a lot to offer the world of web applications. In the browser, it provides a secure, sandboxed execution environment that enables frontend developers to work in a variety of high-level languages (not just JavaScript!) without compromising on performance. And at the backend (server-side), WebAssembly’s cross-platform support and multi-architecture portability promise to make development, deployment, and scalability easier than ever.

At NGINX, we envision a world where you can create a server-side WebAssembly module and run it anywhere – without modification and without multiple build pipelines. Instead, your WebAssembly module would start at local development and run all the way to mission-critical, multi-cloud environments.

With the release of NGINX Unit 1.31, we’re excited to deliver on this vision. NGINX Unit is a universal web app server that executes application code alongside the other essentials of the web stack: TLS, static files, and request routing. Moreover, NGINX Unit does all of this while providing a consistent developer experience for seven programming language runtimes – and now WebAssembly as well.

Adding WebAssembly to NGINX Unit makes sense on many levels:

  • HTTP’s request-response pattern is a natural fit with the WebAssembly sandbox’s input/output (I/O) byte stream.
  • Developers can enjoy high-level language productivity without compromising on runtime performance.
  • NGINX Unit’s request router can facilitate construction of complex applications from multiple WebAssembly modules.
  • WebAssembly’s fast startup time makes it equally suitable for deploying single microservices and functions, or even full-featured applications.
  • Universal portability and cross-platform compatibility enables local development without complex build pipelines.
  • NGINX Unit already provides per-application isolation and the WebAssembly sandbox makes it even safer to run untrusted code.

Note: At the time of this writing, the WebAssembly language module is a Technology Preview – more details below.

How Does the NGINX Unit WebAssembly Module Work?

NGINX Unit’s architecture decouples networking protocols from the application runtime. The router process handles the incoming HTTP request, taking care of the TLS layer as required. After deciding what to do with this request, the “HTTP context” (URI, headers, and body) is then passed to the application runtime.

Many programming languages have a precise specification for how the HTTP context is made available to the application code, and how a developer can access URI, headers, and body. NGINX Unit provides several language modules that implement an interface layer between NGINX Unit’s router and the application runtime.

The WebAssembly language module for NGINX Unit provides a similar interface layer between the WebAssembly runtime and the router process. The WebAssembly sandbox’s linear memory is initialized with the HTTP context of the current request and the finalized response is sent back to the router for transmission to the client.

The sandboxed execution environment is provided by the Wasmtime runtime. The diagram below illustrates the flow of an HTTP request from client, through the router, to the WebAssembly module executed by Wasmtime.

Diagram of the flow of an HTTP request from client, through the router, to the WebAssembly module executed by Wasmtime.

Running WebAssembly Modules on NGINX Unit

Configuring NGINX Unit to execute a WebAssembly module is as straightforward as for any other language. In the configuration snippet below, there is an application called helloworld with these attributes:

  • type defines the language module to be loaded for this application
  • module points to the compiled WebAssembly bytecode (a .wasm file)
  • access is a feature of the Wasmtime runtime that enables the application to access resources outside of the sandbox
  • request_handler, malloc_handler, and free_handler relate to the SDK functions that transfer the HTTP context to Wasmtime (more on that in the next section)
{
   "applications":{
      "helloworld":{
         "type":"wasm",
         "module":"/path/to/wasm_module.wasm",
         "access":{
            "filesystem":[
               "/tmp",
               "/var/tmp"
            ]
         },
         "request_handler":"luw_request_handler",
         "malloc_handler":"luw_malloc_handler",
         "free_handler":"luw_free_handler"
      }
   }
}
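
To receive traffic, the application also needs a listener that routes requests to it. The original post doesn’t show one, so the snippet below is a minimal sketch based on NGINX Unit’s standard configuration layout; the port is an arbitrary choice:

{
   "listeners":{
      "*:8080":{
         "pass":"applications/helloworld"
      }
   }
}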

Finding HTTP Context Inside the WebAssembly Sandbox

As mentioned above, NGINX Unit’s WebAssembly language module initializes the WebAssembly execution sandbox with the HTTP context of the current request. Where many programming language runtimes would provide native, direct access to the HTTP metadata, no such standard exists for WebAssembly.

We expect the WASI-HTTP standard to ultimately satisfy this need but, in the meantime, we provide a software development kit (SDK) for Rust and C. The Unit-Wasm SDK makes it easy to write web applications and APIs that compile to WebAssembly and run on NGINX Unit. In our how-to guide for WebAssembly, you can explore the development environment and build steps.

Despite our vision and desire to realize WebAssembly’s potential as a universal runtime, applications built with this SDK will only run on NGINX Unit. This is why we introduce WebAssembly support as a Technology Preview – we expect to replace it with WASI-HTTP support as soon as that is possible.

Try the Technology Preview

The Technology Preview is here to showcase the potential for server-side WebAssembly while providing a lightweight server for running web applications. Please approach it with a “kick the tires” mindset – experiment with it and provide us with feedback. We’d love to hear from you on the NGINX Community Slack or through the NGINX Unit GitHub repo.

To get started, install NGINX Unit and jump to the how-to guide for WebAssembly.

Learn to Configure NGINX Unit with Zero Pain in Our Video Course
https://www.nginx.com/blog/learn-nginx-unit-with-zero-pain-video-course/ (Mon, 23 Jan 2023)

NGINX Unit is a universal web application server that can be used as a building block for any web architecture, regardless of its complexity – from personal websites to startups to enterprise‑grade production deployments. NGINX Unit compresses multiple layers of the typical web application stack by solving for multiple use cases, including simplifying modern microservices environments and modernizing legacy and monolithic applications.

With NGINX Unit, you can:

  • Serve static assets as a web server
  • Natively run application code in multiple languages
  • Proxy requests to backend servers
  • Achieve true end-to-end TLS for your web apps
  • Reconfigure runtime behavior on the fly with the configuration API (also called the control API)

Given its many capabilities, where do you start learning about NGINX Unit? Well, we’ve developed a comprehensive video course with over a dozen lessons that cover all the details! But don’t be intimidated by the number of lessons – each is under 10 minutes and breaks its topic into simple steps, with live demos that make learning easy and accessible.

By the end of this video course, you’ll know how to install NGINX Unit, use the configuration API, run a “Hello, World!” application, configure NGINX Unit as a web server, and much more.

NGINX Unit Videos Available Now

Start by watching the course overview video – it gives you the background you need to journey on to the videos that explore NGINX Unit in depth.

What Is NGINX Unit? – In this introductory video, learn the basics of NGINX Unit, about the complex application‑stack problems it can solve, and how even as a single component NGINX Unit can manage a large ecosystem.

How to Install NGINX Unit on Linux Debian and Ubuntu – Explore the NGINX Unit repository in a live demo and discover how to install the software on two popular Linux distributions, Debian and Ubuntu.

Explore the Config API and Unit Components – Building on the Ubuntu installation, learn ways to verify that NGINX Unit is running properly, and find out the most important things about the configuration API.

NGINX Unit as a Web Server – Dive into NGINX Unit as a web server (creating a simple HTML index file and JSON configuration) and learn about popular web‑server use cases like deploying a single‑page app (SPA).

Writing Your First “Hello World” in NGINX Unit – Using the NGINX Unit configuration API, learn how to define a simple “Hello, World!” program. By the end of the demo, you’ll have written your own!

Introduction to Language Modules in NGINX Unit – Learn about the available language modules, and how to install them in this introduction to NGINX Unit as an app server.

Upcoming NGINX Unit Videos

In upcoming videos, you’ll learn about using Unit to serve apps written in these languages:

  • Go
  • Java
  • JavaScript (Node.js)
  • PHP
  • Python

To get notified when upcoming videos are available, subscribe to NGINX on YouTube.

You can find all videos listed above in the NGINX Unit playlist on YouTube (and keep an eye out for future videos too). We also provide complete written documentation – get started with the installation instructions.

We’re excited to see how you use NGINX Unit in your environment! Let us know in the comments section below.

5 Videos from Sprint 2022 that Reeled Us In
https://www.nginx.com/blog/5-videos-from-sprint-2022-that-reeled-us-in/ (Tue, 10 Jan 2023)

The oceanic theme at this year’s virtual Sprint conference made for smooth sailing – all the way back to our open source roots. The water was indeed fine, but the ship would never have left the dock without all of the great presentations from our open source team and community. That’s why we want to highlight some of our favorite videos, from discussions around innovative OSS projects to demos involving writing poetry with code. Here, we’ve picked five of our favorites to tide you over until next year’s Sprint.

In addition to the videos below, you can find all the talks, demos, and fireside chats from NGINX Sprint 2022 in the NGINX Sprint 2022 YouTube playlist.

Best Practices for Getting Started with NGINX Open Source

NGINX Open Source is the world’s most popular web server, but also much more – you can configure it as a reverse proxy, load balancer, API gateway, and cache. In this breakout session, Alessandro Fael Garcia, Senior Solutions Engineer for NGINX, simplifies the journey for anyone who’s just getting started with NGINX. Learn multiple best practices including using the official NGINX repo to install NGINX, the importance of knowing your NGINX key commands, and how small adjustments with NGINX directives can improve performance.

For more resources on installing NGINX Open Source, check out our blog and webinar Back to Basics: Installing NGINX Open Source and NGINX Plus.

How to Get Started with OpenTelemetry

In cloud‑native architectures, observability is critical for providing insight into application performance. OpenTelemetry has taken observability to the next level with the concept of distributed tracing. As one of the most active projects at the Cloud Native Computing Foundation (CNCF), OpenTelemetry is quickly becoming the standard for instrumentation and collection of observability data. If you can’t already tell, we believe it’s one of the top open source projects to keep on your radar.

In this session, Steve Flanders, Director of Engineering at Splunk, covers the fundamentals of OpenTelemetry and demos how you can start integrating it into your modern app infrastructure.

To learn how NGINX is using OpenTelemetry, read Integrating OpenTelemetry into the Modern Apps Reference Architecture – A Progress Report on our blog.

The Future of Kubernetes Connectivity

Kubernetes has become the de facto standard for container management and orchestration. But as organizations deploy Kubernetes in production across multi‑cloud environments, complex challenges often emerge. Processes need to be streamlined so teams can manage connectivity to Kubernetes services from cloud to edge. In this Sprint session, Brian Ehlert, Director of Product Management for NGINX, discusses the history of Kubernetes networking, current trends, and the potential future for providing client access to applications in a shared, multi‑tenant Kubernetes environment.

At NGINX, we recognize the importance of Kubernetes connectivity, which is why we developed a Secure Kubernetes Connectivity solution. With NGINX’s Secure Kubernetes Connectivity you can scale, observe, govern, and secure your Kubernetes apps in production while reducing complexity.

Fun Ways to Script NGINX Using the NGINX JavaScript Module

Feeling overwhelmed by all the open source offerings and possibilities? Take a break! In this entertaining talk, Javier Evans, Solutions Engineer for NGINX, guides you through some fun ways to script NGINX Open Source using the NGINX JavaScript module (njs). You’ll learn how to generate QR codes, implement weather‑based authentication (WBA) to compose a unique poem based on the location’s current weather, and more. Creativity abounds and Javier holds nothing back.

New features and improvements are regularly added to njs to make it easier for teams to write and share njs code. Recent updates help make your NGINX config even more modular and reusable.

A Journey Through NGINX and Open Source with Kelsey Hightower

We were beyond excited to have renowned developer advocate Kelsey Hightower join us at Sprint for a deep dive into NGINX Unit, our open source, universal web app server. In a discussion with Rob Whiteley, NGINX Product Group VP and GM, Kelsey emphasizes how one of his primary goals when working with technology is to save time. Using a basic application inside a single container as his demo environment, Kelsey shows how NGINX Unit enables you to write less code. And while Kelsey’s impressed by NGINX Unit, he also offers feedback on how it can improve. We appreciate that, as we are committed to refining and enhancing our open source offerings.

Watch On Demand

Again, thank you for diving into the open source waters with us this year at Sprint! It was a blast and we loved seeing all of your comments, insight, and photos from the virtual booth.

Reminder: You can find all these videos, and more insightful talks, in the NGINX Sprint 2022 YouTube playlist and watch them on demand for free.

NGINX Unit Greets Autumn 2022 with New Features (a Statistics Engine!) and Exciting Plans
https://www.nginx.com/blog/nginx-unit-autumn-2022-new-features-statistics-engine-exciting-plans/ (Thu, 27 Oct 2022)

First things first: it’s been quite a while since we shared news from the NGINX Unit team – these tumultuous times have affected everyone, and we’re no exception. This March, two founding members of the Unit team, Valentin Bartenev and Maxim Romanov, decided to move on to other opportunities after putting years of work and tons of creativity into NGINX Unit. Let’s give credit where credit is due: without them, NGINX Unit wouldn’t be where it is now. Kudos, guys.

Still, our resolve stays strong, as does our commitment to bringing NGINX co‑founder Igor Sysoev’s original aspirations for NGINX Unit to fruition. The arrival of the two newest team members, Alejandro Colomar and Andrew Clayton, has boosted the development effort, so now we have quite a few noteworthy items from NGINX Unit versions 1.25 through 1.28 to share with you.

Observability Is a Thing Now

One of Unit’s key aspirations has always been observability, and version 1.28.0 includes the first iteration of one of the most eagerly awaited features: a statistics engine. Its output is exposed at the new /status API endpoint:

$ curl --unix-socket /var/run/control.unit.sock http://localhost/status
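
The original response listing is not preserved in this archive; the output below is a representative reconstruction, with illustrative values and a hypothetical application named wp, laid out so that the line references in the following paragraphs still apply:

{
   "connections":{
      "accepted":1067,
      "active":13,
      "idle":4,
      "closed":1050
   },

   "requests":{
      "total":1307
   },

   "applications":{
      "wp":{
         "processes":{
            "running":14,
            "starting":0,
            "idle":4
         },

         "requests":{
            "active":10
         }
      }
   }
}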

Most of the fields here are self‑descriptive: connections (line 2) and requests (line 9) provide instance‑wide data, whereas the applications object (line 13) mirrors /config/applications, covering processes and requests that specifically concern the application.

Lines 3–6 show the four categories of connections tracked by NGINX Unit: accepted, active, idle, and closed. The categories for processes are running, starting, and idle (lines 16–18). These categories reflect the internal representation of connections and processes, so now you know just as much about them as your server does.

Seems terse? That’s pretty much all there is to know for now. Sure, we’re working to expose more useful metrics; however, you already can query this API from your command line to see what’s going on at your server and even plug the output into a dashboard of your choice for a more fanciful approach. Maybe you don’t have a dashboard? Well, some of our plans include providing a built‑in one, so follow us to see how this plays out.

For more details, see Usage Statistics in the NGINX Unit documentation.

More Variables, More Places to Use Them

The list of variables introduced since version 1.24 is quite extensive and includes $body_bytes_sent, $header_referer, $header_user_agent, $remote_addr, $request_line, $request_uri, $status, and $time_local.

Most of these are rather straightforward, but here are some of the more noteworthy:

  • $request_uri contains the path and query from the requested URI with browser encoding preserved
  • The similarly named $request_line stores the entire request, such as GET /docs/help.html HTTP/1.1, and is intended for logging…
  • As is $status, which contains the HTTP response status code

Did you notice? We mentioned responses. Yes, we’re moving into that territory as well: the variables in earlier Unit versions focused on incoming requests, but now we have variables that capture the response properties as well, such as $status and $body_bytes_sent.

Regarding new places to use variables, the first to mention is the new customizable access log format. Want to use JSON in NGINX Unit’s log entries? In addition to specifying a simple path string, the access_log option can be an object that also sets the format of log entries:
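
(The original snippet is not preserved here; this minimal sketch assumes a log file at /var/log/unit/access.log and a JSON-style format string.)

{
   "access_log":{
      "path":"/var/log/unit/access.log",
      "format":"{\"addr\": \"$remote_addr\", \"status\": $status, \"uri\": \"$request_uri\"}\n"
   }
}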

Thus, you can go beyond the usual log format any way you like.

A second noteworthy use case for variables is the location value of a route action:
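
(Reconstructed for illustration – the original snippet is missing; a redirect action like this matches the description that follows.)

{
   "action":{
      "return":301,
      "location":"https://$host$request_uri"
   }
}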

Here we’re using $request_uri to relay the request, including the query part, to the same website over HTTPS.

The chroot option now supports variables just as the share option does, which is only logical:
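
(Again a sketch with assumed paths, since the original snippet is missing.)

{
   "action":{
      "share":"/www/data/$host/storage$uri",
      "chroot":"/www/data/$host"
   }
}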

NGINX Unit now supports dynamic variables too. Request arguments, cookies, and headers are exposed as variables: for instance, the query string Type=car&Color=red results in two argument variables, $arg_Type and $arg_Color. At runtime, these variables expand into dynamic values; if you reference a non‑existent variable, it is considered empty.
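
As a purely hypothetical illustration, a listener could even route requests to an application chosen by a query argument – here, ?Type=car would pass requests to the application named car:

{
   "listeners":{
      "*:8080":{
         "pass":"applications/$arg_Type"
      }
   }
}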

For more details, see Variables in the NGINX Unit documentation.

Extensive Support for the X-Forwarded-* Headers

You asked, and we delivered. Starting in version 1.25.0, NGINX Unit has offered some TLS configuration facilities for its listeners, including a degree of X-Forwarded-* awareness; now, you can configure client IP addresses and protocol replacement in the configuration for your listeners:
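
(The original snippet is missing; this sketch uses the forwarded syntax with assumed header names and a trusted source network.)

{
   "listeners":{
      "*:443":{
         "pass":"applications/app",
         "forwarded":{
            "client_ip":"X-Forwarded-For",
            "protocol":"X-Forwarded-Proto",
            "source":[
               "10.0.0.0/8"
            ]
         }
      }
   }
}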

Note: This new syntax deprecates the previous client_ip syntax, which will be removed in a future release.

For more details, see IP, Protocol Forwarding in the NGINX Unit documentation.

The Revamped share Option

NGINX Unit version 1.11.0 introduced the share routing option for serving static content. It’s comparable to the root directive in NGINX:
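
(Reconstructed example – under the original semantics, share named a document root directory, much like root in NGINX.)

{
   "action":{
      "share":"/path/to/dir/"
   }
}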

Initially, the share option specified the so‑called document root directory. To determine which file to serve, Unit simply appended the URI from the request to this share path. For example, in response to a simple GET request for /some/file.html, Unit served /path/to/dir/some/file.html. Still, we kept bumping into border cases that required finer control over the file path, so we decided to evolve. Starting with version 1.26.0, the share option specifies the entire path to a shared file rather than just the document root.

You want to serve a specific file? Fine:
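
(Sketch with an assumed path.)

{
   "action":{
      "share":"/www/data/index.html"
   }
}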

Use variables within the path? Cool, not a problem:
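
(Sketch with assumed paths.)

{
   "action":{
      "share":"/www/$host$uri"
   }
}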

But how do you go about imitating the behavior you’re already used to from NGINX and previous Unit versions? You know, the document root thing that we deemed obsolete a few paragraphs ago? We have a solution.

You can now rewrite configurations like this:
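
(Reconstructed old-style configuration, where share is treated as a document root.)

{
   "action":{
      "share":"/www/data/"
   }
}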

as follows, appending the requested URI to the path, but explicitly!
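
(Reconstructed new-style equivalent.)

{
   "action":{
      "share":"/www/data$uri"
   }
}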

Finally, the share directive now can accept an array of paths, trying them one by one until it finds a file:
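
(Sketch with assumed paths.)

{
   "action":{
      "share":[
         "/www/data$uri",
         "/www/static$uri"
      ]
   }
}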

If no file is found, routing passes to a fallback action; if there’s no fallback, the 404 (Not Found) status code is returned.

For more details, see Static Files in the NGINX Unit documentation.

Plans: njs, URI Rewrite, Action Chaining, OpenAPI

As you read this, we’re already at work on the next release; here’s a glimpse of what we have up our sleeves.

First, we’re integrating NGINX Unit with the NGINX JavaScript module (njs), another workhorse project under active development at NGINX. In short, this means NGINX Unit will support invoking JavaScript modules from within its configuration.

After importing a JavaScript module in NGINX Unit, you’ll be able to do some neat stuff with the configuration – for example, computing values such as routing targets and response locations with JavaScript expressions evaluated at request time.

Also, we’re aiming to introduce something akin to the ever‑popular NGINX rewrite directive:
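
(A purely speculative sketch of what such a rewrite option might look like – the key name and the application name here are assumptions, not a shipped syntax.)

{
   "routes":[
      {
         "match":{
            "uri":"/old/*"
         },
         "action":{
            "rewrite":"/new$uri",
            "pass":"applications/app"
         }
      }
   ]
}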

Our plans don’t stop there, though. How about tying NGINX Unit’s routing to the output from the apps themselves (AKA action chaining)?
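
(The syntax below is entirely hypothetical – it only illustrates the idea of routing on an application’s response status; no such option existed at the time of writing.)

{
   "routes":[
      {
         "action":{
            "pass":"applications/auth_check",
            "response_routes":{
               "200":{
                  "pass":"applications/my_app"
               }
            }
         }
      }
   ]
}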

The idea here is that the auth_check app authenticates the incoming request and returns a status code to indicate the result. If authentication succeeds, 200 OK is returned and the request passes on to my_app.

Meanwhile, we’re also working on an OpenAPI specification to define once and for all NGINX Unit’s API and its exact capabilities. Wish us luck, for this is a behemoth undertaking.

If that’s still not enough to satisfy your curiosity, refer to our roadmap for a fine‑grained dissection of our plans; it’s interactive, so any input from you, dear reader, is most welcome.

Three Steps for Starting Your Cloud-Native Journey with Kubernetes
https://www.nginx.com/blog/three-steps-for-starting-your-cloud-native-journey-with-kubernetes/ (Thu, 15 Sep 2022)

Today, many companies are on a cloud‑native journey or – as F5 puts it – a journey to adaptive applications. By “adaptive” we primarily mean that the app responds to changes in its operating environment by automatically scaling in response to the level of demand, detecting and mitigating problems, and recovering from failures, among other capabilities. But we also mean that it’s fairly easy to update the app to meet changing requirements in the business and technology landscape as traffic patterns shift, the underlying infrastructure develops, and cyberattacks get more numerous and sophisticated.

Sounds like a great outcome, doesn’t it? Well, the journey to adaptive apps doesn’t happen overnight. It takes time – and it’s okay if you’re still in the early stages. Your expedition can be made easier by simplifying crucial points along the way.

Most often, we think of adaptive apps as driven by microservices architectures, running in containers within an elastic cloud environment, a world that comes with a lot of complexity. As nodes and pods and containers spin up and down, you can’t keep up if you have to deal with every change manually. You need orchestration (in particular, container orchestration) to keep the chaos at bay.

At a minimum, adaptive apps need to respond to environmental changes across four factors:

  1. Performance
  2. Scalability (or flexibility)
  3. Observability
  4. Security

How do you seamlessly orchestrate and manage your adaptive apps – as a group – to facilitate fast, automated responses to regularly changing conditions? Well, it’s hard to talk about orchestration without talking about Kubernetes. That’s why Granville Schmidt and I discussed it recently at GlueCon.

Kubernetes is one of the most popular projects from the Cloud Native Computing Foundation (CNCF) – and for good reason. It provides a lot of amazing functionality (especially for enterprises that need to make continual adjustments) and is often the right tool for the heart of your scalable application because it bakes in the needed flexibility. When it was launched in July 2015, it was already a mature product (because of all the work Google had done on its precursor, the Borg project) and as it evolves, the ecosystem of supporting technologies needed to make it a complete solution continues to emerge. But while Kubernetes is a powerful tool, it doesn’t solve every problem and it can be complex and challenging.

Starting Your Cloud-Native Voyage

Unless you’re starting with a brand‑new code base, you likely already have applications in production. Not every application needs to be cloud native, especially if it already fulfills our requirements for modern apps. Also, just because Kubernetes has so many capabilities doesn’t mean you need to use it everywhere. Kubernetes has a time and place. But making the move to microservices and containers can definitely be advantageous.

So, how do you get started with your cloud‑native journey? How do you go from existing monoliths to a microservices‑based, cloud‑enabled world?

With so many resources out there, we want to break down the simplest way to go from monolith to cloud native with Kubernetes orchestration. Back in 2010, Gartner outlined five approaches to cloud migration: rehost, refactor, revise, rebuild, and replace. Microsoft has built on Gartner’s thought leadership to build a framework for what it calls cloud rationalization (in the process renaming the third approach to rearchitect). But you can also think of the approaches/framework components as steps on a journey – one that touches developers and operations, code, and process – and one that you might be just starting or already on. Here we’re focusing on three of the steps: rehost, refactor, and rearchitect.

Rehost (Lift and Shift)

The cloud‑native journey often starts with an existing monolith. But you don’t want to go from monolith to cloud native in one fell swoop. We recommend starting with rehosting, or encapsulating the infrastructure.

While we regularly hear about the splendors of microservices in the cloud, there are still a whole lot of monoliths out there. And why not? They work and they aren’t broken (yet). However, even if the monolith’s traditional three‑tier architecture is still in use, apps that use it are likely to be running up against scaling issues, or even forcing you to build hybrid models to connect with today’s users.

This first, rehosting step is also called lift and shift. In short, you make a copy of your existing application and “paste” it onto an appropriate cloud platform. Basically, you’re moving to “someone else’s computers” with as little impact on your application as possible.

This often happens in a virtual machine (VM) world, which gets you started with the management of apps in the cloud and helps identify the issues you need to deal with. By just lifting and shifting, you’re not really leveraging many of the advantages of adaptive apps, since you can scale only by duplicating the entire app, even if only one piece of functionality is the bottleneck. And having entered the cloud‑native world, you’re faced with unfamiliar issues that apply specifically to web apps, static assets, and dynamically running code.

Even though you’re still above the level of containers and orchestration, you’re on your way and ready for the next steps.

Refactor

In the next step, you refactor your app into pieces (usually implemented as modules) each of which performs a single function (or perhaps a set of closely related functions). At the same time, you might add new app capabilities or tie some functions to specific cloud services (like databases or messaging services). You’re also likely moving to containers, while retaining some of the VMs that make up your infrastructure, and orchestration is becoming more important.

Being farther along in the process, you’ve got more moving pieces than when you simply rehost. Your refactored services are probably in containers so that you can scale them up and down while retaining the loosely coupled communication paths and API calls required to make things work. Now, it’s time to bring in Kubernetes to orchestrate your new containers at the right level of control.

Of course, given the complexity incurred with Kubernetes, there are some challenges to consider. One of the big ones is Ingress, the Kubernetes resource that lets you configure HTTP load balancing for applications hosted in your Kubernetes cluster (represented by services) and deliver those services to clients outside of the cluster. With NGINX, you can use the NGINX Ingress Controller to support the standard Ingress features of content‑based routing and TLS/SSL termination. NGINX Ingress Controller also extends the standard Ingress resource with several additional features like load balancing for WebSocket, gRPC, TCP, and UDP applications.

Depending on your early refactoring work, your emerging cloud‑native app may need the flexibility to communicate in multiple ways. And since NGINX Ingress Controller itself runs in a Kubernetes pod, you’re well set for the next step, rearchitecting.

Open Source Refactoring with NGINX Unit

Using an open source tool like NGINX Unit can help make refactoring easier, with dynamic reconfiguration via API, request routing, and security features. As a next‑gen technology, NGINX Unit can also help you modernize your monolith by turning it into a cloud‑native monolith. NGINX Unit’s RESTful configuration method provides uniform interfaces and separates client concerns from server concerns. While your three‑tier monolith might already strive to do that, NGINX Unit makes cloud native approachable and leads to easier operations. This clarifies the lines of communication and helps identify the further steps required after refactoring.

The refactoring step often stumbles on application control. Since NGINX Unit is already a container‑friendly technology, as you refactor your app, its application‑control features (including dynamic reconfiguration) allow you to seamlessly add services as they are separated from the monolith. NGINX Unit provides application control from Layer 4 all the way into user space, including high‑performance HTTP, proxies, backends, and true end-to-end SSL/TLS. Also, the fact that NGINX Unit supports multiple languages, and can use the right language at the right time, becomes especially important in your early refactoring work.

Rearchitect

After refactoring to add and replace services in your initial application, you next need to rearchitect and look at a stable and sensible redesign for your microservices architecture. Duct tape and wire might work for a while, but in production systems stability is highly desirable.

In rearchitecting as we envision it, you continue to devolve the initial application so that all functions are performed by microservices (or serverless functions) that live in containers powered by Kubernetes. Here, communication is key. Each microservice is as small as necessary (which does not mean “as small as possible”) and is often developed by an independent team.

New issues and even some chaos inevitably arise when an app consists of discrete, independent microservices with loosely coupled communications. Remember that external Ingress issue? It’s back with a vengeance. Now, you’re dealing with a more complex collection of services needing Ingress and more teams you have to collaborate with.

Rearchitecting with NGINX Ingress Controller and NGINX Kubernetes Gateway

Rather than just relying on the standard Kubernetes Ingress resource, NGINX Ingress Controller also supports its own set of custom resource definitions (CRDs) which extend the power of NGINX to a wider range of use cases, such as object delegation using the role‑based access control (RBAC) capabilities of the Kubernetes API.

It is also worth looking into a controller that implements the Kubernetes Gateway API specification. A Kubernetes Gateway allows you to delegate infrastructure management to multiple teams and simplifies deployment and administration. Such a gateway does not displace an Ingress controller, but as the spec matures it may become the solution of choice. NGINX’s implementation of the Kubernetes Gateway API, NGINX Kubernetes Gateway, uses NGINX technology on the data plane and (as of this writing) is in beta.

In line with the Kubernetes Gateway API specification, NGINX Kubernetes Gateway enables multiple teams to control different aspects of the Ingress configuration as they develop and deliver the app. In the cloud‑native world, it is common for developers and site reliability engineers (SREs) to collaborate continually, and the gateway makes it easier to delegate different controls to the appropriate teams. The gateway also incorporates core capabilities that were formerly part of custom Kubernetes annotations and CRDs, providing even more simplicity to get your cloud‑native world up and rocking.

Cloud Native and Open Source

So, there you have it, a simple set of steps for moving from monolith to cloud native, driven by the power of containers and Kubernetes. You might find many other approaches suitable for your use cases, like retaining the initial app while building a new framework around it that combines the capabilities of NGINX Unit and Kubernetes. Or you might build a hybrid model. If you’d like to look at a reference model of NGINX in a Kubernetes world, check out our open source Modern App Reference Architecture (MARA) project.

No matter which approach to cloud native you choose, keep in mind that you always need control and communication capabilities in your Kubernetes cluster to achieve the performance and stability you want for your production systems. NGINX enterprise and open source technologies can help deliver all of that, from data plane to management plane, with security and performance in mind.

Learn how to get started with NGINX Unit in our installation guide. And if you’d like to try the enhanced functionality of the NGINX Plus-based NGINX Ingress Controller, start your 30-day free trial today or contact us to discuss your use cases.

The Future of NGINX: Getting Back to Our Open Source Roots
https://www.nginx.com/blog/future-of-nginx-getting-back-to-our-open-source-roots/ (Tue, 23 Aug 2022)

Time flies when you’re having fun. So it’s hard to believe that NGINX is now 18 years old. Looking back, the community and company have accomplished a lot together. We recently hit a huge milestone – as of this writing 55.6% of all websites are powered by NGINX (either by our own software or by products built atop NGINX). We are also the number one web server by market share. We are very proud of that and grateful that you, the NGINX community, have given us this resounding vote of confidence.

We also recognize, more and more, that open source software continues to change the world. A larger and larger percentage of applications are built using open source code. From Bloomberg terminals and news to the Washington Post to Slack to Airbnb to Instagram and Spotify, thousands of the world’s most recognizable brands and properties rely on NGINX Open Source to power their websites. In my own life – between Zoom for work meetings and Netflix at night – I probably spend 80% of my day using applications built atop NGINX.

Image reading "Open Source Software Changed the World" with logos of prominent open source projects

NGINX is only one element in the success story of open source. We would not be able to build the digital world – and increasingly, to control and manage the physical world – without all the amazing open source projects, from Kubernetes and containers to Python and PyTorch, from WordPress to Postgres to Node.js. Open source has changed the way we work. There are more than 73 million developers on GitHub who have collectively merged more than 170 million pull requests (PRs). A huge percentage of those PRs have been on code repositories with open source licenses.

We are thrilled that NGINX has played such a fundamental role in the rise and success of open source – and we intend to both keep it going and pay it forward. At the same time, we need to reflect on our open source work and adapt to the ongoing evolution of the movement. Business models for companies profiting from open source have become controversial at times. This is why NGINX has always tried to be super clear about what is open source and what is commercial. Above all, this meant never, ever trying to charge for functionality or capabilities that we had included in the open source versions of our software.

Open Source Is Evolving Fast. NGINX Is Evolving, Too.

We now realize that we need to think hard about our commitment to open source, provide more value and capabilities in our open source products, and, yes, up our game in the commercial realm as well. We can’t simply keep charging for the same things as in the past, because the world has changed – some features included only in our commercial products are now table stakes for open source developers. We also know that open source security is top of mind for developers. For that reason, our open source projects need to be just as secure as our commercial products.

We also have to acknowledge reality. Internally, we have had a habit of saying that open source was not really production‑ready because it lacked features or scalability. The world has been proving us wrong on that count for some time now: many thousands of organizations are running NGINX open source software in production environments. And that’s a good thing, because it shows how much they believe in our open source versions. We can build on that.

In fact, we are doing that constantly with our core products. To those who say that the original NGINX family of products has grown long in the tooth, I say you have not been watching us closely:

  • For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating system platforms. Two critical capabilities for security and scalability of web applications and traffic, HTTP/3 and QUIC, are coming in the next version we ship.
  • A quiet but incredibly innovative corner of the NGINX universe is NGINX JavaScript (njs), which enables developers to integrate JavaScript code into the event‑processing model of the NGINX HTTP and TCP/UDP (Stream) modules and extend NGINX configuration syntax to implement sophisticated capabilities. Our users have done some pretty amazing things, everything from innovative cache purging and header manipulations to support for more advanced protocols like MQTTv5.
  • Our universal web application server, NGINX Unit, was conceived by the original author of NGINX Open Source, Igor Sysoev, and it continues to evolve. Unit occupies an important place in our vision for modern applications and a modern application stack that goes well beyond our primary focus on the data plane and security. As we develop Unit, we are rethinking how applications should be architected for the evolving Web, with more capabilities that are cloud‑native and designed for distributed and highly dynamic apps.

The Modern Apps Reference Architecture

We want to continue experimenting and pushing forward on ways to help our core developer constituency more efficiently and easily deploy modern applications. Last year at Sprint 2.0 we announced the NGINX Modern Apps Reference Architecture (MARA), and I am happy to say it recently went into general availability as version 1.0.0. MARA is a curated and opinionated stack of tools, including Kubernetes, that we have wired together to make it easy to deploy infrastructure and application architecture as code. With a few clicks, you can configure and deploy a MARA reference architecture that is integrated with everything you need to create a production‑grade, cloud‑native environment – security, logging, networking, application server, configuration and YAML management, and more.

Diagram showing topology of the NGINX Modern Apps Reference Architecture

MARA is a modular architecture, and by design. You can choose your own adventure and design from the existing modules a customized reference architecture that can actually run applications. The community has supported our idea and we have partnered with a number of innovative technology companies on MARA. Sumo Logic has added their logging chops to MARA and Pulumi has contributed modules for automation and workflow orchestration. Our hope is that, with MARA, any developer can get a full Kubernetes environment up and running in a matter of minutes, complete with all the supporting pieces, secure, and ready for app deployment. This is just one example of how I think we can all put our collective energy into advancing a big initiative in the industry.

The Future of NGINX: Modernize, Optimize, Extend

Each year at NGINX Sprint, our virtual user conference, we make new commitments for the coming year. This year is no different. Our promises for the next twelve months can be captured in three words: modernize, optimize, and extend. We intend to make sure these are not just business buzzwords; we have substantial programs for each one and we want you to hold us to our promises.

Promise #1: Modernize Our Approach, Presence, and Community Management

Obviously, we are rapidly modernizing our code and introducing new products and projects. But modernization is not just about code – it encompasses code management, transparency around decision making, and how we show up in the community. While historically the NGINX Open Source code base has run on the Mercurial version control system, we recognize that the open source world now lives on GitHub. Going forward, all NGINX projects will be born and hosted on GitHub because that’s where the developer and open source communities work.

We also are going to modernize how we govern and manage NGINX projects. We pledge to be more open to contributions, more transparent in our stewardship, and more approachable to the community. We will follow all expected conventions for modern open source work and will be rebuilding our GitHub presence, adding Codes of Conduct to all our projects, and paying close attention to community feedback. As part of this commitment to modernize, we are adding an NGINX Community channel on Slack. We will staff the channel with our own experts to answer your questions. And you, the community, will be there to help each other, as well – in the messaging tool you already use for your day jobs.

Promise #2: Optimize Our Developer Experience

Developers are our primary users. They build and create the applications that have made us who we are. Our tenet has always been that NGINX is easy to use. And that’s basically true – NGINX does not take days to install, spin up, and configure. That said, we can do better. We can accelerate the “time to value” that developers experience on our products by making the learning curve shorter and the configuration process easier. By “value” I mean deploying code that does something truly valuable, in production, full stop. We are going to revamp our developer experience by streamlining the installation experience, improving our documentation, and adding coverage and heft to our community forums.

We are also going to release a new SaaS offering that natively integrates with NGINX Open Source and will help you make it useful and valuable in seconds. There will be no registration, no gate, no paywall. This SaaS will be free to use, forever.

In addition, we recognize that many critical features which developers now view as table stakes are on the wrong side of the paywall for NGINX Open Source and NGINX Plus. For example, DNS service discovery is essential for modern apps. Our promise is to make those critical features free by adding them to NGINX Open Source. We haven’t yet decided on all of the features to move and we want your input. Tell us how to optimize your experience as developers. We are listening.

Promise #3: Extend the Power and Capabilities of NGINX

As popular as NGINX is today, we know we need to keep improving if we want to be just as relevant ten years from now. Our ambitious goal is this: we want to create a full stack of NGINX applications and supporting capabilities for managing and operating modern applications at scale.

To date, NGINX has mostly been used as a Layer 7 data plane. But developers have to put up a lot of scaffolding around NGINX to make it work. You have to wire up automation and CI/CD capabilities, set up proper logging, add authentication and certificate management, and more. We want to make a much better extension of NGINX where every major requirement to test and deploy an app is satisfied by one or more high‑quality open source components that seamlessly integrate with NGINX. In short, we want to provide value at every layer of the stack and make it free. For example, if you are using NGINX Open Source or NGINX Plus as an API gateway, we want to make sure you have everything you need to manage and scale that use case – API import, service discovery, firewall, policy rules and security – all available as high‑quality open source options.

To summarize, our dream is to build an ecosystem around NGINX that extends into every facet of application management and deployment. MARA is the first step in building that ecosystem and we want to continue to attract partners. My goal is to see, by the end of 2022, an entire pre‑wired app launch and run in minutes in an NGINX environment, instrumented with a full complement of capabilities – distributed tracing, logging, autoscaling, security, CI/CD hooks – that are all ready to do their jobs.

Introducing NGINX Kubernetes Gateway, a Brand New Amplify, and NGINX Agent

We are committed to all this. And here are three down payments on my three promises.

  1. Earlier this year we launched NGINX Kubernetes Gateway, based on the Kubernetes Gateway API specification developed by the Kubernetes community. This modernizes our product family and keeps us in line with the ongoing evolution of cloud native. The NGINX Kubernetes Gateway is also something of an olive branch we’re extending to the community. We realize it complicated matters when we created both a commercial and an open source Ingress controller for Kubernetes, both different from the community Ingress solution (also built on NGINX). The range of choices confused the community and put us in a bad position.

    It’s pretty clear that the Gateway API is going to take the place of the Ingress controller in the Kubernetes architecture. So we are changing our approach and will make the NGINX Kubernetes Gateway – which will be offered only as an open source product – the focal point of our Kubernetes networking efforts (in lockstep with the evolving standard). It will both integrate and extend into other NGINX products and optimize the developer experience on Kubernetes.

  2. A few years back, we launched NGINX Amplify, a monitoring and telemetry SaaS offering for NGINX fleets. We didn’t really publicize it much. But thousands of developers found it and are still using it today. Amplify was and remains free. As part of our modernization pledge, we are adding a raft of new capabilities to Amplify. We aim to make it your trusted co‑pilot for standing up, watching over, and managing NGINX products at scale in real time. Amplify will not only monitor your NGINX instances but will help you configure, apply scripts to, and troubleshoot NGINX deployments.
  3. We are launching NGINX Agent, a lightweight app that you deploy alongside NGINX Open Source instances. It will include features previously offered only in commercial products, for example the dynamic configuration API. With NGINX Agent, you’ll be able to use NGINX Open Source in many more use cases and with far greater flexibility. It will also include far more granular controls which you can use to extend your applications and infrastructure. Agent helps you make smarter decisions about managing, deploying, and configuring NGINX. We’re working hard on NGINX Agent – keep an eye out for a blog coming in the next couple months to announce its availability!

Looking Ahead

In a year, I hope you ask me about these promises. If I can’t report real progress on all three, then hold me to it, please. And please understand – we are engaged and ready to talk with all of you. You are our best product roadmap. Please take our annual survey. Join NGINX Community Slack and tell us what you think. Comment and file PRs on the projects at our GitHub repo.

It’s going to be a great year, the best ever. We look forward to hearing more from you and please count on hearing more from us. Help us help you.

Getting Your Time Back with NGINX Unit
https://www.nginx.com/blog/getting-your-time-back-with-nginx-unit/ (Thu, 18 Aug 2022)

NGINX Sprint, our annual (and yes, virtual!) event is almost here. This year’s oceanic theme is Deep Dive into the World of NGINX, and August 23 through 25 NGINX sails home to its open source roots. Our destination? To show you how NGINX open source innovations help bring our vision for modern applications to life, in collaboration with CNCF contributors and projects like Grafana Labs and OpenTelemetry.

The modern application landscape is expansive, and in thought‑provoking demos and talks our Sprint experts will make sure you don’t get lost at sea. For example, renowned developer advocate and Principal Engineer at Google, Kelsey Hightower, takes the helm at the end of Day 2 to show Sprint attendees how to use NGINX Unit – our open source, universal web application server, reverse proxy, and static file server – to its full, time‑saving potential.

A Journey Through NGINX and Open Source with Kelsey Hightower airs August 24 at 9:10 AM PDT on the NGINX Sprint platform.


We really hope you’ll register and attend Sprint (it’s completely free), so we’re not going to give too much away about Kelsey’s session. But read on for a sneak peek.

NGINX Unit Lets You Write Less Code

When it comes to application servers, Kelsey’s #1 goal is to save time. The less code you have to write to deploy your app, the better. In his Sprint demo, Kelsey keeps it simple. With a basic application inside a single container, he shows just how much time you can get back when deploying NGINX Unit as a web application server.

Kelsey gets NGINX Unit to run on Cloud Run, serving multiple Go applications and static files from a single container image – and many things happen simultaneously in the background. For example, he doesn’t write any code for logging but logs still emerge for free, with NGINX Unit as the web application server doing the heavy lifting.

Kelsey steers away from complexity and discusses how NGINX Unit checks all the boxes by:

  • Running multiple binaries in the background
  • Proxying using a lower‑level protocol
  • Sending data to the application
  • Returning the response to the requester

Kelsey also offers critiques on ways that NGINX Unit can improve. We appreciate that, because our aim is to continuously refine and enhance our open source offerings.

Our Community Comes First

When F5 acquired NGINX, we made a promise to stay committed to open source. This included increasing our investments in developing NGINX open source projects. Today, we’re still committed to keeping that promise.

In a chat with NGINX Product Group VP and GM Rob Whiteley during the session, Kelsey admits he was initially skeptical about NGINX keeping our word about open source. However, once he played around with NGINX Unit, he saw NGINX does in fact innovate – rather than copying and pasting what’s already out there – while staying continuously aware of the patterns open source communities crave.

Just as we value Kelsey’s opinion, we also want your thoughts and feedback. NGINX is committed both to listening to our community and to investing in a better world. For every post‑event survey filled out during Sprint, we will donate to The Ocean Cleanup, a non‑profit organization developing advanced technologies to rid the oceans of plastic – their goal is to remove 90% over time!

Dive into NGINX Open Source

There’s still time to gear up for NGINX Sprint, which takes place August 23–25 and is packed with educational discussions, demos, and self‑paced labs. Dive on in and register today for free – the water’s fine!

The post Getting Your Time Back with NGINX Unit appeared first on NGINX.

]]>
Running Spring Boot Applications in a Zero Trust Environment with NGINX Unit https://www.nginx.com/blog/running-spring-boot-applications-zero-trust-environment-nginx-unit/ Thu, 19 Aug 2021 03:55:20 +0000 https://www.nginx.com/?p=67684 It feels like the word “security” is on everyone’s lips these days. It has never been easy to protect and secure applications, but running them in the cloud presents even more challenges. One concept that seems like a promising solution is “zero trust”, which Gartner defines as: …an approach where implicit trust is removed from [...]

Read More...

The post Running Spring Boot Applications in a Zero Trust Environment with NGINX Unit appeared first on NGINX.

]]>

It feels like the word “security” is on everyone’s lips these days. It has never been easy to protect and secure applications, but running them in the cloud presents even more challenges. One concept that seems like a promising solution is “zero trust”, which Gartner defines as:

…an approach where implicit trust is removed from all computing infrastructure. Instead, trust levels are explicitly and continuously calculated and adapted to allow just-in-time, just‑enough access to enterprise resources.

But exactly how does “zero trust” work in a cloud context, and what technologies are available to help you implement it? In this blog, we’ll consider zero trust in the context of a common use case:

You’re an insurance company with a varied set of APIs and services powered by Java. Now that you’ve migrated to the cloud, production workloads are automatically built by a CI/CD pipeline and deployed in a Kubernetes cluster at a public cloud provider. As you are dealing with sensitive customer information, one major requirement is to encrypt all traffic with TLS.

You’ve enabled encryption on the edge load balancer and the Ingress controller, but what’s the best way to encrypt the traffic between the Ingress controller and the application itself? That involves enabling the application server to handle TLS.

Many Java shops use Apache Tomcat as the application server of choice, and Spring Boot as a framework to build stand‑alone and production‑ready Spring applications more easily than with Java itself. In this blog, we show in detail how to configure Spring Boot with Apache Tomcat (and then NGINX Unit) for applications that can handle HTTPS traffic.

Spring Boot: Talk HTTPS to Me…

With almost 60,000 stars on GitHub as of this writing, Spring Boot is the shining star in the Java frameworks sky: easy to get started with, lightweight, and powerful at the same time. A Spring Boot project can be compiled into a self‑contained .jar file with a built‑in application server like Apache Tomcat. To start the Java service, you simply execute the .jar file and start sending traffic to the exposed port (by default, 8080). Simple!
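For example, with a Gradle project whose default artifact name matches the demo used later in this post (your artifact name may differ):

    # java -jar build/libs/demo-0.0.1-SNAPSHOT.jar
    # curl http://localhost:8080/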

Following are a few more steps you need to perform for your service to handle TLS connections (HTTPS traffic) properly. For more details about the steps, see the Spring documentation.

These instructions are for a self‑signed certificate and key, but for production environments we strongly recommend that you substitute a certificate‑key pair from an official Certificate Authority (CA).

  1. Create a key store containing a certificate and a key:

    # keytool -genkey -alias tomcat -keyalg RSA -keystore keystore.jks
  2. Place the key store in your container image, where Tomcat can access it.

  3. Add these properties to your application.properties file, replacing secret with the appropriate password:

    server.port = 8443
    server.ssl.key-store = classpath:keystore.jks
    server.ssl.key-store-password = secret

With this configuration in place your Spring Boot application listens on port 8443 for HTTPS connections. But what if you also want to accept HTTP connections? Once you’ve configured HTTPS in the application.properties file, you can’t also configure HTTP there; you must implement HTTP handling in the Java code itself. For a good example, see the spring-projects repo on GitHub.
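The gist of that example is registering an additional plain‑HTTP Tomcat connector from Java configuration. Here's a minimal sketch for Spring Boot 2.x (the class name is ours; adjust the port numbers to your setup):

    import org.apache.catalina.connector.Connector;
    import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
    import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class HttpConnectorConfiguration {

        // Adds a plain-HTTP connector on port 8080 alongside the HTTPS
        // connector on port 8443 configured in application.properties.
        @Bean
        public ServletWebServerFactory servletContainer() {
            TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
            Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
            connector.setScheme("http");
            connector.setPort(8080);
            factory.addAdditionalTomcatConnectors(connector);
            return factory;
        }
    }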

So it turns out that delegating Layer 4 TLS encryption to application frameworks like Spring Boot is possible, but not very straightforward. If you also write applications in other languages and frameworks (like Ruby and Rails, or Python and Flask), the situation is even more complicated – each framework has its own way of configuring listeners and handling keys and certificates. Fortunately, there’s something that makes the situation much simpler!

NGINX Unit to the Rescue!

NGINX Unit is an open source polyglot application server, a reverse proxy, and a static file server, written by the core NGINX engineering team for Unix‑like systems. With NGINX Unit you use a standardized API to simultaneously run and manage applications written in many different languages – as of this writing, it supports six languages in addition to Java: Go, JavaScript (Node.js®), Perl, PHP, Python, and Ruby.

Unit also enables you to configure HTTP and HTTPS interfaces independently of the applications using them. Let’s explore this powerful feature with our Spring Boot API example. First, we have to build the Spring Boot application for our Unit server. As Unit implements the Java Servlet API version 3, the only change is a line added to the Gradle or Maven build definitions. I used Gradle for my testing.

  1. Add the war plug‑in to the build.gradle file:

    plugins {
      id 'org.springframework.boot' version '2.4.4'
      id 'io.spring.dependency-management' version '1.0.11.RELEASE'
      id 'java'
      id 'war'
    }
  2. Build the .war file:

    # ./gradlew build
  3. The resulting file is build/libs/rootProject-Version.war, where rootProject and Version come from the project name and version set in your Gradle build – in this example, build/libs/demo-0.0.1-SNAPSHOT.war.

  4. Define the Unit configuration in a file called config.json:

    {
        "listeners": {
            "*:8080": {
                "pass": "applications/java"
            }
        },
        "applications": {
            "java": {
                "user": "unit",
                "group": "unit",
                "type": "java",
                "environment": {
                    "Deployment": "0.0.1"
                },
                "classpath": [],
                "webapp": "/path/to/build/libs/demo-0.0.1-SNAPSHOT.war"
            }
        }
    }
  5. Activate the configuration (for details, see the documentation):

    # curl -X PUT --data-binary @config.json --unix-socket \
           /path/to/control.unit.sock http://localhost/config

That’s it! The Spring Boot application is now running on Unit. No Tomcat or other Java application server is needed.
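A quick smoke test against the listener we just configured:

    # curl http://localhost:8080/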

Enabling HTTPS

You might be asking, “but what about HTTPS?” Fair enough – let’s enable it! That’s easy with the following steps. (As above, we’re using a self‑signed certificate. In a production environment, make sure you use CA‑signed certificates.)

  1. Create the self‑signed certificate bundle:

    # cat cert.pem ca.pem key.pem > bundle.pem
  2. Upload the bundle to Unit:

    # curl -X PUT --data-binary @bundle.pem --unix-socket \
           /path/to/control.unit.sock http://localhost/certificates/bundle
  3. Define the configuration for the HTTPS listener in a file called listener.json:

    "127.0.0.1:443": {
        "pass": "applications/java-app",
        "tls": {
            "certificate": "bundle"
        }
    }
  4. Activate the new listener:

    # curl -X PUT --data-binary @listener.json --unix-socket \
           /path/to/control.unit.sock 'http://localhost/config/listeners/127.0.0.1:443'
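To verify, send a request to the new listener; the --insecure flag is needed here only because our certificate is self‑signed:

    # curl --insecure https://127.0.0.1/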

The application now accepts TLS‑encrypted connections – without restarting or rebooting either the application or Unit. But the most powerful thing is that the preceding process is the same for applications written in any of the languages and frameworks supported by Unit. There’s no need to dig into language‑specific details to configure HTTPS.

Summary

The powerful NGINX Unit listener feature makes supporting HTTP and HTTPS simple and completely application‑agnostic, because encryption is applied at the level of the listener, not the application. To learn about other TLS features like Server Name Indication (SNI) and custom OpenSSL configuration commands, see the NGINX Unit documentation.

To get started with NGINX Unit, see the installation instructions.

NGINX Plus subscribers get support for NGINX Unit at no additional charge. Start your free 30-day trial today or contact us to discuss your use cases.

The post Running Spring Boot Applications in a Zero Trust Environment with NGINX Unit appeared first on NGINX.

]]>
Updates to NGINX Unit for Summer 2021 https://www.nginx.com/blog/nginx-unit-updates-for-summer-2021-now-available/ Fri, 13 Aug 2021 15:55:52 +0000 https://www.nginx.com/?p=67637 Welcome to the latest installment in our series of blogs about new features in NGINX Unit! Admittedly, it’s been a while since we provided an update, so the time has come to share the details about our latest versions – NGINX Unit 1.23.0 and 1.24.0  – and explore how they make the life of an NGINX Unit enthusiast [...]

Read More...

The post Updates to NGINX Unit for Summer 2021 appeared first on NGINX.

]]>

Welcome to the latest installment in our series of blogs about new features in NGINX Unit! Admittedly, it’s been a while since we provided an update, so the time has come to share the details about our latest versions – NGINX Unit 1.23.0 and 1.24.0 – and explore how they make the life of an NGINX Unit enthusiast a whole lot easier.

But first, some great news about our increasingly international team: two more developers, Oisín Canty and Zhidao Hong (洪志道), joined us this year, bringing with them lots of diverse expertise, some serious affection for NGINX Unit, and a major boost in overall productivity. Both have hit the ground running, coming up with a plethora of new features in mere months and participating in the design and development of the capabilities discussed below. Meanwhile, Andrei Suvorov, our next‑newest team member, has already proven his worth, contributing essential work to our latest releases. Speaking of which…

TLS: SNI and Configuration Commands

The first set of changes to NGINX Unit revolves around its implementation of the TLS protocol. Version 1.23.0 introduces support for the Server Name Indication (SNI) extension to TLS, providing a straightforward way to map certificates between the multiple sites and domains you serve from a single NGINX Unit deployment.

First, let’s briefly review how NGINX Unit has handled certificates since support for TLS was added in version 1.4.0: you bundle up certificate chains, upload them via the control API, and assign the bundles to listeners. Pretty neat, but the one-to-one mapping between bundles and listeners had its limitations. Now you can specify an array of bundles in a single listener:
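Here’s a minimal sketch (the bundle and application names are illustrative):

    {
        "listeners": {
            "*:443": {
                "pass": "applications/example-app",
                "tls": {
                    "certificate": [
                        "example.com.bundle",
                        "example.org.bundle"
                    ]
                }
            }
        }
    }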

Behind the scenes, NGINX Unit works its magic to make the most appropriate choice for each arriving request, using the common and alternative names of the certificates in each bundle. If the client specifies a server name, NGINX Unit responds with a certificate from the corresponding bundle. If the name matches multiple bundles, exact matches have priority over wildcards such as *.example.com. If any ambiguity remains, NGINX Unit uses the first bundle on the list, which it also does when there’s no match or the request doesn’t include a server name at all.

Finally, those familiar with the NGINX ssl_conf_command and ssl_ciphers directives will probably rejoice that NGINX Unit now also enables you to supply a set of OpenSSL configuration commands per listener:
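For example (the listener and the specific commands are illustrative; any configuration command your OpenSSL version supports can be used):

    {
        "*:443": {
            "pass": "applications/example-app",
            "tls": {
                "certificate": "bundle",
                "conf_commands": {
                    "minprotocol": "TLSv1.3",
                    "ciphersuites": "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256"
                }
            }
        }
    }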

Static Content: MIME Filtering

Unit’s content‑handling abilities received a major boost in version 1.24.0. You can now use Unit’s support of built‑in and customizable MIME types in your routing schemes. Simple as that may seem, it tidies up routing schemes that involve static content sharing. Consider this route portion:
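A representative sketch, assuming a PHP application named php-app:

    {
        "action": {
            "share": "/www/php-app/",
            "types": [
                "image/*",
                "text/css",
                "application/javascript"
            ],
            "fallback": {
                "pass": "applications/php-app"
            }
        }
    }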

Here, we effectively separate static content from .php scripts within a single share action, achieving what previously required two different route steps.

The types array supports the same pattern mechanism that the route conditions use, so we can employ some neat magic to restrict the static content types we serve. When NGINX Unit cannot infer the MIME type of a requested file, it considers the type to be empty. Thus, we can configure Unit to serve only files whose MIME types it recognizes by including the negated empty string ("!") – meaning “do not serve files with empty MIME types” – in the types array:
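In configuration terms, that looks something like this sketch:

    {
        "action": {
            "share": "/www/data/static/",
            "types": ["!"]
        }
    }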

Note that you can also define custom MIME types to control which file types Unit serves. In this example, we define the set of filenames and extensions with type text/plain that Unit recognizes and allows to be used in the types array:
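Here’s an illustrative definition (the extensions and filenames are ours):

    {
        "settings": {
            "http": {
                "static": {
                    "mime_types": {
                        "text/plain": [".log", "README", "CHANGES"]
                    }
                }
            }
        }
    }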

This shows how you can use the global mime_types option to extend the capabilities of the types option beyond what’s built in, adapting it to your routing purposes.

Static Content: Chrooting and Path Restrictions

NGINX Unit 1.24.0 introduces three new options for fine‑tuning how Unit serves static content on servers running Linux kernel version 5.6 and higher. They are intended to prevent accidental or deliberate misuse of NGINX Unit’s mechanics for serving static content. The first one is chroot:
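An illustrative configuration:

    {
        "action": {
            "share": "/www/data/static/",
            "chroot": "/www/data/"
        }
    }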

This option sets the root directory for pathname resolution within the directory named by the share option. One notable effect is that symbolic links to absolute pathnames within the share directory are treated as relative to the new root. Given the configuration above, for example, if you create a symbolic link in /www/data/static/ to /log/app.log, it is resolved as /www/data/log/app.log. Note, however, that the treatment of symbolic links is also affected by the setting of the follow_symlinks option, discussed next.

The two remaining options, follow_symlinks and traverse_mounts, control NGINX Unit’s attitude toward (surprise!) symbolic links and mount points:
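For example:

    {
        "action": {
            "share": "/www/data/static/",
            "follow_symlinks": false,
            "traverse_mounts": false
        }
    }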

When these options are set to false (the default is true), requests fail if they require resolution of a symbolic link or mount point within the share directory (here, /www/data/static/); Unit is effectively locked within the confines of the share directory.

The interaction of chroot with follow_symlinks and traverse_mounts can be a bit intricate, so let’s take a moment to consider the merged configuration:
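Combining the options from above:

    {
        "action": {
            "share": "/www/data/static/",
            "chroot": "/www/data/",
            "follow_symlinks": false,
            "traverse_mounts": false
        }
    }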

When chroot is set, the values of follow_symlinks and traverse_mounts only affect portions of the path after the new root. This means that subdirectories that are a part of the chroot path (here, www/ and data/) can be symbolic links or mount points, but any symbolic links or mount points beyond the final element in the chroot path (here, that includes static/) are not resolved.

Node.js Override

We hope you’ll be glad to learn that Node.js® apps now require zero code alterations to run in Unit. This is achieved with a new loader module (unit-http/loader.mjs) that works behind the scenes at application startup, laying all the necessary groundwork for a truly server‑agnostic app:
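The resulting application object looks roughly like this (the app name and script are illustrative; the invocation pattern follows the Unit docs):

    {
        "applications": {
            "node-app": {
                "type": "external",
                "working_directory": "/www/node-app/",
                "executable": "/usr/bin/env",
                "arguments": [
                    "node",
                    "--loader",
                    "unit-http/loader.mjs",
                    "--require",
                    "unit-http/loader",
                    "app.js"
                ]
            }
        }
    }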

Admittedly, this looks a bit cumbersome. We consider it a win, though, because it makes a significant redesign unnecessary, instead achieving the desired effect by using NGINX Unit’s existing mechanics for running a designated executable (here, /usr/bin/env).

Python Targets

NGINX Unit 1.24.0 extends to Python apps the level of granularity previously granted only to PHP apps. You can configure several scripts within a single app to simplify your routing and application setup:
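An illustrative sketch (the application name and path are ours):

    {
        "applications": {
            "python-app": {
                "type": "python",
                "path": "/www/python-app/",
                "targets": {
                    "foo": {
                        "module": "foo.wsgi"
                    },
                    "bar": {
                        "module": "bar.wsgi"
                    }
                }
            }
        }
    }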

This example puts two modules, foo/wsgi.py and bar/wsgi.py, into the context of a single app and its processes, which conserves system resources otherwise consumed by running multiple scripts that comprise a single app. Elsewhere in the configuration, you can pass requests to these portions of your app quite independently:
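For instance (the URI patterns are illustrative):

    {
        "routes": [
            {
                "match": { "uri": "/foo/*" },
                "action": { "pass": "applications/python-app/foo" }
            },
            {
                "match": { "uri": "/bar/*" },
                "action": { "pass": "applications/python-app/bar" }
            }
        ]
    }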

Conclusion

In this blog post, we’ve discussed Unit versions 1.23.0 and 1.24.0, but version 1.25.0 is not too far away, so stay alert for the new features and numerous bug fixes it’s about to bring to your table.

As always, we invite you to check out our roadmap, where you can find out whether your favorite feature is going to be implemented any time soon, and rate and comment on our in‑house initiatives. Feel free to open new issues in our repo on GitHub and share your ideas for improvement.

For a complete list of the changes and bug fixes in versions 1.23.0 and 1.24.0, see the NGINX Unit changelog.

NGINX Plus subscribers get support for NGINX Unit at no additional charge. Start your free 30-day trial today or contact us to discuss your use cases.

The post Updates to NGINX Unit for Summer 2021 appeared first on NGINX.

]]>
Demoing NGINX at Sprint 2.0 – From Blast Off to Stable Orbit https://www.nginx.com/blog/demoing-nginx-at-sprint-2-0/ Mon, 02 Aug 2021 20:50:31 +0000 https://www.nginx.com/?p=67288 At NGINX Sprint 2021, our teams are getting together to demo how NGINX accelerates just about every step in an organization’s app development journey – from deploying the first reverse proxy to launching a service mesh. Join us live on Tuesday, August 24 to watch seven demos and engage with the speakers via chat. Can’t watch live? Don’t [...]

Read More...

The post Demoing NGINX at Sprint 2.0 – From Blast Off to Stable Orbit appeared first on NGINX.

]]>
At NGINX Sprint 2021, our teams are getting together to demo how NGINX accelerates just about every step in an organization’s app development journey – from deploying the first reverse proxy to launching a service mesh. Join us live on Tuesday, August 24 to watch seven demos and engage with the speakers via chat. Can’t watch live? Don’t worry – all the sessions will be linked from this blog (and available on our YouTube channel) after the event.

Move from Initial Deployment to Enterprise Ready with NGINX

Before you can get your new application out the door, you need to ensure that it has what it needs to run well, stay up and available, and remain secure. Enter NGINX! Join this session to learn how to begin your application delivery journey with open source tools that start you on the right path. Then see how, when the time is right, it’s easy to transition to NGINX enterprise‑grade services with their advanced capabilities, and achieve visibility into all your deployed instances.

Featuring:

Demo resources:

Manage Apps and APIs at Scale

Once you’ve deployed your NGINX instances, you need to deliver the self‑service and automation capabilities that enable your DevOps teams to deploy their apps faster. It’s also crucial to manage and secure all the APIs exposed by your apps. Join this session to learn how NGINX Controller helps you accomplish these objectives.

Featuring:

Demo resources:

Automate Application Security with NGINX

Gone are the days when you could simply bolt on security late in the application delivery lifecycle. In today’s world, integrated security must become a normal part of any DevOps delivery pipeline. In this session, we demo how the NGINX App Protect WAF and DoS solutions integrate into your workflows without adding friction.

Featuring:

Demo resources:

Bring an Adaptive App to Life with Ecosystem Partners

Modern application architectures and microservices can seem complex and hard to maintain, but that doesn’t have to be the case. This session provides an early look at our modern application architecture, which is easy to deploy on various infrastructure platforms and to scale out globally. We highlight partner technologies that have their roots in open source and are leaders in their spaces.

Featuring:

  • Damian Curry, Community and Alliances Technical Director – NGINX
  • Elijah Zupancic, Solutions Architect, Community and Alliances – NGINX

Demo resources:

Build, Deliver and Visualize Your Kubernetes App

When you’re new to Kubernetes, your highest priorities are probably learning how to build microservices apps and get visibility into their health. Join this session to learn how to serve multiple polyglot apps with NGINX Unit and get insight into their performance with NGINX Ingress Controller.

Featuring:

  • Amir Rawdat, Technical Marketing Engineer – NGINX
  • Jenn Gile, Manager of Product Marketing – NGINX

Demo resources:

Master Microservices with End-to-End Encryption

When services in a Kubernetes environment exchange sensitive data like passwords and credit card numbers, you must implement end-to-end encryption so bad actors can’t access it. In this session, we show how to secure the edge with NGINX Ingress Controller, set up secure access control between services with NGINX Service Mesh, and use both products to secure egress traffic with mTLS.

Featuring:

Demo resources:

Now Arriving: An Immersive Experience Built on NGINX Open Source

In 2020, major cities around the globe became epicenters for trial and perseverance. Learn how NGINX and Creative Technologist Ben Chaykin created the interactive art installation Now Arriving – using a modern application powered by NGINX Open Source, Arduino, AWS, React, and the node‑based visual development platform TouchDesigner – to bring light back to their community. We hope to inspire you to do the same!

Featuring:

Demo resources:

Get Started!

No matter where you are in your app development journey, you can get started with free 30-day trials of all our commercial solutions:

Or get started with free and open source offerings:

The post Demoing NGINX at Sprint 2.0 – From Blast Off to Stable Orbit appeared first on NGINX.

]]>