AI, Red Teaming, and Post-Quantum Cryptography: Key Insights from RSA 2025

Join Aubrey and Byron at RSA Conference 2025 as they dive into transformative topics like artificial intelligence, red-teaming strategies, and post-quantum cryptography. From groundbreaking OWASP sessions to emerging AI threats, this episode highlights key insights that shape the future of cybersecurity. Discover the challenges in red-team AI testing, the implications of APIs in multi-cloud environments, and how quantum-resistant cryptography is rising to meet AI-driven threats. Don't miss this exciting recap of RSA 2025!

New F5 OWASP LLM Top 10 (2025) operations guide available!

F5 is excited to announce the release of the OWASP LLM Top 10 2025 Operations Guide. The OWASP Top 10 for LLMs (2025) defines the most serious security risks for large language models (LLMs). The F5 operations guide details each of the ten risks and shows you how to mitigate them with F5's portfolio of products, using a defense-in-depth approach.

LLMs And Trust, Google A2A Protocol And The Cost Of Politeness In AI: AI Friday

It's AI Friday! We're diving into the world of artificial intelligence like never before! 🎩 On this Hat Day edition (featuring NFL draft banter), we discuss fascinating topics like LLMs (Large Language Models) and their trust—or lack thereof—in humanity; Google's innovative Agent-to-Agent (A2A) protocol; and how politeness towards AI incurs millions in operational costs. We also touch on pivotal AI conversations around zero trust, agentic AI, and the collapse of traditional control and data planes. Join us as we dissect how AI shapes the future of human interaction, enterprise-level security, and even animal communication. Don't miss out on this engaging, informative, and slightly chaotic conversation about cutting-edge advancements in AI. Remember to like, subscribe, and share with your community so you never miss an episode of AI Friday!

Articles:
- What do LLMs Think Of Us?
- At What Price, Politeness?
- Google Agent2Agent Protocol (A2A)

2025 Top AI Use Cases, AI For Nuclear Safety & CaMeL Prompt Injection Fixes

It's AI Friday! This week, we unpack the latest AI news and trends, including:
- The top AI use cases for 2025
- Intriguing new developments from OpenAI
- AI in nuclear safety with PG&E (what could possibly go wrong?)
- Novel defenses against prompt injection attacks with CaMeL
- LLM-powered conversations with dolphins

Join Aubrey, Joel, Ken, and Byron as they blend in-depth insights with good-natured humor. Like and subscribe as we explore the future of AI together!

Related Content:
- How are people using AI in 2025?
- OpenAI Models o3 and o4 think in images and concepts.
- OpenAI's 'Break Glass In Case of SkyNet' paper
- AI For Nuclear Safety - PG&E
- Can a CaMeL fix prompt injection??
- Google's DolphinGemma LLM.. Yep. It's for talking to dolphins.

Simplifying OIDC and SSO with the New NGINX Plus R34 OIDC Module
Introduction: Why OIDC and SSO Matter

As web infrastructures scale and modernize, strong and standardized methods of authentication become essential. OpenID Connect (OIDC) provides a flexible layer on top of OAuth 2.0, enabling both user authentication (login) and authorization (scopes, roles). By adopting OIDC for SSO, you can:

- Provide a frictionless login experience across multiple services.
- Consolidate user session management, removing custom auth code from each app.
- Lay a foundation for Zero Trust policies by validating and enforcing identity at the network's edge.

While Zero Trust is a broader security model that extends beyond SSO alone, implementing OIDC at the proxy level is an important piece of the puzzle. It ensures that every request is associated with a verified identity, enabling fine-grained policies and tighter security boundaries across all your applications. NGINX, acting as a reverse proxy, is an ideal place to manage these OIDC flows. However, the journey to robust OIDC support in NGINX has evolved - from an njs-based approach with scripts and maps, to a far more user-friendly native module in NGINX Plus R34.

The njs-based OIDC Solution

Before the native OIDC module, many users turned to the njs-based reference implementation. This setup combines multiple pieces:

- An njs script to handle the OIDC flows (redirecting to the IdP, exchanging tokens, etc.).
- The auth_jwt module for token validation.
- The keyval module to store and pair a session cookie with the actual ID token.

While it covers the essential OIDC steps (redirects, code exchanges, and forwarding claims), it has some drawbacks:

- Configuration complexity. Most of the logic hinges on creative use of NGINX directives (like map), which can become cumbersome, especially if you use more than one authentication provider or your environment changes frequently.
- Limited Metadata Discovery. It doesn't natively fetch the IdP's `.well-known/openid-configuration`.
Instead, a separate bash script queries the IdP and rewrites parts of the NGINX config. Any IdP changes require you to re-run that script and reload NGINX.
- Performance Overhead. The njs solution effectively revalidates ID tokens on every request. Why? Because NGINX on its own doesn't maintain a traditional server-side session object. Instead, it simulates a "session" by tying a cookie to the user's id_token in keyval. auth_jwt checks the token each time, retrieving it from keyval, verifying the signature and expiration, and extracting claims. Under heavy load, this constant JWT validation can become expensive.

For many, that extra overhead conflicts with how modern OIDC clients usually behave: short-lived session cookies, with the token validated only once per session or via some more efficient approach. Hence the motivation for a native OIDC module.

Meet the New Native OIDC Module in NGINX Plus R34

With the complexities of the njs-based approach in mind, NGINX introduced a fully integrated OIDC module in NGINX Plus R34. This module is designed to be a "proper" OIDC client, including:

- Automatic TLS-only communication with the IdP.
- Full metadata discovery (no external scripts needed).
- Authorization code flows.
- Token validation and caching.
- A real session model using secure cookies.
- Access token support (including automatic refresh).
- Straightforward mapping of user claims to NGINX variables.

We also have a Deployment Guide that shows how to set up this module for popular IdPs like Okta, Keycloak, Entra ID, and others. That guide focuses on typical use cases (obtaining tokens, verifying them, and passing claims upstream). Here, however, we'll go deeper into how the module works behind the scenes, using Keycloak as our IdP.

Our Scenario: Keycloak + NGINX Plus R34

We'll demonstrate with a straightforward Keycloak realm called nginx and a client also named nginx, with "client authentication" and the "standard flow" enabled.
We have:

- Keycloak as the IdP, running at https://kc.route443.dev/realms/nginx.
- NGINX Plus R34 configured as a reverse proxy.
- A simple upstream service at http://127.0.0.1:8080.

Minimal Configuration Example:

http {
    resolver 1.1.1.1 ipv4=on valid=300s;

    oidc_provider keycloak {
        issuer        https://kc.route443.dev/realms/nginx;
        client_id     nginx;
        client_secret secret;
    }

    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        ssl_certificate     /etc/ssl/certs/fullchain.pem;
        ssl_certificate_key /etc/ssl/private/key.pem;

        location / {
            auth_oidc keycloak;

            proxy_set_header sub   $oidc_claim_sub;
            proxy_set_header email $oidc_claim_email;
            proxy_set_header name  $oidc_claim_name;

            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        # Simple test backend
        listen 8080;

        location / {
            return 200 "Hello, $http_name!\nEmail: $http_email\nKeycloak sub: $http_sub\n";
            default_type text/plain;
        }
    }
}

Configuration Breakdown

- oidc_provider keycloak {}. Points to our Keycloak issuer, plus client_id and client_secret, and automatically triggers .well-known/openid-configuration discovery. An important note: all interaction with the IdP is secured exclusively over SSL/TLS, so NGINX must trust the certificate presented by Keycloak. By default, this trust is validated against your system's CA bundle (the default CA store for your Linux or FreeBSD distribution). If the IdP's certificate is not included in the system CA bundle, you can explicitly specify a trusted certificate or chain with the ssl_trusted_certificate directive so that NGINX can validate and trust your Keycloak certificate.
- auth_oidc keycloak. For any request to https://n1.route443.dev/, NGINX checks whether the user has a valid session. If not, it starts the OIDC flow.
- Passing claims upstream. We add the headers sub, email, and name based on $oidc_claim_sub, $oidc_claim_email, and $oidc_claim_name, the module's built-in variables extracted from the token.
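Conceptually, the claim-to-header mapping in the location block behaves like a simple lookup: the text after the $oidc_claim_ prefix names the claim to pull from the validated token. Here is a small Python model of that behavior (a sketch for illustration only, not NGINX internals; the claim values and the helper names are invented):

```python
def resolve_oidc_variable(var_name, claims):
    """Model how a $oidc_claim_<name> variable maps onto a token claim:
    everything after the prefix is the claim name looked up in the token."""
    prefix = "oidc_claim_"
    if not var_name.startswith(prefix):
        raise ValueError("not an OIDC claim variable: " + var_name)
    # Missing claims resolve to an empty string, like an unset variable.
    return str(claims.get(var_name[len(prefix):], ""))

def build_upstream_headers(claims):
    """The three proxy_set_header lines from the config above, as data."""
    mapping = {"sub": "$oidc_claim_sub",
               "email": "$oidc_claim_email",
               "name": "$oidc_claim_name"}
    return {header: resolve_oidc_variable(var.lstrip("$"), claims)
            for header, var in mapping.items()}
```

The upstream service then sees plain sub, email, and name request headers, which is exactly what the test backend on port 8080 echoes back.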
Step-by-Step: Under the Hood of the OIDC Flow

Retrieving and Caching OIDC Metadata

As soon as you send an HTTP GET request to https://n1.route443.dev, NGINX redirects you to Keycloak's authentication page. Before that redirect happens, though, several interesting steps occur behind the scenes. Let's take a closer look at the very first thing NGINX does in this flow:

1. NGINX checks whether it has cached OIDC metadata for the IdP.
2. If no valid cache exists, NGINX constructs a metadata URL by appending /.well-known/openid-configuration to the issuer you specified in the config. It then resolves the IdP's hostname using the resolver directive.
3. NGINX parses the JSON response from the IdP, extracting critical parameters such as issuer, authorization_endpoint, and token_endpoint. It also inspects response_types_supported to confirm that this IdP supports the authorization code flow and ID tokens. These details are essential for the subsequent steps in the OIDC process.
4. NGINX caches these details for one hour (or for however long the IdP's Cache-Control headers specify), so it doesn't need to re-fetch them on every request.

This process happens in the background. However, you might notice a slight delay for the very first user if the cache is empty, since NGINX needs a fresh copy of the metadata.
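The four steps above can be sketched as a small Python model (a toy illustration of the described behavior, not NGINX's actual implementation; the fetch callable and class name are invented for this sketch):

```python
import time

DEFAULT_TTL = 3600  # the one-hour default described above

class MetadataCache:
    """Simplified model of per-issuer OIDC metadata discovery and caching."""

    def __init__(self, fetch):
        self.fetch = fetch   # callable: url -> (metadata_dict, cache_control_header)
        self.store = {}      # issuer -> (metadata, expiry timestamp)

    def metadata_url(self, issuer):
        # Step 2: append the well-known path to the configured issuer.
        return issuer.rstrip("/") + "/.well-known/openid-configuration"

    def get(self, issuer):
        cached = self.store.get(issuer)
        if cached and cached[1] > time.time():
            return cached[0]                      # still fresh: no round trip
        meta, cache_control = self.fetch(self.metadata_url(issuer))
        # Step 3: confirm the authorization code flow is supported.
        assert "code" in meta.get("response_types_supported", [])
        # Step 4: honor a max-age hint if present, else default to one hour.
        ttl = DEFAULT_TTL
        for part in (cache_control or "").split(","):
            if part.strip().startswith("max-age="):
                ttl = int(part.strip().split("=", 1)[1])
        self.store[issuer] = (meta, time.time() + ttl)
        return meta
```

With a cache like this, only the first request per issuer pays the discovery round trip, which matches the "slight delay for the very first user" noted above.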
Below is an example of the metadata request and response:

NGINX -> IdP:
HTTP GET /realms/nginx/.well-known/openid-configuration

IdP -> NGINX:
HTTP 200 OK
Content-Type: application/json
Cache-Control: no-cache   // Means NGINX will store it for 1 hour by default

{
  "issuer": "https://kc.route443.dev/realms/nginx",
  "authorization_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth",
  "token_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/token",
  "jwks_uri": "http://kc.route443.dev:8080/realms/nginx/protocol/openid-connect/certs",
  "response_types_supported": [
    "code", "none", "id_token", "token", "id_token token",
    "code id_token", "code token", "code id_token token"
  ]
  // ... other parameters
}

This metadata tells NGINX everything it needs to know about how to redirect users for authentication, where to request tokens afterward, and which JWT signing keys to trust. By caching these results, NGINX avoids unnecessary lookups on subsequent logins, making the process more efficient for every user who follows.

Building the Authorization URL & Setting a Temporary Session Cookie

Now that NGINX has discovered and cached the IdP's metadata, it's ready to redirect your browser to Keycloak for the actual login. Here's where the OpenID Connect Authorization Code Flow begins in earnest.
NGINX adds a few crucial parameters, like response_type=code, client_id, redirect_uri, state, and nonce, to the authorization_endpoint it learned from the metadata, then sends you the following HTTP 302 response:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?response_type=code&scope=openid&client_id=nginx&redirect_uri=http%3A%2F%2Fn1.route443.dev%2Foidc_callback&state=state&nonce=nonce
Set-Cookie: NGX_OIDC_SESSION=temp_cookie; Path=/; Secure; HttpOnly

At this point, you've probably noticed the Set-Cookie: NGX_OIDC_SESSION=temp_cookie; line. This is a temporary session cookie, sometimes called a "pre-session" cookie. NGINX needs it to keep track of your "in-progress" authentication state, so once you come back from Keycloak with the authorization code, NGINX will know how to match that code to your browser session. Since NGINX hasn't actually validated any tokens yet, this cookie is only ephemeral. It remains a placeholder until Keycloak returns valid tokens and NGINX completes the final checks. Once that happens, you'll get a permanent session cookie, which will then store your real session data across requests.

User Returns to NGINX with an Authorization Code

Once the user enters their credentials on Keycloak's login page and clicks "Login", Keycloak redirects the browser back to the URL specified in your redirect_uri parameter. In our example, that happens to be http://n1.route443.dev/oidc_callback. It's worth noting that /oidc_callback is just the default location; if you ever need something different, you can tweak it via the redirect_uri directive in the OIDC module configuration. When Keycloak redirects the user, it includes several query parameters in the URL, most importantly the code parameter (the authorization code) and state, which NGINX uses to ensure this request matches the earlier session-setup steps.
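The redirect construction and the state check on the way back can be sketched in Python (a conceptual model of the standard authorization-code handshake, not the module's code; function names and parameter values are invented):

```python
import secrets
from urllib.parse import urlencode

def build_authorization_redirect(authorization_endpoint, client_id, redirect_uri):
    """Build the Location value for the 302 to the IdP, together with the
    state and nonce that must survive until the callback (in NGINX's case,
    keyed off the pre-session NGX_OIDC_SESSION cookie)."""
    state = secrets.token_urlsafe(16)   # ties the callback to this browser
    nonce = secrets.token_urlsafe(16)   # later echoed back inside the ID token
    params = {
        "response_type": "code",
        "scope": "openid",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "nonce": nonce,
    }
    return authorization_endpoint + "?" + urlencode(params), state, nonce

def handle_callback(query, expected_state):
    """On /oidc_callback, reject the request unless state matches."""
    if query.get("state") != expected_state:
        raise ValueError("state mismatch: stale or forged login attempt")
    return query["code"]   # safe to exchange at the token_endpoint now
```

Random, unguessable state is what binds the callback to the browser that started the login, which is why the pre-session cookie matters.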
Here's a simplified example of what the callback request might look like:

User Agent -> NGINX:
HTTP GET /oidc_callback
Query parameters:
  state=state
  session_state=keycloak_session_state
  iss=https://kc.route443.dev/realms/nginx
  code=code

Essentially, Keycloak is handing NGINX "proof" that this user successfully logged in, along with a cryptographic token (the code) that lets NGINX exchange it for real ID and access tokens. Since /oidc_callback is tied to NGINX's native OIDC logic, NGINX automatically grabs these parameters, checks whether the state parameter matches what it originally sent to Keycloak, and then prepares to make a token request to the IdP's token_endpoint. Note that the OIDC module does not use the iss parameter to identify the provider; provider identity is verified through the state parameter and the pre-session cookie, which references a provider-specific key.

Exchanging the Code for Tokens and Validating the ID Token

Once NGINX receives the /oidc_callback request and checks all parameters, it proceeds by sending a POST request to Keycloak's token_endpoint, supplying the authorization code, client credentials, and the redirect_uri:

NGINX -> IdP:
POST /realms/nginx/protocol/openid-connect/token
Host: kc.route443.dev
Authorization: Basic bmdpbng6c2VjcmV0
Form data:
  grant_type=authorization_code
  code=5865798e-682e-4eb7-8e3e-2d2c0dc5132e.f2abd107-35c1-4c8c-949f-03953a5249b2.nginx
  redirect_uri=https://n1.route443.dev/oidc_callback

Keycloak responds with a JSON object containing at least an id_token and an access_token, plus token_type=bearer. Depending on your IdP's configuration and the scope you requested, the response might also include a refresh_token and an expires_in field. The expires_in value indicates how long the access token is valid (in seconds), and NGINX can use it to decide when to request a new token on the user's behalf.
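The pieces of that token request are easy to reproduce. The sketch below (illustrative only; the helper name is invented) shows how the client credentials become the Authorization header: with the example config's client_id "nginx" and client_secret "secret", base64 of "nginx:secret" yields exactly the value seen in the trace above.

```python
import base64
from urllib.parse import urlencode

def token_request(token_endpoint, client_id, client_secret, code, redirect_uri):
    """Assemble the authorization-code token exchange: HTTP Basic client
    credentials plus a form-encoded body, as in the trace above."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": "Basic " + creds,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return token_endpoint, headers, body
```

This also makes plain why the client_secret must be protected: anyone holding it can decode the Basic header and impersonate the relying party at the token endpoint.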
At this point, the module also spends a moment validating the ID token's claims, ensuring that fields like iss, aud, exp, and nonce align with what was sent initially. If any of these checks fail, the token is deemed invalid and the request is rejected. Once everything checks out, NGINX stores the tokens and session details. Here, the OIDC module takes advantage of the keyval mechanism to keep track of user sessions. You might wonder, "Where is that keyval zone configured?" The short answer is that it's automatic for simplicity, unless you want to override it with your own settings. By default, you get up to 8 MB of session storage, which is more than enough for most use cases. If you need something else, you can specify a custom zone via the session_store directive. If you're curious to see this store in action, you can inspect it through the NGINX Plus API endpoint, for instance:

GET /api/9/http/keyvals/oidc_default_store_keycloak

(where oidc_default_store_ is the prefix and keycloak is your oidc_provider name). With the tokens now safely validated and stashed, NGINX is ready to finalize the session. The module issues a permanent session cookie back to the user and transitions them into the "logged-in" state, exactly what we'll see in the next step.

Finalizing the Session and Passing Claims Upstream

Once NGINX verifies all tokens and securely stores the user's session data, it sends a final HTTP 302 back to the client, this time setting a permanent session cookie:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://n1.route443.dev/
Set-Cookie: NGX_OIDC_SESSION=permanent_cookie; Path=/; Secure; HttpOnly

At this point, the user officially has a valid OIDC session in NGINX. Armed with that session cookie, they can continue sending requests to the protected resource (in our case, https://n1.route443.dev/).
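The ID-token checks described in this step (iss, aud, exp, and nonce) can be sketched as follows — a conceptual model of the standard OIDC validation rules, not the module's code, operating on an already-decoded claims dictionary:

```python
import time

def validate_id_token_claims(claims, issuer, client_id, expected_nonce, now=None):
    """The checks described above: iss, aud, exp, and nonce must all match
    what this relying party configured and originally sent."""
    now = time.time() if now is None else now
    if claims.get("iss") != issuer:
        raise ValueError("unexpected issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]  # aud may be a list
    if client_id not in audiences:
        raise ValueError("token not issued for this client")
    if claims.get("exp", 0) <= now:
        raise ValueError("token expired")
    if claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch: possible token replay")
    return claims   # safe to store in the session and expose as variables
```

Only after all four checks pass does it make sense to persist the session, which is exactly the point at which the module swaps the pre-session cookie for the permanent one.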
Each request now carries the NGX_OIDC_SESSION cookie, so NGINX recognizes the user as authenticated and automatically injects the relevant OIDC claims into request headers, such as sub, email, and name. This means your upstream application at http://127.0.0.1:8080 can rely on these headers to know who the user is and handle any additional logic accordingly.

Working with OIDC Variables

Now let's talk about how you can leverage the OIDC module for more than just simple authentication. One of its biggest strengths is its ability to extract token claims and forward them upstream in request headers. Any claim in the token is available as an NGINX variable named $oidc_claim_name, where name is whichever claim you'd like to extract. In our example, we've already shown how to pass sub, email, and name, but you can use any claims that appear in the token. For a comprehensive list of possible claims, check the OIDC specification as well as your IdP's documentation. Beyond individual claims, you can also access the entire ID and access tokens directly via $oidc_id_token and $oidc_access_token. These variables come in handy if you need to pass an entire token in a request header, or if you'd like to inspect its contents for debugging purposes. As you can see, configuring NGINX as a reverse proxy with OIDC support doesn't require you to be an authentication guru. All you really need to do is set up the module, specify the parameters you want, and decide which token claims you'd like to forward as headers.

Handling Nested or Complex Claims (Using auth_jwt)

Sometimes the claim you need to extract is actually a nested object, or even an array. That's not especially common, but it can happen if your Identity Provider returns complex data structures in the token. Currently, the OIDC module can't directly parse nested claims; this is a known limitation that should be addressed in future releases. In the meantime, your best workaround is to use the auth_jwt module.
Yes, it's a bit of a detour, but right now it's the only way (whether you use the njs-based approach or the native OIDC module) to retrieve more intricate structures from a token. Let's look at an example where the address claim is itself an object containing street, city, and zip, and we only want the city field forwarded as a header:

http {
    auth_jwt_claim_set $city address city;

    server {
        ...
        location / {
            auth_oidc keycloak;
            auth_jwt off token=$oidc_id_token;

            proxy_set_header x-city $city;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}

Notice how we've set auth_jwt off token=$oidc_id_token. We're effectively telling auth_jwt not to revalidate the token (it was already validated during the initial OIDC flow) but to focus on extracting additional claims from it. Meanwhile, the auth_jwt_claim_set directive declares the variable $city and points it to the nested city field in the address claim. With this in place, you can forward that value in a custom header (x-city) to your application. And that's it. By combining the OIDC module for authentication with the auth_jwt module for more nuanced claim extraction, you can handle even the trickiest token structures in NGINX. In most scenarios, though, you'll find that the straightforward $oidc_claim_ variables do the job just fine, with no extra modules needed.

Role-Based Access Control (Using auth_jwt)

As you've noticed, because we're not revalidating the token signature on every request, the overhead introduced by the auth_jwt module is fairly minimal. That's great news for performance. But auth_jwt also opens up additional possibilities, like the auth_jwt_require directive. With it, you can use NGINX not just for authentication but also for authorization, restricting access to certain parts of your site or API based on claims (or any other variables you might be tracking). For instance, maybe you only want to grant admin-level users access to a specific admin dashboard.
If a user's token doesn't include the right claim (like role=admin), you want to deny entry. Let's take a quick look at how this might work in practice:

http {
    map $jwt_claim_role $role_admin {
        "admin" 1;
    }

    server {
        ...
        # Location for admin-only resources:
        location /admin {
            auth_jwt foo token=$oidc_id_token;

            # Check that $role_admin is not empty and not "0", otherwise return 403:
            auth_jwt_require $role_admin error=403;

            # If a 403 happens, show a custom page:
            error_page 403 /403_custom.html;

            proxy_pass http://127.0.0.1:8080;
        }

        # Location for the custom 403 page
        location = /403_custom.html {
            # Internal, so it can't be accessed directly from outside
            internal;
            # Return the 403 status and a custom message
            return 403 "Access restricted to admins only!";
        }
    }
}

How it works: in the map block, we check the user's $jwt_claim_role and set $role_admin to 1 if it matches "admin". Then, inside the /admin location, we have:

auth_jwt foo token=$oidc_id_token;
auth_jwt_require $role_admin error=403;

Here, foo is simply the realm name (a generic string you can customize), and token=$oidc_id_token tells NGINX which token to parse. At first glance this might look like a normal auth_jwt configuration, but notice that we haven't specified a public key via auth_jwt_key_file or auth_jwt_key_request. That means NGINX isn't re-verifying the token's signature here. Instead, it's only parsing the token so we can use its claims in auth_jwt_require. Because the OIDC module has already validated the ID token earlier in the flow, this works perfectly well in practice. We still get access to $jwt_claim_role and can enforce auth_jwt_require $role_admin error=403;, ensuring anyone without the "admin" role gets an immediate 403 Forbidden.
Meanwhile, we display a friendlier message by specifying:

error_page 403 /403_custom.html;

So even though it might look like a normal JWT validation setup, it's really a lesser-known trick: parsing claims without re-checking signatures, leveraging the prior validation done by the OIDC module. This approach neatly ties together the native OIDC flow with role-based access control, without requiring us to juggle another set of keys.

Logout in OIDC

So far, we've covered how to log in with OIDC and handle advanced scenarios like nested claims or role-based control. But there's another critical topic: how do users log out? The OpenID Connect standard lays out several mechanisms:

- RP-Initiated Logout: The relying party (NGINX in this case) calls the IdP's logout endpoint, which can clear sessions both in NGINX and at the IdP level.
- Front-Channel Logout: The IdP notifies the RP via a front-channel mechanism (often iframes or redirects) that the user has ended their session.
- Back-Channel Logout: Uses server-to-server requests between the IdP and the RP to terminate sessions behind the scenes.

Right now, the first release of the native OIDC module does not fully implement these logout flows. They're on the roadmap, but as of today you may need a workaround if you want to handle sign-outs more gracefully. Still, one of the great things about NGINX is that even if a feature isn't officially implemented, you can often piece together a solution with a little extra configuration.

A Simple Logout Workaround

Imagine you have a proxied application that includes a "Logout" button or link. You want clicking that button to end the user's NGINX session. Below is a conceptual snippet showing how you might achieve that:

http {
    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        # OIDC provider config omitted for brevity
        # ...

        location / {
            auth_oidc keycloak;
            proxy_pass http://127.0.0.1:8080;
        }

        # "Logout" location that invalidates the session
        location /logout {
            # Forcibly remove the NGX_OIDC_SESSION cookie
            add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";

            # Optionally, redirect the user to a "logged out" page
            return 302 "https://n1.route443.dev/logged_out";
        }

        location = /logged_out {
            # A simple page or message confirming the user is logged out
            return 200 "You've been logged out.";
        }
    }
}

- /logout location: When the user clicks the "logout" link in your app, it can redirect them here.
- Clearing the cookie: We set NGX_OIDC_SESSION to an expired value, ensuring NGINX no longer recognizes this OIDC session on subsequent requests.
- Redirect to a "logged out" page: We send the user to /logged_out, or wherever you want them to land next.

Keep in mind, this approach only logs out at the NGINX layer. The user might still have an active session with the IdP (Keycloak, Entra ID, etc.), because the IdP manages its own cookies. A fully synchronized logout, where both the RP and IdP sessions end simultaneously, would require an actual OIDC logout flow, which the current module hasn't fully implemented yet.

Conclusion

Whether you're looking to protect a basic web app, parse claims, or enforce role-based policies, the native OIDC module in NGINX Plus R34 offers a way to integrate modern SSO at the proxy layer. Although certain scenarios (like nested claim parsing or fully fledged OIDC logout) may still require workarounds and careful configuration, the out-of-the-box experience is already much more user-friendly than the older njs-based solutions, and new features continue to land in every release. If you're tackling more complex setups, like UserInfo endpoint support, advanced session management, or specialized logout requirements, stay tuned.
The NGINX team is actively improving the module and extending its capabilities. With a little know-how (and possibly a sprinkle of auth_jwt magic), you can achieve an OIDC-based architecture that fits your exact needs, all while preserving the flexibility and performance NGINX is known for.

F5 NGINX Plus R34 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 34 (R34). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R34 include:

- Forward proxy support for NGINX usage reporting: With R34, NGINX Plus allows customers to send their license usage telemetry to F5 via an existing enterprise forward proxy in their environment.
- Native support for OpenID Connect configuration: With this release, we are announcing the availability of a native OpenID Connect (OIDC) module in NGINX Plus. The native module brings simplified configuration and better performance while addressing many of the complexities of the existing njs-based solution.
- SSL Dynamic Certificate Caching: NGINX Plus R34 builds upon the certificate caching improvements in R32 and introduces support for caching dynamic certificates and preserving this cache across configuration reloads.

Important Changes in Behavior

- Removal of the OpenTracing module: In NGINX Plus R32, we announced the deprecation of the OpenTracing module in favor of the OpenTelemetry module introduced in NGINX Plus R29, and marked it for removal in NGINX Plus R34. The OpenTracing module is now removed from NGINX Plus effective this release.

Changes to Platform Support

- Added platforms: Alpine Linux 3.21
- Removed platforms: Alpine Linux 3.17, SLES 12
- Deprecated platforms: Alpine Linux 3.18, Ubuntu 20.04

New Features in Detail

Forward proxy support for NGINX usage reporting

In the previous NGINX Plus release (R33), we introduced major changes to NGINX Plus licensing, requiring all NGINX customers to report their commercial NGINX usage to F5. A key piece of feedback we received was the need to let NGINX instances send telemetry via existing outbound proxies, primarily in environments where NGINX instances cannot connect to the F5 licensing endpoint directly.
We are pleased to share that NGINX Plus R34 introduces support for using an existing forward proxy solution in the customer's environment to send licensing telemetry to F5. With this update, NGINX Plus can be configured to use an HTTP CONNECT proxy to establish a tunnel to the F5 licensing endpoint and send usage telemetry through it.

Configuration

The following snippet shows the basic NGINX configuration needed in the ngx_mgmt_module module to send NGINX usage telemetry via a forward proxy:

mgmt {
    proxy          HOST:PORT;
    proxy_username USER;  # optional
    proxy_password PASS;  # optional
}

For complete details, refer to the docs here.

Native support for OpenID Connect configuration

Previously, NGINX Plus relied on an njs-based solution for its OpenID Connect (OIDC) implementation, which involves intricate JavaScript files and advanced setup steps that are error-prone. With NGINX Plus R34, we're thrilled to introduce native OIDC support in NGINX Plus. This native implementation eliminates many of the complexities of the njs-based approach, making it faster, highly efficient, and easy to configure, without the overhead of maintaining and upgrading the njs module. To this end, a new module, ngx_http_oidc_module, is introduced in NGINX Plus R34 that implements authentication as a relying party in OIDC using the Authorization Code Flow. The native implementation allows the flexibility to enable OIDC authentication globally, or at a more granular per-server or per-location level. It also allows effortless auto-discovery and retrieval of the OpenID provider's configuration metadata, without complex external scripts for each Identity Provider (IdP), greatly simplifying the configuration process. For a complete overview and examples of the features in the native implementation of OIDC in NGINX Plus, and how it improves upon the njs-based implementation, refer to the blog.
Configuration

Setting up OIDC natively in NGINX Plus is relatively straightforward, requiring minimal directives compared to the njs-based implementation:

http {
    resolver 10.0.0.1;

    oidc_provider my_idp {
        issuer        "https://provider.domain";
        client_id     "unique_id";
        client_secret "unique_secret";
    }

    server {
        location / {
            auth_oidc my_idp;
            proxy_set_header username $oidc_claim_sub;
            proxy_pass http://backend;
        }
    }
}

The example assumes that the "https://<nginx-host>/oidc_callback" redirection URI is configured on the OpenID Provider's side. For instructions on configuring the native OIDC module for various identity providers, refer to the NGINX deployment guide.

SSL Certificate Caching improvements

In NGINX Plus R32, we introduced changes to cache various SSL objects and reuse the cached objects elsewhere in the configuration. This provided noticeable improvements in initial configuration load time, primarily where a small number of unique objects were referenced multiple times. With R34, we are enhancing this functionality further: cached SSL objects are now reused across configuration reloads, making reloads even faster. SSL certificates with variables are now cached as well. Refer to the blog for a detailed overview of this feature's implementation.

Other Enhancements and Bug Fixes

Keepalive timeout improvements

Prior to this release, idle keepalive connections could be closed whenever the connection needed to be reused for another client or the worker was gracefully shutting down. NGINX Plus R34 introduces a new directive, keepalive_min_timeout, which sets a timeout during which a keepalive connection will not be closed by NGINX for connection reuse or graceful worker shutdown.
This change allows clients that send multiple requests over the same connection, with little or no delay between them, to avoid receiving a TCP RST in response to one of them (barring network issues or a non-graceful worker shutdown). As a side effect, it also addresses the TCP reset problem described in RFC 9112, Section 9.6, where the last HTTP response sent could be damaged by a follow-up TCP RST. This is important for non-idempotent requests, which cannot be retried by the client. We recommend not setting keepalive_min_timeout to large values, however, as this can introduce additional delay during worker process shutdown and may keep NGINX from reusing connections effectively.

Improved health check logging

NGINX Plus R34 adds logging enhancements to the error log for better visibility when troubleshooting upstream health check failures. The server status code is now logged on health check failures.

Increased session key size

Prior to R34, NGINX accepted SSL sessions of at most 4 KB (4096 bytes). With NGINX Plus R34, the maximum session size has been increased to 8 KB (8192 bytes) to accommodate use cases where sessions are larger than 4 KB. For example, when a client certificate is saved in the session, with tickets (in TLS 1.2 or older versions) or with stateless tickets (in TLS 1.3), sessions may be noticeably larger. Certain stateless session resumption implementations may store additional data as well; one such case is the JDK, which is known to include server certificates in the session ticket data, roughly doubling the decoded session size. The changes also include improved logging to capture cases where sessions are not saved in shared memory due to their size.

Changes in the OpenTelemetry Module

TLS support in OTEL traces: NGINX now allows enabling TLS for sending OTEL traces. It can be enabled by specifying the "https" scheme in the endpoint, as shown.
otel_exporter {
    endpoint            "https://otel.labt.fp.f5net.com:4433";
    trusted_certificate "path/to/custom/ca/bundle"; # optional
}

By default, the system CA bundle is used to verify the endpoint's certificate; this can be overridden with the "trusted_certificate" directive if required. For a complete list of changes to the OTEL module, refer to the NGINX OTEL change log.

Changes Inherited from NGINX Open Source

NGINX Plus R34 is based on the NGINX mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R33 was released (in NGINX Open Source 1.27.3 and 1.27.4 mainline versions).

Features:

- SSL certificate caching.
- The "keepalive_min_timeout" directive.
- The "server" directive in the "upstream" block supports the "resolve" parameter.
- The "resolver" and "resolver_timeout" directives in the "upstream" block.
- SmarterMail-specific mode support for IMAP LOGIN with untagged CAPABILITY response in the mail proxy module.

Changes:

- The TLSv1 and TLSv1.1 protocols are now disabled by default.
- An IPv6 address in square brackets with no port can be specified in the "proxy_bind", "fastcgi_bind", "grpc_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives, and as a client address in ngx_http_realip_module.

Bug Fixes:

- "gzip filter failed to use preallocated memory" alerts appeared in logs when using zlib-ng.
- nginx could not build the libatomic library from the library sources when the --with-libatomic=DIR option was used.
- A QUIC connection might not be established when using 0-RTT; the bug had appeared in 1.27.1.
- NGINX now ignores QUIC version negotiation packets from clients.
- NGINX could not be built on Solaris 10 and earlier with the ngx_http_v3_module.
- Bug fixes in HTTP/3.
- Bug fixes in the ngx_http_mp4_module.
- The "so_keepalive" parameter of the "listen" directive might be handled incorrectly on DragonFly BSD.
- Bug fix in the proxy_store directive.
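To illustrate the new upstream resolver support listed above, the following sketch re-resolves an upstream hostname at run time; the hostname, resolver address, and zone size are assumptions made for the example:

```nginx
http {
    upstream backend {
        # A shared memory zone is required for re-resolvable servers.
        zone backend 64k;

        # The "resolver" directive can now be placed inside the upstream block.
        resolver 10.0.0.1 valid=30s;

        # The "resolve" parameter re-resolves the hostname periodically,
        # picking up DNS changes without a configuration reload.
        server app.example.com:8080 resolve;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
```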
Security:

- An insufficient check in virtual server handling with TLSv1.3 SNI allowed SSL sessions to be reused in a different virtual server, bypassing client SSL certificate verification (CVE-2025-23419).

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R34 incorporates changes from the NGINX JavaScript (njs) module version 0.8.9. The following is a list of notable changes in njs since 0.8.7 (the version shipped with NGINX Plus R33).

Features:

- Added the fs module for the QuickJS engine.
- Implemented the process object for the QuickJS engine.
- Implemented the process.kill() method.

Bug Fixes:

- Removed extra VM creation per server. Previously, when js_import was declared in http or stream blocks, an extra copy of the VM instance was created for each server block. This was unnecessary and consumed a lot of memory in configurations with many server blocks. The issue was introduced in 9b674412 (0.8.6) and was partially fixed, for location blocks only, in 685b64f0 (0.8.7).
- Fixed XML tests with libxml2 2.13 and later.
- Fixed promise resolving when Promise is inherited.
- Fixed absolute scope in cloned VMs.
- Fixed limit-rated output.
- Optimized the use of SSL contexts for the js_fetch_trusted_certificate directive.

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.

AI Friday: Are Chatbots Passing the Turing Test? Explore the Ethics! - Ep.14
Join the DevCentral crew as we dive into groundbreaking AI research, explore how OpenAI's GPT models mimic humans, and discuss critical advancements in AI security and ethics. From Turing tests to deepfake controversies, this episode of AI Friday is packed with thought-provoking insights and practical takeaways for the AI-powered future. Secure your AI journey. Stay informed. Stay connected. It's time to redefine intelligence. Welcome to AI Friday!

Associated articles:

- AI PASSES TURING TEST!!!
- Tracing LLM Reasoning
- AI Image Ethics
- More On Model Context Protocol

And the episode...

AI Friday LIVE w/ Steve Wilson - Vibe Coding, Agentic AI Security And More
Welcome to AI Friday! In this episode, we dive into the latest developments in generative AI security, discussing the implications and challenges of this emerging technology. Join Aubrey from DevCentral and the OWASP GenAI Security Project, along with an expert panel including Byron, Ken, Lori, and special guest Steve Wilson, as they explore the complexities of AI in the news and the evolving landscape of AI security. We also take a closer look at the fascinating topic of vibe coding, its impact on software development, and the transformative potential of AI-assisted coding practices. Whether you're a developer, security professional, or an AI enthusiast, this episode is packed with insights and expert opinions that you won't want to miss. Don't forget to like, subscribe, and join the conversation!

Topics:

- Agentic Risk vs. Reward
- OWASP GenAI Security Project
- DeepSeek and Inference Performance
- Vibe Coding Roundtable

Annnnd we may have shared some related, mostly wholesome memes.