Verified Designs
Post-Quantum Cryptography: Building Resilience Against Tomorrow's Threats
Modern cryptographic systems such as RSA, ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman) rely heavily on the mathematical difficulty of certain problems, like factoring large integers or computing discrete logarithms. With the rise of quantum computing, however, algorithms like Shor's and Grover's threaten to break these systems, rendering them insecure. Quantum computers are not yet at the scale required to break these encryption methods in practice, but their rapid development has pushed the cryptographic community to act now. This is where Post-Quantum Cryptography (PQC) comes in: a new wave of algorithms designed to remain secure against both classical and quantum attacks.

Why PQC Matters
Quantum computers exploit quantum mechanics principles like superposition and entanglement to perform calculations that would take classical computers millennia. This threatens:
- Public-key cryptography: Algorithms like RSA rely on factoring large primes or solving discrete logarithms, problems quantum computers could crack using Shor's algorithm.
- Long-term data security: Attackers may already be harvesting encrypted data to decrypt later ("harvest now, decrypt later") once quantum computers mature.

Figure 1: Cryptography evolution

How PQC Works
The National Institute of Standards and Technology (NIST) has led a multi-year standardization effort. Here are the main algorithm families and notable examples.

Lattice-Based Cryptography
Lattice problems are believed to be hard for quantum computers, and most of the leading candidates come from this category.
- CRYSTALS-Kyber (Key Encapsulation Mechanism)
- CRYSTALS-Dilithium (Digital Signatures)
- Uses complex geometric structures (lattices) where finding the shortest vector is computationally hard, even for quantum computers.
- Example: ML-KEM (formerly Kyber) establishes encryption keys using lattices but requires more data transfer (2,272 bytes vs. 64 bytes for elliptic curves).
The figure below illustrates how lattice-based cryptography works. Imagine solving a maze with two maps, one public (twisted paths) and one private (shortest route). Only the private map holder can navigate efficiently.

Code-Based Cryptography
Based on the difficulty of decoding random linear codes.
- Classic McEliece: Resistant to quantum attacks for decades.
- Pros: Very well-studied and conservative.
- Cons: Very large public key sizes.
Relies on error-correcting codes. The Classic McEliece scheme hides messages by adding intentional errors only the recipient can fix. How it works:
- Key generation: Create a parity-check matrix (public key) and a secret decoder (private key).
- Encryption: Encode a message with random errors.
- Decryption: Use the private key to correct errors and recover the message.

Figure 3: Code-Based Cryptography Illustration

Multivariate & Hash-Based Cryptography
- Multivariate: Based on solving systems of multivariate quadratic equations over finite fields, a problem believed to be quantum-resistant.
- Hash-Based: Uses hash functions to construct secure digital signatures. SPHINCS+ is stateless and hash-based, a good fit for long-term digital signature security.

Challenges and Adoption
- Integration: PQC must work within existing TLS, VPN, and hardware stacks.
- Key sizes: PQC algorithms often require larger keys. For example, Classic McEliece public keys can exceed 1 MB.
- Hybrid Schemes: Combining classical and post-quantum methods for gradual adoption (a conceptual sketch of this combination follows this list).
- Performance: Lattice-based methods are fast but increase bandwidth usage.
- Standardization: NIST has finalized three PQC standards (e.g., ML-KEM) and is testing others. Organizations must start migrating now, as transitions can take decades.
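To make the hybrid idea concrete, the sketch below combines a classical shared secret and a post-quantum shared secret into one session key using an HKDF built from Python's standard library. It is a conceptual illustration only: the secrets here are random stand-ins (real deployments derive them from an ECDH exchange and a KEM such as ML-KEM), and the exact key schedule used by TLS differs from this simplified construction.

    import hashlib, hmac, os

    def hkdf_sha256(key_material: bytes, info: bytes, length: int = 32) -> bytes:
        """Minimal HKDF (RFC 5869) over SHA-256: extract, then expand."""
        prk = hmac.new(b"\x00" * 32, key_material, hashlib.sha256).digest()  # extract with a zero salt
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # Stand-ins for the two shared secrets a hybrid handshake produces.
    classical_secret = os.urandom(32)       # e.g., output of an X25519 ECDH exchange
    post_quantum_secret = os.urandom(32)    # e.g., secret decapsulated from a Kyber768/ML-KEM ciphertext

    # Both inputs feed the KDF, so an attacker must break both to recover the session key.
    session_key = hkdf_sha256(classical_secret + post_quantum_secret, info=b"hybrid key exchange demo")
    print(session_key.hex())

The design point is the concatenation: as long as either input stays secret, the derived key does too, which is exactly why hybrid schemes ease the migration described in the next section.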
Adopting PQC with BIG-IP
As of F5 BIG-IP 17.5, BIG-IP supports the widely implemented X25519Kyber768Draft00 cipher group for client-side TLS negotiations (BIG-IP as a TLS server). Other cipher groups and capabilities will become available in subsequent releases.

Cipher walkthrough
Let's take the cipher supported in v17.5.0 (hybrid X25519_Kyber768) as an example and walk through it.
- X25519: a classical elliptic-curve Diffie-Hellman (ECDH) algorithm.
- Kyber768: a post-quantum Key Encapsulation Mechanism (KEM).
The goal is to securely establish a shared secret key between the two parties using both classical and quantum-resistant cryptography.

Key Exchange
- X25519 exchange: Alice and Bob exchange X25519 public keys. Each computes a shared secret using their own private key plus the other's public key.
- Kyber768 exchange: Alice uses Bob's Kyber768 public key to encapsulate a secret, producing a ciphertext and a shared secret. Bob uses his Kyber768 private key to decapsulate the ciphertext and recover the same shared secret.
Both parties now have a classical shared secret and a post-quantum shared secret, and they combine them using a KDF (Key Derivation Function).

Why the hybrid approach is being followed:
- If quantum computers are not practical yet, X25519 provides strong classical security.
- If a quantum computer arrives, Kyber768 keeps communications secure.
- It helps organizations migrate gradually from classical to post-quantum systems.

Implementation guide
F5 has published the article Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS describing how to implement PQC on BIG-IP v17.5.

Create a new Cipher Rule
1. Log in to the BIG-IP Configuration Utility and go to Local Traffic > Ciphers > Rules.
2. Select Create.
3. In the Name box, provide a name for the Cipher Rule.
4. For Cipher Suites, select any of the suites from the provided Cipher Suites list. Use ALL or DEFAULT to list all of the available suites.
5. For DH Groups, enter X25519KYBER768 to restrict the rule to only this PQC group. For example: X25519KYBER768
6. For Signature Algorithms, select an algorithm. For example: DEFAULT
7. Select Finished.

Create a new Cipher Group
1. In the BIG-IP Configuration Utility, go to Local Traffic > Ciphers > Groups.
2. Select Create.
3. In the Name box, provide a name for the Cipher Group.
4. Add the newly created Cipher Rule to the "Allow the following" box or "Restrict the Allowed list to the following" in Group Details. All of the other details, including DH Group, Signature Algorithms, and Cipher Suites, will be reflected in the Group Audit as per the selected rule.
5. Select Finished.

Configure a Client SSL Profile
1. In the BIG-IP Configuration Utility, go to Local Traffic > Profiles > SSL > Client.
2. Create a new client SSL profile or edit an existing one.
3. For Ciphers, select the Cipher Group radio button and select the created group to enable post-quantum cryptography for this client SSL profile.

NGINX Support for PQC
We are pleased to announce support for Post-Quantum Cryptography (PQC) starting with NGINX Plus R33. NGINX provides PQC support using the Open Quantum Safe provider library for OpenSSL 3.x (oqs-provider). This library is available from the Open Quantum Safe (OQS) project.
The oqs-provider library adds support for all post-quantum algorithms supported by the OQS project to network protocols like TLS in OpenSSL 3-reliant applications. All ciphers/algorithms provided by oqs-provider are supported by NGINX. To configure NGINX with PQC support using oqs-provider, follow these steps:

Install the necessary dependencies

    sudo apt update
    sudo apt install -y build-essential git cmake ninja-build libssl-dev pkg-config

Download and install liboqs

    git clone --branch main https://github.com/open-quantum-safe/liboqs.git
    cd liboqs
    mkdir build && cd build
    cmake -GNinja -DCMAKE_INSTALL_PREFIX=/usr/local -DOQS_DIST_BUILD=ON ..
    ninja
    sudo ninja install

Download and install oqs-provider

    git clone --branch main https://github.com/open-quantum-safe/oqs-provider.git
    cd oqs-provider
    mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DOPENSSL_ROOT_DIR=/usr/local/ssl ..
    make -j$(nproc)
    sudo make install

Download and install OpenSSL with oqs-provider support

    git clone https://github.com/openssl/openssl.git
    cd openssl
    ./Configure --prefix=/usr/local/ssl --openssldir=/usr/local/ssl linux-x86_64
    make -j$(nproc)
    sudo make install_sw

Configure OpenSSL for oqs-provider in /usr/local/ssl/openssl.cnf:

    openssl_conf = openssl_init

    [openssl_init]
    providers = provider_sect

    [provider_sect]
    default = default_sect
    oqsprovider = oqsprovider_sect

    [default_sect]
    activate = 1

    [oqsprovider_sect]
    activate = 1

Generate post-quantum certificates

    export OPENSSL_CONF=/usr/local/ssl/openssl.cnf

    # Generate CA key and certificate
    /usr/local/ssl/bin/openssl req -x509 -new -newkey dilithium3 -keyout ca.key -out ca.crt -nodes -subj "/CN=Post-Quantum CA" -days 365

    # Generate server key and certificate signing request (CSR)
    /usr/local/ssl/bin/openssl req -new -newkey dilithium3 -keyout server.key -out server.csr -nodes -subj "/CN=your.domain.com"

    # Sign the server certificate with the CA
    /usr/local/ssl/bin/openssl x509 -req -in server.csr -out server.crt -CA ca.crt -CAkey ca.key -CAcreateserial -days 365

Download and install NGINX Plus

Configure NGINX to use the post-quantum certificates

    server {
        listen 0.0.0.0:443 ssl;

        ssl_certificate /path/to/server.crt;
        ssl_certificate_key /path/to/server.key;
        ssl_protocols TLSv1.3;
        ssl_ecdh_curve kyber768;

        location / {
            return 200 "$ssl_curve $ssl_curves";
        }
    }

Conclusion
By adopting PQC, we can future-proof encryption against quantum threats while balancing security and practicality. While technical hurdles remain, collaborative efforts between researchers, engineers, and policymakers are accelerating the transition.

Related Content
- New Features in BIG-IP Version 17.5.0
- K000149577: Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS
- F5 NGINX Plus R33 Release Now Available | DevCentral

Mitigation of OWASP API Security Risk: BOPLA using F5 XC Platform
Introduction
OWASP API Security Top 10 2019 has two categories, "Mass Assignment" and "Excessive Data Exposure", which focus on vulnerabilities that stem from manipulation of, or unauthorized access to, an object's properties. For example, consider user information in JSON format: {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "[email protected]"}. In this object payload, each detail is considered a property, so vulnerabilities around modifying or exposing sensitive properties like email/role/IsAdmin fall under these categories. These risks shed light on the hidden vulnerabilities that can appear when object properties are modified, and they highlight the need for a security solution that validates user access to functions/objects while also enforcing access control for specific properties within objects. Role-based access, sanitizing user input, and schema-based validation play a crucial role in safeguarding data from unauthorized access and modification. Since these two risks are similar, the OWASP community merged them as "Broken Object Property Level Authorization" (BOPLA) in the newer version, OWASP API Security Top 10 2023.

Mass Assignment
A Mass Assignment vulnerability occurs when client requests are not restricted to modifying immutable internal object properties. Attackers can take advantage of this by manually crafting requests to escalate user privileges, bypass security mechanisms, or otherwise exploit API endpoints in an illegal/invalid way. For more details on the F5 Distributed Cloud mitigation solution, check this link: Mitigation of OWASP API6: 2019 Mass Assignment vulnerability using F5 XC

Excessive Data Exposure
Application Programming Interfaces (APIs) that don't have restrictions in place sometimes expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of this, they are among the most exploited components for gaining access to customer information, and identifying sensitive information in these large chunks of API response data is crucial for data safety. For more details on this risk and the F5 Distributed Cloud mitigation solution, check this link: Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC

Conclusion
Wrapping up, this article covers an overview of the newly added BOPLA category in the OWASP API Security Top 10 2023 edition, along with details on each section of this risk and reference articles to dig deeper into F5 Distributed Cloud mitigation solutions.

Reference links to get started:
- F5 Distributed Cloud Services
- F5 Distributed Cloud WAAP
- Introduction to OWASP API Security Top 10 2023
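To make the mass-assignment half of BOPLA concrete, the snippet below shows the kind of request an attacker might craft: a profile update that silently adds the IsAdmin and role properties from the example object above. The endpoint, field names, and token are hypothetical and only illustrate the pattern; an API that blindly binds the whole JSON body to its internal user object would accept the privilege escalation.

    import requests

    # Hypothetical profile-update endpoint and token, for illustration only.
    url = "https://api.example.com/v1/users/me"
    headers = {"Authorization": "Bearer <user-token>"}

    # A legitimate client would send only the fields it is allowed to change...
    legitimate = {"Email": "[email protected]"}

    # ...while a mass-assignment attempt also injects internal properties.
    tampered = {"Email": "[email protected]", "IsAdmin": True, "role": "admin"}

    resp = requests.put(url, json=tampered, headers=headers, timeout=10)
    print(resp.status_code, resp.text)
    # If the API maps the request body straight onto its internal user object,
    # the caller has just granted itself admin rights.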
Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC Platform

This is part of the OWASP API Security Top 10 mitigation series, and you can refer here for an overview of these categories and F5 Distributed Cloud Platform (F5 XC) Web Application and API Protection (WAAP).

Introduction to Excessive Data Exposure
Application Programming Interfaces (APIs) are the foundation stone of the modern, evolving web applications driving the digital world. They are part of all phases of the product development life cycle, from design and testing through to end customers using them in their day-to-day tasks. Since they don't always have restrictions in place, APIs sometimes expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of these issues, they are the most exploited blocks in cybercrime for gaining access to customer information, which can be sold or used in other exploits like credential stuffing. Most of the time the design stage doesn't include this security perspective and relies on third-party tools to sanitize data before displaying results to customers. Identifying sensitive information in these huge chunks of API response data is sophisticated, and most of the available security tools in the market don't support this capability. So instead of relying on third-party tools, it's recommended to follow shift-left strategies and add security as part of the development phase. During this phase, developers must review and ensure that the API returns only required details instead of exposing unnecessary properties, to avoid sensitive data exposure.

Excessive data exposure attack scenario 1
To showcase this category, we expose sensitive details like CCN and SSN in one of the product reviews of the Juice Shop application (refer to the links for more info).

Overview of Data Guard
Data Guard is an F5 XC load balancer feature which shields responses from exposing sensitive information like CCN/SSN by masking these fields with a string of asterisks (*). Depending on the requirement, customers can configure multiple rules to apply or skip processing for certain paths and routes.

Preventing excessive data exposure using F5 Distributed Cloud
Step 1: Create an origin pool - refer here for more information.
Step 2: Create a Web Application Firewall (WAF) policy - refer here for details.
Step 3: Create an HTTPS load balancer (LB) with the above created pool and WAF policy - refer here for more information.
Step 4: Upload your application swagger file and add it to the above load balancer - refer here for more details.
Step 5: Configure Data Guard on the load balancer with action and path as below.
Step 6: Validate that the sensitive data is masked:
- Open Postman/browser, check the product reviews section/API, and validate that these details are hidden and not exposed as in the original application.
- In the Distributed Cloud Console, expand the security event and check the WAF section to understand why these details are masked.

Excessive data exposure attack scenario 2
In this demonstration we use an API-based vulnerable application, VAmPI (a vulnerable API made with Flask that includes vulnerabilities from the OWASP Top 10 vulnerabilities for APIs; for more info follow the repo link). Follow the steps below to bring up the setup:
Step 1: Host the VAmPI application inside a virtual machine.
Step 2: Log in to the XC console, create an HTTP LB, and add the hosted application as an origin server.
Step 3: Access the application to check its availability.
Step 4: Enable API Discovery and configure a sensitive data discovery policy by adding all the compliance frameworks in your HTTP LB config.
Step 5: Hit the vulnerable API endpoint '/users/v1/_debug', which exposes sensitive data like username, password, etc.
Step 6: Navigate to the security overview dashboard in the XC console and select the API Endpoints tab. Check for vulnerable endpoint details.
Step 7: In the Sensitive Data section, click the ellipsis on the right side to get options for action.
Step 8: Clicking the option 'Add Sensitive Data Exposure Rule' automatically adds the entries for the sensitive data exposure rule to your existing LB configs. Apply the configuration.
Step 9: Hit the vulnerable API endpoint '/users/v1/_debug' again. In the image above, you can see masked values in the response: all letters are changed to 'a' and numbers are converted to '1'.
Step 10: Optionally, you can also manually configure a sensitive data exposure rule by adding details about the vulnerable API endpoint:
- Log back in to the XC console.
- Start configuring the API Protection rule in the created HTTP LB.
- Click Configure in the Sensitive Data Exposure Rules section.
- Click Add Item to create the first rule.
- In the Target section, enter the path that will respond to the request. Also enter one or more methods with responses containing sensitive information.
- In the Values field of the Pattern section, enter the JSON field value you want to mask. For example, to mask all emails in the array users, enter "users[_].email". Note that an underscore between the square brackets indicates the array's elements.
- Once the above rule is applied, values in the response will be masked as follows: all letters change to a or A (matching case) and all numbers convert to 1.
- Click Apply to save the rule to the list of Sensitive Data Exposure Rules. Optionally, click Add Item to add more rules.
- Click Apply to save the list of rules to your load balancer.
Step 11: After completing Step 10, hit the vulnerable API endpoint again. Here also you can see masked values in the response, as per the configuration done in Step 10.

Conclusion
As we have seen in the above use cases, sensitive data exposure occurs when an application does not protect sensitive data like PII, CCN, SSN, auth credentials, etc. Leaking such information may lead to serious consequences, so it is extremely critical for organizations to reduce the risk of sensitive data exposure. As demonstrated above, the F5 Distributed Cloud Platform can help protect against the exposure of such sensitive data with its easy-to-use API security solution offerings.

For further information check the links below:
- OWASP API Security - Excessive Data Exposure
- OWASP API Security - Overview article
- F5 XC Data Guard Overview
- OWASP Juice Shop
- VAmPI
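As a quick illustration of the masking behaviour described above, the small helper below applies the same transformation (letters become a/A with case preserved, digits become 1) to a sample record resembling the VAmPI debug output. It is only a local mock-up of what the platform does to the response; the field values are made up, and how the platform treats punctuation is not specified in the steps above, so this sketch simply leaves it unchanged.

    # Sketch of the masking rule described above: letters -> a/A (case kept), digits -> 1.
    def mask(value: str) -> str:
        out = []
        for ch in value:
            if ch.isdigit():
                out.append("1")
            elif ch.isalpha():
                out.append("A" if ch.isupper() else "a")
            else:
                out.append(ch)  # punctuation such as '@' or '.' is left as-is in this sketch
        return "".join(out)

    record = {"username": "name1", "email": "[email protected]", "password": "pass1"}
    print({k: mask(v) for k, v in record.items()})
    # {'username': 'aaaa1', 'email': '[email protected]', 'password': 'aaaa1'}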
Mitigating OWASP API Security Risk: Unrestricted Resource Consumption using F5 Distributed Cloud Platform

Introduction
An Unrestricted Resource Consumption vulnerability occurs where an API allows end users to over-utilize resources (e.g., CPU, memory, bandwidth, or storage) without enforcing proper limitations. This can overwhelm the system and lead to performance degradation, denial of service (DoS), or complete unavailability of the services for valid users.

Attack Scenario
In this demo, we generate a large amount of traffic and observe the server's behaviour along with its response time.
Fig 1: Using Apache JMeter to send an arbitrary number of requests to an API endpoint continuously in a very short span of time.
Fig 2: (From left to right) Response time under normal conditions and with huge traffic.
The above results show a higher response time when abnormal traffic is sent to a single API endpoint compared to normal usage. With further increases in volume, the server can become unresponsive, deny requests from real users, and end up in a DoS condition.
Fig 3: Attackers performing an arbitrary number of API requests to consume the server's resources.

Customer Solution
F5 Distributed Cloud (XC) WAAP helps solve the above vulnerability by rate limiting the API requests, thereby preventing complete consumption of memory, file system storage, CPU resources, etc. This protects against traffic surges and DoS attacks. This article provides the F5 XC WAAP configuration to control the rate of requests sent to the origin server.

Steps to configure Rate Limiting in F5 XC
These are the steps to enable the Rate Limiting feature for APIs and validate it:
1. Add API Endpoints with Rate Limiter values
2. Validate the request rate by violating the threshold limit
3. Verify the blocked request in the F5 XC console

Step 1: Add API Endpoints with Rate Limiter values
- Log in to the F5 XC console and navigate to Home > Load Balancers > Manage > Load Balancers.
- Select the load balancer to which API Rate Limiting should be applied.
- Click the menu in the Actions column of the app's load balancer and click Manage Configurations to display the load balancer configs (Fig 4: Selecting menu to manage configurations for load balancer).
- Once the load balancer configurations are displayed, click the Edit Configuration button at the top right of the page.
- Navigate to Security Configuration, select "API Rate Limit" in the Rate Limiting dropdown, and click "Add Item" under the API Endpoint section (Fig 5: Choosing API Rate Limit to configure API endpoints; Fig 6: Configuring rate limit for an API Endpoint).
- The rate limit is configured for GET requests to the API endpoint "/product/OLJCESPC7Z". Click the Apply button at the bottom right of the screen.
- Click "Save and Exit" to save the above configuration to the load balancer.

Step 2: Validate the request rate by violating the threshold limit
Fig 7: Verifying the request for the first time.
When the request is sent for the first time after configuring the API endpoint, the response is returned with status code 200. Requesting the same API endpoint beyond the threshold limit blocks the request as shown below.
Fig 8: Rate limiting the API request.

Step 3: Verify the blocked request in the F5 XC console
From the F5 XC console homepage, navigate to WAAP > Apps & APIs > Security and select the load balancer. Click Requests to view the request logs (Fig 9: Blocked API request details from F5 XC console). You can see that requests beyond the rate limiter value get dropped and the response code is 429.
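If you prefer a scripted check over JMeter, a loop along these lines can confirm the behaviour described above: the first requests return 200 and, once the configured threshold is crossed, the load balancer starts answering 429. The hostname is a placeholder for your own load balancer domain.

    import requests

    # Placeholder for the domain fronted by the XC HTTP load balancer.
    url = "https://your-lb-domain.example.com/product/OLJCESPC7Z"

    for i in range(1, 21):
        resp = requests.get(url, timeout=10)
        print(f"request {i:2d} -> {resp.status_code}")
        # Expected pattern with the rate limit applied:
        # the first few requests return 200, the rest 429 (Too Many Requests).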
Conclusion
In this article, we have seen that when an application receives an abnormal amount of traffic, F5 XC WAAP protects APIs from being overwhelmed by rate limiting the requests. XC's rate limiting feature helps prevent DoS attacks and ensures service availability at all times.

Related Links:
- API4:2019 Lack of Resources and Rate Limiting
- API4:2023 Unrestricted Resource Consumption
- Creating Load balancer Steps
- F5 Distributed Cloud Security WAAP
- F5 Distributed Cloud Platform

How To Secure Multi-Cloud Networking with Routing & Web Application and API Protection
Introduction
As intra-cloud networking requirements continue to proliferate, organizations are increasingly leveraging multi-cloud solutions to optimize performance, cost, and resilience. However, with this approach comes the challenge of ensuring robust security across diverse cloud and on-prem environments. Secure multi-cloud networking with advanced web application and API protection services is essential to safeguard digital assets, maintain compliance, and uphold operational integrity.

Understanding Multi-Cloud Networking
Multi-cloud networking involves orchestrating connectivity between multiple cloud platforms such as AWS, Microsoft Azure, Google Cloud Platform, and others, including on-prem and private data centers. This approach allows organizations to avoid vendor lock-in, enhance redundancy, and tailor services to specific workloads. However, managing networking and web application security across these platforms can be complex due to different platform security models, configurations, and interfaces.

Key Components of Multi-Cloud Networking
- Inter-cloud Connectivity: Establishing secure connections between multiple cloud providers to ensure seamless data flow and application interoperability.
- Unified Management: Implementing centralized management tools to oversee network configurations, policies, and security protocols across all cloud environments.
- Automated Orchestration: Utilizing automation to provision, configure, and manage network resources dynamically, reducing manual intervention and potential errors.
- Compliance and Governance: Ensuring adherence to regulatory requirements and best practices for data protection and privacy across all cloud platforms.

Securing Multi-Cloud Environments
Security is paramount in multi-cloud networking. With multiple entry points and varying security measures across different cloud providers, organizations must adopt a comprehensive strategy to protect their assets.

Strategies for Secure Multi-Cloud Networking
- Zero Trust Architecture: Implementing a zero-trust model that continuously verifies and validates every request, irrespective of its source, to mitigate risks.
- Encryption: Utilizing advanced encryption methods for data in transit and at rest to protect against unauthorized access.
- Continuous Monitoring: Deploying monitoring tools to detect, analyze, and respond to threats in real time.
- Application Security: Using a common framework for web application and API security reduces the number of steps needed to identify and remediate security risks and misconfigurations across disparate infrastructures.

Web Application and API Protection Services
Web applications and APIs are critical components of modern digital ecosystems. Protecting these assets from cyber threats is crucial, especially in a multi-cloud environment where they may be distributed across various platforms.

Comprehensive Web Application Protection
Web Application Firewalls (WAFs) play a vital role in safeguarding web applications. They filter and monitor HTTP traffic between a web application and the internet, blocking malicious requests and safeguarding against common threats such as SQL injection, cross-site scripting (XSS), and DDoS attacks.
- Advanced Threat Detection: Employing machine learning and artificial intelligence to identify and block sophisticated attacks.
- Application Layer Defense: Providing protection at the application layer, where traditional network security measures may fall short.
- Scalability and Performance: Ensuring WAF solutions can scale and perform adequately in response to varying traffic loads and attack volumes.

Securing APIs in Multi-Cloud Environments
APIs are pivotal for integration and communication between services. Securing APIs involves protecting them from unauthorized access, misuse, and exploitation.
- Authentication and Authorization: Implementing strong authentication mechanisms such as OAuth and JWT to ensure only authorized users and applications can access APIs.
- Rate Limiting: Controlling the number of API calls to prevent abuse and ensure fair usage across consumers.
- Input Validation: Validating input data to prevent injection attacks and ensure data integrity.
- Threat Detection: Monitoring API traffic for anomalies and potential threats, and responding swiftly to mitigate risks.

Best Practices for Secure Multi-Cloud Networking
To effectively manage and secure multi-cloud networks, organizations should adhere to best practices that align with their operational and security objectives.

Adopt a Holistic Security Framework
A holistic security framework encompasses the entire multi-cloud environment, focusing on integration and coordination between different security measures across cloud platforms.
- Unified Policy Enforcement: Implementing consistent security policies across all cloud environments to ensure uniform protection.
- Regular Audits: Conducting frequent security audits to identify vulnerabilities, assess compliance, and improve security postures.
- Incident Response Planning: Developing and regularly updating incident response plans to handle potential breaches and disruptions efficiently.

Leverage Security Automation
Automation can significantly enhance security in multi-cloud environments by reducing human errors and ensuring timely responses to threats.
- Automated Compliance Checks: Using automation to continuously monitor and enforce compliance with security standards and regulations.
- Real-time Threat Mitigation: Implementing automated remediation processes to address security threats as they are detected.

Demo: Bringing it Together with F5 Distributed Cloud
Using services in Distributed Cloud, F5 brings everything together. By deploying and orchestrating connectivity between hybrid and multi-cloud environments, Distributed Cloud not only connects these environments, it also secures them with universal Web App and API Protection policies. The following solution uses Distributed Cloud Network Connect, App Connect, and Web App & API Protection to connect, deliver, and secure an application with services that exist in different cloud and on-prem environments. This video shows how each of the features in this solution comes together.

Bringing it together with Automation
Orchestrating this end-to-end, including the Distributed Cloud components themselves, is trivial using the combination of GitHub Workflow Actions and Terraform. The following automation workflow guide and companion article at DevCentral provide all the steps and modular code necessary to build a complete multicloud environment, manage Kubernetes clusters, and deploy the sample functional multi-site application, Arcadia Finance.
GitHub Repository: https://github.com/f5devcentral/f5-xc-terraform-examples/tree/main/workflow-guides/smcn/mcn-distributed-apps-l3

Conclusion
Secure multi-cloud networking, combined with robust web application and API protection services, is vital for organizations seeking to leverage the benefits of a multi-cloud strategy without compromising security. By adopting comprehensive security measures, enforcing best practices, and leveraging advanced technologies, organizations can safeguard their digital assets, ensure compliance, and maintain operational integrity in a dynamic and ever-evolving cloud landscape.

Additional Resources
- Introducing Secure MCN features on F5 Distributed Cloud
- Driving Down Cost & Complexity: App Migration in the Cloud
- The App Delivery Fabric with Secure Multicloud Networking
- Scale Your DMZ with F5 Distributed Cloud Services
- Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
- Automate Multicloud Networking w/ Terraform: routing and app connect on F5 Distributed Cloud

BIG-IP Next for Kubernetes, addressing today's enterprise challenges
Enterprises, not just cloud service providers, have started adopting Kubernetes (K8s), as it offers strategic advantages in agility, cost efficiency, security, and future-proofing:
- Cloud Native Functions account for around 60% TCO savings.
- Easier to deploy, manage, maintain, and scale.
- Easier to add and roll out new services.

Kubernetes complexities
With the move from traditional application deployments to microservices and containerized services, some complexities were introduced.

Networking Challenges with Kubernetes Default Deployments
Kubernetes networking has some problems when using default settings. These problems can affect performance, security, and reliability in production environments.

Core Networking Challenges
Flat Network Model
- All pods can communicate with all other pods by default (east-west traffic).
- No network segmentation between applications.
- Potential security risks from excessive inter-pod communication.

Service Discovery Limitations
- DNS-based service discovery has caching behaviors that can delay updates.
- No built-in load balancing awareness (can route to unhealthy pods during updates).
- Limited traffic shaping capabilities (all requests treated equally).

Ingress Challenges
- No default ingress controller installed.
- Multiple ingress controllers can conflict if not properly configured.
- SSL/TLS termination requires manual certificate management.

Network Policy Absence (a short detection sketch follows the challenges list below)
- No network policies applied by default (all traffic allowed).
- Difficult to implement zero-trust networking principles.
- No default segmentation between namespaces.

DNS Issues
- CoreDNS default cache settings may not be optimal.
- Pod DNS policies may not match application requirements.
- NodeLocal DNS cache not enabled by default.

Load-Balancing Problems
- Service ClusterIP is the default (no external access).
- NodePort services can conflict on port allocations.
- Cloud provider load balancers can be expensive if overused.

CNI (Container Network Interface) Considerations
- Default CNI plugin may not support required features.
- Network performance varies significantly between CNI choices.
- IP address management challenges at scale.

Performance-Specific Issues
kube-proxy inefficiencies
- Default iptables mode becomes slow with many services.
- IPVS (IP Virtual Server) mode requires explicit configuration.
- Service mesh sidecars can double latency.

Pod Network Overhead
- Additional hops for cross-node communication.
- Encapsulation overhead with some CNI plugins.
- No QoS guarantees for network traffic.

Multicluster Communication
- No default solution for cross-cluster networking.
- Complex to establish secure connections between clusters.
- Service discovery doesn't span clusters by default.

Security Challenges
- No default encryption between pods.
- No default authentication for service-to-service communication.
- All namespaces are network-accessible to each other by default.
- External traffic can bypass ingress controllers if misconfigured.

These challenges highlight why most production Kubernetes deployments require significant, complex customization beyond the default configuration. Figure 1 shows those workarounds being implemented and how complicated the setup becomes, with multiple add-ons required to overcome Kubernetes limitations.
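As a small illustration of the "no network policies by default" gap called out above, the sketch below uses the official Kubernetes Python client to list namespaces that have no NetworkPolicy at all, which in a default cluster is typically every namespace. It assumes a working kubeconfig and the kubernetes package installed; it only reports and does not change anything.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (inside a cluster, load_incluster_config() would be used).
    config.load_kube_config()

    core = client.CoreV1Api()
    networking = client.NetworkingV1Api()

    # Namespaces that have at least one NetworkPolicy defined.
    covered = {np.metadata.namespace for np in networking.list_network_policy_for_all_namespaces().items}

    for ns in core.list_namespace().items:
        name = ns.metadata.name
        if name not in covered:
            # With Kubernetes defaults this prints every namespace: all pod-to-pod traffic is allowed.
            print(f"namespace '{name}' has no NetworkPolicy (default allow-all applies)")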
In the following section, we explore how BIG-IP Next for Kubernetes simplifies and enhances application delivery and security within the Kubernetes environment.

BIG-IP Next for Kubernetes
Introducing BIG-IP Next for Kubernetes not only reduces complexity, but also moves the main networking components into the TMM pods rather than relying on the host server. Think of where current network functions are applied: the host kernel. Whether you are doing NAT or firewalling services, this requires intervention on the host side, which impacts the zero-trust architecture, and traffic performance is limited by default kernel IP and routing capabilities.

Deployment overview
Among the features introduced in the 2.0.0 release:
- API GW CRs (Custom Resources).
- F5 IPAM Controller to manage IP addresses for the Gateway resource.
- Seamless firewall policy integration in Gateway API.
- Ingress DDoS protection in Gateway API.
- Enforced access control for Debug and QKView APIs with Admin Token.

In this section, we explore the steps to deploy BIG-IP Next for Kubernetes in your environment.

Infrastructure
Use different flavors depending on your needs and lab type (demo or production); for labs, microk8s, k8s, or kind, for example.

BIG-IP Next for Kubernetes
helm and docker are required packages for this installation. Follow the installation guide; the current BIG-IP Next for Kubernetes 2.0.0 GA release is available. For the desired objective in this article, you may skip the NVIDIA DOCA part (that's the focus of a coming article) and go directly to BIG-IP Next for Kubernetes.

Install additional CRDs
Once the licensing and core pods are ready, you can move on to adding additional CRDs (Custom Resource Definitions):
- BIG-IP Next for Kubernetes CRDs
- Custom CRDs: install the F5 use-case Custom Resource Definitions

Related Content
- BIG-IP Next for Kubernetes v2.0.0 Release Notes
- System Requirements
- BIG-IP Next for Kubernetes CRDs
- BIG-IP Next for Kubernetes
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- BIG-IP Next for Kubernetes running in Amazon EKS

Mitigating OWASP 2019 API Security Top 10 risks using F5 NGINX App Protect
This 2019 API Security article provides a valuable summary of the OWASP API Security Top 10 risks identified for that year, outlining key vulnerabilities. We will deep-dive into some of those common risks and how we can protect our applications against these vulnerabilities using F5 NGINX App Protect.

API2:2019 - Broken User Authentication
Problem Statement: A critical API security risk, Broken Authentication occurs when weaknesses in the API's identity verification process permit attackers to circumvent authentication mechanisms. Successful exploitation lets attackers impersonate legitimate users, gain unauthorized access to sensitive data, perform actions on behalf of victims, and potentially take over accounts or systems.
This demonstration utilizes the Damn Vulnerable Web Application (DVWA) to illustrate the exploitability of Broken Authentication. We will execute a brute-force attack against the login interface, iterating through potential credential pairs to achieve unauthorized authentication. Below is the Selenium automated script used to execute the brute-force attack, submitting multiple credential combinations to attempt authentication. The brute-force attack successfully compromised the authentication controls by iterating through multiple credential pairs, ultimately granting access.
Solution: To mitigate the above vulnerability, NGINX App Protect is deployed and configured as a reverse proxy in front of the application, and requests are first validated by NAP for the vulnerabilities. The NGINX App Protect Brute Force WAF policy is utilized as shown below. A re-attempt to gain access to the application using the brute-force approach is rejected and blocked. Support ID verification in the security logs shows the request is blocked because of the Brute Force policy.
Request captured in NGINX App Protect security log

API3:2019 - Excessive Data Exposure
Problem Statement: As shown below in one of the demo application APIs, Personally Identifiable Information (PII), like Credit Card Numbers (CCN) and U.S. Social Security Numbers (SSN), is visible in responses that are highly sensitive. We must hide these details to prevent personal data exploits.
Solution: To prevent this vulnerability, we will use the DataGuard feature in NGINX App Protect, which validates all response data for sensitive details and will either mask the data or block those requests, as per the configured settings. First, we configure DataGuard to mask the PII data as shown below and apply this configuration. Next, if we resend the same request, we can see that the CCN/SSN numbers are masked, thereby preventing data breaches. If needed, we can update the configuration to block this vulnerability, after which all incoming requests for this endpoint will be blocked. If you open the security log and filter on this support ID, you can see that the request is either blocked or the PII data is masked, as per the DataGuard configuration applied in the above section.
Request captured in NGINX App Protect security log

API4:2019 - Lack of Resources & Rate Limiting
Problem Statement: APIs do not have any restrictions on the size or number of resources that can be requested by the end user. The above scenarios sometimes lead to poor API server performance, Denial of Service (DoS), and brute-force attacks.
Solution: NGINX App Protect provides different ways to rate limit the requests as per user requirements.
A simple rate limiting use case configuration can block requests after the limit is reached, as demonstrated below.

API6:2019 - Mass Assignment
Problem Statement: An API Mass Assignment vulnerability arises when clients can modify immutable internal object properties via crafted requests, bypassing API endpoint restrictions. Attackers exploit this by sending malicious HTTP requests to escalate privileges, bypass security mechanisms, or manipulate the API endpoint's functionality. Placing an order with quantity as 1 succeeds; bypassing the API endpoint restrictions and placing the order with quantity as -1 is also successful.
Solution: To overcome this vulnerability, we use the WAF API Security policy in NGINX App Protect, which validates all the API security events triggered and, based on the enforcement mode set in the validation rules, either reports or blocks the request, as shown below. A restricted/updated swagger file with a .json extension is added as below. Policy used: App Protect API Security. Re-attempting to place the order with quantity as -1 is blocked. Validating the support ID in the security log as below:
Request captured in NGINX App Protect security log

API7:2019 - Security Misconfiguration
Problem Statement: Security misconfiguration occurs when security best practices are neglected, leading to vulnerabilities like exposed debug logs, outdated security patches, improper CORS settings, unnecessary allowed HTTP methods, etc. To prevent this, systems must stay up to date with security patches, employ continuous hardening, ensure API communications use secure channels (TLS), etc.
Example: Unnecessary HTTP methods/verbs represent a significant security misconfiguration under the OWASP API Top 10. APIs often expose a range of HTTP methods (such as PUT, DELETE, PATCH) that are not required for the application's functionality. These unused methods, if not properly disabled, can provide attackers with additional attack surfaces, increasing the risk of unauthorized access or unintended actions on the server. Properly limiting and configuring allowed HTTP methods is essential for reducing the potential impact of such security vulnerabilities. Let's dive into a demo application which exposes the "PUT" method; this method is not required by the design, and attackers can make use of this insecure, unintended method to modify the original content.
Solution: NGINX App Protect makes it easy to block unnecessary or risky HTTP methods by letting you customize which methods are allowed. By configuring a policy to block unauthorized methods, like disabling the PUT method by setting "$action": "delete", you can reduce potential security risks and strengthen your API protection with minimal effort. As shown below, the attack request is captured in the security log, which conveys that the request was successfully blocked because of the "Illegal method" violation.
Request captured in NGINX App Protect security log

API8:2019 - Injection
Problem Statement: Customer login pages without secure coding practices may have flaws. Intruders could use those flaws to exploit credential validation using different types of injections, like SQLi, command injection, etc. In our demo application, we have found an exploit which allows us to bypass credential validation using SQL injection (by using the username "' OR true --" and any password), thereby getting administrative access, as below.
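For illustration, the snippet below reproduces the kind of login request described above, submitting the SQL injection payload as the username. The endpoint and form field names are hypothetical stand-ins for the demo application; the point is simply that, without input sanitization or a WAF in front, such a payload can slip into the backend query.

    import requests

    # Hypothetical login endpoint and form fields, for illustration only.
    login_url = "https://demo-app.example.com/login"

    payload = {
        "username": "' OR true --",   # injection string from the walkthrough above
        "password": "anything",       # any password works if the query logic is subverted
    }

    resp = requests.post(login_url, data=payload, timeout=10)
    print(resp.status_code)
    # On a vulnerable backend this logs the attacker in; with NGINX App Protect
    # in blocking mode the request is rejected and a support ID is logged instead.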
Solution: NGINX App Protect has a database of signatures that match this type of SQLi attack. By configuring the WAF policy in blocking mode, NGINX App Protect can identify and block this attack, as shown below.
App Protect WAF Policy
If you check the security log with this support ID, you can see that the request is blocked because of the SQL injection risk, as below.
Request captured in NGINX App Protect security log

API9:2019 - Improper Assets Management
Problem Statement: Improper Asset Management in API security signifies the crucial risk stemming from an incomplete awareness and tracking of an organization's full API landscape, including all environments like development and staging, different versions, both internal and external endpoints, and undocumented or "shadow" APIs. This lack of a comprehensive inventory leads to an expanded and often unprotected attack surface, as security measures cannot be consistently applied to unknown or unmanaged assets. Consequently, attackers can exploit these overlooked endpoints, potentially find older, less secure versions, or access sensitive data inadvertently exposed in non-production environments, thereby undermining the overall security posture, because you simply cannot protect assets you don't know exist.
We're using a Flask database application with multiple API endpoints for demonstration. As part of managing API assets, the "/v1/admin/users" endpoint in the demo Flask application has been identified as obsolete. The continued exposure of the deprecated "/v1/admin/users" endpoint constitutes an Improper Asset Management vulnerability, creating an unnecessary security exposure that could be leveraged for exploitation.
<public_ip>/v1/admin/users
The current endpoint for user listing is "/v2/users".
<public_ip>/v2/users with user as admin1
Solution: To mitigate the above vulnerability, we use NGINX as an API gateway. The API gateway acts as a filtering gateway for incoming API traffic, controlling, securing, and routing requests before they reach the backend services. The server name used for the above case is "f1-api", which listens on the public IP where our application is running. To query the "/v1/admin/users" endpoint, use the curl command as shown below. Below is the configuration for NGINX as an API gateway, in "api_gateway.conf", where the "/v1/admin/users" endpoint is deprecated. The "api_json_errors.conf" file is configured with error responses as shown below and included in the above "api_gateway.conf". Executing the curl command against the endpoint yields an "HTTP 301 Moved Permanently" response.
https://f1-api/v1/admin/users is deprecated

API10:2019 - Insufficient Logging & Monitoring
Problem Statement: Appropriate logging and monitoring solutions play a pivotal role in identifying attacks and in finding the root cause of any security issue. Without these solutions, applications are fully exposed to attackers, and SecOps is completely blind to the details of users and resources being accessed.
Solution: NGINX provides different options to track logging details of applications for end-to-end visibility of every request, from both a security and a performance perspective. Users can change configurations as per their requirements and can also configure different logging mechanisms with different levels.
Check the links below for more details on logging:
- https://www.nginx.com/blog/logging-upstream-nginx-traffic-cdn77/
- https://www.nginx.com/blog/modsecurity-logging-and-debugging/
- https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/
- https://docs.nginx.com/nginx/admin-guide/monitoring/logging/
- https://docs.nginx.com/nginx-app-protect-waf/logging-overview/logs-overview/

Conclusion
In short, this article covered some common API vulnerabilities and showed how NGINX App Protect can be used as a mitigation solution to prevent these OWASP API security risks.

Related resources for more information or to get started:
- F5 NGINX App Protect
- OWASP API Security Top 10 2019
- OWASP API Security Top 10 2023

Mitigation of OWASP API Security Top 10 2023 using F5 Distributed Cloud Platform
The OWASP API Security project aims to help organizations by providing a guide with a list of the latest top 10 most critical API vulnerabilities and steps to mitigate them. As part of updating the old OWASP API Security risk categories from 2019, the OWASP API Security Top 10 2023 was released.

List of vulnerabilities:

API1:2023 Broken Object Level Authorization
Broken Object Level Authorization (BOLA) is a vulnerability that occurs when there is a failure in validating a user's permissions to perform a specific task over an object, which may eventually lead to leakage, modification, or destruction of data. To prevent this vulnerability, proper authorization mechanisms should be followed, proper checks should be made to validate a user's actions on a certain record, and security tests should be performed before deploying any production-grade changes.

API2:2023 Broken Authentication
Broken Authentication is a critical vulnerability that occurs when an application's authentication endpoints fail to detect attackers impersonating someone else's identity and allow partial or full control over the account. To prevent this vulnerability, observability and understanding of all possible authentication API endpoints is needed. Re-authentication should be performed for any confidential changes, and multi-factor authentication, captcha challenges, and effective security solutions should be applied to detect and mitigate credential stuffing, dictionary, and brute-force attacks.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API3:2023 Broken Object Property Level Authorization
Broken Object Property Level Authorization is one of the new risk categories in the OWASP API Security Top 10 2023. This vulnerability occurs when a user is allowed to access an object's property without validating their access permissions. Excessive Data Exposure and Mass Assignment, which were initially part of OWASP API Security 2019, are now part of this new vulnerability. To prevent this vulnerability, the access privileges of users requesting a specific object's property should be scrutinized before exposure by the API endpoints. Use of generic methods and automatically binding client inputs to internal objects or code variables should be avoided, and schema-based validation should be enforced.
A detailed explanation of the vulnerabilities with demos showcasing the mitigation using F5 Distributed Cloud can be found here (Excessive Data Exposure, Mass Assignment).

API4:2023 Unrestricted Resource Consumption
An Unrestricted Resource Consumption vulnerability occurs when the system's resources are being unnecessarily consumed, which could eventually lead to degradation of services and performance latency issues. Although the name has changed, the vulnerability is still the same as Lack of Resources & Rate Limiting. To prevent this vulnerability, rate limiting, maximum sizes for input payloads/parameters, and server-side validation of requests should be enforced.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API5:2023 Broken Function Level Authorization
Broken Function Level Authorization occurs when vulnerable API endpoints allow normal users to perform administrative actions, or a user from one group is allowed to access a function specific to users of another group.
To prevent this vulnerability, access control policies and administrative authorization checks based on the user's group/roles should be implemented.

API6:2023 Unrestricted Access to Sensitive Business Flows
Unrestricted Access to Sensitive Business Flows is also a new addition to the list of API vulnerabilities. While writing API endpoints, it is extremely critical for developers to have a clear understanding of the business flows exposed by them, to avoid exposing any sensitive business flow and to limit its excessive usage, which, if not considered, might eventually be exploited by attackers and cause serious harm to the business. This also includes securing and limiting access to B2B APIs that are consumed directly and often integrated with minimal protection mechanisms. By putting automation to work, attackers nowadays can bypass traditional protection mechanisms. An API's inability to detect automated bot attacks not only causes business loss but can also adversely impact services for real users. To overcome this vulnerability, enterprises need a platform that can identify whether a request comes from a real user or an automated tool by analyzing and tracking usage patterns. Device fingerprinting, integrating a captcha solution, and blocking Tor requests are a few methods which can help minimize the impact of such automated attacks. For more details on automated threats, you can visit OWASP Automated Threats to Web Applications.
Note: Although the vulnerability is new, it contains some references to API10:2019 Insufficient Logging & Monitoring.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API7:2023 Server-Side Request Forgery
After finding a place in the OWASP Top 10 web application vulnerabilities of 2021, SSRF has now been included in the OWASP API Security Top 10 2023 list as well, showing the severity of this vulnerability. A Server-Side Request Forgery (SSRF) vulnerability occurs when an API fetches an internal server resource without validating the URL supplied by the user. Attackers exploit this vulnerability by manipulating the URL, which in turn helps them retrieve sensitive data from internal servers. To overcome this vulnerability, input data validation should be implemented to ensure that client-supplied input data obeys the expected format. Allow lists should be maintained so that only trusted requests/calls are processed, and HTTP redirections should be disabled (a minimal allow-list check is sketched at the end of this article).
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API8:2023 Security Misconfiguration
Security Misconfiguration is a vulnerability that may arise when security best practices are overlooked. Unwanted exposure of debug logs, unnecessarily enabled HTTP verbs, unapplied security patches, a missing repeatable security hardening process, improper implementation of CORS policy, etc. are a few examples of security misconfiguration. To prevent this vulnerability, systems and the entire API stack should be kept up to date without missing any security patches. Continuous security hardening and configuration tracking processes should be carried out. Make sure all API communications take place over a secure channel (TLS) and that all servers in the HTTP server chain process incoming requests. The Cross-Origin Resource Sharing (CORS) policy should be set up properly, and unnecessary HTTP verbs should be disabled.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API9:2023 Improper Inventory Management
An Improper Inventory Management vulnerability occurs when organizations don't have much clarity on their own APIs as well as the third-party APIs they use, and lack proper documentation. Unawareness with regard to the current API version, environment, access control policies, data shared with third parties, etc. can lead to serious business repercussions. Clear understanding and proper documentation are the key to overcoming this vulnerability. All the details related to API hosts, API environment, network access, API version, integrated services, redirections, rate limiting, and CORS policy should be documented correctly and kept up to date. Documenting every minor detail is advisable, and authorized access should be given to these documents. Exposed API versions should be secured along with the production version, and a risk analysis is recommended whenever newer versions of APIs become available.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

API10:2023 Unsafe Consumption of APIs
Unsafe Consumption of APIs is again a newly added vulnerability, covering a portion of the API8:2019 Injection vulnerability. This occurs when developers tend to apply very little or no sanitization to data received from third-party APIs. To overcome this, we should make sure that API interactions take place over an encrypted channel. API data evaluation and sanitization should be carried out before using the data further, and precautionary actions should be taken to avoid unnecessary redirections by using allow lists.
A detailed explanation of the vulnerability with a demo showcasing the mitigation using F5 Distributed Cloud can be found here.

Related OWASP API Security article series
- Broken Authentication
- Excessive Data Exposure
- Mass Assignment
- Lack of Resources & Rate limiting
- Security Misconfiguration
- Improper Assets Management
- Unsafe consumption of APIs
- Server-Side Request Forgery
- Unrestricted Access to Sensitive Business Flows
- OWASP API Security Top 10 - 2019
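Tying back to the API7 (SSRF) guidance above, the sketch below shows the simplest form of the recommended allow-list check: a user-supplied URL is parsed and only fetched if its scheme and hostname match a known-good list, with redirections disabled. The allow-list entries and URLs are placeholders; a production check would also resolve the hostname and block private and link-local address ranges.

    from urllib.parse import urlparse
    import requests

    # Placeholder allow-list of hosts the API is permitted to call on behalf of users.
    ALLOWED_HOSTS = {"images.example.com", "partner-api.example.com"}

    def fetch_if_allowed(user_supplied_url: str) -> bytes:
        parsed = urlparse(user_supplied_url)
        # Reject anything that is not plain HTTPS to an explicitly allowed host.
        if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
            raise ValueError(f"URL not permitted: {user_supplied_url}")
        resp = requests.get(user_supplied_url, timeout=5, allow_redirects=False)  # redirections disabled
        resp.raise_for_status()
        return resp.content

    # An SSRF attempt aimed at internal metadata or services is rejected up front.
    fetch_if_allowed("http://169.254.169.254/latest/meta-data/")   # raises ValueError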
Related OWASP API Security article series
Broken Authentication
Excessive Data Exposure
Mass Assignment
Lack of Resources & Rate limiting
Security Misconfiguration
Improper Assets Management
Unsafe consumption of APIs
Server-Side Request Forgery
Unrestricted Access to Sensitive Business Flows
OWASP API Security Top 10 - 2019

Modern Applications-Demystifying Ingress solutions flavors

In this article, we explore the different ingress services provided by F5 and how those solutions fit within your environment. With different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services can be summarized into two main categories:
Management plane:
NGINX One
BIG-IP CIS
Traffic plane:
NGINX Ingress Controller / Plus / App Protect / Service Mesh
BIG-IP Next for Kubernetes
Cloud Native Functions (CNFs)
F5 Distributed Cloud Ingress Controller

Ingress solutions definitions
In this section we go quickly through the ingress services to understand the concept behind each one, and then move to the use-case comparison.

BIG-IP Next for Kubernetes
Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and the broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)
While BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionality, CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (up to 60% lower TCO), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One
NGINX for Kubernetes is a versatile, cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:
NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
NGINX One/NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading operational overhead, providing scalability, resilience, and simplified security configuration for Kubernetes environments across hybrid and multi-cloud platforms.
NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.
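To make the JWT authentication capability more concrete, the sketch below shows the kind of token validation an ingress tier can offload from individual services. It is written in Python with the PyJWT library (installed with its crypto extra) purely as an illustration, not as NGINX configuration, and the key file, issuer, and audience values are hypothetical.

```python
import jwt  # PyJWT; install with: pip install "pyjwt[crypto]"

# Hypothetical public key of the token issuer, distributed out of band.
PUBLIC_KEY = open("issuer_public_key.pem").read()

def authorize(request_headers: dict) -> dict:
    auth_header = request_headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    token = auth_header[len("Bearer "):]

    try:
        # Pin the algorithm and check audience and issuer; never accept "none".
        return jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],
            audience="orders-api",            # hypothetical audience
            issuer="https://idp.example.com",  # hypothetical issuer
        )
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"rejected token: {exc}")
```

Enforcing this once at the ingress tier keeps token policy in a single place and lets backend services simply trust the claims forwarded to them.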
There are some under-the-hood differences: BIG-IP Next for Kubernetes and CNFs use F5's own TMM (Traffic Management Microkernel) to perform application delivery and security, while NGINX relies on the Linux kernel for some network-level functions such as NAT, iptables, and routing. Whether you choose one or both options comes down to your environment's architecture and how you want to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)
BIG-IP CIS operates on the management plane. The CIS service is deployed in the Kubernetes cluster and sends information about created Pods to an integrated BIG-IP that sits outside the Kubernetes environment, allowing LTM pools to be created automatically and traffic to be forwarded based on pool member health. With this service, application teams can focus on microservice development while BIG-IP is updated automatically, simplifying configuration management (a conceptual sketch of this watch-and-update pattern appears after the BIG-IP Next for Kubernetes use cases below).

Use cases categorization
Let's talk in terms of use cases to relate these tools to day-to-day work in the field.

NGINX One
Access to NGINX commercial products, support for open source, and the option to add WAF.
Unified dashboard and APIs to discover and manage your NGINX instances.
Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
Enforce global security policies across diverse environments.
Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
Visibility into compliance issues across diverse app ecosystems.
Update groups of NGINX systems simultaneously with a single configuration file change.
Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
NGINX One automates manual configuration and update tasks for security and platform teams.

BIG-IP CIS
Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion. In addition, integrate with the BIG-IP system and NGINX for ingress load balancing.

BIG-IP Next for Kubernetes
Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
Enables support for 4G and 5G protocols that are not supported natively by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
Enables security services applied at ingress and egress, such as firewalling and DDoS protection.
Topology hiding at ingress obscures the internal structure of the cluster.
As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.
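Returning to BIG-IP CIS, the sketch below illustrates the management-plane pattern it is built around: watch the cluster for endpoint changes and push the resulting pool members to an external load balancer. This is not CIS code; it uses the official Kubernetes Python client, and the print statement stands in for the external configuration call (for a real BIG-IP that would be the iControl REST API or an AS3 declaration).

```python
from kubernetes import client, config, watch

def watch_endpoints_and_update_lb(namespace: str = "default") -> None:
    # Assumes kubeconfig (or in-cluster) credentials are available.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    w = watch.Watch()
    # Stream Endpoints changes, the way a management-plane agent would.
    for event in w.stream(v1.list_namespaced_endpoints, namespace=namespace):
        endpoints = event["object"]
        members = [
            f"{address.ip}:{port.port}"
            for subset in (endpoints.subsets or [])
            for address in (subset.addresses or [])
            for port in (subset.ports or [])
        ]
        # Placeholder for pushing pool members to the external load balancer.
        print(f"{event['type']} {endpoints.metadata.name}: pool members -> {members}")

if __name__ == "__main__":
    watch_endpoints_and_update_lb()
```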
F5 Cloud Native Functions (CNFs)
Add containerized services, for example firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
Ease IPv6 migration and improve network scalability and security with IPv4 address management.
Deploy as part of a security strategy.
Support DNS caching and DNS over HTTPS (DoH).
Support advanced policy and traffic management use cases.
Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
Provide L4-L7 NGINX services within the Kubernetes cluster.
Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.

F5 Distributed Cloud Ingress Controller
The F5 XC Ingress Controller is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s), and deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. It runs as a Kubernetes deployment that configures the HTTP Load Balancer from the standard Kubernetes Ingress manifest, automating the creation of the load balancer and other required objects such as the VIP, Layer 7 routes (path-based routing), the advertise policy, and certificates (Kubernetes secrets or automatic custom certificates).
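Both the NGINX Ingress Controller and the F5 XC Ingress Controller act on standard Kubernetes Ingress objects. As a minimal sketch of what creating such an object programmatically can look like, the following Python example uses the official Kubernetes client; the hostname, service name, and ingress class are hypothetical, and in practice the same resource is usually applied as a YAML manifest through a GitOps pipeline.

```python
from kubernetes import client, config

def create_demo_ingress(namespace: str = "default") -> None:
    # Assumes kubeconfig access to the cluster where the ingress controller runs.
    config.load_kube_config()

    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",  # or whatever class the controller in use exposes
            rules=[
                client.V1IngressRule(
                    host="demo.example.com",  # hypothetical hostname
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/",
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="demo-app-svc",  # hypothetical backend Service
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_ingress(namespace=namespace, body=ingress)

if __name__ == "__main__":
    create_demo_ingress()
```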
Conclusion
As you can see, the diverse set of ingress controller tools gives you the flexibility to tailor your architecture to your organization's requirements while maintaining consistent application delivery and security practices across your application ecosystem.

Related Content and Technical demos
BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
JWT authorization with NGINX Ingress Controller
My first CRD deployment with CIS | DevCentral
BIG-IP Next for Kubernetes
BIG-IP Next for Kubernetes (LA)
BIG-IP Next Cloud-Native Network Functions (CNFs)
CNF Home
F5 NGINX Ingress Controller
Overview of F5 BIG-IP Container Ingress Services
NGINX One

Driving Down Cost & Complexity: App Migration in the Cloud

Introduction
The digital transformation journey for many organizations involves migrating applications to the cloud. This process, while beneficial, can be fraught with challenges related to cost, complexity, and security. This white paper aims to provide a comprehensive overview of how organizations can drive down costs and simplify the complexity of app migrations to the cloud, leveraging solutions from F5.

The Pain of App Migrations
App migrations are often driven by the need for cost optimization, meeting business requirements, and responding to external factors and market pressures. However, the process can be painful due to delays in internal processes, budget constraints, and the need for secure migrations without publicly advertising the app or its API endpoints.
The app migration process can be divided into the following phases:
Plan: Determine whether a lift-and-shift or a refactor is needed and estimate the time required.
Prepare: Assess infrastructure and ensure compliance and security requirements are met.
Build: Begin migrating the application or refactoring the process.
Pre-Production Testing: Test in staging environments and check for vulnerabilities.
Production Testing: Run the new and legacy environments in production and test for resilience.
Go Live: Deploy the application to the production environment.
Decommission Legacy App: Proceed with decommissioning the legacy app.
Individuals within organizations can identify app migration opportunities by listening for key pain points such as the need to consolidate cloud applications, changes in VMware's contract terms, challenges in ongoing migrations, plans for datacenter consolidation, and timelines for new application cutovers.
Key personas looking to migrate apps include:
VP of IT: Manages IT operations and ensures systems run smoothly.
Cloud Architect: Designs and implements cloud infrastructure.
SecOps: Ensures the security posture of apps, focusing on threat detection, incident response, and compliance.
DevOps: Ensures smooth integration between development and operations, optimizing the deployment pipeline and managing CI/CD workflows.
Whether your migration is between cloud providers or on-premises environments, F5's solution seamlessly ties all the pieces together.

The Process: Migrating Apps with F5
Composed of three integrated components that accelerate time to market and reduce redundancy, an F5 solution provides:
Deployable Software (CE): Abstracts multi-environment complexity.
Distributed Cloud App Connect: Provides app connectivity and a secure network overlay.
SaaS Console: Offers universal visibility and consistent policy enforcement.
F5 Distributed Cloud Services simplify app migrations by ensuring observability, security, and compliance. End-to-end visibility is maintained throughout the migration, traffic is balanced across both environments, and consistent security policies are enforced.
CEs, deployable to most hypervisors, virtual platforms, and most public cloud providers, deliver many L3 and L7 services and capabilities, including the following:
Single dashboard view
Multi-site support
Multi-tenancy
Native service discovery
L7 load balancing
L3 firewall
L3 routing, including with BGP
Network segmentation
L3 VPN
SNAT
DHCP Server
For migrations that include Red Hat OpenShift Virtualization or Nutanix AHV, free tools are available from each vendor to move virtual machines between environments. For OpenShift, use the Migration Toolkit for Virtualization (MTV), and for Nutanix, use Nutanix Move.
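During the production-testing phase described earlier, the legacy and migrated environments serve traffic side by side. As a rough illustration of that validation step (not an F5 tool), the following Python sketch polls a health endpoint on both copies of an application and flags any disagreement before cutover; the endpoint URLs are hypothetical.

```python
import urllib.request

# Hypothetical endpoints for the legacy and migrated copies of the same application.
ENDPOINTS = {
    "legacy": "https://legacy.example.com/healthz",
    "migrated": "https://new.example.com/healthz",
}

def check(name: str, url: str) -> int:
    # Return the HTTP status code of the health endpoint.
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(f"{name}: HTTP {resp.status}")
        return resp.status

if __name__ == "__main__":
    statuses = {name: check(name, url) for name, url in ENDPOINTS.items()}
    if len(set(statuses.values())) > 1:
        print("WARNING: environments disagree; hold the cutover and investigate")
    else:
        print("Both environments respond consistently")
```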
Regardless of the provider or platform, with an F5 XC CE at each location you can reconnect the workloads on your VMs without making any changes to them. Using F5 XC App Connect or Network Connect, extend L7 or L3 network services with any combination of L7 HTTP load balancing or L3 routing (and use SNAT policies only as needed).

Demo
The following video shows two migration scenarios and how using F5 XC simplifies the task. The first part covers how to migrate app VMs from a VMware environment to one powered by Red Hat OpenShift Virtualization. In the second part, I show how to use Nutanix Move to migrate VMs between Nutanix clusters from one cloud provider to another. In both scenarios, F5 XC CEs are configured to use App Connect to seamlessly deliver connectivity and security, allowing both migrations to happen without downtime for the applications.
Video: https://youtu.be/SEuSvcyxWDU

Conclusion
Migrating applications to the cloud can be a time-consuming and costly process, but with the right strategies and solutions, organizations can overcome these challenges. F5's integrated components and services provide the necessary tools to simplify migrations, ensure security, and optimize cost.

Additional Resources
Scale Your DMZ with F5 Distributed Cloud Services
Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
VMware NSX to Red Hat OpenShift Virtualization Migration with F5 Distributed Cloud
How I Did it - Migrating Applications to Nutanix NC2 with F5 Distributed Cloud
Secure Multicloud Networking