Load Balancing as a Service in VMware Cloud Director #

Introduction #

This document is intended for VCPP Cloud Providers who are interested in providing Load Balancing as a Service (LBaaS) in their multi-tenant environments managed by VMware Cloud Director (VCD).

From version 10.2 onwards, VMware Cloud Director provides load balancing services in NSX-T backed organization virtual data centers (VDCs) by leveraging the capabilities of VMware NSX Advanced Load Balancer (Avi).

The content below describes the deployment and configuration procedures and clearly delineates the cloud provider actions from the tenant actions, addressing both the self-service and managed service offerings that are possible.

VMware NSX Advanced Load Balancer (Avi) provides multi-cloud load balancing, web application firewall and application analytics across on-premises data centers and any cloud. The software-defined platform delivers applications consistently across bare metal servers, virtual machines and containers to ensure a fast, scalable, and secure application experience.

LBaaS Anatomy #

The NSX Advanced Load Balancer Platform (Avi) is architected on software-defined principles, decoupling the data and control planes. As a result, it centrally manages and dynamically provisions pools of application services, including load balancing, across multi-cloud environments.

NSX Advanced Load Balancer (Avi) Architecture

Architecturally, the Platform comprises three core elements:

  • The Avi Controller - provides central control and management of the Avi Service Engines. It orchestrates policy-driven application services, monitors real-time application performance (leveraging data provided by the Avi Service Engines), and provides for predictive auto-scaling of load balancing and other application services. Furthermore, it is capable of delivering per-tenant or per-application load balancing, which is increasingly in demand in multi-cloud contexts, and also facilitates troubleshooting with traffic analytics.
  • The Avi Service Engines (SEs) - distributed software that runs on bare metal servers, virtual machines, and containers. They implement application services across on-premises datacenters, colocation datacenters, and public clouds. They also collect data relating to application performance, security, and clients. As distributed software, Avi Service Engines are capable of horizontal auto-scaling within minutes while functioning as service proxies for microservices.
  • The Avi Console - provides web-based administration and monitoring. It is a web server running on the Controller and offers a UI for configuration of application services, delivers visualization of network configurations and virtual IPs (VIPs), and displays application health scores and transaction round-trip times. It is also where customers can view performance, security, and client insights, as well as service interactions.

VMware Cloud Director Support #

Starting with version 10.2, VMware Cloud Director provides load-balancing services by using the capabilities of VMware NSX Advanced Load Balancer (Avi). VMware Cloud Director supports L4 and L7 load balancing that you can configure on an NSX-T Data Center edge gateway.

As a system administrator, you deploy the NSX Advanced Load Balancer controller cluster with the other management solutions in the management infrastructure.

Load Balancing as a Service in VMware Cloud Director

The NSX Advanced Load Balancer Controller uses APIs to interface with NSX Manager and vCenter Server to discover the infrastructure. It also manages the lifecycle and network configuration of Service Engines (SE). The Avi Controller cluster uploads the SE OVA image to the vCenter Server content library and uses vCenter APIs to deploy the SE VMs.

This integration happens through an NSX-T Cloud that is configured in NSX ALB and then imported into VMware Cloud Director.

Load balancing services are associated with NSX-T edge gateways, which can be scoped to an organization VDC or a data center group.

Load Balancing as a Service in VMware Cloud Director - Logical Design

The system administrator has the flexibility to decide whether a service engine group is dedicated to a single edge gateway or shared between several edge gateways.

Tenant users have full self-service UI and API load balancing capabilities in VMware Cloud Director.

Load Balancing as a Service in VMware Cloud Director - Topology

A service engine node is a VM with up to 10 network interfaces (NICs). The first NIC is always used for management and control traffic. The other nine are used to connect to the NSX-T edge gateway (tier-1 gateway) through a service network logical segment. VMware Cloud Director creates the service networks, with a DHCP service that provides IP addresses to the attached SEs, when you enable the load balancing service on an edge gateway.

By default, the service network uses the 10.255.255.0/25 subnet. The system administrator can change it if it overlaps with existing organization VDC networks.
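
To check for such an overlap before enabling the service, here is a minimal Python sketch using only the standard ipaddress module; the organization VDC CIDRs are assumptions for the example.

```python
import ipaddress

# Default load balancer service network (use the default shown by your VCD version).
service_network = ipaddress.ip_network("10.255.255.0/25")

# Example organization VDC network CIDRs -- replace with the tenant's real networks.
org_vdc_networks = ["192.168.1.0/24", "10.255.255.0/24", "172.16.10.0/26"]

for cidr in org_vdc_networks:
    net = ipaddress.ip_network(cidr)
    if service_network.overlaps(net):
        print(f"Conflict: {cidr} overlaps the service network {service_network}")
    else:
        print(f"OK: {cidr} does not overlap {service_network}")
```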

Service engines run each service interface in a different VRF. As a result, IP conflicts and cross-tenant communication cannot occur. Avi automatically picks a service engine to instantiate the load balancing service when the tenant configures a load balancing pool and virtual service.

When an SE is assigned, Avi configures a static route (/32) on the organization VDC edge gateway pointing the virtual service VIP (virtual IP) to the service engine IP address from the tenant’s load balancing service network.

Provider and Tenant Responsibilities #

As a system administrator, you deploy an Avi Controller cluster, complete the initial configuration, and set up the association with NSX-T and VMware Cloud Director. Once the integration is ready, you enable load balancing on an NSX-T edge gateway and assign a service engine group to it.

An organization administrator creates load balancer server pools and virtual services.

Deployment & Configuration #

Load Balancing as a Service in VMware Cloud Director - Implementation Steps

Requirements #

Please find the full list of requirements on the planning and preparation page.

Deploying the Avi Controller Cluster #

To ensure complete system redundancy, the Avi Controller must be highly available. Three Avi Controller VMs form a highly available control plane for the NSX Advanced Load Balancer.
  1. Download the Controller OVA image from the my.vmware.com portal; follow this KB article for the download procedure.
  2. Log in to the vCenter Server through the vSphere Client and deploy the first Avi Controller.
  3. Follow the Deploy OVA Template wizard instructions:
    • Choose a port group for Destination Network in Network Mapping. This port group is the management network for the Controller and will be used for all management communication (e.g., Avi Controller communication with vCenter).
    • Specify the management IP address and default gateway (only static IP addresses should be used in a production environment).
    • The ‘Sysadmin login authentication’ key is used to specify an SSH public key and is NOT required.
  4. Repeat the previous steps to deploy two additional Avi Controllers; together, the three nodes will form the Controller cluster that acts as the control plane for the NSX Advanced Load Balancer.
  5. Create an anti-affinity ‘VM/Host’ rule to make sure Controller VMs are placed on separate hosts.
  6. Power on Controller VMs.
As with any other infrastructure management system, CPU and memory should be 100% reserved on the Avi Controller VMs.
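
As a hedged illustration of that recommendation, the pyVmomi sketch below sets full CPU and memory reservations on the Controller VMs; the vCenter address, credentials, and VM names are assumptions for the example.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"                                  # assumed FQDN
CONTROLLER_VMS = {"avi-ctrl-01", "avi-ctrl-02", "avi-ctrl-03"}   # assumed VM names

ctx = ssl._create_unverified_context()  # lab only; use trusted certificates in production
si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.name not in CONTROLLER_VMS:
        continue
    # Reserve all configured memory and the MHz equivalent of all vCPUs.
    mem_mb = vm.config.hardware.memoryMB
    cpu_mhz = vm.config.hardware.numCPU * vm.runtime.host.summary.hardware.cpuMhz
    spec = vim.vm.ConfigSpec()
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=mem_mb)
    spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=cpu_mhz)
    spec.memoryReservationLockedToMax = True
    vm.ReconfigVM_Task(spec=spec)
    print(f"{vm.name}: reserved {mem_mb} MB memory and {cpu_mhz} MHz CPU")

Disconnect(si)
```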

Avi Controller Cluster Initial Setup #

This section shows the steps to perform initial configuration of the Avi Controller using its deployment wizard. You can change or customize settings following initial deployment using the Avi Controller’s web interface.

Configure the NSX Advanced Load Balancer Controller cluster to provide a highly available control plane for the platform.

  1. Initialize the first NSX Advanced Load Balancer Controller VM
    1. In a web browser, navigate to the first controller IP or FQDN.
      Note: While the system is booting up, a 503 status code or a page with the following message will appear: “Controller is not yet ready. Please try again after a couple of minutes”. Wait for about 5 to 10 minutes and refresh the page.
    2. Once the NSX Advanced Load Balancer welcome screen appears, create an admin account.
    3. Complete the remaining steps by configuring all required parameters (DNS, NTP, SMTP, etc.).
    4. Select No Orchestrator in the Orchestrator Integration page.
    5. Leave the Tenant Settings configured by default.
  2. Configure an NSX Advanced Load Balancer Controller cluster
    1. Navigate to Administration > Controller and select Edit.
    2. Specify the Name and the Controller Cluster IP.
    3. Add the details for each of the three NSX Advanced Load Balancer Controller nodes.  
    4. Click on Save. It will take a few minutes for the services to restart and the Controller cluster to be up.
    5. In a web browser, log in to the Controller cluster VIP.
    6. Navigate to Administration > Controller and ensure that all the Controllers show a State of Active, which indicates a healthy Controller cluster.
    7. Set up licensing in Administration > Settings > Licensing.
      Basic and Enterprise licenses are set at the Controller cluster level; you cannot mix the two license types in a single Avi Controller cluster instance.
    8. Finish the Controller cluster configuration (alerting, backup, etc.). Note: the full configuration of the Avi Controller's general settings is outside the scope of this document; a quick API check of the resulting cluster state is sketched below.
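
The hedged Python sketch below queries the Controller's REST API for the cluster runtime state. It assumes the /api/cluster/runtime endpoint, the response fields, and basic authentication are available in your Avi release; the cluster VIP, credentials, and API version are placeholders.

```python
import requests

CONTROLLER = "https://avi-cluster.example.com"   # assumed cluster VIP FQDN
AUTH = ("admin", "***")                          # assumed credentials

# Endpoint path and response fields are assumptions; check the Avi API reference
# for your release, and make sure basic authentication is allowed on the portal.
resp = requests.get(
    f"{CONTROLLER}/api/cluster/runtime",
    auth=AUTH,
    headers={"X-Avi-Version": "20.1.3"},
    verify=False,  # lab only; use a trusted certificate in production
)
resp.raise_for_status()
runtime = resp.json()

print("Cluster state:", runtime.get("cluster_state", {}).get("state"))
for node in runtime.get("node_states", []):
    print(f"  node {node.get('name')}: {node.get('state')}")
```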

VMware Cloud Director integration with NSX Advanced Load Balancer fails if the default self-signed certificate is used.

By default, the Controller cluster portal is set up with a self-signed certificate that does not have a valid SAN (Subject Alternative Name), which makes the integration with VMware Cloud Director impossible. VMware Cloud Director rejects any URL that does not match the values present in the certificate, in order to conform with industry-standard security guidelines and the platform criteria.

Setting up the Avi Controller cluster portal certificate is outside the scope of this document.
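
For illustration only, the sketch below uses the Python cryptography package to generate a key pair and a certificate whose SAN covers the cluster VIP and node FQDNs (all hostnames are assumptions); in practice you would typically have such a certificate issued by your CA and upload it to the Controller portal.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Assumed FQDNs: cluster VIP plus the three Controller nodes.
names = [
    "avi-cluster.example.com",
    "avi-ctrl-01.example.com",
    "avi-ctrl-02.example.com",
    "avi-ctrl-03.example.com",
]

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, names[0])])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=730))
    # The SAN is what VMware Cloud Director validates against the Controller URL.
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName(n) for n in names]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("avi-portal.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("avi-portal.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```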

NSX-T Cloud #

The point of integration between Avi and any infrastructure is called a cloud. For an NSX-T environment, an NSX-T cloud has to be configured.

An NSX-T cloud is defined by an NSX-T manager and a transport zone. If an NSX-T manager has multiple transport zones, each transport zone will map to a separate NSX-T cloud. Likewise, to manage load balancing for multiple NSX-T environments, each NSX-T manager will map to a separate NSX-T cloud.

NSX-T cloud general considerations:

  • An NSX-T cloud has a one-to-one relationship with a network pool backed by an NSX-T transport zone.
  • DHCP checkbox: the Service Engines are expected to get an IP address via DHCP on the management subnet.

To create an NSX-T cloud, log in to the Avi Controller and:

  1. Navigate to Infrastructure > Clouds.
  2. Click on Create and select NSX-T Cloud.

Load Balancing as a Service in VMware Cloud Director - NSX-T Cloud General Configuration

Following the general parameters, the NSX-T section allows you to configure the network settings of the future service engines:

  • The transport zone must match the overlay transport zone configured in the network pool in VMware Cloud Director.
  • The NSX-T cloud requires two types of network configurations:
    • Management Network: select the tier-1 logical router and overlay segment to be used for management connectivity of the service engine VMs. The first vNIC of each service engine will be connected to that management network (which must be created upfront, with DHCP enabled).
    • Data Network: although not required for the VMware Cloud Director integration, a dummy data network must be set to avoid leaving the NSX-T cloud object in a degraded state. VMware Cloud Director will automatically complete the data network tier-1 logical routers and segments when load balancing is enabled on the tier-1 gateways acting as NSX-T edge gateways.
  • Each NSX-T cloud can have one or more vCenter Servers associated with it. vCenter objects must be configured in Avi for all the vCenter compute managers added to NSX-T that have ESXi hosts belonging to the transport zone configured in the NSX-T cloud.

Load Balancing as a Service in VMware Cloud Director - NSX-T Cloud Networking Configuration
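
Once the NSX-T cloud is saved, a quick way to confirm it is visible through the Controller API is sketched below; the cluster VIP and credentials are placeholders, and the sketch assumes basic authentication is enabled on the Controller.

```python
import requests

CONTROLLER = "https://avi-cluster.example.com"   # assumed cluster VIP
AUTH = ("admin", "***")                          # assumed credentials

# List the clouds configured on the Controller; the NSX-T cloud created above
# should appear with an NSX-T vtype.
resp = requests.get(
    f"{CONTROLLER}/api/cloud",
    auth=AUTH,
    headers={"X-Avi-Version": "20.1.3"},
    verify=False,  # lab only
)
resp.raise_for_status()

for cloud in resp.json().get("results", []):
    print(cloud.get("name"), cloud.get("vtype"), cloud.get("uuid"))
```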

Service Engine Groups #

A service engine group is an isolation domain that also defines service engine node sizing (CPU, memory, storage), bandwidth restrictions, availability modes, and network access.

Resources in a service engine group can be used for different virtual services, depending on your tenant’s needs.

To create a service engine group, log in to the Avi Controller and:

  1. Navigate to Infrastructure > Cloud Resources > Service Engine Group.
  2. Using the “Select Cloud” drop-down menu, select the relevant NSX-T cloud.
  3. Click on Create.

The basic settings page allows you to configure the high availability mode, the service engine capacity and limit settings, as well as other advanced parameters. The official Avi documentation can help you size the service engines appropriately in terms of CPU, memory, and disk.

Load Balancing as a Service in VMware Cloud Director - Service Engine Groups Configuration

Service engine VMs may be deployed automatically on any host and storage that most closely match the resource and reachability criteria for placement.

Starting with Avi 20.1.3, it is possible to tailor the service engine placement in terms of:

  • vSphere folder
  • vSphere hosts and clusters
  • vSphere datastore

Load Balancing as a Service in VMware Cloud Director - Service Engine Groups Placement Configuration

You may create multiple service engine groups depending on your tenants’ requirements for high availability, placement, or performance.
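
Providers who prefer to script this step can create a service engine group through the Avi API, as in the hedged sketch below; the field names (ha_mode, max_se, vcpus_per_se, memory_per_se, disk_per_se) follow the Avi object model but should be verified against the API reference for your release, and all names and values shown are assumptions.

```python
import requests

CONTROLLER = "https://avi-cluster.example.com"   # assumed cluster VIP
AUTH = ("admin", "***")                          # assumed credentials
HEADERS = {"X-Avi-Version": "20.1.3"}

# Resolve the NSX-T cloud created earlier (cloud name is an assumption).
clouds = requests.get(f"{CONTROLLER}/api/cloud?name=nsxt-cloud-01",
                      auth=AUTH, headers=HEADERS, verify=False).json()
cloud_ref = clouds["results"][0]["url"]

# Illustrative shared service engine group definition.
se_group = {
    "name": "seg-shared-gold",
    "cloud_ref": cloud_ref,
    "ha_mode": "HA_MODE_SHARED",   # N+M elastic HA
    "max_se": 4,
    "vcpus_per_se": 2,
    "memory_per_se": 4096,         # MB
    "disk_per_se": 25,             # GB
}

resp = requests.post(f"{CONTROLLER}/api/serviceenginegroup",
                     json=se_group, auth=AUTH, headers=HEADERS, verify=False)
resp.raise_for_status()
print("Created service engine group:", resp.json().get("uuid"))
```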

VMware Cloud Director Service Admin Portal #

After the NSX Advanced Load Balancer is deployed and configured with the NSX-T infrastructure, the next step is to register the Controller cluster with VMware Cloud Director.

To provide virtual service management capabilities to your tenants:

  1. Register your Avi Controller instances with your VMware Cloud Director instance.
  2. Register your NSX-T Cloud instances with VMware Cloud Director.
  3. Import all the relevant service engine groups to your VMware Cloud Director deployment.

Load Balancing as a Service in VMware Cloud Director - Avi Controller Registration in VMware Cloud Director

Load Balancing as a Service in VMware Cloud Director - NSX-T Cloud Registration in VMware Cloud Director

Load Balancing as a Service in VMware Cloud Director - Service Engine Group Import
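
These registration steps can also be automated with the VMware Cloud Director OpenAPI. The hedged sketch below performs step 1 (Controller registration) as a provider; the endpoint path, payload fields, and API version are modeled on the VCD 10.2+ OpenAPI and should be treated as assumptions to verify against your version's API reference. The NSX-T cloud and service engine group imports (steps 2 and 3) can be scripted in the same way against their corresponding load balancer endpoints.

```python
import requests

VCD = "https://vcd.example.com"                   # assumed VCD endpoint
API_VERSION = "application/json;version=36.0"     # assumed supported API version

# Obtain a provider session token (system administrator credentials assumed).
session = requests.post(
    f"{VCD}/cloudapi/1.0.0/sessions/provider",
    auth=("administrator@System", "***"),
    headers={"Accept": API_VERSION},
    verify=False,  # lab only
)
session.raise_for_status()
headers = {
    "Accept": API_VERSION,
    "Authorization": f"Bearer {session.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}",
}

# Illustrative Controller registration payload.
controller = {
    "name": "avi-cluster-01",
    "description": "NSX ALB Controller cluster",
    "url": "https://avi-cluster.example.com",
    "username": "admin",
    "password": "***",
    "licenseType": "ENTERPRISE",
}

resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/loadBalancer/controllers",
    json=controller, headers=headers, verify=False,
)
resp.raise_for_status()
print("Registration request accepted:", resp.status_code)
```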

Consumption #

Enabling Load Balancing #

Before an organization administrator can configure load balancing services, a system administrator must enable the load balancer on the NSX-T edge gateway and assign at least one service engine group to it.

  1. Navigate to the NSX-T edge gateway on which you want to enable load balancing.
  2. Under Load Balancer, click General Settings.
  3. Click Edit and enable the Load Balancer function for this particular edge.
  4. (optional) Enter a network CIDR for a service network subnet.
The service network is an internal construct; as such, it is not exposed to the tenant. Only change the default specification (192.168.255.1/25) if it overlaps with an existing organization VDC network.

Enable Load Balancing as a Service in VMware Cloud Director on an NSX-T Edge Gateway
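
The same operation can be scripted through the VCD OpenAPI, as in the hedged sketch below; the edge gateway URN, API version, and payload fields (enabled, serviceNetworkDefinition) are assumptions to verify against your VCD API reference.

```python
import requests

VCD = "https://vcd.example.com"                   # assumed VCD endpoint
API_VERSION = "application/json;version=36.0"     # assumed supported API version
GATEWAY_ID = "urn:vcloud:gateway:00000000-0000-0000-0000-000000000000"  # assumed URN

session = requests.post(
    f"{VCD}/cloudapi/1.0.0/sessions/provider",
    auth=("administrator@System", "***"),
    headers={"Accept": API_VERSION},
    verify=False,  # lab only
)
session.raise_for_status()
headers = {
    "Accept": API_VERSION,
    "Authorization": f"Bearer {session.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}",
}

lb_config = {
    "enabled": True,
    # Only override the default service network if it overlaps tenant networks.
    "serviceNetworkDefinition": "192.168.255.1/25",
}

resp = requests.put(
    f"{VCD}/cloudapi/1.0.0/edgeGateways/{GATEWAY_ID}/loadBalancer",
    json=lb_config, headers=headers, verify=False,
)
resp.raise_for_status()
print("Load balancer update accepted:", resp.status_code)
```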

Once load balancing is enabled, the next step is to manage the service engine group assignment on the edge gateway.

  1. Under Load Balancer, click Service Engine Groups.
  2. Select an available service engine group from the list.
  3. For shared service engine groups, the system administrator must set the maximum and reserved number of virtual services that can be placed on the edge gateway (within the capacity of the service engine group).

Assign a Service Engine Group to an NSX-T Data Center Edge Gateway in VMware Cloud Director

Design considerations:

  • A system administrator can assign one or more service engine groups to an NSX-T Data Center edge gateway.
  • All service engine groups that are assigned to a single edge gateway use the same service network.

Load Balancer Server Pool #

After a system administrator assigns a service engine group to an edge gateway, an organization administrator can create and configure virtual services that run in a specific service engine group.

The heart of a load balancer is its ability to effectively distribute traffic across healthy servers. A server pool is a group of one or more servers that you configure to run the same application and to provide high availability.

If persistence is enabled, only the first connection from a client is load balanced. While the persistence remains in effect, subsequent connections or requests from a client are directed to the same server.

  1. Under Load Balancer, click Pools, and then click Add.
  2. Configure the general settings for the load balancer pool.

LBaaS in VMware Cloud Director - Server Pool Creation

  3. Add members to the server pool.

LBaaS in VMware Cloud Director - Server Pool Members

Note: pool health status and pool member health status will remain Down until a virtual service is created and service engines are deployed.
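
For tenants or providers automating pool creation, the hedged sketch below uses the VCD OpenAPI as a tenant; the endpoint path and payload fields (gatewayRef, algorithm, members, healthMonitors) are assumptions modeled on the VCD OpenAPI and should be verified against your release, as are the organization, credentials, and addresses shown.

```python
import requests

VCD = "https://vcd.example.com"                   # assumed VCD endpoint
API_VERSION = "application/json;version=36.0"     # assumed supported API version
GATEWAY_ID = "urn:vcloud:gateway:00000000-0000-0000-0000-000000000000"  # assumed URN

# Tenant (organization) session; organization name and credentials are assumptions.
session = requests.post(
    f"{VCD}/cloudapi/1.0.0/sessions",
    auth=("orgadmin@acme", "***"),
    headers={"Accept": API_VERSION},
    verify=False,  # lab only
)
session.raise_for_status()
headers = {
    "Accept": API_VERSION,
    "Authorization": f"Bearer {session.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}",
}

# Illustrative two-member HTTP server pool.
pool = {
    "name": "web-pool-01",
    "gatewayRef": {"id": GATEWAY_ID},
    "algorithm": "LEAST_CONNECTIONS",
    "defaultPort": 80,
    "healthMonitors": [{"type": "HTTP"}],
    "members": [
        {"ipAddress": "172.16.10.11", "port": 80, "enabled": True, "ratio": 1},
        {"ipAddress": "172.16.10.12", "port": 80, "enabled": True, "ratio": 1},
    ],
}

resp = requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/pools",
                     json=pool, headers=headers, verify=False)
resp.raise_for_status()
print("Pool creation accepted:", resp.status_code)
```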

Virtual Service #

A virtual service listens for traffic to an IP address, processes client requests, and directs valid requests to a member of the load balancer server pool.

A virtual service is a combination of an IP address and a port that uses a single network protocol. The virtual service is advertised to outside networks and is listening for client requests. When a client connects to the virtual service, the load balancer directs the request to a member of the configured load balancer server pool.

To secure SSL termination for a virtual service, you can use a certificate from the certificate library. For more information, see Import Certificates to the Certificates Library.

  1. Under Load Balancer, click Virtual Services, and then click Add.
  2. Configure the general settings for the virtual service:
  3. Enter a meaningful name and, optionally, a description, for the virtual service.
  4. To activate the virtual service upon creation, toggle on the Enabled option.
  5. Select a service engine group for the virtual service.
  6. Select a load balancer pool for the virtual service.
  7. Enter an IP address for the virtual service.
  8. Select the virtual service type.

LBaaS in VMware Cloud Director - Virtual Service Creation

The virtual IP (VIP) can be any arbitrary IPv4 address. It can be a routable external IP address allocated to the organization VDC edge gateway or any internal routed address:

  • An external organization VDC edge gateway allocated IP address; no DNAT is required, but you can no longer use this IP for NAT due to the internal packet processing.
  • An arbitrary internal IP (DNAT required). In that situation, the VIP must not coincide with any existing organization VDC networks or with the load balancer service network.

A /32 static route will be automatically created on the tier-1 gateway, pointing the VIP to the relevant service engine node IP.
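
Putting the steps above together, the hedged sketch below creates an HTTP virtual service through the VCD OpenAPI; the endpoint path, payload fields (serviceEngineGroupRef, loadBalancerPoolRef, virtualIpAddress, servicePorts), identifiers, and API version are assumptions to check against your VCD release.

```python
import requests

VCD = "https://vcd.example.com"                   # assumed VCD endpoint
API_VERSION = "application/json;version=36.0"     # assumed supported API version

session = requests.post(
    f"{VCD}/cloudapi/1.0.0/sessions",
    auth=("orgadmin@acme", "***"),
    headers={"Accept": API_VERSION},
    verify=False,  # lab only
)
session.raise_for_status()
headers = {
    "Accept": API_VERSION,
    "Authorization": f"Bearer {session.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}",
}

# Illustrative virtual service: external VIP, HTTP on port 80.
# The URN values below are placeholders for the edge gateway, the service engine
# group assigned to it, and the pool created earlier.
virtual_service = {
    "name": "web-vs-01",
    "enabled": True,
    "description": "Tenant web front end",
    "gatewayRef": {"id": "urn:vcloud:gateway:00000000-0000-0000-0000-000000000000"},
    "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:..."},
    "loadBalancerPoolRef": {"id": "urn:vcloud:loadBalancerPool:..."},
    "virtualIpAddress": "203.0.113.10",
    "applicationProfile": {"type": "HTTP"},
    "servicePorts": [{"portStart": 80, "sslEnabled": False}],
}

resp = requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/virtualServices",
                     json=virtual_service, headers=headers, verify=False)
resp.raise_for_status()
print("Virtual service creation accepted:", resp.status_code)
```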

Health Monitoring #

VMware Cloud Director manages the health of the following load balancing components:

  • NSX-T Cloud
  • Virtual services
  • Server pools

VMware Cloud Director provides basic monitoring and metrics about the virtual services and pools to both providers and tenant administrators.

Providers can see the basic usage metrics for each service engine group deployment (number of applications, usage, and running service engines).

Tenants can see some basic metrics about each virtual service (up or down status and traffic). Analytics are only available if the Controller is imported with an Enterprise license.

Advanced Features Consumption #

Although some advanced features are not exposed in VMware Cloud Director, they can be provided as a managed service. This includes (but is not limited to) Web Application Firewall (WAF) or custom Health Monitors.

Avi Intelligent Web Application Firewall #

Web application firewalls (WAFs) are intended to protect businesses from web app attacks and proactively prevent threats. Traditional web application security solutions do not provide visibility and security insights that administrators can use to create an effective application security posture. Enterprises need real-time visibility into application traffic, user experience, security and threat landscape, and application performance to identify and protect against the most sophisticated attacks.

Avi leverages software-defined architecture and its strategic location on the network to gain real-time application insights. The built-in WAF solution provides application security and networking teams with an elastic and analytics-driven solution that scales and simplifies policy customization and administration through central management.

Avi intelligent WAF (iWAF) plays an integral role in a defense-in-depth strategy that does comprehensive threat analysis, mitigates risk, provides zero-day protection against unpublished exploits and optimizes application security.

As of today, Avi iWAF capabilities are not exposed in VMware Cloud Director for self-service configuration and consumption. However, a system administrator can assign WAF policies to existing virtual services.

LBaaS in VMware Cloud Director - Web Application Firewall as a Managed Service

Custom Health Monitor #

Avi validates that the backend servers are functioning correctly by sending active health monitor probes on a periodic basis. Avi Vantage also tests whether they can accommodate additional workloads before load balancing a client to a server. Health monitors originate from the service engines assigned to the application’s virtual service. The health monitors are attached to the pool for the virtual service.

A pool may have multiple actively concurrent health monitors (such as Ping, TCP, and HTTP), as well as a passive monitor. All active health monitors must be successful for the server to be marked up.

When configuring a server pool from the tenant portal, a tenant administrator can choose between five health monitor types: HTTP, HTTPS, TCP, UDP, and Ping. However, a system administrator can create and customize advanced health monitors from the Avi UI and assign them to existing server pools.
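
Such a custom monitor can also be created through the Avi API, as in the hedged sketch below; the field names (http_monitor, http_request, http_response_code, send_interval) follow the Avi HealthMonitor object model but, like the earlier sketches, the values, endpoint, and the use of basic authentication are assumptions to verify against your Avi release.

```python
import requests

CONTROLLER = "https://avi-cluster.example.com"   # assumed cluster VIP
AUTH = ("admin", "***")                          # assumed credentials
HEADERS = {"X-Avi-Version": "20.1.3"}

# Illustrative HTTP health monitor probing an application-specific path.
monitor = {
    "name": "hm-http-healthz",
    "type": "HEALTH_MONITOR_HTTP",
    "send_interval": 10,       # seconds between probes
    "receive_timeout": 4,      # seconds to wait for a response
    "successful_checks": 3,
    "failed_checks": 3,
    "http_monitor": {
        "http_request": "GET /healthz HTTP/1.0",
        "http_response_code": ["HTTP_2XX", "HTTP_3XX"],
    },
}

resp = requests.post(f"{CONTROLLER}/api/healthmonitor",
                     json=monitor, auth=AUTH, headers=HEADERS, verify=False)
resp.raise_for_status()
print("Created health monitor:", resp.json().get("uuid"))
```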

LBaaS in VMware Cloud Director - Custom Health Monitor Creation

Once the additional health monitor is associated with the server pool, it will appear in the tenant portal: a tenant administrator can remove it but not add it in self-service.

LBaaS in VMware Cloud Director - Custom Health Monitor in Pool

Resources #