How to configure F5® BIG-IP® as a front-end proxy for Dell DX Object Storage Platform
About this solution:

Rapid growth of unstructured data and increasing retention requirements are driving the need for smarter data management, lower costs and more efficient use of storage systems. Object-based storage offers solutions across a variety of environments. Dell's DX Object Storage Platform (DX) provides content-addressable storage designed to intelligently access, store, protect and distribute fixed digital content. From web publishing to archiving, DX offers a powerful combination of data and storage management features through an elegant, self-managing, self-healing and peer-scaling architecture.

F5's BIG-IP is an Application Delivery Controller (ADC) and in-line proxy that can present a DX storage cluster as one or more virtual IP addresses (VIPs) while improving security and scalability and maintaining the high performance and high availability that are core benefits of the DX platform.

Benefits of the DX | BIG-IP solution:
- Simplify, protect and massively scale the DX HTTP interface
- Better support multi-tenant environments with L7 QoS Rate Shaping, authentication and other features
- Customize traffic flows and security policies, and enable chargeback using iRules
- Add a hardened layer of protection in front of DX
- Enable client-side SSL termination, acceleration and offload of encryption processing
- Improve network performance and offload network processing with TCP optimization
About this page:

Setting up a secure and scalable DX object storage solution with F5 BIG-IP is straightforward using this Dell TechCenter configuration guide. The information presented in this wiki is compiled from proof-of-concept lab work completed during 2010 at the Dell Product Group labs in Round Rock, Texas. From this experience, we are able to provide reference materials, POC lab details and simple step-by-step deployment guidance for BIG-IP to support DX using the native HTTP (REST) interface. A key ingredient in this configuration is the iRules® script, jointly developed by Dell and F5 and available on this page, which integrates support for local DX storage clusters. The script provides base support for the BIG-IP DX proxy, and customers can modify it to extend support for additional features based on their unique requirements.

Table of Contents:

- Reference Materials: Recommended web links to information, products and services
- Proof of Concept Lab Configuration: Explanation of the lab setup and testing
-- Hardware and Software: Equipment models, software versions and features
-- Diagrams: Detailed view of the lab test environment, network addressing and traffic flows
-- Test Results: Functional testing and load testing from the perspective of the BIG-IP
- DX iRule: The script and how it works
- How-To Configure BIG-IP for DX: Step-by-step configuration of BIG-IP using Application Templates to simplify and shorten the setup time

Use this wiki as a learning tool and your starting point for configuring BIG-IP with DX.
Here you will find the equipment list for the 2 Gbps test lab environment along with detailed diagrams showing the layout and example traffic flows. With this setup, we are able to successfully perform both functional testing and load testing.

Note: Examples for a 12 Gbps throughput infrastructure will be added to this page in the future.

Hardware and Software

Dell DX 6000 Series Object Storage
- Code Version: CAStor version 4.0.2, CAStor revision 28185.g7.3, CSN bundle 1.0
- Clusters: 8 clusters, each with 2 physical nodes and 1 Cluster Services Node (CSN)
- Storage Nodes: 2 x DX6000 per cluster, Mem 12GB, Proc E5640 @ 2.67GHz, HD 250GB, each node running two virtual instances of CAStor
- Cluster Services Nodes: 1 x DX6012s per cluster
- Clients: Dell PowerEdge M1000e chassis with M610 blades
F5 BIG-IP Application Delivery Controllers
- Code Version: TMOS 10.2 HF1
- Datacenter #1: 2 x BIG-IP 3600 Local Traffic Manager (LTM) + WAN Optimization Module (WOM) (HA pair)
- Datacenter #2: 1 x BIG-IP 3600 LTM + WOM
- Client network: 1 x BIG-IP 1600 Global Traffic Manager (GTM)
Dell PowerConnect Switches

- Code Version: 184.108.40.206, flow control enabled, LACP trunks w/ 802.1q VLAN tagging to BIG-IP
- Cluster rack switches: 4 x PowerConnect 6248 (1 per rack)
- Datacenter #1 aggregate switch: 1 x PowerConnect 6248
- Datacenter #2 aggregate switch: 1 x PowerConnect 6248
- Client network switch: 1 x PowerConnect 6248
- WAN emulator
Diagrams

BIG-IP is a high-performance proxy and in-line network device that sits between clients and the DX storage cluster; all of the DX HTTP traffic therefore passes through BIG-IP. This positions BIG-IP to play a key role in security, acceleration, storage node offload and traffic management for DX.

Figure 1 shows the physical layout of the POC lab. The address table in the center of the diagram lists the IP subnets, BIG-IP Virtual IP (VIP) addresses and associated DNS information for the full load tests. We configure a total of eight clusters and eight VIPs (one VIP per cluster) and flood the VIPs with traffic until we reach the maximum platform throughput on the BIG-IP unit in datacenter #1.

Figure 1. Diagram showing the DX | BIG-IP POC lab setup with a simulated wide area network

Figure 2 shows a typical traffic flow with BIG-IP as a front-end proxy for DX. Moving beyond the simple "one VIP to one cluster" load test scenario mentioned above, we create a more interesting test environment by configuring the BIG-IP with multiple VIPs that share a single back-end storage cluster. In figure 2, two VIPs provide access to a single shared storage cluster. In our tests, each VIP contains a different and unique customer configuration supporting a mix of security and traffic management requirements such as SSL termination, rate shaping, load balancing and persistence (future tests will include authentication, HTTP header insertion and custom redirects). This setup demonstrates the ability to enforce different client access policies against a single shared DX cluster, lending itself to multi-tenant environments.

Figure 2. Diagram showing typical DX traffic flows through BIG-IP

Test Results

Functional Tests

In the POC environment we successfully test the following BIG-IP software modules and features with DX.

Module: BIG-IP Local Traffic Manager (LTM)
Features:
- Load balancing
- iRules
- OneConnect (TCP multiplexing / re-use)
- TCP profiles tuned for LAN
- L7 QoS Rate Shaping (limit maximum bandwidth per VIP)
- SSL acceleration/offload
- HTTP profile RAM cache
- Partitions
- Route Domains
- SNAT and routed configurations
Module: BIG-IP Global Traffic Manager (GTM)
Features:
- Intelligent DNS traffic distribution of DX client requests
- Active/standby datacenter load distribution
- Active/standby datacenter failover
Module: BIG-IP WAN Optimization Module (WOM)
Features:
- iSessions tunneling between remote sites
- Acceleration and SSL encryption of DX HTTP replication traffic between datacenters
In the future, we plan to test the following additional hardware and software:
- BIG-IP 8900 12 Gbps platform (10 Gbps Ethernet)
- BIG-IP Access Policy Manager (APM)
- BIG-IP Application Security Manager (ASM)
- BIG-IP WebAccelerator
Load Tests

We run a simple set of load tests to show how the BIG-IP scales with the base/core features enabled, including load balancing, the DX iRules script, OneConnect (server-side TCP re-use), TCP LAN profiles and HTTP LAN profiles. HTTP storage requests are generated by eight Dell blade servers, each with multiple Linux terminal sessions running shell scripts that use the CAStor Python client. These client sessions are directed to the 8 BIG-IP VIP addresses via GTM DNS name resolution across both datacenters and perform a combination of read and write operations. File sizes are a mix of 100KB and 1MB photo image files. Client sessions are spawned until we achieve 2.1 Gbps of throughput on the BIG-IP in datacenter #1.

During the test runs we use the BIG-IP Dashboard to view the system and network statistics, and the following screenshots capture the results from the BIG-IP 3600 model in datacenter #1. With the iRule in place, the system is able to achieve the maximum platform throughput limit while maintaining a CPU utilization rate of 53.2%. The number of active TCP connections hovers at 60 and memory utilization holds steady at 43.3%. These are very healthy statistics for a system under full load.

System utilization may vary depending on the operating environment. Variables that can impact BIG-IP utilization include the platform/model in use, the number of concurrent client connections, the file sizes transferred, and the frequency and volume of client storage requests, among others.

Note: Load test results for a 12 Gbps infrastructure using a BIG-IP 8900 model will be added to this page in the future.

Figure 3. Screenshot of the BIG-IP Dashboard during the load test
The base-functionality iRule provided by Dell and F5 supports local DX cluster access and is a minimum requirement when using BIG-IP with DX. During configuration, you can copy and paste this script into the BIG-IP virtual server setup. See the How-To section for the complete installation instructions.

Click HERE to get the BIG-IP iRules script for DX local cluster support!

Note: This script has been executed over 200 million times in the test lab with no errors or issues detected by DX or the BIG-IP.

How the iRule works…

Figure 4. Dell DX iRules processing flow diagram

Ideas for additional DX iRules functionality:

Customers can extend or modify the iRules script to add functionality or enforce policies based on their unique requirements. These are just a few ideas for added capability using iRules:

iRules HTTP header insertion and/or rewrites of HTTP headers as metadata to be stored along with the object.
Examples:
-- Per-VIP Security – Objects stored through VIP A can only be retrieved through VIP A: the iRule inserts an HTTP header unique to the VIP and checks it on all storage requests, helping to maintain separation of customer data.
-- Per-VIP Chargeback – Report on storage utilization for all objects with a specific metadata tag and value; for example, insert a header called CustomerName.
-- Per-VIP Data Management – Enforce lifepoints (retention) and replication policies on stored objects by inserting or rewriting the appropriate headers and values.
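As a rough sketch of the per-VIP header idea, an iRule along the following lines could tag writes and screen reads. The header name, tenant value and response handling here are illustrative only; they are not part of the Dell/F5 base script, and you should adapt them to your own metadata scheme:

```tcl
# Illustrative per-VIP tagging sketch -- header name and value are hypothetical
when HTTP_REQUEST {
    if { [HTTP::method] eq "POST" or [HTTP::method] eq "PUT" } {
        # Tag every object written through this VIP with a tenant marker
        HTTP::header insert "X-Tenant-Meta" "customer-a"
    }
}
when HTTP_RESPONSE {
    # On reads, block objects that carry a different VIP's tag
    if { [HTTP::header exists "X-Tenant-Meta"] &&
         [HTTP::header value "X-Tenant-Meta"] ne "customer-a" } {
        HTTP::respond 403 content "Access denied by VIP policy"
    }
}
```

Each VIP would carry its own copy of the iRule (or a datagroup lookup) with its own tag value, which is what keeps one customer's objects from being served through another customer's VIP.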
Custom redirects – Support for redirects to external DX clusters. For example, if a cluster returns HTTP 404 “Not Found” or HTTP 507 “Insufficient Storage Space”, the iRule could be configured to issue a client redirect to a different DNS name that represents a cluster in a nearby site or remote datacenter where the content is likely to be located or space is available.
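The custom-redirect idea can also be sketched as an iRule. The remote cluster DNS name below is hypothetical, and the URI is captured at request time because it is not directly available in the response event:

```tcl
when HTTP_REQUEST {
    # Remember the URI so it can be reused when the response arrives
    set dx_uri [HTTP::uri]
}
when HTTP_RESPONSE {
    # If the local cluster lacks the object or is out of space,
    # redirect the client to a remote cluster (illustrative DNS name)
    if { [HTTP::status] == 404 || [HTTP::status] == 507 } {
        HTTP::respond 302 Location "http://dx-remote.dell.local$dx_uri"
    }
}
```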
Watch this 5-minute video to see the BIG-IP setup for DX.

Configuration

The following information provides details on the configurations shown in the video.

Network Connections

To start, we interconnect the BIG-IP with the Dell PowerConnect switch using Ethernet, LACP port channeling and 802.1q VLAN tagging. We create a 4 Gbps trunk with tagged VLANs 11 and 44, matching the POC lab connection in datacenter #1 (see figure 1). This resilient and flexible configuration provides interface-level redundancy, makes it easy to add or remove bandwidth capacity and VLANs, and withstands port-level failures, often with no interruption in service. The following examples in figure 5 provide the configurations needed to link the BIG-IP and PowerConnect switches:
- Name = datapipe
- Interfaces = 1.1, 1.2, 1.3, 1.4
- LACP = enabled
- LACP Mode = Active
- LACP Timeout = Short
- Link Selection Policy = Auto
- Frame Distribution Hash = SRC/DST IP address
- Name = VLAN11server
- Tag = 11
- Tagged Interfaces = datapipe*
- MTU = 1500
- Name = VLAN44vip
- Tag = 44
- Tagged Interfaces = datapipe*
- MTU = 1500
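For reference, the same trunk and VLANs can be created from the BIG-IP command line with tmsh. This is a sketch written against TMOS 10.x-style syntax; check the property names against the tmsh reference for your version before using it:

```
create /net trunk datapipe interfaces add { 1.1 1.2 1.3 1.4 } lacp enabled lacp-mode active lacp-timeout short
create /net vlan VLAN11server tag 11 mtu 1500 interfaces add { datapipe { tagged } }
create /net vlan VLAN44vip tag 44 mtu 1500 interfaces add { datapipe { tagged } }
```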
Figure 5. Table showing the network configurations for BIG-IP and PowerConnect devices

With the devices connected, we next configure BIG-IP to support DX.

From the BIG-IP Web GUI:

Open "Templates and Wizards" – "Templates"
- Select the "Generic HTTP" template
- Create the Virtual Server by providing answers to the questions.

Our examples:

Virtual Server Questions
- Name (dx)
- IP address (172.16.44.31)
- Routing configuration (yes)

SSL Encryption Questions
- SSL (no)

HTTP Server Pool, Load Balancing, and Service Monitor Questions
- Create New Pool
- LB Method (Dynamic Ratio (member))
- Address (172.16.11.200) Service Port (80)
- Click "Add"
- Address (172.16.11.201) Service Port (80)
- Click "Add"
- Create New Monitor
- Seconds (5)
- HTTP Version (Version 1.1)
- FQDN (dx.dell.local)
Protocol Optimization Questions
- Client Network (LAN)
- Click "Finish"
- View the objects that are created: Virtual Server, Pool, Monitor and Profiles
- Click "Back to Templates"
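The pool and virtual server the template builds can be approximated in tmsh as follows. The object names and profile choices here are illustrative guesses at what the Generic HTTP template generates, not an exact dump of its output:

```
create /ltm pool dx_pool load-balancing-mode dynamic-ratio-member monitor http members add { 172.16.11.200:80 172.16.11.201:80 }
create /ltm virtual dx_virtual_server destination 172.16.44.31:80 ip-protocol tcp pool dx_pool profiles add { http tcp-lan-optimized }
```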
Open “Local Traffic” – “iRules”
- Create the iRule called "dx_irule"
- Copy and paste the contents of the script (see link above)
- Click "Finish"
Open "Local Traffic" – "Virtual Servers"
- Select the Virtual Server called "dx_virtual_server"
- Add the "dx_oneconnect" profile
- Click "Update"
- Select the "Resources" tab
- Remove persistence from the configuration
- Click "Update"
- Under iRules, click "Manage"
- Add the "dx_irule"
- Click "Finish"
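A tmsh sketch of the same adjustments, assuming the object names shown in the GUI steps (verify them against your own configuration before applying):

```
modify /ltm virtual dx_virtual_server profiles add { dx_oneconnect }
modify /ltm virtual dx_virtual_server persist none
modify /ltm virtual dx_virtual_server rules { dx_irule }
```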
Open “Local Traffic” – “Network Map”
- View the BIG-IP application configuration hierarchy of Virtual Server, iRule, Pool and Members
The system is now ready to support DX storage requests.

Open "Overview" – "Dashboard"
- View the system statistics as traffic passes through the BIG-IP
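If you prefer the command line to the Dashboard, equivalent statistics can be pulled with tmsh. This is a sketch; exact output and available fields vary by TMOS version:

```
show /ltm virtual dx_virtual_server
show /ltm pool dx_pool
show /sys performance
```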