Zurien Blog

We will compare how local and global load balancing works in F5 BIG-IP and AWS Elastic Load Balancer (ELB). For F5 BIG-IP, we will first review some concepts and terminology in both Local Traffic Manager (LTM) and BIG-IP DNS (formerly known as Global Traffic Manager), then we will discuss the configuration we did in the previous blog, AWS ELB and AWS ASG. The concepts are quite similar, with only minor differences. This blog is best suited for readers who are already familiar with F5 BIG-IP and want to understand how application load balancing works in AWS.

Let's have a quick review of how F5 BIG-IP works with Local Traffic Manager (LTM). LTM is the local load balancing feature of F5 BIG-IP, and it's very simple. In our topology, we have an external network, 10.10.0.0/16, which is connected to the client, and an internal network, 172.16.0.0/16, which is connected to the servers. The servers' IP addresses are 172.16.20.1, 172.16.20.2, and 172.16.20.3.

Next, we need to understand these terms. Nodes are the IP addresses of the servers. If we add the port of the application running on our servers, these are now called pool members, or sometimes just members. Basically, members are nodes + ports. Members are added to pools, and you don't configure members or pool members outside the pool configuration. Pools are basically containers of pool members.
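To make the node/member/pool relationship concrete, here is a minimal sketch that creates http_pool with our three members through the BIG-IP iControl REST API. The management address and credentials are hypothetical placeholders for a lab unit, so adjust them for your own environment:

```python
import requests

# Hypothetical BIG-IP management address and lab credentials (placeholders).
BIGIP_MGMT = "https://192.0.2.245"
AUTH = ("admin", "admin")

# Create http_pool and add the three servers as pool members (node IP + port 80).
pool = {
    "name": "http_pool",
    "loadBalancingMode": "round-robin",
    "members": [
        {"name": "172.16.20.1:80", "address": "172.16.20.1"},
        {"name": "172.16.20.2:80", "address": "172.16.20.2"},
        {"name": "172.16.20.3:80", "address": "172.16.20.3"},
    ],
}

resp = requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/pool",
    json=pool,
    auth=AUTH,
    verify=False,   # lab only: self-signed management certificate
)
resp.raise_for_status()
print(resp.json()["name"])   # http_pool
```

Notice that we only ever define the members inside the pool; the BIG-IP creates the underlying nodes for us from the member IP addresses.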

Last, we have what we call Virtual Servers (VS); these are the listeners. Clients send requests to the Virtual Server, not directly to the pool members or the nodes. A Virtual Server consists of an IP address + port, and to enable load balancing, we need to associate a pool with that Virtual Server. Here's how the load balancing really works. The client sends an HTTP request to the Virtual Server, named http_vs, with an IP address of 10.10.0.100 listening on TCP port 80. The Virtual Server processes the request, and since there is a pool (named http_pool) associated with our Virtual Server, it forwards the request to the pool members that are added to http_pool. On our first request, the F5 BIG-IP forwards the traffic to the first pool member (172.16.20.1), the first pool member responds back to the BIG-IP, and the BIG-IP sends the response back to the client. On our second request to http_vs, the BIG-IP does the same processing but now forwards the traffic to the second pool member (172.16.20.2).
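Continuing the same sketch, the snippet below creates the http_vs Virtual Server on 10.10.0.100:80 and associates http_pool with it, so incoming requests are load balanced across the pool members. As before, the management address and credentials are placeholders:

```python
import requests

BIGIP_MGMT = "https://192.0.2.245"   # same hypothetical lab BIG-IP as above
AUTH = ("admin", "admin")

# Create the http_vs virtual server (IP + port) and attach http_pool to it.
virtual = {
    "name": "http_vs",
    "destination": "10.10.0.100:80",
    "ipProtocol": "tcp",
    "pool": "http_pool",
    "sourceAddressTranslation": {"type": "automap"},  # SNAT so servers reply via the BIG-IP
}

resp = requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/virtual",
    json=virtual,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print(resp.json()["name"])   # http_vs
```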

There are various types of load balancing; in F5 BIG-IP these are called "Load Balancing Methods". One of the most common methods is Round Robin, where F5 BIG-IP distributes requests evenly across all pool members. So that is the basics of F5 BIG-IP load balancing using Local Traffic Manager (LTM).

Now let's talk about global load balancing with BIG-IP DNS, formerly known as Global Traffic Manager (GTM). We will skip the LDNS process as well as the query to the root name server. The same query is now sent to the zurien.com name server; in this example, that name server is an F5 BIG-IP DNS, and it responds with an A record for zurien.com. The A record is the translated IP address. In this example, the IP address returned to our client is 11.11.11.100.

The BIG-IP DNS / name server may send either 11.11.11.100 or 22.22.22.200 back to the client. So, how do we know which IP address or A record will be sent back? Well, this depends on the global load balancing feature that is enabled. Again, there are many different load balancing methods available in F5 BIG-IP DNS.
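Conceptually, you can think of the name server rotating (or otherwise selecting) between the two data center virtual server addresses each time it answers a query. The toy sketch below models that decision with a simple Round Robin rotation; it is only an illustration of the idea, not how BIG-IP DNS is implemented internally:

```python
from itertools import cycle

# The two virtual server addresses behind zurien.com, one per data center.
DATA_CENTER_VIPS = ["11.11.11.100", "22.22.22.200"]

# Round Robin-style global load balancing decision: each DNS query for
# zurien.com gets the next address in the rotation as its A record.
_rotation = cycle(DATA_CENTER_VIPS)

def resolve(fqdn: str) -> str:
    """Return the A record the name server would hand back for this query."""
    assert fqdn == "zurien.com"
    return next(_rotation)

for i in range(4):
    print(f"query {i + 1}: zurien.com -> {resolve('zurien.com')}")
# query 1 -> 11.11.11.100, query 2 -> 22.22.22.200, and so on
```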

Now, the client sends its request to that translated IP address, 11.11.11.100. This IP address resides in the 1st data center. Since 11.11.11.100 is the IP address of the Virtual Server on the 1st BIG-IP LTM, it forwards the request to one of the pool members using local load balancing. On the 2nd request, the same thing happens, but this time the BIG-IP LTM in the same data center (the 1st data center) forwards the traffic to the second pool member.

Now, this is how global load balancing really works. The client sends a query to the name server, but this time the A record in the response is different. It's no longer 11.11.11.100; it's now 22.22.22.200, which is the IP address residing in the 2nd data center. When the client sends a request to 22.22.22.200, which is the Virtual Server listening on port 80, it forwards the request to one of the pool members. In this example, it forwarded the traffic to the 1st server/member. The second request also goes to the 2nd data center, and the BIG-IP forwards the HTTP traffic to the 2nd pool member.

If you want to understand F5 BIG-IP load balancing in more depth, open a web browser and type www.zurien.com; this will take you to the Zurien website. Click Training and you will be redirected to the training webpage. Here you will see various courses, but you can filter for F5 courses. It should display F5 101 Exam Preparation, F5 201 Exam Preparation, and Building F5 BIG-IP Lab for FREE.

Now let's talk about the Amazon Web Services (AWS) Elastic Load Balancer (ELB) and compare it with F5 BIG-IP, both LTM and DNS. In our previous blog (AWS ELB and ASG), everything is in the AWS cloud! First, the Elastic Compute Cloud (EC2) instances are already created; when we say instances, these are the servers or virtual machines (VMs) residing in the AWS cloud. The first thing we did was create an Application Load Balancer (ALB); this is also an ELB, as ELB has three types. We named our Application Load Balancer Web-Application-ELB. In our topology, we associated our ELB with a Virtual Private Cloud (VPC). We named our VPC Lab VPC, with a CIDR of 10.0.0.0/16, and we also configured Mappings.

The mappings are the Availability Zones (AZs), and in our example we have two AZs. The 1st AZ is us-west-2a, and it has a public subnet, 10.0.0.0/24. The 2nd AZ is us-west-2b, with a dedicated subnet, 10.0.2.0/24. An AZ is a distinct location within an AWS region, and an Availability Zone may have one or more physical data centers. Just imagine we have two Availability Zones, and these are two data centers in the same region.
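If you prefer to see this setup as code rather than console clicks, here is a minimal boto3 sketch that creates the same Application Load Balancer across the two public subnets. The subnet and security group IDs are placeholders for the ones created in Lab VPC (the security group is the load-balancer-sg described next):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Placeholder IDs for the two public subnets (us-west-2a / us-west-2b)
# and the load-balancer-sg security group in Lab VPC.
PUBLIC_SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222"]
LB_SECURITY_GROUPS = ["sg-0123456789abcdef0"]

response = elbv2.create_load_balancer(
    Name="Web-Application-ELB",
    Type="application",            # ALB is one of the three ELB types
    Scheme="internet-facing",
    IpAddressType="ipv4",
    Subnets=PUBLIC_SUBNETS,        # one public subnet per Availability Zone
    SecurityGroups=LB_SECURITY_GROUPS,
)

alb = response["LoadBalancers"][0]
print(alb["DNSName"])              # the auto-generated domain name clients will use
```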

We also created what we call a Security Group (SG). A Security Group is an Access Control List (ACL), but at the instance level. It's a little different from a Network ACL (NACL), which works at the network (subnet) level. We named our Security Group load-balancer-sg, and when we created it, we simply allowed all incoming HTTP traffic.

Here is the more interesting part of our configuration, the "Listener and Routing". The Listener is analogous to the Virtual Server of F5 BIG-IP, but we don't define an IP address. In F5 BIG-IP, the Virtual Server is an IP address + port, then we do some configuration, including the association of a pool. The Listener here in our ELB is just the port; this is the internet-facing port. The IP address and the domain name are automatically created for us, but we can change this later. Also under "Listener and Routing", we added what we call a Target Group (TG); we named our Target Group lab-app-target-group. The Target Group is analogous to a pool, where we define the target servers. Here in ELB, the target servers are called Registered Targets, and these are analogous to pool members.
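As a rough boto3 equivalent of those steps, the sketch below opens HTTP in the load-balancer-sg security group, creates the lab-app-target-group target group, and adds the port 80 listener that forwards to it. The VPC ID, security group ID, and load balancer ARN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
elbv2 = boto3.client("elbv2", region_name="us-west-2")

VPC_ID = "vpc-0123456789abcdef0"   # Lab VPC (10.0.0.0/16), placeholder ID
SG_ID = "sg-0123456789abcdef0"     # load-balancer-sg, placeholder ID
ALB_ARN = "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/Web-Application-ELB/placeholder"

# Allow all incoming HTTP traffic to the load balancer's security group.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "allow HTTP from anywhere"}],
    }],
)

# Target group: the ELB counterpart of an F5 pool.
tg = elbv2.create_target_group(
    Name="lab-app-target-group",
    Protocol="HTTP",
    Port=80,
    VpcId=VPC_ID,
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Listener: the internet-facing port, forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```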

Okay, so here's the difference: in F5 BIG-IP, we just add an IP address + port of the server. That is how we configure or add the pool members, and it's done under the pool configuration. In AWS, we select an instance that is already provisioned, where the IP address is automatically assigned from the public subnet in an Availability Zone (AZ). What we define is just the port. In this example, the instance or target is listening on TCP port 80. Once the Application Load Balancer has been created, it automatically creates a domain name, and this domain name is where the client sends HTTP requests. The client contacts this domain name, which also listens on TCP port 80 as defined in our listener configuration. Now, the ELB processes the request and forwards the traffic to that one Registered Target. The Registered Target responds back, and the ELB responds to the client.

In the later part of our video, part two, we enabled an AWS Auto Scaling group. This is when it automatically provisions new targets. How did that happen? We used the stress feature to run a stress test: the server receives a high load, and this triggers it to scale, meaning it adds more Registered Targets based on the policy we configured. So three more servers were added to our Target Group. These Registered Targets are provisioned in two different Availability Zones; in our example there are two targets in Availability Zone us-west-2a and another two targets in us-west-2b, and all targets are listening on TCP port 80. When the client sends requests to our web application, the ELB load balances across all four servers/targets, and a given target may be in the first Availability Zone or in the second. In our testing, the web application identifies which Availability Zone the target resides in, and we saw it changing from the first Availability Zone to the second and vice versa.

I hope you have learned something and had fun comparing F5 BIG-IP and AWS ELB, the Elastic Load Balancer. Let me know if this is something you are interested in so I can create more cloud networking or AWS application load balancing related videos. If you are planning to switch from network engineer to cloud network engineer, I have a video ready for you. So, what do you think is better, F5 BIG-IP or AWS ELB?
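If you want to experiment with that auto scaling behaviour yourself, a target tracking policy along the lines of the boto3 sketch below is enough to reproduce the scale-out when a stress test drives CPU up. The Auto Scaling group name and CPU target are placeholders, not the exact values from the video:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# Target tracking policy: keep average CPU around 50%. When the stress test
# pushes CPU above this target, the group launches additional instances, and
# they join lab-app-target-group automatically because the Auto Scaling group
# is attached to that target group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="lab-app-asg",        # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                   # placeholder CPU threshold
    },
)
```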
