Load balancing with Apache proxy_balancer: a proof of concept
As a proof of concept for increasing the availability of a web server, I am experimenting with Apache proxy load balancing. The setup is kept simple, with four virtual machines in total: one acting as the load balancer (frontend) and three acting as backend web servers. To test the load balancing and availability, I'm using a laptop running Kali Linux, which provides the necessary testing tools.
To generate the load, I'm using Siege, an HTTP load-testing and benchmarking tool.
The concept is shown in the simple drawing below:
Virtual setup for Apache Proxy Load Balancer concept
In my virtual environment (running on XCP-ng), I place two virtual machines (balancer + www1) on the first node and two (www2 + www3) on the second node. All virtual machines run a standard Debian GNU/Linux setup with the Apache web server.
On the virtual machine acting as the Apache load balancer, I am using the following simple configuration. With a2enmod I make sure the modules proxy, proxy_http, proxy_balancer and lbmethod_byrequests are enabled. In addition, I set up a status page called balancer-manager to view the status and operation of the load balancing.
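The exact configuration isn't reproduced here; a minimal sketch of a matching virtual host (the hostnames, cluster name, and the IP range allowed to reach the manager page are my assumptions) could look like this:

```apache
<VirtualHost *:80>
    # Placeholder frontend name for the balancer VM
    ServerName balancer.example.net

    # Define the backend pool; byrequests distributes by request count
    <Proxy "balancer://mycluster">
        BalancerMember "http://www1.example.net"
        BalancerMember "http://www2.example.net"
        BalancerMember "http://www3.example.net"
        ProxySet lbmethod=byrequests
    </Proxy>

    # Keep the manager page out of the proxied paths
    ProxyPass        "/balancer-manager" "!"
    ProxyPass        "/" "balancer://mycluster/"
    ProxyPassReverse "/" "balancer://mycluster/"

    # Status page; restrict access to a trusted network (assumed range)
    <Location "/balancer-manager">
        SetHandler balancer-manager
        Require ip 192.168.0.0/24
    </Location>
</VirtualHost>
```

After enabling the modules and this site, a restart of Apache (e.g. `systemctl restart apache2` on Debian) activates the balancer.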
Testing the load balancing with siege
For testing, I use the Kali Linux laptop to perform a load and stress test with the following command:
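The original command isn't shown here; a typical Siege invocation for such a test (the target URL and load parameters are placeholders) might be:

```shell
# 50 concurrent simulated users for 2 minutes against the
# balancer's frontend address (placeholder URL)
siege -c 50 -t 2M http://balancer.example.net/
```

The `-c` flag sets the number of concurrent users and `-t` the test duration; Siege prints availability and transaction statistics when the run finishes.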
The screenshot below shows the balancer-manager status page, where all three backend web servers received traffic and www2 simulates a failure (its Apache web server was shut down).