Most of the time, randomly balancing requests across hosts is good enough. Every so often though, there is an edge case which benefits from another method of choosing a host.
For example, the following graph shows request latency for hosts handling requests that were not all created equal. Heavy requests recurred on a fixed period, so a long request tended to land on the same host again and again, eventually compounding to increase that host's latency and slow down the other requests routed to it.
On the left is the request latency for multiple servers routed in roundrobin fashion; on the right, the same servers routed with leastconn instead. Balancing is done by HAProxy.
The issue we were looking to solve was that one server was experiencing larger-than-average request latency, which was slowing down responses from a critical service.
Changing the balancing method distributed request load more evenly across our servers. It worked so well for us because our requests hit a sweet spot of being heavy, but not too heavy, and one particular server kept hitting the problem because each round of the roundrobin series that contained a particularly heavy request just happened, unluckily, to route it to that same server.
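The compounding effect can be sketched with a toy simulation. Everything here is an assumption for illustration: the server count, the request costs, the heavy request recurring on a period that matches the roundrobin cycle, and a greedy "least outstanding work" rule standing in for leastconn:

```python
def simulate(num_servers=4, num_requests=4000,
             heavy_every=4, heavy_cost=50, light_cost=1):
    # Request stream: a heavy request recurs on a fixed period;
    # every other request is light.
    costs = [heavy_cost if i % heavy_every == 0 else light_cost
             for i in range(num_requests)]

    # Roundrobin: request i always goes to server i % num_servers.
    # Because the heavy period equals the server count, every heavy
    # request lands on server 0.
    rr = [0] * num_servers
    for i, c in enumerate(costs):
        rr[i % num_servers] += c

    # Greedy least-loaded assignment, a rough stand-in for leastconn:
    # each request goes to the server with the least outstanding work.
    lc = [0] * num_servers
    for c in costs:
        lc[lc.index(min(lc))] += c

    return rr, lc

rr, lc = simulate()
print("roundrobin load per server:", rr)
print("least-loaded load per server:", lc)
```

With these numbers, roundrobin piles all of the heavy work onto one server while the others sit nearly idle, whereas the least-loaded rule keeps every server within one heavy request of the others. Shift the period so it no longer divides the server count and the roundrobin skew largely disappears, which is why this edge case is easy to miss.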
In summary: don’t overthink your load balancing. roundrobin/random is better for 99.9999% of cases, but there is a slim possibility that your requests hit just the right set of criteria to benefit from another method.
backend mygroupofservers
    mode http
    balance leastconn
    server my-node-0001 192.168.2.1:80 maxconn 1024 weight 100
    server my-node-0002 192.168.2.2:80 maxconn 1024 weight 100
    server my-node-0003 192.168.2.3:80 maxconn 1024 weight 100
A couple of times a year, I publish a newsletter with a post or two I really enjoyed writing, as well as a Scott Hanselman-style list of things that made me smile! If that interests you, subscribe below :)