Predictive Load Balancing - PHP and / or Java

Load Balancing Algorithms - the standard offerings

How do you share out the requests when you're load balancing? The computer (or other device) that's sharing out the load needs to decide where to forward each request, and there are a number of different options commonly available.

a) Round Robin / shared by number of requests. In the simplest example of "Round Robin" sharing, each system in sequence gets the next request. If you had three back end computers called "Duncan", "Wilfred" and "Nick", then Duncan would get first turn, Wilfred the second, Nick the third, and the baton would pass back to Duncan again.
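
As a sketch of option (a) - illustrative only, not production code - the chooser is just a counter stepping round the list of server names used above:

```php
<?php
# Round robin sketch: each call hands back the next server in turn,
# wrapping back to the first after the last.
function next_server(array $servers, int &$counter): string {
    $choice = $servers[$counter % count($servers)];
    $counter += 1;
    return $choice;
}

$servers = array("duncan", "wilfred", "nick");
$counter = 0;
echo next_server($servers, $counter), "\n";   # duncan
echo next_server($servers, $counter), "\n";   # wilfred
echo next_server($servers, $counter), "\n";   # nick
echo next_server($servers, $counter), "\n";   # back to duncan
```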

b) Shared by traffic. With this algorithm, requests are forwarded to the system that's returned the least data in the very recent past - thus allowing back end systems which have only been passing small amounts of data back (due to easy requests) to be more heavily loaded.

c) Shared by queue length. In this case, the number of connections currently being forwarded is monitored, and new requests are forwarded to the machine with the shortest queue. Rather like looking for the shortest checkout line in the supermarket!
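
Option (c) can be sketched in a few lines of PHP - the connection counts below are made-up figures standing in for whatever the balancer is actually monitoring:

```php
<?php
# Shortest-queue sketch: choose the back end with the fewest
# connections currently in flight.
function shortest_queue(array $connections): string {
    asort($connections);               # sort by queue length, keeping keys
    $inorder = array_keys($connections);
    return $inorder[0];                # first key = shortest queue
}

$connections = array("duncan" => 7, "wilfred" => 2, "nick" => 4);
echo shortest_queue($connections), "\n";   # wilfred - only 2 in his queue
```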

Each of these algorithms may be overridden by "sticky sessions", where the second and subsequent requests of a series from the same client are passed on to the same server that was originally chosen for the first request, even if that server's not the 'quietest' or 'next'. That way, once one particular back end server is dealing with a multi-page task (such as an airline booking), the client can get back in touch again and easily continue where it left off. We all do this 'in real life' - having got to speak with someone over the phone about a particular issue, we ask for the same person again when we call back rather than having to start explaining from the beginning again.
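
One way of implementing sticky sessions - and this is a sketch of just one approach, with an invented session identifier - is to hash a client or session id onto the server list, so that the same client always maps to the same back end:

```php
<?php
# Sticky session sketch: hash the client's session id onto the
# server list, so repeat visits land on the same back end.
function sticky_choice(string $session_id, array $servers): string {
    # abs() guards against a negative crc32 on 32-bit platforms
    $slot = abs(crc32($session_id)) % count($servers);
    return $servers[$slot];
}

$servers = array("duncan", "wilfred", "nick");
$first  = sticky_choice("booking-session-4711", $servers);
$second = sticky_choice("booking-session-4711", $servers);
# Both requests from the same session reach the same server
var_dump($first === $second);   # bool(true)
```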

Further overrides of the basic scheme are possible, with requests being distributed in greater quantity to the more powerful computers in a back end group (or to machines that don't have too much else going on) than to the slower or otherwise-loaded systems.
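
That sort of weighting can be sketched by listing the more powerful machines several times over in the pool that round robin (or any of the other schemes) then works through - the weights here are invented for illustration:

```php
<?php
# Weighted pool sketch: a server with weight 3 appears three times in
# the pool, so plain round robin over the pool sends it three times
# the traffic of a weight-1 server.
function weighted_pool(array $weights): array {
    $pool = array();
    foreach ($weights as $server => $weight) {
        for ($i = 0; $i < $weight; $i++) {
            $pool[] = $server;
        }
    }
    return $pool;
}

$pool = weighted_pool(array("duncan" => 3, "wilfred" => 1, "nick" => 1));
# $pool is duncan, duncan, duncan, wilfred, nick - so 3 requests in
# every 5 go to the more powerful "duncan" machine
```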

But even with these overrides, it's going to be rare for any of these algorithms to give you an ideal traffic balance. They'll tend to be based on recent historic data rather than the "here and now" and predicted loading. Taking my supermarket comparison, NONE of the algorithms I have described above actually looks at the baskets of the customers waiting in the queue and forecasts ahead as to how long each of the items there will take, even though that's exactly what we do in our local Tesco.

Predictive Load Balancing

What do I mean by "predictive load balancing"? I'll start with an example. Let's say that I'm deploying a journey planning application. Most journey requests are for quite short distances or for main artery journeys with few intermediate 'nodes', and can be handled quite quickly. But there's going to be the occasional awkward one - Melksham to Maentwrog or Cosham to Campbelltown, which will put a disproportionate load on the server.

By "predictive balancing", I mean noticing these occasional heavy requests even before they are processed, and then cutting right back on the following requests to that server until they are completed. But how do I identify, on the front end load balancer computer, which these requests are? The incoming parameters - the place names - certainly won't give any hint on the front end server unless substantial work is done there, which rather defeats the whole objective of spreading the load by balancing.

What's needed is feedback. If each backend task signals its expected job length to the front end (perhaps by adding a record into a database) as soon as it has worked that length out, then the front end can make forwarding decisions based on the lowest predicted workload.

Here's some proof of concept code to show you how that could work:

# Back end task ...
# (assumes a MySQL connection has already been opened with
#  mysql_connect and mysql_select_db earlier in the script)
$myname = "duncan"; # To be changed to server name in each instance
# Get a random number (1 to 9) to indicate the size of the task
$tasksize = 10 - floor(pow(rand(1,9999),.25));
# Tell the front end server how long it will take
list($usec, $sec) = explode(" ", microtime());
$ts = ((float)$usec + (float)$sec);
mysql_query("insert into actives (size, name, ts) ".
   "values ($tasksize, '$myname', $ts)");
# Something to represent loading ("application goes here")
# Cancel the load record once the work is complete
mysql_query("delete from actives where ts = $ts and name = '$myname'");

and the front end code to choose which machine should do the work:

# Front end task
# (again, a MySQL connection is assumed to be open already)
$servers = array("wilfred","duncan","nick");
# Look for current loadings - start every server at zero so that
# idle servers still appear as candidates
$current = array();
foreach ($servers as $server) {
   $current[$server] = 0;
   }
$qs = mysql_query("select size, name from actives");
while ($row = mysql_fetch_assoc($qs)) {
   $current[$row['name']] += $row['size'];
   }
# Sort loadings and get the most lightly loaded server
asort($current);
$inorder = array_keys($current);
$target = $inorder[0];
# Run the request on the appropriate backend
# (the URL pattern below is illustrative - substitute your own scheme)
$result = file_get_contents("http://$target/app?" . $_SERVER['QUERY_STRING']);
print $result;

Although this proof of concept code is written in PHP, the approach is equally applicable to Java and other languages - indeed, you might choose to use a PHP or Perl front end running on Apache httpd to call up a number of backends running in Java on Apache Tomcat.

Note that the code provided here does not rely on any particular Apache module to forward packets (except perhaps that you may have installed PHP as a module!). With mod_jk, mod_rewrite and mod_proxy, which are generalised forwarders / balancers, you don't have the opportunity to place application specific load forecasts into the decision process - which is the extra that my example has provided.

The code / example above came as a result of a private training course during which we set up multiple load balanced servers and discussed the customer's specific needs. Our more general Deploying Apache httpd and Tomcat course covers the more common aspects of web server deployment, including a brief look at the issues of clustering and balancing, but if you would like to set up a scheme such as the one above, you really need a private course or an extra "1 on 1 day" after the public course.

Full back end source code
Full front end source code
(written 2008-12-13, updated 2008-12-15)



This is a page archived from The Horse's Mouth at http://www.wellho.net/horse/ - the diary and writings of Graham Ellis. Every attempt was made to provide current information at the time the page was written, but things do move forward in our business - new software releases, price changes, new techniques. Please check back via our main site for current courses, prices, versions, etc - any mention of a price in "The Horse's Mouth" cannot be taken as an offer to supply at that price.

© WELL HOUSE CONSULTANTS LTD., 2021: 48 Spa Road • Melksham, Wiltshire • United Kingdom • SN12 7NY
PH: 01144 1225 708225 • EMAIL: info@wellho.net • WEB: http://www.wellho.net • SKYPE: wellho
