
Developing composite applications with PHP - Advanced AJAX

A better way to store client-side IDs, doing the main work on the server side (requiring close collaboration between the server-side PHP script and the client-side JavaScript), and Rich Internet Applications (RIA) versus the old-style Web.

In this article, I'll go into more detail than in my last one about the AJAX used in the CFS example I presented in the first blog of this series. AJAX can get much trickier and more complex than what I'll cover here, so don't worry - it won't get too complicated today.

As promised, I'll first show a better way to store things like the client-side ID (the js_rowId I used) than relying on the server to remind us of it. This also gives us an easy way to handle outdated requests. Afterwards I'll take the approach of doing the main work on the server side, which requires close collaboration between the server-side PHP script and the client-side JavaScript. Last, I'll spend a few thoughts on RIA versus the old-style web.

JavaScript function closures

In the last blog I introduced the asynchronous loading of additional data for each table row. This posed the problem that the callback functions for the AJAX calls must know for which row the received result is intended - since they can come back in any order. How can a function know this association? I can't just add another parameter, since the callback's signature is dictated by the <abbr title="XMLHttpRequest">XHR</abbr> mechanism, and I also can't use a global variable - since global variables are, by nature, singletons and we have a bunch of returning calls.

Since in this example I also control the server, I could get away with a cheap trick: I just passed the js_rowId as an additional parameter, which the server simply mirrors back (embedded in the actual returned data). Sure, this works, but apart from the additional bytes to be transferred it is not very elegant - so here come function closures.

I recommend reading this excellent explanation, but will provide a somewhat shorter overview here for your convenience:

In JavaScript, both functions (the thing which can be executed) and function executions (the actual call of a function) are object instances. Each time a function is declared, a new function instance is created. Each time a function is called, an execution context (object instance) is created. It is possible to create a function and put it in a variable, like this:

f = function (originalRequest) { /* function body */ };

Now f holds an actual function instance, not a reference to a (singleton) function. In the Ajax.Request call, f can be given as parameter for the callback - which means upon returning of the XHR call the function instance stored in f will be executed. If you give a function name (as opposed to a variable name which contains a function), the global namespace will be searched for this function (instance), which will then be called.

Now things get interesting: every time a function is called, a runtime (execution) instance is created, and local variables (including parameters) are part of this execution instance. Usually, after a function has executed, its instance is destroyed - that's just the normal garbage collector in action, just a bit unusual to see it applied to functions and execution contexts. Declaring a function inside a function means the outer function's context can only be trashed when the inner function is no longer needed (i.e. there is no reference to it).

Now consider the following piece of code:

function receiveTableWrapper(win) {
    var f = function(originalRequest) {
        alert('Received result for tab ' + win);
    };
    return f;
}

var myAjax = new Ajax.Request(url, {
    method: 'post',
    parameters: pars,
    onComplete: receiveTableWrapper(win)
});

The call to receiveTableWrapper is executed immediately, resulting in a new function instance (stored in f) which is not yet executed, but which retains access to the context of the receiveTableWrapper execution instance - preventing its local variable win from being freed until the reference to the newly created function instance is destroyed. This instance is returned to the outside (which makes the whole thing a so-called 'function closure') and given as the value for the onComplete parameter. Now when the Ajax request returns, the function instance is executed while still 'magically' having access to its own instance of win. What a nice way of coding!

The js_rowId could be stored the same way, but I left it in so I could simply reuse the sources from the last blog.
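To make this concrete, here is a minimal sketch of how the js_rowId could be captured in a closure instead of being echoed back by the server. The name makeRowCallback is hypothetical; the returned function would be handed to Ajax.Request as the onComplete callback.

```javascript
// Hypothetical factory: each row gets its own callback instance which
// remembers its js_rowId via the closure - no server echo needed.
function makeRowCallback(js_rowId) {
    return function (originalRequest) {
        // js_rowId is still accessible when the response arrives,
        // long after makeRowCallback itself has returned.
        return 'row ' + js_rowId + ' received: ' + originalRequest;
    };
}

var cb = makeRowCallback(7);
cb('response text');   // -> 'row 7 received: response text'
```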

Invalidate outdated requests

There is another thing which should be noted when dealing with asynchronous requests. Since an XHR call is a fire-and-forget operation, we can't stop responses from coming back in any order or with any delay - and we can't even rely on them coming back at all. What happens if the user executes a search for some term and then, before it returns, changes his mind and starts another search? How do we know the returned result list is actually the most recent one? This is especially true for the loading of the basic data, which might take quite some time to finish; the user may refine (and resubmit) his query in the meantime.

Given the information above, this is luckily rather easy. We can't prevent the first (now outdated) request from returning, but we can simply discard it. For this I added a global, unique token. This token is changed whenever something happens which outdates running calls - in our example, whenever a new search is initiated. A copy of this token is kept with every call (inside the function closure). Upon returning of the call the locally stored token is compared to the global one. If they don't match - i.e. the global token has been changed - then the call is outdated and the function just returns, discarding the result and doing nothing.

Since this token is not cryptographically important and only needs to be unique, I simply used a counter which is incremented every time the user starts a new search request.
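As a sketch of the scheme described above (the names startSearch and handleResult are illustrative, not from the actual sources): the global counter is bumped on every new search, each callback keeps a copy inside its closure, and stale responses are silently discarded.

```javascript
// Global token: bumped whenever running calls become outdated.
var currentToken = 0;

function startSearch(handleResult) {
    currentToken++;                 // outdate all in-flight requests
    var myToken = currentToken;     // copy kept inside the closure
    return function (response) {
        if (myToken !== currentToken) {
            return null;            // outdated - discard, do nothing
        }
        return handleResult(response);
    };
}

var first = startSearch(function (r) { return 'shown: ' + r; });
var second = startSearch(function (r) { return 'shown: ' + r; });
first('old list');    // -> null, the first search is now outdated
second('new list');   // -> 'shown: new list'
```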

Live Table, server-side data storage

The list of returned customers was kept in an array local to the client, as described in the last blog. For the tables in the CFS popup I used the other approach: the data is cached on the web server in a PHP session, not on the client in JavaScript, and sorting and pagination are also done on the server. There is not much logic on the client beyond the AJAX calls fired off when changing the display order or page; I create the actual HTML in PHP. So this time neither XML nor JSON is transferred, but HTML directly, which is then displayed in the corresponding div element of the tab.
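Under this approach the client only has to send the table's token plus the desired sort column or page and drop the returned HTML into the tab's div. A minimal sketch follows; the parameter names tableId, sortBy, and page are made up for illustration and are not the actual TableControl.php interface.

```javascript
// Build the POST body for a sort/pagination request (parameter names
// are assumptions, not the real TableControl.php interface).
function buildTableRequest(tableId, sortBy, page) {
    return 'tableId=' + encodeURIComponent(tableId) +
           '&sortBy=' + encodeURIComponent(sortBy) +
           '&page=' + page;
}

// With Prototype this could then be fired as, e.g.:
//   new Ajax.Updater('tabDiv', 'TableControl.php',
//       { method: 'post',
//         parameters: buildTableRequest(id, 'name', 2) });
buildTableRequest('3f2a', 'name', 2);
// -> 'tableId=3f2a&sortBy=name&page=2'
```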

The table data is stored on the web server in the $_SESSION variable, so a session needs to be created. The session id is stored in a cookie, which is PHP's default. Additionally, each table needs its own id (separate from the session id, as one session holds multiple tables), which must be unique at least within the same session. While a simple counter would perhaps suffice, I followed my habit of creating secure applications and used a token generator whose output can't easily be guessed:

$token = md5(uniqid(rand(), true));

Additional metadata, hardcoded in PHP, defines the column types for each service. This is used primarily for sorting (whether a column is sortable and what format the data is in), but also for the column header names and special formatting (e.g. right-aligned numbers).

The TableControl.php file holds all the logic - retrieving the data from the Web service(s), caching, sorting and displaying the HTML result. The CFSpopup.php provides the basic data and the tabbed navigation and for each tab calls the TableControl.php with the specific parameters.

Note: the OutboundDeliverySimpleByCustomer call returns a nested array. For display it is flattened into a single 2D array. To avoid breaking the associations when sorting, the flattened columns are excluded from sorting.

Advantages: the client doesn't need to hold the complete set of data, and navigating is acceptably quick (though network latencies now come into play)

Disadvantages: network activity is needed for every action (i.e. no offline work), the server-side cache might time out (losing the cached data), and it will cache longer than needed (as the data is still cached after the browser is closed)

The last two disadvantages can be reduced. All information required to call the Enterprise service can be kept on the client and retransmitted, so that the cache can be silently (or with confirmation, if needed) refreshed after a session timeout. Also, when changing pages or closing the browser, an onunload event can be triggered which tries to inform the web server that the particular table is not needed anymore. However, this might not always work. I did implement the re-query in this demo (and it made a refresh button possible, too).
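A sketch of the unload hint, under the assumption that the server accepts a 'release' parameter listing the table tokens to drop (buildReleaseRequest and the parameter name are made up for illustration; the request is best effort and may never reach the server):

```javascript
// Join all table tokens held by this page into one cleanup request.
// The 'release' parameter name is an assumption for illustration.
function buildReleaseRequest(tableIds) {
    return 'release=' + tableIds.join(',');
}

// In the page this would be wired up roughly as (synchronous, since
// the page is going away):
//   window.onunload = function () {
//       new Ajax.Request('TableControl.php',
//           { method: 'post', asynchronous: false,
//             parameters: buildReleaseRequest(openTables) });
//   };

buildReleaseRequest(['3f2a', '9c1d']);  // -> 'release=3f2a,9c1d'
```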

Performance considerations

Concurrent AJAX calls can be thought of as separate threads, fired off and asynchronously returning their results. JavaScript is not really preemptively threaded - so be careful how you implement a 'waiting loop' - but the considerations are the same.

The AJAX calls are subject to the browser's connection pool: there is a fixed number of simultaneous connections, and each call has to share them with the other AJAX calls as well as with other items being loaded, like images. Vice versa, sending off a bunch of calls might delay the loading of an image; thus images - especially the waiting image - should be loaded before the calls are made. This limit is set to a rather low value, as described in the HTTP spec: "A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy." While it might seem outdated in the times of <abbr title="Rich Internet Application">RIA</abbr>s and Comet, it is still very much in force today: both Firefox and Internet Explorer adhere to this limit and have an additional limit on the total number of connections to all servers (usually 10). This is a client-side setting which can be changed (try about:config in Firefox or read here), but only by the user, not by our application.

One obvious speedup would be to group several requests together in a single call. Especially the 'get basic data' calls for each customer can be grouped. If all were grouped into a single call, there would be little difference from having no loop in JavaScript at all and just making the calls in PHP before returning anything. However, then we're back to a longer wait for the user until he sees the first results. To trade off 'time to first result' against 'total time for all results', one should experiment with the number of requests grouped together. This might even be tuned dynamically depending on network speed/latency.
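A sketch of such grouping, with a hypothetical chunk helper; the group size would be the experimentally tuned value discussed above, and each group then becomes a single AJAX call instead of one call per customer.

```javascript
// Split a list of customer ids into groups of groupSize, so each
// group can be fetched with one request instead of many.
function chunk(ids, groupSize) {
    var groups = [];
    for (var i = 0; i < ids.length; i += groupSize) {
        groups.push(ids.slice(i, i + groupSize));
    }
    return groups;
}

// e.g. 7 customers in groups of 3 -> [[1,2,3],[4,5,6],[7]]
chunk([1, 2, 3, 4, 5, 6, 7], 3);

// Each group would then be one Ajax.Request, e.g. (parameter name is
// an assumption):
//   new Ajax.Request(url, { method: 'post',
//       parameters: 'customers=' + group.join(','), ... });
```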

What naturally comes to mind is some kind of streaming: firing off all requests at once, getting the results back one after another over the same HTTP request and displaying each as soon as it is completely received. I haven't done anything like this yet, so you're on your own here.

The other option, and this one is actually widely used, is to circumvent the restriction: the low number of concurrent connections is bound to the host part of the URL, so a server cluster using different host names can get two connections to each of the hosts. For smaller installations, simply letting all the different host names (FQDNs, to be precise) point to the same server works equally well. Keep in mind, though, that each additional connection is not only additional overhead but also counts against the 'maximum concurrent connections in total' limit.
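A minimal sketch of spreading calls over several host aliases; the host names below are hypothetical and would all point to the same server (or cluster).

```javascript
// Round-robin over host aliases to get around the two-connections-
// per-host limit. The host names are assumptions for illustration.
var hosts = ['www1.example.com', 'www2.example.com', 'www3.example.com'];
var nextHost = 0;

function shardUrl(path) {
    var host = hosts[nextHost];
    nextHost = (nextHost + 1) % hosts.length;
    return 'http://' + host + path;
}

shardUrl('/TableControl.php');  // -> 'http://www1.example.com/TableControl.php'
shardUrl('/TableControl.php');  // -> 'http://www2.example.com/TableControl.php'
```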

There are other factors to be aware of, like the way Garbage Collection works in IE6 (see also here). If speed really matters a much deeper dive must be taken, including profiling.

Graceful Degrading

Graceful degrading was a term coined when CSS was new and not every browser supported it. The basic idea was that without CSS support the nice design might be gone, but all the information and functionality would remain. An extreme test for this is the use of a text-mode browser like Lynx. Now it is 2007 and Web 2.0 is everywhere, relying heavily on JavaScript. Is it possible to degrade gracefully for users who have JavaScript disabled? Should we care at all? Peter Michaux states that you shouldn't, and should rather pick a target audience and develop only for them (or, if really necessary, develop the same application twice).

I'm not sure how much is possible without JavaScript - all client-side interaction, AJAX loading, drag & drop, etc. is gone, and other things - like our sorted tables - would need to be developed twice to also work without JavaScript. But we can at least try to keep as much functionality as possible. In the end it's probably a choice of not supporting non-JavaScript browsers for rich applications, but providing as much degradation as possible for 'just' enhanced webpages.

Does it happen at all? Are there still browsers without scripting capabilities? There might very well be some, but more importantly, every browser offers the option to actively disable it. And nearly all security incident reports state exactly this as one of the ways - or the only way - to protect yourself from the newest threat.

Components which are easily degradable include tables and tabs. In the CFS popup I use several tabs, of which only one is shown at a time. Without JavaScript it's not possible to change between the tabs - and since they default to being hidden, they can't be seen all at once either. How could we do better?

One option would be to make them visible by default and hide them via JavaScript. Unfortunately this would result in visible flickering (especially when onLoad is used instead of the onDomReady approximations provided by some frameworks). A year ago, Ara Pehlivanian stated that graceful degradation is a myth and can't work. But now there are new ideas: here and here.

The best method (distilled from the above-mentioned pages and the many comments therein) seems to be to have separate stylesheet definitions. One could change the body element's class very early (e.g. directly after the body tag, with inline JavaScript) and set display: none only in the JavaScript version, or have separate stylesheet files altogether and use a CSS selection technique as described back in 2005 here.
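A sketch of the early class switch (all class and selector names below are assumptions): the stylesheet hides inactive tabs only under body.js, and an inline script directly after the body tag adds that class, so non-JavaScript browsers see all tabs stacked instead of hidden.

```javascript
// CSS (JavaScript version only takes effect when the class is set):
//   body.js div.tab { display: none; }
// HTML, directly after the body tag to avoid flickering:
//   <body class="main"><script>
//       document.body.className = addJsClass(document.body.className);
//   </script> ...

// Append the 'js' marker class unless it is already present.
function addJsClass(className) {
    return /\bjs\b/.test(className)
        ? className
        : (className + ' js').replace(/^ /, '');
}

addJsClass('');        // -> 'js'
addJsClass('main');    // -> 'main js'
addJsClass('main js'); // -> 'main js'
```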

To be continued...

There is one more article to come, but I'll leave what I'll do there as a surprise. I'll just promise you it will involve a hyped technology and include more JavaScript.

Also, the files I made are finally approved and will be available here shortly. I'm sorry this took so long. The blog was not meant to be purely theoretical, so you'll get my full sources in a ready-to-use project (that is, if you have an SAP backend at hand) - so play around with it.

Frederic-Pascal Ahring is an SAP developer working on Scripting Languages.
This content is reposted from the SAP Developer Network.
Copyright 2007, SAP Developer Network

SAP Developer Network (SDN) is an active online community where ABAP, Java, .NET, and other cutting-edge technologies converge to form a resource and collaboration channel for SAP developers, consultants, integrators, and business analysts. SDN hosts a technical library, expert blogs, exclusive downloads and code samples, an extensive eLearning catalog, and active, moderated discussion forums. SDN membership is free.

