I got some remarks lately on “how” I benchmark the different Siebel patch sets which I discuss now and then against the usual suspects (IE, Chrome and Firefox). Let me first point back to this old post, from a bit more than a year ago. Together with Duncan Ford I crafted a small framework which measures the milliseconds spent by the browser between the preload and the postload event. This framework actually made it into the must-read Oracle Siebel Open UI Developer’s Handbook. The time spent between these two events tells me how many milliseconds the browser needed to process the DOM and all the triggered events. And with Open UI quite a bit happens to the ‘raw’ DOM delivered by the Siebel Web Engine before it can finally be displayed.
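For those who want to replicate the idea: below is a minimal sketch (not the actual framework) of how such a measurement can be wired up in a custom postload.js using the standard Open UI event manager. The TimerPOC namespace and the log message are my own inventions.

```javascript
// Minimal sketch: record the time at "preload" and log the delta at "postload".
// Note: 'addListner' is the actual Open UI API spelling, not a typo here.
if (typeof (SiebelAppFacade.TimerPOC) === "undefined") {
  SiebelJS.Namespace("SiebelAppFacade.TimerPOC");

  var startTime = 0;

  SiebelApp.EventManager.addListner("preload", function () {
    startTime = new Date().getTime();
  }, this);

  SiebelApp.EventManager.addListner("postload", function () {
    var elapsedMs = new Date().getTime() - startTime;
    SiebelJS.Log("View processed in " + elapsedMs + " ms");
  }, this);
}
```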
Next I created a set of four progressively complex views, all based on a dramatically simple virtual business component (see the eScript sketch after the list). The VBC approach makes it extremely simple to port this framework to other environments when needed. These four views are built per the specifications below, where I stress that these views are not necessarily representative ;-) The complexity of views 3 & 4 is definitely out of the ordinary. They are primarily meant to identify any kind of ‘hockey-stick’ behavior in performance degradation (call it stress-testing).
View 1 (indicative: 40 controls in total)
- One Form applet with 20 controls
- One List applet with 20 list controls
View 2 (indicative: 240 controls in total)
- One Form applet with 20 controls
- One Form applet with 100 controls
- One List applet with 20 list controls
- One List applet with 100 list controls
View 3 (indicative: 440 controls in total)
- One Form applet with 20 controls
- Two Form applets with 100 controls
- One List applet with 20 list controls
- Two List applets with 100 list controls
View 4 (indicative: 640 controls in total)
- One Form applet with 20 controls
- Three Form applets with 100 controls
- One List applet with 20 list controls
- Three List applets with 100 list controls
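To give an impression of the VBC side: here is an eScript sketch of the kind of Business Service that could back such a virtual business component (class CSSBCVExtern). The field names and row values are illustrative only; the real benchmark fields may differ.

```javascript
// Sketch of a Business Service behind a VBC: "Init" declares the fields,
// "Query" returns dummy rows so the applets have something to render.
function Service_PreInvokeMethod(MethodName, Inputs, Outputs) {
  var i;
  if (MethodName == "Init") {
    // Tell the VBC which fields this service can serve
    for (i = 1; i <= 100; i++) {
      Outputs.SetProperty("Field " + i, "");
    }
    return (CancelOperation);
  }
  if (MethodName == "Query") {
    // Return a single dummy row; enough to populate the controls
    var row = TheApplication().NewPropertySet();
    for (i = 1; i <= 100; i++) {
      row.SetProperty("Field " + i, "Value " + i);
    }
    Outputs.AddChild(row);
    return (CancelOperation);
  }
  return (ContinueOperation);
}
```

Because the data never touches the database, the same views can be dropped into any environment without worrying about seed data.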
When I run my tests, I make sure my laptop is completely idle, running at 0-5% CPU. The good old Siebel Dedicated Client is fit for purpose to test views 1 through 4 in a variety of browsers. First I do a ‘warm-up’ cycle touching all views. Next I take each measurement at least three times, and if the standard deviation across these measurements is too high I re-run the test. Et voilà. That’s it – no rocket science – but an apples-to-apples comparison.
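The stability check on the measurements is nothing fancy; a sketch of the arithmetic is below. The 10% threshold is an arbitrary example, pick whatever suits you.

```javascript
// Sketch: decide whether a set of timing samples (in ms) is stable enough.
function isStable(samples) {
  var n = samples.length;
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = samples.reduce(function (a, b) {
    return a + (b - mean) * (b - mean);
  }, 0) / n;
  var stdDev = Math.sqrt(variance);
  return (stdDev / mean) < 0.10; // re-run the test when this is false
}

// Example: three postload timings in milliseconds
isStable([812, 798, 825]); // -> true, keep the run
```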
Below is a graph focused on IE11, which simply compares IP15.1 (June) with IP15.3 (August) to demonstrate the terrific improvement. Yes, even IE can perform (though it still lags behind the competition, but anyway).
– Jeroen
