log.info('bbox updater zoom in: ' + vars.get('bbox'));
Next up is adding the expectations we want to be fulfilled, e.g. we want all responses to have an HTTP status code of 200 (which means OK), we want image requests to have the proper MIME type (i.e. a png8 image should be returned with Content-Type: image/png), and we expect certain response times. These are set up by adding Assertions to our JMeter script.
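The logic of those three checks can be sketched as a plain function. This is an illustration of what the assertions verify, not JMeter's actual API; the `resp` object and its field names are made up for the example:

```javascript
// Sketch of the checks the assertions perform. `resp` and its fields
// (status, contentType, elapsedMillis) are illustrative, not JMeter's API.
function checkResponse(resp, maxMillis) {
  const errors = [];
  if (resp.status !== 200) {
    errors.push('expected HTTP 200, got ' + resp.status);
  }
  if (resp.contentType !== 'image/png') {
    errors.push('expected image/png, got ' + resp.contentType);
  }
  if (resp.elapsedMillis > maxMillis) {
    errors.push('took ' + resp.elapsedMillis + ' ms (limit ' + maxMillis + ' ms)');
  }
  return errors;
}

// A png8 tile that arrived in time passes all three checks:
console.log(checkResponse({ status: 200, contentType: 'image/png', elapsedMillis: 420 }, 950)); // []
```

In the actual script each check is a separate JMeter element, so a failing request is flagged with the specific expectation it violated.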
Now we can add some advanced options, such as Timers, to further randomize the load and the number of sessions hitting our service. And finally we want to see our results, so we add in some Listeners, such as graphs and tables. These Listeners can also be used to export the test results to an XML or CSV formatted file or to an image.
So, to get started, load the script (.jmx) attached to this post. Get a list of point data in a CSV file (the attached .csv is in Rijksdriehoek / EPSG:28992, which may or may not be useful to you). When you use your own data you may need to tweak the format in the “CSV Data Set Config – adressen”, and you probably also need to tweak some of the JavaScript code in the preprocessors, as these assume a regular grid in meters. I guess if your data is in UTM you should be fine.
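To give an idea of what the preprocessors do with those grid assumptions, here is a minimal sketch (not the actual preprocessor code; the function name, grid size, and span are made up) of deriving a bbox from a CSV point on a regular grid in meters:

```javascript
// Illustrative sketch: snap a CSV point to a regular grid in meters,
// then build an extent of `span` meters around the grid intersection.
// Function name and parameters are assumptions for this example.
function bboxAround(x, y, gridSize, span) {
  const gx = Math.round(x / gridSize) * gridSize; // snap x to the grid
  const gy = Math.round(y / gridSize) * gridSize; // snap y to the grid
  const half = span / 2;
  // bbox in the usual "xmin,ymin,xmax,ymax" order
  return [gx - half, gy - half, gx + half, gy + half].join(',');
}

// A point in EPSG:28992, snapped to a 1000 m grid with a 2000 m extent:
console.log(bboxAround(155123, 463042, 1000, 2000)); // 154000,462000,156000,464000
```

Because everything is in meters, the same snap-and-expand works for any projected coordinate system with metric units (such as UTM), which is why lat/lon data would need different handling.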
You need to specify/adapt the parameters in the test plan (users, iteraties, mapPath, gisHost, gisPort, wkid, fullExtent, and zoomExtent) to suit your map service. To start off, choose a small number, such as 1, for both users and iteraties. You should now be ready to go…
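As a rough illustration of how these parameters end up in a request, here is a sketch modeled on the ArcGIS Server REST export-map endpoint; the host, port, and service path are hypothetical, and the exact URL your .jmx builds may differ:

```javascript
// Sketch: combine test-plan parameters into an ArcGIS Server REST map
// request. Host, port, and service path below are made-up examples.
function mapRequestUrl(gisHost, gisPort, mapPath, bbox, wkid) {
  return 'http://' + gisHost + ':' + gisPort +
    '/arcgis/rest/services/' + mapPath + '/MapServer/export' +
    '?bbox=' + bbox + '&bboxSR=' + wkid + '&format=png8&f=image';
}

console.log(mapRequestUrl('gisserver', 8399, 'luchtfoto', '154000,462000,156000,464000', 28992));
```

The wkid parameter is the well-known ID of the spatial reference (28992 for Rijksdriehoek), which is why it has to match the coordinate system of your CSV points.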
(note: WordPress won’t let me upload a .zip/.jmx/.csv, so you need to remove the .odt extension from the zip file / rename the zip file)
JMeter really doesn’t know what you’re using it for, so interpreting results is sometimes a challenge in itself. There are some timeout parameters (assertions) that you can adjust to specify maximum thresholds for response times, and there’s a large number of Listeners that will help you visualize the results or export them to something like CSV, which you can then use in your favourite spreadsheet or reporting tool.
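Once a Listener has exported results to CSV, per-label averages like those in JMeter's Aggregate Report can be recomputed outside the tool. A minimal sketch, assuming a simplified two-column "label,elapsed" layout (real JMeter exports carry more columns):

```javascript
// Sketch: recompute per-label average response times from an exported
// results CSV. The two-column layout below is a simplifying assumption.
function aggregate(csv) {
  const samples = {};
  for (const line of csv.trim().split('\n').slice(1)) { // skip header row
    const [label, elapsed] = line.split(',');
    (samples[label] = samples[label] || []).push(Number(elapsed));
  }
  const averages = {};
  for (const label in samples) {
    const s = samples[label];
    averages[label] = Math.round(s.reduce((a, b) => a + b, 0) / s.length);
  }
  return averages;
}

const sample = 'label,elapsed\nmap,400\nmap,600\nidentify,150';
console.log(aggregate(sample)); // { map: 500, identify: 150 }
```

Loading the same CSV into a spreadsheet and pivoting on the label column gives the same numbers, which is handy for reporting.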
Using the posted .jmx I ran a test on one of our development test servers. These are virtualized dual-CPU Windows 2003 systems running on ESX; they are not production-grade systems. Below you can see some of the results for an optimized map service (.msd) containing an aerial photograph of the Netherlands, stored in a 24×7 Oracle 10 database from our production environment. I set up the test plan to use 50 users with 10 iterations each and a ramp-up time of 900 seconds, using only the map requests (not the identify requests). JMeter unfortunately didn’t quite make it to the end of the test due to a memory error, so this needs to be tweaked a bit in the startup script of JMeter; it’s likely the result of all the Listeners in the .jmx, so you may want to disable one or two of them.
In the results tree shown in the first screen capture I have only logged the errors; these requests are flagged as errors because the response took longer to load than the mapRequestResponseTimeMillis threshold, set at 950 ms.
In the second screen capture you can see the aggregate results of the test run, which clearly show that it takes longer to get a more detailed part of the map from the database.
The test is incomplete without also monitoring the server. There are a number of built-in tools for Windows, such as Perfmon and of course the Task Manager. In my setup I noticed competition between the two SOC processes running the map and the Java process running the servlet engine that serves the REST interface; each of them managed to take up to 1/3 of the total available CPU.
ArcGIS Manager also provides some graphs showing throughput (shown in the last graph); the tested service is shown in cyan.