The Performance Log


Any program running within the Partner System generates a performance log in logs/performance.csv.

This log includes metrics on various parts of the software considered to be performance-sensitive. A major example is the refresh and rendering performance of mapsets in the Map Viewer.


Here is an example of logs/performance.csv:

MapView rendering,frames,128,223,573.991031
TiledImageSet rendering,frames,0,0,NaN
TiledRoverSet rendering,frames,6,83,72.289157
MemoryRoverSet rendering,frames,0,0,NaN
DatabaseRoverSet rendering,frames,6,28,214.285714
MapSet Ephemera refresh,cycles,1,5,200.000000

performance.csv has the following standard fields:

  • metric - name of the activity measured
  • units - units of measure
  • cycles - how many times the activity occurred
  • milliseconds - total time spent doing the activity
  • hertz - average speed of the activity, in cycles per second

For rendering, the standard unit is a “frame” - a single draw of the map view for that mapset or mapset type.

Generally, the higher the hertz, the higher the performance.

NaN means “not a number” - generally the result of dividing by zero because there was no activity at all.
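The hertz column can be reproduced from the cycles and milliseconds columns. A minimal sketch of that relationship (the function name is illustrative, not part of the Partner System):

```python
def hertz(cycles: int, milliseconds: float) -> float:
    """Average speed in cycles per second; NaN when no activity occurred."""
    if milliseconds == 0:
        return float("nan")
    return cycles / (milliseconds / 1000.0)

print(hertz(128, 223))  # ≈ 573.991031, matching the MapView rendering row
print(hertz(0, 0))      # nan, matching the TiledImageSet rendering row
```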


Focus on a single thing you want to test - for example, a mapset refresh for a dynamic mapset.

Open the map viewer and perform a standard set of actions - for example, triggering the refresh ten times. Running something only a single time can be misleading, since loading and other one-time behaviors may be involved.

Close the map viewer and look at the performance log. Record the statistics of interest in a spreadsheet.
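Rather than copying numbers by hand, the log can be parsed directly. A sketch, assuming the five columns listed earlier and no header row in the file:

```python
import csv

# Column names from the field list; the log itself has no header row.
FIELDS = ["metric", "units", "cycles", "milliseconds", "hertz"]

def read_performance_log(path="logs/performance.csv"):
    """Parse performance.csv into a list of dicts with typed values."""
    rows = []
    with open(path, newline="") as f:
        for raw in csv.reader(f):
            row = dict(zip(FIELDS, raw))
            row["cycles"] = int(row["cycles"])
            row["milliseconds"] = float(row["milliseconds"])
            row["hertz"] = float(row["hertz"])  # float("NaN") parses as nan
            rows.append(row)
    return rows
```

From here the rows of interest can be appended to a spreadsheet or CSV of your own, one line per test run.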


Hertz is a good measure of the speed of an individual activity, but when you are looking for the cause of a performance problem, milliseconds may be a better guide. For example, if map viewing is slow, look at the rendering entries and see which one had the highest milliseconds value. That’s the one the map viewer spent the most time doing.

In general you’re looking for something dramatic - e.g. a mapset that takes 10x or more the time of the others.
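Finding the likely culprit can be automated with the same column layout assumed above (again, the function name is illustrative):

```python
import csv

def slowest_rendering(path="logs/performance.csv"):
    """Return the rendering metric with the highest total milliseconds."""
    worst_name, worst_ms = None, -1.0
    with open(path, newline="") as f:
        for metric, units, cycles, milliseconds, hertz in csv.reader(f):
            if "rendering" in metric and float(milliseconds) > worst_ms:
                worst_name, worst_ms = metric, float(milliseconds)
    return worst_name, worst_ms
```

Run against the example log above, this would report MapView rendering at 223 ms as the entry the viewer spent the most time on.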

Once again, be sure you have a standard set of actions to test. If you do different things on each run, the results are not comparable.
