Understanding the execution and analysis phase

Benchmark execution is the phase during which you tune your database by iteratively running the test, modifying something about the database (for example, the value of one or more database properties or connection parameters), and then rerunning the test to measure the effect of the change.
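
The cycle amounts to timing a workload run, changing one setting, and timing it again. The following sketch, written in Python purely for illustration, shows such a timing harness; run_workload is a hypothetical placeholder for your own UltraLite benchmark program and its connection logic.

    # Minimal timing harness for one benchmark run (illustrative sketch).
    import time

    def run_workload(connection_params: dict) -> None:
        # Hypothetical placeholder: open a connection using connection_params
        # and execute your representative statements here.
        pass

    def timed_run(connection_params: dict) -> float:
        # Run the workload once and return the elapsed wall-clock time in seconds.
        start = time.perf_counter()
        run_workload(connection_params)
        return time.perf_counter() - start

    # Baseline run: leave every tunable value at its default.
    print(f"baseline: {timed_run({}):.2f} s")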

The following procedure assumes that you are testing different database properties and connection parameters to find the settings that provide the greatest benefit. Repeat the procedure until you have tested every property and parameter that requires testing.

Tip

Choose only those properties or parameters that are significant to the workload and the objectives of your UltraLite deployment.

 Execute your benchmark tests
  1. Create a baseline by running the first iteration of the test. Because you are testing database properties and connection parameters, use the UltraLite defaults wherever possible for this baseline run.

  2. Begin your normal test runs, tuning only one database property or connection parameter at a time. This constraint keeps your results systematic and makes it easier to recognize when you have reached the maximum benefit of your tuning. (A code sketch of this one-parameter-at-a-time loop follows the procedure.)

  3. Output from the benchmark program should include:

    • an identifier or label for each test

    • the iteration number of the test run

    • the name of the property or parameter being tested and the value you set for it

    • the recorded elapsed time

    For example, if you limited your tests to varying only the page size, cache size, and reserve size, your output might be saved to a table similar to the following:



    PROP/PARM             VALUES
    TEST NUMBER           001     002     003
    page_size             1       2       8
    cache_size            128     256     512
    reserve_size          128     256     512

    STMT ID               EXECUTION TIME (seconds)
      01                  01.55   01.50   01.49
      02                  02.01   02.20   01.59
      03                  00.33   00.55   00.44
  4. When you have completed an iteration, return the database to its baseline state so that you do not inadvertently contaminate the results of subsequent runs.

  5. Depending on the results of the benchmark test, do one of the following:

    • If performance improves, change the value of the same property or parameter and rerun the test. Keep tuning this value until you cannot improve performance any further.

    • If performance worsens, return the property or parameter to its previous value.

  6. Test a new property or parameter.
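
Taken together, steps 2 through 6 amount to a simple search loop: change one value, keep it if the workload runs faster, revert it if not, and log every run. The sketch below, again in Python purely for illustration, drives such a loop; run_workload and restore_baseline_database are hypothetical placeholders for your own benchmark program and reset logic, and the candidate values are copied from the example table above rather than being recommendations.

    # One-parameter-at-a-time tuning loop (illustrative sketch of steps 1-6).
    import csv
    import time

    def run_workload(params: dict) -> float:
        # Hypothetical placeholder: connect with the given creation/connection
        # parameters, execute the test statements, and return elapsed seconds.
        start = time.perf_counter()
        # ... run the benchmark program here ...
        return time.perf_counter() - start

    def restore_baseline_database() -> None:
        # Hypothetical placeholder: recreate or restore the database so every
        # run starts from the same baseline state (step 4).
        pass

    # Candidate values taken from the example table; units follow whatever
    # each property or parameter expects in your deployment.
    candidates = {
        "page_size":    ["1", "2", "8"],
        "cache_size":   ["128", "256", "512"],
        "reserve_size": ["128", "256", "512"],
    }

    best_params: dict = {}                      # settings kept so far
    restore_baseline_database()
    best_time = run_workload(best_params)       # step 1: baseline with defaults

    with open("benchmark_results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test", "parameter", "value", "elapsed_seconds"])
        writer.writerow(["000", "baseline", "defaults", f"{best_time:.2f}"])

        test = 0
        for name, values in candidates.items():     # step 6: next property or parameter
            for value in values:                     # step 2: vary one value at a time
                test += 1
                trial = dict(best_params, **{name: value})
                restore_baseline_database()          # step 4: back to the baseline state
                elapsed = run_workload(trial)
                writer.writerow([f"{test:03d}", name, value, f"{elapsed:.2f}"])  # step 3
                if elapsed < best_time:              # step 5: keep the improvement;
                    best_time, best_params = elapsed, trial
                # otherwise the previous best value is kept (the change is reverted)

    print("best settings:", best_params, f"({best_time:.2f} s)")

The sketch records a single elapsed time per test; if your benchmark times individual statements, write one value per statement instead, as in the example table in step 3.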

 See also