Application Best Practices

While every application is different, there are still best practices that can help your application perform as well as possible.

Code for the desired user experience

Perception is a large component of application performance. It doesn't really matter to a user that your database infrastructure is perfectly tuned if the user experience is sluggish. Of course a large part of making a user experience pleasant is the use of proper indexes when retrieving data, but there are many other factors to consider.

Sensible limits

Enforcing limits in an application helps guide users to provide more detailed filtering options for searches and reports. At the very least, warning the user that their criteria may take an extended period of time to run helps to get you off the hook.
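
Here is a minimal sketch of enforcing a row limit on a search; the Customer table, its Name field and the 200 row cap are just example values to adapt to your application:

  DEFINE VARIABLE cSearch AS CHARACTER NO-UNDO INITIAL "Smi".
  DEFINE VARIABLE iCount  AS INTEGER   NO-UNDO.

  FOR EACH Customer NO-LOCK
      WHERE Customer.Name BEGINS cSearch:
      iCount = iCount + 1.
      IF iCount > 200 THEN DO:
          MESSAGE "More than 200 matches - please refine your search criteria."
              VIEW-AS ALERT-BOX WARNING.
          LEAVE.
      END.
      /* ... add the record to the result set ... */
  END.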

First rows or last row?

Depending on the application function, users may be more interested in getting the first few results quickly, or they may only care about receiving the complete result set. Search screens typically fall into the first rows category, while reports fall into the last row category.

Code your screens to match user expectations instead of trying to force every function into the same model. This may involve enforcing limits on the number of records returned and/or allowing multiple calls to page through the data.

Background processing

There are always going to be application functions that simply take longer to run than the user is willing to wait, meaning they cannot use the rest of the application until that function completes. Reports and data extracts typically fall into this category.

This is where background processing provides an easy way to run the application function without blocking the user. You can offload the function by creating request records that a set of background queues monitors and executes, or by using asynchronous appserver calls.

The background queue approach lets you easily control how many jobs run at once, including the ability to schedule jobs for certain times. The appserver approach is easier to implement but lacks the ability to easily control the number of jobs or allow future scheduling.
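
Here is a minimal sketch of the request record approach; the ReportRequest table, its fields and the RequestSeq sequence are hypothetical, and a background queue session would poll for records with a status of "QUEUED" and run the requested job:

  /* Queue a report instead of running it in the user's session */
  CREATE ReportRequest.
  ASSIGN ReportRequest.RequestId   = NEXT-VALUE(RequestSeq)
         ReportRequest.ProgramName = "salesreport.p"
         ReportRequest.RunParams   = "region=EAST"
         ReportRequest.RequestedBy = USERID("DICTDB")
         ReportRequest.RequestedAt = NOW
         ReportRequest.StatusCode  = "QUEUED".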

Local file access

Accessing files from a drive local to the client will always outperform accessing a network share. For the best performance avoid storing any of the following on a network drive:

  • OpenEdge executables
  • OpenEdge temporary files (-T)
  • Application R-Code and images

Placing any of these files on a non-local drive causes unnecessary network traffic, which can make the entire application appear sluggish with no noticeable database-related cause.

Caching is not just for databases

If you have commonly used lookup tables, you should strongly consider caching those tables in Temp-Tables on the client, especially if you are using client server connections to the database. Reading data from local Temp-Tables will always be faster than reading the same records across a network.

Even shared memory connections can benefit from caching by reducing contention for buffer pool blocks and internal latches, especially for smaller lookup tables that are frequently accessed inside FOR EACH loops.

There are many approaches to implementing application caching that allow for time-based refreshes and partial caching of data. This will be explained in more detail below.
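
As an illustration, here is a minimal caching sketch with a simple time based refresh; the State table, its fields and the ten minute refresh interval are assumptions to adjust for your schema:

  DEFINE TEMP-TABLE ttState NO-UNDO
      FIELD StateCode AS CHARACTER
      FIELD StateName AS CHARACTER
      INDEX idxCode IS PRIMARY UNIQUE StateCode.

  DEFINE VARIABLE dtLastLoad AS DATETIME NO-UNDO.

  PROCEDURE LoadStateCache:
      /* Reload the cache at most once every 10 minutes */
      IF dtLastLoad <> ? AND INTERVAL(NOW, dtLastLoad, "minutes") < 10 THEN
          RETURN.

      EMPTY TEMP-TABLE ttState.
      FOR EACH State NO-LOCK:
          CREATE ttState.
          ASSIGN ttState.StateCode = State.StateCode
                 ttState.StateName = State.StateName.
      END.
      dtLastLoad = NOW.
  END PROCEDURE.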

Appserver is better than client server

If your clients are using a network to access the database, using appservers will always be a better approach than using client server. Appservers allow you to use shared memory connections to the database, or at the very least limit the network impact.

While there are different tuning options for client server connections, you will never have the same level of control over network traffic that appservers provide. I have worked with quite a few appserver-based applications that perform very well over wide area networks, including connections from the other side of the world. I have yet to see a client server application that performs well over a wide area network.

This doesn't mean that all appserver applications magically perform at high levels. You can still have coding and design issues with appserver applications, but they are much easier to correct than in client server applications.

Proper transaction scoping is crucial

There are many misconceptions about how OpenEdge handles transactions and transaction scope. Instead of discussing all of those misconceptions and improper coding techniques, I will just describe what I consider the only way to handle transaction scope in OpenEdge, illustrated in the sketch after this list:

  • Use strongly scoped transactions (DO FOR)
  • Use NO-WAIT to control the number of retries for any given lock
  • Lock tables in a consistent order to help prevent deadly embraces
  • Choose sensible timeout values to undo the entire transaction
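
Here is a minimal sketch that puts these points together for a single table; the Customer table, the retry count and the one second pause are example values, and with multiple tables you would lock them in a consistent order inside the same block:

  DEFINE VARIABLE iRetry AS INTEGER NO-UNDO.

  UpdateBlock:
  DO FOR Customer TRANSACTION:
      /* NO-WAIT plus a retry loop controls how long we wait for the lock */
      DO iRetry = 1 TO 5:
          FIND Customer EXCLUSIVE-LOCK
               WHERE Customer.CustNum = 100
               NO-WAIT NO-ERROR.
          IF AVAILABLE Customer THEN LEAVE.
          IF LOCKED Customer THEN PAUSE 1 NO-MESSAGE.
      END.

      /* Timed out waiting for the lock - undo the entire transaction */
      IF NOT AVAILABLE Customer THEN
          UNDO UpdateBlock, LEAVE UpdateBlock.

      ASSIGN Customer.CreditLimit = Customer.CreditLimit + 1000.
  END.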

Understand your data

Understanding your application data is key to designing appropriate queries. The lack of a cost-based optimizer (CBO) forces the developer to choose the proper order of tables for queries and joins. OpenEdge simply executes queries in the order they are written and will not reorder your code to limit the number of records read.

Without a reasonable understanding of your data, troubleshooting problem queries becomes harder than it should be. Using the VSTs to determine how many records are read isn't as useful without knowledge of the underlying data. How do you determine whether 100 reads for a table is sensible or completely out of bounds without a working knowledge of the table and its relationships?

Understand the indexing rules

Because the 4GL/ABL doesn't have a cost-based optimizer (CBO), developers need to understand the indexing rules in much more detail than in other databases. Without a CBO, the developer must determine the appropriate order of execution and the exact index paths used to resolve the queries.

The proper way to select an index or indexes for your query is to properly structure the WHERE clause. Simply adding USE-INDEX will force OpenEdge to use that specific index, but it does not guarantee that the index will be used effectively, and it usually degrades performance.
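
As an illustration, here is a minimal sketch of a WHERE clause written to match an index rather than forcing one; the Customer table, its Country and PostalCode fields and the CountryPost index (Country + PostalCode) are hypothetical examples:

  FOR EACH Customer NO-LOCK
      WHERE Customer.Country    = "USA"
        AND Customer.PostalCode BEGINS "9":
      /* Equality on Country and BEGINS on PostalCode match the leading
         components of CountryPost, so the AVM can bracket that index
         instead of scanning the table - no USE-INDEX required. */
  END.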

Why Is My Code So Slow?

The easiest way to get detailed information about your code is to download my Session Tracing tools from the Downloads page. They combine the logic from the OpenEdge Profiler with VST (virtual system table) information for table and index reads.

A static class is provided which allows you to easily track metrics across multiple programs without changing all of your code. You can choose to export the metrics to HTML for viewing, or to text that can be imported into the GUI tool. The GUI tool allows you to import previous sessions, run a program to be traced, and even write ad hoc queries and trace the code from within the tool itself.

OpenEdge Profiling tracks the execution time of your code on a line-by-line basis. This makes life much easier when you are trying to track down a performance issue: no more parsing log files or adding your own debugging messages.

The VSTs allow you to tell how many records are actually read from a table compared to how many are returned to the client. This makes identifying index selection or general coding issues much easier. You can see metrics for tables (reads, updates, creates and deletes) and indexes (reads, creates and deletes).

GUI Tool

Note: All of the line numbers reported correspond to the line numbers from the R-Code and not from the source program. In order to match the line numbers you will need to compile your code with the DEBUG-LIST option.

Debugging with source code

If you have the source available, you can view the exact lines of code by compiling your code with the DEBUG-LIST option. The DEBUG-LIST option generates a text file whose line numbers match the reported R-Code line numbers.

COMPILE myprogram.p DEBUG-LIST myfile.txt.

Elephants And Mosquitoes

It is important to spend your time correcting the sections that take up the majority of the time (elephants) instead of the lower-impact sections (mosquitoes), even if the lower-impact sections seem easier to correct.

In the Profiler example above, there was one line in one program taking up 85% of the total session time. It made no sense to examine the next two lines (7% and 2%) until we had resolved the main issue. Even a complete removal of the next two lines would only have saved 9% of the session time, while a mere 20% improvement on the main issue would have saved 17%.

Shrinking The Elephant

Now that you have found the sections of code that are causing the majority of your performance problem, it is time to correct the issues. The majority of OpenEdge application issues fall into one of the following categories:

  • Improper index selection
  • Client server behavior
  • Network latency
  • Client configuration issues

Once you have identified a specific line to investigate, there are some simple approaches to take based on what that line of code is doing.

Appserver calls

If the average time per call is low but you are calling the appserver numerous times, you should investigate consolidating those calls into a single call. Every appserver call means at least one round trip across the network, and depending on your network a single round trip can take anywhere from a few milliseconds to well over 200 milliseconds.
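
As a rough sketch of consolidating chatty calls, assuming an already connected appserver handle and hypothetical getorder.p / getorders.p procedures and temp-tables:

  DEFINE TEMP-TABLE ttOrderNum NO-UNDO
      FIELD OrderNum AS INTEGER.
  DEFINE TEMP-TABLE ttOrder NO-UNDO
      FIELD OrderNum  AS INTEGER
      FIELD OrderDate AS DATE.
  DEFINE VARIABLE hAppSrv AS HANDLE NO-UNDO.
  /* hAppSrv is assumed to be connected elsewhere */

  /* Chatty: one network round trip per order */
  FOR EACH ttOrderNum:
      RUN getorder.p ON SERVER hAppSrv
          (INPUT ttOrderNum.OrderNum, OUTPUT TABLE ttOrder APPEND).
  END.

  /* Consolidated: one round trip for the whole list */
  RUN getorders.p ON SERVER hAppSrv
      (INPUT TABLE ttOrderNum, OUTPUT TABLE ttOrder).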

If the average time per call is larger you should investigate the code inside of the appserver procedure to verify it is working as efficiently as possible. After that evaluate how much data you are passing to the appserver and how much data is being returned.

Database queries

Evaluate the query to make sure the indexes chosen are the most efficient indexes to select the records required. You can compare the number of records returned to the number of records actually read by comparing the output of your query to the _UserTableStat VST for your database connection.
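
If you want a quick way to sample read counts without the tracing tools, here is a minimal sketch using the database wide _TableStat VST (run it on an otherwise quiet database, or adapt it to _UserTableStat for your own connection); the Customer table is an example, and the _File join assumes the table falls inside the default -tablerangesize:

  DEFINE VARIABLE iBefore AS INT64 NO-UNDO.
  DEFINE VARIABLE iAfter  AS INT64 NO-UNDO.

  FIND _File NO-LOCK WHERE _File._File-Name = "Customer".

  FIND _TableStat NO-LOCK WHERE _TableStat._TableStat-Id = _File._File-Number.
  iBefore = _TableStat._TableStat-Read.

  /* Run the query you are investigating */
  FOR EACH Customer NO-LOCK WHERE Customer.Country = "USA":
  END.

  FIND _TableStat NO-LOCK WHERE _TableStat._TableStat-Id = _File._File-Number.
  iAfter = _TableStat._TableStat-Read.

  MESSAGE "Records read:" iAfter - iBefore VIEW-AS ALERT-BOX INFORMATION.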

If you are running client server there are a number of tuning opportunities with database and client startup options, but be aware there are limits to how much you can reduce network traffic in a client server environment.

Temp-Table access

Just like database tables, Temp-Tables use indexes to speed up access to the data. Improper or missing indexes on a Temp-Table query can drastically impact your performance.
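
As a brief illustration, assuming a hypothetical ttOrderLine Temp-Table, define the index to match the WHERE clause that will query it:

  DEFINE TEMP-TABLE ttOrderLine NO-UNDO
      FIELD OrderNum AS INTEGER
      FIELD LineNum  AS INTEGER
      FIELD Amount   AS DECIMAL
      INDEX idxOrder IS PRIMARY OrderNum LineNum.

  /* Without idxOrder this query would scan every ttOrderLine record */
  FOR EACH ttOrderLine WHERE ttOrderLine.OrderNum = 1001:
  END.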

If the issue is with how fast you can create Temp-Table records, first make sure you are creating an appropriate number of records. Then make sure your -T setting points to a local drive and your -Bt and -tmpbsize parameters are set adequately.
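
As a rough illustration, the relevant entries in a client parameter file (.pf) might look like the following; the values shown are only starting points to adjust for your environment:

  # Temporary files on a local drive
  -T C:\OpenEdge\wrk
  # Temp-Table buffer pool size in blocks
  -Bt 2000
  # Temp-Table block size in KB
  -tmpbsize 8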