The HLR benchmark application was installed on, and always ran on, the application (TimesTen) server. To assess the impact of caching the RAC database in TimesTen, for both use cases, we first ran the HLR workload directly against the backend RAC database and then ran the same HLR workload against the TimesTen cache with updates propagating to the RAC database from the TimesTen cache asynchronously.
Each workload run contained multiple iterations, each with a progressively larger number of application threads (thread counts, or TC), that is, concurrent database connections generated by the HLR workload. For each iteration, we set the ramp-up time to two minutes, the measurement time to five minutes, and the ramp-down time to one minute, so each iteration lasted eight minutes in total. During each iteration’s measurement period, the HLR application captured the total transaction throughput, measured in Transactions per Second (TPS), and the latency (in microseconds) of each of the seven HLR transactions.
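The iteration timing and throughput arithmetic above can be sketched as follows. This is an illustrative sketch only, not the actual HLR driver code; the function names are hypothetical.

```python
# Hypothetical sketch of the per-iteration timing used in the runs.
RAMP_UP_S = 2 * 60     # two-minute ramp-up
MEASURE_S = 5 * 60     # five-minute measurement window
RAMP_DOWN_S = 1 * 60   # one-minute ramp-down

def iteration_duration_s() -> int:
    """Total wall-clock time of one iteration (eight minutes)."""
    return RAMP_UP_S + MEASURE_S + RAMP_DOWN_S

def tps(committed_txns: int, measure_s: int = MEASURE_S) -> float:
    """Throughput counted only over the measurement window,
    excluding ramp-up and ramp-down."""
    return committed_txns / measure_s
```

For example, 150,000 transactions committed during the five-minute measurement window would correspond to 500 TPS.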
We determined the size of the HLR schema to target for each use case by calculating the number of SUBSCRIBER rows that we could explicitly load into the TimesTen cache. We targeted the same number of SUBSCRIBER rows during the respective use case’s baseline tests where we ran the HLR benchmark directly against the RAC database.
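The sizing step can be expressed as a simple back-of-the-envelope calculation. The function below is a sketch under assumed figures (cache size, per-subscriber footprint, headroom factor); none of these numbers come from the study.

```python
def subscriber_rows_that_fit(cache_bytes: int,
                             bytes_per_subscriber: int,
                             headroom: float = 0.8) -> int:
    """Estimate how many SUBSCRIBER rows can be explicitly loaded,
    leaving headroom for index overhead and temporary space.
    All figures here are assumptions for illustration."""
    return int(cache_bytes * headroom) // bytes_per_subscriber

# e.g. a hypothetical 64 GiB cache region and ~2 KiB per subscriber
# (including dependent child rows and indexes):
rows = subscriber_rows_that_fit(64 * 2**30, 2048)
```

The resulting row count would then also be used for the baseline runs against the RAC database, so that both configurations operate on the same dataset size.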
For the baseline runs directly against the RAC database, we ran two separate instances of the benchmark application, each targeting one RAC instance and a disjoint set of data that matched the partitioning scheme of the HLR schema tables and indexes. This avoided potential RAC data block ping-pong effects, which can be detrimental to OLTP-style workloads. Similarly, during the TimesTen benchmarking runs, we configured the TimesTen replication agents to connect to only one RAC instance, again to avoid such backend RAC block ping-pong effects.
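One simple way to keep the two benchmark instances on disjoint data is to map each subscriber deterministically to a partition, as sketched below. This is a hypothetical illustration; the actual scheme matched the partitioning of the HLR schema tables and indexes.

```python
def partition_for(subscriber_id: int, num_partitions: int = 2) -> int:
    """Map a subscriber to exactly one partition so that each
    benchmark instance (and hence each RAC instance) only ever
    touches its own rows. Illustrative only."""
    return subscriber_id % num_partitions

# Instance 0 drives only subscribers where partition_for(id) == 0,
# instance 1 only those where partition_for(id) == 1, so the two
# RAC instances never contend for the same data blocks.
```

Because the mapping is a total function, every subscriber belongs to exactly one partition and the two working sets cannot overlap.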
The TimesTen replication agent asynchronously batches and propagates committed updates in the TimesTen cache tables to the cached HLR schema tables in the backend RAC database. We used a purpose-built script to measure the replication lag, that is, how far the propagation of committed updates to the backend RAC database trailed behind. During initial trial-and-error tests, we used the measured replication lag to determine the maximum throughput (TPS) rate at which the lag did not grow continuously, indicating that the backend RAC database could keep up with the replication flow pushed by the TimesTen agent. During our final measured runs, we capped both the TimesTen and the baseline RAC-only tests at this same maximum TPS value.
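The core check from that tuning loop can be sketched as follows. The purpose-built script is not reproduced here; this is a hypothetical heuristic over periodic lag samples, and the window size is an assumption.

```python
def lag_is_unbounded(lag_samples: list, window: int = 5) -> bool:
    """Return True if the replication lag grew strictly across the
    last `window` samples, suggesting the backend RAC database
    cannot keep up and the offered TPS should be lowered.
    A sketch only; the real script and thresholds may differ."""
    recent = lag_samples[-window:]
    if len(recent) < window:
        return False  # not enough samples to judge a trend
    return all(b > a for a, b in zip(recent, recent[1:]))
```

A flat or oscillating lag series passes the check, while a monotonically growing one signals that the sustainable TPS cap has been exceeded.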
Note: The replication lags, latencies, and the maximum throughput (TPS) rate we observed in this study are strictly applicable to our lab setup and are NOT indicative of the maximum performance capability of the individual hardware and software products, nor of the schema, dataset, and workload used in the solution. Performance results will vary based on the deployed environment.
For more details on TimesTen replication, refer to TimesTen cache AWT replication overview. For more specific details about the use cases, see the following sections.