Tiered storage certification tests and results
This section describes how the Confluent Platform 6.2.0 Tiered Storage feature successfully completed all test cases in Confluent's TOCC test framework. Passing these component-specific tests qualifies ECS as compatible, supported tiered storage for Confluent Kafka brokers through the S3 protocol.
All features of the components within the testing scope completed testing successfully.
| Test | Description | Supported |
| --- | --- | --- |
| Object store correctness test | Tests whether all basic operations (for example, get, put, and delete) on the object store API work as tiered storage requires. This is a basic test that every object store service should pass before attempting the other tests in this table. It is an assertive test that either passes or fails. A minimal sketch of such a check appears after this table. | Yes |
| Tiering functionality correctness test | Tests whether the end-to-end tiered storage functionality works correctly. It is an assertive test that either passes or fails. The test creates a test topic that, by default, is configured with tiering enabled and a highly reduced hotset size (a sketch of such a topic setup appears after this table). It then produces an event stream to the newly created topic, waits for the brokers to archive the segments to the object store, consumes the event stream, and validates that the consumed stream matches the produced stream. The number of messages produced to the event stream is configurable, which lets the user generate a sufficiently large workload for the needs of the testing. The reduced hotset size ensures that consumer fetches outside the active segment are served only from the object store, which helps test the correctness of the object store for reads. | Yes |
| Tiering functionality correctness test with object store fault injection | Tests whether the end-to-end tiered storage functionality works correctly with a simulated object store node failure (injected fault). It is the same as the "Tiering functionality correctness test" with the addition of an injected object store failure. | Yes |
| Tier fetch benchmark | An important part of the performance of the object store APIs for serving reads is the object store's ability to serve range fetch read requests under heavy load. This benchmark closely measures the performance of the object store when serving range fetch requests from segments generated by the benchmark. In this benchmark, the client reading from and writing to the object store is not the Kafka broker but a custom client built on some of the core libraries that Confluent uses internally to serve tier fetch requests. For each record_size_bytes value chosen from the list [500, 50000, 500000, 1000000, 2000000], the benchmark performs 60 iterations, each consisting of a fixed sequence of steps, and measures the avg/min/max time taken to complete the entire range fetch request across all 60 iterations. A sketch of a comparable range-fetch timing loop appears after this table. | Yes |
| Produce-consume workload generator | This test should run for at least an hour, controlled by the num.records parameter; more records take longer to finish. A produce-consume workload can be generated using the TOCC script. The produce-consume workload indirectly generates a write workload on the object store through the archival of segments to the store; the read workload comes from segments read back from the object store when serving the consumer groups' fetches. This lets the user check the performance of read and write workloads on the object store, especially while they occur in parallel. The workload generated by the script first creates the test topic and, by default, configures it with tiering enabled and a highly reduced hotset size. The workload then produces N messages in parallel to the test topic at T messages/sec from P producers and consumes the messages from C consumer groups (N, T, P, and C are configurable). The workload initially spawns producers that generate events to the topic and then optionally waits for the brokers to archive the segments to the object store. It then spawns a second set of producers along with head and tail consumer groups in parallel (the number of instances is configurable) and waits for the head consumers to consume all the produced messages (largely fetched from the object store). A sketch of a small produce-consume loop appears after this table. | Yes |
| Produce-consume workload generator with object store fault injection | This test should run for at least an hour, controlled by the num.records parameter; more records take longer to finish. A produce-consume workload can be generated using the TOCC script. The produce-consume workload indirectly generates a write workload on the object store through the archival of segments to the store, with a fault injected during the run; the read workload comes from segments read back from the object store when serving the consumer groups' fetches. This lets the user check the performance of read and write workloads on the object store, especially while they occur in parallel. | Yes |
| Retention workload generator | This test should run for at least an hour. A retention workload can be generated using the TOCC script. It is useful for checking the deletion performance of the object store under a heavy topic retention workload. The retention workload generated by the script typically produces many messages in parallel to a test topic that is configured with tiering enabled and a highly reduced hotset size. The workload is configured at X messages/sec from Y producers, while the test topic is configured with an aggressive size-based or time-based retention setting (for example, retaining only 2x the size of a segment per partition). The aggressive retention setting causes the event stream to be continuously purged from the object store after the segments have been archived. This leads to many deletions being issued on the object store by the broker, which lets the user check the performance of the object store delete API along the way. A sketch of such a retention-oriented topic configuration appears after this table. | Yes |
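
The sketches below are not part of the TOCC framework; they are minimal illustrations of the kinds of operations each test exercises, and every endpoint URL, bucket name, broker address, topic name, and credential in them is an assumed placeholder. The first sketch mirrors the object store correctness test's basic get/put/delete round trip, using boto3 against an S3-compatible endpoint such as ECS.

```python
# Minimal put/get/delete round trip against an S3-compatible endpoint.
# The endpoint, bucket, and credentials below are placeholders, not TOCC values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.com:9021",   # hypothetical ECS S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, key, payload = "tier-bucket", "correctness/check", b"hello tiered storage"

s3.put_object(Bucket=bucket, Key=key, Body=payload)                      # put
assert s3.get_object(Bucket=bucket, Key=key)["Body"].read() == payload   # get
s3.delete_object(Bucket=bucket, Key=key)                                 # delete
```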
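
The tiering functionality correctness test depends on a topic with tiering enabled and a small hotset. A minimal sketch of such a topic creation follows, using the confluent-kafka Python AdminClient and assuming the Confluent Platform topic-level configs confluent.tier.enable and confluent.tier.local.hotset.bytes; the broker address, topic name, and sizes are illustrative only.

```python
# Sketch: create a tiering-enabled test topic with a small hotset so that
# consumer fetches outside the active segment are served from the object store.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})

topic = NewTopic(
    "tocc-test-topic",
    num_partitions=6,
    replication_factor=3,
    config={
        "confluent.tier.enable": "true",                 # archive closed segments to the object store
        "confluent.tier.local.hotset.bytes": "1048576",  # keep only ~1 MiB hot on local disk
        "segment.bytes": "1048576",                      # roll segments quickly so tiering kicks in
    },
)

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if topic creation failed
    print(f"created {name}")
```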
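
For the tier fetch benchmark, a much simplified range-fetch timing loop might look like the following; the object key and endpoint are placeholders, credentials are taken from the environment, and the loop only approximates the benchmark's per-iteration steps.

```python
# Sketch of a range-fetch timing loop against an S3-compatible endpoint.
# Record sizes mirror the benchmark's list; the object key is hypothetical.
import time
import boto3

s3 = boto3.client("s3", endpoint_url="https://ecs.example.com:9021")
bucket = "tier-bucket"

for record_size_bytes in (500, 50_000, 500_000, 1_000_000, 2_000_000):
    timings = []
    for _ in range(60):
        start = time.monotonic()
        resp = s3.get_object(
            Bucket=bucket,
            Key="benchmark/segment-0",                    # hypothetical segment object
            Range=f"bytes=0-{record_size_bytes - 1}",     # range fetch of one record
        )
        resp["Body"].read()
        timings.append(time.monotonic() - start)
    print(record_size_bytes, min(timings), sum(timings) / len(timings), max(timings))
```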
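
A stripped-down produce-then-consume loop in the spirit of the produce-consume workload generator is sketched below with the confluent-kafka Python client; the real TOCC script runs multiple producers and head/tail consumer groups in parallel, which this sketch does not attempt, and the broker address, group id, and message count are illustrative.

```python
# Sketch: produce messages to the tiering-enabled test topic, then consume
# them from the earliest offset (largely served from tiered segments).
from confluent_kafka import Producer, Consumer

topic, n_messages = "tocc-test-topic", 100_000

producer = Producer({"bootstrap.servers": "broker1:9092"})
for i in range(n_messages):
    producer.produce(topic, key=str(i), value=f"event-{i}")
    producer.poll(0)      # serve delivery callbacks
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "tocc-head-consumer",
    "auto.offset.reset": "earliest",   # read from the start of the log
})
consumer.subscribe([topic])

consumed = 0
while consumed < n_messages:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    consumed += 1
consumer.close()
```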
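
Finally, the retention workload relies on aggressive retention settings. A hedged sketch of a topic configured to retain only about two segments per partition follows; retention.bytes, segment.bytes, and cleanup.policy are standard Kafka topic configs, while the tiering configs carry the same Confluent Platform assumptions as above.

```python
# Sketch: a tiering-enabled topic with aggressive size-based retention so that
# archived segments are continuously purged from the object store.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})

segment_bytes = 1_048_576
topic = NewTopic(
    "tocc-retention-topic",
    num_partitions=6,
    replication_factor=3,
    config={
        "confluent.tier.enable": "true",
        "confluent.tier.local.hotset.bytes": str(segment_bytes),
        "segment.bytes": str(segment_bytes),
        "retention.bytes": str(2 * segment_bytes),  # retain only ~2 segments per partition
        "cleanup.policy": "delete",
    },
)

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if topic creation failed
```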