Dell PowerEdge HS5610—Cold Aisle Service: Blind Mate Rail Kit
Thu, 20 Apr 2023 15:39:35 -0000
Summary
Large-scale data centers are often designed with contained hot aisles to manage ambient temperature. A hot aisle/cold aisle layout lines up server racks in alternating rows, with the cold air intakes (the fronts of the servers) facing each other across the "cold aisle" and the hot air exhausts (the backs of the servers) facing each other across the "hot aisle." While this approach increases control over temperature and power consumption, it can also raise safety (OSHA) and management efficiency concerns for the technicians who administer data center infrastructure. To address these issues, customers are pushing for cold-aisle-serviceable configurations that reduce the need to enter the hot aisle.
What are blind mate power rails?
Blind mate power rails are static rails with extra power pass-through bracket assemblies that let you connect power, and then remove or service the HS5610, without hot-aisle access. These rails do not allow hot-swapping of internal components or in-rack serviceability, and they are not compatible with strain relief bars (SRBs) or cable management arms (CMAs).
Blind mate power rails:
- Support stab-in installation of the chassis to the rails
- Support tool-less installation in 19-inch EIA-310-E compliant square or unthreaded round hole 4-post racks including all generations of Dell racks
- Support tooled installation in 19-inch EIA-310-E compliant threaded hole 4-post racks
- Support tooled installation in Dell Titan or Titan-D racks
These rails are compatible with the HS5610 cold-aisle-service configuration only.
Customer installation for static rails
Step 1: Install the chassis members onto the server.
- Install the chassis rail members with the extra length extending behind the server for the inner bracket.
- Attach the inner bracket to the chassis rail members, using the supplied screws and a Phillips screwdriver.
- Plug the power cord pigtails into the PSUs.
Step 2: Install the cabinet members into the rack.
- Attach the outer bracket to the installed cabinet rail members by aligning the T-nut and pogo-pin features; both sides should snap in.
- Plug external power into the outer bracket receptacles, just as you would for an installed server.
Step 3: Stab the chassis into the cabinet guides and slide it back until the rack ears latch to the rail mounting brackets. Then go around to the hot aisle and plug in the PSUs to power on the server.
Limits
Any cable that does not exit the front of the server needs blind-mate functionality. Cables without blind-mate functionality rule out in-rack service (and an on-rails "service position"), which limits these rails to static designs.
Conclusion
Once the cabinet members are installed and powered, service no longer requires hot-aisle access.
References
For more information about PowerEdge HS5610, see the PowerEdge HS5610 Specification Sheet.
For more information about the PowerEdge HS5610 blind mate rail kit, see A22 Rail Installation Guide.
Related Documents
Dell PowerEdge HS5610 Performance
Thu, 29 Jun 2023 21:55:49 -0000
Summary
Dell delivers technology optimization without the financial and operational burden of supporting extreme configurations. Dell PowerEdge cloud-scale servers are designed and optimized to give you the ability to scale with server configurations built for CSPs. The servers scale up to two sockets, 32 cores each, 1 TB of memory, and various SAS, SATA, and NVMe storage options. Cloud-scale servers also offer Dell Open Server Manager built on open-source OpenBMC systems management software.
Test configuration
| Server | PowerEdge R650xs | PowerEdge HS5610 |
|---|---|---|
| CPU | 2 x Intel® Xeon® Gold 5318Y | 2 x Intel® Xeon® Gold 5418Y |
| Memory | 16 x 32 GB at 2933 MT/s | 16 x 32 GB at 4400 MT/s |
| Storage | 4 x 960 GB SATA drives | 4 x 960 GB SATA drives |
| RAID controller | H755 front, RAID 5 | H755 front, RAID 5 |
| Operating system | Ubuntu 22.04 LTS | Ubuntu 22.04 LTS |
Database benchmark—Redis
Redis is an open-source (BSD licensed), in-memory data structure store used as a database, cache, message broker, and streaming engine. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, geospatial indexes, and streams. Redis has integrated replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability through Redis Sentinel and automatic partitioning with Redis Cluster.
To achieve top performance, Redis works with an in-memory dataset. Depending on your use case, Redis can persist your data either by periodically dumping the dataset to disk or by appending each command to a disk-based log. You can also disable persistence if you just need a feature-rich, networked, in-memory cache.
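For illustration, the persistence choices described above map to a few redis.conf directives; the values below are examples, not the settings used in this test:

```conf
# RDB snapshotting: dump the dataset to disk every 60 seconds
# if at least 1000 keys changed
save 60 1000

# AOF: append each write command to a disk-based log,
# fsynced once per second
appendonly yes
appendfsync everysec

# For a pure in-memory cache, disable persistence entirely:
# save ""
# appendonly no
```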
Redis supports asynchronous replication, with fast nonblocking synchronization and auto-reconnection with partial resynchronization on net split.
Results:
- The data provided highlights the performance of each system running a typical SET command to modify data in the schema in memory. This test leverages AVX512 extensions on both 15G and 16G systems. Relative performance uplift on our 16G configuration was strongly influenced by the increased memory bandwidth provided by DDR5.
- Dell PowerEdge HS5610 database performance improved by 35 percent compared to the previous generation. Veeva claim ID: CLM-007679
- The Dell PowerEdge HS5610 offers a 33 percent increase in performance per CPU dollar compared to the previous generation. Veeva claim ID: CLM-007681
- Dell PowerEdge HS5610 performance has increased by 28 percent per watt compared to the previous generation with Redis database benchmark. Veeva claim ID: CLM-007680
CPU benchmark—V-Ray 5
V-Ray Benchmark is a free, stand-alone application that can be used to test how fast your system renders. It’s simple and fast, and includes three render engine tests:
- V-Ray—CPU compatible
- V-Ray GPU CUDA—GPU and CPU compatible
- V-Ray GPU RTX—RTX GPU compatible
Three custom-built test scenarios are also included to put each V-Ray 5 render engine through its paces.
Discover how your computer ranks alongside others and learn how different hardware combinations can boost your rendering speeds.
Results:
- The data provided highlights the performance of each system running a CPU benchmark with V-Ray, measured in vsamples; higher is better.
- The PowerEdge HS5610 delivers 15 percent higher CPU rendering performance than the previous generation. Veeva claim ID: CLM-007677
Memory benchmark—STREAM
The STREAM benchmark is a simple synthetic benchmark program that measures sustainable memory bandwidth (in MB/s) and the corresponding computation rate for simple vector kernels.
Computer CPUs are getting faster much more quickly than computer memory systems. As these gains progress, an increasing number of programs will be limited in performance by the memory bandwidth of the system, rather than by the computational performance of the CPU.
As an extreme example, several current high-end machines run simple arithmetic kernels for out-of-cache operands at 4 to 5 percent of their rated peak speeds. That means that they are spending 95 to 96 percent of their time idle and waiting for cache misses to be satisfied.
The STREAM benchmark is specifically designed to work with datasets much larger than the available cache on any given system, so that the results are (presumably) more indicative of the performance of very large, vector-style applications.
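The real STREAM benchmark is a small C program; as a rough illustration of what its Copy kernel measures, here is a Python sketch (interpreter and allocator overhead mean it understates true hardware bandwidth, so treat it as a demonstration of the idea, not a measurement tool):

```python
import array
import time

def stream_copy_mb_per_s(n=5_000_000):
    """Approximate the STREAM 'Copy' kernel a[i] = b[i]: pure data
    movement with no arithmetic. Returns an estimated rate in MB/s,
    counting one 8-byte read and one 8-byte write per element."""
    b = array.array("d", [1.0]) * n
    a = array.array("d", [0.0]) * n
    start = time.perf_counter()
    a[:] = b  # bulk slice assignment performs the copy in one pass
    elapsed = time.perf_counter() - start
    return (2 * 8 * n) / elapsed / 1e6

print(f"Copy: {stream_copy_mb_per_s():.0f} MB/s")
```

Like STREAM itself, the sketch uses an array much larger than any CPU cache, so the copy is bound by memory bandwidth rather than compute.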
Results:
- The Copy benchmark measures the transfer rate in the absence of arithmetic. It should be one of the fastest memory operations, and it represents a common one: fetching a value, b(i), from memory and storing it to a(i).
- PowerEdge HS5610 has 25 percent more memory bandwidth than the similarly configured previous-generation system. Veeva claim ID: CLM-007678
Conclusion
Our engineers select the appropriate benchmarks in coordination with your team. Then, using the benchmarks, we perform iterative testing in a Dell Technologies performance lab to analyze the effects of specific server settings and hardware configurations on a benchmark. This data-driven approach with engineers specializing in PowerEdge system performance allows Dell to identify the optimal system configuration for a given workload and provide guidance that delivers rapid time to value for our cloud customers.
Legal disclosures
- Testing conducted by the Dell Server TME Lab in March 2023. Server performance benchmarks were run on similarly configured Dell PowerEdge HS5610 and Dell PowerEdge R650xs systems. See the documentation for test and configuration specifics. Actual results will vary by use.
- Testing conducted by the Dell Server TME Lab in March 2023. Server performance benchmarks were run on similarly configured Dell PowerEdge HS5610 and Dell PowerEdge R650xs systems. See the documentation for test and configuration specifics. CPU prices were taken from Intel.com as of March 29, 2023, for the Gold 5318Y and 5418Y. Actual results will vary by use.
Spark Machine Learning on Dell HS5610 Platform with Cloudera
Mon, 29 Jan 2024 22:49:51 -0000
Executive Summary
To establish thorough solution collateral for the Dell PowerEdge HS5610 platform integrated with Cloudera software, we are commencing benchmarking initiatives this year. These benchmarks will form the baseline for our future testing, and it is important to emphasize that we are not making comparisons with previous generations.
This initiative was prompted directly by Dell's request to craft this reference solution. Intel executed the benchmark tests and shared its Best-Known Methods (BKMs), providing invaluable guidance for this undertaking.
What are the key takeaways?
Cloudera Data Platform built on Dell’s 16G PowerEdge servers with Intel® 4th Generation Xeon processor architecture can accommodate growing enterprise data workloads and efficiently manage increasing demands for analytics and machine learning in a smaller footprint. Cloudera Data Platform delivers easier data management and scalability for data anywhere with optimal performance, scalability, and security.
As organizations create more diverse and more user-focused data products and services, there is a growing need for machine learning, which can be used to develop personalization, recommendations, and predictive insights. But as organizations amass greater volumes and varieties of data, data scientists spend most of their time supporting their infrastructure instead of building the models to solve their data problems. To help solve this problem, Spark, as an integrated part of Cloudera's platform, provides a general machine learning library that is designed for simplicity, scalability, and easy integration with other tools. With the scalability, language compatibility, simple administration, and compliance-ready security and governance provided through Cloudera, data scientists can solve and iterate through their data problems faster.
Spark MLlib
Spark MLlib is a distributed machine learning framework built on top of Spark Core. The key benefit of MLlib is that it allows data scientists to focus on their data problems and models instead of solving the complexities surrounding distributed data. MLlib leverages the advantages of in-memory computation and is optimized for matrix and vector operations, aligning its capabilities with specific algorithmic requirements for the given use case.
K-means Overview
Clustering is a fundamental exploratory data analysis technique, providing valuable insight into the inherent structure of data. One prominent algorithm for this purpose is K-means, widely used for partitioning data points into a predefined number of clusters. The technique finds extensive application in diverse domains including market segmentation, document clustering, image segmentation, search engines, real estate, anomaly detection, and image compression, highlighting its versatility and importance in data analysis.
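As an illustrative aside (not the distributed Spark MLlib implementation used in this benchmark), the core K-means loop, Lloyd's algorithm, can be sketched in a few lines of Python; the function name and structure here are our own:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's algorithm: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Index of the nearest centroid by squared Euclidean distance
            j = min(range(k),
                    key=lambda c: sum((x - y) ** 2
                                      for x, y in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids
```

Two well-separated clumps of points converge to their means in a handful of iterations; Spark MLlib parallelizes exactly this assign-and-average step across executors, which is what makes clustering 10 million 1,000-dimensional samples tractable.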
K-means clustering performance
We achieved remarkable results, clustering 10 million samples with 1,000 dimensions in just 283 seconds. This was made possible by the K-means algorithm from Spark's ML library, provided by Cloudera 7.1.8 and deployed on the Dell PowerEdge HS5610 platform.
We conducted a performance evaluation of Spark MLlib's K-means algorithm using the HiBench benchmark.
For detailed information about our benchmarking process, see the Intel GitHub repository: https://github.com/Intel-bigdata/HiBench
Note: This result is not compared against any other platform, hardware, or software. We will use it as a baseline for future products.
Configuration Details
| Workload configuration | |
|---|---|
| Platform | Dell PowerEdge HS5610 |
| CPU | Intel® Xeon® Gold 6448Y |
| Memory | 512 GB (16 x 32 GB DDR5-4800) |
| Boot device | Dell EMC™ Boot Optimized Server Storage (BOSS-N1) with 2 x 480 GB M.2 NVMe SSDs (RAID 1) |
| HDFS data disks | 2 x Dell Ent NVMe P5500 RI U.2 3.84 TB |
| HDFS NameNode disk | 1 x Dell Ent NVMe P5500 RI U.2 3.84 TB |
| YARN cache disk | 1 x Dell Ent NVMe P5500 RI U.2 3.84 TB |
| Network interface controller | NetXtreme BCM5720 Gigabit Ethernet PCIe |
| Cluster size | 1 |
| Cloudera distribution | Cloudera Data Platform 7.1.8 |
| Compute engine | Spark 3.2.0 |
| Workload | HiBench 7.1.1, K-means algorithm |
| Iterations and result choice | 3 iterations, average |

| Spark configuration | |
|---|---|
| spark.deploy.mode | yarn |
| Executor count | 16 |
| Executor cores | 8 |
| spark.executor.memory | 24g |
| spark.executor.memoryOverhead | 4g |
| spark.driver.memory | 20g |
| spark.default.parallelism | 128 |
| spark.driver.maxResultSize | 20g |
| spark.serializer | org.apache.spark.serializer.KryoSerializer |
| spark.kryoserializer.buffer.max | 1g |
| spark.network.timeout | 1200s |

| K-means configuration | |
|---|---|
| Number of clusters | 5 |
| Dimensions | 1,000 |
| Number of samples | 10,000,000 |
| Samples per input file | 10,000 |
| Number of iterations | 40 |
| k | 300 |
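For reference, the Spark settings above can also be written as spark-defaults.conf properties. This is only a sketch using standard Spark property names (HiBench normally supplies its own configuration at run time):

```conf
spark.master                      yarn
spark.executor.instances          16
spark.executor.cores              8
spark.executor.memory             24g
spark.executor.memoryOverhead     4g
spark.driver.memory               20g
spark.default.parallelism         128
spark.driver.maxResultSize        20g
spark.serializer                  org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max   1g
spark.network.timeout             1200s
```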