Dell EMC Solutions for Azure Stack HCI Furthers Customer Value
Mon, 23 Mar 2020 22:39:10 -0000
As customers retire Microsoft Windows Server 2008 and move to software-defined infrastructures built on Windows Server 2019, the core tenets of hyperconverged infrastructure (HCI) and hybrid cloud enablement remain desired goals. Many customers, however, are unsure how best to leverage their investments in Windows Server to modernize their datacenters and take advantage of software-defined infrastructure.
At Dell Technologies, we hold leadership positions in converged, hyperconverged, and cloud infrastructures across several platforms, including being a founding launch partner for Microsoft’s Azure Stack HCI solution. Building on three decades of partnership with Microsoft, we bring the insights and expertise to help our customers with their IT transformation using the software-defined features of Windows Server 2019, the foundational platform for Azure Stack HCI.
Built on globally available and supported Storage Spaces Direct (S2D) Ready Nodes, Dell EMC offers a wide range of Azure Stack HCI solutions that provide an excellent value proposition for customers who have standardized on Microsoft Hyper-V and are looking to modernize their IT infrastructure while utilizing existing investments and expertise in Windows Server.
As we head to Microsoft’s largest customer event – Microsoft Ignite 2019 – we are delighted to share some new enhancements and offerings to our Azure Stack HCI solution portfolio.
Simplifying Managing Azure Stack HCI via Windows Admin Center (WAC)
With a goal of simplifying Azure Stack HCI management, we have integrated monitoring of S2D Ready Nodes into the Windows Admin Center (WAC) console. The Dell EMC OpenManage Extension for WAC allows our customers to manage Azure Stack HCI clusters from a single pane of glass. The current integration provides health monitoring, hardware inventory, and firmware compliance reporting of S2D Ready Nodes, the core building block of our Azure Stack HCI solution. By using this extension, infrastructure administrators can monitor all their clusters in real time and check whether the nodes are compliant with Dell EMC recommended firmware and driver versions. Customers wanting to leverage the Azure public cloud to either extend or protect their on-premises applications can do so within the WAC console, utilizing services such as Azure Backup, Azure Site Recovery, and Azure Monitor.
Here is what Greg Altman, IT Infrastructure Manager at Swiff-Train and one of our early customers, had to say about our OpenManage integration with WAC:
"The Dell EMC OpenManage Integration with Microsoft Windows Admin Center gives us full visibility to Dell EMC Solutions for Microsoft Azure Stack HCI, enabling us to more easily respond to situations before they become critical. With the new OpenManage integration, we can also manage Microsoft Azure Stack HCI from anywhere, even simultaneously managing our clusters located in different cities."
New HCI Node optimized for Edge and ROBO Use Cases
Customers looking at modernizing infrastructure at edge, remote or small office locations now have an option of utilizing the new Dell EMC R440 S2D Ready Node which provides both hybrid and all-flash options. A 2-node Azure Stack HCI cluster provides a great solution for such use cases that need limited hardware infrastructure, yet superior performance and availability and ease of remote management.
The dual-socket R440 S2D Ready Node is shallower (27.26 in deep) than a typical rack server and comes in 8- or 10-drive 2.5” configurations, providing up to 76.6 TB of all-flash capacity in a single 1U node.
The table below summarizes our S2D Ready Node portfolio.
| Model | Target use case | Storage configurations |
|---|---|---|
| R440 S2D RN | Edge/ROBO and space (depth) constrained locations | Hybrid and All-Flash |
| R640 S2D RN | Density optimized node for applications needing a balance of high-performance storage and compute | Hybrid, All-Flash, and All-NVMe, including Intel Optane DC Persistent Memory |
| R740xd S2D RN | Capacity and performance optimized node for applications needing a balance of compute and storage | Hybrid, All-Flash, and All-NVMe |
| R740xd2 S2D RN | Capacity optimized node for data-intensive applications and use cases such as backup and archive | Hybrid with SSDs and 3.5” HDDs |
For detailed node specifications, please refer to our website.
Stepping up the Performance Capabilities
With applications and growing data-analysis needs driving ever lower latency and higher capacity requirements, it’s imperative that the underlying infrastructure does not create performance bottlenecks. The latest refresh of our solution includes several updates to scale infrastructure performance:
- All S2D Ready Nodes now support Intel 2nd Generation Xeon Scalable Processors that provide improved compute performance and security features.
- Support for Intel Optane SSDs and Intel Optane DC Persistent Memory (on the R640 S2D Ready Node) enables a lower latency storage and persistent memory tier to accelerate application performance. The R640 S2D Ready Node can be configured with 1.5TB of Optane DC persistent memory working in App Direct Mode to provide a cache tier for the NVMe storage local to the node.
- The new all-NVMe option on R640 S2D Ready Node provides a compact 1U node for applications that are sensitive to both compute and storage performance.
- Faster networking options: For applications needing high-bandwidth, low-latency network access, the R640 and R740xd S2D Ready Nodes can now be configured with Mellanox CX5 100Gb Ethernet adapters. In addition, we have also qualified the PowerSwitch S5232 100Gb switch to provide a fully validated solution from Dell EMC.
As we drove new hardware enhancements into our Azure Stack HCI portfolio, we also put a representative configuration to the test. With just a four-node Azure Stack HCI cluster of R640 S2D Ready Nodes configured with all-NVMe drives and 100Gb Ethernet, we observed:
- 2.95M IOPS with an average read latency of 242 μs in a VM Fleet test configured for 4K block size and 100% reads
- 0.8M IOPS with an average write latency of 4,121 μs in a VM Fleet test configured for 4K block size and 100% writes
- Up to 63GB/s of 100% sequential read throughput and 9GB/s of 100% sequential write throughput with 512KB block size
Yes, you got it right. The solution is not only compact and easy to manage but also delivers tremendous performance.
Read our detailed blog for more information on our lab performance test results.
Overall, we are very excited to bring so many new capabilities to our customers. We invite you to come meet us at Microsoft Ignite 2019 at Booth 1547, talk to Dell EMC experts and see live demos. Besides the show floor, Dell EMC experts will also be available at Hyatt Regency Hotel, Level 3, Discovery 43 Suite for detailed conversations. Register here for time with our experts.
Related Blog Posts
Evaluating Performance Capabilities of Dell EMC Solutions for Azure Stack HCI
Mon, 23 Mar 2020 22:39:11 -0000
Just the facts:
- A Dell EMC Storage Spaces Direct four-node cluster was tested with VM Fleet in a 100 percent random-read workload and achieved 2,953,095 IOPS with an average read latency of 242 microseconds.
- A Dell EMC Storage Spaces Direct four-node cluster was tested with VM Fleet in a 100 percent random-write workload and achieved 818,982 IOPS at an average write latency of 4 milliseconds.
- A Dell EMC Storage Spaces Direct four-node cluster was tested with VM Fleet in a 100 percent sequential-read workload and achieved 63 GB/s, and in a 100 percent sequential-write workload achieved 9 GB/s.
User experience is everything. In today’s world, fast and intuitive applications are a necessity, and anything less might be labeled slow and not very useful. Once an application is labeled slow, it’s hard to change that impression with end users. Thus, architecting a system for performance is a key consideration in ensuring a good application experience.
In this blog, we explore a Dell EMC Storage Spaces Direct solution that delivered amazing performance in our internal tests. Storage Spaces Direct is part of Azure Stack HCI and enables customers to use industry-standard servers with locally attached drives to create high-performance and high-availability storage. Azure Stack HCI enables the IT organization to run virtual machines with cloud services on-premises. Benefits include:
- The capability to consolidate data center applications with software-defined compute, storage, and networking.
- Using virtual machines to drive greater operational efficiencies while accelerating performance with Storage Spaces Direct. Support for Non-Volatile Memory Express (NVMe) drives enables software-defined storage to reach new levels of performance.
- Improved high availability with clustering and distributed software resiliency.
Database and other storage-intensive applications could benefit from the faster NVMe drives. NVMe is an open logical device specification that has been designed for low latency and internal parallelism of solid-state storage devices. The result is a significant boost in storage performance because data can be accessed faster and with less I/O overhead.
In our labs, we created a Storage Spaces Direct performance cluster consisting of four Dell EMC PowerEdge R640 nodes. Each storage node had two Intel 6248 Cascade Lake processors, ten P4510 Intel NVMe drives, and one Mellanox CX5 dual-port 100 GbE adapter. Networking between the nodes consisted of a Dell EMC S5232 switch that supports up to thirty-two 100 GbE ports. Our goal was to drive simplicity in the configuration while showing performance value.
We used Storage Spaces Direct three-way mirroring because this configuration offers the greatest performance and protection. Protection does have a cost in terms of capacity. The capacity efficiency of a three-way mirror is 33 percent, meaning 3 TB equates to 1 TB of usable storage space. The data protection benefit with three-way mirroring is that the storage cluster can safely tolerate at least two hardware problems—for example, the loss of a drive and server at the same time. The following diagram is a simple representation of the four-node performance configuration of the Storage Spaces Direct cluster.
Figure 1: Storage Spaces Direct Cluster with four PowerEdge R640 nodes
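The capacity math for three-way mirroring is simple enough to sketch in a few lines. This is just an illustration of the 33 percent efficiency figure quoted above, not a sizing tool:

```python
def three_way_mirror_usable(raw_tb: float) -> float:
    """Three-way mirroring keeps three copies of every block,
    so usable capacity is one third of raw capacity."""
    return raw_tb / 3

# The example from the text: 3 TB raw equates to 1 TB usable.
print(three_way_mirror_usable(3.0))  # 1.0
```

In exchange for that cost, the cluster can safely tolerate at least two simultaneous hardware failures, such as the loss of a drive and a server at the same time.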
We ran VM Fleet on the storage cluster to test performance, and the results were impressive! Here is the first test configuration:
- Block size: 4 KB
- Thread count: 2
- Outstanding I/O counts: 32
- Write ratio: 0
- Pattern: Random
Thus, this VM Fleet test used 4 KB block sizes, 100 percent reads, and a random-access pattern. This Storage Spaces Direct configuration achieved 2,953,095 IOPS with an average read latency of 242 microseconds. A microsecond is equal to one-millionth of a second. This is the kind of performance that can really accelerate online transaction processing (OLTP) workloads and make enterprise applications highly responsive to the end users.
We also tested a 100 percent random-write workload on the storage cluster. All the VM Fleet configuration settings remained the same, except the write ratio was 100. With 100 percent writes, the storage cluster achieved 818,982 IOPS at an average write latency of 4 milliseconds. We could have been less aggressive in our internal tests and delivered even lower write latency, but the goal was to push the storage cluster in terms of performance. Both these tests were done internally in our Dell EMC labs, and it’s important to note that results will vary.
Figure 2: Summary of internal test findings for 100 percent read and write workloads for IOPS and latency
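As a back-of-the-envelope consistency check on numbers like these, Little's law relates IOPS, latency, and in-flight I/O: average concurrency equals arrival rate times time in system. Applying it to the published results estimates how many I/Os the VM fleet kept outstanding across the cluster (this arithmetic is ours, not part of the original test report):

```python
def concurrency(iops: float, latency_s: float) -> float:
    """Little's law: average I/Os in flight = arrival rate x time in system."""
    return iops * latency_s

read_inflight = concurrency(2_953_095, 242e-6)  # ~715 outstanding I/Os cluster-wide
write_inflight = concurrency(818_982, 4e-3)     # ~3,276 outstanding I/Os cluster-wide
print(round(read_inflight), round(write_inflight))
```

The much higher write-side concurrency reflects the deliberately aggressive queue depth: pushing more I/Os in flight raises latency, which is why the team notes that a gentler test would have shown lower write latency.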
Some applications, such as business intelligence and decision support systems, and some analytical workloads are more dependent on throughput. Throughput is defined by the amount of data that is delivered over a fixed period. The greater the throughput the more data that can be read and the faster the analysis or report. Our labs used the following VM Fleet configuration to test throughput:
- Block size: 512 KB
- Thread count: 2
- Outstanding I/O counts: 2
- Write ratio: 0
- Pattern: Sequential
The throughput test configuration uses larger blocks at 512 KB, 100 percent reads, and a sequential read pattern that is like scanning large datasets. The storage cluster sustained 63 gigabytes per second (GB/s). This throughput could enable faster analytics for the business and provide the capability to make timely decisions.
We also ran the same test with 100 percent writes, which simulates a data load activity such as streaming data from an IoT gateway to an internal database. In this test case, the storage cluster sustained a throughput of 9 GB/s for writes. Both the read and write throughput tests show the strength of this all-NVMe configuration from Dell EMC.
Figure 3: Summary of internal test findings for 100 percent read and write workloads for throughput
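Throughput and IOPS are two views of the same activity: throughput is approximately IOPS times block size. A quick conversion shows what the 512 KB sequential results imply in operations per second (again, just arithmetic on the published figures, using decimal gigabytes):

```python
BLOCK_BYTES = 512 * 1024  # 512 KB test block size

def to_iops(throughput_gb_s: float, block_bytes: int = BLOCK_BYTES) -> float:
    """Convert sequential throughput (decimal GB/s) into I/O operations per second."""
    return throughput_gb_s * 1e9 / block_bytes

print(round(to_iops(63)))  # ~120,000 sequential reads per second
print(round(to_iops(9)))   # ~17,000 sequential writes per second
```

This is why throughput tests use large blocks: far fewer operations are needed to move the same volume of data, which suits scan-heavy analytics and data-load workloads.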
If performance is what you need, then Dell EMC can use NVMe technology to accelerate your applications. But flexibility is another factor that can be equally important. Not every application requires high IOPS and very low latencies. Dell EMC offers an expanded portfolio of Storage Spaces Direct nodes that can meet almost any business requirement. A great resource for reviewing the Dell EMC Storage Spaces Direct options is the Azure Stack HCI certification pages. The following table summarizes all the Dell EMC options but doesn’t contain CPU, RAM, and other details that can be found on the certification pages.
| Hybrid drive configurations (cache + capacity) |
|---|
| Intel Optane SSD cache + SSD |
| NVMe + HDD |
| NVMe (AIC) + HDD |
| SSD + HDD |
Start with a minimal configuration using the R440 Ready Nodes, which can have up to 44 cores, 1 TB of RAM, and 19.2 TB of storage. Or go big with the R740xd2 hybrid with up to 44 cores, 384 GB of RAM, and 240 TB of storage capacity. The range of options provides you with the flexibility to configure a Storage Spaces Direct solution to meet your business needs.
The Dell EMC Ready Nodes have been configured to work with Windows Server 2019, so they are future-ready. For example, the Ready Nodes integrate with Windows Admin Center, so you can tier storage, implement resiliency, provision VMs and storage, configure networking, and monitor health and performance, all with just a few clicks. With your Windows Server 2019 Datacenter licenses, no separate hypervisor license is needed for VMs. You can create unlimited VMs, achieve high-availability clusters, and secure your tenants or applications with shielded VMs.
Dell EMC Storage Spaces Direct nodes have been designed to make storage in your Azure Stack HCI easy. If you are interested in learning more, see Dell EMC Cloud for Microsoft Azure Stack HCI and contact a Dell EMC expert.
Database security methodologies of SQL Server
Mon, 01 Jun 2020 23:35:25 -0000
In general, security touches every aspect and activity of an information system. The subject of security is vast, and we need to understand that security can never be perfect. Every organization has a unique way of dealing with security based on its requirements. In this blog, I describe database security models and briefly review SQL Server security principles.
A few definitions:
- Database: A collection of information stored in a computer
- Security: Freedom from danger
- Database security: The mechanism that protects the database against intentional or accidental threats or that protects it against malicious attempts to steal (view) or modify data
Database security models
Today’s organizations rely on database systems as the key data management technology for a large variety of tasks ranging from regular business operations to critical decision making. The information in the databases is used, shared, and accessed by various users. It needs to be protected and managed because any changes to the database can affect it or other databases.
The main role of a security system is to preserve integrity of an operational system by enforcing a security policy that is defined by a security model. These security models are the basic theoretical tools to start with when developing a security system.
Database security models include the following elements:
- Subject: Individual who performs some activity on the database
- Object: Database unit that requires authorization before it can be manipulated
- Access mode/action: Any activity that might be performed on an object by a subject
- Authorization: Specification of access modes for each subject on each object
- Administrative rights: Who has rights in system administration and what responsibilities administrators have
- Policies: Enterprise-wide accepted security rules
- Constraint: A more specific rule regarding an aspect of an object and action
Database security approaches
A typical DBMS supports three basic approaches to data security—discretionary control, mandatory control, and role-based access control.
Discretionary control: A given user typically has different access rights, also known as privileges, for different objects. For discretionary access control, we need a language to support the definition of rights—for example, SQL.
Mandatory control: Each data object is labeled with a certain classification level, and a given object can be accessed only by a user with a sufficient clearance level. Mandatory access control is applicable to the databases in which data has a rather static or rigid classification structure—for example, military or government environments.
In both discretionary and mandatory control cases, the unit of data and the data object to be protected can range from the entire database to a single, specific tuple.
Role-based access control (RBAC): Permissions are associated with roles, and users are made members of appropriate roles. However, a role brings together a set of users on one side and a set of permissions on the other, whereas user groups are typically defined as a set of users only.
Role-based security provides the flexibility to define permissions at a high level of granularity in Microsoft SQL, thus greatly reducing the attack surface area of the database system.
RBAC mechanisms are a flexible alternative to mandatory access control (MAC) and discretionary access control (DAC). The core RBAC elements are:
- Objects: Any system resource, such as a file, printer, terminal, or database record
- Operations: An executable image of a program, which upon invocation performs some function for the user
- Permissions: An approval to perform an operation on one or more RBAC-protected objects
- Roles: A job function within the context of an organization, with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role
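The relationships among these elements can be sketched in a few lines: permissions attach to roles, and users gain permissions only through role membership. The role names below echo SQL Server's fixed database roles, but the tables, users, and grants are invented for illustration:

```python
# Permissions are granted to roles, never directly to users.
role_permissions = {
    "db_datareader": {("Orders", "SELECT"), ("Customers", "SELECT")},
    "db_datawriter": {("Orders", "INSERT"), ("Orders", "UPDATE")},
}
# Users acquire permissions only via role membership.
user_roles = {
    "alice": {"db_datareader"},
    "bob": {"db_datareader", "db_datawriter"},
}

def can(user: str, obj: str, op: str) -> bool:
    """Allow an operation only if some role assigned to the user grants it."""
    return any((obj, op) in role_permissions[r] for r in user_roles.get(user, ()))

print(can("alice", "Orders", "SELECT"))  # True
print(can("alice", "Orders", "INSERT"))  # False: alice holds no writer role
```

Revoking access becomes a single membership change rather than a sweep over per-user grants, which is what makes RBAC administration scale.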
For more information, see Database Security Models — A Case Study.
Note: Access control mechanisms regulate who can access which data. The need for such mechanisms can be concluded from the variety of actors that work with a database system—for example, DBA, application admin and programmer, and users. Based on actor characteristics, access control mechanisms can be divided into three categories – DAC, RBAC, and MAC.
Principles of SQL Server security
A SQL Server instance contains a hierarchical collection of entities, starting with the server. Each server contains multiple databases, and each database contains a collection of securable objects. Every SQL Server securable has associated permissions that can be granted to a principal, which is an individual, group, or process granted access to SQL Server.
For each security principal, you can grant rights that allow that principal to access or modify a set of the securables, which are the objects that make up the database and server environment. They can include anything from functions to database users to endpoints. SQL Server scopes the objects hierarchically at the server, database, and schema levels:
- Server-level securables include databases as well as objects such as logins, server roles, and availability groups.
- Database-level securables include schemas as well as objects such as database users, database roles, and full-text catalogs.
- Schema-level securables include objects such as tables, views, functions, and stored procedures.
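One way to picture the three scopes is as nested containers, with each kind of securable living at its own level. The sketch below models that nesting; every login, database, and object name is invented for illustration:

```python
# Nested scopes: server -> database -> schema -> object.
server = {
    "logins": ["sa", "CONTOSO\\svc_app"],           # server-level securables
    "databases": {
        "Sales": {
            "users": ["app_user"],                  # database-level securables
            "schemas": {
                "dbo": ["Orders", "vw_Revenue"],    # schema-level securables
            },
        },
    },
}

def schema_objects(srv: dict, db: str, schema: str) -> list:
    """Walk the hierarchy down to the securables in one database schema."""
    return srv["databases"][db]["schemas"][schema]

print(schema_objects(server, "Sales", "dbo"))  # ['Orders', 'vw_Revenue']
```

Because the scopes nest, a permission granted at an outer scope (for example, on a schema) implicitly covers the securables it contains, which is how SQL Server keeps grant lists manageable.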
SQL Server authentication approaches include:
- Authentication: Authentication is the SQL Server login process by which a principal requests access by submitting credentials that the server evaluates. Authentication establishes the identity of the user or process being authenticated. SQL Server authentication helps ensure that only authorized users with valid credentials can access the database server. SQL Server supports two authentication modes, Windows authentication mode and mixed mode.
- Windows authentication is often referred to as integrated security because this SQL Server security model is tightly integrated with Windows.
- Mixed mode supports authentication both by Windows and by SQL Server, using usernames and passwords.
- Authorization: Authorization is the process of determining which securable resources a principal can access and which operations are allowed for those resources. Microsoft SQL-based technologies support this principle by providing mechanisms to define granular object-level permissions and simplify the process by implementing role-based security. Granting permissions to roles rather than users simplifies security administration.
- It is a best practice to use server-level roles for managing server-level access and security, and database roles for managing database-level access.
Here are a few additional recommended best practices for SQL Server authentication:
- Use Windows authentication.
  - Enables centralized management of SQL Server principals via Active Directory
  - Uses the Kerberos security protocol to authenticate users
  - Supports integrated password policy enforcement, including complexity validation for strong passwords, password expiration, and account lockout
- Use separate accounts to authenticate users and applications.
  - Enables limiting the permissions granted to users and applications
  - Reduces the risk of malicious activity such as SQL injection attacks
- Use contained database users.
  - Isolates the user or application account to a single database
  - Improves performance, as contained database users authenticate directly to the database without an extra network hop to the master database
  - Supports both SQL Server and Azure SQL Database, as well as Azure SQL Data Warehouse
Database security is an important goal of any data management system. Each organization should have a data security policy, which is a set of high-level guidelines determined by:
- User requirements
- Environmental aspects
- Internal regulations
- Government laws
Database security is based on three important constructs—confidentiality, integrity, and availability. The goal of database security is to protect your critical and confidential data from unauthorized access.