Blogs

Short articles related to Dell Technologies solutions for Microsoft SQL Server


SQL Server 2022 is here! Let’s Discuss

Bryan Walsh

Wed, 16 Nov 2022 17:14:16 -0000


Today at the PASS Community Data Summit, Microsoft announced the general availability of SQL Server 2022. Over the past year, Dell Technologies has been working closely with Microsoft to make sure that when this day arrived, our joint customers would be ready to rapidly adopt the latest release and be able to deploy and manage it with confidence, based on documented testing and best practices.


Schedule

Dell Technologies is a gold sponsor at PASS 2022 and will be engaging with both in-person and virtual conference attendees in the following ways:

In-person sessions

Wednesday, 9:30 – 10:45 AM: SQL Server 2022 – A Year in (P)review, in room 604

Thursday, 6:45 – 7:45 AM: Deploying and Running SQL Server 2022 on Azure Stack HCI, in rooms 602-604

On demand session

Rethink your Backup and Recovery Strategy with SQL Server 2022 available in catalog

Birds of a Feather

Wednesday, 12:30 – 2:30 PM: Hybrid Cloud Data Services, in Dining Hall 4EF

Thursday, 12:30 – 2:30 PM: SQL Server 2022: A Dell Perspective, in Dining Hall 4EF

In addition to these sessions, we are excited to announce a wide range of available assets today to assist you with getting the most value out of your Microsoft data platform.

New Feature: T-SQL Snapshot Backups

Taking snapshots of databases isn't new technology; however, the ability to use them in a supported way on Microsoft SQL Server through T-SQL is very exciting. Storage-based snapshots are not meant to replace all traditional database backups. Off-appliance and/or offsite backups are still a best practice for full data protection. However, most backup and restore activities do not require off-appliance or offsite backups, and this is where the time and space efficiencies come in. Storage-based snapshots accelerate the majority of backup and recovery scenarios without affecting traditional database backups.
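As a rough sketch of the new T-SQL surface (the database name, backup path, and snapshot step below are hypothetical placeholders, not steps from the linked papers), the workflow looks like this:

```sql
-- 1. Suspend write I/O so the storage array can take a crash-consistent snapshot.
ALTER DATABASE SalesDB SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

-- 2. Take the snapshot with the storage platform's own tooling
--    (for example, PowerStore, PowerMax, or PowerFlex snapshots).

-- 3. Record the backup metadata; this also releases the I/O suspension.
BACKUP DATABASE SalesDB
    TO DISK = 'C:\Backups\SalesDB.bkm'
    WITH METADATA_ONLY, FORMAT;
```

The metadata-only backup file is tiny, so the time and space cost of a "full backup" collapses to the time it takes the array to snapshot the volumes.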

In the links below, we show how to leverage this feature on our PowerStore, PowerMax, and PowerFlex storage platforms. This is a very effective tool moving forward for Windows, Linux, and even containerized environments.

SQL Server 2022 – Time to Rethink your Backup and Recovery Strategy | Dell Technologies Info Hub

SQL Server 2022 Backup Using T-SQL and Dell PowerFlex Storage Snapshots | Dell Technologies Info Hub

SQL Server 2022 T-SQL Snapshot Backup – Yaron Dar's blog: storage, databases, and what brings them together (wordpress.com)

SQL Server 2022 Data Analytics with Dell PowerEdge and Dell ECS

Data virtualization has become popular among large enterprises because unstructured and semi-structured data is everywhere and leveraging this data is challenging. SQL Server 2022 PolyBase makes data virtualization possible, allowing data scientists to use T-SQL for analytic workloads by querying data directly from S3-compatible object storage without separately installing client connection software.
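As a minimal sketch of that T-SQL surface (the endpoint, bucket path, and credential values are illustrative assumptions, not details from the white paper), querying Parquet files on an S3-compatible ECS bucket looks roughly like this:

```sql
-- One-time setup (assumes PolyBase is installed and a database master key exists).
CREATE DATABASE SCOPED CREDENTIAL EcsS3Credential
    WITH IDENTITY = 'S3 Access Key',
         SECRET   = '<access_key_id>:<secret_key>';

CREATE EXTERNAL DATA SOURCE EcsS3
    WITH (LOCATION = 's3://ecs.example.local:9021', CREDENTIAL = EcsS3Credential);

-- Ad hoc T-SQL over Parquet objects in the bucket, with no separate client software.
SELECT TOP (100) *
FROM OPENROWSET(
         BULK '/datalake/sales/*.parquet',
         FORMAT = 'PARQUET',
         DATA_SOURCE = 'EcsS3') AS sales;
```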

Dell ECS is a modern object storage platform designed for both traditional and next-generation workloads and it provides organizations with an on-premises alternative to public cloud solutions. Dell Technologies has been named a Leader in the 2022 Gartner® Magic Quadrant™ for Distributed File Systems and Object Storage1 for the seventh year in a row – a Leader every year since the commencement of this report. According to the Gartner report, Dell Technologies has also once again received the highest overall position for its Ability to Execute in the Leaders quadrant of the report.1

As organizations, both large and small, seek to gain an edge with their intelligent data estate, access to all types of datasets must be made available. SQL Server 2022 and Dell ECS are the preferred technologies for querying the data lake using a T-SQL surface area. This combination of products and tools yields modern opportunities to store and manage different types of data on-premises and at public cloud scale. This white paper provides insight into the benefits of a powerful, agile, and flexible infrastructure for SQL Server 2022 data analytics.

Solution Insight: SQL Server 2022 Data Analytics on Dell PowerEdge with AMD EPYC 7473X Processors and Dell ECS | Dell Technologies Info Hub

New Feature: Backing up SQL Server to Dell ECS

In the paper above, we leveraged the same environment to showcase the new SQL Server 2022 capability to back up and restore databases using object storage with Dell ECS. One of the benefits of object storage is the ability to move larger read-only tables outside of the SQL database. This reduces the footprint of the database and decreases the time it takes to back up and restore.

In this document, we provide the configuration steps for creating the required credential inside the SQL Server instance for connecting to ECS, as well as the correct syntax for backing up and restoring a database.
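The hypothetical snippet below sketches that pattern; the endpoint, bucket, and database names are placeholders, and the linked document has the validated steps:

```sql
-- The credential name must match the S3 URL (endpoint plus bucket) used in BACKUP/RESTORE.
CREATE CREDENTIAL [s3://ecs.example.local:9021/sqlbackups]
    WITH IDENTITY = 'S3 Access Key',
         SECRET   = '<access_key_id>:<secret_key>';

-- Back up directly to the ECS bucket over the S3 REST API.
BACKUP DATABASE SalesDB
    TO URL = 's3://ecs.example.local:9021/sqlbackups/SalesDB.bak'
    WITH FORMAT, COMPRESSION, STATS = 10;

-- Restore from the same object storage location.
RESTORE DATABASE SalesDB
    FROM URL = 's3://ecs.example.local:9021/sqlbackups/SalesDB.bak'
    WITH REPLACE, STATS = 10;
```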

SQL Server 2022 on Dell Integrated System (DIS) for Azure Stack HCI

Dell Azure Stack HCI provides a flexible, highly available, and cost-effective platform to host Online Transaction Processing (OLTP) workloads such as SQL Server 2022. In this reference architecture, we evaluated performance results and design best practices for a DIS for Azure Stack HCI cluster based on AMD EPYC™ 7003 Series Processors with 3D V-Cache. This cost-effective solution offers strong performance results, a low data center footprint, and a highly flexible configuration in terms of compute and storage.

For those of you joining us in person at PASS, we will be jointly hosting a breakfast session with AMD to present the reference architecture, design principles, and configuration best practices for a SQL Server 2022 solution on Windows Server 2022 and Azure Stack HCI – AX7525.

Join Dell Technologies and AMD for breakfast here.

Reference Architecture Guide—Implementing SQL Server 2022 on Dell Integrated System for Azure Stack HCI | Dell Technologies Info Hub

azure-stack-hci-and-the-microsoft-data-platform.pdf (delltechnologies.com)

Take the next steps

Dell Technologies Solutions for Microsoft Data Platform | Dell USA

Dell Technologies Solutions for Microsoft Azure Arc | Dell USA               

PASS Data Community Summit November 15-18 2022

SQL Server 2022 | Microsoft


1Gartner, Inc. “Magic Quadrant™ for Distributed File Systems and Object Storage” by Julia Palmer, Jerry Rozeman, Chandra Mukhyala, Jeff Vogel, October 19, 2022. Dell Technologies was previously recognized as Dell EMC in the Magic Quadrant (2016-2019).


Read Full Blog
  • SQL Server
  • AMD

Joint engineering with AMD for SQL Server 2022

Bryan Walsh, Vaani Kaur, Dilip Ramachandran

Mon, 14 Nov 2022 13:49:15 -0000


In preparation for the PASS Data Community Summit, Dell Technologies has been heads down in our engineering labs testing some of the new features of SQL Server 2022. In this blog, we highlight a number of use cases leveraging AMD EPYC™ 7003 Series Processors with 3D V-Cache. 3D V-Cache processors use AMD's ground-breaking 3D chiplet architecture with up to 768 MB of L3 cache per socket, while providing socket compatibility with existing AMD EPYC™ 7003 platforms. AMD EPYC is optimized for performance with up to 64 cores and 4 TB of memory per CPU.

SQL Server 2022 is the most Azure-enabled release of SQL Server, with continued innovation across performance, security, and availability. SQL Server 2022 is part of the Microsoft Intelligent Data Platform, which unifies operational databases, analytics, and data governance.  The reference architecture below shows how an Azure Arc-enabled Azure Stack HCI platform helps to consolidate virtualized SQL Server workloads and get up and running quickly on SQL Server 2022.  

SQL Server 2022 on Azure Stack HCI

Dell Azure Stack HCI provides a flexible, highly available, and cost-effective platform to host Online Transaction Processing (OLTP) workloads such as SQL Server 2022. This document provides customers with best practice recommendations on how to deploy SQL Server 2022 on a Dell Integrated System (DIS) for Azure Stack HCI. These best practices consider both performance and high availability.

We also evaluated some performance results and design best practices for a cluster of AMD EPYC™ 7003 Series Processors with 3D V-Cache based DIS for Azure Stack HCI. This cost-effective solution offers strong performance results, a low data center footprint, and a highly flexible configuration in terms of compute and storage.

Only Azure Stack HCI from Dell Technologies leverages Dell OpenManage Integration for Windows Admin Center to orchestrate full stack lifecycle management, enabling complex tasks to be completed in a fraction of the time.

Read the full paper here.

If you are joining Dell Technologies and AMD at the PASS Data Community Summit (November 15-18), please join us for a breakfast session to get all the details on this solution and ask the experts any questions you may have.

Attending the PASS Community Data Summit? Register for the breakfast here.

SQL Server 2022 Data Analytics with Dell PowerEdge and Dell ECS

Data virtualization has become popular among large enterprises because unstructured and semi-structured data is everywhere and using this data is challenging. SQL Server 2022 PolyBase makes data virtualization possible, allowing data scientists to use T-SQL for analytic workloads by querying data directly from S3-compatible object storage without separately installing client connection software.

Data analytics workloads can be CPU intensive, and selecting the optimal CPUs for data analytics servers can be challenging, time consuming, and expensive. Because running T-SQL queries for data analytics requires quick response times, as in the previous document, here we leveraged the new Milan-X AMD EPYC processor with 3D V-Cache technology. AMD 3D V-Cache technology is the first implementation of the AMD 3D chiplet architecture. The 3D V-Cache product offers three times the L3 cache of standard 3rd Gen EPYC processors and keeps memory-intensive compute closer to the cores, speeding up performance for database and analytics workloads. AMD 3rd Gen EPYC™ processors with 3D V-Cache help customers optimize core usage, license costs, and total cost of ownership.

ECS, a modern object storage platform designed for both traditional and next-generation workloads, provides organizations with an on-premises alternative to public cloud solutions. Dell Technologies has been named a Leader in the 2022 Gartner® Magic Quadrant™ for Distributed File Systems and Object Storage for the seventh year in a row – a Leader every year since the commencement of this report. According to the Gartner report, Dell Technologies has also once again received the highest overall position for its Ability to Execute in the Leaders quadrant of the report.

As organizations, both large and small, seek to gain an edge with their intelligent data estate, access to all types of datasets must be made available. SQL Server 2022 and Dell ECS are the preferred technologies for querying the data lake using a T-SQL surface area. This combination of products and tools yields modern opportunities to store and manage different types of data on-premises and at public cloud scale.

Read the full paper here.

SQL Server 2022 Backup & Restore with Dell ECS S3 Object Storage

In the paper above, we leveraged the same environment to showcase the new SQL Server 2022 capability to back up and restore databases using object storage with Dell ECS. One of the benefits of object storage is the ability to move larger read-only tables outside of the SQL database. This reduces the footprint of the database and decreases the time it takes to back up and restore.

In this document, we provide the configuration steps for creating the required credential inside the SQL Server instance for connecting to ECS, as well as the correct syntax for backing up and restoring a database.

Take the next step…

AMD EPYC™ Processors | AMD

Microsoft Azure Stack HCI | Dell USA

Dell Technologies Solutions for Microsoft Data Platform | Dell USA

Dell Technologies Solutions for Microsoft Azure Arc | Dell USA

Read Full Blog
  • SQL Server
  • PowerStore
  • best practices

SQL Server deployments–Have you tried this?

Tom Dau

Tue, 27 Sep 2022 19:11:24 -0000


SQL Server databases are critical components of most business operations and initiatives. As these systems become more intelligent and complex, maintaining optimal SQL Server database performance and uptime can pose significant challenges to IT—and often have severe implications for the business.

What is SQL Server best practice?

Best practices for a SQL Server database solution provide a comprehensive set of recommendations for both the physical infrastructure and the software stack. This set of recommendations is derived from many hours of testing and from the expertise of the Dell Server team, the Dell Storage team, and the Dell Solutions and Engineering SQL Server specialists.

Why use SQL Server best practice?

Business-critical applications require an optimized infrastructure to run smoothly and efficiently. An optimized infrastructure prevents performance risks, such as system sluggishness, that could affect system resources and application response time. Such unexpected outcomes can often result in revenue loss, customer dissatisfaction, and damage to brand reputation.

The mission around best practices

Dell’s mission is to ensure that its customers have a robust and high-performance database infrastructure solution by providing best practices for SQL Server 2019 running on PowerEdge R750xs servers and PowerStore T model storage including the new PowerStore 3.0. These best practices aim to offer time savings for our customers by reducing the complex work required to optimize their databases. To enhance the value of best practices, we identify which configuration changes produce the greatest results and categorize them as follows:

Day 1 through Day 3: Most enterprises implement changes based on the delivery cycle:

  • Day 1: Indicates configuration changes that are part of provisioning a database. The business has defined these best practices as an essential part of delivering a database.
  • Day 2: Indicates configuration changes that are applied after the database has been delivered to the customer.  These best practices address optimization steps to further improve system performance.
  • Day 3: Indicates configuration changes that provide small incremental improvements in the database performance.

Highly recommended, moderately recommended, and fine-tuning recommendations: Customers want to understand the impact of the best practices, and these terms indicate the value of each best practice.

  • Highly recommended: Indicates best practices that provided the greatest performance in our tests.
  • Moderately recommended: Indicates best practices that provide modest performance improvements, but which are not as substantial as the highly recommended best practices.
  • Fine-tuning: Indicates best practices that provide small incremental improvements in database performance.

Best practices test methodology for Intel-based PowerEdge and PowerStore deployments

Within each layer of the infrastructure, the team sequentially tested each component and documented the results. For example, within the storage layer, the goal was to show how optimizing the number of volumes for the SQL user database data area improves the performance of a SQL Server database.

The expectation was that performance would sequentially improve. Using this methodology, an overall optimal SQL Server database solution would be achieved during the last test.
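As a rough illustration of the kind of change exercised in the storage-layer tests, the sketch below adds a second data file on a separate volume so SQL Server spreads I/O across both; the database name, mount point, and sizes are hypothetical and not taken from the test configuration.

```sql
-- Hypothetical example: add a second data file on a separate PowerStore-backed volume.
-- SQL Server's proportional-fill algorithm then spreads page allocations (and the
-- resulting reads and writes) across both files in the filegroup.
ALTER DATABASE SalesDB
ADD FILE (
    NAME       = N'SalesDB_data2',
    FILENAME   = N'/var/opt/mssql/data2/SalesDB_data2.ndf',  -- mount point of the second volume
    SIZE       = 100GB,
    FILEGROWTH = 10GB
) TO FILEGROUP [PRIMARY];
```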

The physical architecture consists of:

  • 2 x PowerEdge R750xs servers
  • 1 x PowerStore T model array

Table 1 and Table 2 show the server configuration and the PowerStore T model configuration.

Table 1.               Server configuration

Processors

2 x Intel® Xeon® Gold 6338 32 core CPU @2.00GHz

Memory

16 x 64 GB 3200MT/s memory, total of 1 TB

Network Adapters

Embedded NIC: 1 x Broadcom BCM5720 1 GbE DP Ethernet

Integrated NIC1: 1 x Broadcom Adv. Dual port 25 Gb Ethernet

NIC slot 5: 1 x Mellanox ConnectX-5-EN 25 GbE Dual port

HBA

2 x Emulex LP35002 32 Gb Dual Port Fibre Channel

Table 2.               PowerStore 5000T configuration details

Processors

2 x Intel® Xeon® Gold 6130 CPU @ 2.10 GHz per Node

 

Cache size

4 x 8.5 GB NVMe NVRAM

Drives

21 x 1.92 TB NVMe SSD 

Total usable capacity

28.3 TB

Front-end I/O modules

2 x Four-Port 32 Gb FC

The software layer consists of:

  • VMware ESXi 7.0.3
  • Red Hat Enterprise Linux 8.5
  • SQL Server 2019 CU 16-15.0.4223.1

Several combinations are possible for the software architecture. For this testing, SQL Server 2019, Red Hat Enterprise Linux 8.5, and VMware vSphere 7.0.3 were selected to provide a design that reflects what many database customers use today.

Benchmark tool

HammerDB is a leading benchmarking tool that is used with databases such as Oracle, MySQL, Microsoft SQL Server, and others. Dell's engineering team used HammerDB to generate an Online Transaction Processing (OLTP) workload to simulate enterprise applications. To compare the benchmark results between the baseline configuration and the best practice configuration, there must be a significant load on the SQL Server database infrastructure to ensure that the system is sufficiently taxed. This method of testing ensures that the infrastructure resources are optimized after applying best practices. Table 3 shows the HammerDB workload configuration.

Table 3.               HammerDB workload configuration

Setting name

Value

Total transactions per user

1,000,000

Number of warehouses

5,000

Number of virtual users

80

Minutes of ramp up time

10

Minutes of test duration

50

Use all warehouses

Yes

User delay (ms)

500

Repeat delay (ms)

500

Iterations

1

 

New Orders per Minute (NOPM) and Transactions per Minute (TPM) provide metrics to interpret the HammerDB results. These metrics are from the TPC-C benchmark and indicate the result of a test. During our best practice validation, we compared these metrics against the baseline configuration to ensure that there was an increase in performance.

Findings

After performing various test cases against the baseline configuration and the best practice configuration, our results showed an improvement over the baseline configuration. The following graphs are derived from the database virtual machine configuration in the following table.

Note: Every database workload and system is different, which means actual results of these best practices may vary from system to system.

Table 4.            vCPU and memory allocation

Resource Reservation

Baseline configuration per virtual machine

Number of SQL Server database virtual machines

Total

vCPU

10 cores

6

60 cores

Memory

112 GB

6

672 GB

 


Read Full Blog
  • SQL Server
  • PowerEdge
  • Benchmark

The Dell PowerEdge R7525: A Leader in Price and Performance for SQL Server 2019

Sam Lucido

Mon, 07 Feb 2022 21:44:20 -0000



The Microsoft Server team at Dell Technologies is excited to announce the recently published decision support workload benchmark (TPC-H) that has the Dell PowerEdge R7525 as the price/performance leader, according to the 10,000 GB results published on December 15, 2021. The Transaction Processing Performance Council (TPC) provides the most trusted source of independently audited database performance benchmarks. These TPC benchmarks provide a way for customers to compare the performance of servers using different-sized workloads.

The Dell PowerEdge R7525 is a two-socket server that uses AMD EPYC processors and supports up to 4 TB of memory, making it a strong choice for database workloads. The AMD EPYC 73F3 processor has 16 CPU cores and a base clock speed of 3.5 GHz that can boost up to 4.0 GHz. The 3.5 GHz base clock speed enables quick data processing, which enhances database value for customers.

We configured the Dell PowerEdge R7525 with two AMD EPYC 73F3 processors for a total of 32 physical cores and, with hyperthreading, 64 threads. The server was configured with the maximum amount of memory (4 TB) using DDR4-3200 DRAM in a 32 x 128 GiB configuration. The server storage configuration included ten 1.92 TB SSD drives and eight 6.4 TB enterprise NVMe drives. For the complete PowerEdge R7525 configuration, see the TPC Benchmark H Full Disclosure Report.

We used Microsoft SQL Server 2019 Enterprise Edition and Red Hat Enterprise Linux 8.3 for the TPC-H workload. The decision support workload (TPC-H) is designed to examine large volumes of data, execute queries with a high degree of complexity, and provide answers to critical business questions. The key performance metric for decision support benchmarks is the Composite Query-per-Hour rating (QphH@Size). The PowerEdge R7525 achieved a 960,382 QphH@Size rating with a 10,000 GB database size. To determine the price/performance metric, the total system cost of $379,133 USD was divided by the queries per hour. The price/performance result of $394.78/kQphH@10,000GB placed the PowerEdge R7525 as a leader in this category.*

The Microsoft team at Dell Technologies has recently published best practices for SQL Server, many of which were used in this independently audited TPC-H benchmark. To review these best practices, which provide insights into how organizations can optimize their SQL Server environments, see this link: AMD-based SQL Server Best Practices.

The SQL Server Best Practices program includes the following: 

The solutions page for Microsoft SQL Server provides an overview of how Dell Technologies and Microsoft SQL Server can enable your company with modern infrastructure and agile operations. To learn more, see Dell Technologies Solutions for Microsoft Data Platform

For those interested in harnessing the performance of the AMD-based PowerEdge R7525 server, see the PowerEdge R7525 rack server web page. It provides an overview and technical specifications of the PowerEdge R7525 server.

Dell Technologies offers a portfolio of other rack, tower, and modular servers that can be configured to accelerate almost any business workload. To learn more about these options, see Dell Technologies PowerEdge Server Solutions.

 

* Based on TPC Benchmark H (TPC-H), December 15, 2021, the Dell EMC PowerEdge R7525 rack server has a TPC-H Composite Query-per-Hour Performance Metric of 960,382 and a price/kQphH metric of 394.79 USD when run against a 10,000 GB Microsoft SQL Server 2019 database and Red Hat Enterprise Linux 8.3 in a non-clustered environment. Actual results may vary based on operating environment. Full results are available at tpc.org.

     

Read Full Blog
  • SQL Server
  • PowerEdge

Dell PowerEdge R7525 Server – AMD Performance for Microsoft SQL Server workloads running on RHEL 8.3

James Martin

Wed, 19 Jan 2022 21:30:10 -0000



 

 

 

The Dell EMC PowerEdge R7525 is a highly scalable two-socket 2U rack server that delivers powerful performance and flexible configuration, ideal for data analytics workloads. It features 2nd Gen AMD EPYC 7002 series processors with up to 24 NVMe drives that provide a unique combination of non-oversubscribed NVMe storage and plenty of peripheral options to support applications that require maximum performance.

For this benchmark, the PowerEdge R7525 was subjected to an intense workload over 8 consecutive hours, which included necessary operations such as backups. The R7525 emerged with an impressive 1,542,560 QphH@3,000GB TPC Benchmark H (TPC-H) performance rating.

Based upon the results published July 3, 2021 on tpc.org, the organization behind the TPC decision support benchmarks, the Dell PowerEdge R7525 server demonstrated a TPC-H Composite Query-per-Hour metric of 1,542,560 when run against a 3,000 GB database, yielding a TPC-H Price/Performance rating of $327.38 per kQphH. This performance was realized with Microsoft SQL Server 2019 Enterprise Edition (64-bit) running on Red Hat Enterprise Linux Server release 8.3.

Based on TPC-H V3 benchmarks in the 3,000 Scale Factor range, this system has demonstrated an outstanding price-to-performance ratio. You can view the tpc.org results here.

 

Performance Measurement

 

Performance

1,542,560 QphH@3,000GB

Price Performance 

$327.38 USD / kQphH@3,000GB

System Information

 

Processors

2x AMD EPYC MILAN 75F3, 2.95GHz (2 Proc / 64 Cores / 128 Threads)

Memory

2048 GB (16x 128GB LRDIMM,3200MT/s, Quad Rank)

Storage

BOSS controller card with 2 x 480 GB M.2 sticks (RAID 1), LP

 

4 x 800 GB SSD SAS Mixed Use 12 Gbps 512e 2.5in AG drive, 3 DWPD

 

3 x 960 GB SSD SAS Read Intensive 12 Gbps 512 2.5in AG drive, 1 DWPD

 

8 x Dell 1.6 TB NVMe Mixed Use Express Flash 2.5in SFF drive, U.2, P4610


Why TPC Benchmarks matter

TPC-H is a decision support benchmark recorded at tpc.org. This benchmark consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have broad industry-wide relevance while remaining easy to implement. This benchmark illustrates decision support systems that:

  • Examine large volumes of data 
  • Execute queries with a high degree of complexity 
  • Give answers to critical business questions

Microsoft's SQL Server is an enterprise-class database platform that more and more companies are using to store critical and sensitive data. SQL Server is an important cog in Microsoft's .NET enterprise server architecture. It's easy to use and takes fewer resources to maintain and performance tune.

The PowerEdge R7525 features 3rd Gen AMD EPYC series processors with Dell NVMe drives that provide a unique combination of non-oversubscribed NVMe storage along with plenty of peripheral options to support applications that require maximum performance.

This measurement is beneficial to a wide range of businesses that use Microsoft's SQL Server as their enterprise-class database platform.

To find resources for a system that is the right fit for your SQL Server environment, see the Dell Technologies SQL Server support page.

Additional Dell Technologies Validated Designs to fit your current and future SQL requirements:

  • Consolidated Mixed Workloads
  • Re-platform Microsoft SQL to Linux
  • Run Microsoft SQL on Containers
  • Deploy Microsoft SQL 2019 Big Data Clusters

To explore these options, visit the Dell Technologies Solutions for Microsoft Data Platform. 
 


Read Full Blog
  • SQL Server
  • PowerEdge
  • Microsoft

SQL Server Modernization: 7X Performance Gains, 5.25X Faster Rebuilds

Christina Perfetto

Tue, 26 Oct 2021 09:51:41 -0000


Upgrading to the latest server generation significantly improves performance and data protection

When it comes time to modernize with Microsoft® SQL Server® 2019 to take advantage of its significant advancements for extracting real-time insights from data-intensive workloads, it makes sense to consider an upgrade to the underlying hardware as well.

The latest generation of Dell EMC PowerEdge servers can help SQL Server transform raw data into actionable insights with better performance and data protection. These servers are highly adaptable for seamless scaling and equipped to meet the demands of real-time analytics, artificial intelligence (AI) and machine learning (ML). 

To help you put some numbers around your infrastructure decision, Dell Technologies recently asked a third-party consulting firm, Prowess Consulting, to test SQL Server performance on the latest generation of Dell EMC PowerEdge servers, compared to the previous generation.  

The testing demonstrates that the latest server generation, with RAID storage based on the latest NVMe™ drives, can significantly improve SQL Server 2019 performance, processing more than 7X more new orders per minute (NOPM).[1] In addition, RAID array rebuild times were up to 5.25X faster on the newer platform, enabling significantly faster recovery and less downtime.1

 

These remarkable improvements in performance and data protection are driven by a variety of new technologies incorporated in PowerEdge servers: 

  • Latest processors with higher core counts: Compared to the previous generation, 3rd Generation Intel® Xeon® Scalable processors are built on a more efficient architecture that increases core performance, memory, and I/O bandwidth and provides additional memory channels to accelerate workloads. In addition, they support more cores and sockets to further enhance performance and throughput.
  • More and faster memory: The 3rd Generation Intel Xeon Scalable processors offer more cores and support more memory modules (DIMMs) at the same price, for up to 1.60X higher memory bandwidth1 and up to 2.66X higher memory capacity.1
  • NVMe™ drives with PCIe® Gen4: Compared to the SATA RAID drives with PCIe Gen3 used previously, the latest generation of NVMe solid-state drives paired with PCIe Gen4 interfaces doubles server throughput.1
  • Revolutionary RAID controller technology: The new Dell PERC H755N front NVMe adapter is based on the Broadcom® SAS3916 PCIe to SAS/SATA/PCIe RAID on Chip (RoC) controller. These are the first RAID controllers from Dell Technologies to offer both PCIe Gen4 host and PCIe Gen4 storage interfaces, which deliver double the bandwidth and 75% more IOPS compared to previous generations.1
  • Ethernet controllers: The Broadcom NetXtreme® E-Series P425G 4x 25G PCIe NIC combines a high-bandwidth Ethernet controller with a unique set of highly optimized hardware-acceleration engines to enhance network performance and improve server efficiency for enterprise and cloud-scale networking and storage applications, including AI, ML, and data analytics.

Taken together, these modern features deliver a significant SQL Server performance boost with higher-capacity storage and faster database rebuild times. They also greatly increase IT efficiency, a topic we’ll look at in more detail in the next blog.

Learn more

Read the report: Can Newer Dell EMC Servers Offer Significantly Better Performance for Microsoft SQL Server?

Visit: DellTechnologies.com/Microsoft-Data-Platform 

Visit: DellTechnologies.com/PowerEdge

[1] Dell EMC R750 server compared with similarly-configured Dell EMC PowerEdge R740xd server. Source: Prowess white paper, sponsored by Dell Technologies, “Can Newer Dell EMC Servers Offer Significantly Better Performance for Microsoft SQL Server?” August 2021. Actual results may vary.

Read Full Blog
  • SQL Server
  • PowerEdge
  • AMD
  • Benchmark
  • TPC-H

Microsoft SQL Server 2019 TPC-H Performance on Dell EMC PowerEdge R7515

Mahesh Anand Reddy M, Christina Perfetto

Mon, 14 Jun 2021 14:44:40 -0000


Dell EMC PowerEdge R7515 servers with AMD 75F3 processors deliver impressive performance and price/performance for SQL Server 2019.

 

While modernizing your data center, there are two very important considerations every organization must make. The first is the price of new hardware, and the other is how it will perform.

The latest powerful AMD 75F3 processors change the economics of the data center for the better. The Dell EMC PowerEdge R7515 (2U) server, with these AMD processors, delivers the balanced I/O, memory, and computing capacity needed for large-scale analytical and business intelligence applications.

 

The latest generation PowerEdge R7515 servers mean organizations will not have to compromise on either performance or cost, and can instead focus on realizing their IT and digital business potential. Independent tpc.org auditing has demonstrated that these servers are top rated in performance and price/performance at the 1,000 GB scale factor for a Microsoft SQL Server 2019 Enterprise database with the Red Hat Enterprise Linux 8 operating system in a non-clustered environment.[1] Powered by 3rd generation AMD® EPYC™ processors, these servers are capable of handling demanding workloads and applications, such as data warehouses, ecommerce, and databases. For more details about Dell servers, please visit the Dell website.

 

 

Results (Source: tpc.org as of May 3, 2021)

 

The Dell PowerEdge R7515 server achieved a result of 979,335.3 QphH@1000GB and $269.23/kQphH@1000GB, with system availability as of April 29, 2021. Results were officially published on the TPC website on May 3, 2021.[2]

Check out the TPC-H V3 Result Highlights for additional information on the benchmark configuration. The detailed official benchmark disclosure report is available on the TPC Results Page.


AMD EPYC 75F3 Processor:

The AMD EPYC™ 7003 Series Processors are built with the leading-edge Zen 3 core and AMD Infinity Architecture. The AMD EPYC™ SoC offers a consistent set of features across 8 to 64 cores. Each 3rd Gen AMD EPYC processor consists of up to eight Core Complex Dies (CCDs) and an I/O Die (IOD).


Benchmarking SQL Server 2019 with Dell EMC PowerEdge R7515 Server:  

Microsoft SQL Server was configured on the PowerEdge R7515 server as follows. The server was equipped with a single EPYC 75F3 processor (3.3 GHz, 32C/64T, 256 MB cache, 280 W), DDR4-3200, and 1 TB of memory (up to 2 TB supported). Storage for this system was eight 1.6 TB NVMe Gen3 mixed use Express Flash drives, plus four 480 GB SAS mixed use SSDs and three 480 GB SATA read intensive SSDs. The system ran Microsoft SQL Server 2019 Enterprise Edition and Red Hat Enterprise Linux 8. For technical information, see the R7515 rack server page for specifications, customizations, and details.

  

You can visit the FDR report for additional information on the benchmark configuration




Performance: 

The performance test consists of two runs:

A run consists of one execution of the Power test followed by one execution of the Throughput test.

Run 1 is the first run following the load test; Run 2 is the run following Run 1. The Run 1 and Run 2 results are below. For more detailed information, please see the FDR report.


Dell Technologies solutions for Microsoft SQL Server:

Dell Technologies solutions simplify the deployment, integration and management of Microsoft data platform environments and accelerate time-to-value for better service delivery and business innovation. With a broad infrastructure portfolio and a long-standing partnership with Microsoft, we provide the innovative solutions that reduce complexity and enable you to solve today’s challenges, no matter where you are in your transformation journey. 

We have already published 10 TB TPC-H benchmark results on the four-socket R940xa server. For more information about the testing configuration and results, please see the Dell EMC PowerEdge R940xa Full Disclosure Report.

In addition, Dell Technologies offers a portfolio of other rack servers, tower servers, and modular infrastructure that can be configured to accelerate almost any business workload. For more details about Dell Technologies Solutions for Microsoft SQL, please visit https://www.delltechnologies.com/sql

 


[1] *Based on TPC Benchmark H (TPC-H), May 2021, at 1,000 GB the Dell EMC PowerEdge R7515 server, priced at $263,658.00 USD, has a TPC-H (V3) Composite Query-per-Hour metric of 979,335.3 yielding a TPC-H Price/Performance of $269.23 USD / kQphH@1,000 GB with Microsoft SQL Server 2019 Enterprise Edition database and Red Hat Enterprise Linux 8 operating system, in a non-clustered environment. Actual results may vary. Full results on tpc.org

[2] Based on TPC Benchmark H (TPC-H), May 2021, at 1,000 GB the Dell EMC PowerEdge R7515 server, priced at $263,658.00 USD, has a TPC-H (V3) Composite Query-per-Hour metric of 979,335.3 yielding a TPC-H Price/Performance of $269.23 USD / kQphH@1,000 GB with Microsoft SQL Server 2019 Enterprise Edition database and Red Hat Enterprise Linux 8 operating system, in a non-clustered environment. Actual results may vary. Full results on tpc.org.

 

Read Full Blog
  • SQL Server
  • AMD

AMD EPYC Zen3 Delivers 20% More SQL Server Performance

Dell Technologies SQL Solutions Team

Mon, 03 May 2021 14:04:47 -0000


It’s a common question: “How much database performance will we gain when upgrading to the newest server technology?” The person asking the question wants to justify the investment based on measured benefit.  An engineering team here at Dell Technologies ran a load test comparing the prior generation of AMD EPYC processors to the new EPYC Zen3 processors. These test findings show a double-digit gain in performance for a typical write-heavy transactional workload using the new AMD EPYC Zen3 processors.

We used two Dell EMC PowerEdge R7525 servers with different generations of AMD EPYC processors.  One server was configured with two 32-core AMD EPYC 75F3 processors running with a base clock speed of 2.95 GHz that can boost up to 4 GHz.

The table below compares the AMD EPYC Zen3 processors to the Zen2 processors used in the other server.  We see that the new Zen3 processors have a higher boost clock speed and double the L3 cache.  These new processors should accelerate database workloads by providing greater performance.

Component

PowerEdge R7525 with AMD EPYC Zen3

PowerEdge R7525 with AMD EPYC Zen2

AMD EPYC CPU

75F3

7542

Base clock speed

2.95 GHz

2.9 GHz

Boost clock speed

4.0 GHz

3.4 GHz

L1 cache

96K per core

96K per core

L2 cache

512K per core

512K per core

L3 cache

256 MB shared

128 MB shared

Both generations of processors support 8 memory channels, with each memory channel supporting up to 2 DIMMs. We believe the faster boost clock speed combined with the doubled L3 cache of the new-generation AMD EPYC Zen3 processors will drive greater database performance; however, there are many more features that we haven't covered. This AMD webpage covers the EPYC 75F3 processors. For a deep technical dive into performance tuning database workloads, we recommend this RDBMS tuning guide.

The table below shows a comparison summary of two PowerEdge servers used in the testing.

 

Component

PowerEdge Milan (Zen3)

 PowerEdge ROME (Zen2)

Processor

2 x AMD EPYC 75F3 32 core processor

2 x AMD EPYC 7542 32 core processor

Memory

2,048 GB 3.2 GHz

2,048 GB 3.2 GHz

Disk Storage

8 x Dell Express Flash NVMe P4610 1.6 TB 

8 x Dell Express Flash NVMe P4610 1.6 TB 

Embedded NIC

1 x Broadcom Gigabit Ethernet BCM5720

1 x Broadcom Gigabit Ethernet BCM5720

Integrated NIC

1 x Broadcom Adv. Dual port 25 Gb Ethernet

1 x Broadcom Adv. Dual port 25 Gb Ethernet

Microsoft SQL Server 2019 Enterprise Edition was virtualized to reflect the most common configuration we see at customer sites. The performance differences between bare metal and VMware virtualization are rarely a consideration for customers, since AMD and VMware vSphere 7.0 U2 are continuously optimizing performance. The paper "Performance Optimizations in VMware vSphere 7.0 U2 CPU Scheduler for AMD EPYC Processors" shows that the VMware CPU scheduler achieves up to 50% better performance than 7.0 U1. Virtualized data management systems have also enabled Database-as-a-Service offerings on-premises for many enterprises. The capability to quickly provision database copies can significantly benefit many IT priorities and programs.

The Dell Engineering team used VMware virtualization to create a SQL Server virtual machine template for this testing.  That template allowed the team to quickly provision four copies of the exact same virtualized database across the two PowerEdge servers. We hosted two virtualized SQL Server databases on each PowerEdge server. The figure below shows the infrastructure configuration for this performance test.

The virtual machine configuration for the four SQL Server databases and the memory allocations for SQL Server are detailed below.

 

Component

Virtual Machine Configuration and Memory of SQL Server

vCPU

28

Memory

886 GB

Disk Storage

2.7 TB

Memory for SQL Server

758 GB

Both databases used VMware vSphere's Virtual Machine File System (VMFS) on Direct Attached Storage (DAS). The table below shows the size of each storage volume used for the virtual machine. There are two data volumes (Data 1 and Data 2) to increase I/O bandwidth to the disk storage. With two volumes, reads and writes are split between them, increasing storage performance; a simplified file-layout sketch follows the table.

 

Storage Group

Size (GB)

Operating System

100

Data 1

600

Data 2

600

TempDB and TempDB Log

200

Lob

200

Backup

1,000
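As a simplified sketch of this layout (database name, mount points, and sizes are illustrative rather than the exact test values), the data files can be created with one file on each data volume:

```sql
-- One data file per data volume: SQL Server stripes page allocations across both files,
-- so reads and writes are split between the two volumes. Log and tempdb placement
-- are omitted here for brevity.
CREATE DATABASE tpcc
ON PRIMARY
    (NAME = N'tpcc_data1', FILENAME = N'/mnt/data1/tpcc_data1.mdf', SIZE = 250GB, FILEGROWTH = 10GB),
    (NAME = N'tpcc_data2', FILENAME = N'/mnt/data2/tpcc_data2.ndf', SIZE = 250GB, FILEGROWTH = 10GB);
```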

In summary, the entire SQL Server software stack consisted of:

  • Microsoft SQL Server Enterprise Edition 2019 CU9
  • Red Hat Enterprise Linux 8.3
  • VMware vSphere ESXi 7.0 Update 2

To create an Online Transaction Processing (OLTP) workload on the two SQL Server databases, the team used HammerDB. HammerDB is a leading benchmark tool used with databases like Microsoft SQL Server, Oracle, and others. We used HammerDB to generate a TPC-C workload that simulates terminal operators executing transactions, which is characteristic of an OLTP workload. A typical OLTP workload sends thousands of small read and write requests per minute to the database, and these must be committed to storage. New Orders Per Minute (NOPM) indicates the number of orders that were fully processed in one minute and is a metric we can use to compare two database systems. Below are the HammerDB settings used for the TPC-C workload test.

 

Setting

Value

Timed Driver Script

Yes

Total Transactions per user

1,000,000

Minutes of Ramp Up Time

10

Minutes of Test Duration

20

Use All Warehouses

Yes

Number of Virtual Users

100

The new AMD EPYC Zen3 processors delivered a 20% increase in New Orders Per Minute performance over the prior generation processor! That is a substantial improvement across the two virtualized SQL Server databases running on the PowerEdge R7525 server. See the comparison chart below.


It's very likely that the boost clock speed of 4.0 GHz and the larger L3 cache in the AMD EPYC Zen3 processors contributed significantly to the 20% performance gain. Every database is different, and results will vary. However, this performance test provides value in terms of understanding the potential gains of moving to AMD's new Zen3 processors. For enterprises considering migrating their databases to a new server platform, serious consideration should be given to the PowerEdge R7525 servers with the new generation of AMD processors.

Read Full Blog
  • SQL Server
  • Red Hat
  • containers
  • Kubernetes
  • virtualization
  • Tanzu
  • data management

Why Canonicalization Should Be a Core Component of Your SQL Server Modernization (Part 2)

Robert F. Sonders

Wed, 12 Apr 2023 16:01:55 -0000


In Part 1 of this blog series, I introduced the Canonical Model, a fairly recent addition to the Services catalog. Canonicalization will become the north star where all newly created work is deployed to and managed, and its simplified approach also allows for vertical integration and solutioning an ecosystem when it comes to the design work of a SQL Server modernization effort. The stack is where the "services" run—starting with bare metal, all the way to the application, with seven layers up the stack.

In this blog, I'll dive further into the detail and operational considerations for the 7 layers of the fully supported stack, using as an example the product that makes my socks roll up and down: a SQL Server Big Data Cluster. The SQL BDC is absolutely not the only "application" your IT team would address; this conversation applies to any "top of stack application" solution. One example is persistent storage for databases running in a container. We need to solution for the very top (SQL Server), the very bottom (Dell Technologies infrastructure), and the many optional permutation layers in between.

 First, a Word About Kubernetes 

 One of my good friends at Microsoft, Buck Woody, never fails to mention a particular truth in his deep-dive training sessions for Kubernetes. He says, “If your storage is not built on a strong foundation, Kubernetes will fall apart.” He’s absolutely correct.

 Kubernetes or “K8s” is an open-source container-orchestration system for automating deployment, scaling, and management of containerized applications and is the catalyst in the creation of many new business ventures, startups, and open-source projects. A Kubernetes cluster consists of the components that represent the control plane and includes a set of machines called nodes.

 To get a good handle on Kubernetes, give Global Discipline Lead Daniel Murray’s blog a read, “Preparing to Conduct Your Kubernetes Orchestra in Tune with Your Goals.”

 The 7 Layers of Integration Up the Stack

 

Let's look at the vertical integration one layer at a time. This process and solution conversation is very fluid at the start: facts, IT desires, best practice considerations, and IT maturity are all on the table. For me, at this stage, there is zero product conversation. For my data professionals, this is where we get on a whiteboard (or virtual whiteboard) and answer these questions:

  • Any data?
  • Anywhere?
  • Any way?

 Answers here will help drive our layer conversations.

 From tin to application, we have:

 Layer 1

The foundation of any solid design of the stack starts with Dell Technologies Solutions for SQL Server. Dell Technologies infrastructure is best positioned to drive consistency up and down the stack, and it's supplemented by the company's subject matter experts who work with you to make optimal decisions concerning compute, storage, and backup.

 

The requisites and hardware components of Layer 1 are: 

  • Memory, storage class memory (PMEM), and a consideration for later—maybe a bunch of all-flash storage. Suggested equipment: PowerEdge.
  • Storage and CI component. Considerations here include use cases that will drive decisions to be made later within the layers. Encryption and compression in the mix? Repurposing? HA/DR conversations are also potentially spawned here. Suggested hardware: PowerOne, PowerStore, PowerFlex. Other considerations – structured or unstructured? Block? File? Object? Yes to all! Suggested hardware: PowerScale, ECS
  • It's hard to argue against the huge importance of a solid backup and recovery plan. Suggested hardware: PowerProtect Data Management portfolio.
  • Dell Networking. How are we going to "wire up"—converged or hyper-converged, or up the stack of virtualization, containerization, and orchestration? How are all those aaS'es going to communicate? These questions concern the stack relationship integration and are a key component to get right.

Note: All of Layer 1 should consist of Dell Technologies products with deployment and support services. Full stop.

Layer 2

 Now that we’ve laid our foundation from Dell Technologies, we can pivot to other Dell ecosystem solution sets as our journey continues, up the stack. Let’s keep going.

 

Considerations for Layer 2 are:

 

  • Are we sticking with physical tin (bare-metal)?
  • Should we apply a virtualization consolidation factor here? ESXi, Hyper-V, KVM? Virtualization is "optional" at this point. Again, the answers are fluid right now and it's okay to say, "it depends." We'll get there!
  • Do we want to move to open-source in terms of a fully supported stack? Do we want the comfort of a supported model? IMO, I like a fully supported model although it comes at a cost. Implementing consolidation economics, however, like I mentioned above with virtualization and containerization, equals doing more with less. 

Note: Layer 2 is optional (dependent upon future layers) and would be fully supported by either Dell Technologies, VMware or Microsoft and services provided by Dell Technologies Services or VMware Professional Services.

Layer 3

Choices in Layer 3 help drive decisions, and maturity-curve comfort level, all the way back to Layer 1. Additionally, at this juncture, we'll also start talking about subsequent layers and thinking about the orchestration of containers with Kubernetes.

 

Considerations and some of the purpose-built solutions for Layer 3 include: 

  • Software-defined everything, such as Dell Technologies PowerFlex (formerly VxFlex).
  • Network and storage, such as the Dell Technologies VMware family – vSAN – and the Microsoft Azure family of on-premises servers – Edge, Azure Stack Hub, Azure Stack HCI.

As we walk through the journey to a containerized database world, this level is also where we need to start thinking about the CSI (Container Storage Interface) driver and where it will be supported.

  Note: Layer 3 is optional (dependent upon future layers) and would be fully supported by either Dell Technologies, VMware or Microsoft and services provided by Dell Technologies Services or VMware Professional Services. 

Layer 4

Ah, we've climbed up four rungs on the ladder and arrived at the Operating System, where things get really interesting! (Remember the days when a server was just tin and an OS?)

 

Considerations for Layer 4 are:

  • Windows Server. Available in a few different forms—Desktop experience, Core, Nano.
  • Linux OS. Many choices including RedHat, Ubuntu, SUSE, just to name a few. 

Note: Do you want to continue down the supported stack path? If so, Microsoft and Red Hat are the answers here in terms of where you'll reach for "phone-a-friend" support.

Option: We could absolutely stop at this point and deploy our application stack. Perfectly fine to do this. It is a proven methodology. 

Layer 5

Container technology – the ability to isolate one process from another – dates back to 1979. How is it that I didn’t pick this technology when I was 9 years old? 😊 Now, the age of containers is finally upon us. It cannot be ignored. It should not be ignored. If you have read my previous blogs, especially “The New DBA Role – Time to Get your aaS in Order,” you are already embracing SQL Server on containers. Yes!

 

 

 

Considerations and options for Layer 5, the “Container Control plane” are:

Note: Containers are absolutely optional here. However, certain options in these layers will provide the runway for containers in the future. Virtualization of data and containerization of data can live on the same platform, even if you are not ready for containers currently. It would be good to set up for success now, so you're ready to start with containers within hours if needed.

Layer 6 

The Container Orchestration plane. We all know about virtualization sprawl; now we have container sprawl! Where are all these containers running? Which cloud are they running in? Which hypervisor? It's best to manage through a single pane of glass—understanding and managing "all the things."

 Considerations for Layer 6 are: 

Note: As of this blog's publish date, Azure Arc is not yet GA; it's still in preview. There's no time like the present to start learning Arc's ins and outs! Sign up for the public preview.

Layer 7

Finally, we have reached the application layer in our SQL Server modernization. We can now install SQL Server, or any ancillary service offering in the SQL Server ecosystem. But hold on! There are a couple of options to consider: Would you like your SQL services to be managed and "Always Current"? For me, the answer would be yes. And remember, we are talking about on-premises data here.

 

Considerations for Layer 7:

  • The application for this conversation is SQL Server 2019.
  • The appropriate decisions in building your stack will lead you to Azure Arc Data Services (currently in preview); SQL Server and Kubernetes are requirements here.

Note: With Dell Technologies solutions, you can deploy at your own rate, as long as your infrastructure is solid. Dell Technologies Services can move, consolidate, and/or upgrade old versions of SQL Server to SQL Server 2019.

The Fully Supported Stack 

In terms of considering all the choices and dependencies made at each layer of building and integrating the 7 layers up the stack, there is a fully supported stack available that includes services and products from: 

  1. Dell Technologies
  2. VMware
  3. RedHat
  4. Microsoft 

Also, there are absolutely many open-source choices that your teams can make along the way. Perfectly acceptable to do. In the end, it comes down to who wants to support what, and when.

Dell Technologies Is Here to Help You Succeed

There are deep integration points for the fully supported stack. I can speak for all permutations representing the four companies listed above. In my role at Dell Technologies, I engage with senior leadership, product owners, engineers, evangelists, professional services teams, data scientists—you name it. We all collaborate and discuss what is best for you, the client. When you engage with Dell Technologies for the complete solution experience, we have a fierce drive to make sure you are satisfied, both in the near and long term.  Find out more about our products and services for Microsoft SQL Server. 

I invite you to take a moment to connect with a Dell Technologies Services Expert today and begin moving toward your fully supported stack and SQL Server modernization.

Read Full Blog
  • SQL Server
  • Red Hat
  • containers
  • Kubernetes
  • virtualization
  • Tanzu
  • data management

Why Canonicalization Should Be a Core Component of Your SQL Server Modernization (Part 1)

Robert F. Sonders

Tue, 23 Mar 2021 13:00:57 -0000


The Canonical Model, Defined

A canonical model is a design pattern used to communicate between different data formats: a data model that is a superset of all the others ("canonical"), with a translator module or layer to and from which all existing modules exchange data.[1] It's a form of enterprise application integration that reduces the number of data translations, streamlines maintenance and cost, standardizes on agreed data definitions associated with integrating business systems, and drives consistency by providing common data naming, definitions, and values within a generalized data framework.

 SQL Server Modernization

I've always been a data fanatic and forever hold a special fondness for SQL Server. As of late, many of my clients have asked me: "How do we embark on a new era of data management for the SQL Server stack?"

Canonicalization, in fact, is very much applicable to the design work of a SQL Server modernization effort. Its simplified approach allows for vertical integration and solutioning an entire SQL Server ecosystem. The stack is where the "services" run—starting with bare metal, all the way to the application, with seven integrated layers up the stack.

 The 7 Layers of Integration Up the Stack

The foundation of any solid design of the stack starts with infrastructure from Dell Technologies. Dell Technologies is best positioned to drive consistency up and down the stack, and it's supplemented by the company's subject matter infrastructure and services experts who work with you to make the best decisions concerning compute, storage, and backup.

 

 

Let’s take a look at the vertical integration one layer at a time. From tin to application, we have: 

  1. Infrastructure from Dell Technologies
  2. Virtualization (optional)
  3. Software defined – everything
  4. An operating system
  5. Container control plane
  6. Container orchestration plane
  7. Application

There are so many dimensions to choose from as we work up this layer cake of hardware, software-defined everything, and, of course, applications. Think: Dell, VMware, Red Hat, Microsoft. With software evolving at an ever-increasing rate and eating the world, there is additional complexity. It's critical that you understand how all the pieces of the puzzle work and which pieces work well together, giving consideration to the integration points you may already have in your ecosystem.

 Determining the Most Reliable and Fully Supported Solution

With all this complexity, which architecture do you choose to become properly solutioned? How many XaaS offerings would you like to automate? I hope your answer is: all of them! At what point would you like the control plane, or control planes? Think of a control plane as the place your teams manage from, deploy to, and hook your DevOps tooling into. To put it a different way, would you like your teams innovating or maintaining?

As your control plane insertion point moves up towards the application, the automation below increases, as does the complexity. One example here is the Azure Resource Manager, or ARM. There are ways to connect any infrastructure in your on-premises data centers, driving consistent management. We also want all the role-based access control (RBAC) in place – especially for the data stores we are managing. One example, which we will talk about in Part 2, is Azure Arc.

This is the main reason for this blog: understanding the choices and the tradeoff of cost versus complexity, or automated complexity. Many products deliver this automation out of the box. “Pay no attention to the man behind the curtain!”

One of my good friends at Dell Technologies, Stephen McMaster, an Engineering Technologist, describes these considerations as the Plinko Ball, a choose-your-own-adventure type of scenario. The analogy is spot on!

 With all the choices of dimensions, we must distill down to the most efficient approach. I like to understand both the current IT tool set and the maturity journey of the organization, before I tackle making the proper recommendation for a solid solution set and fully supported stack.

Dell Technologies Is Here to Help You Succeed

 Is “keeping the lights on” preventing your team from innovating?

 Dell Technologies Services can complement your team! As your company’s trusted advisor, my team members share deep expertise for Microsoft products and services and we’re best positioned to help you build your stack from tin to application. Why wait? Contact a Dell Technologies Services Expert today to get started.

 Stay tuned for Part 2 of this blog series where we’ll dive further into the detail and operational considerations of the 7 layers of the fully supported stack.

 

[1] Source: Wikipedia

Read Full Blog

New Decision Support Benchmark Has Dell EMC MX740c as Top Performance Leader

Sam Lucido Sam Lucido

Thu, 18 Mar 2021 15:59:40 -0000

|

Read Time: 0 minutes

The Microsoft SQL Server team at Dell Technologies is excited to announce the recently published decision support workload benchmark (TPC-H) that, at the time of this blog, has the Dell EMC PowerEdge MX740c modular server in the top position of the leaderboard for performance. For those of you who don’t follow database benchmarking news, the Transaction Processing Performance Council (TPC) provides the most trusted source of independently audited database performance benchmarks. The TPC has two organizational goals: creating good benchmarks and providing a process for auditing those benchmarks.


Dell Technologies recently achieved the top position on the TPC (tpc.org) leaderboard for the decision support workload benchmark (TPC-H) performance results for a 1,000 GB database.* This benchmark tests decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. Our top-ranked system was the modular PowerEdge MX740c server running Microsoft SQL Server 2019 Enterprise Edition on the Red Hat Enterprise Linux 8.0 operating system.*

 

The performance metric reported for the TPC-H benchmark is the Composite Query-per-Hour Performance Metric (QphH@Size). Most database benchmarks for high-transaction workloads are stated in transactions per second (TPS). Decision support results are stated in queries per hour because the queries are far more complex than most OLTP transactions, including a rich combination of transformation operators and selectivity constraints that generate intensive activity on the database server being tested.

 

When it comes to consolidating data management systems on a performance platform, the Dell EMC PowerEdge MX7000 modular chassis with PowerEdge MX740c compute sleds can accelerate many data workloads. For the decision support benchmark, the PowerEdge MX740c modular server was configured with:

  • PowerEdge MX740c server
    • 2 x Intel(R) Xeon(R) Platinum 8268 2.90 GHz, 24 cores/48 threads
    • 12 x 64 GB memory
  • Drives
    • 3 x Dell 1.6 TB NVMe Mixed Use Express Flash, 2.5in SFF, U.2, PM1725b with Carrier
    • 2 x 480 GB SSD SAS Mixed Use 12 Gbps 512e 2.5in Hot-Plug Drive, PM5-V
    • 1 x 2.4 TB 10K RPM SAS 12 Gbps 512e 2.5in Hot-Plug Hard Drive
  • Controllers
    • PERC H730P RAID Controller
    • BOSS controller card with 2 x 480 GB M.2 sticks (RAID 1), Blade

 

The PowerEdge MX740c ranked #1 for performance for a 1,000 GB SQL Server 2019 Enterprise Edition database in a non-clustered configuration, with a TPC-H Composite Query-per-Hour metric of 824,693.5.* IT professionals benefit from benchmarks that rank database systems using real workloads because they provide a means for comparing performance. Published benchmarks have a great deal of integrity, as an independent auditor inspects each benchmark result and performs a comprehensive review. The TPC Full Disclosure Report can be found here and provides insight into how the system was configured to achieve the query-per-hour result.

 

Microsoft SQL Server is very efficient at using server memory and solid-state drive technology. In general, the more dedicated memory and the faster the solid-state storage, the faster databases will perform. The PowerEdge MX740c modular server can have up to 3 TB of LRDIMM memory and up to two 28-core 2nd Generation Intel Xeon Scalable processors. This gives the enterprise the ability to configure the compute sled for most database workloads, such as decision support, online transaction processing, data warehouses, and more.

Dell Technologies has a broad portfolio of solutions for Microsoft SQL Server in the Info Hub: https://InfoHub.DellTechnologies.com. For example, there are many SQL Server solutions covering running the database in containers, Big Data Clusters, and VMware vSphere virtualization. It is a technical library for researching and learning about SQL Server solutions.

 

More interested in an overview of the many solutions for SQL Server? The solutions page for Microsoft SQL Server provides the business overview of how Dell Technologies and Microsoft SQL Server can enable your company with modern infrastructure and agile operations. To learn more visit: https://DellTechnologies.com/sql. 

Technical information about the PowerEdge MX740c modular server can be found at PowerEdge MX740c Compute Sled. This page provides all the technical specifications, customizations, and details. In addition, Dell Technologies offers a portfolio of rack, tower, and modular servers that can be configured to accelerate almost any business workload: https://www.delltechnologies.com/en-us/servers/index.htm

 

 

*   Based on TPC Benchmark H (TPC-H), March 2021, the Dell EMC PowerEdge MX740c modular server has a TPC-H Composite Query-per-Hour Performance Metric of 824,693.5 when run against a 1,000 GB Microsoft SQL Server 2019 database and Red Hat Enterprise Linux 8.0 in a non-clustered environment.  Actual results may vary based on operating environment. Full results on tpc.org.

 

Read Full Blog
  • SQL Server
  • Unity
  • Red Hat
  • Microsoft
  • Kubernetes
  • OpenShift
  • Big Data Cluster

Dell Technologies partners with Microsoft and Red Hat running SQL Server Big Data Clusters on OpenShift

Doug Bernhardt Steve Wanless Doug Bernhardt Steve Wanless

Wed, 19 Aug 2020 22:32:11 -0000

|

Read Time: 0 minutes

Introduced with Microsoft SQL Server 2019, SQL Server Big Data Clusters allow customers to deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes. Complete information on Big Data Clusters can be found in the Microsoft documentation. Many Dell Technologies customers are using Red Hat OpenShift Container Platform as their Kubernetes platform of choice, and with this and many other solutions we are leading the way on OpenShift applications.


In Cumulative Update 5 (CU5) of Microsoft SQL Server 2019 Big Data Clusters (BDC), OpenShift 4.3+ is supported as a platform for Big Data Clusters. This has been a highly anticipated launch as customers not only realize the power of BDC and OpenShift but also look for the support of Dell Technologies, Microsoft, and Red Hat to run mission-critical workloads. Dell Technologies has been working with Microsoft and Red Hat to develop architecture guidance and best practices for deploying and running BDC on OpenShift.  


For this effort, we used Databricks’ TPC-DS Spark SQL kit to populate a dataset and run a workload on the OpenShift 4.3 BDC cluster to test the various architecture components of the solution. The TPC-DS benchmark is a popular database benchmark used to evaluate performance in decision support and big data environments.


Based on our testing, we were able to achieve linear scaling of our workload while fully exercising our OpenShift cluster, which consisted of 12 Dell EMC PowerEdge R640 servers and a single Dell EMC Unity 880F storage array.

  Total time of all queries run for 10, 20, and 30 TB datasets


As a result of this testing, a fully detailed OpenShift reference architecture and a best practices paper for running Big Data Clusters on Dell EMC Unity storage are under way and will be published soon. More information on Dell Technologies solutions for OpenShift can be found on our OpenShift Info Hub.  Additional information on Dell Technologies for SQL Server can be found on our Microsoft SQL webpage. 


Read Full Blog
  • SQL Server
  • Microsoft
  • security

Database security methodologies of SQL Server

Anil Papisetty Anil Papisetty

Mon, 03 Aug 2020 16:06:37 -0000

|

Read Time: 0 minutes

In general, security touches every aspect and activity of an information system. The subject of security is vast, and we need to understand that security can never be perfect. Every organization has a unique way of dealing with security based on its requirements. In this blog, I describe database security models and briefly review SQL Server security principles.

A few definitions:

  • Database: A collection of information stored in a computer
  • Security: Freedom from danger
  • Database security: The mechanism that protects the database against intentional or accidental threats or that protects it against malicious attempts to steal (view) or modify data

Database security models

Today’s organizations rely on database systems as the key data management technology for a large variety of tasks ranging from regular business operations to critical decision making. The information in the databases is used, shared, and accessed by various users. It needs to be protected and managed because any changes to the database can affect it or other databases. 

The main role of a security system is to preserve the integrity of an operational system by enforcing a security policy that is defined by a security model. These security models are the basic theoretical tools to start with when developing a security system.

Database security models include the following elements:

  • Subject: Individual who performs some activity on the database
  • Object: Database unit that requires authorization in order to be manipulated
  • Access mode/action: Any activity that might be performed on an object by a subject 
  • Authorization: Specification of access modes for each subject on each object
  • Administrative rights: Who has rights in system administration and what responsibilities administrators have
  • Policies: Enterprise-wide accepted security rules
  • Constraint: A more specific rule regarding an aspect of an object and action 

Database security approaches

A typical DBMS supports basic approaches of data security—discretionary control, mandatory control, and role-based access control. 

Discretionary control: A given user typically has different access rights, also known as privileges, for different objects. For discretionary access control, we need a language to support the definition of rights—for example, SQL. 

Mandatory control: Each data object is labeled with a certain classification level, and a given object can be accessed only by a user with a sufficient clearance level. Mandatory access control is applicable to the databases in which data has a rather static or rigid classification structure—for example, military or government environments.

In both discretionary and mandatory control cases, the unit of data and the data object to be protected can range from the entire database to a single, specific tuple.

Role-based access control (RBAC): Permissions are associated with roles, and users are made members of appropriate roles. However, a role brings together a set of users on one side and a set of permissions on the other, whereas user groups are typically defined as a set of users only.

Role-based security provides the flexibility to define permissions at a high level of granularity in Microsoft SQL, thus greatly reducing the attack surface area of the database system.

RBAC mechanisms are a flexible alternative to mandatory access control (MAC) and discretionary access control (DAC).

RBAC terminology:

  • Objects: Any system resource, such as a file, printer, terminal, or database record
  • Operations: An executable image of a program, which upon invocation performs some function for the user.
  • Permissions: An approval to perform an operation on one or more RBAC-protected objects   
  • Role: A job function within the context of an organization with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role.

For more information, see Database Security Models — A Case Study

Note: Access control mechanisms regulate who can access which data. The need for such mechanisms can be concluded from the variety of actors that work with a database system—for example, DBA, application admin and programmer, and users. Based on actor characteristics, access control mechanisms can be divided into three categories – DAC, RBAC, and MAC.

Principles of SQL Server security

A SQL Server instance contains a hierarchical collection of entities, starting with the server. Each server contains multiple databases, and each database contains a collection of securable objects. Every SQL Server securable has associated permissions that can be granted to a principal, which is an individual, group, or process granted access to SQL Server. 

For each security principal, you can grant rights that allow that principal to access or modify a set of the securables, which are the objects that make up the database and server environment. They can include anything from functions to database users to endpoints. SQL Server scopes the objects hierarchically at the server, database, and schema levels (a short sketch follows this list):

  • Server-level securables include databases as well as objects such as logins, server roles, and availability groups.
  • Database-level securables include schemas as well as objects such as database users, database roles, and full-text catalogs.
  • Schema-level securables include objects such as tables, views, functions, and stored procedures.
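
As a minimal T-SQL sketch of these three scopes, using entirely hypothetical names (SalesAppLogin, SalesDB, and a Reporting schema), you might create a server-level login, map it to a database-level user, and then grant access to a schema-level securable:

-- Hypothetical names; a minimal sketch of the server, database, and schema scopes.

-- Server-level securable: create a login (a server principal).
CREATE LOGIN SalesAppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- Database-level securable: map the login to a user in a specific database.
USE SalesDB;
CREATE USER SalesAppUser FOR LOGIN SalesAppLogin;

-- Schema-level securable: grant read access on an entire schema.
GRANT SELECT ON SCHEMA::Reporting TO SalesAppUser;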

SQL Server authentication approaches include: 

  • Authentication: Authentication is the SQL Server login process by which a principal requests access by submitting credentials that the server evaluates. Authentication establishes the identity of the user or process being authenticated. SQL Server authentication helps ensure that only authorized users with valid credentials can access the database server. SQL Server supports two authentication modes, Windows authentication mode and mixed mode. 
  • Windows authentication is often referred to as integrated security because this SQL Server security model is tightly integrated with Windows.
  • Mixed mode supports authentication both by Windows and by SQL Server, using usernames and passwords. 
  • Authorization: Authorization is the process of determining which securable resources a principal can access and which operations are allowed for those resources. Microsoft SQL Server-based technologies support this principle by providing mechanisms to define granular object-level permissions and simplify the process by implementing role-based security. Granting permissions to roles rather than users simplifies security administration (a short sketch follows this list).
  • It is a best practice to use server-level roles for managing server-level access and security, and database roles for managing database-level access. 
  • Role-based security provides the flexibility to define permissions at a high level of granularity in Microsoft SQL, thus greatly reducing the attack surface area of the database system.
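
Building on the authorization point above, here is a hedged sketch with hypothetical names (SalesDB, a SalesReaders role, and example principals assumed to already exist as database users) of granting permissions to a database role once and then simply adding members to it:

-- Hypothetical names; permissions are granted to the role, not to individual users.
USE SalesDB;

-- Create a database role and grant it the permissions the application needs.
CREATE ROLE SalesReaders;
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;
GRANT EXECUTE ON OBJECT::dbo.usp_GetOpenOrders TO SalesReaders;

-- Add existing database users to the role; access is now managed in one place.
ALTER ROLE SalesReaders ADD MEMBER SalesAppUser;
ALTER ROLE SalesReaders ADD MEMBER [CONTOSO\ReportingSvc];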

Here are a few additional recommended best practices for SQL Server authentication: 

  • Use Windows authentication. 
  • Enables centralized management of SQL Server principals via Active Directory
  • Uses Kerberos security protocol to authenticate users
  • Supports integrated password policy enforcement including complexity validation for strong passwords, password expiration, and account lockout
  • Use separate accounts to authenticate users and applications. 
  • Enables limiting the permissions granted to users and applications 
  • Reduces the risks of malicious activity such as SQL injection attacks
  • Use contained database users (a brief sketch follows this list). 
  • Isolates the user or application account to a single database
  • Improves performance, as contained database users authenticate directly to the database without an extra network hop to the master database 
  • Supports both SQL Server and Azure SQL Database, as well as Azure SQL Data Warehouse
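
As a rough sketch of the contained database user recommendation, with hypothetical names and assuming you are able to change the instance-level configuration option, the setup looks roughly like this:

-- Hypothetical names; a contained database user authenticates at the database,
-- not at the server, so no separate login is required.

-- Enable contained database authentication at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Make the database partially contained.
ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;

-- Create a user with its own password directly in the database.
USE SalesDB;
CREATE USER ContainedAppUser WITH PASSWORD = 'An0ther!Str0ngPassw0rd';
GRANT SELECT ON SCHEMA::Sales TO ContainedAppUser;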

Conclusion

Database security is an important goal of any data management system. Each organization should have a data security policy, which is a set of high-level guidelines determined by:

  • User requirements 
  • Environmental aspects
  • Internal regulations
  • Government laws

Database security is based on three important constructs—confidentiality, integrity, and availability. The goal of database security is to protect your critical and confidential data from unauthorized access. 

References

https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/overview-of-sql-server-security

https://sqlsunday.com/2014/07/20/the-sql-server-security-model-part-1/

https://www.red-gate.com/simple-talk/sysadmin/data-protection-and-privacy/introduction-to-sql-server-security-part-1/

 

Read Full Blog
  • SQL Server
  • Microsoft
  • Big Data Cluster

Manage and analyze humongous amounts of data with SQL Server 2019 Big Data Cluster

Anil Papisetty Anil Papisetty

Wed, 19 Aug 2020 22:07:59 -0000

|

Read Time: 0 minutes

A collection of facts and statistics for reference or analysis is called data, and the term “big data,” in a way, simply means a very large amount of data. The big data concept has been around for many years, and the volume of data is growing like never before, which is why data is a hugely valued asset in this connected world. Effective big data management enables an organization to locate valuable information with ease, regardless of how large or unstructured the data is. The data is collected from various sources, including system logs, social media sites, and call detail records.

The four V's associated with big data are Volume, Variety, Velocity, and Veracity:

  • Volume is about the size—how much data you have.
  • Variety means that the data is very different—that you have very different types of data structures.
  • Velocity is about the speed of how fast the data is getting to you.
  • Veracity, the final V, is a difficult one: it refers to the trustworthiness of the data, and big data is often incomplete, inconsistent, or otherwise unreliable.

SQL Server Big Data Clusters make it easy to manage this complex assortment of data.

You can use SQL Server 2019 to create a secure, hybrid, machine learning architecture starting with preparing data, training a machine learning model, operationalizing your model, and using it for scoring. SQL Server Big Data Clusters make it easy to unite high-value relational data with high-volume big data.

Big Data Clusters bring together multiple instances of SQL Server with Spark and HDFS, making it much easier to unite relational and big data and use them in reports, predictive models, applications, and AI. 

In addition, using PolyBase, you can connect to many different external data sources such as MongoDB, Oracle, Teradata, SAP HANA, and more (a hedged example follows the component overview below). Hence, SQL Server 2019 Big Data Cluster is a scalable, performant, and maintainable SQL platform, data warehouse, data lake, and data science platform that doesn’t require compromising between cloud and on-premises. Components include:

Controller

The controller provides management and security for the cluster. It contains the control service, the configuration store, and other cluster-level services such as Kibana, Grafana, and Elastic Search.

Compute pool

The compute pool provides computational resources to the cluster. It contains nodes running SQL Server on Linux pods. The pods in the compute pool are divided into SQL compute instances for specific processing tasks.

Data pool

The data pool is used for data persistence and caching. The data pool consists of one or more pods running SQL Server on Linux. It is used to ingest data from SQL queries or Spark jobs. SQL Server Big Data Cluster data marts are persisted in the data pool.

Storage pool

The storage pool consists of storage pool pods comprising SQL Server on Linux, Spark, and HDFS. All the storage nodes in a SQL Server Big Data Cluster are members of an HDFS cluster.

Following is the reference architecture of SQL Server 2019 on Big Data Cluster: 

Reference: https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions 
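
As a rough illustration of the PolyBase data virtualization capability mentioned above, the following hedged T-SQL sketch uses hypothetical server names, credentials, and table definitions (and assumes the PolyBase feature is installed and enabled) to virtualize an Oracle table so it can be queried in place:

-- Hypothetical names and credentials; assumes PolyBase is installed and enabled.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'A!Strong#Passw0rd';

CREATE DATABASE SCOPED CREDENTIAL OracleCredential
WITH IDENTITY = 'oracle_read_user', SECRET = 'oracle_user_password';

CREATE EXTERNAL DATA SOURCE OracleSales
WITH (LOCATION = 'oracle://oracledb.example.com:1521',
      CREDENTIAL = OracleCredential);

-- The external table is metadata only; the rows stay in Oracle.
CREATE EXTERNAL TABLE dbo.OracleOrders
(
    ORDER_ID    INT,
    CUSTOMER_ID INT,
    ORDER_TOTAL DECIMAL(18, 2)
)
WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

-- Query the remote Oracle data with plain T-SQL.
SELECT TOP (10) ORDER_ID, ORDER_TOTAL
FROM dbo.OracleOrders
ORDER BY ORDER_TOTAL DESC;

Broadly speaking, similar external data sources can be defined for the other sources the post mentions; only the LOCATION prefix and the table mapping change.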

Big data analysis

Data analytics is the science of examining raw data to uncover underlying information. The primary goal is to ensure that the resulting information is of high data quality and accessible for business intelligence as well as big data analytics applications. Big Data Clusters make machine learning easier and more accurate by handling the four Vs of big data:


  • Volume
    • Impact on analytics: The greater the volume of data processed by a machine learning algorithm, the more accurate the predictions will be.
    • How a Big Data Cluster helps: Increases the data volume available for AI by capturing data in scalable, inexpensive big data storage in HDFS and by integrating data from multiple sources using PolyBase connectors.
  • Variety
    • Impact on analytics: The greater the variety of data sources, the more accurate the predictions will be.
    • How a Big Data Cluster helps: Increases the variety of data available for AI by integrating multiple data sources through the PolyBase connectors.
  • Velocity
    • Impact on analytics: Real-time predictions depend on up-to-date data flowing quickly through the data processing pipelines.
    • How a Big Data Cluster helps: Increases the velocity of data to enable AI by using elastic compute and caching to speed up queries.
  • Veracity
    • Impact on analytics: Accurate machine learning depends on the quality of the data going into model training.
    • How a Big Data Cluster helps: Increases the veracity of data available for AI by sharing data without copying or moving it, which would introduce data latency and data quality issues. SQL Server and Spark can both read and write the same data files in HDFS.

Cluster management

Azure Data Studio is the tool that data engineers, data scientists, and DBAs use to manage databases and write queries. Cluster admins use the admin portal, which runs as a pod inside the same namespace as the cluster and provides information such as the status of all pods and overall storage capacity.

Azure Data Studio is a cross-platform management tool for Microsoft databases. It’s like SQL Server Management Studio on top of the popular VS Code editor engine, a rich T-SQL editor with IntelliSense and plug-in support. Currently, it’s the easiest way to connect to the different SQL Server 2019 endpoints (SQL, HDFS, and Spark). To do so, you need to install Data Studio and the SQL Server 2019 extension.

If you have a Kubernetes infrastructure, you can deploy a Big Data Cluster to it with a single command and have a cluster running in about 30 minutes.

If you want to install SQL Server 2019 Big Data Cluster on your on-premises Kubernetes cluster, you can find an official deployment guide for Big Data Clusters on Minikube in Microsoft docs.

Conclusion

Planning is everything and good planning will get a lot of problems out of the way, especially if you are thinking about streaming data and real-time analytics. 

When it comes to technology, organizations have many different types of big data management solutions to choose from. Dell Technologies solutions for SQL Server help organizations achieve some of the key benefits of SQL Server 2019 Big Data Clusters:

  • Insights to everyone: Access to management services, an admin portal, and integrated security in Azure Data Studio, which makes it easy to manage and create a unified development and administration experience for big data and SQL Server users
  • Enriched data: Data using advanced analytics and artificial intelligence that’s built into the platform
  • Overall data intelligence:
    • Unified access to all data with unparalleled performance 
    • Easily and securely manage data (big/small)
    • Build intelligent apps and AI with all data 
  • Management of any data, any size, anywhere: Simplified management and analysis through unified deployment, governance, and tooling
  • Easy deployment and management using the Kubernetes-based big data solution built into SQL Server 

To make better decisions and gain insights from data, enterprises large, medium, and small use big data analysis. For information about how the SQL solutions team at Dell Technologies helps customers store, analyze, and protect data with Microsoft SQL Server 2019 Big Data Cluster technologies, see the following links:

https://www.delltechnologies.com/en-us/big-data/solutions.htm#dropdown0=0

https://infohub.delltechnologies.com/t/sql-server/

https://infohub.delltechnologies.com/t/microsoft-sql-server-2019-big-data-clusters-a-big-data-solution-using-dell-emc-infrastructure/


Read Full Blog
  • SQL Server
  • Linux
  • containers
  • XtremIO
  • Microsoft
  • VxFlex
  • CSI

SQL Server in containers: Dell EMC CSI plug-in—It's about manageability!

Sam Lucido Sam Lucido

Mon, 03 Jul 2023 15:55:47 -0000

|

Read Time: 0 minutes

A picture can be worth a thousand words; however, not every slide in a presentation is self-explanatory, and sometimes even the speaker notes don’t provide enough real estate to cover the full meaning of the content. That happened to me recently with this slide in a technical presentation that I created: 

The unanswered question was: what does this sentence mean? “Get fixes and upgrades faster as Dell EMC’s plug-in doesn’t require Kubernetes updates and upgrades!” I wrote this blog to give more background and details about that statement. Before we get to that, let’s discuss the value that the CSI plug-in has for customers using XtremIO X2 and VxRack FLEX. CSI is a standard used by Dell EMC and other storage providers to give container orchestration systems an interface for exposing storage services to containers. Thus, the CSI plug-in enables orchestration between containers and storage via Kubernetes. Other orchestration systems such as Mesos, Docker, and Cloud Foundry also use the same CSI specification for managing containers and storage together.

The CSI plug-in has another advantage for both orchestration systems (like Kubernetes) and the storage providers. For example, Kubernetes development can progress independently without requiring storage vendors to check code into the core Kubernetes repository. Similarly, the storage vendors update the CSI plug-in only when required and not with every update or upgrade of Kubernetes. Overall, there is less complexity for both Kubernetes developers and storage vendors because the CSI plug-in simplifies the integration between the orchestration and storage layers. Thus, the CSI plug-in enables faster fixes and upgrades by Dell EMC to work with Kubernetes. I hope that answers the question from above. You can also take a look at this Kubernetes blog that goes into greater detail: Introducing Container Storage Interface (CSI) Alpha for Kubernetes.

We also recently wrote a white paper about SQL Server containers that provides an overview of how the XtremIO X2 features available with our CSI plug-in can be used with SQL Server 2019 Linux containers. With the CSI plug-in, the Kubernetes administrator can:

  • Dynamically provision and decommission volumes
  • Attach and detach volumes from a host node
  • Mount and unmount a volume from a host node

The Kubernetes administrator can even use the XtremIO X2 snapshot capabilities to provision a copy of a SQL Server database. It’s these capabilities that really make automation and orchestration of SQL Server containers easier and faster. Want to learn more? The SQL Server Containers white paper is the right starting place because it takes you through the technology and shows how the XtremIO X2 CSI plug-in with Kubernetes and Docker can address traditional challenges.

Please rate this blog and provide us with ideas for future solutions. Thanks!


Read Full Blog
  • SQL Server
  • Microsoft

The new DBA role—Time to get your aaS in order

Robert F. Sonders Robert F. Sonders

Mon, 03 Aug 2020 16:07:49 -0000

|

Read Time: 0 minutes

Yes, a “catchy” little title for a blog post but aaS is a seriously cool and fun topic and the new DBA role calls for a skill set that’s incredibly career-enhancing. DBA teams, in many cases, will be leading the data-centric revolution! The way data is stored, orchestrated, virtualized, visualized, secured and ultimately, democratized.

Let’s dive right in to see how aaS and the new Hybrid DBA role are shaking up the industry and what they’re all about!

The Future-ready Hybrid DBA

I chatted about the evolution of the DBA in a previous blog within this series called Introduction to SQL Server Data Estate Modernization. I can remember, only a few short years ago, many of my peers were vocalizing the slow demise of the DBA. I never once agreed with their opinion. Judging from the recent data-centric revolution, the DBA is not going anywhere.

But the million-dollar question is: how will the DBA seamlessly manage all the different attributes of data mentioned above? By aligning the role with the skills that will be required to excel in exceptional fashion.

Getting Acquainted with the aaS Shortlist

To begin getting assimilated with what aaS has in store, the DBA will need to get his/her aaS in order and become familiar with the various services. I am not a huge acronym guy (in meetings I often find that folks don’t even know the words behind their own acronyms), so let’s spell out a shortlist of aaS acronyms, so there’s no guesswork at their meaning.

And this is by no means a be-all-end-all list – technology is advancing at hyper-speed – as Heraclitus of Ephesus philosophized, “Change is the only constant in life.”

Why all this aaS, you ask?

Because teams must align with some, if not all, of these services to enable massive scalability, multitenancy, independence, and rapid time to value. These services make up the layers of the cake that comprise the true solution.

Let’s take SQL Server 2019, for instance, a new and incomparable version that’s officially generally available and whose awesome goodness will be the topic of many of my blogs in 2020!

SQL 2019 Editions and Feature Sets

I started with SQL Server 6.5, before a slick GUI existed. Now, with SQL 2019, we have the ability to manage a stateful app, SQL Server, in a container, within a cluster, and as a platform.

That is “awesome sauce” spread all around!

And running SQL Server 2019 on Linux? We’re back to scripting again. I love it! Scripting is testable, repeatable, sharable, and can be checked into source control. That, quite simply, enables collaboration, which in turn makes a better product. You’ll need to be VCaaS-enabled to store all these awesome scripts.

Is your entire production database schema in source control? If it isn’t, script it out and put it there. A backup of a database is not the same as version control of your scripts. DBAs often tell me their production code resides in the database, which is backed up. It’s best practice, however, to begin your proven, repeatable pattern by scripting out the DDL, checking it into a VCaaS, and deploying from there. Aligning your database DDL and test data sets with a CI/CD application development pipeline should be addressed ASAP. By the way, try this with ADS (Azure Data Studio) and the dacpac extension. Many of the aaSes can be managed with ADS. And, with the #SQLFamily third-party community, the ADS extension tooling will only continue to grow.

DBA as a DevOps Engineer?

Reading this blog as a DBA, you may be thinking I don’t need to know about all these aaS details. Simply put, yes, you do! In many respects, the DBA is now the new DevOps engineer, utilizing all the services listed above. This hybrid role is becoming more of a full stack developer – providing support for multiple scripting languages, an “IT Polyglot.”

Moreover, as a DBA, you are already executing on all of these attributes, and you may not even know it! It turns out that DBAs have always been a part of DevOps. Think about it: you already align with Dev — writing SQL, tuning performance, doing object analysis and reporting — and you already do Ops — configuring servers and VMs, running backups and restores, and tuning the OS, network, and storage. So, you are uniquely positioned to now offer all this as a service.

Congratulations! Your organization is looking to you to lead the charge as the data-centric view of “all things” has arrived!

The Hybrid DBA Role and His or Her Requisite “aaSes”

Let’s look deeper into the chain of services.

As a hybrid DBA, you want to provide DBaaS. What will you need to provide and understand from a tooling perspective? Well, you are going to need to understand IaaS and infrastructure as code, to define and automate your core database target workload servers. You will be grouping these services into networking namespaces, so there is a bit of NaaS alignment. There are even more services if you have data sources residing in a public or on-premises cloud. Maybe you have a FaaS enabled in a public cloud that is writing to an unstructured data store. A FaaS, remember, can very much be the replacement for older ETL processes.

Next, for agility, let’s containerize these things using CaaS. What will you need to manage and orchestrate those containers? KaaS, or Kubernetes, which provides the opportunity to build a SQL Server 2019 Big Data Cluster. In theory, you could say the Kubernetes platform is PaaS with the flexibility of IaaS. Now we are mixing aaSes together for maybe yet another acronym. :)

Have you started noodling around with SQL 2019 yet? No? Why not? Microsoft has done an incredible job ramping up the product for its launch. I highly recommend jumping in and starting to play with it!

The core Microsoft SQL Server team has put together this excellent reference for SQL 2019 to get you started.

As a DBA, you can then provide PaaS in a few different ways. Again, DBaaS is one form of PaaS. You have a true data virtualization layer with SQL Server 2019 that can also provide PaaS for sourcing across all the data sources, simply with a T-SQL script and a PolyBase external table that you will enable. There are also many self-service reporting enablement features with Power BI.

Here is where I like to say XaaS enables data virtualization!

The Beneficial Impact of the Hybrid DBA on Various BUs

I also hear, from the App owners and IT decision makers, that the database team must react to requests, at a much faster rate than the previous norm.

The hybrid DBA will accelerate the cycles for many, if not all, of these business units in the following ways:

Product Managers and Enterprise Architects

  • Reduced/eliminated waits for infrastructure
  • Simultaneous features/projects
  • Improved and reliable software quality
  • Governance around data management and access

Infrastructure Administrators

  • Drastic reduction in turn-around times
  • Operational simplicity
  • Reduced human errors through automation

Developer teams

  • Accelerate iterations of testing and development
  • Secure data sets – infused into the DevOps cycles
  • Production deployments with database alters

LOB Executives

  • Lower CapEx for storing multiple copies
  • Improved employee productivity
  • Faster time to market

Embrace the Containerized World

Big Data Clusters run on a Linux platform in containers within a Kubernetes cluster. Remember, SQL Server on Linux is the same codebase as SQL Server on Windows, with the only difference being SQLPAL (the SQL Platform Abstraction Layer) and a host extension. This host extension allows SQL Server to interact with the Linux kernel. The hybrid DBA will need to understand how containers are managed and provisioned. Ensuring a solid storage foundation is paramount here, before embarking on containers in production. (Hint: Dell Technologies is really good at this!) Additionally, the hybrid DBA should also embrace PowerShell, Bash, R, and Python, all in the name of more intelligent tooling across a wide range of modern development platforms and technologies.

If, as a DBA, you are concerned, worried, or downright scared of a containerized world and an orchestrator like Kubernetes, there is no need to worry at all. In fact, there are similarities that you will absolutely love. With K8s there is a declarative object configuration management technique, which is recommended for production.

The Declarative Perspective and Release of SQL 2019

Something else we know and love is declarative: SQL Server. Declarative languages let users or administrators express a desired state or, in the case of SQL Server, the result set they want a query to retrieve, providing broad instructions about what is to be done rather than how to do it. Then, the declarative engine goes to work. You then deal with the results, not the process or automation. Kubernetes and SQL Server, from a declarative perspective, act exactly the same way. I say this because data professionals, especially the DBA, are wired a certain way. They like patterns and they like to declare, “Here is my work to do, go make it happen”.
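
As a small, hypothetical illustration of that declarative mindset, the query below states only the result that is wanted; the engine decides the join strategy, index usage, and parallelism:

-- Hypothetical tables; declare the result you want, not how to compute it.
SELECT TOP (5) c.CustomerName, SUM(o.OrderTotal) AS Revenue
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '2019-01-01'
GROUP BY c.CustomerName
ORDER BY Revenue DESC;

A Kubernetes manifest works the same way: you declare the desired state, and the control plane works out how to reach and maintain it.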

With the release of SQL 2019, the Hybrid DBA will continue to be in the driver’s seat. You will now administer from the edge (SQL Server Edge) to on-premises SQL Server to additional SQL workloads that may also be in a public cloud. To that end, any cloud, all with SQL Server. Companies large and small will benefit from intelligent SQL Server deployments.

Dell Technologies Is Here to Help You Succeed

Dell Technologies, as your company’s trusted advisor, is positioned and aligned to help you right now with our solutions and services! We also have whitepapers on SQL 2019 available for your reference to help determine which solutions are correct for your workloads.

Dell Technologies offers solutions and services to address hyper-scale data requirements and next-gen hybrid, including:

Many more solutions will become available in the very near future. Why wait? A Dell Technologies Service Expert is ready and able to help you get your aaS in order!

Summary

The hybrid DBA needs to abide by the creed of #neverstoplearning. If you’re stuck and complacent, you will get passed up. There really is no excuse. The cloud playground options are abundant, at minimal or no cost, for playing and learning within these environments. Everything in the aaS table above can be vetted and tested within a public or on-premises cloud environment.

Learning a new technology can be simple and fun. Do what I do. Make learning part of your daily life. Carve out a minimum of 2-hour blocks, 5 days a week, and maybe an additional Saturday or Sunday morning to simply play with tech and learn. It works, and you will be much better off in the long run. Personally, some of my best learning happens on a quiet Sunday morning when the email and phone requests are non-existent.

What changes are you willing to incorporate into your daily routine to keep your skills fresh and relevant?

Other Blogs in This Series

Best Practices to Accelerate SQL Server Modernization (Part I)

Best Practices to Accelerate SQL Server Modernization (Part II)

Introduction to SQL Server Data Estate Modernization

Recommended Reading

Running Containerized Applications on Microsoft Azure’s Hybrid Ecosystem – Introduction

Deploy K8s Clusters into Azure Stack Hub User Subscriptions

Deploy a Self-hosted Docker Container Registry on Azure Stack Hub

Read Full Blog
  • SQL Server
  • Microsoft
  • virtualization

In a data-driven world, innovation changes are forcing a new paradigm

Stephen McMaster Stephen McMaster

Wed, 19 Aug 2020 22:15:07 -0000

|

Read Time: 0 minutes

For the last two decades, I’ve enjoyed working at Dell Technologies focusing on customer big-picture ideas. Not just focusing on hardware changes, but on a holistic solution including hardware, software, and services that achieves a business objective by addressing customer goals, problems, and needs. I’ve also partnered with my clients on their transformation journey. The concepts of digital transformation and IT transformation have been universal themes and turning these ideas into realities is where the rubber meets the road.

Now as I engage with customers and partners about Microsoft solutions, an incremental awareness of the idea of “data”, and how data is accessed and leveraged, has become evident. A foundational shift around data has occurred.

We are now living in a new era of data management, but many of us were not aware this change was developing. This has crept up on us without the fanfare you might see from a new technology launch. When you take a step back and look at these shifts in their entirety you see these changes aren’t just isolated updates, but instead are amplifying their benefits within each other. This is a fundamental transformation in the industry, similar to when virtualization was first adopted 15 years ago.

For many, this change started to become apparent with the end of support for SQL Server 2008 earlier this year (along with support for all previous versions of the product). This deadline, coupled with the large install base that still exists on this platform, is helping the conversation along, but it’s not just a case of replacing the old with the new in a point-by-point swap-out. The doors opened in this new era force a completely different view and approach. We no longer need to have a SQL, Oracle, SAP, or Hadoop conversation – instead it becomes a holistic “data” point of view.

In our hybrid/multi-cloud world, there is not just one answer for managing data. Regardless of the type of data or where it resides, all the diverse data languages and methods of control, the word “data” can encompass a great deal.

Emerging technologies including IoT, 5G, AI and ML are generating greater amounts and varied types of data. How we access that data and derive insight from it becomes critical, but we have been limited by people, processes, and technology.

People have become stuck in the rut of, “I want it to be this way because it has always been this way.” Therefore, replacing dated or expired architectures becomes a swap-out story versus a re-examine story, and new efficiencies are completely missed. Processes within the organization become rigid with that same mindset and, dare I say, politics, where access to that data becomes path-limited. Technology is influenced by both people and process as “the old way is good enough, right?”

The value/importance of “data” really points back to the insight that you drive from it. Having a bunch of ones and zeros on a hard drive is nice but what you derive from that data is critically important. The conversations I have with customers are not so much, “Where is my data and how is it stored?” The conversation is more commonly, “I have a need to get business analytics from my proprietary data so I can impact my customers in a way I never did before.”

To put my Stephen Covey hat on, we are in a paradigm change. What is occurring is incredibly impactful for how customers should view and treat data. There are three key areas that we will examine with the new paradigm today and we’ll start with data gravity.

Data Gravity

Data gravity is the idea that data has weight. Wherever data is created, it tends to remain. Data stores are getting so big that moving data around is becoming expensive, time-constrained, and detrimental to database performance. This, in turn, results in silos of data by location and type. Versioning and lack of upgrade/migration/consolidation of databases also perpetuate these silo challenges.

As with physical gravity, we understand that data’s mass encourages applications and analytics to orbit that data store where it resides. Then, application dependency upon the data’s language version cements the silo requirement even further. We have witnessed the proliferation of intelligent core and edge devices, as well as bringing applications to that place where the data resides – at the customer location.

Silos of data based on language, version, and location can’t be readily accessed from a common interface. If I am a SQL user, how do I get that Oracle data I need? I cannot just pull all my data together into a huge common dataset – it’s just too big. We see these silos in almost every customer environment.

Data Virtualization

This is where data virtualization comes into the story. Please note this is not a virtual machine (a common confusion on the naming). Think instead of this being data democratization: the ability to allow all the people access to all the data – within reason, of course. Data virtualization allows you to access the data where the data is stored without a massive ETL event.  You can see and control the data regardless of language, version, or location. The data remains where it is, but you have real-time source access to this data. You can access data from remote or diverse sources and perform actions on that data from one common point of view.

Data virtualization allows access into the silos that, in the past, have been very rigid, blocking the ability to effectively use that data. From a non-SQL Server point of view, having unstructured data or structured data in a different format (like Oracle), required you to hire a specialized person with a specific skill set to access that data. With data virtualization, that is no longer a barrier as these silo walls are reduced. Data virtualization becomes data democratization, meaning that all people (with appropriate permissions) can access and do things with that data.

From a Microsoft point of view, that technology came into reality with PolyBase. PolyBase with SQL Server allows access with T-SQL, the most commonly used database language. I started using this resource with the Analytics Platform System (APS) many years ago. After Microsoft placed this tool into SQL Server in 2016 and updated its functionality tremendously in SQL Server 2019, we can now ingest data from Hadoop and Oracle, and use orchestrators like Spark, to access all these disparate data sources. To visualize this, think of PolyBase with SQL Server 2019 as a wrapper around these diverse silos of data. You can now access all these disparate data sources within one common interface: T-SQL using PolyBase.
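
For example, a hedged sketch of that wrapper idea, assuming PolyBase with Hadoop connectivity is enabled and using hypothetical names, might look like this:

-- Hypothetical names; assumes PolyBase Hadoop connectivity is configured.
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode.example.com:8020');

CREATE EXTERNAL FILE FORMAT CsvFileFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ',', USE_TYPE_DEFAULT = TRUE));

-- The external table is only metadata; the files stay in HDFS.
CREATE EXTERNAL TABLE dbo.WebClickstream
(
    ClickTime  DATETIME2,
    CustomerId INT,
    Url        NVARCHAR(400)
)
WITH (LOCATION = '/data/clickstream/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFileFormat);

-- One T-SQL statement joins the HDFS data with a local relational table.
SELECT c.CustomerName, COUNT(*) AS Clicks
FROM dbo.WebClickstream AS w
JOIN dbo.Customers AS c ON c.CustomerId = w.CustomerId
GROUP BY c.CustomerName;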

Holistic Solution

The final tenet of this fundamental change is the advent of containerization. This enablement technology allows abstraction beyond virtualization and runs just about anywhere. Data becomes nimble and you can move it where needed.

It’s amazing how pervasive containers have become. It’s no longer a science experiment, but is quickly becoming the new normal. In the past, many customers had a forklift perception that when a new technology comes into play, it requires a lift and replace. I’ve heard, “What I am doing today is no longer good, so I have to replace it with whatever your new product is, and it will be painful.”

I’ve been using the phrase that containerization enables “all the things”. Containerization has been adopted by so many architectures that it’s easier to talk about where you can’t do it versus where you can. Traditional SAN, converged, hyperconverged, hybrid cloud — you can place this just about anywhere. There is not just one right path here — do what makes sense for you. It becomes a holistic solution.

There are multiple ways to address the business need that customers have even if it’s leveraging existing designs that they’ve been using for years. Dell Technologies has published details of several architectures supporting SQL Server and has just recently published the first of many papers on SQL Server in containers.

The answer is, you can do all these things with all these architectures. By the way, this isn’t specific to Microsoft and SQL Server. We see similar architectures being created in other databases and technology formats.

These three tenets each support the new paradigm and reinforce one another. Data gravity is supported by data virtualization and containerization. Data virtualization allows silos when needed (gravity) and is enabled by containerization. Containerization gives access to silos (wrapper) and is the mechanism to activate data virtualization.

From a Dell Technologies point of view, we are aggressively embracing these tenets. Our enablement technologies to support this paradigm are called out in three discrete points – accelerate, protect, and reuse. We will review these points in a separate blog.

There is much more to come as we continue this journey into the new era of data management. Dell Technologies has deeply invested in resources around this topic with several recent publications and reference designs embracing this paradigm change. Our leadership on this topic is the result of our 30+ year relationship with Microsoft and our continuing “better together” story. A detailed white paper that further expands the ideas within this blog is available here.

Read Full Blog
  • SQL Server
  • Microsoft

Introduction to SQL Server data estate modernization

Robert F. Sonders Robert F. Sonders

Mon, 03 Aug 2020 16:08:26 -0000

|

Read Time: 0 minutes

This blog is the first in a series discussing what’s entailed in modernizing the Microsoft SQL server platform.

I hear the adage “if it ain’t broke, don’t fix it” a lot during my conversations with clients about SQL Server, and many times it’s from the database administrators. Many DBAs are reluctant to change—maybe they think their jobs will go away. It’s my opinion that their roles are not going anywhere, but they do need to expand their knowledge to support coming changes. Their role will involve innovation at a different level, merging application technology (think CI/CD pipelines) into the mix. The DBA will also need to trust that their hardware partner has fully tested and vetted a solution for the best possible SQL performance and can provide a future-proof architecture to grow and change as the needs of the business grow and change.

The New Normal Is Hybrid Cloud

When “public clouds” first became mainstream, the knee-jerk reaction was everything must go to the cloud! If this had happened, then the DBA’s job would have gone away by default. This could not be farther from the truth and, of course, it’s not what happened. With regards to SQL Server, the new normal is hybrid.

Now some years later, will some data stores have a place in public cloud?

Absolutely, they will.

However, it’s become apparent that many of these data stores are better suited to remaining on-premises. There is also a current trend of data being repatriated from the public cloud back to on-premises environments due to cost, management, and data gravity (keeping the data close to the business need and/or specific regulatory compliance needs).

The SQL Server data estate can be vast and, in some cases, a hodgepodge of “Rube Goldberg” design processes. In the past, I was the lead architect of many of these designs, I am sad to admit. (I continue to ask forgiveness from the IT gods.) Today, IT departments manage one base set of database architecture technology for operational databases, another potential hardware-partner caching layer, and yet another architecture for data analytics and emerging AI.

Oh wait…one more point…all this data needs to be at the fingertips of a mobile device and edge computing. Managing and executing on all these requirements, in a real-time fashion, can result in a highly complex and specialized platform.

Data Estate Modernization

The new normal everyone wants is to keep it as simple as possible. Remember those “Rube Goldberg” designs referenced above? They’re no longer applicable. Simple, portable, and seamless execution is key. As data volumes increase, DBAs partnering with hardware vendors need to simplify, as security, compliance, and data integrity remain fixed concerns. However, there is a new normal for data estate management: one where large volumes of data can be referenced in place, with push-down compute, or seamlessly snapped and copied in a highly efficient manner to other managed environments. The evolution of SQL Server 2019 will also be a part of the data estate orchestration solution.

Are you an early adopter of SQL 2019?

Get Modern: A Unified Approach of Data Estate Management

A SQL Server Get Modern architecture from Dell EMC can consolidate your data estate, defined with our high-value core pillars that align perfectly with SQL Server.

The pillars will work with any size environment, from the small and agile to the very large and complex SQL database landscape. There IS a valid solution for all environments. All the pillars work in concert to complement each other across all feature sets and integration points.

  • Accelerate – To not only accelerate and Future-Proof the environment but completely modernize your SQL infrastructure. A revamped perspective on storage, leveraging RAM and other memory technologies, to maximum effect.
  • Protect – Protect your database with industry-leading backups, replication, resiliency, and self-service deduplicated copies.
  • Reuse – Reuse snapshots. Operational recovery. Dev/Test repurposing. CI/CD pipelines.

Aligning along these pillars will bring efficiency and consistency to a unified approach of data estate management. The combination of a strong, consistent, high-performance architecture supporting the database platform will make your IT team the modernization execution masters.

What Are Some of the Compelling Reasons to Modernize Your SQL Server Data Estate?

Here are some of the pain point challenges I hear frequently in my travels chatting with clients. I will talk through these topics in future blog posts.

1. Our SQL Server environment is running on aging hardware:

  • Dev/Test/Stage/Prod do not have the same performance characteristics making it hard to regression test and performance test.

2. We have modernization challenges:

  • How can I make the case for modernization?
  • My team does not have the cycles to address a full modernization workstream.

3. The hybrid data estate is the answer… how do we get there?

4. We are at EOL (End of Life) for SQL Server 2008 / 2008R2 and Windows Server but are stuck due to:

  • ISV (Independent Software Vendor) “lock in” requirement to a specific SQL Server engine version.
  • Migration plan to modernize SQL cannot be staffed and executed through to completion.

5. We need to consolidate SQL Server sprawl and standardize on a SQL Server version:

  • Build for the future, where disruptive SQL version upgrades become a thing of the distant past. Think…containerized SQL Server. OH yeah!
  • CI/CD success – the database is a key piece of the puzzle for success.
  • Copies of databases for App Dev / Test / Reporting copies are consuming valuable space on disk.
  • Backups take too long and consume too much space.

6. I want to embrace SQL Server on Linux.

7. Let’s talk modern SQL Server application tuning and performance.

8. Where do you see the DBA role in the next few years?

Summary

I hope you will come along this SQL Server journey with me as I discuss each of these customer challenges in this blog series:

Best Practices to Accelerate SQL Server Modernization (Part I)

Best Practices to Accelerate SQL Server Modernization (Part II)

And, if you’re ready to consider a certified, award-winning Microsoft partner who understands your Microsoft SQL Server endeavors for modernization, Dell EMC’s holistic approach can help you minimize risk and business disruption. To find out more, contact your Dell EMC representative.

 

Read Full Blog
  • SQL Server
  • Microsoft

Recommendations for modernizing the Microsoft SQL Server platform

Robert F. Sonders Robert F. Sonders

Mon, 03 Aug 2020 16:08:49 -0000

|

Read Time: 0 minutes

This blog follows Introduction to SQL Server Data Estate Modernization and is the second in a series discussing what’s entailed in modernizing the Microsoft SQL Server platform and recommendations for executing an effective migration.

SQL Server—Love at First Sight Despite Modernization Challenges

SQL Server is my favorite Microsoft product and one of the most prolific databases of our time. I fell in love with SQL Server version 6.5, back in the day when scripting was king! Scripting is making a huge comeback, which is awesome! SQL Server, in fact, now exists in pretty much every environment—sometimes in locations IT organizations don’t even know about.

It can be a daunting task to even kick off a SQL modernization undertaking, especially if you have End of Life SQL Server and/or Windows Server running on aging hardware. My clients voice these concerns:

  1. The risk is too high to migrate. (I say, isn’t there a greater risk in doing nothing?)
  2. Our SQL Server environment is running on aging hardware—Dev/Test/Stage/Prod do not have the same performance characteristics, making it hard to regression test and performance test.
  3. How can I make the case for modernization? My team doesn’t have the cycles to address a full modernization workstream.

I will address these concerns, first, by providing a bit of history in terms of virtualization and SQL Server, and second, how to overcome the challenges to modernization, an undertaking in my opinion that should be executed sooner rather than later.

When virtualization first started to appear in data centers, one of its biggest value propositions was to increase server utilization, especially for SQL Server. Hypervisors increased server utilization by allowing multiple enterprise applications to share the same physical server hardware. While the improvement of utilization brought by virtualization is impressive, the amount of unutilized or underutilized resources trapped on each server starts to add up quickly. In a virtual server farm, the data center could have the equivalent of one idle server for every one to three servers deployed.

Fully Leveraging the Benefits of Integrated Copy Data Management (iCDM)

Many of these idle servers, in several instances, were related to Dev/Test/Stage. The QoS (Quality of Service) can also be a concern with these instances.

SQL Server leverages Always On Availability Groups as a native way to create replicas that can then be used for multiple use cases. The most typical deployments of these replicas are for high availability failover databases and for offloading heavy read operations, such as reporting, analytics and backup operations.
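
As a minimal illustration of that offload pattern (the availability group and replica names below are hypothetical, not part of any specific Dell EMC reference design), a secondary replica can be opened up for read-intent connections so reporting and backup traffic lands there instead of on the primary:

    -- Allow read-only connections on a hypothetical secondary replica
    ALTER AVAILABILITY GROUP [SalesAG]
        MODIFY REPLICA ON N'SQLNODE2'
        WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
    -- Reporting clients then connect with ApplicationIntent=ReadOnly in their connection string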

iCDM allows for additional use cases like the ones listed below with the benefits of inline data services:

  • Test/Dev for continued application feature development, testing, CI/CD pipelines.
  • Maintenance to present an environment to perform resource-intensive database maintenance tasks, such as DBCC CHECKDB operations (see the sketch after this list).
  • Operational Management to test and execute upgrades, performance tuning, and pre-production simulation.
  • Reporting to serve as the data source for any business intelligence system or reporting.
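
For the maintenance use case, once a snapshot copy has been presented and attached to a non-production host, the integrity check can run there instead of against the production replica. A minimal T-SQL sketch, assuming a hypothetical mounted copy named SalesDB_Copy:

    -- Full integrity check against the mounted copy, keeping the load off production
    DBCC CHECKDB (N'SalesDB_Copy') WITH NO_INFOMSGS, ALL_ERRORMSGS;

If the check reports corruption on the copy, it is of course still prudent to validate the source database as well.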

One of the key benefits of iCDM technology is the ability to provide a cost-efficient lifecycle management environment. iCDM provides efficient copy data management at the storage layer to consolidate both primary data and its associated copies on the same scale-out, all-flash array for unprecedented agility and efficiency. When combined with specific Dell EMC products, bullet-proof, consistent IOPS and latency, linear scale-out all-flash performance, and the ability to add more performance and capacity as needed with no application downtime, iCDM delivers incredible potential to consolidate both production and non-production applications without impacting production SLAs.

While emerging technologies, such as artificial intelligence, IoT and software-defined storage and networking, offer competitive benefits, their workloads can be difficult to predict and pose new challenges for IT departments.

Traditional architectures (hardware and software) are not designed for modern business and service delivery goals, which is yet another solid use case for a SQL modernization effort.

As I mentioned in my previous post, not all data stores will be migrating to the cloud. Ever. The true answer will always be a Hybrid Data Estate. Full stop. However, we need to modernize for a variety of compelling reasons.

5 Paths to SQL Modernization

Here’s how you can simplify making the case for modernization and what’s included in each option.

Do nothing (not really a path at all!):

  • Risky roll of the dice, especially when coupled with aging infrastructure.
  • Run the risk of a security exploit, regulatory requirement (think GDPR) or a non-compliance mandate.

Purchase Extended Support from Microsoft:

  • Supports SQL 2008 and R2 or Windows Server 2008 and R2 only.
  • Substantial cost using a core model pricing.
  • Costs tens of thousands of dollars per year…and that is for just ONE SQL instance! Ouch. How many are in your environment?
  • Available for 3 additional years.
  • True up yearly—meaning you cannot run unsupported for 6 months then purchase an additional 6 months. Yearly purchase only. More – ouch.
  • Paid annually, only for the servers you need.
  • Tech support is only available if Extended Support is purchased (oh…and that tech support is also a separate cost).

Transform with Azure/Azure Stack:

  • Migrate IaaS applications and databases to Azure / Dell EMC Azure Stack virtual machines (Azure, on-premises…Way Cool!!!).
  • Receive an additional 3 years of extended security updates for SQL Server and Windows Server (versions 2008 and R2) at no additional charge.
  • In both cases, there is a new consumption cost, however, security updates are covered.
  • When Azure Stack (on-premises Azure) is the SQL IaaS target, there are many cases where the appliance cost plus the consumption cost is still substantially cheaper than #2, Extended Support, listed above.
  • Begin the journey to operate in Cloud Operating Model fashion. Start with the Azure Stack SQL Resource Provider and easily, within a day, be providing your internal subscribers a full SQL PaaS (Platform as a Service).
  • If you are currently configured with a highly available Failover Cluster Instance on SQL 2008, this will be condensed into a single node. The operating system protection you had with Always On FCI is not available with Azure Stack. However, your environment will now be running within a Hyper-Converged Infrastructure, which does offer node failure (fault domain) protection, but not operating system protection or downtime protection during an operating system patch. There are trade-offs. Best to weigh the options for your business use case and recovery procedures.

Move and Modernize (the Best Option!):

  • Migrate IaaS instances to modern all Flash Dell EMC infrastructure.
  • Migrate application workloads to a new server operating system – Windows Server 2016 or 2019.
  • Migrate SQL Server databases to SQL Server 2017 and quickly upgrade to SQL 2019 when Generally Available. Currently SQL 2019 is not yet GA. Best guess, before end of year 2019.
  • Enable your IT teams with more efficient and effective operational support processes, while reducing licensing costs, solution complexity and time to delivery for new SQL Server services.
  • Reduce operating and licensing costs by consolidating SQL Server workloads. With the Microsoft SQL Server per-core licensing model in SQL Server 2012 and above, moving workloads to a virtual/cloud environment can often present significant licensing savings. In addition, through Dell Technologies advisory services, we typically discover many underutilized SQL Server instances within the enterprise SQL Server landscape, which presents an opportunity to reduce CPU core counts or move SQL workloads to shared infrastructure to maximize hardware resource utilization and reduce licensing costs.

Rehost 2008 VMware Workloads:

  • Run your VMware workloads natively on Azure
  • Migrate VMware IaaS applications and databases to Azure VMware Solution by CloudSimple or Azure VMware solution by Virtustream
  • Get 3 years of extended security updates for SQL Server and Windows Server 2008 / R2 at no additional charge
  • vSphere network full compatibility

Remember, your Windows Server 2008 and 2008 R2 servers will also be EOL January 14, 2020.

Avoid Risks and Disasters Related to EOL Systems (in Other Words, CYA)

Can your company afford the risk of both an EOL operating system and an EOL database engine? Pretty scary stuff. It really makes sense to look at both. Do your business leaders know this risk? If not, you need to be vocal and explain the risk fully. They need to know, understand, and sign off on the risk as something they personally want to absorb when an exploit hits. It is somewhat of a CYA for IT, data professionals and line-of-business owners. If you need help here, my team can help engage your leadership teams with you!

In my opinion, the larger problem, between SQL EOL and Windows Server EOL, is the latter. Running an unsupported operating system that hosts an unsupported SQL Server workload is a serious recipe for disaster. Failover Cluster Instances (Always On FCI) were the normal way to provide operating system high availability with SQL Server 2008 and lower, which compounds the issue of multiple unsupported environment levels. Highly available FCI environments are now left unprotected.

Summary

Some migrations will be simple, others much more complex, especially with mission critical databases. If you have yet to kick off this modernization effort, I recommend starting today. The EOL clock is ticking. Get your key stakeholders involved. Show them the data points from your environment. Send them the link to this blog!

If you continue to struggle or don’t want to go it alone, Dell Technologies Consulting Services, the only holistic SQL Server workload provider, can help your team every step of the way. Take a moment to connect with your Dell Technologies Service Expert today and begin moving forward to a modern platform.

Other Blogs in This Series:

Best Practices to Accelerate SQL Server Modernization (Part II)

Introduction to SQL Server Data Estate Modernization

Read Full Blog
  • SQL Server
  • Microsoft

Accelerate SQL Server modernization: Getting started

Robert F. Sonders Robert F. Sonders

Wed, 03 Aug 2022 21:45:58 -0000

|

Read Time: 0 minutes

In Part I of this blog series, I discussed issues concerning EOL SQL Server and Windows Server running on aging hardware, and three pathways to SQL Server modernization. In this blog, I’ll discuss how a combination of Dell Technologies Consulting Services and free tools from Microsoft can help you get started on your SQL migration journey and how to meet your SQL Server data modernization objectives.

Beginning Your SQL Migration Journey

During my client conversations, I hear three repeating pain points: (1) they would love to modernize but don’t have the cycles or staff to drive a solid, iterative SQL migration approach through to completion, (2) they still need to “keep the lights on” and (3) even when they have the skills, they lack the team cycles to execute on the plan.

This is where Dell Technologies Consulting Services can help you plan a solid foundational set of goals and develop a roadmap for the modernization. Here are the key points our SQL modernizations teams address:

  • Discover the as-is SQL environment, including the current state of all in-scope SQL servers, associated workloads and configurations (see the inventory sketch after this list).
  • Inventory and classify applications (which align to SQL databases) and all dependencies. Critical here to think through all the connections, reporting, ETL/ELT processes, etc.
  • Group and prioritize the SQL databases (or entire instance) by application group and develop a near-term modernization plan and roadmap for modernization. Also an excellent time to consider database consolidation.
  • Identify the rough order of magnitude for future state compute, storage and software requirements to support a modernization plan. Here is where our core product teams would collaborate with the SQL modernization teams. This collaboration is a major value add for Dell Technologies.
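
To ground the discovery step, here is a minimal T-SQL sketch of the kind of per-instance inventory that feeds the exercise above. It is illustrative only, not the tooling our consulting teams actually use, and it assumes nothing beyond the standard system catalog:

    -- Instance- and database-level inventory: version, edition, compatibility, recovery model, size
    SELECT
        SERVERPROPERTY('ServerName')     AS instance_name,
        SERVERPROPERTY('ProductVersion') AS engine_version,
        SERVERPROPERTY('Edition')        AS edition,
        d.name                           AS database_name,
        d.compatibility_level,
        d.recovery_model_desc,
        CAST(SUM(CAST(mf.size AS BIGINT)) * 8.0 / 1024 / 1024 AS DECIMAL(10, 2)) AS size_gb
    FROM sys.databases AS d
    JOIN sys.master_files AS mf
      ON mf.database_id = d.database_id
    GROUP BY d.name, d.compatibility_level, d.recovery_model_desc
    ORDER BY size_gb DESC;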

Additionally, there are excellent, free tools from Microsoft to help your teams assess, test and begin the migration journey. I will talk about these tools below.

Free Microsoft Tools to Help You Get Started

Microsoft has stated it may assist if there are performance issues with SQL procedures and queries after an upgrade. It is best to utilize the Query Tuning Assistant while preparing your database environment for a migration; a short Query Store sketch follows the list below.

Microsoft provides query plan shape protection when:

  • The new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL Server version (source) was running.
  • The same supported database compatibility level is used both at the target SQL Server and source SQL Server.
  • Any query plan shape regression (as compared to the source SQL Server) that occurs in the above conditions will be addressed (contact Microsoft Customer Support if this is the case).
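
In practice, the usual upgrade workflow behind these conditions is to enable the Query Store, capture a plan baseline under the old compatibility level, and only then raise it. A minimal sketch, assuming a hypothetical database named SalesDB and a SQL Server 2019 target:

    -- Capture plan and runtime history before changing optimizer behavior
    ALTER DATABASE [SalesDB] SET QUERY_STORE = ON;
    ALTER DATABASE [SalesDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

    -- After a representative baseline period, move to the new compatibility level
    ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 150;

The Query Tuning Assistant then compares before-and-after plans captured in the Query Store and suggests fixes for any regressed queries.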

Here are a few other free Microsoft tools I recommend running ASAP that will enable you to understand your environment fully and provide measurable and actionable data points to feed into your SQL modernization journey. Moreover, proven and reliable upgrades should always start with these tools (also explained in detail within the Database Migration Guide):

  • Discovery – Microsoft Assessment and Planning Toolkit (MAP)
  • Assessment – Database Migration Assistant (DMA)
  • Testing – Database Experimentation Assistant (DEA)

Database engine upgrade methods are listed here. Personally, I am not a fan of any in-place upgrade option. I prefer greenfield and a minimal cutover with only an application connection string change. There are excellent, proven migration paths to minimize the application downtime of a database migration. This is another place where our Professional Services SQL Server team can provide value and reliability to execute a successful transition.

Here are a few cutover migration options for you to consider:

  • Log Shipping – Cutover measured in (typically) minutes
  • Replication – Cutover measured in (potentially) seconds
  • Backup and Restore – This is going to take a while for larger databases; however, a full, differential and T-log backup/restore process can be automated (see the sketch after this list)
  • Filesystem/SAN Copy – Can also take time
  • Always On Groups (>= SQL 2012) – Cutover measured in (typically) seconds – rolling upgrade
  • *Future – SQL on Containers – No downtime – Always On Groups rolling upgrade
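
As a sketch of the backup-and-restore path (the database name and backup share are hypothetical, and a differential backup can be layered in the same way), the cutover boils down to a tail-log backup on the source followed by a restore chain on the target:

    -- On the source: full backup, then a final tail-log backup that leaves the old database restoring
    BACKUP DATABASE [SalesDB] TO DISK = N'\\backupshare\SalesDB_full.bak'
        WITH COMPRESSION, CHECKSUM;
    BACKUP LOG [SalesDB] TO DISK = N'\\backupshare\SalesDB_tail.trn'
        WITH NORECOVERY;

    -- On the target: restore the chain, recover, then repoint the application connection string
    RESTORE DATABASE [SalesDB] FROM DISK = N'\\backupshare\SalesDB_full.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG [SalesDB] FROM DISK = N'\\backupshare\SalesDB_tail.trn'
        WITH RECOVERY;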

A consideration for you — all these tools and references are great — but does your team have the cycles and skills to execute the migration?

3 Pillars to Help You with Your SQL Server Data Modernization Objectives

As I mentioned at the start of this blog, it makes perfect sense to use the experts from Dell Technologies Consulting Services for your SQL Server migration. Our Consulting Services teams are seasoned in SQL modernization processes, including pathways to Azure, Azure Stack, the Dell EMC Ready Solutions for SQL Server, and existing hardware or cloud, and these pathways align perfectly with a server and storage refresh if you have aging hardware (to be covered in a future blog).

Within Dell Technologies Consulting Services, we support 3 pillars to help you with your SQL Server data modernization objectives.

Platform Consulting Services

Dell Technologies Consulting Services provides planning, design and implementation of resilient data architectures with Microsoft technology both on-premises and in the cloud. Whether you are installing or upgrading a data platform, migrating and consolidating databases to the Cloud (Public, Private, Hybrid) or experiencing performance problems with an existing database, our proven methodologies, best practices and expertise, will help you make a successful transition.

Data Modernization Services

Modernizing your data landscape improves the quality of data delivered to stakeholders by distributing workloads in a cost-efficient manner. Platform upgrades and consolidations can lower the total cost of ownership while efficiently delivering data into the hands of stakeholders. Exploring data workloads in the cloud, such as test/development, disaster recovery or active archives, provides elastic scale and reduced maintenance.

Business Intelligence and Analytics Services

By putting data-driven decisions at the heart of your business, your organization can harness a wealth of information, gain immediate insights, drive innovation, and create competitive advantage. Dell Technologies Consulting Services provides a complete ecosystem of services that enable your organization to implement business intelligence and create insightful analytics for on-premises, public cloud, or hybrid solutions.

We have proven methodologies in place, with a focus on driving a repeatable and successful SQL Server migration.

3 Approaches to Drive a Repeatable and Successful SQL Server Migration

Migration Procedures and Validation Approach

A critical success factor for migrations is ensuring the migration team has a well-documented set of migration processes, procedures and a validation plan. Dell Technologies Consulting Services ensures that every team member involved in the migration process has a clear set of tasks and success metrics to measure and validate the process throughout the migration lifecycle.

Tools-based Migration and Validation

Our migration approach includes tools-based automation solutions and scripts to ensure the target state environment is right-sized and configured to exacting standards. We leverage industry standard tools to synchronize data from the source to target environments. Lastly, we leverage a scripted approach for validating data consistency and performance in the target environment.

Post Migration Support

Once the migration event is complete, our consultants remain at the ready for up to 5 business days, providing Tier-3 support to assist with troubleshooting and mitigating any issues that may manifest in the SQL environment as a result of the migration.

Considerations for the Data Platform of the Future

There are a number of pathways for modernizing SQL Server workloads. Customers need flexibility when it comes to the choice of platform, programming languages and data infrastructure to get the most from their data. Why? In most IT environments, platforms, technologies and skills are as diverse as they have ever been, so the data platform of the future needs to let you build intelligent applications on any data, any platform, any language, on premises and in the cloud.

SQL Server manages your data, across platforms, on-premises and cloud. The goal of Dell Technologies Consulting Services is to meet you where you are, on any platform, anywhere with the tools and languages of your choice.

SQL Server now has support for Windows, Linux & Docker Containers. Kubernetes orchestration with SQL 2019 coming soon!

Additionally, SQL allows you to leverage the language of your choice for advanced analytics – R and Python.

Where Can You Move and Modernize these SQL Server Workloads?

Microsoft Azure and Dell EMC Cloud for Microsoft Azure Stack

Migrate your SQL Server databases without changing your apps. Azure SQL Database is the intelligent, fully managed relational cloud database service that provides the broadest SQL Server engine compatibility. Accelerate app development and simplify maintenance using the SQL tools you love to use. Reduce the burden of data-tier management and save time and costs by migrating workloads to the cloud.  Azure Hybrid Benefit for SQL Server provides a cost-effective path for migrating hundreds or thousands of SQL Server databases with minimal effort. Use your SQL Server licenses with Software Assurance to pay a reduced rate when migrating to the cloud. SQL 2008 support will be extended for three years if those IaaS workloads are migrated to Azure or Azure Stack.

Another huge value add, with Azure Stack and SQL Server, is the SQL Server Resource Provider. Think SQL PaaS (Platform as a Service). Here is a quick video I put together on the topic.

The SQL RP does not execute the SQL Server engine. However, it does manage the various SQL Server instances that can be provided to tenants as SKUs for SQL database consumption. These managed SQL Server SKUs can exist on, or off, Azure Stack.

Now that is cool if you need big SQL Server horsepower!

Dell EMC Proven SQL Server Solutions

Dell EMC Proven solutions for Microsoft SQL are purpose-designed and validated to optimize performance with SQL Server 2017. Products such as Dell EMC Ready Solutions for Microsoft SQL help you save the hours required to develop a new database and can also help you avoid the costly pitfalls of a new implementation, while ensuring that your company’s unique needs are met. Our solution also helps you reduce costs through hardware resource consolidation and SQL Server licensing savings, by consolidating and simplifying SQL systems onto modern infrastructure that supports mixed DBMS workloads, making them future-ready for data analytics and next-generation real-time applications.

Leverage Existing Cloud or Physical Infrastructure

Enable your IT teams with more efficient and effective operational support processes, while reducing licensing costs, solution complexity and time to delivery for new SQL Server services. Dell Technologies Consulting Services has experience with all the major cloud platforms. We can assist with planning and implementation services to migrate, upgrade and consolidate SQL Server database workloads on your existing cloud assets.

Consolidate SQL Workloads

Reduce operating and licensing costs by consolidating SQL Server workloads. With the Microsoft SQL Server per-core licensing model in SQL Server 2012 and above, moving workloads to a virtual/cloud environment can often present significant licensing savings. In addition, Dell Technologies Consulting Services typically discovers many underutilized SQL Server instances within the enterprise SQL Server landscape, which presents an opportunity to reduce CPU core counts or move SQL workloads to shared infrastructure to maximize hardware resource utilization and reduce licensing costs.

Summary

Some migrations will be simple, others much more complex, especially with mission critical databases. If you have yet to kick off this modernization effort, I recommend starting today. The EOL clock is ticking. Get your key stakeholders involved. Show them the data points from your environment. Use the free tools from Microsoft. Send them the link to this blog!

If you continue to struggle or don’t want to go it alone, Dell Technologies Consulting Services, the only holistic SQL Server workload provider, can help your team every step of the way. Take a moment to connect with your Dell Technologies Service Expert today and begin moving forward to a modern platform.

Blogs in the Series

Best Practices to Accelerate SQL Server Modernization (Part I)

Introduction to SQL Server Data Estate Modernization

 

Read Full Blog
  • SQL Server
  • PowerEdge
  • XtremIO
  • Microsoft

New Dell EMC Ready Solution powers SQL Server, the complete performance platform

Sam Lucido Sam Lucido

Mon, 03 Aug 2020 16:09:44 -0000

|

Read Time: 0 minutes

Working on the new Dell EMC Ready Solution for SQL Server was like going from 0 to 60 mph in under 3 seconds. The exhilaration of being pushed into the seat as the road roars past in a blur is absolute fun. That’s what the combination of Dell EMC PowerEdge R840 servers and the new Dell EMC XtremIO X2 storage array did for us in our recent tests.

The classic challenge with most database infrastructures is diminishing performance over time. To use an analogy, it’s like gradually increasing the load a supercar must pull until its 0-to-60 time just isn’t impressive anymore. In the case of databases, the load is input/output operations per second (IOPS). As IOPS increase, response times can slow and database performance suffers. What is interesting is how this performance problem happens over time. As more databases are gradually added to an infrastructure, response times slow by a fraction at a time. These incremental hits on performance can condition application users to accept slower performance—until one day someone says, “Performance was good two years ago but today it’s slow.”

When reading about supercars, we usually learn about their 0-to-60 mph time and their top speed. While the top speed is interesting, how many supercars have you seen race by at 200+ mph? Top speeds apply to databases too. Perhaps you have read a third-party study that devoted a massive hardware infrastructure to one database, thereby showing big performance numbers. If only we had the budget to do that for all our databases, right? Top speeds are fun, but scalability is more realistic as most infrastructures will be required to support multiple databases.

Dell EMC Labs took the performance scalability approach in testing the new SQL Server architecture. Our goals were aggressive: Run 8 virtualized databases per server for a total of 16 databases running in parallel, with a focus on generating significant load while maintaining fast response times. To make the scalability tests more interesting, 8 virtualized databases used Windows Server Datacenter on one server and the other 8 databases used Red Hat Enterprise Linux on another server. Figure 1 shows the two PowerEdge R840 servers and the 8-to-1 consolidation ratio (on each server) achieved in the tests.

Figure 1: PowerEdge R840 servers

Quest Benchmark Factory was used to create the same TPC-E OLTP workload across all 16 virtualized databases. On the storage side, XtremIO X2 was used to accelerate all database I/O. The XtremIO X2 configuration included two X-Brick modules, each with 36 flash drives for a total of 72. According to the XtremIO X2 specification sheet, a 72-drive configuration can achieve 220,000 IOPS at 0.5 milliseconds (ms) of latency with a mixture of 70 percent reads and 30 percent writes using 8K blocks. Figure 2 shows the two X-Brick configuration of the X2 array with some of the key features that make the all-flash system ideal for SQL Server databases.

Figure 2: XtremIO X2

Before we review the performance findings, let’s talk about IOPS and latency. IOPS is a measure that defines the load on a storage system. This measurement has greater context if we understand the maximum recommended IOPS for a storage system for a specific configuration. For example, 16 databases running in parallel don’t represent a significant load if they are only generating 20,000 IOPS. However, if the same databases generated 200,000 IOPS, as they did on the XtremIO X2 array that we used in our tests, then that’s a significant workload. Thus, IOPS are important in understanding the load on a storage system.

Response time and latency are used interchangeably in this blog and refer to the amount of time taken to respond to a request to read or write data. Latency is our 0-to-60 metric that tells us how fast the storage system responds to a request. Just like with supercars, the lower the time, the faster the car and the storage system. Our goal was to determine if average read and write latencies remained under 0.5 ms.
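
As an aside, if you want to see comparable per-file read and write latencies on your own systems, SQL Server exposes them through the sys.dm_io_virtual_file_stats dynamic management function. The sketch below is purely illustrative and is not the instrumentation Dell EMC Labs used for the results discussed here:

    -- Average read/write latency per database file, derived from cumulative I/O stall counters
    SELECT
        DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        CAST(vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS DECIMAL(10, 2)) AS avg_read_latency_ms,
        CAST(vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS DECIMAL(10, 2)) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY avg_read_latency_ms DESC;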

Looking at IOPS and latency together brings us to our overall test objective. Can this SQL Server solution remain fast (low latency) under a heavy IOPS load? To answer this question is to understand if the database solution can scale. Scalability is the capability of the database infrastructure to handle increased workload with minimal impact to performance. The greater the scalability of the database solution, the more workload it can support and the greater return on investment it provides to customers. So, for our tests to be meaningful we must show a significant load; otherwise, the database system has not been challenged in terms of scalability.

We broke the achievable IOPS barrier of 220,000 IOPS by more than 55,000 IOPS! In large part, the PowerEdge R840 servers enabled the SQL Server databases to really push the OLTP workload to the XtremIO X2 array. We were able to simulate overloading the system by placing a load that is greater than recommended. In one respect we were impressed that XtremIO X2 supported more than 275,000 IOPS, but then we were concerned that there might have been a trade-off with performance.

The average latency for all physical reads and writes was under .5 ms. So not only did the SQL Server solution generate a large database workload, the XtremIO X2 storage system maintained consistently fast latencies throughout the tests. The test results show that this database solution was designed for performance scalability: The system maintained performance under a large workload across 16 databases. Figure 3 summarizes the test findings.

Figure 3: Summary of test findings

The capability to scale without having to invest in more infrastructure provides greater value to customers. Would I recommend pushing the new SQL Server solution past its limits like Dell EMC Labs did in testing for scalability? No. Running database tests involves achieving a steady state of performance that is uncharacteristic of real-world production databases. Production databases have peak processing times that must be planned for so that the business does not experience any performance issues. Dell EMC has SQL Server experts that can design the Ready Solution for different workloads. In my opinion, one of the key strengths of this solution is that each physical component can be sized to address database requirements. For example, the number of servers might need to be increased, but no additional investment is necessary on XtremIO X2, thus, saving the business money.

If I were to address just one other topic, I would pick the space savings achieved with a 1 TB SQL Server database. In figure 4, test results show a 3.52-to-1 data reduction ratio, which translates to a 71.5 percent space savings for a 1 TB database on the XtremIO X2 array. Always-on inline data reduction saves space by writing only unique blocks and then compressing those blocks to storage. The value of inline data reduction is the resulting ability to consolidate more databases to the XtremIO X2 array.

Figure 4: XtremIO X2 inline data reduction

Are you interested in learning how SQL Server performed on Windows Server Datacenter edition and Red Hat Enterprise Linux Server? I recommend reading the design guide for Dell EMC XtremIO X2 with PowerEdge R840 servers. The validation and use case section of that guide takes the reader through all the performance findings. Or schedule a meeting with your local Microsoft expert at Dell EMC to explore the solution.

Why Ready Solutions for Microsoft SQL?

The Ready Solutions for Microsoft SQL Server team at Dell EMC is a group of SQL Server experts who are passionate about building database solutions. All of our solutions are fully integrated, validated, and tested. Figure 5 shows how we approach developing database solutions. Many of us have been on the customer or consulting side of the business, and these priorities reflect our passion to develop specialized database solutions that are faster and more reliable.

Figure 5: Our database solutions development approach

I hope you enjoyed this blog. If you have any questions, please contact me.


Read Full Blog
  • SQL Server
  • Microsoft
  • Live Optics

How IT organizations benefit from the Dell EMC Live Optics monitoring tool

Anil Papisetty Anil Papisetty

Mon, 03 Aug 2020 16:10:08 -0000

|

Read Time: 0 minutes

How can a monitoring tool bring value to any organization? Why is it, in fact, essential for any IT organization? And do IT organizations need to invest in monitoring to see the value prop?

This blog provides an overview of how organizations benefit from monitoring. Later in the blog, you will get to know about the Live Optics free online software that Dell EMC introduced in 2017 and the value proposition that this monitoring software brings to your organization.

Traditional Monitoring Challenges

Traditionally, infrastructure was managed and monitored by IT engineers who would log in to each device or server and check the disk space, memory, processor, network gear, and so on. This required a lot of manual effort and time to identify issues. It was difficult for IT engineers to proactively predict issues, so their efforts were typically reactive. Later, due to rapid changes and the evolution of technologies, consolidated monitoring tools were introduced to help IT administrators analyze the environment, foresee threats, detect anomalies, and provide end-to-end dashboard reports of the environment.

 Today’s digital transformation has triggered a growth in the number of products, resources, and technologies. The challenge for organizations is investing budget and time into monitoring solutions that can enable greater efficiencies on premises, in the public cloud, or in a blended hybrid environment. The goal is to move away from the traditional labor-intensive monitoring that uses scripts and relies on knowledge experts to automated monitoring that enables IT engineers to focus more on innovation.

Modern Monitoring Tool Benefits

Here are the benefits of monitoring and how it plays a major role in your organization's growth:

  • Never miss a beat: Helps to prevent and reduce downtime and business losses by actively monitoring the heartbeat of the server’s IT infrastructure
  • Faster alerts: Identifies business interruptions and actively monitors and alerts via email, mobile calls, text, and instant messages
  • Comprehensive view: Helps to resolve uncertainty and provides understanding on how end-to-end infrastructure and its applications work and perform
  • Insights: Recommends upgrades, identifies architectural or technical hiccups, and tracks the smooth transitions (technology upgrades, migrations, and third-party integrations)
  • Budgeting and planning: Enables the IT organization to develop a plan for future projects and costs
  • Protecting against threats: Helps to detect early threats or problems to mitigate risks
  • Analytics: Incorporates analytics and machine learning techniques to analyze live data and to bring about greater improvements in productivity and performance
  • Rich dashboard reporting: Power BI integration and capture of consolidated dashboard reports for management leads

In a monitoring context, artificial intelligence plays a major role by proactively tuning or fixing issues—by sending notifications to the appropriate team or individual, or even automatically creating a ticket in a service desk and assigning it to a queue.

Dell EMC Live Optics

We live in a world of constantly changing products. New features are added, competitive features are enhanced, new alternatives are introduced, and prices are changed. To address these changes, Dell EMC offers Live Optics, free online software that helps you to collect, visualize, and share data about your IT environment and workloads.  Live Optics is an agentless monitoring tool that you can set up in minutes.


Eliminate overspending and speed decision-making in your IT environment. Live Optics captures performance, software, OS distribution, and VM data for time frames ranging from a few hours to one week. Live Optics lets you share IT performance and workload data characteristics securely and anonymously. You can collaborate with peers, vendors, or channel partners without compromising security. With Live Optics, you can even model project requirements to gain a deeper knowledge of workloads and their resource requirements.

What is the customer value proposition? Live Optics brings together customer intelligence, competitive insight, and product valuation. Here are ways in which Dell EMC Live Optics can bring value to any IT organization:

 Host Optics:

  • Software for data collection—platform and hardware agnostic; physical or virtual
  • Support for all major operating systems and virtualization platforms
  • Quite often, all the information you need to make a recommendation

 Workload or File Optics:

  • Intense workload-specific assessments for diagnostic issues
  • Rapid file characterization of unstructured data
  • Data archival candidacy
  • Data compression estimates using proprietary algorithms

 Hardware Optics:

  • Performance and configuration retrieved on supported platforms via API and/or file processing
  • Custom options for support of proprietary, OEM-specific APIs

Please reach out to our Live Optics support team at https://support.liveoptics.com/hc/en-us for assistance. We can help in the following ways:

  • Consultation
  • Deployment
  • Adoption
  • Support
  • Optimization

One of my favorite idioms is "Health is wealth." In the same way, the wealth of an IT environment is measured by the health of an organization’s IT infrastructure.

Important links:
https://www.liveoptics.com
https://support.liveoptics.com/hc/en-us
https://www.youtube.com/LiveOptics
https://app.liveoptics.com/Account/Login?ReturnUrl=%2f
https://support.liveoptics.com/hc/en-us/community/topics


Read Full Blog
  • SQL Server
  • Microsoft

How SQL Server can protect your digital currency

Anil Papisetty Anil Papisetty

Mon, 03 Aug 2020 16:10:30 -0000

|

Read Time: 0 minutes

Do you wonder if your data is under attack? When should we worry about whether our data is safe and secure? What precautionary steps have we taken to protect data? Can we eliminate data breaches? In this article I want to introduce some of the great security features built into SQL Server 2017. No product can prevent all risk of data loss or unauthorized access. The best defense is a combination of good products, knowledgeable people, and rigorous processes designed with data protection at all levels of the organization.

Let us start by understanding what data is.

There were relatively few methods to create and share data before the advent of computers – primarily paper and film. Today there are many ways to create, store and access digital data (0s and 1s). Data may be a collection of raw facts; data may be numbers or words; data may be recorded information about something or someone; and in typical digital language, data is binary. Digital data is much easier to create, share, transfer and store in digital forms, such as email, digital images, digital movies and e-books, but it is also much more difficult to secure.

The digital data ecosystem

Most data can be classified as structured or unstructured, and most of the data being created today is unstructured. With the advancement of computer and communication technologies, the rate of data generation and sharing has increased exponentially.

In simple terms, structured data is typically stored using a database management system (DBMS) in rows and columns. Structured data is easily searchable by basic algorithms. Unstructured data is pretty much everything else and does not have a predefined data model; its unstructured nature makes it much more difficult to retrieve and process. Numerous sources and techniques (data mining, natural language processing (NLP) and text analysis) are evolving rapidly across the industry to analyze, derive, manage and store both unstructured and structured data.

In 1988, Merrill Lynch cited a rule of thumb that somewhere around 80 to 90% of all potentially usable business information may have originated in unstructured form. IDC and EMC projected that data will grow to 40 zettabytes by 2020.

https://www.kdnuggets.com/2012/12/idc-digital-universe-2020.html

https://www.emc.com/about/news/press/2012/20121211-01.htm

The chart below shows the amount of data generated every minute in social media, according to Domo's Data Never Sleeps 5.0 report.

https://web-assets.domo.com/blog/wp-content/uploads/2017/07/17_domo_data-never-sleeps-5-01.png

It is not necessary to store all of this data; much of it is unwanted. IDC predicts that by 2025 nearly 20% of the data in the global datasphere will be critical to our daily lives. Organizations should have a plan in place for storing the right amount of data and for how to extract business value, value for the human experience and personal value from it. That's the choice, and it's a definite challenge.

The following chart provides a view of the total number of records containing personal and other sensitive data that were compromised between January 2017 and March 2018.

According to Gartner's forecast, total worldwide spending on cybersecurity by organizations was up 8% from 2017, to a predicted $96 billion in 2018.

What could possibly go wrong?

A data breach is when confidential information is exposed or compromised by intentional or unintentional means.

Malware: Any type of malicious software, including viruses, worms, ransomware, spyware and Trojans, which gains access to damage a computer without the knowledge of the owner. Malware is usually injected and installed on a machine by tricking the user into installing or accessing a program from the internet.


Password attack: Brute-force attacks can be very successful at gaining access to systems with insecure passwords. 81% of confirmed data breaches involved weak, default or stolen passwords.

Phishing: Capitalizing on our apparent human need to click things, phishing campaigns try to get the recipient to open an infected attachment or click an equally infectious link.

Social Engineering: Email or phone contact with an authorized user of sensitive data to obtain personal information that can be used in an attack to gain unauthorized access.

An ounce of prevention is worth a pound of cure.

SQL Server 2017 is equipped with many features to help secure and protect your data from breaches. With SQL Server, security is so well integrated that it’s mostly something you just turn on. For example, it is extremely easy to encrypt data on disk, on the wire and in memory, which is big.

  • Always Encrypted (secure at rest and in motion): Large amounts of data lead to added complexity. Data is queried, transmitted, backed up, and replicated nearly constantly. With all that activity, any link in the chain could be a potential vulnerability. Always Encrypted enables encryption of sensitive data inside the application and on the wire, while never revealing the encryption keys to the database engine. As a result, Always Encrypted provides a separation between those who own the data and those who manage the data.
  • SQL Dynamic Data Masking prevents the abuse of sensitive data by controlling which users can access the unmasked data.
  • SQL Server Authentication ensures that only authorized users have access by requiring valid credentials to access the data in databases.

https://docs.microsoft.com/en-us/sql/relational-databases/security/choose-an-authentication-mode?view=sql-server-2017

  • SQL Server 2017 Audit is the primary auditing tool in SQL Server, enabling you to track and log server-level events as well as individual database events. It uses Extended Events to help create and run audit-related events. The SQL Server audit components are the SQL Server Audit object, the Server Audit Specification and the Database Audit Specification.
  • Row-Level Security helps database administrators restrict access to specific rows in a database table based on the user executing the query. This makes the security system more reliable and robust by reducing the system's surface area (see the sketch after this list).
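
As a minimal sketch of the last two features (the table, column, function and user names below are hypothetical), a masked column and a row filter can each be added with only a few statements:

    -- Dynamic Data Masking: expose only the last four digits of a hypothetical SSN column
    ALTER TABLE dbo.Customers
        ALTER COLUMN SSN ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)');
    GO

    -- Row-Level Security: each sales user sees only the rows tagged with their own user name
    CREATE FUNCTION dbo.fn_SalesRepFilter (@SalesRep AS sysname)
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
        RETURN SELECT 1 AS allowed WHERE @SalesRep = USER_NAME();
    GO

    CREATE SECURITY POLICY dbo.SalesFilterPolicy
        ADD FILTER PREDICATE dbo.fn_SalesRepFilter(SalesRep) ON dbo.Orders
        WITH (STATE = ON);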

SQL Server provides enterprise-grade security capabilities on Windows and on Linux, all built in.

 

 

Protect Data

  • Transparent Data Encryption
  • Backup Encryption
  • Cell-level Encryption
  • Transport Layer Security (SSL/TLS)
  • Always Encrypted

Control Access (Database Access/Application Access)

  • SQL Server Authentication
  • Active Directory Authentication
  • Granular Permissions
  • Row-Level Security
  • Dynamic Data Masking

Monitor Access

  • Tracking Activities (Fine-grained Audit)
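
To make the first group concrete, here is a minimal sketch of turning on Transparent Data Encryption and taking an encrypted backup. The database name, certificate name and backup path are hypothetical, and the database master key and server certificate must already exist:

    -- Transparent Data Encryption: run in the context of the target database
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
    ALTER DATABASE [SalesDB] SET ENCRYPTION ON;

    -- Backup encryption: the backup file is unreadable without the certificate
    BACKUP DATABASE [SalesDB] TO DISK = N'\\backupshare\SalesDB.bak'
        WITH COMPRESSION,
             ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = TDE_Cert);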

Summary: Digitization has led to an explosion of new data that is not expected to abate anytime soon. As data continues to play a vital role in our future, cybercriminals are causing organizations to spend ever-increasing amounts of money every year to protect data. It is important that organizations get the most value from these investments in data protection.

DATA IS DIGITAL CURRENCY

Read Full Blog