In a data-driven world, innovation changes are forcing a new paradigm
Wed, 01 Apr 2020 14:29:17 -0000
For the last two decades, I’ve enjoyed working at Dell Technologies focusing on customer big-picture ideas. Not just focusing on hardware changes, but on a holistic solution including hardware, software, and services that achieves a business objective by addressing customer goals, problems, and needs. I’ve also partnered with my clients on their transformation journey. The concepts of digital transformation and IT transformation have been universal themes and turning these ideas into realities is where the rubber meets the road.
Now, as I engage with customers and partners about Microsoft solutions, a growing awareness of how data is accessed and leveraged has become evident. A foundational shift around data has occurred.
We are now living in a new era of data management, but many of us were not aware this change was developing. This has crept up on us without the fanfare you might see from a new technology launch. When you take a step back and look at these shifts in their entirety you see these changes aren’t just isolated updates, but instead are amplifying their benefits within each other. This is a fundamental transformation in the industry, similar to when virtualization was first adopted 15 years ago.
For many, this change started to become apparent with the end of support for SQL Server 2008 in July 2019 (along with support for all previous versions of the product). This deadline, coupled with the large install base that still exists on this platform, is helping the conversation along, but this is not a simple point-by-point swap of old for new. The doors opened in this new era force a completely different view and approach. We no longer need to have a SQL, Oracle, SAP, or Hadoop conversation – instead it becomes a holistic “data” point of view.
In our hybrid/multi-cloud world, there is not just one answer for managing data. Given the many types of data, the locations where it can reside, and all the diverse data languages and methods of control, the word “data” can encompass a great deal.
Emerging technologies including IoT, 5G, AI and ML are generating greater amounts and varied types of data. How we access that data and derive insight from it becomes critical, but we have been limited by people, processes, and technology.
People have become stuck in the rut of, “I want it to be this way because it has always been this way.” Therefore, replacing dated or expired architectures becomes a swap-out story versus a re-examine story, and new efficiencies are completely missed. Processes within the organization become rigid with that same mindset and, dare I say, politics, where access to that data becomes path-limited. Technology is influenced by both people and process, as in “the old way is good enough, right?”
The value/importance of “data” really points back to the insight that you drive from it. Having a bunch of ones and zeros on a hard drive is nice but what you derive from that data is critically important. The conversations I have with customers are not so much, “Where is my data and how is it stored?” The conversation is more commonly, “I have a need to get business analytics from my proprietary data so I can impact my customers in a way I never did before.”
To put my Stephen Covey hat on, we are in a paradigm change. What is occurring is incredibly impactful for how customers should view and treat data. There are three key areas that we will examine with the new paradigm today and we’ll start with data gravity.
Data gravity is the idea that data has weight. Wherever data is created, it tends to remain. Data stores are getting so big that moving data around is becoming expensive, time constrained, and database performance impacting. This in turn, results in silos of data by location and type. Versioning and lack of upgrade/migration/consolidation of databases also perpetuates these silo challenges.
As with physical gravity, we understand that data’s mass encourages applications and analytics to orbit that data store where it resides. Then, application dependency upon the data’s language version cements the silo requirement even further. We have witnessed the proliferation of intelligent core and edge devices, as well as bringing applications to that place where the data resides – at the customer location.
Silos of data based on language, version, and location can’t be readily accessed from a common interface. If I am a SQL user, how do I get that Oracle data I need? I cannot just pull all my data together into a huge common dataset – it’s just too big. We see these silos in almost every customer environment.
This is where data virtualization comes into the story. Please note this is not a virtual machine (a common confusion on the naming). Think instead of this being data democratization: the ability to allow all the people access to all the data – within reason, of course. Data virtualization allows you to access the data where the data is stored without a massive ETL event. You can see and control the data regardless of language, version, or location. The data remains where it is, but you have real-time source access to this data. You can access data from remote or diverse sources and perform actions on that data from one common point of view.
Data virtualization allows access into the silos that, in the past, have been very rigid, blocking the ability to effectively use that data. From a non-SQL Server point of view, having unstructured data or structured data in a different format (like Oracle), required you to hire a specialized person with a specific skill set to access that data. With data virtualization, that is no longer a barrier as these silo walls are reduced. Data virtualization becomes data democratization, meaning that all people (with appropriate permissions) can access and do things with that data.
From a Microsoft point of view, that technology came into reality with PolyBase. PolyBase with SQL Server allows access with T-SQL, the most commonly used database language. I started using this resource with the Analytics Platform System (APS) many years ago. Microsoft brought this tool into SQL Server 2016 and expanded its functionality tremendously in SQL Server 2019; we can now reach into Hadoop and Oracle, and use orchestrators like Spark, to access all these disparate data sources. To visualize this, think of PolyBase with SQL Server 2019 as a wrapper around these diverse silos of data. You can now access all these disparate data sources within one common interface: T-SQL using PolyBase.
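As a rough sketch of how this looks in practice (all server, credential, and table names here are hypothetical, and this assumes PolyBase is installed and enabled on a SQL Server 2019 instance), an external table lets plain T-SQL query Oracle data where it sits:

```sql
-- Hypothetical names throughout; assumes the PolyBase feature is enabled.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!123';

CREATE DATABASE SCOPED CREDENTIAL OracleCred
    WITH IDENTITY = 'oracle_app_user', SECRET = 'oracle_app_password';

-- Register the remote Oracle instance as an external data source
CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCred);

-- Map a remote table; the data stays in Oracle (no ETL event)
CREATE EXTERNAL TABLE dbo.OracleOrders (
    OrderID INT,
    Amount  DECIMAL(10, 2)
)
WITH (LOCATION = '[XE].[APP].[ORDERS]', DATA_SOURCE = OracleSales);

-- One common interface: T-SQL joining local and Oracle data in one query
SELECT c.CustomerName, o.Amount
FROM dbo.Customers AS c
JOIN dbo.OracleOrders AS o ON o.OrderID = c.LastOrderID;
```

The point of the sketch is the last statement: the Oracle silo is queried with ordinary T-SQL, no specialized Oracle skill set required.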
The final tenet of this fundamental change is the advent of containerization. This enablement technology allows abstraction beyond virtualization and runs just about anywhere. Data becomes nimble and you can move it where needed.
It’s amazing how pervasive containers have become. It’s no longer a science experiment, but is quickly becoming the new normal. In the past, many customers had a forklift perception that when a new technology comes into play, it requires a lift and replace. I’ve heard, “What I am doing today is no longer good, so I have to replace it with whatever your new product is, and it will be painful.”
I’ve been using the phrase that containerization enables “all the things”. Containerization has been adopted by so many architectures that it’s easier to talk about where you can’t do it versus where you can. Traditional SAN, converged, hyperconverged, hybrid cloud — you can place this just about anywhere. There is not just one right path here — do what makes sense for you. It becomes a holistic solution.
There are multiple ways to address the business need that customers have even if it’s leveraging existing designs that they’ve been using for years. Dell Technologies has published details of several architectures supporting SQL Server and has just recently published the first of many papers on SQL Server in containers.
The answer is, you can do all these things with all these architectures. By the way, this isn’t specific to Microsoft and SQL Server. We see similar architectures being created in other databases and technology formats.
These three tenets reinforce one another within the new paradigm. Data gravity is supported by data virtualization and containerization. Data virtualization accommodates silos where they must remain (gravity) and is enabled by containerization. Containerization provides the wrapper that gives access to those silos and is the mechanism that activates data virtualization.
From a Dell Technologies point of view, we are aggressively embracing these tenets. Our enablement technologies to support this paradigm are called out in three discrete points – accelerate, protect, and reuse. We will review these points in a separate blog.
There is much more to come as we continue this journey into the new era of data management. Dell Technologies has deeply invested in resources around this topic with several recent publications and reference designs embracing this paradigm change. Our leadership on this topic is the result of our 30+ year relationship with Microsoft and our continuing “better together” story. A detailed white paper that further expands the ideas within this blog is available here.
Related Blog Posts
Recommendations for modernizing the Microsoft SQL Server platform
Mon, 30 Mar 2020 18:46:49 -0000
This blog, the second in a series discussing what’s entailed in modernizing the Microsoft SQL Server platform, follows Introduction to SQL Server Data Estate Modernization and offers recommendations for executing an effective migration.
SQL Server—Love at First Sight Despite Modernization Challenges
SQL Server is my favorite Microsoft product and one of the most prolific databases of our time. I fell in love with SQL Server version 6.5, back in the day when scripting was king! Scripting is making a huge comeback, which is awesome! SQL Server, in fact, now exists in pretty much every environment—sometimes in locations IT organizations don’t even know about.
It can be a daunting task to even kick off a SQL modernization undertaking, especially if you have end-of-life SQL Server and/or Windows Server running on aging hardware. My clients voice these concerns:
- The risk is too high to migrate. (I say, isn’t there a greater risk in doing nothing?)
- Our SQL Server environment is running on aging hardware—Dev/Test/Stage/Prod do not have the same performance characteristics, making it hard to regression test and performance test.
- How can I make the case for modernization? My team doesn’t have the cycles to address a full modernization workstream.
I will address these concerns, first, by providing a bit of history on virtualization and SQL Server, and second, by explaining how to overcome the challenges to modernization, an undertaking that, in my opinion, should be executed sooner rather than later.
When virtualization first started to appear in data centers, one of its biggest value propositions was to increase server utilization, especially for SQL Server. Hypervisors increased server utilization by allowing multiple enterprise applications to share the same physical server hardware. While the improvement of utilization brought by virtualization is impressive, the amount of unutilized or underutilized resources trapped on each server starts to add up quickly. In a virtual server farm, the data center could have the equivalent of one idle server for every one to three servers deployed.
Fully Leveraging the Benefits of Integrated Copy Data Management (iCDM)
Many of these idle servers were related to Dev/Test/Stage instances, where Quality of Service (QoS) can also be a concern.
SQL Server leverages Always On Availability Groups as a native way to create replicas that can then be used for multiple purposes. The most typical deployment of availability group (AG) replicas is for high-availability failover databases and for offloading heavy read operations, such as reporting, analytics, and backup operations.
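As a hedged illustration of the read-offload piece (the availability group and replica names are hypothetical, and this assumes an AG already exists across two nodes), read-only routing is what sends report-style connections to a secondary replica:

```sql
-- Allow read-intent connections on the secondary replica
ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Tell the secondary where read-intent clients should connect
ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.contoso.local:1433'));

-- When SQLNODE1 is primary, route read-intent traffic to SQLNODE2
ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
```

Clients then opt in by adding `ApplicationIntent=ReadOnly` to their connection string, and reporting or backup reads land on the secondary instead of production.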
iCDM allows for additional use cases like the ones listed below with the benefits of inline data services:
- Test/Dev for continued application feature development, testing, and CI/CD pipelines.
- Maintenance to present an environment for resource-intensive database maintenance tasks, such as DBCC CHECKDB operations.
- Operational management to test and execute upgrades, performance tuning, and pre-production simulation.
- Reporting to serve as the data source for any business intelligence or reporting system.
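To make the maintenance use case concrete (the database name below is hypothetical), once a storage-level copy has been presented to a non-production instance, the integrity check runs there instead of against production:

```sql
-- Run the full integrity check against the repurposed copy,
-- keeping the CPU and I/O load off the production databases.
DBCC CHECKDB (N'SalesDB_Copy') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

If the copy is block-for-block consistent with production, a clean result on the copy gives the same assurance without the production-hours maintenance window.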
One of the key benefits of iCDM technology is the ability to provide a cost-efficient lifecycle management environment. iCDM provides efficient copy data management at the storage layer to consolidate both primary data and its associated copies on the same scale-out, all-flash array for unprecedented agility and efficiency. When combined with specific Dell EMC products, bullet-proof, consistent IOPS and latency, linear scale-out all-flash performance, and the ability to add more performance and capacity as needed with no application downtime, iCDM delivers incredible potential to consolidate both production and non-production applications without impacting production SLAs.
While emerging technologies, such as artificial intelligence, IoT and software-defined storage and networking, offer competitive benefits, their workloads can be difficult to predict and pose new challenges for IT departments.
Traditional architectures (hardware and software) are not designed for modern business and service delivery goals, which is yet another solid use case for a SQL modernization effort.
As I mentioned in my previous post, not all data stores will be migrating to the cloud. Ever. The true answer will always be a Hybrid Data Estate. Full stop. However, we need to modernize for a variety of compelling reasons.
5 Paths to SQL Modernization
Here’s how you can simplify making the case for modernization and what’s included in each option.
Do nothing (not really a path at all!):
- Risky roll of the dice, especially when coupled with aging infrastructure.
- Run the risk of a security exploit, regulatory requirement (think GDPR) or a non-compliance mandate.
Purchase Extended Support from Microsoft:
- Covers SQL Server 2008/2008 R2 and Windows Server 2008/2008 R2 only.
- Substantial cost, based on per-core pricing.
- Costs tens of thousands of dollars per year…and that is for just ONE SQL instance! Ouch. How many are in your environment?
- Available for 3 additional years.
- True up yearly—meaning you cannot run unsupported for 6 months then purchase an additional 6 months. Yearly purchase only. More – ouch.
- Paid annually, only for the servers you need.
- Tech support is only available if Extended Support is purchased (oh…and that tech support is also a separate cost).
Transform with Azure/Azure Stack:
- Migrate IaaS applications and databases to Azure or Dell EMC Azure Stack virtual machines (Azure, on-premises…Way Cool!!!).
- Receive an additional 3 years of extended security updates for SQL Server and Windows Server (versions 2008 and R2) at no additional charge.
- In both cases, there is a new consumption cost, however, security updates are covered.
- When Azure Stack (on-premises Azure) is the SQL IaaS target, there are many cases where the appliance cost plus the consumption cost, is still substantially cheaper than #2, Extended Support, listed above.
- Begin the journey to operate in Cloud Operating Model fashion. Start with the Azure Stack SQL Resource Provider and easily, within a day, be providing your internal subscribers a full SQL PaaS (Platform as a Service).
- If you are currently configured with a highly available Failover Cluster Instance (FCI) on SQL Server 2008, this will be condensed into a single node. The operating system protection you had with Always On FCI is not available with Azure Stack. However, your environment will now be running within a hyper-converged infrastructure, which offers node-failure (fault domain) protection, but not operating system protection or downtime protection during an operating system patch. There are trade-offs; best to weigh the options for your business use case and recovery procedures.
Move and Modernize (the Best Option!):
- Migrate IaaS instances to modern all Flash Dell EMC infrastructure.
- Migrate application workloads on a new Server Operating System – Windows Server 2016 or 2019.
- Migrate SQL Server databases to SQL Server 2017, then upgrade to SQL Server 2019 once it is generally available. (SQL Server 2019 is not yet GA; best guess, before the end of 2019.)
- Enable your IT teams with more efficient and effective operational support processes, while reducing licensing costs, solution complexity and time to delivery for new SQL Server services.
- Reduce operating and licensing costs by consolidating SQL Server workloads. With the Microsoft SQL Server per-core licensing model in SQL Server 2012 and above, moving workloads to a virtual/cloud environment can often present significant licensing savings. In addition, through Dell Technologies advisory services, we typically discover many underutilized SQL Server instances across the enterprise landscape, which presents an opportunity to reduce CPU core counts or move SQL workloads to shared infrastructure to maximize hardware resource utilization and reduce licensing costs.
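As a starting point for that right-sizing exercise (a sketch only; it assumes VIEW SERVER STATE permission on the instance), a quick query against sys.dm_os_sys_info shows how many logical cores an instance can see, which is the basis of per-core license counts:

```sql
-- Logical CPUs visible to this instance; per-core licensing counts these
-- (subject to Microsoft's licensing minimums and virtualization rules).
SELECT cpu_count            AS logical_cpus,
       hyperthread_ratio,
       sqlserver_start_time
FROM sys.dm_os_sys_info;
```

Collecting this across the estate, alongside actual CPU utilization, is often the fastest way to spot instances where core counts can be reduced or workloads consolidated.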
Rehost 2008 VMware Workloads:
- Run your VMware workloads natively on Azure
- Migrate VMware IaaS applications and databases to Azure VMware Solution by CloudSimple or Azure VMware solution by Virtustream
- Get 3 years of extended security updates for SQL Server and Windows Server 2008 / R2 at no additional charge
- vSphere network full compatibility
Remember, your Windows Server 2008 and 2008 R2 servers will also reach end of life on January 14, 2020.
Avoid Risks and Disasters Related to EOL Systems (in Other Words, CYA)
Can your company afford the risk of both an EOL Operating System and an EOL Database Engine? Pretty scary stuff. It really makes sense to look at both. Do your business leaders know this risk? If not, you need to be vocal and explain the risk. Fully. They need to know, understand, and sign off on the risk as something they personally want to absorb when an exploit hits. Somewhat of a CYA for IT, Data professionals and line of business owners. If you need help here, my team can help engage your leadership teams with you!
In my opinion, the larger problem, between SQL EOL and Windows Server EOL, is the latter. Running an unsupported operating system that supports an unsupported SQL Server workload is a serious recipe for disaster. Failover Cluster Instances (Always On FCI) were the normal way to provide operating system high availability with SQL Server 2008 and lower, which compounds the issue of multiple unsupported environment levels. Highly available FCI environments are now left unprotected.
Some migrations will be simple, others much more complex, especially with mission critical databases. If you have yet to kick off this modernization effort, I recommend starting today. The EOL clock is ticking. Get your key stakeholders involved. Show them the data points from your environment. Send them the link to this blog!
If you continue to struggle or don’t want to go it alone, Dell Technologies Consulting Services, the only holistic SQL Server workload provider, can help your team every step of the way. Take a moment to connect with your Dell Technologies Service Expert today and begin moving forward to a modern platform.
Other Blogs in This Series:
Introduction to SQL Server data estate modernization
Mon, 30 Mar 2020 18:46:49 -0000
This blog is the first in a series discussing what’s entailed in modernizing the Microsoft SQL Server platform.
I hear the adage “if it ain’t broke don’t fix it” a lot during my conversations with clients about SQL Server, and many times it’s from the database administrators. Many DBAs are reluctant to change—maybe they think their jobs will go away. It’s my opinion that their roles are not going anywhere, but they do need to expand their knowledge to support coming changes. Their role will involve innovation at a different level, merging application technology (think CI/CD pipelines) into the mix. The DBA will also need to trust that their hardware partner has fully tested and vetted a solution for the best possible SQL performance and can provide a future-proof architecture that grows and changes as the needs of the business do.
The New Normal Is Hybrid Cloud
When “public clouds” first became mainstream, the knee-jerk reaction was that everything must go to the cloud! If this had happened, then the DBA’s job would have gone away by default. This could not be further from the truth and, of course, it’s not what happened. With regard to SQL Server, the new normal is hybrid.
Now some years later, will some data stores have a place in public cloud?
Absolutely, they will.
However, it’s become apparent that many of these data stores are better suited to remaining on-premises. There is also a current trend of data being repatriated from the public cloud back to on-premises environments due to cost, management, and data gravity (keeping the data close to the business need and/or specific regulatory compliance needs).
The SQL Server data estate can be vast and, in some cases, a hodgepodge of “Rube Goldberg” designs. In the past, I was the lead architect of many of these designs, I am sad to admit. (I continue to ask forgiveness from the IT gods.) Today, IT departments manage one base set of database architecture technology for operational databases, another caching layer from a potential hardware partner, and yet another architecture for data analytics and emerging AI.
Oh wait…one more point…all this data needs to be at the fingertips of a mobile device and edge computing. Managing and executing on all these requirements, in a real-time fashion, can result in a highly complex and specialized platform.
Data Estate Modernization
The new normal everyone wants is to keep it as simple as possible. Remember those “Rube Goldberg” designs referenced above? They’re no longer applicable. Simple, portable, and seamless execution is key. As data volumes increase, DBAs partnering with hardware vendors need to simplify, while security, compliance, and data integrity remain constant concerns. However, there is a new normal for data estate management: one where large volumes of data can be referenced in place, with push-down compute, or seamlessly snapped and copied in a highly efficient manner to other managed environments. The evolution of SQL Server 2019 will also be a part of the data estate orchestration solution.
Are you an early adopter of SQL 2019?
Get Modern: A Unified Approach of Data Estate Management
A SQL Server Get Modern architecture from Dell EMC can consolidate your data estate, defined by our high-value core pillars that align perfectly with SQL Server.
The pillars will work with any size environment, from the small and agile to the very large and complex SQL database landscape. There IS a valid solution for all environments. All the pillars work in concert to complement each other across all feature sets and integration points.
- Accelerate – Not only accelerate and future-proof the environment, but completely modernize your SQL infrastructure: a revamped perspective on storage, leveraging RAM and other memory technologies to maximum effect.
- Protect – Protect your database with industry leading backups, replication, resiliency and self-service Deduplicated copies.
- Reuse – Reuse snapshots. Operational recovery. Dev/Test repurposing. CI/CD pipelines.
Aligning along these pillars will bring efficiency and consistency to a unified approach of data estate management. The combination of a strong, consistent, high-performance architecture supporting the database platform will make your IT team the modernization execution masters.
What Are Some of the Compelling Reasons to Modernize Your SQL Server Data Estate?
Here are some of the pain point challenges I hear frequently in my travels chatting with clients. I will talk through these topics in future blog posts.
1. Our SQL Server environment is running on aging hardware:
- Dev/Test/Stage/Prod do not have the same performance characteristics making it hard to regression test and performance test.
2. We have modernization challenges:
- How can I make the case for modernization?
- My team does not have the cycles to address a full modernization workstream.
3. The hybrid data estate is the answer… how do we get there?
4. We are at EOL (End of Life) for SQL Server 2008 / 2008R2 and Windows Server but are stuck due to:
- ISV (Independent Software Vendor) “lock in” requirement to a specific SQL Server engine version.
- Migration plan to modernize SQL cannot be staffed and executed through to completion.
5. We need to consolidate SQL Server sprawl and standardize on a SQL Server version:
- Build for the future, where disruptive SQL version upgrades become a thing of the distant past. Think…containerized SQL Server. OH yeah!
- CI/CD success – the database is a key piece of the puzzle for success.
- Copies of databases for App Dev / Test / Reporting copies are consuming valuable space on disk.
- Backups take too long and consume too much space.
6. I want to embrace SQL Server on Linux.
7. Let’s talk modern SQL Server application tuning and performance.
8. Where do you see the DBA role in the next few years?
I hope you will come along on this SQL Server journey with me as I discuss each of these customer challenges in this blog series.
And, if you’re ready to consider a certified, award-winning Microsoft partner who understands your Microsoft SQL Server endeavors for modernization, Dell EMC’s holistic approach can help you minimize risk and business disruption. To find out more, contact your Dell EMC representative.